Science.gov

Sample records for accuracy simulation results

  1. Equations of State for Mixtures: Results from DFT Simulations of Xenon/Ethane Mixtures Compared to High Accuracy Validation Experiments on Z

    NASA Astrophysics Data System (ADS)

    Magyar, Rudolph

    2013-06-01

    We report a computational and validation study of equation of state (EOS) properties of liquid / dense plasma mixtures of xenon and ethane to explore and to illustrate the physics of the molecular scale mixing of light elements with heavy elements. Accurate EOS models are crucial to achieve high-fidelity hydrodynamics simulations of many high-energy-density phenomena such as inertial confinement fusion and strong shock waves. While the EOS is often tabulated for separate species, the equation of state for arbitrary mixtures is generally not available, requiring properties of the mixture to be approximated by combining physical properties of the pure systems. The main goal of this study is to assess how accurate this approximation is under shock conditions. Density functional theory molecular dynamics (DFT-MD) at elevated temperature and pressure is used to assess the thermodynamics of the xenon-ethane mixture. The simulations are unbiased as to elemental species and therefore provide comparable accuracy when describing total energies, pressures, and other physical properties of mixtures as they do for pure systems. In addition, we have performed shock compression experiments using the Sandia Z-accelerator on pure xenon, ethane, and various mixture ratios thereof. The Hugoniot results are compared to the DFT-MD results and the predictions of different rules for combining EOS tables. The DFT-based simulation results compare well with the experimental points, and it is found that a mixing rule based on pressure equilibration performs reliably well for the mixtures considered. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
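
    A pressure-equilibration mixing rule of the kind found to work well above can be sketched as follows: each component is compressed to a common pressure and the specific volumes add by mass fraction. The power-law EOS forms and all numbers below are hypothetical illustrations, not the DFT-MD results of this record.

```python
# Sketch of a pressure-equilibration (additive-volume) mixing rule: invert
# each pure-species EOS at the common pressure P, then add specific volumes
# by mass fraction. The EOS forms below are hypothetical toys.
def mix_volume(P, pure_eos, mass_fracs, v_lo=1e-4, v_hi=10.0):
    def invert(eos):
        lo, hi = v_lo, v_hi
        for _ in range(200):            # bisection; eos(v) decreases with v
            mid = 0.5 * (lo + hi)
            if eos(mid) > P:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    return sum(x * invert(f) for x, f in zip(mass_fracs, pure_eos))

eos_heavy = lambda v: 2.0 / v**1.5      # stand-in for the xenon EOS
eos_light = lambda v: 0.5 / v**1.2      # stand-in for the ethane EOS
v_mix = mix_volume(1.0, [eos_heavy, eos_light], [0.7, 0.3])
```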

  2. Accuracy of non-Newtonian Lattice Boltzmann simulations

    NASA Astrophysics Data System (ADS)

    Conrad, Daniel; Schneider, Andreas; Böhle, Martin

    2015-11-01

    This work deals with the accuracy of non-Newtonian Lattice Boltzmann simulations. Previous work for Newtonian fluids indicates that, depending on the numerical value of the dimensionless collision frequency Ω, additional artificial viscosity is introduced, which negatively influences the accuracy. Since the non-Newtonian fluid behavior is incorporated through appropriate modeling of the dimensionless collision frequency, an Ω-dependent error EΩ is introduced and its influence on the overall error is investigated. Here, simulations with the SRT and the MRT model are carried out for power-law fluids in order to numerically investigate the accuracy of non-Newtonian Lattice Boltzmann simulations. A goal of this accuracy analysis is to derive a recommendation for an optimal choice of the time step size and, equivalently, the simulation Mach number. For the non-Newtonian case, an error estimate for EΩ in the form of a functional is derived on the basis of a series expansion of the Lattice Boltzmann equation. This functional can be solved analytically for the case of the Hagen-Poiseuille channel flow of non-Newtonian fluids. With the help of the error functional, the prediction of the global error minimum of the velocity field is excellent in regions where the EΩ error is the dominant source of error. With an optimal simulation Mach number, the simulation is about one order of magnitude more accurate. Additionally, for both collision models a detailed study of the convergence behavior of the method in the non-Newtonian case is conducted. The results show that the simulation Mach number has a major impact on the convergence rate and that second order accuracy is not preserved for every choice of the simulation Mach number.
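
    The modeling of the collision frequency for a power-law fluid can be sketched in lattice units as follows, assuming a standard SRT (BGK) scheme where the kinematic viscosity maps to the relaxation time via ν = c_s²(τ − 1/2) with c_s² = 1/3; the consistency index K and exponent n below are illustrative values, not taken from this record.

```python
import numpy as np

def collision_frequency(shear_rate, K=0.01, n=0.5):
    """Omega = 1/tau from the local shear rate, for nu = K * |gamma_dot|**(n-1)."""
    nu = K * np.abs(shear_rate) ** (n - 1.0)   # power-law (shear-thinning for n < 1)
    tau = 3.0 * nu + 0.5                        # tau = nu / c_s^2 + 1/2, c_s^2 = 1/3
    return 1.0 / tau

# Lower shear rate -> higher viscosity -> larger tau -> smaller Omega:
omega = collision_frequency(np.array([0.01, 0.1, 1.0]))
```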

  3. Accuracy of results with NASTRAN modal synthesis

    NASA Technical Reports Server (NTRS)

    Herting, D. N.

    1978-01-01

    A new method for component mode synthesis was developed for installation in NASTRAN level 17.5. Results obtained from the new method are presented, and these results are compared with existing modal synthesis methods.

  4. Establishing precision and accuracy in PDV results

    SciTech Connect

    Briggs, Matthew E.; Howard, Marylesa; Diaz, Abel

    2016-04-19

    We need to know uncertainties and systematic errors because we create and compare against archival weapons data, we constrain the models, and we provide scientific results. Good estimates of precision from the data record are available and should be incorporated into existing results; reanalysis of valuable data is suggested. Estimates of systematic errors are largely absent. The original work by Jensen et al. using gun shots for window corrections, and the integrated velocity comparison with X-rays by Schultz, are two examples where any systematic errors appear to be at the <1% level.

  5. ICAAS piloted simulation results

    NASA Astrophysics Data System (ADS)

    Landy, R. J.; Halski, P. J.; Meyer, R. P.

    1994-05-01

    This paper reports piloted simulation results from the Integrated Control and Avionics for Air Superiority (ICAAS) piloted simulation evaluations. The program was to develop, integrate, and demonstrate critical technologies which will enable United States Air Force tactical fighter 'blue' aircraft to achieve superiority and survive when outnumbered by as much as four to one by enemy aircraft during air combat engagements. Primary emphasis was placed on beyond visual range (BVR) combat with provisions for effective transition to close-in combat. The ICAAS system was developed and tested in two stages. The first stage, called low risk ICAAS, was defined as employing aircraft and avionics technology with an initial operational date no later than 1995. The second stage, called medium risk ICAAS, was defined as employing aircraft and avionics technology with an initial operational date no later than 1998. Descriptions of the low risk and medium risk simulation configurations are given. Normalized (unclassified) results from both the low risk and medium risk ICAAS simulations are discussed. The results show the ICAAS system provided a significant improvement in air combat performance when compared to a current weapon system. Data are presented for both current generation and advanced fighter aircraft. The ICAAS technologies which are ready for flight testing in order to transition to the fighter fleet are described along with technologies needing additional development.

  6. On the accuracy of RANS simulations with DNS data

    NASA Astrophysics Data System (ADS)

    Poroseva, Svetlana V.; Colmenares F., Juan D.; Murman, Scott M.

    2016-11-01

    Results of simulations conducted for incompressible planar wall-bounded turbulent flows with the Reynolds-Averaged Navier-Stokes (RANS) equations, with no modeling involved, are presented. Instead, all terms but the molecular diffusion are represented by data from direct numerical simulation (DNS). In the simulations, the transport equations for velocity moments through the second order (and the fourth order where data are available) are solved in a zero-pressure-gradient boundary layer over a flat plate and in a fully developed channel flow over a wide range of Reynolds numbers, using DNS data from Sillero et al., Lee and Moser, and Jeyapaul et al. The results obtained demonstrate that DNS data are the dominant source of uncertainty in such simulations (hereafter, RANS-DNS simulations). Effects of the Reynolds number, flow geometry, and the velocity moment order, as well as of the uncertainty quantification technique used to collect the DNS data, on the results of RANS-DNS simulations are analyzed. New criteria for uncertainty quantification in statistical data collected from DNS are proposed to guarantee data accuracy sufficient for their use in RANS equations and for turbulence model validation.

  7. Evaluating the Accuracy of Hessian Approximations for Direct Dynamics Simulations.

    PubMed

    Zhuang, Yu; Siebert, Matthew R; Hase, William L; Kay, Kenneth G; Ceotto, Michele

    2013-01-08

    Direct dynamics simulations are a very useful and general approach for studying the atomistic properties of complex chemical systems, since an electronic structure theory representation of a system's potential energy surface is possible without the need for fitting an analytic potential energy function. In this paper, recently introduced compact finite difference (CFD) schemes for approximating the Hessian [J. Chem. Phys. 2010, 133, 074101] are tested by employing the monodromy matrix equations of motion. Several systems, including carbon dioxide and benzene, are simulated, using both analytic potential energy surfaces and on-the-fly direct dynamics. The results show that, depending on the molecular system, electronic structure theory Hessian direct dynamics can be accelerated by up to 2 orders of magnitude. The CFD approximation is found to be robust enough to deal with chaotic motion, concomitant with floppy and stiff mode dynamics, Fermi resonances, and other kinds of molecular couplings. Finally, the CFD approximations allow tuning of the different CFD parameters to attain the best possible accuracy for different molecular systems. Thus, a direct dynamics simulation requiring the Hessian at every integration step may be replaced with approximate Hessian updating tuned to the required accuracy.

  8. "Certified" Laboratory Practitioners and the Accuracy of Laboratory Test Results.

    ERIC Educational Resources Information Center

    Boe, Gerard P.; Fidler, James R.

    1988-01-01

    An attempt to replicate a study of the accuracy of test results of medical laboratories was unsuccessful. Limitations of the obtained data prevented the research from having satisfactory internal validity, so no formal report was published. External validity of the study was also limited because the systematic random sample of 78 licensed…

  9. Simulation of Local Tie Accuracy on VLBI Antennas

    NASA Technical Reports Server (NTRS)

    Kallio, Ulla; Poutanen, Markku

    2010-01-01

    We introduce a new mathematical model to compute the centering parameters of a VLBI antenna. These include the coordinates of the reference point, axis offset, orientation, and non-perpendicularity of the axes. Using the model we simulated how precisely parameters can be computed in different cases. Based on the simulation we can give some recommendations and practices to control the accuracy and reliability of the local ties at the VLBI sites.

  10. Open cherry picker simulation results

    NASA Technical Reports Server (NTRS)

    Nathan, C. A.

    1982-01-01

    The simulation program associated with a key piece of support equipment to be used to service satellites directly from the Shuttle is assessed. The Open Cherry Picker (OCP) is a manned platform mounted at the end of the remote manipulator system (RMS) and is used to enhance extra vehicular activities (EVA). The results of simulations performed on the Grumman Large Amplitude Space Simulator (LASS) and at the JSC Water Immersion Facility are summarized.

  11. Study of accuracy of precipitation measurements using simulation method

    NASA Astrophysics Data System (ADS)

    Nagy, Zoltán; Lajos, Tamás; Morvai, Krisztián

    2013-04-01

    Precipitation is one of the most important meteorological parameters describing the state of the climate, and accurate measurements of precipitation are essential for deriving correct information from trends. The problem is that precipitation measurements are affected by systematic errors leading to an underestimation of actual precipitation, errors that vary by precipitation type and gauge type. It is well known that wind speed is the most important environmental factor contributing to the underestimation of actual precipitation, especially for solid precipitation. To study and correct the errors of precipitation measurements there are two basic possibilities:
    · use the results and conclusions of international precipitation measurement intercomparisons;
    · build standard reference gauges (DFIR, pit gauge) and carry out our own investigation.
    In 1999 the HMS undertook its own investigation and built standard reference gauges, but the cost-benefit ratio in the case of snow (use of the DFIR) was very poor: we had several winters without a significant amount of snow, while the condition of the DFIR was continuously deteriorating. Due to the problem mentioned above, a new approach was needed, namely the modelling carried out by the Budapest University of Technology and Economics, Department of Fluid Mechanics, using the FLUENT 6.2 model. The ANSYS Fluent package is a computational fluid dynamics solution for modelling flow and other related physical phenomena. It provides the tools needed to describe atmospheric processes and to design and optimize new equipment. The CFD package includes solvers that accurately simulate the behaviour of a broad range of flows, from single-phase to multi-phase. The questions we wanted to answer are as follows:
    · How do the different types of gauges deform the airflow around themselves?
    · Try to give a quantitative estimation of the wind-induced error.
    · How does the use

  12. Results on fibre scrambling for high accuracy radial velocity measurements

    NASA Astrophysics Data System (ADS)

    Avila, Gerardo; Singh, Paul; Chazelas, Bruno

    2010-07-01

    We present in this paper experimental data on fibres and scramblers intended to increase the photometric stability of the spectrograph PSF. We have used round, square, and octagonal fibres as well as beam homogenizers. This study aims to improve the accuracy of radial velocity measurements for the ESO ESPRESSO (VLT) and CODEX (E-ELT) instruments.

  13. Poor Metacomprehension Accuracy as a Result of Inappropriate Cue Use

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Griffin, Thomas D.; Wiley, Jennifer; Anderson, Mary C. M.

    2010-01-01

    Two studies attempt to determine the causes of poor metacomprehension accuracy and then, in turn, to identify interventions that circumvent these difficulties to support effective comprehension monitoring performance. The first study explored the cues that both at-risk and typical college readers use as a basis for their metacomprehension…

  14. Simulation approach for the evaluation of tracking accuracy in radiotherapy: a preliminary study.

    PubMed

    Tanaka, Rie; Ichikawa, Katsuhiro; Mori, Shinichiro; Sanada, Sigeru

    2013-01-01

    Real-time tumor tracking in external radiotherapy can be achieved by diagnostic (kV) X-ray imaging with a dynamic flat-panel detector (FPD). It is important to keep the patient dose as low as possible while maintaining tracking accuracy. A simulation approach would be helpful to optimize the imaging conditions. This study was performed to develop a computer simulation platform based on a noise property of the imaging system for the evaluation of tracking accuracy at any noise level. Flat-field images were obtained using a direct-type dynamic FPD, and noise power spectrum (NPS) analysis was performed. The relationship between incident quantum number and pixel value was addressed, and a conversion function was created. The pixel values were converted into a map of quantum number using the conversion function, and the map was then input into the random number generator to simulate image noise. Simulation images were provided at different noise levels by changing the incident quantum numbers. Subsequently, an implanted marker was tracked automatically and the maximum tracking errors were calculated at different noise levels. The results indicated that the maximum tracking error increased with decreasing incident quantum number in flat-field images with an implanted marker. In addition, the range of errors increased with decreasing incident quantum number. The present method could be used to determine the relationship between image noise and tracking accuracy. The results indicated that the simulation approach would aid in determining exposure dose conditions according to the necessary tracking accuracy.
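
    The noise-simulation step described above can be sketched as follows, assuming for illustration a simple linear pixel-value-to-quanta gain in place of the NPS-derived conversion function of the record; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_noisy_image(pixel_values, gain=50.0, dose_factor=1.0):
    """Map pixel values to expected incident quanta, draw Poisson counts
    (quantum noise), and convert back to the pixel-value scale."""
    quanta = pixel_values * gain * dose_factor
    noisy = rng.poisson(quanta).astype(float)
    return noisy / (gain * dose_factor)

# Lowering dose_factor reduces the quantum number and so raises the noise:
flat = np.full((64, 64), 100.0)
low_dose = simulate_noisy_image(flat, dose_factor=0.1)
```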

  15. Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry

    SciTech Connect

    Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; Mueller, Jonathon W.; Cody, Dianna D.; DeMarco, John J.

    2015-02-15

    Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.

  16. Assessment of accuracy of CFD simulations through quantification of a numerical dissipation rate

    NASA Astrophysics Data System (ADS)

    Domaradzki, J. A.; Sun, G.; Xiang, X.; Chen, K. K.

    2016-11-01

    The accuracy of CFD simulations is typically assessed through a time consuming process of multiple runs and comparisons with available benchmark data. We propose that the accuracy can be assessed in the course of actual runs using a simpler method based on a numerical dissipation rate which is computed at each time step for arbitrary sub-domains using only information provided by the code in question (Schranner et al., 2015; Castiglioni and Domaradzki, 2015). Here, the method has been applied to analyze numerical simulation results obtained using OpenFOAM software for a flow around a sphere at Reynolds number of 1000. Different mesh resolutions were used in the simulations. For the coarsest mesh the ratio of the numerical dissipation to the viscous dissipation downstream of the sphere varies from 4.5% immediately behind the sphere to 22% further away. For the finest mesh this ratio varies from 0.4% behind the sphere to 6% further away. The large numerical dissipation in the former case is a direct indicator that the simulation results are inaccurate, e.g., the predicted Strouhal number is 16% lower than the benchmark. Low numerical dissipation in the latter case is an indicator of an acceptable accuracy, with the Strouhal number in the simulations matching the benchmark. Supported by NSF.
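
    The idea behind such a diagnostic can be illustrated in one dimension: for inviscid linear advection the discrete energy integral should be conserved, so any decay produced by the scheme measures its numerical dissipation. This toy upwind example is only a sketch of the principle, not the sub-domain budget method of Schranner et al.

```python
import numpy as np

def upwind_step(u, c=1.0, dx=1.0, dt=0.5):
    """One step of first-order upwind advection on a periodic grid."""
    return u - c * dt / dx * (u - np.roll(u, 1))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x)
e0 = np.sum(u**2)            # discrete "energy", exactly conserved by the PDE
for _ in range(100):
    u = upwind_step(u)
e1 = np.sum(u**2)
# Mean energy loss per unit time: entirely numerical, since the PDE is inviscid.
numerical_dissipation = (e0 - e1) / (100 * 0.5)
```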

  17. Accuracy and stability of positioning in radiosurgery: Long term results of the Gamma Knife system

    SciTech Connect

    Heck, Bernhard; Jess-Hempen, Anja; Kreiner, Hans Juerg; Schoepgens, Hans; Mack, Andreas

    2007-04-15

    The primary aim of this investigation was to determine the long term overall accuracy of an irradiation position of Gamma Knife systems. The mechanical accuracy of the system as well as the overall accuracy of an irradiation position was examined by irradiating radiosensitive films. To measure the mechanical accuracy, the GafChromic registered film was fixed by a special tool at the unit center point (UCP). For overall accuracy the film was mounted inside a phantom at a target position given by a two-dimensional cross. Its position was determined by CT or MRI scans, a treatment was planned to hit this target by use of the standard planning software and the radiation was finally delivered. This procedure is named ''system test'' according to DIN 6875-1 and is equivalent to a treatment simulation. The used GafChromic registered films were evaluated by high resolution densitometric measurements. The Munich Gamma Knife UCP coincided within x;y;z: -0.014{+-}0.09 mm; 0.013{+-}0.09 mm; -0.002{+-}0.06 mm (mean{+-}SD) to the center of dose distribution. There was no trend in the measured data observed over more than ten years. All measured data were within a sphere of 0.2 mm radius. When basing the target definition in the system test on MRI scans, we obtained an overall accuracy of an irradiation position in the x direction of 0.21{+-}0.32 mm and in the y direction 0.15{+-}0.26 mm (mean{+-}SD). When a CT-based target definition was used, we measured distances in x direction 0.06{+-}0.09 mm and in y direction 0.04{+-}0.09 mm (mean{+-}SD), respectively. These results were compared with those obtained with a Gamma Knife equipped with an automatic positioning system (APS) by use of a different phantom. This phantom was found to be slightly less accurate due to its mechanical construction and the soft fixation into the frame. The phantom related position deviation was found to be about {+-}0.2 mm, and therefore the measured accuracy of the APS Gamma Knife was evidently less

  18. Analysis of machining accuracy during free form surface milling simulation for different milling strategies

    NASA Astrophysics Data System (ADS)

    Matras, A.; Kowalczyk, R.

    2014-11-01

    The analysis results of machining accuracy after the free form surface milling simulations (based on machining EN AW- 7075 alloys) for different machining strategies (Level Z, Radial, Square, Circular) are presented in the work. Particular milling simulations were performed using CAD/CAM Esprit software. The accuracy of obtained allowance is defined as a difference between the theoretical surface of work piece element (the surface designed in CAD software) and the machined surface after a milling simulation. The difference between two surfaces describes a value of roughness, which is as the result of tool shape mapping on the machined surface. Accuracy of the left allowance notifies in direct way a surface quality after the finish machining. Described methodology of usage CAD/CAM software can to let improve a time design of machining process for a free form surface milling by a 5-axis CNC milling machine with omitting to perform the item on a milling machine in order to measure the machining accuracy for the selected strategies and cutting data.

  19. 4D dose simulation in volumetric arc therapy: Accuracy and affecting parameters.

    PubMed

    Sothmann, Thilo; Gauer, Tobias; Werner, René

    2017-01-01

    Radiotherapy of lung and liver lesions has changed from normofractioned 3D-CRT to stereotactic treatment in a single or few fractions, often employing volumetric arc therapy (VMAT)-based techniques. Potential unintended interference of respiratory target motion and dynamically changing beam parameters during VMAT dose delivery motivates establishing 4D quality assurance (4D QA) procedures to assess appropriateness of generated VMAT treatment plans when taking into account patient-specific motion characteristics. Current approaches are motion phantom-based 4D QA and image-based 4D VMAT dose simulation. Whereas phantom-based 4D QA is usually restricted to a small number of measurements, the computational approaches allow simulating many motion scenarios. However, 4D VMAT dose simulation depends on various input parameters, influencing the estimated doses and thereby limiting simulation reliability. Thus, aiming at routine use of simulation-based 4D VMAT QA, the impact of such parameters as well as the overall accuracy of the 4D VMAT dose simulation has to be studied in detail, which is the topic of the present work. Specifically, we introduce the principles of 4D VMAT dose simulation, identify influencing parameters and assess their impact on 4D dose simulation accuracy by comparison of simulated motion-affected dose distributions to corresponding dosimetric motion phantom measurements. Exploiting an ITV-based treatment planning approach, VMAT treatment plans were generated for a motion phantom and different motion scenarios (sinusoidal motion of different period/direction; regular/irregular motion). 4D VMAT dose simulation results and dose measurements were compared by local 3% / 3 mm γ-evaluation, with the measured dose distributions serving as ground truth. Overall γ-passing rates of simulations and dynamic measurements ranged from 97% to 100% (mean across all motion scenarios: 98% ± 1%); corresponding values for comparison of different day repeat measurements were
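
    A minimal one-dimensional sketch of the γ-evaluation used above (a global 3%/3 mm criterion is assumed here); production 4D QA tools operate on 3D dose grids and often use local criteria. The dose profiles are synthetic.

```python
import numpy as np

def gamma_pass_rate(measured, simulated, x, dose_tol=0.03, dist_tol=3.0):
    """Fraction of points with gamma <= 1 under a global dose-difference /
    distance-to-agreement criterion (1-D, brute force)."""
    d_ref = measured.max()                       # global normalization dose
    passed = 0
    for xi, mi in zip(x, measured):
        dd = (simulated - mi) / (dose_tol * d_ref)   # dose axis, all points
        dx = (x - xi) / dist_tol                     # distance axis, all points
        gamma = np.sqrt(dd**2 + dx**2).min()         # minimum over the profile
        passed += gamma <= 1.0
    return passed / len(x)

x = np.arange(0.0, 50.0, 1.0)                    # positions in mm
measured = np.exp(-((x - 25.0) / 10.0) ** 2)     # toy dose profile
simulated = 1.01 * measured                      # 1% global offset
rate = gamma_pass_rate(measured, simulated, x)   # → 1.0 for this small offset
```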

  20. 4D dose simulation in volumetric arc therapy: Accuracy and affecting parameters

    PubMed Central

    Werner, René

    2017-01-01

    Radiotherapy of lung and liver lesions has changed from normofractioned 3D-CRT to stereotactic treatment in a single or few fractions, often employing volumetric arc therapy (VMAT)-based techniques. Potential unintended interference of respiratory target motion and dynamically changing beam parameters during VMAT dose delivery motivates establishing 4D quality assurance (4D QA) procedures to assess appropriateness of generated VMAT treatment plans when taking into account patient-specific motion characteristics. Current approaches are motion phantom-based 4D QA and image-based 4D VMAT dose simulation. Whereas phantom-based 4D QA is usually restricted to a small number of measurements, the computational approaches allow simulating many motion scenarios. However, 4D VMAT dose simulation depends on various input parameters, influencing the estimated doses and thereby limiting simulation reliability. Thus, aiming at routine use of simulation-based 4D VMAT QA, the impact of such parameters as well as the overall accuracy of the 4D VMAT dose simulation has to be studied in detail, which is the topic of the present work. Specifically, we introduce the principles of 4D VMAT dose simulation, identify influencing parameters and assess their impact on 4D dose simulation accuracy by comparison of simulated motion-affected dose distributions to corresponding dosimetric motion phantom measurements. Exploiting an ITV-based treatment planning approach, VMAT treatment plans were generated for a motion phantom and different motion scenarios (sinusoidal motion of different period/direction; regular/irregular motion). 4D VMAT dose simulation results and dose measurements were compared by local 3% / 3 mm γ-evaluation, with the measured dose distributions serving as ground truth. Overall γ-passing rates of simulations and dynamic measurements ranged from 97% to 100% (mean across all motion scenarios: 98% ± 1%); corresponding values for comparison of different day repeat measurements were

  1. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    SciTech Connect

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulating models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that regional specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.
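
    A regional correction factor of the simple multiplicative form assumed here (factor = mean observed / mean simulated season length) and a check of its effectiveness can be sketched as follows; the season-length numbers are synthetic, not from the study.

```python
import numpy as np

def correction_factor(sim_hist, obs_hist):
    """Multiplicative regional correction fitted on a historical period."""
    return np.mean(obs_hist) / np.mean(sim_hist)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

sim = np.array([150.0, 155.0, 149.0, 160.0])   # simulated season lengths (days)
obs = np.array([162.0, 165.0, 158.0, 171.0])   # observed season lengths (days)
f = correction_factor(sim, obs)
corrected = f * sim
# Effectiveness check: did the correction actually reduce the discrepancy?
improvement = rmse(sim, obs) - rmse(corrected, obs)
```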

  2. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulating models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that regional specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.

  3. The Impact of Sea Ice Concentration Accuracies on Climate Model Simulations with the GISS GCM

    NASA Technical Reports Server (NTRS)

    Parkinson, Claire L.; Rind, David; Healy, Richard J.; Martinson, Douglas G.; Zukor, Dorothy J. (Technical Monitor)

    2000-01-01

    The Goddard Institute for Space Studies global climate model (GISS GCM) is used to examine the sensitivity of the simulated climate to sea ice concentration specifications in the type of simulation done in the Atmospheric Modeling Intercomparison Project (AMIP), with specified oceanic boundary conditions. Results show that sea ice concentration uncertainties of +/- 7% can affect simulated regional temperatures by more than 6 C, and biases in sea ice concentrations of +7% and -7% alter simulated annually averaged global surface air temperatures by -0.10 C and +0.17 C, respectively, over those in the control simulation. The resulting 0.27 C difference in simulated annual global surface air temperatures is reduced by a third, to 0.18 C, when considering instead biases of +4% and -4%. More broadly, least-squares fits through the temperature results of 17 simulations with ice concentration input changes ranging from increases of 50% versus the control simulation to decreases of 50% yield a yearly average global impact of 0.0107 C warming for every 1% ice concentration decrease, i.e., 1.07 C warming for the full +50% to -50% range. Regionally and on a monthly average basis, the differences can be far greater, especially in the polar regions, where wintertime contrasts between the +50% and -50% cases can exceed 30 C. However, few statistically significant effects are found outside the polar latitudes, and temperature effects over the non-polar oceans tend to be under 1 C, due in part to the specification of an unvarying annual cycle of sea surface temperatures. The +/- 7% and 14% results provide bounds on the impact (on GISS GCM simulations making use of satellite data) of satellite-derived ice concentration inaccuracies, +/- 7% being the current estimated average accuracy of satellite retrievals and +/- 4% being the anticipated improved average accuracy for upcoming satellite instruments. 
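
    The 0.0107 C-per-percent figure quoted above is the slope of a least-squares line through the 17 simulation results. A minimal sketch of that kind of fit, using synthetic stand-in data (the slope and noise level below are illustrative, not the actual GISS GCM output):

```python
import numpy as np

# Synthetic stand-in for the 17 (ice-concentration change, global temperature
# change) pairs; the -0.0107 C/% slope and the noise level are illustrative,
# not the actual GISS GCM results.
delta_ice = np.linspace(-50.0, 50.0, 17)              # % change vs. control run
rng = np.random.default_rng(0)
delta_t = -0.0107 * delta_ice + rng.normal(0.0, 0.01, delta_ice.size)

slope, intercept = np.polyfit(delta_ice, delta_t, 1)  # least-squares line
print(f"warming per 1% ice decrease: {-slope:.4f} C")
```

    With well-sampled inputs spanning the full +/- 50% range, the fitted slope recovers the underlying sensitivity to within a small fraction of its value.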

  4. The effects of mapping CT images to Monte Carlo materials on GEANT4 proton simulation accuracy

    SciTech Connect

    Barnes, Samuel; McAuley, Grant; Slater, James; Wroe, Andrew

    2013-04-15

    Purpose: Monte Carlo simulations of radiation therapy require conversion from Hounsfield units (HU) in CT images to an exact tissue composition and density. The number of discrete densities (or density bins) used in this mapping affects the simulation accuracy, execution time, and memory usage in GEANT4 and other Monte Carlo code. The relationship between the number of density bins and CT noise was examined in general for all simulations that use HU conversion to density. Additionally, the effect of this on simulation accuracy was examined for proton radiation. Methods: Relative uncertainty from CT noise was compared with uncertainty from density binning to determine an upper limit on the number of density bins required in the presence of CT noise. Error propagation analysis was also performed on continuously slowing down approximation range calculations to determine the proton range uncertainty caused by density binning. These results were verified with Monte Carlo simulations. Results: In the presence of even modest CT noise (5 HU or 0.5%) 450 density bins were found to only cause a 5% increase in the density uncertainty (i.e., 95% of density uncertainty from CT noise, 5% from binning). Larger numbers of density bins are not required as CT noise will prevent increased density accuracy; this applies across all types of Monte Carlo simulations. Examining uncertainty in proton range, only 127 density bins are required for a proton range error of <0.1 mm in most tissue and <0.5 mm in low density tissue (e.g., lung). Conclusions: By considering CT noise and actual range uncertainty, the number of required density bins can be restricted to a very modest 127 depending on the application. Reducing the number of density bins provides large memory and execution time savings in GEANT4 and other Monte Carlo packages.
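
    The binning scheme discussed above can be sketched as follows: map HU to density with a calibration curve, then snap each density to the center of one of N discrete bins. The linear calibration below is a hypothetical toy, not a clinical HU-to-density curve; the sketch only illustrates how binning error shrinks as the bin count grows.

```python
import numpy as np

# Illustrative HU-to-density mapping with density binning, in the spirit of the
# GEANT4 workflow described above. The linear calibration is hypothetical.
def hu_to_binned_density(hu, n_bins, hu_range=(-1000.0, 2000.0)):
    density = 1.0 + hu / 1000.0          # toy calibration: water = 0 HU = 1.0 g/cm^3
    d_min = 1.0 + hu_range[0] / 1000.0
    d_max = 1.0 + hu_range[1] / 1000.0
    edges = np.linspace(d_min, d_max, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(density, edges) - 1, 0, n_bins - 1)
    return centers[idx]

hu = np.array([-700.0, 0.0, 60.0, 1200.0])   # lung, water, soft tissue, bone-like
for n in (32, 127, 450):
    err = np.abs(hu_to_binned_density(hu, n) - (1.0 + hu / 1000.0)).max()
    print(n, err)
```

    The worst-case binning error is half a bin width, so once that half-width falls below the CT noise floor, adding bins buys nothing, which is the paper's point about 450 bins.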

  5. Factors influencing QTL mapping accuracy under complicated genetic models by computer simulation.

    PubMed

    Su, C F; Wang, W; Gong, S L; Zuo, J H; Li, S J

    2016-12-19

    The accuracy of quantitative trait loci (QTLs) identified using different sample sizes and marker densities was evaluated in different genetic models. Model I assumed one additive QTL; Model II assumed three additive QTLs plus one pair of epistatic QTLs; and Model III assumed two additive QTLs with opposite genetic effects plus two pairs of epistatic QTLs. Recombinant inbred lines (RILs) (50-1500 samples) were simulated according to these models to study the influence of different sample sizes under different genetic models on QTL mapping accuracy. RILs with 10-100 target chromosome markers were simulated according to Models I and II to evaluate the influence of marker density on QTL mapping accuracy. Different marker densities did not significantly influence accurate estimation of genetic effects with simple additive models, but influenced QTL mapping accuracy in the additive and epistatic models. The optimum marker density was approximately 20 markers when the recombination fraction between two adjacent markers was 0.056 in the additive and epistatic models. A sample size of 150 was sufficient for detecting simple additive QTLs. In contrast, a sample size of approximately 450 is needed to detect QTLs with additive and epistatic models. Sample size must be approximately 750 to detect QTLs with additive, epistatic, and combined effects between QTLs. The sample size should be increased to >750 if the genetic models of the data set become more complicated than Model III. Our results provide a theoretical basis for marker-assisted selection breeding and molecular design breeding.

  6. Real time hybrid simulation with online model updating: An analysis of accuracy

    NASA Astrophysics Data System (ADS)

    Ou, Ge; Dyke, Shirley J.; Prakash, Arun

    2017-02-01

    In conventional hybrid simulation (HS) and real time hybrid simulation (RTHS) applications, the information exchanged between the experimental substructure and numerical substructure is typically restricted to the interface boundary conditions (force, displacement, acceleration, etc.). With additional demands being placed on RTHS and recent advances in recursive system identification techniques, an opportunity arises to improve the fidelity by extracting information from the experimental substructure. Online model updating algorithms enable the numerical model of components that are similar to the physical specimen (herein named the target model) to be modified accordingly. This manuscript demonstrates the power of integrating a model updating algorithm into RTHS (RTHSMU) and explores the possible challenges of this approach through a practical simulation. Two Bouc-Wen models with varying levels of complexity are used as target models to validate the concept and evaluate the performance of this approach. The constrained unscented Kalman filter (CUKF) is selected for use in the model updating algorithm. The accuracy of RTHSMU is evaluated through an estimation output error indicator, a model updating output error indicator, and a system identification error indicator. The results illustrate that, under applicable constraints, by integrating model updating into RTHS, the global response accuracy can be improved when the target model is unknown. A discussion on model updating parameter sensitivity to updating accuracy is also presented to provide guidance for potential users.
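
    The abstract's error indicators compare an estimated response against a reference response. The exact formulas are not given in the abstract, so the normalized RMS form below is an assumption chosen because it is a common choice for such indicators:

```python
import numpy as np

# A common normalized RMS error indicator for comparing an estimated response
# with a reference response. This specific form is an assumption for
# illustration; the paper's exact indicator definitions may differ.
def nrms_error(estimate, reference):
    estimate = np.asarray(estimate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.sqrt(np.mean((estimate - reference) ** 2)) / np.sqrt(np.mean(reference ** 2))

t = np.linspace(0.0, 1.0, 1000)
reference = np.sin(2 * np.pi * 5 * t)                      # reference response
estimate = reference + 0.05 * np.cos(2 * np.pi * 50 * t)   # small model error
print(nrms_error(estimate, reference))  # near 0.05 for this 5% perturbation
```

    An indicator of 0 means a perfect match; values grow with the relative size of the estimation error, which makes it convenient for tracking whether online model updating is improving the simulated response.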

  7. Accuracy of Numerical Simulations of Tip Clearance Flow in Transonic Compressor Rotors Improved Dramatically

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors have a significant impact on rotor and stage performance. Although numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because, in high-speed machines, measurements acquired in sufficient detail to be useful are rare. Researchers at the NASA Glenn Research Center at Lewis Field compared measured tip clearance flow details (e.g., trajectory and radial extent) of the NASA Rotor 35 with results obtained from a numerical simulation. Previous investigations had focused on capturing the detailed development of the jetlike flow leaking through the clearance gap between the rotating blade tip and the stationary compressor shroud. However, we discovered that the simulation accuracy depends primarily on capturing the detailed development of a wall-bounded shear layer formed by the relative motion between the leakage jet and the shroud.

  8. The effectiveness of FE model for increasing accuracy in stretch forming simulation of aircraft skin panels

    NASA Astrophysics Data System (ADS)

    Kono, A.; Yamada, T.; Takahashi, S.

    2013-12-01

    In the aerospace industry, stretch forming has been used to form the outer surface parts of aircraft, which are called skin panels. Empirical methods have been used to correct the springback by measuring the formed panels. However, such methods are impractical and cost prohibitive. Therefore, there is a need to develop simulation technologies to predict the springback caused by stretch forming [1]. This paper reports the results of a study on the influences of the modeling conditions and parameters on the accuracy of an FE analysis simulating the stretch forming of aircraft skin panels. The effects of the mesh aspect ratio, convergence criteria, and integration points are investigated, and better simulation conditions and parameters are proposed.

  9. Improved reticle requalification accuracy and efficiency via simulation-powered automated defect classification

    NASA Astrophysics Data System (ADS)

    Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John

    2014-04-01

    Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity due to the likelihood of human error; the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20nm node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we have studied and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process; specifically to defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and to actual wafer prints.

  10. A Bayesian Simulation for Determining Mastery Classification Accuracy.

    ERIC Educational Resources Information Center

    Steinheiser, Frederick H., Jr.

    A computer simulation of Bayes' Theorem was conducted in order to determine the probability that an examinee was a master conditional upon his test score. The inputs were: number of mastery states assumed, test length, prior expectation of masters in the examinee population, and conditional probability of a master getting a randomly selected test…
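
    The computation described above, the probability that an examinee is a master conditional on a test score, follows directly from Bayes' Theorem given the listed inputs. A sketch with binomial score likelihoods; the specific numbers (prior, item probabilities) are hypothetical, not taken from the study:

```python
from math import comb

# Bayes' Theorem mastery classification: posterior probability of mastery
# given a test score, from a prior proportion of masters and per-item success
# probabilities for masters and non-masters. All numbers are hypothetical.
def p_master_given_score(score, n_items, prior_master, p_item_master, p_item_nonmaster):
    def binom(p):
        return comb(n_items, score) * p**score * (1 - p) ** (n_items - score)
    joint_master = prior_master * binom(p_item_master)
    joint_nonmaster = (1 - prior_master) * binom(p_item_nonmaster)
    return joint_master / (joint_master + joint_nonmaster)

post = p_master_given_score(score=8, n_items=10, prior_master=0.5,
                            p_item_master=0.8, p_item_nonmaster=0.5)
print(round(post, 3))  # -> 0.873
```

    Sweeping the score from 0 to n_items yields the classification table such a simulation produces: the posterior crosses 0.5 at the cut score that best separates masters from non-masters under the assumed inputs.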

  11. Parallel Decomposition of the Fictitious Lagrangian Algorithm and its Accuracy for Molecular Dynamics Simulations of Semiconductors.

    NASA Astrophysics Data System (ADS)

    Yeh, Mei-Ling

    We have performed a parallel decomposition of the fictitious Lagrangian method for molecular dynamics with a tight-binding total energy expression on the hypercube computer. This is the first time in the literature that the dynamical simulation of semiconducting systems containing more than 512 silicon atoms has become possible with the electrons treated as quantum particles. With the utilization of the Intel Paragon system, our timing analysis predicts that our code is expected to perform realistic simulations on very large systems consisting of thousands of atoms with time requirements of the order of tens of hours. Timing results and performance analysis of our parallel code are presented in terms of calculation time, communication time, and setup time. The accuracy of the fictitious Lagrangian method in molecular dynamics simulation is also investigated, especially the energy conservation of the total energy of ions. We find that the accuracy of the fictitious Lagrangian scheme in small silicon cluster and very large silicon system simulations is good for as long as the simulations proceed, even though we quench the electronic coordinates to the Born-Oppenheimer surface only at the beginning of the run. The kinetic energy of electrons does not increase as time goes on, and the energy conservation of the ionic subsystem remains very good. This means that, as far as the ionic subsystem is concerned, the electrons are on average in the true quantum ground states. We also address a few remaining questions about the fictitious Lagrangian method, such as the difference between the results obtained from the Gram-Schmidt and SHAKE methods of orthonormalization, and the differences between simulations where the electrons are quenched to the Born-Oppenheimer surface only once compared with periodic quenching.

  12. A high accuracy sequential solver for simulation and active control of a longitudinal combustion instability

    NASA Technical Reports Server (NTRS)

    Shyy, W.; Thakur, S.; Udaykumar, H. S.

    1993-01-01

    A high accuracy convection scheme using a sequential solution technique has been developed and applied to simulate the longitudinal combustion instability and its active control. The scheme has been devised in the spirit of the Total Variation Diminishing (TVD) concept with special source term treatment. Due to the substantial heat release effect, a clear delineation of the key elements employed by the scheme, i.e., the adjustable damping factor and the source term treatment has been made. By comparing with the first-order upwind scheme previously utilized, the present results exhibit less damping and are free from spurious oscillations, offering improved quantitative accuracy while confirming the spectral analysis reported earlier. A simple feedback type of active control has been found to be capable of enhancing or attenuating the magnitude of the combustion instability.

  13. Grid Generation Issues and CFD Simulation Accuracy for the X33 Aerothermal Simulations

    NASA Technical Reports Server (NTRS)

    Polsky, Susan; Papadopoulos, Periklis; Davies, Carol; Loomis, Mark; Prabhu, Dinesh; Langhoff, Stephen R. (Technical Monitor)

    1997-01-01

    Grid generation issues relating to the simulation of the X33 aerothermal environment using the GASP code are explored. Required grid densities and normal grid stretching are discussed with regard to predicting the fluid dynamic and heating environments with the desired accuracy. The generation of volume grids is explored and includes discussions of structured grid generation packages such as GRIDGEN, GRIDPRO and HYPGEN. Volume grid manipulation techniques for obtaining the desired outer boundary and grid clustering using the OUTBOUND code are examined. The generation of the surface grid with the required topology is also discussed. Utilizing grids without singular axes is explored as a method of avoiding numerical difficulties at the singular line.

  14. Accuracy of flowmeters measuring horizontal groundwater flow in an unconsolidated aquifer simulator.

    USGS Publications Warehouse

    Bayless, E.R.; Mandell, Wayne A.; Ursic, James R.

    2011-01-01

    Borehole flowmeters that measure horizontal flow velocity and direction of groundwater flow are being increasingly applied to a wide variety of environmental problems. This study was carried out to evaluate the measurement accuracy of several types of flowmeters in an unconsolidated aquifer simulator. Flowmeter response to hydraulic gradient, aquifer properties, and well-screen construction was measured during 2003 and 2005 at the U.S. Geological Survey Hydrologic Instrumentation Facility in Bay St. Louis, Mississippi. The flowmeters tested included a commercially available heat-pulse flowmeter, an acoustic Doppler flowmeter, a scanning colloidal borescope flowmeter, and a fluid-conductivity logging system. Results of the study indicated that at least one flowmeter was capable of measuring borehole flow velocity and direction in most simulated conditions. The mean error in direction measurements ranged from 15.1 degrees to 23.5 degrees, and the directional accuracy of all tested flowmeters improved with increasing hydraulic gradient. Darcy velocities examined in this study ranged from 4.3 to 155 ft/d. For many plots comparing the simulated and measured Darcy velocity, the squared correlation coefficient (r2) exceeded 0.92. The accuracy of velocity measurements varied with well construction and velocity magnitude. The use of horizontal flowmeters in environmental studies appears promising, but applications may require more than one type of flowmeter to span the range of conditions encountered in the field. Interpreting flowmeter data from field settings may be complicated by geologic heterogeneity, preferential flow, vertical flow, constricted screen openings, and nonoptimal screen orientation.

  15. Accuracy of MHD simulations: Effects of simulation initialization in GUMICS-4

    NASA Astrophysics Data System (ADS)

    Lakka, Antti; Pulkkinen, Tuija; Dimmock, Andrew; Osmane, Adnane; Palmroth, Minna; Honkonen, Ilja

    2016-04-01

    We conducted a study aimed at revealing how different global magnetohydrodynamic (MHD) simulation initialization methods affect the dynamics in different parts of the Earth's magnetosphere-ionosphere system. While such magnetosphere-ionosphere coupling codes have been used for more than two decades, their testing still requires significant work to identify the optimal numerical representation of the physical processes. We used the Grand Unified Magnetosphere-Ionosphere Coupling Simulation (GUMICS-4), the only European global MHD simulation, developed by the Finnish Meteorological Institute. GUMICS-4 was put to a test that included two stages: 1) a 10 day Omni data interval was simulated and the results were validated by comparing both the bow shock and the magnetopause spatial positions predicted by the simulation to actual measurements, and 2) the validated 10 day simulation run was used as a reference in a comparison of five 3 + 12 hour (3 hour synthetic initialisation + 12 hour actual simulation) simulation runs. The 12 hour input was not only identical in each simulation case but also represented a subset of the 10 day input, thus enabling quantification of the effects of different synthetic initialisations on the magnetosphere-ionosphere system. The synthetic initialisation data sets were created using stepwise, linear, and sinusoidal functions. The switch from synthetic to real Omni data was immediate. The results show that the magnetosphere forms in each case within an hour after the switch to real data. However, local dissimilarities are found in the magnetospheric dynamics after formation, depending on the initialisation method used. This is evident especially in the inner parts of the lobe.

  16. Simulation of thalamic prosthetic vision: reading accuracy, speed, and acuity in sighted humans

    PubMed Central

    Vurro, Milena; Crowell, Anne Marie; Pezaris, John S.

    2014-01-01

    The psychophysics of reading with artificial sight has received increasing attention as visual prostheses are becoming a real possibility to restore useful function to the blind through the coarse, pseudo-pixelized vision they generate. Studies to date have focused on simulating retinal and cortical prostheses; here we extend that work to report on thalamic designs. This study examined the reading performance of normally sighted human subjects using a simulation of three thalamic visual prostheses that varied in phosphene count, to help understand the level of functional ability afforded by thalamic designs in a task of daily living. Reading accuracy, reading speed, and reading acuity of 20 subjects were measured as a function of letter size, using a task based on the MNREAD chart. Results showed that fluid reading was feasible with appropriate combinations of letter size and phosphene count, and performance degraded smoothly as font size was decreased, with an approximate doubling of phosphene count resulting in an increase of 0.2 logMAR in acuity. Results here were consistent with previous results from our laboratory. Results were also consistent with those from the literature, despite using naive subjects who were not trained on the simulator, in contrast to other reports. PMID:25408641

  17. Simulation of thalamic prosthetic vision: reading accuracy, speed, and acuity in sighted humans.

    PubMed

    Vurro, Milena; Crowell, Anne Marie; Pezaris, John S

    2014-01-01

    The psychophysics of reading with artificial sight has received increasing attention as visual prostheses are becoming a real possibility to restore useful function to the blind through the coarse, pseudo-pixelized vision they generate. Studies to date have focused on simulating retinal and cortical prostheses; here we extend that work to report on thalamic designs. This study examined the reading performance of normally sighted human subjects using a simulation of three thalamic visual prostheses that varied in phosphene count, to help understand the level of functional ability afforded by thalamic designs in a task of daily living. Reading accuracy, reading speed, and reading acuity of 20 subjects were measured as a function of letter size, using a task based on the MNREAD chart. Results showed that fluid reading was feasible with appropriate combinations of letter size and phosphene count, and performance degraded smoothly as font size was decreased, with an approximate doubling of phosphene count resulting in an increase of 0.2 logMAR in acuity. Results here were consistent with previous results from our laboratory. Results were also consistent with those from the literature, despite using naive subjects who were not trained on the simulator, in contrast to other reports.

  18. Deciphering the impact of uncertainty on the accuracy of large wildfire spread simulations.

    PubMed

    Benali, Akli; Ervilha, Ana R; Sá, Ana C L; Fernandes, Paulo M; Pinto, Renata M S; Trigo, Ricardo M; Pereira, José M C

    2016-11-01

    Predicting wildfire spread is a challenging task fraught with uncertainties. 'Perfect' predictions are unfeasible since uncertainties will always be present. Improving fire spread predictions is important to reduce its negative environmental impacts. Here, we propose to understand, characterize, and quantify the impact of uncertainty in the accuracy of fire spread predictions for very large wildfires. We frame this work from the perspective of the major problems commonly faced by fire model users, namely the necessity of accounting for uncertainty in input data to produce reliable and useful fire spread predictions. Uncertainty in input variables was propagated throughout the modeling framework and its impact was evaluated by estimating the spatial discrepancy between simulated and satellite-observed fire progression data, for eight very large wildfires in Portugal. Results showed that uncertainties in wind speed and direction, fuel model assignment and typology, location and timing of ignitions, had a major impact on prediction accuracy. We argue that uncertainties in these variables should be integrated in future fire spread simulation approaches, and provide the necessary data for any fire model user to do so.
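
    The core idea above, propagating input uncertainty through the modeling framework and examining the spread of the outputs, can be sketched with a Monte Carlo loop. The fire-spread model below is a deliberately simple placeholder (spread rate linear in wind speed), not the operational simulator used in the study; it only shows the propagation mechanics.

```python
import random

# Toy Monte Carlo propagation of wind-speed uncertainty through a fire-spread
# model. The linear rate model and all parameter values are hypothetical
# placeholders, not the study's fire simulator or data.
def spread_distance(wind_speed, hours=6.0, base_rate=0.5, wind_factor=0.3):
    return (base_rate + wind_factor * max(wind_speed, 0.0)) * hours  # km

random.seed(42)
samples = [spread_distance(random.gauss(20.0, 5.0)) for _ in range(10_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
print(mean, var ** 0.5)  # output mean and its uncertainty (std) in km
```

    In a real application each sampled input set would drive a full spread simulation, and the resulting ensemble of fire perimeters would be scored against observed progression, as the authors do with satellite data.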

  19. Milestone M4900: Simulant Mixing Analytical Results

    SciTech Connect

    Kaplan, D.I.

    2001-07-26

    This report addresses Milestone M4900, ''Simulant Mixing Sample Analysis Results,'' and contains the data generated during the ''Mixing of Process Heels, Process Solutions, and Recycle Streams: Small-Scale Simulant'' task. The Task Technical and Quality Assurance Plan for this task is BNF-003-98-0079A. A report with a narrative description and discussion of the data will be issued separately.

  20. SAR simulations for high-field MRI: how much detail, effort, and accuracy is needed?

    PubMed

    Wolf, S; Diehl, D; Gebhardt, M; Mallow, J; Speck, O

    2013-04-01

    Accurate prediction of specific absorption rate (SAR) for high field MRI is necessary to best exploit its potential and guarantee safe operation. To reduce the effort (time, complexity) of SAR simulations while maintaining robust results, the minimum requirements for the creation (segmentation, labeling) of human models and methods to reduce the time for SAR calculations for 7 Tesla MR-imaging are evaluated. The geometric extent of the model required for realistic head-simulations and the number of tissue types sufficient to form a reliable but simplified model of the human body are studied. Two models (male and female) of the virtual family are analyzed. Additionally, their position within the head-coil is taken into account. Furthermore, the effects of retuning the coils to different load conditions and the influence of a large bore radiofrequency-shield have been examined. The calculation time for SAR simulations in the head can be reduced by 50% without significant error for smaller model extent and simplified tissue structure outside the coil. Likewise, the model generation can be accelerated by reducing the number of tissue types. Local SAR can vary up to 14% due to position alone. This must be considered and sets a limit for SAR prediction accuracy. All these results are comparable between the two body models tested.

  1. Results of the 2015 Spitzer Exoplanet Data Challenge: Repeatability and Accuracy of Exoplanet Eclipse Depths

    NASA Astrophysics Data System (ADS)

    Ingalls, James G.; Krick, Jessica E.; Carey, Sean J.; Stauffer, John R.; Grillmair, Carl J.; Lowrance, Patrick

    2016-06-01

    We examine the repeatability, reliability, and accuracy of differential exoplanet eclipse depth measurements made using the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope during the post-cryogenic mission. At infrared wavelengths secondary eclipses and phase curves are powerful tools for studying a planet’s atmosphere. Extracting information about atmospheres, however, is extremely challenging due to the small differential signals, which are often at the level of 100 parts per million (ppm) or smaller, and require the removal of significant instrumental systematics. For the IRAC 3.6 and 4.5μm InSb detectors that remain active on post-cryogenic Spitzer, the interplay of residual telescope pointing fluctuations with intrapixel gain variations in the moderately undersampled camera is the largest source of time-correlated noise. Over the past decade, a suite of techniques for removing this noise from IRAC data has been developed independently by various investigators. In summer 2015, the Spitzer Science Center hosted a Data Challenge in which seven exoplanet expert teams, each using a different noise-removal method, were invited to analyze 10 eclipse measurements of the hot Jupiter XO-3 b, as well as a complementary set of 10 simulated measurements. In this contribution we review the results of the Challenge. We describe statistical tools to assess the repeatability, reliability, and validity of data reduction techniques, and to compare and (perhaps) choose between techniques.

  2. Accuracy of cutoff probe for measuring electron density: simulation and experiment

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Woong; You, Shin-Jae; Kim, Si-June; Lee, Jang-Jae; Kim, Jung-Hyung; Oh, Wang-Yuhl

    2016-09-01

    The electron density has been used to characterize plasmas for basic research as well as industrial applications. To measure the electron density accurately, various types of microwave probes have been developed and improved. The cutoff probe is a promising technique that infers the electron density from the plasma resonance peak in the transmission spectrum. In this study, we present the accuracy of the electron density inferred from the cutoff probe. The accuracy was investigated by electromagnetic simulation and experiment. The discrepancy between the electron densities from the cutoff probe and other sophisticated microwave probes was investigated and discussed. We found that the cutoff probe has good accuracy in the inferred electron density.
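
    The conversion a cutoff probe performs, from a measured cutoff frequency to electron density, follows from the standard plasma-frequency relation. A sketch of that inversion; the example frequency is illustrative, not a value from the study:

```python
import math

# Electron density from a measured cutoff (plasma) frequency, using the
# standard relation f_p = (1/2*pi) * sqrt(n_e * e^2 / (eps0 * m_e)).
# The 8.98 GHz example frequency is illustrative only.
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
M_E = 9.1093837015e-31    # electron mass, kg
Q_E = 1.602176634e-19     # elementary charge, C

def electron_density(f_cutoff_hz):
    """Invert the plasma-frequency relation; returns n_e in m^-3."""
    return (2.0 * math.pi * f_cutoff_hz) ** 2 * EPS0 * M_E / Q_E**2

n_e = electron_density(8.98e9)   # ~9 GHz cutoff
print(f"{n_e:.3e} m^-3")         # ~1e18 m^-3, matching the 8980*sqrt(n_cm3) rule
```

    Because the density scales with the square of the cutoff frequency, a small relative error in locating the resonance peak translates into roughly twice that relative error in the inferred density, which is why peak-identification accuracy matters for this probe.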

  3. Simulations of pulsating one-dimensional detonations with true fifth order accuracy

    SciTech Connect

    Henrick, Andrew K. (ahenrick@nd.edu); Aslam, Tariq D. (aslam@lanl.gov); Powers, Joseph M. (powers@nd.edu)

    2006-03-20

    A novel, highly accurate numerical scheme based on shock-fitting coupled with fifth order spatial and temporal discretizations is applied to a classical unsteady detonation problem to generate solutions with unprecedented accuracy. The one-dimensional reactive Euler equations for a calorically perfect mixture of ideal gases whose reaction is described by single-step irreversible Arrhenius kinetics are solved in a series of calculations in which the activation energy is varied. In contrast with nearly all known simulations of this problem, which converge at a rate no greater than first order as the spatial and temporal grid is refined, the present method is shown to converge at a rate consistent with the fifth order accuracy of the spatial and temporal discretization schemes. This high accuracy enables more precise verification of known results and prediction of heretofore unknown phenomena. To five significant figures, the scheme faithfully recovers the stability boundary, growth rates, and wave-numbers predicted by an independent linear stability theory in the stable and weakly unstable regime. As the activation energy is increased, a series of period-doubling events are predicted, and the system undergoes a transition to chaos. Consistent with general theories of non-linear dynamics, the bifurcation points are seen to converge at a rate for which the Feigenbaum constant is 4.66 ± 0.09, in close agreement with the true value of 4.669201... As activation energy is increased further, domains are identified in which the system undergoes a transition from a chaotic state back to one whose limit cycles are characterized by a small number of non-linear oscillatory modes. This result is consistent with behavior of other non-linear dynamical systems, but not typically considered in detonation dynamics. The period and average detonation velocity are calculated for a variety of asymptotically stable limit cycles. The average velocity for such pulsating detonations is
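
    The Feigenbaum estimate quoted above (4.66 ± 0.09) is formed from the ratio of successive gaps between period-doubling bifurcation points. A sketch of that calculation, illustrated with approximate bifurcation parameters of the logistic map, a different system from the detonation problem, used here only because its values are well known:

```python
# Feigenbaum-constant estimate from successive period-doubling bifurcation
# points: delta_n = (b_n - b_{n-1}) / (b_{n+1} - b_n). The values below are
# approximate logistic-map bifurcation parameters, not detonation results.
bifurcations = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]

estimates = [
    (bifurcations[i] - bifurcations[i - 1]) / (bifurcations[i + 1] - bifurcations[i])
    for i in range(1, len(bifurcations) - 1)
]
print(estimates)  # successive estimates approach 4.669201... from above
```

    As more doublings are included, the ratio converges toward the universal value; the paper's ± 0.09 uncertainty reflects having only a finite number of resolvable bifurcation points.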

  4. Development of a numerical simulator of human swallowing using a particle method (Part 2. Evaluation of the accuracy of a swallowing simulation using the 3D MPS method).

    PubMed

    Kamiya, Tetsu; Toyama, Yoshio; Michiwaki, Yukihiro; Kikuchi, Takahiro

    2013-01-01

    The aim of this study was to develop and evaluate the accuracy of a three-dimensional (3D) numerical simulator of the swallowing action using the 3D moving particle simulation (MPS) method, which can simulate splashes and rapid changes in the free surfaces of food materials. The 3D numerical simulator of the swallowing action using the MPS method was developed based on accurate organ models, which incorporate forced deformation as a function of elapsed time. The validity of the simulation results was evaluated qualitatively based on comparisons with videofluorography (VF) images. To evaluate the validity of the simulation results quantitatively, the normalized brightness around the vallecula was used as the evaluation parameter. The positions and configurations of the food bolus during each time step were compared in the simulated and VF images. The simulation results corresponded to the VF images during each time step in the visual evaluations, which suggested that the simulation was qualitatively correct. The normalized brightness of the simulated and VF images corresponded exactly at all time steps. This showed that the simulation results, which contained information on changes in the organs and the food bolus, were numerically correct. Based on these results, the accuracy of this simulator was high and it could be used to study the mechanism of disorders that cause dysphagia. The simulator also calculated the shear rate at specific points and times for Newtonian and non-Newtonian fluids. We think that the information provided by this simulator could be useful for the development of food products and medicines, and in rehabilitation facilities.

  5. The influence of data shape acquisition process and geometric accuracy of the mandible for numerical simulation.

    PubMed

    Relvas, C; Ramos, A; Completo, A; Simões, J A

    2011-08-01

Computer-aided technologies have enabled new 3D modelling capabilities and engineering analyses based on experimental and numerical simulation. They have enormous potential for product development, such as biomedical instrumentation and implants. However, due to the complex shapes of anatomical structures, the accuracy of these technologies plays a key role in adequate and accurate finite element analysis (FEA). The objective of this study was to determine the influence of geometric variability between two digital models of a human mandible. Two different shape acquisition techniques, CT scan and 3D laser scan, were assessed. A total of 130 points were controlled and the deviations between the measured points of the physical and 3D virtual models were assessed. The results of the FEA study showed a relative difference of 20% for the maximum displacement and 10% for the maximum strain between the two geometries.

  6. Accuracy of user-friendly blood typing kits tested under simulated military field conditions.

    PubMed

    Bienek, Diane R; Charlton, David G

    2011-04-01

Rapid user-friendly ABO-Rh blood typing kits (Eldon Home Kit 2511, ABO-Rh Combination Blood Typing Experiment Kit) were evaluated to determine their accuracy when used under simulated military field conditions and after long-term storage at various temperatures and humidities. Rates of positive tests between control groups, experimental groups, and industry standards were measured and analyzed using Fisher's exact test to identify significant differences (p ≤ 0.05). When Eldon Home Kits 2511 were used under various operational conditions, the results were comparable to those obtained with the control group and with the industry standard. The performance of the ABO-Rh Combination Blood Typing Experiment Kit was adversely affected by prolonged storage at temperatures above 37 °C. The diagnostic performance of commercial blood typing kits varies according to product and environmental storage conditions.

  7. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    SciTech Connect

    Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen Martin; Tucker, Garritt J.

    2014-09-01

This report summarizes the results of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called the Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers.
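As a toy illustration of the weighted least-squares fitting step described above (not the actual FitSnap.py/LAMMPS machinery), the sketch below fits a linear model in two invented "bispectrum" descriptors to synthetic training energies by solving the 2×2 weighted normal equations. All descriptor values, energies, and weights are made up for the example.

```python
# SNAP-style fitting sketch: energies modeled as E = beta1*B1 + beta2*B2,
# with per-configuration weights, solved via weighted normal equations.

def weighted_lstsq_2d(X, y, w):
    """Solve the 2x2 weighted normal equations (X^T W X) beta = X^T W y."""
    a = sum(wi * x[0] * x[0] for x, wi in zip(X, w))
    b = sum(wi * x[0] * x[1] for x, wi in zip(X, w))
    c = sum(wi * x[1] * x[1] for x, wi in zip(X, w))
    r0 = sum(wi * x[0] * yi for x, yi, wi in zip(X, y, w))
    r1 = sum(wi * x[1] * yi for x, yi, wi in zip(X, y, w))
    det = a * c - b * b
    return ((c * r0 - b * r1) / det, (a * r1 - b * r0) / det)

# Synthetic "training set": energies generated exactly from beta = (2.0, -0.5),
# so the weighted fit should recover those coefficients.
X = [(1.0, 2.0), (2.0, 1.0), (3.0, 4.0), (0.5, 1.5)]   # invented descriptors
beta_true = (2.0, -0.5)
y = [beta_true[0] * b1 + beta_true[1] * b2 for b1, b2 in X]
w = [1.0, 2.0, 1.0, 0.5]  # invented per-configuration weights

beta = weighted_lstsq_2d(X, y, w)
```

In the real workflow the design matrix holds many bispectrum components per atom and also rows for forces and stresses, but the linear-regression structure is the same.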

  8. Thermodynamics of supersaturated steam: Molecular simulation results

    NASA Astrophysics Data System (ADS)

    Moučka, Filip; Nezbeda, Ivo

    2016-12-01

Supersaturated steam modeled by the Gaussian charge polarizable model [P. Paricaud, M. Předota, and A. A. Chialvo, J. Chem. Phys. 122, 244511 (2005)] and the BK3 model [P. Kiss and A. Baranyai, J. Chem. Phys. 138, 204507 (2013)] has been simulated at conditions occurring in steam turbines using multiple-particle-move Monte Carlo, both for the homogeneous phase and as implemented within the Gibbs ensemble Monte Carlo molecular simulation method. Because of these thermodynamic conditions, a specific simulation algorithm has been developed to bypass common simulation problems resulting from the very low densities of steam and cluster formation therein. In addition to pressure-temperature-density and orthobaric data, the distribution of clusters has also been evaluated. The extensive, high-precision data obtained should serve as a basis for the development of reliable molecular-based equations for the properties of metastable steam.

  9. Accuracy of nonmolecular identification of growth-hormone- transgenic coho salmon after simulated escape.

    PubMed

Sundström, L F; Lõhmus, M; Devlin, R H

    2015-09-01

Concerns with transgenic animals include the potential ecological risks associated with release or escape to the natural environment, and a critical requirement for assessment of ecological effects is the ability to distinguish transgenic animals from wild type. Here, we explore geometric morphometrics (GeoM) and human expertise to distinguish growth-hormone-transgenic coho salmon (Oncorhynchus kisutch) specimens from wild type. First, we simulated an escape of 3-month-old hatchery-reared wild-type and transgenic fish to an artificial stream, and recaptured them at the time of seaward migration at an age of 13 months. Second, we reared fish in the stream from first-feeding fry until an age of 13 months, thereby simulating fish arising from a successful spawn in the wild of an escaped hatchery-reared transgenic fish. All fish were then assessed from photographs by visual identification (VID) by local staff and by GeoM based on 13 morphological landmarks. A leave-one-out discriminant analysis of GeoM data had on average 86% (72-100% for individual groups) accuracy in assigning the correct genotypes, whereas the human experts were correct, on average, in only 49% of cases (range of 18-100% for individual fish groups). However, serious errors (i.e., classifying transgenic specimens as wild type) occurred for 7% (GeoM) and 67% (VID) of transgenic fish, and all of these incorrect assignments arose with fish reared in the stream from the first-feeding stage. The results show that we presently lack the skills to visually distinguish transgenic coho salmon from wild type with a high level of accuracy, but that further development of GeoM methods could be useful in identifying second-generation fish from nature as a nonmolecular approach.
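The leave-one-out validation scheme mentioned above can be sketched in a few lines. This is not the paper's discriminant analysis: a simple nearest-centroid rule on a one-dimensional "landmark" score stands in for it, and all data are invented.

```python
# Leave-one-out accuracy with a nearest-centroid classifier (toy stand-in
# for the GeoM discriminant analysis; scores and labels are invented).

def centroid(xs):
    return sum(xs) / len(xs)

def loo_accuracy(samples):
    """samples: list of (score, label). Hold each sample out in turn and
    classify it by the nearer class centroid of the remaining samples."""
    correct = 0
    for i, (x, label) in enumerate(samples):
        rest = samples[:i] + samples[i + 1:]
        wild = [s for s, lab in rest if lab == "wild"]
        tg = [s for s, lab in rest if lab == "transgenic"]
        pred = "wild" if abs(x - centroid(wild)) < abs(x - centroid(tg)) else "transgenic"
        correct += pred == label
    return correct / len(samples)

# Toy data: transgenic fish score higher on the morphometric axis.
data = [(1.0, "wild"), (1.2, "wild"), (0.9, "wild"),
        (2.0, "transgenic"), (2.2, "transgenic"), (1.9, "transgenic")]
acc = loo_accuracy(data)
```

The key point of leave-one-out is that each fish is classified by a model fit without it, so the reported accuracy is not inflated by testing on training data.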

  10. Accuracy of three-dimensional soft tissue simulation in bimaxillary osteotomies.

    PubMed

    Liebregts, Jeroen; Xi, Tong; Timmermans, Maarten; de Koning, Martien; Bergé, Stefaan; Hoppenreijs, Theo; Maal, Thomas

    2015-04-01

The purpose of this study was to evaluate the accuracy of an algorithm based on the mass tensor model (MTM) for computerized 3D simulation of soft-tissue changes following bimaxillary osteotomy, and to identify patient and surgery-related factors that may affect the accuracy of the simulation. Sixty patients (mean age 26.0 years) who had undergone bimaxillary osteotomy participated in this study. Cone beam CT scans were acquired pre- and one year postoperatively. The 3D rendered pre- and postoperative scans were matched. The maxilla and mandible were segmented and aligned to the postoperative position. 3D distance maps and cephalometric analyses were used to quantify the simulation error. The mean absolute error between the 3D simulation and the actual postoperative facial profile was 0.81 ± 0.22 mm for the face as a whole. The accuracy of the simulation (average absolute error ≤2 mm) for the whole face and for the upper lip, lower lip and chin subregions were 100%, 93%, 90% and 95%, respectively. The predictability was correlated with the magnitude of the maxillary and mandibular advancement, age and V-Y closure. It was concluded that the MTM-based soft tissue simulation for bimaxillary surgery was accurate for clinical use, though patients should be informed of possible variation in the predicted lip position.

  11. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  12. Evaluation of the soil moisture prediction accuracy of a space radar using simulation techniques. [Kansas

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Stiles, J. A.; Moore, R. K.; Holtzman, J. C.

    1981-01-01

Image simulation techniques were employed to generate synthetic aperture radar images of a 17.7 km x 19.3 km test site located east of Lawrence, Kansas. The simulations were performed for a space SAR at an orbital altitude of 600 km, with the following sensor parameters: frequency = 4.75 GHz, polarization = HH, and angle of incidence range = 7 deg to 22 deg from nadir. Three sets of images were produced corresponding to three different spatial resolutions: 20 m x 20 m with 12 looks, 100 m x 100 m with 23 looks, and 1 km x 1 km with 1000 looks. Each set consisted of images for four different soil moisture distributions across the test site. Results indicate that, for the agricultural portion of the test site, the soil moisture in about 90% of the pixels can be predicted with an accuracy of ± 20% of field capacity. Among the three spatial resolutions, the 1 km x 1 km resolution gave the best results for most cases; however, for very dry soil conditions, the 100 m x 100 m resolution was slightly superior.

  13. Geopositioning accuracy prediction results for registration of imaging and nonimaging sensors using moving objects

    NASA Astrophysics Data System (ADS)

    Taylor, Charles R.; Dolloff, John T.; Lofy, Brian A.; Luker, Steve A.

    2003-08-01

BAE SYSTEMS is developing a "4D Registration" capability for DARPA's Dynamic Tactical Targeting program. This will advance our automatic image registration capability to use moving objects, and extend our current capability to include the registration of non-imaging sensors. Moving objects produce signals that are identifiable across multiple sensors such as radar moving target indicators, unattended ground sensors, and imaging sensors. Correspondences of those signals across sensor types make it possible to improve the support data accuracy for each of the sensors involved in the correspondence. The amount of accuracy improvement possible, and the effects of the accuracy improvement on geopositioning with the sensors, pose a complex problem. The main factors that contribute to the complexity are the sensor-to-target geometry, the a priori sensor support data accuracy, sensor measurement accuracy, the distribution of identified objects in ground space, and the motion and motion uncertainty of the identified objects. As part of the 4D Registration effort, BAE SYSTEMS is conducting a sensitivity study to investigate the complexities and benefits of multisensor registration with moving objects. The results of the study will be summarized.

  14. Ventricular Fibrillation in Mammalian Hearts: Simulation Results

    NASA Astrophysics Data System (ADS)

    Fenton, Flavio H.

    2002-03-01

    The computational approach to understanding the initiation and evolution of cardiac arrhythmias forms a necessary link between experiment and theory. Numerical simulations combine useful mathematical models and complex geometry while offering clean and comprehensive data acquisition, reproducible results that can be compared to experiments, and the flexibility of exploring parameter space systematically. However, because cardiac dynamics occurs on many scales (on the order of 10^9 cells of size 10-100 microns with more than 40 ionic currents and time scales as fast as 0.01ms), roughly 10^17 operations are required to simulate just one second of real time. These intense computational requirements lead to significant implementation challenges even on existing supercomputers. Nevertheless, progress over the last decade in understanding the effects of some spatial scales and spatio-temporal dynamics on cardiac cell and tissue behavior justifies the use of certain simplifications which, along with improved models for cellular dynamics and detailed digital models of cardiac anatomy, are allowing simulation studies of full-size ventricles and atria. We describe this simulation problem from a combined numerical, physical and biological point of view, with an emphasis on the dynamics and stability of scroll waves of electrical activity in mammalian hearts and their relation to tachycardia, fibrillation and sudden death. Detailed simulations of electrical activity in ventricles including complex anatomy, anisotropic fiber structure, and electrophysiological effects of two drugs (DAM and CytoD) are presented and compared with experimental results.

  15. Assessment of the accuracy of density functional theory for first principles simulations of water

    NASA Astrophysics Data System (ADS)

    Grossman, J. C.; Schwegler, E.; Draeger, E.; Gygi, F.; Galli, G.

    2004-03-01

We present a series of Car-Parrinello (CP) molecular dynamics simulations in order to better understand the accuracy of density functional theory for the calculation of the properties of water [1]. Through 10 separate ab initio simulations, each for 20 ps of ``production'' time, a number of approximations are tested by varying the density functional employed, the fictitious electron mass, μ, in the CP Lagrangian, the system size, and the ionic mass, M (we considered both H_2O and D_2O). We present the impact of these approximations on properties such as the radial distribution function [g(r)], structure factor [S(k)], diffusion coefficient and dipole moment. Our results show that structural properties may artificially depend on μ, and that in the case of an accurate description of the electronic ground state, and in the absence of proton quantum effects, we obtained an oxygen-oxygen correlation function that is over-structured compared to experiment, and a diffusion coefficient which is approximately 10 times smaller. ^1 J.C. Grossman et al., J. Chem. Phys. (in press, 2004).

  16. Titan's organic chemistry: Results of simulation experiments

    NASA Technical Reports Server (NTRS)

    Sagan, Carl; Thompson, W. Reid; Khare, Bishun N.

    1992-01-01

    Recent low pressure continuous low plasma discharge simulations of the auroral electron driven organic chemistry in Titan's mesosphere are reviewed. These simulations yielded results in good accord with Voyager observations of gas phase organic species. Optical constants of the brownish solid tholins produced in similar experiments are in good accord with Voyager observations of the Titan haze. Titan tholins are rich in prebiotic organic constituents; the Huygens entry probe may shed light on some of the processes that led to the origin of life on Earth.

  17. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    PubMed

    Maldonado, G; Greenland, S

    1998-07-01

    A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
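A minimal sketch, with invented category labels and data, of what "factoring" means in the abstract above: an ordered exposure variable is expanded into dichotomous 0/1 indicator (dummy) columns, one per non-reference level, before fitting the factored regression model.

```python
# Expand an ordered categorical exposure into indicator variables,
# omitting the chosen reference level (invented example data).

def factor(values, reference):
    """Return a dict mapping each non-reference level to its 0/1 indicator column."""
    levels = sorted(set(values) - {reference})
    return {lvl: [1 if v == lvl else 0 for v in values] for lvl in levels}

exposure = ["low", "medium", "high", "low", "high"]
indicators = factor(exposure, reference="low")
```

Each indicator column gets its own coefficient in the factored model, which is why factoring removes the linearity assumption at the cost of extra estimation variance, as the abstract discusses.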

  18. Simulation-based evaluation of the resolution and quantitative accuracy of temperature-modulated fluorescence tomography

    PubMed Central

    Lin, Yuting; Nouizi, Farouk; Kwong, Tiffany C.; Gulsen, Gultekin

    2016-01-01

Conventional fluorescence tomography (FT) can recover the distribution of fluorescent agents within a highly scattering medium. However, poor spatial resolution remains its foremost limitation. Previously, we introduced a new fluorescence imaging technique termed “temperature-modulated fluorescence tomography” (TM-FT), which provides high-resolution images of fluorophore distribution. TM-FT is a multimodality technique that combines fluorescence imaging with focused ultrasound to locate thermo-sensitive fluorescence probes using a priori spatial information to drastically improve the resolution of conventional FT. In this paper, we present an extensive simulation study to evaluate the performance of the TM-FT technique on complex phantoms with multiple fluorescent targets of various sizes located at different depths. In addition, the performance of the TM-FT is tested in the presence of background fluorescence. The results obtained using our new method are systematically compared with those obtained with the conventional FT. Overall, TM-FT provides higher resolution and superior quantitative accuracy, making it an ideal candidate for in vivo preclinical and clinical imaging. For example, a 4 mm diameter inclusion positioned in the middle of a synthetic slab geometry phantom (D: 40 mm × W: 100 mm) is recovered as an elongated object in the conventional FT (x = 4.5 mm; y = 10.4 mm), while TM-FT recovers it successfully in both directions (x = 3.8 mm; y = 4.6 mm). As a result, the quantitative accuracy of the TM-FT is superior because it recovers the concentration of the agent with a 22% error, which is in contrast with the 83% error of the conventional FT. PMID:26368884

  19. Accuracy and precision of free-energy calculations via molecular simulation

    NASA Astrophysics Data System (ADS)

    Lu, Nandou

    A quantitative characterization of the methodologies of free-energy perturbation (FEP) calculations is presented, and optimal implementation of the methods for reliable and efficient calculation is addressed. Some common misunderstandings in the FEP calculations are corrected. The two opposite directions of FEP calculations are uniquely defined as generalized insertion and generalized deletion, according to the entropy change along the perturbation direction. These two calculations are not symmetric; they produce free-energy results differing systematically due to the different capability of each to sample the important phase-space in a finite-length simulation. The FEP calculation errors are quantified by characterizing the simulation sampling process with the help of probability density functions for the potential energy change. While the random error in the FEP calculation is analyzed with a probabilistic approach, the systematic error is characterized as the most-likely inaccuracy, which is modeled considering the poor sampling of low-probability energy distribution tails. Our analysis shows that the entropy difference between the perturbation systems plays a key role in determining the reliability of FEP results, and the perturbation should be carried out in the insertion direction in order to ensure a good sampling and thus a reliable calculation. Easy-to-use heuristics are developed to estimate the simulation errors, as well as the simulation length that ensures a certain accuracy level of the calculation. The fundamental understanding obtained is then applied to tackle the problem of multistage FEP optimization. We provide the first principle of optimal staging: For each substage FEP calculation, the higher entropy system should be used as the reference to govern the sampling, i.e., the calculation should be conducted in the generalized insertion direction for each stage of perturbation. To minimize the simulation error, intermediate states should be
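The FEP estimator underlying the analysis above is the Zwanzig exponential average, ΔF = -kT ln⟨exp(-ΔU/kT)⟩₀, taken over configurations sampled in the reference system. The sketch below uses synthetic Gaussian ΔU samples, chosen because a Gaussian ΔU has the analytic result ΔF = m - s²/(2kT) against which the estimate can be checked; none of these numbers come from a real simulation.

```python
# Zwanzig free-energy perturbation estimator on synthetic Gaussian dU samples.
import math
import random

def fep_estimate(du_samples, kT=1.0):
    """dF = -kT * ln( mean of exp(-dU/kT) ) over sampled energy changes dU."""
    boltz = [math.exp(-du / kT) for du in du_samples]
    return -kT * math.log(sum(boltz) / len(boltz))

random.seed(0)
m, s = 1.0, 0.5  # invented mean and std of the Gaussian dU distribution
samples = [random.gauss(m, s) for _ in range(200000)]

dF = fep_estimate(samples)
analytic = m - s * s / 2.0  # exact result for Gaussian dU with kT = 1
```

The estimator's reliability degrades rapidly when the low-probability tail of the ΔU distribution dominates the exponential average, which is the sampling problem the abstract's insertion-versus-deletion analysis addresses.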

  20. Evaluation of Accuracy and Reliability of the Six Ensemble Methods Using 198 Sets of Pseudo-Simulation Data

    NASA Astrophysics Data System (ADS)

    Suh, M. S.; Oh, S. G.

    2014-12-01

    The accuracy and reliability of the six ensemble methods were evaluated according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) generated by considering the simulation characteristics of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets with 50 samples. The ensemble methods used were as follows: equal weighted averaging with(out) bias correction (EWA_W(N)BC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), WEA based on reliability (WEA_REA), and multivariate linear regression (Mul_Reg). The weighted ensemble methods showed better projection skills in terms of accuracy and reliability than non-weighted methods, in particular, for the PSD categories having systematic biases and various correlation coefficients. In general, WEA_Tay, WEA_REA and WEA_RAC showed superior skills in terms of accuracy and reliability, regardless of the PSD categories, training periods, and ensemble numbers. The evaluation results showed that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of members. However, the EWA_NBC showed a comparable projection skill with the other methods only in the certain categories with unsystematic biases.
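A hedged sketch of an RMSE-based weighted ensemble average in the spirit of WEA_RAC: each member's weight is proportional to the inverse of its RMSE over a training period. The correlation term of the actual method is omitted for brevity, and all model and observation values are invented.

```python
# Weighted ensemble averaging where weights are inversely proportional
# to each member's RMSE against observations (simplified WEA_RAC-style).
import math

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def weighted_ensemble(members, obs):
    """members: list of model time series; returns the weighted ensemble mean."""
    errors = [rmse(m, obs) for m in members]
    raw = [1.0 / e for e in errors]
    total = sum(raw)
    weights = [r / total for r in raw]
    return [sum(w * m[t] for w, m in zip(weights, members)) for t in range(len(obs))]

obs = [10.0, 12.0, 11.0, 13.0]
members = [[10.5, 12.5, 11.5, 13.5],   # small bias, low RMSE -> large weight
           [8.0, 10.0, 9.0, 11.0]]     # large bias, high RMSE -> small weight

ens = weighted_ensemble(members, obs)
```

Because the weights are skill-based, a member with a large systematic bias contributes little, which is consistent with the abstract's finding that weighted methods outperform equal weighting for systematically biased simulation data.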

  1. Comparison of the Accuracy and Speed of Transient Mobile A/C System Simulation Models: Preprint

    SciTech Connect

    Kiss, T.; Lustbader, J.

    2014-03-01

The operation of air conditioning (A/C) systems is a significant contributor to the total amount of fuel used by light- and heavy-duty vehicles. Therefore, continued improvement of the efficiency of these mobile A/C systems is important. Numerical simulation has been used to reduce the system development time and to improve the electronic controls, but numerical models that include highly detailed physics run slower than desired for carrying out vehicle-focused drive cycle-based system optimization. Therefore, faster models are needed even if some accuracy is sacrificed. In this study, a validated model with highly detailed physics, the 'Fully-Detailed' model, and two models with different levels of simplification, the 'Quasi-Transient' and the 'Mapped-Component' models, are compared. The Quasi-Transient model applies some simplifications compared to the Fully-Detailed model to allow faster model execution speeds. The Mapped-Component model is similar to the Quasi-Transient model except instead of detailed flow and heat transfer calculations in the heat exchangers, it uses lookup tables created with the Quasi-Transient model. All three models are set up to represent the same physical A/C system and the same electronic controls. Speed and results of the three model versions are compared for steady-state and transient operation. Steady-state simulated data are also compared to measured data. The results show that the Quasi-Transient and Mapped-Component models ran much faster than the Fully-Detailed model, on the order of 10- and 100-fold, respectively. They also adequately approach the results of the Fully-Detailed model for steady-state operation, and for drive cycle-based efficiency predictions.

  2. Sensitivity of Tumor Motion Simulation Accuracy to Lung Biomechanical Modeling Approaches and Parameters

    PubMed Central

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu

    2015-01-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the Neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right (LR), anterior-posterior (AP), and superior-inferior (SI) directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation. PMID:26531324

  3. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters.

    PubMed

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-21

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.

  4. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters

    NASA Astrophysics Data System (ADS)

    Nasehi Tehrani, Joubin; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.

  5. High accuracy binary black hole simulations with an extended wave zone

    SciTech Connect

    Pollney, Denis; Reisswig, Christian; Dorband, Nils; Schnetter, Erik; Diener, Peter

    2011-02-15

We present results from a new code for binary black hole evolutions using the moving-puncture approach, implementing finite differences in generalized coordinates, and allowing the spacetime to be covered with multiple communicating nonsingular coordinate patches. Here we consider a regular Cartesian near-zone, with adapted spherical grids covering the wave zone. The efficiencies resulting from the use of adapted coordinates allow us to maintain sufficient grid resolution to an artificial outer boundary location which is causally disconnected from the measurement. For the well-studied test case of the inspiral of an equal-mass nonspinning binary (evolved for more than 8 orbits before merger), we determine the phase and amplitude to numerical accuracies better than 0.010% and 0.090% during inspiral, respectively, and 0.003% and 0.153% during merger. The waveforms, including the resolved higher harmonics, are convergent and can be consistently extrapolated to r → ∞ throughout the simulation, including the merger and ringdown. Ringdown frequencies for these modes (to (l,m)=(6,6)) match perturbative calculations to within 0.01%, providing a strong confirmation that the remnant settles to a Kerr black hole with irreducible mass M_irr = 0.884355 ± 20×10^-6 and spin S_f/M_f^2 = 0.686923 ± 10×10^-6.

  6. Numerical simulations of catastrophic disruption: Recent results

    NASA Technical Reports Server (NTRS)

    Benz, W.; Asphaug, E.; Ryan, E. V.

    1994-01-01

Numerical simulations have been used to study high velocity two-body impacts. In this paper, a two-dimensional Lagrangian finite difference hydrocode and a three-dimensional smoothed particle hydrodynamics (SPH) code are described and initial results reported. These codes can be, and have been, used to make specific predictions about particular objects in our solar system. But more significantly, they allow us to explore a broad range of collisional events. Certain parameters (size, time) can be studied only over a very restricted range within the laboratory; other parameters (initial spin, low gravity, exotic structure or composition) are difficult to study at all experimentally. The outcomes of numerical simulations lead to a more general and accurate understanding of impacts in their many forms.

  7. Photovoltaic-electrolyzer system transient simulation results

    SciTech Connect

    Leigh, R.W.; Metz, P.D.; Michalek, K.

    1986-05-01

Brookhaven National Laboratory has developed a Hydrogen Technology Evaluation Center to illustrate advanced hydrogen technology. The first phase of this effort investigated the use of solar energy to produce hydrogen from water via photovoltaic-powered electrolysis. A coordinated program of system testing, computer simulation, and economic analysis has been adopted to characterize and optimize the photovoltaic-electrolyzer system. This paper presents the initial transient simulation results. Innovative features of the modeling include the use of real weather data, detailed hourly modeling of thermal characteristics of the PV array and of system control strategies, and examination of systems over a wide range of power and voltage ratings. The transient simulation system TRNSYS was used, incorporating existing, modified or new component subroutines as required. For directly coupled systems, the authors found the PV array voltage which maximizes hydrogen production to be quite near the nominal electrolyzer voltage for a wide range of PV array powers. The array voltage which maximizes excess electricity production is slightly higher. The use of an ideal (100 percent efficient) maximum power tracking system provides only a six percent increase in annual hydrogen production. An examination of the effect of the PV array tilt indicates, as expected, that annual hydrogen production is insensitive to tilt angle within ±20 deg of latitude. Summer production greatly exceeds winter generation. Tilting the array, even to 90 deg, produces no significant increase in winter hydrogen production.

  8. Fast Plasma Instrument for MMS: Simulation Results

    NASA Technical Reports Server (NTRS)

    Figueroa-Vinas, Adolfo; Adrian, Mark L.; Lobell, James V.; Simpson, David G.; Barrie, Alex; Winkert, George E.; Yeh, Pen-Shu; Moore, Thomas E.

    2008-01-01

The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. The Dual Electron Spectrometer (DES) of the Fast Plasma Instrument (FPI) for MMS meets these demanding requirements by acquiring the electron velocity distribution functions (VDFs) for the full sky with high-resolution angular measurements every 30 ms. This will provide unprecedented access to electron-scale dynamics within the reconnection diffusion region. The DES consists of eight half-top-hat energy analyzers, each with a 6 deg. x 11.25 deg. field of view (FOV). Full-sky coverage is achieved by electrostatically stepping the FOV of each of the eight sensors through four discrete deflection look directions. Data compression and burst memory management will provide approximately 30 minutes of high time resolution data during each orbit of the four MMS spacecraft. Each spacecraft will intelligently downlink the data sequences that contain the greatest amount of temporal structure. Here we present the results of a simulation of the DES analyzer measurements, data compression and decompression, as well as ground-based analysis using as a seed re-processed Cluster/PEACE electron measurements. The Cluster/PEACE electron measurements have been reprocessed through virtual DES analyzers with their proper geometrical, energy, and timing scale factors and re-mapped via interpolation to the DES angular and energy phase-space sampling measurements. The results of the simulated DES measurements are analyzed and the full moments of the simulated VDFs are compared with those obtained from the Cluster/PEACE spectrometer using a standard quadrature moment, a newly implemented spectral spherical harmonic method, and a singular value decomposition method.
Our preliminary moment calculations show a remarkable agreement within the uncertainties of the measurements, with the
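As a rough illustration of the "standard quadrature moment" method named in this abstract, the sketch below computes density and bulk velocity from a gridded one-dimensional distribution function by trapezoidal quadrature. The grid, the drifting-Maxwellian test function, and all names are invented for illustration; they are not DES data products.

```python
import math

def moments_1d(v, f):
    """Trapezoidal quadrature for density n = ∫ f dv and
    bulk velocity u = (1/n) ∫ v f dv on a gridded 1-D VDF."""
    n = 0.0   # zeroth moment accumulator
    nu = 0.0  # first moment accumulator
    for i in range(len(v) - 1):
        dv = v[i + 1] - v[i]
        n += 0.5 * (f[i] + f[i + 1]) * dv
        nu += 0.5 * (v[i] * f[i] + v[i + 1] * f[i + 1]) * dv
    return n, nu / n

# Drifting Maxwellian test case: density 5.0, bulk speed 2.0, thermal speed 1.0
v_grid = [i * 0.01 - 8.0 for i in range(1601)]
f_grid = [5.0 / math.sqrt(2 * math.pi) * math.exp(-0.5 * (v - 2.0) ** 2)
          for v in v_grid]
n, u = moments_1d(v_grid, f_grid)
```

The real DES moment integrals are, of course, three-dimensional in energy and angle; this only shows the quadrature idea being compared against the spherical harmonic and SVD methods.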

  9. Accuracy of Root ZX in teeth with simulated root perforation in the presence of gel or liquid type endodontic irrigant

    PubMed Central

    Shin, Hyeong-Soon; Yang, Won-Kyung; Kim, Mi-Ri; Ko, Hyun-Jung; Cho, Kyung-Mo; Park, Se-Hee

    2012-01-01

    Objectives To evaluate the accuracy of the Root ZX in teeth with simulated root perforation in the presence of gel or liquid type endodontic irrigants, such as saline, 5.25% sodium hypochlorite (NaOCl), 2% chlorhexidine liquid, 2% chlorhexidine gel, and RC-Prep, and also to determine the electrical conductivities of these endodontic irrigants. Materials and Methods A root perforation was simulated on twenty freshly extracted teeth by means of a small perforation made on the proximal surface of the root at 4 mm from the anatomic apex. Root ZX was used to locate root perforation and measure the electronic working lengths. The results obtained were compared with the actual working length (AWL) and the actual location of perforations (AP), allowing tolerances of 0.5 or 1.0 mm. Measurements within these limits were considered as acceptable. Chi-square test or the Fisher's exact test was used to evaluate significance. Electrical conductivities of each irrigant were also measured with an electrical conductivity tester. Results The accuracies of the Root ZX in perforated teeth were significantly different between liquid types (saline, NaOCl) and gel types (chlorhexidine gel, RC-Prep). The accuracies of electronic working lengths in perforated teeth were higher in gel types than in liquid types. The accuracy in locating root perforation was higher in liquid types than gel types. 5.25% NaOCl had the highest electrical conductivity, whereas 2% chlorhexidine gel and RC-Prep gel had the lowest electrical conductivities among the five irrigants. Conclusions Different canal irrigants with different electrical conductivities may affect the accuracy of the Root ZX in perforated teeth. PMID:23431125

  10. Simulation Results Related to Stochastic Electrodynamics

    NASA Astrophysics Data System (ADS)

    Cole, Daniel C.

    2006-01-01

Stochastic electrodynamics (SED) is a classical theory of nature advanced significantly in the 1960s by Trevor Marshall and Timothy Boyer. Since then, SED has continued to be investigated by a very small group of physicists. Early investigations seemed promising, as SED was shown to agree with quantum mechanics (QM) and quantum electrodynamics (QED) for a few linear systems. In particular, agreement was found for the simple harmonic electric dipole oscillator, physical systems composed of such oscillators and interacting electromagnetically, and free electromagnetic fields with boundary conditions imposed such as would enter into Casimir-type force calculations. These results were found to hold for both zero-point and non-zero temperature conditions. However, by the late 1970s and then into the early 1980s, researchers found that when investigating nonlinear systems, SED did not appear to provide agreement with the predictions of QM and QED. A proposed reason for this disagreement was advocated by Boyer and Cole that such nonlinear systems are not sufficiently realistic for describing atomic and molecular physical systems, which should be fundamentally based on the Coulombic binding potential. Analytic attempts on these systems have proven to be most difficult. Consequently, in recent years more attention has been placed on numerically simulating the interaction of a classical electron in a Coulombic binding potential, with classical electromagnetic radiation acting on the classical electron. Good agreement was found for this numerical simulation work as compared with predictions from QM. Here this work is reviewed and possible directions are discussed. Recent simulation work involving subharmonic resonances for the classical hydrogen atom is also discussed; some of the properties of these subharmonic resonances seem quite interesting and unusual.

  11. Results of a remote multiplexer/digitizer unit accuracy and environmental study

    NASA Technical Reports Server (NTRS)

    Wilner, D. O.

    1977-01-01

    A remote multiplexer/digitizer unit (RMDU), a part of the airborne integrated flight test data system, was subjected to an accuracy study. The study was designed to show the effects of temperature, altitude, and vibration on the RMDU. The RMDU was subjected to tests at temperatures from -54 C (-65 F) to 71 C (160 F), and the resulting data are presented here, along with a complete analysis of the effects. The methods and means used for obtaining correctable data and correcting the data are also discussed.

  12. Creating a Standard Set of Metrics to Assess Accuracy of Solar Forecasts: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Banunarayanan, V.; Brockway, A.; Marquis, M.; Haupt, S. E.; Brown, B.; Fowler, T.; Jensen, T.; Hamann, H.; Lu, S.; Hodge, B.; Zhang, J.; Florita, A.

    2013-12-01

The U.S. Department of Energy (DOE) SunShot Initiative, launched in 2011, seeks to reduce the cost of solar energy systems by 75% from 2010 to 2020. In support of the SunShot Initiative, the DOE Office of Energy Efficiency and Renewable Energy (EERE) is partnering with the National Oceanic and Atmospheric Administration (NOAA) and solar energy stakeholders to improve solar forecasting. Through a funding opportunity announcement issued in April 2012, DOE is funding two teams, led by the National Center for Atmospheric Research (NCAR) and by IBM, to perform three key activities in order to improve solar forecasts. The teams will: (1) With DOE and NOAA's leadership and significant stakeholder input, develop a standardized set of metrics to evaluate forecast accuracy, and determine the baseline and target values for these metrics; (2) Conduct research that yields a transformational improvement in weather models and methods for forecasting solar irradiance and power; and (3) Incorporate solar forecasts into the system operations of the electric power grid, and evaluate the impact of forecast accuracy on the economics and reliability of operations using the defined, standard metrics. This paper will present preliminary results on the first activity: the development of a standardized set of metrics, baselines and target values. The results will include a proposed framework for metrics development, key categories of metrics, descriptions of each of the proposed set of specific metrics to measure forecast accuracy, feedback gathered from a range of stakeholders on the metrics, and processes to determine baselines and target values for each metric. The paper will also analyze the temporal and spatial resolutions under which these metrics would apply, and conclude with a summary of the work in progress on solar forecasting activities funded by DOE.
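The standardized metric set described above is not reproduced in this abstract, but error metrics commonly used for irradiance forecasts (mean bias, mean absolute error, root mean square error) give the flavor. The sketch below uses invented sample values purely for demonstration:

```python
import math

def forecast_metrics(forecast, observed):
    """Common point-forecast accuracy metrics for paired series."""
    errors = [f - o for f, o in zip(forecast, observed)]
    n = len(errors)
    return {
        "MBE": sum(errors) / n,                           # mean bias error
        "MAE": sum(abs(e) for e in errors) / n,           # mean absolute error
        "RMSE": math.sqrt(sum(e * e for e in errors) / n),
    }

# Hourly global irradiance in W/m^2 (fabricated example values)
m = forecast_metrics([480, 510, 620, 700], [500, 505, 600, 720])
```

A full framework like the one the paper proposes would add, e.g., skill scores against a persistence baseline and ramp-event metrics, evaluated at the specific temporal and spatial resolutions the paper discusses.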

  13. Accuracy of the Frensley inflow boundary condition for Wigner equations in simulating resonant tunneling diodes

    SciTech Connect

    Jiang Haiyan; Cai Wei; Tsu, Raphael

    2011-03-01

In this paper, the accuracy of the Frensley inflow boundary condition of the Wigner equation is analyzed in computing the I-V characteristics of a resonant tunneling diode (RTD). It is found that the Frensley inflow boundary condition for incoming electrons holds exactly only infinitely far away from the active device region, and its accuracy depends on the length of the contacts included in the simulation. For this study, the non-equilibrium Green's function (NEGF) with a Dirichlet to Neumann mapping boundary condition is used for comparison. The I-V characteristics of the RTD are found to agree between self-consistent NEGF and Wigner methods at low bias potentials with sufficiently large GaAs contact lengths. Finally, the relation between the negative differential conductance (NDC) of the RTD and the sizes of contact and buffer in the RTD is investigated using both methods.

  14. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

The characterization of the ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24-h data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the hardware delay bias from the receiver, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are bigger, while the STDs (standard deviations) are better than 0.11 m. When the satellite difference is used, the hardware delay bias can be canceled. The interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the centimeter level.
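The cancellation the abstract relies on is simple to see: every per-satellite delay estimate carries the same receiver hardware bias, so differencing between two satellites removes it. A toy sketch with invented numbers and satellite labels:

```python
# Receiver hardware delay bias, common to every satellite (assumed value)
receiver_bias = 1.7  # metres

# Hypothetical true slant ionosphere delays per satellite (metres)
true_iono = {"G05": 3.20, "G12": 4.85}

# Each PPP-estimated delay absorbs the common receiver bias
estimated = {sat: d + receiver_bias for sat, d in true_iono.items()}

# Between-satellite (single) difference: the bias cancels exactly
sd_estimated = estimated["G12"] - estimated["G05"]
sd_true = true_iono["G12"] - true_iono["G05"]
```

This is why the satellite-differenced delays interpolate to better than 0.11 m while the undifferenced estimates disagree with the interpolated values.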

  15. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    PubMed

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies on evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross-correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting the position and shape accuracy respectively, are first identified. Consequently, two quantitative indices, i.e., the GVE (group velocity error) and MACCC (maximum absolute value of cross-correlation coefficient), derived from cross-correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in position and shape is quantitatively evaluated. In order to apply this proposed method to select appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. Then, the proper element size considering different element types and the proper time step considering different time integration schemes are selected. These results show that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation.
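A minimal sketch of the shape index described above: the maximum absolute normalized cross-correlation coefficient between a simulated signal and a reference waveform, together with the lag at which it occurs (the lag is what a group-velocity error would be derived from). The brute-force search and the toy signals are invented for illustration, not the paper's implementation:

```python
import math

def maccc_and_lag(sim, ref):
    """Return (max |normalized cross-correlation|, lag in samples)."""
    def demean(x):
        m = sum(x) / len(x)
        return [v - m for v in x]
    s, r = demean(sim), demean(ref)
    es = math.sqrt(sum(v * v for v in s))  # signal energies for normalization
    er = math.sqrt(sum(v * v for v in r))
    best, best_lag = 0.0, 0
    for lag in range(-len(r) + 1, len(s)):
        c = sum(s[i] * r[i - lag]
                for i in range(max(0, lag), min(len(s), len(r) + lag)))
        c = abs(c) / (es * er)
        if c > best:
            best, best_lag = c, lag
    return best, best_lag

ref = [0, 1, 2, 1, 0, -1, -2, -1, 0, 0, 0, 0]
sim = [0, 0, 0, 0, 1, 2, 1, 0, -1, -2, -1, 0]  # same shape, delayed 3 samples
coeff, lag = maccc_and_lag(sim, ref)
```

A coefficient near 1 indicates the simulated waveform shape matches the reference; a nonzero lag indicates an arrival-time (position) error of the kind the GVE quantifies.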

  16. Medical Simulation Practices 2010 Survey Results

    NASA Technical Reports Server (NTRS)

    McCrindle, Jeffrey J.

    2011-01-01

Medical Simulation Centers are an essential component of our learning infrastructure to prepare doctors and nurses for their careers. Unlike the military and aerospace simulation industry, very little has been published regarding the best practices currently in use within medical simulation centers. This survey attempts to provide insight into the current simulation practices at medical schools, hospitals, university nursing programs and community college nursing programs. Students within the MBA program at Saint Joseph's University conducted a survey of medical simulation practices during the summer 2010 semester. A total of 115 institutions responded to the survey. The survey results discuss the overall effectiveness of current simulation centers as well as the tools and techniques used to conduct the simulation activity.

  17. The results of the campaign for evaluating sphygmomanometers accuracy and their physical conditions

    PubMed

    Mion; Pierin; Alavarce; Vasconcellos

    2000-01-01

OBJECTIVE: To evaluate the sphygmomanometers calibration accuracy and the physical conditions of the cuff-bladder, bulb, pump, and valve. METHODS: Six hundred and forty-five aneroid sphygmomanometers were evaluated, 521 used in private practice and 124 used in hospitals. Aneroid manometers were tested against a properly calibrated mercury manometer and were considered calibrated when the error was ≤ 3 mm Hg. The physical conditions of the cuff-bladders, bulbs, pumps, and valves were also evaluated. RESULTS: Of the aneroid sphygmomanometers tested, 51% of those used in private practice and 56% of those used in hospitals were found to be not accurately calibrated. Of these, the magnitude of inaccuracy ranged from 4 to 8 mm Hg in 70% and 51% of the devices, respectively. The problems found in the cuff-bladders, bulbs, pumps, and valves of the private practice and hospital devices were bladder damage (34% vs. 21%, respectively), holes/leaks in the bulbs (22% vs. 4%, respectively), and rubber aging (15% vs. 12%, respectively). Of the devices tested, 72% revealed at least one problem interfering with blood pressure measurement accuracy. CONCLUSION: Most of the manometers evaluated, whether used in private practice or in hospitals, were found to be inaccurate and unreliable, and their use may jeopardize the diagnosis and treatment of arterial hypertension.

  19. Results of 17 Independent Geopositional Accuracy Assessments of Earth Satellite Corporation's GeoCover Landsat Thematic Mapper Imagery. Geopositional Accuracy Validation of Orthorectified Landsat TM Imagery: Northeast Asia

    NASA Technical Reports Server (NTRS)

    Smith, Charles M.

    2003-01-01

    This report provides results of an independent assessment of the geopositional accuracy of the Earth Satellite (EarthSat) Corporation's GeoCover, Orthorectified Landsat Thematic Mapper (TM) imagery over Northeast Asia. This imagery was purchased through NASA's Earth Science Enterprise (ESE) Scientific Data Purchase (SDP) program.

  20. Summarising and validating test accuracy results across multiple studies for use in clinical practice.

    PubMed

    Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J

    2015-06-15

    Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV.
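The second strategy above, tailoring post-test probabilities to the prevalence in a new population, is a direct application of Bayes' theorem to summary sensitivity and specificity. A minimal sketch with invented summary values (not the paper's clinical examples):

```python
def post_test_probabilities(sens, spec, prev):
    """Bayes' theorem: PPV and NPV from sensitivity, specificity,
    and the disease prevalence in the target population."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Hypothetical meta-analysis summary accuracy, applied at 10% prevalence
ppv, npv = post_test_probabilities(sens=0.90, spec=0.80, prev=0.10)
```

Because PPV and NPV shift with prevalence, the same summary sensitivity and specificity can calibrate well in one population and poorly in another, which is the motivation for the cross-validation approach the authors propose.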

  1. SALTSTONE MATRIX CHARACTERIZATION AND STADIUM SIMULATION RESULTS

    SciTech Connect

    Langton, C.

    2009-07-30

SIMCO Technologies, Inc. was contracted to evaluate the durability of the saltstone matrix material and to measure saltstone transport properties. This information will be used to: (1) Parameterize the STADIUM® service life code, (2) Predict the leach rate (degradation rate) for the saltstone matrix over 10,000 years using the STADIUM® concrete service life code, and (3) Validate the modeled results by conducting leaching (water immersion) tests. Saltstone durability for this evaluation is limited to changes in the matrix itself and does not include changes in the chemical speciation of the contaminants in the saltstone. This report summarizes results obtained to date, which include characterization data for saltstone cured up to 365 days and characterization of saltstone cured for 137 days and immersed in water for 31 days. Chemicals for preparing simulated non-radioactive salt solution were obtained from chemical suppliers. The saltstone slurry was mixed according to directions provided by SRNL; however, SIMCO Technologies, Inc. personnel made a mistake in the premix proportions. Instead of the reference mix proportions of 45 wt% slag, 45 wt% fly ash, and 10 wt% cement, they used 21 wt% slag, 65 wt% fly ash, and 14 wt% cement. The mistake was acknowledged, and new mixes have been prepared and are curing. Because the SIMCO mixes were deficient in slag, which is very reactive in the caustic salt solution, and contained excess fly ash, the results presented in this report are expected to be conservative.
The hydraulic reactivity of slag is about four times that of fly ash so the amount of hydrated binder formed per unit volume in the SIMCO saltstone samples is

  2. Exploring Space Physics Concepts Using Simulation Results

    NASA Astrophysics Data System (ADS)

    Gross, N. A.

    2008-05-01

The Center for Integrated Space Weather Modeling (CISM), a Science and Technology Center (STC) funded by the National Science Foundation, has the goal of developing a suite of integrated physics-based computer models of the space environment that can follow the evolution of a space weather event from the Sun to the Earth. In addition to the research goals, CISM is also committed to training the next generation of space weather professionals who are imbued with a system view of space weather. This view should include an understanding of both heliospheric and geospace phenomena. To this end, CISM offers a yearly Space Weather Summer School targeted to first-year graduate students, although advanced undergraduates and space weather professionals have also attended. This summer school uses a number of innovative pedagogical techniques, including devoting each afternoon to computer lab exercises that use results from research-quality simulations and visualization techniques, along with ground-based and satellite data, to explore concepts introduced during the morning lectures. These labs are suitable for use in a wide variety of educational settings, from formal classroom instruction to outreach programs. The goal of this poster is to outline the goals and content of the lab materials so that instructors may evaluate their potential use in the classroom or other settings.

  3. Accuracy of relative positioning by interferometry with GPS Double-blind test results

    NASA Technical Reports Server (NTRS)

    Counselman, C. C., III; Gourevitch, S. A.; Herring, T. A.; King, B. W.; Shapiro, I. I.; Cappallo, R. J.; Rogers, A. E. E.; Whitney, A. R.; Greenspan, R. L.; Snyder, R. E.

    1983-01-01

MITES (Miniature Interferometer Terminals for Earth Surveying) observations conducted on December 17 and 29, 1980, are analyzed. It is noted that the time span of the observations used on each day was 78 minutes, during which five satellites were always above 20 deg elevation. The observations are analyzed to determine the intersite position vectors by means of the algorithm described by Counselman and Gourevitch (1981). The average of the MITES results from the two days is presented. The rms differences between the two determinations of the components of the three vectors, which were about 65, 92, and 124 m long, were 8 mm for the north, 3 mm for the east, and 6 mm for the vertical. It is concluded that, at least for short distances, relative positioning by interferometry with GPS can be done reliably with subcentimeter accuracy.

  4. Effects of experimental protocol on global vegetation model accuracy: a comparison of simulated and observed vegetation patterns for Asia

    USGS Publications Warehouse

Tang, Guoping; Shafer, Sarah L.; Bartlein, Patrick J.; Holman, Justin O.

    2009-01-01

    Prognostic vegetation models have been widely used to study the interactions between environmental change and biological systems. This study examines the sensitivity of vegetation model simulations to: (i) the selection of input climatologies representing different time periods and their associated atmospheric CO2 concentrations, (ii) the choice of observed vegetation data for evaluating the model results, and (iii) the methods used to compare simulated and observed vegetation. We use vegetation simulated for Asia by the equilibrium vegetation model BIOME4 as a typical example of vegetation model output. BIOME4 was run using 19 different climatologies and their associated atmospheric CO2 concentrations. The Kappa statistic, Fuzzy Kappa statistic and a newly developed map-comparison method, the Nomad index, were used to quantify the agreement between the biomes simulated under each scenario and the observed vegetation from three different global land- and tree-cover data sets: the global Potential Natural Vegetation data set (PNV), the Global Land Cover Characteristics data set (GLCC), and the Global Land Cover Facility data set (GLCF). The results indicate that the 30-year mean climatology (and its associated atmospheric CO2 concentration) for the time period immediately preceding the collection date of the observed vegetation data produce the most accurate vegetation simulations when compared with all three observed vegetation data sets. The study also indicates that the BIOME4-simulated vegetation for Asia more closely matches the PNV data than the other two observed vegetation data sets. Given the same observed data, the accuracy assessments of the BIOME4 simulations made using the Kappa, Fuzzy Kappa and Nomad index map-comparison methods agree well when the compared vegetation types consist of a large number of spatially continuous grid cells. The results of this analysis can assist model users in designing experimental protocols for simulating vegetation.
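Of the three map-comparison methods named above, the Kappa statistic is the simplest: cell-by-cell categorical agreement corrected for the agreement expected by chance. A minimal sketch over flattened map grids (the tiny example maps and biome labels are invented):

```python
def cohens_kappa(map_a, map_b):
    """Cohen's Kappa for two equal-length categorical maps."""
    n = len(map_a)
    observed = sum(a == b for a, b in zip(map_a, map_b)) / n
    categories = set(map_a) | set(map_b)
    # Chance agreement: product of marginal category frequencies
    expected = sum((map_a.count(c) / n) * (map_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

sim = ["forest", "forest", "steppe", "desert", "steppe", "forest"]
obs = ["forest", "steppe", "steppe", "desert", "steppe", "forest"]
kappa = cohens_kappa(sim, obs)
```

The Fuzzy Kappa and Nomad index extend this idea by crediting near-matches in space or category, which is why the study reports that the three methods agree best where vegetation types form large, spatially continuous blocks of grid cells.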

  5. Technical Highlight: NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools

    SciTech Connect

    Ridouane, E.H.

    2012-01-01

    This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes.

  6. Balancing simulation accuracy and efficiency with the Amber united atom force field.

    PubMed

    Hsieh, Meng-Juei; Luo, Ray

    2010-03-04

We have analyzed the quality of a recently proposed Amber united-atom model and its overall efficiency in ab initio folding and thermodynamic sampling of two stable beta-hairpins. It is found that the mean backbone structures are quite consistent between simulations in the united-atom model and its corresponding all-atom model in Amber. More importantly, the simulated beta turns are also consistent between the two models. Finally, the chemical shifts on H-alpha are highly consistent between simulations in the two models, although the simulated chemical shifts are lower than experiment, indicating less structured peptides, probably due to the omission of the hydrophobic term in the simulations. More interestingly, the stabilities of both beta-hairpins at room temperature are similar to those derived from the NMR measurements, whether the united-atom or the all-atom model is used. Detailed analysis shows high percentages of backbone torsion angles within the beta region and high percentages of native contacts. Given the reasonable quality of the united-atom model with respect to experimental data, we further studied the simulation efficiency of the united-atom model over the all-atom model. Our data show that the united-atom model is a factor of 6-8 faster than the all-atom model, as measured by the ab initio first-pass folding time for the two tested beta-hairpins. Detailed structural analysis shows that all ab initio folded trajectories enter the native basin, whether the united-atom model or the all-atom model is used. Finally, we also studied the simulation efficiency of the united-atom model as measured by how fast thermodynamic convergence can be achieved. It is apparent that the united-atom simulations reach convergence faster than the all-atom simulations with respect to both mean potential energies and mean native contacts. These findings show that the efficiency of the united-atom model is clearly beyond the per-step dynamics simulation

  7. NREL Evaluates Thermal Performance of Uninsulated Walls to Improve Accuracy of Building Energy Simulation Tools (Fact Sheet)

    SciTech Connect

    Not Available

    2012-03-01

NREL researchers discover ways to increase accuracy in building energy simulation tools to improve predictions of potential energy savings in homes. Uninsulated walls are typical in older U.S. homes where the wall cavities were not insulated during construction or where the insulating material has settled. Researchers at the National Renewable Energy Laboratory (NREL) are investigating ways to more accurately calculate heat transfer through building enclosures to verify the benefit of energy efficiency upgrades that reduce energy use in older homes. In this study, scientists used computational fluid dynamics (CFD) analysis to calculate the energy loss/gain through building walls and visualize different heat transfer regimes within the uninsulated cavities. The effects of ambient outdoor temperature, the radiative properties of building materials, insulation levels, and the temperature dependence of conduction through framing members were considered. The research showed that the temperature dependence of conduction through framing members dominated the differences between this study and previous results - an effect not accounted for in existing building energy simulation tools. The study provides correlations for the resistance of the uninsulated assemblies that can be implemented into building simulation tools to increase the accuracy of energy use estimates in older homes, which are currently over-predicted.

  8. Accuracy of three different electronic apex locators in detecting simulated horizontal and vertical root fractures.

    PubMed

    Ebrahim, Aqeel K; Wadachi, Reiko; Suda, Hideaki

    2006-08-01

The aim of this in vitro study was to evaluate the accuracy of three electronic apex locators (EALs): Root ZX, Foramatron D10 and Apex NRG, in the detection of fractures in teeth with simulated horizontal and vertical root fractures. A total of 90 extracted intact, straight, single-rooted teeth were divided into six groups of 15 teeth each. In Groups A, B and C, an incomplete horizontal fracture was simulated by preparing a horizontal incision in the coronal, middle or apical portion of the root, respectively, until the circumferential half of the canal was exposed in the horizontal plane. In Groups D, E and F, an incomplete vertical root fracture was simulated by preparing a vertical straight incision in the coronal, middle or apical portion of the root, respectively, exposing the canal along the full length of the longitudinal plane. The simulated fractures were 0.25 mm in thickness in all groups. The teeth were embedded in 1% agar and the canals were irrigated with saline solution during electronic measurement. Detection of the simulated root fractures was established with a size 10 K-file when the meter value reached 'APEX' on each EAL. In Groups A, B and C, Kruskal-Wallis tests revealed no statistically significant differences between the three EALs. However, statistically significant differences were found among the EALs in Groups D, E and F (P < 0.0001, one-way ANOVA and Tukey's post hoc test). In conclusion, the three EALs tested were accurate and acceptable clinical tools for the detection of horizontal root fractures. However, the three EALs were unreliable in detecting the position of vertical root fractures.

  9. The accuracy of prostate volume measurement from ultrasound images: a quasi-Monte Carlo simulation study using magnetic resonance imaging.

    PubMed

    Azulay, David-Olivier D; Murphy, Philip; Graham, Jim

    2013-01-01

    Prostate volume is an important parameter to guide management of patients with benign prostatic hyperplasia (BPH) and to deliver clinical trial endpoints. Generally, simple 2D ultrasound (US) approaches are favoured despite the potential for greater accuracy afforded by magnetic resonance imaging (MRI) or complex US procedures. In this study, different approaches to estimate prostate size are evaluated with a simulation to select multiple organ cross-sections and diameters from 22 MRI-defined prostate shapes. A quasi-Monte Carlo (qMC) approach is used to simulate multiple probe positions and angles within prescribed limits resulting in a range of dimensions. The basic ellipsoid calculation which uses two scanning planes compares well to the MRI volume across the range of prostate shapes and sizes (R=0.992). However, using an appropriate linear regression model, accurate volume estimates can be made using prostate diameters calculated from a single scanning plane.
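The two-plane ellipsoid calculation referenced above is conventionally the prolate-ellipsoid formula V = (π/6)·L·W·H applied to three orthogonal prostate diameters. A minimal sketch of that standard formula (the study's single-plane regression coefficients are not given in the abstract, so only the basic calculation is shown; the example diameters are illustrative):

```python
import math

def ellipsoid_volume(length_mm, width_mm, height_mm):
    """Prolate-ellipsoid prostate volume estimate: V = (pi/6) * L * W * H."""
    return math.pi / 6.0 * length_mm * width_mm * height_mm

# Illustrative gland measuring 40 x 35 x 30 mm
vol_mm3 = ellipsoid_volume(40.0, 35.0, 30.0)
print(round(vol_mm3 / 1000.0, 1))  # volume in cm^3 -> 22.0
```

The π/6 factor is why clinical software often approximates the product of the three diameters multiplied by 0.52.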

  10. How well do people recall risk factor test results? Accuracy and bias among cholesterol screening participants.

    PubMed

    Croyle, Robert T; Loftus, Elizabeth F; Barger, Steven D; Sun, Yi-Chun; Hart, Marybeth; Gettig, JoAnn

    2006-05-01

    The authors conducted a community-based cholesterol screening study to examine accuracy of recall for self-relevant health information in long-term autobiographical memory. Adult community residents (N = 496) were recruited to participate in a laboratory-based cholesterol screening and were also provided cholesterol counseling in accordance with national guidelines. Participants were subsequently interviewed 1, 3, or 6 months later to assess their memory for their test results. Participants recalled their exact cholesterol levels inaccurately (38.0% correct) but their cardiovascular risk category comparatively well (88.7% correct). Recall errors showed a systematic bias: Individuals who received the most undesirable test results were most likely to remember their cholesterol scores and cardiovascular risk categories as lower (i.e., healthier) than those actually received. Recall bias was unrelated to age, education, knowledge, self-rated health status, and self-reported efforts to reduce cholesterol. The findings provide evidence that recall of self-relevant health information is susceptible to self-enhancement bias.

  11. Ultrasonic noninvasive temperature estimation using echoshift gradient maps: simulation results.

    PubMed

    Techavipoo, Udomchai; Chen, Quan; Varghese, Tomy

    2005-07-01

Percutaneous ultrasound-image-guided radiofrequency (rf) ablation is an effective treatment for patients with hepatic malignancies who are excluded from surgical resection due to other complications. However, ablated regions are not clearly differentiated from normal untreated regions using conventional ultrasound imaging, due to similar echogenic tissue properties. In this paper, we investigate the statistics that govern the relationship between temperature elevation and the corresponding temperature map obtained from the gradient of the echoshifts between consecutive ultrasound radiofrequency signals. A relationship derived using experimental data on the sound speed and tissue expansion variations measured on canine liver tissue samples at different elevated temperatures is utilized to generate simulated ultrasound radiofrequency data. The simulated data set is then utilized to statistically estimate the accuracy and precision of the temperature distributions obtained. The results show that temperature increases between 37 and 67 °C can be estimated with standard deviations of ±3 °C. Our results also indicate that the correlation coefficient between consecutive radiofrequency signals should be greater than 0.85 to obtain accurate temperature estimates.
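The 0.85 threshold quoted above refers to the normalized correlation coefficient between consecutive RF signals. A minimal sketch of that quality gate (the study's windowing and echoshift-tracking details are not reproduced here; this is only the standard zero-mean normalized correlation):

```python
import numpy as np

def rf_correlation(sig_a, sig_b):
    """Normalized (zero-mean) correlation coefficient between two RF A-lines."""
    a = np.asarray(sig_a, dtype=float)
    b = np.asarray(sig_b, dtype=float)
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def estimate_is_trustworthy(sig_a, sig_b, threshold=0.85):
    """Per the simulation results, accept echoshift-based temperature
    estimates only when consecutive signals correlate above the threshold."""
    return rf_correlation(sig_a, sig_b) > threshold
```

Identical signals give a coefficient of 1.0; decorrelation from motion or bubble formation pushes the value below the threshold and flags the estimate as unreliable.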

  12. Accuracy of three-dimensional facial soft tissue simulation in post-traumatic zygoma reconstruction.

    PubMed

    Li, P; Zhou, Z W; Ren, J Y; Zhang, Y; Tian, W D; Tang, W

    2016-12-01

The aim of this study was to evaluate the accuracy of the novel software CMF-preCADS for the prediction of soft tissue changes following repositioning surgery for zygomatic fractures. Twenty patients who had sustained an isolated zygomatic fracture accompanied by facial deformity and who were treated with repositioning surgery participated in this study. Cone beam computed tomography (CBCT) scans and three-dimensional (3D) stereophotographs were acquired preoperatively and postoperatively. The 3D skeletal model from the preoperative CBCT data was matched with the postoperative one, and the fractured zygomatic fragments were segmented and aligned to the postoperative position for prediction. The predicted model was then matched with the postoperative 3D stereophotograph to quantify the simulation error. The mean absolute error in the zygomatic soft tissue region between the predicted model and the real one was 1.42 ± 1.56 mm for all cases. The accuracy of the prediction (mean absolute error ≤ 2 mm) was 87%. In the subjective assessment, the majority of evaluators considered the predicted model and the postoperative model to be 'very similar'. CMF-preCADS software can provide a realistic, accurate prediction of the facial soft tissue appearance after repositioning surgery for zygomatic fractures. The reliability of this software for other types of repositioning surgery for maxillofacial fractures should be validated in the future.

  13. Technical Note: Maximising accuracy and minimising cost of a potentiometrically regulated ocean acidification simulation system

    NASA Astrophysics Data System (ADS)

    MacLeod, C. D.; Doyle, H. L.; Currie, K. I.

    2014-05-01

    This article describes a potentiometric ocean acidification simulation system which automatically regulates pH through the injection of 100% CO2 gas into temperature-controlled seawater. The system is ideally suited to long-term experimental studies of the effect of acidification on biological processes involving small-bodied (10-20 mm) calcifying or non-calcifying organisms. Using hobbyist grade equipment, the system was constructed for approximately USD 1200 per treatment unit (tank, pH regulation apparatus, chiller, pump/filter unit). An overall accuracy of ±0.05 pHT units (SD) was achieved over 90 days in two acidified treatments (7.60 and 7.40) at 12 °C using glass electrodes calibrated with salt water buffers, thereby preventing liquid junction error. The accuracy of the system was validated through the independent calculation of pHT (12 °C) using dissolved inorganic carbon (DIC) and total alkalinity (AT) data taken from discrete acidified seawater samples. The system was used to compare the shell growth of the marine gastropod Zeacumantus subcarinatus infected with the trematode parasite Maritrema novaezealandensis with that of uninfected snails, at pH levels of 7.4, 7.6, and 8.1.
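The potentiometric regulation described above amounts to a feedback loop that injects CO2 when the measured pH drifts above the acidified setpoint. A minimal bang-bang sketch, assuming a hypothetical valve interface and an illustrative deadband (the article's actual control logic, electrode interface, and tolerances may differ):

```python
def co2_valve_command(measured_ph, setpoint_ph, deadband=0.02):
    """Bang-bang pH regulation sketch for a CO2-injection system.

    Injecting CO2 lowers seawater pH, so the valve opens when the
    measured pH rises above the setpoint by more than the deadband.
    """
    if measured_ph > setpoint_ph + deadband:
        return "OPEN"    # pH too high: inject CO2 to re-acidify
    if measured_ph < setpoint_ph - deadband:
        return "CLOSED"  # pH too low: stop injection, let gas exchange recover
    return "HOLD"        # within tolerance: leave valve state unchanged

# Illustrative readings against the 7.60 treatment setpoint
print(co2_valve_command(7.65, 7.60))  # OPEN
print(co2_valve_command(7.55, 7.60))  # CLOSED
```

The deadband prevents the solenoid from chattering around the setpoint; the ±0.05 pHT accuracy reported in the study constrains how wide such a band can be in practice.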

  14. First experimental results of very high accuracy centroiding measurements for the neat astrometric mission

    NASA Astrophysics Data System (ADS)

    Crouzier, A.; Malbet, F.; Preis, O.; Henault, F.; Kern, P.; Martin, G.; Feautrier, P.; Stadler, E.; Lafrasse, S.; Delboulbé, A.; Behar, E.; Saint-Pe, M.; Dupont, J.; Potin, S.; Cara, C.; Donati, M.; Doumayrou, E.; Lagage, P. O.; Léger, A.; LeDuigou, J. M.; Shao, M.; Goullioud, R.

    2013-09-01

NEAT is an astrometric mission proposed to ESA with the objective of detecting Earth-like exoplanets in the habitable zones of nearby solar-type stars. NEAT requires the capability to measure stellar centroids at a precision of 5e-6 pixel. Current state-of-the-art methods for centroid estimation have reached a precision of about 2e-5 pixel at two times Nyquist sampling; this was shown at JPL by the VESTA experiment, where a metrology system was used to calibrate intra- and inter-pixel quantum efficiency variations in order to correct pixelation errors. The European part of the NEAT consortium is building a testbed in vacuum in order to achieve 5e-6 pixel precision for the centroid estimation. The goal is to provide a proof of concept for the precision requirement of the NEAT spacecraft. In this paper we present the metrology and pseudo-stellar-source sub-systems, a performance model and an error budget of the experiment, and the present status of the demonstration. Finally, we also present our first results: the experiment had first light in July 2013 and a first set of data was taken in air. Analysis of this first data set showed that we can already measure the pixel positions with an accuracy of about 1e-4 pixel.
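The centroid estimates discussed above start from an intensity-weighted center of mass over the pixel grid; reaching the quoted 1e-4 to 5e-6 pixel levels then requires the metrology-based calibration of pixel-response variations, which is beyond this sketch. A minimal baseline estimator:

```python
import numpy as np

def centroid(image):
    """Intensity-weighted center of mass of a 2-D image, in pixel coordinates."""
    image = np.asarray(image, dtype=float)
    total = image.sum()
    ys, xs = np.indices(image.shape)
    return (float((ys * image).sum() / total),
            float((xs * image).sum() / total))

# Symmetric spot centred on pixel (row=2, col=3)
img = np.zeros((5, 7))
img[2, 3] = 4.0
img[1, 3] = img[3, 3] = img[2, 2] = img[2, 4] = 1.0
print(centroid(img))  # (2.0, 3.0)
```

On real detector data, uncorrected intra- and inter-pixel quantum efficiency variations bias this simple estimator, which is exactly what the metrology sub-system is designed to calibrate out.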

  15. Simulation Results for Airborne Precision Spacing along Continuous Descent Arrivals

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.; Baxley, Brian T.

    2008-01-01

    This paper describes the results of a fast-time simulation experiment and a high-fidelity simulator validation with merging streams of aircraft flying Continuous Descent Arrivals through generic airspace to a runway at Dallas-Ft Worth. Aircraft made small speed adjustments based on an airborne-based spacing algorithm, so as to arrive at the threshold exactly at the assigned time interval behind their Traffic-To-Follow. The 40 aircraft were initialized at different altitudes and speeds on one of four different routes, and then merged at different points and altitudes while flying Continuous Descent Arrivals. This merging and spacing using flight deck equipment and procedures to augment or implement Air Traffic Management directives is called Flight Deck-based Merging and Spacing, an important subset of a larger Airborne Precision Spacing functionality. This research indicates that Flight Deck-based Merging and Spacing initiated while at cruise altitude and well prior to the Terminal Radar Approach Control entry can significantly contribute to the delivery of aircraft at a specified interval to the runway threshold with a high degree of accuracy and at a reduced pilot workload. Furthermore, previously documented work has shown that using a Continuous Descent Arrival instead of a traditional step-down descent can save fuel, reduce noise, and reduce emissions. Research into Flight Deck-based Merging and Spacing is a cooperative effort between government and industry partners.

  16. Accuracy of the unified approach in maternally influenced traits - illustrated by a simulation study in the honey bee (Apis mellifera)

    PubMed Central

    2013-01-01

Background The honey bee is an economically important species. With a rapid decline of the honey bee population, it is necessary to implement an improved genetic evaluation methodology. In this study, we investigated the applicability of the unified approach and its impact on the accuracy of estimation of breeding values for maternally influenced traits on a simulated dataset for the honey bee. Due to the limitation to the number of individuals that can be genotyped in a honey bee population, the unified approach can be an efficient strategy to increase the genetic gain and to provide a more accurate estimation of breeding values. We calculated the accuracy of estimated breeding values for two evaluation approaches, the unified approach and the traditional pedigree based approach. We analyzed the effects of different heritabilities as well as genetic correlation between direct and maternal effects on the accuracy of estimation of direct, maternal and overall breeding values (sum of maternal and direct breeding values). The genetic and reproductive biology of the honey bee was accounted for by taking into consideration characteristics such as colony structure, uncertain paternity, overlapping generations and polyandry. In addition, we used a modified numerator relationship matrix and a realistic genome for the honey bee. Results For all values of heritability and correlation, the accuracy of overall estimated breeding values increased significantly with the unified approach. The increase in accuracy was always higher for the case when there was no correlation as compared to the case where a negative correlation existed between maternal and direct effects. Conclusions Our study shows that the unified approach is a useful methodology for genetic evaluation in honey bees, and can contribute immensely to the improvement of traits of apicultural interest such as resistance to Varroa or production and behavioural traits. In particular, the study is of great interest for

  17. Early diagnostic suggestions improve accuracy of GPs: a randomised controlled trial using computer-simulated patients

    PubMed Central

    Kostopoulou, Olga; Rosen, Andrea; Round, Thomas; Wright, Ellen; Douiri, Abdel; Delaney, Brendan

    2015-01-01

    Background Designers of computerised diagnostic support systems (CDSSs) expect physicians to notice when they need advice and enter into the CDSS all information that they have gathered about the patient. The poor use of CDSSs and the tendency not to follow advice once a leading diagnosis emerges would question this expectation. Aim To determine whether providing GPs with diagnoses to consider before they start testing hypotheses improves accuracy. Design and setting Mixed factorial design, where 297 GPs diagnosed nine patient cases, differing in difficulty, in one of three experimental conditions: control, early support, or late support. Method Data were collected over the internet. After reading some initial information about the patient and the reason for encounter, GPs requested further information for diagnosis and management. Those receiving early support were shown a list of possible diagnoses before gathering further information. In late support, GPs first gave a diagnosis and were then shown which other diagnoses they could still not discount. Results Early support significantly improved diagnostic accuracy over control (odds ratio [OR] 1.31; 95% confidence interval [95%CI] = 1.03 to 1.66, P = 0.027), while late support did not (OR 1.10; 95% CI = 0.88 to 1.37). An absolute improvement of 6% with early support was obtained. There was no significant interaction with case difficulty and no effect of GP experience on accuracy. No differences in information search were detected between experimental conditions. Conclusion Reminding GPs of diagnoses to consider before they start testing hypotheses can improve diagnostic accuracy irrespective of case difficulty, without lengthening information search. PMID:25548316

  18. Accuracy Rates of Sex Estimation by Forensic Anthropologists through Comparison with DNA Typing Results in Forensic Casework.

    PubMed

    Thomas, Richard M; Parks, Connie L; Richard, Adam H

    2016-09-01

    A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7% with increasing accuracy rates as more skeletal material is available for analysis and as the education level and certification of the examiner increases. Nine of 19 incorrect assessments resulted from cases in which one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate for these cases.

  19. Speed and Accuracy of Absolute Pitch Judgments: Some Latter-Day Results.

    ERIC Educational Resources Information Center

    Carroll, John B.

    Nine subjects, 5 of whom claimed absolute pitch (AP) ability were instructed to rapidly strike notes on the piano to match randomized tape-recorded piano notes. Stimulus set sizes were 64, 16, or 4 consecutive semitones, or 7 diatonic notes of a designated octave. A control task involved motor movements to notes announced in advance. Accuracy,…

  20. Testing the Accuracy of Data-driven MHD Simulations of Active Region Evolution

    NASA Astrophysics Data System (ADS)

    Leake, James E.; Linton, Mark G.; Schuck, Peter W.

    2017-04-01

    Models for the evolution of the solar coronal magnetic field are vital for understanding solar activity, yet the best measurements of the magnetic field lie at the photosphere, necessitating the development of coronal models which are “data-driven” at the photosphere. We present an investigation to determine the feasibility and accuracy of such methods. Our validation framework uses a simulation of active region (AR) formation, modeling the emergence of magnetic flux from the convection zone to the corona, as a ground-truth data set, to supply both the photospheric information and to perform the validation of the data-driven method. We focus our investigation on how the accuracy of the data-driven model depends on the temporal frequency of the driving data. The Helioseismic and Magnetic Imager on NASA’s Solar Dynamics Observatory produces full-disk vector magnetic field measurements at a 12-minute cadence. Using our framework we show that ARs that emerge over 25 hr can be modeled by the data-driving method with only ∼1% error in the free magnetic energy, assuming the photospheric information is specified every 12 minutes. However, for rapidly evolving features, under-sampling of the dynamics at this cadence leads to a strobe effect, generating large electric currents and incorrect coronal morphology and energies. We derive a sampling condition for the driving cadence based on the evolution of these small-scale features, and show that higher-cadence driving can lead to acceptable errors. Future work will investigate the source of errors associated with deriving plasma variables from the photospheric magnetograms as well as other sources of errors, such as reduced resolution, instrument bias, and noise.
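The sampling condition mentioned above ties the driving cadence to the evolution timescale of the fastest photospheric features; under-sampling produces the strobe effect and spurious currents. The paper's exact criterion is not reproduced in the abstract, so the sketch below encodes only a generic Nyquist-style stand-in (the safety factor of 2 is an assumption):

```python
def cadence_is_sufficient(feature_timescale_s, cadence_s, safety=2.0):
    """Nyquist-style check: the driving cadence must sample the fastest
    evolving photospheric feature at least `safety` times per timescale."""
    return cadence_s <= feature_timescale_s / safety

HMI_CADENCE_S = 12 * 60.0  # SDO/HMI vector magnetogram cadence

# A slowly emerging active region (~25 hr) is well resolved at 12 minutes,
# while a small-scale flow evolving on ~10 minutes is under-sampled.
print(cadence_is_sufficient(25 * 3600.0, HMI_CADENCE_S))  # True
print(cadence_is_sufficient(10 * 60.0, HMI_CADENCE_S))    # False
```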

  1. The optimization of accuracy ratio of the two-group diffusion constants in simulation model of RBMK-1000 core

    NASA Astrophysics Data System (ADS)

    Bolsunov, A. A.; Karpov, S. A.

    2013-12-01

    The relative ratio of individual accuracies of the two-group diffusion constants in a dynamic simulation model of a reactor core is optimized. This is done to minimize calculation errors of neutron flux, power, or reactivity distributions in the model. The problem is solved under the assumption that the overall accuracy of the representation of constants is limited by the resources allocated for the approximation of the constants.

  2. NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools (Fact Sheet)

    SciTech Connect

    Not Available

    2012-01-01

This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes. Researchers at the National Renewable Energy Laboratory (NREL) have developed models for evaluating the thermal performance of walls in existing homes that will improve the accuracy of building energy simulation tools when predicting potential energy savings of existing homes. Uninsulated walls are typical in older homes where the wall cavities were not insulated during construction or where the insulating material has settled. Accurate calculation of heat transfer through building enclosures will help determine the benefit of energy efficiency upgrades in order to reduce energy consumption in older American homes. NREL performed detailed computational fluid dynamics (CFD) analysis to quantify the energy loss/gain through the walls and to visualize different airflow regimes within the uninsulated cavities. The effects of ambient outdoor temperature, radiative properties of building materials, and insulation level were investigated. The study showed that multi-dimensional airflows occur in walls with uninsulated cavities and that the thermal resistance is a function of the outdoor temperature - an effect not accounted for in existing building energy simulation tools. The study quantified the difference between CFD prediction and the approach currently used in building energy simulation tools over a wide range of conditions. For example, researchers found that CFD predicted lower heating loads and slightly higher cooling loads. Implementation of CFD results into building energy simulation tools such as DOE2 and EnergyPlus will likely reduce the predicted heating load of homes. Researchers also determined that a small air gap in a partially insulated cavity can lead to a significant reduction in thermal resistance. For instance, a 4-in. tall air gap

  3. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    NASA Astrophysics Data System (ADS)

    Ko, P.; Kurosawa, S.

    2014-03-01

The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work that enhances turbine performance, including extending the operational life span and improving turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with model test results for an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for the turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.

  4. Results of error correction techniques applied on two high accuracy coordinate measuring machines

    SciTech Connect

    Pace, C.; Doiron, T.; Stieren, D.; Borchardt, B.; Veale, R.; National Inst. of Standards and Technology, Gaithersburg, MD )

    1990-01-01

The Primary Standards Laboratory at Sandia National Laboratories (SNL) and the Precision Engineering Division at the National Institute of Standards and Technology (NIST) are in the process of implementing software error correction on two nearly identical high-accuracy coordinate measuring machines (CMMs). Both machines are Moore Special Tool Company M-48 CMMs fitted with laser positioning transducers. Although both machines were manufactured to high tolerance levels, the overall volumetric accuracy was insufficient for calibrating standards to the levels both laboratories require. The error mapping procedure was developed at NIST in the mid-1970s on an earlier but similar model. The original procedure was very complicated and made no assumptions about the rigidity of the machine as it moved; each of the possible error motions was measured at each point of the error map independently. A simpler mapping procedure, developed during the early 1980s, assumed rigid-body motion of the machine. This method has been used to calibrate lower-accuracy machines with a high degree of success, and similar software correction schemes have been implemented by many CMM manufacturers. The rigid-body model had not yet been used on highly repeatable CMMs such as the M-48. In this report we present early mapping data for the two M-48 CMMs. The SNL CMM was manufactured in 1985 and has been in service for approximately four years, whereas the NIST CMM was delivered in early 1989. 4 refs., 5 figs.

  5. Improving the accuracy of simulation of radiation-reaction effects with implicit Runge-Kutta-Nyström methods.

    PubMed

    Elkina, N V; Fedotov, A M; Herzing, C; Ruhl, H

    2014-05-01

    The Landau-Lifshitz equation provides an efficient way to account for the effects of radiation reaction without acquiring the nonphysical solutions typical for the Lorentz-Abraham-Dirac equation. We solve the Landau-Lifshitz equation in its covariant four-vector form in order to control both the energy and momentum of radiating particles. Our study reveals that implicit time-symmetric collocation methods of the Runge-Kutta-Nyström type are superior in accuracy and better at maintaining the mass-shell condition than their explicit counterparts. We carry out an extensive study of numerical accuracy by comparing the analytical and numerical solutions of the Landau-Lifshitz equation. Finally, we present the results of the simulation of particle scattering by a focused laser pulse. Due to radiation reaction, particles are less capable of penetrating into the focal region compared to the case where radiation reaction is neglected. Our results are important for designing forthcoming experiments with high intensity laser fields.

  6. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles from eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at <http://croc.gsfc.nasa.gov/shadoz>. In an analysis of ozonesonde imprecision within the SHADOZ dataset [Thompson et al., JGR, 108, 8238, 2003], we pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecision and accuracy in the SHADOZ dataset are examined here in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline by two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing, and instrument type (manufacturer).

  7. A Bloch-McConnell simulator with pharmacokinetic modeling to explore accuracy and reproducibility in the measurement of hyperpolarized pyruvate

    NASA Astrophysics Data System (ADS)

    Walker, Christopher M.; Bankson, James A.

    2015-03-01

    Magnetic resonance imaging (MRI) of hyperpolarized (HP) agents has the potential to probe in-vivo metabolism with sensitivity and specificity that was not previously possible. Biological conversion of HP agents specifically for cancer has been shown to correlate to presence of disease, stage and response to therapy. For such metabolic biomarkers derived from MRI of hyperpolarized agents to be clinically impactful, they need to be validated and well characterized. However, imaging of HP substrates is distinct from conventional MRI, due to the non-renewable nature of transient HP magnetization. Moreover, due to current practical limitations in generation and evolution of hyperpolarized agents, it is not feasible to fully experimentally characterize measurement and processing strategies. In this work we use a custom Bloch-McConnell simulator with pharmacokinetic modeling to characterize the performance of specific magnetic resonance spectroscopy sequences over a range of biological conditions. We performed numerical simulations to evaluate the effect of sequence parameters over a range of chemical conversion rates. Each simulation was analyzed repeatedly with the addition of noise in order to determine the accuracy and reproducibility of measurements. Results indicate that under both closed and perfused conditions, acquisition parameters can affect measurements in a tissue dependent manner, suggesting that great care needs to be taken when designing studies involving hyperpolarized agents. More modeling studies will be needed to determine what effect sequence parameters have on more advanced acquisitions and processing methods.
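The pharmacokinetic core of such a simulator is a two-site exchange model for the hyperpolarized longitudinal magnetization: pyruvate converts to lactate while both pools decay irreversibly with T1. A forward-Euler sketch with illustrative rate constants (the study's full Bloch-McConnell treatment additionally models RF excitation, chemical shift, and perfusion, none of which are included here):

```python
def simulate_hp_exchange(kpl=0.05, r1p=1.0 / 43.0, r1l=1.0 / 33.0,
                         t_end=60.0, dt=0.01):
    """Forward-Euler integration of two-site exchange for hyperpolarized
    magnetization (pyruvate P -> lactate L, both with T1 relaxation):

        dP/dt = -(kpl + R1p) * P
        dL/dt =  kpl * P - R1l * L

    Rate constants are illustrative assumptions, in s^-1.
    """
    p, lac = 1.0, 0.0  # normalized initial polarization, all in pyruvate
    for _ in range(int(t_end / dt)):
        dp = -(kpl + r1p) * p
        dl = kpl * p - r1l * lac
        p += dp * dt
        lac += dl * dt
    return p, lac

# After 60 s, most pyruvate signal has relaxed or converted
p, lac = simulate_hp_exchange()
```

Fitting the apparent conversion rate kpl from noisy realizations of curves like these is how the reproducibility of the metabolic biomarker is assessed in simulation.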

  8. Accuracy and convergence of coupled finite-volume/Monte Carlo codes for plasma edge simulations of nuclear fusion reactors

    SciTech Connect

    Ghoos, K.; Dekeyser, W.; Samaey, G.; Börner, P.; Baelmans, M.

    2016-10-01

    The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under the conditions of future reactors such as ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise, and Robbins-Monro. Practical procedures to estimate the errors in complex codes are also proposed. Moreover, first results with more complex models show that an order-of-magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
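A toy sketch of the Robbins-Monro idea behind one of the coupling techniques: damp each noisy Monte Carlo evaluation of a fixed-point map with step sizes a_n = 1/n so the statistical noise averages out over iterations. The map `f_noisy` and its noise level are invented for illustration and have nothing to do with the actual plasma-edge equations:

```python
import random

def robbins_monro(f_noisy, x0=0.0, iters=5000, seed=1):
    """Robbins-Monro iteration for x = f(x) when f can only be evaluated
    with statistical (MC) noise: decreasing steps a_n = 1/n average the
    noise out instead of letting it dominate the converged state."""
    random.seed(seed)
    x = x0
    for n in range(1, iters + 1):
        a = 1.0 / n
        x = (1 - a) * x + a * f_noisy(x)
    return x

# Toy "MC solver": the exact fixed point of f(x) = 0.5*x + 1 is x* = 2,
# but each evaluation carries Gaussian statistical noise.
def f_noisy(x):
    return 0.5 * x + 1.0 + random.gauss(0.0, 0.5)

x = robbins_monro(f_noisy)
```

A plain fixed-point iteration with constant weight would stall at a noise floor set by the MC sample size; the decreasing steps trade that floor for slower but convergent averaging.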

  9. Adaptive constructive processes and memory accuracy: Consequences of counterfactual simulations in young and older adults

    PubMed Central

    Gerlach, Kathy D.; Dornblaser, David W.; Schacter, Daniel L.

    2013-01-01

    People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterized as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b, young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test, participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2, younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterization as an adaptive constructive process. PMID:23560477

  10. Influence of River Bed Elevation Survey Configurations and Interpolation Methods on the Accuracy of LIDAR Dtm-Based River Flow Simulations

    NASA Astrophysics Data System (ADS)

    Santillan, J. R.; Serviano, J. L.; Makinano-Santillan, M.; Marqueso, J. T.

    2016-09-01

    In this paper, we investigated how survey configuration and the type of interpolation method affect the accuracy of river flow simulations that use a LIDAR DTM integrated with an interpolated river bed as the main source of topographic information. Aside from determining the accuracy of the individually generated river bed topographies, we also assessed the overall accuracy of the river flow simulations in terms of maximum flood depth and extent. Four survey configurations consisting of river bed elevation data points arranged as cross-section (XS), zig-zag (ZZ), river banks-centerline (RBCL), and river banks-centerline-zig-zag (RBCLZZ), and two interpolation methods (Inverse Distance-Weighted and Ordinary Kriging) were considered. Major results show that the choice of survey configuration, rather than the interpolation method, has a significant effect on the accuracy of the interpolated river bed surfaces, and subsequently on the accuracy of the river flow simulations. The RMSEs of the interpolated surfaces and the model results vary from one configuration to another, and depend on how evenly each configuration collects river bed elevation data points. The large RMSEs for the RBCL configuration and the low RMSEs for the XS configuration confirm that as the data points become evenly spaced and cover more portions of the river, the resulting interpolated surface and the river flow simulation in which it is used also become more accurate. The XS configuration with Ordinary Kriging (OK) as the interpolation method provided the best river bed interpolation and river flow simulation results. The RBCL configuration, regardless of the interpolation algorithm used, resulted in the least accurate river bed surfaces and simulation results. Based on the accuracy analysis, collecting river bed data points in the XS configuration and applying the OK method to interpolate the river bed topography are the best choices for producing satisfactory river flow simulation outputs. 
The use of
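As a concrete reference for the simpler of the two interpolation methods compared, Inverse Distance-Weighted interpolation can be sketched in a few lines (Ordinary Kriging additionally models spatial covariance via a variogram and is omitted here); the surveyed points below are hypothetical, not the study's data:

```python
import math

def idw(sample_pts, query, power=2.0):
    """Inverse Distance-Weighted interpolation of river bed elevations.
    sample_pts: list of (x, y, z) surveyed bed points; query: (x, y)."""
    num, den = 0.0, 0.0
    for x, y, z in sample_pts:
        d = math.hypot(query[0] - x, query[1] - y)
        if d < 1e-12:           # query coincides with a surveyed point
            return z
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

# Hypothetical cross-section (XS) style bed points along a channel
pts = [(0, 0, 10.0), (10, 0, 8.5), (20, 0, 9.0),
       (0, 50, 10.2), (10, 50, 8.3), (20, 50, 9.1)]
z = idw(pts, (10, 25))
```

Because IDW is a convex combination of the sample elevations, interpolated values always stay within the surveyed range, which is why uneven configurations such as RBCL degrade the surface rather than producing outliers.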

  11. Accuracy of buffered-force QM/MM simulations of silica

    SciTech Connect

    Peguiron, Anke; Moras, Gianpietro; Colombi Ciacchi, Lucio; De Vita, Alessandro; Kermode, James R.

    2015-02-14

    We report comparisons between energy-based quantum mechanics/molecular mechanics (QM/MM) and buffered force-based QM/MM simulations in silica. Local quantities—such as density of states, charges, forces, and geometries—calculated with both QM/MM approaches are compared to the results of full QM simulations. We find the length scale over which forces computed using a finite QM region converge to reference values obtained in full quantum-mechanical calculations is ∼10 Å rather than the ∼5 Å previously reported for covalent materials such as silicon. Electrostatic embedding of the QM region in the surrounding classical point charges gives only a minor contribution to the force convergence. While the energy-based approach provides accurate results in geometry optimizations of point defects, we find that the removal of large force errors at the QM/MM boundary provided by the buffered force-based scheme is necessary for accurate constrained geometry optimizations where Si–O bonds are elongated and for finite-temperature molecular dynamics simulations of crack propagation. Moreover, the buffered approach allows for more flexibility, since special-purpose QM/MM coupling terms that link QM and MM atoms are not required and the region that is treated at the QM level can be adaptively redefined during the course of a dynamical simulation.

  12. Mapping simulated scenes with skeletal remains using differential GPS in open environments: an assessment of accuracy and practicality.

    PubMed

    Walter, Brittany S; Schultz, John J

    2013-05-10

    Scene mapping is an integral aspect of processing a scene with scattered human remains. By utilizing the appropriate mapping technique, investigators can accurately document the location of human remains and maintain a precise geospatial record of evidence. One option that has not received much attention for mapping forensic evidence is the differential global positioning system (DGPS) unit, as this technology now provides decreased positional error suitable for mapping scenes. Because of the lack of knowledge concerning its utility in mapping a scene, controlled research is necessary to determine the practicality of using newer, enhanced DGPS units to map scattered human remains. The purpose of this research was to quantify the accuracy of a DGPS unit for mapping skeletal dispersals and to determine the applicability of this utility in mapping a scene with dispersed remains. First, the accuracy of the DGPS unit in open environments was determined using known survey markers in open areas. Second, three simulated scenes exhibiting different types of dispersals were constructed and mapped in an open environment using the DGPS. Variables considered during data collection included the extent of the dispersal, data collection time, data collected on different days, and different postprocessing techniques. Data were differentially postprocessed and compared in a geographic information system (GIS) to evaluate the most efficient recordation methods. Results of this study demonstrate that the DGPS is a viable option for mapping dispersed human remains in open areas. The accuracy of collected point data was 11.52 and 9.55 cm for 50- and 100-s collection times, respectively, and the orientation and maximum length of long bones were maintained. Also, the use of error buffers for point data of bones in maps demonstrated the error of the DGPS unit, while showing that the context of the dispersed skeleton was accurately maintained. Furthermore, the application of a DGPS for

  13. Implication of CT table sag on geometrical accuracy during virtual simulation.

    PubMed

    Zullo, John R; Kudchadker, Rajat; Wu, Richard; Lee, Andrew; Prado, Karl

    2007-01-01

    Computed tomography (CT) scanners are used in hospitals worldwide for radiation oncology treatment simulation. It is critical that the process very accurately represents the patient positioning to be used during the administration of radiation therapy to minimize the dose delivery to normal tissue. Unfortunately, this is not always the case. One problem is that some degree of vertical displacement, or sag, occurs when the table is extended from its base when under a clinical weight load, a problem resulting from mechanical limitations of the CT table. In an effort to determine the extent of the problem, we measured and compared the degree of table sag for various CT scanner tables at our institution. A clinically representative weight load was placed on each table, and the amount of table sag was measured for varying degrees of table extension from its base. Results indicated that the amount of table sag varied from approximately 0.7 to 6.6 mm and that the amount of table sag varied not only between tables from different manufacturers but also between tables of the same model from the same manufacturer. Failure to recognize and prevent this problem could lead to incorrectly derived isocenter localization and subsequent patient positioning errors. Treatment site-specific and scanner-based laser offset correction should be implemented for each patient's virtual simulation procedure. In addition, the amount of sag should be measured under a clinically representative weight load upon CT-simulator commissioning.
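One way to implement the recommended correction is to interpolate sag, measured at commissioning as a function of table extension, and add it back to the planned vertical coordinate; the commissioning table below is purely illustrative (the paper only reports a 0.7–6.6 mm range across scanners):

```python
def sag_correction(extension_cm, table):
    """Linearly interpolate commissioning sag measurements
    (table extension in cm -> measured sag in mm)."""
    pts = sorted(table.items())
    if extension_cm <= pts[0][0]:
        return pts[0][1]
    for (x0, s0), (x1, s1) in zip(pts, pts[1:]):
        if extension_cm <= x1:
            f = (extension_cm - x0) / (x1 - x0)
            return s0 + f * (s1 - s0)
    return pts[-1][1]          # beyond last measurement: hold the end value

# Hypothetical commissioning data for one scanner table
commissioning = {0: 0.0, 50: 0.7, 100: 2.1, 150: 4.0, 200: 6.6}

def corrected_z(z_mm, extension_cm):
    """Shift the vertical coordinate (e.g. a laser offset) by the
    expected sag at this extension."""
    return z_mm + sag_correction(extension_cm, commissioning)
```

Such a lookup is scanner-specific, which matches the finding that sag varies even between tables of the same model.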

  14. Implication of CT Table Sag on Geometrical Accuracy During Virtual Simulation

    SciTech Connect

    Zullo, John R.; Kudchadker, Rajat; Wu, Richard; Lee, Andrew; Prado, Karl

    2007-01-01

    Computed tomography (CT) scanners are used in hospitals worldwide for radiation oncology treatment simulation. It is critical that the process very accurately represents the patient positioning to be used during the administration of radiation therapy to minimize the dose delivery to normal tissue. Unfortunately, this is not always the case. One problem is that some degree of vertical displacement, or sag, occurs when the table is extended from its base when under a clinical weight load, a problem resulting from mechanical limitations of the CT table. In an effort to determine the extent of the problem, we measured and compared the degree of table sag for various CT scanner tables at our institution. A clinically representative weight load was placed on each table, and the amount of table sag was measured for varying degrees of table extension from its base. Results indicated that the amount of table sag varied from approximately 0.7 to 6.6 mm and that the amount of table sag varied not only between tables from different manufacturers but also between tables of the same model from the same manufacturer. Failure to recognize and prevent this problem could lead to incorrectly derived isocenter localization and subsequent patient positioning errors. Treatment site-specific and scanner-based laser offset correction should be implemented for each patient's virtual simulation procedure. In addition, the amount of sag should be measured under a clinically representative weight load upon CT-simulator commissioning.

  15. Computer simulation of shading and blocking: Discussion of accuracy and recommendations

    SciTech Connect

    Lipps, F W

    1992-04-01

    A field of heliostats suffers losses caused by shading and blocking by neighboring heliostats. The complex geometry of multiple shading and blocking events suggests that a processing code is needed to update the boundary vector for each shading or blocking event. A new version, RSABS (programmer's manual included), simulates the split-rectangular heliostat. Researchers concluded that the dominant error for the given heliostat geometry is caused by the departure from planarity of the neighboring heliostats. It is recommended that a version of the heliostat simulation be modified to include losses due to nonreflective structural margins, if they occur. Heliostat neighbors should be given true guidance rather than assumed to be parallel, and the resulting nonidentical quadrilateral images should be processed, as in HELIOS, by ignoring overlapping events, which are rare in optimized fields.

  16. Evaluation of deformation accuracy of a virtual pneumoperitoneum method based on clinical trials for patient-specific laparoscopic surgery simulator

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Qu, Jia Di; Nimura, Yukitaka; Kitasaka, Takayuki; Misawa, Kazunari; Mori, Kensaku

    2012-02-01

    This paper evaluates the deformation accuracy of a virtual pneumoperitoneum method by utilizing measurement data of real deformations of patient bodies. Laparoscopic surgery is a surgical option that is less invasive than traditional open operations. In laparoscopic surgery, the pneumoperitoneum process is performed to create a viewing and working space. Although a virtual pneumoperitoneum method based on 3D CT image deformation has been proposed for patient-specific laparoscopy simulators, quantitative evaluation based on measurements obtained in real surgery has not been performed. In this paper, we evaluate the deformation accuracy of the virtual pneumoperitoneum method based on real deformation data of the abdominal wall measured in operating rooms (ORs). The evaluation results are used to find optimal deformation parameters for the virtual pneumoperitoneum method. We measure landmark positions on the abdominal wall on a 3D CT image taken before the pneumoperitoneum process. The landmark positions are defined based on the anatomical structure of the patient body. We also measure the landmark positions on a 3D CT image deformed by the virtual pneumoperitoneum method. To measure real deformations of the abdominal wall, we measure the landmark positions on the abdominal wall of a patient before and after the pneumoperitoneum process in the OR. We transform the landmark positions measured in the OR from the tracker coordinate system to the CT coordinate system. A positional error of the virtual pneumoperitoneum method is calculated from the positional differences between the landmark positions on the 3D CT image and the transformed landmark positions. Experimental results based on eight cases of surgeries showed that the minimal positional error was 13.8 mm. The positional error can be decreased from the previous method by calculating optimal deformation parameters of the virtual pneumoperitoneum method from the experimental

  17. Progress toward chemical accuracy in the computer simulation of condensed phase reactions

    SciTech Connect

    Bash, P.A.; Levine, D.; Hallstrom, P.; Ho, L.L.; Mackerell, A.D. Jr.

    1996-03-01

    A procedure is described for the generation of chemically accurate computer-simulation models to study chemical reactions in the condensed phase. The process involves (1) the use of a coupled semiempirical quantum and classical molecular mechanics method to represent solutes and solvent, respectively; (2) the optimization of semiempirical quantum mechanics (QM) parameters to produce a computationally efficient and chemically accurate QM model; (3) the calibration of a quantum/classical microsolvation model using ab initio quantum theory; and (4) the use of statistical mechanical principles and methods to simulate, on massively parallel computers, the thermodynamic properties of chemical reactions in aqueous solution. The utility of this process is demonstrated by the calculation of the enthalpy of reaction in vacuum and free energy change in aqueous solution for a proton transfer involving methanol, methoxide, imidazole, and imidazolium, which are functional groups involved with proton transfers in many biochemical systems. An optimized semiempirical QM model is produced, which results in the calculation of heats of formation of the above chemical species to within 1.0 kcal/mol of experimental values. The use of the calibrated QM and microsolvation QM/MM models for the simulation of a proton transfer in aqueous solution gives a calculated free energy that is within 1.0 kcal/mol (12.2 calculated vs. 12.8 experimental) of a value estimated from experimental pKa's of the reacting species.

  18. Summarizing Simulation Results using Causally-relevant States

    PubMed Central

    Parikh, Nidhi; Marathe, Madhav; Swarup, Samarth

    2016-01-01

    As increasingly large-scale multiagent simulations are being implemented, new methods are becoming necessary to make sense of the results of these simulations. Even concisely summarizing the results of a given simulation run is a challenge. Here we pose this as the problem of simulation summarization: how to extract the causally-relevant descriptions of the trajectories of the agents in the simulation. We present a simple algorithm to compress agent trajectories through state space by identifying the state transitions which are relevant to determining the distribution of outcomes at the end of the simulation. We present a toy-example to illustrate the working of the algorithm, and then apply it to a complex simulation of a major disaster in an urban area. PMID:28042620

  19. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

    Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability, and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example chi-square and Kolmogorov-Smirnov tests) applied to the residual values, and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, the residual value distribution being also normal, but in case of the test on the real data the residual value distribution is
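A miniature version of the residual testing described, assuming nothing about the authors' implementation: the one-sample Kolmogorov-Smirnov statistic of plane-fit residuals against a normal distribution with sample-estimated mean and standard deviation (strictly, estimated parameters call for Lilliefors-type critical values):

```python
import math, random

def ks_statistic_normal(residuals):
    """One-sample KS statistic of residuals against a normal distribution
    parameterized by the sample mean and standard deviation."""
    n = len(residuals)
    mu = sum(residuals) / n
    sd = math.sqrt(sum((r - mu) ** 2 for r in residuals) / (n - 1))
    cdf = lambda x: 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))
    d = 0.0
    for i, x in enumerate(sorted(residuals)):
        # largest gap between empirical and model CDF, checked on both sides
        d = max(d, abs((i + 1) / n - cdf(x)), abs(cdf(x) - i / n))
    return d

# Synthetic residuals with normally distributed error, as in test case (1)
random.seed(0)
d = ks_statistic_normal([random.gauss(0.0, 0.1) for _ in range(500)])
```

For genuinely normal residuals the statistic stays well below the usual rejection thresholds (on the order of 1.36/sqrt(n) for the plain KS test), mirroring the accepted null hypothesis reported for the simulated data.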

  20. On the accuracy of the two-fluid formulation in direct numerical simulation of bubble-laden turbulent boundary layers

    NASA Astrophysics Data System (ADS)

    Ferrante, Antonino; Elghobashi, Said

    2007-04-01

    The objective of the present paper is to examine the accuracy of the two-fluid (TF) formulation in direct numerical simulation (DNS) of a microbubble-laden spatially developing turbulent boundary layer over a flat plate by comparing the results with those of the Eulerian-Lagrangian (EL) formulation [A. Ferrante and S. Elghobashi, J. Fluid Mech. 543, 93 (2005); A. Ferrante and S. Elghobashi, J. Fluid Mech. 503, 345 (2004)]. Our results show that DNS with TF (TFDNS) does not reproduce the physical mechanisms responsible for drag reduction observed in the EL results. The reason is that TFDNS does not produce accurate instantaneous local bubble concentration C (x,t) gradients which are responsible for the generation of a positive ⟨∇•U⟩ that is essential for the drag reduction mechanism. The inaccuracy of the TFDNS in computing C (x,t) is due to the invalidity of the bubble-phase continuity equation in regions where the continuum assumption for the bubble-phase breaks down. It is recommended that if the real (experimental or DNS) instantaneous spatial distribution of bubble (or particle) concentration is discontinuous, and if this concentration discontinuity is crucial for the realization of the physical phenomenon of interest, then DNS should use the EL formulation. We propose a Knudsen number criterion for the validity of the two-fluid formulation in DNS of dispersed two-phase flows with strong unsteady preferential concentration.

  1. The VIIRS Ocean Data Simulator Enhancements and Results

    NASA Technical Reports Server (NTRS)

    Robinson, Wayne D.; Patt, Fredrick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.

    2011-01-01

    The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.

  2. Accuracy of tumor motion compensation algorithm from a robotic respiratory tracking system: A simulation study

    SciTech Connect

    Seppenwoolde, Yvette; Berbeco, Ross I.; Nishioka, Seiko; Shirato, Hiroki; Heijmen, Ben

    2007-07-15

    could already be reached with a simple linear model. In case of hysteresis, a polynomial model added some extra reduction. More frequent updating of the correspondence model resulted in slightly smaller errors only for the few recordings with a time trend that was fast, relative to the current x-ray update frequency. In general, the simulations suggest that the applied combined use of internal and external markers allow the robot to accurately follow tumor motion even in the case of irregularities in breathing patterns.

  3. High-Accuracy Near-Surface Large-Eddy Simulation with Planar Topography

    DTIC Science & Technology

    2015-08-03

    Large-eddy simulation (LES) has been plagued by an inability to predict the law-of-the-wall (LOTW) in mean velocity in the...

  4. High Accuracy Multidimensional Parameterized Surrogate Models for Fast Optimization of Microwave Circuits in the Industry Standard Circuit Simulators

    DTIC Science & Technology

    2006-07-03

    models that could be evaluated at the speed of closed-form formulas but with accuracy comparable to EM simulations. One of the goals of the... evaluated and randomly selected points one can compute various statistical measures. In this report we use the following simple ones: the maximum...can be performed. The test generates several models with increasing orders and evaluates the biggest mismatch between chosen pairs. The orders of a

  5. Simulation of electronic registration of multispectral remote sensing images to 0.1 pixel accuracy

    NASA Technical Reports Server (NTRS)

    Reitsema, H. J.; Mord, A. J.; Fraser, D.; Richard, H. L.; Speaker, E. E.

    1984-01-01

    Band-to-band coregistration of multispectral remote sensing images can be achieved by electronic signal processing techniques rather than by costly and difficult mechanical alignment. This paper describes the results of a study of the end-to-end performance of electronic registration. The software simulation includes steps which model the performance of the geometric calibration process, the instrument image quality, detector performance and the effects of achieving coregistration through image resampling. The image resampling step emulates the Pipelined Resampling Processor, a real-time image resampler. The study demonstrates that the electronic alignment technique produces multispectral images which are superior to those produced by an imager whose pixel geometry is accurate to 0.1 pixel rms. The implications of this approach for future earth observation programs are discussed.

  6. Accuracy evaluation of numerical methods used in state-of-the-art simulators for spiking neural networks.

    PubMed

    Henker, Stephan; Partzsch, Johannes; Schüffny, René

    2012-04-01

    With the various simulators for spiking neural networks developed in recent years, a variety of numerical solution methods for the underlying differential equations are available. In this article, we introduce an approach to systematically assess the accuracy of these methods. In contrast to previous investigations, our approach focuses on a completely deterministic comparison and uses an analytically solved model as a reference. This enables the identification of typical sources of numerical inaccuracies in state-of-the-art simulation methods. In particular, with our approach we can separate the error of the numerical integration from the timing error of spike detection and propagation, the latter being prominent in simulations with fixed timestep. To verify the correctness of the testing procedure, we relate the numerical deviations to theoretical predictions for the employed numerical methods. Finally, we give an example of the influence of simulation artefacts on network behaviour and spike-timing-dependent plasticity (STDP), underlining the importance of spike-time accuracy for the simulation of STDP.
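The separation between integration error and spike-timing error that the authors emphasize can be illustrated with a leaky integrate-and-fire neuron driven by constant input, for which the spike time is known in closed form; the parameters below are illustrative only:

```python
import math

TAU, I_EXT, V_TH = 0.02, 1.5, 0.02   # illustrative LIF parameters (s, a.u., a.u.)

def exact_spike_time():
    """Analytic threshold-crossing time of dv/dt = -v/TAU + I_EXT, v(0) = 0,
    whose solution is v(t) = I_EXT*TAU*(1 - exp(-t/TAU))."""
    return -TAU * math.log(1.0 - V_TH / (I_EXT * TAU))

def euler_spike_time(dt):
    """Fixed-timestep forward-Euler integration with grid-aligned
    spike detection, as in simple clock-driven simulators."""
    v, t = 0.0, 0.0
    while v < V_TH:
        v += dt * (-v / TAU + I_EXT)
        t += dt
    return t

errors = {dt: abs(euler_spike_time(dt) - exact_spike_time())
          for dt in (1e-3, 1e-4, 1e-5)}
```

Because the spike is only detected on the time grid, the timing error here is bounded by roughly one timestep, separate from the (smaller) integration error of the membrane trajectory itself.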

  7. Accuracy of Range Restriction Correction with Multiple Imputation in Small and Moderate Samples: A Simulation Study

    ERIC Educational Resources Information Center

    Pfaffel, Andreas; Spiel, Christiane

    2016-01-01

    Approaches to correcting correlation coefficients for range restriction have been developed under the framework of large sample theory. The accuracy of missing data techniques for correcting correlation coefficients for range restriction has thus far only been investigated with relatively large samples. However, researchers and evaluators are…

  8. Measurement and Simulation Results of Ti Coated Microwave Absorber

    SciTech Connect

    Sun, Ding; McGinnis, Dave; /Fermilab

    1998-11-01

    When microwave absorbers are placed in a waveguide, a layer of resistive coating can change the distribution of the E-M fields and affect the attenuation of the signal within the absorbers. To study this effect, microwave absorbers (TT2-111) were coated with titanium thin film. This report documents the coating process and measurement results. The measurement results have been used to check simulation results from the commercial software HFSS (High Frequency Structure Simulator).

  9. On the accuracy of a video-based drill-guidance solution for orthopedic and trauma surgery: preliminary results

    NASA Astrophysics Data System (ADS)

    Magaraggia, Jessica; Kleinszig, Gerhard; Wei, Wei; Weiten, Markus; Graumann, Rainer; Angelopoulou, Elli; Hornegger, Joachim

    2014-03-01

    Over the last years, several methods have been proposed to guide the physician during reduction and fixation of bone fractures. Available solutions often use bulky instrumentation inside the operating room (OR). These usually consist of a stereo camera, placed outside the operative field, and optical markers attached directly to both the patient and the surgical instrumentation held by the surgeon. Recently proposed techniques try to reduce the required additional instrumentation as well as the radiation exposure to both patient and physician. In this paper, we present the adaptation and the first implementation of our recently proposed video camera-based solution for screw fixation guidance. Based on the simulations conducted in our previous work, we mounted a small camera on a drill in order to recover its tip position and axis orientation w.r.t. our custom-made drill sleeve with attached markers. Since drill-position accuracy is critical, we thoroughly evaluated the accuracy of our implementation. We used an optical tracking system for ground truth data collection. For this purpose, we built a custom plate reference system and attached reflective markers to both the instrument and the plate. Free drilling was then performed 19 times. The position of the drill axis was continuously recovered using both our video camera solution and the tracking system for comparison. The recorded data covered targeting, perforation of the surface bone by the drill bit, and bone drilling. The orientation of the instrument axis and the position of the instrument tip were recovered with an accuracy of 1.60 ± 1.22° and 2.03 ± 1.36 mm, respectively.

  10. Diagnostic accuracy of GPs when using an early-intervention decision support system: a high-fidelity simulation

    PubMed Central

    Kostopoulou, Olga; Porat, Talya; Corrigan, Derek; Mahmoud, Samhar; Delaney, Brendan C

    2017-01-01

    Background Observational and experimental studies of the diagnostic task have demonstrated the importance of the first hypotheses that come to mind for accurate diagnosis. A prototype decision support system (DSS) designed to support GPs’ first impressions has been integrated with a commercial electronic health record (EHR) system. Aim To evaluate the prototype DSS in a high-fidelity simulation. Design and setting Within-participant design: 34 GPs consulted with six standardised patients (actors) using their usual EHR. On a different day, GPs used the EHR with the integrated DSS to consult with six other patients, matched for difficulty and counterbalanced. Method Entering the reason for encounter triggered the DSS, which provided a patient-specific list of potential diagnoses, and supported coding of symptoms during the consultation. At each consultation, GPs recorded their diagnosis and management. At the end, they completed a usability questionnaire. The actors completed a satisfaction questionnaire after each consultation. Results There was an 8–9% absolute improvement in diagnostic accuracy when the DSS was used. This improvement was significant (odds ratio [OR] 1.41, 95% confidence interval [CI] = 1.13 to 1.77, P<0.01). There was no associated increase of investigations ordered or consultation length. GPs coded significantly more data when using the DSS (mean 12.35 with the DSS versus 1.64 without), and were generally satisfied with its usability. Patient satisfaction ratings were the same for consultations with and without the DSS. Conclusion The DSS prototype was successfully employed in simulated consultations of high fidelity, with no measurable influences on patient satisfaction. The substantially increased data coding can operate as motivation for future DSS adoption. PMID:28137782
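For orientation only: an odds ratio with a 95% Wald confidence interval for a generic unmatched 2×2 table can be computed as below. The counts are hypothetical, and the study's OR (1.41, CI 1.13 to 1.77) came from matched within-participant data and a model-based analysis, not this simple calculation:

```python
import math

def odds_ratio_wald(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] (e.g. correct/incorrect
    diagnoses with vs. without a DSS) with a Wald CI on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: correct/incorrect with the DSS vs. without it
or_, lo, hi = odds_ratio_wald(120, 84, 103, 101)
```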

  11. Electron-cloud simulation results for the SPS and recent results for the LHC

    SciTech Connect

    Furman, M.A.; Pivi, M.T.F.

    2002-06-19

    We present an update of computer simulation results for some features of the electron cloud at the Large Hadron Collider (LHC) and recent simulation results for the Super Proton Synchrotron (SPS). We focus on the sensitivity of the power deposition on the LHC beam screen to the emitted electron spectrum, which we study by means of a refined secondary electron (SE) emission model recently included in our simulation code.

  12. Thematic accuracy of the 1992 National Land-Cover Data for the eastern United States: Statistical methodology and regional results

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Smith, J.H.; Yang, L.

    2003-01-01

    The accuracy of the 1992 National Land-Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or alternate reference label determined for a sample pixel and a mode class of the mapped 3×3 block of pixels centered on the sample pixel. Results are reported for each of the four regions comprising the eastern United States for both Anderson Level I and II classifications. Overall accuracies for Levels I and II are 80% and 46% for New England, 82% and 62% for New York/New Jersey (NY/NJ), 70% and 43% for the Mid-Atlantic, and 83% and 66% for the Southeast.
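Overall accuracy of the kind reported above is the trace of the map-versus-reference error matrix divided by its total. A minimal sketch with hypothetical counts (the study's actual estimates additionally incorporate the stratified, two-stage sampling weights):

```python
import numpy as np

# Hypothetical 3-class error matrix: rows = map labels, columns = reference
# labels. Diagonal entries are agreements between map and reference.
error_matrix = np.array([
    [80, 10,  5],
    [ 8, 60, 12],
    [ 4,  9, 70],
])

overall_accuracy = np.trace(error_matrix) / error_matrix.sum()
print(f"Overall accuracy: {overall_accuracy:.1%}")  # → Overall accuracy: 81.4%
```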

  13. Post-glacial landforms dating by lichenometry in Iceland - the accuracy of relative results and conversely

    NASA Astrophysics Data System (ADS)

    Decaulne, Armelle

    2014-05-01

    Lichenometry studies have been carried out all over Iceland since 1970, using various techniques to address a range of geomorphological issues: moraine dating and glacial advances, outwash timing, proglacial river incision, soil erosion, rock-glacier development, climate variations, debris-flow occurrence, and extreme snow-avalanche frequency. Most users have sought to date proglacial landforms in two main areas: around the southern ice caps of Vatnajökull and Myrdalsjökull, and in Tröllaskagi in northern Iceland. Based on the results of over thirty-five published studies, lichenometry is deemed a successful dating tool in Iceland, and seems to approach an absolute dating technique, at least over the last hundred years, under well-constrained environmental conditions at the local scale. With increasing awareness of the methodological limitations of the technique, together with more sophisticated data treatments, predicted lichenometric 'ages' are supposedly gaining in robustness and precision. However, comparisons between regions, and even between studies in the same area, are hindered by the use of different measurement techniques and data processing. These issues are exacerbated in Iceland by rapid environmental changes across short distances and, more generally, by the common problem of lichen species misidentification in the field, not to mention the age discrepancies relative to other dating tools such as tephrochronology. Some authors claim lichenometry allows a precise reconstruction of landforms and geomorphic processes in Iceland, proposing yearly dating; others include error margins in their reconstructions; while some limit its use to identifying landform generations, declining to push beyond the nature of the gathered data in further interpretation. Finally, can lichenometry be a relatively accurate dating technique, or rather an accurate relative dating tool, in Iceland?

  14. Computer simulation results of attitude estimation of earth orbiting satellites

    NASA Technical Reports Server (NTRS)

    Kou, S. R.

    1976-01-01

    Computer simulation results of attitude estimation of Earth-orbiting satellites (including the Space Telescope) subjected to environmental disturbances and noise are presented. A decomposed linear recursive filter and a Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in the BASIC language and run on HP 9830A and HP 9866A computers. Simulation results show that the decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher-order systems, this filter has computational advantages (i.e., smaller integration and roundoff errors) over a Kalman filter.
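As a toy illustration of the Kalman filtering used above (a scalar constant-state problem, far simpler than a satellite attitude model): each step blends the prediction with a noisy measurement via the Kalman gain.

```python
import random

random.seed(0)
true_value = 1.0
x_est, p_est = 0.0, 1.0   # initial state estimate and its variance
r = 0.04                  # measurement noise variance (std dev 0.2)

for _ in range(200):
    z = true_value + random.gauss(0.0, 0.2)  # noisy measurement
    k = p_est / (p_est + r)                  # Kalman gain
    x_est = x_est + k * (z - x_est)          # measurement update of the state
    p_est = (1.0 - k) * p_est                # update of the estimate variance

print(round(x_est, 2))
```

With 200 measurements the estimate converges to the true value to within a few hundredths, and the posterior variance shrinks roughly as r/n.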

  15. Electron-cloud simulation results for the PSR and SNS

    SciTech Connect

    Pivi, M.; Furman, M.A.

    2002-07-08

    We present recent simulation results for the main features of the electron cloud in the storage ring of the Spallation Neutron Source (SNS) at Oak Ridge, and updated results for the Proton Storage Ring (PSR) at Los Alamos. In particular, a complete refined model of the secondary emission process, including the so-called true secondary, rediffused, and backscattered electrons, has been included in the simulation code.

  16. MIA computer simulation test results report. [space shuttle avionics

    NASA Technical Reports Server (NTRS)

    Unger, G. E.

    1974-01-01

    Results of the first noise susceptibility computer simulation tests of the complete MIA receiver analytical model are presented. Computer simulation tests were conducted with both Gaussian and pulse noise inputs. The results of the Gaussian noise tests were compared to results predicted previously and were found to be in substantial agreement. The results of the pulse noise tests will be compared to the results of planned analogous tests in the Data Bus Evaluation Laboratory at a later time. The MIA computer model is considered to be fully operational at this time.

  17. Accuracy of core mass estimates in simulated observations of dust emission

    NASA Astrophysics Data System (ADS)

    Malinen, J.; Juvela, M.; Collins, D. C.; Lunttila, T.; Padoan, P.

    2011-06-01

    Aims: We study the reliability of the mass estimates obtained for molecular cloud cores using sub-millimetre and infrared dust emission. Methods: We use magnetohydrodynamic simulations and radiative transfer to produce synthetic observations with spatial resolution and noise levels typical of Herschel surveys. We estimate dust colour temperatures using different pairs of intensities, calculate column densities with opacity at one wavelength, and compare the estimated masses with the true values. We compare these results to the case when all five Herschel wavelengths are available. We investigate the effects of spatial variations of dust properties and the influence of embedded heating sources. Results: Wrong assumptions about dust opacity and its spectral index β can cause significant systematic errors in mass estimates. These are mainly multiplicative and leave the slope of the mass spectrum intact, unless cores with very high optical depth are included. Temperature variations bias the colour temperature estimates and, in quiescent cores with optical depths higher than for normal stable cores, masses can be underestimated by up to one order of magnitude. When heated by internal radiation sources, the dust in the core centre becomes visible and the observations recover the true mass spectra. Conclusions: The shape, although not the position, of the mass spectrum is robust against observational errors and biases introduced in the analysis. This changes only if the cores have optical depths much higher than expected for basic hydrostatic equilibrium conditions. Observations underestimate the value of β whenever there are temperature variations along the line of sight. A bias can also be observed when the true β varies with wavelength. Internal heating sources produce an inverse correlation between colour temperature and β that may be difficult to separate from any intrinsic β(T) relation of the dust grains. This suggests caution when interpreting the observed
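A colour temperature of the kind used above is obtained by inverting the intensity ratio of a modified blackbody at two bands. A minimal sketch (the band wavelengths, β value, and bisection bounds are illustrative assumptions; real pipelines also convolve with instrument bandpasses):

```python
import math

H, K, C = 6.626e-34, 1.381e-23, 2.998e8  # SI: Planck, Boltzmann, light speed

def mbb_ratio(t, lam1, lam2, beta):
    """Intensity ratio I(lam1)/I(lam2) of a modified blackbody nu^beta * B_nu(T)."""
    nu1, nu2 = C / lam1, C / lam2
    return (nu1 / nu2) ** (3.0 + beta) * math.expm1(H * nu2 / (K * t)) / math.expm1(H * nu1 / (K * t))

def colour_temperature(ratio, lam1, lam2, beta=2.0, lo=5.0, hi=100.0):
    """Invert the band ratio for T by bisection (the ratio is monotonic in T here)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mbb_ratio(mid, lam1, lam2, beta) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip at the Herschel 250 and 500 micron bands: generate the ratio for
# a 15 K modified blackbody, then recover the temperature from that ratio.
lam1, lam2 = 250e-6, 500e-6
recovered = colour_temperature(mbb_ratio(15.0, lam1, lam2, 2.0), lam1, lam2)
print(round(recovered, 2))  # → 15.0
```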

  18. Comprehensive studies on the accuracy of trap characterization by using advanced random telegraph noise simulator

    NASA Astrophysics Data System (ADS)

    Higashi, Yusuke; Matsuzawa, Kazuya; Ishihara, Takamitsu

    2015-04-01

    Our developed noise simulator can represent the dynamic behaviors of electron and hole trapping and de-trapping via interactions with both the Si substrate and the poly-Si gate. Simulations reveal that the conventional analytical model using the ratio between the capture and emission time constants yields large errors in the estimates of trap site positions due to interactions with the Si substrate and poly-Si gate especially in thin gate insulator MOSFETs.

  19. Experimental and simulation results on multipacting in the 112 MHz QWR injector

    SciTech Connect

    Xin, T.; Ben-Zvi, I.; Belomestnykh, S.; Brutus, J. C.; Skaritka, J.; Wu, Q.; Xiao, B.

    2015-05-03

    The first RF commissioning of the 112 MHz QWR superconducting electron gun was done in late 2014. The coaxial Fundamental Power Coupler (FPC) and Cathode Stalk (stalk) were installed and tested for the first time. During this experiment, we observed several multipacting barriers at different gun voltage levels. The simulation work was done within the same range. The comparison between the experimental observations and the simulation results is presented in this paper. The observations during the test are consistent with the simulation predictions. We were able to overcome most of the multipacting barriers and reach 1.8 MV gun voltage under pulsed mode after several rounds of conditioning.

  20. Aerosol kinetic code "AERFORM": Model, validation and simulation results

    NASA Astrophysics Data System (ADS)

    Gainullin, K. G.; Golubev, A. I.; Petrov, A. M.; Piskunov, V. N.

    2016-06-01

    The aerosol kinetic code "AERFORM" is modified to simulate droplet and ice particle formation in mixed clouds. The splitting method is used to calculate condensation and coagulation simultaneously. The method is calibrated against analytic solutions of the kinetic equations. The condensation kinetic model is based on the cloud particle growth equation and the mass and heat balance equations. The coagulation kinetic model includes Brownian, turbulent, and precipitation effects. Realistic parameter values are used for the condensation and coagulation growth of water droplets and ice particles. The model and the simulation results for two full-scale cloud experiments are presented. The simulation model and code may be used autonomously or as an element of another code.

  1. Cardiovascular system and microgravity simulation and inflight results

    NASA Astrophysics Data System (ADS)

    Pottier, J. M.; Patat, F.; Arbeille, P.; Pourcelot, L.; Massabuau, P.; Guell, A.; Gharib, C.

    The main results of the cardiovascular investigation, performed with ultrasound methods during the joint French/Soviet flight aboard Salyut VII in June 1982, are compared to variations of the same parameters studied during ground-based simulations on the same subjects, or observed by other investigators during various ground-based experiments. Antiorthostatic bed rest partly reproduces microgravity conditions and, despite some differences, seems better suited to simulating cardiac hemodynamics and the cerebral circulation than the lower-limb circulation.

  2. Hyper-X Stage Separation: Simulation Development and Results

    NASA Technical Reports Server (NTRS)

    Reubush, David E.; Martin, John G.; Robinson, Jeffrey S.; Bose, David M.; Strovers, Brian K.

    2001-01-01

    This paper provides an overview of stage separation simulation development and results for NASA's Hyper-X program, a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an account of the development of the current 14-degree-of-freedom stage separation simulation tool (SepSim) and results from use of the tool in a Monte Carlo analysis to evaluate the risk of failure for the separation event. Results from use of the tool show that there is only a very small risk of failure in the separation event.

  3. Accuracy of surface registration compared to conventional volumetric registration in patient positioning for head-and-neck radiotherapy: A simulation study using patient data

    SciTech Connect

    Kim, Youngjun; Li, Ruijiang; Na, Yong Hum; Xing, Lei; Lee, Rena

    2014-12-15

    Purpose: 3D optical surface imaging has been applied to patient positioning in radiation therapy (RT). The optical patient positioning system is advantageous over the conventional method using cone-beam computed tomography (CBCT) in that it is radiation free, frameless, and capable of real-time monitoring. While the conventional radiographic method uses volumetric registration, the optical system uses surface matching for patient alignment. The relative accuracy of these two methods has not yet been sufficiently investigated. This study aims to investigate the theoretical accuracy of surface registration based on a simulation study using patient data. Methods: This study compares the relative accuracy of surface and volumetric registration in head-and-neck RT. The authors examined 26 patient data sets, each consisting of planning CT data acquired before treatment and patient setup CBCT data acquired at the time of treatment. As input data for surface registration, patient skin surfaces were created by contouring the skin in the planning CT and treatment CBCT. Surface registration was performed using the iterative closest point (ICP) algorithm with a point-plane metric, which minimizes the normal distance between source points and target surfaces. Six degrees of freedom (three translations and three rotations) were used in both surface and volumetric registrations and the results were compared. The accuracy of each method was estimated by digital phantom tests. Results: Based on the results of 26 patients, the authors found that the average and maximum root-mean-square translation deviations between the surface and volumetric registrations were 2.7 and 5.2 mm, respectively. The residual error of the surface registration had an average of 0.9 mm and a maximum of 1.7 mm. Conclusions: Surface registration may lead to results different from those of the conventional volumetric registration. Only limited accuracy can be achieved for patient
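The inner step of ICP, aligning matched point sets by a least-squares rigid transform, can be sketched with the Kabsch algorithm (numpy assumed). This is the point-to-point variant, a simplification of the point-plane metric the study used, and the synthetic "skin" cloud below is purely illustrative:

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid (rotation + translation) alignment of matched
    point sets via the Kabsch algorithm: the inner step of point-to-point ICP."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    h = (source - src_c).T @ (target - tgt_c)          # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))             # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T            # optimal rotation
    t = tgt_c - r @ src_c                              # optimal translation
    return r, t

# Recover a known 3 mm lateral shift of a synthetic "skin surface" point cloud.
rng = np.random.default_rng(1)
skin = rng.uniform(-50, 50, size=(500, 3))             # mm
r, t = rigid_align(skin, skin + [3.0, 0.0, 0.0])
print(np.round(t, 1))
```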

  4. Accuracy of momentum and gyrodensity transport in global gyrokinetic particle-in-cell simulations

    SciTech Connect

    McMillan, B. F.; Villard, L.

    2014-05-15

    Gyrokinetic Particle-In-Cell (PIC) simulations based on conservative Lagrangian formalisms admit transport equations for conserved quantities such as gyrodensity and toroidal momentum, and these can be derived for arbitrary wavelength, even though previous applications have used the long-wavelength approximation. In control-variate PIC simulations, a consequence of the different treatment of the background (f_0) and perturbed parts (δf), when a splitting f = f_0 + δf is performed, is that analytical transport relations for the relevant fluxes and moments are only reproduced in the large marker number limit. The transport equations for f can be used to write the inconsistency in the perturbed quantities explicitly in terms of the sampling of the background distribution f_0. This immediately allows estimates of the error in consistency of momentum transport in control-variate PIC simulations. This inconsistency tends to accumulate secularly and is not directly affected by the sources and noise control in the system. Although physical tokamaks often rotate quite strongly, the standard gyrokinetic formalism assumes weak perpendicular flows, comparable to the drift speed. For systems with such weak flows, maintaining acceptably small relative errors requires that the number of markers scale with the fourth power of the linear system size to consistently resolve long-wavelength evolution. To avoid this unfavourable scaling, an algorithm for exact gyrodensity transport has been developed, and this is shown to allow accurate simulations with an order of magnitude fewer markers.

  5. SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors

    SciTech Connect

    Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I

    2014-06-01

    Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6MV using a Millennium 120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (a rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per image pair. Also, 560 positive tests (with error) were performed, with randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) if the percentage of pixels with γ<κ was < (or ≥) τ. (Conventionally, κ=1 and τ=90%.) A ROC curve was generated from the 616 tests by varying τ. For a range of κ and τ, true/false positive/negative rates were calculated. This procedure was repeated for inserted errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some κ+τ combination. Results: 20 mm² errors with intensity altered by ≥20% could be reliably detected, as could 10 mm² errors with intensity altered by ≥50%. Errors with smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though
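The classification rule above (errored if the fraction of pixels with γ<κ falls below τ) can be sketched directly on precomputed gamma maps; the maps and the inserted error region below are synthetic stand-ins, not real EPID data:

```python
import numpy as np

def classify(gamma_map, kappa=1.0, tau=0.90):
    """Flag a field as errored when the gamma pass rate (fraction of pixels
    with gamma < kappa) drops below tau."""
    pass_rate = np.mean(gamma_map < kappa)
    return pass_rate < tau

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 0.9, size=(64, 64))    # every pixel passes gamma
errored = clean.copy()
errored[:32, :32] = 1.5                         # simulated error region fails gamma
print(classify(clean), classify(errored))       # → False True
```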

  6. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-01

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a SNAP derivative reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate embedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  7. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    SciTech Connect

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-21

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the potential development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a liquid metal cooled reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate embedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  8. Relationships between driving simulator performance and driving test results.

    PubMed

    de Winter, J C F; de Groot, S; Mulder, M; Wieringa, P A; Dankelman, J; Mulder, J A

    2009-02-01

    This article is considered relevant because: 1) car driving is an everyday and safety-critical task; 2) simulators are used to an increasing extent for driver training (related topics: training, virtual reality, human-machine interaction); 3) the article addresses relationships between performance in the simulator and driving test results--a relevant topic for those involved in driver training and the virtual reality industries; 4) this article provides new insights about individual differences in young drivers' behaviour. Simulators are being used to an increasing extent for driver training, allowing for the possibility of collecting objective data on driver proficiency under standardised conditions. However, relatively little is known about how learner drivers' simulator measures relate to on-road driving. This study proposes a theoretical framework that quantifies driver proficiency in terms of speed of task execution, violations and errors. This study investigated the relationships between these three measures of learner drivers' (n=804) proficiency during initial simulation-based training and the result of the driving test on the road, occurring an average of 6 months later. A higher chance of passing the driving test the first time was associated with making fewer steering errors on the simulator and could be predicted in regression analysis with a correlation of 0.18. Additionally, in accordance with the theoretical framework, a shorter duration of on-road training corresponded with faster task execution, fewer violations and fewer steering errors (predictive correlation 0.45). It is recommended that researchers conduct more large-scale studies into the reliability and validity of simulator measures and on-road driving tests.

  9. Results from Binary Black Hole Simulations in Astrophysics Applications

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2007-01-01

    Present and planned gravitational wave observatories are opening a new astronomical window on the sky. A key source of gravitational waves is the merger of two black holes. The Laser Interferometer Space Antenna (LISA), in particular, is expected to observe these events with signal-to-noise ratios in the thousands. To fully reap the scientific benefits of these observations requires a detailed understanding, based on numerical simulations, of the predictions of General Relativity for the waveform signals. New techniques for simulating binary black hole mergers, introduced two years ago, have led to dramatic advances in applied numerical simulation work. Over the last two years, numerical relativity researchers have made tremendous strides in understanding the late stages of binary black hole mergers. Simulations have been applied to test much of the basic physics of binary black hole interactions, showing robust results for merger waveform predictions, and illuminating such phenomena as spin precession. Calculations have shown that merging systems can be kicked at up to 2500 km/s by the thrust from asymmetric emission. Recently, long-lasting simulations of ten or more orbits allow tests of post-Newtonian (PN) approximation results for radiation from the last orbits of the binary's inspiral. Already, analytic waveform models based on PN techniques with incorporated information from numerical simulations may be adequate for observations with current ground-based observatories. As new advances in simulations continue to rapidly improve our theoretical understanding of these systems, it seems certain that high-precision predictions will be available in time for LISA and other advanced ground-based instruments.

  10. Numerical accuracy assessment

    NASA Astrophysics Data System (ADS)

    Boerstoel, J. W.

    1988-12-01

    A framework is provided for numerical accuracy assessment. The purpose of numerical flow simulations is formulated. This formulation concerns the classes of aeronautical configurations (boundaries), the desired flow physics (flow equations and their properties), the classes of flow conditions on flow boundaries (boundary conditions), and the initial flow conditions. Next, accuracy and economical performance requirements are defined: the final numerical flow simulation results of interest should have a guaranteed accuracy, and be produced for an acceptable FLOP-price. Within this context, the validation of numerical processes with respect to the well-known topics of consistency, stability, and convergence under mesh refinement must be done by numerical experimentation, because theory gives only partial answers. This requires careful design of test cases for numerical experimentation. Finally, the results of a few recent evaluation exercises of numerical experiments with a large number of codes on a few test cases are summarized.

  11. Accuracy of Korean-Mini-Mental Status Examination Based on Seoul Neuro-Psychological Screening Battery II Results

    PubMed Central

    Kang, In-Woong; Beom, In-Gyu; Cho, Ji-Yeon

    2016-01-01

    Background The Korean-Mini-Mental Status Examination (K-MMSE) is a dementia-screening test that can be easily applied in both community and clinical settings. However, in 20% to 30% of cases, the K-MMSE produces a false-negative response. This suggests that it is necessary to evaluate the accuracy of the K-MMSE as a screening test for dementia, which can be achieved through comparison of K-MMSE and Seoul Neuropsychological Screening Battery (SNSB)-II results. Methods The study included 713 subjects (male 534, female 179; mean age, 69.3±6.9 years). All subjects were assessed using the K-MMSE and SNSB-II tests, the results of which were classified as normal or abnormal using a 15th-percentile cutoff. Results The sensitivity of the K-MMSE was 48.7%, with a specificity of 89.9%. The incidences of false-positive and false-negative results totaled 10.1% and 51.2%, respectively. In addition, the positive predictive value of the K-MMSE was 87.1%, while the negative predictive value was 55.6%. The false-negative group showed cognitive impairments in the domains of memory and executive function. In the false-positive group, subjects demonstrated reduced performance on the memory recall, time orientation, attention, and calculation items of the K-MMSE. Conclusion The results obtained in the study suggest that cognitive function might still be impaired even if an individual obtains a normal score on the K-MMSE. If the K-MMSE is combined with tests of memory or executive function, the accuracy of dementia diagnosis could be greatly improved. PMID:27274389
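The screening metrics reported above follow from the 2×2 table of K-MMSE results against the SNSB-II reference standard. A minimal sketch with hypothetical counts chosen to roughly mimic the reported rates (not the study's raw data):

```python
# Hypothetical 2x2 screening counts (illustrative only):
tp, fn = 195, 205   # impaired on SNSB-II: detected / missed by K-MMSE
tn, fp = 285, 32    # normal on SNSB-II: correctly passed / false alarms

sensitivity = tp / (tp + fn)   # fraction of impaired subjects detected
specificity = tn / (tn + fp)   # fraction of normal subjects correctly passed
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value
print(f"sens={sensitivity:.1%} spec={specificity:.1%} ppv={ppv:.1%} npv={npv:.1%}")
```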

  12. Simulation of diurnal thermal energy storage systems: Preliminary results

    NASA Astrophysics Data System (ADS)

    Katipamula, S.; Somasundaram, S.; Williams, H. R.

    1994-12-01

    This report describes the results of a simulation of thermal energy storage (TES) integrated with a simple-cycle gas turbine cogeneration system. Integrating TES with cogeneration can serve the electrical and thermal loads independently while firing all fuel in the gas turbine. The detailed engineering and economic feasibility of diurnal TES systems integrated with cogeneration systems has been described in two previous PNL reports. The objective of this study was to lay the groundwork for optimization of TES system designs using a simulation tool called TRNSYS (TRaNsient SYstem Simulation). TRNSYS is a transient simulation program with a sequential-modular structure developed at the Solar Energy Laboratory, University of Wisconsin-Madison. The two TES systems selected for the base-case simulations were: (1) a one-tank storage model to represent the oil/rock TES system; and (2) a two-tank storage model to represent the molten nitrate salt TES system. Results of the study clearly indicate that an engineering optimization of the TES system using TRNSYS is possible. The one-tank stratified oil/rock storage model described here is a good starting point for parametric studies of a TES system. Further additions to the TRNSYS library of available models (economizer, evaporator, gas turbine, etc.) are recommended so that the phase-change process is accurately treated.

  13. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly on the basis of random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5 nm, it becomes crucial to also include systematic error contributions, which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections, and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1 nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy of ~10 nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than that of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by a recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  14. Tempest: Mesoscale test case suite results and the effect of order-of-accuracy on pressure gradient force errors

    NASA Astrophysics Data System (ADS)

    Guerra, J. E.; Ullrich, P. A.

    2014-12-01

    Tempest is a new non-hydrostatic atmospheric modeling framework that allows for investigation and intercomparison of high-order numerical methods. It is composed of a dynamical core based on a finite-element formulation of arbitrary order, operating on cubed-sphere and Cartesian meshes with topography. The underlying technology is briefly discussed, including a novel Hybrid Finite Element Method (HFEM) vertical coordinate coupled with high-order Implicit/Explicit (IMEX) time integration to control vertically propagating sound waves. Here, we show results from a suite of mesoscale test cases from the literature that demonstrate the accuracy, performance, and properties of Tempest on regular Cartesian meshes. The test cases include wave propagation behavior, Kelvin-Helmholtz instabilities, and flow interaction with topography. Comparisons are made to existing results, highlighting improvements in resolving atmospheric dynamics in the vertical direction, where many existing methods are deficient.

  15. Simulation results for the electron-cloud at the PSR

    SciTech Connect

    Furman, M.A.; Pivi, M.

    2001-06-26

    We present a first set of computer simulations for the main features of the electron cloud at the Proton Storage Ring (PSR), particularly its energy spectrum. We compare our results with recent measurements, which have been obtained by means of dedicated probes.

  16. Dosimetric accuracy assessment of a treatment plan verification system for scanned proton beam radiotherapy: one-year experimental results and Monte Carlo analysis of the involved uncertainties.

    PubMed

    Molinelli, S; Mairani, A; Mirandola, A; Vilches Freixas, G; Tessonnier, T; Giordanengo, S; Parodi, K; Ciocca, M; Orecchia, R

    2013-06-07

    During one year of clinical activity at the Italian National Center for Oncological Hadron Therapy 31 patients were treated with actively scanned proton beams. Results of patient-specific quality assurance procedures are presented here which assess the accuracy of a three-dimensional dose verification technique with the simultaneous use of multiple small-volume ionization chambers. To investigate critical cases of major deviations between treatment planning system (TPS) calculated and measured data points, a Monte Carlo (MC) simulation tool was implemented for plan verification in water. Starting from MC results, the impact of dose calculation, dose delivery and measurement set-up uncertainties on plan verification results was analyzed. All resulting patient-specific quality checks were within the acceptance threshold, which was set at 5% for both mean deviation between measured and calculated doses and standard deviation. The mean deviation between TPS dose calculation and measurement was less than ±3% in 86% of the cases. When all three sources of uncertainty were accounted for, simulated data sets showed a high level of agreement, with mean and maximum absolute deviation lower than 2.5% and 5%, respectively.

  17. Dosimetric accuracy assessment of a treatment plan verification system for scanned proton beam radiotherapy: one-year experimental results and Monte Carlo analysis of the involved uncertainties

    NASA Astrophysics Data System (ADS)

    Molinelli, S.; Mairani, A.; Mirandola, A.; Vilches Freixas, G.; Tessonnier, T.; Giordanengo, S.; Parodi, K.; Ciocca, M.; Orecchia, R.

    2013-06-01

    During one year of clinical activity at the Italian National Center for Oncological Hadron Therapy 31 patients were treated with actively scanned proton beams. Results of patient-specific quality assurance procedures are presented here which assess the accuracy of a three-dimensional dose verification technique with the simultaneous use of multiple small-volume ionization chambers. To investigate critical cases of major deviations between treatment planning system (TPS) calculated and measured data points, a Monte Carlo (MC) simulation tool was implemented for plan verification in water. Starting from MC results, the impact of dose calculation, dose delivery and measurement set-up uncertainties on plan verification results was analyzed. All resulting patient-specific quality checks were within the acceptance threshold, which was set at 5% for both mean deviation between measured and calculated doses and standard deviation. The mean deviation between TPS dose calculation and measurement was less than ±3% in 86% of the cases. When all three sources of uncertainty were accounted for, simulated data sets showed a high level of agreement, with mean and maximum absolute deviation lower than 2.5% and 5%, respectively.

  18. Leveraging data analytics, patterning simulations and metrology models to enhance CD metrology accuracy for advanced IC nodes

    NASA Astrophysics Data System (ADS)

    Rana, Narender; Zhang, Yunlin; Kagalwala, Taher; Hu, Lin; Bailey, Todd

    2014-04-01

    Integrated Circuit (IC) technology is changing in multiple ways: 193i to EUV exposure, planar to non-planar device architectures, and single-exposure lithography to multiple-exposure and DSA patterning. Critical dimension (CD) control requirements are becoming stringent and more exhaustive: CD and process windows are shrinking, three-sigma CD control of < 2 nm is required in complex geometries, and metrology uncertainty of < 0.2 nm is required to achieve the target CD control for advanced IC nodes (e.g., the 14 nm, 10 nm and 7 nm nodes). There are fundamental capability and accuracy limits in all the metrology techniques that are detrimental to the success of advanced IC nodes. Reference or physical CD metrology is provided by CD-AFM and TEM, while workhorse metrology is provided by CD-SEM, scatterometry, and Model Based Infrared Reflectometry (MBIR). Precision alone is not sufficient moving forward, and no single technique is sufficient to ensure the required accuracy of patterning. The accuracy of CD-AFM is ~1 nm, and precision in TEM is poor due to limited statistics. CD-SEM, scatterometry and MBIR need to be calibrated by reference measurements to ensure the accuracy of patterned CDs and patterning models. There is a dire need for measurements with < 0.5 nm accuracy, and the industry currently does not have that capability inline. Aware of the capability gaps of the various metrology techniques, we have employed data processing and predictive data analytics, along with patterning simulations, metrology models, and data integration techniques, in selected applications to demonstrate the potential and practicality of such an approach to enhance CD metrology accuracy. Data from multiple metrology techniques has been analyzed in multiple ways to extract information with associated uncertainties, and integrated to extract useful and more accurate CD and profile information of the structures. This paper presents the optimization of
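    The data-integration idea described above can be illustrated with a minimal sketch: combining the same CD measured by several techniques via inverse-variance weighting, which yields a fused estimate with lower uncertainty than any single technique. The technique list, values and uncertainties below are illustrative assumptions, not the paper's actual data or method.

    ```python
    def fuse_measurements(values, sigmas):
        """Inverse-variance weighted mean of the same CD measured by several
        techniques, plus the combined 1-sigma uncertainty. A minimal sketch of
        the kind of data integration described above; numbers are hypothetical."""
        weights = [1.0 / s ** 2 for s in sigmas]     # more precise tools weigh more
        wsum = sum(weights)
        mean = sum(w * v for w, v in zip(weights, values)) / wsum
        return mean, wsum ** -0.5                    # fused CD and its uncertainty

    # Hypothetical CD (nm) from CD-SEM, scatterometry, and CD-AFM
    cd, sigma = fuse_measurements([18.4, 18.9, 18.6], [0.5, 0.4, 1.0])
    ```

    The fused uncertainty is always smaller than the best individual one, which is the point of combining techniques rather than relying on a single workhorse tool.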

  19. Autonomous navigation accuracy using simulated horizon sensor and sun sensor observations

    NASA Technical Reports Server (NTRS)

    Pease, G. E.; Hendrickson, H. T.

    1980-01-01

    A relatively simple autonomous system which would use horizon crossing indicators, a sun sensor, a quartz oscillator, and a microprogrammed computer is discussed. The sensor combination is required only to effectively measure the angle between the centers of the Earth and the Sun. Simulations for a particular orbit indicate that 2 km r.m.s. orbit determination uncertainties may be expected from a system with 0.06 deg measurement uncertainty. A key finding is that knowledge of the satellite orbit plane orientation can be maintained to this level because of the annual motion of the Sun and the predictable effects of Earth oblateness. The basic system described can be updated periodically by transits of the Moon through the IR horizon crossing indicator fields of view.
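    The core observable of the system described above is the angle between the centers of the Earth and the Sun as seen from the spacecraft. A minimal sketch of how such an angle follows from two direction vectors via the dot product (the vectors below are hypothetical, not flight data):

    ```python
    import math

    def earth_sun_angle(r_earth, r_sun):
        """Angle (radians) between the Earth and Sun direction vectors as seen
        from the spacecraft, computed from the dot product of the two vectors."""
        dot = sum(a * b for a, b in zip(r_earth, r_sun))
        n1 = math.sqrt(sum(a * a for a in r_earth))
        n2 = math.sqrt(sum(b * b for b in r_sun))
        # clamp guards against tiny floating-point overshoot outside [-1, 1]
        return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

    # Example: orthogonal directions give a 90-degree separation
    angle = earth_sun_angle((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
    ```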

  20. Examining the Accuracy of Astrophysical Disk Simulations with a Generalized Hydrodynamical Test Problem

    NASA Astrophysics Data System (ADS)

    Raskin, Cody; Owen, J. Michael

    2016-11-01

    We discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.
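    The force balance behind a disk with both rotational and pressure support can be sketched as follows: an equilibrium ring satisfies v²/r = GM/r² + (1/ρ)dP/dr, so an outward-decreasing pressure carries part of the support and lowers the rotation speed below Keplerian. The dimensionless values below are hypothetical and not the paper's actual test setup.

    ```python
    import math

    def rotation_speed(r, GM, dPdr_over_rho):
        """Equilibrium azimuthal speed of a disk ring supported by both gravity
        and pressure: v^2/r = GM/r^2 + (1/rho)*dP/dr. A negative (outward-
        decreasing) pressure gradient supplies part of the support, so v drops
        below the Keplerian value sqrt(GM/r)."""
        v_squared = GM / r + r * dPdr_over_rho
        return math.sqrt(v_squared)

    # Dimensionless illustration: pure Keplerian vs. partial pressure support
    v_kep = rotation_speed(1.0, 1.0, 0.0)
    v_sub = rotation_speed(1.0, 1.0, -0.2)
    ```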

  1. Reaction cross sections for two direct simulation Monte Carlo models: Accuracy and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Wysong, Ingrid; Gimelshein, Sergey; Gimelshein, Natalia; McKeon, William; Esposito, Fabrizio

    2012-04-01

    The quantum kinetic chemical reaction model proposed by Bird for the direct simulation Monte Carlo method is based on collision kinetics with no assumed Arrhenius-related parameters. It demonstrates excellent agreement with the best estimates of thermal reaction rate coefficients and with two-temperature nonequilibrium rate coefficients for high-temperature air reactions. This paper investigates the model further, concentrating on non-thermal reaction cross sections as a function of collision energy, and compares its predictions with those of the earlier total collision energy model, also by Bird, as well as with available quasi-classical trajectory cross section predictions (this paper also publishes, for the first time, a table of these computed reaction cross sections). A rarefied hypersonic flow over a cylinder is used to examine the sensitivity of the number of exchange reactions to the differences between the two models under a strongly nonequilibrium velocity distribution.
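    The earlier total collision energy model contrasted above is built on modified Arrhenius rate fits, i.e. k(T) = A·Tⁿ·exp(−Ea/(kB·T)). A minimal sketch of that rate form, with purely illustrative parameters (not the fits used in the paper):

    ```python
    import math

    KB = 1.380649e-23  # Boltzmann constant, J/K

    def arrhenius_rate(T, A, n, Ea):
        """Modified Arrhenius rate coefficient k(T) = A * T**n * exp(-Ea/(kB*T)).
        A, n, Ea are reaction-specific fit parameters; the values used below
        are hypothetical, chosen only to show the temperature dependence."""
        return A * T ** n * math.exp(-Ea / (KB * T))

    # Illustrative parameters only
    k_5000 = arrhenius_rate(5000.0, A=1.0e-12, n=-0.5, Ea=1.0e-18)
    ```

    The quantum kinetic model's appeal, as the abstract notes, is that it needs no such fitted parameters: reaction probabilities come directly from collision kinetics.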

  2. The accuracy of simulated indoor time trials utilizing a CompuTrainer and GPS data.

    PubMed

    Peveler, Willard W

    2013-10-01

    The CompuTrainer is commonly used to measure cycling time trial performance in a laboratory setting. Previous research has demonstrated that the CompuTrainer tends to underestimate power at higher workloads but provides reliable measures. The extent to which the CompuTrainer is capable of simulating outdoor time trials in a laboratory setting has yet to be examined. The purpose of this study was to examine the validity of replicating an outdoor time trial course indoors by comparing completion times between the actual time trial course and its replication on the CompuTrainer. A global positioning system was used to collect data points along a local outdoor time trial course. Data were then downloaded and converted into a time trial course for the CompuTrainer. Eleven recreational to highly trained cyclists participated in this study. To participate, subjects had to have completed a minimum of 2 of the local Cleves time trial races. Subjects completed 2 simulated indoor time trials on the CompuTrainer. The mean finishing time for the indoor performance trials (34.58 ± 8.63 minutes) was significantly slower than the mean outdoor performance time (26.24 ± 3.23 minutes). Cyclists' finish times increased (performance decreased) by 24% on the indoor time trials relative to the mean outdoor times. There were no significant differences between CompuTrainer trial 1 (34.77 ± 8.54 minutes) and CompuTrainer trial 2 (34.37 ± 8.76 minutes). Because of the significant differences in times between the indoor and outdoor time trials, meaningful comparisons of performance times cannot be made between the two. However, because no significant differences were found between the 2 CompuTrainer trials, the CompuTrainer can still be recommended for repeated laboratory testing.
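    The trial-to-trial comparison described above is a paired design (the same riders complete both trials), for which a paired t-test is the natural check. A minimal sketch with hypothetical finish times, not the study's raw data:

    ```python
    import math

    def paired_t(x, y):
        """Paired t statistic and degrees of freedom for two sets of finish
        times from the same riders. Data below are hypothetical."""
        d = [a - b for a, b in zip(x, y)]
        n = len(d)
        mean_d = sum(d) / n
        var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)  # sample variance
        t = mean_d / math.sqrt(var_d / n)
        return t, n - 1

    # Hypothetical finish times (minutes) for 5 riders on the two indoor trials
    trial1 = [34.1, 30.2, 41.5, 28.9, 36.0]
    trial2 = [33.8, 30.6, 41.0, 29.2, 35.7]
    t_stat, df = paired_t(trial1, trial2)
    ```

    A small |t| relative to the critical value for the given degrees of freedom corresponds to the "no significant difference between trials" finding reported above.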

  3. ANOVA parameters influence in LCF experimental data and simulation results

    NASA Astrophysics Data System (ADS)

    Delprete, C.; Sesanaa, R.; Vercelli, A.

    2010-06-01

    The virtual design of components undergoing thermomechanical fatigue (TMF) and plastic strains is usually run in many phases. The numerical finite element method provides a useful instrument which becomes increasingly effective as the geometrical and numerical modelling becomes more accurate. The definition of the constitutive model plays an important role in the effectiveness of the numerical simulation [1, 2], as shown, for example, in Figure 1, which illustrates how a good cyclic plasticity constitutive model can simulate a cyclic load experiment. Component life estimation is the subsequent phase, and it requires complex damage and life estimation models [3-5] which take into account the several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. The main topic of the present research activity is to investigate whether the parameters that prove influential in the experimental activity also influence the numerical simulations, thus establishing the effectiveness of the models in accounting for all the phenomena that actually influence the life of the component. To this end, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. The procedure aims to be simple and to allow calibration of both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity comprised three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests, and low cycle fatigue (LCF) tests. The numerical structural FEM simulations were run on a commercial nonlinear solver, ABAQUS® 6.8, and replicated the experimental tests.
The stress, strain, thermal results from the thermo structural FEM

  4. First results of coupled IPS/NIMROD/GENRAY simulations

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas; Kruger, S. E.; Held, E. D.; Harvey, R. W.; Elwasif, W. R.; Schnack, D. D.

    2010-11-01

    The Integrated Plasma Simulator (IPS) framework, developed by the SWIM Project Team, facilitates self-consistent simulations of complicated plasma behavior via the coupling of various codes modeling different spatial/temporal scales in the plasma. Here, we apply this capability to investigate the stabilization of tearing modes by ECCD. Under IPS control, the NIMROD code (MHD) evolves fluid equations to model bulk plasma behavior, while the GENRAY code (RF) calculates the self-consistent propagation and deposition of RF power in the resulting plasma profiles. GENRAY data is then used to construct moments of the quasilinear diffusion tensor (induced by the RF) which influence the dynamics of momentum/energy evolution in NIMROD's equations. We present initial results from these coupled simulations and demonstrate that they correctly capture the physics of magnetic island stabilization [Jenkins et al, PoP 17, 012502 (2010)] in the low-beta limit. We also discuss the process of code verification in these simulations, demonstrating good agreement between NIMROD and GENRAY predictions for the flux-surface-averaged, RF-induced currents. An overview of ongoing model development (synthetic diagnostics/plasma control systems; neoclassical effects; etc.) is also presented. Funded by US DoE.

  5. Comparative evaluation of the accuracy of two electronic apex locators in determining the working length in teeth with simulated apical root resorption: An in vitro study

    PubMed Central

    Saraswathi, Vidya; Kedia, Archit; Purayil, Tina Puthen; Ballal, Vasudev; Saini, Aakriti

    2016-01-01

    Introduction: Accurate determination of working length (WL) is a critical factor for endodontic success. This is commonly achieved using an apex locator, which is influenced by the presence or absence of the apical constriction. Hence, this study was done to compare the accuracy of two generations of apex locators in teeth with simulated apical root resorption. Materials and Methods: Forty maxillary central incisors were selected and, after access preparation, were embedded in an alginate mold. On achieving a partial set, the teeth were removed, and a 45° oblique cut was made at the apex. The teeth were replanted and stabilized in the mold, and WL was determined using two generations of apex locators (Raypex 5 and Apex NRG XFR). The actual length of the teeth (control) was determined by the visual method. Statistical Analysis: Results were subjected to statistical analysis using the paired t-test. Results: Raypex 5 and Apex NRG were accurate for only 33.75% and 23.75% of samples, respectively. However, with a ±0.5 mm acceptance limit, they showed average accuracies of 56.2% and 57.5%, respectively. There was no significant difference in accuracy between the two apex locators. Conclusion: Neither of the two apex locators was 100% accurate in determining the WL. PMID:27656055
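    The "accuracy within a ±0.5 mm acceptance limit" reported above is simply the fraction of readings falling within a tolerance of the true length. A minimal sketch with hypothetical readings (not the study's data):

    ```python
    def within_tolerance(measured, actual, tol=0.5):
        """Percentage of working-length readings within +/- tol mm of the
        actual length. Readings below are hypothetical."""
        hits = sum(1 for m, a in zip(measured, actual) if abs(m - a) <= tol)
        return 100.0 * hits / len(measured)

    # Hypothetical WL readings (mm) vs. visually determined actual lengths
    measured = [21.0, 22.4, 20.1, 23.0]
    actual = [21.2, 22.0, 20.9, 23.1]
    pct = within_tolerance(measured, actual)
    ```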

  6. Technical Note: Maximising accuracy and minimising cost of a potentiometrically regulated ocean acidification simulation system

    NASA Astrophysics Data System (ADS)

    MacLeod, C. D.; Doyle, H. L.; Currie, K. I.

    2015-02-01

    This article describes a potentiometric ocean acidification simulation system which automatically regulates pH through the injection of 100% CO2 gas into temperature-controlled seawater. The system is ideally suited to long-term experimental studies of the effect of acidification on biological processes involving small-bodied (10-20 mm) calcifying or non-calcifying organisms. Using hobbyist-grade equipment, the system was constructed for approximately USD 1200 per treatment unit (tank, pH regulation apparatus, chiller, pump/filter unit). An overall tolerance of ±0.05 pHT units (SD) was achieved over 90 days in two acidified treatments (7.60 and 7.40) at 12 °C using glass electrodes calibrated with synthetic seawater buffers, thereby preventing liquid junction error. The performance of the system was validated through the independent calculation of pHT (12 °C) using dissolved inorganic carbon and total alkalinity data taken from discrete acidified seawater samples. The system was used to compare the shell growth of the marine gastropod Zeacumantus subcarinatus infected with the trematode parasite Maritrema novaezealandensis with that of uninfected snails at pH levels of 7.4, 7.6, and 8.1.

  7. Numerical Simulation of Micronozzles with Comparison to Experimental Results

    NASA Astrophysics Data System (ADS)

    Thornber, B.; Chesta, E.; Gloth, O.; Brandt, R.; Schwane, R.; Perigo, D.; Smith, P.

    2004-10-01

    A numerical analysis of conical micronozzle flows has been conducted using the commercial software package CFD-RC FASTRAN [13]. The numerical results have been validated by comparison with direct thrust and mass flow measurements recently performed in ESTEC Propulsion Laboratory on Polyflex Space Ltd. 10mN Cold-Gas thrusters in the frame of ESA CryoSat mission. The flow is viscous dominated, with a throat Reynolds number of 5000, and the relatively large length of the nozzle causes boundary layer effects larger than usual for nozzles of this size. This paper discusses in detail the flow physics such as boundary layer growth and structure, and the effects of rarefaction. Furthermore a number of different domain sizes and exit boundary conditions are used to determine the optimum combination of computational time and accuracy.

  8. Electronic medical record in the simulation hospital: does it improve accuracy in charting vital signs, intake, and output?

    PubMed

    Mountain, Carel; Redd, Roxanne; O'Leary-Kelly, Colleen; Giles, Kim

    2015-04-01

    Nursing care delivery has shifted in response to the introduction of electronic health records. Adequate education using computerized documentation heavily influences a nurse's ability to navigate and utilize electronic medical records. The risk for treatment error increases when a bedside nurse lacks the correct knowledge and skills regarding electronic medical record documentation. Prelicensure nursing education should introduce electronic medical record documentation and provide a method for feedback from instructors to ensure proper understanding and use of this technology. RN preceptors evaluated two groups of associate degree nursing students to determine if introduction of electronic medical record in the simulation hospital increased accuracy in documenting vital signs, intake, and output in the actual clinical setting. During simulation, the first group of students documented using traditional paper and pen; the second group used an academic electronic medical record. Preceptors evaluated each group during their clinical rotations at two local inpatient facilities. RN preceptors provided information by responding to a 10-question Likert scale survey regarding the use of student electronic medical record documentation during the 120-hour inpatient preceptor rotation. The implementation of the electronic medical record into the simulation hospital, although a complex undertaking, provided students a safe and supportive environment in which to practice using technology and receive feedback from faculty regarding accurate documentation.

  9. Windblown sand on Venus - Preliminary results of laboratory simulations

    NASA Technical Reports Server (NTRS)

    Greeley, R.; Iversen, J.; Leach, R.; Marshall, J.; Williams, S.; White, B.

    1984-01-01

    Small particles and winds of sufficient strength to move them have been detected from Venera and Pioneer-Venus data and suggest the existence of aeolian processes on Venus. The Venus wind tunnel (VWT) was fabricated in order to investigate the behavior of windblown particles in a simulated Venusian environment. Preliminary results show that sand-size material is readily entrained at the wind speeds detected on Venus and that saltating grains achieve velocities closely matching those of the wind. Measurements of saltation threshold and particle flux for various particle sizes have been compared with theoretical models which were developed by extrapolation of findings from Martian and terrestrial simulations. Results are in general agreement with theory, although certain discrepancies are apparent which may be attributed to experimental and/or theoretical-modeling procedures. Present findings enable a better understanding of Venusian surface processes and suggest that aeolian processes are important in the geological evolution of Venus.

  10. ENTROPY PRODUCTION IN COLLISIONLESS SYSTEMS. III. RESULTS FROM SIMULATIONS

    SciTech Connect

    Barnes, Eric I.; Egerer, Colin P. E-mail: egerer.coli@uwlax.edu

    2015-05-20

    The equilibria formed by the self-gravitating, collisionless collapse of simple initial conditions have been investigated for decades. We present the results of our attempts to describe the equilibria formed in N-body simulations using thermodynamically motivated models. Previous work has suggested that it is possible to define distribution functions for such systems that describe maximum entropy states. These distribution functions are used to create radial density and velocity distributions for comparison to those from simulations. A wide variety of N-body code conditions are used to reduce the chance that results are biased by numerical issues. We find that a subset of initial conditions studied lead to equilibria that can be accurately described by these models, and that direct calculation of the entropy shows maximum values being achieved.

  11. Key results from SB8 simulant flowsheet studies

    SciTech Connect

    Koopman, D. C.

    2013-04-26

    Key technically reviewed results are presented here in support of the Defense Waste Processing Facility (DWPF) acceptance of Sludge Batch 8 (SB8). This report summarizes results from simulant flowsheet studies of the DWPF Chemical Process Cell (CPC). Results include: Hydrogen generation rate for the Sludge Receipt and Adjustment Tank (SRAT) and Slurry Mix Evaporator (SME) cycles of the CPC on a 6,000 gallon basis; Volume percent of nitrous oxide, N2O, produced during the SRAT cycle; Ammonium ion concentrations recovered from the SRAT and SME off-gas; and, Dried weight percent solids (insoluble, soluble, and total) measurements and density.

  12. Comprehensive simulation of the middle atmospheric climate: some recent results

    NASA Astrophysics Data System (ADS)

    Hamilton, Kevin

    1995-05-01

    This study discusses the results of comprehensive time-dependent, three-dimensional numerical modelling of the circulation in the middle atmosphere obtained with the GFDL “SKYHI” troposphere-stratosphere-mesosphere general circulation model (GCM). The climate in a long control simulation with an intermediate resolution version (≈3° in horizontal) is briefly reviewed. While many aspects of the simulation are quite realistic, the focus in this study is on remaining first-order problems with the modelled middle atmospheric general circulation, notably the very cold high latitude temperatures in the Southern Hemisphere (SH) winter/spring, and the virtual absence of a quasi-biennial oscillation (QBO) in the tropical stratosphere. These problems are shared by other extant GCMs. It was noted that the SH cold pole problem is somewhat ameliorated with increasing horizontal resolution in the model. This suggests that improved resolution increases the vertical momentum fluxes from the explicitly resolved gravity waves in the model, a point confirmed by detailed analysis of the spectrum of vertical eddy momentum flux in the winter SH extratropics. This result inspired a series of experiments with the 3° SKYHI model modified by adding a prescribed zonally-symmetric zonal drag on the SH winter westerlies. The form of the imposed momentum source was based on the simple assumption that the mean flow drag produced by unresolved waves has a spatial distribution similar to that of the Eliassen-Palm flux divergence associated with explicitly resolved gravity waves. It was found that an appropriately-chosen drag confined to the top six model levels (above 0.35 mb) can lead to quite realistic simulations of the SH winter flow (including even the stationary wave fields) through August, but that problems still remain in the late-winter/springtime simulation. While the imposed momentum source was largely confined to the extratropics, it produced considerable improvement in the

  13. On the near space population from simulation results

    NASA Astrophysics Data System (ADS)

    Tischenko, V. I.

    A new computer technology module for studying meteoroid complexes is proposed. The space structure is represented by orbital fragments, visualized from a simulated cometary nucleus disintegration. A modelled section in the ecliptic plane is shown, presenting the form of the complex and its inner structure. This representation can be used to analyse how space is filled and to establish potentially dangerous regions near the complex and near specific planetary orbits or other object routes. Main results for specific comets are given.

  14. The Mayfield method of estimating nesting success: A model, estimators and simulation results

    USGS Publications Warehouse

    Hensler, G.L.; Nichols, J.D.

    1981-01-01

    Using a nesting model proposed by Mayfield we show that the estimator he proposes is a maximum likelihood estimator (m.l.e.). M.l.e. theory allows us to calculate the asymptotic distribution of this estimator, and we propose an estimator of the asymptotic variance. Using these estimators we give approximate confidence intervals and tests of significance for daily survival. Monte Carlo simulation results show the performance of our estimators and tests under many sets of conditions. A traditional estimator of nesting success is shown to be quite inferior to the Mayfield estimator. We give sample sizes required for a given accuracy under several sets of conditions.
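    The estimator described above can be sketched directly: the Mayfield daily survival rate is one minus losses per exposure-day, and nesting success over a d-day period is that rate raised to the d-th power. The variance formula used here is the standard normal-theory approximation for this m.l.e. (Johnson's form); the counts below are hypothetical.

    ```python
    import math

    def mayfield(losses, exposure_days, nest_period):
        """Mayfield daily survival rate (the m.l.e. described above), an
        approximate 95% confidence interval from the asymptotic variance,
        and overall nesting success over `nest_period` days."""
        s = 1.0 - losses / exposure_days                       # daily survival
        var = losses * (exposure_days - losses) / exposure_days ** 3
        half = 1.96 * math.sqrt(var)                           # ~95% half-width
        return s, (s - half, s + half), s ** nest_period

    # e.g. 20 nest losses observed over 1000 exposure-days, 25-day nest period
    s, ci, success = mayfield(20, 1000.0, 25)
    ```

    The naive "successful nests / total nests" estimator criticized in the paper ignores exposure and is biased high, which is why the exposure-day formulation above is preferred.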

  15. The diagnostic accuracy of a single CEA blood test in detecting colorectal cancer recurrence: Results from the FACS trial

    PubMed Central

    Nicholson, Brian D.; Primrose, John; Perera, Rafael; James, Timothy; Pugh, Sian; Mant, David

    2017-01-01

    Objective To evaluate the diagnostic accuracy of a single CEA (carcinoembryonic antigen) blood test in detecting colorectal cancer recurrence. Background Patients who have undergone curative resection for primary colorectal cancer are typically followed up with scheduled CEA testing for 5 years. Decisions to investigate further (usually by CT imaging) are based on single test results, reflecting international guidelines. Methods A secondary analysis was undertaken of data from the FACS trial (two arms included CEA testing). The composite reference standard applied included CT-CAP imaging, clinical assessment and colonoscopy. Accuracy in detecting recurrence was evaluated in terms of sensitivity, specificity, likelihood ratios, predictive values and time-dependent area under the ROC curve; operational performance when the test was used prospectively in clinical practice is also reported. Results Of 582 patients, 104 (17.9%) developed recurrence during the 5 year follow-up period. Applying the recommended threshold of 5μg/L achieves at best 50.0% sensitivity (95% CI: 40.1–59.9%); in prospective use in clinical practice it would lead to 56 missed recurrences (53.8%; 95% CI: 44.2–64.4%) and 89 false alarms (56.7% of 157 patients referred for investigation). Applying a lower threshold of 2.5μg/L would reduce the missed recurrences to 36.5% (95% CI: 26.5–46.5%) but would increase the false alarms to 84.2% (924/1097 referred). Some patients are more prone to false alarms than others: at the 5μg/L threshold, the 89 episodes of unnecessary investigation were clustered in 29 individuals. Conclusion Our results demonstrate very low sensitivity for CEA, bringing into question whether it could ever be used as an independent triage test. It is not feasible to improve the diagnostic performance of a single test result by reducing the recommended action threshold because of the workload and false alarms generated. Current national and international guidelines merit re
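    The accuracy measures reported above all derive from a 2×2 confusion table at a given test threshold. A minimal sketch with illustrative counts (the numbers are hypothetical, chosen only to mirror a ~50% sensitivity scenario, not the FACS trial data):

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, positive predictive value and positive
        likelihood ratio from a 2x2 confusion table. Counts are hypothetical."""
        sens = tp / (tp + fn)            # recurrences correctly flagged
        spec = tn / (tn + fp)            # non-recurrences correctly cleared
        ppv = tp / (tp + fp)             # flagged patients who truly recurred
        lr_pos = sens / (1.0 - spec)     # how much a positive shifts the odds
        return sens, spec, ppv, lr_pos

    # Illustrative counts only
    sens, spec, ppv, lr = diagnostic_metrics(tp=52, fp=89, fn=52, tn=389)
    ```

    A low PPV at a clinically plausible threshold is exactly the false-alarm workload problem the trial's prospective data exposed.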

  16. Influence of electron density spatial distribution and X-ray beam quality during CT simulation on dose calculation accuracy.

    PubMed

    Nobah, Ahmad; Moftah, Belal; Tomic, Nada; Devic, Slobodan

    2011-04-06

    The impact of the various kVp settings used during computed tomography (CT) simulation, which provides the data for heterogeneity-corrected dose distribution calculations in patients undergoing external beam radiotherapy with either high-energy photon or electron beams, has been investigated. The change in Hounsfield Unit (HU) values due to the influence of kVp settings and the geometrical distribution of various tissue substitute materials has also been studied. The impact of various kVp settings and electron density (ED) distributions on the accuracy of dose calculation in high-energy photon beams was found to be well within 2%. In the case of dose distributions obtained with a commercially available Monte Carlo dose calculation algorithm for electron beams, differences of more than 10% were observed for different geometrical setups and kVp settings. Dose differences for the electron beams are relatively small at shallow depths but increase with depth around the lower isodose values.
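    The kVp dependence matters because dose engines map each HU value to a relative electron density through a calibration curve, and that curve is only valid for the beam quality it was measured at. A minimal sketch of such a lookup by linear interpolation, assuming a generic piecewise-linear calibration (the points below are hypothetical, not a clinical table):

    ```python
    def hu_to_relative_ed(hu, calib):
        """Relative electron density for a Hounsfield Unit value by linear
        interpolation through a (HU, relative ED) calibration curve. The
        calibration points used below are hypothetical."""
        pts = sorted(calib)
        if hu <= pts[0][0]:
            return pts[0][1]
        for (h0, e0), (h1, e1) in zip(pts, pts[1:]):
            if hu <= h1:
                return e0 + (e1 - e0) * (hu - h0) / (h1 - h0)
        return pts[-1][1]                # clamp above the last point

    # Hypothetical two-segment calibration: air, water, dense bone
    calib = [(-1000, 0.0), (0, 1.0), (1500, 1.85)]
    ed_water = hu_to_relative_ed(0, calib)
    ```

    Scanning the same phantom at a different kVp shifts the HU values (especially for bone-like materials), so applying a curve measured at another beam quality biases the densities the dose calculation sees.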

  17. Improved Accuracy of Continuous Glucose Monitoring Systems in Pediatric Patients with Diabetes Mellitus: Results from Two Studies

    PubMed Central

    2016-01-01

    Abstract Objective: This study was designed to evaluate accuracy, performance, and safety of the Dexcom (San Diego, CA) G4® Platinum continuous glucose monitoring (CGM) system (G4P) compared with the Dexcom G4 Platinum with Software 505 algorithm (SW505) when used as adjunctive management to blood glucose (BG) monitoring over a 7-day period in youth, 2–17 years of age, with diabetes. Research Design and Methods: Youth wore either one or two sensors placed on the abdomen or upper buttocks for 7 days, calibrating the device twice daily with a uniform BG meter. Participants had one in-clinic session on Day 1, 4, or 7, during which fingerstick BG measurements (self-monitoring of blood glucose [SMBG]) were obtained every 30 ± 5 min for comparison with CGM, and in youth 6–17 years of age, reference YSI glucose measurements were obtained from arterialized venous blood collected every 15 ± 5 min for comparison with CGM. The sensor was removed by the participant/family after 7 days. Results: In comparison of 2,922 temporally paired points of CGM with the reference YSI measurement for G4P and 2,262 paired points for SW505, the mean absolute relative difference (MARD) was 17% for G4P versus 10% for SW505 (P < 0.0001). In comparison of 16,318 temporally paired points of CGM with SMBG for G4P and 4,264 paired points for SW505, MARD was 15% for G4P versus 13% for SW505 (P < 0.0001). Similarly, error grid analyses indicated superior performance with SW505 compared with G4P in comparison of CGM with YSI and CGM with SMBG results, with greater percentages of SW505 results falling within error grid Zone A or the combined Zones A plus B. There were no serious adverse events or device-related serious adverse events for either the G4P or the SW505, and there was no sensor breakoff. Conclusions: The updated algorithm offers substantial improvements in accuracy and performance in pediatric patients with diabetes. Use of CGM with improved performance has
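    The headline accuracy metric above, MARD, is the mean of the absolute CGM-versus-reference differences expressed as a percentage of the reference value. A minimal sketch with hypothetical paired readings (not the study's data):

    ```python
    def mard(cgm, reference):
        """Mean absolute relative difference (%) over temporally paired CGM
        and reference glucose values. Readings below are hypothetical."""
        rel = [abs(c - r) / r for c, r in zip(cgm, reference)]
        return 100.0 * sum(rel) / len(rel)

    # Hypothetical paired readings (mg/dL): CGM vs. reference
    m = mard([110, 95, 150, 202], [100, 100, 160, 190])
    ```

    Lower is better: the drop from 17% to 10% MARD reported above is what "substantial improvement in accuracy" means operationally.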

  18. Continuum Level Results from Particle Simulations of Active Suspensions

    NASA Astrophysics Data System (ADS)

    Delmotte, Blaise; Climent, Eric; Plouraboue, Franck; Keaveny, Eric

    2014-11-01

    Accurately simulating active suspensions on the lab scale is a technical challenge. It requires considering large numbers of interacting swimmers with well-described hydrodynamics in order to obtain representative and reliable statistics of suspension properties. We have developed a computationally scalable model based on an extension of the Force Coupling Method (FCM) to active particles. This tool can handle the many-body hydrodynamic interactions between O(10⁵) swimmers while also accounting for finite-size effects, steady or time-dependent strokes, and variable swimmer aspect ratio. Results from our simulations of steady-stroke microswimmer suspensions coincide with those given by continuum models, but, in certain cases, we observe collective dynamics that these models do not predict. We provide robust statistics of the resulting distributions and accurately characterize the growth rates of these instabilities. In addition, we explore the effect of a time-dependent stroke on the suspension properties, comparing with those from the steady-stroke simulations. The authors acknowledge the ANR project Motimo for funding and the Calmip computing centre for technical support.

  19. Airflow Hazard Visualization for Helicopter Pilots: Flight Simulation Study Results

    NASA Technical Reports Server (NTRS)

    Aragon, Cecilia R.; Long, Kurtis R.

    2005-01-01

    Airflow hazards such as vortices or low-level wind shear have been identified as a primary contributing factor in many helicopter accidents. US Navy ships generate airwakes over their decks, creating potentially hazardous conditions for shipboard rotorcraft launch and recovery. Recent sensor developments may enable the delivery of airwake data to the cockpit, where visualizing the hazard data may improve safety and possibly extend ship/helicopter operational envelopes. A prototype flight-deck airflow hazard visualization system was implemented on a high-fidelity rotorcraft flight dynamics simulator. Experienced helicopter pilots, including pilots from all five branches of the military, participated in a usability study of the system. Data were collected both objectively from the simulator and subjectively from post-test questionnaires. Results of the data analysis are presented, demonstrating a reduction in crash rate and other trends that illustrate the potential of airflow hazard visualization to improve flight safety.

  20. Accuracy of the Universal Portable Anesthesia Complete Drawover Vaporizer When Using the Anesthesia Simulator

    DTIC Science & Technology

    2000-10-01

    anesthetic partial pressure in arterial blood is 90% complete in 4-8 minutes. Uptake beyond 8 minutes is principally determined by the muscle group... pressure. Figure 1. Field Configuration. (From O'Sullivan & Ciresi, 1999). Air flow within the UPAC is governed by a rotary valve, which is controlled... resulting photon count is proportional to the partial pressure of the gases present. Gas analysis is an essential element in the administration of

  1. Accuracy of the Universal Portable Anesthesia Complete Drawover Vaporizer When Using the Anesthesia Simulator

    DTIC Science & Technology

    2000-10-01

    75% of the cardiac output. Equilibration of the VRG with anesthetic partial pressure in arterial blood is 90% complete in 4-8 minutes. Uptake... patient. Air is drawn through the UPAC by recoil negative pressure. Figure 1. Field Configuration. (From O'Sullivan & Ciresi, 1999). Air flow within... for each gas present at 8 specific wavelengths. The resulting photon count is proportional to the partial pressure of the gases present. Gas

  2. CFD simulation of pollutant dispersion around isolated buildings: on the role of convective and turbulent mass fluxes in the prediction accuracy.

    PubMed

    Gousseau, P; Blocken, B; van Heijst, G J F

    2011-10-30

    Computational Fluid Dynamics (CFD) is increasingly used to predict wind flow and pollutant dispersion around buildings. The two most frequently used approaches are solving the Reynolds-averaged Navier-Stokes (RANS) equations and Large-Eddy Simulation (LES). In the present study, we compare the convective and turbulent mass fluxes predicted by these two approaches for two configurations of isolated buildings with distinctive features. We use this analysis to clarify the role of these two components of mass transport on the prediction accuracy of RANS and LES in terms of mean concentration. It is shown that the proper simulation of the convective fluxes is essential to predict an accurate concentration field. In addition, appropriate parameterization of the turbulent fluxes is needed with RANS models, while only the subgrid-scale effects are modeled with LES. Therefore, when the source is located outside of recirculation regions (case 1), both RANS and LES can provide accurate results. When the influence of the building is higher (case 2), RANS models predict erroneous convective fluxes and are largely outperformed by LES in terms of prediction accuracy of mean concentration. These conclusions suggest that the choice of the appropriate turbulence model depends on the configuration of the dispersion problem under study. It is also shown that for both cases LES predicts a counter-gradient mechanism of the streamwise turbulent mass transport, which is not reproduced by the gradient-diffusion hypothesis that is generally used with RANS models.
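
The gradient-diffusion hypothesis referenced in the final sentence closes the RANS equations by modeling the turbulent mass flux as −D_t ∂C/∂x, so the modeled flux always points down the mean concentration gradient; a counter-gradient flux, as LES predicts here, is outside what this closure can express. A minimal numeric sketch with a synthetic profile and an assumed diffusivity (illustrative values, not CFD output):

```python
import numpy as np

# Gradient-diffusion closure: modeled turbulent flux = -D_t * dC/dx.
x = np.linspace(0.0, 1.0, 101)   # streamwise coordinate (arbitrary units)
C = np.exp(-10.0 * x)            # synthetic mean concentration, decaying downstream
D_t = 0.05                       # assumed turbulent diffusivity

grad_C = np.gradient(C, x)
flux_model = -D_t * grad_C

# By construction, the modeled flux is opposite in sign to dC/dx
# everywhere, i.e. it can never be counter-gradient.
print(bool(np.all(flux_model * grad_C <= 0.0)))  # → True
```

This is why a RANS model using this closure cannot reproduce the counter-gradient streamwise transport that the LES predicts.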

  3. Simulation study of the effect of golden-angle KWIC with generalized kinetic model analysis on diagnostic accuracy for lesion discrimination

    PubMed Central

    Freed, Melanie; Kim, Sungheon G.

    2014-01-01

    Purpose To quantitatively evaluate temporal blurring of dynamic contrast-enhanced MRI data generated using a k-space weighted image contrast (KWIC) image reconstruction technique with golden-angle view-ordering. Methods K-space data were simulated using golden-angle view-ordering and reconstructed using a KWIC algorithm with a Fibonacci number of views enforced for each annulus in k-space. Temporal blurring was evaluated by comparing pharmacokinetic model parameters estimated from the simulated data with the true values. Diagnostic accuracy was quantified using receiver operating characteristic (ROC) curves and the area under the ROC curve (AUC). Results Estimation errors of pharmacokinetic model parameters were dependent on the true curve type and the lesion size. For 10 mm benign and malignant lesions, estimated AUC values using the true and estimated AIFs were consistent with the true AUC value. For 5 mm benign and 20 mm malignant lesions, estimated AUC values using the true and estimated AIFs were 0.906±0.020 and 0.905±0.021, respectively, as compared with the true AUC value of 0.896. Conclusions Although the investigated reconstruction algorithm does impose errors in pharmacokinetic model parameter estimation, they are not expected to significantly impact clinical studies of diagnostic accuracy. PMID:25267703
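
AUC values like those reported above can be computed without tracing an explicit ROC curve, via the Mann-Whitney interpretation: the AUC equals the probability that a randomly chosen positive case scores above a randomly chosen negative case (ties counting one half). A small sketch with made-up scores:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a randomly
    chosen negative case (ties count one half)."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Invented classifier scores for diseased vs. healthy cases
print(round(auc([0.9, 0.8, 0.7], [0.6, 0.75, 0.2]), 3))  # → 0.889
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to a perfect separation of the two groups.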

  4. Electron-cloud updated simulation results for the PSR, and recent results for the SNS

    SciTech Connect

    Pivi, M.; Furman, M.A.

    2002-05-29

    Recent simulation results for the main features of the electron cloud in the storage ring of the Spallation Neutron Source (SNS) at Oak Ridge, and updated results for the Proton Storage Ring (PSR) at Los Alamos, are presented in this paper. A refined model of the secondary emission process, including the so-called true-secondary, rediffused, and backscattered electrons, has recently been included in the electron-cloud code.

  5. Modeling results for a linear simulator of a divertor

    SciTech Connect

    Hooper, E.B.; Brown, M.D.; Byers, J.A.; Casper, T.A.; Cohen, B.I.; Cohen, R.H.; Jackson, M.C.; Kaiser, T.B.; Molvik, A.W.; Nevins, W.M.; Nilson, D.G.; Pearlstein, L.D.; Rognlien, T.D.

    1993-06-23

    A divertor simulator, IDEAL, has been proposed by S. Cohen to study the difficult power-handling requirements of the tokamak program in general and the ITER program in particular. Projections of the power density in the ITER divertor reach ≈1 GW/m² along the magnetic field lines and >10 MW/m² on a surface inclined at a shallow angle to the field lines. These power densities are substantially greater than can be handled reliably on the surface, so new techniques are required to reduce the power density to a reasonable level. Although the divertor physics must be demonstrated in tokamaks, a linear device could contribute to the development because of its flexibility, the easy access to the plasma and to tested components, and long-pulse operation (essentially cw). However, a decision to build a simulator requires not just the recognition of its programmatic value, but also confidence that it can meet the required parameters at an affordable cost. Accordingly, as reported here, it was decided to examine the physics of the proposed device, including kinetic effects resulting from the intense heating required to reach the plasma parameters, and to conduct an independent cost estimate. The detailed role of the simulator in a divertor program is not explored in this report.

  6. Earth resources mission performance studies. Volume 2: Simulation results

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Simulations were made at three month intervals to investigate the EOS mission performance over the four seasons of the year. The basic objectives of the study were: (1) to evaluate the ability of an EOS type system to meet a representative set of specific collection requirements, and (2) to understand the capabilities and limitations of the EOS that influence the system's ability to satisfy certain collection objectives. Although the results were obtained from a consideration of a two sensor EOS system, the analysis can be applied to any remote sensing system having similar optical and operational characteristics. While the category related results are applicable only to the specified requirement configuration, the results relating to general capability and limitations of the sensors can be applied in extrapolating to other U.S. based EOS collection requirements. The TRW general purpose mission simulator and analytic techniques discussed in this report can be applied to a wide range of collection and planning problems of earth orbiting imaging systems.

  7. Planck 2015 results. XII. Full focal plane simulations

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Castex, G.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Karakci, A.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. 
G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Melin, J.-B.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Roman, M.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Welikala, N.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    We present the 8th full focal plane simulation set (FFP8), deployed in support of the Planck 2015 results. FFP8 consists of 10 fiducial mission realizations reduced to 18,144 maps, together with the most massive suite of Monte Carlo realizations of instrument noise and CMB ever generated, comprising 10⁴ mission realizations reduced to about 10⁶ maps. The resulting maps incorporate the dominant instrumental, scanning, and data analysis effects, and the remaining subdominant effects will be included in future updates. Generated at a cost of some 25 million CPU-hours spread across multiple high-performance-computing (HPC) platforms, FFP8 is used to validate and verify analysis algorithms and their implementations, and to remove biases from and quantify uncertainties in the results of analyses of the real data.

  8. MicroRNA-155 Hallmarks Promising Accuracy for the Diagnosis of Various Carcinomas: Results from a Meta-Analysis

    PubMed Central

    Wu, Chuancheng; Liu, Qiuyan; Liu, Baoying

    2015-01-01

    Background. Recent studies have shown that microRNAs (miRNAs) have diagnostic value in various cancers. This meta-analysis seeks to summarize the global diagnostic role of miR-155 in patients with a variety of carcinomas. Methods. Eligible studies were retrieved by searching the online databases, and the bivariate meta-analysis model was employed to generate the summary receiver operating characteristic (SROC) curve. Results. A total of 17 studies dealing with various carcinomas were finally included. The results showed that single miR-155 testing allowed for the discrimination between cancer patients and healthy donors with a sensitivity of 0.82 (95% CI: 0.73–0.88) and specificity of 0.77 (95% CI: 0.70–0.83), corresponding to an area under the curve (AUC) of 0.85, while a panel comprising expressions of miR-155 yielded a sensitivity of 0.76 (95% CI: 0.68–0.82) and specificity of 0.82 (95% CI: 0.77–0.86) in diagnosing cancers. The subgroup analysis showed that the serum miR-155 test yielded higher accuracy than the plasma-based assay (the AUC, sensitivity, and specificity were, respectively, 0.87 versus 0.73, 0.78 versus 0.74, and 0.77 versus 0.70). Conclusions. Our data suggest that single miR-155 profiling has the potential to be used as a screening test for various carcinomas, and that parallel testing of miR-155 confers improved specificity compared to single miR-155 analysis. PMID:25918453
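
The pooled sensitivity and specificity above derive from each study's 2×2 diagnostic table. As a reminder of the underlying arithmetic (the counts below are invented to mirror the pooled estimates, not taken from any included study):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2x2 table: 82 of 100 cancer patients test positive,
# 77 of 100 healthy donors test negative.
sens, spec = sensitivity_specificity(tp=82, fn=18, tn=77, fp=23)
print(sens, spec)  # → 0.82 0.77
```

A bivariate meta-analysis pools these per-study pairs while accounting for their correlation, which is what produces the SROC curve.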

  9. The relativity experiment of MORE: Global full-cycle simulation and results

    NASA Astrophysics Data System (ADS)

    Schettino, Giulia

    2015-07-01

    BepiColombo is a joint ESA/JAXA mission to Mercury with challenging objectives regarding geophysics, geodesy and fundamental physics. In particular, the Mercury Orbiter Radio science Experiment (MORE) intends, as one of its goals, to perform a test of General Relativity. This can be done by measuring and constraining the parametrized post-Newtonian (PPN) parameters to an accuracy significantly better than the current one. In this work we perform a global numerical full-cycle simulation of the BepiColombo Radio Science Experiments (RSE) in a realistic scenario, focusing on the relativity experiment and solving simultaneously for all the parameters of interest for RSE in a global least-squares fit within a constrained multiarc strategy. The results on the achievable accuracy for each PPN parameter will be presented and discussed, confirming the significant improvement over current knowledge of gravitation theory expected from the MORE relativity experiment. In particular, we will show that, including realistic systematic effects in the range observables, an accuracy of the order of 10⁻⁶ can still be achieved for the Eddington parameter β and for the parameter α1, which accounts for preferred-frame effects, while the only poorly determined parameter turns out to be ζ, which describes temporal variations of the gravitational constant and of the mass of the Sun.

  10. Dosimetric accuracy of a deterministic radiation transport based ¹⁹²Ir brachytherapy treatment planning system. Part III. Comparison to Monte Carlo simulation in voxelized anatomical computational models

    SciTech Connect

    Zourari, K.; Pantelis, E.; Moutsatsos, A.; Sakelliou, L.; Georgiou, E.; Karaiskos, P.; Papagiannis, P.

    2013-01-15

    Purpose: To compare TG43-based and Acuros deterministic radiation transport-based calculations of the BrachyVision treatment planning system (TPS) with corresponding Monte Carlo (MC) simulation results in heterogeneous patient geometries, in order to validate Acuros and quantify the accuracy improvement it marks relative to TG43. Methods: Dosimetric comparisons in the form of isodose lines, percentage dose difference maps, and dose volume histogram results were performed for two voxelized mathematical models resembling an esophageal and a breast brachytherapy patient, as well as an actual breast brachytherapy patient model. The mathematical models were converted to digital imaging and communications in medicine (DICOM) image series for input to the TPS. The MCNP5 v.1.40 general-purpose simulation code input files for each model were prepared using information derived from the corresponding DICOM RT exports from the TPS. Results: Comparisons of MC and TG43 results in all models showed significant differences, as reported previously in the literature and expected from the inability of the TG43 based algorithm to account for heterogeneities and model specific scatter conditions. A close agreement was observed between MC and Acuros results in all models except for a limited number of points that lay in the penumbra of perfectly shaped structures in the esophageal model, or at distances very close to the catheters in all models. Conclusions: Acuros marks a significant dosimetry improvement relative to TG43. The assessment of the clinical significance of this accuracy improvement requires further work. Mathematical patient equivalent models and models prepared from actual patient CT series are useful complementary tools in the methodology outlined in this series of works for the benchmarking of any advanced dose calculation algorithm beyond TG43.

  11. Machine learning methods for empirical streamflow simulation: a comparison of model accuracy, interpretability, and uncertainty in seasonal watersheds

    NASA Astrophysics Data System (ADS)

    Shortridge, Julie E.; Guikema, Seth D.; Zaitchik, Benjamin F.

    2016-07-01

    In the past decade, machine learning methods for empirical rainfall-runoff modeling have seen extensive development and been proposed as a useful complement to physical hydrologic models, particularly in basins where data to support process-based models are limited. However, the majority of research has focused on a small number of methods, such as artificial neural networks, despite the development of multiple other approaches for non-parametric regression in recent years. Furthermore, this work has often evaluated model performance based on predictive accuracy alone, while not considering broader objectives, such as model interpretability and uncertainty, that are important if such methods are to be used for planning and management decisions. In this paper, we use multiple regression and machine learning approaches (including generalized additive models, multivariate adaptive regression splines, artificial neural networks, random forests, and M5 cubist models) to simulate monthly streamflow in five highly seasonal rivers in the highlands of Ethiopia and compare their performance in terms of predictive accuracy, error structure and bias, model interpretability, and uncertainty when faced with extreme climate conditions. While the relative predictive performance of models differed across basins, data-driven approaches were able to achieve reduced errors when compared to physical models developed for the region. Methods such as random forests and generalized additive models may have advantages in terms of visualization and interpretation of model structure, which can be useful in providing insights into physical watershed function. However, the uncertainty associated with model predictions under extreme climate conditions should be carefully evaluated, since certain models (especially generalized additive models and multivariate adaptive regression splines) become highly variable when faced with high temperatures.
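
Hydrologic model comparisons like this one score simulated against observed monthly flows with summary skill metrics. One standard choice is the Nash-Sutcliffe efficiency (the abstract does not name its exact metrics, so this is illustrative); the flow values below are invented:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the
    mean of the observations, negative is worse than the mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - (np.sum((observed - simulated) ** 2)
                  / np.sum((observed - observed.mean()) ** 2))

# Invented monthly streamflow values (m^3/s)
obs = [10.0, 50.0, 200.0, 80.0, 20.0]
sim = [12.0, 45.0, 190.0, 85.0, 25.0]
print(round(nse(obs, sim), 3))  # → 0.992
```

Because the denominator is the variance of the observations, NSE rewards models that capture seasonal high flows, which dominate the error budget in highly seasonal basins such as these.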

  12. Some results on ethnic conflicts based on evolutionary game simulation

    NASA Astrophysics Data System (ADS)

    Qin, Jun; Yi, Yunfei; Wu, Hongrun; Liu, Yuhang; Tong, Xiaonian; Zheng, Bojin

    2014-07-01

    The force of ethnic separatism, essentially originating from the negative effects of ethnic identity, is damaging the stability and harmony of multiethnic countries. In order to eliminate the foundation of ethnic separatism and establish harmonious ethnic relationships, some scholars have proposed the viewpoint that ethnic harmony could be promoted by popularizing civic identity. However, this viewpoint has been discussed only from a philosophical perspective and still lacks the support of scientific evidence. Because ethnic groups and ethnic identity are products of evolution, and ethnic identity is a parochialism strategy from the perspective of game theory, this paper proposes an evolutionary game simulation model to study the relationship between civic identity and ethnic conflict based on evolutionary game theory. The simulation results indicate that: (1) the ratio of individuals with civic identity has a negative association with the frequency of ethnic conflicts; (2) ethnic conflict will not die out by eliminating all ethnic members once and for all, nor can it be reduced by forcible pressure, i.e., by abruptly increasing the ratio of individuals with civic identity; (3) the average frequency of conflicts can stay at a low level if civic identity is promoted periodically and persistently.
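
The qualitative relationship in finding (1), more civic identity implying fewer conflicts, can be reproduced in a toy agent model. The sketch below is a drastic simplification of the authors' evolutionary game, with invented parameters: agents belong to one of two groups, a fraction holds a civic identity, and a conflict is counted when two ethnic-identity agents from different groups meet.

```python
import random

def conflict_frequency(civic_ratio, n_agents=1000, n_rounds=200, seed=1):
    """Toy interaction model (not the authors' full framework).
    Each agent is a (group, is_civic) pair; each round samples two
    distinct agents and counts a conflict when both hold an ethnic
    (non-civic) identity and belong to different groups."""
    rng = random.Random(seed)
    agents = [(rng.randrange(2), rng.random() < civic_ratio)
              for _ in range(n_agents)]
    conflicts = 0
    for _ in range(n_rounds):
        (g1, c1), (g2, c2) = rng.sample(agents, 2)
        if g1 != g2 and not c1 and not c2:
            conflicts += 1
    return conflicts / n_rounds

# Conflict frequency should fall as civic identity spreads
print(conflict_frequency(0.1), conflict_frequency(0.9))
```

In expectation the conflict rate scales roughly with (1 − civic_ratio)², so raising the civic share from 10% to 90% cuts conflicts by nearly two orders of magnitude in this toy setting.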

  13. Dynamic damping control: Implementation issues and simulation results

    SciTech Connect

    Anderson, R.J.

    1989-01-01

    Computed torque algorithms are used to compensate for the changing dynamics of robot manipulators in order to ensure that a constant level of damping is maintained for all configurations. Unfortunately, there are three significant problems with existing computed torque algorithms. First, they are nonpassive and can lead to unstable behavior; second, they make inefficient use of actuator capability; and third, they cannot be used to maintain a constant end-effector stiffness for force control tasks. Recently, we introduced a new control algorithm for robots which, like computed torque, uses a model of the manipulator's dynamics to maintain a constant level of damping in the system, but does so passively. This new class of passive control algorithms has guaranteed stability properties, utilizes actuators more effectively, and can also be used to maintain constant end-effector stiffness. In this paper, this approach is described in detail, implementation issues are discussed, and simulation results are given. 15 refs., 6 figs., 2 tabs.

  14. Aeolian abrasion on Venus: Preliminary results from the Venus simulator

    NASA Technical Reports Server (NTRS)

    Marshall, J. R.; Greeley, Ronald; Tucker, D. W.; Pollack, J. B.

    1987-01-01

    The role of atmospheric pressure on aeolian abrasion was examined in the Venus Simulator with a constant temperature of 737 K. Both the rock target and the impactor were fine-grained basalt. The impactor was a 3 mm diameter angular particle chosen to represent a size of material that is entrainable by the dense Venusian atmosphere and potentially abrasive by virtue of its mass. It was projected at the target 10 to the 5 power times at a velocity of 0.7 m/s. The impactor showed a weight loss of approximately 1.2 x 10 to the -9 power gm per impact with the attrition occurring only at the edges. Results from scanning electron microscope analysis, profilometry, and weight measurement are summarized. It is concluded that particles can incur abrasion at Venusian temperatures even with low impact velocities expected for Venus.

  15. SLAC E144 Plots, Simulation Results, and Data

    DOE Data Explorer

    The 1997 E144 experiments at the Stanford Linear Accelerator Center (SLAC) utilized extremely high laser intensities and collided huge groups of photons together so violently that positron-electron pairs were briefly created, actual particles of matter and antimatter. Instead of matter exploding into heat and light, light actually became matter. That accomplishment opened a new path into the exploration of the interactions of electrons and photons, or quantum electrodynamics (QED). The E144 information at this website includes Feynman diagrams, simulation results, and data files. See also a series of frames showing the E144 laser colliding with a beam electron and producing an electron-positron pair at http://www.slac.stanford.edu/exp/e144/focpic/focpic.html and lists of collaborators' papers, theses, and a page of press articles.

  16. Governance of complex systems: results of a sociological simulation experiment.

    PubMed

    Adelt, Fabian; Weyer, Johannes; Fink, Robin D

    2014-01-01

    Social sciences have discussed the governance of complex systems for a long time. The following paper tackles the issue by means of experimental sociology, in order to investigate the performance of different modes of governance empirically. The simulation framework developed is based on Esser's model of sociological explanation as well as on Kroneberg's model of frame selection. The performance of governance has been measured by means of three macro and two micro indicators. Surprisingly, central control mostly performs better than decentralised coordination. However, results not only depend on the mode of governance: there is also a relation between performance and the composition of actor populations, which has not yet been investigated sufficiently. Practitioner Summary: Practitioners can gain insights into the functioning of complex systems and learn how to better manage them. Additionally, they are provided with indicators to measure the performance of complex systems.

  17. Assessment of the accuracy of an MCNPX-based Monte Carlo simulation model for predicting three-dimensional absorbed dose distributions.

    PubMed

    Titt, U; Sahoo, N; Ding, X; Zheng, Y; Newhauser, W D; Zhu, X R; Polf, J C; Gillin, M T; Mohan, R

    2008-08-21

    In recent years, the Monte Carlo method has been used in a large number of research studies in radiation therapy. For applications such as treatment planning, it is essential to validate the dosimetric accuracy of the Monte Carlo simulations in heterogeneous media. AAPM Report No. 105 addresses issues concerning the clinical implementation of Monte Carlo-based treatment planning for photon and electron beams; however, for proton-therapy planning, such guidance is not yet available. Here we present the results of our validation of the Monte Carlo model of the double-scattering system used at our Proton Therapy Center in Houston. In this study, we compared Monte Carlo simulated depth doses and lateral profiles to measured data for a range of beam parameters. We varied the simulated proton energies and widths of the spread-out Bragg peaks, and compared them to measurements obtained during the commissioning phase of the Proton Therapy Center in Houston. Of 191 simulated data sets, 189 agreed with measured data sets to within 3% of the maximum dose difference and within 3 mm of the maximum range or penumbra size difference. The two simulated data sets that did not agree with the measured data sets were in the distal falloff of the measured dose distribution, where large dose gradients potentially produce large differences on the basis of minute changes in the beam steering. Hence, the Monte Carlo models of the medium- and large-size double-scattering proton-therapy nozzles were valid for proton beams in the 100-250 MeV interval.
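
The 3%/3 mm agreement criterion used above combines a dose-difference test with a distance-to-agreement test. A simplified one-dimensional sketch (a hypothetical helper with synthetic curves, not the center's actual QA code):

```python
import numpy as np

def passes_3pct_3mm(depths_mm, measured, simulated):
    """Per-point pass/fail under a simplified 3%/3 mm criterion:
    a point passes if its dose differs from the measurement by at
    most 3% of the maximum dose, or if some measured point within
    3 mm matches the simulated dose to the same tolerance."""
    tol = 0.03 * measured.max()
    ok = np.abs(simulated - measured) <= tol
    for i in np.flatnonzero(~ok):
        near = np.abs(depths_mm - depths_mm[i]) <= 3.0
        ok[i] = np.any(np.abs(measured[near] - simulated[i]) <= tol)
    return ok

# Synthetic depth-dose curves: a 2 mm shift between otherwise
# identical curves should pass via the distance test.
z = np.arange(0.0, 50.0, 1.0)
meas = np.exp(-((z - 25.0) / 10.0) ** 2)
sim = np.exp(-((z - 27.0) / 10.0) ** 2)
print(bool(passes_3pct_3mm(z, meas, sim).all()))  # → True
```

This is the same idea behind the full gamma-index analysis commonly used in clinical dose comparisons, reduced here to a one-dimensional pass/fail check.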

  18. Comparison of Repositioning Accuracy of Two Commercially Available Immobilization Systems for Treatment of Head-and-Neck Tumors Using Simulation Computed Tomography Imaging

    SciTech Connect

    Rotondo, Ronny L.; Sultanem, Khalil Lavoie, Isabelle; Skelly, Julie; Raymond, Luc

    2008-04-01

    Purpose: To compare the setup accuracy, comfort level, and setup time of two immobilization systems used in head-and-neck radiotherapy. Methods and Materials: Between February 2004 and January 2005, 21 patients undergoing radiotherapy for head-and-neck tumors were assigned to one of two immobilization devices: a standard thermoplastic head-and-shoulder mask fixed to a carbon fiber base (Type S) or a thermoplastic head mask fixed to the Accufix cantilever board equipped with the shoulder depression system. All patients underwent planning computed tomography (CT) followed by repeated control CT under simulation conditions during the course of therapy. The CT images were subsequently co-registered, and setup accuracy was examined by recording displacement in the three Cartesian planes at six anatomic landmarks and calculating the three-dimensional vector errors. In addition, the setup time and comfort of the two systems were compared. Results: A total of 64 CT data sets were analyzed. No difference was found in the Cartesian total displacement errors or total vector displacement errors between the two populations at any landmark considered. A trend was noted toward a smaller mean systematic error for the upper landmarks, favoring the Accufix system. No difference was noted in the setup time or comfort level between the two systems. Conclusion: No significant difference in the three-dimensional setup accuracy was identified between the two immobilization systems compared. The data from this study reassure us that our technique provides accurate patient immobilization, allowing us to limit our planning target volume margin to <4 mm when treating head-and-neck tumors.

  19. EFFECT OF RADIATION DOSE LEVEL ON ACCURACY AND PRECISION OF MANUAL SIZE MEASUREMENTS IN CHEST TOMOSYNTHESIS EVALUATED USING SIMULATED PULMONARY NODULES

    PubMed Central

    Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus

    2016-01-01

The aim of the present study was to investigate the dependence of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease in measurement accuracy and intraobserver variability was seen at the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels than for the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, the increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding a general dose reduction for chest tomosynthesis examinations in clinical practice. PMID:26994093
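Simulating a lower dose level by adding noise rests on quantum noise variance scaling roughly inversely with dose. A minimal sketch of that principle (an illustration only; the authors' actual method must also handle detector characteristics and the spatial frequency content of the noise, which a white-noise model ignores):

```python
import numpy as np

def simulate_lower_dose(image, dose_orig, dose_new, noise_std_orig, seed=None):
    """Degrade an image acquired at dose_orig so its noise level mimics an
    acquisition at dose_new (< dose_orig). With quantum noise variance
    scaling ~ 1/dose, the variance to add is
    sigma_orig^2 * (dose_orig/dose_new - 1)."""
    rng = np.random.default_rng(seed)
    extra_var = noise_std_orig ** 2 * (dose_orig / dose_new - 1.0)
    noise = rng.normal(0.0, np.sqrt(extra_var), size=np.shape(image))
    return np.asarray(image, float) + noise
```

For example, going from 0.12 to 0.06 mSv doubles the total noise variance, so the added noise has the same variance as the original.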

  20. New simulation and measurement results on gateable DEPFET devices

    NASA Astrophysics Data System (ADS)

    Bähr, Alexander; Aschauer, Stefan; Hermenau, Katrin; Herrmann, Sven; Lechner, Peter H.; Lutz, Gerhard; Majewski, Petra; Miessner, Danilo; Porro, Matteo; Richter, Rainer H.; Schaller, Gerhard; Sandow, Christian; Schnecke, Martina; Schopper, Florian; Stefanescu, Alexander; Strüder, Lothar; Treis, Johannes

    2012-07-01

To improve the signal-to-noise ratio, devices for optical and x-ray astronomy use techniques to suppress background events. Well-known examples are shutters or frame-store Charge Coupled Devices (CCDs). Based on the DEpleted P-channel Field Effect Transistor (DEPFET) principle, a so-called Gateable DEPFET detector can be built. Such devices combine the DEPFET principle with a fast built-in electronic shutter usable for optical and x-ray applications. The DEPFET itself is the basic cell of an active pixel sensor built on a fully depleted bulk. It combines internal amplification, readout on demand, analog storage of the signal charge, and low readout noise with full sensitivity over the whole bulk thickness. A Gateable DEPFET has all these benefits and obviates the need for an external shutter. Two concepts of Gateable DEPFET layouts providing a built-in shutter are introduced. Furthermore, proof-of-principle measurements for both concepts are presented. Using recently produced prototypes, a shielding of the collection anode of up to 1 × 10⁻⁴ was achieved. As predicted by simulations, an optimized geometry should result in values of 1 × 10⁻⁵ and better. With the switching electronics currently in use, a timing evaluation of the shutter opening and closing yielded rise and fall times of 100 ns.

  1. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

A network of 12 tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature and relative humidity measurements. The archived data are available at http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station to station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced by 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, the Juelich Ozonesonde Intercomparison Experiment), in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing and instrument type (manufacturer) are taken into account.

  2. Accuracy and reliability of automated gray matter segmentation pathways on real and simulated structural magnetic resonance images of the human brain.

    PubMed

    Eggert, Lucas D; Sommer, Jens; Jansen, Andreas; Kircher, Tilo; Konrad, Carsten

    2012-01-01

Automated gray matter segmentation of magnetic resonance imaging data is essential for morphometric analyses of the brain, particularly when large sample sizes are investigated. However, although detection of small structural brain differences may fundamentally depend on the method used, the accuracy and reliability of different automated segmentation algorithms have rarely been compared. Here, the performance of the segmentation algorithms provided by SPM8, VBM8, FSL and FreeSurfer was quantified on simulated and real magnetic resonance imaging data. First, accuracy was assessed by comparing segmentations of 20 simulated and 18 real T1 images with corresponding ground truth images. Second, reliability was determined in ten T1 images from the same subject and in ten T1 images of different subjects scanned twice. Third, the impact of preprocessing steps on segmentation accuracy was investigated. VBM8 showed very high accuracy and very high reliability. FSL achieved the highest accuracy but demonstrated poor reliability, and FreeSurfer showed the lowest accuracy but high reliability. A universally valid recommendation on how to implement morphometric analyses is not warranted due to the vast number of scanning and analysis parameters. However, our analysis suggests that researchers can optimize their individual processing procedures with respect to final segmentation quality and exemplifies adequate performance criteria.
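Accuracy against a ground-truth image is typically quantified with a volumetric overlap measure; a minimal sketch using the Dice coefficient, one common choice (the paper's exact accuracy metric is an assumption here):

```python
import numpy as np

def dice(seg, truth):
    """Dice overlap between a binary segmentation and its ground truth:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    seg = np.asarray(seg, bool)
    truth = np.asarray(truth, bool)
    denom = seg.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(seg, truth).sum() / denom
```

Reliability can then be summarized as the Dice overlap between repeated segmentations of the same subject.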

  3. Accuracy and Reliability of Automated Gray Matter Segmentation Pathways on Real and Simulated Structural Magnetic Resonance Images of the Human Brain

    PubMed Central

    Eggert, Lucas D.; Sommer, Jens; Jansen, Andreas; Kircher, Tilo; Konrad, Carsten

    2012-01-01

Automated gray matter segmentation of magnetic resonance imaging data is essential for morphometric analyses of the brain, particularly when large sample sizes are investigated. However, although detection of small structural brain differences may fundamentally depend on the method used, the accuracy and reliability of different automated segmentation algorithms have rarely been compared. Here, the performance of the segmentation algorithms provided by SPM8, VBM8, FSL and FreeSurfer was quantified on simulated and real magnetic resonance imaging data. First, accuracy was assessed by comparing segmentations of 20 simulated and 18 real T1 images with corresponding ground truth images. Second, reliability was determined in ten T1 images from the same subject and in ten T1 images of different subjects scanned twice. Third, the impact of preprocessing steps on segmentation accuracy was investigated. VBM8 showed very high accuracy and very high reliability. FSL achieved the highest accuracy but demonstrated poor reliability, and FreeSurfer showed the lowest accuracy but high reliability. A universally valid recommendation on how to implement morphometric analyses is not warranted due to the vast number of scanning and analysis parameters. However, our analysis suggests that researchers can optimize their individual processing procedures with respect to final segmentation quality and exemplifies adequate performance criteria. PMID:23028771

  4. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study.

    PubMed

    Barsingerhorn, A D; Boonstra, F N; Goossens, H H L M

    2017-02-01

Current stereo eye-tracking methods model the cornea as a sphere with a single refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye model to study how these optical properties affect the accuracy of different stereo eye-tracking methods. We found that pupil size, gaze direction and head position all influence the reconstruction of gaze. The resulting errors are about ±1.0 degrees even in the best case. This shows that stereo eye-tracking may be an option when reliable calibration is not possible, but the applied eye model should account for the actual optics of the cornea.
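The ray-tracing underlying such a study refracts each ray at the corneal surfaces using the vector form of Snell's law. A minimal sketch of that single step (the Navarro model's actual surfaces are aspheric, and the index value used below is only illustrative):

```python
import numpy as np

def refract(incident, normal, n1, n2):
    """Refract a ray direction at a surface with the given normal using
    the vector form of Snell's law. Returns the refracted unit direction,
    or None on total internal reflection."""
    d = np.asarray(incident, float)
    d = d / np.linalg.norm(d)
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)
    if cos_i < 0:          # normal points the wrong way; flip it
        n, cos_i = -n, -cos_i
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0:
        return None        # total internal reflection
    return r * d + (r * cos_i - np.sqrt(k)) * n
```

Tracing the pupil-center and glint rays through two such surfaces, rather than one spherical surface, is what corrects the gaze reconstruction errors discussed above.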

  5. Accuracy of System Step Response Roll Magnitude Estimation from Central and Peripheral Visual Displays and Simulator Cockpit Motion

    NASA Technical Reports Server (NTRS)

    Hosman, R. J. A. W.; Vandervaart, J. C.

    1984-01-01

An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements in perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button of a keyboard device. The differing time histories of roll angle, roll rate and roll acceleration caused by a step response stimulate the different perception processes related to the central visual field, peripheral visual field and vestibular organs in different, yet exactly known, ways. Experiments with either of the visual displays or cockpit motion, and some combinations of these, were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.

  6. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study

    PubMed Central

    Barsingerhorn, A. D.; Boonstra, F. N.; Goossens, H. H. L. M.

    2017-01-01

Current stereo eye-tracking methods model the cornea as a sphere with a single refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye model to study how these optical properties affect the accuracy of different stereo eye-tracking methods. We found that pupil size, gaze direction and head position all influence the reconstruction of gaze. The resulting errors are about ±1.0 degrees even in the best case. This shows that stereo eye-tracking may be an option when reliable calibration is not possible, but the applied eye model should account for the actual optics of the cornea. PMID:28270978

  7. On the accuracy of simulations of a 2D boundary layer with RANS models implemented in OpenFoam

    NASA Astrophysics Data System (ADS)

    Graves, Benjamin J.; Gomez, Sebastian; Poroseva, Svetlana V.

    2013-11-01

The OpenFoam software is an attractive Computational Fluid Dynamics solver for evaluating new turbulence models due to its open-source nature and its suite of existing standard model implementations. Before interpreting results obtained with a new model, a baseline for the performance of the OpenFoam solver and existing models is required. In the current study we analyze the RANS models in the OpenFoam incompressible solver for two planar (two-dimensional mean flow) benchmark cases generated by the AIAA Turbulence Model Benchmarking Working Group (TMBWG): a zero-pressure-gradient flat plate and a bump-in-channel. The OpenFoam results are compared against both experimental data and simulation results obtained with the NASA CFD codes CFL3D and FUN3D. Sensitivity of the simulation results to grid resolution and model implementation is analyzed. Testing is conducted using the Spalart-Allmaras one-equation model, Wilcox's two-equation k-omega model, and the Launder-Reece-Rodi Reynolds-stress model. Simulations using both wall functions and wall-resolved (low-Reynolds-number) formulations are considered. The material is based upon work supported by NASA under award NNX12AJ61A.

  8. Relative significance of heat transfer processes to quantify tradeoffs between complexity and accuracy of energy simulations with a building energy use patterns classification

    NASA Astrophysics Data System (ADS)

    Heidarinejad, Mohammad

the indoor condition regardless of the contribution of internal and external loads. To deploy the methodology to another portfolio of buildings, simulated LEED NC office buildings are selected. The advantage of this approach is to isolate energy performance due to inherent building characteristics and location, rather than operational and maintenance factors that can contribute to significant variation in building energy use. A framework for detailed building energy databases with annual energy end-uses is developed to select variables and omit outliers. The results show that the high-performance office buildings are internally load dominated, with three distinct clusters of low-intensity, medium-intensity, and high-intensity energy use patterns among the reviewed office buildings. Low-intensity cluster buildings benefit from small building area, while the medium- and high-intensity clusters have a similar range of floor areas but different energy use intensities. Half of the energy use in the low-intensity buildings is associated with internal loads, such as lighting and plug loads, indicating that there are opportunities to save energy by using lighting or plug load management systems. A comparison between the frameworks developed for the campus buildings and the LEED NC office buildings indicates that these two frameworks are complementary to each other. Availability of the information has yielded two different procedures, suggesting that future studies for a portfolio of buildings, such as city benchmarking and disclosure ordinances, should collect and disclose the minimal required inputs suggested by this study with the minimum granularity of monthly energy consumption. This dissertation developed automated methods using the OpenStudio API (Application Programming Interface) to create energy models based on the building class. ASHRAE Guideline 14 defines well-accepted criteria to measure accuracy of energy simulations; however, there is no well

  9. An in vitro comparison of diagnostic accuracy of cone beam computed tomography and phosphor storage plate to detect simulated occlusal secondary caries under amalgam restoration

    PubMed Central

    Shahidi, Shoaleh; Zadeh, Nahal Kazerooni; Sharafeddin, Farahnaz; Shahab, Shahriar; Bahrampour, Ehsan; Hamedani, Shahram

    2015-01-01

Background: This study aimed to compare the diagnostic accuracy and feasibility of cone beam computed tomography (CBCT) with phosphor storage plate (PSP) in the detection of simulated occlusal secondary caries. Materials and Methods: In this in vitro descriptive-comparative study, a total of 80 class I cavity slots were prepared on 80 extracted human premolars. Then, 40 teeth were randomly selected out of this sample, and artificial carious lesions were created on these teeth with a no. 1/2 round diamond bur. All 80 teeth were restored with amalgam fillings, and radiographs were taken both with the PSP system and with CBCT. All images were evaluated by three calibrated observers. The area under the receiver operating characteristic curve (Az) was used to compare the diagnostic accuracy of the two systems. SPSS (SPSS Inc., Chicago, IL, USA) was adopted for statistical analysis. The differences between the Az values of the bitewing and CBCT methods were compared by a pairwise comparison method. The inter- and intra-observer agreement was assessed by kappa analysis (P < 0.05). Results: The mean Az value for bitewings and CBCT was 0.903 and 0.994, respectively. Significant differences were found between PSP and CBCT (P = 0.010). The kappa value for inter-observer agreement was 0.68 and 0.76 for PSP and CBCT, respectively. The kappa value for intra-observer agreement was 0.698 (observer 1, P = 0.000), 0.766 (observer 2, P = 0.000) and 0.716 (observer 3, P = 0.000) in the PSP method, and 0.816 (observer 1, P = 0.000), 0.653 (observer 2, P = 0.000) and 0.744 (observer 3, P = 0.000) in the CBCT method. Conclusion: This in vitro study, with a limited number of samples, showed that the New Tom VGI Flex CBCT system was more accurate than PSP in detecting the simulated small secondary occlusal caries under amalgam restoration. PMID:25878682
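The Az values compared above are areas under ROC curves. Their empirical (nonparametric) counterpart is equivalent to the Mann-Whitney statistic, sketched here with illustrative rating data (the study itself fitted ROC curves to observer ratings, so its Az values were not computed exactly this way):

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """Empirical area under the ROC curve: the probability that a truly
    carious tooth receives a higher rating than a sound one, with ties
    counted as 1/2 (Mann-Whitney form)."""
    pos = np.asarray(scores_pos, float)
    neg = np.asarray(scores_neg, float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)
```

An Az of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation of carious from sound teeth.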

  10. Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT

    DOE PAGES

    Collins, Benjamin; Stimpson, Shane; Kelley, Blake W.; ...

    2016-08-25

We derived a consistent “2D/1D” neutron transport method from the 3D Boltzmann transport equation, to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower order transport solution to discretize the axial variable. Our paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulations of Light Water Reactors (CASL) core simulator VERA-CS. We also performed several applications on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.

  11. Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT

    NASA Astrophysics Data System (ADS)

    Collins, Benjamin; Stimpson, Shane; Kelley, Blake W.; Young, Mitchell T. H.; Kochunas, Brendan; Graham, Aaron; Larsen, Edward W.; Downar, Thomas; Godfrey, Andrew

    2016-12-01

    A consistent "2D/1D" neutron transport method is derived from the 3D Boltzmann transport equation, to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower order transport solution to discretize the axial variable. This paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulations of Light Water Reactors (CASL) core simulator VERA-CS. Several applications have been performed on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.

  12. Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT

    SciTech Connect

    Collins, Benjamin; Stimpson, Shane; Kelley, Blake W.; Young, Mitchell T. H.; Kochunas, Brendan; Graham, Aaron; Larsen, Edward W.; Downar, Thomas; Godfrey, Andrew

    2016-08-25

    We derived a consistent “2D/1D” neutron transport method from the 3D Boltzmann transport equation, to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower order transport solution to discretize the axial variable. Our paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulations of Light Water Reactors (CASL) core simulator VERA-CS. We also performed several applications on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.

  13. Langmuir Wave Decay in Inhomogeneous Solar Wind Plasmas: Simulation Results

    NASA Astrophysics Data System (ADS)

    Krafft, C.; Volokitin, A. S.; Krasnoselskikh, V. V.

    2015-08-01

    Langmuir turbulence excited by electron flows in solar wind plasmas is studied on the basis of numerical simulations. In particular, nonlinear wave decay processes involving ion-sound (IS) waves are considered in order to understand their dependence on external long-wavelength plasma density fluctuations. In the presence of inhomogeneities, it is shown that the decay processes are localized in space and, due to the differences between the group velocities of Langmuir and IS waves, their duration is limited so that a full nonlinear saturation cannot be achieved. The reflection and the scattering of Langmuir wave packets on the ambient and randomly varying density fluctuations lead to crucial effects impacting the development of the IS wave spectrum. Notably, beatings between forward propagating Langmuir waves and reflected ones result in the parametric generation of waves of noticeable amplitudes and in the amplification of IS waves. These processes, repeated at different space locations, form a series of cascades of wave energy transfer, similar to those studied in the frame of weak turbulence theory. The dynamics of such a cascading mechanism and its influence on the acceleration of the most energetic part of the electron beam are studied. Finally, the role of the decay processes in the shaping of the profiles of the Langmuir wave packets is discussed, and the waveforms calculated are compared with those observed recently on board the spacecraft Solar TErrestrial RElations Observatory and WIND.

  14. AGGREGATES: Finding structures in simulation results of solutions.

    PubMed

    Bernardes, Carlos E S

    2017-04-15

Molecular Dynamics and Monte Carlo simulations are widely used to investigate the structure and physical properties of solids and liquids at a molecular level. Tools to extract the most relevant information from the obtained results are, however, in considerable demand. One such tool, the program AGGREGATES, is described in this work. Based on distance criteria, the program searches trajectory files for the presence of molecular clusters and computes several statistical and shape properties for these structures. Tools designed to investigate the local organization and the molecular conformations in the clusters are also available. Among these, a new approach to performing a First Shell Analysis is introduced, based on looking for the presence of atomic contacts between molecules. These features are particularly useful for obtaining information on molecular assembly processes (such as the nucleation of crystals or colloidal particles) or for investigating polymorphism in organic compounds. The program's features are illustrated here through an investigation of the 4'-hydroxyacetophenone + ethanol system. © 2017 Wiley Periodicals, Inc.
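The distance-criterion cluster search at the heart of such a tool can be sketched with a union-find pass over pairwise distances. This is a toy version with one interaction site per molecule and no periodic boundary conditions, both of which the real program handles:

```python
import numpy as np

def find_aggregates(coords, cutoff):
    """Group particles into clusters: two particles belong to the same
    aggregate if they are connected by a chain of pairs within `cutoff`."""
    coords = np.asarray(coords, float)
    n = coords.shape[0]
    parent = list(range(n))

    def find(i):  # union-find root lookup with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                pi, pj = find(i), find(j)
                if pi != pj:
                    parent[pi] = pj  # merge the two clusters

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return sorted(clusters.values(), key=min)
```

Statistical properties (size distributions, shape descriptors) are then computed per cluster over all trajectory frames.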

  15. LANGMUIR WAVE DECAY IN INHOMOGENEOUS SOLAR WIND PLASMAS: SIMULATION RESULTS

    SciTech Connect

    Krafft, C.; Volokitin, A. S.; Krasnoselskikh, V. V.

    2015-08-20

    Langmuir turbulence excited by electron flows in solar wind plasmas is studied on the basis of numerical simulations. In particular, nonlinear wave decay processes involving ion-sound (IS) waves are considered in order to understand their dependence on external long-wavelength plasma density fluctuations. In the presence of inhomogeneities, it is shown that the decay processes are localized in space and, due to the differences between the group velocities of Langmuir and IS waves, their duration is limited so that a full nonlinear saturation cannot be achieved. The reflection and the scattering of Langmuir wave packets on the ambient and randomly varying density fluctuations lead to crucial effects impacting the development of the IS wave spectrum. Notably, beatings between forward propagating Langmuir waves and reflected ones result in the parametric generation of waves of noticeable amplitudes and in the amplification of IS waves. These processes, repeated at different space locations, form a series of cascades of wave energy transfer, similar to those studied in the frame of weak turbulence theory. The dynamics of such a cascading mechanism and its influence on the acceleration of the most energetic part of the electron beam are studied. Finally, the role of the decay processes in the shaping of the profiles of the Langmuir wave packets is discussed, and the waveforms calculated are compared with those observed recently on board the spacecraft Solar TErrestrial RElations Observatory and WIND.

  16. Accuracy of the water column approximation in numerically simulating propagation of teleseismic PP waves and Rayleigh waves

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Ni, Sidao; Chu, Risheng; Yao, Huajian

    2016-08-01

Numerical solvers of wave equations have been widely used to simulate global seismic waves, including PP waves for modelling the 410/660 km discontinuities and Rayleigh waves for imaging crustal structure. In order to avoid the extra computation cost due to ocean water effects, these numerical solvers usually adopt a water column approximation, whose accuracy depends on frequency and needs to be investigated quantitatively. In this paper, we describe a unified representation of accurate and approximate forms of the equivalent water column boundary condition as well as the free boundary condition. Then we derive an analytical form of the PP-wave reflection coefficient with the unified boundary condition, and quantify the effects of the water column approximation on the amplitude and phase shift of the PP waves. We also study the effects of the water column approximation on the phase velocity dispersion of the fundamental-mode Rayleigh wave with a propagation matrix method. We find that with the water column approximation: (1) the error in PP amplitude and phase shift is less than 5 per cent and 9° at periods greater than 25 s for most oceanic regions, but at periods of 15 s or less, PP is inaccurate by up to 10 per cent in amplitude and a few seconds in time shift for deep oceans; (2) the error in Rayleigh wave phase velocity is less than 1 per cent at periods greater than 30 s in most oceanic regions, but the error is up to 2 per cent for deep oceans at periods of 20 s or less. This study confirms that the water column approximation is only accurate at long periods and needs to be improved at shorter periods.

  17. Improving stamping simulation accuracy by accounting for realistic friction and lubrication conditions: Application to the door-outer of the Mercedes-Benz C-class Coupé

    NASA Astrophysics Data System (ADS)

    Hol, J.; Wiebenga, J. H.; Stock, J.; Wied, J.; Wiegand, K.; Carleer, B.

    2016-08-01

In the stamping of automotive parts, friction and lubrication play a key role in achieving high-quality products. In the development process of new automotive parts, it is therefore crucial to account accurately for these effects in sheet metal forming simulations. Only then can one obtain reliable and realistic simulation results that correspond to the actual try-out and mass-production conditions. In this work, the TriboForm software is used to accurately account for tribology, friction, and lubrication conditions in stamping simulations. The enhanced stamping simulations are applied and validated for the door-outer of the Mercedes-Benz C-Class Coupé. The project results demonstrate the improved prediction accuracy of stamping simulations with respect to both part quality and actual stamping process conditions.

  18. Simulation of optical diagnostics for crystal growth: models and results

    NASA Astrophysics Data System (ADS)

    Banish, Michele R.; Clark, Rodney L.; Kathman, Alan D.; Lawson, Shelah M.

    1991-12-01

A computer simulation of a two-color holographic interferometric (TCHI) optical system was performed using a physical (wave) optics model. This model accurately simulates propagation through time-varying, 2-D or 3-D concentration and temperature fields as a wave phenomenon. The model calculates wavefront deformations that can be used to generate fringe patterns. This simulation modeled a proposed triglycine sulphate (TGS) flight experiment by propagating through the simplified, onion-like refractive index distribution of the growing crystal and calculating the recorded wavefront deformation. The phase of this wavefront was used to generate sample interferograms that map the index of refraction variation. Two such fringe patterns, generated at different wavelengths, were used to extract the original temperature and concentration field characteristics within the growth chamber. This demonstrates the feasibility of the TCHI crystal-growth diagnostic technique. The simulation provides feedback to the experimental design process.
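The two-wavelength extraction works because the refractive index depends on both temperature and concentration with wavelength-dependent coefficients, so phase measurements at two wavelengths give a 2x2 linear system per point. A minimal sketch of that inversion (all coefficient values in the test are placeholders, not TGS data):

```python
import numpy as np

def invert_two_color(dphi, wavelengths, L, dn_dT, dn_dC):
    """Recover temperature and concentration changes (dT, dC) from phase
    changes measured at two wavelengths.
    dphi: phase change [rad] at each wavelength; L: optical path [m];
    dn_dT, dn_dC: refractive-index derivatives at each wavelength.
    Model: dphi_i = (2*pi/lam_i) * L * (dn_dT_i*dT + dn_dC_i*dC)."""
    dphi = np.asarray(dphi, float)
    lam = np.asarray(wavelengths, float)
    A = (2.0 * np.pi * L / lam)[:, None] * np.column_stack([dn_dT, dn_dC])
    dT, dC = np.linalg.solve(A, dphi)
    return dT, dC
```

The system is solvable only when the two wavelengths make the coefficient matrix well-conditioned, i.e. the index derivatives are not proportional across wavelengths.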

  19. Improving the trust in results of numerical simulations and scientific data analytics

    SciTech Connect

    Cappello, Franck; Constantinescu, Emil; Hovland, Paul; Peterka, Tom; Phillips, Carolyn; Snir, Marc; Wild, Stefan

    2015-04-30

    approaches to address it. This paper does not focus on the trust that the execution will actually complete. The product of simulation or of data analytic executions is the final element of a potentially long chain of transformations, where each stage has the potential to introduce harmful corruptions. These corruptions may produce results that deviate from the user-expected accuracy without notifying the user of this deviation. There are many potential sources of corruption before and during the execution; consequently, in this white paper we do not focus on the protection of the end result after the execution.

  20. Results of a Flight Simulation Software Methods Survey

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce

    1995-01-01

    A ten-page questionnaire was mailed to members of the AIAA Flight Simulation Technical Committee in the spring of 1994. The survey inquired about various aspects of developing and maintaining flight simulation software, as well as a few questions dealing with characterization of each facility. As of this report, 19 completed surveys (out of 74 sent out) have been received. This paper summarizes those responses.

  1. Petascale Kinetic Simulations in Space Sciences: New Simulations and Data Discovery Techniques and Physics Results

    NASA Astrophysics Data System (ADS)

    Karimabadi, Homa

    2012-03-01

Recent advances in simulation technology and hardware are enabling breakthrough science in which many longstanding problems can now be addressed for the first time. In this talk, we focus on kinetic simulations of the Earth's magnetosphere and the magnetic reconnection process, which is the key mechanism that breaks the protective shield of the Earth's dipole field, allowing the solar wind to enter the Earth's magnetosphere. This leads to so-called space weather, in which storms on the Sun can affect space-borne and ground-based technological systems on Earth. The talk consists of three parts: (a) an overview of a new multi-scale simulation technique where each computational grid is updated based on its own unique timestep; (b) a presentation of a new approach to data analysis that we refer to as Physics Mining, which entails combining data mining and computer vision algorithms with scientific visualization to extract physics from the resulting massive data sets; and (c) a presentation of several recent discoveries in studies of space plasmas, including the role of vortex formation and the resulting turbulence in magnetized plasmas.

  2. Evaluating the velocity accuracy of an integrated GPS/INS system: Flight test results. [Global positioning system/inertial navigation systems (GPS/INS)

    SciTech Connect

    Owen, T.E.; Wardlaw, R.

    1991-01-01

    Verifying the velocity accuracy of a GPS receiver or an integrated GPS/INS system in a dynamic environment is a difficult proposition when many of the commonly used reference systems have velocity uncertainties of the same order of magnitude as, or greater than, those of the GPS system. We report the results of flight tests in which multiple reference systems aboard an aircraft simultaneously collected data to evaluate the accuracy of an integrated GPS/INS system. Emphasis is placed on obtaining high-accuracy estimates of the velocity error of the integrated system in order to verify that velocity accuracy is maintained during both linear and circular trajectories. Three reference systems operating in parallel during the flight tests were used to independently determine the position and velocity of the aircraft in flight: a transponder/interrogator ranging system, a laser tracker, and GPS carrier-phase processing. Results obtained from these reference systems are compared against each other and against an integrated, real-time, differential-GPS-based GPS/INS system to arrive at a set of conclusions about the accuracy of the integrated system.
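
    A velocity-accuracy comparison of this kind ultimately reduces to differencing time-aligned velocity solutions from the test and reference systems. A minimal sketch of that core step (hypothetical data and function name; the actual flight-test processing is considerably more involved):

```python
import numpy as np

def rms_velocity_error(v_test, v_ref):
    """RMS magnitude of the 3-D velocity difference between two
    time-aligned trajectories, given as (N, 3) arrays in m/s."""
    dv = np.asarray(v_test) - np.asarray(v_ref)
    return float(np.sqrt(np.mean(np.sum(dv**2, axis=1))))

# hypothetical example: a constant 0.05 m/s error in the x component
v_ref = np.zeros((100, 3))
v_test = v_ref + np.array([0.05, 0.0, 0.0])
print(rms_velocity_error(v_test, v_ref))  # 0.05
```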

  3. Results from teleoperated free-flying spacecraft simulations in the Martin Marietta space operations simulator lab

    NASA Technical Reports Server (NTRS)

    Hartley, Craig S.

    1990-01-01

    To augment the capabilities of the Space Transportation System, NASA has funded studies and developed programs aimed at developing reusable, remotely piloted spacecraft and satellite servicing systems capable of delivering, retrieving, and servicing payloads at altitudes and inclinations beyond the reach of the present Shuttle Orbiters. Since the mid 1970's, researchers at the Martin Marietta Astronautics Group Space Operations Simulation (SOS) Laboratory have been engaged in investigations of remotely piloted and supervised autonomous spacecraft operations. These investigations were based on high fidelity, real-time simulations and have covered a wide range of human factors issues related to controllability. Among these are: (1) mission conditions, including thruster plume impingements and signal time delays; (2) vehicle performance variables, including control authority, control harmony, minimum impulse, and cross coupling of accelerations; (3) maneuvering task requirements such as target distance and dynamics; (4) control parameters including various control modes and rate/displacement deadbands; and (5) display parameters involving camera placement and function, visual aids, and presentation of operational feedback from the spacecraft. This presentation includes a brief description of the capabilities of the SOS Lab to simulate real-time free-flyer operations using live video, advanced technology ground and on-orbit workstations, and sophisticated computer models of on-orbit spacecraft behavior. Sample results from human factors studies in the five categories cited above are provided.

  4. SIMULATION OF DNAPL DISTRIBUTION RESULTING FROM MULTIPLE SOURCES

    EPA Science Inventory

    A three-dimensional and three-phase (water, NAPL, and gas) numerical simulator, called NAPL, was employed to study the interaction between DNAPL (PCE) plumes in variably saturated porous media. Several model verification tests have been performed, including a series of 2-D labo...

  5. FINAL SIMULATION RESULTS FOR DEMONSTRATION CASE 1 AND 2

    SciTech Connect

    David Sloan; Woodrow Fiveland

    2003-10-15

    The goal of this DOE Vision-21 project work scope was to develop an integrated suite of software tools that could be used to simulate and visualize advanced plant concepts. Existing process simulation software did not meet the DOE's objective of ''virtual simulation,'' which was needed to evaluate complex cycles. The overall intent of the DOE was to improve predictive tools for cycle analysis, and to improve the component models that are used in turn to simulate equipment in the cycle. Advanced component models are available; however, a generic coupling capability that would link the advanced component models to the cycle simulation software remained to be developed. In the current project, the coupling of the cycle analysis and cycle component simulation software was based on an existing suite of programs. The challenge was to develop a general-purpose software and communications link between the cycle analysis software Aspen Plus{reg_sign} (marketed by Aspen Technology, Inc.) and specialized component modeling packages, as exemplified by industrial proprietary codes (utilized by ALSTOM Power Inc.) and the FLUENT{reg_sign} computational fluid dynamics (CFD) code (provided by Fluent Inc.). A software interface and controller, based on the open CAPE-OPEN standard, has been developed and extensively tested. Various test runs and demonstration cases have been used to confirm the viability and reliability of the software. ALSTOM Power was tasked with selecting and running two demonstration cases to test the software: (1) a conventional steam cycle (designated as Demonstration Case 1), and (2) a combined cycle test case (designated as Demonstration Case 2). Demonstration Case 1 is a 30 MWe coal-fired power plant for municipal electricity generation, while Demonstration Case 2 is a 270 MWe, natural gas-fired, combined cycle power plant. Sufficient data were available from the operation of both power plants to complete the cycle configurations. Three runs

  6. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Technical Reports Server (NTRS)

    Barrie, A.; Adrian, Mark L.; Yeh, P.-S.; Winkert, G. E.; Lobell, J. V.; Vinas, A.F.; Simpson, D. J.; Moore, T. E.

    2008-01-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6 deg x 180 deg fields-of-view (FOV) are set 90 deg apart. Each sensor is equipped with electrostatic aperture steering that allows it to scan a 45 deg x 180 deg fan about its nominal viewing (0 deg deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30 ms (electrons) and 150 ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5 Mb s(exp -1) of electron data while the DIS generates 1.1 Mb s(exp -1) of ion data, yielding an FPI total data rate of 7.6 Mb s(exp -1). The FPI electron/ion data are collected by the IDPU and then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science-interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5 Mb s(exp -1). Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be discussed include: review of compression algorithm; data quality

  7. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Astrophysics Data System (ADS)

    Barrie, A.; Adrian, M. L.; Yeh, P.; Winkert, G.; Lobell, J.; Vinas, A. F.; Simpson, D. G.

    2009-12-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° x 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering that allows it to scan a 45° x 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30 ms (electrons) and 150 ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5 Mb s-1 of electron data while the DIS generates 1.1 Mb s-1 of ion data, yielding an FPI total data rate of 7.6 Mb s-1. The FPI electron/ion data are collected by the IDPU and then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science-interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5 Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present updated simulations of CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data as well as the FPI-DIS ion data. Compression analysis is based upon a seed of re-processed Cluster

  8. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Astrophysics Data System (ADS)

    Barrie, A. C.; Adrian, M. L.; Yeh, P.; Winkert, G. E.; Lobell, J. V.; Viňas, A. F.; Simpson, D. G.; Moore, T. E.

    2008-12-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° × 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering that allows it to scan a 45° × 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30 ms (electrons) and 150 ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5 Mb s-1 of electron data while the DIS generates 1.1 Mb s-1 of ion data, yielding an FPI total data rate of 7.6 Mb s-1. The FPI electron/ion data are collected by the IDPU and then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science-interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5 Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be
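
    The data rates quoted in these FPI abstracts fix the minimum compression ratio the IDPU must achieve; the arithmetic can be checked directly (illustrative back-of-the-envelope only, not instrument software):

```python
# FPI aggregate data rate vs. CIDP telemetry allocation (Mb/s)
des_rate = 6.5    # DES electron data
dis_rate = 1.1    # DIS ion data
allocation = 1.5  # allocation to the CIDP

total = des_rate + dis_rate  # 7.6 Mb/s aggregate
ratio = total / allocation   # minimum required compression ratio
print(round(total, 1), round(ratio, 2))  # 7.6 5.07
```

    This is why 6.6 Mb/s quoted in some copies of the abstract cannot be right: 6.5 + 1.1 is 7.6, requiring roughly 5:1 compression to fit the 1.5 Mb/s allocation.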

  9. Summary Results of the Neptun Boil-Off Experiments to Investigate the Accuracy and Cooling Influence of LOFT Cladding-Surface Thermocouples (System 00)

    SciTech Connect

    E. L. Tolman; S. N. Aksan

    1981-10-01

    Nine boil-off experiments were conducted in the Swiss NEPTUN Facility primarily to obtain experimental data for assessing the perturbation effects of LOFT thermocouples during simulated small-break core uncovery conditions. The data will also be useful in assessing computer model capability to predict thermal hydraulic response data for this type of experiment. System parameters that were varied for these experiments included heater rod power, system pressure, and initial coolant subcooling. The experiments showed that the LOFT thermocouples do not cause a significant cooling influence in the rods to which they are attached. Furthermore, the accuracy of the LOFT thermocouples is within 20 K at the peak cladding temperature zone.

  10. Influence of Geometry and Mechanical Properties on the Accuracy of Patient-Specific Simulation of Women Pelvic Floor.

    PubMed

    Mayeur, Olivier; Witz, Jean-François; Lecomte, Pauline; Brieu, Mathias; Cosson, Michel; Miller, Karol

    2016-01-01

    The female pelvic system involves multiple organs, muscles, ligaments, and fasciae in which different pathologies may occur. Here we are most interested in abnormal mobility, often caused by complex and not fully understood mechanisms. Computer simulation and modeling using the finite element (FE) method are tools that help to better understand pathological mobility, but patient-specific models are required to contribute to patient care. These models require a good representation of the pelvic system geometry and information on material properties, boundary conditions, and loading. In this contribution we focus on the relative influence of inaccuracies in the geometry description and of uncertainty in the patient-specific material properties of soft connective tissues. We conducted a comparative study using several constitutive behavior laws and variations in geometry description resulting from the imprecision of clinical imaging and image analysis. We find that geometry seems to have the dominant effect on pelvic organ mobility simulation results. Provided that proper finite-deformation nonlinear FE solution procedures are used, the influence of the functional form of the constitutive law may be negligible for practical purposes. These findings confirm similar results from the fields of neurosurgery modeling and abdominal aortic aneurysms.

  11. Effects of heterogeneity in aquifer permeability and biomass on biodegradation rate calculations - Results from numerical simulations

    USGS Publications Warehouse

    Scholl, M.A.

    2000-01-01

    Numerical simulations were used to examine the effects of heterogeneity in hydraulic conductivity (K) and intrinsic biodegradation rate on the accuracy of contaminant plume-scale biodegradation rates obtained from field data. The simulations were based on steady-state biodegradation of a BTEX contaminant plume under sulfate-reducing conditions, with the electron acceptor in excess. Biomass was either uniform or correlated with K to model spatially variable intrinsic biodegradation rates. A hydraulic conductivity data set from an alluvial aquifer was used to generate three sets of 10 realizations with different degrees of heterogeneity, and contaminant transport with biodegradation was simulated with BIOMOC. Biodegradation rates were calculated from the steady-state contaminant plumes using decreases in concentration with distance downgradient and a single flow-velocity estimate, as is commonly done in site characterization to support the interpretation of natural attenuation. The observed rates were found to underestimate the actual rate specified in the heterogeneous model in all cases. The discrepancy between the observed rate and the 'true' rate depended on the ground water flow velocity estimate and increased with increasing heterogeneity in the aquifer. For a lognormal K distribution with a variance of 0.46, the estimate was no more than a factor of 1.4 slower than the true rate. For an aquifer with 20% silt/clay lenses, the rate estimate was as much as nine times slower than the true rate. Homogeneous-permeability, uniform-degradation-rate simulations were used to generate predictions of remediation time with the rates estimated from the heterogeneous models. The homogeneous models generally overestimated the extent of remediation or underestimated remediation time, due to delayed degradation of contaminants in the low-K areas. Results suggest that aquifer characterization for natural attenuation at contaminated sites should include assessment of the presence
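
    The plume-scale rate estimate described above is typically a first-order decay rate fitted to centerline concentrations against travel time (distance divided by a single velocity estimate). A sketch of that standard field calculation with hypothetical numbers (this illustrates the estimation method, not the BIOMOC model):

```python
import numpy as np

def first_order_rate(x, c, velocity):
    """First-order decay rate (1/d) from downgradient distances x (m),
    centerline concentrations c, and a single flow-velocity estimate (m/d):
    fit ln(c) = ln(c0) - k * (x / v) and return k."""
    t = np.asarray(x) / velocity          # travel time, days
    slope, _intercept = np.polyfit(t, np.log(c), 1)
    return -slope

# hypothetical plume: true k = 0.01 1/d, uniform v = 0.2 m/d
x = np.array([0.0, 50.0, 100.0, 150.0])
c = 10.0 * np.exp(-0.01 * x / 0.2)
print(first_order_rate(x, c, 0.2))  # recovers 0.01
```

    As the study finds, applying a single velocity estimate to a heterogeneous aquifer biases such a fit toward underestimating the true rate.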

  12. The structural properties of a two-Yukawa fluid: Simulation and analytical results.

    PubMed

    Broccio, Matteo; Costa, Dino; Liu, Yun; Chen, Sow-Hsin

    2006-02-28

    Standard Monte Carlo simulations are carried out to assess the accuracy of theoretical predictions for the structural properties of a model fluid interacting through a hard-core two-Yukawa potential composed of a short-range attractive well next to a hard repulsive core, followed by a smooth, long-range repulsive tail. Theoretical calculations are performed in the framework provided by the Ornstein-Zernike equation, solved either analytically with the mean spherical approximation (MSA) or iteratively with the hypernetted-chain (HNC) closure. Our analysis shows that both theories are generally accurate in a thermodynamic region corresponding to a dense vapor phase around the critical point. For a suitable choice of potential parameters, namely, when the attractive well is deep and/or large enough, the static structure factor displays a secondary low-Q peak. In this case HNC predictions closely follow the simulation results, whereas MSA results progressively worsen the more pronounced this low-Q peak is. We discuss the appearance of such a peak, also experimentally observed in colloidal suspensions and protein solutions, in terms of the formation of equilibrium clusters in the homogeneous fluid.

  13. The structural properties of a two-Yukawa fluid: Simulation and analytical results

    NASA Astrophysics Data System (ADS)

    Broccio, Matteo; Costa, Dino; Liu, Yun; Chen, Sow-Hsin

    2006-02-01

    Standard Monte Carlo simulations are carried out to assess the accuracy of theoretical predictions for the structural properties of a model fluid interacting through a hard-core two-Yukawa potential composed of a short-range attractive well next to a hard repulsive core, followed by a smooth, long-range repulsive tail. Theoretical calculations are performed in the framework provided by the Ornstein-Zernike equation, solved either analytically with the mean spherical approximation (MSA) or iteratively with the hypernetted-chain (HNC) closure. Our analysis shows that both theories are generally accurate in a thermodynamic region corresponding to a dense vapor phase around the critical point. For a suitable choice of potential parameters, namely, when the attractive well is deep and/or large enough, the static structure factor displays a secondary low-Q peak. In this case HNC predictions closely follow the simulation results, whereas MSA results progressively worsen the more pronounced this low-Q peak is. We discuss the appearance of such a peak, also experimentally observed in colloidal suspensions and protein solutions, in terms of the formation of equilibrium clusters in the homogeneous fluid.
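
    The interaction studied in these two records can be written down directly: in reduced units (hard-core diameter and kT) it is a hard core plus a short-range attractive Yukawa term and a weaker, slowly decaying repulsive one. A sketch with illustrative parameter values (not those used in the paper):

```python
import numpy as np

def two_yukawa(r, k1=6.0, z1=10.0, k2=0.5, z2=0.5):
    """Hard-core two-Yukawa potential u(r)/kT, with r in units of the
    hard-core diameter; parameter values here are illustrative only."""
    r = np.asarray(r, dtype=float)
    attract = -k1 * np.exp(-z1 * (r - 1.0)) / r  # short-range attraction
    repulse = k2 * np.exp(-z2 * (r - 1.0)) / r   # long-range repulsion
    return np.where(r < 1.0, np.inf, attract + repulse)

r = np.array([1.01, 5.0])
u = two_yukawa(r)
print(u)  # attractive well near contact, weak repulsive tail at large r
```

    Deepening or widening the attractive well relative to the repulsive tail is the regime in which the abstracts report the secondary low-Q peak in the structure factor.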

  14. SU-E-T-401: Evaluation of TG-43 Dose Calculation Accuracy for SAVI-Based Accelerated Partial Breast Irradiation (APBI) Via Monte Carlo Simulations

    SciTech Connect

    Xu, Y; Tian, Z; Jiang, S; Jia, X; Scanderbeg, D; Yashar, C; Zhang, M

    2015-06-15

    Purpose: The current standard TG-43 dose calculation method for SAVI-based Accelerated Partial Breast Irradiation (APBI) assumes an ideal geometry of infinite homogeneous water. However, in SAVI treatments, the air cavity inside the device and the short source-to-skin distance raise concerns about the dose accuracy of the TG-43 method. This study evaluates TG-43 dose calculation accuracy in SAVI treatments using Monte Carlo (MC) simulations. Methods: We recalculated the dose distributions of 15 APBI patients treated with SAVI devices (five cases each with device sizes 6-1, 8-1, and 10-1) using our in-house fast MC dose package for HDR brachytherapy (gBMC). A phase-space file was used to model the Ir-192 HDR source. For each case, the patient CT was converted into a voxelized phantom, and the dwell positions and times were extracted from the treatment plans for the MC dose calculations. Clinically relevant dosimetric parameters of the recalculated dose were compared to those computed via the TG-43 approach. Results: A systematic overestimation of doses was found in the TG-43 results for the 15 cases, with D90, V150, and V200 for PTV-eval higher than the MC results by 2.8±1.8%, 2.0±2.2%, and 1.8±3.5%, respectively. TG-43 also overestimated the dose to skin, with the maximum skin dose higher by 4.4±8.4% on average. The relatively large standard deviation in the maximum-skin-dose difference is partially ascribed to the statistical uncertainty of MC simulations when computing a maximum dose. It took gBMC ∼1 minute to compute the dose for a SAVI plan. Conclusion: The high efficiency of our gBMC package facilitated a study with a relatively large number of cases. An overestimation of TG-43 doses was found when using this MC package to recompute doses in SAVI cases. Clinical use of the TG-43 dose calculation method in this scenario should take this overestimation into account.
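
    The dosimetric parameters compared in this abstract (D90, V150, V200) are standard dose-volume metrics. A hedged sketch of how they can be computed from a voxelized dose array (hypothetical helper functions, not the gBMC code):

```python
import numpy as np

def d_x(dose, volume_fraction):
    """Minimum dose received by the hottest `volume_fraction` of voxels,
    e.g. D90 = d_x(dose, 0.90)."""
    return float(np.percentile(dose, 100.0 * (1.0 - volume_fraction)))

def v_x(dose, prescription, level):
    """Fractional volume receiving at least `level` * prescription dose,
    e.g. V150 = v_x(dose, rx, 1.5)."""
    return float(np.mean(np.asarray(dose) >= level * prescription))

# hypothetical dose array: voxel doses spread uniformly from 0 to 100 Gy
dose = np.linspace(0.0, 100.0, 101)
print(d_x(dose, 0.90))       # 10.0 (90% of voxels receive >= 10 Gy)
print(v_x(dose, 34.0, 1.5))  # fraction of voxels receiving >= 51 Gy
```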

  15. Preclinical Evaluation of the Accuracy of HIFU Treatments Using a Tumor-Mimic Model. Results of Animal Experiments

    NASA Astrophysics Data System (ADS)

    Melodelima, D.; N'Djin, W. A.; Parmentier, H.; Rivoire, M.; Chapelon, J. Y.

    2009-04-01

    Presented in this paper is a tumor-mimic model that allows the evaluation, at a preclinical stage, of the targeting accuracy of HIFU treatments in the liver. The tumor-mimics were made by injecting a warm mixture of agarose, cellulose, and glycerol that polymerizes immediately in hepatic tissue and forms a 1 cm discrete lesion that is detectable by ultrasound imaging and gross pathology. Three studies were conducted: (i) in vitro experiments to study the acoustic properties of the tumor-mimics, (ii) animal experiments in ten pigs to evaluate the mid-term (30 days) tolerance of the tumor-mimics, and (iii) ultrasound-guided HIFU ablation in ten pigs with tumor-mimics to demonstrate that a predetermined zone can be treated accurately. The attenuation of the tumor-mimics was 0.39 dB.cm-1 at 1 MHz, the ultrasound propagation velocity was 1523 m.s-1, and the acoustic impedance was 1.8 MRayls. The pigs tolerated the tumor-mimics and the treatment well over the experimental period. Tumor-mimics were visible with high contrast on ultrasound images. In addition, it was demonstrated, using the tumor-mimic as a reference target, that the tissue destruction induced by HIFU and observed on gross pathology corresponded to the targeted area on the ultrasound images. The average difference between the predetermined location of the HIFU ablation and the actual coagulated area was 16%. These tumor-mimics are identifiable by ultrasound imaging and do not modify the geometry of HIFU lesions, and thus constitute a viable mimic of tumors indicated for HIFU therapy.

  16. Improved Accuracy in RNA-Protein Rigid Body Docking by Incorporating Force Field for Molecular Dynamics Simulation into the Scoring Function.

    PubMed

    Iwakiri, Junichi; Hamada, Michiaki; Asai, Kiyoshi; Kameda, Tomoshi

    2016-09-13

    RNA-protein interactions play fundamental roles in many biological processes. To understand these interactions, it is necessary to know the three-dimensional structures of RNA-protein complexes. However, determining the tertiary structure of these complexes is often difficult, suggesting that an accurate rigid body docking method for RNA-protein complexes is needed. In general, the rigid body docking process is divided into two steps: generating candidate structures from the individual RNA and protein structures, and then narrowing down the candidates. In this study, we focus on the former problem to improve the prediction accuracy in RNA-protein docking. Our method is based on the integration of physicochemical information about RNA into ZDOCK, which is known as one of the most successful computer programs for protein-protein docking. Because recent studies showed that current force fields for molecular dynamics simulations of proteins and nucleic acids are quite accurate, we modeled the physicochemical information about RNA with force fields such as AMBER and CHARMM. A comprehensive benchmark of RNA-protein docking, using three recently developed data sets, reveals the remarkable prediction accuracy of the proposed method compared with existing docking programs: the success rate is 34.7% when only the best-scoring predicted structure of the RNA-protein complex is considered and 79.2% when the top 3,600 predictions are considered. Three full-atomistic force fields for RNA (AMBER94, AMBER99, and CHARMM22) produced almost equally accurate results, confirming the accuracy of current force fields for nucleic acids. In addition, we found that the electrostatic interaction and the representation of shape complementarity between protein and RNA play important roles in the accurate prediction of the native structures of RNA-protein complexes.
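
    The force-field term folded into such a docking score is, at its core, an intermolecular electrostatic energy evaluated over a candidate pose. A minimal sketch using point charges and a distance-dependent dielectric (the weighting and exact functional form used in the paper's modified ZDOCK score are assumptions here):

```python
import numpy as np

def coulomb_score(xyz_a, q_a, xyz_b, q_b, eps=4.0):
    """Intermolecular Coulomb energy (kcal/mol) between two rigid bodies,
    using a simple distance-dependent dielectric eps(r) = eps * r, so each
    pair contributes 332.0636 * qi*qj / (eps * r**2)."""
    d = np.linalg.norm(xyz_a[:, None, :] - xyz_b[None, :, :], axis=-1)
    return float(332.0636 * np.sum(q_a[:, None] * q_b[None, :] / (eps * d * d)))

# hypothetical pose: one partial charge per body, 3 Angstroms apart
a, qa = np.array([[0.0, 0.0, 0.0]]), np.array([-0.5])
b, qb = np.array([[3.0, 0.0, 0.0]]), np.array([+0.5])
print(coulomb_score(a, qa, b, qb))  # negative: electrostatically favorable
```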

  17. Simulations Build Efficacy: Empirical Results from a Four-Week Congressional Simulation

    ERIC Educational Resources Information Center

    Mariani, Mack; Glenn, Brian J.

    2014-01-01

    This article describes a four-week congressional committee simulation implemented in upper level courses on Congress and the Legislative process at two liberal arts colleges. We find that the students participating in the simulation possessed high levels of political knowledge and confidence in their political skills prior to the simulation. An…

  18. Simulating Fluid Movement in Saturated Heterogeneous Fractured Watersheds: Exploring the Influence of Data Distribution, Aquifer Structure, and Element Size on Model Accuracy

    NASA Astrophysics Data System (ADS)

    Wellman, T. P.; Poeter, E. P.

    2005-12-01

    Driven in part by population growth and subsequent development, fractured watersheds are increasingly relied upon as primary water resources. Yet due to their complexity, accurate model predictions are often beyond reach. Methods must be developed that aid in the creation of viable models, which is the focus of this study. In light of the computational expense of discrete fracture models and the limited ability to characterize hydraulically conductive fractures, continuum models have remained the preferred tool for simulating hydrologic processes in large-scale fractured aquifers. The major challenge for continuum representation is determining the continuum (i.e., element) size, if any, that can accurately represent fracture-controlled fluid movement. A common approach is to employ the representative elementary volume in three-dimensional systems, which we refer to generically as the representative elementary scale (RES). We present an energy-based, multi-scale approach for estimating spatially variable RES, developed in a previous phase of our research. Rather than evaluating fracture structure directly, we spatially analyze the effective fluid energy at varying scales using hydraulic head observations. Building upon this initial framework, we present a method for determining prediction uncertainty in RES selection. Our approach employs Tikhonov regularization, direct inversion, conditioned random walks, and nonparametric bootstrapping. In comparison to geostatistical simulation, our method is computationally faster, does not require variogram construction, needs fewer input parameters, and produces reasonably accurate predictions. Although resolving near-field RES resulting from small-scale features may not be possible for many systems, macroscopic continuum structure is shown to be reasonably approximated and useful in developing large-scale hydrologic models. We apply our method of RES estimation and RES uncertainty analysis under varying data
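
    Of the statistical tools listed, the nonparametric bootstrap is the easiest to illustrate: resample the observations with replacement and take percentiles of the recomputed statistic as an uncertainty interval. A generic sketch (function and variable names are hypothetical, not the authors' implementation):

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Nonparametric bootstrap percentile interval for `stat` of `data`."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    reps = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    return np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# hypothetical scale estimates: normal draws around 10 with spread 2
sample = np.random.default_rng(1).normal(10.0, 2.0, size=200)
lo, hi = bootstrap_ci(sample)
print(lo, hi)  # 95% interval bracketing the sample mean
```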

  19. Evaluation of the accuracy of an offline seasonally-varying matrix transport model for simulating ideal age

    DOE PAGES

    Bardin, Ann; Primeau, Francois; Lindsay, Keith; ...

    2016-07-21

    Newton-Krylov solvers for ocean tracers have the potential to greatly decrease the computational costs of spinning up deep-ocean tracers, which can take several thousand model years to reach equilibrium with surface processes. One version of the algorithm uses offline tracer transport matrices to simulate an annual cycle of tracer concentrations and applies Newton’s method to find concentrations that are periodic in time. Here we present the impact of time-averaging the transport matrices on the equilibrium values of an ideal-age tracer. We compared annually-averaged, monthly-averaged, and 5-day-averaged transport matrices to an online simulation using the ocean component of the Community Earth System Model (CESM) with a nominal horizontal resolution of 1° × 1° and 60 vertical levels. We found that increasing the time resolution of the offline transport model reduced a low age bias from 12% for the annually-averaged transport matrices, to 4% for the monthly-averaged transport matrices, and to less than 2% for the transport matrices constructed from 5-day averages. The largest differences were in areas with strong seasonal changes in the circulation, such as the Northern Indian Ocean. As a result, for many applications the relatively small bias obtained using the offline model makes the offline approach attractive because it uses significantly less computer resources and is simpler to set up and run.

  20. Evaluation of the accuracy of an offline seasonally-varying matrix transport model for simulating ideal age

    SciTech Connect

    Bardin, Ann; Primeau, Francois; Lindsay, Keith; Bradley, Andrew

    2016-07-21

    Newton-Krylov solvers for ocean tracers have the potential to greatly decrease the computational costs of spinning up deep-ocean tracers, which can take several thousand model years to reach equilibrium with surface processes. One version of the algorithm uses offline tracer transport matrices to simulate an annual cycle of tracer concentrations and applies Newton’s method to find concentrations that are periodic in time. Here we present the impact of time-averaging the transport matrices on the equilibrium values of an ideal-age tracer. We compared annually-averaged, monthly-averaged, and 5-day-averaged transport matrices to an online simulation using the ocean component of the Community Earth System Model (CESM) with a nominal horizontal resolution of 1° × 1° and 60 vertical levels. We found that increasing the time resolution of the offline transport model reduced a low age bias from 12% for the annually-averaged transport matrices, to 4% for the monthly-averaged transport matrices, and to less than 2% for the transport matrices constructed from 5-day averages. The largest differences were in areas with strong seasonal changes in the circulation, such as the Northern Indian Ocean. As a result, for many applications the relatively small bias obtained using the offline model makes the offline approach attractive because it uses significantly less computer resources and is simpler to set up and run.
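
    For the annually-averaged case, the equilibrium ideal age follows from a single linear solve: with a transport operator T (per year), age satisfies T a = -1 in the interior (age accumulates at one year per year, balanced by transport) and a = 0 in the surface layer. A toy one-dimensional diffusive column illustrates the structure of that solve (the CESM matrices are vastly larger and, in the better-performing cases, seasonally varying):

```python
import numpy as np

n, dz = 10, 100.0      # number of boxes, layer thickness (m)
kappa = 3.15e4         # vertical diffusivity (m^2/yr), illustrative value
T = np.zeros((n, n))   # transport operator (1/yr); rows sum to zero
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            T[i, j] += kappa / dz**2
            T[i, i] -= kappa / dz**2

# equilibrium ideal age: T a = -1 in the interior, a = 0 at the surface box
A, b = T.copy(), -np.ones(n)
A[0, :], b[0] = 0.0, 0.0
A[0, 0] = 1.0
age = np.linalg.solve(A, b)
print(age)  # zero at the surface, monotonically increasing with depth
```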

  1. Direct drive: Simulations and results from the National Ignition Facility

    SciTech Connect

    Radha, P. B.; Hohenberger, M.; Edgell, D. H.; Marozas, J. A.; Marshall, F. J.; Michel, D. T.; Rosenberg, M. J.; Seka, W.; Shvydky, A.; Boehly, T. R.; Collins, T. J. B.; Campbell, E. M.; Craxton, R. S.; Delettrez, J. A.; Dixit, S. N.; Frenje, J. A.; Froula, D. H.; Goncharov, V. N.; Hu, S. X.; Knauer, J. P.; McCrory, R. L.; McKenty, P. W.; Meyerhofer, D. D.; Moody, J.; Myatt, J. F.; Petrasso, R. D.; Regan, S. P.; Sangster, T. C.; Sio, H.; Skupsky, S.; Zylstra, A.

    2016-04-19

    Here, the direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well-modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.

  2. Direct drive: Simulations and results from the National Ignition Facility

    DOE PAGES

    Radha, P. B.; Hohenberger, M.; Edgell, D. H.; ...

    2016-04-19

    Here, the direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well-modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.

  3. Implementation and Simulation Results using Autonomous Aerobraking Development Software

    NASA Technical Reports Server (NTRS)

    Maddock, Robert W.; DwyerCianciolo, Alicia M.; Bowes, Angela; Prince, Jill L. H.; Powell, Richard W.

    2011-01-01

    An Autonomous Aerobraking software system is currently under development with support from the NASA Engineering and Safety Center (NESC) that would move typically ground-based operations functions to onboard an aerobraking spacecraft, reducing mission risk and mission cost. The suite of software that will enable autonomous aerobraking is the Autonomous Aerobraking Development Software (AADS) and consists of an ephemeris model, onboard atmosphere estimator, temperature and loads prediction, and a maneuver calculation. The software calculates the maneuver time, magnitude and direction commands to maintain the spacecraft periapsis parameters within design structural load and/or thermal constraints. The AADS is currently tested in simulations at Mars, with plans to also evaluate feasibility and performance at Venus and Titan.

  4. Simulation study on potential accuracy gains from dual energy CT tissue segmentation for low-energy brachytherapy Monte Carlo dose calculations

    NASA Astrophysics Data System (ADS)

    Landry, Guillaume; Granton, Patrick V.; Reniers, Brigitte; Öllers, Michel C.; Beaulieu, Luc; Wildberger, Joachim E.; Verhaegen, Frank

    2011-10-01

    This work compares Monte Carlo (MC) dose calculations for 125I and 103Pd low-dose rate (LDR) brachytherapy sources performed in virtual phantoms containing a series of human soft tissues of interest for brachytherapy. The geometries are segmented (tissue type and density assignment) based on simulated single energy computed tomography (SECT) and dual energy (DECT) images, as well as the all-water TG-43 approach. Accuracy is evaluated by comparison to a reference MC dose calculation performed in the same phantoms, where each voxel's material properties are assigned with exactly known values. The objective is to assess potential dose calculation accuracy gains from DECT. A CT imaging simulation package, ImaSim, is used to generate CT images of calibration and dose calculation phantoms at 80, 120, and 140 kVp. From the high and low energy images electron density ρe and atomic number Z are obtained using a DECT algorithm. Following a correction derived from scans of the calibration phantom, accuracy on Z and ρe of ±1% is obtained for all soft tissues with atomic number Z in [6,8] except lung. GEANT4 MC dose calculations based on DECT segmentation agreed with the reference within ±4% for 103Pd, the most sensitive source to tissue misassignments. SECT segmentation with three tissue bins as well as the TG-43 approach showed inferior accuracy with errors of up to 20%. Using seven tissue bins in our SECT segmentation brought errors within ±10% for 103Pd. In general 125I dose calculations showed higher accuracy than 103Pd. Simulated image noise was found to decrease DECT accuracy by 3-4%. Our findings suggest that DECT-based segmentation yields improved accuracy when compared to SECT segmentation with seven tissue bins in LDR brachytherapy dose calculation for the specific case of our non-anthropomorphic phantom. The validity of our conclusions for clinical geometry as well as the importance of image noise in the tissue segmentation procedure deserves further investigation.

  5. Simulation study on potential accuracy gains from dual energy CT tissue segmentation for low-energy brachytherapy Monte Carlo dose calculations.

    PubMed

    Landry, Guillaume; Granton, Patrick V; Reniers, Brigitte; Ollers, Michel C; Beaulieu, Luc; Wildberger, Joachim E; Verhaegen, Frank

    2011-10-07

    This work compares Monte Carlo (MC) dose calculations for (125)I and (103)Pd low-dose rate (LDR) brachytherapy sources performed in virtual phantoms containing a series of human soft tissues of interest for brachytherapy. The geometries are segmented (tissue type and density assignment) based on simulated single energy computed tomography (SECT) and dual energy (DECT) images, as well as the all-water TG-43 approach. Accuracy is evaluated by comparison to a reference MC dose calculation performed in the same phantoms, where each voxel's material properties are assigned with exactly known values. The objective is to assess potential dose calculation accuracy gains from DECT. A CT imaging simulation package, ImaSim, is used to generate CT images of calibration and dose calculation phantoms at 80, 120, and 140 kVp. From the high and low energy images electron density ρ(e) and atomic number Z are obtained using a DECT algorithm. Following a correction derived from scans of the calibration phantom, accuracy on Z and ρ(e) of ±1% is obtained for all soft tissues with atomic number Z ∊ [6,8] except lung. GEANT4 MC dose calculations based on DECT segmentation agreed with the reference within ±4% for (103)Pd, the most sensitive source to tissue misassignments. SECT segmentation with three tissue bins as well as the TG-43 approach showed inferior accuracy with errors of up to 20%. Using seven tissue bins in our SECT segmentation brought errors within ±10% for (103)Pd. In general (125)I dose calculations showed higher accuracy than (103)Pd. Simulated image noise was found to decrease DECT accuracy by 3-4%. Our findings suggest that DECT-based segmentation yields improved accuracy when compared to SECT segmentation with seven tissue bins in LDR brachytherapy dose calculation for the specific case of our non-anthropomorphic phantom. The validity of our conclusions for clinical geometry as well as the importance of image noise in the tissue segmentation procedure deserves further investigation.
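As background to the DECT step described in this abstract, here is a minimal sketch of a two-energy inversion for (ρe, Z). The attenuation model (a Compton-like term plus a photoelectric-like term scaling as Z^3.3) is a textbook simplification, and every coefficient below is invented for illustration; none are the calibrated values used by ImaSim or by this study.

```python
N_EXP = 3.3                       # photoelectric Z-dependence exponent (typical value)
K_KN = {80: 1.00, 140: 0.80}      # Compton-like coefficients (arbitrary units)
K_PH = {80: 4e-3, 140: 8e-4}      # photoelectric coefficients (fall faster with energy)

def forward(rho_e, Z, kvp):
    """Toy linear attenuation coefficient at the given tube voltage."""
    return rho_e * (K_KN[kvp] + K_PH[kvp] * Z ** N_EXP)

def invert(mu_low, mu_high):
    """Recover (rho_e, Z) from attenuation measured at 80 and 140 kVp."""
    r = mu_low / mu_high  # the ratio eliminates rho_e ...

    def f(Z):             # ... leaving a monotone function of Z alone
        return (K_KN[80] + K_PH[80] * Z ** N_EXP) / \
               (K_KN[140] + K_PH[140] * Z ** N_EXP) - r

    lo, hi = 1.0, 20.0    # soft tissues have Z roughly in [6, 8]
    for _ in range(60):   # bisection on the monotone ratio equation
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    Z = 0.5 * (lo + hi)
    rho_e = mu_low / (K_KN[80] + K_PH[80] * Z ** N_EXP)
    return rho_e, Z
```

The ratio trick (dividing the two measurements to cancel ρe first, then solving for Z) is the structural point; a real DECT algorithm additionally needs the beam spectra and the calibration-phantom correction the abstract mentions.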

  6. Stellar populations of stellar halos: Results from the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Cook, B. A.; Conroy, C.; Pillepich, A.; Hernquist, L.

    2016-08-01

    The influence of both major and minor mergers is expected to significantly affect gradients of stellar ages and metallicities in the outskirts of galaxies. Measurements of observed gradients are beginning to reach large radii in galaxies, but a theoretical framework for connecting the findings to a picture of galactic build-up is still in its infancy. We analyze stellar populations of a statistically representative sample of quiescent galaxies over a wide mass range from the Illustris simulation. We measure metallicity and age profiles in the stellar halos of quiescent Illustris galaxies ranging in stellar mass from 1010 to 1012 M ⊙, accounting for observational projection and luminosity-weighting effects. We find wide variance in stellar population gradients between galaxies of similar mass, with typical gradients agreeing with observed galaxies. We show that, at fixed mass, the fraction of stars born in-situ within galaxies is correlated with the metallicity gradient in the halo, confirming that stellar halos contain unique information about the build-up and merger histories of galaxies.

  7. Results from modeling and simulation of chemical downstream etch systems

    SciTech Connect

    Meeks, E.; Vosen, S.R.; Shon, J.W.; Larson, R.S.; Fox, C.A.; Buchenauer

    1996-05-01

    This report summarizes modeling work performed at Sandia in support of Chemical Downstream Etch (CDE) benchmark and tool development programs under a Cooperative Research and Development Agreement (CRADA) with SEMATECH. The Chemical Downstream Etch (CDE) Modeling Project supports SEMATECH Joint Development Projects (JDPs) with Matrix Integrated Systems, Applied Materials, and Astex Corporation in the development of new CDE reactors for wafer cleaning and stripping processes. These dry-etch reactors replace wet-etch steps in microelectronics fabrication, enabling compatibility with other process steps and reducing the use of hazardous chemicals. Models were developed at Sandia to simulate the gas flow, chemistry and transport in CDE reactors. These models address the essential components of the CDE system: a microwave source, a transport tube, a showerhead/gas inlet, and a downstream etch chamber. The models have been used in tandem to determine the evolution of reactive species throughout the system, and to make recommendations for process and tool optimization. A significant part of this task has been in the assembly of a reasonable set of chemical rate constants and species data necessary for successful use of the models. Often the kinetic parameters were uncertain or unknown. For this reason, a significant effort was placed on model validation to obtain industry confidence in the model predictions. Data for model validation were obtained from the Sandia Molecular Beam Mass Spectrometry (MBMS) experiments, from the literature, from the CDE Benchmark Project (also part of the Sandia/SEMATECH CRADA), and from the JDP partners. The validated models were used to evaluate process behavior as a function of microwave-source operating parameters, transport-tube geometry, system pressure, and downstream chamber geometry. In addition, quantitative correlations were developed between CDE tool performance and operation set points.

  8. Diamond-NICAM-SPRINTARS: downscaling and simulation results

    NASA Astrophysics Data System (ADS)

    Uchida, J.

    2012-12-01

    As part of the initiative "Research Program on Climate Change Adaptation" (RECCA), which investigates how predicted large-scale climate change may affect local weather and examines possible atmospheric hazards that cities may encounter due to such climate change, in order to guide policy makers on implementing new environmental measures, the "Development of Seamless Chemical AssimiLation System and its Application for Atmospheric Environmental Materials" (SALSA) project is funded by the Japanese Ministry of Education, Culture, Sports, Science and Technology and is focused on creating a regional (local) scale assimilation system that can accurately recreate and predict the transport of carbon dioxide and other air pollutants. In this study, a regional model of the next generation global cloud-resolving model NICAM (Non-hydrostatic ICosahedral Atmospheric Model) (Tomita and Satoh, 2004) is used and run together with a transport model SPRINTARS (Spectral Radiation Transport Model for Aerosol Species) (Takemura et al, 2000) and a chemical transport model CHASER (Sudo et al, 2002) to simulate aerosols across urban cities (over a Kanto region including metropolitan Tokyo). The presentation will mainly be on a "Diamond-NICAM" (Figure 1), a regional climate model version of the global climate model NICAM, and its dynamical downscaling methodologies. Originally, a global NICAM can be described as twenty identical equilateral triangular-shaped panels covering the entire globe, where grid points are at the corners of those panels; to increase the resolution (called a "global-level" in NICAM), additional points are added at the midpoints of existing adjacent point pairs, so the number of panels increases fourfold with each increment of one global-level. On the other hand, a Diamond-NICAM only uses two of those initial triangular-shaped panels, and thus covers only part of the globe.
In addition, NICAM uses an adaptive mesh scheme and its grid size can gradually decrease, as the grid
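The panel counting described in this abstract is easy to make concrete. A small sketch: the panel rule (20 triangles globally, quadrupling per global-level, only 2 panels for a Diamond-NICAM) follows the abstract, while the grid-point formula 10·4^glevel + 2 is standard icosahedral-grid background stated here as an assumption, not taken from the abstract.

```python
def panel_count(glevel, initial_panels=20):
    """Triangular panels at a given global-level (use initial_panels=2
    for a Diamond-NICAM, which keeps only two of the twenty panels)."""
    return initial_panels * 4 ** glevel

def global_grid_points(glevel):
    """Grid points (panel corners) on the full icosahedral grid:
    the standard relation 10 * 4**glevel + 2 (12 points at level 0)."""
    return 10 * 4 ** glevel + 2
```

One increment of the global-level takes the global grid from 20 panels to 80, while a Diamond-NICAM at the same level carries only 2 → 8 panels, which is the computational saving of the regional configuration.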

  9. Electron transport in the solar wind -results from numerical simulations

    NASA Astrophysics Data System (ADS)

    Smith, Håkan; Marsch, Eckart; Helander, Per

    A conventional fluid approach is in general insufficient for a correct description of electron transport in weakly collisional plasmas such as the solar wind. The classical Spitzer-Härm theory is not valid when the Knudsen number (the mean free path divided by the length scale of temperature variation) is greater than ~10^-2. Despite this, the heat transport from Spitzer-Härm theory is widely used in situations with relatively long mean free paths. For realistic Knudsen numbers in the solar wind, the electron distribution function develops suprathermal tails, and the departure from a local Maxwellian can be significant at the energies which contribute the most to the heat flux moment. To accurately model heat transport, a kinetic approach is therefore more adequate. Different techniques have been used previously, e.g. particle simulations [Landi, 2003], spectral methods [Pierrard, 2001], the so-called 16-moment method [Lie-Svendsen, 2001], and approximation by kappa functions [Dorelli, 2003]. In the present study we solve the Fokker-Planck equation for electrons in one spatial dimension and two velocity dimensions. The distribution function is expanded in Laguerre polynomials in energy, and a finite difference scheme is used to solve the equation in the spatial dimension and the velocity pitch angle. The ion temperature and density profiles are assumed to be known, but the electric field is calculated self-consistently to guarantee quasi-neutrality. The kinetic equation is of a two-way diffusion type, for which the distribution of particles entering the computational domain at both ends of the spatial dimension must be specified, leaving the outgoing distributions to be calculated. The long mean free path of the suprathermal electrons has the effect that the details of the boundary conditions play an important role in determining the particle and heat fluxes as well as the electric potential drop across the domain. Dorelli, J. C., and Scudder, J. D.

  10. Frontotemporal oxyhemoglobin dynamics predict performance accuracy of dance simulation gameplay: temporal characteristics of top-down and bottom-up cortical activities.

    PubMed

    Ono, Yumie; Nomoto, Yasunori; Tanaka, Shohei; Sato, Keisuke; Shimada, Sotaro; Tachibana, Atsumichi; Bronner, Shaw; Noah, J Adam

    2014-01-15

    We utilized the high temporal resolution of functional near-infrared spectroscopy to explore how sensory inputs (visual and rhythmic auditory cues) are processed in the cortical areas of multimodal integration to achieve coordinated motor output during unrestricted dance simulation gameplay. Using an open source clone of the dance simulation video game, Dance Dance Revolution, two cortical regions of interest were selected for study, the middle temporal gyrus (MTG) and the frontopolar cortex (FPC). We hypothesized that activity in the FPC would indicate top-down regulatory mechanisms of motor behavior, while that in the MTG would be sustained due to bottom-up integration of visual and auditory cues throughout the task. We also hypothesized that a correlation would exist between behavioral performance and the temporal patterns of the hemodynamic responses in these regions of interest. Results indicated that greater temporal accuracy of dance steps positively correlated with persistent activation of the MTG and with cumulative suppression of the FPC. When auditory cues were eliminated from the simulation, modifications in cortical responses were found depending on the gameplay performance. In the MTG, high-performance players showed an increase but low-performance players displayed a decrease in cumulative amount of the oxygenated hemoglobin response in the no music condition compared to that in the music condition. In the FPC, high-performance players showed relatively small variance in the activity regardless of the presence of auditory cues, while low-performance players showed larger differences in the activity between the no music and music conditions. These results suggest that the MTG plays an important role in the successful integration of visual and rhythmic cues and the FPC may work as top-down control to compensate for insufficient integrative ability of visual and rhythmic cues in the MTG. 
The relative relationships between these cortical areas indicated

  11. A comparison of two position estimate algorithms that use ILS localizer and DME information. Simulation and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Vicroy, D. D.; Scanlon, C.

    1984-01-01

    Simulation and flight tests were conducted to compare the accuracy of two algorithms designed to compute a position estimate with an airborne navigation computer. Both algorithms used ILS localizer and DME radio signals to compute a position difference vector to be used as an input to the navigation computer position estimate filter. The results of these tests show that the position estimate accuracy and response to artificially induced errors are improved when the position estimate is computed by an algorithm that geometrically combines DME and ILS localizer information to form a single component of error rather than by an algorithm that produces two independent components of error, one from a DME input and the other from the ILS localizer input.
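A hedged sketch of the geometric-combination idea from this abstract: treat the DME range and the localizer angular deviation as polar coordinates of a single position fix, rather than as two independent error channels. The station layout (origin at the localizer antenna, runway centerline along +x) and the function name are illustrative assumptions, not the algorithm from the report.

```python
import math

def position_from_dme_localizer(dme_range_nm, loc_dev_deg):
    """Estimate (x, y) in nautical miles relative to the antenna.

    x is the distance along the runway centerline and y the cross-track
    offset; the localizer deviation rotates the DME range vector off the
    extended centerline, giving one geometric fix (a single combined
    error component) instead of two independent ones.
    """
    angle = math.radians(loc_dev_deg)
    return (dme_range_nm * math.cos(angle), dme_range_nm * math.sin(angle))
```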

  12. Covariate-Based Assignment to Treatment Groups: Some Simulation Results.

    ERIC Educational Resources Information Center

    Jain, Ram B.; Hsu, Tse-Chi

    1980-01-01

    Six estimators of treatment effect when assignment to treatment groups is based on the covariate are compared in terms of empirical standard errors and percent relative bias. Results show that simple analysis of covariance estimator is not always appropriate. (Author/GK)

  13. Preliminary Benchmarking and MCNP Simulation Results for Homeland Security

    SciTech Connect

    Robert Hayes

    2008-03-01

    The purpose of this article is to create Monte Carlo N-Particle (MCNP) input stacks for benchmarked measurements sufficient for future perturbation studies and analysis. The approach was to utilize historical experimental measurements to recreate the empirical spectral results in MCNP, both qualitatively and quantitatively. Results demonstrate that perturbation analysis of benchmarked MCNP spectra can be used to obtain a better understanding of field measurement results which may be of national interest. If one or more spectral radiation measurements are made in the field and deemed of national interest, the potential source distribution, naturally occurring radioactive material shielding, and interstitial materials can only be estimated in many circumstances. The effects from these factors on the resultant spectral radiation measurements can be very confusing. If benchmarks exist which are sufficiently similar to the suspected configuration, these benchmarks can then be compared to the suspect measurements. Having these benchmarks with validated MCNP input stacks can substantially improve the predictive capability of experts supporting these efforts.

  14. Analysis of Numerical Simulation Results of LIPS-200 Lifetime Experiments

    NASA Astrophysics Data System (ADS)

    Chen, Juanjuan; Zhang, Tianping; Geng, Hai; Jia, Yanhui; Meng, Wei; Wu, Xianming; Sun, Anbang

    2016-06-01

    Accelerator grid structural and electron backstreaming failures are the most important factors affecting the ion thruster's lifetime. During the thruster's operation, Charge Exchange Xenon (CEX) ions are generated from collisions between plasma and neutral atoms. Those CEX ions strike the grid's barrel and wall frequently, causing the failures of the grid system. In order to validate whether the 20 cm Lanzhou Ion Propulsion System (LIPS-200) satisfies China's communication satellite platform's application requirement for North-South Station Keeping (NSSK), this study analyzed the measured depth of the pit/groove on the accelerator grid's wall and the variation of the aperture diameter, and estimated the operating lifetime of the ion thruster. Unlike previous methods, this paper first presents the experimental results after 5500 h of accumulated operation of the LIPS-200 ion thruster. Then, based on these results, theoretical analysis and numerical calculations were performed to predict the on-orbit lifetime of LIPS-200. The results obtained allow a more accurate calculation of the reliability and analysis of the failure modes of the ion thruster. They indicated that the predicted lifetime of LIPS-200 was about 13218.1 h, which satisfies the required lifetime of 11000 h.

  15. Head Kinematics Resulting from Simulated Blast Loading Scenarios

    DTIC Science & Technology

    2012-09-17

    pressure wave and the body which commonly damages air-filled organs such as the lungs, gastrointestinal tract, and ears. Secondary blast injury...subsequent impact with surrounding obstacles or the ground. Quaternary injury is the result of other factors including burns or inhalation of dust and gas... Woods, W., Feldman, S., Cummings, T., et al. (2011). Survival Risk Assessment for Primary Blast Exposures to the Head. Journal of Neurotrauma, 2328

  16. Comparing the auscultatory accuracy of health care professionals using three different brands of stethoscopes on a simulator

    PubMed Central

    Mehmood, Mansoor; Abu Grara, Hazem L; Stewart, Joshua S; Khasawneh, Faisal A

    2014-01-01

    Background It is considered standard practice to use disposable or patient-dedicated stethoscopes to prevent cross-contamination between patients in contact precautions and others in their vicinity. The literature offers very little information regarding the quality of currently used stethoscopes. This study assessed the fidelity with which acoustics were perceived by a broad range of health care professionals using three brands of stethoscopes. Methods This prospective study used a simulation center and volunteer health care professionals to test the sound quality offered by three brands of commonly used stethoscopes. Each volunteer’s proficiency in identifying five basic auscultatory sounds (wheezing, stridor, crackles, holosystolic murmur, and hyperdynamic bowel sounds) was also tested. Results A total of 84 health care professionals (ten attending physicians, 35 resident physicians, and 39 intensive care unit [ICU] nurses) participated in the study. The higher-end stethoscope was more reliable than lower-end stethoscopes in facilitating the diagnosis of the auscultatory sounds, especially stridor and crackles. Our volunteers detected all tested sounds correctly in about 69% of cases. As expected, attending physicians performed the best, followed by resident physicians and subsequently ICU nurses. Neither years of experience nor background noise seemed to affect performance. Postgraduate training continues to offer very little to improve our trainees’ auscultation skills. Conclusion The results of this study indicate that using low-end stethoscopes to care for patients in contact precautions could compromise identifying important auscultatory findings. Furthermore, there continues to be an opportunity to improve our physicians’ and ICU nurses’ auscultation skills. PMID:25152636

  17. Diffusion of emergency warning: Comparing empirical and simulation results

    SciTech Connect

    Rogers, G.O.; Sorensen, J.H.

    1988-10-01

    As officials consider emergency warning systems to alert the public to potential danger in areas surrounding hazardous facilities, the issue of warning system effectiveness is of critical importance. The purpose of this paper is to present the results of an analysis on the timing of warning system information dissemination including the alert of the public and delivery of a warning message. A general model of the diffusion of emergency warning is specified as a logistic function. Alternative warning systems are characterized in terms of the parameters of the model, which generally constrain the diffusion process to account for judged maximum penetration of each system for various locations and likelihood of being in those places by time of day. The results indicate that the combination of either telephone ring-down warning systems or tone-alert radio systems combined with sirens provide the most effective warning system under conditions of either very rapid onset, or close proximity or both. These results indicate that single technology systems provide adequate warning effectiveness when available warning time (to the public after detection and the decision to warn) extends to as much as an hour. Moreover, telephone ring-down systems provide similar coverage at approximately 30 minutes of available public warning time. 36 refs., 5 figs., 3 tabs.

  18. Aeolian Simulations: A Comparison of Numerical and Experimental Results

    NASA Astrophysics Data System (ADS)

    Mathews, O.; Burr, D. M.; Bridges, N. T.; Lyne, J. E.; Marshall, J. R.; Greeley, R.; White, B. R.; Hills, J.; Smith, K.; Prissel, T. C.; Aliaga-Caro, J. F.

    2010-12-01

    Aeolian processes are a major geomorphic agent on solid planetary bodies with atmospheres (Earth, Mars, Venus, and Titan). This paper describes preliminary efforts to model aeolian saltation using computational fluid dynamics (CFD) and to compare the results with those obtained in wind tunnel testing conducted in the Planetary Aeolian Laboratory at NASA Ames Research Center at ambient pressure. The end goal of the project is to develop an experimentally validated CFD approach for modeling aeolian sediment transport on Titan and other planetary bodies. The MARSWIT open-circuit tunnel in this work was specifically designed for atmospheric boundary layer studies. It is a variable-speed, continuous flow tunnel with a test section 1.0 m by 1.2 m in size; the tunnel is able to operate at pressures from 10 millibar to one atmosphere. Flow trips near the tunnel inlet ensure a fully developed, turbulent boundary layer in the test section. Wind speed and axial velocity profiles can be measured with a traversing pitot tube. In this study, sieved walnut shell particles (Greeley et al. 1976) with a density of ~1.1 g/cm3 were used to correlate the low gravity conditions and low sediment density on a body of interest to that of Earth. This sediment was placed in the tunnel, and the freestream airspeed raised to 5.4 m/s. A Phantom v12 camera imaged the resulting particle motion at 1000 frames per second, which was analyzed with ImageJ open-source software (Fig. 1). Airflow in the tunnel was modeled with FLUENT, a commercial CFD program. The turbulent scheme used in FLUENT to obtain closed-form solutions to the Navier-Stokes equations was a 1st Order, k-epsilon model. These methods produced computational velocity profiles that agree with experimental data to within 5-10%. Once modeling of the flow field had been achieved, a Euler-Lagrangian scheme was employed, treating the particles as spheres and tracking each particle at its center. The particles are assumed to interact with

  19. Relative efficiency and accuracy of two Navier-Stokes codes for simulating attached transonic flow over wings

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Wornom, Stephen F.

    1991-01-01

    Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.

  20. Preliminary Benchmarking Efforts and MCNP Simulation Results for Homeland Security

    SciTech Connect

    Robert Hayes

    2008-04-18

    It is shown in this work that basic measurements made from well-defined source-detector configurations can be readily converted into benchmark-quality results by which Monte Carlo N-Particle (MCNP) input stacks can be validated. Specifically, a recent measurement made in support of national security at the Nevada Test Site (NTS) is described with sufficient detail to be submitted to the American Nuclear Society’s (ANS) Joint Benchmark Committee (JBC) for consideration as a radiation measurement benchmark. From this very basic measurement, MCNP input stacks are generated and validated both in predicted signal amplitude and spectral shape. Not modeled at this time are those perturbations from the more recent pulse height light (PHL) tally feature, although what spectral deviations are seen can be largely attributed to not including this small correction. The value of this work is as a proof-of-concept demonstration that well-documented historical testing can be converted into formal radiation measurement benchmarks. This effort would support virtual testing of algorithms and new detector configurations.

  1. Impact of Assimilation on Heavy Rainfall Simulations Using WRF Model: Sensitivity of Assimilation Results to Background Error Statistics

    NASA Astrophysics Data System (ADS)

    Rakesh, V.; Kantharao, B.

    2017-03-01

    Data assimilation is considered one of the most effective tools for improving the forecast skill of mesoscale models. However, for optimum utilization and effective assimilation of observations, many factors need to be taken into account while designing a data assimilation methodology. One of the critical components that determines the amount and propagation of observation information into the analysis is the model background error statistics (BES). The objective of this study is to quantify how the BES used in data assimilation affect the simulation of heavy rainfall events over Karnataka, a southern state in India. Simulations of 40 heavy rainfall events were carried out using the Weather Research and Forecasting Model with and without data assimilation. The assimilation experiments were conducted using global and regional BES, while the experiment with no assimilation was used as the baseline for assessing the impact of data assimilation. The simulated rainfall was verified against high-resolution rain-gage observations over Karnataka. Statistical evaluation using several accuracy and skill measures shows that data assimilation improved the heavy rainfall simulation. Our results showed that the experiment using regional BES outperformed the one that used global BES. Critical thermodynamic variables conducive to heavy rainfall, such as convective available potential energy, were simulated more realistically with regional BES than with global BES. These results have important practical implications for the design of forecast platforms and for decision-making during extreme weather events.

  2. Conditions Affecting the Accuracy of Classical Equating Methods for Small Samples under the NEAT Design: A Simulation Study

    ERIC Educational Resources Information Center

    Sunnassee, Devdass

    2011-01-01

    Small sample equating remains a largely unexplored area of research. This study attempts to fill in some of the research gaps via a large-scale, IRT-based simulation study that evaluates the performance of seven small-sample equating methods under various test characteristic and sampling conditions. The equating methods considered are typically…

  3. Accuracy in contouring of small and low contrast lesions: Comparison between diagnostic quality computed tomography scanner and computed tomography simulation scanner-A phantom study

    SciTech Connect

    Ho, Yick Wing; Wong, Wing Kei Rebecca; Yu, Siu Ki; Lam, Wai Wang; Geng Hui

    2012-01-01

    To evaluate the accuracy in detection of small and low-contrast regions using a high-definition diagnostic computed tomography (CT) scanner compared with a radiotherapy CT simulation scanner, a custom-made phantom with cylindrical holes of diameters ranging from 2-9 mm was filled with 9 different concentrations of contrast solution. The phantom was scanned using a 16-slice multidetector CT simulation scanner (LightSpeed RT16, General Electric Healthcare, Milwaukee, WI) and a 64-slice high-definition diagnostic CT scanner (Discovery CT750 HD, General Electric Healthcare). The low-contrast regions of interest (ROIs) were delineated automatically at the full width at half maximum of the CT number profile in Hounsfield units on a treatment planning workstation. Two conformal indexes, CI_in and CI_out, were calculated to represent the percentage errors of underestimation and overestimation in the automated contours compared with the actual sizes. Summarizing the conformal indexes over the different sizes and contrast concentrations, the means of CI_in and CI_out were 33.7% and 60.9%, respectively, for the CT simulation scanner, and 10.5% and 41.5% for the diagnostic CT scanner. The differences between the 2 scanners' CI_in and CI_out were significant, with p < 0.001. A descending trend of the index values was observed as the ROI size increased for both scanners, indicating improved accuracy for larger ROIs, whereas no observable trend was found in contouring accuracy with respect to the contrast levels in this study. Images acquired by the diagnostic CT scanner thus allow higher accuracy in size estimation than those from the CT simulation scanner. We recommend using a diagnostic CT scanner to scan patients with small lesions (<1 cm in diameter) for radiotherapy treatment planning, especially for those pending stereotactic radiosurgery in which accurate delineation of small
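The two conformal indexes can be sketched as mask-based percentage errors. This is a minimal illustration, assuming CI_in measures the underestimated (missed) fraction of the true region and CI_out the overestimated (spurious) contoured fraction, both relative to the true area; the disc geometry and sizes below are invented for the example, not taken from the study.

```python
import numpy as np

def conformity_indexes(true_mask, contour_mask):
    """Percentage under- and overestimation of a contour vs. the true region."""
    true_area = true_mask.sum()
    ci_in = np.logical_and(true_mask, ~contour_mask).sum() / true_area   # missed
    ci_out = np.logical_and(~true_mask, contour_mask).sum() / true_area  # spurious
    return 100.0 * ci_in, 100.0 * ci_out

# Toy example: a 5-pixel-radius "lesion" contoured one pixel too small.
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
truth = r <= 5
contour = r <= 4
ci_in, ci_out = conformity_indexes(truth, contour)
```

An undersized contour gives a positive CI_in and zero CI_out; an oversized one does the reverse.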

  4. Accuracy for detection of simulated lesions: comparison of fluid-attenuated inversion-recovery, proton density-weighted, and T2-weighted synthetic brain MR imaging

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.; Itoh, R.; Melhem, E. R.

    2001-01-01

    OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density-weighted, and T2-weighted) and of lesion location on sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1-6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher order interactions were added. CONCLUSION: We developed software with which we can generate brain images for a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.

  5. Comparison of the effect of web-based, simulation-based, and conventional training on the accuracy of visual estimation of postpartum hemorrhage volume on midwifery students: A randomized clinical trial

    PubMed Central

    Kordi, Masoumeh; Fakari, Farzaneh Rashidi; Mazloum, Seyed Reza; Khadivzadeh, Talaat; Akhlaghi, Farideh; Tara, Mahmoud

    2016-01-01

    Introduction: Delay in the diagnosis of bleeding can be due to underestimation of the actual amount of blood loss during delivery. Therefore, this research aimed to compare the efficacy of web-based, simulation-based, and conventional training on the accuracy of visual estimation of postpartum hemorrhage volume. Materials and Methods: This three-group randomized clinical trial was performed on 105 midwifery students at the Mashhad School of Nursing and Midwifery in 2013. The samples were selected by the convenience method and were randomly divided into three groups of web-based, simulation-based, and conventional training. All three groups completed an eight-station practical test before and 1 week after the training course; the students of the web-based group were trained online for 1 week, the students of the simulation-based group were trained in the Clinical Skills Centre for 4 h, and the students of the conventional group received a 4-h presentation by the researchers. The data-gathering tools were a demographic questionnaire designed by the researchers and an objective structured clinical examination. Data were analyzed by software version 11.5. Results: The accuracy of visual estimation of postpartum hemorrhage volume after training increased significantly in the three groups at all stations (stations 1, 2, 4, 5, 6, and 7: P = 0.001; station 8: P = 0.027) except station 3 (blood loss of 20 cc, P = 0.095), but the mean score of blood loss estimation after training did not differ significantly between the three groups (P = 0.95). Conclusion: Training increased the accuracy of estimation of postpartum hemorrhage, but no significant difference was found among the three training groups. Web-based training can therefore be used as a substitute for, or supplement to, the two more common simulation-based and conventional methods. PMID:27500175

  6. Micro-scale finite element modeling of ultrasound propagation in aluminum trabecular bone-mimicking phantoms: A comparison between numerical simulation and experimental results.

    PubMed

    Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S

    2016-05-01

    The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples were utilized as trabecular bone-mimicking phantoms in ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated for use in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include the energy-dissipative mechanisms of ultrasonic attenuation; however, they simulated reflection, refraction, scattering, and wave mode conversion, as expected. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6%, respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s, respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures.
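The error metrics in the comparison above reduce to simple array operations over the frequency band. The sketch below uses made-up attenuation values (placeholders, not the paper's data) to show how the maximum and average relative error would be computed.

```python
import numpy as np

# Hypothetical experimental attenuation over 0.6-1.4 MHz, and a simulated
# curve that deviates by an oscillating few percent (both invented here).
freq = np.linspace(0.6, 1.4, 9)                        # MHz
atten_exp = 10.0 + 8.0 * (freq - 0.6)                  # dB/cm, placeholder
atten_sim = atten_exp * (1.0 + 0.05 * np.sin(5 * freq))

# Relative error at each frequency, then the two summary metrics.
rel_err = np.abs(atten_sim - atten_exp) / np.abs(atten_exp)
max_err, avg_err = rel_err.max(), rel_err.mean()
```

With the 5% perturbation assumed above, the maximum relative error is bounded by 0.05 and the average is necessarily smaller.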

  7. Accuracy of diagnostic heat and moisture budgets using SESAME-79 field data as revealed by observing system simulation experiments. [Severe Environmental Storm and Mesoscale Experiment

    NASA Technical Reports Server (NTRS)

    Kuo, Y.-H.; Anthes, R. A.

    1984-01-01

    Observing system simulation experiments are used to investigate the accuracy of diagnostic heat and moisture budgets which employ the AVE-SESAME 1979 data. The time-dependent, four-dimensional data set of a mesoscale model is used to simulate rawinsonde observations from AVE-SESAME 1979. The error magnitudes obtained, 5 C/day for the heat budget and 2 g/kg per day for the moisture budget, indicate difficulties in diagnosing the heating rate in weak convective systems. The influences exerted by observational frequency, objective analysis, observational density, vertical interpolation, and observational errors on the budgets are also studied, and it is found that the temporal and spatial resolution of the SESAME regional network is marginal for diagnosing convective effects on a horizontal scale of 550 x 550 km.

  8. Simulated changes in ground-water levels resulting from proposed phosphate mining, west-central Florida; preliminary results

    USGS Publications Warehouse

    Wilson, William Edward

    1977-01-01

    A digital model of two-dimensional ground-water flow was used to simulate projected changes in the Floridan aquifer potentiometric surface in 1985 and 2000 resulting from proposed ground-water developments by the phosphate mining industry in west-central Florida. The model was calibrated under steady-state conditions to simulate the September 1975 potentiometric surface. Under one development plan, existing phosphate mines in Polk County would continue to withdraw ground water at 1975 rates until phased out as the ore is depleted; no new mines would be introduced. Preliminary results indicate that under this plan, maximum simulated recovery of the potentiometric surface is 11.9 feet by 1985 and 36.5 feet by 2000. Under an alternative plan, all proposed mines in Polk, Hardee, DeSoto, Hillsborough, and Manatee Counties would begin operations, in addition to the continuation and phasing out of existing mines. Preliminary results indicate that the potentiometric surface would generally recover in Polk County and decline elsewhere in the modeled area. Maximum simulated recovery is 4.5 feet by 1985 and 29.6 feet by 2000; maximum simulated drawdown is 15.1 feet by 1985 and feet by 2000. All results are preliminary and subject to revision as the investigation continues.

  9. Evaluation of the accuracy of an offline seasonally-varying matrix transport model for simulating ideal age

    NASA Astrophysics Data System (ADS)

    Bardin, Ann; Primeau, François; Lindsay, Keith; Bradley, Andrew

    2016-09-01

    Newton-Krylov solvers for ocean tracers have the potential to greatly decrease the computational cost of spinning up deep-ocean tracers, which can take several thousand model years to reach equilibrium with surface processes. One version of the algorithm uses offline tracer transport matrices to simulate an annual cycle of tracer concentrations and applies Newton's method to find concentrations that are periodic in time. Here we present the impact of time-averaging the transport matrices on the equilibrium values of an ideal-age tracer. We compared annually-averaged, monthly-averaged, and 5-day-averaged transport matrices to an online simulation using the ocean component of the Community Earth System Model (CESM) with a nominal horizontal resolution of 1° × 1° and 60 vertical levels. We found that increasing the time resolution of the offline transport model reduced a low age bias from 12% for the annually-averaged transport matrices to 4% for the monthly-averaged transport matrices, and to less than 2% for the transport matrices constructed from 5-day averages. The largest differences were in areas with strong seasonal changes in the circulation, such as the northern Indian Ocean. For many applications the relatively small bias of the offline model makes the offline approach attractive, because it uses significantly fewer computing resources and is simpler to set up and run.
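The periodic-state idea above can be sketched with a toy offline model: a 1-D column whose monthly "transport" is a seasonally varying mixing operator (an invented stand-in for the CESM-derived matrices), carrying an ideal-age tracer that ages everywhere and is reset to zero at the surface. Newton's method (here SciPy's `newton_krylov`) then finds the annually periodic state directly instead of time-stepping for thousands of model years.

```python
import numpy as np
from scipy.optimize import newton_krylov

n, dt = 10, 1.0 / 12.0            # 10-box column, monthly step (in years)

# Discrete vertical mixing operator (a Laplacian with no-flux ends); its
# strength varies by month to mimic a seasonal cycle. This is an invented
# stand-in for the offline transport matrices described in the abstract.
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = -1.0

def step(age, month):
    k = 0.3 + 0.2 * np.sin(2.0 * np.pi * month / 12.0)  # seasonal mixing rate
    age = age + dt                    # ideal age grows everywhere by one month
    age = age + dt * k * (L @ age)    # one offline transport step
    age[0] = 0.0                      # surface boundary condition: age = 0
    return age

def one_year(age):
    for m in range(12):
        age = step(age, m)
    return age

# Solve one_year(x) = x for the annually periodic ideal-age field.
sol = newton_krylov(lambda x: one_year(x) - x, np.zeros(n))
```

The solver returns a state that maps onto itself after one simulated year, with older "water" at depth than near the surface sink.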

  10. Evaluation of the efficiency and accuracy of new methods for atmospheric opacity and radiative transfer calculations in planetary general circulation model simulations

    NASA Astrophysics Data System (ADS)

    Zube, Nicholas Gerard; Zhang, Xi; Natraj, Vijay

    2016-10-01

    General circulation models often incorporate simple approximations of heating between vertically inhomogeneous layers rather than more accurate but computationally expensive radiative transfer (RT) methods. With the goal of developing a GCM package that can model both solar system bodies and exoplanets, it is vital to examine up-to-date RT models to optimize speed and accuracy for heat transfer calculations. Here, we examine a variety of interchangeable radiative transfer models in conjunction with MITGCM (Hill and Marshall, 1995). First, for atmospheric opacity calculations, we test gray approximation, line-by-line, and correlated-k methods. In combination with these, we also test RT routines using 2-stream DISORT (discrete ordinates RT), N-stream DISORT (Stamnes et al., 1988), and optimized 2-stream (Spurr and Natraj, 2011). Initial tests are run using Jupiter as an example case. The results can be compared in nine possible configurations for running a complete RT routine within a GCM. Each individual combination of opacity and RT methods is contrasted with the "ground truth" calculation provided by the line-by-line opacity and N-stream DISORT, in terms of computation speed and accuracy of the approximation methods. We also examine the effects on accuracy when performing these calculations at different time step frequencies within MITGCM. Ultimately, we will catalog and present the ideal RT routines that can replace commonly used approximations within a GCM for a significant increase in calculation accuracy, and speed comparable to the dynamical time steps of MITGCM. Future work will involve examining whether calculations in the spatial domain can also be reduced by smearing grid points into larger areas, and what effects this will have on overall accuracy.

  11. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.

  12. A simulation study of the flight dynamics of elastic aircraft. Volume 1: Experiment, results and analysis

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Davidson, John B.; Schmidt, David K.

    1987-01-01

    The simulation experiment described addresses the effects of structural flexibility on the dynamic characteristics of a generic family of aircraft. The simulation was performed using the NASA Langley VMS simulation facility. The vehicle models were obtained as part of this research. The simulation results include complete response data and subjective pilot ratings and comments and so allow a variety of analyses. The subjective ratings and analysis of the time histories indicate that increased flexibility can lead to increased tracking errors, degraded handling qualities, and changes in the frequency content of the pilot inputs. These results, furthermore, are significantly affected by the visual cues available to the pilot.

  13. Impact of Calibrated Land Surface Model Parameters on the Accuracy and Uncertainty of Land-Atmosphere Coupling in WRF Simulations

    NASA Technical Reports Server (NTRS)

    Santanello, Joseph A., Jr.; Kumar, Sujay V.; Peters-Lidard, Christa D.; Harrison, Ken; Zhou, Shujia

    2012-01-01

    Land-atmosphere (L-A) interactions play a critical role in determining the diurnal evolution of both planetary boundary layer (PBL) and land surface temperature and moisture budgets, as well as controlling feedbacks with clouds and precipitation that lead to the persistence of dry and wet regimes. Recent efforts to quantify the strength of L-A coupling in prediction models have produced diagnostics that integrate across both the land and PBL components of the system. In this study, we examine the impact of improved specification of land surface states, anomalies, and fluxes on coupled WRF forecasts during the summers of extreme dry (2006) and wet (2007) land surface conditions in the U.S. Southern Great Plains. The improved land initialization and surface flux parameterizations are obtained through the use of a new optimization and uncertainty estimation module in NASA's Land Information System (LIS-OPT/UE), whereby parameter sets are calibrated in the Noah land surface model and classified according to a land cover and soil type mapping of the observation sites to the full model domain. The impact of calibrated parameters on the a) spinup of the land surface used as initial conditions, and b) heat and moisture states and fluxes of the coupled WRF simulations are then assessed in terms of ambient weather and land-atmosphere coupling along with measures of uncertainty propagation into the forecasts. In addition, the sensitivity of this approach to the period of calibration (dry, wet, average) is investigated. Finally, tradeoffs of computational tractability and scientific validity, and the potential for combining this approach with satellite remote sensing data are also discussed.

  14. Improving the accuracy of hohlraum simulations by calibrating the `SNB' multigroup diffusion model for nonlocal heat transport against a VFP code

    NASA Astrophysics Data System (ADS)

    Brodrick, Jonathan; Ridgers, Christopher; Dudson, Ben; Kingham, Robert; Marinak, Marty; Patel, Mehul; Umansky, Maxim; Chankin, Alex; Omotani, John

    2016-10-01

    Nonlocal heat transport, occurring when temperature gradients become steep on the scale of the electron mean free path (mfp), has proven critical in accurately predicting ignition-scale hohlraum energetics. A popular approach, and a modern alternative to flux limiters, is the `SNB' model. This is implemented in both the HYDRA code used for simulating National Ignition Facility experiments and the CHIC code developed at the CELIA laboratory. We have performed extensive comparisons of the SNB heat flow predictions with two VFP codes, IMPACT and KIPP, and found that calibrating the mfp to achieve agreement for a linear problem also improves nonlinear accuracy. Furthermore, we identify that using distinct electron-ion and electron-electron mfps instead of a geometrically averaged one improves predictive capability when there are strong ionisation (Z) gradients. This work is funded by EPSRC Grant EP/K504178/1.

  15. Assessment of the accuracy of snow surface direct beam spectral albedo under a variety of overcast skies derived by a reciprocal approach through radiative transfer simulation.

    PubMed

    Li, Shusun; Zhou, Xiaobing

    2003-09-20

    With radiative transfer simulations it is suggested that stable estimates of the highly anisotropic direct beam spectral albedo of snow surface can be derived reciprocally under a variety of overcast skies. An accuracy of ±0.008 is achieved over a solar zenith angle range of θ0 ≤ 74° for visible wavelengths and up to θ0 ≤ 63° at the near-infrared wavelength λ = 862 nm. This new method helps expand the database of snow surface albedo for the polar regions, where direct measurement of clear-sky surface albedo is limited to large θ0's only. The enhancement will assist in the validation of snow surface albedo models and improve the representation of polar surface albedo in global circulation models.

  16. Linear and Logarithmic Speed-Accuracy Trade-Offs in Reciprocal Aiming Result from Task-Specific Parameterization of an Invariant Underlying Dynamics

    ERIC Educational Resources Information Center

    Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.

    2009-01-01

    The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…

  17. A fast RCS accuracy assessment method for passive radar calibrators

    NASA Astrophysics Data System (ADS)

    Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, QI

    2016-10-01

    In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is often deformed during transportation and installation, or by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation was proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm was selected and its simulation accuracy was assessed. In the second step, the 3-D measuring instrument was selected and its measuring accuracy was evaluated. Once the accuracy of the selected RCS simulation algorithm and 3-D measuring instrument was sufficient for the RCS accuracy assessment, the 3-D structure of the corner reflector was obtained by the 3-D measuring instrument, and the RCSs of the obtained 3-D structure and the corresponding ideal structure were calculated with the selected RCS simulation algorithm. The final RCS accuracy was the absolute difference of the two RCS calculation results. The advantage of the proposed method is that it can be applied easily outdoors, avoiding the correlation among the plate edge length, plate orthogonality, and plate curvature errors. Its accuracy is higher than that of the method using the distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.

  18. Evaluating the Effects of Ankle-Foot Orthosis Mechanical Property Assumptions on Gait Simulation Muscle Force Results.

    PubMed

    Hegarty, Amy K; Petrella, Anthony J; Kurz, Max J; Silverman, Anne K

    2017-03-01

    Musculoskeletal modeling and simulation techniques have been used to gain insights into movement disabilities for many populations, such as ambulatory children with cerebral palsy (CP). The individuals who can benefit from these techniques are often limited to those who can walk without assistive devices, due to challenges in accurately modeling these devices. Specifically, many children with CP require the use of ankle-foot orthoses (AFOs) to improve their walking ability, and modeling these devices is important to understand their role in walking mechanics. The purpose of this study was to quantify the effects of AFO mechanical property assumptions, including rotational stiffness, damping, and equilibrium angle of the ankle and subtalar joints, on the estimation of lower-limb muscle forces during stance for children with CP. We analyzed two walking gait cycles for two children with CP while they were wearing their own prescribed AFOs. We generated 1000-trial Monte Carlo simulations for each of the walking gait cycles, resulting in a total of 4000 walking simulations. We found that AFO mechanical property assumptions influenced the force estimates for all the muscles in the model, with the ankle muscles having the largest resulting variability. Muscle forces were most sensitive to assumptions of AFO ankle and subtalar stiffness, which should therefore be measured when possible. Muscle force estimates were less sensitive to estimates of damping and equilibrium angle. When stiffness measurements are not available, limitations on the accuracy of muscle force estimates for all the muscles in the model, especially the ankle muscles, should be acknowledged.
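The Monte Carlo sensitivity idea above can be illustrated with a toy version of the device model: an AFO treated as a rotational spring-damper about the ankle, with stiffness, damping, and equilibrium angle sampled over ranges that are pure assumptions for this sketch (the study's actual parameter ranges and musculoskeletal simulations are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

def afo_torque(theta, theta_dot, k, b, theta_eq):
    """Passive AFO torque: stiffness about an equilibrium angle plus damping."""
    return -k * (theta - theta_eq) - b * theta_dot

# Sample 1000 parameter sets over assumed (hypothetical) ranges, mirroring
# the 1000-trial Monte Carlo simulations described in the abstract.
n = 1000
k = rng.uniform(0.5, 3.0, n)           # rotational stiffness, N*m/deg (assumed)
b = rng.uniform(0.0, 0.05, n)          # damping, N*m*s/deg (assumed)
theta_eq = rng.uniform(-5.0, 5.0, n)   # equilibrium angle, deg (assumed)

# Evaluate the device torque at one hypothetical mid-stance ankle state;
# its spread indicates how parameter uncertainty propagates downstream.
torques = afo_torque(theta=10.0, theta_dot=20.0, k=k, b=b, theta_eq=theta_eq)
spread = torques.std()
```

In the study this spread would feed into the muscle-force estimation rather than stop at the joint torque, but the sampling pattern is the same.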

  19. The accuracy of linear measurements of maxillary and mandibular edentulous sites in cone-beam computed tomography images with different fields of view and voxel sizes under simulated clinical conditions

    PubMed Central

    Ramesh, Aruna; Pagni, Sarah

    2016-01-01

    Purpose The objective of this study was to investigate the effect of varying resolutions of cone-beam computed tomography images on the accuracy of linear measurements of edentulous areas in human cadaver heads. Intact cadaver heads were used to simulate a clinical situation. Materials and Methods Fiduciary markers were placed in the edentulous areas of 4 intact embalmed cadaver heads. The heads were scanned with two different CBCT units using a large field of view (13 cm×16 cm) and small field of view (5 cm×8 cm) at varying voxel sizes (0.3 mm, 0.2 mm, and 0.16 mm). The ground truth was established with digital caliper measurements. The imaging measurements were then compared with caliper measurements to determine accuracy. Results The Wilcoxon signed rank test revealed no statistically significant difference between the medians of the physical measurements obtained with calipers and the medians of the CBCT measurements. A comparison of accuracy among the different imaging protocols revealed no significant differences as determined by the Friedman test. The intraclass correlation coefficient was 0.961, indicating excellent reproducibility. Inter-observer variability was determined graphically with a Bland-Altman plot and by calculating the intraclass correlation coefficient. The Bland-Altman plot indicated very good reproducibility for smaller measurements but larger discrepancies with larger measurements. Conclusion The CBCT-based linear measurements in the edentulous sites using different voxel sizes and FOVs are accurate compared with the direct caliper measurements of these sites. Higher resolution CBCT images with smaller voxel size did not result in greater accuracy of the linear measurements. PMID:27358816

  20. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL

  1. ON THE MINIMAL ACCURACY REQUIRED FOR SIMULATING SELF-GRAVITATING SYSTEMS BY MEANS OF DIRECT N-BODY METHODS

    SciTech Connect

    Portegies Zwart, Simon; Boekholt, Tjarda

    2014-04-10

    The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (by round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
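
The paper's acceptance criterion (trust a run only while the relative energy error stays well below 1/10) can be sketched with a minimal leapfrog three-body integrator. The equal-mass figure-eight initial conditions below are a standard illustrative choice, not the resonant interactions studied in the paper.

```python
import numpy as np

G = 1.0  # gravitational constant in code units

def accelerations(pos, mass):
    """Pairwise Newtonian accelerations (no softening)."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * mass[j] * d / np.linalg.norm(d) ** 3
    return acc

def energy(pos, vel, mass):
    """Total energy = kinetic + pairwise potential."""
    kin = 0.5 * np.sum(mass * np.sum(vel ** 2, axis=1))
    pot = 0.0
    for i in range(len(mass)):
        for j in range(i + 1, len(mass)):
            pot -= G * mass[i] * mass[j] / np.linalg.norm(pos[j] - pos[i])
    return kin + pot

# Illustrative equal-mass setup: the classic figure-eight choreography.
mass = np.ones(3)
pos = np.array([[ 0.9700436, -0.24308753, 0.0],
                [-0.9700436,  0.24308753, 0.0],
                [ 0.0,         0.0,        0.0]])
vel = np.array([[ 0.466203685,  0.43236573, 0.0],
                [ 0.466203685,  0.43236573, 0.0],
                [-0.93240737,  -0.86473146, 0.0]])

e0, dt = energy(pos, vel, mass), 1e-3
for _ in range(5000):  # leapfrog (kick-drift-kick): symplectic, 2nd order
    vel += 0.5 * dt * accelerations(pos, mass)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos, mass)

rel_err = abs((energy(pos, vel, mass) - e0) / e0)
# By the paper's criterion, the run is usable while rel_err stays well below 0.1.
print(rel_err < 0.1)
```

Monitoring `rel_err` as the run proceeds is exactly the kind of cheap diagnostic the authors' criterion relies on: it does not prove the individual trajectory is right, only that it may be trusted statistically as a member of an ensemble.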

  2. Results of GEANT simulations and comparison with first experiments at DANCE.

    SciTech Connect

    Reifarth, R.; Bredeweg, T. A.; Browne, J. C.; Esch, E. I.; Haight, R. C.; O'Donnell, J. M.; Kronenberg, A.; Rundberg, R. S.; Ullmann, J. L.; Vieira, D. J.; Wilhelmy, J. B.; Wouters, J. M.

    2003-07-29

    This report describes intensive Monte Carlo simulations carried out for comparison with the results of the first run cycle with DANCE (Detector for Advanced Neutron Capture Experiments). The experimental results were obtained during the 2002/2003 commissioning phase with only part of the array. Based on the results of these simulations, the most important items to improve before the next experiments are addressed.

  3. DoSSiER: Database of scientific simulation and experimental results

    SciTech Connect

    Wenzel, Hans; Yarba, Julia; Genser, Krzystof; Elvira, Daniel; Pokorski, Witold; Carminati, Federico; Konstantinov, Dmitri; Ribon, Alberto; Folger, Gunter; Dotti, Andrea

    2016-08-01

    The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in json or xml exchange formats. In this paper, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.

  4. A method for data handling numerical results in parallel OpenFOAM simulations

    SciTech Connect

    Anton, Alin; Muntean, Sebastian

    2015-12-31

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large scale simulation results than the regular algorithms.

  5. Examining the accuracy of astrophysical disk simulations with a generalized hydrodynamical test problem [The role of pressure and viscosity in SPH simulations of astrophysical disks]

    SciTech Connect

    Raskin, Cody; Owen, J. Michael

    2016-10-24

    Here, we discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.

  6. Examining the accuracy of astrophysical disk simulations with a generalized hydrodynamical test problem [The role of pressure and viscosity in SPH simulations of astrophysical disks]

    DOE PAGES

    Raskin, Cody; Owen, J. Michael

    2016-10-24

    Here, we discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.

  7. Results from a round-robin study assessing the precision and accuracy of LA-ICPMS U/Pb geochronology of zircon

    NASA Astrophysics Data System (ADS)

    Hanchar, J. M.

    2009-12-01

    A round-robin study was undertaken to assess the current state of precision and accuracy that can be achieved in LA-ICPMS U/Pb geochronology of zircon. The initial plan was to select abundant, well-characterized zircon samples to distribute to participants in the study. Three suitable samples were found, evaluated, and dated using ID-TIMS. Twenty-five laboratories in North America and Europe were asked to participate in the study. Eighteen laboratories agreed to participate, of which seventeen submitted final results. It was decided at the outset of the project that the identities of the participating researchers and laboratories not be revealed until the manuscript stemming from the project was completed. Participants were sent either fragments of zircon crystal or whole zircon crystals, selected randomly after being thoroughly mixed. Participants were asked to conform to specific requirements. These included providing all analytical conditions and equipment used, submitting all data acquired, and submitting their preferred data and preferred ages for the three samples. The participating researchers used a wide range of analytical methods (e.g., instrumentation, data reduction, error propagation) for the LA-ICPMS U/Pb geochronology. These combined factors made direct comparison of the submitted results difficult. Most of the LA-ICPMS results submitted were within 2% r.s.d. of the ID-TIMS values for the three samples in the study. However, the error bars for the majority of the LA-ICPMS results for the three samples did not overlap with the ID-TIMS results. These results suggest a general underestimation of the errors calculated for the LA-ICPMS U/Pb zircon analyses.

  8. Simulation loop between cad systems, GEANT-4 and GeoModel: Implementation and results

    NASA Astrophysics Data System (ADS)

    Sharmazanashvili, A.; Tsutskiridze, Niko

    2016-09-01

    Comparative analysis of the simulated and as-built geometry descriptions of a detector is an important field of study for data-vs-Monte-Carlo discrepancies. Shape consistency and level of detail are less critical, whereas the adequacy of the volumes and weights of detector components is essential for tracking. There are two main causes of faults in the geometry descriptions used in simulation: (1) differences between the simulated and as-built geometry descriptions; (2) internal inaccuracies in geometry transformations added by the simulation software infrastructure itself. The Georgian engineering team developed a hub based on the CATIA platform, with several tools enabling the different descriptions used by simulation packages to be read into CATIA: XML->CATIA; VP1->CATIA; Geo-Model->CATIA; Geant4->CATIA. As a result it becomes possible to compare the different descriptions with each other using the full power of CATIA and to investigate both classes of geometry-description faults. The paper presents results of case studies of the ATLAS Coils and End-Cap toroid structures.

  9. Comparison between simulations and lab results on the ASSIST test-bench

    NASA Astrophysics Data System (ADS)

    Le Louarn, Miska; Madec, Pierre-Yves; Kolb, Johann; Paufique, Jerome; Oberti, Sylvain; La Penna, Paolo; Arsenault, Robin

    2016-07-01

    We present the latest comparison results between laboratory tests carried out on the ASSIST test bench and Octopus end-to-end simulations. We simulated, as closely to the lab conditions as possible, the different AOF modes (maintenance and commissioning mode (SCAO), GRAAL (GLAO in the near IR), Galacsi wide-field mode (GLAO in the visible), and Galacsi narrow-field mode (LTAO in the visible)). We then compared the simulation results to the ones obtained on the lab bench. Several aspects were investigated, such as the number of corrected modes, turbulence wind speeds, and LGS photon flux. The agreement between simulations and lab is remarkably good for all investigated parameters, giving great confidence in both the simulation tool and the performance of the AO system in the lab.

  10. Should we adjust for a confounder if empirical and theoretical criteria yield contradictory results? A simulation study

    PubMed Central

    Lee, Paul H.

    2014-01-01

    Confounders can be identified by one of two main strategies: empirical or theoretical. Although confounder identification strategies that combine empirical and theoretical strategies have been proposed, the need for adjustment remains unclear if the empirical and theoretical criteria yield contradictory results due to random error. We simulated several scenarios to mimic either the presence or the absence of a confounding effect and tested the accuracy of the exposure-outcome association estimates with and without adjustment. Various criteria (a significance criterion, and the change-in-estimate (CIE) criterion with a 10% cutoff and with a simulated cutoff) were imposed, and a range of sample sizes was trialed. In the presence of a true confounding effect, unbiased estimates were obtained only by using the CIE criterion with a simulated cutoff. In the absence of a confounding effect, all criteria performed well regardless of adjustment. When the confounding factor was affected by both exposure and outcome, all criteria yielded accurate estimates without adjustment, but the adjusted estimates were biased. To conclude, theoretical confounders should be adjusted for regardless of the empirical evidence found. The adjustment for factors that do not have a confounding effect has minimal effect on the estimates. Potential confounders affected by both exposure and outcome should not be adjusted for. PMID:25124526
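
The change-in-estimate (CIE) logic discussed above can be sketched in a few lines: simulate a confounded exposure-outcome pair, fit the model with and without the covariate, and compare the shift in the exposure coefficient against a cutoff. The data-generating coefficients and the 10% cutoff below are illustrative assumptions, not the paper's simulated scenarios.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Simulated confounding: z causes both the exposure x and the outcome y.
z = rng.standard_normal(n)
x = 0.8 * z + rng.standard_normal(n)            # exposure affected by z
y = 1.0 * x + 0.6 * z + rng.standard_normal(n)  # true exposure effect = 1.0

def ols_slope(y, X):
    """Coefficient of the first column of X in an OLS fit (intercept added)."""
    X = np.column_stack([X, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0]

crude = ols_slope(y, x)                            # unadjusted estimate (biased)
adjusted = ols_slope(y, np.column_stack([x, z]))   # adjusted estimate

# Change-in-estimate criterion with the conventional 10% cutoff:
cie = abs(crude - adjusted) / abs(adjusted)
print(crude, adjusted, cie > 0.10)
```

Here the covariate shifts the exposure coefficient by well over 10%, so the CIE criterion flags z for adjustment, and the adjusted estimate recovers the true effect.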

  11. A Novel Simulation Technician Laboratory Design: Results of a Survey-Based Study

    PubMed Central

    Hughes, Patrick G; Friedl, Ed; Ortiz Figueroa, Fabiana; Cepeda Brito, Jose R; Frey, Jennifer; Birmingham, Lauren E; Atkinson, Steven Scott

    2016-01-01

    Objective: The purpose of this study was to elicit feedback from simulation technicians prior to developing the first simulation technician-specific simulation laboratory in Akron, OH. Background: Simulation technicians serve a vital role in simulation centers within hospitals/health centers around the world. The first simulation technician degree program in the US has been approved in Akron, OH. To satisfy the requirements of this program and to meet the needs of this special audience of learners, a customized simulation lab is essential. Method: A web-based survey was circulated to simulation technicians prior to completion of the lab for the new program. The survey consisted of questions aimed at identifying structural and functional design elements of a novel simulation center for the training of simulation technicians. Quantitative methods were utilized to analyze data. Results: Over 90% of technicians (n=65) think that a lab designed explicitly for the training of technicians is novel and beneficial. Approximately 75% of respondents think that the space provided appropriate audiovisual (AV) infrastructure and space to evaluate the ability of technicians to be independent. The respondents think that the lab needed more storage space, visualization space for a large number of students, and more space in the technical/repair area. Conclusions: A space designed for the training of simulation technicians was considered to be beneficial. This laboratory requires distinct space for technical repair, adequate bench space for the maintenance and repair of simulators, an appropriate AV infrastructure, and space to evaluate the ability of technicians to be independent. PMID:27096134

  12. Results of computer calculations for a simulated distribution of kidney cells

    NASA Technical Reports Server (NTRS)

    Micale, F. J.

    1985-01-01

    The results of computer calculations for a simulated distribution of kidney cells are given. The calculations were made for different values of electroosmotic flow, U sub o, and the ratio of sample diameter to channel diameter, R.

  13. Comparison of experimental results with numerical simulations for pulsed thermographic NDE

    NASA Astrophysics Data System (ADS)

    Sripragash, Letchuman; Sundaresan, Mannur

    2017-02-01

    This paper examines pulsed thermographic nondestructive evaluation of flat-bottom holes in isotropic materials. Different combinations of defect diameters and depths are considered. The Thermographic Signal Reconstruction (TSR) method is used to analyze these results. In addition, a new normalization procedure is used to remove the dependence of thermographic results on the material properties and instrumentation settings during these experiments. Hence the normalized results depend only on the geometry of the specimen and the defects. These thermographic NDE procedures were also simulated using the finite element method for a variety of defect configurations. The data obtained from numerical simulations were also processed using the normalization scheme. Excellent agreement was seen between the results obtained from experiments and numerical simulations. Therefore, the scheme is extended to introduce a correlation technique by which numerical simulations are used to quantify the defect parameters.

  14. Updated electron-cloud simulation results for the Large Hadron Collider (LHC)

    SciTech Connect

    Furman, M. A.; Pivi, M.

    2001-06-26

    This paper presents new simulation results for the power deposition from the electron cloud in the beam screen of the Large Hadron Collider (LHC). We pay particular attention to the sensitivity of the results to certain low-energy parameters of the secondary electron (SE)emission. Most of these parameters, which constitute an input to the simulation program, are extracted from recent measurements at CERN and SLAC.

  15. High-accuracy, high-precision, high-resolution, continuous monitoring of urban greenhouse gas emissions? Results to date from INFLUX

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower and aircraft based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.

  16. Computational manufacturing of optical interference coatings: method, simulation results, and comparison with experiment.

    PubMed

    Friedrich, Karen; Wilbrandt, Steffen; Stenzel, Olaf; Kaiser, Norbert; Hoffmann, Karl Heinz

    2010-06-01

    Virtual deposition runs have been performed to estimate the production yield of selected oxide optical interference coatings when plasma ion-assisted deposition with an advanced plasma source is applied. Thereby, deposition of each layer can be terminated either by broadband optical monitoring or quartz crystal monitoring. Numerous deposition runs of single-layer coatings have been performed to investigate the reproducibility of coating properties and to quantify deposition errors for the simulation. Variations of the following parameters are considered in the simulation: refractive index, extinction coefficient, and film thickness. The refractive index and the extinction coefficient are simulated in terms of the oscillator model. The parameters are varied using an apodized normal distribution with known mean value and standard deviation. Simulation of variations in the film thickness is performed specific to the selected monitoring strategy. Several deposition runs of the selected oxide interference coatings have been performed to verify the simulation results by experimental data.
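
The virtual-deposition yield estimate described above can be caricatured for a single parameter: draw thickness errors from an apodized (tail-truncated) normal distribution and count the fraction of runs falling inside an acceptance band. All numbers below (target thickness, error magnitude, tolerance) are assumptions for illustration, not the paper's measured deposition statistics.

```python
import numpy as np

rng = np.random.default_rng(7)
runs = 10_000

# Illustrative single-layer example: nominal thickness with a +/-2%
# acceptance band; the error magnitude is an assumed value, not the
# paper's quantified deposition error.
target_d = 125.0  # nm, nominal film thickness
sigma_d = 1.5     # nm, assumed 1-sigma thickness error from monitoring

# "Apodized" normal distribution sketched as rejection of tails beyond
# 3 sigma (draw extra samples so enough survive truncation).
d = rng.normal(target_d, sigma_d, size=4 * runs)
d = d[np.abs(d - target_d) <= 3 * sigma_d][:runs]

# Production yield = fraction of virtual runs within the tolerance band.
yield_est = np.mean(np.abs(d - target_d) / target_d <= 0.02)
print(yield_est)
```

The full method varies refractive index, extinction coefficient, and thickness jointly and scores each virtual run against the target spectral performance; this sketch only shows the sampling-and-counting skeleton of such a yield estimate.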

  17. Exploratory Analysis Of The 3D Cloud Resolving Model Simulations of TOGA COARE: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Mendes, S.; Bretherton, C.

    2007-12-01

    Global climate model studies suggest that cumulus momentum transport (CMT) in tropical oceanic convective cloud systems plays a significant role in the tropical mean circulation and transient variability. CMT is difficult to measure directly and can depend on the detailed structure and organization of the convection. Yet there have been comparatively few evaluations of CMT parameterizations and the assumptions underlying them using 3D cloud resolving model (CRM) simulations. We have analyzed CMT in a four month 3D 64x64x64 gridpoint CRM simulation of TOGA COARE with 1 km horizontal resolution. An additional 256x256x64 large-domain simulation was performed for a 10 day subperiod with strong convection combined with substantial mean vertical zonal wind shear, conditions favorable for strong CMT. Both simulations were identically forced with prescribed vertical motion, horizontal temperature and moisture advection, and relaxation of the domain-mean wind profile to observations on a one-hour timescale. Both were initialized with small amplitude white noise, but spun up realistic convection in less than a day. The domain-mean CMT in the small and large domain simulations for the 10-day common simulation period was compared. The two simulations showed remarkably similar CMT profiles on daily-mean timescales, suggesting that mesoscale contributions to CMT on scales greater than 64 km were small. The skill of a downgradient mixing-length parameterization CMT = Mc*L*dU/dz was also tested. Here, Mc is the convective mass flux, dU/dz is the mean vertical shear, and L is a mixing length for updraft zonal velocity perturbations associated with entrainment and horizontal pressure gradient accelerations. This was done by regressing CMT at each height against Mc*dU/dz at the same height across all 3D model snapshots over the 10 days. The correlation coefficient describes the accuracy of this downgradient parameterization, and L was calculated as the regression slope. 
In the
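
The regression test described in this abstract (fit CMT against Mc*dU/dz at each height, taking the slope as the mixing length L and the correlation coefficient as a skill measure) can be sketched on synthetic snapshot data; all magnitudes below are illustrative, not TOGA COARE values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_snapshots = 240   # e.g. hourly 3D snapshots over a 10-day period
true_L = 600.0      # illustrative mixing length [m], an assumed value

# Synthetic per-snapshot predictor at one height: Mc * dU/dz
# (assumed ranges for mass flux and shear, for illustration only).
mc = rng.uniform(0.01, 0.1, n_snapshots)        # convective mass flux
shear = rng.normal(2e-3, 5e-4, n_snapshots)     # mean vertical shear dU/dz
predictor = mc * shear

# "Measured" CMT = downgradient relation plus noise from unresolved effects.
cmt = true_L * predictor + rng.normal(0.0, 0.01, n_snapshots)

# Regression slope gives L; the correlation coefficient measures the skill
# of the downgradient parameterization at this height.
L_fit = np.polyfit(predictor, cmt, 1)[0]
corr = np.corrcoef(predictor, cmt)[0, 1]
print(L_fit, corr)
```

Repeating this fit at every model level yields a profile of L and of parameterization skill, which is how the analysis described above distinguishes heights where the downgradient closure works from heights where it fails.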

  18. Application of ARM Cloud Radar Simulator to GCMs: Plan, Issues, and Preliminary Results

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Xie, S.; Klein, S. A.; Marchand, R.; Lin, W.; Kollias, P.; Clothiaux, E. E.

    2015-12-01

    It has been challenging to directly compare ARM ground-based cloud radar measurements with climate model output because of limitations or features of the observing process. To address this issue, an ongoing effort in ARM is to implement an ARM cloud radar simulator, similar to the satellite simulators that have been widely used in the global climate modeling community, to convert model data into pseudo-ARM cloud radar observations. The simulator mimics the instrument's view of a narrow atmospheric column (as compared to a large GCM grid-cell), thus allowing meaningful comparison between model output and ARM cloud observations. This work is being closely coordinated with the CFMIP (the Cloud-Feedback Model Intercomparison Project) Observation Simulator Package (COSP, www.cfmip.net; Bodas-Salcedo et al. 2011) project. The goal is to incorporate the ARM simulators into COSP with the global climate modeling community as the target user. This poster provides details about the implementation plan, discusses potential issues with ground-based simulators for both ARM radars, and presents preliminary results in evaluating clouds simulated by the DOE Accelerated Climate Model for Energy (ACME) against ARM radar observations through applying the ARM radar simulator to ACME. Future plans for this project are discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  19. Flow and transport in highly heterogeneous formations: 3. Numerical simulations and comparison with theoretical results

    NASA Astrophysics Data System (ADS)

    Janković, I.; Fiori, A.; Dagan, G.

    2003-09-01

    In parts 1 [, 2003] and 2 [, 2003] a multi-indicator model of heterogeneous formations is devised in order to solve flow and transport in highly heterogeneous formations. The isotropic medium is made up of circular (2-D) or spherical (3-D) inclusions of different conductivities K, submerged in a matrix of effective conductivity. This structure is different from the multi-Gaussian one, even for equal log conductivity distribution and integral scale. A snapshot of a two-dimensional plume in a highly heterogeneous medium of lognormal conductivity distribution shows that the model leads to a complex transport picture. The present study was limited, however, to investigating the statistical moments of ergodic plumes. Two approximate semianalytical solutions, based on a self-consistent model (SC) and on a first-order perturbation in the log conductivity variance (FO), are used in parts 1 and 2 in order to compute the statistical moments of flow and transport variables for a lognormal conductivity pdf. In this paper an efficient and accurate numerical procedure, based on the analytic-element method [, 1989], is used in order to validate the approximate results. The solution satisfies the continuity equation exactly and the continuity of heads at inclusion boundaries to high accuracy. The dimensionless dependent variables depend on two parameters: the volume fraction n of inclusions in the medium and the log conductivity variance σY2. For inclusions of uniform radius, the largest n was 0.9 (2-D) and 0.7 (3-D), whereas the largest σY2 was equal to 10. The SC approximation underestimates the longitudinal Eulerian velocity variance for increasing n and increasing σY2 in 2-D and, to a lesser extent, in 3-D, as compared to numerical results. The FO approximation overestimates these variances, and these effects are larger in the transverse direction. The longitudinal velocity pdf is highly skewed and negative velocities are present at high σY2, especially in 2-D. The main

  20. Wave spectra of a shoaling wave field: A comparison of experimental and simulated results

    NASA Technical Reports Server (NTRS)

    Morris, W. D.; Grosch, C. E.; Poole, L. R.

    1982-01-01

    Wave profile measurements made from an aircraft crossing the North Carolina continental shelf after passage of Tropical Storm Amy in 1975 are used to compute a series of wave energy spectra for comparison with simulated spectra. Results indicate that the observed wave field experiences refraction and shoaling effects causing statistically significant changes in the spectral density levels. A modeling technique is used to simulate the spectral density levels. Total energy levels of the simulated spectra are within 20 percent of those of the observed wave field. The results represent a successful attempt to theoretically simulate, at oceanic scales, the decay of a wave field which contains significant wave energies from deepwater through shoaling conditions.

  1. Results from a limited area mesoscale numerical simulation for 10 April 1979

    NASA Technical Reports Server (NTRS)

    Kalb, M. W.

    1985-01-01

    Results are presented from a nine-hour limited area fine mesh (35-km) mesoscale model simulation initialized with SESAME-AVE I radiosonde data for Apr. 10, 1979 at 2100 GMT. Emphasis is on the diagnosis of mesoscale structure in the mass and precipitation fields. Along the Texas/Oklahoma border, independent of the short wave, convective precipitation formed several hours into the simulation and was organized into a narrow band suggestive of the observed April 10 squall line.

  2. Columbus meteoroid/debris protection study - Experimental simulation techniques and results

    NASA Astrophysics Data System (ADS)

    Schneider, E.; Kitta, K.; Stilp, A.; Lambert, M.; Reimerdes, H. G.

    1992-08-01

    The methods and measurement techniques used in experimental simulations of micrometeoroid and space debris impacts on ESA's laboratory module Columbus are described. Experiments were carried out at the two-stage light gas gun acceleration facilities of the Ernst-Mach Institute. Results are presented on simulations of normal impacts on bumper systems, oblique impacts on dual bumper systems, impacts into cooled targets, impacts into pressurized targets, and planar impacts of low-density projectiles.

  3. Handling Qualities Results of an Initial Geared Flap Tilt Wing Piloted Simulation

    NASA Technical Reports Server (NTRS)

    Guerrero, Lourdes M.; Corliss, Lloyd D.

    1991-01-01

    An exploratory simulation study of a novel approach to pitch control for a tilt wing aircraft was conducted in 1990 on the NASA-Ames Vertical Motion Simulator. The purpose of the study was to evaluate and compare the handling qualities of both a conventional and a geared flap tilt wing control configuration. The geared flap is an innovative control concept which has the potential for reducing or eliminating the horizontal pitch control tail rotor or reaction jets required by prior tilt wing designs. The handling qualities results of the geared flap control configuration are presented in this paper and compared to the conventional (programmed flap) tilt wing control configuration. This paper also describes the geared flap concept, the tilt wing aircraft, the simulation model, the simulation facility and experiment setup, and the pilot evaluation tasks and procedures.

  4. Ship's behaviour during hurricane Sandy near the USA coasts. Simulation results

    NASA Astrophysics Data System (ADS)

    Chiotoroiu, B.; Grosan, N.; Soare, L.

    2015-11-01

    The aim of this study is to analyze the impact of the stormy weather during hurricane Sandy on an oil tanker using a navigation simulator. Meteorological and wave maps from forecast models are used, together with relevant information from the meteorological warnings. The simulation sessions were performed on the navigation simulator of Constanta Maritime University and allowed the selection of specific parameters for the ship and the environment in order to observe the ship's behavior in heavy sea conditions. The simulation results are important due to the unexpected environmental conditions and the ship's position: very close to the hurricane centre when the storm began to change its track and to transform into an extratropical cyclone.

  5. THEMATIC ACCURACY OF THE 1992 NATIONAL LAND-COVER DATA (NLCD) FOR THE EASTERN UNITED STATES: STATISTICAL METHODOLOGY AND REGIONAL RESULTS

    EPA Science Inventory

    The accuracy of the National Land Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or a...

  6. Comparing Simulation Results with Traditional PRA Model on a Boiling Water Reactor Station Blackout Case Study

    SciTech Connect

    Zhegang Ma; Diego Mandelli; Curtis Smith

    2011-07-01

    A previous study used RELAP and RAVEN to conduct a boiling water reactor station black-out (SBO) case study in a simulation based environment to show the capabilities of the risk-informed safety margin characterization methodology. This report compares the RELAP/RAVEN simulation results with traditional PRA model results. The RELAP/RAVEN simulation run results were reviewed for their input parameters and output results. The input parameters for each simulation run include various timing information such as diesel generator or offsite power recovery time, Safety Relief Valve stuck open time, High Pressure Core Injection or Reactor Core Isolation Cooling fail to run time, extended core cooling operation time, depressurization delay time, and firewater injection time. The output results include the maximum fuel clad temperature, the outcome, and the simulation end time. A traditional SBO PRA model in this report contains four event trees that are linked together with the transferring feature in SAPHIRE software. Unlike the usual Level 1 PRA quantification process in which only core damage sequences are quantified, this report quantifies all SBO sequences, whether they are core damage sequences or success (i.e., non core damage) sequences, in order to provide a full comparison with the simulation results. Three different approaches were used to solve event tree top events and quantify the SBO sequences: “W” process flag, default process flag without proper adjustment, and default process flag with adjustment to account for the success branch probabilities. Without post-processing, the first two approaches yield incorrect results with a total conditional probability greater than 1.0. The last approach accounts for the success branch probabilities and provides correct conditional sequence probabilities that are to be used for comparison. To better compare the results from the PRA model and the simulation runs, a simplified SBO event tree was developed with only four

  7. High Fidelity Thermal Simulators for Non-Nuclear Testing: Analysis and Initial Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David

    2007-01-01

    Non-nuclear testing can be a valuable tool in the development of a space nuclear power system, providing system characterization data and allowing one to work through various fabrication, assembly and integration issues without the cost and time associated with a full ground nuclear test. In a non-nuclear test bed, electric heaters are used to simulate the heat from nuclear fuel. Testing with non-optimized heater elements allows one to assess thermal, heat transfer, and stress-related attributes of a given system, but fails to demonstrate the dynamic response that would be present in an integrated, fueled reactor system. High fidelity thermal simulators that match both the static and the dynamic fuel pin performance that would be observed in an operating, fueled nuclear reactor can vastly increase the value of non-nuclear test results. With optimized simulators, the integration of thermal hydraulic hardware tests with simulated neutronic response provides a bridge between electrically heated testing and fueled nuclear testing, providing a better assessment of system integration issues, characterization of integrated system response times and response characteristics, and assessment of potential design improvements at a relatively small fiscal investment. Initial conceptual thermal simulator designs are determined by simple one-dimensional analysis at a single axial location and at steady state conditions; feasible concepts are then input into a detailed three-dimensional model for comparison to expected fuel pin performance. Static and dynamic fuel pin performance for a proposed reactor design is determined using SINDA/FLUINT thermal analysis software, and comparison is made between the expected nuclear performance and the performance of conceptual thermal simulator designs. Through a series of iterative analyses, a conceptual high fidelity design can be developed. Test results presented in this paper correspond to a "first cut" simulator design for a potential
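
    The "simple one-dimensional analysis at a single axial location" step can be sketched with the textbook steady-state conduction result for a uniformly heated rod: the centerline runs hotter than the surface by q·R²/(4k). The numbers below are purely illustrative and are not taken from the SINDA/FLUINT model:

```python
def centerline_temperature(t_surface, q_vol, radius, k):
    """Steady-state centerline temperature of a rod with uniform volumetric
    heating q_vol (W/m^3), radius (m), and thermal conductivity k (W/m-K)."""
    return t_surface + q_vol * radius**2 / (4.0 * k)

# e.g. a 5 mm radius heater rod, k = 60 W/m-K, 2e8 W/m^3 heating (placeholders)
print(centerline_temperature(900.0, 2e8, 0.005, 60.0))  # ≈ 920.8 K
```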

  8. Obtaining identical results on varying numbers of processors in domain decomposed particle Monte Carlo simulations.

    SciTech Connect

    Brunner, Thomas A.; Kalos, Malvin H.; Gentile, Nicholas A.

    2005-03-01

    Domain-decomposed Monte Carlo codes, like other domain-decomposed codes, are difficult to debug. Domain decomposition is prone to error, and interactions between the domain decomposition code and the rest of the algorithm often produce subtle bugs. These bugs are particularly difficult to find in a Monte Carlo algorithm, in which the results have statistical noise. Variations in the results due to statistical noise can mask errors when comparing the results to other simulations or analytic results.
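
    One common way to obtain decomposition-independent results (a general technique, not necessarily the one used in this code) is to key each particle's random stream to a global particle ID rather than to the processor rank, so a particle's history is bitwise identical regardless of which domain processes it:

```python
import numpy as np

def particle_rng(particle_id, base_seed=12345):
    # Each particle gets its own stream keyed by its global ID, so the sampled
    # history does not depend on which rank owns the particle.
    return np.random.default_rng([base_seed, particle_id])

def history(particle_id):
    rng = particle_rng(particle_id)
    # Toy "history": the total path length of 10 sampled steps.
    return rng.random(10).sum()

# Two mock decompositions process the same 8 particles in different orders;
# the per-particle results (and hence any tally over them) are identical.
ids = range(8)
run_a = sorted(history(i) for i in ids)
run_b = sorted(history(i) for i in reversed(ids))
assert run_a == run_b
```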

  9. Geometry and Simulation Results for a Gas Turbine Representative of the Energy Efficient Engine (EEE)

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Beach, Tim; Turner, Mark; Siddappaji, Kiran; Hendricks, Eric S.

    2015-01-01

    This paper describes the geometry and simulation results of a gas-turbine engine based on the original EEE engine developed in the 1980s. While the EEE engine was never in production, the technology developed during the program underpins many of the current generation of gas turbine engines. This geometry is being explored as a potential multi-stage turbomachinery test case that may be used to develop technology for virtual full-engine simulation. Simulation results were used to test the validity of each component geometry representation. Results are compared to a zero-dimensional engine model developed from experimental data. The geometry is captured in a series of Initial Graphics Exchange Specification (IGES) files and is available on a supplemental DVD to this report.

  10. Results of a 3-D full particle simulation of quasi-perpendicular shock

    NASA Astrophysics Data System (ADS)

    Shinohara, I.; Fujimoto, M.

    2010-12-01

    Recent progress in computational power enables us to perform macro-scale three-dimensional simulations with full particle codes. In this presentation, we will report results of a three-dimensional simulation of a quasi-perpendicular shock. The simulation parameters were selected to reproduce a Cluster-II observation reported by Seki et al. (2009): M_A=7.4 and beta=0.16. The realistic mass ratio mi/me=1840 was used, and almost one square ion inertial length could be allocated to the plane perpendicular to the upstream flow axis. The result shows that both the self-reformation process and whistler emission are observed. However, the 3-D result is not a simple superposition of 2-D results. The most striking feature is the quite complicated wave activity found in the shock foot region. With the help of this wave activity, electron heating observed in the 3-D run is more efficient than that in the 1-D and 2-D runs with the same shock parameters. Moreover, non-thermal electrons are produced only in the 3-D run. In this paper, comparing the 3-D result with previous 1-D and 2-D simulation results, the three-dimensional nature of the shock transition region of the quasi-perpendicular shock is discussed.

  11. Simulation and Analysis of Microwave Transmission through an Electron Cloud, a Comparison of Results

    SciTech Connect

    Sonnad, Kiran; Sonnad, Kiran; Furman, Miguel; Veitzer, Seth; Stoltz, Peter; Cary, John

    2007-03-12

    Simulation studies of the transmission of microwaves through electron clouds show good agreement with analytic results. The electron cloud produces a shift in the phase of the microwave. Experimental observation of this phenomenon would lead to a useful diagnostic tool for assessing the local density of electron clouds in an accelerator. These experiments are being carried out at the CERN SPS and the PEP-II LER at SLAC, and are proposed for the Fermilab main injector. In this study, a brief analysis of the phase shift is provided and the results are compared with those obtained from simulations.

  12. SIMULATION AND ANALYSIS OF MICROWAVE TRANSMISSION THROUGH AN ELECTRON CLOUD, A COMPARISON OF RESULTS

    SciTech Connect

    Sonnad, Kiran G.; Furman, Miguel; Veitzer, Seth A.; Cary, John

    2006-04-15

    Simulation studies of the transmission of microwaves through electron clouds show good agreement with analytic results. The electron cloud produces a shift in the phase of the microwave. Experimental observation of this phenomenon would lead to a useful diagnostic tool for assessing the local density of electron clouds in an accelerator. These experiments are being carried out at the CERN SPS and the PEP-II LER at SLAC, and are proposed for the Fermilab main injector. In this study, a brief analysis of the phase shift is provided and the results are compared with those obtained from simulations.
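
    The phase shift in question follows from the standard cold-plasma dispersion relation: for an underdense cloud (plasma frequency ωp much below the wave frequency ω) of length L, the phase advance is approximately Δφ ≈ ωp²L/(2cω). A sketch with illustrative numbers — the density, frequency, and path length below are placeholders, not the actual SPS or PEP-II parameters:

```python
import math

E = 1.602176634e-19      # elementary charge, C
ME = 9.1093837015e-31    # electron mass, kg
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
C = 2.99792458e8         # speed of light, m/s

def phase_shift(n_e, freq_hz, length_m):
    """Phase advance (rad) of a microwave crossing a uniform electron cloud of
    density n_e (m^-3) over length_m, valid for wp << w."""
    wp2 = n_e * E**2 / (EPS0 * ME)       # plasma frequency squared, (rad/s)^2
    w = 2.0 * math.pi * freq_hz
    return wp2 * length_m / (2.0 * C * w)

# e.g. n_e = 1e12 m^-3, f = 2 GHz, L = 30 m (illustrative values)
print(phase_shift(1e12, 2e9, 30.0))
```

    Because the shift is linear in density, a measured Δφ translates directly into a line-averaged cloud density, which is what makes the technique attractive as a diagnostic.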

  13. SU-E-T-35: An Investigation of the Accuracy of Cervical IMRT Dose Distribution Using 2D/3D Ionization Chamber Arrays System and Monte Carlo Simulation

    SciTech Connect

    Zhang, Y; Yang, J; Liu, H; Liu, D

    2014-06-01

    Purpose: The purpose of this work is to compare the verification results of three solutions (2D/3D ionization chamber array measurement and Monte Carlo simulation); the results will help inform a clinical decision as to how to perform our cervical IMRT verification. Methods: Seven cervical cases were planned with Pinnacle 8.0m to meet the clinical acceptance criteria. The plans were recalculated in the Matrixx and Delta4 phantoms with the actual plan parameters. The plans were also recalculated by Monte Carlo using the leaf sequences and MUs of the individual plans for every patient and for the Matrixx and Delta4 phantoms. All plans for the Matrixx and Delta4 phantoms were delivered and measured. The dose distribution of the iso slice, dose profiles, and gamma maps of every beam were used to evaluate the agreement. Dose-volume histograms were also compared. Results: The dose distribution of the iso slice and the dose profiles from the Pinnacle calculation were in agreement with the Monte Carlo simulation and the Matrixx and Delta4 measurements. A 95.2%/91.3% gamma pass ratio was obtained between the Matrixx/Delta4 measurements and the Pinnacle distributions within the 3mm/3% gamma criteria. A 96.4%/95.6% gamma pass ratio was obtained between the Matrixx/Delta4 measurements and the Monte Carlo simulation within the 2mm/2% gamma criteria, and almost a 100% gamma pass ratio within the 3mm/3% gamma criteria. The DVH plots show slight differences between Pinnacle and the Delta4 measurement as well as between Pinnacle and the Monte Carlo simulation, but excellent agreement between the Delta4 measurement and the Monte Carlo simulation. Conclusion: It was shown that Matrixx/Delta4 and Monte Carlo simulation can be used very efficiently to verify cervical IMRT delivery. In terms of gamma value the pass ratio of Matrixx was slightly higher; however, Delta4 identified more problematic fields. The primary advantage of Delta4 is that it can measure true 3D dosimetry, while Monte Carlo can simulate in patient CT images rather than in a phantom.
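
    A gamma pass ratio of the kind quoted above combines a dose-difference criterion with a distance-to-agreement criterion. The following is a deliberately simplified 1-D global gamma sketch on synthetic profiles, for illustration only; it is not a clinical tool and not the evaluation software used in the study:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, coords, dd=0.03, dta=3.0):
    """Simplified 1-D global gamma analysis.
    dd:  dose-difference criterion as a fraction of the max reference dose
    dta: distance-to-agreement criterion in the units of coords (e.g. mm)"""
    d_norm = dd * dose_ref.max()
    gammas = []
    for xr, dr in zip(coords, dose_ref):
        # Gamma at a reference point = min over evaluated points of the
        # combined normalized distance and dose difference.
        g2 = ((coords - xr) / dta) ** 2 + ((dose_eval - dr) / d_norm) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)

x = np.linspace(0.0, 100.0, 201)                  # positions, mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)           # synthetic reference profile
meas = np.exp(-((x - 50.5) / 20.0) ** 2) * 1.01   # 0.5 mm shift, 1% scaling
print(f"{gamma_pass_rate(ref, meas, x):.1f}% pass at 3%/3mm")  # → 100.0% pass
```

    Tightening the criteria to 2%/2mm, as in the abstract, shrinks the tolerance ellipse and lowers the pass rate for the same pair of distributions.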

  14. Assessment of the improvements in accuracy of aerosol characterization resulted from additions of polarimetric measurements to intensity-only observations using GRASP algorithm (Invited)

    NASA Astrophysics Data System (ADS)

    Dubovik, O.; Litvinov, P.; Lapyonok, T.; Herman, M.; Fedorenko, A.; Lopatin, A.; Goloub, P.; Ducos, F.; Aspetsberger, M.; Planer, W.; Federspiel, C.

    2013-12-01

    During the last few years we have been developing the GRASP (Generalized Retrieval of Aerosol and Surface Properties) algorithm, designed for the enhanced characterization of aerosol properties from spectral, multi-angular polarimetric remote sensing observations. The concept of GRASP relies essentially on the accumulated positive research heritage from previous remote sensing aerosol retrieval developments, in particular those from the AERONET and POLDER retrieval activities. The details of the algorithm are described by Dubovik et al. (Atmos. Meas. Tech., 4, 975-1018, 2011). GRASP retrieves properties of both aerosol and land surface reflectance in cloud-free environments. It is based on highly advanced statistically optimized fitting and deduces nearly 50 unknowns for each observed site. The algorithm derives a similar set of aerosol parameters as AERONET, including a detailed particle size distribution, the spectrally dependent complex index of refraction, and the fraction of non-spherical particles. The algorithm uses detailed aerosol and surface models and fully accounts for all multiple interactions of scattered solar light with aerosol, gases, and the underlying surface. All calculations are done on-line without using traditional look-up tables. In addition, the algorithm uses the new multi-pixel retrieval concept: a simultaneous fitting of a large group of pixels with additional constraints limiting the time variability of surface properties and the spatial variability of aerosol properties. This principle is expected to result in higher consistency and accuracy of aerosol products compared to conventional approaches, especially over bright surfaces where the information content of satellite observations with respect to aerosol properties is limited. GRASP is a highly versatile algorithm that allows input from both satellite and ground-based measurements. It also has essential flexibility in measurement processing. For example, if the observation data set includes spectral

  15. Assessing effects of the e-Chasqui laboratory information system on accuracy and timeliness of bacteriology results in the Peruvian tuberculosis program.

    PubMed

    Blaya, Joaquin A; Shin, Sonya S; Yagui, Martin J A; Yale, Gloria; Suarez, Carmen; Asencios, Luis; Fraser, Hamish

    2007-10-11

    We created a web-based laboratory information system, e-Chasqui, to connect public laboratories with health centers in order to improve communication and analysis. After one year, we performed a pre- and post-implementation assessment of communication delays and found that e-Chasqui maintained the average delay but eliminated delays of over 60 days. Adding digital verification maintained the average delay but should increase accuracy. We are currently performing a randomized evaluation of the impacts of e-Chasqui.

  16. Battery Performance of ADEOS (Advanced Earth Observing Satellite) and Ground Simulation Test Results

    NASA Technical Reports Server (NTRS)

    Koga, K.; Suzuki, Y.; Kuwajima, S.; Kusawake, H.

    1997-01-01

    The Advanced Earth Observing Satellite (ADEOS) was developed with the aim of establishing platform technology for future spacecraft and inter-orbit communication technology for the transmission of earth observation data. ADEOS uses five batteries consisting of two packs. This paper describes, using graphs and tables, the ground simulation tests and results that were carried out to determine the performance of the ADEOS batteries.

  17. Analysis Results for Lunar Soil Simulant Using a Portable X-Ray Fluorescence Analyzer

    NASA Technical Reports Server (NTRS)

    Boothe, R. E.

    2006-01-01

    Lunar soil will potentially be used for oxygen generation, water generation, and as filler for building blocks during habitation missions on the Moon. NASA's in situ fabrication and repair program is evaluating portable technologies that can assess the chemistry of lunar soil and lunar soil simulants. This Technical Memorandum summarizes the results of the JSC-1 lunar soil simulant analysis using the TRACeR III IV handheld x-ray fluorescence analyzer, manufactured by KeyMaster Technologies, Inc. The focus of the evaluation was to determine how well the current instrument configuration would detect and quantify the components of JSC-1.

  18. The Aurora radiation-hydrodynamical simulations of reionization: calibration and first results

    NASA Astrophysics Data System (ADS)

    Pawlik, Andreas H.; Rahmati, Alireza; Schaye, Joop; Jeon, Myoungwon; Dalla Vecchia, Claudio

    2017-04-01

    We introduce a new suite of radiation-hydrodynamical simulations of galaxy formation and reionization called Aurora. The Aurora simulations make use of a spatially adaptive radiative transfer technique that lets us accurately capture the small-scale structure in the gas at the resolution of the hydrodynamics, in cosmological volumes. In addition to ionizing radiation, Aurora includes galactic winds driven by star formation and the enrichment of the universe with metals synthesized in the stars. Our reference simulation uses 2 × 512³ dark matter and gas particles in a box of size 25 h⁻¹ comoving Mpc with a force softening scale of at most 0.28 h⁻¹ kpc. It is accompanied by simulations in larger and smaller boxes and at higher and lower resolution, employing up to 2 × 1024³ particles, to investigate numerical convergence. All simulations are calibrated to yield simulated star formation rate functions in close agreement with observational constraints at redshift z = 7 and to achieve reionization at z ≈ 8.3, which is consistent with the observed optical depth to reionization. We focus on the design and calibration of the simulations and present some first results. The median stellar metallicities of low-mass galaxies at z = 6 are consistent with the metallicities of dwarf galaxies in the Local Group, which are believed to have formed most of their stars at high redshifts. After reionization, the mean photoionization rate decreases systematically with increasing resolution. This coincides with a systematic increase in the abundance of neutral hydrogen absorbers in the intergalactic medium.

  19. Reconfigurable computing for Monte Carlo simulations: Results and prospects of the Janus project

    NASA Astrophysics Data System (ADS)

    Baity-Jesi, M.; Baños, R. A.; Cruz, A.; Fernandez, L. A.; Gil-Narvion, J. M.; Gordillo-Guerrero, A.; Guidetti, M.; Iñiguez, D.; Maiorano, A.; Mantovani, F.; Marinari, E.; Martin-Mayor, V.; Monforte-Garcia, J.; Muñoz Sudupe, A.; Navarro, D.; Parisi, G.; Pivanti, M.; Perez-Gaviro, S.; Ricci-Tersenghi, F.; Ruiz-Lorenzo, J. J.; Schifano, S. F.; Seoane, B.; Tarancon, A.; Tellez, P.; Tripiccione, R.; Yllanes, D.

    2012-08-01

    We describe Janus, a massively parallel FPGA-based computer optimized for the simulation of spin glasses, theoretical models for the behavior of glassy materials. FPGAs (as compared to GPUs or many-core processors) provide a complementary approach to massively parallel computing. In particular, our model problem is formulated in terms of binary variables, and floating-point operations can be (almost) completely avoided. The FPGA architecture allows us to run many independent threads with almost no latencies in memory access, thus updating up to 1024 spins per cycle. We describe Janus in detail and we summarize the physics results obtained in four years of operation of this machine; we discuss two types of physics applications: long simulations on very large systems (which try to mimic and provide understanding about the experimental non-equilibrium dynamics), and low-temperature equilibrium simulations using an artificial parallel tempering dynamics. The time scale of our non-equilibrium simulations spans eleven orders of magnitude (from picoseconds to a tenth of a second). On the other hand, our equilibrium simulations are unprecedented both because of the low temperatures reached and for the large systems that we have brought to equilibrium. A finite-time scaling ansatz emerges from the detailed comparison of the two sets of simulations. Janus has made it possible to perform spin-glass simulations that would take several decades on more conventional architectures. The paper ends with an assessment of the potential of possible future versions of the Janus architecture, based on state-of-the-art technology.
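
    The floating-point-free update that Janus exploits can be sketched on a toy 1-D Edwards-Anderson chain: with ±1 spins and ±1 couplings, the local energy change on a spin flip is one of three small integers, so the Metropolis acceptance probabilities can be tabulated once. Janus itself simulates 3-D lattices on FPGAs with many spins updated per cycle; this Python sketch only illustrates the idea:

```python
import math
import random

N, BETA = 64, 1.0
random.seed(1)
J = [random.choice((-1, 1)) for _ in range(N)]  # J[i] couples spins i and i+1
S = [random.choice((-1, 1)) for _ in range(N)]

# Flipping spin i changes the energy by an integer dE in {-4, 0, 4}, so the
# acceptance probabilities exp(-beta*dE) are precomputed -- no floating-point
# work is needed inside the update loop beyond one comparison.
accept = {de: min(1.0, math.exp(-BETA * de)) for de in (-4, 0, 4)}

def sweep():
    for i in range(N):
        left, right = (i - 1) % N, (i + 1) % N
        de = 2 * S[i] * (J[left] * S[left] + J[i] * S[right])
        if random.random() < accept[de]:
            S[i] = -S[i]

for _ in range(100):
    sweep()

energy = -sum(J[i] * S[i] * S[(i + 1) % N] for i in range(N))
print("energy after 100 sweeps:", energy)
```

    On hardware, the comparison against the tabulated threshold is done with integer/fixed-point arithmetic, which is what lets an FPGA update many spins per clock cycle.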

  20. Convergence and shear statistics in galaxy clusters as a result of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Poplavsky, Alexander

    2016-03-01

    In this paper the influence of the galaxy cluster halo environment on the deflection properties of its galaxies is investigated. For this purpose, circular and elliptical projected cluster haloes obeying Einasto density profiles are modelled in the ΛCDM cosmological model. By Monte Carlo simulations, external shear and convergence are calculated for random positions of a test galaxy within its cluster. Throughout the simulations the total virial mass, profile concentration, and slope parameters are varied for both the cluster and its galaxies. The cluster is composed of a smooth matter distribution (intergalactic gas and dark matter) and randomly placed galaxies. As a result of multiple simulation runs, robust statistical estimates of external shear and convergence are derived for variable cluster characteristics and redshift. In addition, the models for external shear and convergence are applied to the galaxy lens seen through the cluster IRC-0218.
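
    The Monte Carlo procedure can be caricatured as follows. This sketch treats each cluster member as a point-mass lens with an arbitrary Einstein radius and sums the shear contributions at a test position, a deliberate simplification of the Einasto-profile setup used in the paper; all values are placeholders:

```python
import math
import random

random.seed(42)

def shear_realization(n_gal=100, cluster_r=1.0, theta_e=0.01):
    """One realization of the external shear magnitude at the cluster center
    from n_gal randomly placed point-mass perturbers (illustrative units)."""
    g1 = g2 = 0.0
    for _ in range(n_gal):
        r = cluster_r * math.sqrt(1.0 - random.random())  # uniform in a disk, r > 0
        phi = 2.0 * math.pi * random.random()
        g = (theta_e / r) ** 2          # point-mass shear magnitude at distance r
        g1 += -g * math.cos(2.0 * phi)  # tangential shear components
        g2 += -g * math.sin(2.0 * phi)
    return math.hypot(g1, g2)

# Repeat over many random placements to build up the shear statistics.
samples = sorted(shear_realization() for _ in range(200))
print(f"median external shear = {samples[len(samples) // 2]:.4f}")
```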

  1. Monte Carlo simulations of microchannel plate detectors I: steady-state voltage bias results

    SciTech Connect

    Ming Wu, Craig Kruschwitz, Dane Morgan, Jiaming Morgan

    2008-07-01

    X-ray detectors based on straight-channel microchannel plates (MCPs) are a powerful diagnostic tool for two-dimensional, time-resolved imaging and time-resolved x-ray spectroscopy in the fields of laser-driven inertial confinement fusion and fast Z-pinch experiments. Understanding the behavior of microchannel plates as used in such detectors is critical to understanding the data obtained. The subject of this paper is a Monte Carlo computer code we have developed to simulate the electron cascade in a microchannel plate under a static applied voltage. Also included in the simulation is elastic reflection of low-energy electrons from the channel wall, which is important at lower voltages. When model results were compared to measured microchannel plate sensitivities, good agreement was found. Spatial resolution simulations of MCP-based detectors were also presented and found to agree with experimental measurements.

  2. Monte Carlo simulations of microchannel plate detectors. I. Steady-state voltage bias results

    SciTech Connect

    Wu Ming; Kruschwitz, Craig A.; Morgan, Dane V.; Morgan, Jiaming

    2008-07-15

    X-ray detectors based on straight-channel microchannel plates (MCPs) are a powerful diagnostic tool for two-dimensional, time-resolved imaging and time-resolved x-ray spectroscopy in the fields of laser-driven inertial confinement fusion and fast Z-pinch experiments. Understanding the behavior of microchannel plates as used in such detectors is critical to understanding the data obtained. The subject of this paper is a Monte Carlo computer code we have developed to simulate the electron cascade in a MCP under a static applied voltage. Also included in the simulation is elastic reflection of low-energy electrons from the channel wall, which is important at lower voltages. When model results were compared to measured MCP sensitivities, good agreement was found. Spatial resolution simulations of MCP-based detectors were also presented and found to agree with experimental measurements.
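
    The heart of such a code is a branching-process electron cascade. The following is a deliberately minimal sketch: a constant mean secondary yield per wall strike stands in for the energy-, angle-, and field-dependent yields a real MCP code tracks, and all numbers are illustrative:

```python
import math
import random

random.seed(0)

def poisson(mean):
    # Knuth's method; keeps the sketch dependency-free.
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def channel_gain(n_strikes=8, mean_yield=2.0, trials=200):
    """Mean charge gain of one channel: each of n_strikes wall collisions
    multiplies every electron by a Poisson number of secondaries."""
    total = 0
    for _ in range(trials):
        electrons = 1
        for _ in range(n_strikes):
            electrons = sum(poisson(mean_yield) for _ in range(electrons))
            if electrons == 0:   # cascade died out
                break
        total += electrons
    return total / trials

print(f"mean gain = {channel_gain():.0f}")
```

    The expected gain of this toy model is simply mean_yield**n_strikes; lowering the yield (as happens at lower bias voltage, and as modified by elastic wall reflection) reduces the gain steeply, which is the qualitative behavior the full simulation quantifies.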

  3. Computer simulation of shelf and stream profile geomorphic evolution resulting from eustasy and uplift

    SciTech Connect

    Johnson, R.M.

    1993-04-01

    A two-dimensional computer simulation of shelf and stream profile evolution with sea level oscillation has been developed to illustrate the interplay of coastal and fluvial processes on uplifting continental margins. The shelf evolution portion of the simulation is based on the erosional model of Trenhaile (1989). The rate of high tide cliff erosion decreases as the abrasion platform gradient decreases and the sea cliff height increases. The rate of subtidal erosion decreases as the subtidal sea floor gradient decreases. Values are specified for annual wave energy, the energy required to erode a cliff notch 1 meter deep, the nominal low tidal erosion rate, and the rate of removal of cliff debris. The values were chosen arbitrarily to yield a geomorphic evolution consistent with the present coast of northern California, where flights of uplifted marine terraces are common. The stream profile evolution simulation interfaces in real time with the shelf simulation. The stream profile consists of uniformly spaced cells, each representing the median height of a profile segment. The stream simulation results show that stream response to sea level change on an uplifting coast is dependent on the profile gradient near the stream mouth, relative to the shelf gradient. Small streams with steep gradients aggrade onto the emergent shelf during sea level fall and incise at the mountain front during sea level rise. Large streams with low gradients incise the emergent shelf during sea level fall and aggrade in their valleys during sea level rise.

  4. Ejector nozzle test results at simulated flight conditions for an advanced supersonic transport propulsion system

    NASA Technical Reports Server (NTRS)

    Nelson, D. P.; Bresnahan, D. L.

    1983-01-01

    Results are presented of wind tunnel tests conducted to verify the performance improvements of a refined ejector nozzle design for advanced supersonic transport propulsion systems. The analysis of results obtained at simulated engine operating conditions is emphasized. Tests were conducted with models of approximately 1/10th scale which were configured to simulate nozzle operation at takeoff, subsonic cruise, transonic cruise, and supersonic cruise. Transonic cruise operation was not a consideration during the nozzle design phase, although an evaluation at this condition was later conducted. Test results, characterized by thrust and flow coefficients, are given for a range of nozzle pressure ratios, emphasizing the thrust performance at the engine operating conditions predicted for each flight Mach number. The results indicate that nozzle performance goals were met or closely approximated at takeoff and supersonic cruise, while subsonic cruise performance was within 2.3 percent of the goal with further improvement possible.

  5. Parameter Accuracy in Meta-Analyses of Factor Structures

    ERIC Educational Resources Information Center

    Gnambs, Timo; Staufenbiel, Thomas

    2016-01-01

    Two new methods for the meta-analysis of factor loadings are introduced and evaluated by Monte Carlo simulations. The direct method pools each factor loading individually, whereas the indirect method synthesizes correlation matrices reproduced from factor loadings. The results of the two simulations demonstrated that the accuracy of…

  6. Molecular Dynamics Simulations of Intrinsically Disordered Proteins: On the Accuracy of the TIP4P-D Water Model and the Representativeness of Protein Disorder Models.

    PubMed

    Henriques, João; Skepö, Marie

    2016-07-12

    Here, we first present a follow-up to a previous work by our group on the problems of molecular dynamics simulations of intrinsically disordered proteins (IDPs) [Henriques et al., J. Chem. Theory Comput. 2015, 11, 3420-3431], using the recently developed TIP4P-D water model. When used in conjunction with the standard AMBER ff99SB-ILDN force field and applied to the simulation of Histatin 5, our IDP model, we obtain results which are in excellent agreement with the best performing IDP-suitable force field from the earlier study and with experiment. We then assess the representativeness of the IDP models used in these and similar studies, finding that most are too short in comparison to the average IDP and contain a bias toward hydrophilic amino acid residues. Moreover, several key order- and disorder-promoting residues are also found to be misrepresented. It seems appropriate for future studies to address these issues.

  7. Methods for improving accuracy and extending results beyond periods covered by traditional ground-truth in remote sensing classification of a complex landscape

    NASA Astrophysics Data System (ADS)

    Mueller-Warrant, George W.; Whittaker, Gerald W.; Banowetz, Gary M.; Griffith, Stephen M.; Barnhart, Bradley L.

    2015-06-01

    Successful development of approaches to quantify impacts of diverse landuse and associated agricultural management practices on ecosystem services is frequently limited by lack of historical and contemporary landuse data. We hypothesized that ground truth data from one year could be used to extrapolate previous or future landuse in a complex landscape where cropping systems do not generally change greatly from year to year, because the majority of crops are established perennials or the same annual crops grown on the same fields over multiple years. Prior to testing this hypothesis, it was first necessary to classify 57 major landuses in the Willamette Valley of western Oregon from 2005 to 2011 using conventional same-year ground truth, elaborating on previously published work and traditional sources such as Cropland Data Layers (CDL) to more fully include minor crops grown in the region. Available remote sensing data included Landsat, MODIS 16-day composites, and National Aerial Imagery Program (NAIP) imagery, all of which were resampled to a common 30 m resolution. The frequent presence of clouds and Landsat7 scan line gaps forced us to conduct a series of separate classifications in each year, which were then merged by choosing whichever classification used the highest number of cloud- and gap-free bands at any given pixel. Procedures adopted to improve accuracy beyond that achieved by maximum likelihood pixel classification included majority-rule reclassification of pixels within 91,442 Common Land Unit (CLU) polygons, smoothing and aggregation of areas outside the CLU polygons, and majority-rule reclassification over time of forest and urban development areas. Final classifications in all seven years separated annually disturbed agriculture, established perennial crops, forest, and urban development from each other at 90 to 95% overall 4-class validation accuracy. In the most successful use of subsequent year ground-truth data to classify prior year landuse, an
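
    The majority-rule reclassification step can be sketched with synthetic rasters; `classes` and `labels` below are stand-ins for the per-pixel classifier output and the CLU polygon raster (0 marking pixels outside any polygon):

```python
import numpy as np

def majority_filter(classes, labels):
    """Reassign every pixel inside each labeled polygon to that polygon's
    modal class; pixels outside all polygons (label 0) are left unchanged."""
    out = classes.copy()
    for poly in np.unique(labels):
        if poly == 0:
            continue
        mask = labels == poly
        values, counts = np.unique(classes[mask], return_counts=True)
        out[mask] = values[counts.argmax()]
    return out

classes = np.array([[1, 1, 2], [1, 3, 3], [2, 3, 3]])
labels  = np.array([[1, 1, 1], [1, 2, 2], [0, 2, 2]])
print(majority_filter(classes, labels))  # polygon 1 → class 1, polygon 2 → class 3
```

    This suppresses isolated misclassified pixels inside a field at the cost of assuming each polygon is a single landuse, which is what makes the CLU boundaries valuable as constraints.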

  8. Results of an A109 simulation validation and handling qualities study

    NASA Technical Reports Server (NTRS)

    Eshow, Michelle M.; Orlandi, Diego; Bonaita, Giovanni; Barbieri, Sergio

    1989-01-01

    The results for the validation of a mathematical model of the Agusta A109 helicopter, and subsequent use of the model as the baseline for a handling qualities study of cockpit centerstick requirements, are described. The technical approach included flight test, non-realtime analysis, and realtime piloted simulation. Results of the validation illustrate a time- and frequency-domain approach to the model and simulator issues. The final A109 model correlates well with the actual aircraft with the Stability Augmentation System (SAS) engaged, but is unacceptable without the SAS because of instability and response coupling at low speeds. Results of the centerstick study support the current U.S. Army handling qualities requirements for centerstick characteristics.

  9. Simulation Results for the New NSTX HHFW Antenna Straps Design by Using Microwave Studio

    SciTech Connect

    Kung, C C; Brunkhorst, C; Greenough, N; Fredd, E; Castano, A; Miller, D; D'Amico, G; Yager, R; Hosea, J; Wilson, J R; Ryan, P

    2009-05-26

    Experimental results have shown that the high harmonic fast wave (HHFW) at 30 MHz can provide substantial plasma heating and current drive for NSTX spherical tokamak operation. However, the present antenna strap design rarely achieves the design goal of delivering the full transmitter capability of 6 MW to the plasma. In order to deliver more power to the plasma, a new antenna strap design and the associated coaxial line feeds are being constructed. This new antenna strap design features two feedthroughs, replacing the old single-feedthrough design. In the design process, CST Microwave Studio has been used to simulate the entire new antenna strap structure, including the enclosure and the Faraday shield. In this paper, the antenna strap model and the simulation results will be discussed in detail. The test results from the new antenna straps with their associated resonant loops will be presented as well.

  10. Results from tight and loose coupled multiphysics in nuclear fuels performance simulations using BISON

    SciTech Connect

    Novascone, S. R.; Spencer, B. W.; Andrs, D.; Williamson, R. L.; Hales, J. D.; Perez, D. M.

    2013-07-01

    The behavior of nuclear fuel in the reactor environment is affected by multiple physics, most notably heat conduction and solid mechanics, which can have a strong influence on each other. To provide credible solutions, a fuel performance simulation code must have the ability to obtain solutions for each of the physics, including coupling between them. Solution strategies for solving systems of coupled equations can be categorized as loosely coupled, where the individual physics are solved separately, keeping the solutions for the other physics fixed at each iteration, or tightly coupled, where the nonlinear solver simultaneously drives down the residual for each physics, taking into account the coupling between the physics in each nonlinear iteration. In this paper, we compare the performance of loosely and tightly coupled solution algorithms for thermomechanical problems involving coupled thermal and mechanical contact, which is a primary source of interdependence between thermal and mechanical solutions in fuel performance models. The results indicate that loosely coupled simulations require significantly more nonlinear iterations, and may lead to convergence trouble when the thermal conductivity of the gap is too small. We also apply the tightly coupled solution strategy to a nuclear fuel simulation of an experiment in a test reactor. Studying the results from these simulations indicates that convergence for either approach may be problem-dependent, i.e., there may be problems for which a loosely coupled approach converges where a tightly coupled one does not, and vice versa. (authors)
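
    The iteration-count difference can be reproduced on a toy two-unknown "thermomechanical" system; the equations and coefficients below are invented for illustration and have nothing to do with BISON's actual residuals. The loosely coupled (Picard) iteration freezes one field while solving the other, whereas the tightly coupled (Newton) iteration solves both residuals together:

```python
import numpy as np

# Toy coupled system:  f1(T, u) = T - 1 - A*u = 0   (temperature depends on u)
#                      f2(T, u) = u - B*T     = 0   (displacement depends on T)
# Stronger coupling (larger A*B) slows Picard; Newton stays quadratic.
A, B, TOL = 0.5, 0.9, 1e-10

def loose():
    """Picard: alternate single-physics solves with the other field frozen."""
    u = 0.0
    for it in range(1, 1000):
        T = 1.0 + A * u        # "thermal" solve with u frozen
        u_new = B * T          # "mechanical" solve with T frozen
        if abs(u_new - u) < TOL:
            return it
        u = u_new
    return None

def tight():
    """Newton: drive both residuals down simultaneously using the full Jacobian."""
    x = np.zeros(2)            # x = [T, u]
    jac = np.array([[1.0, -A], [-B, 1.0]])
    for it in range(1, 1000):
        res = np.array([x[0] - 1.0 - A * x[1], x[1] - B * x[0]])
        if np.abs(res).max() < TOL:
            return it
        x -= np.linalg.solve(jac, res)
    return None

print(f"Picard iterations: {loose()}, Newton iterations: {tight()}")
```

    Pushing A*B toward 1 (the analogue of a poorly conducting gap tightening the interdependence) makes the Picard contraction factor approach 1 and its iteration count diverge, while Newton is unaffected for this linear toy.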

  11. Results of Small-scale Solid Rocket Combustion Simulator testing at Marshall Space Flight Center

    NASA Astrophysics Data System (ADS)

    Goldberg, Benjamin E.; Cook, Jerry

    1993-06-01

    The Small-scale Solid Rocket Combustion Simulator (SSRCS) program was established at the Marshall Space Flight Center (MSFC) and used a government/industry team consisting of Hercules Aerospace Corporation, Aerotherm Corporation, United Technology Chemical Systems Division, Thiokol Corporation and MSFC personnel to study the feasibility of simulating the combustion species, temperatures and flow fields of a conventional solid rocket motor (SRM) with a versatile simulator system. The SSRCS design is based on hybrid rocket motor principles. The simulator uses a solid fuel and a gaseous oxidizer. Verification of the feasibility of an SSRCS system as a test bed was completed using flow field and system analyses, as well as empirical test data. A total of 27 hot firings of a subscale SSRCS motor were conducted at MSFC. Testing under the SSRCS program was completed in October 1992. This paper, a compilation of reports from the above team members and additional analysis of the instrumentation results, discusses the final results of the analyses and test programs.

  12. Results from Tight and Loose Coupled Multiphysics in Nuclear Fuels Performance Simulations using BISON

    SciTech Connect

    S. R. Novascone; B. W. Spencer; D. Andrs; R. L. Williamson; J. D. Hales; D. M. Perez

    2013-05-01

    The behavior of nuclear fuel in the reactor environment is affected by multiple physics, most notably heat conduction and solid mechanics, which can have a strong influence on each other. To provide credible solutions, a fuel performance simulation code must have the ability to obtain solutions for each of the physics, including coupling between them. Solution strategies for solving systems of coupled equations can be categorized as loosely coupled, where the individual physics are solved separately, keeping the solutions for the other physics fixed at each iteration, or tightly coupled, where the nonlinear solver simultaneously drives down the residual for each physics, taking into account the coupling between the physics in each nonlinear iteration. In this paper, we compare the performance of loosely and tightly coupled solution algorithms for thermomechanical problems involving coupled thermal and mechanical contact, which is a primary source of interdependence between thermal and mechanical solutions in fuel performance models. The results indicate that loosely coupled simulations require significantly more nonlinear iterations and may lead to convergence trouble when the thermal conductivity of the gap is too small. We also apply the tightly coupled solution strategy to a nuclear fuel simulation of an experiment in a test reactor. Studying the results from these simulations indicates that convergence may be problem dependent: there may be problems for which a loosely coupled approach converges where a tightly coupled approach does not, and vice versa.

  13. High-Alpha Research Vehicle Lateral-Directional Control Law Description, Analyses, and Simulation Results

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Murphy, Patrick C.; Lallman, Frederick J.; Hoffler, Keith D.; Bacon, Barton J.

    1998-01-01

    This report contains a description of a lateral-directional control law designed for the NASA High-Alpha Research Vehicle (HARV). The HARV is an F/A-18 aircraft modified to include a research flight computer, spin chute, and thrust-vectoring in the pitch and yaw axes. Two separate design tools, CRAFT and Pseudo Controls, were integrated to synthesize the lateral-directional control law. The report describes the control law and presents analyses and nonlinear simulation (batch and piloted) results. Linear analysis results include closed-loop eigenvalues, stability margins, robustness to changes in various plant parameters, and servo-elastic frequency responses. Step time responses from nonlinear batch simulation are presented and compared to design guidelines. Piloted simulation task scenarios, task guidelines, and pilot subjective ratings for the various maneuvers are discussed. Linear analysis shows that the control law meets the stability margin guidelines and is robust to stability and control parameter changes. Nonlinear batch simulation analysis shows the control law exhibits good performance and meets most of the design guidelines over the entire range of angle of attack. This control law (designated NASA-1A) was flight tested during the summer of 1994 at NASA Dryden Flight Research Center.

  14. Laboratory simulations of lidar returns from clouds - Experimental and numerical results

    NASA Astrophysics Data System (ADS)

    Zaccanti, Giovanni; Bruscaglioni, Piero; Gurioli, Massimo; Sansoni, Paola

    1993-03-01

    The experimental results of laboratory simulations of lidar returns from clouds are presented. Measurements were carried out on laboratory-scaled cloud models by using a picosecond laser and a streak-camera system. The turbid structures simulating clouds were suspensions of polystyrene spheres in water. The geometrical situation was similar to that of an actual lidar sounding a cloud 1000 m distant and with a thickness of 300 m. Measurements were repeated for different concentrations and different sizes of spheres. The results show how the effect of multiple scattering depends on the scattering coefficient and on the phase function of the diffusers. The depolarization introduced by multiple scattering was also investigated. The results were also compared with numerical results obtained by Monte Carlo simulations. Substantially good agreement between numerical and experimental results was found. The measurements showed the adequacy of modern electro-optical systems to study the features of multiple-scattering effects on lidar echoes from atmosphere or ocean by means of experiments on well-controlled laboratory-scaled models. This adequacy provides the possibility of studying the influence of different effects in the laboratory in well-controlled situations.

  15. Laboratory simulations of lidar returns from clouds: experimental and numerical results.

    PubMed

    Zaccanti, G; Bruscaglioni, P; Gurioli, M; Sansoni, P

    1993-03-20

    The experimental results of laboratory simulations of lidar returns from clouds are presented. Measurements were carried out on laboratory-scaled cloud models by using a picosecond laser and a streak-camera system. The turbid structures simulating clouds were suspensions of polystyrene spheres in water. The geometrical situation was similar to that of an actual lidar sounding a cloud 1000 m distant and with a thickness of 300 m. Measurements were repeated for different concentrations and different sizes of spheres. The results show how the effect of multiple scattering depends on the scattering coefficient and on the phase function of the diffusers. The depolarization introduced by multiple scattering was also investigated. The results were also compared with numerical results obtained by Monte Carlo simulations. Substantially good agreement between numerical and experimental results was found. The measurements showed the adequacy of modern electro-optical systems to study the features of multiple-scattering effects on lidar echoes from atmosphere or ocean by means of experiments on well-controlled laboratory-scaled models. This adequacy provides the possibility of studying the influence of different effects in the laboratory in well-controlled situations.

  16. Stable water isotope simulation by current land-surface schemes: Results of iPILPS Phase 1

    SciTech Connect

    Henderson-Sellers, A.; Fischer, M.; Aleinov, I.; McGuffie, K.; Riley, W.J.; Schmidt, G.A.; Sturm, K.; Yoshimura, K.; Irannejad, P.

    2005-10-31

    Phase 1 of isotopes in the Project for Intercomparison of Land-surface Parameterization Schemes (iPILPS) compares the simulation of two stable water isotopologues (¹H₂¹⁸O and ¹H²H¹⁶O) at the land-atmosphere interface. The simulations are off-line, with forcing from an isotopically enabled regional model for three locations selected to offer contrasting climates and ecotypes: an evergreen tropical forest, a sclerophyll eucalypt forest and a mixed deciduous wood. Here we report on the experimental framework, the quality control undertaken on the simulation results and the method of intercomparisons employed. The small number of available isotopically-enabled land-surface schemes (ILSSs) limits the drawing of strong conclusions but, despite this, there is shown to be benefit in undertaking this type of isotopic intercomparison. Although validation of isotopic simulations at the land surface must await more, and much more complete, observational campaigns, we find that the empirically-based Craig-Gordon parameterization (of isotopic fractionation during evaporation) gives adequately realistic isotopic simulations when incorporated in a wide range of land-surface codes. By introducing two new tools for understanding isotopic variability from the land surface, the Isotope Transfer Function and the iPILPS plot, we show that different hydrological parameterizations cause very different isotopic responses. We show that ILSS-simulated isotopic equilibrium is independent of the total water and energy budget (with respect to both equilibration time and state), but interestingly the partitioning of available energy and water is a function of the models' complexity.

  17. Simulated Driving Assessment (SDA) for Teen Drivers: Results from a Validation Study

    PubMed Central

    McDonald, Catherine C.; Kandadai, Venk; Loeb, Helen; Seacrist, Thomas S.; Lee, Yi-Ching; Winston, Zachary; Winston, Flaura K.

    2015-01-01

    Background Driver error and inadequate skill are common critical reasons for novice teen driver crashes, yet few validated, standardized assessments of teen driving skills exist. The purpose of this study was to evaluate the construct and criterion validity of a newly developed Simulated Driving Assessment (SDA) for novice teen drivers. Methods The SDA's 35-minute simulated drive incorporates 22 variations of the most common teen driver crash configurations. Driving performance was compared for 21 inexperienced teens (age 16–17 years, provisional license ≤90 days) and 17 experienced adults (age 25–50 years, license ≥5 years, drove ≥100 miles per week, no collisions or moving violations ≤3 years). SDA driving performance (Error Score) was based on driving safety measures derived from simulator and eye-tracking data. Negative driving outcomes included simulated collisions or run-off-the-road incidents. A professional driving evaluator/instructor reviewed videos of SDA performance (DEI Score). Results The SDA demonstrated construct validity: 1.) Teens had a higher Error Score than adults (30 vs. 13, p=0.02); 2.) For each additional error committed, the relative risk of a participant's propensity for a simulated negative driving outcome increased by 8% (95% CI: 1.05–1.10, p<0.01). The SDA demonstrated criterion validity: Error Score was correlated with DEI Score (r=−0.66, p<0.001). Conclusions This study supports the concept of validated simulated driving tests like the SDA to assess novice driver skill in complex and hazardous driving scenarios. The SDA, as a standard protocol to evaluate teen driver performance, has the potential to facilitate screening and assessment of teen driving readiness and could be used to guide targeted skill training. PMID:25740939

  18. A computerised third molar surgery simulator--results of supervision by different professionals.

    PubMed

    Rosen, A; Eliassi, S; Fors, U; Sallnäs, E-L; Forsslund, J; Sejersen, R; Lund, B

    2014-05-01

    The purpose of the study was to investigate which supervisory approach afforded the most efficient learning method for undergraduate students in oral and maxillofacial surgery (OMS) using a computerised third molar surgery simulator. Fifth year dental students participated voluntarily in a randomised experimental study using the simulator. The amount of time required and the number of trials used by each student were evaluated as a measure of skills development. Students had the opportunity to practise the procedure until no further visible improvements were achieved. The study assessed four different types of supervision to guide the students: the first group was supported by a teacher/specialist in OMS, the second by a teaching assistant, the third group practised without any supervision, and the fourth received help from a simulator technician/engineer. A protocol describing assessment criteria was designed for this purpose, and a questionnaire was completed by all participating students after the study. The average number of attempts required to virtually remove a third molar tooth in the simulator was 1.44 for the group supervised by an OMS teacher; 1.5 for those supervised by a teaching assistant; 2.8 for those who had no supervision; and 3.6 when support was provided only by a simulator technician. The results showed that the students learned most efficiently when they were helped by an OMS teacher or a teaching assistant. From a time and cost-effectiveness perspective, supervision by a teaching assistant for a third molar surgery simulator would be the optimal choice.

  19. SU-D-16A-04: Accuracy of Treatment Plan TCP and NTCP Values as Determined Via Treatment Course Delivery Simulations

    SciTech Connect

    Siebers, J; Xu, H; Gordon, J

    2014-06-01

    Purpose: To determine if tumor control probability (TCP) and normal tissue complication probability (NTCP) values computed on the treatment planning image are representative of TCP/NTCP distributions resulting from probable positioning variations encountered during external-beam radiotherapy. Methods: We compare TCP/NTCP as typically computed on the planning PTV/OARs with distributions of those parameters computed for CTV/OARs via treatment delivery simulations which include the effect of patient organ deformations for a group of 19 prostate IMRT pseudocases. Planning objectives specified 78 Gy to PTV1=prostate CTV+5 mm margin, 66 Gy to PTV2=seminal vesicles+8 mm margin, and multiple bladder/rectum OAR objectives to achieve typical clinical OAR sparing. TCPs were computed using the Poisson model while NTCPs used the Lyman-Kutcher-Burman model. For each patient, 1000 30-fraction virtual treatment courses were simulated, with each fractional pseudo-time-of-treatment anatomy sampled from a principal component analysis patient deformation model. Dose for each virtual treatment course was determined via deformable summation of dose from the individual fractions. CTV-TCP/OAR-NTCP values were computed for each treatment course, statistically analyzed, and compared with the planning PTV-TCP/OAR-NTCP values. Results: Mean TCP from the simulations differed by <1% from planned TCP for 18/19 patients; 1/19 differed by 1.7%. Mean bladder NTCP differed from the planned NTCP by >5% for 12/19 patients and >10% for 4/19 patients. Similarly, mean rectum NTCP differed by >5% for 12/19 patients, >10% for 4/19 patients. Both mean bladder and mean rectum NTCP differed by >5% for 10/19 patients and by >10% for 2/19 patients. For several patients, planned NTCP was less than the minimum or more than the maximum from the treatment course simulations. Conclusion: Treatment course simulations yield TCP values that are similar to planned values, while OAR NTCPs differ significantly, indicating the
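
    The two dose-response models named above can be sketched as follows (a simplified illustration with hypothetical parameters: the Poisson TCP uses a linear cell-kill term only, and the LKB NTCP takes a precomputed dose-volume histogram rather than a full dose grid):

```python
import numpy as np
from math import erf, sqrt

def poisson_tcp(dose, clonogen_density, alpha, voxel_volume):
    """Poisson TCP for a voxelized dose distribution:
    TCP = exp(-expected number of surviving clonogens).
    Uses a linear cell-kill term exp(-alpha*D) for simplicity."""
    surviving = clonogen_density * voxel_volume * np.exp(-alpha * np.asarray(dose, float))
    return float(np.exp(-surviving.sum()))

def lkb_ntcp(dose, volumes, n, m, td50):
    """Lyman-Kutcher-Burman NTCP from a (dose, fractional volume) histogram:
    NTCP = Phi((gEUD - TD50) / (m * TD50)), gEUD = (sum v_i * D_i^(1/n))^n."""
    dose = np.asarray(dose, float)
    v = np.asarray(volumes, float)
    v = v / v.sum()                                # normalize to fractional volumes
    geud = (v * dose ** (1.0 / n)).sum() ** n      # generalized equivalent uniform dose
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))        # standard normal CDF
```

    By construction, a uniform dose equal to TD50 yields NTCP = 0.5, and TCP rises steeply with dose once the surviving-clonogen count drops below order one.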

  20. Analysis of formation pressure test results in the Mount Elbert methane hydrate reservoir through numerical simulation

    USGS Publications Warehouse

    Kurihara, M.; Sato, A.; Funatsu, K.; Ouchi, H.; Masuda, Y.; Narita, H.; Collett, T.S.

    2011-01-01

    Targeting the methane hydrate (MH) bearing units C and D at the Mount Elbert prospect on the Alaska North Slope, four MDT (Modular Dynamic Formation Tester) tests were conducted in February 2007. The C2 MDT test was selected for history matching simulation in the MH Simulator Code Comparison Study. Through history matching simulation, the physical and chemical properties of unit C were adjusted, which suggested the most likely reservoir properties of this unit. Based on these tuned properties, numerical models replicating a "Mount Elbert C2 zone like reservoir", a "PBU L-Pad like reservoir" and a "PBU L-Pad down dip like reservoir" were constructed. The long term production performances of wells in these reservoirs were then forecasted assuming MH dissociation and production by the methods of depressurization, combination of depressurization and wellbore heating, and hot water huff and puff. The predicted cumulative gas production ranges from 2.16×10⁶ m³/well to 8.22×10⁸ m³/well, depending mainly on the initial temperature of the reservoir and on the production method. This paper describes the details of modeling and history matching simulation. This paper also presents the results of the examinations of the effects of reservoir properties on MH dissociation and production performances under the application of the depressurization and thermal methods. © 2010 Elsevier Ltd.

  1. Simulating Late Ordovician deep ocean O2 with an earth system climate model. Preliminary results.

    NASA Astrophysics Data System (ADS)

    D'Amico, Daniel F.; Montenegro, Alvaro

    2016-04-01

    The geological record provides several lines of evidence that point to the occurrence of widespread and long lasting deep ocean anoxia during the Late Ordovician, between about 460-440 million years ago (Ma). While a series of potential causes have been proposed, there is still large uncertainty regarding how the low oxygen levels came about. Here we use the University of Victoria Earth System Climate Model (UVic ESCM) with Late Ordovician paleogeography to verify the impacts of paleogeography, bottom topography, nutrient loading and cycling, and atmospheric concentrations of O2 and CO2 on deep ocean oxygen concentration during the period of interest. Preliminary results so far are based on 10 simulations (some still ongoing) covering the following parameter space: CO2 concentrations of 2240 to 3780 ppmv (~8x to 13x pre-industrial), atmospheric O2 ranging from 8% to 12% by volume, oceanic PO4 and NO3 loading from present day to double present day, and reductions in wind speed of 50% and 30% (winds are provided as a boundary condition in the UVic ESCM). For most simulations the deep ocean remains well ventilated. While simulations with higher CO2, lower atmospheric O2 and greater nutrient loading generate lower oxygen concentration in the deep ocean, bottom anoxia - here defined as concentrations <10 μmol L⁻¹ - in these cases is restricted to the high-latitude northern hemisphere. Further simulations will address the impact of greater nutrient loads and bottom topography on deep ocean oxygen concentrations.

  2. Recent results from the GISS model of the global atmosphere. [circulation simulation for weather forecasting

    NASA Technical Reports Server (NTRS)

    Somerville, R. C. J.

    1975-01-01

    Large numerical atmospheric circulation models are in increasingly widespread use both for operational weather forecasting and for meteorological research. The results presented here are from a model developed at the Goddard Institute for Space Studies (GISS) and described in detail by Somerville et al. (1974). This model is representative of a class of models, recently surveyed by the Global Atmospheric Research Program (1974), designed to simulate the time-dependent, three-dimensional, large-scale dynamics of the earth's atmosphere.

  3. Trace the Denmark Strait Overflow Water in an Eddy-Resolving Atlantic Simulation: Some Preliminary Results

    DTIC Science & Technology

    2013-05-01

    Trace the Denmark Strait overflow water in an eddy-resolving Atlantic simulation: some preliminary results. Xiaobiao Xu (COAPS/FSU), Alan Wallcraft (NRL/SSC), Eric Chassignet (COAPS/FSU). Thanks: Peter Rhines (UW) and William Schmitz. Presented May 21-23 at the Layered Ocean Modeling Workshop, Ann Arbor, MI.

  4. Simulation of human atherosclerotic femoral plaque tissue: the influence of plaque material model on numerical results

    PubMed Central

    2015-01-01

    Background Due to the limited number of experimental studies that mechanically characterise human atherosclerotic plaque tissue from the femoral arteries, a recent trend has emerged in the current literature whereby one set of material data based on aortic plaque tissue is employed to numerically represent diseased femoral artery tissue. This study aims to generate novel vessel-appropriate material models for femoral plaque tissue and assess the influence of using material models based on experimental data generated from aortic plaque testing to represent diseased femoral arterial tissue. Methods Novel material models based on experimental data generated from testing of atherosclerotic femoral artery tissue are developed, and a computational analysis of the revascularisation of a quarter model of an idealised diseased femoral artery from a 90% diameter stenosis to a 10% diameter stenosis is performed using these novel material models. The simulation is also performed using material models based on experimental data obtained from aortic plaque testing in order to examine the effect of employing vessel-appropriate material models versus those currently employed in the literature to represent femoral plaque tissue. Results Simulations that employ material models based on atherosclerotic aortic tissue exhibit much higher maximum principal stresses within the plaque than simulations that employ material models based on atherosclerotic femoral tissue. Specifically, employing a material model based on calcified aortic tissue, instead of one based on heavily calcified femoral tissue, to represent diseased femoral arterial vessels results in a 487-fold increase in maximum principal stress within the plaque at a depth of 0.8 mm from the lumen. Conclusions Large differences are induced on numerical results as a consequence of employing material models based on aortic plaque, in place of material models based on femoral plaque, to represent a diseased femoral vessel. Due to these large

  5. Numerical simulations of soft and hard turbulence - Preliminary results for two-dimensional convection

    NASA Technical Reports Server (NTRS)

    Deluca, E. E.; Werne, J.; Rosner, R.; Cattaneo, F.

    1990-01-01

    Results on the transition from soft to hard turbulence in simulations of two-dimensional Boussinesq convection are reported. The computed probability densities for temperature fluctuations are exponential in form in both soft and hard turbulence, unlike what is observed in experiments. In contrast, a change is obtained in the Nusselt number scaling on Rayleigh number in good agreement with the three-dimensional experiments.
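
    The Nusselt-number scaling mentioned above is commonly summarized by the exponent beta in Nu ~ Ra**beta, obtained as the slope of a log-log fit. A minimal sketch of such a fit (with synthetic data, not the paper's; the 2/7 exponent in the example is simply the conventional hard-turbulence value used for illustration):

```python
import numpy as np

def fit_scaling_exponent(ra, nu):
    """Least-squares slope of log(Nu) versus log(Ra), i.e. beta in Nu ~ Ra**beta."""
    slope, _intercept = np.polyfit(np.log(ra), np.log(nu), 1)
    return slope
```

    A transition between soft and hard turbulence then shows up as a change in the fitted slope between the two Rayleigh-number ranges.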

  6. PRELIMINARY RESULTS FROM A SIMULATION OF QUENCHED QCD WITH OVERLAP FERMIONS ON A LARGE LATTICE.

    SciTech Connect

    Berruto, F.; Garron, N.; Hoelbling, D.; Lellouch, L.; Rebbi, C.; Shoresh, N.

    2003-07-15

    We simulate quenched QCD with the overlap Dirac operator. We work with the Wilson gauge action at β = 6 on an 18³ × 64 lattice. We calculate quark propagators for a single source point and quark masses ranging from am_q = 0.03 to 0.75. We present here preliminary results based on the propagators for 60 gauge field configurations.

  7. Urinary Biomarker Panel to Improve Accuracy in Predicting Prostate Biopsy Result in Chinese Men with PSA 4–10 ng/mL

    PubMed Central

    Zhou, Yongqiang; Li, Yun; Li, Xiangnan

    2017-01-01

    This study aims to evaluate the effectiveness and clinical performance of a panel of urinary biomarkers to diagnose prostate cancer (PCa) in Chinese men with PSA levels between 4 and 10 ng/mL. A total of 122 patients with PSA levels between 4 and 10 ng/mL who underwent consecutive prostate biopsy at three hospitals in China were recruited. First-catch urine samples were collected after an attentive prostate massage. Urinary mRNA levels were measured by quantitative real-time polymerase chain reaction (qRT-PCR). The predictive accuracy of these biomarkers and prediction models was assessed by the area under the curve (AUC) of the receiver-operating characteristic (ROC) curve. The diagnostic accuracy of PCA3, PSGR, and MALAT-1 was superior to that of PSA. PCA3 performed best, with an AUC of 0.734 (95% CI: 0.641, 0.828) followed by MALAT-1 with an AUC of 0.727 (95% CI: 0.625, 0.829) and PSGR with an AUC of 0.666 (95% CI: 0.575, 0.749). The diagnostic panel with age, prostate volume, % fPSA, PCA3 score, PSGR score, and MALAT-1 score yielded an AUC of 0.857 (95% CI: 0.780, 0.933). At a threshold probability of 20%, 47.2% of unnecessary biopsies may be avoided whereas only 6.2% of PCa cases may be missed. This urinary panel may improve the current diagnostic modality in Chinese men with PSA levels between 4 and 10 ng/mL. PMID:28293631
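
    An AUC like those reported above is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive (biopsy-confirmed PCa) case scores higher than a randomly chosen negative case, with ties counting one half. A minimal sketch with hypothetical scores, not the study's data:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: fraction of (positive, negative)
    pairs in which the positive case outranks the negative (ties count 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    Perfectly separated scores give an AUC of 1.0, and an uninformative marker gives 0.5, which is why the panel's 0.857 is read as a substantial improvement over PSA alone.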

  8. Cost and accuracy comparison between the diffuse interface method and the geometric volume of fluid method for simulating two-phase flows

    NASA Astrophysics Data System (ADS)

    Mirjalili, Shahab; Ivey, Christopher Blake; Mani, Ali

    2016-11-01

    The diffuse interface (DI) and volume of fluid (VOF) methods are mass conserving front capturing schemes which can handle large interfacial topology changes in realistic two phase flows. The DI method is a conservative phase field method that tracks an interface of finite thickness spread over a few cells and does not require reinitialization. In addition to having the desirable properties of level set methods for naturally capturing curvature and surface tension forces, the model conserves mass continuously and discretely. The VOF method, which tracks the fractional tagged volume in a cell, achieves discrete conservation through costly geometric reconstructions of the interface and the fluxes. Both methods, however, suffer from inaccuracies in the calculation of curvature and surface tension forces. We present a quantitative comparison of these methods in terms of their accuracy, convergence rate, memory, and computational cost using canonical 2D two-phase test cases: damped surface wave, oscillating drop, equilibrium static drop, and dense moving drop. We further compared the models in their ability to handle thin films by looking at the impact of a water drop onto a deep water pool. Considering these results, we suggest qualitative guidelines for using the DI and VOF methods. Supported by ONR.
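
    The discrete mass conservation both schemes advertise can be seen in one dimension: with a conservative finite-volume update, face fluxes telescope, so the total phase mass changes only by round-off. Below is a minimal sketch (first-order upwind advection of a tanh interface profile on a periodic domain, illustrative parameters only; far simpler than either the DI or VOF discretization):

```python
import numpy as np

# 1D periodic advection of a diffuse (tanh) interface profile with a
# conservative upwind finite-volume update.
N, L, u, cfl = 200, 1.0, 1.0, 0.5
dx = L / N
dt = cfl * dx / u
x = (np.arange(N) + 0.5) * dx
eps = 2.0 * dx                                               # interface half-thickness
c = 0.5 * (np.tanh((0.15 - np.abs(x - 0.35)) / eps) + 1.0)   # smooth "drop" in [0, 1]

mass0 = c.sum() * dx
for _ in range(100):
    flux = u * c                                 # upwind (u > 0): right-face flux of cell i
    c = c - dt / dx * (flux - np.roll(flux, 1))  # fluxes telescope -> mass conserved
mass = c.sum() * dx
```

    Because each face flux appears once with a plus sign and once with a minus sign in the global sum, `mass` matches `mass0` to round-off, and the monotone upwind update keeps the phase indicator bounded in [0, 1].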

  9. SU-E-T-795: Validations of Dose Calculation Accuracy of Acuros BV in High-Dose-Rate (HDR) Brachytherapy with a Shielded Cylinder Applicator Using Monte Carlo Simulation

    SciTech Connect

    Li, Y; Tian, Z; Hrycushko, B; Jiang, S; Jia, X

    2015-06-15

    Purpose: Acuros BV has become available to perform accurate dose calculations in high-dose-rate (HDR) brachytherapy with phantom heterogeneity considered by solving the Boltzmann transport equation. In this work, we performed validation studies regarding the dose calculation accuracy of Acuros BV in cases with a shielded cylinder applicator using Monte Carlo (MC) simulations. Methods: Fifteen cases were considered in our studies, covering five different diameters of the applicator and three different shielding degrees. For each case, a digital phantom was created in Varian BrachyVision with the cylinder applicator inserted in the middle of a large water phantom. A treatment plan with eight dwell positions was generated for these fifteen cases. Dose calculations were performed with Acuros BV. We then generated a voxelized phantom of the same geometry, and the materials were modeled according to the vendor’s specifications. MC dose calculations were then performed using our in-house developed fast MC dose engine for HDR brachytherapy (gBMC) on a GPU platform, which is able to simulate both photon transport and electron transport in a voxelized geometry. A phase-space file for the Ir-192 HDR source was used as a source model for MC simulations. Results: Satisfactory agreements between the dose distributions calculated by Acuros BV and those calculated by gBMC were observed in all cases. Quantitatively, we computed point-wise dose difference within the region that receives a dose higher than 10% of the reference dose, defined to be the dose at 5mm outward away from the applicator surface. The mean dose difference was ∼0.45%–0.51% and the 95-percentile maximum difference was ∼1.24%–1.47%. Conclusion: Acuros BV is able to accurately perform dose calculations in HDR brachytherapy with a shielded cylinder applicator.
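
    The agreement metrics quoted above (mean and 95th-percentile point-wise dose difference inside the region above 10% of the reference dose) can be sketched generically. Here the reference dose is taken as the grid maximum rather than the abstract's dose at 5 mm from the applicator surface, so this function is an illustration of the metric's shape, not the authors' exact definition:

```python
import numpy as np

def dose_diff_stats(d_ref, d_test, threshold_frac=0.10):
    """Mean and 95th-percentile point-wise percent dose difference, restricted
    to voxels receiving more than threshold_frac of the reference dose. The
    reference dose is taken as the maximum of d_ref (a stand-in for a clinical
    reference point)."""
    d_ref = np.asarray(d_ref, float)
    d_test = np.asarray(d_test, float)
    ref_dose = d_ref.max()
    mask = d_ref > threshold_frac * ref_dose
    diff = 100.0 * np.abs(d_test[mask] - d_ref[mask]) / ref_dose
    return float(diff.mean()), float(np.percentile(diff, 95))
```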

  10. Obtaining Identical Results on Varying Numbers of Processors In Domain Decomposed particle Monte Carlo Simulations

    SciTech Connect

    Gentile, N A; Kalos, M H; Brunner, T A

    2005-03-22

    Domain decomposed Monte Carlo codes, like other domain-decomposed codes, are difficult to debug. Domain decomposition is prone to error, and interactions between the domain decomposition code and the rest of the algorithm often produces subtle bugs. These bugs are particularly difficult to find in a Monte Carlo algorithm, in which the results have statistical noise. Variations in the results due to statistical noise can mask errors when comparing the results to other simulations or analytic results. If a code can get the same result on one domain as on many, debugging the whole code is easier. This reproducibility property is also desirable when comparing results done on different numbers of processors and domains. We describe how reproducibility, to machine precision, is obtained on different numbers of domains in an Implicit Monte Carlo photonics code.
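
    One common way to obtain the reproducibility property described above is to tie each particle's random-number stream to its global particle ID rather than to the processor that happens to own it, and to reduce the tallies in a fixed order. A toy sketch (not the Implicit Monte Carlo code's actual scheme):

```python
import numpy as np

def track_particle(pid, n_steps=50):
    """Toy 'history': a random-walk tally for one particle. The RNG is seeded
    by the particle's global ID, not by the owning processor or domain."""
    rng = np.random.default_rng(pid)
    return rng.standard_normal(n_steps).sum()

def run(n_particles, n_domains):
    """Deal particles to domains round-robin, track each, then reduce the
    per-particle tallies in a fixed (particle-ID) order so the global sum is
    bitwise identical for any domain count."""
    results = np.empty(n_particles)
    for dom in range(n_domains):                 # each "domain" tracks its share
        for pid in range(dom, n_particles, n_domains):
            results[pid] = track_particle(pid)
    return results.sum()
```

    Here run(64, 1), run(64, 4) and run(64, 7) return exactly the same number; dropping either ingredient (per-particle seeding, or the fixed-order reduction) reintroduces a dependence on the number of domains.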

  11. Spatial resolution effect on the simulated results of watershed scale models

    NASA Astrophysics Data System (ADS)

    Epelde, Ane; Antiguedad, Iñaki; Brito, David; Jauch, Eduardo; Neves, Ramiro; Sauvage, Sabine; Sánchez-Pérez, José Miguel

    2016-04-01

    Numerical models are useful tools for water resources planning, development and management. Their use is currently spreading, and more complex modeling systems are being employed for these purposes. The added complexity allows the simulation of water quality related processes. Nevertheless, it implies a considerable increase in the computational requirements, which is usually compensated in the models by a decrease in their spatial resolution. The spatial resolution of the models is known to affect the simulation of hydrological processes and, therefore, also the nutrient exportation and cycling processes. However, the implication of the spatial resolution for the simulated results is rarely assessed. In this study, we examine the effect of the change in grid size on the integrated and distributed results of the Alegria River watershed model (Basque Country, Northern Spain). Variables such as discharge, water table level, relative water content of soils, nitrogen exportation and denitrification are analyzed in order to quantify the uncertainty involved in the spatial discretization of watershed scale models. This is an aspect that needs to be carefully considered when numerical models are employed in watershed management studies or quality programs.

  12. Comparisons of Observations with Results from 3D Simulations and Implications for Predictions of Ozone Recovery

    NASA Technical Reports Server (NTRS)

    Douglass, Anne R.; Stolarski, Richard S.; Strahan, Susan E.; Steenrod, Stephen D.; Polarsky, Brian C.

    2004-01-01

    Although chemistry and transport models (CTMs) include the same basic elements (photochemical mechanism and solver, photolysis scheme, meteorological fields, numerical transport scheme), they produce different results for the future recovery of stratospheric ozone as chlorofluorocarbons decrease. Three simulations will be contrasted: the Global Modeling Initiative (GMI) CTM driven by a single year's winds from a general circulation model; the GMI CTM driven by a single year's winds from a data assimilation system; and the NASA GSFC CTM driven by winds from a multi-year GCM simulation. CTM results for ozone and other constituents will be compared with each other and with observations from ground-based and satellite platforms to address the following: Does the simulated ozone tendency and its latitude, altitude and seasonal dependence match that derived from observations? Does the balance among photochemical processes match that expected from observations? Can the differences in prediction for ozone recovery be anticipated from these comparisons?

  13. Molecular simulation of aqueous electrolytes: water chemical potential results and Gibbs-Duhem equation consistency tests.

    PubMed

    Moučka, Filip; Nezbeda, Ivo; Smith, William R

    2013-09-28

    This paper deals with molecular simulation of the chemical potentials in aqueous electrolyte solutions for the water solvent and its relationship to chemical potential simulation results for the electrolyte solute. We use the Gibbs-Duhem equation linking the concentration dependence of these quantities to test the thermodynamic consistency of separate calculations of each quantity. We consider aqueous NaCl solutions at ambient conditions, using the standard SPC/E force field for water and the Joung-Cheatham force field for the electrolyte. We calculate the water chemical potential using the osmotic ensemble Monte Carlo algorithm by varying the number of water molecules at a constant amount of solute. We demonstrate numerical consistency of these results in terms of the Gibbs-Duhem equation in conjunction with our previous calculations of the electrolyte chemical potential. We present the chemical potential vs molality curves for both solvent and solute in the form of appropriately chosen analytical equations fitted to the simulation data. As a byproduct, in the context of the force fields considered, we also obtain values for the Henry convention standard molar chemical potential for aqueous NaCl using molality as the concentration variable and for the chemical potential of pure SPC/E water. These values are in reasonable agreement with the experimental values.
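    The Gibbs-Duhem consistency test used here can be illustrated numerically. At constant T and P, per kilogram of water, (1/M_w)·dμ_w/dm + m·dμ_s/dm = 0, where m is the molality. The sketch below checks this relation by finite differences using idealized chemical-potential curves (osmotic and mean activity coefficients set to 1) standing in for the fitted simulation data; the functional forms are illustrative, not those of the paper.

    ```python
    import numpy as np

    R, T = 8.314, 298.15     # gas constant (J/(mol K)), temperature (K)
    M_w = 0.018015           # molar mass of water (kg/mol)
    nu = 2                   # ions per NaCl formula unit

    # Idealized chemical potentials vs molality m (mol/kg), standing in for
    # analytical fits to simulation data (ideal-solution coefficients = 1).
    def mu_water(m):         # relative to pure water, osmotic coefficient = 1
        return -R * T * nu * M_w * m

    def mu_solute(m):        # relative to standard state, activity coeff = 1
        return nu * R * T * np.log(m)

    m = np.linspace(0.1, 6.0, 200)
    h = 1e-6
    dmu_w = (mu_water(m + h) - mu_water(m - h)) / (2 * h)
    dmu_s = (mu_solute(m + h) - mu_solute(m - h)) / (2 * h)

    # Gibbs-Duhem at constant T and P, per kilogram of water:
    #   (1 / M_w) * dmu_w/dm + m * dmu_s/dm = 0
    residual = dmu_w / M_w + m * dmu_s
    assert np.max(np.abs(residual)) < 1e-3 * R * T
    ```

    Applied to separately computed solvent and solute chemical potentials, a residual of this kind quantifies how thermodynamically consistent the two sets of simulation results are.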

  14. Result-driven exploration of simulation parameter spaces for visual effects design.

    PubMed

    Bruckner, Stefan; Möller, Torsten

    2010-01-01

    Graphics artists commonly employ physically-based simulation for the generation of effects such as smoke, explosions, and similar phenomena. The task of finding the correct parameters for a desired result, however, is difficult and time-consuming as current tools provide little to no guidance. In this paper, we present a new approach for the visual exploration of such parameter spaces. Given a three-dimensional scene description, we utilize sampling and spatio-temporal clustering techniques to generate a concise overview of the achievable variations and their temporal evolution. Our visualization system then allows the user to explore the simulation space in a goal-oriented manner. Animation sequences with a set of desired characteristics can be composed using a novel search-by-example approach and interactive direct volume rendering is employed to provide instant visual feedback. A user study was performed to evaluate the applicability of our system in production use.

  15. Preliminary Analysis and Simulation Results of Microwave Transmission Through an Electron Cloud

    SciTech Connect

    Sonnad, Kiran; Sonnad, Kiran; Furman, Miguel; Veitzer, Seth; Stoltz, Peter; Cary, John

    2007-01-12

    The electromagnetic particle-in-cell (PIC) code VORPAL is being used to simulate the transmission of microwave radiation through an electron cloud. The results so far show good agreement with theory for simple cases. The study has been motivated by previous experimental work on this problem at the CERN SPS [1], experiments at the PEP-II Low Energy Ring (LER) at SLAC [4], and proposed experiments at the Fermilab Main Injector (MI). With experimental observation of quantities such as amplitude, phase and spectrum of the output microwave radiation, and with support from simulations for different cloud densities and applied magnetic fields, this technique can prove to be a useful probe for assessing the presence as well as the density of electron clouds.

  16. Simulation and experimental results of optical and thermal modeling of gold nanoshells.

    PubMed

    Ghazanfari, Lida; Khosroshahi, Mohammad E

    2014-09-01

    This paper proposes a generalized method for optical and thermal modeling of synthesized magneto-optical nanoshells (MNSs) for biomedical applications. Superparamagnetic magnetite nanoparticles with a diameter of 9.5 ± 1.4 nm are fabricated using the co-precipitation method and subsequently covered with a thin layer of gold to obtain 15.8 ± 3.5 nm MNSs. Simulations and detailed analyses are carried out for different nanoshell geometries to achieve maximum heat power. Structural, magnetic and optical properties of the MNSs are assessed using a vibrating sample magnetometer (VSM), X-ray diffraction (XRD), UV-VIS spectrophotometry, dynamic light scattering (DLS), and transmission electron microscopy (TEM). The magnetic saturation of the synthesized magnetite nanoparticles is reduced from 46.94 to 11.98 emu/g after coating with gold. The performance of the proposed optical-thermal modeling technique is verified by simulation and experimental results.

  17. RESULTS OF CESIUM MASS TRANSFER TESTING FOR NEXT GENERATION SOLVENT WITH HANFORD WASTE SIMULANT AP-101

    SciTech Connect

    Peters, T.; Washington, A.; Fink, S.

    2011-09-27

    SRNL has performed an Extraction, Scrub, Strip (ESS) test using the next generation solvent and an AP-101 Hanford waste simulant. The results indicate that the next generation solvent (MG solvent) has adequate extraction behavior even in the face of a massive excess of potassium. The stripping results indicate poorer behavior, but this may be due to inadequate method detection limits. SRNL recommends further testing using hot tank waste or spiked simulant to provide better detection limits. Furthermore, strong consideration should be given to performing an actual-waste, or spiked-waste, demonstration using the 2-cm contactor bank. The Savannah River Site currently utilizes a solvent extraction technology to selectively remove cesium from tank waste at the Multi-Component Solvent Extraction unit (MCU). This solvent consists of four components: the extractant, BoBCalixC6; a modifier, Cs-7B; a suppressor, trioctylamine; and a diluent, Isopar L™. This solvent has been used to successfully decontaminate over 2 million gallons of tank waste. However, recent work at Oak Ridge National Laboratory (ORNL), Argonne National Laboratory (ANL), and Savannah River National Laboratory (SRNL) has provided a basis to implement an improved solvent blend. This new solvent blend, referred to as Next Generation Solvent (NGS), is similar to the current solvent and also contains four components: the extractant, MAXCalix; a modifier, Cs-7B; a suppressor, LIX-79™ guanidine; and a diluent, Isopar L™. Testing to date has shown that this 'Next Generation' solvent promises to provide far superior cesium removal efficiencies and, furthermore, is theorized to perform adequately even in waste with high potassium concentrations, such that it could be used for processing Hanford wastes. SRNL has performed a cesium mass transfer test to confirm this behavior, using a simulant designed to represent Hanford AP-101 waste.

  18. Computer simulation applied to jewellery casting: challenges, results and future possibilities

    NASA Astrophysics Data System (ADS)

    Tiberto, Dario; Klotz, Ulrich E.

    2012-07-01

    Computer simulation has been successfully applied in the past to several industrial processes (such as lost-foam and die casting) by larger foundries and direct automotive suppliers, while in the jewellery sector it is not widespread and has been tested mainly in the context of research projects. On the basis of a recently concluded EU project, the authors here present the simulation of investment casting using two different software packages: one for the filling step (Flow-3D®), the other for solidification (PoligonSoft®). Material characterization work was conducted to obtain the necessary physical parameters for the investment (used for the mold) and for the gold alloys (through thermal analysis). A series of 18k and 14k gold alloys were cast in standard set-ups with embedded thermocouples for temperature measurement, providing benchmark trials to compare and validate the software output in terms of the cooling curves for defined test parts. Results obtained with the simulation included the reduction of micro-porosity through an optimization of the feeding channels for controlled solidification of the metal; examples of the predicted porosity in the cast parts (with metallographic comparison) will be shown. Considerations on the feasibility of applying casting simulation in the jewellery sector will be presented, underlining the importance of the software parametrization necessary to obtain reliable results and the discrepancies found in the experimental comparison. In addition, an overview of further possibilities for applying CFD in jewellery casting, such as modeling of the centrifugal and tilting processes, will be presented.

  19. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

    SciTech Connect

    Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L.

    1995-10-01

    This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations accurately reflects operator performance during actual accident conditions was outside the scope of this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed, and it found a statistically significant factor-of-two bias on average.
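    The first bias test, comparing the average ASEP HEP with the observed failure fraction, can be sketched as an exact binomial test. The numbers below reuse the reported task counts (4071 tasks, 45 failures), but the average HEP value is hypothetical, set to twice the observed failure fraction purely to illustrate a factor-of-two bias.

    ```python
    from math import exp, lgamma, log

    def log_binom_pmf(n, k, p):
        """Log of the binomial probability mass function (stable for large n)."""
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log(1 - p))

    def binom_two_sided_p(n, k, p):
        """Exact two-sided p-value: total probability of outcomes no more
        likely than the observed k failures out of n tasks."""
        lk = log_binom_pmf(n, k, p)
        return sum(exp(l) for i in range(n + 1)
                   if (l := log_binom_pmf(n, i, p)) <= lk + 1e-9)

    # Reported counts; the average HEP here is hypothetical.
    n_tasks, n_failed = 4071, 45
    mean_hep = 2 * n_failed / n_tasks     # twice the observed fraction
    p_val = binom_two_sided_p(n_tasks, n_failed, mean_hep)
    assert p_val < 0.01   # a bias of this size would be highly significant
    ```

    With a sample this large, a predicted failure probability twice the observed rate is many standard deviations from the data, which is consistent with the report's finding that the factor-of-two bias is statistically significant.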

  20. A new model to simulate the Martian mesoscale and microscale atmospheric circulation: Validation and first results

    NASA Astrophysics Data System (ADS)

    Spiga, Aymeric; Forget, François

    2009-02-01

    associated dynamics: convective motions, overlying gravity waves, and dust devil-like vortices. Modeled temperature profiles are in satisfactory agreement with the Miniature Thermal Emission Spectrometer (Mini-TES) measurements. The ability of the model to transport tracers at regional scales is exemplified by the model's prediction for the altitude of the Tharsis topographical water ice clouds in the afternoon. Finally, a nighttime "warm ring" at the base of Olympus Mons is identified in the simulations, resulting from adiabatic warming by the intense downslope winds along the flanks of the volcano. The surface temperature enhancement reaches +20 K throughout the night. Such a phenomenon may have adversely affected the thermal inertia derivations in the region.

  1. SRG110 Stirling Generator Dynamic Simulator Vibration Test Results and Analysis Correlation

    NASA Technical Reports Server (NTRS)

    Suarez, Vicente J.; Lewandowski, Edward J.; Callahan, John

    2006-01-01

    The U.S. Department of Energy (DOE), Lockheed Martin (LM), and NASA Glenn Research Center (GRC) have been developing the Stirling Radioisotope Generator (SRG110) for use as a power system for space science missions. The launch environment enveloping potential missions results in a random input spectrum that is significantly higher than historical RPS launch levels and is a challenge for designers. Analysis presented in prior work predicted that tailoring the compliance at the generator-spacecraft interface reduced the dynamic response of the system thereby allowing higher launch load input levels and expanding the range of potential generator missions. To confirm analytical predictions, a dynamic simulator representing the generator structure, Stirling convertors and heat sources was designed and built for testing with and without a compliant interface. Finite element analysis was performed to guide the generator simulator and compliant interface design so that test modes and frequencies were representative of the SRG110 generator. This paper presents the dynamic simulator design, the test setup and methodology, test article modes and frequencies and dynamic responses, and post-test analysis results. With the compliant interface, component responses to an input environment exceeding the SRG110 qualification level spectrum were all within design allowables. Post-test analysis included finite element model tuning to match test frequencies and random response analysis using the test input spectrum. Analytical results were in good overall agreement with the test results and confirmed previous predictions that the SRG110 power system may be considered for a broad range of potential missions, including those with demanding launch environments.

  2. SRG110 Stirling Generator Dynamic Simulator Vibration Test Results and Analysis Correlation

    NASA Technical Reports Server (NTRS)

    Lewandowski, Edward J.; Suarez, Vicente J.; Goodnight, Thomas W.; Callahan, John

    2007-01-01

    The U.S. Department of Energy (DOE), Lockheed Martin (LM), and NASA Glenn Research Center (GRC) have been developing the Stirling Radioisotope Generator (SRG110) for use as a power system for space science missions. The launch environment enveloping potential missions results in a random input spectrum that is significantly higher than historical radioisotope power system (RPS) launch levels and is a challenge for designers. Analysis presented in prior work predicted that tailoring the compliance at the generator-spacecraft interface reduced the dynamic response of the system thereby allowing higher launch load input levels and expanding the range of potential generator missions. To confirm analytical predictions, a dynamic simulator representing the generator structure, Stirling convertors and heat sources was designed and built for testing with and without a compliant interface. Finite element analysis was performed to guide the generator simulator and compliant interface design so that test modes and frequencies were representative of the SRG110 generator. This paper presents the dynamic simulator design, the test setup and methodology, test article modes and frequencies and dynamic responses, and post-test analysis results. With the compliant interface, component responses to an input environment exceeding the SRG110 qualification level spectrum were all within design allowables. Post-test analysis included finite element model tuning to match test frequencies and random response analysis using the test input spectrum. Analytical results were in good overall agreement with the test results and confirmed previous predictions that the SRG110 power system may be considered for a broad range of potential missions, including those with demanding launch environments.

  3. Flow-driven cloud formation and fragmentation: results from Eulerian and Lagrangian simulations

    NASA Astrophysics Data System (ADS)

    Heitsch, Fabian; Naab, Thorsten; Walch, Stefanie

    2011-07-01

    The fragmentation of shocked flows in a thermally bistable medium provides a natural mechanism to form turbulent cold clouds as precursors to molecular clouds. Yet because of the large density and temperature differences and the range of dynamical scales involved, following this process with numerical simulations is challenging. We compare two-dimensional simulations of flow-driven cloud formation without self-gravity, using the Lagrangian smoothed particle hydrodynamics (SPH) code VINE and the Eulerian grid code PROTEUS. Results are qualitatively similar for both methods, yet the variable spatial resolution of the SPH method leads to smaller fragments and thinner filaments, rendering the overall morphologies different. Thermal and hydrodynamical instabilities lead to rapid cooling and fragmentation into cold clumps with temperatures below 300 K. For clumps more massive than 1 M⊙ pc-1, the clump mass function has an average slope of -0.8. The internal velocity dispersion of the clumps is nearly an order of magnitude smaller than their relative motion, rendering it subsonic with respect to the internal sound speed of the clumps but supersonic as seen by an external observer. For the SPH simulations most of the cold gas resides at temperatures below 100 K, while the grid-based models show an additional, substantial component between 100 and 300 K. Independent of the numerical method, our models confirm that converging flows of warm neutral gas fragment rapidly and form high-density, low-temperature clumps as possible seeds for star formation.

  4. Mechanisms of Core-Collapse Supernovae & Simulation Results from the CHIMERA Code

    NASA Astrophysics Data System (ADS)

    Bruenn, S. W.; Mezzacappa, A.; Hix, W. R.; Blondin, J. M.; Marronetti, P.; Messer, O. E. B.; Dirk, C. J.; Yoshida, S.

    2009-05-01

    Unraveling the mechanism for core-collapse supernova explosions is an outstanding computational challenge and the problem remains essentially unsolved despite more than four decades of effort. However, much progress in realistic modeling has occurred recently through the availability of multi-teraflop machines and the increasing sophistication of supernova codes. These improvements have led to some key insights which may clarify the picture in the not too distant future. Here we briefly review the current status of the three explosion mechanisms (acoustic, MHD, and neutrino heating) that are currently under active investigation, concentrating on the neutrino heating mechanism as the one most likely responsible for producing explosions from progenitors in the mass range ~10 to ~25 Msolar. We then briefly describe the CHIMERA code, a supernova code we have developed to simulate core-collapse supernovae in 1, 2, and 3 spatial dimensions. We then describe the results of an ongoing suite of 2D simulations initiated from 12, 15, 20, and 25 Msolar progenitors. These have all exhibited explosions and are currently in the expanding phase with the shock at between 5,000 and 10,000 km. We conclude by briefly describing an ongoing simulation in 3 spatial dimensions initiated from the 15 Msolar progenitor.

  5. Development of ADOCS controllers and control laws. Volume 3: Simulation results and recommendations

    NASA Technical Reports Server (NTRS)

    Landis, Kenneth H.; Glusman, Steven I.

    1985-01-01

    The Advanced Cockpit Controls/Advanced Flight Control System (ACC/AFCS) study was conducted by the Boeing Vertol Company as part of the Army's Advanced Digital/Optical Control System (ADOCS) program. Specifically, the ACC/AFCS investigation was aimed at developing the flight control laws for the ADOCS demonstrator aircraft which will provide satisfactory handling qualities for an attack helicopter mission. The three major elements of design considered are as follows: Pilot's Integrated Side-Stick Controller (SSC) -- Number of axes controlled; force/displacement characteristics; ergonomic design. Stability and Control Augmentation System (SCAS) -- Digital flight control laws for the various mission phases; SCAS mode switching logic. Pilot's Displays -- For night/adverse weather conditions, the dynamics of the superimposed symbology presented to the pilot, in a format similar to the Advanced Attack Helicopter (AAH) Pilot Night Vision System (PNVS), is a function of SCAS characteristics for each mission phase; display mode switching logic. Results of the five piloted simulations conducted at the Boeing Vertol and NASA-Ames simulation facilities are presented in Volume 3. Conclusions drawn from analysis of pilot rating data and commentary were used to formulate recommendations for the ADOCS demonstrator flight control system design. The ACC/AFCS simulation data also provide an extensive data base to aid the development of advanced flight control system design for future V/STOL aircraft.

  6. Galaxy Properties and UV Escape Fractions during the Epoch of Reionization: Results from the Renaissance Simulations

    NASA Astrophysics Data System (ADS)

    Xu, Hao; Wise, John H.; Norman, Michael L.; Ahn, Kyungjin; O'Shea, Brian W.

    2016-12-01

    Cosmic reionization is thought to be primarily fueled by the first generations of galaxies. We examine their stellar and gaseous properties, focusing on the star formation rates and the escape of ionizing photons, as a function of halo mass, redshift, and environment using the full suite of the Renaissance Simulations, with an eye to providing better inputs to global reionization simulations. This suite probes overdense, average, and underdense regions of the universe of several hundred comoving Mpc^3, each yielding a sample of over 3000 halos in the mass range 10^7-10^9.5 M⊙ at their final redshifts of 15, 12.5, and 8, respectively. In the process, we simulate the effects of radiative and supernova feedback from 5000 to 10,000 Population III stars in each simulation. We find that halos as small as 10^7 M⊙ are able to host bursty star formation due to metal-line cooling from earlier enrichment by massive Population III stars. Using our large sample, we find that the galaxy-halo occupation fraction drops from unity at virial masses above 10^8.5 M⊙ to ~50% at 10^8 M⊙ and ~10% at 10^7 M⊙, quite independent of redshift and region. The average ionizing escape fraction is ~5% in the mass range 10^8-10^9 M⊙ and increases with decreasing halo mass below this range, reaching 40%-60% at 10^7 M⊙. Interestingly, we find that the escape fraction varies between 10% and 20% in halos with virial masses of ~3 × 10^9 M⊙. Taken together, our results confirm the importance of the smallest galaxies as sources of ionizing radiation contributing to the reionization of the universe.

  7. Conversion of NIMROD simulation results for graphical analysis using VisIt

    SciTech Connect

    Romero-Talamas, C A

    2006-05-03

    Software routines developed to prepare NIMROD [C. R. Sovinec et al., J. Comp. Phys. 195, 355 (2004)] results for three-dimensional visualization from simulations of the Sustained Spheromak Physics Experiment (SSPX ) [E. B. Hooper et al., Nucl. Fusion 39, 863 (1999)] are presented here. The visualization is done by first converting the NIMROD output to a format known as legacy VTK and then loading it to VisIt, a graphical analysis tool that includes three-dimensional rendering and various mathematical operations for large data sets. Sample images obtained from the processing of NIMROD data with VisIt are included.
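    The conversion step described above, writing a field to the legacy VTK format for loading in VisIt, can be sketched as follows. The header layout follows the legacy VTK `STRUCTURED_POINTS` format that VisIt reads directly; the field name and array shape are hypothetical (NIMROD's actual output layout differs).

    ```python
    def write_vtk_scalar(filename, field, spacing=(1.0, 1.0, 1.0)):
        """Write a 3-D nested-list scalar array to a legacy-VTK
        STRUCTURED_POINTS file that VisIt can open directly."""
        nx, ny, nz = len(field), len(field[0]), len(field[0][0])
        with open(filename, "w") as f:
            f.write("# vtk DataFile Version 2.0\n")
            f.write("Converted scalar field\n")
            f.write("ASCII\n")
            f.write("DATASET STRUCTURED_POINTS\n")
            f.write(f"DIMENSIONS {nx} {ny} {nz}\n")
            f.write("ORIGIN 0 0 0\n")
            f.write(f"SPACING {spacing[0]} {spacing[1]} {spacing[2]}\n")
            f.write(f"POINT_DATA {nx * ny * nz}\n")
            f.write("SCALARS pressure float 1\n")
            f.write("LOOKUP_TABLE default\n")
            # Legacy VTK expects the x index to vary fastest.
            for k in range(nz):
                for j in range(ny):
                    for i in range(nx):
                        f.write(f"{field[i][j][k]}\n")
    ```

    A curvilinear NIMROD mesh would instead use a `STRUCTURED_GRID` or unstructured dataset with explicit point coordinates, but the header-plus-flat-data pattern is the same.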

  8. Results and simulation of the prototype detection unit of KM3NeT-ARCA

    NASA Astrophysics Data System (ADS)

    Hugon, C. M. F.

    2017-03-01

    KM3NeT-ARCA is a deep-sea high-energy neutrino detector. A detection unit prototype was deployed at the future KM3NeT-ARCA deep-sea site, off the Sicilian coast. This detection unit is composed of a line of 3 digital optical modules, each housing 31 photomultiplier tubes. The prototype detection unit operated from its deployment in May 2014 until its decommissioning in July 2015. The results of the calibration of this detection unit and its simulation are presented and discussed.

  9. Entry, Descent and Landing Systems Analysis: Exploration Class Simulation Overview and Results

    NASA Technical Reports Server (NTRS)

    DwyerCianciolo, Alicia M.; Davis, Jody L.; Shidner, Jeremy D.; Powell, Richard W.

    2010-01-01

    NASA senior management commissioned the Entry, Descent and Landing Systems Analysis (EDL-SA) Study in 2008 to identify and roadmap the Entry, Descent and Landing (EDL) technology investments that the agency needed to make in order to successfully land large payloads at Mars for both robotic missions and exploration-class (human-scale) missions. The year-one exploration-class mission activity considered technologies capable of delivering a 40-mt payload. This paper provides an overview of the exploration-class mission study, including technologies considered, models developed and initial simulation results from the EDL-SA year-one effort.

  10. JT9D performance deterioration results from a simulated aerodynamic load test

    NASA Technical Reports Server (NTRS)

    Stakolich, E. G.; Stromberg, W. J.

    1981-01-01

    The results of testing to identify the effects of simulated aerodynamic flight loads on JT9D engine performance are presented. The test results were also used to refine previous analytical studies on the impact of aerodynamic flight loads on performance losses. To accomplish these objectives, a JT9D-7AH engine was assembled with average production clearances and new seals as well as extensive instrumentation to monitor engine performance, case temperatures, and blade tip clearance changes. A special loading device was designed and constructed to permit application of known moments and shear forces to the engine by the use of cables placed around the flight inlet. The test was conducted in the Pratt & Whitney Aircraft X-Ray Test Facility to permit the use of X-ray techniques in conjunction with laser blade tip proximity probes to monitor important engine clearance changes. Upon completion of the test program, the test engine was disassembled, and the condition of gas path parts and final clearances were documented. The test results indicate that the engine lost 1.1 percent in thrust specific fuel consumption (TSFC), as measured under sea level static conditions, due to increased operating clearances caused by simulated flight loads. This compares with 0.9 percent predicted by the analytical model and previous study efforts.

  11. Results from the simulations of geopotential coefficient estimation from gravity gradients

    NASA Astrophysics Data System (ADS)

    Bettadpur, S.; Schutz, B. E.; Lundberg, J. B.

    New information on the short- and medium-wavelength components of the geopotential is expected from measurements of gravity gradients made by the future ESA Aristoteles and NASA Superconducting Gravity Gradiometer missions. In this paper, results are presented from preliminary simulations concerning the estimation of the spherical harmonic coefficients of the geopotential expansion from gravity gradient data. Numerical issues in the brute-force inversion (BFI) of the gravity gradient data are examined, and numerical algorithms are developed that substantially speed up the computation of the potential, acceleration, and gradients, as well as the mapping from the gravity gradients to the geopotential coefficients. The solution of a large least squares problem is also examined, and computational requirements are determined for the implementation of a large-scale inversion. A comparative analysis of the results from the BFI and a symmetry method is reported for test simulations of the estimation of a degree and order 50 gravity field. The results from the two methods, in the presence of white noise, compare well. The latter method is implemented on a special, axially symmetric surface that fits the orbit within 380 meters.
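    A brute-force inversion of this kind amounts to forming and solving least-squares normal equations for the coefficient vector. The toy sketch below uses a random design matrix, purely illustrative; the real mapping from gravity gradients to spherical harmonic coefficients is far larger and highly structured.

    ```python
    import numpy as np

    # Toy normal-equations solve for coefficients x from observations
    # y = A x + noise; A stands in for the gradient-to-coefficient mapping.
    rng = np.random.default_rng(1)
    n_obs, n_coef = 2000, 50
    A = rng.standard_normal((n_obs, n_coef))
    x_true = rng.standard_normal(n_coef)
    y = A @ x_true + 1e-3 * rng.standard_normal(n_obs)

    # Accumulate the normal matrix observation-by-observation, as a
    # large-scale inversion would, instead of storing all of A at once.
    N = np.zeros((n_coef, n_coef))
    b = np.zeros(n_coef)
    for row, obs in zip(A, y):
        N += np.outer(row, row)
        b += row * obs
    x_hat = np.linalg.solve(N, b)
    assert np.max(np.abs(x_hat - x_true)) < 1e-3
    ```

    The sequential accumulation is what makes such inversions feasible when the full observation matrix is too large to hold in memory; symmetry methods exploit structure in N to reduce the cost further.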

  12. Influence of land use on rainfall simulation results in the Souss basin, Morocco

    NASA Astrophysics Data System (ADS)

    Peter, Klaus Daniel; Ries, Johannes B.; Hssaine, Ali Ait

    2013-04-01

    Situated between the High Atlas and the Anti-Atlas, the Souss basin is characterized by dynamic land use change and is one of the fastest growing agricultural regions of Morocco. Traditional mixed agriculture is being replaced by extensive plantations of citrus fruits, bananas and vegetables in monocropping, mainly for the European market. To implement this land use change and further expand the plantations into marginal land formerly unsuitable for agriculture, land levelling by heavy machinery is used to plane the fields and close the widespread gullies. These gully systems cut deep between the plantations and other arable land; their development started over 400 years ago with the introduction of sugar production. Heavy rainfall events lead to further strong soil and gully erosion in this normally arid region, which receives 200 mm mean annual precipitation. Gullies are cutting into the arable land or re-excavating their old stream courses. On test sites around the city of Taroudant, a total of 122 rainfall simulations were conducted to analyze the susceptibility of soils to surface runoff and soil erosion under different land uses. A small portable nozzle rainfall simulator was used for the experiments, quantifying runoff and erosion rates on micro-plots with a size of 0.28 m2. A motor pump, regulated by a flow metre, feeds water to a commercial full-cone nozzle at a height of 2 m. The rainfall intensity is maintained at about 40 mm h-1 for each of the 30-min experiments. Ten categories of land use are classified for different stages of levelling, fallow land, cultivation and rangeland. Results show that mean runoff coefficients and mean sediment loads are significantly higher (1.4 and 3.5 times, respectively) on levelled study sites compared to undisturbed sites. However, the runoff coefficients of all land use types are relatively similar and reach high median values from 39 to 56 %. Only the
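    The runoff coefficient reported for plots like these is simply collected runoff divided by applied rain. A minimal sketch using the plot size and rainfall intensity quoted above (the collected-runoff figure is hypothetical):

    ```python
    def runoff_coefficient(rain_mm_h, duration_min, runoff_l, plot_m2=0.28):
        """Fraction of applied rain leaving the plot as surface runoff
        (1 mm of rain over 1 m2 equals 1 litre)."""
        applied_l = rain_mm_h * (duration_min / 60.0) * plot_m2
        return runoff_l / applied_l

    # 40 mm/h for 30 min on a 0.28 m2 plot applies 5.6 L of rain; collecting
    # a hypothetical 2.8 L of runoff gives a coefficient of 0.5, i.e. 50 %.
    assert abs(runoff_coefficient(40, 30, 2.8) - 0.5) < 1e-12
    ```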

  13. SZ effects in the Magneticum Pathfinder simulation: comparison with the Planck, SPT, and ACT results

    NASA Astrophysics Data System (ADS)

    Dolag, K.; Komatsu, E.; Sunyaev, R.

    2016-12-01

    We calculate the one-point probability density functions (PDFs) and the power spectra of the thermal and kinetic Sunyaev-Zeldovich (tSZ and kSZ) effects and the mean Compton Y parameter using the Magneticum Pathfinder simulations, state-of-the-art cosmological hydrodynamical simulations of a large cosmological volume of (896 Mpc h^-1)^3. These simulations follow in detail the thermal and chemical evolution of the intracluster medium as well as the evolution of supermassive black holes and their associated feedback processes. We construct full-sky maps of tSZ and kSZ from the light-cones out to z = 0.17, and one realization of an 8.8° × 8.8° deep light-cone out to z = 5.2. The local universe at z < 0.027 is simulated by a constrained realization. The tail of the one-point PDF of tSZ from the deep light-cone follows a power-law shape with an index of -3.2. Once convolved with the effective beam of Planck, it agrees with the PDF measured by Planck. The predicted tSZ power spectrum agrees with that of the Planck data at all multipoles up to l ≈ 1000, once the calculations are scaled to the Planck 2015 cosmological parameters with Ωm = 0.308 and σ8 = 0.8149. Consistent with the results in the literature, however, we continue to find a tSZ power spectrum at l = 3000 that is significantly larger than that estimated from the high-resolution ground-based data. The simulation predicts a mean fluctuating Compton Y value of bar{Y} = 1.18 × 10^-6 for Ωm = 0.272 and σ8 = 0.809. Nearly half (≈5 × 10^-7) of the signal comes from haloes below a virial mass of 10^13 M⊙ h^-1. Scaling this to the Planck 2015 parameters, we find bar{Y} = 1.57 × 10^-6.
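
    Recovering a power-law tail index like the quoted -3.2 reduces to a straight-line fit in log-log space. This is a generic sketch, not the authors' pipeline; the synthetic data below are drawn from an exact power law purely for illustration.

```python
# Hedged sketch: fit pdf ~ y**alpha on tail samples by least squares in
# log-log space and return the slope alpha.
import numpy as np

def tail_index(y, pdf):
    """Least-squares power-law index of pdf(y) over the supplied samples."""
    alpha, _intercept = np.polyfit(np.log(y), np.log(pdf), 1)
    return alpha

# Synthetic tail following y^-3.2 exactly, for illustration only.
y = np.logspace(-6, -4, 50)
pdf = y ** -3.2
print(round(tail_index(y, pdf), 2))  # -3.2
```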

  14. RESULTS OF COPPER CATALYZED PEROXIDE OXIDATION (CCPO) OF TANK 48H SIMULANTS

    SciTech Connect

    Peters, T.; Pareizs, J.; Newell, J.; Fondeur, F.; Nash, C.; White, T.; Fink, S.

    2012-08-14

    Savannah River National Laboratory (SRNL) performed a series of laboratory-scale experiments that examined copper-catalyzed hydrogen peroxide (H2O2) aided destruction of organic components, most notably tetraphenylborate (TPB), in Tank 48H simulant slurries. The experiments were designed with an expectation of conducting the process within existing vessels of Building 241-96H with minimal modifications to the existing equipment. Results of the experiments indicate that TPB destruction levels exceeding 99.9% are achievable, dependent on the reaction conditions. The following observations were made with respect to the major processing variables investigated. A lower reaction pH provides faster reaction rates (pH 7 > pH 9 > pH 11); however, pH 9 reactions provide the least quantity of organic residual compounds within the limits of species analyzed. Higher temperatures lead to faster reaction rates and smaller quantities of organic residual compounds. Higher concentrations of the copper catalyst provide faster reaction rates, but the highest copper concentration (500 mg/L) also resulted in the second highest quantity of organic residual compounds. Faster rates of H2O2 addition lead to faster reaction rates and lower quantities of organic residual compounds. Testing with simulated slurries continues. Current testing is examining lower copper concentrations, refined peroxide addition rates, and alternate acidification methods. A revision of this report will provide updated findings with emphasis on defining recommended conditions for similar tests with actual waste samples.

  15. Natural frequencies of two bubbles in a compliant tube: Analytical, simulation, and experimental results

    PubMed Central

    Jang, Neo W.; Zakrzewski, Aaron; Rossi, Christina; Dalecki, Diane; Gracewski, Sheryl

    2011-01-01

    Motivated by various clinical applications of ultrasound contrast agents within blood vessels, the natural frequencies of two bubbles in a compliant tube are studied analytically, numerically, and experimentally. A lumped parameter model for a five-degree-of-freedom system was developed, accounting for the compliance of the tube and the coupled response of the two bubbles. The results were compared to those produced by two different simulation methods: (1) an axisymmetric coupled boundary element and finite element code previously used to investigate the response of a single bubble in a compliant tube and (2) finite element models developed in COMSOL Multiphysics. For the simplified case of two bubbles in a rigid tube, the lumped parameter model predicts two frequencies for in- and out-of-phase oscillations, in good agreement with both numerical simulation and experimental results. For two bubbles in a compliant tube, the lumped parameter model predicts four nonzero frequencies, each asymptotically converging to expected values in the rigid and compliant limits of the tube material. PMID:22088008
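
    The in-phase/out-of-phase mode splitting described above can be illustrated with a generic two-degree-of-freedom analogue (not the paper's five-DOF model): two identical bubble "oscillators" of effective mass m and stiffness k, coupled through the liquid by a coupling stiffness kc. All parameter values are hypothetical.

```python
# Illustrative sketch: natural frequencies of two coupled oscillators from
# the generalised eigenproblem K x = omega^2 M x. The lower mode is the
# in-phase oscillation (eigenvalue k/m), the upper the out-of-phase one
# (eigenvalue (k + 2*kc)/m).
import numpy as np

def natural_frequencies_hz(m, k, kc):
    M = np.diag([m, m])
    K = np.array([[k + kc, -kc],
                  [-kc, k + kc]])
    omega_sq = np.linalg.eigvalsh(np.linalg.solve(M, K))  # ascending order
    return np.sqrt(omega_sq) / (2.0 * np.pi)

# Hypothetical effective parameters, chosen only to exercise the formula.
f = natural_frequencies_hz(m=1e-9, k=4.0, kc=1.0)
```

Adding tube compliance in the full model introduces extra degrees of freedom and hence the four nonzero frequencies reported in the abstract; this sketch only shows the rigid-tube pair.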

  16. Evolution of star cluster systems in isolated galaxies: first results from direct N-body simulations

    NASA Astrophysics Data System (ADS)

    Rossi, L. J.; Bekki, K.; Hurley, J. R.

    2016-11-01

    The evolution of star clusters is largely affected by the tidal field generated by the host galaxy. It is thus in principle expected that under the assumption of a `universal' initial cluster mass function the properties of the evolved present-day mass function of star cluster systems should show a dependence on the properties of the galactic environment in which they evolve. To explore this expectation, a sophisticated model of the tidal field is required in order to study the evolution of star cluster systems in realistic galaxies. Along these lines, in this work we first describe a method developed for coupling N-body simulations of galaxies and star clusters. We then generate a data base of galaxy models along the Hubble sequence and calibrate evolutionary equations to the results of direct N-body simulations of star clusters in order to predict the clusters' mass evolution as a function of the galactic environment. We finally apply our methods to explore the properties of evolved `universal' initial cluster mass functions and any dependence on the host galaxy morphology and mass distribution. The preliminary results show that an initial power-law distribution of the masses `universally' evolves into a lognormal distribution, with the properties correlated with the stellar mass and stellar mass density of the host galaxy.

  17. Modelled air pollution levels versus EC air quality legislation - results from high resolution simulation.

    PubMed

    Chervenkov, Hristo

    2013-12-01

    An appropriate method for evaluating the air quality of a certain area is to contrast the actual air pollution levels with the critical ones prescribed in the legislative standards. The application of numerical simulation models for assessing the real air quality status is allowed by the legislation of the European Community (EC). This approach is preferable especially when the area of interest is relatively big and/or the network of measurement stations is sparse and the available observational data are correspondingly scarce. Such a method is very efficient for similar assessment studies due to the continuous spatio-temporal coverage of the obtained results. In this study the surface-layer concentrations of the harmful substances sulphur dioxide (SO2), nitrogen dioxide (NO2), particulate matter in coarse (PM10) and fine (PM2.5) fractions, ozone (O3), carbon monoxide (CO) and ammonia (NH3), obtained from modelling simulations with 10 km resolution on an hourly basis, are used to calculate the statistical quantities that are compared with the corresponding critical levels prescribed in the EC directives. For some of them (PM2.5, CO and NH3) this is done for the first time at such resolution. The computational grid covers Bulgaria entirely plus some surrounding territories, and the calculations are made for every year in the period 1991-2000. The results, averaged over the whole time slice, can be treated as representative of the air quality situation in the last decade of the previous century.
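
    The core comparison step is simple to sketch: count how often hourly model concentrations exceed a legislative threshold and check against the permitted number of exceedances. The 200 ug/m3 limit and 18 allowed exceedances below mirror the EC hourly NO2 standard, but they are hard-coded here only as an example; a real assessment must read the thresholds from the applicable directive.

```python
# Illustrative sketch of an exceedance check against a legislative limit.
# Threshold and allowance are example values, not a substitute for the
# actual EC directive text.

def exceedance_check(hourly_ug_m3, limit=200.0, allowed_exceedances=18):
    """Return (number of exceedances, whether the series is compliant)."""
    n = sum(1 for c in hourly_ug_m3 if c > limit)
    return n, n <= allowed_exceedances

# Hypothetical year of hourly values: mostly moderate, four high hours.
series = [150.0] * 8756 + [250.0] * 4
n, compliant = exceedance_check(series)   # 4 exceedances -> compliant
```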

  18. Newest Results from the Investigation of Polymer-Induced Drag Reduction through Direct Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Dimitropoulos, Costas D.; Beris, Antony N.; Sureshkumar, R.; Handler, Robert A.

    1998-11-01

    This work continues our attempts to elucidate theoretically the mechanism of polymer-induced drag reduction through direct numerical simulations of turbulent channel flow, using an independently evaluated rheological model for the polymer stress. Using appropriate scaling to accommodate effects due to viscoelasticity reveals great consistency in the results for different combinations of the polymer concentration and chain extension. This helps demonstrate that our observations are applicable to very dilute systems, which are currently not possible to simulate. It also reinforces the hypothesis that one of the prerequisites for the phenomenon of drag reduction is a sufficiently enhanced extensional viscosity, corresponding to the level of intensity and duration of extensional rates typically encountered during the turbulent flow. Moreover, these results motivate a study of the turbulence structure at larger Reynolds numbers and for different periodic computational cell sizes. In addition, the Reynolds stress budgets demonstrate that flow elasticity adversely affects the activities represented by the pressure-strain correlations, leading to a redistribution of turbulent kinetic energy amongst all directions. Finally, we discuss the influence of viscoelasticity in reducing the production of streamwise vorticity.

  19. Natural frequencies of two bubbles in a compliant tube: analytical, simulation, and experimental results.

    PubMed

    Jang, Neo W; Zakrzewski, Aaron; Rossi, Christina; Dalecki, Diane; Gracewski, Sheryl

    2011-11-01

    Motivated by various clinical applications of ultrasound contrast agents within blood vessels, the natural frequencies of two bubbles in a compliant tube are studied analytically, numerically, and experimentally. A lumped parameter model for a five-degree-of-freedom system was developed, accounting for the compliance of the tube and the coupled response of the two bubbles. The results were compared to those produced by two different simulation methods: (1) an axisymmetric coupled boundary element and finite element code previously used to investigate the response of a single bubble in a compliant tube and (2) finite element models developed in COMSOL Multiphysics. For the simplified case of two bubbles in a rigid tube, the lumped parameter model predicts two frequencies for in- and out-of-phase oscillations, in good agreement with both numerical simulation and experimental results. For two bubbles in a compliant tube, the lumped parameter model predicts four nonzero frequencies, each asymptotically converging to expected values in the rigid and compliant limits of the tube material.

  20. Ion equation of state in quasi-parallel shocks - A simulation result

    NASA Technical Reports Server (NTRS)

    Mandt, M. E.; Kan, J. R.

    1988-01-01

    The ion equation of state in the quasi-parallel collisionless shock is deduced from simulation results. The simulations were performed for θ_Bn = 10°, β = 0.5, and M_A in the range 1.2 to 8, where M_A is the Alfvén Mach number, β is the upstream ratio of plasma pressure to magnetic pressure, and θ_Bn is the angle between the shock normal and the upstream magnetic field. The equation of state can be approximated by a power law with different exponents on the upstream and downstream sides of the shock transition region. The exponent on the upstream side of the transition region is much greater than the adiabatic value of 5/3 and increases with M_A. The exponent on the downstream side of the transition region is slightly less than 5/3. The results show that ion heating in the quasi-parallel shock is highly nonadiabatic, with a large increase in entropy and in temperature ratio on the upstream side of the transition region, while the heating is nearly isentropic, with a large temperature increase across the principal density jump, on the downstream side of the transition region.
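
    Extracting such a power-law exponent from simulation states is a one-line computation: if the ion pressure follows P ~ n^gamma, then gamma is the slope of log P against log n between two states. The sketch below uses illustrative numbers, not values from the paper.

```python
# Minimal sketch: polytropic exponent between two (density, pressure)
# states, as used to characterise an equation of state P ~ n**gamma.
import math

def polytropic_exponent(n1, p1, n2, p2):
    """Slope of log P versus log n between states 1 and 2."""
    return math.log(p2 / p1) / math.log(n2 / n1)

# Adiabatic reference: doubling the density at gamma = 5/3 multiplies the
# pressure by 2**(5/3), so the recovered exponent is exactly 5/3.
gamma = polytropic_exponent(1.0, 1.0, 2.0, 2.0 ** (5.0 / 3.0))
```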

  1. Ion cyclotron instability at Io: Hybrid simulation results compared to in situ observations

    NASA Astrophysics Data System (ADS)

    Šebek, Ondřej; Trávníček, Pavel M.; Walker, Raymond J.; Hellinger, Petr

    2016-08-01

    We present an analysis of global three-dimensional hybrid simulations of Io's interaction with Jovian magnetospheric plasma. We apply a single-species model with simplified neutral-plasma chemistry and downscale Io in order to resolve the ion kinetic scales. We consider charge exchange, electron impact ionization, and photoionization, using variable rates of these processes to investigate their impact. Our results are in good qualitative agreement with the in situ magnetic field measurements for five Galileo flybys around Io. The hybrid model describes ion kinetics self-consistently. This allows us to assess the distribution of temperature anisotropies around Io and thereby determine the possible triggering mechanism for waves observed near Io. We compare simulated dynamic spectra of magnetic fluctuations with in situ observations made by Galileo. Our results are consistent with both the spatial distribution and local amplitude of magnetic fluctuations found in the observations. Cyclotron waves, probably triggered by the growth of the ion cyclotron instability, are observed mainly downstream of Io and on the flanks, in regions farther from Io where the ion pickup rate is relatively low. Growth of the ion cyclotron instability is governed mainly by the charge exchange rate.

  2. Mercury's plasma belt: hybrid simulations results compared to in-situ measurements

    NASA Astrophysics Data System (ADS)

    Hercik, D.; Travnicek, P. M.; Schriver, D.; Hellinger, P.

    2012-12-01

    The presence of a plasma belt and a trapped-particle region in Mercury's inner magnetosphere has been questionable due to the small dimensions of the magnetosphere of Mercury compared to that of Earth, where these regions are formed. Numerical simulations of the solar wind interaction with Mercury's magnetic field suggested that such a structure could also be found in the vicinity of Mercury. These results have recently been confirmed by MESSENGER observations. Here we present a more detailed analysis of the plasma belt structure and of the characteristics and behaviour of the quasi-trapped particle population under different orientations of the interplanetary magnetic field. The plasma belt region is constantly supplied with solar wind protons via the magnetospheric flanks and the tail current sheet region. Protons inside the plasma belt region are quasi-trapped in the magnetic field of Mercury and perform a westward drift around the planet. This region is well separated by a magnetic shell and has higher average temperatures and lower bulk proton current densities than the surrounding area. On the day side the population exhibits a loss cone distribution function matching the theoretical loss cone angle. Simulation results are also compared to in-situ measurements acquired by the MESSENGER MAG and FIPS instruments.

  3. Stellar hydrodynamical modeling of dwarf galaxies: simulation methodology, tests, and first results

    NASA Astrophysics Data System (ADS)

    Vorobyov, Eduard I.; Recchi, Simone; Hensler, Gerhard

    2015-07-01

    Context. In spite of enormous progress and brilliant achievements in cosmological simulations, they still lack numerical resolution or physical processes to simulate dwarf galaxies in sufficient detail. Accurate numerical simulations of individual dwarf galaxies are thus still in demand. Aims: We aim to improve available numerical techniques to simulate individual dwarf galaxies. In particular, we aim to (i) study in detail the coupling between stars and gas in a galaxy, exploiting the so-called stellar hydrodynamical approach; and (ii) study for the first time the chemodynamical evolution of individual galaxies starting from self-consistently calculated initial gas distributions. Methods: We present a novel chemodynamical code for studying the evolution of individual dwarf galaxies. In this code, the dynamics of gas is computed using the usual hydrodynamics equations, while the dynamics of stars is described by the stellar hydrodynamics approach, which solves for the first three moments of the collisionless Boltzmann equation. The feedback from stellar winds and dying stars is followed in detail. In particular, a novel and detailed approach has been developed to trace the aging of various stellar populations, which facilitates an accurate calculation of the stellar feedback depending on the stellar age. The code has been accurately benchmarked, allowing us to provide a recipe for improving the code performance on the Sedov test problem. Results: We build initial equilibrium models of dwarf galaxies that take gas self-gravity into account and present different levels of rotational support. Models with high rotational support (and hence high degrees of flattening) develop prominent bipolar outflows; a newly-born stellar population in these models is preferentially concentrated to the galactic midplane. Models with little rotational support blow away a large fraction of the gas and the resulting stellar distribution is extended and diffuse. 
Models that start from non

  4. Carbon fiber composites inspection and defect characterization using active infrared thermography: numerical simulations and experimental results.

    PubMed

    Fernandes, Henrique; Zhang, Hai; Figueiredo, Alisson; Ibarra-Castanedo, Clemente; Guimarares, Gilmar; Maldague, Xavier

    2016-12-01

    Composite materials are widely used in the aeronautic industry. One of the reasons is because they have strength and stiffness comparable to metals, with the added advantage of significant weight reduction. Infrared thermography (IT) is a safe nondestructive testing technique that has a fast inspection rate. In active IT, an external heat source is used to stimulate the material being inspected in order to generate a thermal contrast between the feature of interest and the background. In this paper, carbon-fiber-reinforced polymers are inspected using IT. More specifically, carbon/PEEK (polyether ether ketone) laminates with square Kapton inserts of different sizes and at different depths are tested with three different IT techniques: pulsed thermography, vibrothermography, and line scan thermography. The finite element method is used to simulate the pulsed thermography experiment. Numerical results displayed a very good agreement with experimental results.

  5. Multipacting simulation and test results of BNL 704 MHz SRF gun

    SciTech Connect

    Xu, W.; Belomestnykh, S.; Ben-Zvi, I.; Cullen, C.; et al.

    2012-05-20

    The BNL 704 MHz SRF gun has a grooved choke joint to support the photo-cathode. Due to distortion of the grooves during buffered chemical polishing (BCP) of the choke joint, several multipacting barriers showed up when it was tested with a Nb cathode stalk at JLab. We built a setup using the spare large grain SRF cavity to test and condition the multipacting barriers at BNL with various power sources up to 50 kW. The test is carried out in three stages: testing the cavity performance without a cathode, testing the cavity with the Nb cathode stalk that was used at JLab, and testing the cavity with a copper cathode stalk based on the design for the SRF gun. This paper summarizes the results of multipacting simulations and presents the large grain cavity test setup and the test results.

  6. Results of Simulated Galactic Cosmic Radiation (GCR) and Solar Particle Events (SPE) on Spectra Restraint Fabric

    NASA Technical Reports Server (NTRS)

    Peters, Benjamin; Hussain, Sarosh; Waller, Jess

    2017-01-01

    Spectra or similar ultra-high-molecular-weight polyethylene (UHMWPE) fabric is the likely choice for future structural space suit restraint materials due to its high strength-to-weight ratio, abrasion resistance, and dimensional stability. During long duration space missions, space suits will be subjected to significant amounts of high-energy radiation from several different sources. To ensure that pressure garment designs properly account for the effects of radiation, it is important to characterize the mechanical changes to structural materials after they have been irradiated. White Sands Test Facility (WSTF) collaborated with the Crew and Thermal Systems Division at the Johnson Space Center (JSC) to irradiate and test various space suit materials, examining their tensile properties through blunt probe puncture testing and single fiber tensile testing after the materials had been dosed at various levels of simulated GCR and SPE iron and proton beams at Brookhaven National Laboratory. The dosages were chosen based on a simulation developed by the Structural Engineering Division at JSC for the radiation dosages expected for space suit softgoods on a Mars reference mission. Spectra fabric tested in this effort saw equivalent dosages at 2x, 10x, and 20x the predicted dose, as well as a simulated 50-year exposure, to examine the range of effects on the material and whether any degradation due to GCR would be present if the suit softgoods were stored in deep space for a long period of time. This paper presents the results of this work and outlines the impact on space suit pressure garment design for long duration deep space missions.

  7. Free space optical communication flight mission: simulations and experimental results on ground level demonstrator

    NASA Astrophysics Data System (ADS)

    Mata Calvo, Ramon; Ferrero, Valter; Camatel, Stefano; Catalano, Valeria; Bonino, Luciana; Toselli, Italo

    2009-05-01

    In the context of the increasing demand for high-speed data links for scientific, planetary exploration and earth observation missions, the Italian Space Agency (ASI), with Thales Alenia Space as prime contractor, the Polytechnic of Turin and other Italian partners, is developing a program for the feasibility demonstration of an optical communication system, with the goal of a prototype flight mission in the near future. We have designed and analyzed a ground level bidirectional Free Space Optical Communication (FSOC) breadboard at 2.5 Gbit/s working at 1550 nm as an emulator of a slant path link. The breadboard is fully working, and we tested it back-to-back, at 500 m and at 2.3 km over one month. The distances were chosen in order to obtain, in a ground level link, a cumulative turbulence equivalent to that of a slant path. The measurement campaign was carried out during day and night and under several weather conditions, from sunny to rainy or windy, so we could work under turbulence conditions ranging from weak to strong. We measured the scintillation both on-axis and off-axis by introducing known misalignments at the terminals, the transmission losses at both path lengths, and the BER at both receivers. We present simulation results for slant and ground level links that take into account the atmospheric effects (scintillation, beam spread, beam wander and fade probability); comparing them with the ground level experimental results, we find good agreement between them. Finally, we discuss the results obtained in the experiments and in the flight mission simulations in order to apply our experimental results in the next project phases.

  8. Role of dayside transients in a substorm process: Results from the global kinetic simulation Vlasiator

    NASA Astrophysics Data System (ADS)

    Palmroth, M.; Hoilijoki, S.; Pfau-Kempf, Y.; Hietala, H.; Nishimura, Y.; Angelopoulos, V.; Pulkkinen, T. I.; Ganse, U.; Hannuksela, O.; von Alfthan, S.; Battarbee, M. C.; Vainio, R. O.

    2015-12-01

    We investigate the dayside-nightside coupling of magnetospheric dynamics in a global kinetic simulation covering the entire magnetosphere. We use the newly developed Vlasiator (http://vlasiator.fmi.fi), the world's first global hybrid-Vlasov simulation, modelling the ions as distribution functions while electrons are treated as a charge-neutralising fluid. Here, we run Vlasiator in a five-dimensional (5D) setup, where ordinary space is represented by the 2D noon-midnight meridional plane, embedding the 3D velocity space in each grid cell. This approach combines an improved physical solution with fine resolution, allowing us to investigate kinetic processes as a consequence of the global magnetospheric evolution. The simulation is run during steady southward interplanetary magnetic field. We observe dayside reconnection and the resulting 2D representations of flux transfer events (FTEs). FTEs move tailward and distort the magnetopause, while the largest of them even modify the plasma sheet location. On the nightside, the plasma sheet shows bead-like density enhancements moving slowly earthward. The tailward side of the dipolar field stretches. Strong reconnection initiates first in the near-Earth region, forming a tailward-moving magnetic island that cannibalises other islands forming further down the tail, increasing the island's volume and complexity. After this, several reconnection lines form again in the near-Earth region, resulting in several magnetic islands. At first, none of the earthward-moving islands reach the closed field region, because just tailward of the dipolar region there exists a relatively stable X-line that is strong enough to push most of the magnetic islands tailward. Finally, however, one of the tailward X-lines becomes strong enough to overcome the X-line nearest to Earth, forming a strong surge into the dipolar field region, as there is nothing any more to hold back the propagation of the structure. We investigate this substorm

  9. C13 urea breath test accuracy analysis against former C14 urea breath test technique: is there still a need for an indeterminate result category?

    PubMed

    Charest, Mathieu; Belair, Marc-Andre

    2017-03-09

    Helicobacter pylori (H. pylori) infection is the leading cause of peptic ulcer disease. Purpose: To assess the difference in the distribution of negative versus positive breath test results between the former C14 urea breath test (UBT) and the newer C13 UBT, and to determine whether the use of an indeterminate category is still meaningful and what type of results should trigger repeat testing. Methods: A retrospective survey was performed of all consecutive patients referred to our service for a UBT. We analysed 562 patients with C14 UBT and 454 patients with C13 UBT. Results: C13 negative results are distributed farther away from the cut-off value and grouped more tightly around the mean negative value, as compared to the more widely distributed C14 negative results. Distribution analysis of the negative results of the C13 UBT compared to those of the C14 UBT reveals a statistically significant difference. Within the C13 UBT group, only 1 patient could have been classified as having an indeterminate result using the same indeterminate zone previously used with the C14 UBT. This is significantly less frequent than what was previously found with the C14 UBT. Discussion: Borderline negative results do occur with the C13 UBT, although less frequently than with the C14 UBT, and we will carefully monitor results falling between 3.0 and 3.5 %delta. The C13 UBT is a safe and simple test for the patient and provides a clear positive or negative result for the clinician in the majority of cases.
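
    The three-way classification discussed above can be sketched as a simple thresholding rule. The 3.0-3.5 %delta band is the monitoring zone quoted in the abstract; treating 3.5 %delta as the positivity cutoff is an assumption for illustration, and real cutoffs must follow the local laboratory protocol.

```python
# Sketch of a three-way UBT classification with an indeterminate zone.
# The band limits are illustrative (3.0-3.5 %delta per the abstract's
# monitoring zone); the positivity cutoff is an assumed example value.

def classify_ubt(delta_per_mil, low=3.0, high=3.5):
    """Classify a %delta result as negative, indeterminate, or positive."""
    if delta_per_mil < low:
        return "negative"
    if delta_per_mil < high:
        return "indeterminate"
    return "positive"

# Hypothetical patient results spanning the three categories.
results = [classify_ubt(d) for d in (1.2, 3.2, 7.8)]
```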

  10. Late Pop III Star Formation During the Epoch of Reionization: Results from the Renaissance Simulations

    NASA Astrophysics Data System (ADS)

    Xu, Hao; Norman, Michael L.; O'Shea, Brian W.; Wise, John H.

    2016-06-01

    We present results on the formation of Population III (Pop III) stars at redshift 7.6 from the Renaissance Simulations, a suite of extremely high-resolution and physics-rich radiation transport hydrodynamics cosmological adaptive-mesh refinement simulations of high-redshift galaxy formation performed on the Blue Waters supercomputer. In a survey volume of about 220 comoving Mpc^3, we found 14 Pop III galaxies with recent star formation. The surprisingly late formation of Pop III stars is possible due to two factors: (i) the metal enrichment process is local and slow, leaving plenty of pristine gas to exist in the vast volume; and (ii) strong Lyman-Werner radiation from vigorous metal-enriched star formation in early galaxies suppresses Pop III formation in ("not so") small primordial halos with mass less than ~3 × 10^7 M⊙. We quantify the properties of these Pop III galaxies and their Pop III star formation environments. We look for analogs to the recently discovered luminous Ly α emitter CR7, which has been interpreted as a Pop III star cluster within or near a metal-enriched star-forming galaxy. We find and discuss a system similar to this in some respects; however, the Pop III star cluster is far less massive and luminous than CR7 is inferred to be.

  11. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6): simulation design and preliminary results

    NASA Astrophysics Data System (ADS)

    Kravitz, B.; Robock, A.; Tilmes, S.; Boucher, O.; English, J. M.; Irvine, P. J.; Jones, A.; Lawrence, M. G.; MacCracken, M.; Muri, H.; Moore, J. C.; Niemeier, U.; Phipps, S. J.; Sillmann, J.; Storelvmo, T.; Wang, H.; Watanabe, S.

    2015-06-01

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more longwave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. This is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  12. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6): simulation design and preliminary results

    NASA Astrophysics Data System (ADS)

    Kravitz, B.; Robock, A.; Tilmes, S.; Boucher, O.; English, J. M.; Irvine, P. J.; Jones, A.; Lawrence, M. G.; MacCracken, M.; Muri, H.; Moore, J. C.; Niemeier, U.; Phipps, S. J.; Sillmann, J.; Storelvmo, T.; Wang, H.; Watanabe, S.

    2015-10-01

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more longwave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. This is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  13. Control of Warm Compression Stations Using Model Predictive Control: Simulation and Experimental Results

    NASA Astrophysics Data System (ADS)

    Bonne, F.; Alamir, M.; Bonnay, P.

    2017-02-01

    This paper deals with multivariable constrained model predictive control for Warm Compression Stations (WCS). WCSs are subject to numerous constraints (limits on pressures, actuators) that need to be satisfied using appropriate algorithms. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast rejection of disturbances such as those induced by a turbine or compressor stop, a key aspect in the case of large-scale cryogenic refrigeration. The proposed control scheme can be used to achieve precise control of pressures in normal operation or to avoid reaching stopping criteria (such as excessive pressures) under high disturbances (such as the pulsed heat loads expected in the cryogenic cooling systems of future fusion reactors, e.g. the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced fusion experiment (JT-60SA)). The paper details the simulator used to validate this new control scheme and the associated simulation results on the SBT's WCS. This work is partially supported through the French National Research Agency (ANR), task agreement ANR-13-SEED-0005.
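    The receding-horizon idea behind such a scheme can be sketched on a toy scalar pressure model (all numbers and the brute-force search below are illustrative assumptions, not the paper's method; a real WCS controller would solve a constrained QP over a multivariable model):

```python
import itertools

def mpc_step(p, ref, a, b, u_candidates, horizon, p_max):
    """Toy constrained MPC for a scalar pressure model p+ = a*p + b*u:
    exhaustive search over a small input grid, discarding any input
    sequence whose predicted pressure violates the limit p_max."""
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(u_candidates, repeat=horizon):
        pk, cost, feasible = p, 0.0, True
        for u in seq:
            pk = a * pk + b * u
            if pk > p_max:          # predicted constraint violation
                feasible = False
                break
            cost += (pk - ref) ** 2 + 0.01 * u ** 2
        if feasible and cost < best_cost:
            best_cost, best_u = cost, seq[0]   # apply first move only
    return best_u

# Hypothetical plant: drive pressure from 1.0 bar toward 1.5 bar, cap at 1.6
p = 1.0
for _ in range(20):
    u = mpc_step(p, ref=1.5, a=0.9, b=0.2,
                 u_candidates=[-1.0, -0.5, 0.0, 0.5, 1.0],
                 horizon=3, p_max=1.6)
    p = 0.9 * p + 0.2 * u
```

    Because only the first input of the optimal feasible sequence is applied and the optimization is repeated at every step, the pressure tracks the setpoint while the limit is respected along the whole predicted horizon.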

  14. Simulated flight through JAWS wind shear - In-depth analysis results. [Joint Airport Weather Studies

    NASA Technical Reports Server (NTRS)

    Frost, W.; Chang, H.-P.; Elmore, K. L.; McCarthy, J.

    1984-01-01

    The Joint Airport Weather Studies (JAWS) field experiment was carried out in 1982 near Denver. An analysis is presented of aircraft performance in the three-dimensional wind fields. The fourth dimension, time, is not considered. The analysis seeks to prepare computer models of microburst wind shear from the JAWS data sets for input to flight simulators and for research and development of aircraft control systems and operational procedures. A description is given of the data set and the method of interpolating velocities and velocity gradients for input to the six-degrees-of-freedom equations governing the motion of the aircraft. The results of the aircraft performance analysis are then presented, and the interpretation classifies the regions of shear as severe, moderate, or weak. Paths through the severe microburst of August 5, 1982, are then recommended for training and operational applications. Selected subregions of the flow field defined in terms of planar sections through the wind field are presented for application to simulators with limited computer storage capacity, that is, for computers incapable of storing the entire array of variables needed if the complete wind field is programmed.
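    Feeding a gridded wind field like the JAWS data set into the six-degrees-of-freedom equations requires interpolating velocities at the aircraft position. A minimal sketch of trilinear interpolation on a unit-spaced grid (the grid and values below are illustrative, not JAWS data):

```python
import numpy as np

def trilinear(field, x, y, z):
    """Trilinear interpolation of one gridded wind component (unit grid
    spacing assumed), as a simulator would sample a 3-D wind field."""
    i, j, k = int(x), int(y), int(z)
    fx, fy, fz = x - i, y - j, z - k
    c = 0.0
    for di, wx in ((0, 1 - fx), (1, fx)):
        for dj, wy in ((0, 1 - fy), (1, fy)):
            for dk, wz in ((0, 1 - fz), (1, fz)):
                c += wx * wy * wz * field[i + di, j + dj, k + dk]
    return c

# Hypothetical 2x2x2 wind grid (m/s)
w = np.arange(8, dtype=float).reshape(2, 2, 2)
u = trilinear(w, 0.5, 0.5, 0.5)   # value at the cell center
```

    Velocity gradients for the equations of motion can be approximated the same way, by finite differences of interpolated values.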

  15. The Formation of Asteroid Satellites in Catastrophic Impacts: Results from Numerical Simulations

    NASA Technical Reports Server (NTRS)

    Durda, D. D.; Bottke, W. F., Jr.; Enke, B. L.; Asphaug, E.; Richardson, D. C.; Leinhardt, Z. M.

    2003-01-01

    We have performed new simulations of the formation of asteroid satellites by collisions, using a combination of hydrodynamical and gravitational dynamical codes. This initial work shows that both small satellites and ejected, co-orbiting pairs are produced most favorably by moderate-energy collisions at more direct, rather than oblique, impact angles. Simulations so far seem to be able to produce systems qualitatively similar to known binaries. Asteroid satellites provide vital clues that can help us understand the physics of hypervelocity impacts, the dominant geologic process affecting large main belt asteroids. Moreover, models of satellite formation may provide constraints on the internal structures of asteroids beyond those possible from observations of satellite orbital properties alone. It is probable that most observed main-belt asteroid satellites are by-products of cratering and/or catastrophic disruption events. Several possible formation mechanisms related to collisions have been identified: (i) mutual capture following catastrophic disruption, (ii) rotational fission due to glancing impact and spin-up, and (iii) re-accretion in orbit of ejecta from large, non-catastrophic impacts. Here we present results from a systematic investigation directed toward mapping out the parameter space of the first and third of these three collisional mechanisms.

  16. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6): Simulation design and preliminary results

    DOE PAGES

    Kravitz, Benjamin S.; Robock, Alan; Tilmes, S.; ...

    2015-10-27

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more longwave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. In conclusion, this is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  17. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6): Simulation design and preliminary results

    SciTech Connect

    Kravitz, Benjamin S.; Robock, Alan; Tilmes, S.; Boucher, Olivier; English, J. M.; Irvine, Peter J.; Jones, Andrew; Lawrence, M. G.; MacCracken, Michael C.; Muri, Helene O.; Moore, John C.; Niemeier, Ulrike; Phipps, Steven J.; Sillmann, Jana; Storelvmo, Trude; Wang, Hailong; Watanabe, Shingo

    2015-10-27

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more long wave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. In conclusion, this is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  18. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given, together with a listing of the computer program written to implement them. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is presented, along with results for the matrices from the mapping effort of the San Juan National Forest. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is described, as is a proposed method for determining the reliability of change detection between two maps of the same area produced at different times.
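    The discrete multivariate techniques referred to here center on the error (confusion) matrix. A minimal sketch of two standard accuracy-assessment statistics, overall accuracy and Cohen's kappa (the matrix below is hypothetical, not from the San Juan National Forest effort):

```python
import numpy as np

def accuracy_and_kappa(error_matrix):
    """Overall accuracy and Cohen's kappa from a remote-sensing error
    (confusion) matrix: rows = map classes, columns = reference classes."""
    m = np.asarray(error_matrix, dtype=float)
    n = m.sum()
    observed = np.trace(m) / n                         # overall accuracy
    expected = (m.sum(axis=0) @ m.sum(axis=1)) / n**2  # chance agreement
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# Hypothetical 3-class error matrix (e.g., forest / water / urban)
matrix = [[45, 4, 1],
          [6, 38, 6],
          [2, 5, 43]]
acc, kappa = accuracy_and_kappa(matrix)
```

    Kappa discounts the agreement expected by chance, which is why it is preferred over raw percent accuracy when comparing maps produced under different class distributions.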

  19. Accuracy of 4D Flow Measurement of Cerebrospinal Fluid Dynamics in the Cervical Spine: An In Vitro Verification Against Numerical Simulation.

    PubMed

    Heidari Pahlavian, Soroush; Bunck, Alexander C; Thyagaraj, Suraj; Giese, Daniel; Loth, Francis; Hedderich, Dennis M; Kröger, Jan Robert; Martin, Bryn A

    2016-11-01

    Abnormal alterations in cerebrospinal fluid (CSF) flow are thought to play an important role in the pathophysiology of various craniospinal disorders such as hydrocephalus and Chiari malformation. Three-directional phase-contrast MRI (4D Flow) has been proposed as one method for quantification of CSF dynamics in healthy and disease states, but prior to further implementation of this technique, its accuracy in measuring CSF velocity magnitude and distribution must be evaluated. In this study, an MR-compatible experimental platform was developed based on an anatomically detailed 3D-printed model of the cervical subarachnoid space and subject-specific flow boundary conditions. Accuracy of 4D Flow measurements was assessed by comparison of CSF velocities obtained within the in vitro model with the numerically predicted velocities calculated from a spatially averaged computational fluid dynamics (CFD) model based on the same geometry and flow boundary conditions. Good agreement was observed between CFD and 4D Flow in terms of spatial distribution and peak magnitude of through-plane velocities, with average differences of 7.5% and 10.6% for peak systolic and diastolic velocities, respectively. Regression analysis showed lower accuracy of 4D Flow measurement at the timeframes corresponding to low CSF flow rate and poor correlation between CFD and 4D Flow in-plane velocities.
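    The comparison metrics described (average percent difference of peak velocities, plus regression of 4D Flow against CFD) can be sketched as follows; the velocity values are purely illustrative, not data from the study:

```python
import numpy as np

def percent_difference(measured, reference):
    """Mean absolute percent difference of measured peaks vs. a
    reference, as used to compare 4D Flow against CFD."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(measured - reference) / np.abs(reference))

# Hypothetical peak through-plane velocities (cm/s) at several axial planes
cfd  = np.array([2.0, 2.5, 3.0, 3.5])
flow = np.array([1.9, 2.3, 3.2, 3.4])
diff = percent_difference(flow, cfd)     # average % difference
r = np.corrcoef(flow, cfd)[0, 1]         # Pearson correlation
```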

  20. Simulation of compact circumstellar shells around Type Ia supernovae and the resulting high-velocity features

    NASA Astrophysics Data System (ADS)

    Mulligan, Brian W.; Wheeler, J. Craig

    2017-01-01

    For Type Ia supernovae that are observed prior to B-band maximum (approximately 18-20 days after the explosion), Ca absorption features are observed at velocities of order 10,000 km/s faster than the typical photospheric features. These high-velocity features weaken in the first couple of weeks, disappearing entirely by a week after B-band maximum. The source of this high-velocity material is uncertain: it may be the result of interaction between the supernova and circumstellar material, or of plumes or bullets of material ejected during the course of the explosion. We simulate the interaction between a supernova and several compact circumstellar shells, located within 0.03 solar radii of the progenitor white dwarf and having masses of 0.02 solar masses or less. We use FLASH to perform hydrodynamic simulations of the system to determine the structure of the ejecta and shell components after the interaction, then use these results to generate synthetic spectra with 1-day cadence for the first 25 days after the explosion. We compare the evolution of the velocity and pseudo-equivalent width of the Ca near-infrared triplet features in the synthetic spectra to observed values, demonstrating that these models are consistent with observations. Additionally, we fit the observed spectra of SN 2011fe (Parrent 2012, Pereira 2013) prior to B-band maximum using these models and synthetic spectra and provide an estimate for the Ca abundance within the circumstellar material, with implications for the mechanism by which the white dwarf explodes.

  1. Biofilm formation and control in a simulated spacecraft water system - Two-year results

    NASA Technical Reports Server (NTRS)

    Schultz, John R.; Taylor, Robert D.; Flanagan, David T.; Carr, Sandra E.; Bruce, Rebekah J.; Svoboda, Judy V.; Huls, M. H.; Sauer, Richard L.; Pierson, Duane L.

    1991-01-01

    The ability of iodine to maintain microbial water quality in a simulated spacecraft water system is being studied. An iodine level of about 2.0 mg/L is maintained by passing ultrapure influent water through an iodinated ion exchange resin. Six liters are withdrawn daily and the chemical and microbial quality of the water is monitored regularly. Stainless steel coupons used to monitor biofilm formation are being analyzed by culture methods, epifluorescence microscopy, and scanning electron microscopy. Results from the first two years of operation show a single episode of high bacterial colony counts in the iodinated system. This growth was apparently controlled by replacing the iodinated ion exchange resin. Scanning electron microscopy indicates that the iodine has limited but not completely eliminated the formation of biofilm during the first two years of operation. Significant microbial contamination has been present continuously in a parallel noniodinated system since the third week of operation.

  2. Multiple Frequency Contrast Source Inversion Method for Vertical Electromagnetic Profiling: 2D Simulation Results and Analyses

    NASA Astrophysics Data System (ADS)

    Li, Jinghe; Song, Linping; Liu, Qing Huo

    2016-02-01

    A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver for the 2D volume integral equation for the forward computation. The inversion technique with CSI combines the efficient FFT algorithm to speed up the matrix-vector multiplication and the stable convergence of the simultaneous multiple frequency CSI in the iteration process. As a result, this method is capable of making quantitative conductivity image reconstruction effectively for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples have been demonstrated to validate the effectiveness and capacity of the simultaneous multiple frequency CSI method for a limited array view in VEP.

  3. AeroMACS C-Band Interference Modeling and Simulation Results

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey

    2010-01-01

    A new C-band (5091-5150 MHz) airport communications system designated as the Aeronautical Mobile Airport Communications System (AeroMACS) is being planned under the Federal Aviation Administration's NextGen program. It is necessary to establish practical limits on AeroMACS transmission power from airports so that the threshold of interference into the Mobile Satellite Service (Globalstar) feeder uplinks is not exceeded. To help provide guidelines for these limits, interference models have been created with the commercial software Visualyse Professional. In this presentation, simulation results are shown for the aggregate interference power at low Earth orbit from AeroMACS transmitters at each of up to 757 airports in the United States, Canada, Mexico, and the surrounding area. Both omni-directional and sectoral antenna configurations were modeled. Effects of antenna height, beamwidth, and tilt are presented.
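    A core step in such aggregate-interference modeling is that contributions from many transmitters add in linear power units, not in decibels. A minimal sketch (the power levels are hypothetical, and no part of the Visualyse Professional modeling is reproduced here):

```python
import math

def aggregate_interference_dbw(powers_dbw):
    """Aggregate interference at the satellite receiver: individual
    contributions are converted to watts, summed, and converted back."""
    total_w = sum(10.0 ** (p / 10.0) for p in powers_dbw)
    return 10.0 * math.log10(total_w)

# Hypothetical received interference from three airports, in dBW
total = aggregate_interference_dbw([-160.0, -163.0, -166.0])
```

    With hundreds of airports in view of a low-Earth-orbit satellite, many individually negligible contributions can still sum to a level near the protection threshold, which is why per-airport power limits matter.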

  4. Statistics of interacting networks with extreme preferred degrees: Simulation results and theoretical approaches

    NASA Astrophysics Data System (ADS)

    Liu, Wenjia; Schmittmann, Beate; Zia, R. K. P.

    2012-02-01

    Network studies have played a central role in understanding many systems in nature - e.g., physical, biological, and social. So far, much of the focus has been on the statistics of networks in isolation. Yet, many networks in the world are coupled to each other. Recently, we considered this issue in the context of two interacting social networks. In particular, we studied networks with two different preferred degrees, modeling, say, introverts vs. extroverts, with a variety of ``rules for engagement.'' As a first step towards an analytically accessible theory, we restrict our attention to an ``extreme scenario'': The introverts prefer zero contacts while the extroverts like to befriend everyone in the society. In this ``maximally frustrated'' system, the degree distributions, as well as the statistics of cross-links (between the two groups), can depend sensitively on how a node (individual) creates/breaks its connections. The simulation results can be reasonably well understood in terms of an approximate theory.
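    A minimal sketch of the ``extreme scenario'' dynamics described: when an introvert is selected it cuts a random existing link, and when an extrovert is selected it adds a link to a random non-neighbor. The exact update rule here is an illustrative assumption, not the authors' code:

```python
import random

def simulate_extreme(n_intro, n_extro, steps, seed=0):
    """Toy 'extreme preferred degree' dynamics: introverts (preferred
    degree 0) cut a random link when chosen; extroverts (preferred
    degree N-1) add a link to a random non-neighbor."""
    rng = random.Random(seed)
    nodes = list(range(n_intro + n_extro))   # 0..n_intro-1 are introverts
    edges = set()                            # undirected edges (a, b), a < b
    for _ in range(steps):
        i = rng.choice(nodes)
        if i < n_intro:                      # introvert: cut a link, if any
            incident = [e for e in edges if i in e]
            if incident:
                edges.discard(rng.choice(incident))
        else:                                # extrovert: add a missing link
            others = [j for j in nodes if j != i
                      and (min(i, j), max(i, j)) not in edges]
            if others:
                j = rng.choice(others)
                edges.add((min(i, j), max(i, j)))
    return edges

edges = simulate_extreme(5, 5, 2000)
```

    Degree distributions and cross-link statistics can then be accumulated over many such runs; the frustration arises because every cross-link is simultaneously unwanted by its introvert endpoint and wanted by its extrovert endpoint.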

  5. Using Classification and Regression Trees (CART) and random forests to analyze attrition: Results from two simulations.

    PubMed

    Hayes, Timothy; Usami, Satoshi; Jacobucci, Ross; McArdle, John J

    2015-12-01

    In this article, we describe a recent development in the analysis of attrition: using classification and regression trees (CART) and random forest methods to generate inverse sampling weights. These flexible machine learning techniques have the potential to capture complex nonlinear, interactive selection models, yet to our knowledge, their performance in the missing data analysis context has never been evaluated. To assess the potential benefits of these methods, we compare their performance with commonly employed multiple imputation and complete case techniques in 2 simulations. These initial results suggest that weights computed from pruned CART analyses performed well in terms of both bias and efficiency when compared with other methods. We discuss the implications of these findings for applied researchers.
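    The weighting idea can be sketched with a deliberately tiny stand-in for CART: a single-split stump that estimates each case's probability of remaining in the study, whose inverse becomes the sampling weight. The data and split threshold are hypothetical; a real analysis would grow a full tree or random forest:

```python
import numpy as np

def stump_response_probs(x, responded, threshold):
    """One-split 'tree' (a CART stump, for illustration only): estimate
    the probability of remaining in the study within each side of the
    split on predictor x."""
    left = x <= threshold
    probs = np.empty_like(x, dtype=float)
    probs[left] = responded[left].mean()
    probs[~left] = responded[~left].mean()
    return probs

# Hypothetical data: baseline score x; 1 = stayed in study, 0 = attrited
x = np.array([1.0, 1.5, 2.0, 2.5, 6.0, 6.5, 7.0, 7.5])
stayed = np.array([1, 1, 0, 1, 0, 0, 1, 0])

p = stump_response_probs(x, stayed, threshold=4.0)
weights = 1.0 / p   # inverse sampling weights applied to completers
```

    Cases resembling dropouts receive large weights, so the weighted completer sample mimics the full baseline sample; tree-based estimates of these probabilities can capture nonlinear, interactive selection that a main-effects logistic model would miss.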

  6. Test Results From a Direct Drive Gas Reactor Simulator Coupled to a Brayton Power Conversion Unit

    NASA Technical Reports Server (NTRS)

    Hervol, David S.; Briggs, Maxwell H.; Owen, Albert K.; Bragg-Sitton, Shannon M.

    2009-01-01

    The Brayton Power Conversion Unit (BPCU) located at NASA Glenn Research Center (GRC) in Cleveland, OH is a closed-cycle system incorporating a turboalternator, recuperator, and gas cooler connected by gas ducts to an external gas heater. For this series of tests, the BPCU was modified by replacing the gas heater with the Direct Drive Gas heater (DDG). The DDG uses electric resistance heaters to simulate a fast-spectrum nuclear reactor similar to those proposed for space power applications. The combined system thermal transient behavior was the focus of these tests. The BPCU was operated at various steady-state points. At each point it was subjected to transient changes involving shaft rotational speed or DDG electrical input. This paper outlines the changes made to the test unit and describes the testing that took place along with the test results.

  7. Barred Galaxy Photometry: Comparing results from the Cananea sample with N-body simulations

    NASA Astrophysics Data System (ADS)

    Athanassoula, E.; Gadotti, D. A.; Carrasco, L.; Bosma, A.; de Souza, R. E.; Recillas, E.

    2009-11-01

    We compare the results of the photometric analysis of barred galaxies with those of a similar analysis of N-body simulations. The photometry is for a sample of nine barred galaxies observed in the J and Ks bands with the CANICA near-infrared (NIR) camera at the 2.1 m telescope of the Observatorio Astrofísico Guillermo Haro (OAGH) in Cananea, Sonora, Mexico. The comparison includes radial ellipticity profiles and surface brightness (density for the N-body galaxies) profiles along the bar major and minor axes. We find very good agreement, arguing that the exchange of angular momentum within the galaxy plays a determinant role in the evolution of barred galaxies.

  8. Experimental and simulation study results for video landmark acquisition and tracking technology

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Tietz, J. C.; Thomas, H. M.; Lowrie, J. W.

    1979-01-01

    A synopsis of related Earth observation technology is provided and includes surface-feature tracking, generic feature classification and landmark identification, and navigation by multicolor correlation. With the advent of the Space Shuttle era, the NASA role takes on new significance in that one can now conceive of dedicated Earth resources missions. Space Shuttle also provides a unique test bed for evaluating advanced sensor technology like that described in this report. As a result of this type of rationale, the FILE OSTA-1 Shuttle experiment, which grew out of the Video Landmark Acquisition and Tracking (VILAT) activity, was developed and is described in this report along with the relevant tradeoffs. In addition, a synopsis of FILE computer simulation activity is included. This synopsis relates to future required capabilities such as landmark registration, reacquisition, and tracking.

  9. Results of field trials using the NPL simulated reactor neutron field facility.

    PubMed

    Taylor, G C; Thomas, D J; Bennett, A

    2007-01-01

    The NPL simulated reactor neutron field facility provides neutron spectra similar to those found in the environs of UK gas-cooled reactors. Neutrons are generated by irradiating a thick lithium-alloy target with monoenergetic protons between 2.5 and 3.5 MeV (depending on the desired spectrum), and then moderated by a 40-cm diameter sphere of heavy water. This represents an extremely soft workplace field, with a mean neutron energy of 25 keV and, more significantly, a mean fluence to ambient dose equivalent conversion coefficient of the order of 20 pSv cm², approximately 20 times lower than those of the ISO standard calibration sources ²⁵²Cf and ²⁴¹Am-Be. Results of field trials are presented, including readings from neutron spectrometers, personal dosimeters (active and passive) and neutron area survey meters, and issues with beam monitoring are discussed.
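    The dose arithmetic behind the quoted coefficient is a one-line formula: ambient dose equivalent H*(10) is fluence times the field's fluence-to-dose conversion coefficient. A sketch with illustrative numbers (the ~400 pSv cm² value for an ISO calibration field is an assumption chosen only to be consistent with the stated factor of about 20):

```python
def ambient_dose_equivalent_psv(fluence_per_cm2, h_star_psv_cm2):
    """H*(10) = fluence * fluence-to-dose conversion coefficient."""
    return fluence_per_cm2 * h_star_psv_cm2

# Same fluence, two fields: the soft workplace field (~20 pSv cm^2)
# vs. a hypothetical hard ISO-like calibration field (~400 pSv cm^2)
soft = ambient_dose_equivalent_psv(1e6, 20.0)    # pSv
hard = ambient_dose_equivalent_psv(1e6, 400.0)   # pSv
```

    The same neutron fluence therefore delivers roughly 20 times less dose in the soft field, which is exactly why dosimeters calibrated against hard ISO sources must be re-verified in such workplace spectra.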

  10. Results of Aging Tests of Vendor-Produced Blended Feed Simulant

    SciTech Connect

    Russell, Renee L.; Buchmiller, William C.; Cantrell, Kirk J.; Peterson, Reid A.; Rinehart, Donald E.

    2009-04-21

    The Hanford Tank Waste Treatment and Immobilization Plant (WTP) is procuring through Pacific Northwest National Laboratory (PNNL) a minimum of five 3,500-gallon batches of waste simulant for Phase 1 testing in the Pretreatment Engineering Platform (PEP). To make sure that the quality of the simulant is acceptable, the production method was scaled up from laboratory-prepared simulant through 15-gallon and 250-gallon vendor-prepared simulant before embarking on the production of the 3500-gallon simulant batch by the vendor. The 3500-gallon PEP simulant batches were packaged in 250-gallon high molecular weight polyethylene totes at NOAH Technologies. The simulant was stored in an environmentally controlled warehouse at NOAH Technologies before blending or shipping. For the 15-gallon, 250-gallon, and 3500-gallon batch 0, the simulant was shipped in ambient-temperature trucks, with shipment requiring nominally 3 days. The 3500-gallon batch 1 traveled in a 70-75°F temperature-controlled truck. Typically the simulant was unloaded into a PEP receiving tank within 24 hours of receipt; the first unloading took longer, with the simulant stored outside in the interim. Physical and chemical characterization of the 250-gallon batch was necessary to determine the effect of aging on the simulant in transit from the vendor and in storage before its use in the PEP. Therefore, aging tests were conducted on the 250-gallon batch of the vendor-produced PEP blended feed simulant to identify and determine any changes to the physical characteristics of the simulant when in storage. The supernate was also chemically characterized. Four aging scenarios for the vendor-produced blended simulant were studied: 1) stored outside in a 250-gallon tote, 2) stored inside in a gallon plastic bottle, 3) stored inside in a well-mixed 5-L tank, and 4) subjected to extended temperature cycling under summer temperature conditions in a gallon plastic bottle. The following

  11. First results from ARTEMIS lunar wake crossing: observations and hybrid simulation

    NASA Astrophysics Data System (ADS)

    Plaschke, F.; Wiehle, S.; Angelopoulos, V.; Auster, H.; Georgescu, E.; Glassmeier, K.; Motschmann, U. M.; Sibeck, D. G.

    2010-12-01

    The Moon does not have an intrinsic magnetic field, and its conductivity is not sufficient to facilitate the development of an induced magnetosphere. The interaction of the Moon with the unperturbed solar wind (SW) is, hence, dominated by the absorption of SW particles on its surface and the consequent generation of a lunar wake on the night side. The SW magnetic field is essentially convected through the Moon; the pressure imbalance in the lunar wake, however, accounts for a slight increase in magnetic pressure in the lunar wake center. The wake is slowly filled up with SW particles due to their thermal motion, which generates a magnetohydrodynamic (MHD) rarefaction wave propagating away from the wake in the SW frame of reference. Over the last 3 years the Time History of Events and Macroscale Interactions During Substorms (THEMIS) mission provided excellent data helping the scientific community in drawing a detailed picture of the physical processes associated with the development of substorms in the terrestrial magnetotail. Two of the five THEMIS spacecraft are currently being sent into stationary orbits around the Moon in a follow-up mission called Acceleration, Reconnection, Turbulence and Electrodynamics of the Moon's Interaction with the Sun (ARTEMIS). The ARTEMIS P1 spacecraft (formerly THEMIS-B) recently passed through the lunar wake in a flyby maneuver on February 13, 2010. We show first results of two hybrid code simulations with static and, for the first time, dynamically changing SW input. Adapted SW monitor data from the NASA OMNI database are used as input for the simulations. During the wake crossing the spin-stabilized spacecraft P1 was in lunar shadow and, hence, its spin period cannot be determined from sun sensor data. Therefore, an eclipse-spin model is applied to bridge the gap of missing spin period data in order to recover vector measurements. A comparison of the simulation results with correctly despun magnetic field and particle measurements of

  12. Simulated microgravity inhibits the proliferation of K562 erythroleukemia cells but does not result in apoptosis

    NASA Astrophysics Data System (ADS)

    Yi, Zong-Chun; Xia, Bing; Xue, Ming; Zhang, Guang-Yao; Wang, Hong; Zhou, Hui-Min; Sun, Yan; Zhuang, Feng-Yuan

    2009-07-01

    Astronauts and experimental animals in space develop the anemia of space flight, but the underlying mechanisms are still unclear. In this study, the impact of simulated microgravity on the proliferation, cell death, cell cycle progression and cytoskeleton of erythroid progenitor-like K562 leukemia cells was observed. K562 cells were cultured in the NASA Rotary Cell Culture System (RCCS), which was used to simulate microgravity (at 15 rpm). After culture for 24 h, 48 h, 72 h, and 96 h, the densities of cells cultured in the RCCS were only 55.5%, 54.3%, 67.2% and 66.4% of those of the flask-cultured control cells, respectively. The percentages of trypan blue-stained dead cells and the percentages of apoptotic cells demonstrated no difference between RCCS-cultured cells and flask-cultured cells at all time points (from 12 h to 96 h). Compared with flask-cultured cells, RCCS culture induced an accumulation of cells at S phase concomitant with a decrease at G0/G1 and G2/M phases at 12 h. But 12 h later (from 24 h to 60 h), the distribution of cell cycle phases in RCCS-cultured cells showed no difference compared to flask-cultured cells. Consistent with the changes of cell cycle distribution, the levels of intracellular cyclins in RCCS-cultured cells changed at 12 h, with a decrease in cyclin A and increases in cyclins B, D1 and E, and then (from 24 h to 36 h) began to return to control levels. After RCCS culture for 12-36 h, the microfilaments showed an uneven and clustered distribution, and the microtubules were highly disorganized. These results indicate that RCCS-simulated microgravity can induce a transient inhibition of proliferation without resulting in apoptosis, which could be involved in the development of space flight anemia. K562 cells could be a useful model for researching the effects of microgravity on differentiation and proliferation of hematopoietic cells.

  13. Recent Simulation Results on Ring Current Dynamics Using the Comprehensive Ring Current Model

    NASA Technical Reports Server (NTRS)

    Zheng, Yihua; Zaharia, Sorin G.; Lui, Anthony T. Y.; Fok, Mei-Ching

    2010-01-01

    Plasma sheet conditions and electromagnetic field configurations are both crucial in determining ring current evolution and connection to the ionosphere. In this presentation, we investigate how different conditions of the plasma sheet distribution affect ring current properties. Results include comparative studies of 1) varying the radial distance of the plasma sheet boundary; 2) varying the local time distribution of the source population; and 3) varying the source spectra. Our results show that a source located farther away leads to a stronger ring current than a source that is closer to the Earth. The local time distribution of the source plays an important role in determining both the radial and azimuthal (local time) location of the ring current peak pressure. We found that post-midnight source locations generally lead to a stronger ring current. This finding is in agreement with Lavraud et al. [2008]. However, our results do not exhibit any simple dependence of the local time distribution of the peak ring current (within the lower energy range) on the local time distribution of the source, as suggested by Lavraud et al. [2008]. In addition, we will show how different specifications of the magnetic field in the simulation domain affect ring current dynamics in reference to the 20 November 2007 storm, including initial results on coupling the CRCM with a three-dimensional (3-D) plasma force balance code to achieve self-consistency in the magnetic field.

  14. Research on an expert system for database operation of simulation-emulation math models. Volume 1, Phase 1: Results

    NASA Technical Reports Server (NTRS)

    Kawamura, K.; Beale, G. O.; Schaffer, J. D.; Hsieh, B. J.; Padalkar, S.; Rodriguez-Moscoso, J. J.

    1985-01-01

    The results of the first phase of Research on an Expert System for Database Operation of Simulation/Emulation Math Models are described. Techniques from artificial intelligence (AI) were brought to bear on task domains of interest to NASA Marshall Space Flight Center. One such domain is simulation of spacecraft attitude control systems. Two related software systems were developed and delivered to NASA. One was a generic simulation model for spacecraft attitude control, written in FORTRAN. The second was an expert system which understands the usage of a class of spacecraft attitude control simulation software and can assist the user in running the software. This NASA Expert Simulation System (NESS), written in LISP, contains general knowledge about digital simulation, specific knowledge about the simulation software, and self knowledge.

  15. Results.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)

  16. Initial results of efficacy and safety of Sofosbuvir among Pakistani Population: A real life trial - Hepatitis Eradication Accuracy Trial of Sofosbuvir (HEATS)

    PubMed Central

    Azam, Zahid; Shoaib, Muhammad; Javed, Masood; Sarwar, Muhammad Adnan; Shaikh, Hafeezullah; Khokhar, Nasir

    2017-01-01

    Objective: The uridine nucleotide analogue sofosbuvir is a selective inhibitor of hepatitis C virus (HCV) NS5B polymerase approved for the treatment of chronic HCV infection with genotypes 1-4. The objective of the study was to evaluate the interim efficacy and safety of regimens containing Sofosbuvir (Zoval), assessed by rapid virologic response (RVR, weeks 2/4), among Pakistani patients with HCV infection. Methods: This is a multicenter, open-label, prospective observational study. Patients suffering from chronic hepatitis C infection received Sofosbuvir (Zoval) 400 mg plus ribavirin (with or without peg-interferon) for 12/24 weeks. The interim endpoint of this study was rapid virological response at week 4. Data were analyzed using SPSS version 21 for descriptive statistics. Results: A total of 573 patients with HCV infection were included in the study. The mean age of patients was 46.07 ± 11.41 years. Of the 573 patients, 535 (93.3%) were treatment-naive, 26 (4.5%) were relapsers, 7 (1.2%) were non-responders, and 5 (1.0%) were partial responders. A rapid virologic response was reported in 563 (98.2%) of patients with HCV infection after four weeks of treatment. The treatment was generally well tolerated. Conclusion: Sofosbuvir (Zoval) is effective and well tolerated in combination with ribavirin in HCV-infected patients. PMID:28367171

  17. Elastodynamic analysis of a gear pump. Part II: Meshing phenomena and simulation results

    NASA Astrophysics Data System (ADS)

    Mucchi, E.; Dalpiaz, G.; Rivola, A.

    2010-10-01

    A non-linear lumped kineto-elastodynamic model for the prediction of the dynamic behaviour of external gear pumps is presented. It takes into account the most important phenomena involved in the operation of this kind of machine. Two main sources of noise and vibration can be considered: pressure and gear meshing. The fluid pressure distribution on the gears, which is time-varying, is computed and included as a resultant external force and torque acting on the gears. Parametric excitations due to the time-varying meshing stiffness, tooth profile errors (obtained by a metrological analysis), backlash effects between meshing teeth, the lubricant squeeze, and the possibility of tooth contact on both lines of action were also included. Finally, the torsional stiffness and damping of the driving shaft and the non-linear behaviour of the hydrodynamic journal bearings were also taken into account. Model validation was carried out on the basis of experimental data concerning case accelerations and force reactions. The model can be used to analyse the pump dynamic behaviour and to identify the effects of modifications in design and operation parameters, in terms of vibration and dynamic forces. Part I is devoted to the calculation of the gear eccentricity in the steady-state condition as a result of the balance between mean pressure loads, mean meshing force, and bearing reactions, while in Part II the meshing phenomena are fully explained and the main simulation results are presented.

  18. Free-Flight Test Results of Scale Models Simulating Viking Parachute/Lander Staging

    NASA Technical Reports Server (NTRS)

    Polutchko, Robert J.

    1973-01-01

    This report presents the results of Viking Aerothermodynamics Test D4-34.0. Motion picture coverage of a number of scale-model drop tests provides the data from which time-position characteristics, as well as canopy shape and model system attitudes, are measured. These data are processed to obtain the instantaneous drag of a model simulating the Viking decelerator system during parachute staging at Mars. Through scaling laws derived prior to the test (Appendices A and B), these results are used to predict the performance of the Viking decelerator parachute during staging at Mars. The tests were performed at the NASA/Kennedy Space Center (KSC) Vertical Assembly Building (VAB). Model assemblies were dropped 300 feet to a platform in High Bay No. 3. The data consist of an edited master film (negative) which is on permanent file in the NASA/LRC Library. Principal results of this investigation indicate that for Viking parachute staging at Mars: 1. Parachute staging separation distance is always positive and continuously increasing, generally along the descent path. 2. At staging, the parachute drag coefficient is at least 55% of its pre-stage equilibrium value. Fifteen seconds later, it has recovered to its pre-stage value.

  19. Gas cooling in semi-analytic models and smoothed particle hydrodynamics simulations: are results consistent?

    NASA Astrophysics Data System (ADS)

    Saro, A.; De Lucia, G.; Borgani, S.; Dolag, K.

    2010-08-01

    We present a detailed comparison between the galaxy populations within a massive cluster, as predicted by hydrodynamical smoothed particle hydrodynamics (SPH) simulations and by a semi-analytic model (SAM) of galaxy formation. Both models include gas cooling and a simple prescription of star formation, which consists of instantaneously transforming any available cold gas into stars, while neglecting any source of energy feedback. This simplified comparison is thus not meant to be tested against observational data, but is aimed at understanding the level of agreement, at the stripped-down level considered, between two techniques that are widely used to model galaxy formation in a cosmological framework and which present complementary advantages and disadvantages. We find that, in general, galaxy populations from SAMs and SPH have similar statistical properties, in agreement with previous studies. However, when comparing galaxies on an object-by-object basis, we find a number of interesting differences: (i) the star formation histories of the brightest cluster galaxies (BCGs) from the SAM and SPH models differ significantly, with the SPH BCG exhibiting a lower level of star formation activity at low redshift, and a more intense and shorter initial burst of star formation with respect to its SAM counterpart; (ii) while all stars associated with the BCG were formed in its progenitors in the SAM used here, this holds true only for half of the final BCG stellar mass in the SPH simulation, the remaining half being contributed by tidal stripping of stars from the diffuse stellar component associated with galaxies accreted onto the cluster halo; (iii) SPH satellites can lose up to 90 per cent of the stellar mass they had at the time of accretion, due to tidal stripping, a process not included in the SAM used in this paper; (iv) in the SPH simulation, significant cooling occurs in the most massive satellite galaxies, and this lasts for up to 1 Gyr after accretion. This physical process is

  20. Near-Infrared Spectroscopic Measurements of Calf Muscle during Walking at Simulated Reduced Gravity - Preliminary Results

    NASA Technical Reports Server (NTRS)

    Ellerby, Gwenn E. C.; Lee, Stuart M. C.; Stroud, Leah; Norcross, Jason; Gernhardt, Michael; Soller, Babs R.

    2008-01-01

    Space suit design for lunar and planetary exploration can be enhanced by investigating the physiologic responses of individual muscles during locomotion in reduced gravity. Near-infrared spectroscopy (NIRS) provides a non-invasive method to study the physiology of individual muscles in ambulatory subjects during reduced gravity simulations. PURPOSE: To investigate calf muscle oxygen saturation (SmO2) and pH during reduced gravity walking at varying treadmill inclines and added mass conditions using NIRS. METHODS: Four male subjects aged 42.3 +/- 1.7 years (mean +/- SE) and weighing 77.9 +/- 2.4 kg walked at a moderate speed (3.2 +/- 0.2 km/h) on a treadmill at inclines of 0, 10, 20, and 30%. Unsuited subjects were attached to a partial gravity simulator which unloaded the subject to simulate body weight plus the additional weight of a space suit (121 kg) in lunar gravity (0.17G). Masses of 0, 11, 23, and 34 kg were added to the subject and then unloaded to maintain constant weight. Spectra were collected from the lateral gastrocnemius (LG), and SmO2 and pH were calculated using previously published methods (Yang et al. 2007, Optics Express; Soller et al. 2008, J Appl Physiol). The effects of incline and added mass on SmO2 and pH were analyzed through repeated-measures ANOVA. RESULTS: SmO2 and pH were both unchanged by added mass (p>0.05), so data from trials at the same incline were averaged. LG SmO2 decreased significantly with increasing incline (p=0.003), from 61.1 +/- 2.0% at 0% incline to 48.7 +/- 2.6% at 30% incline, while pH was unchanged by incline (p=0.12). CONCLUSION: Increasing the incline (and thus the work performed) during walking causes the LG to extract more oxygen from the blood supply, presumably to support the increased metabolic cost of uphill walking. The lack of an effect of incline on pH may indicate that, while the intensity of exercise has increased, the LG has not reached a level of work above the anaerobic threshold. In these

  1. Wolter X-Ray Microscope Computed Tomography Ray-Trace Model with Preliminary Simulation Results

    SciTech Connect

    Jackson, J A

    2006-02-27

    code, (5) description of the modeling code, (6) the results of a number of preliminary imaging simulations, and (7) recommendations for future Wolter designs and for further modeling studies.

  2. A rainfall simulation experiment on soil and water conservation measures - Undesirable results

    NASA Astrophysics Data System (ADS)

    Hösl, R.; Strauss, P.

    2012-04-01

    Sediment and nutrient inputs from agriculturally used land into surface waters are one of the main problems concerning surface water quality. On-site soil and water conservation measures have become increasingly popular over the last decades, and much research has been done on this issue. Numerous studies can be found on rainfall simulation experiments testing different conservation measures, such as no till, mulching employing different types of soil cover, and subsoiling practices. Many studies document considerable success in preventing soil erosion and enhancing water quality by implementing no till and mulching techniques on farmland, but a few studies also indicate higher erosion rates with the implementation of conservation tillage practices (Strauss et al., 2003). In May 2011 we conducted a field rainfall simulation experiment in Upper Austria to test 5 different maize cultivation techniques: no till with rough seedbed, no till with fine seedbed, mulching with disc harrow and rotary harrow, mulching with rotary harrow, and conventional tillage using plough and rotary harrow. Rough seedbed refers to the seedbed preparation at planting of the cover crops. On every plot except the conventionally managed one, cover crops (a mix of Trifolium alexandrinum, Phacelia, Raphanus sativus and Herpestes) were sown in August 2010. All plots were rained three times with deionised water (<50 μS.cm-1) for one hour at 50 mm.h-1 rainfall intensity. Surface runoff and soil erosion were measured. Additionally, soil cover by mulch was measured, as well as soil texture, bulk density, penetration resistance, surface roughness, and soil water content before and after the simulation. The simulation experiments took place about 2 weeks after seeding of maize in spring 2011. As expected, the most effective cultivation techniques for preventing soil loss proved to be the no till variants; mean erosion rate was about 0.1 kg.h-1, mean surface runoff was 29 l.h-1

  3. Democratic population decisions result in robust policy-gradient learning: a parametric study with GPU simulations.

    PubMed

    Richmond, Paul; Buesing, Lars; Giugliano, Michele; Vasilaki, Eleni

    2011-05-04

    High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task, and architectural limitations raise the question of whether investing effort in this direction may be worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which this architecture and learning rule demonstrate the best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a "non-democratic" mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons "vote" independently ("democratic") for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that they provide a speed improvement of 5x up to 42x over optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search for learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated.
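    The "democratic" population-vector readout described above can be illustrated with a short sketch. This is not the paper's GPU code; it is a toy reconstruction in which each neuron votes for its preferred direction, weighted by its firing rate, and the decision is the direction of the resulting vector sum (all rates here are invented):

```python
import numpy as np

def population_vector(preferred_angles, rates):
    # Decode a decision angle as the rate-weighted circular mean of the
    # neurons' preferred directions: each neuron "votes" independently.
    x = np.sum(rates * np.cos(preferred_angles))
    y = np.sum(rates * np.sin(preferred_angles))
    return np.arctan2(y, x)

# Eight neurons with evenly spaced preferred directions
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
# Hypothetical firing rates, tuned around 90 degrees
rates = np.exp(np.cos(angles - np.pi / 2.0))
decoded = np.degrees(population_vector(angles, rates))
print(f"decoded direction: {decoded:.1f} degrees")  # ~90.0
```

    With strong Mexican-Hat recurrence, by contrast, a single activity bump would dominate the sum, so the decision would no longer reflect independent votes.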

  4. Prediction Markets and Beliefs about Climate: Results from Agent-Based Simulations

    NASA Astrophysics Data System (ADS)

    Gilligan, J. M.; John, N. J.; van der Linden, M.

    2015-12-01

    Climate scientists have long been frustrated by the persistent doubts that a large portion of the public expresses toward the scientific consensus about anthropogenic global warming. The political and ideological polarization of this doubt led Vandenbergh, Raimi, and Gilligan [1] to propose that prediction markets for climate change might influence the opinions of those who mistrust the scientific community but do trust the power of markets. We have developed an agent-based simulation of a climate prediction market in which traders buy and sell futures contracts that will pay off at some future year with a value that depends on the global average temperature at that time. The traders form a heterogeneous population with different ideological positions, different beliefs about anthropogenic global warming, and different degrees of risk aversion. We also vary characteristics of the market, including the topology of social networks among the traders, the number of traders, and the completeness of the market. Traders adjust their beliefs about climate according to the gains and losses that they and other traders in their social network experience. This model predicts that if global temperature is predominantly driven by greenhouse gas concentrations, prediction markets will cause traders' beliefs to converge toward correctly accepting anthropogenic warming as real. This convergence is largely independent of the structure of the market and the characteristics of the population of traders. However, it may take considerable time for beliefs to converge. Conversely, if temperature does not depend on greenhouse gases, the model predicts that traders' beliefs will not converge. We will discuss the policy relevance of these results and, more generally, the use of agent-based market simulations for policy analysis regarding climate change, seasonal agricultural weather forecasts, and other applications. [1] MP Vandenbergh, KT Raimi, & JM Gilligan. UCLA Law Rev. 61, 1962 (2014).
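    The core belief dynamic, traders nudging their beliefs toward realized contract outcomes, can be caricatured in a few lines. This is a toy sketch with invented parameters; it omits the model's social networks, risk aversion, and market clearing:

```python
import numpy as np

rng = np.random.default_rng(42)

n_traders = 100
# Heterogeneous priors on P(warming is real)
beliefs = rng.uniform(0.1, 0.9, n_traders)
learning_rate = 0.1

for year in range(50):
    # Assumed data-generating process: warming is real, so the contract
    # "temperature exceeds the trend threshold" pays off 80% of the time
    payoff = 1.0 if rng.random() < 0.8 else 0.0
    # Each trader moves its belief toward the realized outcome
    beliefs += learning_rate * (payoff - beliefs)

print(f"mean belief after 50 years: {beliefs.mean():.2f}")
```

    Beliefs converge toward the empirical payoff frequency regardless of the initial priors, echoing the paper's finding that convergence is largely independent of trader characteristics; if payoffs were instead 50/50, no such agreement on warming would emerge.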

  5. Initial quality performance results using a phantom to simulate chest computed radiography.

    PubMed

    Muhogora, Wilbroad; Padovani, Renato; Msaki, Peter

    2011-01-01

    The aim of this study was to develop a homemade phantom for quantitative quality control in chest computed radiography (CR). The phantom was constructed from copper, aluminium, and polymethylmethacrylate (PMMA) plates as well as Styrofoam materials. The literature suggests that, in appropriate combinations, these materials can simulate the attenuation and scattering characteristics of the lung, heart, and mediastinum. The lung, heart, and mediastinum regions were simulated by 10 mm x 10 mm x 0.5 mm, 10 mm x 10 mm x 0.5 mm, and 10 mm x 10 mm x 1 mm copper plates, respectively. A 100 mm x 100 mm, 0.2 mm thick copper test object was positioned over each region for contrast-to-noise ratio (CNR) measurements. The phantom was exposed to x-rays generated at different tube potentials that covered settings in clinical use: 110-120 kVp (HVL=4.26-4.66 mm Al) at a source-image distance (SID) of 180 cm. An approach similar to the method recommended in digital mammography was applied to determine the CNR values of phantom images produced by a Kodak CR 850A system with post-processing turned off. Subjective contrast-detail studies were also carried out using images of the Leeds TOR CDR test object acquired under exposure conditions similar to those of the CNR measurements. For clinical kVp conditions relevant to chest radiography, the CNR was highest over the 90-100 kVp range. The CNR data correlated with the results of the contrast-detail observations. The clinical tube potentials at which the CNR is highest are regarded as optimal kVp settings. The simplicity of the phantom construction allows easy implementation of a related quality control program.
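    The CNR figure of merit used above can be made concrete with a short sketch. This is not the study's code; it applies the generic digital-mammography-style definition (mean signal minus mean background, divided by the background standard deviation) to synthetic pixel data standing in for the phantom regions:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    # Contrast-to-noise ratio:
    # (mean signal - mean background) / background standard deviation
    return (signal_roi.mean() - background_roi.mean()) / background_roi.std()

rng = np.random.default_rng(0)
# Synthetic stand-ins for a phantom region without and with the copper test object
background = rng.normal(1000.0, 20.0, size=(64, 64))   # flat-field region
test_object = rng.normal(1080.0, 20.0, size=(64, 64))  # attenuated region

print(f"CNR = {cnr(test_object, background):.1f}")
```

    In the study, such a value would be computed for each tube potential, and the kVp giving the highest CNR taken as the optimal setting.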

  6. Test Results of Level A Suits to Challenge by Chemical and Biological Warfare Agents and Simulants: Summary Report

    DTIC Science & Technology

    1998-06-01

    [Only table-of-contents fragments are available for this record: agent permeation of GB and HD through a 25-mil chemical protective glove; system tests with vapor and aerosol simulants; capability of Level A suits to protect in a chemical or biological agent environment.]

  7. Petroleum Systems of South Kara Basin: 3D stratigraphic simulation and basin modeling results

    NASA Astrophysics Data System (ADS)

    Malysheva, S.; Vasilyev, V.; Verzhbitsky, V.; Ananyev, V.; Murzin, R.; Komissarov, D.; Kosenkova, N.; Roslov, Yu.

    2012-04-01

    Petroleum systems of the South Kara Basin are still poorly studied, and hydrocarbon resource estimates vary depending on geological models and understanding of the basin evolution. The main purpose of the regional studies of the South Kara Basin was to produce a consistent model which would be able to explain the existence of the fields discovered in the area as well as to determine the most favorable hydrocarbon accumulation zones in the study area for further exploration. In the study, 3D stratigraphic simulation and basin modeling of the South Kara Basin were carried out. The stratigraphic simulation results, along with geological, geophysical, and geochemical data for the inland areas of the Yamal and Gydan peninsulas and the South Kara islands, made it possible to predict the lithological composition and distribution of source rocks, reservoirs, and seals in the Kara Sea offshore area. Based on the basin modeling results, hydrocarbon accumulations may occur in reservoir facies over a wide stratigraphic range from Jurassic to Cretaceous. The main sources for the hydrocarbons accumulated in the Neocomian and Cenomanian reservoirs of the South Kara Basin are the J3-K1 shales (the northward extension of the Bazhenov Formation and its analogs in West Siberia), as well as J1 and probably J2 shales with a predominantly marine type of kerogen (type II). Thermal and burial history reconstructions show that the Lower Cretaceous (Aptian-Albian) sediments enriched with terrigenous organic matter (kerogen of type III) and containing coaly layers could not have produced the hydrocarbon volumes needed to fill the giant Rusanovskoye and Leningradskoye gas-condensate fields, as the K1 source rocks are not mature enough. The modeling results, in particular, suggest that the geologic conditions in the South Kara Basin are favorable for further discoveries of giant fields. Although gas accumulations predominate in the basin, oil-and-gas-condensate fields (though not pure oil fields) with a significant share of liquid hydrocarbons might be present

  8. Factors influencing the probability of an incident at a junction: results from an interactive driving simulator.

    PubMed

    Alexander, Jennifer; Barham, Philip; Black, Ian

    2002-11-01

    Using data generated from a fixed-base interactive driving simulator, which was used to evaluate a driver decision aid, a model is built to predict the probability of an incident (i.e. an accident or a 'near miss') occurring as a result of a right-turn across left-hand traffic at an unsignalised junction. This can be considered to be the product of two separate probabilities, the first being the probability that the gap between a pair of vehicles in the traffic stream is accepted, and the second the probability that the time needed to cross the on-coming stream of traffic causes the time-to-collision with the nearest vehicle in this traffic stream to be less than a second. The model is developed from the results of experimental trials involving a sample of drivers, the majority of whom were aged 60 years or older, in order to demonstrate the effect of various parameters on these probabilities. The parameters considered include the size of the gap between successive vehicles, vehicle characteristics such as size, colour and velocity, driver characteristics such as age and sex, and both daytime and night-time conditions.
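    The two-factor structure of the model, P(incident) as the product of a gap-acceptance probability and a conditional time-to-collision term, can be sketched as follows. The logistic form and all coefficients are illustrative assumptions, not the fitted values from the simulator trials:

```python
import math

def p_accept(gap_s, beta0=-6.0, beta1=1.5):
    # Logistic gap-acceptance model (hypothetical coefficients):
    # longer gaps between vehicles are more likely to be accepted.
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * gap_s)))

def p_incident(gap_s, crossing_time_s):
    # P(incident) = P(gap accepted) * P(time-to-collision < 1 s | accepted).
    # Here the conditional term is simplified to a hard threshold on the
    # residual time left after crossing the oncoming stream.
    ttc = gap_s - crossing_time_s
    p_ttc_lt_1 = 1.0 if ttc < 1.0 else 0.0
    return p_accept(gap_s) * p_ttc_lt_1

print(p_incident(gap_s=4.5, crossing_time_s=4.0))  # gap accepted but TTC < 1 s
print(p_incident(gap_s=8.0, crossing_time_s=4.0))  # ample margin -> 0.0
```

    A fitted model would replace the hard threshold with a distribution over crossing times that depends on driver age, vehicle characteristics, and lighting conditions, as in the study.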

  9. Results from simulated remote-handled transuranic waste experiments at the Waste Isolation Pilot Plant (WIPP)

    SciTech Connect

    Molecke, M A

    1992-01-01

    Multi-year, simulated remote-handled transuranic waste (RH TRU, nonradioactive) experiments are being conducted underground in the Waste Isolation Pilot Plant (WIPP) facility. These experiments involve the near-reference (thermal and geometrical) testing of eight full-size RH TRU test containers emplaced into horizontal, unlined rock salt boreholes. Half of the test emplacements are partially filled with bentonite/silica-sand backfill material. All test containers were electrically heated at about 115 W each for three years, then raised to about 300 W each for the remaining time. Each test borehole was instrumented with a selection of remote-reading thermocouples, pressure gages, borehole vertical-closure gages, and vertical and horizontal borehole-diameter closure gages. Each test emplacement was also periodically opened for visual inspections of brine intrusions and any interactions with waste package materials, materials sampling, manual closure measurements, and observations of borehole changes. Effects of heat on borehole closure rates and near-field materials (metals, backfill, rock salt, and intruding brine) interactions were closely monitored as a function of time. This paper summarizes results from the first five years of in situ test operation, with supporting instrumentation and laboratory data and interpretations. Some details of RH TRU waste package materials, designs, and assorted underground test observations are also discussed. Based on the results, the tested RH TRU waste packages, materials, and emplacement geometry in unlined salt boreholes appear to be quite adequate for initial WIPP repository-phase operations.

  10. Correlations between visual test results and flying performance on the advanced simulator for pilot training (ASPT).

    PubMed

    Kruk, R; Regan, D; Beverley, K I; Longridge, T

    1981-08-01

    Looking for visual differences in pilots to account for differences in flying performance, we tested five groups of subjects: Air Force primary student jet pilots, graduating (T38 aircraft) students, Air Force pilot instructors, and two control groups made up of experienced nonpilot aircrew and nonflying civilians. This interim report compares 13 different visual test results with low-visibility landing performance on the Air Force Human Resources Laboratory ASPT simulator. Performance was assessed by the number of crashes and by the distance of the aircraft from the runway threshold at the time of the first visual flight correction. Our main finding was that, for student pilots, landing performance correlated with tracking performance for a target that changed size (as if moving in depth) and also with tracking performance for a target that moved sideways. On the other hand, landing performance correlated comparatively weakly with psychophysical thresholds for motion and contrast. For student pilots, several of the visual tests gave results that correlated with flying grades in T37 and T38 jet aircraft. Tracking tests clearly distinguished between the nonflying group and all the flying groups. On the other hand, visual threshold tests did not distinguish between nonflying and flying groups except for grating contrast, which distinguished between the nonflying group and the pilot instructors. The sideways-motion tracking task was sensitive enough to distinguish between the various flying groups.

  11. Simulation Results of the Huygens Probe Entry and Descent Trajectory Reconstruction Algorithm

    NASA Technical Reports Server (NTRS)

    Kazeminejad, B.; Atkinson, D. H.; Perez-Ayucar, M.

    2005-01-01

    Cassini/Huygens is a joint NASA/ESA mission to explore the Saturnian system. The ESA Huygens probe is scheduled to be released from the Cassini spacecraft on December 25, 2004, enter the atmosphere of Titan in January, 2005, and descend to Titan's surface using a sequence of different parachutes. To correctly interpret and correlate results from the probe science experiments and to provide a reference set of data for "ground-truthing" Orbiter remote sensing measurements, it is essential that the probe entry and descent trajectory reconstruction be performed as early as possible in the postflight data analysis phase. The Huygens Descent Trajectory Working Group (DTWG), a subgroup of the Huygens Science Working Team (HSWT), is responsible for developing a methodology and performing the entry and descent trajectory reconstruction. This paper provides an outline of the trajectory reconstruction methodology, preliminary probe trajectory retrieval test results using a simulated synthetic Huygens dataset developed by the Huygens Project Scientist Team at ESA/ESTEC, and a discussion of strategies for recovery from possible instrument failure.

  12. Results and Lessons Learned from Performance Testing of Humans in Spacesuits in Simulated Reduced Gravity

    NASA Technical Reports Server (NTRS)

    Chappell, Steven P.; Norcross, Jason R.; Gernhardt, Michael L.

    2009-01-01

    NASA's Constellation Program has plans to return to the Moon within the next 10 years. Although reaching the Moon during the Apollo Program was a remarkable human engineering achievement, fewer than 20 extravehicular activities (EVAs) were performed. Current projections indicate that the next lunar exploration program will require thousands of EVAs, which will require spacesuits that are better optimized for human performance. Limited mobility and dexterity, and the position of the center of gravity (CG) are a few of many features of the Apollo suit that required significant crew compensation to accomplish the objectives. Development of a new EVA suit system will ideally result in performance close to or better than that in shirtsleeves at 1 G, i.e., in "a suit that is a pleasure to work in, one that you would want to go out and explore in on your day off." Unlike the Shuttle program, in which only a fraction of the crew perform EVA, the Constellation program will require that all crewmembers be able to perform EVA. As a result, suits must be built to accommodate and optimize performance for a larger range of crew anthropometry, strength, and endurance. To address these concerns, NASA has begun a series of tests to better understand the factors affecting human performance and how to utilize various lunar gravity simulation environments available for testing.

  13. Simulation results of Pulse Shape Discrimination (PSD) for background reduction in INTEGRAL Spectrometer (SPI) germanium detectors

    NASA Technical Reports Server (NTRS)

    Slassi-Sennou, S. A.; Boggs, S. E.; Feffer, P. T.; Lin, R. P.

    1997-01-01

    Pulse Shape Discrimination (PSD) for background reduction will be used in the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) imaging spectrometer (SPI) to improve the sensitivity from 200 keV to 2 MeV. The observation of significant astrophysical gamma-ray lines in this energy range is expected, where the dominant component of the background is β⁻ decay in the Ge detectors due to the activation of Ge nuclei by cosmic rays. The sensitivity of the SPI will be improved by rejecting β⁻ decay events while retaining photon events. The PSD technique will distinguish between single-site and multiple-site events. Simulation results of PSD for INTEGRAL-type Ge detectors using a numerical model for pulse shape generation are presented. The model was shown to agree with the experimental results for a narrow-inner-bore, closed-end cylindrical detector. Using PSD, a sensitivity improvement factor on the order of 2.4 at 0.8 MeV is expected.
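    The single-site versus multiple-site distinction at the heart of PSD can be illustrated with a toy peak-counting discriminator. This is not the paper's numerical pulse-shape model; it is a schematic in which a localized (beta-decay-like) energy deposit produces one lobe in the current pulse while a multiple-site photon interaction produces several:

```python
import numpy as np

def count_peaks(pulse, threshold=0.2):
    # Count strict local maxima above a threshold in a noise-free pulse.
    # Multiple-site events deposit charge at several radii, producing
    # several lobes in the induced-current waveform.
    above = pulse[1:-1] > threshold
    maxima = (pulse[1:-1] > pulse[:-2]) & (pulse[1:-1] > pulse[2:])
    return int(np.sum(maxima & above))

t = np.linspace(0.0, 1.0, 201)
# Single-site (beta-like) event: one charge cloud, one lobe
single_site = np.exp(-(((t - 0.5) / 0.1) ** 2))
# Multiple-site (photon-like) event: two charge clouds, two lobes
multi_site = (0.8 * np.exp(-(((t - 0.3) / 0.05) ** 2))
              + 0.6 * np.exp(-(((t - 0.7) / 0.05) ** 2)))

print(count_peaks(single_site), count_peaks(multi_site))  # 1 2
```

    A real discriminator works on measured, noisy pulse shapes and uses the full waveform model, but the reject/retain decision has this single-versus-multiple-lobe logic at its core.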

  14. Benefits and costs of methadone treatment: results from a lifetime simulation model.

    PubMed

    Zarkin, Gary A; Dunlap, Laura J; Hicks, Katherine A; Mamo, Daniel

    2005-11-01

    Several studies have examined the benefits and costs of drug treatment; however, they have typically focused on the benefits and costs of a single treatment episode. Although beneficial for certain analyses, the results are limited because they implicitly treat drug abuse as an acute problem that can be treated in one episode. We developed a Monte Carlo simulation model that incorporates the chronic nature of drug abuse. Our model represents the progression of individuals from the general population aged 18-60 with respect to their heroin use, treatment for heroin use, criminal behavior, employment, and health care use. We also present three model scenarios representing an increase in the probability of going to treatment, an increase in the treatment length of stay, and a scenario in which drug treatment is not available to evaluate how changes in treatment parameters affect model results. We find that the benefit-cost ratio of treatment from our lifetime model (37.72) exceeds the benefit-cost ratio from a static model (4.86). The model provides a rich characterization of the dynamics of heroin use and captures the notion of heroin use as a chronic recurring condition. Similar models can be developed for other chronic diseases, such as diabetes, mental illness, or cardiovascular disease.
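    The flavor of such a lifetime simulation can be conveyed with a deliberately minimal sketch. All parameters here (entry and relapse probabilities, per-year treatment cost, and avoided harm) are invented for illustration and are not the study's estimates:

```python
import random

random.seed(1)

def simulate_lifetime(years=42, p_enter_tx=0.2, p_relapse=0.5,
                      tx_cost=4000.0, avoided_harm=15000.0):
    # One person's trajectory from age 18 to 60 (hypothetical parameters).
    # States: "using" or "in_treatment"; each treated year accrues the
    # treatment cost plus an assumed amount of avoided crime/health-care harm.
    state, cost, benefit = "using", 0.0, 0.0
    for _ in range(years):
        if state == "using":
            if random.random() < p_enter_tx:
                state = "in_treatment"
        else:
            cost += tx_cost
            benefit += avoided_harm
            if random.random() < p_relapse:
                state = "using"
    return benefit, cost

results = [simulate_lifetime() for _ in range(10000)]
ratio = sum(b for b, _ in results) / sum(c for _, c in results)
print(f"lifetime benefit-cost ratio: {ratio:.2f}")  # 3.75 by construction
```

    Because each treated year accrues a fixed cost and a fixed benefit, the ratio here equals avoided_harm / tx_cost by construction; the study's model instead derives benefits from simulated criminal behavior, employment, and health care use over the individual's lifetime.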

  15. LSP Simulation and Analytical Results on Electromagnetic Wave Scattering on Coherent Density Structures

    NASA Astrophysics Data System (ADS)

    Sotnikov, V.; Kim, T.; Lundberg, J.; Paraschiv, I.; Mehlhorn, T.

    2014-09-01

    The presence of plasma turbulence can strongly influence the propagation properties of electromagnetic signals used for surveillance and communication. In particular, we are interested in the generation of low frequency plasma density irregularities in the form of coherent vortex structures. Interchange or flute type density irregularities in magnetized plasma are associated with Rayleigh-Taylor type instability. These types of density irregularities play an important role in the refraction and scattering of high frequency electromagnetic signals propagating in the Earth's ionosphere, in high energy density physics (HEDP), and in many other applications. We will discuss scattering of high frequency electromagnetic waves on low frequency density irregularities due to the presence of vortex density structures associated with the interchange instability. We will also present PIC simulation results on EM scattering on vortex type density structures using the LSP code and compare them with analytical results. Acknowledgement: This work was supported by the Air Force Research Laboratory, the Air Force Office of Scientific Research, the Naval Research Laboratory, and NNSA/DOE grant no. DE-FC52-06NA27616 at the University of Nevada at Reno.

  16. A mathematical model and simulation results of plasma enhanced chemical vapor deposition of silicon nitride films

    NASA Astrophysics Data System (ADS)

    Konakov, S. A.; Krzhizhanovskaya, V. V.

    2015-01-01

    We developed a mathematical model of Plasma Enhanced Chemical Vapor Deposition (PECVD) of silicon nitride thin films from a SiH4-NH3-N2-Ar mixture, an important application in modern materials science. Our multiphysics model describes gas dynamics, chemical physics, plasma physics, and electrodynamics. The PECVD technology is inherently multiscale, from macroscale processes in the chemical reactor to atomic-scale surface chemistry. Our macroscale model is based on the Navier-Stokes equations for a transient laminar flow of a compressible chemically reacting gas mixture, together with mass transfer and energy balance equations, the Poisson equation for the electric potential, and electron and ion balance equations. The chemical kinetics model includes 24 species and 58 reactions: 37 in the gas phase and 21 on the surface. The deposition model consists of three stages: adsorption to the surface, diffusion along the surface, and embedding of products into the substrate. The model has been validated against experimental results obtained with the "Plasmalab System 100" reactor. We present the mathematical model and simulation results investigating the influence of flow rate and source gas proportions on silicon nitride film growth rate and chemical composition.

  17. [Implementation results of emission standards of air pollutants for thermal power plants: a numerical simulation].

    PubMed

    Wang, Zhan-Shan; Pan, Li-Bo

    2014-03-01

    An emission inventory of air pollutants from thermal power plants in 2010 was compiled. Based on this inventory, air quality under prediction scenarios implementing the 2003 version of the emission standard and the new emission standard was simulated using Models-3/CMAQ. The concentrations of NO2, SO2, and PM2.5 and the deposition of nitrogen and sulfur in 2015 and 2020 were predicted to investigate the regional air quality improvement under the new emission standard. The results showed that the new emission standard could effectively improve air quality in China. Compared with the implementation results of the 2003 version of the emission standard, by 2015 and 2020 the area with NO2 concentration exceeding the standard would be reduced by 53.9% and 55.2%, the area with SO2 concentration exceeding the standard would be reduced by 40.0%, the area with nitrogen deposition higher than 1.0 t·km⁻² would be reduced by 75.4% and 77.9%, and the area with sulfur deposition higher than 1.6 t·km⁻² would be reduced by 37.1% and 34.3%, respectively.

  18. CSSC Fish Barrier Simulated Rescuer Touch Point Results, Operating Guidance, and Recommendations for Rescuer Safety

    DTIC Science & Technology

    2011-09-01

    ...electrode on the wet end (hook end) to simulate the PIW, and lashed it to the life ring to provide flotation during towing. The dry end was wrapped in foil...

  19. Simulation Framework for Rapid Entry, Descent, and Landing (EDL) Analysis, Phase 2 Results

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.

    2011-01-01

    The NASA Engineering and Safety Center (NESC) was requested to establish the Simulation Framework for Rapid Entry, Descent, and Landing (EDL) Analysis assessment, which involved development of an enhanced simulation architecture using the Program to Optimize Simulated Trajectories II simulation tool. The assessment was requested to enhance the capability of the Agency to provide rapid evaluation of EDL characteristics in systems analysis studies, preliminary design, mission development and execution, and time-critical assessments. Many of the new simulation framework capabilities were developed to support the Agency EDL-Systems Analysis (SA) team that is conducting studies of the technologies and architectures that are required to enable human and higher mass robotic missions to Mars. The findings, observations, and recommendations from the NESC are provided in this report.

  20. Feature Extraction from Simulations and Experiments: Preliminary Results Using a Fluid Mix Problem

    SciTech Connect

    Kamath, C; Nguyen, T

    2005-01-04

    Code validation, or comparing the output of computer simulations to experiments, is necessary to determine which simulation is a better approximation to an experiment. It can also be used to determine how the input parameters in a simulation can be modified to yield output that is closer to the experiment. In this report, we discuss our experiences in the use of image processing techniques for extracting features from 2-D simulations and experiments. These features can be used in comparing the output of simulations to experiments, or to other simulations. We first describe the problem domain and the data. We next explain the need for cleaning or denoising the experimental data and discuss the performance of different techniques. Finally, we discuss the features of interest and describe how they can be extracted from the data. The focus in this report is on extracting features from experimental and simulation data for the purpose of code validation; the actual interpretation of these features and their use in code validation is left to the domain experts.
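    As a minimal illustration of the kind of cleaning step discussed (a sketch only, not the report's actual 2-D techniques), a median filter suppresses impulsive noise in experimental data while preserving edges:

    ```python
    import statistics

    # Simple 1-D median filter: replace each sample with the median of a
    # sliding window. Edges use a truncated window rather than padding.
    def median_filter(signal, window=3):
        half = window // 2
        out = []
        for i in range(len(signal)):
            lo, hi = max(0, i - half), min(len(signal), i + half + 1)
            out.append(statistics.median(signal[lo:hi]))
        return out

    noisy = [1.0, 1.1, 9.0, 1.2, 1.0, 1.1, 0.9]  # one impulsive outlier
    print(median_filter(noisy))
    ```

    The outlier at index 2 is removed without smearing the surrounding values, which is why median-type filters are often preferred to linear smoothing before feature extraction.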

  1. Chemical compatibility screening results of plastic packaging to mixed waste simulants

    SciTech Connect

    Nigrey, P.J.; Dickens, T.G.

    1995-12-01

    We have developed a chemical compatibility program for evaluating transportation packaging components for transporting mixed waste forms. We have performed the first phase of this experimental program to determine the effects of simulant mixed wastes on packaging materials. This effort involved the screening of 10 plastic materials in four liquid mixed waste simulants. The testing protocol involved exposing the respective materials to approximately 3 kGy of gamma radiation followed by 14-day exposures to the waste simulants at 60 °C. The seal materials, or rubbers, were tested using VTR (vapor transport rate) measurements, while the liner materials were tested using specific gravity as a metric. For these tests, screening criteria of approximately 1 g/m²/h for VTR and a 10% change in specific gravity were used. It was concluded that while all seal materials passed exposure to the aqueous simulant mixed waste, EPDM and SBR had the lowest VTRs. In the chlorinated hydrocarbon simulant mixed waste, only VITON passed the screening tests. In both the simulant scintillation fluid mixed waste and the ketone mixture simulant mixed waste, none of the seal materials met the screening criteria. It is anticipated that the materials with the lowest VTRs will be evaluated in the comprehensive phase of the program. For specific gravity testing of liner materials, the data showed that while all materials except polypropylene passed the screening criteria, Kel-F, HDPE, and XLPE were found to offer the greatest resistance to the combination of radiation and chemicals.

  2. Chemical and Mechanical Alteration of Fractures: Micro-Scale Simulations and Comparison to Experimental Results

    NASA Astrophysics Data System (ADS)

    Ameli, P.; Detwiler, R. L.; Elkhoury, J. E.; Morris, J. P.

    2012-12-01

    surfaces to shift away from the equilibrium location. We apply a relative rotation of the fracture surfaces to preserve force equilibrium during each iteration. The results of the model are compared with flow-through experiments conducted on fractured limestone cores and on analogue rough-surfaced KDP-glass fractures. The fracture apertures are mapped before, during (for some) and after the experiments. These detailed aperture measurements are used as input to our new coupled model. The experiments cover a wide range of transport and reaction conditions; some exhibit permeability increase due to channel formation and others exhibit fracture closure due to deformation of contacting asperities. Simulation results predict these general trends as well as the small-scale details in regions of contacting asperities. (Figure caption: an example of an aperture field under chemical and mechanical alteration; the color scale is in microns.)

  3. Preliminary results for a two-dimensional simulation of the working process of a Stirling engine

    SciTech Connect

    Makhkamov, K.K.; Ingham, D.B.

    1998-07-01

    Stirling engines have several potential advantages over existing types of engines; in particular, they can use renewable energy sources for power production, and their performance meets environmental security demands. In order to design Stirling engines properly, and to realize their potential performance, it is important to simulate their working process more accurately. At present, a series of mathematical models are used to describe the working process of Stirling engines; these are, in general, classified as models of three levels. All of these models consider one-dimensional schemes for the engine and assume uniform fluid velocity, temperature, and pressure profiles at each plane of the internal gas circuit of the engine. The use of two-dimensional CFD models can significantly extend the capabilities for detailed analysis of the complex heat transfer and gas dynamic processes which occur in the internal gas circuit, as well as in the external circuit of the engine. In this paper, a two-dimensional simplified frame (no construction walls) calculation scheme for the Stirling engine has been assumed, and the standard k-epsilon turbulence model has been used for the analysis of the engine working process. The results obtained show that the use of two-dimensional CFD models makes it possible to gain much greater insight into the fluid flow and heat transfer processes which occur in Stirling engines.

  4. Personal values and crew compatibility: Results from a 105 days simulated space mission

    NASA Astrophysics Data System (ADS)

    Sandal, Gro M.; Bye, Hege H.; van de Vijver, Fons J. R.

    2011-08-01

    On a mission to Mars the crew will experience high autonomy and inter-dependence. "Groupthink", known as a tendency to strive for consensus at the cost of considering alternative courses of action, represents a potential safety hazard. This paper addresses two aspects of "groupthink": the extent to which confined crewmembers perceive increasing convergence in personal values, and whether they attribute less tension to individual differences over time. It further examines the impact of personal values for interpersonal compatibility. These questions were investigated in a 105-day confinement study in which a multinational crew (N = 6) simulated a Mars mission. The Portrait of Crew Values Questionnaire was administered regularly to assess personal values, perceived value homogeneity, and tension attributed to value disparities. Interviews were conducted before and after the confinement. Multiple regression analysis revealed no significant changes in value homogeneity over time; rather the opposite tendency was indicated. More tension was attributed to differences in hedonism, benevolence and tradition in the last 35 days when the crew was allowed greater autonomy. Three subgroups, distinct in terms of personal values, were identified. No evidence for "groupthink" was found. The results suggest that personal values should be considered in composition of crews for long duration missions.

  5. Optimal piezoelectric beam shape for single and broadband vibration energy harvesting: Modeling, simulation and experimental results

    NASA Astrophysics Data System (ADS)

    Muthalif, Asan G. A.; Nordin, N. H. Diyana

    2015-03-01

    Harvesting energy from the surroundings has become a new trend in saving our environment. Among the established technologies are solar panels, wind turbines, and hydroelectric generators, which have grown successfully in meeting the world's energy demand. However, for low-powered electronic devices, especially those placed in remote areas, micro-scale energy harvesting is preferable. One popular method is vibration energy scavenging, which converts mechanical energy (from vibration) to electrical energy through the coupling between mechanical variables and electric or magnetic fields. As the generated voltage depends greatly on the geometry and size of the piezoelectric material, there is a need to define an optimum shape and configuration of the piezoelectric energy scavenger. In this research, mathematical derivations for a unimorph piezoelectric energy harvester are presented. Simulations are performed using MATLAB and COMSOL Multiphysics to study the effect of varying the length and shape of the beam on the generated voltage. Experimental results comparing triangular and rectangular piezoelectric beams are also presented.

  6. Wide Bandpass and Narrow Bandstop Microstrip Filters based on Hilbert fractal geometry: design and simulation results.

    PubMed

    Mezaal, Yaqeen S; Eyyuboglu, Halil T; Ali, Jawad K

    2014-01-01

    This paper presents a new Wide Bandpass Filter (WBPF) and Narrow Bandstop Filter (NBSF) incorporating two microstrip resonators, each based on the 2nd iteration of Hilbert fractal geometry. The filter type, passband or rejectband, is adjusted by the coupling gap parameter (d) between the Hilbert resonators, using a substrate with a dielectric constant of 10.8 and a thickness of 1.27 mm. Numerical simulation results, as well as a parametric study of the effect of d on filter type and frequency response, are presented. The WBPF is designed at resonant frequencies of 2 and 2.2 GHz with a bandwidth of 0.52 GHz, -28 dB return loss, and -0.125 dB insertion loss, while the NBSF is designed for a center frequency of 2.37 GHz, a rejection bandwidth of 20 MHz, -0.1873 dB return loss, and 13.746 dB insertion loss. The proposed technique offers a new alternative for constructing low-cost, high-performance filter devices suitable for a wide range of wireless communication systems.

  7. Do tanning salons adhere to new legal regulations? Results of a simulated client trial in Germany.

    PubMed

    Möllers, Tobias; Pischke, Claudia R; Zeeb, Hajo

    2016-03-01

    In August 2009 and January 2012, two regulations were passed in Germany to limit UV exposure in the general population. These regulations state that no minors are allowed to use tanning devices. Personnel of tanning salons are mandated to offer counseling regarding individual skin type, to create a dosage plan with the customer, and to provide a list describing the harmful effects of UV radiation. Furthermore, a poster of warning criteria has to be visible and readable at all times inside the tanning salon. It is unclear whether these regulations are followed by employees of tanning salons in Germany, and we are not aware of any studies examining the implementation of the regulations at individual salons. We performed a simulated client study visiting 20 tanning salons in the city-state of Bremen in 2014, using a short checklist of criteria derived from the legal requirements, to evaluate whether the requirements were followed. We found that only 20% of the tanning salons communicated adverse health effects of UV radiation in visible posters and other materials, and only 60% of the salons offered the required determination of the skin type to customers. In addition, only 60% of the salons offered to complete the required dosage plan with their customers. To conclude, our results suggest that the new regulations are insufficiently implemented in Bremen. Additional control mechanisms appear necessary to ensure that consumers are protected from the possible carcinogenic effects of excessive UV radiation.

  8. The Plasma Wake Downstream of Lunar Topographic Obstacles: Preliminary Results from 2D Particle Simulations

    NASA Technical Reports Server (NTRS)

    Zimmerman, Michael I.; Farrell, W. M.; Stubbs, T. J.; Halekas, J. S.

    2011-01-01

    Anticipating the plasma and electrical environments in permanently shadowed regions (PSRs) of the Moon is critical to understanding local processes of space weathering, surface charging, surface chemistry, volatile production and trapping, exo-ion sputtering, and charged dust transport. In the present study, we have employed the open-source XOOPIC code [1] to investigate the effects of solar wind conditions and plasma-surface interactions on the electrical environment in PSRs through fully two-dimensional particle-in-cell simulations. By direct analogy with current understanding of the global lunar wake (e.g., references), deep, near-terminator, shadowed craters are expected to produce plasma "mini-wakes" just leeward of the crater wall. The present results (e.g., Figure 1) are in agreement with previous claims that hot electrons rush into the crater void ahead of the heavier ions, forming a negative cloud of charge. Charge separation along the initial plasma-vacuum interface gives rise to an ambipolar electric field that subsequently accelerates ions into the void. However, the situation is complicated by the presence of the dynamic lunar surface, which develops an electric potential in response to local plasma currents (e.g., Figure 1a). In some regimes, wake structure is clearly affected by the presence of the charged crater floor as it seeks to achieve current balance (i.e., zero net current to the surface).

  9. Wide Bandpass and Narrow Bandstop Microstrip Filters Based on Hilbert Fractal Geometry: Design and Simulation Results

    PubMed Central

    Mezaal, Yaqeen S.; Eyyuboglu, Halil T.; Ali, Jawad K.

    2014-01-01

    This paper presents a new Wide Bandpass Filter (WBPF) and Narrow Bandstop Filter (NBSF) incorporating two microstrip resonators, each based on the 2nd iteration of Hilbert fractal geometry. The filter type, passband or rejectband, is adjusted by the coupling gap parameter (d) between the Hilbert resonators, using a substrate with a dielectric constant of 10.8 and a thickness of 1.27 mm. Numerical simulation results, as well as a parametric study of the effect of d on filter type and frequency response, are presented. The WBPF is designed at resonant frequencies of 2 and 2.2 GHz with a bandwidth of 0.52 GHz, −28 dB return loss, and −0.125 dB insertion loss, while the NBSF is designed for a center frequency of 2.37 GHz, a rejection bandwidth of 20 MHz, −0.1873 dB return loss, and 13.746 dB insertion loss. The proposed technique offers a new alternative for constructing low-cost, high-performance filter devices suitable for a wide range of wireless communication systems. PMID:25536436

  10. Biofilm formation and control in a simulated spacecraft water system - Three year results

    NASA Technical Reports Server (NTRS)

    Schultz, John R.; Flanagan, David T.; Bruce, Rebekah J.; Mudgett, Paul D.; Carr, Sandra E.; Rutz, Jeffrey A.; Huls, M. H.; Sauer, Richard L.; Pierson, Duane L.

    1992-01-01

    Two simulated spacecraft water systems are being used to evaluate the effectiveness of iodine for controlling microbial contamination within such systems. An iodine concentration of about 2.0 mg/L is maintained in one system by passing ultrapure water through an iodinated ion exchange resin. Stainless steel coupons with electropolished and mechanically polished sides are being used to monitor biofilm formation. Results after three years of operation show a single episode of significant bacterial growth in the iodinated system when the iodine level dropped to 1.9 mg/L. This growth was apparently controlled by replacing the iodinated ion exchange resin, thereby increasing the iodine level. The second batch of resin has remained effective in controlling microbial growth down to an iodine level of 1.0 mg/L. SEM indicates that the iodine has impeded but may not have completely eliminated the formation of biofilm. Metals analyses reveal some corrosion in the iodinated system after 3 years of continuous exposure. Significant microbial contamination has been present continuously in a parallel noniodinated system since the third week of operation.

  11. Simulation of natural corrosion by vapor hydration test: seven-year results

    SciTech Connect

    Luo, J.S.; Ebert, W.L.; Mazer, J.J.; Bates, J.K.

    1996-12-31

    We have investigated the alteration behavior of synthetic basalt and SRL 165 borosilicate waste glasses that had been reacted in water vapor at 70 °C for time periods up to seven years. The nature and extent of corrosion of the glasses have been determined by characterizing the reacted glass surface with optical microscopy, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and energy dispersive x-ray spectroscopy (EDS). Alteration in the 70 °C laboratory tests was compared to that which occurs at 150-200 °C and also to Hawaiian basaltic glasses 480 to 750 years old that were subaerially altered in nature. The synthetic basalt and waste glasses, both containing about 50 wt% SiO2, were found to react with water vapor to form an amorphous hydrated gel containing small amounts of clay, nearly identical to the palagonite layers formed on naturally altered basaltic glass. This result implies that the corrosion reaction in nature can be simulated with a vapor hydration test. These tests also provide a means for measuring the corrosion kinetics, which are difficult to determine by studying natural samples because alteration layers have often spalled off the samples and we have only limited knowledge of the conditions under which alteration occurred.

  12. Achieving Actionable Results from Available Inputs: Metamodels Take Building Energy Simulations One Step Further

    SciTech Connect

    Horsey, Henry; Fleming, Katherine; Ball, Brian; Long, Nicholas

    2016-08-26

    Modeling commercial building energy usage can be a difficult and time-consuming task. The increasing prevalence of optimization algorithms provides one path for reducing the time and difficulty. Many use cases remain, however, where information regarding whole-building energy usage is valuable, but the time and expertise required to run and post-process a large number of building energy simulations is intractable. A relatively underutilized option to accurately estimate building energy consumption in real time is to pre-compute large datasets of potential building energy models, and use the set of results to quickly and efficiently provide highly accurate data. This process is called metamodeling. In this paper, two case studies are presented demonstrating the successful applications of metamodeling using the open-source OpenStudio Analysis Framework. The first case study involves the U.S. Department of Energy's Asset Score Tool, specifically the Preview Asset Score Tool, which is designed to give nontechnical users a near-instantaneous estimated range of expected results based on building system-level inputs. The second case study involves estimating the potential demand response capabilities of retail buildings in Colorado. The metamodel developed in this second application not only allows for estimation of a single building's expected performance, but also can be combined with public data to estimate the aggregate DR potential across various geographic (county and state) scales. In both case studies, the unique advantages of pre-computation allow building energy models to take the place of top-down actuarial evaluations. This paper ends by exploring the benefits of using metamodels and then examines the cost-effectiveness of this approach.
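    The pre-computation idea can be sketched as follows. The toy `expensive_energy_model`, the parameter grid, and the nearest-neighbor lookup are all hypothetical stand-ins for the actual OpenStudio workflow, which uses full building energy simulations:

    ```python
    import itertools
    import math

    def expensive_energy_model(floor_area, window_ratio, insulation):
        # stand-in for a full building energy simulation (returns kWh/yr);
        # in practice this call is the slow step being pre-computed
        return floor_area * (50 + 400 * window_ratio) / insulation

    def precompute(grid_axes):
        # run the expensive model once per grid point and cache the results
        return {point: expensive_energy_model(*point)
                for point in itertools.product(*grid_axes)}

    def metamodel_estimate(table, query):
        # answer instantly via the nearest pre-computed grid point
        # (a real implementation would normalize parameters and interpolate)
        best = min(table, key=lambda p: math.dist(p, query))
        return table[best]

    axes = ([500, 1000, 2000],   # floor area, m^2
            [0.1, 0.3, 0.5],     # window-to-wall ratio
            [1.0, 2.0, 4.0])     # insulation factor
    table = precompute(axes)
    print(metamodel_estimate(table, (900, 0.25, 1.8)))
    ```

    The expensive simulations run once, offline; each subsequent query is a cheap table lookup, which is what makes near-instantaneous tools like the Preview Asset Score Tool feasible.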

  13. The Planetary Accretion Shock. I. Framework for Radiation-hydrodynamical Simulations and First Results

    NASA Astrophysics Data System (ADS)

    Marleau, Gabriel-Dominique; Klahr, Hubert; Kuiper, Rolf; Mordasini, Christoph

    2017-02-01

    The key aspect determining the postformation luminosity of gas giants has long been considered to be the energetics of the accretion shock at the surface of the planet. We use one-dimensional radiation-hydrodynamical simulations to study the radiative loss efficiency and to obtain postshock temperatures and pressures and thus entropies. The efficiency is defined as the fraction of the total incoming energy flux that escapes the system (roughly the Hill sphere), taking into account the energy recycling that occurs ahead of the shock in a radiative precursor. We focus in this paper on a constant equation of state (EOS) to isolate the shock physics but use constant and tabulated opacities. While robust quantitative results will have to await a self-consistent treatment including hydrogen dissociation and ionization, the results presented here show the correct qualitative behavior and can be understood from semianalytical calculations. The shock is found to be isothermal and supercritical for a range of conditions relevant to the core accretion formation scenario (CA), with Mach numbers M ≳ 3. Across the shock, the entropy decreases significantly by a few kB per baryon. While nearly 100% of the incoming kinetic energy is converted to radiation locally, the efficiencies are found to be as low as roughly 40%, implying that a significant fraction of the total accretion energy is brought into the planet. However, for realistic parameter combinations in the CA scenario, we find that a nonzero fraction of the luminosity always escapes the Hill sphere. This luminosity could explain, at least in part, recent observations in the young LkCa 15 and HD 100546 systems.

  14. Soil nitrogen balance under wastewater management: Field measurements and simulation results

    USGS Publications Warehouse

    Sophocleous, M.; Townsend, M.A.; Vocasek, F.; Ma, L.; KC, A.

    2009-01-01

    The use of treated wastewater for irrigation of crops could result in high nitrate-nitrogen (NO3-N) concentrations in the vadose zone and ground water. The goal of this 2-yr field-monitoring study in the deep silty clay loam soils south of Dodge City, Kansas, was to assess how and under what circumstances N from the secondary-treated, wastewater-irrigated corn reached the deep (20-45 m) water table of the underlying High Plains aquifer and what could be done to minimize this problem. We collected 15.2-m-deep soil cores for characterization of physical and chemical properties; installed neutron probe access tubes to measure soil-water content and suction lysimeters to sample soil water periodically; sampled monitoring, irrigation, and domestic wells in the area; and obtained climatic, crop, irrigation, and N application rate records for two wastewater-irrigated study sites. These data and additional information were used to run the Root Zone Water Quality Model to identify key parameters and processes that influence N losses in the study area. We demonstrated that NO3-N transport processes result in significant accumulations of N in the vadose zone and that NO3-N in the underlying ground water is increasing with time. Root Zone Water Quality Model simulations for two wastewater-irrigated study sites indicated that reducing levels of corn N fertilization by more than half to 170 kg ha-1 substantially increases N-use efficiency and achieves near-maximum crop yield. Combining such measures with a crop rotation that includes alfalfa should further reduce the accumulation and downward movement of NO3-N in the soil profile. Copyright © 2009 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.

  15. Velocity structure of a bottom simulating reflector offshore Peru: Results from full waveform inversion

    USGS Publications Warehouse

    Pecher, I.A.; Minshull, T.A.; Singh, S.C.; Von Huene, R.

    1996-01-01

    Much of our knowledge of the worldwide distribution of submarine gas hydrates comes from seismic observations of Bottom Simulating Reflectors (BSRs). Full waveform inversion has proven to be a reliable technique for studying the fine structure of BSRs using the compressional wave velocity. We applied a non-linear full waveform inversion technique to a BSR at a location offshore Peru. We first determined the large-scale features of seismic velocity variations using a statistical inversion technique to maximise coherent energy along travel-time curves. These velocities were used as a starting velocity model for the full waveform inversion, which yielded a detailed velocity/depth model in the vicinity of the BSR. We found that the data are best fit by a model in which the BSR consists of a thin, low-velocity layer. The compressional wave velocity drops from 2.15 km/s down to an average of 1.70 km/s in an 18 m thick interval, with a minimum velocity of 1.62 km/s in a 6 m interval. The resulting compressional wave velocity was used to estimate gas content in the sediments. Our results suggest that the low velocity layer is a 6-18 m thick zone containing a few percent of free gas in the pore space. The presence of the BSR coincides with a region of vertical uplift. Therefore, we suggest that gas at this BSR is formed by dissociation of hydrates at the base of the hydrate stability zone due to uplift and a subsequent decrease in pressure.

  16. Test Results from a Direct Drive Gas Reactor Simulator Coupled to a Brayton Power Conversion Unit

    NASA Technical Reports Server (NTRS)

    Hervol, David S.; Briggs, Maxwell H.; Owen, Albert K.; Bragg-Sitton, Shannon M.; Godfroy, Thomas J.

    2010-01-01

    Component level testing of power conversion units proposed for use in fission surface power systems has typically been done using relatively simple electric heaters for thermal input. These heaters do not adequately represent the geometry or response of proposed reactors. As testing of fission surface power systems transitions from the component level to the system level it becomes necessary to more accurately replicate these reactors using reactor simulators. The Direct Drive Gas-Brayton Power Conversion Unit test activity at the NASA Glenn Research Center integrates a reactor simulator with an existing Brayton test rig. The response of the reactor simulator to a change in Brayton shaft speed is shown as well as the response of the Brayton to an insertion of reactivity, corresponding to a drum reconfiguration. The lessons learned from these tests can be used to improve the design of future reactor simulators which can be used in system level fission surface power tests.

  17. The simulation of optical diagnostics for crystal growth - Models and results

    NASA Astrophysics Data System (ADS)

    Banish, M. R.; Clark, R. L.; Kathman, A. D.; Lawson, S. M.

    A computer simulation of a Two Color Holographic Interferometric (TCHI) optical system was performed using a physical (wave) optics model. This model accurately simulates propagation through time-varying, 2-D or 3-D concentration and temperature fields as a wave phenomenon. The model calculates wavefront deformations that can be used to generate fringe patterns. This simulation modeled a proposed triglycine sulphate (TGS) flight experiment by propagating through the simplified onion-like refractive index distribution of the growing crystal and calculating the recorded wavefront deformation. The phase of this wavefront was used to generate sample interferograms that map index of refraction variation. Two such fringe patterns, generated at different wavelengths, were used to extract the original temperature and concentration field characteristics within the growth chamber. This demonstrates the feasibility of the TCHI crystal growth diagnostic technique. The simulation also provides feedback to the experimental design process.

  18. ATMOSPHERIC MERCURY SIMULATION USING THE CMAQ MODEL: FORMULATION DESCRIPTION AND ANALYSIS OF WET DEPOSITION RESULTS

    EPA Science Inventory

    The Community Multiscale Air Quality (CMAQ) modeling system has recently been adapted to simulate the emission, transport, transformation and deposition of atmospheric mercury in three distinct forms; elemental mercury gas, reactive gaseous mercury, and particulate mercury. Emis...

  19. Ion velocity distribution functions in argon and helium discharges: detailed comparison of numerical simulation results and experimental data

    NASA Astrophysics Data System (ADS)

    Wang, Huihui; Sukhomlinov, Vladimir S.; Kaganovich, Igor D.; Mustafaev, Alexander S.

    2017-02-01

    Using the Monte Carlo collision method, we have performed simulations of ion velocity distribution functions (IVDFs), taking into account both elastic and charge exchange collisions of ions with atoms in uniform electric fields for argon and helium background gases. The simulation results are verified by comparison with experimental data on ion mobilities and ion transverse diffusion coefficients in argon and helium. Recently published experimental data for the first seven coefficients of the Legendre polynomial expansion of the ion energy and angular distribution functions are used to validate the simulated IVDFs. Good agreement between measured and simulated IVDFs shows that the developed simulation model can be used for accurate calculations of IVDFs.
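
    The Legendre-coefficient comparison used for validation above can be sketched as follows. This is not the authors' code; the sample angular distribution (a forward-peaked exponential in mu = cos theta) is an illustrative assumption standing in for a computed IVDF.

```python
# Sketch: first seven Legendre expansion coefficients of an ion angular
# distribution, f_l = (2l+1)/2 * integral_{-1}^{1} f(mu) P_l(mu) dmu.
# The distribution f below is hypothetical, not a simulated IVDF.

import math

def legendre(l, x):
    """P_l(x) via the Bonnet three-term recurrence."""
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def legendre_coeff(f, l, n=2000):
    """f_l by midpoint-rule quadrature over mu in [-1, 1]."""
    h = 2.0 / n
    s = sum(f(-1 + (i + 0.5) * h) * legendre(l, -1 + (i + 0.5) * h)
            for i in range(n))
    return (2 * l + 1) / 2.0 * s * h

# Forward-peaked angular distribution of ions drifting along the field.
f = lambda mu: math.exp(2.0 * mu)

coeffs = [legendre_coeff(f, l) for l in range(7)]
```

    For this f the l = 0 coefficient has the closed form (e^2 - e^-2)/4, which provides a quick check on the quadrature.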

  20. Hamiltonian and potentials in derivative pricing models: exact results and lattice simulations

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Corianò, Claudio; Srikant, Marakani

    2004-03-01

    The pricing of options, warrants, and other derivative securities is one of the great successes of financial economics. These financial products can be modeled and simulated using quantum mechanical techniques based on a Hamiltonian formulation. We show applications of these methods to the pricing of options for various potentials, which we have simulated via lattice Langevin and Monte Carlo algorithms. We focus on barrier and other path-dependent options, showing in some detail the computational strategies involved.
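
    As a minimal illustration of the path-dependent options discussed above, the sketch below prices a down-and-out barrier call by Monte Carlo simulation. It assumes plain geometric Brownian motion rather than the paper's Hamiltonian/lattice Langevin machinery, and all parameter values are illustrative.

```python
# Sketch: Monte Carlo pricing of a down-and-out barrier call. A path is
# "knocked out" (pays nothing) if it ever touches the barrier B; otherwise
# it pays max(S_T - K, 0). Discretely monitored GBM; illustrative only.

import math
import random

def barrier_call_mc(S0, K, B, r, sigma, T, n_steps=100, n_paths=10000, seed=1):
    """Discounted mean payoff of a down-and-out call under GBM."""
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, knocked_out = S0, False
        for _ in range(n_steps):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if s <= B:         # barrier hit: option is extinguished
                knocked_out = True
                break
        if not knocked_out:
            payoff_sum += max(s - K, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

price = barrier_call_mc(S0=100.0, K=100.0, B=80.0, r=0.05, sigma=0.2, T=1.0)
```

    With a barrier well below the spot, the result stays close to the vanilla Black-Scholes call value; raising the barrier toward the spot drives the price toward zero, which is a convenient sanity check.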

  1. Transient analysis of distribution class Adaptive Var Compensators: Simulation and field test results

    SciTech Connect

    Kagalwala, R.A.; Venkata, S.S.; El-Sharkawi, M.A.; Butler, N.G.; Van Leuven, A.; Rodriguez, A.P.; Kerszenbaum, I.; Smith, D.

    1995-04-01

    Simulation studies are performed to analyze the transient behavior of the Adaptive Var Compensator (AVC), a power electronic device installed at the distribution level, during its design, installation, and field testing stages. The simulation model includes detailed models for power apparatus, power semiconductor devices, and low-signal-level electronics. Using this model, a wide range of simulation studies contributing to the development of the AVC and its effectiveness in the field can all be performed on the same platform. A new power electronics simulator, SABER, has proven to be very effective for this study because of its model-independent structure and an extensive library covering various disciplines of engineering. The simulation studies are aimed at gaining a better understanding of the interaction between the AVC and the distribution system. They cover a range of phenomena such as switching transients due to mechanical capacitor bank closing, fast transients due to reverse recovery of the AVC's power diodes, power system harmonics, and voltage flicker problems. This paper also briefly describes the criteria for selecting the simulation tool and the models developed.

  2. NPE 2010 results - Independent performance assessment by simulated CTBT violation scenarios

    NASA Astrophysics Data System (ADS)

    Ross, O.; Bönnemann, C.; Ceranna, L.; Gestermann, N.; Hartmann, G.; Plenefisch, T.

    2012-04-01

    earthquakes by seismological analysis. The remaining event, at Black Thunder Mine, Wyoming, on 23 Oct at 21:15 UTC, showed clear explosion characteristics. It also caused infrasound detections at one station in Canada. An infrasonic single-station localization algorithm led to event localization results comparable in precision to the teleseismic localization. However, the analysis of regional seismological stations gave the most accurate result, yielding an error ellipse of about 60 square kilometers. Finally, a forward atmospheric transport modeling (ATM) simulation was performed with the candidate event as source in order to reproduce the original detection scenario. The ATM results showed a simulated station fingerprint in the IMS very similar to the fictitious detections given in the NPE 2010 scenario, an additional confirmation that the event was correctly identified. The event analysis of NPE 2010 shown here serves as a successful example of data fusion between radionuclide detection technology supported by ATM, seismological methodology, and infrasound signal processing.

  3. Multiple Hypothesis Tracking (MHT) for Space Surveillance: Results and Simulation Studies

    NASA Astrophysics Data System (ADS)

    Singh, N.; Poore, A.; Sheaff, C.; Aristoff, J.; Jah, M.

    2013-09-01

    tracking performance compared to existing methods at a lower computational cost, especially for closely-spaced objects, in realistic multi-sensor multi-object tracking scenarios over multiple regimes of space. Specifically, we demonstrate that the prototype MHT system can accurately and efficiently process tens of thousands of UCTs and angles-only UCOs emanating from thousands of objects in LEO, GEO, MEO and HELO, many of which are closely-spaced, in real-time on a single laptop computer, thereby making it well-suited for large-scale breakup and tracking scenarios. This is possible in part because complexity reduction techniques are used to control the runtime of MHT without sacrificing accuracy. We assess the performance of MHT in relation to other tracking methods in multi-target, multi-sensor scenarios ranging from easy to difficult (i.e., widely-spaced objects to closely-spaced objects), using realistic physics and probabilities of detection less than one. In LEO, it is shown that the MHT system is able to address the challenges of processing breakups by analyzing multiple frames of data simultaneously in order to improve association decisions, reduce cross-tagging, and reduce unassociated UCTs. As a result, the multi-frame MHT system can establish orbits up to ten times faster than single-frame methods. Finally, it is shown that in GEO, MEO and HELO, the MHT system is able to address the challenges of processing angles-only optical observations by providing a unified multi-frame framework.

  4. Direct Numerical Simulation of Liquid Nozzle Spray with Comparison to Shadowgraphy and X-Ray Computed Tomography Experimental Results

    NASA Astrophysics Data System (ADS)

    van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis

    2014-11-01

    In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al. (JCP 227 (2008)) and employ the volume of fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second-order accurate, un-split, conservative, three-dimensional VOF scheme providing second-order density fluxes and capable of robust and accurate high-density-ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.

  5. Particle-In-Cell (PIC) code simulation results and comparison with theory scaling laws for photoelectron-generated radiation

    SciTech Connect

    Dipp, T.M.

    1993-12-01

    The generation of radiation via photoelectrons induced off a conducting surface was explored using Particle-In-Cell (PIC) code computer simulations. Using the MAGIC PIC code, the simulations were performed in one dimension to handle the diverse scale lengths of the particles and fields in the problem. The simulations involved monoenergetic, nonrelativistic photoelectrons emitted normal to the illuminated conducting surface. A sinusoidal, 100% modulated, 6.3263 ns pulse train, as well as unmodulated emission, was used to explore the behavior of the particles, fields, and generated radiation. A special postprocessor was written to convert the PIC-code-simulated electron sheath into far-field radiation parameters by means of rigorous retarded-time calculations. The results of the small-spot PIC simulations were used to generate various graphs showing resonance and nonresonance radiation quantities such as radiated lobe patterns, frequency, and power. A database of PIC simulation results was created and, using a nonlinear curve-fitting program, compared with theoretical scaling laws. Overall, the small-spot behavior predicted by the theoretical scaling laws was generally observed in the PIC simulation data, providing confidence in both the scaling laws and the PIC simulations.
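
    The final step above, fitting a database of simulation results against theoretical scaling laws with a curve-fitting program, can be sketched with a simple log-log least-squares fit. The power-law form and the data points below are synthetic assumptions chosen to show the procedure, not values from the paper's database.

```python
# Sketch: fitting simulation output to a power-law scaling y = c * x**a
# by linear regression in log-log space. Data are synthetic, generated
# from an assumed law P = 2.5 * A**3 with small multiplicative noise.

import math
import random

def fit_power_law(xs, ys):
    """Least-squares estimate of (c, a) in y = c * x**a."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    a = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    c = math.exp(my - a * mx)
    return c, a

# Synthetic "simulation database": radiated power vs. drive amplitude.
rng = random.Random(0)
A = [0.5 * k for k in range(1, 9)]
P = [2.5 * a ** 3 * math.exp(rng.gauss(0.0, 0.02)) for a in A]

c, alpha = fit_power_law(A, P)
```

    Recovering the assumed exponent from noisy points is the same consistency check the paper applies between PIC data and theory, just in its simplest form.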

  6. High-resolution Monte Carlo simulation of flow and conservative transport in heterogeneous porous media 2. Transport results

    USGS Publications Warehouse

    Naff, R.L.; Haley, D.F.; Sudicky, E.A.

    1998-01-01

    In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported. The transport simulations assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimating large-time macrodispersivities from cloud second-moment data, and for approximating the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported as well.
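
    The moment and macrodispersivity estimation described above can be sketched as follows in one dimension, using an analytic Gaussian cloud as a stand-in for a Monte Carlo realization; the grid, centroid, and variance values are illustrative assumptions.

```python
# Sketch: spatial moments of a sampled 1-D tracer cloud, and a longitudinal
# macrodispersivity A_L estimated from the growth of the second central
# moment with mean displacement: A_L = (1/2) d(sigma^2)/d(x_mean).

import math

def cloud_moments(xs, cs):
    """Zeroth moment (mass), centroid, and second central moment."""
    dx = xs[1] - xs[0]
    m0 = sum(cs) * dx
    mean = sum(x * c for x, c in zip(xs, cs)) * dx / m0
    var = sum((x - mean) ** 2 * c for x, c in zip(xs, cs)) * dx / m0
    return m0, mean, var

def gaussian_cloud(xs, center, var):
    """Analytic stand-in for a simulated mean concentration cloud."""
    return [math.exp(-(x - center) ** 2 / (2 * var))
            / math.sqrt(2 * math.pi * var) for x in xs]

xs = [0.02 * i for i in range(20000)]  # uniform grid, 0..400 m

# Two snapshots: centroid and variance both grow as the cloud migrates.
_, x1, v1 = cloud_moments(xs, gaussian_cloud(xs, 100.0, 40.0))
_, x2, v2 = cloud_moments(xs, gaussian_cloud(xs, 200.0, 60.0))

# Finite-difference estimate of the large-time macrodispersivity.
A_L = 0.5 * (v2 - v1) / (x2 - x1)
```

    In a Monte Carlo study the same finite-difference estimate would be applied to ensemble-averaged moments, and its standard error estimated across realizations.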

  7. Results Of Copper Catalyzed Peroxide Oxidation (CCPO) Of Tank 48H Simulants

    SciTech Connect

    Peters, T. B.; Pareizs, J. M.; Newell, J. D.; Fondeur, F. F.; Nash, C. A.; White, T. L.; Fink, S. D.

    2012-12-13

    Savannah River National Laboratory (SRNL) performed a series of laboratory-scale experiments that examined copper-catalyzed hydrogen peroxide (H{sub 2}O{sub 2})-aided destruction of organic components, most notably tetraphenylborate (TPB), in Tank 48H simulant slurries. The experiments were designed with the expectation of conducting the process within existing vessels of Building 241-96H with minimal modifications to the existing equipment. Results of the experiments indicate that TPB destruction levels exceeding 99.9% are achievable, depending on the reaction conditions. A lower reaction pH provides faster reaction rates (pH 7 > pH 9 > pH 11); however, pH 9 reactions produce the smallest quantity of residual organic compounds within the limits of the species analyzed. Higher temperatures lead to faster reaction rates and smaller quantities of residual organic compounds; a processing temperature of 50°C, as part of an overall set of conditions, appears to provide a viable TPB destruction time on the order of 4 days. Higher concentrations of the copper catalyst provide faster reaction rates, but the highest copper concentration (500 mg/L) also resulted in the second-highest quantity of residual organic compounds; the data in this report suggest 100-250 mg/L as a minimum. Faster rates of H{sub 2}O{sub 2} addition lead to faster reaction rates and lower quantities of residual organic compounds. An addition rate of 0.4 mL/hour, scaled to the full vessel, is suggested for the process. For pH adjustment, SRNL recommends an acid addition rate of 42 mL/hour, scaled to the full vessel; this is the same addition rate used in the testing. Even though the TPB and phenylborates can be destroyed in a relatively short time, the residual organics will take longer to degrade to <10 mg/L. Low-level leaching of titanium occurred; however, the typical concentrations of released titanium are very low (~40 mg/L or less). A small amount of leaching under these conditions is not

  8. Composition, preparation, and gas generation results from simulated wastes of Tank 241-SY-101

    SciTech Connect

    Bryan, S.A.; Pederson, L.R.

    1994-08-01

    This document reviews the preparation and composition of simulants that have been developed to mimic the wastes temporarily stored in Tank 241-SY-101 at Hanford. The kinetics and stoichiometry of gases generated using these simulants are also compared, considering the roles of hydroxide, chloride, and transition metal ions; the identities of organic constituents; and the effects of dilution, radiation, and temperature. Work described in this report was conducted for the Flammable Gas Safety Program at Pacific Northwest Laboratory, whose purpose is to develop information necessary to mitigate potential safety hazards associated with waste tanks at the Hanford Site. The goal of this research, and of related efforts at the Georgia Institute of Technology (GIT), Argonne National Laboratory (ANL), and Westinghouse Hanford Company (WHC), is to determine the thermal and thermal/radiolytic mechanisms by which flammable and other gases are produced in Hanford wastes, emphasizing those stored in Tank 241-SY-101. A variety of Tank 241-SY-101 simulants have been developed to date. The use of simulants in laboratory testing provides a number of advantages, including elimination of radiological risks to researchers, lower experimental costs, and the ability to systematically alter simulant compositions to study the chemical mechanisms of the reactions responsible for gas generation. The earliest simulants contained the principal inorganic components of the actual waste and generally a single complexant such as N-(2-hydroxyethyl)ethylenediaminetriacetic acid (HEDTA) or ethylenediaminetetraacetic acid (EDTA). Both homogeneous and heterogeneous compositional forms were developed. Aggressive core sampling and analysis activities conducted during Windows C and E provided information that was used to design new simulants that more accurately reflect the major and minor inorganic components.

  9. MULTI-TRACER CONTROL ROOM AIR INLEAKAGE PROTOCOL AND SIMULATED PRIMARY AND EXTENDED MULTI-ZONE RESULTS.

    SciTech Connect

    DIETZ,R.N.

    2002-01-01

    The perfluorocarbon tracer (PFT) technology can be applied simultaneously across the wide range of zonal flowrates (from tens of cfm in some Control Rooms to almost 1,000,000 cfm in Turbine Buildings) to achieve the uniform tagging needed for subsequent determination of the air inleakage to, and outleakage from, all zones surrounding a plant's Control Room (CR). New types of PFT sources (Mega sources) were devised and tested to handle the unusually large flowrates in a number of HVAC zones in power stations. A review of the plans of a particular nuclear power plant and subsequent simulations of the tagging and sampling results confirm that the technology can provide the concentration measurement data needed to quantitatively determine, with minimal uncertainty, the important ventilation pathways involving the Control Room and its air flow communications with all adjacent zones. Depending on need, a simple single- or 3-zone scheme (involving the Control Room alone or along with the Aux. Bldg. and Turbine Bldg.) or a more complex test involving up to 7 zones simultaneously can be accommodated with the current revisions to the technology; to test all the possible flow pathways, several different combinations of up to 7 zones would need to be run. The potential exists that, for an appropriate investment, in about 2 years it would be possible to completely evaluate an entire power plant in a single extended multizone test with up to 12 to 13 separate HVAC zones. With multiple samplers in the Control Room near each of the contiguous zones, not only will the prevalent inleakage or outleakage zones be documented, but the particular location of the pathway's room of ingress can be identified. The suggested protocol is to perform a 3-zone test involving the Control Room, Aux. Bldg., and Turbine Bldg. to (1) verify CR total inleakage and (2) proportion that inleakage to distinguish that from the other 2 major buildings and any remaining untagged locations
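
    For a well-mixed steady state, the zonal bookkeeping behind such a protocol reduces to a simple tracer mass balance: each zone is tagged with its own PFT, and the flow from zone j into the Control Room follows from the CR concentration of zone j's tracer. The zone names, source rates, and concentrations below are hypothetical, chosen only to show the arithmetic; none are from the study.

```python
# Sketch: steady-state, well-mixed multizone tracer balance (assumed
# simplification, not the study's full analysis). Units: sources in
# mL tracer/min, concentrations in mL tracer per m^3 of air.

# Tracer source rate in each tagged zone (hypothetical).
source = {"CR": 2.0, "Aux": 50.0, "Turbine": 400.0}

# Steady concentration of each tracer in its own (home) zone.
home_conc = {"CR": 0.010, "Aux": 0.020, "Turbine": 0.016}

# Concentrations of the Aux and Turbine tracers measured in the CR.
cr_conc = {"Aux": 0.0008, "Turbine": 0.00016}

# Total CR air exchange from the CR's own tracer: Q_tot = S / C_home.
q_total = source["CR"] / home_conc["CR"]          # m^3/min

# Inleakage from zone j: its tracer reaches concentration C_j(CR) after
# dilution into q_total, so Q_j = q_total * C_j(CR) / C_j(home).
inleak = {zone: q_total * cr_conc[zone] / home_conc[zone]
          for zone in cr_conc}
```

    The same balance, written for every sampler location, is what lets the protocol proportion total CR inleakage among the adjacent buildings and any untagged pathways.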

  10. Planck 2013 results. X. HFI energetic particle effects: characterization, removal, and simulation

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bridges, M.; Bucher, M.; Burigana, C.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Girard, D.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leonardi, R.; Leroy, C.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; McGehee, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Miniussi, A.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Mottet, S.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Racine, B.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rusholme, B.; Sanselme, L.; Santos, D.; Sauvé, A.; Savini, G.; Scott, D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-11-01

    We describe the detection, interpretation, and removal of the signal resulting from interactions of high energy particles with the Planck High Frequency Instrument (HFI). There are two types of interactions: heating of the 0.1 K bolometer plate; and glitches i