Science.gov

Sample records for accuracy simulation results

  1. Equations of State for Mixtures: Results from DFT Simulations of Xenon/Ethane Mixtures Compared to High Accuracy Validation Experiments on Z

    NASA Astrophysics Data System (ADS)

    Magyar, Rudolph

    2013-06-01

We report a computational and validation study of equation of state (EOS) properties of liquid / dense plasma mixtures of xenon and ethane to explore and to illustrate the physics of the molecular-scale mixing of light elements with heavy elements. Accurate EOS models are crucial to achieve high-fidelity hydrodynamics simulations of many high-energy-density phenomena such as inertial confinement fusion and strong shock waves. While the EOS is often tabulated for separate species, the equation of state for arbitrary mixtures is generally not available, requiring properties of the mixture to be approximated by combining physical properties of the pure systems. The main goal of this study is to assess how accurate this approximation is under shock conditions. Density functional theory molecular dynamics (DFT-MD) at elevated temperature and pressure is used to assess the thermodynamics of the xenon-ethane mixture. The simulations are unbiased as to elemental species and therefore provide comparable accuracy when describing total energies, pressures, and other physical properties of mixtures as they do for pure systems. In addition, we have performed shock compression experiments using the Sandia Z-accelerator on pure xenon, ethane, and various mixture ratios thereof. The Hugoniot results are compared to the DFT-MD results and the predictions of different rules for combining EOS tables. The DFT-based simulation results compare well with the experimental points, and it is found that a mixing rule based on pressure equilibration performs reliably well for the mixtures considered. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
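The abstract does not give the functional form of the pressure-equilibration mixing rule; below is a minimal sketch of one common reading, in which every species is brought to the shared pressure and temperature and the mass-weighted specific volumes are then summed. The toy EOS closures and all function names are hypothetical stand-ins for tabulated EOS data.

```python
def invert_eos(eos, P, T, rho_lo=1e-9, rho_hi=1e9, iters=200):
    """Find the density at which eos(rho, T) equals P by bisection,
    assuming pressure increases monotonically with density."""
    for _ in range(iters):
        mid = 0.5 * (rho_lo + rho_hi)
        if eos(mid, T) < P:
            rho_lo = mid
        else:
            rho_hi = mid
    return 0.5 * (rho_lo + rho_hi)

def mixture_density(P, T, eos_list, mass_fracs):
    """Pressure-equilibration rule: every species sees the same (P, T);
    the mixture specific volume is the mass-weighted sum of pure volumes."""
    v_mix = sum(w / invert_eos(eos, P, T)
                for eos, w in zip(eos_list, mass_fracs))
    return 1.0 / v_mix
```

With real tabulated EOS data, `invert_eos` would interpolate the table rather than bisect an analytic closure.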

  2. Accuracy of non-Newtonian Lattice Boltzmann simulations

    NASA Astrophysics Data System (ADS)

    Conrad, Daniel; Schneider, Andreas; Böhle, Martin

    2015-11-01

This work deals with the accuracy of non-Newtonian Lattice Boltzmann simulations. Previous work for Newtonian fluids indicates that, depending on the numerical value of the dimensionless collision frequency Ω, additional artificial viscosity is introduced, which negatively influences the accuracy. Since the non-Newtonian fluid behavior is incorporated through appropriate modeling of the dimensionless collision frequency, an Ω-dependent error EΩ is introduced and its influence on the overall error is investigated. Here, simulations with the SRT and the MRT model are carried out for power-law fluids in order to numerically investigate the accuracy of non-Newtonian Lattice Boltzmann simulations. A goal of this accuracy analysis is to derive a recommendation for an optimal choice of the time step size and the simulation Mach number, respectively. For the non-Newtonian case, an error estimate for EΩ in the form of a functional is derived on the basis of a series expansion of the Lattice Boltzmann equation. This functional can be solved analytically for the case of the Hagen-Poiseuille channel flow of non-Newtonian fluids. With the help of the error functional, the prediction of the global error minimum of the velocity field is excellent in regions where the EΩ error is the dominant source of error. With an optimal simulation Mach number, the simulation is about one order of magnitude more accurate. Additionally, for both collision models a detailed study of the convergence behavior of the method in the non-Newtonian case is conducted. The results show that the simulation Mach number has a major impact on the convergence rate and that second-order accuracy is not preserved for every choice of the simulation Mach number.
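The modeling step described above, mapping a power-law apparent viscosity onto the local collision frequency, can be sketched as follows. The viscosity-to-relaxation-time relation is the standard BGK lattice Boltzmann identity; the function name, default constants, and shear-rate floor are illustrative, not taken from the paper.

```python
CS2 = 1.0 / 3.0  # lattice speed of sound squared (e.g. D2Q9, D3Q19)

def collision_frequency(gamma_dot, K, n, dt=1.0, eps=1e-12):
    """Local dimensionless collision frequency Omega for a power-law fluid.

    The apparent kinematic viscosity nu = K * |gamma_dot|**(n - 1) is mapped
    onto the BGK relaxation time tau = nu / (CS2 * dt) + 1/2, Omega = dt / tau.
    A small floor eps avoids a singular viscosity at zero shear rate.
    """
    nu = K * max(abs(gamma_dot), eps) ** (n - 1.0)
    tau = nu / (CS2 * dt) + 0.5
    return dt / tau
```

For n = 1 this reduces to the Newtonian case with constant Ω; for shear-thinning fluids (n < 1) the local Ω grows with the shear rate, which is exactly why the EΩ error becomes spatially varying.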

  3. An evaluation of information retrieval accuracy with simulated OCR output

    SciTech Connect

    Croft, W.B.; Harding, S.M.; Taghva, K.; Borsack, J.

    1994-12-31

Optical Character Recognition (OCR) is a critical part of many text-based applications. Although some commercial systems use the output from OCR devices to index documents without editing, there is very little quantitative data on the impact of OCR errors on the accuracy of a text retrieval system. Because of the difficulty of constructing test collections to obtain this data, we have carried out an evaluation using simulated OCR output on a variety of databases. The results show that high quality OCR devices have little effect on the accuracy of retrieval, but low quality devices used with databases of short documents can result in significant degradation.

  4. Improving ASM stepper alignment accuracy by alignment signal intensity simulation

    NASA Astrophysics Data System (ADS)

    Li, Gerald; Pushpala, Sagar M.; Bradford, Bradley; Peng, Zezhong; Gottipati, Mohan

    1993-08-01

As photolithography technology advances into the submicron regime, the requirement for alignment accuracy also becomes much tighter. The alignment accuracy is a function of the strength of the alignment signal. Therefore, a detailed alignment signal intensity simulation for the 0.8 micrometer EPROM poly-1 layer on an ASM stepper was done based on the process of record in the fab to reduce misalignment and improve die yield. Oxide thickness variation did not have a significant impact on the alignment signal intensity. However, poly-1 thickness was the most important parameter affecting optical alignment. The real alignment intensity data versus resist thickness on production wafers were collected and showed good agreement with the simulated results. Similar results were obtained for the ONO dielectric layer at a different fab.

  5. Performance and accuracy benchmarks for a next generation geodynamo simulation

    NASA Astrophysics Data System (ADS)

    Matsui, H.

    2015-12-01

A number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field in the last twenty years. However, parameters in current dynamo models are far from realistic for the Earth's core. To approach realistic parameters for the Earth's core in geodynamo simulations, extremely large spatial resolutions are required to resolve convective turbulence and small-scale magnetic fields. To assess the next generation of dynamo models on massively parallel computers, we performed performance and accuracy benchmarks of 15 dynamo codes which employ a diverse range of discretization (spectral, finite difference, finite element, and hybrid methods) and parallelization methods. In the performance benchmark, we compare elapsed time and parallelization capability on the TACC Stampede platform, using up to 16384 processor cores. In the accuracy benchmark, we compare the resolutions required to obtain less than 1% error from the suggested solutions. The results of the performance benchmark show that codes using 2-D or 3-D parallelization models have the capability to run with 16384 processor cores. The elapsed time for Calypso and Rayleigh, two parallelized codes that use the spectral method, scales with a smaller exponent than the ideal scaling. The elapsed time of SFEMaNS, which uses finite elements and Fourier transforms, has the smallest growth with resolution and parallelization. However, the accuracy benchmark results show that SFEMaNS requires three times more degrees of freedom in each direction compared with a spherical harmonic expansion. Consequently, SFEMaNS needs more than 200 times the elapsed time of Calypso and Rayleigh with 10000 cores to obtain the same accuracy. These benchmark results indicate that the spectral method with 2-D or 3-D domain decomposition is the most promising methodology for advancing numerical dynamo simulations in the immediate future.

  6. Evaluating the Accuracy of Hessian Approximations for Direct Dynamics Simulations.

    PubMed

    Zhuang, Yu; Siebert, Matthew R; Hase, William L; Kay, Kenneth G; Ceotto, Michele

    2013-01-01

Direct dynamics simulations are a very useful and general approach for studying the atomistic properties of complex chemical systems, since an electronic structure theory representation of a system's potential energy surface is possible without the need for fitting an analytic potential energy function. In this paper, recently introduced compact finite difference (CFD) schemes for approximating the Hessian [J. Chem. Phys. 2010, 133, 074101] are tested by employing the monodromy matrix equations of motion. Several systems, including carbon dioxide and benzene, are simulated, using both analytic potential energy surfaces and on-the-fly direct dynamics. The results show, depending on the molecular system, that electronic structure theory Hessian direct dynamics can be accelerated up to 2 orders of magnitude. The CFD approximation is found to be robust enough to deal with chaotic motion, concomitant with floppy and stiff mode dynamics, Fermi resonances, and other kinds of molecular couplings. Finally, the CFD approximations allow tuning of the different CFD parameters to attain the best possible accuracy for different molecular systems. Thus, a direct dynamics simulation requiring the Hessian at every integration step may be replaced with approximate Hessian updating tuned to the appropriate accuracy. PMID:26589009
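The compact finite difference schemes themselves are not reproduced in the abstract; as the baseline they are measured against, a plain central-difference Hessian assembled from an analytic gradient can be sketched as below. This is an illustrative sketch, not the cited CFD scheme.

```python
import numpy as np

def hessian_central(grad, x, h=1e-5):
    """Central-difference Hessian from an analytic gradient.

    Costs one pair of gradient evaluations per coordinate; the result is
    symmetrized at the end to suppress finite-difference asymmetry.
    """
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        step = np.zeros(n)
        step[i] = h
        H[:, i] = (grad(x + step) - grad(x - step)) / (2.0 * h)
    return 0.5 * (H + H.T)
```

For a direct dynamics code the gradient calls would be electronic structure evaluations, which is why schemes that reuse or update Hessian information can cut the cost by the orders of magnitude quoted above.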

  7. Simulation of Local Tie Accuracy on VLBI Antennas

    NASA Technical Reports Server (NTRS)

    Kallio, Ulla; Poutanen, Markku

    2010-01-01

    We introduce a new mathematical model to compute the centering parameters of a VLBI antenna. These include the coordinates of the reference point, axis offset, orientation, and non-perpendicularity of the axes. Using the model we simulated how precisely parameters can be computed in different cases. Based on the simulation we can give some recommendations and practices to control the accuracy and reliability of the local ties at the VLBI sites.

  8. Open cherry picker simulation results

    NASA Technical Reports Server (NTRS)

    Nathan, C. A.

    1982-01-01

    The simulation program associated with a key piece of support equipment to be used to service satellites directly from the Shuttle is assessed. The Open Cherry Picker (OCP) is a manned platform mounted at the end of the remote manipulator system (RMS) and is used to enhance extra vehicular activities (EVA). The results of simulations performed on the Grumman Large Amplitude Space Simulator (LASS) and at the JSC Water Immersion Facility are summarized.

  9. Study of accuracy of precipitation measurements using simulation method

    NASA Astrophysics Data System (ADS)

    Nagy, Zoltán; Lajos, Tamás; Morvai, Krisztián

    2013-04-01

Precipitation is one of the most important meteorological parameters describing the state of the climate, and accurate measurements of precipitation are essential for deriving correct trends. The problem is that precipitation measurements are affected by systematic errors leading to an underestimation of actual precipitation, and these errors vary by type of precipitation and gauge type. It is well known that wind speed is the most important environmental factor contributing to the underestimation of actual precipitation, especially for solid precipitation. To study and correct the errors of precipitation measurements there are two basic possibilities: · use the results and conclusions of International Precipitation Measurement Intercomparisons; · build standard reference gauges (DFIR, pit gauge) and carry out one's own investigation. In 1999 the HMS attempted its own investigation and built standard reference gauges, but the cost-benefit ratio in the case of snow (use of DFIR) was very poor (we had several winters without a significant amount of snow, while the state of the DFIR was continuously deteriorating). Because of the problem mentioned above, a new approach was needed: modelling carried out by the Budapest University of Technology and Economics, Department of Fluid Mechanics, using the FLUENT 6.2 model. The ANSYS Fluent package is a featured fluid dynamics solution for modelling flow and other related physical phenomena. It provides the tools needed to describe atmospheric processes and to design and optimize new equipment. The CFD package includes solvers that accurately simulate the behaviour of a broad range of flows, from single-phase to multi-phase. The questions we wanted to answer are as follows: · How do the different types of gauges deform the airflow around themselves? · Can a quantitative estimate of the wind-induced error be given? · How does the use

  10. Poor Metacomprehension Accuracy as a Result of Inappropriate Cue Use

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Griffin, Thomas D.; Wiley, Jennifer; Anderson, Mary C. M.

    2010-01-01

    Two studies attempt to determine the causes of poor metacomprehension accuracy and then, in turn, to identify interventions that circumvent these difficulties to support effective comprehension monitoring performance. The first study explored the cues that both at-risk and typical college readers use as a basis for their metacomprehension…

  11. Accuracy vs. computational time: translating aortic simulations to the clinic.

    PubMed

    Brown, Alistair G; Shi, Yubing; Marzo, Alberto; Staicu, Cristina; Valverde, Isra; Beerbaum, Philipp; Lawford, Patricia V; Hose, D Rodney

    2012-02-01

State-of-the-art simulations of aortic haemodynamics feature full fluid-structure interaction (FSI) and coupled 0D boundary conditions. Such analyses require not only significant computational resource but also weeks to months of run time, which compromises the effectiveness of their translation to a clinical workflow. This article employs three computational fluid methodologies, of varying levels of complexity with coupled 0D boundary conditions, to simulate the haemodynamics within a patient-specific aorta. The most comprehensive model is a full FSI simulation. The simplest is a rigid-walled incompressible fluid simulation, while an alternative middle-ground approach employs a compressible fluid, tuned to elicit a response analogous to the compliance of the aortic wall. The results demonstrate that, in the context of certain clinical questions, the simpler analysis methods may capture the important characteristics of the flow field.
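The coupled 0D boundary conditions are not spelled out in the abstract; a common choice for aortic outlets is the three-element Windkessel, sketched here with a forward-Euler update. The function name, parameters, and integration scheme are illustrative assumptions, not the paper's implementation.

```python
def windkessel_pressure(flow, dt, Rp, Rd, C, Pd0=0.0):
    """Outlet pressure trace from a 3-element Windkessel (forward Euler).

    flow : iterable of volumetric flow rates Q(t) sampled at step dt.
    Rp   : proximal resistance (pressure drop seen instantly by the flow).
    Rd   : distal resistance; C : compliance of the downstream bed.
    The distal pressure Pd obeys  C * dPd/dt = Q - Pd / Rd,
    and the outlet pressure is    P = Q * Rp + Pd.
    """
    pressures, Pd = [], Pd0
    for Q in flow:
        pressures.append(Q * Rp + Pd)
        Pd += dt * (Q - Pd / Rd) / C
    return pressures
```

In a 3D solver this update runs once per time step per outlet, with Q integrated over the outlet face and P fed back as the traction boundary condition.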

  12. High-accuracy simulation-based optical proximity correction

    NASA Astrophysics Data System (ADS)

Keck, Martin C.; Henkel, Thomas; Ziebold, Ralf; Crell, Christian; Thiele, Jörg

    2003-12-01

In times of continuing aggressive shrinking of chip layouts, a thorough understanding of the pattern transfer process from layout to silicon is indispensable. We analyzed the most prominent effects limiting the control of this process for a contact-layer-like process, printing 140nm features of variable length and different proximity using 248nm lithography. Deviations of the photo mask from the ideal layout, in particular mask off-target and corner rounding, have been identified as clearly contributing to the printing behavior. In the next step, these deviations from ideal behavior were incorporated into the optical proximity correction (OPC) modeling process. Using an OPC model modified in this manner, the accuracy with which simulation describes the experimental data could be increased significantly. Further improvement in modeling the optical imaging process could be accomplished by taking into account lens aberrations of the exposure tool. This suggests a high potential to improve OPC by considering the effects mentioned, delivering a significant contribution to extending the application of OPC techniques beyond current limits.

  13. Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry

    SciTech Connect

    Bostani, Maryam McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; Mueller, Jonathon W.; Cody, Dianna D.; DeMarco, John J.

    2015-02-15

    Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.

  14. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulating models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that regional specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.
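One simple way to phrase the evaluation of a correction factor is to fit a regional bias on the simulated season lengths and compare the error before and after removing it. The additive form and the names below are an illustrative assumption, not the paper's method.

```python
import numpy as np

def evaluate_correction(simulated, observed):
    """Fit an additive regional correction on simulated season lengths
    and report RMSE against observations before and after applying it."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    residual = observed - simulated
    bias = residual.mean()                              # the correction factor
    rmse_raw = np.sqrt((residual ** 2).mean())          # error, uncorrected
    rmse_corr = np.sqrt(((residual - bias) ** 2).mean())  # error after correction
    return bias, rmse_raw, rmse_corr
```

When the model error is a uniform offset, the corrected RMSE collapses to zero; when it varies geographically, as the study reports, a constant regional factor leaves a residual, which is what an explicit evaluation of `rmse_corr` would expose.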

  15. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    SciTech Connect

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulating models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that regional specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.

  16. Simulation approach for the evaluation of tracking accuracy in radiotherapy: a preliminary study.

    PubMed

    Tanaka, Rie; Ichikawa, Katsuhiro; Mori, Shinichiro; Sanada, Sigeru

    2013-01-01

Real-time tumor tracking in external radiotherapy can be achieved by diagnostic (kV) X-ray imaging with a dynamic flat-panel detector (FPD). It is important to keep the patient dose as low as possible while maintaining tracking accuracy, and a simulation approach would be helpful to optimize the imaging conditions. This study was performed to develop a computer simulation platform, based on the noise properties of the imaging system, for the evaluation of tracking accuracy at any noise level. Flat-field images were obtained using a direct-type dynamic FPD, and noise power spectrum (NPS) analysis was performed. The relationship between incident quantum number and pixel value was addressed, and a conversion function was created. The pixel values were converted into a map of quantum numbers using the conversion function, and the map was then input to a random number generator to simulate image noise. Simulation images were produced at different noise levels by changing the incident quantum numbers. Subsequently, an implanted marker was tracked automatically and the maximum tracking errors were calculated at different noise levels. The results indicated that the maximum tracking error increased with decreasing incident quantum number in flat-field images with an implanted marker. In addition, the range of errors increased with decreasing incident quantum number. The present method could be used to determine the relationship between image noise and tracking accuracy. The results indicated that the simulation approach would aid in determining exposure dose conditions according to the necessary tracking accuracy. PMID:22843379
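The noise-injection step described above, mapping pixel values to quantum counts, resampling them, and mapping back, can be sketched with a Poisson model. The linear conversion factor and all names are assumptions standing in for the NPS-derived conversion function in the study.

```python
import numpy as np

def add_quantum_noise(image, pixel_per_quantum, dose_fraction, seed=0):
    """Simulate a lower-dose image by Poisson-resampling quantum counts.

    pixel_per_quantum : assumed linear conversion from quantum count to
                        pixel value (proxy for the NPS-based function).
    dose_fraction     : fraction of the original incident quanta (< 1
                        simulates a reduced exposure, hence more noise).
    """
    rng = np.random.default_rng(seed)
    quanta = np.asarray(image, dtype=float) / pixel_per_quantum  # expected counts
    noisy = rng.poisson(quanta * dose_fraction)                  # shot noise
    return noisy * pixel_per_quantum / dose_fraction             # rescale back
```

Rescaling by `1 / dose_fraction` keeps the mean signal level fixed while the relative noise grows as the square root of the dose reduction, so the same marker-tracking code can be run on each simulated noise level.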

  17. Criteria for the accuracy of small polaron quantum master equation in simulating excitation energy transfer dynamics

    SciTech Connect

    Chang, Hung-Tzu; Cheng, Yuan-Chung; Zhang, Pan-Pan

    2013-12-14

    The small polaron quantum master equation (SPQME) proposed by Jang et al. [J. Chem. Phys. 129, 101104 (2008)] is a promising approach to describe coherent excitation energy transfer dynamics in complex molecular systems. To determine the applicable regime of the SPQME approach, we perform a comprehensive investigation of its accuracy by comparing its simulated population dynamics with numerically exact quasi-adiabatic path integral calculations. We demonstrate that the SPQME method yields accurate dynamics in a wide parameter range. Furthermore, our results show that the accuracy of polaron theory depends strongly upon the degree of exciton delocalization and timescale of polaron formation. Finally, we propose a simple criterion to assess the applicability of the SPQME theory that ensures the reliability of practical simulations of energy transfer dynamics with SPQME in light-harvesting systems.

  18. The Impact of Sea Ice Concentration Accuracies on Climate Model Simulations with the GISS GCM

    NASA Technical Reports Server (NTRS)

    Parkinson, Claire L.; Rind, David; Healy, Richard J.; Martinson, Douglas G.; Zukor, Dorothy J. (Technical Monitor)

    2000-01-01

    The Goddard Institute for Space Studies global climate model (GISS GCM) is used to examine the sensitivity of the simulated climate to sea ice concentration specifications in the type of simulation done in the Atmospheric Modeling Intercomparison Project (AMIP), with specified oceanic boundary conditions. Results show that sea ice concentration uncertainties of +/- 7% can affect simulated regional temperatures by more than 6 C, and biases in sea ice concentrations of +7% and -7% alter simulated annually averaged global surface air temperatures by -0.10 C and +0.17 C, respectively, over those in the control simulation. The resulting 0.27 C difference in simulated annual global surface air temperatures is reduced by a third, to 0.18 C, when considering instead biases of +4% and -4%. More broadly, least-squares fits through the temperature results of 17 simulations with ice concentration input changes ranging from increases of 50% versus the control simulation to decreases of 50% yield a yearly average global impact of 0.0107 C warming for every 1% ice concentration decrease, i.e., 1.07 C warming for the full +50% to -50% range. Regionally and on a monthly average basis, the differences can be far greater, especially in the polar regions, where wintertime contrasts between the +50% and -50% cases can exceed 30 C. However, few statistically significant effects are found outside the polar latitudes, and temperature effects over the non-polar oceans tend to be under 1 C, due in part to the specification of an unvarying annual cycle of sea surface temperatures. The +/- 7% and 14% results provide bounds on the impact (on GISS GCM simulations making use of satellite data) of satellite-derived ice concentration inaccuracies, +/- 7% being the current estimated average accuracy of satellite retrievals and +/- 4% being the anticipated improved average accuracy for upcoming satellite instruments. 
Results show that the impact on simulated temperatures of imposed ice concentration

  19. Improved Accuracy of the Gravity Probe B Science Results

    NASA Astrophysics Data System (ADS)

    Conklin, John; Adams, M.; Aljadaan, A.; Aljibreen, H.; Almeshari, M.; Alsuwaidan, B.; Bencze, W.; Buchman, S.; Clarke, B.; Debra, D. B.; Everitt, C. W. F.; Heifetz, M.; Holmes, T.; Keiser, G. M.; Kolodziejczak, J.; Li, J.; Lipa, J.; Lockhart, J. M.; Muhlfelder, B.; Parkinson, B. W.; Salomon, M.; Silbergleit, A.; Solomonik, V.; Stahl, K.; Taber, M.; Turneaure, J. P.; Worden, P. W., Jr.

    This paper presents the progress in the science data analysis for the Gravity Probe B (GP-B) experiment. GP-B, sponsored by NASA and launched in April of 2004, tests two fundamental predictions of general relativity, the geodetic effect and the frame-dragging effect. The GP-B spacecraft measures the non-Newtonian drift rates of four ultra-precise cryogenic gyroscopes placed in a circular polar Low Earth Orbit. Science data was collected from 28 August 2004 until cryogen depletion on 29 September 2005. The data analysis is complicated by two unexpected phenomena, a) a continually damping gyroscope polhode affecting the calibration of the gyro readout scale factor, and b) two larger than expected classes of Newtonian torque acting on the gyroscopes. Experimental evidence strongly suggests that both effects are caused by non-uniform electric potentials (i.e. the patch effect) on the surfaces of the gyroscope rotor and its housing. At the end of 2008, the data analysis team reported intermediate results showing that the two complications are well understood and are separable from the relativity signal. Since then we have developed the final GP-B data analysis code, the "2-second Filter", which provides the most accurate and precise determination of the non-Newtonian drifts attainable in the presence of the two Newtonian torques and the fundamental instrument noise. This limit is roughly 5

  20. SPHGal: smoothed particle hydrodynamics with improved accuracy for galaxy simulations

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Yu; Naab, Thorsten; Walch, Stefanie; Moster, Benjamin P.; Oser, Ludwig

    2014-09-01

We present the smoothed particle hydrodynamics (SPH) implementation SPHGal, which combines some recently proposed improvements in GADGET. These include a pressure-entropy formulation with a Wendland kernel, a higher-order estimate of velocity gradients, a modified artificial viscosity switch with a strong limiter, and artificial conduction of thermal energy. With a series of idealized hydrodynamic tests, we show that the pressure-entropy formulation is ideal for resolving fluid mixing at contact discontinuities but performs conspicuously worse at strong shocks due to the large entropy discontinuities. Including artificial conduction at shocks greatly improves the results. In simulations of Milky Way-like disc galaxies, a feedback-induced instability develops if too much artificial viscosity is introduced. Our modified artificial viscosity scheme prevents this instability and shows efficient shock-capturing capability. We also investigate the star formation rate and the galactic outflow. The star formation rates vary slightly for different SPH schemes, while the mass loading is sensitive to the SPH scheme and significantly reduced in our favoured implementation. We compare the accretion behaviour of the hot halo gas. The formation of cold blobs, an artefact of simple SPH implementations, can be eliminated efficiently with proper fluid mixing, either by conduction and/or by using a pressure-entropy formulation.

  1. The effectiveness of FE model for increasing accuracy in stretch forming simulation of aircraft skin panels

    NASA Astrophysics Data System (ADS)

    Kono, A.; Yamada, T.; Takahashi, S.

    2013-12-01

    In the aerospace industry, stretch forming has been used to form the outer surface parts of aircraft, which are called skin panels. Empirical methods have been used to correct the springback by measuring the formed panels. However, such methods are impractical and cost prohibitive. Therefore, there is a need to develop simulation technologies to predict the springback caused by stretch forming [1]. This paper reports the results of a study on the influences of the modeling conditions and parameters on the accuracy of an FE analysis simulating the stretch forming of aircraft skin panels. The effects of the mesh aspect ratio, convergence criteria, and integration points are investigated, and better simulation conditions and parameters are proposed.

  2. Accuracy of Numerical Simulations of Tip Clearance Flow in Transonic Compressor Rotors Improved Dramatically

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors have a significant impact on rotor and stage performance. Although numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because, in high-speed machines, measurements acquired in sufficient detail to be useful are rare. Researchers at the NASA Glenn Research Center at Lewis Field compared measured tip clearance flow details (e.g., trajectory and radial extent) of the NASA Rotor 35 with results obtained from a numerical simulation. Previous investigations had focused on capturing the detailed development of the jetlike flow leaking through the clearance gap between the rotating blade tip and the stationary compressor shroud. However, we discovered that the simulation accuracy depends primarily on capturing the detailed development of a wall-bounded shear layer formed by the relative motion between the leakage jet and the shroud.

  3. Improved reticle requalification accuracy and efficiency via simulation-powered automated defect classification

    NASA Astrophysics Data System (ADS)

    Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John

    2014-04-01

    Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity due to the likelihood of human error, the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools, since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20nm node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we study and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process, specifically on defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and actual wafer prints.

  4. Grid Generation Issues and CFD Simulation Accuracy for the X33 Aerothermal Simulations

    NASA Technical Reports Server (NTRS)

    Polsky, Susan; Papadopoulos, Periklis; Davies, Carol; Loomis, Mark; Prabhu, Dinesh; Langhoff, Stephen R. (Technical Monitor)

    1997-01-01

    Grid generation issues relating to the simulation of the X33 aerothermal environment using the GASP code are explored. Required grid densities and normal grid stretching are discussed with regard to predicting the fluid dynamic and heating environments with the desired accuracy. The generation of volume grids is explored, including discussions of structured grid generation packages such as GRIDGEN, GRIDPRO and HYPGEN. Volume grid manipulation techniques for obtaining the desired outer boundary and grid clustering using the OUTBOUND code are examined. The generation of the surface grid with the required topology is also discussed. Utilizing grids without singular axes is explored as a method of avoiding numerical difficulties at the singular line.

  5. Probing the limits of accuracy in electronic structure calculations: is theory capable of results uniformly better than "chemical accuracy"?

    PubMed

    Feller, David; Peterson, Kirk A

    2007-03-21

    Current limitations in electronic structure methods are discussed from the perspective of their potential to contribute to inherent uncertainties in predictions of molecular properties, with an emphasis on atomization energies (or heats of formation). The practical difficulties arising from attempts to achieve high accuracy are illustrated via two case studies: the carbon dimer (C2) and the hydroperoxyl radical (HO2). While the HO2 wave function is dominated by a single configuration, the carbon dimer involves considerable multiconfigurational character. In addition to these two molecules, statistical results will be presented for a much larger sample of molecules drawn from the Computational Results Database. The goal of this analysis will be to determine if a combination of coupled cluster theory with large 1-particle basis sets and careful incorporation of several computationally expensive smaller corrections can yield uniform agreement with experiment to better than "chemical accuracy" (±1 kcal/mol). In the case of HO2, the best current theoretical estimate of the zero-point-inclusive, spin-orbit corrected atomization energy (ΣD0 = 166.0 ± 0.3 kcal/mol) and the most recent Active Thermochemical Table (ATcT) value (165.97 ± 0.06 kcal/mol) are in excellent agreement. For C2 the agreement is only slightly poorer, with theory (D0 = 143.7 ± 0.3 kcal/mol) almost encompassing the most recent ATcT value (144.03 ± 0.13 kcal/mol). For a larger collection of 68 molecules, a mean absolute deviation of 0.3 kcal/mol was found. The same high level of theory that produces good agreement for atomization energies also appears capable of predicting bond lengths to an accuracy of ±0.001 Å. PMID:17381194

  6. Probing the limits of accuracy in electronic structure calculations: Is theory capable of results uniformly better than ``chemical accuracy''?

    NASA Astrophysics Data System (ADS)

    Feller, David; Peterson, Kirk A.

    2007-03-01

    Current limitations in electronic structure methods are discussed from the perspective of their potential to contribute to inherent uncertainties in predictions of molecular properties, with an emphasis on atomization energies (or heats of formation). The practical difficulties arising from attempts to achieve high accuracy are illustrated via two case studies: the carbon dimer (C2) and the hydroperoxyl radical (HO2). While the HO2 wave function is dominated by a single configuration, the carbon dimer involves considerable multiconfigurational character. In addition to these two molecules, statistical results will be presented for a much larger sample of molecules drawn from the Computational Results Database. The goal of this analysis will be to determine if a combination of coupled cluster theory with large 1-particle basis sets and careful incorporation of several computationally expensive smaller corrections can yield uniform agreement with experiment to better than "chemical accuracy" (±1 kcal/mol). In the case of HO2, the best current theoretical estimate of the zero-point-inclusive, spin-orbit corrected atomization energy (ΣD0 = 166.0 ± 0.3 kcal/mol) and the most recent Active Thermochemical Table (ATcT) value (165.97 ± 0.06 kcal/mol) are in excellent agreement. For C2 the agreement is only slightly poorer, with theory (D0 = 143.7 ± 0.3 kcal/mol) almost encompassing the most recent ATcT value (144.03 ± 0.13 kcal/mol). For a larger collection of 68 molecules, a mean absolute deviation of 0.3 kcal/mol was found. The same high level of theory that produces good agreement for atomization energies also appears capable of predicting bond lengths to an accuracy of ±0.001 Å.

  7. Digital core based transmitted ultrasonic wave simulation and velocity accuracy analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Shan, Rui

    2016-06-01

    Transmitted ultrasonic wave simulation (TUWS) in a digital core is one of the important elements of digital rock physics and is used to study wave propagation in porous cores and to calculate equivalent velocity. When simulating wave propagation in a 3D digital core, two additional layers are attached to the two surfaces perpendicular to the propagation direction, and one planar wave source and two receiver arrays are properly installed. After source excitation, the two receivers record the incident and transmitted waves of the digital rock. The propagation velocity, which is taken as the velocity of the digital core, is computed from the picked peak-time difference between the two recorded waves. To evaluate the accuracy of TUWS, a digital core was fully saturated with gas, oil, and water in turn, and the corresponding velocities were calculated. The velocities increase with decreasing wave frequency within the simulated frequency band, which is considered to be the result of scattering. When the pore fluid is varied from gas to oil and finally to water, the velocity-variation characteristics across frequencies are similar and approximately follow the trend of velocities obtained from linear elastic statics simulation (LESS), which has been widely used, although the absolute values differ. The results of this paper show that transmitted ultrasonic simulation has high relative precision.
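    The velocity extraction described above, a peak-time difference between the incident and transmitted records, can be sketched as follows; the trace data, sampling interval, and core length in the example are illustrative assumptions, not values from the study:

```python
def peak_time(trace, dt):
    """Time (s) of the maximum-amplitude sample in a recorded trace."""
    i = max(range(len(trace)), key=lambda k: abs(trace[k]))
    return i * dt

def core_velocity(incident, transmitted, dt, core_length):
    """Equivalent velocity (m/s) from the peak-time difference
    between the incident and transmitted recordings."""
    delay = peak_time(transmitted, dt) - peak_time(incident, dt)
    return core_length / delay
```

    For example, a hypothetical 5 cm core sampled at 1 MHz with peaks 20 samples apart gives 0.05 / 20e-6 = 2500 m/s.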

  8. Accuracy of flowmeters measuring horizontal groundwater flow in an unconsolidated aquifer simulator.

    USGS Publications Warehouse

    Bayless, E.R.; Mandell, Wayne A.; Ursic, James R.

    2011-01-01

    Borehole flowmeters that measure horizontal flow velocity and direction of groundwater flow are being increasingly applied to a wide variety of environmental problems. This study was carried out to evaluate the measurement accuracy of several types of flowmeters in an unconsolidated aquifer simulator. Flowmeter response to hydraulic gradient, aquifer properties, and well-screen construction was measured during 2003 and 2005 at the U.S. Geological Survey Hydrologic Instrumentation Facility in Bay St. Louis, Mississippi. The flowmeters tested included a commercially available heat-pulse flowmeter, an acoustic Doppler flowmeter, a scanning colloidal borescope flowmeter, and a fluid-conductivity logging system. Results of the study indicated that at least one flowmeter was capable of measuring borehole flow velocity and direction in most simulated conditions. The mean error in direction measurements ranged from 15.1 degrees to 23.5 degrees, and the directional accuracy of all tested flowmeters improved with increasing hydraulic gradient. Darcy velocities examined in this study ranged from 4.3 to 155 ft/d. For many plots comparing the simulated and measured Darcy velocity, the squared correlation coefficient (r2) exceeded 0.92. The accuracy of velocity measurements varied with well construction and velocity magnitude. The use of horizontal flowmeters in environmental studies appears promising, but applications may require more than one type of flowmeter to span the range of conditions encountered in the field. Interpreting flowmeter data from field settings may be complicated by geologic heterogeneity, preferential flow, vertical flow, constricted screen openings, and nonoptimal screen orientation.
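    The directional-accuracy statistics quoted above require care with the 0/360-degree wraparound; a minimal sketch of a mean direction error and squared correlation coefficient (r2) computation, with hypothetical helper names, might look like:

```python
def direction_error(measured_deg, true_deg):
    """Smallest absolute angular difference, handling 0/360 wraparound."""
    d = (measured_deg - true_deg + 180.0) % 360.0 - 180.0
    return abs(d)

def mean_direction_error(measured, true):
    """Mean absolute direction error over paired measurements."""
    return sum(direction_error(m, t) for m, t in zip(measured, true)) / len(measured)

def r_squared(simulated, measured):
    """Squared Pearson correlation between simulated and measured velocities."""
    n = len(simulated)
    mx, my = sum(simulated) / n, sum(measured) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(simulated, measured))
    sxx = sum((x - mx) ** 2 for x in simulated)
    syy = sum((y - my) ** 2 for y in measured)
    return sxy * sxy / (sxx * syy)
```

    Note that a naive difference would report a 340-degree error between readings of 350 and 10 degrees, whereas the wraparound form correctly gives 20 degrees.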

  9. Deciphering the impact of uncertainty on the accuracy of large wildfire spread simulations.

    PubMed

    Benali, Akli; Ervilha, Ana R; Sá, Ana C L; Fernandes, Paulo M; Pinto, Renata M S; Trigo, Ricardo M; Pereira, José M C

    2016-11-01

    Predicting wildfire spread is a challenging task fraught with uncertainties. 'Perfect' predictions are unfeasible since uncertainties will always be present. Improving fire spread predictions is important to reduce its negative environmental impacts. Here, we propose to understand, characterize, and quantify the impact of uncertainty in the accuracy of fire spread predictions for very large wildfires. We frame this work from the perspective of the major problems commonly faced by fire model users, namely the necessity of accounting for uncertainty in input data to produce reliable and useful fire spread predictions. Uncertainty in input variables was propagated throughout the modeling framework and its impact was evaluated by estimating the spatial discrepancy between simulated and satellite-observed fire progression data, for eight very large wildfires in Portugal. Results showed that uncertainties in wind speed and direction, fuel model assignment and typology, location and timing of ignitions, had a major impact on prediction accuracy. We argue that uncertainties in these variables should be integrated in future fire spread simulation approaches, and provide the necessary data for any fire model user to do so.
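    Propagating input uncertainty through a spread model, as described above, can be illustrated with a toy example; the linear rate-of-spread function and its coefficients below are invented for illustration and are not a real fire-behavior model:

```python
def rate_of_spread(wind_speed, fuel_load):
    """Toy linear rate-of-spread model (m/min); coefficients are invented."""
    return 0.5 + 0.3 * wind_speed + 0.2 * fuel_load

def propagate(wind_samples, fuel_samples):
    """Evaluate the model over all combinations of the uncertain inputs
    and return the envelope (min, max) of predicted spread rates."""
    rates = [rate_of_spread(w, f) for w in wind_samples for f in fuel_samples]
    return min(rates), max(rates)
```

    In practice one would sample the inputs from their estimated distributions and compare the resulting envelope of simulated fire perimeters against observed progression, rather than taking a simple min/max.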

  10. Deciphering the impact of uncertainty on the accuracy of large wildfire spread simulations.

    PubMed

    Benali, Akli; Ervilha, Ana R; Sá, Ana C L; Fernandes, Paulo M; Pinto, Renata M S; Trigo, Ricardo M; Pereira, José M C

    2016-11-01

    Predicting wildfire spread is a challenging task fraught with uncertainties. 'Perfect' predictions are unfeasible since uncertainties will always be present. Improving fire spread predictions is important to reduce its negative environmental impacts. Here, we propose to understand, characterize, and quantify the impact of uncertainty in the accuracy of fire spread predictions for very large wildfires. We frame this work from the perspective of the major problems commonly faced by fire model users, namely the necessity of accounting for uncertainty in input data to produce reliable and useful fire spread predictions. Uncertainty in input variables was propagated throughout the modeling framework and its impact was evaluated by estimating the spatial discrepancy between simulated and satellite-observed fire progression data, for eight very large wildfires in Portugal. Results showed that uncertainties in wind speed and direction, fuel model assignment and typology, location and timing of ignitions, had a major impact on prediction accuracy. We argue that uncertainties in these variables should be integrated in future fire spread simulation approaches, and provide the necessary data for any fire model user to do so. PMID:27333574

  11. SAR simulations for high-field MRI: how much detail, effort, and accuracy is needed?

    PubMed

    Wolf, S; Diehl, D; Gebhardt, M; Mallow, J; Speck, O

    2013-04-01

    Accurate prediction of specific absorption rate (SAR) for high field MRI is necessary to best exploit its potential and guarantee safe operation. To reduce the effort (time, complexity) of SAR simulations while maintaining robust results, the minimum requirements for the creation (segmentation, labeling) of human models and methods to reduce the time for SAR calculations for 7 Tesla MR-imaging are evaluated. The geometric extent of the model required for realistic head-simulations and the number of tissue types sufficient to form a reliable but simplified model of the human body are studied. Two models (male and female) of the virtual family are analyzed. Additionally, their position within the head-coil is taken into account. Furthermore, the effects of retuning the coils to different load conditions and the influence of a large bore radiofrequency-shield have been examined. The calculation time for SAR simulations in the head can be reduced by 50% without significant error for smaller model extent and simplified tissue structure outside the coil. Likewise, the model generation can be accelerated by reducing the number of tissue types. Local SAR can vary up to 14% due to position alone. This must be considered and sets a limit for SAR prediction accuracy. All these results are comparable between the two body models tested. PMID:22611018

  12. SAR simulations for high-field MRI: how much detail, effort, and accuracy is needed?

    PubMed

    Wolf, S; Diehl, D; Gebhardt, M; Mallow, J; Speck, O

    2013-04-01

    Accurate prediction of specific absorption rate (SAR) for high field MRI is necessary to best exploit its potential and guarantee safe operation. To reduce the effort (time, complexity) of SAR simulations while maintaining robust results, the minimum requirements for the creation (segmentation, labeling) of human models and methods to reduce the time for SAR calculations for 7 Tesla MR-imaging are evaluated. The geometric extent of the model required for realistic head-simulations and the number of tissue types sufficient to form a reliable but simplified model of the human body are studied. Two models (male and female) of the virtual family are analyzed. Additionally, their position within the head-coil is taken into account. Furthermore, the effects of retuning the coils to different load conditions and the influence of a large bore radiofrequency-shield have been examined. The calculation time for SAR simulations in the head can be reduced by 50% without significant error for smaller model extent and simplified tissue structure outside the coil. Likewise, the model generation can be accelerated by reducing the number of tissue types. Local SAR can vary up to 14% due to position alone. This must be considered and sets a limit for SAR prediction accuracy. All these results are comparable between the two body models tested.

  13. WorldView-2 data simulation and analysis results

    NASA Astrophysics Data System (ADS)

    Puetz, Angela M.; Lee, Krista; Olsen, R. Chris

    2009-05-01

    The WorldView-2 sensor, to be launched mid-2009, will have 8 MSI bands - 4 standard MSI spectral channels and an additional 4 non-traditional bands. Hyperspectral data from the AURORA sensor (from the former Advanced Power Technologies, Inc. (APTI)) were used to simulate the spectral response of the WorldView-2 sensor and DigitalGlobe's 4-band QuickBird system. A bandpass filter method was used to simulate the spectral response of the sensors. The resulting simulated images were analyzed to determine possible uses of the additional bands available with the WorldView-2 sensor. Particular attention is given to littoral (shallow water) applications. The overall classification accuracy for the simulated QuickBird scene was 89%, and 94% for the simulated WorldView-2 scene.
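    The bandpass filter method mentioned above amounts, in its simplest form, to averaging the hyperspectral channels whose center wavelengths fall inside a broadband channel; the function name and band edges below are illustrative assumptions, not WorldView-2 specifications:

```python
def simulate_band(wavelengths_nm, radiances, band_lo_nm, band_hi_nm):
    """Average the hyperspectral samples whose center wavelengths fall
    inside [band_lo_nm, band_hi_nm] to mimic a broadband channel."""
    inside = [r for w, r in zip(wavelengths_nm, radiances)
              if band_lo_nm <= w <= band_hi_nm]
    if not inside:
        raise ValueError("no hyperspectral channels inside the band")
    return sum(inside) / len(inside)
```

    A real sensor simulation would weight each channel by the sensor's measured relative spectral response rather than using a flat (boxcar) filter.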

  14. Milestone M4900: Simulant Mixing Analytical Results

    SciTech Connect

    Kaplan, D.I.

    2001-07-26

    This report addresses Milestone M4900, ''Simulant Mixing Sample Analysis Results,'' and contains the data generated during the ''Mixing of Process Heels, Process Solutions, and Recycle Streams: Small-Scale Simulant'' task. The Task Technical and Quality Assurance Plan for this task is BNF-003-98-0079A. A report with a narrative description and discussion of the data will be issued separately.

  15. DKIST Adaptive Optics System: Simulation Results

    NASA Astrophysics Data System (ADS)

    Marino, Jose; Schmidt, Dirk

    2016-05-01

    The 4 m class Daniel K. Inouye Solar Telescope (DKIST), currently under construction, will be equipped with an ultra high order solar adaptive optics (AO) system. The requirements and capabilities of such a solar AO system are beyond those of any other solar AO system currently in operation. We must rely on solar AO simulations to estimate and quantify its performance. We present performance estimation results of the DKIST AO system obtained with a new solar AO simulation tool. This simulation tool is a flexible and fast end-to-end solar AO simulator which produces accurate solar AO simulations while taking advantage of current multi-core computer technology. It relies on full imaging simulations of the extended-field Shack-Hartmann wavefront sensor (WFS), which directly include important secondary effects such as field-dependent distortions and varying contrast of the WFS sub-aperture images.

  16. Results of the 2015 Spitzer Exoplanet Data Challenge: Repeatability and Accuracy of Exoplanet Eclipse Depths

    NASA Astrophysics Data System (ADS)

    Ingalls, James G.; Krick, Jessica E.; Carey, Sean J.; Stauffer, John R.; Grillmair, Carl J.; Lowrance, Patrick

    2016-06-01

    We examine the repeatability, reliability, and accuracy of differential exoplanet eclipse depth measurements made using the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope during the post-cryogenic mission. At infrared wavelengths, secondary eclipses and phase curves are powerful tools for studying a planet’s atmosphere. Extracting information about atmospheres, however, is extremely challenging due to the small differential signals, which are often at the level of 100 parts per million (ppm) or smaller, and require the removal of significant instrumental systematics. For the IRAC 3.6 and 4.5μm InSb detectors that remain active on post-cryogenic Spitzer, the interplay of residual telescope pointing fluctuations with intrapixel gain variations in the moderately undersampled camera is the largest source of time-correlated noise. Over the past decade, a suite of techniques for removing this noise from IRAC data has been developed independently by various investigators. In summer 2015, the Spitzer Science Center hosted a Data Challenge in which seven exoplanet expert teams, each using a different noise-removal method, were invited to analyze 10 eclipse measurements of the hot Jupiter XO-3 b, as well as a complementary set of 10 simulated measurements. In this contribution we review the results of the Challenge. We describe statistical tools to assess the repeatability, reliability, and validity of data reduction techniques, and to compare and (perhaps) choose between techniques.

  17. Persistency of accuracy of genomic breeding values for different simulated pig breeding programs in developing countries.

    PubMed

    Akanno, E C; Schenkel, F S; Sargolzaei, M; Friendship, R M; Robinson, J A B

    2014-10-01

    Genetic improvement of pigs in tropical developing countries has focused on imported exotic populations, which have been subjected to intensive selection with attendant high population-wide linkage disequilibrium (LD). Presently, indigenous pig populations with limited selection and low LD are being considered for improvement. Given that the infrastructure for genetic improvement using conventional BLUP selection methods is lacking, a genome-wide selection (GS) program was proposed for developing countries. A simulation study was conducted to evaluate the option of using a 60 K SNP panel, given the observed amount of LD in the exotic and indigenous pig populations. Several scenarios were evaluated, including different sizes and structures of training and validation populations, different selection methods, and the long-term accuracy of GS in different population/breeding structures and traits. The training set included a previously selected exotic population, an unselected indigenous population, and their crossbreds. Traits studied included number born alive (NBA), average daily gain (ADG) and back fat thickness (BFT). The ridge regression method was used to train the prediction model. The results showed that accuracies of genomic breeding values (GBVs) in the range of 0.30 (NBA) to 0.86 (BFT) in the validation population are expected if high density marker panels are utilized. The GS method improved the accuracy of breeding values more than the pedigree-based approach for traits with low heritability and in young animals with no performance data. A crossbred training population performed better than purebreds when validation was in populations with a structure similar to or different from that of the training set. Genome-wide selection holds promise for genetic improvement of pigs in the tropics. PMID:24628765
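    The ridge regression training step described above can be sketched via the normal equations (Z'Z + λI)b = Z'y, where Z holds marker genotypes and y phenotypes; the tiny dense solver and toy data below are illustrative, not the study's implementation:

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ridge_marker_effects(Z, y, lam):
    """Marker effects b from the normal equations (Z'Z + lam*I) b = Z'y."""
    n, m = len(Z), len(Z[0])
    ZtZ = [[sum(Z[k][i] * Z[k][j] for k in range(n)) + (lam if i == j else 0.0)
            for j in range(m)] for i in range(m)]
    Zty = [sum(Z[k][i] * y[k] for k in range(n)) for i in range(m)]
    return solve(ZtZ, Zty)

def gbv(z_new, effects):
    """Genomic breeding value of a new animal: sum of its marker effects."""
    return sum(zi * bi for zi, bi in zip(z_new, effects))
```

    With λ > 0 the penalty shrinks marker effects toward zero, which is what makes the system solvable when the number of markers exceeds the number of phenotyped animals.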

  18. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    SciTech Connect

    Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen Martin; Tucker, Garritt J.

    2014-09-01

    This report summarizes the result of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called the Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers.
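    The weighted least-squares fit described above reduces, for a single bispectrum descriptor with no intercept, to a closed-form weighted slope; the function names and data below are illustrative, not the FitSnap.py implementation:

```python
def weighted_lsq_slope(x, y, w):
    """Weighted least-squares slope beta for a one-descriptor linear model
    y ~ beta * x, minimizing sum_i w_i * (y_i - beta * x_i)**2."""
    num = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    den = sum(wi * xi * xi for wi, xi in zip(w, x))
    return num / den

def predict_energy(bispectrum_value, beta):
    """Energy predicted by the fitted one-descriptor linear model."""
    return beta * bispectrum_value
```

    The real fit is multivariate (many bispectrum components per atom, with energies, forces, and stresses weighted differently), but the weighting principle, down-weighting less trusted training configurations, is the same.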

  19. Accuracy of nonmolecular identification of growth-hormone- transgenic coho salmon after simulated escape.

    PubMed

Sundström, L F; Lõhmus, M; Devlin, R H

    2015-09-01

    Concerns with transgenic animals include the potential ecological risks associated with release or escape to the natural environment, and a critical requirement for assessment of ecological effects is the ability to distinguish transgenic animals from wild type. Here, we explore geometric morphometrics (GeoM) and human expertise to distinguish growth-hormone-transgenic coho salmon (Oncorhynchus kisutch) specimens from wild type. First, we simulated an escape of 3-month-old hatchery-reared wild-type and transgenic fish to an artificial stream, and recaptured them at the time of seaward migration at an age of 13 months. Second, we reared fish in the stream from first-feeding fry until an age of 13 months, thereby simulating fish arising from a successful spawn in the wild of an escaped hatchery-reared transgenic fish. All fish were then assessed from photographs by visual identification (VID) by local staff and by GeoM based on 13 morphological landmarks. A leave-one-out discriminant analysis of GeoM data had on average 86% (72-100% for individual groups) accuracy in assigning the correct genotypes, whereas the human experts were correct, on average, in only 49% of cases (range of 18-100% for individual fish groups). However, serious errors (i.e., classifying transgenic specimens as wild type) occurred for 7% (GeoM) and 67% (VID) of transgenic fish, and all of these incorrect assignments arose with fish reared in the stream from the first-feeding stage. The results show that we presently lack the skills to visually distinguish transgenic coho salmon from wild type with a high level of accuracy, but that further development of GeoM methods could be useful in identifying second-generation fish from nature as a nonmolecular approach.
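    The leave-one-out accuracy reported above can be illustrated with a simple stand-in classifier; a nearest-centroid rule over landmark coordinates is used here purely for illustration and is not the discriminant analysis of the study:

```python
def nearest_centroid(train, labels, query):
    """Classify query by squared distance to each class's mean vector."""
    best, best_d = None, float("inf")
    for c in set(labels):
        pts = [p for p, l in zip(train, labels) if l == c]
        centroid = [sum(col) / len(pts) for col in zip(*pts)]
        d = sum((a - b) ** 2 for a, b in zip(query, centroid))
        if d < best_d:
            best, best_d = c, d
    return best

def leave_one_out_accuracy(data, labels):
    """Fraction of samples classified correctly when each is held out in turn."""
    hits = 0
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]
        train_labels = labels[:i] + labels[i + 1:]
        hits += nearest_centroid(train, train_labels, data[i]) == labels[i]
    return hits / len(data)
```

    Leave-one-out validation matters here because the sample sizes per fish group are small; scoring on the training data itself would overstate accuracy.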

  20. Evaluation of the soil moisture prediction accuracy of a space radar using simulation techniques. [Kansas

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Stiles, J. A.; Moore, R. K.; Holtzman, J. C.

    1981-01-01

    Image simulation techniques were employed to generate synthetic aperture radar images of a 17.7 km x 19.3 km test site located east of Lawrence, Kansas. The simulations were performed for a space SAR at an orbital altitude of 600 km, with the following sensor parameters: frequency = 4.75 GHz, polarization = HH, and angle of incidence range = 7 deg to 22 deg from nadir. Three sets of images were produced corresponding to three different spatial resolutions: 20 m x 20 m with 12 looks, 100 m x 100 m with 23 looks, and 1 km x 1 km with 1000 looks. Each set consisted of images for four different soil moisture distributions across the test site. Results indicate that, for the agricultural portion of the test site, the soil moisture in about 90% of the pixels can be predicted with an accuracy of ±20% of field capacity. Among the three spatial resolutions, the 1 km x 1 km resolution gave the best results for most cases; however, for very dry soil conditions, the 100 m x 100 m resolution was slightly superior.
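    The accuracy criterion above, the fraction of pixels predicted within ±20% of field capacity, can be computed as follows; the function name and example values are hypothetical:

```python
def within_tolerance_fraction(predicted, actual, field_capacity, tol=0.20):
    """Fraction of pixels whose prediction error is within
    +/- tol * field_capacity of the actual soil moisture."""
    limit = tol * field_capacity
    hits = sum(abs(p - a) <= limit for p, a in zip(predicted, actual))
    return hits / len(predicted)
```

    Note the tolerance is expressed relative to field capacity, a fixed reference for the soil, rather than relative to each pixel's actual moisture, so the acceptance band is the same width for wet and dry pixels.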

  1. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.; Briesch, Amy M.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  2. Accuracy, Speed, Scalability: the Challenges of Large-Scale DFT Simulations

    NASA Astrophysics Data System (ADS)

    Gygi, Francois

    2014-03-01

    First-Principles Molecular Dynamics (FPMD) simulations based on Density Functional Theory (DFT) have become popular in investigations of electronic and structural properties of liquids and solids. The current upsurge in available computing resources enables simulations of larger and more complex systems, such as solvated ions or defects in crystalline solids. The high cost of FPMD simulations however still strongly limits the size of feasible simulations, in particular when using hybrid-DFT approximations. In addition, the simulation times needed to extract statistically meaningful quantities also grows with system size, which puts a premium on scalable implementations. We discuss recent research in the design and implementation of scalable FPMD algorithms, with emphasis on controlled-accuracy approximations and accurate hybrid-DFT molecular dynamics simulations, using examples of applications to materials science and chemistry. Work supported by DOE-BES under grant DE-SC0008938.

  3. Geopositioning accuracy prediction results for registration of imaging and nonimaging sensors using moving objects

    NASA Astrophysics Data System (ADS)

    Taylor, Charles R.; Dolloff, John T.; Lofy, Brian A.; Luker, Steve A.

    2003-08-01

    BAE SYSTEMS is developing a "4D Registration" capability for DARPA's Dynamic Tactical Targeting program. This will further advance our automatic image registration capability to use moving objects for image registration, and extend our current capability to include the registration of non-imaging sensors. Moving objects produce signals that are identifiable across multiple sensors such as radar moving target indicators, unattended ground sensors, and imaging sensors. Correspondences of those signals across sensor types make it possible to improve the support data accuracy for each of the sensors involved in the correspondence. Determining the amount of accuracy improvement possible, and the effects of that improvement on geopositioning with the sensors, is a complex problem. The main factors that contribute to the complexity are the sensor-to-target geometry, the a priori sensor support data accuracy, sensor measurement accuracy, the distribution of identified objects in ground space, and the motion and motion uncertainty of the identified objects. As part of the 4D Registration effort, BAE SYSTEMS is conducting a sensitivity study to investigate the complexities and benefits of multisensor registration with moving objects. The results of the study will be summarized.

  4. Assessment of the accuracy of density functional theory for first principles simulations of water

    NASA Astrophysics Data System (ADS)

    Grossman, J. C.; Schwegler, E.; Draeger, E.; Gygi, F.; Galli, G.

    2004-03-01

    We present a series of Car-Parrinello (CP) molecular dynamics simulations in order to better understand the accuracy of density functional theory for the calculation of the properties of water [1]. Through 10 separate ab initio simulations, each for 20 ps of ``production'' time, a number of approximations are tested by varying the density functional employed, the fictitious electron mass, μ, in the CP Lagrangian, the system size, and the ionic mass, M (we considered both H_2O and D_2O). We present the impact of these approximations on properties such as the radial distribution function [g(r)], structure factor [S(k)], diffusion coefficient, and dipole moment. Our results show that structural properties may artificially depend on μ, and that in the case of an accurate description of the electronic ground state, and in the absence of proton quantum effects, we obtained an oxygen-oxygen correlation function that is over-structured compared to experiment, and a diffusion coefficient approximately 10 times smaller. ^1 J.C. Grossman et al., J. Chem. Phys. (in press, 2004).

  5. Accuracy of Student Recall of Strong Interest Inventory Results 1 Year after Interpretation.

    ERIC Educational Resources Information Center

    Hansen, Jo-Ida C.; And Others

    1994-01-01

    Examined how accurately college students (n=87) recalled information from their Strong Interest Inventory (SII) profiles one year later. A significant number of participants recalled at least one profile result, but accuracy of recall varied by type of scale and percentage of participants who first remembered something and then remembered it…

  6. Cassini radar : system concept and simulation results

    NASA Astrophysics Data System (ADS)

    Melacci, P. T.; Orosei, R.; Picardi, G.; Seu, R.

    1998-10-01

    The Cassini mission is an international venture, involving NASA, the European Space Agency (ESA) and the Italian Space Agency (ASI), for the investigation of the Saturn system and, in particular, Titan. The Cassini radar will be able to see through Titan's thick, optically opaque atmosphere, allowing us to better understand the composition and the morphology of its surface; but the interpretation of the results, due to the complex interplay of many different factors determining the radar echo, will not be possible without extensive modeling of the radar system operation and of the surface reflectivity. In this paper, a simulator of the multimode Cassini radar is described, after a brief review of our current knowledge of Titan and a discussion of the contribution of the Cassini radar to answering currently open questions. The simulator has been implemented on a RISC 6000 computer by considering only the active modes of operation, that is, altimeter and synthetic aperture radar. In the instrument simulation, strict reference has been made to the presently planned sequence of observations and to the radar settings, including burst and single pulse duration, pulse bandwidth, pulse repetition frequency, and all other parameters which may be changed, and possibly optimized, according to the operating mode. The observed surfaces are simulated by a facet model, allowing the generation of surfaces with Gaussian or non-Gaussian roughness statistics, together with the possibility of assigning to the surface an average behaviour which can represent, for instance, a flat surface or a crater. The results of the simulation are discussed in order to check the analytical evaluations of the models of the average received echoes and of the attainable performance. 
In conclusion, the simulation results should allow the validation of the theoretical evaluations of the capabilities of microwave instruments, when

  7. Evaluation of optoelectronic Plethysmography accuracy and precision in recording displacements during quiet breathing simulation.

    PubMed

    Massaroni, C; Schena, E; Saccomandi, P; Morrone, M; Sterzi, S; Silvestri, S

    2015-08-01

    Opto-electronic plethysmography (OEP) is a motion analysis system used to measure chest wall kinematics and to indirectly evaluate respiratory volumes during breathing. Its working principle is based on computing the displacements of markers placed on the chest wall. This work aims to evaluate the accuracy and precision of OEP in measuring displacements in the range of human chest wall displacement during quiet breathing. OEP performance was investigated using a fully programmable chest wall simulator (CWS). The CWS was programmed to move each of its eight shafts 10 times over the range of physiological displacement (i.e., between 1 mm and 8 mm) at three different frequencies (0.17 Hz, 0.25 Hz, and 0.33 Hz). Experiments were performed to: (i) evaluate OEP accuracy and precision error in recording displacement in the overall calibrated volume and in three sub-volumes, and (ii) evaluate the OEP volume measurement accuracy resulting from the measurement accuracy of linear displacements. OEP showed an accuracy better than 0.08 mm in all trials over the whole 2 m^3 calibrated volume. The mean measurement discrepancy was 0.017 mm. The precision error, expressed as the ratio between the measurement uncertainty and the displacement recorded by OEP, was always lower than 0.55%. Volume overestimation due to OEP linear measurement accuracy was always < 12 mL (< 3.2% of total volume) across all settings. PMID:26736504
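The accuracy and precision figures above follow from simple statistics over repeated recordings. A minimal sketch of those two quantities (function name and the exact uncertainty estimator are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def displacement_metrics(measured_mm, true_mm):
    """Accuracy and precision for repeated recordings of one programmed
    displacement. measured_mm: repeated OEP readings of the same motion;
    true_mm: the displacement the simulator actually produced."""
    measured = np.asarray(measured_mm, dtype=float)
    discrepancy = measured.mean() - true_mm      # signed accuracy error (mm)
    uncertainty = measured.std(ddof=1)           # measurement uncertainty (mm)
    # precision error as the ratio of uncertainty to recorded displacement
    precision_error_pct = 100.0 * uncertainty / measured.mean()
    return discrepancy, precision_error_pct
```

With sub-millimeter uncertainties and millimeter-scale displacements, this ratio staying below 0.55% is consistent with the reported results.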

  8. Simulation-based evaluation of the resolution and quantitative accuracy of temperature-modulated fluorescence tomography

    PubMed Central

    Lin, Yuting; Nouizi, Farouk; Kwong, Tiffany C.; Gulsen, Gultekin

    2016-01-01

    Conventional fluorescence tomography (FT) can recover the distribution of fluorescent agents within a highly scattering medium. However, poor spatial resolution remains its foremost limitation. Previously, we introduced a new fluorescence imaging technique termed “temperature-modulated fluorescence tomography” (TM-FT), which provides high-resolution images of fluorophore distribution. TM-FT is a multimodality technique that combines fluorescence imaging with focused ultrasound to locate thermo-sensitive fluorescence probes, using a priori spatial information to drastically improve the resolution of conventional FT. In this paper, we present an extensive simulation study to evaluate the performance of the TM-FT technique on complex phantoms with multiple fluorescent targets of various sizes located at different depths. In addition, the performance of TM-FT is tested in the presence of background fluorescence. The results obtained using our new method are systematically compared with those obtained with conventional FT. Overall, TM-FT provides higher resolution and superior quantitative accuracy, making it an ideal candidate for in vivo preclinical and clinical imaging. For example, a 4 mm diameter inclusion positioned in the middle of a synthetic slab geometry phantom (D: 40 mm × W: 100 mm) is recovered as an elongated object by conventional FT (x = 4.5 mm; y = 10.4 mm), while TM-FT recovers it successfully in both directions (x = 3.8 mm; y = 4.6 mm). As a result, the quantitative accuracy of TM-FT is superior: it recovers the concentration of the agent with a 22% error, in contrast to the 83% error of conventional FT. PMID:26368884

  9. Evaluation of Accuracy and Reliability of the Six Ensemble Methods Using 198 Sets of Pseudo-Simulation Data

    NASA Astrophysics Data System (ADS)

    Suh, M. S.; Oh, S. G.

    2014-12-01

    The accuracy and reliability of six ensemble methods were evaluated according to simulation skill, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) generated by considering the simulation characteristics of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets with 50 samples. The ensemble methods used were as follows: equal weighted averaging with(out) bias correction (EWA_W(N)BC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), WEA based on reliability (WEA_REA), and multivariate linear regression (Mul_Reg). The weighted ensemble methods showed better projection skill in terms of accuracy and reliability than the non-weighted methods, in particular for the PSD categories having systematic biases and various correlation coefficients. In general, WEA_Tay, WEA_REA, and WEA_RAC showed superior skill in terms of accuracy and reliability, regardless of the PSD categories, training periods, and ensemble sizes. The evaluation results showed that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of members. However, EWA_NBC showed a projection skill comparable with the other methods only in certain categories with unsystematic biases.
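The core idea behind weighted ensemble averaging is to give members that perform well over the training period a larger weight. A minimal sketch, assuming a simple inverse-RMSE weighting (a stand-in for schemes like WEA_RAC; the published schemes also fold in correlation or Taylor/reliability scores, which are not reproduced here):

```python
import numpy as np

def weighted_ensemble(train_sims, obs, test_sims, eps=1e-12):
    """Weighted ensemble average where each member's weight is the
    inverse of its RMSE against observations over the training period.
    train_sims, test_sims: (members, time) arrays; obs: (time,) array."""
    rmse = np.sqrt(((train_sims - obs) ** 2).mean(axis=1))
    w = 1.0 / (rmse + eps)        # better training skill -> larger weight
    w /= w.sum()                  # normalize weights to sum to 1
    return w @ test_sims          # weighted average over members
```

Equal weighted averaging (EWA) is the special case where every member gets weight 1/M regardless of training skill, which is why it only stays competitive when biases are unsystematic.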

  10. Effects of training and simulated combat stress on leg tourniquet application accuracy, time, and effectiveness.

    PubMed

    Schreckengaust, Richard; Littlejohn, Lanny; Zarow, Gregory J

    2014-02-01

    The lower extremity tourniquet failure rate remains significantly higher in combat than in preclinical testing, so we hypothesized that tourniquet placement accuracy, speed, and effectiveness would improve during training and decline during simulated combat. Navy Hospital Corpsmen (N = 89) enrolled in a Tactical Combat Casualty Care training course in preparation for deployment applied the Combat Application Tourniquet (CAT) and the Special Operations Forces Tactical Tourniquet (SOFT-T) on day 1 and day 4 of classroom training, then under simulated combat, wherein participants ran an obstacle course to apply a tourniquet while wearing full body armor and avoiding simulated small arms fire (paint balls). Application time and pulse elimination effectiveness improved from day 1 to day 4 (p < 0.005). Under simulated combat, application time slowed significantly (p < 0.001), whereas accuracy and effectiveness declined slightly. Pulse elimination was poor for CAT (25% failure) and SOFT-T (60% failure) even in classroom conditions following training. CAT was more quickly applied (p < 0.005) and more effective (p < 0.002) than SOFT-T. Training fostered fast and effective application of leg tourniquets, while performance declined under simulated combat. The inherent efficacy of tourniquet products contributes to high failure rates under combat conditions, pointing to the need for superior tourniquets and for rigorous deployment preparation training in simulated combat scenarios.

  11. Titan's organic chemistry: Results of simulation experiments

    NASA Technical Reports Server (NTRS)

    Sagan, Carl; Thompson, W. Reid; Khare, Bishun N.

    1992-01-01

    Recent continuous low-pressure plasma discharge simulations of the auroral-electron-driven organic chemistry in Titan's mesosphere are reviewed. These simulations yielded results in good accord with Voyager observations of gas phase organic species. Optical constants of the brownish solid tholins produced in similar experiments are in good accord with Voyager observations of the Titan haze. Titan tholins are rich in prebiotic organic constituents; the Huygens entry probe may shed light on some of the processes that led to the origin of life on Earth.

  12. Comparison of the Accuracy and Speed of Transient Mobile A/C System Simulation Models: Preprint

    SciTech Connect

    Kiss, T.; Lustbader, J.

    2014-03-01

    The operation of air conditioning (A/C) systems is a significant contributor to the total amount of fuel used by light- and heavy-duty vehicles. Therefore, continued improvement of the efficiency of these mobile A/C systems is important. Numerical simulation has been used to reduce the system development time and to improve the electronic controls, but numerical models that include highly detailed physics run slower than desired for carrying out vehicle-focused drive cycle-based system optimization. Therefore, faster models are needed even if some accuracy is sacrificed. In this study, a validated model with highly detailed physics, the 'Fully-Detailed' model, and two models with different levels of simplification, the 'Quasi-Transient' and the 'Mapped-Component' models, are compared. The Quasi-Transient model applies some simplifications compared to the Fully-Detailed model to allow faster model execution speeds. The Mapped-Component model is similar to the Quasi-Transient model except that, instead of detailed flow and heat transfer calculations in the heat exchangers, it uses lookup tables created with the Quasi-Transient model. All three models are set up to represent the same physical A/C system and the same electronic controls. Speed and results of the three model versions are compared for steady state and transient operation. Steady state simulated data are also compared to measured data. The results show that the Quasi-Transient and Mapped-Component models ran much faster than the Fully-Detailed model, on the order of 10- and 100-fold, respectively. They also adequately approach the results of the Fully-Detailed model for steady-state operation and for drive cycle-based efficiency predictions.

  13. Simulations of thermally transferred OSL signals in quartz: Accuracy and precision of the protocols for equivalent dose evaluation

    NASA Astrophysics Data System (ADS)

    Pagonis, Vasilis; Adamiec, Grzegorz; Athanassas, C.; Chen, Reuven; Baker, Atlee; Larsen, Meredith; Thompson, Zachary

    2011-06-01

    Thermally-transferred optically stimulated luminescence (TT-OSL) signals in sedimentary quartz have been the subject of several recent studies, due to the potential shown by these signals to increase the range of luminescence dating by an order of magnitude. Based on these signals, a single aliquot protocol termed the ReSAR protocol has been developed and tested experimentally. This paper presents extensive numerical simulations of this ReSAR protocol. The purpose of the simulations is to investigate several aspects of the ReSAR protocol which are believed to cause difficulties during application of the protocol. Furthermore, several modified versions of the ReSAR protocol are simulated, and their relative accuracy and precision are compared. The simulations are carried out using a recently published kinetic model for quartz, consisting of 11 energy levels. One hundred random variants of the natural samples were generated by keeping the transition probabilities between energy levels fixed, while allowing simultaneous random variations of the concentrations of the 11 energy levels. The relative intrinsic accuracy and precision of the protocols are simulated by calculating the equivalent dose (ED) within the model, for a given natural burial dose of the sample. The complete sequence of steps undertaken in several versions of the dating protocols is simulated. The relative intrinsic precision of these techniques is estimated by fitting Gaussian probability functions to the resulting simulated distribution of ED values. New simulations are presented for commonly used OSL sensitivity tests, consisting of successive cycles of sample irradiation with the same dose, followed by measurements of the sensitivity corrected L/T signals. We investigate several experimental factors which may be affecting both the intrinsic precision and intrinsic accuracy of the ReSAR protocol. 
The results of the simulation show that the four different published versions of the ReSAR protocol can
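The intrinsic precision estimate described above, fitting a Gaussian to the simulated distribution of equivalent dose (ED) values, can be sketched as follows. The maximum-likelihood Gaussian fit reduces to the sample mean and standard deviation, which is what this illustrative helper (not the authors' code) uses:

```python
import numpy as np

def ed_accuracy_precision(ed_values, burial_dose):
    """Fit a Gaussian to simulated equivalent dose (ED) values and report
    intrinsic accuracy (fractional offset of the fitted mean from the
    known burial dose) and intrinsic precision (relative width).
    The maximum-likelihood Gaussian has the sample mean and standard
    deviation as its parameters."""
    ed = np.asarray(ed_values, dtype=float)
    mu, sigma = ed.mean(), ed.std()       # MLE Gaussian parameters
    accuracy = (mu - burial_dose) / burial_dose   # fractional bias
    precision = sigma / mu                        # relative spread
    return accuracy, precision
```

In the simulations, the ED distribution comes from the randomized model variants, so accuracy measures systematic protocol bias while precision measures the spread induced by sample-to-sample variability.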

  14. High accuracy binary black hole simulations with an extended wave zone

    SciTech Connect

    Pollney, Denis; Reisswig, Christian; Dorband, Nils; Schnetter, Erik; Diener, Peter

    2011-02-15

    We present results from a new code for binary black hole evolutions using the moving-puncture approach, implementing finite differences in generalized coordinates, and allowing the spacetime to be covered with multiple communicating nonsingular coordinate patches. Here we consider a regular Cartesian near-zone, with adapted spherical grids covering the wave zone. The efficiencies resulting from the use of adapted coordinates allow us to maintain sufficient grid resolution to an artificial outer boundary location which is causally disconnected from the measurement. For the well-studied test case of the inspiral of an equal-mass nonspinning binary (evolved for more than 8 orbits before merger), we determine the phase and amplitude to numerical accuracies better than 0.010% and 0.090% during inspiral, respectively, and 0.003% and 0.153% during merger. The waveforms, including the resolved higher harmonics, are convergent and can be consistently extrapolated to r → ∞ throughout the simulation, including the merger and ringdown. Ringdown frequencies for these modes (to (l,m) = (6,6)) match perturbative calculations to within 0.01%, providing a strong confirmation that the remnant settles to a Kerr black hole with irreducible mass M_irr = 0.884355 ± 20×10^-6 and spin S_f/M_f^2 = 0.686923 ± 10×10^-6.

  15. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters.

    PubMed

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-21

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation. PMID:26531324

  16. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters

    NASA Astrophysics Data System (ADS)

    Nasehi Tehrani, Joubin; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.

  17. [Improvement of root parameters in land surface model (LSM) and its effect on the simulated results].

    PubMed

    Cai, Kui-ye; Liu, Jing-miao; Zhang, Zheng-qiu; Liang, Hong; He, Xiao-dong

    2015-10-01

    In order to improve root parameterization in land surface models, the root sub-model from CERES-Maize was coupled into SSiB2 after calibration of the maize parameters in SSiB2. The effects of two improved root parameterization schemes on the simulated land surface fluxes were analyzed. Results indicated that the simulation accuracy of land surface fluxes was enhanced when the root module provided only root depth to the SSiB2 model (scheme I). Correlation coefficients between observed and simulated values of latent and sensible heat flux increased during the whole growing season, and the RMSE of the linear fit decreased. The simulation accuracy of CO2 flux was also enhanced from 121 days after sowing to maturity. The flux simulation accuracy was likewise enhanced when the root module provided root depth and root length density simultaneously to the SSiB2 model (scheme II). Compared with scheme I, scheme II was more comprehensive, yet its simulation accuracy for land surface fluxes was lower. The improved root parameterization in the SSiB2 model was better than the original one, improving the simulated accuracy of land-atmosphere fluxes. Scheme II overestimated relative root growth in the surface soil layer, so its simulated accuracy was lower than that of scheme I. PMID:26995920

  18. Flight Test Results: CTAS Cruise/Descent Trajectory Prediction Accuracy for En route ATC Advisories

    NASA Technical Reports Server (NTRS)

    Green, S.; Grace, M.; Williams, D.

    1999-01-01

    The Center/TRACON Automation System (CTAS), under development at NASA Ames Research Center, is designed to assist controllers with the management and control of air traffic transitioning to/from congested airspace. This paper focuses on the transition from the en route environment to high-density terminal airspace under a time-based arrival-metering constraint. Two flight tests were conducted at the Denver Air Route Traffic Control Center (ARTCC) to study trajectory-prediction accuracy, the key to accurate Decision Support Tool advisories such as conflict detection/resolution and fuel-efficient metering conformance. In collaboration with NASA Langley Research Center, these tests were part of an overall effort to research systems and procedures for the integration of CTAS and flight management systems (FMS). The Langley Transport Systems Research Vehicle Boeing 737 airplane flew a combined total of 58 cruise-arrival trajectory runs while following CTAS clearance advisories. Actual trajectories of the airplane were compared to CTAS and FMS predictions to measure trajectory-prediction accuracy and identify the primary sources of error for both. The research airplane was used to evaluate several levels of cockpit automation, ranging from conventional avionics to a performance-based vertical navigation (VNAV) FMS. Trajectory-prediction accuracy was analyzed with respect to both ARTCC radar tracking and GPS-based aircraft measurements. This paper presents detailed results describing the trajectory accuracy and error sources. Although differences were found in both accuracy and error sources, CTAS accuracy was comparable to the FMS in terms of both meter-fix arrival-time performance (in support of metering) and 4D-trajectory prediction (key to conflict prediction). Overall arrival time errors (mean plus standard deviation) were measured to be approximately 24 seconds during the first flight test (23 runs) and 15 seconds during the second flight test (25 runs). 
The major

  19. Numerical simulations of catastrophic disruption: Recent results

    NASA Astrophysics Data System (ADS)

    Benz, W.; Asphaug, E.; Ryan, E. V.

    1994-12-01

    Numerical simulations have been used to study high velocity two-body impacts. In this paper, a two-dimensional Lagrangian finite difference hydro-code and a three-dimensional smooth particle hydro-code (SPH) are described and initial results reported. These codes can be, and have been, used to make specific predictions about particular objects in our solar system. But more significantly, they allow us to explore a broad range of collisional events. Certain parameters (size, time) can be studied only over a very restricted range within the laboratory; other parameters (initial spin, low gravity, exotic structure or composition) are difficult to study at all experimentally. The outcomes of numerical simulations lead to a more general and accurate understanding of impacts in their many forms.

  20. Accuracy of Root ZX in teeth with simulated root perforation in the presence of gel or liquid type endodontic irrigant

    PubMed Central

    Shin, Hyeong-Soon; Yang, Won-Kyung; Kim, Mi-Ri; Ko, Hyun-Jung; Cho, Kyung-Mo; Park, Se-Hee

    2012-01-01

    Objectives To evaluate the accuracy of the Root ZX in teeth with simulated root perforation in the presence of gel or liquid type endodontic irrigants, such as saline, 5.25% sodium hypochlorite (NaOCl), 2% chlorhexidine liquid, 2% chlorhexidine gel, and RC-Prep, and also to determine the electrical conductivities of these endodontic irrigants. Materials and Methods A root perforation was simulated on twenty freshly extracted teeth by means of a small perforation made on the proximal surface of the root at 4 mm from the anatomic apex. Root ZX was used to locate root perforation and measure the electronic working lengths. The results obtained were compared with the actual working length (AWL) and the actual location of perforations (AP), allowing tolerances of 0.5 or 1.0 mm. Measurements within these limits were considered as acceptable. Chi-square test or the Fisher's exact test was used to evaluate significance. Electrical conductivities of each irrigant were also measured with an electrical conductivity tester. Results The accuracies of the Root ZX in perforated teeth were significantly different between liquid types (saline, NaOCl) and gel types (chlorhexidine gel, RC-Prep). The accuracies of electronic working lengths in perforated teeth were higher in gel types than in liquid types. The accuracy in locating root perforation was higher in liquid types than gel types. 5.25% NaOCl had the highest electrical conductivity, whereas 2% chlorhexidine gel and RC-Prep gel had the lowest electrical conductivities among the five irrigants. Conclusions Different canal irrigants with different electrical conductivities may affect the accuracy of the Root ZX in perforated teeth. PMID:23431125
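The acceptance criterion above (readings within ±0.5 or ±1.0 mm of the actual length) amounts to a within-tolerance accuracy rate. A minimal sketch of that computation (function name and sample values are illustrative, not from the study):

```python
import numpy as np

def within_tolerance_accuracy(electronic_mm, actual_mm, tol_mm=0.5):
    """Fraction of electronic working-length readings that fall within
    +/- tol_mm of the actual working length, the acceptance criterion
    used with 0.5 or 1.0 mm tolerances."""
    diff = np.abs(np.asarray(electronic_mm, dtype=float)
                  - np.asarray(actual_mm, dtype=float))
    return float((diff <= tol_mm).mean())
```

Comparing this rate between irrigant groups (liquid vs. gel) is what the chi-square or Fisher's exact test then assesses for significance.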

  1. Fast Plasma Instrument for MMS: Simulation Results

    NASA Technical Reports Server (NTRS)

    Figueroa-Vinas, Adolfo; Adrian, Mark L.; Lobell, James V.; Simpson, David G.; Barrie, Alex; Winkert, George E.; Yeh, Pen-Shu; Moore, Thomas E.

    2008-01-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. The Dual Electron Spectrometer (DES) of the Fast Plasma Instrument (FPI) for MMS meets these demanding requirements by acquiring the electron velocity distribution functions (VDFs) for the full sky with high-resolution angular measurements every 30 ms. This will provide unprecedented access to electron-scale dynamics within the reconnection diffusion region. The DES consists of eight half-top-hat energy analyzers, each with a 6 deg. x 11.25 deg. field of view (FOV). Full-sky coverage is achieved by electrostatically stepping the FOV of each of the eight sensors through four discrete deflection look directions. Data compression and burst memory management will provide approximately 30 minutes of high time resolution data during each orbit of the four MMS spacecraft. Each spacecraft will intelligently downlink the data sequences that contain the greatest amount of temporal structure. Here we present the results of a simulation of the DES analyzer measurements, data compression and decompression, as well as ground-based analysis, using re-processed Cluster/PEACE electron measurements as a seed. The Cluster/PEACE electron measurements have been reprocessed through virtual DES analyzers with their proper geometrical, energy, and timing scale factors and re-mapped via interpolation to the DES angular and energy phase-space sampling measurements. The results of the simulated DES measurements are analyzed and the full moments of the simulated VDFs are compared with those obtained from the Cluster/PEACE spectrometer using a standard quadrature moment, a newly implemented spectral spherical harmonic method, and a singular value decomposition method. 
Our preliminary moment calculations show a remarkable agreement within the uncertainties of the measurements, with the

  2. Accuracy of the Frensley inflow boundary condition for Wigner equations in simulating resonant tunneling diodes

    SciTech Connect

    Jiang Haiyan; Cai Wei; Tsu, Raphael

    2011-03-01

    In this paper, the accuracy of the Frensley inflow boundary condition of the Wigner equation is analyzed in computing the I-V characteristics of a resonant tunneling diode (RTD). It is found that the Frensley inflow boundary condition for incoming electrons holds exactly only infinitely far from the active device region, and its accuracy depends on the length of the contacts included in the simulation. For this study, the non-equilibrium Green's function (NEGF) method with a Dirichlet-to-Neumann mapping boundary condition is used for comparison. The I-V characteristics of the RTD computed with the self-consistent NEGF and Wigner methods are found to agree at low bias potentials with sufficiently large GaAs contact lengths. Finally, the relation between the negative differential conductance (NDC) of the RTD and the sizes of the contact and buffer regions is investigated using both methods.

  3. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    PubMed

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

    Lamb wave techniques have been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to their multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies on evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross-correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting position accuracy and shape accuracy, are first identified. Consequently, two quantitative indices, the GVE (group velocity error) and MACCC (maximum absolute value of cross-correlation coefficient), derived from cross-correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the positional and shape accuracy of a simulation is quantitatively evaluated. To apply the proposed method to selecting an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. The proper element size for different element types and the proper time step for different time integration schemes are then selected. These results show that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
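The cross-correlation machinery behind MACCC and the arrival-time offset feeding the GVE can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name is hypothetical, and converting the lag into a group velocity error requires the propagation distance, which is omitted here.

```python
import numpy as np

def shape_and_position_metrics(sim, ref, dt):
    """Normalized cross-correlation between a simulated waveform and a
    reference waveform of equal length. The maximum absolute value of
    the cross-correlation coefficient (MACCC) scores shape agreement;
    the lag at the maximum gives the arrival-time offset from which a
    group-velocity error can be derived."""
    sim = (sim - sim.mean()) / (sim.std() * len(sim))
    ref = (ref - ref.mean()) / ref.std()
    cc = np.correlate(sim, ref, mode="full")
    k = np.argmax(np.abs(cc))
    maccc = abs(cc[k])
    lag = (k - (len(ref) - 1)) * dt   # positive: sim arrives later than ref
    return maccc, lag
```

A MACCC close to 1 indicates the simulated waveform shape matches the reference; a nonzero lag indicates a position (group velocity) error due to, e.g., too coarse an element size or time step.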

  4. Results of a remote multiplexer/digitizer unit accuracy and environmental study

    NASA Technical Reports Server (NTRS)

    Wilner, D. O.

    1977-01-01

    A remote multiplexer/digitizer unit (RMDU), a part of the airborne integrated flight test data system, was subjected to an accuracy study. The study was designed to show the effects of temperature, altitude, and vibration on the RMDU. The RMDU was subjected to tests at temperatures from -54 °C (-65 °F) to 71 °C (160 °F), and the resulting data are presented here, along with a complete analysis of the effects. The methods used to obtain correctable data and to correct the data are also discussed.

  5. Creating a Standard Set of Metrics to Assess Accuracy of Solar Forecasts: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Banunarayanan, V.; Brockway, A.; Marquis, M.; Haupt, S. E.; Brown, B.; Fowler, T.; Jensen, T.; Hamann, H.; Lu, S.; Hodge, B.; Zhang, J.; Florita, A.

    2013-12-01

    The U.S. Department of Energy (DOE) SunShot Initiative, launched in 2011, seeks to reduce the cost of solar energy systems by 75% from 2010 to 2020. In support of the SunShot Initiative, the DOE Office of Energy Efficiency and Renewable Energy (EERE) is partnering with the National Oceanic and Atmospheric Administration (NOAA) and solar energy stakeholders to improve solar forecasting. Through a funding opportunity announcement issued in April 2012, DOE is funding two teams, led by the National Center for Atmospheric Research (NCAR) and by IBM, to perform three key activities to improve solar forecasts. The teams will: (1) with DOE and NOAA's leadership and significant stakeholder input, develop a standardized set of metrics to evaluate forecast accuracy, and determine the baseline and target values for these metrics; (2) conduct research that yields a transformational improvement in weather models and methods for forecasting solar irradiance and power; and (3) incorporate solar forecasts into the system operations of the electric power grid, and evaluate the impact of forecast accuracy on the economics and reliability of operations using the defined, standard metrics. This paper will present preliminary results on the first activity: the development of a standardized set of metrics, baselines, and target values. The results will include a proposed framework for metrics development, key categories of metrics, descriptions of each of the proposed metrics for measuring forecast accuracy, feedback gathered from a range of stakeholders on the metrics, and processes to determine baselines and target values for each metric. The paper will also analyze the temporal and spatial resolutions under which these metrics would apply, and conclude with a summary of the work in progress on solar forecasting activities funded by DOE.
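
As a rough illustration of the kind of deterministic accuracy metrics such a standardized set might contain (the actual DOE/NOAA metric set is broader and was still under development at the time of this abstract), mean bias error, mean absolute error, and root mean square error can be computed as follows; the irradiance values are invented:

```python
import numpy as np

def forecast_metrics(forecast, observed):
    """Basic deterministic forecast-accuracy metrics (illustrative
    subset only; a full standardized set would also cover
    probabilistic and event-based measures)."""
    err = np.asarray(forecast, float) - np.asarray(observed, float)
    return {
        "MBE": err.mean(),                   # mean bias error
        "MAE": np.abs(err).mean(),           # mean absolute error
        "RMSE": np.sqrt((err ** 2).mean()),  # root mean square error
    }

# Made-up hourly global horizontal irradiance values (W/m^2):
obs = [0, 120, 340, 520, 610, 580, 410, 200, 30]
fc = [0, 100, 360, 500, 640, 560, 430, 210, 20]
m = forecast_metrics(fc, obs)
print({k: round(v, 2) for k, v in m.items()})
```

In practice such metrics are often normalized by installed capacity or mean irradiance so that values are comparable across sites.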

  6. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characterization of the ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation, and application of the ionosphere delay are studied based on the processing of 24 h of data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the hardware delay bias of the receiver, so that there is a difference between the estimated and interpolated results. The results also show that while the RMSs (root mean squares) are larger, the STDs (standard deviations) are better than 0.11 m. When satellite differencing is used, the hardware delay bias is canceled, and the interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the cm level.
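
The cancellation of the receiver hardware delay by satellite differencing, reported above, can be sketched numerically; the delay and bias values below are invented for illustration:

```python
import numpy as np

# Sketch of why between-satellite differencing removes the receiver
# hardware delay from PPP-estimated ionosphere delays. Each estimated
# slant delay is modeled (simplified) as I_s + b_rx, where b_rx is a
# bias common to all satellites observed by the same receiver.
true_iono = np.array([4.2, 6.8, 3.1, 5.5])  # slant delays (m), sats 1-4
b_rx = 1.7                                  # receiver hardware bias (m)
estimated = true_iono + b_rx                # what the estimator sees

# Difference every satellite against a reference satellite (sat 1):
sd_estimated = estimated[1:] - estimated[0]
sd_true = true_iono[1:] - true_iono[0]
print(np.allclose(sd_estimated, sd_true))   # the common bias cancels
```

The differenced quantities are bias-free, which is why the satellite-differenced delays interpolate and apply more cleanly than the undifferenced ones.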

  7. Fast Plasma Instrument for MMS: Simulation Results

    NASA Astrophysics Data System (ADS)

    Viñas, A. F.; Adrian, M. L.; Lobell, J. V.; Simpson, D. G.; Barrie, A.; Winkert, G. E.; Yeh, P.; Moore, T. E.

    2008-12-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. The Dual Electron Spectrometer (DES) of the Fast Plasma Instrument (FPI) for MMS meets these demanding requirements by acquiring the electron velocity distribution functions (VDFs) for the full sky with high-resolution angular measurements every 30 ms. This will provide unprecedented access to electron-scale dynamics within the reconnection diffusion region. The DES consists of eight half-top-hat energy analyzers. Each analyzer has a 6° × 180° field of view (FOV) with a single pixel resolution of 6° × 11.25°. Full-sky coverage is achieved by electrostatically stepping the FOV of each of the eight sensors through four discrete deflection look directions. Data compression and burst memory management will provide approximately 30 minutes of high time resolution data during each orbit of the four MMS spacecraft. Each spacecraft will intelligently downlink the data sequences that contain the greatest amount of temporal structure. Here we present the results of a simulation of the DES analyzer measurements, data compression and decompression, as well as ground-based analysis, using re-processed Cluster/PEACE electron measurements as a seed. The Cluster/PEACE electron measurements have been re-processed through virtual DES analyzers with their proper geometrical, energy, and timing scale factors and re-mapped via interpolation to the DES angular and energy phase-space sampling measurements. The results of the simulated DES measurements are analyzed and the full moments of the simulated VDFs are compared with those obtained from the Cluster/PEACE spectrometer using a standard quadrature moment, a newly implemented spectral spherical harmonic method, and a singular value decomposition method. Our preliminary moment calculations show a

  8. Medical Simulation Practices 2010 Survey Results

    NASA Technical Reports Server (NTRS)

    McCrindle, Jeffrey J.

    2011-01-01

    Medical Simulation Centers are an essential component of our learning infrastructure to prepare doctors and nurses for their careers. Unlike the military and aerospace simulation industry, very little has been published regarding the best practices currently in use within medical simulation centers. This survey attempts to provide insight into the current simulation practices at medical schools, hospitals, university nursing programs, and community college nursing programs. Students within the MBA program at Saint Joseph's University conducted a survey of medical simulation practices during the summer 2010 semester. A total of 115 institutions responded to the survey. The survey results discuss the overall effectiveness of current simulation centers as well as the tools and techniques used to conduct the simulation activity.

  9. Effects of experimental protocol on global vegetation model accuracy: a comparison of simulated and observed vegetation patterns for Asia

    USGS Publications Warehouse

    Tang, Guoping; Shafer, Sarah L.; Bartlein, Patrick J.; Holman, Justin O.

    2009-01-01

    Prognostic vegetation models have been widely used to study the interactions between environmental change and biological systems. This study examines the sensitivity of vegetation model simulations to: (i) the selection of input climatologies representing different time periods and their associated atmospheric CO2 concentrations, (ii) the choice of observed vegetation data for evaluating the model results, and (iii) the methods used to compare simulated and observed vegetation. We use vegetation simulated for Asia by the equilibrium vegetation model BIOME4 as a typical example of vegetation model output. BIOME4 was run using 19 different climatologies and their associated atmospheric CO2 concentrations. The Kappa statistic, Fuzzy Kappa statistic and a newly developed map-comparison method, the Nomad index, were used to quantify the agreement between the biomes simulated under each scenario and the observed vegetation from three different global land- and tree-cover data sets: the global Potential Natural Vegetation data set (PNV), the Global Land Cover Characteristics data set (GLCC), and the Global Land Cover Facility data set (GLCF). The results indicate that the 30-year mean climatology (and its associated atmospheric CO2 concentration) for the time period immediately preceding the collection date of the observed vegetation data produce the most accurate vegetation simulations when compared with all three observed vegetation data sets. The study also indicates that the BIOME4-simulated vegetation for Asia more closely matches the PNV data than the other two observed vegetation data sets. Given the same observed data, the accuracy assessments of the BIOME4 simulations made using the Kappa, Fuzzy Kappa and Nomad index map-comparison methods agree well when the compared vegetation types consist of a large number of spatially continuous grid cells. The results of this analysis can assist model users in designing experimental protocols for simulating vegetation.
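
Of the three map-comparison methods named above, the Kappa statistic is the most standard. A minimal sketch (not the authors' code, and ignoring the Fuzzy Kappa and Nomad variants) applied to invented biome labels:

```python
import numpy as np

def kappa(map_a, map_b):
    """Cohen's Kappa between two categorical maps, flattened to 1-D
    label arrays: 1 = perfect agreement, 0 = chance-level agreement."""
    a, b = np.asarray(map_a).ravel(), np.asarray(map_b).ravel()
    po = np.mean(a == b)                         # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c)   # chance agreement
             for c in np.union1d(a, b))
    return (po - pe) / (1.0 - pe)

# Invented 8-cell maps with 3 biome classes (0=forest, 1=grass, 2=desert):
simulated = np.array([0, 0, 1, 1, 2, 2, 0, 1])
observed = np.array([0, 0, 1, 2, 2, 2, 0, 1])
print(round(kappa(simulated, observed), 3))
```

Plain Kappa scores only cell-by-cell matches, which is why fuzzy and neighborhood-aware methods such as the Nomad index matter when mapped classes are spatially fragmented.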

  10. Technical Highlight: NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools

    SciTech Connect

    Ridouane, E.H.

    2012-01-01

    This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes.

  11. Use of an extracorporeal circulation perfusion simulator: evaluation of its accuracy and repeatability.

    PubMed

    Tokumine, Asako; Momose, Naoki; Tomizawa, Yasuko

    2013-12-01

    Medical simulators have mainly been used as educational tools, to train technicians and to educate potential users about safety. We combined software for hybrid-type extracorporeal circulation simulation (ECCSIM) with a CPB-Workshop console. We evaluated the performance of ECCSIM, including its accuracy and repeatability, during simulated extracorporeal circulation (ECC). We performed a detailed evaluation of the synchronization of the software with the console and the function of the built-in valves. An S-III heart–lung machine was used for the open circuit; it included a venous reservoir, an oxygenator (RX-25), and an arterial filter. The tubes for venous drainage and the arterial line were connected directly to the ports of the console. The ECCSIM recorded the liquid level of the reservoir continuously. The valve in the console controlled the pressure load of the arterial line. The software made any adjustments necessary to both the arterial pressure load and the venous drainage flow volume. No external flowmeters were necessary during simulation. We found the CPB-Workshop to be convenient, reliable, and sufficiently exact. It can be used to validate procedures by monitoring controls and responses using a combination of qualitative measures. PMID:24022821

  12. Summarising and validating test accuracy results across multiple studies for use in clinical practice.

    PubMed

    Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J

    2015-06-15

    Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV.
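
The "tailoring to prevalence" step above rests on the standard Bayes relationship between sensitivity, specificity, prevalence, and predictive values. A minimal sketch (the paper's prediction intervals and cross-validated calibration are not reproduced; the sensitivity, specificity, and prevalence values are invented):

```python
def post_test_probabilities(sens, spec, prev):
    """Positive and negative predictive values of a test when applied
    in a population with the given disease prevalence (Bayes' rule)."""
    tp = sens * prev                  # true-positive fraction
    fp = (1.0 - spec) * (1.0 - prev)  # false-positive fraction
    fn = (1.0 - sens) * prev          # false-negative fraction
    tn = spec * (1.0 - prev)          # true-negative fraction
    return tp / (tp + fp), tn / (tn + fn)

# The same test yields very different PPVs at different prevalences,
# which is why post-test probabilities must be population-specific:
ppv_low, npv_low = post_test_probabilities(0.90, 0.85, 0.05)
ppv_high, npv_high = post_test_probabilities(0.90, 0.85, 0.30)
print(round(ppv_low, 2), round(ppv_high, 2))
```

Summary PPV/NPV from a meta-analysis will therefore calibrate poorly in a new population unless re-derived from that population's prevalence, as the second clinical example illustrates.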

  13. Accuracy of relative positioning by interferometry with GPS Double-blind test results

    NASA Technical Reports Server (NTRS)

    Counselman, C. C., III; Gourevitch, S. A.; Herring, T. A.; King, B. W.; Shapiro, I. I.; Cappallo, R. J.; Rogers, A. E. E.; Whitney, A. R.; Greenspan, R. L.; Snyder, R. E.

    1983-01-01

    MITES (Miniature Interferometer Terminals for Earth Surveying) observations conducted on December 17 and 29, 1980, are analyzed. It is noted that the time span of the observations used on each day was 78 minutes, during which five satellites were always above 20 deg elevation. The observations are analyzed to determine the intersite position vectors by means of the algorithm described by Counselman and Gourevitch (1981). The average of the MITES results from the two days is presented. The rms differences between the two determinations of the components of the three vectors, which were about 65, 92, and 124 m long, were 8 mm for the north, 3 mm for the east, and 6 mm for the vertical. It is concluded that, at least for short distances, relative positioning by interferometry with GPS can be done reliably with subcentimeter accuracy.

  14. A toy model to test the accuracy of cosmological N-body simulations

    NASA Astrophysics Data System (ADS)

    Sylos Labini, F.

    2013-04-01

    The evolution of an isolated over-density represents a useful toy model to test the accuracy of a cosmological N-body code in the non-linear regime, as it is approximately equivalent to that of a truly isolated cloud of particles, with the same density profile and velocity distribution, in a non-expanding background. This is the case as long as the system size is smaller than the simulation box side, so that its interaction with the infinite copies can be neglected. In such a situation, the over-density rapidly undergoes a global collapse, forming a quasi-stationary state in virial equilibrium. However, by evolving the system with a cosmological code (GADGET) for a sufficiently long time, a clear deviation from such a quasi-equilibrium configuration is observed. This occurs in a time t_LI that depends on the values of the simulation numerical parameters such as the softening length and the time-stepping accuracy, i.e. it is a numerical artifact related to the limited spatial and temporal resolutions. The analysis of the Layzer-Irvine cosmic energy equation confirms that this deviation corresponds to an unphysical dynamical regime. By varying the numerical parameters of the simulation and the physical parameters of the system we show that the unphysical behaviour originates from badly integrated close scatterings of high-velocity particles. We find that, while the structure may remain virialized in the unphysical regime, its density and velocity profiles are modified with respect to the quasi-equilibrium configurations, converging, however, to well-defined shapes, the former characterised by a Navarro-Frenk-White-type behaviour.
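
The Layzer-Irvine cosmic energy equation mentioned above is, in its standard comoving form (with T the peculiar kinetic energy, W the potential energy, and a(t) the scale factor; this is the textbook expression, not necessarily the exact form used in the paper):

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\,(T + W) \;=\; -\,\frac{\dot{a}}{a}\,(2T + W)
```

When the system virializes, 2T + W tends to zero and the total energy is conserved; a violation of this balance in the numerical solution is what signals the unphysical regime.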

  15. SALTSTONE MATRIX CHARACTERIZATION AND STADIUM SIMULATION RESULTS

    SciTech Connect

    Langton, C.

    2009-07-30

    SIMCO Technologies, Inc. was contracted to evaluate the durability of the saltstone matrix material and to measure saltstone transport properties. This information will be used to: (1) parameterize the STADIUM® service life code, (2) predict the leach rate (degradation rate) for the saltstone matrix over 10,000 years using the STADIUM® concrete service life code, and (3) validate the modeled results by conducting leaching (water immersion) tests. Saltstone durability for this evaluation is limited to changes in the matrix itself and does not include changes in the chemical speciation of the contaminants in the saltstone. This report summarizes results obtained to date, which include characterization data for saltstone cured up to 365 days and characterization of saltstone cured for 137 days and then immersed in water for 31 days. Chemicals for preparing simulated non-radioactive salt solution were obtained from chemical suppliers. The saltstone slurry was mixed according to directions provided by SRNL; however, SIMCO Technologies Inc. personnel made a mistake in the premix proportions, using 21 wt% slag, 65 wt% fly ash, and 14 wt% cement instead of the reference proportions of 45 wt% slag, 45 wt% fly ash, and 10 wt% cement. The mistake was acknowledged, and new mixes have been prepared and are curing. Because the samples prepared were deficient in slag, which is very reactive in the caustic salt solution, and contained excess fly ash, the results presented in this report are expected to be conservative.
The hydraulic reactivity of slag is about four times that of fly ash so the amount of hydrated binder formed per unit volume in the SIMCO saltstone samples is

  16. Exploring Space Physics Concepts Using Simulation Results

    NASA Astrophysics Data System (ADS)

    Gross, N. A.

    2008-05-01

    The Center for Integrated Space Weather Modeling (CISM), a Science and Technology Center (STC) funded by the National Science Foundation, has the goal of developing a suite of integrated physics-based computer models of the space environment that can follow the evolution of a space weather event from the Sun to the Earth. In addition to the research goals, CISM is also committed to training the next generation of space weather professionals who are imbued with a system view of space weather. This view should include an understanding of both heliospheric and geospace phenomena. To this end, CISM offers a yearly Space Weather Summer School targeted to first-year graduate students, although advanced undergraduates and space weather professionals have also attended. This summer school uses a number of innovative pedagogical techniques, including devoting each afternoon to computer lab exercises that use results from research-quality simulations and visualization techniques, along with ground-based and satellite data, to explore concepts introduced during the morning lectures. These labs are suitable for use in a wide variety of educational settings, from formal classroom instruction to outreach programs. The goal of this poster is to outline the goals and content of the lab materials so that instructors may evaluate their potential use in the classroom or other settings.

  17. High accuracy simulations of black hole binaries: Spins anti-aligned with the orbital angular momentum

    NASA Astrophysics Data System (ADS)

    Chu, Tony; Pfeiffer, Harald P.; Scheel, Mark A.

    2009-12-01

    High-accuracy binary black hole simulations are presented for black holes with spins anti-aligned with the orbital angular momentum. The particular case studied represents an equal-mass binary with spins of equal magnitude S/m^2 = 0.43757 ± 0.00001. The system has initial orbital eccentricity ~4×10^-5, and is evolved through 10.6 orbits plus merger and ringdown. The remnant mass and spin are M_f = (0.961109 ± 0.000003)M and S_f/M_f^2 = 0.54781 ± 0.00001, respectively, where M is the mass during early inspiral. The gravitational waveforms have accumulated numerical phase errors of ≲0.1 radians without any time or phase shifts, and ≲0.01 radians when the waveforms are aligned with suitable time and phase shifts. The waveform is extrapolated to infinity using a procedure accurate to ≲0.01 radians in phase, and the extrapolated waveform differs by up to 0.13 radians in phase and about 1% in amplitude from the waveform extracted at finite radius r = 350M. The simulations employ different choices for the constraint damping parameters in the wave zone; this greatly reduces the effects of junk radiation, allowing the extraction of a clean gravitational wave signal even very early in the simulation.

  18. Technical Note: Maximising accuracy and minimising cost of a potentiometrically regulated ocean acidification simulation system

    NASA Astrophysics Data System (ADS)

    MacLeod, C. D.; Doyle, H. L.; Currie, K. I.

    2014-05-01

    This article describes a potentiometric ocean acidification simulation system which automatically regulates pH through the injection of 100% CO2 gas into temperature-controlled seawater. The system is ideally suited to long-term experimental studies of the effect of acidification on biological processes involving small-bodied (10-20 mm) calcifying or non-calcifying organisms. Using hobbyist grade equipment, the system was constructed for approximately USD 1200 per treatment unit (tank, pH regulation apparatus, chiller, pump/filter unit). An overall accuracy of ±0.05 pHT units (SD) was achieved over 90 days in two acidified treatments (7.60 and 7.40) at 12 °C using glass electrodes calibrated with salt water buffers, thereby preventing liquid junction error. The accuracy of the system was validated through the independent calculation of pHT (12 °C) using dissolved inorganic carbon (DIC) and total alkalinity (AT) data taken from discrete acidified seawater samples. The system was used to compare the shell growth of the marine gastropod Zeacumantus subcarinatus infected with the trematode parasite Maritrema novaezealandensis with that of uninfected snails, at pH levels of 7.4, 7.6, and 8.1.

  19. Evaluating the Accuracy of Results for Teacher Implemented Trial-Based Functional Analyses.

    PubMed

    Rispoli, Mandy; Ninci, Jennifer; Burke, Mack D; Zaini, Samar; Hatton, Heather; Sanchez, Lisa

    2015-09-01

    Trial-based functional analysis (TBFA) allows for the systematic and experimental assessment of challenging behavior in applied settings. The purposes of this study were to evaluate a professional development package focused on training three Head Start teachers to conduct TBFAs with fidelity during ongoing classroom routines. To assess the accuracy of the TBFA results, the effects of a function-based intervention derived from the TBFA were compared with the effects of a non-function-based intervention. Data were collected on child challenging behavior and appropriate communication. An A-B-A-C-D design was utilized in which A represented baseline, and B and C consisted of either function-based or non-function-based interventions counterbalanced across participants, and D represented teacher implementation of the most effective intervention. Results showed that the function-based intervention produced greater decreases in challenging behavior and greater increases in appropriate communication than the non-function-based intervention for all three children. PMID:26069219

  1. How well do people recall risk factor test results? Accuracy and bias among cholesterol screening participants.

    PubMed

    Croyle, Robert T; Loftus, Elizabeth F; Barger, Steven D; Sun, Yi-Chun; Hart, Marybeth; Gettig, JoAnn

    2006-05-01

    The authors conducted a community-based cholesterol screening study to examine accuracy of recall for self-relevant health information in long-term autobiographical memory. Adult community residents (N = 496) were recruited to participate in a laboratory-based cholesterol screening and were also provided cholesterol counseling in accordance with national guidelines. Participants were subsequently interviewed 1, 3, or 6 months later to assess their memory for their test results. Participants recalled their exact cholesterol levels inaccurately (38.0% correct) but their cardiovascular risk category comparatively well (88.7% correct). Recall errors showed a systematic bias: Individuals who received the most undesirable test results were most likely to remember their cholesterol scores and cardiovascular risk categories as lower (i.e., healthier) than those actually received. Recall bias was unrelated to age, education, knowledge, self-rated health status, and self-reported efforts to reduce cholesterol. The findings provide evidence that recall of self-relevant health information is susceptible to self-enhancement bias.

  2. Diagnostic Accuracy of Procalcitonin for Predicting Blood Culture Results in Patients With Suspected Bloodstream Infection

    PubMed Central

    Oussalah, Abderrahim; Ferrand, Janina; Filhine-Tresarrieu, Pierre; Aissa, Nejla; Aimone-Gastin, Isabelle; Namour, Fares; Garcia, Matthieu; Lozniewski, Alain; Guéant, Jean-Louis

    2015-01-01

    Previous studies have suggested that procalcitonin is a reliable marker for predicting bacteremia. However, these studies have had relatively small sample sizes or focused on a single clinical entity. The primary endpoint of this study was to investigate the diagnostic accuracy of procalcitonin for predicting or excluding clinically relevant pathogen categories in patients with suspected bloodstream infections. The secondary endpoint was to look for organisms significantly associated with internationally validated procalcitonin intervals. We performed a cross-sectional study that included 35,343 consecutive patients who underwent concomitant procalcitonin assays and blood cultures for suspected bloodstream infections. Biochemical and microbiological data were systematically collected in an electronic database and extracted for purposes of this study. Depending on blood culture results, patients were classified into 1 of the 5 following groups: negative blood culture, Gram-positive bacteremia, Gram-negative bacteremia, fungi, and potential contaminants found in blood cultures (PCBCs). The highest procalcitonin concentration was observed in patients with blood cultures growing Gram-negative bacteria (median 2.2 ng/mL [IQR 0.6–12.2]), and the lowest procalcitonin concentration was observed in patients with negative blood cultures (median 0.3 ng/mL [IQR 0.1–1.1]). With optimal thresholds ranging from ≤0.4 to ≤0.75 ng/mL, procalcitonin had a high diagnostic accuracy for excluding all pathogen categories with the following negative predictive values: Gram-negative bacteria (98.9%) (including enterobacteria [99.2%], nonfermenting Gram-negative bacilli [99.7%], and anaerobic bacteria [99.9%]), Gram-positive bacteria (98.4%), and fungi (99.6%). A procalcitonin concentration ≥10 ng/mL was associated with a high risk of Gram-negative (odds ratio 5.98; 95% CI, 5.20–6.88) or Gram-positive (odds ratio 3.64; 95% CI, 3.11–4.26) bacteremia but

  3. Use of Electronic Health Record Simulation to Understand the Accuracy of Intern Progress Notes.

    PubMed

    March, Christopher A; Scholl, Gretchen; Dversdal, Renee K; Richards, Matthew; Wilson, Leah M; Mohan, Vishnu; Gold, Jeffrey A

    2016-05-01

    Background: With the widespread adoption of electronic health records (EHRs), there is a growing awareness of problems in EHR training for new users and subsequent problems with the quality of information present in EHR-generated progress notes. By standardizing the case, simulation allows for the discovery of EHR patterns of use as well as a modality to aid in EHR training. Objective: To develop a high-fidelity EHR training exercise for internal medicine interns to understand patterns of EHR utilization in the generation of daily progress notes. Methods: Three months after beginning their internship, 32 interns participated in an EHR simulation designed to assess patterns in note writing and generation. Each intern was given a simulated chart and instructed to create a daily progress note. Notes were graded for use of copy-paste, macros, and accuracy of presented data. Results: A total of 31 out of 32 interns (97%) completed the exercise. There was wide variance in use of macros to populate data, with multiple macro types used for the same data category. Three-quarters of notes contained either copy-paste elements or the elimination of active medical problems from the prior days' notes. This was associated with a significant number of quality issues, including failure to recognize a lack of deep vein thrombosis prophylaxis, medications stopped on admission, and issues in prior discharge summary. Conclusions: Interns displayed wide variation in the process of creating progress notes. Additional studies are being conducted to determine the impact EHR-based simulation has on standardization of note content.

  4. Accuracy of the unified approach in maternally influenced traits - illustrated by a simulation study in the honey bee (Apis mellifera)

    PubMed Central

    2013-01-01

    Background: The honey bee is an economically important species. With a rapid decline of the honey bee population, it is necessary to implement an improved genetic evaluation methodology. In this study, we investigated the applicability of the unified approach and its impact on the accuracy of estimation of breeding values for maternally influenced traits on a simulated dataset for the honey bee. Due to the limitation to the number of individuals that can be genotyped in a honey bee population, the unified approach can be an efficient strategy to increase the genetic gain and to provide a more accurate estimation of breeding values. We calculated the accuracy of estimated breeding values for two evaluation approaches, the unified approach and the traditional pedigree-based approach. We analyzed the effects of different heritabilities as well as genetic correlation between direct and maternal effects on the accuracy of estimation of direct, maternal and overall breeding values (sum of maternal and direct breeding values). The genetic and reproductive biology of the honey bee was accounted for by taking into consideration characteristics such as colony structure, uncertain paternity, overlapping generations and polyandry. In addition, we used a modified numerator relationship matrix and a realistic genome for the honey bee. Results: For all values of heritability and correlation, the accuracy of overall estimated breeding values increased significantly with the unified approach. The increase in accuracy was always higher for the case when there was no correlation as compared to the case where a negative correlation existed between maternal and direct effects. Conclusions: Our study shows that the unified approach is a useful methodology for genetic evaluation in honey bees, and can contribute immensely to the improvement of traits of apicultural interest such as resistance to Varroa or production and behavioural traits. 
In particular, the study is of great interest for

  5. Towards an assessment of the accuracy of density functional theory for first principles simulations of water

    NASA Astrophysics Data System (ADS)

    Grossman, Jeffrey C.; Schwegler, Eric; Draeger, Erik W.; Gygi, François; Galli, Giulia

    2004-01-01

    A series of Car-Parrinello (CP) molecular dynamics simulations of water are presented, aimed at assessing the accuracy of density functional theory in describing the structural and dynamical properties of water at ambient conditions. We found negligible differences in structural properties obtained using the Perdew-Burke-Ernzerhof or the Becke-Lee-Yang-Parr exchange and correlation energy functionals; we also found that size effects, although not fully negligible when using 32 molecule cells, are rather small. In addition, we identified a wide range of values of the fictitious electronic mass (μ) entering the CP Lagrangian for which the electronic ground state is accurately described, yielding trajectories and average properties that are independent of the value chosen. However, care must be exercised not to carry out simulations outside this range, where structural properties may artificially depend on μ. In the case of an accurate description of the electronic ground state, and in the absence of proton quantum effects, we obtained an oxygen-oxygen correlation function that is overstructured compared to experiment, and a diffusion coefficient which is approximately ten times smaller.

  6. A computer simulation study comparing lesion detection accuracy with digital mammography, breast tomosynthesis, and cone-beam CT breast imaging

    SciTech Connect

    Gong Xing; Glick, Stephen J.; Liu, Bob; Vedula, Aruna A.; Thacker, Samta

    2006-04-15

    Although conventional mammography is currently the best modality to detect early breast cancer, it is limited in that the recorded image represents the superposition of a three-dimensional (3D) object onto a 2D plane. Recently, two promising approaches for 3D volumetric breast imaging have been proposed, breast tomosynthesis (BT) and CT breast imaging (CTBI). To investigate possible improvements in lesion detection accuracy with either breast tomosynthesis or CT breast imaging as compared to digital mammography (DM), a computer simulation study was conducted using simulated lesions embedded into a structured 3D breast model. The computer simulation realistically modeled x-ray transport through a breast model, as well as the signal and noise propagation through a CsI based flat-panel imager. Polyenergetic x-ray spectra of Mo/Mo 28 kVp for digital mammography, Mo/Rh 28 kVp for BT, and W/Ce 50 kVp for CTBI were modeled. For the CTBI simulation, the intensity of the x-ray spectra for each projection view was determined so as to provide a total average glandular dose of 4 mGy, which is approximately equivalent to that given in conventional two-view screening mammography. The same total dose was modeled for both the DM and BT simulations. Irregular lesions were simulated by using a stochastic growth algorithm providing lesions with an effective diameter of 5 mm. Breast tissue was simulated by generating an ensemble of backgrounds with a power law spectrum, with the composition of 50% fibroglandular and 50% adipose tissue. To evaluate lesion detection accuracy, a receiver operating characteristic (ROC) study was performed with five observers reading an ensemble of images for each case. The average area under the ROC curves (A{sub z}) was 0.76 for DM, 0.93 for BT, and 0.94 for CTBI. Results indicated that for the same dose, a 5 mm lesion embedded in a structured breast phantom was detected by the two volumetric breast imaging systems, BT and CTBI, with statistically
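
The figure of merit quoted above, the area under the ROC curve (A_z), can be estimated nonparametrically as a Mann-Whitney two-sample statistic. A minimal sketch with made-up observer confidence ratings (not data from the study):

```python
def roc_auc(present_scores, absent_scores):
    """Nonparametric area under the ROC curve via the Mann-Whitney
    statistic: P(score_present > score_absent), ties counted as 1/2."""
    wins = 0.0
    for s in present_scores:
        for b in absent_scores:
            if s > b:
                wins += 1.0
            elif s == b:
                wins += 0.5
    return wins / (len(present_scores) * len(absent_scores))

# Hypothetical 5-point observer confidence ratings, illustration only.
present = [5, 4, 4, 3, 5, 4]   # lesion-present images
absent = [2, 3, 1, 2, 3, 1]    # lesion-absent images
az = roc_auc(present, absent)
```

With many readers and cases, averaging such per-reader AUC estimates is one simple way to arrive at the kind of average A_z values reported in the abstract.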

  7. Balancing accuracy, robustness, and efficiency in simulations of coupled magma/mantle dynamics

    NASA Astrophysics Data System (ADS)

    Katz, R. F.

    2011-12-01

    Magmatism plays a central role in many Earth-science problems, and is particularly important for the chemical evolution of the mantle. The standard theory for coupled magma/mantle dynamics is fundamentally multi-physical, comprising mass and force balance for two phases, plus conservation of energy and composition in a two-component (minimum) thermochemical system. The tight coupling of these various aspects of the physics makes obtaining numerical solutions a significant challenge. Previous authors have advanced by making drastic simplifications, but these have limited applicability. Here I discuss progress, enabled by advanced numerical software libraries, in obtaining numerical solutions to the full system of governing equations. The goals in developing the code are as usual: accuracy of solutions, robustness of the simulation to non-linearities, and efficiency of code execution. I use the cutting-edge example of magma genesis and migration in a heterogeneous mantle to elucidate these issues. I describe the approximations employed and their consequences, as a means to frame the question of where and how to make improvements. I conclude that the capabilities needed to advance multi-physics simulation are, in part, distinct from those of problems with weaker coupling, or fewer coupled equations. Chief among these distinct requirements is the need to dynamically adjust the solution algorithm to maintain robustness in the face of coupled nonlinearities that would otherwise inhibit convergence. This may mean introducing Picard iteration rather than full coupling, switching between semi-implicit and explicit time-stepping, or adaptively increasing the strength of preconditioners. All of these can be accomplished by the user with, for example, PETSc. Formalising this adaptivity should be a goal for future development of software packages that seek to enable multi-physics simulation.

  8. Simulation Results for Airborne Precision Spacing along Continuous Descent Arrivals

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.; Baxley, Brian T.

    2008-01-01

    This paper describes the results of a fast-time simulation experiment and a high-fidelity simulator validation with merging streams of aircraft flying Continuous Descent Arrivals through generic airspace to a runway at Dallas-Ft Worth. Aircraft made small speed adjustments based on an airborne-based spacing algorithm, so as to arrive at the threshold exactly at the assigned time interval behind their Traffic-To-Follow. The 40 aircraft were initialized at different altitudes and speeds on one of four different routes, and then merged at different points and altitudes while flying Continuous Descent Arrivals. This merging and spacing using flight deck equipment and procedures to augment or implement Air Traffic Management directives is called Flight Deck-based Merging and Spacing, an important subset of a larger Airborne Precision Spacing functionality. This research indicates that Flight Deck-based Merging and Spacing initiated while at cruise altitude and well prior to the Terminal Radar Approach Control entry can significantly contribute to the delivery of aircraft at a specified interval to the runway threshold with a high degree of accuracy and at a reduced pilot workload. Furthermore, previously documented work has shown that using a Continuous Descent Arrival instead of a traditional step-down descent can save fuel, reduce noise, and reduce emissions. Research into Flight Deck-based Merging and Spacing is a cooperative effort between government and industry partners.

  9. Validation of accuracy of liver model with temperature-dependent thermal conductivity by comparing the simulation and in vitro RF ablation experiment.

    PubMed

    Watanabe, Hiroki; Yamazaki, Nozomu; Isobe, Yosuke; Lu, XiaoWei; Kobayashi, Yo; Miyashita, Tomoyuki; Ohdaira, Takeshi; Hashizume, Makoto; Fujie, Masakatsu G

    2012-01-01

    Radiofrequency (RF) ablation is increasingly used to treat cancer because it is minimally invasive. However, it is difficult for operators to control precisely the formation of coagulation zones because of the inadequacies of imaging modalities. To overcome this limitation, we previously proposed a model-based robotic ablation system that can create the required size and shape of coagulation zone based on the dimensions of the tumor. At the heart of such a robotic system is a precise temperature distribution simulator for RF ablation. In this article, we evaluated the simulation accuracy of two numerical simulation liver models, one using a constant thermal conductivity value and the other using temperature-dependent thermal conductivity values, compared with temperatures obtained using in vitro experiments. The liver model that reflected the temperature dependence of thermal conductivity did not result in a large increase of simulation accuracy compared with the temperature-independent model in the temperature range achieved during clinical RF ablation.

  10. Accuracy Rates of Sex Estimation by Forensic Anthropologists through Comparison with DNA Typing Results in Forensic Casework.

    PubMed

    Thomas, Richard M; Parks, Connie L; Richard, Adam H

    2016-09-01

    A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7%, with accuracy increasing as more skeletal material is available for analysis and as the education level and certification of the examiner increase. Nine of 19 incorrect assessments resulted from cases in which only one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate for such cases. PMID:27352918

  12. NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools (Fact Sheet)

    SciTech Connect

    Not Available

    2012-01-01

    This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes. Researchers at the National Renewable Energy Laboratory (NREL) have developed models for evaluating the thermal performance of walls in existing homes that will improve the accuracy of building energy simulation tools when predicting potential energy savings of existing homes. Uninsulated walls are typical in older homes where the wall cavities were not insulated during construction or where the insulating material has settled. Accurate calculation of heat transfer through building enclosures will help determine the benefit of energy efficiency upgrades in order to reduce energy consumption in older American homes. NREL performed detailed computational fluid dynamics (CFD) analysis to quantify the energy loss/gain through the walls and to visualize different airflow regimes within the uninsulated cavities. The effects of ambient outdoor temperature, radiative properties of building materials, and insulation level were investigated. The study showed that multi-dimensional airflows occur in walls with uninsulated cavities and that the thermal resistance is a function of the outdoor temperature - an effect not accounted for in existing building energy simulation tools. The study quantified the difference between CFD prediction and the approach currently used in building energy simulation tools over a wide range of conditions. For example, researchers found that CFD predicted lower heating loads and slightly higher cooling loads. Implementation of CFD results into building energy simulation tools such as DOE2 and EnergyPlus will likely reduce the predicted heating load of homes. Researchers also determined that a small air gap in a partially insulated cavity can lead to a significant reduction in thermal resistance. For instance, a 4-in. tall air gap

  13. Accuracy of a Computer-Aided Surgical Simulation (CASS) Protocol for Orthognathic Surgery: A Prospective Multicenter Study

    PubMed Central

    Hsu, Sam Sheng-Pin; Gateno, Jaime; Bell, R. Bryan; Hirsch, David L.; Markiewicz, Michael R.; Teichgraeber, John F.; Zhou, Xiaobo; Xia, James J.

    2012-01-01

    Purpose The purpose of this prospective multicenter study was to assess the accuracy of a computer-aided surgical simulation (CASS) protocol for orthognathic surgery. Materials and Methods The accuracy of the CASS protocol was assessed by comparing planned and postoperative outcomes of 65 consecutive patients enrolled from 3 centers. Computer-generated surgical splints were used for all patients. For the genioplasty, one center utilized computer-generated chin templates to reposition the chin segment only for patients with asymmetry. Standard intraoperative measurements were utilized without the chin templates for the remaining patients. The primary outcome measurements were linear and angular differences for the maxilla, mandible and chin when the planned and postoperative models were registered at the cranium. The secondary outcome measurements were: maxillary dental midline difference between the planned and postoperative positions; and linear and angular differences of the chin segment between the groups with and without the use of the template. The latter was measured when the planned and postoperative models were registered at the mandibular body. Statistical analyses were performed, and the accuracy was reported using root mean square deviation (RMSD) and Bland and Altman's method for assessing measurement agreement. Results In the primary outcome measurements, there was no statistically significant difference among the 3 centers for the maxilla and mandible. The largest RMSD was 1.0mm and 1.5° for the maxilla, and 1.1mm and 1.8° for the mandible. For the chin, there was a statistically significant difference between the groups with and without the use of the chin template. The chin template group showed excellent accuracy, with a largest positional RMSD of 1.0mm and a largest orientational RMSD of 2.2°. However, larger variances were observed in the group not using the chin template. This was significant in anteroposterior and superoinferior directions, as in
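
The positional accuracy metric above, root mean square deviation (RMSD) between planned and postoperative positions, reduces to a short computation over paired 3-D landmarks. A sketch with hypothetical coordinates (not data from the study):

```python
import math

def rmsd(planned, postop):
    """Root mean square deviation (same units as the input, here mm)
    between paired 3-D landmark positions on the planned and
    postoperative models."""
    assert len(planned) == len(postop) and planned
    total = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(planned, postop):
        total += (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
    return math.sqrt(total / len(planned))

# Hypothetical landmark coordinates in mm, illustration only.
planned = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
postop = [(0.5, 0.0, 0.0), (10.0, 0.5, 0.0), (0.0, 10.0, 0.5)]
deviation = rmsd(planned, postop)   # 0.5 mm for these toy points
```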

  14. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    NASA Astrophysics Data System (ADS)

    Ko, P.; Kurosawa, S.

    2014-03-01

    The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work enhancing turbine performance, including the elongation of the operational life span and the improvement of turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with model test results for an Ns 400 Kaplan model turbine. The experimentally measured data, including turbine efficiency, cavitation performance and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.

  15. [Initial results with the Munich knee simulator].

    PubMed

    Frey, M; Riener, R; Burgkart, R; Pröll, T

    2002-01-01

    In orthopaedics, more than 50 different clinical knee joint evaluation tests exist that must be trained in orthopaedic education. Often it is not possible to obtain sufficient practical training in a clinical environment; such training can be improved with Virtual Reality technology. Within the framework of the Munich Knee Joint Simulation project, an artificial leg with anatomical properties is attached to an industrial robot via a force-torque sensor. The recorded forces and torques are the input to a simple biomechanical model of the human knee joint, and the robot is controlled in such a way that the user has the feeling of moving a real leg. The leg is embedded in a realistic environment with a couch and a patient on it.

  16. Improving the accuracy of simulation of radiation-reaction effects with implicit Runge-Kutta-Nyström methods

    NASA Astrophysics Data System (ADS)

    Elkina, N. V.; Fedotov, A. M.; Herzing, C.; Ruhl, H.

    2014-05-01

    The Landau-Lifshitz equation provides an efficient way to account for the effects of radiation reaction without acquiring the nonphysical solutions typical for the Lorentz-Abraham-Dirac equation. We solve the Landau-Lifshitz equation in its covariant four-vector form in order to control both the energy and momentum of radiating particles. Our study reveals that implicit time-symmetric collocation methods of the Runge-Kutta-Nyström type are superior in accuracy and better at maintaining the mass-shell condition than their explicit counterparts. We carry out an extensive study of numerical accuracy by comparing the analytical and numerical solutions of the Landau-Lifshitz equation. Finally, we present the results of the simulation of particle scattering by a focused laser pulse. Due to radiation reaction, particles are less capable of penetrating into the focal region compared to the case where radiation reaction is neglected. Our results are important for designing forthcoming experiments with high intensity laser fields.
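
The benefit of time-symmetric implicit steps for preserving a constraint (here, the mass-shell condition) can be illustrated on a toy problem: for du/dt = A u with A skew-symmetric (a 2-D rotation), |u| is an exact invariant, which the implicit midpoint rule (the simplest time-symmetric implicit collocation method) preserves to machine precision while explicit Euler inflates it every step. A sketch with illustrative parameters, not taken from the paper:

```python
import math

def euler_step(u, w, h):
    # One explicit Euler step of du/dt = A u, A = [[0, -w], [w, 0]].
    x, y = u
    return (x - h * w * y, y + h * w * x)

def midpoint_step(u, w, h):
    # Implicit midpoint: solve (I - h*A/2) u_next = (I + h*A/2) u,
    # done in closed form for the 2x2 rotation generator.
    a = 0.5 * h * w
    x, y = u
    bx, by = x - a * y, y + a * x      # right-hand side (I + h*A/2) u
    det = 1.0 + a * a
    return ((bx - a * by) / det, (a * bx + by) / det)

u_euler = u_mid = (1.0, 0.0)
w, h = 2.0, 0.05
for _ in range(2000):
    u_euler = euler_step(u_euler, w, h)
    u_mid = midpoint_step(u_mid, w, h)

norm_euler = math.hypot(*u_euler)   # drifts far above the invariant value 1
norm_mid = math.hypot(*u_mid)       # stays at 1 to machine precision
```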

  17. RF propagation simulator to predict location accuracy of GSM mobile phones for emergency applications

    NASA Astrophysics Data System (ADS)

    Green, Marilynn P.; Wang, S. S. Peter

    2002-11-01

    Mobile location is one of the fastest growing areas for the development of new technologies, services and applications. This paper describes the channel models that were developed as a basis of discussion to assist the Technical Subcommittee T1P1.5 in its consideration of various mobile location technologies for emergency applications (1997-1998) for presentation to the U.S. Federal Communications Commission (FCC). It also presents the PCS 1900 extension to this model, which is based on the COST-231 extended Hata model and a review of the original Okumura graphical interpretation of signal propagation characteristics in different environments. Based on a wide array of published (and non-publicly disclosed) empirical data, the signal propagation models described in this paper were all obtained by consensus of a group of inter-company participants in order to facilitate the direct comparison between simulations of different handset-based and network-based location methods prior to their standardization for emergency E-911 applications by the FCC. Since that time, this model has become a de facto standard for assessing the positioning accuracy of different location technologies using GSM mobile terminals. In this paper, the radio environment is described to the level of detail that is necessary to replicate it in a software environment.
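
For reference, the COST-231 extension of the Hata model mentioned above has a closed form for median path loss. A sketch using the standard published constants (small/medium-city mobile-antenna correction; validity limits are nominal, and the example link parameters are assumptions):

```python
import math

def cost231_hata_db(f_mhz, d_km, h_base_m, h_mobile_m, metropolitan=False):
    """Median path loss in dB from the COST-231 extension of the Hata
    model. Nominal validity: f = 1500-2000 MHz, d = 1-20 km, base
    antenna 30-200 m, mobile antenna 1-10 m."""
    # Small/medium-city mobile antenna correction a(h_m).
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    c_m = 3.0 if metropolitan else 0.0   # metropolitan-centre offset
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km)
            + c_m)

# Example: PCS 1900 link, 2 km from a 40 m base station (assumed values).
loss = cost231_hata_db(f_mhz=1900.0, d_km=2.0, h_base_m=40.0, h_mobile_m=1.5)
```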

  18. A Bloch-McConnell simulator with pharmacokinetic modeling to explore accuracy and reproducibility in the measurement of hyperpolarized pyruvate

    NASA Astrophysics Data System (ADS)

    Walker, Christopher M.; Bankson, James A.

    2015-03-01

    Magnetic resonance imaging (MRI) of hyperpolarized (HP) agents has the potential to probe in vivo metabolism with a sensitivity and specificity that were not previously possible. Biological conversion of HP agents, specifically in cancer, has been shown to correlate with the presence of disease, its stage, and response to therapy. For such metabolic biomarkers derived from MRI of hyperpolarized agents to be clinically impactful, they need to be validated and well characterized. However, imaging of HP substrates is distinct from conventional MRI due to the non-renewable nature of the transient HP magnetization. Moreover, due to current practical limitations in the generation and evolution of hyperpolarized agents, it is not feasible to fully characterize measurement and processing strategies experimentally. In this work we use a custom Bloch-McConnell simulator with pharmacokinetic modeling to characterize the performance of specific magnetic resonance spectroscopy sequences over a range of biological conditions. We performed numerical simulations to evaluate the effect of sequence parameters over a range of chemical conversion rates. Each simulation was analyzed repeatedly with the addition of noise in order to determine the accuracy and reproducibility of measurements. Results indicate that under both closed and perfused conditions, acquisition parameters can affect measurements in a tissue-dependent manner, suggesting that great care needs to be taken when designing studies involving hyperpolarized agents. More modeling studies will be needed to determine what effect sequence parameters have on more advanced acquisition and processing methods.
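
The core of such a simulator, before adding full Bloch-McConnell chemical-shift dynamics, is a two-pool exchange model whose non-renewable magnetization is partly consumed by each excitation. A minimal sketch with forward-Euler integration; all rates, the flip angle, and the TR are illustrative assumptions, not the parameters used in the study:

```python
import math

def simulate(kpl=0.05, t1p=30.0, t1l=25.0, flip_deg=10.0, tr=2.0,
             n_excitations=30, dt=0.01):
    """Two-pool HP pyruvate -> lactate model:
       dMp/dt = -(1/T1p + kpl) * Mp,  dMl/dt = kpl * Mp - Ml / T1l.
    Each excitation reads sin(theta)*M and leaves cos(theta)*M of the
    longitudinal magnetization."""
    mp, ml = 1.0, 0.0
    cos_t = math.cos(math.radians(flip_deg))
    sin_t = math.sin(math.radians(flip_deg))
    pyr, lac = [], []
    steps = int(round(tr / dt))
    for _ in range(n_excitations):
        pyr.append(mp * sin_t)   # signal sampled by this excitation
        lac.append(ml * sin_t)
        mp *= cos_t              # excitation consumes polarization
        ml *= cos_t
        for _ in range(steps):   # relaxation + exchange over one TR
            dmp = -(1.0 / t1p + kpl) * mp
            dml = kpl * mp - ml / t1l
            mp += dmp * dt
            ml += dml * dt
    return pyr, lac

pyr, lac = simulate()
```

Sweeping `flip_deg` or `tr` in such a toy model already shows how acquisition parameters shape the apparent conversion signal, which is the effect the abstract describes.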

  19. Accuracy and convergence of coupled finite-volume/Monte Carlo codes for plasma edge simulations of nuclear fusion reactors

    NASA Astrophysics Data System (ADS)

    Ghoos, K.; Dekeyser, W.; Samaey, G.; Börner, P.; Baelmans, M.

    2016-10-01

    The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise and Robbins-Monro. Practical procedures to estimate the errors in complex codes are also proposed. Moreover, first results with more complex models show that an order-of-magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
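
The Robbins-Monro idea, a diminishing step size so that Monte Carlo noise is averaged out across coupling iterations, can be illustrated on a scalar fixed-point toy problem. This is only a caricature of the FV/MC coupling, with an assumed noisy map whose exact fixed point is x* = 2:

```python
import random

random.seed(1)

def f_noisy(x, sigma=0.05):
    # Noisy evaluation of f(x) = 0.5*x + 1 (Monte Carlo stand-in).
    return 0.5 * x + 1.0 + random.gauss(0.0, sigma)

def plain_iteration(n=2000):
    # Plain relaxation x <- f(x): retains a persistent noise floor.
    x = 0.0
    for _ in range(n):
        x = f_noisy(x)
    return x

def robbins_monro(n=2000):
    # Diminishing step 1/k averages the noise out over the iteration.
    x = 0.0
    for k in range(1, n + 1):
        x += (f_noisy(x) - x) / k
    return x

x_plain = plain_iteration()   # hovers near 2 with residual noise
x_rm = robbins_monro()        # converges to 2 as noise is averaged away
```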

  20. Accuracy of linear measurement in the Galileos cone beam computed tomography under simulated clinical conditions

    PubMed Central

    Ganguly, R; Ruprecht, A; Vincent, S; Hellstein, J; Timmons, S; Qian, F

    2011-01-01

    Objectives The aim of this study was to determine the geometric accuracy of cone beam CT (CBCT)-based linear measurements of bone height obtained with the Galileos CBCT (Sirona Dental Systems Inc., Bensheim, Hessen, Germany) in the presence of soft tissues. Methods Six embalmed cadaver heads were imaged with the Galileos CBCT unit subsequent to placement of radiopaque fiduciary markers over the buccal and lingual cortical plates. Electronic linear measurements of bone height were obtained using the Sirona software. Physical measurements were obtained with digital calipers at the same location. This distance was compared on all six specimens bilaterally to determine accuracy of the image measurements. Results The findings showed no statistically significant difference between the imaging and physical measurements (P > 0.05) as determined by a paired sample t-test. The intraclass correlation was used to measure the intrarater reliability of repeated measures and there was no statistically significant difference between measurements performed at the same location (P > 0.05). Conclusions The Galileos CBCT image-based linear measurement between anatomical structures within the mandible in the presence of soft tissues is sufficiently accurate for clinical use. PMID:21697155
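
The significance test used above is a paired-sample t-test on electronic vs. physical measurements taken at the same sites. A sketch with hypothetical readings (not data from the study):

```python
import math

def paired_t(a, b):
    """Paired-sample t statistic and degrees of freedom for measurement
    pairs a[i], b[i] taken at the same anatomical site."""
    n = len(a)
    d = [x - y for x, y in zip(a, b)]
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n), n - 1

# Hypothetical bone-height readings in mm (CBCT software vs. caliper).
cbct = [12.1, 15.3, 10.8, 14.0, 11.6, 13.2]
caliper = [12.0, 15.5, 10.7, 14.2, 11.5, 13.1]
t_stat, df = paired_t(cbct, caliper)
# |t_stat| below the two-sided 5% critical value for df = 5 (2.571)
# would be consistent with "no statistically significant difference".
```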

  1. Accuracy and repeatability of weighing for occupational hygiene measurements: results from an inter-laboratory comparison.

    PubMed

    Stacey, Peter; Revell, Graham; Tylee, Barry

    2002-11-01

    Gravimetric analysis is a fundamental technique frequently used in occupational hygiene assessments, but few studies have investigated its repeatability and reproducibility. Four inter-laboratory comparisons are discussed in this paper. The first involved 32 laboratories weighing 25 mm diameter glassfibre filters, the second involved 11 laboratories weighing 25 mm diameter PVC filters and the third involved eight laboratories weighing plastic IOM heads with 25 mm diameter glassfibre filters. Data from the third study found that measurements using this type of IOM head were unreliable. A fourth study, to ascertain whether laboratories could improve their performance, involved a selected sub-group of 10 laboratories from the first exercise that analysed the 25 mm diameter glassfibre filters. The studies tested the analytical measurement process and not just the variation in weighings obtained on blank filters, as previous studies have done. Graphs of data from the first and second exercises suggest that a power-curve relationship exists between reproducibility and loading, and between repeatability and loading. The relationship for reproducibility in the first study followed the equation log s(R) = -0.62 log m + 0.86 and in the second study log s(R) = -0.64 log m + 0.57, where s(R) is the reproducibility in terms of per cent relative standard deviation (%RSD) and m is the weight of loading in milligrams. The equation for glassfibre filters from the first exercise suggested that at a measurement of 0.4 mg (about a tenth of the United Kingdom legislative definition of a hazardous substance for a respirable dust for an 8 h sample), the measurement reproducibility is more than +/-25% (2sigma). The results from PVC filters had better repeatability estimates than the glassfibre filters, but overall they had similar estimates of reproducibility. An improvement in both the reproducibility and repeatability for glassfibre filters was observed in the fourth study. This improvement reduced
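
The quoted power-law fit implies, for example, the greater-than-±25% (2σ) reproducibility at a 0.4 mg loading. A quick check of that arithmetic, with the coefficients taken from the abstract (glassfibre filters, first exercise):

```python
import math

def reproducibility_rsd(m_mg, slope=-0.62, intercept=0.86):
    """Reproducibility s_R in %RSD as a function of loading m (mg),
    from the fit log10(s_R) = slope * log10(m) + intercept."""
    return 10.0 ** (slope * math.log10(m_mg) + intercept)

s_r = reproducibility_rsd(0.4)   # about 12.8 %RSD at a 0.4 mg loading
two_sigma = 2.0 * s_r            # exceeds 25%, as stated in the abstract
```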


  3. Adaptive constructive processes and memory accuracy: Consequences of counterfactual simulations in young and older adults

    PubMed Central

    Gerlach, Kathy D.; Dornblaser, David W.; Schacter, Daniel L.

    2013-01-01

    People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterized as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b, young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test, participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2, younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterization as an adaptive constructive process. PMID:23560477

  4. Improving the Accuracy of Whole Genome Prediction for Complex Traits Using the Results of Genome Wide Association Studies

    PubMed Central

    Zhang, Zhe; Ober, Ulrike; Erbe, Malena; Zhang, Hao; Gao, Ning; He, Jinlong; Li, Jiaqi; Simianer, Henner

    2014-01-01

    Utilizing the whole genomic variation of complex traits to predict the yet-to-be observed phenotypes or unobserved genetic values via whole genome prediction (WGP) and to infer the underlying genetic architecture via genome wide association study (GWAS) is an interesting and fast developing area in the context of human disease studies as well as in animal and plant breeding. Though thousands of significant loci for several species were detected via GWAS in the past decade, they were not used directly to improve WGP due to lack of proper models. Here, we propose a generalized way of building trait-specific genomic relationship matrices which can exploit GWAS results in WGP via a best linear unbiased prediction (BLUP) model for which we suggest the name BLUP|GA. Results from two illustrative examples show that using already existing GWAS results from public databases in BLUP|GA improved the accuracy of WGP for two out of the three model traits in a dairy cattle data set, and for nine out of the 11 traits in a rice diversity data set, compared to the reference methods GBLUP and BayesB. While BLUP|GA outperforms BayesB, its required computing time is comparable to GBLUP. Further simulation results suggest that accounting for publicly available GWAS results is potentially more useful for WGP utilizing smaller data sets and/or traits of low heritability, depending on the genetic architecture of the trait under consideration. To our knowledge, this is the first study incorporating public GWAS results formally into the standard GBLUP model and we think that the BLUP|GA approach deserves further investigations in animal breeding, plant breeding as well as human genetics. PMID:24663104
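
The trait-specific genomic relationship matrix at the heart of BLUP|GA can be illustrated with a VanRaden-style construction in which GWAS-derived weights up-weight selected markers; the sketch below is illustrative only (the toy genotype data, the weight vector, and the function name are hypothetical, not taken from the paper):

```python
import numpy as np

def vanraden_G(M, weights=None):
    """VanRaden-style genomic relationship matrix from a genotype
    matrix M (n individuals x m markers, coded 0/1/2).
    `weights` (length m) up-weights markers flagged by GWAS; uniform
    weights recover the standard G used in GBLUP."""
    p = M.mean(axis=0) / 2.0                  # allele frequencies
    Z = M - 2.0 * p                           # centred genotypes
    if weights is None:
        weights = np.ones(M.shape[1])
    D = np.diag(weights / weights.sum() * len(weights))
    denom = 2.0 * np.sum(p * (1.0 - p))
    return Z @ D @ Z.T / denom

rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(5, 20))          # toy genotypes
G = vanraden_G(M)                             # standard GBLUP kernel
G_w = vanraden_G(M, weights=rng.random(20))   # GWAS-informed weighting
print(G.shape)  # (5, 5)
```

With uniform weights the diagonal weight matrix reduces to the identity, so the weighted and standard matrices coincide; the GWAS information enters only through non-uniform weights.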

  5. Improving the accuracy of whole genome prediction for complex traits using the results of genome wide association studies.

    PubMed

    Zhang, Zhe; Ober, Ulrike; Erbe, Malena; Zhang, Hao; Gao, Ning; He, Jinlong; Li, Jiaqi; Simianer, Henner

    2014-01-01

    Utilizing the whole genomic variation of complex traits to predict the yet-to-be observed phenotypes or unobserved genetic values via whole genome prediction (WGP) and to infer the underlying genetic architecture via genome wide association study (GWAS) is an interesting and fast developing area in the context of human disease studies as well as in animal and plant breeding. Though thousands of significant loci for several species were detected via GWAS in the past decade, they were not used directly to improve WGP due to lack of proper models. Here, we propose a generalized way of building trait-specific genomic relationship matrices which can exploit GWAS results in WGP via a best linear unbiased prediction (BLUP) model for which we suggest the name BLUP|GA. Results from two illustrative examples show that using already existing GWAS results from public databases in BLUP|GA improved the accuracy of WGP for two out of the three model traits in a dairy cattle data set, and for nine out of the 11 traits in a rice diversity data set, compared to the reference methods GBLUP and BayesB. While BLUP|GA outperforms BayesB, its required computing time is comparable to GBLUP. Further simulation results suggest that accounting for publicly available GWAS results is potentially more useful for WGP utilizing smaller data sets and/or traits of low heritability, depending on the genetic architecture of the trait under consideration. To our knowledge, this is the first study incorporating public GWAS results formally into the standard GBLUP model and we think that the BLUP|GA approach deserves further investigations in animal breeding, plant breeding as well as human genetics.

  6. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at <http://croc.gsfc.nasa.gov/shadoz>. In an analysis of ozonesonde imprecision within the SHADOZ dataset [Thompson et al., JGR, 108, 8238, 2003], we pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecision and accuracy in the SHADOZ dataset are examined here in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline by two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing and instrument type (manufacturer).

  7. Influence of River Bed Elevation Survey Configurations and Interpolation Methods on the Accuracy of LIDAR Dtm-Based River Flow Simulations

    NASA Astrophysics Data System (ADS)

    Santillan, J. R.; Serviano, J. L.; Makinano-Santillan, M.; Marqueso, J. T.

    2016-09-01

In this paper, we investigated how survey configuration and the type of interpolation method affect the accuracy of river flow simulations that use a LIDAR DTM integrated with an interpolated river bed as the main source of topographic information. Aside from determining the accuracy of the individually generated river bed topographies, we also assessed the overall accuracy of the river flow simulations in terms of maximum flood depth and extent. Four survey configurations consisting of river bed elevation data points arranged as cross-section (XS), zig-zag (ZZ), river banks-centerline (RBCL), and river banks-centerline-zig-zag (RBCLZZ), and two interpolation methods (Inverse Distance-Weighted and Ordinary Kriging) were considered. Major results show that the choice of survey configuration, rather than the interpolation method, has a significant effect on the accuracy of the interpolated river bed surfaces, and subsequently on the accuracy of the river flow simulations. The RMSEs of the interpolated surfaces and the model results vary from one configuration to another, and depend on how evenly each configuration collects river bed elevation data points. The large RMSEs for the RBCL configuration and the low RMSEs for the XS configuration confirm that as the data points become more evenly spaced and cover more of the river, the resulting interpolated surface and the river flow simulation in which it was used also become more accurate. The XS configuration with Ordinary Kriging (OK) as the interpolation method provided the best river bed interpolation and river flow simulation results, while the RBCL configuration, regardless of the interpolation algorithm used, resulted in the least accurate river bed surfaces and simulation results. Based on the accuracy analysis, collecting river bed data points in the XS configuration and interpolating the river bed topography with the OK method are the best choices for producing satisfactory river flow simulation outputs.
The use of
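
The two interpolation methods compared above are standard geostatistical tools. A minimal sketch of the simpler one, Inverse Distance Weighting, is shown below on hypothetical river bed samples (Ordinary Kriging additionally requires fitting a variogram model, so it is omitted here); the point coordinates and elevations are invented for illustration:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse Distance-Weighted interpolation of river bed
    elevations; exact at the sample points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    z_out = np.empty(len(xy_query))
    for i, di in enumerate(d):
        hit = di < 1e-12
        if hit.any():                      # query coincides with a sample
            z_out[i] = z_known[hit][0]
        else:
            w = 1.0 / di**power
            z_out[i] = w @ z_known / w.sum()
    return z_out

# toy cross-section (XS) style samples across a channel
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 5.0], [10.0, 5.0]])
z = np.array([1.0, 2.0, 1.5, 2.5])
centre = idw(pts, z, np.array([[5.0, 2.5]]))
print(centre)  # equidistant from all four samples -> plain mean, 1.75
```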

  8. Cocontraction of Pairs of Muscles around Joints May Improve an Accuracy of a Reaching Movement: a Numerical Simulation Study

    NASA Astrophysics Data System (ADS)

    Ueyama, Yuki; Miyashita, Eizo

    2011-06-01

Pairs of muscle groups act on each joint: agonist and antagonist muscles. Simultaneous activation of agonist and antagonist muscles around a joint, called cocontraction, is thought to increase joint stiffness in order to decelerate hand speed and improve movement accuracy. However, it has not been clear how cocontraction and joint stiffness vary during movements. In this study, muscle activation and joint stiffness in reaching movements were studied under several requirements of end-point accuracy using a 2-joint, 6-muscle model and an approximately optimal control. The numerical simulation study showed time-varying cocontraction and joint stiffness, and indicated that the strength of cocontraction and the joint stiffness increased synchronously as the required accuracy level increased. We conclude that cocontraction may increase joint stiffness to meet higher movement accuracy requirements.

  9. Internal Fiducial Markers and Susceptibility Effects in MRI-Simulation and Measurement of Spatial Accuracy

    SciTech Connect

    Jonsson, Joakim H.; Garpebring, Anders; Karlsson, Magnus G.; Nyholm, Tufve

    2012-04-01

Background: It is well-known that magnetic resonance imaging (MRI) is preferable to computed tomography (CT) in radiotherapy target delineation. To benefit from this, there are two options available: transferring the MRI-delineated target volume to the planning CT or performing the treatment planning directly on the MRI study. A precondition for excluding the CT study is the possibility to define internal structures visible both on the planning MRI and on the images used to position the patient at treatment. In prostate cancer radiotherapy, internal gold markers are commonly used, and they are visible on CT, MRI, x-ray, and portal images. The depiction of the markers in MRI is, however, dependent on their shape and orientation relative to the main magnetic field because of susceptibility effects. In the present work, these effects are investigated and quantified using both simulations and phantom measurements. Methods and Materials: Software that simulated the magnetic field distortions around user-defined geometries of variable susceptibilities was constructed. These magnetic field perturbation maps were then reconstructed into images that were evaluated. The simulation software was validated through phantom measurements of four commercially available gold markers of different shapes and one in-house gold marker. Results: Both simulations and phantom measurements revealed small position deviations of the imaged marker positions relative to the actual marker positions (<1 mm). Conclusion: Cylindrical gold markers can be used as internal fiducial markers in MRI.

  10. Development of three dimensional Eulerian numerical procedure toward plate-mantle simulation: accuracy test by the fluid rope coiling

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; Kameyama, M.; Kageyama, A.

    2007-12-01

Reproducing realistic plate tectonics with mantle convection simulation is one of the greatest challenges in computational geophysics. We have developed a three-dimensional Eulerian numerical procedure toward plate-mantle simulation, which includes finite deformation of the plate in the mantle convection. Our method, which combines the CIP-CSLR (Constrained Interpolation Profile method-Conservative Semi-Lagrangian advection scheme with Rational function) and ACuTE methods, enables us to solve the advection and force balance equations even with a large and sharp viscosity jump, which marks the interface between the plates and the surrounding upper mantle materials. One of the typical phenomena represented by our method is a fluid rope coiling event, where a stream of viscous fluid is poured onto the bottom plane from a certain height. This coiling motion is due to delicate balances between the bending, twisting and stretching motions of the fluid rope. In the framework of the Eulerian scheme, the fluid rope and surrounding air are treated via a viscosity profile that differs by several orders of magnitude. Our method solves the complex force balances of the fluid rope and air by the multigrid iteration technique of the ACuTE algorithm. In addition, the CIP-CSLR advection scheme allows us to obtain the deforming shape of the fluid rope as a low-diffusion solution in the Eulerian frame of reference. In this presentation, we will show the simulation result of the fluid rope coiling as an accuracy test for our simulation scheme, comparing it with the simplified numerical solution for a thin viscous jet.

  11. Computer simulation of shading and blocking: Discussion of accuracy and recommendations

    SciTech Connect

Lipps, F.W.

    1992-04-01

A field of heliostats suffers losses caused by shading and blocking by neighboring heliostats. The complex geometry of multiple shading and blocking events suggests that a processing code is needed to update the boundary vector for each shading or blocking event. A new version, RSABS (programmer's manual included), simulates the split-rectangular heliostat. Researchers concluded that the dominant error for the given heliostat geometry is caused by the departure from planarity of the neighboring heliostats. It is recommended that a version of the heliostat simulation be modified to include losses due to nonreflective structural margins, if they occur. Heliostat neighbors should be given true guidance rather than assumed to be parallel, and the resulting nonidentical quadrilateral images should be processed, as in HELIOS, by ignoring overlapping events, which are rare in optimized fields.

  12. Accuracy and uncertainty assessment on geostatistical simulation of soil salinity in a coastal farmland using auxiliary variable.

    PubMed

    Yao, R J; Yang, J S; Shao, H B

    2013-06-01

Understanding the spatial distribution of soil salinity helps farmers and researchers identify areas in the field where special management practices are required. Apparent electrical conductivity, measured fairly quickly by an electromagnetic induction instrument, has been widely used to estimate spatial soil salinity. However, the methods used for this purpose are mostly interpolation algorithms. In this study, sequential Gaussian simulation (SGS) and sequential Gaussian co-simulation (SGCS) algorithms were applied to assess the prediction accuracy and uncertainty of soil salinity with apparent electrical conductivity as an auxiliary variable. Results showed that the spatial patterns of soil salinity generated by the SGS and SGCS algorithms were consistent with the measured values. The profile distribution of soil salinity was characterized by an increase with depth, with medium salinization (ECe 4-8 dS/m) as the predominant salinization class. The SGCS algorithm outperformed the SGS algorithm, with a smaller root mean square error across the generated realizations. In addition, the SGCS algorithm had larger proportions of true values falling within probability intervals and narrower ranges of probability intervals than the SGS algorithm. We concluded that the SGCS algorithm performed better in modeling local uncertainty and propagating spatial uncertainty. The inclusion of the auxiliary variable contributed to prediction capability and uncertainty modeling when a densely sampled auxiliary variable is used as the covariate to predict the sparse target variable.
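
The core loop of sequential Gaussian simulation can be sketched in one dimension: visit unsampled nodes in random order, krige a conditional mean and variance from all values known so far, and draw from that Gaussian. The grid, covariance model, and conditioning data below are illustrative and not from the study:

```python
import numpy as np

def sgs_1d(xs, cond_x, cond_z, n_real, range_=5.0, sill=1.0, seed=0):
    """Minimal sequential Gaussian simulation along a 1-D transect.
    Assumes values are already normal-score transformed with zero
    mean (simple kriging); exponential covariance model."""
    rng = np.random.default_rng(seed)
    cov = lambda h: sill * np.exp(-3.0 * np.abs(h) / range_)
    reals = []
    for _ in range(n_real):
        kx, kz = list(cond_x), list(cond_z)   # data known so far
        z = np.full(len(xs), np.nan)
        for i in rng.permutation(len(xs)):    # random visiting order
            kxa, kza = np.array(kx), np.array(kz)
            C = cov(kxa[:, None] - kxa[None, :]) + 1e-8 * np.eye(len(kxa))
            c0 = cov(kxa - xs[i])
            w = np.linalg.solve(C, c0)        # simple kriging weights
            mu = w @ kza                      # kriging mean (zero trend)
            var = max(sill - w @ c0, 0.0)     # kriging variance
            z[i] = rng.normal(mu, np.sqrt(var))
            kx.append(xs[i]); kz.append(z[i]) # condition later nodes on it
        reals.append(z)
    return np.array(reals)

xs = np.arange(0.0, 10.0, 1.0)
reals = sgs_1d(xs, cond_x=[0.0, 9.0], cond_z=[0.0, 1.0], n_real=20)
print(reals.shape)  # (20, 10)
```

Each realization honors the conditioning data, and the spread across realizations at each node gives the local uncertainty that SGS is used to quantify; co-simulation (SGCS) would additionally krige from a co-located auxiliary variable.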

  13. Evaluation of accuracy of non-linear finite element computations for surgical simulation: study using brain phantom.

    PubMed

    Ma, J; Wittek, A; Singh, S; Joldes, G; Washio, T; Chinzei, K; Miller, K

    2010-12-01

    In this paper, the accuracy of non-linear finite element computations in application to surgical simulation was evaluated by comparing the experiment and modelling of indentation of the human brain phantom. The evaluation was realised by comparing forces acting on the indenter and the deformation of the brain phantom. The deformation of the brain phantom was measured by tracking 3D motions of X-ray opaque markers, placed within the brain phantom using a custom-built bi-plane X-ray image intensifier system. The model was implemented using the ABAQUS(TM) finite element solver. Realistic geometry obtained from magnetic resonance images and specific constitutive properties determined through compression tests were used in the model. The model accurately predicted the indentation force-displacement relations and marker displacements. Good agreement between modelling and experimental results verifies the reliability of the finite element modelling techniques used in this study and confirms the predictive power of these techniques in surgical simulation. PMID:21153973

  14. Increasing the efficiency of bacterial transcription simulations: When to exclude the genome without loss of accuracy

    PubMed Central

    Iafolla, Marco AJ; Dong, Guang Qiang; McMillen, David R

    2008-01-01

Background Simulating the major molecular events inside an Escherichia coli cell can lead to a very large number of reactions that compose its overall behaviour. Not only should the model be accurate, but it is imperative for the experimenter to create an efficient model to obtain the results in a timely fashion. Here, we show that for many parameter regimes, the effect of the host cell genome on the transcription of a gene from a plasmid-borne promoter is negligible, allowing one to simulate the system more efficiently by removing the computational load associated with representing the presence of the rest of the genome. The key parameter is the on-rate of RNAP binding to the promoter (k_on), and we compare the total number of transcripts produced from a plasmid vector generated as a function of this rate constant, for two versions of our gene expression model, one incorporating the host cell genome and one excluding it. By sweeping parameters, we identify the k_on range for which the difference between the genome and no-genome models drops below 5%, over a wide range of doubling times, mRNA degradation rates, plasmid copy numbers, and gene lengths. Results We assess the effect of simulating the presence of the genome over a four-dimensional parameter space, considering: 24 min <= bacterial doubling time <= 100 min; 10 <= plasmid copy number <= 1000; 2 min <= mRNA half-life <= 14 min; and 10 bp <= gene length <= 10000 bp. A simple MATLAB user interface generates an interpolated k_on threshold for any point in this range; this rate can be compared to the ones used in other transcription studies to assess the need for including the genome. Conclusion Exclusion of the genome is shown to yield less than 5% difference in transcript numbers over wide ranges of values, and computational speed is improved by two to 24 times by excluding explicit representation of the genome.
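
The trade-off being quantified, whether nonspecific RNAP binding to the chromosome measurably changes transcript counts from a plasmid promoter, can be illustrated with a toy Gillespie (stochastic simulation algorithm) model; all rates, copy numbers, and the function name below are invented for illustration and are not the paper's parameters:

```python
import numpy as np

def transcripts(k_on, t_end=600.0, rnap_total=30, genome_sites=0,
                k_on_ns=0.001, k_off_ns=1.0, seed=1):
    """Toy Gillespie model of transcription from one plasmid promoter.
    With genome_sites > 0, free RNAP can be sequestered nonspecifically
    on the chromosome; genome_sites=0 'excludes the genome'."""
    rng = np.random.default_rng(seed)
    free, bound, mrna, t = rnap_total, 0, 0, 0.0
    while t < t_end:
        a = np.array([k_on * free,                    # promoter binding -> mRNA
                      k_on_ns * genome_sites * free,  # nonspecific genome binding
                      k_off_ns * bound])              # release from genome
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)                # time to next reaction
        r = rng.choice(3, p=a / a0)                   # which reaction fires
        if r == 0:
            mrna += 1      # initiation and escape treated as instantaneous
        elif r == 1:
            free -= 1; bound += 1
        else:
            free += 1; bound -= 1
    return mrna

no_genome = transcripts(k_on=0.01, genome_sites=0)
with_genome = transcripts(k_on=0.01, genome_sites=1000)
print(no_genome, with_genome)
```

With these toy numbers the genome sequesters roughly half the RNAP pool, so transcript counts differ substantially; as k_on and the other parameters change, that gap can shrink below the 5% threshold the authors map out, at which point the genome can be excluded.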

  15. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values and evaluation of the dependence of the residual values on the input parameters. These tests have been repeated on the real data, supplemented with categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted based on the residual value distribution being also normal, but in case of the test on the real data the residual value distribution is

  16. Accuracy of tumor motion compensation algorithm from a robotic respiratory tracking system: A simulation study

    SciTech Connect

    Seppenwoolde, Yvette; Berbeco, Ross I.; Nishioka, Seiko; Shirato, Hiroki; Heijmen, Ben

    2007-07-15

    could already be reached with a simple linear model. In case of hysteresis, a polynomial model added some extra reduction. More frequent updating of the correspondence model resulted in slightly smaller errors only for the few recordings with a time trend that was fast, relative to the current x-ray update frequency. In general, the simulations suggest that the applied combined use of internal and external markers allow the robot to accurately follow tumor motion even in the case of irregularities in breathing patterns.

  17. Accuracy of standard measures of family planning service quality: findings from the simulated client method.

    PubMed

    Tumlinson, Katherine; Speizer, Ilene S; Curtis, Siân L; Pence, Brian W

    2014-12-01

    In the field of international family planning, quality of care as a reproductive right is widely endorsed, yet we lack validated data-collection instruments that can accurately assess quality in terms of its public health importance. This study, conducted within 19 public and private facilities in Kisumu, Kenya, used the simulated client method to test the validity of three standard data-collection instruments used in large-scale facility surveys: provider interviews, client interviews, and observation of client-provider interactions. Results found low specificity and low positive predictive values in each of the three instruments for a number of quality indicators, suggesting that the quality of care provided may be overestimated by traditional methods of measurement. Revised approaches to measuring family planning service quality may be needed to ensure accurate assessment of programs and to better inform quality-improvement interventions.

  18. Accuracy of standard measures of family planning service quality: findings from the simulated client method.

    PubMed

    Tumlinson, Katherine; Speizer, Ilene S; Curtis, Siân L; Pence, Brian W

    2014-12-01

    In the field of international family planning, quality of care as a reproductive right is widely endorsed, yet we lack validated data-collection instruments that can accurately assess quality in terms of its public health importance. This study, conducted within 19 public and private facilities in Kisumu, Kenya, used the simulated client method to test the validity of three standard data-collection instruments used in large-scale facility surveys: provider interviews, client interviews, and observation of client-provider interactions. Results found low specificity and low positive predictive values in each of the three instruments for a number of quality indicators, suggesting that the quality of care provided may be overestimated by traditional methods of measurement. Revised approaches to measuring family planning service quality may be needed to ensure accurate assessment of programs and to better inform quality-improvement interventions. PMID:25469929

  19. Accuracy of Range Restriction Correction with Multiple Imputation in Small and Moderate Samples: A Simulation Study

    ERIC Educational Resources Information Center

    Pfaffel, Andreas; Spiel, Christiane

    2016-01-01

    Approaches to correcting correlation coefficients for range restriction have been developed under the framework of large sample theory. The accuracy of missing data techniques for correcting correlation coefficients for range restriction has thus far only been investigated with relatively large samples. However, researchers and evaluators are…
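
For context, the classical large-sample result that such missing-data approaches are compared against is Thorndike's Case II formula for direct range restriction on the predictor; a minimal sketch (the example numbers are illustrative):

```python
import math

def thorndike_case2(r, sd_restricted, sd_unrestricted):
    """Classic (Thorndike Case II) correction of a correlation for
    direct range restriction on the predictor; u = S/s is the ratio
    of unrestricted to restricted predictor standard deviations."""
    u = sd_unrestricted / sd_restricted
    return r * u / math.sqrt(1.0 - r * r + r * r * u * u)

# an observed r = .30 in a sample where selection halved the predictor SD
r_corrected = thorndike_case2(0.30, sd_restricted=1.0, sd_unrestricted=2.0)
print(round(r_corrected, 3))  # 0.532
```

The corrected value is always at least as large in magnitude as the restricted one; the accuracy question studied above is how well such corrections (or imputation-based alternatives) behave when n is small and u must itself be estimated.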

  20. The VIIRS Ocean Data Simulator Enhancements and Results

    NASA Technical Reports Server (NTRS)

    Robinson, Wayne D.; Patt, Fredrick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.

    2011-01-01

    The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.

  1. Analysis of Factors Influencing Measurement Accuracy of Al Alloy Tensile Test Results

    NASA Astrophysics Data System (ADS)

    Podgornik, Bojan; Žužek, Borut; Sedlaček, Marko; Kevorkijan, Varužan; Hostej, Boris

    2016-02-01

In order to use materials properly in design, a complete understanding of, and information on, their mechanical properties, such as yield and ultimate tensile strength, must be obtained. Furthermore, as the design of automotive parts is constantly pushed toward higher limits, excessive measuring uncertainty can lead to unexpected premature failure of the component, thus requiring reliable determination of material properties with low uncertainty. The aim of the present work was to evaluate the effect of different metrology factors, including the number of tested samples, specimen machining and surface quality, specimen input diameter, type of testing, and human error, on the tensile test results and measurement uncertainty when performed on a 2xxx series Al alloy. Results show that the most significant contribution to measurement uncertainty comes from the number of samples tested, and can even exceed 1%. Furthermore, moving from experimental laboratory conditions to a very intense industrial environment further amplifies measurement uncertainty, where even when using automated systems, human error cannot be neglected.
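
The sample-count contribution to measurement uncertainty follows the standard Type A evaluation (GUM), where the standard uncertainty of the mean shrinks as s/sqrt(n); a minimal sketch with invented tensile-strength values:

```python
import math
import statistics as st

def type_a_uncertainty(values):
    """GUM Type A evaluation: standard uncertainty of the mean of
    repeated measurements, u = s / sqrt(n)."""
    s = st.stdev(values)               # sample standard deviation
    return s / math.sqrt(len(values))

# illustrative ultimate tensile strengths (MPa), not data from the paper
uts = [455.2, 457.1, 454.8, 456.3, 455.9]
u = type_a_uncertainty(uts)
print(round(100.0 * u / st.mean(uts), 3))  # relative uncertainty in %
```

Doubling the number of specimens cuts this component by about a factor of sqrt(2), which is why the number of tested samples can dominate the overall uncertainty budget when n is small.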

  2. Gravity Probe B data analysis status and potential for improved accuracy of scientific results

    NASA Astrophysics Data System (ADS)

    Everitt, C. W. F.; Adams, M.; Bencze, W.; Buchman, S.; Clarke, B.; Conklin, J.; DeBra, D. B.; Dolphin, M.; Heifetz, M.; Hipkins, D.; Holmes, T.; Keiser, G. M.; Kolodziejczak, J.; Li, J.; Lockhart, J. M.; Muhlfelder, B.; Parkinson, B. W.; Salomon, M.; Silbergleit, A.; Solomonik, V.; Stahl, K.; Turneaure, J. P.; Worden, P. W., Jr.

    2008-06-01

Gravity Probe B (GP-B) is a landmark physics experiment in space designed to yield precise tests of two fundamental predictions of Einstein's theory of general relativity, the geodetic and frame-dragging effects, by means of cryogenic gyroscopes in Earth orbit. Launched on 20 April 2004, data collection began on 28 August 2004 and science operations were completed on 29 September 2005 upon liquid helium depletion. During the course of the experiment, two unexpected and mutually-reinforcing complications were discovered: (1) larger than expected 'misalignment' torques on the gyroscopes producing classical drifts larger than the relativity effects under study and (2) a damped polhode oscillation that complicated the calibration of the instrument's scale factor against the aberration of starlight. Steady progress through 2006 and 2007 established the methods for treating both problems; in particular, an extended effort from January 2007 on 'trapped flux mapping' led in August 2007 to a dramatic breakthrough, resulting in a factor of ~20 reduction in data scatter. This paper reports results up to November 2007. Detailed investigation of a central 85-day segment of the data has yielded robust measurements of both relativity effects. Expansion to the complete science data set, along with anticipated improvements in modeling and in the treatment of systematic errors, may be expected to yield a 3-6% determination of the frame-dragging effect.

  3. Analysis of the Accuracy of Weight Loss Information Search Engine Results on the Internet

    PubMed Central

    Shokar, Navkiran K.; Peñaranda, Eribeth; Nguyen, Norma

    2014-01-01

    Objectives. We systematically identified and evaluated the quality and comprehensiveness of online information related to weight loss that users were likely to access. Methods. We evaluated the content quality, accessibility of the information, and author credentials for Web sites in 2012 that were identified from weight loss specific queries that we generated. We scored the content with respect to available evidence-based guidelines for weight loss. Results. One hundred three Web sites met our eligibility criteria (21 commercial, 52 news/media, 7 blogs, 14 medical, government, or university, and 9 unclassified sites). The mean content quality score was 3.75 (range = 0–16; SD = 2.48). Approximately 5% (4.85%) of the sites scored greater than 8 (of 12) on nutrition, physical activity, and behavior. Content quality score varied significantly by type of Web site; the medical, government, or university sites (mean = 4.82, SD = 2.27) and blogs (mean = 6.33, SD = 1.99) had the highest scores. Commercial (mean = 2.37, SD = 2.60) or news/media sites (mean = 3.52, SD = 2.31) had the lowest scores (analysis of variance P < .005). Conclusions. The weight loss information that people were likely to access online was often of substandard quality because most comprehensive and quality Web sites ranked too low in search results. PMID:25122030

  4. Gravity Probe B Data Analysis. Status and Potential for Improved Accuracy of Scientific Results

    NASA Astrophysics Data System (ADS)

    Everitt, C. W. F.; Adams, M.; Bencze, W.; Buchman, S.; Clarke, B.; Conklin, J. W.; Debra, D. B.; Dolphin, M.; Heifetz, M.; Hipkins, D.; Holmes, T.; Keiser, G. M.; Kolodziejczak, J.; Li, J.; Lipa, J.; Lockhart, J. M.; Mester, J. C.; Muhlfelder, B.; Ohshima, Y.; Parkinson, B. W.; Salomon, M.; Silbergleit, A.; Solomonik, V.; Stahl, K.; Taber, M.; Turneaure, J. P.; Wang, S.; Worden, P. W.

    2009-12-01

This is the first of five connected papers detailing progress on the Gravity Probe B (GP-B) Relativity Mission. GP-B, launched 20 April 2004, is a landmark physics experiment in space to test two fundamental predictions of Einstein’s general relativity theory, the geodetic and frame-dragging effects, by means of cryogenic gyroscopes in Earth orbit. Data collection began 28 August 2004 and science operations were completed 29 September 2005. The data analysis has proven deeper than expected as a result of two mutually reinforcing complications in gyroscope performance: (1) a changing polhode path affecting the calibration of the gyroscope scale factor Cg against the aberration of starlight and (2) two larger than expected manifestations of a Newtonian gyro torque due to patch potentials on the rotor and housing. In earlier papers, we reported two methods, ‘geometric’ and ‘algebraic’, for identifying and removing the first Newtonian effect (‘misalignment torque’), and also a preliminary method of treating the second (‘roll-polhode resonance torque’). Central to the progress in both torque modeling and Cg determination has been an extended effort on “Trapped Flux Mapping” commenced in November 2006. A turning point came in August 2008 when it became possible to include a detailed history of the resonance torques into the computation. The East-West (frame-dragging) effect is now plainly visible in the processed data. The current statistical uncertainty from an analysis of 155 days of data is 5.4 marc-s/yr (~14% of the predicted effect), though it must be emphasized that this is a preliminary result requiring rigorous investigation of systematics by methods discussed in the accompanying paper by Muhlfelder et al. A covariance analysis incorporating models of the patch effect torques indicates that a 3-5% determination of frame-dragging is possible with more complete, computationally intensive data analysis.

  5. Measurement and Simulation Results of Ti Coated Microwave Absorber

    SciTech Connect

    Sun, Ding; McGinnis, Dave; /Fermilab

    1998-11-01

When microwave absorbers are put in a waveguide, a layer of resistive coating can change the distribution of the E-M fields and affect the attenuation of the signal within the microwave absorbers. To study this effect, microwave absorbers (TT2-111) were coated with titanium thin film. This report documents the coating process and measurement results. The measurement results have been used to check the simulation results from the commercial software HFSS (High Frequency Structure Simulator).

  6. The accuracy of diffusion quantum Monte Carlo simulations in the determination of molecular equilibrium structures

    NASA Astrophysics Data System (ADS)

    Lu, Shih-I.

    2004-12-01

For a test set of 17 first-row small molecules, the equilibrium structures are calculated with Ornstein-Uhlenbeck diffusion quantum Monte Carlo simulations guided by trial wave functions constructed from floating spherical Gaussian orbitals and spherical Gaussian geminals. To measure performance of the Monte Carlo calculations, the mean deviation, the mean absolute deviation, the maximum absolute deviation, and the standard deviation of Monte Carlo calculated equilibrium structures with respect to empirical equilibrium structures are given. This approach is found to yield results having a uniformly high quality, being consistent with empirical equilibrium structures and surpassing calculated values from the coupled cluster model with single, double, and noniterative triple excitations [CCSD(T)] with the basis sets of cc-pCVQZ and cc-pVQZ. The nonrelativistic equilibrium atomization energies are also presented to assess performance of the calculation methods. The mean absolute deviations with respect to experimental atomization energy are 0.16 and 0.21 kcal/mol for the Monte Carlo and CCSD(T)/cc-pCV(56)Z calculations, respectively.

  7. Accuracy of surface registration compared to conventional volumetric registration in patient positioning for head-and-neck radiotherapy: A simulation study using patient data

    SciTech Connect

    Kim, Youngjun; Li, Ruijiang; Na, Yong Hum; Xing, Lei; Lee, Rena

    2014-12-15

Purpose: 3D optical surface imaging has been applied to patient positioning in radiation therapy (RT). The optical patient positioning system is advantageous over the conventional method using cone-beam computed tomography (CBCT) in that it is radiation-free, frameless, and capable of real-time monitoring. While the conventional radiographic method uses volumetric registration, the optical system uses surface matching for patient alignment. The relative accuracy of these two methods has not yet been sufficiently investigated. This study aims to investigate the theoretical accuracy of surface registration based on a simulation study using patient data. Methods: This study compares the relative accuracy of surface and volumetric registration in head-and-neck RT. The authors examined 26 patient data sets, each consisting of planning CT data acquired before treatment and patient setup CBCT data acquired at the time of treatment. As input for surface registration, patient skin surfaces were created by contouring the skin in the planning CT and treatment CBCT. Surface registration was performed using the iterative closest point algorithm with a point-to-plane metric, which minimizes the normal distance between source points and target surfaces. Six degrees of freedom (three translations and three rotations) were used in both surface and volumetric registrations and the results were compared. The accuracy of each method was estimated by digital phantom tests. Results: Based on the results of 26 patients, the authors found that the average and maximum root-mean-square translation deviation between the surface and volumetric registrations were 2.7 and 5.2 mm, respectively. The residual error of the surface registration was calculated to have an average of 0.9 mm and a maximum of 1.7 mm. Conclusions: Surface registration may lead to results different from those of the conventional volumetric registration. Only limited accuracy can be achieved for patient
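    The point-to-plane objective this abstract refers to can be sketched in a few lines. The function names and toy data below are invented for illustration; this is not the authors' implementation, only the residual that such a registration drives toward zero.

```python
import math

def point_to_plane_residuals(source_pts, target_pts, target_normals):
    """Signed normal distances minimized by point-to-plane ICP.

    Each residual is (p - q) . n: the component of the source-to-target
    offset along the target surface normal (assumed unit length).
    """
    residuals = []
    for p, q, n in zip(source_pts, target_pts, target_normals):
        d = sum((pi - qi) * ni for pi, qi, ni in zip(p, q, n))
        residuals.append(d)
    return residuals

def rms(values):
    """Root-mean-square of a list of residuals."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# A source point 2 mm above a horizontal target plane (normal along +z):
src = [(0.0, 0.0, 2.0)]
tgt = [(0.0, 0.0, 0.0)]
nrm = [(0.0, 0.0, 1.0)]
print(rms(point_to_plane_residuals(src, tgt, nrm)))  # 2.0
```

    At each ICP iteration, the rigid transform (three translations, three rotations, as in the study) is updated to reduce this RMS residual.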

  8. Accuracy of momentum and gyrodensity transport in global gyrokinetic particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    McMillan, B. F.; Villard, L.

    2014-05-01

    Gyrokinetic Particle-In-Cell (PIC) simulations based on conservative Lagrangian formalisms admit transport equations for conserved quantities such as gyrodensity and toroidal momentum, and these can be derived for arbitrary wavelength, even though previous applications have used the long-wavelength approximation. In control-variate PIC simulations, a consequence of the different treatment of the background (f0) and perturbed parts (δf), when a splitting f = f0 + δf is performed, is that analytical transport relations for the relevant fluxes and moments are only reproduced in the large marker number limit. The transport equations for f can be used to write the inconsistency in the perturbed quantities explicitly in terms of the sampling of the background distribution f0. This immediately allows estimates of the error in consistency of momentum transport in control-variate PIC simulations. This inconsistency tends to accumulate secularly and is not directly affected by the sources and noise control in the system. Although physical tokamaks often rotate quite strongly, the standard gyrokinetic formalism assumes weak perpendicular flows, comparable to the drift speed. For systems with such weak flows, maintaining acceptably small relative errors requires that a number of markers scale with the fourth power of the linear system size to consistently resolve long-wavelength evolution. To avoid this unfavourable scaling, an algorithm for exact gyrodensity transport has been developed, and this is shown to allow accurate simulations with an order of magnitude fewer markers.
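    The fourth-power marker scaling stated in this abstract can be made concrete with a trivial sketch (the function and reference numbers are invented for illustration, not taken from the paper):

```python
def markers_needed(n_ref, L_ref, L):
    """Marker count needed to hold the relative sampling error fixed,
    assuming the stated fourth-power scaling with linear system size L
    for long-wavelength transport in control-variate PIC."""
    return n_ref * (L / L_ref) ** 4

# Doubling the linear system size costs 16x the markers under this scaling.
print(markers_needed(1_000_000, 128, 256))  # 16000000.0
```

    The exact-gyrodensity-transport algorithm described by the authors is precisely what avoids paying this factor.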

  10. SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors

    SciTech Connect

    Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I

    2014-06-01

Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6MV using a Millennium 120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per 56 image pairs. Also, 560 positive tests (with error) were performed, each with a randomly selected image pair and a randomly selected in-field error location. Images were classified as errored (or error-free) if the percentage of pixels with γ<κ was < (or ≥) τ. (Conventionally, κ=1 and τ=90%.) A ROC curve was generated from the 616 tests by varying τ. For a range of κ and τ, true/false positive/negative rates were calculated. This procedure was repeated for inserted errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some combination of κ and τ. Results: 20 mm² errors with intensity altered by ≥20% could be reliably detected, as could 10 mm² errors with intensity altered by ≥50%. Errors with smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though
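    The errored/error-free decision rule described in this abstract (fraction of pixels passing gamma analysis compared against a threshold τ) can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and the example gamma maps are invented:

```python
def classify_errored(gamma_map, kappa=1.0, tau=0.90):
    """Flag an image as errored when the fraction of pixels passing
    gamma analysis (gamma < kappa) falls below the threshold tau."""
    passing = sum(1 for g in gamma_map if g < kappa)
    pass_rate = passing / len(gamma_map)
    return pass_rate < tau

# Mostly-passing map: 95 of 100 pixels under kappa -> not errored.
clean = [0.5] * 95 + [1.5] * 5
print(classify_errored(clean))  # False

# Degraded map: only 80% of pixels pass -> errored at tau = 0.90.
bad = [0.5] * 80 + [1.5] * 20
print(classify_errored(bad))    # True
```

    Sweeping τ over [0, 1] and tallying true/false positives against known error insertions is what traces out the ROC curve used in the study.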

  11. Aerosol kinetic code "AERFORM": Model, validation and simulation results

    NASA Astrophysics Data System (ADS)

    Gainullin, K. G.; Golubev, A. I.; Petrov, A. M.; Piskunov, V. N.

    2016-06-01

The aerosol kinetic code "AERFORM" is modified to simulate droplet and ice particle formation in mixed clouds. The splitting method is used to calculate condensation and coagulation simultaneously. The method is calibrated against analytic solutions of the kinetic equations. The condensation kinetic model is based on the cloud particle growth equation and on mass and heat balance equations. The coagulation kinetic model includes Brownian, turbulent, and precipitation effects. Realistic values are used for the condensation and coagulation growth of water droplets and ice particles. The model and the simulation results for two full-scale cloud experiments are presented. The simulation model and code may be used autonomously or as an element of another code.
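    The splitting idea mentioned here (advancing condensation and coagulation in alternating substeps) can be illustrated with a first-order Lie splitting on toy operators. The operators and rate constants below are invented stand-ins, not the AERFORM kinetics:

```python
import math

def lie_split_step(n, dt, condensation, coagulation):
    """One Lie (first-order) splitting step: advance the state with the
    condensation operator over dt, then with the coagulation operator."""
    n = condensation(n, dt)
    n = coagulation(n, dt)
    return n

# Toy substep solvers (stand-ins for the real kinetic terms):
def condensation(n, dt):
    return n * math.exp(0.1 * dt)         # exponential growth, dn/dt = 0.1 n

def coagulation(n, dt):
    return n / (1.0 + 0.05 * n * dt)      # exact solution of dn/dt = -0.05 n^2

n = 100.0
for _ in range(10):
    n = lie_split_step(n, 0.1, condensation, coagulation)
```

    Each substep uses an exact solution of its own sub-equation, so the splitting error comes only from alternating the operators; halving dt halves that error for a first-order scheme.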

  12. Experimental and simulation results of multipactors in 112 MHz QWR injector

    SciTech Connect

    Xin, T.; Ben-Zvi, I.; Belomestnykh, S.; Brutus, J. C.; Skaritka, J.; Wu, Q.; Xiao, B.

    2015-05-03

The first RF commissioning of the 112 MHz QWR superconducting electron gun was done in late 2014. The coaxial Fundamental Power Coupler (FPC) and Cathode Stalk (stalk) were installed and tested for the first time. During this experiment, we observed several multipacting barriers at different gun voltage levels. The simulation work was done within the same range. The comparison between the experimental observations and the simulation results is presented in this paper. The observations during the test are consistent with the simulation predictions. We were able to overcome most of the multipacting barriers and reach 1.8 MV gun voltage under pulsed mode after several rounds of conditioning.

  13. Preliminary Results from SCEC Earthquake Simulator Comparison Project

    NASA Astrophysics Data System (ADS)

    Tullis, T. E.; Barall, M.; Richards-Dinger, K. B.; Ward, S. N.; Heien, E.; Zielke, O.; Pollitz, F. F.; Dieterich, J. H.; Rundle, J. B.; Yikilmaz, M. B.; Turcotte, D. L.; Kellogg, L. H.; Field, E. H.

    2010-12-01

Earthquake simulators are computer programs that simulate long sequences of earthquakes. If such simulators could be shown to produce synthetic earthquake histories that are good approximations to actual earthquake histories, they could be of great value in helping to anticipate the probabilities of future earthquakes and so could play an important role in helping to make public policy decisions. Consequently, it is important to discover how realistic the earthquake histories produced by these simulators are. One way to do this is to compare their behavior with the limited knowledge we have from the instrumental, historic, and paleoseismic records of past earthquakes. Another way, though a slow process for large events, is to use them to make predictions about future earthquake occurrence and to evaluate how well the predictions match what occurs. A final approach is to compare the results of many varied earthquake simulators to determine the extent to which the results depend on the details of the approaches and assumptions made by each simulator. Five independently developed simulators, capable of running simulations on complicated geometries containing multiple faults, are in use by some of the authors of this abstract. Although similar in their overall purpose and design, these simulators differ from one another widely in many important details. They require as input for each fault element a value for the average slip rate as well as a value for friction parameters or stress reduction due to slip. They share the use of the boundary element method to compute stress transfer between elements. None use dynamic stress transfer by seismic waves. A notable difference is the assumption different simulators make about the constitutive properties of the faults. The earthquake simulator comparison project is designed to allow comparisons among the simulators and between the simulators and past earthquake history. The project uses sets of increasingly detailed

  14. Electrical properties of polarizable ionic solutions. II. Computer simulation results

    NASA Astrophysics Data System (ADS)

    Caillol, J. M.; Levesque, D.; Weis, J. J.

    1989-11-01

We present molecular dynamics simulations for two limiting models of ionic solutions: one where the solvent molecules are polar, but nonpolarizable; the other where they are only polarizable (but have no permanent dipole moment). For both models, the static two-body correlation functions, the frequency-dependent dielectric constant and conductivity are calculated and the statistical uncertainty on these quantities estimated for molecular dynamics runs of the order of 10⁵ integration steps. For the case of the polar solvent, the accuracy of the computed static interionic correlation functions allows a valuable test of the hypernetted chain integral equation theory at an ionic concentration of 0.04. The quantitative variation of the fluctuations of polarization and electrical current with change of boundary conditions is evaluated within the context of the second model (polarizable nonpolar solvent). Applying the relationships derived in Part I between the phenomenological coefficients and susceptibilities, it is shown that consistent values for the dielectric constant and electrical conductivity are obtained. The sum rules which generalize the Stillinger-Lovett conditions to ionic solutions are computed and shown to be satisfied in our simulations. The evaluation of these sum rules constitutes an important test of the convergence of the electrolyte system to an equilibrium state.

  15. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to also include systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy of ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  16. Hyper-X Stage Separation: Simulation Development and Results

    NASA Technical Reports Server (NTRS)

    Reubush, David E.; Martin, John G.; Robinson, Jeffrey S.; Bose, David M.; Strovers, Brian K.

    2001-01-01

    This paper provides an overview of stage separation simulation development and results for NASA's Hyper-X program; a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an account of the development of the current 14 degree of freedom stage separation simulation tool (SepSim) and results from use of the tool in a Monte Carlo analysis to evaluate the risk of failure for the separation event. Results from use of the tool show that there is only a very small risk of failure in the separation event.
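    The Monte Carlo risk estimate mentioned here boils down to running many randomized trials of the event and counting failures. The sketch below is a generic illustration of that pattern; the toy separation model, its parameters, and all names are invented, not the Hyper-X SepSim dynamics:

```python
import random

def estimate_failure_probability(trial, n_trials=10_000, seed=12345):
    """Monte Carlo estimate of an event-failure probability: run many
    randomized trials and return the fraction that fail."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n_trials) if trial(rng))
    return failures / n_trials

# Toy separation model (purely illustrative): failure occurs when a
# normally distributed clearance margin goes negative.
def toy_separation_fails(rng):
    clearance = rng.gauss(3.0, 1.0)  # mean margin 3 standard deviations above zero
    return clearance < 0.0

p_fail = estimate_failure_probability(toy_separation_fails)
```

    In the real analysis the "trial" is a full 14-degree-of-freedom separation simulation with dispersed initial conditions; the estimator around it is the same.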

  17. AMES Stereo Pipeline Derived DEM Accuracy Experiment Using LROC-NAC Stereopairs and Weighted Spatial Dependence Simulation for Lunar Site Selection

    NASA Astrophysics Data System (ADS)

    Laura, J. R.; Miller, D.; Paul, M. V.

    2012-03-01

    An accuracy assessment of AMES Stereo Pipeline derived DEMs for lunar site selection using weighted spatial dependence simulation and a call for outside AMES derived DEMs to facilitate a statistical precision analysis.

  18. Increased movement accuracy and reduced EMG activity as the result of adopting an external focus of attention.

    PubMed

    Zachry, Tiffany; Wulf, Gabriele; Mercer, John; Bezodis, Neil

    2005-10-30

The performance and learning of motor skills have been shown to be enhanced if the performer adopts an external focus of attention (focus on the movement effect) compared to an internal focus (focus on the movements themselves) [G. Wulf, W. Prinz, Directing attention to movement effects enhances learning: a review, Psychon. Bull. Rev. 8 (2001) 648-660]. While most previous studies examining attentional focus effects have exclusively used performance outcome (e.g., accuracy) measures, in the present study electromyography (EMG) was used to determine neuromuscular correlates of external versus internal focus differences in movement outcome. Participants performed basketball free throws under both internal focus (wrist motion) and external focus (basket) conditions. EMG activity was recorded for m. flexor carpi radialis, m. biceps brachii, m. triceps brachii, and m. deltoid of each participant's shooting arm. The results showed that free throw accuracy was greater when participants adopted an external compared to an internal focus. In addition, EMG activity of the biceps and triceps muscles was lower with an external relative to an internal focus. This suggests that an external focus of attention enhances movement economy, and presumably reduces "noise" in the motor system that hampers fine movement control and makes the outcome of the movement less reliable.

  19. Tempest: Mesoscale test case suite results and the effect of order-of-accuracy on pressure gradient force errors

    NASA Astrophysics Data System (ADS)

    Guerra, J. E.; Ullrich, P. A.

    2014-12-01

Tempest is a new non-hydrostatic atmospheric modeling framework that allows for investigation and intercomparison of high-order numerical methods. It is composed of a dynamical core based on a finite-element formulation of arbitrary order operating on cubed-sphere and Cartesian meshes with topography. The underlying technology is briefly discussed, including a novel Hybrid Finite Element Method (HFEM) vertical coordinate coupled with high-order Implicit/Explicit (IMEX) time integration to control vertically propagating sound waves. Here, we show results from a suite of mesoscale test cases from the literature that demonstrate the accuracy, performance, and properties of Tempest on regular Cartesian meshes. The test cases include wave propagation behavior, Kelvin-Helmholtz instabilities, and flow interaction with topography. Comparisons are made to existing results, highlighting improvements made in resolving atmospheric dynamics in the vertical direction where many existing methods are deficient.

  20. Contribution of Sample Processing to Variability and Accuracy of the Results of Pesticide Residue Analysis in Plant Commodities.

    PubMed

    Ambrus, Árpád; Buczkó, Judit; Hamow, Kamirán Á; Juhász, Viktor; Solymosné Majzik, Etelka; Szemánné Dobrik, Henriett; Szitás, Róbert

    2016-08-10

    Significant reduction of concentration of some pesticide residues and substantial increase of the uncertainty of the results derived from the homogenization of sample materials have been reported in scientific papers long ago. Nevertheless, performance of methods is frequently evaluated on the basis of only recovery tests, which exclude sample processing. We studied the effect of sample processing on accuracy and uncertainty of the measured residue values with lettuce, tomato, and maize grain samples applying mixtures of selected pesticides. The results indicate that the method is simple and robust and applicable in any pesticide residue laboratory. The analytes remaining in the final extract are influenced by their physical-chemical properties, the nature of the sample material, the temperature of comminution of sample, and the mass of test portion extracted. Consequently, validation protocols should include testing the effect of sample processing, and the performance of the complete method should be regularly checked within internal quality control. PMID:26755282

  1. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-01

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a SNAP derivative reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  2. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    SciTech Connect

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-21

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the potential development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a liquid metal cooled reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  3. Leveraging data analytics, patterning simulations and metrology models to enhance CD metrology accuracy for advanced IC nodes

    NASA Astrophysics Data System (ADS)

    Rana, Narender; Zhang, Yunlin; Kagalwala, Taher; Hu, Lin; Bailey, Todd

    2014-04-01

Integrated Circuit (IC) technology is changing in multiple ways: 193i to EUV exposure, planar to non-planar device architecture, from single exposure lithography to multiple exposure and DSA patterning, etc. Critical dimension (CD) control requirements are becoming more stringent and exhaustive: CDs and process windows are shrinking; three-sigma CD control of < 2 nm is required in complex geometries; and metrology uncertainty of < 0.2 nm is required to achieve the target CD control for advanced IC nodes (e.g. 14 nm, 10 nm and 7 nm nodes). There are fundamental capability and accuracy limits in all the metrology techniques that are detrimental to the success of advanced IC nodes. Reference or physical CD metrology is provided by CD-AFM and TEM, while workhorse metrology is provided by CD-SEM, scatterometry, and Model Based Infrared Reflectometry (MBIR). Precision alone is not sufficient moving forward. No single technique is sufficient to ensure the required accuracy of patterning. The accuracy of CD-AFM is ~1 nm, and precision in TEM is poor due to limited statistics. CD-SEM, scatterometry, and MBIR need to be calibrated against reference measurements to ensure the accuracy of patterned CDs and patterning models. There is a dire need of measurement with < 0.5 nm accuracy, and the industry currently does not have that capability with inline measurements. Being aware of the capability gaps for various metrology techniques, we have employed data processing techniques and predictive data analytics, along with patterning simulations and metrology models, and data integration techniques in selected applications, demonstrating the potential and practicality of such an approach to enhance CD metrology accuracy. Data from multiple metrology techniques have been analyzed in multiple ways to extract information with associated uncertainties and integrated to extract useful and more accurate CD and profile information of the structures. This paper presents the optimization of
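    One common way to integrate measurements carrying different uncertainties, as the data-integration step here requires, is inverse-variance weighting. The sketch below is a generic illustration of that scheme under that assumption, not the authors' analytics pipeline, and the CD readings are hypothetical:

```python
def inverse_variance_combine(measurements):
    """Combine (value, uncertainty) pairs from different metrology tools
    into a single estimate; each weight is 1/sigma^2, and the combined
    uncertainty is sqrt(1 / sum of weights)."""
    weights = [1.0 / (s * s) for _, s in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    sigma = (1.0 / total) ** 0.5
    return value, sigma

# Hypothetical CD readings in nm: CD-SEM, scatterometry, CD-AFM reference.
cd, unc = inverse_variance_combine([(14.2, 0.5), (14.0, 0.3), (14.4, 1.0)])
```

    The combined uncertainty is always smaller than the best single tool's, which is the statistical motivation for fusing workhorse and reference metrology rather than relying on either alone.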

  4. Examining the Accuracy of Astrophysical Disk Simulations with a Generalized Hydrodynamical Test Problem

    NASA Astrophysics Data System (ADS)

    Raskin, Cody; Owen, J. Michael

    2016-11-01

    We discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.
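    The combined rotational and pressure support described here follows from radial momentum balance in a thin disk, v²/r = GM/r² + (1/ρ)dP/dr, so an outward-decreasing pressure (dP/dr < 0) lowers the rotation speed below Keplerian. A minimal sketch with generic symbols (not the paper's specific disk profile):

```python
import math

def rotation_speed(GM, r, rho, dP_dr):
    """Equilibrium azimuthal speed for a disk with both rotational and
    pressure support: v^2 = GM/r + (r/rho) * dP/dr.  With dP/dr < 0
    the pressure gradient pushes outward, reducing the required speed."""
    v_squared = GM / r + (r / rho) * dP_dr
    return math.sqrt(max(v_squared, 0.0))

# With no pressure gradient the orbit is purely Keplerian, v = sqrt(GM/r).
print(rotation_speed(1.0, 1.0, 1.0, 0.0))   # 1.0
# Partial pressure support reduces the rotation speed.
print(rotation_speed(1.0, 1.0, 1.0, -0.5))  # ~0.707
```

    A code that fails to maintain this balance will drift away from the steady-state disk, which is exactly the error mode the test problem is designed to expose.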

  5. Autonomous navigation accuracy using simulated horizon sensor and sun sensor observations

    NASA Technical Reports Server (NTRS)

    Pease, G. E.; Hendrickson, H. T.

    1980-01-01

    A relatively simple autonomous system which would use horizon crossing indicators, a sun sensor, a quartz oscillator, and a microprogrammed computer is discussed. The sensor combination is required only to effectively measure the angle between the centers of the Earth and the Sun. Simulations for a particular orbit indicate that 2 km r.m.s. orbit determination uncertainties may be expected from a system with 0.06 deg measurement uncertainty. A key finding is that knowledge of the satellite orbit plane orientation can be maintained to this level because of the annual motion of the Sun and the predictable effects of Earth oblateness. The basic system described can be updated periodically by transits of the Moon through the IR horizon crossing indicator fields of view.
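    The core observable described here, the angle between the centers of the Earth and the Sun as seen from the spacecraft, reduces to the angle between two direction vectors. A minimal sketch (function name and vectors invented for illustration):

```python
import math

def angle_between(u, v):
    """Angle in radians between two direction vectors, e.g. the
    spacecraft-to-Earth-center and spacecraft-to-Sun directions."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    # Clamp to guard acos against rounding just outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (norm_u * norm_v))))

# Orthogonal directions -> 90 degrees.
print(math.degrees(angle_between((1, 0, 0), (0, 1, 0))))  # 90.0
```

    The orbit-determination filter then compares a sequence of such measured angles against the angles predicted from the estimated orbit and the Sun ephemeris.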

  6. The accuracy of simulated indoor time trials utilizing a CompuTrainer and GPS data.

    PubMed

    Peveler, Willard W

    2013-10-01

    The CompuTrainer is commonly used to measure cycling time trial performance in a laboratory setting. Previous research has demonstrated that the CompuTrainer tends to underestimate power at higher workloads but provides reliable measures. The extent to which the CompuTrainer can simulate outdoor time trials in a laboratory setting has yet to be examined. The purpose of this study was to examine the validity of replicating an outdoor time trial course indoors by comparing completion times between the actual outdoor course and its replication on the CompuTrainer. A global positioning system was used to collect data points along a local outdoor time trial course. The data were then downloaded and converted into a time trial course for the CompuTrainer. Eleven recreational to highly trained cyclists participated in this study. To participate, subjects had to have completed a minimum of 2 of the local Cleves time trial races. Subjects completed 2 simulated indoor time trials on the CompuTrainer. The mean indoor finishing time (34.58 ± 8.63 minutes) was significantly slower than the mean outdoor performance time (26.24 ± 3.23 minutes): cyclists' finish times increased (performance decreased) by 24% on the indoor time trials relative to the mean outdoor times. There was no significant difference between CompuTrainer trial 1 (34.77 ± 8.54 minutes) and CompuTrainer trial 2 (34.37 ± 8.76 minutes). Because of the significant differences between the indoor and outdoor times, meaningful comparisons of performance times cannot be made between the two. However, because the 2 CompuTrainer trials did not differ significantly, the CompuTrainer can still be recommended for laboratory testing between trials.
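    The core statistical comparison above is a paired t-test on per-cyclist indoor vs. outdoor finish times. A minimal sketch of that test follows; the individual times are illustrative placeholders chosen so the group means roughly match the reported 34.58 and 26.24 minutes, not the study's raw data.

    ```python
    # Paired t-test on per-cyclist indoor vs. outdoor finish times (minutes).
    # The times below are illustrative placeholders, not the study's raw data.
    import math

    indoor  = [30.1, 45.2, 28.4, 33.0, 41.5, 29.8, 36.2, 31.0, 47.3, 27.9, 30.0]
    outdoor = [24.5, 29.9, 23.1, 25.8, 30.2, 24.0, 27.5, 25.1, 31.0, 23.5, 24.2]

    diffs = [i - o for i, o in zip(indoor, outdoor)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    t_stat = mean_d / (sd_d / math.sqrt(n))  # compare against t(n-1) critical value
    print(f"mean difference = {mean_d:.2f} min, t = {t_stat:.2f} (df = {n - 1})")
    ```

    With 10 degrees of freedom, a |t| above the 2.228 critical value rejects equality of the paired means at the 0.05 level.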

  7. Results from Binary Black Hole Simulations in Astrophysics Applications

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2007-01-01

    Present and planned gravitational wave observatories are opening a new astronomical window to the sky. A key source of gravitational waves is the merger of two black holes. The Laser Interferometer Space Antenna (LISA), in particular, is expected to observe these events with signal-to-noise ratios in the thousands. To fully reap the scientific benefits of these observations requires a detailed understanding, based on numerical simulations, of the predictions of General Relativity for the waveform signals. New techniques for simulating binary black hole mergers, introduced two years ago, have led to dramatic advances in applied numerical simulation work. Over the last two years, numerical relativity researchers have made tremendous strides in understanding the late stages of binary black hole mergers. Simulations have been applied to test much of the basic physics of binary black hole interactions, showing robust results for merger waveform predictions, and illuminating such phenomena as spin-precession. Calculations have shown that merging systems can be kicked at up to 2500 km/s by the thrust from asymmetric emission. Recently, long-lasting simulations of ten or more orbits allow tests of post-Newtonian (PN) approximation results for radiation from the last orbits of the binary's inspiral. Already, analytic waveform models based on PN techniques, incorporating information from numerical simulations, may be adequate for observations with current ground-based observatories. As new advances in simulations continue to rapidly improve our theoretical understanding of these systems, it seems certain that high-precision predictions will be available in time for LISA and other advanced ground-based instruments.

  8. Simulation of diurnal thermal energy storage systems: Preliminary results

    SciTech Connect

    Katipamula, S.; Somasundaram, S.; Williams, H.R.

    1994-12-01

    This report describes the results of a simulation of thermal energy storage (TES) integrated with a simple-cycle gas turbine cogeneration system. Integrating TES with cogeneration can serve the electrical and thermal loads independently while firing all fuel in the gas turbine. The detailed engineering and economic feasibility of diurnal TES systems integrated with cogeneration systems has been described in two previous PNL reports. The objective of this study was to lay the groundwork for optimization of the TES system designs using a simulation tool called TRNSYS (TRaNsient SYstem Simulation). TRNSYS is a transient simulation program with a sequential-modular structure developed at the Solar Energy Laboratory, University of Wisconsin-Madison. The two TES systems selected for the base-case simulations were: (1) a one-tank storage model to represent the oil/rock TES system, and (2) a two-tank storage model to represent the molten nitrate salt TES system. Results of the study clearly indicate that an engineering optimization of the TES system using TRNSYS is possible. The one-tank stratified oil/rock storage model described here is a good starting point for parametric studies of a TES system. Further developments to the TRNSYS library of available models (economizer, evaporator, gas turbine, etc.) are recommended so that phase-change processes are accurately treated.

  9. Simulation of diurnal thermal energy storage systems: Preliminary results

    NASA Astrophysics Data System (ADS)

    Katipamula, S.; Somasundaram, S.; Williams, H. R.

    1994-12-01

    This report describes the results of a simulation of thermal energy storage (TES) integrated with a simple-cycle gas turbine cogeneration system. Integrating TES with cogeneration can serve the electrical and thermal loads independently while firing all fuel in the gas turbine. The detailed engineering and economic feasibility of diurnal TES systems integrated with cogeneration systems has been described in two previous PNL reports. The objective of this study was to lay the groundwork for optimization of the TES system designs using a simulation tool called TRNSYS (TRaNsient SYstem Simulation). TRNSYS is a transient simulation program with a sequential-modular structure developed at the Solar Energy Laboratory, University of Wisconsin-Madison. The two TES systems selected for the base-case simulations were: (1) a one-tank storage model to represent the oil/rock TES system; and (2) a two-tank storage model to represent the molten nitrate salt TES system. Results of the study clearly indicate that an engineering optimization of the TES system using TRNSYS is possible. The one-tank stratified oil/rock storage model described here is a good starting point for parametric studies of a TES system. Further developments to the TRNSYS library of available models (economizer, evaporator, gas turbine, etc.) are recommended so that phase-change processes are accurately treated.

  10. Simulating lightning into the RAMS model: implementation and preliminary results

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Petracca, M.; Panegrossi, G.; Sanò, P.; Casella, D.; Dietrich, S.

    2014-05-01

    This paper shows the results of a tailored version of a previously published methodology, designed to simulate lightning activity, implemented into the Regional Atmospheric Modeling System (RAMS). The method gives the flash density at the resolution of the RAMS grid scale, allowing for a detailed analysis of the evolution of simulated lightning activity. The system is applied in detail to two case studies that occurred over the Lazio Region, in Central Italy. Simulations are compared with the lightning activity detected by the LINET network. The cases refer to two thunderstorms of different intensity. Results show that the model predicts both cases reasonably well and that the lightning activity is well reproduced, especially for the most intense case. However, there are errors in timing and positioning of the convection, whose magnitude depends on the case study, which are mirrored in timing and positioning errors of the lightning distribution. To assess objectively the performance of the methodology, standard scores are presented for four additional case studies. Scores show the ability of the methodology to simulate the daily lightning activity for different spatial scales and for two different minimum thresholds of flash number density. The performance decreases at finer spatial scales and for higher thresholds. The comparison of simulated and observed lightning activity is an immediate and powerful tool to assess the model's ability to reproduce the intensity and the evolution of the convection. This shows the importance of the use of computationally efficient lightning schemes, such as the one described in this paper, in forecast models.

  12. Comparative evaluation of the accuracy of two electronic apex locators in determining the working length in teeth with simulated apical root resorption: An in vitro study

    PubMed Central

    Saraswathi, Vidya; Kedia, Archit; Purayil, Tina Puthen; Ballal, Vasudev; Saini, Aakriti

    2016-01-01

    Introduction: Accurate determination of working length (WL) is a critical factor for endodontic success. This is commonly achieved using an apex locator, which is influenced by the presence or absence of the apical constriction. Hence, this study was done to compare the accuracy of two generations of apex locators in teeth with simulated apical root resorption. Materials and Methods: Forty maxillary central incisors were selected and, after access preparation, were embedded in an alginate mold. On achieving partial set, teeth were removed, and a 45° oblique cut was made at the apex. The teeth were replanted and stabilized in the mold, and WL was determined using two generations of apex locators (Raypex 5 and Apex NRG XFR). The actual length of the teeth (control) was determined by the visual method. Statistical Analysis: Results were subjected to statistical analysis using the paired t-test. Results: Raypex 5 and Apex NRG were accurate for only 33.75% and 23.75% of samples, respectively. However, with a ±0.5 mm acceptance limit, they showed an average accuracy of 56.2% and 57.5%, respectively. There was no significant difference in accuracy between the two apex locators. Conclusion: Neither of the two apex locators was 100% accurate in determining the WL. PMID:27656055
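    The two reported accuracy figures (exact agreement with the actual length, and agreement within a ±0.5 mm acceptance limit) reduce to a simple tolerance count. A minimal sketch with made-up readings, not the study's data:

    ```python
    # Accuracy of electronic working-length readings against the visually
    # determined actual length, scored exactly and within a ±0.5 mm tolerance.
    # The measurements below (mm) are illustrative, not the study's raw data.
    def accuracy(readings, actuals, tol=0.0):
        hits = sum(1 for r, a in zip(readings, actuals) if abs(r - a) <= tol)
        return 100.0 * hits / len(readings)

    actual  = [21.0, 20.5, 22.0, 21.5, 20.0]
    raypex5 = [21.0, 20.9, 21.6, 21.5, 20.6]

    print(f"exact: {accuracy(raypex5, actual):.1f}%")
    print(f"within ±0.5 mm: {accuracy(raypex5, actual, tol=0.5):.1f}%")
    ```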

  13. Integrated friction measurements in hip wear simulations: short-term results.

    PubMed

    Spinelli, M; Affatato, S; Tiberi, L; Carmignato, S; Viceconti, M

    2010-01-01

    Hip joint wear simulators are used extensively to simulate the dynamic behaviour of the human hip joint and, through the wear rate, gain a concrete indicator of the overall wear performance of different coupled bearings. Knowledge of the dynamic behaviour of important concurrent indicators, such as the coefficient of friction, could prove helpful for the continuing improvement of applied biomaterials. A limited number of commercial or custom-made simulators have been designed specifically for friction studies, but always separately from wear tests; thus, analysis of these two important parameters has remained unconnected. To address this, a new friction sensor has been designed, built, and integrated in a commercial biaxial rocking motion hip simulator. The aim of this study is to verify the feasibility of an experimental set-up in which the dynamic measurement of the friction factor can be implemented in a standard wear test without compromising its general accuracy and repeatability. A short wear test was run with the new set-up for 1 × 10^6 cycles. In particular, three soft bearings (metal-on-polyethylene, Ø 28 mm) were tested; throughout the test, axial load and frictional torque about the vertical loading axis were synchronously recorded in order to calculate the friction factor. Additional analyses were performed on the specimens, before and after the test, in order to verify the accuracy of the wear test. The average friction factor was 0.110 ± 0.025. The friction sensors showed good accuracy and repeatability throughout. This innovative set-up was able to produce stable and reliable measurements. The results obtained encourage further investigation of this set-up for long-term assessment and with different combinations of materials.
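    Assuming the friction factor is computed in the usual way for a ball-in-socket couple, f = T / (r · L), with T the frictional torque about the loading axis, L the axial load, and r the femoral-head radius, the synchronous torque/load recording reduces to a per-sample division. A minimal sketch with illustrative values, not the test's data:

    ```python
    # Friction factor from synchronously logged axial load and frictional torque,
    # assumed here as f = T / (r * L) with r the femoral-head radius.
    # Sample values are illustrative, not from the reported wear test.
    HEAD_RADIUS_M = 0.014  # 28 mm diameter head

    def friction_factor(torque_nm, axial_load_n, r=HEAD_RADIUS_M):
        return torque_nm / (r * axial_load_n)

    # one simulated gait cycle of paired (torque N*m, load N) samples
    samples = [(2.9, 1900.0), (3.1, 2000.0), (3.3, 2100.0)]
    factors = [friction_factor(t, load) for t, load in samples]
    mean_f = sum(factors) / len(factors)
    print(f"mean friction factor = {mean_f:.3f}")
    ```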

  14. Technical Note: Maximising accuracy and minimising cost of a potentiometrically regulated ocean acidification simulation system

    NASA Astrophysics Data System (ADS)

    MacLeod, C. D.; Doyle, H. L.; Currie, K. I.

    2015-02-01

    This article describes a potentiometric ocean acidification simulation system which automatically regulates pH through the injection of 100% CO2 gas into temperature-controlled seawater. The system is ideally suited to long-term experimental studies of the effect of acidification on biological processes involving small-bodied (10-20 mm) calcifying or non-calcifying organisms. Using hobbyist-grade equipment, the system was constructed for approximately USD 1200 per treatment unit (tank, pH regulation apparatus, chiller, pump/filter unit). An overall tolerance of ±0.05 pHT units (SD) was achieved over 90 days in two acidified treatments (7.60 and 7.40) at 12 °C using glass electrodes calibrated with synthetic seawater buffers, thereby preventing liquid junction error. The performance of the system was validated through the independent calculation of pHT (12 °C) using dissolved inorganic carbon and total alkalinity data taken from discrete acidified seawater samples. The system was used to compare the shell growth of the marine gastropod Zeacumantus subcarinatus infected with the trematode parasite Maritrema novaezealandensis with that of uninfected snails at pH levels of 7.4, 7.6, and 8.1.

  15. Recent results from simulations of the magnetorotational instability

    NASA Astrophysics Data System (ADS)

    Stone, James M.

    2011-06-01

    The nonlinear saturation of the magnetorotational instability (MRI) is best studied through numerical MHD simulations. Recent results of simulations that adopt the local shearing box approximation, and fully global models that follow the entire disk, are described. Outstanding issues remain, such as a first-principles understanding of the dynamo processes that control saturation with no net magnetic flux. Important directions for future work include a better understanding of basic plasma processes, such as reconnection, dissipation, and particle acceleration, in the MHD turbulence driven by the MRI.

  16. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    SciTech Connect

    Tisseur, D.; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G.; Sollier, T.

    2015-03-31

    The French Alternative Energies and Atomic Energy Commission (CEA) has for years developed the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach based on ray tracing for transmission beam simulation and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common X-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDE Centers (WFNDEC) 2014 RT modelling benchmark with the RT models implemented in the CIVA software.

  17. Electronic medical record in the simulation hospital: does it improve accuracy in charting vital signs, intake, and output?

    PubMed

    Mountain, Carel; Redd, Roxanne; O'Leary-Kelly, Colleen; Giles, Kim

    2015-04-01

    Nursing care delivery has shifted in response to the introduction of electronic health records. Adequate education in computerized documentation heavily influences a nurse's ability to navigate and utilize electronic medical records. The risk of treatment error increases when a bedside nurse lacks the correct knowledge and skills regarding electronic medical record documentation. Prelicensure nursing education should introduce electronic medical record documentation and provide a method for feedback from instructors to ensure proper understanding and use of this technology. RN preceptors evaluated two groups of associate degree nursing students to determine whether introduction of an electronic medical record in the simulation hospital increased accuracy in documenting vital signs, intake, and output in the actual clinical setting. During simulation, the first group of students documented using traditional paper and pen; the second group used an academic electronic medical record. Preceptors evaluated each group during their clinical rotations at two local inpatient facilities. RN preceptors provided information by responding to a 10-question Likert-scale survey regarding student electronic medical record documentation during the 120-hour inpatient preceptor rotation. The implementation of the electronic medical record in the simulation hospital, although a complex undertaking, provided students a safe and supportive environment in which to practice using technology and receive feedback from faculty regarding accurate documentation.

  18. Efficiency and Accuracy of the Generalized Solvent Boundary Potential for Hybrid QM/MM Simulations: Implementation for Semiempirical Hamiltonians.

    PubMed

    Benighaus, Tobias; Thiel, Walter

    2008-10-14

    We report the implementation of the generalized solvent boundary potential (GSBP) [Im, W., Bernèche, S., and Roux, B. J. Chem. Phys. 2001, 114, 2924] in the framework of semiempirical hybrid quantum mechanical/molecular mechanical (QM/MM) methods. Application of the GSBP is connected with a significant overhead that is dominated by numerical solutions of the Poisson-Boltzmann equation for continuous charge distributions. Three approaches are presented that accelerate computation of the values at the boundary of the simulation box and in the interior of the macromolecule and solvent. It is shown that these methods reduce the computational overhead of the GSBP significantly with only minimal loss of accuracy. The accuracy of the GSBP to represent long-range electrostatic interactions is assessed for an extensive set of its inherent parameters, and a set of optimal parameters is defined. On this basis, the overhead and the savings of the GSBP are quantified for model systems of different sizes in the range of 7000 to 40 000 atoms. We find that the savings compensate for the overhead in systems larger than 12 500 atoms. Beyond this system size, the GSBP reduces the computational cost significantly, by 70% and more for large systems (>25 000 atoms). PMID:26620166

  19. Development of a Haptic Elbow Spasticity Simulator (HESS) for Improving Accuracy and Reliability of Clinical Assessment of Spasticity

    PubMed Central

    Park, Hyung-Soon; Kim, Jonghyun; Damiano, Diane L.

    2013-01-01

    This paper presents the framework for developing a robotic system to improve accuracy and reliability of clinical assessment. Clinical assessment of spasticity tends to have poor reliability because of the nature of the in-person assessment. To improve accuracy and reliability of spasticity assessment, a haptic device, named the HESS (Haptic Elbow Spasticity Simulator) has been designed and constructed to recreate the clinical “feel” of elbow spasticity based on quantitative measurements. A mathematical model representing the spastic elbow joint was proposed based on clinical assessment using the Modified Ashworth Scale (MAS) and quantitative data (position, velocity, and torque) collected on subjects with elbow spasticity. Four haptic models (HMs) were created to represent the haptic feel of MAS 1, 1+, 2, and 3. The four HMs were assessed by experienced clinicians; three clinicians performed both in-person and haptic assessments, and had 100% agreement in MAS scores; and eight clinicians who were experienced with MAS assessed the four HMs without receiving any training prior to the test. Inter-rater reliability among the eight clinicians had substantial agreement (κ = 0.626). The eight clinicians also rated the level of realism (7.63 ± 0.92 out of 10) as compared to their experience with real patients. PMID:22562769

  20. Expected accuracy of tilt measurements on a novel hexapod-based digital zenith camera system: a Monte-Carlo simulation study

    NASA Astrophysics Data System (ADS)

    Hirt, Christian; Papp, Gábor; Pál, András; Benedek, Judit; Szűcs, Eszter

    2014-08-01

    Digital zenith camera systems (DZCS) are dedicated astronomical-geodetic measurement systems for the observation of the direction of the plumb line. A DZCS key component is a pair of tilt meters for the determination of the instrumental tilt with respect to the plumb line. Highest accuracy (i.e., 0.1 arc-seconds or better) is achieved in practice through observation with precision tilt meters in opposite faces (180° instrumental rotation), and application of rigorous tilt reduction models. A novel concept proposes the development of a hexapod (Stewart platform)-based DZCS. However, hexapod-based total rotations are limited to about 30°-60° in azimuth (equivalent to ±15° to ±30° yaw rotation), which raises the question of the impact of the rotation angle between the two faces on the accuracy of the tilt measurement. The goal of the present study is the investigation of the expected accuracy of tilt measurements to be carried out on future hexapod-based DZCS, with special focus placed on the role of the limited rotation angle. A Monte-Carlo simulation study is carried out in order to derive accuracy estimates for the tilt determination as a function of several input parameters, and the results are validated against analytical error propagation. As the main result of the study, limitation of the instrumental rotation to 60° (30°) deteriorates the tilt accuracy by a factor of about 2 (4) compared to a 180° rotation between the faces. Nonetheless, a tilt accuracy at the 0.1 arc-second level is expected when the rotation is at least 45°, and 0.05 arc-second (about 0.25 microradian) accurate tilt meters are deployed. As such, a hexapod-based DZCS can be expected to allow sufficiently accurate determination of the instrumental tilt. This provides supporting evidence for the feasibility of such a novel instrumentation. The outcomes of our study are not only relevant to the field of DZCS, but also to all other types of instruments where the instrumental tilt
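    The reported degradation factors can be reproduced with a toy Monte-Carlo model (an illustrative assumption, not the paper's exact error model): each face yields two orthogonal tilt-meter readings contaminated by a constant per-axis zero offset and Gaussian noise; differencing the two faces cancels the offsets and leaves a 2×2 system whose conditioning worsens as 1/sin(ρ/2) for rotation angle ρ.

    ```python
    # Monte-Carlo sketch of tilt-recovery error vs. rotation angle between faces.
    # Model assumption: reading at azimuth a = projection of tilt (x, y) + per-axis
    # zero offset + Gaussian noise; offsets cancel when the two faces are differenced.
    import math, random

    def tilt_rms_error(rot_deg, sigma=1.0, trials=20000, rng=random.Random(42)):
        rho = math.radians(rot_deg)
        a, b = 1.0 - math.cos(rho), math.sin(rho)
        det = a * a + b * b  # = 4 sin^2(rho / 2): conditioning of the 2x2 system
        sq = 0.0
        for _ in range(trials):
            x, y = rng.gauss(0, 5), rng.gauss(0, 5)      # true tilt (arbitrary units)
            n = [rng.gauss(0, sigma) for _ in range(4)]  # per-reading noise
            d1 = x * a - y * b + n[0] - n[2]             # face differences,
            d2 = x * b + y * a + n[1] - n[3]             # offsets cancelled
            xh = (a * d1 + b * d2) / det                 # invert the 2x2 system
            yh = (-b * d1 + a * d2) / det
            sq += (xh - x) ** 2 + (yh - y) ** 2
        return math.sqrt(sq / trials)

    base = tilt_rms_error(180.0)
    for rot in (60.0, 30.0):
        print(f"{rot:5.0f} deg rotation: error x {tilt_rms_error(rot) / base:.2f}")
    ```

    Under this model the error grows by a factor of about 2 at 60° and about 3.9 at 30° relative to a 180° rotation, consistent with the factors of 2 and 4 quoted in the abstract.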

  1. The Mayfield method of estimating nesting success: A model, estimators and simulation results

    USGS Publications Warehouse

    Hensler, G.L.; Nichols, J.D.

    1981-01-01

    Using a nesting model proposed by Mayfield we show that the estimator he proposes is a maximum likelihood estimator (m.l.e.). M.l.e. theory allows us to calculate the asymptotic distribution of this estimator, and we propose an estimator of the asymptotic variance. Using these estimators we give approximate confidence intervals and tests of significance for daily survival. Monte Carlo simulation results show the performance of our estimators and tests under many sets of conditions. A traditional estimator of nesting success is shown to be quite inferior to the Mayfield estimator. We give sample sizes required for a given accuracy under several sets of conditions.
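    The Mayfield estimator the paper analyzes has a closed form: daily survival ŝ = 1 − losses/exposure-days, with asymptotic variance ŝ(1 − ŝ)/exposure-days, from which the approximate confidence intervals follow. A sketch of that calculation; the counts and the 25-day nest period are illustrative assumptions, not the paper's data:

    ```python
    # Mayfield daily survival estimate (the m.l.e.) with its asymptotic standard
    # error. The loss/exposure counts and 25-day nest period are illustrative.
    import math

    def mayfield(losses, exposure_days, nest_period_days=25, z=1.96):
        s = 1.0 - losses / exposure_days
        se = math.sqrt(s * (1.0 - s) / exposure_days)  # asymptotic s.e.
        ci = (s - z * se, s + z * se)                  # approx. 95% CI
        return s, se, ci, s ** nest_period_days        # overall nesting success

    s, se, ci, success = mayfield(losses=12, exposure_days=400)
    print(f"daily survival = {s:.3f} +/- {se:.4f}, nesting success = {success:.2f}")
    ```

    Raising the daily rate to the length of the nesting period gives the overall success probability; the traditional "apparent success" estimator ignores exposure and is biased upward, which is the inferiority the abstract reports.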

  2. Simulating lightning into the RAMS model: implementation and preliminary results

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Petracca, M.; Panegrossi, G.; Sanò, P.; Casella, D.; Dietrich, S.

    2014-11-01

    This paper shows the results of a tailored version of a previously published methodology, designed to simulate lightning activity, implemented into the Regional Atmospheric Modeling System (RAMS). The method gives the flash density at the resolution of the RAMS grid scale, allowing for a detailed analysis of the evolution of simulated lightning activity. The system is applied in detail to two case studies that occurred over the Lazio Region, in Central Italy. Simulations are compared with the lightning activity detected by the LINET network. The cases refer to two thunderstorms of different intensity which occurred, respectively, on 20 October 2011 and on 15 October 2012. The number of flashes simulated (observed) over Lazio is 19435 (16231) for the first case and 7012 (4820) for the second case, and the model correctly reproduces the larger number of flashes that characterized the 20 October 2011 event compared to the 15 October 2012 event. There are, however, errors in timing and positioning of the convection, whose magnitude depends on the case study, which are mirrored in timing and positioning errors of the lightning distribution. For the 20 October 2011 case study, spatial errors are of the order of a few tens of kilometres and the timing of the event is correctly simulated. For the 15 October 2012 case study, the spatial error in the positioning of the convection is of the order of 100 km and the event lasts longer in the simulation than in reality. To assess objectively the performance of the methodology, standard scores are presented for four additional case studies. Scores show the ability of the methodology to simulate the daily lightning activity for different spatial scales and for two different minimum thresholds of flash number density. The performance decreases at finer spatial scales and for higher thresholds. The comparison of simulated and observed lightning activity is an immediate and powerful tool to assess the model's ability to reproduce the intensity and the evolution of the convection. This shows the importance of the use of computationally efficient lightning schemes, such as the one described in this paper, in forecast models.

  3. Enhanced vision systems: results of simulation and operational tests

    NASA Astrophysics Data System (ADS)

    Hecker, Peter; Doehler, Hans-Ullrich

    1998-07-01

    Today's aircrews have to handle more and more complex situations. The most critical tasks in the field of civil aviation are landing approaches and taxiing. Especially under bad weather conditions, the crew has to handle a tremendous workload. Therefore, DLR's Institute of Flight Guidance has developed a concept for an enhanced vision system (EVS), which increases the performance and safety of the aircrew and provides comprehensive situational awareness. Some elements of this concept have been presented in previous contributions, e.g. the 'Simulation of Imaging Radar for Obstacle Detection and Enhanced Vision' by Doehler and Bollmeyer (1996). The present paper gives an overview of DLR's enhanced vision concept and research approach, which consists of two main components: simulation and experimental evaluation. In a first step, the simulation environment for enhanced vision research with a pilot in the loop is introduced. An existing fixed-base flight simulator is supplemented by real-time simulations of imaging sensors, i.e. imaging radar and infrared. By applying methods of data fusion, an enhanced vision display is generated combining different levels of information, such as terrain model data, processed images acquired by sensors, aircraft state vectors, and data transmitted via datalink. The second part of this contribution presents some experimental results. In cooperation with Daimler-Benz Aerospace Sensorsystems Ulm, a test van and a test aircraft were equipped with a prototype of an imaging millimeter-wave radar. This sophisticated HiVision radar is currently one of the most promising sensors for all-weather operations. Images acquired by this sensor are shown, as well as results of data fusion processes based on digital terrain models. The contribution concludes with a short video presentation.

  4. ENTROPY PRODUCTION IN COLLISIONLESS SYSTEMS. III. RESULTS FROM SIMULATIONS

    SciTech Connect

    Barnes, Eric I.; Egerer, Colin P.

    2015-05-20

    The equilibria formed by the self-gravitating, collisionless collapse of simple initial conditions have been investigated for decades. We present the results of our attempts to describe the equilibria formed in N-body simulations using thermodynamically motivated models. Previous work has suggested that it is possible to define distribution functions for such systems that describe maximum entropy states. These distribution functions are used to create radial density and velocity distributions for comparison to those from simulations. A wide variety of N-body code conditions are used to reduce the chance that results are biased by numerical issues. We find that a subset of initial conditions studied lead to equilibria that can be accurately described by these models, and that direct calculation of the entropy shows maximum values being achieved.

  5. Improved Accuracy of Continuous Glucose Monitoring Systems in Pediatric Patients with Diabetes Mellitus: Results from Two Studies

    PubMed Central

    2016-01-01

    Abstract Objective: This study was designed to evaluate accuracy, performance, and safety of the Dexcom (San Diego, CA) G4® Platinum continuous glucose monitoring (CGM) system (G4P) compared with the Dexcom G4 Platinum with Software 505 algorithm (SW505) when used as adjunctive management to blood glucose (BG) monitoring over a 7-day period in youth, 2–17 years of age, with diabetes. Research Design and Methods: Youth wore either one or two sensors placed on the abdomen or upper buttocks for 7 days, calibrating the device twice daily with a uniform BG meter. Participants had one in-clinic session on Day 1, 4, or 7, during which fingerstick BG measurements (self-monitoring of blood glucose [SMBG]) were obtained every 30 ± 5 min for comparison with CGM, and in youth 6–17 years of age, reference YSI glucose measurements were obtained from arterialized venous blood collected every 15 ± 5 min for comparison with CGM. The sensor was removed by the participant/family after 7 days. Results: In comparison of 2,922 temporally paired points of CGM with the reference YSI measurement for G4P and 2,262 paired points for SW505, the mean absolute relative difference (MARD) was 17% for G4P versus 10% for SW505 (P < 0.0001). In comparison of 16,318 temporally paired points of CGM with SMBG for G4P and 4,264 paired points for SW505, MARD was 15% for G4P versus 13% for SW505 (P < 0.0001). Similarly, error grid analyses indicated superior performance with SW505 compared with G4P in comparison of CGM with YSI and CGM with SMBG results, with greater percentages of SW505 results falling within error grid Zone A or the combined Zones A plus B. There were no serious adverse events or device-related serious adverse events for either the G4P or the SW505, and there was no sensor breakoff. Conclusions: The updated algorithm offers substantial improvements in accuracy and performance in pediatric patients with diabetes. Use of CGM with improved performance has
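    The headline accuracy metric above, the mean absolute relative difference (MARD), is simply the mean of |CGM − reference| / reference over temporally paired points. A minimal sketch with illustrative paired readings (mg/dL), not study data:

    ```python
    # Mean absolute relative difference (MARD) between temporally paired CGM
    # and reference glucose values. The paired readings below are illustrative.
    def mard(cgm, reference):
        rel = [abs(c - r) / r for c, r in zip(cgm, reference)]
        return 100.0 * sum(rel) / len(rel)

    cgm = [110, 150, 95, 200, 70]
    ysi = [100, 165, 90, 210, 80]
    print(f"MARD = {mard(cgm, ysi):.1f}%")
    ```

    A lower MARD indicates better sensor agreement with the reference, which is how the study quantifies the improvement from the G4P to the SW505 algorithm.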

  6. Key results from SB8 simulant flowsheet studies

    SciTech Connect

    Koopman, D. C.

    2013-04-26

    Key technically reviewed results are presented here in support of the Defense Waste Processing Facility (DWPF) acceptance of Sludge Batch 8 (SB8). This report summarizes results from simulant flowsheet studies of the DWPF Chemical Process Cell (CPC). Results include: Hydrogen generation rate for the Sludge Receipt and Adjustment Tank (SRAT) and Slurry Mix Evaporator (SME) cycles of the CPC on a 6,000 gallon basis; Volume percent of nitrous oxide, N2O, produced during the SRAT cycle; Ammonium ion concentrations recovered from the SRAT and SME off-gas; and, Dried weight percent solids (insoluble, soluble, and total) measurements and density.

  7. Numerical simulation results in the Carthage Cotton Valley field

    SciTech Connect

    Meehan, D.N.; Pennington, B.F.

    1982-01-01

    By coordinating three-dimensional reservoir simulations with pressure-transient tests, core analyses, open-hole and production logs, evaluations of tracer data during hydraulic fracturing, and geologic mapping, Champlin Petroleum obtained better predictions of the reserves and the long-term deliverability of the very tight (less than 0.1-md) Cotton Valley gas reservoir in east Texas. The simulation model that was developed proved capable of optimizing the well spacing and the fracture length. The final history match with the simulator indicated that the formation permeability of the very tight producing zones is substantially lower than suggested by conventional core analysis, 640-acre well spacing will not drain this reservoir efficiently in a reasonable time, and reserves are higher than presimulation estimates. Other results showed that even very long-term pressure buildups in this multilayer reservoir may not reach the straight line required in the conventional Horner pressure-transient analysis, type curves reflecting finite fracture flow capacity can be very useful, and pressure-drawdown analyses from well flow rates and flowing tubing pressure can provide good initial estimates of reservoir and fracture properties for detailed reservoir simulation without requiring expensive, long-term shut-ins of the well.

  8. Preliminary Results of Laboratory Simulation of Magnetic Reconnection

    NASA Astrophysics Data System (ADS)

    Zhang, Shou-Biao; Xie, Jin-Lin; Hu, Guang-Hai; Li, Hong; Huang, Guang-Li; Liu, Wan-Dong

    2011-10-01

    In the Linear Magnetized Plasma (LMP) device at the University of Science and Technology of China, we have realized magnetic reconnection in laboratory plasma by driving parallel currents through two parallel copper plates. With emissive probes, we measured the parallel (axial) electric field during the reconnection process and verified the dependence of the reconnection current on passing particles. Using a magnetic probe, we measured the time evolution of the magnetic flux; the measurements show no pileup of magnetic flux, consistent with the result of numerical simulation.

  9. Molecular beam simulation of planetary atmospheric entry - Some recent results.

    NASA Technical Reports Server (NTRS)

    French, J. B.; Reid, N. M.; Nier, A. O.; Hayden, J. L.

    1972-01-01

    Progress is reported in the development of molecular beam techniques to simulate entry into planetary atmospheres. Molecular beam sources for producing fast beams containing CO2 and atomic oxygen are discussed. Results pertinent to the design and calibration of a mass spectrometer ion source for measurement of the Martian atmosphere during the free molecule portion of the entry trajectory are also presented. The shortcomings and advantages of this simulation technique are discussed, and it is demonstrated that even with certain inadequacies much information useful to the ion source design was obtained. Particularly, it is shown that an open-cavity configuration retains sensitivity to atomic oxygen, provides reasonable signal enhancement from the stagnation effect, is not highly sensitive to pitch and yaw effects, and presents no unforeseen problems in measuring CO2 or atomic oxygen.

  10. Simulation results for an innovative anti-multipath digital receiver

    NASA Technical Reports Server (NTRS)

    Painter, J. H.; Wilson, L. R.

    1973-01-01

    Simulation results are presented for the error rate performance of the recursive digital MAP detector for known M-ary signals in multiplicative and additive Gaussian noise. Plots of detection error rate versus additive signal-to-noise ratio are given, with multipath interference strength as a parameter. For comparison, the error rates of conventional coherent and noncoherent digital MAP detectors are simultaneously simulated and graphed. It is shown that with nonzero multiplicative noise, the error rates of the conventional detectors saturate at an irreducible level as the additive signal-to-noise ratio increases, whereas the error rate for the innovative detector continues to decrease rapidly. In the absence of multiplicative interference, the conventional coherent detector and the innovative detector are shown to exhibit identical performance.

  11. Airflow Hazard Visualization for Helicopter Pilots: Flight Simulation Study Results

    NASA Technical Reports Server (NTRS)

    Aragon, Cecilia R.; Long, Kurtis R.

    2005-01-01

    Airflow hazards such as vortices or low level wind shear have been identified as a primary contributing factor in many helicopter accidents. US Navy ships generate airwakes over their decks, creating potentially hazardous conditions for shipboard rotorcraft launch and recovery. Recent sensor developments may enable the delivery of airwake data to the cockpit, where visualizing the hazard data may improve safety and possibly extend ship/helicopter operational envelopes. A prototype flight-deck airflow hazard visualization system was implemented on a high-fidelity rotorcraft flight dynamics simulator. Experienced helicopter pilots, including pilots from all five branches of the military, participated in a usability study of the system. Data was collected both objectively from the simulator and subjectively from post-test questionnaires. Results of the data analysis are presented, demonstrating a reduction in crash rate and other trends that illustrate the potential of airflow hazard visualization to improve flight safety.

  12. BWR Full Integral Simulation Test (FIST). Phase I test results

    SciTech Connect

    Hwang, W S; Alamgir, M; Sutherland, W A

    1984-09-01

    A new full-height BWR system simulator has been built under the Full-Integral-Simulation-Test (FIST) program to investigate system responses to various transients. The test program consists of two test phases. This report provides a summary, discussion, highlights, and conclusions of the FIST Phase I tests. Eight matrix tests were conducted in FIST Phase I. These tests investigated large break, small break, and steamline break LOCAs, as well as natural circulation and power transients. Results and governing phenomena of each test have been evaluated and discussed in detail in this report. One of the FIST program objectives is to assess the TRAC code by comparison with test data. Two pretest predictions made with TRACB02 are presented and compared with test data in this report.

  13. Machine learning methods for empirical streamflow simulation: a comparison of model accuracy, interpretability, and uncertainty in seasonal watersheds

    NASA Astrophysics Data System (ADS)

    Shortridge, Julie E.; Guikema, Seth D.; Zaitchik, Benjamin F.

    2016-07-01

    In the past decade, machine learning methods for empirical rainfall-runoff modeling have seen extensive development and been proposed as a useful complement to physical hydrologic models, particularly in basins where data to support process-based models are limited. However, the majority of research has focused on a small number of methods, such as artificial neural networks, despite the development of multiple other approaches for non-parametric regression in recent years. Furthermore, this work has often evaluated model performance based on predictive accuracy alone, while not considering broader objectives, such as model interpretability and uncertainty, that are important if such methods are to be used for planning and management decisions. In this paper, we use multiple regression and machine learning approaches (including generalized additive models, multivariate adaptive regression splines, artificial neural networks, random forests, and M5 cubist models) to simulate monthly streamflow in five highly seasonal rivers in the highlands of Ethiopia and compare their performance in terms of predictive accuracy, error structure and bias, model interpretability, and uncertainty when faced with extreme climate conditions. While the relative predictive performance of models differed across basins, data-driven approaches were able to achieve reduced errors when compared to physical models developed for the region. Methods such as random forests and generalized additive models may have advantages in terms of visualization and interpretation of model structure, which can be useful in providing insights into physical watershed function. However, the uncertainty associated with model predictions under extreme climate conditions should be carefully evaluated, since certain models (especially generalized additive models and multivariate adaptive regression splines) become highly variable when faced with high temperatures.
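Comparisons of the kind described above rest on error metrics such as RMSE and mean bias computed on observed versus simulated flows. A small self-contained sketch, with invented monthly flows and two hypothetical model outputs standing in for the data-driven and physical models:

```python
import math

def rmse(obs, sim):
    """Root-mean-square error of simulated vs. observed flows."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def bias(obs, sim):
    """Mean error (positive = model overpredicts on average)."""
    return sum(s - o for o, s in zip(obs, sim)) / len(obs)

# Invented monthly flows (m^3/s) for a highly seasonal river
obs = [5, 4, 6, 20, 80, 150, 300, 280, 120, 40, 15, 8]
model_a = [6, 5, 7, 25, 90, 140, 280, 270, 130, 45, 14, 9]      # stand-in "data-driven" fit
model_b = [10, 10, 10, 30, 70, 120, 250, 240, 100, 50, 20, 12]  # stand-in "physical" model

for name, sim in [("A", model_a), ("B", model_b)]:
    print(name, round(rmse(obs, sim), 1), round(bias(obs, sim), 1))
# → A 8.4 -0.6
# → B 22.0 -8.8
```

In this toy setup the "data-driven" model A has lower RMSE, mirroring the paper's finding that such approaches can reduce errors relative to regional physical models; the paper's broader point is that accuracy alone does not capture interpretability or uncertainty under extremes.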

  14. Accuracy of liquid based versus conventional cytology: overall results of new technologies for cervical cancer screening: randomised controlled trial

    PubMed Central

    Cuzick, Jack; Pierotti, Paola; Cariaggi, Maria Paola; Palma, Paolo Dalla; Naldoni, Carlo; Ghiringhello, Bruno; Giorgi-Rossi, Paolo; Minucci, Daria; Parisio, Franca; Pojer, Ada; Schiboni, Maria Luisa; Sintoni, Catia; Zorzi, Manuel; Segnan, Nereo; Confortini, Massimo

    2007-01-01

    Objective To compare the accuracy of conventional cytology with liquid based cytology for primary screening of cervical cancer. Design Randomised controlled trial. Setting Nine screening programmes in Italy. Participants Women aged 25-60 attending for a new screening round: 22 466 were assigned to the conventional arm and 22 708 were assigned to the experimental arm. Interventions Conventional cytology compared with liquid based cytology and testing for human papillomavirus. Main outcome measure Relative sensitivity for cervical intraepithelial neoplasia of grade 2 or more at blindly reviewed histology, with atypical cells of undetermined significance or more severe cytology considered a positive result. Results In an intention to screen analysis liquid based cytology showed no significant increase in sensitivity for cervical intraepithelial neoplasia of grade 2 or more (relative sensitivity 1.17, 95% confidence interval 0.87 to 1.56) whereas the positive predictive value was reduced (relative positive predictive value v conventional cytology 0.58, 0.44 to 0.77). Liquid based cytology detected more lesions of grade 1 or more (relative sensitivity 1.68, 1.40 to 2.02), with a larger increase among women aged 25-34 (P for heterogeneity 0.0006), but did not detect more lesions of grade 3 or more (relative sensitivity 0.84, 0.56 to 1.25). Results were similar when only low grade intraepithelial lesions or more severe cytology were considered a positive result. No evidence was found of heterogeneity between centres or of improvement with increasing time from start of the study. The relative frequency of women with at least one unsatisfactory result was lower with liquid based cytology (0.62, 0.56 to 0.69). Conclusion Liquid based cytology showed no statistically significant difference in sensitivity to conventional cytology for detection of cervical intraepithelial neoplasia of grade 2 or more. More positive results were found, however, leading to a lower positive

  15. Planck 2015 results. XII. Full focal plane simulations

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Castex, G.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Karakci, A.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. 
G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Melin, J.-B.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Roman, M.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Welikala, N.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    We present the 8th full focal plane simulation set (FFP8), deployed in support of the Planck 2015 results. FFP8 consists of 10 fiducial mission realizations reduced to 18,144 maps, together with the most massive suite of Monte Carlo realizations of instrument noise and CMB ever generated, comprising 10^4 mission realizations reduced to about 10^6 maps. The resulting maps incorporate the dominant instrumental, scanning, and data analysis effects, and the remaining subdominant effects will be included in future updates. Generated at a cost of some 25 million CPU-hours spread across multiple high-performance-computing (HPC) platforms, FFP8 is used to validate and verify analysis algorithms and their implementations, and to remove biases from and quantify uncertainties in the results of analyses of the real data.

  16. The relativity experiment of MORE: Global full-cycle simulation and results

    NASA Astrophysics Data System (ADS)

    Schettino, Giulia

    2015-07-01

    BepiColombo is a joint ESA/JAXA mission to Mercury with challenging objectives regarding geophysics, geodesy and fundamental physics. In particular, the Mercury Orbiter Radio science Experiment (MORE) intends, as one of its goals, to perform a test of General Relativity. This can be done by measuring and constraining the parametrized post-Newtonian (PPN) parameters to an accuracy significantly better than the current one. In this work we perform a global numerical full-cycle simulation of the BepiColombo Radio Science Experiments (RSE) in a realistic scenario, focusing on the relativity experiment and solving simultaneously for all the parameters of interest for RSE in a global least squares fit within a constrained multiarc strategy. The results on the achievable accuracy for each PPN parameter are presented and discussed, confirming the significant improvement in our knowledge of gravitation theory expected from the MORE relativity experiment. In particular, we show that, including realistic systematic effects in the range observables, an accuracy of the order of 10^-6 can still be achieved for the Eddington parameter β and for the parameter α1, which accounts for preferred-frame effects, while the only poorly determined parameter turns out to be ζ, which describes the temporal variation of the gravitational constant and of the mass of the Sun.

  17. Validation results of wind diesel simulation model TKKMOD

    NASA Astrophysics Data System (ADS)

    Manninen, L. M.

    The document summarizes the results of the TKKMOD validation procedure. TKKMOD is a simulation model developed at Helsinki University of Technology for a specific wind-diesel system layout. The model has been included in the European wind-diesel modeling software package WDLTOOLS under the CEC JOULE project Engineering Design Tools for Wind-Diesel Systems (JOUR-0078). The simulation model is utilized for calculating the long-term performance of the reference system configuration for given wind and load conditions. The main results are energy flows, energy losses in the system components, diesel fuel consumption, and the number of diesel engine starts. The work has been funded through the Finnish Advanced Energy System R&D Programme (NEMO). The validation has been performed using data from EFI (the Norwegian Electric Power Institute), since data from the Finnish reference system were not yet available. The EFI system has a slightly different configuration with similar overall operating principles and approximately the same battery capacity. The validation data set, 394 hours of measured data, is from the first prototype wind-diesel system on the island of Frøya off the Norwegian coast.

  18. Tomography and calibration for Raven: from simulations to laboratory results

    NASA Astrophysics Data System (ADS)

    Jackson, Kate; Correia, Carlos; Lardière, Olivier; Andersen, Dave; Bradley, Colin; Pham, Laurie; Blain, Célia; Nash, Reston; Gamroth, Darryl; Véran, Jean-Pierre

    2014-07-01

    This paper discusses static and dynamic tomographic wave-front (WF) reconstructors tailored to Multi-Object Adaptive Optics (MOAO) for Raven, the first MOAO science and technology demonstrator recently installed on an 8 m telescope. We show the results of a new minimum mean-square error (MMSE) solution based on spatio-angular (SA) correlation functions, which extends previous work in Correia et al., JOSA-A (2013) to adopt a zonal representation of the wave-front and its associated signals. This solution is outlined for the static reconstruction and then extended to stand-alone temporal prediction and to use as a prediction model in a pupil-plane-based Linear Quadratic Gaussian (LQG) algorithm. We have fully tested our algorithms in the lab and compared the results to simulations of the Raven system. These simulations have shown that an increase in limiting magnitude of up to one magnitude can be expected when prediction is implemented, and up to two magnitudes when the LQG is used.

  19. Distortion measurement of antennas under space simulation conditions with high accuracy and high resolution by means of holography

    NASA Technical Reports Server (NTRS)

    Frey, H. U.

    1984-01-01

    The use of laser holography for measuring the distortion of antennas under space simulation conditions is described. The subject is the so-called double-exposure procedure, which allows distortions on the order of 1 to 30 μm ± 0.5 μm per hologram to be measured over an area of up to 4 m in diameter. The holographic method takes into account the constraints of the space simulation facility. The test method, the test setup, and the constraints imposed by the space simulation facility are described. The results of the performed tests are presented and compared with the theoretical predictions. The test on the K-band antenna, for example, showed a distortion of approximately 140 μm ± 5 μm measured during the cool-down from -10 °C to -120 °C.

  20. Assessment of the accuracy of an MCNPX-based Monte Carlo simulation model for predicting three-dimensional absorbed dose distributions

    PubMed Central

    Titt, U; Sahoo, N; Ding, X; Zheng, Y; Newhauser, W D; Zhu, X R; Polf, J C; Gillin, M T; Mohan, R

    2014-01-01

    In recent years, the Monte Carlo method has been used in a large number of research studies in radiation therapy. For applications such as treatment planning, it is essential to validate the dosimetric accuracy of the Monte Carlo simulations in heterogeneous media. The AAPM Report No. 105 addresses issues concerning clinical implementation of Monte Carlo based treatment planning for photon and electron beams; however, for proton-therapy planning such guidance is not yet available. Here we present the results of our validation of the Monte Carlo model of the double scattering system used at our Proton Therapy Center in Houston. In this study, we compared Monte Carlo simulated depth doses and lateral profiles to measured data for a range of beam parameters. We varied simulated proton energies and widths of the spread-out Bragg peaks, and compared them to measurements obtained during the commissioning phase of the Proton Therapy Center in Houston. Of 191 simulated data sets, 189 agreed with measured data sets to within 3% of the maximum dose difference and within 3 mm of the maximum range or penumbra size difference. The two simulated data sets that did not agree with the measured data sets were in the distal falloff of the measured dose distribution, where large dose gradients potentially produce large differences on the basis of minute changes in the beam steering. Hence, the Monte Carlo models of medium- and large-size double scattering proton-therapy nozzles were valid for proton beams in the 100 MeV–250 MeV interval. PMID:18670050
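The agreement criterion used in this validation (within 3% of the maximum dose difference and within 3 mm of the maximum range or penumbra difference) can be expressed as a simple pass/fail check. A deliberately simplified sketch with hypothetical per-dataset differences; this is not the clinical gamma-index analysis, just the two-threshold test as stated:

```python
def agrees(dose_diff_pct, range_diff_mm, dose_tol=3.0, dist_tol=3.0):
    """Simplified acceptance check mirroring the 3% / 3 mm criterion:
    both conditions must hold for a data set to count as agreeing."""
    return dose_diff_pct <= dose_tol and range_diff_mm <= dist_tol

# Hypothetical worst-case differences per data set: (max dose diff %, range diff mm)
datasets = [(1.2, 0.5), (2.8, 2.9), (3.5, 1.0), (0.9, 3.4)]
passed = sum(agrees(d, r) for d, r in datasets)
print(f"{passed} of {len(datasets)} data sets agree")  # → 2 of 4 data sets agree
```

Applied to the study's 191 data sets, this kind of check yields the reported 189/191 agreement.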

  1. Some results on ethnic conflicts based on evolutionary game simulation

    NASA Astrophysics Data System (ADS)

    Qin, Jun; Yi, Yunfei; Wu, Hongrun; Liu, Yuhang; Tong, Xiaonian; Zheng, Bojin

    2014-07-01

    The force of ethnic separatism, essentially originating from the negative effects of ethnic identity, damages the stability and harmony of multiethnic countries. To eliminate the foundation of ethnic separatism and establish harmonious ethnic relationships, some scholars have proposed that ethnic harmony could be promoted by popularizing civic identity. However, this viewpoint has been discussed only from a philosophical perspective and still lacks the support of scientific evidence. Because ethnic groups and ethnic identity are products of evolution, and ethnic identity is the parochialist strategy from the perspective of game theory, this paper proposes an evolutionary game simulation model to study the relationship between civic identity and ethnic conflict based on evolutionary game theory. The simulation results indicate that: (1) the ratio of individuals with civic identity is negatively associated with the frequency of ethnic conflicts; (2) ethnic conflict will not die out by eliminating all members of an ethnic group once and for all, nor can it be reduced by forcible pressure, i.e., forcibly increasing the ratio of individuals with civic identity; (3) the average frequency of conflicts can be kept at a low level by promoting civic identity periodically and persistently.
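Finding (1), that conflict frequency falls as the civic-identity ratio rises, can be illustrated with a toy pairwise-encounter model. This is an invented minimal sketch, not the authors' evolutionary game: a conflict occurs only when two randomly drawn agents both retain ethnic identity and belong to different groups.

```python
import random

def conflict_rate(civic_ratio, trials=10000, seed=1):
    """Toy encounter model: fraction of random pairwise encounters that
    end in conflict, given the ratio of agents holding civic identity."""
    rng = random.Random(seed)
    conflicts = 0
    for _ in range(trials):
        a_civic = rng.random() < civic_ratio
        b_civic = rng.random() < civic_ratio
        a_group, b_group = rng.randint(0, 1), rng.randint(0, 1)
        if not (a_civic or b_civic) and a_group != b_group:
            conflicts += 1
    return conflicts / trials

# Conflict frequency falls as the civic-identity ratio rises
for r in (0.0, 0.5, 0.9):
    print(r, conflict_rate(r))
```

The expected rate here is roughly 0.5 * (1 - civic_ratio)^2, so the printed rates decrease sharply with the civic ratio, echoing the paper's negative association.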

  2. Wastewater neutralization control based in fuzzy logic: Simulation results

    SciTech Connect

    Garrido, R.; Adroer, M.; Poch, M.

    1997-05-01

    Neutralization is a technique widely used as part of wastewater treatment processes. Due to the importance of this technique, extensive study has been devoted to its control. However, industrial wastewater neutralization control is a procedure with many problems (nonlinearity of the titration curve, variable buffering, changes in loading), and despite the efforts devoted to this subject, the problem has not been totally solved. In this paper, the authors present the development of a fuzzy logic controller (FLC). In order to study its effectiveness, it has been compared, by simulation, with other advanced controllers (using identification techniques and adaptive control algorithms with reference models) when faced with various types of wastewater with different buffer capacities, or when changes in the concentration of the acid present in the wastewater take place. Results obtained show that the FLC can be considered a powerful alternative for controlling wastewater neutralization processes.

  3. SLAC E144 Plots, Simulation Results, and Data

    DOE Data Explorer

    The 1997 E144 experiments at the Stanford Linear Accelerator Center (SLAC) utilized extremely high laser intensities and collided huge groups of photons together so violently that positron-electron pairs were briefly created, actual particles of matter and antimatter. Instead of matter exploding into heat and light, light actually became matter. That accomplishment opened a new path into the exploration of the interactions of electrons and photons, or quantum electrodynamics (QED). The E144 information at this website includes Feynman diagrams, simulation results, and data files. See also a series of frames showing the E144 laser colliding with a beam electron and producing an electron-positron pair at http://www.slac.stanford.edu/exp/e144/focpic/focpic.html, and lists of collaborators' papers, theses, and a page of press articles.

  4. Accuracy of System Step Response Roll Magnitude Estimation from Central and Peripheral Visual Displays and Simulator Cockpit Motion

    NASA Technical Reports Server (NTRS)

    Hosman, R. J. A. W.; Vandervaart, J. C.

    1984-01-01

    An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements in perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button on a keyboard device. The differing time histories of roll angle, roll rate and roll acceleration caused by a step response stimulate the different perception processes related to the central visual field, peripheral visual field and vestibular organs in different, yet exactly known, ways. Experiments with either of the visual displays or cockpit motion, and some combinations of these, were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.

  5. Relative significance of heat transfer processes to quantify tradeoffs between complexity and accuracy of energy simulations with a building energy use patterns classification

    NASA Astrophysics Data System (ADS)

    Heidarinejad, Mohammad

    the indoor condition regardless of the contribution of internal and external loads. To deploy the methodology to another portfolio of buildings, simulated LEED NC office buildings are selected. The advantage of this approach is to isolate energy performance due to inherent building characteristics and location, rather than operational and maintenance factors that can contribute to significant variation in building energy use. A framework for detailed building energy databases with annual energy end-uses is developed to select variables and omit outliers. The results show that the high performance office buildings are internally-load dominated with existence of three different clusters of low-intensity, medium-intensity, and high-intensity energy use pattern for the reviewed office buildings. Low-intensity cluster buildings benefit from small building area, while the medium- and high-intensity clusters have a similar range of floor areas and different energy use intensities. Half of the energy use in the low-intensity buildings is associated with the internal loads, such as lighting and plug loads, indicating that there are opportunities to save energy by using lighting or plug load management systems. A comparison between the frameworks developed for the campus buildings and LEED NC office buildings indicates these two frameworks are complementary to each other. Availability of the information has yielded two different procedures, suggesting future studies for a portfolio of buildings such as city benchmarking and disclosure ordinance should collect and disclose minimal required inputs suggested by this study with the minimum level of monthly energy consumption granularity. This dissertation developed automated methods using the OpenStudio API (Application Programming Interface) to create energy models based on the building class. ASHRAE Guideline 14 defines well-accepted criteria to measure accuracy of energy simulations; however, there is no well

  6. An in vitro comparison of diagnostic accuracy of cone beam computed tomography and phosphor storage plate to detect simulated occlusal secondary caries under amalgam restoration

    PubMed Central

    Shahidi, Shoaleh; Zadeh, Nahal Kazerooni; Sharafeddin, Farahnaz; Shahab, Shahriar; Bahrampour, Ehsan; Hamedani, Shahram

    2015-01-01

    Background: This study aimed to compare the diagnostic accuracy and feasibility of cone beam computed tomography (CBCT) with phosphor storage plate (PSP) in detection of simulated occlusal secondary caries. Materials and Methods: In this in vitro descriptive-comparative study, a total of 80 slots of class I cavities were prepared on 80 extracted human premolars. Then, 40 teeth were randomly selected out of this sample and artificial carious lesions were created on these teeth by a round diamond bur no. 1/2. All 80 teeth were restored with amalgam fillings and radiographs were taken, both with the PSP system and CBCT. All images were evaluated by three calibrated observers. The area under the receiver operating characteristic curve was used to compare the diagnostic accuracy of the two systems. SPSS (SPSS Inc., Chicago, IL, USA) was adopted for statistical analysis. The differences between the Az values of the bitewing and CBCT methods were compared by a pairwise comparison method. The inter- and intra-observer agreement was assessed by kappa analysis (P < 0.05). Results: The mean Az value for bitewings and CBCT was 0.903 and 0.994, respectively. Significant differences were found between PSP and CBCT (P = 0.010). The kappa value for inter-observer agreement was 0.68 and 0.76 for PSP and CBCT, respectively. The kappa value for intra-observer agreement was 0.698 (observer 1, P = 0.000), 0.766 (observer 2, P = 0.000) and 0.716 (observer 3, P = 0.000) in the PSP method, and 0.816 (observer 1, P = 0.000), 0.653 (observer 2, P = 0.000) and 0.744 (observer 3, P = 0.000) in the CBCT method. Conclusion: This in vitro study, with a limited number of samples, showed that the New Tom VGI Flex CBCT system was more accurate than the PSP in detecting the simulated small secondary occlusal caries under amalgam restoration. PMID:25878682
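The Az value reported for each modality is the area under the ROC curve, which for rating data equals the normalized Mann-Whitney statistic over all lesion/no-lesion pairs. A sketch with invented 5-point observer confidence ratings (not the study's data):

```python
def az(positive, negative):
    """Empirical ROC area (Az): fraction of positive/negative rating pairs
    ranked correctly, with ties counted as 0.5."""
    wins = 0.0
    for p in positive:
        for n in negative:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(positive) * len(negative))

# Invented confidence ratings (higher = more confident caries is present)
carious = [4, 5, 3, 5, 4]   # teeth with simulated lesions
sound = [1, 2, 2, 3, 1]     # restored teeth without lesions
print(round(az(carious, sound), 2))  # → 0.98
```

An Az of 0.5 means chance-level discrimination and 1.0 means perfect ranking, so the study's 0.994 (CBCT) vs. 0.903 (PSP) reflects near-perfect versus good separation of carious from sound teeth.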

  7. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

    A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature and relative humidity measurements. The archived data are available at: http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station to station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced by 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, the Juelich Ozonesonde Intercomparison Experiment) in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing and instrument type (manufacturer) are taken into account.
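
    The sonde-versus-TOMS comparison described above reduces to a percent discrepancy between two total-ozone column amounts. A minimal sketch, with invented Dobson-unit values rather than SHADOZ data:

```python
# Hedged sketch of the comparison above: percent discrepancy between an
# integrated sonde ozone column and the satellite (TOMS) overpass column.
# The Dobson-unit values are invented for illustration.
def percent_difference(sonde_du, toms_du):
    return 100.0 * (sonde_du - toms_du) / toms_du

print(round(percent_difference(255.0, 250.0), 1))  # 2.0
```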

  8. Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT

    DOE PAGES

    Collins, Benjamin; Stimpson, Shane; Kelley, Blake W.; Young, Mitchell T. H.; Kochunas, Brendan; Graham, Aaron; Larsen, Edward W.; Downar, Thomas; Godfrey, Andrew

    2016-08-25

    We derived a consistent “2D/1D” neutron transport method from the 3D Boltzmann transport equation, to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower order transport solution to discretize the axial variable. Our paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulations of Light Water Reactors (CASL) core simulator VERA-CS. We also performed several applications on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.

  9. Multiple interfacing between classical ray-tracing and wave-optical simulation approaches: a study on applicability and accuracy.

    PubMed

    Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Wenzl, Franz P; Hartmann, Paul; Hohenester, Ulrich; Sommer, Christian

    2014-06-30

    In this study, the applicability of an interface procedure for combined ray-tracing (RT) and finite difference time domain (FDTD) simulations of optical systems containing two diffractive gratings is discussed. The simulation of such systems requires multiple FDTD↔RT steps. In order to minimize the error due to the loss of phase information in an FDTD→RT step, we derive an equation for a maximal coherence correlation function (MCCF) which describes the maximum degree of impact of phase effects between the two diffraction gratings and which depends on the spatial distance between the gratings, the degree of spatial coherence of the light source, and the diffraction angle of the first grating at the wavelength of light used. This MCCF builds an envelope of the oscillations caused by the distance-dependent coupling effects between the two diffractive optical elements. Furthermore, by comparing the far-field projections of pure FDTD simulations with the results of an RT→FDTD→RT→FDTD→RT interface procedure simulation, we show that this function strongly correlates with the error caused by the interface procedure.

  10. Accuracy of the water column approximation in numerically simulating propagation of teleseismic PP waves and Rayleigh waves

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Ni, Sidao; Chu, Risheng; Yao, Huajian

    2016-06-01

    Numerical solvers of wave equations have been widely used to simulate global seismic waves including PP waves for modeling the 410/660 km discontinuities and Rayleigh waves for imaging crustal structure. In order to avoid the extra computation cost due to ocean water effects, these numerical solvers usually adopt the water column approximation, whose accuracy depends on frequency and needs to be investigated quantitatively. In this paper, we describe a unified representation of accurate and approximate forms of the equivalent water column boundary condition as well as the free boundary condition. Then we derive an analytical form of the PP-wave reflection coefficient with the unified boundary condition, and quantify the effects of the water column approximation on amplitude and phase shift of the PP waves. We also study the effects of the water column approximation on phase velocity dispersion of the fundamental-mode Rayleigh wave with a propagation matrix method. We find that with the water column approximation: (1) The error in PP amplitude and phase shift is less than 5% and 9° at periods greater than 25 s for most oceanic regions. But at periods of 15 s or less, PP is inaccurate by up to 10% in amplitude and a few seconds in time shift for deep oceans. (2) The error in Rayleigh wave phase velocity is less than 1% at periods greater than 30 s in most oceanic regions, but the error is up to 2% for deep oceans at periods of 20 s or less. This study confirms that the water column approximation is only accurate at long periods and needs to be improved at shorter periods.

  11. Accuracy of the water column approximation in numerically simulating propagation of teleseismic PP waves and Rayleigh waves

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Ni, Sidao; Chu, Risheng; Yao, Huajian

    2016-08-01

    Numerical solvers of wave equations have been widely used to simulate global seismic waves including PP waves for modelling the 410/660 km discontinuities and Rayleigh waves for imaging crustal structure. In order to avoid the extra computation cost due to ocean water effects, these numerical solvers usually adopt the water column approximation, whose accuracy depends on frequency and needs to be investigated quantitatively. In this paper, we describe a unified representation of accurate and approximate forms of the equivalent water column boundary condition as well as the free boundary condition. Then we derive an analytical form of the PP-wave reflection coefficient with the unified boundary condition, and quantify the effects of the water column approximation on amplitude and phase shift of the PP waves. We also study the effects of the water column approximation on phase velocity dispersion of the fundamental-mode Rayleigh wave with a propagation matrix method. We find that with the water column approximation: (1) The error in PP amplitude and phase shift is less than 5 per cent and 9° at periods greater than 25 s for most oceanic regions. But at periods of 15 s or less, PP is inaccurate by up to 10 per cent in amplitude and a few seconds in time shift for deep oceans. (2) The error in Rayleigh wave phase velocity is less than 1 per cent at periods greater than 30 s in most oceanic regions, but the error is up to 2 per cent for deep oceans at periods of 20 s or less. This study confirms that the water column approximation is only accurate at long periods and needs to be improved at shorter periods.

  12. Improving stamping simulation accuracy by accounting for realistic friction and lubrication conditions: Application to the door-outer of the Mercedes-Benz C-class Coupé

    NASA Astrophysics Data System (ADS)

    Hol, J.; Wiebenga, J. H.; Stock, J.; Wied, J.; Wiegand, K.; Carleer, B.

    2016-08-01

    In the stamping of automotive parts, friction and lubrication play a key role in achieving high-quality products. In the development process of new automotive parts, it is therefore crucial to accurately account for these effects in sheet metal forming simulations. Only then can one obtain reliable and realistic simulation results that correspond to the actual try-out and mass production conditions. In this work, the TriboForm software is used to accurately account for tribology, friction, and lubrication conditions in stamping simulations. The enhanced stamping simulations are applied and validated for the door-outer of the Mercedes-Benz C-Class Coupé. The project results demonstrate the improved prediction accuracy of stamping simulations with respect to both part quality and actual stamping process conditions.

  13. New simulation and measurement results on gateable DEPFET devices

    NASA Astrophysics Data System (ADS)

    Bähr, Alexander; Aschauer, Stefan; Hermenau, Katrin; Herrmann, Sven; Lechner, Peter H.; Lutz, Gerhard; Majewski, Petra; Miessner, Danilo; Porro, Matteo; Richter, Rainer H.; Schaller, Gerhard; Sandow, Christian; Schnecke, Martina; Schopper, Florian; Stefanescu, Alexander; Strüder, Lothar; Treis, Johannes

    2012-07-01

    To improve the signal-to-noise ratio, devices for optical and x-ray astronomy use techniques to suppress background events. Well-known examples are shutters and frame-store Charge-Coupled Devices (CCDs). Based on the DEpleted P-channel Field Effect Transistor (DEPFET) principle, a so-called Gateable DEPFET detector can be built. Such devices combine the DEPFET principle with a fast built-in electronic shutter usable for optical and x-ray applications. The DEPFET itself is the basic cell of an active pixel sensor built on a fully depleted bulk. It combines internal amplification, readout on demand, analog storage of the signal charge and a low readout noise with full sensitivity over the whole bulk thickness. A Gateable DEPFET has all these benefits and obviates the need for an external shutter. Two concepts of Gateable DEPFET layouts providing a built-in shutter will be introduced. Furthermore, proof-of-principle measurements for both concepts are presented. Using recently produced prototypes, a shielding of the collection anode of up to 1 × 10^-4 was achieved. As predicted by simulations, an optimized geometry should result in values of 1 × 10^-5 and better. With the switching electronics currently in use, a timing evaluation of the shutter opening and closing yielded rise and fall times of 100 ns.

  14. International test results for objective lens quality, resolution, spectral accuracy and spectral separation for confocal laser scanning microscopes.

    PubMed

    Cole, Richard W; Thibault, Marc; Bayles, Carol J; Eason, Brady; Girard, Anne-Marie; Jinadasa, Tushare; Opansky, Cynthia; Schulz, Katherine; Brown, Claire M

    2013-12-01

    As part of an ongoing effort to increase image reproducibility and fidelity, in addition to improving cross-instrument consistency, we have proposed using four separate instrument quality tests to augment the ones we have previously reported. These four tests assessed the following areas: (1) objective lens quality, (2) resolution, (3) accuracy of the wavelength information from spectral detectors, and (4) the accuracy and quality of spectral separation algorithms. Data were received from 55 laboratories located in 18 countries. The largest source of errors across all tests was user error, which could be subdivided between failure to follow the provided protocols and improper use of the microscope. This truly emphasizes the importance of proper, rigorous training and diligence in performing confocal microscopy experiments and equipment evaluations. It should be noted that there was no discernible difference in quality between confocal microscope manufacturers. These tests, as well as others previously reported, will help assess the quality of confocal microscopy equipment and will provide a means to track equipment performance over time. From 62 to 97% of the data sets sent in passed the various tests, demonstrating the usefulness and appropriateness of these tests as part of a larger performance testing regimen.

  15. LANGMUIR WAVE DECAY IN INHOMOGENEOUS SOLAR WIND PLASMAS: SIMULATION RESULTS

    SciTech Connect

    Krafft, C.; Volokitin, A. S.; Krasnoselskikh, V. V.

    2015-08-20

    Langmuir turbulence excited by electron flows in solar wind plasmas is studied on the basis of numerical simulations. In particular, nonlinear wave decay processes involving ion-sound (IS) waves are considered in order to understand their dependence on external long-wavelength plasma density fluctuations. In the presence of inhomogeneities, it is shown that the decay processes are localized in space and, due to the differences between the group velocities of Langmuir and IS waves, their duration is limited so that a full nonlinear saturation cannot be achieved. The reflection and the scattering of Langmuir wave packets on the ambient and randomly varying density fluctuations lead to crucial effects impacting the development of the IS wave spectrum. Notably, beatings between forward propagating Langmuir waves and reflected ones result in the parametric generation of waves of noticeable amplitudes and in the amplification of IS waves. These processes, repeated at different space locations, form a series of cascades of wave energy transfer, similar to those studied in the frame of weak turbulence theory. The dynamics of such a cascading mechanism and its influence on the acceleration of the most energetic part of the electron beam are studied. Finally, the role of the decay processes in the shaping of the profiles of the Langmuir wave packets is discussed, and the waveforms calculated are compared with those observed recently on board the spacecraft Solar TErrestrial RElations Observatory and WIND.

  16. Improving the trust in results of numerical simulations and scientific data analytics

    SciTech Connect

    Cappello, Franck; Constantinescu, Emil; Hovland, Paul; Peterka, Tom; Phillips, Carolyn; Snir, Marc; Wild, Stefan

    2015-04-30

    approaches to address it. This paper does not focus on the trust that the execution will actually complete. The product of simulation or of data analytic executions is the final element of a potentially long chain of transformations, where each stage has the potential to introduce harmful corruptions. These corruptions may produce results that deviate from the user-expected accuracy without notifying the user of this deviation. There are many potential sources of corruption before and during the execution; consequently, in this white paper we do not focus on the protection of the end result after the execution.

  17. Airborne ICESat-2 simulator (MABEL) results from Greenland

    NASA Astrophysics Data System (ADS)

    Neumann, T.; Markus, T.; Brunt, K. M.; Walsh, K.; Hancock, D.; Cook, W. B.; Brenner, A. C.; Csatho, B. M.; De Marco, E.

    2012-12-01

    The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) is a next-generation laser altimeter designed to continue key observations of sea ice freeboard, ice sheet elevation change, vegetation canopy height, earth surface elevation and sea surface heights. Scheduled for launch in mid-2016, ICESat-2 will collect data between 88 degrees north and south using a high-repetition-rate (10 kHz) laser operating at 532 nm and a photon-counting detection strategy. Our airborne simulator, the Multiple Altimeter Beam Experimental Lidar (MABEL), uses a similar photon-counting measurement strategy and operates at 532 nm (16 beams) and 1064 nm (8 beams) to collect data similar to what we expect for ICESat-2. The comparison between frequencies allows for studies of possible penetration of green light into water or snow. MABEL collects more spatially dense data than ICESat-2 (2 cm along-track vs. 70 cm along-track for ICESat-2) and has a smaller footprint (2 m nominal diameter vs. 10 m nominal diameter for ICESat-2), requiring geometric and radiometric scaling to relate MABEL data to simulated ICESat-2 data. We based MABEL out of Keflavik, Iceland during April 2012, and collected ~100 hours of data from 20 km altitude over a variety of targets. MABEL collected sea ice data over the Nares Strait and off the east coast of Greenland, the latter flight in coordination with NASA's Operation IceBridge, which collected ATM data along the same track within 90 minutes of MABEL data collection. MABEL flew a variety of lines over Greenland in the southwest, the Jakobshavn region, and over the ice sheet interior, including 4 hours of coincident data with Operation IceBridge in southwest Greenland. MABEL flew a number of calibration sites, including corner cubes in Svalbard, Summit Station (where a GPS survey of the surface elevation was collected within an hour of our overflight), and well-surveyed targets in Iceland and western Greenland.
In this presentation, we present an overview of

  18. Results of a Flight Simulation Software Methods Survey

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce

    1995-01-01

    A ten-page questionnaire was mailed to members of the AIAA Flight Simulation Technical Committee in the spring of 1994. The survey inquired about various aspects of developing and maintaining flight simulation software, as well as a few questions dealing with characterization of each facility. As of this report, 19 completed surveys (out of 74 sent out) have been received. This paper summarizes those responses.

  19. Improved Accuracy in RNA-Protein Rigid Body Docking by Incorporating Force Field for Molecular Dynamics Simulation into the Scoring Function.

    PubMed

    Iwakiri, Junichi; Hamada, Michiaki; Asai, Kiyoshi; Kameda, Tomoshi

    2016-09-13

    RNA-protein interactions play fundamental roles in many biological processes. To understand these interactions, it is necessary to know the three-dimensional structures of RNA-protein complexes. However, determining the tertiary structure of these complexes is often difficult, suggesting that accurate rigid body docking for RNA-protein complexes is needed. In general, the rigid body docking process is divided into two steps: generating candidate structures from the individual RNA and protein structures and then narrowing down the candidates. In this study, we focus on the former problem to improve the prediction accuracy in RNA-protein docking. Our method is based on the integration of physicochemical information about RNA into ZDOCK, which is known as one of the most successful computer programs for protein-protein docking. Because recent studies have shown that current force fields for molecular dynamics simulations of proteins and nucleic acids are quite accurate, we modeled the physicochemical information about RNA with force fields such as AMBER and CHARMM. A comprehensive benchmark of RNA-protein docking, using three recently developed data sets, reveals the remarkable prediction accuracy of the proposed method compared with existing programs for docking: the highest success rate is 34.7% for the predicted structure of the RNA-protein complex with the best score and 79.2% for 3,600 predicted ones. Three full atomistic force fields for RNA (AMBER94, AMBER99, and CHARMM22) produced almost the same accurate results, showing that current force fields for nucleic acids are quite accurate. In addition, we found that the electrostatic interaction and the representation of shape complementarity between protein and RNA play important roles in accurate prediction of the native structures of RNA-protein complexes. PMID:27494732

  20. Improved Accuracy in RNA-Protein Rigid Body Docking by Incorporating Force Field for Molecular Dynamics Simulation into the Scoring Function.

    PubMed

    Iwakiri, Junichi; Hamada, Michiaki; Asai, Kiyoshi; Kameda, Tomoshi

    2016-09-13

    RNA-protein interactions play fundamental roles in many biological processes. To understand these interactions, it is necessary to know the three-dimensional structures of RNA-protein complexes. However, determining the tertiary structure of these complexes is often difficult, suggesting that an accurate rigid body docking for RNA-protein complexes is needed. In general, the rigid body docking process is divided into two steps: generating candidate structures from the individual RNA and protein structures and then narrowing down the candidates. In this study, we focus on the former problem to improve the prediction accuracy in RNA-protein docking. Our method is based on the integration of physicochemical information about RNA into ZDOCK, which is known as one of the most successful computer programs for protein-protein docking. Because recent studies showed the current force field for molecular dynamics simulation of protein and nucleic acids is quite accurate, we modeled the physicochemical information about RNA by force fields such as AMBER and CHARMM. A comprehensive benchmark of RNA-protein docking, using three recently developed data sets, reveals the remarkable prediction accuracy of the proposed method compared with existing programs for docking: the highest success rate is 34.7% for the predicted structure of the RNA-protein complex with the best score and 79.2% for 3,600 predicted ones. Three full atomistic force fields for RNA (AMBER94, AMBER99, and CHARMM22) produced almost the same accurate result, which showed current force fields for nucleic acids are quite accurate. In addition, we found that the electrostatic interaction and the representation of shape complementary between protein and RNA plays the important roles for accurate prediction of the native structures of RNA-protein complexes. PMID:27494732

  1. SIMULATION OF DNAPL DISTRIBUTION RESULTING FROM MULTIPLE SOURCES

    EPA Science Inventory

    A three-dimensional and three-phase (water, NAPL and gas) numerical simulator, called NAPL, was employed to study the interaction between DNAPL (PCE) plumes in variably saturated porous media. Several model verification tests have been performed, including a series of 2-D labo...

  2. Effects of heterogeneity in aquifer permeability and biomass on biodegradation rate calculations - Results from numerical simulations

    USGS Publications Warehouse

    Scholl, M.A.

    2000-01-01

    Numerical simulations were used to examine the effects of heterogeneity in hydraulic conductivity (K) and intrinsic biodegradation rate on the accuracy of contaminant plume-scale biodegradation rates obtained from field data. The simulations were based on a steady-state BTEX contaminant plume undergoing biodegradation under sulfate-reducing conditions, with the electron acceptor in excess. Biomass was either uniform or correlated with K to model spatially variable intrinsic biodegradation rates. A hydraulic conductivity data set from an alluvial aquifer was used to generate three sets of 10 realizations with different degrees of heterogeneity, and contaminant transport with biodegradation was simulated with BIOMOC. Biodegradation rates were calculated from the steady-state contaminant plumes using decreases in concentration with distance downgradient and a single flow velocity estimate, as is commonly done in site characterization to support the interpretation of natural attenuation. The observed rates were found to underestimate the actual rate specified in the heterogeneous model in all cases. The discrepancy between the observed rate and the 'true' rate depended on the ground water flow velocity estimate, and increased with increasing heterogeneity in the aquifer. For a lognormal K distribution with variance of 0.46, the estimate was no more than a factor of 1.4 slower than the true rate. For an aquifer with 20% silt/clay lenses, the rate estimate was as much as nine times slower than the true rate. Homogeneous-permeability, uniform-degradation-rate simulations were used to generate predictions of remediation time with the rates estimated from heterogeneous models. The homogeneous models generally overestimated the extent of remediation or underestimated remediation time, due to delayed degradation of contaminants in the low-K areas.
Results suggest that aquifer characterization for natural attenuation at contaminated sites should include assessment of the presence
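
    The plume-scale rate estimate the simulations evaluate can be sketched as a first-order decay fit along the flow path; the concentrations, distance, and velocity below are invented for illustration, not the study's data:

```python
# A minimal sketch of the field rate estimate described above: a first-order
# decay constant inferred from the downgradient concentration decrease and a
# single flow-velocity estimate. All values are illustrative.
import math

def first_order_rate(c_up, c_down, distance_m, velocity_m_per_day):
    """k [1/day] from C(x) = C_up * exp(-k * x / v)."""
    travel_time_days = distance_m / velocity_m_per_day
    return math.log(c_up / c_down) / travel_time_days

# A tenfold concentration drop over 100 m at 0.1 m/day:
k = first_order_rate(10.0, 1.0, 100.0, 0.1)
print(round(k, 5))  # 0.0023
```

    Any bias in the single velocity estimate propagates directly into k, which is one reason heterogeneity in K makes the observed plume-scale rate underestimate the true intrinsic rate.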

  3. FINAL SIMULATION RESULTS FOR DEMONSTRATION CASE 1 AND 2

    SciTech Connect

    David Sloan; Woodrow Fiveland

    2003-10-15

    The goal of this DOE Vision-21 project work scope was to develop an integrated suite of software tools that could be used to simulate and visualize advanced plant concepts. Existing process simulation software did not meet the DOE's objective of "virtual simulation", which was needed to evaluate complex cycles. The overall intent of the DOE was to improve predictive tools for cycle analysis, and to improve the component models that are used in turn to simulate equipment in the cycle. Advanced component models are available; however, a generic coupling capability that would link the advanced component models to the cycle simulation software remained to be developed. In the current project, the coupling of the cycle analysis and cycle component simulation software was based on an existing suite of programs. The challenge was to develop a general-purpose software and communications link between the cycle analysis software Aspen Plus® (marketed by Aspen Technology, Inc.) and specialized component modeling packages, as exemplified by industrial proprietary codes (utilized by ALSTOM Power Inc.) and the FLUENT® computational fluid dynamics (CFD) code (provided by Fluent Inc.). A software interface and controller, based on the open CAPE-OPEN standard, has been developed and extensively tested. Various test runs and demonstration cases have been utilized to confirm the viability and reliability of the software. ALSTOM Power was tasked with selecting and running two demonstration cases to test the software: (1) a conventional steam cycle (designated as Demonstration Case 1), and (2) a combined cycle test case (designated as Demonstration Case 2). Demonstration Case 1 is a 30 MWe coal-fired power plant for municipal electricity generation, while Demonstration Case 2 is a 270 MWe, natural gas-fired, combined cycle power plant. Sufficient data were available from the operation of both power plants to complete the cycle configurations.
Three runs

  4. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Astrophysics Data System (ADS)

    Barrie, A. C.; Adrian, M. L.; Yeh, P.; Winkert, G. E.; Lobell, J. V.; Viňas, A. F.; Simpson, D. G.; Moore, T. E.

    2008-12-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° × 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45° × 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30 ms (electrons) and 150 ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5 Mb s-1 of electron data while the DIS generates 1.1 Mb s-1 of ion data, yielding an FPI total data rate of 7.6 Mb s-1. The FPI electron/ion data are collected by the IDPU and then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5 Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be

  5. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Technical Reports Server (NTRS)

    Barrie, A.; Adrian, Mark L.; Yeh, P.-S.; Winkert, G. E.; Lobell, J. V.; Vinas, A.F.; Simpson, D. J.; Moore, T. E.

    2008-01-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6 deg x 180 deg fields-of-view (FOV) are set 90 deg apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45 deg x 180 deg fan about its nominal viewing (0 deg deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30-ms (electrons) and 150-ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5-Mbs(exp -1) of electron data while the DIS generates 1.1-Mbs(exp -1) of ion data, yielding an FPI total data rate of 7.6-Mbs(exp -1). The FPI electron/ion data are collected by the IDPU and then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5-Mbs(exp -1). Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be discussed include: review of compression algorithm; data quality

  6. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Astrophysics Data System (ADS)

    Barrie, A.; Adrian, M. L.; Yeh, P.; Winkert, G.; Lobell, J.; Vinas, A. F.; Simpson, D. G.

    2009-12-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° x 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45° x 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30-ms (electrons) and 150-ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5-Mb s-1 of electron data while the DIS generates 1.1-Mb s-1 of ion data, yielding an FPI total data rate of 7.6-Mb s-1. The FPI electron/ion data are collected by the IDPU and then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5-Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present updated simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data as well as the FPI-DIS ion data. Compression analysis is based upon a seed of re-processed Cluster
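
    A back-of-envelope check of the data rates quoted in these FPI records (my own arithmetic, not mission documentation) shows why onboard compression is required:

```python
# Back-of-envelope check of the rates quoted above (my arithmetic, not
# mission documentation): the burst-mode FPI output must be compressed to
# fit the 1.5 Mb/s telemetry allocation to the CIDP.
def required_compression_ratio(raw_rate_mbps, allocation_mbps):
    return raw_rate_mbps / allocation_mbps

# 6.5 Mb/s (DES electrons) + 1.1 Mb/s (DIS ions) vs. the 1.5 Mb/s allocation:
print(round(required_compression_ratio(6.5 + 1.1, 1.5), 2))  # 5.07
```

    That is, the CCSDS 122.0-B-1 compression must sustain roughly a 5:1 ratio on average for the burst data to fit the allocation.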

  7. A simulation study of predictive ability measures in a survival model II: explained randomness and predictive accuracy.

    PubMed

    Choodari-Oskooei, B; Royston, P; Parmar, Mahesh K B

    2012-10-15

    Several R²-type measures have been proposed to evaluate the predictive ability of a survival model. In Part I, we classified the measures into four categories and studied the measures in the explained variation category. In this paper, we study the remaining measures in a similar fashion, discussing their strengths and shortcomings. Simulation studies are used to examine the performance of the measures with respect to the criteria we set out in Part I. Our simulation studies showed that among the measures studied in this paper, the measures proposed by Kent and O'Quigley (ρ²_W and its approximation ρ²_W,A) and by Schemper and Kaider (R²_SK) perform better with respect to our criteria. However, our investigations showed that ρ²_W is adversely affected by the distribution of the covariates and by the presence of influential observations. The results show that the other measures perform poorly, primarily because they are affected either by the degree of censoring or by the follow-up period.

  8. Evaluation of the accuracy of an offline seasonally-varying matrix transport model for simulating ideal age

    DOE PAGES

    Bardin, Ann; Primeau, Francois; Lindsay, Keith; Bradley, Andrew

    2016-07-21

    Newton-Krylov solvers for ocean tracers have the potential to greatly decrease the computational costs of spinning up deep-ocean tracers, which can take several thousand model years to reach equilibrium with surface processes. One version of the algorithm uses offline tracer transport matrices to simulate an annual cycle of tracer concentrations and applies Newton’s method to find concentrations that are periodic in time. Here we present the impact of time-averaging the transport matrices on the equilibrium values of an ideal-age tracer. We compared annually-averaged, monthly-averaged, and 5-day-averaged transport matrices to an online simulation using the ocean component of the Community Earth System Model (CESM) with a nominal horizontal resolution of 1° × 1° and 60 vertical levels. We found that increasing the time resolution of the offline transport model reduced a low age bias from 12% for the annually-averaged transport matrices, to 4% for the monthly-averaged transport matrices, and to less than 2% for the transport matrices constructed from 5-day averages. The largest differences were in areas with strong seasonal changes in the circulation, such as the Northern Indian Ocean. As a result, for many applications the relatively small bias obtained using the offline model makes the offline approach attractive because it uses significantly less computer resources and is simpler to set up and run.
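
    The periodic-state idea above can be sketched with a toy offline model: treat one model year as twelve monthly transport steps plus an ideal-age source, and apply Newton's method to find a tracer state that repeats year over year. The three-box column and all matrix entries below are illustrative assumptions, not CESM transport matrices; a production solver would also replace the dense finite-difference Jacobian with Krylov iterations.

```python
import numpy as np

# Toy 3-box ocean column (surface, mid, deep) with seasonally varying,
# tracer-conserving monthly exchange matrices (all numbers are assumed).
def monthly_matrix(m):
    k = 0.15 + 0.10 * np.cos(2 * np.pi * m / 12)  # assumed seasonal mixing strength
    return np.array([[1 - k,       k,     0],
                     [    k, 1 - 2*k,     k],
                     [    0,       k, 1 - k]])  # columns sum to 1: conserves tracer

months = [monthly_matrix(m) for m in range(12)]

def one_year(c):
    """Advance ideal age (in years) through one annual cycle."""
    for M in months:
        c = M @ c + 1.0 / 12.0  # everything ages by one month
        c[0] = 0.0              # surface boundary condition: age reset to zero
    return c

# Newton's method on F(c) = one_year(c) - c, i.e. find a state that is
# periodic in time (here with a small finite-difference Jacobian).
n = 3
c = np.zeros(n)
for _ in range(6):
    F = one_year(c) - c
    J = np.empty((n, n))
    eps = 1e-6
    for j in range(n):
        d = np.zeros(n)
        d[j] = eps
        J[:, j] = ((one_year(c + d) - (c + d)) - F) / eps
    c -= np.linalg.solve(J, F)

print(np.allclose(one_year(c), c))  # the state is now periodic in time
```

Because this toy annual map is affine, Newton converges essentially in one step; the point of the real algorithm is that it avoids time-stepping the model for thousands of years.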

  9. Finite-volume versus streaming-based lattice Boltzmann algorithm for fluid-dynamics simulations: A one-to-one accuracy and performance study.

    PubMed

    Shrestha, Kalyan; Mompean, Gilmar; Calzavarini, Enrico

    2016-02-01

    A finite-volume (FV) discretization method for the lattice Boltzmann (LB) equation, which combines high accuracy with limited computational cost, is presented. In order to assess the performance of the FV method, we carry out a systematic comparison, focused on accuracy and computational performance, with the standard streaming lattice Boltzmann algorithm. In particular, we aim at clarifying whether and under which conditions the proposed algorithm, and more generally any FV algorithm, can be taken as the method of choice in fluid-dynamics LB simulations. For this reason the comparative analysis is further extended to the case of realistic flows, in particular thermally driven flows in turbulent conditions. We report the successful simulation of a high-Rayleigh-number convective flow performed by a lattice Boltzmann FV-based algorithm with wall grid refinement.
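
    For reference, the "streaming" half of the standard (non-finite-volume) LB algorithm the paper compares against can be sketched in a few lines. The D2Q9 velocity set below is the standard one; the grid size and initial populations are arbitrary illustrations.

```python
import numpy as np

# Streaming step of a standard lattice Boltzmann scheme on a D2Q9 lattice
# with periodic boundaries: each population f_i is shifted one lattice
# site along its own discrete velocity c_i.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])  # D2Q9 velocity set

nx, ny = 16, 16
rng = np.random.default_rng(0)
f = rng.random((9, nx, ny))  # particle populations f_i(x, y)

def stream(f):
    return np.stack([np.roll(f[i], shift=tuple(c[i]), axis=(0, 1))
                     for i in range(9)])

f_new = stream(f)
print(np.isclose(f_new.sum(), f.sum()))  # streaming only moves mass around
```

The streaming step is exact (a pure index shift), which is precisely the property an FV discretization trades away in exchange for geometric flexibility such as wall grid refinement.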

  10. Simulations Build Efficacy: Empirical Results from a Four-Week Congressional Simulation

    ERIC Educational Resources Information Center

    Mariani, Mack; Glenn, Brian J.

    2014-01-01

    This article describes a four-week congressional committee simulation implemented in upper level courses on Congress and the Legislative process at two liberal arts colleges. We find that the students participating in the simulation possessed high levels of political knowledge and confidence in their political skills prior to the simulation. An…

  11. Direct drive: Simulations and results from the National Ignition Facility

    DOE PAGES

    Radha, P. B.; Hohenberger, M.; Edgell, D. H.; Marozas, J. A.; Marshall, F. J.; Michel, D. T.; Rosenberg, M. J.; Seka, W.; Shvydky, A.; Boehly, T. R.; et al

    2016-04-19

    Here, the direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well-modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.

  12. Direct drive: Simulations and results from the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Radha, P. B.; Hohenberger, M.; Edgell, D. H.; Marozas, J. A.; Marshall, F. J.; Michel, D. T.; Rosenberg, M. J.; Seka, W.; Shvydky, A.; Boehly, T. R.; Collins, T. J. B.; Campbell, E. M.; Craxton, R. S.; Delettrez, J. A.; Dixit, S. N.; Frenje, J. A.; Froula, D. H.; Goncharov, V. N.; Hu, S. X.; Knauer, J. P.; McCrory, R. L.; McKenty, P. W.; Meyerhofer, D. D.; Moody, J.; Myatt, J. F.; Petrasso, R. D.; Regan, S. P.; Sangster, T. C.; Sio, H.; Skupsky, S.; Zylstra, A.

    2016-05-01

    Direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well-modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.

  13. ASTRA Simulation Results of RF Propagation in Plasma Medium

    NASA Astrophysics Data System (ADS)

    Goodwin, Joshua; Oneal, Brandon; Smith, Aaron; Sen, Sudip

    2015-04-01

    Transport barriers in toroidal plasmas play a major role in achieving the required confinement for reactor grade plasmas. They are formed by different mechanisms, but most of them are associated with a zonal flow which suppresses turbulence. A different way of producing a barrier has been recently proposed which uses the ponderomotive force of RF waves to reduce the fluctuations due to drift waves, but without inducing any plasma rotation. Using this mechanism, a transport coefficient is derived which is a function of RF power, and it is incorporated in transport simulations performed for the Brazilian tokamak TCABR, as a possible test bed for the theoretical model. The formation of a transport barrier is demonstrated at the position of the RF wave resonant absorption surface, having the typical pedestal-like temperature profile.
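
    The mechanism described above can be illustrated with a toy steady-state calculation (not the paper's transport code): with a radially uniform heat source, locally reducing the transport coefficient χ in a narrow layer, standing in for the RF resonance surface, produces exactly the pedestal-like profile the abstract mentions. All numbers below are invented for illustration, not TCABR parameters.

```python
import numpy as np

nx = 401
x = np.linspace(0.0, 1.0, nx)          # normalized minor radius
chi = np.full(nx, 1.0)                 # background transport coefficient
barrier = (x > 0.75) & (x < 0.85)
chi[barrier] = 0.05                    # suppression by the RF ponderomotive force

# Steady state of  d/dx( chi dT/dx ) + S = 0  with S = 1 and T(1) = 0:
# the heat flux is  chi dT/dx = -x,  so  dT/dx = -x / chi.
grad = -x / chi
dx = x[1] - x[0]
T = np.concatenate(([0.0], np.cumsum(0.5 * (grad[1:] + grad[:-1]) * dx)))
T -= T[-1]                             # enforce the edge condition T(1) = 0

# The temperature drop across the barrier dwarfs that of an equal-width
# layer outside it: a pedestal-like profile forms at the resonance layer.
drop_barrier = T[np.searchsorted(x, 0.75)] - T[np.searchsorted(x, 0.85)]
drop_outside = T[np.searchsorted(x, 0.55)] - T[np.searchsorted(x, 0.65)]
print(drop_barrier > 5 * drop_outside)
```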

  14. Implementation and Simulation Results using Autonomous Aerobraking Development Software

    NASA Technical Reports Server (NTRS)

    Maddock, Robert W.; DwyerCianciolo, Alicia M.; Bowes, Angela; Prince, Jill L. H.; Powell, Richard W.

    2011-01-01

    An Autonomous Aerobraking software system is currently under development with support from the NASA Engineering and Safety Center (NESC) that would move typically ground-based operations functions onboard an aerobraking spacecraft, reducing mission risk and mission cost. The suite of software that will enable autonomous aerobraking is the Autonomous Aerobraking Development Software (AADS), which consists of an ephemeris model, an onboard atmosphere estimator, temperature and loads prediction, and a maneuver calculation. The software calculates the maneuver time, magnitude, and direction commands to maintain the spacecraft periapsis parameters within design structural load and/or thermal constraints. The AADS is currently being tested in simulations at Mars, with plans to also evaluate feasibility and performance at Venus and Titan.

  15. Frontotemporal oxyhemoglobin dynamics predict performance accuracy of dance simulation gameplay: temporal characteristics of top-down and bottom-up cortical activities.

    PubMed

    Ono, Yumie; Nomoto, Yasunori; Tanaka, Shohei; Sato, Keisuke; Shimada, Sotaro; Tachibana, Atsumichi; Bronner, Shaw; Noah, J Adam

    2014-01-15

    We utilized the high temporal resolution of functional near-infrared spectroscopy to explore how sensory inputs (visual and rhythmic auditory cues) are processed in the cortical areas of multimodal integration to achieve coordinated motor output during unrestricted dance simulation gameplay. Using an open source clone of the dance simulation video game, Dance Dance Revolution, two cortical regions of interest were selected for study, the middle temporal gyrus (MTG) and the frontopolar cortex (FPC). We hypothesized that activity in the FPC would indicate top-down regulatory mechanisms of motor behavior, while that in the MTG would be sustained due to bottom-up integration of visual and auditory cues throughout the task. We also hypothesized that a correlation would exist between behavioral performance and the temporal patterns of the hemodynamic responses in these regions of interest. Results indicated that greater temporal accuracy of dance steps positively correlated with persistent activation of the MTG and with cumulative suppression of the FPC. When auditory cues were eliminated from the simulation, modifications in cortical responses were found depending on the gameplay performance. In the MTG, high-performance players showed an increase but low-performance players displayed a decrease in the cumulative amount of the oxygenated hemoglobin response in the no-music condition compared to that in the music condition. In the FPC, high-performance players showed relatively small variance in the activity regardless of the presence of auditory cues, while low-performance players showed larger differences in the activity between the no-music and music conditions. These results suggest that the MTG plays an important role in the successful integration of visual and rhythmic cues and the FPC may act as a top-down controller to compensate for insufficient integrative ability of visual and rhythmic cues in the MTG. 
The relative relationships between these cortical areas indicated

  16. Stellar populations of stellar halos: Results from the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Cook, B. A.; Conroy, C.; Pillepich, A.; Hernquist, L.

    2016-08-01

    The influence of both major and minor mergers is expected to significantly affect gradients of stellar ages and metallicities in the outskirts of galaxies. Measurements of observed gradients are beginning to reach large radii in galaxies, but a theoretical framework for connecting the findings to a picture of galactic build-up is still in its infancy. We analyze stellar populations of a statistically representative sample of quiescent galaxies over a wide mass range from the Illustris simulation. We measure metallicity and age profiles in the stellar halos of quiescent Illustris galaxies ranging in stellar mass from 10¹⁰ to 10¹² M⊙, accounting for observational projection and luminosity-weighting effects. We find wide variance in stellar population gradients between galaxies of similar mass, with typical gradients agreeing with observed galaxies. We show that, at fixed mass, the fraction of stars born in-situ within galaxies is correlated with the metallicity gradient in the halo, confirming that stellar halos contain unique information about the build-up and merger histories of galaxies.

  17. SLUDGE BATCH 4 SIMULANT FLOWSHEET STUDIES: PHASE II RESULTS

    SciTech Connect

    Stone, M.; Best, D.

    2006-09-12

    The Defense Waste Processing Facility (DWPF) will transition from Sludge Batch 3 (SB3) processing to Sludge Batch 4 (SB4) processing in early fiscal year 2007. Tests were conducted using non-radioactive simulants of the expected SB4 composition to determine the impact of varying the acid stoichiometry during the Sludge Receipt and Adjustment Tank (SRAT) process. The work was conducted to meet the Technical Task Request (TTR) HLW/DWPF/TTR-2004-0031 and followed the guidelines of a Task Technical and Quality Assurance Plan (TT&QAP). The flowsheet studies are performed to evaluate the potential chemical processing issues, hydrogen generation rates, and process slurry rheological properties as a function of acid stoichiometry. Initial SB4 flowsheet studies were conducted to guide decisions during the sludge batch preparation process. These studies were conducted with the estimated SB4 composition at the time of the study. The composition has changed slightly since these studies were completed due to changes in the sludges blended to prepare SB4 and the estimated SB3 heel mass. The following TTR requirements were addressed in this testing: (1) Hydrogen and nitrous oxide generation rates as a function of acid stoichiometry; (2) Acid quantities and processing times required for mercury removal; (3) Acid quantities and processing times required for nitrite destruction; and (4) Impact of SB4 composition (in particular, oxalate, manganese, nickel, mercury, and aluminum) on DWPF processing (i.e. acid addition strategy, foaming, hydrogen generation, REDOX control, rheology, etc.).

  18. Results from modeling and simulation of chemical downstream etch systems

    SciTech Connect

    Meeks, E.; Vosen, S.R.; Shon, J.W.; Larson, R.S.; Fox, C.A.; Buchenauer

    1996-05-01

    This report summarizes modeling work performed at Sandia in support of Chemical Downstream Etch (CDE) benchmark and tool development programs under a Cooperative Research and Development Agreement (CRADA) with SEMATECH. The Chemical Downstream Etch (CDE) Modeling Project supports SEMATECH Joint Development Projects (JDPs) with Matrix Integrated Systems, Applied Materials, and Astex Corporation in the development of new CDE reactors for wafer cleaning and stripping processes. These dry-etch reactors replace wet-etch steps in microelectronics fabrication, enabling compatibility with other process steps and reducing the use of hazardous chemicals. Models were developed at Sandia to simulate the gas flow, chemistry and transport in CDE reactors. These models address the essential components of the CDE system: a microwave source, a transport tube, a showerhead/gas inlet, and a downstream etch chamber. The models have been used in tandem to determine the evolution of reactive species throughout the system, and to make recommendations for process and tool optimization. A significant part of this task has been in the assembly of a reasonable set of chemical rate constants and species data necessary for successful use of the models. Often the kinetic parameters were uncertain or unknown. For this reason, a significant effort was placed on model validation to obtain industry confidence in the model predictions. Data for model validation were obtained from the Sandia Molecular Beam Mass Spectrometry (MBMS) experiments, from the literature, from the CDE Benchmark Project (also part of the Sandia/SEMATECH CRADA), and from the JDP partners. The validated models were used to evaluate process behavior as a function of microwave-source operating parameters, transport-tube geometry, system pressure, and downstream chamber geometry. In addition, quantitative correlations were developed between CDE tool performance and operation set points.

  19. Relative efficiency and accuracy of two Navier-Stokes codes for simulating attached transonic flow over wings

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Wornom, Stephen F.

    1991-01-01

    Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.

  20. Relative efficiency and accuracy of two Navier-Stokes codes for simulating attached transonic flow over wings

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Wornom, Stephen F.

    1990-01-01

    In the present study, two codes which solve the three-dimensional Thin-Layer Navier-Stokes (TLNS) equations are used to compute the steady-state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.

  1. Research on the classification result and accuracy of building windows in high resolution satellite images: take the typical rural buildings in Guangxi, China, as an example

    NASA Astrophysics Data System (ADS)

    Li, Baishou; Gao, Yujiu

    2015-12-01

    The information extracted from high spatial resolution remote sensing images has become one of the important data sources for updating large-scale GIS spatial databases. Because of the large volume of regional high spatial resolution satellite image data, monitoring building information with high-resolution remote sensing, extracting small-scale building information, and analyzing its quality have become important preconditions for applying high-resolution satellite image information. In this paper, a clustering segmentation classification evaluation method for high resolution satellite images of typical rural buildings is proposed based on the traditional KMeans clustering algorithm. The factors of separability and building density were used to describe the image classification characteristics of the clustering window. The sensitivity of the factors that influence the clustering result was studied from the perspective of the separability between the image's target and background spectra. This study showed that the number of samples is an important factor influencing clustering accuracy and performance; the pixel ratio of the objects in images and the separation factor can be used to determine the specific impact of cluster-window subsets on the clustering accuracy; and the count of window target pixels (Nw) does not alone affect clustering accuracy. The result can provide an effective reference for the quality assessment of the segmentation and classification of high spatial resolution remote sensing images.
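
    The clustering-and-separability idea can be sketched with a minimal k-means on pixel intensities (an illustrative stand-in, not the authors' exact algorithm): bright window pixels separate from darker building fabric, and a separability factor compares the between-cluster distance to the within-cluster spread.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic "image": dark building pixels near 0.2, bright windows near 0.8
pixels = np.concatenate([rng.normal(0.2, 0.05, 900),
                         rng.normal(0.8, 0.05, 100)])

def kmeans_1d(x, k, iters=50):
    centers = np.linspace(x.min(), x.max(), k)  # simple deterministic init
    for _ in range(iters):
        # assign each pixel to its nearest center, then recompute centers
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers, labels

centers, labels = kmeans_1d(pixels, k=2)

# a separability factor in the spirit of the paper: between-cluster distance
# relative to within-cluster spread (large = well-separated clusters)
sep = abs(centers[1] - centers[0]) / (pixels[labels == 0].std()
                                      + pixels[labels == 1].std())
print(centers.round(1))  # ≈ [0.2, 0.8]
```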

  2. Diamond-NICAM-SPRINTARS: downscaling and simulation results

    NASA Astrophysics Data System (ADS)

    Uchida, J.

    2012-12-01

    The "Research Program on Climate Change Adaptation" (RECCA) initiative investigates how predicted large-scale climate change may affect local weather, examines possible atmospheric hazards that cities may encounter due to such climate change, and thereby guides policy makers in implementing new environmental measures. As part of RECCA, the "Development of Seamless Chemical AssimiLation System and its Application for Atmospheric Environmental Materials" (SALSA) project is funded by the Japanese Ministry of Education, Culture, Sports, Science and Technology and is focused on creating a regional (local) scale assimilation system that can accurately recreate and predict the transport of carbon dioxide and other air pollutants. In this study, a regional version of the next-generation global cloud-resolving model NICAM (Non-hydrostatic ICosahedral Atmospheric Model) (Tomita and Satoh, 2004) is run together with the transport model SPRINTARS (Spectral Radiation Transport Model for Aerosol Species) (Takemura et al, 2000) and the chemical transport model CHASER (Sudo et al, 2002) to simulate aerosols across urban cities (over the Kanto region including metropolitan Tokyo). The presentation will mainly be on the "Diamond-NICAM" (Figure 1), a regional climate model version of the global climate model NICAM, and its dynamical downscaling methodologies. Originally, a global NICAM grid can be described as twenty identical equilateral triangular panels covering the entire globe, with grid points at the corners of those panels; to increase the resolution (called the "global-level" in NICAM), additional points are added at the middle of each pair of existing adjacent points, so the number of panels increases fourfold with each increment of one global-level. A Diamond-NICAM, on the other hand, uses only two of those initial triangular panels and thus covers only part of the globe. 
In addition, NICAM uses an adaptive mesh scheme and its grid size can gradually decrease, as the grid
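
    The refinement arithmetic described above is easy to make concrete: the global icosahedral grid at "global-level" g has 20 × 4^g triangular panels, and Euler's formula for a triangulation of the sphere (V = F/2 + 2) gives the grid-point count. The function names below are illustrative, not NICAM code.

```python
# Icosahedral-grid counts as a function of the NICAM "global-level".
def triangles(glevel):
    return 20 * 4 ** glevel

def grid_points(glevel):
    # Euler's formula for a sphere triangulation: vertices = faces/2 + 2
    return triangles(glevel) // 2 + 2

def diamond_triangles(glevel):
    # a Diamond-NICAM domain keeps only 2 of the 20 initial panels
    return 2 * 4 ** glevel

for g in range(6):
    print(g, triangles(g), grid_points(g))
# glevel 0 is the bare icosahedron (20 triangles, 12 vertices);
# glevel 5 already has 20480 triangles and 10242 grid points.
```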

  3. Preliminary Benchmarking and MCNP Simulation Results for Homeland Security

    SciTech Connect

    Robert Hayes

    2008-03-01

    The purpose of this article is to create Monte Carlo N-Particle (MCNP) input stacks for benchmarked measurements sufficient for future perturbation studies and analysis. The approach was to utilize historical experimental measurements to recreate the empirical spectral results in MCNP, both qualitatively and quantitatively. Results demonstrate that perturbation analysis of benchmarked MCNP spectra can be used to obtain a better understanding of field measurement results which may be of national interest. If one or more spectral radiation measurements are made in the field and deemed of national interest, the potential source distribution, naturally occurring radioactive material shielding, and interstitial materials can only be estimated in many circumstances. The effects from these factors on the resultant spectral radiation measurements can be very confusing. If benchmarks exist which are sufficiently similar to the suspected configuration, these benchmarks can then be compared to the suspect measurements. Having these benchmarks with validated MCNP input stacks can substantially improve the predictive capability of experts supporting these efforts.

  4. Analysis of Numerical Simulation Results of LIPS-200 Lifetime Experiments

    NASA Astrophysics Data System (ADS)

    Chen, Juanjuan; Zhang, Tianping; Geng, Hai; Jia, Yanhui; Meng, Wei; Wu, Xianming; Sun, Anbang

    2016-06-01

    Accelerator grid structural and electron backstreaming failures are the most important factors affecting the ion thruster's lifetime. During the thruster's operation, Charge Exchange Xenon (CEX) ions are generated from collisions between plasma and neutral atoms. These CEX ions frequently strike the accelerator grid's barrel and wall, which causes the failures of the grid system. In order to validate whether the 20 cm Lanzhou Ion Propulsion System (LIPS-200) satisfies the application requirement of China's communication satellite platform for North-South Station Keeping (NSSK), this study analyzed the measured depth of the pit/groove on the accelerator grid's wall and the variation of the aperture diameter, and estimated the operating lifetime of the ion thruster. Unlike previous methods, in this paper the experimental results after 5500 h of accumulated operation of the LIPS-200 ion thruster are presented first. Then, based on these results, theoretical analysis and numerical calculations were performed to predict the on-orbit lifetime of LIPS-200, allowing a more accurate calculation of the reliability and analysis of the failure modes of the ion thruster. The results indicated a predicted lifetime of about 13218.1 h for LIPS-200, which satisfies the required lifetime of 11000 h.

  5. Accuracy of the electron transport in mcnp5 and its suitability for ionization chamber response simulations: A comparison with the egsnrc and penelope codes

    SciTech Connect

    Koivunoro, Hanna; Siiskonen, Teemu; Kotiluoto, Petri; Auterinen, Iiro; Hippelaeinen, Eero; Savolainen, Sauli

    2012-03-15

    Purpose: In this work, the accuracy of the mcnp5 code in electron transport calculations and its suitability for ionization chamber (IC) response simulations in photon beams are studied in comparison to the egsnrc and penelope codes. Methods: The electron transport is studied by comparing the depth dose distributions in a water phantom subdivided into thin layers, using incident energies (0.05, 0.1, 1, and 10 MeV) for broad parallel electron beams. The IC response simulations are studied in a water phantom with three dosimetric gas materials (air, argon, and methane-based tissue-equivalent gas) for photon beams (⁶⁰Co source, 6 MV medical linear accelerator, and mono-energetic 2 MeV photon source). Two optional electron transport models of mcnp5 are evaluated: the ITS-based electron energy indexing (mcnp5_ITS) and the new detailed electron energy-loss straggling logic (mcnp5_new). The dependency on the electron substep length (ESTEP parameter) in mcnp5 is investigated as well. Results: For the electron beam studies, large discrepancies (>3%) are observed between the mcnp5 dose distributions and the reference codes at 1 MeV and lower energies. The discrepancy is especially notable for the 0.1 and 0.05 MeV electron beams. The boundary crossing artifacts, which are well known for mcnp5_ITS, are observed for mcnp5_new only at the 0.1 and 0.05 MeV beam energies. If the excessive boundary crossing is eliminated by using single scoring cells, mcnp5_ITS provides dose distributions that agree better with the reference codes than mcnp5_new. The mcnp5 dose estimates for the gas cavity agree within 1% with the reference codes if mcnp5_ITS is applied or if the electron substep length is set adequately for the gas in the cavity using mcnp5_new. The mcnp5_new results are found to be highly dependent on the chosen electron substep length and might lead to up to 15% underestimation of the absorbed dose. Conclusions: Since the mcnp5 electron

  6. Design and analysis of ALE schemes with provable second-order time-accuracy for inviscid and viscous flow simulations

    NASA Astrophysics Data System (ADS)

    Geuzaine, Philippe; Grandmont, Céline; Farhat, Charbel

    2003-10-01

    We consider the solution of inviscid as well as viscous unsteady flow problems with moving boundaries by the arbitrary Lagrangian-Eulerian (ALE) method. We present two computational approaches for achieving formal second-order time-accuracy on moving grids. The first approach is based on flux time-averaging, and the second one on mesh configuration time-averaging. In both cases, we prove that formally second-order time-accurate ALE schemes can be designed. We illustrate our theoretical findings and highlight their impact on practice with the solution of inviscid as well as viscous, unsteady, nonlinear flow problems associated with the AGARD Wing 445.6 and a complete F-16 configuration.

  7. Home energy rating system building energy simulation test (HERS BESTEST). Volume 2, Tier 1 and Tier 2 tests reference results

    SciTech Connect

    Judkoff, R.; Neymark, J.

    1995-11-01

    The Home Energy Rating System (HERS) Building Energy Simulation Test (BESTEST) is a method for evaluating the credibility of software used by HERS to model energy use in buildings. The method provides the technical foundation for ''certification of the technical accuracy of building energy analysis tools used to determine energy efficiency ratings,'' as called for in the Energy Policy Act of 1992 (Title I, Subtitle A, Section 102, Title II, Part 6, Section 271). Certification is accomplished with a uniform set of test cases that Facilitate the comparison of a software tool with several of the best public-domain, state-of-the-art building energy simulation programs available in the United States. The HERS BESTEST work is divided into two volumes. Volume 1 contains the test case specifications and is a user's manual for anyone wishing to test a computer program. Volume 2 contains the reference results and suggestions for accrediting agencies on how to use and interpret the results.

  8. Accuracy for detection of simulated lesions: comparison of fluid-attenuated inversion-recovery, proton density-weighted, and T2-weighted synthetic brain MR imaging

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.; Itoh, R.; Melhem, E. R.

    2001-01-01

    OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density-weighted, and T2-weighted) and of lesion location on sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1-6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher order interactions were added. CONCLUSION: We developed software with which we can generate brain images of a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.

  9. Preliminary Benchmarking Efforts and MCNP Simulation Results for Homeland Security

    SciTech Connect

    Robert Hayes

    2008-04-18

    It is shown in this work that basic measurements made from well-defined source-detector configurations can be readily converted into benchmark-quality results by which Monte Carlo N-Particle (MCNP) input stacks can be validated. Specifically, a recent measurement made in support of national security at the Nevada Test Site (NTS) is described with sufficient detail to be submitted to the American Nuclear Society's (ANS) Joint Benchmark Committee (JBC) for consideration as a radiation measurement benchmark. From this very basic measurement, MCNP input stacks are generated and validated in both predicted signal amplitude and spectral shape. Not modeled at this time are perturbations from the more recent pulse height light (PHL) tally feature, although the spectral deviations that are seen can be largely attributed to not including this small correction. The value of this work is as a proof-of-concept demonstration that well-documented historical testing can be converted into formal radiation measurement benchmarks. This effort would support virtual testing of algorithms and new detector configurations.

  10. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. Because the accuracy of a whole data set may be low while the accuracy of a useful part of it is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a measure nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the relative accuracy of the results. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
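    The precision/recall notion of relative accuracy described above can be sketched in a few lines: compare the result a query returns on a partly erroneous table with the result it would return on a verified copy. All table contents and names below are invented for illustration; this is not the paper's framework.

    ```python
    def relative_accuracy(query_result, reference_result):
        """Precision and recall of a query result, measured against the
        result the same query yields on fully verified (accurate) data."""
        returned, truth = set(query_result), set(reference_result)
        tp = len(returned & truth)
        precision = tp / len(returned) if returned else 1.0
        recall = tp / len(truth) if truth else 1.0
        return precision, recall

    # A table with one erroneous value vs. its verified counterpart.
    dirty = [("alice", 31), ("bob", 45), ("carol", 29)]    # carol's age is wrong
    clean = [("alice", 31), ("bob", 45), ("carol", 33), ("dave", 40)]

    # The same selection query run on both versions.
    result_on_dirty = [row for row in dirty if row[1] > 30]
    result_on_clean = [row for row in clean if row[1] > 30]

    precision, recall = relative_accuracy(result_on_dirty, result_on_clean)
    # Every returned tuple is correct (precision 1.0), but the wrong value
    # and the missing row hide half of the true answer (recall 0.5).
    ```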

  11. Vibronic coupling simulations for linear and nonlinear optical processes: Simulation results

    NASA Astrophysics Data System (ADS)

    Silverstein, Daniel W.; Jensen, Lasse

    2012-02-01

    A vibronic coupling model based on a time-dependent wavepacket approach is applied to simulate linear optical processes, such as one-photon absorbance and resonance Raman scattering, and nonlinear optical processes, such as two-photon absorbance and resonance hyper-Raman scattering, for a series of small molecules. Simulations employing long-range corrected density functional theory and coupled cluster methods are compared with each other and with available experimental data. Although many of the small molecules are prone to anharmonicity in their potential energy surfaces, the harmonic approach performs adequately. Non-Condon effects are discussed in detail for the molecules presented in this work. Linear and nonlinear Raman scattering simulations allow for the quantification of interference between the Franck-Condon and Herzberg-Teller terms for different molecules.

  12. Comparison of the effect of web-based, simulation-based, and conventional training on the accuracy of visual estimation of postpartum hemorrhage volume on midwifery students: A randomized clinical trial

    PubMed Central

    Kordi, Masoumeh; Fakari, Farzaneh Rashidi; Mazloum, Seyed Reza; Khadivzadeh, Talaat; Akhlaghi, Farideh; Tara, Mahmoud

    2016-01-01

    Introduction: Delay in the diagnosis of bleeding can be due to underestimation of the actual amount of blood loss during delivery. Therefore, this research aimed to compare the efficacy of web-based, simulation-based, and conventional training on the accuracy of visual estimation of postpartum hemorrhage volume. Materials and Methods: This three-group randomized clinical trial was performed on 105 midwifery students in Mashhad School of Nursing and Midwifery in 2013. The samples were selected by the convenience method and were randomly divided into three groups: web-based, simulation-based, and conventional training. All three groups completed an eight-station practical test before and 1 week after the training course; the students of the web-based group were trained online for 1 week, the students of the simulation-based group were trained in the Clinical Skills Centre for 4 h, and the students of the conventional group were trained through a 4 h presentation by the researchers. The data gathering tools were a demographic questionnaire designed by the researchers and an objective structured clinical examination. Data were analyzed by software version 11.5. Results: The accuracy of visual estimation of postpartum hemorrhage volume after training increased significantly in the three groups at all stations (1, 2, 4, 5, 6, and 7 (P = 0.001), 8 (P = 0.027)) except station 3 (blood loss of 20 cc, P = 0.095), but the mean score of blood loss estimation after training did not differ significantly between the three groups (P = 0.95). Conclusion: Training increased the accuracy of estimation of postpartum hemorrhage, but no significant difference was found among the three training groups. Web-based training can therefore be used as a substitute for, or supplement to, the two more common simulation-based and conventional methods. PMID:27500175

  13. Depletion potentials in highly size-asymmetric binary hard-sphere mixtures: Comparison of simulation results with theory

    NASA Astrophysics Data System (ADS)

    Ashton, Douglas J.; Wilding, Nigel B.; Roth, Roland; Evans, Robert

    2011-12-01

    We report a detailed study, using state-of-the-art simulation and theoretical methods, of the effective (depletion) potential between a pair of big hard spheres immersed in a reservoir of much smaller hard spheres, the size disparity being measured by the ratio of diameters q≡σs/σb. Small particles are treated grand canonically, their influence being parameterized in terms of their packing fraction in the reservoir ηsr. Two Monte Carlo simulation schemes—the geometrical cluster algorithm, and staged particle insertion—are deployed to obtain accurate depletion potentials for a number of combinations of q⩽0.1 and ηsr. After applying corrections for simulation finite-size effects, the depletion potentials are compared with the prediction of new density functional theory (DFT) calculations based on the insertion trick using the Rosenfeld functional and several subsequent modifications. While agreement between the DFT and simulation is generally good, significant discrepancies are evident at the largest reservoir packing fraction accessible to our simulation methods, namely, ηsr=0.35. These discrepancies are, however, small compared to those between simulation and the much poorer predictions of the Derjaguin approximation at this ηsr. The recently proposed morphometric approximation performs better than Derjaguin but is somewhat poorer than DFT for the size ratios and small-sphere packing fractions that we consider. The effective potentials from simulation, DFT, and the morphometric approximation were used to compute the second virial coefficient B2 as a function of ηsr. Comparison of the results enables an assessment of the extent to which DFT can be expected to correctly predict the propensity toward fluid-fluid phase separation in additive binary hard-sphere mixtures with q⩽0.1. In all, the new simulation results provide a fully quantitative benchmark for assessing the relative accuracy of theoretical approaches for calculating depletion potentials.
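    For reference, the second virial coefficient follows from an effective pair potential U(r) by a one-dimensional integral, B2 = -2*pi * Integral (exp(-U(r)/kT) - 1) r^2 dr. The sketch below evaluates this integral numerically for an invented hard-core plus square-well potential, not for the depletion potentials of the paper.

    ```python
    import math

    def b2(u, sigma, r_max, kT=1.0, n=200_000):
        """B2 = -2*pi * integral_0^inf (exp(-u(r)/kT) - 1) r^2 dr.
        The hard core (r < sigma) is added analytically as the excluded
        volume 2*pi*sigma**3/3; the tail is integrated by trapezoids."""
        core = 2.0 * math.pi * sigma ** 3 / 3.0
        h = (r_max - sigma) / n
        total = 0.0
        for i in range(n + 1):
            r = sigma + i * h
            w = 0.5 if i in (0, n) else 1.0
            total += w * (math.exp(-u(r) / kT) - 1.0) * r * r
        return core - 2.0 * math.pi * total * h

    # Invented model: hard sphere of diameter 1 with a square well of
    # depth 2 kT and range 0.1, loosely mimicking a short-range attraction.
    well = lambda r: -2.0 if r < 1.1 else 0.0
    b2_val = b2(well, sigma=1.0, r_max=2.0)   # negative: net attraction
    ```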

  14. Towards an assessment of the accuracy of density functional theory for first principles simulations of water. II

    NASA Astrophysics Data System (ADS)

    Schwegler, Eric; Grossman, Jeffrey C.; Gygi, François; Galli, Giulia

    2004-09-01

    A series of 20 ps ab initio molecular dynamics simulations of water at ambient density and temperatures ranging from 300 to 450 K are presented. Car-Parrinello (CP) and Born-Oppenheimer (BO) molecular dynamics techniques are compared for systems containing 54 and 64 water molecules. At 300 K, an excellent agreement is found between radial distribution functions (RDFs) obtained with BO and CP dynamics, provided an appropriately small value of the fictitious mass parameter is used in the CP simulation. However, we find that the diffusion coefficients computed from CP dynamics are approximately two times larger than those obtained with BO simulations for T>400 K, where statistically meaningful comparisons can be made. Overall, both BO and CP dynamics at 300 K yield overstructured RDFs and slow diffusion as compared to experiment. In order to understand these discrepancies, the effect of proton quantum motion is investigated with the use of empirical interaction potentials. We find that proton quantum effects may have a larger impact than previously thought on structure and diffusion of the liquid.
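    Diffusion coefficients like those compared above are conventionally extracted from trajectories through the Einstein relation, MSD(t) -> 6 D t in three dimensions. The sketch below applies that relation to a synthetic lattice random walk with a known answer (D = 0.005 in walk units); it is illustrative only, not water MD data.

    ```python
    import random

    random.seed(0)
    dt, n_steps, n_part, step = 1.0, 1000, 100, 0.1
    pos = [[0.0, 0.0, 0.0] for _ in range(n_part)]
    msd = []
    for t in range(1, n_steps + 1):
        for p in pos:                    # unbiased +-step jump on each axis
            for k in range(3):
                p[k] += random.choice((-step, step))
        msd.append(sum(x * x + y * y + z * z for x, y, z in pos) / n_part)

    # Least-squares slope of MSD(t) through the origin; Einstein: D = slope / 6.
    times = [t * dt for t in range(1, n_steps + 1)]
    slope = sum(t * m for t, m in zip(times, msd)) / sum(t * t for t in times)
    D = slope / 6.0                      # expected ~ 3 * step**2 / (6 * dt) = 0.005
    ```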

  15. Micro-scale finite element modeling of ultrasound propagation in aluminum trabecular bone-mimicking phantoms: A comparison between numerical simulation and experimental results.

    PubMed

    Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S

    2016-05-01

    The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples were utilized as trabecular bone-mimicking phantoms for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated for implementation in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6%, respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s, respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures. PMID:26894840
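    For context, the two quantities compared in this study are conventionally obtained from through-transmission data by the substitution (insertion) method, as sketched below; function names and the numbers in the example calls are invented, not taken from the paper.

    ```python
    import math

    def attenuation_db_per_cm(a_ref, a_sample, thickness_cm):
        """Attenuation from the spectral amplitude ratio of the water-only
        reference path and the sample path, normalized by sample thickness."""
        return (20.0 / thickness_cm) * math.log10(a_ref / a_sample)

    def tof_velocity(thickness_cm, t_sample_us, t_ref_us, c_water=1480.0):
        """Time-of-flight velocity by substitution: 1/v = 1/c_w - (t_ref - t_s)/d."""
        d = thickness_cm * 1e-2                    # m
        delta_t = (t_ref_us - t_sample_us) * 1e-6  # s (sample is faster than water)
        return 1.0 / (1.0 / c_water - delta_t / d)

    alpha = attenuation_db_per_cm(1.0, 0.5, thickness_cm=1.0)  # halved amplitude
    v = tof_velocity(1.0, t_sample_us=3.3333, t_ref_us=6.7568)
    ```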

  17. Evaluation of the accuracy of an offline seasonally-varying matrix transport model for simulating ideal age

    NASA Astrophysics Data System (ADS)

    Bardin, Ann; Primeau, François; Lindsay, Keith; Bradley, Andrew

    2016-09-01

    Newton-Krylov solvers for ocean tracers have the potential to greatly decrease the computational costs of spinning up deep-ocean tracers, which can take several thousand model years to reach equilibrium with surface processes. One version of the algorithm uses offline tracer transport matrices to simulate an annual cycle of tracer concentrations and applies Newton's method to find concentrations that are periodic in time. Here we present the impact of time-averaging the transport matrices on the equilibrium values of an ideal-age tracer. We compared annually-averaged, monthly-averaged, and 5-day-averaged transport matrices to an online simulation using the ocean component of the Community Earth System Model (CESM) with a nominal horizontal resolution of 1° × 1° and 60 vertical levels. We found that increasing the time resolution of the offline transport model reduced a low age bias from 12% for the annually-averaged transport matrices, to 4% for the monthly-averaged transport matrices, and to less than 2% for the transport matrices constructed from 5-day averages. The largest differences were in areas with strong seasonal changes in the circulation, such as the Northern Indian Ocean. For many applications the relatively small bias obtained using the offline model makes the offline approach attractive because it uses significantly less computer resources and is simpler to set up and run.
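    The ideal-age equilibrium such solvers target can be illustrated with a toy three-box water column (entirely invented numbers, not CESM output): age obeys da/dt = mixing + 1 yr/yr with age pinned to zero at the surface, and the equilibrium is exactly the fixed point that a Newton-Krylov solver would find directly instead of time-stepping for thousands of model years, as done here.

    ```python
    # Toy 3-box column (surface, mid, deep); exchange rates in 1/yr.
    k01, k12 = 1.0, 0.05
    dt = 0.01                       # yr
    age = [0.0, 0.0, 0.0]
    for _ in range(500_000):        # 5000 model years of forward time-stepping
        a0, a1, a2 = age
        age[0] = 0.0                                            # surface reset
        age[1] = a1 + dt * (k01 * (a0 - a1) + k12 * (a2 - a1) + 1.0)
        age[2] = a2 + dt * (k12 * (a1 - a2) + 1.0)
    # Equilibrium: age[1] -> 2 yr, age[2] -> 2 + 1/k12 = 22 yr.
    ```

    The slow box equilibrates on a 1/k12 = 20 yr timescale, which is why explicit spin-up is expensive and a direct root-finding approach pays off.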

  18. Evaluation of the efficiency and accuracy of new methods for atmospheric opacity and radiative transfer calculations in planetary general circulation model simulations

    NASA Astrophysics Data System (ADS)

    Zube, Nicholas Gerard; Zhang, Xi; Natraj, Vijay

    2016-10-01

    General circulation models often incorporate simple approximations of heating between vertically inhomogeneous layers rather than more accurate but computationally expensive radiative transfer (RT) methods. With the goal of developing a GCM package that can model both solar system bodies and exoplanets, it is vital to examine up-to-date RT models to optimize speed and accuracy for heat transfer calculations. Here, we examine a variety of interchangeable radiative transfer models in conjunction with MITGCM (Hill and Marshall, 1995). First, for atmospheric opacity calculations, we test gray approximation, line-by-line, and correlated-k methods. In combination with these, we also test RT routines using 2-stream DISORT (discrete ordinates RT), N-stream DISORT (Stamnes et al., 1988), and optimized 2-stream (Spurr and Natraj, 2011). Initial tests are run using Jupiter as an example case. The results can be compared in nine possible configurations for running a complete RT routine within a GCM. Each individual combination of opacity and RT methods is contrasted with the "ground truth" calculation provided by the line-by-line opacity and N-stream DISORT, in terms of computation speed and accuracy of the approximation methods. We also examine the effects on accuracy when performing these calculations at different time step frequencies within MITGCM. Ultimately, we will catalog and present the ideal RT routines that can replace commonly used approximations within a GCM for a significant increase in calculation accuracy, and speed comparable to the dynamical time steps of MITGCM. Future work will involve examining whether calculations in the spatial domain can also be reduced by smearing grid points into larger areas, and what effects this will have on overall accuracy.

  19. Impact of Calibrated Land Surface Model Parameters on the Accuracy and Uncertainty of Land-Atmosphere Coupling in WRF Simulations

    NASA Technical Reports Server (NTRS)

    Santanello, Joseph A., Jr.; Kumar, Sujay V.; Peters-Lidard, Christa D.; Harrison, Ken; Zhou, Shujia

    2012-01-01

    Land-atmosphere (L-A) interactions play a critical role in determining the diurnal evolution of both planetary boundary layer (PBL) and land surface temperature and moisture budgets, as well as controlling feedbacks with clouds and precipitation that lead to the persistence of dry and wet regimes. Recent efforts to quantify the strength of L-A coupling in prediction models have produced diagnostics that integrate across both the land and PBL components of the system. In this study, we examine the impact of improved specification of land surface states, anomalies, and fluxes on coupled WRF forecasts during the summers of extreme dry (2006) and wet (2007) land surface conditions in the U.S. Southern Great Plains. The improved land initialization and surface flux parameterizations are obtained through the use of a new optimization and uncertainty estimation module in NASA's Land Information System (LIS-OPT/UE), whereby parameter sets are calibrated in the Noah land surface model and classified according to a land cover and soil type mapping of the observation sites to the full model domain. The impact of calibrated parameters on the a) spinup of the land surface used as initial conditions, and b) heat and moisture states and fluxes of the coupled WRF simulations are then assessed in terms of ambient weather and land-atmosphere coupling along with measures of uncertainty propagation into the forecasts. In addition, the sensitivity of this approach to the period of calibration (dry, wet, average) is investigated. Finally, tradeoffs of computational tractability and scientific validity, and the potential for combining this approach with satellite remote sensing data are also discussed.

  20. Diagnostic Accuracy of Ultrasound B scan using 10 MHz linear probe in ocular trauma; results from a high burden country

    PubMed Central

    Shazlee, Muhammad Kashif; Ali, Muhammad; SaadAhmed, Muhammad; Hussain, Ammad; Hameed, Kamran; Lutfi, Irfan Amjad; Khan, Muhammad Tahir

    2016-01-01

    Objective: To study the diagnostic accuracy of ultrasound B scan using a 10 MHz linear probe in ocular trauma. Methods: A total of 61 patients with 63 ocular injuries were assessed from July 2013 to January 2014. All patients were referred to the department of Radiology from the Emergency Room because adequate clinical assessment of the fundus was impossible owing to the presence of opaque ocular media. Based on the radiological diagnosis, the patients were provided treatment (surgical or medical). The clinical diagnosis was confirmed during surgical procedures or clinical follow-up. Results: A total of 63 ocular injuries were examined in 61 patients. The overall sensitivity was 91.5%, specificity was 98.87%, positive predictive value was 87.62%, and negative predictive value was 99%. Conclusion: Ultrasound B-scan is a sensitive, non-invasive, and rapid way of assessing intraocular damage caused by blunt or penetrating eye injuries. PMID:27182245
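    The four reported figures are the standard derivations from a 2 x 2 contingency table; a generic helper is sketched below (the counts in the example call are arbitrary, not the study's data):

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from true/false
        positive/negative counts of a diagnostic test."""
        return {
            "sensitivity": tp / (tp + fn),   # detected fraction of true disease
            "specificity": tn / (tn + fp),   # cleared fraction of healthy cases
            "ppv": tp / (tp + fp),           # positive predictive value
            "npv": tn / (tn + fn),           # negative predictive value
        }

    m = diagnostic_metrics(tp=45, fp=5, fn=5, tn=100)
    ```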

  1. Consideration of shear modulus in biomechanical analysis of peri-implant jaw bone: accuracy verification using image-based multi-scale simulation.

    PubMed

    Matsunaga, Satoru; Naito, Hiroyoshi; Tamatsu, Yuichi; Takano, Naoki; Abe, Shinichi; Ide, Yoshinobu

    2013-01-01

    The aim of this study was to clarify the influence of the shear modulus on analytical accuracy in peri-implant jaw bone simulation. A 3D finite element (FE) model was prepared based on micro-CT data obtained from images of a jawbone containing implants. A precise model that closely reproduced the trabecular architecture, and equivalent models that assigned shear modulus values taking the trabecular architecture into account, were prepared. Displacement norms during loading were calculated, and the displacement error was evaluated. The model that assigned shear modulus values taking the trabecular architecture into account showed an analytical error of around 10-20% in the cancellous bone region, while in the model that used an incorrect shear modulus, the analytical error exceeded 40% in certain regions. The shear modulus should be evaluated precisely in addition to Young's modulus when considering the mechanics of the peri-implant trabecular bone structure.

  2. Verification of the Prediction Accuracy of Annual Energy Output at Noma Wind Park by the Non-Stationary and Non-Linear Wind Synopsis Simulator, RIAM-COMPACT

    NASA Astrophysics Data System (ADS)

    Uchida, Takanori; Ohya, Yuji

    In the present study, the hub-height wind speed ratios for 16 individual wind directional groups were estimated with RIAM-COMPACT for Noma Wind Park, Kagoshima Prefecture, and the validity of the proposed estimation technique for actual wind conditions was examined. For this procedure, field observation data from the one-year period between April 2004 and March 2005 were studied. The relative error of the prediction was less than 10% for the monthly average wind speeds and less than 5% for the annual average wind speed. As with the annual average wind speed, the choice of reference point (Wind Turbine #4 or #6) made little difference in the relative error of the predicted annual energy output; for both reference points, the relative error was within 10%.

  3. The accuracy of linear measurements of maxillary and mandibular edentulous sites in cone-beam computed tomography images with different fields of view and voxel sizes under simulated clinical conditions

    PubMed Central

    Ramesh, Aruna; Pagni, Sarah

    2016-01-01

    Purpose The objective of this study was to investigate the effect of varying resolutions of cone-beam computed tomography images on the accuracy of linear measurements of edentulous areas in human cadaver heads. Intact cadaver heads were used to simulate a clinical situation. Materials and Methods Fiduciary markers were placed in the edentulous areas of 4 intact embalmed cadaver heads. The heads were scanned with two different CBCT units using a large field of view (13 cm×16 cm) and small field of view (5 cm×8 cm) at varying voxel sizes (0.3 mm, 0.2 mm, and 0.16 mm). The ground truth was established with digital caliper measurements. The imaging measurements were then compared with caliper measurements to determine accuracy. Results The Wilcoxon signed rank test revealed no statistically significant difference between the medians of the physical measurements obtained with calipers and the medians of the CBCT measurements. A comparison of accuracy among the different imaging protocols revealed no significant differences as determined by the Friedman test. The intraclass correlation coefficient was 0.961, indicating excellent reproducibility. Inter-observer variability was determined graphically with a Bland-Altman plot and by calculating the intraclass correlation coefficient. The Bland-Altman plot indicated very good reproducibility for smaller measurements but larger discrepancies with larger measurements. Conclusion The CBCT-based linear measurements in the edentulous sites using different voxel sizes and FOVs are accurate compared with the direct caliper measurements of these sites. Higher resolution CBCT images with smaller voxel size did not result in greater accuracy of the linear measurements. PMID:27358816
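    The Bland-Altman analysis reported above reduces to the mean paired difference (bias) plus limits of agreement at +-1.96 standard deviations; a sketch with invented paired readings (not the study's measurements):

    ```python
    import math

    def bland_altman(x, y):
        """Bias and 95% limits of agreement for paired measurements."""
        diffs = [a - b for a, b in zip(x, y)]
        mean_d = sum(diffs) / len(diffs)
        sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1))
        return mean_d, mean_d - 1.96 * sd, mean_d + 1.96 * sd

    # Two observers measuring the same five distances (mm, invented).
    obs1 = [10.2, 12.5, 8.9, 15.1, 11.0]
    obs2 = [10.0, 12.9, 9.1, 14.5, 11.2]
    bias, lo, hi = bland_altman(obs1, obs2)
    ```

    Wide limits relative to the measurement scale flag poor agreement even when the bias itself is near zero, which matches the larger discrepancies this study saw for larger measurements.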

  4. Preliminary capillary hysteresis simulations for fractured rocks -- model development and results of simulations

    SciTech Connect

    Niemi, A.; Bodvarsson, G.S.

    1991-11-01

    As part of the code development and modeling work being carried out to characterize the flow in the unsaturated zone at Yucca Mountain, Nevada, capillary hysteresis models simulating the history dependence of the characteristic curves have been developed. The objective of the work has been both to develop the hysteresis models and to obtain some preliminary estimates of the possible hysteresis effects in the fractured rocks at Yucca Mountain, given the limitations of presently available data. Altogether, three different models were developed based on the work of other investigators reported in the literature. These three models use different principles for determining the scanning paths: in model (1), the scanning paths are interpolated from tabulated first-order scanning curves; in model (2), simple interpolation functions are used for scaling the scanning paths from the expressions of the main wetting and main drying curves; and in model (3), the scanning paths are determined from expressions derived from the dependent domain theory of hysteresis.

  5. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.

  6. A simulation study of the flight dynamics of elastic aircraft. Volume 1: Experiment, results and analysis

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Davidson, John B.; Schmidt, David K.

    1987-01-01

    The simulation experiment described addresses the effects of structural flexibility on the dynamic characteristics of a generic family of aircraft. The simulation was performed using the NASA Langley VMS simulation facility. The vehicle models were obtained as part of this research. The simulation results include complete response data and subjective pilot ratings and comments and so allow a variety of analyses. The subjective ratings and analysis of the time histories indicate that increased flexibility can lead to increased tracking errors, degraded handling qualities, and changes in the frequency content of the pilot inputs. These results, furthermore, are significantly affected by the visual cues available to the pilot.

  7. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL

  8. On the Minimal Accuracy Required for Simulating Self-gravitating Systems by Means of Direct N-body Methods

    NASA Astrophysics Data System (ADS)

    Portegies Zwart, Simon; Boekholt, Tjarda

    2014-04-01

    The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (from round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
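    The reliability criterion used above (energy conserved to better than a given fraction) is the kind of check sketched below: integrate a two-body orbit with a kick-drift-kick leapfrog scheme (G = 1 units, all parameters invented for illustration) and monitor the relative energy drift against a threshold.

    ```python
    import math

    def accel(pos):
        x, y = pos
        r3 = (x * x + y * y) ** 1.5
        return -x / r3, -y / r3          # point mass at the origin, G*M = 1

    def energy(pos, vel):
        return 0.5 * (vel[0] ** 2 + vel[1] ** 2) - 1.0 / math.hypot(*pos)

    pos, vel, dt = (1.0, 0.0), (0.0, 1.0), 1e-3   # circular orbit
    e0 = energy(pos, vel)
    ax, ay = accel(pos)
    for _ in range(10_000):              # kick-drift-kick leapfrog
        vx, vy = vel[0] + 0.5 * dt * ax, vel[1] + 0.5 * dt * ay
        pos = (pos[0] + dt * vx, pos[1] + dt * vy)
        ax, ay = accel(pos)
        vel = (vx + 0.5 * dt * ax, vy + 0.5 * dt * ay)

    rel_energy_err = abs((energy(pos, vel) - e0) / e0)
    ```

    Because leapfrog is symplectic, the energy error oscillates rather than drifting secularly, so this check stays meaningful over many orbits.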

  9. ON THE MINIMAL ACCURACY REQUIRED FOR SIMULATING SELF-GRAVITATING SYSTEMS BY MEANS OF DIRECT N-BODY METHODS

    SciTech Connect

    Portegies Zwart, Simon; Boekholt, Tjarda

    2014-04-10

    The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (from round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.

  10. Linear and Logarithmic Speed-Accuracy Trade-Offs in Reciprocal Aiming Result from Task-Specific Parameterization of an Invariant Underlying Dynamics

    ERIC Educational Resources Information Center

    Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.

    2009-01-01

    The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…

  11. Three-dimensional Simulations of Thermonuclear Detonation with α-Network: Numerical Method and Preliminary Results

    NASA Astrophysics Data System (ADS)

    Khokhlov, A.; Domínguez, I.; Bacon, C.; Clifford, B.; Baron, E.; Hoeflich, P.; Krisciunas, K.; Suntzeff, N.; Wang, L.

    2012-07-01

    We describe a new astrophysical version of a cell-based adaptive mesh refinement code ALLA for reactive flow fluid dynamic simulations, including a new implementation of α-network nuclear kinetics, and present preliminary results of first three-dimensional simulations of incomplete carbon-oxygen detonation in Type Ia Supernovae.

  12. First results using a new technology for measuring masses of very short-lived nuclides with very high accuracy: The MISTRAL program at ISOLDE

    SciTech Connect

    Monsanglant, C.; Audi, G.; Conreur, G.; Cousin, R.; Doubre, H.; Jacotin, M.; Henry, S.; Kepinski, J.-F.; Lunney, D.; Saint Simon, M. de; Thibault, C.; Toader, C.; Bollen, G.; Lebee, G.; Scheidenberger, C.; Borcea, C.; Duma, M.; Kluge, H.-J.; Le Scornet, G.

    1999-11-16

    MISTRAL is an experimental program to measure masses of very short-lived nuclides (T_1/2 down to a few ms) with very high accuracy (a few parts in 10^7). There were three data-taking periods with radioactive beams, and 22 masses of isotopes of Ne, Na, Mg, Al, K, Ca, and Ti were measured. The systematic errors are now under control at the level of 8x10^-7, allowing us to come close to the expected accuracy. Even for the very weakly produced ^30Na (1 ion at the detector per proton burst), the final accuracy is 7x10^-7.

  13. Results of GEANT simulations and comparison with first experiments at DANCE.

    SciTech Connect

    Reifarth, R.; Bredeweg, T. A.; Browne, J. C.; Esch, E. I.; Haight, R. C.; O'Donnell, J. M.; Kronenberg, A.; Rundberg, R. S.; Ullmann, J. L.; Vieira, D. J.; Wilhelmy, J. B.; Wouters, J. M.

    2003-07-29

    This report describes intensive Monte Carlo simulations carried out for comparison with the results of the first run cycle of DANCE (Detector for Advanced Neutron Capture Experiments). The experimental results were obtained during the 2002/2003 commissioning phase with only part of the array. Based on the results of these simulations, the most important items to improve before the next experiments are addressed.

  14. A method for data handling numerical results in parallel OpenFOAM simulations

    SciTech Connect

    Anton, Alin; Muntean, Sebastian

    2015-12-31

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large scale simulation results than the regular algorithms.
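
    The region-of-interest recovery described above can be sketched as follows (a hypothetical simplification: the actual method replays interprocessor traffic, whereas this sketch merely filters cell-centered field data to a user-configured axis-aligned box):

```python
import numpy as np

def extract_roi(cell_centers, field, roi_min, roi_max):
    """Keep only field values whose cell centers fall inside an
    axis-aligned region of interest."""
    mask = np.all((cell_centers >= roi_min) & (cell_centers <= roi_max), axis=1)
    return cell_centers[mask], field[mask]

rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 1.0, size=(100_000, 3))   # stand-in for mesh cell centers
pressure = rng.normal(size=100_000)                   # stand-in for a scalar field

roi_centers, roi_p = extract_roi(centers, pressure,
                                 roi_min=np.array([0.4, 0.4, 0.4]),
                                 roi_max=np.array([0.6, 0.6, 0.6]))
print(f"kept {roi_p.size} of {pressure.size} values "
      f"({100 * roi_p.size / pressure.size:.1f}% of the data)")
```

    For a box covering 0.2 of each axis of a uniformly meshed unit cube, roughly 0.8% of the cells survive, which conveys why restricting output to regions of interest can dominate generic compression for large meshes.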

  15. Simulation loop between cad systems, GEANT-4 and GeoModel: Implementation and results

    NASA Astrophysics Data System (ADS)

    Sharmazanashvili, A.; Tsutskiridze, Niko

    2016-09-01

    Comparative analysis of the simulated and as-built geometry descriptions of the detector is an important field of study for data vs. Monte Carlo discrepancies. Shape consistency and level of detail are less important, while the adequacy of the volumes and weights of detector components is essential for tracking. There are two main sources of faults in the geometry descriptions used in simulation: (1) differences between the simulated and as-built geometry descriptions; (2) internal inaccuracies in the geometry transformations introduced by the simulation software infrastructure itself. The Georgian engineering team developed a hub based on the CATIA platform, together with several tools for reading into CATIA the different descriptions used by simulation packages: XML->CATIA; VP1->CATIA; GeoModel->CATIA; Geant4->CATIA. As a result it becomes possible to compare the different descriptions with each other using the full power of CATIA and to investigate both classes of geometry-description faults. The paper presents results of case studies of the ATLAS coils and end-cap toroid structures.

  16. High-accuracy, high-precision, high-resolution, continuous monitoring of urban greenhouse gas emissions? Results to date from INFLUX

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower and aircraft based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.

  17. Simulation of plasma turbulence in scrape-off layer conditions: the GBS code, simulation results and code validation

    NASA Astrophysics Data System (ADS)

    Ricci, P.; Halpern, F. D.; Jolliet, S.; Loizu, J.; Mosetto, A.; Fasoli, A.; Furno, I.; Theiler, C.

    2012-12-01

    Based on the drift-reduced Braginskii equations, the Global Braginskii Solver, GBS, is able to model scrape-off layer (SOL) plasma turbulence in terms of the interplay between the plasma outflow from the tokamak core, the turbulent transport, and the losses at the vessel. The model equations, the GBS numerical algorithm, and GBS simulation results are described. GBS was first developed to model turbulence in basic plasma physics devices, such as linear and simple magnetized toroidal devices, which contain some of the main elements of SOL turbulence in a simplified setting. In this paper we summarize the findings obtained from the simulations carried out in these configurations and we report the first simulations of SOL turbulence. We also discuss the validation project that has been carried out alongside the GBS development.

  18. Results of computer calculations for a simulated distribution of kidney cells

    NASA Technical Reports Server (NTRS)

    Micale, F. J.

    1985-01-01

    The results of computer calculations for a simulated distribution of kidney cells are given. The calculations were made for different values of electroosmotic flow, U sub o, and the ratio of sample diameter to channel diameter, R.

  19. Classification accuracy improvement

    NASA Technical Reports Server (NTRS)

    Kistler, R.; Kriegler, F. J.

    1977-01-01

    Improvements made in the processing system designed for MIDAS (prototype multivariate interactive digital analysis system) effect higher accuracy in the classification of pixels and significantly reduced processing time. The improved system realizes a cost reduction factor of 20 or more.

  20. Optimal design of robot accuracy compensators

    SciTech Connect

    Zhuang, H.; Roth, Z.S. (Robotics Center and Electrical Engineering Dept.); Hamano, Fumio (Dept. of Electrical Engineering)

    1993-12-01

    The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that actual kinematic parameters of a robot be previously identified. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing end-effector's positioning accuracy over orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and the DLS algorithms in terms of computational complexity, storage requirement, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
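
    The damped least-squares step at the heart of such a compensator can be sketched as follows (a generic textbook formulation applied to a hypothetical planar two-link arm, not the authors' exact compensator): the joint correction is dq = J^T (J J^T + lambda^2 I)^{-1} dx, and the damping lambda keeps the correction bounded near singular configurations.

```python
import numpy as np

def dls_correction(J, dx, damping=0.05):
    """Damped least-squares joint correction:
    dq = J^T (J J^T + lambda^2 I)^{-1} dx.
    The damping term keeps dq finite even when J is singular."""
    JJt = J @ J.T
    reg = JJt + damping**2 * np.eye(JJt.shape[0])
    return J.T @ np.linalg.solve(reg, dx)

def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    """Jacobian of a planar 2-link arm (link lengths l1, l2) at (q1, q2)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# Near a singularity (arm fully stretched, q2 ~ 0) the DLS step stays bounded
J_sing = jacobian_2link(0.3, 1e-6)
dq = dls_correction(J_sing, np.array([0.01, 0.0]))
print("joint correction near singularity:", dq)
```

    The maximum gain of the DLS map is 1/(2*lambda), which is the quantitative sense in which corrections at singular configurations remain computable without solving the inverse kinematics exactly.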

  1. First Results Using a New Technology for Measuring Masses of Very Short-Lived Nuclides with Very High Accuracy: the MISTRAL Program at ISOLDE

    SciTech Connect

    C. Monsanglant; C. Toader; G. Audi; G. Bollen; C. Borcea; G. Conreur; R. Cousin; H. Doubre; M. Duma; M. Jacotin; S. Henry; J.-F. Kepinski; H.-J. Kluge; G. Lebee; G. Le Scornet; D. Lunney; M. de Saint Simon; C. Scheidenberger; C. Thibault

    1999-12-31

    MISTRAL is an experimental program to measure masses of very short-lived nuclides (T{sub 1/2} down to a few ms), with a very high accuracy (a few 10{sup -7}). There were three data taking periods with radioactive beams and 22 masses of isotopes of Ne, Na{clubsuit}, Mg, Al{clubsuit}, K, Ca, and Ti were measured. The systematic errors are now under control at the level of 8x10{sup -7}, allowing to come close to the expected accuracy. Even for the very weakly produced {sup 30}Na (1 ion at the detector per proton burst), the final accuracy is 7x10{sup -7}.

  2. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI-PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON.

    SciTech Connect

    Beebe-Wang, J.; Luccio, A.U.; D'Imperio, N.; Machida, S.

    2002-06-03

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effect is by computer simulations. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.

  3. Wave spectra of a shoaling wave field: A comparison of experimental and simulated results

    NASA Technical Reports Server (NTRS)

    Morris, W. D.; Grosch, C. E.; Poole, L. R.

    1982-01-01

    Wave profile measurements made from an aircraft crossing the North Carolina continental shelf after passage of Tropical Storm Amy in 1975 are used to compute a series of wave energy spectra for comparison with simulated spectra. Results indicate that the observed wave field experiences refraction and shoaling effects causing statistically significant changes in the spectral density levels. A modeling technique is used to simulate the spectral density levels. Total energy levels of the simulated spectra are within 20 percent of those of the observed wave field. The results represent a successful attempt to theoretically simulate, at oceanic scales, the decay of a wave field which contains significant wave energies from deepwater through shoaling conditions.
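
    Wave energy spectra like those discussed above are commonly estimated from a surface-elevation record with a periodogram. A minimal sketch (synthetic data, not the Tropical Storm Amy measurements):

```python
import numpy as np

def wave_energy_spectrum(eta, dt):
    """One-sided energy spectral density of a surface-elevation record
    eta sampled every dt seconds (raw periodogram estimate), normalized
    so that sum(S) * df equals the variance of eta."""
    n = len(eta)
    eta = eta - np.mean(eta)          # remove mean sea level
    amp = np.fft.rfft(eta)
    freqs = np.fft.rfftfreq(n, d=dt)
    S = (np.abs(amp)**2) * 2 * dt / n
    S[0] /= 2                          # DC bin is not doubled
    if n % 2 == 0:
        S[-1] /= 2                     # Nyquist bin is not doubled
    return freqs, S

# Synthetic swell: 0.1 Hz dominant component plus measurement noise
dt = 0.5
t = np.arange(0, 600, dt)
eta = 1.5 * np.sin(2 * np.pi * 0.1 * t) \
      + 0.1 * np.random.default_rng(1).normal(size=t.size)
freqs, S = wave_energy_spectrum(eta, dt)
print(f"peak frequency: {freqs[np.argmax(S)]:.3f} Hz")
```

    In practice the raw periodogram would be smoothed or segment-averaged before comparing spectral density levels, but the normalization above already makes total energy comparisons (as in the 20 percent figure quoted in the abstract) meaningful.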

  4. Columbus meteoroid/debris protection study - Experimental simulation techniques and results

    NASA Astrophysics Data System (ADS)

    Schneider, E.; Kitta, K.; Stilp, A.; Lambert, M.; Reimerdes, H. G.

    1992-08-01

    The methods and measurement techniques used in experimental simulations of micrometeoroid and space debris impacts with the ESA's laboratory module Columbus are described. Experiments were carried out at the two-stage light gas gun acceleration facilities of the Ernst-Mach Institute. Results are presented on simulations of normal impacts on bumper systems, oblique impacts on dual bumper systems, impacts into cooled targets, impacts into pressurized targets, and planar impacts of low-density projectiles.

  5. Design and CFD Simulation of the Drift Eliminators in Comparison with PIV Results

    NASA Astrophysics Data System (ADS)

    Stodůlka, Jiří; Vitkovičová, Rut

    2015-05-01

    Drift eliminators are an essential part of all modern cooling towers, preventing significant losses of liquid water escaping to the environment. These eliminators need to be effective in terms of water capture but must also cause only minimal pressure loss. A new type of such eliminator was designed and numerically simulated using CFD tools. Results of the simulation are compared with PIV visualisation on the prototype model.

  6. Results of NASA/FAA ground and flight simulation experiments concerning helicopter IFR airworthiness criteria

    NASA Technical Reports Server (NTRS)

    Lebacqz, J. V.; Chen, R. T. N.; Gerdes, R. M.; Weber, J. M.; Forrest, R. D.

    1982-01-01

    A sequence of ground and flight simulation experiments was conducted to investigate helicopter instrument-flight-rules airworthiness criteria. The first six of these experiments and their major results are summarized. Five of the experiments were conducted on large-amplitude motion-base simulators. The NASA-Army V/STOLAND UH-1H variable-stability helicopter was used in the flight experiment. Artificial stability and control augmentation, longitudinal and lateral control, and pitch and roll attitude augmentation were investigated.

  7. THEMATIC ACCURACY OF THE 1992 NATIONAL LAND-COVER DATA (NLCD) FOR THE EASTERN UNITED STATES: STATISTICAL METHODOLOGY AND REGIONAL RESULTS

    EPA Science Inventory

    The accuracy of the National Land Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or a...

  8. SU-E-T-35: An Investigation of the Accuracy of Cervical IMRT Dose Distribution Using 2D/3D Ionization Chamber Arrays System and Monte Carlo Simulation

    SciTech Connect

    Zhang, Y; Yang, J; Liu, H; Liu, D

    2014-06-01

    Purpose: The purpose of this work is to compare the verification results of three solutions (2D/3D ionization chamber array measurement and Monte Carlo simulation); the results will help make a clinical decision as to how to perform our cervical IMRT verification. Methods: Seven cervical cases were planned with Pinnacle 8.0m to meet the clinical acceptance criteria. The plans were recalculated in the Matrixx and Delta4 phantoms with the exact plan parameters. The plans were also recalculated by Monte Carlo using the leaf sequences and MUs of the individual plans for every patient and for the Matrixx and Delta4 phantoms. All plans for the Matrixx and Delta4 phantoms were delivered and measured. The dose distribution of the iso slice, dose profiles, and gamma maps of every beam were used to evaluate the agreement. Dose-volume histograms were also compared. Results: The dose distribution of the iso slice and the dose profiles from the Pinnacle calculation were in agreement with the Monte Carlo simulation and the Matrixx and Delta4 measurements. A 95.2%/91.3% gamma pass ratio was obtained between the Matrixx/Delta4 measurements and the Pinnacle distributions within the 3mm/3% gamma criteria. A 96.4%/95.6% gamma pass ratio was obtained between the Matrixx/Delta4 measurements and the Monte Carlo simulation within the 2mm/2% gamma criteria, and almost a 100% gamma pass ratio within the 3mm/3% gamma criteria. The DVH plots show slight differences between Pinnacle and the Delta4 measurement as well as between Pinnacle and the Monte Carlo simulation, but excellent agreement between the Delta4 measurement and the Monte Carlo simulation. Conclusion: It was shown that Matrixx/Delta4 and Monte Carlo simulation can be used very efficiently to verify cervical IMRT delivery. In terms of gamma values the pass ratio of Matrixx was a little higher; however, Delta4 showed more problem fields. The primary advantage of Delta4 is the fact that it can measure true 3D dosimetry, while Monte Carlo can simulate in patient CT images but not in a phantom.
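
    The gamma pass ratios quoted above combine a dose-difference criterion with a distance-to-agreement criterion. A one-dimensional sketch of the gamma index (illustrative only; clinical tools evaluate it in 2D/3D with additional global/local normalization options):

```python
import numpy as np

def gamma_index_1d(x, d_ref, d_eval, dose_crit=0.03, dist_crit=3.0):
    """1D gamma index: for each reference point, the minimum over evaluated
    points of sqrt((dx/dist_crit)^2 + (dD/dose_crit)^2).
    dose_crit is a fraction of the global maximum dose; dist_crit is in mm."""
    d_max = d_ref.max()
    gammas = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dist_term = ((x - xi) / dist_crit) ** 2
        dose_term = ((d_eval - di) / (dose_crit * d_max)) ** 2
        gammas[i] = np.sqrt(np.min(dist_term + dose_term))
    return gammas

# Synthetic profile: evaluated dose shifted by 1 mm relative to reference
x = np.arange(0.0, 100.0, 1.0)                  # positions in mm
d_ref = np.exp(-((x - 50.0) / 15.0) ** 2)       # reference dose profile
d_eval = np.exp(-((x - 51.0) / 15.0) ** 2)      # 1 mm shifted evaluation
g = gamma_index_1d(x, d_ref, d_eval)
pass_ratio = 100.0 * np.mean(g <= 1.0)
print(f"gamma pass ratio (3%/3mm): {pass_ratio:.1f}%")
```

    A point passes when gamma <= 1; a pure 1 mm spatial shift passes everywhere under a 3 mm distance criterion, which is why the distance-to-agreement term makes the test tolerant of small setup offsets that a plain dose-difference map would flag.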

  9. SimTracker - Using the Web to track computer simulation results

    SciTech Connect

    Long, J.; Spencer, P.; Springmeyer, R.

    1998-08-26

    Large-scale computer simulations, a hallmark of computing at Lawrence Livermore National Laboratory (LLNL), often take days to run and can produce massive amounts of output. The typical environment of many LLNL scientists includes multiple hardware platforms, a large collection of eclectic software applications, data stored on many devices in many formats, and little standard metadata, which is accessible documentation about the data. The exploration of simulation results typically proceeds as a laborious process requiring knowledge of this complex environment and many application programs. We have addressed this problem by developing a web-based approach for exploring simulation results via the automatic generation of metadata summaries which provide convenient access to the data sets and associated analysis tools. In this paper we will describe the SimTracker tool for automatically generating metadata that serves as a quick overview and index to the archived results of simulations. The SimTracker application consists of two parts - a generation component and a viewing component. The generation component captures and generates calculation metadata from a simulation. These metadata include graphical snapshots from various stages of the run, pointers to the input and output files from the simulation, and assorted annotations describing the run. SimTracker generation can be done either during a simulation or afterwards. When integrated with a code system, SimTracker does its work on the fly, allowing the user to monitor a calculation while it is running. The viewing component of SimTracker provides a web-based mechanism for both quick perusing and careful analysis of simulation results. HTML is created on the fly from a series of Perl CGI scripts and metadata extracted from a database. A variety of views are provided, ranging from a high-level table of contents showing all of one's simulations, to an in-depth results page from which numeric values can be extracted

  10. Comdisco Simulation Results for PCM/PM Receivers in Non-Ideal Channels

    NASA Technical Reports Server (NTRS)

    Anabtawi, A.; Nguyen, T. M.; Hinedi, S. M.; Million, S.

    1994-01-01

    This paper studies, by computer simulations, the performance of a PCM/PM/NRZ receiver in the presence of two separate effects: unbalanced data stream and band-limited channel. The results obtained are then compared to the theoretical results presented in a previous report.

  11. Ride qualities criteria validation/pilot performance study: Flight simulator results

    NASA Technical Reports Server (NTRS)

    Nardi, L. U.; Kawana, H. Y.; Borland, C. J.; Lefritz, N. M.

    1976-01-01

    Pilot performance was studied during simulated manual terrain following flight for ride quality criteria validation. An existing B-1 simulation program provided the data for these investigations. The B-1 simulation program included terrain following flights under varying controlled conditions of turbulence, terrain, mission length, and system dynamics. The flight simulator consisted of a moving base cockpit which reproduced motions due to turbulence and control inputs. The B-1 aircraft dynamics were programmed with six-degrees-of-freedom equations of motion with three symmetric and two antisymmetric structural degrees of freedom. The results provided preliminary validation of existing ride quality criteria and identified several ride quality/handling quality parameters which may be of value in future ride quality/criteria development.

  12. Comparing Simulation Results with Traditional PRA Model on a Boiling Water Reactor Station Blackout Case Study

    SciTech Connect

    Zhegang Ma; Diego Mandelli; Curtis Smith

    2011-07-01

    A previous study used RELAP and RAVEN to conduct a boiling water reactor station blackout (SBO) case study in a simulation-based environment to show the capabilities of the risk-informed safety margin characterization methodology. This report compares the RELAP/RAVEN simulation results with traditional PRA model results. The RELAP/RAVEN simulation runs were reviewed for their input parameters and output results. The input parameters for each simulation run include various timing information such as diesel generator or offsite power recovery time, Safety Relief Valve stuck-open time, High Pressure Core Injection or Reactor Core Isolation Cooling fail-to-run time, extended core cooling operation time, depressurization delay time, and firewater injection time. The output results include the maximum fuel clad temperature, the outcome, and the simulation end time. The traditional SBO PRA model in this report contains four event trees that are linked together with the transferring feature in the SAPHIRE software. Unlike the usual Level 1 PRA quantification process, in which only core damage sequences are quantified, this report quantifies all SBO sequences, whether they are core damage sequences or success (i.e., non-core-damage) sequences, in order to provide a full comparison with the simulation results. Three different approaches were used to solve event tree top events and quantify the SBO sequences: "W" process flag, default process flag without proper adjustment, and default process flag with adjustment to account for the success branch probabilities. Without post-processing, the first two approaches yield incorrect results with a total conditional probability greater than 1.0. The last approach accounts for the success branch probabilities and provides correct conditional sequence probabilities that are to be used for comparison. To better compare the results from the PRA model and the simulation runs, a simplified SBO event tree was developed with only four
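
    The inflated total probability noted above is easy to reproduce on a toy event tree: if traversed success branches are assigned probability 1 instead of the complement of the failure probability, the sequence probabilities no longer sum to 1. A minimal sketch (hypothetical two-top-event tree, unrelated to the actual SAPHIRE model):

```python
# Hypothetical event tree with two top events A and B
p_a, p_b = 0.1, 0.2  # failure probabilities of top events A and B

# Correct quantification: success branches carry complement probabilities
sequences = {
    "A_ok,B_ok": (1 - p_a) * (1 - p_b),
    "A_ok,B_f":  (1 - p_a) * p_b,
    "A_f,B_ok":  p_a * (1 - p_b),
    "A_f,B_f":   p_a * p_b,
}
total = sum(sequences.values())
print(f"total with success-branch complements: {total:.3f}")  # 1.000

# Naive quantification that ignores success-branch probabilities
# (treats every traversed success branch as probability 1)
naive = {
    "A_ok,B_ok": 1.0,
    "A_ok,B_f":  p_b,
    "A_f,B_ok":  p_a,
    "A_f,B_f":   p_a * p_b,
}
print(f"naive total: {sum(naive.values()):.3f}")  # exceeds 1.0
```

    This is the arithmetic behind the report's observation: quantifying success sequences requires the success-branch complements, otherwise the conditional probabilities over all sequences cannot be compared meaningfully against simulation outcome fractions.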

  13. High Fidelity Thermal Simulators for Non-Nuclear Testing: Analysis and Initial Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David

    2007-01-01

    Non-nuclear testing can be a valuable tool in the development of a space nuclear power system, providing system characterization data and allowing one to work through various fabrication, assembly and integration issues without the cost and time associated with a full ground nuclear test. In a non-nuclear test bed, electric heaters are used to simulate the heat from nuclear fuel. Testing with non-optimized heater elements allows one to assess thermal, heat transfer, and stress related attributes of a given system, but fails to demonstrate the dynamic response that would be present in an integrated, fueled reactor system. High fidelity thermal simulators that match both the static and the dynamic fuel pin performance that would be observed in an operating, fueled nuclear reactor can vastly increase the value of non-nuclear test results. With optimized simulators, the integration of thermal hydraulic hardware tests with simulated neutronic response provides a bridge between electrically heated testing and fueled nuclear testing, providing a better assessment of system integration issues, characterization of integrated system response times and response characteristics, and assessment of potential design improvements at a relatively small fiscal investment. Initial conceptual thermal simulator designs are determined by simple one-dimensional analysis at a single axial location and at steady state conditions; feasible concepts are then input into a detailed three-dimensional model for comparison to expected fuel pin performance. Static and dynamic fuel pin performance for a proposed reactor design is determined using SINDA/FLUINT thermal analysis software, and comparison is made between the expected nuclear performance and the performance of conceptual thermal simulator designs. Through a series of iterative analyses, a conceptual high fidelity design can be developed. Test results presented in this paper correspond to a "first cut" simulator design for a potential

  14. Geometry and Simulation Results for a Gas Turbine Representative of the Energy Efficient Engine (EEE)

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Beach, Tim; Turner, Mark; Siddappaji, Kiran; Hendricks, Eric S.

    2015-01-01

    This paper describes the geometry and simulation results of a gas-turbine engine based on the original EEE engine developed in the 1980s. While the EEE engine was never in production, the technology developed during the program underpins many of the current generation of gas turbine engines. This geometry is being explored as a potential multi-stage turbomachinery test case that may be used to develop technology for virtual full-engine simulation. Simulation results were used to test the validity of each component geometry representation. Results are compared to a zero-dimensional engine model developed from experimental data. The geometry is captured in a series of Initial Graphical Exchange Specification (IGES) files and is available on a supplemental DVD to this report.

  15. SIMULATION AND ANALYSIS OF MICROWAVE TRANSMISSION THROUGH AN ELECTRON CLOUD, A COMPARISON OF RESULTS

    SciTech Connect

    Sonnad, Kiran G.; Furman, Miguel; Veitzer, Seth A.; Cary, John

    2006-04-15

    Simulation studies of the transmission of microwaves through electron clouds show good agreement with analytic results. The electron cloud produces a shift in the phase of the microwave. Experimental observation of this phenomenon would lead to a useful diagnostic tool for assessing the local density of electron clouds in an accelerator. These experiments are being carried out at the CERN SPS and the PEP-II LER at SLAC, and are proposed for the Fermilab Main Injector. In this study, a brief analysis of the phase shift is provided and the results are compared with those obtained from simulations.
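
    The phase shift underlying this diagnostic follows from the standard underdense-plasma dispersion relation, dphi ~ omega_p^2 L / (2 c omega). A minimal sketch with illustrative numbers (hypothetical density, path length, and frequency, not values from the cited experiments):

```python
import numpy as np

# Physical constants (SI)
E_CHARGE = 1.602176634e-19
E_MASS = 9.1093837015e-31
EPS0 = 8.8541878128e-12
C = 2.99792458e8

def phase_shift(n_e, length, freq):
    """Phase shift (radians) of a microwave of frequency `freq` after
    traversing a uniform electron cloud of density n_e over `length`,
    in the underdense limit (wave frequency >> plasma frequency):
        dphi ~ omega_p^2 * L / (2 * c * omega)."""
    omega = 2 * np.pi * freq
    omega_p2 = n_e * E_CHARGE**2 / (EPS0 * E_MASS)
    return omega_p2 * length / (2 * C * omega)

# Illustrative numbers: 1e12 m^-3 cloud density, 10 m path, 2 GHz carrier
dphi = phase_shift(1e12, 10.0, 2e9)
print(f"phase shift: {dphi:.3e} rad")
```

    Because the shift scales linearly with density, measuring dphi along a known path length yields the line-averaged electron cloud density directly, which is the diagnostic use proposed in the abstract.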

  16. An outcome-based learning model to identify emerging threats : experimental and simulation results.

    SciTech Connect

    Martinez-Moyano, I. J.; Conrad, S. H.; Andersen, D. F.; Decision and Information Sciences; SNL; Univ. at Albany

    2007-01-01

    The authors present experimental and simulation results of an outcome-based learning model as it applies to the identification of emerging threats. This model integrates judgment, decision making, and learning theories to provide an integrated framework for the behavioral study of emerging threats.

  17. Simulation and experimental results of kaleidoscope homogenizers for longitudinal diode pumping.

    PubMed

    Bartnicki, Eric; Bourdet, Gilbert L

    2010-03-20

    With the goal to set a homogenizer to allow coupling of a stack of diodes with a disk amplifier medium for a longitudinally pumped laser or amplifier, we report simulation and experimental results on homogenization of the light supplied by a large stack of diodes. We investigate various kaleidoscope cross-section shapes and various optical coupling configurations.

  18. The Vascular Model Repository: A Public Resource of Medical Imaging Data and Blood Flow Simulation Results.

    PubMed

    Wilson, Nathan M; Ortiz, Ana K; Johnson, Allison B

    2013-12-01

    Patient-specific blood flow simulations may provide insight into disease progression, treatment options, and medical device design that would be difficult or impossible to obtain experimentally. However, publicly available image data and computer models for researchers and device designers are extremely limited. The National Heart, Lung, and Blood Institute sponsored Open Source Medical Software Corporation (contract nos. HHSN268200800008C and HHSN268201100035C) and its university collaborators to build a repository (www.vascularmodel.org) including realistic, image-based anatomic models and related hemodynamic simulation results to address this unmet need.

  19. Late stage spinodal decomposition in binary fluids: comparison between computer simulation and experimental results

    NASA Astrophysics Data System (ADS)

    Koga, Tsuyoshi; Kawasaki, Kyozi; Takenaka, Mikihito; Hashimoto, Takeji

    1993-09-01

    We present detailed comparisons of results on the late stage dynamics of spinodal decomposition obtained by computer simulation of the time-dependent Ginzburg-Landau equation with hydrodynamic interaction and by experiments on a polymer mixture of polybutadiene and polyisoprene. We show that the temporally linear domain growth law, which is characteristic of viscous fluids, is observed in both simulation and experiment in the late stage. Some quantities obtained in this hydrodynamic domain growth region, such as the interface area density and the scaling function, are compared in detail. In particular, we show that the scaling functions for the two systems are in quantitative agreement.

  20. Analysis Results for Lunar Soil Simulant Using a Portable X-Ray Fluorescence Analyzer

    NASA Technical Reports Server (NTRS)

    Boothe, R. E.

    2006-01-01

    Lunar soil will potentially be used for oxygen generation, water generation, and as filler for building blocks during habitation missions on the Moon. NASA's in situ fabrication and repair program is evaluating portable technologies that can assess the chemistry of lunar soil and lunar soil simulants. This Technical Memorandum summarizes the results of the JSC-1 lunar soil simulant analysis using the TRACeR III IV handheld x-ray fluorescence analyzer, manufactured by KeyMaster Technologies, Inc. The focus of the evaluation was to determine how well the current instrument configuration would detect and quantify the components of JSC-1.

  1. A global index of acoustic assessment of machines-results of experimental and simulation tests.

    PubMed

    Pleban, Dariusz

    2011-01-01

    A global index for the acoustic assessment of machines was developed to assess the noise emitted by machines and to predict noise levels at workstations. The global index is a function of several partial indices: a sound power index, an index of the distance between the workstation and the machine, a radiation directivity index, an impulse and impact noise index, and a noise spectrum index. Tests were carried out to determine values of the global index for an engine-generator; the inversion method for determining sound power level was used, which required modelling each tested generator with one omnidirectional substitute source. The partial indices and the global index were also simulated. The results of the tests confirmed the correctness of the simulations. PMID:21939600

  2. Results of intravehicular manned cargo-transfer studies in simulated weightlessness

    NASA Technical Reports Server (NTRS)

    Spady, A. A., Jr.; Beasley, G. P.; Yenni, K. R.; Eisele, D. F.

    1972-01-01

    A parametric investigation was conducted in a water immersion simulator to determine the effect of package mass, moment of inertia, and size on the ability of man to transfer cargo in simulated weightlessness. Results from this study indicate that packages with masses of at least 744 kg and moments of inertia of at least 386 kg-m2 can be manually handled and transferred satisfactorily under intravehicular conditions using either one- or two-rail motion aids. Data leading to the conclusions and discussions of test procedures and equipment are presented.

  3. Methods for improving accuracy and extending results beyond periods covered by traditional ground-truth in remote sensing classification of a complex landscape

    NASA Astrophysics Data System (ADS)

    Mueller-Warrant, George W.; Whittaker, Gerald W.; Banowetz, Gary M.; Griffith, Stephen M.; Barnhart, Bradley L.

    2015-06-01

    Successful development of approaches to quantify impacts of diverse landuse and associated agricultural management practices on ecosystem services is frequently limited by lack of historical and contemporary landuse data. We hypothesized that ground truth data from one year could be used to extrapolate previous or future landuse in a complex landscape where cropping systems do not generally change greatly from year to year, because the majority of crops are established perennials or the same annual crops grown on the same fields over multiple years. Prior to testing this hypothesis, it was first necessary to classify 57 major landuses in the Willamette Valley of western Oregon from 2005 to 2011 using normal same-year ground truth, elaborating on previously published work and traditional sources such as Cropland Data Layers (CDL) to more fully include minor crops grown in the region. Available remote sensing data included Landsat, MODIS 16-day composites, and National Aerial Imagery Program (NAIP) imagery, all of which were resampled to a common 30 m resolution. The frequent presence of clouds and Landsat7 scan line gaps forced us to conduct a series of separate classifications in each year, which were then merged by choosing whichever classification used the highest number of cloud- and gap-free bands at any given pixel. Procedures adopted to improve accuracy beyond that achieved by maximum likelihood pixel classification included majority-rule reclassification of pixels within 91,442 Common Land Unit (CLU) polygons, smoothing and aggregation of areas outside the CLU polygons, and majority-rule reclassification over time of forest and urban development areas. Final classifications in all seven years separated annually disturbed agriculture, established perennial crops, forest, and urban development from each other at 90 to 95% overall 4-class validation accuracy. In the most successful use of subsequent year ground-truth data to classify prior year landuse, an
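    The majority-rule reclassification step described above can be sketched in a few lines. This is a minimal illustration, not the study's code; the function name and the toy 4x4 class map and polygon ids are hypothetical:

```python
import numpy as np

def majority_rule_reclassify(class_map, polygon_ids):
    """Reassign every pixel inside a polygon to that polygon's majority class."""
    out = class_map.copy()
    for pid in np.unique(polygon_ids):
        mask = polygon_ids == pid
        values, counts = np.unique(class_map[mask], return_counts=True)
        out[mask] = values[np.argmax(counts)]
    return out

# Toy 4x4 pixel classification with two "CLU polygons" (ids 0 and 1):
classes = np.array([[1, 1, 2, 2],
                    [1, 3, 2, 2],
                    [1, 1, 2, 3],
                    [1, 1, 3, 2]])
polys = np.array([[0, 0, 1, 1]] * 4)
smoothed = majority_rule_reclassify(classes, polys)  # isolated 3s are voted out
```

    The same voting idea extends to the temporal smoothing the abstract mentions, with "polygon" replaced by "pixel across years".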

  4. Ca-Pri a Cellular Automata Phenomenological Research Investigation: Simulation Results

    NASA Astrophysics Data System (ADS)

    Iannone, G.; Troisi, A.

    2013-05-01

    Following the introduction of a phenomenological cellular automata (CA) model capable of reproducing city growth and urban sprawl, we develop a toy model simulation in a realistic framework. The main characteristic of our approach is an evolution algorithm based on inhabitants' preferences. The growth of cells is controlled by means of suitable functions which depend on the initial conditions of the simulation. Newborn urban settlements are generated by means of a logistic evolution of the urban pattern, while urban sprawl is controlled by means of the population evolution function. In order to compare model results with a realistic urban framework, we have considered, as the area of study, the island of Capri (Italy) in the Mediterranean Sea. Two different phases of the urban evolution of the island have been taken into account: an initial growth induced by geographic suitability, and the urban spread after 1943 induced by the population evolution after that date.

  5. Monte Carlo simulations of microchannel plate detectors I: steady-state voltage bias results

    SciTech Connect

    Ming Wu, Craig Kruschwitz, Dane Morgan, Jiaming Morgan

    2008-07-01

    X-ray detectors based on straight-channel microchannel plates (MCPs) are a powerful diagnostic tool for two-dimensional, time-resolved imaging and time-resolved x-ray spectroscopy in the fields of laser-driven inertial confinement fusion and fast z-pinch experiments. Understanding the behavior of microchannel plates as used in such detectors is critical to understanding the data obtained. The subject of this paper is a Monte Carlo computer code we have developed to simulate the electron cascade in a microchannel plate under a static applied voltage. Also included in the simulation is elastic reflection of low-energy electrons from the channel wall, which is important at lower voltages. When model results were compared to measured microchannel plate sensitivities, good agreement was found. Spatial resolution simulations of MCP-based detectors are also presented and found to agree with experimental measurements.
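    The core of such an electron-cascade calculation can be illustrated with a toy branching-process sketch. This is not the paper's code: treating the channel as a fixed number of discrete wall-collision "stages" with a Poisson-distributed secondary yield is an invented simplification, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def mcp_gain(n_stages=8, mean_yield=2.0, n_primaries=200):
    """Toy cascade gain: each primary electron undergoes n_stages wall
    collisions, and every collision releases a Poisson-distributed number
    of secondaries (mean_yield stands in for the voltage-dependent yield)."""
    gains = []
    for _ in range(n_primaries):
        n = 1
        for _ in range(n_stages):
            if n == 0:
                break  # the cascade died out
            n = int(rng.poisson(mean_yield, size=n).sum())
        gains.append(n)
    return float(np.mean(gains))

gain = mcp_gain()  # mean gain should land near mean_yield**n_stages = 256
```

    A real MCP simulation would replace the fixed stage count with sampled electron trajectories and an energy-dependent yield, but the multiplicative statistics are the same.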

  6. Computer simulation of shelf and stream profile geomorphic evolution resulting from eustasy and uplift

    SciTech Connect

    Johnson, R.M.

    1993-04-01

    A two-dimensional computer simulation of shelf and stream profile evolution with sea level oscillation has been developed to illustrate the interplay of coastal and fluvial processes on uplifting continental margins. The shelf evolution portion of the simulation is based on the erosional model of Trenhaile (1989). The rate of high-tide cliff erosion decreases as the abrasion platform gradient decreases and the sea cliff height increases. The rate of subtidal erosion decreases as the subtidal sea floor gradient decreases. Values are specified for annual wave energy, the energy required to erode a cliff notch 1 meter deep, the nominal low-tide erosion rate, and the rate of removal of cliff debris. The values were chosen arbitrarily to yield a geomorphic evolution consistent with the present coast of northern California, where flights of uplifted marine terraces are common. The stream profile evolution simulation interfaces in real time with the shelf simulation. The stream profile consists of uniformly spaced cells, each representing the median height of a profile segment. The stream simulation results show that stream response to sea level change on an uplifting coast depends on the profile gradient near the stream mouth relative to the shelf gradient. Small streams with steep gradients aggrade onto the emergent shelf during sea level fall and incise at the mountain front during sea level rise. Large streams with low gradients incise the emergent shelf during sea level fall and aggrade in their valleys during sea level rise.

  7. Water Vapor and Cloud Formation in the TTL: Simulation Results vs. Satellite Observations

    NASA Astrophysics Data System (ADS)

    Wang, T.; Dessler, A. E.; Schoeberl, M. R.

    2012-12-01

    Driven by analyzed winds and temperatures, a domain-filling forward trajectory model is used to simulate water vapor and clouds in the tropical tropopause layer (TTL). During the Lagrangian calculation, excess water vapor is instantaneously removed from each parcel to keep the relative humidity with respect to ice from exceeding a specified (super)saturation level. The occurrences of dehydration serve as an indication of where and when clouds form. The simulation also includes simple parameterizations for convective moistening through ice lofting and for temperature perturbations from gravity waves. Our simulations produce water vapor mixing ratios close to those observed by the Aura Microwave Limb Sounder (MLS). The results are consistent with the biases of reanalysis tropical tropopause temperatures, which confirms the dominant role of cold-point temperatures in regulating water vapor abundances in the stratosphere. The simulated cloud formation agrees with the patterns of cirrus distributions from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO). This demonstrates that trajectory calculations driven by analyzed winds and temperatures can produce reasonable simulations of water vapor and cloud formation in the TTL.
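    The instantaneous-dehydration rule can be illustrated with a toy parcel march. The function name and the saturation mixing-ratio sequence below are hypothetical, not the study's data:

```python
def dehydrate(q, q_sat, rh_max=1.0):
    """Cap the parcel mixing ratio at rh_max * saturation; the removed excess
    is treated as instantaneous cloud (ice) formation."""
    removed = max(0.0, q - rh_max * q_sat)
    return q - removed, removed

# March one parcel through a sequence of hypothetical saturation mixing ratios:
q = 10.0
cloud_events = []
for q_sat in [12.0, 8.0, 5.0, 6.0, 3.0]:
    q, removed = dehydrate(q, q_sat)
    cloud_events.append(removed > 0.0)  # dehydration marks where a cloud forms
# The final q is set by the lowest (coldest-point) q_sat encountered: 3.0
```

    Raising `rh_max` above 1 models the supersaturation threshold mentioned in the abstract: parcels then hold more vapor before a dehydration (cloud) event is triggered.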

  8. SU-D-16A-04: Accuracy of Treatment Plan TCP and NTCP Values as Determined Via Treatment Course Delivery Simulations

    SciTech Connect

    Siebers, J; Xu, H; Gordon, J

    2014-06-01

    Purpose: To determine whether tumor control probability (TCP) and normal tissue complication probability (NTCP) values computed on the treatment planning image are representative of the TCP/NTCP distributions resulting from probable positioning variations encountered during external-beam radiotherapy. Methods: We compare TCP/NTCP as typically computed on the planning PTV/OARs with distributions of those parameters computed for CTV/OARs via treatment delivery simulations, which include the effect of patient organ deformations, for a group of 19 prostate IMRT pseudocases. Planning objectives specified 78 Gy to PTV1=prostate CTV+5 mm margin, 66 Gy to PTV2=seminal vesicles+8 mm margin, and multiple bladder/rectum OAR objectives to achieve typical clinical OAR sparing. TCPs were computed using the Poisson model, while NTCPs used the Lyman-Kutcher-Burman model. For each patient, 1000 30-fraction virtual treatment courses were simulated, with each fractional pseudo-time-of-treatment anatomy sampled from a principal component analysis patient deformation model. Dose for each virtual treatment course was determined via deformable summation of dose from the individual fractions. CTV-TCP/OAR-NTCP values were computed for each treatment course, statistically analyzed, and compared with the planning PTV-TCP/OAR-NTCP values. Results: Mean TCP from the simulations differed by <1% from planned TCP for 18/19 patients; 1/19 differed by 1.7%. Mean bladder NTCP differed from the planned NTCP by >5% for 12/19 patients and >10% for 4/19 patients. Similarly, mean rectum NTCP differed by >5% for 12/19 patients and >10% for 4/19 patients. Both mean bladder and mean rectum NTCP differed by >5% for 10/19 patients and by >10% for 2/19 patients. For several patients, planned NTCP was less than the minimum or more than the maximum from the treatment course simulations.
Conclusion: Treatment course simulations yield TCP values that are similar to planned values, while OAR NTCPs differ significantly, indicating the
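    A sketch of the two dose-response models named in the abstract, the Poisson TCP and the Lyman-Kutcher-Burman NTCP, evaluated on a differential DVH. The parameter values (alpha, clonogen number, TD50, m, n) are illustrative defaults, not those used in the study:

```python
import math

def poisson_tcp(doses, volumes, alpha=0.3, clonogens=1e7):
    """Poisson TCP over a differential DVH of (dose_Gy, fractional_volume) bins:
    TCP = exp(-expected number of surviving clonogens)."""
    survivors = sum(v * clonogens * math.exp(-alpha * d)
                    for d, v in zip(doses, volumes))
    return math.exp(-survivors)

def lkb_ntcp(doses, volumes, td50=80.0, m=0.15, n=0.1):
    """Lyman-Kutcher-Burman NTCP: generalized EUD mapped through a probit."""
    geud = sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Uniform 78 Gy to the whole structure (a single-bin DVH):
tcp_78 = poisson_tcp([78.0], [1.0])
ntcp_78 = lkb_ntcp([78.0], [1.0])
```

    In a treatment-course simulation such as the one described, these functions would be re-evaluated on each virtual course's accumulated CTV/OAR dose, yielding the TCP/NTCP distributions the abstract compares against the planned values.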

  9. Simulation Results for PCM/PM/NRZ Receivers in Non-Ideal Channels

    NASA Technical Reports Server (NTRS)

    Anabtawi, Aseel

    1995-01-01

    This paper studies, by computer simulation, the performance of deep-space telemetry signals that employ the PCM/PM/NRZ modulation technique under the separate and combined effects of an unbalanced data stream, data asymmetry, and a band-limited channel. The study is based on measuring the symbol error rate (SER) performance and comparing the results to the theoretical results presented in previous reports [1,2]. Only the effects of imperfect carrier tracking due to an imperfect data stream are considered.

  10. Retained gas sampler extractor mixing and mass transfer rate study: Experimental and simulation results

    SciTech Connect

    Recknagle, K.P.; Bates, J.M.; Shekarriz, A.

    1997-11-01

    Research staff at Pacific Northwest National Laboratory conducted experimental testing and computer simulations of the impeller-stirred Retained Gas Sampler (RGS) gas extractor system. This work was performed to verify experimentally the effectiveness of the extractor at mixing viscous fluids of both Newtonian and non-Newtonian rheology representative of Hanford single- and double-shell wastes, respectively. Developing the computational models and validating their results by comparing them with experimental results would enable simulations of the mixing process for a range of fluid properties and mixing speeds. Five tests were performed with a full-scale, optically transparent model extractor to provide the data needed to compare mixing times for fluid rheology, mixer rotational direction, and mixing speed variation. The computer model was developed and exercised to simulate the tests. The tests demonstrated that rotational direction of the pitched impeller blades was not as important as fluid rheology in determining mixing time. The Newtonian fluid required at least six hours to mix at the hot cell operating speed of 3 rpm, and the non-Newtonian fluid required at least 46 hours at 3 rpm to become significantly mixed. In the non-Newtonian fluid tests, stagnant regions within the fluid sometimes required days to be fully mixed. Higher-speed (30 rpm) testing showed that the laminar mixing time was correlated to mixing speed. The tests demonstrated that, using the RGS extractor and current procedures, complete mixing of the waste samples in the hot cell should not be expected. The computer simulation of Newtonian fluid mixing gave results comparable to the test while simulation of non-Newtonian fluid mixing would require further development. In light of the laboratory test results, detailed parametric analysis of the mixing process was not performed.

  11. Simulation Results for the New NSTX HHFW Antenna Straps Design by Using Microwave Studio

    SciTech Connect

    Kung, C C; Brunkhorst, C; Greenough, N; Fredd, E; Castano, A; Miller, D; D'Amico, G; Yager, R; Hosea, J; Wilson, J R; Ryan, P

    2009-05-26

    Experimental results have shown that the high harmonic fast wave (HHFW) at 30 MHz can provide substantial plasma heating and current drive for the NSTX spherical tokamak operation. However, the present antenna strap design rarely achieves the design goal of delivering the full transmitter capability of 6 MW to the plasma. In order to deliver more power to the plasma, a new antenna strap design and the associated coaxial line feeds are being constructed. This new antenna strap design features two feedthroughs to replace the old single feed-through design. In the design process, CST Microwave Studio has been used to simulate the entire new antenna strap structure including the enclosure and the Faraday shield. In this paper, the antenna strap model and the simulation results will be discussed in detail. The test results from the new antenna straps with their associated resonant loops will be presented as well.

  12. Results of aerodynamic testing of large-scale wing sections in a simulated natural rain environment

    NASA Technical Reports Server (NTRS)

    Bezos, Gaudy M.; Dunham, R. Earl, Jr.; Campbell, Bryan A.; Melson, W. Edward, Jr.

    1990-01-01

    The NASA Langley Research Center has developed a large-scale ground testing capability for evaluating the effect of heavy rain on airfoil lift. The paper presents the results obtained at the Langley Aircraft Landing Dynamics Facility on a 10-foot chord NACA 64-210 wing section equipped with a leading-edge slat and double-slotted trailing-edge flap deflected to simulate landing conditions. Aerodynamic lift data were obtained with and without the rain simulation system turned on for an angle-of-attack range of 7.5 to 19.5 deg and for two rainfall conditions: 9 in/hr and 40 in/hr. The results are compared to and correlated with previous small-scale wind tunnel results for the same airfoil section. It appears that, to first order, scale effects are not large, and the wind tunnel research technique can be used to predict rain effects on airplane performance.

  13. High-Alpha Research Vehicle Lateral-Directional Control Law Description, Analyses, and Simulation Results

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Murphy, Patrick C.; Lallman, Frederick J.; Hoffler, Keith D.; Bacon, Barton J.

    1998-01-01

    This report contains a description of a lateral-directional control law designed for the NASA High-Alpha Research Vehicle (HARV). The HARV is a F/A-18 aircraft modified to include a research flight computer, spin chute, and thrust-vectoring in the pitch and yaw axes. Two separate design tools, CRAFT and Pseudo Controls, were integrated to synthesize the lateral-directional control law. This report contains a description of the lateral-directional control law, analyses, and nonlinear simulation (batch and piloted) results. Linear analysis results include closed-loop eigenvalues, stability margins, robustness to changes in various plant parameters, and servo-elastic frequency responses. Step time responses from nonlinear batch simulation are presented and compared to design guidelines. Piloted simulation task scenarios, task guidelines, and pilot subjective ratings for the various maneuvers are discussed. Linear analysis shows that the control law meets the stability margin guidelines and is robust to stability and control parameter changes. Nonlinear batch simulation analysis shows the control law exhibits good performance and meets most of the design guidelines over the entire range of angle-of-attack. This control law (designated NASA-1A) was flight tested during the Summer of 1994 at NASA Dryden Flight Research Center.

  14. Results from tight and loose coupled multiphysics in nuclear fuels performance simulations using BISON

    SciTech Connect

    Novascone, S. R.; Spencer, B. W.; Andrs, D.; Williamson, R. L.; Hales, J. D.; Perez, D. M.

    2013-07-01

    The behavior of nuclear fuel in the reactor environment is affected by multiple physics, most notably heat conduction and solid mechanics, which can have a strong influence on each other. To provide credible solutions, a fuel performance simulation code must have the ability to obtain solutions for each of the physics, including the coupling between them. Solution strategies for solving systems of coupled equations can be categorized as loosely coupled, where the individual physics are solved separately, keeping the solutions for the other physics fixed at each iteration, or tightly coupled, where the nonlinear solver simultaneously drives down the residual for each physics, taking into account the coupling between the physics in each nonlinear iteration. In this paper, we compare the performance of loosely and tightly coupled solution algorithms for thermomechanical problems involving coupled thermal and mechanical contact, which is a primary source of interdependence between thermal and mechanical solutions in fuel performance models. The results indicate that loosely coupled simulations require significantly more nonlinear iterations and may lead to convergence trouble when the thermal conductivity of the gap is too small. We also apply the tightly coupled solution strategy to a nuclear fuel simulation of an experiment in a test reactor. The results from these simulations indicate that convergence for either approach may be problem dependent, i.e., there may be problems for which a loosely coupled approach converges where a tightly coupled one will not, and vice versa.
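    The loose-versus-tight distinction can be illustrated on a toy linear two-field problem. The system and coefficients below are invented for illustration and are not BISON's equations:

```python
def loosely_coupled(tol=1e-10, max_iter=100):
    """Picard (alternating) iteration: solve each physics with the other
    field held fixed, then repeat until both fields stop changing."""
    T, u = 0.0, 0.0
    for k in range(1, max_iter + 1):
        T_new = 1.0 + 0.5 * u   # "thermal" solve with displacement u frozen
        u_new = 0.1 * T_new     # "mechanical" solve with temperature T frozen
        if abs(T_new - T) < tol and abs(u_new - u) < tol:
            return T_new, u_new, k
        T, u = T_new, u_new
    return T, u, max_iter

def tightly_coupled():
    """Newton on the full coupled residual; the toy system is linear, so a
    single 2x2 solve converges in one 'nonlinear' iteration:
          T - 0.5*u = 1
       -0.1*T +   u = 0"""
    det = 1.0 - 0.05
    return 1.0 / det, 0.1 / det

T_loose, u_loose, n_iters = loosely_coupled()   # several outer iterations
T_tight, u_tight = tightly_coupled()            # one coupled solve
```

    Both approaches reach the same fixed point here, but the loose iteration needs multiple passes; when the inter-field coupling is strong (the analogue of a low gap conductivity), the Picard contraction factor approaches 1 and the alternating scheme stalls, which mirrors the convergence trouble reported above.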

  15. Results from Tight and Loose Coupled Multiphysics in Nuclear Fuels Performance Simulations using BISON

    SciTech Connect

    S. R. Novascone; B. W. Spencer; D. Andrs; R. L. Williamson; J. D. Hales; D. M. Perez

    2013-05-01

    The behavior of nuclear fuel in the reactor environment is affected by multiple physics, most notably heat conduction and solid mechanics, which can have a strong influence on each other. To provide credible solutions, a fuel performance simulation code must have the ability to obtain solutions for each of the physics, including the coupling between them. Solution strategies for solving systems of coupled equations can be categorized as loosely coupled, where the individual physics are solved separately, keeping the solutions for the other physics fixed at each iteration, or tightly coupled, where the nonlinear solver simultaneously drives down the residual for each physics, taking into account the coupling between the physics in each nonlinear iteration. In this paper, we compare the performance of loosely and tightly coupled solution algorithms for thermomechanical problems involving coupled thermal and mechanical contact, which is a primary source of interdependence between thermal and mechanical solutions in fuel performance models. The results indicate that loosely coupled simulations require significantly more nonlinear iterations and may lead to convergence trouble when the thermal conductivity of the gap is too small. We also apply the tightly coupled solution strategy to a nuclear fuel simulation of an experiment in a test reactor. The results from these simulations indicate that convergence for either approach may be problem dependent, i.e., there may be problems for which a loosely coupled approach converges where a tightly coupled one will not, and vice versa.

  16. Laboratory simulations of lidar returns from clouds: experimental and numerical results.

    PubMed

    Zaccanti, G; Bruscaglioni, P; Gurioli, M; Sansoni, P

    1993-03-20

    The experimental results of laboratory simulations of lidar returns from clouds are presented. Measurements were carried out on laboratory-scaled cloud models by using a picosecond laser and a streak-camera system. The turbid structures simulating clouds were suspensions of polystyrene spheres in water. The geometrical situation was similar to that of an actual lidar sounding a cloud 1000 m distant and with a thickness of 300 m. Measurements were repeated for different concentrations and different sizes of spheres. The results show how the effect of multiple scattering depends on the scattering coefficient and on the phase function of the diffusers. The depolarization introduced by multiple scattering was also investigated. The results were also compared with numerical results obtained by Monte Carlo simulations. Substantially good agreement between numerical and experimental results was found. The measurements showed the adequacy of modern electro-optical systems to study the features of multiple-scattering effects on lidar echoes from atmosphere or ocean by means of experiments on well-controlled laboratory-scaled models. This adequacy provides the possibility of studying the influence of different effects in the laboratory in well-controlled situations.

  17. Rheology of Entangled Polymer Melts: Recent Results from Molecular Dynamics Simulations

    NASA Astrophysics Data System (ADS)

    Larson, Ronald G.

    2010-03-01

    Models for the rheology of entangled polymers, based on the ``tube" model are now open to investigation by molecular dynamics simulations using the Kremer-Grest ``pearl necklace" model of polymers. Here, we present extensive molecular dynamics simulations of the dynamics and stress in entangled melts of branched polymers and of ``binary blends" of diluted long probe chains entangled with a matrix of shorter chains. Direct evidence of ``hierarchical relaxation" is obtained in diffusion of asymmetric star polymers, wherein the rate of slow diffusion of the branch point is controlled by the much faster motion of the attached arm. In studies of binary blends, the ratio of their lengths is varied over a wide range to cover the crossover from the chain reptation regime to tube Rouse motion regime of the long probe chains. Reducing the matrix chain length results in a faster decay of the dynamic structure factor of the probe chains, in good agreement with recent Neutron Spin Echo experiments. The diffusion of the long chains, measured by the mean square displacements of the monomers and the centers of mass of the chains, demonstrates a systematic speed-up relative to the pure reptation behavior expected for monodisperse melts of sufficiently long polymers. On the other hand, the diffusion of the matrix chains is only weakly perturbed by the diluted long probe chains. The simulation results are qualitatively consistent with the theoretical predictions based on constraint release Rouse model, but a detailed comparison reveals the existence of a broad distribution of the disentanglement rates, which is partly confirmed by an analysis of the packing and diffusion of the matrix chains in the tube region of the probe chains. A coarse-grained simulation model based on the tube Rouse motion model with incorporation of the probability distribution of the tube segment jump rates is developed and shows results qualitatively consistent with the fine scale molecular dynamics

  18. Stable water isotope simulation by current land-surface schemes: Results of IPILPS phase 1

    SciTech Connect

    Henderson-Sellers, A.; Fischer, M.; Aleinov, I.; McGuffie, K.; Riley, W.J.; Schmidt, G.A.; Sturm, K.; Yoshimura, K.; Irannejad, P.

    2005-10-31

    Phase 1 of isotopes in the Project for Intercomparison of Land-surface Parameterization Schemes (iPILPS) compares the simulation of two stable water isotopologues (¹H₂¹⁸O and ¹H²H¹⁶O) at the land-atmosphere interface. The simulations are off-line, with forcing from an isotopically enabled regional model for three locations selected to offer contrasting climates and ecotypes: an evergreen tropical forest, a sclerophyll eucalypt forest and a mixed deciduous wood. Here we report on the experimental framework, the quality control undertaken on the simulation results and the method of intercomparisons employed. The small number of available isotopically-enabled land-surface schemes (ILSSs) limits the drawing of strong conclusions but, despite this, there is shown to be benefit in undertaking this type of isotopic intercomparison. Although validation of isotopic simulations at the land surface must await more, and much more complete, observational campaigns, we find that the empirically-based Craig-Gordon parameterization (of isotopic fractionation during evaporation) gives adequately realistic isotopic simulations when incorporated in a wide range of land-surface codes. By introducing two new tools for understanding isotopic variability from the land surface, the Isotope Transfer Function and the iPILPS plot, we show that different hydrological parameterizations cause very different isotopic responses. We show that ILSS-simulated isotopic equilibrium is independent of the total water and energy budget (with respect to both equilibration time and state), but interestingly the partitioning of available energy and water is a function of the models' complexity.

  19. Theoretical simulation of tumour oxygenation and results from acute and chronic hypoxia

    NASA Astrophysics Data System (ADS)

    Dasu, Alexandru; Toma-Dasu, Iuliana; Karlsson, Mikael

    2003-09-01

    The tumour microenvironment is considered to be responsible for the outcome of cancer treatment and therefore it is extremely important to characterize and quantify it. Unfortunately, most of the experimental techniques available now are invasive and generally it is not known how this influences the results. Non-invasive methods on the other hand have a geometrical resolution that is not always suited for the modelling of the tumour response. Theoretical simulation of the microenvironment may be an alternative method that can provide quantitative data for accurately describing tumour tissues. This paper presents a computerized model that allows the simulation of the tumour oxygenation. The model simulates numerically the fundamental physical processes of oxygen diffusion and consumption in a two-dimensional geometry in order to study the influence of the different parameters describing the tissue geometry. The paper also presents a novel method to simulate the effects of diffusion-limited (chronic) hypoxia and perfusion-limited (acute) hypoxia. The results show that all the parameters describing tissue vasculature are important for describing tissue oxygenation. Assuming that vascular structure is described by a distribution of inter-vessel distances, both the average and the width of the distribution are needed in order to fully characterize the tissue oxygenation. Incomplete data, such as distributions measured in a non-representative region of the tissue, may not give relevant tissue oxygenation. Theoretical modelling of tumour oxygenation also allows the separation between acutely and chronically hypoxic cells, a distinction that cannot always be seen with other methods. It was observed that the fraction of acutely hypoxic cells depends not only on the fraction of collapsed blood vessels at any particular moment, but also on the distribution of vessels in space as well. 
All these suggest that theoretical modelling of tissue oxygenation starting from the basic
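    The diffusion-consumption simulation described in this record can be sketched with an explicit finite-difference relaxation on a small grid. Geometry, boundary handling, and all constants below are toy values, not the paper's model:

```python
import numpy as np

def oxygen_map(n=41, n_steps=4000, D=0.2, consumption=0.05, p_vessel=40.0):
    """Explicit finite-difference relaxation of 2-D oxygen diffusion with
    uniform consumption and a single central vessel held at fixed tension.
    Boundaries wrap around (np.roll), which is acceptable for a toy patch."""
    p = np.zeros((n, n))
    vessel = (n // 2, n // 2)
    for _ in range(n_steps):
        lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p)
        p += D * lap - consumption
        p = np.clip(p, 0.0, None)   # oxygen tension cannot go negative
        p[vessel] = p_vessel        # the vessel is a fixed-value source
    return p

p = oxygen_map()
hypoxic_fraction = (p < 2.5).mean()  # chronically hypoxic cells, far from vessels
```

    Extending this to a distribution of vessel positions, and toggling vessels off to mimic transient collapse, gives exactly the chronic-versus-acute separation the abstract describes: chronic hypoxia appears where inter-vessel distances exceed the diffusion range, acute hypoxia where vessels are temporarily closed.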

  20. Examining the results of certain effects of high altitude on soldiers using modeling and simulation.

    PubMed

    von Tersch, Robert; Birch, Harry

    2009-10-01

    Operation Enduring Freedom, conducted in the high mountains of Afghanistan, posed new challenges for U.S. and coalition forces. The high mountains, with elevations up to 25,000 feet and little to no road access, limited the use of combat vehicles and some advanced weaponry. Small-unit actions became the norm, and soldiers experienced the effects of high elevation, where limited oxygen and its debilitating effects negatively impacted unacclimated soldiers. While the effects of high altitude on unacclimated soldiers are well documented, the results of those effects in a combat setting are not as well known. For this study, the authors focused on 3 areas: movement speed, response time, and judgment; used a state-of-the-art constructive modeling and simulation (M&S) tool; simulated a combat engagement between less capable unacclimated and fully capable acclimated soldiers; and captured the results, which showed increased casualties for unacclimated soldiers and decreased casualties for acclimated soldiers. PMID:19891222

  1. Analysis of formation pressure test results in the Mount Elbert methane hydrate reservoir through numerical simulation

    USGS Publications Warehouse

    Kurihara, M.; Sato, A.; Funatsu, K.; Ouchi, H.; Masuda, Y.; Narita, H.; Collett, T.S.

    2011-01-01

    Targeting the methane hydrate (MH) bearing units C and D at the Mount Elbert prospect on the Alaska North Slope, four MDT (Modular Dynamic Formation Tester) tests were conducted in February 2007. The C2 MDT test was selected for history matching simulation in the MH Simulator Code Comparison Study. Through history matching simulation, the physical and chemical properties of unit C were adjusted, which suggested the most likely reservoir properties of this unit. Based on the properties thus tuned, numerical models replicating a "Mount Elbert C2 zone like reservoir", a "PBU L-Pad like reservoir", and a "PBU L-Pad down dip like reservoir" were constructed. The long term production performances of wells in these reservoirs were then forecasted assuming MH dissociation and production by the methods of depressurization, combination of depressurization and wellbore heating, and hot water huff and puff. The predicted cumulative gas production ranges from 2.16 × 10⁶ m³/well to 8.22 × 10⁸ m³/well, depending mainly on the initial temperature of the reservoir and on the production method. This paper describes the details of modeling and history matching simulation. This paper also presents the results of examinations of the effects of reservoir properties on MH dissociation and production performances under the application of the depressurization and thermal methods. © 2010 Elsevier Ltd.

  2. Simulating Late Ordovician deep ocean O2 with an earth system climate model. Preliminary results.

    NASA Astrophysics Data System (ADS)

    D'Amico, Daniel F.; Montenegro, Alvaro

    2016-04-01

    The geological record provides several lines of evidence that point to the occurrence of widespread and long lasting deep ocean anoxia during the Late Ordovician, between about 460-440 million years ago (Ma). While a series of potential causes have been proposed, there is still large uncertainty regarding how the low oxygen levels came about. Here we use the University of Victoria Earth System Climate Model (UVic ESCM) with Late Ordovician paleogeography to verify the impacts of paleogeography, bottom topography, nutrient loading and cycling, and atmospheric concentrations of O2 and CO2 on deep ocean oxygen concentration during the period of interest. Preliminary results are based on 10 simulations (some still ongoing) covering the following parameter space: CO2 concentrations of 2240 to 3780 ppmv (~8x to 13x pre-industrial), atmospheric O2 ranging from 8% to 12% per volume, oceanic PO4 and NO3 loading from present day to double present day, and reductions in wind speed of 50% and 30% (winds are provided as a boundary condition in the UVic ESCM). For most simulations the deep ocean remains well ventilated. While simulations with higher CO2, lower atmospheric O2 and greater nutrient loading generate lower oxygen concentration in the deep ocean, bottom anoxia - here defined as concentrations <10 μmol L⁻¹ - in these cases is restricted to the high-latitude northern hemisphere. Further simulations will address the impact of greater nutrient loads and bottom topography on deep ocean oxygen concentrations.

  3. Recent results from the GISS model of the global atmosphere. [circulation simulation for weather forecasting

    NASA Technical Reports Server (NTRS)

    Somerville, R. C. J.

    1975-01-01

    Large numerical atmospheric circulation models are in increasingly widespread use both for operational weather forecasting and for meteorological research. The results presented here are from a model developed at the Goddard Institute for Space Studies (GISS) and described in detail by Somerville et al. (1974). This model is representative of a class of models, recently surveyed by the Global Atmospheric Research Program (1974), designed to simulate the time-dependent, three-dimensional, large-scale dynamics of the earth's atmosphere.

  4. Scanning L-Band Active Passive (SLAP) - Recent Results from an Airborne Simulator for SMAP

    NASA Technical Reports Server (NTRS)

    Kim, Edward

    2015-01-01

    Scanning L-band Active Passive (SLAP) is a recently-developed NASA airborne instrument specially tailored to simulate the new Soil Moisture Active Passive (SMAP) satellite instrument suite. SLAP conducted its first test flights in December, 2013 and participated in its first science campaign-the IPHEX ground validation campaign of the GPM mission-in May, 2014. This paper will present results from additional test flights and science observations scheduled for 2015.

  5. PRELIMINARY RESULTS FROM A SIMULATION OF QUENCHED QCD WITH OVERLAP FERMIONS ON A LARGE LATTICE.

    SciTech Connect

    BERRUTO, F.; GARRON, N.; HOELBLING, D.; LELLOUCH, L.; REBBI, C.; SHORESH, N.

    2003-07-15

    We simulate quenched QCD with the overlap Dirac operator. We work with the Wilson gauge action at β = 6 on an 18³ × 64 lattice. We calculate quark propagators for a single source point and quark masses ranging from am_q = 0.03 to 0.75. We present here preliminary results based on the propagators for 60 gauge field configurations.

  6. Femtosecond laser for glaucoma treatment: the comparison between simulation and experimentation results on ocular tissue removal

    NASA Astrophysics Data System (ADS)

    Hou, Dong Xia; Ngoi, Bryan K. A.; Hoh, Sek Tien; Koh, Lee Huat K.; Deng, Yuan Zi

    2005-04-01

    In ophthalmology, femtosecond lasers are receiving more attention than ever due to their extremely high intensity and ultrashort pulse duration. They open highly beneficial possibilities for minimizing side effects during surgery, and one specific area is laser surgery for glaucoma treatment. However, the sophisticated femtosecond laser-ocular tissue interaction mechanism hampers the clinical application of femtosecond lasers to treat glaucoma. The potential contribution of this work lies in the fact that this is the first time a modified moving breakdown theory, appropriate for the femtosecond time scale, has been applied to analyze the femtosecond laser-ocular tissue interaction mechanism. Based on this theory, energy deposition and the corresponding temperature increase are studied by both simulation and experiment. A simulation model was developed using Matlab software, and the simulation result was validated through an in-vitro laser-tissue interaction experiment using pig iris. Comparison of the theoretical and experimental results shows that the femtosecond laser can achieve well-defined ocular tissue removal while evidently reducing thermal damage. This result provides a promising potential for the femtosecond laser in glaucoma treatment.

  7. Global Carbon Cycle Inside GISS ModelE GCM: Results of Equilibrium and Transient Simulations.

    NASA Astrophysics Data System (ADS)

    Aleinov, I.; Kiang, N. Y.; Romanou, A.; Puma, M. J.; Kharecha, P.; Moorcroft, P. R.; Kim, Y.

    2008-12-01

    We present simulation results for a fully coupled carbon cycle inside the ModelE General Circulation Model (GCM) developed at the NASA Goddard Institute for Space Studies (GISS). The current implementation utilizes the GISS dynamical atmospheric core coupled to the HYCOM ocean model. The atmospheric core uses a Quadratic Upstream Scheme (QUS) for advection of gas tracers, while HYCOM has its own built-in algorithm for advection of ocean tracers. The land surface part of the model consists of the GISS ground hydrology model coupled to the Ent dynamic global terrestrial ecosystem model. An ocean biogeochemistry model based on Watson Gregg's model was implemented inside the HYCOM ocean model. Together with ocean tracer transport, it describes all aspects of the carbon cycle inside the ocean and provides CO2 fluxes for exchange with the atmosphere. CO2 fluxes from land vegetation are provided by the Ent model, which employs the well-known photosynthesis relationships of Farquhar, von Caemmerer, and Berry and the stomatal conductance model of Ball and Berry. Soil CO2 fluxes are also computed by the Ent model, according to the CASA soil biogeochemistry model. We present results of fully coupled GCM simulations as well as off-line tests for different components. For the GCM simulations, we present results of both equilibrium and transient runs and discuss the implications of biases in GCM-predicted climate for accurate modeling of the carbon cycle.

  8. Spatial resolution effect on the simulated results of watershed scale models

    NASA Astrophysics Data System (ADS)

    Epelde, Ane; Antiguedad, Iñaki; Brito, David; Jauch, Eduardo; Neves, Ramiro; Sauvage, Sabine; Sánchez-Pérez, José Miguel

    2016-04-01

    Numerical models are useful tools for water resources planning, development and management. Their use is currently spreading, and more complex modeling systems are being employed for these purposes. The added complexity allows the simulation of water quality related processes. Nevertheless, it implies a considerable increase in computational requirements, which is usually compensated in the models by a decrease in their spatial resolution. The spatial resolution of the models is known to affect the simulation of hydrological processes and therefore also the nutrient exportation and cycling processes. However, the implications of the spatial resolution for the simulated results are rarely assessed. In this study, we examine the effect of the change in grid size on the integrated and distributed results of the Alegria River watershed model (Basque Country, Northern Spain). Variables such as discharge, water table level, relative water content of soils, nitrogen exportation and denitrification are analyzed in order to quantify the uncertainty involved in the spatial discretization of watershed scale models. This is an aspect that needs to be carefully considered when numerical models are employed in watershed management studies or quality programs.

  9. Molecular simulation of aqueous electrolytes: water chemical potential results and Gibbs-Duhem equation consistency tests.

    PubMed

    Moučka, Filip; Nezbeda, Ivo; Smith, William R

    2013-09-28

    This paper deals with molecular simulation of the chemical potentials in aqueous electrolyte solutions for the water solvent and its relationship to chemical potential simulation results for the electrolyte solute. We use the Gibbs-Duhem equation linking the concentration dependence of these quantities to test the thermodynamic consistency of separate calculations of each quantity. We consider aqueous NaCl solutions at ambient conditions, using the standard SPC/E force field for water and the Joung-Cheatham force field for the electrolyte. We calculate the water chemical potential using the osmotic ensemble Monte Carlo algorithm by varying the number of water molecules at a constant amount of solute. We demonstrate numerical consistency of these results in terms of the Gibbs-Duhem equation in conjunction with our previous calculations of the electrolyte chemical potential. We present the chemical potential vs molality curves for both solvent and solute in the form of appropriately chosen analytical equations fitted to the simulation data. As a byproduct, in the context of the force fields considered, we also obtain values for the Henry convention standard molar chemical potential for aqueous NaCl using molality as the concentration variable and for the chemical potential of pure SPC/E water. These values are in reasonable agreement with the experimental values.
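The Gibbs-Duhem consistency test described in this abstract can be sketched numerically. For a binary solution at constant T and P, per kilogram of solvent, (1/M_w) dμ_w + m dμ_s = 0, so a fitted solute curve μ_s(m) determines μ_w(m) up to a constant. The toy μ_s below (ideal dissociation into two ions plus an arbitrary linear correction) is purely illustrative and is not the authors' fitted simulation data:

```python
import numpy as np

M_W = 0.018015          # kg/mol, molar mass of water
RT = 8.314 * 298.15     # J/mol at ambient temperature

def mu_solute(m):
    """Toy electrolyte chemical potential vs molality m (mol/kg):
    ideal 1:1 dissociation term plus an arbitrary linear correction."""
    return 2.0 * RT * np.log(m) + 500.0 * m

def mu_water_from_gibbs_duhem(m, mu_w_ref=0.0):
    """Integrate the Gibbs-Duhem relation
        (1/M_W) d(mu_w) + m d(mu_s) = 0
    from m[0] to each grid point by the trapezoid rule:
        mu_w(m) = mu_w_ref - M_W * integral of m' * dmu_s/dm' dm'."""
    dmu_s = np.gradient(mu_solute(m), m)          # numerical dmu_s/dm
    integrand = m * dmu_s
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(m)
    return mu_w_ref - M_W * np.concatenate(([0.0], np.cumsum(steps)))
```

For this toy μ_s the integral has the closed form μ_w(m) = μ_w_ref − M_W [2RT(m − m₀) + 250(m² − m₀²)], which gives a direct check that the numerical integration reproduces the Gibbs-Duhem-consistent solvent curve.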

  10. Molecular simulation of aqueous electrolytes: Water chemical potential results and Gibbs-Duhem equation consistency tests

    NASA Astrophysics Data System (ADS)

    Moučka, Filip; Nezbeda, Ivo; Smith, William R.

    2013-09-01

    This paper deals with molecular simulation of the chemical potentials in aqueous electrolyte solutions for the water solvent and its relationship to chemical potential simulation results for the electrolyte solute. We use the Gibbs-Duhem equation linking the concentration dependence of these quantities to test the thermodynamic consistency of separate calculations of each quantity. We consider aqueous NaCl solutions at ambient conditions, using the standard SPC/E force field for water and the Joung-Cheatham force field for the electrolyte. We calculate the water chemical potential using the osmotic ensemble Monte Carlo algorithm by varying the number of water molecules at a constant amount of solute. We demonstrate numerical consistency of these results in terms of the Gibbs-Duhem equation in conjunction with our previous calculations of the electrolyte chemical potential. We present the chemical potential vs molality curves for both solvent and solute in the form of appropriately chosen analytical equations fitted to the simulation data. As a byproduct, in the context of the force fields considered, we also obtain values for the Henry convention standard molar chemical potential for aqueous NaCl using molality as the concentration variable and for the chemical potential of pure SPC/E water. These values are in reasonable agreement with the experimental values.

  11. El Niño and Greenhouse Warming: Results from Ensemble Simulations with the NCAR CCSM.

    NASA Astrophysics Data System (ADS)

    Zelle, Hein; van Oldenborgh, Geert Jan; Burgers, Gerrit; Dijkstra, Henk

    2005-11-01

    The changes in model ENSO behavior due to an increase in greenhouse gases, according to the Intergovernmental Panel on Climate Change (IPCC) Business-As-Usual scenario, are investigated using a 62-member ensemble 140-yr simulation (1940-2080) with the National Center for Atmospheric Research Community Climate System Model (CCSM; version 1.4). Although the global mean surface temperature increases by about 1.2 K over the period 2000-80, there are no significant changes in the ENSO period, amplitude, and spatial patterns. To explain this behavior, an analysis of the simulation results is combined with results from intermediate-complexity coupled ocean-atmosphere models. It is shown that this version of the CCSM is incapable of simulating a correct meridional extension of the equatorial wind stress response to equatorial SST anomalies. The wind response pattern is too narrow and its strength is insensitive to background SST. This leads to a more stable Pacific climate system, a shorter ENSO period, and a reduced sensitivity of ENSO to global warming.

  12. Preliminary Analysis and Simulation Results of Microwave Transmission Through an Electron Cloud

    SciTech Connect

    Sonnad, Kiran; Sonnad, Kiran; Furman, Miguel; Veitzer, Seth; Stoltz, Peter; Cary, John

    2007-01-12

    The electromagnetic particle-in-cell (PIC) code VORPAL is being used to simulate the transmission of microwave radiation through an electron cloud. The results so far show good agreement with theory for simple cases. The study has been motivated by previous experimental work on this problem at the CERN SPS [1], experiments at the PEP-II Low Energy Ring (LER) at SLAC [4], and proposed experiments at the Fermilab Main Injector (MI). With experimental observation of quantities such as amplitude, phase and spectrum of the output microwave radiation, and with support from simulations for different cloud densities and applied magnetic fields, this technique can prove to be a useful probe for assessing the presence as well as the density of electron clouds.
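The principle behind this diagnostic can be sketched with the standard cold-plasma dispersion relation: a wave at frequency well above the plasma frequency accumulates a phase advance, relative to vacuum, that is proportional to the electron density. The density, frequency, and path length used below are hypothetical illustrative values, not numbers from the cited experiments:

```python
import math

E_CHARGE = 1.602176634e-19   # C
E_MASS = 9.1093837015e-31    # kg
EPS0 = 8.8541878128e-12      # F/m
C = 2.99792458e8             # m/s

def plasma_frequency(n_e):
    """Electron plasma angular frequency (rad/s) for density n_e (m^-3)."""
    return math.sqrt(n_e * E_CHARGE**2 / (EPS0 * E_MASS))

def phase_shift(n_e, f_wave, length):
    """Phase advance (rad), relative to vacuum, of a wave of frequency
    f_wave (Hz) crossing a uniform electron cloud of the given length (m),
    using the unmagnetized cold-plasma dispersion relation."""
    omega = 2.0 * math.pi * f_wave
    wp = plasma_frequency(n_e)
    if wp >= omega:
        raise ValueError("wave is below cutoff; no propagation")
    k_plasma = (omega / C) * math.sqrt(1.0 - (wp / omega) ** 2)
    return (omega / C - k_plasma) * length
```

For tenuous clouds the shift reduces to ω_p²L/(2cω), linear in density, which is what makes the measured phase a probe of the cloud density.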

  13. Simulated cosmic microwave background maps at 0.5 deg resolution: Basic results

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Bennett, C. L.; Kogut, A.

    1995-01-01

    We have simulated full-sky maps of the cosmic microwave background (CMB) anisotropy expected from cold dark matter (CDM) models at 0.5 deg and 1.0 deg angular resolution. Statistical properties of the maps are presented as a function of sky coverage, angular resolution, and instrument noise, and the implications of these results for observability of the Doppler peak are discussed. The rms fluctuations in a map are not a particularly robust probe of the existence of a Doppler peak; however, a full correlation analysis can provide reasonable sensitivity. We find that sensitivity to the Doppler peak depends primarily on the fraction of sky covered, and only secondarily on the angular resolution and noise level. Color plates of the simulated maps are presented to illustrate the anisotropies.

  14. Simulation and experimental results of optical and thermal modeling of gold nanoshells.

    PubMed

    Ghazanfari, Lida; Khosroshahi, Mohammad E

    2014-09-01

    This paper proposes a generalized method for optical and thermal modeling of synthesized magneto-optical nanoshells (MNSs) for biomedical applications. Superparamagnetic magnetite nanoparticles with a diameter of 9.5 ± 1.4 nm are fabricated using the co-precipitation method and subsequently covered by a thin layer of gold to obtain 15.8 ± 3.5 nm MNSs. In this paper, simulations and detailed analysis are carried out for different nanoshell geometries to achieve a maximum heat power. Structural, magnetic and optical properties of the MNSs are assessed using a vibrating sample magnetometer (VSM), X-ray diffraction (XRD), UV-VIS spectrophotometry, dynamic light scattering (DLS), and transmission electron microscopy (TEM). The magnetic saturation of the synthesized magnetite nanoparticles is reduced from 46.94 to 11.98 emu/g after coating with gold. The performance of the proposed optical-thermal modeling technique is verified by simulation and experimental results. PMID:25063109

  15. Experimental and computer simulation results of the spot welding process using SORPAS software

    NASA Astrophysics Data System (ADS)

    Al-Jader, M. A.; Cullen, J. D.; Athi, N.; Al-Shamma'a, A. I.

    2009-07-01

    The highly competitive nature of the automotive industry drives demand for improvements and increased precision engineering in resistance spot welding. Currently there are about 4300 weld points on the average steel vehicle. Current industrial monitoring systems check the quality of the nugget after processing 15 cars, once every two weeks. The nuggets are examined off line using a destructive process, which takes approximately 10 days to complete causing a long delay in the production process. This paper presents a simulation of the spot welding growth curves, along with a comparison to growth curves performed on an industrial spot welding machine. The correlation of experimental results shows that SORPAS simulations can be used as an off line measurement to reduce factory energy usage.

  16. Computer simulation applied to jewellery casting: challenges, results and future possibilities

    NASA Astrophysics Data System (ADS)

    Tiberto, Dario; Klotz, Ulrich E.

    2012-07-01

    Computer simulation has been successfully applied in the past to several industrial processes (such as lost foam and die casting) by larger foundries and direct automotive suppliers, while in the jewellery sector it is a procedure that is not widespread and has been tested mainly in the context of research projects. On the basis of a recently concluded EU project, the authors here present the simulation of investment casting using two different software packages: one for the filling step (Flow-3D®), the other for the solidification (PoligonSoft®). Material characterization work was conducted to obtain the necessary physical parameters for the investment (used for the mold) and for the gold alloys (through thermal analysis). A series of 18k and 14k gold alloys were cast in standard set-ups to provide a series of benchmark trials with embedded thermocouples for temperature measurement, in order to compare and validate the software output in terms of the cooling curves for defined test parts. Results obtained with the simulation included the reduction of micro-porosity through an optimization of the feeding channels for a controlled solidification of the metal: examples of the predicted porosity in the cast parts (with metallographic comparison) will be shown. Considerations on the feasibility of applying casting simulation in the jewellery sector will be presented, underlining the importance of the software parametrization necessary to obtain reliable results, as well as the discrepancies found in the experimental comparison. In addition, an overview of further possibilities for applying CFD in jewellery casting, such as the modeling of the centrifugal and tilting processes, will be presented.

  17. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

    SciTech Connect

    Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L.

    1995-10-01

    This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all of the 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed and found a statistically significant factor of two bias on the average.

  18. RESULTS OF CESIUM MASS TRANSFER TESTING FOR NEXT GENERATION SOLVENT WITH HANFORD WASTE SIMULANT AP-101

    SciTech Connect

    Peters, T.; Washington, A.; Fink, S.

    2011-09-27

    SRNL has performed an Extraction, Scrub, Strip (ESS) test using the next generation solvent and AP-101 Hanford waste simulant. The results indicate that the next generation solvent (MG solvent) has adequate extraction behavior even in the face of a massive excess of potassium. The stripping results indicate poorer behavior, but this may be due to inadequate method detection limits. SRNL recommends further testing using hot tank waste or spiked simulant to provide better detection limits. Furthermore, strong consideration should be given to performing an actual waste, or spiked waste, demonstration using the 2-cm contactor bank. The Savannah River Site currently utilizes a solvent extraction technology to selectively remove cesium from tank waste at the Multi-Component Solvent Extraction unit (MCU). This solvent consists of four components: the extractant - BoBCalixC6, a modifier - Cs-7B, a suppressor - trioctylamine, and a diluent - Isopar L™. This solvent has been used to successfully decontaminate over 2 million gallons of tank waste. However, recent work at Oak Ridge National Laboratory (ORNL), Argonne National Laboratory (ANL), and Savannah River National Laboratory (SRNL) has provided a basis to implement an improved solvent blend. This new solvent blend - referred to as Next Generation Solvent (NGS) - is similar to the current solvent and also contains four components: the extractant - MAXCalix, a modifier - Cs-7B, a suppressor - LIX-79™ guanidine, and a diluent - Isopar L™. Testing to date has shown that this 'Next Generation' solvent promises to provide far superior cesium removal efficiencies and, furthermore, is theorized to perform adequately even in waste with high potassium concentrations, such that it could be used for processing Hanford wastes. SRNL has performed a cesium mass transfer test to confirm this behavior, using a simulant designed to simulate Hanford AP-101 waste.

  19. Testing Friction Laws by Comparing Simulation Results With Experiments of Spontaneous Dynamic Rupture

    NASA Astrophysics Data System (ADS)

    Lu, X.; Lapusta, N.; Rosakis, A. J.

    2005-12-01

    Friction laws are typically introduced either based on theoretical ideas or by fitting laboratory experiments that reproduce only a small subset of possible behaviors. Hence it is important to validate the resulting laws by modeling experiments that produce spontaneous frictional behavior. Here we simulate experiments of spontaneous rupture transition from sub-Rayleigh to supershear done by Xia et al. (Science, 2004). In the experiments, two thin Homalite plates are pressed together along an inclined interface. A compressive load P is applied to the edges of the plates and the rupture is triggered by the explosion of a small wire. Xia et al. (2004) link the transition in their experiments to the Burridge-Andrews mechanism (Andrews, JGR, 1976), which involves initiation of a daughter crack in front of the main rupture. Xia et al. have measured transition lengths for different values of the load P and compared their results with numerical simulations of Andrews, who used linear slip-weakening friction. They conclude that to obtain a good fit they need to assume that the critical slip of the slip-weakening law scales as P^(-1/2), as proposed by Ohnaka (JGR, 2003). Hence our first goal is to verify whether the dependence of the critical slip on the compressive load P is indeed necessary for a good fit to the experimental measurements. To test that, we conducted simulations of the experiments using the boundary integral methodology in its spectral formulation (Perrin et al., 1995; Geubelle and Rice, 1995). We approximately model the wire explosion by a temporary normal stress decrease in a region of the interface comparable to the size of the exploding wire. The simulations show good agreement of the transition length with the experimental results for different values of the load P, even though we keep the critical slip constant. Hence the dependence of the critical slip on P is not necessary to fit the experimental measurements. The inconsistency between Andrews' numerical results
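The linear slip-weakening law referred to in this abstract (Andrews, JGR, 1976) has a simple closed form, and the load scaling of the critical slip that the study tests is equally compact. The sketch below uses illustrative friction coefficients, critical slip, and reference load, not values from the cited experiments:

```python
def slip_weakening_strength(slip, sigma_n, mu_s=0.6, mu_d=0.5, d_c=20e-6):
    """Linear slip-weakening friction: shear strength falls linearly from
    the static level mu_s*sigma_n to the dynamic level mu_d*sigma_n as
    slip accumulates over the critical slip distance d_c (m)."""
    if slip >= d_c:
        return mu_d * sigma_n
    return (mu_s - (mu_s - mu_d) * slip / d_c) * sigma_n

def d_c_ohnaka(pressure, d_c_ref=20e-6, p_ref=10e6):
    """Ohnaka-style scaling tested by Xia et al.: critical slip
    proportional to P^(-1/2), normalized to a reference load p_ref."""
    return d_c_ref * (pressure / p_ref) ** -0.5
```

The study's finding is that simulations with a constant `d_c` already match the measured transition lengths, so the `d_c_ohnaka` scaling is not required by the data.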

  20. SRG110 Stirling Generator Dynamic Simulator Vibration Test Results and Analysis Correlation

    NASA Technical Reports Server (NTRS)

    Suarez, Vicente J.; Lewandowski, Edward J.; Callahan, John

    2006-01-01

    The U.S. Department of Energy (DOE), Lockheed Martin (LM), and NASA Glenn Research Center (GRC) have been developing the Stirling Radioisotope Generator (SRG110) for use as a power system for space science missions. The launch environment enveloping potential missions results in a random input spectrum that is significantly higher than historical RPS launch levels and is a challenge for designers. Analysis presented in prior work predicted that tailoring the compliance at the generator-spacecraft interface reduced the dynamic response of the system thereby allowing higher launch load input levels and expanding the range of potential generator missions. To confirm analytical predictions, a dynamic simulator representing the generator structure, Stirling convertors and heat sources was designed and built for testing with and without a compliant interface. Finite element analysis was performed to guide the generator simulator and compliant interface design so that test modes and frequencies were representative of the SRG110 generator. This paper presents the dynamic simulator design, the test setup and methodology, test article modes and frequencies and dynamic responses, and post-test analysis results. With the compliant interface, component responses to an input environment exceeding the SRG110 qualification level spectrum were all within design allowables. Post-test analysis included finite element model tuning to match test frequencies and random response analysis using the test input spectrum. Analytical results were in good overall agreement with the test results and confirmed previous predictions that the SRG110 power system may be considered for a broad range of potential missions, including those with demanding launch environments.

  1. SRG110 Stirling Generator Dynamic Simulator Vibration Test Results and Analysis Correlation

    NASA Technical Reports Server (NTRS)

    Lewandowski, Edward J.; Suarez, Vicente J.; Goodnight, Thomas W.; Callahan, John

    2007-01-01

    The U.S. Department of Energy (DOE), Lockheed Martin (LM), and NASA Glenn Research Center (GRC) have been developing the Stirling Radioisotope Generator (SRG110) for use as a power system for space science missions. The launch environment enveloping potential missions results in a random input spectrum that is significantly higher than historical radioisotope power system (RPS) launch levels and is a challenge for designers. Analysis presented in prior work predicted that tailoring the compliance at the generator-spacecraft interface reduced the dynamic response of the system thereby allowing higher launch load input levels and expanding the range of potential generator missions. To confirm analytical predictions, a dynamic simulator representing the generator structure, Stirling convertors and heat sources was designed and built for testing with and without a compliant interface. Finite element analysis was performed to guide the generator simulator and compliant interface design so that test modes and frequencies were representative of the SRG110 generator. This paper presents the dynamic simulator design, the test setup and methodology, test article modes and frequencies and dynamic responses, and post-test analysis results. With the compliant interface, component responses to an input environment exceeding the SRG110 qualification level spectrum were all within design allowables. Post-test analysis included finite element model tuning to match test frequencies and random response analysis using the test input spectrum. Analytical results were in good overall agreement with the test results and confirmed previous predictions that the SRG110 power system may be considered for a broad range of potential missions, including those with demanding launch environments.

  2. Development of ADOCS controllers and control laws. Volume 3: Simulation results and recommendations

    NASA Technical Reports Server (NTRS)

    Landis, Kenneth H.; Glusman, Steven I.

    1985-01-01

    The Advanced Cockpit Controls/Advanced Flight Control System (ACC/AFCS) study was conducted by the Boeing Vertol Company as part of the Army's Advanced Digital/Optical Control System (ADOCS) program. Specifically, the ACC/AFCS investigation was aimed at developing the flight control laws for the ADOCS demonstrator aircraft which will provide satisfactory handling qualities for an attack helicopter mission. The three major elements of design considered are as follows: Pilot's integrated Side-Stick Controller (SSC) -- Number of axes controlled; force/displacement characteristics; ergonomic design. Stability and Control Augmentation System (SCAS) -- Digital flight control laws for the various mission phases; SCAS mode switching logic. Pilot's Displays -- For night/adverse weather conditions, the dynamics of the superimposed symbology presented to the pilot in a format similar to the Advanced Attack Helicopter (AAH) Pilot Night Vision System (PNVS) for each mission phase is a function of SCAS characteristics; display mode switching logic. Results of the five piloted simulations conducted at the Boeing Vertol and NASA-Ames simulation facilities are presented in Volume 3. Conclusions drawn from analysis of pilot rating data and commentary were used to formulate recommendations for the ADOCS demonstrator flight control system design. The ACC/AFCS simulation data also provide an extensive data base to aid the development of advanced flight control system design for future V/STOL aircraft.

  3. Flow-driven cloud formation and fragmentation: results from Eulerian and Lagrangian simulations

    NASA Astrophysics Data System (ADS)

    Heitsch, Fabian; Naab, Thorsten; Walch, Stefanie

    2011-07-01

    The fragmentation of shocked flows in a thermally bistable medium provides a natural mechanism to form turbulent cold clouds as precursors to molecular clouds. Yet because of the large density and temperature differences and the range of dynamical scales involved, following this process with numerical simulations is challenging. We compare two-dimensional simulations of flow-driven cloud formation without self-gravity, using the Lagrangian smoothed particle hydrodynamics (SPH) code VINE and the Eulerian grid code PROTEUS. Results are qualitatively similar for both methods, yet the variable spatial resolution of the SPH method leads to smaller fragments and thinner filaments, rendering the overall morphologies different. Thermal and hydrodynamical instabilities lead to rapid cooling and fragmentation into cold clumps with temperatures below 300 K. For clumps more massive than 1 M⊙ pc-1, the clump mass function has an average slope of -0.8. The internal velocity dispersion of the clumps is nearly an order of magnitude smaller than their relative motion, rendering it subsonic with respect to the internal sound speed of the clumps but supersonic as seen by an external observer. For the SPH simulations most of the cold gas resides at temperatures below 100 K, while the grid-based models show an additional, substantial component between 100 and 300 K. Independent of the numerical method, our models confirm that converging flows of warm neutral gas fragment rapidly and form high-density, low-temperature clumps as possible seeds for star formation.

  4. Nonthermal ion acceleration in magnetic reconnection: Results from magnetospheric observations and particle simulations

    NASA Astrophysics Data System (ADS)

    Hirai, Mariko; Hoshino, Masahiro

    Nonthermal ion acceleration in magnetic reconnection is investigated by using spacecraft observations in the Earth's magnetotail and particle-in-cell (PIC) simulations. Magnetic reconnection is believed to be an efficient particle accelerator in various environments in space, such as the pulsar magnetosphere, the solar corona, and the Earth's magnetosphere. The Earth's magnetosphere in particular gives crucial clues for understanding particle acceleration in magnetic reconnection, since precise information on both fields and particles is available from spacecraft observations. Several nonthermal electron acceleration mechanisms, including acceleration around the X-point and in the magnetic pile-up region downstream, have been proposed and tested by recent PIC simulations as well as spacecraft observations. However, nonthermal ion acceleration in magnetic reconnection remains poorly understood in both observational and simulation studies. We report the first direct observational evidence of nonthermal ion acceleration in magnetic reconnection in the Earth's magnetotail, based on Geotail observations. Nonthermal protons accelerated up to several hundred keV exhibit a power-law energy spectrum with a typical spectral index of 3-5. By conducting a statistical study of reconnection events in the Earth's magnetotail, we found efficient ion acceleration when the reconnection electric field is strong. On the other hand, the statistical study indicates that the efficiency of electron acceleration is instead controlled by the thickness of the reconnection current sheet. We also performed PIC simulations of driven reconnection to investigate in detail the acceleration mechanisms of both ions and electrons. Acceleration mechanisms, as well as the conditions necessary for efficient particle acceleration, are discussed based on these results.
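
    The power-law spectral index quoted above (typically 3-5 for the accelerated protons) is conventionally obtained from a straight-line fit to the energy spectrum in log-log space. A minimal sketch of that fit, with entirely synthetic numbers (not Geotail data):

```python
import math

def spectral_index(energies_kev, fluxes):
    """Least-squares slope of log10(flux) vs log10(energy).

    For a power-law spectrum f(E) ~ E**(-s), the fitted slope is -s,
    so the spectral index s is the negated slope."""
    xs = [math.log10(e) for e in energies_kev]
    ys = [math.log10(f) for f in fluxes]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic spectrum with index 4, inside the observed 3-5 range
energies = [100.0, 150.0, 200.0, 300.0, 400.0]   # keV
fluxes = [e ** -4.0 for e in energies]
print(round(spectral_index(energies, fluxes), 2))  # exact power law -> 4.0
```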

  5. Large Area Mountain Permafrost Simulation at DEM Resolution. Results from the European Alps and Himalaya

    NASA Astrophysics Data System (ADS)

    Fiddes, J.

    2015-12-01

    We present a system that is able to simulate land-surface conditions at continental scales while accounting for parameters that vary on the order of tens of metres (e.g., topography or surface cover) by using a statistical subgrid scheme (Fiddes and Gruber 2012). The model chain is driven by output from atmospheric datasets with a simple in-house downscaling scheme that uses only data on atmospheric pressure levels and a DEM (Fiddes and Gruber 2014). The scheme has been tested for mountain permafrost in the European Alps (Fiddes et al. 2015) with good results. However, the strength of the scheme is its application to remote, data-sparse regions. Recently we have applied the scheme to simulate permafrost conditions in the Western Himalaya. This included a simple approach to correcting the snow mass balance using MODIS products, as precipitation input from atmospheric models is often biased. The scheme is flexible in its choice of atmospheric model input data, numerical surface model, and surface data. In this abstract we will (1) present the model chain, (2) show the results of simulating permafrost conditions over large areas using only global datasets as input, and (3) give an outlook on simulating future conditions. Fiddes, J., Endrizzi, S., and Gruber, S. 2015: Large-area land surface simulations in heterogeneous terrain driven by global data sets: application to mountain permafrost, The Cryosphere, 9, 411-426, http://dx.doi.org/10.5194/tc-9-411-2015. Fiddes, J. & Gruber, S. 2014: TopoSCALE v.1.0: downscaling gridded climate data in complex terrain, Geoscientific Model Development, 7, 387-405, http://dx.doi.org/10.5194/gmd-7-387-2014. Fiddes, J. & Gruber, S. 2012: TopoSUB: a tool for efficient large area numerical modelling in complex topography at sub-grid scales, Geoscientific Model Development, 5, 1245-1257, http://dx.doi.org/10.5194/gmd-5-1245-2012.

  6. Profiling wind and greenhouse gases by infrared-laser occultation: results from end-to-end simulations in windy air

    NASA Astrophysics Data System (ADS)

    Plach, A.; Proschek, V.; Kirchengast, G.

    2015-07-01

    The new mission concept of microwave and infrared-laser occultation between low-Earth-orbit satellites (LMIO) is designed to provide accurate and long-term stable profiles of atmospheric thermodynamic variables, greenhouse gases (GHGs), and line-of-sight (l.o.s.) wind speed, with a focus on the upper troposphere and lower stratosphere (UTLS). While the unique quality of GHG retrievals enabled by LMIO over the UTLS has recently been demonstrated based on end-to-end simulations, the promise of l.o.s. wind retrieval, and of joint GHG and wind retrieval, has not yet been analyzed in any realistic simulation setting. Here we use a newly developed l.o.s. wind retrieval algorithm, which we embedded in an end-to-end simulation framework that also includes the retrieval of thermodynamic variables and GHGs, and analyze the performance of both stand-alone wind retrieval and joint wind and GHG retrieval. The wind algorithm utilizes LMIO laser signals placed on the inflection points at the wings of the highly symmetric C18OO absorption line near 4767 cm-1 and exploits transmission differences from the wind-induced Doppler shift. Based on realistic example cases for a diversity of atmospheric conditions, ranging from tropical to high-latitude winter, we find that the retrieved l.o.s. wind profiles are of high quality over the lower stratosphere under all conditions, i.e., unbiased and accurate to within about 2 m s-1 over about 15 to 35 km. The wind accuracy degrades into the upper troposphere due to the decreasing signal-to-noise ratio of the wind-induced differential transmission signals. The GHG retrieval in windy air is not vulnerable to wind speed uncertainties up to about 10 m s-1 but is found to benefit, in the case of higher speeds, from the integrated wind retrieval, which enables correction of the wind-induced Doppler shift of the GHG signals. Overall, both the l.o.s. wind and GHG retrieval results strongly encourage further development and implementation of an LMIO mission.
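
    The differential-transmission principle described here can be sketched in a few lines: a line-of-sight wind Doppler-shifts the absorption line, so the transmissions sampled at the two wing inflection points become unequal. The Gaussian line shape, half-width, and unit peak optical depth below are illustrative assumptions, not the LMIO instrument model:

```python
import math

C = 299792458.0   # speed of light, m/s
NU0 = 4767.0      # line centre, cm^-1 (the C18OO line used by LMIO)
HWHM = 0.05       # assumed illustrative line half-width, cm^-1

def transmission(nu, v_los):
    """Transmission through a Gaussian-shaped line Doppler-shifted by v_los (m/s)."""
    nu_shifted = NU0 * (1.0 + v_los / C)
    tau = math.exp(-((nu - nu_shifted) / HWHM) ** 2)  # peak optical depth = 1
    return math.exp(-tau)

def differential_signal(v_los):
    """Transmission difference between the two wings; the inflection points
    of a Gaussian sit at +/- HWHM/sqrt(2) from line centre."""
    d = HWHM / math.sqrt(2.0)
    return transmission(NU0 - d, v_los) - transmission(NU0 + d, v_los)

# Zero wind gives (numerically) zero differential; the signal grows with
# line-of-sight speed, which is what the retrieval exploits.
print(differential_signal(0.0))
print(differential_signal(10.0) > differential_signal(2.0) > 0.0)
```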

  7. Asymptotic accuracy of two-class discrimination

    SciTech Connect

    Ho, T.K.; Baird, H.S.

    1994-12-31

    Poor-quality (e.g., sparse or unrepresentative) training data is widely suspected to be one cause of the disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.

  8. The Ten Commandments for Translating Simulation Results into Real-Life Performance

    ERIC Educational Resources Information Center

    Wenzler, Ivo

    2009-01-01

    Simulation designers are continuously facing the challenge of determining how much of the expected value the simulation has delivered to the client. Addressing this challenge is not easy, and it requires simulation designers to stretch their comfort zones. This article presents a ten-step approach for meeting simulation objectives and translating…

  9. Optical imaging of alpha emitters: simulations, phantom, and in vivo results

    NASA Astrophysics Data System (ADS)

    Boschi, Federico; Meo, Sergio Lo; Rossi, Pier Luca; Calandrino, Riccardo; Sbarbati, Andrea; Spinelli, Antonello E.

    2011-12-01

    There has been growing interest in investigating both the in vitro and in vivo detection of optical photons from a plethora of beta emitters using optical techniques. In this paper we have investigated an alpha particle induced fluorescence signal by using a commercial CCD-based small animal optical imaging system. The light emission of a 241Am source was simulated using GEANT4 and tested in different experimental conditions including the imaging of in vivo tissue. We believe that the results presented in this work can be useful to describe a possible mechanism for the in vivo detection of alpha emitters used for therapeutic purposes.

  10. Entry, Descent and Landing Systems Analysis: Exploration Class Simulation Overview and Results

    NASA Technical Reports Server (NTRS)

    DwyerCianciolo, Alicia M.; Davis, Jody L.; Shidner, Jeremy D.; Powell, Richard W.

    2010-01-01

    NASA senior management commissioned the Entry, Descent and Landing Systems Analysis (EDL-SA) Study in 2008 to identify and roadmap the Entry, Descent and Landing (EDL) technology investments that the agency needed to make in order to successfully land large payloads at Mars for both robotic and exploration-class (human-scale) missions. The year-one exploration-class mission activity considered technologies capable of delivering a 40-mt (metric ton) payload. This paper provides an overview of the exploration-class mission study, including the technologies considered, the models developed, and initial simulation results from the EDL-SA year-one effort.

  11. Two-dimensional copolymers and multifractality: comparing perturbative expansions, Monte Carlo simulations, and exact results.

    PubMed

    von Ferber, C; Holovatch, Yu

    2002-04-01

    We analyze the scaling laws for a set of two different species of long flexible polymer chains joined together at one of their extremities (copolymer stars) in space dimension D=2. We use a formerly constructed field-theoretic description and compare our perturbative results for the scaling exponents with recent conjectures for exact conformal scaling dimensions derived by a conformal invariance technique in the context of D=2 quantum gravity. A simple Monte Carlo simulation shows reasonable agreement with both approaches. We analyze the remarkable multifractal properties of the spectrum of scaling exponents. PMID:12005898

  12. Results from simulated upper-plenum aerosol transport and aerosol resuspension experiments

    SciTech Connect

    Wright, A.L.; Pattison, W.L.

    1984-01-01

    Recent calculational results published as part of the Battelle-Columbus BMI-2104 source term study indicate that, for some LWR accident sequences, aerosol deposition in the reactor primary coolant system (PCS) can lead to significant reductions in the radionuclide source term. Aerosol transport and deposition in the PCS have been calculated in this study using the TRAP-MELT 2 computer code, which was developed at Battelle-Columbus; the status of validation of the TRAP-MELT 2 code has been described in an Oak Ridge National Laboratory (ORNL) report. The objective of the ORNL TRAP-MELT Validation Project, which is sponsored by the Fuel Systems Behavior Research Branch of the US Nuclear Regulatory Commission, is to conduct simulated reactor-vessel upper-plenum aerosol deposition and transport tests. The results from these tests will be used in the ongoing effort to validate TRAP-MELT 2. The TRAP-MELT Validation Project includes two experimental subtasks. In the Aerosol Transport Tests, aerosol transport in a vertical pipe is being studied; this geometry was chosen to simulate aerosol deposition and transport in the reactor-vessel upper-plenum. To date, four experiments have been performed; the results from these tests are presented in this paper. 7 refs., 4 figs., 4 tabs.

  13. Results from the simulations of geopotential coefficient estimation from gravity gradients

    NASA Astrophysics Data System (ADS)

    Bettadpur, S.; Schutz, B. E.; Lundberg, J. B.

    New information on the short- and medium-wavelength components of the geopotential is expected from the measurements of gravity gradients made by the future ESA Aristoteles and NASA Superconducting Gravity Gradiometer missions. In this paper, results are presented from preliminary simulations of the estimation of the spherical harmonic coefficients of the geopotential expansion from gravity gradient data. Numerical issues in the brute-force inversion (BFI) of the gravity gradient data are examined, and numerical algorithms are developed that substantially speed up the computation of the potential, acceleration, and gradients, as well as the mapping from the gravity gradients to the geopotential coefficients. The solution of a large least squares problem is also examined, and computational requirements are determined for the implementation of a large-scale inversion. A comparative analysis of the results from the BFI and a symmetry method is reported for test simulations of the estimation of a degree and order 50 gravity field. The results from the two, in the presence of white noise, compare well. The latter method is implemented on a special, axially symmetric surface that fits the orbit to within 380 meters.
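
    The brute-force inversion discussed above is, at its core, a large linear least-squares problem mapping geopotential coefficients to gradient observations. A toy-scale sketch of the normal-equations approach (the design matrix and data are synthetic; a real solution involves thousands of coefficients and far more careful numerics):

```python
def lstsq_normal(A, y):
    """Solve the normal equations (A^T A) x = A^T y for a small dense
    system by Gaussian elimination -- 'brute-force inversion' in miniature."""
    n = len(A[0])
    # Build A^T A and A^T y
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
           for i in range(n)]
    aty = [sum(A[k][i] * y[k] for k in range(len(A))) for i in range(n)]
    # Gaussian elimination with partial pivoting on the augmented matrix
    M = [row[:] + [b] for row, b in zip(ata, aty)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Toy problem: two "coefficients" observed through four "gradient" samples
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]
y = [2.0, 3.0, 5.0, -1.0]          # consistent with x = (2, 3)
print([round(v, 6) for v in lstsq_normal(A, y)])
```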

  14. JT9D performance deterioration results from a simulated aerodynamic load test

    NASA Technical Reports Server (NTRS)

    Stakolich, E. G.; Stromberg, W. J.

    1981-01-01

    The results of testing to identify the effects of simulated aerodynamic flight loads on JT9D engine performance are presented. The test results were also used to refine previous analytical studies on the impact of aerodynamic flight loads on performance losses. To accomplish these objectives, a JT9D-7AH engine was assembled with average production clearances and new seals as well as extensive instrumentation to monitor engine performance, case temperatures, and blade tip clearance changes. A special loading device was designed and constructed to permit application of known moments and shear forces to the engine by the use of cables placed around the flight inlet. The test was conducted in the Pratt & Whitney Aircraft X-Ray Test Facility to permit the use of X-ray techniques in conjunction with laser blade tip proximity probes to monitor important engine clearance changes. Upon completion of the test program, the test engine was disassembled, and the condition of gas path parts and final clearances were documented. The test results indicate that the engine lost 1.1 percent in thrust specific fuel consumption (TSFC), as measured under sea level static conditions, due to increased operating clearances caused by simulated flight loads. This compares with 0.9 percent predicted by the analytical model and previous study efforts.

  15. Finite difference model for aquifer simulation in two dimensions with results of numerical experiments

    USGS Publications Warehouse

    Trescott, Peter C.; Pinder, George Francis; Larson, S.P.

    1976-01-01

    The model will simulate ground-water flow in an artesian aquifer, a water-table aquifer, or a combined artesian and water-table aquifer. The aquifer may be heterogeneous and anisotropic and have irregular boundaries. The source term in the flow equation may include well discharge, constant recharge, leakage from confining beds in which the effects of storage are considered, and evapotranspiration as a linear function of depth to water. The theoretical development includes presentation of the appropriate flow equations and derivation of the finite-difference approximations (written for a variable grid). The documentation emphasizes the numerical techniques that can be used for solving the simultaneous equations and describes the results of numerical experiments using these techniques. Of the three numerical techniques available in the model, the strongly implicit procedure, in general, requires less computer time and has fewer numerical difficulties than do the iterative alternating direction implicit procedure and line successive overrelaxation (which includes a two-dimensional correction procedure to accelerate convergence). The documentation includes a flow chart, program listing, an example simulation, and sections on designing an aquifer model and requirements for data input. It illustrates how model results can be presented on the line printer and pen plotters with a program that utilizes the graphical display software available from the Geological Survey Computer Center Division. In addition the model includes options for reading input data from a disk and writing intermediate results on a disk.
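
    As a miniature illustration of the iterative techniques the documentation compares, the sketch below solves the steady-state, homogeneous, isotropic special case (Laplace's equation for head) by point successive over-relaxation; the production model instead offers SIP, iterative ADI, and LSOR for the full transient, heterogeneous equation:

```python
def solve_heads_sor(nx, ny, fixed, omega=1.7, tol=1e-8, max_iter=20000):
    """Steady-state heads on a uniform grid for a homogeneous, isotropic
    aquifer (Laplace's equation), via point successive over-relaxation.

    `fixed` maps (i, j) cells to prescribed constant-head values."""
    h = [[0.0] * ny for _ in range(nx)]
    for (i, j), v in fixed.items():
        h[i][j] = v
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, nx - 1):
            for j in range(1, ny - 1):
                if (i, j) in fixed:
                    continue
                # five-point average of the neighbours, then over-relax
                avg = (h[i-1][j] + h[i+1][j] + h[i][j-1] + h[i][j+1]) / 4.0
                new = h[i][j] + omega * (avg - h[i][j])
                max_change = max(max_change, abs(new - h[i][j]))
                h[i][j] = new
        if max_change < tol:
            break
    return h

# Constant heads on the whole boundary, falling linearly from 10 on the
# left to 0 on the right; the interior must relax to the same linear field.
fixed = {(i, j): 10.0 * (4 - i) / 4.0
         for i in range(5) for j in range(5)
         if i in (0, 4) or j in (0, 4)}
h = solve_heads_sor(5, 5, fixed)
print(round(h[2][2], 4))   # midpoint of the linear gradient
```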

  16. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given, along with a listing of the computer program written to implement these techniques. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is presented, as are the results of matrices from the mapping effort of the San Juan National Forest. A method for estimating the sample size required to implement the accuracy assessment procedures is described, and a method for determining the reliability of change detection between two maps of the same area produced at different times is proposed.
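
    The error-matrix statistics underlying such accuracy assessments can be illustrated briefly. The sketch below computes overall accuracy and the KHAT (kappa) statistic from a small confusion matrix; the matrix values are invented for illustration, not taken from the San Juan National Forest work:

```python
def error_matrix_stats(matrix):
    """Overall accuracy and the KHAT (kappa) statistic for a square error
    matrix: rows = classified labels, columns = reference labels."""
    n = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(len(matrix)))
    overall = diag / n
    # chance agreement estimated from the row and column marginals
    chance = sum(sum(matrix[i]) * sum(row[i] for row in matrix)
                 for i in range(len(matrix))) / (n * n)
    kappa = (overall - chance) / (1.0 - chance)
    return overall, kappa

m = [[65, 4, 22],
     [6, 81, 5],
     [0, 11, 85]]
overall, kappa = error_matrix_stats(m)
print(round(overall, 3), round(kappa, 3))
```

Kappa discounts the agreement expected by chance, which is why it is preferred over raw percent-correct when comparing maps with very different class proportions.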

  17. Influence of land use on rainfall simulation results in the Souss basin, Morocco

    NASA Astrophysics Data System (ADS)

    Peter, Klaus Daniel; Ries, Johannes B.; Hssaine, Ali Ait

    2013-04-01

    Situated between the High Atlas and Anti-Atlas, the Souss basin is characterized by dynamic land use change. It is one of the fastest-growing agricultural regions of Morocco. Traditional mixed agriculture is being replaced by extensive plantations of citrus fruits, bananas, and vegetables in monocropping, mainly for the European market. To implement this land use change and further expand the plantations into marginal land formerly unsuitable for agriculture, land levelling by heavy machinery is used to plane the fields and close the widespread gullies. These gully systems cut deep between the plantations and other arable land. Their development began over 400 years ago with the introduction of sugar production. Heavy rainfall events lead to further strong soil and gully erosion in this normally arid region with a mean annual precipitation of 200 mm. Gullies are cutting into the arable land or are re-excavating their old stream courses. On the test sites around the city of Taroudant, a total of 122 rainfall simulations were conducted to analyze the susceptibility of soils to surface runoff and soil erosion under different land uses. A small portable nozzle rainfall simulator is used for the experiments, quantifying runoff and erosion rates on micro-plots with a size of 0.28 m2. A motor pump feeds water, regulated by a flow meter, into a commercial full-cone nozzle at a height of 2 m. The rainfall intensity is maintained at about 40 mm h-1 for each of the 30-min experiments. Ten categories of land use are classified for different stages of levelling, fallow land, cultivation, and rangeland. Results show that mean runoff coefficients and mean sediment loads are significantly higher (1.4 and 3.5 times, respectively) on levelled study sites compared to undisturbed sites. However, the runoff coefficients of all land use types are relatively similar and reach high median values from 39 to 56 %. Only the
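
    The runoff coefficients reported above follow directly from the plot geometry and the applied rainfall: 40 mm h-1 for 30 min on a 0.28 m2 plot delivers 5.6 L of rain. A minimal sketch of the calculation (the measured runoff volume below is synthetic):

```python
def runoff_coefficient(runoff_litres, intensity_mm_h=40.0,
                       duration_min=30.0, plot_area_m2=0.28):
    """Runoff volume as a percentage of the rainfall applied to the plot.
    1 mm of rain over 1 m^2 equals exactly 1 litre."""
    rainfall_litres = intensity_mm_h * (duration_min / 60.0) * plot_area_m2
    return 100.0 * runoff_litres / rainfall_litres

# 40 mm/h for 30 min on 0.28 m^2 applies 5.6 L of rain; a synthetic
# measured runoff of 2.5 L then gives a coefficient inside the reported
# 39-56 % median range.
print(round(runoff_coefficient(2.5), 1))
```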

  18. SZ effects in the Magneticum Pathfinder Simulation: Comparison with the Planck, SPT, and ACT results

    NASA Astrophysics Data System (ADS)

    Dolag, K.; Komatsu, E.; Sunyaev, R.

    2016-08-01

    We calculate the one-point probability density distribution functions (PDF) and the power spectra of the thermal and kinetic Sunyaev-Zeldovich (tSZ and kSZ) effects and the mean Compton Y parameter using the Magneticum Pathfinder simulations, state-of-the-art cosmological hydrodynamical simulations of a large cosmological volume of (896 Mpc/h)^3. These simulations follow in detail the thermal and chemical evolution of the intracluster medium as well as the evolution of super-massive black holes and their associated feedback processes. We construct full-sky maps of tSZ and kSZ from the light-cones out to z = 0.17, and one realisation of an 8.8° × 8.8° deep light-cone out to z = 5.2. The local universe at z < 0.027 is simulated by a constrained realisation. The tail of the one-point PDF of tSZ from the deep light-cone follows a power-law shape with an index of -3.2. Once convolved with the effective beam of Planck, it agrees with the PDF measured by Planck. The predicted tSZ power spectrum agrees with that of the Planck data at all multipoles up to l ≈ 1000, once the calculations are scaled to the Planck 2015 cosmological parameters with Ωm = 0.308 and σ8 = 0.8149. Consistent with the results in the literature, however, we continue to find a tSZ power spectrum at l = 3000 that is significantly larger than that estimated from the high-resolution ground-based data. The simulation predicts a mean fluctuating Compton Y value of Ȳ = 1.18 × 10^-6 for Ωm = 0.272 and σ8 = 0.809. Nearly half (≈5 × 10^-7) of the signal comes from halos below a virial mass of 10^13 M⊙/h. Scaling this to the Planck 2015 parameters, we find Ȳ = 1.57 × 10^-6.

  19. Evolution of star cluster systems in isolated galaxies: first results from direct N-body simulations

    NASA Astrophysics Data System (ADS)

    Rossi, L. J.; Bekki, K.; Hurley, J. R.

    2016-11-01

    The evolution of star clusters is largely affected by the tidal field generated by the host galaxy. It is thus in principle expected that, under the assumption of a `universal' initial cluster mass function, the properties of the evolved present-day mass function of star cluster systems should show a dependence on the properties of the galactic environment in which they evolve. To explore this expectation, a sophisticated model of the tidal field is required in order to study the evolution of star cluster systems in realistic galaxies. Along these lines, in this work we first describe a method developed for coupling N-body simulations of galaxies and star clusters. We then generate a database of galaxy models along the Hubble sequence and calibrate evolutionary equations to the results of direct N-body simulations of star clusters in order to predict the clusters' mass evolution as a function of the galactic environment. We finally apply our methods to explore the properties of evolved `universal' initial cluster mass functions and any dependence on the host galaxy morphology and mass distribution. The preliminary results show that an initial power-law distribution of the masses `universally' evolves into a lognormal distribution, with properties correlated with the stellar mass and stellar mass density of the host galaxy.

  20. RESULTS OF COPPER CATALYZED PEROXIDE OXIDATION (CCPO) OF TANK 48H SIMULANTS

    SciTech Connect

    Peters, T.; Pareizs, J.; Newell, J.; Fondeur, F.; Nash, C.; White, T.; Fink, S.

    2012-08-14

    Savannah River National Laboratory (SRNL) performed a series of laboratory-scale experiments that examined copper-catalyzed hydrogen peroxide (H2O2) aided destruction of organic components, most notably tetraphenylborate (TPB), in Tank 48H simulant slurries. The experiments were designed with an expectation of conducting the process within existing vessels of Building 241-96H with minimal modifications to the existing equipment. Results of the experiments indicate that TPB destruction levels exceeding 99.9% are achievable, dependent on the reaction conditions. The following observations were made with respect to the major processing variables investigated. A lower reaction pH provides faster reaction rates (pH 7 > pH 9 > pH 11); however, pH 9 reactions provide the least quantity of organic residual compounds within the limits of species analyzed. Higher temperatures lead to faster reaction rates and smaller quantities of organic residual compounds. Higher concentrations of the copper catalyst provide faster reaction rates, but the highest copper concentration (500 mg/L) also resulted in the second highest quantity of organic residual compounds. Faster rates of H2O2 addition lead to faster reaction rates and lower quantities of organic residual compounds. Testing with simulated slurries continues. Current testing is examining lower copper concentrations, refined peroxide addition rates, and alternate acidification methods. A revision of this report will provide updated findings with emphasis on defining recommended conditions for similar tests with actual waste samples.

  1. Ion cyclotron instability at Io: Hybrid simulation results compared to in situ observations

    NASA Astrophysics Data System (ADS)

    Šebek, Ondřej; Trávníček, Pavel M.; Walker, Raymond J.; Hellinger, Petr

    2016-08-01

    We present analysis of global three-dimensional hybrid simulations of Io's interaction with Jovian magnetospheric plasma. We apply a single-species model with simplified neutral-plasma chemistry and downscale Io in order to resolve the ion kinetic scales. We consider charge exchange, electron impact ionization, and photoionization by using variable rates of these processes to investigate their impact. Our results are in a good qualitative agreement with the in situ magnetic field measurements for five Galileo flybys around Io. The hybrid model describes ion kinetics self-consistently. This allows us to assess the distribution of temperature anisotropies around Io and thereby determine the possible triggering mechanism for waves observed near Io. We compare simulated dynamic spectra of magnetic fluctuations with in situ observations made by Galileo. Our results are consistent with both the spatial distribution and local amplitude of magnetic fluctuations found in the observations. Cyclotron waves, triggered probably by the growth of ion cyclotron instability, are observed mainly downstream of Io and on the flanks in regions farther from Io where the ion pickup rate is relatively low. Growth of the ion cyclotron instability is governed mainly by the charge exchange rate.

  2. Modelled air pollution levels versus EC air quality legislation - results from high resolution simulation.

    PubMed

    Chervenkov, Hristo

    2013-12-01

    An appropriate method for evaluating the air quality of a certain area is to contrast the actual air pollution levels with the critical ones prescribed in the legislative standards. The application of numerical simulation models for assessing the real air quality status is allowed by the legislation of the European Community (EC). This approach is preferable, especially when the area of interest is relatively big and/or the network of measurement stations is sparse and the available observational data are correspondingly scarce. Such a method is very efficient for assessment studies of this kind due to the continuous spatio-temporal coverage of the obtained results. In this study, the surface-layer concentrations of the harmful substances sulphur dioxide (SO2), nitrogen dioxide (NO2), particulate matter in its coarse (PM10) and fine (PM2.5) fractions, ozone (O3), carbon monoxide (CO), and ammonia (NH3), obtained from modelling simulations with a resolution of 10 km on an hourly basis, are used to calculate the statistical quantities that are compared with the corresponding critical levels prescribed in the EC directives. For some of these (PM2.5, CO and NH3) this is done for the first time at such resolution. The computational grid covers Bulgaria entirely, along with some surrounding territories, and the calculations are made for every year in the period 1991-2000. The results, averaged over the whole time slice, can be treated as representative of the air quality situation in the last decade of the 20th century.
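
    As one concrete example of contrasting model output with a legislative criterion, the sketch below checks the EC hourly NO2 limit value (200 ug/m3, not to be exceeded more than 18 times per calendar year, as in Directive 2008/50/EC) against a synthetic hourly series; the numbers are illustrative, not model output from the study:

```python
def no2_hourly_compliance(hourly_ugm3, limit=200.0, allowed_exceedances=18):
    """Count hours above the NO2 hourly limit value and report whether the
    series stays within the number of exceedances the directive allows."""
    exceedances = sum(1 for c in hourly_ugm3 if c > limit)
    return exceedances, exceedances <= allowed_exceedances

# Synthetic year of hourly values: a low background plus 20 high episodes,
# two more than the 18 exceedances the hourly limit value permits.
series = [60.0] * 8740 + [250.0] * 20
print(no2_hourly_compliance(series))   # (20, False): limit value violated
```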

  3. Newest Results from the Investigation of Polymer-Induced Drag Reduction through Direct Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Dimitropoulos, Costas D.; Beris, Antony N.; Sureshkumar, R.; Handler, Robert A.

    1998-11-01

    This work continues our attempts to elucidate theoretically the mechanism of polymer-induced drag reduction through direct numerical simulations of turbulent channel flow, using an independently evaluated rheological model for the polymer stress. Using appropriate scaling to accommodate effects due to viscoelasticity reveals a great consistency in the results for different combinations of polymer concentration and chain extension. This helps demonstrate that our observations are applicable to very dilute systems, which are currently not possible to simulate. It also reinforces the hypothesis that one of the prerequisites for the phenomenon of drag reduction is a sufficiently enhanced extensional viscosity, corresponding to the level of intensity and duration of the extensional rates typically encountered in the turbulent flow. Moreover, these results motivate a study of the turbulence structure at larger Reynolds numbers and for different periodic computational cell sizes. In addition, the Reynolds stress budgets demonstrate that flow elasticity adversely affects the activities represented by the pressure-strain correlations, leading to a redistribution of turbulent kinetic energy amongst all directions. Finally, we discuss the influence of viscoelasticity in reducing the production of streamwise vorticity.

  4. Multipacting simulation and test results of BNL 704 MHz SRF gun

    SciTech Connect

    Xu W.; Belomestnykh, S.; Ben-Zvi, I.; Cullen, C. et al

    2012-05-20

    The BNL 704 MHz SRF gun has a grooved choke joint to support the photocathode. Due to distortion of the grooves at the choke joint during BCP processing, several multipacting barriers showed up when the gun was tested with a Nb cathode stalk at JLab. We built a setup using the spare large-grain SRF cavity to test and condition the multipacting barriers at BNL with various power sources up to 50 kW. The test is carried out in three stages: testing the cavity performance without a cathode, testing the cavity with the Nb cathode stalk that was used at JLab, and testing the cavity with a copper cathode stalk based on the design for the SRF gun. This paper summarizes the results of multipacting simulations and presents the large-grain cavity test setup and the test results.

  5. Computer simulation results for PCM/PM/NRZ receivers in nonideal channels

    NASA Technical Reports Server (NTRS)

    Anabtawi, A.; Nguyen, T. M.; Million, S.

    1995-01-01

    This article studies, by computer simulations, the performance of deep-space telemetry signals that employ the pulse code modulation/phase modulation (PCM/PM) technique, using nonreturn-to-zero data, under the separate and combined effects of unbalanced data, data asymmetry, and a band-limited channel. The study is based on measuring the symbol error rate performance and comparing the results to the theoretical results presented in previous articles. Only the effects of imperfect carrier tracking due to an imperfect data stream are considered. The presence of an imperfect data stream (unbalanced and/or asymmetric) produces undesirable spectral components at the carrier frequency, creating an imperfect carrier reference that will degrade the performance of the telemetry system. Further disturbance to the carrier reference is caused by the intersymbol interference created by the band-limited channel.

  6. Free space optical communication flight mission: simulations and experimental results on ground level demonstrator

    NASA Astrophysics Data System (ADS)

    Mata Calvo, Ramon; Ferrero, Valter; Camatel, Stefano; Catalano, Valeria; Bonino, Luciana; Toselli, Italo

    2009-05-01

    In the context of the increasing demand for high-speed data links for scientific, planetary exploration and Earth observation missions, the Italian Space Agency (ASI), with Thales Alenia Space as prime contractor, the Polytechnic of Turin and other Italian partners, is developing a program for the feasibility demonstration of an optical communication system, with the goal of a prototype flight mission in the near future. We have designed and analyzed a ground-level bidirectional Free Space Optical Communication (FSOC) breadboard operating at 2.5 Gbit/s and 1550 nm as an emulator of a slant-path link. The breadboard is fully working, and we tested it back-to-back, at 500 m, and at 2.3 km over one month. The distances were chosen to obtain, in a ground-level link, a cumulative turbulence equivalent to that of a slant path. The measurement campaign was conducted during day and night and under several weather conditions (sunny, rainy, windy), so we could work under very different turbulence regimes, from weak to strong. We measured the scintillation both on-axis and off-axis by introducing known misalignments at the terminals, the transmission losses at both path lengths, and the BER at both receivers. We present simulation results for slant and ground-level links that take into account the atmospheric effects (scintillation, beam spread, beam wander and fade probability); comparing them with the ground-level experimental results, we find good agreement. Finally, we discuss the results obtained in the experiments and in the flight mission simulations in order to apply them in the next project phases.
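The weak-to-strong turbulence regimes mentioned above are commonly classified through the Rytov variance. A minimal sketch, assuming the standard plane-wave Rytov variance formula and illustrative Cn2 values (not the campaign's measured data):

```python
import math

def rytov_variance(cn2: float, wavelength_m: float, path_m: float) -> float:
    """Plane-wave Rytov variance: sigma_R^2 = 1.23 * Cn2 * k^(7/6) * L^(11/6)."""
    k = 2.0 * math.pi / wavelength_m  # optical wavenumber
    return 1.23 * cn2 * k ** (7.0 / 6.0) * path_m ** (11.0 / 6.0)

# Illustrative refractive-index structure constants (m^-2/3), not measured ones;
# sigma_R^2 < 1 is conventionally called weak turbulence.
for cn2 in (1e-16, 1e-14, 1e-13):
    s2 = rytov_variance(cn2, wavelength_m=1550e-9, path_m=2300.0)
    regime = "weak" if s2 < 1.0 else "strong"
    print(f"Cn2={cn2:.0e}: sigma_R^2={s2:.3f} ({regime})")
```

For the 2.3 km path at 1550 nm, sweeping Cn2 over a few decades spans both regimes, which is consistent with the day/night, all-weather campaign described above.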

  7. ULF foreshock under radial IMF: THEMIS observations and global kinetic simulation Vlasiator results compared

    NASA Astrophysics Data System (ADS)

    Palmroth, Minna; Vainio, Rami; Archer, Martin; Hietala, Heli; Afanasiev, Alexandr; Kempf, Yann; Hoilijoki, Sanni; von Alfthan, Sebastian

    2015-04-01

    For decades, a certain type of ultra-low-frequency wave with a period of about 30 seconds has been observed in the Earth's quasi-parallel foreshock. These waves, with a wavelength of about one Earth radius, are compressive and propagate at an average angle of 20 degrees with respect to the interplanetary magnetic field (IMF). The latter property has puzzled scientists, as the growth rate of the instability generating the waves is maximized along the magnetic field. So far, these waves have been characterized by single- or multi-spacecraft methods and by 2-dimensional hybrid-PIC simulations, which have not fully reproduced the wave properties. Vlasiator is a newly developed global hybrid-Vlasov simulation, which solves the six-dimensional phase space using the Vlasov equation for protons, while electrons are a charge-neutralising fluid. The outcome of the simulation is a global reproduction of ion-scale physics in a holistic manner, in which the generation of physical features can be followed in time and their consequences can be quantitatively characterised. Vlasiator produces the ion distribution functions and the related kinetic physics in unprecedented detail, on the global magnetospheric scale, with a resolution of a couple of hundred kilometres in ordinary space and 20 km/s in velocity space. We run Vlasiator under a radial IMF in five dimensions, with the three-dimensional velocity space embedded in the ecliptic plane. We observe the generation of the 30-second ULF waves and characterize their evolution and physical properties in time. We compare the results both to THEMIS observations and to quasi-linear theory. We find that Vlasiator reproduces the foreshock ULF waves in all reported observational aspects, i.e., they have the observed wavelength and period, they are compressive, and they propagate obliquely to the IMF. In particular, we discuss the issues related to the long-standing question of oblique propagation.

  8. Tank 241-AZ-101 criticality assessment resulting from pump jet mixing: Sludge mixing simulation

    SciTech Connect

    Onishi, Y.; Recknagle, K.

    1997-04-01

    Tank 241-AZ-101 (AZ-101) is one of 28 double-shell tanks located in the AZ farm in the Hanford Site's 200 East Area. The tank contains a significant quantity of fissile materials, including an estimated 9.782 kg of plutonium. Before beginning jet pump mixing for mitigative purposes, the operations must be evaluated to demonstrate that they will be subcritical under both normal and credible abnormal conditions. The main objective of this study was to address a concern about whether two 300-hp pumps with four rotating 18.3-m/s (60-ft/s) jets can concentrate plutonium in their pump housings during mixer pump operation and cause a criticality. The three-dimensional simulation was performed with the time-varying TEMPEST code to determine how much the pump jet mixing of Tank AZ-101 will concentrate plutonium in the pump housing. The AZ-101 model predicted that the total amount of plutonium within the pump housing peaks at 75 g at 10 simulation seconds and decreases to less than 10 g at four minutes. The plutonium concentration in the entire pump housing peaks at 0.60 g/L at 10 simulation seconds and is reduced to below 0.1 g/L after four minutes. Since the minimum critical concentration of plutonium is 2.6 g/L, and the minimum critical plutonium mass under idealized plutonium-water conditions is 520 g, these predicted maximums in the pump housing are much lower than the minimum plutonium conditions needed to reach a criticality level. The initial plutonium maximum of 1.88 g/L still results in a safety factor of 4.3 in the pump housing during the pump jet mixing operation.
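The subcriticality argument above is a comparison of predicted peaks against minimum critical limits. A sketch using the figures quoted in the abstract (note that the ratio of the 2.6 g/L minimum critical concentration to the 0.60 g/L predicted peak happens to match the stated 4.3 safety factor, but the abstract does not spell out that derivation):

```python
# Figures quoted in the abstract for the Tank AZ-101 jet-mixing simulation.
peak_conc_g_per_L = 0.60     # predicted peak Pu concentration in pump housing
peak_mass_g = 75.0           # predicted peak Pu mass in pump housing
critical_conc_g_per_L = 2.6  # minimum critical Pu concentration
critical_mass_g = 520.0      # minimum critical Pu mass (idealized Pu-water)

# Margins: how far below the critical limits the predicted peaks sit.
conc_margin = critical_conc_g_per_L / peak_conc_g_per_L
mass_margin = critical_mass_g / peak_mass_g

print(f"concentration margin: {conc_margin:.1f}x")
print(f"mass margin: {mass_margin:.1f}x")
```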

  9. Accuracy of TCP performance models

    NASA Astrophysics Data System (ADS)

    Schwefel, Hans Peter; Jobmann, Manfred; Hoellisch, Daniel; Heyman, Daniel P.

    2001-07-01

    Despite the fact that most of today's Internet traffic is transmitted via the TCP protocol, the performance behavior of networks with TCP traffic is still not well understood. Recent research activities have led to a number of performance models for TCP traffic, but the degree of accuracy of these models in realistic scenarios is still questionable. This paper provides a comparison of the results (in terms of average throughput per connection) of three different `analytic' TCP models: I. the throughput formula in [Padhye et al. 98], II. the modified Engset model of [Heyman et al. 97], and III. the analytic TCP queueing model of [Schwefel 01], which is a packet-based extension of (II). Results for all three models are computed for a scenario of N identical TCP sources that transmit data in individual TCP connections of stochastically varying size. The results for the average throughput per connection in the analytic models are compared with simulations of detailed TCP behavior. All of the analytic models are expected to show deficiencies in certain scenarios, since they neglect highly influential parameters of the actual simulated system: Models (I) and (II) only indirectly consider queueing in bottleneck routers, and in certain scenarios these models are not able to adequately describe the impact of buffer space, either qualitatively or quantitatively. Furthermore, (II) is insensitive to the actual distribution of the connection sizes. As a consequence, its predictions are also insensitive to so-called long-range dependent (LRD) properties in the traffic that are caused by heavy-tailed connection size distributions. The simulation results show that such properties cannot be neglected for certain network topologies: LRD properties can even have a counter-intuitive impact on the average goodput, namely the goodput can be higher for small buffer sizes.
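Model (I) above is the widely used loss-based throughput formula of Padhye et al., which predicts steady-state throughput from the loss probability p, round-trip time, and retransmission timeout. A minimal sketch of that formula (parameter names and defaults here are illustrative; b is the number of packets acknowledged per ACK):

```python
import math

def tcp_throughput(mss: float, rtt: float, p: float,
                   t0: float = 1.0, b: int = 2, wmax: float = 65535.0) -> float:
    """Approximate steady-state TCP throughput (bytes/s) for loss rate p,
    following the Padhye et al. formula, capped by the maximum window."""
    if p <= 0.0:
        return wmax / rtt  # no loss: limited by the receiver window
    rate = mss / (rtt * math.sqrt(2.0 * b * p / 3.0)
                  + t0 * min(1.0, 3.0 * math.sqrt(3.0 * b * p / 8.0))
                  * p * (1.0 + 32.0 * p ** 2))
    return min(rate, wmax / rtt)

# Throughput falls steeply as the loss probability grows.
for p in (0.001, 0.01, 0.1):
    print(f"p={p}: {tcp_throughput(1460.0, 0.1, p):.0f} B/s")
```

Note what the paper points out: nothing in this expression models the bottleneck buffer explicitly, so buffer-size effects enter only indirectly through p and the RTT.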

  10. Role of dayside transients in a substorm process: Results from the global kinetic simulation Vlasiator

    NASA Astrophysics Data System (ADS)

    Palmroth, M.; Hoilijoki, S.; Pfau-Kempf, Y.; Hietala, H.; Nishimura, Y.; Angelopoulos, V.; Pulkkinen, T. I.; Ganse, U.; Hannuksela, O.; von Alfthan, S.; Battarbee, M. C.; Vainio, R. O.

    2015-12-01

    We investigate the dayside-nightside coupling of magnetospheric dynamics in a global kinetic simulation covering the entire magnetosphere. We use the newly developed Vlasiator (http://vlasiator.fmi.fi), the world's first global hybrid-Vlasov simulation, which models the ions as distribution functions while electrons are treated as a charge-neutralising fluid. Here, we run Vlasiator in a five-dimensional (5D) setup, where ordinary space is represented by the 2D noon-midnight meridional plane, with the 3D velocity space embedded in each grid cell. This approach combines an improved physical solution with fine resolution, allowing us to investigate kinetic processes as a consequence of the global magnetospheric evolution. The simulation is run under steady southward interplanetary magnetic field. We observe dayside reconnection and the resulting 2D representations of flux transfer events (FTEs). FTEs move tailward and distort the magnetopause, and the largest of them even modify the plasma sheet location. On the nightside, the plasma sheet shows bead-like density enhancements moving slowly earthward. The tailward side of the dipolar field stretches. Strong reconnection initiates first in the near-Earth region, forming a tailward-moving magnetic island that cannibalises other islands forming further down the tail, increasing the island's volume and complexity. After this, several reconnection lines form again in the near-Earth region, resulting in several magnetic islands. At first, none of the earthward-moving islands reach the closed field region, because just tailward of the dipolar region there exists a relatively stable X-line, which is strong enough to push most of the magnetic islands tailward. Finally, however, one of the tailward X-lines becomes strong enough to overcome the X-line nearest to Earth, forming a strong surge into the dipolar field region, as there is nothing left to hold back the propagation of the structure. We investigate this substorm

  11. Simulator sickness during driving simulation studies.

    PubMed

    Brooks, Johnell O; Goodenough, Richard R; Crisler, Matthew C; Klein, Nathan D; Alley, Rebecca L; Koon, Beatrice L; Logan, William C; Ogle, Jennifer H; Tyrrell, Richard A; Wills, Rebekkah F

    2010-05-01

    While driving simulators are a valuable tool for assessing multiple dimensions of driving performance under relatively safe conditions, researchers and practitioners must be prepared for participants who suffer from simulator sickness. This paper describes multiple theories of motion sickness and presents a method for assessing and reacting to simulator sickness symptoms. Results showed that this method identified individuals who were unable to complete a driving simulator study due to simulator sickness with greater than 90% accuracy, and that older participants had a greater likelihood of simulator sickness than younger participants. Possible explanations for the increased symptoms experienced by older participants are discussed, as well as implications for research ethics and simulator sickness prevention.

  12. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6): simulation design and preliminary results

    NASA Astrophysics Data System (ADS)

    Kravitz, B.; Robock, A.; Tilmes, S.; Boucher, O.; English, J. M.; Irvine, P. J.; Jones, A.; Lawrence, M. G.; MacCracken, M.; Muri, H.; Moore, J. C.; Niemeier, U.; Phipps, S. J.; Sillmann, J.; Storelvmo, T.; Wang, H.; Watanabe, S.

    2015-06-01

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more longwave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. This is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  13. Late Pop III Star Formation During the Epoch of Reionization: Results from the Renaissance Simulations

    NASA Astrophysics Data System (ADS)

    Xu, Hao; Norman, Michael L.; O’Shea, Brian W.; Wise, John H.

    2016-06-01

    We present results on the formation of Population III (Pop III) stars at redshift 7.6 from the Renaissance Simulations, a suite of extremely high-resolution and physics-rich radiation transport hydrodynamics cosmological adaptive-mesh refinement simulations of high-redshift galaxy formation performed on the Blue Waters supercomputer. In a survey volume of about 220 comoving Mpc³, we found 14 Pop III galaxies with recent star formation. The surprisingly late formation of Pop III stars is possible due to two factors: (i) the metal enrichment process is local and slow, leaving plenty of pristine gas in the vast volume; and (ii) strong Lyman–Werner radiation from vigorous metal-enriched star formation in early galaxies suppresses Pop III formation in (“not so”) small primordial halos with masses below ~3 × 10⁷ M☉. We quantify the properties of these Pop III galaxies and their Pop III star formation environments. We look for analogs to the recently discovered luminous Ly α emitter CR7, which has been interpreted as a Pop III star cluster within or near a metal-enriched star-forming galaxy. We find and discuss a system similar to this in some respects; however, its Pop III star cluster is far less massive and luminous than CR7 is inferred to be.

  14. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6). Simulation Design and Preliminary Results

    SciTech Connect

    Kravitz, Benjamin S.; Robock, Alan; Tilmes, S.; Boucher, Olivier; English, J.; Irvine, Peter; Jones, Andrew; Lawrence, M. G.; Maccracken, Michael C.; Muri, Helene O.; Moore, John; Niemeier, Ulrike; Phipps, Steven; Sillmann, Jana; Storelvmo, Trude; Wang, Hailong; Watanabe, Shingo

    2015-10-27

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more longwave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. This is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  15. Late Pop III Star Formation During the Epoch of Reionization: Results from the Renaissance Simulations

    NASA Astrophysics Data System (ADS)

    Xu, Hao; Norman, Michael L.; O'Shea, Brian W.; Wise, John H.

    2016-06-01

    We present results on the formation of Population III (Pop III) stars at redshift 7.6 from the Renaissance Simulations, a suite of extremely high-resolution and physics-rich radiation transport hydrodynamics cosmological adaptive-mesh refinement simulations of high-redshift galaxy formation performed on the Blue Waters supercomputer. In a survey volume of about 220 comoving Mpc³, we found 14 Pop III galaxies with recent star formation. The surprisingly late formation of Pop III stars is possible due to two factors: (i) the metal enrichment process is local and slow, leaving plenty of pristine gas in the vast volume; and (ii) strong Lyman-Werner radiation from vigorous metal-enriched star formation in early galaxies suppresses Pop III formation in (“not so”) small primordial halos with masses below ~3 × 10⁷ M☉. We quantify the properties of these Pop III galaxies and their Pop III star formation environments. We look for analogs to the recently discovered luminous Ly α emitter CR7, which has been interpreted as a Pop III star cluster within or near a metal-enriched star-forming galaxy. We find and discuss a system similar to this in some respects; however, its Pop III star cluster is far less massive and luminous than CR7 is inferred to be.

  16. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6): Simulation design and preliminary results

    DOE PAGES

    Kravitz, Benjamin S.; Robock, Alan; Tilmes, S.; Boucher, Olivier; English, J. M.; Irvine, Peter J.; Jones, Andrew; Lawrence, M. G.; MacCracken, Michael C.; Muri, Helene O.; et al

    2015-10-27

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more long wave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. In conclusion, this is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  17. Results of transient simulations of a digital model of the Arikaree Aquifer near Wheatland, southeastern Wyoming

    USGS Publications Warehouse

    Hoxie, Dwight T.

    1979-01-01

    Revised ground-water pumpage data have been imposed on a ground-water flow model previously developed for the Arikaree aquifer in a 400 square-mile area in central Platte County, Wyo. Maximum permitted annual ground-water withdrawals of 750 acre-feet for industrial use were combined with three irrigation-pumping scenarios to predict the long-term effects on ground-water levels and streamflows. Total annual ground-water withdrawals of 8,806 acre-feet, 8,033 acre-feet, and 5,045 acre-feet were predicted to produce average water-level declines of 5 feet or more over areas of 99, 96, and 68 square miles, respectively, at the end of a 40-year simulation period. The first two pumping scenarios were predicted to produce average drawdowns of more than 50 feet over areas of 1.5 and 0.8 square miles, respectively, while the third scenario resulted in average drawdowns of less than 50 feet throughout the study area. In addition, these three pumping scenarios were predicted to cause streamflow reductions of 2.6, 2.0, and 1.4 cubic feet per second, respectively, in the Laramie River and 4.9, 4.7, and 3.7 cubic feet per second, respectively, in the North Laramie River at the end of the 40-year simulation period. (Kosco-USGS)

  18. Statistics of dark matter substructure - II. Comparison of model with simulation results

    NASA Astrophysics Data System (ADS)

    van den Bosch, Frank C.; Jiang, Fangzhou

    2016-05-01

    We compare subhalo mass and velocity functions obtained from different simulations with different subhalo finders among each other, and with predictions from the new semi-analytical model presented in Paper I. We find that subhalo mass functions (SHMFs) obtained using different subhalo finders agree with each other at the level of ˜20 per cent, but only at the low-mass end. At the massive end, subhalo finders that identify subhaloes based purely on density in configuration space dramatically underpredict the subhalo abundances by more than an order of magnitude. These problems are much less severe for subhalo velocity functions (SHVFs), indicating that they arise from issues related to assigning masses to the subhaloes, rather than from detecting them. Overall the predictions from the semi-analytical model are in excellent agreement with simulation results obtained using the more advanced subhalo finders that use information in six-dimensional phase-space. In particular, the model accurately reproduces the slope and host-mass-dependent normalization of both the subhalo mass and velocity functions. We find that the SHMFs and SHVFs have power-law slopes of 0.86 and 2.77, respectively, significantly shallower than what has been claimed in several studies in the literature.
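The quoted slopes describe how steeply subhalo abundance falls with mass and velocity. A minimal sketch evaluating power-law differential abundances dN/dln x ∝ x^(-α) with the slopes reported above (the normalization is arbitrary and for illustration only):

```python
def subhalo_abundance(x: float, slope: float, norm: float = 1.0) -> float:
    """Power-law differential abundance dN/dln(x) = norm * x**(-slope),
    with x = m/M for the mass function or v/V for the velocity function."""
    return norm * x ** (-slope)

ALPHA_MASS = 0.86      # SHMF slope reported in the abstract
ALPHA_VELOCITY = 2.77  # SHVF slope reported in the abstract

# Going one decade down in m/M or v/V multiplies the differential
# abundance by 10**slope.
ratio_mass = subhalo_abundance(0.01, ALPHA_MASS) / subhalo_abundance(0.1, ALPHA_MASS)
ratio_vel = subhalo_abundance(0.01, ALPHA_VELOCITY) / subhalo_abundance(0.1, ALPHA_VELOCITY)
print(f"per-decade growth: mass {ratio_mass:.1f}x, velocity {ratio_vel:.0f}x")
```

The contrast (roughly 7x per decade in mass versus nearly 600x per decade in velocity) is why the velocity function is the far steeper of the two.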

  19. Simulation and Laboratory results of the Hard X-ray Polarimeter: X-Calibur

    NASA Astrophysics Data System (ADS)

    Guo, Qingzhen; Beilicke, M.; Kislat, F.; Krawczynski, H.

    2014-01-01

    X-ray polarimetry promises to give qualitatively new information about high-energy sources, such as binary black hole (BH) systems, microquasars, active galactic nuclei (AGN), GRBs, etc. We designed, built and tested a hard X-ray polarimeter, 'X-Calibur', to be flown in the focal plane of the InFOCuS grazing incidence hard X-ray telescope in 2014. X-Calibur combines a low-Z Compton scatterer with a CZT detector assembly to measure the polarization of 20-80 keV X-rays, making use of the fact that polarized photons Compton scatter preferentially perpendicular to the E field orientation. X-Calibur achieves a high detection efficiency of order unity. We optimized the design of the instrument based on Monte Carlo simulations of polarized and unpolarized X-ray beams and of the most important background components. We have calibrated and tested X-Calibur extensively in the laboratory at Washington University and at the Cornell High-Energy Synchrotron Source (CHESS). Measurements using the highly polarized synchrotron beam at CHESS confirm the polarization sensitivity of the instrument. In this talk we report on the optimization of the instrument design based on Monte Carlo simulations, as well as results of laboratory calibration measurements characterizing the performance of the instrument.
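The measurement principle, that Compton-scattered photons emerge preferentially perpendicular to the polarization direction, can be illustrated with a toy Monte Carlo. This sketch assumes an azimuthal distribution p(φ) ∝ 1 − μ cos 2(φ − φ0); the modulation factor μ and polarization angle φ0 below are illustrative, not instrument values:

```python
import math
import random

random.seed(42)

MU = 0.5    # illustrative modulation factor (not the instrument's)
PHI0 = 0.0  # polarization angle of the incoming beam (radians)

def sample_phi() -> float:
    """Rejection-sample the azimuthal scattering angle from
    p(phi) ∝ 1 - MU*cos(2*(phi - PHI0)): minima along the E field,
    maxima perpendicular to it."""
    while True:
        phi = random.uniform(0.0, 2.0 * math.pi)
        if random.uniform(0.0, 1.0 + MU) <= 1.0 - MU * math.cos(2.0 * (phi - PHI0)):
            return phi

angles = [sample_phi() for _ in range(200_000)]
# Count events scattered closer to perpendicular (phi ~ 90 deg) than
# to parallel (phi ~ 0 deg) with respect to the E field.
perp = sum(1 for a in angles if abs(math.sin(a)) > abs(math.cos(a)))
para = len(angles) - perp
print(f"perpendicular/parallel ratio: {perp / para:.2f}")
```

Histogramming the sampled azimuths and fitting the cos 2φ modulation is, in essence, how a scattering polarimeter recovers the polarization degree and angle.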

  20. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6): Simulation design and preliminary results

    SciTech Connect

    Kravitz, Benjamin S.; Robock, Alan; Tilmes, S.; Boucher, Olivier; English, J. M.; Irvine, Peter J.; Jones, Andrew; Lawrence, M. G.; MacCracken, Michael C.; Muri, Helene O.; Moore, John C.; Niemeier, Ulrike; Phipps, Steven J.; Sillmann, Jana; Storelvmo, Trude; Wang, Hailong; Watanabe, Shingo

    2015-10-27

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more long wave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. In conclusion, this is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  1. The Formation of Asteroid Satellites in Catastrophic Impacts: Results from Numerical Simulations

    NASA Technical Reports Server (NTRS)

    Durda, D. D.; Bottke, W. F., Jr.; Enke, B. L.; Asphaug, E.; Richardson, D. C.; Leinhardt, Z. M.

    2003-01-01

    We have performed new simulations of the formation of asteroid satellites by collisions, using a combination of hydrodynamical and gravitational dynamical codes. This initial work shows that both small satellites and ejected, co-orbiting pairs are produced most favorably by moderate-energy collisions at more direct, rather than oblique, impact angles. Simulations so far seem to be able to produce systems qualitatively similar to known binaries. Asteroid satellites provide vital clues that can help us understand the physics of hypervelocity impacts, the dominant geologic process affecting large main belt asteroids. Moreover, models of satellite formation may provide constraints on the internal structures of asteroids beyond those possible from observations of satellite orbital properties alone. It is probable that most observed main-belt asteroid satellites are by-products of cratering and/or catastrophic disruption events. Several possible formation mechanisms related to collisions have been identified: (i) mutual capture following catastrophic disruption, (ii) rotational fission due to glancing impact and spin-up, and (iii) re-accretion in orbit of ejecta from large, non-catastrophic impacts. Here we present results from a systematic investigation directed toward mapping out the parameter space of the first and third of these three collisional mechanisms.

  2. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6): simulation design and preliminary results

    NASA Astrophysics Data System (ADS)

    Kravitz, B.; Robock, A.; Tilmes, S.; Boucher, O.; English, J. M.; Irvine, P. J.; Jones, A.; Lawrence, M. G.; MacCracken, M.; Muri, H.; Moore, J. C.; Niemeier, U.; Phipps, S. J.; Sillmann, J.; Storelvmo, T.; Wang, H.; Watanabe, S.

    2015-10-01

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more longwave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. This is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  3. Comparison of simulations and experimental results from ICF implosions using capsules of varying surface roughness.

    NASA Astrophysics Data System (ADS)

    Turner, R. E.; Glebov, V.

    2005-10-01

    We have conducted a series of indirect-drive ICF implosion experiments at Omega, using capsules with deliberately roughened surfaces. The 10 atm DD fill capsules had a convergence ratio of 18, higher than previous Nova experiments [M. Marinak et al, Phys. Plasmas 3, 2070 (1996)]; the preheat-shielded, Ge-doped CH ablators had moderately high (˜200) Rayleigh-Taylor growth factors. Each capsule's surface quality was measured using atomic force microscopy. Gated x-ray imaging of the imploded core was used to ensure that basic symmetry was maintained, while `best-surface' capsules were used as controls in every experimental run. Neutron yields were observed to decrease as surface roughness increased. Integrated simulations, including mix modeling, have been performed and are compared to the experimental results. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

  4. Multiple Frequency Contrast Source Inversion Method for Vertical Electromagnetic Profiling: 2D Simulation Results and Analyses

    NASA Astrophysics Data System (ADS)

    Li, Jinghe; Song, Linping; Liu, Qing Huo

    2016-02-01

    A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver of the 2D volume integral equation for the forward computation. The inversion technique combines the efficient FFT algorithm, which speeds up the matrix-vector multiplication, with the stable convergence of the simultaneous multiple frequency CSI iteration. As a result, this method can effectively perform quantitative conductivity image reconstruction for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples are presented to validate the effectiveness and capability of the simultaneous multiple frequency CSI method for a limited array view in VEP.
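
    The efficiency claim above rests on the fact that the discretized volume-integral operator is (block-)Toeplitz, so its matrix-vector product reduces to a convolution that FFTs evaluate in O(N log N) instead of O(N^2). A minimal 1D illustration of that trick (the function name and setup are ours, not the paper's solver):

```python
import numpy as np

def conv_matvec_fft(kernel, x):
    """Apply a convolutional (Toeplitz) operator to x via zero-padded FFTs.
    This O(N log N) product is the step that BCGS-FFT-type volume-integral
    solvers use in place of an O(N^2) dense matrix-vector multiply."""
    n = len(x)
    m = 2 * n  # zero-pad so circular convolution matches linear convolution
    return np.fft.irfft(np.fft.rfft(kernel, m) * np.fft.rfft(x, m), m)[:n]
```

The same zero-padding idea carries over dimension by dimension to the 2D kernels used in a BCGS-FFT forward solver.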

  5. Inverse Comptonization in a Two Component Advective Flow: Results of a Monte Carlo simulation

    SciTech Connect

    Ghosh, Himadri; Chakrabarti, S. K.; Laurent, Philippe

    2008-10-08

    We compute the resultant spectrum due to multiple scattering of soft photons emitted from a Keplerian disk by thermal electrons inside a torus axisymmetrically placed around a black hole. In a two component advective flow model, the post-shock region is similar to a thick accretion disk and the pre-shock sub-Keplerian flow is highly optically thin. As a preliminary run of the Monte Carlo simulation of the system, we assume the CENBOL to be a small (2-14 r_g) thick accretion disk without a cusp to allow bulk motion of the flow. Bulk Motion Comptonization (BMC) has also been added. We show that the spectral behaviour is very similar to what is predicted in Chakrabarti and Titarchuk (1995).

  6. AeroMACS C-Band Interference Modeling and Simulation Results

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey

    2010-01-01

    A new C-band (5091-5150 MHz) airport communications system designated the Aeronautical Mobile Airport Communications System (AeroMACS) is being planned under the Federal Aviation Administration's NextGen program. It is necessary to establish practical limits on AeroMACS transmission power from airports so that the threshold of interference into the Mobile Satellite Service (Globalstar) feeder uplinks is not exceeded. To help provide guidelines for these limits, interference models have been created with the commercial software Visualyse Professional. In this presentation, simulation results are shown for the aggregate interference power at low Earth orbit from AeroMACS transmitters at each of up to 757 airports in the United States, Canada, Mexico, and the surrounding area. Both omni-directional and sectoral antenna configurations were modeled, and the effects of antenna height, beamwidth, and tilt are presented.

  7. Comparison of road load simulator test results with track tests on electric vehicle propulsion system

    NASA Technical Reports Server (NTRS)

    Dustin, M. O.

    1983-01-01

    A special-purpose dynamometer, the road load simulator (RLS), is being used at NASA's Lewis Research Center to test and evaluate electric vehicle propulsion systems developed under DOE's Electric and Hybrid Vehicle Program. To improve correlation between system tests on the RLS and track tests, similar tests were conducted on the same propulsion system on the RLS and on a test track. These tests are compared in this report. Battery current to maintain a constant vehicle speed with a fixed throttle was used for the comparison. Scatter in the data was greater in the track test results. This is attributable to variations in tire rolling resistance and wind effects in the track data. It also appeared that the RLS road load, determined by coastdown tests on the track, was lower than that of the vehicle on the track. These differences may be due to differences in tire temperature.

  8. Solar flare model: Comparison of the results of numerical simulations and observations

    NASA Astrophysics Data System (ADS)

    Podgorny, I. M.; Vashenyuk, E. V.; Podgorny, A. I.

    2009-12-01

    The electrodynamic flare model is based on numerical 3D simulations with the real magnetic field of an active region. An energy of ˜10^32 erg necessary for a solar flare is shown to accumulate in the magnetic field of a coronal current sheet. The thermal X-ray source in the corona results from plasma heating in the current sheet upon reconnection. The hard X-ray sources are located on the solar surface at the loop foot-points. They are produced by the precipitation of electron beams accelerated in field-aligned currents. Solar cosmic rays appear upon acceleration in the electric field along a singular magnetic X-type line. The generation mechanism of the delayed cosmic-ray component is also discussed.

  9. Biofilm formation and control in a simulated spacecraft water system - Two-year results

    NASA Technical Reports Server (NTRS)

    Schultz, John R.; Taylor, Robert D.; Flanagan, David T.; Carr, Sandra E.; Bruce, Rebekah J.; Svoboda, Judy V.; Huls, M. H.; Sauer, Richard L.; Pierson, Duane L.

    1991-01-01

    The ability of iodine to maintain microbial water quality in a simulated spacecraft water system is being studied. An iodine level of about 2.0 mg/L is maintained by passing ultrapure influent water through an iodinated ion exchange resin. Six liters are withdrawn daily and the chemical and microbial quality of the water is monitored regularly. Stainless steel coupons used to monitor biofilm formation are being analyzed by culture methods, epifluorescence microscopy, and scanning electron microscopy. Results from the first two years of operation show a single episode of high bacterial colony counts in the iodinated system. This growth was apparently controlled by replacing the iodinated ion exchange resin. Scanning electron microscopy indicates that the iodine has limited but not completely eliminated the formation of biofilm during the first two years of operation. Significant microbial contamination has been present continuously in a parallel noniodinated system since the third week of operation.

  10. Test Results From a Direct Drive Gas Reactor Simulator Coupled to a Brayton Power Conversion Unit

    NASA Technical Reports Server (NTRS)

    Hervol, David S.; Briggs, Maxwell H.; Owen, Albert K.; Bragg-Sitton, Shannon M.

    2009-01-01

    The Brayton Power Conversion Unit (BPCU) located at NASA Glenn Research Center (GRC) in Cleveland, OH is a closed cycle system incorporating a turboalternator, recuperator, and gas cooler connected by gas ducts to an external gas heater. For this series of tests, the BPCU was modified by replacing the gas heater with the Direct Drive Gas heater (DDG). The DDG uses electric resistance heaters to simulate a fast spectrum nuclear reactor similar to those proposed for space power applications. The combined system's thermal transient behavior was the focus of these tests. The BPCU was operated at various steady state points. At each point it was subjected to transient changes involving shaft rotational speed or DDG electrical input. This paper outlines the changes made to the test unit and describes the testing that took place along with the test results.

  11. Experimental and simulation study results for video landmark acquisition and tracking technology

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Tietz, J. C.; Thomas, H. M.; Lowrie, J. W.

    1979-01-01

    A synopsis of related Earth observation technology is provided, including surface-feature tracking, generic feature classification and landmark identification, and navigation by multicolor correlation. With the advent of the Space Shuttle era, the NASA role takes on new significance in that one can now conceive of dedicated Earth resources missions. The Space Shuttle also provides a unique test bed for evaluating advanced sensor technology like that described in this report. Based on this rationale, the FILE OSTA-1 Shuttle experiment, which grew out of the Video Landmark Acquisition and Tracking (VILAT) activity, was developed; it is described in this report along with the relevant tradeoffs. In addition, a synopsis of FILE computer simulation activity is included, relating to future required capabilities such as landmark registration, reacquisition, and tracking.

  12. Recent Simulation Results on Ring Current Dynamics Using the Comprehensive Ring Current Model

    NASA Technical Reports Server (NTRS)

    Zheng, Yihua; Zaharia, Sorin G.; Lui, Anthony T. Y.; Fok, Mei-Ching

    2010-01-01

    Plasma sheet conditions and electromagnetic field configurations are both crucial in determining ring current evolution and its connection to the ionosphere. In this presentation, we investigate how different plasma sheet distributions affect ring current properties. Results include comparative studies of 1) varying the radial distance of the plasma sheet boundary; 2) varying the local time distribution of the source population; and 3) varying the source spectra. Our results show that a source located farther away leads to a stronger ring current than a source that is closer to the Earth. The local time distribution of the source plays an important role in determining both the radial and azimuthal (local time) location of the ring current peak pressure. We found that post-midnight source locations generally lead to a stronger ring current, in agreement with Lavraud et al. [2008]. However, our results do not exhibit any simple dependence of the local time distribution of the peak ring current (within the lower energy range) on the local time distribution of the source, as suggested by Lavraud et al. [2008]. In addition, we will show how different specifications of the magnetic field in the simulation domain affect ring current dynamics in reference to the 20 November 2007 storm, including initial results from coupling the CRCM with a three-dimensional (3-D) plasma force balance code to achieve self-consistency in the magnetic field.

  13. Simulated microgravity inhibits the proliferation of K562 erythroleukemia cells but does not result in apoptosis

    NASA Astrophysics Data System (ADS)

    Yi, Zong-Chun; Xia, Bing; Xue, Ming; Zhang, Guang-Yao; Wang, Hong; Zhou, Hui-Min; Sun, Yan; Zhuang, Feng-Yuan

    2009-07-01

    Astronauts and experimental animals in space develop the anemia of space flight, but the underlying mechanisms are still unclear. In this study, the impact of simulated microgravity on the proliferation, cell death, cell cycle progression, and cytoskeleton of erythroid progenitor-like K562 leukemia cells was observed. K562 cells were cultured in the NASA Rotary Cell Culture System (RCCS), which was used to simulate microgravity (at 15 rpm). After culture for 24 h, 48 h, 72 h, and 96 h, the densities of cells cultured in the RCCS were only 55.5%, 54.3%, 67.2% and 66.4% of those of the flask-cultured control cells, respectively. The percentages of trypan blue-stained dead cells and of apoptotic cells showed no difference between RCCS-cultured and flask-cultured cells at every time point (from 12 h to 96 h). Compared with flask-cultured cells, RCCS culture induced an accumulation of cells at S phase concomitant with a decrease at G0/G1 and G2/M phases at 12 h. But 12 h later (from 24 h to 60 h), the distribution of cell cycle phases in RCCS-cultured cells no longer differed from that of flask-cultured cells. Consistent with the changes in cell cycle distribution, the levels of intracellular cyclins in RCCS-cultured cells changed at 12 h, including a decrease in cyclin A and increases in cyclins B, D1 and E, and then (from 24 h to 36 h) began to return to control levels. After RCCS culture for 12-36 h, the microfilaments showed uneven and clustered distribution, and the microtubules were highly disorganized. These results indicate that RCCS-simulated microgravity can induce a transient inhibition of proliferation without resulting in apoptosis, which could be involved in the development of space flight anemia. K562 cells could be a useful model for researching the effects of microgravity on the differentiation and proliferation of hematopoietic cells.

  14. Results of Aging Tests of Vendor-Produced Blended Feed Simulant

    SciTech Connect

    Russell, Renee L.; Buchmiller, William C.; Cantrell, Kirk J.; Peterson, Reid A.; Rinehart, Donald E.

    2009-04-21

    The Hanford Tank Waste Treatment and Immobilization Plant (WTP) is procuring through Pacific Northwest National Laboratory (PNNL) a minimum of five 3,500-gallon batches of waste simulant for Phase 1 testing in the Pretreatment Engineering Platform (PEP). To ensure that the quality of the simulant is acceptable, the production method was scaled up from laboratory-prepared simulant through 15-gallon and 250-gallon vendor-prepared simulant before embarking on the vendor's production of the 3500-gallon simulant batch. The 3500-gallon PEP simulant batches were packaged in 250-gallon high molecular weight polyethylene totes at NOAH Technologies. The simulant was stored under controlled environmental conditions in the NOAH Technologies warehouse before blending or shipping. For the 15-gallon, 250-gallon, and 3500-gallon batch 0, the simulant was shipped in ambient-temperature trucks, with shipment requiring nominally 3 days. The 3500-gallon batch 1 traveled in a 70-75°F temperature-controlled truck. Typically the simulant was unloaded into a PEP receiving tank within 24 hours of receipt; the first unloading took longer, and the simulant was stored outside in the interim. Physical and chemical characterization of the 250-gallon batch was necessary to determine the effect of aging on the simulant in transit from the vendor and in storage before its use in the PEP. Therefore, aging tests were conducted on the 250-gallon batch of the vendor-produced PEP blended feed simulant to identify and determine any changes to the physical characteristics of the simulant when in storage. The supernate was also chemically characterized. Four aging scenarios for the vendor-produced blended simulant were studied: 1) stored outside in a 250-gallon tote, 2) stored inside in a gallon plastic bottle, 3) stored inside in a well-mixed 5-L tank, and 4) subjected to extended temperature cycling under summer temperature conditions in a gallon plastic bottle. The following

  15. THE ACCURACY OF USING THE ULYSSES RESULT OF THE SPATIAL INVARIANCE OF THE RADIAL HELIOSPHERIC FIELD TO COMPUTE THE OPEN SOLAR FLUX

    SciTech Connect

    Lockwood, M.; Owens, M.

    2009-08-20

    We survey observations of the radial magnetic field in the heliosphere as a function of position, sunspot number, and sunspot cycle phase. We show that most of the differences between pairs of simultaneous observations, normalized using the square of the heliocentric distance and averaged over solar rotations, are consistent with the kinematic 'flux excess' effect whereby the radial component of the frozen-in heliospheric field is increased by longitudinal solar wind speed structure. In particular, the survey shows that, as expected, the flux excess effect at high latitudes is almost completely absent during sunspot minimum but is almost the same as within the streamer belt at sunspot maximum. We study the uncertainty inherent in the use of the Ulysses result that the radial field is independent of heliographic latitude in the computation of the total open solar flux: we show that, after the kinematic correction for the flux excess effect has been made, the residual errors are smaller than 4.5%, with a most likely value of 2.5%. The importance of this result for understanding the temporal evolution of the open solar flux is reviewed.
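
    The open-flux computation being assessed here can be sketched in a few lines. The function below is an illustrative assumption of ours, not the authors' code; it uses the common convention that the total unsigned open flux is 4π r²⟨|B_r|⟩, which single-point data can supply once the Ulysses latitude-invariance result is accepted:

```python
import numpy as np

def open_solar_flux(b_r_nT, r_AU):
    """Total unsigned open solar flux from single-point radial-field data,
    assuming (per the Ulysses result) that |B_r| * r^2 is independent of
    heliographic latitude. Inputs: B_r samples in nT, distances in AU.
    Note: some authors quote the signed flux threading one hemisphere,
    which is half this value."""
    AU = 1.496e11   # astronomical unit in metres
    nT = 1e-9       # nanotesla in tesla
    br_r2 = np.abs(np.asarray(b_r_nT)) * nT * (np.asarray(r_AU) * AU) ** 2
    return 4.0 * np.pi * np.mean(br_r2)   # in weber
```

With a rotation-averaged |B_r| of about 3 nT at 1 AU (an illustrative value), this yields roughly 8 x 10^14 Wb.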

  16. Research on an expert system for database operation of simulation-emulation math models. Volume 1, Phase 1: Results

    NASA Technical Reports Server (NTRS)

    Kawamura, K.; Beale, G. O.; Schaffer, J. D.; Hsieh, B. J.; Padalkar, S.; Rodriguez-Moscoso, J. J.

    1985-01-01

    The results of the first phase of Research on an Expert System for Database Operation of Simulation/Emulation Math Models are described. Techniques from artificial intelligence (AI) were brought to bear on task domains of interest to NASA Marshall Space Flight Center. One such domain is simulation of spacecraft attitude control systems. Two related software systems were developed and delivered to NASA. One was a generic simulation model for spacecraft attitude control, written in FORTRAN. The second was an expert system which understands the usage of a class of spacecraft attitude control simulation software and can assist the user in running the software. This NASA Expert Simulation System (NESS), written in LISP, contains general knowledge about digital simulation, specific knowledge about the simulation software, and self knowledge.

  17. Free-Flight Test Results of Scale Models Simulating Viking Parachute/Lander Staging

    NASA Technical Reports Server (NTRS)

    Polutchko, Robert J.

    1973-01-01

    This report presents the results of Viking Aerothermodynamics Test D4-34.0. Motion picture coverage of a number of scale-model drop tests provides the data from which time-position characteristics as well as canopy shape and model system attitudes are measured. These data are processed to obtain the instantaneous drag during staging of a model simulating the Viking decelerator system during parachute staging at Mars. Through scaling laws derived prior to the test (Appendixes A and B), these results are used to predict the performance of the Viking decelerator parachute during staging at Mars. The tests were performed at the NASA/Kennedy Space Center (KSC) Vertical Assembly Building (VAB). Model assemblies were dropped 300 feet to a platform in High Bay No. 3. The data consist of an edited master film (negative) which is on permanent file in the NASA/LRC Library. Principal results of this investigation indicate that for Viking parachute staging at Mars: 1. Parachute staging separation distance is always positive and continuously increasing, generally along the descent path. 2. At staging, the parachute drag coefficient is at least 55% of its pre-stage equilibrium value. One quarter minute later, it has recovered to its pre-stage value.

  18. Elastodynamic analysis of a gear pump. Part II: Meshing phenomena and simulation results

    NASA Astrophysics Data System (ADS)

    Mucchi, E.; Dalpiaz, G.; Rivola, A.

    2010-10-01

    A non-linear lumped kineto-elastodynamic model for the prediction of the dynamic behaviour of external gear pumps is presented. It takes into account the most important phenomena involved in the operation of machines of this kind. Two main sources of noise and vibration can be considered: pressure and gear meshing. The fluid pressure distribution on the gears, which is time-varying, is computed and included as a resultant external force and torque acting on the gears. Parametric excitations due to time-varying meshing stiffness, tooth profile errors (obtained by a metrological analysis), backlash effects between meshing teeth, lubricant squeeze, and the possibility of tooth contact on both lines of action were also included. Finally, the torsional stiffness and damping of the driving shaft and the non-linear behaviour of the hydrodynamic journal bearings were also taken into account. Model validation was carried out on the basis of experimental data concerning case accelerations and force reactions. The model can be used to analyse the pump dynamic behaviour and to identify the effects of modifications in design and operation parameters, in terms of vibration and dynamic forces. Part I is devoted to the calculation of the gear eccentricity in the steady-state condition as a result of the balance between mean pressure loads, mean meshing force, and bearing reactions, while in Part II the meshing phenomena are fully explained and the main simulation results are presented.

  19. Near-Infrared Spectroscopic Measurements of Calf Muscle during Walking at Simulated Reduced Gravity - Preliminary Results

    NASA Technical Reports Server (NTRS)

    Ellerby, Gwenn E. C.; Lee, Stuart M. C.; Stroud, Leah; Norcross, Jason; Gernhardt, Michael; Soller, Babs R.

    2008-01-01

    Consideration of lunar and planetary exploration space suit design can be enhanced by investigating the physiologic responses of individual muscles during locomotion in reduced gravity. Near-infrared spectroscopy (NIRS) provides a non-invasive method to study the physiology of individual muscles in ambulatory subjects during reduced gravity simulations. PURPOSE: To investigate calf muscle oxygen saturation (SmO2) and pH during reduced gravity walking at varying treadmill inclines and added mass conditions using NIRS. METHODS: Four male subjects aged 42.3 +/- 1.7 years (mean +/- SE) and weighing 77.9 +/- 2.4 kg walked at a moderate speed (3.2 +/- 0.2 km/h) on a treadmill at inclines of 0, 10, 20, and 30%. Unsuited subjects were attached to a partial gravity simulator which unloaded the subject to simulate body weight plus the additional weight of a space suit (121 kg) in lunar gravity (0.17G). Masses of 0, 11, 23, and 34 kg were added to the subject and then unloaded to maintain constant weight. Spectra were collected from the lateral gastrocnemius (LG), and SmO2 and pH were calculated using previously published methods (Yang et al. 2007, Optics Express; Soller et al. 2008, J Appl Physiol). The effects of incline and added mass on SmO2 and pH were analyzed through repeated measures ANOVA. RESULTS: SmO2 and pH were both unchanged by added mass (p>0.05), so data from trials at the same incline were averaged. LG SmO2 decreased significantly with increasing incline (p=0.003) from 61.1 +/- 2.0% at 0% incline to 48.7 +/- 2.6% at 30% incline, while pH was unchanged by incline (p=0.12). CONCLUSION: Increasing the incline (and thus work performed) during walking causes the LG to extract more oxygen from the blood supply, presumably to support the increased metabolic cost of uphill walking. The lack of an effect of incline on pH may indicate that, while the intensity of exercise has increased, the LG has not reached a level of work above the anaerobic threshold. In these

  20. CZT detectors used in different irradiation geometries: Simulations and experimental results

    SciTech Connect

    Fritz, Shannon G.; Shikhaliev, Polad M.

    2009-04-15

    The purpose of this work was to evaluate potential advantages and limitations of CZT detectors used in surface-on, edge-on, and tilted angle irradiation geometries. Simulations and experimental investigations of the energy spectrum measured by a CZT detector have been performed using different irradiation geometries of the CZT. Experiments were performed using a CZT detector of 10 x 10 mm² size and 3 mm thickness. The detector was irradiated with collimated photon beams from Am-241 (59.5 keV) and Co-57 (122 keV). The edge-scan method was used to measure the detector response function in edge-on illumination mode. The tilted angle mode was investigated with the radiation beam directed to the detector surface at angles of 90°, 15°, and 10°. The Hecht formalism was used to simulate theoretical energy spectra. The parameters used for the simulations were matched to experiment to compare experimental and theoretical results. The tilted angle CZT detector suppressed the tailing of the spectrum and provided an increase in peak-to-total ratio from 38% at 90° to 83% at a 10° tilt angle for 122 keV radiation. The corresponding increase for 59 keV radiation was from 60% at 90° to 85% at a 10° tilt angle. The edge-on CZT detector provided high energy resolution when the beam thickness was much smaller than the thickness of the CZT. The FWHM resolution in edge-on illumination mode was 4.2% for a 122 keV beam of 0.3 mm thickness, and rapidly deteriorated when the thickness of the beam was increased. The energy resolution of the surface-on geometry suffered from a strong tailing effect at photon energies higher than 60 keV. It is concluded that tilted angle CZT provides high energy resolution but is limited to a 1D linear array configuration. The surface-on CZT provides 2D pixel arrays but suffers from tailing effect and charge build up. The edge-on CZT is considered suboptimal as it requires small beam
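
    The Hecht formalism mentioned above relates the collected charge to the interaction depth through the carrier drift lengths λ = μτE; the depth dependence of this collection efficiency is what produces the spectral tailing that the tilted geometry suppresses. A small sketch (the parameter values in the test of this sketch are illustrative, not the detector's measured μτ products):

```python
import numpy as np

def hecht_cce(x, L, mu_tau_e, mu_tau_h, E):
    """Charge collection efficiency of a planar detector via the Hecht
    equation. x: interaction depth from the cathode; L: detector
    thickness; mu_tau_*: carrier mobility-lifetime products; E: bias
    field. Electrons drift the distance L - x to the anode, holes
    drift x back to the cathode."""
    lam_e = mu_tau_e * E   # electron drift length
    lam_h = mu_tau_h * E   # hole drift length
    return (lam_e / L) * (1.0 - np.exp(-(L - x) / lam_e)) + \
           (lam_h / L) * (1.0 - np.exp(-x / lam_h))
```

For CZT-like transport the hole term is the weak link: events occurring far from the cathode lose hole-induced charge, smearing photopeak counts into the low-energy tail.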

  1. Wolter X-Ray Microscope Computed Tomography Ray-Trace Model with Preliminary Simulation Results

    SciTech Connect

    Jackson, J A

    2006-02-27

    code, (5) description of the modeling code, (6) the results of a number of preliminary imaging simulations, and (7) recommendations for future Wolter designs and for further modeling studies.

  2. A rainfall simulation experiment on soil and water conservation measures - Undesirable results

    NASA Astrophysics Data System (ADS)

    Hösl, R.; Strauss, P.

    2012-04-01

    Sediment and nutrient inputs from agriculturally used land into surface waters are one of the main problems concerning surface water quality. On-site soil and water conservation measures have become more and more popular throughout the last decades, and a lot of research has been done on this issue. Numerous studies report rainfall simulation experiments testing different conservation measures such as no till, mulching employing different types of soil cover, and subsoiling practices. Many studies document more or less great success in preventing soil erosion and enhancing water quality by implementing no till and mulching techniques on farmland, but a few studies also indicate higher erosion rates with the implementation of conservation tillage practices (Strauss et al., 2003). In May 2011 we conducted a field rainfall simulation experiment in Upper Austria to test 5 different maize cultivation techniques: no till with rough seedbed, no till with fine seedbed, mulching with disc harrow and rotary harrow, mulching with rotary harrow, and conventional tillage using plough and rotary harrow. Rough seedbed refers to the seedbed preparation at planting of the cover crops. On every plot except the conventionally managed one, cover crops (a mix of Trifolium alexandrinum, Phacelia, Raphanus sativus and Herpestes) were sown in August 2010. All plots were rained on three times with deionised water (<50 μS.cm-1) for one hour at 50 mm.h-1 rainfall intensity. Surface runoff and soil erosion were measured. Additionally, soil cover by mulch was measured, as well as soil texture, bulk density, penetration resistance, surface roughness, and soil water content before and after the simulation. The simulation experiments took place about 2 weeks after the seeding of maize in spring 2011. The most effective cultivation techniques for erosion prevention expectedly proved to be the no till variants; mean erosion rate was about 0.1 kg.h-1, mean surface runoff was 29 l.h-1

  3. A comparison of results from two simulators used for studies of astronaut maneuvering units. [with application to Skylab program

    NASA Technical Reports Server (NTRS)

    Stewart, E. C.; Cannaday, R. L.

    1973-01-01

    A comparison of the results from a fixed-base, six-degree-of-freedom simulator and a moving-base, three-degree-of-freedom simulator was made for a close-in, EVA-type maneuvering task in which visual cues of a target spacecraft were used for guidance. The maneuvering unit (the foot-controlled maneuvering unit of Skylab Experiment T020) employed an on-off acceleration command control system operated entirely by the feet. Maneuvers by two test subjects were made on the fixed-base simulator in six and three degrees of freedom and on the moving-base simulator under uncontrolled and controlled, EVA-type visual cue conditions. Comparisons of pilot ratings and 13 different quantitative parameters from the two simulators are made. Different results were obtained from the two simulators, and the effects of limited degrees of freedom and uncontrolled visual cues are discussed.

  4. Prediction Markets and Beliefs about Climate: Results from Agent-Based Simulations

    NASA Astrophysics Data System (ADS)

    Gilligan, J. M.; John, N. J.; van der Linden, M.

    2015-12-01

    Climate scientists have long been frustrated by the persistent doubts that a large portion of the public expresses toward the scientific consensus on anthropogenic global warming. The political and ideological polarization of this doubt led Vandenbergh, Raimi, and Gilligan [1] to propose that prediction markets for climate change might influence the opinions of those who mistrust the scientific community but do trust the power of markets. We have developed an agent-based simulation of a climate prediction market in which traders buy and sell futures contracts that will pay off at some future year with a value that depends on the global average temperature at that time. The traders form a heterogeneous population with different ideological positions, different beliefs about anthropogenic global warming, and different degrees of risk aversion. We also vary characteristics of the market, including the topology of social networks among the traders, the number of traders, and the completeness of the market. Traders adjust their beliefs about climate according to the gains and losses that they and other traders in their social network experience. This model predicts that if global temperature is predominantly driven by greenhouse gas concentrations, prediction markets will cause traders' beliefs to converge toward correctly accepting anthropogenic warming as real. This convergence is largely independent of the structure of the market and the characteristics of the population of traders. However, it may take considerable time for beliefs to converge. Conversely, if temperature does not depend on greenhouse gases, the model predicts that traders' beliefs will not converge. We will discuss the policy relevance of these results and, more generally, the use of agent-based market simulations for policy analysis regarding climate change, seasonal agricultural weather forecasts, and other applications. [1] MP Vandenbergh, KT Raimi, & JM Gilligan, UCLA Law Rev. 61, 1962 (2014).
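
    The belief-adjustment mechanism described above can be illustrated with a toy update step. This sketch is our own simplification (each trader imitates the most successful neighbour in their social network), not the authors' actual rule:

```python
import numpy as np

def update_beliefs(beliefs, payoffs, adjacency, rate=0.1):
    """Toy belief update for a prediction-market agent model: each trader
    shifts their belief toward that of the best-performing trader in
    their social network, by a fraction `rate` of the difference."""
    new = beliefs.copy()
    for i in range(len(beliefs)):
        neighbours = np.flatnonzero(adjacency[i])
        if neighbours.size == 0:
            continue
        best = neighbours[np.argmax(payoffs[neighbours])]
        if payoffs[best] > payoffs[i]:
            new[i] += rate * (beliefs[best] - beliefs[i])
    return new
```

Iterating such a step, traders whose beliefs produce losing trades drift toward the beliefs of profitable neighbours, which is the qualitative route to the convergence behaviour the abstract reports.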

  5. Democratic Population Decisions Result in Robust Policy-Gradient Learning: A Parametric Study with GPU Simulations

    PubMed Central

    Richmond, Paul; Buesing, Lars; Giugliano, Michele; Vasilaki, Eleni

    2011-01-01

    High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task, and architectural limitations raise the question of whether investing effort in this direction may be worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which the said architecture and learning rule demonstrate best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a “non-democratic” mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons “vote” independently (“democratically”) for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that a speed improvement of 5x up to 42x is achieved versus optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search of learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated. PMID:21572529

  6. Initial quality performance results using a phantom to simulate chest computed radiography.

    PubMed

    Muhogora, Wilbroad; Padovani, Renato; Msaki, Peter

    2011-01-01

    The aim of this study was to develop a homemade phantom for quantitative quality control in chest computed radiography (CR). The phantom was constructed from copper, aluminium, and polymethylmethacrylate (PMMA) plates as well as Styrofoam materials. The literature suggests that, in suitable combinations, these materials can simulate the attenuation and scattering characteristics of lung, heart, and mediastinum. The lung, heart, and mediastinum regions were simulated by 10 mm x 10 mm x 0.5 mm, 10 mm x 10 mm x 0.5 mm, and 10 mm x 10 mm x 1 mm copper plates, respectively. A 100 mm x 100 mm, 0.2 mm thick copper test object was positioned at each region for contrast-to-noise ratio (CNR) measurements. The phantom was exposed to x-rays generated by different tube potentials that covered the settings in clinical use: 110-120 kVp (HVL = 4.26-4.66 mm Al) at a source-image distance (SID) of 180 cm. An approach similar to the method recommended in digital mammography was applied to determine the CNR values of phantom images produced by a Kodak CR 850A system with post-processing turned off. Subjective contrast-detail studies were also carried out using images of a Leeds TOR CDR test object acquired under the same exposure conditions as the CNR measurements. For clinical kVp settings relevant to chest radiography, the CNR was highest over the 90-100 kVp range, and the CNR data correlated with the results of the contrast-detail observations. The clinical tube potentials at which CNR is highest are regarded as the optimal kVp settings. The simplicity of the phantom construction allows easy implementation of a related quality control program. PMID:21430855
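The CNR metric used above is commonly defined as the signal-background mean difference divided by the background noise. A minimal sketch on a synthetic image (all numbers illustrative, not calibrated to the CR system described):

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio: |mean(signal) - mean(background)| / std(background)."""
    sig, bg = image[signal_mask], image[background_mask]
    return abs(sig.mean() - bg.mean()) / bg.std()

# Synthetic frame: flat noisy background plus a slightly brighter test-object region.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, size=(64, 64))
img[20:40, 20:40] += 10.0                        # added contrast from the test object
signal = np.zeros(img.shape, dtype=bool)
signal[20:40, 20:40] = True
print(f"CNR = {cnr(img, signal, ~signal):.1f}")  # ~2 for this contrast/noise level
```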

  7. Democratic population decisions result in robust policy-gradient learning: a parametric study with GPU simulations.

    PubMed

    Richmond, Paul; Buesing, Lars; Giugliano, Michele; Vasilaki, Eleni

    2011-01-01

    High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task and moreover architectural limitations raise the question of whether investing effort in this direction may be worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which the said architecture and learning rule demonstrate best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a "non-democratic" mechanism), achieve mediocre learning results at best. In absence of recurrent connections, where all neurons "vote" independently ("democratic") for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that a speed improvement of 5x up to 42x is provided versus optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search of learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated. PMID:21572529
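The two-layer network above is built from integrate-and-fire units. As a minimal, GPU-free sketch of the neuron model (parameters are generic textbook values, assumed rather than taken from the paper):

```python
import numpy as np

def simulate_lif(i_in, dt=1e-4, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.070, r=1e7):
    """Forward-Euler leaky integrate-and-fire neuron:
    tau * dV/dt = -(V - v_rest) + R * I; spike and reset at threshold."""
    v = v_rest
    spike_times = []
    for step, i in enumerate(i_in):
        v += dt * (-(v - v_rest) + r * i) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant 2.5 nA input drives regular spiking (R*I = 25 mV above rest,
# threshold 20 mV above rest -> roughly one spike every tau*ln(5) ~ 32 ms).
current = np.full(10000, 2.5e-9)   # 1 s of input at dt = 0.1 ms
print(f"spikes in 1 s: {len(simulate_lif(current))}")
```

The paper's networks add recurrent connectivity and a policy-gradient rule on top of many such units, which is where the GPU parallelism pays off.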

  8. SIMULATION RESULTS OF RUNNING THE AGS MMPS, BY STORING ENERGY IN CAPACITOR BANKS.

    SciTech Connect

    MARNERIS, I.

    2006-09-01

    The Brookhaven AGS is a strong focusing accelerator which is used to accelerate protons and various heavy ion species to an equivalent maximum proton energy of 29 GeV. The AGS Main Magnet Power Supply (MMPS) is a thyristor control supply rated at 5500 Amps, +/-9000 Volts. The peak magnet power is 49.5 MW. The power supply is fed from a motor/generator manufactured by Siemens. The motor is rated at 9 MW, input voltage 3-phase 13.8 kV, 60 Hz. The generator is rated at 50 MVA; its output voltage is 3-phase 7500 Volts. Thus the peak power requirements come from the stored energy in the rotor of the motor/generator. The rotor changes speed by about +/-2.5% of its nominal speed of 1200 revolutions per minute. The reason the power supply is powered by the generator is that the local power company (LIPA) cannot sustain power swings of +/-50 MW in 0.5 sec if the power supply were interfaced directly with the AC lines. The motor/generator is about 45 years old and Siemens will not manufacture similar machines in the future. As a result, we are looking at different ways of storing energy and being able to utilize it for our application. This paper will present simulations of a power supply where energy is stored in capacitor banks. The simulation program used is called PSIM Version 6.1. The control system of the power supply will also be presented. The average power drawn from LIPA will be kept constant while the magnets pulse at +/-50 MW. The reactive power will also be kept constant, below 1.5 MVAR. Waveforms will be presented.
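The quoted ratings can be cross-checked with a little arithmetic, and the scale of a replacement capacitor bank estimated. The bank voltage swing below is a hypothetical assumption for illustration, not a figure from the paper:

```python
# Back-of-envelope check of the MMPS ratings and an illustrative bank sizing.
i_peak = 5500.0                      # A (rated current)
v_peak = 49.5e6 / i_peak             # V implied by the 49.5 MW peak power
print(f"implied peak voltage: +/-{v_peak:.0f} V")

pulse_energy = 50e6 * 0.5            # ~25 MJ for a 50 MW swing lasting 0.5 s
v_full, v_depleted = 9000.0, 6000.0  # assumed bank voltage swing (illustrative)
c_required = 2 * pulse_energy / (v_full**2 - v_depleted**2)
print(f"capacitance for that swing: {c_required:.2f} F")
```

The farad-scale result shows why energy storage for this application is a substantial engineering problem rather than an off-the-shelf purchase.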

  9. From Simulation to Real Robots with Predictable Results: Methods and Examples

    NASA Astrophysics Data System (ADS)

    Balakirsky, S.; Carpin, S.; Dimitoglou, G.; Balaguer, B.

    From a theoretical perspective, one may easily argue (as we will in this chapter) that simulation accelerates the algorithm development cycle. However, in practice many in the robotics development community share the sentiment that “Simulation is doomed to succeed” (Brooks, R., Matarić, M., Robot Learning, Kluwer Academic Press, Hingham, MA, 1993, p. 209). This comes in large part from the fact that many simulation systems are brittle; they do a fair-to-good job of simulating the expected, and fail to simulate the unexpected. It is the authors' belief that a simulation system is only as good as its models, and that deficiencies in these models lead to the majority of these failures. This chapter will attempt to address these deficiencies by presenting a systematic methodology with examples for the development of both simulated mobility models and sensor models for use with one of today's leading simulation engines. Techniques for using simulation for algorithm development leading to real-robot implementation will be presented, as well as opportunities for involvement in international robotics competitions based on these techniques.

  10. Urban Surface Network In Marseille: Network Optimization Using Numerical Simulations and Results

    NASA Astrophysics Data System (ADS)

    Pigeon, G.; Lemonsu, A.; Durand, P.; Masson, V.

    During the ESCOMPTE program (field experiment to constrain models of atmospheric pollution and emissions transport) in Marseille between June and July 2001, an extensive set of instruments was deployed to describe the urban boundary layer over the built-up area of Marseille. It notably included a network of 20 temperature and humidity sensors that measured the spatial and temporal variability of these parameters. Before the experiment, the arrangement of the network was optimized to extract the maximum information about both kinds of variability. We worked from the results of high-resolution simulations incorporating the TEB scheme, which represents the energy budgets associated with the overall street geometry of each mesh cell. First, a qualitative analysis identified the characteristic phenomena over the city of Marseille: urban effects are closely linked with local effects, namely marine advection and orography. Then, a quantitative analysis of the field was developed, using empirical orthogonal functions (EOFs) to characterize the spatial and temporal structures of the field's evolution. Instrumented axes were determined from these results. Finally, the locations of the instruments were chosen very carefully at the street scale so that micro-climatic effects would not interfere with the meso-scale effect of the city. Recording of the measurements, every 10 minutes, began on 12 June and finished on 16 July. No instrument problems occurred, so the whole period was recorded at the 10-minute interval. The data will be analyzed in several ways. First, a temporal study will determine whether the times at which phenomena occur are linked to location within the city, with particular attention to the morning warming and the evening cooling. Then, we will look for correlations of temperature and mixing ratio with the wind.
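The EOF analysis mentioned above is equivalent to a principal-component decomposition of the station-time anomaly matrix, conveniently computed via SVD. A sketch on synthetic sensor data (the 20-station layout and single dominant mode are illustrative assumptions):

```python
import numpy as np

def eof_analysis(field):
    """EOF decomposition of a (time, station) anomaly field via SVD.
    Returns spatial patterns, temporal coefficients, and explained variance."""
    anomalies = field - field.mean(axis=0)
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    return vt, u * s, s**2 / np.sum(s**2)

# Synthetic stand-in for the 20-sensor network: one coherent mode plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 576)                  # 4 days at 10-min sampling
pattern = rng.normal(size=20)                   # arbitrary spatial signature
field = np.outer(np.sin(2 * np.pi * t), pattern) + 0.1 * rng.normal(size=(576, 20))
patterns, pcs, explained = eof_analysis(field)
print(f"variance explained by leading EOF: {explained[0]:.0%}")
```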

  11. Results.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)

  12. Simulated changes in potentiometric levels resulting from groundwater development for phosphate mines, west-central Florida

    USGS Publications Warehouse

    Wilson, W.E.; Gerhart, J.M.

    1979-01-01

    A digital model of two-dimensional groundwater flow was used to predict changes in the potentiometric surface of the Floridan aquifer resulting from groundwater development for proposed and existing phosphate mines during 1976-2000. The modeled area covers 15,379 km2 in west-central Florida. In 1975, groundwater withdrawn from the Floridan aquifer for irrigation, phosphate mines, other industries and municipal supplies averaged about 28,500 l/s. Withdrawals for phosphate mines are expected to shift from Polk County to adjacent counties to the south and west, and to decline from about 7,620 l/s in 1975 to about 7,060 l/s in 2000. The model was calibrated under steady-state and transient conditions. Input parameters included aquifer transmissivity and storage coefficient; thickness, vertical hydraulic conductivity, and storage coefficient of the upper confining bed; altitudes of the water table and potentiometric surface; and groundwater withdrawals. Simulation of November 1976 to October 2000, using projected combined pumping rates for existing and proposed phosphate mines, resulted in a rise in the potentiometric surface of about 6 m in Polk County, and a decline of about 4 m in parts of Manatee and Hardee counties. © 1979.
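Two-dimensional groundwater-flow models of this kind solve a diffusion-type equation for hydraulic head. A heavily simplified steady-state sketch (uniform transmissivity, a single well, fixed-head boundaries; all numbers illustrative, not the calibrated USGS model described above):

```python
import numpy as np

# Governing equation: T * laplacian(h) = -W, with W < 0 for withdrawal,
# solved by Jacobi iteration on a square grid with fixed-head (h = 0) edges.
n, dx = 51, 1000.0                 # 51 x 51 nodes at 1 km spacing
T = 0.01                           # transmissivity, m^2/s (assumed)
Q = 0.5                            # well withdrawal, m^3/s (assumed)
rhs = np.zeros((n, n))
rhs[n // 2, n // 2] = -Q / dx**2   # sink spread over one cell, m/s

h = np.zeros((n, n))               # head change relative to the boundaries, m
for _ in range(5000):              # iterate toward steady state
    h[1:-1, 1:-1] = 0.25 * (h[2:, 1:-1] + h[:-2, 1:-1] + h[1:-1, 2:]
                            + h[1:-1, :-2] + dx**2 * rhs[1:-1, 1:-1] / T)
print(f"drawdown at the well: {-h[n // 2, n // 2]:.1f} m")
```

The real model adds transient storage terms, leakage through the confining bed, and spatially varying properties, which is what the calibration step constrains.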

  13. Simulation results of Pulse Shape Discrimination (PSD) for background reduction in INTEGRAL Spectrometer (SPI) germanium detectors

    NASA Technical Reports Server (NTRS)

    Slassi-Sennou, S. A.; Boggs, S. E.; Feffer, P. T.; Lin, R. P.

    1997-01-01

    Pulse Shape Discrimination (PSD) for background reduction will be used in the INTErnational Gamma Ray Astrophysics Laboratory (INTEGRAL) imaging spectrometer (SPI) to improve the sensitivity from 200 keV to 2 MeV. The observation of significant astrophysical gamma ray lines is expected in this energy range, where the dominant component of the background is β⁻ decay in the Ge detectors due to the activation of Ge nuclei by cosmic rays. The sensitivity of the SPI will be improved by rejecting β⁻ decay events while retaining photon events. The PSD technique will distinguish between single and multiple site events. Simulation results of PSD for INTEGRAL-type Ge detectors using a numerical model for pulse shape generation are presented. The model was shown to agree with the experimental results for a narrow inner bore closed-end cylindrical detector. Using PSD, a sensitivity improvement factor of the order of 2.4 at 0.8 MeV is expected.
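As a toy illustration of the single-site versus multiple-site distinction (not the numerical charge-collection model used in the paper), events can be separated by counting well-separated lobes in the detector current pulse:

```python
import numpy as np

def count_charge_lobes(current, threshold=0.3):
    """Toy pulse-shape classifier: count rising edges above a threshold in a
    normalized current pulse. Single-site events -> 1 lobe; multi-site -> more."""
    c = np.asarray(current) / np.max(current)
    above = c > threshold
    return int(np.sum(above[1:] & ~above[:-1]))

# Synthetic pulses (illustrative shapes only).
t = np.linspace(0.0, 1.0, 500)
lobe = lambda t0: np.exp(-((t - t0) / 0.05) ** 2)
single_site = lobe(0.5)                   # localized deposit, e.g. a beta decay
multi_site = lobe(0.3) + 0.8 * lobe(0.7)  # two interaction sites, e.g. Compton
print(count_charge_lobes(single_site), count_charge_lobes(multi_site))  # → 1 2
```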

  14. Simulation Results of the Huygens Probe Entry and Descent Trajectory Reconstruction Algorithm

    NASA Technical Reports Server (NTRS)

    Kazeminejad, B.; Atkinson, D. H.; Perez-Ayucar, M.

    2005-01-01

    Cassini/Huygens is a joint NASA/ESA mission to explore the Saturnian system. The ESA Huygens probe is scheduled to be released from the Cassini spacecraft on December 25, 2004, enter the atmosphere of Titan in January, 2005, and descend to Titan s surface using a sequence of different parachutes. To correctly interpret and correlate results from the probe science experiments and to provide a reference set of data for "ground-truthing" Orbiter remote sensing measurements, it is essential that the probe entry and descent trajectory reconstruction be performed as early as possible in the postflight data analysis phase. The Huygens Descent Trajectory Working Group (DTWG), a subgroup of the Huygens Science Working Team (HSWT), is responsible for developing a methodology and performing the entry and descent trajectory reconstruction. This paper provides an outline of the trajectory reconstruction methodology, preliminary probe trajectory retrieval test results using a simulated synthetic Huygens dataset developed by the Huygens Project Scientist Team at ESA/ESTEC, and a discussion of strategies for recovery from possible instrument failure.

  15. Results and Lessons Learned from Performance Testing of Humans in Spacesuits in Simulated Reduced Gravity

    NASA Technical Reports Server (NTRS)

    Chappell, Steven P.; Norcross, Jason R.; Gernhardt, Michael L.

    2009-01-01

    NASA's Constellation Program has plans to return to the Moon within the next 10 years. Although reaching the Moon during the Apollo Program was a remarkable human engineering achievement, fewer than 20 extravehicular activities (EVAs) were performed. Current projections indicate that the next lunar exploration program will require thousands of EVAs, which will require spacesuits that are better optimized for human performance. Limited mobility and dexterity, and the position of the center of gravity (CG) are a few of many features of the Apollo suit that required significant crew compensation to accomplish the objectives. Development of a new EVA suit system will ideally result in performance close to or better than that in shirtsleeves at 1 G, i.e., in "a suit that is a pleasure to work in, one that you would want to go out and explore in on your day off." Unlike the Shuttle program, in which only a fraction of the crew perform EVA, the Constellation program will require that all crewmembers be able to perform EVA. As a result, suits must be built to accommodate and optimize performance for a larger range of crew anthropometry, strength, and endurance. To address these concerns, NASA has begun a series of tests to better understand the factors affecting human performance and how to utilize various lunar gravity simulation environments available for testing.

  16. LSP Simulation and Analytical Results on Electromagnetic Wave Scattering on Coherent Density Structures

    NASA Astrophysics Data System (ADS)

    Sotnikov, V.; Kim, T.; Lundberg, J.; Paraschiv, I.; Mehlhorn, T.

    2014-09-01

    The presence of plasma turbulence can strongly influence propagation properties of electromagnetic signals used for surveillance and communication. In particular, we are interested in the generation of low frequency plasma density irregularities in the form of coherent vortex structures. Interchange or flute-type density irregularities in magnetized plasma are associated with Rayleigh-Taylor type instability. These types of density irregularities play an important role in the refraction and scattering of high frequency electromagnetic signals propagating in the Earth's ionosphere, in high energy density physics (HEDP), and in many other applications. We will discuss scattering of high frequency electromagnetic waves on low frequency density irregularities due to the presence of vortex density structures associated with interchange instability. We will also present PIC simulation results on EM scattering on vortex-type density structures using the LSP code and compare them with analytical results. Acknowledgement: This work was supported by the Air Force Research Laboratory, the Air Force Office of Scientific Research, the Naval Research Laboratory and NNSA/DOE grant no. DE-FC52-06NA27616 at the University of Nevada at Reno.

  17. A mathematical model and simulation results of plasma enhanced chemical vapor deposition of silicon nitride films

    NASA Astrophysics Data System (ADS)

    Konakov, S. A.; Krzhizhanovskaya, V. V.

    2015-01-01

    We developed a mathematical model of Plasma Enhanced Chemical Vapor Deposition (PECVD) of silicon nitride thin films from SiH4-NH3-N2-Ar mixture, an important application in modern materials science. Our multiphysics model describes gas dynamics, chemical physics, plasma physics and electrodynamics. The PECVD technology is inherently multiscale, from macroscale processes in the chemical reactor to atomic-scale surface chemistry. Our macroscale model is based on Navier-Stokes equations for a transient laminar flow of a compressible chemically reacting gas mixture, together with the mass transfer and energy balance equations, Poisson equation for electric potential, electrons and ions balance equations. The chemical kinetics model includes 24 species and 58 reactions: 37 in the gas phase and 21 on the surface. A deposition model consists of three stages: adsorption to the surface, diffusion along the surface and embedding of products into the substrate. A new model has been validated on experimental results obtained with the "Plasmalab System 100" reactor. We present the mathematical model and simulation results investigating the influence of flow rate and source gas proportion on silicon nitride film growth rate and chemical composition.

  18. Instability of surface lenticular vortices: results from laboratory experiments and numerical simulations

    NASA Astrophysics Data System (ADS)

    Lahaye, Noé; Paci, Alexandre; Smith, Stefan Llewellyn

    2016-04-01

    We examine the instability of lenticular vortices -- or lenses -- in a stratified rotating fluid. The simplest configuration is one in which the lenses overlay a deep layer and have a free surface, and this can be studied using a two-layer rotating shallow water model. We report results from laboratory experiments and high-resolution direct numerical simulations of the destabilization of vortices with constant potential vorticity, and compare these to a linear stability analysis. The stability properties of the system are governed by two parameters: the typical upper-layer potential vorticity and the size (depth) of the vortex. Good agreement is found between analytical, numerical and experimental results for the growth rate and wavenumber of the instability. The nonlinear saturation of the instability is associated with conversion from potential to kinetic energy and weak emission of gravity waves, giving rise to the formation of coherent vortex multipoles with trapped waves. The impact of flow in the lower layer is examined. In particular, it is shown that the growth rate can be strongly affected and the instability can be suppressed for certain types of weak co-rotating flow.

  19. [Implementation results of emission standards of air pollutants for thermal power plants: a numerical simulation].

    PubMed

    Wang, Zhan-Shan; Pan, Li-Bo

    2014-03-01

    An emission inventory of air pollutants from thermal power plants for the year 2010 was set up. Based on the inventory, air quality under scenarios implementing the 2003-version emission standard and the new emission standard was simulated using Models-3/CMAQ. The concentrations of NO2, SO2, and PM2.5, and the deposition of nitrogen and sulfur in 2015 and 2020 were predicted to investigate the regional air quality improvement from the new emission standard. The results showed that the new emission standard could effectively improve the air quality in China. Compared with the implementation results of the 2003-version emission standard, by 2015 and 2020, the area with NO2 concentration higher than the emission standard would be reduced by 53.9% and 55.2%, the area with SO2 concentration higher than the emission standard would be reduced by 40.0%, the area with nitrogen deposition higher than 1.0 t x km(-2) would be reduced by 75.4% and 77.9%, and the area with sulfur deposition higher than 1.6 t x km(-2) would be reduced by 37.1% and 34.3%, respectively.

  20. Results from simulated remote-handled transuranic waste experiments at the Waste Isolation Pilot Plant (WIPP)

    SciTech Connect

    Molecke, M A

    1992-01-01

    Multi-year, simulated remote-handled transuranic waste (RH TRU, nonradioactive) experiments are being conducted underground in the Waste Isolation Pilot Plant (WIPP) facility. These experiments involve the near-reference (thermal and geometrical) testing of eight full-size RH TRU test containers emplaced into horizontal, unlined rock salt boreholes. Half of the test emplacements are partially filled with bentonite/silica-sand backfill material. All test containers were electrically heated at about 115 W each for three years, then raised to about 300 W each for the remaining time. Each test borehole was instrumented with a selection of remote-reading thermocouples, pressure gages, borehole vertical-closure gages, and vertical and horizontal borehole-diameter closure gages. Each test emplacement was also periodically opened for visual inspections of brine intrusions and any interactions with waste package materials, materials sampling, manual closure measurements, and observations of borehole changes. Effects of heat on borehole closure rates and near-field materials (metals, backfill, rock salt, and intruding brine) interactions were closely monitored as a function of time. This paper summarizes results for the first five years of in situ test operation with supporting instrumentation and laboratory data and interpretations. Some details of RH TRU waste package materials, designs, and assorted underground test observations are also discussed. Based on the results, the tested RH TRU waste packages, materials, and emplacement geometry in unlined salt boreholes appear to be quite adequate for initial WIPP repository-phase operations.

  1. Ground Truth Accuracy Tests of GPS Seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Oberlander, D. J.; Davis, J. L.; Baena, R.; Ekstrom, G.

    2005-12-01

    As the precision of GPS determinations of site position continues to improve, the detection of smaller and faster geophysical signals becomes possible. However, lack of independent measurements of these signals often precludes an assessment of the accuracy of such GPS position determinations. This may be particularly true for high-rate GPS applications. We have built an apparatus to assess the accuracy of GPS position determinations for high-rate applications, in particular the application known as "GPS seismology." The apparatus consists of a bidirectional, single-axis positioning table coupled to a digitally controlled stepping motor. The motor, in turn, is connected to a Field Programmable Gate Array (FPGA) chip that synchronously sequences through real historical earthquake profiles stored in Erasable Programmable Read-Only Memories (EPROMs). A GPS antenna attached to this positioning table undergoes the simulated seismic motions of the Earth's surface while collecting high-rate GPS data. The time-dependent position estimates can then be compared to the "ground truth," and the resultant GPS error spectrum can be measured. We have made extensive measurements with this system while inducing simulated seismic motions either in the horizontal plane or along the vertical axis. A second, stationary GPS antenna at a distance of several meters simultaneously collected high-rate (5 Hz) GPS data. We will present the calibration of this system, describe the GPS observations and data analysis, and assess the accuracy of GPS for high-rate geophysical applications and natural hazards mitigation.
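The error spectrum described above comes from differencing the GPS position estimates against the commanded table motion and taking a periodogram. A sketch with synthetic 5 Hz data (the white-noise error model and amplitudes are illustrative assumptions; real GPS errors are coloured):

```python
import numpy as np

def error_spectrum(estimated, truth, fs=5.0):
    """Periodogram of the position error (estimate minus ground truth)."""
    err = np.asarray(estimated) - np.asarray(truth)
    err = err - err.mean()
    freqs = np.fft.rfftfreq(err.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(err)) ** 2 / (fs * err.size)
    return freqs, psd

# Synthetic test: commanded table motion plus noise standing in for GPS error.
rng = np.random.default_rng(2)
t = np.arange(0.0, 600.0, 0.2)                 # 10 minutes of 5 Hz epochs
truth = 0.01 * np.sin(2 * np.pi * 0.5 * t)     # 1 cm simulated shaking at 0.5 Hz
gps = truth + 0.005 * rng.normal(size=t.size)  # 5 mm error level (assumed)
freqs, psd = error_spectrum(gps, truth)
rms_mm = np.sqrt(np.mean((gps - truth) ** 2)) * 1e3
print(f"RMS position error: {rms_mm:.1f} mm")
```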

  2. Comparisons of EOS MLS cloud ice measurements with ECMWF analyses and GCM simulations : initial results

    NASA Technical Reports Server (NTRS)

    Li, J. - L.; Waliser, D. E.; Jiang, J. H.; Wu, D. L.; Read, W.; Waters, J. W.

    2005-01-01

    To assess the status of global climate models (GCMs) in simulating upper-tropospheric ice water content (IWC), a new set of IWC measurements from the Earth Observing System's Microwave Limb Sounder (MLS) are used. Comparisons are made with ECMWF analyses and simulations from several GCMs, including two with multi-scale-modeling framework.

  3. Simulation Framework for Rapid Entry, Descent, and Landing (EDL) Analysis, Phase 2 Results

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.

    2011-01-01

    The NASA Engineering and Safety Center (NESC) was requested to establish the Simulation Framework for Rapid Entry, Descent, and Landing (EDL) Analysis assessment, which involved development of an enhanced simulation architecture using the Program to Optimize Simulated Trajectories II simulation tool. The assessment was requested to enhance the capability of the Agency to provide rapid evaluation of EDL characteristics in systems analysis studies, preliminary design, mission development and execution, and time-critical assessments. Many of the new simulation framework capabilities were developed to support the Agency EDL-Systems Analysis (SA) team that is conducting studies of the technologies and architectures that are required to enable human and higher mass robotic missions to Mars. The findings, observations, and recommendations from the NESC are provided in this report.

  4. WE-D-17A-03: Improvement of Accuracy of Spot-Scanning Proton Beam Delivery for Liver Tumor by Real-Time Tumor-Monitoring and Gating System: A Simulation Study

    SciTech Connect

    Matsuura, T; Shimizu, S; Miyamoto, N; Takao, S; Toramatsu, C; Nihongi, H; Yamada, T; Shirato, H; Fujii, Y; Umezawa, M; Umegaki, K

    2014-06-15

    Purpose: To improve the accuracy of spot-scanning proton beam delivery for a target in motion, a real-time tumor-monitoring and gating system using fluoroscopy images was developed. This study investigates the efficacy of this method for treatment of liver tumors using simulation. Methods: The three-dimensional position of a fiducial marker inserted close to the tumor is calculated in real time, and the proton beam is gated according to the marker's distance from the planned position (Shirato, 2012). Efficient beam delivery is realized even for irregular and sporadic motion signals by employing multiple gated irradiations per operation cycle (Umezawa, 2012). For each of two breath-hold CTs (CTV=14.6cc, 63.1cc), dose distributions were calculated with internal margins corresponding to free-breathing (FB) and real-time gating (RG) with a 2-mm gating window. We applied 8 liver-tumor trajectories recorded during real-time tumor-tracking (RTRT) X-ray therapy and 6 initial timings. Dmax/Dmin in the CTV, mean liver dose (MLD), and irradiation time to administer a 3 Gy (RBE) dose were estimated assuming rigid motion of targets, using in-house simulation tools and the VQA treatment planning system (Hitachi, Ltd., Tokyo). Results: Dmax/Dmin was degraded by less than 5% compared to the prescribed dose with all motion parameters for the smaller CTV, and by less than 7% for the larger CTV with one exception. Irradiation time showed only a modest increase if RG was used instead of FB; the average value over motion parameters was 113 (FB) and 138 s (RG) for the smaller CTV and 120 (FB) and 207 s (RG) for the larger CTV. In RG, it was within 5 min for all but one trajectory. MLD was markedly decreased, by 14% and 5-6% for the smaller and larger CTVs respectively, if RG was applied. Conclusions: The spot-scanning proton beam was shown to be delivered successfully to liver tumors without much lengthening of treatment time. This research was supported by the Cabinet Office, Government of Japan and the Japan Society for
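A single-window toy estimate shows why gating alone would lengthen treatment, and why the multiple-gated-irradiations scheme matters. The sinusoidal trace and amplitude below are idealizations, not the recorded liver trajectories:

```python
import numpy as np

# Fraction of a breathing cycle during which a marker stays within the gating
# window around the planned (exhale) position, for idealized sinusoidal motion.
amplitude_mm = 8.0    # half of peak-to-peak motion, illustrative
window_mm = 2.0       # gating window from the abstract
t = np.linspace(0.0, 1.0, 100001)  # one breathing cycle
displacement = amplitude_mm * 0.5 * (1.0 - np.cos(2.0 * np.pi * t))  # 0 at exhale
duty_cycle = float(np.mean(displacement <= window_mm))
print(f"beam-on fraction: {duty_cycle:.0%}, naive time stretch: x{1/duty_cycle:.1f}")
```

The naive x3 stretch contrasts with the modest 113 to 138 s increase reported above, which the per-cycle multiple-irradiation scheme helps achieve.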

  5. Feature Extraction from Simulations and Experiments: Preliminary Results Using a Fluid Mix Problem

    SciTech Connect

    Kamath, C; Nguyen, T

    2005-01-04

    Code validation, or comparing the output of computer simulations to experiments, is necessary to determine which simulation is a better approximation to an experiment. It can also be used to determine how the input parameters in a simulation can be modified to yield output that is closer to the experiment. In this report, we discuss our experiences in the use of image processing techniques for extracting features from 2-D simulations and experiments. These features can be used in comparing the output of simulations to experiments, or to other simulations. We first describe the problem domain and the data. We next explain the need for cleaning or denoising the experimental data and discuss the performance of different techniques. Finally, we discuss the features of interest and describe how they can be extracted from the data. The focus in this report is on extracting features from experimental and simulation data for the purpose of code validation; the actual interpretation of these features and their use in code validation is left to the domain experts.
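A median filter is one common choice for denoising experimental images before feature extraction; the report's actual choice of techniques may differ. A self-contained sketch on a synthetic frame:

```python
import numpy as np

def median_filter(image, size=3):
    """Median filter: edge-preserving, effective against isolated noisy pixels."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    return np.median(windows, axis=(-2, -1))

# Synthetic frame: a bright feature plus 5% salt-and-pepper noise.
rng = np.random.default_rng(3)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
noisy = clean.copy()
spots = rng.random(clean.shape) < 0.05
noisy[spots] = 1.0 - noisy[spots]          # flip 5% of pixels
denoised = median_filter(noisy)
print(f"mean abs error, noisy: {np.abs(noisy - clean).mean():.3f}, "
      f"denoised: {np.abs(denoised - clean).mean():.3f}")
```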

  6. Chemical compatibility screening results of plastic packaging to mixed waste simulants

    SciTech Connect

    Nigrey, P.J.; Dickens, T.G.

    1995-12-01

    We have developed a chemical compatibility program for evaluating transportation packaging components for transporting mixed waste forms. We have performed the first phase of this experimental program to determine the effects of simulant mixed wastes on packaging materials. This effort involved the screening of 10 plastic materials in four liquid mixed waste simulants. The testing protocol involved exposing the respective materials to approximately 3 kGy of gamma radiation followed by 14-day exposures to the waste simulants at 60 C. The seal materials, or rubbers, were tested using VTR (vapor transport rate) measurements, while the liner materials were tested using specific gravity as a metric. For these tests, screening criteria of approximately 1 g/m²/hr for VTR and a specific gravity change of 10% were used. It was concluded that while all seal materials passed exposure to the aqueous simulant mixed waste, EPDM and SBR had the lowest VTRs. In the chlorinated hydrocarbon simulant mixed waste, only VITON passed the screening tests. In both the simulant scintillation fluid mixed waste and the ketone mixture simulant mixed waste, none of the seal materials met the screening criteria. It is anticipated that the materials with the lowest VTRs will be evaluated in the comprehensive phase of the program. For specific gravity testing of liner materials, the data showed that while all materials with the exception of polypropylene passed the screening criteria, Kel-F, HDPE, and XLPE were found to offer the greatest resistance to the combination of radiation and chemicals.
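The screening logic above reduces to two numeric criteria. A sketch in which the thresholds follow the abstract while the sample measurement values are hypothetical:

```python
def seal_passes(vtr_g_per_m2_hr, limit=1.0):
    """Seal (rubber) passes if the vapor transport rate stays below ~1 g/m^2/hr."""
    return vtr_g_per_m2_hr < limit

def liner_passes(sg_before, sg_after, limit_pct=10.0):
    """Liner passes if specific gravity changed by less than 10%."""
    return abs(sg_after - sg_before) / sg_before * 100.0 < limit_pct

# Hypothetical post-exposure measurements (the report tabulates the real ones).
print(seal_passes(0.4), liner_passes(0.95, 1.08))  # → True False
```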

  7. Shock timing experiments on the National Ignition Facility: Initial results and comparison with simulation

    NASA Astrophysics Data System (ADS)

    Robey, H. F.; Boehly, T. R.; Celliers, P. M.; Eggert, J. H.; Hicks, D.; Smith, R. F.; Collins, R.; Bowers, M. W.; Krauter, K. G.; Datte, P. S.; Munro, D. H.; Milovich, J. L.; Jones, O. S.; Michel, P. A.; Thomas, C. A.; Olson, R. E.; Pollaine, S.; Town, R. P. J.; Haan, S.; Callahan, D.; Clark, D.; Edwards, J.; Kline, J. L.; Dixit, S.; Schneider, M. B.; Dewald, E. L.; Widmann, K.; Moody, J. D.; Döppner, T.; Radousky, H. B.; Throop, A.; Kalantar, D.; DiNicola, P.; Nikroo, A.; Kroll, J. J.; Hamza, A. V.; Horner, J. B.; Bhandarkar, S. D.; Dzenitis, E.; Alger, E.; Giraldez, E.; Castro, C.; Moreno, K.; Haynam, C.; LaFortune, K. N.; Widmayer, C.; Shaw, M.; Jancaitis, K.; Parham, T.; Holunga, D. M.; Walters, C. F.; Haid, B.; Mapoles, E. R.; Sater, J.; Gibson, C. R.; Malsbury, T.; Fair, J.; Trummer, D.; Coffee, K. R.; Burr, B.; Berzins, L. V.; Choate, C.; Brereton, S. J.; Azevedo, S.; Chandrasekaran, H.; Eder, D. C.; Masters, N. D.; Fisher, A. C.; Sterne, P. A.; Young, B. K.; Landen, O. L.; Van Wonterghem, B. M.; MacGowan, B. J.; Atherton, J.; Lindl, J. D.; Meyerhofer, D. D.; Moses, E.

    2012-04-01

    Capsule implosions on the National Ignition Facility (NIF) [Lindl et al., Phys. Plasmas 11, 339 (2004)] are underway with the goal of compressing deuterium-tritium (DT) fuel to a sufficiently high areal density (ρR) to sustain a self-propagating burn wave required for fusion power gain greater than unity. These implosions are driven with a carefully tailored sequence of four shock waves that must be timed to very high precision in order to keep the DT fuel on a low adiabat. Initial experiments to measure the strength and relative timing of these shocks have been conducted on NIF in a specially designed surrogate target platform known as the keyhole target. This target geometry and the associated diagnostics are described in detail. The initial data are presented and compared with numerical simulations. As the primary goal of these experiments is to assess and minimize the adiabat in related DT implosions, a methodology is described for quantifying the adiabat from the shock velocity measurements. Results are contrasted between early experiments that exhibited very poor shock timing and subsequent experiments where a modified target geometry demonstrated significant improvement.

  8. Preliminary Experimental Results of Integrated Gasification Fuel Cell Operation Using Hardware Simulation

    SciTech Connect

    Traverso, Alberto; Tucker, David; Haynes, Comas L.

    2012-07-01

    A newly developed integrated gasification fuel cell (IGFC) hybrid system concept has been tested using the Hybrid Performance (Hyper) project hardware-based simulation facility at the U.S. Department of Energy's National Energy Technology Laboratory. The cathode-loop hardware facility, previously connected to the real-time fuel cell model, was integrated with a real-time model of a gasifier of solid (biomass and fossil) fuel. The fuel cells are operated at the compressor delivery pressure, and they are fueled by an updraft atmospheric gasifier through a syngas conditioning train for tar removal and syngas compression. The system was brought to steady state; then several perturbations in open loop (variable speed) and closed loop (constant speed) were performed in order to characterize the IGFC behavior. Coupled experiments and computations have shown the feasibility of relatively fast control of the plant as well as a possible mitigation strategy to reduce the thermal stress on the fuel cells as a consequence of load variation and change in gasifier operating conditions. Results also provided insight into the different features of variable versus constant speed operation of the gas turbine section.

  9. Do tanning salons adhere to new legal regulations? Results of a simulated client trial in Germany.

    PubMed

    Möllers, Tobias; Pischke, Claudia R; Zeeb, Hajo

    2016-03-01

    In August 2009 and January 2012, two regulations were passed in Germany to limit UV exposure in the general population. These regulations state that no minors are allowed to use tanning devices. Personnel of tanning salons are mandated to offer counseling regarding individual skin type, to create a dosage plan with the customer, and to provide a list describing harmful effects of UV radiation. Furthermore, a poster of warning criteria has to be visible and readable at all times inside the tanning salon. It is unclear whether these regulations are followed by employees of tanning salons in Germany, and we are not aware of any studies examining the implementation of the regulations at individual salons. We performed a simulated client study visiting 20 tanning salons in the city-state of Bremen in 2014, using a short checklist of criteria derived from the legal requirements, to evaluate whether those requirements were followed. We found that only 20% of the tanning salons communicated adverse health effects of UV radiation in visible posters and other materials, and that only 60% of the salons offered the required determination of the skin type to customers. In addition, only 60% of the salons offered to complete the required dosage plan with their customers. To conclude, our results suggest that the new regulations are insufficiently implemented in Bremen. Additional control mechanisms appear necessary to ensure that consumers are protected from the possible carcinogenic effects of excessive UV radiation.

  10. Wide Bandpass and Narrow Bandstop Microstrip Filters Based on Hilbert Fractal Geometry: Design and Simulation Results

    PubMed Central

    Mezaal, Yaqeen S.; Eyyuboglu, Halil T.; Ali, Jawad K.

    2014-01-01

    This paper presents a new Wide Bandpass Filter (WBPF) and Narrow Bandstop Filter (NBSF) incorporating two microstrip resonators, each based on the 2nd iteration of Hilbert fractal geometry. The filter type (passband or stopband) is adjusted via the coupling gap parameter (d) between the Hilbert resonators, using a substrate with a dielectric constant of 10.8 and a thickness of 1.27 mm. Numerical simulation results, as well as a parametric study of the effect of d on filter type and frequency response, are presented. The WBPF was designed at resonant frequencies of 2 and 2.2 GHz with a bandwidth of 0.52 GHz, −28 dB return loss, and −0.125 dB insertion loss, while the NBSF was designed for a 2.37 GHz center frequency, 20 MHz rejection bandwidth, −0.1873 dB return loss, and 13.746 dB insertion loss. The proposed technique offers a new alternative for constructing low-cost, high-performance filter devices suitable for a wide range of wireless communication systems. PMID:25536436

  11. Preliminary results for a two-dimensional simulation of the working process of a Stirling engine

    SciTech Connect

    Makhkamov, K.K.; Ingham, D.B.

    1998-07-01

    Stirling engines have several potential advantages over existing engine types; in particular, they can use renewable energy sources for power production, and their performance meets environmental security demands. In order to design Stirling engines properly, and to realize their potential performance, it is important to simulate their working process more accurately. At present, a series of important mathematical models is used to describe the working process of Stirling engines; these are generally classified into three levels. All such models consider one-dimensional schemes for the engine and assume uniform fluid velocity, temperature, and pressure profiles at each plane of the engine's internal gas circuit. The use of two-dimensional CFD models can significantly extend the capabilities for detailed analysis of the complex heat transfer and gas dynamic processes which occur in the internal gas circuit, as well as in the external circuit of the engine. In this paper, a two-dimensional simplified frame (no construction walls) calculation scheme for the Stirling engine has been assumed, and the standard k-ε turbulence model has been used for the analysis of the engine working process. The results obtained show that the use of two-dimensional CFD models gives much greater insight into the fluid flow and heat transfer processes which occur in Stirling engines.

  12. The Plasma Wake Downstream of Lunar Topographic Obstacles: Preliminary Results from 2D Particle Simulations

    NASA Technical Reports Server (NTRS)

    Zimmerman, Michael I.; Farrell, W. M.; Stubbs, T. J.; Halekas, J. S.

    2011-01-01

    Anticipating the plasma and electrical environments in permanently shadowed regions (PSRs) of the Moon is critical to understanding local processes of space weathering, surface charging, surface chemistry, volatile production and trapping, exo-ion sputtering, and charged dust transport. In the present study, we have employed the open-source XOOPIC code [1] to investigate the effects of solar wind conditions and plasma-surface interactions on the electrical environment in PSRs through fully two-dimensional particle-in-cell simulations. By direct analogy with current understanding of the global lunar wake, deep, near-terminator, shadowed craters are expected to produce plasma "mini-wakes" just leeward of the crater wall. The present results (e.g., Figure 1) are in agreement with previous claims that hot electrons rush into the crater void ahead of the heavier ions, forming a negative cloud of charge. Charge separation along the initial plasma-vacuum interface gives rise to an ambipolar electric field that subsequently accelerates ions into the void. However, the situation is complicated by the presence of the dynamic lunar surface, which develops an electric potential in response to local plasma currents (e.g., Figure 1a). In some regimes, wake structure is clearly affected by the presence of the charged crater floor as it seeks to achieve current balance (i.e., zero net current to the surface).

  13. Optimal piezoelectric beam shape for single and broadband vibration energy harvesting: Modeling, simulation and experimental results

    NASA Astrophysics Data System (ADS)

    Muthalif, Asan G. A.; Nordin, N. H. Diyana

    2015-03-01

    Harvesting energy from the surroundings has become a new trend in saving our environment. Established approaches such as solar panels, wind turbines, and hydroelectric generators have successfully grown to help meet the world's energy demand. However, for low-powered electronic devices, especially those placed in remote areas, micro-scale energy harvesting is preferable. One popular method is vibration energy scavenging, which converts mechanical energy (from vibration) to electrical energy through the coupling between mechanical variables and electric or magnetic fields. As the generated voltage depends greatly on the geometry and size of the piezoelectric material, there is a need to define an optimum shape and configuration for the piezoelectric energy scavenger. In this research, mathematical derivations for a unimorph piezoelectric energy harvester are presented. Simulation is performed using MATLAB and COMSOL Multiphysics software to study the effect of varying the length and shape of the beam on the generated voltage. Experimental results comparing triangular and rectangular piezoelectric beams are also presented.
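    As a hedged aside, the geometry-dependent tuning described above can be illustrated with the standard Euler-Bernoulli result for the first natural frequency of a uniform rectangular cantilever. This is a textbook formula, not the paper's coupled electromechanical model, and the material properties and dimensions below are hypothetical.

```python
# Minimal sketch: first bending natural frequency of a rectangular cantilever,
# the frequency to which a harvester of this geometry would be tuned.
import math

def cantilever_f1(E, rho, L, b, h):
    """First bending natural frequency (Hz) of a uniform rectangular cantilever."""
    I = b * h**3 / 12.0  # second moment of area
    A = b * h            # cross-sectional area
    lam1 = 1.875104      # first eigenvalue of the clamped-free mode equation
    return (lam1**2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))

# Hypothetical steel-like substrate, 50 mm x 10 mm x 0.5 mm:
f1 = cantilever_f1(E=200e9, rho=7800.0, L=0.05, b=0.01, h=0.0005)
print(round(f1, 1))  # roughly 160 Hz for this geometry
```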

  14. Wide Bandpass and Narrow Bandstop Microstrip Filters based on Hilbert fractal geometry: design and simulation results.

    PubMed

    Mezaal, Yaqeen S; Eyyuboglu, Halil T; Ali, Jawad K

    2014-01-01

    This paper presents new Wide Bandpass Filter (WBPF) and Narrow Bandstop Filter (NBSF) incorporating two microstrip resonators, each resonator is based on 2nd iteration of Hilbert fractal geometry. The type of filter as pass or reject band has been adjusted by coupling gap parameter (d) between Hilbert resonators using a substrate with a dielectric constant of 10.8 and a thickness of 1.27 mm. Numerical simulation results as well as a parametric study of d parameter on filter type and frequency responses are presented and studied. WBPF has designed at resonant frequencies of 2 and 2.2 GHz with a bandwidth of 0.52 GHz, -28 dB return loss and -0.125 dB insertion loss while NBSF has designed for electrical specifications of 2.37 GHz center frequency, 20 MHz rejection bandwidth, -0.1873 dB return loss and 13.746 dB insertion loss. The proposed technique offers a new alternative to construct low-cost high-performance filter devices, suitable for a wide range of wireless communication systems. PMID:25536436

  15. Biofilm formation and control in a simulated spacecraft water system - Three year results

    NASA Technical Reports Server (NTRS)

    Schultz, John R.; Flanagan, David T.; Bruce, Rebekah J.; Mudgett, Paul D.; Carr, Sandra E.; Rutz, Jeffrey A.; Huls, M. H.; Sauer, Richard L.; Pierson, Duane L.

    1992-01-01

    Two simulated spacecraft water systems are being used to evaluate the effectiveness of iodine for controlling microbial contamination within such systems. An iodine concentration of about 2.0 mg/L is maintained in one system by passing ultrapure water through an iodinated ion exchange resin. Stainless steel coupons with electropolished and mechanically polished sides are being used to monitor biofilm formation. Results after three years of operation show a single episode of significant bacterial growth in the iodinated system, when the iodine level dropped to 1.9 mg/L. This growth was apparently controlled by replacing the iodinated ion exchange resin, thereby increasing the iodine level. The second batch of resin has remained effective in controlling microbial growth down to an iodine level of 1.0 mg/L. SEM indicates that the iodine has impeded, but may not have completely eliminated, the formation of biofilm. Metals analyses reveal some corrosion in the iodinated system after 3 years of continuous exposure. Significant microbial contamination has been present continuously in a parallel noniodinated system since the third week of operation.

  16. Soil nitrogen balance under wastewater management: Field measurements and simulation results

    USGS Publications Warehouse

    Sophocleous, M.; Townsend, M.A.; Vocasek, F.; Ma, L.; KC, A.

    2009-01-01

    The use of treated wastewater for irrigation of crops could result in high nitrate-nitrogen (NO3-N) concentrations in the vadose zone and ground water. The goal of this 2-yr field-monitoring study in the deep silty clay loam soils south of Dodge City, Kansas, was to assess how and under what circumstances N from the secondary-treated, wastewater-irrigated corn reached the deep (20-45 m) water table of the underlying High Plains aquifer and what could be done to minimize this problem. We collected 15.2-m-deep soil cores for characterization of physical and chemical properties; installed neutron probe access tubes to measure soil-water content and suction lysimeters to sample soil water periodically; sampled monitoring, irrigation, and domestic wells in the area; and obtained climatic, crop, irrigation, and N application rate records for two wastewater-irrigated study sites. These data and additional information were used to run the Root Zone Water Quality Model to identify key parameters and processes that influence N losses in the study area. We demonstrated that NO3-N transport processes result in significant accumulations of N in the vadose zone and that NO3-N in the underlying ground water is increasing with time. Root Zone Water Quality Model simulations for two wastewater-irrigated study sites indicated that reducing levels of corn N fertilization by more than half to 170 kg ha-1 substantially increases N-use efficiency and achieves near-maximum crop yield. Combining such measures with a crop rotation that includes alfalfa should further reduce the accumulation and downward movement of NO3-N in the soil profile. Copyright © 2009 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.

  17. Velocity structure of a bottom simulating reflector offshore Peru: Results from full waveform inversion

    USGS Publications Warehouse

    Pecher, I.A.; Minshull, T.A.; Singh, S.C.; Von Huene, R.

    1996-01-01

    Much of our knowledge of the worldwide distribution of submarine gas hydrates comes from seismic observations of Bottom Simulating Reflectors (BSRs). Full waveform inversion has proven to be a reliable technique for studying the fine structure of BSRs using the compressional wave velocity. We applied a non-linear full waveform inversion technique to a BSR at a location offshore Peru. We first determined the large-scale features of seismic velocity variations using a statistical inversion technique to maximise coherent energy along travel-time curves. These velocities were used as a starting velocity model for the full waveform inversion, which yielded a detailed velocity/depth model in the vicinity of the BSR. We found that the data are best fit by a model in which the BSR consists of a thin, low-velocity layer. The compressional wave velocity drops from 2.15 km/s down to an average of 1.70 km/s in an 18 m thick interval, with a minimum velocity of 1.62 km/s in a 6 m interval. The resulting compressional wave velocity was used to estimate gas content in the sediments. Our results suggest that the low-velocity layer is a 6-18 m thick zone containing a few percent of free gas in the pore space. The presence of the BSR coincides with a region of vertical uplift. We therefore suggest that gas at this BSR is formed by dissociation of hydrates at the base of the hydrate stability zone due to uplift and a subsequent decrease in pressure.

  18. Test Results from a Direct Drive Gas Reactor Simulator Coupled to a Brayton Power Conversion Unit

    NASA Technical Reports Server (NTRS)

    Hervol, David S.; Briggs, Maxwell H.; Owen, Albert K.; Bragg-Sitton, Shannon M.; Godfroy, Thomas J.

    2010-01-01

    Component level testing of power conversion units proposed for use in fission surface power systems has typically been done using relatively simple electric heaters for thermal input. These heaters do not adequately represent the geometry or response of proposed reactors. As testing of fission surface power systems transitions from the component level to the system level it becomes necessary to more accurately replicate these reactors using reactor simulators. The Direct Drive Gas-Brayton Power Conversion Unit test activity at the NASA Glenn Research Center integrates a reactor simulator with an existing Brayton test rig. The response of the reactor simulator to a change in Brayton shaft speed is shown as well as the response of the Brayton to an insertion of reactivity, corresponding to a drum reconfiguration. The lessons learned from these tests can be used to improve the design of future reactor simulators which can be used in system level fission surface power tests.

  19. ATMOSPHERIC MERCURY SIMULATION USING THE CMAQ MODEL: FORMULATION DESCRIPTION AND ANALYSIS OF WET DEPOSITION RESULTS

    EPA Science Inventory

    The Community Multiscale Air Quality (CMAQ) modeling system has recently been adapted to simulate the emission, transport, transformation and deposition of atmospheric mercury in three distinct forms; elemental mercury gas, reactive gaseous mercury, and particulate mercury. Emis...

  20. Results of two-dimensional time-evolved phase screen computer simulations

    NASA Astrophysics Data System (ADS)

    Gamble, Kevin J.; Weeks, Arthur R.; Myler, Harley R.; Rabadi, Wissam A.

    1995-06-01

    This paper presents a 2D computer simulation of the observed intensity and phase behind a time-evolved phase screen. Both spatial and temporal statistics of the observed intensity are compared to theoretical predictions. In particular, the intensity statistics as a function of detector position within the propagated laser beam are investigated. The computer simulation program was written in the C programming language running on a Sun SPARC-5 workstation.
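    A common FFT-based construction for such phase screens filters white Gaussian noise by the square root of the Kolmogorov spectral density. This is a standard textbook approach, not necessarily the paper's implementation, and the grid size, spacing, and Fried parameter below are illustrative.

```python
# Minimal FFT-based Kolmogorov phase screen generator (illustrative sketch).
import numpy as np

def kolmogorov_phase_screen(n, dx, r0, seed=0):
    """Return an n x n phase screen (radians) with Fried parameter r0 (m)."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies (cycles/m)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                                # avoid division by zero at k = 0
    psd = 0.023 * r0**(-5/3) * k**(-11/3)        # Kolmogorov phase PSD
    psd[0, 0] = 0.0                              # remove the unphysical piston term
    dk = 2 * np.pi / (n * dx)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    screen = np.fft.ifft2(noise * np.sqrt(psd) * dk) * n**2
    return np.real(screen)

phi = kolmogorov_phase_screen(n=128, dx=0.01, r0=0.1)
print(phi.shape)  # (128, 128)
```

A time-evolved screen is often approximated by shifting a larger screen across the aperture (frozen-flow hypothesis) rather than regenerating it each step.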

  1. NPE 2010 results - Independent performance assessment by simulated CTBT violation scenarios

    NASA Astrophysics Data System (ADS)

    Ross, O.; Bönnemann, C.; Ceranna, L.; Gestermann, N.; Hartmann, G.; Plenefisch, T.

    2012-04-01

    earthquakes by seismological analysis. The remaining event at Black Thunder Mine, Wyoming, on 23 Oct at 21:15 UTC showed clear explosion characteristics. It also caused infrasound detections at one station in Canada. An infrasonic single-station localization algorithm produced localization results comparable in precision to the teleseismic localization. However, the analysis of regional seismological stations gave the most accurate result, with an error ellipse of about 60 square kilometers. Finally, a forward ATM simulation was performed with the candidate event as the source in order to reproduce the original detection scenario. The ATM results showed a simulated station fingerprint in the IMS very similar to the fictitious detections given in the NPE 2010 scenario, an additional confirmation that the event was correctly identified. The event analysis of NPE 2010 shown here serves as a successful example of data fusion between the technology of radionuclide detection supported by ATM, seismological methodology, and infrasound signal processing.

  2. Accuracy of Aerodynamic Model Parameters Estimated from Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1997-01-01

    An important part of building mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of this accuracy, the parameter estimates themselves have limited value. An expression is developed for computing quantitatively correct parameter accuracy measures for maximum likelihood parameter estimates when the output residuals are colored. This result is important because experience in analyzing flight test data reveals that the output residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Monte Carlo simulation runs were used to show that parameter accuracy measures from the new technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for correction factors or frequency domain analysis of the output residuals. The technique was applied to flight test data from repeated maneuvers flown on the F-18 High Alpha Research Vehicle. As in the simulated cases, parameter accuracy measures from the new technique were in agreement with the scatter in the parameter estimates from repeated maneuvers, whereas conventional parameter accuracy measures were optimistic.
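    The abstract's central point can be reproduced with a small Monte Carlo sketch (not the authors' method or data): when a line is fit by least squares to data with AR(1)-colored noise, the conventional "white noise" covariance understates the true scatter of the slope estimates across repeated experiments. All parameter values here are illustrative.

```python
# Monte Carlo demonstration that conventional parameter accuracy measures
# are optimistic when residuals are colored (AR(1) noise, illustrative values).
import numpy as np

rng = np.random.default_rng(1)
N, runs, rho = 200, 500, 0.9
t = np.linspace(0.0, 1.0, N)
X = np.column_stack([np.ones(N), t])       # model: y = a + b*t
slopes, naive_se = [], []
for _ in range(runs):
    e = np.zeros(N)
    w = rng.normal(size=N)
    for i in range(1, N):                  # generate AR(1) colored noise
        e[i] = rho * e[i - 1] + w[i]
    y = 1.0 + 2.0 * t + e
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ beta
    s2 = r @ r / (N - 2)                   # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)      # conventional "white noise" covariance
    slopes.append(beta[1])
    naive_se.append(np.sqrt(cov[1, 1]))
true_scatter = np.std(slopes)              # actual scatter across repeated fits
print(true_scatter > 2 * np.mean(naive_se))  # naive errors are optimistic -> True
```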

  3. Direct Numerical Simulation of Liquid Nozzle Spray with Comparison to Shadowgraphy and X-Ray Computed Tomography Experimental Results

    NASA Astrophysics Data System (ADS)

    van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis

    2014-11-01

    In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al. (JCP 227 (2008)) and employ the volume of fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second-order accurate, un-split, conservative, three-dimensional VOF scheme providing second-order density fluxes and capable of robust and accurate high-density-ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.

  4. Multiple Hypothesis Tracking (MHT) for Space Surveillance: Results and Simulation Studies

    NASA Astrophysics Data System (ADS)

    Singh, N.; Poore, A.; Sheaff, C.; Aristoff, J.; Jah, M.

    2013-09-01

    tracking performance compared to existing methods at a lower computational cost, especially for closely-spaced objects, in realistic multi-sensor multi-object tracking scenarios over multiple regimes of space. Specifically, we demonstrate that the prototype MHT system can accurately and efficiently process tens of thousands of UCTs and angles-only UCOs emanating from thousands of objects in LEO, GEO, MEO and HELO, many of which are closely-spaced, in real-time on a single laptop computer, thereby making it well-suited for large-scale breakup and tracking scenarios. This is possible in part because complexity reduction techniques are used to control the runtime of MHT without sacrificing accuracy. We assess the performance of MHT in relation to other tracking methods in multi-target, multi-sensor scenarios ranging from easy to difficult (i.e., widely-spaced objects to closely-spaced objects), using realistic physics and probabilities of detection less than one. In LEO, it is shown that the MHT system is able to address the challenges of processing breakups by analyzing multiple frames of data simultaneously in order to improve association decisions, reduce cross-tagging, and reduce unassociated UCTs. As a result, the multi-frame MHT system can establish orbits up to ten times faster than single-frame methods. Finally, it is shown that in GEO, MEO and HELO, the MHT system is able to address the challenges of processing angles-only optical observations by providing a unified multi-frame framework.

  5. High-resolution Monte Carlo simulation of flow and conservative transport in heterogeneous porous media 2. Transport results

    USGS Publications Warehouse

    Naff, R.L.; Haley, D.F.; Sudicky, E.A.

    1998-01-01

    In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported. The transport simulations assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimating large-time macrodispersivities from cloud second-moment data, and for approximating the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possibly ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported as well.
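    The moment-based quantities discussed above can be sketched under standard definitions: the zeroth, first, and second central spatial moments of a tracer cloud, with longitudinal macrodispersivity taken as half the rate of change of the second central moment with mean displacement. The Gaussian snapshots below are synthetic, not the paper's data.

```python
# Illustrative sketch of cloud moment and macrodispersivity estimation.
import numpy as np

def cloud_moments(x, c):
    """Zeroth moment (mass), centroid, and second central moment of a 1-D cloud."""
    dx = x[1] - x[0]
    m0 = (c * dx).sum()
    xbar = (x * c * dx).sum() / m0
    var = ((x - xbar)**2 * c * dx).sum() / m0
    return m0, xbar, var

# Two hypothetical snapshots of a Gaussian cloud spreading as it advects:
x = np.linspace(0.0, 200.0, 2001)
snap = lambda mu, s: np.exp(-(x - mu)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
_, x1, v1 = cloud_moments(x, snap(50.0, 5.0))
_, x2, v2 = cloud_moments(x, snap(100.0, 8.0))

# Longitudinal macrodispersivity: A_L = (1/2) d(sigma^2)/d(xbar)
A_L = 0.5 * (v2 - v1) / (x2 - x1)
print(round(A_L, 3))  # 0.39, i.e. (64 - 25) / (2 * 50)
```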

  6. Particle-In-Cell (PIC) code simulation results and comparison with theory scaling laws for photoelectron-generated radiation

    SciTech Connect

    Dipp, T.M.

    1993-12-01

    The generation of radiation via photoelectrons induced off a conducting surface was explored using Particle-In-Cell (PIC) computer simulations. Using the MAGIC PIC code, the simulations were performed in one dimension to handle the diverse scale lengths of the particles and fields in the problem. The simulations involved monoenergetic, nonrelativistic photoelectrons emitted normal to the illuminated conducting surface. A sinusoidal, 100% modulated, 6.3263 ns pulse train, as well as unmodulated emission, was used to explore the behavior of the particles, fields, and generated radiation. A special postprocessor was written to convert the PIC-simulated electron sheath into far-field radiation parameters by means of rigorous retarded-time calculations. The results of the small-spot PIC simulations were used to generate graphs of resonance and nonresonance radiation quantities such as radiated lobe patterns, frequency, and power. A database of PIC simulation results was created and, using a nonlinear curve-fitting program, compared with theoretical scaling laws. Overall, the small-spot behavior predicted by the theoretical scaling laws was generally observed in the PIC simulation data, providing confidence in both the theoretical scaling laws and the PIC simulations.
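    The final comparison step, fitting a simulation database against theoretical scaling laws with a nonlinear curve fit, can be sketched as follows. The power-law model, parameter values, and synthetic data are assumptions for illustration, not the paper's actual database or scaling laws.

```python
# Sketch: fit a power-law scaling model to synthetic "simulation" points and
# check the recovered exponent against an assumed theoretical value.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(x, C, a):
    """Generic power-law scaling model P = C * x**a."""
    return C * x**a

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 20)   # e.g. a drive parameter (hypothetical)
true_C, true_a = 2.0, 1.5        # assumed theoretical scaling coefficients
P = scaling_law(x, true_C, true_a) * (1 + 0.02 * rng.normal(size=x.size))

popt, pcov = curve_fit(scaling_law, x, P, p0=[1.0, 1.0])
print(round(popt[1], 1))  # recovered exponent close to the assumed 1.5
```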

  7. Results Of Copper Catalyzed Peroxide Oxidation (CCPO) Of Tank 48H Simulants

    SciTech Connect

    Peters, T. B.; Pareizs, J. M.; Newell, J. D.; Fondeur, F. F.; Nash, C. A.; White, T. L.; Fink, S. D.

    2012-12-13

    Savannah River National Laboratory (SRNL) performed a series of laboratory-scale experiments that examined copper-catalyzed hydrogen peroxide (H₂O₂) aided destruction of organic components, most notably tetraphenylborate (TPB), in Tank 48H simulant slurries. The experiments were designed with an expectation of conducting the process within existing vessels of Building 241-96H with minimal modifications to the existing equipment. Results of the experiments indicate that TPB destruction levels exceeding 99.9% are achievable, dependent on the reaction conditions. A lower reaction pH provides faster reaction rates (pH 7 > pH 9 > pH 11); however, pH 9 reactions produce the smallest quantity of organic residual compounds within the limits of species analyzed. Higher temperatures lead to faster reaction rates and smaller quantities of organic residual compounds. A processing temperature of 50 °C, as part of an overall set of conditions, appears to provide a viable TPB destruction time on the order of 4 days. Higher concentrations of the copper catalyst provide faster reaction rates, but the highest copper concentration (500 mg/L) also resulted in the second highest quantity of organic residual compounds. The data in this report suggest 100-250 mg/L as a minimum. Faster rates of H₂O₂ addition lead to faster reaction rates and lower quantities of organic residual compounds. An addition rate of 0.4 mL/hour, scaled to the full vessel, is suggested for the process. SRNL recommends that for pH adjustment, an acid addition rate of 42 mL/hour, scaled to the full vessel, be used; this is the same addition rate used in the testing. Even though the TPB and phenylborates can be destroyed in a relatively short time, the residual organics will take longer to degrade to <10 mg/L. Low-level leaching of titanium occurred; however, the typical concentrations of released titanium are very low (~40 mg/L or less).
A small amount of leaching under these conditions is not

  8. Results from simulated contact-handled transuranic waste experiments at the Waste Isolation Pilot Plant

    SciTech Connect

    Molecke, M.A.; Sorensen, N.R.; Krumhansl, J.L.

    1993-12-31

    We conducted in situ experiments with nonradioactive, contact-handled transuranic (CH TRU) waste drums at the Waste Isolation Pilot Plant (WIPP) facility for about four years. We performed these tests in two rooms in rock salt at WIPP, with drums surrounded by crushed salt or 70 wt% salt/30 wt% bentonite clay backfills, or partially submerged in a NaCl brine pool. Air and brine temperatures were maintained at ~40 °C. These full-scale (210-L drum) experiments provided in situ data on backfill material moisture-sorption and physical properties in the presence of brine; waste container corrosion adequacy; and migration of chemical tracers (nonradioactive actinide and fission product simulants) in the near field, all as a function of time. Individual drums, backfill, and brine samples were removed periodically for laboratory evaluations. Waste container testing in the presence of brine and brine-moistened backfill materials served as a severe overtest of long-term conditions that could be anticipated in an actual salt waste repository. We also obtained relevant operational-test emplacement and retrieval experience. All test results are intended to support both the acceptance of actual TRU wastes at the WIPP and performance assessment data needs. We provide an overview and technical data summary focusing on the WIPP CH TRU environmental overtests involving 174 waste drums in the presence of backfill materials and the brine pool, with posttest laboratory materials analyses of backfill sorbed-moisture content, CH TRU drum corrosion, tracer migration, and associated test observations.

  9. Recovery of yttrium from cathode ray tubes and lamps’ fluorescent powders: experimental results and economic simulation

    SciTech Connect

    Innocenzi, V.; De Michelis, I.; Ferella, F.; Vegliò, F.

    2013-11-15

    Highlights: • Fluorescent powder of lamps. • Fluorescent powder of cathode ray tubes. • Recovery of yttrium from fluorescent powders. • Economic simulation for the processes to recover yttrium from WEEE. - Abstract: In this paper, yttrium recovery from fluorescent powder of lamps and cathode ray tubes (CRTs) is described. The process for treating these materials includes the following: (a) acid leaching, (b) purification of the leach liquors using sodium hydroxide and sodium sulfide, (c) precipitation of yttrium using oxalic acid, and (d) calcination of oxalates for production of yttrium oxides. Experimental results have shown that the process conditions necessary to purify the solutions and recover yttrium strongly depend on the composition of the leach liquor, in other words, whether the powder comes from treatment of CRTs or lamps. Under the optimal experimental conditions, the recoveries of yttrium oxide are about 95%, 55%, and 65% for CRT, lamp, and CRT/lamp mixture (called MIX) powders, respectively. The lower yields obtained during treatment of MIX and lamp powders are probably due to the co-precipitation of yttrium together with other metals contained in the lamp powder only. Yttrium loss can be reduced to a minimum by changing the experimental conditions with respect to the CRT process. In any case, the purity of the final products from CRT, lamp, and MIX powders is greater than 95%. Moreover, the possibility of simultaneously treating both CRT and lamp powders is very important from an industrial point of view, since a single plant could treat fluorescent powder coming from two different electronic wastes.
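    The recovery fractions quoted in the abstract translate into a simple mass-balance sketch; only the percentages come from the abstract, while the 100 kg feed basis is hypothetical.

```python
# Recovery fractions reported in the abstract; the 100 kg Y2O3 feed
# basis is purely hypothetical, for illustration.
RECOVERY = {"CRT": 0.95, "lamps": 0.55, "MIX": 0.65}

def recovered_y2o3(feed_kg: float, powder_type: str) -> float:
    """Mass of yttrium oxide recovered per feed_kg of Y2O3 in the powder."""
    return feed_kg * RECOVERY[powder_type]

for kind in RECOVERY:
    print(f"{kind}: {recovered_y2o3(100.0, kind):.0f} kg recovered per 100 kg Y2O3 in feed")
```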

  10. CORRECTING FOR INTERSTELLAR SCATTERING DELAY IN HIGH-PRECISION PULSAR TIMING: SIMULATION RESULTS

    SciTech Connect

    Palliyaguru, Nipuni; McLaughlin, Maura; Stinebring, Daniel; Demorest, Paul; Jones, Glenn E-mail: maura.mclaughlin@mail.wvu.edu E-mail: pdemores@nrao.edu

    2015-12-20

    Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and highly scattered signal. We also show that the effect of pulse-to-pulse “jitter” is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.
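    Cyclic spectroscopy operates on the raw voltage signal, but the underlying idea of recovering a scattering impulse response by spectral division can be sketched with ordinary FFTs. A minimal, noiseless sketch, assuming a Gaussian intrinsic pulse and a one-sided exponential IRF (all shapes and constants are invented for illustration):

```python
import numpy as np

n = 1024
t = np.arange(n)

# Assumed intrinsic pulse (Gaussian) and scattering IRF (one-sided exponential).
pulse = np.exp(-0.5 * ((t - 100) / 2.0) ** 2)
irf = np.where(t < 200, np.exp(-t / 30.0), 0.0)
irf /= irf.sum()

# The observed profile is the (circular) convolution of pulse and IRF.
observed = np.real(np.fft.ifft(np.fft.fft(pulse) * np.fft.fft(irf)))

# With the intrinsic pulse known and no noise, spectral division
# recovers the IRF essentially exactly.
recovered = np.real(np.fft.ifft(np.fft.fft(observed) / np.fft.fft(pulse)))

max_err = np.max(np.abs(recovered - irf))
print("max IRF recovery error:", max_err)
```

In real data the division must be regularized against noise and the intrinsic pulse is not known a priori, which is where cyclic spectroscopy's phase information becomes essential.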

  11. Correcting for Interstellar Scattering Delay in High-precision Pulsar Timing: Simulation Results

    NASA Astrophysics Data System (ADS)

    Palliyaguru, Nipuni; Stinebring, Daniel; McLaughlin, Maura; Demorest, Paul; Jones, Glenn

    2015-12-01

    Light travel time changes due to gravitational waves (GWs) may be detected within the next decade through precision timing of millisecond pulsars. Removal of frequency-dependent interstellar medium (ISM) delays due to dispersion and scattering is a key issue in the detection process. Current timing algorithms routinely correct pulse times of arrival (TOAs) for time-variable delays due to cold plasma dispersion. However, none of the major pulsar timing groups correct for delays due to scattering from multi-path propagation in the ISM. Scattering introduces a frequency-dependent phase change in the signal that results in pulse broadening and arrival time delays. Any method to correct the TOA for interstellar propagation effects must be based on multi-frequency measurements that can effectively separate dispersion and scattering delay terms from frequency-independent perturbations such as those due to a GW. Cyclic spectroscopy, first described in an astronomical context by Demorest (2011), is a potentially powerful tool to assist in this multi-frequency decomposition. As a step toward a more comprehensive ISM propagation delay correction, we demonstrate through a simulation that we can accurately recover impulse response functions (IRFs), such as those that would be introduced by multi-path scattering, with a realistic signal-to-noise ratio (S/N). We demonstrate that timing precision is improved when scatter-corrected TOAs are used, under the assumptions of a high S/N and highly scattered signal. We also show that the effect of pulse-to-pulse “jitter” is not a serious problem for IRF reconstruction, at least for jitter levels comparable to those observed in several bright pulsars.

  12. Testing and Results of Human Metabolic Simulation Utilizing Ultrasonic Nebulizer Technology for Water Vapor Generation

    NASA Technical Reports Server (NTRS)

    Stubbe, Matthew; Curley, Su

    2010-01-01

    Life support technology must be evaluated thoroughly before ever being implemented into a functioning design. A major concern during that evaluation is safety. The ability to mimic human metabolic loads allows test engineers to evaluate the effectiveness of new technologies without risking injury to any actual humans. The main function of most life support technologies is the removal of carbon dioxide (CO2) and water (H2O) vapor. As such, any good human metabolic simulator (HMS) will mimic the human body's ability to produce these items. Introducing CO2 into a test chamber is a very straightforward process with few unknowns, so the focus of this particular new HMS design was on the much more complicated process of introducing known quantities of H2O vapor on command. Past iterations of the HMS have utilized steam, which is very hard to keep in the vapor phase while transporting and injecting into a test chamber. Steam also adds large quantities of heat to any test chamber, well beyond what an actual human does. For the new HMS, an alternative approach to water vapor generation was designed utilizing ultrasonic nebulizers. Ultrasonic technology allows water to be vibrated into extremely small droplets (2-5 microns) that evaporate without requiring additional heating. Performing this process inside the test chamber itself allows H2O vapor generation without the unwanted heat and the challenging process of transporting water vapor. This paper presents the design details as well as results of all initial and final acceptance system testing. Testing of the system was performed at a range of known human metabolic rates in both sea-level and reduced-pressure environments. This multitude of test points fully defines the system's capabilities as they relate to actual environmental systems testing.

  13. Isotonic contraction as a result of cooperation of sarcomeres--a model and simulation outcome.

    PubMed

    Wünsch, Z

    1996-01-01

    The molecular level of the functional structure of the contractile apparatus of cross-striated muscle has been mapped out almost minutely. Most authors accept the basic principles of the theory of sliding filaments and the theory of operation of molecular generators of force, which, of course, are progressively updated by integrating new knowledge. The idea of the model delineated below does not contradict these theories, for it refers to another level of the system's hierarchy. The definition of the system, hereafter referred to as the Ideal Sarcomere (IS), takes into account the fact that, during isotonic contraction, a large number of not wholly independently working sarcomeres and molecular generators of force are active in a synergistic way. The shortening velocity of the isotonically contracting IS is determined by the relation between quantities conveying different tasks of active generators of force and the influence of the system parameters. Although the IS is derived from simple axiomatic predicates, it has properties which were not premeditated in defining the system and which, in spite of this, correspond to some properties of the biological original. The equations of the system allow us to calculate the shortening velocity of 'isotonic contraction' and other variables and parameters and show, inter alia, an alternative way to derive and interpret the relations stated in Hill's force-velocity equation. The simulation results indicate that the macroscopic manifestations of isotonic contraction may also be contingent on the properties of the cooperating system of the multitude of sarcomeres, which also constitutes one part of the functional structure of muscle. PMID:8924648
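    Hill's force-velocity equation referred to above is the hyperbola (P + a)(v + b) = (P0 + a)b. A minimal sketch with illustrative textbook-style parameters (a/P0 = 0.25; these are not values from this paper):

```python
def hill_velocity(P: float, P0: float = 1.0, a: float = 0.25, b: float = 0.25) -> float:
    """Shortening velocity v from Hill's relation (P + a)(v + b) = (P0 + a)b."""
    return b * (P0 - P) / (P + a)

# Unloaded shortening is fastest; velocity falls to zero at isometric force P0.
for load in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"P/P0 = {load:.2f}  ->  v = {hill_velocity(load):.3f}")
```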

  14. Design, Results, Evolution and Status of the ATLAS Simulation at Point1 Project

    NASA Astrophysics Data System (ADS)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Fazio, D.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Sedov, A.; Twomey, M. S.; Wang, F.; Zaytsev, A.

    2015-12-01

    During the LHC Long Shutdown 1 (LS1) period, which started in 2013, the Simulation at Point1 (Sim@P1) project took advantage, in an opportunistic way, of the TDAQ (Trigger and Data Acquisition) HLT (High-Level Trigger) farm of the ATLAS experiment. This farm provides more than 1300 compute nodes, which are particularly suited for running event generation and Monte Carlo production jobs that are mostly CPU and not I/O bound. It is capable of running up to 2700 Virtual Machines (VMs), each with 8 CPU cores, for a total of up to 22000 parallel jobs. This contribution gives a review of the design, the results, and the evolution of the Sim@P1 project, operating a large-scale OpenStack-based virtualized platform deployed on top of the ATLAS TDAQ HLT farm computing resources. During LS1, Sim@P1 was one of the most productive ATLAS sites: it delivered more than 33 million CPU-hours and generated more than 1.1 billion Monte Carlo events. The design aspects are presented: the virtualization platform exploited by Sim@P1 avoids interference with TDAQ operations and guarantees the security and usability of the ATLAS private network. The cloud mechanism allows the separation of the needed support at both the infrastructural (hardware, virtualization layer) and logical (Grid site support) levels. This paper focuses on the operational aspects of such a large system during the upcoming LHC Run 2 period: simple, reliable, and efficient tools are needed to quickly switch from Sim@P1 to TDAQ mode and back, to exploit the resources when they are not used for data acquisition, even for short periods. The evolution of the central OpenStack infrastructure is described, as it was upgraded from the Folsom to the Icehouse release, including the scalability issues addressed.

  15. On the Accuracy of Genomic Selection

    PubMed Central

    Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte

    2016-01-01

    Genomic selection is focused on prediction of breeding values of selection candidates by means of a high density of markers. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for individuals (so-called test individuals), breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configurations of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular, those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared for simulated data, and the results point to the validity of our approach. The calculations were also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178
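    The prediction scheme described above (ridge regression fitted on training individuals, with accuracy measured as the correlation between predicted and true breeding values) can be sketched on toy data. All sizes, the heritability, and the ridge penalty below are arbitrary illustration choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_markers, n_qtl = 200, 100, 500, 20

# Toy SNP genotypes (0/1/2) and sparse QTL effects.
X = rng.integers(0, 3, size=(n_train + n_test, n_markers)).astype(float)
X -= X.mean(axis=0)                        # center markers (sketch-level)
beta = np.zeros(n_markers)
qtl = rng.choice(n_markers, n_qtl, replace=False)
beta[qtl] = rng.normal(0.0, 1.0, n_qtl)

g = X @ beta                                # true breeding values
y = g + rng.normal(0.0, g.std(), g.size)    # phenotypes with h^2 ~ 0.5

Xtr, ytr = X[:n_train], y[:n_train]
lam = 10.0                                  # ridge penalty (arbitrary)
beta_hat = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_markers), Xtr.T @ ytr)

# Accuracy = correlation between predicted and true breeding values.
accuracy = float(np.corrcoef(X[n_train:] @ beta_hat, g[n_train:])[0, 1])
print(f"genomic selection accuracy on test individuals: {accuracy:.2f}")
```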

  16. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    SciTech Connect

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set then are quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there is also considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more than the chosen reference data. 
In aggregate, the simulations of land-surface latent and sensible
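    The aggregated error statistic used above reduces each simulated field to a root-mean-square difference against a reference data set; a minimal sketch, with made-up temperature values:

```python
import numpy as np

def rmse(sim: np.ndarray, ref: np.ndarray) -> float:
    """Root-mean-square difference between simulated and reference fields."""
    return float(np.sqrt(np.mean((sim - ref) ** 2)))

# Hypothetical surface air temperatures (K) at three grid points.
ref = np.array([280.0, 285.0, 290.0])
sim = np.array([281.0, 284.0, 292.0])
print(f"RMSE = {rmse(sim, ref):.3f} K")
```

The same statistic applied between two alternative reference data sets gives the observational-uncertainty estimate the report describes.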

  17. Three-dimensional MHD simulation of the Caltech plasma jet experiment: first results

    SciTech Connect

    Zhai, Xiang; Bellan, Paul M.; Li, Hui; Li, Shengtai E-mail: pbellan@caltech.edu E-mail: sli@lanl.gov

    2014-08-10

    Magnetic fields are believed to play an essential role in astrophysical jets with observations suggesting the presence of helical magnetic fields. Here, we present three-dimensional (3D) ideal MHD simulations of the Caltech plasma jet experiment using a magnetic tower scenario as the baseline model. Magnetic fields consist of an initially localized dipole-like poloidal component and a toroidal component that is continuously being injected into the domain. This flux injection mimics the poloidal currents driven by the anode-cathode voltage drop in the experiment. The injected toroidal field stretches the poloidal fields to large distances, while forming a collimated jet along with several other key features. Detailed comparisons between 3D MHD simulations and experimental measurements provide a comprehensive description of the interplay among magnetic force, pressure, and flow effects. In particular, we delineate both the jet structure and the transition process that converts the injected magnetic energy to other forms. With suitably chosen parameters that are derived from experiments, the jet in the simulation agrees quantitatively with the experimental jet in terms of magnetic/kinetic/inertial energy, total poloidal current, voltage, jet radius, and jet propagation velocity. Specifically, the jet velocity in the simulation is proportional to the poloidal current divided by the square root of the jet density, in agreement with both the experiment and analytical theory. This work provides a new and quantitative method for relating experiments, numerical simulations, and astrophysical observations, and demonstrates the possibility of using terrestrial laboratory experiments to study astrophysical jets.
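    The reported scaling, jet velocity proportional to poloidal current divided by the square root of density, is easy to sanity-check numerically; the proportionality constant here is arbitrary, not the experimentally derived one.

```python
import math

def jet_velocity(i_poloidal: float, density: float, c: float = 1.0) -> float:
    """Jet velocity scaling v = c * I / sqrt(rho); c is an arbitrary constant."""
    return c * i_poloidal / math.sqrt(density)

v0 = jet_velocity(1.0, 1.0)
print(jet_velocity(2.0, 1.0) / v0)   # doubling the current doubles v
print(jet_velocity(1.0, 4.0) / v0)   # 4x the density halves v
```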

  18. Continuous glucose monitoring and trend accuracy: news about a trend compass.

    PubMed

    Signal, Matthew; Gottlieb, Rebecca; Le Compte, Aaron; Chase, J Geoffrey

    2014-09-01

    Continuous glucose monitoring (CGM) devices are being increasingly used to monitor glycemia in people with diabetes. One advantage with CGM is the ability to monitor the trend of sensor glucose (SG) over time. However, there are few metrics available for assessing the trend accuracy of CGM devices. The aim of this study was to develop an easy-to-interpret tool for assessing trend accuracy of CGM data. SG data from CGM were compared to hourly blood glucose (BG) measurements, and trend accuracy was quantified using the dot product. Trend accuracy results are displayed on the Trend Compass, which depicts trend accuracy as a function of BG. A trend performance table and Trend Index (TI) metric are also proposed. The Trend Compass was tested using simulated CGM data with varying levels of error and variability, as well as real clinical CGM data. The results show that the Trend Compass is an effective tool for differentiating good trend accuracy from poor trend accuracy, independent of glycemic variability. Furthermore, the real clinical data show that the Trend Compass assesses trend accuracy independent of point bias error. Finally, the importance of assessing trend accuracy as a function of BG level is highlighted in a case example of low and falling BG data, with corresponding rising SG data. This study developed a simple-to-use tool for quantifying trend accuracy. The resulting trend accuracy is easily interpreted on the Trend Compass plot, and if required, the performance table and TI metric. PMID:24876437
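    A dot-product trend-accuracy measure of the kind described can be sketched as the cosine similarity between reference (BG) and sensor (SG) trend vectors; the exact normalisation used by the Trend Compass may differ from this illustration.

```python
import numpy as np

def trend_agreement(dbg_dt: float, dsg_dt: float, dt: float = 1.0) -> float:
    """Cosine of the angle between reference (BG) and sensor (SG) trend
    vectors (dt, dGlucose): a dot-product trend-accuracy sketch."""
    a = np.array([dt, dbg_dt * dt])
    b = np.array([dt, dsg_dt * dt])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(trend_agreement(1.0, 1.0))    # identical trends -> perfect agreement
print(trend_agreement(2.0, -2.0))   # opposite trends -> strongly negative
```

A value near 1 means the sensor trend tracks the reference trend; negative values flag the dangerous case of SG rising while BG is falling.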

  19. Preliminary results of column experiments simulating nutrients transport in artificial recharge by treated wastewater

    NASA Astrophysics Data System (ADS)

    Leal, María; Meffe, Raffaella; Lillo, Javier

    2013-04-01

    the field site. Wastewater synthesized in the laboratory simulates the secondary effluent used for recharge activities in the Experimental Plant of Carrión de los Céspedes. Experimental results showed that ammonium and phosphates are clearly retarded when infiltrating through both materials (zeolite and palygorskite) as a consequence of cation exchange and surface complexation processes. Indeed, after about 14 days from the beginning of the experiments, the two compounds do not appear at the column effluent, exhibiting very strong retardation. Concerning nitrites and nitrates, no retardation is observed. Preliminary interpretation of the experimental results by means of the geochemical modeling code PHREEQ-C confirmed and quantified the importance of specific reactive processes affecting the transport of nutrients through the applied reactive materials.

  20. Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise

    NASA Astrophysics Data System (ADS)

    Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej

    2010-11-01

    Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful applications demonstrate the usefulness of weak anticipative information. Job shop scheduling with a makespan criterion presents a real case of customized flexible furniture production optimization; a genetic algorithm for job shop scheduling optimization is presented. Simulation-based inventory control describes inventory optimization for products with stochastic lead time and demand, where dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is discussed as well. All cases are discussed from the optimization, modeling, and learning points of view.

  1. Improving traffic noise simulations using space syntax: preliminary results from two roadway systems.

    PubMed

    M Dzhambov, Angel; D Dimitrova, Donka; H Turnovska, Tanya

    2014-09-01

    Noise pollution is one of the four major forms of pollution in the world. In order to implement adequate strategies for noise control, assessment of traffic-generated noise is essential in city planning and management. The aim of this study was to determine whether space syntax could improve the predictive power of noise simulation. This paper reports a record linkage study which combined a documentary method with space syntax analysis. It analyses data about traffic flow, as well as field-measured and computer-simulated traffic noise, in two Bulgarian agglomerations. Our findings suggest that space syntax may have potential for predicting traffic noise exposure by improving models for noise simulation using specialised software or actual traffic counts. Scientific attention might need to be directed towards space syntax in order to study its further application in current models and algorithms for noise prediction. PMID:25222575

  2. Comparison of preliminary results from Airborne Aster Simulator (AAS) with TIMS data

    NASA Technical Reports Server (NTRS)

    Kannari, Yoshiaki; Mills, Franklin; Watanabe, Hiroshi; Ezaka, Teruya; Narita, Tatsuhiko; Chang, Sheng-Huei

    1992-01-01

    The Japanese Advanced Spaceborne Thermal Emission and Reflection radiometer (ASTER), being developed for a NASA EOS-A satellite, will have 3 VNIR, 6 SWIR, and 5 TIR (8-12 micron) bands. An Airborne ASTER Simulator (AAS) was developed for the Japan Resources Observation System Organization (JAROS) by the Geophysical Environmental Research Group (GER) Corp. to research surface temperature and emission features in the MWIR/TIR, to simulate ASTER's TIR bands, and to study the further possibility of MWIR/TIR bands. The AAS has 1 VNIR, 3 MWIR (3-5 microns), and 20 (currently 24) TIR bands. Data were collected over three sites - Cuprite, Nevada; Long Valley/Mono Lake, California; and Death Valley, California - with simultaneous ground truth measurements. Preliminary data collected by the AAS for Cuprite, Nevada are presented, and the AAS data are compared with Thermal Infrared Multispectral Scanner (TIMS) data.

  3. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    PubMed Central

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-01-01

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in a five-day inertial navigation can be improved by about 8% by the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS can reach a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs. PMID:27338408
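    The 51-state filter itself is far beyond a short example, but the principle, estimating a constant sensor error with a Kalman filter, can be sketched in one dimension; all numbers below are hypothetical.

```python
import random

random.seed(1)
true_bias = 0.05          # deg/h, hypothetical constant gyro bias
R = 0.01 ** 2             # measurement noise variance

# Scalar Kalman filter for a constant state (the bias): no process noise.
x, P = 0.0, 1.0           # initial estimate and its variance
for _ in range(500):
    z = true_bias + random.gauss(0.0, 0.01)   # noisy bias observation
    K = P / (P + R)                            # Kalman gain
    x = x + K * (z - x)                        # state update
    P = (1.0 - K) * P                          # covariance update

print(f"estimated bias: {x:.4f} deg/h, remaining variance: {P:.2e}")
```

The full calibration problem stacks many such error states (biases, g-sensitivity, cross-coupling, lever arms) into one filter driven by turntable maneuvers.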

  4. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-01-01

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10(-6)°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in a five-day inertial navigation can be improved by about 8% by the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS can reach a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs. PMID:27338408

  5. Summary of results of January climate simulations with the GISS coarse-mesh model

    NASA Technical Reports Server (NTRS)

    Spar, J.; Cohen, C.; Wu, P.

    1981-01-01

    The large scale climates generated by extended runs of the model are relatively independent of the initial atmospheric conditions, if the first few months of each simulation are discarded. The perpetual January simulations with a specified SST field produced excessive snow accumulation over the continents of the Northern Hemisphere. Mass exchanges between the cold (warm) continents and the warm (cold) adjacent oceans produced significant surface pressure changes over the oceans as well as over the land. The effect of terrain and terrain elevation on the amount of precipitation was examined. The evaporation of continental moisture was calculated to cause large increases in precipitation over the continents.

  6. Test Results From a Simulated High-Voltage Lunar Power Transmission Line

    NASA Technical Reports Server (NTRS)

    Birchenough, Arthur; Hervol, David

    2008-01-01

    The Alternator Test Unit (ATU) in the Lunar Power System Facility (LPSF) located at the NASA Glenn Research Center (GRC) in Cleveland, Ohio was modified to simulate high-voltage transmission capability. The testbed simulated a 1 km transmission cable length from the ATU to the LPSF using resistors and inductors installed between the distribution transformers. Power factor correction circuitry was used to compensate for the reactance of the distribution system to improve the overall power factor. This test demonstrated that a permanent magnet alternator can successfully provide high-frequency ac power to a lunar facility located at a distance.

  7. Test Results from a Simulated High Voltage Lunar Power Transmission Line

    NASA Technical Reports Server (NTRS)

    Birchenough, Arthur; Hervol, David

    2008-01-01

    The Alternator Test Unit (ATU) in the Lunar Power System Facility (LPSF) located at the NASA Glenn Research Center (GRC) in Cleveland, OH was modified to simulate high voltage transmission capability. The testbed simulated a 1 km transmission cable length from the ATU to the LPSF using resistors and inductors installed between the distribution transformers. Power factor correction circuitry was used to compensate for the reactance of the distribution system to improve the overall power factor. This test demonstrated that a permanent magnet alternator can successfully provide high frequency AC power to a lunar facility located at a distance.
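    Sizing the power factor correction described above follows the standard single-phase reactive-power calculation; the load, voltage, frequency, and power factor values below are assumed for illustration, not the testbed's actual parameters.

```python
import math

def pfc_capacitance(p_w: float, v_rms: float, f_hz: float,
                    pf_in: float, pf_out: float = 1.0) -> float:
    """Shunt capacitance (F) to raise a single-phase load's power factor
    from pf_in to pf_out."""
    q_before = p_w * math.tan(math.acos(pf_in))
    q_after = p_w * math.tan(math.acos(pf_out))
    q_c = q_before - q_after                  # reactive power to supply (VAR)
    return q_c / (2 * math.pi * f_hz * v_rms ** 2)

# e.g. a 10 kW load at 0.7 power factor on a 480 V, 1 kHz bus (assumed numbers)
c = pfc_capacitance(10e3, 480.0, 1000.0, 0.7)
print(f"required capacitance: {c * 1e6:.1f} uF")
```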

  8. 2D FTLE in 3D Flows: The accuracy of using two-dimensional data for Lagrangian analysis in a three-dimensional turbulent channel simulation

    NASA Astrophysics Data System (ADS)

    Rockwood, Matthew; Green, Melissa

    2012-11-01

    In experimental, three-dimensional, vortex-dominated flows, particle image velocimetry (PIV) data are often collected only in the plane of interest due to equipment constraints. For flows with significant out-of-plane velocities or velocity gradients, this can create large discrepancies in Lagrangian analyses that require accurate particle trajectories. A Finite Time Lyapunov Exponent (FTLE) analysis is one such example, and has been shown to be very powerful for examining vortex dynamics and interactions in a variety of aperiodic flows. In this work, FTLE analysis of a turbulent channel simulation was conducted using both full three-dimensional velocity data and modified planar data extracted from the same computational domain. When the out-of-plane velocity component is neglected, the difference in FTLE fields is non-trivial. A quantitative comparison and computation of error is presented for several planes across the width of the channel to determine the efficacy of using 2D analyses on these inherently 3D flows.
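    The FTLE is the time-normalised logarithm of the largest singular value of the flow-map gradient, so restricting a 3D gradient to its in-plane 2x2 block shows directly how neglecting out-of-plane terms changes the field. The gradient matrix below is a made-up example with out-of-plane shear, not data from the channel simulation.

```python
import numpy as np

T = 1.0  # integration time

# Hypothetical flow-map gradient over time T for a 3D flow; the 0.9
# entry is out-of-plane shear coupling z into x.
J3 = np.array([[1.2, 0.0, 0.9],
               [0.0, 0.8, 0.0],
               [0.0, 0.0, 1.04]])

def ftle(J: np.ndarray, T: float) -> float:
    """FTLE = (1/T) * ln(largest singular value of the flow-map gradient)."""
    return float(np.log(np.linalg.svd(J, compute_uv=False)[0]) / T)

ftle_3d = ftle(J3, T)
ftle_2d = ftle(J3[:2, :2], T)   # planar data: out-of-plane terms unavailable
print(f"FTLE 3D: {ftle_3d:.3f}, FTLE from 2D slice: {ftle_2d:.3f}")
```

In this example the planar restriction underestimates the FTLE, the kind of discrepancy the study quantifies across the channel width.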

  9. Magnesium-Cationic Dummy Atom Molecules Enhance Representation of DNA Polymerase β in Molecular Dynamics Simulations: Improved Accuracy in Studies of Structural Features and Mutational Effects

    PubMed Central

    Oelschlaeger, Peter; Klahn, Marco; Beard, William A.; Wilson, Samuel H.; Warshel, Arieh

    2007-01-01

    Human DNA polymerase β (pol β) fills gaps in DNA as part of base excision DNA repair. Due to its small size, it is a convenient model enzyme for other DNA polymerases. Its active site contains two Mg2+ ions, of which one binds an incoming dNTP and one catalyzes its condensation with the DNA primer strand. Simulating such binuclear metalloenzymes accurately but computationally efficiently is a challenging task. Here, we present a magnesium-cationic dummy atom approach that can easily be implemented in molecular mechanical force fields such as the ENZYMIX or the AMBER force fields. All properties investigated in this paper, that is, the structure and energetics of both Michaelis complexes and transition state (TS) complexes, were represented more accurately using the magnesium-cationic dummy atom model than using the traditional one-atom representation for Mg2+ ions. The improved agreement between calculated free energies of binding of TS models to different pol β variants and the experimentally determined activation free energies indicates that this model will be useful in studying mutational effects on the catalytic efficiency and fidelity of DNA polymerases. The model should also have broad applicability to the modeling of other magnesium-containing proteins. PMID:17174326

  10. Halo abundance matching: accuracy and conditions for numerical convergence

    NASA Astrophysics Data System (ADS)

    Klypin, Anatoly; Prada, Francisco; Yepes, Gustavo; Heß, Steffen; Gottlöber, Stefan

    2015-03-01

    Accurate predictions of the abundance and clustering of dark matter haloes play a key role in testing the standard cosmological model. Here, we investigate the accuracy of one of the leading methods of connecting the simulated dark matter haloes with observed galaxies - the halo abundance matching (HAM) technique. We show how to choose the optimal values of the mass and force resolution in large-volume N-body simulations so that they provide accurate estimates for correlation functions and circular velocities for haloes and their subhaloes - crucial ingredients of the HAM method. At the 10 per cent accuracy level, results converge for ~50 particles for haloes and ~150 particles for progenitors of subhaloes. In order to achieve this level of accuracy a number of conditions should be satisfied. The force resolution for the smallest resolved (sub)haloes should be in the range (0.1-0.3)rs, where rs is the scale radius of (sub)haloes. The number of particles for progenitors of subhaloes should be ~150. We also demonstrate that two-body scattering plays a minor role in the accuracy of N-body simulations, thanks to the relatively small number of crossing times of dark matter in haloes and the limited force resolution of cosmological simulations.

  11. Simulation of Anxiety Situations and Its Resultant Effect on Anxiety and Classroom Interaction of Student Teachers.

    ERIC Educational Resources Information Center

    Gustafson, Kent L.

    The purpose of this research experiment was to investigate the effectiveness of one type of simulation (consisting of a series of anxiety-inducing motion picture vignettes, split-screen video tape recording, and a trained recall worker) in reducing anxiety and thereby increasing the subsequent classroom interaction of student teachers. A secondary…

  12. Simulation and Gaming to Promote Health Education: Results of a Usability Test

    ERIC Educational Resources Information Center

    Albu, Mihai; Atack, Lynda; Srivastava, Ishaan

    2015-01-01

    Objective: Motivating clients to change their health behaviour, and maintaining their interest in exercise programmes, is an ongoing challenge for health educators. With new developments in technology, simulation and gaming are increasingly being considered as ways to motivate users, support learning and promote positive health behaviours. The purpose…

  13. Onboard utilization of ground control points for image correction. Volume 2: Analysis and simulation results

    NASA Technical Reports Server (NTRS)

    1981-01-01

    An approach to remote sensing that meets future mission requirements was investigated. The deterministic acquisition of data and the rapid correction of data for radiometric effects and image distortions are the most critical limitations of remote sensing. The following topics are discussed: onboard image correction systems, GCP navigation system simulation, GCP analysis, and image correction analysis measurement.

  14. Orbiter/shuttle carrier aircraft separation: Wind tunnel, simulation, and flight test overview and results

    NASA Technical Reports Server (NTRS)

    Homan, D. J.; Denison, D. E.; Elchert, K. C.

    1980-01-01

    A summary of the approach and landing test phase of the space shuttle program is given from the orbiter/shuttle carrier aircraft separation point of view. The data and analyses used during the wind tunnel testing, simulation, and flight test phases in preparation for the orbiter approach and landing tests are reported.

  15. Simulation results of liquid and plastic scintillator detectors for reactor antineutrino detection - A comparison

    NASA Astrophysics Data System (ADS)

    Kashyap, V. K. S.; Pant, L. M.; Mohanty, A. K.; Datar, V. M.

    2016-03-01

    A simulation study of two kinds of scintillation detectors has been done using GEANT4. We compare plastic scintillator and liquid scintillator based designs for detecting electron antineutrinos emitted from the core of reactors. The motivation for this study is to set up an experiment at the research reactor facility at BARC for very short baseline neutrino oscillation study and remote reactor monitoring.

  16. Jovian Plasma Torus Interaction with Europa: 3D Hybrid Kinetic Simulation. First results

    NASA Technical Reports Server (NTRS)

    Lipatov, A. S.; Cooper, J. F.; Paterson, W. R.; Sittler, E. C.; Hartle, R. E.; Simpson, D. G.

    2010-01-01

    The hybrid kinetic model supports comprehensive simulation of the interaction between different spatial and energetic elements of the Europa-moon-magnetosphere system with respect to variable upstream magnetic field and flux or density distributions of plasma and energetic ions, electrons, and neutral atoms. This capability is critical for improving the interpretation of the existing Europa flyby measurements from the Galileo orbiter mission, and for planning flyby and orbital measurements (including the surface and atmospheric compositions) for future missions. The simulations are based on recent models of the atmosphere of Europa (Cassidy et al., 2007; Shematovich et al., 2005). In contrast to previous approaches with MHD simulations, the hybrid model allows us to fully take into account the finite gyroradius effect and electron pressure, and to correctly estimate the ion velocity distribution and the fluxes along the magnetic field (assuming an initial Maxwellian velocity distribution for upstream background ions). Non-thermal distributions of upstream plasma will be addressed in future work. Photoionization, electron-impact ionization, charge exchange, and collisions between the ions and neutrals are also included in our model. We consider two models for background plasma: (a) with O(++) ions; (b) with O(++) and S(++) ions. The majority of the O2 atmosphere is thermal, with an extended cold population (Cassidy et al., 2007). The first few simulations already include an induced magnetic dipole; however, several important effects of induced magnetic fields arising from oceanic shell conductivity will be addressed in later work.

  17. Manned systems utilization analysis (study 2.1). Volume 3: LOVES computer simulations, results, and analyses

    NASA Technical Reports Server (NTRS)

    Stricker, L. T.

    1975-01-01

    The LOVES computer program was employed to analyze the geosynchronous portion of the NASA's 1973 automated satellite mission model from 1980 to 1990. The objectives of the analyses were: (1) to demonstrate the capability of the LOVES code to provide the depth and accuracy of data required to support the analyses; and (2) to tradeoff the concept of space servicing automated satellites composed of replaceable modules against the concept of replacing expendable satellites upon failure. The computer code proved to be an invaluable tool in analyzing the logistic requirements of the various test cases required in the tradeoff. It is indicated that the concept of space servicing offers the potential for substantial savings in the cost of operating automated satellite systems.

  18. Blood-Borne Markers of Fatigue in Competitive Athletes – Results from Simulated Training Camps

    PubMed Central

    Hecksteden, Anne; Skorski, Sabrina; Schwindling, Sascha; Hammes, Daniel; Pfeiffer, Mark; Kellmann, Michael; Ferrauti, Alexander; Meyer, Tim

    2016-01-01

    Assessing the current fatigue of athletes to fine-tune training prescriptions is a critical task in competitive sports. Blood-borne surrogate markers are widely used despite the scarcity of validation trials with representative subjects and interventions. Moreover, differences between training modes and disciplines (e.g. due to differences in eccentric force production or calorie turnover) have rarely been studied within a consistent design. Therefore, we investigated blood-borne fatigue markers during and after discipline-specific simulated training camps. A comprehensive panel of blood-borne indicators was measured in 73 competitive athletes (28 cyclists, 22 team sports, 23 strength) at 3 time-points: after a run-in resting phase (d 1), after a 6-day induction of fatigue (d 8) and following a subsequent 2-day recovery period (d 11). Venous blood samples were collected between 8 and 10 a.m. Courses of blood-borne indicators are considered fatigue-dependent if a significant deviation from baseline is present at day 8 (Δfatigue) which significantly regresses towards baseline until day 11 (Δrecovery). With cycling, a fatigue-dependent course was observed for creatine kinase (CK; Δfatigue 54±84 U/l; Δrecovery -60±83 U/l), urea (Δfatigue 11±9 mg/dl; Δrecovery -10±10 mg/dl), free testosterone (Δfatigue -1.3±2.1 pg/ml; Δrecovery 0.8±1.5 pg/ml) and insulin-like growth factor 1 (IGF-1; Δfatigue -56±28 ng/ml; Δrecovery 53±29 ng/ml). For urea and IGF-1, 95% confidence intervals for days 1 and 11 did not overlap with day 8. With strength and high-intensity interval training, respectively, fatigue-dependent courses and separated 95% confidence intervals were present for CK (strength: Δfatigue 582±649 U/l; Δrecovery -618±419 U/l; HIIT: Δfatigue 863±952 U/l; Δrecovery -741±842 U/l) only. These results indicate that, within a comprehensive panel of blood-borne markers, changes in fatigue are most accurately reflected by urea and IGF-1 for cycling and by CK
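The fatigue-dependence criterion stated in the abstract can be sketched as a simple check (a simplification, not the study's actual statistics: "significant" is approximated here by 95% confidence intervals of the paired deltas excluding zero, and all data below are synthetic):

```python
import numpy as np

def fatigue_dependent(d1, d8, d11, z=1.96):
    """Sketch of the criterion: a marker course is fatigue-dependent if
    there is a significant deviation from baseline at day 8 (Δfatigue)
    that significantly regresses towards baseline by day 11 (Δrecovery)."""
    d1, d8, d11 = map(np.asarray, (d1, d8, d11))

    def ci_excludes_zero(delta):
        m = delta.mean()
        half = z * delta.std(ddof=1) / np.sqrt(len(delta))
        return (m - half) * (m + half) > 0   # 95% CI does not contain 0

    dfat = d8 - d1
    drec = d11 - d8
    regresses = dfat.mean() * drec.mean() < 0  # recovery opposes fatigue
    return bool(ci_excludes_zero(dfat) and ci_excludes_zero(drec) and regresses)

# Synthetic CK-like course: pronounced rise at day 8, regression by day 11
rng = np.random.default_rng(0)
base = rng.normal(150, 30, 25)
ck_d8 = base + rng.normal(580, 100, 25)
ck_d11 = base + rng.normal(20, 100, 25)
```

Calling `fatigue_dependent(base, ck_d8, ck_d11)` on this synthetic rise-and-regress course returns True, while a flat course returns False.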

  19. Towards Regional Lunar Gravity Fields Using Lunar Prospector Extended Mission Data - Simulations and Results

    NASA Astrophysics Data System (ADS)

    Goossens, S.; Visser, P.; Floberghagen, R.; Koop, R.; Ambrosius, B.

    2002-12-01

    To date, the lunar gravimetric inverse problem has mainly been posed as a global problem, solving for gravity fields over the whole of the Moon. The asymmetric sampling of the force field requires that some sort of regularisation be applied in order to have a meaningful global solution that does not provide spurious information on the far side. On one hand, these global solutions work very well in terms of overall orbit quality and consistency, despite the fact that roughly one half of the surface lacks sampling. On the other hand, excellently sampled regions cannot be determined at maximum spatial resolution without overly affecting the solution on the far side, which in itself is highly unstable. Since the Lunar Prospector mission, there are many such excellently sampled regions on the near side of the Moon. In order to exhaust the information present in the tracking data of this satellite, regional methods for solving the gravity field of well-sampled areas become interesting. We present a method to extract regional gravity information from Doppler and range tracking of the Lunar Prospector spacecraft. The method incorporates the GEODYN II software package for tracking data processing and orbit determination, and a software package to analyse the residuals from the orbit determination process and to transform these residuals into gravity anomalies on the lunar surface by means of a Stokes method. Simulations show how well a gravity signal in the residuals can be recovered. Results from orbit determination using 20 days of Lunar Prospector Extended Mission data are shown, to demonstrate the readiness of the method to process real-life satellite data. With future missions such as SELENE, which will provide the first global tracking data set of the Moon ever, global and regional methods to solve for gravity field products will remain equally of interest, since they both can give complementary insight into the low and high resolution

  20. Blood-Borne Markers of Fatigue in Competitive Athletes - Results from Simulated Training Camps.

    PubMed

    Hecksteden, Anne; Skorski, Sabrina; Schwindling, Sascha; Hammes, Daniel; Pfeiffer, Mark; Kellmann, Michael; Ferrauti, Alexander; Meyer, Tim

    2016-01-01

    Assessing the current fatigue of athletes to fine-tune training prescriptions is a critical task in competitive sports. Blood-borne surrogate markers are widely used despite the scarcity of validation trials with representative subjects and interventions. Moreover, differences between training modes and disciplines (e.g. due to differences in eccentric force production or calorie turnover) have rarely been studied within a consistent design. Therefore, we investigated blood-borne fatigue markers during and after discipline-specific simulated training camps. A comprehensive panel of blood-borne indicators was measured in 73 competitive athletes (28 cyclists, 22 team sports, 23 strength) at 3 time-points: after a run-in resting phase (d 1), after a 6-day induction of fatigue (d 8) and following a subsequent 2-day recovery period (d 11). Venous blood samples were collected between 8 and 10 a.m. Courses of blood-borne indicators are considered fatigue-dependent if a significant deviation from baseline is present at day 8 (Δfatigue) which significantly regresses towards baseline until day 11 (Δrecovery). With cycling, a fatigue-dependent course was observed for creatine kinase (CK; Δfatigue 54±84 U/l; Δrecovery -60±83 U/l), urea (Δfatigue 11±9 mg/dl; Δrecovery -10±10 mg/dl), free testosterone (Δfatigue -1.3±2.1 pg/ml; Δrecovery 0.8±1.5 pg/ml) and insulin-like growth factor 1 (IGF-1; Δfatigue -56±28 ng/ml; Δrecovery 53±29 ng/ml). For urea and IGF-1, 95% confidence intervals for days 1 and 11 did not overlap with day 8. With strength and high-intensity interval training, respectively, fatigue-dependent courses and separated 95% confidence intervals were present for CK (strength: Δfatigue 582±649 U/l; Δrecovery -618±419 U/l; HIIT: Δfatigue 863±952 U/l; Δrecovery -741±842 U/l) only. These results indicate that, within a comprehensive panel of blood-borne markers, changes in fatigue are most accurately reflected by urea and IGF-1 for cycling and by CK

  1. A STOL airworthiness investigation using a simulation of an augmentor wing transport. Volume 1: Summary of results and airworthiness implications

    NASA Technical Reports Server (NTRS)

    Stapleford, R. L.; Heffley, R. K.; Hynes, C. S.; Scott, B. C.

    1974-01-01

    A simulator study of STOL airworthiness criteria was conducted using a model of an augmentor wing transport. The approach, flare and landing, go-around, and takeoff phases of flight were investigated. The results are summarized and possible implications with regard to airworthiness criteria are discussed. The results provide a data base for future STOL airworthiness requirements and a preliminary indication of potential problem areas. The results are also compared to the results from an earlier simulation of the Breguet 941S. Where possible, airworthiness criteria are proposed for consideration.

  2. Recent results and future challenges for large scale Particle-In-Cell simulations of plasma-based accelerator concepts

    SciTech Connect

    Huang, C.; An, W.; Decyk, V.K.; Lu, W.; Mori, W.B.; Tsung, F.S.; Tzoufras, M.; Morshed, S.; Antonsen, T.; Feng, B.; Katsouleas, T.; Fonseca, R.A.; Martins, S.F.; Vieira, J.; Silva, L.O.; Geddes, C.G.R.; Cormier-Michel, E.; Vay, J.-L.; Esarey, E.; Leemans, W.P.; Bruhwiler, D.L.; Cowan, B.; Cary, J.R.; Paul, K.

    2009-05-01

    The concepts and designs of plasma-based advanced accelerators for high energy physics and photon science are modeled in the SciDAC COMPASS project with a suite of Particle-In-Cell codes and simulation techniques, including the full electromagnetic model, the envelope model, the boosted frame approach, and the quasi-static model. In this paper, we report the progress of the development of these models and techniques and present recent results achieved with large-scale parallel PIC simulations. The simulation needs for modeling plasma-based advanced accelerators at the energy frontier are discussed and a path towards this goal is outlined.

  3. Additional road markings as an indication of speed limits: results of a field experiment and a driving simulator study.

    PubMed

    Daniels, Stijn; Vanrie, Jan; Dreesen, An; Brijs, Tom

    2010-05-01

    Although speed limits are indicated by road signs, road users are not always aware, while driving, of the actual speed limit on a given road segment. The Roads and Traffic Agency developed additional road markings in order to support driver decisions on speed on 70 km/h roads in Flanders-Belgium. In this paper the results are presented of two evaluation studies, both a field study and a simulator study, on the effects of the additional road markings on speed behaviour. The results of the field study showed no substantial effect of the markings on speed behaviour. Neither did the simulator study, with slightly different stimuli. Nevertheless an effect on lateral position was noticed in the simulator study, showing at least some effect of the markings. The role of conspicuity of design elements and expectations towards traffic environments is discussed. Both studies illustrate well some strengths and weaknesses of observational field studies compared to experimental simulator studies.

  4. Performance simulation of a combustion engine charged by a variable geometry turbocharger. I - Prerequirements, boundary conditions and model development. II - Simulation algorithm, computed results

    NASA Astrophysics Data System (ADS)

    Malobabic, M.; Buttschardt, W.; Rautenberg, M.

    The paper presents a theoretical derivation of the relationship between a variable geometry turbocharger and the combustion engine, using simplified boundary conditions and model restraints and taking into account the combustion process itself as well as the nonadiabatic operating conditions for the turbine and the compressor. The simulation algorithm is described, and the results computed using this algorithm are compared with measurements performed on a test engine in combination with a controllable turbocharger with adjustable turbine inlet guide vanes. In addition, the results of theoretical parameter studies are presented, which include the simulation of a given turbocharger with variable geometry in combination with different sized combustion engines and the simulation of different sized variable-geometry turbochargers in combination with a given combustion engine.

  5. Simulation shows hospitals that cooperate on infection control obtain better results than hospitals acting alone.

    PubMed

    Lee, Bruce Y; Bartsch, Sarah M; Wong, Kim F; Yilmaz, S Levent; Avery, Taliser R; Singh, Ashima; Song, Yeohan; Kim, Diane S; Brown, Shawn T; Potter, Margaret A; Platt, Richard; Huang, Susan S

    2012-10-01

    Efforts to control life-threatening infections, such as with methicillin-resistant Staphylococcus aureus (MRSA), can be complicated when patients are transferred from one hospital to another. Using a detailed computer simulation model of all hospitals in Orange County, California, we explored the effects when combinations of hospitals tested all patients at admission for MRSA and adopted procedures to limit transmission among patients who tested positive. Called "contact isolation," these procedures specify precautions for health care workers interacting with an infected patient, such as wearing gloves and gowns. Our simulation demonstrated that each hospital's decision to test for MRSA and implement contact isolation procedures could affect the MRSA prevalence in all other hospitals. Thus, our study makes the case that further cooperation among hospitals--which is already reflected in a few limited collaborative infection control efforts under way--could help individual hospitals achieve better infection control than they could achieve on their own.
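The coupling effect described above can be illustrated with a deliberately simple two-hospital toy model (not the detailed Orange County simulation; all rates and the 60% isolation effect are assumed for illustration): patient transfers link the hospitals, so one hospital's contact-isolation policy lowers the steady-state MRSA prevalence in the other.

```python
def steady_prevalence(isolation_a, isolation_b, steps=5000):
    """Toy deterministic model: colonized fraction in two hospitals
    coupled by daily patient transfers. Returns (prev_a, prev_b)."""
    beta = 0.08          # per-day transmission rate (assumed)
    clear = 0.05         # per-day clearance/turnover of colonization
    transfer = 0.02      # fraction of patients exchanged per day
    pa, pb = 0.05, 0.05  # initial colonized fractions
    for _ in range(steps):
        # contact isolation cuts effective transmission (assumed 60% cut)
        ba = beta * (0.4 if isolation_a else 1.0)
        bb = beta * (0.4 if isolation_b else 1.0)
        pa2 = pa + ba * pa * (1 - pa) - clear * pa
        pb2 = pb + bb * pb * (1 - pb) - clear * pb
        # patient transfers mix the two populations
        pa, pb = ((1 - transfer) * pa2 + transfer * pb2,
                  (1 - transfer) * pb2 + transfer * pa2)
    return pa, pb

_, b_when_a_acts = steady_prevalence(True, False)
_, b_when_none = steady_prevalence(False, False)
```

Even though hospital B changes nothing, its steady-state prevalence `b_when_a_acts` ends up noticeably below `b_when_none`, mirroring the study's point that one hospital's testing and isolation decisions affect all the others.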

  6. RUSICA initial implementations: Simulation results of sandy shore evolution in Porto Cesareo, Italy

    NASA Astrophysics Data System (ADS)

    Calidonna, Claudia Roberta; Di Gregorio, Salvatore; Gullace, Francesco; Gullì, Daniel; Lupiano, Valeria

    2016-06-01

    Beach recession is spreading in the Mediterranean through the effects of climate change. RUSICA is a Cellular Automata model, currently in the development phase, for simulating this complex phenomenon, considering its main mechanisms: the mobilization, suspension, deposit, and transport of loose particles (sand, gravel, silt, clay, etc.), triggered by waves and currents. A simplified version of the model was implemented and applied to data related to the sandy shore of Torre Lapillo (Porto Cesareo, Italy) in August 2010, where shore evolution was monitored, even though the data quality and quantity are not ideal for feeding RUSICA. Simulations of different stormy-sea scenarios in that area evidenced the adequate performance of the model in capturing the main emergent features of the phenomenon despite the simplified approach.

  7. Open Cherry Picker simulation results. [manned platform for satellite servicing from Shuttle

    NASA Technical Reports Server (NTRS)

    Nathan, C. A.

    1982-01-01

    The Open Cherry Picker (OCP) is a manned platform, mounted at the end of the Remote Manipulator System (RMS), which is used to enhance extravehicular activities. The objective of the simulation program described was to reduce the existing complexity of those OCP design features that are mandatory for initial Space Shuttle applications. The OCP development test article consists of a torque box, a rotating foot restraint, a rotating stanchion that houses handholds, and a tool storage section with an interface with payload modules. If the size or complexity of the payload increases, payload handling devices may be added at a later date. The simulations have shown that the crew can control the RMS from the Aft Flight Deck of the Shuttle, using voice commands from the EVA crewman. No need for a stabilizer was evident, and RMS dynamics due to crew-induced workloads were found to be minor.

  8. RHF RELAP5 model and preliminary loss-of-offsite-power simulation results for LEU conversion

    SciTech Connect

    Licht, J. R.; Bergeron, A.; Dionne, B.; Thomas, F.

    2014-08-01

    The purpose of this document is to describe the current state of the RELAP5 model for the Institut Laue-Langevin High Flux Reactor (RHF) located in Grenoble, France, and provide an update to the key information required to complete, for example, simulations for a loss of offsite power (LOOP) accident. A previous status report identified a list of 22 items to be resolved in order to complete the RELAP5 model. Most of these items have been resolved by ANL and the RHF team. Enough information was available to perform preliminary safety analyses and define the key items that are still required. Section 2 of this document describes the RELAP5 model of RHF. The final part of this section briefly summarizes previous model issues and resolutions. Section 3 of this document describes preliminary LOOP simulations for both HEU and LEU fuel at beginning of cycle conditions.

  9. Three-Dimensional Numerical Simulations of Equatorial Spread F: Results and Observations in the Pacific Sector

    NASA Technical Reports Server (NTRS)

    Aveiro, H. C.; Hysell, D. L.; Caton, R. G.; Groves, K. M.; Klenzing, J.; Pfaff, R. F.; Stoneback, R.; Heelis, R. A.

    2012-01-01

    A three-dimensional numerical simulation of plasma density irregularities in the postsunset equatorial F region ionosphere leading to equatorial spread F (ESF) is described. The simulation evolves under realistic background conditions including bottomside plasma shear flow and vertical current. It also incorporates C/NOFS satellite data which partially specify the forcing. A combination of generalized Rayleigh-Taylor instability (GRT) and collisional shear instability (CSI) produces growing waveforms with key features that agree with C/NOFS satellite and ALTAIR radar observations in the Pacific sector, including features such as gross morphology and rates of development. The transient response of CSI is consistent with the observation of bottomside waves with wavelengths close to 30 km, whereas the steady state behavior of the combined instability can account for the 100+ km wavelength waves that predominate in the F region.

  10. Dispersion curves from short-time molecular dynamics simulation. 1. Diatomic chain results

    SciTech Connect

    Noid, D.W.; Broocks, B.T.; Gray, S.K.; Marple, S.L.

    1988-06-16

    The multiple signal classification method (MUSIC) for frequency estimation is used to compute the frequency dispersion curves of a diatomic chain from the time-dependent structure factor. In this paper, the authors demonstrate that MUSIC can accurately determine the frequencies from very short time trajectories. MUSIC is also used to show how the frequencies can vary in time, i.e., along a trajectory. The method is ideally suited for analyzing molecular dynamics simulations of large systems.
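A minimal sketch of MUSIC frequency estimation on a short time series (assumed parameters and a synthetic signal; not the authors' implementation, which operates on the time-dependent structure factor): the covariance matrix of overlapping snapshots is eigendecomposed, and the pseudospectrum peaks where the steering vector is orthogonal to the noise subspace.

```python
import numpy as np

def music_spectrum(signal, n_sinusoids, m, freq_grid):
    """MUSIC pseudospectrum of a real 1-D signal.

    n_sinusoids: number of real sinusoids (each is 2 complex exponentials);
    m: covariance order; freq_grid: frequencies in cycles/sample.
    """
    N = len(signal)
    # Data matrix of overlapping length-m snapshots
    X = np.array([signal[i:i + m] for i in range(N - m + 1)])
    R = X.T @ X / X.shape[0]
    # Noise subspace = eigenvectors of the smallest eigenvalues
    w, V = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = V[:, : m - 2 * n_sinusoids]
    spec = []
    for f in freq_grid:
        a = np.exp(2j * np.pi * f * np.arange(m))  # steering vector
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)

# Very short noiseless trajectory of one mode at f0 = 0.12 cycles/sample
f0 = 0.12
t = np.arange(48)
x = np.cos(2 * np.pi * f0 * t)
grid = np.linspace(0.01, 0.49, 481)
spec = music_spectrum(x, 1, 12, grid)
f_est = grid[np.argmax(spec)]           # close to 0.12
```

With only 48 samples the periodogram resolution would be coarse, but the subspace structure recovers the frequency sharply, which is the property the paper exploits for short-time trajectories.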

  11. Some results from two-dimensional simulation of neutrino-dominated universes

    SciTech Connect

    Melott, A.L.

    1984-12-01

    Numerical methods for simulating a neutrino-dominated universe are discussed. A calculation of microwave-background perturbations, normalized to the numerical models, predicts effects just beyond the reach of current experiments. Since the angular correlation W(theta) varies significantly with the observer's position, global inferences from local observations may not be valid in universes dominated by massive neutrinos. Flat structures will form throughout the distribution. 35 references.

  12. Recent results and proposed observing system simulation experiments (OSSE) to link research and operation

    NASA Astrophysics Data System (ADS)

    Masutani, Michiko

    2016-05-01

    Observing System Simulation Experiments (OSSEs) are a challenge to operational weather services, because many of the efforts offer long-term rather than short-term benefits. Effective interaction between research and operations (R2O and O2R) is required for a successful OSSE. First, the concept and procedures of OSSEs are described. An overview of OSSEs accomplished at NOAA/NCEP and JCSDA in recent years is then presented, along with further proposed OSSEs.

  13. Evaluation of automated decision making methodologies and development of an integrated robotic system simulation: Study results

    NASA Technical Reports Server (NTRS)

    Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelley, J. H.; Depkovich, T. M.; Wolfe, W. J.; Nguyen, T.

    1986-01-01

    The implementation of a generic computer simulation for manipulator systems (ROBSIM) is described. The program is written in FORTRAN and allows the user to: (1) interactively define a manipulator system consisting of multiple arms, load objects, targets, and an environment; (2) request graphic display or replay of manipulator motion; (3) investigate and simulate various control methods, including manual force/torque and active compliance control; and (4) perform kinematic analysis, requirements analysis, and response simulation of manipulator motion. Previous reports have described the algorithms and procedures for using ROBSIM. Those reports are superseded here, and the additional features that were added are described. They are: (1) the ability to define motion profiles and compute loads on a common base to which manipulator arms are attached; (2) the capability to accept data describing manipulator geometry from a Computer Aided Design database using the Initial Graphics Exchange Specification format; (3) a manipulator control algorithm derived from processing the TV image of known reference points on a target; and (4) a vocabulary of simple high-level task commands which can be used to define task scenarios.

  14. Basin scale reactive-transport simulations of CO2 leakage and resulting metal transport in a shallow drinking water aquifer

    NASA Astrophysics Data System (ADS)

    Navarre-Sitchler, A.; Maxwell, R. M.; Hammond, G. E.; Lichtner, P. C.

    2011-12-01

    Leakage of CO2 from underground storage formations into overlying aquifers will decrease groundwater pH, resulting in a geochemical response of the aquifer. If metal-containing aquifer minerals dissolve as a part of this response, there is a risk of exceeding regulatory limits set by the EPA. Risk assessment methods require a realistic prediction of the maximum metal concentration at wells or other points of exposure. Currently, these predictions are based on numerical reactive transport simulations of CO2 leaks. While previous studies have simulated galena dissolution as a source of lead to explore the potential for contamination of drinking water aquifers, it may be more realistic to simulate lead release from more common minerals that are known to contain trace amounts of metals, e.g. calcite. Model domains for these previous studies are often sub-km in scale or have very coarse grid resolution, due to computational limitations. In this study we simulate CO2 leakage into a drinking water aquifer using the massively parallel subsurface flow and reactive transport code PFLOTRAN. The regional model domain is 4km x 1km x 0.1 km. Even with fairly coarse grid spacing (~ 9 m x 9 m x 0.9 m), the simulations have > 49 million degrees of freedom, requiring the use of High-Performance Computing (HPC). Our simulations are run on Jaguar at Oak Ridge National Laboratory. Lead concentrations in extraction wells 3 km down gradient from a CO2 leak increase above background concentrations due to kinetic mineral dissolution along the flow path. Increases in aqueous concentrations are less when lead is allowed to sorb onto mineral surfaces. Surprisingly, lead concentration increases are greater in simulations where lead is present as a trace constituent in calcite (5% by volume) relative to simulations with galena (0.001% by volume) as the lead source. It appears that galena becomes oversaturated and begins to precipitate, a result observed in previous modeling studies, and its low

  15. The effects of bed rest on crew performance during simulated shuttle reentry. Volume 1: Study overview and physiological results

    NASA Technical Reports Server (NTRS)

    Chambers, A.; Vykukal, H. C.

    1974-01-01

    A centrifuge study was carried out to measure physiological stress and control task performance during simulated space shuttle orbiter reentry. Jet pilots were tested with, and without, anti-g-suit protection. The pilots were exposed to simulated space shuttle reentry acceleration profiles before, and after, ten days of complete bed rest, which produced physiological deconditioning similar to that resulting from prolonged exposure to orbital zero g. Pilot performance in selected control tasks was determined during simulated reentry, and before and after each simulation. Physiological stress during reentry was determined by monitoring heart rate, blood pressure, and respiration rate. Study results indicate: (1) heart rate increased during the simulated reentry when no g protection was given, and remained at or below pre-bed rest values when g-suits were used; (2) pilots preferred the use of g-suits to muscular contraction for control of vision tunneling and grayout during reentry; (3) prolonged bed rest did not alter blood pressure or respiration rate during reentry, but the peak reentry acceleration level did; and (4) pilot performance was not affected by prolonged bed rest or simulated reentry.

  16. Material Modeling of 6000 Series Aluminum Alloy Sheets with Different Density Cube Textures and Effect on the Accuracy of Finite Element Simulation

    NASA Astrophysics Data System (ADS)

    Yanaga, Daisaku; Kuwabara, Toshihiko; Uema, Naoyuki; Asano, Mineo

    2011-08-01

    Biaxial tensile tests of 6000 series aluminum alloy sheet with different density cube textures were carried out using cruciform specimens similar to that developed by one of the authors [Kuwabara, T. et al., J. Material Process. Technol., 80/81(1998), 517-523.]. The specimens are loaded under linear stress paths in a servo-controlled biaxial tensile testing machine. Plastic orthotropy remained coaxial with the principal stresses throughout every experiment. Successive contours of plastic work in stress space and the directions of plastic strain rates were precisely measured and compared with those calculated using selected yield functions. The Yld2000-2d yield functions with exponents of 12 and 6 [Barlat, F. et al., Int. J. Plasticity 19 (2003), 1297-1319] are capable of reproducing the general trends of the work contours and the directions of plastic strain rates observed for test materials with high and low cube textures, respectively. Hydraulic bulge tests were also conducted and the variation of thickness strain along the meridian direction of the bulged specimen was compared with that calculated using finite element analysis (FEA) based on the Yld2000-2d yield functions with exponents of 12 and 6. The differences of cube texture cause significant differences in the strain distributions of the bulged specimens, and the FEA results calculated using the Yld2000-2d yield functions show good agreement with the measurement results.

  17. Geologic results of the TMS survey over Mt. Emmons, Colorado. [Thematic Mapper Simulator

    NASA Technical Reports Server (NTRS)

    Rickman, D. L.; Sadowski, R. M.

    1985-01-01

    In 1981, NASA conducted a cooperative study with an American company involving the use of Thematic Mapper Simulator (TMS) data. The study was concerned with an area near Crested Butte, Colorado, which contains a known, but unmined, major molybdenum deposit. Detailed ground observations in the Mt. Emmons area demonstrated that the imagery was extremely effective for detection of geologically significant features. The imagery specifically delineated areas of ferric iron staining, sericitization, and hornfelsed rock. Attention is given to data acquisition and data processing, field work in 1982 and in 1983, the integration of gravity data, and costs.

  18. Prediction of SFL Interruption Performance from the Results of Arc Simulation during High-Current Phase

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Chul; Lee, Won-Ho; Kim, Woun-Jea

    2015-09-01

    The design and development procedures of SF6 gas circuit breakers are still largely based on trial and error through testing, although development costs rise every year. Computation cannot yet replace testing satisfactorily because not all of the real processes are taken into account. Nevertheless, numerical simulations of arc behavior and of the thermal flow inside interrupters have become more useful than experiments alone, owing to the difficulty of obtaining physical quantities experimentally and to the reduction of computational costs in recent years. In this paper, in order to gain further insight into the interruption process of an SF6 self-blast interrupter, which is based on a combination of thermal expansion and the arc-rotation principle, gas flow simulations with CFD-arc modeling are performed over the whole switching process: the high-current period, the pre-current-zero period, and the current-zero period. The results show that the pressure rise and the ramp of the pressure inside the chamber before current zero, as well as the post-arc current after current zero, provide a good criterion for predicting the short-line fault interruption performance of interrupters.

  19. James Webb Space Telescope optical simulation testbed III: first experimental results with linear-control alignment

    NASA Astrophysics Data System (ADS)

    Egron, Sylvain; Lajoie, Charles-Philippe; Leboulleux, Lucie; N'Diaye, Mamadou; Pueyo, Laurent; Choquet, Élodie; Perrin, Marshall D.; Ygouf, Marie; Michau, Vincent; Bonnefois, Aurélie; Fusco, Thierry; Escolle, Clément; Ferrari, Marc; Hugot, Emmanuel; Soummer, Rémi

    2016-07-01

    The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a tabletop experiment designed to study wavefront sensing and control for a segmented space telescope, including both commissioning and maintenance activities. JOST is complementary to existing testbeds for JWST (e.g. the Ball Aerospace Testbed Telescope TBT) given its compact scale and flexibility, ease of use, and colocation at the JWST Science and Operations Center. The design of JOST reproduces the physics of JWST's three-mirror anastigmat (TMA) using three custom aspheric lenses. It provides image quality similar to that of JWST (80% Strehl ratio) over a field equivalent to a NIRCam module, but at 633 nm. An Iris AO segmented mirror stands in for the segmented primary mirror of JWST. Actuators allow us to control (1) the 18 segments of the segmented mirror in piston, tip, and tilt, and (2) the second lens, which stands in for the secondary mirror, in tip, tilt, and x, y, z positions. We present the full linear control alignment infrastructure developed for JOST, with an emphasis on multi-field wavefront sensing and control. Our implementation of the wavefront sensing (WFS) algorithms using phase diversity is experimentally tested. The wavefront control (WFC) algorithms, which rely on a linear model for optical aberrations induced by small misalignments of the three lenses, are tested and validated on simulations.

  20. Results of two-phase natural circulation in hot-leg U-bend simulation experiments

    SciTech Connect

    Ishii, M.; Lee, S.Y.; Abou El-Seoud, S.

    1987-01-01

    In order to study two-phase natural circulation and flow termination during a small break loss of coolant accident in LWRs, simulation experiments have been performed using two different thermal-hydraulic loops. The main focus of the experiment was the two-phase flow behavior in the hot-leg U-bend typical of B&W (Babcock & Wilcox) LWR systems. The first group of experiments was carried out in the nitrogen gas-water adiabatic simulation loop and the second in the Freon 113 boiling and condensation loop. Both of the loops have been designed as flow visualization facilities and built according to the two-phase flow scaling criteria developed under this program. The nitrogen gas-water system has been used to isolate key hydrodynamic phenomena such as the phase distribution, relative velocity between phases, two-phase flow regimes, and flow termination mechanisms, whereas the Freon loop has been used to study the effect of fluid properties, phase changes, and coupling between hydrodynamic and heat transfer phenomena. Significantly different behaviors have been observed due to the non-equilibrium phase change phenomena, such as flashing and condensation, in the Freon loop. These phenomena created much more unstable hydrodynamic conditions, which led to cyclic or oscillatory flow behaviors.

  1. Monte Carlo Simulations of Microchannel Plate Detectors II: Pulsed Voltage Results

    SciTech Connect

    Kruschwitz, Craig A.; Wu, Ming; Rochau, Greg A.

    2011-02-11

    This paper is part of a continuing study of straight-channel microchannel plate (MCP)–based x-ray detectors. Such detectors are a useful diagnostic tool for two-dimensional, time-resolved imaging and time-resolved x-ray spectroscopy. To interpret the data from such detectors, it is critical to develop a better understanding of the behavior of MCPs biased with subnanosecond voltage pulses. The subject of this paper is a Monte Carlo computer code that simulates the electron cascade in an MCP channel under an arbitrary pulsed voltage, particularly those pulses with widths comparable to the transit time of the electron cascade in the MCP under DC voltage bias. We use this code to study the gain as a function of time (also called the gate profile or optical gate) for various voltage pulse shapes, including pulses measured along the MCP. In addition, experimental data on MCP behavior in pulsed mode are obtained with a short-pulse UV laser. Comparisons between the simulations and experimental data show excellent agreement for both the gate profile and the peak relative sensitivity along the MCP strips. We report that the dependence of relative gain on peak voltage increases in sensitivity in pulsed mode when the width of the high-voltage waveform is smaller than the transit time of cascading electrons in the MCP.
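
    The multiplication process such a code models can be sketched with a toy Monte Carlo cascade. This is not the authors' code: it assumes a fixed DC-like number of wall strikes and a Poisson-distributed secondary yield per strike, and omits the pulsed-bias transit-time physics that is the paper's actual subject. All names and parameter values are illustrative:

```python
import math
import random

def sample_poisson(rng, lam):
    """Poisson-distributed integer via Knuth's multiplication method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mcp_gain(n_strikes=8, mean_yield=2.0, trials=1000, seed=7):
    """Toy MCP channel cascade: each wall strike replaces every electron
    with a Poisson-distributed number of secondaries. The expected gain
    of this branching process is mean_yield ** n_strikes."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        n = 1  # one seed photoelectron enters the channel
        for _ in range(n_strikes):
            n = sum(sample_poisson(rng, mean_yield) for _ in range(n))
            if n == 0:  # cascade died out
                break
        total += n
    return total / trials

print(mcp_gain())  # close to 2.0**8 = 256
```

    In the real pulsed-mode problem the yield per strike depends on the instantaneous voltage seen by the cascade, which is what couples the gate profile to the pulse width.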

  2. Study of silicon crystal surface formation based on molecular dynamics simulation results

    NASA Astrophysics Data System (ADS)

    Barinovs, G.; Sabanskis, A.; Muiznieks, A.

    2014-04-01

    The equilibrium shape of a <110>-oriented single crystal silicon nanowire, 8 nm in cross-section, was found from molecular dynamics simulations using the LAMMPS molecular dynamics package. The calculated shape agrees well with the shape predicted from experimental observations of nanocavities in silicon crystals. By parametrization of the shape and scaling to a known value of the {111} surface energy, the Wulff form for the solid-vapor interface was obtained. The Wulff form for the solid-liquid interface was constructed using the same model of the shape as for the solid-vapor interface. The parameters describing the solid-liquid interface shape were found using values of surface energies in low-index directions known from published molecular dynamics simulations. Using an experimental value of the liquid-vapor interface energy for silicon and a graphical solution of Herring's equation, we constructed an angular diagram showing the relative equilibrium orientation of solid-liquid, liquid-vapor and solid-vapor interfaces at the triple phase line. The diagram gives quantitative predictions about growth angles for different growth directions and formation of facets on the solid-liquid and solid-vapor interfaces. The diagram can be used to describe growth ridges appearing on the crystal surface grown from a melt. A qualitative comparison to the ridges of a Float zone silicon crystal cone is given.

  3. Progress in Modeling Global Atmospheric CO2 Fluxes and Transport: Results from Simulations with Diurnal Fluxes

    NASA Technical Reports Server (NTRS)

    Collatz, G. James; Kawa, R.

    2007-01-01

    Progress in better determining CO2 sources and sinks will almost certainly rely on utilization of more extensive and intensive CO2 and related observations, including those from satellite remote sensing. Use of advanced data requires improved modeling and analysis capability. Under NASA Carbon Cycle Science support we seek to develop and integrate improved formulations for 1) atmospheric transport, 2) terrestrial uptake and release, 3) biomass and 4) fossil fuel burning, and 5) observational data analysis including inverse calculations. The transport modeling is based on meteorological data assimilation analysis from the Goddard Modeling and Assimilation Office. Use of assimilated met data enables model comparison to CO2 and other observations across a wide range of scales of variability. In this presentation we focus on the short end of the temporal variability spectrum: hourly to synoptic to seasonal. Using CO2 fluxes at varying temporal resolution from the SIB 2 and CASA biosphere models, we examine the model's ability to simulate CO2 variability in comparison to observations at different times, locations, and altitudes. We find that the model can resolve much of the variability in the observations, although there are limits imposed by the vertical resolution of boundary layer processes. The influence of key process representations is inferred. The high degree of fidelity in these simulations leads us to anticipate incorporation of real-time, highly resolved observations into a multiscale carbon cycle analysis system that will begin to bridge the gap between top-down and bottom-up flux estimation, which is a primary focus of NACP.

  4. A STOL airworthiness investigation using a simulation of a deflected slipstream transport. Volume 1: Summary of results and airworthiness implications

    NASA Technical Reports Server (NTRS)

    Stapleford, R. L.; Heffley, R. K.; Rumold, R. C.; Hynes, C. S.; Scott, B. C.

    1974-01-01

    A simulator study of short takeoff and landing (STOL) aircraft was conducted using a model of a deflected slipstream transport aircraft. The subjects considered are: (1) the approach, (2) flare and landing, (3) go-around, and (4) takeoff phases of flight. The results are summarized and possible implications with regard to airworthiness criteria are discussed. A data base is provided for future STOL airworthiness requirements and a preliminary indication of potential problem areas is developed. Comparison of the simulation results with various proposed STOL criteria indicates significant deficiencies in many of these criteria.

  5. Hybrid guiding-centre/full-orbit simulations in non-axisymmetric magnetic geometry exploiting general criterion for guiding-centre accuracy

    NASA Astrophysics Data System (ADS)

    Pfefferlé, D.; Graves, J. P.; Cooper, W. A.

    2015-05-01

    To identify under what conditions guiding-centre or full-orbit tracing should be used, an estimation of the spatial variation of the magnetic field is proposed, taking into account not only gradient and curvature terms but also parallel currents and the local shearing of field-lines. The criterion is derived for general three-dimensional magnetic equilibria including stellarator plasmas. Details are provided on how to implement it in cylindrical coordinates and in flux coordinates that rely on the geometric toroidal angle. A means of switching between guiding-centre and full-orbit equations at first order in Larmor radius with minimal discrepancy is shown. Techniques are applied to a MAST (mega amp spherical tokamak) helical core equilibrium in which the inner kinked flux-surfaces are tightly compressed against the outer axisymmetric mantle and where the parallel current peaks at the nearly rational surface. This is compared with the simpler situation B(x, y, z) = B0[sin(kx)ey + cos(kx)ez], for which full orbits and lowest order drifts are obtained analytically. In the kinked equilibrium, the full orbits of NBI fast ions are solved numerically and shown to follow helical drift surfaces. This result partially explains the off-axis redistribution of neutral beam injection fast particles in the presence of MAST long-lived modes (LLM).
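
    For the sheared field B(x, y, z) = B0[sin(kx)ey + cos(kx)ez] quoted above, full orbits can be integrated with a standard Boris pusher; a minimal sketch in normalized units (the integrator is the textbook scheme, not the authors' code, and all parameter values are illustrative):

```python
import math

def boris_push(x, v, steps, dt, q_m=1.0, B0=1.0, k=1.0):
    """Minimal Boris integrator for a charged particle in the static field
    B(x) = B0*(sin(k*x)*e_y + cos(k*x)*e_z), with no electric field."""
    x = list(x)
    v = list(v)
    for _ in range(steps):
        bx, by, bz = 0.0, B0 * math.sin(k * x[0]), B0 * math.cos(k * x[0])
        # Boris rotation vectors t and s = 2t / (1 + |t|^2)
        tx, ty, tz = (0.5 * dt * q_m * b for b in (bx, by, bz))
        t2 = tx * tx + ty * ty + tz * tz
        sx, sy, sz = (2 * t / (1 + t2) for t in (tx, ty, tz))
        # v' = v + v x t ; then v <- v + v' x s (exact rotation about B)
        vpx = v[0] + v[1] * tz - v[2] * ty
        vpy = v[1] + v[2] * tx - v[0] * tz
        vpz = v[2] + v[0] * ty - v[1] * tx
        v[0] += vpy * sz - vpz * sy
        v[1] += vpz * sx - vpx * sz
        v[2] += vpx * sy - vpy * sx
        x = [xi + vi * dt for xi, vi in zip(x, v)]
    return x, v

x, v = boris_push([0.0, 0.0, 0.0], [0.1, 0.0, 0.1], steps=2000, dt=0.01)
speed = math.sqrt(sum(c * c for c in v))
print(speed)  # a pure magnetic field does no work, so |v| is conserved
```

    Comparing such full orbits against the guiding-centre drifts in the same field is exactly the kind of check the paper's switching criterion is designed to automate.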

  6. When Does Choice of Accuracy Measure Alter Imputation Accuracy Assessments?

    PubMed

    Ramnarine, Shelina; Zhang, Juan; Chen, Li-Shiun; Culverhouse, Robert; Duan, Weimin; Hancock, Dana B; Hartz, Sarah M; Johnson, Eric O; Olfson, Emily; Schwantes-An, Tae-Hwi; Saccone, Nancy L

    2015-01-01

    Imputation, the process of inferring genotypes for untyped variants, is used to identify and refine genetic association findings. Inaccuracies in imputed data can distort the observed association between variants and a disease. Many statistics are used to assess accuracy; some compare imputed to genotyped data and others are calculated without reference to true genotypes. Prior work has shown that the Imputation Quality Score (IQS), which is based on Cohen's kappa statistic and compares imputed genotype probabilities to true genotypes, appropriately adjusts for chance agreement; however, it is not commonly used. To identify differences in accuracy assessment, we compared IQS with concordance rate, squared correlation, and accuracy measures built into imputation programs. Genotypes from the 1000 Genomes reference populations (AFR N = 246 and EUR N = 379) were masked to match the typed single nucleotide polymorphism (SNP) coverage of several SNP arrays and were imputed with BEAGLE 3.3.2 and IMPUTE2 in regions associated with smoking behaviors. Additional masking and imputation were conducted for sequenced subjects from the Collaborative Genetic Study of Nicotine Dependence and the Genetic Study of Nicotine Dependence in African Americans (N = 1,481 African Americans and N = 1,480 European Americans). Our results offer further evidence that concordance rate inflates accuracy estimates, particularly for rare and low frequency variants. For common variants, squared correlation, BEAGLE R2, IMPUTE2 INFO, and IQS produce similar assessments of imputation accuracy. However, for rare and low frequency variants, compared to IQS, the other statistics tend to be more liberal in their assessment of accuracy. IQS is important to consider when evaluating imputation accuracy, particularly for rare and low frequency variants. PMID:26458263
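
    Why concordance inflates accuracy for rare variants follows directly from Cohen's kappa, the statistic underlying IQS. A minimal sketch on best-guess genotypes (IQS itself operates on imputed genotype probabilities; the arrays and function names here are hypothetical):

```python
from collections import Counter

def concordance(true_g, imp_g):
    """Concordance rate: fraction of genotype calls that agree."""
    return sum(t == i for t, i in zip(true_g, imp_g)) / len(true_g)

def kappa(true_g, imp_g):
    """Cohen's kappa: agreement adjusted for chance, (po - pe) / (1 - pe).
    IQS builds on this statistic to correct concordance for chance."""
    n = len(true_g)
    po = concordance(true_g, imp_g)
    ct, ci = Counter(true_g), Counter(imp_g)
    pe = sum(ct[g] * ci[g] for g in set(true_g) | set(imp_g)) / n ** 2
    return (po - pe) / (1 - pe)

# A rare variant: 98 of 100 subjects are homozygous reference (0).
# Imputing "all reference" scores high concordance with zero skill.
true_g = [0] * 98 + [1, 1]
imp_g = [0] * 100
print(concordance(true_g, imp_g))  # 0.98 -- looks excellent
print(kappa(true_g, imp_g))        # 0.0 -- no better than chance
```

    The same 0.98 concordance would be reported no matter how the two heterozygote carriers were imputed, which is the inflation the abstract describes.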

  7. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  8. 222Rn transport in a fractured crystalline rock aquifer: Results from numerical simulations

    USGS Publications Warehouse

    Folger, P.F.; Poeter, E.; Wanty, R.B.; Day, W.; Frishman, D.

    1997-01-01

    Dissolved 222Rn concentrations in ground water from a small wellfield underlain by fractured Middle Proterozoic Pikes Peak Granite southwest of Denver, Colorado, range from 124 to 840 kBq m-3 (3360-22700 pCi L-1). Numerical simulations of flow and transport between two wells show that differences in equivalent hydraulic aperture of transmissive fractures, assuming a simplified two-fracture system and the parallel-plate model, can account for the different 222Rn concentrations in each well under steady-state conditions. Transient flow and transport simulations show that 222Rn concentrations along the fracture profile are influenced by 222Rn concentrations in the adjoining fracture and depend on boundary conditions, proximity of the pumping well to the fracture intersection, transmissivity of the conductive fractures, and pumping rate. Non-homogeneous distribution (point sources) of the 222Rn parent radionuclides, uranium and 226Ra, can strongly perturb the dissolved 222Rn concentrations in a fracture system. Without detailed information on the geometry and hydraulic properties of the connected fracture system, it may be impossible to distinguish the influence of factors controlling 222Rn distribution or to determine the location of 222Rn point sources in the field in areas where ground water exhibits moderate 222Rn concentrations. Flow and transport simulations of a hypothetical multifracture system consisting of ten connected fractures, each 10 m in length with fracture apertures ranging from 0.1 to 1.0 mm, show that 222Rn concentrations at the pumping well can vary significantly over time. Assuming parallel-plate flow, transmissivities of the hypothetical system vary over four orders of magnitude because transmissivity varies with the cube of fracture aperture. The extreme hydraulic heterogeneity of the simple hypothetical system leads to widely ranging 222Rn values, even assuming homogeneous distribution of uranium and 226Ra along fracture walls. Consequently, it is
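
    The cubic dependence of transmissivity on aperture invoked above is the classical parallel-plate "cubic law," T = rho * g * b^3 / (12 * mu). A minimal sketch with water properties at roughly 20 °C (function name and the specific apertures are illustrative):

```python
def transmissivity(b, rho=998.0, g=9.81, mu=1.0e-3):
    """Parallel-plate (cubic-law) fracture transmissivity,
    T = rho * g * b**3 / (12 * mu), for aperture b in metres.
    Returns m^2/s per unit fracture width."""
    return rho * g * b ** 3 / (12.0 * mu)

for b_mm in (0.1, 0.5, 1.0):
    b = b_mm * 1e-3
    print(f"b = {b_mm:.1f} mm  ->  T = {transmissivity(b):.3e} m^2/s")

# The cube in the law: a 10x aperture ratio gives a 1000x transmissivity ratio.
print(transmissivity(1.0e-3) / transmissivity(1.0e-4))  # 1000.0
```

    This steep scaling is why a modest spread of apertures in the hypothetical ten-fracture system produces the extreme hydraulic heterogeneity, and hence the widely ranging 222Rn values, described in the abstract.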

  9. Secondary reconnection, energisation and turbulence in dipolarisation fronts: results of a 3D kinetic simulation campaign

    NASA Astrophysics Data System (ADS)

    Lapenta, Giovanni; Goldman, Martin; Newman, David; Olshevskyi, Vyacheslav; Markidis, Stefano

    2016-04-01

    Dipolarization fronts (DF) are formed by reconnection outflows interacting with the pre-existing environment. These regions are host to important energy exchanges [1], particle acceleration [2], and a complex structure and evolution [3]. Our recent work has investigated these regions via fully kinetic 3D simulations [4]. As reported recently in Nature Physics [3], based on 3D fully kinetic simulations started with a well defined x-line, we observe that in the DF reconnection transitions towards a more chaotic regime. In the fronts an instability develops, caused by the local density gradients and by the unfavourable acceleration and field-line curvature. The consequence is the break-up of the fronts in a fashion similar to the classical fluid Rayleigh-Taylor instability, with the formation of "fingers" of plasma and embedded magnetic fields. These fingers interact and produce secondary reconnection sites. We present several different diagnostics that prove the existence of these secondary reconnection sites. Each site is surrounded by its own electron diffusion region. At the fronts the ions are generally not magnetized and considerable ion slippage is present. The discovery we present is that electrons are also slipping, forming localized diffusion regions near secondary reconnection sites [1]. The consequence of this discovery is twofold. First, the instability in the fronts has strong energetic implications. We observe that the energy transfer locally is very strong, an order of magnitude stronger than in the "X" line. However, this energy transfer is of both signs, as is natural for a wavy rippling with regions of magnetic-to-kinetic and regions of kinetic-to-magnetic energy conversion. Second, and most important for this session, is that MMS should not limit the search for electron diffusion regions to the location marked with X in all reconnection cartoons. Our simulations predict more numerous and perhaps more easily measurable electron diffusion regions.
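
    For orientation, the classical incompressible-fluid Rayleigh-Taylor analogy invoked above has the linear growth rate gamma = sqrt(A * g * k), with Atwood number A = (rho_heavy - rho_light) / (rho_heavy + rho_light). The kinetic simulations resolve far more physics than this estimate; the sketch below is only the textbook fluid limit, with illustrative numbers:

```python
import math

def rt_growth_rate(k, g, rho_heavy, rho_light):
    """Linear growth rate of the classical incompressible Rayleigh-Taylor
    instability: gamma = sqrt(A * g * k), where A is the Atwood number."""
    atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    return math.sqrt(atwood * g * k)

# Shorter wavelengths (larger k) grow faster, consistent with a sharp
# front breaking up into fingers (normalized, illustrative values).
for k in (1.0, 4.0, 16.0):
    print(k, rt_growth_rate(k, g=1.0, rho_heavy=3.0, rho_light=1.0))
```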

  10. Recent electron-cloud simulation results for the main damping rings of the NLC and TESLA linear colliders

    SciTech Connect

    Pivi, M.; Raubenheimer, T.O.; Furman, M.A.

    2003-05-01

    In the beam pipe of the Main Damping Ring (MDR) of the Next Linear Collider (NLC), ionization of residual gases and secondary emission give rise to an electron cloud which stabilizes to equilibrium after a few bunch trains. In this paper, we present recent computer simulation results for the main features of the electron cloud at the NLC, and preliminary simulation results for the TESLA main damping rings, obtained with the code POSINST that has been developed at LBNL, and more recently in collaboration with SLAC, over the past 7 years. Possible remedies to mitigate the effect are also discussed. We have recently added the capability to simulate different magnetic field configurations in our code, including solenoid, quadrupole, sextupole, and wiggler fields.
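
    The stabilization after a few bunch trains can be illustrated with a zero-dimensional toy model (this is not POSINST, which tracks full electron dynamics): each bunch passage seeds S electrons and the wall multiplies the survivors by an effective factor delta_eff, so the density follows n <- delta_eff * n + S and, for delta_eff < 1, saturates at S / (1 - delta_eff). All names and values are illustrative:

```python
def cloud_density(n_bunches, seed_per_bunch=1.0, delta_eff=0.6):
    """Toy 0-D electron-cloud buildup: per bunch passage, add seed
    electrons (photoemission/ionization) and scale the existing cloud
    by an effective secondary-emission survival factor delta_eff."""
    n = 0.0
    history = []
    for _ in range(n_bunches):
        n = delta_eff * n + seed_per_bunch
        history.append(n)
    return history

h = cloud_density(50)
equilibrium = 1.0 / (1.0 - 0.6)  # S / (1 - delta_eff) = 2.5
print(h[-1])  # converges to the equilibrium value 2.5
```

    A delta_eff above 1 would instead give runaway multipacting growth until space-charge effects intervene, which is the regime the mitigation remedies target.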

  11. From the experimental simulation to integrated non-destructive analysis by means of optical and infrared techniques: results compared

    NASA Astrophysics Data System (ADS)

    Sfarra, S.; Ibarra-Castanedo, C.; Lambiase, F.; Paoletti, D.; Di Ilio, A.; Maldague, X.

    2012-11-01

    In this work the possibility of modeling manufacturing ceramic products is analyzed through the application of transient thermography, holographic interferometry and digital speckle photography, in order to identify the subsurface defects characteristics. This integrated method could be used to understand the nature of heterogeneous materials (such as plastic, sponge simulating a void, wood, aluminum) potentially contained within ceramic materials, as well as to predict crack formation due to t