Sample records for samples simulation results

  1. Improved importance sampling technique for efficient simulation of digital communication systems

    NASA Technical Reports Server (NTRS)

    Lu, Dingqing; Yao, Kung

    1988-01-01

    A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evaluations of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evaluations are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and IIS over CIS for simulations of digital communication systems.
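    To make the variance comparison concrete, here is a minimal sketch (not the authors' derivation; the threshold, biasing parameters, and sample count are invented for illustration) that estimates a small Gaussian tail probability, a stand-in for a bit-error rate, by plain MC, by a CIS-style scaled biasing density, and by an IIS-style translated density, reweighting each biased sample by the density ratio:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
t, n = 4.0, 100_000                         # error threshold and sample budget (illustrative)
p_true = 0.5 * math.erfc(t / math.sqrt(2))  # P(X > t) for X ~ N(0,1)

def estimate(samples, weights):
    h = weights * (samples > t)
    return h.mean(), h.var() / n            # estimate and its estimator variance

# Plain Monte Carlo: unweighted standard-normal samples.
mc = estimate(rng.standard_normal(n), np.ones(n))

# CIS-style biasing: scale the input (sigma > 1), weight = ratio of densities.
sigma = t                                   # illustrative choice, not the paper's optimum
x = sigma * rng.standard_normal(n)
cis = estimate(x, sigma * np.exp(-0.5 * x**2 * (1.0 - 1.0 / sigma**2)))

# IIS-style biasing: translate the mean into the error region.
x = rng.standard_normal(n) + t              # shift by t, near-optimal for this toy problem
iis = estimate(x, np.exp(-t * x + 0.5 * t**2))

for name, (est, var) in [("MC", mc), ("CIS", cis), ("IIS", iis)]:
    print(f"{name:3s}: {est:.3e} +/- {math.sqrt(var):.1e}   (true {p_true:.3e})")
```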

  2. Adaptive control of theophylline therapy: importance of blood sampling times.

    PubMed

    D'Argenio, D Z; Khakmahd, K

    1983-10-01

    A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.

  3. Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.

    ERIC Educational Resources Information Center

    Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas

    2002-01-01

    Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…

  4. Sampling Simulations for Assessing the Accuracy of U.S. Agricultural Crop Mapping from Remotely Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Dwyer, Linnea; Yadav, Kamini; Congalton, Russell G.

    2017-04-01

    Providing adequate food and water for a growing, global population continues to be a major challenge. Mapping and monitoring crops are useful tools for estimating the extent of crop productivity. GFSAD30 (Global Food Security Analysis Data at 30m) is a NASA-funded program that is producing global cropland maps from field measurements and remote sensing images. The program studies 8 major crop types and includes information on cropland area/extent, whether crops are irrigated or rainfed, and cropping intensity. Using results from the US and the extensive reference data available from the USDA Cropland Data Layer (CDL), we will experiment with various sampling simulations to determine optimal sampling for thematic map accuracy assessment. These simulations will include varying the sampling unit, the sampling strategy, and the sample number. Results of these simulations will allow us to recommend assessment approaches to handle different cropping scenarios.
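    As a toy version of such sampling experiments (not the GFSAD30 or CDL protocol; the maps, error rate, and sample sizes below are synthetic), the following sketch compares simple random and class-stratified pixel sampling for estimating the overall accuracy of a classified map:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy maps: a reference crop map and a classified map with ~12% of pixels relabeled at random.
n_pix, n_classes = 100_000, 4
reference = rng.integers(0, n_classes, n_pix)
classified = reference.copy()
flip = rng.random(n_pix) < 0.12
classified[flip] = rng.integers(0, n_classes, flip.sum())
true_acc = np.mean(classified == reference)

def simple_random(n):
    idx = rng.choice(n_pix, n, replace=False)
    return np.mean(classified[idx] == reference[idx])

def stratified(n):
    # Equal allocation per mapped class, recombined as an area-weighted accuracy.
    acc = 0.0
    for c in range(n_classes):
        pool = np.flatnonzero(classified == c)
        idx = rng.choice(pool, n // n_classes, replace=False)
        acc += (len(pool) / n_pix) * np.mean(classified[idx] == reference[idx])
    return acc

for n in (100, 500, 2000):
    sr = [simple_random(n) for _ in range(200)]
    st = [stratified(n) for _ in range(200)]
    print(f"n={n:4d}: SRS std={np.std(sr):.4f}  stratified std={np.std(st):.4f}  (true acc {true_acc:.3f})")
```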

  5. Numerical simulations of motion-insensitive diffusion imaging based on the distant dipolar field effects.

    PubMed

    Lin, Tao; Sun, Huijun; Chen, Zhong; You, Rongyi; Zhong, Jianhui

    2007-12-01

    Diffusion weighting in MRI is commonly achieved with the pulsed-gradient spin-echo (PGSE) method. When combined with spin-warping image formation, this method often results in ghosts due to the sample's macroscopic motion. It has been shown experimentally (Kennedy and Zhong, MRM 2004;52:1-6) that these motion artifacts can be effectively eliminated by the distant dipolar field (DDF) method, which relies on the refocusing of spatially modulated transverse magnetization by the DDF within the sample itself. In this report, diffusion-weighted images (DWIs) using both DDF and PGSE methods in the presence of macroscopic sample motion were simulated. Numerical simulation results quantify the dependence of signals in DWI on several key motion parameters and demonstrate that the DDF DWIs are much less sensitive to macroscopic sample motion than the traditional PGSE DWIs. The results also show that the dipolar correlation distance (d(c)) can alter contrast in DDF DWIs. The simulated results are in good agreement with the experimental results reported previously.

  6. Enhanced Sampling in Free Energy Calculations: Combining SGLD with the Bennett's Acceptance Ratio and Enveloping Distribution Sampling Methods.

    PubMed

    König, Gerhard; Miller, Benjamin T; Boresch, Stefan; Wu, Xiongwu; Brooks, Bernard R

    2012-10-09

    One of the key requirements for the accurate calculation of free energy differences is proper sampling of conformational space. Especially in biological applications, molecular dynamics simulations are often confronted with rugged energy surfaces and high energy barriers, leading to insufficient sampling and, in turn, poor convergence of the free energy results. In this work, we address this problem by employing enhanced sampling methods. We explore the possibility of using self-guided Langevin dynamics (SGLD) to speed up the exploration process in free energy simulations. To obtain improved free energy differences from such simulations, it is necessary to account for the effects of the bias due to the guiding forces. We demonstrate how this can be accomplished for the Bennett's acceptance ratio (BAR) and the enveloping distribution sampling (EDS) methods. While BAR is considered among the most efficient methods available for free energy calculations, the EDS method developed by Christ and van Gunsteren is a promising development that reduces the computational costs of free energy calculations by simulating a single reference state. To evaluate the accuracy of both approaches in connection with enhanced sampling, EDS was implemented in CHARMM. For testing, we employ benchmark systems with analytical reference results and the mutation of alanine to serine. We find that SGLD with reweighting can provide accurate results for BAR and EDS where conventional molecular dynamics simulations fail. In addition, we compare the performance of EDS with other free energy methods. We briefly discuss the implications of our results and provide practical guidelines for conducting free energy simulations with SGLD.
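    As background for readers unfamiliar with BAR itself, the sketch below (synthetic Gaussian work distributions built to satisfy the Crooks relation; this is not the CHARMM, SGLD, or EDS implementation) solves the BAR self-consistency equation and compares it with one-sided exponential averaging:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
beta, dF_true, sigma = 1.0, 3.0, 2.0     # all in units of kT; invented values
nF, nR = 500, 500

# Gaussian forward/reverse work values consistent with the Crooks relation.
wF = rng.normal(dF_true + 0.5 * sigma**2, sigma, nF)
wR = rng.normal(-dF_true + 0.5 * sigma**2, sigma, nR)

fermi = lambda x: 1.0 / (1.0 + np.exp(x))
M = np.log(nF / nR)

def bar_residual(dF):
    # BAR: sum_F f(M + beta(wF - dF)) - sum_R f(-M + beta(wR + dF)) = 0 at the estimate.
    return fermi(M + beta * (wF - dF)).sum() - fermi(-M + beta * (wR + dF)).sum()

dF_bar = brentq(bar_residual, -50.0, 50.0)
dF_exp = -np.log(np.mean(np.exp(-beta * wF))) / beta   # one-sided exponential averaging
print(f"BAR: {dF_bar:.3f} kT   EXP: {dF_exp:.3f} kT   (true {dF_true} kT)")
```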

  7. Communication: Multiple atomistic force fields in a single enhanced sampling simulation

    NASA Astrophysics Data System (ADS)

    Hoang Viet, Man; Derreumaux, Philippe; Nguyen, Phuong H.

    2015-07-01

    The main concerns of biomolecular dynamics simulations are the convergence of the conformational sampling and the dependence of the results on the force fields. While the first issue can be addressed by employing enhanced sampling techniques such as simulated tempering or replica exchange molecular dynamics, repeating these simulations with different force fields is very time consuming. Here, we propose an automatic method that includes different force fields into a single advanced sampling simulation. Conformational sampling using three all-atom force fields is enhanced by simulated tempering, and by formulating the weight parameters of the simulated tempering method in terms of the energy fluctuations, the system is able to perform a random walk in both temperature and force field spaces. The method is first demonstrated on a 1D system and then validated by the folding of the 10-residue chignolin peptide in explicit water.

  8. Improving the Acquisition and Management of Sample Curation Data

    NASA Technical Reports Server (NTRS)

    Todd, Nancy S.; Evans, Cindy A.; Labasse, Dan

    2011-01-01

    This paper discusses the current sample documentation processes used during and after a mission, examines the challenges and special considerations needed for designing effective sample curation data systems, and looks at the results of a simulated sample return mission and the lessons learned from this simulation. In addition, it introduces a new data architecture for an integrated sample curation data system being implemented at the NASA Astromaterials Acquisition and Curation department and discusses how it improves on existing data management systems.

  9. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

    2008-06-01

    An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near equilibrium flows (DREAM-I) or output instantaneous particle data obtained by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.

  10. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
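    A stripped-down version of such a simulation could look like the following (event rate, durations, and interval length are invented; the scoring rules are the standard definitions of the three methods); it typically reproduces the known tendency of partial-interval recording to overestimate, and whole-interval recording to underestimate, the true proportion of time the behavior occurs:

```python
import numpy as np

rng = np.random.default_rng(3)
obs_len, dt = 600.0, 0.1                  # 10-min observation at 0.1-s resolution
interval = 10.0                           # 10-s scoring intervals
t = np.arange(0, obs_len, dt)

def simulate_behavior(rate=0.02, mean_dur=5.0):
    """Random target events: exponential inter-event gaps and durations."""
    on = np.zeros_like(t, dtype=bool)
    clock = rng.exponential(1 / rate)
    while clock < obs_len:
        dur = rng.exponential(mean_dur)
        on[(t >= clock) & (t < clock + dur)] = True
        clock += dur + rng.exponential(1 / rate)
    return on

def score(on):
    edges = np.arange(0, obs_len, interval)
    ends = edges + interval
    true_prop = on.mean()
    # Momentary time sampling: look only at the last instant of each interval.
    mts = np.mean([on[np.searchsorted(t, e) - 1] for e in ends])
    per_int = [on[(t >= a) & (t < b)] for a, b in zip(edges, ends)]
    pir = np.mean([seg.any() for seg in per_int])   # partial interval: any occurrence
    wir = np.mean([seg.all() for seg in per_int])   # whole interval: continuous occurrence
    return true_prop, mts, pir, wir

runs = np.array([score(simulate_behavior()) for _ in range(100)])
for name, col in zip(["true", "MTS", "PIR", "WIR"], runs.T):
    print(f"{name}: mean = {col.mean():.3f}")
```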

  11. Spatial Sampling of Weather Data for Regional Crop Yield Simulations

    NASA Technical Reports Server (NTRS)

    Van Bussel, Lenny G. J.; Ewert, Frank; Zhao, Gang; Hoffmann, Holger; Enders, Andreas; Wallach, Daniel; Asseng, Senthold; Baigorria, Guillermo A.; Basso, Bruno; Biernath, Christian

    2016-01-01

    Field-scale crop models are increasingly applied at spatio-temporal scales that range from regions to the globe and from decades up to 100 years. Sufficiently detailed data to capture the prevailing spatio-temporal heterogeneity in weather, soil, and management conditions as needed by crop models are rarely available. Effective sampling may overcome the problem of missing data but has rarely been investigated. In this study the effect of sampling weather data has been evaluated for simulating yields of winter wheat in a region in Germany over a 30-year period (1982-2011) using 12 process-based crop models. A stratified sampling was applied to compare the effect of different sizes of spatially sampled weather data (10, 30, 50, 100, 500, 1000 and full coverage of 34,078 sampling points) on simulated wheat yields. Stratified sampling was further compared with random sampling. Possible interactions between sample size and crop model were evaluated. The results showed differences in simulated yields among crop models, but all models reproduced the stratification pattern well. Importantly, the regional mean of simulated yields based on full coverage could already be reproduced by a small sample of 10 points. This was also true for reproducing the temporal variability in simulated yields, but more sampling points (about 100) were required to accurately reproduce spatial yield variability. The number of sampling points can be smaller when stratified sampling is applied as compared to random sampling. However, differences between crop models were observed, including some interaction between the effect of sampling on simulated yields and the model used. We concluded that stratified sampling can considerably reduce the number of required simulations, but differences between crop models must be considered, as the choice of a specific model can have larger effects on simulated yields than the sampling strategy. Assessing the impact of sampling soil and crop management data for regional simulations of crop yields is still needed.

  12. Influence of sampling rate on the calculated fidelity of an aircraft simulation

    NASA Technical Reports Server (NTRS)

    Howard, J. C.

    1983-01-01

    One of the factors that influences the fidelity of an aircraft digital simulation is the sampling rate. As the sampling rate is increased, the calculated response of the discrete representation tends to coincide with the response of the corresponding continuous system. Because of computer limitations, however, the sampling rate cannot be increased indefinitely. Moreover, real-time simulation requirements demand that a finite sampling rate be adopted. In view of these restrictions, a study was undertaken to determine the influence of sampling rate on the response characteristics of a simulated aircraft describing short-period oscillations. Changes in the calculated response characteristics of the simulated aircraft degrade the fidelity of the simulation. In the present context, fidelity degradation is defined as the percentage change in those characteristics that have the greatest influence on pilot opinion: short period frequency omega, short period damping ratio zeta, and the product omega zeta. To determine the influence of the sampling period on these characteristics, the equations describing the response of a DC-8 aircraft to elevator control inputs were used. The results indicate that if the sampling period is too large, the fidelity of the simulation can be degraded.
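    One way to see the effect numerically, as a rough sketch rather than the paper's DC-8 analysis (the short-period parameters are invented, and explicit Euler integration stands in for whatever scheme the real-time computer uses), is to map the discrete poles of the sampled system back to the s-plane and read off the effective frequency and damping:

```python
import numpy as np

# Short-period mode: natural frequency wn (rad/s) and damping ratio zeta (illustrative values).
wn, zeta = 2.0, 0.6
A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])

print(" T (s)   omega_hat  zeta_hat   d_omega%  d_zeta%")
for T in (0.01, 0.05, 0.1, 0.2):
    # Explicit-Euler update matrix; its eigenvalues are the discrete poles.
    poles_z = np.linalg.eigvals(np.eye(2) + T * A)
    s = np.log(poles_z[0]) / T          # map a discrete pole back to the s-plane
    wn_hat, zeta_hat = abs(s), -s.real / abs(s)
    print(f"{T:5.2f}   {wn_hat:8.4f}  {zeta_hat:8.4f}  "
          f"{100*(wn_hat-wn)/wn:8.2f}  {100*(zeta_hat-zeta)/zeta:8.2f}")
```

    As the sampling period T grows, the effective damping ratio in particular drifts away from its continuous value, which is the kind of fidelity degradation the study quantifies.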

  13. Two-dimensional numerical simulation of boron diffusion for pyramidally textured silicon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Fa-Jun, E-mail: Fajun.Ma@nus.edu.sg; Duttagupta, Shubham; Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, 117576

    2014-11-14

    Multidimensional numerical simulation of boron diffusion is of great relevance for the improvement of industrial n-type crystalline silicon wafer solar cells. However, surface passivation of boron diffused area is typically studied in one dimension on planar lifetime samples. This approach neglects the effects of the solar cell pyramidal texture on the boron doping process and resulting doping profile. In this work, we present a theoretical study using a two-dimensional surface morphology for pyramidally textured samples. The boron diffusivity and segregation coefficient between oxide and silicon in simulation are determined by reproducing measured one-dimensional boron depth profiles prepared using different boron diffusion recipes on planar samples. The established parameters are subsequently used to simulate the boron diffusion process on textured samples. The simulated junction depth is found to agree quantitatively well with electron beam induced current measurements. Finally, chemical passivation on planar and textured samples is compared in device simulation. Particularly, a two-dimensional approach is adopted for textured samples to evaluate chemical passivation. The intrinsic emitter saturation current density, which is only related to Auger and radiative recombination, is also simulated for both planar and textured samples. The differences between planar and textured samples are discussed.

  14. Developing effective sampling designs for monitoring natural resources in Alaskan national parks: an example using simulations and vegetation data

    USGS Publications Warehouse

    Thompson, William L.; Miller, Amy E.; Mortenson, Dorothy C.; Woodward, Andrea

    2011-01-01

    Monitoring natural resources in Alaskan national parks is challenging because of their remoteness, limited accessibility, and high sampling costs. We describe an iterative, three-phased process for developing sampling designs based on our efforts to establish a vegetation monitoring program in southwest Alaska. In the first phase, we defined a sampling frame based on land ownership and specific vegetated habitats within the park boundaries and used Path Distance analysis tools to create a GIS layer that delineated portions of each park that could be feasibly accessed for ground sampling. In the second phase, we used simulations based on landcover maps to identify the size and configuration of the ground sampling units (single plots or grids of plots) and to refine areas to be potentially sampled. In the third phase, we used a second set of simulations to estimate the sample size and sampling frequency required to have a reasonable chance of detecting a minimum trend in vegetation cover for a specified time period and level of statistical confidence. Results of the first set of simulations indicated that a spatially balanced random sample of single plots from the most common landcover types yielded the most efficient sampling scheme. Results of the second set of simulations were compared with field data and indicated that we should be able to detect at least a 25% change in vegetation attributes over 31 years by sampling 8 or more plots per year every five years in focal landcover types. This approach would be especially useful in situations where ground sampling is restricted by access.
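    A bare-bones analogue of the second set of simulations might look like this (all variance components, the trend size, and the revisit schedule are invented placeholders, not the parks' data); it estimates the power to detect a 25% decline in cover over three decades as a function of plots sampled per visit:

```python
import numpy as np

rng = np.random.default_rng(4)

def power(n_plots, years=np.arange(0, 31, 5), trend=-0.25 / 30,
          base=0.6, sd_plot=0.10, sd_year=0.03, n_sim=500):
    """Fraction of simulated monitoring records in which a linear trend is detected."""
    detected = 0
    for _ in range(n_sim):
        means = np.array([
            base * (1 + trend * y) + rng.normal(0, sd_year)
            + rng.normal(0, sd_plot, n_plots).mean() for y in years])
        x, m = years - years.mean(), means - means.mean()
        slope = (x * m).sum() / (x * x).sum()
        resid = m - slope * x
        se = np.sqrt((resid**2).sum() / (len(x) - 2) / (x * x).sum())
        detected += abs(slope / se) > 2.57          # two-sided 5% t critical value, 5 df
    return detected / n_sim

for n in (4, 8, 16):
    print(f"{n:2d} plots per visit, every 5 yr over 30 yr: power ~ {power(n):.2f}")
```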

  15. Digital simulation of scalar optical diffraction: revisiting chirp function sampling criteria and consequences.

    PubMed

    Voelz, David G; Roggemann, Michael C

    2009-11-10

    Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
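    The critical-sampling condition for the single-FFT (direct) method can be read off by applying the Nyquist criterion to the source-plane chirp exp(i*pi*x^2/(lambda*z)): the local frequency at the grid edge, N*dx/(2*lambda*z), must not exceed 1/(2*dx), which gives dx = sqrt(lambda*z/N) as the ideal spacing. A small checker with invented numbers (a sketch of this one criterion only, not of all three propagation approaches):

```python
import numpy as np

def fresnel_sampling_regime(wavelength, z, N, dx):
    """Classify chirp sampling for the single-FFT (direct) Fresnel method."""
    dx_ideal = np.sqrt(wavelength * z / N)   # critical sampling: dx**2 = wavelength*z/N
    if np.isclose(dx, dx_ideal):
        return "ideally sampled", dx_ideal
    return ("oversampled" if dx < dx_ideal else "undersampled"), dx_ideal

# Example: 0.5 um light, 1 m propagation, 1024-point grid (ideal dx is ~22 um here).
lam, z, N = 0.5e-6, 1.0, 1024
for dx in (1.0e-5, np.sqrt(lam * z / N), 5.0e-5):
    regime, dx_ideal = fresnel_sampling_regime(lam, z, N, dx)
    print(f"dx = {dx:.2e} m -> {regime} (ideal dx = {dx_ideal:.2e} m)")
```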

  16. Spatial adaptive sampling in multiscale simulation

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.

    2014-07-01

    In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.

  17. Examination of the MMPI-2 restructured form (MMPI-2-RF) validity scales in civil forensic settings: findings from simulation and known group samples.

    PubMed

    Wygant, Dustin B; Ben-Porath, Yossef S; Arbisi, Paul A; Berry, David T R; Freeman, David B; Heilbronner, Robert L

    2009-11-01

    The current study examined the effectiveness of the MMPI-2 Restructured Form (MMPI-2-RF; Ben-Porath and Tellegen, 2008) over-reporting indicators in civil forensic settings. The MMPI-2-RF includes three revised MMPI-2 over-reporting validity scales and a new scale to detect over-reported somatic complaints. Participants dissimulated medical and neuropsychological complaints in two simulation samples, and a known-groups sample used symptom validity tests as a response bias criterion. Results indicated large effect sizes for the MMPI-2-RF validity scales, including a Cohen's d of .90 for Fs in a head injury simulation sample, 2.31 for FBS-r, 2.01 for F-r, and 1.97 for Fs in a medical simulation sample, and 1.45 for FBS-r and 1.30 for F-r in identifying poor effort on SVTs. Classification results indicated good sensitivity and specificity for the scales across the samples. This study indicates that the MMPI-2-RF over-reporting validity scales are effective at detecting symptom over-reporting in civil forensic settings.

  18. Simulations and experiments on RITA-2 at PSI

    NASA Astrophysics Data System (ADS)

    Klausen, S. N.; Lefmann, K.; McMorrow, D. F.; Altorfer, F.; Janssen, S.; Lüthy, M.

    The cold-neutron triple-axis spectrometer RITA-2, designed and built at Risø National Laboratory, was installed at the neutron source SINQ at the Paul Scherrer Institute in April/May 2001. In connection with the installation of RITA-2, computer simulations were performed using the neutron ray-tracing package McStas. The simulation results are compared to real experimental results obtained with a powder sample. In particular, the flux at the sample position and the resolution function of the spectrometer are investigated.

  19. Convergence of sampling in protein simulations

    NASA Astrophysics Data System (ADS)

    Hess, Berk

    2002-03-01

    With molecular dynamics, protein dynamics can be simulated in atomic detail. Current computers are not fast enough to probe all available conformations, but fluctuations around one conformation can be sampled to a reasonable extent. The motions with the largest fluctuations can be filtered out of a simulation using covariance or principal component analysis. A problem with this analysis is that random diffusion can appear as correlated motion. An analysis is presented of how long a simulation should be to obtain relevant results for global motions. The analysis reveals that the cosine content of the principal components is a good indicator for bad sampling.
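    The indicator is easy to compute: the cosine content of a principal-component projection p(t) on an interval of length T is c_i = (2/T) (integral of cos(i*pi*t/T) p(t) dt)^2 / (integral of p(t)^2 dt), which tends toward 1 for random diffusion and toward 0 for well-converged sampling. A discrete sketch on synthetic 1-D series standing in for PC projections:

```python
import numpy as np

def cosine_content(p, i=1):
    """Discrete cosine content c_i of a (mean-free) projection time series p."""
    p = p - p.mean()                 # PC projections are mean-free; enforce it here
    T = len(p)
    cos = np.cos(i * np.pi * np.arange(T) / T)
    return (2.0 / T) * (cos @ p) ** 2 / (p @ p)

rng = np.random.default_rng(5)
walk = np.cumsum(rng.standard_normal(10_000))   # random diffusion: c1 typically large
noise = rng.standard_normal(10_000)             # well-mixed fluctuations: c1 near 0
print(f"random walk : c1 = {cosine_content(walk):.2f}")
print(f"white noise : c1 = {cosine_content(noise):.2f}")
```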

  20. Analytical Simulations of Energy-Absorbing Impact Spheres for a Mars Sample Return Earth Entry Vehicle

    NASA Technical Reports Server (NTRS)

    Billings, Marcus Dwight; Fasanella, Edwin L. (Technical Monitor)

    2002-01-01

    Nonlinear dynamic finite element simulations were performed to aid in the design of an energy-absorbing impact sphere for a passive Earth Entry Vehicle (EEV) that is a possible architecture for the Mars Sample Return (MSR) mission. The MSR EEV concept uses an entry capsule and energy-absorbing impact sphere designed to contain and limit the acceleration of collected samples during Earth impact without a parachute. The impact sphere is composed of solid hexagonal and pentagonal foam-filled cells with hybrid graphite-epoxy/Kevlar composite cell walls. Collected Martian samples will fit inside a smaller spherical sample container at the center of the EEV's cellular structure. Comparisons were made of analytical results obtained using MSC.Dytran with test results obtained from impact tests performed at NASA Langley Research Center for impact velocities from 30 to 40 m/s. Acceleration, velocity, and deformation results compared well with the test results. The correlated finite element model was then used for simulations of various off-nominal impact scenarios. Off-nominal simulations at an impact velocity of 40 m/s included a rotated cellular structure impact onto a flat surface, a cellular structure impact onto an angled surface, and a cellular structure impact onto the corner of a step.

  1. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    PubMed

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.

  2. Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity

    DOE PAGES

    Gordiz, Kiarash; Singh, David J.; Henry, Asegun

    2015-01-29

    In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.

  3. Nonlinear vs. linear biasing in Trp-cage folding simulations

    NASA Astrophysics Data System (ADS)

    Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka

    2015-03-01

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low dimensional embeddings as collective variables. Folding of the mini-protein was successfully simulated in a 200 ns simulation with linear biasing and with non-linear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.

  4. Nonlinear vs. linear biasing in Trp-cage folding simulations.

    PubMed

    Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka

    2015-03-21

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low dimensional embeddings as collective variables. Folding of the mini-protein was successfully simulated in a 200 ns simulation with linear biasing and with non-linear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.

  5. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    PubMed

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

    Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
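    To see the EPV effect in miniature (plain maximum likelihood only, without Firth's correction; the predictor count, effect size, and event fraction are invented), one can simulate logistic data at several EPV levels and watch the small-sample inflation of a slope estimate:

```python
import numpy as np

rng = np.random.default_rng(6)

def fit_logit(X, y, iters=50):
    """Plain maximum-likelihood logistic regression via Newton-Raphson."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ b, -30, 30)))
        H = X.T @ (X * (p * (1 - p))[:, None]) + 1e-8 * np.eye(X.shape[1])
        b += np.linalg.solve(H, X.T @ (y - p))
    return b

def mean_estimate(epv, n_vars=5, beta1=0.5, n_sim=300):
    """Average fitted slope when the design yields roughly `epv` events per variable."""
    n = int(np.ceil(epv * n_vars / 0.5))    # event fraction ~0.5 by construction
    est = []
    for _ in range(n_sim):
        X = np.column_stack([np.ones(n), rng.standard_normal((n, n_vars))])
        beta = np.zeros(n_vars + 1)
        beta[1] = beta1
        y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta))).astype(float)
        b = fit_logit(X, y)
        if np.all(np.abs(b) < 10):          # crude guard against separated data sets
            est.append(b[1])
    return np.mean(est), len(est)

for epv in (5, 10, 20, 50):
    m, kept = mean_estimate(epv)
    print(f"EPV = {epv:3d}: mean slope estimate {m:.3f} (true 0.5, {kept}/300 usable sims)")
```

    Note how the guard against separation itself shapes the result at low EPV, which is one of the paper's points about how separation handling drives differences between simulation studies.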

  6. Influence of sample preparation on the transformation of low-density to high-density amorphous ice: An explanation based on the potential energy landscape

    NASA Astrophysics Data System (ADS)

    Giovambattista, Nicolas; Starr, Francis W.; Poole, Peter H.

    2017-07-01

    Experiments and computer simulations of the transformations of amorphous ices display different behaviors depending on sample preparation methods and on the rates of change of temperature and pressure to which samples are subjected. In addition to these factors, simulation results also depend strongly on the chosen water model. Using computer simulations of the ST2 water model, we study how the sharpness of the compression-induced transition from low-density amorphous ice (LDA) to high-density amorphous ice (HDA) is influenced by the preparation of LDA. By studying LDA samples prepared using widely different procedures, we find that the sharpness of the LDA-to-HDA transformation is correlated with the depth of the initial LDA sample in the potential energy landscape (PEL), as characterized by the inherent structure energy. Our results show that the complex phenomenology of the amorphous ices reported in experiments and computer simulations can be understood and predicted in a unified way from knowledge of the PEL of the system.

  7. Simulation of Tip-Sample Interaction in the Atomic Force Microscope

    NASA Technical Reports Server (NTRS)

    Good, Brian S.; Banerjea, Amitava

    1994-01-01

    Recent simulations of the interaction between planar surfaces and model Atomic Force Microscope (AFM) tips have suggested that there are conditions under which the tip may become unstable and 'avalanche' toward the sample surface. Here we investigate via computer simulation the stability of a variety of model AFM tip configurations with respect to the avalanche transition for a number of fcc metals. We perform Monte-Carlo simulations at room temperature using the Equivalent Crystal Theory (ECT) of Smith and Banerjea. Results are compared with recent experimental results as well as with our earlier work on the avalanche of parallel planar surfaces. Our results on a model single-atom tip are in excellent agreement with recent experiments on tunneling through mechanically-controlled break junctions.

  8. Assessment of the effect of population and diary sampling methods on estimation of school-age children exposure to fine particles.

    PubMed

    Che, W W; Frey, H Christopher; Lau, Alexis K H

    2014-12-01

    Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m³ for simulated children using the stratified population sampling method, and 12.2 μg/m³ using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m³ due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In case studies evaluated here, the MCC method led to 10% higher estimation of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.

  9. Simulation techniques for estimating error in the classification of normal patterns

    NASA Technical Reports Server (NTRS)

    Whitsitt, S. J.; Landgrebe, D. A.

    1974-01-01

    Methods of efficiently generating and classifying samples with specified multivariate normal distributions are discussed. Conservative confidence tables for sample sizes are given for selective sampling. Simulation results are compared with classified training data. Techniques for comparing error and separability measures for two normal patterns are investigated and used to display the relationship between the error and the Chernoff bound.
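    The generation step is commonly done with a factor of the target covariance; the sketch below (means, covariance, and sample sizes are invented, and the Bhattacharyya form of the Chernoff bound is used) draws two normal patterns via a Cholesky factor, estimates the Bayes error by classification, and compares it with the bound:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two normal patterns with specified means and a shared covariance.
mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
cov = np.array([[1.0, 0.4], [0.4, 1.5]])
L = np.linalg.cholesky(cov)               # samples are mu + z @ L.T with z ~ N(0, I)

n = 50_000
x0 = mu0 + rng.standard_normal((n, 2)) @ L.T
x1 = mu1 + rng.standard_normal((n, 2)) @ L.T

# For equal covariances and priors, the Bayes rule is a linear discriminant.
w = np.linalg.solve(cov, mu1 - mu0)
thresh = 0.5 * (mu0 + mu1) @ w
err = 0.5 * (np.mean(x0 @ w > thresh) + np.mean(x1 @ w < thresh))

# Chernoff bound at s = 1/2 (Bhattacharyya) for equal covariances: P(err) <= 0.5*exp(-B).
d = mu1 - mu0
B = 0.125 * d @ np.linalg.solve(cov, d)   # Bhattacharyya distance
print(f"simulated error = {err:.4f}, Chernoff/Bhattacharyya bound = {0.5 * np.exp(-B):.4f}")
```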

  10. The effect of sampling techniques used in the multiconfigurational Ehrenfest method

    NASA Astrophysics Data System (ADS)

    Symonds, C.; Kattirtzi, J. A.; Shalashilin, D. V.

    2018-05-01

    In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.

  11. The effect of sampling techniques used in the multiconfigurational Ehrenfest method.

    PubMed

    Symonds, C; Kattirtzi, J A; Shalashilin, D V

    2018-05-14

    In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.

  12. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    NASA Astrophysics Data System (ADS)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. Hydrogeological properties are assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort. Fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
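    A minimal unconditional LULHS-style sketch (1-D grid, exponential covariance, and all sizes invented; the conditional variant is omitted) stratifies the normal scores with LHS and then imposes the spatial correlation through the lower-triangular factor of the covariance matrix:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)

def lhs_standard_normal(n_grid, n_real):
    """Latin hypercube sample of standard normals: one stratum per realization."""
    u = (rng.permuted(np.tile(np.arange(n_real), (n_grid, 1)), axis=1)
         + rng.random((n_grid, n_real))) / n_real
    return norm.ppf(u)                      # n_grid x n_real stratified normal scores

# Exponential covariance on a 1-D grid and its lower-triangular (Cholesky/LU) factor.
x = np.linspace(0, 100, 200)
corr_len = 15.0
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))

Z = lhs_standard_normal(len(x), n_real=50)  # stratified, spatially uncorrelated scores
fields = L @ Z                              # impose the target spatial correlation

# Empirical check: lag-1 correlation of the realizations vs the model value.
emp = np.corrcoef(fields[:-1].ravel(), fields[1:].ravel())[0, 1]
print(f"target lag-1 correlation {np.exp(-(x[1] - x[0]) / corr_len):.3f}, empirical {emp:.3f}")
```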

  13. Simulation of RBS spectra with known 3D sample surface roughness

    NASA Astrophysics Data System (ADS)

    Malinský, Petr; Siegel, Jakub; Hnatowicz, Vladimir; Macková, Anna; Švorčík, Václav

    2017-09-01

    Rutherford Backscattering Spectrometry (RBS) is a technique for elemental depth profiling with nanometer depth resolution. Surface roughness of analysed samples can deteriorate the RBS spectra and make their interpretation more difficult and ambiguous. This work describes the simulation of RBS spectra which takes into account the real 3D morphology of the sample surface obtained by the AFM method. The RBS spectrum is calculated as a sum of many individual spectra obtained for randomly chosen particle trajectories over the sample's 3D landscape. The spectra, simulated for different ion beam incidence angles, are compared to the experimental ones measured with 2.0 MeV 4He+ ions. The main aim of this work is to obtain more definite information on how a particular surface morphology and measuring geometry affect the RBS spectra and the derived elemental depth profiles. A reasonable agreement between the measured and simulated spectra was found, and the results indicate that the AFM data on the sample surface can be used for the simulation of RBS spectra.

  14. [Research on Time-frequency Characteristics of Magneto-acoustic Signal of Different Thickness Medium Based on Wave Summing Method].

    PubMed

    Zhang, Shunqi; Yin, Tao; Ma, Ren; Liu, Zhipeng

    2015-08-01

    Functional imaging of biological electrical characteristics based on the magneto-acoustic effect gives valuable information about tissue in early tumor diagnosis; the time and frequency characteristics of the magneto-acoustic signal are therefore important in image reconstruction. This paper proposes a wave summing method based on the Green function solution for the acoustic source of the magneto-acoustic effect. Simulations and analysis of the time and frequency characteristics of the magneto-acoustic signal are carried out under a quasi-1D transmission condition for models of different thicknesses. The simulated magneto-acoustic signals were verified through experiments. The simulation results for different thicknesses showed that the time-frequency characteristics of the magneto-acoustic signal reflect the thickness of the sample. Thin samples (less than one pulse wavelength) and thick samples (larger than one wavelength) showed different summed waveforms and frequency characteristics due to the difference in summing thickness. Experimental results verified the theoretical analysis and simulation results. This research lays a foundation for acoustic source and conductivity reconstruction in media of different thicknesses in magneto-acoustic imaging.

  15. Quantifying sampling noise and parametric uncertainty in atomistic-to-continuum simulations using surrogate models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salloum, Maher N.; Sargsyan, Khachik; Jones, Reese E.

    2015-08-11

    We present a methodology to assess the predictive fidelity of multiscale simulations by incorporating uncertainty in the information exchanged between the components of an atomistic-to-continuum simulation. We account for both the uncertainty due to finite sampling in molecular dynamics (MD) simulations and the uncertainty in the physical parameters of the model. Using Bayesian inference, we represent the expensive atomistic component by a surrogate model that relates the long-term output of the atomistic simulation to its uncertain inputs. We then present algorithms to solve for the variables exchanged across the atomistic-continuum interface in terms of polynomial chaos expansions (PCEs). We also consider a simple Couette flow where velocities are exchanged between the atomistic and continuum components, while accounting for uncertainty in the atomistic model parameters and the continuum boundary conditions. Results show convergence of the coupling algorithm at a reasonable number of iterations. As a result, the uncertainty in the obtained variables significantly depends on the amount of data sampled from the MD simulations and on the width of the time averaging window used in the MD simulations.

  16. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.

  17. The VIIRS Ocean Data Simulator Enhancements and Results

    NASA Technical Reports Server (NTRS)

    Robinson, Wayne D.; Patt, Fredrick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.

    2011-01-01

    The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.

  18. The VIIRS ocean data simulator enhancements and results

    NASA Astrophysics Data System (ADS)

    Robinson, Wayne D.; Patt, Frederick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.

    2011-10-01

    The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.

  19. Nonlinear vs. linear biasing in Trp-cage folding simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spiwok, Vojtěch, E-mail: spiwokv@vscht.cz; Oborský, Pavel; Králová, Blanka

    2015-03-21

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low dimensional embeddings as collective variables. Folding of the mini-protein was successfully simulated in a 200 ns simulation with linear biasing and with non-linear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.

  20. Massively parallel simulator of optical coherence tomography of inhomogeneous turbid media.

    PubMed

    Malektaji, Siavash; Lima, Ivan T; Escobar I, Mauricio R; Sherif, Sherif S

    2017-10-01

    An accurate and practical simulator for Optical Coherence Tomography (OCT) could be an important tool to study the underlying physical phenomena in OCT such as multiple light scattering. Recently, many researchers have investigated simulation of OCT of turbid media, e.g., tissue, using Monte Carlo methods. The main drawback of these earlier simulators is the long computational time required to produce accurate results. We developed a massively parallel simulator of OCT of inhomogeneous turbid media that obtains both Class I diffusive reflectivity, due to ballistic and quasi-ballistic scattered photons, and Class II diffusive reflectivity due to multiply scattered photons. This Monte Carlo-based simulator is implemented on graphics processing units (GPUs), using the Compute Unified Device Architecture (CUDA) platform and programming model, to exploit the parallel nature of propagation of photons in tissue. It models an arbitrary shaped sample medium as a tetrahedron-based mesh and uses an advanced importance sampling scheme. This new simulator speeds up simulations of OCT of inhomogeneous turbid media by about two orders of magnitude. To demonstrate this result, we have compared the computation times of our new parallel simulator and its serial counterpart using two samples of inhomogeneous turbid media. We have shown that our parallel implementation reduced simulation time of OCT of the first sample medium from 407 min to 92 min by using a single GPU card, to 12 min by using 8 GPU cards and to 7 min by using 16 GPU cards. For the second sample medium, the OCT simulation time was reduced from 209 h to 35.6 h by using a single GPU card, to 4.65 h by using 8 GPU cards, and to only 2 h by using 16 GPU cards. Therefore our new parallel simulator is considerably more practical to use than its central processing unit (CPU)-based counterpart. Our new parallel OCT simulator could be a practical tool to study the different physical phenomena underlying OCT, or to design OCT systems with improved performance. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Design and testing of coring bits on drilling lunar rock simulant

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jiang, Shengyuan; Tang, Dewei; Xu, Bo; Ma, Chao; Zhang, Hui; Qin, Hongwei; Deng, Zongquan

    2017-02-01

    Coring bits are widely utilized in the sampling of celestial bodies, and their drilling behavior directly affects the sampling results and drilling security. This paper introduces a lunar regolith coring bit (LRCB), a key component of the sampling tool for breaking lunar rock during the lunar soil sampling process. We establish the interaction model between the drill bit and rock at a small cutting depth, and the two main parameters of the LRCB that influence drilling loads (the forward and outward rake angles) are determined. We perform a parameter screening of the LRCB with the aim of minimizing the weight on bit (WOB). We verify the drilling load performance of the LRCB after optimization and find that the higher the penetration per revolution (PPR), the larger the drilling loads. In addition, we perform lunar soil drilling simulations to estimate the chip-conveying and sample-coring efficiency of the LRCB. The simulation and test results are basically consistent on coring efficiency, and the chip removal efficiency of the LRCB is slightly lower than that of the HIT-H bit in simulation. This work proposes a method for the design of coring bits in subsequent extraterrestrial explorations.

  2. Real time flight simulation methodology

    NASA Technical Reports Server (NTRS)

    Parrish, E. A.; Cook, G.; Mcvey, E. S.

    1976-01-01

    An example sensitivity study is presented to demonstrate how a digital autopilot designer could decide on the minimum sampling rate for a computer specification. It consists of comparing the simulated step response of an existing analog autopilot and its associated aircraft dynamics to the digital version operating at various sampling frequencies, and specifying a sampling frequency that results in an acceptable change in relative stability. In general, the zero-order hold introduces phase lag, which increases overshoot and settling time. It should be noted that this solution is for substituting a digital autopilot for a continuous autopilot. A complete redesign could yield responses that more closely resemble the continuous results or that conform better to the original design goals.

  3. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis.
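
    The kind of classification-error check reported above can be sketched in a few lines. In the toy version below, the decision threshold, the normal perturbation of cluster-level prevalences, and the 5%/15% prevalence cut-offs are illustrative assumptions, not the paper's actual design.

    ```python
    # Sketch: operating characteristics of a 67x3 cluster-LQAS design for GAM
    # prevalence, with cluster-level prevalence scatter standing in for
    # intracluster correlation. All thresholds are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)

    def p_classify_high(prevalence, spread=0.05, n_clusters=67, m=3,
                        threshold=19, n_sim=10_000):
        """Fraction of simulated surveys classified as 'high prevalence'."""
        hits = 0
        for _ in range(n_sim):
            p_c = np.clip(rng.normal(prevalence, spread, n_clusters), 0.0, 1.0)
            if rng.binomial(m, p_c).sum() > threshold:
                hits += 1
        return hits / n_sim

    # alpha-type error: a truly low-prevalence (5%) area classified as high
    print("P(high | p=0.05):", p_classify_high(0.05))
    # beta-type error: a truly high-prevalence (15%) area classified as low
    print("P(low  | p=0.15):", 1.0 - p_classify_high(0.15))
    ```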

  4. Evaluations of lunar regolith simulants

    NASA Astrophysics Data System (ADS)

    Taylor, Lawrence A.; Pieters, Carle M.; Britt, Daniel

    2016-07-01

    Apollo lunar regolith samples are not available in quantity for engineering studies with In-Situ Resource Utilization (ISRU). Therefore, with the expectation of a return to the Moon, dozens of regolith (soil) simulants have been developed, to some extent as a result of inefficient distribution of NASA-sanctioned simulants. In this paper, we review many of these simulants, with evaluations of their shortcomings. In 2010, the NAC-PSS committee instructed the Lunar Exploration Advisory Group (LEAG) and CAPTEM (the NASA committee recommending on the appropriations of Apollo samples) to report on the status of lunar regolith simulants. This report is reviewed herein, along with a list of the plethora of lunar regolith simulants and references. In addition, and importantly, a special, unique Apollo 17 soil sample (70050) is discussed; it has many of the properties sought for ISRU studies and should be available in reasonable amounts for such studies.

  5. A simulative comparison of respondent driven sampling with incentivized snowball sampling – the “strudel effect”

    PubMed Central

    Gyarmathy, V. Anna; Johnston, Lisa G.; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A.

    2014-01-01

    Background Respondent driven sampling (RDS) and Incentivized Snowball Sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). Methods We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania ("original sample") to assess whether the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. Results The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1 to 12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. Conclusions When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called "strudel effect" is discussed in the paper. PMID:24360650

  6. Simulation on Poisson and negative binomial models of count road accident modeling

    NASA Astrophysics Data System (ADS)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

    Accident count data have often been shown to exhibit overdispersion. On the other hand, the data might contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, assuming that the dependent variable of the generated data follows a certain distribution, namely the Poisson or negative binomial distribution, with sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression, and hurdle negative binomial models to the simulated data. The model fits were compared, and the simulation results show that, for each sample size, not every model fits the data well even when the data are generated from that model's own distribution, especially when the sample size is large. Furthermore, larger sample sizes produce more zero accident counts in the dataset.
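
    The overdispersion and excess-zero behavior that motivates the model comparison is easy to reproduce. The short sketch below generates counts from both distributions at several sample sizes and reports the variance-to-mean ratio and the share of zeros; the rate and dispersion parameters are illustrative, not the study's values.

    ```python
    # Generate accident-like counts from Poisson and negative binomial models
    # and check overdispersion via the variance/mean ratio (~1 for Poisson).
    import numpy as np

    rng = np.random.default_rng(2)
    for n in (30, 100, 500):
        pois = rng.poisson(lam=2.0, size=n)
        # negative binomial with mean 2.0 and variance 4.0 (overdispersed)
        negb = rng.negative_binomial(n=2, p=0.5, size=n)
        for name, x in (("Poisson", pois), ("NegBin ", negb)):
            ratio = x.var(ddof=1) / x.mean()
            zeros = (x == 0).mean()
            print(f"n={n:3d} {name}: var/mean={ratio:.2f}, zero share={zeros:.2f}")
    ```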

  7. Mock juror sampling issues in jury simulation research: A meta-analysis.

    PubMed

    Bornstein, Brian H; Golding, Jonathan M; Neuschatz, Jeffrey; Kimbrough, Christopher; Reed, Krystia; Magyarics, Casey; Luecht, Katherine

    2017-02-01

    The advantages and disadvantages of jury simulation research have often been debated in the literature. Critics chiefly argue that jury simulations lack verisimilitude, particularly through their use of student mock jurors, and that this limits the generalizability of the findings. In the present article, the question of sample differences (student vs. nonstudent) in jury research was meta-analyzed for 6 dependent variables: 3 criminal (guilty verdicts, culpability, and sentencing) and 3 civil (liability verdicts, continuous liability, and damages). In total, 53 studies (N = 17,716) were included in the analysis (40 criminal and 13 civil). The results revealed that guilty verdicts, culpability ratings, and damage awards did not vary with sample. Furthermore, the variables that revealed significant or marginally significant differences, sentencing and liability judgments, had small or contradictory effect sizes (e.g., effects on dichotomous and continuous liability judgments were in opposite directions). In addition, with the exception of trial presentation medium, moderator effects were small and inconsistent. These results may help to alleviate concerns regarding the use of student samples in jury simulation research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. SU-E-T-58: A Novel Monte Carlo Photon Transport Simulation Scheme and Its Application in Cone Beam CT Projection Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Tian, Z

    Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems, but low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. Taking cone beam CT (CBCT) projection simulation as an example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept or reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package, gMMC, on GPU with this new scheme implemented. The performance of gMMC was tested in a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered-photon signals in gMMC agreed with those from gMCDRR to within a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by this new path-by-path simulation scheme, in which all the computations are spent on photons contributing to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
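
    The accept/reject step at the heart of the path-by-path scheme can be sketched as an independence Metropolis-Hastings sampler: if the target distribution is defined as the proposal path density reweighted by a physical path weight, the acceptance ratio reduces to the ratio of weights, exactly the structure described above. The weight function and all parameters below are toy stand-ins, not gMMC's transport physics.

    ```python
    # Path-by-path Metropolis-Hastings sketch: each state is an entire photon
    # path (a vector of free-flight steps). With target ∝ weight(path) x q(path)
    # and proposals drawn from q, the MH ratio is weight(new)/weight(old).
    import numpy as np

    rng = np.random.default_rng(3)
    mu_t, gate_path = 1.0, 5.0          # illustrative values

    def propose(n_steps=4):
        """Propose a path as independent exponential free-flight steps."""
        return rng.exponential(1.0 / mu_t, size=n_steps)

    def weight(path):
        """Toy weight: favor paths whose total length matches the detector gate."""
        return np.exp(-0.5 * (path.sum() - gate_path) ** 2)

    path, lengths = propose(), []
    for _ in range(20_000):
        cand = propose()
        if rng.random() < min(1.0, weight(cand) / weight(path)):
            path = cand                  # accept the whole candidate path
        lengths.append(path.sum())

    print("mean sampled path length:", np.mean(lengths[5_000:]))
    ```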

  9. Idealized vs. Realistic Microstructures: An Atomistic Simulation Case Study on γ/γ' Microstructures.

    PubMed

    Prakash, Aruna; Bitzek, Erik

    2017-01-23

    Single-crystal Ni-base superalloys, consisting of a two-phase γ/γ′ microstructure, retain high strengths at elevated temperatures and are key materials for high temperature applications, like, e.g., turbine blades of aircraft engines. The lattice misfit between the γ and γ′ phases results in internal stresses, which significantly influence the deformation and creep behavior of the material. Large-scale atomistic simulations that are often used to enhance our understanding of the deformation mechanisms in such materials must accurately account for such misfit stresses. In this work, we compare the internal stresses in both idealized and experimentally-informed, i.e., more realistic, γ/γ′ microstructures. The idealized samples are generated by assuming, as is frequently done, a periodic arrangement of cube-shaped γ′ particles with planar γ/γ′ interfaces. The experimentally-informed samples are generated from two different sources to produce three different samples: the scanning electron microscopy micrograph-informed quasi-2D atomistic sample and atom probe tomography-informed stoichiometric and non-stoichiometric atomistic samples. Additionally, we compare the stress state of an idealized embedded cube microstructure with finite element simulations incorporating 3D periodic boundary conditions. Subsequently, we study the influence of the resulting stress state on the evolution of dislocation loops in the different samples. The results show that the stresses in the atomistic and finite element simulations are almost identical. Furthermore, quasi-2D boundary conditions lead to a significantly different stress state and, consequently, different evolution of the dislocation loop, when compared to samples with fully 3D boundary conditions.

  10. Application of advanced sampling and analysis methods to predict the structure of adsorbed protein on a material surface

    PubMed Central

    Abramyan, Tigran M.; Hyde-Volpe, David L.; Stuart, Steven J.; Latour, Robert A.

    2017-01-01

    The use of standard molecular dynamics simulation methods to predict the interactions of a protein with a material surface has the inherent limitations of lacking the ability to determine the most likely conformations and orientations of the adsorbed protein on the surface and to determine the level of convergence attained by the simulation. In addition, standard mixing rules are typically applied to combine the nonbonded force field parameters of the solution and solid phases of the system to represent interfacial behavior, without validation. As a means to circumvent these problems, the authors demonstrate the application of an efficient advanced sampling method (TIGER2A) for the simulation of the adsorption of hen egg-white lysozyme on the crystalline (110) plane of a high-density polyethylene surface. Simulations are conducted to generate a Boltzmann-weighted ensemble of sampled states using force field parameters that were validated to represent interfacial behavior for this system. The resulting ensembles of sampled states were then analyzed using an in-house-developed cluster analysis method to predict the most probable orientations and conformations of the protein on the surface based on the amount of sampling performed, from which free energy differences between the adsorbed states could be calculated. In addition, by conducting two independent sets of TIGER2A simulations combined with cluster analyses, the authors demonstrate a method to estimate the degree of convergence achieved for a given amount of sampling. The results from these simulations demonstrate that these methods enable the most probable orientations and conformations of an adsorbed protein to be predicted, and that the use of our validated interfacial force field parameter set provides closer agreement to available experimental results than standard CHARMM force field parameterization of molecular behavior at the interface. PMID:28514864

  11. Manufactured Porous Ambient Surface Simulants

    NASA Technical Reports Server (NTRS)

    Carey, Elizabeth M.; Peters, Gregory H.; Chu, Lauren; Zhou, Yu Meng; Cohen, Brooklin; Panossian, Lara; Green, Jacklyn R.; Moreland, Scott; Backes, Paul

    2016-01-01

    The planetary science decadal survey for 2013-2022 (Vision and Voyages, NRC 2011) has promoted mission concepts for sample acquisition from small solar system bodies. Numerous comet-sampling tools are in development to meet this standard. Manufactured Porous Ambient Surface Simulants (MPASS) materials provide an opportunity to simulate variable features at ambient temperatures and pressures to appropriately test potential sample acquisition systems for comets, asteroids, and planetary surfaces. The original "flavor" of MPASS materials is known as Manufactured Porous Ambient Comet Simulants (MPACS), which was developed in parallel with the Biblade Comet Sampling System (Backes et al., in review). The current suite of MPACS materials was developed through research into the physical and mechanical properties of comets, drawing on past comet mission results and modeling efforts, coordination with the science community at the Jet Propulsion Laboratory, and testing of a wide range of materials and formulations. These simulants were required to represent the physical and mechanical properties of cometary nuclei, based on the current understanding of the science community. Working with cryogenic simulants can be tedious and costly; thus MPACS is a suite of ambient simulants that yields a brittle failure mode similar to that of cryogenic icy materials. Here we describe our suite of comet simulants known as MPACS that will be used to test and validate the Biblade Comet Sampling System (Backes et al., in review).

  12. Multi-component Cu-Strengthened Steel Welding Simulations: Atom Probe Tomography and Synchrotron X-ray Diffraction Analyses

    NASA Astrophysics Data System (ADS)

    Hunter, Allen H.; Farren, Jeffrey D.; DuPont, John N.; Seidman, David N.

    2015-07-01

    An experimental steel with the composition Fe-1.39Cu-2.70Ni-0.58Al-0.48Mn-0.48Si-0.065Nb-0.05C (wt pct) or alternatively Fe-1.43Cu-2.61Ni-1.21Al-0.48Mn-0.98Si-0.039Nb-0.23C (at. pct) has been developed at Northwestern University, which has both high toughness and high strength after quenching and aging treatments. Simulated heat-affected zone (HAZ) samples are utilized to analyze the microstructures typically obtained after gas metal arc welding (GMAW). Dissolution within the HAZ of cementite (Fe3C) and NbC (F.C.C.) is revealed using synchrotron X-ray diffraction, while dissolution of Cu precipitates is measured employing local electrode atom probe tomography. The results are compared to Thermo-Calc equilibrium calculations. Comparison of measured Cu precipitate radii, number density, and volume fraction with similar measurements from a GMAW sample suggests that the cooling rate in the simulations is faster than in the experimental GMAW sample, resulting in significantly less Cu precipitate nucleation and growth during the cooling part of the weld thermal cycle. The few Cu precipitates detected in the simulated samples are primarily located on grain boundaries resulting from heterogeneous nucleation. The dissolution of NbC precipitates and the resultant austenite coarsening in the highest-temperature sample, coupled with a rapid cooling rate, results in the growth of bainite, and an increase in the strength of the matrix in the absence of significant Cu precipitation.

  13. Visual comparison testing of automotive paint simulation

    NASA Astrophysics Data System (ADS)

    Meyer, Gary; Fan, Hua-Tzu; Seubert, Christopher; Evey, Curtis; Meseth, Jan; Schnackenberg, Ryan

    2015-03-01

    An experiment was performed to determine whether typical industrial automotive color paint comparisons made using real physical samples could also be carried out using a digital simulation displayed on a calibrated color television monitor. A special light booth, designed to facilitate evaluation of car paint color as a function of reflectance angle, was employed in both the real and virtual color comparisons. Paint samples were measured using a multi-angle spectrophotometer and were simulated using a commercially available software package. Subjects performed the test more quickly using the computer graphics simulation, and the results indicate that there is only a small difference between the decisions made using the light booth and the computer monitor. This outcome demonstrates the potential of employing simulations to replace some of the time-consuming work with real physical samples that still characterizes material appearance work in industry.

  14. Micro-scale finite element modeling of ultrasound propagation in aluminum trabecular bone-mimicking phantoms: A comparison between numerical simulation and experimental results.

    PubMed

    Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S

    2016-05-01

    The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples serving as trabecular bone-mimicking phantoms were utilized for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated for implementation in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6%, respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s, respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Effect of alignment of easy axes on dynamic magnetization of immobilized magnetic nanoparticles

    NASA Astrophysics Data System (ADS)

    Yoshida, Takashi; Matsugi, Yuki; Tsujimura, Naotaka; Sasayama, Teruyoshi; Enpuku, Keiji; Viereck, Thilo; Schilling, Meinhard; Ludwig, Frank

    2017-04-01

    In some biomedical applications of magnetic nanoparticles (MNPs), the particles are physically immobilized. In this study, we explore the effect of the alignment of the magnetic easy axes on the dynamic magnetization of immobilized MNPs under an AC excitation field. We prepared three immobilized MNP samples: (1) a sample in which easy axes are randomly oriented, (2) a parallel-aligned sample in which easy axes are parallel to the AC field, and (3) an orthogonally aligned sample in which easy axes are perpendicular to the AC field. First, we show that the parallel-aligned sample has the largest hysteresis in the magnetization curve and the largest harmonic magnetization spectra, followed by the randomly oriented and orthogonally aligned samples. For example, a 1.6-fold increase was observed in the area of the hysteresis loop of the parallel-aligned sample compared to that of the randomly oriented sample. To quantitatively discuss the experimental results, we perform a numerical simulation based on a Fokker-Planck equation, in which probability distributions for the directions of the easy axes are taken into account in simulating the prepared MNP samples. We obtained quantitative agreement between experiment and simulation. These results indicate that the dynamic magnetization of immobilized MNPs is significantly affected by the alignment of the easy axes.
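
    The loop-area comparison above reduces to the closed line integral A = ∮ M dH over one AC period. A small numerical version is sketched below; the lagged tanh response is a toy magnetization model, not the Fokker-Planck solution used in the study.

    ```python
    # Numerical hysteresis-loop area: parametrize one AC cycle and accumulate
    # the trapezoidal approximation of the closed integral of M dH.
    import numpy as np

    t = np.linspace(0.0, 2.0 * np.pi, 4001)   # one full cycle, closed curve
    H = np.cos(t)                             # normalized AC excitation field
    M = np.tanh(2.0 * np.cos(t - 0.4))        # toy magnetization lagging the field
    area = abs(np.sum(0.5 * (M[1:] + M[:-1]) * np.diff(H)))
    print(f"normalized hysteresis-loop area: {area:.3f}")
    ```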

  16. Results of computer calculations for a simulated distribution of kidney cells

    NASA Technical Reports Server (NTRS)

    Micale, F. J.

    1985-01-01

    The results of computer calculations for a simulated distribution of kidney cells are given. The calculations were made for different values of the electroosmotic flow, U₀, and of the ratio of sample diameter to channel diameter, R.

  17. Are They Bloody Guilty? Blood Doping with Simulated Samples

    ERIC Educational Resources Information Center

    Stuart, Parker E.; Lees, Kelsey D.; Milanick, Mark A.

    2014-01-01

    In this practice-based lab, students are provided with four Olympic athlete profiles and simulated blood and urine samples to test for illegal substances and blood-doping practices. Throughout the course of the lab, students design and conduct a testing procedure and use their results to determine which athletes won their medals fairly. All of the…

  18. Design and simulation study of the immunization Data Quality Audit (DQA).

    PubMed

    Woodard, Stacy; Archer, Linda; Zell, Elizabeth; Ronveaux, Olivier; Birmingham, Maureen

    2007-08-01

    The goal of the Data Quality Audit (DQA) is to assess whether the Global Alliance for Vaccines and Immunization-funded countries are adequately reporting the number of diphtheria-tetanus-pertussis immunizations given, on which the "shares" are awarded. Given that this sampling design is a modified two-stage cluster sample (modified because a stratified, rather than a simple, random sample of health facilities is obtained from the selected clusters), the formula for the calculation of the standard error of the estimate is unknown. An approximated standard error has been proposed, and the first goal of this simulation is to assess the accuracy of that standard error. Results from the simulations based on hypothetical populations were found not to be representative of the actual DQAs that were conducted. Additional simulations were then conducted on the actual DQA data to better assess the precision of the DQA with both the original and the increased sample sizes.

  19. Multi-scale image segmentation and numerical modeling in carbonate rocks

    NASA Astrophysics Data System (ADS)

    Alves, G. C.; Vanorio, T.

    2016-12-01

    Numerical methods based on computational simulations can be an important tool in estimating the physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield results that conflict with those of the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by a larger grain/micrite ratio, results show that SEM-scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be better suited for numerical simulations.

  20. UAS in the NAS Project: Large-Scale Communication Architecture Simulations with NASA GRC Gen5 Radio Model

    NASA Technical Reports Server (NTRS)

    Kubat, Gregory

    2016-01-01

    This report provides a description and performance characterization of the large-scale, relay-architecture UAS communications simulation capability developed for the NASA GRC UAS in the NAS Project. The system uses a validated model of the GRC Gen5 CNPC flight-test radio. Contained in the report are a description of the simulation system and its model components, recent changes made to the system to improve performance, descriptions and objectives of sample simulations used for test and verification, and a sampling of results and performance observations.

  1. Efficient evaluation of sampling quality of molecular dynamics simulations by clustering of dihedral torsion angles and Sammon mapping.

    PubMed

    Frickenhaus, Stephan; Kannan, Srinivasaraghavan; Zacharias, Martin

    2009-02-01

    A direct conformational clustering and mapping approach for peptide conformations based on backbone dihedral angles has been developed and applied to compare conformational sampling of Met-enkephalin using two molecular dynamics (MD) methods. Efficient clustering in dihedrals has been achieved by evaluating all combinations resulting from independent clustering of each dihedral angle distribution, thus resolving all conformational substates. In contrast, Cartesian clustering was unable to accurately distinguish between all substates. Projection of clusters on dihedral principal component (PCA) subspaces did not result in efficient separation of highly populated clusters. However, representation in a nonlinear metric by Sammon mapping was able to separate well the 48 highest populated clusters in just two dimensions. In addition, this approach also allowed us to visualize the transition frequencies between clusters efficiently. Significantly higher transition frequencies between more distinct conformational substates were found for a recently developed biasing-potential replica exchange MD simulation method, allowing faster sampling of possible substates compared to conventional MD simulations. Although the number of theoretically possible clusters grows exponentially with peptide length, in practice the number of clusters is only limited by the sampling size (typically much smaller), and therefore the method is well suited also for large systems. The approach could be useful to rapidly and accurately evaluate conformational sampling during MD simulations, to compare different sampling strategies, and eventually to detect kinetic bottlenecks in folding pathways.
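
    The per-dihedral clustering idea lends itself to a compact sketch: cluster each angle separately on its (sin, cos) embedding, so periodicity is respected, and label every frame by the tuple of per-angle cluster assignments. The trajectory below is synthetic, and KMeans is a stand-in for whatever one-dimensional clustering is preferred; none of the parameters are taken from the paper.

    ```python
    # Combine independent per-dihedral clusterings into conformational substates.
    from collections import Counter

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    # synthetic trajectory: 2000 frames x 5 dihedrals, two wells per angle
    angles = rng.normal(loc=rng.choice([-60.0, 60.0], size=(2000, 5)), scale=15.0)

    labels = []
    for j in range(angles.shape[1]):
        xy = np.column_stack([np.sin(np.radians(angles[:, j])),
                              np.cos(np.radians(angles[:, j]))])
        labels.append(KMeans(n_clusters=2, n_init=10).fit_predict(xy))

    # a substate is the tuple of per-angle cluster labels for one frame
    states = [tuple(frame) for frame in np.array(labels).T]
    print("most populated substates:", Counter(states).most_common(3))
    ```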

  2. The redshift distribution of cosmological samples: a forward modeling approach

    NASA Astrophysics Data System (ADS)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-08-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic-shear-like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
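
    The forward-modeling loop can be caricatured as ABC rejection sampling: draw population parameters from a prior, simulate a survey with its selection cuts, and keep parameter sets whose summary statistics land within a tolerance of the data. Everything in the sketch below, the gamma-shaped toy redshift model, the summaries, the tolerance, is an illustrative stand-in for the UFig/MCCL machinery, not a reproduction of it.

    ```python
    # ABC rejection sketch for n(z): accepted models jointly define the
    # posterior set of redshift distributions.
    import numpy as np

    rng = np.random.default_rng(5)
    data_summary = np.array([0.8, 0.4])   # assumed observed mean z and spread

    def simulate_survey(z0, sigma, n=5000):
        """Toy forward model: redshifts of a selected galaxy sample."""
        z = rng.gamma(shape=(z0 / sigma) ** 2, scale=sigma**2 / z0, size=n)
        return z[z < 2.5]                 # an illustrative selection cut

    accepted = []
    while len(accepted) < 100:
        z0, sigma = rng.uniform(0.3, 1.5), rng.uniform(0.1, 0.8)
        z = simulate_survey(z0, sigma)
        summary = np.array([z.mean(), z.std()])
        if np.linalg.norm(summary - data_summary) < 0.1:   # ABC tolerance
            accepted.append(z)            # one acceptable model's n(z) sample

    print("posterior-averaged mean redshift:", np.concatenate(accepted).mean())
    ```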

  3. The redshift distribution of cosmological samples: a forward modeling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic-shear-like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  4. Laboratory experiments to investigate sublimation rates of water ice in nighttime lunar regolith

    NASA Astrophysics Data System (ADS)

    Piquette, Marcus; Horányi, Mihály; Stern, S. Alan

    2017-09-01

    The existence of water ice on the lunar surface has been a long-standing topic with implications for both lunar science and in-situ resource utilization (ISRU). Cold traps on the lunar surface may have conditions necessary to retain water ice, but no laboratory experiments have been conducted to verify modeling results. We present an experiment testing the ability to thermally control bulk samples of lunar regolith simulant mixed with water ice under vacuum in an effort to constrain sublimation rates. The simulant used was JSC-1A lunar regolith simulant developed by NASA's Johnson Space Center. Samples with varying ratios of water ice and JSC-1A regolith simulant, totaling about 1 kg, were placed under vacuum and cooled to 100 K to simulate conditions in lunar cold traps. The resulting sublimation of water ice over an approximately five-day period was measured by comparing the mass of the samples before and after the experimental run. Our results indicate that water ice in lunar cold traps is stable on timescales comparable to the lunar night, and it should continue to be studied as a possible resource for future utilization. This experiment also gauges the efficacy of the synthetic lunar atmosphere mission (SLAM) as a low-cost water resupply mission to lunar outposts.

  5. Shock compression of strongly correlated oxides: A liquid-regime equation of state for cerium(IV) oxide

    NASA Astrophysics Data System (ADS)

    Weck, Philippe F.; Cochrane, Kyle R.; Root, Seth; Lane, J. Matthew D.; Shulenburger, Luke; Carpenter, John H.; Sjostrom, Travis; Mattsson, Thomas R.; Vogler, Tracy J.

    2018-03-01

    The shock Hugoniot for full-density and porous CeO2 was investigated in the liquid regime using ab initio molecular dynamics (AIMD) simulations with Erpenbeck's approach based on the Rankine-Hugoniot jump conditions. The phase space was sampled by carrying out NVT simulations for isotherms between 6000 and 100,000 K and densities ranging from ρ = 2.5 to 20 g/cm³. The impact of on-site Coulomb interaction corrections +U on the equation of state (EOS) obtained from AIMD simulations was assessed by direct comparison with results from standard density functional theory simulations. Classical molecular dynamics (CMD) simulations were also performed to model atomic-scale shock compression of larger porous CeO2 models. Results from AIMD and CMD compression simulations compare favorably with Z-machine shock data to 525 GPa and gas-gun data to 109 GPa for porous CeO2 samples. Using results from AIMD simulations, an accurate liquid-regime Mie-Grüneisen EOS was built for CeO2. In addition, a revised multiphase SESAME-type EOS was constrained using AIMD results and experimental data generated in this work. This study demonstrates the necessity of acquiring data in the porous regime to increase the reliability of existing analytical EOS models.
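
    Erpenbeck's approach amounts to finding, along each simulated isotherm, the state that satisfies the Rankine-Hugoniot energy condition E - E0 = (1/2)(P + P0)(V0 - V). The sketch below performs that root-find with a made-up analytic EOS surface standing in for tabulated AIMD results; only the initial CeO2 density is taken from the literature, and the functional forms are purely illustrative.

    ```python
    # Locate Hugoniot states on isotherms via the Rankine-Hugoniot energy
    # condition. Units: GPa, g/cm^3, kJ/g (1 GPa*cm^3/g = 1 kJ/g).
    import numpy as np
    from scipy.optimize import brentq

    rho0, P0, E0 = 7.22, 0.0, 0.0                 # ambient CeO2 reference state

    def P_of(rho, T):                             # toy pressure surface (GPa)
        return 50.0 * ((rho / rho0) ** 3 - 1.0) + 0.01 * T

    def E_of(rho, T):                             # toy energy surface (kJ/g)
        return 20.0 * (rho / rho0 - 1.0) ** 2 + 1e-4 * T

    def residual(T, rho):                         # Rankine-Hugoniot mismatch
        V0, V = 1.0 / rho0, 1.0 / rho
        return (E_of(rho, T) - E0) - 0.5 * (P_of(rho, T) + P0) * (V0 - V)

    for rho in np.linspace(9.0, 14.0, 6):         # compressed densities
        T = brentq(residual, 300.0, 1.0e6, args=(rho,))
        print(f"rho={rho:5.2f} g/cm^3 -> T={T:9.0f} K, P={P_of(rho, T):7.1f} GPa")
    ```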

  6. Fast Monte Carlo simulation of a dispersive sample on the SEQUOIA spectrometer at the SNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granroth, Garrett E; Chen, Meili; Kohl, James Arthur

    2007-01-01

    Simulation of an inelastic scattering experiment, with a sample and a large pixelated detector, usually requires days of computation because of finite processor speeds. We report simulations of an SNS (Spallation Neutron Source) instrument, SEQUOIA, that reduce the time to less than 2 hours by using parallelization and the resources of the TeraGrid. SEQUOIA is a fine resolution (∆E/Ei ~ 1%) chopper spectrometer under construction at the SNS. It utilizes incident energies from Ei = 20 meV to 2 eV and will have ~144,000 detector pixels covering 1.6 sr of solid angle. The full spectrometer, including a 1-D dispersive sample, has been simulated using the Monte Carlo package McStas. This paper summarizes the method of parallelization for and results from these simulations. In addition, limitations of and proposed improvements to current analysis software are discussed.

  7. Predicting Stress vs. Strain Behaviors of Thin-Walled High Pressure Die Cast Magnesium Alloy with Actual Pore Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Kyoo Sil; Barker, Erin; Cheng, Guang

    2016-01-06

    In this paper, a three-dimensional (3D) microstructure-based finite element modeling method (i.e., an extrinsic modeling method) is developed, which can be used to examine the effects of porosity on the ductility/fracture of Mg castings. For this purpose, AM60 Mg tensile samples were produced by high-pressure die-casting in a specially-designed mold. Before the tensile test, the samples were CT-scanned to obtain the pore distributions within the samples. 3D microstructure-based finite element models were then developed based on the obtained actual pore distributions of the gauge area. The input properties for the matrix material were determined by fitting the simulation result to the experimental result of a selected sample, and were then used for the simulations of all the other samples. The results show that the ductility and fracture locations predicted by the simulations agree well with the experimental results. This indicates that the developed 3D extrinsic modeling method may be used to examine the influence of various aspects of pore sizes/distributions as well as intrinsic properties (i.e., matrix properties) on the ductility/fracture of Mg castings.

  8. Temperature-accelerated molecular dynamics gives insights into globular conformations sampled in the free state of the AC catalytic domain.

    PubMed

    Selwa, Edithe; Huynh, Tru; Ciccotti, Giovanni; Maragliano, Luca; Malliavin, Thérèse E

    2014-10-01

    The catalytic domain of the adenyl cyclase (AC) toxin from Bordetella pertussis is activated by interaction with calmodulin (CaM), resulting in cAMP overproduction in the infected cell. In the X-ray crystallographic structure of the complex between AC and the C-terminal lobe of CaM, the toxin displays a markedly elongated shape. As for the structure of the isolated protein, experimental results support the hypothesis that more globular conformations are sampled, but information at atomic resolution is still lacking. Here, we use temperature-accelerated molecular dynamics (TAMD) simulations to generate putative all-atom models of globular conformations sampled by CaM-free AC. As collective variables, we use center-of-mass coordinates of groups of residues selected from the analysis of standard molecular dynamics (MD) simulations. Results show that TAMD allows extended conformational sampling and generates AC conformations that are more globular than in the complexed state. These structures are then refined via energy minimization and further unrestrained MD simulations to optimize inter-domain packing interactions, thus resulting in the identification of a set of hydrogen bonds present in the globular conformations. © 2014 Wiley Periodicals, Inc.

  9. Use of multiple picosecond high-mass molecular dynamics simulations to predict crystallographic B-factors of folded globular proteins.

    PubMed

    Pang, Yuan-Ping

    2016-09-01

    Predicting crystallographic B-factors of a protein from a conventional molecular dynamics simulation is challenging, in part because the B-factors calculated through sampling the atomic positional fluctuations in a picosecond molecular dynamics simulation are unreliable, and the sampling of a longer simulation yields overly large root mean square deviations between calculated and experimental B-factors. This article reports improved B-factor prediction achieved by sampling the atomic positional fluctuations in multiple picosecond molecular dynamics simulations that use atomic masses uniformly increased by 100-fold to increase time resolution. Using the third immunoglobulin-binding domain of protein G, bovine pancreatic trypsin inhibitor, ubiquitin, and lysozyme as model systems, the B-factor root mean square deviations (mean ± standard error) of these proteins were 3.1 ± 0.2 to 9 ± 1 Å² for Cα and 7.3 ± 0.9 to 9.6 ± 0.2 Å² for Cγ, when the sampling was done for each of these proteins over 20 distinct, independent, 50-picosecond high-mass molecular dynamics simulations with AMBER forcefield FF12MC or FF14SB. These results suggest that sampling the atomic positional fluctuations in multiple picosecond high-mass molecular dynamics simulations may be conducive to a priori prediction of crystallographic B-factors of a folded globular protein.
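
    The conversion underlying this kind of prediction is the standard isotropic relation B = (8π²/3)⟨Δr²⟩, applied to the positional fluctuations sampled from each simulation. The sketch below applies it to random toy coordinates rather than an actual trajectory; the array shapes and values are assumptions.

    ```python
    # Per-atom B-factors from mean-square positional fluctuations.
    import numpy as np

    rng = np.random.default_rng(6)
    coords = rng.normal(0.0, 0.3, size=(500, 100, 3))  # frames x atoms x xyz (Å)

    mean_pos = coords.mean(axis=0)                     # average structure
    msf = ((coords - mean_pos) ** 2).sum(axis=2).mean(axis=0)
    b_factors = (8.0 * np.pi**2 / 3.0) * msf           # in Å^2
    print(f"mean predicted B-factor: {b_factors.mean():.1f} Å^2")
    ```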

  10. Monitoring and identification of spatiotemporal landscape changes in multiple remote sensing images by using a stratified conditional Latin hypercube sampling approach and geostatistical simulation.

    PubMed

    Lin, Yu-Pin; Chu, Hone-Jay; Huang, Yu-Long; Tang, Chia-Hsi; Rouhani, Shahrokh

    2011-06-01

    This study develops a stratified conditional Latin hypercube sampling (scLHS) approach for multiple, remotely sensed, normalized difference vegetation index (NDVI) images. The objective is to sample, monitor, and delineate spatiotemporal landscape changes, including spatial heterogeneity and variability, in a given area. The scLHS approach, which is based on the variance quadtree technique (VQT) and the conditional Latin hypercube sampling (cLHS) method, selects samples in order to delineate landscape changes from multiple NDVI images. The images are then mapped for calibration and validation by using sequential Gaussian simulation (SGS) with the scLHS selected samples. Spatial statistical results indicate that in terms of their statistical distribution, spatial distribution, and spatial variation, the statistics and variograms of the scLHS samples resemble those of multiple NDVI images more closely than those of cLHS and VQT samples. Moreover, the accuracy of simulated NDVI images based on SGS with scLHS samples is significantly better than that of simulated NDVI images based on SGS with cLHS samples and VQT samples. Overall, the proposed approach efficiently monitors the spatial characteristics of landscape changes, including the statistics, spatial variability, and heterogeneity of NDVI images. In addition, SGS with the scLHS samples effectively reproduces spatial patterns and landscape changes in multiple NDVI images.
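
    The Latin hypercube principle behind cLHS and scLHS, one sample per stratum of each variable so that all marginals are covered evenly, can be sketched with SciPy's quasi-Monte Carlo module; the two "covariates" and their bounds below are placeholders for NDVI values from two image dates, not data from the study.

    ```python
    # Latin hypercube draw over two covariates, then rescaled to assumed
    # NDVI ranges. This shows the stratification idea only, not scLHS itself.
    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=2, seed=10)
    unit = sampler.random(n=20)                   # 20 points in [0, 1)^2
    lo, hi = np.array([0.1, 0.05]), np.array([0.9, 0.85])
    samples = qmc.scale(unit, lo, hi)             # map to covariate bounds
    print(samples[:5])
    ```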

  11. Implementation and Testing of Turbulence Models for the F18-HARV Simulation

    NASA Technical Reports Server (NTRS)

    Yeager, Jessie C.

    1998-01-01

    This report presents three methods of implementing the Dryden power spectral density model for atmospheric turbulence. Included are the equations which define the three methods and computer source code written in Advanced Continuous Simulation Language to implement the equations. Time-history plots and sample statistics of simulated turbulence results from executing the code in a test program are also presented. Power spectral densities were computed for sample sequences of turbulence and are plotted for comparison with the Dryden spectra. The three model implementations were installed in a nonlinear six-degree-of-freedom simulation of the High Alpha Research Vehicle airplane. Aircraft simulation responses to turbulence generated with the three implementations are presented as plots.
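
    One common discrete realization of the Dryden longitudinal gust component passes white noise through the first-order shaping filter H(s) = σ·sqrt(2L/(πV)) / (1 + (L/V)s). The sketch below uses a simple forward-Euler step and the usual convention of driving noise with power spectral density π; the airspeed, scale length, and intensity values are illustrative and not taken from the report.

    ```python
    # First-order Dryden shaping filter driven by discrete white noise.
    import numpy as np

    rng = np.random.default_rng(7)
    V, L, sigma = 100.0, 533.0, 2.0  # airspeed (m/s), scale (m), intensity (m/s)
    dt, n = 0.01, 200_000
    tau = L / V
    gain = sigma * np.sqrt(2.0 * L / (np.pi * V))

    u = np.zeros(n)
    for k in range(1, n):
        w = rng.normal(0.0, np.sqrt(np.pi / dt))  # white noise with PSD pi
        u[k] = u[k - 1] + dt * (-u[k - 1] + gain * w) / tau

    print(f"simulated gust std: {u[10_000:].std():.2f} m/s (target {sigma})")
    ```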

  12. Idealized vs. Realistic Microstructures: An Atomistic Simulation Case Study on γ/γ′ Microstructures

    PubMed Central

    Prakash, Aruna; Bitzek, Erik

    2017-01-01

    Single-crystal Ni-base superalloys, consisting of a two-phase γ/γ′ microstructure, retain high strengths at elevated temperatures and are key materials for high temperature applications, like, e.g., turbine blades of aircraft engines. The lattice misfit between the γ and γ′ phases results in internal stresses, which significantly influence the deformation and creep behavior of the material. Large-scale atomistic simulations that are often used to enhance our understanding of the deformation mechanisms in such materials must accurately account for such misfit stresses. In this work, we compare the internal stresses in both idealized and experimentally-informed, i.e., more realistic, γ/γ′ microstructures. The idealized samples are generated by assuming, as is frequently done, a periodic arrangement of cube-shaped γ′ particles with planar γ/γ′ interfaces. The experimentally-informed samples are generated from two different sources to produce three different samples—the scanning electron microscopy micrograph-informed quasi-2D atomistic sample and atom probe tomography-informed stoichiometric and non-stoichiometric atomistic samples. Additionally, we compare the stress state of an idealized embedded cube microstructure with finite element simulations incorporating 3D periodic boundary conditions. Subsequently, we study the influence of the resulting stress state on the evolution of dislocation loops in the different samples. The results show that the stresses in the atomistic and finite element simulations are almost identical. Furthermore, quasi-2D boundary conditions lead to a significantly different stress state and, consequently, different evolution of the dislocation loop, when compared to samples with fully 3D boundary conditions. PMID:28772453

  13. Estimating rare events in biochemical systems using conditional sampling.

    PubMed

    Sundar, V S

    2017-01-28

    The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining such rare-event probabilities using brute-force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
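
    The product-of-conditionals idea is easy to demonstrate on a toy problem. Below, a one-dimensional standard-normal "demand" stands in for the mapped biochemical state vector, and the rare event is exceeding 4 standard deviations; the level probability p0 and the random-walk proposal are conventional choices, not the paper's settings.

    ```python
    # Bare-bones subset simulation: P(F) = prod_i P(F_i | F_{i-1}), with each
    # conditional level populated by modified Metropolis random walks.
    import numpy as np

    rng = np.random.default_rng(8)
    p0, n = 0.1, 1000            # conditional level probability, samples per level
    x_crit = 4.0                 # rare-event threshold; P(Z > 4) ~ 3.17e-5

    samples = rng.normal(size=n)
    prob = 1.0
    while True:
        samples.sort()
        level = samples[int((1.0 - p0) * n)]      # intermediate threshold
        if level >= x_crit:                       # final level reached
            prob *= float((samples >= x_crit).mean())
            break
        prob *= p0
        seeds = samples[int((1.0 - p0) * n):]     # seeds inside the new level
        chains = []
        for s in seeds:                           # constrained Metropolis walk
            x = s
            for _ in range(int(1.0 / p0)):
                cand = x + rng.normal()
                # accept by normal density ratio, restricted to x >= level
                if cand >= level and rng.random() < np.exp(0.5 * (x*x - cand*cand)):
                    x = cand
                chains.append(x)
        samples = np.array(chains)

    print(f"subset-simulation estimate: {prob:.2e} (exact 3.17e-5)")
    ```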

  14. Lunar Regolith Characterization for Simulant Design and Evaluation using Figure of Merit Algorithms

    NASA Technical Reports Server (NTRS)

    Schrader, Christian M.; Rickman, Douglas L.; McLemore, Carole A.; Fikes, John C.; Stoeser, Douglas B.; Wentworth, Susan J.; McKay, David S.

    2009-01-01

    NASA's Marshall Space Flight Center (MSFC), in conjunction with the United States Geological Survey (USGS) and aided by personnel from the Astromaterials Research and Exploration Science group at Johnson Space Center (ARES-JSC), is implementing a new data acquisition strategy to support the development and evaluation of lunar regolith simulants. The first analyses of lunar regolith samples by the simulant group were carried out in early 2008 on samples from Apollo 16 core 64001/64002. The results of these analyses are combined with data compiled from the literature to generate a reference composition and particle size distribution (PSD) for lunar highlands regolith. In this paper we present the specifics of particle type composition and PSD for this reference composition. Furthermore, we use Figure-of-Merit (FoM) routines to measure the characteristics of a number of lunar regolith simulants against this reference composition. The lunar highlands regolith reference composition and the FoM results are presented to guide simulant producers and simulant users in their research and development processes.

  15. Identifying the potential of changes to blood sample logistics using simulation.

    PubMed

    Jørgensen, Pelle; Jacobsen, Peter; Poulsen, Jørgen Hjelm

    2013-01-01

    Using simulation as an approach to display and improve internal logistics at hospitals has great potential. This study shows how a simulation model displaying the morning blood-taking round at a Danish public hospital can be developed and utilized with the aim of improving the logistics. The focus of the simulation was to evaluate changes made to the transportation of blood samples between wards and the laboratory. The average (AWT) and maximum waiting times (MWT) from when a blood sample was drawn at the ward until it was received at the laboratory, together with the distribution of blood sample arrivals at the laboratory, were used as the evaluation criteria. Four different scenarios were tested and compared with the current approach: (1) using AGVs (mobile robots), (2) using a pneumatic tube system, (3) using porters that are called upon, or (4) using porters that come to the wards every 45 minutes. Furthermore, each scenario was tested to determine the amount of resources that would give the optimal result. The simulations showed substantial potential for improvement by implementing a new technology or means of transporting the blood samples. The pneumatic tube system showed the biggest potential, lowering the AWT and MWT by approx. 36% and 18%, respectively. Additionally, all of the scenarios had a more even distribution of arrivals except for porters coming to the wards every 45 min. As a consequence of the results obtained in the study, the hospital decided to implement a pneumatic tube system.
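
    A toy version of the transport comparison conveys how AWT and MWT respond to the pickup policy; the draw times, round interval, and transit times below are invented placeholders, not the hospital's data or the study's simulation model.

    ```python
    # Compare waiting times under a 45-min porter round vs. a pneumatic tube.
    import numpy as np

    rng = np.random.default_rng(9)
    draws = np.sort(rng.uniform(0.0, 120.0, size=300))  # draw times (min)

    def waits_porter(draw, interval=45.0, transit=10.0):
        pickup = (np.floor(draw / interval) + 1.0) * interval  # next round
        return pickup + transit - draw

    def waits_tube(draw, transit=2.0):
        return np.full_like(draw, transit)                     # near-immediate

    for name, w in (("porter every 45 min", waits_porter(draws)),
                    ("pneumatic tube     ", waits_tube(draws))):
        print(f"{name}: AWT={w.mean():5.1f} min, MWT={w.max():5.1f} min")
    ```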

  16. Effect of Particle Shape on Mechanical Behaviors of Rocks: A Numerical Study Using Clumped Particle Model

    PubMed Central

    Rong, Guan; Liu, Guang; Zhou, Chuang-bing

    2013-01-01

    Since rocks are aggregates of mineral particles, the effect of mineral microstructure on the macroscopic mechanical behavior of rocks is not negligible. Rock samples of four different particle shapes are established in this study based on a clumped particle model, and a sphericity index is used to quantify particle shape. Model parameters for simulation in PFC are obtained from a triaxial compression test of quartz sandstone, and simulations of the triaxial compression test are then conducted on four rock samples with different particle shapes. The results show that stress thresholds of the rock samples, such as crack initiation stress, crack damage stress, and peak stress, decrease with increasing sphericity index. The increase of sphericity leads to a drop in elastic modulus and a rise in Poisson's ratio, while decreasing sphericity usually results in increased cohesion and internal friction angle. Based on the volume change of the rock samples during the simulated triaxial compression tests, the variation of dilation angle with plastic strain is also studied. PMID:23997677

  17. Effect of particle shape on mechanical behaviors of rocks: a numerical study using clumped particle model.

    PubMed

    Rong, Guan; Liu, Guang; Hou, Di; Zhou, Chuang-Bing

    2013-01-01

    Since rocks are aggregates of mineral particles, the effect of mineral microstructure on the macroscopic mechanical behavior of rocks is not negligible. Rock samples of four different particle shapes are established in this study based on a clumped particle model, and a sphericity index is used to quantify particle shape. Model parameters for simulation in PFC are obtained from a triaxial compression test of quartz sandstone, and simulations of the triaxial compression test are then conducted on four rock samples with different particle shapes. The results show that stress thresholds of the rock samples, such as crack initiation stress, crack damage stress, and peak stress, decrease with increasing sphericity index. The increase of sphericity leads to a drop in elastic modulus and a rise in Poisson's ratio, while decreasing sphericity usually results in increased cohesion and internal friction angle. Based on the volume change of the rock samples during the simulated triaxial compression tests, the variation of dilation angle with plastic strain is also studied.

  18. Comparison of measured and simulated concentrations of 133Xe in the shallow subsurface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Christine M.; Biegalski, Steven R.; Lowrey, J. D.

    2018-09-01

    Radioactive isotopes of the noble gases xenon and argon are considered primary indicators of an underground nuclear explosion. However, high atmospheric concentrations from other anthropogenic sources may lead to an elevation in the underground levels of these gases, particularly in times of increasing atmospheric pressure. In 2014, a week-long sampling campaign near Canadian Nuclear Laboratories in the Ottawa River Valley resulted in first-of-their-kind measurements of atmospheric 133Xe that had been pressed into the subsurface. In an effort to better understand this imprinting process, a second follow-up sampling campaign was conducted in the same location in 2016. The results of the second sampling campaign, where samples were collected at depths of 1 and 2 meters over a 14-day period and measured for their 133Xe concentration, are presented here. Gas transport and sample concentrations were predicted using the Subsurface Transport over Multiple Phases (STOMP) simulator. These results are examined and compared to the corresponding experimental results.

  19. Comparison of measured and simulated concentrations of 133 Xe in the shallow subsurface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, C.; Biegalski, S. R.; Lowrey, J. D.

    Radioactive isotopes of the noble gases xenon and argon are considered primary indicators of an underground nuclear explosion. However, high atmospheric concentrations from other anthropogenic sources may lead to an elevation in the underground levels of these gases, particularly in times of increasing atmospheric pressure. In 2014, a week-long sampling campaign near Canadian Nuclear Laboratories in the Ottawa River Valley resulted in first-of-their-kind measurements of atmospheric 133Xe that had been pressed into the subsurface. In an effort to better understand this imprinting process, a second follow-up sampling campaign was conducted in the same location in 2016. The results of the second sampling campaign, where samples were collected at depths of 1 and 2 meters over a 14-day period and measured for their 133Xe concentration, are presented here. Gas transport and sample concentrations were predicted using the Subsurface Transport over Multiple Phases (STOMP) simulator. These results are examined and compared to the corresponding experimental results.

  20. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis. PMID:20011037
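
    A classification-error simulation of this kind is straightforward to sketch. The snippet below is a minimal illustration rather than the authors' code: cluster-level prevalences are drawn from a beta distribution whose intracluster correlation is set explicitly (a beta-binomial model), and the decision threshold `d` and design parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def accept_probability(prevalence, icc, n_clusters=67, m=3, d=10, n_sims=10000):
    """Probability that an LQAS cluster design classifies a population as
    acceptable (low prevalence): accept when total cases <= d.

    Cluster prevalences follow a beta-binomial model whose intracluster
    correlation equals `icc`: Beta(a, b) with a + b = 1/icc - 1.
    """
    s = 1.0 / icc - 1.0
    a, b = prevalence * s, (1.0 - prevalence) * s
    p_clusters = rng.beta(a, b, size=(n_sims, n_clusters))
    cases = rng.binomial(m, p_clusters).sum(axis=1)
    return float(np.mean(cases <= d))

# misclassification rate when true GAM prevalence is high (15%), 67x3 design
print(accept_probability(prevalence=0.15, icc=0.05))
```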

  1. Laboratory simulation of the effects of overburden stress on the specific storage of shallow artesian aquifers

    USGS Publications Warehouse

    Sepúlveda, Nicasio; Zack, A.L.; Krishna, J.H.; Quinones-Aponte, Vicente; Gomez-Gomez, Fernando; Morris, G.L.

    1990-01-01

    A laboratory experiment to measure the specific storage of an aquifer material was conducted. A known dead load, simulating an overburden load, was applied to a sample of completely saturated aquifer material contained inside a cylinder. After the dead load was applied, water was withdrawn from the sample, causing the hydrostatic pressure to decrease and the effective stress to increase. The resulting compression of the sample and the amount of water withdrawn were measured after equilibrium was reached. The procedure was repeated by increasing the dead load and the hydrostatic pressure followed by withdrawing water to determine new values of effective stress and compaction. The simulated dead loads are typical of those experienced by shallow artesian aquifers. The void ratio and the effective stress of the aquifer sample, as simulated by different dead loads, determine the pore volume compressibility which, in turn, determines the values of specific storage. An analytical algorithm was used to independently determine the stress-dependent profile of specific storage. These values are found to be in close agreement with laboratory results. Implications for shallow artesian aquifers, with relatively small overburden stress, are also addressed.
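
    The quantities involved combine in the standard specific-storage relation S_s = ρ_w g (α + n β). A minimal sketch of that calculation from a void ratio versus effective stress curve follows; the consolidation data are hypothetical and the solid grains are assumed incompressible.

```python
import numpy as np

RHO_W = 1000.0      # water density, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2
BETA_W = 4.4e-10    # compressibility of water, 1/Pa

def specific_storage(e, sigma_eff):
    """Specific storage S_s (1/m) from a void-ratio vs. effective-stress curve.

    e         : void ratios measured at successive load steps
    sigma_eff : corresponding effective stresses (Pa)
    Returned values apply at the midpoints of the load steps.
    """
    e = np.asarray(e, dtype=float)
    sigma_eff = np.asarray(sigma_eff, dtype=float)
    e_mid = 0.5 * (e[1:] + e[:-1])
    # matrix (pore-volume) compressibility: alpha = -(de/dsigma') / (1 + e)
    alpha = -np.diff(e) / np.diff(sigma_eff) / (1.0 + e_mid)
    n = e_mid / (1.0 + e_mid)                  # porosity from void ratio
    return RHO_W * G * (alpha + n * BETA_W)

# illustrative (hypothetical) consolidation data
print(specific_storage([0.70, 0.68, 0.66], [50e3, 100e3, 200e3]))
```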

  2. MCViNE- An object oriented Monte Carlo neutron ray tracing simulation package

    DOE PAGES

    Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; ...

    2015-11-28

    MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software package for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example, we used object-oriented programming concepts to represent neutron scatterers and detector systems, and recursive algorithms to implement multiple scattering. Combining these features in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages, which facilitates porting instrument models from those codes. Furthermore, it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.

  3. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE PAGES

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capability of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample that accurately and efficiently represents the posterior with a limited number of forward simulations.
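
    The core of the method, drawing from a Gaussian-mixture proposal and weighting samples by posterior-to-proposal density ratios, can be sketched in a few lines. This toy version uses a fixed two-component proposal on a bimodal target; the adaptive proposal construction and the PC surrogate of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(theta):
    """Toy bimodal target: equal-weight mixture of N(2, 1) and N(-2, 1),
    standing in for a multimodal Bayesian posterior."""
    return np.logaddexp(-0.5 * (theta - 2.0) ** 2, -0.5 * (theta + 2.0) ** 2)

# Gaussian-mixture proposal (fixed here; adapted iteratively in the paper)
mus, sigmas, ws = np.array([-2.0, 2.0]), np.array([1.2, 1.2]), np.array([0.5, 0.5])

def sample_gm(n):
    comp = rng.choice(len(ws), size=n, p=ws)
    return rng.normal(mus[comp], sigmas[comp])

def log_gm_pdf(x):
    z = -0.5 * ((x[:, None] - mus) / sigmas) ** 2
    z += np.log(ws) - np.log(sigmas * np.sqrt(2.0 * np.pi))
    return np.logaddexp.reduce(z, axis=1)

n = 20000
theta = sample_gm(n)
logw = log_posterior(theta) - log_gm_pdf(theta)   # importance log-weights
w = np.exp(logw - logw.max())
w /= w.sum()                                      # self-normalized weights
print("posterior mean of theta^2:", np.sum(w * theta ** 2))  # true value ~5
```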

  4. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Lin, Guang, E-mail: guanglin@purdue.edu

    2015-08-01

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capability of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample that accurately and efficiently represents the posterior with a limited number of forward simulations.

  5. Using Multistate Reweighting to Rapidly and Efficiently Explore Molecular Simulation Parameters Space for Nonbonded Interactions.

    PubMed

    Paliwal, Himanshu; Shirts, Michael R

    2013-11-12

    Multistate reweighting methods such as the multistate Bennett acceptance ratio (MBAR) can predict free energies and expectation values of thermodynamic observables at poorly sampled or unsampled thermodynamic states using simulations performed at only a few sampled states combined with single-point energy reevaluations of these samples at the unsampled states. In this study, we demonstrate the power of this general reweighting formalism by exploring the effect of simulation parameters controlling Coulomb and Lennard-Jones cutoffs on free energy calculations and other observables. Using multistate reweighting, we can quickly identify, with very high sensitivity, the computationally least expensive nonbonded parameters required to obtain a specified accuracy in observables compared to the answer obtained using an expensive "gold standard" set of parameters. We specifically examine free energy estimates of three molecular transformations in a benchmark molecular set as well as the enthalpy of vaporization of TIP3P. The results demonstrate the power of this multistate reweighting approach for measuring changes in free energy differences or other estimators with respect to simulation or model parameters with very high precision and/or very low computational effort. The results also help to identify which simulation parameters affect free energy calculations and provide guidance to determine which simulation parameters are both appropriate and computationally efficient in general.
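
    At its core, MBAR solves a set of self-consistent equations for the dimensionless free energies of the sampled states, using only the reduced potentials of every pooled sample evaluated at every state. A compact, self-contained numpy sketch (not the pymbar implementation) follows.

```python
import numpy as np

def logsumexp(a, axis):
    """Numerically stable log(sum(exp(a))) along an axis."""
    m = np.max(a, axis=axis, keepdims=True)
    return np.squeeze(m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True)),
                      axis=axis)

def mbar_free_energies(u_kn, N_k, tol=1e-10, max_iter=100000):
    """Self-consistent MBAR estimate of dimensionless free energies f_k.

    u_kn : (K, N) reduced potentials; u_kn[k, n] is pooled sample n
           evaluated at state k
    N_k  : (K,) number of samples drawn from each state
    """
    K, N = u_kn.shape
    f = np.zeros(K)
    logN = np.log(np.asarray(N_k, dtype=float))
    for _ in range(max_iter):
        # log of the mixture denominator: sum_k N_k exp(f_k - u_k(x_n))
        log_denom = logsumexp(logN[:, None] + f[:, None] - u_kn, axis=0)
        f_new = -logsumexp(-u_kn - log_denom[None, :], axis=1)
        f_new -= f_new[0]                     # gauge choice: f_0 = 0
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    return f
```

    Once the f_k converge, the same sample weights give expectation values of any observable at sampled or unsampled parameter settings, which is what makes scanning cutoff choices cheap.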

  6. The Abundance of Large Arcs From CLASH

    NASA Astrophysics Data System (ADS)

    Xu, Bingxiao; Postman, Marc; Meneghetti, Massimo; Coe, Dan A.; Clash Team

    2015-01-01

    We have developed an automated arc-finding algorithm to perform a rigorous comparison of the observed and simulated abundance of large lensed background galaxies (a.k.a. arcs). We use images from the CLASH program to derive our observed arc abundance. Simulated CLASH images are created by performing ray tracing through mock clusters generated by MOKA, a tool calibrated against N-body simulations, and by the N-body/hydrodynamic simulations MUSIC, over the same mass and redshift range as the CLASH X-ray selected sample. We derive a lensing efficiency of 15 ± 3 arcs per cluster for the X-ray selected CLASH sample and 4 ± 2 arcs per cluster for the simulated sample. The marginally significant difference (3.0 σ) between the results for the observations and the simulations can be explained by the systematically smaller area with magnification larger than 3 (by a factor of ˜4) in both the MOKA and MUSIC mass models relative to those derived from the CLASH data. Accounting for this difference brings the observed and simulated arc statistics into full agreement. We find that the source redshift distribution does not have a big impact on the arc abundance, but the arc abundance is very sensitive to the concentration of the dark matter halos. Our results suggest that the solution to the "arc statistics problem" lies primarily in matching the cluster dark matter distribution.

  7. Analysis of Lunar Highland Regolith Samples from Apollo 16 Drive Core 64001/2 and Lunar Regolith Simulants - An Expanding Comparative Database

    NASA Technical Reports Server (NTRS)

    Schrader, Christian M.; Rickman, Doug; Stoeser, Doug; Wentworth, Susan J.; Botha, Pieter WSK; Butcher, Alan R.; McKay, David; Horsch, Hanna; Benedictus, Aukje; Gottlieb, Paul

    2008-01-01

    We present modal data from QEMSCAN(registered trademark) beam analysis of Apollo 16 samples from drive core 64001/2. The analyzed lunar samples are thin sections 64002,6019 (5.0-8.0 cm depth) and 64001,6031 (50.0-53.1 cm depth) and sieved grain mounts 64002,262 and 64001,374 from depths corresponding to the thin sections, respectively. We also analyzed lunar highland regolith simulants NU-LHT-1M, -2M, and OB-1, low-Ti mare simulants JSC-1, -1A, -1AF, and FJS-1, and high-Ti mare simulant MLS-1. The preliminary results comprise the beginning of an internally consistent database of lunar regolith and regolith simulant mineral and glass information. This database, combined with previous and concurrent studies on phase chemistry, bulk chemistry, and with data on particle shape and size distribution, will serve to guide lunar scientists and engineers in choosing simulants for their applications. These results are modal% by phase rather than by particle type, so they are not directly comparable to most previously published lunar data that report lithic fragments, monomineralic particles, agglutinates, etc. Of the highland simulants, OB-1 has an integrated modal composition closer than NU-LHT-1M to that of the 64001/2 samples. However, this and other studies show that NU-LHT-1M and -2M have minor and trace mineral (e.g., Fe-Ti oxides and phosphates) populations and mineral and glass chemistry closer to these lunar samples. The finest fractions (0-20 microns) in the sieved lunar samples are enriched in glass relative to the integrated compositions by approx. 30% for 64002,262 and approx. 15% for 64001,374. Plagioclase, pyroxene, and olivine are depleted in these finest fractions. This could be important to lunar dust mitigation efforts and astronaut health; none of the analyzed simulants show this trend. Contrary to previously reported modal analyses of monomineralic grains in lunar regolith, these area% modal analyses do not show a systematic increase in plagioclase/pyroxene as size fraction decreases.

  8. Simulation of fatigue fracture of TiNi shape memory alloy samples at cyclic loading in pseudoelastic state

    NASA Astrophysics Data System (ADS)

    Belyaev, Fedor S.; Volkov, Aleksandr E.; Evard, Margarita E.; Khvorov, Aleksandr A.

    2018-05-01

    Microstructural simulation of the mechanical behavior of shape memory alloy samples under cyclic loading in the pseudoelastic state has been carried out. The evolution of oriented and scattered deformation defects leading to damage accumulation and resulting in fatigue fracture has been taken into account. Simulations were performed for a loading regime imitating that of endovascular stents: preliminary straining, unloading, deformation up to some mean level of strain, and subsequent mechanical cycling at a specified strain amplitude. The dependence of the fatigue life on the loading parameters (pre-strain, mean and amplitude values of strain) has been obtained. The results show good agreement with available experimental data.

  9. Kissing loop interaction in adenine riboswitch: insights from umbrella sampling simulations.

    PubMed

    Di Palma, Francesco; Bottaro, Sandro; Bussi, Giovanni

    2015-01-01

    Riboswitches are cis-acting regulatory RNA elements prevalently located in the leader sequences of bacterial mRNA. An adenine-sensing riboswitch cis-regulates the adenosine deaminase gene (add) in Vibrio vulnificus. The structural mechanism regulating its conformational changes upon ligand binding mostly remains to be elucidated. In this open framework, it has been suggested that the ligand stabilizes the interaction of the distal "kissing loop" complex. Accurate all-atom molecular dynamics with explicit solvent, combined with enhanced sampling techniques and advanced analysis methods, can provide a more detailed perspective on the formation of these tertiary contacts. In this work, we used umbrella sampling simulations to study the thermodynamics of the kissing loop complex in the presence and in the absence of the cognate ligand. We enforced the breaking/formation of the loop-loop interaction by restraining the distance between the two loops. We also assessed the convergence of the results by using two alternative initialization protocols. A structural analysis was performed using a novel approach to analyze base contacts. Contacts between the two loops were progressively lost when larger inter-loop distances were enforced. Inter-loop Watson-Crick contacts survived at larger separation when compared with non-canonical pairing and stacking interactions. Intra-loop stacking contacts remained formed upon loop undocking. Our simulations qualitatively indicated that the ligand could stabilize the kissing loop complex. We also compared with previously published simulation studies. The kissing complex stabilization given by the ligand was compatible with available experimental data. However, the dependence of its value on the initialization protocol of the umbrella sampling simulations poses some questions on the quantitative interpretation of the results and calls for better-converged enhanced sampling simulations.

  10. Does an uneven sample size distribution across settings matter in cross-classified multilevel modeling? Results of a simulation study.

    PubMed

    Milliren, Carly E; Evans, Clare R; Richmond, Tracy K; Dunn, Erin C

    2018-06-06

    Recent advances in multilevel modeling allow for modeling non-hierarchical levels (e.g., youth in non-nested schools and neighborhoods) using cross-classified multilevel models (CCMM). Current practice is to cluster samples from one context (e.g., schools) and utilize the observations however they are distributed from the second context (e.g., neighborhoods). However, it is unknown whether an uneven distribution of sample size across these contexts leads to incorrect estimates of random effects in CCMMs. Using the school and neighborhood data structure in Add Health, we examined the effect of neighborhood sample size imbalance on the estimation of variance parameters in models predicting BMI. We differentially assigned students from a given school to neighborhoods within that school's catchment area using three scenarios of (im)balance. 1000 random datasets were simulated for each of five combinations of school- and neighborhood-level variance and imbalance scenarios, for a total of 15,000 simulated data sets. For each simulation, we calculated 95% CIs for the variance parameters to determine whether the true simulated variance fell within the interval. Across all simulations, the "true" school and neighborhood variance parameters were estimated 93-96% of the time. Only 5% of models failed to capture neighborhood variance; 6% failed to capture school variance. These results suggest that there is no systematic bias in the ability of CCMM to capture the true variance parameters regardless of the distribution of students across neighborhoods. Ongoing efforts to use CCMM are warranted and can proceed without concern for the sample imbalance across contexts.

  11. Impact of Sampling Schemes on Demographic Inference: An Empirical Study in Two Species with Different Mating Systems and Demographic Histories

    PubMed Central

    St. Onge, K. R.; Palmé, A. E.; Wright, S. I.; Lascoux, M.

    2012-01-01

    Most species have at least some level of genetic structure. Recent simulation studies have shown that it is important to consider population structure when sampling individuals to infer past population history. The relevance of the results of these computer simulations for empirical studies, however, remains unclear. In the present study, we use DNA sequence datasets collected from two closely related species with very different histories, the selfing species Capsella rubella and its outcrossing relative C. grandiflora, to assess the impact of different sampling strategies on summary statistics and the inference of historical demography. Sampling strategy did not strongly influence the mean values of Tajima's D in either species, but it had some impact on the variance. The general conclusions about demographic history were comparable across sampling schemes even when resampled data were analyzed with approximate Bayesian computation (ABC). We used simulations to explore the effects of sampling scheme under different demographic models. We conclude that when sequences from modest numbers of loci (<60) are analyzed, the sampling strategy is generally of limited importance. The same is true under intermediate or high levels of gene flow (4Nm > 2–10) in models in which global expansion is combined with either local expansion or hierarchical population structure. Although we observe a less severe effect of sampling than predicted under some earlier simulation models, our results should not be seen as an encouragement to neglect this issue. In general, good coverage of the natural range, both within and between populations, will be needed to obtain a reliable reconstruction of a species' demographic history, and in fact, the effect of sampling scheme on polymorphism patterns may itself provide important information about demographic history. PMID:22870403

  12. PROCESS SIMULATION OF COLD PRESSING OF ARMSTRONG CP-Ti POWDERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabau, Adrian S; Gorti, Sarma B; Peter, William H

    A computational methodology is presented for the process simulation of cold pressing of Armstrong CP-Ti powders. The computational model was implemented in the commercial finite element program ABAQUS™. Since the powder deformation and consolidation is governed by specific pressure-dependent constitutive equations, several solution algorithms were developed for the ABAQUS user material subroutine, UMAT. The solution algorithms were developed for computing the plastic strain increments based on an implicit integration of the nonlinear yield function, flow rule, and hardening equations that describe the evolution of the state variables. Since ABAQUS requires the use of a full Newton-Raphson algorithm for the stress-strain equations, an algorithm for obtaining the tangent/linearization moduli, which is consistent with the return-mapping algorithm, also was developed. Numerical simulation results are presented for the cold compaction of the Ti powders. Several simulations were conducted for cylindrical samples with different aspect ratios. The numerical simulation results showed that for the disk samples, the minimum von Mises stress was approximately half of its maximum value. The hydrostatic stress distribution exhibits a variation smaller than that of the von Mises stress. It was found that for the disk and cylinder samples the minimum hydrostatic stresses were approximately 23 and 50% less than the maximum value, respectively. It was also found that the minimum density was noticeably affected by the sample height.
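
    The return-mapping logic the abstract describes is easiest to see in the simplest setting. The sketch below implements classical radial return for von Mises plasticity with linear isotropic hardening, a simplified stand-in for the pressure-dependent powder model of the paper; the material constants are illustrative only.

```python
import numpy as np

E, NU, SY0, H = 110e9, 0.34, 300e6, 1e9   # illustrative Ti-like constants (Pa)
G = E / (2.0 * (1.0 + NU))                # shear modulus

def radial_return(eps_e_trial_dev, eps_p_bar):
    """One step of the radial-return mapping for von Mises plasticity
    with linear isotropic hardening.

    eps_e_trial_dev : deviatoric part of the trial elastic strain (3x3)
    eps_p_bar       : accumulated equivalent plastic strain
    Returns (deviatoric stress, updated eps_p_bar, plastic multiplier).
    """
    s_trial = 2.0 * G * eps_e_trial_dev                # trial deviatoric stress
    q_trial = np.sqrt(1.5 * np.sum(s_trial * s_trial)) # von Mises equivalent
    f_trial = q_trial - (SY0 + H * eps_p_bar)          # yield function
    if f_trial <= 0.0:                                 # elastic step
        return s_trial, eps_p_bar, 0.0
    dgamma = f_trial / (3.0 * G + H)                   # closed form (linear hardening)
    s = s_trial * (1.0 - 3.0 * G * dgamma / q_trial)   # return to the yield surface
    return s, eps_p_bar + dgamma, dgamma
```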

  13. Simulating and explaining passive air sampling rates for semi-volatile compounds on polyurethane foam passive samplers

    PubMed Central

    Petrich, Nicholas T.; Spak, Scott N.; Carmichael, Gregory R.; Hu, Dingfei; Martinez, Andres; Hornbuckle, Keri C.

    2013-01-01

    Passive air samplers (PAS) including polyurethane foam (PUF) are widely deployed as an inexpensive and practical way to sample semi-volatile pollutants. However, concentration estimates from PAS rely on constant empirical mass transfer rates, which add unquantified uncertainties to concentrations. Here we present a method for modeling hourly sampling rates for semi-volatile compounds from hourly meteorology using first-principles chemistry, physics, and fluid dynamics, calibrated from depuration experiments. This approach quantifies and explains observed effects of meteorology on variability in compound-specific sampling rates and analyte concentrations; simulates nonlinear PUF uptake; and recovers synthetic hourly concentrations at a reference temperature. Sampling rates are evaluated for polychlorinated biphenyl congeners at a network of Harner model samplers in Chicago, Illinois during 2008, finding simulated average sampling rates within analytical uncertainty of those determined from loss of depuration compounds, and confirming quasi-linear uptake. Results indicate hourly, daily and interannual variability in sampling rates, sensitivity to temporal resolution in meteorology, and predictable volatility-based relationships between congeners. We quantify the importance of each simulated process to sampling rates and mass transfer, and assess uncertainty contributed by advection, molecular diffusion, volatilization, and flow regime within the PAS, finding that PAS chamber temperature contributes the greatest variability to total process uncertainty (7.3%). PMID:23837599

  14. Investigation of mechanical and thermal properties of microwave-sintered lunar simulant materials using 2.45 GHz radiation

    NASA Technical Reports Server (NTRS)

    Meek, T. T.

    1990-01-01

    The mechanical and thermal properties of lunar simulant material were investigated. An alternative method of examining thermal shock in microwave-sintered lunar samples was researched. A computer code was developed that models how the fracture toughness of a thermally shocked lunar simulant sample is related to the sample hardness as measured by a micro-hardness indentor apparatus. This technique enables a large amount of data to be gathered from a few samples. Several samples were sintered at different temperatures and for different times at those temperatures. The melting and recrystallization characteristics of a well-studied binary system were also investigated to see if the thermodynamic barrier for the nucleation of a crystalline phase may be affected by the presence of a microwave field. The system chosen was the albite (sodium aluminosilicate)-anorthite (calcium aluminosilicate) system. The results of these investigations are presented.

  15. Simulation of sampling effects in FPAs

    NASA Astrophysics Data System (ADS)

    Cook, Thomas H.; Hall, Charles S.; Smith, Frederick G.; Rogne, Timothy J.

    1991-09-01

    The use of multiplexers and large focal plane arrays in advanced thermal imaging systems has drawn renewed attention to sampling and aliasing issues in imaging applications. As evidenced by discussions in a recent workshop, there is no clear consensus among experts whether aliasing in sensor designs can be readily tolerated or must be avoided at all cost. Further, there is no straightforward analytical method that can answer the question, particularly when considering image interpreters as different as humans and autonomous target recognizers (ATRs). However, the means exist for investigating sampling and aliasing issues through computer simulation. The U.S. Army Tank-Automotive Command (TACOM) Thermal Image Model (TTIM) provides realistic sensor imagery that can be evaluated by both human observers and ATRs. This paper briefly describes the history and current status of TTIM, explains the simulation of FPA sampling effects, presents validation results of the FPA sensor model, and demonstrates the utility of TTIM for investigating sampling effects in imagery.
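
    Aliasing from focal-plane sampling is easy to demonstrate: a scene frequency above the detector's Nyquist limit folds to a spurious lower frequency. The sketch below (one-dimensional for brevity, with made-up parameters) shows a 40-cycle pattern read correctly by 128 pixels but aliased to 24 cycles by 64 pixels.

```python
import numpy as np

F_SCENE = 40.0            # spatial frequency of the scene, cycles per unit

for n_pix in (128, 64):   # pixel counts above and below Nyquist (80 samples/unit)
    x = np.arange(n_pix) / n_pix
    samples = np.sin(2.0 * np.pi * F_SCENE * x)
    spec = np.abs(np.fft.rfft(samples))
    f_peak = np.argmax(spec[1:]) + 1   # dominant frequency, cycles per unit
    verdict = "faithful" if f_peak == F_SCENE else "aliased"
    print(f"{n_pix} pixels -> apparent frequency {f_peak} cycles ({verdict})")
```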

  16. Faster protein folding using enhanced conformational sampling of molecular dynamics simulation.

    PubMed

    Kamberaj, Hiqmet

    2018-05-01

    In this study, we applied the swarm particle-like molecular dynamics (SPMD) approach to enhance the conformational sampling of replica exchange simulations. In particular, the approach showed significant improvement in the sampling efficiency of conformational phase space when combined with the replica exchange method (REM) in computer simulations of peptide/protein folding. First we introduce the augmented dynamical system of equations and demonstrate the stability of the algorithm. Then, we illustrate the approach using different fully atomistic and coarse-grained model systems, comparing them with the standard replica exchange method. In addition, we applied SPMD simulation to calculate the time correlation functions of the transitions on a two-dimensional surface to demonstrate the enhancement of transition path sampling. Our results showed that the folded structure can be obtained in a shorter simulation time using the new method when compared with the non-augmented dynamical system: typically in less than 0.5 ns of replica exchange runs when the native folded structure is known, and within a simulation time scale of 40 ns in the case of blind structure prediction. Furthermore, the root mean square deviations from the reference structures were less than 2 Å. To demonstrate the performance of the new method, we also implemented three simulation protocols using the CHARMM software. Comparisons are also performed with the standard targeted molecular dynamics simulation method.
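
    The replica exchange step that both the standard and SPMD-augmented runs rely on is a Metropolis swap between neighboring temperatures. A minimal sketch of the acceptance test (units assumed to be kcal/mol and kelvin):

```python
import numpy as np

rng = np.random.default_rng(2)
KB = 0.0019872041          # Boltzmann constant, kcal/(mol K)

def attempt_swap(E_i, E_j, T_i, T_j):
    """Metropolis criterion for exchanging configurations between two
    replicas at temperatures T_i and T_j with potential energies E_i, E_j."""
    beta_i, beta_j = 1.0 / (KB * T_i), 1.0 / (KB * T_j)
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0.0 or rng.random() < np.exp(delta)

# example: neighboring replicas at 300 K and 310 K
print(attempt_swap(-120.0, -100.0, 300.0, 310.0))
```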

  17. Designing a monitoring program to estimate estuarine survival of anadromous salmon smolts: simulating the effect of sample design on inference

    USGS Publications Warehouse

    Romer, Jeremy D.; Gitelman, Alix I.; Clements, Shaun; Schreck, Carl B.

    2015-01-01

    A number of researchers have attempted to estimate salmonid smolt survival during outmigration through an estuary. However, it is currently unclear how the design of such studies influences the accuracy and precision of survival estimates. In this simulation study we consider four patterns of smolt survival probability in the estuary, and test the performance of several different sampling strategies for estimating estuarine survival assuming perfect detection. The four survival probability patterns each incorporate a systematic component (constant, linearly increasing, increasing and then decreasing, and two pulses) and a random component to reflect daily fluctuations in survival probability. Generally, spreading sampling effort (tagging) across the season resulted in more accurate estimates of survival. All sampling designs in this simulation tended to under-estimate the variation in the survival estimates because seasonal and daily variation in survival probability are not incorporated in the estimation procedure. This under-estimation results in poorer performance of estimates from larger samples. Thus, tagging more fish may not result in better estimates of survival if important components of variation are not accounted for. The results of our simulation incorporate survival probabilities and run distribution data from previous studies to help illustrate the tradeoffs among sampling strategies in terms of the number of tags needed and distribution of tagging effort. This information will assist researchers in developing improved monitoring programs and encourage discussion regarding issues that should be addressed prior to implementation of any telemetry-based monitoring plan. We believe implementation of an effective estuary survival monitoring program will strengthen the robustness of life cycle models used in recovery plans by providing missing data on where and how much mortality occurs in the riverine and estuarine portions of smolt migration. These data could result in better informed management decisions and assist in guidance for more effective estuarine restoration projects.

  18. Equilibrium Sampling in Biomolecular Simulation

    PubMed Central

    2015-01-01

    Equilibrium sampling of biomolecules remains an unmet challenge after more than 30 years of atomistic simulation. Efforts to enhance sampling capability, which are reviewed here, range from the development of new algorithms to parallelization to novel uses of hardware. Special focus is placed on classifying algorithms — most of which are underpinned by a few key ideas — in order to understand their fundamental strengths and limitations. Although algorithms have proliferated, progress resulting from novel hardware use appears to be more clear-cut than from algorithms alone, partly due to the lack of widely used sampling measures. PMID:21370970

  19. Simulation of the Effects of Random Measurement Errors

    ERIC Educational Resources Information Center

    Kinsella, I. A.; Hannaidh, P. B. O.

    1978-01-01

    Describes a simulation method for studying the effects of random measurement errors that requires only calculators and tables of random digits. Each student simulates the random behaviour of the component variables in the function, and by combining the results of all students, the outline of the sampling distribution of the function can be obtained. (GA)
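
    The same exercise is trivial to reproduce on a computer. In the sketch below each random draw plays the role of one student's simulated measurement, and the pooled spread is compared with first-order error propagation; the function and error sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Example: f(a, b) = a * b with independent measurement errors on a and b
a_true, sigma_a = 10.0, 0.2
b_true, sigma_b = 5.0, 0.1
n_students = 10000                     # each draw stands in for one student

a = rng.normal(a_true, sigma_a, n_students)
b = rng.normal(b_true, sigma_b, n_students)
f = a * b

# first-order propagation: sigma_f^2 ~ (b*sigma_a)^2 + (a*sigma_b)^2
analytic = np.hypot(b_true * sigma_a, a_true * sigma_b)
print(f"simulated sd = {f.std(ddof=1):.3f}, first-order sd = {analytic:.3f}")
```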

  20. Affected States Soft Independent Modeling by Class Analogy from the Relation Between Independent Variables, Number of Independent Variables and Sample Size

    PubMed Central

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, İrem Ersöz

    2013-01-01

    Objective: The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA), and to determine whether the method is affected by the number of independent variables, the relationship between variables, and sample size. Study Design: Simulation study. Material and Methods: The SIMCA model is performed in two stages. Simulations were run to determine whether the method is influenced by the number of independent variables, the relationship between variables, and sample size. The conditions covered equal sample sizes in both groups of 30, 100 and 1000; 2, 3, 5, 10, 50 and 100 variables; and relationships between variables that were quite high, medium, and quite low. Results: Average classification accuracies from 1000 repetitions of each condition of the trial plan are given as tables. Conclusion: Diagnostic accuracy increases as the number of independent variables increases. SIMCA is suited to data in which the relationships between variables are quite high and the independent variables are many, and it can be used with data containing outlier values. PMID:25207065

  1. Numerical simulation and analysis for low-frequency rock physics measurements

    NASA Astrophysics Data System (ADS)

    Dong, Chunhui; Tang, Genyang; Wang, Shangxu; He, Yanxiao

    2017-10-01

    In recent years, several experimental methods have been introduced to measure the elastic parameters of rocks in the relatively low-frequency range, such as differential acoustic resonance spectroscopy (DARS) and stress-strain measurement. It is necessary to verify the validity and feasibility of the applied measurement method and to quantify the sources and levels of measurement error. Relying solely on the laboratory measurements, however, we cannot evaluate the complete wavefield variation in the apparatus. Numerical simulations of elastic wave propagation, on the other hand, are used to model the wavefield distribution and physical processes in the measurement systems, and to verify the measurement theory and analyze the measurement results. In this paper we provide a numerical simulation method to investigate the acoustic waveform response of the DARS system and the quasi-static responses of the stress-strain system, both of which use axisymmetric apparatus. We applied this method to parameterize the properties of the rock samples, the sample locations and the sensor (hydrophone and strain gauges) locations and simulate the measurement results, i.e. resonance frequencies and axial and radial strains on the sample surface, from the modeled wavefield following the physical experiments. Rock physical parameters were estimated by inversion or direct processing of these data, and showed a perfect match with the true values, thus verifying the validity of the experimental measurements. Error analysis was also conducted for the DARS system with 18 numerical samples, and the sources and levels of error are discussed. In particular, we propose an inversion method for estimating both density and compressibility of these samples. The modeled results also showed fairly good agreement with the real experiment results, justifying the effectiveness and feasibility of our modeling method.

  2. Exhaustively sampling peptide adsorption with metadynamics.

    PubMed

    Deighan, Michael; Pfaendtner, Jim

    2013-06-25

    Simulating the adsorption of a peptide or protein and obtaining quantitative estimates of thermodynamic observables remains challenging for many reasons. One reason is the dearth of molecular-scale experimental data available for validating such computational models. We also lack simulation methodologies that effectively address the dual challenges of simulating protein adsorption: overcoming strong surface binding and sampling conformational changes. Unbiased classical simulations do not address either of these challenges. Previous attempts that apply enhanced sampling generally focus on only one of the two issues, leaving the other to chance or brute-force computing. To improve our ability to accurately resolve adsorbed protein orientation and conformational states, we have applied the Parallel Tempering Metadynamics in the Well-Tempered Ensemble (PTMetaD-WTE) method to several explicitly solvated protein/surface systems. We simulated the adsorption behavior of two peptides, LKα14 and LKβ15, onto two self-assembled monolayer (SAM) surfaces with carboxyl and methyl terminal functionalities. PTMetaD-WTE proved effective at achieving rapid convergence of the simulations, whose results elucidated different aspects of peptide adsorption, including binding free energies, side chain orientations, and preferred conformations. We investigated how specific molecular features of the surface/protein interface change the shape of the multidimensional peptide binding free energy landscape. Additionally, we compared our enhanced sampling technique with umbrella sampling and also evaluated three commonly used molecular dynamics force fields.
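
    Well-tempered metadynamics, the bias engine inside PTMetaD-WTE, can be sketched in one dimension: Gaussians are deposited along the collective variable with heights that shrink exponentially with the bias already present. The toy below runs Metropolis dynamics on a double well standing in for adsorbed/desorbed states; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
KT, GAMMA = 1.0, 10.0             # thermal energy and well-tempered bias factor
W0, SIGMA, STRIDE = 0.5, 0.2, 25  # initial hill height, width, deposition stride

def potential(s):                 # double well: minima at s = -1 and s = +1
    return (s * s - 1.0) ** 2

centers, heights = [], []

def bias(s):
    if not centers:
        return 0.0
    c, h = np.asarray(centers), np.asarray(heights)
    return float(np.sum(h * np.exp(-0.5 * ((s - c) / SIGMA) ** 2)))

s = -1.0
for step in range(50000):         # Metropolis dynamics on the biased surface
    s_try = s + rng.normal(0.0, 0.1)
    dE = potential(s_try) + bias(s_try) - potential(s) - bias(s)
    if dE <= 0.0 or rng.random() < np.exp(-dE / KT):
        s = s_try
    if step % STRIDE == 0:        # deposit a hill with well-tempered scaling
        heights.append(W0 * np.exp(-bias(s) / (KT * (GAMMA - 1.0))))
        centers.append(s)

# long-time estimate: F(s) ~ -(GAMMA / (GAMMA - 1)) * bias(s) + const
print("bias at the two wells:", bias(-1.0), bias(1.0))
```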

  3. SSAGES: Software Suite for Advanced General Ensemble Simulations.

    PubMed

    Sidky, Hythem; Colón, Yamil J; Helfferich, Julian; Sikora, Benjamin J; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S; Reid, Daniel R; Sevgen, Emre; Thapar, Vikram; Webb, Michael A; Whitmer, Jonathan K; de Pablo, Juan J

    2018-01-28

    Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulation packages. SSAGES allows facile application of a variety of enhanced sampling techniques, including adaptive biasing force, string methods, and forward flux sampling, that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite. The code may be found at: https://github.com/MICCoM/SSAGES-public.

  4. Nonuniform sampling techniques for antenna applications

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Yahya; Cheung, Rudolf Lap-Tung

    1987-01-01

    A two-dimensional sampling technique, which can employ irregularly spaced samples (amplitude and phase) in order to generate the complete far-field patterns, is presented. The technique implements a matrix inversion algorithm, which depends only on the nonuniform sampled data point locations and has no dependence on the actual field values at these points. A powerful simulation algorithm is presented to allow a real-life simulation of many reflector/feed configurations and to determine the usefulness of the nonuniform sampling technique for the copolar and cross-polar patterns. Additionally, an overlapped window concept and a generalized error simulation model are discussed to identify the stability of the technique for recovering the field data among the nonuniform sampled data. Numerical results are tailored for the pattern reconstruction of a 20-m offset reflector antenna operating at L-band. This reflector is planned to be used in a proposed measurement concept for a large antenna aboard the Space Shuttle, whereby it would be almost impractical to accurately control the movement of the Shuttle with respect to the RF source in prescribed directions in order to generate uniform sampled points. Also, application of the nonuniform sampling technique to patterns obtained using near-field measured data is demonstrated. Finally, results of an actual far-field measurement are presented for the construction of patterns of a reflector antenna from a set of nonuniformly distributed measured amplitude and phase data.
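
    The key property, that the reconstruction matrix depends only on where the samples were taken and not on the measured field values, is easy to illustrate with a band-limited one-dimensional stand-in for the far-field pattern. The sketch below solves the resulting linear system by least squares; sizes and frequencies are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
K = 8                                      # harmonic band limit of the toy pattern
k = np.arange(-K, K + 1)
c_true = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)

def field(x, c):                           # band-limited far-field stand-in
    return np.exp(2j * np.pi * np.outer(x, k)) @ c

# irregularly spaced sample locations; amplitude and phase known there
x_samp = np.sort(rng.uniform(0.0, 1.0, size=64))
y_samp = field(x_samp, c_true)

# the reconstruction matrix depends only on the sample locations
A = np.exp(2j * np.pi * np.outer(x_samp, k))
c_est, *_ = np.linalg.lstsq(A, y_samp, rcond=None)

x_grid = np.linspace(0.0, 1.0, 256)
err = np.max(np.abs(field(x_grid, c_est) - field(x_grid, c_true)))
print(f"max pattern reconstruction error: {err:.2e}")
```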

  5. Simulation analyses of space use: Home range estimates, variability, and sample size

    USGS Publications Warehouse

    Bekoff, Marc; Mech, L. David

    1984-01-01

    Simulations of space use by animals were run to determine the relationship among home range area estimates, variability, and sample size (number of locations). As sample size increased, home range size increased asymptotically, whereas variability decreased among mean home range area estimates generated by multiple simulations for the same sample size. Our results suggest that field workers should obtain between 100 and 200 locations in order to estimate home range area reliably. In some cases, this suggested guideline is higher than values found in the few published studies in which the relationship between home range area and number of locations is addressed. Sampling differences for small species occupying relatively small home ranges indicate that fewer locations may be sufficient to allow for a reliable estimate of home range. Intraspecific variability in social status (group member, loner, resident, transient), age, sex, reproductive condition, and food resources also has to be considered, as do season, habitat, and differences in sampling and analytical methods. Comparative data still are needed.

  6. The Simulation Realization of Pavement Roughness in the Time Domain

    NASA Astrophysics Data System (ADS)

    XU, H. L.; He, L.; An, D.

    2017-10-01

    Realistic simulation of pavement roughness is an important prerequisite for dynamic studies of the vehicle-pavement system and for simulated vibration table tests: without it, neither calculation nor test can reflect the actual situation. Starting from the power spectral density function, the simulation of pavement roughness can be realized by an inverse Fourier transform. The main idea of this method is that the spectrum amplitude and a random phase are obtained separately according to the power spectrum, and the pavement roughness profile is then obtained in the time domain through the inverse fast Fourier transform (IFFT). In this work, the sampling interval (Δl) was 0.1 m and the number of sampling points (N) was 4096, which satisfied the accuracy requirements. Using this method, simulated pavement roughness profiles (grades A-H) were obtained in the time domain.
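
    A minimal version of that procedure in code: take amplitudes from the PSD, attach uniformly random phases, and inverse-FFT to a spatial profile. The PSD below is the ISO 8608-style form G(n) = G0 (n/n0)^-2 with a grade-A-like G0, which is an assumption; the paper's exact PSD may differ.

```python
import numpy as np

rng = np.random.default_rng(6)

N, dl = 4096, 0.1                   # points and sampling interval (m)
n0, G0 = 0.1, 16e-6                 # reference frequency (cycles/m) and grade-A-like
                                    # displacement PSD value (m^3) -- assumed

n = np.fft.rfftfreq(N, d=dl)        # spatial frequencies (cycles/m)
G = np.zeros_like(n)
G[1:] = G0 * (n[1:] / n0) ** -2.0   # skip the zero-frequency bin

# amplitude from the PSD, phase drawn at random, then IFFT to the profile
dn = 1.0 / (N * dl)                 # frequency resolution
amp = np.sqrt(2.0 * G * dn)         # harmonic amplitudes
phase = rng.uniform(0.0, 2.0 * np.pi, size=n.size)
spectrum = amp * np.exp(1j * phase) * N / 2.0
profile = np.fft.irfft(spectrum, n=N)   # road elevation (m) every dl metres

print(f"RMS roughness: {profile.std():.4f} m")
```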

  7. Positive Wigner functions render classical simulation of quantum computation efficient.

    PubMed

    Mari, A; Eisert, J

    2012-12-07

    We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.

  8. Achieving Rigorous Accelerated Conformational Sampling in Explicit Solvent.

    PubMed

    Doshi, Urmi; Hamelberg, Donald

    2014-04-03

    Molecular dynamics simulations can provide valuable atomistic insights into biomolecular function. However, the accuracy of molecular simulations on general-purpose computers depends on the time scale of the events of interest. Advanced simulation methods, such as accelerated molecular dynamics, have shown tremendous promise in sampling the conformational dynamics of biomolecules, where standard molecular dynamics simulations are nonergodic. Here we present a sampling method based on accelerated molecular dynamics in which rotatable dihedral angles and nonbonded interactions are boosted separately. This method (RaMD-db) is a different implementation of the dual-boost accelerated molecular dynamics introduced earlier. The advantage is that this method speeds up sampling of the conformational space of biomolecules in explicit solvent, as the degrees of freedom most relevant for conformational transitions are accelerated. We tested RaMD-db on one of the most difficult sampling problems: protein folding. Starting from fully extended polypeptide chains, two fast-folding α-helical proteins (Trpcage and the double mutant of the C-terminal fragment of the Villin headpiece) and a designed β-hairpin (Chignolin) were completely folded to their native structures in very short simulation times. Multiple folding/unfolding transitions could be observed in a single trajectory. Our results show that RaMD-db is a promisingly fast and efficient sampling method for conformational transitions in explicit solvent. RaMD-db thus opens new avenues for understanding biomolecular self-assembly and functional dynamics occurring on long time and length scales.

  9. [Parameter sensitivity of simulating net primary productivity of Larix olgensis forest based on BIOME-BGC model].

    PubMed

    He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong

    2016-02-01

    Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values will greatly improve simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating the net primary productivity (NPP) of Larix olgensis forest in Wangqing, Jilin Province. First, by comparing field measurement data with the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could simulate the NPP of L. olgensis forest in the sample plot well. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result as well as the interaction between parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio, and the effect of their interaction was significantly greater than that of the other parameter interactions.
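
    Morris screening itself reduces to computing elementary effects along one-at-a-time trajectories; μ* flags influential parameters and σ flags interactions or nonlinearity. A minimal self-contained sketch (not SALib, and with a toy model standing in for BIOME-BGC):

```python
import numpy as np

rng = np.random.default_rng(7)

def morris_elementary_effects(model, n_params, n_traj=50, delta=0.1):
    """Minimal Morris screening: one-at-a-time perturbations along random
    trajectories in the unit hypercube; returns (mu*, sigma) per parameter."""
    ee = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)
        y = model(x)
        for i in rng.permutation(n_params):   # perturb factors in random order
            x_new = x.copy()
            x_new[i] += delta
            y_new = model(x_new)
            ee[i].append((y_new - y) / delta)
            x, y = x_new, y_new
    ee = np.array(ee)
    return np.abs(ee).mean(axis=1), ee.std(axis=1)

# toy stand-in for an NPP model: parameter 3 has no effect at all
f = lambda x: x[0] + 2.0 * x[1] ** 2 + x[0] * x[2]
mu_star, sigma = morris_elementary_effects(f, n_params=4)
print("mu* :", np.round(mu_star, 2))
print("sig :", np.round(sigma, 2))
```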

  10. Morphological changes in polycrystalline Fe after compression and release

    NASA Astrophysics Data System (ADS)

    Gunkelmann, Nina; Tramontina, Diego R.; Bringa, Eduardo M.; Urbassek, Herbert M.

    2015-02-01

    Despite a number of large-scale molecular dynamics simulations of shock compressed iron, the morphological properties of simulated recovered samples are still unexplored. Key questions remain open in this area, including the role of dislocation motion and deformation twinning in shear stress release. In this study, we present simulations of homogeneous uniaxial compression and recovery of large polycrystalline iron samples. Our results reveal significant recovery of the body-centered cubic grains with some deformation twinning driven by shear stress, in agreement with experimental results by Wang et al. [Sci. Rep. 3, 1086 (2013)]. The twin fraction agrees reasonably well with a semi-analytical model which assumes a critical shear stress for twinning. On reloading, twins disappear and the material reaches a very low strength value.

  11. Protecting High Energy Barriers: A New Equation to Regulate Boost Energy in Accelerated Molecular Dynamics Simulations.

    PubMed

    Sinko, William; de Oliveira, César Augusto F; Pierce, Levi C T; McCammon, J Andrew

    2012-01-10

    Molecular dynamics (MD) is one of the most common tools in computational chemistry. Recently, our group has employed accelerated molecular dynamics (aMD) to improve the conformational sampling over conventional molecular dynamics techniques. In the original aMD implementation, sampling is greatly improved by raising energy wells below a predefined energy level. Recently, our group presented an alternative aMD implementation where simulations are accelerated by lowering energy barriers of the potential energy surface. When coupled with thermodynamic integration simulations, this implementation showed very promising results. However, when applied to large systems, such as proteins, the simulation tends to be biased to high energy regions of the potential landscape. The reason for this behavior lies in the boost equation used since the highest energy barriers are dramatically more affected than the lower ones. To address this issue, in this work, we present a new boost equation that prevents oversampling of unfavorable high energy conformational states. The new boost potential provides not only better recovery of statistics throughout the simulation but also enhanced sampling of statistically relevant regions in explicit solvent MD simulations.
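
    For reference, the original aMD boost that "raises energy wells below a predefined energy level" has the well-known closed form below, where E is the threshold energy and α controls how sharply the modified surface flattens; the paper's new equation modifies this construction to protect high barriers.

```latex
V^{*}(\vec{r}) =
\begin{cases}
V(\vec{r}), & V(\vec{r}) \ge E, \\[4pt]
V(\vec{r}) + \dfrac{\bigl(E - V(\vec{r})\bigr)^{2}}{\alpha + E - V(\vec{r})}, & V(\vec{r}) < E.
\end{cases}
```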

  12. Upward Flame Spread Over Thin Solids in Partial Gravity

    NASA Technical Reports Server (NTRS)

    Feier, I. I.; Shih, H. Y.; Sacksteder, K. R.; Tien, J. S.

    2001-01-01

    The effects of partial-gravity, reduced pressure, and sample width on upward flame spread over a thin cellulose fuel were studied experimentally and the results were compared to a numerical flame spread simulation. Fuel samples 1-cm, 2-cm, and 4-cm wide were burned in air at reduced pressures of 0.2 to 0.4 atmospheres in simulated gravity environments of 0.1-G, 0.16-G (Lunar), and 0.38-G (Martian) onboard the NASA KC-135 aircraft and in normal-gravity tests. Observed steady flame propagation speeds and pyrolysis lengths were approximately proportional to the gravity level. Flames spread more quickly and were longer with the wider samples and the variations with gravity and pressure increased with sample width. A numerical simulation of upward flame spread was developed including three-dimensional Navier-Stokes equations, one-step Arrhenius kinetics for the gas phase flame and for the solid surface decomposition, and a fuel-surface radiative loss. The model provides detailed structure of flame temperatures, the flow field interactions with the flame, and the solid fuel mass disappearance. The simulation agrees with experimental flame spread rates and their dependence on gravity level but predicts a wider flammable region than found by experiment. Some unique three-dimensional flame features are demonstrated in the model results.

  13. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
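
    The pseudo-predator construction at the heart of such simulations is compact: bootstrap-resample each prey type's signatures, average, and mix by the chosen diet. The sketch below is illustrative only; the prey library is synthetic, and the paper's contribution, an objective algorithm for choosing the bootstrap sample size, is not shown.

```python
import numpy as np

rng = np.random.default_rng(8)

def pseudo_predator(prey_sigs, diet, n_boot):
    """Construct one pseudo-predator fatty acid signature.

    prey_sigs : dict mapping prey type -> (n_i, n_fa) array of signatures
    diet      : dict mapping prey type -> true diet proportion (sums to 1)
    n_boot    : bootstrap sample size drawn per prey type
    """
    parts = []
    for prey, sigs in prey_sigs.items():
        idx = rng.integers(0, len(sigs), size=n_boot)       # bootstrap resample
        parts.append(diet[prey] * sigs[idx].mean(axis=0))   # diet-weighted mean
    sig = np.sum(parts, axis=0)
    return sig / sig.sum()                                  # renormalize

# hypothetical prey library: 3 fatty acids, 2 prey types
prey = {"seal": rng.dirichlet([8, 3, 2], size=30),
        "beluga": rng.dirichlet([2, 6, 4], size=25)}
print(pseudo_predator(prey, {"seal": 0.7, "beluga": 0.3}, n_boot=10))
```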

  14. Combining inferences from models of capture efficiency, detectability, and suitable habitat to classify landscapes for conservation of threatened bull trout

    USGS Publications Warehouse

    Peterson, J.; Dunham, J.B.

    2003-01-01

    Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult-to-sample species, and models of species presence may produce biased predictions. We present a Bayesian approach that combines sampling and model-based inferences for estimating species presence. The accuracy and cost-effectiveness of this approach were compared to those of sampling surveys and predictive models for estimating the presence of the threatened bull trout ( Salvelinus confluentus ) via simulation with existing models and empirical sampling data. Simulations indicated that a sampling-only approach would be the most effective and would result in the lowest presence and absence misclassification error rates for three thresholds of detection probability. When sampling effort was considered, however, the combined approach resulted in the lowest error rates per unit of sampling effort. Hence, lower probability-of-detection thresholds can be specified with the combined approach, resulting in lower misclassification error rates and improved cost-effectiveness.
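
    For a site with no detections, the combined sampling-plus-model inference reduces to a standard Bayes update of the model-based prior by the probability of repeatedly missing a present species. A minimal sketch with assumed numbers:

```python
def presence_posterior(prior, p_detect, n_visits, detected):
    """Posterior probability that a species is present at a site after
    n_visits surveys, combining a model-based prior with sampling data.

    prior    : model-predicted probability of presence
    p_detect : per-visit detection probability given presence
    """
    if detected:
        return 1.0                        # any detection confirms presence
    miss = (1.0 - p_detect) ** n_visits   # chance of missing it every visit
    return prior * miss / (prior * miss + (1.0 - prior))

# e.g. model predicts 60% presence; 3 visits at 40% detectability, none detected
print(presence_posterior(0.6, 0.4, 3, detected=False))   # ~0.245
```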

  15. Dynamical Casimir Effect for Gaussian Boson Sampling.

    PubMed

    Peropadre, Borja; Huh, Joonsuk; Sabín, Carlos

    2018-02-28

    We show that the Dynamical Casimir Effect (DCE), realized on two multimode coplanar waveguide resonators, implements a Gaussian boson sampler (GBS). The appropriate choice of the mirror acceleration that couples both resonators translates into the desired initial Gaussian state and many-boson interference in a boson sampling network. In particular, we show that the proposed quantum simulator naturally performs a classically hard task, known as scattershot boson sampling. Our result unveils an unprecedented computational power of the DCE, and paves the way for using the DCE as a resource for quantum simulation.

  16. Computational investigation of noble gas adsorption and separation by nanoporous materials.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allendorf, Mark D.; Sanders, Joseph C.; Greathouse, Jeffery A.

    2008-10-01

    Molecular simulations are used to assess the ability of metal-organic framework (MOF) materials to store and separate noble gases. Specifically, grand canonical Monte Carlo simulation techniques are used to predict noble gas adsorption isotherms at room temperature. Experimental trends of noble gas inflation curves of a Zn-based material (IRMOF-1) are matched by the simulation results. The simulations also predict that IRMOF-1 selectively adsorbs Xe atoms in Xe/Kr and Xe/Ar mixtures at total feed gas pressures of 1 bar (14.7 psia) and 10 bar (147 psia). Finally, simulations of a copper-based MOF (Cu-BTC) predict this material's ability to selectively adsorb Xe and Kr atoms when present in trace amounts in atmospheric air samples. These preliminary results suggest that Cu-BTC may be an ideal candidate for the pre-concentration of noble gases from air samples. Additional simulations and experiments are needed to determine the saturation limit of Cu-BTC for xenon, and whether any krypton atoms would remain in the Cu-BTC pores upon saturation.
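
    A quantity typically extracted from such mixture simulations is the adsorption selectivity. The sketch below shows the standard definition; the loadings and feed composition are made-up numbers, not results from this report.

        def adsorption_selectivity(x_xe, x_kr, y_xe, y_kr):
            # x_*: adsorbed-phase loadings from GCMC (e.g., molecules per unit cell)
            # y_*: bulk gas-phase mole fractions of the feed
            return (x_xe / x_kr) / (y_xe / y_kr)

        # Illustrative only: equimolar feed, Xe adsorbed four times as much as Kr
        print(adsorption_selectivity(2.0, 0.5, 0.5, 0.5))  # -> 4.0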

  17. Absolute binding free energy calculations of CBClip host–guest systems in the SAMPL5 blind challenge

    PubMed Central

    Tofoleanu, Florentina; Pickard, Frank C.; König, Gerhard; Huang, Jing; Damjanović, Ana; Baek, Minkyung; Seok, Chaok; Brooks, Bernard R.

    2016-01-01

    Herein, we report the absolute binding free energy calculations of CBClip complexes in the SAMPL5 blind challenge. Initial conformations of CBClip complexes were obtained using docking and molecular dynamics simulations. Free energy calculations were performed using thermodynamic integration (TI) with soft-core potentials and Bennett's acceptance ratio (BAR) method based on a serial insertion scheme. We compared the results obtained with TI simulations with soft-core potentials and Hamiltonian replica exchange simulations with the serial insertion method combined with the BAR method. The results show that the difference between the two methods can be mainly attributed to the van der Waals free energies, suggesting that the simulations used for TI, those used for BAR, or both are not fully converged, and the two sets of simulations may have sampled different phase space regions. The penalty scores of the force field parameters of the 10 guest molecules provided by the CHARMM Generalized Force Field can be an indicator of the accuracy of binding free energy calculations. Among our submissions, the combination of docking and TI performed best, yielding a root-mean-square deviation of 2.94 kcal/mol and an average unsigned error of 3.41 kcal/mol for the ten guest molecules. These values were the best overall among all participants. However, our submissions had little correlation with experiments. PMID:27677749
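
    Once the ensemble averages of dU/dlambda have been collected at each coupling value, the TI estimate reduces to a one-dimensional quadrature. A minimal sketch with invented window values (not the paper's data):

        import numpy as np

        # <dU/dlambda> from soft-core TI windows (illustrative values, kcal/mol)
        lambdas = np.array([0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0])
        du_dl = np.array([-42.1, -30.5, -18.2, -6.3, -1.9, -0.6, -0.1])

        # Free energy difference: integrate <dU/dlambda> over the coupling path
        delta_g = np.trapz(du_dl, lambdas)
        print(f"Delta G (TI) = {delta_g:.2f} kcal/mol")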

  18. Simulation of local ion transport in lamellar block copolymer electrolytes based on electron micrographs

    DOE PAGES

    Chintapalli, Mahati; Higa, Kenneth; Chen, X. Chelsea; ...

    2016-12-19

    A method is presented in this paper to relate local morphology and ionic conductivity in a solid, lamellar block copolymer electrolyte for lithium batteries, by simulating conductivity through transmission electron micrographs. The electrolyte consists of polystyrene-block-poly(ethylene oxide) mixed with lithium bis(trifluoromethanesulfonyl)imide salt (SEO/LiTFSI), where the polystyrene phase is the structural phase and the poly(ethylene oxide)/LiTFSI phase is ionically conductive. The electric potential distribution is simulated in binarized micrographs by solving the Laplace equation with constant potential boundary conditions. A morphology factor, f, is reported for each image by calculating the effective conductivity relative to a homogeneous conductor. Images from two samples are examined, one annealed with large lamellar grains and one unannealed with small grains. The average value of f is 0.45 ± 0.04 for the annealed sample and 0.37 ± 0.03 for the unannealed sample, both close to the value predicted by effective medium theory, 1/2. Simulated conductivities are compared to published experimental conductivities. The value of f(unannealed)/f(annealed) is 0.82 for simulations and 6.2 for experiments. Simulation results correspond well to predictions by effective medium theory but do not explain the experimental measurements. Finally, observation of nanoscale morphology over length scales greater than the size of the micrographs (~1 μm) may be required to explain the experimental results.
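
    A morphology factor of this kind can be estimated with a short relaxation solver. The sketch below is an assumption-laden stand-in for the authors' code: it uses neighbor conductivities as edge weights, reflective side boundaries, and a synthetic stripe pattern in place of a real micrograph.

        import numpy as np

        def morphology_factor(cond_map, n_iter=5000):
            # cond_map: 2D array, 1 in the conducting (PEO/LiTFSI) phase, 0 elsewhere
            sigma = np.where(cond_map > 0, 1.0, 1e-6)  # tiny value keeps nodes regular
            ny, nx = sigma.shape
            phi = np.linspace(1.0, 0.0, ny)[:, None] * np.ones((ny, nx))
            for _ in range(n_iter):              # Jacobi relaxation of Laplace equation
                p = np.pad(phi, 1, mode="edge")  # edge padding ~ insulating sides
                s = np.pad(sigma, 1, mode="edge")
                num = (s[:-2, 1:-1] * p[:-2, 1:-1] + s[2:, 1:-1] * p[2:, 1:-1]
                       + s[1:-1, :-2] * p[1:-1, :-2] + s[1:-1, 2:] * p[1:-1, 2:])
                den = (s[:-2, 1:-1] + s[2:, 1:-1] + s[1:-1, :-2] + s[1:-1, 2:])
                phi = num / den
                phi[0, :], phi[-1, :] = 1.0, 0.0  # fixed potentials top and bottom
            flux = np.sum(0.5 * (sigma[0] + sigma[1]) * (phi[0] - phi[1]))
            return flux / (nx / (ny - 1))         # relative to a homogeneous conductor

        lamellae = np.zeros((64, 64))
        lamellae[:, ::2] = 1.0                    # idealized lamellae spanning the image
        print(morphology_factor(lamellae))        # ~0.5 for 50% aligned stripes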

  19. Structural Diversity of Ligand-Binding Androgen Receptors Revealed by Microsecond Long Molecular Dynamics Simulations and Enhanced Sampling.

    PubMed

    Duan, Mojie; Liu, Na; Zhou, Wenfang; Li, Dan; Yang, Minghui; Hou, Tingjun

    2016-09-13

    Androgen receptor (AR) plays important roles in the development of prostate cancer (PCa). Antagonistic drugs, which suppress the activity of AR, are widely used in the treatment of PCa. However, the molecular mechanism of antagonism, that is, how ligands alter the structure of AR, remains elusive. To better understand the conformational variability of ARs bound with agonists or antagonists, we performed long-timescale unbiased molecular dynamics (MD) simulations and enhanced sampling simulations for the ligand-binding domain of AR (AR-LBD) in complex with various ligands. Based on the simulation results, we propose an allosteric pathway linking ligands and helix 12 (H12) of AR-LBD, which involves the interactions among the ligands and the residues W741, H874, and I899. The interaction pathway provides an atomistic explanation of how ligands affect the structure of AR-LBD. A repositioning of H12 was observed, but it is facilitated by the C-terminus of H12 rather than by the loop between helix 11 (H11) and H12. Bias-exchange metadynamics simulations further supported these observations. More importantly, the free energy profiles constructed by the enhanced sampling simulations revealed the transition process between the antagonistic and agonistic forms of AR-LBD. Our results should be helpful for the design of more efficient antagonists of AR to combat PCa.

  1. Latin hypercube sampling and geostatistical modeling of spatial uncertainty in a spatially explicit forest landscape model simulation

    Treesearch

    Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu

    2005-01-01

    Geostatistical stochastic simulation is often combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models, a consequence of their complexity, it is often infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
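
    For reference, a basic Latin hypercube sample stratifies each uncertain input so that even a handful of model runs covers its full range. A generic sketch, not the authors' implementation:

        import numpy as np

        def latin_hypercube(n_samples, n_dims, rng):
            # One point per stratum and per dimension on the unit hypercube
            u = rng.random((n_samples, n_dims))   # jitter within each stratum
            perm = np.argsort(rng.random((n_samples, n_dims)), axis=0)
            return (perm + u) / n_samples

        rng = np.random.default_rng(7)
        runs = latin_hypercube(10, 3, rng)  # 10 model runs covering 3 inputs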

  2. Robust spectral-domain optical coherence tomography speckle model and its cross-correlation coefficient analysis

    PubMed Central

    Liu, Xuan; Ramella-Roman, Jessica C.; Huang, Yong; Guo, Yuan; Kang, Jin U.

    2013-01-01

    In this study, we proposed a generic speckle simulation for the optical coherence tomography (OCT) signal, obtained by convolving the point spread function (PSF) of the OCT system with a numerically synthesized random sample field. We validated our model and used the simulation method to study the statistical properties of cross-correlation coefficients (XCC) between A-scans, which have recently been applied in transverse motion analysis by our group. The simulation results show that oversampling is essential for accurate motion tracking; exponential decay of the OCT signal leads to an underestimate of motion, which can be corrected; and lateral heterogeneity of the sample leads to an overestimate of motion for the few pixels corresponding to structural boundaries. PMID:23456001
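
    The simulation idea, convolving the system PSF with a random complex scatterer field, can be sketched in a few lines; the PSF widths and field size below are arbitrary stand-ins for the instrument parameters used in the paper.

        import numpy as np
        from scipy.signal import fftconvolve

        rng = np.random.default_rng(0)
        nz, nx = 512, 256  # axial x lateral grid

        # Complex circular-Gaussian scatterer field (the random sample field)
        field = rng.normal(size=(nz, nx)) + 1j * rng.normal(size=(nz, nx))

        # Separable Gaussian PSF; widths in pixels are illustrative
        z = np.arange(-16, 17)[:, None]
        x = np.arange(-16, 17)[None, :]
        psf = np.exp(-z**2 / (2 * 4.0**2) - x**2 / (2 * 6.0**2))

        # Coherent sum of scatterers weighted by the PSF, then magnitude
        speckle = np.abs(fftconvolve(field, psf, mode="same"))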

  3. Enhanced sampling of glutamate receptor ligand-binding domains.

    PubMed

    Lau, Albert Y

    2018-04-14

    The majority of excitatory synaptic transmission in the central nervous system is mediated by ionotropic glutamate receptors (iGluRs). These membrane-bound protein assemblies consist of modular domains that can be genetically isolated and expressed, which has resulted in a plethora of crystal structures of individual domains in different conformations bound to different ligands. These structures have presented opportunities for molecular dynamics (MD) simulation studies. To examine the free energies that govern molecular behavior, simulation strategies and algorithms have been developed, collectively called enhanced sampling methods. This review focuses on the use of enhanced sampling MD simulations of isolated iGluR ligand-binding domains to characterize thermodynamic properties important to receptor function. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Statistical modelling as an aid to the design of retail sampling plans for mycotoxins in food.

    PubMed

    MacArthur, Roy; MacDonald, Susan; Brereton, Paul; Murray, Alistair

    2006-01-01

    A study has been carried out to assess appropriate statistical models for use in evaluating retail sampling plans for the determination of mycotoxins in food. A compound gamma model was found to be a suitable fit. A simulation model based on the compound gamma model was used to produce operating characteristic curves for a range of parameters relevant to retail sampling. The model was also used to estimate the minimum number of increments necessary to minimize the overall measurement uncertainty. Simulation results showed that measurements based on retail samples (for which the maximum number of increments is constrained by cost) may produce fit-for-purpose results for the measurement of ochratoxin A in dried fruit, but are unlikely to do so for the measurement of aflatoxin B1 in pistachio nuts. In order to produce a more accurate simulation, further work is required to determine the degree of heterogeneity associated with batches of food products. With appropriate parameterization in terms of physical and biological characteristics, the systems developed in this study could be applied to other analyte/matrix combinations.

  5. Stability and bias of classification rates in biological applications of discriminant analysis

    USGS Publications Warehouse

    Williams, B.K.; Titus, K.; Hines, J.E.

    1990-01-01

    We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that the stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
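
    The small-sample optimism described here is easy to reproduce. In the hedged sketch below (dimensionality, group separation, and replicate counts are invented), resubstitution rates from a discriminant rule fitted to small groups exceed rates on independent holdout data:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        p, delta = 4, 1.0  # number of variables; shift between group means

        def correct_rates(n_per_group, n_holdout=2000):
            X = np.vstack([rng.normal(0.0, 1.0, (n_per_group, p)),
                           rng.normal(delta, 1.0, (n_per_group, p))])
            y = np.repeat([0, 1], n_per_group)
            lda = LinearDiscriminantAnalysis().fit(X, y)
            Xt = np.vstack([rng.normal(0.0, 1.0, (n_holdout, p)),
                            rng.normal(delta, 1.0, (n_holdout, p))])
            yt = np.repeat([0, 1], n_holdout)
            return lda.score(X, y), lda.score(Xt, yt)  # optimistic vs. honest

        for n in (10, 25, 100):
            resub, hold = np.mean([correct_rates(n) for _ in range(50)], axis=0)
            print(f"n={n:3d}: resubstitution={resub:.3f}  holdout={hold:.3f}")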

  6. Estimation of river and stream temperature trends under haphazard sampling

    USGS Publications Warehouse

    Gray, Brian R.; Lyubchich, Vyacheslav; Gel, Yulia R.; Rogala, James T.; Robertson, Dale M.; Wei, Xiaoqiao

    2015-01-01

    Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly-spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in time or space of sampling which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, where time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalance, sparse data and confounding by trend in time and date of sampling. Simulation results indicate unbiased trend estimators while results from a case study of temperature data from the Illinois River, USA conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference on multilevel models that allows for a relatively flexible and distribution-free quantification of uncertainties. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates typically span temperature extremes.

  7. Comparison of Monte Carlo simulation of gamma ray attenuation coefficients of amino acids with XCOM program and experimental data

    NASA Astrophysics Data System (ADS)

    Elbashir, B. O.; Dong, M. G.; Sayyed, M. I.; Issa, Shams A. M.; Matori, K. A.; Zaid, M. H. M.

    2018-06-01

    The mass attenuation coefficients (μ/ρ), effective atomic numbers (Zeff) and electron densities (Ne) of some amino acids obtained experimentally by other researchers have been calculated using MCNP5 simulations in the energy range 0.122-1.330 MeV. The simulated values of μ/ρ, Zeff, and Ne were compared with the previous experimental work for the amino acid samples, and good agreement was observed. Moreover, the values of mean free path (MFP) for the samples were calculated using the MCNP5 program and compared with the theoretical results obtained by XCOM. The investigation of the μ/ρ, Zeff, Ne and MFP values of amino acids using MCNP5 simulations at various photon energies, when compared with the XCOM values and previous experimental data, revealed that the MCNP5 code provides accurate photon interaction parameters for amino acids.
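
    Comparisons of this kind rest on the elemental mixture rule, mu/rho = sum of w_i (mu/rho)_i over the constituent elements. A sketch with placeholder coefficients (not tabulated XCOM values):

        def mass_attenuation(mass_fractions, mu_rho_elements):
            # Mixture rule: weight each element's mu/rho by its mass fraction
            return sum(w * mu_rho_elements[el] for el, w in mass_fractions.items())

        # Glycine (C2H5NO2); coefficients are illustrative stand-ins, cm^2/g
        mu_rho = {"H": 0.336, "C": 0.151, "N": 0.150, "O": 0.151}
        w_glycine = {"H": 0.0671, "C": 0.3200, "N": 0.1866, "O": 0.4263}
        print(mass_attenuation(w_glycine, mu_rho))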

  8. Reaching multi-nanosecond timescales in combined QM/MM molecular dynamics simulations through parallel horsetail sampling.

    PubMed

    Martins-Costa, Marilia T C; Ruiz-López, Manuel F

    2017-04-15

    We report an enhanced sampling technique that makes it possible to reach the multi-nanosecond timescale in quantum mechanics/molecular mechanics molecular dynamics simulations. The proposed technique, called horsetail sampling, is a specific type of multiple molecular dynamics approach exhibiting high parallel efficiency. It couples a main simulation with a large number of shorter trajectories launched on independent processors at periodic time intervals. The technique is applied to study hydrogen peroxide at the water liquid-vapor interface, a system of considerable atmospheric relevance. A total simulation time of a little more than 6 ns was attained for a total CPU time of 5.1 years, representing only about 20 days of wall-clock time. The discussion of the results highlights the strong influence of solvation effects at the interface on the structure and electronic properties of the solute. © 2017 Wiley Periodicals, Inc.

  9. Enhanced Sampling of Molecular Dynamics Simulations of a Polyalanine Octapeptide: Effects of the Periodic Boundary Conditions on Peptide Conformation.

    PubMed

    Kasahara, Kota; Sakuraba, Shun; Fukuda, Ikuo

    2018-03-08

    We investigate the problem of artifacts caused by the periodic boundary conditions (PBC) used in molecular simulation studies. Despite the long history of simulations with PBCs, the existence of measurable artifacts originating from PBCs applied to inherently nonperiodic physical systems remains controversial. Specifically, these artifacts appear as differences between simulations of the same system but with different simulation-cell sizes. Earlier studies have implied that, even in the simple case of a small model peptide in water, sampling inefficiency is a major obstacle to understanding these artifacts. In this study, we have resolved the sampling issue using the replica exchange molecular dynamics (REMD) enhanced-sampling method to explore PBC artifacts. Explicitly solvated zwitterionic polyalanine octapeptides in three different cubic cells, with edge lengths L = 30, 40, and 50 Å, were investigated with 64-replica × 500 ns REMD simulations using the AMBER parm99SB force field. The differences among them were not large overall, and the conformational free energy landscapes for the L = 30 and 40 Å simulations were very similar at room temperature. However, a small but statistically significant difference was seen for L = 50 Å. We observed that extended conformations were slightly overstabilized in the smaller systems. The origin of these artifacts is discussed by comparison to an electrostatic calculation method without PBCs.

  10. Reactive Oxygen Species (ROS) generation by lunar simulants

    NASA Astrophysics Data System (ADS)

    Kaur, Jasmeet; Rickman, Douglas; Schoonen, Martin A.

    2016-05-01

    Current interest in human exploration of the Moon and the past experiences of Apollo astronauts have rekindled interest in the possible harmful effects of lunar dust on human health. In comparison to the Apollo-era explorations, human explorers may spend weeks on the Moon, which will raise the risk of inhalation exposure. The mineralogical composition of lunar dust is well documented, but its effects on human health are not fully understood. With the aim of understanding the reactivity of dusts that may be encountered on geologically different lunar terrains, we have studied Reactive Oxygen Species (ROS) generation by a suite of lunar simulants of different mineralogical-chemical composition dispersed in water and Simulated Lung Fluid (SLF). To further explore the reactivity of simulants under lunar environmental conditions, we compared the reactivity of simulants in air and in an inert atmosphere. As the impact of micrometeorites, with its consequent shock-induced stresses, is a major environmental factor on the Moon, we also studied the effect of mechanical stress on the samples. Mechanical stress was induced by hand crushing the samples, both in air and in an inert atmosphere. The reactivity of samples after crushing was analyzed for a period of up to nine days. Hydrogen peroxide (H2O2) in water and SLF was analyzed by an in situ electrochemical probe, and hydroxyl radical (•OH) by Electron Spin Resonance (ESR) spectroscopy and an adenine probe. Of all the simulants, CSM-CL-S was found to be the most reactive, followed by OB-1 and then JSC-1A. The overall reactivity of samples in the inert atmosphere was higher than in air. Freshly crushed samples showed a higher level of reactivity than uncrushed samples. Simulant samples treated to create agglutination, including the formation of zero-valent iron, showed less reactivity than untreated simulants. ROS generation in SLF is initially slower than in deionized water (DI), but ROS formation is sustained for as long as 7.5 h. By contrast, ROS forms rapidly, within 30 min, when simulants are dispersed in DI, but the concentration then either stabilizes or decreases over time. The results indicate that mechanical stress and the absence of molecular oxygen and water, which are important characteristics of the lunar environment, can lead to enhanced production of ROS in general. However, compositional difference among simulants is the most important factor governing the production of ROS. Simulants with glass content in excess of 40 wt% appear to produce as much as an order of magnitude more ROS than simulants with lower glass content.

  11. Landscape scale vegetation-type conversion and fire hazard in the San Francisco bay area open spaces

    USGS Publications Warehouse

    Russell, W.H.; McBride, J.R.

    2003-01-01

    Successional pressures resulting from fire suppression and reduced grazing have led to vegetation-type conversion in the open spaces surrounding the urbanized areas of the San Francisco bay area. Coverage of various vegetation types was sampled on seven sites using a chronosequence of remote images in order to measure change over time. Results suggest a significant conversion of grassland to shrubland dominated by Baccharis pilularis on five of the seven sites sampled. An increase in Pseudotsuga menziesii coverage was also measured on the sites where it was present. Increases in fuel and fire hazard were determined through field sampling and use of the FARSITE fire area simulator. A significant increase in biomass resulting from succession of grass-dominated to shrub-dominated communities was evident. In addition, results from the FARSITE simulations indicated significantly higher fire-line intensity and flame length for shrublands than for all other vegetation types sampled. These results indicate that the replacement of grass-dominated with shrub-dominated landscapes has increased the probability of high-intensity fires. © 2003 Elsevier Science B.V. All rights reserved.

  12. Semi-Autonomous Small Unmanned Aircraft Systems for Sampling Tornadic Supercell Thunderstorms

    NASA Astrophysics Data System (ADS)

    Elston, Jack S.

    This work describes the development of a network-centric unmanned aircraft system (UAS) for in situ sampling of supercell thunderstorms. UAS have been identified as a well-suited platform for meteorological observations given their portability, endurance, and ability to mitigate atmospheric disturbances. They represent a unique tool for performing targeted sampling in regions of a supercell thunderstorm previously unreachable through other methods. Doppler radar can provide unique measurements of the wind field in and around supercell thunderstorms. In order to exploit this capability, a planner was developed that can optimize ingress trajectories for severe storm penetration. The resulting trajectories were examined to determine the feasibility of such a mission, and to optimize ingress in terms of flight time and exposure to precipitation. A network-centric architecture was developed to handle the large amount of distributed data produced during a storm sampling mission. Creation of this architecture was performed through a bottom-up design approach which reflects and enhances the interplay between networked communication and autonomous aircraft operation. The advantages of the approach are demonstrated through several field and hardware-in-the-loop experiments containing different hardware, networking protocols, and objectives. Results are provided from field experiments involving the resulting network-centric architecture. An airmass boundary was sampled in the Collaborative Colorado Nebraska Unmanned Aircraft Experiment (CoCoNUE). Utilizing lessons learned from CoCoNUE, a new concept of operations (CONOPS) and UAS were developed to perform in situ sampling of supercell thunderstorms. Deployment during the Verification of the Origins of Rotation in Tornadoes Experiment 2 (VORTEX2) resulted in the first ever sampling of the airmass associated with the rear flank downdraft of a tornadic supercell thunderstorm by a UAS. Hardware-in-the-loop simulation capability was added to the UAS to enable further assessment of the system and CONOPS. The simulation combines a full six degree-of-freedom aircraft dynamic model with wind and precipitation data from simulations of severe convective storms. Interfaces were written to involve as much of the system's field hardware as possible, including the creation of a simulated radar product server. A variety of simulations were conducted to evaluate different aspects of the CONOPS used for the 2010 VORTEX2 field campaign.

  13. The Detection and Statistics of Giant Arcs behind CLASH Clusters

    NASA Astrophysics Data System (ADS)

    Xu, Bingxiao; Postman, Marc; Meneghetti, Massimo; Seitz, Stella; Zitrin, Adi; Merten, Julian; Maoz, Dani; Frye, Brenda; Umetsu, Keiichi; Zheng, Wei; Bradley, Larry; Vega, Jesus; Koekemoer, Anton

    2016-02-01

    We developed an algorithm to find and characterize gravitationally lensed galaxies (arcs) to perform a comparison of the observed and simulated arc abundance. Observations are from the Cluster Lensing And Supernova survey with Hubble (CLASH). Simulated CLASH images are created using the MOKA package and also clusters selected from the high-resolution, hydrodynamical simulations, MUSIC, over the same mass and redshift range as the CLASH sample. The algorithm's arc elongation accuracy, completeness, and false positive rate are determined and used to compute an estimate of the true arc abundance. We derive a lensing efficiency of 4 ± 1 arcs (with length ≥6″ and length-to-width ratio ≥7) per cluster for the X-ray-selected CLASH sample, 4 ± 1 arcs per cluster for the MOKA-simulated sample, and 3 ± 1 arcs per cluster for the MUSIC-simulated sample. The observed and simulated arc statistics are in full agreement. We measure the photometric redshifts of all detected arcs and find a median redshift zs = 1.9 with 33% of the detected arcs having zs > 3. We find that the arc abundance does not depend strongly on the source redshift distribution but is sensitive to the mass distribution of the dark matter halos (e.g., the c-M relation). Our results show that consistency between the observed and simulated distributions of lensed arc sizes and axial ratios can be achieved by using cluster-lensing simulations that are carefully matched to the selection criteria used in the observations.

  14. Modeling and Simulation of Upset-Inducing Disturbances for Digital Systems in an Electromagnetic Reverberation Chamber

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report describes a modeling and simulation approach for disturbance patterns representative of the environment experienced by a digital system in an electromagnetic reverberation chamber. The disturbance is modeled by a multi-variate statistical distribution based on empirical observations. Extended versions of the Rejection Sampling and Inverse Transform Sampling techniques are developed to generate multi-variate random samples of the disturbance. The results show that Inverse Transform Sampling returns samples with higher fidelity relative to the empirical distribution. This work is part of an ongoing effort to develop a resilience assessment methodology for complex safety-critical distributed systems.
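
    In its basic one-dimensional form, inverse transform sampling from an empirical distribution looks like the following; the report's extended multi-variate versions are more involved, and the gamma-distributed stand-in data are purely illustrative.

        import numpy as np

        def inverse_transform_sample(observations, n, rng):
            # Invert the empirical CDF by linear interpolation
            xs = np.sort(observations)
            cdf = np.arange(1, xs.size + 1) / xs.size
            return np.interp(rng.random(n), cdf, xs)

        rng = np.random.default_rng(1)
        obs = rng.gamma(2.0, 1.5, size=1000)  # stand-in for measured disturbances
        samples = inverse_transform_sample(obs, 10000, rng)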

  15. Health plan auditing: 100-percent-of-claims vs. random-sample audits.

    PubMed

    Sillup, George P; Klimberg, Ronald K

    2011-01-01

    The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.

  16. The Impact of Sampling Schemes on the Site Frequency Spectrum in Nonequilibrium Subdivided Populations

    PubMed Central

    Städler, Thomas; Haubold, Bernhard; Merino, Carlos; Stephan, Wolfgang; Pfaffelhuber, Peter

    2009-01-01

    Using coalescent simulations, we study the impact of three different sampling schemes on patterns of neutral diversity in structured populations. Specifically, we are interested in two summary statistics based on the site frequency spectrum as a function of migration rate, demographic history of the entire substructured population (including timing and magnitude of specieswide expansions), and the sampling scheme. Using simulations implementing both finite-island and two-dimensional stepping-stone spatial structure, we demonstrate strong effects of the sampling scheme on Tajima's D (DT) and Fu and Li's D (DFL) statistics, particularly under specieswide (range) expansions. Pooled samples yield average DT and DFL values that are generally intermediate between those of local and scattered samples. Local samples (and to a lesser extent, pooled samples) are influenced by local, rapid coalescence events in the underlying coalescent process. These processes result in lower proportions of external branch lengths and hence lower proportions of singletons, explaining our finding that the sampling scheme affects DFL more than it does DT. Under specieswide expansion scenarios, these effects of spatial sampling may persist up to very high levels of gene flow (Nm > 25), implying that local samples cannot be regarded as being drawn from a panmictic population. Importantly, many data sets on humans, Drosophila, and plants contain signatures of specieswide expansions and effects of sampling scheme that are predicted by our simulation results. This suggests that validating the assumption of panmixia is crucial if robust demographic inferences are to be made from local or pooled samples. However, future studies should consider adopting a framework that explicitly accounts for the genealogical effects of population subdivision and empirical sampling schemes. PMID:19237689
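
    For concreteness, Tajima's D can be computed directly from a sample of haplotypes, with the sampling scheme entering through which sequences form the rows. A standard textbook implementation (not the authors' coalescent machinery):

        import numpy as np

        def tajimas_d(haplotypes):
            # haplotypes: (n sequences x L sites) 0/1 matrix
            n = haplotypes.shape[0]
            freqs = haplotypes.mean(axis=0)
            seg = (freqs > 0) & (freqs < 1)
            S = int(seg.sum())
            if S == 0:
                return 0.0
            # Mean pairwise differences, summed over segregating sites
            pi = np.sum(2 * freqs[seg] * (1 - freqs[seg]) * n / (n - 1))
            i = np.arange(1, n)
            a1, a2 = np.sum(1.0 / i), np.sum(1.0 / i**2)
            b1 = (n + 1) / (3.0 * (n - 1))
            b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
            c1 = b1 - 1.0 / a1
            c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
            e1, e2 = c1 / a1, c2 / (a1**2 + a2)
            return (pi - S / a1) / np.sqrt(e1 * S + e2 * S * (S - 1))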

  17. Quantitative basis for component factors of gas flow proportional counting efficiencies

    NASA Astrophysics Data System (ADS)

    Nichols, Michael C.

    This dissertation investigates the counting efficiency calibration of a gas flow proportional counter with beta-particle emitters in order to (1) determine by measurements and simulation the values of the component factors of beta-particle counting efficiency for a proportional counter, (2) compare the simulation results and measured counting efficiencies, and (3) determine the uncertainty of the simulation and measurements. Monte Carlo simulation results from the MCNP5 code were compared with measured counting efficiencies as a function of sample thickness for 14C, 89Sr, 90Sr, and 90Y. The Monte Carlo model simulated strontium carbonate with areal thicknesses from 0.1 to 35 mg cm-2. The samples were precipitated as strontium carbonate with areal thicknesses from 3 to 33 mg cm-2, mounted on membrane filters, and counted on a low-background gas flow proportional counter. The estimated fractional standard deviation was 2-4% (except 6% for 14C) for the efficiency measurements of the radionuclides. The Monte Carlo simulations have uncertainties estimated to be 5 to 6 percent for carbon-14 and 2.4 percent for strontium-89, strontium-90, and yttrium-90. The curves of simulated counting efficiency vs. sample areal thickness agreed within 3% with the curves of best fit drawn through the 25-49 measured points for each of the four radionuclides. Contributions from this research include development of uncertainty budgets for the analytical processes; evaluation of alternative methods for determining the chemical yield critical to the measurement process; correction of a bias found in the MCNP normalization of beta-spectra histograms; clarification of the interpretation of the commonly used ICRU beta-particle spectra for use by MCNP; and evaluation of instrument parameters as applied to the simulation model to obtain estimates of the counting efficiency from simulated pulse height tallies.

  18. Using Computer Graphics in Statistics.

    ERIC Educational Resources Information Center

    Kerley, Lyndell M.

    1990-01-01

    Described is software which allows a student to use simulation to produce analytical output as well as graphical results. The results include a frequency histogram of a selected population distribution, a frequency histogram of the distribution of the sample means, and tests of normality for the distribution of the sample means. (KR)

  19. Sampling Enrichment toward Target Structures Using Hybrid Molecular Dynamics-Monte Carlo Simulations

    PubMed Central

    Yang, Kecheng; Różycki, Bartosz; Cui, Fengchao; Shi, Ce; Chen, Wenduo; Li, Yunqi

    2016-01-01

    Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical in both the refinement of protein structures and the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward the target structures. In this approach, higher SE is achieved by perturbing conventional MD simulations with a MC structure-acceptance judgment, which is based on the degree of coincidence of small-angle x-ray scattering (SAXS) intensity profiles between the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both secondary and tertiary structure. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-squared deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations came, on average, 0.83 Å and 1.73 Å closer in RMSD to the target than the parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states are gradually detectable in the MD-MC simulations in comparison with the parallel MD simulations, providing >200% improvement in SE. We also tested the hybrid MD-MC approach on real protein systems; SE improved for 3 of the 5 proteins examined. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation. PMID:27227775
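
    The MC structure-acceptance judgment can be caricatured as a Metropolis-style test on the SAXS discrepancy between a candidate structure and the target profile. The acceptance stiffness beta below is an invented knob, not a parameter reported in the paper:

        import numpy as np

        def accept_structure(chi2_new, chi2_old, beta, rng):
            # chi2_*: discrepancy between simulated and target SAXS profiles
            if chi2_new <= chi2_old:
                return True                 # better fit: always keep
            return rng.random() < np.exp(-beta * (chi2_new - chi2_old))

        rng = np.random.default_rng(3)
        keep = accept_structure(chi2_new=1.2, chi2_old=1.0, beta=50.0, rng=rng)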

  1. [Development of a microenvironment test chamber for airborne microbe research].

    PubMed

    Zhan, Ningbo; Chen, Feng; Du, Yaohua; Cheng, Zhi; Li, Chenyu; Wu, Jinlong; Wu, Taihu

    2017-10-01

    One of the most important environmental cleanliness indicators is the airborne microbe count. However, the particularity of clean operating environments and controlled experimental environments often limits airborne microbe research. This paper describes the design and implementation of a microenvironment test chamber for airborne microbe research under normal test conditions. Numerical simulation with Fluent showed that airborne microbes were evenly dispersed in the upper part of the test chamber, with concentration increasing from bottom to top. Based on the simulation results, a verification experiment was carried out by selecting 5 sampling points at different spatial positions in the test chamber. Experimental results showed that the average particle concentrations at all sampling points reached 10⁷ counts/m³ after 5 minutes of dispersing Staphylococcus aureus, and all sampling points showed consistent concentration distributions. The concentration of airborne microbes in the upper chamber was slightly higher than in the middle chamber, which in turn was slightly higher than in the bottom chamber. This is consistent with the results of the numerical simulation and demonstrates that the system can be used effectively for airborne microbe research.

  2. Simulating carbon sequestration using cellular automata and land use assessment for Karaj, Iran

    NASA Astrophysics Data System (ADS)

    Khatibi, Ali; Pourebrahim, Sharareh; Mokhtar, Mazlin Bin

    2018-06-01

    Carbon sequestration has been proposed as a means of slowing the atmospheric and marine accumulation of greenhouse gases. This study used observed and simulated land use/cover changes to investigate and predict carbon sequestration rates in the city of Karaj. Karaj, a metropolis of Iran, has undergone rapid population expansion and associated changes in recent years, and these changes make it suitable for use as a case study for rapidly expanding urban areas. In particular, high quality agricultural space, green space and gardens have rapidly transformed into industrial, residential and urban service areas. Five classes of land use/cover (residential, agricultural, rangeland, forest and barren areas) were considered in the study; vegetation and soil samples were taken from 20 randomly selected locations. The level of carbon sequestration was determined for the vegetation samples by calculating the amount of organic carbon present using the dry plant weight method, and for soil samples by using the method of Walkley and Black. For each area class, average values of carbon sequestration in vegetation and soil samples were calculated to give a carbon sequestration index. A cellular automata approach was used to simulate changes in the classes. Finally, the carbon sequestration indices were combined with simulation results to calculate changes in carbon sequestration for each class. It is predicted that, in the 15 year period from 2014 to 2029, much agricultural land will be transformed into residential land, resulting in a severe reduction in the level of carbon sequestration. Results from this study indicate that expansion of forest areas in urban counties would be an effective means of increasing the levels of carbon sequestration. Finally, future opportunities to include carbon sequestration into the simulation of land use/cover changes are outlined.

  3. DNAPL distribution in the source zone: Effect of soil structure and uncertainty reduction with increased sampling density

    NASA Astrophysics Data System (ADS)

    Pantazidou, Marina; Liu, Ke

    2008-02-01

    This paper focuses on parameters describing the distribution of dense nonaqueous phase liquid (DNAPL) contaminants and investigates the variability of these parameters that results from soil heterogeneity. In addition, it quantifies the uncertainty reduction that can be achieved with increased density of soil sampling. Numerical simulations of DNAPL releases were performed using stochastic realizations of hydraulic conductivity fields generated with the same geostatistical parameters and conditioning data at two sampling densities, thus generating two simulation ensembles of low and high density (three-fold increase) of soil sampling. The results showed that DNAPL plumes in aquifers identical in a statistical sense exhibit qualitatively different patterns, ranging from compact to finger-like. The corresponding quantitative differences were expressed by defining several alternative measures that describe the DNAPL plume and computing these measures for each simulation of the two ensembles. The uncertainty in the plume features under study was affected to different degrees by the variability of the soil, with coefficients of variation ranging from about 20% to 90%, for the low-density sampling. Meanwhile, the increased soil sampling frequency resulted in reductions of uncertainty varying from 7% to 69%, for low- and high-uncertainty variables, respectively. In view of the varying uncertainty in the characteristics of a DNAPL plume, remedial designs that require estimates of the less uncertain features of the plume may be preferred over others that need a more detailed characterization of the source zone architecture.

  4. Prediction of meat spectral patterns based on optical properties and concentrations of the major constituents.

    PubMed

    ElMasry, Gamal; Nakauchi, Shigeki

    2016-03-01

    A simulation method for approximating the spectral signatures of minced meat samples was developed based on the concentrations and optical properties of the major chemical constituents. Minced beef samples of different compositions, scanned with a near-infrared spectrometer and a hyperspectral imaging system, were examined. Chemical compositions determined heuristically and optical properties collected from authenticated references were combined to approximate the samples' spectral signatures. In the short-wave infrared range, the resulting spectrum equals the sum of the absorption of three individual absorbers, that is, water, protein, and fat. By assuming homogeneous distributions of the main chromophores in the minced samples, the obtained absorption spectra are found to be a linear combination of the absorption spectra of the major chromophores present in the sample. Results revealed that the developed models were good enough to derive spectral signatures of minced meat samples with a reasonable level of robustness, achieving an agreement index above 0.90 and a ratio of performance to deviation above 1.4.
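
    The linear-mixing assumption can be written out directly: the sample absorbance is the concentration-weighted sum of the constituent spectra. Band positions and heights below are rough placeholders, not the reference optical properties used by the authors.

        import numpy as np

        wavelengths = np.linspace(1000, 2500, 300)  # nm, short-wave infrared

        def band(center, width, height):
            return height * np.exp(-(wavelengths - center)**2 / (2 * width**2))

        pure = {"water":   band(1450, 60, 1.0) + band(1940, 70, 1.4),
                "protein": band(2050, 80, 0.6) + band(2180, 60, 0.5),
                "fat":     band(1210, 50, 0.4) + band(1730, 40, 0.7)}

        composition = {"water": 0.70, "protein": 0.21, "fat": 0.09}  # mass fractions

        # A(lambda) = sum_i c_i * A_i(lambda)
        sample_spectrum = sum(c * pure[k] for k, c in composition.items())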

  5. Motions and entropies in proteins as seen in NMR relaxation experiments and molecular dynamics simulations.

    PubMed

    Allnér, Olof; Foloppe, Nicolas; Nilsson, Lennart

    2015-01-22

    Molecular dynamics simulations of E. coli glutaredoxin1 in water have been performed to relate the dynamical parameters and entropy obtained in NMR relaxation experiments with results extracted from simulated trajectory data. NMR relaxation is the most widely used experimental method for obtaining data on the dynamics of proteins, but it is limited to relatively short timescales and to motions of backbone amides or, in some cases, (13)C-H vectors. By relating the experimental data to the all-atom picture obtained in molecular dynamics simulations, valuable insights into the interpretation of the experiment can be gained. We have estimated the internal dynamics and their timescales by calculating the generalized order parameters (O) for different time windows. We then calculate the quasiharmonic entropy (S) and compare it to the entropy calculated from the NMR-derived generalized order parameter of the amide vectors. Special emphasis is put on characterizing dynamics that are not expressed through the motions of the amide group. The NMR and MD methods suffer from complementary limitations, with NMR being restricted to local vectors and dynamics on a timescale determined by the rotational diffusion of the solute, while in simulations it may be difficult to obtain sufficient sampling to ensure convergence of the results. We also evaluate the amount of sampling obtained with molecular dynamics simulations, and how it is affected by the length of individual simulations, by clustering the sampled conformations. We find that two structural turns act as hinges, allowing the α helix between them to undergo large, long-timescale motions that cannot be detected in the time window of the NMR dipolar relaxation experiments. We also show that the entropy obtained from the amide vector does not account for correlated motions of adjacent residues. Finally, we show that the sampling in a total of 100 ns of molecular dynamics simulation can be increased by around 50% by dividing the trajectory into 10 replicas with different starting velocities.
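
    As a point of contact between the two methods, the generalized order parameter can be computed from a simulated bond-vector trajectory with the standard Lipari-Szabo expression. A minimal sketch, assuming overall rotation has already been removed from the trajectory:

        import numpy as np

        def order_parameter(vecs):
            # vecs: (n_frames, 3) N-H bond vectors from an MD trajectory
            # S^2 = 0.5 * (3 * sum_ab <u_a u_b>^2 - 1)
            u = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
            q = np.einsum("ti,tj->ij", u, u) / u.shape[0]  # <u_a u_b> over frames
            return 0.5 * (3.0 * np.sum(q**2) - 1.0)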

  6. Characterization of naturally occurring radioactive materials in Libyan oil pipe scale using a germanium detector and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Habib, A. S.; Shutt, A. L.; Regan, P. H.; Matthews, M. C.; Alsulaiti, H.; Bradley, D. A.

    2014-02-01

    Radioactive scale formation in various oil production facilities is acknowledged to pose a potentially significant health and environmental issue. The presence of such an issue in Libyan oil fields was recognized as early as 1998. The naturally occurring radioactive materials (NORM) involved are radium isotopes (226Ra and 228Ra) and their decay products, which precipitate into scales formed on the surfaces of production equipment. A field trip to a number of onshore Libyan oil fields indicated elevated levels of specific activity at a number of locations in some of the more mature oil fields. In this study, oil scale samples collected from different parts of Libya have been characterized by gamma spectroscopy using a well-shielded HPGe spectrometer. To avoid potential inhalation of alpha-bearing dust, and in accord with safe working practices at this University, the samples, contained in plastic bags and existing in different geometries, are not permitted to be opened. MCNP, a Monte Carlo simulation code, is being used to simulate the spectrometer and the scale samples in order to obtain the system's absolute efficiency and then to calculate sample specific activities. The samples are assumed to have uniform densities and homogeneously distributed activity. Present results are compared to two extreme situations that were assumed in a previous study: (i) with the entire activity concentrated at a point on the sample surface proximal to the detector, simulating the lowest sample activity, and (ii) with the entire activity concentrated at a point on the sample surface distal to the detector, simulating the highest sample activity.

  7. Interlaboratory comparability, bias, and precision for four laboratories measuring constituents in precipitation, November 1982-August 1983

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Malo, B.A.

    1985-01-01

    Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory has been indicated when the laboratory mean for that analyte is shown to be significantly different from the mean for the most-probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate have been compared with the colorimetric methods that were also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory has been estimated by calculating a pooled variance for each analyte. Estimated analyte precisions have been compared using F-tests, and differences in analyte precisions for laboratory pairs have been reported. (USGS)

  8. Simulations of single-particle imaging of hydrated proteins with x-ray free-electron lasers

    NASA Astrophysics Data System (ADS)

    Fortmann-Grote, C.; Bielecki, J.; Jurek, Z.; Santra, R.; Ziaja-Motyka, B.; Mancuso, A. P.

    2017-08-01

    We employ start-to-end simulations to model coherent diffractive imaging of single biomolecules using x-ray free electron lasers. This technique is expected to yield new structural information about biologically relevant macromolecules thanks to the ability to study the isolated sample in its natural environment as opposed to crystallized or cryogenic samples. The effect of the solvent on the diffraction pattern and interpretability of the data is an open question. We present first results of calculations where the solvent is taken into account explicitly. They were performed with a molecular dynamics scheme for a sample consisting of a protein and a hydration layer of varying thickness. Through R-factor analysis of the simulated diffraction patterns from hydrated samples, we show that the scattering background from realistic hydration layers of up to 3 Å thickness presents no obstacle for the resolution of molecular structures at the sub-nm level.

  9. Effect of Different Sampling Schedules on Results of Bioavailability and Bioequivalence Studies: Evaluation by Means of Monte Carlo Simulations.

    PubMed

    Kano, Eunice Kazue; Chiann, Chang; Fukuda, Kazuo; Porta, Valentina

    2017-08-01

    Bioavailability and bioequivalence studies are among the most frequently performed investigations in clinical trials. Bioequivalence testing is based on the assumption that 2 drug products will be therapeutically equivalent when they are equivalent in the rate and extent to which the active drug ingredient or therapeutic moiety is absorbed and becomes available at the site of drug action. In recent years there has been significant growth in published papers that use in silico studies based on mathematical simulations to analyze pharmacokinetic and pharmacodynamic properties of drugs, including bioavailability and bioequivalence aspects. The goal of this study is to evaluate the usefulness of in silico studies as a tool in the planning of bioequivalence, bioavailability and other pharmacokinetic assays, e.g., to determine an appropriate sampling schedule. Monte Carlo simulations were used to define adequate blood sampling schedules for a bioequivalence assay comparing 2 different formulations of cefadroxil oral suspensions. In silico bioequivalence studies comparing the formulations were performed with models using various sampling schedules. An in vivo study was conducted to confirm the in silico results. The results of the in silico and in vivo bioequivalence studies demonstrated that schedules with fewer sampling times are as efficient as schedules with larger numbers of sampling times in the assessment of bioequivalence, but only if Tmax is included as a sampling time. It was also concluded that in silico studies are useful tools in the planning of bioequivalence, bioavailability and other pharmacokinetic in vivo assays. © Georg Thieme Verlag KG Stuttgart · New York.
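
    The essence of such a simulation study can be illustrated with a one-compartment oral absorption model: generate concentration-time profiles, then compare AUC and Cmax estimates as the sampling schedule is thinned. All parameters below are invented, not cefadroxil fits.

        import numpy as np

        def conc_oral(t, dose=500.0, ka=1.2, ke=0.35, v=20.0):
            # One-compartment model with first-order absorption (illustrative)
            return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

        dense = np.array([0, 0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 6, 8, 12])
        sparse = np.array([0, 0.5, 1, 2, 4, 8, 12])  # fewer draws, still spans Tmax

        for sched in (dense, sparse):
            c = conc_oral(sched)
            print(f"{sched.size:2d} samples: Cmax={c.max():.2f}, "
                  f"AUC={np.trapz(c, sched):.1f}")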

  10. HYDRA: High Speed Simulation Architecture for Precision Spacecraft Formation Flying

    NASA Technical Reports Server (NTRS)

    Martin, Bryan J.; Sohl, Garett A.

    2003-01-01

    This viewgraph presentation describes HYDRA, which is architecture to facilitate high-fidelity and real-time simulation of formation flying missions. The contents include: 1) Motivation; 2) Objective; 3) HYDRA-Description and Overview; 4) HYDRA-Hierarchy; 5) Communication in HYDRA; 6) Simulation Specific Concerns in HYDRA; 7) Example application (Formation Acquisition); and 8) Sample Problem Results.

  11. Free-energy analyses of a proton transfer reaction by simulated-tempering umbrella sampling and first-principles molecular dynamics simulations.

    PubMed

    Mori, Yoshiharu; Okamoto, Yuko

    2013-02-01

    A simulated-tempering method, referred to as simulated-tempering umbrella sampling, for calculating the free energy of chemical reactions is proposed. First-principles molecular dynamics simulations with this simulated tempering were performed to study the intramolecular proton transfer reaction of malonaldehyde in aqueous solution. Conformational sampling in reaction-coordinate space can be easily enhanced with this method, and the free energy along a reaction coordinate can be calculated accurately. Moreover, simulated-tempering umbrella sampling provides trajectory data more efficiently than the conventional umbrella sampling method.
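
    Within any single umbrella window, the unbiasing step is simple reweighting: the biased histogram is multiplied by exp(+w(xi)/kBT), where w is the harmonic restraint. A sketch of that step alone (combining windows requires WHAM or similar, and the simulated-tempering component is not shown):

        import numpy as np

        KBT = 0.593  # kcal/mol near 298 K

        def unbias_window(xi_samples, xi0, k_spring, bins=50):
            # xi_samples: reaction-coordinate values sampled under the bias
            #             w(xi) = 0.5 * k_spring * (xi - xi0)**2
            hist, edges = np.histogram(xi_samples, bins=bins, density=True)
            centers = 0.5 * (edges[:-1] + edges[1:])
            w = 0.5 * k_spring * (centers - xi0) ** 2
            p = hist * np.exp(w / KBT)                 # remove the bias
            f = -KBT * np.log(np.where(p > 0, p, np.nan))
            return centers, f - np.nanmin(f)           # free energy, zeroed at minimum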

  12. The simulation of temperature distribution and relative humidity with liquid concentration of 50% using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Yohana, Eflita; Yulianto, Mohamad Endy; Kwang-Hwang, Choi; Putro, Bondantio; Yohanes Aditya W., A.

    2015-12-01

    Simulation of the humidity distribution inside a room has been widely studied using computational fluid dynamics (CFD). Here, the simulation used inputs from an experiment on air humidity reduction in a sample house. The liquid desiccant CaCl2 was used to absorb humidity from the air, so that the magnitude of the humidity reduction occurring during the experiment could be obtained. The experiment was conducted at 8 in the morning with a liquid desiccant concentration of 50%, a nozzle diameter of 0.2 mm in the dehumidifier, and an air flow into the sample house of 2.35 m³/min. On both the inlet and outlet sides of the room, a DHT 11 sensor was installed to record changes in humidity and temperature during the experiment. In the normal condition, without the dehumidifier running, the sensor recorded an average indoor temperature of 28°C and an RH of 65%. The experimental results showed that the relative humidity inside the sample house decreased to 52% at the inlet position. Further, the CFD simulation results show the temperature distribution and relative humidity inside the sample house: with a liquid desiccant concentration of 50%, the relative humidity distribution was considerably improved, with an average RH of 55%, accompanied by an increase in air temperature to 29.2°C inside the sample house.

  13. Computational analysis for selectivity of histone deacetylase inhibitor by replica-exchange umbrella sampling molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Shuichiro; Sakae, Yoshitake; Itoh, Yukihiro; Suzuki, Takayoshi; Okamoto, Yuko

    2018-03-01

    We performed protein-ligand docking simulations with a ligand T247, which has been reported as a selective inhibitor of the histone deacetylase HDAC3, by the replica-exchange umbrella sampling method in order to estimate the free energy profiles along ligand docking pathways of the HDAC3-T247 and HDAC2-T247 systems. The simulation results showed that the docked state of the HDAC3-T247 system is more stable than that of the HDAC2-T247 system, although the amino-acid sequences and structures of HDAC3 and HDAC2 are very similar. By comparing structures obtained from the simulations of both systems, we found a difference between the structures of hydrophobic residues at the entrance of the catalytic site. Moreover, we performed conventional molecular dynamics simulations of the HDAC3 and HDAC2 systems without T247, and the results showed the same difference in the hydrophobic structures. Therefore, we consider that this hydrophobic structure contributes to the stabilization of the docked state of the HDAC3-T247 system. Furthermore, simulation results for a mutated HDAC2 system show that Tyr209, one of the hydrophobic residues in HDAC2, plays a key role in this instability.

  14. Analysis of simulated high burnup nuclear fuel by laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Singh, Manjeet; Sarkar, Arnab; Banerjee, Joydipta; Bhagat, R. K.

    2017-06-01

    Advanced Heavy Water Reactor (AHWR) grade (Th-U)O2 fuel samples and Simulated High Burn-Up Nuclear Fuel (SIMFUEL) samples mimicking fuel irradiated to burn-ups of 28 and 43 GWd/Te were studied using a laser-induced breakdown spectroscopy (LIBS) setup in a simulated hot-cell environment from a distance of > 1.5 m. A spectral resolution of < 38 pm was used to record the complex spectra of the SIMFUEL samples. Using spectrum comparison and database matching, > 60 emission lines of fission products were identified, of which only a few were suitable for generating calibration curves. The study demonstrates the possibility of investigating impurities at concentrations of around hundreds of ppm, rapidly, at atmospheric pressure and without any sample preparation. The results for Ba and Mo showed the advantage of LIBS analysis over traditional methods involving sample dissolution, which introduce possible elemental loss. Limits of detection (LOD) under an Ar atmosphere show significant improvement, which is shown to be due to the formation of a stable plasma.
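
    The calibration-curve step mentioned here follows a standard pattern: regress the net intensity of an emission line against known concentrations, then derive a 3-sigma detection limit from the fit. The sketch below uses invented data points and takes the fit residuals as a proxy for blank noise, which is a common simplification.

```python
import numpy as np

# Hypothetical calibration data for one emission line: concentration (ppm) vs. net intensity
conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0])          # ppm
intensity = np.array([12.0, 151.0, 297.0, 602.0, 1188.0])  # arbitrary units

slope, intercept = np.polyfit(conc, intensity, 1)          # linear calibration fit
pred = slope * conc + intercept
sigma_blank = np.std(intensity - pred, ddof=2)             # residual std. dev. as blank-noise proxy

lod = 3.0 * sigma_blank / slope                            # common 3-sigma LOD convention
print(f"slope = {slope:.3f} counts/ppm, LOD ~ {lod:.1f} ppm")
```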

  15. Effect of simulated sampling disturbance on creep behaviour of rock salt

    NASA Astrophysics Data System (ADS)

    Guessous, Z.; Gill, D. E.; Ladanyi, B.

    1987-10-01

    This article presents the results of an experimental study of the creep behaviour of a rock salt under uniaxial compression as a function of prestrain simulating sampling disturbance. The prestrain was produced by radial compressive loading of the specimens prior to creep testing. The tests were conducted on an artificial salt to avoid excessive scattering of the results. The results obtained from several series of single-stage creep tests show that, in the short term, the creep response of salt is strongly affected by the preloading history of the samples. The nature of this effect depends upon the intensity of the radial compressive preloading, and its magnitude is a function of the creep stress level. The effect, however, decreases with increasing plastic deformation, indicating that large creep strains may eventually lead to a complete loss of preloading memory.

  16. When Can Clades Be Potentially Resolved with Morphology?

    PubMed Central

    Bapst, David W.

    2013-01-01

    Morphology-based phylogenetic analyses are the only option for reconstructing relationships among extinct lineages, but often find support for conflicting hypotheses of relationships. The resulting lack of phylogenetic resolution is generally explained in terms of data quality and methodological issues, such as character selection. A previous suggestion is that sampling ancestral morphotaxa, or sampling multiple taxa descended from a long-lived, unchanging lineage, can also yield clades which have no opportunity to share synapomorphies. This lack of character information leads to a lack of ‘intrinsic’ resolution, an issue that cannot be solved with additional morphological data. It is unclear how often we should expect clades to be intrinsically resolvable in realistic circumstances, as intrinsic resolution must increase as taxonomic sampling decreases. Using branching simulations, I quantify intrinsic resolution across several models of morphological differentiation and taxonomic sampling. Intrinsically unresolvable clades are found to be relatively frequent in simulations of both extinct and living taxa under realistic sampling scenarios, implying that intrinsic resolution is an issue for morphology-based analyses of phylogeny. Simulations varying the rates of sampling and differentiation were tested both for agreement with observed distributions of taxon durations from well-sampled fossil records and for high intrinsic resolution; this combination occurs only when differentiation and sampling rates are both unrealistically high relative to branching and extinction rates. Thus, the poor phylogenetic resolution occasionally observed in morphological phylogenetics may result from a lack of intrinsic resolvability within groups. PMID:23638034
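
    A stripped-down version of such a branching simulation is sketched below: budding cladogenesis in which the ancestor lineage persists through branching and the descendant differentiates morphologically only with some probability, followed by incomplete sampling. Counting sampled ancestor-descendant morphotaxon pairs gives a crude proxy for clades that cannot be intrinsically resolved; all rates here are illustrative, not the paper's calibrated models.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(lam=1.0, mu=0.9, p_diff=0.5, p_samp=0.5, t_max=6.0):
    # Budding cladogenesis: the ancestor lineage persists through branching,
    # and the descendant becomes a new morphotaxon only with prob. p_diff.
    parent = {0: None}             # morphotaxon -> ancestral morphotaxon
    lineages = [0]                 # living lineages, labelled by morphotaxon
    t, nxt = 0.0, 1
    while lineages:
        t += rng.exponential(1.0 / ((lam + mu) * len(lineages)))
        if t > t_max:
            break
        i = rng.integers(len(lineages))
        if rng.random() < lam / (lam + mu):    # branching event
            if rng.random() < p_diff:          # descendant differentiates
                parent[nxt] = lineages[i]
                lineages.append(nxt)
                nxt += 1
            else:                              # descendant identical to its ancestor
                lineages.append(lineages[i])
        else:                                  # extinction event
            lineages.pop(i)
    sampled = {m for m in parent if rng.random() < p_samp}
    # Sampled ancestor-descendant morphotaxon pairs share no synapomorphies,
    # so the clade uniting them cannot be intrinsically resolved.
    bad = sum(1 for m in sampled if parent[m] in sampled)
    return len(sampled), bad

tot = np.array([simulate() for _ in range(500)]).sum(axis=0)
print(f"sampled morphotaxa: {tot[0]}, unresolvable ancestor-descendant pairs: {tot[1]}")
```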

  17. A user's guide to the ssWavelets package

    Treesearch

    J.H. Gove

    2017-01-01

    ssWavelets is an R package that is meant to be used in conjunction with the sampSurf package (Gove, 2012) to perform wavelet decomposition on the results of a sampling surface simulation. In general, the wavelet filter decomposes the sampSurf simulation results by scale (distance), with each scale corresponding to a different level of the...

  18. Superhydrophobic surfaces: From nature to biomimetic through VOF simulation.

    PubMed

    Liu, Chunbao; Zhu, Ling; Bu, Weiyang; Liang, Yunhong

    2018-04-01

    The contact angle, surface structure and chemical composition of Canna leaves were investigated. Based on the surface structure of Canna leaves observed by scanning electron microscopy (SEM), a CFD (computational fluid dynamics) model was established and the volume of fluid (VOF) method was used to simulate a droplet impacting the surface; a smooth surface was also modeled for comparison, verifying that the surface structure is an important factor in the superhydrophobic properties. Based on the study of the Canna leaf and the VOF simulation of its surface structure, superhydrophobic samples were processed successfully and showed good superhydrophobic behaviour, with a contact angle of 156 ± 1 degrees. A high-speed camera (5000 frames per second) was used to assess droplet movement and determine the contact time on the samples; the contact time for the sample was 13.1 ms. The results showed that the artificial superhydrophobic surface performs its superhydrophobic function well. The VOF simulation method is an efficient, accurate and low-cost tool to apply before machining artificial superhydrophobic samples.

  19. Migration of formaldehyde from melamine-ware: UK 2008 survey results.

    PubMed

    Potter, E L J; Bradley, E L; Davies, C R; Barnes, K A; Castle, L

    2010-06-01

    Fifty melamine-ware articles were tested for the migration of formaldehyde - with hexamethylenetetramine (HMTA) expressed as formaldehyde - to see whether the total specific migration limit (SML(T)) was being observed. The SML(T), given in European Commission Directive 2002/72/EC as amended, is 15 mg/kg. Fourier transform-infrared (FT-IR) spectroscopy was carried out on the articles to confirm the plastic type. Articles were exposed to the food simulant 3% (w/v) aqueous acetic acid under conditions representing their worst foreseeable use. Formaldehyde and HMTA in food simulants were determined by a spectrophotometric derivatization procedure. Positive samples were confirmed by a second spectrophotometric procedure using an alternative derivatization agent. As all products purchased were intended for repeat use, three sequential exposures to the simulant were carried out. Formaldehyde was detected in the simulant exposed to 43 samples. Most of the levels found were well below the limits set in law, such that 84% of the samples tested were compliant. However, eight samples had formaldehyde levels that were clearly above the legal maximum, at 6 to 65 times the SML(T).

  20. Simulations of a Thin Sampling Calorimeter with GEANT/FLUKA

    NASA Technical Reports Server (NTRS)

    Lee, Jeongin; Watts, John; Howell, Leonard; Rose, M. Franklin (Technical Monitor)

    2000-01-01

    The Advanced Cosmic-ray Composition Experiment for the Space Station (ACCESS) will investigate the origin, composition and acceleration mechanism of cosmic rays by measuring the elemental composition of the cosmic rays up to 10^15 eV. These measurements will be made with a thin ionization calorimeter and a transition radiation detector. This paper reports studies of a thin sampling calorimeter concept for the ACCESS thin ionization calorimeter. For the past year, a Monte Carlo simulation study of a Thin Sampling Calorimeter (TSC) design has been conducted to predict the detector performance and to design the system for achieving the ACCESS scientific objectives. Simulation results show that the detector energy resolution function resembles a Gaussian distribution and the energy resolution of the TSC is about 40%. In addition, simulations of the detector's response to an assumed broken power-law cosmic ray spectrum in the region where the 'knee' of the cosmic ray spectrum occurs have been conducted and clearly show that a thin sampling calorimeter can provide sufficiently accurate estimates of the spectral parameters to meet the science requirements of ACCESS.

  1. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by their relatively smaller required sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation): a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are provided for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method outperform Sobel's method; the distribution of the product method is recommended in practice because it requires less computation time than bootstrapping. An R package has been developed for sample size determination by the product method in longitudinal mediation study designs.
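
    The logic of such a power simulation is easy to sketch for the simplest, single-level cross-sectional case: simulate X -> M -> Y data, apply Sobel's z-test to the product of the two path estimates, and count rejections. The multilevel longitudinal version studied in the paper adds within-subject correlation and repeated measures on top of this skeleton; the effect sizes and replicate counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def sobel_power(n, a=0.3, b=0.3, n_rep=2000):
    """Empirical power of Sobel's test for the mediated effect a*b in the
    simple cross-sectional model X -> M -> Y (no intercepts; variables are
    mean-zero by construction)."""
    hits = 0
    for _ in range(n_rep):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        # Path a: regress M on X
        a_hat = x @ m / (x @ x)
        sa2 = np.sum((m - a_hat * x) ** 2) / (n - 1) / (x @ x)
        # Path b: regress Y on X and M jointly
        X = np.column_stack([x, m])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        cov = np.sum(resid ** 2) / (n - 2) * np.linalg.inv(X.T @ X)
        b_hat, sb2 = beta[1], cov[1, 1]
        # Sobel's z statistic for the product of coefficients
        z = a_hat * b_hat / np.sqrt(a_hat**2 * sb2 + b_hat**2 * sa2)
        hits += abs(z) > 1.96              # two-sided test at alpha = 0.05
    return hits / n_rep

for n in (50, 100, 150, 200):
    print(f"n = {n:3d}: power = {sobel_power(n):.2f}")
```

    Scanning n until the empirical power first exceeds 0.80 gives the required sample size for this test; the same outer loop works for the bootstrap and distribution-of-the-product tests by swapping the inner decision rule.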

  2. A small single-nozzle rainfall simulator to measure erosion response on different burn severities in southern British Columbia, Canada

    NASA Astrophysics Data System (ADS)

    Covert, Ashley; Jordan, Peter

    2010-05-01

    To study the effects of wildfire burn severity on runoff generation and soil erosion from high-intensity rainfall, we constructed an effective yet simple rainfall simulator that was inexpensive, portable and easily operated by two people on steep, forested slopes in southern British Columbia, Canada. The entire apparatus, including simulator, pumps, hoses, collapsible water bladders and sample bottles, was designed to fit into a single full-sized pick-up truck. The three-legged simulator extended to approximately 3.3 metres above the ground on steep slopes and used a single Spraying Systems 1/2HH-30WSQ nozzle, which can easily be interchanged for other nozzle sizes. Rainfall characteristics were measured using a digital camera which took images of the raindrops against a grid. Median drop size and velocity 5 cm above the ground were measured; the drops were 3/4 of the size of natural raindrops of that diameter class and fell 7% faster than terminal velocity. The simulator was used for experiments on runoff and erosion on sites burned in 2007 by two wildfires in southern British Columbia. Simulations were repeated one and two years after the fires. Rainfall was simulated at an average rate of 67 mm/hr over a 1 m2 plot for 20 minutes. This rainfall rate is similar to the 100-year return period rainfall intensity for this duration at a nearby weather station. Simulations were conducted on five replicate 1 m2 plots in each experimental unit, including high burn severity, moderate burn severity, unburned, and unburned with forest floor removed. During the simulation a sample was collected for 30 seconds every minute, with two additional samples until runoff ceased, resulting in 22 samples per simulation. Runoff, overland flow coefficient, infiltration and sediment yield were compared between treatments. Additional simulations were conducted immediately after a 2009 wildfire to test different mulch treatments. Typical results showed that runoff on plots with high burn severity and with forest floor removed was similar, reaching on average a steady rate of about 60% of the rainfall rate after about 7 minutes. Runoff on unburned plots with intact forest floor was much lower, typically less than 20% of the rainfall rate. Sediment yield was greatest on plots with forest floor removed, followed by severely burned plots. Sediment yield on unburned and moderately burned plots was very low to zero. These results are consistent with qualitative observations made following several extreme rainfall events on recent burns in the region.

  3. Panchromatic spectral energy distributions of simulated galaxies: results at redshift z = 0

    NASA Astrophysics Data System (ADS)

    Goz, David; Monaco, Pierluigi; Granato, Gian Luigi; Murante, Giuseppe; Domínguez-Tenreiro, Rosa; Obreja, Aura; Annunziatella, Marianna; Tescari, Edoardo

    2017-08-01

    We present predictions of spectral energy distributions (SEDs), from the UV to the FIR, of simulated galaxies at z = 0. These were obtained by post-processing the results of an N-body+hydro simulation of a cosmological box of side 25 Mpc, which uses the Multi-Phase Particle Integrator (MUPPI) for star formation and stellar feedback, with the grasil-3d radiative transfer code that includes reprocessing of UV light by dust. Physical properties of our sample of ˜500 galaxies resemble observed ones, though with some tension at small and large stellar masses. Comparing predicted SEDs of simulated galaxies with different samples of local galaxies, we find that these resemble observed ones, when normalized at 3.6 μm. A comparison with the Herschel Reference Survey shows that the average SEDs of galaxies, divided in bins of star formation rate (SFR), are reproduced in shape and absolute normalization to within a factor of ˜2, while average SEDs of galaxies divided in bins of stellar mass show tensions that are an effect of the difference of simulated and observed galaxies in the stellar mass-SFR plane. We use our sample to investigate the correlation of IR luminosity in Spitzer and Herschel bands with several galaxy properties. SFR is the quantity that best correlates with IR light up to 160 μm, while at longer wavelengths better correlations are found with molecular mass and, at 500 μm, with dust mass. However, using the position of the FIR peak as a proxy for cold dust temperature, we assess that heating of cold dust is mostly determined by SFR, with stellar mass giving only a minor contribution. We finally show how our sample of simulated galaxies can be used as a guide to understand the physical properties and selection biases of observed samples.

  4. Order parameter free enhanced sampling of the vapor-liquid transition using the generalized replica exchange method.

    PubMed

    Lu, Qing; Kim, Jaegil; Straub, John E

    2013-03-14

    The generalized Replica Exchange Method (gREM) is extended into the isobaric-isothermal ensemble, and applied to simulate a vapor-liquid phase transition in Lennard-Jones fluids. Merging an optimally designed generalized ensemble sampling with replica exchange, gREM is particularly well suited for the effective simulation of first-order phase transitions characterized by "backbending" in the statistical temperature. While the metastable and unstable states in the vicinity of the first-order phase transition are masked by the enthalpy gap in temperature replica exchange method simulations, they are transformed into stable states through the parameterized effective sampling weights in gREM simulations, and join vapor and liquid phases with a succession of unimodal enthalpy distributions. The enhanced sampling across metastable and unstable states is achieved without the need to identify a "good" order parameter for biased sampling. We performed gREM simulations at various pressures below and near the critical pressure to examine the change in behavior of the vapor-liquid phase transition at different pressures. We observed a crossover from the first-order phase transition at low pressure, characterized by the backbending in the statistical temperature and the "kink" in the Gibbs free energy, to a continuous second-order phase transition near the critical pressure. The controlling mechanisms of nucleation and continuous phase transition are evident and the coexistence properties and phase diagram are found in agreement with literature results.

  5. Demonstrating an Order-of-Magnitude Sampling Enhancement in Molecular Dynamics Simulations of Complex Protein Systems.

    PubMed

    Pan, Albert C; Weinreich, Thomas M; Piana, Stefano; Shaw, David E

    2016-03-08

    Molecular dynamics (MD) simulations can describe protein motions in atomic detail, but transitions between protein conformational states sometimes take place on time scales that are infeasible or very expensive to reach by direct simulation. Enhanced sampling methods, the aim of which is to increase the sampling efficiency of MD simulations, have thus been extensively employed. The effectiveness of such methods when applied to complex biological systems like proteins, however, has been difficult to establish because even enhanced sampling simulations of such systems do not typically reach time scales at which convergence is extensive enough to reliably quantify sampling efficiency. Here, we obtain sufficiently converged simulations of three proteins to evaluate the performance of simulated tempering, a member of a widely used class of enhanced sampling methods that use elevated temperature to accelerate sampling. Simulated tempering simulations with individual lengths of up to 100 μs were compared to (previously published) conventional MD simulations with individual lengths of up to 1 ms. With two proteins, BPTI and ubiquitin, we evaluated the efficiency of sampling of conformational states near the native state, and for the third, the villin headpiece, we examined the rate of folding and unfolding. Our comparisons demonstrate that simulated tempering can consistently achieve a substantial sampling speedup of an order of magnitude or more relative to conventional MD.
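
    Simulated tempering itself is compact enough to sketch: the walker carries a temperature index m with stationary weight proportional to exp(-beta_m U + g_m), alternating position moves at the current temperature with Metropolis moves in temperature. The toy below uses a 1D double well and fixed, uniform weights g_m = 0 (in production the weights must be tuned or adapted so that all temperatures are visited); every parameter is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
U = lambda x: (x**2 - 1.0)**2 * 8.0              # double well, barrier ~8 kT at T = 1
betas = 1.0 / np.array([1.0, 1.5, 2.25, 3.4])    # temperature ladder
g = np.zeros(len(betas))                         # tempering weights (naive: uniform)

x, m = 1.0, 0
crossings, prev_side = 0, np.sign(x)
for step in range(200000):
    # position move at the current temperature
    y = x + rng.normal(0, 0.25)
    if rng.random() < np.exp(-betas[m] * (U(y) - U(x))):
        x = y
    # temperature move: stationary weight ~ exp(-beta_m * U + g_m)
    if step % 10 == 0:
        mp = min(max(m + rng.choice([-1, 1]), 0), len(betas) - 1)
        if rng.random() < np.exp(-(betas[mp] - betas[m]) * U(x) + g[mp] - g[m]):
            m = mp
    # count barrier crossings as a crude measure of sampling efficiency
    if np.sign(x) != prev_side and abs(x) > 0.5:
        crossings += 1
        prev_side = np.sign(x)
print("barrier crossings:", crossings)
```

    Setting the ladder to a single temperature (betas of length 1) makes the same loop a conventional simulation; comparing crossing counts then mirrors, in miniature, the speedup comparison reported in the abstract.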

  6. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    PubMed

    Jung, Minsoo

    2015-01-01

    When there is no sampling frame within a certain group, or the group is concerned that making its membership public would bring social stigma, we say the population is hidden. Such populations are difficult to approach with standard survey methods because the response rate is low and members are not entirely honest in their responses when probability sampling is used. The only alternative known to address the problems of earlier approaches such as snowball sampling is respondent-driven sampling (RDS), which was developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondents; this characteristic allows for probability sampling when surveying a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results, the bias of the chain-referral sampling in RDS tends to shrink as the sample gets bigger, and it stabilizes as the waves progress. The final sample can therefore be essentially independent of the initial seeds, provided a sufficient sample size is secured, even if the seeds were selected through convenience sampling. Thus, RDS can be considered an alternative that improves upon both key informant sampling and ethnographic surveys, and it deserves to be applied to a variety of cases domestically as well.
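
    The mechanics of an RDS simulation of this kind can be sketched as chain-referral recruitment over a synthetic social network, with the RDS-II (Volz-Heckathorn) estimator weighting each respondent by the inverse of their network degree. The network model, trait prevalence and coupon scheme below are all hypothetical.

```python
import random
import numpy as np

random.seed(5)

# Toy hidden population: 5000 nodes, random graph, trait correlated with degree
N = 5000
neighbors = [set() for _ in range(N)]
for _ in range(4 * N):                          # ~8 edges per node on average
    a, b = random.randrange(N), random.randrange(N)
    if a != b:
        neighbors[a].add(b); neighbors[b].add(a)
trait = [random.random() < (0.2 + 0.4 * (len(neighbors[i]) > 8)) for i in range(N)]

def rds(seeds=5, coupons=3, target=500):
    # Chain-referral: each recruit passes up to `coupons` coupons to neighbours
    sample, wave = [], [random.randrange(N) for _ in range(seeds)]
    seen = set(wave)
    while wave and len(sample) < target:
        nxt = []
        for i in wave:
            sample.append(i)
            recruits = [j for j in neighbors[i] if j not in seen]
            random.shuffle(recruits)
            for j in recruits[:coupons]:
                seen.add(j); nxt.append(j)
        wave = nxt
    return sample[:target]

s = rds()
w = np.array([1.0 / max(len(neighbors[i]), 1) for i in s])   # RDS-II weights: 1/degree
y = np.array([trait[i] for i in s], dtype=float)
print(f"naive sample mean: {y.mean():.3f}")
print(f"RDS-II estimate:   {np.sum(w * y) / np.sum(w):.3f}")
print(f"true prevalence:   {np.mean(trait):.3f}")
```

    Because chain referral over-recruits high-degree nodes, the naive sample mean is biased towards the high-degree trait rate; the degree weighting pulls the estimate back towards the true prevalence, which is the stabilization behaviour the abstract describes.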

  7. Development and Demonstration of a Method to Evaluate Bio-Sampling Strategies Using Building Simulation and Sample Planning Software

    PubMed Central

    Dols, W. Stuart; Persily, Andrew K.; Morrow, Jayne B.; Matzke, Brett D.; Sego, Landon H.; Nuffer, Lisa L.; Pulsipher, Brent A.

    2010-01-01

    In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, has simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority area zones within the building and then sampling designs and strategies could be developed based on those zones. PMID:27134782

  8. Development and Demonstration of a Method to Evaluate Bio-Sampling Strategies Using Building Simulation and Sample Planning Software.

    PubMed

    Dols, W Stuart; Persily, Andrew K; Morrow, Jayne B; Matzke, Brett D; Sego, Landon H; Nuffer, Lisa L; Pulsipher, Brent A

    2010-01-01

    In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, has simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority area zones within the building and then sampling designs and strategies could be developed based on those zones.

  9. Monte Carlo simulation of air sampling methods for the measurement of radon decay products.

    PubMed

    Sima, Octavian; Luca, Aurelian; Sahagia, Maria

    2017-08-01

    A stochastic model of the processes involved in the measurement of the activity of 222Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated by Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the 222Rn decay-product concentrations in air are realistically evaluated.
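
    The propagation step can be illustrated generically: draw each influence factor from its distribution, push the draws through the measurement model, and read the uncertainty off the resulting distribution. The measurement equation and the distributions below are illustrative placeholders, not the paper's detailed stochastic model of 222Rn progeny sampling.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Hypothetical measurement model: concentration = counts / (eff * eta * V)
counts = rng.poisson(1200, n)        # counting statistics on the filter measurement
eff = rng.normal(0.25, 0.01, n)      # detection efficiency
eta = rng.normal(0.85, 0.03, n)      # aerosol collection efficiency of the filter
V = rng.normal(0.300, 0.006, n)      # sampled air volume (m^3)

c = counts / (eff * eta * V)         # activity-concentration proxy, illustrative units
print(f"mean = {c.mean():.0f}, std = {c.std():.0f} "
      f"({100 * c.std() / c.mean():.1f}% relative uncertainty)")
```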

  10. Identifying the Experimental and Theoretical Effective Characteristics of Nonaligned Anisotropic Metamaterials

    DTIC Science & Technology

    2015-06-18

    nanorods, while the second demonstrated long and skinny nanorods. This is demonstrated in Figure 20. The University of Dayton research group asked Dr...here and in other research carried out with this methodology. 5.4 Simulations 5.4.1 Experimental results. All of the simulations carried out in this...samples that Dr. Sarangan and Dr. Shah of the University of Dayton presented to Dr. Marciniak's research group. The samples consisted of silver tilted

  11. Cognitive deficits are associated with poorer simulated driving in older adults with heart failure

    PubMed Central

    2013-01-01

    Background Cognitive impairment is prevalent in older adults with heart failure (HF) and associated with reduced functional independence. HF patients appear at risk for reduced driving ability, as past work in other medical samples has shown cognitive dysfunction to be an important contributor to driving performance. The current study examined whether cognitive dysfunction was independently associated with reduced driving simulation performance in a sample of HF patients. Methods 18 persons with HF (mean age 67.72 years; SD = 8.56) completed an echocardiogram and a brief neuropsychological test battery assessing global cognitive function, attention/executive function, memory and motor function. All participants then completed the Kent Multidimensional Assessment Driving Simulation (K-MADS), a driving simulator scenario with good psychometric properties. Results The sample exhibited an average Mini Mental State Examination (MMSE) score of 27.83 (SD = 2.09). Independent-sample t-tests showed that HF patients performed worse than healthy adults on the driving simulation scenario. Finally, partial correlations showed that worse attention/executive and motor function were independently associated with poorer driving simulation performance across several indices reflective of driving ability (i.e., centerline crossings, number of collisions, % of time over the speed limit, among others). Conclusion The current findings showed that reduced cognitive function was associated with poor simulated driving performance in older adults with HF. If replicated using behind-the-wheel testing, HF patients may be at elevated risk for unsafe driving, and routine driving evaluations in this population may be warranted. PMID:24499466

  12. Accelerated weight histogram method for exploring free energy landscapes

    NASA Astrophysics Data System (ADS)

    Lindahl, V.; Lidmar, J.; Hess, B.

    2014-07-01

    Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.

  13. Accelerated weight histogram method for exploring free energy landscapes.

    PubMed

    Lindahl, V; Lidmar, J; Hess, B

    2014-07-28

    Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.
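
    The AWH update rule itself involves a carefully constructed weight histogram and update-size schedule, so the sketch below is only a simplified flat-histogram analogue in the same adaptive-bias spirit (closer to Wang-Landau than to AWH proper): visited states are penalized until the histogram along the coordinate flattens, at which point the negative of the bias approximates the free-energy profile. The potential, bin layout and flatness test are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
U = lambda x: (x**2 - 1.0)**2 * 6.0      # toy free-energy profile, barrier 6 kT
edges = np.linspace(-1.6, 1.6, 33)
bias = np.zeros(len(edges) - 1)          # adaptive bias along the coordinate
hist = np.zeros_like(bias)
inc = 1.0                                # bias increment, refined as histogram flattens

def idx(x):
    return int(np.clip(np.searchsorted(edges, x) - 1, 0, len(bias) - 1))

x = 1.0
for step in range(200000):
    y = x + rng.normal(0, 0.2)
    # Metropolis step in the biased potential U(x) + bias(x), kT = 1
    if -1.6 < y < 1.6 and rng.random() < np.exp(-(U(y) + bias[idx(y)]) + (U(x) + bias[idx(x)])):
        x = y
    bias[idx(x)] += inc                  # penalize the visited state
    hist[idx(x)] += 1
    if step % 20000 == 19999 and hist.min() > 0.6 * hist.mean():
        inc *= 0.5                       # histogram flat enough: shrink the update
        hist[:] = 0

F_est = -bias + bias.max()               # free-energy profile up to a constant
print(f"barrier estimate: {F_est[idx(0.0)] - F_est[idx(1.0)]:.1f} kT (true: 6.0)")
```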

  14. A coupling of homology modeling with multiple molecular dynamics simulation for identifying representative conformation of GPCR structures: a case study on human bombesin receptor subtype-3.

    PubMed

    Nowroozi, Amin; Shahlaei, Mohsen

    2017-02-01

    A computational pipeline was devised to overcome homology modeling (HM) bottlenecks. Coupling HM with molecular dynamics (MD) simulation is useful in that it tackles the sampling deficiency of dynamics simulations by providing good-quality initial guesses for the native structure; HM likewise relaxes the severe demands placed on force fields to explore the huge conformational space of protein structures. In this study, the interaction between the human bombesin receptor subtype-3 and MK-5046 was investigated by integrating HM, molecular docking, and MD simulations. To improve conformational sampling in typical MD simulations of GPCRs, as in other biomolecules, multiple trajectories with different initial conditions can be employed rather than a single long trajectory. Multiple MD simulations of human bombesin receptor subtype-3 with different initial atomic velocities were applied to sample conformations in the vicinity of the structure generated by HM. The backbone-atom conformational space distribution of the replicates was analyzed employing principal components analysis. As a result, the averages of structural and dynamic properties over the twenty-one trajectories differ significantly from those obtained from individual trajectories.

  15. Simulation of cryolipolysis as a novel method for noninvasive fat layer reduction.

    PubMed

    Majdabadi, Abbas; Abazari, Mohammad

    2016-12-20

    Given the problems of conventional liposuction methods, the need to develop new fat-removal procedures has been recognized. In this study we simulate one of these novel methods, cryolipolysis, which aims to overcome those drawbacks; we believe that simulation of clinical procedures contributes considerably to performing the operations effectively. To this end we simulated the temperature distribution in a sample of human body fat. Using Abaqus software, we present graphical displays of the temperature-time variations within the medium. Our simulation indicates that tissue temperature decreases over a cold exposure of about 30 min. The minimum temperature occurs in the shallow layers of the sample, while the temperature in the deeper layers remains nearly unchanged, and cold exposure beyond this time (t > 30 min) does not produce considerable further change. Numerous clinical studies have proved the efficacy of cryolipolysis; this noninvasive technique eliminates some of the drawbacks of conventional methods. Our simulation results clearly support the efficiency of this method, especially for superficial fat layers.

  16. Affected States soft independent modeling by class analogy from the relation between independent variables, number of independent variables and sample size.

    PubMed

    Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, Irem Ersöz

    2013-03-01

    The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to determine whether the method is affected by the number of independent variables, the relationship between variables and the sample size. Simulation study. The SIMCA model is built in two stages. To determine whether the method is influenced by the number of independent variables, the relationship between variables and the sample size, simulations were run for conditions with equal group sample sizes of 30, 100 and 1000; with 2, 3, 5, 10, 50 and 100 variables; and with high, medium and low correlations between variables. Average classification accuracies over 1000 simulation runs for each condition of the trial plan are given as tables. Diagnostic accuracy increases as the number of independent variables increases. SIMCA is thus suited to data in which the correlations between variables are high and the independent variables are numerous, and it can be used even when the data contain outlier values.

  17. Program to Optimize Simulated Trajectories (POST). Volume 2: Utilization manual

    NASA Technical Reports Server (NTRS)

    Bauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.

    1975-01-01

    Information pertinent to users of the program to optimize simulated trajectories (POST) is presented. The input required and output available is described for each of the trajectory and targeting/optimization options. A sample input listing and resulting output are given.

  18. THE DETECTION AND STATISTICS OF GIANT ARCS BEHIND CLASH CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Bingxiao; Zheng, Wei; Postman, Marc

    We developed an algorithm to find and characterize gravitationally lensed galaxies (arcs) to perform a comparison of the observed and simulated arc abundance. Observations are from the Cluster Lensing And Supernova survey with Hubble (CLASH). Simulated CLASH images are created using the MOKA package and also clusters selected from the high-resolution, hydrodynamical simulations, MUSIC, over the same mass and redshift range as the CLASH sample. The algorithm's arc elongation accuracy, completeness, and false positive rate are determined and used to compute an estimate of the true arc abundance. We derive a lensing efficiency of 4 ± 1 arcs (with length ≥6″ and length-to-width ratio ≥7) per cluster for the X-ray-selected CLASH sample, 4 ± 1 arcs per cluster for the MOKA-simulated sample, and 3 ± 1 arcs per cluster for the MUSIC-simulated sample. The observed and simulated arc statistics are in full agreement. We measure the photometric redshifts of all detected arcs and find a median redshift z_s = 1.9, with 33% of the detected arcs having z_s > 3. We find that the arc abundance does not depend strongly on the source redshift distribution but is sensitive to the mass distribution of the dark matter halos (e.g., the c-M relation). Our results show that consistency between the observed and simulated distributions of lensed arc sizes and axial ratios can be achieved by using cluster-lensing simulations that are carefully matched to the selection criteria used in the observations.

  19. SSAGES: Software Suite for Advanced General Ensemble Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sidky, Hythem; Colón, Yamil J.; Helfferich, Julian

    Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods, and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulations packages. SSAGES allows facile application of a variety of enhanced sampling techniques—including adaptive biasing force, string methods, and forward flux sampling—that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite.

  20. SSAGES: Software Suite for Advanced General Ensemble Simulations

    NASA Astrophysics Data System (ADS)

    Sidky, Hythem; Colón, Yamil J.; Helfferich, Julian; Sikora, Benjamin J.; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z.; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J.; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S.; Reid, Daniel R.; Sevgen, Emre; Thapar, Vikram; Webb, Michael A.; Whitmer, Jonathan K.; de Pablo, Juan J.

    2018-01-01

    Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulations packages. SSAGES allows facile application of a variety of enhanced sampling techniques—including adaptive biasing force, string methods, and forward flux sampling—that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite. The code may be found at: https://github.com/MICCoM/SSAGES-public.

  1. Simulation of secondary emission calorimeter for future colliders

    NASA Astrophysics Data System (ADS)

    Yetkin, E. A.; Yetkin, T.; Ozok, F.; Iren, E.; Erduran, M. N.

    2018-03-01

    We present updated results from a simulation study of a conceptual sampling electromagnetic calorimeter based on the secondary electron emission process. We implemented the secondary electron emission process in Geant4 as a user physics list and produced the energy spectrum and yield of secondary electrons. The energy resolution of the SEE calorimeter was σ/E = 41%/√(E/GeV) and the response to electromagnetic showers was linear to within 1.5%. The simulation results were also compared with a traditional scintillator calorimeter.
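
    A resolution quoted in this form is typically obtained by fitting σ/E = a/√E (added in quadrature with a constant term) to measured or simulated points. The sketch below does this on toy data generated to be consistent with a 41% stochastic term; the data points and the small constant term are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)

def res(E, a, c):
    # Common parameterization: sigma/E = (a/sqrt(E)) (+) c, added in quadrature
    return np.sqrt(a**2 / E + c**2)

E = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])               # GeV, illustrative points
sigmaE = res(E, 0.41, 0.01) * rng.normal(1.0, 0.02, E.size)   # toy "measurements"

popt, _ = curve_fit(res, E, sigmaE, p0=[0.3, 0.0])
print(f"stochastic term a = {popt[0] * 100:.1f}% sqrt(GeV), "
      f"constant term c = {popt[1] * 100:.2f}%")
```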

  2. Coarse kMC-based replica exchange algorithms for the accelerated simulation of protein folding in explicit solvent.

    PubMed

    Peter, Emanuel K; Shea, Joan-Emma; Pivkin, Igor V

    2016-05-14

    In this paper, we present a coarse replica exchange molecular dynamics (REMD) approach based on kinetic Monte Carlo (kMC). The new development can significantly reduce the number of replicas and the computational cost needed to enhance sampling in protein simulations. We introduce 2 different methods, which primarily differ in the exchange scheme between the parallel ensembles. We apply this approach to the folding of 2 different β-stranded peptides: the C-terminal β-hairpin fragment of GB1 and TrpZip4. Additionally, we use the new simulation technique to study the folding of TrpCage, a small fast-folding α-helical peptide. Subsequently, we apply the new methodology to conformational changes in signaling of the light-oxygen-voltage (LOV) sensitive domain from Avena sativa (AsLOV2). Our results agree well with data reported in the literature. In simulations of dialanine, we compare the statistical sampling of the 2 techniques with conventional REMD and analyze their performance. The new techniques can reduce the computational cost of REMD significantly and can be used in enhanced sampling simulations of biomolecules.

  3. Ultra-fast hadronic calorimetry

    DOE PAGES

    Denisov, Dmitri; Lukic, Strahinja; Mokhov, Nikolai; ...

    2018-05-08

    Calorimeters for particle physics experiments with integration times of a few ns will substantially improve the capability of the experiment to resolve event pileup and to reject backgrounds. In this paper the time development of hadronic showers induced by 30 and 60 GeV positive pions and 120 GeV protons is studied using Monte Carlo simulation and beam tests with a prototype of a sampling steel-scintillator hadronic calorimeter. In the beam tests, scintillator signals induced by hadronic showers in steel are sampled with a period of 0.2 ns and precisely time-aligned in order to study the average signal waveform at various locations with respect to the beam particle impact. Simulations of the same setup are performed using the MARS15 code. Both simulation and test beam results suggest that energy deposition in steel calorimeters develops over a time shorter than 2 ns, providing an opportunity for ultra-fast calorimetry. Finally, simulation results for an “ideal” calorimeter consisting exclusively of bulk tungsten or copper are presented to establish the lower limit of the signal integration window.

  4. pypet: A Python Toolkit for Data Management of Parameter Explorations

    PubMed Central

    Meyer, Robert; Obermayer, Klaus

    2016-01-01

    pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines. PMID:27610080

  5. pypet: A Python Toolkit for Data Management of Parameter Explorations.

    PubMed

    Meyer, Robert; Obermayer, Klaus

    2016-01-01

    pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines.
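
    A minimal usage sketch following the pattern in pypet's documentation is given below; the toy run function and parameter names are invented, and the exact call signatures should be checked against the current pypet release.

```python
from pypet import Environment, cartesian_product

def multiply(traj):
    # A toy "simulation": the result depends on the explored parameters
    traj.f_add_result('z', traj.x * traj.y, comment='product of x and y')

# The Environment ties parameters, runs and results to one HDF5 file
env = Environment(trajectory='example', filename='./hdf5/example.hdf5')
traj = env.trajectory
traj.f_add_parameter('x', 1.0, comment='first factor')
traj.f_add_parameter('y', 1.0, comment='second factor')

# Explore a grid: pypet runs the simulation once per parameter combination
traj.f_explore(cartesian_product({'x': [1.0, 2.0, 3.0], 'y': [4.0, 5.0]}))
env.run(multiply)   # all parameters and results end up in the single HDF5 file
```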

  6. The effect of electronically steering a phased array ultrasound transducer on near-field tissue heating.

    PubMed

    Payne, Allison; Vyas, Urvi; Todd, Nick; de Bever, Joshua; Christensen, Douglas A; Parker, Dennis L

    2011-09-01

    This study presents the results obtained from both simulation and experimental techniques that show the effect of mechanically or electronically steering a phased array transducer on proximal tissue heating. The thermal response of a nine-position raster and a 16-mm diameter circle scanning trajectory executed through both electronic and mechanical scanning was evaluated in computer simulations and experimentally in a homogeneous tissue-mimicking phantom. Simulations were performed using power deposition maps obtained from the hybrid angular spectrum (HAS) method and applying a finite-difference approximation of Pennes' bioheat transfer equation for the experimentally used transducer and also for a fully sampled transducer, to demonstrate the effect of acoustic window, ultrasound beam overlap and grating lobe clutter on near-field heating. Both simulation and experimental results show that electronically steering the ultrasound beam for the two trajectories using the 256-element phased array significantly increases the thermal dose deposited in the near-field tissues when compared with the same treatment executed through mechanical steering only. In addition, the individual contributions of both beam overlap and grating lobe clutter to the near-field thermal effects were determined by comparing the simulated ultrasound beam patterns and resulting temperature fields from mechanically and electronically steered trajectories using the 256-element randomized phased array transducer to an electronically steered trajectory using a fully sampled transducer with 40 401 phase-adjusted sample points. Three distinctly different transducers were simulated to analyze the tradeoffs of selected transducer design parameters on near-field heating. Careful consideration of design tradeoffs and accurate patient treatment planning, combined with thorough monitoring of the near-field tissue temperature, will help to ensure patient safety during an MRgHIFU treatment.
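
    The thermal calculation referred to here rests on the Pennes bioheat equation, ρc ∂T/∂t = k∇²T + w_b c_b (T_a − T) + Q. A minimal 1D explicit finite-difference sketch is given below; the tissue properties, the Gaussian focal heating term and the boundary treatment are illustrative simplifications, not the paper's 3D HAS-based model.

```python
import numpy as np

# 1D explicit finite-difference solution of the Pennes bioheat equation:
#   rho*c * dT/dt = k * d2T/dx2 + w_b*c_b*(T_a - T) + Q(x)
rho_c = 3.6e6        # volumetric heat capacity (J m^-3 K^-1), illustrative tissue value
k = 0.5              # thermal conductivity (W m^-1 K^-1)
w_cb = 2400.0        # perfusion term w_b*c_b (W m^-3 K^-1)
Ta = 37.0            # arterial temperature (deg C)

L, nx = 0.08, 161
dx = L / (nx - 1)
x = np.linspace(0, L, nx)
Q = 2.5e5 * np.exp(-((x - 0.04) / 0.004) ** 2)   # Gaussian focal power deposition (W m^-3)

dt = 0.4 * rho_c * dx**2 / (2 * k)               # safely below the explicit stability limit
T = np.full(nx, Ta)
for _ in range(int(20.0 / dt)):                  # 20 s sonication
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    T = T + dt / rho_c * (k * lap + w_cb * (Ta - T) + Q)
    T[0] = T[-1] = Ta                            # fixed boundaries at body temperature
print(f"peak temperature after 20 s: {T.max():.1f} deg C at x = {x[T.argmax()] * 100:.1f} cm")
```

    Summing dose-weighted temperature histories from such a solver over each steering position is what distinguishes the mechanically and electronically steered cases in the study.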

  7. Optimization of groundwater sampling approach under various hydrogeological conditions using a numerical simulation model

    NASA Astrophysics Data System (ADS)

    Qi, Shengqi; Hou, Deyi; Luo, Jian

    2017-09-01

    This study presents a numerical model based on field data to simulate groundwater flow in both the aquifer and the well-bore for the low-flow and well-volume sampling methods. The numerical model was calibrated to match field drawdown well, and the calculated flow regime in the well was used to predict the variation of dissolved oxygen (DO) concentration during the purging period. The model was then used to analyze sampling representativeness and sampling time. Site characteristics, such as aquifer hydraulic conductivity, and sampling choices, such as purging rate and screen length, were found to be significant determinants of sampling representativeness and required sampling time. Results demonstrated that: (1) DO was the most useful water quality indicator in ensuring groundwater sampling representativeness, in comparison with turbidity, pH, specific conductance, oxidation reduction potential (ORP) and temperature; (2) it is not necessary to maintain a drawdown of less than 0.1 m when conducting low-flow purging; however, a high purging rate in a low-permeability aquifer may result in a dramatic decrease in sampling representativeness after an initial peak; (3) a short screen length may result in greater drawdown and a longer sampling time for low-flow purging. Overall, the present study suggests that this new numerical model is suitable for describing groundwater flow during the sampling process and can be used to optimize sampling strategies under various hydrogeological conditions.
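
    The paper's conclusions rest on a calibrated numerical wellbore-aquifer model, but the order of magnitude of purging drawdown can be sketched with the classical Theis solution, s = Q W(u)/(4πT) with u = r²S/(4Tt). The parameter values below are hypothetical; note how even a 0.5 L/min purge rate can exceed the 0.1 m drawdown guideline in a low-transmissivity aquifer, consistent with conclusion (2).

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(Q, T, S, r, t):
    """Theis drawdown s = Q/(4 pi T) * W(u), u = r^2 S / (4 T t),
    where W is the exponential integral E1."""
    u = r**2 * S / (4.0 * T, )[0] / t
    return Q / (4.0 * np.pi * T) * exp1(u)

Q = 0.5e-3 / 60.0    # hypothetical purge rate: 0.5 L/min in m^3/s
T_aq = 1e-5          # transmissivity (m^2/s), low-permeability example
S = 1e-4             # storativity (-)
r = 0.05             # well radius (m)

for t in (60.0, 600.0, 3600.0):
    print(f"t = {t / 60:5.0f} min: drawdown = {theis_drawdown(Q, T_aq, S, r, t):.2f} m")
```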

  8. Kinetic Monte Carlo simulation of self-organized pattern formation induced by ion beam sputtering using crater functions

    NASA Astrophysics Data System (ADS)

    Yang, Zhangcan; Lively, Michael A.; Allain, Jean Paul

    2015-02-01

    The production of self-organized nanostructures by ion beam sputtering has been of keen interest to researchers for many decades. Despite numerous experimental and theoretical efforts to understand ion-induced nanostructures, there are still many basic questions open to discussion, such as the role of erosion or curvature-dependent sputtering. In this work, a hybrid MD/kMC (molecular dynamics/kinetic Monte Carlo) multiscale atomistic model is developed to investigate these knowledge gaps, and its predictive ability is validated across the experimental parameter space. This model uses crater functions, which were obtained from MD simulations, to model the prompt mass redistribution due to single-ion impacts. Defect migration, which is missing from previous models that use crater functions, is treated by a kMC Arrhenius method. Using this model, a systematic study was performed for silicon bombarded by Ar+ ions of various energies (100 eV, 250 eV, 500 eV, 700 eV, and 1000 eV) at incidence angles of 0° to 80°. The simulation results were compared with experimental findings, showing good agreement in many aspects of surface evolution, such as the phase diagram. The underestimation of the ripple wavelength by the simulations suggests that surface diffusion is not the main smoothening mechanism for ion-induced pattern formation. Furthermore, the simulated results were compared with moment-description continuum theory and found to give better results, as the simulation did not suffer from the same mathematical inconsistencies as the continuum model. The key finding was that redistributive effects are dominant in the formation of flat surfaces and parallel-mode ripples, but erosive effects are dominant at high angles when perpendicular-mode ripples are formed. Ion irradiation with simultaneous sample rotation was also simulated, resulting in arrays of square-ordered dots. The patterns obtained from sample rotation were strongly correlated to the rotation speed and to the pattern types formed without sample rotation, and a critical value of about 5 rpm was found between disordered ripples and square-ordered dots. Finally, simulations of dual-beam sputtering were performed, with the resulting patterns determined by the flux ratio of the two beams and the pattern types resulting from single-beam sputtering under the same conditions.
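
    The kMC Arrhenius treatment of defect migration can be illustrated with a generic rejection-free (Gillespie-type) loop: enumerate possible hops, weight each by an Arrhenius rate ν exp(-E_a/k_B T), select one in proportion to its rate, and advance the clock by an exponential residence time. The 1D lattice, the energies and the crude "edge" rule below are invented for illustration and are far simpler than the paper's crater-function model.

```python
import numpy as np

rng = np.random.default_rng(9)
kB = 8.617e-5                 # eV/K
T = 300.0
nu = 1e13                     # attempt frequency (1/s)
E_flat, E_edge = 0.60, 0.75   # hypothetical activation energies (eV)

def rate(Ea):
    return nu * np.exp(-Ea / (kB * T))    # Arrhenius rate

occ = np.zeros(100, dtype=bool)
occ[rng.choice(100, 20, replace=False)] = True   # 20 adatoms on a 1D ring
t = 0.0
for _ in range(10000):
    rates, events = [], []
    for i in np.flatnonzero(occ):
        for d in (-1, 1):                 # each adatom may hop left or right
            j = (i + d) % 100
            if not occ[j]:
                nb = occ[(i - d) % 100]   # crude "edge" detection behind the hop
                rates.append(rate(E_edge if nb else E_flat))
                events.append((i, j))
    rates = np.array(rates)
    R = rates.sum()
    if R == 0:
        break
    k = rng.choice(len(rates), p=rates / R)   # pick event with prob ~ its rate
    i, j = events[k]
    occ[i], occ[j] = False, True
    t += rng.exponential(1.0 / R)             # Gillespie residence time
print(f"simulated {t * 1e6:.2f} microseconds of surface diffusion")
```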

  9. A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Zhang, Dongxiao; Lin, Guang

    A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield simulation results matching the measurements. For such ill-posed history matching problems, Bayes' theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solutions to history matching problems. We aim to deal with two commonly encountered issues: 1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could be in a complex form, such as multimodal, which violates the Gaussian assumption required by most of the commonly used data assimilation approaches; 2) a typical sampling method requires intensive model evaluations and hence may cause unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple but also flexible enough to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues. The multimodal posterior of the history matching problem is captured and used to give a reliable production prediction with uncertainty quantification. The new algorithm shows a great improvement in computational efficiency compared with previously studied approaches for the sample problem.
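
    The core sampling idea, a Gaussian mixture used as the proposal distribution, can be sketched as a generic independence Metropolis-Hastings sampler. This is not the authors' full algorithm (it has no Gaussian-process surrogate or adaptive refinement), and the bimodal target and mixture parameters below are invented stand-ins.

      import numpy as np

      rng = np.random.default_rng(0)

      # Invented bimodal log-posterior standing in for an ill-posed
      # history-matching problem (two parameter sets fit the data equally).
      def log_post(x):
          return np.logaddexp(-0.5 * np.sum((x - 2.0) ** 2),
                              -0.5 * np.sum((x + 2.0) ** 2))

      # Gaussian mixture proposal: weights, component means, common sigma.
      weights = np.array([0.5, 0.5])
      means = np.array([[2.0, 2.0], [-2.0, -2.0]])
      sigma = 1.0

      def gmm_sample():
          k = rng.choice(len(weights), p=weights)
          return means[k] + sigma * rng.standard_normal(2)

      def gmm_logpdf(x):
          # Normalization constants are equal across components and cancel
          # in the Metropolis ratio, so they are omitted here.
          comps = [np.log(w) - 0.5 * np.sum((x - m) ** 2) / sigma**2
                   for w, m in zip(weights, means)]
          return np.logaddexp.reduce(comps)

      # Independence Metropolis-Hastings with the mixture as the proposal.
      x = gmm_sample()
      chain = []
      for _ in range(5000):
          y = gmm_sample()
          log_a = (log_post(y) - log_post(x)) + (gmm_logpdf(x) - gmm_logpdf(y))
          if np.log(1.0 - rng.random()) < log_a:
              x = y
          chain.append(x)
      chain = np.array(chain)
      print("fraction of samples in the +2 mode:", np.mean(chain[:, 0] > 0))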

  10. Effects of water plasma immersion ion implantation on surface electrochemical behavior of NiTi shape memory alloys in simulated body fluids

    NASA Astrophysics Data System (ADS)

    Liu, X. M.; Wu, S. L.; Chu, Paul K.; Chung, C. Y.; Chu, C. L.; Yeung, K. W. K.; Lu, W. W.; Cheung, K. M. C.; Luk, K. D. K.

    2007-01-01

    Water plasma immersion ion implantation (PIII) was conducted on orthopedic NiTi shape memory alloy to enhance the surface electrochemical characteristics. The surface composition of the NiTi alloy before and after H2O-PIII was determined by X-ray photoelectron spectroscopy (XPS), and atomic force microscopy (AFM) was utilized to determine the roughness and morphology of the NiTi samples. Potentiodynamic polarization tests and electrochemical impedance spectroscopy (EIS) were carried out to investigate the surface electrochemical behavior of the control and H2O-PIII NiTi samples in simulated body fluids (SBF) at 37 °C, as well as the underlying mechanism. The H2O-PIII NiTi sample showed a higher breakdown potential (Eb) than the control sample. Based on the AFM results, two different physical models with related equivalent electrical circuits were obtained to fit the EIS data and explain the surface electrochemical behavior of NiTi in SBF. The simulation results demonstrate that the higher resistance of the oxide layer produced by H2O-PIII is primarily responsible for the improvement in the surface corrosion resistance.

  11. Building Better Planet Populations for EXOSIMS

    NASA Astrophysics Data System (ADS)

    Garrett, Daniel; Savransky, Dmitry

    2018-01-01

    The Exoplanet Open-Source Imaging Mission Simulator (EXOSIMS) software package simulates ensembles of space-based direct imaging surveys to provide a variety of science and engineering yield distributions for proposed mission designs. These mission simulations rely heavily on assumed distributions of planetary population parameters including semi-major axis, planetary radius, eccentricity, albedo, and orbital orientation to provide heuristics for target selection and to simulate planetary systems for detection and characterization. The distributions are encoded in PlanetPopulation modules within EXOSIMS which are selected by the user in the input JSON script when a simulation is run. The earliest written PlanetPopulation modules available in EXOSIMS are based on planet population models where the planetary parameters are considered to be independent from one another. While independent parameters allow for quick computation of heuristics and sampling for simulated planetary systems, results from planet-finding surveys have shown that many parameters (e.g., semi-major axis/orbital period and planetary radius) are not independent. We present new PlanetPopulation modules for EXOSIMS which are built on models based on planet-finding survey results where semi-major axis and planetary radius are not independent and provide methods for sampling their joint distribution. These new modules enhance the ability of EXOSIMS to simulate realistic planetary systems and give more realistic science yield distributions.
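
    A minimal sketch of sampling a joint semi-major-axis/radius distribution, in the spirit of the new PlanetPopulation modules but not the EXOSIMS API: cells of an assumed occurrence grid are drawn with their joint probabilities, then (a, R) values are placed log-uniformly within each cell. The bin edges and occurrence counts below are invented, not published rates.

      import numpy as np

      rng = np.random.default_rng(1)

      a_edges = np.array([0.1, 0.3, 1.0, 3.0, 10.0])   # AU bin edges (assumed)
      r_edges = np.array([0.5, 1.0, 2.0, 4.0, 11.0])   # Earth-radius bin edges

      # Assumed occurrence counts per (a, R) cell; rows: a bins, cols: R bins.
      occ = np.array([[2., 5., 3., 1.],
                      [4., 9., 6., 2.],
                      [3., 8., 7., 3.],
                      [1., 4., 5., 2.]])
      pmf = occ / occ.sum()

      def sample_joint(n):
          """Draw (a, R) pairs from the joint cell probabilities,
          log-uniform within each cell."""
          flat = rng.choice(pmf.size, size=n, p=pmf.ravel())
          i, j = np.unravel_index(flat, pmf.shape)
          u, v = rng.random(n), rng.random(n)
          a = np.exp(np.log(a_edges[i]) + u * (np.log(a_edges[i + 1]) - np.log(a_edges[i])))
          r = np.exp(np.log(r_edges[j]) + v * (np.log(r_edges[j + 1]) - np.log(r_edges[j])))
          return a, r

      a, r = sample_joint(10000)
      print("corr(log a, log R) =", np.corrcoef(np.log(a), np.log(r))[0, 1])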

  12. TiOx deposited by magnetron sputtering: a joint modelling and experimental study

    NASA Astrophysics Data System (ADS)

    Tonneau, R.; Moskovkin, P.; Pflug, A.; Lucas, S.

    2018-05-01

    This paper presents a 3D multiscale simulation approach to model magnetron reactive sputter deposition of TiOx (x ⩽ 2) at various O2 inlets and its validation against experimental results. The simulation first involves the transport of sputtered material in a vacuum chamber by means of a three-dimensional direct simulation Monte Carlo (DSMC) technique. Second, the film growth at different positions on a 3D substrate is simulated using a kinetic Monte Carlo (kMC) method. When simulating the transport of species in the chamber, wall chemistry reactions are taken into account in order to get the proper content of the reactive species in the volume. Angular and energy distributions of particles are extracted from DSMC and used for film growth modelling by kMC. Along with the simulation, experimental deposition of TiOx coatings on silicon samples placed at different positions on a curved sample holder was performed. The experimental results are in agreement with the simulated ones. For a given coater, the plasma phase hysteresis behaviour, film composition and film morphology are predicted. The methodology can be applied to any coater and any film. This paves the way to a virtual coater that allows a user to predict the composition and morphology of films deposited in silico.

  13. Numerical simulation of isolation of cancer cells in a microfluidic chip

    NASA Astrophysics Data System (ADS)

    Djukic, T.; Topalovic, M.; Filipovic, N.

    2015-08-01

    Cancer is a disease characterized by uncontrolled cell proliferation. Circulating tumor cells (CTCs) separate from the primary tumor, circulate in the bloodstream, and form metastases. Circulating tumor cells can be identified in the blood of a patient by taking a blood sample. Microfluidic chips are a relatively new technology used to isolate these cells from the blood sample. In this paper a numerical model is presented that is able to simulate the motion of individual cells through a microfluidic chip. The proposed numerical model gives very valuable insight into the processes happening within a microfluidic chip. The accuracy of the proposed model is compared with experimental results. The experimental setup that is described in literature is used to create identical geometrical domains and define simulation parameters. A good agreement of experimental and numerical results demonstrates that the proposed model can be successfully used to simulate complex behaviour of CTCs inside microfluidic chips.

  14. Acoustic response of cemented granular sedimentary rocks: molecular dynamics modeling.

    PubMed

    García, Xavier; Medina, Ernesto

    2007-06-01

    The effect of cementation processes on the acoustical properties of sands is studied via molecular dynamics simulation methods. We propose numerical methods where the initial uncemented sand is built by simulating the settling process of sediments. Uncemented samples of different porosity are considered by emulating natural mechanical compaction of sediments due to overburden. Cementation is considered through a particle-based model that captures the underlying physics behind the process. In our simulations, we consider samples with different degrees of compaction and cementing materials with distinct elastic properties. The microstructure of cemented sands is taken into account while adding cement at specific locations within the pores, such as grain-to-grain contacts. Results show that the acoustical properties of cemented sands are strongly dependent on the amount of cement, its stiffness relative to the hosting medium, and its location within the pores. Simulation results are in good correspondence with available experimental data and compare favorably with some theoretical predictions for the sound velocity within a range of cement saturation, porosity, and confining pressure.

  15. Rheological Characterization of Unusual DWPF Slurry Samples (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koopman, D. C.

    2005-09-01

    A study was undertaken to identify and clarify examples of unusual rheological behavior in Defense Waste Processing Facility (DWPF) simulant slurry samples. Identification was accomplished by reviewing sludge, Sludge Receipt and Adjustment Tank (SRAT) product, and Slurry Mix Evaporator (SME) product simulant rheological results from the prior year. Clarification of unusual rheological behavior was achieved by developing and implementing new measurement techniques. Development of these new methods is covered in a separate report, WSRC-TR-2004-00334. This report includes a review of recent literature on unusual rheological behavior, followed by a summary of the rheological measurement results obtained on a set of unusual simulant samples. Shifts in rheological behavior of slurries as the wt. % total solids changed have been observed in numerous systems. The main finding of the experimental work was that the various unusual DWPF simulant slurry samples exhibit some degree of time dependent behavior. When a given shear rate is applied to a sample, the apparent viscosity of the slurry changes with time rather than remaining constant. These unusual simulant samples are more rheologically complex than Newtonian liquids or more simple slurries, neither of which shows significant time dependence. The study concludes that the unusual rheological behavior that has been observed is being caused by time dependent rheological properties in the slurries being measured. Most of the changes are due to the effect of time under shear, but SB3 SME products were also changing properties while stored in sample bottles. The most likely source of this shear-related time dependence for sludge is in the simulant preparation. More than a single source of time dependence was inferred for the simulant SME product slurries based on the range of phenomena observed. Rheological property changes were observed on the time-scale of a single measurement (minutes) as well as on a time scale of hours to weeks. The unusual shape of the slurry flow curves was not an artifact of the rheometric measurement. Adjusting the user-specified parameters in the rheometer measurement jobs can alter the shape of the flow curve of these time dependent samples, but this was not causing the unusual behavior. Variations in the measurement parameters caused the time dependence of a given slurry to manifest at different rates. The premise of the controlled shear rate flow curve measurement is that the dynamic response of the sample to a change in shear rate is nearly instantaneous. When this is the case, the data can be fitted to a time independent rheological equation, such as the Bingham plastic model. In those cases where this does not happen, interpretation of the data is difficult. Fitting time dependent data to time independent rheological equations, such as the Bingham plastic model, is also not appropriate.
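
    For reference, the Bingham plastic fit mentioned above reduces to a linear least-squares problem, tau = tau0 + eta_p * gamma_dot. The sketch below fits invented flow-curve data; for a time-dependent slurry of the kind described, up-ramp and down-ramp data would yield different fitted parameters, which is one signature of the behavior discussed.

      import numpy as np

      # Invented flow-curve data standing in for a DWPF simulant slurry.
      gamma_dot = np.array([10., 50., 100., 200., 400., 600.])  # shear rate, 1/s
      tau = np.array([6.1, 8.0, 10.4, 15.2, 24.9, 34.7])        # shear stress, Pa

      # Linear least squares for tau = tau0 + eta_p * gamma_dot.
      A = np.vstack([np.ones_like(gamma_dot), gamma_dot]).T
      (tau0, eta_p), *_ = np.linalg.lstsq(A, tau, rcond=None)
      print(f"yield stress = {tau0:.2f} Pa, plastic viscosity = {eta_p * 1e3:.1f} mPa*s")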

  16. Convergence of Free Energy Profile of Coumarin in Lipid Bilayer

    PubMed Central

    2012-01-01

    Atomistic molecular dynamics (MD) simulations of druglike molecules embedded in lipid bilayers are of considerable interest as models for drug penetration and positioning in biological membranes. Here we analyze partitioning of coumarin in dioleoylphosphatidylcholine (DOPC) bilayer, based on both multiple, unbiased 3 μs MD simulations (total length) and free energy profiles along the bilayer normal calculated by biased MD simulations (∼7 μs in total). The convergences in time of free energy profiles calculated by both umbrella sampling and z-constraint techniques are thoroughly analyzed. Two sets of starting structures are also considered, one from unbiased MD simulation and the other from “pulling” coumarin along the bilayer normal. The structures obtained by pulling simulation contain water defects on the lipid bilayer surface, while those acquired from unbiased simulation have no membrane defects. The free energy profiles converge more rapidly when starting frames from unbiased simulations are used. In addition, z-constraint simulation leads to more rapid convergence than umbrella sampling, due to quicker relaxation of membrane defects. Furthermore, we show that the choice of RESP, PRODRG, or Mulliken charges considerably affects the resulting free energy profile of our model drug along the bilayer normal. We recommend using z-constraint biased MD simulations based on starting geometries acquired from unbiased MD simulations for efficient calculation of convergent free energy profiles of druglike molecules along bilayer normals. The calculation of free energy profile should start with an unbiased simulation, though the polar molecules might need a slow pulling afterward. Results obtained with the recommended simulation protocol agree well with available experimental data for two coumarin derivatives. PMID:22545027

  17. Convergence of Free Energy Profile of Coumarin in Lipid Bilayer.

    PubMed

    Paloncýová, Markéta; Berka, Karel; Otyepka, Michal

    2012-04-10

    Atomistic molecular dynamics (MD) simulations of druglike molecules embedded in lipid bilayers are of considerable interest as models for drug penetration and positioning in biological membranes. Here we analyze partitioning of coumarin in dioleoylphosphatidylcholine (DOPC) bilayer, based on both multiple, unbiased 3 μs MD simulations (total length) and free energy profiles along the bilayer normal calculated by biased MD simulations (∼7 μs in total). The convergences in time of free energy profiles calculated by both umbrella sampling and z-constraint techniques are thoroughly analyzed. Two sets of starting structures are also considered, one from unbiased MD simulation and the other from "pulling" coumarin along the bilayer normal. The structures obtained by pulling simulation contain water defects on the lipid bilayer surface, while those acquired from unbiased simulation have no membrane defects. The free energy profiles converge more rapidly when starting frames from unbiased simulations are used. In addition, z-constraint simulation leads to more rapid convergence than umbrella sampling, due to quicker relaxation of membrane defects. Furthermore, we show that the choice of RESP, PRODRG, or Mulliken charges considerably affects the resulting free energy profile of our model drug along the bilayer normal. We recommend using z-constraint biased MD simulations based on starting geometries acquired from unbiased MD simulations for efficient calculation of convergent free energy profiles of druglike molecules along bilayer normals. The calculation of free energy profile should start with an unbiased simulation, though the polar molecules might need a slow pulling afterward. Results obtained with the recommended simulation protocol agree well with available experimental data for two coumarin derivatives.

  18. Problems with sampling desert tortoises: A simulation analysis based on field data

    USGS Publications Warehouse

    Freilich, J.E.; Camp, R.J.; Duda, J.J.; Karl, A.E.

    2005-01-01

    The desert tortoise (Gopherus agassizii) was listed as a U.S. threatened species in 1990 based largely on population declines inferred from mark-recapture surveys of 2.59-km2 (1-mi2) plots. Since then, several census methods have been proposed and tested, but all methods still pose logistical or statistical difficulties. We conducted computer simulations using actual tortoise location data from 2 1-mi2 plot surveys in southern California, USA, to identify strengths and weaknesses of current sampling strategies. We considered tortoise population estimates based on these plots as "truth" and then tested various sampling methods based on sampling smaller plots or transect lines passing through the mile squares. Data were analyzed using Schnabel's mark-recapture estimate and program CAPTURE. Experimental subsampling with replacement of the 1-mi2 data using 1-km2 and 0.25-km2 plot boundaries produced data sets of smaller plot sizes, which we compared to estimates from the 1-mi2 plots. We also tested distance sampling by saturating a 1-mi2 site with computer simulated transect lines, once again evaluating bias in density estimates. Subsampling estimates from 1-km2 plots did not differ significantly from the estimates derived at 1-mi2. The 0.25-km2 subsamples significantly overestimated population sizes, chiefly because too few recaptures were made. Distance sampling simulations were biased 80% of the time and had high coefficient-of-variation-to-density ratios. Furthermore, a prospective power analysis suggested limited ability to detect population declines as high as 50%. We concluded that poor performance and bias of both sampling procedures was driven by insufficient sample size, suggesting that all efforts must be directed to increasing numbers found in order to produce reliable results. Our results suggest that present methods may not be capable of accurately estimating desert tortoise populations.
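
    The Schnabel multiple-census estimator used in the study can be written compactly as N-hat = sum(C_t * M_t) / sum(R_t), where C_t and R_t are the animals caught and recaptured on occasion t and M_t is the number of marked animals at large before occasion t. A minimal sketch, with invented survey occasions rather than the study's data:

      def schnabel(catches, recaptures):
          """Schnabel mark-recapture population estimate.

          catches[i]: animals caught on occasion i
          recaptures[i]: how many of those were already marked"""
          marked_at_large = 0
          num, den = 0.0, 0.0
          for c, r in zip(catches, recaptures):
              num += c * marked_at_large
              den += r
              marked_at_large += c - r  # newly marked animals join the pool
          return num / den if den > 0 else float("inf")

      # Four hypothetical survey occasions on one plot.
      print("N-hat =", round(schnabel([15, 18, 12, 20], [0, 4, 5, 9]), 1))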

  19. 40 CFR 761.357 - Reporting the results of the procedure used to simulate leachate generation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Sampling Non-Liquid, Non-Metal PCB Bulk Product... micrograms PCBs per liter of extract to obtain the equivalent measurement from a 100 gram sample. ...

  20. 40 CFR 761.357 - Reporting the results of the procedure used to simulate leachate generation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Sampling Non-Liquid, Non-Metal PCB Bulk Product... micrograms PCBs per liter of extract to obtain the equivalent measurement from a 100 gram sample. ...

  1. 40 CFR 761.357 - Reporting the results of the procedure used to simulate leachate generation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Sampling Non-Liquid, Non-Metal PCB Bulk Product... micrograms PCBs per liter of extract to obtain the equivalent measurement from a 100 gram sample. ...

  2. 40 CFR 761.357 - Reporting the results of the procedure used to simulate leachate generation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Sampling Non-Liquid, Non-Metal PCB Bulk Product... micrograms PCBs per liter of extract to obtain the equivalent measurement from a 100 gram sample. ...

  3. 40 CFR 761.357 - Reporting the results of the procedure used to simulate leachate generation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Sampling Non-Liquid, Non-Metal PCB Bulk Product... micrograms PCBs per liter of extract to obtain the equivalent measurement from a 100 gram sample. ...

  4. T-COMP—A suite of programs for extracting transmissivity from MODFLOW models

    USGS Publications Warehouse

    Halford, Keith J.

    2016-02-12

    Simulated transmissivities are constrained poorly by assigning permissible ranges of hydraulic conductivities from aquifer-test results to hydrogeologic units in groundwater-flow models. These wide ranges are derived from interpretations of many aquifer tests that are categorized by hydrogeologic unit. Uncertainty is added where contributing thicknesses differ between field estimates and numerical models. Wide ranges of hydraulic conductivities and discordant thicknesses result in simulated transmissivities that frequently are much greater than aquifer-test results. Differences of multiple orders of magnitude frequently occur between simulated and observed transmissivities where observed transmissivities are less than 1,000 feet squared per day. Transmissivity observations from individual aquifer tests can constrain model calibration as head and flow observations do. This approach is superior to diluting aquifer-test results into generalized ranges of hydraulic conductivities. Observed and simulated transmissivities can be compared directly with T-COMP, a suite of three FORTRAN programs. Transmissivity observations require that simulated hydraulic conductivities and thicknesses in the volume investigated by an aquifer test be extracted and integrated into a simulated transmissivity. Transmissivities of MODFLOW model cells are sampled within the volume affected by an aquifer test as defined by a well-specific, radial-flow model of each aquifer test. Sampled transmissivities of model cells are averaged within a layer and summed across layers. Accuracy of the approach was tested with hypothetical, multiple-aquifer models where specified transmissivities ranged between 250 and 20,000 feet squared per day. More than 90 percent of simulated transmissivities were within a factor of 2 of specified transmissivities.
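
    The integration step described above, cell transmissivities averaged within a layer and then summed across layers, is easy to sketch. This is not the T-COMP code itself; the conductivities and thicknesses below are invented.

      import numpy as np

      k = np.array([[12., 15., 10.],    # layer 1 hydraulic conductivity, ft/d
                    [ 3.,  4.,  5.]])   # layer 2, at the same sampled cells
      thickness = np.array([40., 25.])  # layer thicknesses, ft

      t_cells = k * thickness[:, None]      # per-cell transmissivity
      t_sim = t_cells.mean(axis=1).sum()    # layer averages summed across layers
      print(f"simulated T = {t_sim:.0f} ft^2/d")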

  5. Q-Sample Construction: A Critical Step for a Q-Methodological Study.

    PubMed

    Paige, Jane B; Morin, Karen H

    2016-01-01

    Q-sample construction is a critical step in Q-methodological studies. Prior to conducting Q-studies, researchers start with a population of opinion statements (concourse) on a particular topic of interest from which a sample is drawn. These sampled statements are known as the Q-sample. Although literature exists on methodological processes to conduct Q-methodological studies, limited guidance exists on the practical steps to reduce the population of statements to a Q-sample. A case exemplar illustrates the steps to construct a Q-sample in preparation for a study that explored the perspectives that nurse educators and nursing students hold about simulation design. Experts in simulation and Q-methodology evaluated the Q-sample for readability, clarity, and for representativeness of opinions contained within the concourse. The Q-sample was piloted, and feedback resulted in statement refinement. Researchers, especially those undertaking Q-method studies for the first time, may benefit from the practical considerations for constructing a Q-sample offered in this article. © The Author(s) 2014.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pang, Yuan-Ping, E-mail: pang@mayo.edu

    Highlights: • Reducing atomic masses by 10-fold vastly improves sampling in MD simulations. • CLN025 folded in 4 of 10 × 0.5-μs MD simulations when masses were reduced by 10-fold. • CLN025 folded as early as 96.2 ns in 1 of the 4 simulations that captured folding. • CLN025 did not fold in 10 × 0.5-μs MD simulations when standard masses were used. • Low-mass MD simulation is a simple and generic sampling enhancement technique. - Abstract: CLN025 is one of the smallest fast-folding proteins. Until now it has not been reported that CLN025 can autonomously fold to its native conformation in a classical, all-atom, and isothermal–isobaric molecular dynamics (MD) simulation. This article reports the autonomous and repeated folding of CLN025 from a fully extended backbone conformation to its native conformation in explicit solvent in multiple 500-ns MD simulations at 277 K and 1 atm with the first folding event occurring as early as 66.1 ns. These simulations were accomplished by using AMBER forcefield derivatives with atomic masses reduced by 10-fold on Apple Mac Pros. By contrast, no folding event was observed when the simulations were repeated using the original AMBER forcefields of FF12SB and FF14SB. The results demonstrate that low-mass MD simulation is a simple and generic technique to enhance configurational sampling. This technique may propel autonomous folding of a wide range of miniature proteins in classical, all-atom, and isothermal–isobaric MD simulations performed on commodity computers—an important step forward in quantitative biology.

  7. Simulation of Forward and Inverse X-ray Scattering From Shocked Materials

    NASA Astrophysics Data System (ADS)

    Barber, John; Marksteiner, Quinn; Barnes, Cris

    2012-02-01

    The next generation of high-intensity, coherent light sources should generate sufficient brilliance to perform in-situ coherent x-ray diffraction imaging (CXDI) of shocked materials. In this work, we present beginning-to-end simulations of this process. This includes the calculation of the partially-coherent intensity profiles of self-amplified spontaneous emission (SASE) x-ray free electron lasers (XFELs), as well as the use of simulated, shocked molecular-dynamics-based samples to predict the evolution of the resulting diffraction patterns. In addition, we will explore the corresponding inverse problem by performing iterative phase retrieval to generate reconstructed images of the simulated sample. The development of these methods in the context of materials under extreme conditions should provide crucial insights into the design and capabilities of shocked in-situ imaging experiments.
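
    Iterative phase retrieval of the kind referred to can be illustrated with a basic error-reduction loop: the measured Fourier moduli are imposed in reciprocal space, and support and nonnegativity constraints in real space. The test object below is a random pattern, not an MD-based sample, and the loop count is an arbitrary choice.

      import numpy as np

      rng = np.random.default_rng(2)

      n = 64
      support = np.zeros((n, n), bool)
      support[16:48, 16:48] = True
      obj = rng.random((n, n)) * support          # ground truth (nonnegative)
      measured_mag = np.abs(np.fft.fft2(obj))     # simulated diffraction moduli

      g = rng.random((n, n)) * support            # random initial guess
      for _ in range(500):
          G = np.fft.fft2(g)
          G = measured_mag * np.exp(1j * np.angle(G))  # impose measured moduli
          g = np.fft.ifft2(G).real
          g[~support] = 0.0                             # impose support
          g[g < 0] = 0.0                                # impose nonnegativity

      err = np.linalg.norm(g - obj) / np.linalg.norm(obj)
      print(f"relative reconstruction error: {err:.3f}")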

  8. Estimation variance bounds of importance sampling simulations in digital communication systems

    NASA Technical Reports Server (NTRS)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
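
    As a concrete instance of the quantities discussed, the sketch below estimates a Gaussian tail probability (a stand-in for a bit error rate) with a mean-translated biasing density, and reports the empirical estimation variance. The threshold, shift, and sample count are illustrative choices, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(3)

      t, n = 5.0, 200_000     # decision threshold and sample count
      shift = t               # translate the sampling density to the threshold

      x = shift + rng.standard_normal(n)          # draws from the biased density
      # Likelihood ratio f(x)/f*(x) for unit-variance Gaussians:
      w = np.exp(-shift * x + 0.5 * shift**2)
      ind = (x > t).astype(float)
      est = np.mean(ind * w)                      # IS estimate of the tail probability
      var = np.var(ind * w, ddof=1) / n           # empirical estimator variance
      print(f"IS estimate {est:.3e} +/- {np.sqrt(var):.1e} (exact ~ 2.87e-7)")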

  9. Modelling the line shape of very low energy peaks of positron beam induced secondary electrons measured using a time of flight spectrometer

    NASA Astrophysics Data System (ADS)

    Fairchild, A. J.; Chirayath, V. A.; Gladen, R. W.; Chrysler, M. D.; Koymen, A. R.; Weiss, A. H.

    2017-01-01

    In this paper, we present results of numerical modelling of the University of Texas at Arlington’s time of flight positron annihilation induced Auger electron spectrometer (UTA TOF-PAES) using SIMION® 8.1 Ion and Electron Optics Simulator. The time of flight (TOF) spectrometer measures the energy of electrons emitted from the surface of a sample as a result of the interaction of low energy positrons with the sample surface. We have used SIMION® 8.1 to calculate the time-of-flight spectra of electrons leaving the sample surface with energies and angles dispersed according to distribution functions chosen to model the positron induced electron emission process and have thus obtained an estimate of the true electron energy distribution. The simulated TOF distribution was convolved with a Gaussian timing resolution function and compared to the experimental distribution. The broadening observed in the simulated TOF spectra was found to be consistent with that observed in the experimental secondary electron spectra of Cu generated as a result of positrons incident with energies from 1.5 eV to 901 eV, when a timing resolution of 2.3 ns was assumed.
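
    The comparison step described, convolving a simulated TOF distribution with a Gaussian timing-resolution function, can be sketched as follows. The toy TOF spectrum and bin width are invented; only the 2.3 ns resolution figure follows the text.

      import numpy as np

      dt = 0.1                                        # histogram bin width, ns
      t = np.arange(0.0, 200.0, dt)
      tof = np.exp(-(t - 80.0) / 15.0) * (t > 80.0)   # toy electron TOF spectrum

      fwhm = 2.3                                      # timing resolution, ns
      sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
      kt = np.arange(-5 * sigma, 5 * sigma + dt, dt)
      kernel = np.exp(-0.5 * (kt / sigma) ** 2)
      kernel /= kernel.sum()                          # unit area preserves counts

      broadened = np.convolve(tof, kernel, mode="same")
      print("peak position:", t[np.argmax(tof)], "->", t[np.argmax(broadened)], "ns")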

  10. Connecting Returned Apollo Soils and Remote Sensing: Application to the Diviner Lunar Radiometer

    NASA Technical Reports Server (NTRS)

    Greenhagen, B. T.; DonaldsonHanna, K. L.; Thomas, I. R.; Bowles, N. E.; Allen, Carlton C.; Pieters, C. M.; Paige, D. A.

    2014-01-01

    The Diviner Lunar Radiometer, onboard NASA's Lunar Reconnaissance Orbiter, has produced the first global, high resolution, thermal infrared observations of an airless body. The Moon, which is the most accessible member of this most abundant class of solar system objects, is also the only body for which we have extraterrestrial samples with known spatial context, returned Apollo samples. Here we present the results of a comprehensive study to reproduce an accurate simulated lunar environment, evaluate the most appropriate sample and measurement conditions, collect thermal infrared spectra of a representative suite of Apollo soils, and correlate them with Diviner observations of the lunar surface. It has been established previously that thermal infrared spectra measured in simulated lunar environment (SLE) are significantly altered from spectra measured under terrestrial or martian conditions. The data presented here were collected at the University of Oxford Simulated Lunar Environment Chamber (SLEC). In SLEC, we simulate the lunar environment by: (1) pumping the chamber to vacuum pressures (less than 10^-4 mbar) sufficient to simulate lunar heat transport processes within the sample, (2) cooling the chamber with liquid nitrogen to simulate radiation to the cold space environment, and (3) heating the samples with heaters and lamp to set up thermal gradients similar to those experienced in the upper hundreds of microns of the lunar surface. We then conducted a comprehensive suite of experiments using different sample preparation and heating conditions on Apollo soils 15071 (maria) and 67701 (highland) and compared the results to Diviner noontime data to select the optimal experimental conditions. This study includes thermal infrared SLE measurements of 10084 (A11 - LM), 12001 (A12 - LM), 14259 (A14 - LM), 15071 (A15 - S1), 15601 (A15 - S9a), 61141 (A16 - S1), 66031 (A16 - S6), 67701 (A16 - S11), and 70181 (A17 - LM). The Diviner dataset includes all six Apollo sites at approximately 200 m spatial resolution. We find that analyses of Diviner observations of individual sampling stations and SLE measurements of returned Apollo soils show good agreement, while comparisons to thermal infrared reflectance under ambient conditions do not agree well, which underscores the need for SLE measurements and validates the Diviner compositional measurement technique.

  11. Development of size-selective sampling of Bacillus anthracis surrogate spores from simulated building air intake mixtures for analysis via laser-induced breakdown spectroscopy.

    PubMed

    Gibb-Snyder, Emily; Gullett, Brian; Ryan, Shawn; Oudejans, Lukas; Touati, Abderrahmane

    2006-08-01

    Size-selective sampling of Bacillus anthracis surrogate spores from realistic, common aerosol mixtures was developed for analysis by laser-induced breakdown spectroscopy (LIBS). A two-stage impactor was found to be the preferential sampling technique for LIBS analysis because it was able to concentrate the spores in the mixtures while decreasing the collection of potentially interfering aerosols. Three common spore/aerosol scenarios were evaluated: diesel truck exhaust (to simulate a truck running outside of a building air intake), urban outdoor aerosol (to simulate common building air), and finally a protein aerosol (to simulate either an agent mixture (ricin/anthrax) or a contaminated anthrax sample). Two statistical methods, linear correlation and principal component analysis, were assessed for differentiation of surrogate spore spectra from other common aerosols. Criteria for determining percentages of false positives and false negatives via correlation analysis were evaluated. A single laser shot analysis of approximately 4 percent of the spores in a mixture of 0.75 m^3 urban outdoor air doped with approximately 1.1 × 10^5 spores resulted in a 0.04 proportion of false negatives. For that same sample volume of urban air without spores, the proportion of false positives was 0.08.
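
    The linear-correlation criterion can be sketched as a Pearson-r match against a reference spore spectrum with a detection cutoff. The synthetic spectra, line positions, and 0.8 cutoff below are illustrative assumptions, not the paper's calibration.

      import numpy as np

      rng = np.random.default_rng(6)

      wl = np.linspace(200, 800, 1000)   # wavelength grid, nm
      reference = (np.exp(-0.5 * ((wl - 394) / 2) ** 2)
                   + np.exp(-0.5 * ((wl - 423) / 2) ** 2))  # reference spore spectrum

      def classify(spectrum, cutoff=0.8):
          r = np.corrcoef(spectrum, reference)[0, 1]
          return r, r >= cutoff

      spore_like = reference + 0.2 * rng.standard_normal(wl.size)
      interferent = (np.exp(-0.5 * ((wl - 589) / 2) ** 2)
                     + 0.2 * rng.standard_normal(wl.size))
      for name, s in [("spore-like", spore_like), ("interferent", interferent)]:
          r, hit = classify(s)
          print(f"{name}: r = {r:.2f}, detection = {hit}")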

  12. Investigating Test Equating Methods in Small Samples through Various Factors

    ERIC Educational Resources Information Center

    Asiret, Semih; Sünbül, Seçil Ömür

    2016-01-01

    In this study, equating methods for the random groups design with small samples were compared across factors such as sample size, difference in difficulty between forms, and guessing parameter. Moreover, which method gives better results under which conditions was also investigated. In this study, 5,000 dichotomous simulated data…

  13. Technology Tips: Sample Too Small? Probably Not!

    ERIC Educational Resources Information Center

    Strayer, Jeremy F.

    2013-01-01

    Statistical studies are referenced in the news every day, so frequently that people are sometimes skeptical of reported results. Often, no matter how large a sample size researchers use in their studies, people believe that the sample size is too small to make broad generalizations. The tasks presented in this article use simulations of repeated…

  14. A molecular simulation protocol to avoid sampling redundancy and discover new states.

    PubMed

    Bacci, Marco; Vitalis, Andreas; Caflisch, Amedeo

    2015-05-01

    For biomacromolecules or their assemblies, experimental knowledge is often restricted to specific states. Ambiguity pervades simulations of these complex systems because there is no prior knowledge of relevant phase space domains, and sampling recurrence is difficult to achieve. In molecular dynamics methods, ruggedness of the free energy surface exacerbates this problem by slowing down the unbiased exploration of phase space. Sampling is inefficient if dwell times in metastable states are large. We suggest a heuristic algorithm to terminate and reseed trajectories run in multiple copies in parallel. It uses a recent method to order snapshots, which provides notions of "interesting" and "unique" for individual simulations. We define criteria to guide the reseeding of runs from more "interesting" points if they sample overlapping regions of phase space. Using a pedagogical example and an α-helical peptide, the approach is demonstrated to amplify the rate of exploration of phase space and to discover metastable states not found by conventional sampling schemes. Evidence is provided that accurate kinetics and pathways can be extracted from the simulations. The method, termed PIGS for Progress Index Guided Sampling, proceeds in unsupervised fashion, is scalable, and benefits synergistically from larger numbers of replicas. Results confirm that the underlying ideas are appropriate and sufficient to enhance sampling. In molecular simulations, errors caused by not exploring relevant domains in phase space are always unquantifiable and can be arbitrarily large. Our protocol adds to the toolkit available to researchers in reducing these types of errors. This article is part of a Special Issue entitled "Recent developments of molecular dynamics". Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Vibronic Boson Sampling: Generalized Gaussian Boson Sampling for Molecular Vibronic Spectra at Finite Temperature.

    PubMed

    Huh, Joonsuk; Yung, Man-Hong

    2017-08-07

    Molecular vibronic spectroscopy, where the transitions involve non-trivial Bosonic correlation due to the Duschinsky Rotation, is strongly believed to be in a complexity class similar to that of Boson Sampling. At finite temperature, the problem is represented as a Boson Sampling experiment with correlated Gaussian input states. This molecular problem with temperature effects is intimately related to the various versions of Boson Sampling sharing similar computational complexity. Here we provide a full description of this relation in the context of Gaussian Boson Sampling. We find a hierarchical structure, which illustrates the relationship among various Boson Sampling schemes. Specifically, we show that every instance of Gaussian Boson Sampling with an initial correlation can be simulated by an instance of Gaussian Boson Sampling without initial correlation, with only a polynomial overhead. Since every Gaussian state is associated with a thermal state, our result implies that every sampling problem in molecular vibronic transitions, at any temperature, can be simulated by Gaussian Boson Sampling associated with a product of vacuum modes. We refer to such a generalized Gaussian Boson Sampling, motivated by the molecular sampling problem, as Vibronic Boson Sampling.

  16. Accurate ensemble molecular dynamics binding free energy ranking of multidrug-resistant HIV-1 proteases.

    PubMed

    Sadiq, S Kashif; Wright, David W; Kenway, Owain A; Coveney, Peter V

    2010-05-24

    Accurate calculation of important thermodynamic properties, such as macromolecular binding free energies, is one of the principal goals of molecular dynamics simulations. However, a single long simulation frequently produces incorrectly converged quantitative results due to inadequate sampling of conformational space in a feasible wall-clock time. Multiple short (ensemble) simulations have been shown to explore conformational space more effectively than single long simulations, but the two methods have not yet been thermodynamically compared. Here we show that, for end-state binding free energy determination methods, ensemble simulations exhibit significantly enhanced thermodynamic sampling over single long simulations and result in accurate and converged relative binding free energies that are reproducible to within 0.5 kcal/mol. Completely correct ranking is obtained for six HIV-1 protease variants bound to lopinavir with a correlation coefficient of 0.89 and a mean relative deviation from experiment of 0.9 kcal/mol. Multidrug resistance to lopinavir is enthalpically driven and increases through a decrease in the protein-ligand van der Waals interaction, principally due to the V82A/I84V mutation, and an increase in net electrostatic repulsion due to water-mediated disruption of protein-ligand interactions in the catalytic region. Furthermore, we correctly rank, to within 1 kcal/mol of experiment, the substantially increased chemical potency of lopinavir binding to the wild-type protease compared to saquinavir and show that lopinavir takes advantage of a decreased net electrostatic repulsion to confer enhanced binding. Our approach is dependent on the combined use of petascale computing resources and on an automated simulation workflow to attain the required level of sampling and turnaround time to obtain the results, which can be as little as three days. This level of performance promotes integration of such methodology with clinical decision support systems for the optimization of patient-specific therapy.

  17. Calibrated simulations of Z opacity experiments that reproduce the experimentally measured plasma conditions

    DOE PAGES

    Nagayama, T.; Bailey, J. E.; Loisel, G.; ...

    2016-02-05

    Recently, frequency-resolved iron opacity measurements at electron temperatures of 170–200 eV and electron densities of (0.7–4.0) × 10^22 cm^-3 revealed a 30–400% disagreement with the calculated opacities [J. E. Bailey et al., Nature (London) 517, 56 (2015)]. The discrepancies have a high impact on astrophysics, atomic physics, and high-energy density physics, and it is important to verify our understanding of the experimental platform with simulations. Reliable simulations are challenging because the temporal and spatial evolution of the source radiation and of the sample plasma are both complex and incompletely diagnosed. In this article, we describe simulations that reproduce the measured temperature and density in recent iron opacity experiments performed at the Sandia National Laboratories Z facility. The time-dependent spectral irradiance at the sample is estimated using the measured time- and space-dependent source radiation distribution, in situ source-to-sample distance measurements, and a three-dimensional (3D) view-factor code. The inferred spectral irradiance is used to drive 1D sample radiation hydrodynamics simulations. The images recorded by slit-imaged space-resolved spectrometers are modeled by solving radiation transport of the source radiation through the sample. We find that the same drive radiation time history successfully reproduces the measured plasma conditions for eight different opacity experiments. These results provide a quantitative physical explanation for the observed dependence of both temperature and density on the sample configuration. Simulated spectral images for the experiments without the FeMg sample show quantitative agreement with the measured spectral images. The agreement in spectral profile, spatial profile, and brightness provides further confidence in our understanding of the backlight-radiation time history and image formation. Furthermore, these simulations bridge the static-uniform picture of the data interpretation and the dynamic-gradient reality of the experiments, and they will allow us to quantitatively assess the impact of effects neglected in the data interpretation.

  18. Prediction of protein loop conformations using multiscale modeling methods with physical energy scoring functions.

    PubMed

    Olson, Mark A; Feig, Michael; Brooks, Charles L

    2008-04-15

    This article examines ab initio methods for the prediction of protein loops by a computational strategy of multiscale conformational sampling and physical energy scoring functions. Our approach consists of initial sampling of loop conformations from lattice-based low-resolution models followed by refinement using all-atom simulations. To allow enhanced conformational sampling, the replica exchange method was implemented. Physical energy functions based on CHARMM19 and CHARMM22 parameterizations with generalized Born (GB) solvent models were applied in scoring loop conformations extracted from the lattice simulations and, in the case of all-atom simulations, the ensemble of conformations were generated and scored with these models. Predictions are reported for 25 loop segments, each eight residues long and taken from a diverse set of 22 protein structures. We find that the simulations generally sampled conformations with low global root-mean-square-deviation (RMSD) for loop backbone coordinates from the known structures, whereas clustering conformations in RMSD space and scoring detected less favorable loop structures. Specifically, the lattice simulations sampled basins that exhibited an average global RMSD of 2.21 ± 1.42 Å, whereas clustering and scoring the loop conformations determined an RMSD of 3.72 ± 1.91 Å. Using CHARMM19/GB to refine the lattice conformations improved the sampling RMSD to 1.57 ± 0.98 Å and detection to 2.58 ± 1.48 Å. We found that further improvement could be gained from extending the upper temperature in the all-atom refinement from 400 to 800 K, where the results typically yield a reduction of approximately 1 Å or greater in the RMSD of the detected loop. Overall, CHARMM19 with a simple pairwise GB solvent model is more efficient at sampling low-RMSD loop basins than CHARMM22 with a higher-resolution modified analytical GB model; however, the latter simulation method provides a more accurate description of the all-atom energy surface, yet demands a much greater computational cost. (c) 2007 Wiley Periodicals, Inc.

  19. Eigenvector method for umbrella sampling enables error analysis

    PubMed Central

    Thiede, Erik H.; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R.

    2016-01-01

    Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence. PMID:27586912
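
    For context, the data-combination step that the eigenvector method recasts is conventionally solved by the self-consistent WHAM iteration. The sketch below runs that standard iteration on synthetic harmonically biased windows over a flat potential, so the recovered profile should be nearly constant; the window centers, spring constant, and sample counts are invented.

      import numpy as np

      rng = np.random.default_rng(8)

      kT = 1.0
      centers = np.linspace(-2.0, 2.0, 9)   # umbrella window centers
      kspr = 10.0                           # harmonic bias spring constant
      n_per = 2000                          # samples per window

      # Biased sampling of a flat potential: each window is Gaussian with
      # variance kT/kspr around its center.
      samples = [c + rng.standard_normal(n_per) * np.sqrt(kT / kspr) for c in centers]

      edges = np.linspace(-3, 3, 61)
      mids = 0.5 * (edges[:-1] + edges[1:])
      hist = np.array([np.histogram(s, edges)[0] for s in samples])  # (win, bin)
      bias = 0.5 * kspr * (mids[None, :] - centers[:, None]) ** 2    # (win, bin)

      f = np.zeros(len(centers))            # window free energies
      for _ in range(500):
          denom = np.sum(n_per * np.exp((f[:, None] - bias) / kT), axis=0)
          p = hist.sum(axis=0) / denom      # unbiased density estimate
          p /= p.sum()
          f_new = -kT * np.log(np.sum(p[None, :] * np.exp(-bias / kT), axis=1))
          if np.max(np.abs(f_new - f)) < 1e-7:
              break
          f = f_new

      free_energy = -kT * np.log(p + 1e-30)
      print("spread of F(x) over well-sampled bins:",
            np.ptp(free_energy[np.abs(mids) < 1.5]).round(2), "kT")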

  20. Structural characterization and numerical simulations of flow properties of standard and reservoir carbonate rocks using micro-tomography

    NASA Astrophysics Data System (ADS)

    Islam, Amina; Chevalier, Sylvie; Sassi, Mohamed

    2018-04-01

    With advances in imaging techniques and computational power, Digital Rock Physics (DRP) is becoming an increasingly popular tool to characterize reservoir samples and determine their internal structure and flow properties. In this work, we present the details for imaging, segmentation, as well as numerical simulation of single-phase flow through a standard homogeneous Silurian dolomite core plug sample as well as a heterogeneous sample from a carbonate reservoir. We develop a procedure that integrates experimental results into the segmentation step to calibrate the porosity. We also look into using two different numerical tools for the simulation, namely Avizo Fire Xlab Hydro, which solves the Stokes equations via the finite volume method, and Palabos, which solves the same equations using the Lattice Boltzmann Method. Representative Elementary Volume (REV) and isotropy studies are conducted on the two samples and we show how DRP can be a useful tool to characterize rock properties that are time consuming and costly to obtain experimentally.
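
    One practical piece of the calibration workflow described, checking segmented-image porosity against a laboratory value, reduces to a voxel count. The synthetic segmented volume and target porosity below are placeholders for a real micro-CT segmentation.

      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic segmented volume: 0 = grain, 1 = pore.
      seg = (rng.random((200, 200, 200)) < 0.18).astype(np.uint8)

      porosity = seg.mean()          # pore-voxel fraction
      target = 0.177                 # e.g., a helium-porosimetry value (assumed)
      print(f"image porosity {porosity:.3f} vs measured {target:.3f}")

      # If the two disagree, the gray-level threshold is adjusted and the
      # volume re-segmented until the porosities agree within tolerance.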

  1. Atomistic simulation and XAS investigation of Mn induced defects in Bi{sub 12}TiO{sub 20}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rezende, Marcos V dos S.; Santos, Denise J.; Jackson, Robert A.

    2016-06-15

    This work reports an investigation of the valence and site occupancy of Mn dopants in the Bi{sub 12}TiO{sub 20} (BTO: Mn) host using X-ray Absorption Spectroscopy (XAS) and atomistic simulation techniques based on energy minimisation. X-ray Absorption Near Edge Structure (XANES) at the Mn K-edge gave typical results for Mn ions with mixed valences of 3+ and 4+. Extended X-ray Absorption Fine Structure (EXAFS) results indicated that Mn ions are probably substituted at Ti sites. Atomistic simulation was performed assuming the incorporation of Mn{sup 2+}, Mn{sup 3+} and Mn{sup 4+} ions at either Bi{sup 3+} or Ti{sup 4+} sites, and the results were compared to XANES and EXAFS measurements. Electrical conductivity for pure and doped samples was used to evaluate the consistency of the proposed model. - Graphical abstract: The structure of Bi{sub 12}TiO{sub 20} (BTO). - Highlights: • Pure and Mn-doped Bi{sub 12}TiO{sub 20} samples were studied by experimental techniques combined with atomistic simulation. • Good agreement between experimental and simulation results was obtained. • XANES results suggest a mixture of 3+ and 4+ valences for Mn, occupying the Ti{sup 4+} site in both cases. • Charge compensation by holes is most energetically favoured, explaining the enhancement observed in AC dark conductivity.

  2. Synthesis Al complex and investigating effect of doped ZnO nanoparticles in the electrical and optical efficiency of OLEDS

    NASA Astrophysics Data System (ADS)

    Shahedi, Zahra; Jafari, Mohammad Reza

    2017-01-01

    In this study, an organometallic complex based on aluminum ions was synthesized and utilized as a fluorescent material in organic light-emitting diodes (OLEDs). The synthesized complex was characterized using XRD, UV-Vis, FT-IR and PL spectroscopy analyses. The energy levels of the Al complex were determined by cyclic voltammetry measurements. Then, the effects of ZnO nanoparticles (NPs) in poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate), PEDOT:PSS, on the electrical and optical performance of the organic light-emitting diodes were investigated. For this purpose, two samples containing ITO/PEDOT:PSS/PVK/Alq3/PBD/Al with two different concentrations and two samples containing ITO/PEDOT:PSS:ZnO/PVK/Alq3/PBD/Al with two different concentrations were prepared. Then, the hole transport, electron transport and emissive layers were deposited by the spin coating method and the cathode layer (Al) was deposited by the thermal evaporation method. The OLED simulation was also done by constructing the model and choosing appropriate parameters. Then, the experimental data were collected and the results interpreted both qualitatively and quantitatively. The results of the simulations were compared with the experimental J-V spectra. Comparing experimental data and simulation results showed that the electrical and optical efficiency of the samples with ZnO NPs is appreciably higher than that of the samples without ZnO NPs.

  3. Improved pulse laser ranging algorithm based on high speed sampling

    NASA Astrophysics Data System (ADS)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow-pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow-pulse laser ranging algorithm based on high speed sampling is studied. First, theoretical simulation models were built and analyzed, including the laser emission and pulse laser ranging algorithm. An improved pulse ranging algorithm was developed that combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system was set up to implement the improved algorithm. The laser ranging hardware system includes a laser diode, a laser detector and a high-sample-rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm was implemented in an FPGA chip as a fusion of the matched filter algorithm and the CFD algorithm. Finally, a laser ranging experiment was carried out to test the ranging performance of the improved algorithm against the matched filter algorithm and the CFD algorithm using the laser ranging hardware system. The test results demonstrate that the laser ranging hardware system realized high-speed processing and high-speed sampling data transmission. The analysis shows that the improved algorithm achieves 0.3 m ranging precision, meeting the expected performance and consistent with the theoretical simulation.
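
    The CFD half of the fused algorithm can be sketched in a few lines: the pulse is attenuated, a delayed copy is subtracted, and the zero crossing of the bipolar result gives an amplitude-independent timing point. The pulse shape, sampling rate, fraction, and delay below are illustrative assumptions.

      import numpy as np

      fs = 1e9                                        # 1 GS/s sampling (assumed)
      t = np.arange(0, 200e-9, 1 / fs)
      pulse = np.exp(-((t - 100e-9) / 10e-9) ** 2)    # synthetic return pulse

      frac, delay_samples = 0.5, 8
      delayed = np.roll(pulse, delay_samples)
      cfd = frac * pulse - delayed                    # bipolar CFD signal

      # First positive-to-negative zero crossing, linearly interpolated.
      idx = np.where((cfd[:-1] > 0) & (cfd[1:] <= 0))[0][0]
      t0 = t[idx] + (0 - cfd[idx]) / (cfd[idx + 1] - cfd[idx]) * (1 / fs)
      print(f"CFD timing point: {t0 * 1e9:.2f} ns")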

  4. Evaluation of mechanical properties of Aluminum-Copper cold sprayed and alloy 625 wire arc sprayed coatings

    NASA Astrophysics Data System (ADS)

    Bashirzadeh, Milad

    This study examines the microstructure-based mechanical properties of an Al-Cu composite deposited by cold spraying and of a wire arc sprayed nickel-based alloy 625 coating, using numerical modeling and experimental techniques. The microhardness and elastic modulus of the samples were determined using the Knoop hardness technique. Hardness was measured in both the transverse and longitudinal directions on the sample cross-sections. An image-based finite element simulation algorithm was employed to determine the mechanical properties through an inverse analysis. In addition, mechanical tests including tensile, bending, and nano-indentation tests were performed on the alloy 625 wire arc sprayed samples. Overall, results from the experimental tests are in relatively good agreement for the deposited Al-Cu composites and the alloy 625 coating. However, results obtained from the numerical simulation are significantly higher in value than the experimentally obtained results. Examination and comparison of the results strongly indicate the influence of microstructure characteristics on the mechanical properties of thermally sprayed coatings.

  5. Equilibrium sampling by reweighting nonequilibrium simulation trajectories

    NASA Astrophysics Data System (ADS)

    Yang, Cheng; Wan, Biao; Xu, Shun; Wang, Yanting; Zhou, Xin

    2016-03-01

    Based on equilibrium molecular simulations, it is usually difficult to efficiently visit the whole conformational space of complex systems, which are separated into metastable regions by high free energy barriers. Nonequilibrium simulations could enhance transitions among these metastable regions and then be applied to sample equilibrium distributions in complex systems, since the associated nonequilibrium effects can be removed by employing the Jarzynski equality (JE). Here we present such a systematic method, named reweighted nonequilibrium ensemble dynamics (RNED), to efficiently sample equilibrium conformations. The RNED is a combination of the JE and our previous reweighted ensemble dynamics (RED) method. The original JE reproduces equilibrium from many nonequilibrium trajectories but requires that the initial distribution of these trajectories is equilibrium. The RED reweights many equilibrium trajectories from an arbitrary initial distribution to get the equilibrium distribution, whereas the RNED has both advantages of the two methods, reproducing equilibrium from many nonequilibrium simulation trajectories with an arbitrary initial conformational distribution. We illustrated the application of the RNED in a toy model and in a Lennard-Jones fluid to detect its liquid-solid phase coexistence. The results indicate that the RNED sufficiently extends the application of both the original JE and the RED in equilibrium sampling of complex systems.

  6. Equilibrium sampling by reweighting nonequilibrium simulation trajectories.

    PubMed

    Yang, Cheng; Wan, Biao; Xu, Shun; Wang, Yanting; Zhou, Xin

    2016-03-01

    Based on equilibrium molecular simulations, it is usually difficult to efficiently visit the whole conformational space of complex systems, which are separated into metastable regions by high free energy barriers. Nonequilibrium simulations could enhance transitions among these metastable regions and then be applied to sample equilibrium distributions in complex systems, since the associated nonequilibrium effects can be removed by employing the Jarzynski equality (JE). Here we present such a systematic method, named reweighted nonequilibrium ensemble dynamics (RNED), to efficiently sample equilibrium conformations. The RNED is a combination of the JE and our previous reweighted ensemble dynamics (RED) method. The original JE reproduces equilibrium from many nonequilibrium trajectories but requires that the initial distribution of these trajectories is equilibrium. The RED reweights many equilibrium trajectories from an arbitrary initial distribution to get the equilibrium distribution, whereas the RNED has both advantages of the two methods, reproducing equilibrium from many nonequilibrium simulation trajectories with an arbitrary initial conformational distribution. We illustrated the application of the RNED in a toy model and in a Lennard-Jones fluid to detect its liquid-solid phase coexistence. The results indicate that the RNED sufficiently extends the application of both the original JE and the RED in equilibrium sampling of complex systems.
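
    The Jarzynski reweighting that RNED builds on can be sketched directly: snapshots from driven trajectories receive weights proportional to exp(-beta * W), from which equilibrium averages and the free energy difference follow. The work values and observable below are synthetic stand-ins, not output of the RNED method itself.

      import numpy as np

      rng = np.random.default_rng(5)

      beta = 1.0
      n_traj = 2000

      # Hypothetical accumulated work per trajectory and an observable
      # measured at each trajectory endpoint.
      work = rng.normal(loc=2.0, scale=1.0, size=n_traj)
      obs = rng.normal(loc=0.0, scale=1.0, size=n_traj) + 0.1 * work

      w = np.exp(-beta * (work - work.min()))   # shifted for numerical stability
      w /= w.sum()

      eq_avg = np.sum(w * obs)                  # reweighted (equilibrium) average
      dF = -np.log(np.mean(np.exp(-beta * work))) / beta
      print(f"reweighted <O> = {eq_avg:.3f}, free-energy difference = {dF:.3f}")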

  7. Discrete element method (DEM) simulations of stratified sampling during solid dosage form manufacturing.

    PubMed

    Hancock, Bruno C; Ketterhagen, William R

    2011-10-14

    Discrete element model (DEM) simulations of the discharge of powders from hoppers under gravity were analyzed to provide estimates of dosage form content uniformity during the manufacture of solid dosage forms (tablets and capsules). For a system that exhibits moderate segregation, the effects of sample size, number, and location within the batch were determined. The various sampling approaches were compared to current best practices for sampling described in the Product Quality Research Institute (PQRI) Blend Uniformity Working Group (BUWG) guidelines. Sampling uniformly across the discharge process gave the most accurate results with respect to identifying segregation trends. Sigmoidal sampling (as recommended in the PQRI BUWG guidelines) tended to overestimate potential segregation issues, whereas truncated sampling (common in industrial practice) tended to underestimate them. The size of the sample had a major effect on the absolute potency RSD. The number of sampling locations (10 vs. 20) had very little effect on the trends in the data, and the number of samples analyzed at each location (1 vs. 3 vs. 7) had only a small effect for the sampling conditions examined. The results of this work provide greater understanding of the effect of different sampling approaches on the measured content uniformity of real dosage forms, and can help to guide the choice of appropriate sampling protocols. Copyright © 2011 Elsevier B.V. All rights reserved.
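
    As a toy illustration of why the sampling pattern matters, the sketch below (not the authors' DEM model) imposes a hypothetical potency drift across a discharging batch and compares the potency RSD seen by uniform, sigmoidal (ends-weighted), and truncated location schemes; the profile parameters and the scheme definitions are our own guesses at the cited protocols.

```python
import numpy as np

rng = np.random.default_rng(0)

def potency(frac):
    """Hypothetical potency (% label claim) vs. discharge fraction, with a
    mild segregation drift toward the end of the batch plus assay noise."""
    return 100.0 + 4.0 * (frac - 0.5) + rng.normal(0.0, 0.5, np.shape(frac))

n = 10  # sampling locations across the discharge
schemes = {
    "uniform":   np.linspace(0.05, 0.95, n),                    # even coverage
    "sigmoidal": (np.tanh(np.linspace(-2.0, 2.0, n)) + 1) / 2,  # ends-weighted
    "truncated": np.linspace(0.05, 0.60, n),                    # misses batch end
}
for name, locs in schemes.items():
    s = potency(locs)
    print(f"{name:10s} potency RSD = {100 * s.std(ddof=1) / s.mean():.2f}%")
```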

  8. Simulation of the Beating Heart Based on Physically Modeling aDeformable Balloon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohmer, Damien; Sitek, Arkadiusz; Gullberg, Grant T.

    2006-07-18

    The motion of the beating heart is complex and creates artifacts in SPECT and x-ray CT images. Phantoms such as the Jaszczak Dynamic Cardiac Phantom are used to simulate cardiac motion for evaluation of acquisition and data processing protocols used for cardiac imaging. Two concentric elastic membranes filled with water are connected to tubing and a pump apparatus for creating fluid flow in and out of the inner volume to simulate motion of the heart. In the present report, the movement of two concentric balloons is solved numerically in order to create a computer simulation of the motion of the moving membranes in the Jaszczak Dynamic Cardiac Phantom. A system of differential equations, based on the physical properties, determines the motion. Two methods are tested for solving the system of differential equations. The results of both methods are similar, providing a final shape that does not converge to a trivial circular profile. Finally, a tomographic imaging simulation is performed by acquiring static projections of the moving shape and reconstructing the result to observe motion artifacts. Two cases are taken into account: in one case each projection angle is sampled for a short time interval, and in the other case for a longer time interval. The longer sampling acquisition shows a clear improvement in decreasing the tomographic streaking artifacts.

  9. Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems

    NASA Astrophysics Data System (ADS)

    Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros

    2015-04-01

    In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields and hence can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term "representative" implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploiting the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the surface of an M-dimensional, unit-radius hyper-sphere, (ii) relocating the N points onto a representative set of N hyper-spheres of different radii, and (iii) transforming the coordinates of those points to lie on N different hyper-ellipsoids spanning the multivariate Gaussian distribution. The above method is applied in a dimensionality-reduction context by defining flow-controlling points over which representative sampling of hydraulic conductivity is performed, thus also accounting for the sensitivity of the flow and transport model to the input hydraulic conductivity field. The performance of the various stratified sampling methods, LH, SL, and ME, is compared to that of SR sampling in terms of reproduction of ensemble statistics of hydraulic conductivity and solute concentration for different sample sizes N (numbers of realizations). The results indicate that ME sampling constitutes an equally if not more efficient simulation method than LH and SL sampling, as it can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than SR sampling. References: [1] Gutjahr A.L. and Bras R.L. Spatial variability in subsurface flow and transport: A review. Reliability Engineering & System Safety, 42, 293-316, (1993). [2] Helton J.C. and Davis F.J. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69, (2003). [3] Switzer P. Multiple simulation of spatial fields. In: Heuvelink G, Lemmens M (eds) Proceedings of the 4th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Coronet Books Inc., pp 629-635 (2000).
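
    A minimal contrast between SR and LH sampling of the Gaussian deviates underlying a lognormal conductivity model might look as follows. This is a sketch only: spatial correlation, i.e., conditioning on a variogram, is deliberately omitted, and all sizes are illustrative.

```python
import numpy as np
from scipy.stats import norm, qmc

N, M = 20, 4  # N realizations of an M-dimensional Gaussian input
rng = np.random.default_rng(1)

sr = rng.standard_normal((N, M))                          # simple random (SR)
lh = norm.ppf(qmc.LatinHypercube(d=M, seed=1).random(N))  # Latin hypercube (LH)

# Lognormal "conductivity" realizations from either design; the LH columns
# stratify each marginal into N equal-probability intervals, so their
# sample statistics track the target distribution better at small N.
K_sr, K_lh = np.exp(sr), np.exp(lh)
print("SR column means of log K:", sr.mean(axis=0).round(2))
print("LH column means of log K:", lh.mean(axis=0).round(2))
```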

  10. Free-space optical channel simulator for weak-turbulence conditions.

    PubMed

    Bykhovsky, Dima

    2015-11-01

    Free-space optical (FSO) communication may be severely affected by the inevitable turbulence that results in channel gain fluctuations and fading. The objective of this paper is to provide a simple and effective simulator of the weak-turbulence FSO channel that emulates the influence of the temporal covariance effect. Specifically, the proposed model is based on lognormally distributed samples with a corresponding correlation time. The simulator is based on the solution of a first-order stochastic differential equation (SDE). The results of the provided SDE analysis reveal its efficacy for turbulent channel modeling.
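
    A sketch of such a simulator, under our own parameter names: a first-order (Ornstein-Uhlenbeck-type) SDE generates a Gaussian process with the prescribed correlation time, and exponentiation yields lognormal channel gains normalized to unit mean. The exact per-step autocorrelation exp(-dt/tau_c) is an assumption consistent with a first-order SDE, not a detail taken from the paper.

```python
import numpy as np

def lognormal_channel(n, dt, tau_c, sigma_x, rng=np.random.default_rng()):
    """Correlated lognormal channel gains for a weak-turbulence FSO link.

    n: number of samples; dt: sample spacing [s]; tau_c: correlation time [s]
    sigma_x: standard deviation of the log-amplitude fluctuations
    """
    rho = np.exp(-dt / tau_c)          # one-step autocorrelation of the OU SDE
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for k in range(1, n):              # exact discretization of the OU process
        x[k] = rho * x[k - 1] + np.sqrt(1.0 - rho * rho) * rng.standard_normal()
    s = 2.0 * sigma_x                  # std of the log-gain (h = exp(2*chi))
    return np.exp(s * x - 0.5 * s * s) # mean-corrected so that E[h] = 1

h = lognormal_channel(n=100_000, dt=1e-4, tau_c=1e-2, sigma_x=0.1)
```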

  11. Spacecraft Guidance, Navigation, and Control Visualization Tool

    NASA Technical Reports Server (NTRS)

    Mandic, Milan; Acikmese, Behcet; Blackmore, Lars

    2011-01-01

    G-View is a 3D visualization tool for supporting spacecraft guidance, navigation, and control (GN&C) simulations relevant to small-body exploration and sampling (see figure). The tool is developed in MATLAB using Virtual Reality Toolbox and provides users with the ability to visualize the behavior of their simulations, regardless of which programming language (or machine) is used to generate simulation results. The only requirement is that multi-body simulation data is generated and placed in the proper format before applying G-View.

  12. Numerical simulation on hydromechanical coupling in porous media adopting three-dimensional pore-scale model.

    PubMed

    Liu, Jianjun; Song, Rui; Cui, Mengmeng

    2014-01-01

    A novel approach to simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore and confining pressures, are measured at laboratory scale. A micro-CT scanner is employed to scan the samples for three-dimensional images used as input to construct the model. Accordingly, four physical models possessing the same pore and rock-matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, three-dimensional finite element models of both the rock matrix and the pore space are established on the MIMICS and ICEM software platforms. The Navier-Stokes equation and an elastic constitutive equation are used as the mathematical model for the simulation. A hydromechanical coupling analysis in the pore-scale finite element model of porous media is performed with ANSYS and CFX software, and the permeability of the sandstone samples under different pore and confining pressures is thereby predicted. The simulation results agree well with the benchmark data. By reproducing the stress state underground, the prediction accuracy of porous-rock permeability in pore-scale simulation is improved. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view.

  13. Numerical Simulation on Hydromechanical Coupling in Porous Media Adopting Three-Dimensional Pore-Scale Model

    PubMed Central

    Liu, Jianjun; Song, Rui; Cui, Mengmeng

    2014-01-01

    A novel approach to simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore and confining pressures, are measured at laboratory scale. A micro-CT scanner is employed to scan the samples for three-dimensional images used as input to construct the model. Accordingly, four physical models possessing the same pore and rock-matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, three-dimensional finite element models of both the rock matrix and the pore space are established on the MIMICS and ICEM software platforms. The Navier-Stokes equation and an elastic constitutive equation are used as the mathematical model for the simulation. A hydromechanical coupling analysis in the pore-scale finite element model of porous media is performed with ANSYS and CFX software, and the permeability of the sandstone samples under different pore and confining pressures is thereby predicted. The simulation results agree well with the benchmark data. By reproducing the stress state underground, the prediction accuracy of porous-rock permeability in pore-scale simulation is improved. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view. PMID:24955384

  14. On the Use of Enveloping Distribution Sampling (EDS) to Compute Free Enthalpy Differences between Different Conformational States of Molecules: Application to 3₁₀-, α-, and π-Helices.

    PubMed

    Lin, Zhixiong; Liu, Haiyan; Riniker, Sereina; van Gunsteren, Wilfred F

    2011-12-13

    Enveloping distribution sampling (EDS) is a powerful method to compute relative free energies from simulation. So far, the EDS method has only been applied to alchemical free energy differences, i.e., between different Hamiltonians defining different systems, and not yet to free energy differences between different conformations or conformational states of a system. In this article, we extend the EDS formalism such that it can be applied to compute free energy differences between different conformations, and apply it to compute the relative free enthalpy ΔG of 3₁₀-, α-, and π-helices of an alanine deca-peptide in explicit water solvent. The resulting ΔG values are compared to those obtained by standard thermodynamic integration (TI) and from so-called end-state simulations. A TI simulation requires the definition of a λ-dependent pathway, which in the present case is based on hydrogen bonds of the different helical conformations. The values of ⟨∂V/∂λ⟩_λ show a sharp change over a particular range of λ values, which is indicative of an energy barrier along the pathway and lowers the accuracy of the resulting ΔG value. In contrast, in a two-state EDS simulation, an unphysical reference-state Hamiltonian which connects the parts of conformational space that are relevant to the different end states is constructed automatically; that is, no pathway needs to be defined. In the simulation using this reference state, both helices were sampled, and many transitions between them occurred, thus ensuring the accuracy of the resulting free enthalpy difference. According to the EDS simulations, the free enthalpy differences of the π-helix and the 3₁₀-helix versus the α-helix are 5 kJ mol⁻¹ and 47 kJ mol⁻¹, respectively, for an alanine deca-peptide in explicit SPC water solvent using the GROMOS 53A6 force field. The EDS method, which is a particular form of umbrella sampling, is thus applicable to compute free energy differences between conformational states as well as between systems, and has definite advantages over the traditional TI and umbrella sampling methods for computing relative free energies.
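
    The reference-state energy at the heart of EDS has a compact closed form; the sketch below implements the standard (alchemical) expression with a log-sum-exp for numerical stability. In the conformational variant discussed above, the end-state potentials V_i would be defined through restraints to each helical state, a detail not shown here.

```python
import numpy as np

def eds_reference_energy(V, E, beta, s=1.0):
    """Standard EDS reference-state energy:
        V_R = -(1 / (beta * s)) * ln( sum_i exp(-beta * s * (V_i - E_i)) )

    V: end-state potential energies at the current configuration
    E: energy offsets tuning how evenly the end states are visited
    beta: 1/(k_B T); s: smoothing parameter (s < 1 lowers barriers)
    """
    a = -beta * s * (np.asarray(V) - np.asarray(E))
    amax = a.max()                                  # log-sum-exp trick
    return -(amax + np.log(np.exp(a - amax).sum())) / (beta * s)
```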

  15. Particle emission from artificial cometary materials

    NASA Technical Reports Server (NTRS)

    Koelzer, Gabriele; Kochan, Hermann; Thiel, Klaus

    1992-01-01

    During KOSI (comet simulation) experiments, mineral-ice mixtures are observed under simulated space conditions. Emission of ice/dust particles from the sample surface is observed by means of different devices. The particle trajectories are recorded with a video system. In the subsequent analysis we extracted the following parameters: particle count rate, spatial distribution of starting points on the sample surface, and elevation angle and particle velocity at distances up to 5 cm from the sample surface. Different kinds of detectors are mounted on a frame in front of the sample to register the emitted particles and to collect their dust residues. By means of these instruments the particle count rates, particle sizes, and particle compositions can be correlated. The results are related to the gas flux density and the temperature on the sample surface during the insolation period. The particle emission is interpreted in terms of phenomena on the sample surface, e.g., the formation of a dust mantle.

  16. Derivation and Applicability of Asymptotic Results for Multiple Subtests Person-Fit Statistics

    PubMed Central

    Albers, Casper J.; Meijer, Rob R.; Tendeiro, Jorge N.

    2016-01-01

    In high-stakes testing, it is important to check the validity of individual test scores. Although a test may, in general, result in valid test scores for most test takers, for some test takers, test scores may not provide a good description of a test taker’s proficiency level. Person-fit statistics have been proposed to check the validity of individual test scores. In this study, the theoretical asymptotic sampling distribution of two person-fit statistics that can be used for tests consisting of multiple subtests is first discussed. Second, a simulation study was conducted to investigate the applicability of this asymptotic theory for tests of finite length, in which the correlation between subtests and the number of items in the subtests were varied. The authors show that these distributions provide reasonable approximations, even for tests consisting of subtests of only 10 items each. These results have practical value because researchers do not have to rely on extensive simulation studies to simulate sampling distributions. PMID:29881053

  17. Electro-thermal analysis of contact resistance

    NASA Astrophysics Data System (ADS)

    Pandey, Nitin; Jain, Ishant; Reddy, Sudhakar; Gulhane, Nitin P.

    2018-05-01

    Electro-mechanical characterization of copper samples is performed at the macroscopic level to understand the dependence of electrical contact resistance and temperature on surface roughness and contact pressure. For two different surface roughness levels of the samples, six load levels are selected and varied to capture the bulk temperature rise and the electrical contact resistance. Accordingly, the copper samples are modelled and analysed using COMSOL™ as the simulation package, and the results are validated by the experiments. The interface temperature during simulation is obtained using the Mikic elastic correlation and by directly entering the experimental contact resistance value. The load values are varied and then reversed in a similar fashion to capture the hysteresis losses. The governing equations and assumptions underlying these models and their significance are examined, and possible justifications for the observed variations are discussed. An equivalent Greenwood model is also derived by mapping the results of the experiment.

  18. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Walsh, Jonathan A.

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.

  19. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE PAGES

    Romano, Paul K.; Walsh, Jonathan A.

    2018-02-03

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.
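
    For orientation, the conventional constant-cross-section free-gas algorithm that this work improves upon can be sketched as below; it uses the flux-weight rejection shown here plus, for energy-dependent cross sections, a second rejection on the cross-section ratio, which is the step the relative-velocity sampling removes. The mixture probabilities follow the standard treatment used in production Monte Carlo codes; this is a baseline sketch, not the authors' algorithm.

```python
import numpy as np

def sample_target(vn, kT_over_M, rng=np.random.default_rng()):
    """Classical free-gas target sampling (constant cross section).

    vn: neutron speed; kT_over_M: kT divided by the target nuclide mass.
    Returns a target speed vt and direction cosine mu, accepted with
    probability v_rel / (vn + vt) (the flux weight).
    """
    beta = 1.0 / np.sqrt(2.0 * kT_over_M)          # Maxwellian parameter
    while True:
        # sample beta*vt from the standard two-component mixture
        if rng.random() < 2.0 / (2.0 + np.sqrt(np.pi) * beta * vn):
            x = np.sqrt(-np.log(rng.random() * rng.random()))
        else:
            x = np.sqrt(-np.log(rng.random())
                        - np.log(rng.random())
                        * np.cos(0.5 * np.pi * rng.random()) ** 2)
        vt = x / beta
        mu = 2.0 * rng.random() - 1.0              # isotropic target direction
        v_rel = np.sqrt(vn * vn + vt * vt - 2.0 * vn * vt * mu)
        if rng.random() < v_rel / (vn + vt):       # flux-weight rejection
            return vt, mu
```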

  20. Dynamic behavior of geometrically complex hybrid composite samples in a Split-Hopkinson Pressure Bar system

    NASA Astrophysics Data System (ADS)

    Pouya, M.; Balasubramaniam, S.; Sharafiev, S.; F-X Wagner, M.

    2018-06-01

    The interfaces between layered materials play an important role for the overall mechanical behavior of hybrid composites, particularly during dynamic loading. Moreover, in complex-shaped composites, interfacial failure is strongly affected by the geometry and size of these contact interfaces. As preliminary work for the design of a novel sample geometry that allows to analyze wave reflection phenomena at the interfaces of such materials, a series of experiments using a Split-Hopkinson Pressure Bar technique was performed on five different sample geometries made of a monomaterial steel. A complementary explicit finite element model of the Split-Hopkinson Pressure Bar system was developed and the same sample geometries were studied numerically. The simulated input, reflected and transmitted elastic wave pulses were analyzed for the different sample geometries and were found to agree well with the experimental results. Additional simulations using different composite layers of steel and aluminum (with the same sample geometries) were performed to investigate the effect of material variation on the propagated wave pulses. The numerical results show that the reflected and transmitted wave pulses systematically depend on the sample geometry, and that elastic wave pulse propagation is affected by the properties of individual material layers.

  1. Synchronization of Hierarchical Time-Varying Neural Networks Based on Asynchronous and Intermittent Sampled-Data Control.

    PubMed

    Xiong, Wenjun; Patel, Ragini; Cao, Jinde; Zheng, Wei Xing

    In this brief, our purpose is to apply asynchronous and intermittent sampled-data control methods to achieve the synchronization of hierarchical time-varying neural networks. The asynchronous and intermittent sampled-data controllers are proposed for two reasons: 1) the controllers may not transmit the control information simultaneously and 2) the controllers cannot always exist at any time. The synchronization is then discussed for a kind of hierarchical time-varying neural networks based on the asynchronous and intermittent sampled-data controllers. Finally, the simulation results are given to illustrate the usefulness of the developed criteria.

  2. Characteristics of white LED transmission through a smoke screen

    NASA Astrophysics Data System (ADS)

    Zheng, Yunfei; Yang, Aiying; Feng, Lihui; Guo, Peng

    2018-01-01

    The characteristics of white LED transmission through a smoke screen are critical for visible light communication through a smoke screen. Based on Mie scattering theory, a Monte Carlo transmission model is established. Based on the probability density function, the white LED sampling model is established according to the measured spectrum of a white LED and the angular distribution of the Lambertian model. The sampling model of smoke particle diameter is also established according to its distribution. We numerically simulate the influence of the smoke thickness, the smoke concentration, and the irradiance angle of the white LED on its transmittance. We constructed a white LED smoke transmission experiment system. The measured results for light transmittance versus smoke concentration agreed with the simulated results, demonstrating the validity of the simulation model for the visible light transmission channel through a smoke screen.
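
    A heavily simplified, single-wavelength version of such a Monte Carlo channel simulation is sketched below. The Mie phase function is replaced by a Henyey-Greenstein approximation and the LED spectral and angular sampling models are omitted, so all parameters are illustrative rather than taken from the paper.

```python
import numpy as np

def transmittance(thickness, sigma_t, albedo, g, n_photons=100_000,
                  rng=np.random.default_rng(2)):
    """Monte Carlo transmittance of a scattering slab (toy smoke model).

    thickness: slab depth [m]; sigma_t: extinction coefficient [1/m]
    albedo: single-scattering albedo; g: Henyey-Greenstein asymmetry
    """
    hits = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                    # normal incidence at the slab face
        while True:
            z += mu * (-np.log(rng.random()) / sigma_t)  # exponential free path
            if z >= thickness:
                hits += 1
                break
            if z < 0.0 or rng.random() > albedo:  # back-escape or absorption
                break
            if abs(g) < 1e-6:               # sample scattering angle (HG)
                cos_t = 2.0 * rng.random() - 1.0
            else:
                f = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
                cos_t = (1.0 + g * g - f * f) / (2.0 * g)
            sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
            phi = 2.0 * np.pi * rng.random()
            mu = mu * cos_t + np.sqrt(max(0.0, 1.0 - mu * mu)) * sin_t * np.cos(phi)
    return hits / n_photons

print(transmittance(thickness=1.0, sigma_t=2.0, albedo=0.9, g=0.8))
```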

  3. Estimation of distributional parameters for censored trace level water quality data: 2. Verification and applications

    USGS Publications Warehouse

    Helsel, Dennis R.; Gilliom, Robert J.

    1986-01-01

    Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.
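
    A sketch of the log-probability-regression idea under a single detection limit (our simplification; the cited study also handles multiple censoring levels) is:

```python
import numpy as np
from scipy.stats import norm

def lpr_estimate(detects, n_censored):
    """Estimate the mean/sd of a left-censored lognormal data set by
    regressing log(detected values) on normal scores of their plotting
    positions and imputing the censored observations from the fit."""
    detects = np.sort(np.asarray(detects, dtype=float))
    n = len(detects) + n_censored
    ranks = np.arange(n_censored + 1, n + 1)      # detects occupy top ranks
    slope, intercept = np.polyfit(norm.ppf(ranks / (n + 1)),
                                  np.log(detects), 1)
    q_cens = norm.ppf(np.arange(1, n_censored + 1) / (n + 1))
    imputed = np.exp(intercept + slope * q_cens)  # values below the limit
    full = np.concatenate([imputed, detects])
    return full.mean(), full.std(ddof=1)
```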

  4. Improved Statistical Sampling and Accuracy with Accelerated Molecular Dynamics on Rotatable Torsions.

    PubMed

    Doshi, Urmi; Hamelberg, Donald

    2012-11-13

    In enhanced sampling techniques, the precision of the reweighted ensemble properties is often decreased due to large variation in statistical weights and reduction in the effective sampling size. To abate this reweighting problem, here, we propose a general accelerated molecular dynamics (aMD) approach in which only the rotatable dihedrals are subjected to aMD (RaMD), unlike the typical implementation wherein all dihedrals are boosted (all-aMD). Nonrotatable and improper dihedrals are marginally important to conformational changes or the different rotameric states. Not accelerating them avoids the sharp increases in the potential energies due to small deviations from their minimum energy conformations and leads to improvement in the precision of RaMD. We present benchmark studies on two model dipeptides, Ace-Ala-Nme and Ace-Trp-Nme, simulated with normal MD, all-aMD, and RaMD. We carry out a systematic comparison between the performances of both forms of aMD using a theory that allows quantitative estimation of the effective number of sampled points and the associated uncertainty. Our results indicate that, for the same level of acceleration and simulation length, as used in all-aMD, RaMD results in significantly less loss in the effective sample size and, hence, increased accuracy in the sampling of φ-ψ space. RaMD yields an accuracy comparable to that of all-aMD, from simulation lengths 5 to 1000 times shorter, depending on the peptide and the acceleration level. Such improvement in speed and accuracy over all-aMD is highly remarkable, suggesting RaMD as a promising method for sampling larger biomolecules.
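
    The dihedral boost itself follows the usual aMD form of Hamelberg and co-workers; a sketch, with the RaMD restriction expressed simply as which energy term the boost is applied to:

```python
def amd_boost(V, E, alpha):
    """Accelerated-MD bias: dV = (E - V)^2 / (alpha + E - V) for V < E, else 0.

    In RaMD, V is the summed energy of the *rotatable* dihedrals only;
    in all-aMD it is the full dihedral energy. Observables sampled on the
    boosted surface are reweighted by exp(+beta * dV) weights.
    """
    return (E - V) ** 2 / (alpha + E - V) if V < E else 0.0
```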

  5. Performance of the likelihood ratio difference (G2 Diff) test for detecting unidimensionality in applications of the multidimensional Rasch model.

    PubMed

    Harrell-Williams, Leigh; Wolfe, Edward W

    2014-01-01

    Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, the Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
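
    The test itself is a standard nested-model deviance comparison; as a sketch (the deviances and the parameter-count difference are inputs the analyst supplies from fitted models):

```python
from scipy.stats import chi2

def g2_difference_test(deviance_uni, deviance_multi, df_diff):
    """Likelihood-ratio (G2) difference test between nested IRT models.

    G2_diff = deviance(unidimensional) - deviance(multidimensional),
    referred to a chi-square distribution with df equal to the
    difference in the number of estimated parameters."""
    g2 = deviance_uni - deviance_multi
    return g2, chi2.sf(g2, df_diff)
```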

  6. Finite element simulation and experimental verification of ultrasonic non-destructive inspection of defects in additively manufactured materials

    NASA Astrophysics Data System (ADS)

    Taheri, H.; Koester, L.; Bigelow, T.; Bond, L. J.

    2018-04-01

    Industrial applications of additively manufactured components are increasing quickly. Adequate quality control of the parts is necessary to ensure safety when using these materials. Base material properties, surface conditions, and the location and size of defects are some of the main targets for nondestructive evaluation of additively manufactured parts, and the problem of adequate characterization is compounded by the challenges of complex part geometry. Numerical modeling allows the interplay of the various factors to be studied, which can lead to improved measurement design. This paper presents a finite element simulation, verified by experimental results, of ultrasonic waves scattering from flat bottom holes (FBH) in additively manufactured materials. A focused-beam immersion ultrasound transducer was used in both the experiments and the simulations on the additively manufactured samples. The samples were SS 17-4 PH steel made by laser sintering in a powder bed.

  7. The effects of pediatric community simulation experience on the self-confidence and satisfaction of baccalaureate nursing students: A quasi-experimental study.

    PubMed

    Lubbers, Jaclynn; Rossman, Carol

    2016-04-01

    Simulation in nursing education is a means to transform student learning and respond to decreasing clinical site availability. This study proposed an innovative simulation experience in which students completed community-based clinical hours with simulation scenarios. The purpose of this study was to determine the effects of a pediatric community simulation experience on the self-confidence of nursing students. Bandura's (1977) Self-Efficacy Theory and Jeffries' (2005) Nursing Education Simulation Framework were used. This quasi-experimental study collected data using a pretest and posttest tool. The setting was a private, liberal arts college in the Midwestern United States. Fifty-four baccalaureate nursing students in a convenience sample were the population of interest. The sample was predominantly female with very little exposure to simulation prior to this study. The participants completed a 16-item self-confidence instrument developed for this study, which measured students' self-confidence in pediatric community nursing knowledge, skill, communication, and documentation. The overall study showed statistically significant results (t=20.70, p<0.001), as well as statistically significant results within each of the eight 4-item sub-scales (p<0.001). Students also reported a high level of satisfaction with their simulation experience. The data demonstrate that students who took the Pediatric Community Based Simulation course reported higher self-confidence after the course than before the course. Higher self-confidence scores for simulation participants have been shown to increase quality of care for patients. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Network visualization of conformational sampling during molecular dynamics simulation.

    PubMed

    Ahlstrom, Logan S; Baker, Joseph Lee; Ehrlich, Kent; Campbell, Zachary T; Patel, Sunita; Vorontsov, Ivan I; Tama, Florence; Miyashita, Osamu

    2013-11-01

    Effective data reduction methods are necessary for uncovering the inherent conformational relationships present in large molecular dynamics (MD) trajectories. Clustering algorithms provide a means to interpret the conformational sampling of molecules during simulation by grouping trajectory snapshots into a few subgroups, or clusters, but the relationships between the individual clusters may not be readily understood. Here we show that network analysis can be used to visualize the dominant conformational states explored during simulation as well as the connectivity between them, providing a more coherent description of conformational space than traditional clustering techniques alone. We compare the results of network visualization against 11 clustering algorithms and principal component conformer plots. Several MD simulations of proteins undergoing different conformational changes demonstrate the effectiveness of networks in reaching functional conclusions. Copyright © 2013 Elsevier Inc. All rights reserved.
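
    The construction is straightforward once trajectory snapshots carry cluster labels; a minimal sketch with networkx (our own naming, not the authors' code):

```python
import networkx as nx

def conformation_network(labels):
    """Transition network from per-snapshot cluster labels.

    Nodes are clusters annotated with their populations; edge weights
    count transitions between consecutive snapshots, making the
    connectivity between conformational states explicit."""
    G = nx.Graph()
    for c in set(labels):
        G.add_node(c, population=labels.count(c))
    for a, b in zip(labels, labels[1:]):
        if a != b:
            w = G.get_edge_data(a, b, {"weight": 0})["weight"]
            G.add_edge(a, b, weight=w + 1)
    return G

net = conformation_network([0, 0, 1, 1, 0, 2, 2, 1])
```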

  9. Replica exchange enveloping distribution sampling (RE-EDS): A robust method to estimate multiple free-energy differences from a single simulation.

    PubMed

    Sidler, Dominik; Schwaninger, Arthur; Riniker, Sereina

    2016-10-21

    In molecular dynamics (MD) simulations, free-energy differences are often calculated using free energy perturbation or thermodynamic integration (TI) methods. However, both techniques are only suited to calculating free-energy differences between two end states. Enveloping distribution sampling (EDS) presents an attractive alternative that allows multiple free-energy differences to be calculated in a single simulation. In EDS, a reference state is simulated which "envelopes" the end states. The challenge of this methodology is the determination of optimal reference-state parameters to ensure equal sampling of all end states. Currently, the automatic determination of the reference-state parameters for multiple end states is an unsolved issue that limits the application of the methodology. To resolve this, we have generalised the replica-exchange EDS (RE-EDS) approach, introduced by Lee et al. [J. Chem. Theory Comput. 10, 2738 (2014)] for constant-pH MD simulations. By exchanging configurations between replicas with different reference-state parameters, the complexity of the parameter-choice problem can be substantially reduced. A new robust scheme to estimate the reference-state parameters from a short initial RE-EDS simulation with default parameters was developed, which allowed the calculation of 36 free-energy differences between nine small-molecule inhibitors of phenylethanolamine N-methyltransferase from a single simulation. The resulting free-energy differences were in excellent agreement with values obtained previously by TI and two-state EDS simulations.

  10. Computational Fluid Dynamics Analysis of the Venturi Dustiness Tester

    PubMed Central

    Dubey, Prahit; Ghia, Urmila; Turkevich, Leonid A.

    2017-01-01

    Dustiness quantifies the propensity of a finely divided solid to be aerosolized by a prescribed mechanical stimulus. Dustiness is relevant wherever powders are mixed, transferred, or handled, and is important in the control of hazardous exposures and the prevention of dust explosions and product loss. The limited quantities of active pharmaceutical powders available for testing led to the development (at the University of North Carolina) of a Venturi-driven dustiness tester. The powder is turbulently injected at high speed (Re ~ 2 × 10⁴) into a glass chamber; the aerosol is then gently sampled (Re ~ 2 × 10³) through two filters located at the top of the chamber; the dustiness index is the ratio of sampled to injected mass of powder. Injection is activated by suction at an Extraction Port at the top of the chamber; loss of powder during injection compromises the sampled dustiness. The present work analyzes the flow inside the Venturi Dustiness Tester, using an Unsteady Reynolds-Averaged Navier-Stokes formulation with the k-ω Shear Stress Transport turbulence model. The simulation considers single-phase flow, valid for small particles (Stokes number Stk < 1). Results show that ~24% of fluid tracers escape the tester before the Sampling Phase begins. Dispersion of the powder during the Injection Phase results in a uniform aerosol inside the tester, even for inhomogeneous injections, satisfying a necessary condition for the accurate evaluation of dustiness. Simulations are also performed under conditions of reduced Extraction-Port flow; the results confirm the importance of a high Extraction-Port flow rate (standard operation) for a uniform distribution of fluid tracers. Simulations are also performed under conditions of delayed powder injection; the results show that a uniform aerosol is still achieved provided 0.5 s elapses between powder injection and sampling. PMID:28638167

  11. Generating Virtual Patients by Multivariate and Discrete Re-Sampling Techniques.

    PubMed

    Teutonico, D; Musuamba, F; Maas, H J; Facius, A; Yang, S; Danhof, M; Della Pasqua, O

    2015-10-01

    Clinical Trial Simulations (CTS) are a valuable tool for decision-making during drug development. However, to obtain realistic simulation scenarios, the patients included in the CTS must be representative of the target population. This is particularly important when covariate effects exist that may affect the outcome of a trial. The objective of our investigation was to evaluate and compare CTS results using re-sampling from a population pool and multivariate distributions to simulate patient covariates. COPD was selected as the paradigm disease for the purposes of our analysis, FEV1 was used as the response measure, and the effects of a hypothetical intervention were evaluated in different populations in order to assess the predictive performance of the two methods. Our results show that the multivariate distribution method produces realistic covariate correlations, comparable to the real population. Moreover, it allows simulation of patient characteristics beyond the limits of the inclusion and exclusion criteria in historical protocols. Both methods, discrete re-sampling and multivariate distributions, generate realistic pools of virtual patients. However, the use of a multivariate distribution enables more flexible simulation scenarios, since it is not necessarily bound to the existing covariate combinations in the available clinical data sets.
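
    The two covariate-generation strategies compared above can be contrasted in a few lines. This is a sketch; the covariate set (e.g., age, weight, baseline FEV1) and any disease-specific structure are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

def virtual_covariates(real, n_virtual):
    """Virtual-patient covariates from a fitted multivariate normal.

    real: (n_patients x n_covariates) array. Unlike discrete re-sampling
    of whole rows, the fitted distribution can produce covariate
    combinations outside the observed pool while preserving the
    correlation structure."""
    mu, cov = real.mean(axis=0), np.cov(real, rowvar=False)
    return rng.multivariate_normal(mu, cov, size=n_virtual)

def resampled_covariates(real, n_virtual):
    """Discrete re-sampling: draw whole patients (rows) with replacement."""
    return real[rng.integers(0, len(real), size=n_virtual)]
```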

  12. Non-destructive identification of unknown minor phases in polycrystalline bulk alloys using three-dimensional X-ray diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yiming, E-mail: yangyiming1988@outlook.com

    Minor phases make considerable contributions to the mechanical and physical properties of metals and alloys. Unfortunately, it is difficult to identify unknown minor phases in a bulk polycrystalline material using conventional metallographic methods. Here, a non-destructive method based on three-dimensional X-ray diffraction (3DXRD) is developed to solve this problem. Simulation results demonstrate that this method is simultaneously able to identify minor phase grains and reveal their positions, orientations, and sizes within bulk alloys. According to systematic simulations, the 3DXRD method is practicable for an extensive sample set, including polycrystalline alloys with hexagonal, orthorhombic, and cubic minor phases. Experiments were also conducted to confirm the simulation results. The results for a bulk sample of aluminum alloy AA6061 show that the crystal grains of an unexpected γ-Fe (austenite) phase can be identified, three-dimensionally and nondestructively. Therefore, we conclude that the 3DXRD method is a powerful tool for the identification of unknown minor phases in bulk alloys belonging to a variety of crystal systems. This method also has the potential to be used for in situ observations of the effects of minor phases on the crystallographic behaviors of alloys. - Highlights: •A method based on 3DXRD is developed for identification of unknown minor phases. •Grain position, orientation, and size are simultaneously acquired. •A systematic simulation demonstrated the applicability of the proposed method. •Experimental results on an AA6061 sample confirmed the practicability of the method.

  13. Workshop on Analysis of Returned Comet Nucleus Samples

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This volume contains abstracts that were accepted by the Program Committee for presentation at the workshop on the analysis of returned comet nucleus samples held in Milpitas, California, January 16 to 18, 1989. The abstracts deal with the nature of cometary ices, cryogenic handling and sampling equipment, the origin and composition of samples, and spectroscopic, thermal, and chemical processing methods for cometary nuclei. Laboratory simulation results on dust samples are reported. Some results obtained from Halley's comet are also included. Microanalytic techniques for examining trace elements in cometary particles, synchrotron X-ray fluorescence and instrumental neutron activation analysis (INAA), are presented.

  14. Investigation of polarization effects in the gramicidin A channel from ab initio molecular dynamics simulations.

    PubMed

    Timko, Jeff; Kuyucak, Serdar

    2012-11-28

    Polarization is an important component of molecular interactions and is expected to play a particularly significant role in inhomogeneous environments such as pores and interfaces. Here we investigate the effects of polarization in the gramicidin A ion channel by performing quantum mechanics/molecular mechanics molecular dynamics (MD) simulations and comparing the results with those obtained from classical MD simulations with non-polarizable force fields. We consider the dipole moments of backbone carbonyl groups and channel water molecules as well as a number of structural quantities of interest. The ab initio results show that the dipole moments of the carbonyl groups and water molecules are highly sensitive to the hydrogen bonds (H-bonds) they participate in. In the absence of a K(+) ion, water molecules in the channel are quite mobile, making the H-bond network highly dynamic. A central K(+) ion acts as an anchor for the channel waters, stabilizing the H-bond network and thereby increasing their average dipole moments. In contrast, the K(+) ion has little effect on the dipole moments of the neighboring carbonyl groups. The weakness of the ion-peptide interactions helps to explain the near diffusion-rate conductance of K(+) ions through the channel. We also address the sampling issue in relatively short ab initio MD simulations. Results obtained from a continuous 20 ps ab initio MD simulation are compared with those generated by sampling ten windows from a much longer classical MD simulation and running each window for 2 ps with ab initio MD. Both methods yield similar results for a number of quantities of interest, indicating that fluctuations are fast enough to justify the short ab initio MD simulations.

  15. A comparison of color fidelity metrics for light sources using simulation of color samples under lighting conditions

    NASA Astrophysics Data System (ADS)

    Kwon, Hyeokjun; Kang, Yoojin; Jang, Junwoo

    2017-09-01

    Color fidelity has been used as one of the indices to evaluate the performance of light sources. Since the Color Rendering Index (CRI) was proposed by the CIE, many color fidelity metrics have been proposed to increase the accuracy of the metric. This paper focuses on comparing color fidelity metrics in terms of their accuracy with respect to human visual assessments. To visually evaluate the color fidelity of light sources, we built a simulator that reproduces color samples under given lighting conditions. In this paper, the eighteen color samples of the Macbeth color checker under each test light source and its reference illuminant are simulated and displayed on a well-characterized monitor. With only the spectra of the test light source and reference illuminant, color samples under any lighting condition can be reproduced. The spectra of two LED and two OLED light sources that have similar CRI values are used for the visual assessment. In addition, the results of the visual assessment are compared with two color fidelity metrics, CRI and IES TM-30-15 (Rf), the latter proposed by the Illuminating Engineering Society (IES) in 2015. Experimental results indicate that Rf outperforms CRI in terms of correlation with the visual assessment.

  16. A dynamic structural model of expanded RNA CAG repeats: A refined X-ray structure and computational investigations using molecular dynamics and umbrella sampling simulations

    PubMed Central

    Yildirim, Ilyas; Park, Hajeung; Disney, Matthew D.; Schatz, George C.

    2013-01-01

    One class of functionally important RNA comprises repeating transcripts that cause disease through various mechanisms. For example, expanded r(CAG) repeats can cause Huntington's and other diseases through translation of toxic proteins. Herein, a crystal structure of r[5ʹUUGGGC(CAG)3GUCC]2, a model of CAG expanded transcripts, refined to 1.65 Å resolution is disclosed that shows both anti-anti and syn-anti orientations for 1×1 nucleotide AA internal loops. Molecular dynamics (MD) simulations using the Amber force field in explicit solvent were run for over 500 ns on the model systems r(5ʹGCGCAGCGC)2 (MS1) and r(5ʹCCGCAGCGG)2 (MS2). In these MD simulations, both anti-anti and syn-anti AA base pairs appear to be stable. While anti-anti AA base pairs were dynamic and sampled multiple anti-anti conformations, no syn-anti↔anti-anti transformations were observed. Umbrella sampling simulations were run on MS2, and a 2D free energy surface was created to extract transformation pathways. In addition, an explicit solvent MD simulation of over 800 ns was run on r[5ʹGGGC(CAG)3GUCC]2, which closely represents the refined crystal structure. One of the terminal AA base pairs (in the syn-anti conformation) transformed to the anti-anti conformation, following the pathway predicted by the umbrella sampling simulations. Further analysis showed a binding pocket near AA base pairs in syn-anti conformations. The computational results combined with the refined crystal structure show that the global minimum conformation of 1×1 nucleotide AA internal loops in r(CAG) repeats is anti-anti, but syn-anti can be adopted depending on the environment. These results are important for understanding RNA dynamics-function relationships and for developing small molecules that target RNA dynamic ensembles. PMID:23441937

  17. Effect of particle size and percentages of Boron carbide on the thermal neutron radiation shielding properties of HDPE/B4C composite: Experimental and simulation studies

    NASA Astrophysics Data System (ADS)

    Soltani, Zahra; Beigzadeh, Amirmohammad; Ziaie, Farhood; Asadi, Eskandar

    2016-10-01

    In this paper the effects of particle size and weight percentage of the reinforcement phase on the thermal neutron absorption of HDPE/B4C composites were investigated by means of Monte Carlo simulation using the MCNP code and by experimental studies. The composite samples were prepared using HDPE filled with different weight percentages of boron carbide powder in the form of micro- and nanoparticles. Micro- and nanocomposites were prepared under similar mixing and moulding processes. The samples were subjected to thermal neutron radiation. The neutron shielding efficiency, in terms of the neutron transmission fractions of the composite samples, was investigated and compared with the simulation results. According to the simulation results, the particle size of the radiation shielding material has an important effect on the shielding efficiency: decreasing the particle size of the shielding material at each weight percentage of the reinforcement phase yields better radiation shielding properties. It appears that decreasing the particle size and the homogeneous distribution of the nano-sized B4C particles increase the collision probability between incident thermal neutrons and the shielding material, which consequently improves the radiation shielding properties. These results suggest the feasibility of nanocomposites as shielding materials offering high shielding performance, low weight, and low thickness, along with economic benefits.

  18. Longitudinal design considerations to optimize power to detect variances and covariances among rates of change: Simulation results based on actual longitudinal studies

    PubMed Central

    Rast, Philippe; Hofer, Scott M.

    2014-01-01

    We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among the number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to achieve power ≥ .80 was non-linear, with rapidly decreasing sample sizes needed as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, of error variance with GRR, and parameter values that are largely out of bounds of actual study values. Power to detect change is generally low in the early phases (i.e., the first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. PMID:24219544
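
    The Monte Carlo logic is simple even if the fitted models are not. The sketch below estimates power to detect a covariance between rates of change using a deliberately crude surrogate (per-person OLS slopes and a correlation test) in place of the bivariate growth model used in the study; all design values are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)

def power_slope_covariance(n_subj=200, waves=5, spacing=2.0, slope_r=0.4,
                           slope_sd=0.3, err_sd=1.0, n_sims=500, alpha=0.05):
    """Monte Carlo power to detect correlated rates of change in two outcomes."""
    t = np.arange(waves) * spacing
    cov = slope_sd**2 * np.array([[1.0, slope_r], [slope_r, 1.0]])
    hits = 0
    for _ in range(n_sims):
        slopes = rng.multivariate_normal([0.0, 0.0], cov, size=n_subj)
        est = np.empty_like(slopes)
        for j in range(2):  # per-person OLS slope for each outcome
            y = slopes[:, [j]] * t + rng.normal(0.0, err_sd, (n_subj, waves))
            est[:, j] = np.polyfit(t, y.T, 1)[0]
        hits += pearsonr(est[:, 0], est[:, 1])[1] < alpha
    return hits / n_sims

print(power_slope_covariance())
```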

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batcheller, Thomas Aquinas; Taylor, Dean Dalton

    Idaho Nuclear Technology and Engineering Center 300,000-gallon vessel WM-189 was filled in late 2001 with concentrated sodium bearing waste (SBW). Three airlifted liquid samples and a steam-jetted slurry sample were obtained for quantitative analysis and characterization of WM-189 liquid phase SBW and tank heel sludge. Uncertainty estimates were provided for most of the reported data values, based on the greater of (a) the analytical uncertainty and (b) the variation of analytical results between nominally similar samples. A consistency check on the data was performed by comparing the total mass of dissolved solids in the liquid, as measured gravimetrically from a dried sample, with the corresponding value obtained by summing the masses of cations and anions in the liquid, based on the reported analytical data. After reasonable adjustments to the nitrate and oxygen concentrations, satisfactory consistency between the two results was obtained. A similar consistency check was performed on the reported compositional data for sludge solids from the steam-jetted sample. In addition to the compositional data, various other analyses were performed: the particle size distribution was measured for the sludge solids, sludge settling tests were performed, and viscosity measurements were made. The WM-189 characterization results were compared with those for WM-180 and with other Tank Farm Facility tank characterization data. A 2-liter batch of WM-189 simulant was prepared, and a clear, stable solution was obtained, based on a general procedure for mixing SBW simulant that was developed by Dr. Jerry Christian. This WM-189 SBW simulant is considered suitable for laboratory testing for process development.

  20. Sample Results From The Extraction, Scrub, And Strip Test For The Blended NGS Solvent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Washington, A. L. II; Peters, T. B.

    This report summarizes the results of the extraction, scrub, and strip testing for the September 2013 sampling of the Next Generation Solvent (NGS) blended solvent from the Modular Caustic-Side Solvent Extraction Unit (MCU) Solvent Hold Tank. MCU is in the process of transitioning from the BOBCalixC6 solvent to the NGS Blend solvent. As part of that transition, MCU has intentionally created a blended solvent to be processed using the Salt Batch program. This sample represents the first sample received from that blended solvent. Two ESS tests were performed in which the NGS blended solvent performance was assessed using either the Tank 21 material utilized in the Salt Batch 7 analyses or a simulant waste material used in the V-5/V-10 contactor testing. This report tabulates the temperature-corrected cesium distribution values, or D(Cs) values, the step recovery percentages, and the actual temperatures recorded during the experiments. This report also identifies the sample receipt date, preparation method, and analyses performed in the accumulation of the listed values. The calculated extraction D(Cs) values using the Tank 21H material and the simulant are 59.4 and 53.8, respectively. The D(Cs) values for the two scrub and three strip processes for the Tank 21 material are 4.58, 2.91, 0.00184, 0.0252, and 0.00575, respectively. The D(Cs) values for the two scrub and three strip processes for the simulant are 3.47, 2.18, 0.00468, 0.00057, and 0.00572, respectively. These values are similar to previous measurements of Salt Batch 7 feed with lab-prepared blended solvent. These numbers are considered sufficiently comparable to allow simulant testing to be performed in place of actual waste, given the limited availability of feed material.

  1. Comparing the performance of cluster random sampling and integrated threshold mapping for targeting trachoma control, using computer simulation.

    PubMed

    Smith, Jennifer L; Sturrock, Hugh J W; Olives, Casey; Solomon, Anthony W; Brooker, Simon J

    2013-01-01

    Implementation of trachoma control strategies requires reliable district-level estimates of trachomatous inflammation-follicular (TF), generally collected using the recommended gold-standard cluster randomized surveys (CRS). Integrated Threshold Mapping (ITM) has been proposed as an integrated and cost-effective means of rapidly surveying trachoma in order to classify districts according to treatment thresholds. ITM differs from CRS in a number of important ways, including the use of a school-based sampling platform for children aged 1-9 and a different age distribution of participants. This study uses computerised sampling simulations to compare the performance of these survey designs and evaluate the impact of varying key parameters. Realistic pseudo gold-standard data for 100 districts were generated that maintained the relative risk of disease between important sub-groups and incorporated empirical estimates of disease clustering at the household, village, and district levels. To simulate the different sampling approaches, 20 clusters were selected from each district, with individuals sampled according to the protocols for ITM and CRS. Results showed that ITM generally underestimated the true prevalence of TF over a range of epidemiological settings and introduced more district misclassification according to treatment thresholds than did CRS. However, the extent of underestimation and resulting misclassification was found to depend on three main factors: (i) the district prevalence of TF; (ii) the relative risk of TF between enrolled and non-enrolled children within clusters; and (iii) the enrollment rate in schools. Although in some contexts the two methodologies may be equivalent, ITM can introduce a prevalence-dependent bias as the prevalence of TF increases, resulting in a greater risk of misclassification around treatment thresholds. In addition to strengthening the evidence base around the choice of trachoma survey methodologies, this study illustrates the use of a simulation approach in addressing operational research questions for trachoma as well as other NTDs.
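
    The skeleton of such a sampling simulation is easy to convey. The sketch below compares a CRS-like community sample with a school-based (ITM-like) sample on synthetic district data; the enrolled/non-enrolled relative risk, enrollment rate, and all prevalences are invented for illustration and do not reproduce the study's pseudo gold-standard data.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_district(true_prev, rr_enrolled=0.8, enroll_rate=0.7,
                      n_clusters=20, children_per_cluster=50):
    """Return (CRS estimate, ITM-like estimate) of TF prevalence."""
    # cluster-level variation in prevalence around the district mean
    cl = rng.beta(true_prev * 30, (1 - true_prev) * 30, n_clusters)
    crs, itm = [], []
    for p in cl:
        enrolled = rng.random(children_per_cluster) < enroll_rate
        # enrolled children carry a lower TF risk (rr_enrolled)
        risk = np.where(enrolled, p * rr_enrolled, p)
        tf = rng.random(children_per_cluster) < np.clip(risk, 0.0, 1.0)
        crs.append(tf.mean())                # CRS: all eligible children
        if enrolled.any():
            itm.append(tf[enrolled].mean())  # ITM: enrolled children only
    return np.mean(crs), np.mean(itm)

est = [simulate_district(0.15) for _ in range(200)]
print("mean CRS, ITM estimates:", np.mean(est, axis=0).round(3))
```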

  2. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.

    PubMed

    Tian, Guo-Liang; Li, Hui-Qiong

    2017-08-01

    Some existing confidence-interval and hypothesis-testing methods for the analysis of a contingency table with incomplete observations in both margins depend entirely on the assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for the complete and incomplete counts. However, it can be shown that this independence assumption is incorrect and can lead to unreliable conclusions because it underestimates the uncertainty. The first objective of this paper is therefore to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution, by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of the parameters of interest, bootstrap confidence-interval methods, and bootstrap hypothesis-testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independence assumption. Simulation studies showed that the average/expected confidence-interval widths of parameters based on the sampling distribution under the independence assumption are shorter than those based on the new sampling distribution, yielding unrealistically narrow intervals. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables, and the analysis results again confirm the conclusions obtained from the simulation studies.

  3. Rare event simulation in radiation transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollman, Craig

    1993-10-01

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high-dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well-known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero-variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
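    The core reweighting idea is easy to demonstrate on a textbook problem. The sketch below (not the dissertation's algorithm) estimates a small Gaussian tail probability by sampling from a shifted distribution and multiplying by the likelihood ratio:

```python
# Minimal importance-sampling sketch: estimate p = P(X > 4) for X ~ N(0,1)
# by drawing from N(4,1) and reweighting by the likelihood ratio f(x)/g(x),
# which keeps the estimator unbiased.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, shift = 100_000, 4.0

# Plain Monte Carlo: almost no samples land in the rare region.
x_mc = rng.normal(0.0, 1.0, n)
p_mc = np.mean(x_mc > shift)

# Importance sampling: draw from the shifted density, weight by f(x)/g(x).
x_is = rng.normal(shift, 1.0, n)
weights = norm.pdf(x_is) / norm.pdf(x_is, loc=shift)
p_is = np.mean((x_is > shift) * weights)

print(f"exact      {norm.sf(shift):.3e}")
print(f"plain MC   {p_mc:.3e}")
print(f"importance {p_is:.3e}")
```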

  4. Effects of Bedrock Landsliding on Cosmogenically Determined Erosion Rates

    NASA Technical Reports Server (NTRS)

    Niemi, Nathan; Oskin, Mike; Burbank, Douglas; Heimsath, Arjun

    2005-01-01

    The successful quantification of long-term erosion rates underpins our understanding of landscape formation, the topographic evolution of mountain ranges, and the mass balance within active orogens. The measurement of in situ-produced cosmogenic radionuclides (CRNs) in fluvial and alluvial sediments is perhaps the method with the greatest ability to provide such long-term erosion rates. In active orogens, however, deep-seated bedrock landsliding is an important erosional process, the effect of which on CRN-derived erosion rates is largely unquantified. We present a numerical simulation of cosmogenic nuclide production and distribution in landslide-dominated catchments to address the effect of bedrock landsliding on cosmogenic erosion rates in actively eroding landscapes. Results of the simulation indicate that the temporal stability of erosion rates determined from CRN concentrations in sediment decreases with increased ratios of landsliding to sediment detachment rates within a given catchment area, and that larger catchment areas must be sampled with increased frequency of landsliding in order to accurately evaluate long-term erosion rates. In addition, results of this simulation suggest that sediment sampling for CRNs is the appropriate method for determining long-term erosion rates in regions dominated by mass-wasting processes, while bedrock surface sampling for CRNs is generally an ineffective means of determining long-term erosion rates. Response times of CRN concentrations to changes in erosion rate indicate that climatically driven cycles of erosion may be detected relatively quickly after such changes occur, but that complete equilibration of CRN concentrations to new erosional conditions may take tens of thousands of years. Simulation results of CRN erosion rates are compared with a new, rich dataset of CRN concentrations from the Nepalese Himalaya, supporting conclusions drawn from the simulation.

  5. Assessing performance and validating finite element simulations using probabilistic knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolin, Ronald M.; Rodriguez, E. A.

    Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure, along with the event's likelihood of occurrence, contributes to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur, while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate the finite element predictions.
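    For readers unfamiliar with Latin-hypercube sampling, the sketch below shows the general pattern (illustrative only, not the report's model): uncertain inputs are drawn with a stratified LHS engine and propagated through a placeholder limit-state function.

```python
# Minimal sketch of stochastic assessment via Latin hypercube sampling:
# two uncertain inputs are drawn with scipy's LHS engine and pushed
# through a placeholder limit-state function; the failure probability is
# the fraction of sampled points with negative margin.
import numpy as np
from scipy.stats import qmc

def margin(load, capacity):
    """Placeholder limit state: failure when capacity - load < 0."""
    return capacity - load

sampler = qmc.LatinHypercube(d=2, seed=1)
u = sampler.random(n=10_000)                     # stratified points in (0,1)^2
x = qmc.scale(u, l_bounds=[80.0, 90.0], u_bounds=[120.0, 140.0])
p_fail = np.mean(margin(x[:, 0], x[:, 1]) < 0.0)
print(f"estimated probability of failure: {p_fail:.4f}")
```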

  6. An in-flight simulator investigation of roll and yaw control power requirements for STOL approach and landing: Development of capability and preliminary results

    NASA Technical Reports Server (NTRS)

    Ellis, D. R.; Raisinghani, S. C.

    1979-01-01

    A six-degree-of-freedom variable-response research aircraft was used to determine the minimum lateral-directional control power required for desirable and acceptable levels of handling qualities in the STOL landing approach task, across a variety of simulated atmospheric disturbance conditions and a range of lateral-directional response characteristics. Topics covered include the in-flight simulator, crosswind simulation, turbulence simulation, test configurations, and evaluation procedures. Conclusions based on a limited sampling of simulated STOL transport configurations, flown to touchdown out of 6-deg, 75-knot MLS approaches (usually with a sidestep maneuver), are discussed.

  7. Simulations of the Sampling Distribution of the Mean Do Not Necessarily Mislead and Can Facilitate Learning

    ERIC Educational Resources Information Center

    Lane, David M.

    2015-01-01

    Recently Watkins, Bargagliotti, and Franklin (2014) discovered that simulations of the sampling distribution of the mean can mislead students into concluding that the mean of the sampling distribution of the mean depends on sample size. This potential error arises from the fact that the mean of a simulated sampling distribution will tend to be…
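    The point at issue is easy to demonstrate numerically. The short sketch below (values illustrative) shows that the mean of a simulated sampling distribution of the mean stays close to the population mean for any sample size, while only its spread changes with n:

```python
# Demonstration: the mean of a simulated sampling distribution of the mean
# is close to the population mean regardless of sample size; the standard
# deviation of the sample means shrinks as n grows.
import numpy as np

rng = np.random.default_rng(42)
population = rng.exponential(scale=10.0, size=1_000_000)  # mu = 10

for n in (5, 25, 100):
    # 20,000 simulated samples of size n, resampled from the population.
    sample_means = population[rng.integers(0, population.size, (20_000, n))].mean(axis=1)
    print(f"n={n:>3}  mean of means={sample_means.mean():6.3f}  "
          f"SD of means={sample_means.std():5.3f}")
```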

  8. Evaluation and characterization of anti-estrogenic and anti-androgenic activities in soil samples along the Second Songhua River, China.

    PubMed

    Li, Jian; Wang, Yafei; Kong, Dongdong; Wang, Jinsheng; Teng, Yanguo; Li, Na

    2015-11-01

    In the present study, recombinant estrogen receptor (ER) and androgen receptor (AR) gene yeast assays, combined with a novel approach based on Monte Carlo simulation, were used to evaluate and characterize soil samples collected from Jilin along the Second Songhua River and to assess their antagonist/agonist properties for ER and AR. The results showed that estrogenic activity occurred only in the soil samples collected in the agricultural area, but most soil samples showed anti-estrogenic activities, and the bioassay-derived 4-hydroxytamoxifen equivalents ranged from not detected (N.D.) to 23.51 μg/g. Hydrophilic substance fractions were determined to be potential contributors to the anti-estrogenic activity in these soil samples. Moreover, none of the soil samples exhibited AR agonistic potency, whereas 54% of the soil samples exhibited AR antagonistic potency. The flutamide equivalents varied between N.D. and 178.05 μg/g. Based on Monte Carlo simulation-related mass balance analysis, the AR antagonistic activities were significantly correlated with the medium-polar and polar fractions. All of these results support the conclusion that this novel calculation method can be adopted effectively to quantify and characterize the ER/AR agonists and antagonists in soil samples, and these data could provide useful information for future management and remediation efforts.

  9. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index, and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach is practicable for optimizing soil sampling schemes. The optimized configuration was capable of capturing soil-landscape relationships accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means, as well as a theoretical basis, for determining a sampling configuration and mapping the spatial distribution of soil organic matter with low cost and high efficiency.
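    A compact sketch of the annealing step, under an objective chosen for illustration (maximize the minimum pairwise distance among selected roadside sites, rather than the paper's exact criterion):

```python
# Illustrative simulated annealing for spatial sample-design optimization:
# choose k sites from N roadside candidates so the minimum pairwise
# distance is maximized, i.e. the design spreads evenly over the area.
import numpy as np

rng = np.random.default_rng(3)
candidates = rng.uniform(0, 10, size=(200, 2))   # candidate sites (x, y), km
k = 13

def score(idx):
    pts = candidates[idx]
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    return d[np.triu_indices(k, 1)].min()        # minimum pairwise spacing

idx = list(rng.choice(len(candidates), k, replace=False))
current, temp = score(idx), 1.0
for step in range(5000):
    new = idx.copy()
    replacement = int(rng.integers(len(candidates)))
    if replacement in new:
        continue
    new[rng.integers(k)] = replacement
    s = score(new)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if s > current or rng.random() < np.exp((s - current) / temp):
        idx, current = new, s
    temp *= 0.999                                # geometric cooling schedule

print(f"optimized minimum spacing: {current:.2f} km")
```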

  10. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
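    The uniform-white-noise quantizer model the abstract builds on is easy to check numerically. Under that model, a b-bit quantizer spanning a full-scale sine wave gives SNR of roughly 6.02b + 1.76 dB; the sketch below (illustrative, not the paper's DPLL) verifies this by direct simulation:

```python
# Quick numerical check of the uniform quantizer-error model: for a b-bit
# quantizer and a full-scale sine input, treating the error as uniform
# white noise of variance q^2/12 predicts SNR ~ 6.02*b + 1.76 dB.
import numpy as np

b = 8
t = np.linspace(0, 1, 100_000, endpoint=False)
x = np.sin(2 * np.pi * 37 * t)                   # full-scale input in [-1, 1]
q = 2.0 / 2**b                                   # step size over the range
xq = np.round(x / q) * q                         # uniform mid-tread quantizer
snr = 10 * np.log10(np.mean(x**2) / np.mean((x - xq)**2))
print(f"measured {snr:.2f} dB vs predicted {6.02*b + 1.76:.2f} dB")
```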

  11. Dynamic loading and release in Johnson Space Center Lunar regolith simulant

    NASA Astrophysics Data System (ADS)

    Plesko, C. S.; Jensen, B. J.; Wescott, B. L.; Skinner McKee, T. E.

    2011-10-01

    The behavior of regolith under dynamic loading is important for the study of planetary evolution, impact cratering, and other topics. Here we present the initial results of explosively driven flier plate experiments and numerical models of compaction and release in samples of the JSC-1A Lunar regolith simulant.

  12. A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Abernethy, K.

    1986-01-01

    The incorporation of a number of additional capabilities into an existing Weibull analysis computer program, and the results of a Monte Carlo simulation study to evaluate the usefulness of Weibull methods for samples with a very small number of failures and extensive censoring, are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs used in the SSME Weibull analysis are described. The documented methods were supplemented by adding computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, as are the simulation program and the techniques it uses. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
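    A hedged sketch of the kind of experiment described, with illustrative parameter values: draw small Weibull failure samples, censor them with uniformly distributed censoring times, and fit the shape and scale by maximum likelihood.

```python
# Weibull fitting with random (uniform) censoring, as in the abstract's
# random censoring model. Parameter values are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
shape_true, scale_true, n = 2.0, 100.0, 15

def neg_log_lik(params, t, failed):
    k, lam = np.exp(params)                      # log-parametrize for positivity
    z = t / lam
    # Failures contribute the density; censored points the survival function.
    logpdf = np.log(k / lam) + (k - 1) * np.log(z) - z**k
    logsf = -z**k
    return -(np.sum(logpdf[failed]) + np.sum(logsf[~failed]))

failures = scale_true * rng.weibull(shape_true, n)
censor_times = rng.uniform(0, 200.0, n)          # random censoring model
t = np.minimum(failures, censor_times)
failed = failures <= censor_times

fit = minimize(neg_log_lik, x0=np.log([1.0, np.mean(t)]), args=(t, failed))
k_hat, lam_hat = np.exp(fit.x)
print(f"{failed.sum()} failures of {n}; shape {k_hat:.2f}, scale {lam_hat:.1f}")
```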

  13. Ultra-Fast Hadronic Calorimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denisov, Dmitri; Lukić, Strahinja; Mokhov, Nikolai

    2018-08-01

    Calorimeters for particle physics experiments with integration times of a few ns will substantially improve the capability of an experiment to resolve event pileup and to reject backgrounds. In this paper the time development of hadronic showers induced by 30 and 60 GeV positive pions and 120 GeV protons is studied using Monte Carlo simulation and beam tests with a prototype of a sampling steel-scintillator hadronic calorimeter. In the beam tests, scintillator signals induced by hadronic showers in steel are sampled with a period of 0.2 ns and precisely time-aligned in order to study the average signal waveform at various locations with respect to the beam particle impact. Simulations of the same setup are performed using the MARS15 code. Both simulation and test beam results suggest that energy deposition in steel calorimeters develops over a time shorter than 2 ns, providing an opportunity for ultra-fast calorimetry. Simulation results for an “ideal” calorimeter consisting exclusively of bulk tungsten or copper are presented to establish the lower limit of the signal integration window.

  14. Ultra-Fast Hadronic Calorimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denisov, Dmitri; Lukić, Strahinja; Mokhov, Nikolai

    2017-12-18

    Calorimeters for particle physics experiments with integration times of a few ns will substantially improve the capability of an experiment to resolve event pileup and to reject backgrounds. In this paper the time development of hadronic showers induced by 30 and 60 GeV positive pions and 120 GeV protons is studied using Monte Carlo simulation and beam tests with a prototype of a sampling steel-scintillator hadronic calorimeter. In the beam tests, scintillator signals induced by hadronic showers in steel are sampled with a period of 0.2 ns and precisely time-aligned in order to study the average signal waveform at various locations with respect to the beam particle impact. Simulations of the same setup are performed using the MARS15 code. Both simulation and test beam results suggest that energy deposition in steel calorimeters develops over a time shorter than 3 ns, providing an opportunity for ultra-fast calorimetry. Simulation results for an "ideal" calorimeter consisting exclusively of bulk tungsten or copper are presented to establish the lower limit of the signal integration window.

  15. An Integrated Study on a Novel High Temperature High Entropy Alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Shizhong

    2016-12-31

    This report summarizes our recent work on theoretical modeling, simulation, and experimental validation of the simulation results for new refractory high entropy alloy (HEA) design and oxide-doped refractory HEA research. Simulations of the stability and thermodynamics of potentially thermally stable candidates were performed, and the corresponding oxide-doped HEA samples were synthesized and characterized. The report briefly introduces the ab initio density functional theory and molecular dynamics methods developed for simulating HEA physical properties, the experimental texture-validation techniques, the achievements reached to date, course work development, student and postdoc training, and directions for future research.

  16. Adaptive screening for depression--recalibration of an item bank for the assessment of depression in persons with mental and somatic diseases and evaluation in a simulated computer-adaptive test environment.

    PubMed

    Forkmann, Thomas; Kroehne, Ulf; Wirtz, Markus; Norra, Christine; Baumeister, Harald; Gauggel, Siegfried; Elhan, Atilla Halil; Tennant, Alan; Boecker, Maren

    2013-11-01

    This study conducted a simulation of computer-adaptive testing (CAT) based on the Aachen Depression Item Bank (ADIB), which was developed for the assessment of depression in persons with somatic diseases. Prior to the CAT simulation, the ADIB was newly calibrated. Recalibration was performed in a sample of 161 patients treated for a depressive syndrome, 103 patients from cardiology, and 103 patients from otorhinolaryngology (mean age 44.1, SD=14.0; 44.7% female) and was cross-validated in a sample of 117 patients undergoing rehabilitation for cardiac diseases (mean age 58.4, SD=10.5; 24.8% women). Unidimensionality of the item bank was checked, and a Rasch analysis was performed that evaluated local dependency (LD), differential item functioning (DIF), item fit, and reliability. The CAT simulation was conducted with the total sample and additional simulated data. Recalibration resulted in a strictly unidimensional item bank with 36 items, showing good Rasch model fit (item fit residuals < |2.5|) and no DIF or LD. The CAT simulation revealed that 13 items on average were necessary to estimate depression in the range between -2 and +2 logits when terminating at SE ≤ 0.32, and 4 items when using SE ≤ 0.50. Receiver operating characteristic analysis showed that θ estimates based on the CAT algorithm have good criterion validity with regard to depression diagnoses (area under the curve ≥ 0.78 for all cut-off criteria). The recalibration of the ADIB succeeded, and the simulation studies conducted suggest that it has good screening performance in the samples investigated and may reasonably contribute to the improvement of depression assessment.

  17. Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.

    PubMed

    Sheppard, C W.

    1969-03-01

    A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
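    A modern equivalent of such a program is only a few lines. The sketch below (parameters illustrative) simulates 1-D walkers with drift, reflected at one boundary and absorbed at the other, and reports the absorbed fraction and mean absorption time:

```python
# Random-walk Monte Carlo with drift, a reflecting boundary at 0, and an
# absorbing boundary at 50. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n_walkers, n_steps, drift = 10_000, 10_000, 0.05
absorbing = 50

position = np.zeros(n_walkers)
alive = np.ones(n_walkers, dtype=bool)
absorption_time = np.full(n_walkers, np.nan)

for step in range(1, n_steps + 1):
    move = rng.choice([-1.0, 1.0], n_walkers) + drift
    position[alive] += move[alive]
    position[alive] = np.abs(position[alive])    # reflect at the origin
    hit = alive & (position >= absorbing)
    absorption_time[hit] = step
    alive &= ~hit

print(f"absorbed: {np.isfinite(absorption_time).mean():.1%}, "
      f"mean absorption time: {np.nanmean(absorption_time):.0f} steps")
```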

  18. Hypersonic, nonequilibrium flow over the FIRE 2 forebody at 1634 sec

    NASA Technical Reports Server (NTRS)

    Chambers, Lin Hartung

    1994-01-01

    The numerical simulation of hypersonic flow in thermochemical nonequilibrium over the forebody of the FIRE 2 vehicle at 1634 sec in its trajectory is described. The simulation was executed on a Cray C90 with the program Langley Aerodynamic Upwind Relaxation Algorithm (LAURA) 4.0.2. Code setup procedures and sample results, including grid refinement studies, are discussed. This simulation relates to a study of radiative heating predictions on aerobrake type vehicles.

  19. A computer simulation of the transient response of a 4 cylinder Stirling engine with burner and air preheater in a vehicle

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1981-01-01

    A series of computer programs, with full documentation, is presented that simulates the transient behavior of a modern four-cylinder Siemens-arrangement Stirling engine with burner and air preheater. Cold start, cranking, idling, acceleration through three gear changes, and steady-speed operation are simulated. Sample results and complete operating instructions are given. A full source code listing of all programs is included.

  20. Spectrum simulation in DTSA-II.

    PubMed

    Ritchie, Nicholas W M

    2009-10-01

    Spectrum simulation is a useful practical and pedagogical tool. Particularly with complex samples or trace constituents, a simulation can help in understanding the limits of the technique and the instrument parameters for an optimal measurement. DTSA-II, software for electron probe microanalysis, provides easy-to-use and flexible tools for simulating common and less common sample geometries and materials. Analytical models based on φ(ρz) curves provide quick simulations of simple samples. Monte Carlo models based on electron and X-ray transport provide more sophisticated models of arbitrarily complex samples. DTSA-II provides a broad range of simulation tools in a framework with many different interchangeable physical models. In addition, DTSA-II provides tools for visualizing, comparing, manipulating, and quantifying simulated and measured spectra.

  1. The Viking X ray fluorescence experiment - Sampling strategies and laboratory simulations. [Mars soil sampling

    NASA Technical Reports Server (NTRS)

    Baird, A. K.; Castro, A. J.; Clark, B. C.; Toulmin, P., III; Rose, H., Jr.; Keil, K.; Gooding, J. L.

    1977-01-01

    Ten samples of Mars regolith material (six on Viking Lander 1 and four on Viking Lander 2) have been delivered to the X ray fluorescence spectrometers as of March 31, 1977. At least six additional samples per lander are planned for acquisition in the remaining Extended Mission (to January 1979). All samples acquired are Martian fines from the near surface (less than 6-cm depth) of the landing sites, except the latest on Viking Lander 1, which is fine material from the bottom of a trench dug to a depth of 25 cm. Several attempts on each lander to acquire fresh rock material (in pebble sizes) for analysis have yielded only cemented surface crustal material (duricrust). Laboratory simulation and experimentation are required both for mission planning of sampling and for interpretation of data returned from Mars. This paper is concerned with the rationale for sample site selections, surface sampler operations, and the supportive laboratory studies needed to interpret X ray results from Mars.

  2. Let's get honest about sampling.

    PubMed

    Mobley, David L

    2012-01-01

    Molecular simulations see widespread and increasing use in computation and molecular design, especially within the area of molecular simulations applied to biomolecular binding and interactions, our focus here. However, force field accuracy remains a concern for many practitioners, and it is often not clear what level of accuracy is really needed for payoffs in a discovery setting. Here, I argue that despite limitations of today's force fields, current simulation tools and force fields now provide the potential for real benefits in a variety of applications. However, these same tools also provide irreproducible results which are often poorly interpreted. Continued progress in the field requires more honesty in assessment and care in evaluation of simulation results, especially with respect to convergence.

  3. Enhanced and effective conformational sampling of protein molecular systems for their free energy landscapes.

    PubMed

    Higo, Junichi; Ikebe, Jinzen; Kamiya, Narutoshi; Nakamura, Haruki

    2012-03-01

    Protein folding and protein-ligand docking have long persisted as important subjects in biophysics. Using multicanonical molecular dynamics (McMD) simulations with realistic expressions, i.e., all-atom protein models and an explicit solvent, free-energy landscapes have been computed for several systems, such as the folding of peptides/proteins composed of a few amino acids up to nearly 60 amino-acid residues, protein-ligand interactions, and the coupled folding and binding of intrinsically disordered proteins. Recent progress in conformational sampling and its applications to biophysical systems is reviewed in this report, including descriptions of several outstanding studies. In addition, an algorithm and detailed procedures used for multicanonical sampling are presented, along with the methodology of adaptive umbrella sampling. Both methods control the simulation so that low-probability regions along a reaction coordinate are sampled frequently. The reaction coordinate is the potential energy for multicanonical sampling and a structural identifier for adaptive umbrella sampling. One might imagine that this probability control invariably enhances conformational transitions among distinct stable states, but this study examines the enhanced conformational sampling of a simple system and shows that reasonably well-controlled sampling slows the transitions. This slowing is induced by a rapid change of entropy along the reaction coordinate. We then provide a recipe to speed up the sampling by loosening the rapid change of entropy. Finally, we report all-atom McMD simulation results for various biophysical systems in an explicit solvent.
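    The flat-histogram idea behind multicanonical sampling can be shown on a toy system. The sketch below runs Metropolis sampling of a 1-D double well with a hand-chosen bias that flattens most of the barrier, then reweights samples back to the unbiased ensemble; real McMD estimates the bias iteratively rather than by hand.

```python
# Toy flat-histogram illustration (not the review's McMD protocol):
# Metropolis sampling of a double-well potential under a bias that
# removes 80% of the barrier, followed by reweighting to exp(-beta*U).
import numpy as np

rng = np.random.default_rng(9)
beta = 3.0
U = lambda x: (x**2 - 1.0) ** 2                # double well, minima at +-1
V = lambda x: -0.8 * U(x)                      # bias flattening most of the barrier

x, samples = -1.0, []
for _ in range(200_000):
    trial = x + rng.normal(0.0, 0.3)
    dE = (U(trial) + V(trial)) - (U(x) + V(x))
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        x = trial
    samples.append(x)

s = np.array(samples[10_000:])                 # discard burn-in
w = np.exp(beta * V(s))                        # reweight back to exp(-beta*U)
print(f"barrier crossings: {np.count_nonzero(np.diff(np.sign(s)))}")
print(f"reweighted <x^2>: {np.sum(w * s**2) / np.sum(w):.3f}")
```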

  4. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial-scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing the mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate the findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming the results of the simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, a concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern Minnesota, and future proposals are pending with non-taconite mineral processing applications.

  5. Effects of Sample Selection Bias on the Accuracy of Population Structure and Ancestry Inference

    PubMed Central

    Shringarpure, Suyash; Xing, Eric P.

    2014-01-01

    Population stratification is an important task in genetic analyses. It provides information about the ancestry of individuals and can be an important confounder in genome-wide association studies. Public genotyping projects have made a large number of datasets available for study. However, practical constraints dictate that of a geographical/ethnic population, only a small number of individuals are genotyped. The resulting data are a sample from the entire population. If the distribution of sample sizes is not representative of the populations being sampled, the accuracy of population stratification analyses of the data could be affected. We attempt to understand the effect of biased sampling on the accuracy of population structure analysis and individual ancestry recovery. We examined two commonly used methods for analyses of such datasets, ADMIXTURE and EIGENSOFT, and found that the accuracy of recovery of population structure is affected to a large extent by the sample used for analysis and how representative it is of the underlying populations. Using simulated data and real genotype data from cattle, we show that sample selection bias can affect the results of population structure analyses. We develop a mathematical framework for sample selection bias in models of population structure and propose a correction for sample selection bias using auxiliary information about the sample. We demonstrate that such a correction is effective in practice using simulated and real data. PMID:24637351

  6. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
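    The design comparison in this study lends itself to a small Monte Carlo illustration. The sketch below (all parameters invented) compares systematic sampling (SYS) of time intervals against simple random sampling (SRS) for estimating total daily angling effort with a smooth diurnal pattern, where SYS typically achieves the lower MSE:

```python
# Illustrative comparison of sampling designs for estimating total daily
# angling effort: systematic (SYS) vs simple random sampling (SRS).
import numpy as np

rng = np.random.default_rng(13)
n_intervals, n_sampled, n_sims = 48, 8, 5_000
# Smooth diurnal effort pattern (expected anglers per half-hour interval).
base = 20 * np.exp(-0.5 * ((np.arange(n_intervals) - 28) / 8.0) ** 2)

def mse(design):
    errors = []
    for _ in range(n_sims):
        effort = rng.poisson(base)
        true_total = effort.sum()
        if design == "sys":
            spacing = n_intervals // n_sampled
            idx = rng.integers(spacing) + spacing * np.arange(n_sampled)
        else:
            idx = rng.choice(n_intervals, n_sampled, replace=False)
        est_total = effort[idx].mean() * n_intervals
        errors.append(est_total - true_total)
    return np.mean(np.square(errors))

print(f"MSE SYS: {mse('sys'):.0f}   MSE SRS: {mse('srs'):.0f}")
```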

  7. Monte Carlo source simulation technique for solution of interference reactions in INAA experiments: a preliminary report

    NASA Astrophysics Data System (ADS)

    Allaf, M. Athari; Shahriari, M.; Sohrabpour, M.

    2004-04-01

    A new method using Monte Carlo source simulation to resolve interference reactions in neutron activation analysis experiments has been developed. The neutron spectrum at the sample location was simulated using the Monte Carlo code MCNP, and the contributions of different elements to a specified gamma line were determined. The resulting response matrix was used, together with the measured peak areas, to determine the sample masses of the elements of interest. A number of benchmark experiments were performed and the calculated results verified against known values. The good agreement obtained between the calculated and known values suggests that this technique may be useful for eliminating interference reactions in neutron activation analysis.
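    The response-matrix step reduces to a linear solve. A hedged sketch (the matrix entries below are hypothetical, not values from the paper): if each element contributes linearly to each gamma line, the measured peak areas p relate to the element masses m through a response matrix R, and the interference-corrected masses follow from solving R m = p.

```python
# Hypothetical 3-line x 3-element response matrix (counts per mg per gamma
# line); off-diagonal entries represent interference reactions.
import numpy as np

R = np.array([[120.0,   8.0,  0.5],
              [  4.0,  95.0,  6.0],
              [  0.2,   3.0, 80.0]])
peak_areas = np.array([6120.0, 4975.0, 2441.0])   # measured counts (invented)

masses = np.linalg.solve(R, peak_areas)           # interference-corrected masses
print("estimated masses (mg):", np.round(masses, 2))
```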

  8. Comparison of High-Performance Fiber Materials Properties in Simulated and Actual Space Environments

    NASA Technical Reports Server (NTRS)

    Finckernor, M. M.

    2017-01-01

    A variety of high-performance fibers, including Kevlar, Nomex, Vectran, and Spectra, have been tested for durability in the space environment, mostly the low Earth orbital environment. These materials have been tested in yarn, tether/cable, and fabric forms. Some material samples were tested in a simulated space environment, such as the Atomic Oxygen Beam Facility and solar simulators in the laboratory. Other samples were flown on the International Space Station as part of the Materials on International Space Station Experiment. Mass loss due to atomic oxygen erosion and optical property changes due to ultraviolet radiation degradation are given. Tensile test results are also presented, including where moisture loss in a vacuum had an impact on tensile strength.

  9. Computer program uses Monte Carlo techniques for statistical system performance analysis

    NASA Technical Reports Server (NTRS)

    Wohl, D. P.

    1967-01-01

    Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.

  10. Microstructural Characterization of Thermomechanical and Heat-Affected Zones of an Inertia Friction Welded Astroloy

    NASA Astrophysics Data System (ADS)

    Oluwasegun, K. M.; Olawale, J. O.; Ige, O. O.; Shittu, M. D.; Adeleke, A. A.; Malomo, B. O.

    2014-08-01

    The response of the γ′ phase to thermal and mechanical effects during rapid heating of Astroloy, a powder-metallurgy nickel-based superalloy, has been investigated. The thermo-mechanically affected zone (TMAZ) and heat-affected zone (HAZ) microstructures of an inertia friction welded (IFW) Astroloy were simulated using a Gleeble thermo-mechanical simulation system. Detailed microstructural examination of the simulated TMAZ and HAZ, and of those present in actual IFW specimens, showed that γ′ particles persisted during rapid heating up to a temperature where the formation of liquid is thermodynamically favored, and subsequently re-solidified eutectically. The results show that forging during the thermo-mechanical simulation significantly enhanced the alloy's resistance to weld liquation cracking. This is attributable to strain-induced rapid isothermal dissolution of the constitutional liquation products within 150 μm of the center of the forged sample, which was not observed in purely thermally simulated samples.

  11. Numerical Simulation of Creep Characteristic for Composite Rock Mass with Weak Interlayer

    NASA Astrophysics Data System (ADS)

    Li, Jian-guang; Zhang, Zuo-liang; Zhang, Yu-biao; Shi, Xiu-wen; Wei, Jian

    2017-06-01

    Composite rock mass with weak interlayers is widespread in engineering, and it is essential to study its creep behavior, which can cause stability problems in rock engineering and production accidents. However, because sampling is difficult, and because of losses and damage during delivery and machining, enough natural layered composite rock mass samples often cannot be obtained, so indirect test methods have been widely used. In this paper, we used ANSYS software (a general finite element package produced by ANSYS, Inc.) to carry out numerical simulations based on uniaxial compression creep experiments on artificial composite rock mass with a weak interlayer, after fitting the experimental data. The results show that the laws obtained by numerical simulation and by experiment are consistent. This confirms that simulating the creep characteristics of rock mass with ANSYS is feasible, and the method can be extended to other underground engineering problems involving weak intercalations.

  12. Salt weathering in Egyptian limestone after laboratory simulations with continuous flow of salt solutions at different temperatures

    NASA Astrophysics Data System (ADS)

    Aly, Nevin; Gomez-Heras, Miguel; Hamed, Ayman; Alvarez de Buergo, Monica

    2013-04-01

    Limestone is one of the most frequent building stones in Egypt, used since the time of the ancient Egyptians, and salt weathering is one of the main threats to its conservation. Most of the limestone used in historical monuments in Cairo is a biomicrite extracted from the Mid-Eocene Mokattam Group. During this work, cylindrical samples (2.4 cm diameter and approx. 4.8 cm length) were subjected, in a purpose-made simulation chamber, to simulated laboratory weathering tests with a fixed salt concentration (10% weight NaCl solution) at different temperatures, which were kept constant throughout each test (10, 20, 30, 40 °C). During each test, salt solutions flowed continuously, imbibing the samples by capillarity. Humidity within the simulation chamber was reduced using silica gel to keep it low and constant and thereby increase the evaporation rate. Temperature, humidity inside the simulation chamber, and sample weight were digitally monitored during each test. Results show the advantages of the proposed experimental methodology using a continuous flow of salt solutions and shed light on the effect of temperature on the dynamics of salt crystallization on and within samples. Research funded by the mission sector of the Ministry of Higher Education, Egypt, and Geomateriales S2009/MAT-1629.

  13. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA

    PubMed Central

    Kelly, Brendan J.; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D.; Collman, Ronald G.; Bushman, Frederic D.; Li, Hongzhe

    2015-01-01

    Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence–absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. Results: We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. Availability and implementation: http://github.com/brendankelly/micropower. Contact: brendank@mail.med.upenn.edu or hongzhe@upenn.edu PMID:25819674
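    The PERMANOVA partition the abstract describes is compact enough to sketch directly. The code below (a minimal reimplementation consistent with the description, not the micropower package) partitions within- and between-group sums of squared pairwise distances and assesses the pseudo-F statistic by label permutation:

```python
# Minimal PERMANOVA on a pairwise distance matrix D with group labels.
import numpy as np

def permanova(D, labels, n_perm=999, rng=None):
    rng = rng or np.random.default_rng()
    n = len(labels)
    groups = np.unique(labels)

    def pseudo_F(lab):
        ss_total = np.sum(D**2) / (2 * n)        # sum over unordered pairs / n
        ss_within = 0.0
        for g in groups:
            idx = np.flatnonzero(lab == g)
            ss_within += np.sum(D[np.ix_(idx, idx)]**2) / (2 * len(idx))
        ss_between = ss_total - ss_within
        df_b, df_w = len(groups) - 1, n - len(groups)
        return (ss_between / df_b) / (ss_within / df_w)

    f_obs = pseudo_F(labels)
    exceed = sum(pseudo_F(rng.permutation(labels)) >= f_obs for _ in range(n_perm))
    return f_obs, (exceed + 1) / (n_perm + 1)

# Example: two groups of 10 samples; group 2 is shifted, so an effect exists.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, (10, 5)), rng.normal(0.8, 1, (10, 5))])
D = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
labels = np.repeat([0, 1], 10)
print("pseudo-F %.2f, p = %.3f" % permanova(D, labels, rng=rng))
```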

  14. Tsunami hazard assessments with consideration of uncertain earthquakes characteristics

    NASA Astrophysics Data System (ADS)

    Sepulveda, I.; Liu, P. L. F.; Grigoriu, M. D.; Pritchard, M. E.

    2017-12-01

    The uncertainty quantification of tsunami assessments due to uncertain earthquake characteristics faces important challenges. First, the generated earthquake samples must be consistent with the properties observed in past events. Second, an uncertainty propagation method must be adopted that determines tsunami uncertainties at a feasible computational cost. In this study we propose a new methodology, which improves on existing tsunami uncertainty assessment methods. The methodology considers two uncertain earthquake characteristics: the slip distribution and the location. First, it generates consistent earthquake slip samples by means of a Karhunen-Loeve (K-L) expansion and a translation process (Grigoriu, 2012), applicable to any non-rectangular rupture area and marginal probability distribution. The K-L expansion was recently applied by LeVeque et al. (2016). We have extended the methodology by analyzing accuracy criteria in terms of the tsunami initial conditions. Furthermore, and unlike this reference, we preserve the original probability properties of the slip distribution by avoiding post-sampling treatments such as earthquake slip scaling. Our approach is analyzed and justified in the framework of the present study. Second, the methodology uses a stochastic reduced-order model (SROM) (Grigoriu, 2009) instead of a classic Monte Carlo simulation, which reduces the computational cost of the uncertainty propagation. The methodology is applied to a real case: we study tsunamis generated at the site of the 2014 Chilean earthquake, using earthquake samples with expected magnitude Mw 8. We first demonstrate that the stochastic approach of our study generates earthquake samples consistent with the target probability laws. We also show that the results obtained from the SROM are more accurate than classic Monte Carlo simulations. We finally validate the methodology by comparing the simulated tsunamis with the tsunami records for the 2014 Chilean earthquake. Results show that leading-wave measurements fall within the tsunami sample space. At later times, however, there are mismatches between the measured data and the simulated results, suggesting that other sources of uncertainty are as relevant as the uncertainty in the studied earthquake characteristics.
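    The K-L sampling step can be sketched on a simplified 1-D fault discretization (illustrative geometry and covariance, not the study's fault model): eigendecompose a correlation kernel, truncate to the leading modes, generate Gaussian fields, and apply a lognormal translation so slip stays positive.

```python
# Karhunen-Loeve sampling of correlated slip on a 1-D along-strike grid,
# with a lognormal translation process. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(21)
n, corr_len, sigma = 100, 20.0, 0.5
s = np.linspace(0, 200, n)                       # along-strike position, km
C = sigma**2 * np.exp(-np.abs(s[:, None] - s[None, :]) / corr_len)

eigvals, eigvecs = np.linalg.eigh(C)             # ascending eigenvalues
order = np.argsort(eigvals)[::-1]
k = 20                                           # keep the leading modes
lam, phi = eigvals[order][:k], eigvecs[:, order][:, :k]

z = rng.normal(size=(1000, k))                   # i.i.d. standard normals
gaussian_fields = (z * np.sqrt(lam)) @ phi.T     # K-L synthesis, shape (1000, n)
slip = np.exp(gaussian_fields)                   # translation to lognormal slip
print("mean slip along strike:", np.round(slip.mean(axis=0)[:5], 2), "...")
```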

  15. Evaluation and characterization of thyroid-disrupting activities in soil samples along the Second Songhua River, China.

    PubMed

    Kong, Dongdong; Wang, Yafei; Wang, Jinsheng; Teng, Yanguo; Li, Na; Li, Jian

    2016-11-01

    In this study, a recombinant thyroid receptor (TR) gene yeast assay combined with Monte Carlo simulation was used to evaluate and characterize soil samples collected from Jilin (China) along the Second Songhua River for their antagonist/agonist effects on TR. No TR agonistic activity was found in the soils, but many soil samples exhibited TR antagonistic activities, and the bioassay-derived amiodarone hydrochloride equivalents, calculated based on Monte Carlo simulation, ranged from not detected (N.D.) to 35.5 μg/g. Hydrophilic substance fractions were determined to be the contributors to the TR antagonistic activity in these soil samples. Our results indicate that this novel calculation method is effective for the quantification and characterization of TR antagonists in soil samples, and these data could provide useful information for future management and remediation efforts for contaminated soils.

  16. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repeatedly sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sizes of sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, Paul R.; Boyce, Donald E.; Park, Jun-Sang

    A robust methodology is presented to extract slip system strengths from lattice strain distributions for polycrystalline samples obtained from high-energy x-ray diffraction (HEXD) experiments with in situ loading. The methodology consists of matching the evolution of the coefficients of a harmonic expansion of the distributions from simulation to the coefficients derived from measurements. Simulation results are generated via finite element simulations of virtual polycrystals subjected to the loading history applied in the HEXD experiments. Advantages of the methodology include: (1) its ability to utilize the extensive data sets generated by HEXD experiments; (2) its ability to capture trends in distributions that may be noisy (both measured and simulated); and (3) its sensitivity to the ratios of the family strengths. The approach is used to evaluate the slip system strengths of Ti-6Al-4V using samples having relatively equiaxed grains. These strength estimates are compared to values in the literature.

  18. Simulation and flavor compound analysis of dealcoholized beer via one-step vacuum distillation.

    PubMed

    Andrés-Iglesias, Cristina; García-Serna, Juan; Montero, Olimpio; Blanco, Carlos A

    2015-10-01

    The coupled operation of a laboratory-scale vacuum distillation process to produce alcohol-free beer and Aspen HYSYS simulation software was studied to characterize the chemical changes in the aroma profiles of 2 different lager beers during dealcoholization. At the lab scale, 2 different parameter sets were chosen to dealcoholize the beer samples: 102 mbar at 50 °C and 200 mbar at 67 °C. Samples taken at different steps of the process were analyzed by HS-SPME-GC-MS, focusing on the concentrations of 7 flavor compounds: 5 alcohols and 2 esters. For the simulation, the EoS parameters of the Wilson-2 property package were adjusted to the experimental data and one more pressure was tested (60 mbar). Simulation methods represent a viable alternative for predicting the volatile compound composition of a final dealcoholized beer.

  19. Indentation experiments and simulation of ovine bone using a viscoelastic-plastic damage model

    PubMed Central

    Zhao, Yang; Wu, Ziheng; Turner, Simon; MacLeay, Jennifer; Niebur, Glen L.; Ovaert, Timothy C.

    2015-01-01

    Indentation methods have been widely used to study bone at the micro- and nanoscales. It has been shown that bone exhibits viscoelastic behavior with permanent deformation during indentation; at the same time, damage in the form of microcracks is induced by the stresses beneath the indenter tip. In this work, a simplified viscoelastic-plastic damage model was developed to more closely simulate indentation creep data, and the effect of the model parameters on the indentation curve was investigated. Experimentally, baseline and 2-year post-ovariectomy (OVX-2) ovine (sheep) bone samples were prepared and indented. The damage model was then applied via finite element analysis to simulate the bone indentation data. The mechanical properties of yielding, viscosity, and the damage parameter were obtained from the simulations. The results suggest that damage develops more quickly in OVX-2 samples under the same indentation load conditions as the baseline data. PMID:26136623

  20. Sample Results From Tank 48H Samples HTF-48-14-158, -159, -169, and -170

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, T.; Hang, T.

    2015-04-28

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 48H in support of determining the cause of the unusually high dose rates at the sampling points for this tank. A set of two samples was taken from the quiescent tank, and two additional samples were taken after the contents of the tank were mixed. The results of the analyses of all the samples show that the contents of the tank have changed very little since the analysis of the previous sample in 2012. The solids are almost exclusively composed of tetraphenylborate (TPB) salts, and there is no indication of acceleration in the TPB decomposition. The filtrate composition shows a moderate increase in salt concentration and density, which is attributable to the addition of NaOH for the purposes of corrosion control. An older modeling simulation of the TPB degradation was updated, and the supernate results from a 2012 sample were run in the model. This result was compared to the 2014 sample results reported in this document; the model indicates there is no change in the TPB degradation from 2012 to 2014. SRNL also measured the buoyancy of the TPB solids in Tank 48H simulant solutions. It was determined that a solution of density 1.279 g/mL (~6.5 M sodium) was capable of indefinitely suspending the TPB solids evenly throughout the solution, while a solution of density 1.296 g/mL (~7 M sodium) caused a significant fraction of the solids to float on the solution surface. As the experiments could not include the effect of additional buoyancy elements such as benzene or hydrogen generation, the buoyancy measurements provide an upper-bound estimate of the density in Tank 48H required to float the solids.

  1. Quality-assurance results for field pH and specific-conductance measurements, and for laboratory analysis, National Atmospheric Deposition Program and National Trends Network; January 1980-September 1984

    USGS Publications Warehouse

    Schroder, L.J.; Brooks, M.H.; Malo, B.A.; Willoughby, T.C.

    1986-01-01

    Five intersite comparison studies for the field determination of pH and specific conductance, using simulated-precipitation samples, were conducted by the U.S.G.S. for the National Atmospheric Deposition Program and National Trends Network. These comparisons were performed to estimate the precision of pH and specific-conductance determinations made by sampling-site operators. Simulated-precipitation samples were prepared from nitric acid and deionized water. The estimated standard deviation for site-operator determination of pH was 0.25 for pH values ranging from 3.79 to 4.64; the estimated standard deviation for specific conductance was 4.6 microsiemens/cm at 25 °C for specific-conductance values ranging from 10.4 to 59.0 microsiemens/cm at 25 °C. Performance-audit samples with known analyte concentrations were prepared by the U.S.G.S. and distributed to the National Atmospheric Deposition Program's Central Analytical Laboratory. The differences between the National Atmospheric Deposition Program and National Trends Network-reported analyte concentrations and the known analyte concentrations were calculated, and the bias and precision were determined. For 1983, concentrations of calcium, magnesium, sodium, and chloride were biased at the 99% confidence limit; concentrations of potassium and sulfate were unbiased at the 99% confidence limit. Four analytical laboratories routinely analyzing precipitation were evaluated in their analysis of identical natural- and simulated-precipitation samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple-range test on data produced by these laboratories from the analysis of identical simulated-precipitation samples. Analyte precision for each laboratory was estimated by calculating a pooled variance for each analyte. Interlaboratory comparability results may be used to normalize natural-precipitation chemistry data obtained from two or more of these laboratories.

  2. A robust measure of HIV-1 population turnover within chronically infected individuals.

    PubMed

    Achaz, G; Palmer, S; Kearney, M; Maldarelli, F; Mellors, J W; Coffin, J M; Wakeley, J

    2004-10-01

    A simple nonparametric test for population structure was applied to temporally spaced samples of HIV-1 sequences from the gag-pol region within two chronically infected individuals. The results show that temporal structure can be detected for samples separated by about 22 months or more. The performance of the method, which was originally proposed to detect geographic structure, was tested for temporally spaced samples using neutral coalescent simulations. Simulations showed that the method is robust to variation in sample sizes and mutation rates and to the presence/absence of recombination, and that the power to detect temporal structure is high. By comparing levels of temporal structure in simulations to the levels observed in real data, we estimate the effective intra-individual population size of HIV-1 to be between 10^3 and 10^4 viruses, which is in agreement with some previous estimates. Using this estimate and a simple measure of sequence diversity, we estimate an effective neutral mutation rate of about 5 × 10^-6 per site per generation in the gag-pol region. The definition and interpretation of estimates of such "effective" population parameters are discussed.
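    The final step is a one-line calculation. For a haploid population, expected pairwise diversity satisfies π ≈ 2 N_e μ, so μ ≈ π / (2 N_e); the numbers below are illustrative placeholders, not the paper's measured diversity:

```python
# Back-of-envelope version of the final estimate (standard haploid
# formula; the paper's exact estimator may differ).
ne = 5e3    # effective population size, midpoint of the 10^3..10^4 range
pi = 0.05   # illustrative per-site pairwise diversity, not the paper's value
mu = pi / (2 * ne)
print(f"implied neutral mutation rate ~ {mu:.1e} per site per generation")
```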

  3. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    PubMed

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-09

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state dependencies of each constituent part, algorithms only need to be described on conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
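    The dataflow idea is worth illustrating in miniature. The toy executor below (not the Copernicus API) shows how declaring each task's inputs explicitly lets independent tasks run in parallel without any user scheduling; dependency resolution here blocks during submission, a simplification a real engine avoids.

```python
# Toy dataflow executor: tasks declare their inputs; anything whose
# dependencies are satisfied runs in parallel on a thread pool.
from concurrent.futures import ThreadPoolExecutor

def run_dataflow(tasks):
    """tasks: name -> (function, [dependency names]). Returns all results."""
    futures = {}
    with ThreadPoolExecutor() as pool:
        def submit(name):
            if name in futures:
                return futures[name]
            fn, deps = tasks[name]
            # Resolve dependencies first (blocks until they finish).
            dep_results = [submit(d).result() for d in deps]
            futures[name] = pool.submit(fn, *dep_results)
            return futures[name]
        for name in tasks:
            submit(name)
        return {name: f.result() for name, f in futures.items()}

# Example: two independent "simulations" feed one "analysis" step.
results = run_dataflow({
    "sim_a": (lambda: [1.0, 2.0, 3.0], []),
    "sim_b": (lambda: [2.0, 3.0, 4.0], []),
    "combine": (lambda a, b: sum(a + b) / len(a + b), ["sim_a", "sim_b"]),
})
print(results["combine"])
```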

  4. Three-dimensional simulation of ultrasound propagation through trabecular bone structures measured by synchrotron microtomography.

    PubMed

    Bossy, Emmanuel; Padilla, Frédéric; Peyrin, Françoise; Laugier, Pascal

    2005-12-07

    Three-dimensional numerical simulations of ultrasound transmission were performed through 31 trabecular bone samples measured by synchrotron microtomography. The synchrotron microtomography provided high resolution 3D mappings of bone structures, which were used as the input geometry in the simulation software developed in our laboratory. While absorption (i.e. the absorption of ultrasound through dissipative mechanisms) was not taken into account in the algorithm, the simulations reproduced major phenomena observed in real through-transmission experiments in trabecular bone. The simulated attenuation (i.e. the decrease of the transmitted ultrasonic energy) varies linearly with frequency in the MHz frequency range. Both the speed of sound (SOS) and the slope of the normalized frequency-dependent attenuation (nBUA) increase with the bone volume fraction. Twenty-five out of the thirty-one samples exhibited negative velocity dispersion. One sample was rotated to align the main orientation of the trabecular structure with the direction of ultrasonic propagation, leading to the observation of a fast and a slow wave. Coupling numerical simulation with real bone architecture therefore provides a powerful tool to investigate the physics of ultrasound propagation in trabecular structures. As an illustration, comparison between results obtained on bone modelled either as a fluid or a solid structure suggested the major role of mode conversion of the incident acoustic wave to shear waves in bone to explain the large contribution of scattering to the overall attenuation.
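
    For readers unfamiliar with nBUA, the following minimal sketch shows the usual post-processing step: a linear fit of attenuation versus frequency whose slope is normalized by sample thickness. The function name, units and synthetic data are assumptions, not values from the paper.

```python
import numpy as np

def nbua(freq_mhz, attenuation_db, thickness_cm):
    """Slope of a linear fit of attenuation vs. frequency, normalized by
    thickness, in dB/cm/MHz (hypothetical helper, assumed units)."""
    slope, _intercept = np.polyfit(freq_mhz, attenuation_db, 1)
    return slope / thickness_cm

f = np.linspace(0.2, 1.0, 9)              # MHz
att = 3.0 * f + 0.5                       # synthetic, linear in frequency
print(nbua(f, att, thickness_cm=1.0))     # ~3.0 dB/cm/MHz
```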

  5. Aerobraking strategies for the sample of comet coma earth return mission

    NASA Astrophysics Data System (ADS)

    Abe, Takashi; Kawaguchi, Jun'ichiro; Uesugi, Kuninori; Yen, Chen-Wan L.

    The results of a study to validate the applicability of the aerobraking concept to the SOCCER (sample of comet coma earth return) mission using a six-DOF computer simulation of the aerobraking process are presented. The SOCCER spacecraft, the aerobraking scenario, and the power supply problem are briefly described. Results are presented for the spin effect, the payload exposure problem, and the sun angle effect.

  6. Aerobraking strategies for the sample of comet coma earth return mission

    NASA Technical Reports Server (NTRS)

    Abe, Takashi; Kawaguchi, Jun'ichiro; Uesugi, Kuninori; Yen, Chen-Wan L.

    1990-01-01

    The results of a study to validate the applicability of the aerobraking concept to the SOCCER (sample of comet coma earth return) mission using a six-DOF computer simulation of the aerobraking process are presented. The SOCCER spacecraft, the aerobraking scenario, and the power supply problem are briefly described. Results are presented for the spin effect, the payload exposure problem, and the sun angle effect.

  7. Evaluation of NASA speech encoder

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Techniques developed by NASA for spaceflight instrumentation were used in the design of a quantizer for speech decoding. The action of the quantizer was tested in computer simulation with synthesized and real speech signals, and the results were evaluated by a phonetician. Topics discussed include the relationship between the number of quantizer levels and the required sampling rate; reconstruction of signals; digital filtering; speech recording, sampling, and storage; and processing of results.

  8. Building test data from real outbreaks for evaluating detection algorithms.

    PubMed

    Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method (ITSM), Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
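
    As a hedged illustration of one of the listed resampling steps, the sketch below redraws daily case counts from a historical outbreak curve by inverse transform sampling; the names and the example signal are hypothetical, and the paper's homothetic rescaling step is omitted.

```python
import random

def resample_outbreak(daily_cases, n_cases, seed=42):
    """Draw n_cases onset days from the empirical day distribution of a
    historical outbreak (inverse transform sampling)."""
    rng = random.Random(seed)
    total = float(sum(daily_cases))
    cdf, acc = [], 0.0
    for c in daily_cases:
        acc += c / total
        cdf.append(acc)
    cdf[-1] = 1.0                          # guard against float rounding
    simulated = [0] * len(daily_cases)
    for _ in range(n_cases):
        u = rng.random()
        day = next(i for i, p in enumerate(cdf) if u <= p)
        simulated[day] += 1
    return simulated

historical = [1, 3, 8, 15, 9, 4, 2, 1]     # hypothetical outbreak signal
print(resample_outbreak(historical, n_cases=60))
```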

  9. Building test data from real outbreaks for evaluating detection algorithms

    PubMed Central

    Texier, Gaetan; Jackson, Michael L.; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method (ITSM), Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals. PMID:28863159

  10. The effect of a long-term Mars simulation on a microbial permafrost soil community and macromolecules such as DNA, polypeptides and cell wall components.

    NASA Astrophysics Data System (ADS)

    Finster, K.; Hansen, A.; Liengaard, L.; Kristoffersen, T.; Mikkelsen, K.; Merrison, J.; Lomstein, B.

    Ten freeze-dried and homogenized samples of a 2300-year-old Spitsbergen permafrost soil containing a complex microbial community were aseptically transferred to inert glass tubes and subjected to a 30-day Martian simulation experiment. During this period the samples received a UV dose equivalent to 80 Martian sols. Data loggers in 4 of the ten samples monitored the temperature 0-2 mm below the surface of the sample. After removal from the simulation chamber, the samples were sliced into 1.5 to 6 mm thick horizons (H1, 0-1.5 mm; H2, 1.5-3 mm; H3, 3-6 mm; H4, 6-9 mm; H5, 9-15 mm; H6, 15-21 mm; H7, 21-27 mm and H8, 27-33 mm), resulting in 10 subsamples from each soil horizon. The subsamples from each horizon were pooled and used for the following investigations: (1) determination of the bacterial number after staining with SYBR-gold; (2) determination of the number of dead and living bacteria using the BacLight kit; (3) determination of the total amount of extractable DNA; (4) determination of the number of culturable aerobic and anaerobic bacteria; (5) determination of the concentration of total hydrolysable amino acids and D and L enantiomers; (6) determination of the muramic acid concentration. The results of the experiments will be presented and discussed in our communication.

  11. COED Transactions, Vol. X, No. 10, October 1978. Simulation of a Sampled-Data System on a Hybrid Computer.

    ERIC Educational Resources Information Center

    Mitchell, Eugene E., Ed.

    The simulation of a sampled-data system on a full parallel hybrid computer is described. The simulated sampled-data system illustrates the proportional-integral-derivative (PID) discrete control of a continuous second-order process representing a stirred tank. The stirred tank is simulated using continuous analog components, while the PID…
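
    The original exercise ran on a hybrid analog/digital computer, so the following is only a software analogue under assumed gains and plant parameters: a discrete PID controller sampled every dt seconds wrapped around a continuous second-order process integrated with small Euler steps between samples.

```python
# Assumed gains and plant parameters; dt is the controller sampling period.
kp, ki, kd = 2.0, 1.0, 0.5
dt, t_end, substeps = 0.1, 10.0, 10
wn, zeta = 1.0, 0.7                 # plant natural frequency and damping

y = ydot = 0.0                      # plant state
integral, prev_err, setpoint = 0.0, 0.0, 1.0
for _ in range(int(t_end / dt)):
    err = setpoint - y              # sample the plant output
    integral += err * dt
    derivative = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * derivative  # held for one period
    prev_err = err
    h = dt / substeps               # integrate y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u
    for _ in range(substeps):
        yddot = wn**2 * u - 2.0 * zeta * wn * ydot - wn**2 * y
        ydot += h * yddot
        y += h * ydot
print(round(y, 3))                  # settles near the setpoint 1.0
```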

  12. Scenario and modelling uncertainty in global mean temperature change derived from emission driven Global Climate Models

    NASA Astrophysics Data System (ADS)

    Booth, B. B. B.; Bernie, D.; McNeall, D.; Hawkins, E.; Caesar, J.; Boulton, C.; Friedlingstein, P.; Sexton, D.

    2012-09-01

    We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from an emission-driven rather than concentration-driven perturbed parameter ensemble of a Global Climate Model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, the land carbon cycle, ocean physics and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses when considering emission-driven rather than concentration-driven simulations (with 10-90 percentile ranges of 1.7 K for the aggressive mitigation scenario up to 3.9 K for the high-end business-as-usual scenario). A small minority of simulations resulting from combinations of strong atmospheric feedbacks and carbon cycle responses show temperature increases in excess of 9 K under RCP8.5 and, even under aggressive mitigation (RCP2.6), in excess of 4 K. While the simulations point to much larger temperature ranges for emission-driven experiments, they do not change existing expectations (based on previous concentration-driven experiments) about the timescales on which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. For both SRES A1B and the Representative Concentration Pathways (RCPs), the concentration pathways used to drive GCM ensembles lie towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration-driven experiments to under-sample strong feedback responses. Our ensemble of emission-driven simulations spans the global temperature response of other multi-model frameworks except at the low end, where combinations of low climate sensitivity and low carbon cycle feedbacks lead to responses outside our ensemble range. The ensemble simulates a number of high-end responses which lie above the CMIP5 carbon cycle range. These high-end simulations can be linked to the sampling of a number of stronger carbon cycle feedbacks and of climate sensitivities above 4.5 K. This latter aspect highlights the priority of identifying real-world constraints on climate sensitivity which, if achieved, would reduce the upper bound of projected global mean temperature change. The ensemble of simulations presented here provides a framework to explore relationships between present-day observables and future changes, while the large spread of projected changes highlights the ongoing need for such work.

  13. Passivation of pigment particles for thermal control coatings

    NASA Technical Reports Server (NTRS)

    Sancier, K. M.; Morrison, S. R.; Farley, E. P.

    1975-01-01

    The preparation of a matrix of 48 samples consisting of pigments and pigmented paints is described. The results obtained from testing these samples by electron spin resonance and by in situ spectral reflectance measurements in space simulation tests are presented. Conclusions and recommendations for further research are given.

  14. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; ...

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the "two-step" method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.

  15. Optimal sampling and quantization of synthetic aperture radar signals

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1978-01-01

    Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.

  16. How well do we know the infaunal biomass of the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger

    2016-03-01

    Benthic infauna comprise a wide range of taxa of varying abundances and sizes, but large infaunal taxa are infrequently recorded in community surveys of the shelf benthos. These larger, but numerically rare, species may contribute disproportionately to biomass, however. We examine the degree to which standard benthic sampling gear and survey design provide an adequate estimate of the biomass of large infauna, using the Atlantic surfclam, Spisula solidissima, on the continental shelf off the northeastern coast of the United States as a test organism. We develop a numerical model that simulates standard survey designs, gear types, and sampling densities to evaluate the effectiveness of vertically-dropped sampling gear (e.g., boxcores, grabs) for estimating the density of large species. Simulations of randomly distributed clams at a density of 0.5-1 m^-2 within a 0.25-km^2 domain show that lower sampling densities (1-5 samples per sampling event) resulted in highly inaccurate estimates of clam density, with the presence of clams detected in less than 25% of the sampling events. In all cases in which patchiness was present in the simulated clam population, surveys were prone to very large errors (survey availability events) unless a dense (e.g., 100-sample) sampling protocol was imposed. Thus, commercial quantities of surfclams could easily go completely undetected by any standard benthic community survey protocol using vertically-dropped gear. Without recourse to modern high-volume sampling gear capable of sampling many meters at a swath, such as hydraulic dredges, the biomass of the continental shelf will be grievously underestimated if large infauna are present even at moderate densities.
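
    The flavor of the survey simulation can be conveyed with a much simpler Monte Carlo sketch, a Poisson-field simplification with an assumed grab area rather than the authors' model: at surfclam-like densities, a handful of small grabs usually detects nothing.

```python
import math
import random

def detection_probability(density, n_grabs, grab_area=0.1,
                          n_surveys=20000, seed=1):
    """Fraction of simulated surveys whose n_grabs grabs (area in m^2)
    contain at least one clam, for clams scattered as a Poisson field."""
    rng = random.Random(seed)
    p_hit = 1.0 - math.exp(-density * grab_area)   # P(one grab is non-empty)
    detected = sum(
        any(rng.random() < p_hit for _ in range(n_grabs))
        for _ in range(n_surveys)
    )
    return detected / n_surveys

print(detection_probability(0.5, n_grabs=5))    # ~0.22: mostly missed
print(detection_probability(0.5, n_grabs=100))  # ~0.99: dense protocol needed
```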

  17. Propagation of polarized light through textile material.

    PubMed

    Peng, Bo; Ding, Tianhuai; Wang, Peng

    2012-09-10

    In this paper a detailed investigation, based on simulations and experiments, of polarized light propagation through textile material is presented. The fibers in textile material are generally anisotropic with an axisymmetric structure. The formalism of anisotropic fiber scattering (AFS) at oblique incidence is first deduced; then, based on this formalism and accounting for multiple scattering, a polarization-dependent Monte Carlo method is employed to simulate the propagation of polarized light in textile material. Taking cotton fiber assemblies as samples, the forward-scattering Mueller matrices are calculated theoretically through the AFS-based simulations and measured experimentally by an improved Mueller matrix polarimeter. Their variation with sample thickness is discussed first. With these matrices polar-decomposed, a further discussion of the optical polarization properties of cotton fiber assemblies (i.e., depolarization Δ, diattenuation D, optical rotation ψ and linear retardance δ) versus thickness is presented. In addition, the matrices and their polar decompositions generated from simulations based on isotropic fiber scattering (IFS) are compared with those based on AFS. Results show that the IFS-derived values differ strikingly from the AFS-derived ones because fiber anisotropy is ignored. Furthermore, all the AFS-derived results are in excellent agreement with those obtained experimentally, which suggests that Monte Carlo simulation based on AFS has potential applications for light scattering and propagation in textile material.

  18. Understanding Cryptic Pocket Formation in Protein Targets by Enhanced Sampling Simulations.

    PubMed

    Oleinikovas, Vladimiras; Saladino, Giorgio; Cossins, Benjamin P; Gervasio, Francesco L

    2016-11-02

    Cryptic pockets, that is, sites on protein targets that only become apparent when drugs bind, provide a promising alternative to classical binding sites for drug development. Here, we investigate the nature and dynamical properties of cryptic sites in four pharmacologically relevant targets, while comparing the efficacy of various simulation-based approaches in discovering them. We find that the studied cryptic sites do not correspond to local minima in the computed conformational free energy landscape of the unliganded proteins. They thus promptly close in all of the molecular dynamics simulations performed, irrespective of the force field used. Temperature-based enhanced sampling approaches, such as Parallel Tempering, do not improve the situation, as the entropic term does not help in the opening of the sites. The use of fragment probes helps, as in long simulations it occasionally leads to the opening of, and binding to, the cryptic sites. Our observed mechanism of cryptic site formation is suggestive of an interplay between two classical mechanisms: induced fit and conformational selection. Employing this insight, we developed a novel Hamiltonian Replica Exchange-based method, "SWISH" (Sampling Water Interfaces through Scaled Hamiltonians), which combined with probes resulted in a promising general approach for cryptic site discovery. We also addressed the issue of "false positives" and propose a simple approach to distinguish them from druggable cryptic pockets. Our simulations, whose cumulative sampling time was more than 200 μs, help to clarify the molecular mechanism of pocket formation, providing a solid basis for the choice of an efficient computational method.

  19. Assessment of solute fluxes beneath an orchard irrigated with treated sewage water: A numerical study

    NASA Astrophysics Data System (ADS)

    Russo, David; Laufer, Asher; Shapira, Roi H.; Kurtzman, Daniel

    2013-02-01

    Detailed numerical simulations were used to analyze water flow and transport of nitrate, chloride, and a tracer solute in a 3-D, spatially heterogeneous, variably saturated soil, originating from a citrus orchard irrigated with treated sewage water (TSW) considering realistic features of the soil-water-plant-atmosphere system. Results of this study suggest that under long-term irrigation with TSW, because of nitrate uptake by the tree roots and nitrogen transformations, the vadose zone may provide more capacity for the attenuation of the nitrate load in the groundwater than for the chloride load in the groundwater. Results of the 3-D simulations were used to assess their counterparts based on a simplified, deterministic, 1-D vertical simulation and on limited soil monitoring. Results of the analyses suggest that the information that may be gained from a single sampling point (located close to the area active in water uptake by the tree roots) or from the results of the 1-D simulation is insufficient for a quantitative description of the response of the complicated, 3-D flow system. Both might considerably underestimate the movement and spreading of a pulse of a tracer solute and also the groundwater contamination hazard posed by nitrate and particularly by chloride moving through the vadose zone. This stems mainly from the rain that drove water through the flow system away from the rooted area and could not be represented by the 1-D model or by the single sampling point. It was shown, however, that an additional sampling point, located outside the area active in water uptake, may substantially improve the quantitative description of the response of the complicated, 3-D flow system.

  20. Comparing the IRT Pre-equating and Section Pre-equating: A Simulation Study.

    ERIC Educational Resources Information Center

    Hwang, Chi-en; Cleary, T. Anne

    The results obtained from two basic types of pre-equating of tests were compared: item response theory (IRT) pre-equating and section pre-equating (SPE). The simulated data were generated from a modified three-parameter logistic model with a constant guessing parameter. Responses of two replication samples of 3000 examinees on two 72-item…

  1. The effects of physiological adjustments on the perceptual and acoustical characteristics of simulated laryngeal vocal tremor

    PubMed Central

    Lester, Rosemary A.; Story, Brad H.

    2015-01-01

    The purpose of this study was to determine if adjustments to the voice source [i.e., fundamental frequency (F0), degree of vocal fold adduction] or vocal tract filter (i.e., vocal tract shape for vowels) reduce the perception of simulated laryngeal vocal tremor and to determine if listener perception could be explained by characteristics of the acoustical modulations. This research was carried out using a computational model of speech production that allowed for precise control and manipulation of the glottal and vocal tract configurations. Forty-two healthy adults participated in a perceptual study involving paired comparisons of the magnitude of "shakiness" of simulated samples of laryngeal vocal tremor. Results revealed that listeners perceived a higher magnitude of voice modulation when simulated samples had a higher mean F0, a greater degree of vocal fold adduction, and the vocal tract shape for /i/ vs /ɑ/. However, the effect of F0 was significant only when glottal noise was not present in the acoustic signal. Acoustical analyses were performed with the simulated samples to determine the features that affected listeners' judgments. Based on regression analyses, listeners' judgments were predicted to some extent by modulation information present in both low and high frequency bands. PMID:26328711

  2. Shear wave speed estimation by adaptive random sample consensus method.

    PubMed

    Lin, Haoming; Wang, Tianfu; Chen, Siping

    2014-01-01

    This paper describes a new method for shear wave velocity estimation that is capable of excluding outliers automatically without a preset threshold. The proposed method is an adaptive random sample consensus (ARANDSAC), and the criterion used here is finding a certain percentage of inliers according to the closest-distance criterion. To evaluate the method, simulation and phantom experiment results were compared using linear regression with all points (LRWAP) and the Radon sum transform (RS) method. The assessment reveals that the relative biases of the mean estimates are 20.00%, 4.67% and 5.33% for LRWAP, ARANDSAC and RS respectively in simulation, and 23.53%, 4.08% and 1.08% in the phantom experiment. The results suggest that the proposed ARANDSAC algorithm is accurate in shear wave speed estimation.
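
    The abstract gives no algorithmic detail, so the sketch below shows only the generic fixed-threshold RANSAC idea that ARANDSAC adapts (by choosing the inlier criterion automatically): a robust line fit of wave arrival position versus time whose slope is the shear wave speed. All names and numbers are illustrative.

```python
import random

def ransac_speed(times, positions, n_iter=500, tol=0.5, seed=0):
    """Robust slope (speed) of arrival position vs. time: repeatedly fit a
    line through two random points, keep the largest consensus set, refit."""
    rng = random.Random(seed)
    n, best_inliers = len(times), []
    for _ in range(n_iter):
        i, j = rng.sample(range(n), 2)
        if times[i] == times[j]:
            continue
        slope = (positions[j] - positions[i]) / (times[j] - times[i])
        icept = positions[i] - slope * times[i]
        inliers = [k for k in range(n)
                   if abs(positions[k] - (slope * times[k] + icept)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    ts = [times[k] for k in best_inliers]       # least-squares refit
    ps = [positions[k] for k in best_inliers]
    tbar, pbar = sum(ts) / len(ts), sum(ps) / len(ps)
    num = sum((t - tbar) * (p - pbar) for t, p in zip(ts, ps))
    return num / sum((t - tbar) ** 2 for t in ts)

times = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]          # ms
positions = [0.1, 1.6, 3.0, 9.9, 6.1, 7.4]      # mm, with one outlier
print(round(ransac_speed(times, positions), 2)) # close to the true 3 mm/ms
```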

  3. Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems

    NASA Astrophysics Data System (ADS)

    Nieciąg, Halina

    2015-10-01

    Software is used to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses a method for conducting validation studies of a fragment of software used to calculate the values of measurands. Because of the number and nature of the variables affecting coordinate measurement results, and the complex, multi-dimensional character of measurands, the study used the Monte Carlo method of numerical simulation. The article presents a possible improvement over the results obtained by classic Monte Carlo tools. The Latin Hypercube Sampling (LHS) algorithm was implemented as an alternative to the simple sampling scheme of the classic algorithm.
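
    For concreteness, here is a minimal Latin Hypercube Sampling sketch (not the paper's implementation): n points in d dimensions, with exactly one point in each of the n equal-probability strata along every axis, which covers the input space more evenly than simple random sampling.

```python
import random

def latin_hypercube(n, d, seed=0):
    """n points in [0,1)^d with exactly one point per axis stratum."""
    rng = random.Random(seed)
    strata = [rng.sample(range(n), n) for _ in range(d)]  # per-dimension shuffle
    return [[(strata[k][i] + rng.random()) / n for k in range(d)]
            for i in range(n)]

for point in latin_hypercube(5, 2):
    print([round(x, 3) for x in point])
# Each axis is hit exactly once in each fifth [0,0.2), [0.2,0.4), ... of its range.
```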

  4. Analysis of Lunar Highland Regolith Samples From Apollo 16 Drive Core 64001/2 and Lunar Regolith Simulants - an Expanding Comparative Database

    NASA Technical Reports Server (NTRS)

    Schrader, Christian M.; Rickman, Doug; Stoeser, Douglas; Wentworth, Susan; McKay, Dave S.; Botha, Pieter; Butcher, Alan R.; Horsch, Hanna E.; Benedictus, Aukje; Gottlieb, Paul

    2008-01-01

    This slide presentation reviews work to analyze lunar highland regolith samples from Apollo 16 drive core 64001/2 and lunar regolith simulants, and to build a comparative database. The work is part of a larger effort to compile an internally consistent database on lunar regolith (Apollo samples) and lunar regolith simulants, in support of a future lunar outpost. The aim is to characterize existing lunar regolith and simulants in terms of particle type, particle size distribution, particle shape distribution, bulk density, and other compositional characteristics, and to evaluate the regolith simulants on the same properties in comparison to the Apollo lunar regolith samples.

  5. Enhanced sampling simulations to construct free-energy landscape of protein-partner substrate interaction.

    PubMed

    Ikebe, Jinzen; Umezawa, Koji; Higo, Junichi

    2016-03-01

    Molecular dynamics (MD) simulations using all-atom and explicit solvent models provide valuable information on the detailed behavior of protein-partner substrate binding at the atomic level. As the power of computational resources increase, MD simulations are being used more widely and easily. However, it is still difficult to investigate the thermodynamic properties of protein-partner substrate binding and protein folding with conventional MD simulations. Enhanced sampling methods have been developed to sample conformations that reflect equilibrium conditions in a more efficient manner than conventional MD simulations, thereby allowing the construction of accurate free-energy landscapes. In this review, we discuss these enhanced sampling methods using a series of case-by-case examples. In particular, we review enhanced sampling methods conforming to trivial trajectory parallelization, virtual-system coupled multicanonical MD, and adaptive lambda square dynamics. These methods have been recently developed based on the existing method of multicanonical MD simulation. Their applications are reviewed with an emphasis on describing their practical implementation. In our concluding remarks we explore extensions of the enhanced sampling methods that may allow for even more efficient sampling.

  6. Fast and accurate Monte Carlo sampling of first-passage times from Wiener diffusion models.

    PubMed

    Drugowitsch, Jan

    2016-02-11

    We present a new, fast approach for drawing boundary crossing samples from Wiener diffusion models. Diffusion models are widely applied to model choices and reaction times in two-choice decisions. Samples from these models can be used to simulate the choices and reaction times they predict. These samples, in turn, can be utilized to adjust the models' parameters to match observed behavior from humans and other animals. Usually, such samples are drawn by simulating a stochastic differential equation in discrete time steps, which is slow and leads to biases in the reaction time estimates. Our method instead makes use of known expressions for first-passage time densities, which results in unbiased, exact samples and a hundred- to thousand-fold speed increase in typical situations. In its most basic form it is restricted to diffusion models with symmetric boundaries and non-leaky accumulation, but our approach can be extended to handle asymmetric boundaries or to approximate leaky accumulation.
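
    For contrast with the paper's exact approach, the sketch below implements the standard discrete-time scheme it is designed to replace: Euler-Maruyama simulation of a symmetric two-boundary Wiener diffusion, whose first-passage times are biased by the step size. Parameter names and values are illustrative.

```python
import math
import random

def simulate_trial(v=0.5, a=1.0, sigma=1.0, dt=1e-3, seed=0):
    """Euler-Maruyama walk of dx = v dt + sigma dW until |x| >= a;
    returns (choice, first-passage time), both biased by the step size dt."""
    rng = random.Random(seed)
    x, t, sd = 0.0, 0.0, sigma * math.sqrt(dt)
    while abs(x) < a:
        x += v * dt + rng.gauss(0.0, sd)
        t += dt
    return (1 if x >= a else 0), t

choices, rts = zip(*(simulate_trial(seed=s) for s in range(2000)))
print(sum(choices) / len(choices))  # P(upper boundary), rises with drift v
print(sum(rts) / len(rts))          # mean reaction time (step-size biased)
```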

  7. Accelerating Convergence in Molecular Dynamics Simulations of Solutes in Lipid Membranes by Conducting a Random Walk along the Bilayer Normal.

    PubMed

    Neale, Chris; Madill, Chris; Rauscher, Sarah; Pomès, Régis

    2013-08-13

    All molecular dynamics simulations are susceptible to sampling errors, which degrade the accuracy and precision of observed values. The statistical convergence of simulations containing atomistic lipid bilayers is limited by the slow relaxation of the lipid phase, which can exceed hundreds of nanoseconds. These long conformational autocorrelation times are exacerbated in the presence of charged solutes, which can induce significant distortions of the bilayer structure. Such long relaxation times represent hidden barriers that induce systematic sampling errors in simulations of solute insertion. To identify optimal methods for enhancing sampling efficiency, we quantitatively evaluate convergence rates using generalized ensemble sampling algorithms in calculations of the potential of mean force for the insertion of the ionic side chain analog of arginine in a lipid bilayer. Umbrella sampling (US) is used to restrain solute insertion depth along the bilayer normal, the order parameter commonly used in simulations of molecular solutes in lipid bilayers. When US simulations are modified to conduct random walks along the bilayer normal using a Hamiltonian exchange algorithm, systematic sampling errors are eliminated more rapidly and the rate of statistical convergence of the standard free energy of binding of the solute to the lipid bilayer is increased 3-fold. We compute the ratio of the replica flux transmitted across a defined region of the order parameter to the replica flux that entered that region in Hamiltonian exchange simulations. We show that this quantity, the transmission factor, identifies sampling barriers in degrees of freedom orthogonal to the order parameter. The transmission factor is used to estimate the depth-dependent conformational autocorrelation times of the simulation system, some of which exceed the simulation time, and thereby identify solute insertion depths that are prone to systematic sampling errors and estimate the lower bound of the amount of sampling that is required to resolve these sampling errors. Finally, we extend our simulations and verify that the conformational autocorrelation times estimated by the transmission factor accurately predict correlation times that exceed the simulation time scale, something that, to our knowledge, has never before been achieved.

  8. Simulation of range imaging-based estimation of respiratory lung motion. Influence of noise, signal dimensionality and sampling patterns.

    PubMed

    Wilms, M; Werner, R; Blendowski, M; Ortmüller, J; Handels, H

    2014-01-01

    A major problem associated with the irradiation of thoracic and abdominal tumors is respiratory motion. In clinical practice, motion compensation approaches are frequently steered by low-dimensional breathing signals (e.g., spirometry) and patient-specific correspondence models, which are used to estimate the sought internal motion given a signal measurement. Recently, the use of multidimensional signals derived from range images of the moving skin surface has been proposed to better account for complex motion patterns. In this work, a simulation study is carried out to investigate the motion estimation accuracy of such multidimensional signals and the influence of noise, the signal dimensionality, and different sampling patterns (points, lines, regions). A diffeomorphic correspondence modeling framework is employed to relate multidimensional breathing signals derived from simulated range images to internal motion patterns represented by diffeomorphic non-linear transformations. Furthermore, an automatic approach for the selection of optimal signal combinations/patterns within this framework is presented. This simulation study focuses on lung motion estimation and is based on 28 4D CT data sets. The results show that the use of multidimensional signals instead of one-dimensional signals significantly improves the motion estimation accuracy, which is, however, highly affected by noise. Only small differences exist between different multidimensional sampling patterns (lines and regions). Automatically determined optimal combinations of points and lines do not lead to accuracy improvements compared to results obtained by using all points or lines. Our results show the potential of multidimensional breathing signals derived from range images for the model-based estimation of respiratory motion in radiation therapy.

  9. Wang-Landau method for calculating Rényi entropies in finite-temperature quantum Monte Carlo simulations.

    PubMed

    Inglis, Stephen; Melko, Roger G

    2013-01-01

    We implement a Wang-Landau sampling technique in quantum Monte Carlo (QMC) simulations for the purpose of calculating the Rényi entanglement entropies and associated mutual information. The algorithm converges an estimate for an analog to the density of states for stochastic series expansion QMC, allowing a direct calculation of Rényi entropies without explicit thermodynamic integration. We benchmark results for the mutual information on two-dimensional (2D) isotropic and anisotropic Heisenberg models, a 2D transverse field Ising model, and a three-dimensional Heisenberg model, confirming a critical scaling of the mutual information in cases with a finite-temperature transition. We discuss the benefits and limitations of broad sampling techniques compared to standard importance sampling methods.
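
    Setting the quantum Monte Carlo machinery aside, the core Wang-Landau idea can be sketched on a classical toy model. The code below estimates ln g(E) for a tiny periodic 2D Ising lattice; it is a standard textbook illustration (with the flatness check omitted), not the authors' stochastic series expansion implementation.

```python
import math
import random

L, rng = 4, random.Random(0)
spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def energy(s):
    """Periodic 2D Ising energy, counting each bond once."""
    return -sum(s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
                for i in range(L) for j in range(L))

log_g = {}                 # running estimate of ln g(E)
f = 1.0                    # modification factor, halved each stage
E = energy(spins)
for stage in range(8):
    for _ in range(20000):
        i, j = rng.randrange(L), rng.randrange(L)
        dE = 2 * spins[i][j] * (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                                + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        lg_old = log_g.setdefault(E, 0.0)
        lg_new = log_g.setdefault(E + dE, 0.0)
        if rng.random() < math.exp(lg_old - lg_new):   # favor rare energies
            spins[i][j] *= -1
            E += dE
        log_g[E] += f      # update ln g at the current energy
    f /= 2.0               # (flatness check of the visit histogram omitted)

print(sorted(log_g))       # energy levels explored, about -32 ... 32
```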

  10. Remote Sensing, Sampling and Simulation Applications in Analyses of Insect Dispersion and Abundance in Cotton

    Treesearch

    J. L. Willers; J. M. McKinion; J. N. Jenkins

    2006-01-01

    Simulation was employed to create stratified simple random samples of different sample unit sizes to represent tarnished plant bug abundance at different densities within various habitats of simulated cotton fields. These samples were used to investigate dispersion patterns of this cotton insect. It was found that the assessment of spatial pattern varied as a function...

  11. Reactive flow modeling of small scale detonation failure experiments for a baseline non-ideal explosive

    NASA Astrophysics Data System (ADS)

    Kittell, David E.; Cummock, Nick R.; Son, Steven F.

    2016-08-01

    Small scale characterization experiments using only 1-5 g of a baseline ammonium nitrate plus fuel oil (ANFO) explosive are discussed and simulated using an ignition and growth reactive flow model. There exists a strong need for the small scale characterization of non-ideal explosives in order to adequately survey the wide parameter space in sample composition, density, and microstructure of these materials. However, it is largely unknown in the scientific community whether any useful or meaningful result may be obtained from detonation failure, and whether a minimum sample size or level of confinement exists for the experiments. In this work, it is shown that the parameters of an ignition and growth rate law may be calibrated using the small scale data, which is obtained from a 35 GHz microwave interferometer. Calibration is feasible when the samples are heavily confined and overdriven; this conclusion is supported with detailed simulation output, including pressure and reaction contours inside the ANFO samples. The resulting shock wave velocity is most likely a combined chemical-mechanical response, and simulations of these experiments require an accurate unreacted equation of state (EOS) in addition to the calibrated reaction rate. Other experiments are proposed to gain further insight into the detonation failure data, as well as to help discriminate between the role of the EOS and reaction rate in predicting the measured outcome.

  12. Scattering Properties of Large Irregular Cosmic Dust Particles at Visible Wavelengths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Escobar-Cerezo, J.; Palmer, C.; Muñoz, O.

    The effect of internal inhomogeneities and surface roughness on the scattering behavior of large cosmic dust particles is studied by comparing model simulations with laboratory measurements. The present work shows the results of an attempt to model a dust sample measured in the laboratory with simulations performed by a ray-optics model code. We consider this dust sample a good analogue for interplanetary and interstellar dust, as it shares its refractive index with known materials in these media. Several sensitivity tests have been performed for both structural cases (internal inclusions and surface roughness). Three different samples have been selected to mimic inclusion/coating inhomogeneities: two measured scattering matrices of hematite and white clay, and a simulated matrix for water ice. These three matrices are selected to cover a wide range of imaginary refractive indices. The selection of these materials also seeks to study astrophysical environments of interest, such as Mars, where hematite and clays have been detected, and comets. Based on the results of the sensitivity tests shown in this work, we perform calculations for a size distribution of a silicate-type host particle model with inclusions and surface roughness to reproduce the experimental measurements of a dust sample. The model fits the measurements quite well, proving that surface roughness and internal structure play a role in the scattering pattern of irregular cosmic dust particles.

  13. Reactive flow modeling of small scale detonation failure experiments for a baseline non-ideal explosive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kittell, David E.; Cummock, Nick R.; Son, Steven F.

    2016-08-14

    Small scale characterization experiments using only 1–5 g of a baseline ammonium nitrate plus fuel oil (ANFO) explosive are discussed and simulated using an ignition and growth reactive flow model. There exists a strong need for the small scale characterization of non-ideal explosives in order to adequately survey the wide parameter space in sample composition, density, and microstructure of these materials. However, it is largely unknown in the scientific community whether any useful or meaningful result may be obtained from detonation failure, and whether a minimum sample size or level of confinement exists for the experiments. In this work, it is shown that the parameters of an ignition and growth rate law may be calibrated using the small scale data, which is obtained from a 35 GHz microwave interferometer. Calibration is feasible when the samples are heavily confined and overdriven; this conclusion is supported with detailed simulation output, including pressure and reaction contours inside the ANFO samples. The resulting shock wave velocity is most likely a combined chemical-mechanical response, and simulations of these experiments require an accurate unreacted equation of state (EOS) in addition to the calibrated reaction rate. Other experiments are proposed to gain further insight into the detonation failure data, as well as to help discriminate between the role of the EOS and reaction rate in predicting the measured outcome.

  14. MD modeling of screw dislocation influence upon initiation and mechanism of BCC-HCP polymorphous transition in iron

    NASA Astrophysics Data System (ADS)

    Dremov, V. V.; Ionov, G. V.; Sapozhnikov, F. A.; Smirnov, N. A.; Karavaev, A. V.; Vorobyova, M. A.; Ryzhkov, M. V.

    2015-09-01

    The present work is devoted to a classical molecular dynamics investigation into the microscopic mechanisms of the bcc-hcp transition in iron. The interatomic potential of EAM type used in the calculations was tested for its capability to reproduce ab initio data on the energy evolution along the bcc-hcp transformation path (Burgers deformation + shuffle) and then used in large-scale MD simulations. The large-scale simulations included constant-volume deformation along the Burgers path to study the origin and nature of the plasticity, hydrostatic volume compression of defect-free samples above the bcc to hcp transition threshold to observe the formation of new-phase embryos, and volume compression of samples containing screw dislocations to study the effect of the dislocations on the probability of critical embryo formation. The volume compression demonstrated a high level of metastability: the transition starts at a pressure much higher than the equilibrium one. Dislocations strongly affect the probability of critical embryo formation and significantly reduce the onset pressure of the transition. The dislocations also affect the resulting structure of the samples upon transition; the formation of a layered structure is typical for samples containing dislocations. The results of the simulations were compared with in-situ experimental data on the mechanism of the bcc-hcp transition in iron.

  15. Adaptive Biasing Combined with Hamiltonian Replica Exchange to Improve Umbrella Sampling Free Energy Simulations.

    PubMed

    Zeller, Fabian; Zacharias, Martin

    2014-02-11

    The accurate calculation of potentials of mean force (PMFs) for ligand-receptor binding is one of the most important applications of molecular simulation techniques. Typically, the separation distance between ligand and receptor is chosen as a reaction coordinate along which a PMF can be calculated with the aid of umbrella sampling (US) techniques. In addition, restraints can be applied on the relative position and orientation of the partner molecules to reduce the accessible phase space. An approach combining such phase space reduction with flattening of the free energy landscape and configurational exchanges has been developed, which significantly improves the convergence of PMF calculations in comparison with standard umbrella sampling. The free energy surface along the reaction coordinate is smoothened by iteratively adapting biasing potentials corresponding to previously calculated PMFs. Configurations are allowed to exchange between the umbrella simulation windows via the Hamiltonian replica exchange method. The application to a DNA molecule in complex with a minor-groove-binding ligand indicates significantly improved convergence and complete reversibility of the sampling along the pathway. The calculated binding free energy is in excellent agreement with experimental results. In contrast, the application of standard US resulted in large differences between PMFs calculated for association and dissociation pathways. The approach could be a useful alternative to standard US for computational studies on biomolecular recognition processes.
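
    The configurational exchange ingredient can be sketched in a few lines: neighboring umbrella windows attempt to swap configurations using the standard Metropolis criterion on the cross-evaluated biasing energies. This is a generic Hamiltonian replica exchange step with illustrative constants, not the authors' code.

```python
import math
import random

def try_swap(x_i, x_j, bias_i, bias_j, beta, rng=random.Random(0)):
    """Metropolis exchange of configurations between two biased replicas
    at the same temperature (beta = 1/kT)."""
    delta = beta * ((bias_i(x_j) + bias_j(x_i)) -
                    (bias_i(x_i) + bias_j(x_j)))
    if delta <= 0.0 or rng.random() < math.exp(-delta):
        return x_j, x_i, True          # accepted: configurations exchanged
    return x_i, x_j, False

# Two harmonic umbrella restraints on a 1-D reaction coordinate (illustrative).
k = 50.0                               # kJ/mol/nm^2
u1 = lambda x: 0.5 * k * (x - 1.0) ** 2
u2 = lambda x: 0.5 * k * (x - 1.2) ** 2
beta = 1.0 / 2.494                     # 1/kT at ~300 K, in mol/kJ
print(try_swap(1.05, 1.15, u1, u2, beta))
```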

  16. Molecular Dynamics Simulations of Nucleic Acids. From Tetranucleotides to the Ribosome.

    PubMed

    Šponer, Jiří; Banáš, Pavel; Jurečka, Petr; Zgarbová, Marie; Kührová, Petra; Havrila, Marek; Krepl, Miroslav; Stadlbauer, Petr; Otyepka, Michal

    2014-05-15

    We present a brief overview of explicit solvent molecular dynamics (MD) simulations of nucleic acids. We explain physical chemistry limitations of the simulations, namely, the molecular mechanics (MM) force field (FF) approximation and limited time scale. Further, we discuss relations and differences between simulations and experiments, compare standard and enhanced sampling simulations, discuss the role of starting structures, comment on different versions of nucleic acid FFs, and relate MM computations with contemporary quantum chemistry. Despite its limitations, we show that MD is a powerful technique for studying the structural dynamics of nucleic acids with a fast growing potential that substantially complements experimental results and aids their interpretation.

  17. Evaluation of waveguide coating materials

    NASA Technical Reports Server (NTRS)

    Chen, W. C. J.; Baker, B. W.

    1982-01-01

    Waveguide coating materials were tested at 8470 MHz for insertion loss. Samples of these coatings on waveguide pieces without flanges were tested in an environmental chamber to simulate the effects of high-power microwave heating. Test results indicated that three types of coating materials are acceptable with regard to insertion loss. However, simulated microwave heating caused debonding of the Metcot 7 and BD-991 coatings, resulting in peeling inside the waveguide. The higher-cost Chemglaze R104 does not exhibit this problem.

  18. Simulation Analysis of DC and Switching Impulse Superposition Circuit

    NASA Astrophysics Data System (ADS)

    Zhang, Chenmeng; Xie, Shijun; Zhang, Yu; Mao, Yuxiang

    2018-03-01

    Surge capacitors connected between the neutral bus and ground in a converter station are subjected to superimposed DC and impulse voltages during operation. This paper analyses a simulated aging circuit for surge capacitors using the PSCAD electromagnetic transient simulation software, including the effect of the DC voltage on the waveform produced by the impulse voltage generator. The effect of the coupling capacitor on the test voltage waveform is also studied. Test results show that the DC voltage has little effect on the output waveform of the surge voltage generator, and that the value of the coupling capacitor has little effect on the voltage waveform across the sample. Simulation results show that a superimposed DC and impulse aging test for surge capacitors is feasible.

  19. Computer model to simulate testing at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.

    1995-01-01

    A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.

  20. Tomographic reconstruction of melanin structures of optical coherence tomography via the finite-difference time-domain simulation

    NASA Astrophysics Data System (ADS)

    Huang, Shi-Hao; Wang, Shiang-Jiu; Tseng, Snow H.

    2015-03-01

    Optical coherence tomography (OCT) provides high-resolution, cross-sectional images of the internal microstructure of biological tissue. We use the finite-difference time-domain (FDTD) method to analyze the data acquired by OCT, which can help us reconstruct the refractive index of the biological tissue. We calculate the refractive index tomography and try to match the simulation with the data acquired by OCT. Specifically, we try to reconstruct the structure of melanin, which has complex refractive indices and is the key component of the human pigment system. The results indicate that better reconstruction can be achieved for a homogeneous sample, whereas the reconstruction is degraded for samples with fine structure or complex interfaces. The simulated reconstruction shows structures of melanin that may be useful for biomedical optics applications.
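
    Although the paper's simulations are far more elaborate, the core FDTD update can be sketched in one dimension: the code below propagates a pulse through a dielectric slab standing in for the sample, in normalized units with all parameters illustrative. A reconstruction loop of the kind described above would adjust such a refractive-index map until the simulated signal matches the measured one.

```python
import math

nx, nt = 400, 600
eps = [1.0] * nx
for i in range(220, 260):            # the "sample": a slab with n = 1.5
    eps[i] = 1.5 ** 2

ez, hy = [0.0] * nx, [0.0] * nx      # staggered E and H fields (Yee grid)
transmitted = []
for t in range(nt):
    for i in range(nx - 1):          # H update from the curl of E
        hy[i] += ez[i + 1] - ez[i]
    for i in range(1, nx):           # E update from the curl of H
        ez[i] += (hy[i] - hy[i - 1]) / eps[i]
    ez[50] += math.exp(-((t - 60) / 15.0) ** 2)   # soft Gaussian source
    transmitted.append(abs(ez[350])) # probe beyond the slab

print(max(transmitted) > 0.1)        # True: the pulse crossed the slab
```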

  1. Muon Simulation at the Daya Bay SIte

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mengyun, Guan; Jun, Cao; Changgen, Yang

    2006-05-23

    Using a fairly high-resolution mountain profile, we simulated the underground muon background at the Daya Bay site. To obtain the sea-level muon flux parameterization, a modification to the standard Gaisser formula was introduced based on the world muon data. The MUSIC code was used to transport muons through the mountain rock. To carry out the simulation, we first generate a statistical sample of sea-level muon events according to the sea-level muon flux distribution formula; then calculate the slant depth of muons passing through the mountain using an interpolation method based on the digitized data of the mountain; and finally transport the muons through rock to obtain an underground muon sample, from which we derive the muon flux, mean energy, energy distribution and angular distribution.

  2. The Lake Tahoe Basin Land Use Simulation Model

    USGS Publications Warehouse

    Forney, William M.; Oldham, I. Benson

    2011-01-01

    This U.S. Geological Survey Open-File Report describes the final modeling product for the Tahoe Decision Support System project for the Lake Tahoe Basin funded by the Southern Nevada Public Land Management Act and the U.S. Geological Survey's Geographic Analysis and Monitoring Program. This research was conducted by the U.S. Geological Survey Western Geographic Science Center. The purpose of this report is to describe the basic elements of the novel Lake Tahoe Basin Land Use Simulation Model, publish samples of the data inputs, basic outputs of the model, and the details of the Python code. The results of this report include a basic description of the Land Use Simulation Model, descriptions and summary statistics of model inputs, two figures showing the graphical user interface from the web-based tool, samples of the two input files, seven tables of basic output results from the web-based tool and descriptions of their parameters, and the fully functional Python code.

  3. Statistical Methods and Sampling Design for Estimating Step Trends in Surface-Water Quality

    USGS Publications Warehouse

    Hirsch, Robert M.

    1988-01-01

    This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
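
    For reference, the (non-seasonal) Hodges-Lehmann step-trend estimator is simply the median of all pairwise differences between the two periods, as in the sketch below with invented data; the seasonal variant forms the differences only within like seasons before pooling.

```python
import statistics

def hodges_lehmann_step(before, after):
    """Median of all pairwise differences (after - before)."""
    return statistics.median(a - b for a in after for b in before)

before = [10.1, 12.3, 9.8, 11.0, 10.7]     # e.g., concentrations, period 1
after = [8.9, 9.5, 10.2, 8.7, 9.1]         # period 2
print(hodges_lehmann_step(before, after))  # negative: a downward step
```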

  4. Optimization and Simulation of SLM Process for High Density H13 Tool Steel Parts

    NASA Astrophysics Data System (ADS)

    Laakso, Petri; Riipinen, Tuomas; Laukkanen, Anssi; Andersson, Tom; Jokinen, Antero; Revuelta, Alejandro; Ruusuvuori, Kimmo

    This paper demonstrates the successful printing and processing-parameter optimization of high-strength H13 tool steel by Selective Laser Melting (SLM). A D-optimal Design of Experiments (DOE) approach is used for parameter optimization of laser power, scanning speed and hatch width. With 50 test samples (1×1×1 cm) we establish parameter windows for these three parameters in relation to part density. The calculated numerical model is found to be in good agreement with the density data obtained from the samples using image analysis. A thermomechanical finite element simulation model is constructed of the SLM process and validated by comparing the calculated densities retrieved from the model with the experimentally determined densities. With the simulation tool one can explore the effect of different parameters on density before printing any samples. Establishing a parameter window gives the user freedom in parameter selection, such as choosing parameters that result in the fastest print speed.

  5. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    PubMed

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation, and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage for two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data, using three field-tested LQAS designs for assessing polio vaccination coverage with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design; the proposed estimators show no bias. Clustering does not affect the bias of these estimators, although across simulations the standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in the 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision, and curtailed LQAS designs further reduce the sample size when coverage is high. The results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation: unbiased estimation using curtailed designs is not only possible but these designs also reduce the sample size.
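
    The bias in question is easy to reproduce by simulation. The sketch below applies a one-sided curtailed stopping rule to the abstract's n = 60, d = 9 design and shows the naive proportion underestimating true coverage; the stopping-rule details are an assumption, and the paper's corrected estimators are not reproduced here.

```python
import random

def curtailed_naive_estimate(coverage, n=60, d=9, seed=0):
    """Sample until n subjects are seen or failures exceed d (one-sided
    curtailment, an assumed rule), then return the naive proportion."""
    rng = random.Random(seed)
    successes = failures = 0
    while successes + failures < n and failures <= d:
        if rng.random() < coverage:
            successes += 1
        else:
            failures += 1
    return successes / (successes + failures)

true_coverage = 0.70
estimates = [curtailed_naive_estimate(true_coverage, seed=s)
             for s in range(20000)]
print(sum(estimates) / len(estimates))  # below 0.70: naive estimate is biased
```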

  6. Remote sensing data with the conditional latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    PubMed

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

    This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of NDVI images. The variography of the NDVI images demonstrates that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from the 62,500 grid cells in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced spatial patterns of the NDVI images. Overall, the proposed approach, which integrates conditional Latin hypercube sampling, variograms, kriging and sequential Gaussian simulation of remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on spatial characteristics of landscape changes, including spatial variability and heterogeneity.
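
    A minimal sketch of the stratified-sampling idea behind Latin hypercube selection from an image, assuming a synthetic stand-in raster: the NDVI distribution is split into equal-probability strata and one cell is drawn per stratum, so 3,000 samples reproduce the image histogram. The full conditioned cLHS algorithm and the kriging step are beyond this sketch.

        import numpy as np

        rng = np.random.default_rng(2)
        ndvi = rng.beta(4, 2, size=(250, 250))   # stand-in for a 62,500-cell image
        flat = ndvi.ravel()
        n_samples = 3000

        # Equal-probability strata over the NDVI distribution; one cell per stratum
        edges = np.quantile(flat, np.linspace(0, 1, n_samples + 1))
        picks = [rng.choice(np.flatnonzero((flat >= lo) & (flat <= hi)))
                 for lo, hi in zip(edges[:-1], edges[1:])]
        sample = flat[picks]
        print(np.quantile(sample, [0.25, 0.5, 0.75]))   # tracks the image quartiles
        print(np.quantile(flat, [0.25, 0.5, 0.75]))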

  7. Dynamic Acquisition and Retrieval Tool (DART) for Comet Sample Return: Session 2.06, Robotic Mobility and Sample Acquisition Systems

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Bonitz, Robert; Kulczycki, Erick; Aisen, Norman; Dandino, Charles M.; Cantrell, Brett S.; Gallagher, William; Shevin, Jesse; Ganino, Anthony; Haddad, Nicolas; et al.

    2013-01-01

    The 2011 Decadal Survey for planetary science released by the National Research Council of the National Academies identified Comet Surface Sample Return (CSSR) as one of five high-priority potential New Frontiers-class missions in the next decade. The main objectives of the research described in this publication are to: develop a concept for an end-to-end system for collecting and storing a comet sample to be returned to Earth; design, fabricate and test a prototype Dynamic Acquisition and Retrieval Tool (DART) capable of collecting a 500 cc sample in a canister and ejecting the canister at a predetermined speed; and identify a set of simulants with physical properties at room temperature that suitably match the physical properties of the comet surface as it would be sampled. We propose the use of a dart that would be launched from the spacecraft to impact and penetrate the comet surface. After collecting the sample, the sample canister would be ejected at a speed greater than the comet's escape velocity, captured by the spacecraft, packaged into a return capsule and returned to Earth. The dart would be composed of an inner tube or sample canister, an outer tube, a decelerator, a means of capturing and retaining the sample, and a mechanism to eject the canister with the sample for later rendezvous with the spacecraft. One of the significant unknowns is the physical properties of the comet surface. Based on new findings from the recent Deep Impact comet encounter mission, we have limited our search for sampling materials to materials with 10 to 100 kPa shear strength in loose or consolidated form. Because the possible range of comet surface temperatures differs significantly from room temperature, and testing at conditions other than room temperature can become resource intensive, we sought sample simulants with physical properties at room temperature similar to the expected physical properties of the comet surface material. The chosen DART configuration, the efforts to identify a test simulant and the properties of these simulants, and the results of the preliminary testing are described in this paper.

  8. Automated sampling assessment for molecular simulations using the effective sample size

    PubMed Central

    Zhang, Xin; Bhatt, Divesh; Zuckerman, Daniel M.

    2010-01-01

    To quantify the progress in the development of algorithms and force fields used in molecular simulations, a general method for the assessment of sampling quality is needed. Statistical mechanics principles suggest that the populations of physical states characterize equilibrium sampling in a fundamental way. We therefore develop an approach for analyzing the variances in state populations, which quantifies the degree of sampling in terms of the effective sample size (ESS). The ESS estimates the number of statistically independent configurations contained in a simulated ensemble. The method is applicable both to traditional dynamics simulations and to more modern (e.g., multicanonical) approaches. Our procedure is tested in a variety of systems from toy models to atomistic protein simulations. We also introduce a simple automated procedure to obtain approximate physical states from dynamic trajectories: this allows sample-size estimation in systems for which physical states are not known in advance. PMID:21221418
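
    A minimal sketch of the population-variance idea, assuming a synthetic two-state trajectory with strong time correlation: the variance of the state population estimated across independent segments implies an effective number of independent samples, N_eff ~ p(1-p)/var(p_hat), far below the raw number of frames. This is a toy illustration, not the paper's full procedure.

        import numpy as np

        rng = np.random.default_rng(3)

        def correlated_traj(n, p=0.3, stay=0.99):
            """Two-state trajectory: state re-drawn from equilibrium only rarely."""
            s, out = 0, np.empty(n, dtype=int)
            for i in range(n):
                if rng.random() > stay:
                    s = int(rng.random() < p)
                out[i] = s
            return out

        segments = [correlated_traj(10_000) for _ in range(30)]
        p_hats = np.array([seg.mean() for seg in segments])
        p = p_hats.mean()
        ess = p * (1 - p) / p_hats.var(ddof=1)   # effective samples per segment
        print(f"ESS ~ {ess:.0f} of 10,000 frames per segment")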

  9. Release of Hexavalent Chromium by Ash and Soils in Wildfire-Impacted Areas

    USGS Publications Warehouse

    Wolf, Ruth E.; Morman, Suzette A.; Plumlee, Geoffrey S.; Hageman, Philip L.; Adams, Monique

    2008-01-01

    The highly oxidizing environment of a wildfire has the potential to convert any chromium present in the soil or in residential or industrial debris to its more toxic form, hexavalent chromium, a known carcinogen. In addition, the highly basic conditions resulting from the combustion of wood and wood products could result in the stabilization of any aqueous hexavalent chromium formed. Samples were collected from the October 2007 wildfires in Southern California and subjected to an array of test procedures to evaluate the potential effects of fire-impacted soils and ashes on human and environmental health. Soil and ash samples were leached using de-ionized water to simulate conditions resulting from rainfall on fire-impacted areas. The resulting leachates were of high pH (10-13) and many, particularly those of ash from burned residential areas, contained elevated total chromium, as much as 33 micrograms per liter. Samples were also leached using a near-neutral pH simulated lung fluid to model potential chemical interactions of inhaled particles with fluids lining the respiratory tract. High Performance Liquid Chromatography coupled to Inductively Coupled Plasma Mass Spectrometry was used to separate and detect individual species (for example, Cr+3, Cr+6, As+3, As+5, Se+4, and Se+6). These procedures were used to determine the form of the chromium present in the de-ionized water and simulated lung fluid leachates. The results show that in the de-ionized water leachate, all of the chromium present is in the form of Cr+6, and the resulting high pH tends to stabilize Cr+6 against reduction to Cr+3. Analysis of the simulated lung fluid leachates indicates that the predominant form of chromium present at the near-neutral pH of lung fluid would be Cr+6, which is of concern because of the high likelihood of inhalation of the small ash and soil particulates, particularly by fire or restoration crews.

  10. User's guide to resin infusion simulation program in the FORTRAN language

    NASA Technical Reports Server (NTRS)

    Weideman, Mark H.; Hammond, Vince H.; Loos, Alfred C.

    1992-01-01

    RTMCL is a user-friendly computer code which simulates the manufacture of fabric composites by the resin infusion process. The computer code is based on the process simulation model described in reference 1. Included in the user's guide is a detailed step-by-step description of how to run the program and how to enter and modify the input data set. Sample input and output files are included along with an explanation of the results. Finally, a complete listing of the program is provided.

  11. The influence of taxon sampling on Bayesian divergence time inference under scenarios of rate heterogeneity among lineages.

    PubMed

    Soares, André E R; Schrago, Carlos G

    2015-01-07

    Although taxon sampling is commonly considered an important issue in phylogenetic inference, it is rarely considered in the Bayesian estimation of divergence times. In fact, the studies conducted to date have presented ambiguous results, and the relevance of taxon sampling for molecular dating remains unclear. In this study, we developed a series of simulations that, after six hundred Bayesian molecular dating analyses, allowed us to evaluate the impact of taxon sampling on chronological estimates under three scenarios of among-lineage rate heterogeneity. The first scenario allowed us to examine the influence of the number of terminals on the age estimates based on a strict molecular clock. The second scenario imposed an extreme example of lineage-specific rate variation, and the third scenario permitted extensive rate variation distributed along the branches. We also analyzed empirical data on selected mitochondrial genomes of mammals. Our results showed that in the strict molecular-clock scenario (Case I), taxon sampling had a minor impact on the accuracy of the time estimates, although the precision of the estimates was greater with an increased number of terminals. The effect was similar in the scenario (Case III) based on rate variation distributed among the branches. Only under intensive rate variation among lineages (Case II) did taxon sampling result in biased estimates. The results of an empirical analysis corroborated the simulation findings. We demonstrate that taxonomic sampling affected divergence time inference, but that its impact was significant only if the rates deviated from those derived for the strict molecular clock. Increased taxon sampling improved the precision and accuracy of the divergence time estimates, but the impact on precision is more relevant. On average, biased estimates were obtained only if lineage rate variation was pronounced. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Interlaboratory comparability, bias, and precision for four laboratories measuring analytes in wet deposition, October 1983-December 1984

    USGS Publications Warehouse

    Brooks, Myron H.; Schroder, LeRoy J.; Willoughby, Timothy C.

    1987-01-01

    Four laboratories involved in the routine analysis of wet-deposition samples participated in an interlaboratory comparison program managed by the U.S. Geological Survey. The four participants were: the Illinois State Water Survey central analytical laboratory in Champaign, Illinois; the U.S. Geological Survey national water-quality laboratories in Atlanta, Georgia, and Denver, Colorado; and the Inland Waters Directorate national water-quality laboratory in Burlington, Ontario, Canada. Analyses of interlaboratory samples performed by the four laboratories from October 1983 through December 1984 were compared. Participating laboratories analyzed three types of interlaboratory samples--natural wet deposition, simulated wet deposition, and deionized water--for pH and specific conductance, and for dissolved calcium, magnesium, sodium, potassium, chloride, sulfate, nitrate, ammonium, and orthophosphate. Natural wet-deposition samples were aliquots of actual wet-deposition samples. Analyses of these samples by the four laboratories were compared using analysis of variance. Test results indicated that pH, calcium, nitrate, and ammonium results were not directly comparable among the four laboratories. Statistically significant differences between laboratory results probably were meaningful only for analyses of dissolved calcium. Simulated wet-deposition samples with known analyte concentrations were used to test each laboratory for analyte bias. Laboratory analyses of calcium, magnesium, sodium, potassium, chloride, sulfate, and nitrate were not significantly different from the known concentrations of these analytes when tested using analysis of variance. Deionized-water samples were used to test each laboratory for the reporting of false positive values. The Illinois State Water Survey laboratory reported the smallest percentage of false positive values for most analytes. Analyte precision was estimated for each laboratory from the results of replicate measurements. In general, the Illinois State Water Survey laboratory achieved the greatest precision, whereas the U.S. Geological Survey laboratories achieved the least precision.

  13. Applying Incremental Sampling Methodology to Soils Containing Heterogeneously Distributed Metallic Residues to Improve Risk Analysis.

    PubMed

    Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A

    2018-01-01

    This study compares conventional grab sampling to incremental sampling methodology (ISM) for characterizing metal contamination at a military small-arms range. Grab sample results had large variances, positively skewed non-normal distributions, extreme outliers, and poor agreement between duplicate samples, even when samples were co-located within tens of centimeters of each other. The extreme outliers strongly influenced the grab sample means for the primary contaminants lead (Pb) and antimony (Sb). In contrast, median and mean metal concentrations were similar for the ISM samples. ISM significantly reduced the measurement uncertainty of estimates of the mean, increasing data quality (e.g., for environmental risk assessments) with fewer samples (e.g., decreasing total project costs). Based on Monte Carlo resampling simulations, grab sampling resulted in highly variable means and upper confidence limits of the mean relative to ISM.
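
    A minimal Monte Carlo sketch of the grab-versus-ISM contrast, assuming a synthetic heavy-tailed (lognormal) contaminant field rather than the study's data: each hypothetical ISM sample averages 30 increments, each grab is a single location, and the spread of the resulting site-mean estimates differs sharply.

        import numpy as np

        rng = np.random.default_rng(4)
        site = rng.lognormal(mean=3.0, sigma=1.5, size=100_000)   # mg/kg, skewed

        def site_mean_estimate(increments, n_samples=5):
            """Mean of n_samples field samples, each averaging some increments."""
            return rng.choice(site, size=(n_samples, increments)).mean()

        grab = [site_mean_estimate(1) for _ in range(2000)]    # grab: 1 increment
        ism = [site_mean_estimate(30) for _ in range(2000)]    # ISM: 30 increments
        print(f"true mean {site.mean():.1f}")
        print(f"grab SD {np.std(grab):.1f} vs ISM SD {np.std(ism):.1f}")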

  14. Corrosion resistance and biological activity of TiO2 implant coatings produced in oxygen-rich environments.

    PubMed

    Zhang, Rui; Wan, Yi; Ai, Xing; Liu, Zhanqiang; Zhang, Dong

    2017-01-01

    The physical and chemical properties of bio-titanium alloy implant surfaces play an important role in their corrosion resistance and biological activity. New turning and turning-rolling processes are presented, employing an oxygen-rich environment in order to obtain titanium dioxide layers that can both protect implants from corrosion and promote cell adhesion. The surface topographies, surface roughness values and chemical compositions of the sample surfaces were obtained using scanning electron microscopy, a white light interferometer, and Auger electron spectroscopy, respectively. The corrosion resistance of the samples in a simulated body fluid was determined using electrochemical testing. Biological activity on the samples was also analyzed using an in vitro cell culture system. The results show that, compared with titanium oxide layers formed using a turning process in air, the thickness of the titanium oxide layers formed using turning and turning-rolling processes in an oxygen-rich environment increased by 4.6 and 7.3 times, respectively. Using an oxygen-rich atmosphere in the rolling process greatly improves the corrosion resistance of the resulting samples in a simulated body fluid. On samples produced using the turning-rolling process, cells spread quickly and exhibited the best adhesion characteristics.

  15. Optimal Digital Controller Design for a Servo Motor Taking Account of Intersample Behavior

    NASA Astrophysics Data System (ADS)

    Akiyoshi, Tatsuro; Imai, Jun; Funabiki, Shigeyuki

    A continuous-time plant with a discretized continuous-time controller does not remain stable if the sampling rate falls below a certain level. Thus far, high-performance electronic control has relied on costly hardware capable of implementing discretized continuous-time controllers at high sampling rates, while low-cost hardware generally cannot sample fast enough. This technical note presents results comparing performance indices with and without intersample behavior, and offers an answer to the question of how a low-specification device can control a plant effectively. We consider a machine simulating the wafer-handling robots used at semiconductor factories, an electromechanical system driven by a direct-drive motor. We illustrate controller design for the robot with and without intersample behavior, and present simulation and experimental results using these controllers. Taking intersample behavior into account proves effective in improving control performance and enables a relatively long sampling period to be chosen. Controller design via a performance index that accounts for intersample behavior copes with situations where a sufficiently short sampling period cannot be employed, and widens the freedom of controller design, especially in the choice of sampling period.
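
    A minimal sketch of the underlying mechanics, assuming a hypothetical double-integrator servo model rather than the paper's robot: zero-order-hold discretization at a fast and a slow sampling period, with the simulated outputs showing how much of the trajectory the slow samples leave unobserved between sampling instants.

        import numpy as np
        from scipy.signal import cont2discrete

        # Double-integrator servo model: states are position and velocity
        A = np.array([[0., 1.], [0., 0.]])
        B = np.array([[0.], [1.]])
        C = np.array([[1., 0.]])
        D = np.array([[0.]])

        def sampled_response(Ts, t_end=2.0, u=1.0):
            """Step response observed only at the sampling instants."""
            Ad, Bd, Cd, _, _ = cont2discrete((A, B, C, D), Ts, method='zoh')
            x, ys = np.zeros((2, 1)), []
            for _ in range(int(t_end / Ts)):
                ys.append((Cd @ x).item())
                x = Ad @ x + Bd * u
            return np.array(ys)

        fine = sampled_response(0.01)   # proxy for the continuous trajectory
        slow = sampled_response(0.2)    # what a low-rate controller actually sees
        print(f"last value seen at Ts=0.2: {slow[-1]:.2f}; "
              f"intersample trajectory reaches {fine[-1]:.2f}")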

  16. Scenario and modelling uncertainty in global mean temperature change derived from emission-driven global climate models

    NASA Astrophysics Data System (ADS)

    Booth, B. B. B.; Bernie, D.; McNeall, D.; Hawkins, E.; Caesar, J.; Boulton, C.; Friedlingstein, P.; Sexton, D. M. H.

    2013-04-01

    We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from an emission-driven rather than concentration-driven perturbed-parameter ensemble of a global climate model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, the land carbon cycle, ocean physics and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses when considering emission- rather than concentration-driven simulations (with 10-90th percentile ranges of 1.7 K for the aggressive mitigation scenario, up to 3.9 K for the high-end, business-as-usual scenario). A small minority of simulations resulting from combinations of strong atmospheric feedbacks and carbon cycle responses show temperature increases in excess of 9 K (RCP8.5), and even under aggressive mitigation (RCP2.6) temperatures in excess of 4 K. While the simulations point to much larger temperature ranges for emission-driven experiments, they do not change existing expectations (based on previous concentration-driven experiments) of the timescales over which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. For both SRES A1B and the Representative Concentration Pathways (RCPs), the concentration scenario used to drive GCM ensembles lies towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration-driven experiments to under-sample strong feedback responses in future projections. Our ensemble of emission-driven simulations spans the global temperature response of the CMIP5 emission-driven simulations, except at the low end. Combinations of low climate sensitivity and low carbon cycle feedbacks lead a number of CMIP5 responses to lie below our ensemble range. The ensemble simulates a number of high-end responses which lie above the CMIP5 carbon cycle range. These high-end simulations can be linked to sampling a number of stronger carbon cycle feedbacks and to sampling climate sensitivities above 4.5 K. This latter aspect highlights the priority of identifying real-world climate-sensitivity constraints which, if achieved, would reduce the upper bound of projected global mean temperature change. The ensemble of simulations presented here provides a framework to explore relationships between present-day observables and future changes, while the large spread of projected future changes highlights the ongoing need for such work.

  17. Prediction of Precipitation Strengthening in the Commercial Mg Alloy AZ91 Using Dislocation Dynamics

    DOE PAGES

    Aagesen, L. K.; Miao, J.; Allison, J. E.; ...

    2018-03-05

    In this paper, dislocation dynamics simulations were used to predict the strengthening of a commercial magnesium alloy, AZ91, due to β-Mg17Al12 formed in the continuous precipitation mode. The precipitate distributions used in simulations were determined based on experimental characterization of the sizes, shapes, and number densities of the precipitates for 10-hour aging and 50-hour aging. For dislocations gliding on the basal plane, which is expected to be the dominant contributor to plastic deformation at room temperature, the critical resolved shear stress to bypass the precipitate distribution was 3.5 MPa for the 10-hour aged sample and 16.0 MPa for the 50-hour aged sample. The simulation results were compared to an analytical model of strengthening in this alloy, and the analytical model was found to predict critical resolved shear stresses that were approximately 30 pct lower. A model for the total yield strength was developed and compared with experiment for the 50-hour aged sample. Finally, the predicted yield strength, which included the precipitate strengthening contribution from the DD simulations, was 132.0 MPa, in good agreement with the measured yield strength of 141 MPa.

  18. Modeling and Simulation of a Tethered Harpoon for Comet Sampling

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco B.

    2014-01-01

    This paper describes the development of a dynamic model and simulation results for a tethered harpoon for comet sampling. The modeling and simulation were done in order to carry out an initial sensitivity analysis for key design parameters of the tethered system. The harpoon would contain a canister which would collect a sample of soil from a cometary surface. Both a spring-ejected canister and a tethered canister are considered. To arrive in close proximity to the spacecraft at the end of its trajectory so it could be captured, the free-flying canister would need to be ejected at the right time and with the proper impulse, while the tethered canister must be recovered by retrieving the tether at a rate that avoids an excessive amplitude of oscillatory behavior during the retrieval. The paper describes the model of the tether dynamics and the harpoon penetration physics. The simulations indicate that, without the tether, the canister would still reach the spacecraft for collection, that tether retrieval of the canister would be achievable with reasonable fuel consumption, and that the canister amplitude upon retrieval would be insensitive to variations in vertical velocity dispersion.

  19. Prediction of Precipitation Strengthening in the Commercial Mg Alloy AZ91 Using Dislocation Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aagesen, L. K.; Miao, J.; Allison, J. E.

    In this paper, dislocation dynamics simulations were used to predict the strengthening of a commercial magnesium alloy, AZ91, due to β-Mg17Al12 formed in the continuous precipitation mode. The precipitate distributions used in simulations were determined based on experimental characterization of the sizes, shapes, and number densities of the precipitates for 10-hour aging and 50-hour aging. For dislocations gliding on the basal plane, which is expected to be the dominant contributor to plastic deformation at room temperature, the critical resolved shear stress to bypass the precipitate distribution was 3.5 MPa for the 10-hour aged sample and 16.0 MPa for the 50-hour aged sample. The simulation results were compared to an analytical model of strengthening in this alloy, and the analytical model was found to predict critical resolved shear stresses that were approximately 30 pct lower. A model for the total yield strength was developed and compared with experiment for the 50-hour aged sample. Finally, the predicted yield strength, which included the precipitate strengthening contribution from the DD simulations, was 132.0 MPa, in good agreement with the measured yield strength of 141 MPa.

  20. Prediction of Precipitation Strengthening in the Commercial Mg Alloy AZ91 Using Dislocation Dynamics

    NASA Astrophysics Data System (ADS)

    Aagesen, L. K.; Miao, J.; Allison, J. E.; Aubry, S.; Arsenlis, A.

    2018-03-01

    Dislocation dynamics simulations were used to predict the strengthening of a commercial magnesium alloy, AZ91, due to β-Mg17Al12 formed in the continuous precipitation mode. The precipitate distributions used in simulations were determined based on experimental characterization of the sizes, shapes, and number densities of the precipitates for 10-hour aging and 50-hour aging. For dislocations gliding on the basal plane, which is expected to be the dominant contributor to plastic deformation at room temperature, the critical resolved shear stress to bypass the precipitate distribution was 3.5 MPa for the 10-hour aged sample and 16.0 MPa for the 50-hour aged sample. The simulation results were compared to an analytical model of strengthening in this alloy, and the analytical model was found to predict critical resolved shear stresses that were approximately 30 pct lower. A model for the total yield strength was developed and compared with experiment for the 50-hour aged sample. The predicted yield strength, which included the precipitate strengthening contribution from the DD simulations, was 132.0 MPa, in good agreement with the measured yield strength of 141 MPa.

  1. Measurements and FLUKA simulations of bismuth and aluminium activation at the CERN Shielding Benchmark Facility (CSBF)

    NASA Astrophysics Data System (ADS)

    Iliopoulou, E.; Bamidis, P.; Brugger, M.; Froeschl, R.; Infantino, A.; Kajimoto, T.; Nakao, N.; Roesler, S.; Sanami, T.; Siountas, A.

    2018-03-01

    The CERN High Energy AcceleRator Mixed field facility (CHARM) is located in the CERN Proton Synchrotron (PS) East Experimental Area. The facility receives a pulsed proton beam from the CERN PS with a beam momentum of 24 GeV/c, 5 × 10^11 protons per pulse, a pulse length of 350 ms, and a maximum average beam intensity of 6.7 × 10^10 p/s; the beam then impacts on the CHARM target. The shielding of the CHARM facility also includes the CERN Shielding Benchmark Facility (CSBF), situated laterally above the target. This facility consists of 80 cm of cast iron and 360 cm of concrete, with barite concrete in some places. Activation samples of bismuth and aluminium were placed in the CSBF and in the CHARM access corridor in July 2015. Monte Carlo simulations with the FLUKA code have been performed to estimate the specific production yields for these samples. The results estimated by FLUKA Monte Carlo simulations are compared to activation measurements of these samples. The comparison between FLUKA simulations and the measured values from γ-spectrometry gives agreement to better than a factor of 2.

  2. Exploratory Factor Analysis with Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.

    2009-01-01

    Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…

  3. An Investigation of Sample Size Splitting on ATFIND and DIMTEST

    ERIC Educational Resources Information Center

    Socha, Alan; DeMars, Christine E.

    2013-01-01

    Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…

  4. Environmental Factors Affecting Asthma and Allergies: Predicting and Simulating Downwind Exposure to Airborne Pollen

    NASA Technical Reports Server (NTRS)

    Luvall, Jeffrey; Estes, Sue; Sprigg, William A.; Nickovic, Slobodan; Huete, Alfredo; Solano, Ramon; Ratana, Piyachat; Jiang, Zhangyan; Flowers, Len; Zelicoff, Alan

    2009-01-01

    This slide presentation reviews the environmental factors that affect asthma and allergies and work to predict and simulate downwind exposure to airborne pollen. Using a modification of the Dust REgional Atmosphere Model (DREAM) that incorporates phenology (i.e., PREAM), the aim was to predict concentrations of pollen in time and space. The presentation outlines the strategy for using the model to simulate downwind pollen dispersal and for evaluating the results. The MODerate-resolution Imaging Spectroradiometer (MODIS) was used to obtain near-daily land cover and seasonal sampling of juniper, the pollen source chosen for the study. The results of the model are reviewed.

  5. Experimental verification and simulation of negative index of refraction using Snell's law.

    PubMed

    Parazzoli, C G; Greegor, R B; Li, K; Koltenbah, B E C; Tanielian, M

    2003-03-14

    We report the results of a Snell's law experiment on a negative index of refraction material in free space from 12.6 to 13.2 GHz. Numerical simulations using Maxwell's equations solvers show good agreement with the experimental results, confirming the existence of negative index of refraction materials. The index of refraction is a function of frequency. At 12.6 GHz we measure and compute the real part of the index of refraction to be -1.05. The measurements and simulations of the electromagnetic field profiles were performed at distances of 14λ and 28λ from the sample; the fields were also computed at 100λ.

  6. Experimental simulation of space plasma interactions with high voltage solar arrays

    NASA Technical Reports Server (NTRS)

    Stillwell, R. P.; Kaufman, H. R.; Robinson, R. S.

    1981-01-01

    Operating high voltage solar arrays in the space environment can result in anomalously large currents being collected through small insulation defects. Tests of simulated defects have been conducted in a 45-cm vacuum chamber with plasma densities of 100,000 to 1,000,000 per cubic centimeter. Plasmas were generated using an argon hollow cathode. The solar array elements were simulated by placing a thin sheet of polyimide (Kapton) insulation with a small hole in it over a conductor. The parameters tested were: hole size, adhesive, surface roughening, sample temperature, insulator thickness, and insulator area. These results are discussed along with some preliminary empirical correlations.

  7. Translational-circular scanning for magneto-acoustic tomography with current injection.

    PubMed

    Wang, Shigang; Ma, Ren; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng

    2016-01-27

    Magneto-acoustic tomography with current injection is an electrical impedance imaging technology. To explore its potential applications in imaging biological tissue and to enhance image quality, a new scan mode for the transducer is proposed, based on translational and circular scanning, to record acoustic signals from sources. An imaging algorithm to analyze these signals is developed for this alternative scanning scheme. Numerical simulations and physical experiments were conducted to evaluate the effectiveness of the scheme. An experiment using a graphite sheet as a tissue-mimicking phantom medium was conducted to verify the simulation results. A pulsed voltage signal was applied across the sample, and acoustic signals were recorded as the transducer performed stepped translational or circular scans. The imaging algorithm was used to obtain an acoustic-source image based on the signals. In the simulations, the acoustic-source image is correlated with the conductivity at the boundaries of the sample, but the image results change depending on the distance and angular aspect of the transducer. In general, as angle and distance decrease, the image quality improves. The experimental data confirmed this correlation. The acoustic-source images resulting from the alternative scanning mode yielded the outline of a phantom medium. This scan mode enables improvements in the sensitivity of the detecting unit, and a change to a transducer array would further improve the efficiency and accuracy of acoustic-source imaging.

  8. On the importance of an accurate representation of the initial state of the system in classical dynamics simulations

    NASA Astrophysics Data System (ADS)

    García-Vela, A.

    2000-05-01

    A definition of a quantum-type phase-space distribution is proposed in order to represent the initial state of the system in a classical dynamics simulation. The central idea is to define an initial quantum phase-space state of the system as the direct product of the coordinate and momentum representations of the quantum initial state. The phase-space distribution is then obtained as the square modulus of this phase-space state. The resulting phase-space distribution closely resembles the quantum nature of the system's initial state. The initial conditions are sampled with the distribution, using a grid technique in phase space. With this type of sampling, the distribution of initial conditions reproduces more faithfully the shape of the original phase-space distribution. The method is applied to generate initial conditions describing the three-dimensional state of the Ar-HCl cluster prepared by ultraviolet excitation. The photodissociation dynamics is simulated by classical trajectories, and the results are compared with those of a wave packet calculation. The classical and quantum descriptions are found to be in good agreement for those dynamical events less subject to quantum effects. The classical result fails to reproduce the quantum mechanical one for the more strongly quantum features of the dynamics. The properties and applicability of the proposed phase-space distribution and sampling technique are discussed.
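
    A minimal 1-D sketch of the sampling idea under stated assumptions (a Gaussian wavepacket, with P(x, p) taken as |psi(x)|^2 |phi(p)|^2 on a grid, rather than the paper's 3-D Ar-HCl state): classical initial conditions are drawn whose spreads match the packet's position and momentum widths.

        import numpy as np

        hbar, sigma = 1.0, 0.5
        x = np.linspace(-4, 4, 200)
        p = np.linspace(-4, 4, 200)

        dens_x = np.exp(-x**2 / (2 * sigma**2))            # |psi(x)|^2, unnormalized
        dens_p = np.exp(-2 * p**2 * sigma**2 / hbar**2)    # |phi(p)|^2, unnormalized

        P = np.outer(dens_x, dens_p)
        P /= P.sum()                                        # grid distribution P(x, p)

        rng = np.random.default_rng(5)
        idx = rng.choice(P.size, size=1000, p=P.ravel())
        xi, pi = np.unravel_index(idx, P.shape)
        x0, p0 = x[xi], p[pi]                               # classical initial conditions
        print(x0.std(), p0.std())   # ~sigma and ~hbar/(2*sigma)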

  9. Evaluation of Cooling Conditions for a High Heat Flux Testing Facility Based on Plasma-Arc Lamps

    DOE PAGES

    Charry, Carlos H.; Abdel-khalik, Said I.; Yoda, Minami; ...

    2015-07-31

    The new Irradiated Material Target Station (IMTS) facility for fusion materials at Oak Ridge National Laboratory (ORNL) uses an infrared plasma-arc lamp (PAL) to deliver incident heat fluxes as high as 27 MW/m^2. The facility is being used to test irradiated plasma-facing component materials as part of the joint US-Japan PHENIX program. The irradiated samples are to be mounted on molybdenum sample holders attached to a water-cooled copper rod. Depending on the size and geometry of samples, several sample holder and copper rod configurations have been fabricated and tested. As a part of the effort to design sample holders compatible with the high heat flux (HHF) testing to be conducted at the IMTS facility, numerical simulations have been performed for two different water-cooled sample holder designs using the ANSYS FLUENT 14.0 commercial computational fluid dynamics (CFD) software package. The primary objective of this work is to evaluate the cooling capability of different sample holder designs, i.e., to estimate their maximum allowable incident heat flux values. 2D axisymmetric numerical simulations are performed using the realizable k-ε turbulence model and the RPI nucleate boiling model within ANSYS FLUENT 14.0. The results of the numerical model were compared against the experimental data for two sample holder designs tested in the IMTS facility. The model has been used to parametrically evaluate the effect of various operational parameters on the predicted temperature distributions. The results were used to identify the limiting parameter for safe operation of the two sample holders and the associated peak heat flux limits. The results of this investigation will help guide the development of new sample holder designs.

  10. Results and analysis of saltstone cores taken from saltstone disposal unit cell 2A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reigel, M. M.; Hill, K. A.

    2016-03-01

    As part of an ongoing Performance Assessment (PA) Maintenance Plan, Savannah River Remediation (SRR) has developed a sampling and analyses strategy to facilitate the comparison of field-emplaced samples (i.e., saltstone placed and cured in a Saltstone Disposal Unit (SDU)) with samples prepared and cured in the laboratory. The primary objectives of the Sampling and Analyses Plan (SAP) are: (1) to demonstrate a correlation between the measured properties of laboratory-prepared, simulant samples (termed Sample Set 3) and the field-emplaced saltstone samples (termed Sample Set 9), and (2) to validate property values assumed for the Saltstone Disposal Facility (SDF) PA modeling. The analysis and property data for Sample Set 9 (i.e., six core samples extracted from SDU Cell 2A (SDU2A)) are documented in this report, and where applicable, the results are compared to the results for Sample Set 3. Relevant properties for demonstrating the aforementioned objectives include bulk density, porosity, saturated hydraulic conductivity (SHC), and radionuclide leaching behavior.

  11. Estimating variation in a landscape simulation of forest structure.

    Treesearch

    S. Hummel; P. Cunningham

    2006-01-01

    Modern technology makes it easy to show how forested landscapes might change with time, but it remains difficult to estimate how sampling error affects landscape simulation results. To address this problem we used two methods to project the area in late-seral forest (LSF) structure for the same 6070 hectare (ha) study site over 30 years. The site was stratified into...

  12. Spatial interpolation and simulation of post-burn duff thickness after prescribed fire

    Treesearch

    Peter R. Robichaud; S. M. Miller

    1999-01-01

    Prescribed fire is used as a site treatment after timber harvesting. These fires result in spatial patterns with some portions consuming all of the forest floor material (duff) and others consuming little. Prior to the burn, spatial sampling of duff thickness and duff water content can be used to generate geostatistical spatial simulations of these characteristics....

  13. C3H7NO2S effect on concrete steel-rebar corrosion in 0.5 M H2SO4 simulating industrial/microbial environment

    NASA Astrophysics Data System (ADS)

    Okeniyi, Joshua Olusegun; Nwadialo, Christopher Chukwuweike; Olu-Steven, Folusho Emmanuel; Ebinne, Samaru Smart; Coker, Taiwo Ebenezer; Okeniyi, Elizabeth Toyin; Ogbiye, Adebanji Samuel; Durotoye, Taiwo Omowunmi; Badmus, Emmanuel Omotunde Oluwasogo

    2017-02-01

    This paper investigates the effect of C3H7NO2S (cysteine) on the inhibition of reinforcing steel corrosion in concrete immersed in 0.5 M H2SO4, simulating an industrial/microbial environment. Different C3H7NO2S concentrations were admixed, in duplicate, in steel-reinforced concrete samples that were partially immersed in the acidic sulphate environment. Electrochemical monitoring techniques of open circuit potential, as per ASTM C876-91 R99, and corrosion rate, by linear polarization resistance, were then employed for studying the anticorrosion effect of the organic admixture on the steel-reinforced concrete samples. Analyses of the electrochemical test data followed ASTM G16-95 R04 prescriptions, including probability distribution modeling with significance testing by Kolmogorov-Smirnov and Student's t-test statistics. The results established that all datasets of corrosion potential followed the Normal, Gumbel and Weibull distributions, but that only the Weibull model described all the corrosion rate datasets in the study, as per the Kolmogorov-Smirnov test statistics. Results of the Student's t-test showed that differences in corrosion test data between duplicated samples with the same C3H7NO2S concentrations were not statistically significant. These results indicated that 0.06878 M C3H7NO2S exhibited an optimal inhibition efficiency of η = 90.52±1.29% on reinforcing steel corrosion in the concrete samples immersed in 0.5 M H2SO4, simulating an industrial/microbial service environment.
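
    A minimal sketch of the distribution-fitting-plus-goodness-of-fit pattern described above, applied to synthetic stand-in corrosion rates (the study's datasets are not reproduced here): fit a Weibull model with SciPy and test it with the Kolmogorov-Smirnov statistic. Note that a KS test applied to parameters fitted from the same data is approximate.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)
        rates = rng.weibull(1.8, 40) * 0.12    # stand-in corrosion rates, mm/yr

        shape, loc, scale = stats.weibull_min.fit(rates, floc=0)
        ks = stats.kstest(rates, 'weibull_min', args=(shape, loc, scale))
        print(f"shape={shape:.2f}, scale={scale:.3f}, KS p-value={ks.pvalue:.2f}")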

  14. Population genomics meet Lagrangian simulations: Oceanographic patterns and long larval duration ensure connectivity among Paracentrotus lividus populations in the Adriatic and Ionian seas.

    PubMed

    Paterno, Marta; Schiavina, Marcello; Aglieri, Giorgio; Ben Souissi, Jamila; Boscari, Elisa; Casagrandi, Renato; Chassanite, Aurore; Chiantore, Mariachiara; Congiu, Leonardo; Guarnieri, Giuseppe; Kruschel, Claudia; Macic, Vesna; Marino, Ilaria A M; Papetti, Chiara; Patarnello, Tomaso; Zane, Lorenzo; Melià, Paco

    2017-04-01

    Connectivity between populations influences both their dynamics and the genetic structuring of species. In this study, we explored connectivity patterns of a marine species with long-distance dispersal, the edible common sea urchin Paracentrotus lividus, focusing mainly on the Adriatic-Ionian basins (Central Mediterranean). We applied a multidisciplinary approach integrating population genomics, based on 1,122 single nucleotide polymorphisms (SNPs) obtained from 2b-RAD in 275 samples, with Lagrangian simulations performed with a biophysical model of larval dispersal. We detected genetic homogeneity among eight population samples collected in the focal Adriatic-Ionian area, whereas weak but significant differentiation was found with respect to two samples from the Western Mediterranean (France and Tunisia). This result was not affected by the few putative outlier loci identified in our dataset. Lagrangian simulations found a significant potential for larval exchange among the eight Adriatic-Ionian locations, supporting the hypothesis of connectivity of P. lividus populations in this area. A peculiar pattern emerged from the comparison of our results with those obtained from published P. lividus cytochrome b (cytb) sequences, the latter revealing genetic differentiation in the same geographic area despite a smaller sample size and a lower power to detect differences. The comparison with studies conducted using nuclear markers on other species with similar pelagic larval durations in the same Adriatic-Ionian locations indicates species-specific differences in genetic connectivity patterns and warns against generalizing single-species results to the entire community of rocky shore habitats.

  15. Phase Aberration and Attenuation Effects on Acoustic Radiation Force-Based Shear Wave Generation.

    PubMed

    Carrascal, Carolina Amador; Aristizabal, Sara; Greenleaf, James F; Urban, Matthew W

    2016-02-01

    Elasticity is measured by shear wave elasticity imaging (SWEI) methods using acoustic radiation force to create the shear waves. Phase aberration and tissue attenuation can hamper the generation of shear waves for in vivo applications. In this study, the effects of phase aberration and attenuation in ultrasound focusing for creating shear waves were explored, including the effects of phase shifts and amplitude attenuation on shear wave characteristics such as shear wave amplitude, shear wave speed, shear wave center frequency, and bandwidth. Two samples of swine belly tissue were used to create phase aberration and attenuation experimentally. To explore the phase aberration and attenuation effects individually, the tissue experiments were complemented with ultrasound beam simulations using the fast object-oriented C++ ultrasound simulator (FOCUS) and shear wave simulations using finite-element-model (FEM) analysis. The ultrasound frequency used to generate shear waves was varied from 3.0 to 4.5 MHz. The measured acoustic pressure and resulting shear wave amplitude decreased approximately 40%-90% with the introduction of the tissue samples. Acoustic intensity and shear wave displacement were correlated for both tissue samples, with Pearson's correlation coefficients of 0.99 and 0.97. Analysis of shear wave generation with the tissue samples (phase aberration and attenuation), the measured phase screen (phase aberration only), and the FOCUS/FEM model (attenuation only) showed that tissue attenuation affected shear wave generation more than tissue aberration. Decreasing the ultrasound frequency helped maintain a focused beam for creation of shear waves in the presence of both phase aberration and attenuation.

  16. Simulations and Experiments of Dynamic Granular Compaction in Non-ideal Geometries

    NASA Astrophysics Data System (ADS)

    Homel, Michael; Herbold, Eric; Lind, John; Crum, Ryan; Hurley, Ryan; Akin, Minta; Pagan, Darren; LLNL Team

    2017-06-01

    Accurately describing the dynamic compaction of granular materials is a persistent challenge in computational mechanics. Using a synchrotron x-ray source we have obtained detailed imaging of the evolving compaction front in synthetic olivine powder impacted at 300-600 m/s. To facilitate imaging, a non-traditional sample geometry is used, producing multiple load paths within the sample. We demonstrate that (i) commonly used models for porous compaction may produce inaccurate results for complex loading, even if the 1-D, uniaxial-strain compaction response is reasonable, and (ii) the experimental results can be used along with simulations to determine parameters for sophisticated constitutive models that more accurately describe the strength, softening, bulking, and poroelastic response. Effects of experimental geometry and alternative configurations are discussed. Our understanding of the material response is further enhanced using mesoscale simulations that allow us to relate the mechanisms of grain fracture, contact, and comminution to the macroscale continuum response. Numerical considerations in both continuum and mesoscale simulations are described. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LDRD#16-ERD-010. LLNL-ABS-725113.

  17. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    NASA Astrophysics Data System (ADS)

    Saha, Debasish; Kemanian, Armen R.; Rau, Benjamin M.; Adler, Paul R.; Montes, Felipe

    2017-04-01

    Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (corn-soybean rotation), College Station, TX (corn-vetch rotation), Fort Collins, CO (irrigated corn), and Pullman, WA (winter wheat), representing diverse agro-ecoregions of the United States. Fertilization source, rate, and timing were site-specific. These simulated fluxes served as surrogates for daily measurements in the analysis. We "sampled" the fluxes using a fixed-interval (1-32 days) or a rule-based (decision-tree-based) sampling method. Two types of decision trees were built: a high-input tree (HI) that included soil inorganic nitrogen (SIN) as a predictor variable, and a low-input tree (LI) that excluded SIN. Other predictor variables were identified with Random Forest. The decision trees were inverted to be used as rules for sampling a representative number of members from each terminal node. The uncertainty of the annual N2O flux estimate increased with the length of the fixed interval. A 4- and an 8-day fixed sampling interval were required at College Station and Ames, respectively, to yield ±20% accuracy in the flux estimate; a 12-day interval rendered the same accuracy at Fort Collins and Pullman. Both the HI and the LI rule-based methods provided the same accuracy as the fixed-interval method with up to a 60% reduction in sampling events, particularly at locations with greater temporal flux variability. For instance, at Ames, the HI rule-based and the fixed-interval methods required 16 and 91 sampling events, respectively, to achieve the same absolute bias of 0.2 kg N ha-1 yr-1 in estimating the cumulative N2O flux. These results suggest that using simulation models along with decision trees can reduce the cost and improve the accuracy of estimates of cumulative N2O fluxes obtained with the discrete chamber-based method.
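
    A minimal sketch of the fixed-interval part of such a design study, assuming a synthetic daily flux series (baseline plus fertilization pulses) standing in for the agroecosystem model output: cumulative annual estimates are computed from every possible sampling start day at each interval, and the mean absolute error grows with the interval length.

        import numpy as np

        rng = np.random.default_rng(6)
        days = np.arange(365)
        flux = 0.5 + rng.gamma(2.0, 0.1, 365)        # baseline flux, arbitrary units
        for pulse in (120, 180):                     # fertilization-driven pulses
            flux += 20 * np.exp(-0.3 * np.clip(days - pulse, 0, None)) * (days >= pulse)

        truth = flux.sum()                           # "true" annual cumulative flux
        for interval in (4, 8, 16, 32):
            errs = [100 * (np.trapz(flux[o::interval], days[o::interval]) - truth) / truth
                    for o in range(interval)]        # every possible start day
            print(interval, "d interval:", round(np.mean(np.abs(errs)), 1), "% mean abs error")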

  18. Enhanced conformational sampling using enveloping distribution sampling.

    PubMed

    Lin, Zhixiong; van Gunsteren, Wilfred F

    2013-10-14

    Lessening the problem of insufficient conformational sampling in biomolecular simulations remains a major challenge in computational biochemistry. In this article, an application of the method of enveloping distribution sampling (EDS) is proposed that addresses this challenge, and its sampling efficiency is demonstrated in simulations of a hexa-β-peptide whose conformational equilibrium encompasses two different helical folds, i.e., a right-handed 2.7(10/12)-helix and a left-handed 3(14)-helix, separated by a high energy barrier. Standard MD simulations of this peptide using the GROMOS 53A6 force field did not reach convergence of the free enthalpy difference between the two helices even after 500 ns of simulation time. The use of soft-core non-bonded interactions in the centre of the peptide did enhance the number of transitions between the helices, but at the same time led to neglect of relevant helical configurations. In simulations of a two-state EDS reference Hamiltonian that envelops both the physical peptide and the soft-core peptide, sampling of the conformational space of the physical peptide ensures that physically relevant conformations can be visited, and sampling of the conformational space of the soft-core peptide helps to enhance the transitions between the two helices. The EDS simulations sampled many more transitions between the two helices and showed much faster convergence of the relative free enthalpy of the two helices compared with the standard MD simulations, with only a slightly larger computational effort to determine optimized EDS parameters. Combined with various methods to smoothen the potential energy surface, the proposed EDS application will be a powerful technique to enhance the sampling efficiency in biomolecular simulations.
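
    A minimal 1-D sketch of an EDS-style reference potential, assuming the standard form V_R = -(1/(beta*s)) * ln(sum_i exp(-beta*s*(V_i - E_i))) with two hypothetical harmonic end-state potentials: as the smoothness parameter s decreases, the barrier between the enveloped states shrinks and transitions become easier.

        import numpy as np

        beta = 1.0
        x = np.linspace(-3, 3, 601)
        V1 = 0.5 * 10 * (x + 1.5)**2          # end state A
        V2 = 0.5 * 10 * (x - 1.5)**2 + 1.0    # end state B, offset in free energy
        E = np.array([0.0, 1.0])              # energy offsets E_i

        def eds_reference(s):
            terms = np.exp(-beta * s * (np.stack([V1, V2]) - E[:, None]))
            return -np.log(terms.sum(axis=0)) / (beta * s)

        for s in (1.0, 0.1, 0.01):
            V = eds_reference(s)
            print(f"s={s}: barrier at x=0 is {V[300] - V.min():.2f} kT")
        # the barrier shrinks and finally vanishes as s decreases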

  19. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse-J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
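
    A minimal sketch of the sampling-variance mechanism, assuming a hypothetical three-stage matrix with survival 0.5 (the paper's low-survival case): survival for each stage is estimated from n binomially sampled individuals, lambda is recomputed as the dominant eigenvalue, and the empirical bias shrinks as n grows.

        import numpy as np

        rng = np.random.default_rng(7)
        fec = [0.0, 0.5, 2.0]      # stage fecundities (hypothetical)
        surv = [0.5, 0.5, 0.5]     # low survival, the paper's worst case
        grow = [1.0, 0.5, 0.0]     # fraction of survivors advancing a stage

        def lam(s):
            A = np.zeros((3, 3))
            A[0] = fec
            for i in range(3):
                A[min(i + 1, 2), i] += s[i] * grow[i]     # advance
                A[i, i] += s[i] * (1 - grow[i])           # stay
            return np.max(np.real(np.linalg.eigvals(A)))

        true_lambda = lam(surv)
        for n in (10, 50, 250):
            est = [lam(rng.binomial(n, surv) / n) for _ in range(2000)]
            print(n, "individuals/stage: bias", round(np.mean(est) - true_lambda, 4))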

  20. Pressure of the hot gas in simulations of galaxy clusters

    NASA Astrophysics Data System (ADS)

    Planelles, S.; Fabjan, D.; Borgani, S.; Murante, G.; Rasia, E.; Biffi, V.; Truong, N.; Ragone-Figueroa, C.; Granato, G. L.; Dolag, K.; Pierpaoli, E.; Beck, A. M.; Steinborn, Lisa K.; Gaspari, M.

    2017-06-01

    We analyse the radial pressure profiles, the intracluster medium (ICM) clumping factor and the Sunyaev-Zel'dovich (SZ) scaling relations of a sample of simulated galaxy clusters and groups identified in a set of hydrodynamical simulations based on an updated version of the treepm-SPH GADGET-3 code. Three different sets of simulations are performed: the first assumes non-radiative physics, the others include, among other processes, active galactic nucleus (AGN) and/or stellar feedback. Our results are analysed as a function of redshift, ICM physics, cluster mass and cluster cool-coreness or dynamical state. In general, the mean pressure profiles obtained for our sample of groups and clusters show a good agreement with X-ray and SZ observations. Simulated cool-core (CC) and non-cool-core (NCC) clusters also show a good match with real data. We obtain in all cases a small (if any) redshift evolution of the pressure profiles of massive clusters, at least back to z = 1. We find that the clumpiness of gas density and pressure increases with the distance from the cluster centre and with the dynamical activity. The inclusion of AGN feedback in our simulations generates values for the gas clumping (√C_ρ ≈ 1.2 at R_200) in good agreement with recent observational estimates. The simulated Y_SZ-M scaling relations are in good accordance with several observed samples, especially for massive clusters. As for the scatter of these relations, we obtain a clear dependence on the cluster dynamical state, whereas this distinction is not so evident when looking at the subsamples of CC and NCC clusters.

  1. Creep Rupture of the Simulated HAZ of T92 Steel Compared to that of a T91 Steel

    PubMed Central

    Peng, Yu-Quan; Chen, Tai-Cheng; Chung, Tien-Jung; Jeng, Sheng-Long; Huang, Rong-Tan; Tsay, Leu-Wen

    2017-01-01

    The increased thermal efficiency of fossil power plants calls for the development of advanced creep-resistant alloy steels like T92. In this study, microstructures found in the heat-affected zone (HAZ) of a T92 steel weld were simulated to evaluate their creep-rupture-life at elevated temperatures. An infrared heating system was used to heat the samples to 860 °C (around AC1), 900 °C (slightly below AC3), and 940 °C (moderately above AC3) for one minute, before cooling to room temperature. The simulated specimens were then subjected to a conventional post-weld heat treatment (PWHT) at 750 °C for two hours, where both the 900 °C and 940 °C simulated specimens had fine grain sizes. In the as-treated condition, the 900 °C simulated specimen consisted of fine lath martensite, ferrite subgrains, and undissolved carbides, while residual carbides and fresh martensite were found in the 940 °C simulated specimen. The results of short-term creep tests indicated that the creep resistance of the 900 °C and 940 °C simulated specimens was poorer than that of the 860 °C simulated specimens and the base metal. Moreover, simulated T92 steel samples had higher creep strength than the T91 counterpart specimens. PMID:28772500

  2. Flow-induced vibration analysis of a helical coil steam generator experiment using large eddy simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Haomin; Solberg, Jerome; Merzari, Elia

    This paper describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP developed at Argonne National Laboratory, in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 by using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces is sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used, which means that pressure loads from the fluid simulation are transferred to the structural simulation but the resulting structural displacements are not fed back to the fluid simulation.

  3. Flow-induced vibration analysis of a helical coil steam generator experiment using large eddy simulation

    DOE PAGES

    Yuan, Haomin; Solberg, Jerome; Merzari, Elia; ...

    2017-08-01

    This study describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP developed at Argonne National Laboratory, in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 by using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces is sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used, which means that pressure loads from the fluid simulation are transferred to the structural simulation but the resulting structural displacements are not fed back to the fluid simulation.

  4. Creep Rupture of the Simulated HAZ of T92 Steel Compared to that of a T91 Steel.

    PubMed

    Peng, Yu-Quan; Chen, Tai-Cheng; Chung, Tien-Jung; Jeng, Sheng-Long; Huang, Rong-Tan; Tsay, Leu-Wen

    2017-02-08

    The increased thermal efficiency of fossil power plants calls for the development of advanced creep-resistant alloy steels like T92. In this study, microstructures found in the heat-affected zone (HAZ) of a T92 steel weld were simulated to evaluate their creep-rupture life at elevated temperatures. An infrared heating system was used to heat the samples to 860 °C (around Ac1), 900 °C (slightly below Ac3), and 940 °C (moderately above Ac3) for one minute, before cooling to room temperature. The simulated specimens were then subjected to a conventional post-weld heat treatment (PWHT) at 750 °C for two hours, after which both the 900 °C and 940 °C simulated specimens had fine grain sizes. In the as-treated condition, the 900 °C simulated specimen consisted of fine lath martensite, ferrite subgrains, and undissolved carbides, while residual carbides and fresh martensite were found in the 940 °C simulated specimen. The results of short-term creep tests indicated that the creep resistance of the 900 °C and 940 °C simulated specimens was poorer than that of the 860 °C simulated specimen and the base metal. Moreover, the simulated T92 steel samples had higher creep strength than their T91 counterpart specimens.

  5. Flow-induced vibration analysis of a helical coil steam generator experiment using large eddy simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Haomin; Solberg, Jerome; Merzari, Elia

    This study describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP developed at Argonne National Laboratory, in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 by using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces is sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used, which means that pressure loads from the fluid simulation are transferred to the structural simulation but the resulting structural displacements are not fed back to the fluid simulation.

  6. Essential energy space random walks to accelerate molecular dynamics simulations: Convergence improvements via an adaptive-length self-healing strategy

    NASA Astrophysics Data System (ADS)

    Zheng, Lianqing; Yang, Wei

    2008-07-01

    Recently, the accelerated molecular dynamics (AMD) technique was generalized to realize essential energy space random walks, so that further sampling enhancement and effective localized enhanced sampling could be achieved. This method is especially useful when the essential coordinates of the target events are not known a priori; moreover, the energy space metadynamics method was also introduced so that biasing free energy functions can be robustly generated. Despite the promising features of this method, the nonequilibrium nature of the metadynamics recursion makes it challenging to rigorously use the data obtained at the recursion stage for equilibrium analysis, such as free energy surface mapping; a large amount of data would therefore be wasted. To resolve this problem and further improve simulation convergence, as promised in our original paper, we report an alternative approach: the adaptive-length self-healing (ALSH) strategy for AMD simulations; this development is based on a recent self-healing umbrella sampling method. Here, the unit simulation length for each self-healing recursion is progressively updated based on the Wang-Landau flattening judgment. When the unit simulation length for each update is long enough, all following unit simulations naturally run in the equilibrium regime. Thereafter, these unit simulations can serve the dual purposes of recursion and equilibrium analysis. As demonstrated in our model studies, applying ALSH achieves a compromise between fast recursion and minimal waste of nonequilibrium data. As a result, by combining all the data obtained from all the unit simulations that are in the equilibrium regime via the weighted histogram analysis method, efficient convergence can be robustly ensured, especially for the purpose of free energy surface mapping.
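
    A toy version of the adaptive-length recursion is easy to write down. The sketch below is illustrative only: a Monte Carlo walk over discrete energy bins stands in for the MD engine, and the bias update is schematic rather than the paper's metadynamics recursion. It doubles the unit-simulation length until the visit histogram passes a Wang-Landau-style flatness check.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_bins = 20
    E = 0.5 * (np.arange(n_bins) - n_bins / 2) ** 2 / n_bins  # toy energy ladder

    def run_unit(n_steps, bias):
        """One 'unit simulation': MC walk over energy bins under a bias penalty."""
        hist = np.zeros(n_bins)
        i = int(rng.integers(n_bins))
        for _ in range(n_steps):
            j = int(np.clip(i + rng.choice([-1, 1]), 0, n_bins - 1))
            if rng.random() < np.exp((E[i] + bias[i]) - (E[j] + bias[j])):
                i = j
            hist[i] += 1
        return hist

    def is_flat(hist, c=0.8):
        """Wang-Landau flatness: minimum bin count within a fraction c of the mean."""
        return hist.min() >= c * hist.mean()

    bias, unit_len = np.zeros(n_bins), 500
    for cycle in range(30):                  # self-healing recursion
        hist = run_unit(unit_len, bias)
        bias += np.log(np.maximum(hist, 1))  # penalize visited bins to flatten sampling
        if is_flat(hist):
            break                            # units long enough: later units are usable
        unit_len *= 2                        # adaptive-length step: grow the unit length
    print("converged unit length:", unit_len)
    ```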

  7. Converging free energies of binding in cucurbit[7]uril and octa-acid host-guest systems from SAMPL4 using expanded ensemble simulations

    NASA Astrophysics Data System (ADS)

    Monroe, Jacob I.; Shirts, Michael R.

    2014-04-01

    Molecular containers such as cucurbit[7]uril (CB7) and the octa-acid (OA) host are ideal simplified model test systems for optimizing and analyzing methods for computing free energies of binding intended for use with biologically relevant protein-ligand complexes. To this end, we have performed initially blind free energy calculations to determine the free energies of binding for ligands of both the CB7 and OA hosts. A subset of the selected guest molecules were those included in the SAMPL4 prediction challenge. Using expanded ensemble simulations in the dimension of coupling host-guest intermolecular interactions, we are able to show that our estimates in most cases can be demonstrated to fully converge and that the errors in our estimates are due almost entirely to the assigned force field parameters and the choice of environmental conditions used to model experiment. We confirm the convergence through the use of alternative simulation methodologies and thermodynamic pathways, analyzing sampled conformations, and directly observing changes of the free energy with respect to simulation time. Our results demonstrate the benefits of enhanced sampling of multiple local free energy minima made possible by the use of expanded ensemble molecular dynamics and may indicate the presence of significant problems with current transferable force fields for organic molecules when used for calculating binding affinities, especially in non-protein chemistries.

  8. Converging free energies of binding in cucurbit[7]uril and octa-acid host-guest systems from SAMPL4 using expanded ensemble simulations.

    PubMed

    Monroe, Jacob I; Shirts, Michael R

    2014-04-01

    Molecular containers such as cucurbit[7]uril (CB7) and the octa-acid (OA) host are ideal simplified model test systems for optimizing and analyzing methods for computing free energies of binding intended for use with biologically relevant protein-ligand complexes. To this end, we have performed initially blind free energy calculations to determine the free energies of binding for ligands of both the CB7 and OA hosts. A subset of the selected guest molecules were those included in the SAMPL4 prediction challenge. Using expanded ensemble simulations in the dimension of coupling host-guest intermolecular interactions, we are able to show that our estimates in most cases can be demonstrated to fully converge and that the errors in our estimates are due almost entirely to the assigned force field parameters and the choice of environmental conditions used to model experiment. We confirm the convergence through the use of alternative simulation methodologies and thermodynamic pathways, analyzing sampled conformations, and directly observing changes of the free energy with respect to simulation time. Our results demonstrate the benefits of enhanced sampling of multiple local free energy minima made possible by the use of expanded ensemble molecular dynamics and may indicate the presence of significant problems with current transferable force fields for organic molecules when used for calculating binding affinities, especially in non-protein chemistries.

  9. Computer Graphics Simulations of Sampling Distributions.

    ERIC Educational Resources Information Center

    Gordon, Florence S.; Gordon, Sheldon P.

    1989-01-01

    Describes the use of computer graphics simulations to enhance student understanding of sampling distributions that arise in introductory statistics. Highlights include the distribution of sample proportions, the distribution of the difference of sample means, the distribution of the difference of sample proportions, and the distribution of sample…

  10. Simulation of the Sampling Distribution of the Mean Can Mislead

    ERIC Educational Resources Information Center

    Watkins, Ann E.; Bargagliotti, Anna; Franklin, Christine

    2014-01-01

    Although the use of simulation to teach the sampling distribution of the mean is meant to provide students with sound conceptual understanding, it may lead them astray. We discuss a misunderstanding that can be introduced or reinforced when students who intuitively understand that "bigger samples are better" conduct a simulation to…
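
    Such a classroom simulation takes only a few lines of code. The sketch below (an arbitrary skewed population, chosen purely for illustration) draws many samples at several sizes and prints how the standard error of the sample mean shrinks, which is exactly the "bigger samples are better" behavior students are meant to observe.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    population = rng.exponential(scale=2.0, size=100_000)  # skewed population

    for n in (5, 30, 200):
        # 10,000 samples of size n; each row's mean is one draw from the
        # sampling distribution of the mean
        idx = rng.integers(0, population.size, size=(10_000, n))
        means = population[idx].mean(axis=1)
        print(f"n={n:4d}  mean={means.mean():.3f}  SE={means.std(ddof=1):.3f}")
    ```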

  11. An Overview of Importance Splitting for Rare Event Simulation

    ERIC Educational Resources Information Center

    Morio, Jerome; Pastel, Rudy; Le Gland, Francois

    2010-01-01

    Monte Carlo simulations are a classical tool to analyse physical systems. When unlikely events are to be simulated, the importance sampling technique is often used instead of Monte Carlo. Importance sampling has some drawbacks when the problem dimensionality is high or when the optimal importance sampling density is complex to obtain. In this…

  12. Macrostructure from Microstructure: Generating Whole Systems from Ego Networks

    PubMed Central

    Smith, Jeffrey A.

    2014-01-01

    This paper presents a new simulation method to make global network inference from sampled data. The proposed simulation method takes sampled ego network data and uses Exponential Random Graph Models (ERGM) to reconstruct the features of the true, unknown network. After describing the method, the paper presents two validity checks of the approach: the first uses the 20 largest Add Health networks while the second uses the Sociology Coauthorship network in the 1990s. For each test, I take random ego network samples from the known networks and use my method to make global network inference. I find that my method successfully reproduces the properties of the networks, such as distance and main component size. The results also suggest that simpler, baseline models provide considerably worse estimates for most network properties. I end the paper by discussing the bounds/limitations of ego network sampling. I also discuss possible extensions to the proposed approach. PMID:25339783

  13. Logistics and quality control for DNA sampling in large multicenter studies.

    PubMed

    Nederhand, R J; Droog, S; Kluft, C; Simoons, M L; de Maat, M P M

    2003-05-01

    To study associations between genetic variation and disease, large bio-banks need to be created in multicenter studies. We therefore studied the effects of storage time and temperature on DNA quality and quantity in a simulation experiment with storage of up to 28 days frozen, at 4 °C, and at room temperature. In the simulation experiment, none of the conditions degraded the amount or quality of DNA to an unsatisfactory level. However, the amount of extracted DNA was decreased in frozen samples and in samples that were stored for more than 7 days at room temperature. In samples from patients in 24 countries of the EUROPA trial, obtained by mail with transport times of up to 1 month, DNA yield and quality were adequate. From these results we conclude that transport of non-frozen blood by ordinary mail is a usable and practical approach to DNA isolation for polymerase chain reaction in clinical and epidemiological studies.

  14. An empirical analysis of the quantitative effect of data when fitting quadratic and cubic polynomials

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    A study is made of the extent to which the size of the sample affects the accuracy of a quadratic or a cubic polynomial approximation of an experimentally observed quantity, and the trend with regard to improvement in the accuracy of the approximation as a function of sample size is established. The task is made possible through a simulated analysis carried out by the Monte Carlo method in which data are simulated by using several transcendental or algebraic functions as models. Contaminated data of varying amounts are fitted to either quadratic or cubic polynomials, and the behavior of the mean-squared error of the residual variance is determined as a function of sample size. Results indicate that the effect of the size of the sample is significant only for relatively small sizes and diminishes drastically for moderate and large amounts of experimental data.
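
    The experiment is straightforward to reproduce in outline. The sketch below (a sine model, nominal noise level, and cubic fits, all chosen for illustration rather than taken from the paper) shows the residual mean-squared error of the fit settling once the sample size passes a modest threshold.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def mean_residual_mse(n, trials=500, sigma=0.1, deg=3):
        """Average residual MSE of a degree-`deg` fit to n noisy samples of sin(x)."""
        errs = []
        for _ in range(trials):
            x = np.linspace(0.0, np.pi, n)
            y = np.sin(x) + rng.normal(0.0, sigma, n)   # "contaminated" data
            coef = np.polyfit(x, y, deg)
            errs.append(np.mean((np.polyval(coef, x) - y) ** 2))
        return np.mean(errs)

    for n in (5, 10, 20, 50, 200):
        print(f"n={n:4d}  residual MSE={mean_residual_mse(n):.5f}")
    ```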

  15. Crystallization of hard spheres revisited. I. Extracting kinetics and free energy landscape from forward flux sampling.

    PubMed

    Richard, David; Speck, Thomas

    2018-03-28

    We investigate the kinetics and the free energy landscape of the crystallization of hard spheres from a supersaturated metastable liquid through direct simulations and forward flux sampling. In this first paper, we describe and test two different ways to reconstruct the free energy barriers from the sampled steady-state probability distribution of cluster sizes without sampling the equilibrium distribution. The first method is based on mean first passage times, and the second method is based on splitting probabilities. We verify both methods for a single particle moving in a double-well potential. For the nucleation of hard spheres, these methods allow us to probe a wide range of supersaturations and to reconstruct the kinetics and the free energy landscape from the same simulation. Results are consistent with the scaling predicted by classical nucleation theory, although a quantitative fit requires a rather large effective interfacial tension.
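
    For orientation, the second reconstruction route has a textbook analogue: for one-dimensional overdamped (diffusive) dynamics of the cluster size between absorbing boundaries a and b, the splitting probability of reaching b before a from size x is tied to the free energy profile F by

    $$p_b(x) = \frac{\int_a^x e^{\beta F(y)}\, dy}{\int_a^b e^{\beta F(y)}\, dy},$$

    so measured splitting probabilities can be inverted for the barrier shape. This standard relation is quoted here for context under the assumption of diffusive cluster-size dynamics, not as the authors' exact estimator.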

  16. Crystallization of hard spheres revisited. I. Extracting kinetics and free energy landscape from forward flux sampling

    NASA Astrophysics Data System (ADS)

    Richard, David; Speck, Thomas

    2018-03-01

    We investigate the kinetics and the free energy landscape of the crystallization of hard spheres from a supersaturated metastable liquid through direct simulations and forward flux sampling. In this first paper, we describe and test two different ways to reconstruct the free energy barriers from the sampled steady-state probability distribution of cluster sizes without sampling the equilibrium distribution. The first method is based on mean first passage times, and the second method is based on splitting probabilities. We verify both methods for a single particle moving in a double-well potential. For the nucleation of hard spheres, these methods allow us to probe a wide range of supersaturations and to reconstruct the kinetics and the free energy landscape from the same simulation. Results are consistent with the scaling predicted by classical nucleation theory, although a quantitative fit requires a rather large effective interfacial tension.

  17. Enhanced Sampling of an Atomic Model with Hybrid Nonequilibrium Molecular Dynamics-Monte Carlo Simulations Guided by a Coarse-Grained Model.

    PubMed

    Chen, Yunjie; Roux, Benoît

    2015-08-11

    Molecular dynamics (MD) trajectories based on a classical equation of motion provide a straightforward, albeit somewhat inefficient, approach to explore and sample the configurational space of a complex molecular system. While a broad range of techniques can be used to accelerate and enhance the sampling efficiency of classical simulations, only algorithms that are consistent with the Boltzmann equilibrium distribution yield a proper statistical mechanical computational framework. Here, a multiscale hybrid algorithm relying simultaneously on all-atom fine-grained (FG) and coarse-grained (CG) representations of a system is designed to improve sampling efficiency by combining the strength of nonequilibrium molecular dynamics (neMD) and Metropolis Monte Carlo (MC). This CG-guided hybrid neMD-MC algorithm comprises six steps: (1) a FG configuration of an atomic system is dynamically propagated for some period of time using equilibrium MD; (2) the resulting FG configuration is mapped onto a simplified CG model; (3) the CG model is propagated for a brief time interval to yield a new CG configuration; (4) the resulting CG configuration is used as a target to guide the evolution of the FG system; (5) the FG configuration (from step 1) is driven via a nonequilibrium MD (neMD) simulation toward the CG target; (6) the resulting FG configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-ends momentum reversal prescription is used for the neMD trajectories of the FG system to guarantee that the CG-guided hybrid neMD-MC algorithm obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The enhanced sampling achieved with the method is illustrated with a model system with hindered diffusion and explicit-solvent peptide simulations. Illustrative tests indicate that the method can yield a speedup of about 80 times for the model system and up to 21 times for polyalanine and (AAQAA)3 in water.
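
    The six-step cycle maps directly onto a simulation loop. The skeleton below is a schematic rendering of that cycle: `md`, `cg_model`, and `nemd` are assumed user-supplied engine objects (not a real package API), and the acceptance shown is a plain Metropolis test on the nonequilibrium work, whereas the paper's prescription additionally involves the symmetric two-ends momentum reversal.

    ```python
    import numpy as np

    def hybrid_nemd_mc_cycle(fg_state, md, cg_model, nemd, rng, beta):
        """One cycle of the CG-guided hybrid neMD-MC scheme (schematic sketch)."""
        x = md.propagate(fg_state)                # (1) equilibrium MD on the FG system
        cg = cg_model.map(x)                      # (2) map FG configuration onto the CG model
        cg_target = cg_model.propagate(cg)        # (3) brief CG dynamics yields a target
        x_trial, work = nemd.drive(x, cg_target)  # (4)-(5) neMD drives FG toward the CG target
        if rng.random() < np.exp(-beta * work):   # (6) Metropolis accept/reject on the work
            return x_trial                        # accepted: continue from the driven state
        return x                                  # rejected: keep the pre-drive configuration
    ```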

  18. Enhanced Sampling of an Atomic Model with Hybrid Nonequilibrium Molecular Dynamics—Monte Carlo Simulations Guided by a Coarse-Grained Model

    PubMed Central

    2015-01-01

    Molecular dynamics (MD) trajectories based on a classical equation of motion provide a straightforward, albeit somewhat inefficient, approach to explore and sample the configurational space of a complex molecular system. While a broad range of techniques can be used to accelerate and enhance the sampling efficiency of classical simulations, only algorithms that are consistent with the Boltzmann equilibrium distribution yield a proper statistical mechanical computational framework. Here, a multiscale hybrid algorithm relying simultaneously on all-atom fine-grained (FG) and coarse-grained (CG) representations of a system is designed to improve sampling efficiency by combining the strength of nonequilibrium molecular dynamics (neMD) and Metropolis Monte Carlo (MC). This CG-guided hybrid neMD-MC algorithm comprises six steps: (1) a FG configuration of an atomic system is dynamically propagated for some period of time using equilibrium MD; (2) the resulting FG configuration is mapped onto a simplified CG model; (3) the CG model is propagated for a brief time interval to yield a new CG configuration; (4) the resulting CG configuration is used as a target to guide the evolution of the FG system; (5) the FG configuration (from step 1) is driven via a nonequilibrium MD (neMD) simulation toward the CG target; (6) the resulting FG configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-ends momentum reversal prescription is used for the neMD trajectories of the FG system to guarantee that the CG-guided hybrid neMD-MC algorithm obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The enhanced sampling achieved with the method is illustrated with a model system with hindered diffusion and explicit-solvent peptide simulations. Illustrative tests indicate that the method can yield a speedup of about 80 times for the model system and up to 21 times for polyalanine and (AAQAA)3 in water. PMID:26574442

  19. Improvement of a wind-tunnel sampling system for odour and VOCs.

    PubMed

    Wang, X; Jiang, J; Kaye, R

    2001-01-01

    Wind-tunnel systems are widely used for collecting odour emission samples from surface area sources. Accordingly, a portable wind-tunnel system that was easy to handle and suitable for sampling from liquid surfaces was developed at the University of New South Wales. Development work was undertaken to ensure even air-flows above the emitting surface and to optimise air velocities to simulate real situations. However, recovery efficiencies for emissions had not previously been studied for wind-tunnel systems. A series of experiments was carried out to determine and improve the recovery rate of the wind-tunnel sampling system, using carbon monoxide as a tracer gas. It was observed by mass balance that carbon monoxide recovery rates were initially only 37% to 48% from a simulated surface area emission source, so further development work was clearly required to improve recovery efficiencies. By analysing the aerodynamic character of air movement and CO transport inside the wind-tunnel, it was determined that the apparent poor recoveries resulted from uneven mixing at the sample collection point. A number of modifications were made to the mixing chamber of the wind-tunnel system: a special sampling chamber extension and a sampling manifold with optimally distributed sampling orifices were developed for the wind-tunnel sampling system. The simulation experiments were repeated with the new sampling system. Over a series of experiments, the recovery efficiency of sampling improved to 83-100% (average 90%) where the CO tracer gas was introduced at a single point, and to 92-102% (average 97%) where the tracer was introduced along a line transverse to the sweep air. The stability and accuracy of the new system were determined statistically and are reported.

  20. Quantifying uncertainty and computational complexity for pore-scale simulations

    NASA Astrophysics Data System (ADS)

    Chen, C.; Yuan, Z.; Wang, P.; Yang, X.; Zhenyan, L.

    2016-12-01

    Pore-scale simulation is an essential tool for understanding the complex physical processes in many environmental problems, from multi-phase flow in the subsurface to fuel cells. In practice, however, factors such as sample heterogeneity, data sparsity and, in general, our insufficient knowledge of the underlying process render many simulation parameters, and hence the prediction results, uncertain. Meanwhile, most pore-scale simulations (in particular, direct numerical simulation) incur high computational cost due to finely resolved spatio-temporal scales, which further limits data and sample collection. To address these challenges, we propose a novel framework based on generalized polynomial chaos (gPC) and build a surrogate model representing the essential features of the underlying system. Specifically, we apply the novel framework to analyze the uncertainties of the system behavior based on a series of pore-scale numerical experiments, such as flow and reactive transport in 2D heterogeneous porous media and 3D packed beds. Compared with recent pore-scale uncertainty quantification studies using Monte Carlo techniques, our new framework requires fewer realizations and hence considerably reduces the overall computational cost, while maintaining the desired accuracy.
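
    In its simplest form, a gPC surrogate is a polynomial expansion in a standard random variable fitted to a handful of solver runs. The sketch below (one uncertain parameter, a cheap stand-in for the expensive pore-scale solver, and a probabilists' Hermite basis via numpy; all choices illustrative) fits such a surrogate by least squares and evaluates it.

    ```python
    import numpy as np
    from numpy.polynomial import hermite_e as He

    rng = np.random.default_rng(2)
    xi = rng.standard_normal(20)                            # standard-normal "germ" samples
    y = np.exp(0.3 * xi) + 0.05 * rng.standard_normal(20)   # stand-in for solver output

    coeffs = He.hermefit(xi, y, deg=4)            # least-squares gPC coefficients
    surrogate = lambda z: He.hermeval(z, coeffs)  # cheap replacement for the solver
    print("surrogate at 0:", surrogate(0.0), " vs noiseless model:", np.exp(0.0))
    ```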

  1. Computational system identification of continuous-time nonlinear systems using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan

    2016-11-01

    In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty; (2) the simulation approach avoids the difficult problem of estimating signal derivatives, as is common with other continuous-time methods; and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
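
    The core of ABC is easy to state in code. The sketch below is the plain rejection variant (the paper's scheme for continuous-time system identification is more elaborate): draw parameters from the prior, forward-simulate, and keep draws whose simulated output lies within a tolerance of the data.

    ```python
    import numpy as np

    def abc_rejection(simulate, observed, sample_prior, n_draws=20_000, tol=0.05):
        """Plain ABC rejection sampler (illustrative)."""
        kept = []
        for _ in range(n_draws):
            theta = sample_prior()                     # draw from the prior
            if abs(simulate(theta) - observed) < tol:  # compare simulated vs observed output
                kept.append(theta)
        return np.array(kept)                          # approximate posterior draws

    # Toy usage: recover a decay rate theta from one observation y = exp(-theta).
    rng = np.random.default_rng(4)
    post = abc_rejection(lambda th: np.exp(-th), observed=np.exp(-1.0),
                         sample_prior=lambda: rng.uniform(0.0, 3.0))
    print("accepted:", post.size, " posterior mean ~", post.mean())
    ```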

  2. Image analysis of representative food structures: application of the bootstrap method.

    PubMed

    Ramírez, Cristian; Germain, Juan C; Aguilera, José M

    2009-08-01

    Images (for example, photomicrographs) are routinely used as qualitative evidence of the microstructure of foods. In quantitative image analysis it is important to estimate the area (or volume) to be sampled, the field of view, and the resolution. The bootstrap method is proposed to estimate the size of the sampling area as a function of the coefficient of variation (CV(Bn)) and standard error (SE(Bn)) of the bootstrap, taking sub-areas of different sizes. The bootstrap method was applied to simulated and real structures (apple tissue). For simulated structures, 10 computer-generated images were constructed containing 225 black circles (elements) and different coefficients of variation (CV(image)). For apple tissue, 8 images of apple tissue containing cellular cavities with different CV(image) were analyzed. Results confirmed that for simulated and real structures, increasing the size of the sampling area decreased the CV(Bn) and SE(Bn). Furthermore, there was a linear relationship between the CV(image) and the CV(Bn). For example, to obtain a CV(Bn) = 0.10 in an image with CV(image) = 0.60, a sampling area of 400 x 400 pixels (11% of the whole image) was required, whereas if CV(image) = 1.46, a sampling area of 1000 x 1000 pixels (69% of the whole image) became necessary. This suggests that a large dispersion of element sizes in an image requires increasingly larger sampling areas or a larger number of images.
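
    The procedure amounts to bootstrapping a statistic over random sub-areas of increasing size. The sketch below (mean intensity as the measured feature, and a random test image; both stand-ins for the paper's element measurements on micrographs) computes the bootstrap coefficient of variation CV(Bn) for a given window size.

    ```python
    import numpy as np

    def bootstrap_cv(image, window, n_boot=200, rng=None):
        """Bootstrap CV of a sub-area statistic (here: mean intensity)."""
        if rng is None:
            rng = np.random.default_rng(0)
        h, w = image.shape
        stats = []
        for _ in range(n_boot):
            i = rng.integers(0, h - window)            # random sub-area origin
            j = rng.integers(0, w - window)
            stats.append(image[i:i + window, j:j + window].mean())
        stats = np.asarray(stats)
        return stats.std(ddof=1) / stats.mean()        # CV(Bn) for this window size

    img = np.random.default_rng(5).random((1200, 1200))
    for win in (100, 400, 800):                        # CV falls as the sampling area grows
        print(win, round(bootstrap_cv(img, win), 4))
    ```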

  3. Baseline-dependent sampling and windowing for radio interferometry: data compression, field-of-interest shaping, and outer field suppression

    NASA Astrophysics Data System (ADS)

    Atemkeng, M.; Smirnov, O.; Tasse, C.; Foster, G.; Keimpema, A.; Paragi, Z.; Jonas, J.

    2018-07-01

    Traditional radio interferometric correlators produce regular-gridded samples of the true uv-distribution by averaging the signal over constant, discrete time-frequency intervals. This regular sampling and averaging translates into irregularly gridded samples in uv-space and results in a baseline-length-dependent loss of amplitude and phase coherence, which depends on the distance from the image phase centre. The effect is often referred to as `decorrelation' in the uv-space, which is equivalent in the source domain to `smearing'. This work discusses and implements a regular-gridded sampling scheme in the uv-space (baseline-dependent sampling) and windowing that allow for data compression, field-of-interest shaping, and source suppression. Baseline-dependent sampling requires irregular-gridded sampling in the time-frequency space, i.e. the time-frequency interval becomes baseline dependent. Analytic models and simulations are used to show that decorrelation remains constant across all baselines when applying baseline-dependent sampling and windowing. Simulations using the MeerKAT telescope and the European Very Long Baseline Interferometry Network show that data compression, field-of-interest shaping, and outer field-of-interest suppression are all achieved.
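
    The amplitude loss from averaging has a simple closed form that makes the baseline dependence clear: averaging a fringe $e^{i\omega t}$ over an interval $T$ gives

    $$\frac{1}{T}\int_{-T/2}^{T/2} e^{i\omega t}\, dt = \frac{\sin(\omega T/2)}{\omega T/2},$$

    and since the fringe rate $\omega$ grows with baseline length and with distance from the phase centre, a fixed $T$ decorrelates long baselines more strongly. Baseline-dependent sampling instead chooses $T$ per baseline so that $\omega T$, and hence the decorrelation, stays roughly constant.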

  4. Screening study on microsphere used in profile control under the environment of microbial oil recovery

    NASA Astrophysics Data System (ADS)

    Zhang, Tiantian; Xie, Gang; Gao, Shanshan; Wang, Zhiqiang; Wei, Junjie; Shi, Lei; Zheng, Ya; Gu, Yi; Lei, Xiaoyang; Wang, Ai

    2017-12-01

    The performance of four microsphere samples (MS-1, MS-2, MS-3, and MS-4) was evaluated and optimized in laboratory experiments. First, the basic physical and chemical properties of the four kinds of microspheres were evaluated by analyzing the solid contents and the solubility in water. Results showed that the content of precipitated solids in MS-1 was the lowest of the four microsphere samples, while the other three microspheres had similar solid contents. In addition, the solubility of these three microspheres in simulated formation water was excellent. Second, the expansion properties of three kinds of microspheres (MS-2, MS-3, and MS-4) were investigated. Results revealed that the expansion performance of MS-3 was strongly affected by microbial metabolism, whereas the other two samples had excellent expansion performance under microbial-flooding conditions. Finally, the sealing performance of MS-2 and MS-4 was evaluated by physical-simulation block tests. Results showed that, compared with MS-2, MS-4 was more suitable for Block B.

  5. MODFLOW-2000 Ground-Water Model?User Guide to the Subsidence and Aquifer-System Compaction (SUB) Package

    USGS Publications Warehouse

    Hoffmann, Jörn; Leake, S.A.; Galloway, D.L.; Wilson, Alicia M.

    2003-01-01

    This report documents a computer program, the Subsidence and Aquifer-System Compaction (SUB) Package, to simulate aquifer-system compaction and land subsidence using the U.S. Geological Survey modular finite-difference ground-water flow model, MODFLOW-2000. The SUB Package simulates elastic (recoverable) compaction and expansion, and inelastic (permanent) compaction of compressible fine-grained beds (interbeds) within the aquifers. The deformation of the interbeds is caused by head or pore-pressure changes, and thus by changes in effective stress, within the interbeds. If the stress is less than the preconsolidation stress of the sediments, the deformation is elastic; if the stress is greater than the preconsolidation stress, the deformation is inelastic. The propagation of head changes within the interbeds is defined by a transient, one-dimensional (vertical) diffusion equation. This equation accounts for delayed release of water from storage or uptake of water into storage in the interbeds. Properties that control the timing of the storage changes are vertical hydraulic diffusivity and interbed thickness. The SUB Package supersedes the Interbed Storage Package (IBS1) for MODFLOW, which assumes that water is released from or taken into storage with changes in head in the aquifer within a single model time step and, therefore, can be reasonably used to simulate only thin interbeds. The SUB Package relaxes this assumption and can be used to simulate time-dependent drainage and compaction of thick interbeds and confining units. The time-dependent drainage can be turned off, in which case the SUB Package gives results identical to those from IBS1. Three sample problems illustrate the usefulness of the SUB Package. One sample problem verifies that the package works correctly. This sample problem simulates the drainage of a thick interbed in response to a step change in head in the adjacent aquifer and closely matches the analytical solution. A second sample problem illustrates the effects of seasonally varying discharge and recharge to an aquifer system with a thick interbed. A third sample problem simulates a multilayered regional ground-water basin. Model input files for the third sample problem are included in the appendix.
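
    For reference, the interbed drainage that the SUB Package solves can be written compactly: the head h(z,t) within an interbed obeys the one-dimensional vertical diffusion equation

    $$\frac{\partial h}{\partial t} = \frac{K'_v}{S'_s}\,\frac{\partial^2 h}{\partial z^2},$$

    where K'_v is the vertical hydraulic conductivity and S'_s the specific storage of the interbed, so the vertical hydraulic diffusivity K'_v/S'_s together with the interbed thickness sets the time scale of delayed storage release, as noted above. (The notation here is generic and not necessarily the report's exact symbols.)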

  6. A-SIDE: Video Simulation of Teen Alcohol and Marijuana Use Contexts

    PubMed Central

    Anderson, Kristen G; Brackenbury, Lauren; Quackenbush, Mathias; Buras, Morgan; Brown, Sandra A; Price, Joseph

    2014-01-01

    Objective: This investigation examined the concurrent validity of a new video simulation assessing adolescent alcohol and marijuana decision making in peer contexts (A-SIDE). Method: One hundred eleven youth (60% female; age 14–19 years; 80% White, 12.6% Latino; 24% recruited from treatment centers) completed the A-SIDE simulation, self-report measures of alcohol and marijuana use and disorder symptoms, and measures of alcohol (i.e., drinking motives and expectancies) and marijuana (i.e., expectancies) cognitions in the laboratory. Results: Study findings support concurrent associations between behavioral willingness to use alcohol and marijuana on the simulation and current use variables as well as on drinking motives and marijuana expectancies. Relations with use variables were found even when sample characteristics were controlled. Interestingly, willingness to accept nonalcoholic beverages (e.g., soda) and food offers in the simulation were inversely related to recent alcohol and marijuana use behavior. Conclusions: These findings are consistent with prior work using laboratory simulations with college students and provide preliminary validity evidence for this procedure. Future work is needed to examine the predictive utility of the A-SIDE with larger and more diverse samples of youth. PMID:25343652

  7. Numerical sedimentation particle-size analysis using the Discrete Element Method

    NASA Astrophysics Data System (ADS)

    Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.

    2015-12-01

    Sedimentation tests are widely used to determine the particle size distribution of a granular sample. In this work, the Discrete Element Method interacts with the simulation of flow using the well-known one-way-coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected onto two dimensions, with diameters ranging from 2.5 × 10^-6 m to 70 × 10^-6 m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement, considering laminar-flow interactions of buoyancy, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests, since they need the final granulometry as initial data, but, as the results show, these simulations can identify the strong and weak points of each method and eventually recommend useful variations and draw conclusions on their validity, aspects very difficult to achieve in the laboratory.
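
    The laminar-flow regime these tests rely on is the Stokes one, where a sphere's terminal settling velocity fixes how fast each size fraction clears the suspension. The sketch below evaluates Stokes' law over the diameter range quoted above; the densities and viscosity are nominal values for silica-like grains in water, not taken from the paper.

    ```python
    import numpy as np

    rho_p, rho_f = 2650.0, 1000.0   # particle and fluid density, kg/m^3 (nominal)
    mu, g = 1.0e-3, 9.81            # water viscosity (Pa s) and gravity (m/s^2)

    d = np.array([2.5e-6, 10e-6, 70e-6])           # diameters from the simulated range, m
    v = (rho_p - rho_f) * g * d**2 / (18.0 * mu)   # Stokes terminal settling velocity
    for di, vi in zip(d, v):
        print(f"d = {di*1e6:5.1f} um  ->  v = {vi*1e3:8.4f} mm/s")
    ```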

  8. The Shawmere anorthosite and OB-1 as lunar highland regolith simulants

    NASA Astrophysics Data System (ADS)

    Battler, Melissa M.; Spray, John G.

    2009-12-01

    Anorthosite constitutes a major component of the lunar crust and comprises an important, if not dominant, ingredient of the lunar regolith. Given the need for highland regolith simulants in preparation for lunar surface engineering activities, we have selected an appropriate terrestrial anorthosite and performed crushing trials to generate a particle size distribution comparable to Apollo 16 regolith sample 64500. The root simulant is derived from a granoblastic facies of the Archean Shawmere Complex of the Kapuskasing Structural Zone of Ontario, Canada. The Shawmere exhibits minimal retrogression, is homogeneous and has an average plagioclase composition of An78 (bytownite). Previous industrial interest in this calcic anorthosite has resulted in quarrying operations, which provide ease of extraction and access for potential large-scale simulant production. A derivative of the Shawmere involves the addition of olivine slag, crushed to yield a particle size distribution similar to that of the agglutinate and glass components of the Apollo sample. This simulant is referred to as OB-1. The Shawmere and OB-1 regolith simulants are lunar highland analogues, conceived to produce geotechnical properties of benefit to designing and testing drilling, excavation and construction equipment for future lunar surface operations.

  9. Domain Motion Enhanced (DoME) Model for Efficient Conformational Sampling of Multidomain Proteins.

    PubMed

    Kobayashi, Chigusa; Matsunaga, Yasuhiro; Koike, Ryotaro; Ota, Motonori; Sugita, Yuji

    2015-11-19

    Large conformational changes of multidomain proteins are difficult to simulate using all-atom molecular dynamics (MD) due to the slow time scale. We show that a simple modification of the structure-based coarse-grained (CG) model enables stable and efficient MD simulation of such proteins. "Motion Tree", a tree diagram that describes conformational changes between two structures of a protein, provides information on rigid structural units (domains) and the magnitudes of domain motions. In our new CG model, which we call the DoME (domain motion enhanced) model, interdomain interactions are defined as being inversely proportional to the magnitude of the domain motions in the diagram, whereas intradomain interactions are kept constant. We applied the DoME model in combination with the Go model to simulations of adenylate kinase (AdK). The results of the DoME-Go simulation are consistent with an all-atom MD simulation for 10 μs as well as with known experimental data. Unlike the conventional Go model, the DoME-Go model yields simulation trajectories that are stable against temperature changes, and conformational transitions are easily sampled despite domain rigidity. Evidently, identification of domains and their interfaces is a useful approach for CG modeling of multidomain proteins.

  10. Finite Element Analysis of Simple Rectangular Microstrip Sensor for Determination Moisture Content of Hevea Rubber Latex

    NASA Astrophysics Data System (ADS)

    Yahaya, NZ; Ramli, MR; Razak, NNANA; Abbas, Z.

    2018-04-01

    The finite element method (FEM) has been successfully used to model a simple rectangular microstrip sensor for determining the moisture content of Hevea rubber latex. The FEM simulation of the sensor and samples was implemented using COMSOL Multiphysics software. The simulation includes the calculation of the magnitude and phase of the reflection coefficient, which show good agreement when compared with an analytical method. Field distributions of the unloaded sensor, as well as of the sensor loaded with samples of different moisture contents, were visualized using FEM in conjunction with COMSOL. The higher the moisture content of the sample, the more electric field loops were observed.

  11. Simulating Protein Mediated Hydrolysis of ATP and Other Nucleoside Triphosphates by Combining QM/MM Molecular Dynamics with Advances in Metadynamics

    PubMed Central

    2017-01-01

    The protein mediated hydrolysis of nucleoside triphosphates such as ATP or GTP is one of the most important and challenging biochemical reactions in nature. The chemical environment (water structure, catalytic metal, and amino acid residues) adjacent to the hydrolysis site contains hundreds of atoms, usually greatly limiting the amount of the free energy sampling that one can achieve from computationally demanding electronic structure calculations such as QM/MM simulations. Therefore, the combination of QM/MM molecular dynamics with the recently developed transition-tempered metadynamics (TTMetaD), an enhanced sampling method that can provide a high-quality free energy estimate at an early stage in a simulation, is an ideal approach to address the biomolecular nucleoside triphosphate hydrolysis problem. In this work the ATP hydrolysis process in monomeric and filamentous actin is studied as an example application of the combined methodology. The performance of TTMetaD in these demanding QM/MM simulations is compared with that of the more conventional well-tempered metadynamics (WTMetaD). Our results show that TTMetaD exhibits much better exploration of the hydrolysis reaction free energy surface in two key collective variables (CVs) during the early stages of the QM/MM simulation than does WTMetaD. The TTMetaD simulations also reveal that a key third degree of freedom, the O–H bond-breaking and proton transfer from the lytic water, must be biased for TTMetaD to converge fully. To perturb the NTP hydrolysis dynamics to the least extent and to properly focus the MetaD free energy sampling, we also adopt here the recently developed metabasin metadynamics (MBMetaD) to construct a self-limiting bias potential that only applies to the lytic water after its nucleophilic attack of the phosphate of ATP. With these new, state-of-the-art enhanced sampling metadynamics techniques, we present an effective and accurate computational strategy for combining QM/MM molecular dynamics simulation with free energy sampling methodology, including a means to analyze the convergence of the calculations through robust numerical criteria. PMID:28345907
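
    The deposition step at the heart of these metadynamics variants is compact. The sketch below shows the well-tempered form along a single CV, in which the hill height decays where bias has already accumulated; TTMetaD, as used here, alters the tempering schedule rather than this core update, and all parameter values are nominal.

    ```python
    import numpy as np

    def deposit_hill(s, centers, heights, sigma=0.2, w0=1.2, delta_T=15.0):
        """Deposit one well-tempered metadynamics Gaussian hill at CV value s."""
        c, h = np.asarray(centers), np.asarray(heights)
        # current bias at s from all previously deposited hills
        V_s = float(np.sum(h * np.exp(-(s - c) ** 2 / (2 * sigma ** 2)))) if len(c) else 0.0
        centers.append(s)
        heights.append(w0 * np.exp(-V_s / delta_T))  # hills shrink as bias builds up
        return centers, heights

    centers, heights = [], []
    for s in np.random.default_rng(6).normal(0.0, 0.5, 200):  # stand-in CV trajectory
        deposit_hill(s, centers, heights)
    print("first vs last hill height:", heights[0], round(heights[-1], 4))
    ```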

  12. Simulating Protein Mediated Hydrolysis of ATP and Other Nucleoside Triphosphates by Combining QM/MM Molecular Dynamics with Advances in Metadynamics.

    PubMed

    Sun, Rui; Sode, Olaseni; Dama, James F; Voth, Gregory A

    2017-05-09

    The protein mediated hydrolysis of nucleoside triphosphates such as ATP or GTP is one of the most important and challenging biochemical reactions in nature. The chemical environment (water structure, catalytic metal, and amino acid residues) adjacent to the hydrolysis site contains hundreds of atoms, usually greatly limiting the amount of the free energy sampling that one can achieve from computationally demanding electronic structure calculations such as QM/MM simulations. Therefore, the combination of QM/MM molecular dynamics with the recently developed transition-tempered metadynamics (TTMetaD), an enhanced sampling method that can provide a high-quality free energy estimate at an early stage in a simulation, is an ideal approach to address the biomolecular nucleoside triphosphate hydrolysis problem. In this work the ATP hydrolysis process in monomeric and filamentous actin is studied as an example application of the combined methodology. The performance of TTMetaD in these demanding QM/MM simulations is compared with that of the more conventional well-tempered metadynamics (WTMetaD). Our results show that TTMetaD exhibits much better exploration of the hydrolysis reaction free energy surface in two key collective variables (CVs) during the early stages of the QM/MM simulation than does WTMetaD. The TTMetaD simulations also reveal that a key third degree of freedom, the O-H bond-breaking and proton transfer from the lytic water, must be biased for TTMetaD to converge fully. To perturb the NTP hydrolysis dynamics to the least extent and to properly focus the MetaD free energy sampling, we also adopt here the recently developed metabasin metadynamics (MBMetaD) to construct a self-limiting bias potential that only applies to the lytic water after its nucleophilic attack of the phosphate of ATP. With these new, state-of-the-art enhanced sampling metadynamics techniques, we present an effective and accurate computational strategy for combining QM/MM molecular dynamics simulation with free energy sampling methodology, including a means to analyze the convergence of the calculations through robust numerical criteria.

  13. Surface Roughness of Composite Resins after Simulated Toothbrushing with Different Dentifrices

    PubMed Central

    Monteiro, Bruna; Spohr, Ana Maria

    2015-01-01

    Background: The aim of the study was to evaluate, in vitro, the surface roughness of two composite resins submitted to simulated toothbrushing with three different dentifrices. Materials and Methods: In total, 36 samples of Z350 XT and 36 samples of Empress Direct were built and randomly divided into three groups (n = 12) according to the dentifrice used (Oral-B Pro-Health Whitening [OBW], Colgate Sensitive Pro-Relief [CS], Colgate Total Clean Mint 12 [CT12]). The samples were submitted to 5,000, 10,000 or 20,000 cycles of simulated toothbrushing. After each simulated period, the surface roughness of the samples was measured using a roughness tester. Results: According to three-way analysis of variance, dentifrice (P = 0.044) and brushing time (P = 0.000) were significant. The composite resin was not significant (P = 0.381), and the interaction among the factors was not significant (P > 0.05). Mean surface roughness values (µm) followed by the same letter are not statistically different by Tukey's post-hoc test (P < 0.05). Dentifrice: CT12 = 0.269a; CS Pro-Relief = 0.300ab; OBW = 0.390b. Brushing time: baseline = 0.046a; 5,000 cycles = 0.297b; 10,000 cycles = 0.354b; 20,000 cycles = 0.584c. Conclusion: Z350 XT and Empress Direct presented similar surface roughness after all cycles of simulated toothbrushing. The longer the brushing time, the higher the surface roughness of the composite resins. The dentifrice OBW caused higher surface roughness in both composite resins. PMID:26229362

  14. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
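
    Yuen's statistic itself is short to implement, which makes the simulation loop described above easy to reproduce. The sketch below is the standard textbook form of the two-sample trimmed-mean test (trimmed means plus winsorized variances with Welch-type degrees of freedom), not the authors' cost-optimizing allocation formulas; note that SciPy 1.7+ also exposes the same test as scipy.stats.ttest_ind(..., trim=0.2).

    ```python
    import numpy as np
    from scipy import stats

    def yuen_test(x, y, trim=0.2):
        """Yuen's two-sample trimmed-mean test; returns (t statistic, p-value)."""
        def winsorized_var(a):
            g = int(np.floor(trim * len(a)))
            a = np.sort(a)
            # replace the g most extreme values at each end with the nearest kept value
            a_w = np.concatenate(([a[g]] * g, a[g:len(a) - g], [a[-g - 1]] * g))
            return a_w.var(ddof=1)
        hx = len(x) - 2 * int(np.floor(trim * len(x)))   # effective size after trimming
        hy = len(y) - 2 * int(np.floor(trim * len(y)))
        dx = winsorized_var(x) * (len(x) - 1) / (hx * (hx - 1))
        dy = winsorized_var(y) * (len(y) - 1) / (hy * (hy - 1))
        t = (stats.trim_mean(x, trim) - stats.trim_mean(y, trim)) / np.sqrt(dx + dy)
        df = (dx + dy) ** 2 / (dx ** 2 / (hx - 1) + dy ** 2 / (hy - 1))
        return t, 2.0 * stats.t.sf(abs(t), df)

    rng = np.random.default_rng(7)
    print(yuen_test(rng.normal(0, 1, 40), rng.normal(0.5, 3, 25)))  # unequal variances
    ```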

  15. Experimental investigation of the mechanical properties of brain simulants used for cranial gunshot simulation.

    PubMed

    Lazarjan, Milad Soltanipour; Geoghegan, Patrick Henry; Jermy, Mark Christopher; Taylor, Michael

    2014-06-01

    The mechanical properties of the human brain at high strain rate were investigated to analyse the mechanisms that cause backspatter when a cranial gunshot wound occurs. Different concentrations of gelatine and a new material (M1) developed in this work were tested and compared to bovine brain samples. Kinetic energy absorption and expansion rate of the samples caused by the impact of a bullet from a .22 air rifle (AR; average velocity 290 m/s) or a .22 long rifle (LR; average velocity 330 m/s) were analysed using a high-speed camera (24,000 fps). The AR projectile had, in the region of interest, an average kinetic energy (Ek) of 42 ± 1.3 J. On average, the bovine brain absorbed 50 ± 5% of Ek, and the simulants 46-58 ± 5%. The Ek of the .22 LR was 141 ± 3.7 J; the bovine brain absorbed 27% of the .22 LR Ek and the simulants 15-29%. The expansion of the sample after penetration was measured. The bovine brain experienced significant plastic deformation, whereas the gelatine solution exhibited a principally elastic response. The permanent damage patterns in the M1 material were much closer to those in brain tissue than were the damage patterns in the gelatine. The results provide a first step towards developing a realistic experimental simulant for the human brain which can produce the same blood backspatter patterns as a human brain during a cranial gunshot. These results can also be used to improve the 3D models of human heads used in car crash and blast trauma injury research.

  16. Freezing and drying effects on potential plant contributions to phosphorus in runoff.

    PubMed

    Roberson, Tiffany; Bundy, Larry G; Andraski, Todd W

    2007-01-01

    Phosphorus (P) in runoff from landscapes can promote eutrophication of natural waters. Soluble P released from plant material can contribute significant amounts of P to runoff, particularly after plant freezing or drying. This study was conducted to evaluate P losses from alfalfa or grass after freezing or drying as potential contributors to runoff P. Alfalfa (Medicago sativa L.) and grass (principally, Agropyron repens L.) plant samples were subjected to freezing and drying treatments to determine P release. Simulated rainfall runoff and natural runoff from established alfalfa fields and a grass waterway were collected to study P contributions from plant tissue to runoff. The effects of freezing and drying on P released from plant tissue were simulated by a herbicide treatment in selected experiments. Soluble reactive P (SP) extracted from alfalfa and grass samples was markedly increased by freezing or drying. In general, SP extracted from plant samples increased in the order fresh < frozen < frozen/thawed < dried, and averaged 1, 8, 14, and 26% of total P in alfalfa, respectively. Soluble reactive P extracted from alfalfa after freezing or drying increased with increasing soil test P (r^2 = 0.64 to 0.68), suggesting that excessive soil P levels increased the risk of plant P contributions to runoff losses. In simulated rainfall studies, paraquat (1,1'-dimethyl-4,4'-bipyridinium ion) treatment of alfalfa increased P losses in runoff, and results suggested that this treatment simulated the effects of drying on plant P loss. In contrast to the simulated rainfall results, natural runoff studies over 2 yr did not show higher runoff P losses that could be attributed to P from alfalfa. Actual P losses likely depend on the timing and extent of plant freezing and drying and of precipitation events after freezing.

  17. Simulation of spin label structure and its implication in molecular characterization

    PubMed Central

    Fajer, Piotr; Fajer, Mikolai; Zawrotny, Michael; Yang, Wei

    2016-01-01

    Interpretation of EPR signals from spin labels in terms of structure and dynamics requires knowledge of label behavior. General strategies were developed for the simulation of labels used in EPR studies of proteins. The criteria for those simulations are: (a) exhaustive sampling of rotamer space; (b) consensus of results independent of starting points; (c) inclusion of entropy. These criteria are satisfied only when the number of transitions in any dihedral angle exceeds 100 and the simulation maintains thermodynamic equilibrium. Methods such as conventional MD do not efficiently cross energetic barriers; Simulated Annealing, Monte Carlo, and popular Rotamer Library methodologies are potential-energy based and ignore entropy (in addition to their specific shortcomings: environment fluctuations, fixed environment, or electrostatics). The Simulated Scaling method avoids the above flaws by modulating the force fields between 0 (allowing crossing of energy barriers) and the full potential (sampling minima). The spin label diffuses on this surface while remaining in thermodynamic equilibrium. Simulations show that: (a) a single conformation is rare; often there are 2-4 populated rotamers; (b) the position of the NO group varies by up to 16 Å. These results illustrate the necessity for caution when interpreting EPR signals in terms of molecular structure. For example, a 10-16 Å distance change in DEER should not be interpreted as a large conformational change; it can well be a flip about the Cα-Cβ bond. Rigorous exploration of the possible rotamer structures of a spin label is paramount in signal interpretation. We advocate the use of bifunctional labels, whose motion is restricted 10,000-fold and whose NO position is restricted to 2-5 Å. PMID:26478501

  18. Nuclear Resonance Fluorescence Measurements of High Explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caggiano, Joseph A.; Warren, Glen A.; Korbly, Steve

    Pacific Northwest National Laboratory and Passport Systems have collaborated to perform Nuclear Resonance Fluorescence experiments using several high-quality high-explosive simulant samples. These measurements were conducted to determine the feasibility of finding and characterizing high-explosive material by NRF interrogation. Electron beams of 5.1, 5.3, 8, and 10 MeV were used to produce bremsstrahlung photon beams, which irradiated the samples. The gamma-ray spectra were collected using high-purity germanium detectors. Nitrogen-to-carbon ratios of the high-explosive simulants were extracted from the 5.1 and 5.3 MeV data and compare favorably with accepted values. Analysis of the 8 and 10 MeV data is in progress; preliminary isotopic comparisons within the samples are consistent with the expected results.

  19. Analyzing indirect secondary electron contrast of unstained bacteriophage T4 based on SEM images and Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogura, Toshihiko, E-mail: t-ogura@aist.go.jp

    2009-03-06

    The indirect secondary electron contrast (ISEC) condition of scanning electron microscopy (SEM) produces high-contrast detection with minimal damage of unstained biological samples mounted under a thin carbon film. The high-contrast image is created by a secondary electron signal produced under the carbon film at a low acceleration voltage. Here, we show that the ISEC condition clearly detects unstained bacteriophage T4 under a thin carbon film (10-15 nm) using high-resolution field emission (FE) SEM. The results show that FE-SEM provides higher resolution than thermionic emission SEM. Furthermore, we investigated the scattered electron area within the carbon film under ISEC conditions using Monte Carlo simulation. The simulations indicated that the image resolution difference is related to the scattering width in the carbon film and the electron beam spot size. Using ISEC conditions on unstained virus samples produces little electron-beam damage, because the electron beam does not directly irradiate the sample. In addition to routine analysis, this method can be utilized for structural analysis of various biological samples such as viruses, bacteria, and protein complexes.

  20. Characterizing the Conformational Landscape of Flavivirus Fusion Peptides via Simulation and Experiment

    PubMed Central

    Marzinek, Jan K.; Lakshminarayanan, Rajamani; Goh, Eunice; Huber, Roland G.; Panzade, Sadhana; Verma, Chandra; Bond, Peter J.

    2016-01-01

    Conformational changes in the envelope proteins of flaviviruses help to expose the highly conserved fusion peptide (FP), a region which is critical to membrane fusion and host cell infection, and which represents a significant target for antiviral drugs and antibodies. In principle, extended timescale atomic-resolution simulations may be used to characterize the dynamics of such peptides. However, the resultant accuracy is critically dependent upon both the underlying force field and sufficient conformational sampling. In the present study, we report a comprehensive comparison of three simulation methods and four force fields comprising a total of more than 40 μs of sampling. Additionally, we describe the conformational landscape of the FP fold across all flavivirus family members. All investigated methods sampled conformations close to available X-ray structures, but exhibited differently populated ensembles. The best force field / sampling combination was sufficiently accurate to predict that the solvated peptide fold is less ordered than in the crystallographic state, which was subsequently confirmed via circular dichroism and spectrofluorometric measurements. Finally, the conformational landscape of a mutant incapable of membrane fusion was significantly shallower than wild-type variants, suggesting that dynamics should be considered when therapeutically targeting FP epitopes. PMID:26785994

  1. Gallium arsenide based surface plasmon resonance for glucose monitoring

    NASA Astrophysics Data System (ADS)

    Patil, Harshada; Sane, Vani; Sriram, G.; Indumathi, T. S; Sharan, Preeta

    2015-07-01

    Recent trends in the semiconductor and microwave industries have enabled the development of scalable microfabrication technology that delivers superior performance compared with its counterparts. Surface Plasmon Resonance (SPR) based biosensors are a special class of optical sensors that respond to electromagnetic waves. A bio-molecular recognition element immobilized on the SPR sensor surface layer reveals a characteristic interaction with various sample solutions during the passage of light. The present work revolves around developing painless glucose monitoring systems using glucose-containing fluids such as saliva, urine, sweat, or tears instead of blood samples. Non-invasive glucose monitoring has long been simulated using label-free detection mechanisms, and the same concept is adopted here. In label-free detection, target molecules are not labeled or altered and are detected in their natural forms. Label-free detection involves measuring the refractive index (RI) change induced by molecular interactions; these interactions relate to the sample concentration or surface density rather than the total sample mass. Simulation shows that the results obtained are highly accurate and sensitive. The structure used here is an SPR sensor based on a channel waveguide. The tools used for simulation are RSOFT FULLWAVE, MEEP, and MATLAB.

  2. A Statistics-Based Cracking Criterion of Resin-Bonded Silica Sand for Casting Process Simulation

    NASA Astrophysics Data System (ADS)

    Wang, Huimin; Lu, Yan; Ripplinger, Keith; Detwiler, Duane; Luo, Alan A.

    2017-02-01

    Cracking of sand molds/cores can result in many casting defects such as veining. A robust cracking criterion is needed in casting process simulation for predicting and controlling such defects. A cracking probability map, relating fracture stress and effective volume, was proposed for resin-bonded silica sand based on Weibull statistics. Three-point bending test results of sand samples were used to generate the cracking map and set up a safety line as the cracking criterion. Tensile test results confirmed the accuracy of the safety line for cracking prediction. A laboratory casting experiment was designed and carried out to predict cracking of a cup mold during aluminum casting. The stress-strain behavior and the effective volume of the cup molds were calculated using the finite element analysis code ProCAST®. Furthermore, an energy dispersive spectroscopy fractographic examination of the sand samples confirmed the binder cracking in resin-bonded silica sand.
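
    The weakest-link statistics behind such a cracking map can be sketched in a few lines. The Weibull modulus and reference values below are illustrative placeholders, not the fitted parameters from the paper; the point is only that, at a fixed stress, the predicted cracking probability rises with the effective volume, which is what the map and safety line encode.

```python
import numpy as np

def cracking_probability(stress, v_eff, m=8.0, sigma0=4.0e6, v0=1.0e-6):
    """Weibull weakest-link failure probability for a stressed sand volume.

    stress : maximum principal stress in the sand [Pa]
    v_eff  : effective (stressed) volume [m^3]
    m, sigma0, v0 : Weibull modulus and reference strength/volume
                    (illustrative values, not fitted to any real sand)
    """
    return 1.0 - np.exp(-(v_eff / v0) * (stress / sigma0) ** m)

# At the same stress, a larger effective volume means a higher cracking risk
for v_eff in (1e-6, 1e-5, 1e-4):
    print(f"V_eff = {v_eff:.0e} m^3 -> P_crack = {cracking_probability(3.5e6, v_eff):.3f}")
```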

  3. Molecular dynamics coupled with a virtual system for effective conformational sampling.

    PubMed

    Hayami, Tomonori; Kasahara, Kota; Nakamura, Haruki; Higo, Junichi

    2018-07-15

    An enhanced conformational sampling method is proposed: virtual-system coupled canonical molecular dynamics (VcMD). Although VcMD enhances sampling along a reaction coordinate, the method is free from estimation of a canonical distribution function along that coordinate. It introduces a virtual system that does not necessarily obey a physical law; to enhance sampling, the virtual system couples with the molecular system to be studied. The resultant snapshots produce a canonical ensemble. This method was applied to a system consisting of two short peptides in an explicit solvent. A conventional molecular dynamics simulation, ten times longer than the VcMD run, was performed along with adaptive umbrella sampling. Free-energy landscapes computed from the three simulations mutually converged well. VcMD provided quicker association/dissociation motions of the peptides than conventional molecular dynamics did. The VcMD method is applicable to various complicated systems because of its methodological simplicity. © 2018 Wiley Periodicals, Inc.

  4. Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry.

    PubMed

    Villarrubia, J S; Tondare, V N; Vladár, A E

    2016-01-01

    The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples: mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within roughly 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
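
    A rough skin with a prescribed power spectral density of the kind described above can be synthesized by assigning spectral amplitudes from the target PSD with random phases and inverse Fourier transforming. The sketch below is a one-dimensional analogue (the paper's virtual samples are full 3D surfaces); the power-law PSD, Hurst exponent, and RMS amplitude are illustrative assumptions.

```python
import numpy as np

def rough_profile(n=1024, dx=1.0, hurst=0.7, rms=1.0, seed=0):
    """1-D random rough line with a power-law PSD ~ f^-(1+2H)."""
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(n, d=dx)
    psd = np.zeros_like(f)
    psd[1:] = f[1:] ** (-(1.0 + 2.0 * hurst))        # target power-law spectrum
    phases = rng.uniform(0.0, 2.0 * np.pi, f.size)   # random phases
    spectrum = np.sqrt(psd) * np.exp(1j * phases)
    z = np.fft.irfft(spectrum, n=n)                  # real-space roughness
    return rms * z / z.std()                         # scale to the desired RMS

skin = rough_profile()   # would then be wrapped around the smooth line feature
```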

  5. The SELGIFS data challenge: generating synthetic observations of CALIFA galaxies from hydrodynamical simulations

    NASA Astrophysics Data System (ADS)

    Guidi, G.; Casado, J.; Ascasibar, Y.; García-Benito, R.; Galbany, L.; Sánchez-Blázquez, P.; Sánchez, S. F.; Rosales-Ortega, F. F.; Scannapieco, C.

    2018-06-01

    In this work we present a set of synthetic observations that mimic the properties of the Integral Field Spectroscopy (IFS) survey CALIFA, generated using radiative transfer techniques applied to hydrodynamical simulations of galaxies in a cosmological context. The simulated spatially-resolved spectra include stellar and nebular emission, kinematic broadening of the lines, and dust extinction and scattering. The results of the radiative transfer simulations have been post-processed to reproduce the main properties of the CALIFA V500 and V1200 observational setups. The data has been further formatted to mimic the CALIFA survey in terms of field of view size, spectral range and sampling. We have included the effect of the spatial and spectral Point Spread Functions affecting CALIFA observations, and added detector noise after characterizing it on a sample of 367 galaxies. The simulated datacubes are suited to be analysed by the same algorithms used on real IFS data. In order to provide a benchmark to compare the results obtained applying IFS observational techniques to our synthetic datacubes, and test the calibration and accuracy of the analysis tools, we have computed the spatially-resolved properties of the simulations. Hence, we provide maps derived directly from the hydrodynamical snapshots or the noiseless spectra, in a way that is consistent with the values recovered by the observational analysis algorithms. Both the synthetic observations and the product datacubes are public and can be found in the collaboration website http://astro.ft.uam.es/selgifs/data_challenge/.

  6. Equilibrium simulations of proteins using molecular fragment replacement and NMR chemical shifts.

    PubMed

    Boomsma, Wouter; Tian, Pengfei; Frellsen, Jes; Ferkinghoff-Borg, Jesper; Hamelryck, Thomas; Lindorff-Larsen, Kresten; Vendruscolo, Michele

    2014-09-23

    Methods of protein structure determination based on NMR chemical shifts are becoming increasingly common. The most widely used approaches adopt the molecular fragment replacement strategy, in which structural fragments are repeatedly reassembled into different complete conformations in molecular simulations. Although these approaches are effective in generating individual structures consistent with the chemical shift data, they do not enable the sampling of the conformational space of proteins with correct statistical weights. Here, we present a method of molecular fragment replacement that makes it possible to perform equilibrium simulations of proteins, and hence to determine their free energy landscapes. This strategy is based on the encoding of the chemical shift information in a probabilistic model in Markov chain Monte Carlo simulations. First, we demonstrate that with this approach it is possible to fold proteins to their native states starting from extended structures. Second, we show that the method satisfies the detailed balance condition and hence it can be used to carry out an equilibrium sampling from the Boltzmann distribution corresponding to the force field used in the simulations. Third, by comparing the results of simulations carried out with and without chemical shift restraints we describe quantitatively the effects that these restraints have on the free energy landscapes of proteins. Taken together, these results demonstrate that the molecular fragment replacement strategy can be used in combination with chemical shift information to characterize not only the native structures of proteins but also their conformational fluctuations.

  7. Effects of forcefield and sampling method in all-atom simulations of inherently disordered proteins: Application to conformational preferences of human amylin

    PubMed Central

    Peng, Enxi; Todorova, Nevena

    2017-01-01

    Although several computational modelling studies have investigated the conformational behaviour of inherently disordered protein (IDP) amylin, discrepancies in identifying its preferred solution conformations still exist between various forcefields and sampling methods used. Human islet amyloid polypeptide has long been a subject of research, both experimentally and theoretically, as the aggregation of this protein is believed to be the lead cause of type-II diabetes. In this work, we present a systematic forcefield assessment using one of the most advanced non-biased sampling techniques, Replica Exchange with Solute Tempering (REST2), by comparing the secondary structure preferences of monomeric amylin in solution. This study also aims to determine the ability of common forcefields to sample a transition of the protein from a helical membrane bound conformation into the disordered solution state of amylin. Our results demonstrated that the CHARMM22* forcefield showed the best ability to sample multiple conformational states inherent for amylin. It is revealed that REST2 yielded results qualitatively consistent with experiments and in quantitative agreement with other sampling methods, however far more computationally efficiently and without any bias. Therefore, combining an unbiased sampling technique such as REST2 with a vigorous forcefield testing could be suggested as an important step in developing an efficient and robust strategy for simulating IDPs. PMID:29023509

  8. Effects of forcefield and sampling method in all-atom simulations of inherently disordered proteins: Application to conformational preferences of human amylin.

    PubMed

    Peng, Enxi; Todorova, Nevena; Yarovsky, Irene

    2017-01-01

    Although several computational modelling studies have investigated the conformational behaviour of inherently disordered protein (IDP) amylin, discrepancies in identifying its preferred solution conformations still exist between various forcefields and sampling methods used. Human islet amyloid polypeptide has long been a subject of research, both experimentally and theoretically, as the aggregation of this protein is believed to be the lead cause of type-II diabetes. In this work, we present a systematic forcefield assessment using one of the most advanced non-biased sampling techniques, Replica Exchange with Solute Tempering (REST2), by comparing the secondary structure preferences of monomeric amylin in solution. This study also aims to determine the ability of common forcefields to sample a transition of the protein from a helical membrane bound conformation into the disordered solution state of amylin. Our results demonstrated that the CHARMM22* forcefield showed the best ability to sample multiple conformational states inherent for amylin. It is revealed that REST2 yielded results qualitatively consistent with experiments and in quantitative agreement with other sampling methods, however far more computationally efficiently and without any bias. Therefore, combining an unbiased sampling technique such as REST2 with a vigorous forcefield testing could be suggested as an important step in developing an efficient and robust strategy for simulating IDPs.

  9. Simulation of ICESat-2 canopy height retrievals for different ecosystems

    NASA Astrophysics Data System (ADS)

    Neuenschwander, A. L.

    2016-12-01

    Slated for launch in late 2017 (or early 2018), the ICESat-2 satellite will provide a global distribution of geodetic measurements, from a space-based laser altimeter, of both the terrain surface and relative canopy heights, benefiting society through a variety of applications ranging from improved global digital terrain models to distributions of above-ground vegetation structure. The ATLAS instrument designed for ICESat-2 will utilize a different technology than that found on most laser mapping systems. The photon-counting technology of the ATLAS instrument onboard ICESat-2 will record the arrival time associated with a single photon detection. That detection can occur anywhere within the vertical distribution of the reflected signal, that is, anywhere within the vertical distribution of the canopy. This uncertainty about where within the vegetation layer the photon will be returned from is referred to as the vertical sampling error. Preliminary simulation studies to estimate vertical sampling error have been conducted for several ecosystems, including woodland savanna, montane conifers, temperate hardwoods, tropical forest, and boreal forest. The results from these simulations indicate that the canopy heights reported on the ATL08 data product will underestimate the top canopy height in the range of 1-4 m. Although simulation results indicate that ICESat-2 will underestimate top canopy height, there is a strong correlation between ICESat-2 heights and relative canopy height metrics (e.g., RH75, RH90). In tropical forest, simulation results indicate that the ICESat-2 height correlates strongly with RH90. Similarly, in temperate broadleaf forest, the simulated ICESat-2 heights were also strongly correlated with RH90. In boreal forest, the simulated ICESat-2 heights are strongly correlated with RH75 heights. It is hypothesized that the correlations between simulated ICESat-2 heights and canopy height metrics are a function of both canopy cover and vegetation physiology (e.g., leaf size/shape), which contribute to the horizontal and vertical structure of the vegetation.
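
    The relative height metrics mentioned above (RH75, RH90) are simply percentiles of the canopy photon height distribution above the terrain. The sketch below assumes a toy photon cloud and a known ground height; it is a simplified stand-in for the actual ATL08 processing.

```python
import numpy as np

def relative_heights(photon_heights, ground=0.0, percentiles=(75, 90, 98)):
    """Relative canopy height metrics (e.g. RH75, RH90) from canopy photon
    heights; a simplified stand-in for ATL08-style processing."""
    h = np.asarray(photon_heights) - ground
    return {f"RH{p}": np.percentile(h, p) for p in percentiles}

# Toy photon cloud: most returns from within the canopy, few from the very top
rng = np.random.default_rng(1)
canopy_photons = rng.triangular(0.0, 12.0, 18.0, size=200)
print(relative_heights(canopy_photons))
```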

  10. Use of 3H/3He Ages to evaluate and improve groundwater flow models in a complex buried-valley aquifer

    USGS Publications Warehouse

    Sheets, Rodney A.; Bair, E. Scott; Rowe, Gary L.

    1998-01-01

    Combined use of the tritium/helium 3 (3H/3He) dating technique and particle-tracking analysis can improve flow-model calibration. As shown at two sites in the Great Miami buried-valley aquifer in southwestern Ohio, the combined use of 3H/3He age dating and particle tracking led to a lower mean absolute error between measured heads and simulated heads than in the original calibrated models and/or between simulated travel times and 3H/3He ages. Apparent groundwater ages were obtained for water samples collected from 44 wells at two locations where previously constructed finite difference models of groundwater flow were available (Mound Plant and Wright-Patterson Air Force Base (WPAFB)). The two-layer Mound Plant model covers 11 km2 within the buried-valley aquifer. The WPAFB model has three layers and covers 262 km2 within the buried-valley aquifer and adjacent bedrock uplands. Sampled wells were chosen along flow paths determined from potentiometric maps or particle-tracking analyses. Water samples were collected at various depths within the aquifer. In the Mound Plant area, samples used for comparison of 3H/3He ages with simulated travel times were from wells completed in the uppermost model layer. Simulated travel times agreed well with 3H/3He ages. The mean absolute error (MAE) was 3.5 years. Agreement in ages at WPAFB decreased with increasing depth in the system. The MAEs were 1.63, 17.2, and 255 years for model layers 1, 2, and 3, respectively. Discrepancies between the simulated travel times and 3H/3He ages were assumed to be due to improper conceptualization or incorrect parameterization of the flow models. Selected conceptual and parameter modifications to the models resulted in improved agreement between 3H/3He ages and simulated travel times and between measured and simulated heads and flows.

  11. Application of high performance computing for studying cyclic variability in dilute internal combustion engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K

    2015-01-01

    Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse-grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similar to a design-of-experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space will be discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
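
    The mapping-plus-interpolation idea can be illustrated compactly. The sketch below substitutes a cheap analytic function for the expensive single-cycle CFD runs, and uses Latin hypercube sampling with radial-basis-function interpolation as a simpler analogue of the sparse-grid uncertainty-quantification scheme actually used on Titan; all names and parameter ranges are illustrative.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.stats import qmc

def cycle_model(x):
    # Stand-in for an expensive single-cycle CFD run: maps two normalized
    # operating parameters (e.g. EGR fraction, spark timing) to a heat-release metric
    return np.sin(3.0 * x[:, 0]) * np.exp(-x[:, 1]) + 0.1 * x[:, 1]

sampler = qmc.LatinHypercube(d=2, seed=0)   # space-filling design over the parameter space
X_train = sampler.random(64)                # 64 "expensive" single-cycle runs
y_train = cycle_model(X_train)

metamodel = RBFInterpolator(X_train, y_train)   # low-dimensional surrogate

X_test = sampler.random(5)
print("surrogate error:", np.abs(metamodel(X_test) - cycle_model(X_test)))
```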

  12. Influence of Decontaminating Agents and Swipe Materials on Laboratory Simulated Working Surfaces Wet Spilled with Sodium Pertechnetate.

    PubMed

    Akchata, Suman; Lavanya, K; Shivanand, Bhushan

    2017-01-01

    Decontamination of working surfaces after minor spills of sodium pertechnetate is essential for maintaining good radiation safety practices as well as for regulatory compliance. The aim was to observe the influence of decontaminating agents and swipe materials on different types of surfaces used in nuclear medicine laboratory work areas wet-spilled with 99m-technetium (99mTc) sodium pertechnetate. The study used laboratory-simulated working surface materials in an experimental design. Direct decontamination was performed on dust-free, laboratory-simulated new working surfaces [stainless steel, polyvinyl chloride (PVC), Perspex, resin] using four decontaminating agents [tap water, soap water (SW), Radiacwash, and spirit] and four swipe materials [cotton, tissue paper (TP), Whatman paper (WP), adsorbent sheet (AS)], with 10 samples (n = 10) for each group. A parametric two-way analysis of variance with a significance level of 0.005 was used to evaluate statistical differences between the groups of decontaminating agents and swipe materials, and the results are expressed as mean ± SD. The decontamination factor was calculated after five cleanings for each group. In total, 160 sample results were calculated for the four decontaminating agents (tap water, SW, Radiacwash, and spirit), four swipe materials (cotton, TP, WP, and AS), and four commonly used surfaces (stainless steel, PVC, Perspex, resin) by the direct method with 10 samples (n = 10) per group. Tap water was the best decontaminating agent compared with SW, Radiacwash, and spirit for the laboratory-simulated stainless steel, PVC, and Perspex surface materials, whereas for the resin surface material the SW decontaminating agent showed better effectiveness. Cotton was the best swipe material compared with WP, AS, and TP for the stainless steel, PVC, Perspex, and resin laboratory-simulated surface materials. Perspex and stainless steel are the most suitable and recommended laboratory surface materials compared with PVC and resin in nuclear medicine. Radiacwash may show better results for 99mTc-labelled products and other radionuclide contamination on laboratory working surface areas.

  13. Numerical Simulation of Electrical Properties of Carbonate Reservoir Rocks Using µCT Images

    NASA Astrophysics Data System (ADS)

    Colgin, J.; Niu, Q.; Zhang, C.; Zhang, F.

    2017-12-01

    Digital rock physics involves the modern microscopic imaging of geomaterials, digitalization of the microstructure, and numerical simulation of physical properties of rocks. This physics-based approach can give important insight into understanding properties of reservoir rocks, and help reveal the link between intrinsic rock properties and macroscopic geophysical responses. The focus of this study is the simulation of the complex conductivity of carbonate reservoir rocks using reconstructed 3D rock structures from high-resolution X-ray micro computed tomography (µCT). Carbonate core samples with varying lithofacies and pore structures from the Cambro-Ordovician Arbuckle Group and the Upper Pennsylvanian Lansing-Kansas City Group in Kansas are used in this study. The wide variations in pore geometry and connectivity of these samples were imaged using µCT. A two-phase segmentation method was used to reconstruct a digital rock of solid particles and pores. We then calculate the effective electrical conductivity of the digital rock volume using a pore-scale numerical approach. The complex conductivity of geomaterials is influenced by the electrical properties and geometry of each phase, i.e., the solid and fluid phases. In addition, the electrical double layer that forms between the solid and fluid phases can also affect the effective conductivity of the material. In the numerical modeling, the influence of the electrical double layer is quantified by a complex surface conductance and converted to an apparent volumetric complex conductivity of either solid particles or pore fluid. The effective complex conductivity resulting from numerical simulations based on µCT images will be compared to results from laboratory experiments on equivalent rock samples. The imaging and digital segmentation method, assumptions in the numerical simulation, and trends as compared to laboratory results will be discussed. This study will help us understand how microscale physics affects macroscale electrical conductivity in porous media.

  14. Assessment of the Partially Resolved Numerical Simulation (PRNS) Approach in the National Combustion Code (NCC) for Turbulent Nonreacting and Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2008-01-01

    This paper describes an approach which aims at bridging the gap between the traditional Reynolds-averaged Navier-Stokes (RANS) approach and the traditional large eddy simulation (LES) approach. It has the characteristics of the very large eddy simulation (VLES) and we call this approach the partially-resolved numerical simulation (PRNS). Systematic simulations using the National Combustion Code (NCC) have been carried out for fully developed turbulent pipe flows at different Reynolds numbers to evaluate the PRNS approach. Also presented are the sample results of two demonstration cases: nonreacting flow in a single injector flame tube and reacting flow in a Lean Direct Injection (LDI) hydrogen combustor.

  15. Evaluation of induced color changes in chicken breast meat during simulation of pink color defect.

    PubMed

    Holownia, K; Chinnan, M S; Reynolds, A E; Koehler, P E

    2003-06-01

    The objective of the study was to establish a pink threshold and simulate the pink defect in cooked chicken breast meat with treatment combinations that would induce significant changes in the color of raw and cooked meat. The subjective pink threshold used in judging pink discoloration was established at a* = 3.8. Samples of three color groups (normal, lighter than normal, and darker than normal) of boneless, skinless chicken breast muscles were selected based on instrumental color values. The in situ changes were induced using sodium chloride, sodium tripolyphosphate, sodium erythorbate, and sodium nitrite at two levels: present and not present. Fillets in all treatments were subjected to individual injections, followed by tumbling, cooking, and chilling. Samples were analyzed for color [lightness (L*), red/green axis (a*), yellow/blue axis (b*)] and reflectance spectra. Simulation of the pink defect was achieved in eight of the 16 treatment combinations when sodium nitrite was present and in an additional two treatment combinations when it was absent. Pinking in cooked samples was affected (P < 0.05) by L* of raw meat color. Results confirmed that it was possible to simulate the undesired pinking in cooked chicken white meat when in situ conditions were induced by sodium chloride, sodium tripolyphosphate, and sodium nitrite. The continuation of the simulation study can aid in developing alternative processing methods to eliminate potential pink defects.

  16. Resonant Column Tests and Nonlinear Elasticity in Simulated Rocks

    NASA Astrophysics Data System (ADS)

    Sebastian, Resmi; Sitharam, T. G.

    2018-01-01

    Rocks are generally regarded as linearly elastic even though the manifestations of nonlinearity are prominent. The variations of elastic constants with varying strain levels and stress conditions, disagreement between static and dynamic moduli, etc., are some of the examples of nonlinear elasticity in rocks. The grain-to-grain contact, presence of pores and joints along with other compliant features induce the nonlinear behavior in rocks. The nonlinear elastic behavior of rocks is demonstrated through resonant column tests and numerical simulations in this paper. Resonant column tests on intact and jointed gypsum samples across varying strain levels have been performed in laboratory and using numerical simulations. The paper shows the application of resonant column apparatus to obtain the wave velocities of stiff samples at various strain levels under long wavelength condition, after performing checks and incorporating corrections to the obtained resonant frequencies. The numerical simulation and validation of the resonant column tests using distinct element method are presented. The stiffness reductions of testing samples under torsional and flexural vibrations with increasing strain levels have been analyzed. The nonlinear elastic behavior of rocks is reflected in the results, which is enhanced by the presence of joints. The significance of joint orientation and influence of joint spacing during wave propagation have also been assessed and presented using the numerical simulations. It has been found that rock joints also exhibit nonlinear behavior within the elastic limit.

  17. Comparison of competing segmentation standards for X-ray computed tomographic imaging using Lattice Boltzmann techniques

    NASA Astrophysics Data System (ADS)

    Larsen, J. D.; Schaap, M. G.

    2013-12-01

    Recent advances in computing technology and experimental techniques have made it possible to observe and characterize fluid dynamics at the micro-scale. Many computational methods exist that can adequately simulate fluid flow in porous media. Lattice Boltzmann methods provide the distinct advantage of tracking particles at the microscopic level and returning macroscopic observations. While experimental methods can accurately measure macroscopic fluid dynamics, computational efforts can be used to predict and gain insight into fluid dynamics by utilizing thin sections or computed micro-tomography (CMT) images of core sections. Although substantial efforts have been made to advance non-invasive imaging methods such as CMT, fluid dynamics simulations, and microscale analysis, a true three-dimensional image segmentation technique has not been developed until recently. Many competing segmentation techniques are utilized in industry and research settings with varying results. In this study, the lattice Boltzmann method is used to simulate Stokes flow in a macroporous soil column. Two-dimensional CMT images were used to reconstruct a three-dimensional representation of the original sample. Six competing segmentation standards were used to binarize the CMT volumes, providing the distinction between solid phase and pore space. The permeability of the reconstructed samples was calculated, with Darcy's law, from lattice Boltzmann simulations of fluid flow in the samples. We compare simulated permeability from differing segmentation algorithms to experimental findings.
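
    As one example of a competing global segmentation standard, Otsu's method picks the grayscale threshold that maximizes the between-class variance of the image histogram. The numpy-only sketch below applies it to a synthetic CMT-like intensity distribution and reports the resulting porosity; the intensity distributions are assumed for illustration, and the study's six standards are not all histogram-based.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Otsu's global threshold: maximizes between-class variance."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)              # cumulative probability of class 0 (pore)
    w1 = 1.0 - w0                  # probability of class 1 (solid)
    mu = np.cumsum(p * centers)    # cumulative first moment
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu[-1] * w0 - mu) ** 2 / (w0 * w1)
    var_between[(w0 <= 0) | (w1 <= 0)] = 0.0
    return centers[np.argmax(var_between)]

# Synthetic grayscale "CMT slice": dark pore voxels, bright solid voxels
rng = np.random.default_rng(0)
voxels = np.concatenate([rng.normal(60.0, 10.0, 3000),    # pores
                         rng.normal(160.0, 15.0, 7000)])  # solid grains
t = otsu_threshold(voxels)
print("threshold:", t, "porosity:", np.mean(voxels < t))
```

    A different standard shifts the threshold, which changes the binarized pore space and hence the permeability the lattice Boltzmann simulation recovers; that sensitivity is exactly what the comparison above quantifies.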

  18. Determining the strengths of HCP slip systems using harmonic analyses of lattice strain distributions

    DOE PAGES

    Dawson, Paul R.; Boyce, Donald E.; Park, Jun-Sang; ...

    2017-10-15

    A robust methodology is presented to extract slip system strengths from lattice strain distributions for polycrystalline samples obtained from high-energy x-ray diffraction (HEXD) experiments with in situ loading. The methodology consists of matching the evolution of coefficients of a harmonic expansion of the distributions from simulation to the coefficients derived from measurements. Simulation results are generated via finite element simulations of virtual polycrystals that are subjected to the loading history applied in the HEXD experiments. Advantages of the methodology include: (1) its ability to utilize extensive data sets generated by HEXD experiments; (2) its ability to capture trends in distributions that may be noisy (both measured and simulated); and (3) its sensitivity to the ratios of the family strengths. The approach is used to evaluate the slip system strengths of Ti-6Al-4V using samples having relatively equiaxed grains. These strength estimates are compared to values in the literature.
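
    The matching step can be pictured with a one-dimensional analogue: expand a measured lattice-strain distribution in cosine harmonics by least squares and compare the coefficients with those from simulation. The real methodology uses harmonic expansions over orientation space for a full polycrystal; the angles, strain values, and harmonic order below are illustrative assumptions.

```python
import numpy as np

def harmonic_coefficients(eta, strain, n_max=4):
    """Least-squares cosine-harmonic expansion of a strain distribution
    sampled at azimuthal angles eta (radians)."""
    A = np.column_stack([np.cos(2 * n * eta) for n in range(n_max + 1)])
    coef, *_ = np.linalg.lstsq(A, strain, rcond=None)
    return coef

eta = np.linspace(0.0, np.pi, 37)
measured = 1e-3 * (1.0 + 0.4 * np.cos(2 * eta) - 0.1 * np.cos(4 * eta))
c_meas = harmonic_coefficients(eta, measured)
print(c_meas)   # recovers [1.0e-3, 0.4e-3, -0.1e-3, ~0, ~0]

# Strength identification then amounts to adjusting trial slip-system
# strengths in the virtual polycrystal until the simulated coefficients
# c_sim minimize, e.g., np.sum((c_meas - c_sim) ** 2) over the load history.
```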

  19. Helium-4 Experiments near T-lambda in a Low-Gravity Simulator

    NASA Technical Reports Server (NTRS)

    Liu, Yuanming; Larson, Melora; Israelsson, Ulf

    2000-01-01

    We report our studies of gravity cancellation in a liquid helium sample cell along the lambda-line using a low-gravity simulator facility. The simulator consists of a superconducting magnet capable of producing B(dB/dz) = 22 T²/cm. We have verified experimentally that the simulator can cancel gravity to about 0.01g in a cylindrical sample volume of 0.5 cm in diameter and 0.5 cm in height. This allows us to approach more closely the superfluid transition without entering the normal-superfluid two phase region induced by gravity. We also present the measurements of T-c(Q,P): depression of the superfluid transition temperature by a heat current (Q) along the lambda-line (P). The results are consistent with the Renormalization-group theory calculation. Measurements of thermal expansion coefficient in a heat current will also be discussed. The work has been carried out by JPL, California Institute of Technology under contract to NASA.

  20. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods, and without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation proves that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
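
    A minimal version of such an error simulation, assuming Thurstone Case V scaling (z-scores of the choice proportions averaged across stimuli), is sketched below. The true scale values, clipping bounds, and replication count are illustrative; the output reproduces the qualitative finding that the scaling error shrinks as the sampling size grows.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def scaling_error(true_scale, n_obs, n_rep=500):
    """Average std. dev. of Thurstone Case V scale-value errors when each
    pair is judged by n_obs observers whose choice errors are binomial."""
    true_scale = true_scale - true_scale.mean()
    p_true = norm.cdf(true_scale[:, None] - true_scale[None, :])
    errs = []
    for _ in range(n_rep):
        p = rng.binomial(n_obs, p_true) / n_obs
        np.fill_diagonal(p, 0.5)               # self-comparisons are ties
        z = norm.ppf(np.clip(p, 0.01, 0.99))   # avoid infinite z-scores
        recovered = z.mean(axis=1)             # Case V scale values
        errs.append(np.std(recovered - true_scale))
    return np.mean(errs)

# Error grows as the sampling size shrinks
for n_obs in (10, 30, 100):
    print(n_obs, scaling_error(np.linspace(0.0, 2.0, 6), n_obs))
```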

  1. Digital holographic microscopy long-term and real-time monitoring of cell division and changes under simulated zero gravity.

    PubMed

    Pan, Feng; Liu, Shuo; Wang, Zhe; Shang, Peng; Xiao, Wen

    2012-05-07

    Long-term and real-time monitoring of cell division and changes of osteoblasts under a simulated zero-gravity condition was achieved by combining digital holographic microscopy (DHM) with a superconducting magnet (SM). The SM can generate different magnetic force fields in a cylindrical cavity, where the gravitational force on biological samples can be canceled at a special position by a high magnetic force; the specimens are thereby levitated in a simulated zero-gravity environment. The DHM was modified to fit the SM by using single-mode optical fibers and a vertically configured jig designed to hold specimens and integrate the optical devices in the magnet's bore. The results present the first phase images of living cells undergoing dynamic divisions and changes under a simulated zero-gravity environment for a period of 10 hours. The experiments demonstrated that the SM-compatible DHM setup can provide a highly efficient and versatile method for research on the effects of microgravity on biological samples.

  2. Sampling considerations for disease surveillance in wildlife populations

    USGS Publications Warehouse

    Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.

    2008-01-01

    Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.
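
    The contrast between probability and convenience samples can be reproduced with a toy landscape simulation in a few lines. The population size, clustered infection risk, and "accessible near a road" rule below are illustrative assumptions, not the deer/CWD model of the paper; the point is that the simple random sample is unbiased while the convenience sample misses the disease cluster entirely.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic landscape: 10,000 animals in the unit square; infection risk
# is clustered in the western strip (illustrative values only)
n = 10_000
xy = rng.uniform(size=(n, 2))
p_infect = np.where(xy[:, 0] < 0.3, 0.15, 0.02)
infected = rng.random(n) < p_infect
print("true prevalence:  ", infected.mean())

# Probability sample: simple random sample of 500 animals (unbiased)
srs = rng.choice(n, size=500, replace=False)
print("SRS estimate:     ", infected[srs].mean())

# Convenience sample: only animals near an eastern road are accessible
accessible = np.where(xy[:, 0] > 0.7)[0]
conv = rng.choice(accessible, size=500, replace=False)
print("convenience estim:", infected[conv].mean())   # coverage error biases this low
```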

  3. Performance evaluation of digital phase-locked loops for advanced deep space transponders

    NASA Technical Reports Server (NTRS)

    Nguyen, T. M.; Hinedi, S. M.; Yeh, H.-G.; Kyriacou, C.

    1994-01-01

    The performances of the digital phase-locked loops (DPLLs) for the advanced deep-space transponders (ADTs) are investigated. The DPLLs considered in this article are derived from the analog phase-locked loop currently employed by the NASA standard deep space transponder, using S-domain to Z-domain mapping techniques. Three mappings are used to develop digital approximations of the standard deep space analog phase-locked loop, namely the bilinear transformation (BT), impulse invariant transformation (IIT), and step invariant transformation (SIT) techniques. The performance in terms of the closed-loop phase and magnitude responses, carrier tracking jitter, and response of the loop to the phase offset (the difference between the incoming phase and the reference phase) is evaluated for each digital approximation. Theoretical results for the carrier tracking jitter in the command-on and command-off cases are then validated by computer simulation. Both theoretical and computer simulation results show that at high sampling frequency, the DPLLs approximated by all three transformations have the same tracking jitter. However, at low sampling frequency, the digital approximation using BT outperforms the others. The minimum sampling frequency for adequate tracking performance is determined for each digital approximation of the analog loop. In addition, computer simulation shows that the DPLL developed by BT provides a faster response to the phase offset than IIT and SIT.
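
    The bilinear transformation step can be sketched with scipy, assuming a generic type-II analog loop filter F(s) = (1 + tau2*s)/(tau1*s); the time constants below are placeholders rather than the ADT's actual loop parameters.

```python
from scipy.signal import bilinear

tau1, tau2 = 1.0e-3, 2.0e-4   # loop-filter time constants (placeholders)
b_s = [tau2, 1.0]             # numerator of F(s), descending powers of s
a_s = [tau1, 0.0]             # denominator of F(s)

for fs in (1.0e4, 1.0e5, 1.0e6):   # candidate sampling frequencies [Hz]
    b_z, a_z = bilinear(b_s, a_s, fs=fs)
    print(f"fs = {fs:.0e} Hz -> b = {b_z}, a = {a_z}")

# As fs grows, the digital coefficients converge toward the analog behavior,
# consistent with the three transformations agreeing at high sampling
# frequency and diverging at low fs.
```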

  4. The effect of dose enhancement near metal interfaces on synthetic diamond based X-ray dosimeters

    NASA Astrophysics Data System (ADS)

    Alamoudi, D.; Lohstroh, A.; Albarakaty, H.

    2017-11-01

    This study investigates the effects of dose enhancement on the photocurrent performance at metallic interfaces in synthetic diamond detector-based X-ray dosimeters as a function of bias voltage. Monte Carlo (MC) simulations with the BEAMnrc code were carried out to simulate the dose enhancement factor (DEF) and compared against the equivalent photocurrent ratio from experimental investigations. The MC simulation results show that the sensitive region for the absorbed dose distribution covers a few micrometers' distance from the interface. Experimentally, two single-crystal (SC) and one polycrystalline (PC) synthetic diamond samples were fabricated into detectors with carbon-based electrodes by boron and carbon ion implantation. Subsequently, the samples were each mounted inside a tissue-equivalent encapsulation to minimize unintended fluence perturbation. Dose enhancement was generated by placing copper, lead, or gold near the active volume of the detectors using 50 kVp and 100 kVp X-rays relevant to medical dosimetry. The results show enhancement in the detectors' photocurrent performance when different metals are butted up to the diamond bulk, as expected. The variation in the photocurrent measurement depends on the type of diamond sample, the electrode fabrication, and the applied bias voltage, indicating that the dose enhancement near the detector may modify its electronic performance.

  5. Performance of a Heating Block System Designed for Studying the Heat Resistance of Bacteria in Foods

    NASA Astrophysics Data System (ADS)

    Kou, Xiao-Xi; Li, Rui; Hou, Li-Xia; Huang, Zhi; Ling, Bo; Wang, Shao-Jin

    2016-07-01

    Knowledge of bacteria's heat resistance is essential for developing effective thermal treatments, and choosing an appropriate test method is important for determining that resistance accurately. Although the heating rate is a major factor influencing the thermo-tolerance of bacteria, it cannot be controlled in water or oil bath methods because it depends mainly on the sample's thermal properties. A heating block system (HBS) was designed to regulate the heating rates in liquid, semi-solid, and solid foods using a temperature controller. Distilled water, apple juice, mashed potato, almond powder, and beef were selected to evaluate the HBS's performance by experiment and computer simulation. The results showed that heating rates of 1, 5, and 10 °C/min with final set-point temperatures and holding times could be easily and precisely achieved in the five selected food materials. Good agreement in sample central temperature profiles was obtained under various heating rates between experiment and simulation. The experimental and simulated results showed that the HBS could provide a sufficiently uniform heating environment in food samples. The effect of heating rate on bacterial thermal resistance was evaluated with the HBS. The system may hold potential for rapid and accurate assessments of bacteria's thermo-tolerance.

  6. Mapping of local argon impingement on a virtual surface: an insight for gas injection during FEBID

    NASA Astrophysics Data System (ADS)

    Wanzenboeck, H. D.; Hochleitner, G.; Mika, J.; Shawrav, M. M.; Gavagnin, M.; Bertagnolli, E.

    2014-12-01

    During the last decades, focused electron beam induced deposition (FEBID) has become a successful approach for direct-write fabrication of nanodevices. The deposition technique relies on the precursor supply to the sample surface, which is typically accomplished by a gas injection system using a tube-shaped injector nozzle. This precursor injection strategy implies a position-dependent concentration gradient on the surface, which affects the geometry and chemistry of the final nanodeposit. Although simulations have already proposed the local distribution of nozzle-borne gas molecules impinging on the surface, this isolated step in the FEBID process had never been experimentally measured. This work experimentally investigates the local distribution of impinging gas molecules on the sample plane, isolating the direct impingement component from surface diffusion or precursor depletion by deposition. The experimental setup used in this work maps and quantifies the local impinging rate of argon gas over the sample plane, simulating the conditions experienced by a precursor molecule during FEBID. Argon gas was locally collected with a sniffer tube directly connected to a residual gas analyzer for quantification. The measured distribution of impinging gas molecules showed a strong position dependence: a 300-µm shift of the deposition area away from the impingement center spot resulted in a 50% decrease in the precursor impinging rate on the surface area. With the same parameters, the precursor distribution was also simulated using Monte Carlo software by Friedli and Utke, which showed good correlation between the empirical and simulated precursor distributions. The results presented here underline the importance of controlling the local precursor flux conditions in order to obtain reproducible and comparable deposition results in FEBID.

  7. Development of fire shutters based on numerical optimizations

    NASA Astrophysics Data System (ADS)

    Novak, Ondrej; Kulhavy, Petr; Martinec, Tomas; Petru, Michal; Srb, Pavel

    2018-06-01

    This article deals with a prototype concept, a real experiment, and a numerical simulation of a layered industrial fire shutter based on new insulating composite materials. The real fire shutter was developed and optimized in the laboratory and subsequently tested in a certified test room. A simulation of the whole concept was carried out as a non-premixed combustion process in the commercial finite-volume software PyroSim. The combustion model, based on a stoichiometrically defined mixture of gas and the tested layered samples, showed good agreement with the experimental results, i.e., the thermal distribution inside the shutter and the heat release rate through the sample.

  8. Direct magnetocaloric characterization and simulation of thermomagnetic cycles

    NASA Astrophysics Data System (ADS)

    Porcari, G.; Buzzi, M.; Cugini, F.; Pellicelli, R.; Pernechele, C.; Caron, L.; Brück, E.; Solzi, M.

    2013-07-01

    An experimental setup for the direct measurement of the magnetocaloric effect, capable of simulating high-frequency magnetothermal cycles on laboratory-scale samples, is described. The study of the magnetocaloric properties of working materials under operative conditions is fundamental for the development of innovative devices. Frequency- and time-dependent characterization can provide essential information on intrinsic features such as magnetic-field-induced fatigue in materials undergoing first-order magnetic phase transitions. A full characterization of the adiabatic temperature change performed for a sample of gadolinium across its Curie transition shows good agreement between our results, literature data, and in-field differential scanning calorimetry.

  9. Radio-frequency characteristic variation of interdigital capacitor having multilayer graphene of various widths

    NASA Astrophysics Data System (ADS)

    Lee, Hee-Jo; Hong, Young-Pyo

    2018-03-01

    In this paper, a radio-frequency circuit model of an interdigital capacitor (IDC) with a multilayer graphene (MLG) width variation is proposed. The circuit model with three sample configurations, i.e., a bare IDC, IDC-MLG with a width of 5 μm, and IDC-MLG with a width of 20 μm, is constructed via a fitted method based on the measured samples. The simulated results of the circuit model are validated through the RF characteristics, e.g., the capacitance and the self-resonance frequency, of the measured samples. From the circuit model, all samples show not only a similar capacitance behavior but also an identical self-resonance frequency of 10 GHz. Moreover, the R, L, and C values of MLG with a 5 μm width (MLG with a 20 μm width) alone are approximately 0.8 kΩ (0.5 kΩ), 0.5 nH (0.9 nH), and 0.3 pF (0.1 pF), respectively. As a result, we find that the simulated results are in good agreement with RF characteristics of the measured samples. In the future, we expect that the proposed circuit model of an IDC with MLG will offer assistance with performance predictions of diverse IDC-based 2D material applications, such as biosensors and gas sensors, as well as supercapacitors.
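
    As a quick consistency check of the quoted element values, a series-RLC estimate of the self-resonance frequency, f0 = 1/(2*pi*sqrt(L*C)), can be computed for the MLG branch alone. Note that this covers only the isolated MLG branch, not the full IDC circuit model, so it need not equal the measured 10 GHz.

```python
import math

# Fitted values quoted above for the 5-um-wide MLG branch alone
R, L, C = 0.8e3, 0.5e-9, 0.3e-12   # ohm, henry, farad
f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(f"branch-only self-resonance ~ {f0 / 1e9:.1f} GHz")   # ~13 GHz
```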

  10. Optical simulation of a Popescu-Rohrlich Box

    PubMed Central

    Chu, Wen-Jing; Zong, Xiao-Lan; Yang, Ming; Pan, Guo-Zhu; Cao, Zhuo-Liang

    2016-01-01

    It is well known that the fair-sampling loophole in a Bell test, opened by the selection of the state to be measured, can lead to post-quantum correlations. In this paper, we make the selection of the results after measurement, which opens the fair-sampling loophole too and thus can also lead to post-quantum correlations. This kind of result-selection loophole can be realized by pre- and post-selection processes within the “two-state vector formalism”, and a physical simulation of a Popescu-Rohrlich (PR) box is designed in a linear optical system. The probability distribution of the PR box has a maximal CHSH value of 4, i.e., it can maximally violate the CHSH inequality. Because the “two-state vector formalism” violates information causality, it opens the locality loophole too, which means that this kind of result selection within the “two-state vector formalism” leads to both the fair-sampling loophole and the locality loophole, so we call it a comprehensive loophole in the Bell test. The comprehensive loophole opened by the result selection within the “two-state vector formalism” may be another possible explanation of why post-quantum correlations are incompatible with quantum mechanics and seem not to exist in nature. PMID:27329203
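
    The maximal CHSH value quoted above follows directly from the PR-box definition (outputs satisfying a XOR b = x AND y, with each valid output pair equally likely). The short sketch below computes the four correlators and the CHSH combination exactly:

```python
import itertools

def pr_box(a, b, x, y):
    """Popescu-Rohrlich box: P(a,b|x,y) = 1/2 when a XOR b = x AND y, else 0."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def correlator(x, y):
    # E(x, y) = sum over outputs of (-1)^(a XOR b) * P(a, b | x, y)
    return sum((-1) ** (a ^ b) * pr_box(a, b, x, y)
               for a, b in itertools.product((0, 1), repeat=2))

S = correlator(0, 0) + correlator(0, 1) + correlator(1, 0) - correlator(1, 1)
print(S)   # 4.0 -- the algebraic maximum, beyond Tsirelson's quantum bound 2*sqrt(2)
```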

  11. Optical simulation of a Popescu-Rohrlich Box.

    PubMed

    Chu, Wen-Jing; Zong, Xiao-Lan; Yang, Ming; Pan, Guo-Zhu; Cao, Zhuo-Liang

    2016-06-22

    It is well known that the fair-sampling loophole in a Bell test, opened by the selection of the state to be measured, can lead to post-quantum correlations. In this paper, we make the selection of the results after measurement, which opens the fair-sampling loophole too and thus can also lead to post-quantum correlations. This kind of result-selection loophole can be realized by pre- and post-selection processes within the "two-state vector formalism", and a physical simulation of a Popescu-Rohrlich (PR) box is designed in a linear optical system. The probability distribution of the PR box has a maximal CHSH value of 4, i.e., it can maximally violate the CHSH inequality. Because the "two-state vector formalism" violates information causality, it opens the locality loophole too, which means that this kind of result selection within the "two-state vector formalism" leads to both the fair-sampling loophole and the locality loophole, so we call it a comprehensive loophole in the Bell test. The comprehensive loophole opened by the result selection within the "two-state vector formalism" may be another possible explanation of why post-quantum correlations are incompatible with quantum mechanics and seem not to exist in nature.

  12. A simulative comparison of respondent driven sampling with incentivized snowball sampling--the "strudel effect".

    PubMed

    Gyarmathy, V Anna; Johnston, Lisa G; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A

    2014-02-01

    Respondent driven sampling (RDS) and incentivized snowball sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania ("original sample") to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1-12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called "strudel effect" is discussed in the paper. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. Detecting the sampling rate through observations

    NASA Astrophysics Data System (ADS)

    Shoji, Isao

    2018-09-01

    This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rates from data, and their results show good performance in the detection. In addition, the method is applied to a financial time series sampled on a daily basis and shows that the detected sampling rate differs from the conventional rate.

  14. Survey Sampling of Community College Students: For Better or for Worse.

    ERIC Educational Resources Information Center

    Rasor, Richard E.; Barr, James

    This paper provides an overview of common sampling methods (both the good and the bad) likely to be used in community college self-evaluations and presents the results from several simulated trials. The report begins by reviewing various survey techniques, discussing the negative and positive aspects of each method. The increased accuracy and…

  15. Identifying Issues and Concerns with the Use of Interval-Based Systems in Single Case Research Using a Pilot Simulation Study

    ERIC Educational Resources Information Center

    Ledford, Jennifer R.; Ayres, Kevin M.; Lane, Justin D.; Lam, Man Fung

    2015-01-01

    Momentary time sampling (MTS), whole interval recording (WIR), and partial interval recording (PIR) are commonly used in applied research. We discuss potential difficulties with analyzing data when these systems are used and present results from a pilot simulation study designed to determine the extent to which these issues are likely to be…

  16. Thermospheric Mass Density Specification: Synthesis of Observations and Models

    DTIC Science & Technology

    2013-10-21

    Simulation Experiments (OSSEs) of the column-integrated ratio of atomic oxygen and molecular nitrogen. Note that OSSEs assimilate, for a given...realistic observing system, synthetically generated observational data often sampled from model simulation results, in place of actually observed values...and molecular oxygen mass mixing ratio). Note that in the TIEGCM the molecular nitrogen mass mixing ratio is specified so that the sum of mixing

  17. Feasibility Assessment of CO2 Sequestration and Enhanced Recovery in Gas Shale Reservoirs

    NASA Astrophysics Data System (ADS)

    Vermylen, J. P.; Hagin, P. N.; Zoback, M. D.

    2008-12-01

    CO2 sequestration and enhanced methane recovery may be feasible in unconventional, organic-rich, gas shale reservoirs in which the methane is stored as an adsorbed phase. Previous studies have shown that organic-rich, Appalachian Devonian shales adsorb approximately five times more carbon dioxide than methane at reservoir conditions. However, the enhanced recovery and sequestration concept has not yet been tested for gas shale reservoirs under realistic flow and production conditions. Using the lessons learned from previous studies on enhanced coalbed methane (ECBM) recovery as a starting point, we are conducting laboratory experiments, reservoir modeling, and fluid flow simulations to test the feasibility of sequestration and enhanced recovery in gas shales. Our laboratory work investigates both adsorption and mechanical properties of shale samples to use as inputs for fluid flow simulation. Static and dynamic mechanical properties of shale samples are measured using a triaxial press under realistic reservoir conditions with varying gas saturations and compositions. Adsorption is simultaneously measured using standard, static, volumetric techniques. Permeability is measured using pulse decay methods calibrated to standard Darcy flow measurements. Fluid flow simulations are conducted using the reservoir simulator GEM, which has successfully modeled enhanced recovery in coal. The results of the flow simulation are combined with the laboratory results to determine whether enhanced recovery and CO2 sequestration are feasible in gas shale reservoirs.

  18. Molecular Dynamics Simulations of Intrinsically Disordered Proteins: Force Field Evaluation and Comparison with Experiment.

    PubMed

    Henriques, João; Cragnell, Carolina; Skepö, Marie

    2015-07-14

    An increasing number of studies using molecular dynamics (MD) simulations of unfolded and intrinsically disordered proteins (IDPs) suggest that current force fields sample conformations that are overly collapsed. Here, we study the applicability of several state-of-the-art MD force fields, of the AMBER and GROMOS variety, for the simulation of Histatin 5, a short (24 residues) cationic salivary IDP with antimicrobial and antifungal properties. The quality of the simulations is assessed in three complementary analyses: (i) protein shape and size comparison with recent experimental small-angle X-ray scattering data; (ii) secondary structure prediction; (iii) energy landscape exploration and conformational class analysis. Our results show that, indeed, standard force fields sample conformations that are too compact, being systematically unable to reproduce experimental evidence such as the scattering function, the shape of the protein as compared with the Kratky plot, and intrapeptide distances obtained through the pair distance distribution function, p(r). The consistency of this deviation suggests that the problem is not mainly due to protein-protein or water-water interactions, whose parametrization varies the most between force fields and water models. In fact, as originally proposed in [Best et al., J. Chem. Theory Comput. 2014, 10, 5113-5124], balanced protein-water interactions may be the key to solving this problem. Our simulations using this approach produce results in very good agreement with experiment.

  19. Simulating adsorptive expansion of zeolites: application to biomass-derived solutions in contact with silicalite.

    PubMed

    Santander, Julian E; Tsapatsis, Michael; Auerbach, Scott M

    2013-04-16

    We have constructed and applied an algorithm to simulate the behavior of zeolite frameworks during liquid adsorption. We applied this approach to compute the adsorption isotherms of furfural-water and hydroxymethyl furfural (HMF)-water mixtures adsorbing in silicalite zeolite at 300 K for comparison with experimental data. We modeled these adsorption processes under two different statistical mechanical ensembles: the grand canonical (V-Nz-μg-T or GC) ensemble keeping volume fixed, and the P-Nz-μg-T (osmotic) ensemble allowing volume to fluctuate. To optimize accuracy and efficiency, we compared pure Monte Carlo (MC) sampling to hybrid MC-molecular dynamics (MD) simulations. For the external furfural-water and HMF-water phases, we assumed the ideal solution approximation and employed a combination of tabulated data and extended ensemble simulations for computing solvation free energies. We found that MC sampling in the V-Nz-μg-T ensemble (i.e., standard GCMC) does a poor job of reproducing both the Henry's law regime and the saturation loadings of these systems. Hybrid MC-MD sampling of the V-Nz-μg-T ensemble, which includes framework vibrations at fixed total volume, provides better results in the Henry's law region, but this approach still does not reproduce experimental saturation loadings. Pure MC sampling of the osmotic ensemble was found to approach experimental saturation loadings more closely, whereas hybrid MC-MD sampling of the osmotic ensemble quantitatively reproduces such loadings because the MC-MD approach naturally allows for locally anisotropic volume changes wherein some pores expand whereas others contract.
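
    For readers unfamiliar with the volume-fluctuating ensemble, the sketch below shows the textbook Metropolis acceptance rule for an isotropic volume move at constant pressure, the core ingredient that distinguishes osmotic-ensemble sampling from fixed-volume GCMC. It is a generic illustration, not the authors' hybrid MC-MD implementation, and the anisotropic framework moves needed for zeolites are more involved.

        import numpy as np

        def accept_volume_move(dU, V_old, V_new, n_atoms, pressure, beta, rng):
            # Metropolis criterion for an isotropic volume move at constant pressure:
            # accept with probability min(1, exp(-beta*(dU + P*dV) + N*ln(V_new/V_old))).
            # The N*ln(V_new/V_old) term arises from rescaling particle coordinates.
            dV = V_new - V_old
            log_acc = -beta * (dU + pressure * dV) + n_atoms * np.log(V_new / V_old)
            return np.log(rng.random()) < log_acc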

  20. Hot flue-gas spiking and recovery study for tetrachlorodibenzodioxins (TCDD) using Modified Method 5 and SASS (Source Assessment Sampling System) sampling with a simulated incinerator. Final report, May 1981-February 1982

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooke, M.; DeRoos, F.; Rising, B.

    1984-10-01

    The report gives results of an evaluation of the sampling and analysis of ultratrace levels of dibenzodioxins using EPA's recommended source sampling procedures (the Modified Method 5 (MM5) train and the Source Assessment Sampling System (SASS)). A gas-fired combustion system was used to simulate incineration flue gas, and a precision liquid injection system was designed for the program. The precision liquid injector was used to administer dilute solutions of 1,2,3,4-tetrachlorodibenzo-p-dioxin (1,2,3,4-TCDD) directly into a hot (260°C; 500°F) flue gas stream. Injections occurred continuously during the sampling episode so that very low gas-phase concentrations of 1,2,3,4-TCDD were continuously mixed with the flue gases. Recoveries were measured for eight burn experiments. For all but one, the recoveries could be considered quantitative, demonstrating efficient collection by the EPA sampling systems. In one study, the components and connecting lines from a sampling device were analyzed separately to show where the 1,2,3,4-TCDD deposited in the train.

  1. THE CHALLENGE OF DETECTING CLASSICAL SWINE FEVER VIRUS CIRCULATION IN WILD BOAR (SUS SCROFA): SIMULATION OF SAMPLING OPTIONS.

    PubMed

    Sonnenburg, Jana; Schulz, Katja; Blome, Sandra; Staubach, Christoph

    2016-10-01

    Classical swine fever (CSF) is one of the most important viral diseases of domestic pigs ( Sus scrofa domesticus) and wild boar ( Sus scrofa ). For at least 4 decades, several European Union member states were confronted with outbreaks among wild boar and, as it had been shown that infected wild boar populations can be a major cause of primary outbreaks in domestic pigs, strict control measures for both species were implemented. To guarantee early detection and to demonstrate freedom from disease, intensive surveillance is carried out based on a hunting bag sample. In this context, virologic investigations play a major role in the early detection of new introductions and in regions immunized with a conventional vaccine. The required financial resources and personnel for reliable testing are often large, and sufficient sample sizes to detect low virus prevalences are difficult to obtain. We conducted a simulation to model the possible impact of changes in sample size and sampling intervals on the probability of CSF virus detection based on a study area of 65 German hunting grounds. A 5-yr period with 4,652 virologic investigations was considered. Results suggest that low prevalences could not be detected with a justifiable effort. The simulation of increased sample sizes per sampling interval showed only a slightly better performance but would be unrealistic in practice, especially outside the main hunting season. Further studies on other approaches such as targeted or risk-based sampling for virus detection in connection with (marker) antibody surveillance are needed.

  2. The X-IFU end-to-end simulations performed for the TES array optimization exercise

    NASA Astrophysics Data System (ADS)

    Peille, Philippe; Wilms, J.; Brand, T.; Cobo, B.; Ceballos, M. T.; Dauser, T.; Smith, S. J.; Barret, D.; den Herder, J. W.; Piro, L.; Barcons, X.; Pointecouteau, E.; Bandler, S.; den Hartog, R.; de Plaa, J.

    2015-09-01

    The focal plane assembly of the Athena X-ray Integral Field Unit (X-IFU) includes as the baseline an array of ~4000 single size calorimeters based on Transition Edge Sensors (TES). Other sensor array configurations could however be considered, combining TES of different properties (e.g. size). In attempting to improve the X-IFU performance in terms of field of view, count rate performance, and even spectral resolution, two alternative TES array configurations to the baseline have been simulated, each combining a small and a large pixel array. With the X-IFU end-to-end simulator, a sub-sample of the Athena core science goals, selected by the X-IFU science team as potentially driving the optimal TES array configuration, has been simulated for the results to be scientifically assessed and compared. In this contribution, we will describe the simulation set-up for the various array configurations, and highlight some of the results of the test cases simulated.

  3. [Effects of sampling plot number on tree species distribution prediction under climate change].

    PubMed

    Liang, Yu; He, Hong-Shi; Wu, Zhi-Wei; Li, Xiao-Na; Luo, Xu

    2013-05-01

    Based on neutral landscapes with different degrees of landscape fragmentation, this paper studied the effects of sampling plot number on the prediction of tree species distribution at the landscape scale under climate change. Tree species distribution was predicted by a coupled modeling approach that linked an ecosystem process model with a forest landscape model; three alternative scenarios and one reference scenario of sampling plot numbers were assumed. The differences between the three scenarios and the reference scenario were tested under different degrees of landscape fragmentation. The results indicated that the effects of sampling plot number on the prediction of tree species distribution depended on the species' life history attributes. For generalist species, predicting distribution at the landscape scale required more plots. Except for the extreme specialist, the degree of landscape fragmentation also modulated the effect of sampling plot number on the prediction. The effects of sampling plot number on the prediction could also change as the simulation period lengthened; for generalist species, more plots are needed for long-term simulations.

  4. Embedded ensemble propagation for improving performance, portability, and scalability of uncertainty quantification on emerging computational architectures

    DOE PAGES

    Phipps, Eric T.; D'Elia, Marta; Edwards, Harold C.; ...

    2017-04-18

    In this study, quantifying simulation uncertainties is a critical component of rigorous predictive simulation. A key component of this is forward propagation of uncertainties in simulation input data to output quantities of interest. Typical approaches involve repeated sampling of the simulation over the uncertain input data, and can require numerous samples when accurately propagating uncertainties from large numbers of sources. Often simulation processes from sample to sample are similar and much of the data generated from each sample evaluation could be reused. We explore a new method for implementing sampling methods that simultaneously propagates groups of samples together in an embedded fashion, which we call embedded ensemble propagation. We show how this approach takes advantage of properties of modern computer architectures to improve performance by enabling reuse between samples, reducing memory bandwidth requirements, improving memory access patterns, improving opportunities for fine-grained parallelization, and reducing communication costs. We describe a software technique for implementing embedded ensemble propagation based on the use of C++ templates and describe its integration with various scientific computing libraries within Trilinos. We demonstrate improved performance, portability and scalability for the approach applied to the simulation of partial differential equations on a variety of CPU, GPU, and accelerator architectures, including up to 131,072 cores on a Cray XK7 (Titan).
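
    The paper's implementation uses C++ templates inside Trilinos; purely to illustrate the underlying idea of carrying an ensemble dimension through a solver, so that samples share work and memory traffic, here is a minimal NumPy sketch that advances 32 realizations of a 1D heat equation together, with the uncertain diffusivity as the ensemble axis. All names and sizes are hypothetical.

        import numpy as np

        def step_heat_ensemble(u, kappa, dx, dt):
            # One explicit finite-difference step of u_t = kappa*u_xx for a whole
            # ensemble at once. u has shape (n_samples, n_nodes); kappa has shape
            # (n_samples, 1), one uncertain diffusivity per sample.
            lap = np.empty_like(u)
            lap[:, 1:-1] = (u[:, 2:] - 2*u[:, 1:-1] + u[:, :-2]) / dx**2
            lap[:, 0] = lap[:, -1] = 0.0          # fixed (Dirichlet) boundaries
            return u + dt * kappa * lap

        rng = np.random.default_rng(1)
        n_samples, n_nodes = 32, 101
        kappa = rng.uniform(0.5, 1.5, size=(n_samples, 1))   # uncertain input
        x = np.linspace(0.0, 1.0, n_nodes)
        u = np.tile(np.sin(np.pi * x), (n_samples, 1))       # shared initial condition
        for _ in range(200):                                  # all samples advance together
            u = step_heat_ensemble(u, kappa, dx=x[1] - x[0], dt=2e-5)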

  5. Measurements of Regolith Simulant Thermal Conductivity Under Asteroid and Mars Surface Conditions

    NASA Astrophysics Data System (ADS)

    Ryan, A. J.; Christensen, P. R.

    2017-12-01

    Laboratory measurements have been necessary to interpret thermal data of planetary surfaces for decades. We present a novel radiometric laboratory method to determine temperature-dependent thermal conductivity of complex regolith simulants under rough to high vacuum and across a wide range of temperatures. This method relies on radiometric temperature measurements instead of contact measurements, eliminating the need to disturb the sample with thermal probes. We intend to determine the conductivity of grains that are up to 2 cm in diameter and to parameterize the effects of angularity, sorting, layering, composition, and eventually cementation. We present the experimental data and model results for a suite of samples that were selected to isolate and address regolith physical parameters that affect bulk conductivity. Spherical glass beads of various sizes were used to measure the effect of size frequency distribution. Spherical beads of polypropylene and well-rounded quartz sand have respectively lower and higher solid phase thermal conductivities than the glass beads and thus provide the opportunity to test the sensitivity of bulk conductivity to differences in solid phase conductivity. Gas pressure in our asteroid experimental chambers is held at 10^-6 torr, which is sufficient to negate gas thermal conduction in even our coarsest of samples. On Mars, the atmospheric pressure is such that the mean free path of the gas molecules is comparable to the pore size for many regolith particulates. Thus, subtle variations in pore size and/or atmospheric pressure can produce large changes in bulk regolith conductivity. For each sample measured in our martian environmental chamber, we repeat thermal measurement runs at multiple pressures to observe this behavior. Finally, we present conductivity measurements of angular basaltic simulant that is physically analogous to sand and gravel that may be present on Bennu. This simulant was used for OSIRIS-REx TAGSAM Sample Return Arm engineering tests. We measure the original size frequency distribution as well as several sorted size fractions. These results will support the efforts of the OSIRIS-REx team in selecting a site on asteroid Bennu that is safe for the spacecraft and meets grain size requirements for sampling.

  6. A comparison of moment-based methods of estimation for the log Pearson type 3 distribution

    NASA Astrophysics Data System (ADS)

    Koutrouvelis, I. A.; Canavos, G. C.

    2000-06-01

    The log Pearson type 3 distribution is a very important model in statistical hydrology, especially for modeling annual flood series. In this paper we compare the various methods based on moments for estimating quantiles of this distribution. Besides the methods of direct and mixed moments which were found most successful in previous studies and the well-known indirect method of moments, we develop generalized direct moments and generalized mixed moments methods and a new method of adaptive mixed moments. The last method chooses the orders of two moments for the original observations by utilizing information contained in the sample itself. The results of Monte Carlo experiments demonstrated the superiority of this method in estimating flood events of high return periods when a large sample is available and in estimating flood events of low return periods regardless of the sample size. In addition, a comparison of simulation and asymptotic results shows that the adaptive method may be used for the construction of meaningful confidence intervals for design events based on the asymptotic theory even with small samples. The simulation results also point to the specific members of the class of generalized moments estimates which maintain small values for bias and/or mean square error.
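
    As a point of reference for the estimators being compared, the standard indirect method of moments (fit a Pearson type 3 distribution to the log-flows by matching the sample mean, standard deviation, and skew) can be sketched in a few lines of Python. This is the textbook baseline only, not the paper's generalized or adaptive moment estimators, and the data below are synthetic.

        import numpy as np
        from scipy import stats

        def lp3_quantile(flows, return_period):
            # Indirect method of moments for the log Pearson type 3 distribution:
            # fit Pearson III to the log-flows, then transform the quantile back.
            y = np.log10(flows)
            m, s = y.mean(), y.std(ddof=1)
            g = stats.skew(y, bias=False)
            p = 1.0 - 1.0 / return_period          # non-exceedance probability
            yq = stats.pearson3.ppf(p, g, loc=m, scale=s)
            return 10.0 ** yq

        # e.g. a 100-year flood estimate from a hypothetical annual-maximum series
        rng = np.random.default_rng(2)
        flows = 10 ** rng.normal(2.0, 0.25, size=60)
        print(lp3_quantile(flows, 100.0))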

  7. Shock wave perturbation decay in granular materials

    DOE PAGES

    Vogler, Tracy J.

    2015-11-05

    A technique in which the evolution of a perturbation in a shock wave front is monitored as it travels through a sample is applied to granular materials. Although the approach was originally conceived as a way to measure the viscosity of the sample, here it is utilized as a means to probe the deviatoric strength of the material. Initial results for a tungsten carbide powder are presented that demonstrate the approach is viable. Simulations of the experiments using continuum and mesoscale modeling approaches are used to better understand the experiments. The best agreement with the limited experimental data is obtained for the mesoscale model, which has previously been shown to give good agreement with planar impact results. The continuum simulations indicate that the decay of the perturbation is controlled by material strength but is insensitive to the compaction response. Other sensitivities are assessed using the two modeling approaches. The simulations indicate that the configuration used in the preliminary experiments suffers from certain artifacts and should be modified to remove them. As a result, the limitations of the current instrumentation are discussed, and possible approaches to improve it are suggested.

  8. Study of the formation of duricrusts on the martian surface and their effect on sampling equipment

    NASA Astrophysics Data System (ADS)

    Kömle, Norbert; Pitcher, Craig; Gao, Yang; Richter, Lutz

    2017-01-01

    The Powdered Sample Dosing and Distribution System (PSDDS) of the ExoMars rover will be required to handle and contain samples of Mars regolith for long periods of time. Cementation of the regolith, caused by water and salts in the soil, results in clumpy material and a duricrust layer forming on the surface. It is therefore possible that material residing in the sampling system may cement, and could potentially hinder its operation. There has yet to be an investigation into the formation of duricrusts under simulated Martian conditions, or how this may affect the performance of sample handling mechanisms. Therefore experiments have been performed to create a duricrust and to explore the cementation of Mars analogues, before performing a series of tests on a qualification model of the PSDDS under simulated Martian conditions. It was possible to create a consolidated crust of cemented material several millimetres deep, with the material below remaining powder-like. It was seen that due to the very low permeability of the Montmorillonite component material, diffusion of water through the material was quickly blocked, resulting in a sample with an inhomogeneous water content. Additionally, samples with a water mass content of 10% or higher would cement into a single solid piece. Finally, tests with the PSDDS revealed that samples with a water mass content of just 5% created small clumps with significant internal cohesion, blocking the sample funnels and preventing transportation of the material. These experiments have highlighted that the cementation of regolith in Martian conditions must be taken into consideration in the design of sample handling instruments.

  9. Simulations of multi-contrast x-ray imaging using near-field speckles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zdora, Marie-Christine; Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom and Department of Physics & Astronomy, University College London, London, WC1E 6BT; Thibault, Pierre

    2016-01-28

    X-ray dark-field and phase-contrast imaging using near-field speckles is a novel technique that overcomes limitations inherent in conventional absorption x-ray imaging, i.e., poor contrast for features with similar density. Speckle-based imaging yields a wealth of information with a simple setup tolerant to polychromatic and divergent beams, and simple data acquisition and analysis procedures. Here, we present simulation software used to model image formation with the speckle-based technique, and we compare simulated results for a phantom sample with experimental synchrotron data. Thorough simulation of a speckle-based imaging experiment will help in better understanding and optimising the technique itself.

  10. New media simulation stories in nursing education: a quasi-experimental study exploring learning outcomes.

    PubMed

    Webb-Corbett, Robin; Schwartz, Melissa Renee; Green, Bob; Sessoms, Andrea; Swanson, Melvin

    2013-04-01

    New media simulation stories are short multimedia presentations that combine simulation, digital technology, and story branching to depict a variety of healthcare-related scenarios. The purpose of this study was to explore whether learning outcomes were enhanced if students viewed the results of both correct and incorrect nursing actions demonstrated through new media simulation stories. A convenience sample of 109 undergraduate nursing students in a family-centered maternity course participated in the study. Study findings suggest that students who viewed both correct and incorrect depictions of maternity nursing actions scored better on tests than did those students who viewed only correct nursing actions.

  11. Sensitivity of airborne fluorosensor measurements to linear vertical gradients in chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Venable, D. D.; Punjabi, A. R.; Poole, L. R.

    1984-01-01

    A semianalytic Monte Carlo radiative transfer simulation model for airborne laser fluorosensors has been extended to investigate the effects of inhomogeneities in the vertical distribution of phytoplankton concentrations in clear seawater. Simulation results for linearly varying step concentrations of chlorophyll are presented. The results indicate that statistically significant differences can be seen under certain conditions in the water Raman-normalized fluorescence signals between nonhomogeneous and homogeneous cases. A statistical test has been used to establish ranges of surface concentrations and/or vertical gradients in which calibration by surface samples would be inappropriate, and the results are discussed.

  12. The Development of a 3D LADAR Simulator Based on a Fast Target Impulse Response Generation Approach

    NASA Astrophysics Data System (ADS)

    Al-Temeemy, Ali Adnan

    2017-09-01

    A new laser detection and ranging (LADAR) simulator has been developed, using MATLAB and its graphical user interface, to simulate direct-detection time-of-flight LADAR systems and to produce 3D simulated scanning images under a wide variety of conditions. This simulator models each stage from the laser source to data generation and can be considered an efficient simulation tool for developing LADAR systems and their data processing algorithms. The novel approach proposed for this simulator is to generate the actual target impulse response. This approach is fast and able to handle demanding scanning requirements without the loss of fidelity that usually accompanies increases in speed. This leads to a more efficient LADAR simulator and opens up the possibility of simulating LADAR beam propagation more accurately by using a large number of laser footprint samples. The approach is to select only the parts of the target that lie in the laser beam's angular field, by mathematically deriving the required equations and calculating the target angular ranges. The performance of the new simulator has been evaluated under different scanning conditions; the results show significant increases in processing speed in comparison to conventional approaches, which are also used in this study as a point of comparison. The results also show the simulator's ability to reproduce phenomena related to the scanning process, for example, type of noise, scanning resolution, and laser beam width.

  13. Exponential synchronization of neural networks with discrete and distributed delays under time-varying sampling.

    PubMed

    Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian

    2012-09-01

    This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristic of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, and thus the master systems synchronize with the slave systems. The desired sampled-data controller can be obtained by solving a set of linear matrix inequalities (LMIs), which depend upon the maximum sampling interval and the decay rate. The obtained conditions are not only less conservative but also involve fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.

  14. Influence of Decontaminating Agents and Swipe Materials on Laboratory Simulated Working Surfaces Wet Spilled with Sodium Pertechnetate

    PubMed Central

    Akchata, Suman; Lavanya, K; Shivanand, Bhushan

    2017-01-01

    Context: Decontamination of working surfaces after minor spills of sodium pertechnetate is essential for maintaining good radiation safety practices and for regulatory compliance. Aim: To observe the influence of decontaminating agents and swipe materials on different types of surfaces used in nuclear medicine laboratory work areas wet-spilled with 99m-technetium (99mTc) sodium pertechnetate. Settings and Design: Laboratory-simulated working surface materials; experimental study design. Materials and Methods: Direct decontamination of dust-free, laboratory-simulated new working surfaces [stainless steel, polyvinyl chloride (PVC), Perspex, resin] was performed using four decontaminating agents [tap water, soap water (SW), Radiacwash, and spirit] with four swipe materials [cotton, tissue paper (TP), Whatman paper (WP), adsorbent sheet (AS)], with 10 samples (n = 10) per group. Statistical Analysis: A parametric two-way analysis of variance with a significance level of 0.005 was used to evaluate statistical differences between the groups of decontaminating agents and swipe materials; results are expressed as mean ± SD. Results: The decontamination factor was calculated after five cleanings for each group. A total of 160 sample results were obtained from the four decontaminating agents (tap water, SW, Radiacwash, and spirit), four swipe materials (cotton, TP, WP, and AS), and the commonly used surfaces (stainless steel, PVC, Perspex, resin) using the direct method with 10 samples (n = 10) per group. Conclusions: Tap water was the best decontaminating agent compared with SW, Radiacwash, and spirit for the laboratory-simulated stainless steel, PVC, and Perspex surface materials, whereas for the resin surface material the SW decontaminating agent was more effective. Cotton was the best swipe material compared with WP, AS, and TP for the stainless steel, PVC, Perspex, and resin laboratory-simulated surface materials. Perspex and stainless steel are the most suitable and recommended laboratory surface materials, compared with PVC and resin, in nuclear medicine. Radiacwash may show better results for 99mTc-labelled products and other radionuclide contamination on laboratory working surface areas. PMID:28680198

  15. Statistical variances of diffusional properties from ab initio molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    He, Xingfeng; Zhu, Yizhou; Epstein, Alexander; Mo, Yifei

    2018-12-01

    Ab initio molecular dynamics (AIMD) simulation is widely employed in studying diffusion mechanisms and in quantifying diffusional properties of materials. However, AIMD simulations are often limited to a few hundred atoms and a short, sub-nanosecond physical timescale, which leads to models that include only a limited number of diffusion events. As a result, the diffusional properties obtained from AIMD simulations are often plagued by poor statistics. In this paper, we re-examine the process to estimate diffusivity and ionic conductivity from the AIMD simulations and establish the procedure to minimize the fitting errors. In addition, we propose methods for quantifying the statistical variance of the diffusivity and ionic conductivity from the number of diffusion events observed during the AIMD simulation. Since an adequate number of diffusion events must be sampled, AIMD simulations should be sufficiently long and can only be performed on materials with reasonably fast diffusion. We chart the ranges of materials and physical conditions that can be accessible by AIMD simulations in studying diffusional properties. Our work provides the foundation for quantifying the statistical confidence levels of diffusion results from AIMD simulations and for correctly employing this powerful technique.
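
    The practical takeaway, that the statistical error of a diffusivity estimate shrinks roughly with the square root of the number of observed diffusion events, can be sketched as follows. The hop-counting heuristic and the jump length are crude stand-ins for the paper's more careful variance analysis, and all names are hypothetical.

        import numpy as np

        def diffusivity_with_error(msd, t, jump_len, dim=3):
            # Estimate D from the slope of the ensemble-averaged MSD (in units
            # consistent with t), then attach a rough statistical error based on
            # the number of effective diffusion events N ~ MSD_total / jump_len^2,
            # giving a relative error ~ 1/sqrt(N).
            slope = np.polyfit(t, msd, 1)[0]
            D = slope / (2 * dim)
            n_events = max(msd[-1] / jump_len**2, 1.0)
            rel_err = 1.0 / np.sqrt(n_events)
            return D, D * rel_err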

  16. Hankin and Reeves' approach to estimating fish abundance in small streams: Limitations and alternatives

    USGS Publications Warehouse

    Thompson, W.L.

    2003-01-01

    Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream fish studies across North America. However, their population estimator relies on two key assumptions: (1) removal estimates are equal to the true numbers of fish, and (2) removal estimates are highly correlated with snorkel counts within a subset of sampled stream units. Violations of these assumptions may produce suspect results. To determine possible sources of the assumption violations, I used data on the abundance of steelhead Oncorhynchus mykiss from Hankin and Reeves (1988) in a simulation composed of 50,000 repeated, stratified systematic random samples from a spatially clustered distribution. The simulation was used to investigate the effects of a range of removal estimates, from 75% to 100% of true fish abundance, on overall stream fish population estimates. The effects of various categories of removal-estimate-to-snorkel-count correlation levels (r = 0.75-1.0) on fish population estimates were also explored. Simulation results indicated that Hankin and Reeves' approach may produce poor results unless removal estimates exceed at least 85% of the true number of fish within sampled units and unless correlations between removal estimates and snorkel counts are at least 0.90. A potential modification to Hankin and Reeves' approach is to include environmental covariates that affect fish detection rates in the removal model or another mark-recapture model. A potential alternative approach is to use snorkeling combined with line transect sampling to estimate fish densities within stream units. As with any method of population estimation, a pilot study should be conducted to evaluate its usefulness, which requires a known (or nearly so) population of fish to serve as a benchmark for evaluating bias and precision of estimators.
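
    In the same spirit as the repeated-sampling experiment described above, though far smaller and with made-up parameters, the sketch below simulates a ratio-estimator expansion of snorkel counts calibrated by removal estimates and shows how a removal efficiency below 100% propagates into bias in the population estimate.

        import numpy as np

        rng = np.random.default_rng(3)

        def simulate_once(n_units=100, n_calibrate=20, removal_eff=0.85, r_noise=0.1):
            # One replicate of a Hankin-and-Reeves-style estimate: snorkel counts
            # in every unit, removal estimates in a calibration subset, then a
            # ratio estimator to expand the snorkel counts. Illustrative only.
            true = rng.negative_binomial(2, 0.05, n_units)          # clustered abundance
            snorkel = rng.binomial(true, 0.6)                       # partial counts
            idx = rng.choice(n_units, n_calibrate, replace=False)   # calibrated units
            removal = removal_eff * true[idx] * (1 + r_noise * rng.standard_normal(n_calibrate))
            ratio = removal.sum() / max(snorkel[idx].sum(), 1)
            return ratio * snorkel.sum(), true.sum()

        est, truth = zip(*(simulate_once() for _ in range(2000)))
        bias = (np.mean(est) - np.mean(truth)) / np.mean(truth)
        print(f"relative bias with 85% removal efficiency: {bias:+.1%}")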

  17. Mutagenicity of automobile workshop soil leachate and tobacco industry wastewater using the Ames Salmonella fluctuation and the SOS chromotests.

    PubMed

    Okunola, Alabi A; Babatunde, Esan E; Chinwe, Duru; Pelumi, Oyedele; Ramatu, Salihu G

    2016-06-01

    Environmental management of industrial solid wastes and wastewater is an important economic and environmental health problem globally. This study evaluated the mutagenic potential of automobile workshop soil-simulated leachate and tobacco wastewater using the SOS chromotest on Escherichia coli PQ37 and the Ames Salmonella fluctuation test on Salmonella typhimurium strains TA98 and TA100 without metabolic activation. Physicochemical parameters of the samples were also analyzed. The results of the Ames test showed mutagenicity of the test samples. However, TA100 was the more responsive strain for both the simulated leachate and the tobacco wastewater in terms of mutagenic index in the absence of metabolic activation. The SOS chromotest results were in agreement with those of the Ames Salmonella fluctuation test. Nevertheless, the E. coli PQ37 system was slightly more sensitive than the Salmonella assay for detecting genotoxins in the tested samples. Iron, cadmium, manganese, copper, nickel, chromium, arsenic, zinc, and lead contents analyzed in the samples were believed to play a significant role in the observed mutagenicity in the microbial assays. The results of this study showed that the simulated leachate and tobacco wastewater gave a strong indication of genotoxic risk. Further studies would be required in the analytical field in order to identify and quantify other compounds not analyzed in this study, some of which could be responsible for the observed genotoxicity. This will be necessary in order to identify the sources of toxicants and thus to take preventive and/or curative measures to limit the toxicity of these types of wastes. © The Author(s) 2014.

  18. Validated numerical simulation model of a dielectric elastomer generator

    NASA Astrophysics Data System (ADS)

    Foerster, Florentine; Moessinger, Holger; Schlaak, Helmut F.

    2013-04-01

    Dielectric elastomer generators (DEG) produce electrical energy by converting mechanical into electrical energy. Efficient operation requires homogeneous deformation of each single layer. However, due to various internal and external influences, such as supports or the shape of the DEG, the deformation will be inhomogeneous, which negatively affects the amount of generated electrical energy. Optimization of the deformation behavior leads to improved efficiency of the DEG and consequently to higher energy gain. In this work a numerical simulation model of a multilayer dielectric elastomer generator is developed using the FEM software ANSYS. The analyzed multilayer DEG consists of 49 active dielectric layers with layer thicknesses of 50 μm. The elastomer is silicone (PDMS) while the compliant electrodes are made of graphite powder. The simulation needs to include the real material parameters of the PDMS and the graphite electrodes. Therefore, the mechanical and electrical material parameters of the PDMS are determined by experimental investigations of test samples, while the electrode parameters are determined by numerical simulations of test samples. The numerical simulation of the DEG is carried out as a coupled electro-mechanical simulation for the constant-voltage energy harvesting cycle. Finally, the derived numerical simulation model is validated by comparison with analytical calculations and further simulated DEG configurations. The comparison of the results shows good agreement with regard to the deformation of the DEG. Based on the validated model it is now possible to optimize the DEG layout for improved deformation behavior with further simulations.

  19. Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.

    2016-12-01

    We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal in period, we implement a hybrid method that applies the quasi-Newton algorithm to the Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
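
    Stripped of the Gaussian process component, the period search reduces to a dense grid scan with a sinusoidal basis, as in the minimal Python sketch below; the paper's estimator additionally models the stochastic variation with a GP and integrates out nuisance parameters, so this is only the skeleton of the approach.

        import numpy as np

        def estimate_period(t, y, periods):
            # Grid search over trial periods with a constant-plus-sinusoid basis,
            # keeping the period with the smallest residual sum of squares.
            best_p, best_rss = None, np.inf
            for p in periods:
                X = np.column_stack([np.ones_like(t),
                                     np.sin(2 * np.pi * t / p),
                                     np.cos(2 * np.pi * t / p)])
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                r = y - X @ beta
                rss = r @ r
                if rss < best_rss:
                    best_p, best_rss = p, rss
            return best_p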

  20. G-DYN Multibody Dynamics Engine

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, James C.; Broderick, Daniel

    2011-01-01

    G-DYN is a multi-body dynamic simulation software engine that automatically assembles and integrates equations of motion for arbitrarily connected multibody dynamic systems. The algorithm behind G-DYN is based on a primal-dual formulation of the dynamics that captures the position and velocity vectors (primal variables) of each body and the interaction forces (dual variables) between bodies, which are particularly useful for control and estimation analysis and synthesis. It also takes full advantage of the sparse matrix structure resulting from the system dynamics to numerically integrate the equations of motion efficiently. Furthermore, the dynamic model for each body can easily be replaced without re-deriving the overall equations of motion, and the assembly of the equations of motion is done automatically. G-DYN proved an essential software tool in the simulation of spacecraft systems used for small celestial body surface sampling, specifically in simulating touch-and-go (TAG) maneuvers of a robotic sampling system at a comet and asteroid. It is used extensively in validating mission concepts for small body sample return, such as the Comet Odyssey and Galahad New Frontiers proposals.

  1. Mineral assemblage transformation of a metakaolin-based waste form after geopolymer encapsulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Benjamin D.; Neeway, James J.; Snyder, Michelle M. V.

    2016-05-01

    Current plans for nuclear waste vitrification at the Hanford Tank Waste Treatment and Immobilization Plant (WTP) lack the capacity to treat all of the low activity waste (LAW) that is not encapsulated in the vitrified product. Fluidized Bed Steam Reforming (FBSR) is one of the supplemental technologies under consideration to fill this gap. The FBSR process results in a granular product mainly composed of feldspathoid mineral phases that encapsulate the LAW and other contaminants of concern (COCs). In order to better understand the characteristics of the FBSR product, characterization testing has been performed on the granular product as well as the granular product encapsulated in a monolithic geopolymer binder. The non-radioactive simulated tank waste samples created for use in this study are the result of a Department of Energy sponsored Engineering Scale Technology Demonstration (ESTD) conducted in 2008. These samples were created from waste simulant that was chemically shimmed to resemble actual tank waste, and rhenium has been used as a substitute for technetium. Another set of samples was created by the Savannah River Site Bench-Scale Reformer (BSR) using a chemical shim of Savannah River Site Tank 50 waste in order to simulate a blend of 68 Hanford tank wastes. This paper presents results from coal and moisture removal tests along with XRD, SEM, and BET analyses showing that the major mineral components are predominantly sodium aluminosilicate minerals and that the mineral product is highly porous. Results also show that the materials pass the short-term leach tests: the Toxicity Characteristic Leaching Procedure (TCLP) and the Product Consistency Test (PCT).

  2. Optical properties of selected components of mineral dust aerosol processed with organic acids and humic material

    NASA Astrophysics Data System (ADS)

    Alexander, Jennifer M.; Grassian, V. H.; Young, M. A.; Kleiber, P. D.

    2015-03-01

    Visible light scattering phase function and linear polarization profiles of mineral dust components processed with organic acids and humic material are measured, and results are compared to T-matrix simulations of the scattering properties. Processed samples include quartz mixed with humic material, and calcite reacted with acetic and oxalic acids. Clear differences in light scattering properties are observed for all three processed samples when compared to the unprocessed dust or organic salt products. Results for quartz processed with humic acid sodium salt (NaHA) indicate the presence of both internally mixed quartz-NaHA particles and externally mixed NaHA aerosol. Simulations of light scattering suggest that the processed quartz particles become more moderate in shape due to the formation of a coating of humic material over the mineral core. Experimental results for calcite reacted with acetic acid are consistent with an external mixture of calcite and the reaction product, calcium acetate. Modeling of the light scattering properties does not require any significant change to the calcite particle shape distribution although morphology changes cannot be ruled out by our data. It is expected that calcite reacted with oxalic acid will produce internally mixed particles of calcite and calcium oxalate due to the low solubility of the product salt. However, simulations of the scattering for the calcite-oxalic acid system result in rather poor fits to the data when compared to the other samples. The poor fit provides a less accurate picture of the impact of processing in the calcite-oxalic acid system.

  3. Influence of pollution control on lead inhalation bioaccessibility in PM2.5: A case study of 2014 Youth Olympic Games in Nanjing.

    PubMed

    Li, Shi-Wei; Li, Hong-Bo; Luo, Jun; Li, Hui-Ming; Qian, Xin; Liu, Miao-Miao; Bi, Jun; Cui, Xin-Yi; Ma, Lena Q

    2016-09-01

    Pollution controls were implemented to improve the air quality for the 2014 Youth Olympic Games (YOG) in Nanjing. To investigate the influence of pollution control on Pb inhalation bioaccessibility in PM2.5, samples were collected before, during, and after the YOG. The objectives were to identify Pb sources in PM2.5 using the stable isotope fingerprinting technique and to compare Pb inhalation bioaccessibility in PM2.5 using two simulated lung fluids: artificial lysosomal fluid (ALF), which simulates the fluid inside alveolar macrophages at pH 4.5, and Gamble's solution, which simulates interstitial fluid at pH 7.4. The Pb concentration in PM2.5 samples during the YOG (88.2 ng m(-3)) was 44-48% lower than that in non-YOG samples. Based on stable Pb isotope ratios, Pb in YOG samples was mainly from coal combustion, while Pb in non-YOG samples was from coal combustion and smelting activities. While Pb bioaccessibility in YOG samples was lower than that in non-YOG samples (59-79% vs. 55-87%) by ALF, it was higher than that in non-YOG samples (11-29% vs. 5.3-21%) based on Gamble's solution, owing to the lower pH and organic acids in ALF. The different Pb bioaccessibility in PM2.5 between samples resulted from changes in Pb species due to pollution control. PbSO4 was the main Pb species in PM2.5 from coal combustion, which was less soluble in ALF than PbO from smelting activities, but more soluble in Gamble's solution. This study showed that it is important to consider Pb bioaccessibility during pollution control, as source control not only reduced Pb contamination in PM2.5 but also influenced Pb bioaccessibility. Published by Elsevier Ltd.

  4. Accounting for treatment by center interaction in sample size determinations and the use of surrogate outcomes in the pessary for the prevention of preterm birth trial: a simulation study.

    PubMed

    Willan, Andrew R

    2016-07-05

    The Pessary for the Prevention of Preterm Birth Study (PS3) is an international, multicenter, randomized clinical trial designed to examine the effectiveness of the Arabin pessary in preventing preterm birth in pregnant women with a short cervix. During the design of the study two methodological issues regarding power and sample size were raised. Since treatment in the Standard Arm will vary between centers, it is anticipated that so too will the probability of preterm birth in that arm. This will likely result in a treatment by center interaction, and the first issue is how this will affect the sample size requirements. The sample size required to examine the effect of the pessary on the baby's clinical outcome was prohibitively high, so the second issue is how best to examine the effect on clinical outcome. The approaches taken to address these issues are presented. Simulation and sensitivity analysis were used to address the sample size issue. The probability of preterm birth in the Standard Arm was assumed to vary between centers following a Beta distribution with a mean of 0.3 and a coefficient of variation of 0.3. To address the second issue, a Bayesian decision model is proposed that combines the information regarding the between-treatment difference in the probability of preterm birth from PS3 with the data from the Multiple Courses of Antenatal Corticosteroids for Preterm Birth Study that relate preterm birth and perinatal mortality/morbidity. The approach provides a between-treatment comparison with respect to the probability of a bad clinical outcome. The performance of the approach was assessed using simulation and sensitivity analysis. Accounting for a possible treatment by center interaction increased the sample size from 540 to 700 patients per arm for the base case. The sample size requirements increase with the coefficient of variation and decrease with the number of centers. Under the same assumptions used for determining the sample size requirements, the simulated mean probability that the pessary reduces the risk of perinatal mortality/morbidity is 0.98. The simulated mean decreased with the coefficient of variation and increased with the number of clinical sites. Employing simulation and sensitivity analysis is a useful approach for determining sample size requirements while accounting for the additional uncertainty due to a treatment by center interaction. Using a surrogate outcome in conjunction with a Bayesian decision model is an efficient way to compare important clinical outcomes in a randomized clinical trial in situations where the direct approach requires a prohibitively high sample size.
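
    A minimal version of the first simulation can be written directly from the stated assumptions (control-arm preterm-birth probability Beta-distributed across centers with mean 0.3 and coefficient of variation 0.3); the treatment effect, number of centers, and two-proportion z-test below are illustrative choices, not the trial's actual design parameters.

        import numpy as np

        rng = np.random.default_rng(4)

        def power(n_per_arm, n_centers=20, effect=0.10, mean_p=0.30, cv=0.30,
                  n_sims=2000):
            # Simulated power for a two-arm trial when the control-arm event
            # probability varies between centers as a Beta with given mean and CV.
            var = (cv * mean_p) ** 2
            k = mean_p * (1 - mean_p) / var - 1          # solve Beta(a, b) from mean/CV
            a, b = mean_p * k, (1 - mean_p) * k
            m = n_per_arm // n_centers                   # patients per center per arm
            n = m * n_centers
            hits = 0
            for _ in range(n_sims):
                p_ctrl = rng.beta(a, b, n_centers)       # center-specific control risk
                x_c = rng.binomial(m, p_ctrl).sum()
                x_t = rng.binomial(m, np.clip(p_ctrl - effect, 0, 1)).sum()
                p1, p0 = x_t / n, x_c / n
                se = np.sqrt(p1 * (1 - p1) / n + p0 * (1 - p0) / n)
                hits += abs(p1 - p0) / se > 1.96         # two-sided z-test at 5%
            return hits / n_sims

        print(power(540), power(700))   # compare the two candidate sample sizes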

  5. Thermal Conductivity Measurements of Helium 4 Near the Lambda-Transition Using a Magnetostrictive Low Gravity Simulator

    NASA Technical Reports Server (NTRS)

    Larson, Melora; Israelsson, Ulf E.

    1995-01-01

    There has been a recent increase in interest, both experimentally and theoretically, in the study of liquid helium very near the lambda-transition in the presence of a heat current. In traditional ground based experiments there are gravitationally induced pressure variations in any macroscopic helium sample that limit how closely the transition can be approached. We have taken advantage of the finite magnetic susceptibility of He-4 to build a magnetostrictive low gravity simulator. The simulator consists of a superconducting magnet with a field profile shaped to counteract the force of gravity in a helium sample. When the magnet is operated with B·(dB/dz) = 21 T²/cm at the location of the cell, the gravitationally induced pressure variations will be canceled to within 1% over a volume 0.5 cm in height and 0.5 cm in diameter. This technique for canceling the pressure variations in a long sample cell allows the lambda-transition to be studied much closer in reduced temperature and under a wider range of applied heat currents than is possible using other ground based techniques. Preliminary results using this low gravity simulator and the limitations of the magnetostrictive technique in comparison to space based experiments will be presented.

  6. Formation of iron nanoparticles and increase in iron reactivity in mineral dust during simulated cloud processing.

    PubMed

    Shi, Zongbo; Krom, Michael D; Bonneville, Steeve; Baker, Alex R; Jickells, Timothy D; Benning, Liane G

    2009-09-01

    The formation of iron (Fe) nanoparticles and the increase in Fe reactivity in mineral dust during simulated cloud processing were investigated using high-resolution microscopy and chemical extraction methods. Cloud processing of dust was experimentally simulated via an alternation of acidic (pH 2) and circumneutral (pH 5-6) conditions over periods of 24 h each on presieved (<20 μm) Saharan soil and goethite suspensions. Microscopic analyses of the processed soil and goethite samples reveal the neo-formation of Fe-rich nanoparticle aggregates, which were not found initially. Similar Fe-rich nanoparticles were also observed in wet-deposited Saharan dust from the western Mediterranean but not in dry-deposited dust from the eastern Mediterranean. Sequential Fe extraction of the soil samples indicated an increase in the proportion of chemically reactive Fe extractable by an ascorbate solution after simulated cloud processing. In addition, the sequential extractions on the Mediterranean dust samples revealed a higher content of reactive Fe in the wet-deposited dust compared to that of the dry-deposited dust. These results suggest that the large variations of pH commonly reported in aerosol and cloud waters can trigger the neo-formation of nanosize Fe particles and an increase in Fe reactivity in the dust.

  7. Cyclic arc plasma tests of RSI materials using a preheater

    NASA Technical Reports Server (NTRS)

    Stewart, D. A.

    1973-01-01

    The results of a test program are reported in which a preheater was used with an arc plasma stream to study the thermal response of samples of candidate reusable surface insulation materials for the space shuttle. The preheater simulated the shuttle temperature history during the first and last portions of the test cycle, which could not be simulated by the air arc plasma flow. Pre- and post-test data taken for each of the materials included magnified views, optical properties, and chemical analyses. The test results indicate that the mullite base samples experience higher surface temperatures than the other materials at heating rates greater than 225 kw/sq m. The ceramic fibrous mullite and silica coatings show noncatalytic wall behavior. Internal temperature response data for the materials are compared and correlated with analytical predictions.

  8. H3 Histone Tail Conformation within the Nucleosome and the Impact of K14 Acetylation Studied Using Enhanced Sampling Simulation

    PubMed Central

    Ikebe, Jinzen; Sakuraba, Shun; Kono, Hidetoshi

    2016-01-01

    Acetylation of lysine residues in histone tails is associated with gene transcription. Because histone tails are structurally flexible and intrinsically disordered, it is difficult to experimentally determine the tail conformations and the impact of acetylation. In this work, we performed simulations to sample H3 tail conformations with and without acetylation. The results show that irrespective of the presence or absence of the acetylation, the H3 tail remains in contact with the DNA and assumes an α-helix structure in some regions. Acetylation slightly weakened the interaction between the tail and DNA and enhanced α-helix formation, resulting in a more compact tail conformation. We inferred that this compaction induces unwrapping and exposure of the linker DNA, enabling DNA-binding proteins (e.g., transcription factors) to bind to their target sequences. In addition, our simulation also showed that acetylated lysine was more often exposed to the solvent, which is consistent with the fact that acetylation functions as a post-translational modification recognition site marker. PMID:26967163

  9. Assessment of ecologic regression in the study of lung cancer and indoor radon.

    PubMed

    Stidley, C A; Samet, J M

    1994-02-01

    Ecologic regression studies conducted to assess the cancer risk of indoor radon to the general population are subject to methodological limitations, and they have given seemingly contradictory results. The authors use simulations to examine the effects of two major methodological problems that affect these studies: measurement error and misspecification of the risk model. In a simulation study of the effect of measurement error caused by the sampling process used to estimate radon exposure for a geographic unit, both the effect of radon and the standard error of the effect estimate were underestimated, with greater bias for smaller sample sizes. In another simulation study, which addressed the consequences of uncontrolled confounding by cigarette smoking, even small negative correlations between county geometric mean annual radon exposure and the proportion of smokers resulted in negative average estimates of the radon effect. A third study considered consequences of using simple linear ecologic models when the true underlying model relation between lung cancer and radon exposure is nonlinear. These examples quantify potential biases and demonstrate the limitations of estimating risks from ecologic studies of lung cancer and indoor radon.
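
    The first simulation's mechanism, attenuation of the ecologic slope when county exposure is estimated from a small sample of homes, is easy to reproduce in miniature. All distributions and parameter values below are hypothetical, chosen only to show the bias toward zero shrinking as the per-county sample grows.

        import numpy as np

        rng = np.random.default_rng(5)

        def ecologic_slope(n_homes_per_county, n_counties=60, beta_true=0.5):
            # County-level radon exposure is estimated from a handful of homes,
            # so the ecologic regression slope is attenuated toward zero.
            county_mean = rng.lognormal(0.0, 0.5, n_counties)   # true exposures
            within_sd = 0.8 * county_mean                       # home-to-home spread
            est = np.array([rng.normal(mu, sd, n_homes_per_county).mean()
                            for mu, sd in zip(county_mean, within_sd)])
            rate = 10 + beta_true * county_mean + rng.normal(0, 0.5, n_counties)
            return np.polyfit(est, rate, 1)[0]

        for n in (5, 25, 100):
            slopes = [ecologic_slope(n) for _ in range(500)]
            print(n, np.mean(slopes))   # approaches beta_true as n grows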

  10. The Influence of PV Module Materials and Design on Solder Joint Thermal Fatigue Durability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosco, Nick; Silverman, Timothy J.; Kurtz, Sarah

    Finite element model (FEM) simulations have been performed to elucidate the effect of flat plate photovoltaic (PV) module materials and design on PbSn eutectic solder joint thermal fatigue durability. The statistical method of Latin Hypercube sampling was employed to investigate the sensitivity of simulated damage to each input variable. Variables of laminate material properties and their thicknesses were investigated. Using analysis of variance, we determined that the rate of solder fatigue was most sensitive to solder layer thickness, with copper ribbon and silicon thickness being the next two most sensitive variables. By simulating both accelerated thermal cycles (ATCs) and PV cell temperature histories through two characteristic days of service, we determined that the acceleration factor between the ATC and outdoor service was independent of the variables sampled in this study. This result implies that an ATC test will represent a similar time of outdoor exposure for a wide range of module designs. This is an encouraging result for the standard ATC that must be universally applied across all modules.
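
    Drawing a Latin Hypercube sample over a design space is straightforward with scipy's quasi-Monte Carlo module, as sketched below; the variable names and ranges are invented placeholders for the module laminate inputs, not the study's actual values.

        import numpy as np
        from scipy.stats import qmc

        # Draw a Latin Hypercube sample over four illustrative laminate variables.
        sampler = qmc.LatinHypercube(d=4, seed=6)
        unit = sampler.random(n=64)                  # 64 points in the unit hypercube
        lower = [0.02, 0.10, 0.15, 0.30]             # e.g. solder, ribbon, silicon,
        upper = [0.08, 0.30, 0.25, 0.60]             # encapsulant thicknesses (mm)
        designs = qmc.scale(unit, lower, upper)
        # Each row is one FEM run; regressing the simulated damage on these columns
        # (e.g. via ANOVA) ranks the sensitivity of fatigue damage to each input.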

  11. Simulation of elution profiles in liquid chromatography - II: Investigation of injection volume overload under gradient elution conditions applied to second dimension separations in two-dimensional liquid chromatography.

    PubMed

    Stoll, Dwight R; Sajulga, Ray W; Voigt, Bryan N; Larson, Eli J; Jeong, Lena N; Rutan, Sarah C

    2017-11-10

    An important research direction in the continued development of two-dimensional liquid chromatography (2D-LC) is to improve the detection sensitivity of the method. This is especially important in applications where injection of large volumes of effluent from the first-dimension (1D) column into the second-dimension (2D) column leads to severe 2D peak broadening and peak shape distortion. For example, this is common when coupling two reversed-phase columns and the organic solvent content of the 1D mobile phase overwhelms the 2D column with each injection of 1D effluent, leading to low resolution in the second dimension. In a previous study we validated a simulation approach based on the Craig distribution model and adapted from the work of Czok and Guiochon [1] that enabled accurate simulation of simple isocratic and gradient separations with very small injection volumes, and isocratic separations with mismatched injection and mobile phase solvents [2]. In the present study we have extended this simulation approach to simulate separations relevant to 2D-LC. Specifically, we have focused on simulating 2D separations where gradient elution conditions are used, there is mismatch between the sample solvent and the starting point in the gradient elution program, injection volumes approach or even exceed the dead volume of the 2D column, and the extent of sample loop filling is varied. To validate this simulation we have compared results from simulations and experiments for 101 different conditions, including variation in injection volume (0.4-80 μL), loop filling level (25-100%), and degree of mismatch between sample organic solvent and the starting point in the gradient elution program (-20 to +20% ACN). We find that the simulation is accurate enough (median errors in retention time and peak width of -1.0 and -4.9%, without corrections for extra-column dispersion) to be useful in guiding optimization of 2D-LC separations. However, this requires that real injection profiles obtained from 2D-LC interface valves are used to simulate the introduction of samples into the 2D column. These profiles are highly asymmetric; simulation using simple rectangular pulses leads to peak widths that are far too narrow under many conditions. We believe the simulation approach developed here will be useful for addressing practical questions in the development of 2D-LC methods. Copyright © 2017 Elsevier B.V. All rights reserved.
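
    For readers unfamiliar with the underlying plate model, a bare-bones Craig distribution simulation is sketched below: solute equilibrates between phases in each plate, and the mobile fraction advances one plate per transfer. This ignores everything the paper adds (gradients, solvent mismatch, realistic injection profiles) and uses an arbitrary retention factor.

        import numpy as np

        def craig_elution(n_plates=200, n_transfers=600, k=2.0, inj_plates=5):
            # Minimal Craig distribution model: in every plate the solute
            # equilibrates between phases (mobile fraction 1/(1+k)), then the
            # mobile phase shifts one plate forward. Returns the eluted profile.
            col = np.zeros(n_plates)
            col[:inj_plates] = 1.0 / inj_plates    # finite-volume injection plug
            p = 1.0 / (1.0 + k)                    # fraction in the mobile phase
            eluted = []
            for _ in range(n_transfers):
                mobile = p * col
                col -= mobile                      # stationary fraction stays put
                eluted.append(mobile[-1])          # what leaves the column
                col[1:] += mobile[:-1]             # mobile phase advances one plate
            return np.array(eluted)

        profile = craig_elution()                  # broadened, retained peak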

  12. [Accuracy Check of Monte Carlo Simulation in Particle Therapy Using Gel Dosimeters].

    PubMed

    Furuta, Takuya

    2017-01-01

    Gel dosimeters are a three-dimensional imaging tool for radiation-induced dose distributions, and they can be used to check the accuracy of Monte Carlo simulations in particle therapy. An application was reviewed in this article. An inhomogeneous biological sample with a gel dosimeter placed behind it was irradiated with a carbon beam. The dose distribution recorded in the gel dosimeter reflected the inhomogeneity of the biological sample. A Monte Carlo simulation was conducted by reconstructing the biological sample from its CT image, and the accuracy of the particle transport in the simulation was checked by comparing the simulated and experimental dose distributions in the gel dosimeter.
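
    A small illustrative sketch of the comparison step described above: a voxelwise difference between a Monte Carlo dose grid and a gel-dosimeter readout. Real checks typically use gamma analysis; both arrays here are synthetic stand-ins, not data from the article.

    ```python
    # Sketch: voxelwise agreement between a simulated and a "measured" 3-D dose
    # grid. Both arrays are synthetic; gel_dose is mc_dose plus 2% noise.
    import numpy as np

    rng = np.random.default_rng(1)
    mc_dose = rng.random((32, 32, 32))                       # Monte Carlo dose grid
    gel_dose = mc_dose * (1.0 + 0.02 * rng.standard_normal(mc_dose.shape))

    rel_diff = (gel_dose - mc_dose) / mc_dose.max()          # normalized to global max
    within_3pct = np.mean(np.abs(rel_diff) < 0.03)
    print(f"voxels agreeing within 3% of max dose: {within_3pct:.1%}")
    ```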

  13. Enhanced release and drug delivery of celecoxib into physiological environment by the different types of nanoscale vehicles

    NASA Astrophysics Data System (ADS)

    Khazraei, Avideh; Tarlani, Aliakbar; Naderi, Nima; Muzart, Jacques; Abdulhameed (Kaabi), Zahra; Eslami-Moghadam, Mahbube

    2017-11-01

    Celecoxib (CEL), a drug with very low water solubility, was loaded at 16 and 50% (w/w) by an impregnation method onto a variety of alumina nanostructures: synthetic sol-gel γ-alumina (Gam-Al), functionalized sol-gel γ-alumina (Gam-Al-NH2), and organized nanoporous alumina (Onp-Al); the results were compared with commercial alumina (Com-Al) and SBA-15 (SBA). The samples were analyzed by FT-IR, X-ray diffraction (XRD), and N2 sorption. In vitro studies were carried out in simulated body fluid (SBF), simulated gastric fluid (SGF), and simulated intestinal fluid (SIF). The in vivo study was carried out on male Wistar rats under standard conditions. N2 sorption revealed the initial pore characteristics of the nanocarriers. XRD patterns showed that the 50% loaded samples contain bulk celecoxib, whose solubility in body fluids is lower than that of the 16% loaded samples. For the 16% loaded samples, drug solubility in the three simulated body fluids was found to decrease in the following order: Gam-Al-CEL > Onp-Al-CEL > Com-Al-CEL > SBA-CEL. Gam-Al-CEL showed the highest release (96%) in SBF after 60 min. The in vivo study showed a significant decrease in pain score in rats for Gam-Al-NH2-CEL-16% and Gam-Al-CEL-50%. It can be concluded that the synthetic aluminas have greater future potential than conventional SBA-15 and commercial alumina.
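
    Release profiles like those above are often summarized with the Korsmeyer-Peppas power law. The sketch below fits that model to fabricated points (not the paper's measurements) purely to illustrate the analysis; in practice the model is usually applied only to roughly the first 60% of release.

    ```python
    # Sketch: fit fabricated cumulative-release data to the Korsmeyer-Peppas
    # power law Q(t) = k * t**n. Data points are invented for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([5.0, 10, 20, 30, 45, 60])          # time (min)
    release = np.array([30.0, 48, 68, 80, 90, 96])   # cumulative release (%)

    def peppas(t, k, n):
        return k * t**n

    (k_fit, n_fit), _ = curve_fit(peppas, t, release, p0=(10.0, 0.5))
    print(f"k = {k_fit:.1f}, n = {n_fit:.2f}")
    ```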

  14. Ultrasonic density measurement cell design and simulation of non-ideal effects.

    PubMed

    Higuti, Ricardo Tokio; Buiochi, Flávio; Adamowski, Júlio Cezar; de Espinosa, Francisco Montero

    2006-07-01

    This paper presents a theoretical analysis of a density measurement cell using a one-dimensional model composed of acoustic and electroacoustic transmission lines in order to simulate non-ideal effects. The model is implemented using matrix operations and is used to design the cell, considering its geometry, the materials used in the sensor assembly, the range of liquid sample properties, and the signal analysis techniques. The sensor's performance under non-ideal conditions is studied, considering the thicknesses of the adhesive and metallization layers and the effect of liquid-sample residue that can build up on the sample chamber surfaces. These layers are taken into account in the model, and their effects are compensated for to reduce the error in the density measurement. The results show the contribution of the residue layer thickness to the density error and its behavior when two signal analysis methods are used.
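
    A minimal sketch of the transmission-line matrix formalism such a cell model relies on: each layer contributes a 2x2 acoustic ABCD matrix, and a stack is the product of its layer matrices. The adhesive and metallization properties below are placeholder values, not the authors' parameters.

    ```python
    # Sketch: 2x2 acoustic ABCD matrices for lossless layers; a layer stack is
    # the product of its matrices. Layer properties are placeholders.
    import numpy as np

    def layer_matrix(freq, thickness, density, speed):
        """ABCD matrix of one lossless acoustic layer at frequency freq (Hz)."""
        kl = 2 * np.pi * freq / speed * thickness   # phase across the layer
        z = density * speed                         # characteristic impedance
        return np.array([[np.cos(kl), 1j * z * np.sin(kl)],
                         [1j * np.sin(kl) / z, np.cos(kl)]])

    f = 5e6  # assumed 5 MHz operating frequency
    stack = (layer_matrix(f, 10e-6, 1100.0, 2500.0)     # adhesive layer
             @ layer_matrix(f, 2e-6, 19300.0, 3240.0))  # metallization layer
    print(stack)
    ```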

  15. Sampling Versus Filtering in Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Debliquy, O.; Knaepen, B.; Carati, D.; Wray, A. A.

    2004-01-01

    An LES formalism in which the filter operator is replaced by a sampling operator is proposed. The unknown quantities that appear in the LES equations then originate only from inadequate resolution (discretization errors). The resulting viewpoint appears to link finite difference approaches with finite element methods. Sampling operators are shown to commute with nonlinearities and to be purely projective. Moreover, their use allows an unambiguous definition of the LES numerical grid. The price to pay is that sampling never commutes with spatial derivatives, so the commutation errors must be modeled. It is shown that models for the discretization errors may be treated using the dynamic procedure. Preliminary results, using the Smagorinsky model, are very encouraging.
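
    A short numerical illustration of the two properties claimed above, on assumed grids with a made-up test field: sampling commutes exactly with a pointwise nonlinearity, but not with a spectral spatial derivative once the coarse grid aliases a retained mode.

    ```python
    # Sampling commutes with nonlinearities but not with derivatives.
    import numpy as np

    x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    u = np.sin(3 * x) + 0.5 * np.cos(60 * x)
    S = slice(None, None, 4)            # sampling operator: keep every 4th point

    # Commutes with the pointwise nonlinearity u -> u**2 (exactly):
    print(np.allclose((u**2)[S], u[S] ** 2))        # True

    # Does not commute with d/dx (spectral derivative on each grid; the
    # 64-point coarse grid aliases the wavenumber-60 mode):
    def ddx(v):
        k = 2 * np.pi * np.fft.fftfreq(v.size, d=2 * np.pi / v.size)
        return np.fft.ifft(1j * k * np.fft.fft(v)).real

    print(np.allclose(ddx(u)[S], ddx(u[S])))        # False: commutation error
    ```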

  16. Investigation on porosity and permeability change of Mount Simon sandstone (Knox County, IN, USA) under geological CO2 sequestration conditions: a numerical simulation approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Liwei; Soong, Yee; Dilmore, Robert M.

    In this paper, a numerical model was developed to simulate reactive transport with porosity and permeability change in Mount Simon sandstone (samples from Knox County, IN) after 180 days of exposure to CO2-saturated brine under CO2 sequestration conditions. The model predicted the formation of a high-porosity zone adjacent to the surface of the sample in contact with bulk brine, and a lower-porosity zone just beyond it along the path from the sample/bulk-brine interface to the sample core. The high-porosity zone was attributed to dissolution of quartz and muscovite/illite, while the adjacent lower-porosity zone was attributed to precipitation of kaolinite and feldspar. The model predicted a 40% permeability increase for the Knox sandstone sample after 180 days of exposure to CO2-saturated brine, consistent with laboratory-measured permeability results. Model-predicted solution chemistry was also consistent with laboratory-measured solution chemistry data. Finally, the initial porosity, the initial feldspar content, and the exponent n (determined by pore structure and tortuosity) used in the permeability calculations were three important factors affecting the permeability evolution of sandstone samples under CO2 sequestration conditions.
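
    A hedged sketch of the porosity-permeability coupling mentioned in the record: a power-law update k/k0 = (phi/phi0)^n, with the exponent n set by pore structure and tortuosity. The numbers below are illustrative, chosen so that a ~4% relative porosity rise gives roughly a 40% permeability increase; they are not the paper's calibration.

    ```python
    # Sketch: power-law porosity-permeability update. All values assumed.
    phi0, k0 = 0.12, 1.0e-15    # initial porosity and permeability (m^2), assumed
    n = 8.0                     # exponent from pore structure/tortuosity, assumed

    phi = 0.125                 # porosity after simulated mineral dissolution
    k = k0 * (phi / phi0) ** n
    print(f"permeability change: {100 * (k / k0 - 1):+.0f}%")   # about +39%
    ```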

  17. Simulation of HLNC and NCC measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ming-Shih; Teichmann, T.; De Ridder, P.

    1994-03-01

    This report discusses an automatic method for simulating the results of High Level Neutron Coincidence Counting (HLNC) and Neutron Collar Coincidence Counting (NCC) measurements, to help safeguards inspectors understand and use these instruments under realistic conditions. Gaining that experience directly would otherwise be expensive and time-consuming, except at sites designed to handle radioactive materials and having the necessary variety of fuel elements and other samples. The simulation must therefore include the behavior of the instruments for variably constituted and composed fuel elements (including poison rods and Gd loading), and must display the changes in the count rates as a function of these characteristics, as well as of various instrumental parameters. Such a simulation is an efficient way to accomplish the required familiarization and training of the inspectors by providing a realistic reproduction of the results of such measurements.
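
    A deliberately simplified sketch of the kind of quantity such a simulator reports: singles and doubles count rates versus effective 240Pu mass, in a toy point-model form. Every constant below is a placeholder; a real HLNC/NCC simulation uses calibrated detector efficiencies, gate fractions, die-away times, and nuclear data.

    ```python
    # Sketch: toy point-model singles/doubles rates. All constants are
    # placeholders, not calibrated HLNC parameters.
    def rates(m240_eff, sf_yield=1.0e3, efficiency=0.18, gate_fraction=0.6):
        """Singles and doubles rates for an effective 240Pu mass (grams)."""
        singles = sf_yield * m240_eff * efficiency
        doubles = sf_yield * m240_eff * efficiency**2 * gate_fraction
        return singles, doubles

    s, d = rates(m240_eff=10.0)
    print(f"singles = {s:.0f} counts/s, doubles = {d:.0f} counts/s")
    ```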

  18. Assessing accuracy of point fire intervals across landscapes with simulation modelling

    Treesearch

    Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall

    2007-01-01

    We assessed the accuracy of point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...
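
    A toy comparison of two of the three sampling designs named above (stratified omitted for brevity) on a synthetic fire-interval surface; the gamma-distributed grid and the sample size are invented for illustration.

    ```python
    # Sketch: random vs. systematic point sampling of a synthetic fire-interval
    # surface. Grid, distribution, and sample size are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    grid = rng.gamma(shape=2.0, scale=15.0, size=(100, 100))   # mean interval ~30 yr
    n = 25

    random_pts = grid[rng.integers(0, 100, n), rng.integers(0, 100, n)]
    systematic_pts = grid[::20, ::20].ravel()                  # 5 x 5 grid of points

    print(f"true mean interval  {grid.mean():6.1f} yr")
    print(f"random sample mean  {random_pts.mean():6.1f} yr")
    print(f"systematic mean     {systematic_pts.mean():6.1f} yr")
    ```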

  19. Got Power? A Systematic Review of Sample Size Adequacy in Health Professions Education Research

    ERIC Educational Resources Information Center

    Cook, David A.; Hatala, Rose

    2015-01-01

    Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011,…
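
    A brief sketch of the power calculation at issue, using statsmodels; the effect size and group size below are illustrative, not the review's data.

    ```python
    # Sketch: post-hoc power of a two-sample t-test for an assumed effect size
    # (Cohen's d = 0.5) and 20 subjects per group.
    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower().power(effect_size=0.5, nobs1=20, alpha=0.05)
    print(f"power to detect d = 0.5 with n = 20/group: {power:.2f}")
    ```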
