Sample records for parameters including sample

  1. Neutrino oscillation parameter sampling with MonteCUBES

    NASA Astrophysics Data System (ADS)

    Blennow, Mattias; Fernandez-Martinez, Enrique

    2010-01-01

    We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: the first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space; the second is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling. Program summary: Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator). Catalogue identifier: AEFJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public Licence. No. of lines in distributed program, including test data, etc.: 69 634. No. of bytes in distributed program, including test data, etc.: 3 980 776. Distribution format: tar.gz. Programming language: C. Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed. Operating system: 32 bit and 64 bit Linux. RAM: typically a few MBs. Classification: 11.1. External routines: GLoBES [1,2] and routines/libraries used by GLoBES. Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439. Nature of problem: since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, these new physics imply high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, such as those…
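
    To make the record's method concrete: the heart of such a sampler is a Metropolis-style random walk through the oscillation parameter space. The sketch below is a generic Python illustration, not MonteCUBES' actual C implementation; the two-parameter Gaussian stand-in for a GLoBES-style likelihood and all step sizes are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def log_likelihood(theta):
        # Toy stand-in for a GLoBES-style chi^2: Gaussian around fiducial
        # values of (theta23 [rad], Delta m31^2 [eV^2]) -- assumed numbers
        fiducial = np.array([0.785, 2.5e-3])
        sigma = np.array([0.05, 1.0e-4])
        return -0.5 * np.sum(((theta - fiducial) / sigma) ** 2)

    def metropolis(theta0, step, n_steps):
        chain = np.empty((n_steps + 1, len(theta0)))
        chain[0] = theta0
        logp = log_likelihood(theta0)
        for i in range(n_steps):
            proposal = chain[i] + step * rng.standard_normal(len(theta0))
            logp_new = log_likelihood(proposal)
            if np.log(rng.random()) < logp_new - logp:   # Metropolis accept rule
                chain[i + 1], logp = proposal, logp_new
            else:
                chain[i + 1] = chain[i]
        return chain

    chain = metropolis(np.array([0.7, 2.0e-3]), np.array([0.02, 5e-5]), 20000)
    print("posterior mean:", chain[5000:].mean(axis=0))   # discard burn-in
    ```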

  2. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    ERIC Educational Resources Information Center

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  3. Comparison of sampling techniques for Bayesian parameter estimation

    NASA Astrophysics Data System (ADS)

    Allison, Rupert; Dunkley, Joanna

    2014-02-01

    The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
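
    Of the three prescriptions compared, the affine-invariant ensemble move (the Goodman-Weare "stretch" move) is compact enough to sketch; the toy Gaussian log-probability, walker count and stretch parameter a = 2 below are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def log_prob(x):
        return -0.5 * np.sum(x ** 2)      # toy Gaussian target

    def stretch_sweep(walkers, a=2.0):
        """One sweep of the Goodman-Weare affine-invariant stretch move."""
        n, dim = walkers.shape
        for j in range(n):
            k = (j + 1 + rng.integers(n - 1)) % n        # random other walker
            z = (1 + (a - 1) * rng.random()) ** 2 / a    # z ~ g(z) on [1/a, a]
            proposal = walkers[k] + z * (walkers[j] - walkers[k])
            log_accept = ((dim - 1) * np.log(z)
                          + log_prob(proposal) - log_prob(walkers[j]))
            if np.log(rng.random()) < log_accept:
                walkers[j] = proposal
        return walkers

    walkers = rng.standard_normal((32, 5))               # 32 walkers, 5 dimensions
    for _ in range(1000):
        walkers = stretch_sweep(walkers)
    print("ensemble mean:", walkers.mean(axis=0))
    ```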

  4. Estimation of nonlinear pilot model parameters including time delay.

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.; Wells, W. R.

    1972-01-01

    The feasibility of using a Kalman filter estimator for the identification of unknown parameters in nonlinear dynamic systems with a time delay is investigated. The problem considered is the application of estimation theory to determine the parameters of a family of pilot models containing delayed states. In particular, the pilot-plant dynamics are described by differential-difference equations of the retarded type. The pilot delay, included as one of the unknown parameters to be determined, is kept in pure form, as opposed to the Padé approximations generally used for these systems. Problem areas associated with processing real pilot response data are included in the discussion.

  5. Iterative Importance Sampling Algorithms for Parameter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray W; Morzfeld, Matthias; Day, Marcus S.

    In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is a challenging task. Several sampling algorithms have been proposed over the past years that take an iterative approach to constructing a proposal distribution. We investigate the applicability of such algorithms by applying them to two realistic and challenging test problems, one in subsurface flow, and one in combustion modeling. More specifically, we implement importance sampling algorithms that iterate over the mean and covariance matrix of Gaussian or multivariate t-proposal distributions. Our implementation leverages massively parallel computers, and we present strategies to initialize the iterations using 'coarse' MCMC runs or Gaussian mixture models.
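
    A minimal sketch of the iteration the record describes, updating the mean and covariance of a Gaussian proposal from self-normalized importance weights; the toy target density and all tuning constants are assumptions (the study's targets were subsurface-flow and combustion models run on parallel machines).

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(2)
    dim = 2

    def log_posterior(x):
        return -0.5 * np.sum((x - 3.0) ** 2)   # toy target: N(3, I)

    mean, cov = np.zeros(dim), 10.0 * np.eye(dim)   # deliberately crude start
    for it in range(20):
        samples = rng.multivariate_normal(mean, cov, size=2000)  # independent draws
        log_w = (np.array([log_posterior(s) for s in samples])
                 - multivariate_normal.logpdf(samples, mean, cov))
        w = np.exp(log_w - log_w.max())
        w /= w.sum()                                 # self-normalized weights
        mean = w @ samples                           # re-fit proposal mean...
        diff = samples - mean
        cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(dim)  # ...and covariance

    print("proposal mean after iteration:", mean)    # should approach (3, 3)
    ```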

  6. Random sampling and validation of covariance matrices of resonance parameters

    NASA Astrophysics Data System (ADS)

    Plevnik, Lucijan; Zerovnik, Gašper

    2017-09-01

    Analytically exact methods for random sampling of arbitrarily correlated parameters are presented. Emphasis is given, on the one hand, to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and the consistent sampling of correlated, inherently positive parameters, and, on the other hand, to optimization of the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which, starting from a nuclear data library file for a chosen isotope in ENDF-6 format, produces an arbitrary number of new ENDF-6 files in which the original resonance parameters are replaced by random samples drawn in accordance with the corresponding covariance matrices. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in the covariance data of resonance parameters in ENDF/B-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to the sampling and validation of any nuclear data.
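
    The core recipe, drawing correlated samples from a covariance matrix via an eigenvalue/Cholesky factorization with a repair step for covariance data that fail positive semi-definiteness, can be sketched as follows; the 3x3 matrix is illustrative, not ENDF data. For inherently positive parameters, one consistent variant would be to sample log-parameters instead.

    ```python
    import numpy as np

    def sample_correlated(mean, cov, n, rng):
        """Draw n samples from N(mean, cov), repairing cov if it is not
        positive semi-definite (a typical covariance-data inconsistency)."""
        vals, vecs = np.linalg.eigh(cov)
        if vals.min() < 0:
            vals = np.clip(vals, 0.0, None)       # clip negative eigenvalues
            cov = vecs @ np.diag(vals) @ vecs.T   # nearest-PSD style repair
        L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(mean)))
        return mean + rng.standard_normal((n, len(mean))) @ L.T

    rng = np.random.default_rng(3)
    mean = np.array([1.0, 2.0, 0.5])              # stand-ins for resonance parameters
    cov = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.01]])
    samples = sample_correlated(mean, cov, 10_000, rng)
    print(np.cov(samples.T))                      # should roughly reproduce cov
    ```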

  7. Nonequilibrium umbrella sampling in spaces of many order parameters

    NASA Astrophysics Data System (ADS)

    Dickson, Alex; Warmflash, Aryeh; Dinner, Aaron R.

    2009-02-01

    We recently introduced an umbrella sampling method for obtaining nonequilibrium steady-state probability distributions projected onto an arbitrary number of coordinates that characterize a system (order parameters) [A. Warmflash, P. Bhimalapuram, and A. R. Dinner, J. Chem. Phys. 127, 154112 (2007)]. Here, we show how our algorithm can be combined with the image update procedure from the finite-temperature string method for reversible processes [E. Vanden-Eijnden and M. Venturoli, "Revisiting the finite temperature string method for calculation of reaction tubes and free energies," J. Chem. Phys. (in press)] to enable restricted sampling of a nonequilibrium steady state in the vicinity of a path in a many-dimensional space of order parameters. For the study of transitions between stable states, the adapted algorithm results in improved scaling with the number of order parameters and the ability to progressively refine the regions of enforced sampling. We demonstrate the algorithm by applying it to a two-dimensional model of driven Brownian motion and a coarse-grained (Ising) model for nucleation under shear. It is found that the choice of order parameters can significantly affect the convergence of the simulation; local magnetization variables other than those used previously for sampling transition paths in Ising systems are needed to ensure that the reactive flux is primarily contained within a tube in the space of order parameters. The relation of this method to other algorithms that sample the statistics of path ensembles is discussed.

  8. Impact of ADC parameters on linear optical sampling systems

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung-Hien; Gay, Mathilde; Gomez-Agis, Fausto; Lobo, Sébastien; Sentieys, Olivier; Simon, Jean-Claude; Peucheret, Christophe; Bramerie, Laurent

    2017-11-01

    Linear optical sampling (LOS), based on the coherent photodetection of an optical signal under test with a low repetition-rate signal originating from a pulsed local oscillator (LO), enables the characterization of the temporal electric field of optical sources. Thanks to this technique, low-speed photodetectors and analog-to-digital converters (ADCs) can be integrated in the LOS system providing a cost-effective tool for characterizing high-speed signals. However, the impact of photodetector and ADC parameters on such LOS systems has not been explored in detail so far. These parameters, including the integration time of the track-and-hold function, the effective number of bits (ENOB) of the ADC, as well as the combined limited bandwidth of the photodetector and ADC are experimentally and numerically investigated in a LOS system for the first time. More specifically, by reconstructing 10-Gbit/s non-return-to-zero on-off keying (NRZ-OOK) and 10-Gbaud NRZ-quadrature phase-shift-keying (QPSK) signals, it is shown that a short integration time provides a better recovered signal fidelity. Furthermore, an ENOB of 6 bits and an ADC bandwidth normalized to the sampling rate of 2.8 are found to be sufficient in order to reliably monitor the considered signals.

  9. Enhanced sampling simulations of DNA step parameters.

    PubMed

    Karolak, Aleksandra; van der Vaart, Arjan

    2014-12-15

    A novel approach for the selection of step parameters as reaction coordinates in enhanced sampling simulations of DNA is presented. The method uses three atoms per base and does not require coordinate overlays or idealized base pairs. This allowed for a highly efficient implementation of the calculation of all step parameters and their Cartesian derivatives in molecular dynamics simulations. Good correlation between the calculated and actual twist, roll, tilt, shift, and slide parameters is obtained, while the correlation with rise is modest. The method is illustrated by its application to the methylated and unmethylated 5'-CATGTGACGTCACATG-3' double stranded DNA sequence. One-dimensional umbrella simulations indicate that the flexibility of the central CG step is only marginally affected by methylation.

  10. Compressive properties of passive skeletal muscle-the impact of precise sample geometry on parameter identification in inverse finite element analysis.

    PubMed

    Böl, Markus; Kruse, Roland; Ehret, Alexander E; Leichsenring, Kay; Siebert, Tobias

    2012-10-11

    Due to the increasing developments in the modelling of biological materials, adequate parameter identification techniques are urgently needed. The majority of recent contributions on passive muscle tissue identify material parameters solely by comparing characteristic compressive stress-stretch curves from experiments and simulation. In doing so, different assumptions concerning, e.g., the sample geometry or the degree of friction between the sample and the platens are required. In most cases these assumptions are grossly simplified, leading to incorrect material parameters. In order to overcome such oversimplifications, in this paper a more reliable parameter identification technique is presented: we use the inverse finite element method (iFEM) to identify the optimal parameter set by comparison of the compressive stress-stretch response, including the realistic geometries of the samples and the presence of friction at the compressed sample faces. Moreover, we judge the quality of the parameter identification by comparing the simulated and experimental deformed shapes of the samples. Besides this, the study includes a comprehensive set of compressive stress-stretch data on rabbit soleus muscle and the determination of static friction coefficients between muscle and PTFE.

  11. Revisiting Hansen Solubility Parameters by Including Thermodynamics.

    PubMed

    Louwerse, Manuel J; Maldonado, Ana; Rousseau, Simon; Moreau-Masselon, Chloe; Roux, Bernard; Rothenberg, Gadi

    2017-11-03

    The Hansen solubility parameter approach is revisited by implementing the thermodynamics of dissolution and mixing. Hansen's pragmatic approach has earned its spurs in predicting solvents for polymer solutions, but for molecular solutes improvements are needed. By going into the details of entropy and enthalpy, several corrections are suggested that make the methodology thermodynamically sound without losing its ease of use. The most important corrections include accounting for the solvent molecules' size, the destruction of the solid's crystal structure, and the specificity of hydrogen-bonding interactions, as well as opportunities to predict the solubility at extrapolated temperatures. Testing the original and the improved methods on a large industrial dataset including solvent blends, fit qualities improved from 0.89 to 0.97 and the percentage of correct predictions rose from 54% to 78%. Full Matlab scripts are included in the Supporting Information, allowing readers to implement these improvements on their own datasets.
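
    For context, the uncorrected baseline being improved here scores solvents by their distance to a solute in (δD, δP, δH) space; a minimal sketch of that classic calculation, with illustrative parameter values not taken from the paper's dataset:

    ```python
    import math

    def hansen_distance(solute, solvent):
        """Classic Hansen distance Ra; the factor 4 on the dispersion term
        is part of Hansen's original empirical definition."""
        dD = solute["dD"] - solvent["dD"]
        dP = solute["dP"] - solvent["dP"]
        dH = solute["dH"] - solvent["dH"]
        return math.sqrt(4 * dD**2 + dP**2 + dH**2)

    # Illustrative values in MPa^0.5 (not from the paper's dataset)
    solute = {"dD": 18.0, "dP": 9.0, "dH": 7.0, "R0": 8.0}
    solvent = {"dD": 15.8, "dP": 8.8, "dH": 19.4}   # roughly ethanol-like

    ra = hansen_distance(solute, solvent)
    red = ra / solute["R0"]        # RED < 1 predicts a good solvent
    print(f"Ra = {ra:.2f}, RED = {red:.2f}")
    ```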

  12. Correlations of water quality parameters with mutagenicity of chlorinated drinking water samples.

    PubMed

    Schenck, Kathleen M; Sivaganesan, Mano; Rice, Glenn E

    2009-01-01

    Adverse health effects that may result from chronic exposure to mixtures of disinfection by-products (DBPs) present in drinking waters may be linked to both the types and concentrations of DBPs present. Depending on the characteristics of the source water and treatment processes used, both types and concentrations of DBPs found in drinking waters vary substantially. The composition of a drinking-water mixture also may change during distribution. This study evaluated the relationships between mutagenicity, using the Ames assay, and water quality parameters. The study included information on treatment, mutagenicity data, and water quality data for source waters, finished waters, and distribution samples collected from five full-scale drinking water treatment plants, which used chlorine exclusively for disinfection. Four of the plants used surface water sources and the fifth plant used groundwater. Correlations between mutagenicity and water quality parameters are presented. The highest correlation was observed between mutagenicity and the total organic halide concentrations in the treated samples.

  13. HIV Model Parameter Estimates from Interruption Trial Data including Drug Efficacy and Reservoir Dynamics

    PubMed Central

    Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan

    2012-01-01

    Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727

  14. SPIDER - I. Sample and galaxy parameters in the grizYJHK wavebands

    NASA Astrophysics Data System (ADS)

    La Barbera, F.; de Carvalho, R. R.; de La Rosa, I. G.; Lopes, P. A. A.; Kohl-Moreira, J. L.; Capelato, H. V.

    2010-11-01

    This is the first paper of a series presenting the Spheroids Panchromatic Investigation in Different Environmental Regions (SPIDER). The sample of spheroids consists of 5080 bright (Mr < -20) early-type galaxies (ETGs), in the redshift range 0.05 to 0.095, with optical (griz) photometry and spectroscopy from the Sloan Digital Sky Survey Data Release 6 (SDSS-DR6) and near-infrared (YJHK) photometry from the UKIRT Infrared Deep Sky Survey-Large Area Survey (UKIDSS-LAS DR4). We describe how homogeneous photometric parameters (galaxy colours and structural parameters) are derived in the grizYJHK wavebands. We find no systematic steepening of the colour-magnitude relation when probing the baseline from g - r to g - K, implying that internal colour gradients drive most of the mass-metallicity relation in ETGs. As far as structural parameters are concerned, we find that the mean effective radius of ETGs smoothly decreases, by 30 per cent, from g through K, while no significant dependence on waveband is detected for the axial ratio, Sersic index and a4 parameters. Furthermore, velocity dispersions are remeasured for all the ETGs using STARLIGHT and compared to those obtained by SDSS. The velocity dispersions are rederived using a combination of simple stellar population models as templates, hence accounting for the kinematics of different galaxy stellar components. We compare our (2DPHOT) measurements of total magnitude, effective radius and mean surface brightness with those obtained as part of the SDSS pipeline (PHOTO). Significant differences are found and reported, including comparisons with a third and independent party. A full characterization of the sample completeness in all wavebands is presented, establishing the limits of application of the characteristic parameters presented here for the analysis of the global scaling relations of ETGs.

  15. Apparatus for microbiological sampling [including automatic swabbing]

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Mills, S. M. (Inventor)

    1974-01-01

    An automatic apparatus is described for microbiologically sampling surfaces using a cotton swab, which eliminates human error. The apparatus includes a self-powered transport device, such as a motor-driven wheeled cart, which mounts a swabbing motor drive for a crank arm which supports a swab in the free end thereof. The swabbing motor is pivotably mounted, and an actuator rod, movable in response to the cart traveling a predetermined distance, provides lifting of the swab from the surface being sampled and reversal of the direction of travel of the cart.

  16. Noncoherent sampling technique for communications parameter estimations

    NASA Technical Reports Server (NTRS)

    Su, Y. T.; Choi, H. J.

    1985-01-01

    This paper presents a method of noncoherent demodulation of the PSK signal for signal distortion analysis at the RF interface. The received RF signal is downconverted and noncoherently sampled for further off-line processing. Any mismatch in phase and frequency is then compensated for in software using estimation techniques to extract the baseband waveform, which is needed in measuring various signal parameters. In this way, various kinds of modulated signals can be treated uniformly, independent of modulation format, and additional distortions introduced by the receiver or the hardware measurement instruments can thus be eliminated. Quantization errors incurred by digital sampling and the ensuing software manipulations are analyzed, and related numerical results are also presented.

  17. Manual versus automated blood sampling: impact of repeated blood sampling on stress parameters and behavior in male NMRI mice

    PubMed Central

    Kalliokoski, Otto; Sørensen, Dorte B; Hau, Jann; Abelson, Klas S P

    2014-01-01

    Facial vein (cheek blood) and caudal vein (tail blood) phlebotomy are two commonly used techniques for obtaining blood samples from laboratory mice, while automated blood sampling through a permanent catheter is a relatively new technique in mice. The present study compared physiological parameters, glucocorticoid dynamics as well as the behavior of mice sampled repeatedly for 24 h by cheek blood, tail blood or automated blood sampling from the carotid artery. Mice subjected to cheek blood sampling lost significantly more body weight, had elevated levels of plasma corticosterone, excreted more fecal corticosterone metabolites, and expressed more anxious behavior than did the mice of the other groups. Plasma corticosterone levels of mice subjected to tail blood sampling were also elevated, although less significantly. Mice subjected to automated blood sampling were less affected with regard to the parameters measured, and expressed less anxious behavior. We conclude that repeated blood sampling by automated blood sampling and from the tail vein is less stressful than cheek blood sampling. The choice between automated blood sampling and tail blood sampling should be based on the study requirements, the resources of the laboratory and skills of the staff. PMID:24958546

  18. Multiobjective sampling design for parameter estimation and model discrimination in groundwater solute transport

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1989-01-01

    Sampling design for site characterization studies of solute transport in porous media is formulated as a multiobjective problem. Optimal design of a sampling network is a sequential process in which the next phase of sampling is designed on the basis of all available physical knowledge of the system. Three objectives are considered: model discrimination, parameter estimation, and cost minimization. For the first two objectives, physically based measures of the value of information obtained from a set of observations are specified. In model discrimination, value of information of an observation point is measured in terms of the difference in solute concentration predicted by hypothesized models of transport. Points of greatest difference in predictions can contribute the most information to the discriminatory power of a sampling design. Sensitivity of solute concentration to a change in a parameter contributes information on the relative variance of a parameter estimate. Inclusion of points in a sampling design with high sensitivities to parameters tends to reduce variance in parameter estimates. Cost minimization accounts for both the capital cost of well installation and the operating costs of collection and analysis of field samples. Sensitivities, discrimination information, and well installation and sampling costs are used to form coefficients in the multiobjective problem in which the decision variables are binary (zero/one), each corresponding to the selection of an observation point in time and space. The solution to the multiobjective problem is a noninferior set of designs. To gain insight into effective design strategies, a one-dimensional solute transport problem is hypothesized. Then, an approximation of the noninferior set is found by enumerating 120 designs and evaluating objective functions for each of the designs. Trade-offs between pairs of objectives are demonstrated among the models. The value of an objective function for a given design is shown…
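
    The "noninferior set" mentioned above is the set of designs not dominated in every objective at once; a sketch of how the 120 enumerated designs could be filtered, with random objective values standing in for the paper's discrimination, variance and cost scores:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Stand-in objective values for 120 enumerated designs; columns =
    # (discrimination loss, parameter-estimate variance, cost), all minimized
    objectives = rng.random((120, 3))

    def noninferior(points):
        """Indices of Pareto-optimal (nondominated) designs."""
        keep = []
        for i, p in enumerate(points):
            others_leq = np.all(points <= p, axis=1)   # no worse everywhere
            others_lt = np.any(points < p, axis=1)     # strictly better somewhere
            if not np.any(others_leq & others_lt):
                keep.append(i)
        return keep

    front = noninferior(objectives)
    print(f"{len(front)} noninferior designs out of {len(objectives)}")
    ```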

  19. Spectral gap optimization of order parameters for sampling complex molecular systems

    PubMed Central

    Tiwary, Pratyush; Berne, B. J.

    2016-01-01

    In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365

  20. Weighted statistical parameters for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rimoldini, Lorenzo

    2014-01-01

    Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
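
    The flavor of such weighting can be shown with its simplest variant: weight each observation by the interval it covers under linear interpolation, so clumped points share weight. This is only a sketch of the idea; the paper's actual scheme additionally adapts to the noise level.

    ```python
    import numpy as np

    def interp_weights(t):
        """Weight each observation by the time interval it 'covers' under
        linear interpolation: half the gap to each neighbour."""
        t = np.asarray(t, dtype=float)
        gaps = np.diff(t)
        w = np.empty_like(t)
        w[0], w[-1] = gaps[0] / 2, gaps[-1] / 2
        w[1:-1] = (gaps[:-1] + gaps[1:]) / 2
        return w / w.sum()

    def weighted_mean_var(t, y):
        w = interp_weights(t)
        mean = np.sum(w * y)
        var = np.sum(w * (y - mean) ** 2)
        return mean, var

    # Clumped sampling: a dense clump near t = 0 plus a few isolated points
    t = np.array([0.0, 0.01, 0.02, 0.03, 1.0, 2.5, 4.0])
    y = np.sin(t)
    print(weighted_mean_var(t, y))   # weighted estimates
    print(y.mean(), y.var())         # unweighted, biased toward the clump
    ```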

  1. ECCM Scheme against Interrupted Sampling Repeater Jammer Based on Parameter-Adjusted Waveform Design

    PubMed Central

    Wei, Zhenhua; Peng, Bo; Shen, Rui

    2018-01-01

    Interrupted sampling repeater jamming (ISRJ) is an effective way of deceiving coherent radar sensors, especially for linear frequency modulated (LFM) radar. In this paper, for a simplified scenario with a single jammer, we propose a dynamic electronic counter-counter measure (ECCM) scheme based on jammer parameter estimation and transmitted signal design. Firstly, the LFM waveform is transmitted to estimate the main jamming parameters by investigating the discontinuousness of the ISRJ’s time-frequency (TF) characteristics. Then, a parameter-adjusted intra-pulse frequency coded signal, whose ISRJ signal after matched filtering only forms a single false target, is designed adaptively according to the estimated parameters, i.e., sampling interval, sampling duration and repeater times. Ultimately, for typical jamming scenes with different jamming signal ratio (JSR) and duty cycle, we propose two particular ISRJ suppression approaches. Simulation results validate the effective performance of the proposed scheme for countering the ISRJ, and the trade-off relationship between the two approaches is demonstrated. PMID:29642508

  2. Stability of Chronic Hepatitis-Related Parameters in Serum Samples After Long-Term Storage.

    PubMed

    Yu, Rentao; Dan, Yunjie; Xiang, Xiaomei; Zhou, Yi; Kuang, Xuemei; Yang, Ge; Tang, Yulan; Liu, Mingdong; Kong, Weilong; Tan, Wenting; Deng, Guohong

    2017-06-01

    Serum samples are widely used in clinical research, but a comprehensive research of the stability of parameters relevant to chronic hepatitis and the effect of a relatively long-term (up to 10 years) storage on the stability have rarely been studied. To investigate the stability of chronic hepatitis-related parameters in serum samples after long-term storage. The storage stability of common clinical parameters such as total bile acid (TBA), total bilirubin (TBIL), potassium, cholesterol, and protein parameters such as alanine aminotransferase (ALT), creatine kinase (CK), γ-glutamyltransferase (GGT), albumin, high-density lipoprotein (HDL) and also hepatitis B virus (HBV) DNA, hepatitis C virus (HCV) RNA, hepatitis B surface antigen (HBsAg), and chemokine (C-X-C motif) ligand 10 (CXCL10) were tested in serum samples after storing at -20°C or -70°C for 1, 2, 3, 7, 8, and 10 years. Levels of TBA, TBIL, and protein parameters such as ALT, CK, GGT, HDL, and HBsAg decreased significantly, but levels of potassium and cholesterol increased significantly after long-term storage, whereas blood glucose and triglycerides were stable during storage. HBV DNA remained stable at -70°C but changed at -20°C, whereas HCV RNA was stable after 1-, 2-, and 3-year storage. CXCL10 was still detectable after 8-year storage. Low temperatures (-70°C/80°C) are necessary for storage of serum samples in chronic hepatitis B research after long-term storage.

  3. Sampling errors in the measurement of rain and hail parameters

    NASA Technical Reports Server (NTRS)

    Gertzman, H. S.; Atlas, D.

    1977-01-01

    Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson-distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution, permitting FSD estimation of any parameter from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
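
    The Poisson origin of these errors is easy to reproduce numerically; a sketch with an illustrative exponential size distribution and n = 6 (the radar-reflectivity case), not the paper's universal curves:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Exponential size distribution N(D) = N0 * exp(-lam * D); assumed values
    D = np.linspace(0.1, 6.0, 60)           # drop diameters (mm), binned
    dD = D[1] - D[0]
    N0, lam = 8000.0, 2.0                   # per m^3 per mm (illustrative)
    expected = N0 * np.exp(-lam * D) * dD   # expected counts per bin (1 m^3 sample)

    n = 6                                   # X(D) = c * D^n; n = 6 ~ reflectivity
    trials = np.array([
        np.sum(rng.poisson(expected) * D**n) for _ in range(20000)
    ])
    fsd = trials.std() / trials.mean()      # fractional standard deviation
    print(f"Monte Carlo FSD of the integrated parameter: {fsd:.3f}")
    ```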

  4. Developing a methodology for the inverse estimation of root architectural parameters from field based sampling schemes

    NASA Astrophysics Data System (ADS)

    Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry

    2017-04-01

    Root traits are increasingly important in the breeding of new crop varieties. For example, longer and fewer lateral roots are suggested to improve the drought resistance of wheat. Thus, detailed root architectural parameters are important. However, classical field sampling of roots only provides more aggregated information, such as root length density (coring), root counts per area (trenches) or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining information about the root system architecture of plants from field-based classical root sampling schemes, based on sensitivity analysis and inverse parameter estimation. This methodology was developed using a virtual experiment in which a root architectural model, parameterized for winter wheat, was used to simulate root system development in a field. This information provided the ground truth, which is normally unknown in a real field experiment. The three sampling schemes (coring, trenching, and rhizotubes) were virtually applied to this field and the corresponding aggregated information computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods; see the sketch below. The estimated means and standard deviations of the elementary effects of a total of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained from the literature and published data on winter wheat root architectural parameters. Root length density profiles from coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. The number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and parameter sensitivity varies slightly with depth. Most parameters and their interactions with the other parameters show…
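
    A compact sketch of Morris OAT elementary-effects screening; the toy model standing in for the root-architecture simulator and the four parameter ranges are assumptions (the study screened 37 parameters):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def morris_oat(f, bounds, n_traj=20, delta=0.25):
        """Morris one-at-a-time screening: mean (mu*) and std (sigma) of
        the elementary effects of each input factor."""
        k = len(bounds)
        lo, hi = np.array(bounds, dtype=float).T
        effects = np.zeros((n_traj, k))
        for t in range(n_traj):
            # Random base point, leaving room for the +delta step
            x = lo + rng.random(k) * (1.0 - delta) * (hi - lo)
            fx = f(x)
            for i in rng.permutation(k):          # perturb one factor at a time
                x_new = x.copy()
                x_new[i] += delta * (hi[i] - lo[i])
                f_new = f(x_new)
                effects[t, i] = (f_new - fx) / delta
                x, fx = x_new, f_new              # walk along the trajectory
        return np.abs(effects).mean(axis=0), effects.std(axis=0)

    # Toy stand-in for the root-architecture model output
    def model(p):
        branches, angle, internode, elong = p
        return branches * elong / internode + 0.1 * np.sin(angle)

    bounds = [(1, 10), (0, 1.6), (0.1, 2.0), (0.5, 3.0)]  # illustrative ranges
    mu_star, sigma = morris_oat(model, bounds)
    print("mu*  :", mu_star)
    print("sigma:", sigma)
    ```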

  5. Bridging the gaps between non-invasive genetic sampling and population parameter estimation

    Treesearch

    Francesca Marucco; Luigi Boitani; Daniel H. Pletscher; Michael K. Schwartz

    2011-01-01

    Reliable estimates of population parameters are necessary for effective management and conservation actions. The use of genetic data for capture-recapture (CR) analyses has become an important tool to estimate population parameters for elusive species. Strong emphasis has been placed on the genetic analysis of non-invasive samples, or on the CR analysis; however,...

  6. Bayesian model comparison and parameter inference in systems biology using nested sampling.

    PubMed

    Pullen, Nick; Morris, Richard J

    2014-01-01

    Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling's nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multi-dimensional integral to a 1D integration over likelihood space. This approach focuses on the computation of the marginal likelihood or evidence. The ratio of evidences of different models leads to the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system's behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design.
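
    Nested sampling itself is compact; a toy sketch of the evidence computation with a uniform prior on the unit square and rejection-sampled replacements, workable only in low dimensions, not the paper's systems-biology setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def log_like(theta):
        # Toy likelihood: isotropic Gaussian centered in the unit square
        return -0.5 * np.sum(((theta - 0.5) / 0.1) ** 2)

    n_live, dim, n_iter = 200, 2, 1500
    live = rng.random((n_live, dim))            # draws from the uniform prior
    live_logl = np.array([log_like(p) for p in live])

    log_z, log_x_prev = -np.inf, 0.0            # evidence; log prior volume X0 = 1
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_logl)
        log_x = -i / n_live                     # E[ln X_i] = -i / n_live
        log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))      # shell width
        log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
        # Replace the worst point by a prior draw above the likelihood threshold
        # (plain rejection sampling: fine for this toy, hopeless in high dimensions)
        while True:
            cand = rng.random(dim)
            if log_like(cand) > live_logl[worst]:
                live[worst], live_logl[worst] = cand, log_like(cand)
                break
        log_x_prev = log_x

    # The remaining live-point contribution is neglected in this sketch
    print("log evidence ~", log_z, "(analytic ~ -2.77 for this toy)")
    ```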

  7. C-parameter distribution at N³LL′ including power corrections

    DOE PAGES

    Hoang, André H.; Kolodrubetz, Daniel W.; Mateu, Vicent; ...

    2015-05-15

    We compute the e⁺e⁻ C-parameter distribution using the soft-collinear effective theory with a resummation to next-to-next-to-next-to-leading-log prime accuracy of the most singular partonic terms. This includes the known fixed-order QCD results up to O(α_s³), a numerical determination of the two-loop nonlogarithmic term of the soft function, and all logarithmic terms in the jet and soft functions up to three loops. Our result holds for C in the peak, tail, and far tail regions. Additionally, we treat hadronization effects using a field theoretic nonperturbative soft function, with moments Ω_n. To eliminate an O(Λ_QCD) renormalon ambiguity in the soft function, we switch from the MS-bar to a short distance "Rgap" scheme to define the leading power correction parameter Ω_1. We show how to simultaneously account for running effects in Ω_1 due to renormalon subtractions and hadron-mass effects, enabling power correction universality between C-parameter and thrust to be tested in our setup. We discuss in detail the impact of resummation and renormalon subtractions on the convergence. In the relevant fit region for α_s(m_Z) and Ω_1, the perturbative uncertainty in our cross section is ≅ 2.5% at Q = m_Z.

  8. Mutagenicity of drinking water sampled from the Yangtze River and Hanshui River (Wuhan section) and correlations with water quality parameters.

    PubMed

    Lv, Xuemin; Lu, Yi; Yang, Xiaoming; Dong, Xiaorong; Ma, Kunpeng; Xiao, Sanhua; Wang, Yazhou; Tang, Fei

    2015-03-31

    A total of 54 water samples were collected during three different hydrologic periods (level period, wet period, and dry period) from Plant A and Plant B (sources of Yangtze River and Hanshui River water, respectively), and several water parameters, such as chemical oxygen demand (COD), turbidity, and total organic carbon (TOC), were analyzed simultaneously. The mutagenicity of the water samples was evaluated using the Ames test with Salmonella typhimurium strains TA98 and TA100. According to the results, the organic compounds in the water were largely frame-shift mutagens, as positive results were found for most of the tests using TA98. All of the finished water samples exhibited stronger mutagenicity than the corresponding raw and distribution water samples, with water samples collected from Plant B presenting stronger mutagenic strength than those from Plant A. The finished water samples from Plant A displayed a seasonal-dependent variation. Water parameters including COD (r = 0.599, P = 0.009), TOC (r = 0.681, P = 0.02), UV254 (r = 0.711, P = 0.001), and total nitrogen (r = 0.570, P = 0.014) exhibited good correlations with mutagenicity (TA98) at 2.0 L/plate, which bolsters the argument for using mutagenicity as a new parameter to assess the quality of drinking water.

  9. Mutagenicity of drinking water sampled from the Yangtze River and Hanshui River (Wuhan section) and correlations with water quality parameters

    PubMed Central

    Lv, Xuemin; Lu, Yi; Yang, Xiaoming; Dong, Xiaorong; Ma, Kunpeng; Xiao, Sanhua; Wang, Yazhou; Tang, Fei

    2015-01-01

    A total of 54 water samples were collected during three different hydrologic periods (level period, wet period, and dry period) from Plant A and Plant B (sources of Yangtze River and Hanshui River water, respectively), and several water parameters, such as chemical oxygen demand (COD), turbidity, and total organic carbon (TOC), were analyzed simultaneously. The mutagenicity of the water samples was evaluated using the Ames test with Salmonella typhimurium strains TA98 and TA100. According to the results, the organic compounds in the water were largely frame-shift mutagens, as positive results were found for most of the tests using TA98. All of the finished water samples exhibited stronger mutagenicity than the corresponding raw and distribution water samples, with water samples collected from Plant B presenting stronger mutagenic strength than those from Plant A. The finished water samples from Plant A displayed a seasonal-dependent variation. Water parameters including COD (r = 0.599, P = 0.009), TOC (r = 0.681, P = 0.02), UV254 (r = 0.711, P = 0.001), and total nitrogen (r = 0.570, P = 0.014) exhibited good correlations with mutagenicity (TA98) at 2.0 L/plate, which bolsters the argument for using mutagenicity as a new parameter to assess the quality of drinking water. PMID:25825837

  10. Crack Damage Parameters and Dilatancy of Artificially Jointed Granite Samples Under Triaxial Compression

    NASA Astrophysics Data System (ADS)

    Walton, G.; Alejano, L. R.; Arzua, J.; Markley, T.

    2018-06-01

    A database of post-peak triaxial test results was created for cylindrical compression samples of Blanco Mera granite containing artificially introduced joint planes. Aside from examining the effect of the artificial jointing on major rock and rock mass parameters such as stiffness, peak strength and residual strength, other strength parameters related to brittle cracking and post-yield dilatancy were analyzed. Crack initiation and crack damage values for both the intact and artificially jointed samples were determined, and these damage envelopes were found to be notably impacted by the presence of jointing. The data suggest that with increased density of jointing, the samples transition from a combined matrix-damage and joint-slip yielding mechanism to yield dominated by joint slip. Additionally, post-yield dilation data were analyzed in the context of a mobilized dilation angle model, and the peak dilation angle was found to decrease significantly when joints were present in the samples. These dilatancy results are consistent with hypotheses in the literature on rock mass dilatancy.

  11. Effect of the extent of well purging on laboratory parameters of groundwater samples

    NASA Astrophysics Data System (ADS)

    Reka Mathe, Agnes; Kohler, Artur; Kovacs, Jozsef

    2017-04-01

    Chemicals reaching groundwater cause water quality deterioration, and reconnaissance and remediation demand substantial financial and human resources. Groundwater samples are important sources of information, and the representativity of these samples is fundamental to decision making. According to the relevant literature, the sampling method and the sampling equipment can affect the concentrations measured in samples. Detailed and systematic research in this field is missing even from the international literature. Groundwater sampling procedures are regulated worldwide; regulations describe how to sample a groundwater monitoring well, and the most common element in these regulations is well purging prior to sampling. The aim of purging the well is to take the sample from formation water rather than from stagnant water. Stagnant water forms inside and around the well because the well casing provides direct contact with the atmosphere, changing the physico-chemical composition of the well water; a sample of the stagnant water is not representative of the formation water. Regulations differ regarding the required extent of purging. Purging is mostly defined as a multiple (3-5) of the well volume, and/or as reaching stabilization of certain purged-water parameters (pH, specific conductivity, etc.). There are also suggestions that sampling without purging may be acceptable. To determine the necessary extent of purging, repeated pumping is conducted; triplicate samples are taken at the beginning of purging, at one, two and three well volumes, and at parameter stabilization. Triplicate samples account for laboratory errors. Because the subsurface is not static, the test is repeated 10 times; up to now, three tests have been completed.

  12. Constraining Unsaturated Hydraulic Parameters Using the Latin Hypercube Sampling Method and Coupled Hydrogeophysical Approach

    NASA Astrophysics Data System (ADS)

    Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.

    2017-12-01

    The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of the coupled approach; however, only a few attempts have been made to apply it to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters K_s, n, θ_r and α from time-lapse vertical electrical sounding data collected during a constant-inflow infiltration experiment. The van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to the hydrological models inferred from the van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data while avoiding data inversion, (2) estimate the total water mass recovery of the electrical resistivity data and consider it in evaluating the van Genuchten-Mualem parameters and (3) correct for the influence of subsurface temperature fluctuations on the electrical resistivity data during the infiltration experiment. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and water mass recovery.
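
    Latin hypercube sampling as used here takes a few lines: one stratum per sample in each dimension, independently permuted across dimensions; the van Genuchten-Mualem bounds below are illustrative, not the study's distributions.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def latin_hypercube(n, bounds):
        """n samples, one per stratum in each dimension, randomly permuted."""
        k = len(bounds)
        u = (rng.permuted(np.tile(np.arange(n), (k, 1)), axis=1).T
             + rng.random((n, k))) / n          # stratified uniforms in [0, 1)
        lo, hi = np.array(bounds, dtype=float).T
        return lo + u * (hi - lo)

    # Illustrative ranges for Ks (m/d), n (-), theta_r (-), alpha (1/m)
    bounds = [(0.01, 5.0), (1.1, 3.0), (0.0, 0.15), (0.5, 15.0)]
    samples = latin_hypercube(500, bounds)
    print(samples.shape, samples.min(axis=0), samples.max(axis=0))
    ```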

  13. Sensitivity of postplanning target and OAR coverage estimates to dosimetric margin distribution sampling parameters.

    PubMed

    Xu, Huijun; Gordon, J James; Siebers, Jeffrey V

    2011-02-01

    A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D_v exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals ω (e.g., ω = 1°, 2°, 5°, 10°, 20°). Isotropic samples were uniformly distributed on the unit sphere, resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs and accordingly characterized by the effective angular increment ω_eff. In each direction, the DM was calculated by moving the structure in radial steps of size δ (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy ΔQ was quantified as a function of the sampling parameters ω or ω_eff and δ…
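
    The isotropic variant described here can be sketched directly: uniform directions from normalized Gaussian draws, then radial stepping of size δ until the prescription isodose is crossed. The spherical toy "dose" and CTV below are placeholders, not a planning-system dose grid.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def isotropic_directions(n):
        """Uniform unit vectors via the normalized-Gaussian trick."""
        v = rng.standard_normal((n, 3))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    def dose(p):
        """Placeholder dose: 79.2 Gy inside r = 40 mm, linear falloff to 50 mm."""
        r = np.linalg.norm(p)
        return 79.2 * np.clip((50.0 - r) / 10.0, 0.0, 1.0)

    def dosimetric_margin(point, direction, d_presc, delta=0.1, r_max=50.0):
        """Step radially (step size delta, mm) until dose drops below d_presc."""
        r = 0.0
        while dose(point + r * direction) >= d_presc and r < r_max:
            r += delta
        return r

    dirs = isotropic_directions(500)
    surface = 30.0 * dirs                      # toy spherical CTV, radius 30 mm
    dmd = np.array([dosimetric_margin(s, d, 0.95 * 79.2)   # 95% isodose
                    for s, d in zip(surface, dirs)])
    print("median dosimetric margin:", np.median(dmd), "mm")
    ```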

  14. Cost-constrained optimal sampling for system identification in pharmacokinetics applications with population priors and nuisance parameters.

    PubMed

    Sorzano, Carlos Oscar S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar

    2015-06-01

    Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is commonly performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (cost constrained). We use Monte Carlo simulations to estimate the average Fisher information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed with a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient, that it can accommodate any dosing regimen, and that it allows flexible therapeutic strategies.

  15. Sensitivity of postplanning target and OAR coverage estimates to dosimetric margin distribution sampling parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu Huijun; Gordon, J. James; Siebers, Jeffrey V.

    2011-02-15

    Purpose: A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D_v exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Methods: Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals ω (e.g., ω = 1°, 2°, 5°, 10°, 20°). Isotropic samples were uniformly distributed on the unit sphere, resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs and accordingly characterized by the effective angular increment ω_eff. In each direction, the DM was calculated by moving the structure in radial steps of size δ (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy ΔQ was quantified as a function of the sampling parameters…

  16. Implementing reduced-risk integrated pest management in fresh-market cabbage: influence of sampling parameters and validation of binomial sequential sampling plans for the cabbage looper (Lepidoptera: Noctuidae).

    PubMed

    Burkness, Eric C; Hutchison, W D

    2009-10-01

    Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
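
    Wald's SPRT for presence/absence counts reduces to two parallel stopping lines on the cumulative number of infested plants; a sketch using the parameter set the authors found most desirable (boundaries 0.05/0.15, alpha = beta = 0.1). The observation sequence is invented for illustration.

    ```python
    import math

    def sprt_boundaries(p0, p1, alpha, beta, n):
        """Wald SPRT no-treat/treat count boundaries after n binomial draws."""
        s = math.log((1 - p0) / (1 - p1))
        denom = math.log(p1 / p0) + s
        slope = s / denom
        h0 = math.log(beta / (1 - alpha)) / denom   # lower intercept (negative)
        h1 = math.log((1 - beta) / alpha) / denom   # upper intercept
        return h0 + slope * n, h1 + slope * n

    p0, p1, alpha, beta = 0.05, 0.15, 0.10, 0.10
    count = 0
    for n, infested in enumerate([0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1], start=1):
        count += infested                            # cumulative infested plants
        lower, upper = sprt_boundaries(p0, p1, alpha, beta, n)
        if count <= lower:
            print(f"n={n}: stop, do not treat"); break
        if count >= upper:
            print(f"n={n}: stop, treat"); break
    else:
        print("keep sampling")
    ```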

  17. Effect of sampling schedule on pharmacokinetic parameter estimates of promethazine in astronauts

    NASA Astrophysics Data System (ADS)

    Boyd, Jason L.; Wang, Zuwei; Putcha, Lakshmi

    2005-08-01

    Six astronauts on the Shuttle Transport System (STS) participated in an investigation of the pharmacokinetics of promethazine (PMZ), a medication used for the treatment of space motion sickness (SMS) during flight. Each crewmember completed the protocol once during flight and repeated it thirty days after returning to Earth. Saliva samples were collected at scheduled times for 72 h after PMZ administration; more frequent samples were collected on the ground than during flight owing to schedule constraints in flight. PMZ concentrations in saliva were determined by a liquid chromatographic/mass spectrometric (LC-MS) assay, and pharmacokinetic parameters (PKPs) were calculated using the complete flight and ground-based data sets as well as a ground data set time-matched to the flight sampling schedule. Volume of distribution (Vc) and clearance (Cls) decreased during flight compared with the time-matched ground data set; however, Cls and Vc estimates were higher for all subjects when partial ground data sets were used for analysis. Area under the curve (AUC) normalized by the administered dose was similar for flight and partial ground data; however, AUC was significantly lower using time-matched sampling compared with the full data set on the ground. Half-life (t1/2) was longest during flight, shorter with the matched sampling schedule on the ground, and shortest when the complete ground data set was used. Maximum concentration (Cmax) and time to Cmax (tmax), parameters of drug absorption, showed a similar trend: Cmax was lowest and tmax longest during flight, lower and longer, respectively, with the time-matched ground data, and highest and shortest with the full ground data set.
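
    How strongly the schedule alone can move a PKP is easy to see with a noncompartmental trapezoidal AUC; the one-compartment curve and both schedules below are illustrative assumptions, not the study's data.

    ```python
    import numpy as np

    def conc(t, dose=25.0, v=100.0, ka=1.5, ke=0.12):
        """Illustrative one-compartment oral-absorption curve (assumed numbers)."""
        return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    full = np.array([0.5, 1, 2, 4, 6, 8, 12, 24, 36, 48, 72])  # dense (ground)
    sparse = np.array([1, 4, 12, 24, 72])                      # constrained (flight)

    for label, t in [("full", full), ("sparse", sparse)]:
        c = conc(t)
        auc = np.sum((c[1:] + c[:-1]) / 2 * np.diff(t))  # linear trapezoidal AUC
        print(f"{label:6s} schedule: AUC(0-72 h) = {auc:.2f}")
    ```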

  18. SERE: single-parameter quality control and sample comparison for RNA-Seq.

    PubMed

    Schulze, Stefan K; Kanwar, Rahul; Gölzenleuchter, Meike; Therneau, Terry M; Beutler, Andreas S

    2012-10-03

    Assessing the reliability of experimental replicates (or global alterations corresponding to different experimental conditions) is a critical step in analyzing RNA-Seq data. Pearson's correlation coefficient r has been widely used in the RNA-Seq field even though its statistical characteristics may be poorly suited to the task. Here we present a single-parameter test procedure for count data, the Simple Error Ratio Estimate (SERE), that can determine whether two RNA-Seq libraries are faithful replicates or globally different. Benchmarking shows that the interpretation of SERE is unambiguous regardless of the total read count or the range of expression differences among bins (exons or genes): a score of 1 indicates faithful replication (i.e., samples are affected only by Poisson variation of individual counts), a score of 0 indicates data duplication, and scores >1 correspond to true global differences between RNA-Seq libraries. By contrast, the interpretation of Pearson's r is generally ambiguous and highly dependent on sequencing depth and the range of expression levels inherent to the sample (difference between lowest and highest bin count). Cohen's simple kappa results are also ambiguous and highly dependent on the choice of bins. For quantifying global sample differences SERE performs similarly to a measure based on the negative binomial distribution yet is simpler to compute. SERE can therefore serve as a straightforward and reliable statistical procedure for the global assessment of pairs or large groups of RNA-Seq datasets by a single statistical parameter.
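
    The published benchmark data are not reproduced here, but the statistic itself is compact. A sketch under the assumption, consistent with the description above, that SERE is the square root of the Pearson chi-square for observed versus Poisson-expected counts divided by its degrees of freedom; the simulated matrix is ours:

    ```python
    import numpy as np

    def sere(counts):
        # counts: (genes x samples) matrix. Compare observed counts with the
        # expectation under pure Poisson sampling from a common profile;
        # SERE ~ 1 for faithful replicates, > 1 for global differences.
        counts = np.asarray(counts, float)
        lib = counts.sum(axis=0)                  # library sizes
        tot = counts.sum(axis=1)                  # per-gene totals
        keep = tot > 0                            # drop all-zero genes
        expected = np.outer(tot[keep], lib / lib.sum())
        chi2 = ((counts[keep] - expected) ** 2 / expected).sum()
        dof = keep.sum() * (counts.shape[1] - 1)
        return np.sqrt(chi2 / dof)

    rng = np.random.default_rng(0)
    mu = rng.gamma(2.0, 50.0, size=5000)          # common expression profile
    rep = rng.poisson(np.column_stack([mu, mu]))  # faithful Poisson replicates
    print(sere(rep))                              # close to 1
    ```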

  19. Data Stewardship in the Ocean Sciences Needs to Include Physical Samples

    NASA Astrophysics Data System (ADS)

    Carter, M.; Lehnert, K.

    2016-02-01

    Across the Ocean Sciences, research involves the collection and study of samples collected above, at, and below the seafloor, including but not limited to rocks, sediments, fluids, gases, and living organisms. Many domains in the Earth Sciences have recently expressed the need for better discovery, access, and sharing of scientific samples and collections (EarthCube End-User Domain workshops, 2012 and 2013, http://earthcube.org/info/about/end-user-workshops), as has the US government (OSTP Memo, March 2014). iSamples (Internet of Samples in the Earth Sciences) is a Research Coordination Network within the EarthCube program that aims to advance the use of innovative cyberinfrastructure to support and advance the utility of physical samples and sample collections for science and ensure reproducibility of sample-based data and research results. iSamples strives to build, grow, and foster a new community of practice, in which domain scientists, curators of sample repositories and collections, computer and information scientists, software developers and technology innovators engage in and collaborate on defining, articulating, and addressing the needs and challenges of physical samples as a critical component of digital data infrastructure. A primary goal of iSamples is to deliver a community-endorsed set of best practices and standards for the registration, description, identification, and citation of physical specimens and define an actionable plan for implementation. iSamples conducted a broad community survey about sample sharing and has created 5 different working groups to address the different challenges of developing the internet of samples - from metadata schemas and unique identifiers to an architecture for a shared cyberinfrastructure to manage collections, to digitization of existing collections, to education, and ultimately to establishing the physical infrastructure that will ensure preservation and access of the physical samples. Repositories that curate

  20. Effect of Sampling Schedule on Pharmacokinetic Parameter Estimates of Promethazine in Astronauts

    NASA Technical Reports Server (NTRS)

    Boyd, Jason L.; Wang, Zuwei; Putcha, Lakshmi

    2005-01-01

    Six astronauts on the Shuttle Transport System (STS) participated in an investigation of the pharmacokinetics of promethazine (PMZ), a medication used for the treatment of space motion sickness (SMS) during flight. Each crewmember completed the protocol once during flight and repeated it thirty days after return to Earth. Saliva samples were collected at scheduled times for 72 h after PMZ administration; more frequent samples were collected on the ground than during flight owing to schedule constraints in flight. PMZ concentrations in saliva were determined by a liquid chromatographic/mass spectrometric (LC-MS) assay, and pharmacokinetic parameters (PKPs) were calculated using the actual flight and ground-based data sets and using a ground sampling schedule time-matched to that during flight. Volume of distribution (Vc) and clearance (Cls) decreased during flight compared with the time-matched ground data set; however, Cls and Vc estimates were higher for all subjects when partial ground data sets were used for analysis. Area under the curve (AUC) normalized by administered dose was similar in flight and partial ground data; however, AUC was significantly lower using time-matched sampling compared with the full data set on the ground. Half-life (t1/2) was longest during flight, shorter with the matched sampling schedule on the ground, and shortest when the complete ground data set was used. Maximum concentration (Cmax) and time to Cmax (tmax), parameters of drug absorption, showed a similar trend: Cmax lowest and tmax longest during flight, lower with time-matched ground data, and Cmax highest and tmax shortest with the full ground data.

  1. SERE: Single-parameter quality control and sample comparison for RNA-Seq

    PubMed Central

    2012-01-01

    Background Assessing the reliability of experimental replicates (or global alterations corresponding to different experimental conditions) is a critical step in analyzing RNA-Seq data. Pearson's correlation coefficient r has been widely used in the RNA-Seq field even though its statistical characteristics may be poorly suited to the task. Results Here we present a single-parameter test procedure for count data, the Simple Error Ratio Estimate (SERE), that can determine whether two RNA-Seq libraries are faithful replicates or globally different. Benchmarking shows that the interpretation of SERE is unambiguous regardless of the total read count or the range of expression differences among bins (exons or genes): a score of 1 indicates faithful replication (i.e., samples are affected only by Poisson variation of individual counts), a score of 0 indicates data duplication, and scores >1 correspond to true global differences between RNA-Seq libraries. By contrast, the interpretation of Pearson's r is generally ambiguous and highly dependent on sequencing depth and the range of expression levels inherent to the sample (difference between lowest and highest bin count). Cohen's simple kappa results are also ambiguous and highly dependent on the choice of bins. For quantifying global sample differences SERE performs similarly to a measure based on the negative binomial distribution yet is simpler to compute. Conclusions SERE can therefore serve as a straightforward and reliable statistical procedure for the global assessment of pairs or large groups of RNA-Seq datasets by a single statistical parameter. PMID:23033915

  2. Improved Horvitz-Thompson Estimation of Model Parameters from Two-phase Stratified Samples: Applications in Epidemiology

    PubMed Central

    Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal

    2009-01-01

    The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455
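
    As a toy illustration of the phase-two inverse probability weighting these analyses start from, the following sketch computes a plain Horvitz-Thompson (Hajek-style) mean with known sampling fractions; the calibration and estimated-weight adjustments that are the paper's contribution are not shown, and all numbers are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Phase one: a cohort of N subjects; an auxiliary variable x is known
    # for everyone, while y is measured only on the phase-two sample.
    N = 10_000
    x = rng.normal(size=N)
    y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=N)

    # Phase two: stratified Bernoulli sampling with known fractions
    strata = (x > 0).astype(int)
    frac = np.where(strata == 1, 0.5, 0.1)        # oversample the x > 0 stratum
    sampled = rng.random(N) < frac

    # inverse probability weighted (Hajek-style) estimate of the cohort mean
    w = 1.0 / frac[sampled]
    ht_mean = np.sum(w * y[sampled]) / np.sum(w)
    print(f"IPW mean: {ht_mean:.3f}, full-cohort mean: {y.mean():.3f}")
    ```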

  3. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimates of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy of vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could still deliver reasonable results. Furthermore, the results showed that sampling size and height threshold were additional key factors affecting the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve estimation accuracy. Our results also implied that a higher LiDAR point density, a larger sampling size and a higher height threshold were required for accurate corn LAI estimation than for height and biomass estimation. In general, our results provide valuable guidance for LiDAR data acquisition and for the estimation of vegetation biophysical parameters using LiDAR data.

  4. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structures used for modeling motor vehicle crashes remain the traditional Poisson and Poisson-gamma (or negative binomial) distributions; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method.
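
    The method-of-moments estimator named above is easy to stress-test by simulation; the sketch below (our parameter choices, not the paper's simulation design) shows how low sample means destabilize the dispersion estimate, frequently producing inadmissible non-positive values:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_nb(mu, alpha, n):
        # Poisson-gamma mixture with Var(Y) = mu + alpha * mu^2
        lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mu, size=n)
        return rng.poisson(lam)

    def mom_dispersion(y):
        # method-of-moments estimator: alpha_hat = (s^2 - ybar) / ybar^2
        m, v = y.mean(), y.var(ddof=1)
        return (v - m) / m ** 2

    for mu in (0.5, 2.0, 10.0):          # low to moderate sample means
        est = np.array([mom_dispersion(simulate_nb(mu, 0.5, n=50))
                        for _ in range(2000)])
        print(f"mu={mu:4}: mean alpha_hat={est.mean():.2f}, "
              f"share <= 0: {(est <= 0).mean():.0%}")
    ```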

  5. MontePython 3: Parameter inference code for cosmology

    NASA Astrophysics Data System (ADS)

    Brinckmann, Thejs; Lesgourgues, Julien; Audren, Benjamin; Benabed, Karim; Prunet, Simon

    2018-05-01

    MontePython 3 provides numerous ways to explore parameter space using Markov chain Monte Carlo (MCMC) sampling, including Metropolis-Hastings, nested sampling, Cosmo Hammer, and a Fisher sampling method. This improved version of the Monte Python (ascl:1307.002) parameter inference code for cosmology offers new ingredients that improve the performance of Metropolis-Hastings sampling, speeding up convergence and offering significant time savings in difficult runs. Additional likelihoods and plotting options are available, as are post-processing algorithms such as importance sampling and adding derived parameters.
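
    As context for the sampler names above, the Metropolis-Hastings core is only a few lines. A generic textbook sketch (not MontePython's implementation):

    ```python
    import numpy as np

    def log_post(theta):
        # toy target: standard bivariate normal, up to an additive constant
        return -0.5 * float(theta @ theta)

    def metropolis_hastings(log_target, theta0, n_steps, step=0.5, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        lp = log_target(theta)
        chain = np.empty((n_steps, theta.size))
        for i in range(n_steps):
            prop = theta + step * rng.normal(size=theta.size)
            lp_prop = log_target(prop)
            if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
                theta, lp = prop, lp_prop
            chain[i] = theta
        return chain

    chain = metropolis_hastings(log_post, [3.0, -3.0], 20_000)
    print(chain[5_000:].mean(axis=0), chain[5_000:].std(axis=0))  # ~ (0,0), ~ (1,1)
    ```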

  6. Sampling of Stochastic Input Parameters for Rockfall Calculations and for Structural Response Calculations Under Vibratory Ground Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. Gross

    2004-09-01

    The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of "Drift Degradation Analysis", BSC 2004 [DIRS 166107]); (2) sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of "Drift Degradation Analysis", BSC 2004 [DIRS 166107]); (3) sampled values of ground motion time history and metal-to-metal and metal-to-rock friction coefficients for analysis of waste package and drip shield damage due to vibratory motion in "Structural Calculations of Waste Package Exposed to Vibratory Ground Motion" (BSC 2004 [DIRS 167083]) and in "Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion" (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations. The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for

  7. Power extraction calculation improvement when local parameters are included

    NASA Astrophysics Data System (ADS)

    Flores-Mateos, L. M.; Hartnett, M.

    2016-02-01

    The improvement of tidal resource assessment is studied by comparing two approaches in a two-dimensional, finite-difference hydrodynamic model (DIVAST-ADI), applied to a channel of non-varying cross-sectional area that connects two large basins. The first strategy considers a constant thrust coefficient; the second uses the local field parameters around the turbine. These parameters are obtained by applying open-channel theory to the tidal stream and treating the turbine as a linear momentum actuator disc. The parameters are the speeds and depths upstream and downstream of the turbine, together with the blockage ratio, the wake velocity and the bypass coefficients; they have already been incorporated in the model. Figure (a) shows the numerical configuration at high tide developed with DIVAST-ADI. The experiment uses two open boundary conditions: the first is a sinusoidal forcing introduced as a water level located at (I, J=1), and the second imposes zero velocity and a constant water depth at (I, J=362); when the turbine is introduced, it is placed in the middle of the channel (I=161, J=181). The influence of the turbine on the velocity and elevation around the turbine region is evident; figures (b) and (c) show that the turbine produces a discontinuity in the depth and velocity profiles when a transect is plotted along the channel. Finally, the configuration implemented reproduced the quasi-steady flow condition with satisfactory accuracy, even without shock-capturing capability, and the parameter range 0.01 < α4 < 0.55 ...

  8. Efficient Bayesian parameter estimation with implicit sampling and surrogate modeling for a vadose zone hydrological problem

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Pau, G. S. H.; Finsterle, S.

    2015-12-01

    Parameter inversion involves inferring model parameter values from sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that need to be run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with only approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed over the prior parameter space perform poorly. It is thus impractical to replace the hydrological model by a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure.
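
    The "IS with linear map" idea, centering an importance-sampling proposal at the MAP point with the local curvature and reweighting, can be shown on a one-dimensional toy target standing in for the expensive TOUGH2 forward model (illustrative only):

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # toy posterior, known only up to a constant (stands in for a forward model)
    def neg_log_post(theta):
        return 0.5 * (theta - 1.0) ** 2 + 0.1 * theta ** 4

    # Step 1: find the MAP point and the local curvature (the "linear map")
    res = minimize_scalar(neg_log_post)
    map_est, h = res.x, 1e-3
    curv = (neg_log_post(map_est + h) - 2 * res.fun + neg_log_post(map_est - h)) / h**2
    sigma = 1.0 / np.sqrt(curv)

    # Step 2: draw proposals near the MAP and reweight by target / proposal
    rng = np.random.default_rng(3)
    theta = rng.normal(map_est, sigma, size=500)   # few "forward runs" needed
    log_w = -neg_log_post(theta) + 0.5 * ((theta - map_est) / sigma) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    print("posterior mean ~", np.sum(w * theta))
    ```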

  9. Characterization of PDMS samples with variation of its synthesis parameters for tunable optics applications

    NASA Astrophysics Data System (ADS)

    Marquez-Garcia, Josimar; Cruz-Félix, Angel S.; Santiago-Alvarado, Agustin; González-García, Jorge

    2017-09-01

    Nowadays the elastomer known as polydimethylsiloxane (PDMS, Sylgard 184), owing to its physical properties, low cost and easy handling, has become a frequently used material for the elaboration of optical components such as variable-focal-length liquid lenses, optical waveguides and solid elastic lenses. In recent years, we have been working on the characterization of this material for applications in the visual sciences. In this work, we describe the elaboration of PDMS-made samples and present the physical and optical properties of the samples as their synthesis parameters are varied, namely the base:curing-agent ratio and both curing time and temperature. For the mechanical properties, tensile and compression tests were carried out on a universal testing machine to obtain the respective stress-strain curves; for the optical properties, UV-vis spectroscopy was applied to the samples to obtain transmittance and absorbance curves. Refractive index variation was measured with an Abbe refractometer. Results from the characterization will determine the proper synthesis parameters for the elaboration of tunable refractive surfaces for potential applications in robotics.

  10. On the relation between correlation dimension, approximate entropy and sample entropy parameters, and a fast algorithm for their calculation

    NASA Astrophysics Data System (ADS)

    Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw

    2012-12-01

    We explore the relation between the correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided, and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To get an exact algorithmic relation between the three parameters we construct a very fast algorithm for their simultaneous calculation, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10^4 points within minutes on an average notebook computer.
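
    For reference, a naive O(N^2) sample entropy using all available templates, in the spirit described above; this is not the authors' fast simultaneous algorithm, whose point is precisely to avoid the quadratic cost shown here:

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        # SampEn = -ln(A/B), where A and B count template pairs (excluding
        # self-matches) within tolerance r*std(x) at lengths m+1 and m
        x = np.asarray(x, float)
        tol = r * x.std()

        def count(mm):
            templ = np.lib.stride_tricks.sliding_window_view(x, mm)
            c = 0
            for i in range(len(templ) - 1):   # Chebyshev distance to later templates
                d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
                c += np.count_nonzero(d <= tol)
            return c

        return -np.log(count(m + 1) / count(m))

    rng = np.random.default_rng(7)
    print(sample_entropy(rng.normal(size=2000)))              # white noise: high
    print(sample_entropy(np.sin(np.linspace(0, 60, 2000))))   # regular signal: low
    ```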

  11. [Identification of Systemic Contaminations with Legionella Spec. in Drinking Water Plumbing Systems: Sampling Strategies and Corresponding Parameters].

    PubMed

    Völker, S; Schreiber, C; Müller, H; Zacharias, N; Kistemann, T

    2017-05-01

    After the amendment of the Drinking Water Ordinance in 2011, the requirements for hygienic-microbiological monitoring of drinking water installations have increased significantly. In the BMBF-funded project "Biofilm Management" (2010-2014), we examined the extent to which established sampling strategies can, in practice, uncover drinking water plumbing systems systemically colonized with Legionella. Moreover, we investigated additional parameters that might be suitable for detecting systemic contamination. We subjected the drinking water plumbing systems of 8 buildings with known microbial contamination (Legionella) to intensive hygienic-microbiological sampling with high spatial and temporal resolution. A total of 626 drinking hot water samples were analyzed with classical culture-based methods. In addition, comprehensive hygienic observations were conducted in each building, and qualitative interviews with operators and users were carried out. Collected tap-specific parameters were quantitatively analyzed by means of sensitivity and accuracy calculations. The systemic presence of Legionella in drinking water plumbing systems has a high spatial and temporal variability. Established sampling strategies were only partially suitable for detecting long-term Legionella contamination in practice. In particular, sampling of hot water at the calorifier and the circulation re-entry showed little significance in terms of contamination events. For detecting the systemic presence of Legionella, the parameters stagnation (qualitatively assessed) and temperature (compliance with the 5 K rule) performed better. © Georg Thieme Verlag KG Stuttgart · New York.

  12. Comparison of two blood sampling techniques for the determination of coagulation parameters in the horse: Jugular venipuncture and indwelling intravenous catheter.

    PubMed

    Mackenzie, C J; McGowan, C M; Pinchbeck, G; Carslake, H B

    2018-05-01

    Evaluation of coagulation status is an important component of critical care. Ongoing monitoring of coagulation status in hospitalised horses has previously been performed via serial venipuncture owing to concerns that sampling directly from the intravenous catheter (IVC) may alter the accuracy of the results. Adverse effects such as patient anxiety and trauma to the sampled vessel could be avoided by the use of an indwelling IVC for repeat blood sampling. To compare coagulation parameters from blood obtained by jugular venipuncture with IVC sampling in critically ill horses. Prospective observational study. A single set of paired blood samples was obtained from horses (n = 55) admitted to an intensive care unit by direct jugular venipuncture and, following removal of a pre-sample, via an indwelling IVC. The following coagulation parameters were measured on venipuncture and IVC samples: whole blood prothrombin time (PT), fresh plasma PT and activated partial thromboplastin time (aPTT), and stored plasma antithrombin activity (AT) and fibrinogen concentration. D-dimer concentration was also measured in some horses (n = 22). Comparison of venipuncture and IVC results was performed using Lin's concordance correlation coefficient. Agreement between paired results was assessed using Bland-Altman analysis. Correlation was substantial and agreement was good between sampling methods for all parameters except AT and D-dimers. Each coagulation parameter was tested using only one assay. Sampling was limited to a convenience sample, and the timing of sample collection was not standardised in relation to when the catheter was flushed with heparinised saline. With the exception of AT and D-dimers, coagulation parameters measured on blood samples obtained via an IVC have clinically equivalent values to those obtained by jugular venipuncture. © 2017 EVJ Ltd.

  13. Identification of modal parameters including unmeasured forces and transient effects

    NASA Astrophysics Data System (ADS)

    Cauberghe, B.; Guillaume, P.; Verboven, P.; Parloo, E.

    2003-08-01

    In this paper, a frequency-domain method to estimate modal parameters from short data records with known (measured) input forces and unknown input forces is presented. The method can be used for an experimental modal analysis, an operational modal analysis (output-only data), and the combination of both. Traditional experimental and operational modal analyses in the frequency domain start, respectively, from frequency response functions and spectral density functions. To estimate these functions accurately, sufficient data have to be available. The technique developed in this paper estimates the modal parameters directly from the Fourier spectra of the outputs and the known inputs. Instead of applying Hanning windows to these short data records, the transient effects are estimated simultaneously with the modal parameters. The method is illustrated, tested and validated by Monte Carlo simulations and experiments. The presented method for processing short data sequences leads to unbiased estimates with a small variance in comparison with the more traditional approaches.

  14. Behavior of sensitivities in the one-dimensional advection-dispersion equation: Implications for parameter estimation and sampling design

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1987-01-01

    The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
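
    Several of the listed principles, for example the order-of-magnitude gap between velocity and dispersion sensitivities, can be checked numerically from an analytical solution. A sketch using the Ogata-Banks solution for a continuous source at x = 0, with finite-difference scaled sensitivities (the parameter values are ours, not the paper's):

    ```python
    import numpy as np
    from scipy.special import erfc

    def ogata_banks(x, t, v, D):
        # 1-D advection-dispersion, continuous source at x = 0; returns C/C0
        a = 2.0 * np.sqrt(D * t)
        return 0.5 * (erfc((x - v * t) / a) + np.exp(v * x / D) * erfc((x + v * t) / a))

    def sensitivity(param, x, t, v, D, eps=1e-4):
        # scaled sensitivity dC/d(ln p) by central finite differences
        if param == "v":
            return (ogata_banks(x, t, v * (1 + eps), D)
                    - ogata_banks(x, t, v * (1 - eps), D)) / (2 * eps)
        return (ogata_banks(x, t, v, D * (1 + eps))
                - ogata_banks(x, t, v, D * (1 - eps))) / (2 * eps)

    x, v, D = 10.0, 0.5, 0.1
    for t in (10.0, 20.0, 30.0):       # before, during and after front passage
        sv = sensitivity("v", x, t, v, D)
        sD = sensitivity("D", x, t, v, D)
        print(f"t={t}: dC/dln(v)={sv:+.3f}, dC/dln(D)={sD:+.3f}")
    ```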

  15. Spectral Line Parameters Including Temperature Dependences of Self- and Air-Broadening in the 2←0 Band of CO at 2.3 Micrometers

    NASA Technical Reports Server (NTRS)

    Devi, V. Malathy; Benner, D. Chris; Smith, M. A. H.; Mantz, A. W.; Sung, K.; Brown, L. R.; Predoi-Cross, A.

    2012-01-01

    Temperature dependences of pressure-broadened half-width and pressure-induced shift coefficients, along with accurate positions and intensities, have been determined for transitions in the 2←0 band of ¹²C¹⁶O from analyzing high-resolution and high signal-to-noise spectra recorded with two different Fourier transform spectrometers. A total of 28 spectra, 16 self-broadened and 12 air-broadened, recorded using high-purity (≥99.5% ¹²C-enriched) CO samples and CO diluted with dry air (research grade) at different temperatures and pressures, were analyzed simultaneously to maximize the accuracy of the retrieved parameters. The sample temperatures ranged from 150 to 298 K and the total pressures varied between 5 and 700 Torr. A multispectrum nonlinear least-squares fitting technique was used to adjust the rovibrational constants (G, B, D, etc.) and intensity parameters (including Herman-Wallis coefficients), rather than determining individual line positions and intensities. Self- and air-broadened Lorentz half-width coefficients, their temperature dependence exponents, self- and air-pressure-induced shift coefficients and their temperature dependences, self- and air-line mixing coefficients and their temperature dependences, and speed dependence have been retrieved from the analysis. Speed-dependent line shapes with line mixing, employing the off-diagonal relaxation matrix element formalism, were needed to minimize the fit residuals. This study presents a precise and complete set of spectral line parameters that consistently reproduce the spectrum of carbon monoxide over terrestrial atmospheric conditions.

  16. Estimation of genetic parameters and their sampling variances of quantitative traits in the type 2 modified augmented design

    USDA-ARS?s Scientific Manuscript database

    We proposed a method to estimate the error variance among non-replicated genotypes, and thus to estimate the genetic parameters, by using replicated controls. We derived formulas to estimate the sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...

  17. The impact of temporal sampling resolution on parameter inference for biological transport models.

    PubMed

    Harrison, Jonathan U; Baker, Ruth E

    2018-06-25

    Imaging data have become an essential tool to explore key biological questions at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and they have the potential to transform our understanding of important transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate mechanistic mathematical models to imaging data, we need to estimate their parameters. In this work we study how collecting data at different temporal resolutions impacts our ability to infer parameters of biological transport models, performing exact inference for simple velocity jump process models in a Bayesian framework. The question of how best to choose the frequency with which data are collected is prominent in a host of studies because the majority of imaging technologies place constraints on the frequency with which images can be taken, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we mitigate such errors by formulating the velocity jump process model within a hidden-states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and the extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates. Finally, we demonstrate the robustness of our methodology to model misspecification, and then apply
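
    A minimal illustration of why sampling resolution matters for such models: a simulated one-dimensional velocity jump (run-and-tumble) process whose naive switch-counting rate estimate degrades as the frame interval grows, because an even number of flips between frames goes unseen (toy model, not the authors' exact hidden-states inference):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def observe_vjp(rate, t_end, dt):
        # 1-D velocity jump process: the direction flips at exponentially
        # distributed times; we only record the direction on a grid of spacing dt
        t_next, v = rng.exponential(1 / rate), 1
        obs = np.empty(int(t_end / dt), dtype=int)
        for i in range(obs.size):
            t_frame = (i + 1) * dt
            while t_next < t_frame:        # reorientations inside this frame
                v = -v
                t_next += rng.exponential(1 / rate)
            obs[i] = v
        return obs

    true_rate = 1.0
    for dt in (0.05, 0.5, 2.0):            # fine to coarse temporal resolution
        obs = observe_vjp(true_rate, 5000.0, dt)
        switches = np.count_nonzero(obs[1:] != obs[:-1])
        print(f"dt={dt}: naive rate estimate {switches / ((obs.size - 1) * dt):.2f}"
              f" (true {true_rate})")
    ```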

  18. Analytical Parameters of an Amperometric Glucose Biosensor for Fast Analysis in Food Samples.

    PubMed

    Artigues, Margalida; Abellà, Jordi; Colominas, Sergi

    2017-11-14

    Amperometric biosensors based on the use of glucose oxidase (GOx) are able to combine the robustness of electrochemical techniques with the specificity of biological recognition processes. However, very little information can be found in the literature about the fundamental analytical parameters of these sensors. In this work, the analytical behavior of an amperometric biosensor based on the immobilization of GOx in a hydrogel (chitosan) onto highly ordered titanium dioxide nanotube arrays (TiO₂NTAs) has been evaluated. The GOx-Chitosan/TiO₂NTAs biosensor showed a sensitivity of 5.46 μA·mM⁻¹ with a linear range from 0.3 to 1.5 mM; its fundamental analytical parameters were studied using a commercial soft drink. The obtained results proved sufficient repeatability (RSD = 1.9%), reproducibility (RSD = 2.5%), accuracy (95-105% recovery), and robustness (RSD = 3.3%). Furthermore, no significant interferences from fructose, ascorbic acid or citric acid were observed. In addition, the storage stability was examined: after 30 days, the GOx-Chitosan/TiO₂NTAs biosensor retained 85% of its initial current response. Finally, the glucose content of different food samples was measured using the biosensor and compared with the respective HPLC value. In the worst case, a deviation smaller than 10% was obtained among the 20 samples evaluated.

  19. Comparison of haematology, coagulation and clinical chemistry parameters in blood samples from the sublingual vein and vena cava in Sprague-Dawley rats.

    PubMed

    Seibel, J; Bodié, K; Weber, S; Bury, D; Kron, M; Blaich, G

    2010-10-01

    The investigation of clinical pathology parameters (haematology, clinical chemistry and coagulation) is an important part of the preclinical evaluation of drug safety. However, the blood sampling method employed should avoid or minimize stress and injury in laboratory animals. In the present study, we compared the clinical pathology results from blood samples collected terminally from the vena cava (VC) immediately before necropsy with samples taken from the sublingual vein (VS), also prior to necropsy, in order to determine whether the sampling method has an influence on clinical pathology parameters. Forty-six 12-week-old male Sprague-Dawley rats were assigned to two groups (VC or VS; n = 23 each). All rats were anaesthetized with isoflurane prior to sampling. In the VC group, blood was withdrawn from the inferior VC. For VS sampling, the tongue was gently pulled out and the VS was punctured. The haematology, coagulation and clinical chemistry parameters were compared. Equivalence was established for 13 parameters, such as mean corpuscular volume, white blood cells and calcium. No equivalence was found for the remaining 26 parameters, although they were considered similar when compared with historical data and normal ranges. The most conspicuous finding was that activated prothrombin time was 30.3% shorter in blood taken from the VC (16.6 ± 0.89 s) than in the VS samples (23.8 ± 1.58 s). In summary, blood sampling from the inferior VC prior to necropsy appears to be a suitable and reliable method for terminal blood sampling that reduces stress and injury to laboratory rats in preclinical drug safety studies.

  20. Wavelength dispersive X-ray fluorescence analysis using fundamental parameter approach of Catha edulis and other related plant samples

    NASA Astrophysics Data System (ADS)

    Shaltout, Abdallah A.; Moharram, Mohammed A.; Mostafa, Nasser Y.

    2012-01-01

    This work is the first attempt to quantify trace elements in the Catha edulis plant (Khat) with a fundamental parameter approach. C. edulis is a famous drug plant in East Africa and the Arabian Peninsula. We have previously confirmed that hydroxyapatite represents one of the main inorganic compounds in the leaves and stalks of C. edulis. Comparable plant leaves from basil, mint and green tea were included in the present investigation, as well as trifolium leaves as a non-related plant. The elemental analyses of the plants were done by wavelength dispersive X-ray fluorescence (WDXRF) spectroscopy. Standard-less quantitative WDXRF analysis was carried out based on the fundamental parameter approach. According to the standard-less analysis algorithms, there is an essential need for an accurate determination of the amount of organic material in the sample. A new approach, based on differential thermal analysis, was successfully used for the organic material determination. The results obtained with this approach were in good agreement with the commonly used methods. Using the developed method, quantitative analysis results for eighteen elements, including Al, Br, Ca, Cl, Cu, Fe, K, Na, Ni, Mg, Mn, P, Rb, S, Si, Sr, Ti and Zn, were obtained for each plant. The results for the certified reference material of green tea (NCSZC73014, China National Analysis Center for Iron and Steel, Beijing, China) confirmed the validity of the proposed method.

  1. Determination of the pathological state of skin samples by optical polarimetry parameters

    NASA Astrophysics Data System (ADS)

    Fanjul-Vélez, F.; Ortega-Quijano, N.; Buelta, L.; Arce-Diego, J. L.

    2008-11-01

    Polarimetry comprises a series of powerful optical techniques that characterize the polarization behaviour of a sample. In this work, we propose a method for applying polarimetric procedures to the characterization of biological tissues, in order to differentiate between healthy and pathological tissues on a polarimetric basis. Usually, such diseases are diagnosed in medical morphology based on histological alterations of the tissue. The fact that these alterations are reflected in polarization information highlights the suitability of polarimetric procedures for diagnostic purposes. The analysis is mainly focused on the depolarization properties of the media, since the internal structure strongly affects the polarization state of the light that interacts with the sample. Therefore, a method is developed to determine the correlation between pathological ultrastructural characteristics and the subsequent variations in the polarimetric parameters of the backscattered light. The study is applied to three samples of porcine skin corresponding to a healthy region, a mole, and a cancerous region. The results show that the proposed method is an adequate technique for achieving early, accurate and effective cancer detection.

  2. Estimation of the Ratio of Scale Parameters in the Two Sample Problem with Arbitrary Right Censorship.

    DTIC Science & Technology

    1980-06-01

    A two-sample version of the Cramér-von Mises statistic for right-censored data ... estimator for exponential distributions. KEY WORDS: Cramér-von Mises distance; Kaplan-Meier estimators; Right censorship; Scale parameter; Hodges and ... Suppose that two positive random variables ... differ in distribution only by their scale parameters. That is, there exists a positive ...

  3. Systems and methods for measuring a parameter of a landfill including a barrier cap and wireless sensor systems and methods

    DOEpatents

    Kunerth, Dennis C.; Svoboda, John M.; Johnson, James T.

    2007-03-06

    A method of measuring a parameter of a landfill including a cap, without passing wires through the cap, includes burying a sensor apparatus in the landfill prior to closing the landfill with the cap; providing a reader capable of communicating with the sensor apparatus via radio frequency (RF); placing an antenna above the barrier, spaced apart from the sensor apparatus; coupling the antenna to the reader either before or after placing the antenna above the barrier; providing power to the sensor apparatus, via the antenna, by generating a field using the reader; accumulating and storing power in the sensor apparatus; sensing a parameter of the landfill using the sensor apparatus while using power; and transmitting the sensed parameter to the reader via a wireless response signal. A system for measuring a parameter of a landfill is also provided.

  4. Accuracy in parameter estimation for targeted effects in structural equation modeling: sample size planning for narrow confidence intervals.

    PubMed

    Lai, Keke; Kelley, Ken

    2011-06-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association
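
    The simplest instance of the accuracy-in-parameter-estimation logic, choosing n so that a confidence interval for a normal mean has expected full width at most ω, fits in a few lines; the paper generalizes this to targeted SEM effects, with the real methods in the MBESS package:

    ```python
    import math
    from statistics import NormalDist

    def n_for_ci_width(sigma, omega, conf=0.95):
        # smallest n giving expected CI full-width <= omega for a normal
        # mean with known sigma: width = 2 * z * sigma / sqrt(n)
        z = NormalDist().inv_cdf(0.5 + conf / 2)
        return math.ceil((2 * z * sigma / omega) ** 2)

    print(n_for_ci_width(sigma=1.0, omega=0.2))   # -> 385
    ```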

  5. Analytical Parameters of an Amperometric Glucose Biosensor for Fast Analysis in Food Samples

    PubMed Central

    2017-01-01

    Amperometric biosensors based on the use of glucose oxidase (GOx) are able to combine the robustness of electrochemical techniques with the specificity of biological recognition processes. However, very little information can be found in the literature about the fundamental analytical parameters of these sensors. In this work, the analytical behavior of an amperometric biosensor based on the immobilization of GOx in a hydrogel (chitosan) onto highly ordered titanium dioxide nanotube arrays (TiO2NTAs) has been evaluated. The GOx-Chitosan/TiO2NTAs biosensor showed a sensitivity of 5.46 μA·mM⁻¹ with a linear range from 0.3 to 1.5 mM; its fundamental analytical parameters were studied using a commercial soft drink. The obtained results proved sufficient repeatability (RSD = 1.9%), reproducibility (RSD = 2.5%), accuracy (95-105% recovery), and robustness (RSD = 3.3%). Furthermore, no significant interferences from fructose, ascorbic acid or citric acid were observed. In addition, the storage stability was examined: after 30 days, the GOx-Chitosan/TiO2NTAs biosensor retained 85% of its initial current response. Finally, the glucose content of different food samples was measured using the biosensor and compared with the respective HPLC value. In the worst case, a deviation smaller than 10% was obtained among the 20 samples evaluated. PMID:29135931

  6. The Effects of Test Length and Sample Size on Item Parameters in Item Response Theory

    ERIC Educational Resources Information Center

    Sahin, Alper; Anil, Duygu

    2017-01-01

    This study investigates the effects of sample size and test length on item-parameter estimation in test development utilizing three unidimensional dichotomous models of item response theory (IRT). For this purpose, a real language test comprising 50 items was administered to 6,288 students. Data from this test were used to obtain data sets of…

  7. What predicts inattention in adolescents? An experience-sampling study comparing chronotype, subjective, and objective sleep parameters.

    PubMed

    Hennig, Timo; Krkovic, Katarina; Lincoln, Tania M

    2017-10-01

    Many adolescents sleep insufficiently, which may negatively affect their functioning during the day. To improve sleep interventions, we need a better understanding of the specific sleep-related parameters that predict poor functioning. We investigated to what extent subjective and objective parameters of sleep in the preceding night (state parameters) and the trait variable chronotype predict daytime inattention as an indicator of poor functioning. We conducted an experience-sampling study over one week with 61 adolescents (30 girls, 31 boys; mean age = 15.5 years, standard deviation = 1.1 years). Participants rated their inattention twice each day (morning, afternoon) on a smartphone. Subjective sleep parameters (feeling rested, positive affect upon awakening) were assessed each morning on the smartphone. Objective sleep parameters (total sleep time, sleep efficiency, wake after sleep onset) were assessed with a permanently worn actigraph. Chronotype was assessed with a self-rated questionnaire at baseline. We tested the effect of subjective and objective state parameters of sleep on daytime inattention using multilevel multiple regressions. Then, we tested whether the putative effect of the trait parameter chronotype on inattention is mediated through state sleep parameters, again using multilevel regressions. We found that short sleep time, but no other state sleep parameter, predicted inattention, with a small effect size. As expected, the trait parameter chronotype also predicted inattention: morningness was associated with less inattention. However, this association was not mediated by state sleep parameters. Our results indicate that short sleep time causes inattention in adolescents. Extended sleep time might thus alleviate inattention to some extent. However, it cannot alleviate the effect of being an 'owl'. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. A system of equations to approximate the pharmacokinetic parameters of lacosamide at steady state from one plasma sample.

    PubMed

    Cawello, Willi; Schäfer, Carina

    2014-08-01

    Frequent plasma sampling to monitor the pharmacokinetic (PK) profile of antiepileptic drugs (AEDs) is invasive, costly and time consuming. For drugs with a well-defined PK profile, such as the AED lacosamide, equations can accurately approximate PK parameters from one steady-state plasma sample. Equations were derived to approximate the steady-state peak and trough lacosamide plasma concentrations (Cpeak,ss and Ctrough,ss, respectively) and the area under the concentration-time curve during a dosing interval (AUCτ,ss) from one plasma sample. Lacosamide (ka: ∼2 h⁻¹; ke: ∼0.05 h⁻¹, corresponding to a half-life of 13 h) was calculated to reach Cpeak,ss after ∼1 h (tmax,ss). Equations were validated by comparing approximations to reference PK parameters obtained from single plasma samples drawn 3-12 h following lacosamide administration, using data from a double-blind, placebo-controlled, parallel-group PK study. Values of relative bias (accuracy) between -15% and +15% and root mean square error (RMSE) values ≤15% (precision) were considered acceptable for validation. Thirty-five healthy subjects (12 young males; 11 elderly males, 12 elderly females) received lacosamide 100 mg/day for 4.5 days. Equation-derived PK values were compared with reference mean Cpeak,ss, Ctrough,ss and AUCτ,ss values. Equation-derived PK data had a precision of 6.2% and accuracies of -8.0%, 2.9%, and -0.11%, respectively. Equation-derived versus reference PK values for individual samples obtained 3-12 h after lacosamide administration showed a correlation (R2) range of 0.88-0.97 for AUCτ,ss. The correlation range for Cpeak,ss and Ctrough,ss was 0.65-0.87. Error analyses for individual sample comparisons were independent of time. The derived equations approximated lacosamide Cpeak,ss, Ctrough,ss and AUCτ,ss from one steady-state plasma sample within the validation range. Approximated PK parameters were within the accepted validation criteria when compared with reference PK values. Copyright © 2014 Elsevier B.V. All rights reserved.
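
    The approach relies on the steady-state one-compartment oral profile being known up to a scale factor fixed by the single measured sample. A sketch of that logic using the rate constants quoted above and otherwise made-up numbers (not the paper's validated equations):

    ```python
    import numpy as np

    ka, ke, tau = 2.0, 0.05, 24.0     # absorption/elimination rates (1/h), interval (h)

    def shape(t):
        # steady-state one-compartment oral profile; the dose/volume prefactor
        # is omitted because it cancels when scaling to the measured sample
        return (np.exp(-ke * t) / (1 - np.exp(-ke * tau))
                - np.exp(-ka * t) / (1 - np.exp(-ka * tau)))

    t_s, c_s = 6.0, 1.8               # one measured sample: time (h), concentration
    scale = c_s / shape(t_s)

    grid = np.linspace(0.0, tau, 2001)
    profile = scale * shape(grid)
    auc_tau = float(np.sum((profile[1:] + profile[:-1]) / 2 * np.diff(grid)))
    print(f"Cpeak,ss ~ {profile.max():.2f} at tmax,ss ~ {grid[profile.argmax()]:.1f} h")
    print(f"Ctrough,ss ~ {profile[-1]:.2f}, AUCtau,ss ~ {auc_tau:.1f}")
    ```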

  9. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests and the widely distributed read counts and dispersions of different genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as the Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to perform power and sample size estimation for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.

  10. Hybrid Gibbs Sampling and MCMC for CMB Analysis at Small Angular Scales

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Eriksen, H. K.; Wandelt, B. D.; Gorski, K. M.; Huey, G.; O'Dwyer, I. J.; Dickinson, C.; Banday, A. J.; Lawrence, C. R.

    2008-01-01

    A) Gibbs sampling has now been validated as an efficient, statistically exact, and practically useful method for "low-L" analysis (as demonstrated on WMAP temperature and polarization data). B) We are extending Gibbs sampling to directly propagate uncertainties in both foreground and instrument models to the total uncertainty in cosmological parameters for the entire range of angular scales relevant for Planck. C) This is made possible by the inclusion of foreground model parameters in Gibbs sampling and by hybrid MCMC and Gibbs sampling for the low signal-to-noise (high-L) regime. D) Future items to be included in the Bayesian framework: 1) integration with the hybrid likelihood (or posterior) code for cosmological parameters; 2) other uncertainties in instrumental systematics (i.e., beam uncertainties, noise estimation, calibration errors, and others).
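
    For readers new to the method, the core move in (A) is alternating draws from exact conditional distributions. A toy Gibbs sampler for a bivariate normal stand-in (not the CMB signal/C_l machinery):

    ```python
    import numpy as np

    def gibbs_bivariate_normal(rho, n_steps, seed=0):
        # Gibbs sampling for a zero-mean bivariate normal with correlation rho:
        # each coordinate is drawn from its exact conditional given the other
        rng = np.random.default_rng(seed)
        x = y = 0.0
        sd = np.sqrt(1 - rho ** 2)
        out = np.empty((n_steps, 2))
        for i in range(n_steps):
            x = rng.normal(rho * y, sd)   # draw x | y
            y = rng.normal(rho * x, sd)   # draw y | x
            out[i] = x, y
        return out

    s = gibbs_bivariate_normal(0.9, 50_000)
    print(np.corrcoef(s[1000:].T))        # off-diagonal ~ 0.9
    ```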

  11. Impact of sampling parameters on the radical scavenging potential of olive (Olea europaea L.) leaves.

    PubMed

    Papoti, Vassiliki T; Tsimidou, Maria Z

    2009-05-13

    The impact of sampling parameters, that is, cultivar, leaf age, and sampling date, on the radical scavenging potential of olive leaf extracts was examined via the DPPH• and other assays. Total phenol content was estimated colorimetrically and by fluorometry, whereas phenol composition was assessed by RP-HPLC coupled with diode array, fluorometric, and MS detection systems. Oleuropein was not always the major leaf constituent. Considerable differences noted in individual phenol levels (hydroxytyrosol, oleuropein and other secoiridoids, verbascoside, and flavonoids) among samples were not reflected in either the total phenol content or the radical scavenging potential of the extracts. It can be suggested that olive leaf is a robust source of radical scavengers throughout the year and that differentiation in the levels of individual components depends on sampling period rather than on cultivar or age; the latter does not show predictable regularity. Exploitation of all types of leaves expected on an olive tree shoot for the extraction of bioactive compounds is feasible.

  12. A novel dispersion compensating fiber grating with a large chirp parameter and period sampled distribution

    NASA Astrophysics Data System (ADS)

    Xia, Li; Li, Xuhui; Chen, Xiangfei; Xie, Shizhong

    2003-11-01

    A novel fiber grating structure is proposed for dispersion compensation. This kind of grating can be produced with a large chirp parameter and a period-sampled distribution along the grating length. There are multiple channels within a wide bandwidth, and each channel has different dispersion and bandwidth. The dispersion compensation effect of this specially designed grating is verified through system simulation.

  13. Comparison of three nondestructive and contactless techniques for investigations of recombination parameters on an example of silicon samples

    NASA Astrophysics Data System (ADS)

    Chrobak, Ł.; Maliński, M.

    2018-06-01

    This paper presents a comparison of three nondestructive and contactless techniques used for the determination of recombination parameters of silicon samples: the photoacoustic method, the modulated free-carrier absorption method, and the photothermal radiometry method. The experimental set-ups used to measure the recombination parameters with these methods, as well as the theoretical models used to interpret the experimental data, are presented and described. The experimental results and their respective fits obtained with these nondestructive techniques are shown and discussed. The values of the recombination parameters obtained with the three methods are presented and compared, and the main advantages and disadvantages of each method are discussed.

  14. A design methodology for nonlinear systems containing parameter uncertainty

    NASA Technical Reports Server (NTRS)

    Young, G. E.; Auslander, D. M.

    1983-01-01

    In the present design methodology for nonlinear systems containing parameter uncertainty, a generalized sensitivity analysis is incorporated which employs parameter-space sampling and statistical inference. For a system with j adjustable and k nonadjustable parameters, this methodology (which includes an adaptive random search strategy) determines the combination of j adjustable parameter values that maximizes the probability of the performance indices simultaneously satisfying the design criteria, despite the uncertainty due to the k nonadjustable parameters.
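
    A hedged sketch of this strategy on an invented toy system (the performance index, the distribution of the nonadjustable parameter, and the design criterion are all assumptions for illustration; the paper's adaptive random search is more elaborate than this plain random search):

```python
# Random search over an adjustable parameter j, scoring each candidate by the
# Monte Carlo probability that a performance index meets its criterion under
# uncertainty in a nonadjustable parameter k.
import numpy as np

rng = np.random.default_rng(0)

def performance(adjustable, nonadjustable):
    # stand-in performance index for a hypothetical system
    return -(adjustable - nonadjustable) ** 2 + 1.0

def success_probability(adjustable, n_mc=500):
    k = rng.normal(1.0, 0.3, n_mc)                    # uncertain parameter
    return np.mean(performance(adjustable, k) > 0.5)  # design criterion

best_j, best_p = None, -1.0
for _ in range(200):                                  # crude random search
    j = rng.uniform(0.0, 2.0)
    p = success_probability(j)
    if p > best_p:
        best_j, best_p = j, p

print(best_j, best_p)   # best candidate should lie near k's mean of 1.0
```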

  15. The 4-parameter Compressible Packing Model (CPM) including a critical cavity size ratio

    NASA Astrophysics Data System (ADS)

    Roquier, Gerard

    2017-06-01

    The 4-parameter Compressible Packing Model (CPM) has been developed to predict the packing density of mixtures of bidisperse spherical particles. The four parameters are the wall-effect and loosening-effect coefficients, the compaction index, and a critical cavity size ratio. The two geometrical interactions have been studied theoretically on the basis of a spherical cell centered on a secondary-class bead. For the loosening effect, a critical cavity size ratio is introduced, below which a fine particle can be inserted into the small cavity created by touching coarser particles. This is the only parameter that requires adaptation to extend the model to other types of particles. The 4-parameter CPM demonstrates its efficiency on frictionless glass beads (300 values), numerically simulated spherical particles (20 values), rounded natural particles (125 values), and crushed particles (335 values), with correlation coefficients of 99.0%, 98.7%, 97.8%, and 96.4%, and mean deviations of 0.007, 0.006, 0.007, and 0.010, respectively.

  16. Infection frequency of Epstein-Barr virus in subgingival samples from patients with different periodontal status and its correlation with clinical parameters

    PubMed Central

    Wu, Yan-min; Yan, Jie; Chen, Li-li; Sun, Wei-lian; Gu, Zhi-yuan

    2006-01-01

    Objective: To detect the infection frequencies of different genotypes of Epstein-Barr virus (EBV) in subgingival samples from chronic periodontitis (CP) patients, and to discuss the correlation between EBV infection and clinical parameters. Methods: A nested-PCR assay was used to detect EBV-1 and EBV-2 in subgingival samples from 65 CP patients, 65 gingivitis patients and 24 periodontally healthy individuals. The amplicons were further identified by restriction fragment length polymorphism (RFLP) analysis with the endonucleases Afa I and Stu I. Clinical parameters mainly included bleeding on probing (BOP), probing depth (PD), and attachment loss (AL) at six sites of the dentition. Results: In CP patients, gingivitis patients and periodontally healthy individuals, the infection frequencies were 47.7%, 24.6% and 16.7% for EBV-1, and 15.4%, 7.7% and 0% for EBV-2, respectively. Co-infection with EBV-1 and EBV-2 was found in 2 of the 65 CP patients. The positive rate of EBV-1 in CP patients was higher than that in gingivitis patients (P=0.01) and periodontally healthy individuals (P=0.01), but no significant difference was found in EBV-1 frequency between gingivitis patients and healthy individuals (P>0.05) or in EBV-2 frequency among the three groups (P>0.05). In CP patients, a higher mean BOP value was found in EBV-1- or EBV-2-positive patients than in EBV-negative ones (P<0.01), with no statistical difference in the mean PD or AL value between EBV-positive and EBV-negative patients (P>0.05). After initial periodontal treatment, 12 of the 21 EBV-1-positive CP patients no longer showed detectable EBV-1 in subgingival samples. Conclusion: Nested PCR plus RFLP analysis is a sensitive, specific and stable method for detecting EBV-1 and EBV-2 in subgingival samples. Subgingival infection with EBV-1 is closely associated with chronic periodontitis, and EBV infection in subgingival samples was correlated with BOP. PMID:17048301

  17. TriXY: Homogeneous genetic sexing of highly degraded forensic samples including hair shafts.

    PubMed

    Madel, Maria-Bernadette; Niederstätter, Harald; Parson, Walther

    2016-11-01

    Sexing of biological evidence is an important aspect of forensic investigations. A routinely used molecular-genetic approach to this endeavour is the amelogenin sex test, which is integrated into most commercially available polymerase chain reaction (PCR) kits for human identification. However, this assay is not entirely effective for highly degraded DNA samples. This study presents a homogeneous PCR assay for robust sex diagnosis, especially for the analysis of severely fragmented DNA. The introduced triplex for the X and Y chromosome (TriXY) is based on real-time PCR amplification of short intergenic sequences (<50 bp) on both gonosomes. Subsequent PCR product examination and molecular-genetic sex assignment rely on high-resolution melting (HRM) curve analysis. TriXY was optimized using commercially available multi-donor human DNA preparations of either male or female origin and successfully evaluated on challenging samples, including 46 ancient DNA specimens from archaeological excavations and a total of 16 DNA samples extracted from different segments of eight hair shafts of male and female donors. Additionally, sensitivity and cross-species amplification were examined to further test the assay's utility in forensic investigations. TriXY's closed-tube format avoids post-PCR sample manipulation and, therefore, distinctly reduces the risk of PCR product carry-over contamination and sample mix-up, while also reducing labour and financial expenses. The method is sensitive down to the DNA content of approximately two diploid cells and has proven highly useful on severely fragmented, low-quantity ancient DNA samples. Furthermore, it even allowed sexing of proximal hair shafts with very good results. In summary, TriXY facilitates highly sensitive, rapid, and cost-effective genetic sex determination. It outperforms existing sexing methods both in terms of sensitivity and minimum required template molecule lengths. Therefore, we feel confident

  18. Comparison of sampling procedures and microbiological and non-microbiological parameters to evaluate cleaning and disinfection in broiler houses.

    PubMed

    Luyckx, K; Dewulf, J; Van Weyenberg, S; Herman, L; Zoons, J; Vervaet, E; Heyndrickx, M; De Reu, K

    2015-04-01

    Cleaning and disinfection of the broiler stable environment is an essential part of farm hygiene management. Adequate cleaning and disinfection is essential for the prevention and control of animal diseases and zoonoses. The goal of this study was to shed light on the dynamics of microbiological and non-microbiological parameters during the successive steps of cleaning and disinfection and to select the most suitable sampling methods and parameters to evaluate cleaning and disinfection in broiler houses. The effectiveness of cleaning and disinfection protocols was measured in six broiler houses on two farms through visual inspection, adenosine triphosphate hygiene monitoring and microbiological analyses. Samples were taken at three time points: 1) before cleaning, 2) after cleaning, and 3) after disinfection. Before cleaning and after disinfection, air samples were taken in addition to agar contact plates and swab samples taken from various sampling points, for enumeration of total aerobic flora, Enterococcus spp., and Escherichia coli and the detection of E. coli and Salmonella. After cleaning, air samples, swab samples, and adenosine triphosphate swabs were taken, and a visual score was assigned for each sampling point. The mean total aerobic flora determined by swab samples decreased from 7.7±1.4 to 5.7±1.2 log CFU/625 cm² after cleaning and to 4.2±1.6 log CFU/625 cm² after disinfection. Agar contact plates were used as the standard for evaluating cleaning and disinfection, but in this study they were found to be less suitable than swabs for enumeration. In addition to measuring total aerobic flora, Enterococcus spp. seemed to be a better hygiene indicator for evaluating cleaning and disinfection protocols than E. coli. All stables were Salmonella negative, but the detection of its indicator organism E. coli provided additional information for evaluating cleaning and disinfection protocols. Adenosine triphosphate analyses gave additional information about the

  19. Determination of D- and L-pipecolic acid in food samples including processed foods.

    PubMed

    Fujita, Toru; Fujita, Manabu; Kodama, Taku; Hada, Toshikazu; Higashino, Kazuya

    2003-01-01

    Pipecolic acid, a metabolite of lysine, is found in human physiological fluids and is thought to play an important role in the central inhibitory gamma-aminobutyric acid system. However, it is unclear whether plasma D- and L-pipecolic acid originate from oral food intake or from intestinal bacterial metabolites. We analyzed the D- and L-pipecolic acid contents of several processed foods, including dairy products (cow's milk, cheese and yogurt), fermented beverages (beer and wine) and heated samples (beef, bovine liver, bread and tofu), to clarify the relationship between plasma D- and L-pipecolic acid and dietary foods. Our study revealed that some of the samples contained high concentrations of total pipecolic acid, with a higher proportion of L- than D-isomers; the other samples also showed high proportions of L-pipecolic acid. There was no significant change in the D-isomer ratio before and after heat treatment; heat treatment did not cause racemization of pipecolic acid in this study. These findings suggest that plasma pipecolic acid, particularly the D-isomer, does not originate from direct food intake and that D- and L-pipecolic acid may instead be derived from intestinal bacterial metabolites. Copyright 2003 S. Karger AG, Basel

  20. Attaining insight into interactions between hydrologic model parameters and geophysical attributes for national-scale model parameter estimation

    NASA Astrophysics Data System (ADS)

    Mizukami, N.; Clark, M. P.; Newman, A. J.; Wood, A.; Gutmann, E. D.

    2017-12-01

    Estimating spatially distributed model parameters is a grand challenge for large-domain hydrologic modeling, especially in the context of applications such as streamflow forecasting. Multi-scale Parameter Regionalization (MPR) is a promising technique that accounts for the effects of fine-scale geophysical attributes (e.g., soil texture, land cover, topography, climate) on model parameters, as well as for nonlinear scaling effects. MPR computes model parameters with transfer functions (TFs) that relate geophysical attributes to model parameters at the native input data resolution, and then upscales them with scaling functions to the spatial resolution of the model implementation. One of the biggest challenges in the use of MPR is the identification of TFs for each model parameter: both their functional forms and their geophysical predictors. TFs used to estimate hydrologic model parameters have typically relied on previous studies or been derived in an ad hoc, heuristic manner, potentially failing to exploit the full information content of the geophysical attributes for optimal parameter identification. Thus, it is necessary to first uncover the relationships among geophysical attributes, model parameters, and hydrologic processes (i.e., hydrologic signatures) to gain insight into which geophysical attributes are related to model parameters, and to what extent. We perform multivariate statistical analysis on a large-sample catchment data set including various geophysical attributes as well as constrained VIC model parameters at 671 unimpaired basins over the CONUS. We first calibrate the VIC model at each catchment to obtain constrained parameter sets. Additionally, parameter sets sampled during the calibration process are used for sensitivity analysis with various hydrologic signatures as objectives, to understand the relationships among geophysical attributes, parameters, and hydrologic processes.
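
    A minimal sketch of the MPR idea (the transfer function form, its coefficients, and the block-mean scaling function are invented for illustration; they are not the TFs identified in the study):

```python
# Transfer function maps a fine-scale geophysical attribute to a parameter at
# native resolution; a scaling function then upscales it to the model grid.
import numpy as np

def transfer_function(sand_fraction, a=0.9, b=-0.4):
    # hypothetical TF: porosity-like parameter from soil texture
    return a + b * sand_fraction

rng = np.random.default_rng(1)
fine_attr = rng.uniform(0.1, 0.9, (100, 100))   # 100x100 fine-resolution grid
fine_param = transfer_function(fine_attr)

# scaling function: here a simple block arithmetic mean onto a 10x10 model grid
coarse_param = fine_param.reshape(10, 10, 10, 10).mean(axis=(1, 3))
print(coarse_param.shape)   # (10, 10)
```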

  1. Effect of experimental and sample factors on dehydration kinetics of mildronate dihydrate: mechanism of dehydration and determination of kinetic parameters.

    PubMed

    Bērziņš, Agris; Actiņš, Andris

    2014-06-01

    The dehydration kinetics of mildronate dihydrate [3-(1,1,1-trimethylhydrazin-1-ium-2-yl)propionate dihydrate] was analyzed in isothermal and nonisothermal modes. The particle size, sample preparation and storage, sample weight, nitrogen flow rate, relative humidity, and sample history were varied in order to evaluate the effect of these factors and to interpret the resulting data more accurately. Comparable kinetic parameters could be obtained in both isothermal and nonisothermal modes. However, dehydration activation energy values obtained in nonisothermal mode varied with the degree of conversion, because the rate-limiting step has a different energy at higher temperatures; carrying out experiments in this mode also required consideration of additional experimental complications. Our study of the different sample and experimental factors revealed changes in the energy of the dehydration rate-limiting step and variable contributions from different rate-limiting steps, and clarified the dehydration mechanism. Procedures for convenient and fast determination of dehydration kinetic parameters are offered. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
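
    As a reminder of how an activation energy is typically recovered from isothermal rate constants, here is a small worked example using the Arrhenius relation ln k = ln A − Ea/(RT) (synthetic numbers, not the mildronate data):

```python
# Fit ln k against 1/T; the slope is -Ea/R.
import numpy as np

R = 8.314                                        # J mol^-1 K^-1
T = np.array([313.0, 323.0, 333.0, 343.0])       # isothermal temperatures, K
k = 1e10 * np.exp(-95000.0 / (R * T))            # synthetic rate constants

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
print("Ea =", -slope * R / 1000.0, "kJ/mol")     # recovers ~95 kJ/mol
```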

  2. Arterial waveform parameters in a large, population-based sample of adults: relationships with ethnicity and lifestyle factors.

    PubMed

    Sluyter, J D; Hughes, A D; Thom, S A McG; Lowe, A; Camargo, C A; Hametner, B; Wassertheurer, S; Parker, K H; Scragg, R K R

    2017-05-01

    Little is known about how aortic waveform parameters vary with ethnicity and lifestyle factors. We investigated these issues in a large, population-based sample. We carried out a cross-sectional analysis of 4798 men and women, aged 50-84 years, from Auckland, New Zealand. Participants were 3961 European, 321 Pacific, 266 Maori and 250 South Asian people. We assessed modifiable lifestyle factors via questionnaires, and measured body mass index (BMI) and brachial blood pressure (BP). Suprasystolic oscillometry was used to derive aortic pressure, from which several haemodynamic parameters were calculated. Heavy alcohol consumption and BMI were positively related to most waveform parameters. Current smokers had higher levels of aortic augmentation index than non-smokers (difference=3.7%, P<0.0001). Aortic waveform parameters, controlling for demographics, antihypertensives, diabetes and cardiovascular disease (CVD), were higher in non-Europeans than in Europeans. Further adjustment for brachial BP or lifestyle factors (particularly BMI) reduced many differences but several remained. Despite even further adjustment for mean arterial pressure, pulse rate, height and total:high-density lipoprotein cholesterol, compared with Europeans, South Asians had higher levels of all measured aortic waveform parameters (for example, for backward pressure amplitude: β=1.5 mm Hg; P<0.0001), whereas Pacific people had 9% higher logₑ(excess pressure integral) (P<0.0001). In conclusion, aortic waveform parameters varied with ethnicity in line with the greater prevalence of CVD among non-white populations. Generally, this was true even after accounting for brachial BP, suggesting that waveform parameters may have increased usefulness in capturing ethnic variations in cardiovascular risk. Heavy alcohol consumption, smoking and especially BMI may partially contribute to elevated levels of these parameters.

  4. Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation

    PubMed Central

    Meyer, Karin

    2016-01-01

    Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681

  5. Radar altimeter waveform modeled parameter recovery. [SEASAT-1 data

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Satellite-borne radar altimeters include waveform sampling gates providing point samples of the transmitted radar pulse after its scattering from the ocean's surface. Averages of the waveform sampler data can be fitted by varying parameters in a model mean return waveform. The theoretical waveform model is described, as well as a general iterative nonlinear least-squares procedure used to estimate the parameters characterizing the modeled waveform for SEASAT-1 data. The six waveform parameters recovered by the fitting procedure are: (1) amplitude; (2) time origin, or track point; (3) ocean surface rms roughness; (4) noise baseline; (5) ocean surface skewness; and (6) altitude or off-nadir angle. Additional practical processing considerations are addressed, and FORTRAN source listings for the subroutines used in the waveform fitting are included. While the description is for the SEASAT-1 altimeter waveform data analysis, the work can easily be generalized and extended to other radar altimeter systems.
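
    A compact sketch of such an iterative fit (a simplified error-function rising edge standing in for the full SEASAT-1 mean return model, and only four of the six parameters; all values are synthetic):

```python
# Nonlinear least-squares fit of a four-parameter ocean-return model
# (amplitude, track point, rms width, noise floor) to averaged waveform gates.
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

def model(t, amp, t0, sigma, noise):
    return noise + 0.5 * amp * (1.0 + erf((t - t0) / (np.sqrt(2) * sigma)))

t = np.linspace(-10, 10, 60)                  # gate times, arbitrary units
true = (1.0, 0.5, 1.2, 0.05)
rng = np.random.default_rng(2)
data = model(t, *true) + rng.normal(0, 0.02, t.size)

fit = least_squares(lambda p: model(t, *p) - data, x0=(0.8, 0.0, 1.0, 0.0))
print(fit.x)   # recovered amplitude, track point, roughness, noise baseline
```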

  6. Non-destructive sampling of a comet

    NASA Astrophysics Data System (ADS)

    Jessberger, H. L.; Kotthaus, M.

    1991-04-01

    Various conditions which must be met for the development of a nondestructive sampling and acquisition system are outlined, and the development of a new robotic sampling system suited for use on a cometary surface is briefly discussed. The ESA Rosetta mission will take samples of a comet nucleus and return both core and volatile samples to Earth. Various considerations which must be taken into account for such a project are examined, including the identification of design parameters for sample quality; the identification of the most probable site conditions; the development of a sample acquisition system with respect to these conditions; the production of model materials and model conditions; and the investigation of the relevant material properties. An adequate sampling system, including various tools, should also be designed, built, and tested under simulated cometary conditions.

  7. Investigations of the possibility of determination of thermal parameters of Si and SiGe samples based on the Photo Thermal Radiometry technique

    NASA Astrophysics Data System (ADS)

    Chrobak, Ł.; Maliński, M.

    2018-03-01

    This paper presents the results of investigations into the determination of thermal parameters (thermal conductivity and thermal diffusivity) of silicon and silicon-germanium crystals from the frequency characteristics of the Photo Thermal Radiometry (PTR) signal. A theoretical analysis of the influence of these parameters on the PTR signal is presented and discussed. The values of the thermal and recombination parameters were extracted from fits of the theoretical model to the experimental data. The presented approach uses a reference Si sample whose thermal and recombination parameters are known.

  8. Parameter screening: the use of a dummy parameter to identify non-influential parameters in a global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy

    2017-04-01

    Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practice. Based on the results of GSA, influential and non-influential parameters are identified (i.e., parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter is zero. However, since GSA methods calculate sensitivity indices by numerical approximation rather than analytically, small but non-zero indices may be obtained for non-influential parameters. To assess the threshold that identifies non-influential parameters in GSA methods, we propose calculating the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index representing the error due to the numerical approximation. Hence, parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas parameters whose indices fall below it are within the range of the numerical error and should be considered non-influential. To demonstrate the effectiveness of the proposed dummy parameter approach, 26 parameters of a Soil and Water Assessment Tool (SWAT) model are analyzed and screened using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations. Moreover, the calculation does not even require additional model evaluations for the Sobol
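
    A self-contained sketch of the dummy parameter idea using a Saltelli-style first-order Sobol' estimator on a toy model (the model, sample sizes, and threshold use are illustrative assumptions, not the SWAT setup):

```python
# First-order Sobol' indices for a toy model plus one dummy input the model
# ignores; inputs whose index does not exceed the dummy's are screened out.
import numpy as np

rng = np.random.default_rng(3)
N, d = 4096, 4                       # last input is the dummy

def model(X):
    return X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.01 * X[:, 2]   # ignores X[:, 3]

A = rng.uniform(0, 1, (N, d))
B = rng.uniform(0, 1, (N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                        # swap in column i from B
    Si = np.mean(fB * (model(ABi) - fA)) / var # Saltelli (2010) estimator
    label = "dummy" if i == d - 1 else f"x{i}"
    print(label, round(Si, 3))                 # dummy's index ~ numerical error
```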

  9. CRUMP 2003 Selected Water Sample Results

    EPA Pesticide Factsheets

    Point locations and water sampling results from sampling performed in 2003 by the Church Rock Uranium Monitoring Project (CRUMP), a consortium of organizations (Navajo Nation Environmental Protection Agency, US Environmental Protection Agency, New Mexico Scientific Laboratory Division, Navajo Tribal Utility Authority and NM Water Quality Control Commission). Samples include a general description of the wells sampled, general chemistry, heavy metals and aesthetic parameters, and selected radionuclides. Only six sampling results are presented in this point shapefile: Gross Alpha (U-Nat Ref.) (pCi/L), Gross Beta (Sr/Y-90 Ref.) (pCi/L), Radium-226 (pCi/L), Radium-228 (pCi/L), Total Uranium (pCi/L), and Uranium mass (ug/L). The CRUMP samples were collected in the area of Churchrock, NM, in the Eastern AUM Region of the Navajo Nation.

  10. Adaptive Peer Sampling with Newscast

    NASA Astrophysics Data System (ADS)

    Tölgyesi, Norbert; Jelasity, Márk

    The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good-quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem relates to message drop failures: if a node experiences a higher-than-average message drop rate, then the probability of sampling this node in the network decreases. The second problem is that the application layer at different nodes might request random samples at very different rates, which can result in very poor random sampling, especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters: without involving failure detectors, nodes passively monitor local protocol events, using them as feedback for a local control loop that self-tunes the protocol parameters. The proposed solution is evaluated by simulation experiments.

  11. Biochemical Parameters of Guinea Pig Perilymph Sampled According to Scala and Following Sound Presentation

    PubMed Central

    Gershbein, Leon L.; Manshio, Dennis T.; Shurrager, Phil S.

    1974-01-01

    Guinea pigs were exposed to sound varying from 2 to 8 kHz in frequency and 80-100 dB (SPL) in intensity for periods of 1 hr. The biochemical parameters glucose, sodium, and total protein, and the glycolytic enzymes aldolase, phosphohexose isomerase, and total LDH, as well as isozymes of the latter, were ascertained for blood serum, perilymph, and, in some instances, cerebrospinal fluid. The three enzymes occurred at lower levels in perilymph than in blood serum. Except for a small difference in serum total protein, sound presentation had no significant effect on any of the above parameters. Definite differences in several metabolites were discerned in perilymph sampled according to scala, and these were independent of the respective acoustical treatments. Thus, compared with the scala tympani, the scala vestibuli perilymph displayed a higher glucose content and a diminished total LDH level; of the LDH isozymes, LDH1 ranged lower and LDH2 higher. As further evidence pointing to cerebrospinal fluid as the possible origin of perilymph, similarities in glucose contents and LDH isozyme patterns were noted for both fluids. PMID:4470918

  12. Receiver calibration and the nonlinearity parameter measurement of thick solid samples with diffraction and attenuation corrections.

    PubMed

    Jeong, Hyunjo; Barnard, Daniel; Cho, Sungjong; Zhang, Shuzeng; Li, Xiongbing

    2017-11-01

    This paper presents analytical and experimental techniques for accurate determination of the nonlinearity parameter (β) in thick solid samples. When piezoelectric transducers are used for β measurements, receiver calibration is required to determine the transfer function from which the absolute displacement can be calculated. The measured fundamental and second-harmonic displacement amplitudes must then be corrected for beam diffraction and material absorption. All of these issues are addressed in this study, and the proposed technique is validated through β measurements of thick solid samples. A simplified self-reciprocity calibration procedure for a broadband receiver is described, and the diffraction and attenuation corrections for the fundamental and second harmonics are explicitly derived. Aluminum alloy samples of five different thicknesses (4, 6, 8, 10, and 12 cm) were prepared, and β measurements were made using the finite-amplitude through-transmission method. The effects of the diffraction and attenuation corrections on β measurements are systematically investigated. When the diffraction and attenuation corrections are all properly made, the variation of β between samples of different thicknesses is found to be less than 3.2%. Copyright © 2017 Elsevier B.V. All rights reserved.
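
    For orientation, the standard absolute-amplitude definition of β used in finite-amplitude through-transmission measurements is shown below (the paper applies the diffraction and attenuation corrections to the measured amplitudes before this step; the notation here is generic and may differ from the paper's):

```latex
% u_1 and u_2 are the corrected fundamental and second-harmonic displacement
% amplitudes, k the wavenumber, and x the propagation distance.
\beta = \frac{8\, u_2}{k^{2}\, x\, u_1^{2}}
```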

  13. Further comments on sensitivities, parameter estimation, and sampling design in one-dimensional analysis of solute transport in porous media

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1988-01-01

    Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is the change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations, generated from known models, that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters even when the initial parameter values deviated substantially from the correct ones. On the basis of the sensitivity analysis, several statements may be made about the design of sampling for parameter estimation with the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations from the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front, near a time chosen by adding the inverse of a hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations made early in time relative to the passage of the solute front.

  14. Some Physical, Chemical, and Biological Parameters of Samples of Scleractinium Coral Aquaculture Skeleton Used for Reconstruction/Engineering of the Bone Tissue.

    PubMed

    Popov, A A; Sergeeva, N S; Britaev, T A; Komlev, V S; Sviridova, I K; Kirsanova, V A; Akhmedova, S A; Dgebuadze, P Yu; Teterina, A Yu; Kuvshinova, E A; Schanskii, Ya D

    2015-08-01

    The physical and chemical properties (phase and chemical composition, resorption dynamics, and strength) and biological properties (cytological compatibility and scaffold properties of the surface) of skeleton samples from aquacultures of three types of scleractinium coral, and of the corresponding natural coral skeleton samples (Pocillopora verrucosa, Acropora formosa, and Acropora nobilis), were studied. The aquaculture skeleton samples of A. nobilis, A. formosa, and P. verrucosa met the requirements, in all studied parameters, for osteoplasty materials and 3D scaffolds for bone tissue engineering.

  15. WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING

    PubMed Central

    Saegusa, Takumi; Wellner, Jon A.

    2013-01-01

    We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools is developed, including a Glivenko-Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for inverse probability weighted empirical processes under two-phase sampling with sampling without replacement at the second phase. Using these general results, we derive the asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where a nuisance parameter is estimable at either regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559
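
    The following toy example illustrates the inverse probability weighting at the heart of such estimators (a Hájek-type weighted mean under stratified two-phase sampling; the strata, inclusion probabilities, and outcome model are invented):

```python
# Phase 1 observes a stratifier on everyone; phase 2 measures the costly
# outcome on a subsample, with known stratum-specific inclusion probabilities.
# The weighted estimator reweights phase-2 units by 1 / inclusion probability.
import numpy as np

rng = np.random.default_rng(4)
n = 100000
stratum = rng.integers(0, 2, n)                  # phase-1 variable
y = rng.normal(2.0 + 3.0 * stratum, 1.0)         # outcome, costly to measure
pi = np.where(stratum == 1, 0.5, 0.1)            # phase-2 inclusion probs
sampled = rng.random(n) < pi

ipw_mean = np.sum(y[sampled] / pi[sampled]) / np.sum(1.0 / pi[sampled])
print(ipw_mean, y.mean())   # the IPW estimate tracks the full-sample mean
```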

  16. [Development of an analyzing system for soil parameters based on NIR spectroscopy].

    PubMed

    Zheng, Li-Hua; Li, Min-Zan; Sun, Hong

    2009-10-01

    A rapid estimation system for soil parameters based on spectral analysis was developed using object-oriented (OO) technology. A SOIL class was designed; an instance of the SOIL class represents a soil sample of a particular type with specific physical properties and spectral characteristics. By extracting the effective information from the modeling spectral data of soil objects, a mapping model was established between the soil parameters and their spectral data, and the mapping model parameters could be saved in the model database. When forecasting the content of a soil parameter, the corresponding prediction model can be selected by matching the soil type and similar physical properties of the objects; after the target soil sample object is passed into the prediction model and processed by the system, an accurate predicted content for the target soil sample is obtained. The system includes modules for file operations, spectra pretreatment, sample analysis, calibration and validation, and sample content forecasting. The system was designed to run independently of the measurement equipment. The parameters and spectral data files (*.xls) of known soil samples can be input into the system. Various data pretreatments can be selected according to the conditions at hand; the predicted contents appear in the terminal, and the forecasting model can be stored in the model database. The system reads the prediction models and their parameters, saved in the model database, through the module interface, and the data of the tested samples are then passed into the selected model. Finally, the contents of the soil parameters are predicted by the developed system. The system was programmed with Visual C++ 6.0 and Matlab 7.0, and Access XP was used to create and manage the model database.
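
    A hedged sketch of the calibrate/validate/predict workflow described above, with scikit-learn's PLSRegression standing in for the system's modified partial least-squares module and synthetic spectra in place of real soil data:

```python
# Calibrate a PLS model on a calibration set of "spectra", then validate it
# on held-out samples, mirroring the system's calibrating/validating module.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_samples, n_wavelengths = 120, 200
spectra = rng.normal(size=(n_samples, n_wavelengths))
# synthetic soil parameter driven by two "absorption bands" plus noise
soil_param = 2.0 * spectra[:, 40] + spectra[:, 120] + rng.normal(0, 0.1, n_samples)

X_cal, X_val, y_cal, y_val = train_test_split(spectra, soil_param, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
print("validation R^2:", pls.score(X_val, y_val))
print("predicted contents:", pls.predict(X_val[:3]).ravel())
```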

  17. Measurement of neutrino and antineutrino oscillations by the T2K experiment including a new additional sample of νe interactions at the far detector

    NASA Astrophysics Data System (ADS)

    Abe, K.; Amey, J.; Andreopoulos, C.; Antonova, M.; Aoki, S.; Ariga, A.; Ashida, Y.; Ban, S.; Barbi, M.; Barker, G. J.; Barr, G.; Barry, C.; Batkiewicz, M.; Berardi, V.; Berkman, S.; Bhadra, S.; Bienstock, S.; Blondel, A.; Bolognesi, S.; Bordoni, S.; Boyd, S. B.; Brailsford, D.; Bravar, A.; Bronner, C.; Buizza Avanzini, M.; Calland, R. G.; Campbell, T.; Cao, S.; Cartwright, S. L.; Catanesi, M. G.; Cervera, A.; Chappell, A.; Checchia, C.; Cherdack, D.; Chikuma, N.; Christodoulou, G.; Coleman, J.; Collazuol, G.; Coplowe, D.; Cudd, A.; Dabrowska, A.; De Rosa, G.; Dealtry, T.; Denner, P. F.; Dennis, S. R.; Densham, C.; Di Lodovico, F.; Dolan, S.; Drapier, O.; Duffy, K. E.; Dumarchez, J.; Dunne, P.; Emery-Schrenk, S.; Ereditato, A.; Feusels, T.; Finch, A. J.; Fiorentini, G. A.; Fiorillo, G.; Friend, M.; Fujii, Y.; Fukuda, D.; Fukuda, Y.; Garcia, A.; Giganti, C.; Gizzarelli, F.; Golan, T.; Gonin, M.; Hadley, D. R.; Haegel, L.; Haigh, J. T.; Hansen, D.; Harada, J.; Hartz, M.; Hasegawa, T.; Hastings, N. C.; Hayashino, T.; Hayato, Y.; Hillairet, A.; Hiraki, T.; Hiramoto, A.; Hirota, S.; Hogan, M.; Holeczek, J.; Hosomi, F.; Huang, K.; Ichikawa, A. K.; Ikeda, M.; Imber, J.; Insler, J.; Intonti, R. A.; Ishida, T.; Ishii, T.; Iwai, E.; Iwamoto, K.; Izmaylov, A.; Jamieson, B.; Jiang, M.; Johnson, S.; Jonsson, P.; Jung, C. K.; Kabirnezhad, M.; Kaboth, A. C.; Kajita, T.; Kakuno, H.; Kameda, J.; Karlen, D.; Katori, T.; Kearns, E.; Khabibullin, M.; Khotjantsev, A.; Kim, H.; Kim, J.; King, S.; Kisiel, J.; Knight, A.; Knox, A.; Kobayashi, T.; Koch, L.; Koga, T.; Koller, P. P.; Konaka, A.; Kormos, L. L.; Koshio, Y.; Kowalik, K.; Kudenko, Y.; Kurjata, R.; Kutter, T.; Lagoda, J.; Lamont, I.; Lamoureux, M.; Lasorak, P.; Laveder, M.; Lawe, M.; Licciardi, M.; Lindner, T.; Liptak, Z. J.; Litchfield, R. P.; Li, X.; Longhin, A.; Lopez, J. P.; Lou, T.; Ludovici, L.; Lu, X.; Magaletti, L.; Mahn, K.; Malek, M.; Manly, S.; Maret, L.; Marino, A. D.; Martin, J. F.; Martins, P.; Martynenko, S.; Maruyama, T.; Matveev, V.; Mavrokoridis, K.; Ma, W. Y.; Mazzucato, E.; McCarthy, M.; McCauley, N.; McFarland, K. S.; McGrew, C.; Mefodiev, A.; Metelko, C.; Mezzetto, M.; Minamino, A.; Mineev, O.; Mine, S.; Missert, A.; Miura, M.; Moriyama, S.; Morrison, J.; Mueller, Th. A.; Nakadaira, T.; Nakahata, M.; Nakamura, K. G.; Nakamura, K.; Nakamura, K. D.; Nakanishi, Y.; Nakayama, S.; Nakaya, T.; Nakayoshi, K.; Nantais, C.; Nielsen, C.; Nishikawa, K.; Nishimura, Y.; Novella, P.; Nowak, J.; O'Keeffe, H. M.; Okumura, K.; Okusawa, T.; Oryszczak, W.; Oser, S. M.; Ovsyannikova, T.; Owen, R. A.; Oyama, Y.; Palladino, V.; Palomino, J. L.; Paolone, V.; Patel, N. D.; Paudyal, P.; Pavin, M.; Payne, D.; Petrov, Y.; Pickering, L.; Pinzon Guerra, E. S.; Pistillo, C.; Popov, B.; Posiadala-Zezula, M.; Poutissou, J.-M.; Pritchard, A.; Przewlocki, P.; Quilain, B.; Radermacher, T.; Radicioni, E.; Ratoff, P. N.; Rayner, M. A.; Reinherz-Aronis, E.; Riccio, C.; Rodrigues, P. A.; Rondio, E.; Rossi, B.; Roth, S.; Ruggeri, A. C.; Rychter, A.; Sakashita, K.; Sánchez, F.; Scantamburlo, E.; Scholberg, K.; Schwehr, J.; Scott, M.; Seiya, Y.; Sekiguchi, T.; Sekiya, H.; Sgalaberna, D.; Shah, R.; Shaikhiev, A.; Shaker, F.; Shaw, D.; Shiozawa, M.; Shirahige, T.; Smy, M.; Sobczyk, J. T.; Sobel, H.; Steinmann, J.; Stewart, T.; Stowell, P.; Suda, Y.; Suvorov, S.; Suzuki, A.; Suzuki, S. Y.; Suzuki, Y.; Tacik, R.; Tada, M.; Takeda, A.; Takeuchi, Y.; Tamura, R.; Tanaka, H. K.; Tanaka, H. A.; Thakore, T.; Thompson, L. 
F.; Tobayama, S.; Toki, W.; Tomura, T.; Tsukamoto, T.; Tzanov, M.; Vagins, M.; Vallari, Z.; Vasseur, G.; Vilela, C.; Vladisavljevic, T.; Wachala, T.; Walter, C. W.; Wark, D.; Wascko, M. O.; Weber, A.; Wendell, R.; Wilking, M. J.; Wilkinson, C.; Wilson, J. R.; Wilson, R. J.; Wret, C.; Yamada, Y.; Yamamoto, K.; Yanagisawa, C.; Yano, T.; Yen, S.; Yershov, N.; Yokoyama, M.; Yu, M.; Zalewska, A.; Zalipska, J.; Zambelli, L.; Zaremba, K.; Ziembicki, M.; Zimmerman, E. D.; Zito, M.; T2K Collaboration

    2017-11-01

    The T2K experiment reports an updated analysis of neutrino and antineutrino oscillations in appearance and disappearance channels. A sample of electron neutrino candidates at Super-Kamiokande in which a pion decay has been tagged is added to the four single-ring samples used in previous T2K oscillation analyses. Through combined analyses of these five samples, simultaneous measurements of four oscillation parameters, |Δm²₃₂|, sin²θ₂₃, sin²θ₁₃, and δ_CP, and of the mass ordering are made. A set of studies of simulated data indicates that the sensitivity to the oscillation parameters is not limited by neutrino interaction model uncertainty. Multiple oscillation analyses are performed, and frequentist and Bayesian intervals are presented for combinations of the oscillation parameters with and without the inclusion of reactor constraints on sin²θ₁₃. When combined with reactor measurements, the hypothesis of CP conservation (δ_CP = 0 or π) is excluded at the 90% confidence level. The 90% confidence region for δ_CP is [−2.95, −0.44] ([−1.47, −1.27]) for normal (inverted) ordering. The central values and 68% confidence intervals for the other oscillation parameters for normal (inverted) ordering are Δm²₃₂ = 2.54 ± 0.08 (2.51 ± 0.08) × 10⁻³ eV²/c⁴ and sin²θ₂₃ = 0.55 +0.05/−0.09 (0.55 +0.05/−0.08), compatible with maximal mixing. In the Bayesian analysis, the data weakly prefer normal ordering (Bayes factor 3.7) and the upper octant for sin²θ₂₃ (Bayes factor 2.4).

  18. Characteristic parameters of superconductor-coolant interaction including high Tc current density limits

    NASA Technical Reports Server (NTRS)

    Frederking, T. H. K.

    1989-01-01

    In the area of basic mechanisms of helium heat transfer and their influence on superconducting magnet stability, thermal boundary conditions are important constraints. Characteristic lengths are considered along with other parameters of the superconducting composite-coolant system. Based on developments in the helium temperature range, limiting critical current densities are assessed at low fields for high transition temperature superconductors.

  19. Atmospheric parameters and magnesium and calcium NLTE abundances for a sample of 16 ultra metal-poor stars

    NASA Astrophysics Data System (ADS)

    Sitnova, Tatyana; Mashonkina, Lyudmila; Ezzeddine, Rana; Frebel, Anna

    2018-06-01

    The most metal-poor stars provide important observational clues to the astrophysical objects that enriched the primordial gas with heavy elements. Accurate atmospheric parameters are a prerequisite for the determination of accurate abundances. We present atmospheric parameters and abundances of calcium and magnesium for a sample of 16 ultra metal-poor (UMP) stars. In spectra of UMP stars, iron is represented only by lines of Fe I, while calcium is represented by lines of Ca I and Ca II, which can be used to determine or check the effective temperature and surface gravity. Accurate calculations of synthetic spectra of UMP stars require non-local thermodynamic equilibrium (NLTE) treatment of line formation, since departures from LTE grow with decreasing metallicity. The method of atmospheric parameter determination is based on NLTE analysis of lines of Ca I and Ca II, multi-band photometry, and isochrones. The method was first tested on the ultra metal-poor giant CD-38 245, for which trigonometric parallax measurements from Gaia DR1 and lines of both Fe I and Fe II are also available. Using the photometric Teff = 4900 K and the distance-based log g = 2.0 for CD-38 245, we derived NLTE abundances from Fe I and Fe II, and from Ca I and Ca II, that are consistent within the error bars, whereas LTE leads to a discrepancy of 0.6 dex between Ca I and Ca II. We determined NLTE and LTE abundances of magnesium and calcium in the 16 stars of the sample. For the majority of stars, as expected, the [Ca/Mg] NLTE abundance ratios are close to 0, while LTE leads to systematically higher [Ca/Mg], by up to 0.3 dex, and a larger spread of [Ca/Mg] among stars. Three stars of our sample are strongly enhanced in magnesium, with [Mg/Ca] of about 1.3 dex. It is worth noting that, for these three stars, we obtained very similar [Mg/Ca] values of 1.30, 1.45, and 1.29, in contrast to the literature, where [Mg/Ca] for the same stars varies from 0.7 to 1.4. The very similar [Mg/Ca] abundance ratios of these stars argue that

  20. Simplifying sample pretreatment: application of dried blood spot (DBS) method to blood samples, including postmortem, for UHPLC-MS/MS analysis of drugs of abuse.

    PubMed

    Odoardi, Sara; Anzillotti, Luca; Strano-Rossi, Sabina

    2014-10-01

    The complexity of biological matrices, such as blood, requires the development of suitably selective and reliable sample pretreatment procedures prior to instrumental analysis. A method has been developed for the analysis of drugs of abuse and their metabolites from different chemical classes (opiates, methadone, fentanyl and analogues, cocaine, amphetamines and amphetamine-like substances, ketamine, LSD) in human blood using dried blood spots (DBS) and subsequent UHPLC-MS/MS analysis. DBS extraction required only 100 μL of sample, to which the internal standards were added; three droplets (30 μL each) of this solution were spotted on the card, left to dry for 1 h, punched, and extracted with methanol containing 0.1% formic acid. The supernatant was evaporated, and the residue was reconstituted in 100 μL of water with 0.1% formic acid and injected into the UHPLC-MS/MS system. The method was validated for the following parameters: LOD and LOQ, linearity, precision, accuracy, matrix effect and dilution integrity. LODs were 0.05-1 ng/mL and LOQs were 0.2-2 ng/mL. The method showed satisfactory linearity for all substances, with determination coefficients always higher than 0.99. Intra- and inter-day precision, accuracy, matrix effect and dilution integrity were acceptable for all the studied substances. The addition of internal standards before DBS extraction and the deposition of a fixed volume of blood on the filter cards ensured accurate quantification of the analytes. The validated method was then applied to authentic postmortem blood samples. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  1. Changes in bone mineral metabolism parameters, including FGF23, after discontinuing cinacalcet at kidney transplantation.

    PubMed

    Barros, Xoana; Fuster, David; Paschoalin, Raphael; Oppenheimer, Federico; Rubello, Domenico; Perlaza, Pilar; Pons, Francesca; Torregrosa, Jose V

    2015-05-01

    Little is known about the effects of cinacalcet administration in dialysis patients who are scheduled for kidney transplantation, and in particular about the changes in FGF23 and other mineral metabolism parameters after surgery compared with recipients not on cinacalcet at transplantation. We performed a prospective observational cohort study, recruiting consecutive kidney transplant recipients at our institution. Patients were classified according to whether they were under treatment with cinacalcet before transplantation. Bone mineral metabolism parameters, including C-terminal FGF23, were measured at baseline, on day 15, and at 1, 3, and 6 months after transplantation. In previously cinacalcet-treated patients, cinacalcet therapy was discontinued on the day of surgery and was not restarted after transplantation. A total of 48 kidney transplant recipients, 20 on cinacalcet at surgery and 28 not on cinacalcet, completed the follow-up. Serum phosphate declined significantly in the first 15 days after transplantation, with no differences between the two groups, whereas cinacalcet-treated patients showed higher FGF23 levels, although the difference was not significant. After transplantation, PTH and serum calcium were significantly higher in cinacalcet-treated patients. We conclude that patients receiving cinacalcet on dialysis presented similar serum phosphate levels, but higher PTH and serum calcium levels, during the initial six months after kidney transplantation than patients not treated with cinacalcet. The group previously treated with cinacalcet before transplantation showed higher FGF23 levels without significant differences, so further studies should investigate the relevance of this in the management of these patients.

  2. Communication: importance sampling including path correlation in semiclassical initial value representation calculations for time correlation functions.

    PubMed

    Pan, Feng; Tao, Guohua

    2013-03-07

    Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.

  3. Phase Diagrams and the Non-Linear Dielectric Constant in the Landau-Type Potential Including the Linear-Quadratic Coupling between Order Parameters

    NASA Astrophysics Data System (ADS)

    Iwata, Makoto; Orihara, Hiroshi; Ishibashi, Yoshihiro

    1997-04-01

    The phase diagrams of the Landau-type thermodynamic potential including a linear-quadratic coupling between order parameters p and q, i.e., a qp² term, which is applicable to the phase transitions in benzil and phospholipid bilayers and to the isotropic-nematic phase transition in liquid crystals, are studied. It was found that the phase diagram in the extreme case has one tricritical point c1, one critical end point e1, and two triple points t1 and t2. The linear and nonlinear dielectric constants in this potential are discussed for the case in which the order parameter p is the polarization.
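
    One plausible form of such a potential is sketched below (the coefficient names and the choice of which higher-order terms to retain are generic assumptions, not necessarily the paper's exact expansion):

```latex
% Landau-type potential with the linear-quadratic coupling q p^2; when p is
% the polarization, the -E p term gives the (non)linear dielectric response.
\Phi(p, q) = \frac{\alpha}{2} p^{2} + \frac{\beta}{4} p^{4} + \frac{\gamma}{6} p^{6}
           + \frac{a}{2} q^{2} + \frac{b}{4} q^{4} + \lambda\, q p^{2} - E p
```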

  4. A system for comparison of boring parameters of mini-HDD machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gunsaulis, F.R.

    A system has been developed to accurately evaluate changes in performance of a mini-horizontal directional drilling (HDD) system in the backreaming/pullback portion of a bore as the parameters influencing the backream are changed. Parameters incorporated in the study include spindle rotation rate, rate of pull, fluid flow rate, and backreamer design. The boring system is able to run at variable, operator-determined rates of spindle rotation and pullback speed, utilizing electronic feedback controls for regulation. Spindle torque and pullback force are continuously measured and recorded, giving an indication of the performance of the unit. A method has also been developed to measure the pull load on the installed service line to determine the effect of the boring parameters on the service line. Variability of soil along the bore path is measured and quantified using a soil sampling system developed for the study. Sample results obtained with the system are included in the report. 2 refs., 5 figs., 2 tabs.

  5. Determination of polarimetric parameters of honey by near-infrared transflectance spectroscopy.

    PubMed

    García-Alvarez, M; Ceresuela, S; Huidobro, J F; Hermida, M; Rodríguez-Otero, J L

    2002-01-30

    NIR transflectance spectroscopy was used to determine polarimetric parameters (direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides) and sucrose in honey. In total, 156 honey samples were collected during 1992 (45 samples), 1995 (56 samples), and 1996 (55 samples). Samples were analyzed by NIR spectroscopy and by polarimetric methods. Calibration (118 samples) and validation (38 samples) sets were made up, with honeys from all three years included in both sets. Calibrations were performed by modified partial least-squares regression, with scatter correction by the standard normal variate and detrend methods. For direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides, good statistics (bias, SEV, and R²) were obtained for the validation set, and no statistically significant differences (p = 0.05) were found between the instrumental and polarimetric methods for these parameters. The statistical data for sucrose were not as good as those for the other parameters; NIR spectroscopy is therefore not an effective method for quantitative analysis of sucrose in these honey samples. However, it may be an acceptable method for semiquantitative evaluation of sucrose in honeys, such as those in our study, containing up to 3% sucrose. Further work is necessary to validate the uncertainty at higher levels.

  6. Associations of rumen parameters with feed efficiency and sampling routine in beef cattle.

    PubMed

    Lam, S; Munro, J C; Zhou, M; Guan, L L; Schenkel, F S; Steele, M A; Miller, S P; Montanholi, Y R

    2018-07-01

    Characterizing ruminal parameters in the context of sampling routine and feed efficiency is fundamental to understanding the efficiency of feed utilization in the bovine. We therefore evaluated microbial and volatile fatty acid (VFA) profiles, rumen papillae epithelial and stratum corneum thickness, and rumen pH (RpH) and temperature (RT) in feedlot cattle. In all, 48 cattle (32 steers plus 16 bulls), fed a high-moisture corn and haylage-based ration, underwent a productive performance test to determine residual feed intake (RFI) using feed intake, growth, BW and composition traits. Rumen fluid was collected, and then an RpH and RT logger was inserted 5.5±1 days before slaughter. At slaughter, the logger was recovered and rumen fluid and rumen tissue were sampled. The relative daily time spent in specific RpH and RT ranges was determined, and polynomial regression analysis was used to characterize the RpH and RT circadian patterns. Animals were divided into efficient and inefficient groups based on RFI to compare productive performance and ruminal parameters. Efficient animals consumed 1.8 kg/day less dry matter than inefficient cattle (P⩽0.05) while achieving the same productive performance (P⩾0.10). The ruminal bacteria population was higher (P⩽0.05) (7.6×10¹¹ v. 4.3×10¹¹ copies of the 16S rRNA gene/ml rumen fluid) and the methanogen population lower (P⩽0.05) (2.3×10⁹ v. 4.9×10⁹ copies of the 16S rRNA gene/ml rumen fluid) in efficient compared with inefficient cattle at slaughter, with no differences (P⩾0.10) between samples collected on-farm. No differences (P⩾0.10) in rumen fluid VFA were observed between feed efficiency groups either on-farm or at slaughter. However, increased (P⩽0.05) acetate, and decreased (P⩽0.05) propionate, butyrate, valerate and caproate concentrations were observed at slaughter compared with on-farm. Efficient animals had increased (P⩽0.05) rumen epithelium thickness (136 v. 126 µm) compared with inefficient cattle. Efficient animals

  7. Order parameter free enhanced sampling of the vapor-liquid transition using the generalized replica exchange method.

    PubMed

    Lu, Qing; Kim, Jaegil; Straub, John E

    2013-03-14

    The generalized Replica Exchange Method (gREM) is extended into the isobaric-isothermal ensemble, and applied to simulate a vapor-liquid phase transition in Lennard-Jones fluids. Merging an optimally designed generalized ensemble sampling with replica exchange, gREM is particularly well suited for the effective simulation of first-order phase transitions characterized by "backbending" in the statistical temperature. While the metastable and unstable states in the vicinity of the first-order phase transition are masked by the enthalpy gap in temperature replica exchange method simulations, they are transformed into stable states through the parameterized effective sampling weights in gREM simulations, and join vapor and liquid phases with a succession of unimodal enthalpy distributions. The enhanced sampling across metastable and unstable states is achieved without the need to identify a "good" order parameter for biased sampling. We performed gREM simulations at various pressures below and near the critical pressure to examine the change in behavior of the vapor-liquid phase transition at different pressures. We observed a crossover from the first-order phase transition at low pressure, characterized by the backbending in the statistical temperature and the "kink" in the Gibbs free energy, to a continuous second-order phase transition near the critical pressure. The controlling mechanisms of nucleation and continuous phase transition are evident and the coexistence properties and phase diagram are found in agreement with literature results.

  8. The influence of inertial sensor sampling frequency on the accuracy of measurement parameters in rearfoot running.

    PubMed

    Mitschke, Christian; Zaumseil, Falk; Milani, Thomas L

    2017-11-01

    Increasingly, inertial sensors are being used for running analyses. The aim of this study was to systematically investigate the influence of inertial sensor sampling frequency (SF) on the accuracy of kinematic, spatio-temporal, and kinetic parameters. We hypothesized that running analyses at lower SF retain less signal information and therefore cannot be interpreted sufficiently. Twenty-one subjects participated in this study. Rearfoot strikers ran on an indoor running track at a velocity of 3.5 ± 0.1 m s⁻¹. A uniaxial accelerometer was attached at the tibia and an inertial measurement unit was mounted at the heel of the right shoe. All sensors were synchronized at the start and data were recorded at 1000 Hz (reference SF). Datasets were reduced to 500, 333, 250, 200, and 100 Hz in post-processing. The results of this study showed that a minimum SF of 500 Hz should be used to accurately measure kinetic parameters (e.g. peak heel acceleration). In contrast, stride length showed accurate results even at 333 Hz. A minimum of 200 Hz was required to accurately calculate peak tibial acceleration, stride duration, and all kinematic measurements. The information from this study is necessary to correctly interpret measurement data of existing investigations and to plan future studies.
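
    A toy numerical illustration of the sampling-frequency effect (not the study's data): a few-millisecond impact peak sampled at 1000 Hz is decimated, and the apparent peak amplitude drops as the rate falls. The signal shape and numbers are invented.

        # Decimating a short impact transient shrinks the apparent peak.
        import numpy as np

        fs_ref = 1000                      # reference sampling frequency, Hz
        t = np.arange(0, 0.05, 1 / fs_ref)
        impact = 12.0 * np.exp(-((t - 0.01) / 0.002) ** 2)  # ~4 ms impact peak, g

        for fs in (1000, 500, 250, 200, 100):
            step = fs_ref // fs            # keep every step-th sample, no filtering
            peak = impact[::step].max()
            print(f"{fs:4d} Hz: apparent peak = {peak:5.2f} g "
                  f"({100 * peak / impact.max():.1f}% of reference)")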

  9. Normal- and oblique-shock flow parameters in equilibrium air including attached-shock solutions for surfaces at angles of attack, sweep, and dihedral

    NASA Technical Reports Server (NTRS)

    Hunt, J. L.; Souders, S. W.

    1975-01-01

    Normal- and oblique-shock flow parameters for air in thermochemical equilibrium are tabulated as a function of shock angle for altitudes ranging from 15.24 km to 91.44 km in increments of 7.62 km at selected hypersonic speeds. Post-shock parameters tabulated include flow-deflection angle, velocity, Mach number, compressibility factor, isentropic exponent, viscosity, Reynolds number, entropy difference, and static pressure, temperature, density, and enthalpy ratios across the shock. A procedure is presented for obtaining oblique-shock flow properties in equilibrium air on surfaces at various angles of attack, sweep, and dihedral by use of the two-dimensional tabulations. Plots of the flow parameters against flow-deflection angle are presented at altitudes of 30.48, 60.96, and 91.44 km for various stream velocities.
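
    For orientation, the ideal-gas θ-β-M relation below reproduces the kind of quantity tabulated in the report, flow-deflection angle as a function of shock angle and Mach number, but with constant γ = 1.4 rather than equilibrium-air thermochemistry; treat it only as a simplified sketch.

        # Ideal-gas oblique-shock relation (a simplification of the report's
        # equilibrium-air tables): deflection angle theta from shock angle beta
        # and upstream Mach number M, for gamma = 1.4.
        import math

        def deflection_angle(M, beta_deg, gamma=1.4):
            b = math.radians(beta_deg)
            num = M**2 * math.sin(b)**2 - 1.0
            den = M**2 * (gamma + math.cos(2 * b)) + 2.0
            return math.degrees(math.atan(2.0 / math.tan(b) * num / den))

        for beta in (30, 40, 50, 60):
            print(f"M=5, beta={beta:2d} deg -> theta = {deflection_angle(5.0, beta):.2f} deg")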

  10. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarepisheh, M; Li, R; Xing, L

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, a gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the apertures and updating the beam angles along the gradient. The algorithm continues with a pattern search to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to optimize simultaneously the large collection of station parameters and significantly
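
    Of the three techniques combined here, pattern search is the simplest to sketch. The toy compass search below is generic, not the authors' implementation: it polls ± steps along each coordinate and halves the step when no poll improves the objective.

        # Minimal compass pattern search on a toy objective (illustrative only).
        import numpy as np

        def pattern_search(f, x0, step=1.0, tol=1e-6):
            x = np.asarray(x0, float)
            fx = f(x)
            while step > tol:
                improved = False
                for i in range(x.size):
                    for s in (+step, -step):       # poll +/- along coordinate i
                        y = x.copy()
                        y[i] += s
                        fy = f(y)
                        if fy < fx:
                            x, fx, improved = y, fy, True
                if not improved:
                    step *= 0.5                    # shrink when no poll helps
            return x, fx

        print(pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0]))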

  11. Realistic sampling of anisotropic correlogram parameters for conditional simulation of daily rainfields

    NASA Astrophysics Data System (ADS)

    Gyasi-Agyei, Yeboah

    2018-01-01

    This paper establishes a link between the spatial structure of radar rainfall, which describes the spatial structure more robustly, and gauge rainfall, for improved daily rainfield simulation conditioned on limited gauge data in regions with or without radar records. A two-dimensional anisotropic exponential function, with parameters of major and minor axis lengths and direction, is used to describe the correlogram (spatial structure) of daily rainfall in the Gaussian domain. The link is a copula-based joint distribution of the radar-derived correlogram parameters that uses the gauge-derived correlogram parameters and maximum daily temperature as covariates of the Box-Cox power exponential margins and Gumbel copula. While the gauge-derived, radar-derived and copula-derived correlogram parameters reproduced the mean estimates similarly under leave-one-out cross-validation of ordinary kriging, the gauge-derived parameters yielded a higher standard deviation (SD) of the Gaussian quantile, which reflects uncertainty, in over 90% of cases. However, the distributions of the SD generated by the radar-derived and the copula-derived parameters could not be distinguished. For the validation case, the percentage of cases with higher SD from the gauge-derived parameter sets decreased to 81.2% and 86.6% for the non-calibration and calibration periods, respectively. It was observed that a 1% reduction in the Gaussian quantile SD can cause over a 39% reduction in the SD of the median rainfall estimate, the actual reduction depending on the distribution of rainfall on the day. Hence the main advantage of using the most correct radar correlogram parameters is to reduce the uncertainty associated with conditional simulations that rely on SD through kriging.
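
    A minimal sketch of such a two-dimensional anisotropic exponential correlogram, with major-axis length a, minor-axis length b, and orientation phi of the major axis; the exact parameterization in the paper may differ, and the numbers are illustrative.

        # Anisotropic exponential correlogram: rotate the separation vector into
        # the (major, minor) axis frame, then apply an exponential decay.
        import numpy as np

        def correlogram(dx, dy, a, b, phi_deg):
            """Correlation for separation (dx, dy), axis lengths a, b (same units)."""
            phi = np.radians(phi_deg)
            h_maj = dx * np.cos(phi) + dy * np.sin(phi)
            h_min = -dx * np.sin(phi) + dy * np.cos(phi)
            return np.exp(-np.sqrt((h_maj / a) ** 2 + (h_min / b) ** 2))

        print(correlogram(20.0, 0.0, a=80.0, b=30.0, phi_deg=45.0))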

  12. The determination of the acoustic parameters of volcanic rocks from compressional velocity measurements

    USGS Publications Warehouse

    Carroll, R.D.

    1969-01-01

    A statistical analysis was made of the relationship of various acoustic parameters of volcanic rocks to compressional wave velocities for data obtained in a volcanic region in Nevada. Some additional samples, chiefly granitic rocks, were also included in the study to extend the range of parameters and the variety of siliceous rock types sampled. Laboratory acoustic measurements obtained on 62 dry core samples were grouped with similar measurements obtained from geophysical logging devices at several depth intervals in a hole from which 15 of the core samples had been obtained. The effects of lithostatic and hydrostatic load on changing the rock acoustic parameters measured in the hole were noticeable when compared with the laboratory measurements on the same core. The results of the analyses determined by grouping all of the data, however, indicate that dynamic Young's, shear and bulk modulus, shear velocity, shear and compressional characteristic impedance, as well as amplitude and energy reflection coefficients may be reliably estimated on the basis of the compressional wave velocities of the rocks investigated. Less precise estimates can be made of density based on the rock compressional velocity. The possible extension of these relationships to include many siliceous rocks is suggested. © 1969.
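
    The standard isotropic-elasticity relations underlying such estimates are easy to state: given compressional velocity Vp, shear velocity Vs, and density rho, the dynamic moduli follow directly. The input values below are illustrative, not data from the study.

        # Dynamic elastic moduli and characteristic impedances from velocities.
        def dynamic_moduli(vp, vs, rho):
            """vp, vs in m/s; rho in kg/m^3; moduli returned in GPa."""
            g = rho * vs**2                        # shear modulus
            k = rho * (vp**2 - 4.0 * vs**2 / 3.0)  # bulk modulus
            e = 9.0 * k * g / (3.0 * k + g)        # Young's modulus
            zp, zs = rho * vp, rho * vs            # characteristic impedances
            return {"G_GPa": g / 1e9, "K_GPa": k / 1e9, "E_GPa": e / 1e9,
                    "Zp": zp, "Zs": zs}

        # e.g. a volcanic-tuff-like sample (illustrative values only)
        print(dynamic_moduli(vp=3500.0, vs=2000.0, rho=2300.0))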

  13. Estimating the Expected Value of Sample Information Using the Probabilistic Sensitivity Analysis Sample

    PubMed Central

    Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny

    2015-01-01

    Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
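
    The sketch below illustrates the regression idea on a toy two-option decision model (it is not the authors' implementation): each PSA draw generates a plausible data set, the data set is reduced to a summary statistic, net benefit is regressed on the summary, and EVSI is read off the fitted values. The model, priors, and cubic fit are all assumptions.

        # Regression-based EVSI from a PSA sample (toy model, invented numbers).
        import numpy as np

        rng = np.random.default_rng(1)
        S = 5000
        p = rng.beta(8, 12, S)                 # PSA draws of an uncertain response rate
        nb0 = np.zeros(S)                      # option 0: baseline net benefit
        nb1 = 20000 * p - 9000                 # option 1: toy net benefit

        n_study = 50                           # proposed study size
        phat = rng.binomial(n_study, p) / n_study   # summary statistic per data set

        fit1 = np.polyval(np.polyfit(phat, nb1, 3), phat)  # approx. E[NB1 | data]
        fit0 = nb0                                         # option 0 is certain here
        evsi = np.maximum(fit0, fit1).mean() - max(nb0.mean(), nb1.mean())
        print(f"EVSI estimate: {evsi:.1f}")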

  14. Statistical Inference for Data Adaptive Target Parameters.

    PubMed

    Hubbard, Alan E; Kherad-Pajouh, Sara; van der Laan, Mark J

    2016-05-01

    Suppose one observes n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target we partition the sample into V equal-size subsamples, and use this partitioning to define V splits into an estimation sample (one of the V subsamples) and a corresponding complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data adaptive statistical target parameter as the average of these V sample-specific target parameters. We present an estimator (and corresponding central limit theorem) of this type of data adaptive target parameter. This general methodology for generating data adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. This new framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for a greater involvement of statistical inference in problems that are increasingly addressed by clever, yet ad hoc, pattern-finding methods. To suggest such potential, and to verify the predictions of the theory, extensive simulation studies, along with a data analysis based on adaptively determined intervention rules, are shown and give insight into how to structure such an approach. The results show that the data adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.
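
    A toy numpy sketch of the V-fold construction: each parameter-generating sample adaptively picks a target (here, whichever of two variables has the larger mean), the held-out estimation sample estimates it, and the V estimates are averaged. The data and the adaptive rule are invented for illustration.

        # Sample-split data-adaptive target parameter with V folds (toy example).
        import numpy as np

        rng = np.random.default_rng(2)
        n, V = 1000, 5
        X = rng.normal(loc=[0.3, 0.5], size=(n, 2))   # two candidate variables (synthetic)

        folds = np.array_split(rng.permutation(n), V)
        estimates = []
        for v in range(V):
            est_idx = folds[v]                             # estimation sample
            gen_idx = np.concatenate([folds[u] for u in range(V) if u != v])
            j = int(X[gen_idx].mean(axis=0).argmax())      # data-adaptive target choice
            estimates.append(X[est_idx, j].mean())         # estimate on the held-out fold
        print(f"sample-split data-adaptive estimate: {np.mean(estimates):.3f}")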

  15. pypet: A Python Toolkit for Data Management of Parameter Explorations.

    PubMed

    Meyer, Robert; Obermayer, Klaus

    2016-01-01

    pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines.

  16. Exploring Replica-Exchange Wang-Landau sampling in higher-dimensional parameter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valentim, Alexandra; Rocha, Julio C. S.; Tsai, Shan-Ho

    We considered a higher-dimensional extension of the replica-exchange Wang-Landau algorithm to perform a random walk in the energy and magnetization space of the two-dimensional Ising model. This hybrid scheme combines the advantages of the Wang-Landau and replica-exchange algorithms, and the one-dimensional version of this approach has been shown to be very efficient and to scale well, up to several thousands of computing cores. This approach allows us to split the parameter space of the system to be simulated into several pieces and still perform a random walk over the entire parameter range, ensuring the ergodicity of the simulation. Previous work, in which a similar scheme of parallel simulation was implemented without using replica exchange and with a different way of combining the results from the pieces, led to discontinuities in the final density of states over the entire range of parameters. From our simulations, it appears that the replica-exchange Wang-Landau algorithm is able to overcome this difficulty, allowing exploration of a higher-dimensional parameter phase space by keeping track of the joint density of states.
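
    As a reminder of the core machinery the parallel scheme generalizes, the sketch below runs a single Wang-Landau walker on a toy one-dimensional state space: the ln g(E) estimate grows by ln f at each visit, and ln f is halved whenever the visit histogram is roughly flat. The flatness threshold and state space are arbitrary choices.

        # Single-walker Wang-Landau update on a toy ring of "energy" levels.
        import numpy as np

        rng = np.random.default_rng(3)
        L = 16
        ln_g = np.zeros(L)                       # running estimate of ln g(E)
        hist = np.zeros(L)
        ln_f = 1.0
        e = 0
        while ln_f > 1e-4:
            e_new = (e + rng.choice((-1, 1))) % L
            # accept with min(1, g(E)/g(E_new)) so visits flatten in E
            if np.log(rng.random()) < ln_g[e] - ln_g[e_new]:
                e = e_new
            ln_g[e] += ln_f
            hist[e] += 1
            if hist.min() > 0.8 * hist.mean():   # crude flatness criterion
                hist[:] = 0.0
                ln_f /= 2.0
        print("relative ln g(E):", np.round(ln_g - ln_g.min(), 2))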

  17. Calculation of distribution coefficients in the SAMPL5 challenge from atomic solvation parameters and surface areas.

    PubMed

    Santos-Martins, Diogo; Fernandes, Pedro Alexandrino; Ramos, Maria João

    2016-11-01

    In the context of SAMPL5, we submitted blind predictions of the cyclohexane/water distribution coefficient (D) for a series of 53 drug-like molecules. Our method is purely empirical and based on the additive contribution of each solute atom to the free energy of solvation in water and in cyclohexane. The contribution of each atom depends on the atom type and on the exposed surface area. Compared with similar methods in the literature, we used a very small set of atomic parameters: only 10 for solvation in water and 1 for solvation in cyclohexane. As a result, the method is protected from overfitting and the error in the blind predictions could be reasonably estimated. Moreover, this approach is fast: it takes only 0.5 s to predict the distribution coefficients for all 53 SAMPL5 compounds, allowing its application in virtual screening campaigns. The performance of our approach (submission 49) is modest but satisfactory in view of its efficiency: the root mean square error (RMSE) was 3.3 log D units for the 53 compounds, while the RMSE of the best performing method (using COSMO-RS) was 2.1 (submission 16). Our method is implemented as a Python script available at https://github.com/diogomart/SAMPL5-DC-surface-empirical.
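
    The additive scheme reduces to a weighted sum over atoms. The sketch below uses made-up atom types, surface areas, and σ parameters (not the published values), with a per-type σ for water and a single σ for cyclohexane, mirroring the 10+1 parameter count.

        # Additive SASA solvation sketch: log D from per-atom contributions.
        import math

        R, T = 8.314e-3, 298.15                        # kJ/(mol K), K
        sigma_w = {"C": 0.03, "O": -0.20, "N": -0.18}  # water params, kJ/mol/A^2 (made up)
        sigma_chx = -0.03                              # single cyclohexane parameter (made up)

        atoms = [("C", 30.0), ("C", 25.0), ("O", 12.0), ("N", 8.0)]  # (type, SASA in A^2)

        dg_water = sum(sigma_w[t] * a for t, a in atoms)
        dg_cyclohexane = sigma_chx * sum(a for _, a in atoms)
        # positive log D favors the cyclohexane phase
        log_d = (dg_water - dg_cyclohexane) / (R * T * math.log(10))
        print(f"predicted log D = {log_d:.2f}")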

  18. Determining photon energy absorption parameters for different soil samples

    PubMed Central

    Kucuk, Nil; Tumsavas, Zeynal; Cakir, Merve

    2013-01-01

    The mass attenuation coefficients (μs) for five different soil samples were measured at 661.6, 1173.2 and 1332.5 keV photon energies. The soil samples were separately irradiated with ¹³⁷Cs and ⁶⁰Co (370 kBq) radioactive point gamma sources. The measurements were made by performing transmission experiments with a 2″ × 2″ NaI(Tl) scintillation detector, which had an energy resolution of 7% at 0.662 MeV for the gamma-rays from the decay of ¹³⁷Cs. The effective atomic numbers (Zeff) and the effective electron densities (Neff) were determined experimentally and theoretically using the obtained μs values for the soil samples. Furthermore, the Zeff and Neff values of the soil samples were computed for the total photon interaction cross-sections using theoretical data over a wide energy region ranging from 1 keV to 15 MeV. The experimental values of the soils were found to be in good agreement with the theoretical values. Sandy loam and sandy clay loam soils demonstrated poor photon energy absorption characteristics. However, clay loam and clay soils had good photon energy absorption characteristics. PMID:23179375
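
    The transmission relation behind these measurements is Beer-Lambert attenuation, I = I₀ exp(−μₘ ρ t), so a mass attenuation coefficient follows from count rates with and without the sample in the beam. The count rates and areal density below are illustrative, not the paper's data.

        # Mass attenuation coefficient from a transmission measurement.
        import math

        I0, I = 1850.0, 1210.0      # counts/s without and with sample (illustrative)
        rho_t = 1.9                 # areal density rho*t of the sample, g/cm^2
        mu_m = math.log(I0 / I) / rho_t
        print(f"mass attenuation coefficient ~ {mu_m:.4f} cm^2/g")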

  19. Classification of hydrological parameter sensitivity and evaluation of parameter transferability across 431 US MOPEX basins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi

    The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward-removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes) and, separately, their hydrologic indices/attributes (external hydrologic factors), using a principal component analysis (PCA) and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Köppen climate classification system (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferable. This classification study provides guidance on identifiable parameters, and on parameterization and inverse model design for CLM, but the methodology is applicable to other

  20. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including

  1. Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models

    NASA Astrophysics Data System (ADS)

    Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael

    2016-06-01

    We address the sparse approximation problem in the case where the data are approximated by the linear combination of a small number of elementary signals, each of these signals depending non-linearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov chain Monte Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build a hybrid Hastings-within-Gibbs algorithm in order to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase in the computational cost per iteration, consequently reducing the global cost of the estimation procedure.
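
    For context, a standard single-site Gibbs sampler for a linear Bernoulli-Gaussian model is sketched below (the paper's partially marginalized and Hastings-within-Gibbs variants go further to handle the nonlinear parameters); the priors, sizes, and hyperparameters are invented.

        # Single-site Gibbs for y = A s + noise, s_i = q_i * x_i with
        # q_i ~ Bernoulli(lam) and x_i ~ N(0, sx2); noise variance sn2.
        import numpy as np
        from scipy.special import expit

        rng = np.random.default_rng(9)
        n, p, lam, sx2, sn2 = 60, 20, 0.1, 4.0, 0.05
        A = rng.normal(size=(n, p))
        s_true = (rng.random(p) < lam) * rng.normal(0.0, np.sqrt(sx2), p)
        y = A @ s_true + rng.normal(0.0, np.sqrt(sn2), n)

        q = np.zeros(p, bool)
        x = np.zeros(p)
        for sweep in range(200):
            for i in range(p):
                r = y - A @ (q * x) + A[:, i] * (q[i] * x[i])    # residual without atom i
                v = 1.0 / (A[:, i] @ A[:, i] / sn2 + 1.0 / sx2)  # posterior var of x_i
                m = v * (A[:, i] @ r) / sn2                      # posterior mean of x_i
                log_odds = (np.log(lam / (1 - lam))
                            + 0.5 * np.log(v / sx2) + 0.5 * m * m / v)
                q[i] = rng.random() < expit(log_odds)
                x[i] = rng.normal(m, np.sqrt(v)) if q[i] else 0.0
        print("sampled support:", np.flatnonzero(q), " true:", np.flatnonzero(s_true))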

  2. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to other hydrological problems.
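
    The EnKF analysis step that such a design loop wraps can be written in a few lines. The sketch below is a generic stochastic (perturbed-observation) update for a scalar observation, with an invented ensemble and observation values; it is not the SEOD method itself.

        # Stochastic EnKF analysis step: ensemble X (n_state x n_ens),
        # observation operator H, observation d with error variance r.
        import numpy as np

        rng = np.random.default_rng(4)
        n_ens = 50
        X = rng.normal(1.0, 0.3, size=(3, n_ens))     # prior parameter ensemble
        H = np.array([[1.0, 0.0, 0.0]])               # observe the first parameter
        d, r = 1.4, 0.05 ** 2                         # observation and error variance

        Y = H @ X                                     # predicted observations
        D = d + rng.normal(0.0, np.sqrt(r), size=(1, n_ens))  # perturbed observations
        Xa = X - X.mean(1, keepdims=True)
        Ya = Y - Y.mean(1, keepdims=True)
        Cxy = Xa @ Ya.T / (n_ens - 1)                 # state-observation covariance
        Cyy = Ya @ Ya.T / (n_ens - 1) + r             # innovation covariance
        K = Cxy / Cyy                                 # Kalman gain (scalar obs)
        X_post = X + K @ (D - Y)
        print("prior mean:", X.mean(1).round(3), " posterior mean:", X_post.mean(1).round(3))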

  3. Limited-sampling strategies for anti-infective agents: systematic review.

    PubMed

    Sprague, Denise A; Ensom, Mary H H

    2009-09-01

    Area under the concentration-time curve (AUC) is a pharmacokinetic parameter that represents overall exposure to a drug. For selected anti-infective agents, pharmacokinetic-pharmacodynamic parameters, such as AUC/MIC (where MIC is the minimal inhibitory concentration), have been correlated with outcome in a few studies. A limited-sampling strategy may be used to estimate pharmacokinetic parameters such as AUC, without the frequent, costly, and inconvenient blood sampling that would be required to directly calculate the AUC. To discuss, by means of a systematic review, the strengths, limitations, and clinical implications of published studies involving a limited-sampling strategy for anti-infective agents and to propose improvements in methodology for future studies. The PubMed and EMBASE databases were searched using the terms "anti-infective agents", "limited sampling", "optimal sampling", "sparse sampling", "AUC monitoring", "abbreviated AUC", "abbreviated sampling", and "Bayesian". The reference lists of retrieved articles were searched manually. Included studies were classified according to modified criteria from the US Preventive Services Task Force. Twenty studies met the inclusion criteria. Six of the studies (involving didanosine, zidovudine, nevirapine, ciprofloxacin, efavirenz, and nelfinavir) were classified as providing level I evidence, 4 studies (involving vancomycin, didanosine, lamivudine, and lopinavir-ritonavir) provided level II-1 evidence, 2 studies (involving saquinavir and ceftazidime) provided level II-2 evidence, and 8 studies (involving ciprofloxacin, nelfinavir, vancomycin, ceftazidime, ganciclovir, pyrazinamide, meropenem, and alpha interferon) provided level III evidence. All of the studies providing level I evidence used prospectively collected data and proper validation procedures with separate, randomly selected index and validation groups. However, most of the included studies did not provide an adequate description of the methods or

  4. Sample treatments prior to capillary electrophoresis-mass spectrometry.

    PubMed

    Hernández-Borges, Javier; Borges-Miquel, Teresa M; Rodríguez-Delgado, Miguel Angel; Cifuentes, Alejandro

    2007-06-15

    Sample preparation is a crucial part of chemical analysis and in most cases can become the bottleneck of the whole analytical process. Its adequacy is a key factor in determining the success of the analysis and, therefore, careful selection and optimization of the parameters controlling sample treatment should be carried out. This work reviews the different strategies that have been developed for sample preparation prior to capillary electrophoresis-mass spectrometry (CE-MS). Specifically, it presents an exhaustive and critical review of the different sample treatments used together with on-line CE-MS, covering work published from January 2000 to July 2006.

  5. pypet: A Python Toolkit for Data Management of Parameter Explorations

    PubMed Central

    Meyer, Robert; Obermayer, Klaus

    2016-01-01

    pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines. PMID:27610080

  6. Parameters of metabolic quantification in clinical practice. Is it now time to include them in reports?

    PubMed

    Mucientes, J; Calles, L; Rodríguez, B; Mitjavila, M

    2018-01-18

    Qualitative techniques have traditionally been the standard for diagnostic assessment with ¹⁸F-FDG PET studies. Since the introduction of the technique, quantitative parameters with better accuracy and diagnostic precision have been sought that may offer relevant information on the behavior, aggressiveness or prognosis of tumors. Nowadays, more and more studies with high-quality evidence show the utility of metabolic parameters other than the maximum SUV, which, despite being widely used in clinical practice, is controversial, and many physicians still do not know its real meaning. The objective of this paper has been to review the key concepts of those metabolic parameters that could be relevant in normal practice in the future. There is growing evidence for the complete evaluation of the metabolism of a lesion through volumetric parameters, which more adequately reflect the patient's tumor burden. Basically, these parameters calculate the volume of tumor that fulfills certain characteristics. Software available on the majority of workstations has been used for this purpose, allowing these volumes to be calculated using more or less complex criteria. The simplest threshold-based segmentation methods are available on most systems, are easy to calculate, and have been shown in many studies to have important prognostic significance.

  7. Detection of Organic Constituents Including Chloromethylpropene in the Analyses of the ROCKNEST Drift by Sample Analysis at Mars (SAM)

    NASA Technical Reports Server (NTRS)

    Eigenbrode, J. L.; Glavin, D.; Coll, P.; Summons, R. E.; Mahaffy, P.; Archer, D.; Brunner, A.; Conrad, P.; Freissinet, C.; Martin, M.

    2013-01-01

    A key challenge in assessing the habitability of martian environments is the detection of organic matter - a requirement of all life as we know it. The Curiosity rover, which landed on August 6, 2012 in Gale Crater on Mars, includes the Sample Analysis at Mars (SAM) instrument suite, capable of in situ analysis of gaseous organic components thermally evolved from sediment samples collected, sieved, and delivered by the MSL rover. On Sol 94, SAM received its first solid sample: scooped sediment from Rocknest that was sieved to <150 µm particle size. Multiple 10-40 mg portions of the scoop #5 sample were delivered to SAM for analyses. Prior to their introduction, a blank (empty cup) analysis was performed. This blank served 1) to clean the analytical instrument of SAM-internal materials that had accumulated in the gas processing system since integration into the rover, and 2) to characterize the background signatures of SAM. Both the blank and the Rocknest samples showed the presence of hydrocarbon components.

  8. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    PubMed

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software.
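
    A rough planning loop in the spirit of the first (expected-width) method is sketched below, using coefficient alpha as a stand-in composite reliability coefficient and a Feldt-type F interval, with the expected point estimate approximated by the planning value. This illustrates the idea only; it is not the authors' procedure or software.

        # Smallest n whose expected 95% CI width for alpha is below W
        # (Feldt-type interval; alpha_hat approximated by the planning value).
        from scipy.stats import f as fdist

        def expected_width(n, k, alpha_plan, conf=0.95):
            df1, df2 = n - 1, (n - 1) * (k - 1)
            a = (1 - conf) / 2
            fl = fdist.ppf(a, df1, df2)           # lower F quantile
            fu = fdist.ppf(1 - a, df1, df2)       # upper F quantile
            return (1 - alpha_plan) * (1 / fl - 1 / fu)

        k, alpha_plan, W = 8, 0.85, 0.10          # items, planning value, target width
        n = 10
        while expected_width(n, k, alpha_plan) > W:
            n += 1
        print(f"plan n = {n} subjects for an expected 95% CI width below {W}")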

  9. Using an ensemble smoother to evaluate parameter uncertainty of an integrated hydrological model of Yanqi basin

    NASA Astrophysics Data System (ADS)

    Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang

    2015-10-01

    Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin hypercube sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information. The correlation coefficients among certain parameters increase in each iteration, although they generally
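
    The core ES-MDA recursion is compact: the same smoother update is repeated Na times with the observation-error variance inflated by α_i, where the 1/α_i sum to one. The sketch below uses a scalar toy problem with an identity forward model and invented numbers, not the Yanqi basin model.

        # ES-MDA on a scalar parameter with an identity forward model (toy).
        import numpy as np

        rng = np.random.default_rng(5)
        n_ens, Na = 100, 4
        alpha = [Na] * Na                         # inflation factors; sum of 1/alpha_i = 1
        m = rng.normal(0.0, 1.0, n_ens)           # prior ensemble of the parameter
        d_obs, r = 1.2, 0.1 ** 2                  # observation and its error variance

        for a in alpha:
            g = m.copy()                          # forward model response (identity toy)
            D = d_obs + rng.normal(0.0, np.sqrt(a * r), n_ens)  # inflated perturbed obs
            cmd = np.cov(m, g)[0, 1]              # parameter-data covariance
            cdd = g.var(ddof=1) + a * r           # data covariance + inflated error
            m = m + cmd / cdd * (D - g)
        print(f"posterior mean {m.mean():.3f}, sd {m.std(ddof=1):.3f}")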

  10. Statistical inference involving binomial and negative binomial parameters.

    PubMed

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2009-05-01

    Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
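
    A simple asymptotic stand-in for such tests (the paper develops exact small-sample tools) is a likelihood-ratio test combining the geometric likelihood of the trials up to the first success with the binomial likelihood afterwards; the data below are invented.

        # LR test of H0: p1 = p2, with p1 from geometric (negative binomial)
        # sampling and p2 from binomial sampling. Asymptotic chi-square reference.
        import math
        from scipy import stats

        def loglik(p1, p2, y, x, n):
            """Geometric(y; p1) x Binomial(x; n, p2) log-likelihood, constants dropped."""
            return (math.log(p1) + (y - 1) * math.log(1 - p1)
                    + x * math.log(p2) + (n - x) * math.log(1 - p2))

        y, x, n = 9, 14, 30                # trials to first success; successes/trials after
        p1_hat, p2_hat = 1 / y, x / n      # unrestricted MLEs
        p0_hat = (1 + x) / (y + n)         # pooled MLE under H0
        lr = 2 * (loglik(p1_hat, p2_hat, y, x, n) - loglik(p0_hat, p0_hat, y, x, n))
        print(f"LR = {lr:.3f}, p = {stats.chi2.sf(lr, df=1):.4f}")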

  11. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning

    PubMed Central

    Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron

    2015-01-01

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure-and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot’s configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy. PMID:26951790

  12. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning.

    PubMed

    Baykal, Cenk; Torres, Luis G; Alterovitz, Ron

    2015-09-28

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot's behavior and reachable workspace. Optimizing a robot's design by appropriately selecting tube parameters can improve the robot's effectiveness on a procedure-and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot's configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy.

  13. Ambient airborne solids concentrations including volcanic ash at Hanford, Washington sampling sites subsequent to the Mount St. Helens eruption

    NASA Technical Reports Server (NTRS)

    Sehmel, G. A.

    1982-01-01

    Airborne solids concentrations were measured on a near daily basis at two Hanford, Washington sites after the eruption of Mount St. Helens on May 18, 1980. These sites are about 211 km east of Mount St. Helens. Collected airborne solids included resuspended volcanic ash plus normal ambient solids. Average airborne solids concentrations were greater at the Hanford meteorological station sampling site which is 24 km northwest of the Horn Rapids dam sampling site. These increased concentrations reflect the sampling site proximity to greater ash fallout depths. Both sites are in low ash fallout areas although the Hanford meteorological station site is closer to the greater ash fallout areas. Airborne solids concentrations were decreased by rain, but airborne solids concentrations rapidly increased as surfaces dried. Airborne concentrations tended to become nearly the same at both sampling sites only for July 12 and 13.

  14. Calculating background levels for ecological risk parameters in toxic harbor sediment

    USGS Publications Warehouse

    Leadon, C.J.; McDonnell, T.R.; Lear, J.; Barclift, D.

    2007-01-01

    Establishing background levels for biological parameters is necessary in assessing the ecological risks from harbor sediment contaminated with toxic chemicals. For chemicals in sediment, the term contaminated is defined as having concentrations above background and significant human health or ecological risk levels. For biological parameters, a site could be considered contaminated if levels of the parameter are either more or less than the background level, depending on the specific parameter. Biological parameters can include tissue chemical concentrations in ecological receptors, bioassay responses, bioaccumulation levels, and benthic community metrics. Chemical parameters can include sediment concentrations of a variety of potentially toxic chemicals. Indirectly, contaminated harbor sediment can impact shellfish, fish, birds, marine mammals, and human populations. This paper summarizes the methods used to define background levels for chemical and biological parameters from a survey of ecological risk investigations of marine harbor sediment at California Navy bases. Background levels for regional biological indices used to quantify ecological risks for benthic communities are also described. Generally, background stations are positioned in relatively clean areas exhibiting the same physical and general chemical characteristics as nearby areas with contaminated harbor sediment. The number of background stations and the number of sample replicates per background station depend on the statistical design of the sediment ecological risk investigation, developed through the data quality objective (DQO) process. Biological data from the background stations can be compared to data from a contaminated site by using minimum or maximum background levels or comparative statistics. In Navy ecological risk assessments (ERAs), calculated background levels and appropriate ecological risk screening criteria are used to identify sampling stations and sites with contaminated

  15. The influence of technological parameters on the dynamic behavior of "liquid wood" samples obtained by injection molding

    NASA Astrophysics Data System (ADS)

    Plavanescu Mazurchevici, Simona; Carausu, Constantin; Comaneci, Radu; Nedelcu, Dumitru

    2017-10-01

    Plastic products contribute to environmental pollution. Replacing plastics with biodegradable materials of superior properties is an absolute necessity and an important research direction for the near future. The first step in this direction was the creation of composite materials containing natural fibers, with positive effects on the environment, which have penetrated different fields. Bioplastics and biocomposites made from natural fibers are a topical solution. The next step was towards biodegradable and recyclable materials based on cellulose and lignin and free of carcinogens. In this category falls "liquid wood", which can be reprocessed up to five times without affecting its mechanical properties. "Liquid wood" is a high-quality thermoplastic biocomposite. It is a biopolymer composite divided into three grades, ARBOFORM®, ARBOBLEND® and ARBOFILL®, which differ in composition in terms of lignin percentage, and is delivered by Tecnaro as granules [1]. The paper's research focused on Arboform L V3 Nature and Arboform L V3 Nature reinforced with aramid fiber. The experimental plan took into account six parameters (Dinj - direction of injection [°]; Ttop - melting temperature [°C]; Pinj - injection pressure [MPa]; Ss - speed [m/min]; tinj - injection time [s]; and tc - cooling time [s]), each at two levels, with the research carried out using the Taguchi methodology. The Taguchi analysis quantified the influence of the working parameters on storage modulus and damping and ranked them by the size of their effects. For samples injected from Arboform L V3 Nature, the storage modulus averaged 6055 MPa, with parameter influence in descending order: Trac, Ss, Pinj, Dinj and Ttop. For the reinforced material the model average was 6419 MPa, with parameter influence in descending order: Dinj, Trac, Ttop, tinj, Ss and Pinj.

  16. Correlations of fatty acid supplementation, aeroallergens, shampoo, and ear cleanser with multiple parameters in pruritic dogs.

    PubMed

    Nesbitt, Gene H; Freeman, Lisa M; Hannah, Steven S

    2004-01-01

    Seventy-two pruritic dogs were fed one of four diets controlled for n-6:n-3 fatty acid ratios and total dietary intake of fatty acids. Multiple parameters were evaluated, including clinical and cytological findings, aeroallergen testing, microbial sampling techniques, and effects of an anti-fungal/antibacterial shampoo and ear cleanser. Significant correlations were observed between many clinical parameters, anatomical sampling sites, and microbial counts when data from the diet groups were combined. There were no statistically significant differences between individual diets for any of the clinical parameters. The importance of total clinical management in the control of pruritus was demonstrated.

  17. Constitutive parameter measurements of lossy materials

    NASA Technical Reports Server (NTRS)

    Dominek, A.; Park, A.

    1989-01-01

    The electrical constitutive parameters of lossy materials are considered. A discussion of the NRL arch for lossy coatings is presented involving analytical analyses of the reflected field using the geometrical theory of diffraction (GTD) and physical optics (PO). The actual values for these parameters can be obtained through a traditional transmission technique which is examined from an error analysis standpoint. Alternate sample geometries are suggested for this technique to reduce sample tolerance requirements for accurate parameter determination. The performance for one alternate geometry is given.

  18. Two means of sampling sexual minority women: how different are the samples of women?

    PubMed

    Boehmer, Ulrike; Clark, Melissa; Timm, Alison; Ozonoff, Al

    2008-01-01

    We compared 2 sampling approaches of sexual minority women in 1 limited geographic area to better understand the implications of these 2 sampling approaches. Sexual minority women identified through the Census did not differ on average age or the prevalence of raising children from those sampled using nonrandomized methods. Women in the convenience sample were better educated and lived in smaller households. Modeling the likelihood of disability in this population resulted in contradictory parameter estimates by sampling approach. The degree of variation observed both between sampling approaches and between different parameters suggests that the total population of sexual minority women is still unmeasured. Thoroughly constructed convenience samples will continue to be a useful sampling strategy to further research on this population.

  19. Experimental research of the influence of the strength of ore samples on the parameters of an electromagnetic signal during acoustic excitation in the process of uniaxial compression

    NASA Astrophysics Data System (ADS)

    Yavorovich, L. V.; Bespal`ko, A. A.; Fedotov, P. I.

    2018-01-01

    Parameters of electromagnetic responses (EMRe) generated during uniaxial compression of rock samples under excitation by deterministic acoustic pulses are presented and discussed. Such physical modeling in the laboratory makes it possible to reveal the main regularities of electromagnetic signal (EMS) generation in a rock mass. The influence of the samples' mechanical properties on the parameters of the EMRe excited by an acoustic signal during uniaxial compression is considered. It has been established that sulfides and quartz in the rocks of the Tashtagol iron ore deposit (Western Siberia, Russia) contribute to the conversion of mechanical energy into electromagnetic field energy, which is expressed as an increase in the EMS amplitude. Conversely, the EMS amplitude under a changing stress-strain state during uniaxial compression decreases as the amount of conductive magnetite in the rock increases. The obtained results are important for the physical substantiation of methods for testing and monitoring changes in the stress-strain state of a rock mass by means of the parameters of electromagnetic signals and the characteristics of electromagnetic emission.

  20. The ARIEL mission reference sample

    NASA Astrophysics Data System (ADS)

    Zingales, Tiziano; Tinetti, Giovanna; Pillitteri, Ignazio; Leconte, Jérémy; Micela, Giuseppina; Sarkar, Subhajit

    2018-02-01

    The ARIEL (Atmospheric Remote-sensing Exoplanet Large-survey) mission concept is one of the three M4 mission candidates selected by the European Space Agency (ESA) for a Phase A study, competing for a launch in 2026. ARIEL has been designed to study the physical and chemical properties of a large and diverse sample of exoplanets and, through those, understand how planets form and evolve in our galaxy. Here we describe the assumptions made to estimate an optimal sample of exoplanets - including already known exoplanets and expected ones yet to be discovered - observable by ARIEL and define a realistic mission scenario. To achieve the mission objectives, the sample should include gaseous and rocky planets with a range of temperatures around stars of different spectral type and metallicity. The current ARIEL design enables the observation of ~1000 planets, covering a broad range of planetary and stellar parameters, during its four year mission lifetime. This nominal list of planets is expected to evolve over the years depending on the new exoplanet discoveries.

  1. Estimation of Surface Heat Flux and Surface Temperature during Inverse Heat Conduction under Varying Spray Parameters and Sample Initial Temperature

    PubMed Central

    Aamir, Muhammad; Liao, Qiang; Zhu, Xun; Aqeel-ur-Rehman; Wang, Hong

    2014-01-01

    An experimental study was carried out to investigate the effects of inlet pressure, sample thickness, initial sample temperature, and temperature sensor location on the surface heat flux, surface temperature, and surface ultrafast cooling rate, using stainless steel samples of diameter 27 mm and thicknesses of 8.5, 13, 17.5, and 22 mm. Inlet pressure was varied from 0.2 MPa to 1.8 MPa, while sample initial temperature varied from 600°C to 900°C. Beck's sequential function specification method was utilized to estimate surface heat flux and surface temperature. Inlet pressure has a positive effect on surface heat flux (SHF) up to a critical pressure. Increasing sample thickness reduces the maximum achieved SHF. A surface heat flux as high as 0.4024 MW/m² was estimated for a thickness of 8.5 mm. The insulating effect of the vapor film becomes apparent at sample initial temperatures around 900°C, reducing the surface heat flux and the cooling rate of the sample. A sensor located near the quenched surface is found to be a better choice for visualizing the effects of spray parameters on surface heat flux and surface temperature. The cooling rate showed a profound increase at an inlet pressure of 0.8 MPa. PMID:24977219

  2. Bone mineral metabolism parameters and urinary albumin excretion in a representative US population sample.

    PubMed

    Ellam, Timothy; Fotheringham, James; Wilkie, Martin E; Francis, Sheila E; Chico, Timothy J A

    2014-01-01

    Even within accepted normal ranges, higher serum phosphorus, dietary phosphorus density, parathyroid hormone (PTH) and alkaline phosphatase (ALP) are independent predictors of cardiovascular mortality. Lower serum 25-hydroxy vitamin D (25(OH)D) also predicts adverse cardiovascular outcomes. We hypothesized that vascular dysfunction accompanying subtle disturbances of these bone metabolism parameters would result in associations with increased low grade albuminuria. We examined participants in the National Health and Nutrition Examination Surveys 1999-2010 (N = 19,383) with estimated glomerular filtration rate (eGFR) ≥60 ml/min/1.73 m² and without severe albuminuria (urine albumin:creatinine ratio (ACR) <300 mg/g). Albuminuria was quantified as ACR and fractional albumin excretion (FE(alb)). Increasing quintiles of dietary phosphorus density, serum phosphorus and ALP were not associated with higher ACR or FE(alb). The lowest versus highest quintile of 25(OH)D was associated with greater albuminuria, but not after adjustment for other covariates including cardiovascular risk factors. An association between the highest versus lowest quintile of bone-specific ALP and greater ACR persisted after covariate adjustment, but was not accompanied by an independent association with FE(alb). Increasing quintiles of PTH demonstrated associations with both higher ACR and FE(alb) that were not abolished by adjusting for covariates including age, gender, race, body mass index, diabetes, blood pressure, history of cardiovascular disease, smoking, eGFR, 25(OH)D, season of measurement, lipids, hemoglobin and C-reactive protein. Adjusted increases in ACR and FE(alb) associated with the highest versus lowest quintile of PTH were 19% (95% confidence interval 7-28% p<0.001) and 17% (8-31% p = 0.001) respectively. In this population, of the bone mineral parameters associated with cardiovascular outcomes, only PTH is independently associated with ACR and FE(alb).

  3. Influence of Population Variation of Physiological Parameters in Computational Models of Space Physiology

    NASA Technical Reports Server (NTRS)

    Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.

    2016-01-01

    The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling with partial rank correlation coefficients (LHS-PRCC). LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors, including a high sensitivity to the initial fluid distribution. The FEM study found that intraocular pressure and
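
    As an illustration of the LHS-PRCC approach described above, the sketch below draws a Latin hypercube sample over a toy three-parameter model and computes partial rank correlation coefficients. The model, parameter bounds, and sample size are hypothetical stand-ins for the study's LPM/FEM inputs, not its actual models.

        import numpy as np
        from scipy.stats import qmc, rankdata

        def prcc(X, y):
            # Partial rank correlation of each parameter with the output:
            # rank-transform everything, remove the linear effect of the other
            # parameters from both sides, then correlate the residuals.
            R = np.column_stack([rankdata(col) for col in X.T])
            ry = rankdata(y)
            coeffs = []
            for i in range(X.shape[1]):
                A = np.column_stack([np.ones(len(ry)), np.delete(R, i, axis=1)])
                res_x = R[:, i] - A @ np.linalg.lstsq(A, R[:, i], rcond=None)[0]
                res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
                coeffs.append(np.corrcoef(res_x, res_y)[0, 1])
            return np.array(coeffs)

        # Hypothetical 3-parameter model standing in for the LPM/FEM outputs
        sampler = qmc.LatinHypercube(d=3, seed=0)
        X = qmc.scale(sampler.random(n=500), [0.5, 1.0, 10.0], [1.5, 5.0, 50.0])
        y = 2.0 * X[:, 0] + 0.1 * X[:, 1] ** 2 + np.random.default_rng(0).normal(0.0, 0.1, 500)
        print(prcc(X, y))   # first two parameters should dominate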

  4. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    NASA Astrophysics Data System (ADS)

    Chapon, Arnaud; Pigrée, Gilbert; Putmans, Valérie; Rogel, Gwendal

    Searching for low-energy β contamination in industrial environments requires liquid scintillation counting. This indirect measurement method demands fine control of every step, from sampling to the measurement itself. In this paper we therefore focus on defining a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. This includes the choice of consumables, sampling methods, optimization of counting parameters, and definition of energy windows by maximizing a figure of merit. Detection limits are then calculated using these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except for results tied to PerkinElmer-specific parameters, most of the results presented here can be extended to other counters.
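
    A common definition of the figure of merit in liquid scintillation counting is E²/B, with E the counting efficiency and B the background count in the chosen energy window. The sketch below grid-searches a window that maximizes it; the spectra and channel ranges are synthetic, invented purely for illustration.

        import numpy as np

        channels = np.arange(200)                                        # energy channels
        source = 5000.0 * np.exp(-0.5 * ((channels - 40) / 15.0) ** 2)   # synthetic beta spectrum
        background = np.full(200, 20.0)                                  # synthetic flat background

        best_fom, best_window = 0.0, None
        for lo in range(0, 150, 5):
            for hi in range(lo + 10, 201, 5):
                eff = source[lo:hi].sum() / source.sum()   # counting efficiency E
                bkg = background[lo:hi].sum()              # background B in the window
                fom = 100.0 * eff ** 2 / bkg               # figure of merit E^2/B
                if fom > best_fom:
                    best_fom, best_window = fom, (lo, hi)
        print(best_window, best_fom)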

  5. Influence of meteorological parameters on air quality

    NASA Astrophysics Data System (ADS)

    Gioda, Adriana; Ventura, Luciana; Lima, Igor; Luna, Aderval

    2013-04-01

    The physical characterization of ambient air particle concentrations is a topic of great interest for urban air quality monitoring and human exposure assessment. Human exposure to particulate matter of less than 2.5 µm in diameter (PM2.5) can result in a variety of adverse health impacts, including reduced lung function and premature mortality. Numerous studies have shown that fine particles (PM2.5) are more dangerous to human health than coarse particles (e.g., PM10). This study investigates the impact of meteorological parameters on PM2.5 concentrations in the atmosphere of Rio de Janeiro, Brazil. Samples were collected over 24 h every six days using a high-volume sampler at six sites in the metropolitan area of Rio de Janeiro from January to December 2011. Particle mass was determined gravimetrically. Meteorological parameters were obtained from automatic stations near the sampling sites. The average PM2.5 concentrations ranged from 9 to 32 µg/m3 across the sites, exceeding the WHO suggested annual limit (10 µg/m3). The relationship between temperature, relative humidity, wind speed and direction, and particle concentration was examined using Principal Component Analysis (PCA) for the different sites and seasons. Each sampling point and season yielded a different number of principal components, varying from 2 to 4, and markedly different relationships with the parameters. This clearly shows that changes in meteorological conditions exert a marked influence on air quality.
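
    A minimal sketch of the PCA step, assuming a table with one PM2.5 column and columns for temperature, relative humidity, and wind speed; the data here are synthetic, whereas the real analysis used per-site, per-season matrices.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        temp = rng.normal(25, 4, 60)                      # synthetic daily means
        rh = rng.normal(70, 10, 60)
        wind = rng.gamma(2.0, 1.5, 60)
        pm25 = 30 - 0.5 * wind + 0.1 * temp + rng.normal(0, 2, 60)

        X = StandardScaler().fit_transform(np.column_stack([pm25, temp, rh, wind]))
        pca = PCA()
        scores = pca.fit_transform(X)
        print(pca.explained_variance_ratio_)   # how many components to retain
        print(pca.components_)                 # loadings linking PM2.5 to meteorology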

  6. Discussion of parameters associated with the determination of arsenic by electrothermal atomic absorption spectrometry in slurried environmental samples.

    PubMed

    Vassileva, E; Baeten, H; Hoenig, M

    2001-01-02

    A slurry sampling-fast program procedure has been developed for the determination of arsenic in plants, soils and sediments by electrothermal atomic absorption spectrometry. The efficiencies of various single and mixed modifiers for thermal stabilization of arsenic, and for better removal of the matrix during the pyrolysis step, were compared. The influence of the slurry concentration, the amount of modifier, and the pyrolysis-step parameters on the integrated As absorbance signals has been studied, and a comparison between fast and conventional furnace programs was also made. Ultrasonic agitation of the slurry followed by a fast electrothermal program using an Ir/Mg modifier provides the most consistent performance in terms of precision and accuracy. The reliability of the whole procedure has been compared with results obtained after application of a wet digestion method with an HF step, and validated by analyzing eleven certified reference materials. Arsenic detection and quantitation limits, expressed on dry sample matter, were about 30 and 100 µg kg-1, respectively.

  7. Conformational sampling with stochastic proximity embedding and self-organizing superimposition: establishing reasonable parameters for their practical use.

    PubMed

    Tresadern, Gary; Agrafiotis, Dimitris K

    2009-12-01

    Stochastic proximity embedding (SPE) and self-organizing superimposition (SOS) are two recently introduced methods for conformational sampling that have shown great promise in several application domains. Our previous validation studies aimed at exploring the limits of these methods and have involved rather exhaustive conformational searches producing a large number of conformations. However, from a practical point of view, such searches have become the exception rather than the norm. The increasing popularity of virtual screening has created a need for 3D conformational search methods that produce meaningful answers in a relatively short period of time and work effectively on a large scale. In this work, we examine the performance of these algorithms and the effects of different parameter settings at varying levels of sampling. Our goal is to identify search protocols that can produce a diverse set of chemically sensible conformations and have a reasonable probability of sampling biologically active space within a small number of trials. Our results suggest that both SPE and SOS are extremely competitive in this regard and produce very satisfactory results with as few as 500 conformations per molecule. The results improve even further when the raw conformations are minimized with a molecular mechanics force field to remove minor imperfections and any residual strain. These findings provide additional evidence that these methods are suitable for many everyday modeling tasks, both high- and low-throughput.

  8. Sampling design optimization for spatial functions

    USGS Publications Warehouse

    Olea, R.A.

    1984-01-01

    A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.

  9. Recovering Parameters of Johnson's SB Distribution

    Treesearch

    Bernard R. Parresol

    2003-01-01

    A new parameter recovery model for Johnson's SB distribution is developed. This latest alternative approach permits recovery of the range and both shape parameters. Previous models recovered only the two shape parameters. Also, a simple procedure for estimating the distribution minimum from sample values is presented. The new methodology...
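
    For orientation, Johnson's SB is available in SciPy; the sketch below fits its two shape parameters plus location (minimum) and scale (range) to synthetic data by maximum likelihood. This is a generic fit, not the parameter-recovery model developed in the paper.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        true = dict(a=0.5, b=1.2, loc=2.0, scale=30.0)   # shapes a, b; loc = minimum, scale = range
        x = stats.johnsonsb.rvs(**true, size=2000, random_state=rng)

        a, b, loc, scale = stats.johnsonsb.fit(x)        # maximum-likelihood estimates
        print(a, b, loc, scale)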

  10. NASTRAN thermal analyzer: Theory and application including a guide to modeling engineering problems, volume 2. [sample problem library guide

    NASA Technical Reports Server (NTRS)

    Jackson, C. E., Jr.

    1977-01-01

    A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.

  11. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
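
    To make the sparse-recovery setting concrete, the sketch below builds a small probabilists'-Hermite design matrix under the natural Gaussian sampling and recovers sparse coefficients with an ℓ1-regularized (Lasso) solver standing in for the ℓ1-minimization problem; the dimensions, sparsity pattern, and regularization weight are arbitrary.

        import numpy as np
        from numpy.polynomial import hermite_e as He
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n_samples, n_basis = 80, 40
        xi = rng.standard_normal(n_samples)              # natural sampling for Hermite
        Psi = np.column_stack([He.hermeval(xi, np.eye(n_basis)[k]) for k in range(n_basis)])
        Psi /= np.linalg.norm(Psi, axis=0)               # column normalization

        c_true = np.zeros(n_basis)
        c_true[[0, 3, 7]] = [1.0, 0.5, -0.25]            # sparse PC coefficients
        u = Psi @ c_true                                 # model evaluations

        c_hat = Lasso(alpha=1e-4, fit_intercept=False, max_iter=100000).fit(Psi, u).coef_
        print(np.flatnonzero(np.abs(c_hat) > 1e-3))      # recovered support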

  12. Nondestructive prediction of pork freshness parameters using multispectral scattering images

    NASA Astrophysics Data System (ADS)

    Tang, Xiuying; Li, Cuiling; Peng, Yankun; Chao, Kuanglin; Wang, Mingwu

    2012-05-01

    Optical technology is an important and emerging technology for non-destructive and rapid detection of pork freshness. This paper studies the possibility of using a multispectral imaging technique and scattering characteristics to predict the freshness parameters of pork meat. The pork freshness parameters selected for prediction included total volatile basic nitrogen (TVB-N), color parameters (L*, a*, b*), and pH value. Multispectral scattering images were obtained from the pork sample surface by an in-house multispectral imaging system; they were acquired at selected narrow wavebands whose center wavelengths were 517, 550, 560, 580, 600, 760, 810 and 910 nm. In order to extract scattering characteristics from multispectral images at multiple wavelengths, a Lorentzian distribution (LD) function with four parameters (a: scattering asymptotic value; b: scattering peak; c: scattering width; d: scattering slope) was used to fit the scattering curves at the selected wavelengths. The results show that the multispectral imaging technique combined with scattering characteristics is promising for predicting the freshness parameters of pork meat.
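
    A sketch of the curve-fitting step, assuming the four-parameter Lorentzian takes the common form R(x) = a + b / (1 + (x/c)^d); the paper's exact functional form may differ. SciPy's curve_fit recovers (a, b, c, d) from a noisy synthetic scattering profile.

        import numpy as np
        from scipy.optimize import curve_fit

        def lorentzian(x, a, b, c, d):
            # a: asymptotic value, b: peak, c: width, d: slope (form assumed)
            return a + b / (1.0 + (x / c) ** d)

        r = np.linspace(0.5, 15.0, 120)                  # scattering distance from incident point
        rng = np.random.default_rng(1)
        profile = lorentzian(r, 0.05, 1.0, 3.0, 2.5) + rng.normal(0.0, 0.01, r.size)

        popt, _ = curve_fit(lorentzian, r, profile, p0=[0.1, 1.0, 2.0, 2.0])
        print(popt)                                      # fitted (a, b, c, d) per waveband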

  13. Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.

    PubMed

    Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra

    2016-11-20

    The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
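
    For intuition only: the simplest cluster-randomised inflation multiplies the individually randomised sample size by a design effect. This is not the paper's full stepped-wedge formula, which additionally involves the cluster and individual autocorrelations described above.

        def design_effect(m, icc):
            # Standard design effect for a parallel cluster RCT with m participants
            # per cluster and intracluster correlation icc.
            return 1.0 + (m - 1) * icc

        n_individual = 400        # n from an ordinary two-arm power calculation
        print(n_individual * design_effect(m=25, icc=0.05))   # inflated total n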

  14. Soluble CD26 levels and its association to epidemiologic parameters in a sample population.

    PubMed

    De Chiara, Loretta; Rodríguez-Piñeiro, Ana M; Cordero, Oscar J; Rodríguez-Berrocal, Francisco J; Ayude, Daniel; Rivas-Hervada And, Francisco J; de la Cadena, María Páez

    2009-01-01

    Previous studies have suggested the use of soluble CD26 (sCD26) as a tumour marker for the detection of colorectal cancer (CRC) and advanced adenomas. The aim of this study was to assess the sCD26 concentration in a large cohort to evaluate its association with epidemiologic parameters and CRC-related symptoms/pathologies. Serum samples were collected from 2,754 putatively healthy individuals aged 30 to 65 years, with personal or familial history of polyps, CRC and/or CR symptoms. sCD26 levels were measured by ELISA. No association was found between the sCD26 concentration and age (<50 and ≥50 years), personal or familial history of polyps or CRC, rectal bleeding, haemorrhoids or diverticula. However, sCD26 was related to non-inflammatory benign pathologies (excluding rectal bleeding, changes in bowel habits, haemorrhoids, diverticula) and to inflammatory benign pathologies. Our results confirm that sCD26 testing can be easily offered and evaluated in a large cohort. Additionally, validation of sCD26 as a tumour marker for screening and case-finding purposes requires further comparison with an established non-invasive test such as the faecal occult blood test.

  15. CosmoSIS: A system for MC parameter estimation

    DOE PAGES

    Bridle, S.; Dodelson, S.; Jennings, E.; ...

    2015-12-23

    CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, there is nothing about the framework itself, nor in the Markov Chain Monte Carlo technique, that is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two different communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.
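
    The core MCMC loop driving such samplers is compact. Below is a minimal random-walk Metropolis sketch with a toy Gaussian likelihood standing in for a CosmoSIS module pipeline; the step size and parameter meanings are arbitrary.

        import numpy as np

        def log_like(theta):
            # toy 2-parameter Gaussian likelihood (placeholder for a physics pipeline)
            return -0.5 * np.sum((theta - np.array([0.3, 0.8])) ** 2 / 0.05 ** 2)

        rng = np.random.default_rng(2)
        theta = np.array([0.5, 0.5])
        chain = []
        for _ in range(20000):
            proposal = theta + rng.normal(0.0, 0.02, size=2)       # symmetric proposal
            if np.log(rng.uniform()) < log_like(proposal) - log_like(theta):
                theta = proposal                                   # Metropolis accept
            chain.append(theta.copy())
        print(np.mean(chain[5000:], axis=0))                       # posterior mean after burn-in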

  16. Prediction of compressibility parameters of the soils using artificial neural network.

    PubMed

    Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan

    2016-01-01

    The compression index and recompression index are among the important compressibility parameters for settlement calculations in fine-grained soil layers. These parameters can be determined by carrying out laboratory oedometer tests on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters from soil index properties. In this paper, an artificial neural network (ANN) model is suggested for prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are the natural water content, initial void ratio, liquid limit and plasticity index. In this model, two output parameters, the compression index and the recompression index, are predicted in a combined network structure. The proposed ANN model predicts the compression index successfully; however, the predicted recompression index values are less satisfactory than those for the compression index.
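
    A minimal sketch of such a network with scikit-learn, using the same four inputs and two combined outputs; the synthetic data, generating relations, and network size are placeholders, not the paper's dataset or architecture.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n = 300
        wn = rng.uniform(15, 60, n)            # natural water content (%)
        e0 = rng.uniform(0.4, 1.6, n)          # initial void ratio
        ll = rng.uniform(25, 80, n)            # liquid limit (%)
        pi = rng.uniform(5, 45, n)             # plasticity index (%)
        X = np.column_stack([wn, e0, ll, pi])
        cc = 0.009 * (ll - 10) + 0.1 * e0 + rng.normal(0, 0.02, n)   # synthetic Cc
        cr = 0.12 * cc + rng.normal(0, 0.01, n)                      # synthetic Cr
        y = np.column_stack([cc, cr])          # combined two-output structure

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                           random_state=0))
        model.fit(X[:250], y[:250])
        print(model.score(X[250:], y[250:]))   # R^2 on held-out samples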

  17. The Effects of Sampling Probe Design and Sampling Techniques on Aerosol Measurements

    DTIC Science & Technology

    1975-05-01

    [Record damaged in extraction; only fragments survive.] The recoverable front matter lists a schematic of the extraction and sampling system, the filter housing, and the theoretical isokinetic flow requirements of the EPA sampling train. The surviving text indicates that sampling errors were computed from the flow parameters under a zero-error assumption at isokinetic (equal-velocity) sampling conditions, and that the flow field adjacent to the probe inlets was measured prior to testing the probes in order to determine their isokinetic condition.

  18. 300 Area treated effluent disposal facility sampling schedule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loll, C.M.

    1994-10-11

    This document is the interface between the 300 Area Liquid Effluent Process Engineering (LEPE) group and the Waste Sampling and Characterization Facility (WSCF), concerning process control samples. It contains a schedule for process control samples at the 300 Area TEDF which describes the parameters to be measured, the frequency of sampling and analysis, the sampling point, and the purpose for each parameter.

  19. Development and Validation of Limited-Sampling Strategies for Predicting Amoxicillin Pharmacokinetic and Pharmacodynamic Parameters

    PubMed Central

    Suarez-Kurtz, Guilherme; Ribeiro, Frederico Mota; Vicente, Flávio L.; Struchiner, Claudio J.

    2001-01-01

    Amoxicillin plasma concentrations (n = 1,152) obtained from 48 healthy subjects in two bioequivalence studies were used to develop limited-sampling strategy (LSS) models for estimating the area under the concentration-time curve (AUC), the maximum concentration of drug in plasma (Cmax), and the time interval of concentration above MIC susceptibility breakpoints in plasma (T>MIC). Each subject received 500-mg amoxicillin, as reference and test capsules or suspensions, and plasma concentrations were measured by a validated microbiological assay. Linear regression analysis and a “jack-knife” procedure revealed that three-point LSS models accurately estimated (R2, 0.92; precision, <5.8%) the AUC from 0 h to infinity (AUC0-∞) of amoxicillin for the four formulations tested. Validation tests indicated that a three-point LSS model (1, 2, and 5 h) developed for the reference capsule formulation predicts the following accurately (R2, 0.94 to 0.99): (i) the individual AUC0-∞ for the test capsule formulation in the same subjects, (ii) the individual AUC0-∞ for both reference and test suspensions in 24 other subjects, and (iii) the average AUC0-∞ following single oral doses (250 to 1,000 mg) of various amoxicillin formulations in 11 previously published studies. A linear regression equation was derived, using the same sampling time points of the LSS model for the AUC0-∞, but using different coefficients and intercept, for estimating Cmax. Bioequivalence assessments based on LSS-derived AUC0-∞'s and Cmax's provided results similar to those obtained using the original values for these parameters. Finally, two-point LSS models (R2 = 0.86 to 0.95) were developed for T>MICs of 0.25 or 2.0 μg/ml, which are representative of microorganisms susceptible and resistant to amoxicillin. PMID:11600352
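
    The form of such a limited-sampling model is an ordinary linear regression of AUC0-∞ on a few concentration time points. The sketch below fits one on synthetic concentration data; the 1, 2, and 5 h sampling times follow the abstract, while the pharmacokinetic constants and noise are invented.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        n = 48
        # synthetic one-compartment profiles: C(t) = C0 * exp(-ke * t)
        ke = rng.normal(0.5, 0.08, n)
        c0 = rng.normal(8.0, 1.5, n)
        times = np.array([1.0, 2.0, 5.0])                      # LSS sampling times (h)
        C = c0[:, None] * np.exp(-ke[:, None] * times[None, :])
        auc = c0 / ke                                          # analytic AUC0-inf

        lss = LinearRegression().fit(C, auc)
        print(lss.coef_, lss.intercept_)                       # three-point LSS model
        print(lss.score(C, auc))                               # R^2, cf. ~0.92 in the study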

  20. Estimation of genetic parameters for heat stress, including dominance gene effects, on milk yield in Thai Holstein dairy cattle.

    PubMed

    Boonkum, Wuttigrai; Duangjinda, Monchai

    2015-03-01

    Heat stress in tropical regions strongly and negatively affects milk production in dairy cattle. Genetic selection for heat tolerance is a powerful technique to improve genetic performance. Therefore, the current study aimed to estimate genetic parameters and investigate the threshold point of heat stress for milk yield. Data included 52,701 test-day milk yield records for the first parity from 6,247 Thai Holstein dairy cattle, covering the period 1990 to 2007. A random regression test-day model with EM-REML was used to estimate variance components, genetic parameters and milk production loss. A decline in milk production was found when the temperature and humidity index (THI) exceeded a threshold of 74; the decline was also associated with a high percentage of Holstein genetics. All variance component estimates increased with THI. The estimated heritability of test-day milk yield was 0.231. Dominance variance as a proportion of additive variance (0.035) indicated that non-additive effects might not be of concern for milk genetics studies in Thai Holstein cattle. Correlations between genetic and permanent environmental effects, for regular conditions and due to heat stress, were -0.223 and -0.521, respectively. The heritability and genetic correlations from this study show that simultaneous selection for milk production and heat tolerance is possible. © 2014 Japanese Society of Animal Science.

  1. Non-linear matter power spectrum covariance matrix errors and cosmological parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Blot, L.; Corasaniti, P. S.; Amendola, L.; Kitching, T. D.

    2016-06-01

    The covariance of the matter power spectrum is a key element of the analysis of galaxy clustering data. Independent realizations of observational measurements can be used to sample the covariance; nevertheless, statistical sampling errors will propagate into the cosmological parameter inference, potentially limiting the capabilities of the upcoming generation of galaxy surveys. The impact of these errors as a function of the number of realizations has been previously evaluated for Gaussian distributed data. However, non-linearities in the late-time clustering of matter cause departures from Gaussian statistics. Here, we address the impact of non-Gaussian errors on the sample covariance and precision matrix errors using a large ensemble of N-body simulations. In the range of modes where finite volume effects are negligible (0.1 ≲ k [h Mpc-1] ≲ 1.2), we find deviations of the variance of the sample covariance with respect to Gaussian predictions above ∼10 per cent at k > 0.3 h Mpc-1. Over the entire range these reduce to about ∼5 per cent for the precision matrix. Finally, we perform a Fisher analysis to estimate the effect of covariance errors on the cosmological parameter constraints. In particular, assuming Euclid-like survey characteristics we find that a number of independent realizations larger than 5000 is necessary to reduce the contribution of sampling errors to the cosmological parameter uncertainties at the subpercent level. We also show that restricting the analysis to large scales k ≲ 0.2 h Mpc-1 results in a considerable loss in constraining power, while using the linear covariance to include smaller scales leads to an underestimation of the errors on the cosmological parameters.
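
    The Gaussian baseline against which the paper measures deviations can be sketched directly: estimate a covariance from a finite set of realisations and debias its inverse with the standard Hartlap factor. The dimensions and true covariance below are arbitrary, and the debiasing is only exact for Gaussian-distributed data.

        import numpy as np

        rng = np.random.default_rng(4)
        n_bins, n_real = 20, 100                       # k-bins, independent realisations
        true_cov = np.diag(np.linspace(1.0, 2.0, n_bins))
        X = rng.multivariate_normal(np.zeros(n_bins), true_cov, size=n_real)

        C_hat = np.cov(X, rowvar=False)                # sample covariance
        # Hartlap et al. (2007) debiasing of the inverse sample covariance
        precision = (n_real - n_bins - 2) / (n_real - 1) * np.linalg.inv(C_hat)
        print(np.diag(precision @ true_cov))           # close to 1 on average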

  2. Analysis of hepatitis C viral dynamics using Latin hypercube sampling

    NASA Astrophysics Data System (ADS)

    Pachpute, Gaurav; Chakrabarty, Siddhartha P.

    2012-12-01

    We consider a mathematical model comprising four coupled ordinary differential equations (ODEs) to study hepatitis C viral dynamics. The model includes the efficacies of a combination therapy of interferon and ribavirin. This paper has two main objectives. The first is to approximate the percentage of cases in which there is viral clearance in the absence of treatment, as well as the percentage of response to treatment for various efficacy levels. The other is to better understand and identify the parameters that play a key role in the decline of viral load and can be estimated in a clinical setting. A condition for the stability of the uninfected and the infected steady states is presented. A large number of sample points for the model parameters (which are physiologically feasible) are generated using Latin hypercube sampling. An analysis of the simulated values identifies that approximately 29.85% of cases result in clearance of the virus during the early phase of the infection. Results from the χ² and Spearman tests on the samples indicate a distinctly different distribution of certain parameters for the cases exhibiting viral clearance under the combination therapy.

  3. Total Arsenic, Cadmium, and Lead Determination in Brazilian Rice Samples Using ICP-MS

    PubMed Central

    Buzzo, Márcia Liane; de Arauz, Luciana Juncioni; Carvalho, Maria de Fátima Henriques; Arakaki, Edna Emy Kumagai; Matsuzaki, Richard; Tiglea, Paulo

    2016-01-01

    This study is aimed at investigating a suitable method for rice sample preparation, as well as validating and applying the method for monitoring the concentrations of total arsenic, cadmium, and lead in rice by Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Various rice sample preparation procedures were evaluated. The analytical method was validated by measuring several parameters including limit of detection (LOD), limit of quantification (LOQ), linearity, relative bias, and repeatability. Regarding sample preparation, recoveries of spiked samples were within the acceptable range, from 89.3 to 98.2% for the muffle furnace, 94.2 to 103.3% for the heating block, 81.0 to 115.0% for the hot plate, and 92.8 to 108.2% for the microwave. Validation parameters showed that the method is fit for its purpose, with total arsenic, cadmium, and lead within the limits set by Brazilian legislation. The method was applied to 37 rice samples (including polished, brown, and parboiled) consumed by the Brazilian population. The total arsenic, cadmium, and lead contents were lower than the established legislative values, except for total arsenic in one brown rice sample. This study indicated the need to establish monitoring programs for this type of cereal, aiming to promote public health. PMID:27766178

  4. On Markov parameters in system identification

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Juang, Jer-Nan; Longman, Richard W.

    1991-01-01

    A detailed discussion of Markov parameters in system identification is given. Different forms of input-output representation of linear discrete-time systems are reviewed and discussed. Interpretation of sampled response data as Markov parameters is presented. Relations between the state-space model and particular linear difference models via the Markov parameters are formulated. A generalization of Markov parameters to observer and Kalman filter Markov parameters for system identification is explained. These extended Markov parameters play an important role in providing not only a state-space realization, but also an observer/Kalman filter for the system of interest.
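
    Concretely, for a discrete-time state-space model the Markov parameters are the unit-pulse response samples Y0 = D and Yk = C A^(k-1) B. A minimal numpy check, with the matrices chosen arbitrarily:

        import numpy as np

        # x[k+1] = A x[k] + B u[k],  y[k] = C x[k] + D u[k]
        A = np.array([[0.9, 0.1],
                      [0.0, 0.8]])
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])
        D = np.array([[0.0]])

        # Markov parameters Y0 = D, Yk = C A^(k-1) B for k >= 1
        markov = [D] + [C @ np.linalg.matrix_power(A, k - 1) @ B for k in range(1, 6)]
        print([m.item() for m in markov])   # equals the response to a unit pulse input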

  5. Seasonal microbial and environmental parameters at Crocker Reef, Florida Keys, 2014–2015

    USGS Publications Warehouse

    Kellogg, Christina A.; Yates, Kimberly K.; Lawler, Stephanie N.; Moore, Christopher S.; Smiley, Nathan A.

    2015-11-04

    Microbial measurements included enumeration of total bacteria, enumeration of virus-like particles, and plate counts of Vibrio spp. colony-forming units (CFU). These measurements were intended to give a sense of any seasonal changes in the total microbial load and to provide an indication of water quality. Additional environmental parameters measured included water temperature, salinity, dissolved oxygen, and pH. Four sites (table 1) were intensively sampled for periods of approximately 48 hours during summer (July 2014) and winter (January–February 2015), during which water samples were collected every 4 hours for analysis, except when prevented by weather conditions.

  6. Genetic parameter estimates for carcass traits and visual scores including or not genomic information.

    PubMed

    Gordo, D G M; Espigolan, R; Tonussi, R L; Júnior, G A F; Bresolin, T; Magalhães, A F Braga; Feitosa, F L; Baldi, F; Carvalheiro, R; Tonhati, H; de Oliveira, H N; Chardulo, L A L; de Albuquerque, L G

    2016-05-01

    The objective of this study was to determine whether visual scores used as selection criteria in Nellore breeding programs are effective indicators of carcass traits measured after slaughter. Additionally, this study evaluated the effect of different structures of the relationship matrix (A and H) on the estimation of genetic parameters and on the prediction accuracy of breeding values. There were 13,524 animals for visual scores of conformation (CS), finishing precocity (FP), and muscling (MS) and 1,753, 1,747, and 1,564 for LM area (LMA), backfat thickness (BF), and HCW, respectively. Of these, 1,566 animals were genotyped using a high-density panel containing 777,962 SNP. Six analyses were performed using multitrait animal models, each including the 3 visual scores and 1 carcass trait. For the visual scores, the model included direct additive genetic and residual random effects and the fixed effects of contemporary group (defined by year of birth, management group at yearling, and farm) and the linear effect of age of animal at yearling. The same model was used for the carcass traits, replacing the effect of age of animal at yearling with the linear effect of age of animal at slaughter. The variance and covariance components were estimated by the REML method in analyses using the numerator relationship matrix (A) or combining the genomic and the numerator relationship matrices (H). The heritability estimates for the visual scores obtained with the 2 methods were similar and of moderate magnitude (0.23-0.34), indicating that these traits should respond to direct selection. The heritabilities for LMA, BF, and HCW were 0.13, 0.07, and 0.17, respectively, using matrix A and 0.29, 0.16, and 0.23, respectively, using matrix H. The genetic correlations between the visual scores and carcass traits were positive, and higher correlations were generally obtained when matrix H was used. Considering the difficulties and cost of measuring carcass traits postmortem, visual scores of

  7. Growth reference for Saudi preschool children: LMS parameters and percentiles.

    PubMed

    Shaik, Shaffi Ahamed; El Mouzan, Mohammad Issa; AlSalloum, Abdullah Abdulmohsin; AlHerbish, Abdullah Sulaiman

    2016-01-01

    Previous growth charts for Saudi children have not included the detailed tables and parameters needed for research and for incorporation in electronic records. The objective of this report is to publish the L, M, and S parameters and percentiles, as well as the corresponding growth charts, for Saudi preschool children. Community-based survey and measurement of growth parameters in a sample selected by a multistage probability procedure from a stratified listing of the Saudi population. Raw data from the previous nationally representative sample were reanalyzed using the Lambda-Mu-Sigma (LMS) methodology to calculate the L, M, and S parameters of percentiles (from 3rd to 97th) for weight, length/height, head circumference, body mass index-for-age, and weight-for-length/height for boys and girls from birth to 60 months. Length or height and weight of Saudi preschool children were the main measures. There were 15,601 Saudi children younger than 60 months of age; 7,896 (50.6%) were boys. The LMS parameters for weight-for-age from birth to 60 months (5 years) are reported for the 3rd, 5th, 10th, 25th, 50th, 75th, 90th, 95th, and 97th percentiles, together with the corresponding graphs. Similarly, the LMS parameters for length/height-for-age, head circumference-for-age, weight-for-length/height, and body mass index-for-age (BMI) are shown with the corresponding graphs for boys and girls. Using the data in this report, clinicians and researchers can assess the growth of Saudi preschool children. The report does not reflect interregional variations in growth.
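
    Given published L, M, and S parameters, a child's measurement converts to a z-score by Cole's LMS transformation; a minimal sketch follows, with example parameter values that are hypothetical rather than taken from the Saudi reference.

        import math

        def lms_zscore(x, L, M, S):
            # Cole's LMS transformation: z = ((x/M)**L - 1) / (L*S),
            # or ln(x/M) / S in the limiting case L == 0.
            if L != 0:
                return ((x / M) ** L - 1.0) / (L * S)
            return math.log(x / M) / S

        # hypothetical parameters for weight-for-age at some month of age
        print(lms_zscore(x=11.2, L=-0.2, M=10.5, S=0.11))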

  8. Sampling and data handling methods for inhalable particulate sampling. Final report nov 78-dec 80

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, W.B.; Cushing, K.M.; Johnson, J.W.

    1982-05-01

    The report reviews the objectives of a research program on sampling and measuring particles in the inhalable particulate (IP) size range in emissions from stationary sources, and describes the methods and equipment required. A computer technique was developed to analyze data on particle-size distributions of samples taken with cascade impactors from industrial process streams. Research on sampling systems for IP matter included concepts for maintaining isokinetic sampling conditions, necessary for representative sampling of the larger particles, while flowrates in the particle-sizing device were held constant. Laboratory studies were conducted to develop suitable IP sampling systems with overall cut diameters of 15 micrometers and conforming to a specified collection efficiency curve. Collection efficiencies were similarly measured for a horizontal elutriator. Design parameters were calculated for horizontal elutriators to be used with impactors, the EPA SASS train, and the EPA FAS train. Two cyclone systems were designed and evaluated. Tests on an Andersen Size Selective Inlet, a 15-micrometer precollector for high-volume samplers, showed its performance to be within the proposed limits for IP samplers. A stack sampling system was designed in which the aerosol is diluted in flow patterns and with mixing times simulating those in stack plumes.

  9. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted
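
    The "likelihood-free" idea reduces, in its simplest (rejection) form, to the loop below; astroABC's SMC sampler adds iteratively shrinking tolerances, weighted particles, and perturbation kernels on top of this. The forward model, prior, and tolerance here are toys.

        import numpy as np

        def forward_model(theta, rng):
            # stand-in simulation: 50 noisy draws around the parameter
            return theta + rng.normal(0.0, 0.1, size=50)

        rng = np.random.default_rng(5)
        observed = forward_model(0.7, rng)

        accepted = []
        while len(accepted) < 500:
            theta = rng.uniform(0.0, 2.0)                               # sample the prior
            distance = abs(forward_model(theta, rng).mean() - observed.mean())
            if distance < 0.02:                                         # tolerance level
                accepted.append(theta)
        print(np.mean(accepted), np.std(accepted))                      # approximate posterior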

  10. Modeling association among demographic parameters in analysis of open population capture-recapture data.

    PubMed

    Link, William A; Barker, Richard J

    2005-03-01

    We present a hierarchical extension of the Cormack-Jolly-Seber (CJS) model for open population capture-recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis-Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.

  11. Modeling association among demographic parameters in analysis of open population capture-recapture data

    USGS Publications Warehouse

    Link, William A.; Barker, Richard J.

    2005-01-01

    We present a hierarchical extension of the Cormack–Jolly–Seber (CJS) model for open population capture–recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis–Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.

  12. A Systematic Review and Meta-Analysis of the Effects of Transcranial Direct Current Stimulation (tDCS) Over the Dorsolateral Prefrontal Cortex in Healthy and Neuropsychiatric Samples: Influence of Stimulation Parameters.

    PubMed

    Dedoncker, Josefien; Brunoni, Andre R; Baeken, Chris; Vanderhasselt, Marie-Anne

    2016-01-01

    Research into the effects of transcranial direct current stimulation of the dorsolateral prefrontal cortex on cognitive functioning is increasing rapidly. However, methodological heterogeneity in prefrontal tDCS research is also increasing, particularly in the technical stimulation parameters that might influence tDCS effects. The aim was to systematically examine the influence of technical stimulation parameters on DLPFC-tDCS effects. We performed a systematic review and meta-analysis of tDCS studies targeting the DLPFC published from the first data available to February 2016. Only single-session, sham-controlled, within-subject studies reporting the effects of tDCS on cognition in healthy controls and neuropsychiatric patients were included. Evaluation of 61 studies showed that after single-session a-tDCS, but not c-tDCS, participants responded faster and more accurately on cognitive tasks. Sub-analyses specified that following a-tDCS, healthy subjects responded faster, while neuropsychiatric patients responded more accurately. Importantly, different stimulation parameters affected a-tDCS effects, but not c-tDCS effects, on accuracy in healthy samples versus neuropsychiatric patients: increased current density and charge resulted in improved accuracy in healthy samples, most prominently in females; for neuropsychiatric patients, task performance during a-tDCS resulted in stronger increases in accuracy rates than task performance following a-tDCS. Healthy participants respond faster, but not more accurately, on cognitive tasks after a-tDCS. However, increasing the current density and/or charge might enhance response accuracy, particularly in females. In contrast, online task performance leads to greater increases in response accuracy than offline task performance in neuropsychiatric patients. Possible implications and practical recommendations are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Reaction time as an indicator of insufficient effort: Development and validation of an embedded performance validity parameter.

    PubMed

    Stevens, Andreas; Bahlo, Simone; Licha, Christina; Liske, Benjamin; Vossler-Thies, Elisabeth

    2016-11-30

    Subnormal performance in attention tasks may result from various sources, including lack of effort. In this report, the derivation and validation of a performance validity parameter for reaction time is described, using a set of malingering indices ("Slick criteria") and 3 independent samples of participants (total n = 893). The Slick criteria yield an estimate of the probability of malingering based on the presence of an external incentive and on evidence from neuropsychological testing, self-report, and clinical data. In study (1) a validity parameter is derived using reaction time data from a sample composed of inpatients with recent severe brain lesions who were not involved in litigation, and of litigants with and without brain lesions. In study (2) the validity parameter is tested in an independent sample of litigants. In study (3) the parameter is applied to an independent sample comprising cooperative and non-cooperative testees. Logistic regression analysis led to a derived validity parameter based on median reaction time and standard deviation. It performed satisfactorily in studies (2) and (3) (study 2: sensitivity = 0.94, specificity = 1.00; study 3: sensitivity = 0.79, specificity = 0.87). The findings suggest that median reaction time and standard deviation may be used as indicators of negative response bias. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
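
    The derivation step amounts to a logistic regression on median reaction time and its standard deviation. A sketch on synthetic groups follows; all numbers are invented and are not the study's data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(6)
        cooperative = np.column_stack([rng.normal(450, 60, 200),    # median RT (ms)
                                       rng.normal(80, 20, 200)])    # RT standard deviation
        noncooperative = np.column_stack([rng.normal(700, 150, 80),
                                          rng.normal(200, 60, 80)])
        X = np.vstack([cooperative, noncooperative])
        y = np.r_[np.zeros(200), np.ones(80)]                       # 1 = insufficient effort

        clf = LogisticRegression(max_iter=1000).fit(X, y)
        pred = clf.predict(X)
        sens = pred[y == 1].mean()                                  # sensitivity
        spec = 1.0 - pred[y == 0].mean()                            # specificity
        print(sens, spec)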

  14. SPOTting Model Parameters Using a Ready-Made Python Package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2017-04-01

    The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from the workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five case studies: parameterizing the Rosenbrock, Griewank, and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.

  15. SPOTting Model Parameters Using a Ready-Made Python Package.

    PubMed

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from the workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five case studies: parameterizing the Rosenbrock, Griewank, and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.

  16. SPOTting Model Parameters Using a Ready-Made Python Package

    PubMed Central

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from the workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five case studies: parameterizing the Rosenbrock, Griewank, and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function. PMID:26680783
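
    A sketch of SPOTPY's documented setup-class pattern on a toy quadratic model; treat the exact signatures as approximate and consult the package documentation before use.

        import spotpy

        class SpotSetup:
            def __init__(self):
                self.params = [spotpy.parameter.Uniform('x', -5, 5),
                               spotpy.parameter.Uniform('y', -5, 5)]

            def parameters(self):
                return spotpy.parameter.generate(self.params)

            def simulation(self, vector):
                x, y = vector
                return [x ** 2 + y ** 2]          # toy model output

            def evaluation(self):
                return [0.0]                      # "observed" optimum

            def objectivefunction(self, simulation, evaluation):
                return -spotpy.objectivefunctions.rmse(evaluation, simulation)

        sampler = spotpy.algorithms.mc(SpotSetup(), dbname='demo', dbformat='ram')
        sampler.sample(1000)                      # plain Monte Carlo over the priors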

  17. Effects of Chitin and Sepia Ink Hybrid Hemostatic Sponge on the Blood Parameters of Mice

    PubMed Central

    Zhang, Wei; Sun, Yu-Lin; Chen, Dao-Hai

    2014-01-01

    Chitin and sepia ink hybrid hemostatic sponge (CTSH sponge), a new biomedical material, has been studied extensively for its beneficial biological properties of hemostasis and stimulation of healing. However, studies examining the safety of CTSH sponge in the blood system are lacking. This experiment examined whether CTSH sponge has a negative effect on the blood system of mice treated with a dose of CTSH sponge (135 mg/kg). CTSH sponge was implanted subcutaneously in the abdomen, and blood was sampled from the abdominal aorta via laparotomy. Several kinds of blood parameters were measured at different time points: coagulation parameters, including thrombin time (TT), prothrombin time (PT), activated partial thromboplastin time (APTT), fibrinogen (FIB) and platelet factor 4 (PF4); an anticoagulation parameter, antithrombin III (AT-III); fibrinolytic parameters, including plasminogen (PLG), fibrin degradation product (FDP) and D-dimer; and hemorheology parameters, including blood viscosity (BV) and plasma viscosity (PV). Results showed that CTSH sponge has no significant effect on the blood parameters of mice. The data suggest that CTSH sponge can be applied in the field of biomedical materials and has the potential to be developed into clinical hemostatic agents. PMID:24727395

  18. Hardrock Elastic Physical Properties: Birch's Seismic Parameter Revisited

    NASA Astrophysics Data System (ADS)

    Wu, M.; Milkereit, B.

    2014-12-01

    Identifying rock composition and properties is imperative in a variety of fields, including geotechnical engineering, mining, and petroleum exploration, in order to make accurate petrophysical calculations. Density, in particular, is an important parameter that allows us to differentiate between lithologies and to estimate or calculate other petrophysical properties. It is well established that compressional and shear wave velocities of common crystalline rocks increase with increasing density (the Birch and Nafe-Drake relationships). Conventional empirical relations do not take S-wave velocity into account. Physical properties of Fe-oxides and massive sulfides, however, differ significantly from the empirical velocity-density relationships. Currently, acquiring in-situ density data is challenging and problematic, and therefore developing an approximation for density based on seismic wave velocity and elastic moduli would be beneficial. With the goal of finding other possible or better relationships between density and the elastic moduli, a database of density, P-wave velocity, S-wave velocity, bulk modulus, shear modulus, Young's modulus, and Poisson's ratio was compiled from a multitude of lab samples. The database comprises isotropic, non-porous metamorphic rock. Multi-parameter cross plots of the various elastic parameters have been analyzed in order to find a suitable parameter combination that reduces high-density outliers. As expected, the ratio of P-wave to S-wave velocity shows no correlation with density. However, Birch's seismic parameter, along with the bulk modulus, shows promise in providing a link between observed compressional and shear wave velocities and rock densities, including massive sulfides and Fe-oxides.
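
    For reference, Birch's seismic parameter is Φ = Vp² − (4/3)Vs² = K/ρ, the squared bulk sound speed, which is why it pairs naturally with density. A one-line helper, with velocities in km/s and illustrative input values:

        def birch_seismic_parameter(vp, vs):
            # Phi = Vp^2 - (4/3) Vs^2 = K / rho (squared bulk sound speed, km^2/s^2)
            return vp ** 2 - (4.0 / 3.0) * vs ** 2

        print(birch_seismic_parameter(vp=6.0, vs=3.5))   # typical crystalline-rock values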

  19. Calibrating binary lumped parameter models

    NASA Astrophysics Data System (ADS)

    Morgenstern, Uwe; Stewart, Mike

    2017-04-01

    Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often match the measured time-varying age tracer concentrations well, and therefore are a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages obtained using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters with different ages for such wells than is represented by the simple lumped parameter models. Binary (or compound) mixing models are able to represent a more complex mixing, combining water of two different age distributions. The problem with these models is that they usually have five parameters, which makes them data-hungry and therefore difficult to constrain fully. Two or more age tracers with different input functions, with multiple measurements over time, can provide the information required to constrain the parameters of the binary mixing model. We obtained excellent results using tritium time series encompassing

  20. Laboratory Studies on Surface Sampling of Bacillus anthracis Contamination: Summary, Gaps, and Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Amidan, Brett G.; Hu, Rebecca

    2011-11-28

    This report summarizes previous laboratory studies to characterize the performance of methods for collecting, storing/transporting, processing, and analyzing samples from surfaces contaminated by Bacillus anthracis or related surrogates. The focus is on plate culture and count estimates of surface contamination for swab, wipe, and vacuum samples of porous and nonporous surfaces. Summaries of the previous studies and their results were assessed to identify gaps in the information needed as inputs to calculate key parameters critical to risk management in biothreat incidents. One key parameter is the number of samples needed to make characterization or clearance decisions with specified statistical confidence. Other key parameters include the ability to calculate, following contamination incidents, (1) estimates of Bacillus anthracis contamination, as well as the bias and uncertainties in the estimates, and (2) confidence in characterization and clearance decisions for contaminated or decontaminated buildings. Gaps in knowledge and understanding identified during the summary of the studies are discussed, and recommendations are given for future studies.

  1. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    PubMed

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

    Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and of the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. For summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are explored, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
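
    A minimal sketch of the three pooling strategies compared above, applied to a synthetic matrix of frame-wise feature activations (shapes and data are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical activations: 1024 sparse-coding features over 500 analysis frames
    activations = rng.random((500, 1024))

    max_pool = activations.max(axis=0)    # max-pooling over frames
    avg_pool = activations.mean(axis=0)   # average-pooling over frames
    std_pool = activations.std(axis=0)    # standard-deviation pooling over frames

    # Each pooled vector summarizes the clip with one value per dictionary atom
    print(max_pool.shape, avg_pool.shape, std_pool.shape)
    ```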

  2. Parameter Estimation with Small Sample Size: A Higher-Order IRT Model Approach

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Hong, Yuan

    2010-01-01

    Sample size ranks as one of the most important factors that affect the item calibration task. However, due to practical concerns (e.g., item exposure) items are typically calibrated with much smaller samples than what is desired. To address the need for a more flexible framework that can be used in small sample item calibration, this article…

  3. On the effect of model parameters on forecast objects

    NASA Astrophysics Data System (ADS)

    Marzban, Caren; Jones, Corinne; Li, Ning; Sandgathe, Scott

    2018-04-01

    Many physics-based numerical models produce a gridded, spatial field of forecasts, e.g., a temperature map. The field for some quantities generally consists of spatially coherent and disconnected objects. Such objects arise in many problems, including precipitation forecasts in atmospheric models, eddy currents in ocean models, and models of forest fires. Certain features of these objects (e.g., location, size, intensity, and shape) are generally of interest. Here, a methodology is developed for assessing the impact of model parameters on the features of forecast objects. The main ingredients of the methodology include the use of (1) Latin hypercube sampling for varying the values of the model parameters, (2) statistical clustering algorithms for identifying objects, (3) multivariate multiple regression for assessing the impact of multiple model parameters on the distribution (across the forecast domain) of object features, and (4) methods for reducing the number of hypothesis tests and controlling the resulting errors. The final output of the methodology is a series of box plots and confidence intervals that visually display the sensitivities. The methodology is demonstrated on precipitation forecasts from a mesoscale numerical weather prediction model.
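
    Ingredient (1), Latin hypercube sampling of the model parameters, might look like the following sketch using scipy's quasi-Monte Carlo module (the parameter names and ranges are placeholders, not the model's):

    ```python
    from scipy.stats import qmc

    # Three hypothetical model parameters with their ranges
    lower = [0.1, 1e-4, 0.5]    # e.g. autoconversion rate, diffusivity, entrainment
    upper = [2.0, 1e-2, 1.5]

    sampler = qmc.LatinHypercube(d=3, seed=42)
    unit_samples = sampler.random(n=100)          # 100 points in [0, 1)^3
    param_sets = qmc.scale(unit_samples, lower, upper)

    # Each row is one parameter configuration for a model run
    print(param_sets[:3])
    ```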

  4. On the Accuracy of Atmospheric Parameter Determination in BAFGK Stars

    NASA Astrophysics Data System (ADS)

    Ryabchikova, T.; Piskunov, N.; Shulyak, D.

    2015-04-01

    During the past few years, many papers determining the atmospheric parameters of FGK stars have appeared in the literature, with the accuracy of effective temperatures given as 20-40 K. For main sequence stars within the 5 000-13 000 K temperature range, we have performed a comparative analysis of the parameters derived from spectra using the SME (Spectroscopy Made Easy) package and those found in the literature. Our sample includes the standard stars Sirius, Procyon, δ Eri, and the Sun. Combining different spectral regions in the fitting procedure, we investigated the effect that different atomic species have on the derived atmospheric parameters. The temperature difference may exceed 100 K depending on the spectral regions used in the SME procedure. It is shown that the atmospheric parameters derived with the SME procedure agree better with the results derived by other methods and tools across a large part of the main sequence when wings of hydrogen lines are included in the fitting. For three stars (π Cet, 21 Peg, and Procyon), the atmospheric parameters were also derived by fitting a calculated energy distribution to the observed one. We found a substantial difference in the parameters inferred from different sets and combinations of spectrophotometric observations. An intercomparison of our results and literature data shows that the average accuracy of effective temperature determination is 70-85 K for cool stars and 170-200 K for the early B-stars.

  5. Hybrid artificial bee colony algorithm for parameter optimization of five-parameter bidirectional reflectance distribution function model.

    PubMed

    Wang, Qianqian; Zhao, Jing; Gong, Yong; Hao, Qun; Peng, Zhong

    2017-11-20

    A hybrid artificial bee colony (ABC) algorithm inspired by the best-so-far solution and bacterial chemotaxis was introduced to optimize the parameters of the five-parameter bidirectional reflectance distribution function (BRDF) model. To verify the performance of the hybrid ABC algorithm, we measured the BRDF of three kinds of samples and estimated the undetermined parameters of the five-parameter BRDF model using the hybrid ABC algorithm and the genetic algorithm, respectively. The experimental results demonstrate that the hybrid ABC algorithm outperforms the genetic algorithm in convergence speed, accuracy, and time efficiency under the same conditions.
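
    The fitting task itself, minimizing the residual between measured BRDF values and a five-parameter model, can be sketched with a generic global optimizer as a stand-in for the hybrid ABC algorithm (scipy's differential evolution; the model form and data below are placeholders, not the authors'):

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def brdf_model(theta, p):
        """Placeholder five-parameter reflectance model (not the authors' form):
        a diffuse term plus a Gaussian specular lobe plus a bias."""
        kd, ks, sigma, mu, bias = p
        return kd * np.cos(theta) + ks * np.exp(-((theta - mu) / sigma) ** 2) + bias

    # Synthetic 'measurements' generated from known parameters plus noise
    rng = np.random.default_rng(1)
    theta = np.linspace(0.0, np.pi / 2, 60)
    true_p = [0.3, 0.8, 0.15, 0.4, 0.02]
    measured = brdf_model(theta, true_p) + 0.01 * rng.standard_normal(theta.size)

    def residual(p):
        # Sum of squared differences between model and 'measured' BRDF
        return np.sum((brdf_model(theta, p) - measured) ** 2)

    bounds = [(0, 1), (0, 2), (0.01, 1), (0, np.pi / 2), (0, 0.1)]
    result = differential_evolution(residual, bounds, seed=1)
    print(result.x)   # recovered parameter estimates
    ```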

  6. Sensitivity of land surface modeling to parameters: An uncertainty quantification method applied to the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and a literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (the slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the four PFT-dependent parameters above. Further analysis conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes
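
    The sampling step described above, drawing a Sobol sequence over uniform prior ranges, can be sketched as follows (the parameter names and ranges are illustrative, not the CLM4.5 values; note that 2^10 = 1024 matches the number of simulations analyzed):

    ```python
    from scipy.stats import qmc

    # Four illustrative PFT-dependent parameters with uniform prior ranges
    lower = [4.0, 0.01, 20.0, 0.05]   # e.g. conductance slope, SLA, leaf C:N, frac N in RuBisCO
    upper = [9.0, 0.04, 60.0, 0.25]

    sobol = qmc.Sobol(d=4, scramble=True, seed=7)
    samples = qmc.scale(sobol.random_base2(m=10), lower, upper)   # 2**10 = 1024 draws
    print(samples.shape)   # (1024, 4): one row per ensemble member
    ```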

  7. The effect of uphill and downhill walking on gait parameters: A self-paced treadmill study.

    PubMed

    Kimel-Naor, Shani; Gottlieb, Amihai; Plotnik, Meir

    2017-07-26

    It has been shown that gait parameters vary systematically with the slope of the surface when walking uphill (UH) or downhill (DH) (Andriacchi et al., 1977; Crowe et al., 1996; Kawamura et al., 1991; Kirtley et al., 1985; McIntosh et al., 2006; Sun et al., 1996). However, gait trials performed on inclined surfaces have been subject to certain technical limitations including using fixed speed treadmills (TMs) or, alternatively, sampling only a few gait cycles on inclined ramps. Further, prior work has not analyzed upper body kinematics. This study aims to investigate effects of slope on gait parameters using a self-paced TM (SPTM) which facilitates more natural walking, including measuring upper body kinematics and gait coordination parameters. Gait of 11 young healthy participants was sampled during walking in steady state speed. Measurements were made at slopes of +10°, 0° and -10°. Force plates and a motion capture system were used to reconstruct twenty spatiotemporal gait parameters. For validation, previously described parameters were compared with the literature, and novel parameters measuring upper body kinematics and bilateral gait coordination were also analyzed. Results showed that most lower and upper body gait parameters were affected by walking slope angle. Specifically, UH walking had a higher impact on gait kinematics than DH walking. However, gait coordination parameters were not affected by walking slope, suggesting that gait asymmetry, left-right coordination and gait variability are robust characteristics of walking. The findings of the study are discussed in reference to a potential combined effect of slope and gait speed. Follow-up studies are needed to explore the relative effects of each of these factors. Copyright © 2017. Published by Elsevier Ltd.

  8. Polymerase chain reaction system using magnetic beads for analyzing a sample that includes nucleic acid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nasarabadi, Shanavaz

    2011-01-11

    A polymerase chain reaction system for analyzing a sample containing nucleic acid includes providing magnetic beads and providing a flow channel having a polymerase chain reaction chamber, a pre-polymerase chain reaction magnet position adjacent to the chamber, and a post-polymerase chain reaction magnet position adjacent to the chamber. The nucleic acid is bound to the magnetic beads. The magnetic beads with the nucleic acid flow to the pre-polymerase chain reaction magnet position in the flow channel. The magnetic beads and the nucleic acid are washed with ethanol. The nucleic acid in the polymerase chain reaction chamber is amplified. The magnetic beads and the nucleic acid are separated into a waste stream containing the magnetic beads and a post-polymerase chain reaction mix containing the nucleic acid. The reaction mix containing the nucleic acid flows to an analysis unit in the channel for analysis.

  9. Fundamental Parameters Line Profile Fitting in Laboratory Diffractometers

    PubMed Central

    Cheary, R. W.; Coelho, A. A.; Cline, J. P.

    2004-01-01

    The fundamental parameters approach to line profile fitting uses physically based models to generate the line profile shapes. Fundamental parameters profile fitting (FPPF) has been used to synthesize and fit data from both parallel beam and divergent beam diffractometers. The refined parameters are determined by the diffractometer configuration. In a divergent beam diffractometer these include the angular aperture of the divergence slit, the width and axial length of the receiving slit, the angular apertures of the axial Soller slits, the length and projected width of the x-ray source, and the absorption coefficient and axial length of the sample. In a parallel beam system the principal parameters are the angular aperture of the equatorial analyser/Soller slits and the angular apertures of the axial Soller slits. The presence of a monochromator in the beam path is normally accommodated by modifying the wavelength spectrum and/or by changing one or more of the axial divergence parameters. Flat analyzer crystals have been incorporated into FPPF as a Lorentzian-shaped angular acceptance function. One of the intrinsic benefits of the fundamental parameters approach is its adaptability to any laboratory diffractometer. Good fits can normally be obtained over the whole 2θ range without refinement, using the known properties of the diffractometer such as the slit sizes, diffractometer radius, and emission profile. PMID:27366594

  10. Cosmological Parameters and Hyper-Parameters: The Hubble Constant from Boomerang and Maxima

    NASA Astrophysics Data System (ADS)

    Lahav, Ofer

    Recently several studies have jointly analysed data from different cosmological probes with the motivation of estimating cosmological parameters. Here we generalise this procedure to allow freedom in the relative weights of various probes. This is done by including in the joint likelihood function a set of `Hyper-Parameters', which are dealt with using Bayesian considerations. The resulting algorithm, which assumes uniform priors on the log of the Hyper-Parameters, is very simple to implement. We illustrate the method by estimating the Hubble constant H0 from different sets of recent CMB experiments (including Saskatoon, Python V, MSAM1, TOCO, Boomerang and Maxima). The approach can be generalised for a combination of cosmic probes, and for other priors on the Hyper-Parameters. Reference: Lahav, Bridle, Hobson, Lasenby & Sodre, 2000, MNRAS, in press (astro-ph/9912105)
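
    A schematic of the weighting idea, assuming the simplest form in which each probe's chi-square enters the joint likelihood with its own hyper-parameter weight (the combination rule shown here is a generic assumption; the fully Bayesian treatment of the weights is the subject of the paper):

    ```python
    import numpy as np

    def joint_log_like(chi2_per_probe, alphas):
        """-2 ln L = sum_j alpha_j * chi2_j: hyper-parameters alpha_j reweight probes."""
        return -0.5 * np.dot(alphas, chi2_per_probe)

    # Hypothetical chi-square values for one trial H0 from two CMB data sets
    chi2 = np.array([12.3, 20.1])
    print(joint_log_like(chi2, alphas=np.ones(2)))             # standard joint analysis
    print(joint_log_like(chi2, alphas=np.array([1.0, 0.5])))   # down-weight probe 2
    ```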

  11. What’s Driving Uncertainty? The Model or the Model Parameters?

    DOE PAGES

    Anderson-Cook, Christine Michaela

    2017-03-01

    Here, one of the substantial improvements to the practice of data analysis in recent decades is the change from reporting just a point estimate for a parameter or characteristic to also including a summary of uncertainty for that estimate. Understanding the precision of the estimate for the quantity of interest provides better understanding of what to expect and how well we are able to predict future behavior from the process. For example, when we report a sample average as an estimate of the population mean, it is good practice to also provide a confidence interval (or credible interval, if you are doing a Bayesian analysis) to accompany that summary. This helps to calibrate what ranges of values are reasonable given the variability observed in the sample and the amount of data that were included in producing the summary.
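
    A minimal example of the practice described, computing a 95% confidence interval to accompany a sample average (synthetic data):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    sample = rng.normal(loc=10.0, scale=2.0, size=25)

    mean = sample.mean()
    sem = sample.std(ddof=1) / np.sqrt(sample.size)     # standard error of the mean
    t_crit = stats.t.ppf(0.975, df=sample.size - 1)     # two-sided 95% interval

    print(f"estimate: {mean:.2f}, "
          f"95% CI: ({mean - t_crit * sem:.2f}, {mean + t_crit * sem:.2f})")
    ```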

  12. Vitrification of neat semen alters sperm parameters and DNA integrity.

    PubMed

    Khalili, Mohammad Ali; Adib, Maryam; Halvaei, Iman; Nabi, Ali

    2014-05-06

    Our aim was to evaluate the effect of neat semen vitrification on human sperm vital parameters and DNA integrity in men with normal and abnormal sperm parameters. Semen samples were 17 normozoospermic samples and 17 specimens with abnormal sperm parameters. Semen analysis was performed according to World Health Organization (WHO) criteria. Then, the smear was provided from each sample and fixed for terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) staining. Vitrification of neat semen was done by plunging cryoloops directly into liquid nitrogen and preserved for 7 days. The samples were warmed and re-evaluated for sperm parameters as well as DNA integrity. Besides, the correlation between sperm parameters and DNA fragmentation was assessed pre- and post vitrification. Cryopreserved spermatozoa showed significant decrease in sperm motility, viability and normal morphology after thawing in both normal and abnormal semen. Also, the rate of sperm DNA fragmentation was significantly higher after vitrification compared to fresh samples in normal (24.76 ± 5.03 and 16.41 ± 4.53, P = .002) and abnormal (34.29 ± 10.02 and 23.5 ± 8.31, P < .0001), respectively. There was negative correlation between sperm motility and sperm DNA integrity in both groups after vitrification. Vitrification of neat ejaculates has negative impact on sperm parameters as well as DNA integrity, particularly among abnormal semen subjects. It is, therefore, recommend to process semen samples and vitrify the sperm pellets.

  13. Adapted random sampling patterns for accelerated MRI.

    PubMed

    Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf

    2011-02-01

    Variable density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions, like polynomials of different order, to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters, which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets, and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when optimization of the model parameter is carried out, and superior to that obtained with non-optimized model parameters. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
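
    A rough sketch of the core idea, turning the power spectrum of a reference image into a probability density from which sample locations are drawn (toy image; the 25% sampling budget is an arbitrary choice, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    # Toy 'reference' image: smooth blob, so power concentrates at low frequencies
    x = np.linspace(-1, 1, 128)
    ref = np.exp(-(x[None, :] ** 2 + x[:, None] ** 2) * 8)

    power = np.abs(np.fft.fftshift(np.fft.fft2(ref))) ** 2
    prob = power / power.sum()                 # density over k-space locations

    n_samples = int(0.25 * prob.size)          # 4x undersampling budget
    flat_idx = rng.choice(prob.size, size=n_samples, replace=False, p=prob.ravel())
    mask = np.zeros(prob.shape, dtype=bool)
    mask.ravel()[flat_idx] = True
    print(mask.sum(), "of", mask.size, "k-space points selected")
    ```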

  14. Selection of sampling rate for digital control of aircrafts

    NASA Technical Reports Server (NTRS)

    Katz, P.; Powell, J. D.

    1974-01-01

    The considerations in selecting the sample rates for digital control of aircraft are identified and evaluated using the optimal discrete method. A high-performance aircraft model which includes a bending mode and wind gusts was studied. The following factors which influence the selection of the sampling rates were identified: (1) the time and roughness response to control inputs; (2) the response to external disturbances; and (3) the sensitivity to variations of parameters. It was found that the time response to a control input and the response to external disturbances limit the selection of the sampling rate. The optimal discrete regulator, the steady state Kalman filter, and the mean response to external disturbances are calculated.

  15. Planning Robot-Control Parameters With Qualitative Reasoning

    NASA Technical Reports Server (NTRS)

    Peters, Stephen F.

    1993-01-01

    Qualitative-reasoning planning algorithm helps to determine quantitative parameters controlling motion of robot. Algorithm regarded as performing search in multidimensional space of control parameters from starting point to goal region in which desired result of robotic manipulation achieved. Makes use of directed graph representing qualitative physical equations describing task, and interacts, at each sampling period, with history of quantitative control parameters and sensory data, to narrow search for reliable values of quantitative control parameters.

  16. Conditioning and Robustness of RNA Boltzmann Sampling under Thermodynamic Parameter Perturbations.

    PubMed

    Rogers, Emily; Murrugarra, David; Heitsch, Christine

    2017-07-25

    Understanding how RNA secondary structure prediction methods depend on the underlying nearest-neighbor thermodynamic model remains a fundamental challenge in the field. Minimum free energy (MFE) predictions are known to be "ill conditioned" in that small changes to the thermodynamic model can result in significantly different optimal structures. Hence, the best practice is now to sample from the Boltzmann distribution, which generates a set of suboptimal structures. Although the structural signal of this Boltzmann sample is known to be robust to stochastic noise, the conditioning and robustness under thermodynamic perturbations have yet to be addressed. We present here a mathematically rigorous model for conditioning inspired by numerical analysis, and also a biologically inspired definition for robustness under thermodynamic perturbation. We demonstrate the strong correlation between conditioning and robustness and use its tight relationship to define quantitative thresholds for well versus ill conditioning. These resulting thresholds demonstrate that the majority of the sequences are at least sample robust, which verifies the assumption of sampling's improved conditioning over the MFE prediction. Furthermore, because we find no correlation between conditioning and MFE accuracy, the presence of both well- and ill-conditioned sequences indicates the continued need for both thermodynamic model refinements and alternate RNA structure prediction methods beyond the physics-based ones. Copyright © 2017. Published by Elsevier Inc.

  17. Variability of morphometric parameters of human trabecular tissue from coxo-arthritis and osteoporotic samples.

    PubMed

    Marinozzi, Franco; Marinozzi, Andrea; Bini, Fabiano; Zuppante, Francesca; Pecci, Raffaella; Bedini, Rossella

    2012-01-01

    Morphometric and architectural bone parameters change in diseases such as osteoarthritis and osteoporosis. The mechanical strength of bone is primarily influenced by bone quantity and quality. Bone quality is defined by parameters such as trabecular thickness, trabecular separation, trabecular density and degree of anisotropy, which describe the micro-architectural structure of bone. Recently, many studies have validated microtomography as a valuable investigative technique for assessing bone morphometry, thanks to the non-destructive, non-invasive and reliable nature of micro-CT in comparison to traditional techniques such as histology. The aim of this study is the analysis by micro-computed tomography of six specimens, extracted from patients affected by osteoarthritis and osteoporosis, in order to observe the three-dimensional structure and calculate several morphometric parameters.

  18. STRUCTURAL PARAMETERS FOR 10 HALO GLOBULAR CLUSTERS IN M33

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jun, E-mail: majun@nao.cas.cn

    2015-05-15

    In this paper, we present the properties of 10 halo globular clusters (GCs) with luminosities L ≃ 5–7 × 10^5 L_⊙ in the Local Group galaxy M33, using images from the Hubble Space Telescope WFPC2 in the F555W and F814W bands. We obtained the ellipticities, position angles, and surface brightness profiles for each GC. In general, the ellipticities of the M33 sample clusters are similar to those of the M31 clusters. The structural and dynamical parameters are derived by fitting the profiles to three different models combined with mass-to-light ratios (M/L values) from population-synthesis models. The structural parameters include core radii, concentration, half-light radii, and central surface brightness. The dynamical parameters include the integrated cluster mass, integrated binding energy, central surface mass density, and predicted line of sight velocity dispersion at the cluster center. The velocity dispersions of the four clusters predicted here agree well with the observed dispersions by Larsen et al. The results showed that the majority of the sample halo GCs are better fitted by both the King model and the Wilson model than the Sérsic model. In general, the properties of the clusters in M33, M31, and the Milky Way fall in the same regions of parameter spaces. The tight correlations of cluster properties indicate a “fundamental plane” for clusters, which reflects some universal physical conditions and processes operating at the epoch of cluster formation.

  19. Estimating parameters of hidden Markov models based on marked individuals: use of robust design data

    USGS Publications Warehouse

    Kendall, William L.; White, Gary C.; Hines, James E.; Langtimm, Catherine A.; Yoshizaki, Jun

    2012-01-01

    Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last twenty years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We also provide user-friendly software to implement these models. This general framework could also be used by practitioners to consider constrained models of particular interest, or model the relationship between within-primary period parameters (e.g., state structure) and between-primary period parameters (e.g., state transition probabilities).

  20. The use of Landsat for monitoring water parameters in the coastal zone

    NASA Technical Reports Server (NTRS)

    Bowker, D. E.; Witte, W. G.

    1977-01-01

    Landsats 1 and 2 have been successful in detecting and quantifying suspended sediment and several other important parameters in the coastal zone, including chlorophyll, particles, alpha (light transmission), tidal conditions, acid and sewage dumps, and in some instances oil spills. When chlorophyll a is present in detectable quantities, however, it is shown to interfere with the measurement of sediment. The Landsat banding problem impairs the instrument resolution and places a requirement on the sampling program to collect surface data from a sufficiently large area. A sampling method which satisfies this condition is demonstrated.

  1. Predictive model for inflammation grades of chronic hepatitis B: Large-scale analysis of clinical parameters and gene expressions.

    PubMed

    Zhou, Weichen; Ma, Yanyun; Zhang, Jun; Hu, Jingyi; Zhang, Menghan; Wang, Yi; Li, Yi; Wu, Lijun; Pan, Yida; Zhang, Yitong; Zhang, Xiaonan; Zhang, Xinxin; Zhang, Zhanqing; Zhang, Jiming; Li, Hai; Lu, Lungen; Jin, Li; Wang, Jiucun; Yuan, Zhenghong; Liu, Jie

    2017-11-01

    Liver biopsy is the gold standard for assessing pathological features (e.g. inflammation grades) in hepatitis B virus-infected patients, although it is invasive and traumatic; meanwhile, several gene profiles of chronic hepatitis B (CHB) have been separately described in relatively small hepatitis B virus (HBV)-infected samples. We aimed to analyse correlations among inflammation grades, gene expressions and clinical parameters (serum alanine aminotransferase, aspartate aminotransferase and HBV-DNA) in large-scale CHB samples and to predict inflammation grades by using clinical parameters and/or gene expressions. We analysed gene expressions with three clinical parameters in 122 CHB samples using an improved regression model. Principal component analysis and machine-learning methods, including Random Forest, K-nearest neighbour and support vector machine, were used for analysis and for building diagnostic models. Six normal samples were used to validate the predictive model. Significant genes related to clinical parameters were found to be enriched in the immune system, interferon-stimulated genes, regulation of cytokine production, and anti-apoptosis, among others. A panel of these genes with clinical parameters can effectively predict binary classifications of inflammation grade (area under the ROC curve [AUC]: 0.88, 95% confidence interval [CI]: 0.77-0.93), as validated by the normal samples. A panel with only clinical parameters was also valuable (AUC: 0.78, 95% CI: 0.65-0.86), indicating that a liquid biopsy method for detecting the pathology of CHB is possible. This is the first study to systematically elucidate the relationships among gene expressions, clinical parameters and pathological inflammation grades in CHB, and to build models predicting inflammation grades by gene expressions and/or clinical parameters as well. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
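
    A schematic version of the classification setup described, predicting a binary inflammation grade from a few clinical parameters with a Random Forest and scoring by AUC (fully synthetic data; the feature construction is a placeholder, not the study's):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(8)
    n = 122                                    # matches the study's sample count
    # Placeholder clinical features: ALT, AST, log10 HBV-DNA
    X = np.column_stack([
        rng.lognormal(3.5, 0.6, n),            # ALT
        rng.lognormal(3.3, 0.5, n),            # AST
        rng.normal(5.0, 1.5, n),               # log10 HBV-DNA
    ])
    y = (X[:, 0] + rng.normal(0, 15, n) > 35).astype(int)   # synthetic grade label

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    ```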

  2. Behavior of optical properties of coagulated blood sample at 633 nm wavelength

    NASA Astrophysics Data System (ADS)

    Morales Cruzado, Beatriz; Vázquez y Montiel, Sergio; Delgado Atencio, José Alberto

    2011-03-01

    Determination of tissue optical parameters is fundamental for the application of light in either diagnostic or therapeutic procedures. However, in samples of biological tissue in vitro, the optical properties are modified by cellular death or cellular agglomeration, which cannot be avoided. These phenomena change the propagation of light within the biological sample. The optical properties of human blood were investigated in vitro at 633 nm using an optical setup that includes a double integrating sphere system. We measured the diffuse transmittance and diffuse reflectance of the blood sample and compared these physical properties with those obtained by Monte Carlo Multi-Layered (MCML) simulation. The extraction of the optical parameters (absorption coefficient μa, scattering coefficient μs and anisotropy factor g) from the measurements was carried out using a genetic algorithm, in which the search procedure is based on the evolution of a population through selection of the best individuals, evaluated by a function that compares their diffuse transmittance and diffuse reflectance with the experimental ones. The algorithm converges rapidly to the best individual, extracting the optical parameters of the sample. We compared our results with those obtained using other retrieval procedures. We found that the scattering coefficient and the anisotropy factor change dramatically due to the formation of clusters.

  3. Investigation into the influence of build parameters on failure of 3D printed parts

    NASA Astrophysics Data System (ADS)

    Fornasini, Giacomo

    Additive manufacturing, including fused deposition modeling (FDM), is transforming the built world and engineering education. Deep understanding of parts created through FDM technology has lagged behind its adoption in home, work, and academic environments. Properties of parts created from bulk materials through traditional manufacturing are understood well enough to accurately predict their behavior through analytical models. Unfortunately, additive manufacturing (AM) process parameters create anisotropy on a scale that fundamentally affects the part properties. Understanding AM process parameters (implemented by program algorithms called slicers) is necessary to predict part behavior. Investigating the algorithms controlling print parameters revealed stark differences in how slicers generate part layers. In this work, tensile testing experiments, including a full factorial design, determined that three key factors (width, thickness, and infill density) and their interactions significantly affect the tensile properties of 3D printed test samples.
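
    The full factorial design mentioned above can be generated mechanically; a sketch with three two-level factors (the level values are invented for illustration):

    ```python
    from itertools import product

    # Hypothetical factor levels for the tensile-test samples
    widths = [10.0, 20.0]          # mm
    thicknesses = [2.0, 4.0]       # mm
    infill_densities = [0.2, 0.8]  # fraction

    # Full factorial: every combination of levels, 2 * 2 * 2 = 8 runs
    design = list(product(widths, thicknesses, infill_densities))
    for run, (w, t, d) in enumerate(design, start=1):
        print(f"run {run}: width={w} mm, thickness={t} mm, infill={d:.0%}")
    ```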

  4. Constitutive parameter de-embedding using inhomogeneously-filled rectangular waveguides with longitudinal section modes

    NASA Technical Reports Server (NTRS)

    Park, A.; Dominek, A. K.

    1990-01-01

    Constitutive parameter extraction from S-parameter data was examined using a rectangular waveguide whose cross section is partially filled with a material sample, as opposed to being completely filled. One reason for studying a partially filled geometry is to analyze the effect of air gaps between the sample and the fixture on the extraction of constitutive parameters. Air gaps can occur in high-temperature parameter measurements when the sample was prepared at room temperature. Single-port and two-port measurement approaches to parameter extraction are also discussed.

  5. Parameter extraction of coupling-of-modes equations including coupling between two surface acoustic waves on SiO2/Cu/LiNbO3 structures

    NASA Astrophysics Data System (ADS)

    Huang, Yulin; Bao, Jingfu; Li, Xinyi; Zhang, Benfeng; Omori, Tatsuya; Hashimoto, Ken-ya

    2018-07-01

    This paper describes the extraction of parameters of an extended coupling-of-modes (COM) model including coupling between Rayleigh and shear-horizontal (SH) surface acoustic waves (SAWs) on the SiO2-overlay/Cu-grating/LiNbO3-substrate structure. First, the dispersion characteristics of the two SAWs are calculated by the finite element method (FEM) and fitted with those given by the extended COM. The variation of the COM parameters is then expressed as polynomials in terms of the SiO2 and Cu thicknesses and the rotation angle Θ of LiNbO3. It is then shown how the optimal Θ giving SH SAW suppression changes with the thicknesses. The result agrees well with that obtained directly by FEM. It is also shown that the optimal Θ changes abruptly at a certain Cu thickness, due to decoupling between the two SAW modes.

  6. A portable foot-parameter-extracting system

    NASA Astrophysics Data System (ADS)

    Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan

    2016-03-01

    In order to solve the problem of automatic foot measurement in garment customization, a new automatic foot-parameter-extracting system based on stereo vision, photogrammetry and heterodyne multiple-frequency phase-shift technology is proposed and implemented. The key technologies applied in the system are studied, including calibration of the projector, alignment of point clouds, and foot measurement. Firstly, a new projector calibration algorithm based on a plane model is put forward to obtain the initial calibration parameters, and a feature point detection scheme for calibration board images is developed. Then, an almost perfect match of two point clouds is achieved by performing a first alignment using the Sampled Consensus - Initial Alignment algorithm (SAC-IA) and refining the alignment using the Iterative Closest Point algorithm (ICP). Finally, the approaches used for foot-parameter extraction and the system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixel, and the foot-parameter-extracting experiment shows the feasibility of the extraction algorithm. Compared with the traditional measurement method, the system is more portable, accurate and robust.
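
    The ICP refinement step reduces, at each iteration, to nearest-neighbour matching plus a closed-form rigid-transform estimate (Kabsch/SVD). A compact numpy/scipy sketch of that loop (not the authors' implementation, and without the SAC-IA initialization):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Closed-form least-squares R, t such that dst ~ src @ R.T + t (Kabsch)."""
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cd - R @ cs

    def icp(source, target, iters=30):
        tree = cKDTree(target)
        src = source.copy()
        for _ in range(iters):
            _, idx = tree.query(src)      # nearest-neighbour correspondences
            R, t = best_rigid_transform(src, target[idx])
            src = src @ R.T + t
        return src

    # Toy test: recover a small rotation + translation of a random cloud
    rng = np.random.default_rng(2)
    target = rng.random((500, 3))
    a = 0.2
    Rz = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
    source = target @ Rz.T + np.array([0.05, -0.02, 0.01])
    aligned = icp(source, target)
    print("RMS after ICP:", np.sqrt(((aligned - target) ** 2).sum(1)).mean())
    ```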

  7. Nonlinear Spatial Inversion Without Monte Carlo Sampling

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Nawaz, A.

    2017-12-01

    High-dimensional, nonlinear inverse or inference problems usually have non-unique solutions. The distribution of solutions are described by probability distributions, and these are usually found using Monte Carlo (MC) sampling methods. These take pseudo-random samples of models in parameter space, calculate the probability of each sample given available data and other information, and thus map out high or low probability values of model parameters. However, such methods would converge to the solution only as the number of samples tends to infinity; in practice, MC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. We propose a method for Bayesian inversion of categorical variables such as geological facies or rock types in spatial problems, which requires no sampling at all. The method uses a 2-D Hidden Markov Model over a grid of cells, where observations represent localized data constraining the model in each cell. The data in our example application are seismic properties such as P- and S-wave impedances or rock density; our model parameters are the hidden states and represent the geological rock types in each cell. The observations at each location are assumed to depend on the facies at that location only - an assumption referred to as `localized likelihoods'. However, the facies at a location cannot be determined solely by the observation at that location as it also depends on prior information concerning its correlation with the spatial distribution of facies elsewhere. Such prior information is included in the inversion in the form of a training image which represents a conceptual depiction of the distribution of local geologies that might be expected, but other forms of prior information can be used in the method as desired. The method provides direct (pseudo-analytic) estimates of posterior marginal probability distributions over each variable

  8. Rotation-Activity Correlations in K and M Dwarfs. I. Stellar Parameters and Compilations of v sin I and P/sin I for a Large Sample of Late-K and M Dwarfs

    NASA Astrophysics Data System (ADS)

    Houdebine, E. R.; Mullan, D. J.; Paletou, F.; Gebran, M.

    2016-05-01

    The reliable determination of rotation-activity correlations (RACs) depends on precise measurements of the following stellar parameters: T eff, parallax, radius, metallicity, and rotational speed v sin I. In this paper, our goal is to focus on the determination of these parameters for a sample of K and M dwarfs. In a future paper (Paper II), we will combine our rotational data with activity data in order to construct RACs. Here, we report on a determination of effective temperatures based on the (R-I) C color from the calibrations of Mann et al. and Kenyon & Hartmann for four samples of late-K, dM2, dM3, and dM4 stars. We also determine stellar parameters (T eff, log(g), and [M/H]) using the principal component analysis-based inversion technique for a sample of 105 late-K dwarfs. We compile all effective temperatures from the literature for this sample. We determine empirical radius-[M/H] correlations in our stellar samples. This allows us to propose new effective temperatures, stellar radii, and metallicities for a large sample of 612 late-K and M dwarfs. Our mean radii agree well with those of Boyajian et al. We analyze HARPS and SOPHIE spectra of 105 late-K dwarfs, and we have detected v sin I in 92 stars. In combination with our previous v sin I measurements in M and K dwarfs, we now derive P/sin I measures for a sample of 418 K and M dwarfs. We investigate the distributions of P/sin I, and we show that they are different from one spectral subtype to another at a 99.9% confidence level. Based on observations available at Observatoire de Haute Provence and the European Southern Observatory databases and on Hipparcos parallax measurements.

  10. System and method for motor parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  11. EXOFIT: orbital parameters of extrasolar planets from radial velocities

    NASA Astrophysics Data System (ADS)

    Balan, Sreekumar T.; Lahav, Ofer

    2009-04-01

    Retrieval of the orbital parameters of extrasolar planets poses considerable statistical challenges. Due to sparse sampling, measurement errors, parameter degeneracies and modelling limitations, there are no unique values of basic parameters such as period and eccentricity. Here, we estimate the orbital parameters from radial velocity data in a Bayesian framework by utilizing Markov Chain Monte Carlo (MCMC) simulations with the Metropolis-Hastings algorithm. We follow a methodology recently proposed by Gregory and Ford. Our implementation of MCMC is based on the object-oriented approach outlined by Graves. We make our resulting code, EXOFIT, publicly available with this paper. It can search for either one or two planets, as illustrated on mock data. As an example we re-analysed the orbital solution of companions to HD 187085 and HD 159868 from the published radial velocity data. We confirm the degeneracy reported for the orbital parameters of the companion to HD 187085, and show that a low-eccentricity orbit is more probable for this planet. For HD 159868, we obtained a slightly different orbital solution and a relatively high `noise' factor, indicating the presence of an unaccounted-for signal in the radial velocity data. EXOFIT is designed in such a way that it can be extended for a variety of probability models, including different Bayesian priors.
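
    A bare-bones Metropolis-Hastings loop of the kind such codes build on, here targeting a toy one-dimensional Gaussian posterior in place of the radial-velocity model:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def log_post(theta):
        """Toy log-posterior: standard normal likelihood, flat prior."""
        return -0.5 * theta ** 2

    chain = np.empty(20000)
    theta = 3.0                                    # deliberately poor starting point
    lp = log_post(theta)
    for i in range(chain.size):
        prop = theta + 0.8 * rng.standard_normal() # symmetric random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:    # Metropolis acceptance rule
            theta, lp = prop, lp_prop
        chain[i] = theta

    burned = chain[5000:]                          # discard burn-in
    print(burned.mean(), burned.std())             # ~0 and ~1 for the toy target
    ```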

  12. The Impact of Uncertain Physical Parameters on HVAC Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Elizondo, Marcelo A.; Lu, Shuai

    HVAC units are currently one of the major resources providing demand response (DR) in residential buildings. Models of HVAC with DR function can improve understanding of its impact on power system operations and facilitate the deployment of DR technologies. This paper investigates the importance of various physical parameters and their distributions to the HVAC response to DR signals, which is a key step in the construction of HVAC models for a population of units with insufficient data. These parameters include the size of floors, insulation efficiency, the amount of solid mass in the house, and the efficiency of the HVAC units. These parameters are usually assumed to follow Gaussian or uniform distributions. We study the effect of uncertainty in the chosen parameter distributions on the aggregate HVAC response to DR signals, during the transient phase and in steady state. We use a quasi-Monte Carlo sampling method with linear regression and Prony analysis to evaluate the sensitivity of DR output to the uncertainty in the distribution parameters. A significance ranking of the uncertainty sources is given for future guidance in the modeling of HVAC demand response.

  13. 300 Area treated effluent disposal facility sampling schedule. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loll, C.M.

    1995-03-28

    This document is the interface between the 300 Area liquid effluent process engineering (LEPE) group and the waste sampling and characterization facility (WSCF), concerning process control samples. It contains a schedule for process control samples at the 300 Area TEDF which describes the parameters to be measured, the frequency of sampling and analysis, the sampling point, and the purpose for each parameter.

  14. Radio Evolution of Supernova Remnants Including Nonlinear Particle Acceleration: Insights from Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Pavlović, Marko Z.; Urošević, Dejan; Arbutina, Bojan; Orlando, Salvatore; Maxted, Nigel; Filipović, Miroslav D.

    2018-01-01

    We present a model for the radio evolution of supernova remnants (SNRs) obtained by using three-dimensional hydrodynamic simulations coupled with nonlinear kinetic theory of cosmic-ray (CR) acceleration in SNRs. We model the radio evolution of SNRs on a global level by performing simulations for a wide range of the relevant physical parameters, such as the ambient density, supernova (SN) explosion energy, acceleration efficiency, and magnetic field amplification (MFA) efficiency. We attribute the observed spread of radio surface brightnesses for corresponding SNR diameters to the spread of these parameters. In addition to our simulations of Type Ia SNRs, we also considered SNR radio evolution in denser, nonuniform circumstellar environments modified by the progenitor star wind. These simulations start with the mass of the ejecta substantially higher than in the case of a Type Ia SN and presumably lower shock speed. The magnetic field is understandably seen as very important for the radio evolution of SNRs. In terms of MFA, we include both resonant and nonresonant modes in our large-scale simulations by implementing models obtained from first-principles, particle-in-cell simulations and nonlinear magnetohydrodynamical simulations. We test the quality and reliability of our models on a sample consisting of Galactic and extragalactic SNRs. Our simulations give Σ ‑ D slopes between ‑4 and ‑6 for the full Sedov regime. Recent empirical slopes obtained for the Galactic samples are around ‑5, while those for the extragalactic samples are around ‑4.

  15. The assessment of body sway and the choice of the stability parameter(s).

    PubMed

    Raymakers, J A; Samson, M M; Verhaar, H J J

    2005-01-01

    This methodological study compares the practical usefulness of several parameters of body sway derived from recordings of the center of pressure (CoP) with a static force platform, as proposed in the literature. These included: mean displacement velocity, maximal range of movement along the x- and y-co-ordinates, movement area, planar deviation, the phase plane parameter of Riley, and the parameters of the diffusion stabilogram according to Collins. They were compared in over 850 experiments in a group of young healthy subjects (n = 10, age 21-45 years), a group of healthy elderly subjects (n = 38, age 61-78 years) and two groups of elderly subjects with stability problems (n = 10 and n = 21, age 65-89 years), under different conditions known to interfere with stability, as compared to standing with open eyes fixing a visual anchoring point: closing the eyes, standing on plastic foam instead of a firm surface, and performing a cognitive task (the modified Stroop test). A force platform (Kistler) was used, and the co-ordinates of the body's center of pressure were recorded during 60 s of quiet barefoot standing with a sampling frequency of 10 Hz. In general, the results show substantial overlap among groups and test conditions. Mean displacement velocity shows the most consistent differences between test situations, health conditions and age ranges, but is not affected by an extra cognitive task in healthy old people. Mean maximal sideways sway range is different among groups and test conditions except for the cognitive task in young and elderly subjects. Standardised displacement parameters such as standard deviations of displacements and planar deviation discriminate less well than the actual range of motion or the velocity. The critical time interval derived from the diffusion stabilogram according to Collins et al. seems to add a specific type of information, since it shows a significant influence of the addition of a cognitive task in old subjects standing on a firm
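
    A sketch of how some of the listed parameters can be computed from a CoP recording, using common definitions (synthetic trace; 10 Hz sampling as in the study):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    fs = 10.0                                   # Hz, sampling frequency
    t = np.arange(0, 60, 1 / fs)                # 60 s of quiet standing
    x = np.cumsum(rng.normal(0, 0.3, t.size))   # synthetic CoP path (mm)
    y = np.cumsum(rng.normal(0, 0.3, t.size))

    # Mean displacement velocity: total path length / recording time
    path = np.sum(np.hypot(np.diff(x), np.diff(y)))
    mean_velocity = path / (t[-1] - t[0])

    range_x = x.max() - x.min()                 # maximal sideways sway range
    range_y = y.max() - y.min()
    planar_dev = np.sqrt(x.var() + y.var())     # planar deviation of the CoP

    print(f"velocity {mean_velocity:.2f} mm/s, range x {range_x:.1f} mm, "
          f"range y {range_y:.1f} mm, planar deviation {planar_dev:.1f} mm")
    ```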

  16. New prior sampling methods for nested sampling - Development and testing

    NASA Astrophysics Data System (ADS)

    Stokes, Barrie; Tuyl, Frank; Hudson, Irene

    2017-06-01

    Nested Sampling is a powerful algorithm for fitting models to data in the Bayesian setting, introduced by Skilling [1]. The nested sampling algorithm proceeds by carrying out a series of compressive steps, involving successively nested iso-likelihood boundaries, starting with the full prior distribution of the problem parameters. The "central problem" of nested sampling is to draw at each step a sample from the prior distribution whose likelihood is greater than the current likelihood threshold, i.e., a sample falling inside the current likelihood-restricted region. For both flat and informative priors this ultimately requires uniform sampling restricted to the likelihood-restricted region. We present two new methods of carrying out this sampling step, and illustrate their use with the lighthouse problem [2], a bivariate likelihood used by Gregory [3] and a trivariate Gaussian mixture likelihood. All the algorithm development and testing reported here has been done with Mathematica® [4].
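
    A minimal rejection-based version of the "central problem" described, drawing a new prior sample subject to the current likelihood constraint (toy 2-D Gaussian likelihood and uniform prior; practical implementations replace brute-force rejection with smarter moves such as those developed here):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def log_like(theta):
        return -0.5 * np.sum(theta ** 2)       # toy likelihood centred at the origin

    def sample_within(l_min, lo=-5.0, hi=5.0):
        """Draw from the uniform prior until the sample clears the threshold."""
        while True:
            theta = rng.uniform(lo, hi, size=2)
            if log_like(theta) > l_min:
                return theta

    # A short compressive nested-sampling sweep over 100 live points
    live = rng.uniform(-5, 5, size=(100, 2))
    for step in range(200):
        logls = np.array([log_like(p) for p in live])
        worst = np.argmin(logls)               # discard the lowest-likelihood point
        live[worst] = sample_within(logls[worst])
    print("final worst log-likelihood:", min(log_like(p) for p in live))
    ```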

  17. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size needed for sample statistics to closely estimate the parameters of a particular population remains an issue. Although the sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the resulting statistics are close to the parameters of the population. Meanwhile, a p-value of less than 0.05 is widely used as a guideline for inferential evidence. This study therefore audited statistics computed from various subsamples and statistical analyses and compared them with the parameters in three different populations. Eight types of statistical analysis, with eight subsamples for each analysis, were examined. Results showed that the statistics were consistent and close to the parameters when the sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.

  18. Evaluation of the information content of long-term wastewater characteristics data in relation to activated sludge model parameters.

    PubMed

    Alikhani, Jamal; Takacs, Imre; Al-Omari, Ahmed; Murthy, Sudhir; Massoudieh, Arash

    2017-03-01

    A parameter estimation framework was used to evaluate the ability of observed data from a full-scale nitrification-denitrification bioreactor to reduce the uncertainty associated with the bio-kinetic and stoichiometric parameters of an activated sludge model (ASM). Samples collected over a period of 150 days from the effluent as well as from the reactor tanks were used. A hybrid genetic algorithm and Bayesian inference were used to perform deterministic and probabilistic parameter estimation, respectively. The main goal was to assess the ability of the data to provide reliable parameter estimates for a modified version of the ASM. The modified ASM includes methylotrophic processes, which play the main role in methanol-fed denitrification. Sensitivity analysis was also used to explain the ability of the data to provide information about each of the parameters. The results showed that the uncertainty in the estimates of the most sensitive parameters (including growth rate, decay rate, and yield coefficients) decreased with respect to the prior information.

  19. Comparison of the solid-phase extraction efficiency of a bounded and an included cyclodextrin-silica microporous composite for polycyclic aromatic hydrocarbons determination in water samples.

    PubMed

    Mauri-Aucejo, Adela; Amorós, Pedro; Moragues, Alaina; Guillem, Carmen; Belenguer-Sapiña, Carolina

    2016-08-15

    Solid-phase extraction is one of the most important techniques for sample purification and concentration. A wide variety of solid phases have been used for sample preparation over time. In this work, the efficiency of a new kind of solid-phase extraction adsorbent, a microporous material made from modified cyclodextrin bound to a silica network, is evaluated through an analytical method that combines solid-phase extraction with high-performance liquid chromatography to determine polycyclic aromatic hydrocarbons in water samples. Several parameters that affect analyte recovery, such as the amount of solid phase, the nature and volume of the eluent, and the sample volume and concentration, have been evaluated. The experimental results indicate that the material possesses adsorption ability for the tested polycyclic aromatic hydrocarbons. Under the optimum conditions, the quantification limits of the method were in the range of 0.09-2.4 μg L(-1), and good linear correlations between peak height and concentration were found over 1.3-70 μg L(-1). The method has good repeatability and reproducibility, with coefficients of variation under 8%. Given these concentration results, this material may represent an alternative for trace analysis of polycyclic aromatic hydrocarbons in water through solid-phase extraction. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. VizieR Online Data Catalog: Fundamental parameters of Kepler stars (Silva Aguirre+, 2015)

    NASA Astrophysics Data System (ADS)

    Silva Aguirre, V.; Davies, G. R.; Basu, S.; Christensen-Dalsgaard, J.; Creevey, O.; Metcalfe, T. S.; Bedding, T. R.; Casagrande, L.; Handberg, R.; Lund, M. N.; Nissen, P. E.; Chaplin, W. J.; Huber, D.; Serenelli, A. M.; Stello, D.; van Eylen, V.; Campante, T. L.; Elsworth, Y.; Gilliland, R. L.; Hekker, S.; Karoff, C.; Kawaler, S. D.; Kjeldsen, H.; Lundkvist, M. S.

    2016-02-01

    Our sample has been extracted from the 77 exoplanet host stars presented in Huber et al. (2013, Cat. J/ApJ/767/127). We have made use of the full time-base of observations from the Kepler satellite to uniformly determine precise fundamental stellar parameters, including ages, for a sample of exoplanet host stars where high-quality asteroseismic data were available. We devised a Bayesian procedure flexible in its input and applied it to different grids of models to study systematics from input physics and extract statistically robust properties for all stars. (4 data files).

  1. Photospheric properties and fundamental parameters of M dwarfs

    NASA Astrophysics Data System (ADS)

    Rajpurohit, A. S.; Allard, F.; Teixeira, G. D. C.; Homeier, D.; Rajpurohit, S.; Mousis, O.

    2018-02-01

    Context. M dwarfs are an important source of information when studying and probing the lower end of the Hertzsprung-Russell (HR) diagram, down to the hydrogen-burning limit. Being the most numerous and oldest stars in the galaxy, they carry fundamental information on its chemical history. The presence of molecules in their atmospheres, along with various condensed species, complicates our understanding of their physical properties and thus makes the determination of their fundamental stellar parameters more challenging. Aims: The aim of this study is to perform a detailed spectroscopic analysis of the high-resolution H-band spectra of M dwarfs in order to determine their fundamental stellar parameters and to validate atmospheric models. The present study will also help us to understand various processes, including dust formation and depletion of metals onto dust grains, in M dwarf atmospheres. The high spectral resolution also provides a unique opportunity to constrain other chemical and physical processes that occur in a cool atmosphere. Methods: The high-resolution APOGEE spectra of M dwarfs, covering the entire H-band, provide a unique opportunity to measure their fundamental parameters. We have performed a detailed spectral synthesis by comparing these high-resolution H-band spectra to the most recent BT-Settl models and have obtained the fundamental parameters effective temperature, surface gravity, and metallicity (Teff, log g, and [Fe/H]). Results: We have determined Teff, log g, and [Fe/H] for 45 M dwarfs using high-resolution H-band spectra. The derived Teff for the sample ranges from 3100 to 3900 K, values of log g lie in the range 4.5 ≤ log g ≤ 5.5, and the resulting metallicities lie in the range -0.5 ≤ [Fe/H] ≤ +0.5. We have explored systematic differences between effective temperature and metallicity calibrations with other studies using the same sample of M dwarfs. We have also shown that the stellar

  2. Cosmological parameters, shear maps and power spectra from CFHTLenS using Bayesian hierarchical inference

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Heavens, Alan; Jaffe, Andrew H.

    2017-04-01

    We apply two Bayesian hierarchical inference schemes to infer shear power spectra, shear maps and cosmological parameters from the Canada-France-Hawaii Telescope (CFHTLenS) weak lensing survey - the first application of this method to data. In the first approach, we sample the joint posterior distribution of the shear maps and power spectra by Gibbs sampling, with minimal model assumptions. In the second approach, we sample the joint posterior of the shear maps and cosmological parameters, providing a new, accurate and principled approach to cosmological parameter inference from cosmic shear data. As a first demonstration on data, we perform a two-bin tomographic analysis to constrain cosmological parameters and investigate the possibility of photometric redshift bias in the CFHTLenS data. Under the baseline ΛCDM (Λ cold dark matter) model, we constrain S_8 = σ_8(Ω_m/0.3)^{0.5} = 0.67^{+0.03}_{-0.03} (68 per cent), consistent with previous CFHTLenS analyses but in tension with Planck. Adding neutrino mass as a free parameter, we are able to constrain ∑mν < 4.6 eV (95 per cent) using CFHTLenS data alone. Including a linear redshift-dependent photo-z bias Δz = p_2(z - p_1), we find p_1 = -0.25^{+0.53}_{-0.60} and p_2 = -0.15^{+0.17}_{-0.15}, and tension with Planck is only alleviated under very conservative prior assumptions. Neither the non-minimal neutrino mass nor photo-z bias models are significantly preferred by the CFHTLenS (two-bin tomography) data.
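    The constrained quantity S_8 is a deterministic function of the sampled parameters, so its credible interval follows directly from posterior samples. A minimal sketch, using synthetic Gaussian samples as stand-ins for the actual chains:

      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic stand-ins for posterior samples of (sigma_8, Omega_m);
      # real values would come from the Gibbs / MCMC chains.
      sigma8 = rng.normal(0.75, 0.05, 100000)
      omega_m = rng.normal(0.30, 0.04, 100000)

      s8 = sigma8 * (omega_m / 0.3) ** 0.5     # derived parameter S_8
      lo, med, hi = np.percentile(s8, [16, 50, 84])
      print(f"S_8 = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f}) at 68 per cent")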

  3. Nested sampling at karst springs: from basic patterns to event triggered sampling and on-line monitoring.

    NASA Astrophysics Data System (ADS)

    Stadler, Hermann; Skritek, Paul; Zerobin, Wolfgang; Klock, Erich; Farnleitner, Andreas H.

    2010-05-01

    In recent years, global changes in ecosystems, population growth, and modifications of the legal framework within the EU have increased the need for groundwater and spring water quality monitoring, with the aim of continuing to supply consumers with high-quality drinking water in the future. Additionally, the demand for sustainable protection of drinking water resources has prompted the implementation of early warning systems and quality assurance networks in water supplies. In the field of hydrogeological investigations, event monitoring and event sampling amount to worst-case-scenario monitoring. Such tools are therefore becoming indispensable for obtaining detailed information about aquifer parameters and vulnerability. In the framework of water supplies, smart sampling designs combined with in-situ measurements of different parameters and on-line access can play an important role in early warning systems and quality surveillance networks. In this study, nested sampling tiers are presented, which were designed to cover the total system dynamics. Basic monitoring sampling (BMS), high frequency sampling (HFS) and automated event sampling (AES) were combined. BMS was organized with a monthly increment for at least two years, and HFS was performed during times of increased groundwater recharge (e.g. during snowmelt). At least one AES tier was embedded in this system. AES was enabled by cross-linking of hydrological stations, so the system could be run fully automated and could include real-time availability of data. By means of networking via Low Earth Orbiting Satellites (LEO satellites), data from the precipitation station (PS) in the catchment area are brought together with data from the spring sampling station (SSS) without the need for terrestrial infrastructure for communication and power supply. Furthermore, the whole course of input and output parameters, like precipitation (input system) and discharge (output system), and the status of the

  4. Atmospheric stellar parameters from cross-correlation functions

    NASA Astrophysics Data System (ADS)

    Malavolta, L.; Lovis, C.; Pepe, F.; Sneden, C.; Udry, S.

    2017-08-01

    The increasing number of spectra gathered by spectroscopic sky surveys and transiting exoplanet follow-up has pushed the community to develop automated tools for atmospheric stellar parameters determination. Here we present a novel approach that allows the measurement of temperature (Teff), metallicity ([Fe/H]) and gravity (log g) within a few seconds and in a completely automated fashion. Rather than performing comparisons with spectral libraries, our technique is based on the determination of several cross-correlation functions (CCFs) obtained by including spectral features with different sensitivity to the photospheric parameters. We use literature stellar parameters of high signal-to-noise (SNR), high-resolution HARPS spectra of FGK main-sequence stars to calibrate Teff, [Fe/H] and log g as a function of CCF parameters. Our technique is validated using low-SNR spectra obtained with the same instrument. For FGK stars we achieve a precision of σ_Teff = 50 K, σ_log g = 0.09 dex and σ_[Fe/H] = 0.035 dex at SNR = 50, while the precision for observations with SNR ≳ 100 and the overall accuracy are constrained by the literature values used to calibrate the CCFs. Our approach can easily be extended to other instruments with similar spectral range and resolution, or to other spectral ranges and to stars other than FGK dwarfs, if a large sample of reference stars is available for the calibration. Additionally, we provide the mathematical formulation to convert synthetic equivalent widths to CCF parameters as an alternative to direct calibration. We have made our tool publicly available.

  5. Population pharmacokinetic characterization of BAY 81-8973, a full-length recombinant factor VIII: lessons learned - importance of including samples with factor VIII levels below the quantitation limit.

    PubMed

    Garmann, D; McLeay, S; Shah, A; Vis, P; Maas Enriquez, M; Ploeger, B A

    2017-07-01

    The pharmacokinetics (PK), safety and efficacy of BAY 81-8973, a full-length, unmodified, recombinant human factor VIII (FVIII), were evaluated in the LEOPOLD trials. The aim of this study was to develop a population PK model based on pooled data from the LEOPOLD trials and to investigate the importance of including samples with FVIII levels below the limit of quantitation (BLQ) to estimate half-life. The analysis included 1535 PK observations (measured by the chromogenic assay) from 183 male patients with haemophilia A aged 1-61 years from the 3 LEOPOLD trials. The limit of quantitation was 1.5 IU dL⁻¹ for the majority of samples. Population PK models that included or excluded BLQ samples were used for FVIII half-life estimations, and simulations were performed using both estimates to explore the influence on the time below a determined FVIII threshold. In the data set used, approximately 16.5% of samples were BLQ, which is not uncommon for FVIII PK data sets. The structural model to describe the PK of BAY 81-8973 was a two-compartment model similar to that seen for other FVIII products. If BLQ samples were excluded from the model, FVIII half-life estimations were longer compared with a model that included BLQ samples. It is essential to assess the importance of BLQ samples when performing population PK estimates of half-life for any FVIII product. Exclusion of BLQ data from half-life estimations based on population PK models may result in an overestimation of half-life and underestimation of time under a predetermined FVIII threshold, resulting in potential underdosing of patients. © 2017 Bayer AG. Haemophilia Published by John Wiley & Sons Ltd.
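    The abstract does not spell out the estimator, but a common way to include BLQ observations in a likelihood (often referred to as the M3 method) is to treat them as left-censored, contributing P(C < LOQ) instead of a density term. A minimal sketch under an assumed mono-exponential concentration decline with additive Gaussian error; all values are illustrative:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(3)

      # Hypothetical mono-exponential PK: C(t) = C0 * exp(-k*t), additive noise.
      t = np.repeat(np.arange(2, 50, 4), 5).astype(float)
      C0_true, k_true, sd, loq = 100.0, 0.12, 1.5, 1.5
      conc = C0_true * np.exp(-k_true * t) + rng.normal(0, sd, t.size)
      blq = conc < loq                          # flag below-quantitation samples

      def nll(p):
          C0, k = np.exp(p)                     # log-parameterisation keeps p > 0
          pred = C0 * np.exp(-k * t)
          # Quantified samples contribute a Gaussian density; BLQ samples
          # contribute the left-censored probability P(C < LOQ).
          ll_obs = norm.logpdf(conc[~blq], pred[~blq], sd).sum()
          ll_blq = norm.logcdf(loq, pred[blq], sd).sum()
          return -(ll_obs + ll_blq)

      fit = minimize(nll, x0=np.log([50.0, 0.2]), method="Nelder-Mead")
      C0_hat, k_hat = np.exp(fit.x)
      print(f"half-life with BLQ data included: {np.log(2) / k_hat:.1f} h")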

  6. New fundamental parameters for attitude representation

    NASA Astrophysics Data System (ADS)

    Patera, Russell P.

    2017-08-01

    A new attitude parameter set is developed to clarify the geometry of combining finite rotations in a rotational sequence and in combining infinitesimal angular increments generated by angular rate. The resulting parameter set of six Pivot Parameters represents a rotation as a great circle arc on a unit sphere that can be located at any clocking location in the rotation plane. Two rotations are combined by linking their arcs at either of the two intersection points of the respective rotation planes. In a similar fashion, linking rotational increments produced by angular rate is used to derive the associated kinematical equations, which are linear and have no singularities. Included in this paper is the derivation of twelve Pivot Parameter elements that represent all twelve Euler Angle sequences, which enables efficient conversions between Pivot Parameters and any Euler Angle sequence. Applications of this new parameter set include the derivation of quaternions and the quaternion composition rule, as well as the derivation of the analytical solution to time-dependent coning motion. The relationships between Pivot Parameters and traditional parameter sets are included in this work. Pivot Parameters are well suited for a variety of aerospace applications due to their effective composition rule, singularity-free kinematic equations, efficient conversion to and from Euler Angle sequences and the clarity of their geometrical foundation.

  7. Measurement of the hyperelastic properties of 44 pathological ex vivo breast tissue samples

    NASA Astrophysics Data System (ADS)

    O'Hagan, Joseph J.; Samani, Abbas

    2009-04-01

    The elastic and hyperelastic properties of biological soft tissues have been of interest to the medical community. There are several biomedical applications where parameters characterizing such properties are critical for a reliable clinical outcome. These applications include surgery planning, needle biopsy and brachytherapy, where tissue biomechanical modeling is involved. Another important application is interpreting nonlinear elastography images. While there has been considerable research on the measurement of the linear elastic modulus of small tissue samples, little research has been conducted on measuring parameters that characterize the nonlinear elasticity of tissues included in tissue slice specimens. This work presents hyperelastic measurement results for 44 pathological ex vivo breast tissue samples. For each sample, five hyperelastic models were used: the Yeoh, N = 2 polynomial, N = 1 Ogden, Arruda-Boyce, and Veronda-Westmann models. Results show that the Yeoh, polynomial and Ogden models are the most accurate in terms of fitting experimental data. The results indicate that almost all of the parameters corresponding to the pathological tissues are from two times to over two orders of magnitude larger than those of normal tissues, with C11 showing the most significant difference. Furthermore, statistical analysis indicates that C02 of the Yeoh model, and C11 and C20 of the polynomial model, have very good potential for cancer classification as they show statistically significant differences for various cancer types, especially for invasive lobular carcinoma. In addition to the potential for use in cancer classification, the presented data are very important for applications such as surgery planning and virtual reality based clinician training systems where accurate nonlinear tissue response modeling is required.
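    Fitting a hyperelastic model reduces to least-squares estimation of its strain-energy coefficients from measured stress-stretch data. A minimal sketch for the Yeoh model under assumed incompressible uniaxial loading, with hypothetical data standing in for the tissue measurements:

      import numpy as np
      from scipy.optimize import curve_fit

      # Yeoh strain energy: W = C10*(I1-3) + C20*(I1-3)^2 + C30*(I1-3)^3.
      # For incompressible uniaxial loading with stretch lam, I1 = lam^2 + 2/lam
      # and the Cauchy stress is sigma = 2*(lam^2 - 1/lam) * dW/dI1.
      def yeoh_uniaxial(lam, c10, c20, c30):
          i1 = lam ** 2 + 2.0 / lam
          dw = c10 + 2 * c20 * (i1 - 3) + 3 * c30 * (i1 - 3) ** 2
          return 2 * (lam ** 2 - 1.0 / lam) * dw

      # Hypothetical stress-stretch data (kPa), stand-ins for the measured
      # breast-tissue response in the paper.
      lam = np.linspace(1.02, 1.30, 15)
      rng = np.random.default_rng(7)
      stress = yeoh_uniaxial(lam, 5.0, 12.0, 40.0) * (1 + rng.normal(0, 0.02, lam.size))

      (c10, c20, c30), _ = curve_fit(yeoh_uniaxial, lam, stress, p0=[1.0, 1.0, 1.0])
      print(f"C10={c10:.1f} kPa, C20={c20:.1f} kPa, C30={c30:.1f} kPa")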

  8. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

  9. Noise in NC-AFM measurements with significant tip–sample interaction

    PubMed Central

    Lübbe, Jannis; Temmen, Matthias

    2016-01-01

    The frequency shift noise in non-contact atomic force microscopy (NC-AFM) imaging and spectroscopy consists of thermal noise and detection system noise with an additional contribution from amplitude noise if there are significant tip–sample interactions. The total noise power spectral density D_Δf(f_m) is, however, not just the sum of these noise contributions. Instead its magnitude and spectral characteristics are determined by the strongly non-linear tip–sample interaction, by the coupling between the amplitude and tip–sample distance control loops of the NC-AFM system as well as by the characteristics of the phase locked loop (PLL) detector used for frequency demodulation. Here, we measure D_Δf(f_m) for various NC-AFM parameter settings representing realistic measurement conditions and compare experimental data to simulations based on a model of the NC-AFM system that includes the tip–sample interaction. The good agreement between predicted and measured noise spectra confirms that the model covers the relevant noise contributions and interactions. Results yield a general understanding of noise generation and propagation in the NC-AFM and provide a quantitative prediction of noise for given experimental parameters. We derive strategies for noise-optimised imaging and spectroscopy and outline a full optimisation procedure for the instrumentation and control loops. PMID:28144538

  10. Noise in NC-AFM measurements with significant tip-sample interaction.

    PubMed

    Lübbe, Jannis; Temmen, Matthias; Rahe, Philipp; Reichling, Michael

    2016-01-01

    The frequency shift noise in non-contact atomic force microscopy (NC-AFM) imaging and spectroscopy consists of thermal noise and detection system noise with an additional contribution from amplitude noise if there are significant tip-sample interactions. The total noise power spectral density D_Δf(f_m) is, however, not just the sum of these noise contributions. Instead its magnitude and spectral characteristics are determined by the strongly non-linear tip-sample interaction, by the coupling between the amplitude and tip-sample distance control loops of the NC-AFM system as well as by the characteristics of the phase locked loop (PLL) detector used for frequency demodulation. Here, we measure D_Δf(f_m) for various NC-AFM parameter settings representing realistic measurement conditions and compare experimental data to simulations based on a model of the NC-AFM system that includes the tip-sample interaction. The good agreement between predicted and measured noise spectra confirms that the model covers the relevant noise contributions and interactions. Results yield a general understanding of noise generation and propagation in the NC-AFM and provide a quantitative prediction of noise for given experimental parameters. We derive strategies for noise-optimised imaging and spectroscopy and outline a full optimisation procedure for the instrumentation and control loops.

  11. A COMPARATIVE STUDY ON PARAMETERS USED FOR CHARACTERIZING COTTON SHORT FIBERS

    USDA-ARS?s Scientific Manuscript database

    The quantity of short cotton fibers in a cotton sample is an important cotton quality parameter which impacts yarn production performance and yarn quality. Researchers have proposed different parameters for characterizing the amount of short fibers in a cotton sample. A comprehensive study was car...

  12. Quality parameters and antioxidant and antibacterial properties of some Mexican honeys.

    PubMed

    Rodríguez, Beatriz A; Mendoza, Sandra; Iturriga, Montserrat H; Castaño-Tostado, Eduardo

    2012-01-01

    A total of 14 Mexican honeys were screened for quality parameters including color, moisture, proline, and acidity. Antioxidant properties of complete honey and its methanolic extracts were evaluated by the DPPH, ABTS, and FRAP assays. In addition, the antimicrobial activity of complete honeys against Bacillus cereus ATCC 10876, Listeria monocytogenes Scott A, Salmonella Typhimurium ATCC 14028, and Staphylococcus aureus ATCC 6538 was determined. Most of the honeys analyzed showed values within the quality parameters established by the Codex Alimentarius Commission in 2001. Eucalyptus flower honey and orange blossom honey showed the highest phenolic contents and antioxidant capacity. Bell flower, orange blossom, and eucalyptus flower honeys inhibited the growth of all 4 evaluated microorganisms. The remaining honeys affected at least 1 of the estimated growth parameters (increased lag phase, decreased growth rate, and/or maximum population density). Microorganism sensitivity to the antimicrobial activity of honeys followed the order B. cereus > L. monocytogenes > Salmonella Typhimurium > S. aureus. The monofloral honey samples from orange blossoms and eucalyptus flowers proved to be good sources of antioxidant and antimicrobial compounds, and all the Mexican honey samples examined were good sources of antioxidants and antimicrobial agents that might serve to maintain health and protect against several diseases. The results of the study showed that Mexican honeys display good quality parameters and antioxidant and antimicrobial activities. Mexican honey can be used as an additive in the food industry to increase the nutraceutical value of products. © 2011 Institute of Food Technologists®

  13. High energy PIXE: A tool to characterize multi-layer thick samples

    NASA Astrophysics Data System (ADS)

    Subercaze, A.; Koumeir, C.; Métivier, V.; Servagent, N.; Guertin, A.; Haddad, F.

    2018-02-01

    High energy PIXE is a useful and non-destructive tool to characterize multi-layer thick samples such as cultural heritage objects. In a previous work, we demonstrated the possibility of performing quantitative analysis of simple multi-layer samples using high energy PIXE, without any assumption on their composition. In this work, an in-depth study of the parameters involved in the previously published method is proposed. Its extension to more complex samples with a repeated layer is also presented. Experiments were performed at the ARRONAX cyclotron using 68 MeV protons. The thicknesses and sequence of a multi-layer sample including two different layers of the same element have been determined. The performance and limits of this method are presented and discussed.

  14. Alaska Geochemical Database, Version 2.0 (AGDB2)--including “best value” data compilations for rock, sediment, soil, mineral, and concentrate sample media

    USGS Publications Warehouse

    Granitto, Matthew; Schmidt, Jeanine M.; Shew, Nora B.; Gamble, Bruce M.; Labay, Keith A.

    2013-01-01

    The Alaska Geochemical Database Version 2.0 (AGDB2) contains new geochemical data compilations in which each geologic material sample has one “best value” determination for each analyzed species, greatly improving speed and efficiency of use. Like the Alaska Geochemical Database (AGDB, http://pubs.usgs.gov/ds/637/) before it, the AGDB2 was created and designed to compile and integrate geochemical data from Alaska in order to facilitate geologic mapping, petrologic studies, mineral resource assessments, definition of geochemical baseline values and statistics, environmental impact assessments, and studies in medical geology. This relational database, created from the Alaska Geochemical Database (AGDB) that was released in 2011, serves as a data archive in support of present and future Alaskan geologic and geochemical projects, and contains data tables in several different formats describing historical and new quantitative and qualitative geochemical analyses. The analytical results were determined by 85 laboratory and field analytical methods on 264,095 rock, sediment, soil, mineral and heavy-mineral concentrate samples. Most samples were collected by U.S. Geological Survey personnel and analyzed in U.S. Geological Survey laboratories or, under contracts, in commercial analytical laboratories. These data represent analyses of samples collected as part of various U.S. Geological Survey programs and projects from 1962 through 2009. In addition, mineralogical data from 18,138 nonmagnetic heavy-mineral concentrate samples are included in this database. The AGDB2 includes historical geochemical data originally archived in the U.S. Geological Survey Rock Analysis Storage System (RASS) database, used from the mid-1960s through the late 1980s and the U.S. Geological Survey PLUTO database used from the mid-1970s through the mid-1990s. All of these data are currently maintained in the National Geochemical Database (NGDB). Retrievals from the NGDB were used to generate

  15. Extracting galactic structure parameters from multivariated density estimation

    NASA Technical Reports Server (NTRS)

    Chen, B.; Creze, M.; Robin, A.; Bienayme, O.

    1992-01-01

    Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification) and principal component analysis (a dimensionality reduction method), together with nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure that is otherwise unrecognizable, and to place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a set of n equations (where n is the total number of observed points) in m unknown parameters (where m is the number of predefined groups). A least-squares estimation allows us to determine the density law of the different groups and components in the Galaxy. The output from our software, which can be used in many research fields, also quantifies the systematic error between the model and the observation by a Bayes rule.
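    The core idea, replacing unknown true densities with nonparametric estimates and solving a least-squares system for the group weights, can be illustrated compactly. A one-dimensional sketch with two synthetic groups, where kernel density estimates stand in for the 5-dimensional observables and non-negative least squares stands in for the paper's exact solver:

      import numpy as np
      from scipy.stats import gaussian_kde
      from scipy.optimize import nnls

      rng = np.random.default_rng(5)

      # Two simulated stellar "populations" in one observable (a stand-in for
      # the 5-dimensional space of observables); the real sample mixes them 70/30.
      group1 = rng.normal(0.0, 1.0, 4000)      # model group 1
      group2 = rng.normal(3.0, 1.5, 4000)      # model group 2
      observed = np.concatenate([rng.normal(0.0, 1.0, 1400),
                                 rng.normal(3.0, 1.5, 600)])

      # Nonparametric density estimates replace the unknown true densities.
      f1, f2 = gaussian_kde(group1), gaussian_kde(group2)
      f_obs = gaussian_kde(observed)

      # Evaluate densities at the observed points: n equations, m = 2 unknowns.
      x = observed
      A = np.column_stack([f1(x), f2(x)])
      w, _ = nnls(A, f_obs(x))                 # least squares with w >= 0
      w /= w.sum()
      print(f"recovered mixture weights: {w.round(2)}")   # ~ [0.7, 0.3]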

  16. Atmospheric Sampling on Ascension Island Using Multirotor UAVs

    PubMed Central

    Greatwood, Colin; Richardson, Thomas S.; Freer, Jim; Thomas, Rick M.; MacKenzie, A. Rob; Brownlow, Rebecca; Lowry, David; Fisher, Rebecca E.; Nisbet, Euan G.

    2017-01-01

    As part of an NERC-funded project investigating the southern methane anomaly, a team drawn from the Universities of Bristol, Birmingham and Royal Holloway flew small unmanned multirotors from Ascension Island for the purposes of atmospheric sampling. The objective of these flights was to collect air samples from below, within and above a persistent atmospheric feature, the Trade Wind Inversion, in order to characterise methane concentrations and their isotopic composition. These parameters allow the methane in the different air masses to be tied to different source locations, which can be further analysed using back trajectory atmospheric computer modelling. This paper describes the campaigns as a whole including the design of the bespoke eight rotor aircraft and the operational requirements that were needed in order to collect targeted multiple air samples up to 2.5 km above the ground level in under 20 min of flight time. Key features of the system described include real-time feedback of temperature and humidity, as well as system health data. This enabled detailed targeting of the air sampling design to be realised and planned during the flight mission on the downward leg, a capability that is invaluable in the presence of uncertainty in the pre-flight meteorological data. Environmental considerations are also outlined together with the flight plans that were created in order to rapidly fly vertical transects of the atmosphere whilst encountering changing wind conditions. Two sampling campaigns were carried out in September 2014 and July 2015 with over one hundred high altitude sampling missions. Lessons learned are given throughout, including those associated with operating in the testing environment encountered on Ascension Island. PMID:28545231

  17. Atmospheric Sampling on Ascension Island Using Multirotor UAVs.

    PubMed

    Greatwood, Colin; Richardson, Thomas S; Freer, Jim; Thomas, Rick M; MacKenzie, A Rob; Brownlow, Rebecca; Lowry, David; Fisher, Rebecca E; Nisbet, Euan G

    2017-05-23

    As part of an NERC-funded project investigating the southern methane anomaly, a team drawn from the Universities of Bristol, Birmingham and Royal Holloway flew small unmanned multirotors from Ascension Island for the purposes of atmospheric sampling. The objective of these flights was to collect air samples from below, within and above a persistent atmospheric feature, the Trade Wind Inversion, in order to characterise methane concentrations and their isotopic composition. These parameters allow the methane in the different air masses to be tied to different source locations, which can be further analysed using back trajectory atmospheric computer modelling. This paper describes the campaigns as a whole including the design of the bespoke eight rotor aircraft and the operational requirements that were needed in order to collect targeted multiple air samples up to 2.5 km above the ground level in under 20 min of flight time. Key features of the system described include real-time feedback of temperature and humidity, as well as system health data. This enabled detailed targeting of the air sampling design to be realised and planned during the flight mission on the downward leg, a capability that is invaluable in the presence of uncertainty in the pre-flight meteorological data. Environmental considerations are also outlined together with the flight plans that were created in order to rapidly fly vertical transects of the atmosphere whilst encountering changing wind conditions. Two sampling campaigns were carried out in September 2014 and July 2015 with over one hundred high altitude sampling missions. Lessons learned are given throughout, including those associated with operating in the testing environment encountered on Ascension Island.

  18. Simulation-based Extraction of Key Material Parameters from Atomic Force Microscopy

    NASA Astrophysics Data System (ADS)

    Alsafi, Huseen; Peninngton, Gray

    Models for the atomic force microscopy (AFM) tip and sample interaction contain numerous material parameters that are often poorly known. This is especially true when dealing with novel material systems or when imaging samples that are exposed to complicated interactions with the local environment. In this work we use Monte Carlo methods to extract sample material parameters from the experimental AFM analysis of a test sample. The parameterized theoretical model that we use is based on the Virtual Environment for Dynamic AFM (VEDA) [1]. The extracted material parameters are then compared with the accepted values for our test sample. Using this procedure, we suggest a method that can be used to successfully determine unknown material properties in novel and complicated material systems. We acknowledge Fisher Endowment Grant support from the Jess and Mildred Fisher College of Science and Mathematics, Towson University.

  19. Quantitative analysis of iris parameters in keratoconus patients using optical coherence tomography.

    PubMed

    Bonfadini, Gustavo; Arora, Karun; Vianna, Lucas M; Campos, Mauro; Friedman, David; Muñoz, Beatriz; Jun, Albert S

    2015-01-01

    To investigate the relationship between quantitative iris parameters and the presence of keratoconus. Cross-sectional observational study that included 15 affected eyes of 15 patients with keratoconus and 26 eyes of 26 normal age- and sex-matched controls. Iris parameters (area, thickness, and pupil diameter) of affected and unaffected eyes were measured under standardized light and dark conditions using anterior segment optical coherence tomography (AS-OCT). To identify optimal iris thickness cutoff points to maximize the sensitivity and specificity when discriminating keratoconus eyes from normal eyes, the analysis included the use of receiver operating characteristic (ROC) curves. Iris thickness and area were lower in keratoconus eyes than in normal eyes. The mean thickness at the pupillary margin under both light and dark conditions was found to be the best parameter for discriminating normal patients from keratoconus patients. Diagnostic performance was assessed by the area under the ROC curve (AROC), which had a value of 0.8256 with 80.0% sensitivity and 84.6% specificity, using a cutoff of 0.4125 mm. The sensitivity increased to 86.7% when a cutoff of 0.4700 mm was used. In our sample, iris thickness was lower in keratoconus eyes than in normal eyes. These results suggest that tomographic parameters may provide novel adjunct approaches for keratoconus screening.
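    Choosing a thickness cutoff that trades sensitivity against specificity is a standard ROC exercise; one common rule maximises Youden's J = sensitivity + specificity - 1. A minimal sketch on synthetic thickness values (the group means, spreads and resulting cutoff are invented, not the study data):

      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(2)
      # Synthetic stand-ins: pupillary-margin iris thickness (mm) for normal
      # and keratoconus eyes, with thinner irides in keratoconus as reported.
      thick_normal = rng.normal(0.50, 0.05, 26)
      thick_kc = rng.normal(0.40, 0.05, 15)

      y = np.r_[np.zeros(26), np.ones(15)]          # 1 = keratoconus
      # Lower thickness indicates disease, so score on the negated value.
      score = -np.r_[thick_normal, thick_kc]

      fpr, tpr, thr = roc_curve(y, score)
      print("AROC:", round(roc_auc_score(y, score), 3))
      best = np.argmax(tpr - fpr)                   # Youden's J statistic
      print(f"optimal cutoff: {-thr[best]:.3f} mm, "
            f"sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")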

  20. Bibliography for aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.; Maine, Richard E.

    1986-01-01

    An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.

  1. Comparison of chain sampling plans with single and double sampling plans

    NASA Technical Reports Server (NTRS)

    Stephens, K. S.; Dodge, H. F.

    1976-01-01

    The efficiency of chain sampling is examined by matching the operating characteristic (OC) curves of chain sampling plans (ChSP) with those of single and double sampling plans. In particular, the operating characteristics of some ChSP-(0,3) and ChSP-(1,3), as well as ChSP-(0,4) and ChSP-(1,4), plans are presented, where the number pairs are the first and second cumulative acceptance numbers. The fact that the ChSP procedure uses cumulative results from two or more samples, and that its parameters can be varied to produce a wide variety of operating characteristics, raises the question of whether such plans can provide a given protection with less inspection than single or double sampling plans. The operating ratio values reported illustrate the possibilities of matching single and double sampling plans with ChSP. It is shown that chain sampling plans provide improved efficiency over single and double sampling plans having substantially the same operating characteristics.
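    For the classic ChSP-1 plan (a simpler relative of the two-stage ChSP-(c1,c2) plans studied here), the OC curve has a closed form: a lot is accepted on zero defectives, or on one defective provided the preceding i samples were all defect-free. A sketch comparing it with a single sampling plan of the same sample size; the plan parameters are illustrative:

      import numpy as np
      from scipy.stats import binom

      def oc_single(p, n, c):
          # OC curve of a single sampling plan (n, c): accept if d <= c.
          return binom.cdf(c, n, p)

      def oc_chsp1(p, n, i):
          # Classic ChSP-1 (Dodge): accept on d = 0, or on d = 1 provided
          # the preceding i samples all had d = 0.
          p0 = binom.pmf(0, n, p)
          p1 = binom.pmf(1, n, p)
          return p0 + p1 * p0 ** i

      p = np.linspace(0.001, 0.15, 8)
      print("p      single(20,0)  ChSP-1(20,i=3)")
      for pi, s, c in zip(p, oc_single(p, 20, 0), oc_chsp1(p, 20, 3)):
          print(f"{pi:.3f}   {s:.3f}         {c:.3f}")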

  2. Probing microstructural information of anisotropic scattering media using rotation-independent polarization parameters.

    PubMed

    Sun, Minghao; He, Honghui; Zeng, Nan; Du, E; Guo, Yihong; Peng, Cheng; He, Yonghong; Ma, Hui

    2014-05-10

    Polarization parameters contain rich information on the micro- and macro-structure of scattering media. However, many of these parameters are sensitive to the spatial orientation of anisotropic media, and may not effectively reveal the microstructural information. In this paper, we take polarization images of different textile samples at different azimuth angles. The results demonstrate that the rotation insensitive polarization parameters from rotating linear polarization imaging and Mueller matrix transformation methods can be used to distinguish the characteristic features of different textile samples. Further examinations using both experiments and Monte Carlo simulations reveal that the residue rotation dependence in these polarization parameters is due to the oblique incidence illumination. This study shows that such rotation independent parameters are potentially capable of quantitatively classifying anisotropic samples, such as textiles or biological tissues.

  3. Application of Statistically Derived CPAS Parachute Parameters

    NASA Technical Reports Server (NTRS)

    Romero, Leah M.; Ray, Eric S.

    2013-01-01

    The Capsule Parachute Assembly System (CPAS) Analysis Team is responsible for determining parachute inflation parameters and dispersions that are ultimately used in verifying system requirements. A model memo is internally released semi-annually documenting parachute inflation and other key parameters reconstructed from flight test data. Dispersion probability distributions published in previous versions of the model memo were uniform because insufficient data were available for determination of statistically based distributions. Uniform distributions do not accurately represent the expected distributions, since extreme parameter values are just as likely to occur as the nominal value. CPAS has taken incremental steps to move away from uniform distributions. Model Memo version 9 (MMv9) made the first use of non-uniform dispersions, but only for the reefing cutter timing, for which a large number of samples were available. In order to maximize the utility of the available flight test data, clusters of parachutes were reconstructed individually starting with Model Memo version 10. This allowed statistical assessment of steady-state drag area (CDS) and parachute inflation parameters such as the canopy fill distance (n), profile shape exponent (expopen), over-inflation factor (C(sub k)), and ramp-down time (t(sub k)) distributions. Built-in MATLAB distributions were applied to the histograms, and parameters such as scale (sigma) and location (mu) were output. Engineering judgment was used to determine the "best fit" distribution based on the test data. Results include normal, log normal, and uniform (where available data remain insufficient) fits of nominal and failure (loss of parachute and skipped stage) cases for all CPAS parachutes. This paper discusses the uniform methodology that was previously used, the process and result of the statistical assessment, how the dispersions were incorporated into Monte Carlo analyses, and the application of the distributions in
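    The statistical assessment amounts to fitting candidate distributions to reconstructed parameter values and drawing Monte Carlo dispersions from the chosen fit. A minimal sketch using generic SciPy distributions on synthetic "fill distance" values; a log-likelihood score stands in for the engineering judgment described above:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      # Stand-in for reconstructed per-parachute canopy fill distances; the
      # real values come from CPAS flight-test reconstructions.
      n_fill = rng.lognormal(mean=2.0, sigma=0.25, size=60)

      # Candidate "built-in" distributions, as in the model-memo assessment.
      candidates = {"normal": stats.norm, "lognormal": stats.lognorm,
                    "uniform": stats.uniform}
      for name, dist in candidates.items():
          params = dist.fit(n_fill)
          ll = np.sum(dist.logpdf(n_fill, *params))
          print(f"{name:10s} log-likelihood {ll:8.2f}")

      # Dispersed Monte Carlo inputs are then drawn from the chosen fit.
      shape, loc, scale = stats.lognorm.fit(n_fill)
      dispersed = stats.lognorm.rvs(shape, loc, scale, size=3000, random_state=1)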

  4. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…

  5. Neutron activation analysis of certified samples by the absolute method

    NASA Astrophysics Data System (ADS)

    Kadem, F.; Belouadah, N.; Idiri, Z.

    2015-07-01

    Nuclear reaction analysis techniques are mainly based on the relative method or on the use of activation cross sections. In order to validate nuclear data for the calculated cross sections evaluated from systematic studies, we used the neutron activation analysis (NAA) technique to determine the various constituent concentrations of certified samples of animal blood, milk and hay. In this analysis, the absolute method is used. The neutron activation technique involves irradiating the sample and subsequently measuring its activity. The fundamental activation equation connects several physical parameters, including the cross section, and is essential for the quantitative determination of the different elements composing the sample without resorting to standard samples. Called the absolute method, it allows measurements as accurate as the relative method. The results obtained by the absolute method showed that the values are as precise as those of the relative method, which requires a standard sample for each element to be quantified.
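    The absolute method rests on the activation equation, which links measured photopeak counts to the number of target nuclei through the cross section, flux, decay constant and counting geometry. A simplified single-isotope sketch in which every numerical value is a hypothetical stand-in, not a result from the study:

      import numpy as np

      # Simplified absolute-method NAA: the measured photopeak counts C relate
      # to the number of target nuclei N through
      #   C = N * sigma * phi * (1 - exp(-lam*t_irr)) * exp(-lam*t_d)
      #       * (1 - exp(-lam*t_c)) / lam * eps * I_gamma
      N_A = 6.022e23
      sigma = 13.3e-24        # capture cross section, cm^2 (illustrative)
      phi = 1.0e13            # neutron flux, n cm^-2 s^-1
      lam = np.log(2) / (15.0 * 3600)    # decay constant for a 15 h half-life
      t_irr, t_d, t_c = 6 * 3600, 2 * 3600, 1 * 3600   # irradiate, decay, count (s)
      eps, I_gamma = 0.05, 0.99          # detector efficiency, gamma intensity
      counts = 1.2e5                     # measured net peak area (illustrative)

      saturation = 1 - np.exp(-lam * t_irr)
      decay = np.exp(-lam * t_d)
      counting = (1 - np.exp(-lam * t_c)) / lam
      N = counts / (sigma * phi * saturation * decay * counting * eps * I_gamma)

      M, abundance = 23.0, 1.0           # molar mass (g/mol), isotopic abundance
      mass_g = N * M / (N_A * abundance)
      print(f"element mass in sample: {mass_g * 1e9:.2f} ng")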

  6. A comparison between two powder compaction parameters of plasticity: the effective medium A parameter and the Heckel 1/K parameter.

    PubMed

    Mahmoodi, Foad; Klevan, Ingvild; Nordström, Josefina; Alderborn, Göran; Frenning, Göran

    2013-09-10

    The purpose of the research was to introduce a procedure to derive a powder compression parameter (EM A) representing particle yield stress using an effective medium equation, and to compare the EM A parameter with the Heckel compression parameter (1/K). 16 pharmaceutical powders, including drugs and excipients, were compressed in a materials testing instrument and powder compression profiles were derived using the EM and Heckel equations. The compression profiles thus obtained could be sub-divided into regions, one of which was approximately linear; from this region, the compression parameters EM A and 1/K were calculated. A linear relationship between the EM A parameter and the 1/K parameter was obtained with a strong correlation. The slope of the plot was close to 1 (0.84) and the intercept of the plot was small in comparison to the range of parameter values obtained. The relationship between the theoretical EM A parameter and the 1/K parameter supports the interpretation of the empirical Heckel parameter as being a measure of yield stress. It is concluded that the combination of the Heckel and EM equations represents a suitable procedure to derive a value of particle plasticity from powder compression data. Copyright © 2013 Elsevier B.V. All rights reserved.
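    The Heckel parameter 1/K comes from a straight-line fit of ln(1/(1-D)) against pressure over the approximately linear region of the compression profile, with D the relative density. A minimal sketch with hypothetical compression data:

      import numpy as np

      # Heckel equation: ln(1/(1 - D)) = K*P + A, fitted over the approximately
      # linear region of the profile; 1/K is the parameter compared with EM A.
      # Hypothetical compression data: pressure P (MPa) and relative density D.
      P = np.array([25, 50, 75, 100, 125, 150, 175, 200], dtype=float)
      D = np.array([0.62, 0.71, 0.78, 0.83, 0.87, 0.90, 0.92, 0.94])

      y = np.log(1.0 / (1.0 - D))
      linear = (P >= 75) & (P <= 175)          # pick the near-linear region
      K, A = np.polyfit(P[linear], y[linear], 1)
      print(f"Heckel 1/K (yield-pressure measure): {1.0 / K:.0f} MPa")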

  7. Reconstructing gravitational wave source parameters via direct comparisons to numerical relativity I: Method

    NASA Astrophysics Data System (ADS)

    Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark; Ossokine, Serguei

    2016-03-01

    In this talk, we describe a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. For sufficiently massive sources, existing numerical relativity simulations are long enough to cover the observationally accessible part of the signal. Due to the signal's brevity, the posterior parameter distribution it implies is broad, simple, and easily reconstructed from information gained by comparing to only the sparse sample of existing numerical relativity simulations. We describe how followup simulations can corroborate and improve our understanding of a detected source. Since our method can include all physics provided by full numerical relativity simulations of coalescing binaries, it provides a valuable complement to alternative techniques which employ approximations to reconstruct source parameters. Supported by NSF Grant PHY-1505629.

  8. Parameter Estimation for Compact Binaries with Ground-Based Gravitational-Wave Observations Using the LALInference

    NASA Technical Reports Server (NTRS)

    Veitch, J.; Raymond, V.; Farr, B.; Farr, W.; Graff, P.; Vitale, S.; Aylott, B.; Blackburn, K.; Christensen, N.; Coughlin, M.

    2015-01-01

    The Advanced LIGO and Advanced Virgo gravitational wave (GW) detectors will begin operation in the coming years, with compact binary coalescence events a likely source for the first detections. The gravitational waveforms emitted directly encode information about the sources, including the masses and spins of the compact objects. Recovering the physical parameters of the sources from the GW observations is a key analysis task. This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies. We show that our implementation is able to correctly recover the parameters of compact binary signals from simulated data from the advanced GW detectors. We demonstrate this with a detailed comparison on three compact binary systems: a binary neutron star (BNS), a neutron star - black hole binary (NSBH) and a binary black hole (BBH), where we show a cross-comparison of results obtained using three independent sampling algorithms. These systems were analysed with non-spinning, aligned spin and generic spin configurations respectively, showing that consistent results can be obtained even with the full 15-dimensional parameter space of the generic spin configurations. We also demonstrate statistically that the Bayesian credible intervals we recover correspond to frequentist confidence intervals under correct prior assumptions by analysing a set of 100 signals drawn from the prior. We discuss the computational cost of these algorithms, and describe the general and problem-specific sampling techniques we have used to improve the efficiency of sampling the compact binary coalescence (CBC) parameter space.

  9. Field Exploration and Life Detection Sampling Through Planetary Analogue Sampling (FELDSPAR).

    NASA Technical Reports Server (NTRS)

    Stockton, A.; Amador, E. S.; Cable, M. L.; Cantrell, T.; Chaudry, N.; Cullen, T.; Duca, Z.; Gentry, D. M.; Kirby, J.; Jacobsen, M.; hide

    2017-01-01

    reflectance spectra at all scales greater than 10 cm. Field lab assays were conducted to monitor microbial habitation, including ATP quantification, qPCR for fungal, bacterial, and archaeal DNA, and direct cell imaging using fluorescence microscopy. Home laboratory analyses include Raman spectroscopy and community sequencing. ATP appeared to be significantly more sensitive to small changes in sampling location than qPCR or fluorescence microscopy. Bacterial and archaeal DNA content were more consistent at the smaller scales, but similarly variable across more distant sites. Conversely, cell counts and fungal DNA content have significant local variation but appear relatively homogeneous over scales of 1 km. ATP, bacterial DNA, and archaeal DNA content were relatively well correlated at many spatial scales. While we have observed spatial variation at various scales and are beginning to observe how that variation fluctuates over time as biodiversity recovers after an eruption, we do not yet fully understand what parameters lead to the observed spatial variation. Home laboratory analyses will help us further understand the elemental and structural composition of the basaltic matrices, but further field analyses are vital for the understanding how temperature, moisture, incident radiation, and so forth influence the habitability of a microclimate.

  10. Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.

    PubMed

    Ette, E I; Howie, C A; Kelman, A W; Whiting, B

    1995-05-01

    A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide the basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three- and four-time-point designs was evaluated in terms of percent prediction error, design number, individual and joint confidence-interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter- and residual intra-animal variability. The results showed that the typical population parameters for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time point for the three- and four-time-point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
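    The design question, how the placement of sampling times affects estimation efficiency for a fixed number of destructively sampled animals, can be probed with a much-simplified Monte Carlo. The sketch below uses a one-compartment IV bolus model with log-normal inter-animal variability and a naive pooled fit, a simplification standing in for the population estimator used in the study; all numbers are illustrative:

      import numpy as np

      rng = np.random.default_rng(6)
      dose, n_animals, n_rep = 10.0, 12, 500

      def simulate_design(times):
          # Percent prediction error of CL for a destructive design in which
          # each animal is sampled once (naive pooled log-linear fit).
          CL_pop, V_pop, omega, sd = 1.0, 5.0, 0.2, 0.10
          t = np.resize(times, n_animals)       # assign times across animals
          pe = []
          for _ in range(n_rep):
              CL = CL_pop * np.exp(rng.normal(0, omega, n_animals))
              V = V_pop * np.exp(rng.normal(0, omega, n_animals))
              conc = dose / V * np.exp(-CL / V * t) \
                     * np.exp(rng.normal(0, sd, n_animals))
              slope, intercept = np.polyfit(t, np.log(conc), 1)
              V_hat = dose / np.exp(intercept)
              CL_hat = -slope * V_hat
              pe.append(100 * (CL_hat - CL_pop) / CL_pop)
          return np.mean(pe), np.std(pe)

      for design in ([0.5, 2.0, 8.0], [0.5, 2.0, 5.0, 8.0]):
          bias, spread = simulate_design(design)
          print(f"design {design}: bias {bias:+.1f}%, sd {spread:.1f}%")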

  11. Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 2: Mission payloads subsystem description

    NASA Technical Reports Server (NTRS)

    Dupnick, E.; Wiggins, D.

    1980-01-01

    The scheduling algorithm for mission planning and logistics evaluation (SAMPLE) is presented. Two major subsystems are included: The mission payloads program; and the set covering program. Formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.

  12. Mathematical estimation of the level of microbial contamination on spacecraft surfaces by volumetric air sampling

    NASA Technical Reports Server (NTRS)

    Oxborrow, G. S.; Roark, A. L.; Fields, N. D.; Puleo, J. R.

    1974-01-01

    Microbiological sampling methods presently used for enumeration of microorganisms on spacecraft surfaces require contact with easily damaged components. Estimation of viable particles on surfaces using air sampling methods in conjunction with a mathematical model would be desirable. Parameters necessary for the mathematical model are the effect of angled surfaces on viable particle collection and the number of viable cells per viable particle. Deposition of viable particles on angled surfaces closely followed a cosine function, and the number of viable cells per viable particle was consistent with a Poisson distribution. Other parameters considered by the mathematical model included deposition rate and fractional removal per unit time. A close nonlinear correlation between volumetric air sampling and airborne fallout on surfaces was established with all fallout data points falling within the 95% confidence limits as determined by the mathematical model.

  13. QA/QC requirements for physical properties sampling and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Innis, B.E.

    1993-07-21

    This report presents results of an assessment of the available information concerning US Environmental Protection Agency (EPA) quality assurance/quality control (QA/QC) requirements and guidance applicable to sampling, handling, and analyzing physical parameter samples at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) investigation sites. Geotechnical testing laboratories measure the following physical properties of soil and sediment samples collected during CERCLA remedial investigations (RI) at the Hanford Site: moisture content, grain size by sieve, grain size by hydrometer, specific gravity, bulk density/porosity, saturated hydraulic conductivity, moisture retention, unsaturated hydraulic conductivity, and permeability of rocks by flowing air. Geotechnical testing laboratories also measure the following chemical parameters of soil and sediment samples collected during Hanford Site CERCLA RI: calcium carbonate and saturated column leach testing. Physical parameter data are used for (1) characterization of vadose and saturated zone geology and hydrogeology, (2) selection of monitoring well screen sizes, (3) support of modeling and analysis of the vadose and saturated zones, and (4) engineering design. The objectives of this report are to determine the QA/QC levels accepted in EPA Region 10 for the sampling, handling, and analysis of soil samples for physical parameters during CERCLA RI.

  14. Seasonal variation of human sperm cells among 4,422 semen samples: A retrospective study in Turkey.

    PubMed

    Ozelci, Runa; Yılmaz, Saynur; Dilbaz, Berna; Akpınar, Funda; Akdag Cırık, Derya; Dilbaz, Serdar; Ocal, Aslı

    2016-12-01

    We aimed to assess the possible presence of a seasonal pattern in three semen parameters, sperm concentration, morphology, and motility, as a function of the time of ejaculation and sperm production (spermatogenesis) in normal and oligozoospermic men. This retrospective study included a consecutive series of 4,422 semen samples collected from patients as part of the basic evaluation of infertile couples attending the Reproductive Endocrine Outpatient Clinic of a tertiary women's hospital in Ankara, Turkey, between January 1, 2012 and December 31, 2013. The samples were classified according to sperm concentration, ≥15 × 10⁶/mL as normozoospermic samples and 4-14.99 × 10⁶/mL as oligozoospermic samples, and seasonal analysis of the semen samples was carried out separately for each group. When the data were analyzed according to the season of semen production, there was no seasonal effect on sperm concentration. A gradual and consistent decrease in the rate of sperm with fast forward motility was observed from spring to fall, with a recovery noticed during the winter. The percentage of sperm with normal morphology was found to be statistically significantly higher in the spring samples compared with the summer samples (p=0.001). Both normozoospermic and oligozoospermic semen samples appeared to have better sperm parameters in spring and winter. The circannual variation of semen parameters may be important in diagnosis and treatment decisions. WHO: World Health Organization; mRNA: messenger ribonucleic acid.

  15. A three-parameter asteroid taxonomy

    NASA Technical Reports Server (NTRS)

    Tedesco, Edward F.; Williams, James G.; Matson, Dennis L.; Veeder, Glenn J.; Gradie, Jonathan C.

    1989-01-01

    Broadband U, V, and x photometry together with IRAS asteroid albedos has been used to construct an asteroid classification system. The system is based on three parameters (the U-V and v-x color indices and the visual geometric albedo), and it is able to place 96 percent of the present sample of 357 asteroids into 11 taxonomic classes. All but one of these classes are analogous to those previously found using other classification schemes. The algorithm is shown to account for the observational uncertainties in each of the classification parameters.

  16. The ionisation parameter of star-forming galaxies evolves with the specific star formation rate

    NASA Astrophysics Data System (ADS)

    Kaasinen, Melanie; Kewley, Lisa; Bian, Fuyan; Groves, Brent; Kashino, Daichi; Silverman, John; Kartaltepe, Jeyhan

    2018-04-01

    We investigate the evolution of the ionisation parameter of star-forming galaxies using a high-redshift (z ˜ 1.5) sample from the FMOS-COSMOS survey and matched low-redshift samples from the Sloan Digital Sky Survey. By constructing samples of low-redshift galaxies for which the stellar mass (M*), star formation rate (SFR) and specific star formation rate (sSFR) are matched to the high-redshift sample we remove the effects of an evolution in these properties. We also account for the effect of metallicity by jointly constraining the metallicity and ionisation parameter of each sample. We find an evolution in the ionisation parameter for main-sequence, star-forming galaxies and show that this evolution is driven by the evolution of sSFR. By analysing the matched samples as well as a larger sample of z < 0.3, star-forming galaxies we show that high ionisation parameters are directly linked to high sSFRs and are not simply the byproduct of an evolution in metallicity. Our results are physically consistent with the definition of the ionisation parameter, a measure of the hydrogen ionising photon flux relative to the number density of hydrogen atoms.

  17. Identifying parameter regions for multistationarity

    PubMed Central

    Conradi, Carsten; Mincheva, Maya; Wiuf, Carsten

    2017-01-01

    Mathematical modelling has become an established tool for studying the dynamics of biological systems. Current applications range from building models that reproduce quantitative data to identifying systems with predefined qualitative features, such as switching behaviour, bistability or oscillations. Mathematically, the latter question amounts to identifying parameter values associated with a given qualitative feature. We introduce a procedure to partition the parameter space of a parameterized system of ordinary differential equations into regions for which the system has a unique or multiple equilibria. The procedure is based on the computation of the Brouwer degree, and it creates a multivariate polynomial with parameter-dependent coefficients. The signs of the coefficients determine parameter regions with and without multistationarity. A particular strength of the procedure is the avoidance of numerical analysis and parameter sampling. The procedure consists of a number of steps. Each of these steps might be addressed algorithmically using various computer programs and available software, or manually. We demonstrate our procedure on several models of gene transcription and cell signalling, and show that in many cases we obtain a complete partitioning of the parameter space with respect to multistationarity. PMID:28972969

  18. Impact of Reservoir Fluid Saturation on Seismic Parameters: Endrod Gas Field, Hungary

    NASA Astrophysics Data System (ADS)

    El Sayed, Abdel Moktader A.; El Sayed, Nahla A.

    2017-12-01

    Outlining the reservoir fluid types and saturation is the main objective of the present research work. 37 core samples were collected from three different gas-bearing zones in the Endrod gas field in Hungary, belonging to the Miocene and the Upper-Lower Pliocene. These samples were prepared and laboratory measurements were conducted. Compressional and shear wave velocities were measured using the Sonic Viewer-170-OYO, at frequencies of 63 and 33 kHz, respectively. All samples were subjected to complete petrophysical investigations. Sonic velocities and mechanical parameters such as Young's modulus, rigidity, and bulk modulus were measured with the samples at 100%, 75%, and 0% brine saturation. Several plots have been produced to show the relationship between seismic parameters and saturation percentages. Robust relationships were obtained, showing the impact of fluid saturation on seismic parameters. Seismic velocity, Poisson's ratio, bulk modulus and rigidity prove to be applicable during hydrocarbon exploration and production stages. Relationships among the measured seismic parameters in gas/water fully and partially saturated samples are useful for outlining the fluid type and saturation percentage, especially in gas/water transition zones.
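    The dynamic elastic parameters follow from the measured velocities and bulk density through the standard isotropic relations, e.g. μ = ρVs² and ν = (Vp² - 2Vs²)/(2(Vp² - Vs²)). A minimal sketch with hypothetical dry and brine-saturated values standing in for the core measurements:

      import numpy as np

      def elastic_moduli(vp, vs, rho):
          # Dynamic elastic parameters from compressional and shear velocities
          # (m/s) and bulk density (kg/m^3); standard isotropic relations.
          mu = rho * vs ** 2                             # rigidity (shear modulus)
          k = rho * (vp ** 2 - 4.0 / 3.0 * vs ** 2)      # bulk modulus
          nu = (vp ** 2 - 2 * vs ** 2) / (2 * (vp ** 2 - vs ** 2))  # Poisson ratio
          e = 2 * mu * (1 + nu)                          # Young's modulus
          return k, mu, e, nu

      # Hypothetical dry vs fully brine-saturated sandstone values, stand-ins
      # for the measured Endrod core data.
      for label, vp, vs, rho in [("0% brine (gas/dry)", 3200.0, 2100.0, 2250.0),
                                 ("100% brine",         3600.0, 2050.0, 2350.0)]:
          k, mu, e, nu = elastic_moduli(vp, vs, rho)
          print(f"{label}: K={k/1e9:.1f} GPa, mu={mu/1e9:.1f} GPa, nu={nu:.2f}")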

  19. Systematic parameter inference in stochastic mesoscopic modeling

    NASA Astrophysics Data System (ADS)

    Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion, using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms, given the prior knowledge that the coefficients are "sparse". The proposed method shows accuracy comparable to the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desired values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
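
    A minimal sketch of the sparse-recovery step under simplifying assumptions (a one-dimensional Legendre chaos and synthetic data; the paper works with multivariate gPC expansions and DPD simulations, and Lasso here stands in for the compressive sensing solver):

      import numpy as np
      from numpy.polynomial import legendre
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      n_samples, order = 20, 15            # fewer samples than basis terms
      xi = rng.uniform(-1, 1, n_samples)   # sampled model parameters

      # Hypothetical target property with a sparse gPC representation.
      y = 1.0 + 0.8 * legendre.Legendre.basis(2)(xi) + 0.3 * legendre.Legendre.basis(5)(xi)

      # Measurement matrix: Legendre polynomials evaluated at the sample points.
      Phi = np.column_stack([legendre.Legendre.basis(k)(xi) for k in range(order + 1)])

      fit = Lasso(alpha=1e-3, max_iter=50000).fit(Phi, y)
      print(np.round(fit.coef_, 2))        # dominant coefficients recovered at k = 2 and 5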

  20. Systematic parameter inference in stochastic mesoscopic modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Huan; Yang, Xiu; Li, Zhen

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion, using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms, given the prior knowledge that the coefficients are “sparse”. The proposed method shows accuracy comparable to the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desired values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.

  1. Parameters for scale-up of lethal microwave treatment to eradicate cerambycid larvae infesting solid wood packing materials

    Treesearch

    Mary R. Fleming; John J. Janowiak; Joseph Kearns; Jeffrey E. Shield; Rustum Roy; Dinesh K. Agrawal; Leah S. Bauer; Kelli Hoover

    2004-01-01

    The use of microwave irradiation to eradicate insects infesting wood used to manufacture packing materials such as pallets and crates was evaluated. The focus of this preliminary study was to determine which microwave parameters, including chamber-volume to sample-volume ratios, variations of power and time, and energy density (total microwave power/wood volume), affect the...

  2. Rapid mapping of compound eye visual sampling parameters with FACETS, a highly automated wide-field goniometer.

    PubMed

    Douglass, John K; Wehling, Martin F

    2016-12-01

    A highly automated goniometer instrument (called FACETS) has been developed to facilitate rapid mapping of compound eye parameters for investigating regional visual field specializations. The instrument demonstrates the feasibility of analyzing the complete field of view of an insect eye in a fraction of the time required if using non-motorized, non-computerized methods. Faster eye mapping makes it practical for the first time to employ sample sizes appropriate for testing hypotheses about the visual significance of interspecific differences in regional specializations. Example maps of facet sizes are presented from four dipteran insects representing the Asilidae, Calliphoridae, and Stratiomyidae. These maps provide the first quantitative documentation of the frontal enlarged-facet zones (EFZs) that typify asilid eyes, which, together with the EFZs in male Calliphoridae, are likely to be correlated with high-spatial-resolution acute zones. The presence of EFZs contrasts sharply with the almost homogeneous distribution of facet sizes in the stratiomyid. Moreover, the shapes of EFZs differ among species, suggesting functional specializations that may reflect differences in visual ecology. Surveys of this nature can help identify species that should be targeted for additional studies, which will elucidate fundamental principles and constraints that govern visual field specializations and their evolution.

  3. Evaluation of growth performance, serum biochemistry and haematological parameters on broiler birds fed with raw and processed samples of Entada scandens, Canavalia gladiata and Canavalia ensiformis seed meal as an alternative protein source.

    PubMed

    Sasipriya, Gopalakrishnan; Siddhuraju, Perumal

    2013-03-01

    The experiment was carried out to investigate the inclusion of the underutilised legumes Entada scandens, Canavalia gladiata and Canavalia ensiformis as seed meal in a soybean-based diet for broilers. The utilisation of these wild legumes is limited by the presence of antinutrient compounds. Processing methods, namely soaking followed by autoclaving in sodium bicarbonate solution for E. scandens and C. gladiata, and soaking followed by autoclaving in ash solution for C. ensiformis, were adopted. The proximate composition of raw and processed samples of E. scandens, C. gladiata and C. ensiformis was determined. The protein content was enhanced in processed samples of E. scandens (46%) and C. ensiformis (16%). This processing reduced most of the antinutrients, such as tannins (10-100%), trypsin inhibitor activity (99%), chymotrypsin inhibitor activity (72-100%), canavanine (60-62%), amylase inhibitor activity (73-100%), saponins (78-92%), phytic acid (19-40%) and lectins. Hence, raw samples at 15% and processed samples at 15% and 30% replaced soybean protein in a commercial broiler diet. Birds fed with 30% processed samples of E. scandens, C. gladiata and C. ensiformis showed statistically similar growth performance, carcass characteristics, organ weights, haematological parameters and serum biochemical parameters (cholesterol, protein, bilirubin, albumin, globulin, and liver and kidney function parameters) without any adverse effects after 42 days of supplementation. Proper utilisation of these underutilised legumes may provide an alternative protein ingredient in poultry diets.

  4. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    PubMed Central

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
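
    For a scalar parameter, the ESS referred to above is commonly estimated from the sample autocorrelation; a minimal sketch of that standard estimator (the authors' contribution is extending ESS to tree topologies, which this toy does not attempt):

      import numpy as np

      def effective_sample_size(x):
          """ESS = n / (1 + 2*sum(rho_k)), truncated at the first non-positive lag."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          xc = x - x.mean()
          acf = np.correlate(xc, xc, mode='full')[n - 1:] / (np.arange(n, 0, -1) * x.var())
          tau = 1.0
          for k in range(1, n):
              if acf[k] <= 0:
                  break
              tau += 2.0 * acf[k]
          return n / tau

      # An autocorrelated AR(1) chain: the ESS is far below the raw sample count.
      rng = np.random.default_rng(1)
      chain = np.zeros(10000)
      for t in range(1, len(chain)):
          chain[t] = 0.9 * chain[t - 1] + rng.normal()
      print(round(effective_sample_size(chain)))  # roughly 10000*(1-0.9)/(1+0.9), i.e. ~500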

  5. VizieR Online Data Catalog: Orbital parameters of Kuiper Belt objects (Volk+, 2017)

    NASA Astrophysics Data System (ADS)

    Volk, K.; Malhotra, R.

    2017-11-01

    Our starting point is the list of minor planets in the outer solar system cataloged in the database of the Minor Planet Center (http://www.minorplanetcenter.net/iau/lists/t_centaurs.html and http://www.minorplanetcenter.net/iau/lists/t_tnos.html) as of 2016 October 20. The complete listing of our sample, including best-fit orbital parameters and sky locations, is provided in Table 1. (1 data file).

  6. Calculation of Optical Parameters of Liquid Crystals

    NASA Astrophysics Data System (ADS)

    Kumar, A.

    2007-12-01

    Validation of a modified four-parameter model describing the temperature effect on liquid crystal refractive indices is reported in the present article. This model is based upon the Vuks equation. Experimental data of ordinary and extraordinary refractive indices for two liquid crystal samples, MLC-9200-000 and MLC-6608, are used to validate the above-mentioned theoretical model. Using these experimental data, the birefringence, order parameter, normalized polarizabilities, and temperature gradient of the refractive indices are determined. Two methods are adopted for the determination of the order parameter: direct use of birefringence measurements and Haller's extrapolation procedure. Both approaches to order parameter calculation are compared. The temperature dependences of all these parameters are discussed. A close agreement between theory and experiment is obtained.

  7. Identification of control parameters for the sulfur gas storability with bag sampling methods

    USDA-ARS?s Scientific Manuscript database

    Air samples containing sulfur compounds are often collected and stored in sample bags prior to analyses. The storage stability of six gaseous sulfur compounds (H2S, CH3SH, DMS, CS2, DMDS and SO2) was compared between two different bag materials (polyvinyl fluoride (PVF) and polyester aluminum (PEA))...

  8. Outcome-Dependent Sampling with Interval-Censored Failure Time Data

    PubMed Central

    Zhou, Qingning; Cai, Jianwen; Zhou, Haibo

    2017-01-01

    Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method works well for practical situations and is more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664

  9. Material characterization in partially filled waveguides using inverse scattering and multiple sample orientations

    NASA Astrophysics Data System (ADS)

    Sjöberg, Daniel; Larsson, Christer

    2015-06-01

    We present a method aimed at reducing uncertainties and instabilities when characterizing materials in waveguide setups. The method is based on measuring the S parameters for three different orientations of a rectangular sample block in a rectangular waveguide. The corresponding geometries are modeled in a commercial full-wave simulation program, taking any material parameters as input. The material parameters of the sample are found by minimizing the squared distance between measured and calculated S parameters. The information added by the different sample orientations is quantified using the Cramér-Rao lower bound. The flexibility of the method allows the determination of material parameters of an arbitrarily shaped sample that fits in the waveguide.
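
    A minimal sketch of the inversion step, with a stand-in forward model (the paper drives a commercial full-wave solver; the single-parameter model and all numbers below are assumptions):

      import numpy as np
      from scipy.optimize import minimize

      def forward_model(eps_r, orientation):
          """Hypothetical solver stub returning a complex S21 for one sample orientation."""
          return np.exp(-0.1j * eps_r * (1.0 + 0.2 * orientation))

      orientations = [0, 1, 2]
      eps_true = 4.3
      measured = [forward_model(eps_true, o) for o in orientations]

      def cost(p):
          """Squared distance between measured and modeled S-parameters, orientations pooled."""
          return sum(abs(forward_model(p[0], o) - m)**2 for o, m in zip(orientations, measured))

      res = minimize(cost, x0=[2.0], method='Nelder-Mead')
      print(res.x)  # recovers ~4.3 under the stub model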

  10. Parameters of Concrete Modified with Glass Meal and Chalcedonite Dust

    NASA Astrophysics Data System (ADS)

    Kotwa, Anna

    2017-10-01

    Additives used for production of concrete mixtures affect the rheological properties and the parameters of hardened concrete, including compressive strength, water resistance, durability and shrinkage. Their application can reduce the use of cement and production costs. The scheduled program of laboratory tests included preparation of six batches of concrete mixtures with addition of glass meal and/or chalcedonite dust. Mineral dust is a waste product obtained from crushed aggregate mining, with grain size below 0.063 μm. The main ingredient of chalcedonite dust is silica. The glass meal used in the study is a material with very fine grain size, less than 65 μm; 60%-90% of the sample falls within this particle size. Additives were used to replace cement in the concrete mixes in amounts of 15% and 25%. The amount of aggregate was left unchanged. The study used Portland cement CEM I 42.5R. Concrete mixes were prepared with a constant ratio w/s = 0.4. The aim of the study was to identify the effect of the addition of chalcedonite dust and/or glass meal on the parameters of hardened concrete, i.e. compressive strength, water absorption and capillarity. The additives used in the laboratory tests significantly affect the compressive strength. The largest decrease in compressive strength, 34.35%, was recorded for samples in which 50% of the cement was replaced by additives. The smallest decrease in compressive strength, an average of 15%, was noted in concrete with the addition of 15% chalcedonite dust or 15% glass meal. The study of absorption shows that all concretes with the addition of chalcedonite dust and glass meal gained a percentage weight increase between 2.7% and 3.1% for the test batches. This is a very good result, which is probably due to grout sealing. In capillary action for the test batches, the percentage weight gain of samples ranges from 4.6% to 5.1%. However, the reference concrete obtained

  11. Automated modal parameter estimation using correlation analysis and bootstrap sampling

    NASA Astrophysics Data System (ADS)

    Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

    2018-02-01

    The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences with the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement, dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset, and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together, and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to
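
    A minimal sketch of the kind of correlation metric such clustering rests on, here the standard Modal Assurance Criterion between two mode-shape vectors (the paper's actual metrics additionally handle eigenvector aliasing and coalescent modes; this is an assumption-level illustration):

      import numpy as np

      def mac(phi1, phi2):
          """Modal Assurance Criterion in [0, 1]; 1 means identical shapes up to scaling."""
          return abs(np.vdot(phi1, phi2))**2 / (np.vdot(phi1, phi1).real * np.vdot(phi2, phi2).real)

      rng = np.random.default_rng(2)
      mode = rng.normal(size=8) + 1j * rng.normal(size=8)
      noisy = mode + 0.05 * (rng.normal(size=8) + 1j * rng.normal(size=8))
      print(round(mac(mode, noisy), 3))                    # near 1: same cluster
      print(round(mac(mode, rng.normal(size=8) + 0j), 3))  # low: different cluster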

  12. Inverse sampling regression for pooled data.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Eskridge, Kent; Crossa, José

    2017-06-01

    Because pools are tested instead of individuals in group testing, this technique is helpful for estimating prevalence in a population or for classifying a large number of individuals into two groups at a low cost. For this reason, group testing is a well-known means of saving costs and producing precise estimates. In this paper, we developed a mixed-effect group testing regression that is useful when the data-collecting process is performed using inverse sampling. This model allows including covariate information at the individual level to incorporate heterogeneity among individuals and identify which covariates are associated with positive individuals. We present an approach to fit this model using maximum likelihood and we performed a simulation study to evaluate the quality of the estimates. Based on the simulation study, we found that the proposed regression method for inverse sampling with group testing produces parameter estimates with low bias when the pre-specified number of positive pools (r) to stop the sampling process is at least 10 and the number of clusters in the sample is also at least 10. We performed an application with real data and we provide an NLMIXED code that researchers can use to implement this method.
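
    A minimal sketch of the basic group-testing estimator such regressions build on (the simple prevalence MLE for equal-sized pools and a perfect assay, not the authors' mixed-effect model under inverse sampling):

      def prevalence_mle(n_positive_pools, n_pools, pool_size):
          """MLE of individual prevalence from pooled test results."""
          phat = n_positive_pools / n_pools            # proportion of positive pools
          return 1.0 - (1.0 - phat) ** (1.0 / pool_size)

      # Hypothetical survey: 23 of 100 pools of 10 individuals test positive.
      print(round(prevalence_mle(23, 100, 10), 4))     # ~0.0258 individual prevalence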

  13. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  14. Finite-sample and asymptotic sign-based tests for parameters of non-linear quantile regression with Markov noise

    NASA Astrophysics Data System (ADS)

    Sirenko, M. A.; Tarasenko, P. F.; Pushkarev, M. I.

    2017-01-01

    One of the most noticeable features of sign-based statistical procedures is the opportunity to build an exact test for simple hypothesis testing of parameters in a regression model. In this article, we extend the sign-based approach to the nonlinear case with dependent noise. The examined model is a multi-quantile regression, which makes it possible to test hypotheses not only about regression parameters, but about noise parameters as well.

  15. SPOTting model parameters using a ready-made Python package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, like the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding on the most suitable method. Further, it enables testing and comparing different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for
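
    As a flavour of one sampler in the list above, a minimal Latin Hypercube Sampling step written in plain numpy rather than against the SPOT API (the two-parameter bounds and the use of the Rosenbrock function as a stand-in model are assumptions):

      import numpy as np

      def latin_hypercube(n, bounds, rng):
          """One stratified sample per bin per dimension, with dimensions decoupled."""
          d = len(bounds)
          u = (rng.random((n, d)) + np.arange(n)[:, None]) / n  # stratified in [0, 1)
          for j in range(d):
              rng.shuffle(u[:, j])
          lo = np.array([b[0] for b in bounds])
          hi = np.array([b[1] for b in bounds])
          return lo + u * (hi - lo)

      def rosenbrock(p):
          return (1 - p[..., 0])**2 + 100 * (p[..., 1] - p[..., 0]**2)**2

      rng = np.random.default_rng(3)
      samples = latin_hypercube(200, [(-2, 2), (-1, 3)], rng)
      print(samples[np.argmin(rosenbrock(samples))])  # approaches the optimum (1, 1)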

  16. GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no

    2013-11-10

    We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
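
    A minimal sketch of the grid-cell expansion idea on a toy two-dimensional Gaussian log-likelihood (our own reduction of the approach, not the Snake code): evaluate cells in order of decreasing likelihood and stop once the frontier falls below a threshold.

      import heapq

      def log_like(i, j, h=0.2):
          x, y = i * h, j * h
          return -0.5 * (x**2 + y**2)             # toy Gaussian, peak at the origin

      def snake_explore(threshold=-8.0):
          seen = {(0, 0)}
          frontier = [(-log_like(0, 0), (0, 0))]  # min-heap on negative log-likelihood
          evaluated = {}
          while frontier:
              neg_ll, (i, j) = heapq.heappop(frontier)
              if -neg_ll < threshold:
                  break                           # every remaining cell is less likely
              evaluated[(i, j)] = -neg_ll
              for nb in [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]:
                  if nb not in seen:
                      seen.add(nb)
                      heapq.heappush(frontier, (-log_like(*nb), nb))
          return evaluated

      print(len(snake_explore()), "cells evaluated above the threshold")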

  17. Four-year stability of anthropometric and cardio-metabolic parameters in a prospective cohort of older adults.

    PubMed

    Jackson, Sarah E; van Jaarsveld, Cornelia Hm; Beeken, Rebecca J; Gunter, Marc J; Steptoe, Andrew; Wardle, Jane

    2015-01-01

    To examine the medium-term stability of anthropometric and cardio-metabolic parameters in the general population. Participants were 5160 men and women from the English Longitudinal Study of Ageing (age ≥50 years) assessed in 2004 and 2008. Anthropometric data included height, weight, BMI and waist circumference. Cardio-metabolic parameters included blood pressure, serum lipids (total cholesterol, HDL, LDL, triglycerides), hemoglobin, fasting glucose, fibrinogen and C-reactive protein. Stability of anthropometric variables was high (all intraclass correlations >0.92), although mean values changed slightly (-0.01 kg weight, +1.33 cm waist). Cardio-metabolic parameters showed more variation: correlations ranged from 0.43 (glucose) to 0.81 (HDL). The majority of participants (71-97%) remained in the same grouping relative to established clinical cut-offs. Over a 4-year period, anthropometric and cardio-metabolic parameters showed good stability. These findings suggest that when no means to obtain more recent data exist, a one-time sample will give a reasonable approximation to average levels over the medium-term, although reliability is reduced.

  18. Sampling design for long-term regional trends in marine rocky intertidal communities

    USGS Publications Warehouse

    Irvine, Gail V.; Shelley, Alice

    2013-01-01

    Probability-based designs reduce bias and allow inference of results to the pool of sites from which they were chosen. We developed and tested probability-based designs for monitoring marine rocky intertidal assemblages at Glacier Bay National Park and Preserve (GLBA), Alaska. A multilevel design was used that varied in scale and inference. The levels included aerial surveys, extensive sampling of 25 sites, and more intensive sampling of 6 sites. Aerial surveys of a subset of intertidal habitat indicated that the original target habitat of bedrock-dominated sites with slope ≤30° was rare. This unexpected finding illustrated one value of probability-based surveys and led to a shift in the target habitat type to include steeper, more mixed rocky habitat. Subsequently, we evaluated the statistical power of different sampling methods and sampling strategies to detect changes in the abundances of the predominant sessile intertidal taxa: barnacles Balanomorpha, the mussel Mytilus trossulus, and the rockweed Fucus distichus subsp. evanescens. There was greatest power to detect trends in Mytilus and lesser power for barnacles and Fucus. Because of its greater power, the extensive, coarse-grained sampling scheme was adopted in subsequent years over the intensive, fine-grained scheme. The sampling attributes that had the largest effects on power included sampling of “vertical” line transects (vs. horizontal line transects or quadrats) and increasing the number of sites. We also evaluated the power of several management-set parameters. Given equal sampling effort, sampling more sites fewer times had greater power. The information gained through intertidal monitoring is likely to be useful in assessing changes due to climate, including ocean acidification; invasive species; trampling effects; and oil spills.

  19. Systematic Evaluation of Non-Uniform Sampling Parameters in the Targeted Analysis of Urine Metabolites by 1H,1H 2D NMR Spectroscopy.

    PubMed

    Schlippenbach, Trixi von; Oefner, Peter J; Gronwald, Wolfram

    2018-03-09

    Non-uniform sampling (NUS) allows the accelerated acquisition of multidimensional NMR spectra. The aim of this contribution was the systematic evaluation of the impact of various quantitative NUS parameters on the accuracy and precision of 2D NMR measurements of urinary metabolites. Urine aliquots spiked with varying concentrations (15.6-500.0 µM) of tryptophan, tyrosine, glutamine, glutamic acid, lactic acid, and threonine, which can only be resolved fully by 2D NMR, were used to assess the influence of the sampling scheme, reconstruction algorithm, amount of omitted data points, and seed value on the quantitative performance of NUS in 1H,1H-TOCSY and 1H,1H-COSY45 NMR spectroscopy. Sinusoidal Poisson-gap sampling and a compressed sensing approach employing the iterative re-weighted least squares method for spectral reconstruction allowed a 50% reduction in measurement time while maintaining sufficient quantitative accuracy and precision for both types of homonuclear 2D NMR spectroscopy. Together with other advances in instrument design, such as state-of-the-art cryogenic probes, use of 2D NMR spectroscopy in large biomedical cohort studies seems feasible.
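
    A minimal sketch of a sinusoidal Poisson-gap schedule of the kind evaluated above (a simplified, assumption-level construction; production implementations tune the schedule more carefully):

      import numpy as np

      def poisson_gap(n_total, n_keep, seed=0):
          """Pick n_keep of n_total increments; gaps grow sinusoidally toward late times."""
          rng = np.random.default_rng(seed)
          lam = n_total / n_keep - 1.0          # initial mean gap
          for _ in range(100000):
              points, i = [], 0
              while i < n_total:
                  points.append(i)
                  gap = rng.poisson(lam * np.sin((i + 0.5) / n_total * np.pi / 2))
                  i += 1 + gap
              if len(points) == n_keep:
                  return np.array(points)
              lam *= len(points) / n_keep       # re-tune the mean gap and retry
          raise RuntimeError("schedule did not converge")

      schedule = poisson_gap(n_total=256, n_keep=128, seed=42)  # 50% data reduction
      print(len(schedule), "of 256 increments sampled; first few:", schedule[:8])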

  20. Discrimination of Clover and Citrus Honeys from Egypt According to Floral Type Using Easily Assessable Physicochemical Parameters and Discriminant Analysis: An External Validation of the Chemometric Approach.

    PubMed

    Karabagias, Ioannis K; Karabournioti, Sofia

    2018-05-03

    Twenty-two honey samples, namely clover and citrus honeys, were collected from the greater Cairo area during the harvesting year 2014-2015. The main purpose of the present study was to characterize the aforementioned honey types and to investigate whether the use of easily assessable physicochemical parameters, including color attributes in combination with chemometrics, could differentiate honey floral origin. Parameters taken into account were: pH, electrical conductivity, ash, free acidity, lactonic acidity, total acidity, moisture content, total sugars (degrees Brix-°Bx), total dissolved solids and their ratio to total acidity, salinity, CIELAB color parameters, along with browning index values. Results showed that all honey samples analyzed met the European quality standards set for honey and had variations in the aforementioned physicochemical parameters depending on floral origin. Application of linear discriminant analysis showed that eight physicochemical parameters, including color, could classify Egyptian honeys according to floral origin (p < 0.05). Correct classification rate was 95.5% using the original method and 90.9% using the cross validation method. The discriminatory ability of the developed model was further validated using unknown honey samples. The overall correct classification rate was not affected. Specific physicochemical parameter analysis in combination with chemometrics has the potential to enhance the differences in floral honeys produced in a given geographical zone.

  1. Discrimination of Clover and Citrus Honeys from Egypt According to Floral Type Using Easily Assessable Physicochemical Parameters and Discriminant Analysis: An External Validation of the Chemometric Approach

    PubMed Central

    Karabournioti, Sofia

    2018-01-01

    Twenty-two honey samples, namely clover and citrus honeys, were collected from the greater Cairo area during the harvesting year 2014–2015. The main purpose of the present study was to characterize the aforementioned honey types and to investigate whether the use of easily assessable physicochemical parameters, including color attributes in combination with chemometrics, could differentiate honey floral origin. Parameters taken into account were: pH, electrical conductivity, ash, free acidity, lactonic acidity, total acidity, moisture content, total sugars (degrees Brix-°Bx), total dissolved solids and their ratio to total acidity, salinity, CIELAB color parameters, along with browning index values. Results showed that all honey samples analyzed met the European quality standards set for honey and had variations in the aforementioned physicochemical parameters depending on floral origin. Application of linear discriminant analysis showed that eight physicochemical parameters, including color, could classify Egyptian honeys according to floral origin (p < 0.05). Correct classification rate was 95.5% using the original method and 90.9% using the cross validation method. The discriminatory ability of the developed model was further validated using unknown honey samples. The overall correct classification rate was not affected. Specific physicochemical parameter analysis in combination with chemometrics has the potential to enhance the differences in floral honeys produced in a given geographical zone. PMID:29751543

  2. Evaluation of Chemical Warfare Agent Wipe Sampling ...

    EPA Pesticide Factsheets

    This investigation tested specific chemical warfare agents (CWAs), including sarin (GB), soman (GD), cyclosarin (GF), sulfur mustard (HD), and O-ethyl-S-(2-diisopropylaminoethyl) methylphosphonothioate (VX), on the non-ideal (e.g., porous and permeable) surfaces of drywall, vinyl tile, wood, laminate, and coated glass. Pesticides (diazinon and malathion) were used so that a comparison with existing literature data is possible (1). Experiments included testing with coupons having surface areas of 10 cm2 and 100 cm2. The 10-cm2 coupons were of a size that could easily be extracted in a 2 oz jar (to provide comparative data for CWA recoveries generated by direct extraction) and the 100-cm2 coupons better represented the area of a surface that might typically be sampled by wipe extraction. In addition, CWAs were spiked on coupons of the tested surfaces at a normalized surface concentration of 0.1 µg per cm2 of surface area. Wipes were wetted with either dichloromethane (DCM) or isopropanol (IPA) before sampling for CWA. Experimental parameters include multiple wipe types, porous/permeable surfaces, coupon surface area, solvent used to wet the wipe (i.e., wetting solvent), and the utility of VX-d14 as an extracted internal standard.

  3. Shipborne Missile Fire Frequency with Non-Constant Parameters

    NASA Astrophysics Data System (ADS)

    Dong, Shaquan

    2018-01-01

    For the modeling problem of shipborne missile fire frequency, fire frequency models with non-constant parameters are proposed, including maximum fire frequency models and actual fire frequency models, which can be used to calculate the missile fire frequency when the parameters are not constant.

  4. Aircraft data summaries for the SURE intensives. Final report. [Sampling done August 1977 near Rockport, Indiana and Duncan Falls, Ohio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumenthal, D.L.; Tommerdahl, J.B.; McDonald, J.A.

    1981-09-01

    As part of the EPRI sulfate regional experiment (SURE), Meteorology Research, Inc., (MRI) and Research Triangle Institute (RTI) conducted six air quality sampling programs in the eastern United States using instrumented aircraft. This volume includes the air quality and meteorological data obtained during the August 1977 Intensive when MRI sampled near the Rockport, Indiana, SURE Station and RTI sampled near the Duncan Falls, Ohio, SURE Station. Sampling data are presented for all measured parameters.

  5. Stability evaluation of quality parameters for palm oil products at low temperature storage.

    PubMed

    Ramli, Nur Aainaa Syahirah; Mohd Noor, Mohd Azmil; Musa, Hajar; Ghazali, Razmah

    2018-07-01

    Palm oil is one of the major oils and fats produced and traded worldwide. The value of palm oil products is mainly influenced by their quality. According to ISO 17025:2005, accredited laboratories require a quality control procedure with respect to monitoring the validity of tests for determination of quality parameters. This includes the regular use of internal quality control using secondary reference materials. Unfortunately, palm oil reference materials are not currently available. To establish internal quality control samples, the stability of quality parameters needs to be evaluated. In the present study, the stability of quality parameters for palm oil products was examined over 10 months at low temperature storage (6 ± 2 °C). The palm oil products tested included crude palm oil (CPO); refined, bleached and deodorized (RBD) palm oil (RBDPO); RBD palm olein (RBDPOo); and RBD palm stearin (RBDPS). The quality parameters of the oils [i.e. moisture content, free fatty acid content (FFA), iodine value (IV), fatty acids composition (FAC) and slip melting point (SMP)] were determined prior to and throughout the storage period. The moisture, FFA, IV, FAC and SMP for palm oil products changed significantly (P < 0.05), whereas the moisture content for CPO, IV for RBDPO and RBDPOo, stearic acid composition for CPO and linolenic acid composition for CPO, RBDPO, RBDPOo and RBDPS did not (P > 0.05). The stability study indicated that the quality of the palm oil products was stable within the specified limits throughout the storage period at low temperature. The storage conditions preserved the quality of palm oil products throughout the storage period. These findings qualify the use of the palm oil products CPO, RBDPO, RBDPOo and RBDPS as control samples in the validation of test results. © 2017 Society of Chemical Industry.

  6. The Effect of Including or Excluding Students with Testing Accommodations on IRT Calibrations.

    ERIC Educational Resources Information Center

    Karkee, Thakur; Lewis, Dan M.; Barton, Karen; Haug, Carolyn

    This study aimed to determine the degree to which the inclusion of accommodated students with disabilities in the calibration sample affects the characteristics of item parameters and the test results. Investigated were effects on test reliability, item fit to the applicable item response theory (IRT) model, item parameter estimates, and students'…

  7. The ionization parameter of star-forming galaxies evolves with the specific star formation rate

    NASA Astrophysics Data System (ADS)

    Kaasinen, Melanie; Kewley, Lisa; Bian, Fuyan; Groves, Brent; Kashino, Daichi; Silverman, John; Kartaltepe, Jeyhan

    2018-07-01

    We investigate the evolution of the ionization parameter of star-forming galaxies using a high-redshift (z ~ 1.5) sample from the FMOS-COSMOS (Fibre Multi-Object Spectrograph-COSMic evOlution Survey) and matched low-redshift samples from the Sloan Digital Sky Survey. By constructing samples of low-redshift galaxies for which the stellar mass (M*), star formation rate (SFR), and specific star formation rate (sSFR) are matched to the high-redshift sample, we remove the effects of an evolution in these properties. We also account for the effect of metallicity by jointly constraining the metallicity and ionization parameter of each sample. We find an evolution in the ionization parameter for main-sequence, star-forming galaxies and show that this evolution is driven by the evolution of sSFR. By analysing the matched samples as well as a larger sample of z < 0.3, star-forming galaxies we show that high ionization parameters are directly linked to high sSFRs and are not simply the by-product of an evolution in metallicity. Our results are physically consistent with the definition of the ionization parameter, a measure of the hydrogen ionizing photon flux relative to the number density of hydrogen atoms.
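
    The closing sentence corresponds to the standard dimensionless form U = Q(H) / (4 pi R^2 n_H c); a minimal sketch with illustrative numbers (not the paper's measurements):

      import math

      def ionization_parameter(q_h, radius_cm, n_h):
          """q_h: ionizing photons/s; radius_cm: distance from the source; n_h: cm^-3."""
          c = 2.998e10                                  # speed of light, cm/s
          flux = q_h / (4 * math.pi * radius_cm**2)     # ionizing photon flux
          return flux / (n_h * c)

      pc = 3.086e18                                     # 1 parsec in cm
      # Hypothetical HII region: Q(H) = 1e50 /s, R = 10 pc, n_H = 100 cm^-3.
      print(f"log U = {math.log10(ionization_parameter(1e50, 10 * pc, 100)):.2f}")  # ~ -2.6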

  8. The Grid[Way] Job Template Manager, a tool for parameter sweeping

    NASA Astrophysics Data System (ADS)

    Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.

    2011-04-01

    the input parameter sets, as well as management of the job template files, including job submission to the grid, control and information retrieval. Restrictions: The parameter sweep is limited by disk space during generation of the job templates. The wild-carding of parameters cannot be done in decreasing order. Job submission, control and information are delegated to the GridWay Metascheduler. Running time: From half a second for the simplest operation to a few minutes for thousands of exponential sampling parameters.

  9. Decision Models for Determining the Optimal Life Test Sampling Plans

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.

    2010-11-01

    A life test sampling plan is a technique consisting of sampling, inspection, and decision making to determine the acceptance or rejection of a batch of products through experiments examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to follow either a one-parameter exponential distribution, or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem, because a good plan not only helps producers save testing time and reduce testing cost, but can also positively affect the image of the product and thus attract more consumers. This paper develops frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans, with the aim of cost minimization, by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions, each with two unknown parameters, are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.

  10. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO) for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
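
    A minimal sketch of PSO on a toy chi-square surface (our own implementation with common coefficient choices, not the paper's CMB pipeline):

      import numpy as np

      def chi2(p):                                # toy surface, minimum at (0.3, 0.7)
          return 100 * ((p[:, 0] - 0.3)**2 + (p[:, 1] - 0.7)**2)

      rng = np.random.default_rng(4)
      n, d = 30, 2
      w, c1, c2 = 0.7, 1.5, 1.5                   # inertia and acceleration coefficients
      x = rng.uniform(0, 1, (n, d))               # particle positions
      v = np.zeros((n, d))                        # particle velocities
      pbest, pbest_f = x.copy(), chi2(x)          # personal bests
      gbest = pbest[np.argmin(pbest_f)]           # global best

      for _ in range(200):
          r1, r2 = rng.random((n, d)), rng.random((n, d))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = x + v
          f = chi2(x)
          better = f < pbest_f
          pbest[better], pbest_f[better] = x[better], f[better]
          gbest = pbest[np.argmin(pbest_f)]

      print(np.round(gbest, 3))                   # converges to ~(0.3, 0.7)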

  11. Communication: Multiple atomistic force fields in a single enhanced sampling simulation

    NASA Astrophysics Data System (ADS)

    Hoang Viet, Man; Derreumaux, Philippe; Nguyen, Phuong H.

    2015-07-01

    The main concerns of biomolecular dynamics simulations are the convergence of the conformational sampling and the dependence of the results on the force fields. While the first issue can be addressed by employing enhanced sampling techniques such as simulated tempering or replica exchange molecular dynamics, repeating these simulations with different force fields is very time consuming. Here, we propose an automatic method that includes different force fields in a single advanced sampling simulation. Conformational sampling using three all-atom force fields is enhanced by simulated tempering, and by formulating the weight parameters of the simulated tempering method in terms of the energy fluctuations, the system is able to perform a random walk in both temperature and force field spaces. The method is first demonstrated on a 1D system and then validated by the folding of the 10-residue chignolin peptide in explicit water.

  12. Direct sample introduction-gas chromatography-mass spectrometry for the determination of haloanisole compounds in cork stoppers.

    PubMed

    Cacho, J I; Nicolás, J; Viñas, P; Campillo, N; Hernández-Córdoba, M

    2016-12-02

    A solventless analytical method is proposed for analyzing the compounds responsible for cork taint in cork stoppers. Direct sample introduction (DSI) is evaluated as a sample introduction system for the gas chromatography-mass spectrometry (GC-MS) determination of four haloanisoles (HAs) in cork samples. Several parameters affecting the DSI step, including desorption temperature and time, gas flow rate and other focusing parameters, were optimized using univariate and multivariate approaches. The proposed method shows high sensitivity and minimises sample handling, with detection limits of 1.6-2.6 ng g-1, depending on the compound. The suitability of the optimized procedure as a screening method was evaluated by obtaining decision limits (CCα) and detection capabilities (CCβ) for each analyte, which were found to be in the ranges 6.9-11.8 and 8.7-14.8 ng g-1, respectively, depending on the compound. Twenty-four cork samples were analysed, and 2,4,6-trichloroanisole was found in four of them at levels between 12.6 and 53 ng g-1. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. IPO: a tool for automated optimization of XCMS parameters.

    PubMed

    Libiseller, Gunnar; Dvorzak, Michaela; Kleb, Ulrike; Gander, Edgar; Eisenberg, Tobias; Madeo, Frank; Neumann, Steffen; Trausinger, Gert; Sinner, Frank; Pieber, Thomas; Magnes, Christoph

    2015-04-16

    Untargeted metabolomics generates a huge amount of data. Software packages for automated data processing are crucial to successfully process these data. A variety of such software packages exist, but the outcome of data processing strongly depends on algorithm parameter settings. If they are not carefully chosen, suboptimal parameter settings can easily lead to biased results. Therefore, parameter settings also require optimization. Several parameter optimization approaches have already been proposed, but a software package for parameter optimization which is free of intricate experimental labeling steps, fast and widely applicable is still missing. We implemented the software package IPO ('Isotopologue Parameter Optimization'), which is fast, free of labeling steps, and applicable to data from different kinds of samples, different methods of liquid chromatography-high resolution mass spectrometry, and different instruments. IPO optimizes XCMS peak picking parameters by using natural, stable (13)C isotopic peaks to calculate a peak picking score. Retention time correction is optimized by minimizing relative retention time differences within peak groups. Grouping parameters are optimized by maximizing the number of peak groups that show one peak from each injection of a pooled sample. The different parameter settings are achieved by design of experiments, and the resulting scores are evaluated using response surface models. IPO was tested on three different data sets, each consisting of a training set and test set. IPO resulted in an increase of reliable groups (146%-361%), a decrease of non-reliable groups (3%-8%) and a decrease of the retention time deviation to one third. IPO was successfully applied to data derived from liquid chromatography coupled to high resolution mass spectrometry from three studies with different sample types and different chromatographic methods and devices. We were also able to show the potential of IPO to

  14. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
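
    A minimal sketch of the mask construction as described (our reading of the protocol; the Gaussian fall-off with distance is an assumption):

      import numpy as np

      def localized_random_mask(shape, n_centers, sigma=2.0, seed=0):
          """Random centers; nearby pixels measured with distance-decaying probability."""
          rng = np.random.default_rng(seed)
          h, w = shape
          mask = np.zeros(shape, dtype=bool)
          yy, xx = np.mgrid[0:h, 0:w]
          for _ in range(n_centers):
              cy, cx = rng.integers(0, h), rng.integers(0, w)
              mask[cy, cx] = True
              prob = np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * sigma**2))
              mask |= rng.random(shape) < prob
          return mask

      m = localized_random_mask((64, 64), n_centers=20)
      print(f"{m.mean():.1%} of pixels measured")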

  15. Tackling the conformational sampling of larger flexible compounds and macrocycles in pharmacology and drug discovery.

    PubMed

    Chen, I-Jen; Foloppe, Nicolas

    2013-12-15

    Computational conformational sampling underpins much of molecular modeling and design in pharmaceutical work. The sampling of smaller drug-like compounds has been an active area of research. However, few studies have tested in detail the sampling of larger, more flexible compounds, which are also relevant to drug discovery, including therapeutic peptides, macrocycles, and inhibitors of protein-protein interactions. Here, we extensively investigate mainstream conformational sampling methods on three carefully curated compound sets, namely the 'Drug-like', larger 'Flexible', and 'Macrocycle' compounds. These test molecules are chemically diverse with reliable X-ray protein-bound bioactive structures. The compared sampling methods include Stochastic Search and the recent LowModeMD from MOE, all the low-mode based approaches from MacroModel, and MD/LLMOD recently developed for macrocycles. In addition to default settings, key parameters of the sampling protocols were explored. The performance of the computational protocols was assessed via (i) the reproduction of the X-ray bioactive structures, (ii) the size, coverage and diversity of the output conformational ensembles, (iii) the compactness/extendedness of the conformers, and (iv) the ability to locate the global energy minimum. The influence of the stochastic nature of the searches on the results was also examined. Much better results were obtained by adopting search parameters enhanced over the default settings, while maintaining computational tractability. In MOE, the recent LowModeMD emerged as the method of choice. Mixed torsional/low-mode from MacroModel performed as well as LowModeMD, and MD/LLMOD performed well for macrocycles. The low-mode based approaches yielded very encouraging results with the flexible and macrocycle sets. Thus, one can productively tackle the computational conformational search of larger flexible compounds for drug discovery, including macrocycles. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Image re-sampling detection through a novel interpolation kernel.

    PubMed

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in re-size and rotation transformations is an essential building block of typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the most frequently used interpolation kernels in digital image re-sampling applications. Secondly, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process involves minimizing an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  18. Determination of low molecular weight alcohols including fusel oil in various samples by diethyl ether extraction and capillary gas chromatography.

    PubMed

    Woo, Kang-Lyung

    2005-01-01

    Low molecular weight alcohols including fusel oil were determined using diethyl ether extraction and capillary gas chromatography. Twelve kinds of alcohols were successfully resolved on the HP-FFAP (polyethylene glycol) capillary column. The diethyl ether extraction method was very useful for the analysis of alcohols in alcoholic beverages and biological samples with excellent cleanliness of the resulting chromatograms and high sensitivity compared to the direct injection method. Calibration graphs for all standard alcohols showed good linearity in the concentration range used, 0.001-2% (w/v) for all alcohols. Salting out effects were significant (p < 0.01) for the low molecular weight alcohols methanol, isopropanol, propanol, 2-butanol, n-butanol and ethanol, but not for the relatively high molecular weight alcohols amyl alcohol, isoamyl alcohol, and heptanol. The coefficients of variation of the relative molar responses were less than 5% for all of the alcohols. The limits of detection and quantitation were 1-5 and 10-60 microg/L for the diethyl ether extraction method, and 10-50 and 100-350 microg/L for the direct injection method, respectively. The retention times and relative retention times of standard alcohols were significantly shifted in the direct injection method when the injection volumes were changed, even with the same analysis conditions, but they were not influenced in the diethyl ether extraction method. The recoveries by the diethyl ether extraction method were greater than 95% for all samples and greater than 97% for biological samples.

  19. Communications circuit including a linear quadratic estimator

    DOEpatents

    Ferguson, Dennis D.

    2015-07-07

    A circuit includes a linear quadratic estimator (LQE) configured to receive a plurality of measurements of a signal. The LQE is configured to weight the measurements based on their respective uncertainties to produce weighted averages. The circuit further includes a controller coupled to the LQE and configured to selectively adjust at least one data link parameter associated with a communication channel in response to receiving the weighted averages.
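
    The weighting rule described in this abstract is, in essence, the inverse-variance rule at the heart of Kalman-type estimators; the sketch below illustrates that rule generically and is not the patented circuit.

```python
import numpy as np

def inverse_variance_average(measurements, variances):
    """Weight each measurement by the inverse of its uncertainty (variance);
    returns the fused estimate and its variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    estimate = np.sum(w * np.asarray(measurements)) / np.sum(w)
    return estimate, 1.0 / np.sum(w)

# Three noisy measurements of the same signal value:
est, var = inverse_variance_average([10.2, 9.7, 10.9], [0.5, 0.2, 1.0])
print(f"fused estimate {est:.3f} with variance {var:.3f}")
```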

  20. Sparsely sampling the sky: Regular vs. random sampling

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Pires, S.; Starck, J.-L.; Jaffe, A. H.

    2015-09-01

    Aims: The next generation of galaxy surveys, aiming to observe millions of galaxies, is expensive in both time and money. This raises questions regarding the optimal investment of this time and money for future surveys. In a previous work, we showed that a sparse sampling strategy could be a powerful substitute for the usually favoured contiguous observation of the sky. In our previous paper, regular sparse sampling was investigated, where the sparsely observed patches were regularly distributed on the sky. The regularity of the mask introduces a periodic pattern in the window function, which induces periodic correlations at specific scales. Methods: In this paper, we use Bayesian experimental design to investigate a "random" sparse sampling approach, where the observed patches are randomly distributed over the total sparsely sampled area. Results: We find that in this setting the induced correlation is evenly distributed amongst all scales, as there is no preferred scale in the window function. Conclusions: This is desirable when we are interested in any specific scale in the galaxy power spectrum, such as the matter-radiation equality scale. As the figure of merit shows, however, there is no preference between regular and random sampling for constraining the overall galaxy power spectrum and the cosmological parameters.
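
    The contrast described above can be reproduced in a toy one-dimensional experiment: the sketch below compares the window-function power of a regular and a random binary mask (grid size, patch count and patch width are arbitrary illustrative choices).

```python
import numpy as np

n, n_patches, patch = 4096, 64, 8  # grid size, number of patches, patch width

# Regular mask: patches at evenly spaced positions.
regular = np.zeros(n)
for start in np.linspace(0, n - patch, n_patches).astype(int):
    regular[start:start + patch] = 1.0

# Random mask: same observed fraction, patch positions drawn at random.
rng = np.random.default_rng(0)
random_mask = np.zeros(n)
for start in rng.choice(n - patch, size=n_patches, replace=False):
    random_mask[start:start + patch] = 1.0

for name, mask in [("regular", regular), ("random", random_mask)]:
    power = np.abs(np.fft.rfft(mask - mask.mean())) ** 2
    print(name, "peak/median window power:", power[1:].max() / np.median(power[1:]))
# The regular mask shows sharp peaks at the patch-spacing harmonics;
# the random mask spreads the same power over all scales.
```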

  1. Optimal three-dimensional reusable tug trajectories for planetary missions including correction for nodal precession

    NASA Technical Reports Server (NTRS)

    Borsody, J.

    1976-01-01

    Equations are derived by using the maximum principle to maximize the payload of a reusable tug for planetary missions. The analysis includes a correction for precession of the space shuttle orbit. The tug returns to this precessed orbit (within a specified time) and makes the required nodal correction. A sample case is analyzed that represents an inner planet mission as specified by a fixed declination and right ascension of the outgoing asymptote and the mission energy. The reusable stage performance corresponds to that of a typical cryogenic tug. Effects of space shuttle orbital inclination, several trajectory parameters, and tug thrust on payload are also investigated.

  2. Model-based recovery of histological parameters from multispectral images of the colon

    NASA Astrophysics Data System (ADS)

    Hidovic-Rowe, Dzena; Claridge, Ela

    2005-04-01

    Colon cancer alters the macroarchitecture of the colon tissue. Common changes include angiogenesis and the distortion of the tissue collagen matrix. Such changes affect the colon colouration. This paper presents the principles of a novel optical imaging method capable of extracting parameters depicting histological quantities of the colon. The method is based on a computational, physics-based model of light interaction with tissue. The colon structure is represented by three layers: mucosa, submucosa and muscle layer. Optical properties of the layers are defined by the molar concentrations and absorption coefficients of haemoglobins; the size and density of collagen fibres; the thickness of the layer; and the refractive indices of collagen and the medium. Using the entire histologically plausible ranges for these parameters, a cross-reference is created computationally between the histological quantities and the associated spectra. The output of the model was compared to experimental data acquired in vivo from 57 histologically confirmed normal and abnormal tissue samples, and histological parameters were extracted. The model produced spectra which match the measured data well, with the corresponding spectral parameters lying well within histologically plausible ranges. Parameters extracted for the abnormal spectra showed the increase in blood volume fraction and the changes in collagen pattern characteristic of colon cancer. The spectra extracted from multi-spectral images of ex vivo colon, including adenocarcinoma, show the characteristic features associated with normal and abnormal colon tissue. These findings suggest that it should be possible to compute histological quantities for the colon from multi-spectral images.

  3. Correlation of Blood Gas Parameters with Central Venous Pressure in Patients with Septic Shock; a Pilot Study

    PubMed Central

    Baratloo, Alireza; Rahmati, Farhad; Rouhipour, Alaleh; Motamedi, Maryam; Gheytanchi, Elmira; Amini, Fariba; Safari, Saeed

    2014-01-01

    Objective: To determine the correlation between blood gas parameters and central venous pressure (CVP) in patients suffering from septic shock. Methods: Forty adult patients with a diagnosis of septic shock who were admitted to the emergency department (ED) of Shohadaye Tajrish Hospital, affiliated with Shahid Beheshti University of Medical Sciences, and met the inclusion and exclusion criteria were enrolled. For all patients, sampling was done for venous blood gas analysis and serum sodium and chlorine levels. At the time of sampling, blood pressure, pulse rate and CVP were recorded, and correlations between blood gas parameters and hemodynamic indices were assessed. Results: CVP correlated significantly and directly with anion gap (AG), and inversely with base deficit (BD) and bicarbonate. CVP also showed a relative correlation with pH, whereas it was not correlated with the BD/AG ratio or the serum chlorine level. There was no significant association between CVP and clinical parameters including shock index (SI) and mean arterial pressure (MAP). Conclusion: It seems that some non-invasive blood gas parameters could serve as alternatives to invasive measures such as CVP in the treatment planning of patients referred to an ED with septic shock. PMID:27162870

  4. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

    Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models several times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, the computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; on the contrary, PLHS generates a series of smaller sub-sets (also called `slices') such that: (1) each sub-set is a Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive union of sub-sets remains a Latin hypercube; and thus (3) the entire sample set is a Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it largely avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
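
    For contrast with the progressive scheme, a standard one-stage Latin hypercube sample can be generated in a few lines; this sketch implements plain LHS via the classic permutation construction, not the authors' PLHS algorithm.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """One-stage LHS: each of the n_samples bins in every dimension
    contains exactly one point, giving maximum 1-D stratification."""
    rng = rng or np.random.default_rng()
    # One random permutation of bin indices per dimension...
    bins = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    # ...and a uniform jitter inside each bin.
    return (bins + rng.random((n_samples, n_dims))) / n_samples

x = latin_hypercube(10, 3, np.random.default_rng(42))
# Every 1-D projection hits each of the 10 bins exactly once:
print(np.sort((x * 10).astype(int), axis=0))
```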

  5. Characteristics of a random sample of emergency food program users in New York: II. Soup kitchens.

    PubMed Central

    Bowering, J; Clancy, K L; Poppendieck, J

    1991-01-01

    A random sample of soup kitchen clients in New York City was studied and specific comparisons made on various parameters including homelessness. Compared with the general population of low income persons, soup kitchen users were overwhelmingly male, disproportionately African-American, and more likely to live alone. The homeless (41 percent of the sample) were less likely to receive food stamps or free food, or to use food pantries. Fewer of them received Medicaid or had health insurance. Forty-seven percent had no income in contrast to 29 percent of the total sample. PMID:2053673

  6. Aircraft data summaries for the SURE intensives. Final report. [Sampling done October, 1978 near Duncan Falls, Ohio and Giles County, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keifer, W.S.; Blumenthal, D.L.; Tommerdahl, J.B.

    1981-09-01

    As part of the EPRI sulfate regional experiment (SURE), Meteorology Research, Inc., (MRI) and Research Triangle Institute (RTI) conducted six air quality sampling programs in the eastern United States using instrumented aircraft. This volume includes the air quality and meteorological data obtained during the October 1978 intensive when MRI sampled near the Giles County, Tennessee, SURE Station and RTI sampled near the Duncan Falls, Ohio, SURE Station. Sampling data are presented for all measured parameters.

  7. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
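
    A minimal sketch of the idea under invented toy geometry: if the failure set is known to lie inside a region B whose probability is analytic, points can be sampled conditionally within B and the conditional estimate rescaled, so no samples are spent on the region that cannot fail. The limit-state function and bounding box below are illustrative, not the paper's test cases.

```python
import numpy as np

rng = np.random.default_rng(1)

# Failure when g(x) < 0; uncertain parameters uniform on the unit square.
g = lambda x: (x[:, 0] - 0.9) ** 2 + (x[:, 1] - 0.9) ** 2 - 0.05**2

# Suppose analysis shows failure can only occur in the box B = [0.8,1]x[0.8,1].
p_B = 0.2 * 0.2  # analytic probability of B under the uniform distribution

n = 10_000
# Plain Monte Carlo over the whole square:
x = rng.random((n, 2))
print("plain MC estimate:      ", np.mean(g(x) < 0))

# Conditional sampling: draw the same number of points inside B only.
xb = 0.8 + 0.2 * rng.random((n, 2))
print("conditional MC estimate:", p_B * np.mean(g(xb) < 0))
# The failure disc of radius 0.05 centred at (0.9, 0.9) lies wholly inside B,
# so the true P_fail = pi * 0.05**2 ≈ 7.85e-3; the conditional estimate has a
# much smaller relative error for the same number of sample points.
```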

  8. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and of becoming stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
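
    The variance-minimization criterion can be made concrete with a Fisher-information (D-optimal) toy example; the greedy search and one-parameter exponential-decay model below are deliberate simplifications, not the paper's quantum-inspired evolutionary algorithm.

```python
import numpy as np

# Model y(t) = exp(-k t) with unknown rate k; measurement noise variance sigma^2.
# For one parameter the Fisher information is I(k) = sum_t (dy/dk)^2 / sigma^2,
# and the estimator variance is bounded below by 1 / I(k).
k_guess, sigma = 0.5, 0.1
dy_dk = lambda t: -t * np.exp(-k_guess * t)

candidates = np.linspace(0.1, 10.0, 100)   # feasible measurement times
chosen = []
for _ in range(4):                          # budget: 4 sampling points
    # Greedily add the time that maximizes the resulting information.
    gains = [np.sum(dy_dk(np.array(chosen + [t])) ** 2) for t in candidates]
    chosen.append(candidates[int(np.argmax(gains))])

info = np.sum(dy_dk(np.array(chosen)) ** 2) / sigma**2
print("chosen times:", np.round(sorted(chosen), 2), " variance bound:", 1 / info)
# For a single parameter the optimum replicates the single most informative
# time (t = 1/k); with several parameters the chosen times spread out.
```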

  9. Data Validation Package May 2016 Groundwater Sampling at the Sherwood, Washington, Disposal Site August 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreie, Ken; Traub, David

    The 2001 Long-Term Surveillance Plan (LTSP) for the U.S. Department of Energy Sherwood Project (UMTRCA Title II) Reclamation Cell, Wellpinit, Washington, does not require groundwater compliance monitoring at the Sherwood site. However, the LTSP stipulates limited groundwater monitoring for chloride and sulfate (designated indicator parameters) and total dissolved solids (TDS) as a best management practice. Samples were collected from the background well, MW-2B, and the two downgradient wells, MW-4 and MW-10, in accordance with the LTSP. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated). Water levels were measured in all wells prior to sampling and in four piezometers completed in the tailings dam. Time-concentration graphs included in this report indicate that the chloride, sulfate, and TDS concentrations are consistent with historical measurements. The concentrations of chloride and sulfate are well below the State of Washington water quality criteria value of 250 milligrams per liter (mg/L) for both parameters.

  10. Estimating system parameters for solvent-water and plant cuticle-water using quantum chemically estimated Abraham solute parameters.

    PubMed

    Liang, Yuzhen; Torralba-Sanchez, Tifany L; Di Toro, Dominic M

    2018-04-18

    Polyparameter Linear Free Energy Relationships (pp-LFERs) using Abraham system parameters have many useful applications. However, developing the Abraham system parameters depends on the availability and quality of the Abraham solute parameters. Using Quantum Chemically estimated Abraham solute Parameters (QCAP) is shown to produce pp-LFERs that have lower root mean square errors (RMSEs) of prediction for solvent-water partition coefficients than parameters estimated using other presently available methods. pp-LFER system parameters are estimated for solvent-water and plant cuticle-water systems, and for novel compounds, using QCAP solute parameters and experimental partition coefficients. Refitting the system parameters improves the calculation accuracy and eliminates the bias. Refitted models for solvent-water partition coefficients using QCAP solute parameters give better results (RMSE = 0.278 to 0.506 log units for 24 systems) than those based on ABSOLV (0.326 to 0.618) and QSPR (0.294 to 0.700) solute parameters. For munition constituents and munition-like compounds not included in the calibration of the refitted model, QCAP solute parameters produce pp-LFER models with much lower RMSEs for solvent-water partition coefficients (RMSE = 0.734 and 0.664 for the original and refitted models, respectively) than ABSOLV (4.46 and 5.98) and QSPR (2.838 and 2.723). Refitting the plant cuticle-water pp-LFER including munition constituents using QCAP solute parameters also results in a lower RMSE (RMSE = 0.386) than using ABSOLV (0.778) and QSPR (0.512) solute parameters. Therefore, for fitting a model in situations for which experimental data exist and system parameters can be re-estimated, or for which system parameters do not exist and need to be developed, QCAP is the quantum chemical method of choice.
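
    Because a pp-LFER is linear in the Abraham solute descriptors, fitting system parameters to measured partition coefficients is an ordinary least-squares problem; the sketch below shows the generic fit, with random stand-in data in place of real solute descriptors and log K values.

```python
import numpy as np

# Abraham pp-LFER: log K = c + e*E + s*S + a*A + b*B + v*V, where
# (E, S, A, B, V) are solute parameters and (c, e, s, a, b, v) are the
# system parameters to be fitted.
rng = np.random.default_rng(7)
n = 40
solutes = rng.random((n, 5))                 # stand-in E, S, A, B, V values
true_sys = np.array([0.3, -0.5, 1.2, 3.4, 4.8, -2.1])
X = np.column_stack([np.ones(n), solutes])   # design matrix with intercept c
logK = X @ true_sys + rng.normal(0, 0.05, n)  # noisy "measurements"

sys_fit, *_ = np.linalg.lstsq(X, logK, rcond=None)
rmse = np.sqrt(np.mean((X @ sys_fit - logK) ** 2))
print("fitted (c,e,s,a,b,v):", np.round(sys_fit, 2), " RMSE:", round(rmse, 3))
```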

  11. An overview of STRUCTURE: applications, parameter settings, and supporting software

    PubMed Central

    Porras-Hurtado, Liliana; Ruiz, Yarimar; Santos, Carla; Phillips, Christopher; Carracedo, Ángel; Lareu, Maria V.

    2013-01-01

    Objectives: We present an up-to-date review of STRUCTURE software: one of the most widely used population analysis tools that allows researchers to assess patterns of genetic structure in a set of samples. STRUCTURE can identify subsets of the whole sample by detecting allele frequency differences within the data and can assign individuals to those sub-populations based on analysis of likelihoods. The review covers STRUCTURE's most commonly used ancestry and frequency models, plus an overview of the main applications of the software in human genetics including case-control association studies (CCAS), population genetics, and forensic analysis. The review is accompanied by supplementary material providing a step-by-step guide to running STRUCTURE. Methods: With reference to a worked example, we explore the effects of changing the principal analysis parameters on STRUCTURE results when analyzing a uniform set of human genetic data. Use of the supporting software: CLUMPP and distruct is detailed and we provide an overview and worked example of STRAT software, applicable to CCAS. Conclusion: The guide offers a simplified view of how STRUCTURE, CLUMPP, distruct, and STRAT can be applied to provide researchers with an informed choice of parameter settings and supporting software when analyzing their own genetic data. PMID:23755071

  12. Determination of the Landau Lifshitz damping parameter of composite magnetic fluids

    NASA Astrophysics Data System (ADS)

    Fannin, P. C.; Malaescu, I.; Marin, C. N.

    2007-01-01

    Measurements of the frequency-dependent complex magnetic susceptibility, χ(ω) = χ′(ω) - iχ″(ω), in the GHz range are used to investigate the effect which the mixing of two different magnetic fluids has on the value of the damping parameter, α, of the Landau-Lifshitz equation. The magnetic fluid samples investigated in this study were three kerosene-based magnetic fluids, stabilised with oleic acid, denoted MF1, MF2 and MF3. Sample MF1 was a magnetic fluid with Mn0.6Fe0.4Fe2O4 particles, sample MF2 was a magnetic fluid with Ni0.4Zn0.6Fe2O4 particles, and sample MF3 was a composite magnetic fluid obtained by mixing a part of sample MF1 with a part of sample MF2 in a proportion of 1:1. The experimental results revealed that the value of the damping parameter of the composite sample (sample MF3) lies between the α values obtained for its constituents (samples MF1 and MF2). Based on the superposition principle, which states that the susceptibility of a magnetic fluid sample is a superposition of the individual contributions of the magnetic particles, a theoretical model is proposed. The experimental results are shown to be in close agreement with the theoretical results. This result is potentially useful in the design of microwave-operating materials, in that it enables one to determine a particular value of the damping parameter.

  13. The multi-parameter remote measurement of rainfall

    NASA Technical Reports Server (NTRS)

    Atlas, D.; Ulbrich, C. W.; Meneghini, R.

    1982-01-01

    The measurement of rainfall by remote sensors is investigated. One-parameter radar rainfall measurement is limited because both reflectivity and rain rate depend on at least two parameters of the drop size distribution (DSD), i.e., representative raindrop size and number concentration. A generalized rain parameter diagram is developed which includes a third distribution parameter, the breadth of the DSD, to better specify rain rate and all possible remote variables. Simulations show the improvement in accuracy attainable through the use of combinations of two and three remote measurables. The spectrum of remote measurables is reviewed. These include path-integrated techniques of radiometry and of microwave and optical attenuation.
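
    For context, the classic one-parameter approach inverts a fixed power-law Z-R relation, which is what the multi-parameter techniques above improve upon; a minimal sketch using the well-known Marshall-Palmer coefficients (Z = 200 R^1.6):

```python
import numpy as np

def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
    """Invert the one-parameter Z-R power law Z = a * R**b.
    Z is the reflectivity factor in mm^6/m^3 (given here as dBZ); R is in mm/h."""
    z = 10.0 ** (dbz / 10.0)          # dBZ -> linear reflectivity factor
    return (z / a) ** (1.0 / b)

for dbz in (20, 35, 50):
    print(f"{dbz} dBZ -> {rain_rate_from_reflectivity(dbz):.1f} mm/h")
# A single (Z, R) pair cannot distinguish many small drops from few large
# ones, which is why two- and three-parameter DSD retrievals do better.
```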

  14. Including gauge-group parameters into the theory of interactions: an alternative mass-generating mechanism for gauge fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldaya, V.; Lopez-Ruiz, F. F.; Sanchez-Sastre, E.

    2006-11-03

    We reformulate the gauge theory of interactions by introducing the gauge group parameters into the model. The dynamics of the new 'Goldstone-like' bosons is accomplished through a non-linear σ-model Lagrangian. They are minimally coupled according to a proper prescription which provides mass terms to the intermediate vector bosons without spoiling gauge invariance. The present formalism is explicitly applied to the Standard Model of electroweak interactions.

  15. Local versus field scale soil heterogeneity characterization - a challenge for representative sampling in pollution studies

    NASA Astrophysics Data System (ADS)

    Kardanpour, Z.; Jacobsen, O. S.; Esbensen, K. H.

    2015-06-01

    This study is a contribution to the development of a heterogeneity characterisation facility for "next generation" sampling, aimed at more realistic and controllable pesticide variability in laboratory pots in experimental environmental contaminant assessment. The role of soil heterogeneity in the quantification of a set of exemplar parameters (organic matter, loss on ignition (LOI), biomass, soil microbiology, MCPA sorption and mineralization) is described, including a brief background on how heterogeneity affects sampling/monitoring procedures in environmental pollutant studies. The Theory of Sampling (TOS) and variographic analysis have been applied to develop a fit-for-purpose heterogeneity characterization approach. All parameters were assessed in a large-scale (1-100 m) vs. small-scale (0.1-1 m) replication sampling pattern. Variographic analysis of the experimental analytical results concludes that it is essential to sample at locations less than 2.5 m apart to benefit from spatial auto-correlation and thereby avoid unnecessary, inflated compositional variation in experimental pots; this range is an inherent characteristic of the soil heterogeneity and will differ among soil types. This study has significant carry-over potential for related research areas, e.g. soil science, contamination studies, environmental monitoring and environmental chemistry.
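
    The variographic analysis referred to above rests on the empirical semi-variogram; the sketch below computes it for samples along a transect, with synthetic data standing in for the study's measurements.

```python
import numpy as np

def empirical_variogram(positions, values, lags, tol=0.5):
    """Semi-variance gamma(h) = 0.5 * mean[(z(x) - z(x+h))^2] at each lag h."""
    positions, values = np.asarray(positions), np.asarray(values)
    d = np.abs(positions[:, None] - positions[None, :])   # pairwise distances
    sq = (values[:, None] - values[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = (np.abs(d - h) < tol) & (d > 0)
        gamma.append(0.5 * sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Synthetic 100 m transect with short-range autocorrelation (moving average):
rng = np.random.default_rng(3)
x = np.arange(0, 100, 1.0)
z = np.convolve(rng.normal(size=x.size + 4), np.ones(5) / 5, mode="valid")

lags = np.arange(1, 11)
print(np.round(empirical_variogram(x, z, lags), 3))
# gamma(h) rises with lag and flattens at the sill once samples are farther
# apart than the autocorrelation range.
```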

  16. Optimized design and analysis of sparse-sampling FMRI experiments.

    PubMed

    Perrachione, Tyler K; Ghosh, Satrajit S

    2013-01-01

    Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase
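
    A small simulation of the design trade-offs discussed above, assuming a standard double-gamma hemodynamic response function and an illustrative event train; this is a generic sketch, not the authors' simulation code.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t):
    """Canonical double-gamma HRF (SPM-like shape parameters)."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

dt = 0.1
t = np.arange(0, 60, dt)
hrf = double_gamma_hrf(t)

stim_rate = 0.5                                  # stimuli per second
stim = (np.random.default_rng(0).random(t.size) < stim_rate * dt).astype(float)
bold = np.convolve(stim, hrf)[: t.size] * dt     # predicted continuous response

tr_delay = 8.0                                   # sparse TR delay in seconds
samples = bold[:: int(tr_delay / dt)]            # volumes acquired sparsely
print(f"{samples.size} sparse samples; mean predicted signal {samples.mean():.3f}")
# Higher stimulation rates raise the sampled response amplitude (effect size),
# and the analysis model should convolve with the HRF as done here.
```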

  17. A Stepwise Test Characteristic Curve Method to Detect Item Parameter Drift

    ERIC Educational Resources Information Center

    Guo, Rui; Zheng, Yi; Chang, Hua-Hua

    2015-01-01

    An important assumption of item response theory is item parameter invariance. Sometimes, however, item parameters are not invariant across different test administrations due to factors other than sampling error; this phenomenon is termed item parameter drift. Several methods have been developed to detect drifted items. However, most of the…

  18. The association between lipid parameters and obesity in university students.

    PubMed

    Hertelyova, Z; Salaj, R; Chmelarova, A; Dombrovsky, P; Dvorakova, M C; Kruzliak, P

    2016-07-01

    Abdominal obesity is associated with high plasma triglyceride and low plasma high-density lipoprotein cholesterol levels. The objective of the study was to find an association between plasma lipid and lipoprotein levels and anthropometric parameters of abdominal obesity in Slovakian university students. Lipid profile and anthropometric parameters of obesity were studied in a sample of 419 probands, including 137 men and 282 women. Males had higher values of non-high-density lipoprotein cholesterol (non-HDL-C), low-density lipoprotein cholesterol (LDL-C), triglycerides (TG) and very low-density lipoprotein cholesterol (VLDL-C) than females, but these differences were not significant. Females had significantly higher TC (P < 0.05) and HDL-C (P < 0.001) than males. In comparison, all anthropometric parameters in the males were significantly (P < 0.001) higher than in the females. A positive correlation between non-HDL-C, TG, VLDL-C and the anthropometric parameters (BMI, WC, WHR, WHtR) was found at P < 0.001. LDL-C was positively correlated with BMI, WCF and WHtR, and TC with BMI and WHtR, at P < 0.001. We also observed correlations between TC and WCF and between LDL-C and WHR at P < 0.01. A negative correlation was found between HDL-C and all monitored anthropometric parameters at P < 0.001. On the other hand, no correlation between TC and WHR was detected. This study shows an association between plasma lipid and lipoprotein levels and anthropometric parameters in abdominal obesity in young people, predominantly university students.

  19. Genotoxic Potential and Physicochemical Parameters of Sinos River, Southern Brazil

    PubMed Central

    Scalon, Madalena C. S.; Rechenmacher, Ciliana; Siebel, Anna Maria; Kayser, Michele L.; Rodrigues, Manoela T.; Maluf, Sharbel W.; Rodrigues, Marco Antonio S.

    2013-01-01

    The present study aimed to evaluate the physicochemical parameters and the genotoxic potential of water samples collected in the upper, middle, and lower courses of the Sinos River, southern Brazil. The comet assay was performed on the peripheral blood of the fish Hyphessobrycon luetkenii exposed under laboratory conditions to water samples collected in summer and winter at three sampling sites of the Sinos River. Water quality analysis demonstrated values above those described in Brazilian legislation at the Parobé and Sapucaia do Sul sites, located in the middle and lower courses of the Sinos River, respectively. The Caraá site, located in the upper river reach, presented all physicochemical parameters in accordance with the allowed limits in both sampling periods. The comet assay in fish revealed genotoxicity in water samples collected at the middle course site in summer and at all three sites in winter when compared to the control group. Thus, while the physicochemical parameters indicated that the water quality of the upper course complies with the limits set by the national guidelines, the ecotoxicological assessment indicated the presence of genotoxic agents. The present study highlights the importance of combining water physicochemical analysis and bioassays in river monitoring. PMID:24285934

  20. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
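
    To make the idea concrete, here is a minimal sketch for a two-state Markov chain whose steady state depends on the parameters: steady-state snapshots are scored against the empirical distribution propagated one fictitious step forward under a parametrized transition matrix, and the resulting propagator likelihood is maximized. The two-state model, the known rate b and the unknown rate a are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Two-state Markov chain with hop probabilities a (0->1) and b (1->0);
# its steady state pi = (b, a) / (a + b) depends on the parameters.
def transition_matrix(a, b):
    return np.array([[1 - a, a], [b, 1 - b]])

true_a, b = 0.3, 0.1
rng = np.random.default_rng(5)
pi = np.array([b, true_a]) / (true_a + b)
samples = rng.choice(2, size=5000, p=pi)     # i.i.d. steady-state snapshots

p_emp = np.bincount(samples, minlength=2) / samples.size

def neg_propagator_loglik(a):
    # Propagate the empirical distribution one fictitious step forward and
    # score the observed configurations against the propagated distribution.
    propagated = p_emp @ transition_matrix(a, b)
    return -np.mean(np.log(propagated[samples]))

res = minimize_scalar(neg_propagator_loglik, bounds=(0.01, 0.99), method="bounded")
print(f"true a = {true_a}, inferred a = {res.x:.3f}")
# The likelihood peaks where the propagated distribution reproduces the
# empirical one, i.e. where the dynamics leave the observed steady state invariant.
```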

  1. Apollo 14 rock samples

    NASA Technical Reports Server (NTRS)

    Carlson, I. C.

    1978-01-01

    Petrographic descriptions of all Apollo 14 samples larger than 1 cm in any dimension are presented. The sample description format consists of: (1) an introductory section, which includes information on lunar sample location, orientation, and return containers; (2) a section on physical characteristics, which contains the sample mass, dimensions, and a brief description; (3) surface features, including zap pits, cavities, and fractures as seen in binocular view; (4) a petrographic description, consisting of a binocular description and, if possible, a thin section description; and (5) a discussion of literature relevant to sample petrology, included for samples which have previously been examined by the scientific community.

  2. Influence of sexual stimulation on sperm parameters in semen samples collected via masturbation from normozoospermic men or cryptozoospermic men participating in an assisted reproduction programme.

    PubMed

    Yamamoto, Y; Sofikitis, N; Mio, Y; Miyagawa, I

    2000-05-01

    To evaluate the influence of sexual stimulation via sexually stimulating videotaped visual images (VIM) on sperm function, two semen samples were collected from each of 19 normozoospermic men via masturbation with VIM. Two additional samples were collected from each man via masturbation without VIM. The volume of seminal plasma, total sperm count, sperm motility, percentage of morphologically normal spermatozoa, outcome of the hypo-osmotic swelling test and zona-free hamster oocyte sperm penetration assay, and markers of the secretory function of the prostate were significantly larger in semen samples collected via masturbation with VIM than without VIM. The improved sperm parameters in the samples collected via masturbation with VIM may reflect an enhanced prostatic secretory function and increased loading of the vas deferens at that time. In a similar protocol, two semen samples were collected via masturbation with VIM from each of 22 non-obstructed azoospermic men. Semen samples from these men had occasionally been positive in the past for a very small number of spermatozoa (cryptozoospermic men). Two additional samples were collected from each cryptozoospermic man via masturbation without VIM. The volume of seminal plasma, total sperm count, sperm motility, and a marker of the secretory function of the prostate were significantly larger in semen samples collected via masturbation with VIM. Fourteen of the 22 men were negative for spermatozoa in both samples collected via masturbation without VIM; these men demonstrated spermatozoa in both samples collected via masturbation with VIM. Six men with immotile spermatozoa in both samples collected via masturbation without VIM showed motile spermatozoa in both samples collected via masturbation with VIM. High sexual stimulation during masturbation with VIM results in recovery of spermatozoa of greater fertilizing potential in both normozoospermic and cryptozoospermic men. The appearance of spermatozoa after

  3. Dirac-Born-Infeld inflation using a one-parameter family of throat geometries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gmeiner, Florian; White, Chris D, E-mail: fgmeiner@nikhef.nl, E-mail: cwhite@nikhef.nl

    2008-02-15

    We demonstrate the possibility of examining cosmological signatures in the Dirac-Born-Infeld (DBI) inflation setup using the BGMPZ solution, a one-parameter family of geometries for the warped throat which interpolate between the Maldacena-Nunez and Klebanov-Strassler solutions. The warp factor is determined numerically and is subsequently used to calculate cosmological observables, including the scalar and tensor spectral indices, for a sample point in the parameter space. As one moves away from the Klebanov-Strassler (KS) solution for the throat, the warp factor is qualitatively different, which leads to a significant change in the observables, but also generically increases the non-Gaussianity of the models. We argue that the different models can potentially be differentiated by current and future experiments.

  4. The redshift distribution of cosmological samples: a forward modeling approach

    NASA Astrophysics Data System (ADS)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-08-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
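
    The ABC ingredient of the pipeline can be illustrated generically: draw parameters from a prior, simulate, compare a summary statistic with the data, and keep parameters whose simulations land within a tolerance. Everything below, including the one-parameter stand-in "survey" model and its quantile summary, is invented for illustration and much simpler than the image simulations described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_survey(mean_z, n=2000):
    """Stand-in forward model: draw galaxy redshifts for a given parameter."""
    return rng.gamma(shape=2.0, scale=mean_z / 2.0, size=n)

observed = simulate_survey(mean_z=0.9)              # pretend this is real data
summary = lambda z: np.percentile(z, [25, 50, 75])  # distance uses 3 quantiles

accepted = []
for _ in range(5000):
    theta = rng.uniform(0.3, 2.0)                   # draw from the prior
    dist = np.linalg.norm(summary(simulate_survey(theta)) - summary(observed))
    if dist < 0.05:                                 # ABC tolerance
        accepted.append(theta)

post = np.array(accepted)
print(f"{post.size} accepted; posterior mean_z = {post.mean():.2f} +/- {post.std():.2f}")
# The redshift distributions n(z) of the accepted models form the estimate,
# analogous to the set of acceptable models in the abstract.
```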

  5. Effects of sampling close relatives on some elementary population genetics analyses.

    PubMed

    Wang, Jinliang

    2018-01-01

    Many molecular ecology analyses assume the genotyped individuals are sampled at random from a population and thus are representative of the population. Realistically, however, a sample may contain excessive close relatives (ECR) because, for example, localized juveniles are drawn from fecund species. Our knowledge is limited about how ECR affect the routinely conducted elementary genetics analyses, and how ECR are best dealt with to yield unbiased and accurate parameter estimates. This study quantifies the effects of ECR on some popular population genetics analyses of marker data, including the estimation of allele frequencies, F-statistics, expected heterozygosity (He), effective and observed numbers of alleles, and the tests of Hardy-Weinberg equilibrium (HWE) and linkage equilibrium (LE). It also investigates several strategies for handling ECR to mitigate their impact and to yield accurate parameter estimates. My analytical work, assisted by simulations, shows that ECR have large and global effects on all of the above marker analyses. The naïve approach of simply ignoring ECR could yield low-precision and often biased parameter estimates, and could cause too many false rejections of HWE and LE. The bold approach, which simply identifies and removes ECR, and the cautious approach, which estimates target parameters (e.g., He) by accounting for ECR and using naïve allele frequency estimates, eliminate the bias and the false HWE and LE rejections, but could reduce estimation precision substantially. The likelihood approach, which accounts for ECR in estimating allele frequencies and thus target parameters relying on allele frequencies, usually yields unbiased and the most accurate parameter estimates. Which of the four approaches is the most effective and efficient may depend on the particular marker analysis to be conducted. The results are discussed in the context of using marker data for understanding population properties and marker properties. © 2017

  6. Technical note: Alternatives to reduce adipose tissue sampling bias.

    PubMed

    Cruz, G D; Wang, Y; Fadel, J G

    2014-10-01

    Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle have been addressed in previous studies, but no attempt to critically investigate these issues has been made in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples, from 1 to 15, needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied to adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and the size and number of adipocytes were determined by a Coulter Counter. These results were then fitted with a finite mixture model to obtain distribution parameters for each sample. To evaluate the benefits of increasing the number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement in the estimation of the overall adipocyte cellularity parameters was observed with both sampling techniques as the sample size increased from 1 to 15, with the acceptance ratio of both techniques increasing from approximately 3% to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.

  7. Importance sampling large deviations in nonequilibrium steady states. I.

    PubMed

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
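
    As a baseline illustrating why guided sampling is needed, the sketch below estimates an exponential generating function for a biased random walker by brute-force trajectory sampling; the average is dominated by exponentially rare trajectories, so the direct estimator degrades rapidly with the bias parameter s. This naive estimator is the contrast to the trajectory-ensemble methods evaluated in the paper, not one of them.

```python
import numpy as np

rng = np.random.default_rng(2)

def generating_function_direct(s, n_traj=20000, T=200, drift=0.1):
    """Naive estimate of psi(s) = log E[exp(-s * a_T)], where a_T is the
    time-averaged position of a biased random walker over T steps."""
    steps = drift + rng.normal(size=(n_traj, T))
    a_T = np.cumsum(steps, axis=1).sum(axis=1) / T   # time-averaged position
    weights = np.exp(-s * a_T)
    return np.log(weights.mean()), weights.std() / weights.mean()

for s in (0.1, 0.5, 1.0, 2.0):
    psi, rel_spread = generating_function_direct(s)
    print(f"s={s:3.1f}  psi~{psi:8.3f}  relative spread {rel_spread:10.1f}")
# The relative spread of the exponential weights grows quickly with s:
# ever fewer trajectories dominate the average, motivating guiding functions.
```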

  9. Detecting the sampling rate through observations

    NASA Astrophysics Data System (ADS)

    Shoji, Isao

    2018-09-01

    This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rates from data, and their results show good performance in the detection. In addition, the method is applied to a financial time series sampled on a daily basis and shows that the detected sampling rate differs from the conventional rates.

  10. Material parameter estimation with terahertz time-domain spectroscopy.

    PubMed

    Dorney, T D; Baraniuk, R G; Mittleman, D M

    2001-07-01

    Imaging systems based on terahertz (THz) time-domain spectroscopy offer a range of unique modalities owing to the broad bandwidth, subpicosecond duration, and phase-sensitive detection of the THz pulses. Furthermore, the possibility exists for combining spectroscopic characterization or identification with imaging because the radiation is broadband in nature. To achieve this, we require novel methods for real-time analysis of THz waveforms. This paper describes a robust algorithm for extracting material parameters from measured THz waveforms. Our algorithm simultaneously obtains both the thickness and the complex refractive index of an unknown sample under certain conditions. In contrast, most spectroscopic transmission measurements require knowledge of the sample's thickness for an accurate determination of its optical parameters. Our approach relies on a model-based estimation, a gradient descent search, and the total variation measure. We explore the limits of this technique and compare the results with literature data for optical parameters of several different materials.

  11. O-star parameters from line profiles of wind-blanketed model atmospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voels, S.A.

    1989-01-01

    The basic stellar parameters (i.e., effective temperature, gravity, helium content, bolometric correction, etc.) of several O-stars are determined by matching high signal-to-noise observed line profiles of optical hydrogen and helium line transitions with theoretical line profiles from a core-halo model of the stellar atmosphere. The core-halo atmosphere includes the effect of radiation backscattered from a stellar wind by incorporating the stellar wind model of Abbott and Lucy as a reflective upper boundary condition in the Mihalas atmosphere model. Three of the four supergiants analyzed showed an enhanced surface abundance of helium. Using a large sample of equivalent width data from Conti, a simple argument is made that surface enhancement of helium may be a common property of the most luminous supergiants. The stellar atmosphere theory is sufficient to determine the stellar parameters only if careful attention is paid to the detection and exclusion of lines which are not accurately modeled by the physical processes included. It was found that some strong lines which form entirely below the sonic point are not well modeled due to effects of atmospheric extension. For spectral class O9.5, one of these lines is the classification line He I λ4471 Å. For supergiants, the gravity determined could be systematically low by up to 0.05 dex as the radiation pressure due to lines is neglected. Within the error ranges, the stellar parameters determined, including helium abundance, agree with those from the stellar evolution calculations of Maeder and Meynet.

  12. Comparing basal area growth models, consistency of parameters, and accuracy of prediction

    Treesearch

    J.J. Colbert; Michael Schuckers; Desta Fekedulegn

    2002-01-01

    We fit alternative sigmoid growth models to sample tree basal area historical data derived from increment cores and disks taken at breast height. We examine and compare the estimated parameters for these models across a range of sample sites. Models are rated on consistency of parameters and on their ability to fit growth data from four sites that are located across a...

  13. A MegaCam Survey of Outer Halo Satellites. III. Photometric and Structural Parameters

    NASA Astrophysics Data System (ADS)

    Muñoz, Ricardo R.; Côté, Patrick; Santana, Felipe A.; Geha, Marla; Simon, Joshua D.; Oyarzún, Grecco A.; Stetson, Peter B.; Djorgovski, S. G.

    2018-06-01

    We present structural parameters from a wide-field homogeneous imaging survey of Milky Way satellites carried out with the MegaCam imagers on the 3.6 m Canada–France–Hawaii Telescope and 6.5 m Magellan-Clay telescope. Our survey targets an unbiased sample of “outer halo” satellites (i.e., substructures having galactocentric distances greater than 25 kpc) and includes classical dSph galaxies, ultra-faint dwarfs, and remote globular clusters. We combine deep, panoramic gr imaging for 44 satellites and archival gr imaging for 14 additional objects (primarily obtained with the DECam instrument as part of the Dark Energy Survey) to measure photometric and structural parameters for 58 outer halo satellites. This is the largest and most uniform analysis of Milky Way satellites undertaken to date and represents roughly three-quarters (58/81 ≃ 72%) of all known outer halo satellites. We use a maximum-likelihood method to fit four density laws to each object in our survey: exponential, Plummer, King, and Sérsic models. We systematically examine the isodensity contour maps and color–magnitude diagrams for each of our program objects, present a comparison with previous results, and tabulate our best-fit photometric and structural parameters, including ellipticities, position angles, effective radii, Sérsic indices, absolute magnitudes, and surface brightness measurements. We investigate the distribution of outer halo satellites in the size–magnitude diagram and show that the current sample of outer halo substructures spans a wide range in effective radius, luminosity, and surface brightness, with little evidence for a clean separation into star cluster and galaxy populations at the faintest luminosities and surface brightnesses.
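
    As an illustration of the maximum-likelihood profile fitting mentioned above, the sketch below fits a Plummer law to projected star positions; the synthetic star sample and the pure-Plummer, background-free likelihood are simplifications of the survey's actual four-model fits.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Plummer surface density, normalized as a 2-D probability density:
# Sigma(R) = a^2 / (pi * (a^2 + R^2)^2), which integrates to 1 over the plane.
def neg_loglike(a, R):
    return -np.sum(np.log(a**2 / (np.pi * (a**2 + R**2) ** 2)))

# Synthetic satellite: draw radii from a Plummer profile with a_true = 2.0
# by inverse-CDF sampling (the enclosed fraction is R^2 / (a^2 + R^2)).
rng = np.random.default_rng(13)
a_true, u = 2.0, rng.random(1000)
R = a_true * np.sqrt(u / (1 - u))

fit = minimize_scalar(lambda a: neg_loglike(a, R), bounds=(0.1, 20), method="bounded")
print(f"true a = {a_true}, ML effective radius a = {fit.x:.2f}")
# For a Plummer law the scale a equals the projected half-light radius,
# one of the structural parameters tabulated in the survey.
```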

  14. Variation of semen parameters in healthy medical students due to exam stress.

    PubMed

    Lampiao, Fanuel

    2009-12-01

    This study was aimed at investigating which semen parameters vary most in samples from healthy donors undergoing a stressful examination period. Samples were left to liquefy in an incubator at 37 degrees C, 5% CO2 for 30 minutes before the volume was measured. Concentration and motility parameters were measured by means of computer-assisted semen analysis (CASA) using the Sperm Class Analyzer (Microptic S.L., Madrid, Spain). Sperm concentration was significantly decreased in samples donated close to the exam period, as well as in samples donated during the exam period, when compared to samples donated at the beginning of the semester. Stress levels of donors might prove to be clinically relevant and important when designing experimental protocols.

  15. Complex structures of different CaFe2As2 samples

    PubMed Central

    Saparov, Bayrammurad; Cantoni, Claudia; Pan, Minghu; Hogan, Thomas C.; Ratcliff II, William; Wilson, Stephen D.; Fritsch, Katharina; Gaulin, Bruce D.; Sefat, Athena S.

    2014-01-01

    The interplay between magnetism and crystal structures in three CaFe2As2 samples is studied. For the nonmagnetic quenched crystals, different crystalline domains with varying lattice parameters are found, and three phases (orthorhombic, tetragonal, and collapsed tetragonal) coexist between TS = 95 K and 45 K. Annealing of the quenched crystals at 350°C leads to strain relief through a large (~1.3%) expansion of the c-parameter and a small (~0.2%) contraction of the a-parameter, and to local ~0.2 Å displacements at the atomic level. This annealing procedure results in the most homogeneous crystals, for which the antiferromagnetic and orthorhombic phase transitions occur at TN/TS = 168(1) K. In the 700°C-annealed crystal, an intermediate strain regime takes place, with tetragonal and orthorhombic structural phases coexisting between 80 and 120 K. The origin of such strong shifts in the transition temperatures is tied to structural parameters. Importantly, with annealing, an increase in the Fe-As length leads to more localized Fe electrons and higher local magnetic moments on Fe ions. Synergistic contributions of other structural parameters, including a decrease in the Fe-Fe distance and a dramatic increase of the c-parameter, which enhances the Fermi surface nesting in CaFe2As2, are also discussed. PMID:24844399

  16. Measurement of spectral characteristics and CCT of a mixture of PDMS and luminophore depending on the geometric parameters and concentration of samples of special optical fibers

    NASA Astrophysics Data System (ADS)

    Jargus, Jan; Nedoma, Jan; Fajkus, Marcel; Novak, Martin; Bednarek, Lukas; Vasinek, Vladimir

    2017-05-01

    White light is produced by a suitable combination of spectral RGB components (colors) or through excitation by blue light (the blue component of light). Part of this blue light is suitably transformed by a luminophore so that the resulting emitted spectrum corresponds to the spectral characteristics of white light with a given correlated color temperature (CCT). This paper deals with the measurement of the optical properties of a mixture of polydimethylsiloxane (PDMS) and luminophore, which is irradiated by a blue LED (Light-Emitting Diode) to obtain white light. The subject of the investigation is the dependence of CCT on the concentration of the luminophore in the PDMS mixture and on the different geometrical parameters of the samples. There are many kinds of PDMS and luminophores; we used PDMS Sylgard 184 and the luminophore labeled U2, more precisely cerium-doped yttrium aluminium garnet (Y3Al5O12:Ce). From the analyzed data, we determined which combinations of luminophore concentration in the PDMS mixture and geometric parameters of the special optical fiber samples are suitable for illumination with the desired CCT.

  17. Smoothing the redshift distributions of random samples for the baryon acoustic oscillations: applications to the SDSS-III BOSS DR12 and QPM mock samples

    NASA Astrophysics Data System (ADS)

    Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen

    2017-12-01

    We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillation (BAO) measurements of D_V(z) r_d^fid/r_d from the two-point correlation functions of galaxies in Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which necessarily imprints the fiducial fluctuation signals onto the random samples and weakens the BAO signal if the cosmic variance cannot be ignored. We propose a smooth function of the redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signal has been improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current measurements of cosmological parameters, such improvements will be valuable for future measurements of galaxy clustering.
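
    The proposed fix amounts to drawing random-catalogue redshifts from a smooth fit to the measured n(z) instead of resampling the data's histogram directly; the sketch below is a generic version in which the polynomial fit and inverse-CDF draw are illustrative choices, not the authors' smoothing function.

```python
import numpy as np

rng = np.random.default_rng(17)

# Pretend these are measured galaxy redshifts (smooth true n(z), plus noise).
z_data = rng.beta(3, 5, size=20000) + 0.1

# Histogram of the data, then a smooth polynomial fit to dN/dz.
counts, edges = np.histogram(z_data, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
smooth = np.clip(np.polyval(np.polyfit(centers, counts, deg=6), centers), 0, None)

# Inverse-CDF sampling from the smooth n(z) for the random catalogue.
cdf = np.cumsum(smooth + 1e-12)
cdf /= cdf[-1]
z_random = np.interp(rng.random(10 * z_data.size), cdf, centers)

print("data mean z %.3f, randoms mean z %.3f" % (z_data.mean(), z_random.mean()))
# Unlike drawing directly from the data histogram, the randoms no longer
# inherit the bin-to-bin fluctuations of the particular survey realization.
```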

  18. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    PubMed Central

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external coefficient or internal coefficient has a negative influence on the sampling level. The changing rate of the potential market has no significant influence on the sampling level, whereas the repeat purchase rate has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis provides a complete picture of the interaction of all parameters, which yields a two-stage method to estimate the impact of the relevant parameters when the parameters are known only inaccurately, and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
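
    The external and internal coefficients referred to above are the hallmarks of a Bass-type diffusion model; the sketch below integrates such a model with free samples seeding the initial adopter pool. The equations and numbers are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def bass_adopters(p, q, market, sampling, horizon=60):
    """Cumulative adopters N(t), with free samples as initial adopters:
    dN/dt = (p + q * N / M) * (M - N), N(0) = sampling level.
    p: external (innovation) coefficient, q: internal (imitation) coefficient."""
    N = np.empty(horizon)
    N[0] = sampling
    for t in range(1, horizon):
        M = market(t)                         # potential market may drift
        N[t] = N[t - 1] + (p + q * N[t - 1] / M) * (M - N[t - 1])
    return N

market = lambda t: 1e6 * (1 + 0.002 * t)      # slowly growing potential market
for samples in (0, 10_000, 50_000):
    N = bass_adopters(p=0.01, q=0.35, market=market, sampling=samples)
    print(f"free samples {samples:6d} -> adopters after 5 years {N[-1]:,.0f}")
```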

  19. Enhanced conformational sampling using enveloping distribution sampling.

    PubMed

    Lin, Zhixiong; van Gunsteren, Wilfred F

    2013-10-14

    Lessening the problem of insufficient conformational sampling in biomolecular simulations is still a major challenge in computational biochemistry. In this article, an application of the method of enveloping distribution sampling (EDS) is proposed that addresses this challenge, and its sampling efficiency is demonstrated in simulations of a hexa-β-peptide whose conformational equilibrium encompasses two different helical folds, i.e., a right-handed 2.7(10/12)-helix and a left-handed 3(14)-helix, separated by a high energy barrier. Standard MD simulations of this peptide using the GROMOS 53A6 force field did not reach convergence of the free enthalpy difference between the two helices even after 500 ns of simulation time. The use of soft-core non-bonded interactions in the centre of the peptide did enhance the number of transitions between the helices, but at the same time led to neglect of relevant helical configurations. In the simulations of a two-state EDS reference Hamiltonian that envelops both the physical peptide and the soft-core peptide, sampling of the conformational space of the physical peptide ensures that physically relevant conformations can be visited, and sampling of the conformational space of the soft-core peptide helps to enhance the transitions between the two helices. The EDS simulations sampled many more transitions between the two helices and showed much faster convergence of the relative free enthalpy of the two helices compared with the standard MD simulations, with only a slightly larger computational effort to determine optimized EDS parameters. Combined with various methods to smoothen the potential energy surface, the proposed EDS application will be a powerful technique to enhance the sampling efficiency in biomolecular simulations.
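
    The reference Hamiltonian of multi-state EDS has a simple closed form; the sketch below evaluates it for two end states, with an arbitrary smoothness parameter s and energy offsets standing in for the optimized EDS parameters mentioned above.

```python
import numpy as np

def eds_reference_energy(E_states, offsets, beta=1.0 / 2.494, s=0.05):
    """Multi-state EDS reference energy:
    E_ref = -1/(beta*s) * ln( sum_i exp(-beta*s*(E_i - dE_i)) ).
    E_states: end-state energies for one configuration (kJ/mol, kT ~ 2.494);
    offsets:  energy offsets dE_i (tunable EDS parameters, assumed here)."""
    E = np.asarray(E_states) - np.asarray(offsets)
    # log-sum-exp for numerical stability
    m = np.min(beta * s * E)
    return -(1.0 / (beta * s)) * (np.log(np.sum(np.exp(-(beta * s * E - m)))) - m)

# One configuration where state A is favoured and one where B is:
print(eds_reference_energy([-50.0, 20.0], offsets=[0.0, 10.0]))
print(eds_reference_energy([30.0, -45.0], offsets=[0.0, 10.0]))
# With a small smoothness parameter s the reference envelope is smooth,
# easing transitions between the two helical basins sampled in EDS.
```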

  20. Curation of Frozen Samples

    NASA Technical Reports Server (NTRS)

    Fletcher, L. A.; Allen, C. C.; Bastien, R.

    2008-01-01

    NASA's Johnson Space Center (JSC) and the Astromaterials Curator are charged by NPD 7100.10D with the curation of all of NASA's extraterrestrial samples, including those from future missions. This responsibility includes the development of new sample handling and preparation techniques; therefore, the Astromaterials Curator must begin developing procedures to preserve, prepare and ship samples at sub-freezing temperatures in order to enable future sample return missions. Such missions might include the return of frozen samples from permanently-shadowed lunar craters, the nuclei of comets, the surface of Mars, etc. We are demonstrating the ability to curate samples under cold conditions by designing, installing and testing a cold curation glovebox. This glovebox will allow us to store, document, manipulate and subdivide frozen samples while quantifying and minimizing contamination throughout the curation process.

  1. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between the average of resampled conventional noncentrality parameter estimates and their sample counterpart. The…

  2. Influence of the freezing method on the changes that occur in grape samples after frozen storage.

    PubMed

    Santesteban, Luis G; Miranda, Carlos; Royo, José B

    2013-09-01

    Sample freezing is frequently used in oenological laboratories as a compromise solution to increase the number of samples that can be analysed, despite the fact that some grape characteristics are known to change after frozen storage. However, freezing is usually performed using standard freezers, which provide slow freezing. The aim of this work was to evaluate whether blast freezing would decrease the impact of standard freezing on grape composition. Grape quality parameters were assessed in fresh samples and in frozen stored samples that had been frozen using three different procedures: standard freezing, and blast freezing using either a blast freezer or an ultra-freezer. The implications of frozen storage in grape samples reported in earlier research were observed for all three freezing methods evaluated. Although blast freezing improved repeatability for the most problematic parameters (tartaric acidity, TarA; total phenolics, TP), the improvement was not important from a practical point of view. However, TarA and TP were relatively repeatable among the three freezing procedures, which suggests that freezing had an effect on these parameters independently of the method used. According to our results, the salification potential of the must is probably implicated in the changes observed for TarA, whereas for TP the precipitation of proanthocyanidins after association with cell wall material is hypothesized to cause the lack of repeatability between fresh and frozen grapes. Blast freezing would not imply a great improvement if implemented in oenological laboratories, at least for the parameters included in this study. © 2013 Society of Chemical Industry.

  3. Latin Hypercube Sampling (LHS) UNIX Library/Standalone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2004-05-13

    The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions, so LHS UNIX Library/Standalone provides a way to generate multi-variate samples. The LHS samples can be generated either from a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
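
    The stratify-then-shuffle construction described above is compact to express. A minimal sketch of the uniform case (not the LHS UNIX Library itself), with inverse-CDF mapping to an arbitrary marginal:

```python
import numpy as np
from scipy.stats import norm

def latin_hypercube(n_samples, n_vars, rng=None):
    """Basic Latin Hypercube Sample on [0, 1)^n_vars.

    Each variable's range is split into n_samples equal-probability strata;
    one point is drawn per stratum and the strata are paired at random
    across variables, as in the LHS scheme described above.
    """
    rng = np.random.default_rng(rng)
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_vars))) / n_samples
    for j in range(n_vars):                 # random pairing between variables
        rng.shuffle(u[:, j])
    return u

# Map the uniform LHS to arbitrary marginals via inverse CDFs, e.g. normal inputs:
x = norm.ppf(latin_hypercube(100, 3, rng=0), loc=10.0, scale=2.0)
```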

  4. MODEST - JPL GEODETIC AND ASTROMETRIC VLBI MODELING AND PARAMETER ESTIMATION PROGRAM

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1994-01-01

    Observations of extragalactic radio sources in the gigahertz region of the radio frequency spectrum by two or more antennas, separated by a baseline as long as the diameter of the Earth, can be reduced, by radio interferometry techniques, to yield time delays and their rates of change. The Very Long Baseline Interferometric (VLBI) observables can be processed by the MODEST software to yield geodetic and astrometric parameters of interest in areas such as geophysical satellite and spacecraft tracking applications and geodynamics. As the accuracy of radio interferometry has improved, increasingly complete models of the delay and delay rate observables have been developed. MODEST is a delay model (MOD) and parameter estimation (EST) program that takes into account delay effects such as geometry, clock, troposphere, and the ionosphere. MODEST includes all known effects at the centimeter level in modeling. As the field evolves and new effects are discovered, these can be included in the model. In general, the model includes contributions to the observables from Earth orientation, antenna motion, clock behavior, atmospheric effects, and radio source structure. Within each of these categories, a number of unknown parameters may be estimated from the observations. Since all parts of the time delay model contain nearly linear parameter terms, a square-root-information filter (SRIF) linear least-squares algorithm is employed in parameter estimation. Flexibility (via dynamic memory allocation) in the MODEST code ensures that the same executable can process a wide array of problems. These range from a few hundred observations on a single baseline, yielding estimates of tens of parameters, to global solutions estimating tens of thousands of parameters from hundreds of thousands of observations at antennas widely distributed over the Earth's surface. Depending on memory and disk storage availability, large problems may be subdivided into more tractable pieces that are processed

  5. Composition of diesel exhaust with particular reference to particle bound organics including formation of artifacts.

    PubMed

    Lies, K H; Hartung, A; Postulka, A; Gring, H; Schulze, J

    1986-01-01

    For particulate emissions, standards were established by the US EPA in February 1980. Regulations limiting particulates from new light-duty diesel vehicles took effect with model year 1982. The corresponding standards, on a pure mass basis, do not take into account the chemical character of the diesel particulate matter. Our investigation of the material composition shows that diesel particulates consist mainly of soot (up to 80% by weight) and adsorptively bound organics, including polycyclic aromatic hydrocarbons (PAH). The qualitative and quantitative nature of hydrocarbon compounds associated with the particulates depends not only on the combustion parameters of the engine but also, to an important degree, on the sampling conditions when the particulates are collected (dilution ratio, temperature, filter material, sampling time, etc.). Various methods for the analysis of PAH and their oxy- and nitro-derivatives are described, including sampling, extraction, fractionation and chemical analysis. Quantitative comparisons of PAH, nitro-PAH and oxy-PAH from different engines are given. For assessing the mutagenicity of particulate matter, short-term biological tests are widely used. These biological tests often need a large amount of particulate matter, requiring prolonged filter sampling times. Since it is well known that facile PAH oxidation can take place under the conditions used for sampling and analysis, the question arises whether the PAH derivatives found in particle extracts are partly or totally produced during sampling (artifacts). Various results concerning nitro- and oxy-PAH are presented, characterizing artifact formation as a minor problem under the conditions of the Federal Test Procedure. But the results show that under other sampling conditions, e.g. electrostatic precipitation, higher NO2 concentrations and longer sampling times, artifact formation can become a bigger problem. The more stringent particulate standard of 0.2 g/mi for model years 1986 and 1987 respectively

  6. The Statistics of Radio Astronomical Polarimetry: Disjoint, Superposed, and Composite Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straten, W. van; Tiburzi, C., E-mail: willem.van.straten@aut.ac.nz

    2017-02-01

    A statistical framework is presented for the study of the orthogonally polarized modes of radio pulsar emission via the covariances between the Stokes parameters. To accommodate the typically heavy-tailed distributions of single-pulse radio flux density, the fourth-order joint cumulants of the electric field are used to describe the superposition of modes with arbitrary probability distributions. The framework is used to consider the distinction between superposed and disjoint modes, with particular attention to the effects of integration over finite samples. If the interval over which the polarization state is estimated is longer than the timescale for switching between two or more disjoint modes of emission, then the modes are unresolved by the instrument. The resulting composite sample mean exhibits properties that have been attributed to mode superposition, such as depolarization. Because the distinction between disjoint modes and a composite sample of unresolved disjoint modes depends on the temporal resolution of the observing instrumentation, the arguments in favor of superposed modes of pulsar emission are revisited, and observational evidence for disjoint modes is described. In principle, the four-dimensional covariance matrix that describes the distribution of sample mean Stokes parameters can be used to distinguish between disjoint modes, superposed modes, and a composite sample of unresolved disjoint modes. More comprehensive and conclusive interpretation of the covariance matrix requires more detailed consideration of various relevant phenomena, including temporally correlated subpulse modulation (e.g., jitter), statistical dependence between modes (e.g., covariant intensities and partial coherence), and multipath propagation effects (e.g., scintillation and scattering).

  7. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices.

    PubMed

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information.

  8. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices

    PubMed Central

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information. PMID:27907188

  9. An integrated and accessible sample data library for Mars sample return science

    NASA Astrophysics Data System (ADS)

    Tuite, M. L., Jr.; Williford, K. H.

    2015-12-01

    Over the course of the next decade or more, many thousands of geological samples will be collected and analyzed in a variety of ways by researchers at the Jet Propulsion Laboratory (California Institute of Technology) in order to facilitate discovery and contextualize observations made of Mars rocks both in situ and here on Earth if samples are eventually returned. Integration of data from multiple analyses of samples, including petrography, thin section and SEM imaging, isotope and organic geochemistry, XRF, XRD, and Raman spectrometry, is a challenge and a potential obstacle to discoveries that require supporting lines of evidence. We report the development of a web-accessible repository, the Sample Data Library (SDL), for the sample-based data that are generated by the laboratories and instruments that comprise JPL's Center for Analysis of Returned Samples (CARS), in order to facilitate collaborative interpretation of potential biosignatures in Mars-analog geological samples. The SDL is constructed using low-cost, open-standards-based Amazon Web Services (AWS), including web-accessible storage, relational database services, and a virtual web server. The data structure is sample-centered with a shared registry for assigning unique identifiers to all samples, including International Geo-Sample Numbers. Both raw and derived data produced by instruments and post-processing workflows are automatically uploaded to online storage and linked via the unique identifiers. Through the web interface, users are able to find all the analyses associated with a single sample or search across features shared by multiple samples, sample localities, and analysis types. Planned features include more sophisticated search and analytical interfaces as well as data discoverability through NSF's EarthCube program.

  10. Determination techniques of Archie’s parameters: a, m and n in heterogeneous reservoirs

    NASA Astrophysics Data System (ADS)

    Mohamad, A. M.; Hamada, G. M.

    2017-12-01

    The determination of water saturation in a heterogeneous reservoir is becoming more challenging, as Archie’s equation is only suitable for clean, homogeneous formations and Archie’s parameters are highly dependent on the properties of the rock. This study focuses on the measurement of Archie’s parameters in carbonate and sandstone core samples from Malaysian heterogeneous carbonate and sandstone reservoirs. Three techniques for the determination of Archie’s parameters a, m and n were implemented: the conventional technique, core Archie parameter estimation (CAPE) and the three-dimensional regression technique (3D). Using the results obtained with the three techniques, water saturation curves were produced to examine the differences in Archie’s parameters and their impact on water saturation values. The differences in water saturation values primarily reflect the uncertainty level of Archie’s parameters in carbonate and sandstone rock samples. The accuracy of Archie’s parameters clearly has a profound impact on the calculated water saturation values in carbonate and sandstone reservoirs, because regions of high stress reduce electrical conduction and raise the electrical heterogeneity of the heterogeneous carbonate core samples. Given the unrealistic assumptions involved in the conventional method, it is better to use either the CAPE or the 3D method to accurately determine Archie’s parameters in heterogeneous as well as homogeneous reservoirs.
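
    For reference, the water saturation that these techniques feed into comes from Archie's equation, Sw = ((a·Rw)/(φ^m·Rt))^(1/n). A small sketch showing how sensitive Sw is to the exponents; the inputs are illustrative, and the defaults are the conventional "clean sandstone" values the abstract argues should instead be measured per reservoir:

```python
def archie_sw(rw, rt, phi, a=1.0, m=2.0, n=2.0):
    """Water saturation from Archie's equation:
    Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n)

    a: tortuosity factor, m: cementation exponent, n: saturation exponent.
    Defaults are the textbook values; inputs below are illustrative only.
    """
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Sensitivity of Sw to the exponents, illustrating the paper's point:
print(archie_sw(rw=0.05, rt=20.0, phi=0.20))                 # a=1, m=2, n=2
print(archie_sw(rw=0.05, rt=20.0, phi=0.20, m=2.3, n=1.8))   # shifted exponents
```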

  11. Pharmacokinetic design optimization in children and estimation of maturation parameters: example of cytochrome P450 3A4.

    PubMed

    Bouillon-Pichault, Marion; Jullien, Vincent; Bazzoli, Caroline; Pons, Gérard; Tod, Michel

    2011-02-01

    The aim of this work was to determine whether optimizing the study design in terms of ages and sampling times for a drug eliminated solely via cytochrome P450 3A4 (CYP3A4) would allow us to accurately estimate the pharmacokinetic parameters throughout the entire childhood timespan, while taking into account age- and weight-related changes. A linear monocompartmental model with first-order absorption was used successively with three different residual error models and previously published pharmacokinetic parameters ("true values"). The optimal ages were established by D-optimization using the CYP3A4 maturation function to create "optimized demographic databases." The post-dose times for each previously selected age were determined by D-optimization using the pharmacokinetic model to create "optimized sparse sampling databases." We simulated concentrations by applying the population pharmacokinetic model to the optimized sparse sampling databases to create optimized concentration databases. The latter were modeled to estimate population pharmacokinetic parameters. We then compared true and estimated parameter values. The established optimal design comprised four age ranges: 0.008 years old (i.e., around 3 days), 0.192 years old (i.e., around 2 months), 1.325 years old, and adults, with the same number of subjects per group and three or four samples per subject, in accordance with the error model. The population pharmacokinetic parameters that we estimated with this design were precise and unbiased (root mean square error [RMSE] and mean prediction error [MPE] less than 11% for clearance and distribution volume and less than 18% for ka), whereas the maturation parameters were unbiased but less precise (MPE < 6% and RMSE < 37%). Based on our results, taking growth and maturation into account a priori in a pediatric pharmacokinetic study is theoretically feasible. However, it requires that very early ages be included in studies, which may present an obstacle to the
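
    The structural model named in this abstract is the standard one-compartment model with first-order absorption. A minimal sketch of its concentration function, with purely illustrative dose, parameter values and sparse post-dose times (not the authors' optimized design):

```python
import numpy as np

def conc_1cpt_oral(t, dose, ka, cl, v, f=1.0):
    """Concentration for a one-compartment model with first-order absorption:
    C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)),  ke = CL/V.

    Standard model form; the parameter values used below are illustrative.
    Assumes ka != ke (the usual non-degenerate case).
    """
    ke = cl / v
    return f * dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Evaluate at a sparse, D-optimized-style set of post-dose times (hypothetical):
t = np.array([0.5, 2.0, 8.0, 24.0])   # hours
c = conc_1cpt_oral(t, dose=100.0, ka=1.2, cl=5.0, v=40.0)
```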

  12. Comparison of different parameters for recording sagittal maxillo mandibular relation using natural head posture: A cephalometric study

    PubMed Central

    Singh, Ashish Kumar; Ganeshkar, Sanjay V.; Mehrotra, Praveen; Bhagchandani, Jitendra

    2013-01-01

    Background: Commonly used parameters for anteroposterior assessment of the jaw relationship include several analyses such as ANB, NA-Pog, AB-NPog, Wits appraisal, Harvold's unit length difference and the Beta angle. Several parameters (with different ranges and values) thus describe the sagittal relation, yet published comparisons and correlations of these measurements are scarce. Therefore, the objective of this study was to correlate these values in subjects of Indian origin. Materials and Methods: The sample consisted of fifty adult individuals (age group 18-26 years) with equal numbers of males and females. The selection criteria included subjects with no previous history of orthodontic and/or orthognathic surgical treatment; an orthognathic facial profile; Angle's Class I molar relation; a clinical Frankfort mandibular plane angle (FMA) of 30±5°; and no gross facial asymmetry. The cephalograms were taken in natural head position (NHP). Seven sagittal skeletal parameters were measured in the cephalograms and subjected to statistical evaluation with the Wits reading on the true horizontal as reference. A correlation coefficient analysis was done to assess the significance of association between these variables. Results: The ANB angle showed a statistically significant correlation for the total sample, though the values were insignificant for the individual groups and therefore may not be very accurate. Wits appraisal showed a significant correlation only in the female sample group. Conclusions: If cephalograms cannot be recorded in a NHP, then the best indicator for recording the A-P skeletal dimension would be the angle AB-NPog, followed by Harvold's unit length difference. However, considering biologic variability, more than one reading should be used for verification. PMID:24987638

  13. Consistency in color parameters of a commonly used shade guide.

    PubMed

    Tashkandi, Esam

    2010-01-01

    The use of shade guides to assess the color of natural teeth subjectively remains one of the most common means of dental shade assessment. Any variation in the color parameters of the different shade guides may have significant clinical implications, particularly since communication between the clinic and the dental laboratory is based on the shade guide designation. The purpose of this study was to investigate the consistency of the L∗a∗b∗ color parameters of a sample of a commonly used shade guide. The color parameters of a total of 100 VITAPAN Classical Vacuum shade guides (VITA Zahnfabrik, Bad Säckingen, Germany) were measured using an X-Rite ColorEye 7000A spectrophotometer (Grand Rapids, Michigan, USA). Each shade guide consists of 16 tabs with different designations. Each shade tab was measured five times and the average values were calculated. The ΔE between the average L∗a∗b∗ value for each shade tab and the average of the 100 shade tabs of the same designation was calculated. Using Student's t-test, no significant differences were found among the measured sample. There is a high level of consistency in the color parameters of the VITAPAN Classical Vacuum shade guide sample tested.
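
    The consistency measure here is a colour difference between each tab's mean reading and the grand mean for that designation. A sketch assuming the CIE76 definition of ΔE (the abstract does not state which formula was used), with made-up readings for illustration:

```python
import numpy as np

def delta_e(lab1, lab2):
    """CIE76 colour difference between two L*a*b* triplets:
    dE = sqrt(dL^2 + da^2 + db^2)."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Consistency check in the spirit of the study: dE of each tab's mean
# reading from the grand mean of all tabs with the same designation.
tabs = np.array([[78.1, 1.2, 15.3],    # illustrative L*, a*, b* readings
                 [78.4, 1.1, 15.0],
                 [77.9, 1.3, 15.6]])
grand_mean = tabs.mean(axis=0)
print([round(delta_e(t, grand_mean), 2) for t in tabs])
```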

  14. Parameter recovery, bias and standard errors in the linear ballistic accumulator model.

    PubMed

    Visser, Ingmar; Poessé, Rens

    2017-05-01

    The linear ballistic accumulator (LBA) model (Brown & Heathcote, Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation, which is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.

  15. Optimal sampling theory and population modelling - Application to determination of the influence of the microgravity environment on drug distribution and elimination

    NASA Technical Reports Server (NTRS)

    Drusano, George L.

    1991-01-01

    Optimal sampling theory is evaluated in application to studies of the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by a traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with the NONMEM approach (Sheiner et al., 1977), in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.

  16. Stability of haematological parameters and its relevance on the athlete's biological passport model.

    PubMed

    Lombardi, Giovanni; Lanteri, Patrizia; Colombini, Alessandra; Lippi, Giuseppe; Banfi, Giuseppe

    2011-12-01

    The stability of haematological parameters is crucial to guarantee accurate and reliable data for implementing and interpreting the athlete's biological passport (ABP). In this model, the values of haemoglobin, reticulocytes and the out-of-doping period (OFF) score (Hb − 60√Ret) are used to monitor possible variations of those parameters, and also to compare them with the thresholds developed by the statistical model for the single athlete on the basis of personal values and the variance of parameters in the modal group. Nevertheless, a critical review of the current scientific literature dealing with the stability of the haematological parameters included in the ABP programme, which are used for evaluating the probability of anomalies in the athlete's profile, is currently lacking. We therefore collected information from published studies in order to supply a useful, practical and updated review to sports physicians and haematologists. Some parameters are highly stable, such as haemoglobin and erythrocytes (red blood cells [RBCs]), whereas others (e.g. reticulocytes, mean RBC volume and haematocrit) appear less stable. Regardless of the methodology, the stability of haematological parameters is improved by sample refrigeration. The stability of all parameters is strongly affected by high storage temperatures, whereas the stability of RBCs and haematocrit is affected by initial freezing followed by refrigeration. Transport and rotation of tubes do not substantially influence any haematological parameter except for reticulocytes. In all the studies we reviewed that used Sysmex instrumentation, which is recommended for ABP measurements, stability was shown for 72 hours at 4 °C for haemoglobin, RBCs and mean corpuscular haemoglobin concentration (MCHC); up to 48 hours for reticulocytes; and up to 24 hours for haematocrit. In one study, Sysmex instrumentation showed stability extended up to 72 hours at 4 °C for all the parameters. There are
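
    The OFF-score quoted in the abstract combines haemoglobin and reticulocyte percentage in a single index. A one-line sketch, assuming the usual ABP units (Hb in g/L, Ret in %); confirm units against the governing documents before any operational use:

```python
def off_score(hb_g_per_l, ret_percent):
    """Athlete biological passport OFF-score as given in the abstract:
    OFF = Hb - 60 * sqrt(Ret), with Hb in g/L and Ret in % (assumed units)."""
    return hb_g_per_l - 60.0 * ret_percent ** 0.5

print(off_score(150.0, 1.0))    # 90.0
print(off_score(170.0, 0.25))   # 140.0 -- high Hb with suppressed reticulocytes
```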

  17. Effects of whirling disease on selected hematological parameters in rainbow trout

    USGS Publications Warehouse

    Densmore, Christine L.; Blazer, V.S.; Waldrop, T.B.; Pooler, P.S.

    2001-01-01

    Hematological responses to whirling disease in rainbow trout (Oncorhynchus mykiss) were investigated. Two-mo-old fingerling rainbow trout were exposed to cultured triactinomyxon spores of Myxobolus cerebralis at 9,000 spores/fish in December, 1997. Twenty-four wks post-exposure, fish were taken from infected and uninfected groups for peripheral blood and cranial tissue sampling. Histological observations on cranial tissues confirmed M. cerebralis infection in all exposed fish. Differences in hematological parameters between the two groups included significantly lower total leukocyte and small lymphocyte counts for the infected fish. No effects on hematocrit, plasma protein concentration, or other differential leukocyte counts were noted.

  18. Neuro-genetic system for optimization of GMI samples sensitivity.

    PubMed

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices with huge potential for applications involving measurement of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well modeled in quantitative terms, so the search for the set of parameters that optimizes a sample's sensitivity is usually empirical and very time-consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) neural network is used to model the impedance phase, and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we show theoretically that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that, compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimation. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors under different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037

  20. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we show theoretically that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that, compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimation. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors under different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
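
    The search-curve sampling these two records describe is short to implement. Below is a minimal sketch of the classical FAST estimator for first-order sensitivity indices; the frequency set, harmonic count and toy model are illustrative assumptions, and production implementations select interference-free frequencies systematically:

```python
import numpy as np

def fast_main_effects(model, omegas, n=4097, harmonics=4):
    """Classical search-curve FAST estimate of first-order sensitivity indices.

    Each input is driven along x_i(s) = 0.5 + arcsin(sin(omega_i * s)) / pi
    for s in (-pi, pi); the partial variance of input i is read off the
    Fourier amplitudes at omega_i and its first few harmonics.  The chosen
    frequencies must be free of low-order interferences.
    """
    s = np.pi * (2.0 * np.arange(1, n + 1) - n - 1) / n
    x = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi   # (n_vars, n)
    y = model(x)
    total_var = y.var()
    indices = []
    for w in omegas:
        d_i = 0.0
        for h in range(1, harmonics + 1):
            a = 2.0 * np.mean(y * np.cos(h * w * s))   # Fourier cosine amplitude
            b = 2.0 * np.mean(y * np.sin(h * w * s))   # Fourier sine amplitude
            d_i += 0.5 * (a * a + b * b)
        indices.append(d_i / total_var)
    return np.array(indices)

# Toy linear model y = x0 + 2*x1 + 0.5*x2: expected S ~ (1, 4, 0.25) / 5.25
si = fast_main_effects(lambda x: x[0] + 2 * x[1] + 0.5 * x[2], omegas=[11, 35, 79])
```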

  1. The redshift distribution of cosmological samples: a forward modeling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic-shear-like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
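
    At the heart of this forward-modeling loop is approximate Bayesian computation: draw parameters from the prior, simulate, compare summary statistics, keep the closest draws. A generic rejection-ABC sketch (not the UFig/MCCL pipeline itself), with a toy Gaussian example:

```python
import numpy as np

def abc_rejection(observed_stats, simulate, prior_draw, n_draws=20_000, quantile=0.01):
    """Minimal ABC rejection sampler.

    prior_draw() returns a parameter vector; simulate(theta) returns summary
    statistics comparable to observed_stats.  The closest fraction of draws
    (by Euclidean distance) is kept as the approximate posterior sample.
    """
    thetas = np.array([prior_draw() for _ in range(n_draws)])
    dists = np.array([np.linalg.norm(simulate(t) - observed_stats) for t in thetas])
    keep = dists <= np.quantile(dists, quantile)
    return thetas[keep]

# Toy example: infer the mean of a Gaussian from its sample mean and std.
rng = np.random.default_rng(0)
obs = np.array([0.8, 1.0])

def sim(theta):
    d = rng.normal(theta[0], 1.0, 200)
    return np.array([d.mean(), d.std()])

posterior = abc_rejection(obs, sim, lambda: rng.normal(0.0, 2.0, 1))
```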

  2. Observation model and parameter partials for the JPL VLBI parameter estimation software MASTERFIT-1987

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Fanselow, J. L.

    1987-01-01

    This report is a revision of the document of the same title (1986), dated August 1, which it supersedes. Model changes during 1986 and 1987 included corrections for antenna feed rotation, refraction in modelling antenna axis offsets, and an option to employ improved values of the semiannual and annual nutation amplitudes. Partial derivatives of the observables with respect to an additional parameter (surface temperature) are now available. New versions of two figures representing the geometric delay are incorporated. The expressions for the partial derivatives with respect to the nutation parameters have been corrected to include contributions from the dependence of UT1 on nutation. The authors hope to publish revisions of this document in the future, as modeling improvements warrant.

  3. Observation model and parameter partials for the JPL VLBI parameter estimation software MASTERFIT-1987

    NASA Astrophysics Data System (ADS)

    Sovers, O. J.; Fanselow, J. L.

    1987-12-01

    This report is a revision of the document of the same title (1986), dated August 1, which it supersedes. Model changes during 1986 and 1987 included corrections for antenna feed rotation, refraction in modelling antenna axis offsets, and an option to employ improved values of the semiannual and annual nutation amplitudes. Partial derivatives of the observables with respect to an additional parameter (surface temperature) are now available. New versions of two figures representing the geometric delay are incorporated. The expressions for the partial derivatives with respect to the nutation parameters have been corrected to include contributions from the dependence of UT1 on nutation. The authors hope to publish revisions of this document in the future, as modeling improvements warrant.

  4. HICOSMO - cosmology with a complete sample of galaxy clusters - I. Data analysis, sample selection and luminosity-mass scaling relation

    NASA Astrophysics Data System (ADS)

    Schellenberger, G.; Reiprich, T. H.

    2017-08-01

    The X-ray regime, where the most massive visible component of galaxy clusters, the intracluster medium, is visible, offers directly measured quantities, like the luminosity, and derived quantities, like the total mass, to characterize these objects. The aim of this project is to analyse a complete sample of galaxy clusters in detail and constrain cosmological parameters, like the matter density, Ωm, or the amplitude of initial density fluctuations, σ8. The purely X-ray flux-limited sample (HIFLUGCS) consists of the 64 X-ray brightest galaxy clusters, which are excellent targets for studying the systematic effects that can bias results. We analysed in total 196 Chandra observations of the 64 HIFLUGCS clusters, with a total exposure time of 7.7 Ms. Here, we present our data analysis procedure (including an automated substructure detection and an energy band optimization for surface brightness profile analysis) that gives individually determined, robust total mass estimates. These masses are tested against dynamical and Planck Sunyaev-Zeldovich (SZ) derived masses of the same clusters, where good overall agreement is found with the dynamical masses. The Planck SZ masses seem to show a mass-dependent bias relative to our hydrostatic masses; possible biases in this mass-mass comparison are discussed, including the Planck selection function. Furthermore, we show the results for the (0.1-2.4) keV luminosity versus mass scaling relation. The overall slope of the sample (1.34) is in agreement with expectations and values from the literature. Splitting the sample into galaxy groups and clusters reveals, even after a selection bias correction, that galaxy groups exhibit a significantly steeper slope (1.88) compared to clusters (1.06).

  5. Optimized Design and Analysis of Sparse-Sampling fMRI Experiments

    PubMed Central

    Perrachione, Tyler K.; Ghosh, Satrajit S.

    2013-01-01

    Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Although the technique has been in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase

  6. Comparison of semen parameters in samples collected by masturbation at a clinic and at home.

    PubMed

    Elzanaty, Saad; Malm, Johan

    2008-06-01

    To investigate differences in semen quality between samples collected by masturbation at a clinic and at home. Cross-sectional study. Fertility center. Three hundred seventy-nine men assessed for infertility. None. Semen was analyzed according to World Health Organization guidelines. Seminal markers of epididymal (neutral alpha-glucosidase), prostatic (prostate-specific antigen and zinc), and seminal vesicle (fructose) function were measured. Two patient groups were defined according to sample collection location: at a clinic (n = 273) or at home (n = 106). Compared with clinic-collected semen, home-collected samples had statistically significantly higher values for sperm concentration, total sperm count, rapid progressive motility, and total count of progressive motility. Semen volume, proportion of normal sperm morphology, neutral alpha-glucosidase, prostate-specific antigen, zinc, and fructose did not differ significantly between groups. An abnormal sperm concentration (<20 × 10^6/mL) was seen in statistically significantly fewer of the samples obtained at home (19/106, 18%) than at the clinic (81/273, 30%), and the same applied to proportions of samples with abnormal (<25%) rapid progressive motility (68/106 [64%] and 205/273 [75%], respectively). The present results demonstrate superior semen quality in samples collected by masturbation at home compared with at a clinic. This should be taken into consideration in infertility investigations.

  7. Efficient computation of the joint sample frequency spectra for multiple populations.

    PubMed

    Kamm, John A; Terhorst, Jonathan; Song, Yun S

    2017-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.

  8. Efficient computation of the joint sample frequency spectra for multiple populations

    PubMed Central

    Kamm, John A.; Terhorst, Jonathan; Song, Yun S.

    2016-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity. PMID:28239248
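    
    As a baseline for intuition, the expected SFS that momi generalizes has a closed form in the simplest case: a single constant-size population under the infinite-sites model, where E[ξ_i] = θ/i. A sketch of that textbook identity (not momi's multi-population computation):

```python
import numpy as np

def expected_sfs_constant(n, theta):
    """Expected site-frequency spectrum for a constant-size panmictic
    population under the infinite-sites model: E[xi_i] = theta / i,
    for i = 1..n-1 derived alleles in a sample of n sequences."""
    i = np.arange(1, n)
    return theta / i

sfs = expected_sfs_constant(n=10, theta=5.0)   # [5.0, 2.5, 1.667, ...]
print(sfs.sum())   # theta times the harmonic number a_{n-1}: expected number of segregating sites
```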

  9. Estimating cellular parameters through optimization procedures: elementary principles and applications.

    PubMed

    Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki

    2015-01-01

    Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
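
    A minimal end-to-end example of the SSE-minimization workflow this review describes: a hypothetical logistic-growth model, synthetic data, and gradient-based local searches from multiple random starts as a cheap stand-in for the stochastic global strategies mentioned above. All names and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical quantitative model: logistic growth x(t) = K / (1 + b*exp(-r*t)).
def model(t, params):
    k, b, r = params
    return k / (1.0 + b * np.exp(-r * t))

def sse(params, t, data):
    """Sum of squared errors between model prediction and measurements."""
    return np.sum((model(t, params) - data) ** 2)

t = np.linspace(0, 10, 30)
rng = np.random.default_rng(3)
data = model(t, (2.0, 8.0, 0.9)) + rng.normal(0, 0.05, t.size)  # synthetic data

# Gradient-based local searches from several random starts:
starts = rng.uniform([0.5, 1, 0.1], [5, 20, 2], size=(10, 3))
best = min((minimize(sse, s, args=(t, data)) for s in starts), key=lambda r: r.fun)
print(best.x)   # should recover approximately (2.0, 8.0, 0.9)
```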

  10. Bayesian Modal Estimation of the Four-Parameter Item Response Model in Real, Realistic, and Idealized Data Sets.

    PubMed

    Waller, Niels G; Feuerstahler, Leah

    2017-01-01

    In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler Item Response Theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in the mirt package (Chalmers, 2012).
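
    The 4PM extends the three-parameter logistic model with an upper asymptote d < 1. A sketch of the item response function and a person log-likelihood under it, with illustrative item parameters (this is not the authors' BME estimation code):

```python
import numpy as np

def irf_4pm(theta, a, b, c, d):
    """Four-parameter item response function:
    P(correct | theta) = c + (d - c) / (1 + exp(-a * (theta - b)))

    a: discrimination, b: difficulty, c: lower asymptote (guessing),
    d: upper asymptote (slipping); d < 1 distinguishes the 4PM from the 3PM.
    """
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

def log_likelihood(theta, responses, items):
    """Person log-likelihood given rows of item parameters (a, b, c, d)."""
    p = np.array([irf_4pm(theta, *it) for it in items])
    r = np.asarray(responses, dtype=float)
    return float(np.sum(r * np.log(p) + (1 - r) * np.log(1 - p)))

items = [(1.5, 0.0, 0.15, 0.95), (0.9, -1.0, 0.20, 0.98)]  # illustrative values
print(log_likelihood(0.5, [1, 1], items))
```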

  11. Molecular pedigree reconstruction and estimation of evolutionary parameters in a wild Atlantic salmon river system with incomplete sampling: a power analysis

    PubMed Central

    2014-01-01

    Background Pedigree reconstruction using genetic analysis provides a useful means to estimate fundamental population biology parameters relating to population demography, trait heritability and individual fitness when combined with other sources of data. However, there remain limitations to pedigree reconstruction in wild populations, particularly in systems where parent-offspring relationships cannot be directly observed, there is incomplete sampling of individuals, or molecular parentage inference relies on low quality DNA from archived material. While much can still be inferred from incomplete or sparse pedigrees, it is crucial to evaluate the quality and power of available genetic information a priori to testing specific biological hypotheses. Here, we used microsatellite markers to reconstruct a multi-generation pedigree of wild Atlantic salmon (Salmo salar L.) using archived scale samples collected with a total trapping system within a river over a 10-year period. Using a simulation-based approach, we determined the optimal microsatellite marker number for accurate parentage assignment, and evaluated the power of the resulting partial pedigree to investigate important evolutionary and quantitative genetic characteristics of salmon in the system. Results We show that at least 20 microsatellites (ave. 12 alleles/locus) are required to maximise parentage assignment and to improve the power to estimate reproductive success and heritability in this study system. We also show that 1.5-fold differences can be detected between groups simulated to have differing reproductive success, and that it is possible to detect moderate heritability values for continuous traits (h² ≈ 0.40) with more than 80% power when using 28 moderately to highly polymorphic markers. Conclusion The methodologies and work flow described provide a robust approach for evaluating archived samples for pedigree-based research, even where only a proportion of the total population is sampled. The

  12. Image parameters for maturity determination of a composted material containing sewage sludge

    NASA Astrophysics Data System (ADS)

    Kujawa, S.; Nowakowski, K.; Tomczak, R. J.; Boniecki, P.; Dach, J.

    2013-07-01

    Composting is one of the best methods for management of sewage sludge. In a well-conducted composting process it is important to identify early the moment at which the material reaches the young compost stage. The objective of this study was to determine parameters contained in images of samples of composted material that can be used to evaluate the degree of compost maturity. The study focused on two types of compost: sewage sludge with corn straw, and sewage sludge with rapeseed straw. The samples were photographed on a stand prepared for image acquisition using VIS, UV-A and mixed (VIS + UV-A) light. In the case of UV-A light, three values of the exposure time were used. The values of 46 parameters were estimated for each of the images extracted from the photographs of the samples. Exemplary averaged values of selected parameters obtained from images of the composted material on successive sampling days are presented. All of the parameters obtained from the images form the basis for preparing the training, validation and test data sets needed to develop neural models for classification of the young compost stage.

  13. Estimation of k-ε parameters using surrogate models and jet-in-crossflow data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan

    2014-11-01

    We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds Averaged Navier Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDF), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently a quick-running surrogate is used instead of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (C_μ, C_ε2, C_ε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for poor predictive skill of RANS, when using nominal values of the turbulence model

  14. Is 50 Hz high enough ECG sampling frequency for accurate HRV analysis?

    PubMed

    Mahdiani, Shadi; Jeyhani, Vala; Peltokangas, Mikko; Vehkaoja, Antti

    2015-01-01

    With the worldwide growth of mobile wireless technologies, healthcare services can be provided anytime and anywhere. The use of wearable wireless physiological monitoring systems has increased extensively during the last decade. These mobile devices can continuously measure, e.g., heart activity and wirelessly transfer the data to the patient's mobile phone. One of the significant restrictions for these devices is energy consumption, which favors low sampling rates. This article investigates the lowest adequate sampling frequency of the ECG signal for achieving sufficiently accurate time-domain heart rate variability (HRV) parameters. For this purpose, ECG signals originally measured at a high 5 kHz sampling rate were down-sampled to simulate measurement at lower sampling rates. Down-sampling loses information and decreases temporal accuracy, which was then partially restored by interpolating the signals back to their original sampling rate. The HRV parameters obtained from the ECG signals with lower sampling rates were compared. The results show that even when the sampling rate of the ECG signal is as low as 50 Hz, the HRV parameters remain reasonably accurate, with acceptable error.
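
    A minimal sketch of the down-sample/interpolate experiment described above, using a synthetic ECG stand-in and a naive R-peak detector; the rates, thresholds and the RMSSD comparison are illustrative only.

    ```python
    import numpy as np
    from scipy.interpolate import interp1d

    fs_hi = 5000                        # original sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs_hi)
    # synthetic "ECG": narrow Gaussian R-waves with slightly variable RR intervals
    rng = np.random.default_rng(2)
    beats = np.cumsum(0.8 + 0.05 * rng.standard_normal(70))
    beats = beats[beats < 59]
    ecg = sum(np.exp(-0.5 * ((t - b) / 0.008) ** 2) for b in beats)

    def rmssd(peak_times):
        rr = np.diff(peak_times) * 1000.0        # RR intervals in ms
        return np.sqrt(np.mean(np.diff(rr) ** 2))

    def peaks(sig, ts, thresh=0.3):
        """Naive R-peak detector: local maxima above a threshold."""
        idx = np.where((sig[1:-1] > thresh) &
                       (sig[1:-1] >= sig[:-2]) & (sig[1:-1] > sig[2:]))[0] + 1
        return ts[idx]

    for fs_lo in (1000, 250, 50):
        step = fs_hi // fs_lo
        t_lo, ecg_lo = t[::step], ecg[::step]    # simulate a low-rate measurement
        ts = t[t <= t_lo[-1]]
        # restore temporal resolution by cubic interpolation back to 5 kHz
        ecg_up = interp1d(t_lo, ecg_lo, kind="cubic")(ts)
        print(fs_lo, "Hz -> RMSSD", round(rmssd(peaks(ecg_up, ts)), 2), "ms")
    print("reference RMSSD:", round(rmssd(beats), 2), "ms")
    ```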

  15. A NuSTAR census of coronal parameters in Seyfert galaxies

    NASA Astrophysics Data System (ADS)

    Tortosa, A.; Bianchi, S.; Marinucci, A.; Matt, G.; Petrucci, P. O.

    2018-06-01

    Context. We discuss the hot corona parameters of active galactic nuclei (AGN) that have recently been measured with NuSTAR. The values taken from the literature for a sample of 19 bright Seyfert galaxies are analysed. Aims: The aim of this work is to look for correlations between coronal parameters, such as the photon index and cut-off energy (when a phenomenological model is adopted) or the optical depth and temperature (when a Comptonization model is used), and other parameters of the systems, such as the black hole mass or the Eddington ratio. Methods: We analysed the coronal parameters of the 19 unobscured, bright Seyfert galaxies that are present in the Swift/BAT 70-month catalogue and that have been observed by NuSTAR, alone or simultaneously with other X-ray observatories such as Swift, Suzaku, or XMM-Newton. Results: We found an anti-correlation with a significance level >98% between the coronal optical depth and the coronal temperature of our sample. On the other hand, no correlation between the above parameters and the black hole mass, the accretion rate, or the intrinsic spectral slope of the sources was found.

  16. Planning spatial sampling of the soil from an uncertain reconnaissance variogram

    NASA Astrophysics Data System (ADS)

    Lark, R. Murray; Hamilton, Elliott M.; Kaninga, Belinda; Maseka, Kakoma K.; Mutondo, Moola; Sakala, Godfrey M.; Watts, Michael J.

    2017-12-01

    An estimated variogram of a soil property can be used to support a rational choice of sampling intensity for geostatistical mapping. However, it is known that estimated variograms are subject to uncertainty. In this paper we address two practical questions. First, how can we make a robust decision on sampling intensity, given the uncertainty in the variogram? Second, what are the costs incurred in terms of oversampling because of uncertainty in the variogram model used to plan sampling? To achieve this we show how samples of the posterior distribution of variogram parameters, from a computational Bayesian analysis, can be used to characterize the effects of variogram parameter uncertainty on sampling decisions. We show how one can select a sample intensity so that a target value of the kriging variance is not exceeded with some specified probability. This will lead to oversampling, relative to the sampling intensity that would be specified if there were no uncertainty in the variogram parameters. One can estimate the magnitude of this oversampling by treating the tolerable grid spacing for the final sample as a random variable, given the target kriging variance and the posterior sample values. We illustrate these concepts with some data on total uranium content in a relatively sparse sample of soil from agricultural land near mine tailings in the Copperbelt Province of Zambia.
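
    A hedged sketch of the decision procedure described here: for each posterior draw of exponential-variogram parameters (the draws below are synthetic stand-ins), find the largest grid spacing whose ordinary-kriging variance at a cell centre stays below the target, then take a lower quantile across draws so the target is met with the specified probability.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def gamma(d, c0, c1, a):
        """Exponential variogram: c0 nugget, c0 + c1 sill, a distance parameter."""
        return np.where(d > 0, c0 + c1 * (1.0 - np.exp(-d / a)), 0.0)

    def kv_centre(h, c0, c1, a):
        """Ordinary-kriging variance at the centre of a square grid cell of
        side h, predicted from its four corner observations."""
        pts = np.array([[0, 0], [h, 0], [0, h], [h, h]], float)
        tgt = np.array([h / 2, h / 2])
        D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        A = np.zeros((5, 5))
        A[:4, :4] = gamma(D, c0, c1, a)
        A[4, :4] = A[:4, 4] = 1.0               # unbiasedness constraint
        b = np.append(gamma(np.linalg.norm(pts - tgt, axis=1), c0, c1, a), 1.0)
        lam = np.linalg.solve(A, b)
        return lam[:4] @ b[:4] + lam[4]         # sum(lambda*gamma) + mu

    def tolerable_spacing(c0, c1, a, target, hs=np.linspace(1.0, 200.0, 200)):
        ok = [h for h in hs if kv_centre(h, c0, c1, a) <= target]
        return max(ok) if ok else hs[0]

    # hypothetical posterior draws: (nugget, partial sill, range parameter)
    post = np.column_stack([rng.uniform(0.05, 0.15, 200),
                            rng.uniform(0.8, 1.2, 200),
                            rng.uniform(30.0, 60.0, 200)])
    target_kv = 0.6
    h_tol = np.array([tolerable_spacing(*p, target_kv) for p in post])
    # spacing that meets the target with 90% posterior probability
    print("design spacing:", np.quantile(h_tol, 0.10))
    ```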

  17. Fluid sampling device

    NASA Technical Reports Server (NTRS)

    Studenick, D. K. (Inventor)

    1977-01-01

    An inlet leak is described for sampling gases, more specifically, for selectively sampling multiple fluids. This fluid sampling device includes a support frame. A plurality of fluid inlet devices extend through the support frame and each of the fluid inlet devices include a longitudinal aperture. An opening device that is responsive to a control signal selectively opens the aperture to allow fluid passage. A closing device that is responsive to another control signal selectively closes the aperture for terminating further fluid flow.

  18. Accelerated Enveloping Distribution Sampling: Enabling Sampling of Multiple End States while Preserving Local Energy Minima.

    PubMed

    Perthold, Jan Walther; Oostenbrink, Chris

    2018-05-17

    Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smoothen energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.
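
    For orientation, the conventional EDS reference-state Hamiltonian from the literature, which the accelerated variant proposed here modifies to preserve local minima; these are the standard textbook expressions, not the paper's own formulas.

    ```latex
    % Standard EDS reference state (smoothness parameter s, energy offsets
    % E_i^R, \beta = 1/(k_B T)):
    H_\mathrm{ref}(\mathbf{r})
      = -\frac{1}{\beta s}\,\ln \sum_{i=1}^{N}
        \exp\!\left[-\beta s \left(H_i(\mathbf{r}) - E_i^\mathrm{R}\right)\right]

    % Free-energy differences to each end state follow from a single
    % reference simulation via the Zwanzig formula:
    \Delta F_{\mathrm{ref}\to i}
      = -\frac{1}{\beta}\,\ln \left\langle
          e^{-\beta \left(H_i - H_\mathrm{ref}\right)}
        \right\rangle_\mathrm{ref}
    ```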

  19. Extrapolation of sonic boom pressure signatures by the waveform parameter method

    NASA Technical Reports Server (NTRS)

    Thomas, C. L.

    1972-01-01

    The waveform parameter method of sonic boom extrapolation is derived and shown to be equivalent to the F-function method. A computer program based on the waveform parameter method is presented and discussed, with a sample case demonstrating program input and output.

  20. Influence of scanning parameters on the estimation accuracy of control points of B-spline surfaces

    NASA Astrophysics Data System (ADS)

    Aichinger, Julia; Schwieger, Volker

    2018-04-01

    This contribution deals with the influence of scanning parameters such as scanning distance, incidence angle, surface quality and sampling width on the average estimated standard deviations of the positions of control points of B-spline surfaces, which are used to model surfaces from terrestrial laser scanning data. The influence of the scanning parameters is analyzed by Monte Carlo based variance analysis. Samples were generated for uncorrelated and correlated data using the Latin hypercube and replicated Latin hypercube sampling algorithms, respectively. Finally, the investigations show that the most influential scanning parameter is the distance from the laser scanner to the object. The angle of incidence shows a significant effect for distances of 50 m and longer, while the surface quality contributes only negligible effects. The sampling width has no influence. Optimal scanning parameters are obtained at the smallest possible object distance, at an angle of incidence close to 0°, and with the highest surface quality. Accounting for correlations improves the estimation accuracy and underlines the importance of complete stochastic models for TLS measurements.
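
    A small sketch of generating such designs with SciPy's quasi-Monte Carlo module; the parameter ranges are hypothetical, and the "replicated" design is approximated here by independently re-randomized LHS designs.

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Hypothetical ranges for the four scanning parameters studied:
    # distance (m), incidence angle (deg), surface quality (class), sampling width (mm)
    l_bounds = [5.0, 0.0, 1.0, 1.0]
    u_bounds = [100.0, 60.0, 3.0, 10.0]

    sampler = qmc.LatinHypercube(d=4, seed=42)
    unit = sampler.random(n=1000)                 # stratified samples in [0, 1)^4
    X = qmc.scale(unit, l_bounds, u_bounds)       # rescale to parameter ranges

    # A crude stand-in for replicated LHS: several independently randomized designs
    replicates = [qmc.scale(qmc.LatinHypercube(d=4, seed=s).random(200),
                            l_bounds, u_bounds) for s in (1, 2, 3)]
    print(X.shape, len(replicates))
    ```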

  1. A Mars Sample Return Sample Handling System

    NASA Technical Reports Server (NTRS)

    Wilson, David; Stroker, Carol

    2013-01-01

    We present a sample handling system, a subsystem of the proposed Dragon landed Mars Sample Return (MSR) mission [1], that can return to Earth orbit a significant mass of frozen Mars samples, potentially consisting of rock cores, subsurface drilled rock and ice cuttings, pebble-sized rocks, and soil scoops. The sample collection, storage, retrieval and packaging assumptions and concepts in this study are applicable to NASA's MPPG MSR mission architecture options [2]. Our study assumes a predecessor rover mission collects samples for return to Earth to address questions on past life, climate change, water history, age dating, understanding Mars interior evolution [3], and human safety and in-situ resource utilization. Hence the rover will have "integrated priorities for rock sampling" [3] that cover collection of subaqueous or hydrothermal sediments, low-temperature fluid-altered rocks, unaltered igneous rocks, regolith and atmosphere samples. Samples could include drilled rock cores, alluvial and fluvial deposits, subsurface ice and soils, clays, sulfates, salts including perchlorates, aeolian deposits, and concretions. Thus samples will have a broad range of bulk densities and will require, for Earth-based analysis where practical: in-situ characterization, management of degradation such as perchlorate deliquescence and volatile release, and contamination management. We propose to adopt a sample container with a set of cups, each holding a sample from a specific location. We considered two sample cup sizes: (1) a small cup sized for samples matching those submitted to in-situ characterization instruments, and (2) a larger cup for 100 mm rock cores [4] and pebble-sized rocks, thus providing diverse samples and optimizing the MSR sample mass payload fraction for a given payload volume. We minimize sample degradation by keeping the samples frozen in the MSR payload sample canister using Peltier chip cooling. The cups are sealed by interference fitted heat activated memory

  2. [A comparison of convenience sampling and purposive sampling].

    PubMed

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, the opportunity to participate is not equal for all qualified individuals in the target population, and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on the study purpose, with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable, and sample size is determined by data saturation, not by statistical power analysis.

  3. Determination of drugs and drug-like compounds in different samples with direct analysis in real time mass spectrometry.

    PubMed

    Chernetsova, Elena S; Morlock, Gertrud E

    2011-01-01

    Direct analysis in real time (DART), a relatively new ionization source for mass spectrometry, ionizes small-molecule components from different kinds of samples without any sample preparation or chromatographic separation. The current paper reviews the published data available on the determination of drugs and drug-like compounds in different matrices with DART-MS, including identification and quantitation issues. Parameters that affect ionization efficiency and mass spectra composition are also discussed.

  4. Experimental investigation of effective parameters on signal enhancement in spark assisted laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Hassanimatin, M. M.; Tavassoli, S. H.

    2018-05-01

    A combination of an electrical spark and laser induced breakdown spectroscopy (LIBS), called spark assisted LIBS (SA-LIBS), has shown its capability to enhance plasma spectral emission. The aim of this paper is a detailed study of plasma emission to determine the effect of plasma and experimental parameters on the spectral signal enhancement. An enhancement ratio of SA-LIBS spectral lines relative to LIBS is introduced theoretically. The parameters affecting the spectral enhancement ratio, including ablated mass, plasma temperature, the lifetimes of neutral and ionic spectral lines, plasma volume, and electron density, are experimentally investigated and discussed. By substituting the effective parameters, the theoretical spectral enhancement ratio is calculated and compared with the experimental one. Two samples, granite as a dielectric and aluminum as a metal, are studied at different laser pulse energies. There is good agreement between the calculated and the experimental enhancement ratios.

  5. Gaussian mass optimization for kernel PCA parameters

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Wang, Zulin

    2011-10-01

    This paper proposes a novel kernel parameter optimization method based on Gaussian mass, which aims to overcome the current brute-force parameter optimization methods in a heuristic way. Generally speaking, the choice of kernel parameter should be tightly related to the target objects, whereas the variance between the samples, the most commonly used kernel parameter, does not capture many features of the target; this motivates the Gaussian mass. The Gaussian mass defined in this paper is invariant to rotation and translation and is capable of describing edge, topology and shape information. Simulation results show that Gaussian mass provides a promising heuristic boost for kernel methods. On the MNIST handwriting database, the recognition rate improves by 1.6% compared with the common kernel method without Gaussian mass optimization. Several other promising directions in which Gaussian mass might help are also proposed at the end of the paper.

  6. Investigation of the influence of sampling schemes on quantitative dynamic fluorescence imaging

    PubMed Central

    Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Wang, Guodong; Wang, Bo; Zhan, Yonghua; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin

    2018-01-01

    Dynamic optical data from a series of sampling intervals can be used for quantitative analysis to obtain meaningful kinetic parameters of a probe in vivo. The sampling scheme may affect the quantification results of dynamic fluorescence imaging. Here, we investigate the influence of different sampling schemes on the quantification of binding potential (BP) with theoretically simulated and experimentally measured data. Three groups of sampling schemes are investigated: the sampling starting point, sampling sparsity, and sampling uniformity. In investigating the influence of the sampling starting point, we further distinguish two cases according to whether the timing sequence between the probe injection and the sampling starting time is missing. Results show that the mean value of BP exhibits an obvious growth trend with an increase in the delay of the sampling starting point, and has a strong correlation with the sampling sparsity. The growth trend is much more obvious if the missing timing sequence is discarded. The standard deviation of BP is inversely related to the sampling sparsity, and independent of the sampling uniformity and of the delay of the sampling starting time. Moreover, the mean value of BP obtained by uniform sampling is significantly higher than that obtained by non-uniform sampling. Our results collectively suggest that a suitable sampling scheme can help compartmental modeling of dynamic fluorescence imaging provide more accurate results with simpler operations. PMID:29675325

  7. Post-processing of seismic parameter data based on valid seismic event determination

    DOEpatents

    McEvilly, Thomas V.

    1985-01-01

    An automated seismic processing system and method are disclosed, including an array of CMOS microprocessors for unattended battery-powered processing of a multi-station network. According to a characterizing feature of the invention, each channel of the network operates independently to automatically detect events, measure times and amplitudes, and compute and fit Fast Fourier transforms (FFTs) for both P- and S-waves on analog seismic data after it has been sampled at a given rate. The measured parameter data from each channel are then reviewed for event validity by a central controlling microprocessor and, if determined by preset criteria to constitute a valid event, the parameter data are passed to an analysis computer for calculation of hypocenter location, running b-values, source parameters, event count, P-wave polarities, moment-tensor inversion, and Vp/Vs ratios. The in-field real-time analysis of data maximizes the efficiency of microearthquake surveys, allowing flexibility in experimental procedures with a minimum of traditional labor-intensive postprocessing. A unique consequence of the system is that none of the original data (i.e., the sensor analog output signals) is necessarily saved after computation; rather, the numerical parameters generated by the automatic analysis are the sole output of the automated seismic processor.

  8. 40 CFR 1037.205 - What must I include in my application?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the application (including the test procedures, test parameters, and test fuels) to show you meet the... basic parameters of the vehicle's design and emission controls. List the fuel type on which your vehicles are designed to operate (for example, ultra low-sulfur diesel fuel). (b) Explain how the emission...

  9. 40 CFR 1037.205 - What must I include in my application?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the application (including the test procedures, test parameters, and test fuels) to show you meet the... basic parameters of the vehicle's design and emission controls. List the fuel type on which your vehicles are designed to operate (for example, ultra low-sulfur diesel fuel). (b) Explain how the emission...

  10. Cleaning Genesis Sample Return Canister for Flight: Lessons for Planetary Sample Return

    NASA Technical Reports Server (NTRS)

    Allton, J. H.; Hittle, J. D.; Mickelson, E. T.; Stansbery, Eileen K.

    2016-01-01

    Sample return missions require chemical contamination to be minimized and potential sources of contamination to be documented and preserved for future use. Genesis focused on and successfully accomplished the following:
    - Early involvement provided input to mission design: a) cleanable materials and cleanable design; b) mission operation parameters to minimize contamination during flight.
    - Established contamination control authority at a high level and developed knowledge and respect for contamination control across all institutions at the working level.
    - Provided state-of-the-art spacecraft assembly cleanroom facilities for science canister assembly and function testing. Both particulate and airborne molecular contamination were minimized.
    - Using ultrapure water, cleaned spacecraft components to a very high level. Stainless steel components were cleaned to carbon monolayer levels (10^15 carbon atoms per square centimeter).
    - Established a long-term curation facility.
    Lessons learned and areas for improvement include:
    - Bare aluminum is not a cleanable surface and should not be used for components requiring extreme levels of cleanliness. The problem is the formation of oxides during rigorous cleaning.
    - Representative coupons of relevant spacecraft components (cut from the same block at the same time with identical surface finish and cleaning history) should be acquired, documented and preserved. Genesis experience suggests that creation of these coupons would be facilitated by specification on the engineering component drawings.
    - Component handling history is critical for interpretation of analytical results on returned samples. This set of relevant documents is not the same as typical documentation for one-way missions and includes data from several institutions, which need to be unified. Dedicated resources need to be provided for acquiring and archiving appropriate documents in one location with easy access for decades.
    - Dedicated, knowledgeable

  11. Extensive monitoring through multiple blood samples in professional soccer players.

    PubMed

    Heisterberg, Mette F; Fahrenkrug, Jan; Krustrup, Peter; Storskov, Anders; Kjær, Michael; Andersen, Jesper L

    2013-05-01

    The aim of this study was to comprehensively gather consecutive detailed blood samples from professional soccer players and to analyze different blood parameters in relation to seasonal changes in training and match exposure. Blood samples were collected 5 times during a 6-month period and analyzed for 37 variables in 27 professional soccer players from the best Danish league. Additionally, the players were tested for body composition, V̇O2max and physical performance by the Yo-Yo intermittent endurance submaximal test (IE2). Multiple variations in blood parameters occurred during the observation period, including a decrease in hemoglobin and an increase in hematocrit as the competitive season progressed. Iron and transferrin were stable, whereas ferritin showed a decrease at the end of the season. Immunoglobulin A (IgA) and IgM increased in the period with basal physical training and at the end of the season. Leucocytes decreased with increased physical training. Lymphocytes decreased at the end of the season. The V̇O2max decreased toward the end of the season, whereas no significant changes were observed in the IE2 test. The regular blood samples from elite soccer players reveal significant changes that may be related to changes in training pattern, match exposure, or length of the match season. In particular, the end of the preparation season and the end of the competitive season seem to be time points where the blood-derived values indicate that the players are under excessive physical strain and might be more subject to possible overreaching or overtraining conditions. We suggest that regular analysis of blood samples could be an important initiative to optimize training adaptation, training load, and game participation, but sampling has to be regular, and a database has to be built for each individual player.

  12. Sample introducing apparatus and sample modules for mass spectrometer

    DOEpatents

    Thompson, Cyril V.; Wise, Marcus B.

    1993-01-01

    An apparatus for introducing gaseous samples from a wide range of environmental matrices into a mass spectrometer for analysis is described. Several sample-preparing modules, including a real-time air monitoring module, a soil/liquid purge module, and a thermal desorption module, are individually and rapidly attachable to the sample introducing apparatus for supplying gaseous samples to the mass spectrometer. The sample introducing apparatus uses a capillary column for conveying the gaseous samples into the mass spectrometer and is provided with an open/split interface in communication with the capillary, together with a sample archiving port through which at least about 90 percent of the gaseous sample, in a mixture with an inert gas introduced into the apparatus, is separated from the minor portion of the mixture entering the capillary and is discharged from the apparatus.

  13. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    NASA Astrophysics Data System (ADS)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug-user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug-user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hair and that of drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  14. Automatic tree parameter extraction by a Mobile LiDAR System in an urban context.

    PubMed

    Herrero-Huerta, Mónica; Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo

    2018-01-01

    In an urban context, tree data are used in city planning, in locating hazardous trees and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System at city level. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC. In the case of non-circular trees, DBH is calculated from the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the height of the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, mesh generation procedures and α-shape methods were implemented. Also, tree location coordinates were obtained by means of Principal Component Analysis. The workflow was validated on 29 trees of different species sampled along a 750 m stretch of road in Delft (The Netherlands) and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH parameter had a correlation R2 value of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yielded correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology and its scalability to a comprehensive analysis of urban trees.
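
    A hedged sketch of the RANSAC circle-fitting step used for DBH, on a synthetic height-bin slice; the tolerance, point counts and trunk size are illustrative, and the algebraic (Kasa) fit is one common choice rather than necessarily the authors'.

    ```python
    import numpy as np

    def fit_circle(pts):
        """Algebraic (Kasa) least-squares circle fit: solve for centre (cx, cy)
        and radius r from x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)."""
        A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
        b = (pts ** 2).sum(axis=1)
        (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

    def ransac_circle(pts, n_iter=500, tol=0.02, seed=0):
        """RANSAC: repeatedly fit a circle to 3 random points and keep the
        model with the most inliers (points within tol of the circle)."""
        rng = np.random.default_rng(seed)
        best, best_inl = None, 0
        for _ in range(n_iter):
            cx, cy, r = fit_circle(pts[rng.choice(len(pts), 3, replace=False)])
            d = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r)
            if (d < tol).sum() > best_inl:
                best, best_inl = (cx, cy, r), (d < tol).sum()
        # refit on all inliers of the best model
        d = np.abs(np.hypot(pts[:, 0] - best[0], pts[:, 1] - best[1]) - best[2])
        return fit_circle(pts[d < tol])

    # Synthetic height-bin slice: noisy points on a 0.35 m-diameter trunk + clutter
    rng = np.random.default_rng(1)
    ang = rng.uniform(0, 2 * np.pi, 200)
    trunk = np.column_stack([0.175 * np.cos(ang), 0.175 * np.sin(ang)])
    trunk += rng.normal(scale=0.005, size=trunk.shape)
    cloud = np.vstack([trunk, rng.uniform(-0.5, 0.5, (40, 2))])
    cx, cy, r = ransac_circle(cloud)
    print("estimated DBH:", round(2 * r, 3), "m")
    ```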

  15. Automatic tree parameter extraction by a Mobile LiDAR System in an urban context

    PubMed Central

    Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo

    2018-01-01

    In an urban context, tree data are used in city planning, in locating hazardous trees and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System at city level. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC. In the case of non-circular trees, DBH is calculated from the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the height of the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, mesh generation procedures and α-shape methods were implemented. Also, tree location coordinates were obtained by means of Principal Component Analysis. The workflow was validated on 29 trees of different species sampled along a 750 m stretch of road in Delft (The Netherlands) and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH parameter had a correlation R2 value of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yielded correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology and its scalability to a comprehensive analysis of urban trees. PMID:29689076

  16. Quantitative tissue parameters of Achilles tendon and plantar fascia in healthy subjects using a handheld myotonometer.

    PubMed

    Orner, Sarah; Kratzer, Wolfgang; Schmidberger, Julian; Grüner, Beate

    2018-01-01

    The aim of the study was to examine the quantitative tissue properties of the Achilles tendon and plantar fascia using the handheld, non-invasive MyotonPRO device, in order to generate normal values and examine the biomechanical relationship between the two structures. This was a prospective study of a large, healthy sample population. The study sample included 207 healthy subjects (87 males and 120 females) for the Achilles tendon and 176 healthy subjects (73 males and 103 females) for the plantar fascia. For the correlations of the tissue parameters of the Achilles tendon and plantar fascia, an intersection of both groups was formed, which included 150 healthy subjects (65 males and 85 females). All participants were measured in a prone position. Consecutive measurements of the Achilles tendon and plantar fascia were performed with the MyotonPRO device at defined sites. For the left and right Achilles tendons and plantar fasciae, all five MyotonPRO parameters (Frequency [Hz], Decrement, Stiffness [N/m], Creep and Relaxation Time [ms]) were calculated for healthy males and females. The tissue parameters of the Achilles tendon and plantar fascia showed a significant positive correlation for all parameters on both the left and the right side. The MyotonPRO is a feasible device for easy measurement of passive tissue properties of the Achilles tendon and plantar fascia in a clinical setting. The generated normal values of the Achilles tendon and plantar fascia are important for detecting abnormalities in patients with Achilles tendinopathy or plantar fasciitis in the future. Biomechanically, the two structures are positively correlated. This may provide new aspects for the diagnosis and therapy of plantar fasciitis and Achilles tendinopathy.

  17. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data.

    PubMed

    O'Reilly, Joseph E; Donoghue, Philip C J

    2018-03-01

    Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees contain a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, when summarizing the results of Bayesian phylogenetic analyses of morphological data.

  18. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data

    PubMed Central

    O’Reilly, Joseph E; Donoghue, Philip C J

    2018-01-01

    Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees contain a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, when summarizing the results of Bayesian phylogenetic analyses of morphological data. PMID:29106675

  19. Migration of antioxidants from polylactic acid films, a parameter estimation approach: Part I - A model including convective mass transfer coefficient.

    PubMed

    Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda

    2018-03-01

    A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors govern the sorption and/or desorption kinetics of migrants from polymer films: the diffusion coefficient (D), the partition coefficient (K_p,f) and the convective mass transfer coefficient (h). These three parameters were estimated simultaneously; they provide in-depth insight into the physics of the migration process. The first step finds the combination of D, K_p,f and h that minimizes the sum of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation is performed using the proposed analytical solution containing D, K_p,f and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the use of this two-step solution. Additional parameter estimation approaches, such as sequential and bootstrap estimation, were also performed to gain better knowledge of the kinetics of migration. The proposed model successfully provided the initial guesses for D, K_p,f and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues in a migration process can be avoided, since these two parameters are correlated.
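
    A hedged numerical sketch of the two-step idea (a coarse SSE grid search, then least-squares refinement), using an explicit finite-difference stand-in for Crank's analytical solution rather than the paper's own expressions. Note that with the perfect-sink boundary assumed here only the lumped coefficient k_s = h/K_p,f is identifiable, so the sketch fits D and k_s; all values are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def migrated_fraction(times, D, k_s, L=2e-4, N=40):
        """Explicit finite differences for desorption from a film of thickness
        L (m): dC/dt = D d2C/dx2, zero flux at x=0, surface resistance at x=L
        (-D dC/dx = k_s * C, perfect sink). Returns migrated fraction, C0 = 1."""
        dx = L / (N - 1)
        dt = 0.2 * dx ** 2 / (D + dx * k_s)     # respects both stability limits
        C = np.ones(N)
        out, t, it = [], 0.0, iter(np.sort(times))
        target = next(it, None)
        while target is not None:
            lap = np.zeros(N)
            lap[1:-1] = C[2:] - 2 * C[1:-1] + C[:-2]
            lap[0] = 2 * (C[1] - C[0])                          # zero-flux mirror
            lap[-1] = 2 * (C[-2] - C[-1]) - 2 * dx * k_s / D * C[-1]
            C = C + D * dt / dx ** 2 * lap
            t += dt
            while target is not None and t >= target:
                out.append(1.0 - C.mean())
                target = next(it, None)
        return np.array(out)

    # synthetic "experimental" data from known parameters, with noise
    rng = np.random.default_rng(4)
    t_obs = np.linspace(3600, 30 * 3600, 12)                    # seconds
    y_obs = migrated_fraction(t_obs, D=1e-13, k_s=1e-9) + rng.normal(0, 0.01, 12)

    # step 1: coarse log-spaced grid search for a starting point (minimum SSE)
    grid = [(D, k) for D in np.logspace(-14, -12, 5) for k in np.logspace(-10, -8, 5)]
    sse = [np.sum((migrated_fraction(t_obs, D, k) - y_obs) ** 2) for D, k in grid]
    D0, k0 = grid[int(np.argmin(sse))]

    # step 2: ordinary least squares refinement from that starting point
    fit = least_squares(lambda p: migrated_fraction(t_obs, *np.exp(p)) - y_obs,
                        x0=np.log([D0, k0]))
    print("estimated D, k_s:", np.exp(fit.x))
    ```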

  20. One mouse, one pharmacokinetic profile: quantitative whole blood serial sampling for biotherapeutics.

    PubMed

    Joyce, Alison P; Wang, Mengmeng; Lawrence-Henderson, Rosemary; Filliettaz, Cynthia; Leung, Sheldon S; Xu, Xin; O'Hara, Denise M

    2014-07-01

    The purpose of this study was to validate the approach of serial sampling from one mouse, using ligand binding assay (LBA) quantification of a dosed biotherapeutic in diluted whole blood to derive a pharmacokinetic (PK) profile. This investigation compared PK parameters obtained using serial and composite sampling methods following administration of a human IgG monoclonal antibody. The serial sampling technique was established by collecting 10 μL of blood via the tail vein at each time point following drug administration. Blood was immediately diluted into buffer, and the analyte was quantified using Gyrolab to derive plasma concentrations. Additional studies were conducted to understand matrix and sampling-site effects on drug concentrations. The drug concentration profiles, irrespective of biological matrix, and the PK parameters were not significantly different between the two sampling methods. There were no sampling-site effects on drug concentration measurements, except that concentrations were slightly lower in sodium-citrated plasma than in other matrices. We recommend the application of mouse serial sampling, particularly when drug supply is limited or specialized animal models are used. Overall, the efficiencies gained by serial sampling were 40-80% savings in study cost, animal usage, study length and drug conservation, while inter-subject variability across PK parameters was less than 30%.

  1. Sample size calculation in economic evaluations.

    PubMed

    Al, M J; van Hout, B A; Michel, B C; Rutten, F F

    1998-06-01

    A simulation method is presented for sample size calculation in economic evaluations. As input the method requires the expected difference and variance of costs and effects, their correlation, the significance level (alpha) and power of the testing method, and the maximum acceptable ratio of incremental effectiveness to incremental costs. The method is illustrated with data from two trials. The first compares primary coronary angioplasty with streptokinase in the treatment of acute myocardial infarction; in the second trial, lansoprazole is compared with omeprazole in the treatment of reflux oesophagitis. These case studies show how the various parameters influence the sample size. Given the large number of parameters that have to be specified in advance, the lack of knowledge about costs and their standard deviation, and the difficulty of specifying the maximum acceptable ratio of incremental effectiveness to incremental costs, the study concludes that, from a technical point of view, it is possible to perform a sample size calculation for an economic evaluation, but one should question how useful it is.
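
    A minimal simulation sketch of such a sample-size calculation, testing the incremental net monetary benefit (the maximum acceptable effectiveness-to-cost ratio enters as the willingness-to-pay lam); all input values below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def power(n, d_e=0.05, d_c=300.0, sd_e=0.2, sd_c=1500.0, rho=0.3,
              lam=20000.0, n_sim=2000):
        """Simulated power of a one-sided z-test (alpha = 0.05) that the
        incremental net monetary benefit NMB = lam*dE - dC is positive,
        based on n patient-level (dE, dC) pairs."""
        cov = [[sd_e ** 2, rho * sd_e * sd_c], [rho * sd_e * sd_c, sd_c ** 2]]
        hits = 0
        for _ in range(n_sim):
            x = rng.multivariate_normal([d_e, d_c], cov, size=n)
            nmb = lam * x[:, 0] - x[:, 1]
            z = nmb.mean() / (nmb.std(ddof=1) / np.sqrt(n))
            hits += z > 1.645                   # one-sided 5% critical value
        return hits / n_sim

    n = 25
    while power(n) < 0.80:                      # grow n until 80% power reached
        n += 25
    print("required sample size:", n)
    ```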

  2. Improving Lab Sample Management - POS/MCEARD

    EPA Science Inventory

    "Scientists face increasing challenges in managing their laboratory samples, including long-term storage of legacy samples, tracking multiple aliquots of samples for many experiments, and linking metadata to these samples. Other factors complicating sample management include the...

  3. Exploring the functional side of the Ocean Sampling Day metagenomes

    NASA Astrophysics Data System (ADS)

    Antonio, F. G.; Kottmann, R.; Wallom, D.; Glöckner, F. O.

    2016-02-01

    The Ocean Sampling Day (OSD) is a simultaneous, collaborative, standardized, global mega-sequencing campaign to analyze marine microbial community composition and functional traits. From the first OSD in June 2014, 150 metagenomes were sequenced, together with a rich set of environmental and oceanographic measurements. Unlike other ocean mega-surveys, such as the Global Ocean Sampling (GOS) expedition or the TARA expedition, which mostly sampled open-ocean waters, most of the OSD samples come from coastal sites, an area not previously well studied in this regard. As a result, OSD adds more than three million new genes to the recently published Ocean Microbial Reference Gene Catalog (Sunagawa et al., 2015). This allows us to significantly increase our knowledge of the ocean microbiome, identify hot-spots of functional novelty, and investigate the impact of human activities on the ocean's coastal areas, where the interaction between dense human populations and the marine world is largest. Additionally, these cumulative samples, related in time, space and environmental parameters, provide insights into fundamental rules describing microbial diversity and function, and contribute to the blue economy through the identification of novel ocean-derived biotechnologies. Reference: Sunagawa S., Coelho L. P., Chaffron S., et al. (2015). Structure and function of the global ocean microbiome. Science 348(6237), 1261359.

  4. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...

    2015-12-04

    Effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase, multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field-observed values. Four sensitivity analysis (SA) approaches are investigated: analysis of variance based on the generalized linear model, generalized cross-validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on a support vector machine. Results suggest that these approaches give consistent measures of the impacts of the major hydrologic parameters on the response variables, but differ in the relative contributions, particularly for the secondary parameters. The convergence behavior of the SA with respect to the number of sampling points is also examined with different combinations of input parameter sets, output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of Community Land Model parameters to improve the model's simulation of land surface fluxes, and approximates the magnitudes by which parameter values should be adjusted during parametric model optimization.
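
    A minimal sketch of one of the four approaches, standardized regression coefficients (SRC), applied to a stand-in model; the toy response below is hypothetical and only illustrates the mechanics.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def toy_model(X):
        """Stand-in for the land-surface model response (e.g. latent heat flux)
        as a function of hydrologic parameters; purely illustrative."""
        return 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.2 * X[:, 2] \
               + 0.3 * rng.standard_normal(len(X))

    n, p = 500, 3
    X = rng.uniform(0.0, 1.0, size=(n, p))      # sampled parameter sets
    y = toy_model(X)

    # SRC: regress standardized output on standardized inputs; the
    # coefficient magnitudes rank parameter importance.
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    r2 = 1 - ((ys - Xs @ src) ** 2).sum() / (ys ** 2).sum()
    print("SRCs:", np.round(src, 3), "| R^2 of linear fit:", round(r2, 3))
    ```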

  5. WaveAR: A software tool for calculating parameters for water waves with incident and reflected components

    NASA Astrophysics Data System (ADS)

    Landry, Blake J.; Hancock, Matthew J.; Mei, Chiang C.; García, Marcelo H.

    2012-09-01

    The ability to determine wave heights and phases along a spatial domain is vital to understanding a wide range of littoral processes. The software tool presented here employs established Stokes wave theory and sampling methods to calculate parameters for the incident and reflected components of a field of weakly nonlinear waves, monochromatic at first order in wave slope and propagating in one horizontal dimension. The software calculates wave parameters over an entire wave tank and accounts for reflection, weak nonlinearity, and a free second harmonic. Currently, no publicly available program has such functionality. The included MATLAB®-based open source code has also been compiled for Windows®, Mac® and Linux® operating systems. An additional companion program, VirtualWave, is included to generate virtual wave fields for WaveAR. Together, the programs serve as ideal analysis and teaching tools for laboratory water wave systems.

  6. Geotechnical Parameters of Alluvial Soils from in-situ Tests

    NASA Astrophysics Data System (ADS)

    Młynarek, Zbigniew; Stefaniak, Katarzyna; Wierzbicki, Jędrzej

    2012-10-01

    The article concentrates on the identification of geotechnical parameters of alluvial soil represented by silts found near Poznan and Elblag. Strength and deformation parameters of the subsoil tested were identified by the CPTU (static penetration) and SDMT (dilatometric) methods, as well as by the vane test (VT). Geotechnical parameters of the subsoil were analysed with a view to using the soil as an earth construction material and as a foundation for buildings constructed on the grounds tested. The article includes an analysis of the overconsolidation process of the soil tested and a formula for the identification of the overconsolidation ratio OCR. Equation 9 reflects the relation between the undrained shear strength and plasticity of the silts analysed and the OCR value. The analysis resulted in the determination of the Nkt coefficient, which might be used to identify the undrained shear strength of both sediments tested. On the basis of a detailed analysis of changes in the constrained oedometric modulus M0, the relations between this modulus, the liquidity index and the OCR value were identified. Mayne's formula (1995) was used to determine the M0 modulus from the CPTU test. The usefulness of the sediments found near Poznan as an earth construction material was analysed after their structure had been destroyed and they had been compacted with a Proctor apparatus. For samples characterised by different water content and soil particle density, the analysis of changes in cohesion and the internal friction angle proved that these parameters are influenced by the soil phase composition (Figs. 18 and 19). On the basis of the tests, it was concluded that the most desirable shear strength parameters are achieved when the silt is compacted below the optimum water content.

  7. Geotechnical Parameters of Alluvial Soils from in-situ Tests

    NASA Astrophysics Data System (ADS)

    Młynarek, Zbigniew; Stefaniak, Katarzyna; Wierzbicki, Jedrzej

    2012-10-01

    The article concentrates on the identification of geotechnical parameters of alluvial soil represented by silts found near Poznan and Elblag. Strength and deformation parameters of the subsoil tested were identified by the CPTU (static penetration) and SDMT (dilatometric) methods, as well as by the vane test (VT). Geotechnical parameters of the subsoil were analysed with a view to using the soil as an earth construction material and as a foundation for buildings constructed on the grounds tested. The article includes an analysis of the overconsolidation process of the soil tested and a formula for the identification of the overconsolidation ratio OCR. Equation 9 reflects the relation between the undrained shear strength and plasticity of the silts analysed and the OCR value. The analysis resulted in the determination of the Nkt coefficient, which might be used to identify the undrained shear strength of both sediments tested. On the basis of a detailed analysis of changes in the constrained oedometric modulus M0, the relations between this modulus, the liquidity index and the OCR value were identified. Mayne's formula (1995) was used to determine the M0 modulus from the CPTU test. The usefulness of the sediments found near Poznan as an earth construction material was analysed after their structure had been destroyed and they had been compacted with a Proctor apparatus. For samples characterised by different water content and soil particle density, the analysis of changes in cohesion and the internal friction angle proved that these parameters are influenced by the soil phase composition (Figs. 18 and 19). On the basis of the tests, it was concluded that the most desirable shear strength parameters are achieved when the silt is compacted below the optimum water content.

  8. PAR -- Interface to the ADAM Parameter System

    NASA Astrophysics Data System (ADS)

    Currie, Malcolm J.; Chipperfield, Alan J.

    PAR is a library of Fortran subroutines that provides convenient mechanisms for applications to exchange information with the outside world through input-output channels called parameters. Parameters enable a user to control an application's behaviour. PAR supports numeric, character, and logical parameters, and is currently implemented only on top of the ADAM parameter system. The PAR library permits parameter values to be obtained, with or without a variety of constraints. Results may be put into parameters to be passed on to other applications. Other facilities include setting a prompt string and suggested defaults. This document also introduces a preliminary C interface for the PAR library; this may be subject to change in the light of experience.

  9. Parameter inference in small world network disease models with approximate Bayesian Computational methods

    NASA Astrophysics Data System (ADS)

    Walker, David M.; Allingham, David; Lee, Heung Wing Joseph; Small, Michael

    2010-02-01

    Small world network models have been effective in capturing the variable behaviour of reported case data of the SARS coronavirus outbreak in Hong Kong during 2003. Simulations of these models have previously been realized using informed “guesses” of the proposed model parameters and tested for consistency with the reported data by surrogate analysis. In this paper we attempt to provide statistically rigorous parameter distributions using Approximate Bayesian Computation sampling methods. We find that such sampling schemes are a useful framework for fitting parameters of stochastic small world network models where simulation of the system is straightforward but expressing a likelihood is cumbersome.
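
    A minimal ABC rejection sketch of the idea; a chain-binomial SIR simulator stands in for the paper's small world network model, and the prior ranges, distance measure and tolerance are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def simulate_outbreak(beta, gamma, n_days=60, N=1000, i0=5):
        """Stand-in stochastic SIR chain-binomial simulator; the paper instead
        simulates epidemics on small world networks."""
        S, I, cases = N - i0, i0, []
        for _ in range(n_days):
            new_inf = rng.binomial(S, 1.0 - np.exp(-beta * I / N))
            new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
            S, I = S - new_inf, I + new_inf - new_rec
            cases.append(new_inf)
        return np.array(cases)

    obs = simulate_outbreak(0.4, 0.15)      # pseudo-observed daily case counts

    # ABC rejection: sample from the prior, simulate, keep parameters whose
    # simulated series lies within a tolerance of the observations.
    accepted = []
    for _ in range(20000):
        beta, gamma = rng.uniform(0.1, 1.0), rng.uniform(0.05, 0.5)
        sim = simulate_outbreak(beta, gamma)
        if np.sqrt(np.mean((sim - obs) ** 2)) < 10.0:   # distance + tolerance
            accepted.append((beta, gamma))
    accepted = np.array(accepted)
    print(len(accepted), "accepted; posterior means:", accepted.mean(axis=0))
    ```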

  10. Sample introducing apparatus and sample modules for mass spectrometer

    DOEpatents

    Thompson, C.V.; Wise, M.B.

    1993-12-21

    An apparatus for introducing gaseous samples from a wide range of environmental matrices into a mass spectrometer for analysis is described. Several sample-preparing modules, including a real-time air monitoring module, a soil/liquid purge module, and a thermal desorption module, are individually and rapidly attachable to the sample introducing apparatus for supplying gaseous samples to the mass spectrometer. The sample introducing apparatus uses a capillary column for conveying the gaseous samples into the mass spectrometer and is provided with an open/split interface in communication with the capillary, together with a sample archiving port through which at least about 90 percent of the gaseous sample, in a mixture with an inert gas introduced into the apparatus, is separated from the minor portion of the mixture entering the capillary and is discharged from the apparatus. 5 figures.

  11. Under-sampling trajectory design for compressed sensing based DCE-MRI.

    PubMed

    Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting

    2013-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed Sensing (CS) has the potential to deliver both. However, the randomness in CS under-sampling trajectories designed using the traditional variable density (VD) scheme may translate into uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually needs multiple adjustments of the parameters of the probability density function (PDF), and multiple reconstructions even with a fixed PDF, which is impractical for DCE-MRI. In this paper, an under-sampling trajectory design is studied that is robust both to changes in the PDF parameters and to the randomness under a fixed PDF. The strategy is to adaptively segment k-space into low- and high-frequency domains, and to apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
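
    A hedged sketch of the described strategy, a fully sampled low-frequency band with variable-density random sampling confined to the high frequencies, for a 1D phase-encode pattern; the paper's segmentation is adaptive, while here it is fixed, and all sizes are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def vd_mask(n=256, reduction=4, centre_frac=0.08, power=3.0):
        """1D (phase-encode) under-sampling pattern: fully sample a central
        low-frequency band, then draw the remaining lines at random from a
        polynomial variable-density PDF over the high frequencies."""
        k = np.abs(np.arange(n) - n // 2) / (n // 2)    # normalized |k|
        mask = k <= centre_frac                         # deterministic centre
        pdf = (1.0 - k) ** power
        pdf[mask] = 0.0                                 # centre already sampled
        pdf /= pdf.sum()
        n_extra = n // reduction - mask.sum()           # lines left in the budget
        extra = rng.choice(n, size=n_extra, replace=False, p=pdf)
        mask[extra] = True
        return mask

    m = vd_mask()
    print("sampled lines:", m.sum(), "of 256; acceleration:", round(256 / m.sum(), 2))
    ```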

  12. Influence of combined pretreatments on color parameters during convective drying of Mirabelle plum ( Prunus domestica subsp. syriaca)

    NASA Astrophysics Data System (ADS)

    Dehghannya, Jalal; Gorbani, Rasoul; Ghanbarzadeh, Babak

    2017-07-01

    Discoloration and browning are caused primarily by various reactions, including Maillard condensation of hexoses and amino components, phenol polymerization and pigment destruction. Convective drying can be combined with various pretreatments to help reduce undesired color changes and improve the color parameters of dried products. In this study, the effects of ultrasound-assisted osmotic dehydration as a pretreatment before convective drying on the color parameters of Mirabelle plum were investigated. Variations of L* (lightness), a* (redness/greenness), b* (yellowness/blueness), total color change (ΔE), chroma, hue angle and browning index values were presented versus drying time during convective drying of control and pretreated Mirabelle plums, as influenced by ultrasonication time, osmotic solution concentration and immersion time in the osmotic solution. Samples pretreated with ultrasound for 30 min at an osmotic solution concentration of 70% had the most desirable color among all pretreated samples, with the L*, a* and b* values closest to those of the fresh sample, showing that ultrasound and osmotic dehydration are beneficial to the color of the final dried products.
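
    For reference, the colour indices named above can be computed from CIELAB readings as sketched below; the browning-index form is a commonly used one from the food-drying literature, and the sample readings are hypothetical.

    ```python
    import numpy as np

    def colour_params(L, a, b, L0, a0, b0):
        """CIELAB-derived colour indices; (L0, a0, b0) is the fresh reference."""
        dE = np.sqrt((L - L0) ** 2 + (a - a0) ** 2 + (b - b0) ** 2)  # total change
        chroma = np.hypot(a, b)
        hue = np.degrees(np.arctan2(b, a))                           # hue angle
        # browning index, a commonly used form in the drying literature
        x = (a + 1.75 * L) / (5.645 * L + a - 3.012 * b)
        BI = 100.0 * (x - 0.31) / 0.172
        return dE, chroma, hue, BI

    # hypothetical fresh vs dried Mirabelle plum readings
    print(colour_params(48.2, 9.1, 22.5, L0=55.0, a0=6.3, b0=28.4))
    ```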

  13. [Environmental surveillance of a sample of indoor swimming pools from Emilia Romagna region: microclimate characteristics and chemical parameters, particularly disinfection by products, in pool waters].

    PubMed

    Fantuzzi, G; Righi, E; Predieri, G; Giacobazzi, P; Mastroianni, K; Aggazzotti, G

    2010-01-01

    The aim of the present study was to investigate the environmental and health aspects of a representative sample of indoor swimming pools located in the Emilia Romagna region. During the sampling sessions, the occupational environment was evaluated in terms of microclimate parameters and thermal comfort/discomfort conditions. Moreover, the chemical risk was assessed by analyzing the pool water for the presence of disinfection by-products (DBPs), such as trihalomethanes (THMs), haloacetic acids (HAAs), chlorite, chlorate and bromate. The analytical results are in agreement with the Italian legislation (Accordo Stato-Regioni, 2003), even though in some of the sampled indoor swimming pools the measured combined chlorine levels were greater than the Italian limit. With regard to the evaluation of microclimate conditions, the thermal indices considered, Predicted Mean Vote (PMV) and Predicted Percentage of Dissatisfied (PPD%), described a satisfactory occupational environment. Among the DBPs, the mean THM levels (41.4 +/- 30.0 microg/l) were close to the limit values of the current Italian drinking water legislation and do not seem to represent a health issue. The chlorate levels in pool waters (range: 5-19537 microg/l) need further investigation, as recent epidemiological studies on drinking water have hypothesized a potential genotoxic effect of these compounds, which are involved in cellular oxidative processes.

  14. Comparison of sampling methodologies and estimation of population parameters for a temporary fish ectoparasite.

    PubMed

    Artim, J M; Sikkel, P C

    2016-08-01

    Characterizing spatio-temporal variation in the density of organisms in a community is a crucial part of ecological study. However, doing so for small, motile, cryptic species presents multiple challenges, especially where multiple life history stages are involved. Gnathiid isopods are ecologically important marine ectoparasites, micropredators that live in substrate for most of their lives, emerging only once during each juvenile stage to feed on fish blood. Many gnathiid species are nocturnal and most have distinct substrate preferences. Studies of gnathiid use of habitat, exploitation of hosts, and population dynamics have used various trap designs to estimate rates of gnathiid emergence, study sensory ecology, and identify host susceptibility. In the studies reported here, we compare and contrast the performance of emergence, fish-baited and light trap designs, outline the key features of these traps, and determine some life cycle parameters derived from trap counts for the Eastern Caribbean coral-reef gnathiid, Gnathia marleyi. We also used counts from large emergence traps and light traps to estimate additional life cycle parameters, emergence rates, and total gnathiid density on substrate, and to calibrate the light trap design to provide estimates of rate of emergence and total gnathiid density in habitat not amenable to emergence trap deployment.

  15. Sample holder with optical features

    DOEpatents

    Milas, Mirko; Zhu, Yimei; Rameau, Jonathan David

    2013-07-30

    A sample holder for holding a sample to be observed for research purposes, particularly in a transmission electron microscope (TEM), generally includes an external alignment part for directing a light beam in a predetermined beam direction, a sample holder body in optical communication with the external alignment part and a sample support member disposed at a distal end of the sample holder body opposite the external alignment part for holding a sample to be analyzed. The sample holder body defines an internal conduit for the light beam and the sample support member includes a light beam positioner for directing the light beam between the sample holder body and the sample held by the sample support member.

  16. Histogram analysis parameters of dynamic contrast-enhanced magnetic resonance imaging can predict histopathological findings including proliferation potential, cellularity, and nucleic areas in head and neck squamous cell carcinoma.

    PubMed

    Surov, Alexey; Meyer, Hans Jonas; Leifels, Leonard; Höhn, Anne-Kathrin; Richter, Cindy; Winter, Karsten

    2018-04-20

    Our purpose was to analyze possible associations between histogram analysis parameters of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and histopathological findings such as proliferation index, cell count and nucleic areas in head and neck squamous cell carcinoma (HNSCC). 30 patients (mean age 57.0 years) with primary HNSCC were included in the study. In every case, histogram analysis parameters of Ktrans, Ve, and Kep were estimated using Matlab-based software. Tumor proliferation index, cell count, and nucleic areas were estimated on Ki-67 antigen stained specimens. Spearman's non-parametric rank correlation coefficients (ρ) were calculated between DCE and the histopathological parameters. Ki-67 correlated with Ktrans min (ρ = -0.386, P = 0.043), Ktrans skewness (ρ = 0.382, P = 0.045), Ve min (ρ = -0.473, P = 0.011), Ve entropy (ρ = 0.424, P = 0.025), and Kep entropy (ρ = 0.464, P = 0.013). Cell count correlated with Ktrans kurtosis (ρ = 0.40, P = 0.034) and Ve entropy (ρ = 0.475, P = 0.011). Total nucleic area correlated with Ve max (ρ = 0.386, P = 0.042) and Ve entropy (ρ = 0.411, P = 0.030). In G1/2 tumors, only Ktrans entropy correlated well with total (ρ = 0.78, P = 0.013) and average nucleic areas (ρ = 0.655, P = 0.006). In G3 tumors, Ki-67 correlated with Ve min (ρ = -0.552, P = 0.022) and Ve entropy (ρ = 0.524, P = 0.031). Ve max correlated with total nucleic area (ρ = 0.483, P = 0.049). Kep max correlated with total area (ρ = -0.51, P = 0.037), and Kep entropy with Ki-67 (ρ = 0.567, P = 0.018). We conclude that the histogram-based parameters skewness, kurtosis and entropy of Ktrans, Ve, and Kep can be used as markers for proliferation activity, cellularity and nucleic content in HNSCC. Tumor grading significantly influences the associations between perfusion and histopathological parameters.
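
    A minimal sketch of how such first-order histogram features are typically computed from a perfusion parameter map; the bin count and implementation details are assumptions, not the authors' Matlab pipeline:

        import numpy as np
        from scipy import stats

        def histogram_parameters(pixel_values, bins=64):
            """Min, max, skewness, kurtosis and Shannon entropy of a
            parameter map (e.g. a Ktrans map within a tumor ROI)."""
            v = np.asarray(pixel_values, dtype=float).ravel()
            hist, _ = np.histogram(v, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            entropy = -np.sum(p * np.log2(p))   # Shannon entropy in bits
            return {"min": v.min(), "max": v.max(),
                    "skewness": stats.skew(v),
                    "kurtosis": stats.kurtosis(v),
                    "entropy": entropy}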

  17. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    PubMed

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
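
    A minimal sketch of the baseline this work improves on, assuming a pairwise Ising model on units in {-1, +1}: plain gradient ascent on the log-likelihood, with model moments estimated by Gibbs sampling from a persistent chain. The paper's rectification of parameter space and posterior sampling are not reproduced here.

        import numpy as np

        def fit_ising(data, n_steps=200, lr=0.1, n_gibbs=500, seed=0):
            """Fit fields h and symmetric couplings J to binary data in
            {-1,+1} of shape (n_samples, n_units) by matching moments."""
            rng = np.random.default_rng(seed)
            n, d = data.shape
            m_data = data.mean(0)                  # <s_i>_data
            C_data = data.T @ data / n             # <s_i s_j>_data
            h, J = np.zeros(d), np.zeros((d, d))
            s = rng.choice([-1.0, 1.0], size=d)    # persistent Gibbs chain
            for _ in range(n_steps):
                samples = []
                for _ in range(n_gibbs):           # single-site Gibbs updates
                    i = rng.integers(d)
                    field = h[i] + J[i] @ s - J[i, i] * s[i]
                    s[i] = 1.0 if rng.random() < 1 / (1 + np.exp(-2 * field)) else -1.0
                    samples.append(s.copy())
                S = np.array(samples)
                h += lr * (m_data - S.mean(0))     # ascend the log-likelihood
                J += lr * (C_data - S.T @ S / len(S))
                np.fill_diagonal(J, 0.0)
            return h, J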

  18. Process Parameter Optimization of Extrusion-Based 3D Metal Printing Utilizing PW-LDPE-SA Binder System.

    PubMed

    Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan

    2017-03-16

    Recently, with a broadening range of available materials and alterations to feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. One emerging process is applicable to the fabrication of metal parts for electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized through a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system comprising paraffin wax, low density polyethylene, and stearic acid (PW-LDPE-SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The factors influencing the ultimate tensile strength of the green samples rank as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor affecting hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper. Overall, the extrusion-based printing process for metal materials is a promising strategy because it offers advantages over traditional approaches in cost, efficiency, and simplicity.

  19. Process Parameter Optimization of Extrusion-Based 3D Metal Printing Utilizing PW–LDPE–SA Binder System

    PubMed Central

    Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan

    2017-01-01

    Recently, with a broadening range of available materials and alterations to feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. One emerging process is applicable to the fabrication of metal parts for electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized through a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system comprising paraffin wax, low density polyethylene, and stearic acid (PW–LDPE–SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The factors influencing the ultimate tensile strength of the green samples rank as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor affecting hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper. Overall, the extrusion-based printing process for metal materials is a promising strategy because it offers advantages over traditional approaches in cost, efficiency, and simplicity. PMID:28772665

  20. Design of state-feedback controllers including sensitivity reduction, with applications to precision pointing

    NASA Technical Reports Server (NTRS)

    Hadass, Z.

    1974-01-01

    The design procedure for state-feedback controllers is described and the considerations for selecting the design parameters are given. The frequency domain properties of single-input single-output systems using state feedback controllers are analyzed, and desirable phase and gain margin properties are demonstrated. Special consideration is given to the design of controllers for tracking systems, especially those designed to track polynomial commands. As an example, a controller was designed for a tracking telescope with a polynomial tracking requirement and some special features such as actuator saturation and multiple measurements, one of which is sampled. The resulting system has a tracking performance that compares favorably with a much more complicated digital aided tracker. Parameter sensitivity reduction is treated by considering the variable parameters as random variables. A performance index is defined as a weighted sum of the state and control covariances that result from both the random system disturbances and the parameter uncertainties, and is minimized numerically by adjusting a set of free parameters.

  1. Evaluation for Bearing Wear States Based on Online Oil Multi-Parameters Monitoring

    PubMed Central

    Hu, Hai-Feng

    2018-01-01

    As bearings are critical components of a mechanical system, it is important to characterize their wear states and evaluate their health condition. In this paper, a novel approach for analyzing the relationship between online oil multi-parameter monitoring samples and bearing wear states is proposed, based on an improved gray k-means clustering model (G-KCM). First, an online monitoring system with multiple sensors for bearings is established, obtaining oil multi-parameter data and vibration signals for bearings over their whole lifetime. Secondly, a gray correlation degree distance matrix is generated using a gray correlation model (GCM) to express the relationship between oil monitoring samples at different times, and a KCM is then applied to cluster the matrix. Analysis and experimental results show an obvious correspondence: the state changes of the lubricant multi-parameters and of the bearing wear states largely coincide in time. The results also show that online multi-parameter oil samples are superior to vibration signals for early prediction of bearing wear failure. The approach is expected to enable online oil monitoring and evaluation of bearing health condition and to provide a novel means of early identification of bearing-related failure modes. PMID:29621175

  2. Evaluation for Bearing Wear States Based on Online Oil Multi-Parameters Monitoring.

    PubMed

    Wang, Si-Yuan; Yang, Ding-Xin; Hu, Hai-Feng

    2018-04-05

    As bearings are critical components of a mechanical system, it is important to characterize their wear states and evaluate their health condition. In this paper, a novel approach for analyzing the relationship between online oil multi-parameter monitoring samples and bearing wear states is proposed, based on an improved gray k-means clustering model (G-KCM). First, an online monitoring system with multiple sensors for bearings is established, obtaining oil multi-parameter data and vibration signals for bearings over their whole lifetime. Secondly, a gray correlation degree distance matrix is generated using a gray correlation model (GCM) to express the relationship between oil monitoring samples at different times, and a KCM is then applied to cluster the matrix. Analysis and experimental results show an obvious correspondence: the state changes of the lubricant multi-parameters and of the bearing wear states largely coincide in time. The results also show that online multi-parameter oil samples are superior to vibration signals for early prediction of bearing wear failure. The approach is expected to enable online oil monitoring and evaluation of bearing health condition and to provide a novel means of early identification of bearing-related failure modes.
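
    A minimal sketch of the described pipeline, assuming the standard grey relational coefficient with resolution coefficient rho = 0.5 and scikit-learn's k-means; the authors' improvements to G-KCM are not reproduced:

        import numpy as np
        from sklearn.cluster import KMeans

        def grey_relational_degree(x, y, rho=0.5):
            """Grey relational degree between two normalized
            oil-parameter sequences (higher = more similar)."""
            delta = np.abs(x - y)
            dmin, dmax = delta.min(), delta.max()
            if dmax == 0.0:                       # identical sequences
                return 1.0
            coeff = (dmin + rho * dmax) / (delta + rho * dmax)
            return coeff.mean()

        def cluster_samples(samples, n_states=3, seed=0):
            """Build the pairwise grey-correlation matrix of monitoring
            samples (rows = sampling times), then cluster it into
            wear states with k-means."""
            n = len(samples)
            G = np.array([[grey_relational_degree(samples[i], samples[j])
                           for j in range(n)] for i in range(n)])
            labels = KMeans(n_clusters=n_states, n_init=10,
                            random_state=seed).fit_predict(G)
            return G, labels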

  3. Estimating Divergence Parameters With Small Samples From a Large Number of Loci

    PubMed Central

    Wang, Yong; Hey, Jody

    2010-01-01

    Most methods for studying divergence with gene flow rely upon data from many individuals at few loci. Such data can be useful for inferring recent population history but they are unlikely to contain sufficient information about older events. However, the growing availability of genome sequences suggests a different kind of sampling scheme, one that may be more suited to studying relatively ancient divergence. Data sets extracted from whole-genome alignments may represent very few individuals but contain a very large number of loci. To take advantage of such data we developed a new maximum-likelihood method for genomic data under the isolation-with-migration model. Unlike many coalescent-based likelihood methods, our method does not rely on Monte Carlo sampling of genealogies, but rather provides a precise calculation of the likelihood by numerical integration over all genealogies. We demonstrate that the method works well on simulated data sets. We also consider two models for accommodating mutation rate variation among loci and find that the model that treats mutation rates as random variables leads to better estimates. We applied the method to the divergence of Drosophila melanogaster and D. simulans and detected a low, but statistically significant, signal of gene flow from D. simulans to D. melanogaster. PMID:19917765

  4. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII-based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII-based sampling approach over LHS: (1) It performs more effectively and efficiently; for example, the simulation time required to generate 1000 behavioral parameter sets is nine times shorter. (2) The Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII-based sampling, and their Pareto optimal values are better than those of LHS, meaning better forecasting accuracy of the ɛ-NSGAII parameter sets. (3) The parameter posterior distributions from ɛ-NSGAII-based sampling are concentrated in appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly. (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII-based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis within the GLUE framework, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
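
    For comparison, a minimal Latin hypercube sampler of the kind used here as the baseline (a generic implementation, not the study's code). In GLUE, each sampled parameter set would then be run through the model and retained as "behavioral" if its likelihood measure exceeds a threshold.

        import numpy as np

        def latin_hypercube(n_samples, bounds, seed=0):
            """Latin hypercube sample; `bounds` is a list of (low, high)
            tuples, one per model parameter."""
            rng = np.random.default_rng(seed)
            d = len(bounds)
            strata = np.tile(np.arange(n_samples), (d, 1))
            strata = rng.permuted(strata, axis=1).T        # shuffle strata per parameter
            u = (strata + rng.random((n_samples, d))) / n_samples  # jitter within stratum
            lo = np.array([b[0] for b in bounds])
            hi = np.array([b[1] for b in bounds])
            return lo + u * (hi - lo)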

  5. Sampling methods for recovery of human enteric viruses from environmental surfaces.

    PubMed

    Turnage, Nicole L; Gibson, Kristen E

    2017-10-01

    Acute gastroenteritis causes the second highest infectious disease burden worldwide. Human enteric viruses have been identified as leading causative agents of acute gastroenteritis as well as foodborne illnesses in the U.S. and are generally transmitted by fecal-oral contamination. There is growing evidence of transmission occurring via contaminated fomite including food contact surfaces. Additionally, human enteric viruses have been shown to remain infectious on fomites over prolonged periods of time. To better understand viral persistence, there is a need for more studies to investigate this phenomenon. Therefore, optimization of surface sampling methods is essential to aid in understanding environmental contamination to ensure proper preventative measures are being applied. In general, surface sampling studies are limited and highly variable among recovery efficiencies and research parameters used (e.g., virus type/density, surface type, elution buffers, tools). This review aims to discuss the various factors impacting surface sampling of viruses from fomites and to explore how researchers could move towards a more sensitive and standard sampling method. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Prediction of oxidation parameters of purified Kilka fish oil including gallic acid and methyl gallate by adaptive neuro-fuzzy inference system (ANFIS) and artificial neural network.

    PubMed

    Asnaashari, Maryam; Farhoosh, Reza; Farahmandfar, Reza

    2016-10-01

    As a result of concerns regarding possible health hazards of synthetic antioxidants, gallic acid and methyl gallate may be introduced as natural antioxidants to improve oxidative stability of marine oil. Since conventional modelling could not predict the oxidative parameters precisely, artificial neural network (ANN) and neuro-fuzzy inference system (ANFIS) modelling with three inputs, including type of antioxidant (gallic acid and methyl gallate), temperature (35, 45 and 55 °C) and concentration (0, 200, 400, 800 and 1600 mg L(-1) ) and four outputs containing induction period (IP), slope of initial stage of oxidation curve (k1 ) and slope of propagation stage of oxidation curve (k2 ) and peroxide value at the IP (PVIP ) were performed to predict the oxidation parameters of Kilka oil triacylglycerols and were compared to multiple linear regression (MLR). The results showed ANFIS was the best model with high coefficient of determination (R(2)  = 0.99, 0.99, 0.92 and 0.77 for IP, k1 , k2 and PVIP , respectively). So, the RMSE and MAE values for IP were 7.49 and 4.92 in ANFIS model. However, they were to be 15.95 and 10.88 and 34.14 and 3.60 for the best MLP structure and MLR, respectively. So, MLR showed the minimum accuracy among the constructed models. Sensitivity analysis based on the ANFIS model suggested a high sensitivity of oxidation parameters, particularly the induction period on concentrations of gallic acid and methyl gallate due to their high antioxidant activity to retard oil oxidation and enhanced Kilka oil shelf life. © 2016 Society of Chemical Industry. © 2016 Society of Chemical Industry.

  7. Bayesian Estimation of the DINA Model with Gibbs Sampling

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2015-01-01

    A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
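
    A minimal sketch of one ingredient of such a sampler, assuming Beta(a, b) priors: with the latent ideal responses held fixed within a Gibbs sweep, the item guessing and slipping parameters have conjugate Beta full conditionals. Variable names are hypothetical, and the attribute and latent-class updates are omitted.

        import numpy as np

        def update_guess_slip(Y, eta, a=1.0, b=1.0, seed=None):
            """One Gibbs step for DINA guess/slip parameters. Y and eta
            are (n_examinees, n_items) 0/1 arrays; eta = 1 means the
            examinee has all attributes the item requires."""
            rng = np.random.default_rng(seed)
            # guessing: correct responses among examinees lacking the attributes
            g = rng.beta(a + ((eta == 0) & (Y == 1)).sum(0),
                         b + ((eta == 0) & (Y == 0)).sum(0))
            # slipping: incorrect responses among examinees mastering the attributes
            s = rng.beta(a + ((eta == 1) & (Y == 0)).sum(0),
                         b + ((eta == 1) & (Y == 1)).sum(0))
            return g, s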

  8. Cortisol Awakening Response in Elite Military Men: Summary Parameters, Stability Measurement, and Effect of Compliance.

    PubMed

    Taylor, Marcus K; Hernández, Lisa M; Fuller, Shiloah A; Sargent, Paul; Padilla, Genieleah A; Harris, Erica

    2016-11-01

    The cortisol awakening response (CAR) holds promise as a clinically important marker of health status. However, CAR research is routinely challenged by its innate complexity, sensitivity to confounds, and methodological inconsistencies. In this unprecedented characterization of CAR in elite military men (N = 58), we established summary parameters, evaluated sampling stability across two consecutive days, and explored the effect of subject compliance. Average salivary cortisol concentrations increased nearly 60% within 30 minutes of waking, followed by a swift recovery to waking values at 60 minutes. Approximately one in six were classified as negative responders (i.e., <0% change from waking to 30-minute postawakening). Three summary parameters of magnitude, as well as three summary parameters of pattern, were computed. Consistent with our hypothesis, summary parameters of magnitude displayed superior stability compared with summary parameters of pattern in the total sample. As expected, compliance with target sampling times was relatively good; average deviations of self-reported morning sampling times in relation to actigraph-derived wake times across both days were within ±5 minutes, and nearly two-thirds of the sample was classified as CAR compliant across both days. Although compliance had equivocal effects on some measures of magnitude, it substantially improved the stability of summary parameters of pattern. The first of its kind, this study established the foundation for a program of CAR research in a profoundly resilient yet chronically stressed population. Building from this, our forthcoming research will evaluate demographic, biobehavioral, and clinical determinants of CAR in this unique population. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
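
    A minimal sketch of commonly used CAR summary parameters of magnitude (area under the curve with respect to ground and to increase, and mean increase) together with the responder classification mentioned above; the study's exact parameter set is assumed, not quoted:

        import numpy as np

        def car_summary(cortisol, minutes=(0, 30, 60)):
            """CAR summary measures from an awakening series sampled at
            waking, +30 and +60 minutes."""
            t = np.asarray(minutes, float)
            c = np.asarray(cortisol, float)
            auc_g = np.trapz(c, t)                  # total output w.r.t. ground
            auc_i = auc_g - c[0] * (t[-1] - t[0])   # rise above the waking level
            mn_inc = c[1:].mean() - c[0]            # mean increase over waking value
            responder = (c[1] - c[0]) / c[0] > 0    # >0% change at +30 min
            return auc_g, auc_i, mn_inc, responder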

  9. Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.

    2016-12-01

    With the growing impacts of climate change and human activities on the water cycle, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model carries a weight determined by its prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE searches the parameter space gradually, from the low-likelihood region toward the high-likelihood region, and this evolution is accomplished iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, it is therefore attractive to incorporate a robust and efficient sampling algorithm, DREAMzs, into the local sampling step of NSE. The comparison results demonstrated that the improved NSE can improve the efficiency of marginal likelihood estimation significantly. However, both the improved and the original NSE suffer from considerable instability. In addition, the heavy computational cost of the huge number of model executions is overcome by using adaptive sparse grid surrogates.
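
    A minimal nested-sampling sketch for intuition, using simple rejection from the prior for the local sampling step that the paper replaces with DREAMzs-style moves. Function names are hypothetical, and rejection sampling is practical for low-dimensional problems only.

        import numpy as np

        def nested_sampling(loglike, prior_sample, n_live=100, n_iter=2000, seed=0):
            """Estimate log marginal likelihood. `prior_sample(rng)` draws
            one parameter vector from the prior; `loglike(theta)` returns
            the log-likelihood of that vector."""
            rng = np.random.default_rng(seed)
            live = [prior_sample(rng) for _ in range(n_live)]
            logL = np.array([loglike(th) for th in live])
            logZ, logX = -np.inf, 0.0
            for i in range(n_iter):
                worst = int(np.argmin(logL))
                logX_new = -(i + 1) / n_live            # expected log prior-volume shrinkage
                logw = np.log(np.exp(logX) - np.exp(logX_new))
                logZ = np.logaddexp(logZ, logw + logL[worst])
                logX = logX_new
                while True:                             # replace worst point, L > L_min
                    th = prior_sample(rng)
                    ll = loglike(th)
                    if ll > logL[worst]:
                        live[worst], logL[worst] = th, ll
                        break
            # remaining live points share the final prior volume equally
            logZ = np.logaddexp(logZ, logX - np.log(n_live) + np.logaddexp.reduce(logL))
            return logZ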

  10. The DSM‐5 Dimensional Anxiety Scales in a Dutch non‐clinical sample: psychometric properties including the adult separation anxiety disorder scale

    PubMed Central

    Bögels, Susan M.

    2016-01-01

    Abstract With DSM‐5, the American Psychiatric Association encourages complementing categorical diagnoses with dimensional severity ratings. We therefore examined the psychometric properties of the DSM‐5 Dimensional Anxiety Scales, a set of brief dimensional scales that are consistent in content and structure and assess DSM‐5‐based core features of anxiety disorders. Participants (285 males, 255 females) completed the DSM‐5 Dimensional Anxiety Scales for social anxiety disorder, generalized anxiety disorder, specific phobia, agoraphobia, and panic disorder that were included in previous studies on the scales, and also for separation anxiety disorder, which is included in the DSM‐5 chapter on anxiety disorders. Moreover, they completed the Screen for Child Anxiety Related Emotional Disorders Adult version (SCARED‐A). The DSM‐5 Dimensional Anxiety Scales demonstrated high internal consistency, and the scales correlated significantly and substantially with corresponding SCARED‐A subscales, supporting convergent validity. Separation anxiety appeared present among adults, supporting the DSM‐5 recognition of separation anxiety as an anxiety disorder across the life span. To conclude, the DSM‐5 Dimensional Anxiety Scales are a valuable tool to screen for specific adult anxiety disorders, including separation anxiety. Research in more diverse and clinical samples with anxiety disorders is needed. © 2016 The Authors International Journal of Methods in Psychiatric Research Published by John Wiley & Sons Ltd. PMID:27378317

  11. Inventory of forest resources (including water) by multi-level sampling. [nine northern Virginia coastal plain counties

    NASA Technical Reports Server (NTRS)

    Aldrich, R. C.; Dana, R. W.; Roberts, E. H. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. A stratified random sample using LANDSAT band 5 and 7 panchromatic prints resulted in estimates of water area in counties with sampling errors less than ±9% (67% probability level). A forest inventory using a four-band LANDSAT color composite resulted in estimates of forest area by county that were within ±6.7% and ±3.7%, respectively (67% probability level). Estimates of forest area for counties by computer-assisted techniques were within ±21% of operational forest survey figures, and over all counties the difference was only one percent. Correlations of airborne terrain reflectance measurements with LANDSAT radiance verified a linear atmospheric model with an additive (path radiance) term and a multiplicative (transmittance) term. Coefficients of determination for 28 of the 32 modeling attempts, those not adversely affected by rain showers occurring between the times of LANDSAT passage and aircraft overflights, exceeded 0.83.

  12. Use of Unlabeled Samples for Mitigating the Hughes Phenomenon

    NASA Technical Reports Server (NTRS)

    Landgrebe, David A.; Shahshahani, Behzad M.

    1993-01-01

    The use of unlabeled samples in improving the performance of classifiers is studied. When the number of training samples is fixed and small, additional feature measurements may reduce the performance of a statistical classifier. It is shown that by using unlabeled samples, estimates of the parameters can be improved and therefore this phenomenon may be mitigated. Various methods for using unlabeled samples are reviewed and experimental results are provided.
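
    A minimal sketch of one standard way to use unlabeled samples, assuming Gaussian class-conditional densities: EM that pools hard labeled counts with soft responsibilities for the unlabeled data. This is a generic textbook scheme, not necessarily the authors' exact estimator.

        import numpy as np
        from scipy.stats import multivariate_normal as mvn

        def em_with_unlabeled(Xl, yl, Xu, n_iter=50):
            """Estimate class priors, means and covariances from labeled
            data (Xl, yl) plus unlabeled data Xu; features assumed >= 2-D."""
            classes = np.unique(yl)
            pi = np.array([(yl == c).mean() for c in classes])
            mu = np.array([Xl[yl == c].mean(0) for c in classes])
            cov = np.array([np.cov(Xl[yl == c].T) for c in classes])
            X = np.vstack([Xl, Xu])
            for _ in range(n_iter):
                # E-step: responsibilities of each class for the unlabeled data
                like = np.column_stack([pi[k] * mvn.pdf(Xu, mu[k], cov[k])
                                        for k in range(len(classes))])
                r = like / like.sum(1, keepdims=True)
                # M-step: pooled updates from labeled (hard) + unlabeled (soft) weights
                for k, c in enumerate(classes):
                    w = np.concatenate([(yl == c).astype(float), r[:, k]])
                    mu[k] = (w[:, None] * X).sum(0) / w.sum()
                    d = X - mu[k]
                    cov[k] = (w[:, None] * d).T @ d / w.sum()
                    pi[k] = w.sum() / len(w)
            return pi, mu, cov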

  13. Extremes in ecology: Avoiding the misleading effects of sampling variation in summary analyses

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1996-01-01

    Surveys such as the North American Breeding Bird Survey (BBS) produce large collections of parameter estimates. One's natural inclination when confronted with lists of parameter estimates is to look for the extreme values: in the BBS, these correspond to the species that appear to have the greatest changes in population size through time. Unfortunately, extreme estimates are liable to correspond to the most poorly estimated parameters. Consequently, the most extreme parameters may not match up with the most extreme parameter estimates. The ranking of parameter values on the basis of their estimates is a difficult statistical problem. We use data from the BBS and simulations to illustrate the potentially misleading effects of sampling variation on rankings of parameters. We describe empirical Bayes and constrained empirical Bayes procedures which provide partial solutions to the problem of ranking in the presence of sampling variation.
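
    A minimal sketch of normal-normal empirical Bayes shrinkage of the kind alluded to, using a method-of-moments estimate of the between-parameter variance; the constrained variant is not shown:

        import numpy as np

        def eb_shrink(est, se):
            """Shrink trend estimates toward the precision-weighted grand
            mean; poorly estimated (high-variance) trends are pulled in
            hardest, so extreme ranks are no longer dominated by noise."""
            est, se = np.asarray(est, float), np.asarray(se, float)
            grand = np.average(est, weights=1 / se**2)
            tau2 = max(np.var(est) - np.mean(se**2), 0.0)  # between-parameter variance
            shrink = tau2 / (tau2 + se**2)                 # shrinkage factor in [0, 1]
            return grand + shrink * (est - grand)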

  14. Map showing locations and statistical parameters of beach and offshore sand samples, Tutuila Island, American Samoa

    USGS Publications Warehouse

    Dingler, J.R.; Carlson, D.V.; Sallenger, A.H.

    1987-01-01

    In April 1985, sand samples were collected from many of the beaches on Tutuila Island, American Samoa, and in July 1985, three bays were surveyed using side-scan sonar and shallow seismic profiling. During that second trip, scuba divers collected sand samples from the surveyed areas. Dingler and others (1986) describes the study; this report presents the grain-size and composition data for the onshore and offshore sand samples. Locations of the onshore samples are plotted on the map of the island, which is reproduced from Normark and others (1985); locations of most of the offshore samples and side-scan sonar interpretations made during the study are plotted on enlargements (A and B, respectively) of Fagaitua and Nua-seetaga Bays. Lam Yuen (1981), U.S. Army Corps of Engineers (1980), and Sea Engineering Services Inc. (1980) provide additional information pertaining to the island's beaches.

  15. A software tool to assess uncertainty in transient-storage model parameters using Monte Carlo simulations

    USGS Publications Warehouse

    Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.

    2017-01-01

    Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.

  16. Isolators Including Main Spring Linear Guide Systems

    NASA Technical Reports Server (NTRS)

    Goold, Ryan (Inventor); Buchele, Paul (Inventor); Hindle, Timothy (Inventor); Ruebsamen, Dale Thomas (Inventor)

    2017-01-01

    Embodiments of isolators, such as three parameter isolators, including a main spring linear guide system are provided. In one embodiment, the isolator includes first and second opposing end portions, a main spring mechanically coupled between the first and second end portions, and a linear guide system extending from the first end portion, across the main spring, and toward the second end portion. The linear guide system expands and contracts in conjunction with deflection of the main spring along the working axis, while restricting displacement and rotation of the main spring along first and second axes orthogonal to the working axis.

  17. The Reliability of Difference Scores in Populations and Samples

    ERIC Educational Resources Information Center

    Zimmerman, Donald W.

    2009-01-01

    This study was an investigation of the relation between the reliability of difference scores, considered as a parameter characterizing a population of examinees, and the reliability estimates obtained from random samples from the population. The parameters in familiar equations for the reliability of difference scores were redefined in such a way…
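
    For context, here is the classical expression for the reliability of a difference score D = X - Y in a minimal sketch (the standard classical-test-theory formula; the paper's redefined parameters are not reproduced):

        def difference_score_reliability(rxx, ryy, rxy, sx, sy):
            """Reliability of D = X - Y from the component reliabilities
            rxx, ryy, their correlation rxy, and standard deviations."""
            num = sx**2 * rxx + sy**2 * ryy - 2 * sx * sy * rxy
            den = sx**2 + sy**2 - 2 * sx * sy * rxy
            return num / den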

  18. Multirate sampled-data yaw-damper and modal suppression system design

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.

    1990-01-01

    A multirate control law synthesis algorithm based on an infinite-time quadratic cost function was developed, along with a method for analyzing the robustness of multirate systems. A generalized multirate sampled-data control law structure (GMCLS) was introduced. A new infinite-time-based parameter optimization multirate sampled-data control law synthesis method and solution algorithm were developed. A singular-value-based method for determining gain and phase margins for multirate systems was also developed. The finite-time-based parameter optimization multirate sampled-data control law synthesis algorithm originally intended to be applied to the aircraft problem was instead demonstrated by application to a simpler problem involving the control of the tip position of a two-link robot arm. The GMCLS, the infinite-time-based parameter optimization multirate control law synthesis method and solution algorithm, and the singular-value-based method for determining gain and phase margins were all demonstrated by application to the aircraft control problem originally proposed for this project.

  19. [NUTRITIONAL STATUS BY ANTHROPOMETRIC AND BIOCHEMICAL PARAMETERS OF COLLEGE BASKETBALL PLAYERS].

    PubMed

    Godoy-Cumillaf, Andrés Esteban Roberto; Cárcamo-Araneda, Cristian Rodolfo; Hermosilla-Rodríguez, Freddy Patricio; Oyarzún-Ruiz, Jean Pierre; Viveros-Herrera, José Francisco Javier

    2015-12-01

    In the student population, class schedules, hours of study and budget shortages, among other factors, work against good eating habits and encourage sedentary behavior. Within this context are the university sports teams, which must deal with the above. The objective was to determine the nutritional status of a group of college basketball players (BU) by anthropometric and biochemical parameters. The research follows a non-experimental, descriptive, cross-sectional design with a quantitative approach; the sample was selected non-probabilistically and included 12 players. The anthropometric parameters assessed were body mass index (BMI), somatotype and body composition; the biochemical parameters were glucose, triglycerides and cholesterol. The players had a BMI of 24.6 kg/m2, were classified as endomesomorphic (5.5-4.3-1.2), and had 39.9% fat mass and 37.8% muscle mass; glucose values were 68.7 mg/dl, triglycerides 128 mg/dl and cholesterol 189 mg/dl. The BU have normal values for BMI and the biochemical parameters, but on closer examination a greater amount of adipose tissue is found, as reported by body composition and somatotype, a situation that could be related to poor eating habits; however, further study is required to reach a categorical conclusion. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  20. A simple vibrating sample magnetometer for macroscopic samples

    NASA Astrophysics Data System (ADS)

    Lopez-Dominguez, V.; Quesada, A.; Guzmán-Mínguez, J. C.; Moreno, L.; Lere, M.; Spottorno, J.; Giacomone, F.; Fernández, J. F.; Hernando, A.; García, M. A.

    2018-03-01

    We here present a simple model of a vibrating sample magnetometer (VSM). The system allows recording magnetization curves at room temperature with a resolution of the order of 0.01 emu and is appropriate for macroscopic samples. The setup can be mounted in different configurations depending on the requirements of the sample to be measured (mass, saturation magnetization, saturation field, etc.). We also include examples of curves obtained with our setup and comparison curves measured with a standard commercial VSM, which confirm the reliability of our device.

  1. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    NASA Astrophysics Data System (ADS)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys' prior is a non-informative prior distribution, used when no information about the parameters is available. The non-informative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained as expected values under the marginal posterior distribution functions, which are multivariate normal for β and inverse Wishart for Σ. However, calculating these expected values involves integrals whose values are difficult to determine analytically, so an approach is needed that generates random samples according to the posterior distribution of each parameter, using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
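
    A minimal Gibbs sampler for this conjugate setup, assuming the Jeffreys prior p(B, Σ) ∝ |Σ|^(-(p+1)/2) stated above (a scipy-based sketch, not the authors' code): B given Σ is matrix normal around the least-squares estimate, and Σ given B is inverse Wishart.

        import numpy as np
        from scipy.stats import invwishart, matrix_normal

        def gibbs_mv_regression(Y, X, n_draws=1000, seed=0):
            """Gibbs sampler for Y = X B + E, rows of E ~ N(0, Sigma).
            Y is (n, p) with p >= 2, X is (n, q), n > max(p, q)."""
            rng = np.random.default_rng(seed)
            n = X.shape[0]
            XtX_inv = np.linalg.inv(X.T @ X)
            B_hat = XtX_inv @ X.T @ Y                    # OLS, also the conditional mean of B
            Sigma = np.cov((Y - X @ B_hat).T)            # starting value
            draws = []
            for _ in range(n_draws):
                B = matrix_normal.rvs(mean=B_hat, rowcov=XtX_inv,
                                      colcov=Sigma, random_state=rng)
                R = Y - X @ B
                Sigma = invwishart.rvs(df=n, scale=R.T @ R, random_state=rng)
                draws.append((B, Sigma))
            return draws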

  2. Estimation of distributional parameters for censored trace level water quality data: 2. Verification and applications

    USGS Publications Warehouse

    Helsel, Dennis R.; Gilliom, Robert J.

    1986-01-01

    Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.

  3. Method of extruding and packaging a thin sample of reactive material including forming the extrusion die

    DOEpatents

    Lewandowski, Edward F.; Peterson, Leroy L.

    1985-01-01

    This invention teaches a method of cutting a narrow slot in an extrusion die with an electrical discharge machine by first drilling spaced holes at the ends of where the slot will be, whereby the oil can flow through the holes and slot to flush the material eroded away as the slot is being cut. The invention further teaches a method of extruding a very thin ribbon of solid highly reactive material such as lithium or sodium through the die in an inert atmosphere of nitrogen, argon or the like as in a glovebox. The invention further teaches a method of stamping out sample discs from the ribbon and of packaging each disc by sandwiching it between two aluminum sheets and cold welding the sheets together along an annular seam beyond the outer periphery of the disc. This provides a sample of high purity reactive material that can have a long shelf life.

  4. Method of extruding and packaging a thin sample of reactive material, including forming the extrusion die

    DOEpatents

    Lewandowski, E.F.; Peterson, L.L.

    1981-11-30

    This invention teaches a method of cutting a narrow slot in an extrusion die with an electrical discharge machine by first drilling spaced holes at the ends of where the slot will be, whereby the oil can flow through the holes and slot to flush the material eroded away as the slot is being cut. The invention further teaches a method of extruding a very thin ribbon of solid highly reactive material such as lithium or sodium through the die in an inert atmosphere of nitrogen, argon, or the like as in a glovebox. The invention further teaches a method of stamping out sample discs from the ribbon and of packaging each disc by sandwiching it between two aluminum sheets and cold welding the sheets together along an annular seam beyond the outer periphery of the disc. This provides a sample of high purity reactive material that can have a long shelf life.

  5. Nonclinical dose formulation analysis method validation and sample analysis.

    PubMed

    Whitmire, Monica Lee; Bryan, Peter; Henry, Teresa R; Holbrook, John; Lehmann, Paul; Mollitor, Thomas; Ohorodnik, Susan; Reed, David; Wietgrefe, Holly D

    2010-12-01

    Nonclinical dose formulation analysis methods are used to confirm test article concentration and homogeneity in formulations and to determine formulation stability in support of regulated nonclinical studies. There is currently no regulatory guidance for nonclinical dose formulation analysis method validation or sample analysis. Regulatory guidance for the validation of analytical procedures has been developed for drug product/formulation testing; however, verification of the formulation concentrations falls under the framework of GLP regulations (not GMP). The only current related regulatory guidance is the bioanalytical guidance for method validation. The fundamental parameters that overlap between bioanalysis and formulation analysis validations include recovery, accuracy, precision, specificity, selectivity, carryover, sensitivity, and stability. Divergence between bioanalytical and drug product validations typically centers on the acceptance criteria used. As the dose formulation samples are not true "unknowns", the concept of quality control samples covering the entire range of the standard curve, serving as the indication of confidence in the data generated from the "unknown" study samples, may not always be necessary. Also, the standard bioanalytical acceptance criteria may not be directly applicable, especially when the determined concentration does not match the target concentration. This paper attempts to reconcile the different practices being performed in the community and to provide recommendations of best practices and proposed acceptance criteria for nonclinical dose formulation method validation and sample analysis.

  6. Identifiability of altimetry-based rating curve parameters in function of river morphological parameters

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; André Garambois, Pierre; Calmant, Stéphane; Paiva, Rodrigo; Walter, Collischonn; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Bonnet, Marie-Paule; Seyler, Frédérique; Monnier, Jérôme

    2016-04-01

    Estimating river discharge for ungauged river reaches from satellite measurements is not straightforward, given the nonlinearity of flow behavior with respect to measurable and non-measurable hydraulic parameters. As a matter of fact, current satellite datasets do not give access to key parameters such as river bed topography and roughness. A unique set of almost one thousand altimetry-based rating curves was built by fitting ENVISAT and Jason-2 water stages to discharges obtained from the MGB-IPH rainfall-runoff model in the Amazon basin. These rated discharges were successfully validated against simulated discharges (Ens = 0.70) and in-situ discharges (Ens = 0.71) and are not mission-dependent. The rating curve is written Q = a(Z - Z0)^b * sqrt(S), with Z the water surface elevation and S its slope, both gained from satellite altimetry, a and b the power-law coefficient and exponent, and Z0 the river bed elevation such that Q(Z0) = 0. For several river reaches in the Amazon basin where ADCP measurements are available, the Z0 values are fairly well validated, with a relative error lower than 10%. The present contribution aims at relating the identifiability and the physical meaning of a, b and Z0 to various hydraulic and geomorphologic conditions. Synthetic river bathymetries sampling a wide range of rivers and inflow discharges are used to perform twin experiments. A shallow water model is run to generate synthetic satellite observations, and rating curve parameters are then determined for each river section with an MCMC algorithm. The twin experiments show that the rating curve formulation with water surface slope, i.e. closer to the Manning equation form, improves parameter identifiability. The compensation between parameters is limited, especially for reaches with little water surface variability. Rating curve parameters are analyzed for riffles and pools, for small to large rivers, and for different river slopes and cross-section shapes. It is shown that the river bed
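
    A minimal sketch of fitting the stated rating curve to stage/slope/discharge triplets. The data below are hypothetical, and nonlinear least squares stands in for the MCMC algorithm used in the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def rating_q(zs, a, b, z0):
            """Rating curve Q = a (Z - Z0)^b sqrt(S); zs stacks water
            stage Z and surface slope S as a (2, n) array."""
            Z, S = zs
            return a * np.clip(Z - z0, 1e-6, None) ** b * np.sqrt(S)

        # hypothetical series for one virtual station
        Z = np.array([10.2, 11.5, 12.8, 13.9, 15.1])      # stage (m)
        S = np.array([4e-5, 5e-5, 6e-5, 7e-5, 8e-5])      # slope (-)
        Q = np.array([1200., 2300., 3900., 5600., 7800.]) # rated discharge (m3/s)
        (a, b, z0), _ = curve_fit(rating_q, np.vstack([Z, S]), Q,
                                  p0=[50.0, 1.7, 8.0], maxfev=20000)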

  7. The use of mini-samples in palaeomagnetism

    NASA Astrophysics Data System (ADS)

    Böhnel, Harald; Michalk, Daniel; Nowaczyk, Norbert; Naranjo, Gildardo Gonzalez

    2009-10-01

    Rock cores of ~25 mm diameter are widely used in palaeomagnetism. Occasionally smaller diameters have been used as well which represents distinct advantages in terms of throughput, weight of equipment and core collections. How their orientation precision compares to 25 mm cores, however, has not been evaluated in detail before. Here we compare the site mean directions and their statistical parameters for 12 lava flows sampled with 25 mm cores (standard samples, typically 8 cores per site) and with 12 mm drill cores (mini-samples, typically 14 cores per site). The site-mean directions for both sample sizes appear to be indistinguishable in most cases. For the mini-samples, site dispersion parameters k on average are slightly lower than for the standard samples reflecting their larger orienting and measurement errors. Applying the Wilcoxon signed-rank test the probability that k or α95 have the same distribution for both sizes is acceptable only at the 17.4 or 66.3 per cent level, respectively. The larger mini-core numbers per site appears to outweigh the lower k values yielding also slightly smaller confidence limits α95. Further, both k and α95 are less variable for mini-samples than for standard size samples. This is interpreted also to result from the larger number of mini-samples per site, which better averages out the detrimental effect of undetected abnormal remanence directions. Sampling of volcanic rocks with mini-samples therefore does not present a disadvantage in terms of the overall obtainable uncertainty of site mean directions. Apart from this, mini-samples do present clear advantages during the field work, as about twice the number of drill cores can be recovered compared to 25 mm cores, and the sampled rock unit is then more widely covered, which reduces the contribution of natural random errors produced, for example, by fractures, cooling joints, and palaeofield inhomogeneities. Mini-samples may be processed faster in the laboratory, which is of

  8. Uncertainty-driven nuclear data evaluation including thermal (n,α) applied to 59Ni

    NASA Astrophysics Data System (ADS)

    Helgesson, P.; Sjöstrand, H.; Rochman, D.

    2017-11-01

    This paper presents a novel approach to the evaluation of nuclear data (ND), combining experimental data for thermal cross sections with resonance parameters and nuclear reaction modeling. The method involves sampling of various uncertain parameters, in particular uncertain components in experimental setups, and provides extensive covariance information, including consistent cross-channel correlations over the whole energy spectrum. The method is developed for, and applied to, 59Ni, but may be used as a whole, or in part, for other nuclides. 59Ni is particularly interesting since a substantial amount of 59Ni is produced in thermal nuclear reactors by neutron capture in 58Ni and since it has a non-threshold (n,α) cross section. Therefore, 59Ni makes a very important contribution to the helium production in stainless steel in a thermal reactor. However, current evaluated ND libraries contain old information for 59Ni, without any uncertainty information. The work includes a study of thermal cross section experiments and a novel combination of this experimental information, giving the full multivariate distribution of the thermal cross sections. In particular, the thermal (n,α) cross section is found to be 12.7 ± 0.7 b. This is consistent with, but yet different from, currently established values. Further, the distribution of thermal cross sections is combined with reported resonance parameters, and with TENDL-2015 data, to provide full random ENDF files; all of this is done in a novel way, keeping uncertainties and correlations in mind. The random files are also condensed into one single ENDF file with covariance information, which is now part of a beta version of JEFF 3.3. Finally, the random ENDF files have been processed and used in an MCNP model to study the helium production in stainless steel. The increase in the (n,α) rate due to 59Ni compared to fresh stainless steel is found to be a factor of 5.2 at a certain time in the reactor vessel, with a relative

  9. Three-dimensional cathodoluminescence characterization of a semipolar GaInN based LED sample

    NASA Astrophysics Data System (ADS)

    Hocker, Matthias; Maier, Pascal; Tischer, Ingo; Meisch, Tobias; Caliebe, Marian; Scholz, Ferdinand; Mundszinger, Manuel; Kaiser, Ute; Thonke, Klaus

    2017-02-01

    A semipolar GaInN based light-emitting diode (LED) sample is investigated by three-dimensionally resolved cathodoluminescence (CL) mapping. Similar to conventional depth-resolved CL spectroscopy (DRCLS), the spatial resolution perpendicular to the sample surface is obtained by calibration of the CL data with Monte-Carlo-simulations (MCSs) of the primary electron beam scattering. In addition to conventional MCSs, we take into account semiconductor-specific processes like exciton diffusion and the influence of the band gap energy. With this method, the structure of the LED sample under investigation can be analyzed without additional sample preparation, like cleaving of cross sections. The measurement yields the thickness of the p-type GaN layer, the vertical position of the quantum wells, and a defect analysis of the underlying n-type GaN, including the determination of the free charge carrier density. The layer arrangement reconstructed from the DRCLS data is in good agreement with the nominal parameters defined by the growth conditions.

  10. Sampling procedures for throughfall monitoring: A simulation study

    NASA Astrophysics Data System (ADS)

    Zimmermann, Beate; Zimmermann, Alexander; Lark, Richard Murray; Elsenbeer, Helmut

    2010-01-01

    What is the most appropriate sampling scheme to estimate event-based average throughfall? A satisfactory answer to this seemingly simple question has yet to be found, a failure which we attribute to previous efforts' dependence on empirical studies. Here we try to answer this question by simulating stochastic throughfall fields based on parameters for statistical models of large monitoring data sets. We subsequently sampled these fields with different sampling designs and variable sample supports. We evaluated the performance of a particular sampling scheme with respect to the uncertainty of possible estimated means of throughfall volumes. Even for a relative error limit of 20%, an impractically large number of small, funnel-type collectors would be required to estimate mean throughfall, particularly for small events. While stratification of the target area is not superior to simple random sampling, cluster random sampling involves the risk of being less efficient. A larger sample support, e.g., the use of trough-type collectors, considerably reduces the necessary sample sizes and eliminates the sensitivity of the mean to outliers. Since the gain in time associated with the manual handling of troughs versus funnels depends on the local precipitation regime, the employment of automatically recording clusters of long troughs emerges as the most promising sampling scheme. Even so, a relative error of less than 5% appears out of reach for throughfall under heterogeneous canopies. We therefore suspect a considerable uncertainty of input parameters for interception models derived from measured throughfall, in particular, for those requiring data of small throughfall events.
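
    A minimal sketch of the simulation logic described: sample a synthetic throughfall field with varying numbers of point collectors and report the resulting relative error of the estimated event mean. The lognormal stand-in field and the 95th-percentile error definition are assumptions, not the authors' geostatistical models.

        import numpy as np

        def relative_error(field, n_collectors, n_trials=2000, seed=0):
            """Monte Carlo check of a funnel-type sampling scheme on a
            simulated throughfall field (95th-percentile relative error
            of the estimated mean)."""
            rng = np.random.default_rng(seed)
            true_mean = field.mean()
            idx = rng.integers(0, field.size, size=(n_trials, n_collectors))
            means = field.ravel()[idx].mean(axis=1)
            return np.quantile(np.abs(means - true_mean) / true_mean, 0.95)

        # a lognormal 'event' field standing in for a stochastic simulation
        field = np.random.default_rng(1).lognormal(mean=1.0, sigma=0.6, size=(200, 200))
        for n in (10, 25, 100):
            print(n, relative_error(field, n))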

  11. Effects of Intra-Family Parameters: Educative Style and Academic Knowledge of Parents and Their Economic Conditions on Teenagers' Personality and Behavior

    ERIC Educational Resources Information Center

    Bakhtavar, Mohammad; Bayova, Rana

    2015-01-01

    The present study aims to investigate the effects of intra-family parameters; educative styles and academic knowledge of parents and their economic condition on teenagers' personality and behavior. The present study is a descriptive survey. The statistical sample of the study included 166 teenage students from Baku, Azerbaijan and 332 of their…

  12. Use of randomized sampling for analysis of metabolic networks.

    PubMed

    Schellenberger, Jan; Palsson, Bernhard Ø

    2009-02-27

    Genome-scale metabolic network reconstructions in microorganisms have been formulated and studied for about 8 years. The constraint-based approach has shown great promise in analyzing the systemic properties of these network reconstructions. Notably, constraint-based models have been used successfully to predict the phenotypic effects of knock-outs and for metabolic engineering. The inherent uncertainty in both parameters and variables of large-scale models is significant and is well suited to study by Monte Carlo sampling of the solution space. These techniques have been applied extensively to the reaction rate (flux) space of networks, with more recent work focusing on dynamic/kinetic properties. Monte Carlo sampling as an analysis tool has many advantages, including the ability to work with missing data, the ability to apply post-processing techniques, and the ability to quantify uncertainty and to optimize experiments to reduce uncertainty. We present an overview of this emerging area of research in systems biology.
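
    A minimal hit-and-run sketch for uniformly sampling a constraint polytope {x : Ax <= b}, the basic move behind much flux-space Monte Carlo sampling; production tools add warm-up, rounding and convergence diagnostics, which are omitted here.

        import numpy as np

        def hit_and_run(A, b, x0, n_samples=1000, seed=0):
            """Sample the bounded polytope {x : A x <= b}; x0 must be a
            strictly interior starting point."""
            rng = np.random.default_rng(seed)
            x, out = x0.astype(float), []
            for _ in range(n_samples):
                d = rng.normal(size=x.size)
                d /= np.linalg.norm(d)                  # random direction
                # line constraints: A(x + t d) <= b  ->  t (A d) <= b - A x
                Ad, slack = A @ d, b - A @ x
                t_hi = np.min(slack[Ad > 0] / Ad[Ad > 0])
                t_lo = np.max(slack[Ad < 0] / Ad[Ad < 0])
                x = x + rng.uniform(t_lo, t_hi) * d     # uniform point on the chord
                out.append(x.copy())
            return np.array(out)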

  13. The Utility of IRT in Small-Sample Testing Applications.

    ERIC Educational Resources Information Center

    Sireci, Stephen G.

    The utility of modified item response theory (IRT) models in small sample testing applications was studied. The modified IRT models were modifications of the one- and two-parameter logistic models. One-, two-, and three-parameter models were also studied. Test data were from 4 years of a national certification examination for persons desiring…

  14. A new Bayesian recursive technique for parameter estimation

    NASA Astrophysics Data System (ADS)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper to two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set, using minimal training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
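
    The following is a minimal sketch of the bound-narrowing idea described above (sample within bounds, keep the fittest candidates, shrink the bounds, repeat); it is not the authors' LOBARE code, and the quadratic toy objective and all numeric settings are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def objective(theta):
            # Hypothetical fitness: negative squared error to a known optimum.
            target = np.array([1.5, -0.7, 3.0])
            return -np.sum((theta - target) ** 2)

        lower = np.full(3, -10.0)
        upper = np.full(3, 10.0)

        for iteration in range(8):
            # Sample candidate parameter sets uniformly within the current bounds.
            candidates = rng.uniform(lower, upper, size=(200, 3))
            scores = np.array([objective(c) for c in candidates])
            # Keep the fittest 20% and tighten the bounds around them.
            elite = candidates[np.argsort(scores)[-40:]]
            lower, upper = elite.min(axis=0), elite.max(axis=0)

        print("final bounds:", lower, upper)   # should enclose (1.5, -0.7, 3.0)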

  15. Auxiliary Parameter MCMC for Exponential Random Graph Models

    NASA Astrophysics Data System (ADS)

    Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro

    2016-11-01

    Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the development of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.

  16. Quantification of soil water retention parameters using multi-section TDR-waveform analysis

    NASA Astrophysics Data System (ADS)

    Baviskar, S. M.; Heimovaara, T. J.

    2017-06-01

    Soil water retention parameters are important for describing flow in variably saturated soils. TDR is one of the standard methods for determining the water content of soil samples. In this study, we present an approach to estimate the water retention parameters of a sample that is initially saturated and then subjected to incremental decreases in boundary head, causing it to drain in a multi-step fashion. TDR waveforms are measured along the height of the sample at daily intervals, under assumed hydrostatic conditions at each step. The cumulative discharge drained from the sample is also recorded. The saturated water content is obtained by volumetric analysis after the final step of the multi-step drainage. The equation obtained by coupling the unsaturated parametric function with the apparent dielectric permittivity is fitted to a TDR wave-propagation forward model. The unsaturated parametric function is used to spatially interpolate the water contents along the TDR probe. The cumulative discharge data are fitted with the cumulative discharge estimated from the unsaturated parametric function, and the weight of water inside the sample at the first and final boundary heads is fitted with the corresponding weights calculated from the unsaturated parametric function. A Bayesian optimization scheme is used to obtain optimized water retention parameters for these different objective functions. This approach can be used for long samples and is especially suitable for characterizing sands with a uniform particle size distribution at low capillary heads.

  17. CO2 response (ACi) gas exchange, calculated Vcmax & Jmax parameters, Feb2016-May2016, PA-SLZ, PA-PNM: Panama

    DOE Data Explorer

    Rogers, Alistair [Brookhaven National Lab; Serbin, Shawn [Brookhaven National Lab; Ely, Kim [Brookhaven National Lab; Wu, Jin [BNL; Wolfe, Brett [Smithsonian; Dickman, Turin [Los Alamos National Lab; Collins, Adam [Los Alamos National Lab; Detto, Matteo [Princeton; Grossiord, Charlotte [Los Alamos National Lab; McDowell, Nate [Los Alamos National Lab; Michaletz, Sean

    2017-01-01

    CO2 response (ACi) gas exchange measured on leaves collected from sunlit canopy trees on a monthly basis from Feb to May 2016 at SLZ and PNM. Dataset includes calculated Vcmax and Jmax parameters. This data was collected as part of the 2016 ENSO campaign. See related datasets (existing and future) for further sample details, leaf water potential, LMA, leaf spectra, other gas exchange and leaf chemistry.

  18. Linear elastic properties derivation from microstructures representative of transport parameters.

    PubMed

    Hoang, Minh Tan; Bonnet, Guy; Tuan Luu, Hoang; Perrot, Camille

    2014-06-01

    It is shown that three-dimensional periodic unit cells (3D PUC) representative of transport parameters involved in the description of long wavelength acoustic wave propagation and dissipation through real foam samples may also be used as a standpoint to estimate their macroscopic linear elastic properties. Application of the model yields quantitative agreement between numerical homogenization results, available literature data, and experiments. Key contributions of this work include recognizing the importance of membranes and properties of the base material for the physics of elasticity. The results of this paper demonstrate that a 3D PUC may be used to understand and predict not only the sound absorbing properties of porous materials but also their transmission loss, which is critical for sound insulation problems.

  19. Determination of the atrazine migration parameters in Vertisol

    NASA Astrophysics Data System (ADS)

    Raymundo-Raymundo, E.; Hernandez-Vargas, J.; Nikol'Skii, Yu. N.; Guber, A. K.; Gavi-Reyes, F.; Prado-Pano, B. L.; Figueroa-Sandoval, B.; Mendosa-Hernandez, J. R.

    2010-05-01

    The parameters of atrazine migration in columns of undisturbed Vertisol, sampled from an irrigated plot in Guanajuato, Mexico, were determined. A model of convection-dispersion transport of chemical compounds accounting for decomposition and equilibrium adsorption, widely applied for assessing the risk of contamination of natural waters with pesticides, was used. The model parameters were obtained by solving the inverse problem of the transport equation on the basis of laboratory experiments on the transport of the 18O isotope and atrazine in soil columns with an undisturbed structure at three filtration velocities. The model adequately described the experimental data when the parameters were selected individually for each output curve. Physically unsubstantiated parameters of atrazine adsorption and degradation were obtained when the hydrodynamic dispersion parameter was determined from the 18O migration data. The simulation also showed that using parameters obtained at water contents close to saturation in calculations for an unsaturated soil overestimated the leaching rate and the maximum concentration of atrazine in the output curve compared to the experimental data.

  20. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to sample the lunar surface and return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a rotating camera platform. Optical images of the sampling area are obtained by PCAM as two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images, from which the lunar terrain can be reconstructed photogrammetrically. The installation parameters of PCAM with respect to the CE-5 lander are critical for calculating the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied, and the observation program and solution methods for the installation parameters are introduced. The accuracy of the parameter solution is analyzed using observations from the PCAM scientific validation experiment, which tests the PCAM detection process, ground data processing methods, product quality and so on. The analysis shows that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images by less than 1 pixel, so the measurement methods and parameter accuracy studied in this paper meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  1. A systematic investigation of sample diluents in modern supercritical fluid chromatography.

    PubMed

    Desfontaine, Vincent; Tarafder, Abhijit; Hill, Jason; Fairchild, Jacob; Grand-Guillaume Perrenoud, Alexandre; Veuthey, Jean-Luc; Guillarme, Davy

    2017-08-18

    This paper focuses on the possibility of injecting large volumes (up to 10 μL) in ultra-high performance supercritical fluid chromatography (UHPSFC) under generic gradient conditions. Several injection and method parameters were individually evaluated (i.e. analyte concentration, injection volume, initial percentage of co-solvent in the gradient, nature of the weak needle-wash solvent, nature of the sample diluent, nature of the column and of the analyte). The most critical parameters were further investigated using a multivariate approach. The overall results suggested that several aprotic solvents, including methyl tert-butyl ether (MTBE), dichloromethane, acetonitrile and cyclopentyl methyl ether (CPME), were well adapted to the injection of large volumes in UHPSFC, while MeOH was generally the worst alternative. However, the nature of the stationary phase also had a strong impact, and some of these diluents did not perform equally on each column. This was due to competition between the analyte and the diluent for adsorption on the stationary phase. This observation introduced the idea that the sample diluent should be chosen according not only to the analyte but also to the column chemistry, to limit interactions between the diluent and the ligands. Other important characteristics of the "ideal" SFC sample diluent were also highlighted: aprotic solvents with low viscosity are preferable, as they avoid strong solvent effects and viscous fingering. In the end, the authors suggest that the choice of the sample diluent should be part of method development, as a function of the analyte and the selected stationary phase. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Rain sampling device

    DOEpatents

    Nelson, Danny A.; Tomich, Stanley D.; Glover, Donald W.; Allen, Errol V.; Hales, Jeremy M.; Dana, Marshall T.

    1991-01-01

    The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of said precipitation from said chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device.

  3. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
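
    A hedged sketch of the kind of simulation described above: one-compartment monoexponential kinetics with a dosing interval shorter than the terminal half-life, sparse sampling, and dose times mis-reported by up to two hours. The dose, noise level, and sampling design are illustrative assumptions, not the study's actual settings.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(1)
        dose, n_doses, tau = 100.0, 6, 8.0      # mg, number of doses, interval (h)
        true_k, true_v = 0.05, 20.0             # 1/h (half-life ~ 13.9 h > tau), L

        def conc(t, k, v, dose_times):
            # Superposition of IV bolus doses for a one-compartment model.
            t = np.atleast_1d(t)
            c = np.zeros_like(t, dtype=float)
            for td in dose_times:
                past = t >= td
                c[past] += (dose / v) * np.exp(-k * (t[past] - td))
            return c

        true_times = np.arange(n_doses) * tau
        sample_t = np.array([4.0, 20.0, 44.0])  # sparse sampling design (assumed)
        obs = conc(sample_t, true_k, true_v, true_times) * rng.lognormal(0, 0.1, 3)

        # Fit assuming *reported* dose times that are off by up to +/- 2 h.
        reported = true_times + rng.uniform(-2, 2, n_doses)
        model = lambda t, k, v: conc(t, k, v, reported)
        (k_hat, v_hat), _ = curve_fit(model, sample_t, obs, p0=[0.1, 10.0],
                                      bounds=(1e-6, np.inf))
        print(f"k error {abs(k_hat - true_k) / true_k:.1%}, "
              f"V error {abs(v_hat - true_v) / true_v:.1%}")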

  4. The DSM-5 Dimensional Anxiety Scales in a Dutch non-clinical sample: psychometric properties including the adult separation anxiety disorder scale.

    PubMed

    Möller, Eline L; Bögels, Susan M

    2016-09-01

    With DSM-5, the American Psychiatric Association encourages complementing categorical diagnoses with dimensional severity ratings. We therefore examined the psychometric properties of the DSM-5 Dimensional Anxiety Scales, a set of brief dimensional scales that are consistent in content and structure and assess DSM-5-based core features of anxiety disorders. Participants (285 males, 255 females) completed the DSM-5 Dimensional Anxiety Scales for social anxiety disorder, generalized anxiety disorder, specific phobia, agoraphobia, and panic disorder that were included in previous studies on the scales, and also for separation anxiety disorder, which is included in the DSM-5 chapter on anxiety disorders. Moreover, they completed the Screen for Child Anxiety Related Emotional Disorders Adult version (SCARED-A). The DSM-5 Dimensional Anxiety Scales demonstrated high internal consistency, and the scales correlated significantly and substantially with corresponding SCARED-A subscales, supporting convergent validity. Separation anxiety appeared present among adults, supporting the DSM-5 recognition of separation anxiety as an anxiety disorder across the life span. To conclude, the DSM-5 Dimensional Anxiety Scales are a valuable tool to screen for specific adult anxiety disorders, including separation anxiety. Research in more diverse and clinical samples with anxiety disorders is needed. © 2016 The Authors International Journal of Methods in Psychiatric Research Published by John Wiley & Sons Ltd.

  5. How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation

    ERIC Educational Resources Information Center

    Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard

    2006-01-01

    Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…

  6. Optimization of sampling parameters for collection and preconcentration of alveolar air by needle traps.

    PubMed

    Filipiak, Wojciech; Filipiak, Anna; Ager, Clemens; Wiesenhofer, Helmut; Amann, Anton

    2012-06-01

    An approach for the collection and preconcentration of breath VOCs using needle traps was developed and optimized. Alveolar air was collected from only a few exhalations, under visual control of expired CO2, into a large gas-tight glass syringe and then warmed to 45 °C for a short time to avoid condensation. Subsequently, a specially constructed sampling device equipped with Bronkhorst® electronic flow controllers was used for automated adsorption. This sampling device allows time-saving collection of expired/inspired air in parallel onto three different needle traps, as well as improved sensitivity and reproducibility of NT-GC-MS analysis through collection of a relatively large volume (up to 150 ml) of exhaled breath. It was shown that collecting alveolar air from only a few exhalations into a large syringe followed by automated adsorption on needle traps yields better results than manual sorption by up/down cycles with a 1 ml syringe, mostly because condensation is avoided and the sample flow rate is electronically controlled and stable. The optimal needle-trap profile and composition consists of 2 cm of Carbopack X and 1 cm of Carboxen 1000, allowing highly efficient VOC enrichment, while injection by a fast expansive flow technique requires no modifications to the instrumentation, and fully automated GC-MS analysis can be performed with a commercially available autosampler. This optimized analytical procedure considerably facilitates the collection and enrichment of alveolar air and is therefore suitable for application at the bedside of critically ill patients in an intensive care unit. Due to its simplicity, it can replace the time-consuming sampling of sufficient breath volume by numerous up/down cycles with a 1 ml syringe.

  7. [Parameters of prosaccades and antisaccades as potential markers of anxiety disorders].

    PubMed

    Shalaginova, I G; Vakoliuk, I A; Ecina, I G

    To evaluate the parameters of visually-induced saccades and antisaccades in drug-naïve patients with anxiety disorders. A sample consisted of 18 subjects, including 10 healthy people and 8 patients with the diagnosis of anxiety disorder (ICD-10 items F43.0, F41.0, F41.1, F42). The authors' method of video-oculography was used to assess eye-movement reactions. An increase in latency of correct antisaccades (AS) and visually-induced saccades (VIS) in patients with anxiety disorders was found. The effectiveness of task performance did not differ compared to healthy controls. A decreased generation of predictive saccades was identified in the experimental group. Possible neurophysiological foundations of the saccadic dysfunctions are discussed.

  8. Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves

    NASA Astrophysics Data System (ADS)

    Chen, W.; Ni, S.; Wang, Z.

    2011-12-01

    In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanisms, depth and moment) for earthquakes well recorded on relatively dense seismic networks. However, for regions covered by sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and explore their efficiency with bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike is much improved.

  9. Modelling tourists arrival using time varying parameter

    NASA Astrophysics Data System (ADS)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

    The importance of tourism and its related sectors for economic development and poverty reduction in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work demonstrates the time varying parameter (TVP) technique for modelling the arrival of Korean tourists in Bali. The number of Korean tourists visiting Bali from January 2010 to December 2015 was used as the dependent variable (KOR). The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Since tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, and its parameters were approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave mean absolute percentage errors (MAPE) of 11.24 percent and 12.86 percent, respectively.
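
    A minimal sketch of a random-walk time-varying-parameter regression estimated with the Kalman filter, on synthetic data; the state and observation noise variances, and the use of three predictors, are assumptions standing in for the KOR/WON/INFKR/INFID series.

        import numpy as np

        rng = np.random.default_rng(2)
        T, p = 120, 3                       # months, coefficients (incl. intercept)
        X = np.column_stack([np.ones(T), rng.normal(size=(T, p - 1))])
        beta_true = np.cumsum(rng.normal(0, 0.05, size=(T, p)), axis=0) + 1.0
        y = np.einsum("tp,tp->t", X, beta_true) + rng.normal(0, 0.3, T)

        # Kalman filter for the state-space form: beta_t = beta_{t-1} + w_t,
        # y_t = x_t' beta_t + e_t, with assumed noise variances Q and R.
        Q, R = 0.05**2 * np.eye(p), 0.3**2
        beta = np.zeros(p)
        P = 10.0 * np.eye(p)
        filtered = np.zeros((T, p))
        for t in range(T):
            P = P + Q                               # predict the state covariance
            x = X[t]
            S = x @ P @ x + R                       # innovation variance
            K = P @ x / S                           # Kalman gain
            beta = beta + K * (y[t] - x @ beta)     # measurement update
            P = P - np.outer(K, x) @ P
            filtered[t] = beta

        print("final coefficients:", filtered[-1])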

  10. Calculation of Weibull strength parameters, Batdorf flaw density constants and related statistical quantities using PC-CARES

    NASA Technical Reports Server (NTRS)

    Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.

    1990-01-01

    This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running PC-DOS/MS-DOS or IBM/MS OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. Included in the manual is a description of the calculation of the shape and scale parameters of the two-parameter Weibull distribution using least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics, with complete and censored samples. The methods for detecting outliers and for calculating the Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, as well as the techniques for calculating the Batdorf flaw-density constants, are also described.
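
    As an illustration of one of the calculations described above, the sketch below estimates the shape and scale of a two-parameter Weibull distribution by maximum likelihood for a complete (uncensored) sample, using the standard profile equation for the shape. It is a minimal stand-in, not PC-CARES code, and the synthetic strength data are assumed.

        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(3)
        x = rng.weibull(2.5, size=200) * 400.0   # synthetic strengths: shape 2.5, scale 400

        def shape_equation(m):
            # Standard MLE condition for the Weibull shape parameter m:
            # sum(x^m ln x)/sum(x^m) - 1/m - mean(ln x) = 0.
            xm = x ** m
            return (xm * np.log(x)).sum() / xm.sum() - 1.0 / m - np.log(x).mean()

        m_hat = brentq(shape_equation, 0.1, 50.0)            # root of the profile equation
        scale_hat = np.mean(x ** m_hat) ** (1.0 / m_hat)     # scale follows from the shape
        print(f"shape ~ {m_hat:.2f}, scale ~ {scale_hat:.1f}")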

  11. Impact of Processing Method on Recovery of Bacteria from Wipes Used in Biological Surface Sampling

    PubMed Central

    Olson, Nathan D.; Filliben, James J.; Morrow, Jayne B.

    2012-01-01

    Environmental sampling for microbiological contaminants is a key component of hygiene monitoring and risk characterization practices utilized across diverse fields of application. However, confidence in surface sampling results, both in the field and in controlled laboratory studies, has been undermined by large variation in sampling performance results. Sources of variation include controlled parameters, such as sampling materials and processing methods, which often differ among studies, as well as random and systematic errors; however, the relative contributions of these factors remain unclear. The objective of this study was to determine the relative impacts of sample processing methods, including extraction solution and physical dissociation method (vortexing and sonication), on recovery of Gram-positive (Bacillus cereus) and Gram-negative (Burkholderia thailandensis and Escherichia coli) bacteria from directly inoculated wipes. This work showed that target organism had the largest impact on extraction efficiency and recovery precision, as measured by traditional colony counts. The physical dissociation method (PDM) had negligible impact, while the effect of the extraction solution was organism dependent. Overall, however, extraction of organisms from wipes using phosphate-buffered saline with 0.04% Tween 80 (PBST) resulted in the highest mean recovery across all three organisms. The results from this study contribute to a better understanding of the factors that influence sampling performance, which is critical to the development of efficient and reliable sampling methodologies relevant to public health and biodefense. PMID:22706055

  12. A Bayesian ensemble data assimilation to constrain model parameters and land-use carbon emissions

    NASA Astrophysics Data System (ADS)

    Lienert, Sebastian; Joos, Fortunat

    2018-05-01

    A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters by observations and to quantify carbon emissions from land-use and land-cover change (LULCC). Processes featured in DGVMs include parameters which are prone to substantial uncertainty. To cope with these uncertainties, Latin hypercube sampling (LHS) is used to create a 1000-member perturbed parameter ensemble, which is then evaluated with a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90% confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated and estimates of ELUC for the 10 countries with the largest contribution to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is of the same order of magnitude as the parameter-induced uncertainty and in some cases could potentially even be offset with appropriate parameter choice.
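
    A hedged numpy sketch of Latin hypercube sampling as used to build such a perturbed parameter ensemble: one stratified draw per member and per parameter, with the strata decoupled by random permutation. The parameter names and ranges are purely illustrative, not LPX-Bern parameters.

        import numpy as np

        rng = np.random.default_rng(4)

        def latin_hypercube(n_members, bounds):
            """One stratified draw per member and parameter, randomly paired."""
            d = len(bounds)
            # Row i falls in stratum i for every column...
            u = (rng.uniform(size=(n_members, d)) + np.arange(n_members)[:, None]) / n_members
            # ...then permute each column independently to decouple the strata.
            for j in range(d):
                u[:, j] = u[rng.permutation(n_members), j]
            lo = np.array([b[0] for b in bounds])
            hi = np.array([b[1] for b in bounds])
            return lo + u * (hi - lo)

        # Hypothetical DGVM parameter ranges (names and values illustrative only).
        bounds = [(0.1, 1.0),    # e.g. a turnover rate
                  (5.0, 25.0),   # e.g. a temperature sensitivity
                  (0.01, 0.5)]   # e.g. a mortality fraction
        ensemble = latin_hypercube(1000, bounds)
        print(ensemble.shape)    # (1000, 3): one row per ensemble member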

  13. 40 CFR 257.23 - Ground-water sampling and analysis requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... parameters shall be determined after considering the number of samples in the background data base, the data... considering the number of samples in the background data base, the data distribution, and the range of the... of § 257.22(a)(1). (f) The number of samples collected to establish ground-water quality data must be...

  14. 40 CFR 257.23 - Ground-water sampling and analysis requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... parameters shall be determined after considering the number of samples in the background data base, the data... considering the number of samples in the background data base, the data distribution, and the range of the... of § 257.22(a)(1). (f) The number of samples collected to establish ground-water quality data must be...

  15. An algebraic aspect of Pareto mixture parameter estimation using censored sample: A Bayesian approach.

    PubMed

    Saleem, Muhammad; Sharif, Kashif; Fahmi, Aliya

    2018-04-27

    Applications of the Pareto distribution are common in reliability, survival and financial studies. In this paper, a Pareto mixture distribution is considered to model a heterogeneous population comprising two subgroups. Each subgroup is characterized by the same functional form with distinct unknown shape and scale parameters. Bayes estimators have been derived using flat and conjugate priors under a squared error loss function. Standard errors have also been derived for the Bayes estimators. An interesting feature of this study is the preparation of the components of the Fisher information matrix.
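
    A simplified, hedged sketch of the conjugate Bayes calculation for a single Pareto component (the paper's mixture and censoring machinery is omitted): with known scale and a Gamma prior on the shape, the posterior is again Gamma, and the Bayes estimator under squared error loss is the posterior mean. All numeric settings are assumptions.

        import numpy as np

        rng = np.random.default_rng(11)

        # Synthetic Pareto(shape=3, scale=1) sample via inverse CDF: x = scale * U^(-1/shape).
        n, true_shape, scale = 60, 3.0, 1.0
        x = scale * rng.uniform(size=n) ** (-1.0 / true_shape)

        # Conjugate Gamma(a, b) prior on the shape alpha: the posterior is
        # Gamma(a + n, b + sum(log(x / scale))), and the Bayes estimator under
        # squared error loss is the posterior mean.
        a, b = 1.0, 1.0
        a_post = a + n
        b_post = b + np.log(x / scale).sum()
        bayes_estimate = a_post / b_post
        posterior_sd = np.sqrt(a_post) / b_post
        print(f"shape estimate {bayes_estimate:.2f} +/- {posterior_sd:.2f}")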

  16. Preparing Monodisperse Macromolecular Samples for Successful Biological Small-Angle X-ray and Neutron Scattering Experiments

    PubMed Central

    Jeffries, Cy M.; Graewert, Melissa A.; Blanchet, Clément E.; Langley, David B.; Whitten, Andrew E.; Svergun, Dmitri I

    2017-01-01

    Small-angle X-ray and neutron scattering (SAXS and SANS) are techniques used to extract structural parameters and determine the overall structures and shapes of biological macromolecules, complexes and assemblies in solution. The scattering intensities measured from a sample contain contributions from all atoms within the illuminated sample volume including the solvent and buffer components as well as the macromolecules of interest. In order to obtain structural information, it is essential to prepare an exactly matched solvent blank so that background scattering contributions can be accurately subtracted from the sample scattering to obtain the net scattering from the macromolecules in the sample. In addition, sample heterogeneity caused by contaminants, aggregates, mismatched solvents, radiation damage or other factors can severely influence and complicate data analysis so it is essential that the samples are pure and monodisperse for the duration of the experiment. This Protocol outlines the basic physics of SAXS and SANS and reveals how the underlying conceptual principles of the techniques ultimately ‘translate’ into practical laboratory guidance for the production of samples of sufficiently high quality for scattering experiments. The procedure describes how to prepare and characterize protein and nucleic acid samples for both SAXS and SANS using gel electrophoresis, size exclusion chromatography and light scattering. Also included are procedures specific to X-rays (in-line size exclusion chromatography SAXS) and neutrons, specifically preparing samples for contrast matching/variation experiments and deuterium labeling of proteins. PMID:27711050

  17. Thermal nanostructure: An order parameter multiscale ensemble approach

    NASA Astrophysics Data System (ADS)

    Cheluvaraja, S.; Ortoleva, P.

    2010-02-01

    Deductive all-atom multiscale techniques imply that many nanosystems can be understood in terms of the slow dynamics of order parameters that coevolve with the quasiequilibrium probability density for rapidly fluctuating atomic configurations. The result of this multiscale analysis is a set of stochastic equations for the order parameters whose dynamics is driven by thermal-average forces. We present an efficient algorithm for sampling atomistic configurations in viruses and other supramillion atom nanosystems. This algorithm allows for sampling of a wide range of configurations without creating an excess of high-energy, improbable ones. It is implemented and used to calculate thermal-average forces. These forces are then used to search the free-energy landscape of a nanosystem for deep minima. The methodology is applied to thermal structures of Cowpea chlorotic mottle virus capsid. The method has wide applicability to other nanosystems whose properties are described by the CHARMM or other interatomic force field. Our implementation, denoted SIMNANOWORLD™, achieves calibration-free nanosystem modeling. Essential atomic-scale detail is preserved via a quasiequilibrium probability density while overall character is provided via predicted values of order parameters. Applications from virology to the computer-aided design of nanocapsules for delivery of therapeutic agents and of vaccines for nonenveloped viruses are envisioned.

  18. Novel scheme for rapid parallel parameter estimation of gravitational waves from compact binary coalescences

    NASA Astrophysics Data System (ADS)

    Pankow, C.; Brady, P.; Ochsner, E.; O'Shaughnessy, R.

    2015-07-01

    We introduce a highly parallelizable architecture for estimating the parameters of compact binary coalescences using gravitational-wave data and waveform models. Using a spherical harmonic mode decomposition, the waveform is expressed as a sum over modes that depend on the intrinsic parameters (e.g., masses), with coefficients that depend on the observer-dependent extrinsic parameters (e.g., distance, sky position). The data are then prefiltered against those modes, at fixed intrinsic parameters, enabling efficient evaluation of the likelihood for generic source positions and orientations, independent of waveform length or generation time. We efficiently parallelize our intrinsic-space calculation by integrating over all extrinsic parameters using a Monte Carlo integration strategy. Since the waveform generation and prefiltering happen only once, the cost of integration dominates the procedure. We also operate hierarchically, using information from existing gravitational-wave searches to identify the regions of parameter space to emphasize in our sampling. As proof of concept and verification of the result, we have implemented this algorithm using standard time-domain waveforms, processing each event in less than one hour on recent computing hardware. For most events we evaluate the marginalized likelihood (evidence) with statistical errors of ≲5%, and even smaller in many cases. With a bounded runtime independent of the waveform model's starting frequency, a nearly unchanged strategy could estimate neutron star (NS)-NS parameters in the 2018 advanced LIGO era. Our algorithm is usable with any noise curve and any existing time-domain model at any mass, including some waveforms which are computationally costly to evolve.
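
    A toy sketch of the core idea of marginalizing over extrinsic parameters by Monte Carlo at fixed intrinsic parameters; the one-dimensional Gaussian stand-in for the prefiltered likelihood and the uniform distance prior are assumptions for illustration, not the gravitational-wave likelihood.

        import numpy as np

        rng = np.random.default_rng(5)

        def log_likelihood(intrinsic, extrinsic):
            # Hypothetical stand-in for the prefiltered waveform likelihood.
            m_chirp = intrinsic
            distance, = extrinsic
            return -0.5 * ((m_chirp - 1.2) ** 2 / 0.01
                           + (distance - 400.0) ** 2 / 1e4)

        def marginal_likelihood(intrinsic, n_draws=100_000):
            # Monte Carlo integral over the extrinsic prior (here: uniform distance).
            d = rng.uniform(100.0, 1000.0, n_draws)
            logL = log_likelihood(intrinsic, (d,))
            m = logL.max()                        # log-sum-exp for numerical stability
            return m + np.log(np.mean(np.exp(logL - m)))

        for m_chirp in (1.0, 1.2, 1.4):           # scan the intrinsic parameter
            print(m_chirp, marginal_likelihood(m_chirp))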

  19. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV

    NASA Astrophysics Data System (ADS)

    Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.

    2011-04-01

    When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between

  20. [Study on the automatic parameters identification of water pipe network model].

    PubMed

    Jia, Hai-Feng; Zhao, Qi-Feng

    2010-01-01

    Based on an analysis of problems in the development and application of water pipe network models, the automatic identification of model parameters is regarded as a key bottleneck for a model's application in water supply enterprises. A methodology for automatic parameter identification of water pipe network models based on GIS and SCADA databases is proposed. The core algorithms are then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte Carlo Sampling) is used for automatic identification of the parameters; a detailed technical route based on RSA and MCS is presented. A module for automatic parameter identification of water pipe network models was developed. Finally, a case study on a typical water pipe network was conducted, and satisfactory results were achieved.
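
    A hedged sketch of the RSA step described above: Monte Carlo sample the parameters, split the runs into behavioural and non-behavioural by an error threshold, and flag a parameter as sensitive when the two subsamples have clearly different distributions (here via a Kolmogorov-Smirnov test). The toy misfit function and parameter names are assumptions.

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(6)
        n = 2000
        # Hypothetical pipe-network parameters: roughness matters in the toy
        # misfit below, while the demand factor is deliberately insensitive.
        theta = rng.uniform([80.0, 0.5], [150.0, 1.5], size=(n, 2))

        def simulation_error(roughness, demand_factor):
            # Toy stand-in for the hydraulic model misfit against SCADA data.
            return ((roughness - 110.0) ** 2 / 100.0
                    + 0.1 * rng.normal(size=roughness.shape) ** 2)

        err = simulation_error(theta[:, 0], theta[:, 1])
        behavioural = err < np.quantile(err, 0.2)    # best 20% of the runs

        for j, name in enumerate(["roughness", "demand_factor"]):
            stat, p = ks_2samp(theta[behavioural, j], theta[~behavioural, j])
            print(f"{name}: KS={stat:.2f}, p={p:.1e}")   # large KS => sensitive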

  1. Sampling and Data Analysis for Environmental Microbiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, Christopher J.

    2001-06-01

    A brief review of the literature indicates the importance of statistical analysis in applied and environmental microbiology. Sampling designs are particularly important for successful studies, and it is highly recommended that researchers review their sampling design before heading to the laboratory or the field. Most statisticians have numerous stories of scientists who approached them after their study was complete, only to be told that the data they gathered could not be used to test the hypothesis they wanted to address. Once the data are gathered, a large and complex body of statistical techniques is available for analysis of the data. Those methods include both numerical and graphical techniques for exploratory characterization of the data. Hypothesis testing and analysis of variance (ANOVA) are techniques that can be used to compare the mean and variance of two or more groups of samples. Regression can be used to examine the relationships between sets of variables and is often used to examine the dependence of microbiological populations on microbiological parameters. Multivariate statistics provides several methods that can be used for interpretation of datasets with a large number of variables and to partition samples into similar groups, a task that is very common in taxonomy but also has applications in other fields of microbiology. Geostatistics and other techniques have been used to examine the spatial distribution of microorganisms. The objectives of this chapter are to provide a brief survey of some of the statistical techniques that can be used for sample design and data analysis of microbiological data in environmental studies, and to provide some examples of their use from the literature.

  2. Adaptive sampling in behavioral surveys.

    PubMed

    Thompson, S K

    1997-01-01

    Studies of populations such as drug users encounter difficulties because the members of the populations are rare, hidden, or hard to reach. Conventionally designed large-scale surveys detect relatively few members of the populations so that estimates of population characteristics have high uncertainty. Ethnographic studies, on the other hand, reach suitable numbers of individuals only through the use of link-tracing, chain referral, or snowball sampling procedures that often leave the investigators unable to make inferences from their sample to the hidden population as a whole. In adaptive sampling, the procedure for selecting people or other units to be in the sample depends on variables of interest observed during the survey, so the design adapts to the population as encountered. For example, when self-reported drug use is found among members of the sample, sampling effort may be increased in nearby areas. Types of adaptive sampling designs include ordinary sequential sampling, adaptive allocation in stratified sampling, adaptive cluster sampling, and optimal model-based designs. Graph sampling refers to situations with nodes (for example, people) connected by edges (such as social links or geographic proximity). An initial sample of nodes or edges is selected and edges are subsequently followed to bring other nodes into the sample. Graph sampling designs include network sampling, snowball sampling, link-tracing, chain referral, and adaptive cluster sampling. A graph sampling design is adaptive if the decision to include linked nodes depends on variables of interest observed on nodes already in the sample. Adjustment methods for nonsampling errors such as imperfect detection of drug users in the sample apply to adaptive as well as conventional designs.
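
    A minimal sketch of adaptive cluster sampling on a grid, assuming a toy clustered population: an initial simple random sample of cells is drawn, and whenever a cell satisfies the condition, its four neighbours are added. The design-unbiased estimators (e.g., Horvitz-Thompson-type corrections) that make such samples usable for inference are omitted here.

        import numpy as np
        from collections import deque

        rng = np.random.default_rng(10)
        grid = np.zeros((30, 30), dtype=int)
        for cx, cy in rng.integers(0, 30, size=(5, 2)):   # 5 population clusters
            for _ in range(40):
                x, y = np.clip([cx, cy] + rng.integers(-2, 3, 2), 0, 29)
                grid[x, y] += 1

        def adaptive_cluster_sample(grid, n_initial=20, condition=1):
            rows, cols = grid.shape
            seeds = rng.choice(rows * cols, n_initial, replace=False)
            sampled = set()
            queue = deque((s // cols, s % cols) for s in seeds)
            while queue:
                cell = queue.popleft()
                if cell in sampled:
                    continue
                sampled.add(cell)
                if grid[cell] >= condition:               # condition met: add neighbours
                    x, y = cell
                    for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                        if 0 <= nx < rows and 0 <= ny < cols:
                            queue.append((nx, ny))
            return sampled

        cells = adaptive_cluster_sample(grid)
        print(f"{len(cells)} cells sampled, "
              f"{sum(grid[c] for c in cells)} individuals found")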

  3. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
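
    A hedged sketch of the stochastic calibration loop described above, with a toy forward model standing in for CLM4; the uniform prior bounds, observation noise, and proposal scale are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(7)

        def forward(theta):
            # Toy stand-in for CLM4: predicts a flux-like time series.
            a, b = theta
            t = np.linspace(0, 1, 50)
            return a * np.sin(2 * np.pi * t) + b

        obs = forward((2.0, 0.5)) + rng.normal(0, 0.3, 50)   # synthetic observations

        def log_post(theta):
            if not (0 < theta[0] < 10 and -5 < theta[1] < 5):   # uniform prior bounds
                return -np.inf
            resid = obs - forward(theta)
            return -0.5 * np.sum(resid**2) / 0.3**2

        theta = np.array([5.0, 0.0])
        lp = log_post(theta)
        chain = []
        for _ in range(20000):
            prop = theta + rng.normal(0, 0.05, 2)            # random-walk proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:         # Metropolis acceptance
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        post = np.array(chain[5000:])                        # discard burn-in
        print("posterior means:", post.mean(axis=0))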

  4. Advances in parameter estimation techniques applied to flexible structures

    NASA Technical Reports Server (NTRS)

    Maben, Egbert; Zimmerman, David C.

    1994-01-01

    In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes are contrasted using the NASA Mini-Mast as the focus structure.

  5. Chymotrypsin effects on the determination of sperm parameters and seminal biochemistry markers.

    PubMed

    Chen, Fang; Lu, Jin-Chun; Xu, Hui-Ru; Huang, Yu-Feng; Lu, Nian-Qing

    2006-01-01

    Few reports are documented on the effects of chymotrypsin treatment on the determination of sperm parameters and seminal biochemistry markers. Sperm parameters of 63 liquefied and 27 non-liquefied samples, untreated or treated with chymotrypsin, were evaluated using computer-assisted semen analysis. In addition, biochemistry markers such as gamma-glutamyltranspeptidase, alpha-glucosidase and fructose in 50 liquefied and 39 non-liquefied samples, untreated or treated with chymotrypsin, were determined. Treatment with chymotrypsin had no effect on sperm concentration, motility, motility a and b, straightness, curvilinear velocity, straight line velocity, average path velocity or beat cross frequency in either liquefied or non-liquefied semen. However, linearity (p=0.025) decreased and the amplitude of lateral head displacement (p=0.029) increased significantly in non-liquefied semen after treatment with chymotrypsin. The levels of gamma-glutamyltranspeptidase, alpha-glucosidase and fructose in seminal plasma were unaffected by chymotrypsin, regardless of liquefaction status. Chymotrypsin had no effect on the determination of sperm parameters and biochemistry markers, and could be used to treat non-liquefied samples before semen analysis in the andrology laboratory.

  6. Learning Maximal Entropy Models from finite size datasets: a fast Data-Driven algorithm allows to sample from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    A maximal entropy model provides the least constrained probability distribution that reproduces experimental averages of a set of observables. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameter space. We then provide a way of rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameter dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" Data-Driven algorithm that is fast and, by sampling from the parameter posterior, avoids both under- and over-fitting along all the directions of the parameter space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a Grant from the Human Brain Project (HBP CLAP).

  7. Using Multistate Reweighting to Rapidly and Efficiently Explore Molecular Simulation Parameters Space for Nonbonded Interactions.

    PubMed

    Paliwal, Himanshu; Shirts, Michael R

    2013-11-12

    Multistate reweighting methods such as the multistate Bennett acceptance ratio (MBAR) can predict free energies and expectation values of thermodynamic observables at poorly sampled or unsampled thermodynamic states using simulations performed at only a few sampled states combined with single point energy reevaluations of these samples at the unsampled states. In this study, we demonstrate the power of this general reweighting formalism by exploring the effect of simulation parameters controlling Coulomb and Lennard-Jones cutoffs on free energy calculations and other observables. Using multistate reweighting, we can quickly identify, with very high sensitivity, the computationally least expensive nonbonded parameters required to obtain a specified accuracy in observables compared to the answer obtained using an expensive "gold standard" set of parameters. We specifically examine free energy estimates of three molecular transformations in a benchmark molecular set as well as the enthalpy of vaporization of TIP3P. The results demonstrate the power of this multistate reweighting approach for measuring changes in free energy differences or other estimators with respect to simulation or model parameters with very high precision and/or very low computational effort. The results also help to identify which simulation parameters affect free energy calculations and provide guidance to determine which simulation parameters are both appropriate and computationally efficient in general.
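
    A hedged single-state sketch of the reweighting idea: samples drawn at one thermodynamic state are importance-weighted to estimate an observable at an unsampled state. MBAR generalizes this to many sampled states simultaneously; the harmonic toy potentials here are assumptions chosen so the answer is known analytically.

        import numpy as np

        rng = np.random.default_rng(8)
        beta = 1.0

        # Samples x drawn at a sampled state with potential U0(x) = 0.5 * x^2;
        # at beta = 1 the Boltzmann distribution is exactly N(0, 1).
        x = rng.normal(0.0, 1.0, 100_000)
        u0 = 0.5 * x**2

        def reweighted_mean(observable, u_target):
            # <A>_target = sum(A w) / sum(w), with w = exp(-beta (U_target - U0)).
            logw = -beta * (u_target - u0)
            logw -= logw.max()                 # shift for numerical stability
            w = np.exp(logw)
            return np.sum(observable * w) / np.sum(w)

        # Unsampled target state: stiffer spring, U1(x) = 0.5 * k * x^2, k = 2.
        u1 = 0.5 * 2.0 * x**2
        print("reweighted <x^2>:", reweighted_mean(x**2, u1))  # analytic value: 0.5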

  8. HICOSMO: cosmology with a complete sample of galaxy clusters - II. Cosmological results

    NASA Astrophysics Data System (ADS)

    Schellenberger, G.; Reiprich, T. H.

    2017-10-01

    The X-ray bright, hot gas in the potential well of a galaxy cluster enables systematic X-ray studies of samples of galaxy clusters to constrain cosmological parameters. HIFLUGCS consists of the 64 X-ray brightest galaxy clusters in the Universe, building up a local sample. Here, we utilize this sample to determine, for the first time, individual hydrostatic mass estimates for all the clusters of the sample and, by making use of the completeness of the sample, we quantify constraints on the two interesting cosmological parameters, Ωm and σ8. We apply our total hydrostatic and gas mass estimates from the X-ray analysis to a Bayesian cosmological likelihood analysis and leave several parameters free to be constrained. We find Ωm = 0.30 ± 0.01 and σ8 = 0.79 ± 0.03 (statistical uncertainties, 68 per cent credibility level) using our default analysis strategy combining both a mass function analysis and the gas mass fraction results. The main sources of bias that we correct here are (1) the influence of galaxy groups (incompleteness in parent samples and differing behaviour of the Lx-M relation), (2) the hydrostatic mass bias, (3) the extrapolation of the total mass (comparing various methods), (4) the theoretical halo mass function and (5) other physical effects (non-negligible neutrino mass). We find that galaxy groups introduce a strong bias, since their number density seems to be overpredicted by the halo mass function. On the other hand, incorporating baryonic effects does not result in a significant change in the constraints. The total (uncorrected) systematic uncertainties (∼20 per cent) clearly dominate the statistical uncertainties on cosmological parameters for our sample.

  9. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon

    2013-01-01

    We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Markov Chain Monte Carlo (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum, and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors, and assuming further, conservatively, that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on Mν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime, as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately, from σ(Mν) = 18 meV to 14 meV, when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.

  10. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Ray, Jaideep; Ebeida, Mohamed Salah

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains, and the chain ensemble can be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently hones in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space and (2) ensure robustness to silent errors, which may be unavoidable in the extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.

  11. Estimation of distributional parameters for censored trace-level water-quality data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1984-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring-level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.
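
    The winning estimator is simple enough to sketch. The following is a minimal, uncertified implementation of log-probability regression for left-censored data; the Weibull plotting positions i/(n+1) and the imputation details are assumptions, as the record does not pin them down here.

```python
import numpy as np
from scipy import stats

def log_probability_regression(data, detection_limit):
    """Fit a line between log(uncensored values) and their normal
    z-scores, impute censored observations from the zero-to-censoring
    portion of the fitted lognormal, then compute summary statistics."""
    data = np.sort(np.asarray(data, dtype=float))
    n = data.size
    censored = data < detection_limit
    pp = np.arange(1, n + 1) / (n + 1)        # Weibull plotting positions
    z = stats.norm.ppf(pp)
    slope, intercept, *_ = stats.linregress(z[~censored],
                                            np.log(data[~censored]))
    imputed = np.exp(intercept + slope * z[censored])
    filled = np.concatenate([imputed, data[~censored]])
    return filled.mean(), filled.std(ddof=1)

# toy usage: a lognormal sample censored at 1.0 (values below the
# detection limit are reported as a placeholder)
x = np.random.default_rng(2).lognormal(0.0, 1.0, size=30)
mean_est, sd_est = log_probability_regression(np.where(x < 1.0, 0.5, x), 1.0)
```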

  12. Quantitative spectroscopy of Galactic BA-type supergiants. I. Atmospheric parameters

    NASA Astrophysics Data System (ADS)

    Firnstein, M.; Przybilla, N.

    2012-07-01

    Context. BA-type supergiants show a high potential as versatile indicators for modern astronomy. This paper constitutes the first in a series that aims at a systematic spectroscopic study of Galactic BA-type supergiants. Various problems will be addressed, including in particular observational constraints on the evolution of massive stars and a determination of abundance gradients in the Milky Way. Aims: The focus here is on the determination of accurate and precise atmospheric parameters for a sample of Galactic BA-type supergiants as a prerequisite for all further analysis. Some first applications include a recalibration of functional relationships between spectral type, intrinsic colours, bolometric corrections and effective temperature, and an exploration of the reddening-free Johnson Q and Strömgren [c1] and β-indices as photometric indicators for effective temperatures and gravities of BA-type supergiants. Methods: An extensive grid of theoretical spectra is computed based on a hybrid non-LTE approach, covering the relevant parameter space in effective temperature, surface gravity, helium abundance, microturbulence and elemental abundances. The atmospheric parameters are derived spectroscopically by line-profile fits of our theoretical models to high-resolution and high-S/N spectra obtained at various observatories. Ionization equilibria of multiple metals and the Stark-broadened hydrogen and the neutral helium lines constitute our primary indicators for the parameter determination, supplemented by (spectro-)photometry from the UV to the near-IR. Results: We obtain accurate atmospheric parameters for 35 sample supergiants from a homogeneous analysis. Data on effective temperatures, surface gravities, helium abundances, microturbulence, macroturbulence and rotational velocities are presented. The interstellar reddening and the ratio of total-to-selective extinction towards the stars are determined. Our empirical spectral-type-Teff scale is steeper than
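
    The core of the parameter determination, stripped of the non-LTE model physics, is a comparison of the observed profile against a precomputed grid. The toy sketch below shows only that minimum-chi-squared grid search; the grid spectra, parameter values, and noise level are invented, and a real analysis interpolates within the grid and fits several ionization equilibria simultaneously.

```python
import numpy as np

def best_fit_parameters(obs_flux, obs_err, grid_fluxes, grid_params):
    """Return the (Teff, log g) of the grid model with minimum chi^2
    against the observed, normalized spectrum."""
    chi2 = np.array([np.sum(((obs_flux - model) / obs_err) ** 2)
                     for model in grid_fluxes])
    return grid_params[np.argmin(chi2)], chi2.min()

# toy usage: three fake grid spectra (Gaussian absorption lines of
# different widths) over 100 wavelength points
rng = np.random.default_rng(3)
grid = [1 - 0.5 * np.exp(-0.5 * ((np.arange(100) - 50) / w) ** 2)
        for w in (3.0, 5.0, 8.0)]
params = [(9000, 1.5), (9500, 2.0), (10000, 2.5)]   # (Teff [K], log g)
obs = grid[1] + 0.01 * rng.standard_normal(100)
print(best_fit_parameters(obs, 0.01, grid, params))
```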

  13. Rain sampling device

    DOEpatents

    Nelson, D.A.; Tomich, S.D.; Glover, D.W.; Allen, E.V.; Hales, J.M.; Dana, M.T.

    1991-05-14

    The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user, which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of the precipitation from the chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device. 11 figures.

  14. Scientific guidelines for preservation of samples collected from Mars

    NASA Technical Reports Server (NTRS)

    Gooding, James L. (Editor)

    1990-01-01

    The maximum scientific value of Martian geologic and atmospheric samples is retained when the samples are preserved in the conditions that applied prior to their collection. Any sample degradation equates to loss of information. Based on a detailed review of the pertinent scientific literature, and advice from experts in planetary sample analysis, numerical values are recommended for key parameters in the environmental control of collected samples with respect to material contamination, temperature, head-space gas pressure, ionizing radiation, magnetic fields, and acceleration/shock. The parametric values recommended for the most sensitive geologic samples should also be adequate to preserve any biogenic compounds or exobiological relics.

  15. Numerical simulations of regolith sampling processes

    NASA Astrophysics Data System (ADS)

    Schäfer, Christoph M.; Scherrer, Samuel; Buchwald, Robert; Maindl, Thomas I.; Speith, Roland; Kley, Wilhelm

    2017-07-01

    We present recent improvements in the simulation of regolith sampling processes in microgravity using the numerical particle method smoothed particle hydrodynamics (SPH). We use an elastic-plastic soil constitutive model for large-deformation and failure flows to describe the dynamical behaviour of regolith. In the context of projected small-body (asteroid or small moon) sample-return missions, we investigate the efficiency and feasibility of a particular material sampling method: brushes sweep material from the asteroid's surface into a collecting tray. We analyze the influence of different material parameters of regolith, such as cohesion and angle of internal friction, on the sampling rate. Furthermore, we study the sampling process in two environments by varying the surface gravity (Earth's and Phobos'), and we apply different rotation rates for the brushes. We find good agreement of our sampling simulations on Earth with experiments and provide estimates of the influence of the material properties on the collecting rate.
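
    As a pointer to what the SPH machinery involves, here is a minimal density-summation sketch with the standard 3D cubic spline kernel. A production regolith code adds a neighbour search, the elastic-plastic soil constitutive model, and microgravity boundary conditions; the O(N²) loop below is for clarity only.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard cubic spline SPH kernel in 3D, with compact support 2h."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """Density at each particle by kernel summation over all particles,
    the basic building block of an SPH simulation."""
    n = len(positions)
    rho = np.zeros(n)
    for i in range(n):
        r = np.linalg.norm(positions - positions[i], axis=1)
        rho[i] = np.sum(masses * cubic_spline_kernel(r, h))
    return rho

# toy usage: 100 equal-mass particles in a unit cube
pos = np.random.default_rng(4).random((100, 3))
print(sph_density(pos, np.full(100, 1e-3), h=0.2)[:5])
```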

  16. Subjective ranking of concert halls substantiated through orthogonal objective parameters.

    PubMed

    Cerdá, Salvador; Giménez, Alicia; Cibrián, Rosa; Girón, Sara; Zamarreño, Teófilo

    2015-02-01

    This paper studies the global subjective assessment of concert halls, obtained from the mean survey responses of audiences at live concerts in Spanish auditoriums, in terms of the mean values of three orthogonal objective parameters (Tmid, IACCE3, and LEV), expressed in just noticeable differences (JNDs) relative to the best-valued hall. Results show that a linear combination of the relative variations of the orthogonal parameters can largely explain the overall perceived quality of the sample. However, the mean values of certain orthogonal parameters are not representative, which shows that an alternative approach to the problem is necessary. Several possibilities are proposed.
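
    The central claim, that a linear combination of JND-expressed deviations tracks perceived quality, amounts to an ordinary least-squares fit. The sketch below shows that fit on invented per-hall numbers; the data and the choice of an intercept term are illustrative assumptions, not the study's.

```python
import numpy as np

# Hypothetical data: per-hall deviations from the best-valued hall, in
# JNDs, for the three orthogonal parameters (Tmid, IACC_E3, LEV), plus
# the mean subjective score from the audience surveys. Values invented.
X = np.array([[0.0, 0.0, 0.0],
              [1.2, 0.5, 0.8],
              [2.5, 1.0, 1.5],
              [3.1, 2.2, 0.9]])
y = np.array([9.1, 8.4, 7.2, 6.5])

# Fit score ~ a + b1*dTmid + b2*dIACC + b3*dLEV by least squares.
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # intercept and the three weights
```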

  17. Assessing the Impact of Model Parameter Uncertainty in Simulating Grass Biomass Using a Hybrid Carbon Allocation Strategy

    NASA Astrophysics Data System (ADS)

    Reyes, J. J.; Adam, J. C.; Tague, C.

    2016-12-01

    Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and which processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less-biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based allocation. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in
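
    A Latin hypercube draw of the kind used here is a one-liner with scipy's quasi-Monte Carlo module. In the sketch below, the four parameter names and their bounds are invented stand-ins for model inputs, not values from the study.

```python
from scipy.stats import qmc

# Hypothetical parameters and bounds for illustration only.
names = ["max_photosynthesis", "allocation_ratio",
         "soil_drainage_rate", "nitrogen_uptake"]
l_bounds = [0.1, 0.1, 0.01, 0.5]
u_bounds = [1.0, 0.9, 0.50, 2.0]

sampler = qmc.LatinHypercube(d=len(names), seed=0)
unit = sampler.random(n=100)                      # 100 sets in [0, 1)^4
param_sets = qmc.scale(unit, l_bounds, u_bounds)  # rescale to the bounds
print(dict(zip(names, param_sets[0])))            # first parameter set
```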

  18. Reference intervals for 24 laboratory parameters determined in 24-hour urine collections.

    PubMed

    Curcio, Raffaele; Stettler, Helen; Suter, Paolo M; Aksözen, Jasmin Barman; Saleh, Lanja; Spanaus, Katharina; Bochud, Murielle; Minder, Elisabeth; von Eckardstein, Arnold

    2016-01-01

    Reference intervals for many laboratory parameters determined in 24-h urine collections are either not publicly available, or are based on small numbers, are not sex-specific, or are not derived from a representative sample. Osmolality and concentrations or enzymatic activities of sodium, potassium, chloride, glucose, creatinine, citrate, cortisol, pancreatic α-amylase, total protein, albumin, transferrin, immunoglobulin G, α1-microglobulin, α2-macroglobulin, as well as porphyrins and their precursors (δ-aminolevulinic acid and porphobilinogen) were determined in 241 24-h urine samples of a population-based cohort of asymptomatic adults (121 men and 120 women). For 16 of these 24 parameters, creatinine-normalized ratios were calculated based on 24-h urine creatinine. The reference intervals for these parameters were calculated according to the CLSI C28-A3 statistical guidelines. In contrast to most published reference intervals, which do not stratify by sex, the reference intervals of 12 of the 24 laboratory parameters in 24-h urine collections, and of eight of the 16 parameters as creatinine-normalized ratios, differed significantly between men and women. For six parameters calculated as 24-h urine excretion and four parameters calculated as creatinine-normalized ratios, no reference intervals had been published before. For some parameters we found significant and relevant deviations from previously reported reference intervals, most notably for 24-h urine cortisol in women. Ten 24-h urine parameters showed weak or moderate sex-specific correlations with age. By applying up-to-date analytical methods and clinical chemistry analyzers to 24-h urine collections from a large population-based cohort, we provide the most comprehensive set to date of sex-specific reference intervals calculated according to CLSI guidelines for parameters determined in 24-h urine collections.
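
    For orientation, the nonparametric reference interval recommended by CLSI C28-A3 is essentially the central 95% of the sorted reference sample (with n ≥ 120 per stratum). The sketch below computes it for two invented sex-specific samples; the outlier screening and confidence intervals around each limit that the guideline also prescribes are omitted.

```python
import numpy as np

def reference_interval(values, low=2.5, high=97.5):
    """Nonparametric reference interval: the central 95% of the sorted
    reference sample."""
    values = np.sort(np.asarray(values, dtype=float))
    return np.percentile(values, [low, high])

# toy usage: sex-stratified intervals on made-up 24-h excretion data
rng = np.random.default_rng(5)
men = rng.lognormal(3.0, 0.4, 121)     # arbitrary units, invented
women = rng.lognormal(2.8, 0.4, 120)
print(reference_interval(men), reference_interval(women))
```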

  19. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    PubMed

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension-reduction and classification algorithm based on the nonlinear kernel trick, which can be used to preprocess high-dimensional and complex biological data before classification tasks such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter remains a challenging problem. This paper therefore reviews the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the principle that, for a suitable kernel parameter, the difference between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than without it. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method produces an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones but also reduce the computational time and thus improve efficiency.
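
    The selection procedure reduces to a grid search over candidate kernel widths against a separability criterion. In the sketch below, the Gaussian kernel matrix is standard, but the simple between-class versus within-class similarity score is a stand-in criterion; the paper's actual criterion contrasts reconstruction errors of edge and interior normal samples.

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma):
    """Pairwise Gaussian (RBF) kernel: K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-np.maximum(d2, 0) / (2 * sigma**2))

def select_sigma(X, y, sigmas, criterion):
    """Grid search over candidate widths, keeping the sigma that
    maximizes criterion(K, y)."""
    scores = [criterion(gaussian_kernel_matrix(X, s), y) for s in sigmas]
    return sigmas[int(np.argmax(scores))]

def class_separation(K, y):
    """Stand-in criterion: mean within-class minus mean between-class
    kernel similarity (not the paper's reconstruction-error criterion)."""
    same = y[:, None] == y[None, :]
    return K[same].mean() - K[~same].mean()

# toy usage: two Gaussian blobs in 5 dimensions
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
y = np.repeat([0, 1], 30)
print(select_sigma(X, y, [0.1, 0.5, 1.0, 2.0, 5.0], class_separation))
```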

  20. Effect of Preload Alterations on Left Ventricular Systolic Parameters Including Speckle-Tracking Echocardiography Radial Strain During General Anesthesia.

    PubMed

    Weber, Ulrike; Base, Eva; Ristl, Robin; Mora, Bruno

    2015-08-01

    Frequently used parameters for the evaluation of left ventricular systolic function are load-sensitive. However, the impact of preload alterations on speckle-tracking echocardiographic parameters during anesthesia has not been validated. Therefore, two-dimensional (2D) speckle-tracking echocardiography radial strain (RS) was assessed during general anesthesia under 3 different simulated preload conditions. Single-center prospective observational study. University hospital. Thirty-three patients with normal left ventricular systolic function undergoing major surgery. Transgastric views of the midpapillary level of the left ventricle were acquired at 3 different positions. Fractional shortening (FS), fractional area change (FAC), and 2D speckle-tracking echocardiography RS were analyzed in the transgastric midpapillary view. Considerable correlation, above 0.5, was found for FAC and FS in the zero and Trendelenburg positions (r = 0.629, r = 0.587) and for RS and FAC in the anti-Trendelenburg position (r = 0.518). In the repeated-measures analysis, significant differences among the values measured at the 3 positions were found for FAC and FS. For FAC, there were differences of up to 2.8 percentage points between the anti-Trendelenburg position and the other 2 positions. For FS, only the difference between the zero and anti-Trendelenburg positions was significant, with an observed change of 1.66. Two-dimensional RS did not differ significantly among the positions, with observed changes below 1 percentage point. Alterations in preload did not result in clinically relevant changes of RS, FS, or FAC. The observed changes for RS were smallest; however, the variation of RS was larger than that of FS or FAC.