Sample records for parameters including sample

  1. Hybrid Gibbs Sampling and MCMC for CMB Analysis at Small Angular Scales

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Eriksen, H. K.; Wandelt, B. D.; Gorski, K. M.; Huey, G.; O'Dwyer, I. J.; Dickinson, C.; Banday, A. J.; Lawrence, C. R.

    2008-01-01

    A) Gibbs sampling has now been validated as an efficient, statistically exact, and practically useful method for the "low-L" regime (as demonstrated on WMAP temperature and polarization data). B) We are extending Gibbs sampling to directly propagate uncertainties in both foreground and instrument models into the total uncertainty in cosmological parameters over the entire range of angular scales relevant for Planck. C) This is made possible by the inclusion of foreground model parameters in the Gibbs sampling and by hybrid MCMC and Gibbs sampling for the low signal-to-noise (high-L) regime. D) Future items to be included in the Bayesian framework are: 1) integration with the hybrid likelihood (or posterior) code for cosmological parameters; 2) inclusion of other uncertainties in instrumental systematics (e.g., beam uncertainties, noise estimation, calibration errors).
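
    The core of the approach above is Gibbs sampling: drawing each block of unknowns (sky signal, foreground parameters, power spectrum) in turn from its conditional distribution. The sketch below is a minimal, generic two-block Gibbs sampler for a correlated Gaussian target; it only illustrates the alternating conditional draws and is not the CMB analysis code.

    import numpy as np

    # Minimal two-block Gibbs sampler for a bivariate standard normal with
    # correlation rho: draw x | y, then y | x, repeatedly.
    rho = 0.8
    rng = np.random.default_rng(0)
    x = y = 0.0
    samples = np.empty((5000, 2))
    for i in range(len(samples)):
        x = rng.normal(rho * y, np.sqrt(1.0 - rho**2))   # conditional of x given y
        y = rng.normal(rho * x, np.sqrt(1.0 - rho**2))   # conditional of y given x
        samples[i] = (x, y)

    print(np.corrcoef(samples[1000:].T))   # off-diagonal should approach rho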

  2. MontePython 3: Parameter inference code for cosmology

    NASA Astrophysics Data System (ADS)

    Brinckmann, Thejs; Lesgourgues, Julien; Audren, Benjamin; Benabed, Karim; Prunet, Simon

    2018-05-01

    MontePython 3 provides numerous ways to explore parameter space using Monte Carlo Markov Chain (MCMC) sampling, including Metropolis-Hastings, Nested Sampling, Cosmo Hammer, and a Fisher sampling method. This improved version of the Monte Python (ascl:1307.002) parameter inference code for cosmology offers new ingredients that improve the performance of Metropolis-Hastings sampling, speeding up convergence and offering significant time savings in difficult runs. Additional likelihoods and plotting options are available, as are post-processing algorithms such as importance sampling and adding derived parameters.
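
    The default explorer mentioned above is Metropolis-Hastings. Below is a minimal, generic Metropolis-Hastings loop with a stand-in Gaussian posterior; it illustrates the accept/reject rule only and does not use MontePython's actual API or a real cosmological likelihood.

    import numpy as np

    def log_posterior(theta):
        # Stand-in target density (unit Gaussian); a real run would evaluate
        # a cosmological likelihood times the prior here.
        return -0.5 * np.sum(theta**2)

    rng = np.random.default_rng(1)
    theta = np.zeros(2)            # current point in parameter space
    step = 0.5                     # proposal width
    chain = []
    for _ in range(10000):
        proposal = theta + step * rng.normal(size=theta.size)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
            theta = proposal
        chain.append(theta)
    chain = np.array(chain)
    print(chain.mean(axis=0), chain.std(axis=0))   # should approach 0 and 1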

  3. Compressive properties of passive skeletal muscle-the impact of precise sample geometry on parameter identification in inverse finite element analysis.

    PubMed

    Böl, Markus; Kruse, Roland; Ehret, Alexander E; Leichsenring, Kay; Siebert, Tobias

    2012-10-11

    Due to the increasing developments in the modelling of biological materials, adequate parameter identification techniques are urgently needed. The majority of recent contributions on passive muscle tissue identify material parameters solely by comparing characteristic compressive stress-stretch curves from experiments and simulation. In doing so, different assumptions concerning, e.g., the sample geometry or the degree of friction between the sample and the platens are required. In most cases these assumptions are grossly simplified, leading to incorrect material parameters. In order to overcome such oversimplifications, a more reliable parameter identification technique is presented in this paper: we use the inverse finite element method (iFEM) to identify the optimal parameter set by comparison of the compressive stress-stretch response, including the realistic geometries of the samples and the presence of friction at the compressed sample faces. Moreover, we judge the quality of the parameter identification by comparing the simulated and experimental deformed shapes of the samples. Besides this, the study includes a comprehensive set of compressive stress-stretch data on rabbit soleus muscle and the determination of static friction coefficients between muscle and PTFE. Copyright © 2012 Elsevier Ltd. All rights reserved.
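
    The inverse identification described above amounts to wrapping a forward simulation in a least-squares loop over the material parameters. The sketch below assumes a hypothetical forward function (a one-term Ogden-type uniaxial stress formula standing in for the finite element run) and fits its two parameters to synthetic stress-stretch data; it is illustrative only, not the authors' iFEM pipeline.

    import numpy as np
    from scipy.optimize import least_squares

    def simulate_compression(params, stretches):
        # Hypothetical stand-in for the forward FE simulation: nominal stress
        # of an incompressible one-term Ogden material under uniaxial loading.
        mu, alpha = params
        return (2.0 * mu / alpha) * (stretches**(alpha - 1.0)
                                     - stretches**(-0.5 * alpha - 1.0))

    stretches = np.linspace(0.7, 1.0, 15)                       # compressive stretch levels
    observed = simulate_compression([5.0, -12.0], stretches)    # synthetic "experiment"

    def residuals(params):
        return simulate_compression(params, stretches) - observed

    fit = least_squares(residuals, x0=[1.0, -5.0])
    print(fit.x)   # recovered material parameters (mu, alpha)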

  4. Apparatus and method for radioactive waste screening

    DOEpatents

    Akers, Douglas W.; Roybal, Lyle G.; Salomon, Hopi; Williams, Charles Leroy

    2012-09-04

    An apparatus and method relating to screening radioactive waste are disclosed for ensuring that at least one calculated parameter for the measurement data of a sample falls within a range between an upper limit and a lower limit prior to the sample being packaged for disposal. The apparatus includes a radiation detector configured for detecting radioactivity and radionuclide content of the sample of radioactive waste and generating measurement data in response thereto, and a collimator including at least one aperture to direct a field of view of the radiation detector. The method includes measuring a radioactive content of a sample, and calculating one or more parameters from the radioactive content of the sample.

  5. Contained radiological analytical chemistry module

    DOEpatents

    Barney, David M.

    1989-01-01

    A system which provides analytical determination of a plurality of water chemistry parameters with respect to water samples subject to radiological contamination. The system includes a water sample analyzer disposed within a containment and comprising a sampling section for providing predetermined volumes of samples for analysis; a flow control section for controlling the flow through the system; and a gas analysis section for analyzing samples provided by the sampling system. The sampling section includes a controllable multiple-port valve for, in one position, metering out a sample of a predetermined volume and for, in a second position, delivering the material sample for analysis. The flow control section includes a regulator valve for reducing the pressure in a portion of the system to provide a low-pressure region, and measurement devices located in the low-pressure region for measuring sample parameters, such as pH and conductivity, at low pressure. The gas analysis section, which is of independent utility, provides for isolating a small water sample and extracting the dissolved gases therefrom into a small expansion volume, wherein the gas pressure and thermal conductivity of the extracted gas are measured.

  6. Contained radiological analytical chemistry module

    DOEpatents

    Barney, David M.

    1990-01-01

    A system which provides analytical determination of a plurality of water chemistry parameters with respect to water samples subject to radiological contamination. The system includes a water sample analyzer disposed within a containment and comprising a sampling section for providing predetermined volumes of samples for analysis; a flow control section for controlling the flow through the system; and a gas analysis section for analyzing samples provided by the sampling system. The sampling section includes a controllable multiple-port valve for, in one position, metering out a sample of a predetermined volume and for, in a second position, delivering the material sample for analysis. The flow control section includes a regulator valve for reducing the pressure in a portion of the system to provide a low-pressure region, and measurement devices located in the low-pressure region for measuring sample parameters, such as pH and conductivity, at low pressure. The gas analysis section, which is of independent utility, provides for isolating a small water sample and extracting the dissolved gases therefrom into a small expansion volume, wherein the gas pressure and thermal conductivity of the extracted gas are measured.

  7. The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter

    2017-03-01

    Given sparse or low-quality radial velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and Markov chain Monte Carlo (MCMC) posterior sampling over the orbital parameters. Here we create a custom Monte Carlo sampler for sparse or noisy radial velocity measurements of two-body systems that can produce posterior samples for orbital parameters even when the likelihood function is poorly behaved. The six standard orbital parameters for a binary system can be split into four nonlinear parameters (period, eccentricity, argument of pericenter, phase) and two linear parameters (velocity amplitude, barycenter velocity). We capitalize on this by building a sampling method in which we densely sample the prior probability density function (pdf) in the nonlinear parameters and perform rejection sampling using a likelihood function marginalized over the linear parameters. With sparse or uninformative data, the sampling obtained by this rejection sampling is generally multimodal and dense. With informative data, the sampling becomes effectively unimodal but too sparse: in these cases we follow the rejection sampling with standard MCMC. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still informative and can be used in hierarchical (population) modeling. We give some examples that show how the posterior pdf depends sensitively on the number and time coverage of the observations and their uncertainties.
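
    The split into nonlinear and linear parameters described above is what makes the rejection-sampling step cheap: for any trial value of the nonlinear parameters, the likelihood can be marginalized over the linear amplitudes analytically. The sketch below does this for a simplified circular-orbit case (period only, with cosine/sine amplitudes and a velocity offset as the linear parameters); it is a toy illustration of the idea, not The Joker's implementation.

    import numpy as np

    rng = np.random.default_rng(2)

    # Sparse synthetic radial-velocity data from a circular orbit (P = 23 d).
    t = np.array([0.0, 11.3, 40.7, 95.2])
    sigma = 0.5
    v_obs = 12.0 * np.cos(2 * np.pi * t / 23.0) + 5.0 + rng.normal(0, sigma, t.size)

    def log_marginal_likelihood(P):
        # Marginalize analytically over the linear parameters (cos/sin
        # amplitudes and barycentric offset) under a broad Gaussian prior.
        M = np.column_stack([np.cos(2 * np.pi * t / P),
                             np.sin(2 * np.pi * t / P),
                             np.ones_like(t)])
        Lam = np.diag([100.0**2] * 3)                    # prior covariance of linear terms
        C = M @ Lam @ M.T + sigma**2 * np.eye(t.size)    # marginal data covariance
        _, logdet = np.linalg.slogdet(2 * np.pi * C)
        return -0.5 * (v_obs @ np.linalg.solve(C, v_obs) + logdet)

    # Densely sample the period prior, then rejection-sample against the
    # marginalized likelihood.
    P_prior = np.exp(rng.uniform(np.log(1.0), np.log(300.0), 50000))
    logL = np.array([log_marginal_likelihood(P) for P in P_prior])
    keep = np.log(rng.uniform(size=P_prior.size)) < logL - logL.max()
    print(P_prior[keep].size, "posterior period samples; median:",
          round(float(np.median(P_prior[keep])), 2))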

  8. Proceedings of the Antenna Applications Symposium (32nd) Held in Monticello, Illinois on 16-18 September 2008. Volume 1

    DTIC Science & Technology

    2008-12-20

    Equation 6 for the sample likelihood function gives a “concentrated likelihood function,” which depends on correlation parameters θh and ph. This...step one and estimates correlation parameters using the new data set including all previous sample points and the new data point x. The algorithm...

  9. Crystal-Chemical Analysis of Martian Minerals in Gale Crater

    NASA Technical Reports Server (NTRS)

    Morrison, S. M.; Downs, R. T.; Blake, D. F.; Bish, D. L.; Ming, D. W.; Morris, R. V.; Yen, A. S.; Chipera, S. J.; Treiman, A. H.; Vaniman, D. T.

    2015-01-01

    The CheMin instrument on the Mars Science Laboratory rover Curiosity performed X-ray diffraction analyses on scooped soil at Rocknest and on drilled rock fines at Yellowknife Bay (John Klein and Cumberland samples), The Kimberley (Windjana sample), and Pahrump (Confidence Hills sample) in Gale crater, Mars. Samples were analyzed with the Rietveld method to determine the unit-cell parameters and abundance of each observed crystalline phase. Unit-cell parameters were used to estimate compositions of the major crystalline phases using crystal-chemical techniques. These phases include olivine, plagioclase and clinopyroxene minerals. Comparison of the CheMin sample unit-cell parameters with those in the literature provides an estimate of the chemical compositions of the major crystalline phases. Preliminary unit-cell parameters, abundances and compositions of crystalline phases found in the Rocknest and Yellowknife Bay samples were reported previously. Further instrument calibration, development of 2D-to-1D pattern conversion corrections, and refinement of the corrected data allow presentation of improved compositions for the above samples.

  10. Using an ensemble smoother to evaluate parameter uncertainty of an integrated hydrological model of Yanqi basin

    NASA Astrophysics Data System (ADS)

    Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang

    2015-10-01

    Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information. The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
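
    The ESMDA update applied above repeats a perturbed-observation ensemble update several times with inflated observation error, with the inverse inflation coefficients summing to one. The sketch below runs that recipe on a toy linear forward model standing in for the hydrological model; ensemble size, model, and parameter values are illustrative only.

    import numpy as np

    rng = np.random.default_rng(3)

    def forward_model(params):
        # Toy stand-in for the hydrological model (MIKE SHE/MIKE 11 in the study).
        G = np.array([[1.0, 0.5], [0.2, 1.5], [0.8, 0.3]])
        return G @ params

    truth = np.array([2.0, -1.0])
    obs_err = 0.1
    d_obs = forward_model(truth) + rng.normal(0, obs_err, 3)

    n_ens, n_assim = 50, 4
    alpha = n_assim                                   # equal inflation: sum(1/alpha) = 1
    X = rng.normal(0.0, 2.0, size=(2, n_ens))         # prior parameter ensemble
    R = obs_err**2 * np.eye(3)

    for _ in range(n_assim):
        Y = np.column_stack([forward_model(X[:, j]) for j in range(n_ens)])
        dX, dY = X - X.mean(axis=1, keepdims=True), Y - Y.mean(axis=1, keepdims=True)
        Cxy, Cyy = dX @ dY.T / (n_ens - 1), dY @ dY.T / (n_ens - 1)
        # Perturbed observations with inflated error, then Kalman-type update.
        D = d_obs[:, None] + np.sqrt(alpha) * rng.normal(0, obs_err, size=(3, n_ens))
        X = X + Cxy @ np.linalg.solve(Cyy + alpha * R, D - Y)

    print(X.mean(axis=1))   # posterior mean should move toward the true parameters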

  11. Resolving structural influences on water-retention properties of alluvial deposits

    USGS Publications Warehouse

    Winfield, K.A.; Nimmo, J.R.; Izbicki, J.A.; Martin, P.M.

    2006-01-01

    With the goal of improving property-transfer model (PTM) predictions of unsaturated hydraulic properties, we investigated the influence of sedimentary structure, defined as particle arrangement during deposition, on laboratory-measured water retention (water content vs. potential [θ(ψ)]) of 10 undisturbed core samples from alluvial deposits in the western Mojave Desert, California. The samples were classified as having fluvial or debris-flow structure based on observed stratification and measured spread of particle-size distribution. The θ(ψ) data were fit with the Rossi-Nimmo junction model, representing water retention with three parameters: the maximum water content (θmax), the ψ-scaling parameter (ψo), and the shape parameter (λ). We examined trends between these hydraulic parameters and bulk physical properties, both textural - geometric mean, Mg, and geometric standard deviation, σg, of particle diameter - and structural - bulk density, ρb, the fraction of unfilled pore space at natural saturation, Ae, and porosity-based randomness index, φs, defined as the excess of total porosity over 0.3. Structural parameters φs and Ae were greater for fluvial samples, indicating greater structural pore space and a possibly broader pore-size distribution associated with a more systematic arrangement of particles. Multiple linear regression analysis and Mallows' Cp statistic identified combinations of textural and structural parameters for the most useful predictive models: for θmax, including Ae, φs, and σg, and for both ψo and λ, including only textural parameters, although use of Ae can somewhat improve ψo predictions. Textural properties can explain most of the sample-to-sample variation in θ(ψ) independent of deposit type, but inclusion of the simple structural indicators Ae and φs can improve PTM predictions, especially for the wettest part of the θ(ψ) curve. © Soil Science Society of America.

  12. The determination of the acoustic parameters of volcanic rocks from compressional velocity measurements

    USGS Publications Warehouse

    Carroll, R.D.

    1969-01-01

    A statistical analysis was made of the relationship of various acoustic parameters of volcanic rocks to compressional wave velocities for data obtained in a volcanic region in Nevada. Some additional samples, chiefly granitic rocks, were also included in the study to extend the range of parameters and the variety of siliceous rock types sampled. Laboratory acoustic measurements obtained on 62 dry core samples were grouped with similar measurements obtained from geophysical logging devices at several depth intervals in a hole from which 15 of the core samples had been obtained. The effects of lithostatic and hydrostatic load on changing the rock acoustic parameters measured in the hole were noticeable when compared with the laboratory measurements on the same core. The results of the analyses determined by grouping all of the data, however, indicate that dynamic Young's, shear and bulk modulus, shear velocity, shear and compressional characteristic impedance, as well as amplitude and energy reflection coefficients may be reliably estimated on the basis of the compressional wave velocities of the rocks investigated. Less precise estimates can be made of density based on the rock compressional velocity. The possible extension of these relationships to include many siliceous rocks is suggested. © 1969.

  13. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    NASA Astrophysics Data System (ADS)

    Chapon, Arnaud; Pigrée, Gilbert; Putmans, Valérie; Rogel, Gwendal

    The search for low-energy β contamination in industrial environments requires the use of Liquid Scintillation Counting. This indirect measurement method demands fine control of every step, from sampling to the measurement itself. In this paper we therefore focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, optimization of counting parameters, and definition of energy windows through maximization of a Figure of Merit. Detection limits are then calculated for these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters; nevertheless, apart from those relating to a few PerkinElmer-specific parameters, most of the results presented here can be extended to other counters.

  14. Discrimination of Clover and Citrus Honeys from Egypt According to Floral Type Using Easily Assessable Physicochemical Parameters and Discriminant Analysis: An External Validation of the Chemometric Approach.

    PubMed

    Karabagias, Ioannis K; Karabournioti, Sofia

    2018-05-03

    Twenty-two honey samples, namely clover and citrus honeys, were collected from the greater Cairo area during the harvesting year 2014–2015. The main purpose of the present study was to characterize the aforementioned honey types and to investigate whether the use of easily assessable physicochemical parameters, including color attributes in combination with chemometrics, could differentiate honey floral origin. Parameters taken into account were: pH, electrical conductivity, ash, free acidity, lactonic acidity, total acidity, moisture content, total sugars (degrees Brix-°Bx), total dissolved solids and their ratio to total acidity, salinity, CIELAB color parameters, along with browning index values. Results showed that all honey samples analyzed met the European quality standards set for honey and had variations in the aforementioned physicochemical parameters depending on floral origin. Application of linear discriminant analysis showed that eight physicochemical parameters, including color, could classify Egyptian honeys according to floral origin (p < 0.05). Correct classification rate was 95.5% using the original method and 90.9% using the cross validation method. The discriminatory ability of the developed model was further validated using unknown honey samples. The overall correct classification rate was not affected. Specific physicochemical parameter analysis in combination with chemometrics has the potential to enhance the differences in floral honeys produced in a given geographical zone.
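
    The classification step described above, linear discriminant analysis with resubstitution and cross-validated accuracy, can be sketched as below. The feature matrix is synthetic (standing in for the eight physicochemical parameters) and the class labels are illustrative; this is not the authors' dataset or code.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)

    # Synthetic stand-in: 22 honeys x 8 parameters (pH, conductivity, ash, ...),
    # two floral classes (0 = clover, 1 = citrus) whose means differ slightly.
    y = np.array([0] * 11 + [1] * 11)
    X = rng.normal(size=(22, 8)) + 1.5 * y[:, None]

    lda = LinearDiscriminantAnalysis().fit(X, y)
    print("resubstitution accuracy:", lda.score(X, y))
    print("cross-validated accuracy:", cross_val_score(lda, X, y, cv=11).mean())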

  15. Discrimination of Clover and Citrus Honeys from Egypt According to Floral Type Using Easily Assessable Physicochemical Parameters and Discriminant Analysis: An External Validation of the Chemometric Approach

    PubMed Central

    Karabournioti, Sofia

    2018-01-01

    Twenty-two honey samples, namely clover and citrus honeys, were collected from the greater Cairo area during the harvesting year 2014–2015. The main purpose of the present study was to characterize the aforementioned honey types and to investigate whether the use of easily assessable physicochemical parameters, including color attributes in combination with chemometrics, could differentiate honey floral origin. Parameters taken into account were: pH, electrical conductivity, ash, free acidity, lactonic acidity, total acidity, moisture content, total sugars (degrees Brix-°Bx), total dissolved solids and their ratio to total acidity, salinity, CIELAB color parameters, along with browning index values. Results showed that all honey samples analyzed met the European quality standards set for honey and had variations in the aforementioned physicochemical parameters depending on floral origin. Application of linear discriminant analysis showed that eight physicochemical parameters, including color, could classify Egyptian honeys according to floral origin (p < 0.05). Correct classification rate was 95.5% using the original method and 90.9% using the cross validation method. The discriminatory ability of the developed model was further validated using unknown honey samples. The overall correct classification rate was not affected. Specific physicochemical parameter analysis in combination with chemometrics has the potential to enhance the differences in floral honeys produced in a given geographical zone. PMID:29751543

  16. Radar altimeter waveform modeled parameter recovery. [SEASAT-1 data

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Satellite-borne radar altimeters include waveform sampling gates providing point samples of the transmitted radar pulse after its scattering from the ocean's surface. Averages of the waveform sampler data can be fitted by varying parameters in a model mean return waveform. The theoretical waveform model used is described, as well as a general iterative nonlinear least squares procedure used to obtain estimates of parameters characterizing the modeled waveform for SEASAT-1 data. The six waveform parameters recovered by the fitting procedure are: (1) amplitude; (2) time origin, or track point; (3) ocean surface rms roughness; (4) noise baseline; (5) ocean surface skewness; and (6) altitude or off-nadir angle. Additional practical processing considerations are addressed, and FORTRAN source listings for the subroutines used in the waveform fitting are included. While the description is for the SEASAT-1 altimeter waveform data analysis, the work can easily be generalized and extended to other radar altimeter systems.
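
    The fitting step described above is an iterative nonlinear least-squares adjustment of a model mean return waveform to the averaged gate samples. The sketch below fits a deliberately simplified return model (noise baseline plus an error-function leading edge whose position and width stand in for track point and surface roughness) to synthetic gate data; the model form and parameter values are illustrative assumptions, not the report's full waveform model.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.special import erf

    def mean_return(gates, amp, t0, sigma, baseline):
        # Simplified mean return: noise floor plus an error-function ramp.
        return baseline + 0.5 * amp * (1.0 + erf((gates - t0) / (np.sqrt(2) * sigma)))

    gates = np.arange(60, dtype=float)                   # waveform sampler gate index
    rng = np.random.default_rng(5)
    waveform = mean_return(gates, 1.0, 30.0, 3.0, 0.05) + rng.normal(0, 0.02, gates.size)

    def residuals(p):
        return mean_return(gates, *p) - waveform

    fit = least_squares(residuals, x0=[0.5, 25.0, 5.0, 0.0])
    print(fit.x)   # amplitude, track point, roughness, noise baseline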

  17. Correlations of fatty acid supplementation, aeroallergens, shampoo, and ear cleanser with multiple parameters in pruritic dogs.

    PubMed

    Nesbitt, Gene H; Freeman, Lisa M; Hannah, Steven S

    2004-01-01

    Seventy-two pruritic dogs were fed one of four diets controlled for n-6:n-3 fatty acid ratios and total dietary intake of fatty acids. Multiple parameters were evaluated, including clinical and cytological findings, aeroallergen testing, microbial sampling techniques, and effects of an antifungal/antibacterial shampoo and ear cleanser. Significant correlations were observed between many clinical parameters, anatomical sampling sites, and microbial counts when data from the diet groups were combined. There were no statistically significant differences between individual diets for any of the clinical parameters. The importance of total clinical management in the control of pruritus was demonstrated.

  18. The Joker: A custom Monte Carlo sampler for binary-star and exoplanet radial velocity data

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter

    2017-01-01

    Given sparse or low-quality radial-velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and MCMC posterior sampling over the orbital parameters. The Joker is a custom-built Monte Carlo sampler that can produce a posterior sampling for orbital parameters given sparse or noisy radial-velocity measurements, even when the likelihood function is poorly behaved. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still highly informative and can be used in hierarchical (population) modeling.

  19. Data bank of optical properties of biological tissue and blood in the visible and near infrared spectral region

    NASA Astrophysics Data System (ADS)

    Khairullina, Alphiya Y.; Bui, Lilia; Oleinik, Tatiana V.; Artishevsky, Nelli; Prigoun, Natalia; Sevkovsky, Jakov; Mokhort, Tatiana

    1996-12-01

    The data bank contains optical, routine biochemical, and biophysical information on 120 venous blood samples from donors, healthy persons, and patients with severe pathology, and on 60 tissue samples. The optical parameters include the diffuse reflection R(λ) and transmission T(λ) coefficients for optically thick layers, the absorption K(λ) and extinction ε(λ) spectra, the oxygenation degree, the parameter p determined by the sizes and shapes of cells and their aggregates, the refractive index of the disperse phase relative to the surrounding medium, and cooperative effects at high relative concentration. The peculiarities in the absorption K(λ) spectra are connected with different pathologies. It is shown from K(λ) that the grade of pathology is connected with the concentration of hemoglobin and mitochondria, together with the oxygenation degree of blood and tissues and with the pathological forms of hemoglobin and its decomposition products at different levels. Parameter p is an important diagnostic parameter. We consider it necessary to include the oxygenation degree and the erythrocyte aggregation parameter to extend the range of common diagnostic parameters of blood.

  20. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problems.
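
    The design criterion above ranks candidate measurements by an information metric such as the relative entropy between the prior and the updated parameter distribution. A minimal sketch of that idea for a single parameter and a toy linear-Gaussian observation model is given below; the forward model, noise level, and ensemble are all illustrative assumptions, not the paper's method or cases.

    import numpy as np

    rng = np.random.default_rng(6)

    # Prior ensemble for one uncertain parameter (stand-in for the study's
    # hydraulic-parameter ensemble).
    theta = rng.normal(0.0, 1.0, 500)
    prior_var = theta.var()

    def relative_entropy_gain(x, obs_err=0.1):
        # Information gain (KL divergence of the Gaussian posterior from the
        # prior, ignoring the mean shift) for an observation
        # y = sin(x) * theta + noise collected at candidate location x.
        h = np.sin(x)                                         # observation sensitivity
        post_var = 1.0 / (1.0 / prior_var + h**2 / obs_err**2)
        ratio = post_var / prior_var
        return 0.5 * (ratio - 1.0 - np.log(ratio))

    candidates = np.linspace(0.0, np.pi, 50)
    gains = [relative_entropy_gain(x) for x in candidates]
    print("most informative location:", candidates[int(np.argmax(gains))])  # near pi/2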

  1. A design methodology for nonlinear systems containing parameter uncertainty

    NASA Technical Reports Server (NTRS)

    Young, G. E.; Auslander, D. M.

    1983-01-01

    In the present design methodology for nonlinear systems containing parameter uncertainty, a generalized sensitivity analysis is incorporated which employs parameter space sampling and statistical inference. For the case of a system with j adjustable and k nonadjustable parameters, this methodology (which includes an adaptive random search strategy) is used to determine the combination of j adjustable parameter values that maximizes the probability that the performance indices simultaneously satisfy the design criteria in spite of the uncertainty due to the k nonadjustable parameters.

  2. Reliable change, sensitivity, and specificity of a multidimensional concussion assessment battery: implications for caution in clinical practice.

    PubMed

    Register-Mihalik, Johna K; Guskiewicz, Kevin M; Mihalik, Jason P; Schmidt, Julianne D; Kerr, Zachary Y; McCrea, Michael A

    2013-01-01

    To provide reliable change confidence intervals for common clinical concussion measures using a healthy sample of collegiate athletes and to apply these reliable change parameters to a sample of concussed collegiate athletes. Two independent samples were included in the study and evaluated on common clinical measures of concussion. The healthy sample included male, collegiate football student-athletes (n = 38) assessed at 2 time points. The concussed sample included college-aged student-athletes (n = 132) evaluated before and after a concussion. Outcome measures included symptom severity scores, Automated Neuropsychological Assessment Metrics throughput scores, and Sensory Organization Test composite scores. Application of the reliable change parameters suggests that a small percentage of concussed participants were impaired on each measure. We identified a low sensitivity of the entire battery (all measures combined) of 50% but high specificity of 96%. Clinicians should be trained in understanding clinical concussion measures and should be aware of evidence suggesting the multifaceted battery is more sensitive than any single measure. Clinicians should be cautioned that sensitivity to balance and neurocognitive impairments was low for each individual measure. Applying the confidence intervals to our injured sample suggests that these measures do not adequately identify postconcussion impairments when used in isolation.
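
    The reliable change approach described above turns healthy test-retest data into an interval of "expected" score change; a post-injury change outside that interval is flagged as impairment. Below is a minimal sketch of that computation on synthetic test-retest scores; the numbers, the 90% interval choice, and the direction of impairment are illustrative assumptions, not the study's actual values.

    import numpy as np
    from scipy.stats import norm, pearsonr

    rng = np.random.default_rng(7)

    # Synthetic healthy test-retest scores (e.g., a neurocognitive throughput score).
    baseline = rng.normal(50, 10, 38)
    retest = baseline + rng.normal(2, 5, 38)             # practice effect plus noise

    r_xx = pearsonr(baseline, retest)[0]                 # test-retest reliability
    sem = baseline.std(ddof=1) * np.sqrt(1.0 - r_xx)     # standard error of measurement
    se_diff = np.sqrt(2.0) * sem                         # SE of a difference score
    practice = (retest - baseline).mean()

    z = norm.ppf(0.95)                                   # 90% two-sided reliable change interval
    low, high = practice - z * se_diff, practice + z * se_diff
    print(f"reliable change interval: [{low:.1f}, {high:.1f}]")

    # Apply to a concussed athlete's pre/post scores (lower = worse here).
    pre, post = 52.0, 38.0
    print("impaired" if (post - pre) < low else "within reliable change")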

  3. Correlations between Microbial Indicators, Pathogens, and Environmental Factors in a Subtropical Estuary

    PubMed Central

    Ortega, Cristina; Solo-Gabriele, Helena M.; Abdelzaher, Amir; Wright, Mary; Deng, Yang; Stark, Lillian M.

    2009-01-01

    The objective of this study was to evaluate whether indicator microbes and physical-chemical parameters were correlated with pathogens within a tidally influenced estuary. Measurements included the analysis of physical-chemical parameters (pH, salinity, temperature, and turbidity), measurements of bacterial indicators (enterococci, fecal coliform, E. coli, and total coliform), viral indicators (somatic and MS2 coliphage), viral pathogens (enterovirus by culture), and protozoan pathogens (Cryptosporidium and Giardia). All pathogen results were negative with the exception of one sample, which tested positive for culturable reovirus (8.5 MPN/100 L). Notable physical-chemical parameters for this sample included low salinity (<1 ppt) and high water temperature (31 °C). Indicator bacteria and indicator virus levels for this sample were within average values typically measured within the study site and were low in comparison with levels observed in other freshwater environments. Overall results suggest that high levels of bacterial and viral indicators were associated with low salinity sites. PMID:19464704

  4. Determination of polarimetric parameters of honey by near-infrared transflectance spectroscopy.

    PubMed

    García-Alvarez, M; Ceresuela, S; Huidobro, J F; Hermida, M; Rodríguez-Otero, J L

    2002-01-30

    NIR transflectance spectroscopy was used to determine polarimetric parameters (direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides) and sucrose in honey. In total, 156 honey samples were collected during 1992 (45 samples), 1995 (56 samples), and 1996 (55 samples). Samples were analyzed by NIR spectroscopy and polarimetric methods. Calibration (118 samples) and validation (38 samples) sets were made up; honeys from the three years were included in both sets. Calibrations were performed by modified partial least-squares regression, with scatter correction by standard normal variate and detrend methods. For direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides, good statistics (bias, SEV, and R2) were obtained for the validation set, and no statistically (p = 0.05) significant differences were found between instrumental and polarimetric methods for these parameters. Statistical data for sucrose were not as good as those of the other parameters. Therefore, NIR spectroscopy is not an effective method for quantitative analysis of sucrose in these honey samples. However, NIR spectroscopy may be an acceptable method for semiquantitative evaluation of sucrose for honeys, such as those in our study, containing up to 3% of sucrose. Further work is necessary to validate the uncertainty at higher levels.
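
    The calibration described above pairs NIR spectra with reference polarimetric values through partial least-squares regression and reports bias, SEV, and R2 on a held-out validation set. The sketch below reproduces that workflow on synthetic spectra using plain PLS (without the modified-PLS weighting or SNV/detrend preprocessing used in the paper); the data sizes match the paper but all values are simulated.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(8)

    # Synthetic stand-in: 156 spectra (700 wavelengths) generated from 3 latent
    # components, plus one polarimetric reference value per sample.
    latent = rng.normal(size=(156, 3))
    X = latent @ rng.normal(size=(3, 700)) + 0.05 * rng.normal(size=(156, 700))
    y = latent @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.normal(size=156)

    cal, val = np.arange(118), np.arange(118, 156)        # calibration / validation split
    pls = PLSRegression(n_components=3).fit(X[cal], y[cal])
    y_pred = pls.predict(X[val]).ravel()

    bias = (y_pred - y[val]).mean()
    sev = (y_pred - y[val]).std(ddof=1)                   # standard error of validation
    print(f"bias={bias:.3f}  SEV={sev:.3f}  R2={r2_score(y[val], y_pred):.3f}")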

  5. Neutrino oscillation parameter sampling with MonteCUBES

    NASA Astrophysics Data System (ADS)

    Blennow, Mattias; Fernandez-Martinez, Enrique

    2010-01-01

    We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling.
    Program summary
    Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator)
    Catalogue identifier: AEFJ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public Licence
    No. of lines in distributed program, including test data, etc.: 69 634
    No. of bytes in distributed program, including test data, etc.: 3 980 776
    Distribution format: tar.gz
    Programming language: C
    Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed
    Operating system: 32 bit and 64 bit Linux
    RAM: Typically a few MBs
    Classification: 11.1
    External routines: GLoBES [1,2] and routines/libraries used by GLoBES
    Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439
    Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, these new physics imply high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, such as those used in GLoBES [1,2].
    Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space and has a complexity which does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software makes sure that the experimental definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES.
    Additional comments: A Matlab GUI for interpretation of results is included in the distribution.
    Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours.
    References:
    [1] P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333.
    [2] P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187.
    [3] S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].

  6. [Development of an analyzing system for soil parameters based on NIR spectroscopy].

    PubMed

    Zheng, Li-Hua; Li, Min-Zan; Sun, Hong

    2009-10-01

    A rapid estimation system for soil parameters based on spectral analysis was developed using object-oriented (OO) technology. A SOIL class was designed; an instance of the SOIL class is an object representing soil samples of a particular type with specific physical properties and spectral characteristics. By extracting the effective information from the modeling spectral data of a soil object, a mapping model was established between the soil parameters and the spectral data, and the mapping model parameters can be saved in the model database. When forecasting the content of any soil parameter, the corresponding prediction model can be selected for objects of the same soil type and similar soil physical properties. After the target soil sample object is passed into the prediction model and processed by the system, an accurate forecast of the target soil sample content is obtained. The system includes modules for file operations, spectra pretreatment, sample analysis, calibration and validation, and sample content forecasting. The system was designed to run independently of the measurement equipment. The parameters and spectral data files (*.xls) of known soil samples can be input into the system. Because the data pretreatment can be selected according to the concrete conditions, the predicted content is displayed in the terminal and the forecasting model can be stored in the model database. The system reads the prediction models and their parameters saved in the model database through the module interface, and the data of the tested samples are then passed into the selected model. Finally, the content of soil parameters can be predicted by the developed system. The system was programmed with Visual C++ 6.0 and Matlab 7.0, and Access XP was used to create and manage the model database.

  7. Seasonal microbial and environmental parameters at Crocker Reef, Florida Keys, 2014–2015

    USGS Publications Warehouse

    Kellogg, Christina A.; Yates, Kimberly K.; Lawler, Stephanie N.; Moore, Christopher S.; Smiley, Nathan A.

    2015-11-04

    Microbial measurements included enumeration of total bacteria, enumeration of virus-like particles, and plate counts of Vibrio spp. colony-forming units (CFU). These measurements were intended to give a sense of any seasonal changes in the total microbial load and to provide an indication of water quality. Additional environmental parameters measured included water temperature, salinity, dissolved oxygen, and pH. Four sites (table 1) were intensively sampled for periods of approximately 48 hours during summer (July 2014) and winter (January–February 2015), during which water samples were collected every 4 hours for analysis, except when prevented by weather conditions.

  8. A system for comparison of boring parameters of mini-HDD machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gunsaulis, F.R.

    A system has been developed to accurately evaluate changes in performance of a mini-horizontal directional drilling (HDD) system in the backreaming/pullback portion of a bore as the parameters influencing the backream are changed. Parameters incorporated in the study include spindle rotation rate, rate of pull, fluid flow rate, and backreamer design. The boring system is able to run at variable, operator-determined rates of spindle rotation and pullback speed utilizing electronic feedback controls for regulation. Spindle torque and pullback force are continuously measured and recorded, giving an indication of the performance of the unit. A method has also been developed to measure the pull load on the installed service line to determine the effect of the boring parameters on the service line. Variability of soil along the bore path is measured and quantified using a soil sampling system developed for the study. Sample results obtained with the system are included in the report. 2 refs., 5 figs., 2 tabs.

  9. Predictive model for inflammation grades of chronic hepatitis B: Large-scale analysis of clinical parameters and gene expressions.

    PubMed

    Zhou, Weichen; Ma, Yanyun; Zhang, Jun; Hu, Jingyi; Zhang, Menghan; Wang, Yi; Li, Yi; Wu, Lijun; Pan, Yida; Zhang, Yitong; Zhang, Xiaonan; Zhang, Xinxin; Zhang, Zhanqing; Zhang, Jiming; Li, Hai; Lu, Lungen; Jin, Li; Wang, Jiucun; Yuan, Zhenghong; Liu, Jie

    2017-11-01

    Liver biopsy is the gold standard for assessing pathological features (e.g., inflammation grades) in hepatitis B virus-infected patients, although it is invasive and traumatic; meanwhile, several gene profiles of chronic hepatitis B (CHB) have been separately described in relatively small sets of hepatitis B virus (HBV)-infected samples. We aimed to analyse correlations among inflammation grades, gene expressions and clinical parameters (serum alanine aminotransferase, aspartate aminotransferase and HBV-DNA) in large-scale CHB samples and to predict inflammation grades by using clinical parameters and/or gene expressions. We analysed gene expressions with three clinical parameters in 122 CHB samples by an improved regression model. Principal component analysis and machine-learning methods including Random Forest, K-nearest neighbour and support vector machine were used for analysis and further diagnostic models. Six normal samples were used to validate the predictive model. Significant genes related to clinical parameters were found to be enriched in the immune system, interferon-stimulated genes, regulation of cytokine production, anti-apoptosis, and so on. A panel of these genes with clinical parameters can effectively predict binary classifications of inflammation grade (area under the ROC curve [AUC]: 0.88, 95% confidence interval [CI]: 0.77-0.93), validated by normal samples. A panel with only clinical parameters was also valuable (AUC: 0.78, 95% CI: 0.65-0.86), indicating that a liquid biopsy method for detecting the pathology of CHB is possible. This is the first study to systematically elucidate the relationships among gene expressions, clinical parameters and pathological inflammation grades in CHB, and to build models predicting inflammation grades by gene expressions and/or clinical parameters as well. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Reaction time as an indicator of insufficient effort: Development and validation of an embedded performance validity parameter.

    PubMed

    Stevens, Andreas; Bahlo, Simone; Licha, Christina; Liske, Benjamin; Vossler-Thies, Elisabeth

    2016-11-30

    Subnormal performance in attention tasks may result from various sources, including lack of effort. In this report, the derivation and validation of a performance validity parameter for reaction time is described, using a set of malingering indices ("Slick criteria") and 3 independent samples of participants (total n = 893). The Slick criteria yield an estimate of the probability of malingering based on the presence of an external incentive, evidence from neuropsychological testing, self-report, and clinical data. In study (1) a validity parameter is derived using reaction time data from a sample composed of inpatients with recent severe brain lesions who were not involved in litigation and of litigants with and without brain lesions. In study (2) the validity parameter is tested in an independent sample of litigants. In study (3) the parameter is applied to an independent sample comprising cooperative and non-cooperative testees. Logistic regression analysis led to a derived validity parameter based on median reaction time and standard deviation. It performed satisfactorily in studies (2) and (3) (study 2: sensitivity = 0.94, specificity = 1.00; study 3: sensitivity = 0.79, specificity = 0.87). The findings suggest that median reaction time and standard deviation may be used as indicators of negative response bias. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
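
    The derived parameter above comes from a logistic regression on median reaction time and its standard deviation, with sensitivity and specificity then read off a classification table. A minimal sketch of that pipeline on synthetic data is shown below; the group sizes, reaction-time distributions, and resulting accuracy figures are illustrative assumptions, not the study's data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(9)

    # Synthetic derivation sample: non-cooperative testees tend to show slower,
    # more variable reaction times than cooperative ones.
    n = 200
    noncoop = rng.integers(0, 2, n)                      # 1 = presumed insufficient effort
    median_rt = rng.normal(450 + 250 * noncoop, 80)      # ms
    rt_sd = rng.normal(80 + 120 * noncoop, 30)
    X = np.column_stack([median_rt, rt_sd])

    clf = LogisticRegression(max_iter=1000).fit(X, noncoop)
    tn, fp, fn, tp = confusion_matrix(noncoop, clf.predict(X)).ravel()
    print("sensitivity:", round(tp / (tp + fn), 2), "specificity:", round(tn / (tn + fp), 2))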

  11. Psychoacoustical evaluation of natural and urban sounds in soundscapes.

    PubMed

    Yang, Ming; Kang, Jian

    2013-07-01

    Among various sounds in the environment, natural sounds, such as water sounds and birdsongs, have proven to be highly preferred by humans, but the reasons for these preferences have not been thoroughly researched. This paper explores differences between various natural and urban environmental sounds from the viewpoint of objective measures, especially psychoacoustical parameters. The sound samples used in this study include the recordings of single sound source categories of water, wind, birdsongs, and urban sounds including street music, mechanical sounds, and traffic noise. The samples are analyzed with a number of existing psychoacoustical parameter algorithmic models. Based on hierarchical cluster and principal components analyses of the calculated results, a series of differences has been shown among different sound types in terms of key psychoacoustical parameters. While different sound categories cannot be identified using any single acoustical and psychoacoustical parameter, identification can be made with a group of parameters, as analyzed with artificial neural networks and discriminant functions in this paper. For artificial neural networks, correlations between network predictions and targets using the average and standard deviation data of psychoacoustical parameters as inputs are above 0.95 for the three natural sound categories and above 0.90 for the urban sound category. For sound identification/classification, key parameters are fluctuation strength, loudness, and sharpness.

  12. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

    This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models the sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system with the same imaging system parameters. The gathered optical sampled images with the tested imaging parameters are processed by three digital image processes: calibration pre-processing, lossy compression with different compression ratios, and image post-processing with different kernels. The image quality assessment method used is the just noticeable difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be found. The six JND subjective-assessment experimental data sets can be validated against each other. Main conclusions include: image post-processing can improve image quality; image post-processing can improve image quality even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and with our image post-processing method, image quality is better when the camera MTF lies within a small range.

  13. Temperature dependence of photoluminescence peaks of porous silicon structures

    NASA Astrophysics Data System (ADS)

    Brunner, Róbert; Pinčík, Emil; Kučera, Michal; Greguš, Ján; Vojtek, Pavel; Zábudlá, Zuzana

    2017-12-01

    Evaluation of photoluminescence spectra of porous silicon (PS) samples prepared by electrochemical etching is presented. The samples were measured at temperatures of 30, 70 and 150 K. Peak parameters (energy, intensity and width) were calculated. The PL spectrum was approximated by a set of Gaussian peaks. Their parameters were fixed using a fitting procedure in which the optimal number of peaks included in the model was estimated using the residual of the approximation. The weak thermal dependence of the spectra indicates the strong influence of active defects.

  14. TracerLPM (Version 1): An Excel® workbook for interpreting groundwater age distributions from environmental tracer data

    USGS Publications Warehouse

    Jurgens, Bryant C.; Böhlke, J.K.; Eberts, Sandra M.

    2012-01-01

    TracerLPM is an interactive Excel® (2007 or later) workbook program for evaluating groundwater age distributions from environmental tracer data by using lumped parameter models (LPMs). Lumped parameter models are mathematical models of transport based on simplified aquifer geometry and flow configurations that account for effects of hydrodynamic dispersion or mixing within the aquifer, well bore, or discharge area. Five primary LPMs are included in the workbook: piston-flow model (PFM), exponential mixing model (EMM), exponential piston-flow model (EPM), partial exponential model (PEM), and dispersion model (DM). Binary mixing models (BMM) can be created by combining primary LPMs in various combinations. Travel time through the unsaturated zone can be included as an additional parameter. TracerLPM also allows users to enter age distributions determined from other methods, such as particle tracking results from numerical groundwater-flow models or from other LPMs not included in this program. Tracers of both young groundwater (anthropogenic atmospheric gases and isotopic substances indicating post-1940s recharge) and much older groundwater (carbon-14 and helium-4) can be interpreted simultaneously so that estimates of the groundwater age distribution for samples with a wide range of ages can be constrained. TracerLPM is organized to permit a comprehensive interpretive approach consisting of hydrogeologic conceptualization, visual examination of data and models, and best-fit parameter estimation. Groundwater age distributions can be evaluated by comparing measured and modeled tracer concentrations in two ways: (1) multiple tracers analyzed simultaneously can be evaluated against each other for concordance with modeled concentrations (tracer-tracer application) or (2) tracer time-series data can be evaluated for concordance with modeled trends (tracer-time application). Groundwater-age estimates can also be obtained for samples with a single tracer measurement at one point in time; however, prior knowledge of an appropriate LPM is required because the mean age is often non-unique. LPM output concentrations depend on model parameters and sample date. All of the LPMs have a parameter for mean age. The EPM, PEM, and DM have an additional parameter that characterizes the degree of age mixing in the sample. BMMs have a parameter for the fraction of the first component in the mixture. An LPM, together with its parameter values, provides a description of the age distribution or the fractional contribution of water for every age of recharge contained within a sample. For the PFM, the age distribution is a unit pulse at one distinct age. For the other LPMs, the age distribution can be much broader and span decades, centuries, millennia, or more. For a sample with a mixture of groundwater ages, the reported interpretation of tracer data includes the LPM name, the mean age, and the values of any other independent model parameters. TracerLPM also can be used for simulating the responses of wells, springs, streams, or other groundwater discharge receptors to nonpoint-source contaminants that are introduced in recharge, such as nitrate. This is done by combining an LPM or user-defined age distribution with information on contaminant loading at the water table. 
Information on historic contaminant loading can be used to help evaluate a model's ability to match real world conditions and understand observed contaminant trends, while information on future contaminant loading scenarios can be used to forecast potential contaminant trends.
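
    Each lumped parameter model above is, in practice, a weighting of the historical tracer input by an assumed age distribution (with optional radioactive decay). The sketch below evaluates an exponential mixing model on a synthetic tracer input history; the input curve, sample year, and mean ages are illustrative assumptions, not TracerLPM's data or code.

    import numpy as np

    # Synthetic tracer input history at the water table, on a yearly grid
    # ending at the sample year.
    years = np.arange(1950, 2013)
    c_in = np.clip((years - 1955) * 2.0, 0.0, 100.0)       # arbitrary tracer units

    def emm_output(mean_age, sample_year=2012, decay_const=0.0):
        # Exponential mixing model: recharge of age a is weighted by
        # (1/tau) * exp(-a/tau), optionally decayed by exp(-lambda * a).
        ages = sample_year - years
        w = np.exp(-ages / mean_age) / mean_age
        w /= w.sum()                                        # discrete normalization
        return np.sum(c_in * w * np.exp(-decay_const * ages))

    for tau in (5, 20, 60):                                 # mean ages in years
        print(tau, round(float(emm_output(tau)), 1))        # older mixtures dilute the modern signal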

  15. Selected physical properties of various diesel blends

    NASA Astrophysics Data System (ADS)

    Hlaváčová, Zuzana; Božiková, Monika; Hlaváč, Peter; Regrut, Tomáš; Ardonová, Veronika

    2018-01-01

    The quality determination of biofuels requires identifying their chemical and physical parameters. The key physical parameters are rheological, thermal and electrical properties. In our study, we investigated samples of diesel blends with rapeseed methyl ester content in the range from 3 to 100%. In these samples, we measured basic thermophysical properties, including thermal conductivity and thermal diffusivity, using two different transient methods - the hot-wire method and the dynamic plane source method. Every thermophysical parameter was measured 100 times using both methods for all samples. Dynamic viscosity was measured during the heating process over the temperature range 20-80°C. A digital rotational viscometer (Brookfield DV 2T) was used for dynamic viscosity detection. Electrical conductivity was measured using a digital conductivity meter (Model 1152) in a temperature range from -5 to 30°C. The highest values of the thermal parameters were reached in the diesel sample with the highest biofuel content. The dynamic viscosity of the samples increased with higher concentrations of the bio-component rapeseed methyl esters. The electrical conductivity of the blends also increased with rapeseed methyl ester content.

  16. What’s Driving Uncertainty? The Model or the Model Parameters (The influences of model and model parameters in data analysis)

    DOE PAGES

    Anderson-Cook, Christine Michaela

    2017-03-01

    Here, one of the substantial improvements to the practice of data analysis in recent decades is the change from reporting just a point estimate for a parameter or characteristic, to now including a summary of uncertainty for that estimate. Understanding the precision of the estimate for the quantity of interest provides better understanding of what to expect and how well we are able to predict future behavior from the process. For example, when we report a sample average as an estimate of the population mean, it is good practice to also provide a confidence interval (or credible interval, if you are doing a Bayesian analysis) to accompany that summary. This helps to calibrate what ranges of values are reasonable given the variability observed in the sample and the amount of data that were included in producing the summary.
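
    The practice described above, reporting an interval alongside the point estimate, is illustrated below with a standard t-based confidence interval for a sample mean; the data are simulated and the 95% level is just a conventional choice.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    sample = rng.normal(loc=12.0, scale=3.0, size=25)      # hypothetical measurements

    mean = sample.mean()
    sem = stats.sem(sample)                                # standard error of the mean
    low, high = stats.t.interval(0.95, df=sample.size - 1, loc=mean, scale=sem)
    print(f"point estimate {mean:.2f}, 95% CI [{low:.2f}, {high:.2f}]")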

  17. Model-based Bayesian inference for ROC data analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of a general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a two-valued (zero-one) covariate to express the binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by a Markov Chain Monte Carlo (MCMC) method carried out with Bayesian inference Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach considers model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.

  18. Mutagenicity of drinking water sampled from the Yangtze River and Hanshui River (Wuhan section) and correlations with water quality parameters.

    PubMed

    Lv, Xuemin; Lu, Yi; Yang, Xiaoming; Dong, Xiaorong; Ma, Kunpeng; Xiao, Sanhua; Wang, Yazhou; Tang, Fei

    2015-03-31

    A total of 54 water samples were collected during three different hydrologic periods (level period, wet period, and dry period) from Plant A and Plant B (sourced from the Yangtze River and Hanshui River, respectively), and several water parameters, such as chemical oxygen demand (COD), turbidity, and total organic carbon (TOC), were analyzed simultaneously. The mutagenicity of the water samples was evaluated using the Ames test with Salmonella typhimurium strains TA98 and TA100. According to the results, the organic compounds in the water were largely frame-shift mutagens, as positive results were found for most of the tests using TA98. All of the finished water samples exhibited stronger mutagenicity than the corresponding raw and distribution water samples, with water samples collected from Plant B exhibiting greater mutagenic strength than those from Plant A. The finished water samples from Plant A displayed seasonal variation. Water parameters including COD (r = 0.599, P = 0.009), TOC (r = 0.681, P = 0.02), UV254 (r = 0.711, P = 0.001), and total nitrogen (r = 0.570, P = 0.014) exhibited good correlations with mutagenicity (TA98) at 2.0 L/plate, which supports the importance of using mutagenicity as an additional parameter for assessing drinking water quality.
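    The correlation coefficients and P values quoted above are of the kind produced by a standard Pearson analysis; a minimal sketch with invented numbers (not the study's measurements) is:

```python
# Minimal sketch with hypothetical data: Pearson correlation between a
# water-quality parameter (e.g., TOC) and a mutagenicity response.
import numpy as np
from scipy.stats import pearsonr

toc = np.array([1.8, 2.1, 2.5, 2.9, 3.2, 3.6, 4.0, 4.3])    # mg/L, assumed
revertants = np.array([55, 62, 70, 74, 88, 90, 101, 108])   # TA98 counts, assumed

r, p = pearsonr(toc, revertants)
print(f"r = {r:.3f}, P = {p:.4f}")
```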

  19. Mutagenicity of drinking water sampled from the Yangtze River and Hanshui River (Wuhan section) and correlations with water quality parameters

    PubMed Central

    Lv, Xuemin; Lu, Yi; Yang, Xiaoming; Dong, Xiaorong; Ma, Kunpeng; Xiao, Sanhua; Wang, Yazhou; Tang, Fei

    2015-01-01

    A total of 54 water samples were collected during three different hydrologic periods (level period, wet period, and dry period) from Plant A and Plant B (sourced from the Yangtze River and Hanshui River, respectively), and several water parameters, such as chemical oxygen demand (COD), turbidity, and total organic carbon (TOC), were analyzed simultaneously. The mutagenicity of the water samples was evaluated using the Ames test with Salmonella typhimurium strains TA98 and TA100. According to the results, the organic compounds in the water were largely frame-shift mutagens, as positive results were found for most of the tests using TA98. All of the finished water samples exhibited stronger mutagenicity than the corresponding raw and distribution water samples, with water samples collected from Plant B exhibiting greater mutagenic strength than those from Plant A. The finished water samples from Plant A displayed seasonal variation. Water parameters including COD (r = 0.599, P = 0.009), TOC (r = 0.681, P = 0.02), UV254 (r = 0.711, P = 0.001), and total nitrogen (r = 0.570, P = 0.014) exhibited good correlations with mutagenicity (TA98) at 2.0 L/plate, which supports the importance of using mutagenicity as an additional parameter for assessing drinking water quality. PMID:25825837

  20. Application of an ETV-ICP system for the determination of elements in human hair

    NASA Astrophysics Data System (ADS)

    Plantikow-Voßgätter, F.; Denkhaus, E.

    1996-01-01

    When determining element contents in hair samples without sample digestion, it is necessary to analyze large sample volumes in order to minimize problems arising from the inhomogeneity of biological sample materials. Therefore, an electrothermal vaporization (ETV) system is used for solid sample introduction into an inductively coupled plasma (ICP) for the determination of matrix and trace elements in hair. This paper concentrates on the instrumental aspects, avoiding time-consuming sample preparation. The results obtained for the optimization tests, ETV operating parameters and ICP operating parameters, are shown and discussed. Standard additions are used for calibration in the determination of Zn, Mg, and Mn in human hair. Studies including reproducibility and detection limits for the chosen elements have been carried out on certified reference materials (CRMs). The reproducibility (relative standard deviation (RSD) of n = 10) and detection limits (DLs) of Zn (RSD < 8.5%, DL < 0.8 μg g-1), Mn (RSD < 14.1%, DL < 0.3 μg g-1), and Mg (RSD < 7.4%, DL < 6.6 μg g-1) are satisfactory. The concentration values found show good agreement with the corresponding certified values. Further sample preparation steps, including hair sampling, the washing procedure and homogenization, relating to measurements of real hair samples are described.

  1. Physicochemical Characteristics of Larval Habitat Waters of Mosquitoes (Diptera: Culicidae) in Qom Province, Central Iran

    PubMed Central

    Abai, Mohammad Reza; Saghafipour, Abedin; Ladonni, Hossein; Jesri, Nahid; Omidi, Saeed; Azari-Hamidian, Shahyad

    2016-01-01

    Background: Mosquitoes lay eggs in a wide range of habitats with different physicochemical parameters. Ecological data, including physicochemical factors of oviposition sites, play an important role in integrated vector management. These data help managers make the best decisions for controlling the aquatic stages of vectors, especially through source reduction. Methods: To study some physicochemical characteristics of larval habitat waters, an investigation was carried out in Qom Province, central Iran, during spring and summer 2008 and 2009. Water samples were collected during larval collection from ten localities. The chemical parameters of the water samples were analyzed in mg/l using standard methods. Water temperature (°C), turbidity (NTU), total dissolved solids (ppm), electrical conductivity (μS/cm), and acidity (pH) were measured using digital testers. Thermotolerant coliforms of the water samples were measured as MPN/100 ml. Data were assessed by the Kruskal-Wallis test and Spearman correlation analysis. Results: In total, 371 mosquito larvae were collected, including 14 species representing four genera. Some physicochemical parameters of water in Emamzadeh Esmail, Qomrood, Qom City, and Rahjerd showed significant differences among localities (P < 0.05). The physicochemical and microbial parameters did not show any significant differences among species (P > 0.05). There was no significant correlation between the abundance of larvae and the different physicochemical and microbial parameters (P > 0.05). Conclusion: The means of EC, TDS, and phosphate across localities and species were remarkably higher than those reported in previous studies. Other parameters were within the ranges reported by other investigations. PMID:27047973

  2. Physicochemical Characteristics of Larval Habitat Waters of Mosquitoes (Diptera: Culicidae) in Qom Province, Central Iran.

    PubMed

    Abai, Mohammad Reza; Saghafipour, Abedin; Ladonni, Hossein; Jesri, Nahid; Omidi, Saeed; Azari-Hamidian, Shahyad

    2016-03-01

    Mosquitoes lay eggs in a wide range of habitats with different physicochemical parameters. Ecological data, including physicochemical factors of oviposition sites, play an important role in integrated vector management. These data help managers make the best decisions for controlling the aquatic stages of vectors, especially through source reduction. To study some physicochemical characteristics of larval habitat waters, an investigation was carried out in Qom Province, central Iran, during spring and summer 2008 and 2009. Water samples were collected during larval collection from ten localities. The chemical parameters of the water samples were analyzed in mg/l using standard methods. Water temperature (°C), turbidity (NTU), total dissolved solids (ppm), electrical conductivity (μS/cm), and acidity (pH) were measured using digital testers. Thermotolerant coliforms of the water samples were measured as MPN/100 ml. Data were assessed by the Kruskal-Wallis test and Spearman correlation analysis. In total, 371 mosquito larvae were collected, including 14 species representing four genera. Some physicochemical parameters of water in Emamzadeh Esmail, Qomrood, Qom City, and Rahjerd showed significant differences among localities (P < 0.05). The physicochemical and microbial parameters did not show any significant differences among species (P > 0.05). There was no significant correlation between the abundance of larvae and the different physicochemical and microbial parameters (P > 0.05). The means of EC, TDS, and phosphate across localities and species were remarkably higher than those reported in previous studies. Other parameters were within the ranges reported by other investigations.

  3. Effects of aging on sleep structure throughout adulthood: a population-based study.

    PubMed

    Moraes, Walter; Piovezan, Ronaldo; Poyares, Dalva; Bittencourt, Lia Rita; Santos-Silva, Rogerio; Tufik, Sergio

    2014-04-01

    Although many studies have shown the evolution of sleep parameters across the lifespan, not many have included a representative sample of the general population. The objective of this study was to describe age-related changes in sleep structure, sleep respiratory parameters and periodic limb movements in the adult population of São Paulo. We selected a representative sample of the city of São Paulo, Brazil, that included both genders and an age range of 20-80 years. Pregnant and lactating women, people with physical or mental impairments that prevent self-care, and people who work every night were not included. This sample included 1024 individuals who underwent polysomnography and structured interviews. We subdivided our sample into five-year age groups. One-way analysis of variance was used to compare age groups. Pearson product-moment correlation was used to evaluate the relationship between age and sleep parameters. Total sleep time, sleep efficiency, percentage of rapid eye movement (REM) sleep and slow wave sleep showed a significant age-related decrease (P < 0.05). WASO (night-time spent awake after sleep onset), arousal index, sleep latency, REM sleep latency, and the percentage of stages 1 and 2 showed a significant increase (P < 0.05). Furthermore, the apnea-hypopnea index increased and oxygen saturation decreased with age. The reduction in the percentage of REM sleep significantly correlated with age in women, whereas the reduction in the percentage of slow wave sleep correlated with age in men. The periodic limb movement (PLM) index increased with age in men and women. Sleep structure and duration underwent significant alterations throughout the aging process in the general population. There was an important correlation between age, sleep respiratory parameters and the PLM index. In addition, men and women showed similar trends but with different effect sizes. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another 'equivalent' sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of the Representative Elementary Volume size for arbitrary physics.
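    To make the multilevel idea concrete, here is a minimal sketch on a toy problem (not the pore-scale solver used in the paper) of the MLMC telescoping estimator, in which many cheap coarse-level samples are combined with a few expensive fine-level corrections:

```python
# Minimal multilevel Monte Carlo sketch on a toy model: the quantity of interest
# is approximated at increasing resolution by toy_model(level, x), and the
# telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] is estimated level
# by level.
import numpy as np

rng = np.random.default_rng(0)

def toy_model(level, x):
    # Hypothetical stand-in for an effective parameter computed on a mesh whose
    # discretization error shrinks as the level increases.
    h = 2.0 ** -(level + 2)
    return np.sin(x) + h * np.cos(10 * x)

def mlmc_estimate(n_per_level):
    total = 0.0
    for level, n in enumerate(n_per_level):
        x = rng.uniform(0, np.pi, size=n)      # random samples / geometries
        fine = toy_model(level, x)
        if level == 0:
            total += fine.mean()
        else:
            coarse = toy_model(level - 1, x)   # same samples, coarser level
            total += (fine - coarse).mean()
    return total

# Many cheap coarse samples, few expensive fine ones.
print(mlmc_estimate([20000, 4000, 800]))
```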

  5. The Kormendy relation of galaxies in the Frontier Fields clusters: Abell S1063 and MACS J1149.5+2223

    NASA Astrophysics Data System (ADS)

    Tortorelli, Luca; Mercurio, Amata; Paolillo, Maurizio; Rosati, Piero; Gargiulo, Adriana; Gobat, Raphael; Balestra, Italo; Caminha, G. B.; Annunziatella, Marianna; Grillo, Claudio; Lombardi, Marco; Nonino, Mario; Rettura, Alessandro; Sartoris, Barbara; Strazzullo, Veronica

    2018-06-01

    We analyse the Kormendy relations (KRs) of the two Frontier Fields clusters, Abell S1063, at z = 0.348, and MACS J1149.5+2223, at z = 0.542, exploiting very deep Hubble Space Telescope photometry and Very Large Telescope (VLT)/Multi Unit Spectroscopic Explorer (MUSE) integral field spectroscopy. With this novel data set, we are able to investigate how the KR parameters depend on the cluster galaxy sample selection and how this affects studies of galaxy evolution based on the KR. We define and compare four different galaxy samples according to (a) Sérsic indices: early-type (`ETG'), (b) visual inspection: `ellipticals', (c) colours: `red', (d) spectral properties: `passive'. The classification is performed for a complete sample of galaxies with mF814W ≤ 22.5 ABmag (M* ≳ 1010.0 M⊙). To derive robust galaxy structural parameters, we use two methods: (1) an iterative estimate of structural parameters using images of increasing size, in order to deal with closely separated galaxies and (2) different background estimations, to deal with the intracluster light contamination. The comparison between the KRs obtained from the different samples suggests that the sample selection could affect the estimate of the best-fitting KR parameters. The KR built with ETGs is fully consistent with the one obtained for ellipticals and passive. On the other hand, the KR slope built on the red sample is only marginally consistent with those obtained with the other samples. We also release the photometric catalogue with structural parameters for the galaxies included in the present analysis.

  6. Kinematics of our Galaxy from the PMA and TGAS catalogues

    NASA Astrophysics Data System (ADS)

    Velichko, Anna B.; Akhmetov, Volodymyr S.; Fedorov, Peter N.

    2018-04-01

    We derive and compare kinematic parameters of the Galaxy using the PMA and Gaia TGAS data. Two methods are used in the calculations: evaluation of the Ogorodnikov-Milne model (OMM) parameters by the least squares method (LSM) and a decomposition on a set of vector spherical harmonics (VSH). We trace the dependence on distance of the derived parameters, including the Oort constants A and B and the rotational velocity of the Galaxy, V_rot, at the Solar distance, for the common sample of stars of mixed spectral composition from the PMA and TGAS catalogues. The distances were obtained from the TGAS parallaxes or from reduced proper motions for fainter stars. The A, B and V_rot parameters derived from the proper motions of both catalogues show identical behaviour, but the values are systematically shifted by about 0.5 mas/yr. The Oort B parameter derived from the PMA sample of red giants shows a gradual decrease with increasing distance, while Oort A has a minimum at about 2 kpc and then gradually increases. As for the models chosen for the calculations, first, we confirm the conclusions of other authors about the existence of extra-model harmonics in the stellar velocity field. Secondly, not all parameters of the OMM are statistically significant, and the set of significant parameters depends on the stellar sample used.

  7. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.

  8. 40 CFR 60.58c - Reporting and recordkeeping requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... limits and/or to establish or re-establish operating parameters, as applicable, and a description, including sample calculations, of how the operating parameters were established or re-established, if....epa.gov/ttn/chief/ert/ert_tool.html. [62 FR 48382, Sept. 15, 1997, as amended at 74 FR 51413, Oct. 6...

  9. 40 CFR 60.58c - Reporting and recordkeeping requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... limits and/or to establish or re-establish operating parameters, as applicable, and a description, including sample calculations, of how the operating parameters were established or re-established, if....epa.gov/ttn/chief/ert/ert_tool.html. [62 FR 48382, Sept. 15, 1997, as amended at 74 FR 51413, Oct. 6...

  10. Mapping local anisotropy axis for scattering media using backscattering Mueller matrix imaging

    NASA Astrophysics Data System (ADS)

    He, Honghui; Sun, Minghao; Zeng, Nan; Du, E.; Guo, Yihong; He, Yonghong; Ma, Hui

    2014-03-01

    Mueller matrix imaging techniques can be used to detect the micro-structure variations of superficial biological tissues, including the sizes and shapes of cells, the structures in cells, and the densities of the organelles. Many tissues contain anisotropic fibrous micro-structures, such as collagen fibers, elastin fibers, and muscle fibers. Changes of these fibrous structures are potentially good indicators for some pathological variations. In this paper, we propose a quantitative analysis technique based on the Mueller matrix for mapping the local anisotropy axis of scattering media. By conducting both experiments on a silk sample and Monte Carlo simulations based on the sphere-cylinder scattering model (SCSM), we extract anisotropy axis parameters from different backscattering Mueller matrix elements. Moreover, we examine the possible applications of these parameters to biological tissues. The preliminary experimental results for human cancerous samples show that these parameters are capable of mapping the local axis of fibers. Since many pathological changes, including early-stage cancers, affect the well-aligned fibrous structures of tissues, the experimental results indicate that these parameters can be used as potential tools in clinical applications for biomedical diagnosis purposes.

  11. Application of artificial neural networks to assess pesticide contamination in shallow groundwater

    USGS Publications Warehouse

    Sahoo, G.B.; Ray, C.; Mehnert, E.; Keefer, D.A.

    2006-01-01

    In this study, a feed-forward back-propagation neural network (BPNN) was developed and applied to predict pesticide concentrations in groundwater monitoring wells. Pesticide concentration data are challenging to analyze because they tend to be highly censored. Input data to the neural network included the categorical indices of depth to aquifer material, pesticide leaching class, aquifer sensitivity to pesticide contamination, time (month) of sample collection, well depth, depth to water from land surface, and additional travel distance in the saturated zone (i.e., distance from land surface to midpoint of well screen). The output of the neural network was the total pesticide concentration detected in the well. The model predictions produced good agreement with the observed data in terms of correlation coefficient (R = 0.87) and pesticide detection efficiency (E = 89%), as well as a good match between the observed and predicted "class" groups. The relative importance of input parameters to pesticide occurrence in groundwater was examined in terms of R, E, mean error (ME), root mean square error (RMSE), and pesticide occurrence "class" groups by eliminating some key input parameters from the model. Well depth and time of sample collection were the most sensitive input parameters for predicting the pesticide contamination potential of a well. This implies that wells tapping shallow aquifers are more vulnerable to pesticide contamination than wells tapping deeper aquifers. Pesticide occurrences during post-application months (June through October) were found to be 2.5 to 3 times higher than pesticide occurrences during other months (November through April). The BPNN was used to rank the input parameters with the highest potential to contaminate groundwater, including two original and five ancillary parameters. The two original parameters are depth to aquifer material and pesticide leaching class. When these two parameters were the only input parameters for the BPNN, they were not able to predict contamination potential. However, when they were used with other parameters, the predictive performance efficiency of the BPNN in terms of R, E, ME, RMSE, and pesticide occurrence "class" groups increased. Ancillary data include data collected during the study such as well depth and time of sample collection. The BPNN indicated that the ancillary data had more predictive power than the original data. The BPNN results will help researchers identify parameters to improve maps of aquifer sensitivity to pesticide contamination. © 2006 Elsevier B.V. All rights reserved.
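    The sketch below illustrates the general idea of a feed-forward back-propagation regressor on synthetic well descriptors; it is not the USGS model, and all variable names, ranges, and the toy relationship are invented:

```python
# Minimal sketch with synthetic data: a feed-forward back-propagation network
# relating site descriptors to a pesticide concentration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical inputs: well depth (m), depth to water (m), sampling month (1-12),
# leaching class (ordinal), aquifer sensitivity class (ordinal).
X = rng.uniform([5, 1, 1, 1, 1], [60, 30, 12, 5, 5], size=(300, 5))
y = 0.05 * (60 - X[:, 0]) + 0.02 * X[:, 3] * X[:, 4] + rng.normal(0, 0.1, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
net.fit(X_tr, y_tr)
print(f"R^2 on held-out wells: {net.score(X_te, y_te):.2f}")
```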

  12. 40 CFR 85.2120 - Maintenance and submittal of records.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... testing program, including all production part sampling techniques used to verify compliance of the... subsequent analyses of that data; (7) A description of all the methodology, analysis, testing and/or sampling techniques used to ascertain the emission critical parameter specifications of the original equipment part...

  13. Laboratory Studies on Surface Sampling of Bacillus anthracis Contamination: Summary, Gaps, and Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Amidan, Brett G.; Hu, Rebecca

    2011-11-28

    This report summarizes previous laboratory studies to characterize the performance of methods for collecting, storing/transporting, processing, and analyzing samples from surfaces contaminated by Bacillus anthracis or related surrogates. The focus is on plate culture and count estimates of surface contamination for swab, wipe, and vacuum samples of porous and nonporous surfaces. Summaries of the previous studies and their results were assessed to identify gaps in information needed as inputs to calculate key parameters critical to risk management in biothreat incidents. One key parameter is the number of samples needed to make characterization or clearance decisions with specified statistical confidence. Other key parameters include the ability to calculate, following contamination incidents, the (1) estimates of Bacillus anthracis contamination, as well as the bias and uncertainties in the estimates, and (2) confidence in characterization and clearance decisions for contaminated or decontaminated buildings. Gaps in knowledge and understanding identified during the summary of the studies are discussed and recommendations are given for future studies.

  14. Monitoring systems for community water supplies

    NASA Technical Reports Server (NTRS)

    Taylor, R. E.; Brooks, R. R.; Jeffers, E. L.; Linton, A. T.; Poel, G. D.

    1978-01-01

    The water monitoring system includes equipment and techniques for waste water sampling and sensors for determining levels of microorganisms, oxygen, chlorine, and many other important parameters. The system also includes a data acquisition and display system that computes water quality information for real-time display.

  15. WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING

    PubMed Central

    Saegusa, Takumi; Wellner, Jon A.

    2013-01-01

    We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools are developed including a Glivenko–Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where an estimator of a nuisance parameter is estimable either at regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559
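    To make the weighting idea concrete, here is a minimal sketch on synthetic data (far simpler than the estimators analyzed in the paper) of inverse probability weighting under stratified two-phase sampling:

```python
# Minimal sketch with synthetic data: phase 1 observes a cheap stratum variable
# on everyone; phase 2 measures the costly outcome on a stratified subsample,
# and each phase-2 unit is weighted by the inverse of its sampling probability.
import numpy as np

rng = np.random.default_rng(3)
N = 10_000
stratum = rng.integers(0, 2, size=N)                       # phase-1 variable
outcome = rng.normal(loc=1.0 + 0.5 * stratum, scale=1.0)   # known only at phase 2

probs = np.where(stratum == 1, 0.50, 0.05)   # phase-2 sampling probabilities
selected = rng.random(N) < probs

naive = outcome[selected].mean()             # ignores the design: biased
weights = 1.0 / probs[selected]
weighted = np.sum(weights * outcome[selected]) / np.sum(weights)
print(f"population mean ~ {outcome.mean():.3f}, "
      f"naive = {naive:.3f}, weighted = {weighted:.3f}")
```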

  16. Limited-sampling strategies for anti-infective agents: systematic review.

    PubMed

    Sprague, Denise A; Ensom, Mary H H

    2009-09-01

    Area under the concentration-time curve (AUC) is a pharmacokinetic parameter that represents overall exposure to a drug. For selected anti-infective agents, pharmacokinetic-pharmacodynamic parameters, such as AUC/MIC (where MIC is the minimal inhibitory concentration), have been correlated with outcome in a few studies. A limited-sampling strategy may be used to estimate pharmacokinetic parameters such as AUC, without the frequent, costly, and inconvenient blood sampling that would be required to directly calculate the AUC. To discuss, by means of a systematic review, the strengths, limitations, and clinical implications of published studies involving a limited-sampling strategy for anti-infective agents and to propose improvements in methodology for future studies. The PubMed and EMBASE databases were searched using the terms "anti-infective agents", "limited sampling", "optimal sampling", "sparse sampling", "AUC monitoring", "abbreviated AUC", "abbreviated sampling", and "Bayesian". The reference lists of retrieved articles were searched manually. Included studies were classified according to modified criteria from the US Preventive Services Task Force. Twenty studies met the inclusion criteria. Six of the studies (involving didanosine, zidovudine, nevirapine, ciprofloxacin, efavirenz, and nelfinavir) were classified as providing level I evidence, 4 studies (involving vancomycin, didanosine, lamivudine, and lopinavir-ritonavir) provided level II-1 evidence, 2 studies (involving saquinavir and ceftazidime) provided level II-2 evidence, and 8 studies (involving ciprofloxacin, nelfinavir, vancomycin, ceftazidime, ganciclovir, pyrazinamide, meropenem, and alpha interferon) provided level III evidence. All of the studies providing level I evidence used prospectively collected data and proper validation procedures with separate, randomly selected index and validation groups. However, most of the included studies did not provide an adequate description of the methods or the characteristics of included patients, which limited their generalizability. Many limited-sampling strategies have been developed for anti-infective agents that do not have a clearly established link between AUC and clinical outcomes in humans. Future studies should first determine if there is an association between AUC monitoring and clinical outcomes. Thereafter, it may be worthwhile to prospectively develop and validate a limited-sampling strategy for the particular anti-infective agent in a similar population.

  17. Sensitivity of land surface modeling to parameters: An uncertainty quantification method applied to the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and a literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (including slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the previous four PFT-dependent parameters. Further analysis by conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes or both to determine the full range of sensitivity of Earth system modeling to land-surface parameters. This can facilitate sampling strategies in measurement campaigns targeted at reduction of climate modeling uncertainties and can also provide guidance on land parameter calibration for simulation optimization.
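    As an illustration of the sampling step described above, the sketch below draws a quasi-Monte Carlo Sobol design over uniform prior ranges; the parameter names and ranges are invented placeholders, not the values used for CLM4.5:

```python
# Minimal sketch: a Sobol quasi-Monte Carlo design over uniform prior ranges,
# one row per ensemble member of a model-parameter study.
from scipy.stats import qmc

# Hypothetical PFT-dependent parameters and uniform prior ranges.
names = ["slope_cond_photo", "sla_top", "leaf_cn", "frac_n_rubisco"]
lower = [4.0, 0.01, 20.0, 0.05]
upper = [9.0, 0.04, 60.0, 0.20]

sampler = qmc.Sobol(d=len(names), scramble=True, seed=0)
unit_sample = sampler.random_base2(m=10)        # 2**10 = 1024 ensemble members
design = qmc.scale(unit_sample, lower, upper)

print(design.shape)                             # (1024, 4), one row per model run
```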

  18. Micromechanical potentiometric sensors

    DOEpatents

    Thundat, Thomas G.

    2000-01-01

    A microcantilever potentiometric sensor utilized for detecting and measuring physical and chemical parameters in a sample of media is described. The microcantilevered spring element includes at least one chemical coating on a coated region that accumulates a surface charge in response to hydrogen ions, redox potential, or ion concentrations in the sample of media being monitored. The accumulation of surface charge on one surface of the microcantilever, with a differing surface charge on an opposing surface, creates a mechanical stress and a deflection of the spring element. One of a multitude of deflection detection methods may include the use of a laser light source focused on the microcantilever, with a photo-sensitive detector receiving reflected laser impulses. The microcantilevered spring element is approximately 1 to 100 μm long, approximately 1 to 50 μm wide, and approximately 0.3 to 3.0 μm thick. Deflections of the cantilever can be detected with an accuracy on the order of 0.01 nanometers. The microcantilever apparatus and method of parameter detection require only microliters of a sample to be placed on or near the spring element surface. The method is extremely sensitive to the detection of the parameters to be measured.

  19. Parameters for scale-up of lethal microwave treatment to eradicate cerambycid larvae infesting solid wood packing materials

    Treesearch

    Mary R. Fleming; John J. Janowiak; Joseph Kearns; Jeffrey E. Shield; Rustum Roy; Dinesh K. Agrawal; Leah S. Bauer; Kelli Hoover

    2004-01-01

    The use of microwave irradiation to eradicate insects infesting wood used to manufacture packing materials such as pallets and crates was evaluated. The focus of this preliminary study was to determine which microwave parameters, including chamber-volume to sample-volume ratios, variations of power and time, and energy density (total microwave power/wood volume), affect the...

  20. The Effect of Including or Excluding Students with Testing Accommodations on IRT Calibrations.

    ERIC Educational Resources Information Center

    Karkee, Thakur; Lewis, Dan M.; Barton, Karen; Haug, Carolyn

    This study aimed to determine the degree to which the inclusion of accommodated students with disabilities in the calibration sample affects the characteristics of item parameters and the test results. Investigated were effects on test reliability, item fit to the applicable item response theory (IRT) model, item parameter estimates, and students'…

  1. Total Arsenic, Cadmium, and Lead Determination in Brazilian Rice Samples Using ICP-MS

    PubMed Central

    Buzzo, Márcia Liane; de Arauz, Luciana Juncioni; Carvalho, Maria de Fátima Henriques; Arakaki, Edna Emy Kumagai; Matsuzaki, Richard; Tiglea, Paulo

    2016-01-01

    This study is aimed at investigating a suitable method for rice sample preparation as well as validating and applying the method for monitoring the concentrations of total arsenic, cadmium, and lead in rice by using Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Various rice sample preparation procedures were evaluated. The analytical method was validated by measuring several parameters including limit of detection (LOD), limit of quantification (LOQ), linearity, relative bias, and repeatability. Regarding the sample preparation, recoveries of spiked samples were within the acceptable range from 89.3 to 98.2% for muffle furnace, 94.2 to 103.3% for heating block, 81.0 to 115.0% for hot plate, and 92.8 to 108.2% for microwave. Validation parameters showed that the method is fit for its purpose, with total arsenic, cadmium, and lead within the limits set by Brazilian legislation. The method was applied to analyze 37 rice samples (including polished, brown, and parboiled) consumed by the Brazilian population. The total arsenic, cadmium, and lead contents were lower than the established legislative values, except for total arsenic in one brown rice sample. This study indicates the need to establish monitoring programs for this type of cereal, with the aim of promoting public health. PMID:27766178
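    One common way to derive the LOD and LOQ validation parameters mentioned above is from a calibration curve; the sketch below uses hypothetical calibration data and the conventional 3.3s/slope and 10s/slope formulas, which may differ from the exact criteria applied in the study:

```python
# Minimal sketch with hypothetical calibration data: LOD = 3.3*s/slope and
# LOQ = 10*s/slope, where s is the residual standard error of the fit.
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])         # µg/L standards, assumed
signal = np.array([120, 980, 1850, 3720, 9200, 18300])   # instrument counts, assumed

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
s = residuals.std(ddof=2)                                 # two fitted parameters

print(f"LOD = {3.3 * s / slope:.3f} µg/L, LOQ = {10 * s / slope:.3f} µg/L")
```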

  2. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-03-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
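    The sketch below mirrors the sampling-plus-regression workflow described above on a toy response: a Latin hypercube design over invented input ranges followed by a least-squares fit of a response surface. The variable names and the stand-in "simulation" are assumptions, not the paper's thermodynamic model:

```python
# Minimal sketch: Latin hypercube sampling of input parameters followed by a
# least-squares linear response-surface fit to a toy model output.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(5)
lhs = qmc.LatinHypercube(d=3, seed=0)
# Hypothetical inputs: solar flux (W/m^2), ambient temperature (K), load (ohm).
X = qmc.scale(lhs.random(n=200), [600, 280, 1.0], [1000, 320, 10.0])

# Stand-in for the simulated power output of the generator.
power = (0.02 * X[:, 0] - 0.05 * (X[:, 1] - 300) + 0.8 * X[:, 2]
         + rng.normal(0, 0.5, 200))

A = np.column_stack([np.ones(len(X)), X])        # linear response surface
coef, *_ = np.linalg.lstsq(A, power, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((power - pred) ** 2) / np.sum((power - power.mean()) ** 2)
print(f"R^2 of fitted surface: {r2:.3f}")
```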

  3. Modeling association among demographic parameters in analysis of open population capture-recapture data.

    PubMed

    Link, William A; Barker, Richard J

    2005-03-01

    We present a hierarchical extension of the Cormack-Jolly-Seber (CJS) model for open population capture-recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis-Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
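    The following sketch is a deliberately stripped-down illustration of the Metropolis-Hastings machinery mentioned above: a random-walk sampler for a single survival probability with a flat prior and hypothetical counts, not the full hierarchical CJS likelihood:

```python
# Minimal sketch: random-walk Metropolis-Hastings for one survival probability
# phi, given y recaptures out of n marked animals and a uniform prior.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(7)
n, y = 120, 78                        # hypothetical marked/recaptured counts

def log_post(phi):
    if not 0.0 < phi < 1.0:
        return -np.inf
    return binom.logpmf(y, n, phi)    # flat prior on (0, 1)

phi, chain = 0.5, []
for _ in range(5000):
    prop = phi + rng.normal(0, 0.05)                      # symmetric proposal
    if np.log(rng.random()) < log_post(prop) - log_post(phi):
        phi = prop
    chain.append(phi)

post = np.array(chain[1000:])         # discard burn-in
print(f"posterior mean of phi = {post.mean():.3f}")
```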

  4. Modeling association among demographic parameters in analysis of open population capture-recapture data

    USGS Publications Warehouse

    Link, William A.; Barker, Richard J.

    2005-01-01

    We present a hierarchical extension of the Cormack–Jolly–Seber (CJS) model for open population capture–recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis–Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.

  5. Assessment of the quality of water from hand-dug wells in ghana.

    PubMed

    Nkansah, Marian Asantewah; Boadi, Nathaniel Owusu; Badu, Mercy

    2010-04-26

    This study focused upon the determination of physicochemical and microbial properties, including metals, selected anions and coliform bacteria, in drinking water samples from hand-dug wells in the Kumasi metropolis of the Republic of Ghana. The purpose was to assess the quality of water from these sources. Ten different water samples were taken from different parts of Kumasi, the capital of the Ashanti region of Ghana, and analyzed for physicochemical parameters including pH, electrical conductivity, total dissolved solids, alkalinity, total hardness and coliform bacteria. Metals and anions analyzed were Ca, Mg, Fe, Mn, NO3(-), NO2(-), SO4(2-), PO4(2-), F(-) and Cl(-). Bacteria analysed were total coliform and Escherichia coli. The data showed variation of the investigated parameters in the samples as follows: pH, 6.30-0.70; conductivity (EC), 46-682 μS/cm; PO4(3-), 0.67-76.00 mg/L; F(-), 0.20-0.80 mg/L; NO3(-), 0-0.968 mg/L; NO2(-), 0-0.063 mg/L; SO4(2-), 3.0-07.0 mg/L; Fe, 0-1.2 mg/L; Mn, 0-0.018 mg/L. Total coliform and Escherichia coli were below the minimum detection limit (MDL) of 20 MPN per 100 ml in all the samples. The concentrations of most of the investigated parameters in the drinking water samples from the Ashanti region were within the permissible limits of the World Health Organization drinking water quality guidelines.

  6. The Tully-Fisher relation for flat galaxies

    NASA Astrophysics Data System (ADS)

    Makarov, D. I.; Zaitseva, N. A.; Bizyaev, D. V.

    2018-06-01

    We construct a multiparametric Tully-Fisher (TF) relation for a large sample of edge-on galaxies from the Revised Flat Galaxy Catalog using H I data from the EDD database and parameters from the EGIS catalog. We incorporate a variety of additional parameters including structural parameters of edge-on galaxies in different bandpasses. Besides the rotation curve maximum, only the H I-to-optical luminosity ratio and optical colours play a statistically significant role in the multiparametric TF relation. We are able to decrease the standard deviation of the multiparametric TF relation down to 0.32 mag, which is at the level of best modern samples of galaxies used for studies of the matter motion in the Universe via the TF-relation.

  7. Aircraft data summaries for the SURE intensives. Final report. [Sampling done August 1977 near Rockport, Indiana and Duncan Falls, Ohio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumenthal, D.L.; Tommerdahl, J.B.; McDonald, J.A.

    1981-09-01

    As part of the EPRI sulfate regional experiment (SURE), Meteorology Research, Inc., (MRI) and Research Triangle Institute (RTI) conducted six air quality sampling programs in the eastern United States using instrumented aircraft. This volume includes the air quality and meteorological data obtained during the August 1977 Intensive when MRI sampled near the Rockport, Indiana, SURE Station and RTI sampled near the Duncan Falls, Ohio, SURE Station. Sampling data are presented for all measured parameters.

  8. Evaluation of the information content of long-term wastewater characteristics data in relation to activated sludge model parameters.

    PubMed

    Alikhani, Jamal; Takacs, Imre; Al-Omari, Ahmed; Murthy, Sudhir; Massoudieh, Arash

    2017-03-01

    A parameter estimation framework was used to evaluate the ability of observed data from a full-scale nitrification-denitrification bioreactor to reduce the uncertainty associated with the bio-kinetic and stoichiometric parameters of an activated sludge model (ASM). Samples collected over a period of 150 days from the effluent as well as from the reactor tanks were used. A hybrid genetic algorithm and Bayesian inference were used to perform deterministic and probabilistic parameter estimation, respectively. The main goal was to assess the ability of the data to obtain reliable parameter estimates for a modified version of the ASM. The modified ASM model includes methylotrophic processes, which play the main role in methanol-fed denitrification. Sensitivity analysis was also used to explain the ability of the data to provide information about each of the parameters. The results showed that the uncertainty in the estimates of the most sensitive parameters (including growth rate, decay rate, and yield coefficients) decreased with respect to the prior information.

  9. Effects of Chitin and Sepia Ink Hybrid Hemostatic Sponge on the Blood Parameters of Mice

    PubMed Central

    Zhang, Wei; Sun, Yu-Lin; Chen, Dao-Hai

    2014-01-01

    Chitin and sepia ink hybrid hemostatic sponge (CTSH sponge), a new biomedical material, has been extensively studied for its beneficial biological properties of hemostasis and stimulation of healing. However, studies examining the safety of CTSH sponge with respect to the blood system are lacking. This experiment aimed to examine whether CTSH sponge has a negative effect on the blood system of mice treated with a dose of CTSH sponge (135 mg/kg). The CTSH sponge was implanted into the abdominal subcutaneous tissue, and a laparotomy was used for blood sampling from the abdominal aorta. Several kinds of blood parameters were measured at different time points: coagulation parameters including thrombin time (TT), prothrombin time (PT), activated partial thromboplastin time (APTT), fibrinogen (FIB) and platelet factor 4 (PF4); an anticoagulation parameter, antithrombin III (AT-III); fibrinolytic parameters including plasminogen (PLG), fibrin degradation product (FDP) and D-dimer; and hemorheology parameters including blood viscosity (BV) and plasma viscosity (PV). Results showed that CTSH sponge has no significant effect on the blood parameters of mice. The data suggest that CTSH sponge can be applied in the field of biomedical materials and has the potential to be developed into a clinical hemostatic agent. PMID:24727395

  10. Blood Culture Testing via a Mobile App That Uses a Mobile Phone Camera: A Feasibility Study

    PubMed Central

    Chong, Yong Pil; Jang, Seongsoo; Kim, Mi Na; Kim, Jeong Hoon; Kim, Woo Sung

    2016-01-01

    Background To evaluate patients with fever of unknown origin or those with suspected bacteremia, the precision of blood culture tests is critical. An inappropriate step in the test process or error in a parameter could lead to a false-positive result, which could then affect the direction of treatment in critical conditions. Mobile health apps can be used to resolve problems with blood culture tests, and such apps can hence ensure that point-of-care guidelines are followed and processes are monitored for blood culture tests. Objective In this pilot project, we aimed to investigate the feasibility of using a mobile blood culture app to manage blood culture test quality. We implemented the app at a university hospital in South Korea to assess the potential for its utilization in a clinical environment by reviewing the usage data among a small group of users and by assessing their feedback and the data related to blood culture sampling. Methods We used an iOS-based blood culture app that uses an embedded camera to scan the patient identification and sample number bar codes. A total of 4 medical interns working at 2 medical intensive care units (MICUs) participated in this project, which spanned 3 weeks. App usage and blood culture sampling parameters (including sampler, sampling site, sampling time, and sample volume) were analyzed. The compliance of sampling parameter entry was also measured. In addition, the participants’ opinions regarding patient safety, timeliness, efficiency, and usability were recorded. Results In total, 356/644 (55.3%) of all blood culture samples obtained at the MICUs were examined using the app, including 254/356 (71.3%) with blood collection volumes of 5-7 mL and 256/356 (71.9%) with blood collection from the peripheral veins. The sampling volume differed among the participants. Sampling parameters were completely entered in 354/356 cases (99.4%). All the participants agreed that the app ensured good patient safety, disagreed on its timeliness, and did not believe that it was efficient. Although the bar code scanning speed was acceptable, the Wi-Fi environment required improvement. Moreover, the participants requested feedback regarding their sampling quality. Conclusions Although this app could be used in the clinical setting, improvements in the app functions, environment network, and internal policy of blood culture testing are needed to ensure hospital-wide use. PMID:27784649

  11. Blood Culture Testing via a Mobile App That Uses a Mobile Phone Camera: A Feasibility Study.

    PubMed

    Lee, Guna; Lee, Yura; Chong, Yong Pil; Jang, Seongsoo; Kim, Mi Na; Kim, Jeong Hoon; Kim, Woo Sung; Lee, Jae-Ho

    2016-10-26

    To evaluate patients with fever of unknown origin or those with suspected bacteremia, the precision of blood culture tests is critical. An inappropriate step in the test process or error in a parameter could lead to a false-positive result, which could then affect the direction of treatment in critical conditions. Mobile health apps can be used to resolve problems with blood culture tests, and such apps can hence ensure that point-of-care guidelines are followed and processes are monitored for blood culture tests. In this pilot project, we aimed to investigate the feasibility of using a mobile blood culture app to manage blood culture test quality. We implemented the app at a university hospital in South Korea to assess the potential for its utilization in a clinical environment by reviewing the usage data among a small group of users and by assessing their feedback and the data related to blood culture sampling. We used an iOS-based blood culture app that uses an embedded camera to scan the patient identification and sample number bar codes. A total of 4 medical interns working at 2 medical intensive care units (MICUs) participated in this project, which spanned 3 weeks. App usage and blood culture sampling parameters (including sampler, sampling site, sampling time, and sample volume) were analyzed. The compliance of sampling parameter entry was also measured. In addition, the participants' opinions regarding patient safety, timeliness, efficiency, and usability were recorded. In total, 356/644 (55.3%) of all blood culture samples obtained at the MICUs were examined using the app, including 254/356 (71.3%) with blood collection volumes of 5-7 mL and 256/356 (71.9%) with blood collection from the peripheral veins. The sampling volume differed among the participants. Sampling parameters were completely entered in 354/356 cases (99.4%). All the participants agreed that the app ensured good patient safety, disagreed on its timeliness, and did not believe that it was efficient. Although the bar code scanning speed was acceptable, the Wi-Fi environment required improvement. Moreover, the participants requested feedback regarding their sampling quality. Although this app could be used in the clinical setting, improvements in the app functions, environment network, and internal policy of blood culture testing are needed to ensure hospital-wide use.

  12. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests and the widely distributed read counts and dispersions of different genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as the Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions are estimated from the reference's distribution; using that information, the power and sample size are estimated and summarized. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to perform power and sample size estimation for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.

  13. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE PAGES

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be ineffective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effects and more than 1000 samples to assess the two-way interaction effects. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum number of samples needed to compute the first-order and total sensitivity indices correctly is 1050. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
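    To show the flavour of the screening step discussed above, here is a minimal one-at-a-time elementary-effects sketch on a toy three-parameter model; it follows the spirit of the Morris (MOAT) method but is not the PSUADE or SAC-SMA setup used in the study:

```python
# Minimal sketch: one-at-a-time elementary effects on a toy model, ranking
# parameter importance by the mean absolute effect (Morris-style mu*).
import numpy as np

rng = np.random.default_rng(11)

def model(p):
    # Hypothetical model: strongly sensitive to p[0], weakly to p[1], not to p[2].
    return 5.0 * p[0] + 0.5 * p[1] ** 2 + 0.0 * p[2]

n_params, n_trajectories, delta = 3, 50, 0.1
effects = np.zeros((n_trajectories, n_params))
for t in range(n_trajectories):
    base = rng.uniform(0, 1 - delta, size=n_params)   # random base point
    f0 = model(base)
    for i in range(n_params):
        pert = base.copy()
        pert[i] += delta                              # perturb one parameter
        effects[t, i] = (model(pert) - f0) / delta

mu_star = np.abs(effects).mean(axis=0)                # importance ranking
print(dict(zip(["p0", "p1", "p2"], mu_star.round(2))))
```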

  14. Degree of Ice Particle Surface Roughness Inferred from Polarimetric Observations

    NASA Technical Reports Server (NTRS)

    Hioki, Souichiro; Yang, Ping; Baum, Bryan A.; Platnick, Steven; Meyer, Kerry G.; King, Michael D.; Riedi, Jerome

    2016-01-01

    The degree of surface roughness of ice particles within thick, cold ice clouds is inferred from multidirectional, multi-spectral satellite polarimetric observations over oceans, assuming a column-aggregate particle habit. An improved roughness inference scheme is employed that provides a more noise-resilient roughness estimate than the conventional best-fit approach. The improvements include the introduction of a quantitative roughness parameter based on empirical orthogonal function analysis and proper treatment of polarization due to atmospheric scattering above clouds. A global 1-month data sample supports the use of a severely roughened ice habit to simulate the polarized reflectivity associated with ice clouds over ocean. The density distribution of the roughness parameter inferred from the global 1-month data sample and further analyses of a few case studies demonstrate the significant variability of ice cloud single-scattering properties. However, the present theoretical results do not agree with observations in the tropics. In the extra-tropics, the roughness parameter is inferred but 74% of the sample is out of the expected parameter range. Potential improvements are discussed to enhance the depiction of the natural variability on a global scale.

  15. Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 2: Mission payloads subsystem description

    NASA Technical Reports Server (NTRS)

    Dupnick, E.; Wiggins, D.

    1980-01-01

    The scheduling algorithm for mission planning and logistics evaluation (SAMPLE) is presented. Two major subsystems are included: the mission payloads program and the set covering program. Formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.

  16. HIV Model Parameter Estimates from Interruption Trial Data including Drug Efficacy and Reservoir Dynamics

    PubMed Central

    Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan

    2012-01-01

    Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
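
    As a hedged illustration of the estimation pipeline described above (not the paper's full model or code), the sketch below fits a minimal two-compartment viral-decay ODE to synthetic log viral-load data by nonlinear least squares, the kind of initial estimate that would then seed an MCMC run; the model structure, parameter names, initial conditions and data values are assumed placeholders.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def viral_decay(y, t, delta, c, p):
    I, V = y                          # productively infected cells, free virus
    return [-delta * I, p * I - c * V]

def predict_log10V(theta, t, I0, V0):
    delta, c, p = np.exp(theta)       # optimise on the log scale to keep rates positive
    sol = odeint(viral_decay, [I0, V0], t, args=(delta, c, p))
    return np.log10(np.clip(sol[:, 1], 1e-3, None))

t_days = np.array([0, 1, 2, 4, 7, 10, 14, 21], dtype=float)      # illustrative schedule
log10V_obs = np.array([5.0, 4.6, 4.1, 3.4, 2.8, 2.4, 2.0, 1.6])  # illustrative data

theta0 = np.log([0.5, 3.0, 100.0])    # initial guesses for delta, c, p
fit = least_squares(lambda th: predict_log10V(th, t_days, I0=1e3, V0=1e5) - log10V_obs,
                    theta0)
print("estimated delta, c, p:", np.round(np.exp(fit.x), 3))
```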

  17. Optimal sampling theory and population modelling - Application to determination of the influence of the microgravity environment on drug distribution and elimination

    NASA Technical Reports Server (NTRS)

    Drusano, George L.

    1991-01-01

    Optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by a traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with the NONMEM (Sheiner et al., 1977) approach, in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.

  18. Effects of consensus training on the reliability of auditory perceptual ratings of voice quality.

    PubMed

    Iwarsson, Jenny; Reinholt Petersen, Niels

    2012-05-01

    This study investigates the effect of consensus training of listeners on intrarater and interrater reliability and agreement of perceptual voice analysis. The use of such training, including a reference voice sample, could be assumed to make the internal standards held in memory common and more robust, which is of great importance to reduce the variability of auditory perceptual ratings. A prospective design with testing before and after training was used. Thirteen students of audiologopedics served as listening subjects. The ratings were made using a multidimensional protocol with four-point equal-appearing interval scales. The stimuli consisted of text reading by authentic dysphonic patients. The consensus training for each perceptual voice parameter included (1) definition, (2) underlying physiology, (3) presentation of carefully selected sound examples representing the parameter in three different grades followed by group discussions of perceived characteristics, and (4) practical exercises including imitation to make use of the listeners' proprioception. Intrarater reliability and agreement showed a marked improvement for intermittent aphonia but not for vocal fry. Interrater reliability was high for most parameters before training with a slight increase after training. Interrater agreement showed marked increases for most voice quality parameters as a result of the training. The results support the recommendation of specific consensus training, including use of a reference voice sample material, to calibrate, equalize, and stabilize the internal standards held in memory by the listeners. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  19. Pelvic floor muscle training protocol for stress urinary incontinence in women: A systematic review.

    PubMed

    Oliveira, Marlene; Ferreira, Margarida; Azevedo, Maria João; Firmino-Machado, João; Santos, Paula Clara

    2017-07-01

    Strengthening exercises for pelvic floor muscles (SEPFM) are considered the first approach in the treatment of stress urinary incontinence (SUI). Nevertheless, there is no evidence about training parameters. The aim was to identify the protocol and/or most effective training parameters in the treatment of female SUI. A literature search was conducted in the PubMed, Cochrane Library, PEDro, Web of Science and Lilacs databases, with publishing dates ranging from January 1992 to March 2014. The articles included were experimental studies published in English in which SEPFM were compared with placebo, usual or no treatment. The samples comprised women diagnosed with SUI, aged between 18 and 65 years. The assessment of methodological quality was performed based on the PEDro scale. Seven high methodological quality articles were included in this review. The sample consisted of 331 women, mean age 44.4±5.51 years, average duration of urinary loss of 64±5.66 months and severity of SUI ranging from mild to severe. SEPFM programs included different training parameters concerning the PFM. Some studies have applied abdominal training and adjuvant techniques. Urine leakage cure rates varied from 28.6 to 80%, while the strength increase of PFM varied from 15.6 to 161.7%. The most effective training protocol consists of SEPFM by digital palpation combined with biofeedback monitoring and vaginal cones, with a 12-week training period and ten repetitions per series in different positions, compared with SEPFM alone or a lack of treatment.

  20. The use of Landsat for monitoring water parameters in the coastal zone

    NASA Technical Reports Server (NTRS)

    Bowker, D. E.; Witte, W. G.

    1977-01-01

    Landsats 1 and 2 have been successful in detecting and quantifying suspended sediment and several other important parameters in the coastal zone, including chlorophyll, particles, alpha (light transmission), tidal conditions, acid and sewage dumps, and in some instances oil spills. When chlorophyll a is present in detectable quantities, however, it is shown to interfere with the measurement of sediment. The Landsat banding problem impairs the instrument resolution and places a requirement on the sampling program to collect surface data from a sufficiently large area. A sampling method which satisfies this condition is demonstrated.

  1. VizieR Online Data Catalog: Fundamental parameters of Kepler stars (Silva Aguirre+, 2015)

    NASA Astrophysics Data System (ADS)

    Silva Aguirre, V.; Davies, G. R.; Basu, S.; Christensen-Dalsgaard, J.; Creevey, O.; Metcalfe, T. S.; Bedding, T. R.; Casagrande, L.; Handberg, R.; Lund, M. N.; Nissen, P. E.; Chaplin, W. J.; Huber, D.; Serenelli, A. M.; Stello, D.; van Eylen, V.; Campante, T. L.; Elsworth, Y.; Gilliland, R. L.; Hekker, S.; Karoff, C.; Kawaler, S. D.; Kjeldsen, H.; Lundkvist, M. S.

    2016-02-01

    Our sample has been extracted from the 77 exoplanet host stars presented in Huber et al. (2013, Cat. J/ApJ/767/127). We have made use of the full time-base of observations from the Kepler satellite to uniformly determine precise fundamental stellar parameters, including ages, for a sample of exoplanet host stars where high-quality asteroseismic data were available. We devised a Bayesian procedure flexible in its input and applied it to different grids of models to study systematics from input physics and extract statistically robust properties for all stars. (4 data files).

  2. Is multidetector CT-based bone mineral density and quantitative bone microstructure assessment at the spine still feasible using ultra-low tube current and sparse sampling?

    PubMed

    Mei, Kai; Kopp, Felix K; Bippus, Rolf; Köhler, Thomas; Schwaiger, Benedikt J; Gersing, Alexandra S; Fehringer, Andreas; Sauter, Andreas; Münzel, Daniela; Pfeiffer, Franz; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B; Baum, Thomas

    2017-12-01

    Osteoporosis diagnosis using multidetector CT (MDCT) is limited by the relatively high radiation exposure. We investigated the effect of simulated ultra-low-dose protocols on in-vivo bone mineral density (BMD) and quantitative trabecular bone assessment. Institutional review board approval was obtained. Twelve subjects with osteoporotic vertebral fractures and 12 age- and gender-matched controls undergoing routine thoracic and abdominal MDCT were included (average effective dose: 10 mSv). Ultra-low radiation examinations were achieved by simulating lower tube currents and sparse sampling at 50%, 25% and 10% of the original dose. BMD and trabecular bone parameters were extracted in T10-L5. Except for BMD measurements in sparse sampling data, absolute values of all parameters derived from ultra-low-dose data were significantly different from those derived from original dose images (p<0.05). BMD, apparent bone fraction and trabecular thickness were still consistently lower in subjects with than in those without fractures (p<0.05). In ultra-low-dose scans, BMD and microstructure parameters were able to differentiate subjects with and without vertebral fractures, suggesting osteoporosis diagnosis is feasible. However, absolute values differed from original values. BMD from sparse sampling appeared to be more robust. This dose-dependency of parameters should be considered for future clinical use. • BMD and quantitative bone parameters are assessable in ultra-low-dose in vivo MDCT scans. • Bone mineral density does not change significantly when sparse sampling is applied. • Quantitative trabecular bone microstructure measurements are sensitive to dose reduction. • Osteoporosis subjects could be differentiated even at 10% of original dose. • Radiation exposure should be considered when comparing quantitative bone parameters.

  3. Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.

    PubMed

    Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra

    2016-11-20

    The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
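
    For orientation, the sketch below covers only the simplest special case (a parallel two-arm cluster randomised trial with a continuous outcome), where the individually randomised sample size is inflated by the design effect 1 + (m - 1)·ICC; the stepped wedge and closed cohort formulae derived in the paper additionally involve the cluster and individual autocorrelations, which are not reproduced here. All numbers are illustrative.

```python
import math
from scipy.stats import norm

def n_per_arm_cluster(delta, sd, icc, cluster_size, alpha=0.05, power=0.8):
    """Clusters and individuals per arm for a parallel cluster RCT, continuous outcome."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_individual = 2 * (z * sd / delta) ** 2           # per arm, individual randomisation
    design_effect = 1 + (cluster_size - 1) * icc       # inflation for within-cluster correlation
    n_inflated = n_individual * design_effect
    n_clusters = math.ceil(n_inflated / cluster_size)
    return n_clusters, n_clusters * cluster_size

# detect a 0.5 SD difference with ICC = 0.05 and 20 participants per cluster
print(n_per_arm_cluster(delta=0.5, sd=1.0, icc=0.05, cluster_size=20))
```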

  4. Influence of Population Variation of Physiological Parameters in Computational Models of Space Physiology

    NASA Technical Reports Server (NTRS)

    Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.

    2016-01-01

    The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors including high sensitivity to the initial fluid distribution. The FEM study found that intraocular pressure and intracranial pressure had dominant impact on the peak strains in the ONH and retro-laminar optic nerve, respectively; optic nerve and lamina cribrosa stiffness were also important. This investigation illustrates the ability of LHSPRCC to identify the most influential physiological parameters, which must therefore be well-characterized to produce the most accurate numerical results.
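
    A minimal sketch of the LHS-PRCC idea (not the authors' code): generate a Latin hypercube sample over the parameter bounds, run the model, and compute partial rank correlation coefficients by correlating the rank residuals of each parameter and of the output after regressing out all other parameters. The toy response below stands in for an LPM or FEM run.

```python
import numpy as np
from scipy.stats import rankdata

def latin_hypercube(n_samples, bounds, rng):
    """One point per stratum in each dimension, columns permuted independently."""
    k = len(bounds)
    u = (rng.random((n_samples, k)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(k):
        u[:, j] = rng.permutation(u[:, j])
    lo, hi = np.array(bounds).T
    return lo + (hi - lo) * u

def prcc(X, y):
    """Partial rank correlation of each column of X with y, controlling for the others."""
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    out = np.empty(X.shape[1])
    for i in range(X.shape[1]):
        others = np.column_stack([np.ones(len(y)), np.delete(R, i, axis=1)])
        res_x = R[:, i] - others @ np.linalg.lstsq(others, R[:, i], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out[i] = np.corrcoef(res_x, res_y)[0, 1]
    return out

rng = np.random.default_rng(1)
X = latin_hypercube(500, bounds=[(0, 1), (0, 1), (0, 1)], rng=rng)
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(len(X))   # toy model output
print(np.round(prcc(X, y), 2))
```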

  5. Evaluation of the impact of RNA preservation methods of spiders for de novo transcriptome assembly.

    PubMed

    Kono, Nobuaki; Nakamura, Hiroyuki; Ito, Yusuke; Tomita, Masaru; Arakawa, Kazuharu

    2016-05-01

    With advances in high-throughput sequencing technologies, de novo transcriptome sequencing and assembly has become a cost-effective method to obtain comprehensive genetic information of a species of interest, especially in nonmodel species with large genomes such as spiders. However, high-quality RNA is essential for successful sequencing, and sample preservation conditions require careful consideration for the effective storage of field-collected samples. To this end, we report a streamlined feasibility study of various storage conditions and their effects on de novo transcriptome assembly results. The storage parameters considered include temperatures ranging from room temperature to -80°C; preservatives, including ethanol, RNAlater, TRIzol and RNAlater-ICE; and sample submersion states. As a result, intact RNA was extracted and assembly was successful when samples were preserved at low temperatures regardless of the type of preservative used. The assemblies as well as the gene expression profiles were shown to be robust to RNA degradation, when 30 million 150-bp paired-end reads are obtained. The parameters for sample storage, RNA extraction, library preparation, sequencing and in silico assembly considered in this work provide a guideline for the study of field-collected samples of spiders. © 2015 John Wiley & Sons Ltd.

  6. pypet: A Python Toolkit for Data Management of Parameter Explorations

    PubMed Central

    Meyer, Robert; Obermayer, Klaus

    2016-01-01

    pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines. PMID:27610080

  7. pypet: A Python Toolkit for Data Management of Parameter Explorations.

    PubMed

    Meyer, Robert; Obermayer, Klaus

    2016-01-01

    pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines.

  8. Non-destructive sampling of a comet

    NASA Astrophysics Data System (ADS)

    Jessberger, H. L.; Kotthaus, M.

    1991-04-01

    Various conditions which must be met for the development of a nondestructive sampling and acquisition system are outlined and the development of a new robotic sampling system suited for use on a cometary surface is briefly discussed. The Rosetta mission of ESA will take samples of a comet nucleus and return both core and volatile samples to Earth. Various considerations which must be taken into account for such a project are examined, including the identification of design parameters for sample quality; the identification of the most probable site conditions; the development of a sample acquisition system with respect to these conditions; the production of model materials and model conditions; and the investigation of the relevant material properties. An adequate sampling system should also be designed and built, including various tools, and the system should be tested under simulated cometary conditions.

  9. The effect of uphill and downhill walking on gait parameters: A self-paced treadmill study.

    PubMed

    Kimel-Naor, Shani; Gottlieb, Amihai; Plotnik, Meir

    2017-07-26

    It has been shown that gait parameters vary systematically with the slope of the surface when walking uphill (UH) or downhill (DH) (Andriacchi et al., 1977; Crowe et al., 1996; Kawamura et al., 1991; Kirtley et al., 1985; McIntosh et al., 2006; Sun et al., 1996). However, gait trials performed on inclined surfaces have been subject to certain technical limitations, including the use of fixed-speed treadmills (TMs) or, alternatively, sampling only a few gait cycles on inclined ramps. Further, prior work has not analyzed upper body kinematics. This study aims to investigate effects of slope on gait parameters using a self-paced TM (SPTM) which facilitates more natural walking, including measuring upper body kinematics and gait coordination parameters. Gait of 11 young healthy participants was sampled while walking at a steady-state speed. Measurements were made at slopes of +10°, 0° and -10°. Force plates and a motion capture system were used to reconstruct twenty spatiotemporal gait parameters. For validation, previously described parameters were compared with the literature, and novel parameters measuring upper body kinematics and bilateral gait coordination were also analyzed. Results showed that most lower and upper body gait parameters were affected by walking slope angle. Specifically, UH walking had a higher impact on gait kinematics than DH walking. However, gait coordination parameters were not affected by walking slope, suggesting that gait asymmetry, left-right coordination and gait variability are robust characteristics of walking. The findings of the study are discussed in reference to a potential combined effect of slope and gait speed. Follow-up studies are needed to explore the relative effects of each of these factors. Copyright © 2017. Published by Elsevier Ltd.

  10. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind, astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC.
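
    A minimal, hedged sketch of the likelihood-free idea underlying the sampler: plain ABC rejection with a summary-statistic distance. astroABC layers the sequential tolerance schedule, perturbation kernels, particle weights and MPI parallelism on top of this; the forward model, prior ranges and tolerance below are illustrative placeholders, not astroABC's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta, n=200):
    mu, sigma = theta
    return rng.normal(mu, sigma, size=n)     # stands in for a cosmological simulation

def distance(sim, obs):
    # compare summary statistics (mean and standard deviation) rather than the full data
    return np.hypot(sim.mean() - obs.mean(), sim.std() - obs.std())

obs = rng.normal(1.0, 2.0, size=200)          # "observed" data
tolerance, accepted = 0.3, []
while len(accepted) < 200:
    theta = np.array([rng.uniform(-5, 5), rng.uniform(0.1, 5)])   # draw from the prior
    if distance(forward_model(theta), obs) < tolerance:
        accepted.append(theta)

posterior = np.array(accepted)
print("approximate posterior mean of (mu, sigma):", np.round(posterior.mean(axis=0), 2))
```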

  11. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
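
    As a hedged sketch of the design criteria being compared, the code below assembles the Fisher Information Matrix for the Verhulst-Pearl logistic model from numerically estimated sensitivities at candidate sampling times and scores two designs by the D-optimality criterion det(FIM); the parameter values, noise level and candidate time grids are assumed for illustration.

```python
import numpy as np

def logistic(t, theta):
    K, r, x0 = theta
    return K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1.0))

def fim(times, theta, sigma=1.0, eps=1e-6):
    """FIM for i.i.d. Gaussian observation noise: sum over times of s(t) s(t)^T / sigma^2."""
    p = len(theta)
    S = np.zeros((len(times), p))
    for j in range(p):                         # forward-difference sensitivities
        d = np.zeros(p)
        d[j] = eps * max(1.0, abs(theta[j]))
        S[:, j] = (logistic(times, theta + d) - logistic(times, theta)) / d[j]
    return S.T @ S / sigma**2

theta = np.array([17.5, 0.7, 0.1])             # K, r, x0 (illustrative values)
uniform_design = np.linspace(0, 25, 10)
early_design = np.linspace(0, 10, 10)
for name, times in [("uniform 0-25", uniform_design), ("early 0-10", early_design)]:
    print(name, "det(FIM) =", np.linalg.det(fim(times, theta)))
```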

  12. Analysing the 21 cm signal from the epoch of reionization with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Shimabukuro, Hayato; Semelin, Benoit

    2017-07-01

    The 21 cm signal from the epoch of reionization should be observed within the next decade. While a simple statistical detection is expected with Square Kilometre Array (SKA) pathfinders, the SKA will hopefully produce a full 3D mapping of the signal. To extract from the observed data constraints on the parameters describing the underlying astrophysical processes, inversion methods must be developed. For example, the Markov Chain Monte Carlo method has been successfully applied. Here, we test another possible inversion method: artificial neural networks (ANNs). We produce a training set that consists of 70 individual samples. Each sample is made of the 21 cm power spectrum at different redshifts produced with the 21cmFast code plus the value of three parameters used in the seminumerical simulations that describe astrophysical processes. Using this set, we train the network to minimize the error between the parameter values it produces as an output and the true values. We explore the impact of the architecture of the network on the quality of the training. Then we test the trained network on a new set of 54 test samples with different values of the parameters. We find that the quality of the parameter reconstruction depends on the sensitivity of the power spectrum to the different parameters at a given redshift, that including thermal noise and sample variance decreases the quality of the reconstruction and that using the power spectrum at several redshifts as an input to the ANN improves the quality of the reconstruction. We conclude that ANNs are a viable inversion method whose main strength is that they require a sparse exploration of the parameter space and thus should be usable with full numerical simulations.
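
    A minimal sketch of the inversion idea using scikit-learn (not the authors' network or 21cmFast outputs): train a small multilayer perceptron to map power-spectrum-like feature vectors to three parameters, then evaluate it on held-out samples. The synthetic spectra below are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_train, n_k = 70, 20                       # 70 training samples, 20 k-bins

def toy_spectrum(p):
    # placeholder mapping from 3 parameters to a "power spectrum"
    return np.sin(np.linspace(0, 3, n_k) * (1 + 4 * p[0])) * (1 + p[1]) + p[2]

theta = rng.uniform(0, 1, size=(n_train, 3))
spectra = np.stack([toy_spectrum(p) for p in theta])

scaler = StandardScaler().fit(spectra)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(scaler.transform(spectra), theta)     # multi-output regression: spectrum -> parameters

theta_test = rng.uniform(0, 1, size=(5, 3))
spectra_test = np.stack([toy_spectrum(p) for p in theta_test])
print("reconstruction errors:")
print(np.round(net.predict(scaler.transform(spectra_test)) - theta_test, 2))
```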

  13. CRUMP 2003 Selected Water Sample Results

    EPA Pesticide Factsheets

    Point locations and water sampling results performed in 2003 by the Church Rock Uranium Monitoring Project (CRUMP) a consortium of organizations (Navajo Nation Environmental Protection Agency, US Environmental Protection Agency, New Mexico Scientific Laboratory Division, Navajo Tribal Utility Authority and NM Water Quality Control Commission). Samples include general description of the wells sampled, general chemistry, heavy metals and aesthetic parameters, and selected radionuclides. Here only six sampling results are presented in this point shapefile, including: Gross Alpha (U-Nat Ref.) (pCi/L), Gross Beta (Sr/Y-90 Ref.) (pCi/L), Radium-226 (pCi/L), Radium-228 (pCi/L), Total Uranium (pCi/L), and Uranium mass (ug/L). The CRUMP samples were collected in the area of Churchrock, NM in the Eastern AUM Region of the Navajo Nation.

  14. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1982-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  15. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1984-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  16. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

  17. L-moments and TL-moments of the generalized lambda distribution

    USGS Publications Warehouse

    Asquith, W.H.

    2007-01-01

    The 4-parameter generalized lambda distribution (GLD) is a flexible distribution capable of mimicking the shapes of many distributions and data samples including those with heavy tails. The method of L-moments and the recently developed method of trimmed L-moments (TL-moments) are attractive techniques for parameter estimation for heavy-tailed distributions for which the L- and TL-moments have been defined. Analytical solutions for the first five L- and TL-moments in terms of GLD parameters are derived. Unfortunately, numerical methods are needed to compute the parameters from the L- or TL-moments. Algorithms are suggested for parameter estimation. Application of the GLD using both L- and TL-moment parameter estimates from example data is demonstrated, and comparison of the L-moment fit of the 4-parameter kappa distribution is made. A small simulation study of the 98th percentile (far-right tail) is conducted for a heavy-tail GLD with high-outlier contamination. The simulations show, with respect to estimation of the 98th-percent quantile, that TL-moments are less biased (more robust) in the presence of high-outlier contamination. However, the robustness comes at the expense of considerably more sampling variability. © 2006 Elsevier B.V. All rights reserved.
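
    For reference, a short sketch of the unbiased sample L-moments that the paper maps to GLD parameters, computed from probability-weighted moments; the numerical solution of the GLD parameters from these moments is not reproduced here, and the heavy-tailed test sample is illustrative.

```python
import numpy as np
from scipy.special import comb

def sample_lmoments(x):
    """Return l1, l2, L-skewness (t3) and L-kurtosis (t4) from a data sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    # probability-weighted moments b_r
    b = [np.sum(comb(i - 1, r) / comb(n - 1, r) * x) / n for r in range(4)]
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3 / l2, l4 / l2

rng = np.random.default_rng(0)
heavy_tailed = rng.standard_t(df=3, size=5000)   # toy heavy-tailed sample
print(np.round(sample_lmoments(heavy_tailed), 3))
```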

  18. Estimating parameters of hidden Markov models based on marked individuals: use of robust design data

    USGS Publications Warehouse

    Kendall, William L.; White, Gary C.; Hines, James E.; Langtimm, Catherine A.; Yoshizaki, Jun

    2012-01-01

    Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last twenty years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We also provide user-friendly software to implement these models. This general framework could also be used by practitioners to consider constrained models of particular interest, or model the relationship between within-primary period parameters (e.g., state structure) and between-primary period parameters (e.g., state transition probabilities).

  19. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

    Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analysis such as sensitivity and uncertainty analysis, which require running these computationally expensive models several times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space, while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; on the contrary, PLHS generates a series of smaller sub-sets (also called 'slices') while: (1) each sub-set is Latin hypercube and achieves maximum stratification in any one dimensional projection; (2) the progressive addition of sub-sets remains Latin hypercube; and thus (3) the entire sample set is Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over the one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy, and convergence rate).

  20. Aircraft data summaries for the SURE intensives. Final report. [Sampling done October, 1978 near Duncan Falls, Ohio and Giles County, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keifer, W.S.; Blumenthal, D.L.; Tommerdahl, J.B.

    1981-09-01

    As part of the EPRI sulfate regional experiment (SURE), Meteorology Research, Inc., (MRI) and Research Triangle Institute (RTI) conducted six air quality sampling programs in the eastern United States using instrumented aircraft. This volume includes the air quality and meteorological data obtained during the October 1978 intensive when MRI sampled near the Giles County, Tennessee, SURE Station and RTI sampled near the Duncan Falls, Ohio, SURE Station. Sampling data are presented for all measured parameters.

  1. Effects of Intra-Family Parameters: Educative Style and Academic Knowledge of Parents and Their Economic Conditions on Teenagers' Personality and Behavior

    ERIC Educational Resources Information Center

    Bakhtavar, Mohammad; Bayova, Rana

    2015-01-01

    The present study aims to investigate the effects of intra-family parameters; educative styles and academic knowledge of parents and their economic condition on teenagers' personality and behavior. The present study is a descriptive survey. The statistical sample of the study included 166 teenage students from Baku, Azerbaijan and 332 of their…

  2. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    PubMed

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
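
    A hedged sketch of the baseline dynamics the paper improves on: steepest ascent on the log-likelihood of a pairwise Ising (maximum entropy) model, with model moments estimated by Gibbs sampling; the rectification of the parameter space is not implemented here, and the binary "data" are random placeholders rather than retinal recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_data = 10, 2000
data = np.where(rng.random((n_data, N)) < 0.3, 1, -1)   # placeholder +/-1 activity patterns
m_data = data.mean(axis=0)                              # first moments <s_i>
C_data = data.T @ data / n_data                         # pairwise moments <s_i s_j>

h, J = np.zeros(N), np.zeros((N, N))

def gibbs_sample(h, J, n_samples=200, n_sweeps=10):
    """Approximate samples from p(s) ~ exp(h.s + 0.5 s.J.s) via single-spin Gibbs updates."""
    s = np.where(rng.random(N) < 0.5, 1, -1)
    out = np.empty((n_samples, N))
    for k in range(n_samples):
        for _ in range(n_sweeps):
            for i in range(N):
                z = h[i] + J[i] @ s - J[i, i] * s[i]
                s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * z)) else -1
        out[k] = s
    return out

learning_rate = 0.1
for step in range(30):                        # plain (un-rectified) steepest ascent
    samples = gibbs_sample(h, J)
    m_model = samples.mean(axis=0)
    C_model = samples.T @ samples / len(samples)
    h += learning_rate * (m_data - m_model)   # moment-matching update for the fields
    J += learning_rate * (C_data - C_model)   # moment-matching update for the couplings
    np.fill_diagonal(J, 0.0)

print("largest first-moment mismatch:", np.abs(m_data - m_model).max())
```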

  3. Correlations of water quality parameters with mutagenicity of chlorinated drinking water samples.

    PubMed

    Schenck, Kathleen M; Sivaganesan, Mano; Rice, Glenn E

    2009-01-01

    Adverse health effects that may result from chronic exposure to mixtures of disinfection by-products (DBPs) present in drinking waters may be linked to both the types and concentrations of DBPs present. Depending on the characteristics of the source water and treatment processes used, both types and concentrations of DBPs found in drinking waters vary substantially. The composition of a drinking-water mixture also may change during distribution. This study evaluated the relationships between mutagenicity, using the Ames assay, and water quality parameters. The study included information on treatment, mutagenicity data, and water quality data for source waters, finished waters, and distribution samples collected from five full-scale drinking water treatment plants, which used chlorine exclusively for disinfection. Four of the plants used surface water sources and the fifth plant used groundwater. Correlations between mutagenicity and water quality parameters are presented. The highest correlation was observed between mutagenicity and the total organic halide concentrations in the treated samples.

  4. Measurement of Muon Neutrino Quasielastic Scattering on Carbon

    NASA Astrophysics Data System (ADS)

    Aguilar-Arevalo, A. A.; Bazarko, A. O.; Brice, S. J.; Brown, B. C.; Bugel, L.; Cao, J.; Coney, L.; Conrad, J. M.; Cox, D. C.; Curioni, A.; Djurcic, Z.; Finley, D. A.; Fleming, B. T.; Ford, R.; Garcia, F. G.; Garvey, G. T.; Green, C.; Green, J. A.; Hart, T. L.; Hawker, E.; Imlay, R.; Johnson, R. A.; Kasper, P.; Katori, T.; Kobilarcik, T.; Kourbanis, I.; Koutsoliotas, S.; Laird, E. M.; Link, J. M.; Liu, Y.; Liu, Y.; Louis, W. C.; Mahn, K. B. M.; Marsh, W.; Martin, P. S.; McGregor, G.; Metcalf, W.; Meyers, P. D.; Mills, F.; Mills, G. B.; Monroe, J.; Moore, C. D.; Nelson, R. H.; Nienaber, P.; Ouedraogo, S.; Patterson, R. B.; Perevalov, D.; Polly, C. C.; Prebys, E.; Raaf, J. L.; Ray, H.; Roe, B. P.; Russell, A. D.; Sandberg, V.; Schirato, R.; Schmitz, D.; Shaevitz, M. H.; Shoemaker, F. C.; Smith, D.; Sorel, M.; Spentzouris, P.; Stancu, I.; Stefanski, R. J.; Sung, M.; Tanaka, H. A.; Tayloe, R.; Tzanov, M.; van de Water, R.; Wascko, M. O.; White, D. H.; Wilking, M. J.; Yang, H. J.; Zeller, G. P.; Zimmerman, E. D.

    2008-01-01

    The observation of neutrino oscillations is clear evidence for physics beyond the standard model. To make precise measurements of this phenomenon, neutrino oscillation experiments, including MiniBooNE, require an accurate description of neutrino charged current quasielastic (CCQE) cross sections to predict signal samples. Using a high-statistics sample of νμ CCQE events, MiniBooNE finds that a simple Fermi gas model, with appropriate adjustments, accurately characterizes the CCQE events observed in a carbon-based detector. The extracted parameters include an effective axial mass, M_A^eff = 1.23 ± 0.20 GeV, that describes the four-momentum dependence of the axial-vector form factor of the nucleon, and a Pauli-suppression parameter, κ = 1.019 ± 0.011. Such a modified Fermi gas model may also be used by future accelerator-based experiments measuring neutrino oscillations on nuclear targets.

  5. Blytheville AFB, Arkansas. Water quality management survey. Final report 11-14 Apr 83

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    New, G.R.; Gibson, D.P. Jr.

    1983-05-01

    The USAF OEHL conducted an on-site water quality management survey at Blytheville AFB. Main areas of interest were (1) the wastewater treatment plant effluent fecal coliform count and residual chlorine content, and (2) the stream sampling protocol. The drinking water plant, landfill and industrial shops were also included in the survey. Results of the survey indicated that the low residual chlorine content caused high fecal coliform counts in the wastewater effluent. The chemical parameters sampled in the stream monitoring program did not coincide with the requirements of the State of Arkansas and required modification. Recommendations were made to increase the residual chlorine content of the wastewater effluent and to increase the mixing of the chlorine contact chamber. A list of the chemical parameters was included in the report for stream monitoring.

  6. Seasonal variation of human sperm cells among 4,422 semen samples: A retrospective study in Turkey.

    PubMed

    Ozelci, Runa; Yılmaz, Saynur; Dilbaz, Berna; Akpınar, Funda; Akdag Cırık, Derya; Dilbaz, Serdar; Ocal, Aslı

    2016-12-01

    We aimed to assess the possible presence of a seasonal pattern in three parameters of semen analysis: sperm concentration, morphology, and motility as a function of the time of ejaculation and sperm production (spermatogenesis) in normal and oligozoospermic men. This retrospective study included a consecutive series of 4,422 semen samples that were collected from patients as a part of the basic evaluation of the infertile couples attending the Reproductive Endocrine Outpatient Clinic of a tertiary women's hospital in Ankara, Turkey, between January 1, 2012 and December 31, 2013. The samples were classified according to sperm concentration: ≥15 ×10⁶/mL as normozoospermic samples and 4-14.99 ×10⁶/mL as oligozoospermic samples, and seasonal analyses of the semen samples were carried out separately. When the data were analyzed according to the season of semen production, there was no seasonal effect on the sperm concentration. A gradual and consistent decrease in the rate of sperm with fast forward motility was observed from spring to fall with a recovery noticed during the winter. The percentage of sperm with normal morphology was found to be statistically significantly higher in the spring samples compared with the summer samples (p=0.001). Both normozoospermic and oligozoospermic semen samples appeared to have better sperm parameters in spring and winter. The circannual variation of semen parameters may be important in diagnosis and treatment decisions. WHO: World Health Organization; mRNA: messenger ribonucleic acid.

  7. Measurement of the hyperelastic properties of 44 pathological ex vivo breast tissue samples

    NASA Astrophysics Data System (ADS)

    O'Hagan, Joseph J.; Samani, Abbas

    2009-04-01

    The elastic and hyperelastic properties of biological soft tissues have been of interest to the medical community. There are several biomedical applications where parameters characterizing such properties are critical for a reliable clinical outcome. These applications include surgery planning, needle biopsy and brachytherapy where tissue biomechanical modeling is involved. Another important application is interpreting nonlinear elastography images. While there has been considerable research on the measurement of the linear elastic modulus of small tissue samples, little research has been conducted for measuring parameters that characterize the nonlinear elasticity of tissues included in tissue slice specimens. This work presents hyperelastic measurement results of 44 pathological ex vivo breast tissue samples. For each sample, five hyperelastic models have been used, including the Yeoh, N = 2 polynomial, N = 1 Ogden, Arruda-Boyce, and Veronda-Westmann models. Results show that the Yeoh, polynomial and Ogden models are the most accurate in terms of fitting experimental data. The results indicate that almost all of the parameters corresponding to the pathological tissues are between two times to over two orders of magnitude larger than those of normal tissues, with C11 showing the most significant difference. Furthermore, statistical analysis indicates that C02 of the Yeoh model, and C11 and C20 of the polynomial model have very good potential for cancer classification as they show statistically significant differences for various cancer types, especially for invasive lobular carcinoma. In addition to the potential for use in cancer classification, the presented data are very important for applications such as surgery planning and virtual reality based clinician training systems where accurate nonlinear tissue response modeling is required.
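
    As a hedged illustration of how such hyperelastic constants are typically obtained (not the authors' inverse finite element procedure), the sketch below fits the three Yeoh constants to uniaxial nominal stress-stretch data by nonlinear least squares; the data points and "true" constants are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def yeoh_uniaxial(stretch, c10, c20, c30):
    """Nominal (first Piola-Kirchhoff) stress for incompressible uniaxial loading."""
    I1 = stretch**2 + 2.0 / stretch
    dW_dI1 = c10 + 2 * c20 * (I1 - 3) + 3 * c30 * (I1 - 3) ** 2
    return 2.0 * (stretch - 1.0 / stretch**2) * dW_dI1

# synthetic "experiment": generate compressive data from known constants, add noise
rng = np.random.default_rng(0)
lam = np.linspace(0.7, 1.0, 15)                  # compressive stretches
true = (5.0, 12.0, 30.0)                         # kPa-scale constants, illustrative
stress = yeoh_uniaxial(lam, *true) * (1 + 0.03 * rng.standard_normal(lam.size))

popt, _ = curve_fit(yeoh_uniaxial, lam, stress, p0=(1.0, 1.0, 1.0))
print("fitted C10, C20, C30:", np.round(popt, 2))
```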

  8. Calculating background levels for ecological risk parameters in toxic harbor sediment

    USGS Publications Warehouse

    Leadon, C.J.; McDonnell, T.R.; Lear, J.; Barclift, D.

    2007-01-01

    Establishing background levels for biological parameters is necessary in assessing the ecological risks from harbor sediment contaminated with toxic chemicals. For chemicals in sediment, the term contaminated is defined as having concentrations above background and significant human health or ecological risk levels. For biological parameters, a site could be considered contaminated if levels of the parameter are either more or less than the background level, depending on the specific parameter. Biological parameters can include tissue chemical concentrations in ecological receptors, bioassay responses, bioaccumulation levels, and benthic community metrics. Chemical parameters can include sediment concentrations of a variety of potentially toxic chemicals. Indirectly, contaminated harbor sediment can impact shellfish, fish, birds, and marine mammals, and human populations. This paper summarizes the methods used to define background levels for chemical and biological parameters from a survey of ecological risk investigations of marine harbor sediment at California Navy bases. Background levels for regional biological indices used to quantify ecological risks for benthic communities are also described. Generally, background stations are positioned in relatively clean areas exhibiting the same physical and general chemical characteristics as nearby areas with contaminated harbor sediment. The number of background stations and the number of sample replicates per background station depend on the statistical design of the sediment ecological risk investigation, developed through the data quality objective (DQO) process. Biological data from the background stations can be compared to data from a contaminated site by using minimum or maximum background levels or comparative statistics. In Navy ecological risk assessments (ERA's), calculated background levels and appropriate ecological risk screening criteria are used to identify sampling stations and sites with contaminated sediments.

  9. Mathematical estimation of the level of microbial contamination on spacecraft surfaces by volumetric air sampling

    NASA Technical Reports Server (NTRS)

    Oxborrow, G. S.; Roark, A. L.; Fields, N. D.; Puleo, J. R.

    1974-01-01

    Microbiological sampling methods presently used for enumeration of microorganisms on spacecraft surfaces require contact with easily damaged components. Estimation of viable particles on surfaces using air sampling methods in conjunction with a mathematical model would be desirable. Parameters necessary for the mathematical model are the effect of angled surfaces on viable particle collection and the number of viable cells per viable particle. Deposition of viable particles on angled surfaces closely followed a cosine function, and the number of viable cells per viable particle was consistent with a Poisson distribution. Other parameters considered by the mathematical model included deposition rate and fractional removal per unit time. A close nonlinear correlation between volumetric air sampling and airborne fallout on surfaces was established with all fallout data points falling within the 95% confidence limits as determined by the mathematical model.

  10. Classification of adulterated honeys by multivariate analysis.

    PubMed

    Amiry, Saber; Esmaiili, Mohsen; Alizadeh, Mohammad

    2017-06-01

    In this research, honey samples were adulterated with date syrup (DS) and invert sugar syrup (IS) at three concentrations (7%, 15% and 30%). A total of 102 adulterated samples were prepared in six batches with 17 replications for each batch. For each sample, 32 parameters including color indices, rheological, physical, and chemical parameters were determined. To classify the samples based on the type and concentration of adulterant, a multivariate analysis was applied using principal component analysis (PCA) followed by a linear discriminant analysis (LDA). Then, 21 principal components (PCs) were selected in five sets. Approximately two-thirds of the samples were identified correctly using color indices (62.75%) or rheological properties (67.65%). Powerful discrimination was obtained using physical properties (97.06%), and the best separations were achieved using two sets of chemical properties (set 1: lactone, diastase activity, sucrose - 100%) (set 2: free acidity, HMF, ash - 95%). Copyright © 2016 Elsevier Ltd. All rights reserved.
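
    A minimal sketch of the PCA-then-LDA classification step with scikit-learn, using random placeholder features in place of the 32 measured parameters and the six adulteration classes; it is meant only to show the pipeline structure, not to reproduce the reported accuracies.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_features, n_classes = 17, 32, 6
# placeholder data: 102 samples x 32 parameters, six adulteration batches
X = np.vstack([rng.normal(loc=c, scale=2.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=21),            # 21 principal components, as in the study
                    LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```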

  11. Groundwater quality assessment for irrigation purposes based on irrigation water quality index and its zoning with GIS in the villages of Chabahar, Sistan and Baluchistan, Iran.

    PubMed

    Abbasnia, Abbas; Radfard, Majid; Mahvi, Amir Hossein; Nabizadeh, Ramin; Yousefi, Mahmood; Soleimani, Hamed; Alimohammadi, Mahmood

    2018-08-01

    The present study was conducted to evaluate the groundwater quality and its suitability for irrigation purposes through GIS in villages of Chabahar city, Sistan and Baluchistan province in Iran. This cross-sectional study was carried out from 2010 to 2011, a 1-year monitoring period. The water samples were collected from 40 open dug wells in order to investigate the water quality. Chemical parameters including EC, SAR, Na+, Cl-, pH, TDS, HCO3- and the IWQI were analyzed. In order to calculate the irrigation water quality index, the following five water quality parameters were used: EC, SAR, Na+, Cl-, and HCO3-. Of the 40 samples analyzed for IWQI, 40% were classified as excellent water and 60% as good water.

  12. Inverse analysis of water profile in starch by non-contact photopyroelectric method

    NASA Astrophysics Data System (ADS)

    Frandas, A.; Duvaut, T.; Paris, D.

    2000-07-01

    The photopyroelectric (PPE) method in a non-contact configuration was proposed to study water migration in starch sheets used for biodegradable packaging. A 1-D theoretical model was developed, allowing the study of samples having a water profile characterized by an arbitrary continuous function. An experimental setup was designed for this purpose, which included the choice of excitation source, detection of signals, signal and data processing, and cells for conditioning the samples. We report here the development of an inversion procedure allowing for the determination of the parameters that influence the PPE signal. This procedure led to the optimization of experimental conditions in order to identify the parameters related to the water profile in the sample, and to monitor the dynamics of the process.

  13. Estimation of Power Consumption in the Circular Sawing of Stone Based on Tangential Force Distribution

    NASA Astrophysics Data System (ADS)

    Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng

    2018-04-01

    Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, which is based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power by the MPFD with few initial experimental samples was proved in case studies. On the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was validated. The case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy.

  14. Variability estimation of urban wastewater biodegradable fractions by respirometry.

    PubMed

    Lagarde, Fabienne; Tusseau-Vuillemin, Marie-Hélène; Lessard, Paul; Héduit, Alain; Dutrop, François; Mouchel, Jean-Marie

    2005-11-01

    This paper presents a methodology for assessing the variability of biodegradable chemical oxygen demand (COD) fractions in urban wastewaters. Thirteen raw wastewater samples from combined and separate sewers feeding the same plant were characterised, and two optimisation procedures were applied in order to evaluate the variability in biodegradable fractions and related kinetic parameters. Through an overall optimisation on all the samples, a unique kinetic parameter set was obtained with a three-substrate model including an adsorption stage. This method required powerful numerical treatment, but alleviated the identifiability problem encountered in the usual sample-to-sample optimisation. The results showed that the fractionation of samples collected in the combined sewer was much more variable (standard deviation of 70% of the mean values) than the fractionation of the separate sewer samples, and the slowly biodegradable COD fraction was the most significant fraction (45% of the total COD on average). Because these samples were collected under various rain conditions, the standard deviations obtained here on the combined sewer biodegradable fractions could be used as a first estimation of the variability of this type of sewer system.

  15. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    PubMed

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

    Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification, and in particular focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both of the proposed sampling methods. Regarding summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are examined, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
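
    The three pooling strategies compared here can be sketched in a few lines; the activation matrix below is random placeholder data standing in for frame-level sparse-coding activations.

    ```python
    import numpy as np

    # Sketch of max-, average- and standard-deviation pooling of frame-level feature
    # activations into one clip-level feature vector (placeholder data, not real features).
    rng = np.random.default_rng(0)
    activations = rng.random((500, 1024))  # (frames, dictionary atoms)

    max_pooled = activations.max(axis=0)
    avg_pooled = activations.mean(axis=0)
    std_pooled = activations.std(axis=0)   # the pooling reported to work best here

    clip_feature = std_pooled              # fed to the downstream classifier
    print(clip_feature.shape)              # (1024,)
    ```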

  16. Impact on enzyme activity as a new quality index of wastewater.

    PubMed

    Balestri, Francesco; Moschini, Roberta; Cappiello, Mario; Del-Corso, Antonella; Mura, Umberto

    2013-03-15

    The aim of this study was to define a new indicator for the quality of wastewaters that are released into the environment. A quality index is proposed for wastewater samples in terms of the inertness of the samples toward enzyme activity. This involves taking advantage of the sensitivity of enzymes to pollutants that may be present in the waste samples. The effect of wastewater samples on the rate of a number of different enzyme-catalyzed reactions was measured, and the results for all the selected enzymes were analyzed in an integrated fashion (multi-enzymatic sensor). This approach enabled us to define an overall quality index, the "Impact on Enzyme Function" (IEF-index), which is composed of three indicators: i) the Synoptic parameter, related to the average effect of the waste sample on each component of the enzymatic sensor; ii) the Peak parameter, related to the maximum effect observed among all the effects exerted by the sample on the sensor components; and iii) the Interference parameter, related to the number of sensor components that are affected by less than a fixed threshold value. A number of water-based samples, including public potable tap water, fluids from urban sewage systems, and wastewater from the leather, paper and dye industries, were analyzed and the IEF-index was determined. Although the IEF-index cannot discriminate between different types of wastewater samples, it could be a useful parameter in monitoring the improvement of the quality of a specific sample. However, by analyzing an adequate number of waste samples of the same type, even from different local contexts, the profile of the impact on each component of the multi-enzymatic sensor could be typical for specific types of waste. The IEF-index is proposed as a supplementary qualification score for wastewaters, in addition to the certification of the waste's conformity to legal requirements. Copyright © 2013 Elsevier Ltd. All rights reserved.
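
    A minimal sketch of the three indicators follows, assuming the effect on each enzyme is expressed as a fractional inhibition between 0 and 1; the exact scaling and threshold used in the paper may differ.

    ```python
    import numpy as np

    # Sketch of the three indicators behind the proposed IEF-index.
    # "inhibition" holds the effect of one sample on each enzyme of the sensor
    # (0 = no effect, 1 = complete inhibition); threshold value is an assumption.
    def ief_indicators(inhibition, threshold=0.2):
        inhibition = np.asarray(inhibition, dtype=float)
        synoptic = inhibition.mean()                    # average effect on the sensor
        peak = inhibition.max()                         # strongest single-enzyme effect
        interference = int(np.sum(inhibition < threshold))  # enzymes affected below threshold
        return synoptic, peak, interference

    # Hypothetical wastewater sample acting on a six-enzyme sensor.
    sample_effects = [0.05, 0.10, 0.45, 0.02, 0.30, 0.08]
    print(ief_indicators(sample_effects))
    ```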

  17. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data.

    PubMed

    O'Reilly, Joseph E; Donoghue, Philip C J

    2018-03-01

    Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data.
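
    A minimal sketch of a 50% majority-rule consensus over a posterior sample follows, with each tree represented by its set of clades; parsing of Newick/NEXUS tree files (e.g. with a phylogenetics library) is omitted and the toy trees are placeholders.

    ```python
    from collections import Counter

    # Sketch of a 50% majority-rule consensus: clades (frozensets of taxon labels)
    # present in more than half of the posterior sample are retained, together with
    # their frequency, which serves as the clade support value.
    def majority_rule_clades(trees, cutoff=0.5):
        counts = Counter(clade for clades in trees for clade in clades)
        n = len(trees)
        return {clade: counts[clade] / n
                for clade in counts if counts[clade] / n > cutoff}

    # Three toy "trees" over taxa A-D, given directly as clade sets.
    t1 = {frozenset("AB"), frozenset("ABC")}
    t2 = {frozenset("AB"), frozenset("ABD")}
    t3 = {frozenset("AB"), frozenset("ABC")}
    print(majority_rule_clades([t1, t2, t3]))
    # {frozenset({'A','B'}): 1.0, frozenset({'A','B','C'}): 0.667}
    ```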

  18. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data

    PubMed Central

    O’Reilly, Joseph E; Donoghue, Philip C J

    2018-01-01

    Abstract Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data. PMID:29106675

  19. Non-parametric cell-based photometric proxies for galaxy morphology: methodology and application to the morphologically defined star formation-stellar mass relation of spiral galaxies in the local universe

    NASA Astrophysics Data System (ADS)

    Grootes, M. W.; Tuffs, R. J.; Popescu, C. C.; Robotham, A. S. G.; Seibert, M.; Kelvin, L. S.

    2014-02-01

    We present a non-parametric cell-based method of selecting highly pure and largely complete samples of spiral galaxies using photometric and structural parameters as provided by standard photometric pipelines and simple shape fitting algorithms. The performance of the method is quantified for different parameter combinations, using purely human-based classifications as a benchmark. The discretization of the parameter space allows a markedly superior selection than commonly used proxies relying on a fixed curve or surface of separation. Moreover, we find structural parameters derived using passbands longwards of the g band and linked to older stellar populations, especially the stellar mass surface density μ* and the r-band effective radius re, to perform at least equally well as parameters more traditionally linked to the identification of spirals by means of their young stellar populations, e.g. UV/optical colours. In particular, the distinct bimodality in the parameter μ*, consistent with expectations of different evolutionary paths for spirals and ellipticals, represents an often overlooked yet powerful parameter in differentiating between spiral and non-spiral/elliptical galaxies. We use the cell-based method for the optical parameter set including re in combination with the Sérsic index n and the i-band magnitude to investigate the intrinsic specific star formation rate-stellar mass relation (ψ*-M*) for a morphologically defined volume-limited sample of local Universe spiral galaxies. The relation is found to be well described by ψ* ∝ M*^(-0.5) over the range 10^9.5 ≤ M* ≤ 10^11 M⊙ with a mean interquartile range of 0.4 dex. This is somewhat steeper than previous determinations based on colour-selected samples of star-forming galaxies, primarily due to the inclusion in the sample of red quiescent discs.

  20. Non-linear matter power spectrum covariance matrix errors and cosmological parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Blot, L.; Corasaniti, P. S.; Amendola, L.; Kitching, T. D.

    2016-06-01

    The covariance of the matter power spectrum is a key element of the analysis of galaxy clustering data. Independent realizations of observational measurements can be used to sample the covariance, nevertheless statistical sampling errors will propagate into the cosmological parameter inference, potentially limiting the capabilities of the upcoming generation of galaxy surveys. The impact of these errors as a function of the number of realizations has been previously evaluated for Gaussian distributed data. However, non-linearities in the late-time clustering of matter cause departures from Gaussian statistics. Here, we address the impact of non-Gaussian errors on the sample covariance and precision matrix errors using a large ensemble of N-body simulations. In the range of modes where finite volume effects are negligible (0.1 ≲ k [h Mpc-1] ≲ 1.2), we find deviations of the variance of the sample covariance with respect to Gaussian predictions above ~10 per cent at k > 0.3 h Mpc-1. Over the entire range these reduce to about ~5 per cent for the precision matrix. Finally, we perform a Fisher analysis to estimate the effect of covariance errors on the cosmological parameter constraints. In particular, assuming Euclid-like survey characteristics we find that a number of independent realizations larger than 5000 is necessary to reduce the contribution of sampling errors to the cosmological parameter uncertainties at the subpercent level. We also show that restricting the analysis to large scales k ≲ 0.2 h Mpc-1 results in a considerable loss in constraining power, while using the linear covariance to include smaller scales leads to an underestimation of the errors on the cosmological parameters.
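
    The propagation of covariance sampling noise into a Fisher forecast can be sketched as follows; the mock covariance, the Hartlap debiasing factor and the placeholder power-spectrum derivatives are illustrative assumptions, not the simulation products used in the paper.

    ```python
    import numpy as np

    # Sketch: estimate a covariance from N_s mock realizations, debias its inverse with
    # the Hartlap factor, and contract it with assumed derivatives in a Fisher forecast.
    rng = np.random.default_rng(1)
    n_k, n_s, n_par = 20, 500, 2            # band powers, realizations, parameters

    true_cov = np.diag(np.linspace(1.0, 4.0, n_k))
    mocks = rng.multivariate_normal(np.zeros(n_k), true_cov, size=n_s)
    sample_cov = np.cov(mocks, rowvar=False)

    hartlap = (n_s - n_k - 2) / (n_s - 1)    # debiasing factor for the inverse covariance
    precision = hartlap * np.linalg.inv(sample_cov)

    # Placeholder derivatives dP/dtheta; a real forecast would take these from theory.
    dP = rng.normal(size=(n_par, n_k))
    fisher = dP @ precision @ dP.T
    param_errors = np.sqrt(np.diag(np.linalg.inv(fisher)))
    print(param_errors)
    ```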

  1. Adaptive Peer Sampling with Newscast

    NASA Astrophysics Data System (ADS)

    Tölgyesi, Norbert; Jelasity, Márk

    The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates, which can result in very poor random sampling especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters: without involving failure detectors, nodes passively monitor local protocol events and use them as feedback in a local control loop that self-tunes the protocol parameters. The proposed solution is evaluated by simulation experiments.

  2. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies the most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.

  3. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks

    PubMed Central

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-01-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies the most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784

  4. Tackling the conformational sampling of larger flexible compounds and macrocycles in pharmacology and drug discovery.

    PubMed

    Chen, I-Jen; Foloppe, Nicolas

    2013-12-15

    Computational conformational sampling underpins much of molecular modeling and design in pharmaceutical work. The sampling of smaller drug-like compounds has been an active area of research. However, few studies have tested in detail the sampling of larger, more flexible compounds, which are also relevant to drug discovery, including therapeutic peptides, macrocycles, and inhibitors of protein-protein interactions. Here, we extensively investigate mainstream conformational sampling methods on three carefully curated compound sets, namely the 'Drug-like', larger 'Flexible', and 'Macrocycle' compounds. These test molecules are chemically diverse with reliable X-ray protein-bound bioactive structures. The compared sampling methods include Stochastic Search and the recent LowModeMD from MOE, all the low-mode based approaches from MacroModel, and MD/LLMOD recently developed for macrocycles. In addition to default settings, key parameters of the sampling protocols were explored. The performance of the computational protocols was assessed via (i) the reproduction of the X-ray bioactive structures, (ii) the size, coverage and diversity of the output conformational ensembles, (iii) the compactness/extendedness of the conformers, and (iv) the ability to locate the global energy minimum. The influence of the stochastic nature of the searches on the results was also examined. Much better results were obtained by adopting search parameters enhanced over the default settings, while maintaining computational tractability. In MOE, the recent LowModeMD emerged as the method of choice. Mixed torsional/low-mode search from MacroModel performed as well as LowModeMD, and MD/LLMOD performed well for macrocycles. The low-mode based approaches yielded very encouraging results with the flexible and macrocycle sets. Thus, one can productively tackle the computational conformational search of larger flexible compounds for drug discovery, including macrocycles. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Behavior of optical properties of coagulated blood sample at 633 nm wavelength

    NASA Astrophysics Data System (ADS)

    Morales Cruzado, Beatriz; Vázquez y Montiel, Sergio; Delgado Atencio, José Alberto

    2011-03-01

    Determination of tissue optical parameters is fundamental for the application of light in either diagnostic or therapeutic procedures. However, in samples of biological tissue in vitro, the optical properties are modified by cellular death or cellular agglomeration that cannot be avoided. These phenomena change the propagation of light within the biological sample. Optical properties of human blood tissue were investigated in vitro at 633 nm using an optical setup that includes a double integrating sphere system. We measure the diffuse transmittance and diffuse reflectance of the blood sample and compare these physical properties with those obtained by Monte Carlo Multi-Layered (MCML) simulation. The extraction of the optical parameters (absorption coefficient μa, scattering coefficient μs and anisotropy factor g) from the measurements was carried out using a Genetic Algorithm, in which the search procedure is based on the evolution of a population through selection of the best individuals, evaluated by a function that compares the diffuse transmittance and diffuse reflectance of those individuals with the experimental ones. The algorithm converges rapidly to the best individual, extracting the optical parameters of the sample. We compare our results with those obtained using other retrieval procedures. We found that the scattering coefficient and the anisotropy factor change dramatically due to the formation of clusters.
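
    A minimal sketch of such a genetic-algorithm inversion is shown below; the analytic forward model is a crude placeholder standing in for the Monte Carlo (MCML) simulation, and the bounds and GA settings are assumptions.

    ```python
    import numpy as np

    # Sketch: candidate parameter sets (mu_a, mu_s, g) evolve towards measured diffuse
    # reflectance/transmittance. forward_model is a toy stand-in for MCML.
    rng = np.random.default_rng(2)

    def forward_model(mu_a, mu_s, g, thickness=0.1):
        mu_s_reduced = mu_s * (1.0 - g)
        albedo = mu_s_reduced / (mu_a + mu_s_reduced)
        Td = np.exp(-(mu_a + mu_s_reduced) * thickness)   # toy transmittance
        Rd = 0.5 * albedo * (1.0 - Td)                     # toy diffuse reflectance
        return Rd, Td

    measured = forward_model(0.3, 30.0, 0.9)               # synthetic "experiment"
    bounds = np.array([[0.01, 1.0], [5.0, 100.0], [0.5, 0.99]])

    def fitness(pop):
        sims = np.array([forward_model(*ind) for ind in pop])
        return -np.sum((sims - np.array(measured)) ** 2, axis=1)

    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 3))
    for _ in range(200):
        f = fitness(pop)
        parents = pop[np.argsort(f)[-20:]]                                  # selection
        children = parents[rng.integers(0, 20, size=(60, 3)), np.arange(3)] # crossover
        children += rng.normal(scale=0.02 * (bounds[:, 1] - bounds[:, 0]),
                               size=children.shape)                          # mutation
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])

    best = pop[np.argmax(fitness(pop))]
    print("best-fit (mu_a, mu_s, g):", best)
    ```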

  6. Prospective, randomized, blinded evaluation of donor semen quality provided by seven commercial sperm banks.

    PubMed

    Carrell, Douglas T; Cartmill, Deborah; Jones, Kirtly P; Hatasaka, Harry H; Peterson, C Matthew

    2002-07-01

    To evaluate variability in donor semen quality between seven commercial donor sperm banks, within sperm banks, and between intracervical insemination and intrauterine insemination. Prospective, randomized, blind evaluation of commercially available donor semen samples. An academic andrology laboratory. Seventy-five cryopreserved donor semen samples were evaluated. Samples were coded, then blindly evaluated for semen quality. Standard semen quality parameters, including concentration, motility parameters, World Health Organization criteria morphology, and strict criteria morphology. Significant differences were observed between donor semen banks for most semen quality parameters analyzed in intracervical insemination samples. In general, the greatest variability observed between banks was in percentage progressive sperm motility (range, 8.8 +/- 5.8 to 42.4 +/- 5.5) and normal sperm morphology (strict criteria; range, 10.1 +/- 3.3 to 26.6 +/- 4.7). Coefficients of variation within sperm banks were generally high. These data demonstrate the variability of donor semen quality provided by commercial sperm banks, both between banks and within a given bank. No relationship was observed between the size or type of sperm bank and the degree of variability. The data demonstrate the lack of uniformity in the criteria used to screen potential semen donors and emphasize the need for more stringent screening criteria and strict quality control in processing samples.

  7. Study of positive and negative feedback sensitivity in psychosis using the Wisconsin Card Sorting Test.

    PubMed

    Farreny, Aida; Del Rey-Mejías, Ángel; Escartin, Gemma; Usall, Judith; Tous, Núria; Haro, Josep Maria; Ochoa, Susana

    2016-07-01

    Schizophrenia involves marked motivational and learning deficits that may reflect abnormalities in reward processing. The purpose of this study was to examine positive and negative feedback sensitivity in schizophrenia using computational modeling derived from the Wisconsin Card Sorting Test (WCST). We also aimed to explore feedback sensitivity in a sample with bipolar disorder. Eighty-three individuals with schizophrenia and 27 with bipolar disorder were included. Demographic, clinical and cognitive outcomes, together with the WCST, were considered in both samples. Computational modeling was performed in R to calculate three parameters based on trial-by-trial execution of the WCST: reward sensitivity (R), punishment sensitivity (P), and choice consistency (D). The associations between outcome variables and the parameters were investigated. Both positive and negative feedback sensitivity showed deficits, but the P parameter was clearly diminished in schizophrenia. Cognitive variables, age, and symptoms were associated with the R, P, and D parameters in schizophrenia. The sample with bipolar disorder showed cognitive deficits and feedback abnormalities to a lesser extent than the individuals with schizophrenia. Negative feedback sensitivity demonstrated a greater deficit in both samples. Idiosyncratic cognitive requirements in the WCST might introduce confusion when assuming model-free reinforcement learning. Negative symptoms of schizophrenia were related to lower feedback sensitivity and less goal-directed patterns of choice. Copyright © 2016 Elsevier Inc. All rights reserved.
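
    A hypothetical simplification of a feedback-sensitivity model for the WCST is sketched below purely to illustrate how R, P and D can enter a trial-by-trial update; it is not the model used in the study, and every numeric value is a placeholder.

    ```python
    import numpy as np

    # Hypothetical sketch: attention weights over the three sorting dimensions
    # (colour, form, number) scale up by R after rewarded trials and down by P after
    # punished trials; the consistency D acts as a softmax inverse temperature.
    rng = np.random.default_rng(3)
    R, P, D = 0.3, 0.1, 3.0          # reward sensitivity, punishment sensitivity, consistency

    w = np.ones(3) / 3               # attention weights over the 3 dimensions
    correct_dim = 0                  # hidden sorting rule (e.g. colour)

    for trial in range(64):
        probs = np.exp(D * w) / np.exp(D * w).sum()   # softmax choice over dimensions
        choice = rng.choice(3, p=probs)
        if choice == correct_dim:                      # positive feedback
            signal = np.zeros(3); signal[choice] = 1.0
            w = (1 - R) * w + R * signal               # shift attention towards the choice
        else:                                          # negative feedback
            signal = np.zeros(3); signal[choice] = 1.0
            w = (1 - P) * w + P * (1.0 - signal) / 2.0 # shift attention away from it
        w /= w.sum()

    print("final attention weights:", np.round(w, 3))
    ```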

  8. Analysis of hepatitis C viral dynamics using Latin hypercube sampling

    NASA Astrophysics Data System (ADS)

    Pachpute, Gaurav; Chakrabarty, Siddhartha P.

    2012-12-01

    We consider a mathematical model comprising four coupled ordinary differential equations (ODEs) to study hepatitis C viral dynamics. The model includes the efficacies of a combination therapy of interferon and ribavirin. There are two main objectives of this paper. The first is to approximate the percentage of cases in which there is viral clearance in the absence of treatment, as well as the percentage of response to treatment for various efficacy levels. The other is to better understand and identify the parameters that play a key role in the decline of viral load and can be estimated in a clinical setting. A condition for the stability of the uninfected and the infected steady states is presented. A large number of sample points for the model parameters (which are physiologically feasible) are generated using Latin hypercube sampling. An analysis of the simulated values indicates that approximately 29.85% of cases result in clearance of the virus during the early phase of the infection. Results from the χ² and Spearman's tests on the samples indicate a distinctly different distribution for certain parameters for the cases exhibiting viral clearance under the combination therapy.
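
    A minimal sketch of the workflow (Latin hypercube sampling of parameter ranges followed by ODE integration and a clearance check) is given below; the reduced three-equation model and all parameter ranges are illustrative assumptions, not the four-ODE model or ranges used in the paper.

    ```python
    import numpy as np
    from scipy.stats import qmc
    from scipy.integrate import solve_ivp

    # Sketch: LHS over parameter ranges, then integrate a basic target-cell/infected-cell/
    # virus model of HCV dynamics under therapy and count "cleared" runs.
    def hcv(t, y, beta, delta, p, c, eps):
        T, I, V = y
        s, d = 1e4, 0.01                         # target-cell supply and death (assumed)
        dT = s - d * T - beta * V * T
        dI = beta * V * T - delta * I
        dV = (1.0 - eps) * p * I - c * V         # eps = overall drug efficacy (assumed)
        return [dT, dI, dV]

    sampler = qmc.LatinHypercube(d=5, seed=4)
    lo = np.array([1e-8, 0.05, 1.0, 2.0, 0.0])   # beta, delta, p, c, eps lower bounds
    hi = np.array([1e-6, 0.50, 30.0, 10.0, 0.99])
    params = qmc.scale(sampler.random(100), lo, hi)

    cleared = 0
    for beta, delta, p, c, eps in params:
        sol = solve_ivp(hcv, (0, 100), [1e7, 0.0, 1e6],
                        args=(beta, delta, p, c, eps), method="LSODA")
        if sol.y[2, -1] < 1.0:                   # viral load effectively cleared
            cleared += 1
    print(f"fraction of sampled parameter sets with clearance: {cleared / len(params):.2f}")
    ```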

  9. X-Ray Morphological Analysis of the Planck ESZ Clusters

    NASA Astrophysics Data System (ADS)

    Lovisari, Lorenzo; Forman, William R.; Jones, Christine; Ettori, Stefano; Andrade-Santos, Felipe; Arnaud, Monique; Démoclès, Jessica; Pratt, Gabriel W.; Randall, Scott; Kraft, Ralph

    2017-09-01

    X-ray observations show that galaxy clusters have a very large range of morphologies. The most disturbed systems, which are well suited to studying how clusters form and grow and to testing physical models, may potentially complicate cosmological studies because the cluster mass determination becomes more challenging. Thus, we need to understand the cluster properties of our samples to reduce possible biases. This is complicated by the fact that different experiments may detect different cluster populations. For example, Sunyaev-Zeldovich (SZ) selected cluster samples have been found to include a greater fraction of disturbed systems than X-ray selected samples. In this paper we determine eight morphological parameters for the Planck Early Sunyaev-Zeldovich (ESZ) objects observed with XMM-Newton. We found that two parameters, concentration and centroid shift, are the best at distinguishing between relaxed and disturbed systems. For each parameter we provide the values that allow selecting the most relaxed or most disturbed objects from a sample. We found no dependence of the cluster dynamical state on mass. By comparing our results with what was obtained with REXCESS clusters, we also confirm that the ESZ clusters indeed tend to be more disturbed, as found by previous studies.
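
    The two most discriminating parameters can be sketched on a toy image as follows; the aperture radii, normalisation and test image are placeholders, not the definitions calibrated on the XMM-Newton data.

    ```python
    import numpy as np

    # Sketch of two morphological parameters: surface-brightness concentration (flux
    # ratio of a small to a large aperture) and centroid shift (scatter of the centroid
    # measured in apertures of decreasing radius), on a toy X-ray image.
    def radial_mask(shape, centre, r_pix):
        y, x = np.indices(shape)
        return (x - centre[0]) ** 2 + (y - centre[1]) ** 2 <= r_pix ** 2

    def concentration(img, centre, r_small, r_large):
        return img[radial_mask(img.shape, centre, r_small)].sum() / \
               img[radial_mask(img.shape, centre, r_large)].sum()

    def centroid_shift(img, centre, r_max, n_ap=8):
        offsets = []
        for r in np.linspace(r_max, r_max / n_ap, n_ap):
            m = radial_mask(img.shape, centre, r)
            y, x = np.indices(img.shape)
            cx = (x[m] * img[m]).sum() / img[m].sum()
            cy = (y[m] * img[m]).sum() / img[m].sum()
            offsets.append(np.hypot(cx - centre[0], cy - centre[1]))
        return np.std(offsets) / r_max           # normalised by the largest aperture

    # Toy image: a smooth beta-model-like cluster plus an offset substructure.
    y, x = np.indices((200, 200))
    img = (1 + ((x - 100) ** 2 + (y - 100) ** 2) / 400.0) ** -1.5
    img += 0.3 * (1 + ((x - 140) ** 2 + (y - 120) ** 2) / 100.0) ** -1.5

    print("concentration:", round(concentration(img, (100, 100), 10, 80), 3))
    print("centroid shift w:", round(centroid_shift(img, (100, 100), 80), 4))
    ```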

  10. X-Ray Morphological Analysis of the Planck ESZ Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lovisari, Lorenzo; Forman, William R.; Jones, Christine

    2017-09-01

    X-ray observations show that galaxy clusters have a very large range of morphologies. The most disturbed systems, which are well suited to studying how clusters form and grow and to testing physical models, may potentially complicate cosmological studies because the cluster mass determination becomes more challenging. Thus, we need to understand the cluster properties of our samples to reduce possible biases. This is complicated by the fact that different experiments may detect different cluster populations. For example, Sunyaev–Zeldovich (SZ) selected cluster samples have been found to include a greater fraction of disturbed systems than X-ray selected samples. In this paper we determine eight morphological parameters for the Planck Early Sunyaev–Zeldovich (ESZ) objects observed with XMM-Newton. We found that two parameters, concentration and centroid shift, are the best at distinguishing between relaxed and disturbed systems. For each parameter we provide the values that allow selecting the most relaxed or most disturbed objects from a sample. We found no dependence of the cluster dynamical state on mass. By comparing our results with what was obtained with REXCESS clusters, we also confirm that the ESZ clusters indeed tend to be more disturbed, as found by previous studies.

  11. Investigation into the influence of build parameters on failure of 3D printed parts

    NASA Astrophysics Data System (ADS)

    Fornasini, Giacomo

    Additive manufacturing, including fused deposition modeling (FDM), is transforming the built world and engineering education. Deep understanding of parts created through FDM technology has lagged behind its adoption in home, work, and academic environments. Properties of parts created from bulk materials through traditional manufacturing are understood well enough to accurately predict their behavior through analytical models. Unfortunately, Additive Manufacturing (AM) process parameters create anisotropy on a scale that fundamentally affects the part properties. Understanding AM process parameters (implemented by program algorithms called slicers) is necessary to predict part behavior. Investigating the algorithms controlling print parameters (slicers) revealed stark differences in how part layers are generated. In this work, tensile testing experiments, including a full factorial design, determined that three key factors (width, thickness, and infill density) and their interactions significantly affect the tensile properties of 3D printed test samples.

  12. Comparison of hydraulic and chemical sediment properties from flush and core drillings in the Peine area (northern Germany)

    NASA Astrophysics Data System (ADS)

    Konrad, C.; Walther, W.; Reimann, T.; Rogge, A.; Stengel, P.; Well, R.

    2008-03-01

    Comparison of hydraulic and chemical properties of sediments from flush- and core drillings in the area of Peine (Germany). Because of financial constraints, investigations of nitrate metabolism are often based on disturbed borehole samples. It is arguable, however, whether disturbed samples are suitable for these types of investigations. Disadvantages of disturbed samples in comparison to undisturbed core samples are well known and include possible contamination of the sample by mud additives, destruction of the sediment formation and the uncertainty concerning the correct depth allocation. In this study, boreholes were drilled at three locations to a maximum depth of 50 m. The extracted samples, as intact sediment cores and drill cuttings, were studied with regard to chemical and hydraulic parameters of the aquifer sediments. The results show: 1. hydraulic parameters are not affected by clay-based mud; 2. disturbed samples contain less fine grain material relative to the core samples, and the hydraulic conductivity can only be estimated from catch samples; 3. catch samples contain fewer reducing agents (sulphides, organic carbon) than core samples in hydraulically passive zones (defined as K < 10⁻⁶ m·s⁻¹); 4. the results of analyses of disturbed and undisturbed core samples are in good agreement for hydraulically active zones (K ≥ 10⁻⁶ m·s⁻¹).

  13. Aircraft data summaries for the SURE intensives. Final report. [Data obtained during January/February 1978 near Duncan Falls, Ohio and Lewisburg, Virginia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keifer, W.S.; Blumenthal, D.L.; Tommerdahl, J.B.

    1981-09-01

    As part of the EPRI sulfate regional experiment (SURE), Meteorology Research, Inc., (MRI) and Research Triangle Institute (RTI) conducted six air quality sampling programs in the eastern United States using instrumented aircraft. This volume includes the air quality and meteorological data obtained during the January/February 1978 Intensive when MRI sampled near the Duncan Falls, Ohio, SURE Station and RTI sampled near the Lewisburg, Virginia, SURE Station. Sampling data are presented for all measured parameters.

  14. In vitro chronic hepatic disease characterization with a multiparametric ultrasonic approach.

    PubMed

    Meziri, M; Pereira, W C A; Abdelwahab, A; Degott, C; Laugier, P

    2005-03-01

    Although high-resolution, real-time ultrasonic (US) imaging is routinely available, image interpretation is based on grey level and texture, and quantitative evaluation is limited. Other potentially useful diagnostic information from US echoes may include modifications in tissue acoustic parameters (speed, attenuation and backscattering) resulting from disease development. Changes in acoustical parameters can be detected using time-of-flight and spectral analysis techniques. The objective of this study is to explore the potential of three parameters together (attenuation coefficient, US speed and integrated backscatter coefficient, IBC) to discriminate healthy and fibrosis subgroups in liver tissue. Echoes from 21 fresh in vitro samples of human liver and from a plane reflector were obtained using a 20-MHz central frequency transducer (6-30 MHz bandpass). The scan plane was parallel to the reflector placed beneath the liver. A 30 x 20 matrix of A-scans was obtained, with a 200-microm step. The samples were classified according to the Metavir scale into five different degrees of fibrosis. US speed, attenuation and IBC were estimated using standard methods described in the literature. Statistical tests were applied to the results of each parameter individually and indicated that it was not possible to identify all the fibrosis groups. Then a discriminant analysis was performed for the three parameters together, resulting in a reasonable separation of the fibrotic groups. Although the number of tissue samples is limited, this study opens the possibility of enhancing the discriminant capability of ultrasonic parameters of liver tissue disease when they are combined.
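
    A minimal sketch of a linear discriminant analysis on the three acoustic parameters follows, using synthetic placeholder data; with only 21 real samples, cross-validation would be essential in practice.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Sketch: LDA on (attenuation, speed of sound, integrated backscatter) to separate
    # fibrosis grades. Class means, scatter and sample size are assumed values.
    rng = np.random.default_rng(5)
    grades = rng.integers(0, 3, size=60)                  # 0 = healthy, 1-2 = fibrosis
    centres = np.array([[0.8, 1570.0, 1e-3],
                        [1.1, 1585.0, 3e-3],
                        [1.4, 1600.0, 6e-3]])             # assumed class means
    X = centres[grades] + rng.normal(scale=[0.1, 5.0, 5e-4], size=(60, 3))

    lda = LinearDiscriminantAnalysis().fit(X, grades)
    print("training accuracy:", lda.score(X, grades))
    ```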

  15. Attaining insight into interactions between hydrologic model parameters and geophysical attributes for national-scale model parameter estimation

    NASA Astrophysics Data System (ADS)

    Mizukami, N.; Clark, M. P.; Newman, A. J.; Wood, A.; Gutmann, E. D.

    2017-12-01

    Estimating spatially distributed model parameters is a grand challenge for large-domain hydrologic modeling, especially in the context of hydrologic model applications such as streamflow forecasting. Multi-scale Parameter Regionalization (MPR) is a promising technique that accounts for the effects of fine-scale geophysical attributes (e.g., soil texture, land cover, topography, climate) on model parameters, as well as for nonlinear scaling effects. MPR computes model parameters with transfer functions (TFs) that relate geophysical attributes to model parameters at the native input data resolution and then scales them, using scaling functions, to the spatial resolution of the model implementation. One of the biggest challenges in the use of MPR is the identification of TFs for each model parameter: both the functional forms and the geophysical predictors. TFs used to estimate the parameters of hydrologic models typically rely on previous studies or are derived in an ad hoc, heuristic manner, potentially not utilizing the maximum information content of the geophysical attributes for optimal parameter identification. Thus, it is necessary to first uncover relationships among geophysical attributes, model parameters, and hydrologic processes (i.e., hydrologic signatures) to obtain insight into which and to what extent geophysical attributes are related to model parameters. We perform multivariate statistical analysis on a large-sample catchment data set including various geophysical attributes as well as constrained VIC model parameters at 671 unimpaired basins over the CONUS. We first calibrate the VIC model at each catchment to obtain constrained parameter sets. Additionally, parameter sets sampled during the calibration process are used for sensitivity analysis using various hydrologic signatures as objectives, to understand the relationships among geophysical attributes, parameters, and hydrologic processes.
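
    A minimal sketch of the MPR chain (transfer function evaluated on the native attribute grid, then upscaled to the model grid) is given below; the logistic transfer function, its coefficients and the harmonic-mean scaling operator are assumptions for illustration, not the TFs used with the VIC model.

    ```python
    import numpy as np

    # Sketch: a transfer function maps fine-scale attributes (sand and clay fractions)
    # to a model parameter, which is then upscaled with a scaling operator.
    def transfer_function(sand, clay, a=(-1.0, 2.0, -3.0)):
        z = a[0] + a[1] * sand + a[2] * clay
        return 1.0 / (1.0 + np.exp(-z))           # keeps the parameter in (0, 1)

    rng = np.random.default_rng(8)
    sand = rng.uniform(0.1, 0.9, size=(8, 8))      # fine attribute grid (placeholder)
    clay = np.clip(1.0 - sand + rng.normal(0, 0.05, size=(8, 8)), 0, 1)

    fine_param = transfer_function(sand, clay)

    # Upscale the 8x8 fine cells to a 2x2 model grid with a harmonic mean.
    blocks = fine_param.reshape(2, 4, 2, 4)
    coarse_param = 1.0 / (1.0 / blocks).mean(axis=(1, 3))
    print(coarse_param)
    ```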

  16. Sample treatments prior to capillary electrophoresis-mass spectrometry.

    PubMed

    Hernández-Borges, Javier; Borges-Miquel, Teresa M; Rodríguez-Delgado, Miguel Angel; Cifuentes, Alejandro

    2007-06-15

    Sample preparation is a crucial part of chemical analysis and in most cases can become the bottleneck of the whole analytical process. Its adequacy is a key factor in determining the success of the analysis and, therefore, careful selection and optimization of the parameters controlling sample treatment should be carried out. This work reviews the different strategies that have been developed for sample preparation prior to capillary electrophoresis-mass spectrometry (CE-MS). Specifically, it presents an exhaustive and critical review of the different sample treatments used together with on-line CE-MS, covering works published from January 2000 to July 2006.

  17. Data compilation for assessing sediment and toxic chemical loads from the Green River to the lower Duwamish Waterway, Washington

    USGS Publications Warehouse

    Conn, Kathleen E.; Black, Robert W.

    2014-01-01

    Between February and June 2013, the U.S. Geological Survey collected representative samples of whole water, suspended sediment, and (or) bed sediment from a single strategically located site on the Duwamish River, Washington, during seven periods of different flow conditions. Samples were analyzed by Washington-State-accredited laboratories for a large suite of compounds, including polycyclic aromatic hydrocarbons and other semivolatile compounds, polychlorinated biphenyl Aroclors and the 209 congeners, metals, dioxins/furans, volatile organic compounds, pesticides, butyltins, hexavalent chromium, and total organic carbon. Chemical concentrations associated with bulk bed sediment (<2 mm) and fine bed sediment (<62.5 μm) fractions were compared to chemical concentrations associated with suspended sediment. Bulk bed sediment concentrations generally were lower than fine bed sediment and suspended-sediment concentrations. Concurrent with the chemistry sampling, additional parameters were measured, including instantaneous river discharge, suspended-sediment concentration, sediment particle-size distribution, and general water-quality parameters. From these data, estimates of instantaneous sediment and chemical loads from the Green River to the Lower Duwamish Waterway were calculated.

  18. Probabilistic evaluation of damage potential in earthquake-induced liquefaction in a 3-D soil deposit

    NASA Astrophysics Data System (ADS)

    Halder, A.; Miller, F. J.

    1982-03-01

    A probabilistic model to evaluate the risk of liquefaction at a site and to limit or eliminate damage during earthquake-induced liquefaction is proposed. The model is extended to consider three-dimensional nonhomogeneous soil properties. The parameters relevant to the liquefaction phenomenon are identified, including: (1) soil parameters; (2) parameters required to consider laboratory test and sampling effects; and (3) loading parameters. The fundamentals of risk-based design concepts pertinent to liquefaction are reviewed. A detailed statistical evaluation of the soil parameters in the proposed liquefaction model is provided, and the uncertainty associated with the estimation of in situ relative density is evaluated for both direct and indirect methods. It is found that, for the liquefaction potential, the uncertainties in the load parameters could be higher than those in the resistance parameters.

  19. Aqueous geochemical data from the analysis of stream-water samples collected in June and July 2006-Taylor Mountains 1:250,000-scale quadrangle, Alaska

    USGS Publications Warehouse

    Wang, Bronwen; Mueller, Seth; Stetson, Sarah; Bailey, Elizabeth; Lee, Greg

    2011-01-01

    We report on the chemical analysis of water samples collected from the Taylor Mountains 1:250,000-scale quadrangle, Alaska. Parameters for which data are reported include pH, conductivity, water temperature, major cation and anion concentrations, trace-element concentrations, and dissolved organic-carbon concentrations. Samples were collected as part of a multiyear U.S. Geological Survey project entitled "Geologic and Mineral Deposit Data for Alaskan Economic Development." Data presented here are from samples collected in June and July 2006. The data are being released at this time with minimal interpretation. This is the third release of aqueous geochemical data from this project; aqueous geochemical data from samples collected in 2004 and 2005 were published previously. The data in this report augment but do not duplicate or supersede the previous data release. Site selection was based on a regional sampling strategy that focused on first- and second-order drainages. Water sample site selection was based on landscape parameters that included physiography, wetland extent, lithological changes, and a cursory field review of mineralogy from pan concentrates. Stream water in the Taylor Mountains quadrangle is dominated by bicarbonate (HCO₃⁻), although in a few samples more than 50 percent of the anionic charge can be attributed to sulfate (SO₄²⁻). The major-cation chemistry ranges from Ca²⁺/Mg²⁺ dominated to a mix of Ca²⁺/Mg²⁺/Na⁺+K⁺. Generally, good agreement was found between the major cations and anions in the duplicate samples. Many trace elements in these samples were at or near the analytical method detection limit, but good agreement was found between duplicate samples for elements with detectable concentrations. All field blank major-ion and trace-element concentrations were below detection.

  20. Learning Maximal Entropy Models from finite size datasets: a fast Data-Driven algorithm allows to sample from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    A maximal entropy model provides the least constrained probability distribution that reproduces experimental averages of a set of observables. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameter space. We then provide a way of rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameter dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" Data-Driven algorithm that is fast and, by sampling from the parameter posterior, avoids both under- and over-fitting along all directions of the parameter space. Through the learning of pairwise Ising models from the recordings of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a Grant from the Human Brain Project (HBP CLAP).
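
    A minimal sketch of maximum-entropy learning of a pairwise Ising model by gradient ascent, with model averages estimated by Gibbs sampling, is given below; it uses plain steepest ascent and synthetic data, without the rectified metric or posterior sampling described above.

    ```python
    import numpy as np

    # Sketch: fit fields h and couplings J of a pairwise Ising model so that model
    # means and pairwise correlations (estimated by Gibbs sampling) match the data.
    rng = np.random.default_rng(6)
    N, n_data = 8, 2000

    # Synthetic +/-1 data from random couplings (stands in for recorded spike words).
    J_true = rng.normal(scale=0.3, size=(N, N))
    J_true = np.triu(J_true, 1); J_true += J_true.T
    h_true = rng.normal(scale=0.2, size=N)

    def gibbs(h, J, n_samples, n_burn=200):
        s = rng.choice([-1, 1], size=N)
        out = np.empty((n_samples, N))
        for t in range(n_burn + n_samples):
            for i in range(N):                       # one full Gibbs sweep
                field = h[i] + J[i] @ s
                s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2 * field)) else -1
            if t >= n_burn:
                out[t - n_burn] = s
        return out

    data = gibbs(h_true, J_true, n_data)
    m_data, C_data = data.mean(0), data.T @ data / n_data

    h, J, lr = np.zeros(N), np.zeros((N, N)), 0.1
    for step in range(200):                          # steepest ascent on the log-likelihood
        model = gibbs(h, J, 500)
        m_model, C_model = model.mean(0), model.T @ model / 500
        h += lr * (m_data - m_model)                 # match first moments
        dJ = lr * (C_data - C_model); np.fill_diagonal(dJ, 0.0)
        J += dJ                                      # match pairwise correlations

    print("residual max |J - J_true|:", np.abs(J - J_true).max())
    ```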

  1. Direct sample introduction-gas chromatography-mass spectrometry for the determination of haloanisole compounds in cork stoppers.

    PubMed

    Cacho, J I; Nicolás, J; Viñas, P; Campillo, N; Hernández-Córdoba, M

    2016-12-02

    A solventless analytical method is proposed for analyzing the compounds responsible for cork taint in cork stoppers. Direct sample introduction (DSI) is evaluated as a sample introduction system for the gas chromatography-mass spectrometry (GC-MS) determination of four haloanisoles (HAs) in cork samples. Several parameters affecting the DSI step, including desorption temperature and time, gas flow rate and other focusing parameters, were optimized using univariate and multivariate approaches. The proposed method shows high sensitivity and minimises sample handling, with detection limits of 1.6-2.6 ng g⁻¹, depending on the compound. The suitability of the optimized procedure as a screening method was evaluated by obtaining decision limits (CCα) and detection capabilities (CCβ) for each analyte, which were found to be in the ranges 6.9-11.8 and 8.7-14.8 ng g⁻¹, respectively, depending on the compound. Twenty-four cork samples were analysed, and 2,4,6-trichloroanisole was found in four of them at levels between 12.6 and 53 ng g⁻¹. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Coal liquefaction process streams characterization and evaluation: Analysis of Black Thunder coal and liquefaction products from HRI Bench Unit Run CC-15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugmire, R.J.; Solum, M.S.

    This study was designed to apply ¹³C-nuclear magnetic resonance (NMR) spectrometry to the analysis of direct coal liquefaction process-stream materials. ¹³C-NMR was shown to have a high potential for application to direct coal liquefaction-derived samples in Phase II of this program. In this Phase III project, ¹³C-NMR was applied to a set of samples derived from the HRI Inc. bench-scale liquefaction Run CC-15. The samples include the feed coal, net products and intermediate streams from three operating periods of the run. High-resolution ¹³C-NMR data were obtained for the liquid samples and solid-state CP/MAS ¹³C-NMR data were obtained for the coal and filter-cake samples. The ¹³C-NMR technique is used to derive a set of twelve carbon structural parameters for each sample (CONSOL Table A). Average molecular structural descriptors can then be derived from these parameters (CONSOL Table B).

  3. VizieR Online Data Catalog: Orbital parameters of Kuiper Belt objects (Volk+, 2017)

    NASA Astrophysics Data System (ADS)

    Volk, K.; Malhotra, R.

    2017-11-01

    Our starting point is the list of minor planets in the outer solar system cataloged in the database of the Minor Planet Center (http://www.minorplanetcenter.net/iau/lists/t_centaurs.html and http://www.minorplanetcenter.net/iau/lists/t_tnos.html) as of 2016 October 20. The complete listing of our sample, including best-fit orbital parameters and sky locations, is provided in Table1. (1 data file).

  4. Band excitation method applicable to scanning probe microscopy

    DOEpatents

    Jesse, Stephen [Knoxville, TN]; Kalinin, Sergei V [Knoxville, TN]

    2010-08-17

    Methods and apparatus are described for scanning probe microscopy. A method includes generating a band excitation (BE) signal having finite and predefined amplitude and phase spectrum in at least a first predefined frequency band; exciting a probe using the band excitation signal; obtaining data by measuring a response of the probe in at least a second predefined frequency band; and extracting at least one relevant dynamic parameter of the response of the probe in a predefined range including analyzing the obtained data. The BE signal can be synthesized prior to imaging (static band excitation), or adjusted at each pixel or spectroscopy step to accommodate changes in sample properties (adaptive band excitation). An apparatus includes a band excitation signal generator; a probe coupled to the band excitation signal generator; a detector coupled to the probe; and a relevant dynamic parameter extractor component coupled to the detector, the relevant dynamic parameter extractor including a processor that performs a mathematical transform selected from the group consisting of an integral transform and a discrete transform.
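
    A minimal sketch of synthesizing such a band-excitation drive signal (flat amplitude and pseudo-random phase inside a predefined band, converted to the time domain with an inverse FFT) is given below; the band edges, sample rate and record length are placeholders.

    ```python
    import numpy as np

    # Sketch: build a band-limited excitation with finite, predefined amplitude and
    # phase spectrum in one frequency band, as a time-domain drive waveform.
    fs, duration = 1.0e6, 4e-3                      # 1 MS/s, 4 ms record (assumed)
    n = int(fs * duration)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    f_lo, f_hi = 60e3, 90e3                          # excitation band around an assumed resonance
    rng = np.random.default_rng(7)
    spectrum = np.zeros(freqs.size, dtype=complex)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spectrum[band] = np.exp(1j * rng.uniform(0, 2 * np.pi, band.sum()))  # flat amplitude, random phase

    be_signal = np.fft.irfft(spectrum, n)
    be_signal /= np.abs(be_signal).max()             # normalise drive amplitude

    # The measured probe response in the same band would then be Fourier transformed and
    # fitted (e.g. to a damped harmonic oscillator) to extract resonance frequency and Q.
    response_spectrum = np.fft.rfft(be_signal)       # placeholder for the measured response
    print(be_signal.shape, response_spectrum.shape)
    ```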

  5. Band excitation method applicable to scanning probe microscopy

    DOEpatents

    Jesse, Stephen; Kalinin, Sergei V

    2013-05-28

    Methods and apparatus are described for scanning probe microscopy. A method includes generating a band excitation (BE) signal having finite and predefined amplitude and phase spectrum in at least a first predefined frequency band; exciting a probe using the band excitation signal; obtaining data by measuring a response of the probe in at least a second predefined frequency band; and extracting at least one relevant dynamic parameter of the response of the probe in a predefined range including analyzing the obtained data. The BE signal can be synthesized prior to imaging (static band excitation), or adjusted at each pixel or spectroscopy step to accommodate changes in sample properties (adaptive band excitation). An apparatus includes a band excitation signal generator; a probe coupled to the band excitation signal generator; a detector coupled to the probe; and a relevant dynamic parameter extractor component coupled to the detector, the relevant dynamic parameter extractor including a processor that performs a mathematical transform selected from the group consisting of an integral transform and a discrete transform.

  6. Determination of drugs and drug-like compounds in different samples with direct analysis in real time mass spectrometry.

    PubMed

    Chernetsova, Elena S; Morlock, Gertrud E

    2011-01-01

    Direct analysis in real time (DART), a relatively new ionization source for mass spectrometry, ionizes small-molecule components from different kinds of samples without any sample preparation and chromatographic separation. The current paper reviews the published data available on the determination of drugs and drug-like compounds in different matrices with DART-MS, including identification and quantitation issues. Parameters that affect ionization efficiency and mass spectra composition are also discussed. Copyright © 2011 Wiley Periodicals, Inc.

  7. Environmental characteristics of anopheline mosquito larval habitats in a malaria endemic area in Iran.

    PubMed

    Soleimani-Ahmadi, Moussa; Vatandoost, Hassan; Hanafi-Bojd, Ahmad-Ali; Zare, Mehdi; Safari, Reza; Mojahedi, Abdolrasul; Poorahmad-Garbandi, Fatemeh

    2013-07-01

    To determine the effects of environmental parameters of larval habitats on distribution and abundance of anopheline mosquitoes in Rudan county of Iran. This cross-sectional study was conducted during the mosquito breeding season from February 2010 to October 2011. The anopheline larvae were collected using the standard dipping method. The specimens were identified using a morphological-based key. Simultaneously with larval collection, environmental parameters of the larval habitats including water current and turbidity, sunlight situation, and substrate type of habitats were recorded. Water samples were taken from breeding sites during larval collection. Before collection of samples, the water temperature was measured. The water samples were analysed for turbidity, conductivity, total alkalinity, total dissolved solids, pH and ions including chloride, sulphate, calcium, and magnesium. Statistical correlation analysis and ANOVA tests were used to analyze the association between environmental parameters and larval mosquito abundance. In total 2 973 larvae of the genus Anopheles were collected from 25 larval habitats and identified using morphological characters. They comprised six species: An. dthali (53.21%), An. stephensi (24.22%), An. culicifacies (14.06%), An. superpictus (4.07%), An. turkhudi (3.30%), and An. apoci (1.14%). The most abundant species was An. dthali, which was collected from all of the study areas. Larvae of two malaria vectors, An. dthali and An. stephensi, co-existed and were collected in a wide range of habitats with different physico-chemical parameters. The most common larval habitats were man-made sites such as sand mining pools with clean and still water. The anopheline mosquitoes also preferred permanent habitats in sunlight with sandy substrates. The results indicated that there was a significant relationship between mean physico-chemical parameters (such as water temperature, conductivity, total alkalinity, sulphate, and chloride) and mosquito distribution and abundance. The results of this study showed a correlation between certain environmental parameters and mosquito larvae abundance, and these parameters should be considered in planning and implementing larval control programs. Copyright © 2013 Hainan Medical College. Published by Elsevier B.V. All rights reserved.

  8. Chemical concentrations in water and suspended sediment, Green River to Lower Duwamish Waterway near Seattle, Washington, 2016–17

    USGS Publications Warehouse

    Conn, Kathleen E.; Black, Robert W.; Peterson, Norman T.; Senter, Craig A.; Chapman, Elena A.

    2018-01-05

    From August 2016 to March 2017, the U.S. Geological Survey (USGS) collected representative samples of filtered and unfiltered water and suspended sediment (including the colloidal fraction) at USGS streamgage 12113390 (Duwamish River at Golf Course, at Tukwila, Washington) during 13 periods of differing flow conditions. Samples were analyzed by Washington-State-accredited laboratories for a large suite of compounds, including metals, dioxins/furans, semivolatile compounds including polycyclic aromatic hydrocarbons, butyltins, the 209 polychlorinated biphenyl (PCB) congeners, and total and dissolved organic carbon. Concurrent with the chemistry sampling, water-quality field parameters were measured, and representative water samples were collected and analyzed for river suspended-sediment concentration and particle-size distribution. The results provide new data that can be used to estimate sediment and chemical loads transported by the Green River to the Lower Duwamish Waterway.

  9. Sea Surface Scanner: An advanced catamaran to study the sea surface

    NASA Astrophysics Data System (ADS)

    Wurl, O.; Mustaffa, N. I. H.; Ribas Ribas, M.

    2016-02-01

    The Sea Surface Scanner is a remote-controlled catamaran with the capability to sample the sea-surface microlayer at high resolution. The catamaran is equipped with a suite of sensors to scan the sea surface for chemical, biological and physical parameters. Parameters include UV absorption, fluorescence spectra, chlorophyll-a, photosynthetic efficiency, chromophoric dissolved organic matter (CDOM), dissolved oxygen, pH, temperature, and salinity. A further feature is the capability to remotely collect discrete water samples for detailed lab analysis. We present the first high-resolution (< 30 sec) data on the sea surface microlayer. We discuss the variability of biochemical properties of the sea surface and its implications for air-sea interaction.

  10. Deposition of amorphous carbon thin films by aerosol-assisted CVD method

    NASA Astrophysics Data System (ADS)

    Fadzilah, A. N.; Dayana, K.; Rusop, M.

    2018-05-01

    This paper reports on the deposition of amorphous carbon (a-C) by Aerosol-Assisted Chemical Vapor Deposition (AACVD) using a natural source, camphor oil, as the precursor material. Four samples were deposited at four different deposition flow rates from 15 sccm to 20 sccm, with a 5 sccm interval between samples. The analysis includes the electrical, optical and structural characterization of the films. The a-C structure, obtained by varying the synthesis parameters, was characterized using a solar simulator system, UV-VIS-NIR spectrophotometry, Raman spectroscopy and AFM. The properties of a-C are highly dependent on the deposition technique and deposition parameters; hence the influence of gas flow rate was studied.

  11. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum-weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.

  12. Performance evaluation of Samsung LABGEO(HC10) Hematology Analyzer.

    PubMed

    Park, Il Joong; Ahn, Sunhyun; Kim, Young In; Kang, Seon Joo; Cho, Sung Ran

    2014-08-01

    The Samsung LABGEO(HC10) Hematology Analyzer (LABGEO(HC10)) is a recently developed automated hematology analyzer that uses impedance technologies. The analyzer provides 18 parameters, including a 3-part differential, at a maximum rate of 80 samples per hour. To evaluate the performance of the LABGEO(HC10), we assessed precision, linearity, carryover, and the relationship for complete blood cell count parameters between the LABGEO(HC10) and the LH780 (Beckman Coulter Inc) in a university hospital in Korea according to the Clinical and Laboratory Standards Institute guidelines. Sample stability and differences due to the anticoagulant used (K₂EDTA versus K₃EDTA) were also evaluated. The LABGEO(HC10) showed linearity over a wide range and minimal carryover (<1%) for white blood cell, hemoglobin, red blood cell, and platelet parameters. Correlation between the LABGEO(HC10) and the LH780 was good for all complete blood cell count parameters (R > 0.92) except for mean corpuscular hemoglobin concentration. The estimated bias was acceptable for all parameters investigated except for monocyte count. Most parameters were stable for up to 24 hours both at room temperature and at 4°C. The difference by anticoagulant type was statistically insignificant for all parameters except for a few red cell parameters. The accurate results achievable and the simplicity of operation make the unit recommendable for small to medium-sized laboratories.

  13. Finding an optimal strategy for measuring the quality of groundwater as a source for drinking water

    NASA Astrophysics Data System (ADS)

    van Driezum, Inge; Saracevic, Ernis; Scheibz, Jürgen; Zessner, Matthias; Kirschner, Alexander; Sommer, Regina; Farnleitner, Andreas; Blaschke, Alfred Paul

    2015-04-01

    A good chemical and microbiological water quality is of great importance in riverbank filtration systems that are used as public water supplies. Water quality is ideally monitored frequently at the drinking water well using a steady pumping rate. Monitoring source water (like groundwater), however, can be more challenging. First of all, piezometers should be drilled into the correct layer of the aquifer. Secondly, the sampling design should include all preferred parameters (microbiological and chemical) and should also take the hydrological conditions into account. In this study, we made use of different geophysical techniques (ERT and FDEM) to select the optimal placement of the piezometers. We also designed a sampling strategy which can be used to sample fecal indicators, biostability parameters, standard chemical parameters and a wide range of micropollutants. Several time series experiments were carried out in the study site Porous GroundWater Aquifer (PGWA) - an urban floodplain extending on the left bank of the river Danube downstream of the City of Vienna, Austria. The upper layer of the PGWA consists of silt and has a thickness of 1 to 6 meters. The underlying confined aquifer consists of sand and gravel and has a thickness of between 3 and 15 meters. Hydraulic conductivities range from 5 × 10⁻⁵ m/s to 5 × 10⁻² m/s. Underneath the aquifer are alternating sand and clay/silt layers. Escherichia coli, enterococci and aerobic spores were measured as fecal markers. Biostability was measured using leucine incorporation. Additionally, several micropollutants and standard chemical parameters were measured. Results showed that physical and chemical parameters stayed stable in all monitoring wells during extended purging. A similar trend could be observed for E. coli and enterococci. In the wells close to the river, aerobic spores and leucine incorporation decreased after 30 min of pumping, whereas the well close to the backwater showed a different pattern. Overall, purging for 45 minutes was the optimal sampling procedure for the microbiological parameters. Samples for the detection of micropollutants were taken after 15 min of purging.

  14. Influence of the freezing method on the changes that occur in grape samples after frozen storage.

    PubMed

    Santesteban, Luis G; Miranda, Carlos; Royo, José B

    2013-09-01

    Sample freezing is frequently used in oenological laboratories as a compromise solution to increase the number of samples that can be analysed, despite the fact that some grape characteristics are known to change after frozen storage. However, freezing is usually performed using standard freezers, which provide slow freezing. The aim of this work was to evaluate whether blast freezing would decrease the impact of standard freezing on grape composition. Grape quality parameters were assessed in fresh and in frozen stored samples that had been frozen using three different procedures: standard freezing and blast freezing using either a blast freezer or an ultra-freezer. The implications of frozen storage in grape samples reported in earlier research were observed for the three freezing methods evaluated. Although blast freezing improved repeatability for the most problematic parameters (tartaric acidity, TarA; total phenolics, TP), the improvement was not important from a practical point of view. However, TarA and TP were relatively repeatable among the three freezing procedures, which suggests that freezing had an effect on these parameters independently of the method used. According to our results, the salification potential of the must is probably implicated in the changes observed for TarA, whereas for TP the precipitation of proanthocyanidins after association with cell wall material is hypothesized to cause the lack of repeatability between fresh and frozen grapes. Blast freezing would not imply a great improvement if implemented in oenological laboratories, at least for the parameters included in this study. © 2013 Society of Chemical Industry.

  15. Resonance region measurements of dysprosium and rhenium

    NASA Astrophysics Data System (ADS)

    Leinweber, Gregory; Block, Robert C.; Epping, Brian E.; Barry, Devin P.; Rapp, Michael J.; Danon, Yaron; Donovan, Timothy J.; Landsberger, Sheldon; Burke, John A.; Bishop, Mary C.; Youmans, Amanda; Kim, Guinyun N.; Kang, yeong-rok; Lee, Man Woo; Drindak, Noel J.

    2017-09-01

    Neutron capture and transmission measurements have been performed, and resonance parameter analysis has been completed for dysprosium, Dy, and rhenium, Re. The 60 MeV electron accelerator at the RPI Gaerttner LINAC Center produced neutrons in the thermal and epithermal energy regions for these measurements. Transmission measurements were made using 6Li glass scintillation detectors. The neutron capture measurements were made with a 16-segment NaI multiplicity detector. The detectors for all experiments were located at ≈25 m except for thermal transmission, which was done at ≈15 m. The dysprosium samples included one highly enriched 164Dy metal, 6 liquid solutions of enriched 164Dy, and two natural Dy metals. The Re samples were natural metals. Their capture yield normalizations were corrected for their high gamma attenuation. The multi-level R-matrix Bayesian computer code SAMMY was used to extract the resonance parameters from the data. 164Dy resonance data were analyzed up to 550 eV, other Dy isotopes up to 17 eV, and Re resonance data up to 1 keV. Uncertainties due to the resolution function, flight path, burst width, sample thickness, normalization, background, and zero time were estimated and propagated using SAMMY. An additional check of sample-to-sample consistency is presented as an estimate of uncertainty. The thermal total cross sections and neutron capture resonance integrals of 164Dy and Re were determined from the resonance parameters. The NJOY and INTER codes were used to process and integrate the cross sections. Plots of the data, fits, and calculations using ENDF/B-VII.1 resonance parameters are presented.

  16. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    ERIC Educational Resources Information Center

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  17. A Waterborne Outbreak and Detection of Cryptosporidium Oocysts in Drinking Water of an Older High-Rise Apartment Complex in Seoul

    PubMed Central

    Yang, Jin-Young; Lee, Eun-Sook; Kim, Se-Chul; Cha, So-Yang; Kim, Sung-Tek; Lee, Man-Ho; Han, Sun-Hee; Park, Young-Sang

    2013-01-01

    From May to June 2012, a waterborne outbreak of 124 cases of cryptosporidiosis occurred in the plumbing systems of an older high-rise apartment complex in Seoul, Republic of Korea. The residents of this apartment complex had symptoms of watery diarrhea and vomiting. Tap water samples in the apartment complex and its adjacent buildings were collected and tested for 57 parameters under the Korean Drinking Water Standards and for an additional 11 microbiological parameters. The microbiological parameters included total colony counts, Clostridium perfringens, Enterococcus, fecal streptococcus, Salmonella, Shigella, Pseudomonas aeruginosa, Cryptosporidium oocysts, Giardia cysts, total culturable viruses, and Norovirus. While the tap water samples of the adjacent buildings complied with the Korean Drinking Water Standards for all parameters, fecal bacteria and Cryptosporidium oocysts were detected in the tap water samples of the outbreak apartment complex. The causative agent of the disease was identified as Cryptosporidium parvum. The drinking water had been polluted with sewage from a septic tank in the apartment complex. To remove C. parvum oocysts, we conducted the physical processes of cleaning the water storage tanks, flushing the indoor pipes, and replacing old pipes with new ones. Finally, clean drinking water was restored to the apartment complex after no oocysts were identified. PMID:24039290

  18. Measurement of complex terahertz dielectric properties of polymers using an improved free-space technique

    NASA Astrophysics Data System (ADS)

    Chang, Tianying; Zhang, Xiansheng; Yang, Chuanfa; Sun, Zhonglin; Cui, Hong-Liang

    2017-04-01

    The complex dielectric properties of non-polar solid polymer materials were measured in the terahertz (THz) band by a free-space technique employing a frequency-extended vector network analyzer (VNA), and by THz time-domain spectroscopy (TDS). Mindful of THz wave’s unique characteristics, the free-space method for measurement of material dielectric properties in the microwave band was expanded and improved for application in the THz frequency region. To ascertain the soundness and utility of the proposed method, measurements of the complex dielectric properties of a variety of polymers were carried out, including polytetrafluoroethylene (PTFE, known also by the brand name Teflon), polypropylene (PP), polyethylene (PE), and glass fiber resin (Composite Stone). The free-space method relies on the determination of electromagnetic scattering parameters (S-parameters) of the sample, with the gated-reflect-line (GRL) calibration technique commonly employed using a VNA. Subsequently, based on the S-parameters, the dielectric constant and loss characteristic of the sample were calculated by using a Newtonian iterative algorithm. To verify the calculated results, THz TDS technique, which produced Fresnel parameters such as reflection and transmission coefficients, was also used to independently determine the dielectric properties of these polymer samples, with results satisfactorily corroborating those obtained by the free-space extended microwave technique.

  19. A waterborne outbreak and detection of cryptosporidium oocysts in drinking water of an older high-rise apartment complex in seoul.

    PubMed

    Cho, Eun-Joo; Yang, Jin-Young; Lee, Eun-Sook; Kim, Se-Chul; Cha, So-Yang; Kim, Sung-Tek; Lee, Man-Ho; Han, Sun-Hee; Park, Young-Sang

    2013-08-01

    From May to June 2012, a waterborne outbreak of 124 cases of cryptosporidiosis occurred in the plumbing systems of an older high-rise apartment complex in Seoul, Republic of Korea. The residents of this apartment complex had symptoms of watery diarrhea and vomiting. Tap water samples in the apartment complex and its adjacent buildings were collected and tested for 57 parameters under the Korean Drinking Water Standards and for an additional 11 microbiological parameters. The microbiological parameters included total colony counts, Clostridium perfringens, Enterococcus, fecal streptococcus, Salmonella, Shigella, Pseudomonas aeruginosa, Cryptosporidium oocysts, Giardia cysts, total culturable viruses, and Norovirus. While the tap water samples of the adjacent buildings complied with the Korean Drinking Water Standards for all parameters, fecal bacteria and Cryptosporidium oocysts were detected in the tap water samples of the outbreak apartment complex. The causative agent of the disease was identified as Cryptosporidium parvum. The drinking water had been polluted with sewage from a septic tank in the apartment complex. To remove C. parvum oocysts, we conducted the physical processes of cleaning the water storage tanks, flushing the indoor pipes, and replacing old pipes with new ones. Finally, clean drinking water was restored to the apartment complex after no oocysts were identified.

  20. VizieR Online Data Catalog: RPA Southern Pilot Search of 107 Stars (Hansen+, 2018)

    NASA Astrophysics Data System (ADS)

    Hansen, T. T.; Holmbeck, E. M.; Beers, T. C.; Placco, V. M.; Roederer, I. U.; Frebel, A.; Sakari, C. M.; Simon, J. D.; Thompson, I. B.

    2018-03-01

    Complete equivalent width measurements of FeI and FeII lines for all stars in our sample used to derive spectroscopic stellar parameters. Also included are the derived abundances for each line. (2 data files).

  1. System health monitoring using multiple-model adaptive estimation techniques

    NASA Astrophysics Data System (ADS)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods with new techniques for sampling the parameter space, under the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
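
    The two sampling schemes named above (SGBS-style stratification and LHS) can be illustrated with a short sketch. The parameter names and bounds below are hypothetical; the point is only that the grid grows multiplicatively with dimension while the LHS sample count stays fixed.

    ```python
    # Minimal sketch of the two parameter-sampling schemes: an equally
    # spaced stratified grid versus Latin Hypercube Sampling (LHS).
    import numpy as np

    rng = np.random.default_rng(0)
    bounds = {"mass": (0.5, 2.0), "damping": (0.01, 0.2)}  # hypothetical parameters
    names = list(bounds)
    lo = np.array([bounds[n][0] for n in names])
    hi = np.array([bounds[n][1] for n in names])

    def grid_samples(n_per_dim):
        """SGBS-like scheme: equally spaced strata in every dimension (count grows as n**d)."""
        axes = [np.linspace(l, h, n_per_dim) for l, h in zip(lo, hi)]
        return np.stack(np.meshgrid(*axes), axis=-1).reshape(-1, len(names))

    def lhs_samples(n):
        """LHS: one sample per stratum in each dimension, shuffled independently,
        so the model count does not grow with dimensionality."""
        strata = rng.permuted(np.tile(np.arange(n), (len(names), 1)), axis=1).T
        u = (strata + rng.random((n, len(names)))) / n
        return lo + u * (hi - lo)

    print(grid_samples(5).shape)   # (25, 2) -- grows multiplicatively with dimension
    print(lhs_samples(5).shape)    # (5, 2)  -- stays linear in n
    ```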

  2. Dual concentric crystal low energy photon detector

    DOEpatents

    Guilmette, R.A.

    A photon detector for biological samples includes a block of NaI(Tl) having a hole containing a thin-walled cylinder of CsI(Tl). At least three photomultiplier tubes are evenly spaced around the perimeter of the block. Biological samples are placed within the hole, and emissions which are sensed by at least two of the photomultipliers from only the NaI(Tl) detector are counted.

  3. Traffic-Adaptive, Flow-Specific Medium Access for Wireless Networks

    DTIC Science & Technology

    2009-09-01

    hybrid, contention and non-contention schemes are shown to be special cases. This work also compares the energy efficiency of centralized and distributed...solutions and proposes an energy efficient version of traffic-adaptive CWS-MAC that includes an adaptive sleep cycle coordinated through the use of...preamble sampling. A preamble sampling probability parameter is introduced to manage the trade-off between energy efficiency and throughput and delay

  4. Apparatus and methods for manipulation and optimization of biological systems

    NASA Technical Reports Server (NTRS)

    Sun, Ren (Inventor); Ho, Chih-Ming (Inventor); Wong, Pak Kin (Inventor); Yu, Fuqu (Inventor)

    2012-01-01

    The invention provides systems and methods for manipulating, e.g., optimizing and controlling, biological systems, e.g., for eliciting a more desired biological response of a biological sample, such as a tissue, organ, and/or a cell. In one aspect, systems and methods of the invention operate by efficiently searching through a large parametric space of stimuli and system parameters to manipulate, control, and optimize the response of biological samples sustained in the system, e.g., a bioreactor. In alternative aspects, systems include a device for sustaining cells or tissue samples, one or more actuators for stimulating the samples via biochemical, electromagnetic, thermal, mechanical, and/or optical stimulation, and one or more sensors for measuring a biological response signal of the samples resulting from the stimulation. In one aspect, the systems and methods of the invention use at least one optimization algorithm to modify the actuator's control inputs for stimulation, responsive to the sensor's output of response signals. The compositions and methods of the invention can be used, e.g., for systems optimization of any biological manufacturing or experimental system, e.g., bioreactors for proteins (e.g., therapeutic proteins, polypeptides or peptides for vaccines), small molecules (e.g., antibiotics), polysaccharides, lipids, and the like. Other uses of the apparatus and methods include combination drug therapy (e.g., optimal drug cocktails), directed cell proliferation and differentiation (e.g., neural progenitor cell differentiation in tissue engineering), and discovery of key parameters in complex biological systems.

  5. CO2 response (ACi) gas exchange, calculated Vcmax & Jmax parameters, Feb2016-May2016, PA-SLZ, PA-PNM: Panama

    DOE Data Explorer

    Rogers, Alistair [Brookhaven National Lab; Serbin, Shawn [Brookhaven National Lab; Ely, Kim [Brookhaven National Lab; Wu, Jin [BNL; Wolfe, Brett [Smithsonian; Dickman, Turin [Los Alamos National Lab; Collins, Adam [Los Alamos National Lab; Detto, Matteo [Princeton; Grossiord, Charlotte [Los Alamos National Lab; McDowell, Nate [Los Alamos National Lab; Michaletz, Sean

    2017-01-01

    CO2 response (ACi) gas exchange measured on leaves collected from sunlit canopy trees on a monthly basis from Feb to May 2016 at SLZ and PNM. Dataset includes calculated Vcmax and Jmax parameters. This data was collected as part of the 2016 ENSO campaign. See related datasets (existing and future) for further sample details, leaf water potential, LMA, leaf spectra, other gas exchange and leaf chemistry.

  6. The use of a genetic algorithm-based search strategy in geostatistics: application to a set of anisotropic piezometric head data

    NASA Astrophysics Data System (ADS)

    Abedini, M. J.; Nasseri, M.; Burn, D. H.

    2012-04-01

    In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls the nearby samples to be included in the location-specific estimation procedure. Almost all geostatistical software available in the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates that are defined for the entire area under study, and the user has no guidance as to how to choose these parameters. The main thesis of the current study is that the selection of search strategy parameters has to be driven by data—both the spatial coordinates and the sample values—and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with moving neighborhood technique is proposed. The search capability of a genetic algorithm is exploited to search the feature space for appropriate, either local or global, search strategy parameters. Radius of circle/sphere and/or radii of standard or rotated ellipse/ellipsoid are considered as the decision variables to be optimized by GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of numerical results showed that definition of search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics when compared with that based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization of local search strategy parameters for an elliptical support domain—the orientation of which is dictated by anisotropic axes—via GA was able to capture the dynamics of piezometric head in west Texas/New Mexico in an efficient way.
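
    As a rough illustration of the idea (not the authors' implementation), the sketch below uses a tiny real-coded genetic algorithm to pick a circular search radius that minimises leave-one-out cross-validation error. Inverse-distance weighting stands in for ordinary kriging to keep the example self-contained, and the coordinates, sample values, and GA settings are synthetic assumptions.

    ```python
    # Hedged sketch: GA search for a local-neighbourhood radius that
    # minimises leave-one-out cross-validation error, with inverse-distance
    # weighting as a simple stand-in for ordinary kriging.
    import numpy as np

    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 100, size=(60, 2))                       # synthetic sample coordinates
    z = np.sin(xy[:, 0] / 15.0) + 0.1 * rng.standard_normal(60)  # synthetic sample values

    def loocv_error(radius):
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        errs = []
        for i in range(len(z)):
            mask = (d[i] <= radius) & (np.arange(len(z)) != i)
            if mask.sum() < 3:
                return np.inf                                    # penalise too-small neighbourhoods
            w = 1.0 / d[i, mask] ** 2
            errs.append(z[i] - np.sum(w * z[mask]) / w.sum())
        return float(np.sqrt(np.mean(np.square(errs))))

    # Very small real-coded GA: tournament selection, blend crossover, Gaussian mutation.
    pop = rng.uniform(5, 80, size=20)                            # candidate radii
    for gen in range(30):
        fit = np.array([loocv_error(r) for r in pop])
        a, b = rng.integers(0, len(pop), (2, len(pop)))
        parents = np.where(fit[a] < fit[b], pop[a], pop[b])      # tournament selection
        mates = rng.permutation(parents)
        alpha = rng.random(len(pop))
        pop = alpha * parents + (1 - alpha) * mates + rng.normal(0, 1.0, len(pop))
        pop = np.clip(pop, 1.0, 100.0)

    best = pop[np.argmin([loocv_error(r) for r in pop])]
    print(f"GA-selected search radius: {best:.1f} (LOOCV RMSE {loocv_error(best):.3f})")
    ```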

  7. Correlation of Blood Gas Parameters with Central Venous Pressure in Patients with Septic Shock; a Pilot Study

    PubMed Central

    Baratloo, Alireza; Rahmati, Farhad; Rouhipour, Alaleh; Motamedi, Maryam; Gheytanchi, Elmira; Amini, Fariba; Safari, Saeed

    2014-01-01

    Objective: To determine the correlation between blood gas parameters and central venous pressure (CVP) in patients suffering from septic shock. Methods: Forty adult patients with a diagnosis of septic shock who were admitted to the emergency department (ED) of Shohadaye Tajrish Hospital, affiliated with Shahid Beheshti University of Medical Sciences, and met the inclusion and exclusion criteria were enrolled. For all patients, sampling was done for venous blood gas analysis and serum sodium and chloride levels. At the time of sampling, blood pressure, pulse rate and CVP were recorded. Correlations between blood gas parameters and hemodynamic indices were assessed. Results: CVP correlated significantly and directly with anion gap (AG) and inversely with base deficit (BD) and bicarbonate. CVP also showed a relative correlation with pH, whereas it was not correlated with the BD/AG ratio or serum chloride level. There was no significant association between CVP and clinical parameters including shock index (SI) and mean arterial pressure (MAP). Conclusion: It seems that some noninvasive blood gas parameters could serve as alternatives to invasive measures such as CVP in the treatment planning of patients referred to an ED with septic shock. PMID:27162870

  8. Botanical discrimination of Greek unifloral honeys with physico-chemical and chemometric analyses.

    PubMed

    Karabagias, Ioannis K; Badeka, Anastasia V; Kontakos, Stavros; Karabournioti, Sofia; Kontominas, Michael G

    2014-12-15

    The aim of the present study was to investigate the possibility of characterisation and classification of Greek unifloral honeys (pine, thyme, fir and orange blossom) according to botanical origin using volatile compounds, conventional physico-chemical parameters and chemometric analyses (MANOVA and Linear Discriminant Analysis). For this purpose, 119 honey samples were collected during the 2011 harvesting period from 14 different regions in Greece known to produce unifloral honey of good quality. Physico-chemical analysis included the identification and semi-quantification of fifty-five volatile compounds, performed by headspace solid-phase microextraction coupled to gas chromatography/mass spectrometry, and the determination of conventional quality parameters such as pH, free, lactonic and total acidity, electrical conductivity, moisture, ash, lactonic/free acidity ratio and the colour parameters L, a, b. Results showed that using 40 diverse variables (30 volatile compounds of different classes and 10 physico-chemical parameters) the honey samples were satisfactorily classified according to botanical origin using volatile compounds (84.0% correct prediction), physico-chemical parameters (97.5% correct prediction), and the combination of both (95.8% correct prediction), indicating that multi-element analysis comprises a powerful tool for honey discrimination purposes. Copyright © 2014 Elsevier Ltd. All rights reserved.
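
    The classification step can be sketched as follows. This is an illustrative outline only: the input file and column names are hypothetical, and leave-one-out cross-validation with scikit-learn's LinearDiscriminantAnalysis stands in for the chemometric workflow summarised above.

    ```python
    # Illustrative sketch: LDA classification of honey samples by botanical
    # origin from volatile and physico-chemical variables (hypothetical data).
    import pandas as pd
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score, LeaveOneOut

    df = pd.read_csv("honey_samples.csv")          # hypothetical input file
    y = df["botanical_origin"]                     # pine / thyme / fir / orange blossom
    X = df.drop(columns=["botanical_origin"])      # volatiles + physico-chemical parameters

    lda = LinearDiscriminantAnalysis()
    acc = cross_val_score(lda, X, y, cv=LeaveOneOut()).mean()
    print(f"Cross-validated correct classification rate: {acc:.1%}")
    ```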

  9. Characteristics of a random sample of emergency food program users in New York: II. Soup kitchens.

    PubMed Central

    Bowering, J; Clancy, K L; Poppendieck, J

    1991-01-01

    A random sample of soup kitchen clients in New York City was studied and specific comparisons made on various parameters including homelessness. Compared with the general population of low income persons, soup kitchen users were overwhelmingly male, disproportionately African-American, and more likely to live alone. The homeless (41 percent of the sample) were less likely to receive food stamps or free food, or to use food pantries. Fewer of them received Medicaid or had health insurance. Forty-seven percent had no income in contrast to 29 percent of the total sample. PMID:2053673

  10. Cosmological parameters, shear maps and power spectra from CFHTLenS using Bayesian hierarchical inference

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Heavens, Alan; Jaffe, Andrew H.

    2017-04-01

    We apply two Bayesian hierarchical inference schemes to infer shear power spectra, shear maps and cosmological parameters from the Canada-France-Hawaii Telescope (CFHTLenS) weak lensing survey - the first application of this method to data. In the first approach, we sample the joint posterior distribution of the shear maps and power spectra by Gibbs sampling, with minimal model assumptions. In the second approach, we sample the joint posterior of the shear maps and cosmological parameters, providing a new, accurate and principled approach to cosmological parameter inference from cosmic shear data. As a first demonstration on data, we perform a two-bin tomographic analysis to constrain cosmological parameters and investigate the possibility of photometric redshift bias in the CFHTLenS data. Under the baseline ΛCDM (Λ cold dark matter) model, we constrain S_8 = σ_8(Ω_m/0.3)^0.5 = 0.67 ± 0.03 (68 per cent), consistent with previous CFHTLenS analyses but in tension with Planck. Adding neutrino mass as a free parameter, we are able to constrain ∑m_ν < 4.6 eV (95 per cent) using CFHTLenS data alone. Including a linear redshift-dependent photo-z bias Δz = p_2(z - p_1), we find p_1 = -0.25 (+0.53/-0.60) and p_2 = -0.15 (+0.17/-0.15), and tension with Planck is only alleviated under very conservative prior assumptions. Neither the non-minimal neutrino mass nor photo-z bias models are significantly preferred by the CFHTLenS (two-bin tomography) data.
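
    For readers unfamiliar with the derived parameter quoted above, the short sketch below shows how S_8 and its 68 per cent interval would be computed from posterior samples of σ_8 and Ω_m; the Gaussian draws here are synthetic stand-ins for real chain samples, not the survey posteriors.

    ```python
    # Sketch: computing the derived parameter S_8 from posterior samples.
    import numpy as np

    rng = np.random.default_rng(2)
    sigma8 = rng.normal(0.75, 0.05, 10000)       # illustrative posterior draws
    omega_m = rng.normal(0.27, 0.03, 10000)

    S8 = sigma8 * (omega_m / 0.3) ** 0.5
    lo, med, hi = np.percentile(S8, [16, 50, 84])
    print(f"S_8 = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f}) at 68 per cent")
    ```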

  11. Parameter Estimation for Compact Binaries with Ground-Based Gravitational-Wave Observations Using the LALInference

    NASA Technical Reports Server (NTRS)

    Veitch, J.; Raymond, V.; Farr, B.; Farr, W.; Graff, P.; Vitale, S.; Aylott, B.; Blackburn, K.; Christensen, N.; Coughlin, M.

    2015-01-01

    The Advanced LIGO and Advanced Virgo gravitational wave (GW) detectors will begin operation in the coming years, with compact binary coalescence events a likely source for the first detections. The gravitational waveforms emitted directly encode information about the sources, including the masses and spins of the compact objects. Recovering the physical parameters of the sources from the GW observations is a key analysis task. This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies. We show that our implementation is able to correctly recover the parameters of compact binary signals from simulated data from the advanced GW detectors. We demonstrate this with a detailed comparison on three compact binary systems: a binary neutron star (BNS), a neutron star - black hole binary (NSBH) and a binary black hole (BBH), where we show a cross-comparison of results obtained using three independent sampling algorithms. These systems were analysed with non-spinning, aligned spin and generic spin configurations respectively, showing that consistent results can be obtained even with the full 15-dimensional parameter space of the generic spin configurations. We also demonstrate statistically that the Bayesian credible intervals we recover correspond to frequentist confidence intervals under correct prior assumptions by analysing a set of 100 signals drawn from the prior. We discuss the computational cost of these algorithms, and describe the general and problem-specific sampling techniques we have used to improve the efficiency of sampling the compact binary coalescence (CBC) parameter space.

  12. Comparison of different parameters for recording sagittal maxillo mandibular relation using natural head posture: A cephalometric study

    PubMed Central

    Singh, Ashish Kumar; Ganeshkar, Sanjay V.; Mehrotra, Praveen; Bhagchandani, Jitendra

    2013-01-01

    Background: Commonly used parameters for anteroposterior assessment of the jaw relationship include several analyses such as ANB, NA-Pog, AB-NPog, Wits appraisal, Harvold's unit length difference, and the Beta angle. Several parameters (with different ranges and values) account for the sagittal relation, yet the published literature comparing and correlating these measurements is scarce. Therefore, the objective of this study was to correlate these values in subjects of Indian origin. Materials and Methods: The sample consisted of fifty adult individuals (age group 18-26 years) with an equal number of males and females. The selection criteria included subjects with no previous history of orthodontic and/or orthognathic surgical treatment; an orthognathic facial profile; Angle's Class I molar relation; a clinical Frankfort Mandibular plane angle (FMA) of 30±5°; and no gross facial asymmetry. The cephalograms were taken in natural head position (NHP). Seven sagittal skeletal parameters were measured in the cephalograms and subjected to statistical evaluation, with the Wits reading on the true horizontal as reference. A correlation coefficient analysis was done to assess the significance of association between these variables. Results: The ANB angle showed a statistically significant correlation for the total sample, though the values were insignificant for the individual groups and therefore may not be very accurate. Wits appraisal was seen to have a significant correlation only in the female sample group. Conclusions: If cephalograms cannot be recorded in an NHP, then the best indicator for recording the A-P skeletal dimension would be the angle AB-NPog, followed by Harvold's unit length difference. However, considering biologic variability, more than one reading should necessarily be used to verify the same. PMID:24987638

  13. Development of Carbotrap B-packed needle trap device for determination of volatile organic compounds in air.

    PubMed

    Poormohammadi, Ali; Bahrami, Abdulrahman; Farhadian, Maryam; Ghorbani Shahna, Farshid; Ghiasvand, Alireza

    2017-12-08

    Carbotrap B, a highly pure surface sorbent with excellent adsorption/desorption properties, was packed into a stainless steel needle to develop a new needle trap device (NTD). The performance of the prepared NTD was investigated for sampling, pre-concentration and injection of benzene, toluene, ethyl benzene, o-xylene, and p-xylene (BTEX) into the column of a gas chromatography-mass spectrometry (GC-MS) instrument. Response surface methodology (RSM) with a central composite design (CCD) was employed in two separate consecutive steps to optimize the sampling and device parameters. First, the sampling parameters, such as sampling temperature and relative humidity, were optimized. Afterwards, RSM was used to optimize the desorption parameters, including desorption temperature and time. The results indicated that the peak area responses of the analytes of interest decreased with increasing sampling temperature and relative humidity. The optimum values of desorption temperature were in the range 265-273°C, and of desorption time in the range 3.4-3.8 min. The limits of detection (LODs) and limits of quantitation (LOQs) of the studied analytes were found to be in the ranges 0.03-0.04 ng/mL and 0.1-0.13 ng/mL, respectively. These results demonstrate that the NTD packed with Carbotrap B offers a highly sensitive procedure for sampling and analysis of BTEX in air over the concentration range 0.03-25 ng/mL. Copyright © 2017 Elsevier B.V. All rights reserved.
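
    The response-surface step can be illustrated with a small sketch: fit a second-order polynomial in desorption temperature and time to a set of design-point responses and solve for the stationary point. The design points and peak areas below are invented for illustration; they are not the study's data.

    ```python
    # Hedged sketch of a response-surface fit over two desorption factors.
    import numpy as np

    # central-composite-style design points: (temperature degC, time min) -> peak area
    T = np.array([240, 240, 280, 280, 232, 288, 260, 260, 260], dtype=float)
    t = np.array([3.0, 4.0, 3.0, 4.0, 3.5, 3.5, 2.8, 4.2, 3.5])
    y = np.array([74, 78, 82, 86, 75, 87, 76, 82, 89], dtype=float)

    # design matrix for y = b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t
    X = np.column_stack([np.ones_like(T), T, t, T**2, t**2, T*t])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)

    # stationary point: solve gradient = 0 for the fitted quadratic surface
    A = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])
    opt = np.linalg.solve(A, -np.array([b[1], b[2]]))
    print(f"Stationary point of fitted surface: {opt[0]:.0f} degC, {opt[1]:.1f} min")
    ```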

  14. An evaluation of the ELT-8 hematology analyzer.

    PubMed

    Raik, E; McPherson, J; Barton, L; Hewitt, B S; Powell, E G; Gordon, S

    1982-04-01

    The ELT-8 Hematology Analyzer is a fully automated cell counter which utilizes laser light scattering and hydrodynamic focusing to provide an 8-parameter whole blood count. The instrument consists of a sample handler with a ticket printer, and a data handler with a visual display unit. It accepts 100 microliter samples of venous or capillary blood and prints the values for WCC, RCC, Hb, Hct, MCV, MCH, MCHC and platelet count onto a standard result card. All operational and quality control functions, including graphic display of relative cell size distribution, can be accessed from the visual display unit and can also be printed as a permanent record if required. In a limited evaluation of the ELT-8, precision, linearity, accuracy, lack of sample carry-over and user acceptance were excellent. Reproducible values were obtained for all parameters after overnight storage of samples. Reagent usage and running costs were lower than for the Coulter S and the Coulter S Plus. The ease of processing capillary samples was considered to be a major advantage. The histograms served to alert the operator to a number of abnormalities, some of which were clinically significant.

  15. Impact assessment of on-site sanitation system on groundwater quality in alluvial settings: A case study from Lucknow city in North India.

    PubMed

    Jangam, Chandrakant; Ramya Sanam, S; Chaturvedi, M K; Padmakar, C; Pujari, Paras R; Labhasetwar, Pawan K

    2015-10-01

    The present case study was undertaken to investigate the impact of on-site sanitation on groundwater quality in alluvial settings in Lucknow City, India. Groundwater samples were collected in the areas of Lucknow City where on-site sanitation systems have been implemented and were analyzed for the major physicochemical parameters and fecal coliform. The results of the analysis reveal that none of the groundwater samples exceeded the Bureau of Indian Standards (BIS) limits for any of the parameters. Fecal coliform was not found in the majority of the samples, including those very close to the septic tanks. The study area has a thick alluvium cover as a top layer, which acts as a natural barrier against groundwater contamination from the on-site sanitation systems. A t test was performed to assess the seasonal effect on groundwater quality; it implies that there is a significant effect of season on groundwater quality in the study area.
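
    A minimal sketch of the seasonal comparison follows (the file and column names are hypothetical, and a paired design with pre- and post-monsoon measurements from the same wells is assumed):

    ```python
    # Sketch: paired t-test for a seasonal effect on a water-quality parameter.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("groundwater_quality.csv")   # hypothetical file, one row per well
    t_stat, p = stats.ttest_rel(df["nitrate_pre_monsoon"], df["nitrate_post_monsoon"])
    print(f"paired t = {t_stat:.2f}, p = {p:.3f}")
    ```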

  16. Data Validation Package May 2016 Groundwater Sampling at the Sherwood, Washington, Disposal Site August 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreie, Ken; Traub, David

    The 2001 Long-Term Surveillance Plan (LTSP) for the U.S. Department of Energy Sherwood Project (UMTRCA Title II) Reclamation Cell, Wellpinit, Washington, does not require groundwater compliance monitoring at the Sherwood site. However, the LTSP stipulates limited groundwater monitoring for chloride and sulfate (designated indicator parameters) and total dissolved solids (TDS) as a best management practice. Samples were collected from the background well, MW-2B, and the two downgradient wells, MW-4 and MW-10, in accordance with the LTSP. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated). Water levels were measured in all wells prior to sampling and in four piezometers completed in the tailings dam. Time-concentration graphs included in this report indicate that the chloride, sulfate, and TDS concentrations are consistent with historical measurements. The concentrations of chloride and sulfate are well below the State of Washington water quality criteria value of 250 milligrams per liter (mg/L) for both parameters.

  17. Remote sensing-aided systems for snow qualification, evapotranspiration estimation, and their application in hydrologic models

    NASA Technical Reports Server (NTRS)

    Korram, S.

    1977-01-01

    The design of general remote sensing-aided methodologies was studied to provide estimates of several important inputs to water yield forecast models. These input parameters are snow area extent, snow water content, and evapotranspiration. The study area is the Feather River Watershed (780,000 hectares) in Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All the relevant and available information types needed in the estimation process were defined. These include Landsat, meteorological satellite, and aircraft imagery; topographic and geologic data; ground truth data; and climatic data from ground stations. A cost-effective multistage sampling approach was employed in the quantification of all the required parameters. Physical and statistical models for both snow quantification and evapotranspiration estimation were developed. These models use the information obtained from aerial and ground data through an appropriate statistical sampling design.

  18. The Multigroup Multilevel Categorical Latent Growth Curve Models

    ERIC Educational Resources Information Center

    Hung, Lai-Fa

    2010-01-01

    Longitudinal data describe developmental patterns and enable predictions of individual changes beyond sampled time points. Major methodological issues in longitudinal data include modeling random effects, subject effects, growth curve parameters, and autoregressive residuals. This study embedded the longitudinal model within a multigroup…

  19. Evaluating Parametrization Protocols for Hydration Free Energy Calculations with the AMOEBA Polarizable Force Field.

    PubMed

    Bradshaw, Richard T; Essex, Jonathan W

    2016-08-09

    Hydration free energy (HFE) calculations are often used to assess the performance of biomolecular force fields and the quality of assigned parameters. The AMOEBA polarizable force field moves beyond traditional pairwise additive models of electrostatics and may be expected to improve upon predictions of thermodynamic quantities such as HFEs over and above fixed-point-charge models. The recent SAMPL4 challenge evaluated the AMOEBA polarizable force field in this regard but showed substantially worse results than those using the fixed-point-charge GAFF model. Starting with a set of automatically generated AMOEBA parameters for the SAMPL4 data set, we evaluate the cumulative effects of a series of incremental improvements in parametrization protocol, including both solute and solvent model changes. Ultimately, the optimized AMOEBA parameters give a set of results that are not statistically significantly different from those of GAFF in terms of signed and unsigned error metrics. This allows us to propose a number of guidelines for new molecule parameter derivation with AMOEBA, which we expect to have benefits for a range of biomolecular simulation applications such as protein-ligand binding studies.

  20. Spatial and temporal distribution of benthic macroinvertebrates in a Southeastern Brazilian river.

    PubMed

    Silveira, M P; Buss, D F; Nessimian, J L; Baptista, D F

    2006-05-01

    Benthic macroinvertebrate assemblages are structured according to the physical and chemical parameters that define microhabitats, including food supply, shelter from predators, and other biological parameters that influence reproductive success. The aim of this study is to investigate the spatial and temporal distribution of macroinvertebrate assemblages in the Macaé river basin, in Rio de Janeiro state, Southeastern Brazil. According to the "Habitat Assessment Field Data Sheet--High Gradient Streams" (Barbour et al., 1999), the five sampling sites are considered to represent reference conditions. Despite the differences in hydrological parameters (mean width, depth and discharge) among sites, the physicochemical parameters and the general structure of functional feeding groups were similar, except for the less impacted area, which showed more shredders. According to a Detrended Correspondence Analysis based on substrates, there is a clear distinction between pool and riffle assemblages. The riffle litter substrate had higher taxon richness and abundance, but the pool litter substrate had the greatest number of exclusive taxa. A Cluster Analysis based on sampling site data showed that temporal variation was the main factor structuring macroinvertebrate assemblages in the studied habitats.

  1. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size needed to closely estimate the statistics for particular parameters has long been an issue. Although the sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the resulting statistics are close to the parameters of a particular population. Meanwhile, a guideline based on a p-value of less than 0.05 is widely used as inferential evidence. Therefore, this study audited results from various subsamples and statistical analyses and compared them with the parameters of three different populations. Eight types of statistical analysis and eight subsamples for each statistical analysis were analyzed. The results showed that the statistics were consistent and close to the parameters when the sample covered at least 15% to 35% of the population. Larger sample sizes are needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.
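
    The auditing idea can be sketched with a synthetic population: draw subsamples covering increasing fractions of the population and compare the sample statistic with the known parameter. The distribution, sizes and fractions below are illustrative only, not the study's data.

    ```python
    # Sketch: how closely do subsample statistics track the population parameter?
    import numpy as np

    rng = np.random.default_rng(3)
    population = rng.gamma(shape=2.0, scale=30.0, size=20000)   # synthetic numerical variable
    mu = population.mean()                                      # the population parameter

    for frac in [0.05, 0.15, 0.25, 0.35]:
        n = int(frac * population.size)
        means = [rng.choice(population, n, replace=False).mean() for _ in range(200)]
        err = np.abs(np.array(means) - mu).mean()
        print(f"{frac:>4.0%} of population (n={n}): mean absolute error vs parameter = {err:.2f}")
    ```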

  2. Determination of serum levels of imatinib mesylate in patients with chronic myeloid leukemia: validation and application of a new analytical method to monitor treatment compliance

    PubMed Central

    Rezende, Vinícius Marcondes; Rivellis, Ariane Julio; Gomes, Melissa Medrano; Dörr, Felipe Augusto; Novaes, Mafalda Megumi Yoshinaga; Nardinelli, Luciana; Costa, Ariel Lais de Lima; Chamone, Dalton de Alencar Fisher; Bendit, Israel

    2013-01-01

    Objective The goal of this study was to monitor imatinib mesylate therapeutically in the Tumor Biology Laboratory, Department of Hematology and Hemotherapy, Hospital das Clínicas, Faculdade de Medicina, Universidade de São Paulo (USP). A simple and sensitive method to quantify imatinib and its metabolite (CGP74588) in human serum was developed and fully validated in order to monitor treatment compliance. Methods The method used to quantify these compounds in serum included protein precipitation extraction followed by instrumental analysis using high performance liquid chromatography coupled with mass spectrometry. The method was validated for several parameters, including selectivity, precision, accuracy, recovery and linearity. Results The parameters evaluated during the validation stage exhibited satisfactory results based on the Food and Drug Administration and the Brazilian Health Surveillance Agency (ANVISA) guidelines for validating bioanalytical methods. These parameters also showed a linear correlation greater than 0.99 for the concentration range between 0.500 µg/mL and 10.0 µg/mL and a total analysis time of 13 minutes per sample. This study includes results (imatinib serum concentrations) for 308 samples from patients being treated with imatinib mesylate. Conclusion The method developed in this study was successfully validated and is being efficiently used to measure imatinib concentrations in samples from chronic myeloid leukemia patients to check treatment compliance. The imatinib serum levels of patients achieving a major molecular response were significantly higher than those of patients who did not achieve this result. These results are thus consistent with published reports concerning other populations. PMID:23741187

  3. Genuine non-self-averaging and ultraslow convergence in gelation.

    PubMed

    Cho, Y S; Mazza, M G; Kahng, B; Nagler, J

    2016-08-01

    In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.
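
    The order parameter discussed above (the relative size of the largest cluster) and its sample-to-sample spread can be illustrated with a generic toy aggregation model; this is random link addition tracked with union-find, not the specific growth rules studied in the paper.

    ```python
    # Toy illustration: spread of the largest-cluster fraction across realisations.
    import numpy as np

    def largest_cluster_fraction(n, n_links, rng):
        parent = np.arange(n)
        size = np.ones(n, dtype=int)
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path halving
                i = parent[i]
            return i
        for _ in range(n_links):
            a, b = find(rng.integers(n)), find(rng.integers(n))
            if a != b:                          # merge the two clusters
                if size[a] < size[b]:
                    a, b = b, a
                parent[b] = a
                size[a] += size[b]
        return size.max() / n

    rng = np.random.default_rng(4)
    n = 2000
    for t in [0.45, 0.50, 0.55]:                # links per node (transition near 0.5)
        vals = [largest_cluster_fraction(n, int(t * n), rng) for _ in range(50)]
        print(f"t={t:.2f}: mean={np.mean(vals):.3f}, std={np.std(vals):.3f}")
    ```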

  4. Aquifer Hydrogeologic Layer Zonation at the Hanford Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savelieva-Trofimova, Elena A.; Kanevski, Mikhail; timonin, v.

    2003-09-10

    Sedimentary aquifer layers are characterized by spatial variability of hydraulic properties. Nevertheless, zones with similar values of hydraulic parameters (parameter zones) can be distinguished. This parameter zonation approach is an alternative to the analysis of spatial variation of the continuous hydraulic parameters. The parameter zonation approach is primarily motivated by the lack of measurements that would be needed for direct spatial modeling of the hydraulic properties. The current work is devoted to the problem of zonation of the Hanford formation, the uppermost sedimentary aquifer unit (U1) included in hydrogeologic models at the Hanford site. U1 is characterized by 5 zonesmore » with different hydraulic properties. Each sampled location is ascribed to a parameter zone by an expert. This initial classification is accompanied by a measure of quality (also indicated by an expert) that addresses the level of classification confidence. In the current study, the coneptual zonation map developed by an expert geologist was used as an a priori model. The parameter zonation problem was formulated as a multiclass classification task. Different geostatistical and machine learning algorithms were adapted and applied to solve this problem, including: indicator kriging, conditional simulations, neural networks of different architectures, and support vector machines. All methods were trained using additional soft information based on expert estimates. Regularization methods were used to overcome possible overfitting. The zonation problem was complicated because there were few samples for some zones (classes) and by the spatial non-stationarity of the data. Special approaches were developed to overcome these complications. The comparison of different methods was performed using qualitative and quantitative statistical methods and image analysis. We examined the correspondence of the results with the geologically based interpretation, including the reproduction of the spatial orientation of the different classes and the spatial correlation structure of the classes. The uncertainty of the classification task was examined using both probabilistic interpretation of the estimators and by examining the results of a set of stochastic realizations. Characterization of the classification uncertainty is the main advantage of the proposed methods.« less

  5. Four-year stability of anthropometric and cardio-metabolic parameters in a prospective cohort of older adults.

    PubMed

    Jackson, Sarah E; van Jaarsveld, Cornelia Hm; Beeken, Rebecca J; Gunter, Marc J; Steptoe, Andrew; Wardle, Jane

    2015-01-01

    To examine the medium-term stability of anthropometric and cardio-metabolic parameters in the general population. Participants were 5160 men and women from the English Longitudinal Study of Ageing (age ≥50 years) assessed in 2004 and 2008. Anthropometric data included height, weight, BMI and waist circumference. Cardio-metabolic parameters included blood pressure, serum lipids (total cholesterol, HDL, LDL, triglycerides), hemoglobin, fasting glucose, fibrinogen and C-reactive protein. Stability of anthropometric variables was high (all intraclass correlations >0.92), although mean values changed slightly (-0.01 kg weight, +1.33 cm waist). Cardio-metabolic parameters showed more variation: correlations ranged from 0.43 (glucose) to 0.81 (HDL). The majority of participants (71-97%) remained in the same grouping relative to established clinical cut-offs. Over a 4-year period, anthropometric and cardio-metabolic parameters showed good stability. These findings suggest that when no means to obtain more recent data exist, a one-time sample will give a reasonable approximation to average levels over the medium-term, although reliability is reduced.

  6. Experimental investigation of the tip based micro/nano machining

    NASA Astrophysics Data System (ADS)

    Guo, Z.; Tian, Y.; Liu, X.; Wang, F.; Zhou, C.; Zhang, D.

    2017-12-01

    Based on a self-developed three-dimensional micro/nano machining system, the effects of machining parameters and sample material on micro/nano machining are investigated. The micro/nano machining system is mainly composed of the probe system and a micro/nano positioning stage. The former is used to control the normal load and the latter to realize high-precision motion in the xy plane. A sample examination method is first introduced to check whether the sample is placed horizontally. The machining parameters include scratching direction, speed, number of cycles, normal load and feed. According to the experimental results, the scratching depth is significantly affected by the normal load in all four defined scratching directions but is rarely influenced by the scratching speed. Increasing the number of scratching cycles increases the scratching depth as well as smoothing the groove wall. In addition, the scratching tests on silicon and copper indicate that the harder material is removed more easily. In the scratching tests with different feed amounts, the machining results indicate that the machined depth increases as the feed decreases. Further, a cubic polynomial is used to fit the experimental results and predict the scratching depth. With the selected machining parameters of scratching direction d3/d4, scratching speed 5 μm/s and feed 0.06 μm, several microstructures, including a stair, a sinusoidal groove, the Chinese character '田', 'TJU' and a Chinese panda, were fabricated on the silicon substrate.
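
    A minimal sketch of the cubic-fit step mentioned above follows; the load and depth values are placeholders, not measured data, and which machining parameter serves as the predictor is an assumption of the sketch.

    ```python
    # Sketch: cubic polynomial fit to predict scratching depth.
    import numpy as np

    normal_load_uN = np.array([20, 40, 60, 80, 100, 120], dtype=float)   # hypothetical loads
    depth_nm = np.array([12, 31, 58, 95, 140, 196], dtype=float)         # hypothetical depths

    coeffs = np.polyfit(normal_load_uN, depth_nm, deg=3)                 # cubic fit
    predict = np.poly1d(coeffs)
    print(f"Predicted depth at 90 uN: {predict(90):.0f} nm")
    ```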

  7. A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins

    NASA Astrophysics Data System (ADS)

    Gronewold, A.; Alameddine, I.; Anderson, R. M.

    2009-12-01

    Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predicting flow from ungauged basins. In particular, these approaches allow for predicting flows under uncertain and potentially variable future conditions due to rapid land cover changes, variable climate conditions, and other factors. Despite the broad range of literature on estimating rainfall-runoff model parameters, however, the absence of a robust set of modeling tools for identifying and quantifying uncertainties in (and correlation between) rainfall-runoff model parameters represents a significant gap in current hydrological modeling research. Here, we build upon a series of recent publications promoting novel Bayesian and probabilistic modeling strategies for quantifying rainfall-runoff model parameter estimation uncertainty. Our approach applies alternative measures of rainfall-runoff model parameter joint likelihood (including Nash-Sutcliffe efficiency, among others) to simulate samples from the joint parameter posterior probability density function. We then use these correlated samples as response variables in a Bayesian hierarchical model with land use coverage data as predictor variables in order to develop a robust land use-based tool for forecasting flow in ungauged basins while accounting for, and explicitly acknowledging, parameter estimation uncertainty. We apply this modeling strategy to low-relief coastal watersheds of Eastern North Carolina, an area representative of coastal resource waters throughout the world because of its sensitive embayments and because of the abundant (but currently threatened) natural resources it hosts. Consequently, this area is the subject of several ongoing studies and large-scale planning initiatives, including those conducted through the United States Environmental Protection Agency (USEPA) total maximum daily load (TMDL) program, as well as those addressing coastal population dynamics and sea level rise. Our approach has several advantages, including the propagation of parameter uncertainty through a nonparametric probability distribution which avoids common pitfalls of fitting parameters and model error structure to a predetermined parametric distribution function. In addition, by explicitly acknowledging correlation between model parameters (and reflecting those correlations in our predictive model) our model yields relatively efficient prediction intervals (unlike those in the current literature which are often unnecessarily large, and may lead to overly-conservative management actions). Finally, our model helps improve understanding of the rainfall-runoff process by identifying model parameters (and associated catchment attributes) which are most sensitive to current and future land use change patterns. Disclaimer: Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.
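
    As a rough, self-contained illustration of sampling rainfall-runoff parameters with an efficiency-based pseudo-likelihood (one ingredient of the approach described above), the sketch below runs a Metropolis sampler over a single parameter of a toy linear-reservoir model. The model, the synthetic data, and the GLUE-style likelihood shaping factor are all invented for the example; the full hierarchical, land-use-driven model is not reproduced here.

    ```python
    # Hedged sketch: Metropolis sampling of a toy rainfall-runoff parameter
    # using a Nash-Sutcliffe-efficiency based pseudo-likelihood.
    import numpy as np

    rng = np.random.default_rng(5)
    rain = rng.gamma(0.7, 4.0, 365)                       # synthetic daily rainfall

    def simulate(k, rain):
        """Toy linear reservoir: storage drains at rate k per day."""
        s, q = 0.0, np.empty_like(rain)
        for i, p in enumerate(rain):
            s += p
            q[i] = k * s
            s -= q[i]
        return q

    q_obs = simulate(0.3, rain) + rng.normal(0, 0.3, rain.size)   # synthetic "observed" flow

    def nse(q_sim, q_obs):
        return 1.0 - np.sum((q_sim - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

    def log_pseudo_like(k, shape=30.0):
        if not 0.0 < k < 1.0:
            return -np.inf
        return shape * nse(simulate(k, rain), q_obs)      # sharper peak for larger `shape`

    samples, k = [], 0.5
    logp = log_pseudo_like(k)
    for _ in range(5000):
        k_new = k + rng.normal(0, 0.02)
        logp_new = log_pseudo_like(k_new)
        if np.log(rng.random()) < logp_new - logp:        # Metropolis acceptance
            k, logp = k_new, logp_new
        samples.append(k)

    samples = np.array(samples[1000:])                    # discard burn-in
    print(f"posterior k: {samples.mean():.3f} +/- {samples.std():.3f}")
    ```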

  8. The Influence of Porosity on Fatigue Crack Initiation in Additively Manufactured Titanium Components.

    PubMed

    Tammas-Williams, S; Withers, P J; Todd, I; Prangnell, P B

    2017-08-04

    Without post-manufacture HIPing, the fatigue life of electron beam melting (EBM) additively manufactured parts is currently dominated by the presence of porosity and exhibits large amounts of scatter. Here we have shown that the size and location of these defects are crucial in determining the fatigue life of EBM Ti-6Al-4V samples. X-ray computed tomography has been used to characterise all the pores in fatigue samples prior to testing and to follow the initiation and growth of fatigue cracks. This shows that the initiation stage comprises a large fraction of life (>70%). In these samples the initiating defect was often some way from being the largest (merely within the top 35% of large defects). Using various ranking strategies including a range of parameters, we found that when the proximity to the surface and the pore aspect ratio were included the actual initiating defect was within the top 3% of defects ranked most harmful. This lays the basis for considering how the deposition parameters can be optimised to ensure that the distribution of pores is tailored to the distribution of applied stresses in additively manufactured parts, maximising the fatigue life for a given loading cycle.
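
    A simple defect-ranking score combining pore size, proximity to the surface and aspect ratio can be sketched as follows. The weights and pore populations are illustrative assumptions, not the ranking strategies evaluated in the paper.

    ```python
    # Sketch of ranking pores by a combined harm score using size, distance to the free
    # surface, and aspect ratio. The weighting below is purely illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    n_pores = 200
    diameter_um = rng.lognormal(mean=3.0, sigma=0.5, size=n_pores)   # hypothetical pore sizes
    depth_um = rng.uniform(0.0, 2000.0, size=n_pores)                # distance from the surface
    aspect_ratio = rng.uniform(1.0, 4.0, size=n_pores)               # elongation of the pore

    def norm(x):
        """Scale a factor to [0, 1] so the factors can be combined."""
        return (x - x.min()) / (x.max() - x.min())

    # Larger, shallower and more elongated pores are scored as more harmful
    harm = 0.5 * norm(diameter_um) + 0.3 * (1.0 - norm(depth_um)) + 0.2 * norm(aspect_ratio)
    ranking = np.argsort(harm)[::-1]                  # pore indices, most harmful first

    top_3pct = ranking[: max(1, n_pores * 3 // 100)]
    print("pores ranked in the top 3% most harmful:", top_3pct)
    ```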

  9. Detection of E. coli O157:H7 in complex matrices under varying flow parameters with a robotic fluorometric assay system

    NASA Astrophysics Data System (ADS)

    Leskinen, Stephaney D.; Schlemmer, Sarah M.; Kearns, Elizabeth A.; Lim, Daniel V.

    2009-02-01

    The development of rapid assays for detection of microbial pathogens in complex matrices is needed to protect public health due to continued outbreaks of disease from contaminated foods and water. An Escherichia coli O157:H7 detection assay was designed using a robotic, fluorometric assay system. The system integrates optics, fluidics, robotics and software for the detection of foodborne pathogens or toxins in as many as four samples simultaneously. It utilizes disposable fiber optic waveguides coated with biotinylated antibodies for capture of target analytes from complex sample matrices. Computer-controlled rotation of sample cups allows complete contact between the sample and the waveguide. Detection occurs via binding of a fluorophore-labeled antibody to the captured target, which leads to an increase in the fluorescence signal. Assays are completed within twenty-five minutes. Sample matrices included buffer, retentate (material recovered from the filter of the Automated Concentration System (ACS) following hollow fiber ultrafiltration), spinach wash and ground beef. The matrices were spiked with E. coli O157:H7 (10³-10⁵ cells/ml) and the limits of detection were determined. The effect of sample rotation on assay sensitivity was also examined. Rotation parameters for each sample matrix included 10 ml with rotation, 5 ml with rotation and 0.1 ml without rotation. Detection occurred at 10⁴ cells/ml in buffer and spinach wash and at 10⁵ cells/ml in retentate and ground beef. Detection was greater for rotated samples in each matrix except ground beef. Enhanced detection of E. coli from large, rotated volumes of complex matrices was confirmed.

  10. Local versus field scale soil heterogeneity characterization - a challenge for representative sampling in pollution studies

    NASA Astrophysics Data System (ADS)

    Kardanpour, Z.; Jacobsen, O. S.; Esbensen, K. H.

    2015-06-01

    This study contributes to the development of a heterogeneity characterisation facility for "next generation" sampling, aimed at more realistic and controllable pesticide variability in laboratory pots for experimental environmental contaminant assessment. The role of soil heterogeneity in the quantification of a set of exemplar parameters (organic matter, loss on ignition (LOI), biomass, soil microbiology, MCPA sorption and mineralization) is described, including a brief background on how heterogeneity affects sampling/monitoring procedures in environmental pollutant studies. The Theory of Sampling (TOS) and variographic analysis have been applied to develop a fit-for-purpose heterogeneity characterization approach. All parameters were assessed in a large-scale (1-100 m) versus small-scale (0.1-1 m) replication sampling pattern. Variographic analysis of the experimental analytical results shows that it is essential to sample at locations separated by less than 2.5 m to benefit from spatial auto-correlation and thereby avoid unnecessary, inflated compositional variation in experimental pots; this range is an inherent characteristic of the soil heterogeneity and will differ among soil types. This study has significant carry-over potential for related research areas, e.g. soil science, contamination studies, environmental monitoring and environmental chemistry.
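
    An experimental (empirical) variogram of a one-dimensional sampling transect, the core tool of the variographic characterisation described above, can be sketched as below; the transect values are synthetic, not the study's soil data.

    ```python
    # Sketch of an experimental variogram for a 1-D transect: semivariance versus lag
    # distance, which reveals the range of spatial auto-correlation. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    positions = np.arange(0.0, 100.0, 0.5)                                  # sample locations (m)
    values = np.sin(positions / 8.0) + rng.normal(0.0, 0.3, positions.size) # synthetic soil parameter

    def experimental_variogram(x, z, lags, tol=0.25):
        """Semivariance gamma(h) = 0.5 * mean[(z(x) - z(x + h))**2] for each lag h."""
        d = np.abs(x[:, None] - x[None, :])
        diffs = z[:, None] - z[None, :]
        gamma = []
        for h in lags:
            pairs = np.isclose(d, h, atol=tol)
            gamma.append(0.5 * np.mean(diffs[pairs] ** 2))
        return np.array(gamma)

    lags = np.arange(0.5, 10.5, 0.5)
    gamma = experimental_variogram(positions, values, lags)
    for h, g in zip(lags, gamma):
        print(f"lag {h:4.1f} m  semivariance {g:.3f}")
    ```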

  11. Modeling end-use quality in U. S. soft wheat germplasm

    USDA-ARS?s Scientific Manuscript database

    End-use quality in soft wheat (Triticum aestivum L.) can be assessed by a wide array of measurements, generally categorized into grain, milling, and baking characteristics. Samples were obtained from four regional nurseries. Selected parameters included: test weight, kernel hardness, kernel size, ke...

  12. Apparatus and methods for controlling electron microscope stages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duden, Thomas

    Methods and apparatus for generating an image of a specimen with a microscope (e.g., TEM) are disclosed. In one aspect, the microscope may generally include a beam generator, a stage, a detector, and an image generator. A plurality of crystal parameters, which describe a plurality of properties of a crystal sample, are received. In a display associated with the microscope, an interactive control sphere based at least in part on the received crystal parameters and that is rotatable by a user to different sphere orientations is presented. The sphere includes a plurality of stage coordinates that correspond to a plurality of positions of the stage and a plurality of crystallographic pole coordinates that correspond to a plurality of polar orientations of the crystal sample. Movement of the sphere causes movement of the stage, wherein the stage coordinates move in conjunction with the crystallographic coordinates represented by pole positions so as to show a relationship between stage positions and the pole positions.

  13. Material characterization in partially filled waveguides using inverse scattering and multiple sample orientations

    NASA Astrophysics Data System (ADS)

    Sjöberg, Daniel; Larsson, Christer

    2015-06-01

    We present a method aimed at reducing uncertainties and instabilities when characterizing materials in waveguide setups. The method is based on measuring the S parameters for three different orientations of a rectangular sample block in a rectangular waveguide. The corresponding geometries are modeled in a commercial full-wave simulation program, taking any material parameters as input. The material parameters of the sample are found by minimizing the squared distance between measured and calculated S parameters. The information added by the different sample orientations is quantified using the Cramér-Rao lower bound. The flexibility of the method allows the determination of material parameters of an arbitrarily shaped sample that fits in the waveguide.
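
    The parameter-estimation step, minimising the squared distance between measured and modelled S parameters, can be sketched with a toy forward model in place of the commercial full-wave solver; the sample thickness, frequency band and "measured" data below are assumptions.

    ```python
    # Sketch of estimating material parameters by least-squares fitting of modelled to
    # measured S-parameters. The thin-slab model is a toy stand-in for a full-wave solver;
    # the "measured" data are synthetic.
    import numpy as np
    from scipy.optimize import least_squares

    freqs = np.linspace(8e9, 12e9, 41)                     # hypothetical X-band sweep (Hz)

    def model_s21(params, f):
        """Toy forward model: complex S21 of a slab parameterised by (eps_r, loss_tan)."""
        eps_r, loss_tan = params
        k = 2 * np.pi * f / 3e8 * np.sqrt(eps_r) * (1 - 0.5j * loss_tan)
        d = 0.003                                          # assumed 3 mm sample thickness
        return np.exp(-1j * k * d)

    true = (4.2, 0.02)
    rng = np.random.default_rng(3)
    measured = model_s21(true, freqs) + 0.01 * (rng.normal(size=freqs.size)
                                                + 1j * rng.normal(size=freqs.size))

    def residuals(params):
        diff = model_s21(params, freqs) - measured
        return np.concatenate([diff.real, diff.imag])      # least_squares needs real residuals

    fit = least_squares(residuals, x0=[3.0, 0.05], bounds=([1.0, 0.0], [10.0, 0.5]))
    print("estimated (eps_r, loss_tan):", fit.x)
    ```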

  14. High-throughput imaging of heterogeneous cell organelles with an X-ray laser (CXIDB ID 25)

    DOE Data Explorer

    Hantke, Max, F.

    2014-11-17

    Preprocessed detector images that were used for the paper "High-throughput imaging of heterogeneous cell organelles with an X-ray laser". The CXI file contains the entire recorded data, including both hits and blanks. It also includes down-sampled images and LCLS machine parameters. Additionally, the Cheetah configuration file that was used to create the pre-processed data is attached.

  15. Characterizing the Effects of Washing by Different Detergents on the Wavelength-Scale Microstructures of Silk Samples Using Mueller Matrix Polarimetry.

    PubMed

    Dong, Yang; He, Honghui; He, Chao; Zhou, Jialing; Zeng, Nan; Ma, Hui

    2016-08-10

    Silk fibers suffer from microstructural changes due to various external environmental conditions including daily washings. In this paper, we take the backscattering Mueller matrix images of silk samples for non-destructive and real-time quantitative characterization of the wavelength-scale microstructure and examination of the effects of washing by different detergents. The 2D images of the 16 Mueller matrix elements are reduced to the frequency distribution histograms (FDHs) whose central moments reveal the dominant structural features of the silk fibers. A group of new parameters are also proposed to characterize the wavelength-scale microstructural changes of the silk samples during the washing processes. Monte Carlo (MC) simulations are carried out to better understand how the Mueller matrix parameters are related to the wavelength-scale microstructure of silk fibers. The good agreement between experiments and simulations indicates that the Mueller matrix polarimetry and FDH based parameters can be used to quantitatively detect the wavelength-scale microstructural features of silk fibers. Mueller matrix polarimetry may be used as a powerful tool for non-destructive and in situ characterization of the wavelength-scale microstructures of silk based materials.
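
    The reduction of a Mueller matrix element image to a frequency distribution histogram (FDH) and its central moments can be sketched as follows; the "image" is synthetic noise rather than real polarimetry data.

    ```python
    # Sketch of reducing one Mueller matrix element image to a frequency distribution
    # histogram and computing its central moments, which summarise the dominant features.
    import numpy as np

    rng = np.random.default_rng(4)
    m22_image = rng.normal(loc=0.6, scale=0.08, size=(256, 256))   # placeholder element image

    values = m22_image.ravel()
    hist, edges = np.histogram(values, bins=100, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]

    mean = np.sum(centers * hist * width)
    central = {k: np.sum((centers - mean) ** k * hist * width) for k in (2, 3, 4)}
    print("FDH mean:", mean)
    print("2nd, 3rd and 4th central moments:", central[2], central[3], central[4])
    ```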

  16. Characterizing the Effects of Washing by Different Detergents on the Wavelength-Scale Microstructures of Silk Samples Using Mueller Matrix Polarimetry

    PubMed Central

    Dong, Yang; He, Honghui; He, Chao; Zhou, Jialing; Zeng, Nan; Ma, Hui

    2016-01-01

    Silk fibers suffer from microstructural changes due to various external environmental conditions including daily washings. In this paper, we take the backscattering Mueller matrix images of silk samples for non-destructive and real-time quantitative characterization of the wavelength-scale microstructure and examination of the effects of washing by different detergents. The 2D images of the 16 Mueller matrix elements are reduced to the frequency distribution histograms (FDHs) whose central moments reveal the dominant structural features of the silk fibers. A group of new parameters are also proposed to characterize the wavelength-scale microstructural changes of the silk samples during the washing processes. Monte Carlo (MC) simulations are carried out to better understand how the Mueller matrix parameters are related to the wavelength-scale microstructure of silk fibers. The good agreement between experiments and simulations indicates that the Mueller matrix polarimetry and FDH based parameters can be used to quantitatively detect the wavelength-scale microstructural features of silk fibers. Mueller matrix polarimetry may be used as a powerful tool for non-destructive and in situ characterization of the wavelength-scale microstructures of silk based materials. PMID:27517919

  17. Impact of ADC parameters on linear optical sampling systems

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung-Hien; Gay, Mathilde; Gomez-Agis, Fausto; Lobo, Sébastien; Sentieys, Olivier; Simon, Jean-Claude; Peucheret, Christophe; Bramerie, Laurent

    2017-11-01

    Linear optical sampling (LOS), based on the coherent photodetection of an optical signal under test with a low repetition-rate signal originating from a pulsed local oscillator (LO), enables the characterization of the temporal electric field of optical sources. Thanks to this technique, low-speed photodetectors and analog-to-digital converters (ADCs) can be integrated in the LOS system providing a cost-effective tool for characterizing high-speed signals. However, the impact of photodetector and ADC parameters on such LOS systems has not been explored in detail so far. These parameters, including the integration time of the track-and-hold function, the effective number of bits (ENOB) of the ADC, as well as the combined limited bandwidth of the photodetector and ADC are experimentally and numerically investigated in a LOS system for the first time. More specifically, by reconstructing 10-Gbit/s non-return-to-zero on-off keying (NRZ-OOK) and 10-Gbaud NRZ-quadrature phase-shift-keying (QPSK) signals, it is shown that a short integration time provides a better recovered signal fidelity. Furthermore, an ENOB of 6 bits and an ADC bandwidth normalized to the sampling rate of 2.8 are found to be sufficient in order to reliably monitor the considered signals.
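
    The role of the effective number of bits can be illustrated by quantising an ideal sinusoid and computing the resulting SINAD; the tone, record length and ENOB values below are illustrative, not those of the LOS system studied.

    ```python
    # Sketch of how the ADC's effective number of bits (ENOB) limits recovered signal
    # fidelity: quantise a sinusoid with 2**ENOB levels and compute the SINAD.
    import numpy as np

    def quantize(signal, enob):
        """Uniform quantiser spanning [-1, 1] with 2**enob levels."""
        step = 2.0 / (2 ** enob)
        return np.clip(np.round(signal / step) * step, -1.0, 1.0)

    t = np.arange(4096)
    ideal = np.sin(2 * np.pi * 0.0123 * t)          # non-harmonic tone frequency

    for enob in (4, 6, 8):
        q = quantize(ideal, enob)
        noise = q - ideal
        sinad_db = 10 * np.log10(np.mean(ideal ** 2) / np.mean(noise ** 2))
        print(f"ENOB {enob}: SINAD ~ {sinad_db:.1f} dB (theory ~ {6.02 * enob + 1.76:.1f} dB)")
    ```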

  18. FORGE Milford Triaxial Test Data and Summary from EGI labs

    DOE Data Explorer

    Joe Moore

    2016-03-01

    Six samples were evaluated in unconfined and triaxial compression; their data are included in separate Excel spreadsheets and summarized in the Word document. Three samples were plugged along the axis of the core (presumed to be nominally vertical) and three samples were plugged perpendicular to the axis of the core. A designation of "V" indicates vertical, i.e., the long axis of the plugged sample is aligned with the axis of the core. Similarly, "H" indicates a sample that is nominally horizontal and cut orthogonal to the axis of the core. Stress-strain curves were made before and after the testing and are included in the Word document. The confining pressure for this test was 2800 psi. A series of tests is being carried out to define a failure envelope, to provide representative hydraulic fracture design parameters, and to support future geomechanical assessments. The samples are from well 52-21, which reaches a maximum depth of 3581 ft +/- 2 ft into a gneiss complex.

  19. Constitutive parameter de-embedding using inhomogeneously-filled rectangular waveguides with longitudinal section modes

    NASA Technical Reports Server (NTRS)

    Park, A.; Dominek, A. K.

    1990-01-01

    Constitutive parameter extraction from S-parameter data was examined using a rectangular waveguide whose cross section is partially filled with a material sample, as opposed to being completely filled. One reason for studying a partially filled geometry is to analyze the effect of air gaps between the sample and the fixture on the extraction of constitutive parameters. Air gaps can occur in high-temperature parameter measurements when the sample is prepared at room temperature. Single-port and two-port measurement approaches to parameter extraction are also discussed.

  20. Technical and investigative support for high density digital satellite recording systems

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The availability of tape in suitable widths continues to obstruct standardized methods for tests where width is an important parameter. These tests include flexibility, coefficients of friction, and abrasivity. The Fuji Beridox tape samples evaluated were obtained from a 1/2 inch video cassette.

  1. Fast and Versatile Fabrication of PMMA Microchip Electrophoretic Devices by Laser Engraving

    PubMed Central

    Gabriel, Ellen Flávia Moreira; Coltro, Wendell Karlos Tomazelli; Garcia, Carlos D.

    2014-01-01

    This paper describes the effects of different modes and engraving parameters on the dimensions of microfluidic structures produced in PMMA using laser engraving. The engraving modes included raster and vector while the explored engraving parameters included power, speed, frequency, resolution, line-width and number of passes. Under the optimum conditions, the technique was applied to produce channels suitable for CE separations. Taking advantage of the possibility to cut-through the substrates, the laser was also used to define solution reservoirs (buffer, sample, and waste) and a PDMS-based decoupler. The final device was used to perform the analysis of a model mixture of phenolic compounds within 200 s with baseline resolution. PMID:25113407

  2. SPIDER - I. Sample and galaxy parameters in the grizYJHK wavebands

    NASA Astrophysics Data System (ADS)

    La Barbera, F.; de Carvalho, R. R.; de La Rosa, I. G.; Lopes, P. A. A.; Kohl-Moreira, J. L.; Capelato, H. V.

    2010-11-01

    This is the first paper of a series presenting the Spheroids Panchromatic Investigation in Different Environmental Regions (SPIDER). The sample of spheroids consists of 5080 bright (Mr < -20) early-type galaxies (ETGs), in the redshift range of 0.05 to 0.095, with optical (griz) photometry and spectroscopy from the Sloan Digital Sky Survey Data Release 6 (SDSS-DR6) and near-infrared (YJHK) photometry from the UKIRT Infrared Deep Sky Survey-Large Area Survey (UKIDSS-LAS) (DR4). We describe how homogeneous photometric parameters (galaxy colours and structural parameters) are derived using grizYJHK wavebands. We find no systematic steepening of the colour-magnitude relation when probing the baseline from g - r to g - K, implying that internal colour gradients drive most of the mass-metallicity relation in ETGs. As far as structural parameters are concerned we find that the mean effective radius of ETGs smoothly decreases, by 30 per cent, from g through K, while no significant dependence on waveband is detected for the axial ratio, Sersic index and a4 parameters. Furthermore, velocity dispersions are remeasured for all the ETGs using STARLIGHT and compared to those obtained by SDSS. The velocity dispersions are rederived using a combination of simple stellar population models as templates, hence accounting for the kinematics of different galaxy stellar components. We compare our (2DPHOT) measurements of total magnitude, effective radius and mean surface brightness with those obtained as part of the SDSS pipeline (PHOTO). Significant differences are found and reported, including comparisons with a third and independent part. A full characterization of the sample completeness in all wavebands is presented, establishing the limits of application of the characteristic parameters presented here for the analysis of the global scaling relations of ETGs.

  3. Characterization of Factors Affecting Nanoparticle Tracking Analysis Results With Synthetic and Protein Nanoparticles.

    PubMed

    Krueger, Aaron B; Carnell, Pauline; Carpenter, John F

    2016-04-01

    In many manufacturing and research areas, the ability to accurately monitor and characterize nanoparticles is becoming increasingly important. Nanoparticle tracking analysis is rapidly becoming a standard method for this characterization, yet several key factors in data acquisition and analysis may affect results. Nanoparticle tracking analysis is prone to user input and bias on account of the high number of parameters available, analyzes only a limited sample volume, and is affected by individual sample characteristics such as polydispersity or complex protein solutions. This study systematically addressed these key issues. The integrated syringe pump was used to increase the sample volume analyzed. It was observed that measurements recorded under flow caused a reduction in total particle counts for both polystyrene and protein particles compared to those collected under static conditions. In addition, data for polydisperse samples tended to lose peak resolution at higher flow rates, masking distinct particle populations. Furthermore, in a bimodal particle population, a bias was seen toward the larger species within the sample. The impacts of filtration on an agitated intravenous immunoglobulin sample and operating parameters including "MINexps" and "blur" were investigated to optimize the method. Taken together, this study provides recommendations on instrument settings and sample preparations to properly characterize complex samples. Copyright © 2016. Published by Elsevier Inc.

  4. Banking of environmental samples for short-term biochemical and chemical monitoring of organic contamination in coastal marine environments: the GICBEM experience (1986-1990). Groupe Interface Chimie Biologie des Ecosystèmes, Marins.

    PubMed

    Garrigues, P; Narbonne, J F; Lafaurie, M; Ribera, D; Lemaire, P; Raoux, C; Michel, X; Salaun, J P; Monod, J L; Romeo, M

    1993-11-01

    The GICBEM (Groupe Interface Chimie Biologie des Ecosystèmes Marins) program consists of an evaluation of the ecosystem health status in the Mediterranean Sea, based mainly on chemical and biochemical approaches. Specific chemical contaminants (polycyclic aromatic hydrocarbons (PAH), polychlorobiphenyls (PCB), heavy metals) in waters and sediments, and related biotransformation indicators in target organisms (mussels, fish), have been selected for a complete survey of the coastal waters. In order to provide an appropriate, standardized sampling program for each sampling cruise, various aspects have been studied: (a) parameters for the choice of the sample sites; (b) ways of collecting the samples (waters, sediments, marine organisms); and (c) preparation of the samples for short-term storage on board ship and for further analyses in the ground laboratory. Methods of preparation and storage of the samples are described and could be used to initiate an environmental banking program including both possible retrospective analyses of chemical pollutants and biochemical indicators. Moreover, the correlation between chemical (PAH) and biochemical (mixed function oxygenase activities) parameters has been studied, demonstrating the capability of the enzyme activities as reliable pollution biomarkers.

  5. Can arsenic occurrence rate in bedrock aquifers be predicted?

    USGS Publications Warehouse

    Yang, Qiang; Jung, Hun Bok; Marvinney, Robert G.; Culbertson, Charles W.; Zheng, Yan

    2012-01-01

    A high percentage (31%) of groundwater samples from bedrock aquifers in the greater Augusta area, Maine was found to contain greater than 10 μg L⁻¹ of arsenic. Elevated arsenic concentrations are associated with bedrock geology, and more frequently observed in samples with high pH, low dissolved oxygen, and low nitrate. These associations were quantitatively compared by statistical analysis. Stepwise logistic regression models using bedrock geology and/or water chemistry parameters are developed and tested with external data sets to explore the feasibility of predicting groundwater arsenic occurrence rates (the percentages of arsenic concentrations higher than 10 μg L⁻¹) in bedrock aquifers. Despite the under-prediction of high arsenic occurrence rates, models including groundwater geochemistry parameters predict arsenic occurrence rates better than those with bedrock geology only. Such simple models with very few parameters can be applied to obtain a preliminary arsenic risk assessment in bedrock aquifers at local to intermediate scales at other localities with similar geology.
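
    A logistic regression of arsenic exceedance on groundwater geochemistry covariates can be sketched as below; the data are simulated and the covariates and coefficients are assumptions, not the study's stepwise models.

    ```python
    # Sketch of a logistic regression predicting whether a well exceeds 10 ug/L arsenic
    # from pH, dissolved oxygen and nitrate. The data are simulated for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    n = 400
    pH = rng.normal(7.5, 0.8, n)
    dissolved_oxygen = rng.exponential(2.0, n)
    nitrate = rng.exponential(1.0, n)

    # Simulated truth: high pH, low DO and low nitrate raise the odds of exceedance
    logit = -12.0 + 1.6 * pH - 0.6 * dissolved_oxygen - 0.8 * nitrate
    exceeds_10ugL = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([pH, dissolved_oxygen, nitrate])
    model = LogisticRegression(max_iter=1000).fit(X, exceeds_10ugL)

    new_well = np.array([[8.2, 0.5, 0.1]])                 # high pH, low DO, low nitrate
    print("predicted exceedance probability:", model.predict_proba(new_well)[0, 1])
    print("coefficients (pH, DO, nitrate):", model.coef_[0])
    ```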

  6. Can arsenic occurrence rates in bedrock aquifers be predicted?

    PubMed Central

    Yang, Qiang; Jung, Hun Bok; Marvinney, Robert G.; Culbertson, Charles W.; Zheng, Yan

    2012-01-01

    A high percentage (31%) of groundwater samples from bedrock aquifers in the greater Augusta area, Maine was found to contain greater than 10 µg L⁻¹ of arsenic. Elevated arsenic concentrations are associated with bedrock geology, and more frequently observed in samples with high pH, low dissolved oxygen, and low nitrate. These associations were quantitatively compared by statistical analysis. Stepwise logistic regression models using bedrock geology and/or water chemistry parameters are developed and tested with external data sets to explore the feasibility of predicting groundwater arsenic occurrence rates (the percentages of arsenic concentrations higher than 10 µg L⁻¹) in bedrock aquifers. Despite the under-prediction of high arsenic occurrence rates, models including groundwater geochemistry parameters predict arsenic occurrence rates better than those with bedrock geology only. Such simple models with very few parameters can be applied to obtain a preliminary arsenic risk assessment in bedrock aquifers at local to intermediate scales at other localities with similar geology. PMID:22260208

  7. Statistical Inference for Data Adaptive Target Parameters.

    PubMed

    Hubbard, Alan E; Kherad-Pajouh, Sara; van der Laan, Mark J

    2016-05-01

    Consider one observes n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target we partition the sample in V equal size sub-samples, and use this partitioning to define V splits in an estimation sample (one of the V subsamples) and corresponding complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data adaptive statistical target parameter as the average of these V-sample specific target parameters. We present an estimator (and corresponding central limit theorem) of this type of data adaptive target parameter. This general methodology for generating data adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. This new framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for a greater involvement of statistical inference into problems that are being increasingly addressed by clever, yet ad hoc pattern finding methods. To suggest such potential, and to verify the predictions of the theory, extensive simulation studies, along with a data analysis based on adaptively determined intervention rules are shown and give insight into how to structure such an approach. The results show that the data adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.
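
    The V-fold sample-splitting construction can be illustrated with a toy data-adaptive target (choosing the variable most correlated with the outcome); this is a simplified sketch, not the estimator or central limit theorem developed in the paper.

    ```python
    # Sketch of a sample-split data adaptive target parameter: for each of V splits, the
    # parameter-generating sample (the complement) defines the target data-adaptively and
    # the held-out estimation sample estimates it; the V estimates are then averaged.
    import numpy as np
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(6)
    n, p = 500, 5
    X = rng.normal(size=(n, p))
    y = 0.8 * X[:, 2] + rng.normal(size=n)        # variable 2 is truly associated with y

    V = 5
    estimates = []
    for gen_idx, est_idx in KFold(n_splits=V, shuffle=True, random_state=0).split(X):
        # Parameter-generating sample: pick the variable most correlated with y
        corrs = [abs(np.corrcoef(X[gen_idx, j], y[gen_idx])[0, 1]) for j in range(p)]
        j_star = int(np.argmax(corrs))
        # Estimation sample: estimate the slope for the data-adaptively chosen variable
        beta = np.polyfit(X[est_idx, j_star], y[est_idx], deg=1)[0]
        estimates.append(beta)

    print("split-specific estimates:", np.round(estimates, 3))
    print("sample-split data adaptive target parameter estimate:", np.mean(estimates))
    ```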

  8. WaveAR: A software tool for calculating parameters for water waves with incident and reflected components

    NASA Astrophysics Data System (ADS)

    Landry, Blake J.; Hancock, Matthew J.; Mei, Chiang C.; García, Marcelo H.

    2012-09-01

    The ability to determine wave heights and phases along a spatial domain is vital to understanding a wide range of littoral processes. The software tool presented here employs established Stokes wave theory and sampling methods to calculate parameters for the incident and reflected components of a field of weakly nonlinear waves, monochromatic at first order in wave slope and propagating in one horizontal dimension. The software calculates wave parameters over an entire wave tank and accounts for reflection, weak nonlinearity, and a free second harmonic. Currently, no publicly available program has such functionality. The included MATLAB®-based open source code has also been compiled for Windows®, Mac® and Linux® operating systems. An additional companion program, VirtualWave, is included to generate virtual wave fields for WaveAR. Together, the programs serve as ideal analysis and teaching tools for laboratory water wave systems.

  9. Displayed Trees Do Not Determine Distinguishability Under the Network Multispecies Coalescent

    PubMed Central

    Zhu, Sha; Degnan, James H.

    2017-01-01

    Recent work in estimating species relationships from gene trees has included inferring networks assuming that past hybridization has occurred between species. Probabilistic models using the multispecies coalescent can be used in this framework for likelihood-based inference of both network topologies and parameters, including branch lengths and hybridization parameters. A difficulty for such methods is that it is not always clear whether, or to what extent, networks are identifiable—that is whether there could be two distinct networks that lead to the same distribution of gene trees. For cases in which incomplete lineage sorting occurs in addition to hybridization, we demonstrate a new representation of the species network likelihood that expresses the probability distribution of the gene tree topologies as a linear combination of gene tree distributions given a set of species trees. This representation makes it clear that in some cases in which two distinct networks give the same distribution of gene trees when sampling one allele per species, the two networks can be distinguished theoretically when multiple individuals are sampled per species. This result means that network identifiability is not only a function of the trees displayed by the networks but also depends on allele sampling within species. We additionally give an example in which two networks that display exactly the same trees can be distinguished from their gene trees even when there is only one lineage sampled per species. PMID:27780899

  10. Complex structures of different CaFe2As2 samples

    PubMed Central

    Saparov, Bayrammurad; Cantoni, Claudia; Pan, Minghu; Hogan, Thomas C.; II, William Ratcliff; Wilson, Stephen D.; Fritsch, Katharina; Gaulin, Bruce D.; Sefat, Athena S.

    2014-01-01

    The interplay between magnetism and crystal structures in three CaFe2As2 samples is studied. For the nonmagnetic quenched crystals, different crystalline domains with varying lattice parameters are found, and three phases (orthorhombic, tetragonal, and collapsed tetragonal) coexist between TS = 95 K and 45 K. Annealing of the quenched crystals at 350°C leads to a strain relief through a large (~1.3%) expansion of the c-parameter and a small (~0.2%) contraction of the a-parameter, and to local ~0.2 Å displacements at the atomic level. This annealing procedure results in the most homogeneous crystals, for which the antiferromagnetic and orthorhombic phase transitions occur at TN/TS = 168(1) K. In the 700°C-annealed crystal, an intermediate strain regime takes place, with tetragonal and orthorhombic structural phases coexisting between 80 and 120 K. The origin of such strong shifts in the transition temperatures is tied to structural parameters. Importantly, with annealing, an increase in the Fe-As length leads to more localized Fe electrons and higher local magnetic moments on Fe ions. Synergistic contributions of other structural parameters, including a decrease in the Fe-Fe distance, and a dramatic increase of the c-parameter, which enhances the Fermi surface nesting in CaFe2As2, are also discussed. PMID:24844399

  11. Proton-pump inhibitor use does not affect semen quality in subfertile men.

    PubMed

    Keihani, Sorena; Craig, James R; Zhang, Chong; Presson, Angela P; Myers, Jeremy B; Brant, William O; Aston, Kenneth I; Emery, Benjamin R; Jenkins, Timothy G; Carrell, Douglas T; Hotaling, James M

    2018-01-01

    Proton-pump inhibitors (PPIs) are among the most widely used drugs worldwide. PPI use has recently been linked to adverse changes in semen quality in healthy men; however, the effects of PPI use on semen parameters remain largely unknown specifically in cases with male factor infertility. We examined whether PPI use was associated with detrimental effects on semen parameters in a large population of subfertile men. We retrospectively reviewed data from 12 257 subfertile men who had visited our fertility clinic from 2003 to 2013. Patients who reported using any PPIs for >3 months before semen sample collection were included; 7698 subfertile men taking no medication served as controls. Data were gathered on patient age, medication use, and conventional semen parameters; patients taking any known spermatotoxic medication were excluded. Linear mixed-effect regression models were used to test the effect of PPI use on semen parameters adjusting for age. A total of 248 patients (258 samples) used PPIs for at least 3 months before semen collection. In regression models, PPI use (either as the only medication or when used in combination with other nonspermatotoxic medications) was not associated with statistically significant changes in semen parameters. To our knowledge, this is the largest study to compare PPI use with semen parameters in subfertile men. Using PPIs was not associated with detrimental effects on semen quality in this retrospective study.
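
    The age-adjusted comparison with repeated samples per patient can be sketched with a linear mixed-effects model; the dataset below is simulated with no true PPI effect, and the variable names are hypothetical, not the study's clinic data.

    ```python
    # Sketch of a linear mixed-effects model testing a PPI-use effect on a semen parameter
    # while adjusting for age, with a random intercept per patient for repeated samples.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    rows = []
    for pid in range(300):
        ppi = int(rng.random() < 0.1)
        age = rng.normal(35, 6)
        patient_effect = rng.normal(0, 5)                  # patient-level random intercept
        for _ in range(int(rng.integers(1, 3))):           # 1-2 samples per patient
            conc = 50 - 0.3 * age + 0.0 * ppi + patient_effect + rng.normal(0, 10)
            rows.append({"patient": pid, "ppi": ppi, "age": age, "sperm_conc": conc})

    df = pd.DataFrame(rows)
    model = smf.mixedlm("sperm_conc ~ ppi + age", data=df, groups=df["patient"]).fit()
    print(model.summary())                                 # PPI coefficient should be near zero
    ```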

  12. Pharmacokinetic design optimization in children and estimation of maturation parameters: example of cytochrome P450 3A4.

    PubMed

    Bouillon-Pichault, Marion; Jullien, Vincent; Bazzoli, Caroline; Pons, Gérard; Tod, Michel

    2011-02-01

    The aim of this work was to determine whether optimizing the study design in terms of ages and sampling times for a drug eliminated solely via cytochrome P450 3A4 (CYP3A4) would allow us to accurately estimate the pharmacokinetic parameters throughout the entire childhood timespan, while taking into account age- and weight-related changes. A linear monocompartmental model with first-order absorption was used successively with three different residual error models and previously published pharmacokinetic parameters ("true values"). The optimal ages were established by D-optimization using the CYP3A4 maturation function to create "optimized demographic databases." The post-dose times for each previously selected age were determined by D-optimization using the pharmacokinetic model to create "optimized sparse sampling databases." We simulated concentrations by applying the population pharmacokinetic model to the optimized sparse sampling databases to create optimized concentration databases. The latter were modeled to estimate population pharmacokinetic parameters. We then compared true and estimated parameter values. The established optimal design comprised four age ranges: 0.008 years old (i.e., around 3 days), 0.192 years old (i.e., around 2 months), 1.325 years old, and adults, with the same number of subjects per group and three or four samples per subject, in accordance with the error model. The population pharmacokinetic parameters that we estimated with this design were precise and unbiased (root mean square error [RMSE] and mean prediction error [MPE] less than 11% for clearance and distribution volume and less than 18% for k(a)), whereas the maturation parameters were unbiased but less precise (MPE < 6% and RMSE < 37%). Based on our results, taking growth and maturation into account a priori in a pediatric pharmacokinetic study is theoretically feasible. However, it requires that very early ages be included in studies, which may present an obstacle to the use of this approach. First-pass effects, alternative elimination routes, and combined elimination pathways should also be investigated.
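
    The structural model referenced above, a one-compartment model with first-order absorption, is sketched below; the dose and parameter values are arbitrary illustrations, not the published pediatric values.

    ```python
    # Sketch of the concentration-time profile for a one-compartment model with
    # first-order absorption; all numeric values are illustrative placeholders.
    import numpy as np

    def concentration(t, dose, ka, cl, v):
        """C(t) = dose*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), with ke = CL/V."""
        ke = cl / v
        return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    times_h = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])    # candidate sampling times (h)
    profile = concentration(times_h, dose=100.0, ka=1.2, cl=5.0, v=40.0)
    for t, c in zip(times_h, profile):
        print(f"t = {t:4.1f} h  C = {c:6.3f} mg/L")
    ```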

  13. Development of techniques and associated instrumentation for high temperature emissivity measurements

    NASA Technical Reports Server (NTRS)

    Cunnington, G. R.; Funai, A. I.

    1972-01-01

    The progress during the sixth quarterly period is reported on construction and assembly of a test facility to determine the high temperature emittance properties of candidate thermal protection system materials for the space shuttle. This facility will provide simulation of such reentry environment parameters as temperature, pressure, and gas flow rate to permit studies of the effects of these parameters on the emittance stability of the materials. Also reported are the completed results for emittance tests on a set of eight Rene 41 samples and one anodized titanium alloy sample which were tested at temperatures up to 1600 F in vacuum. The data includes calorimetric determinations of total hemispherical emittance, radiometric determinations of total and spectral normal emittance, and pre- and post-test room temperature reflectance measurements.

  14. Prediction of compressibility parameters of the soils using artificial neural network.

    PubMed

    Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan

    2016-01-01

    The compression index and recompression index are among the important compressibility parameters used in the settlement calculation for fine-grained soil layers. These parameters can be determined by carrying out a laboratory oedometer test on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters using soil index properties. In this paper, an artificial neural network (ANN) model is suggested for the prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are selected as the natural water content, initial void ratio, liquid limit and plasticity index. In this model, two output parameters, the compression index and recompression index, are predicted in a combined network structure. As a result of the study, the proposed ANN model is successful for the prediction of the compression index; however, the predicted recompression index values are less satisfactory than those of the compression index.
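
    A combined-output ANN mapping the four index properties to both compressibility indices can be sketched as follows; the training data are synthetic and shaped only loosely like empirical correlations, not the study's database.

    ```python
    # Sketch of a small neural network predicting compression index (Cc) and recompression
    # index (Cr) jointly from basic soil index properties. Data are synthetic.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(8)
    n = 300
    wn = rng.uniform(15, 60, n)          # natural water content (%)
    e0 = rng.uniform(0.5, 1.8, n)        # initial void ratio
    ll = rng.uniform(25, 80, n)          # liquid limit (%)
    pi = rng.uniform(5, 45, n)           # plasticity index (%)

    # Synthetic "truth" loosely shaped like empirical correlations (illustrative only)
    cc = 0.009 * (ll - 10) + 0.2 * (e0 - 0.5) + rng.normal(0, 0.02, n)
    cr = 0.15 * cc + rng.normal(0, 0.01, n)

    X = np.column_stack([wn, e0, ll, pi])
    Y = np.column_stack([cc, cr])
    Xs = StandardScaler().fit_transform(X)

    ann = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0).fit(Xs, Y)
    print("predicted (Cc, Cr) for the first sample:", ann.predict(Xs[:1]))
    ```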

  15. Statistical inference involving binomial and negative binomial parameters.

    PubMed

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2009-05-01

    Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.

  16. Wavelength dispersive X-ray fluorescence analysis using fundamental parameter approach of Catha edulis and other related plant samples

    NASA Astrophysics Data System (ADS)

    Shaltout, Abdallah A.; Moharram, Mohammed A.; Mostafa, Nasser Y.

    2012-01-01

    This work is the first attempt to quantify trace elements in the Catha edulis plant (Khat) with a fundamental parameter approach. C. edulis is a famous drug plant in east Africa and the Arabian Peninsula. We have previously confirmed that hydroxyapatite represents one of the main inorganic compounds in the leaves and stalks of C. edulis. Comparable plant leaves from basil, mint and green tea were included in the present investigation, and trifolium leaves were included as a non-related plant. The elemental analyses of the plants were done by Wavelength Dispersive X-Ray Fluorescence (WDXRF) spectroscopy. Standard-less quantitative WDXRF analysis was carried out based on the fundamental parameter approach. According to the standard-less analysis algorithms, there is an essential need for an accurate determination of the amount of organic material in the sample. A new approach, based on differential thermal analysis, was successfully used for the organic material determination. The results obtained with this approach were in good agreement with those of the commonly used methods. Using the developed method, quantitative analysis results for eighteen elements (Al, Br, Ca, Cl, Cu, Fe, K, Na, Ni, Mg, Mn, P, Rb, S, Si, Sr, Ti and Zn) were obtained for each plant. The results for the certified reference material of green tea (NCSZC73014, China National Analysis Center for Iron and Steel, Beijing, China) confirmed the validity of the proposed method.

  17. COMPARISON OF TWO DIFFERENT SOLID PHASE EXTRACTION/LARGE VOLUME INJECTION PROCEDURES FOR METHOD 8270

    EPA Science Inventory

    Two solid phase (SPE) and one traditional continuous liquid-liquid extraction method are compared for analysis of Method 8270 SVOCs. Productivity parameters include data quality, sample volume, analysis time and solvent waste.

    One SPE system, unique in the U.S., uses aut...

  18. A survey of catfish pond water chemistry parameters for copper toxicity modelling

    USDA-ARS?s Scientific Manuscript database

    Water samples were collected from 20 catfish ponds in 2015 to obtain data useful in predicting copper toxicity and chemical behavior. Ponds were located in major catfish producing areas of west Alabama, east Arkansas, and Mississippi. Pond types included traditional levee ponds, split-ponds, water...

  19. Aqueous Geochemical Data From the Analysis of Stream-Water Samples Collected in June and July 2005--Taylor Mountains 1:250,000 Scale Quadrangle, Alaska

    USGS Publications Warehouse

    Wang, Bronwen; Mueller, Seth; Stetson, Sarah; Bailey, Elizabeth; Lee, Greg

    2006-01-01

    We report on the chemical analysis of water samples collected from the Taylor Mountains 1:250,000-scale quadrangle. Parameters for which data are reported include pH, conductivity, water temperature, major cation and anion concentrations, trace-element concentrations, and dissolved organic-carbon concentrations. Samples were collected as part of a multiyear U.S. Geological Survey project 'Geologic and Mineral Deposit Data for Alaskan Economic Development.' Data presented here are from samples collected in June and July of 2005. The data are being released at this time with minimal interpretation. This is the second release of aqueous geochemical data from this project; 2004 aqueous geochemical data were published previously (Wang and others, 2006). The data in this report augment but do not duplicate or supersede the previous data release. Site selection was based on a regional sampling strategy that focused on first- and second-order drainages. Water sample site selection was based on landscape parameters that included physiography, wetland extent, lithological changes, and a cursory field review of mineralogy from pan concentrates. Stream water in the Taylor Mountains quadrangle is dominated by bicarbonate (HCO₃⁻), though in a few samples more than 50 percent of the anionic charge can be attributed to sulfate (SO₄²⁻). The major-cation chemistry ranges from Ca²⁺/Mg²⁺ dominated to a mix of Ca²⁺/Mg²⁺/Na⁺+K⁺. In general, good agreement was found between the major cations and anions in the duplicate samples. Many trace elements in these samples were at or near the analytical method detection limit, but good agreement was found between duplicate samples for elements with detectable concentrations. With the exception of a total mercury concentration of 0.33 ng/L detected in a field blank, field-blank major-ion and trace-element concentrations were below detection.

  20. A generic implementation of replica exchange with solute tempering (REST2) algorithm in NAMD for complex biophysical simulations

    NASA Astrophysics Data System (ADS)

    Jo, Sunhwan; Jiang, Wei

    2015-12-01

    Replica Exchange with Solute Tempering (REST2) is a powerful sampling enhancement algorithm for molecular dynamics (MD) in that it needs a significantly smaller number of replicas yet achieves higher sampling efficiency relative to the standard temperature exchange algorithm. In this paper, we extend the applicability of REST2 to quantitative biophysical simulations through a robust and generic implementation in the highly scalable MD software NAMD. The rescaling procedure for the force field parameters controlling the REST2 "hot region" is implemented in NAMD at the source code level. A user can conveniently select the hot region through VMD and write the selection information into a PDB file. The rescaling keyword/parameter is written in the NAMD Tcl script interface, which enables an on-the-fly simulation parameter change. Our implementation of REST2 is within a communication-enabled Tcl script built on top of Charm++, thus the communication overhead of an exchange attempt is vanishingly small. Such a generic implementation facilitates seamless cooperation between REST2 and other modules of NAMD to provide enhanced sampling for complex biomolecular simulations. Three challenging applications, including a native REST2 simulation of a peptide folding-unfolding transition, free energy perturbation/REST2 for the absolute binding affinity of a protein-ligand complex, and umbrella sampling/REST2 Hamiltonian exchange for free energy landscape calculation, were carried out on an IBM Blue Gene/Q supercomputer to demonstrate the efficacy of REST2 based on the present implementation.
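
    The Metropolis acceptance test underlying Hamiltonian replica exchange schemes such as REST2 can be sketched as below; the energies and temperature are placeholders, and this is not the NAMD/Tcl implementation described in the paper.

    ```python
    # Minimal sketch of the Metropolis criterion for a Hamiltonian replica exchange attempt:
    # swap configurations x_i, x_j between replicas i and j with probability
    # min(1, exp(-beta * [U_i(x_j) + U_j(x_i) - U_i(x_i) - U_j(x_j)])).
    # Energy values below are hypothetical placeholders.
    import math
    import random

    def accept_swap(u_i_xi, u_i_xj, u_j_xi, u_j_xj, beta):
        """Return True if the exchange between replicas i and j is accepted."""
        delta = beta * ((u_i_xj + u_j_xi) - (u_i_xi + u_j_xj))
        return delta <= 0.0 or random.random() < math.exp(-delta)

    beta = 1.0 / (0.0019872 * 300.0)        # 1/kT in (kcal/mol)^-1 at 300 K
    print(accept_swap(u_i_xi=-1200.0, u_i_xj=-1195.0,
                      u_j_xi=-1190.0, u_j_xj=-1188.0, beta=beta))
    ```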

  1. Process Parameter Optimization of Extrusion-Based 3D Metal Printing Utilizing PW-LDPE-SA Binder System.

    PubMed

    Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan

    2017-03-16

    Recently, with a broadening range of available materials and alteration of feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. An emerging process is applicable to the fabrication of metal parts for electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized by a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system consisting of paraffin wax, low-density polyethylene, and stearic acid (PW-LDPE-SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The influence factors in regard to the ultimate tensile strength of the green samples can be ranked as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor affecting hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper material. Generally, the extrusion-based printing process for producing metal materials is a promising strategy because it has some advantages over traditional approaches in cost, efficiency, and simplicity.

  2. Process Parameter Optimization of Extrusion-Based 3D Metal Printing Utilizing PW–LDPE–SA Binder System

    PubMed Central

    Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan

    2017-01-01

    Recently, with a broadening range of available materials and alteration of feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. An emerging process is applicable to the fabrication of metal parts for electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized by a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system consisting of paraffin wax, low-density polyethylene, and stearic acid (PW–LDPE–SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The influence factors in regard to the ultimate tensile strength of the green samples can be ranked as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor affecting hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper material. Generally, the extrusion-based printing process for producing metal materials is a promising strategy because it has some advantages over traditional approaches in cost, efficiency, and simplicity. PMID:28772665

  3. Application of selected methods of remote sensing for detecting carbonaceous water pollution

    NASA Technical Reports Server (NTRS)

    Davis, E. M.; Fosbury, W. J.

    1973-01-01

    A reach of the Houston Ship Channel was investigated during three separate overflights correlated with ground truth sampling on the Channel. Samples were analyzed for such conventional parameters as biochemical oxygen demand, chemical oxygen demand, total organic carbon, total inorganic carbon, turbidity, chlorophyll, pH, temperature, dissolved oxygen, and light penetration. Infrared analyses conducted on each sample included reflectance ATR analysis, carbon tetrachloride extraction of organics and subsequent scanning, and KBr evaporate analysis of CCl4 extract concentrate. Imagery which was correlated with field and laboratory data developed from ground truth sampling included that obtained from aerial KA62 hardware, RC-8 metric camera systems, and the RS-14 infrared scanner. The images were subjected to analysis by three film density gradient interpretation units. Data were then analyzed for correlations between imagery interpretation as derived from the three instruments and laboratory infrared signatures and other pertinent field and laboratory analyses.

  4. Reconstructing gravitational wave source parameters via direct comparisons to numerical relativity I: Method

    NASA Astrophysics Data System (ADS)

    Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark; Ossokine, Serguei

    2016-03-01

    In this talk, we describe a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. For sufficiently massive sources, existing numerical relativity simulations are long enough to cover the observationally accessible part of the signal. Due to the signal's brevity, the posterior parameter distribution it implies is broad, simple, and easily reconstructed from information gained by comparing to only the sparse sample of existing numerical relativity simulations. We describe how followup simulations can corroborate and improve our understanding of a detected source. Since our method can include all physics provided by full numerical relativity simulations of coalescing binaries, it provides a valuable complement to alternative techniques which employ approximations to reconstruct source parameters. Supported by NSF Grant PHY-1505629.

  5. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    PubMed Central

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
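
    A standard autocorrelation-based ESS for a scalar MCMC trace, the quantity the "ESS > 200" rule of thumb refers to, can be sketched as follows; the chain is a synthetic AR(1) series, and the topology-specific ESS methods of the paper are not reproduced.

    ```python
    # Sketch of an effective sample size estimate for a scalar MCMC trace: the nominal
    # sample count is deflated by the summed positive autocorrelations of the chain.
    import numpy as np

    rng = np.random.default_rng(9)
    n, rho = 10000, 0.9
    chain = np.empty(n)
    chain[0] = rng.normal()
    for t in range(1, n):                                   # autocorrelated toy "posterior trace"
        chain[t] = rho * chain[t - 1] + rng.normal()

    def ess(x):
        x = x - x.mean()
        acf = np.correlate(x, x, mode="full")[x.size - 1:] / (np.arange(x.size, 0, -1) * x.var())
        # Sum autocorrelations until they first drop below zero (initial positive sequence)
        negative = np.where(acf[1:] < 0)[0]
        cutoff = negative[0] + 1 if negative.size else acf.size
        return x.size / (1.0 + 2.0 * acf[1:cutoff].sum())

    print("nominal samples:", n, " effective sample size:", round(ess(chain)))
    ```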

  6. Quadrant III RFI draft report: Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-12-01

    The purpose of the RCRA Facility Investigation (RFI) at the Portsmouth Gaseous Diffusion Plant (PORTS) is to acquire, analyze and interpret data that will: characterize the environmental setting, including ground water, surface water and sediment, soil and air; define and characterize sources of contamination; characterize the vertical and horizontal extent and degree of contamination of the environment; assess the risk to human health and the environment resulting from possible exposure to contaminants; and support the Corrective Measures Study (CMS), which will follow the RFI, if required. A total of 18 Solid Waste Management Units (SWMUs) were investigated. All surficial soil samples (0--2 ft), sediment samples and surface-water samples proposed in the approved Quadrant III RFI Work Plan were collected as specified in the approved work plan and RFI Sampling Plan. All soil, sediment and surface-water samples were analyzed for parameters specified from the Target Compound List and Target Analyte List (TCL/TAL) as listed in the US EPA Statement of Work for Inorganic (7/88a) and Organic (2/88b) analyses for Soil and Sediment, and analyses for fluoride, Freon-113 and radiological parameters (total uranium, gross alpha, gross beta and technetium).

  8. Quantitative tissue parameters of Achilles tendon and plantar fascia in healthy subjects using a handheld myotonometer.

    PubMed

    Orner, Sarah; Kratzer, Wolfgang; Schmidberger, Julian; Grüner, Beate

    2018-01-01

    The aim of the study was to examine the quantitative tissue properties of the Achilles tendon and plantar fascia using a handheld, non-invasive MyotonPRO device, in order to generate normal values and examine the biomechanical relationship between the two structures. This was a prospective study of a large, healthy sample population. The study sample included 207 healthy subjects (87 males and 120 females) for the Achilles tendon and 176 healthy subjects (73 males and 103 females) for the plantar fascia. For the correlations of the tissue parameters of the Achilles tendon and plantar fascia, an intersection of both groups was formed which included 150 healthy subjects (65 males and 85 females). All participants were measured in a prone position. Consecutive measurements of the Achilles tendon and plantar fascia were performed with the MyotonPRO device at defined sites. For the left and right Achilles tendons and plantar fasciae, all five MyotonPRO parameters (Frequency [Hz], Decrement, Stiffness [N/m], Creep and Relaxation Time [ms]) were calculated for healthy males and females. The tissue parameters of the Achilles tendon and plantar fascia showed a significant positive correlation for all parameters on the left as well as on the right side. The MyotonPRO is a feasible device for easy measurement of passive tissue properties of the Achilles tendon and plantar fascia in a clinical setting. The generated normal values of the Achilles tendon and plantar fascia are important for detecting abnormalities in patients with Achilles tendinopathy or plantar fasciitis in the future. Biomechanically, both structures are positively correlated. This may provide new aspects in the diagnostics and therapy of plantar fasciitis and Achilles tendinopathy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Developing a methodology for the inverse estimation of root architectural parameters from field based sampling schemes

    NASA Astrophysics Data System (ADS)

    Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry

    2017-04-01

    Root traits are increasingly important in breeding of new crop varieties. For example, longer and fewer lateral roots are suggested to improve drought resistance of wheat. Thus, detailed root architectural parameters are important. However, classical field sampling of roots only provides more aggregated information such as root length density (coring), root counts per area (trenches) or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining information about the root system architecture of plants from classical field-based root sampling schemes, based on sensitivity analysis and inverse parameter estimation. This methodology was developed based on a virtual experiment in which a root architectural model, parameterized for winter wheat, was used to simulate root system development in a field. This provided the ground truth, which is normally unknown in a real field experiment. The three sampling schemes, coring, trenching, and rhizotubes, were virtually applied and the aggregated information computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods. The estimated means and standard deviations of the elementary effects of a total of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained from literature and published data on winter wheat root architectural parameters. Root length density profiles from coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. Number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and the parameter sensitivity varies slightly with depth. Most parameters and their interactions with other parameters show highly nonlinear effects on the model output. The most sensitive parameters will be subject to inverse estimation from the virtual field sampling data using the DREAMzs algorithm. The estimated parameters can then be compared with the ground truth in order to determine the suitability of the sampling schemes to identify specific traits or parameters of the root growth model.
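
    As a rough illustration of the Morris-style one-at-a-time screening described above, the following Python sketch computes elementary effects of each parameter on a scalar error output. The toy model, bounds, trajectory count and step size are placeholders for illustration, not the authors' winter wheat setup.

        import numpy as np

        def elementary_effects(model, lower, upper, n_trajectories=20, delta=0.1, seed=0):
            # One-at-a-time (Morris-style) elementary effects of each parameter on a
            # scalar model output (e.g. an error function comparing simulated and
            # "observed" root length density profiles).
            rng = np.random.default_rng(seed)
            lower, upper = np.asarray(lower, float), np.asarray(upper, float)
            k = lower.size
            effects = np.empty((n_trajectories, k))
            for t in range(n_trajectories):
                # base point chosen so that a +delta step stays inside the bounds
                x = lower + rng.uniform(size=k) * (1.0 - delta) * (upper - lower)
                y0 = model(x)
                for i in range(k):
                    x_step = x.copy()
                    x_step[i] += delta * (upper[i] - lower[i])
                    effects[t, i] = (model(x_step) - y0) / delta
            mu_star = np.abs(effects).mean(axis=0)   # overall importance of each parameter
            sigma = effects.std(axis=0)              # nonlinearity / interaction indicator
            return mu_star, sigma

        # Toy stand-in for the root architecture model with three parameters:
        toy_model = lambda p: p[0] ** 2 + 0.5 * p[1] + 0.01 * p[2]
        mu_star, sigma = elementary_effects(toy_model, lower=[0, 0, 0], upper=[1, 1, 1])

    Ranking parameters by mu_star while inspecting sigma separates influential parameters from those whose effects are mainly nonlinear or interaction-driven, mirroring the mean/standard-deviation evaluation described in the abstract.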

  10. Sensitivity and specificity of univariate MRI analysis of experimentally degraded cartilage

    PubMed Central

    Lin, Ping-Chang; Reiter, David A.; Spencer, Richard G.

    2010-01-01

    MRI is increasingly used to evaluate cartilage in tissue constructs, explants, and animal and patient studies. However, while mean values of MR parameters, including T1, T2, magnetization transfer rate k_m, apparent diffusion coefficient ADC, and the dGEMRIC-derived fixed charge density, correlate with tissue status, the ability to classify tissue according to these parameters has not been explored. Therefore, the sensitivity and specificity with which each of these parameters was able to distinguish between normal and trypsin-degraded, and between normal and collagenase-degraded, cartilage explants were determined. Initial analysis was performed using a training set to determine simple group means to which parameters obtained from a validation set were compared. T1 and ADC showed the greatest ability to discriminate between normal and degraded cartilage. Further analysis with k-means clustering, which eliminates the need for a priori identification of sample status, generally performed comparably. Use of fuzzy c-means (FCM) clustering to define centroids likewise did not result in improvement in discrimination. Finally, a FCM clustering approach in which validation samples were assigned in a probabilistic fashion to control and degraded groups was implemented, reflecting the range of tissue characteristics seen with cartilage degradation. PMID:19705467
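
    A minimal Python sketch of the kind of univariate classification workflow described above: validation samples are assigned to the nearest training-group mean, sensitivity and specificity are computed, and an unsupervised k-means alternative is shown. The parameter values and group labels are invented for illustration, not data from the study.

        import numpy as np
        from sklearn.cluster import KMeans

        def nearest_mean_classify(train_values, train_labels, test_values):
            # Assign each validation sample to the training group (0 = control,
            # 1 = degraded) whose mean parameter value (e.g. T1 or ADC) is closest.
            means = np.array([train_values[train_labels == g].mean() for g in (0, 1)])
            return np.argmin(np.abs(test_values[:, None] - means[None, :]), axis=1)

        def sensitivity_specificity(pred, truth):
            tp = np.sum((pred == 1) & (truth == 1)); fn = np.sum((pred == 0) & (truth == 1))
            tn = np.sum((pred == 0) & (truth == 0)); fp = np.sum((pred == 1) & (truth == 0))
            return tp / (tp + fn), tn / (tn + fp)

        # Unsupervised alternative (no a priori sample status), analogous to k-means:
        t1 = np.array([1.10, 1.15, 1.12, 1.60, 1.65, 1.70])   # hypothetical T1 values (s)
        clusters = KMeans(n_clusters=2, n_init=10).fit_predict(t1.reshape(-1, 1))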

  11. Modeling the Atmospheric Phase Effects of a Digital Antenna Array Communications System

    NASA Technical Reports Server (NTRS)

    Tkacenko, A.

    2006-01-01

    In an antenna array system such as that used in the Deep Space Network (DSN) for satellite communication, it is often necessary to account for the effects due to the atmosphere. Typically, the atmosphere induces amplitude and phase fluctuations on the transmitted downlink signal that invalidate the assumed stationarity of the signal model. The degree to which these perturbations affect the stationarity of the model depends both on parameters of the atmosphere, including wind speed and turbulence strength, and on parameters of the communication system, such as the sampling rate used. In this article, we focus on modeling the atmospheric phase fluctuations in a digital antenna array communications system. Based on a continuous-time statistical model for the atmospheric phase effects, we show how to obtain a related discrete-time model based on sampling the continuous-time process. The effects of the nonstationarity of the resulting signal model are investigated using the sample matrix inversion (SMI) algorithm for minimum mean-squared error (MMSE) equalization of the received signal.
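
    The sample matrix inversion step mentioned above can be sketched as follows in Python; the array shapes, training symbols and diagonal loading are assumptions for illustration, not the article's actual processing chain.

        import numpy as np

        def smi_mmse_weights(X, d, loading=1e-3):
            # Sample matrix inversion (SMI) estimate of MMSE combining weights.
            # X: (n_antennas, n_snapshots) received samples; d: known training symbols.
            n = X.shape[1]
            R = X @ X.conj().T / n                 # sample covariance matrix
            p = X @ d.conj() / n                   # sample cross-correlation vector
            R += loading * (np.trace(R).real / R.shape[0]) * np.eye(R.shape[0])
            return np.linalg.solve(R, p)           # w, so that w.conj().T @ x estimates d

        # Under time-varying atmospheric phase the snapshots are only approximately
        # stationary, so shorter averaging windows track the channel better at the
        # cost of a noisier covariance estimate.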

  12. Data acquisition techniques for exploiting the uniqueness of the time-of-flight mass spectrometer: Application to sampling pulsed gas systems

    NASA Technical Reports Server (NTRS)

    Lincoln, K. A.

    1980-01-01

    Mass spectra are produced in most mass spectrometers by sweeping some parameter within the instrument as the sampled gases flow into the ion source. It is evident that any fluctuation in the gas during the sweep (mass scan) of the instrument causes the output spectrum to be skewed in its mass peak intensities. The time-of-flight mass spectrometer (TOFMS), with its fast, repetitive mode of operation, produces spectra without sweeping or varying instrument parameters, and because all ion species are ejected from the ion source simultaneously, the spectra are inherently not skewed despite rapidly changing gas pressure or composition in the source. Methods of exploiting this feature by utilizing fast digital data acquisition systems, such as commercially available transient recorders and signal averagers, are described. Applications of this technique are presented, including TOFMS sampling of vapors produced by both pulsed and continuous laser heating of materials.

  13. Characterizing the kinetics of suspended cylindrical particles by polarization measurements

    NASA Astrophysics Data System (ADS)

    Liao, Ran; Ou, Xueheng; Ma, Hui

    2015-09-01

    Polarization measurements have promising potential to retrieve information from steady samples, such as tissues. However, for fast-changing samples such as algae suspended in water, the kinetics of the particles also influence the scattered polarization. The present paper shows our recent results on extracting information about the kinetics of suspended cylindrical particles by polarization measurements. The sample is an aqueous suspension of glass fibers stirred by a magnetic stirrer. We measure the scattered polarization of the fibers using a simultaneous polarization measurement system and obtain the time series of two orthogonal polarization components. Using correlation analysis, we obtain time parameters from the auto-correlation functions of the polarization components and observe how they change with the stirring speed. Results show that these time parameters indicate the migration of the fibers. We find that they may further characterize the kinetics, including the translation and rotation, of the glass fibers in the flow field.
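
    A small Python sketch of one way to turn the auto-correlation of a polarization component into a time parameter, in the spirit of the analysis above; the 1/e threshold and the variable names are illustrative assumptions, not the authors' exact definition.

        import numpy as np

        def decorrelation_time(signal, dt):
            # Normalized auto-correlation of one polarization component and the lag
            # at which it first drops below 1/e, used as a simple "time parameter".
            x = np.asarray(signal, float) - np.mean(signal)
            acf = np.correlate(x, x, mode="full")[x.size - 1:]
            acf = acf / acf[0]
            below = np.nonzero(acf < 1.0 / np.e)[0]
            tau = below[0] * dt if below.size else np.inf
            return tau, acf

        # Faster stirring should shorten this decorrelation time, since the fibers
        # translate and rotate through the scattering volume more quickly.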

  14. 15N CSA tensors and 15N-1H dipolar couplings of protein hydrophobic core residues investigated by static solid-state NMR

    NASA Astrophysics Data System (ADS)

    Vugmeyster, Liliya; Ostrovsky, Dmitry; Fu, Riqiang

    2015-10-01

    In this work, we assess the usefulness of static 15N NMR techniques for the determination of the 15N chemical shift anisotropy (CSA) tensor parameters and 15N-1H dipolar splittings in powder protein samples. By using five single labeled samples of the villin headpiece subdomain protein in a hydrated lyophilized powder state, we determine the backbone 15N CSA tensors at two temperatures, 22 and -35 °C, in order to get a snapshot of the variability across the residues and as a function of temperature. All sites probed belonged to the hydrophobic core and most of them were part of α-helical regions. The values of the anisotropy (which include the effect of the dynamics) varied between 130 and 156 ppm at 22 °C, while the values of the asymmetry were in the 0.32-0.082 range. The Leu-75 and Leu-61 backbone sites exhibited high mobility based on the values of their temperature-dependent anisotropy parameters. Under the assumption that most differences stem from dynamics, we obtained the values of the motional order parameters for the 15N backbone sites. While a simple one-dimensional line shape experiment was used for the determination of the 15N CSA parameters, a more advanced approach based on the "magic sandwich" SAMMY pulse sequence (Nevzorov and Opella, 2003) was employed for the determination of the 15N-1H dipolar patterns, which yielded estimates of the dipolar couplings. Accordingly, the motional order parameters for the dipolar interaction were obtained. It was found that the order parameters from the CSA and dipolar measurements are highly correlated, validating that the variability between the residues is governed by the differences in dynamics. The values of the parameters obtained in this work can serve as reference values for developing more advanced magic-angle spinning recoupling techniques for multiple labeled samples.

  15. The ionisation parameter of star-forming galaxies evolves with the specific star formation rate

    NASA Astrophysics Data System (ADS)

    Kaasinen, Melanie; Kewley, Lisa; Bian, Fuyan; Groves, Brent; Kashino, Daichi; Silverman, John; Kartaltepe, Jeyhan

    2018-04-01

    We investigate the evolution of the ionisation parameter of star-forming galaxies using a high-redshift (z ˜ 1.5) sample from the FMOS-COSMOS survey and matched low-redshift samples from the Sloan Digital Sky Survey. By constructing samples of low-redshift galaxies for which the stellar mass (M*), star formation rate (SFR) and specific star formation rate (sSFR) are matched to the high-redshift sample we remove the effects of an evolution in these properties. We also account for the effect of metallicity by jointly constraining the metallicity and ionisation parameter of each sample. We find an evolution in the ionisation parameter for main-sequence, star-forming galaxies and show that this evolution is driven by the evolution of sSFR. By analysing the matched samples as well as a larger sample of z < 0.3, star-forming galaxies we show that high ionisation parameters are directly linked to high sSFRs and are not simply the byproduct of an evolution in metallicity. Our results are physically consistent with the definition of the ionisation parameter, a measure of the hydrogen ionising photon flux relative to the number density of hydrogen atoms.
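
    For reference, the ionisation parameter described in the last sentence is commonly written as follows (this is the standard textbook definition, not a formula quoted from the paper):

        q = \frac{Q_{\mathrm{H}^{0}}}{4\pi R_{s}^{2}\, n_{\mathrm{H}}}, \qquad U \equiv \frac{q}{c}

    where Q_H0 is the rate of hydrogen-ionising photons emitted by the source, R_s the radius of the ionised region, n_H the hydrogen number density, and c the speed of light; a higher ionising photon flux per hydrogen atom therefore corresponds to a higher q (or dimensionless U).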

  16. Changes in the color of white chocolate during storage: potential roles of lipid oxidation and non-enzymatic browning reactions.

    PubMed

    Rossini, Karina; Noreña, Caciano P Z; Brandelli, Adriano

    2011-06-01

    Three different samples of white chocolate were prepared: a sample with a synthetic antioxidant, another with casein peptides as natural antioxidant, and a third sample without any kind of antioxidant. Parameters associated with lipid oxidation and non-enzymatic browning were evaluated in the different samples of white chocolate during 10 months of storage at 20 and 28°C. Acidity, thiobarbituric acid reactive substances and peroxide values increased with the incubation time. Samples stored at 20°C often showed lower values for these parameters than those stored at 28°C, although the differences were not always significant. The values for water activity increased from 0.4 to 0.53-0.57 during the period of 10 months. The color parameter a* increased in samples stored at 28°C from month 5, and the parameter b* was lower in samples containing antioxidants from month 2. The addition of antioxidants did not significantly influence most of the parameters studied, suggesting that the main factors governing the alterations of white chocolate during its shelf life were the storage temperature and the increase in water activity.

  17. Auditing of chromatographic data.

    PubMed

    Mabie, J T

    1998-01-01

    During a data audit, it is important to ensure that there is clear documentation and an audit trail. The Quality Assurance Unit should review all areas, including the laboratory, during the conduct of the sample analyses. The analytical methodology that is developed should be documented prior to sample analyses. This is an important document for the auditor, as it is the instrumental piece used by the laboratory personnel to maintain integrity throughout the process. It is expected that this document will give insight into the sample analysis, run controls, run sequencing, instrument parameters, and acceptance criteria for the samples. The sample analysis and all supporting documentation should be audited in conjunction with this written analytical method and any supporting Standard Operating Procedures to ensure the quality and integrity of the data.

  18. High energy PIXE: A tool to characterize multi-layer thick samples

    NASA Astrophysics Data System (ADS)

    Subercaze, A.; Koumeir, C.; Métivier, V.; Servagent, N.; Guertin, A.; Haddad, F.

    2018-02-01

    High energy PIXE is a useful and non-destructive tool to characterize multi-layer thick samples such as cultural heritage objects. In a previous work, we demonstrated the possibility to perform quantitative analysis of simple multi-layer samples using high energy PIXE, without any assumption on their composition. In this work an in-depth study of the parameters involved in the method previously published is proposed. Its extension to more complex samples with a repeated layer is also presented. Experiments have been performed at the ARRONAX cyclotron using 68 MeV protons. The thicknesses and sequences of a multi-layer sample including two different layers of the same element have been determined. Performances and limits of this method are presented and discussed.

  19. Random sampling and validation of covariance matrices of resonance parameters

    NASA Astrophysics Data System (ADS)

    Plevnik, Lucijan; Zerovnik, Gašper

    2017-09-01

    Analytically exact methods for random sampling of arbitrarily correlated parameters are presented. Emphasis is given, on the one hand, to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and consistent sampling of correlated inherently positive parameters, and, on the other hand, to optimization of the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which, from a file of a chosen isotope in ENDF-6 format taken from a nuclear data library, produces an arbitrary number of new files in ENDF-6 format containing random samples of the resonance parameters (in accordance with the corresponding covariance matrices) in place of the original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from a nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in the covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to the sampling and validation of any nuclear data.
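
    A minimal numpy sketch of the core operation: checking positive semi-definiteness of the covariance matrix and drawing correlated samples via a Cholesky factor. This is a generic illustration of the technique, not the ENDSAM implementation.

        import numpy as np

        def sample_correlated(mean, cov, n_samples, seed=0):
            # Draw correlated Gaussian samples with the given mean vector and
            # covariance matrix, after checking positive semi-definiteness.
            mean, cov = np.asarray(mean, float), np.asarray(cov, float)
            eig = np.linalg.eigvalsh(cov)
            if eig.min() < -1e-10 * max(abs(eig.max()), 1.0):
                raise ValueError("covariance matrix is not positive semi-definite")
            L = np.linalg.cholesky(cov + 1e-12 * np.eye(mean.size))   # small jitter for semi-definite cases
            z = np.random.default_rng(seed).standard_normal((n_samples, mean.size))
            return mean + z @ L.T

        # Inherently positive parameters (e.g. widths) would additionally require a
        # consistent transform, such as sampling log-parameters with a suitably
        # transformed covariance, rather than plain Gaussian sampling.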

  20. Comprehensive automation of the solid phase extraction gas chromatographic mass spectrometric analysis (SPE-GC/MS) of opioids, cocaine, and metabolites from serum and other matrices.

    PubMed

    Lerch, Oliver; Temme, Oliver; Daldrup, Thomas

    2014-07-01

    The analysis of opioids, cocaine, and metabolites from blood serum is a routine task in forensic laboratories. Commonly, the employed methods include many manual or partly automated steps like protein precipitation, dilution, solid phase extraction, evaporation, and derivatization preceding a gas chromatography (GC)/mass spectrometry (MS) or liquid chromatography (LC)/MS analysis. In this study, a comprehensively automated method was developed from a validated, partly automated routine method. This was possible by replicating method parameters on the automated system. Only marginal optimization of parameters was necessary. The automation, relying on an x-y-z robot after manual protein precipitation, includes the solid phase extraction, evaporation of the eluate, derivatization (silylation with N-methyl-N-trimethylsilyltrifluoroacetamide, MSTFA), and injection into a GC/MS. A quantitative analysis of almost 170 authentic serum samples and more than 50 authentic samples of other matrices like urine, different tissues, and heart blood for cocaine, benzoylecgonine, methadone, morphine, codeine, 6-monoacetylmorphine, dihydrocodeine, and 7-aminoflunitrazepam was conducted with both methods, proving that the analytical results are equivalent even near the limits of quantification (low ng/ml range). To the best of our knowledge, this application is the first one reported in the literature employing this sample preparation system.

  1. On the Accuracy of Atmospheric Parameter Determination in BAFGK Stars

    NASA Astrophysics Data System (ADS)

    Ryabchikova, T.; Piskunov, N.; Shulyak, D.

    2015-04-01

    During the past few years, many papers determining the atmospheric parameters of FGK stars have appeared in the literature, where the accuracy of effective temperatures is given as 20-40 K. For main sequence stars within the 5 000-13 000 K temperature range, we have performed a comparative analysis of the parameters derived from the spectra using the SME (Spectroscopy Made Easy) package and those found in the literature. Our sample includes the standard stars Sirius, Procyon, δ Eri, and the Sun. Combining different spectral regions in the fitting procedure, we investigated the effect that different atomic species have on the derived atmospheric parameters. The temperature difference may exceed 100 K depending on the spectral regions used in the SME procedure. It is shown that the atmospheric parameters derived with the SME procedure that includes wings of hydrogen lines in the fitting agree better with the results derived by other methods and tools across a large part of the main sequence. For three stars—π Cet, 21 Peg, and Procyon—the atmospheric parameters were also derived by fitting a calculated energy distribution to the observed one. We found a substantial difference in the parameters inferred from different sets and combinations of spectrophotometric observations. An intercomparison of our results and literature data shows that the average accuracy of effective temperature determination for cool stars and for the early B-stars is 70-85 K and 170-200 K, respectively.

  2. Fast and versatile fabrication of PMMA microchip electrophoretic devices by laser engraving.

    PubMed

    Moreira Gabriel, Ellen Flávia; Tomazelli Coltro, Wendell Karlos; Garcia, Carlos D

    2014-08-01

    This paper describes the effects of different modes and engraving parameters on the dimensions of microfluidic structures produced in PMMA using laser engraving. The engraving modes included raster and vector, while the explored engraving parameters included power, speed, frequency, resolution, line-width, and number of passes. Under the optimum conditions, the technique was applied to produce channels suitable for CE separations. Taking advantage of the possibility to cut-through the substrates, the laser was also used to define solution reservoirs (buffer, sample, and waste) and a PDMS-based decoupler. The final device was used to perform the analysis of a model mixture of phenolic compounds within 200 s with baseline resolution. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Phase-Angle Dependence of Determinations of Diameter, Albedo, and Taxonomy: A Case Study of NEO 3691 Bede

    NASA Technical Reports Server (NTRS)

    Wooden, Diane H.; Lederer, Susan M.; Jehin, Emmanuel; Howell, Ellen S.; Fernandez, Yan; Harker, David E.; Ryan, Erin; Lovell, Amy; Woodward, Charles E.; Benner, Lance A.

    2015-01-01

    Parameters important for NEO risk assessment and mitigation include Near-Earth Object diameter and taxonomic classification, which translates to surface composition. Diameters of NEOs are derived from the thermal fluxes measured by WISE, NEOWISE, Spitzer Warm Mission and ground-based telescopes including the IRTF and UKIRT. Diameter and its coupled parameters Albedo and IR beaming parameter (a proxy for thermal inertia and/or surface roughness) are dependent upon the phase angle, which is the Sun-target-observer angle. Orbit geometries of NEOs, however, typically provide for observations at phase angles greater than 20 degrees. At higher phase angles, the observed thermal emission is sampling both the day and night sides of the NEO. We compare thermal models for NEOs that exclude (NEATM) and include (NESTM) night-side emission. We present a case study of NEO 3691 Bede, which is a higher albedo object, X (Ec) or Cgh taxonomy, to highlight the range of H magnitudes for this object (depending on the albedo and phase function slope parameter G), and to examine at different phase angles the taxonomy and thermal model fits for this NEO. Observations of 3691 Bede include our observations with IRTF+SpeX and with the 10 micrometer UKIRT+Michelle instrument, as well as WISE and Spitzer Warm mission data. By examining 3691 Bede as a case study, we highlight the interplay between the derivation of basic physical parameters and observing geometry, and we discuss the uncertainties in H magnitude, taxonomy assignment amongst the X-class (P, M, E), and diameter determinations. Systematic dependencies in the derivation of basic characterization parameters of H-magnitude, diameter, albedo and taxonomy with observing geometry are important to understand. These basic characterization parameters affect the statistical assessments of the NEO population, which in turn, affects the assignment of statistically-assessed basic parameters to discovered but yet-to-be-fully-characterized NEOs.

  4. Selection of sampling rate for digital control of aircrafts

    NASA Technical Reports Server (NTRS)

    Katz, P.; Powell, J. D.

    1974-01-01

    The considerations in selecting the sample rates for digital control of aircraft are identified and evaluated using the optimal discrete method. A high-performance aircraft model which includes a bending mode and wind gusts was studied. The following factors which influence the selection of the sampling rates were identified: (1) the time and roughness response to control inputs; (2) the response to external disturbances; and (3) the sensitivity to variations of parameters. It was found that the time response to a control input and the response to external disturbances limit the selection of the sampling rate. The optimal discrete regulator, the steady-state Kalman filter, and the mean response to external disturbances are calculated.
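
    A hedged Python sketch of the kind of trade study described above: discretizing a toy second-order mode at a chosen sample period and computing the optimal discrete regulator gain. The model matrices, weights and sample periods are invented, not the report's aircraft model.

        import numpy as np
        from scipy.signal import cont2discrete
        from scipy.linalg import solve_discrete_are

        # Hypothetical lightly damped second-order mode: x' = A x + B u
        A = np.array([[0.0, 1.0], [-4.0, -0.4]])
        B = np.array([[0.0], [1.0]])

        def discrete_lqr_gain(A, B, Q, R, Ts):
            # Zero-order-hold discretization at sample period Ts, then the optimal
            # discrete regulator gain from the discrete algebraic Riccati equation.
            Ad, Bd, *_ = cont2discrete((A, B, np.eye(A.shape[0]), np.zeros_like(B)), Ts)
            P = solve_discrete_are(Ad, Bd, Q, R)
            return np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)

        # Comparing closed-loop responses for, say, Ts = 0.02 s versus Ts = 0.2 s shows
        # how command response and disturbance rejection degrade as the rate drops.
        K_fast = discrete_lqr_gain(A, B, np.eye(2), np.array([[1.0]]), Ts=0.02)
        K_slow = discrete_lqr_gain(A, B, np.eye(2), np.array([[1.0]]), Ts=0.20)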

  5. Photospheric Magnetic Field Properties of Flaring versus Flare-quiet Active Regions. II. Discriminant Analysis

    NASA Astrophysics Data System (ADS)

    Leka, K. D.; Barnes, G.

    2003-10-01

    We apply statistical tests based on discriminant analysis to the wide range of photospheric magnetic parameters described in a companion paper by Leka & Barnes, with the goal of identifying those properties that are important for the production of energetic events such as solar flares. The photospheric vector magnetic field data from the University of Hawai'i Imaging Vector Magnetograph are well sampled both temporally and spatially, and we include here data covering 24 flare-event and flare-quiet epochs taken from seven active regions. The mean value and rate of change of each magnetic parameter are treated as separate variables, thus evaluating both the parameter's state and its evolution, to determine which properties are associated with flaring. Considering single variables first, Hotelling's T2-tests show small statistical differences between flare-producing and flare-quiet epochs. Even pairs of variables considered simultaneously, which do show a statistical difference for a number of properties, have high error rates, implying a large degree of overlap of the samples. To better distinguish between flare-producing and flare-quiet populations, larger numbers of variables are simultaneously considered; lower error rates result, but no unique combination of variables is clearly the best discriminator. The sample size is too small to directly compare the predictive power of large numbers of variables simultaneously. Instead, we rank all possible four-variable permutations based on Hotelling's T2-test and look for the most frequently appearing variables in the best permutations, with the interpretation that they are most likely to be associated with flaring. These variables include an increasing kurtosis of the twist parameter and a larger standard deviation of the twist parameter, but a smaller standard deviation of the distribution of the horizontal shear angle and a horizontal field that has a smaller standard deviation but a larger kurtosis. To support the ``sorting all permutations'' method of selecting the most frequently occurring variables, we show that the results of a single 10-variable discriminant analysis are consistent with the ranking. We demonstrate that individually, the variables considered here have little ability to differentiate between flaring and flare-quiet populations, but with multivariable combinations, the populations may be distinguished.
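
    A short Python sketch of the two-sample Hotelling's T² statistic used as the discriminant criterion above; the grouping into flaring and flare-quiet epochs and the variable layout are assumptions for illustration, not the authors' data.

        import numpy as np
        from scipy import stats

        def hotelling_t2(X, Y):
            # Two-sample Hotelling's T^2 for a difference in mean vectors between
            # flaring (X) and flare-quiet (Y) epochs; rows are epochs, columns are
            # magnetic variables (e.g. a parameter's mean value and its rate of change).
            nx, ny, p = X.shape[0], Y.shape[0], X.shape[1]
            diff = X.mean(axis=0) - Y.mean(axis=0)
            S = ((nx - 1) * np.cov(X, rowvar=False) + (ny - 1) * np.cov(Y, rowvar=False)) / (nx + ny - 2)
            t2 = (nx * ny) / (nx + ny) * diff @ np.linalg.solve(S, diff)
            F = t2 * (nx + ny - p - 1) / (p * (nx + ny - 2))      # F-distributed statistic
            return t2, stats.f.sf(F, p, nx + ny - p - 1)          # statistic and p-value

    Ranking variable subsets by their T² values (or the associated error rates) is one way to search for the combinations that best separate the two populations, analogous to the permutation ranking described in the abstract.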

  6. [Stature estimation for Sichuan Han nationality female based on X-ray technology with measurement of lumbar vertebrae].

    PubMed

    Qing, Si-han; Chang, Yun-feng; Dong, Xiao-ai; Li, Yuan; Chen, Xiao-gang; Shu, Yong-kang; Deng, Zhen-hua

    2013-10-01

    To establish mathematical models of stature estimation for Sichuan Han females from measurements of the lumbar vertebrae on X-ray, in order to provide essential data for forensic anthropology research. The samples, 206 Sichuan Han females, were divided into three groups (A, B and C) according to age. Group A (206 samples) included all ages, group B (116 samples) comprised subjects 20-45 years old, and group C comprised the 90 samples over 45 years old. The lumbar vertebrae of all samples were examined using CR technology, and the parameters included the anterior border, posterior border and central heights of the five centrums (L1-L5) (x1-x15), the total central height of the lumbar spine (x16), and the real height of every sample. Linear regression analysis was performed on these parameters to establish the mathematical models of stature estimation. Sixty-two trained subjects were tested to verify the accuracy of the mathematical models. Hypothesis testing showed that the established linear regression models were statistically significant (P<0.05). The standard errors of the equations were 2.982-5.004 cm, while the correlation coefficients were 0.370-0.779 and the multiple correlation coefficients were 0.533-0.834. The return tests of the highest correlation coefficient and multiple correlation coefficient of each group showed that the highest accuracy of the multiple regression equation, y = 100.33 + 1.489 x3 - 0.548 x6 + 0.772 x9 + 0.058 x12 + 0.645 x15, in group A was 80.6% (± 1 SE) and 100% (± 2 SE). The established mathematical models in this study can be applied to stature estimation for Sichuan Han females.
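
    The highest-accuracy equation quoted above can be applied directly; the helper below simply evaluates it. The variable names follow the abstract's x-notation, and anything beyond the quoted coefficients (units, measurement protocol) is assumed.

        def estimate_stature_cm(x3, x6, x9, x12, x15):
            # Group A multiple regression equation exactly as quoted in the abstract;
            # x3, x6, x9, x12, x15 are the corresponding lumbar vertebral measurements.
            return 100.33 + 1.489 * x3 - 0.548 * x6 + 0.772 * x9 + 0.058 * x12 + 0.645 * x15

    According to the abstract, 80.6% of the test subjects fell within ± 1 SE of this equation's estimate and 100% within ± 2 SE.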

  7. Microbiological Quality of Raw Dried Pasta from the German Market, with Special Emphasis on Cronobacter Species.

    PubMed

    Akineden, Ömer; Murata, Kristina Johanna; Gross, Madeleine; Usleber, Ewald

    2015-12-01

    The microbiological quality of 132 dried pasta products available on the German market, originating from 11 different countries, was studied. Sample materials included soft or durum wheat products, some of which were produced with other ingredients such as eggs, spices, or vegetables. Parameters included hygiene indicators (aerobic plate count, mold count, the presence of Enterobacteriaceae) and pathogenic/toxinogenic bacterial species (Salmonella spp., Staphylococcus aureus, presumptive Bacillus cereus, and Cronobacter spp.). The overall results of the hygiene parameters indicated a satisfactory quality. Salmonella was not found in any sample. Three samples were positive for S. aureus (10^2 to 10^4 colony forming units (CFU)/g). Presumptive B. cereus at levels of 10^3 to 10^4 CFU/g was detected in 3 samples. Cronobacter spp. were isolated from 14 (10.6%) products. Of these, 9 isolates were identified as C. sakazakii, 2 each as C. turicensis and C. malonaticus, and 1 as C. muytjensii. The isolates were assigned to 9 multilocus sequence typing (MLST) sequence types and to 14 different PFGE profiles. Although pasta products are typically cooked before consumption, some consumers, and children in particular, may also eat raw pasta as nibbles. Raw pasta seems to be a relevant source of exposure to dietary Cronobacter spp., although health risks are probably restricted to vulnerable consumers. The high numbers of presumptive B. cereus found in some samples may pose a risk after improper storage of cooked pasta products because toxinogenic strains are frequently found within this species. © 2015 Institute of Food Technologists®

  8. Influence of meteorological parameters on air quality

    NASA Astrophysics Data System (ADS)

    Gioda, Adriana; Ventura, Luciana; Lima, Igor; Luna, Aderval

    2013-04-01

    The physical characterization of ambient air particle concentrations is becoming a topic of great interest for urban air quality monitoring and human exposure assessment. Human exposure to particulate matter of less than 2.5 µm in diameter (PM2.5) can result in a variety of adverse health impacts, including reduced lung function and premature mortality. Numerous studies have shown that fine inhalable airborne particles (PM2.5) are more dangerous to human health than coarse particles, e.g. PM10. This study investigates the impact of meteorological parameters on PM2.5 concentrations in the atmosphere of Rio de Janeiro, Brazil. Samples were collected over 24 h every six days using a high-volume sampler at six sites in the metropolitan area of Rio de Janeiro from January to December 2011. The particle mass was determined by gravimetry. Meteorological parameters were obtained from automatic stations near the sampling sites. The average PM2.5 concentrations ranged from 9 to 32 µg/m3 for all sites, exceeding the annual limit of 10 µg/m3 suggested by the WHO. The relationship between temperature, relative humidity, wind speed and direction and particle concentration was examined using Principal Component Analysis (PCA) for the different sites and seasons. The results for each sampling point and season yielded different numbers of principal components, varying from 2 to 4, and markedly different relationships with the parameters. This clearly shows that changes in meteorological conditions exert a marked influence on air quality.
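
    A brief Python sketch of a PCA of PM2.5 together with meteorological variables, in the spirit of the analysis described above; the data matrix, the scaling choice and the eigenvalue-greater-than-one rule for retaining components are assumptions, not the study's actual procedure.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Hypothetical data matrix: one row per 24 h sample, columns such as
        # [PM2.5, temperature, relative humidity, wind speed, wind direction]
        X = np.random.default_rng(0).normal(size=(60, 5))

        Z = StandardScaler().fit_transform(X)                 # standardize variables
        pca = PCA().fit(Z)
        n_pc = int(np.sum(pca.explained_variance_ > 1.0))     # eigenvalue-greater-than-one rule
        loadings = pca.components_[:n_pc]                     # how each variable loads on the retained PCs
        scores = pca.transform(Z)[:, :n_pc]                   # sample scores, to be examined per site/season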

  9. Rheological Characterization and Cluster Classification of Iranian Commercial Foods, Drinks and Desserts to Recommend for Esophageal Dysphagia Diets

    PubMed Central

    ZARGARAAN, Azizollaah; OMARAEE, Yasaman; RASTMANESH, Reza; TAHERI, Negin; FADAVI, Ghasem; FADAEI, Morteza; MOHAMMADIFAR, Mohammad Amin

    2013-01-01

    Background: In the absence of dysphagia-oriented food products, rheological characterization of available food items is of importance for safe swallowing and adequate nutrient intake of dysphagic patients. In this way, introducing alternative items (with similar ease of swallowing) is helpful to improve the quality of life and nutritional intake of esophageal cancer dysphagia patients. The present study aimed at rheological characterization and cluster classification of potentially suitable foodstuffs marketed in Iran for their possible use in dysphagia diets. Methods: In this descriptive study, rheological data were obtained during January and February 2012 in the Rheology Lab of the National Nutrition and Food Technology Research Institute, Tehran, Iran. Steady state and oscillatory shear parameters of 39 commercial samples were obtained using a Physica MCR 301 rheometer (Anton-Paar, GmbH, Graz, Austria). The Matlab Fuzzy Logic Toolbox (R2012a) was utilized for cluster classification of the samples. Results: Using an extended list of rheological parameters and fuzzy logic methods, the 39 commercial samples (drinks, main courses and desserts) were divided into 5 clusters, and the degree of membership in each cluster was stated by a number between 0 and 0.99. Conclusion: Considering the apparent viscosity of foodstuffs as the single criterion for classification of dysphagia-oriented food products is a shortcoming of current guidelines for dysphagia diets. The authors propose some revisions to the classification of dysphagia-oriented food products, including more rheological parameters (especially viscoelastic parameters) in the classification. PMID:26060647

  10. Rheological Characterization and Cluster Classification of Iranian Commercial Foods, Drinks and Desserts to Recommend for Esophageal Dysphagia Diets.

    PubMed

    Zargaraan, Azizollaah; Omaraee, Yasaman; Rastmanesh, Reza; Taheri, Negin; Fadavi, Ghasem; Fadaei, Morteza; Mohammadifar, Mohammad Amin

    2013-12-01

    In the absence of dysphagia-oriented food products, rheological characterization of available food items is of importance for safe swallowing and adequate nutrient intake of dysphagic patients. In this way, introducing alternative items (with similar ease of swallowing) is helpful to improve the quality of life and nutritional intake of esophageal cancer dysphagia patients. The present study aimed at rheological characterization and cluster classification of potentially suitable foodstuffs marketed in Iran for their possible use in dysphagia diets. In this descriptive study, rheological data were obtained during January and February 2012 in the Rheology Lab of the National Nutrition and Food Technology Research Institute, Tehran, Iran. Steady state and oscillatory shear parameters of 39 commercial samples were obtained using a Physica MCR 301 rheometer (Anton-Paar, GmbH, Graz, Austria). The Matlab Fuzzy Logic Toolbox (R2012a) was utilized for cluster classification of the samples. Using an extended list of rheological parameters and fuzzy logic methods, the 39 commercial samples (drinks, main courses and desserts) were divided into 5 clusters, and the degree of membership in each cluster was stated by a number between 0 and 0.99. Considering the apparent viscosity of foodstuffs as the single criterion for classification of dysphagia-oriented food products is a shortcoming of current guidelines for dysphagia diets. The authors propose some revisions to the classification of dysphagia-oriented food products, including more rheological parameters (especially viscoelastic parameters) in the classification.

  11. Nondestructive prediction of pork freshness parameters using multispectral scattering images

    NASA Astrophysics Data System (ADS)

    Tang, Xiuying; Li, Cuiling; Peng, Yankun; Chao, Kuanglin; Wang, Mingwu

    2012-05-01

    Optical technology is an important and emerging technology for non-destructive and rapid detection of pork freshness. This paper studied the possibility of using a multispectral imaging technique and scattering characteristics to predict the freshness parameters of pork meat. The pork freshness parameters selected for prediction included total volatile basic nitrogen (TVB-N), color parameters (L*, a*, b*), and pH value. Multispectral scattering images were obtained from the pork sample surface by a multispectral imaging system developed by ourselves; they were acquired at selected narrow wavebands whose center wavelengths were 517, 550, 560, 580, 600, 760, 810 and 910 nm. In order to extract scattering characteristics from the multispectral images at multiple wavelengths, a Lorentzian distribution (LD) function with four parameters (a: scattering asymptotic value; b: scattering peak; c: scattering width; d: scattering slope) was used to fit the scattering curves at the selected wavelengths. The results show that the multispectral imaging technique combined with scattering characteristics is promising for predicting the freshness parameters of pork meat.
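
    A Python sketch of fitting a four-parameter Lorentzian-type profile to a scattering curve; the exact functional form (a + b / (1 + (x/c)^d)) and the synthetic data are assumptions for illustration, since the abstract does not spell the formula out.

        import numpy as np
        from scipy.optimize import curve_fit

        def ld_profile(x, a, b, c, d):
            # Assumed four-parameter Lorentzian-type profile: a = asymptotic value,
            # b = peak, c = width, d = slope of the scattering curve.
            return a + b / (1.0 + (x / c) ** d)

        # x: distance from the light incident point (mm); y: scattering intensity.
        rng = np.random.default_rng(1)
        x = np.linspace(0.5, 15.0, 60)
        y = ld_profile(x, 0.10, 1.0, 3.0, 2.5) + 0.01 * rng.normal(size=x.size)
        popt, pcov = curve_fit(ld_profile, x, y, p0=[0.1, 1.0, 2.0, 2.0])
        a_fit, b_fit, c_fit, d_fit = popt   # fitted parameters per waveband, used as prediction features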

  12. Factors affecting hematology and plasma biochemistry in the southwest carpet python (Morelia spilota imbricata).

    PubMed

    Bryant, Gillian L; Fleming, Patricia A; Twomey, Leanne; Warren, Kristin A

    2012-04-01

    Despite increased worldwide popularity of keeping reptiles as pets, we know little about hematologic and biochemical parameters of most reptile species, or how these measures may be influenced by intrinsic and extrinsic factors. Blood samples from 43 wild-caught pythons (Morelia spilota imbricata) were collected at various stages of a 3-yr ecological study in Western Australia. Reference intervals are reported for 35 individuals sampled at the commencement of the study. As pythons were radiotracked for varying lengths of time (radiotransmitters were surgically implanted), repeated sampling was undertaken from some individuals. However, because of our ad hoc sampling design we cannot be definitive about temporal factors that were most important or that exclusively influenced blood parameters. There was no significant effect of sex or the presence of a hemogregarine parasite on blood parameters. Erythrocyte measures were highest for pythons captured in the jarrah forest and at the stage of radiotransmitter implantation, which was also linked with shorter time in captivity. Basophil count, the only leukocyte influenced by the factors tested, was highest when the python was anesthetized, as was globulin concentration. Albumin and the albumin:globulin ratio were more concentrated in summer (as was phosphorous) and at the initial stage of radiotransmitter placement (as was calcium). No intrinsic or extrinsic factors influenced creatinine kinase, aspartate aminotransferase, uric acid, or total protein. This study demonstrates that factors including season, location, surgical radiotransmitter placement, and anesthetic state can influence blood parameters of M. s. imbricata. For accurate diagnosis, veterinarians should be aware that the current reference intervals used to identify the health status of individuals for this species are outdated and the interpretation and an understanding of the influence of intrinsic and extrinsic factors are limited.

  13. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
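
    A compact Python sketch of a Fisher-information-based comparison of sampling-time designs for the logistic example mentioned above; the parameter values, noise model and candidate designs are placeholders, and plain D-optimality is shown here rather than the paper's SE-optimal criterion.

        import numpy as np

        def logistic(t, r, K, x0=0.1):
            # Verhulst-Pearl logistic growth curve.
            return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))

        def fisher_information(times, r, K, sigma=0.05, h=1e-6):
            # Fisher information matrix for theta = (r, K) under i.i.d. Gaussian
            # observation noise, using central finite-difference sensitivities.
            times = np.asarray(times, float)
            theta = np.array([r, K])
            sens = np.empty((times.size, theta.size))
            for j in range(theta.size):
                tp, tm = theta.copy(), theta.copy()
                tp[j] += h
                tm[j] -= h
                sens[:, j] = (logistic(times, *tp) - logistic(times, *tm)) / (2.0 * h)
            return sens.T @ sens / sigma ** 2

        # D-optimality style comparison: the design with the larger det(F) gives the
        # smaller asymptotic confidence ellipsoid for the parameter estimates.
        uniform = np.linspace(0.5, 10.0, 8)
        late_heavy = np.concatenate([np.linspace(0.5, 4.0, 3), np.linspace(6.0, 10.0, 5)])
        det_uniform = np.linalg.det(fisher_information(uniform, r=1.0, K=10.0))
        det_late = np.linalg.det(fisher_information(late_heavy, r=1.0, K=10.0))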

  14. Measurement of neutrino and antineutrino oscillations by the T2K experiment including a new additional sample of νe interactions at the far detector

    NASA Astrophysics Data System (ADS)

    Abe, K.; Amey, J.; Andreopoulos, C.; Antonova, M.; Aoki, S.; Ariga, A.; Ashida, Y.; Ban, S.; Barbi, M.; Barker, G. J.; Barr, G.; Barry, C.; Batkiewicz, M.; Berardi, V.; Berkman, S.; Bhadra, S.; Bienstock, S.; Blondel, A.; Bolognesi, S.; Bordoni, S.; Boyd, S. B.; Brailsford, D.; Bravar, A.; Bronner, C.; Buizza Avanzini, M.; Calland, R. G.; Campbell, T.; Cao, S.; Cartwright, S. L.; Catanesi, M. G.; Cervera, A.; Chappell, A.; Checchia, C.; Cherdack, D.; Chikuma, N.; Christodoulou, G.; Coleman, J.; Collazuol, G.; Coplowe, D.; Cudd, A.; Dabrowska, A.; De Rosa, G.; Dealtry, T.; Denner, P. F.; Dennis, S. R.; Densham, C.; Di Lodovico, F.; Dolan, S.; Drapier, O.; Duffy, K. E.; Dumarchez, J.; Dunne, P.; Emery-Schrenk, S.; Ereditato, A.; Feusels, T.; Finch, A. J.; Fiorentini, G. A.; Fiorillo, G.; Friend, M.; Fujii, Y.; Fukuda, D.; Fukuda, Y.; Garcia, A.; Giganti, C.; Gizzarelli, F.; Golan, T.; Gonin, M.; Hadley, D. R.; Haegel, L.; Haigh, J. T.; Hansen, D.; Harada, J.; Hartz, M.; Hasegawa, T.; Hastings, N. C.; Hayashino, T.; Hayato, Y.; Hillairet, A.; Hiraki, T.; Hiramoto, A.; Hirota, S.; Hogan, M.; Holeczek, J.; Hosomi, F.; Huang, K.; Ichikawa, A. K.; Ikeda, M.; Imber, J.; Insler, J.; Intonti, R. A.; Ishida, T.; Ishii, T.; Iwai, E.; Iwamoto, K.; Izmaylov, A.; Jamieson, B.; Jiang, M.; Johnson, S.; Jonsson, P.; Jung, C. K.; Kabirnezhad, M.; Kaboth, A. C.; Kajita, T.; Kakuno, H.; Kameda, J.; Karlen, D.; Katori, T.; Kearns, E.; Khabibullin, M.; Khotjantsev, A.; Kim, H.; Kim, J.; King, S.; Kisiel, J.; Knight, A.; Knox, A.; Kobayashi, T.; Koch, L.; Koga, T.; Koller, P. P.; Konaka, A.; Kormos, L. L.; Koshio, Y.; Kowalik, K.; Kudenko, Y.; Kurjata, R.; Kutter, T.; Lagoda, J.; Lamont, I.; Lamoureux, M.; Lasorak, P.; Laveder, M.; Lawe, M.; Licciardi, M.; Lindner, T.; Liptak, Z. J.; Litchfield, R. P.; Li, X.; Longhin, A.; Lopez, J. P.; Lou, T.; Ludovici, L.; Lu, X.; Magaletti, L.; Mahn, K.; Malek, M.; Manly, S.; Maret, L.; Marino, A. D.; Martin, J. F.; Martins, P.; Martynenko, S.; Maruyama, T.; Matveev, V.; Mavrokoridis, K.; Ma, W. Y.; Mazzucato, E.; McCarthy, M.; McCauley, N.; McFarland, K. S.; McGrew, C.; Mefodiev, A.; Metelko, C.; Mezzetto, M.; Minamino, A.; Mineev, O.; Mine, S.; Missert, A.; Miura, M.; Moriyama, S.; Morrison, J.; Mueller, Th. A.; Nakadaira, T.; Nakahata, M.; Nakamura, K. G.; Nakamura, K.; Nakamura, K. D.; Nakanishi, Y.; Nakayama, S.; Nakaya, T.; Nakayoshi, K.; Nantais, C.; Nielsen, C.; Nishikawa, K.; Nishimura, Y.; Novella, P.; Nowak, J.; O'Keeffe, H. M.; Okumura, K.; Okusawa, T.; Oryszczak, W.; Oser, S. M.; Ovsyannikova, T.; Owen, R. A.; Oyama, Y.; Palladino, V.; Palomino, J. L.; Paolone, V.; Patel, N. D.; Paudyal, P.; Pavin, M.; Payne, D.; Petrov, Y.; Pickering, L.; Pinzon Guerra, E. S.; Pistillo, C.; Popov, B.; Posiadala-Zezula, M.; Poutissou, J.-M.; Pritchard, A.; Przewlocki, P.; Quilain, B.; Radermacher, T.; Radicioni, E.; Ratoff, P. N.; Rayner, M. A.; Reinherz-Aronis, E.; Riccio, C.; Rodrigues, P. A.; Rondio, E.; Rossi, B.; Roth, S.; Ruggeri, A. C.; Rychter, A.; Sakashita, K.; Sánchez, F.; Scantamburlo, E.; Scholberg, K.; Schwehr, J.; Scott, M.; Seiya, Y.; Sekiguchi, T.; Sekiya, H.; Sgalaberna, D.; Shah, R.; Shaikhiev, A.; Shaker, F.; Shaw, D.; Shiozawa, M.; Shirahige, T.; Smy, M.; Sobczyk, J. T.; Sobel, H.; Steinmann, J.; Stewart, T.; Stowell, P.; Suda, Y.; Suvorov, S.; Suzuki, A.; Suzuki, S. Y.; Suzuki, Y.; Tacik, R.; Tada, M.; Takeda, A.; Takeuchi, Y.; Tamura, R.; Tanaka, H. K.; Tanaka, H. A.; Thakore, T.; Thompson, L. 
F.; Tobayama, S.; Toki, W.; Tomura, T.; Tsukamoto, T.; Tzanov, M.; Vagins, M.; Vallari, Z.; Vasseur, G.; Vilela, C.; Vladisavljevic, T.; Wachala, T.; Walter, C. W.; Wark, D.; Wascko, M. O.; Weber, A.; Wendell, R.; Wilking, M. J.; Wilkinson, C.; Wilson, J. R.; Wilson, R. J.; Wret, C.; Yamada, Y.; Yamamoto, K.; Yanagisawa, C.; Yano, T.; Yen, S.; Yershov, N.; Yokoyama, M.; Yu, M.; Zalewska, A.; Zalipska, J.; Zambelli, L.; Zaremba, K.; Ziembicki, M.; Zimmerman, E. D.; Zito, M.; T2K Collaboration

    2017-11-01

    The T2K experiment reports an updated analysis of neutrino and antineutrino oscillations in appearance and disappearance channels. A sample of electron neutrino candidates at Super-Kamiokande in which a pion decay has been tagged is added to the four single-ring samples used in previous T2K oscillation analyses. Through combined analyses of these five samples, simultaneous measurements of four oscillation parameters, |Δm²₃₂|, sin²θ₂₃, sin²θ₁₃, and δ_CP, and of the mass ordering are made. A set of studies of simulated data indicates that the sensitivity to the oscillation parameters is not limited by neutrino interaction model uncertainty. Multiple oscillation analyses are performed, and frequentist and Bayesian intervals are presented for combinations of the oscillation parameters with and without the inclusion of reactor constraints on sin²θ₁₃. When combined with reactor measurements, the hypothesis of CP conservation (δ_CP = 0 or π) is excluded at the 90% confidence level. The 90% confidence region for δ_CP is [-2.95, -0.44] ([-1.47, -1.27]) for normal (inverted) ordering. The central values and 68% confidence intervals for the other oscillation parameters for normal (inverted) ordering are Δm²₃₂ = 2.54 ± 0.08 (2.51 ± 0.08) × 10⁻³ eV²/c⁴ and sin²θ₂₃ = 0.55 +0.05/-0.09 (0.55 +0.05/-0.08), compatible with maximal mixing. In the Bayesian analysis, the data weakly prefer normal ordering (Bayes factor 3.7) and the upper octant for sin²θ₂₃ (Bayes factor 2.4).

  15. A systematic review of the reporting of Data Monitoring Committees' roles, interim analysis and early termination in pediatric clinical trials

    PubMed Central

    2009-01-01

    Background Decisions about interim analysis and early stopping of clinical trials, as based on recommendations of Data Monitoring Committees (DMCs), have far reaching consequences for the scientific validity and clinical impact of a trial. Our aim was to evaluate the frequency and quality of the reporting on DMC composition and roles, interim analysis and early termination in pediatric trials. Methods We conducted a systematic review of randomized controlled clinical trials published from 2005 to 2007 in a sample of four general and four pediatric journals. We used full-text databases to identify trials which reported on DMCs, interim analysis or early termination, and included children or adolescents. Information was extracted on general trial characteristics, risk of bias, and a set of parameters regarding DMC composition and roles, interim analysis and early termination. Results 110 of the 648 pediatric trials in this sample (17%) reported on DMC or interim analysis or early stopping, and were included; 68 from general and 42 from pediatric journals. The presence of DMCs was reported in 89 of the 110 included trials (81%); 62 papers, including 46 of the 89 that reported on DMCs (52%), also presented information about interim analysis. No paper adequately reported all DMC parameters, and nine (15%) reported all interim analysis details. Of 32 trials which terminated early, 22 (69%) did not report predefined stopping guidelines and 15 (47%) did not provide information on statistical monitoring methods. Conclusions Reporting on DMC composition and roles, on interim analysis results and on early termination of pediatric trials is incomplete and heterogeneous. We propose a minimal set of reporting parameters that will allow the reader to assess the validity of trial results. PMID:20003383

  16. Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach

    ERIC Educational Resources Information Center

    Rotondi, Michael A.; Donner, Allan

    2009-01-01

    The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
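
    For context, the usual way the intraclass correlation coefficient enters the required sample size of a cluster randomized trial is through the design effect (this is the standard formula, not necessarily the empirical Bayes procedure proposed in the paper):

        n_{\mathrm{cluster}} = n_{\mathrm{srs}} \left[ 1 + (m - 1)\,\rho \right], \qquad k = \frac{n_{\mathrm{cluster}}}{m}

    where n_srs is the sample size required under simple random sampling, m the (average) cluster size, ρ the intraclass correlation coefficient, and k the resulting number of clusters per arm.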

  17. Monitoring of Harmful Algal Blooms through Drinking Water Treatment Facilities Located on Lake Erie in the 2014 and 2015 Bloom Seasons

    EPA Science Inventory

    A number of drinking water treatment plants on Lake Erie have supplied water samples on a monthly basis for analysis related to the occurrence of harmful algal blooms (HABs). General water quality parameters including total organic carbon (TOC), orthophosphate, and chlorophyll-A ...

  18. Calibration of collection procedures for the determination of precipitation chemistry

    Treesearch

    James N. Galloway; Gene E. Likens

    1976-01-01

    Precipitation is currently collected by several methods, including several different designs of collection apparatus. We are investigating these differing methods and designs to determine which gives the most representative sample of precipitation for the analysis of some 25 chemical parameters. The experimental site, located in Ithaca, New York, has 22 collectors of...

  19. Ionic-liquid-impregnated resin for the microwave-assisted solid-liquid extraction of triazine herbicides in honey.

    PubMed

    Wu, Lijie; Song, Ying; Hu, Mingzhu; Yu, Cui; Zhang, Hanqi; Yu, Aimin; Ma, Qiang; Wang, Ziming

    2015-09-01

    Microwave-assisted ionic-liquid-impregnated resin solid-liquid extraction was developed for the extraction of triazine herbicides, including cyanazine, metribuzin, desmetryn, secbumeton, terbumeton, terbuthylazine, dimethametryn, and dipropetryn in honey samples. The ionic-liquid-impregnated resin was prepared by immobilizing 1-hexyl-3-methylimidazolium hexafluorophosphate in the microspores of resin. The resin was used as the extraction adsorbent. The extraction and enrichment of analytes were performed in a single step. The extraction time can be shortened greatly with the help of microwave. The effects of experimental parameters including type of resin, type of ionic liquid, mass ratio of resin to ionic liquid, extraction time, amount of the impregnated resin, extraction temperature, salt concentration, and desorption conditions on the extraction efficiency, were investigated. A Box-Behnken design was applied to the selection of the experimental parameters. The recoveries were in the range of 80.1 to 103.4% and the relative standard deviations were lower than 6.8%. The present method was applied to the analysis of honey samples. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Prototype Development of Remote Operated Hot Uniaxial Press (ROHUP) to Fabricate Advanced Tc-99 Bearing Ceramic Waste Forms - 13381

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alaniz, Ariana J.; Delgado, Luc R.; Werbick, Brett M.

    The objective of this senior student project is to design and build a prototype of a machine that simultaneously provides the proper pressure and temperature to sinter ceramic powders in situ, creating pellets with high densities above 90% of the theoretical value. This ROHUP (Remote Operated Hot Uniaxial Press) device is designed specifically to fabricate advanced ceramic Tc-99 bearing waste forms, and therefore radiological barriers have been included in the system. The HUP features electronic control and feedback systems to set and monitor pressure, load, and temperature parameters. The device operates wirelessly via a portable computer using Bluetooth® technology. The HUP device is designed to fit in a standard atmosphere-controlled glove box to further allow sintering under inert conditions (e.g. under Ar, He, N₂). This will further allow utilizing this HUP for other potential applications, including radioactive samples, novel ceramic waste forms, advanced oxide fuels, air-sensitive samples, metallic systems, advanced powder metallurgy, diffusion experiments and more. (authors)

  1. The evaluation of an analytical protocol for the determination of substances in waste for hazard classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hennebert, Pierre, E-mail: pierre.hennebert@ineris.fr; Papin, Arnaud; Padox, Jean-Marie

    Highlights: • Knowledge of the substances in wastes will be necessary to assess the HP1–HP15 hazard properties. • A new analytical protocol is proposed for this and tested by two service laboratories on 32 samples. • Sixty-three percent of the samples have a satisfactory analytical balance between 90% and 110%. • Eighty-four percent of the samples were classified identically (Seveso Directive) for their hazardousness by the two laboratories. • The method, in progress, is being normalized in France and will be proposed to CEN. - Abstract: The classification of waste as hazardous could soon be assessed in Europe largely using the hazard properties of its constituents, according to the Classification, Labelling and Packaging (CLP) regulation. Comprehensive knowledge of the component constituents of a given waste will therefore be necessary. An analytical protocol for determining waste composition is proposed, which includes using inductively coupled plasma (ICP) screening methods to identify major elements and gas chromatography/mass spectrometry (GC–MS) screening techniques to measure organic compounds. The method includes a gross or indicator measure of ‘pools’ of higher molecular weight organic substances that are taken to be less bioactive and less hazardous, and of unresolved ‘mass’ during the chromatography of volatile and semi-volatile compounds. The concentrations of some elements and specific compounds that are linked to specific hazard properties and are subject to specific regulation (examples include: heavy metals, chromium(VI), cyanides, organo-halogens, and PCBs) are determined by classical quantitative analysis. To check the consistency of the analysis, the sum of the concentrations (including unresolved ‘pools’) should give a mass balance between 90% and 110%. Thirty-two laboratory samples comprising different industrial wastes (liquids and solids) were tested by two routine service laboratories, giving circa 7000 parameter results. Despite discrepancies in some parameters, a satisfactory sum of estimated or measured concentrations (analytical balance) of 90% was reached for 20 samples (63% of the overall total) during this first test exercise, with identified reasons for most of the unsatisfactory results. Regular use of this protocol (which is now included in the French legislation) has enabled service laboratories to reach a 90% mass balance for nearly all the solid samples tested, and most of the liquid samples (difficulties were caused in some samples by polymers in solution and vegetable oil). The protocol has been submitted to the French and European normalization bodies (AFNOR and CEN), and further improvements are awaited.

  2. Computer-Aided Diagnosis Of Leukemic Blood Cells

    NASA Astrophysics Data System (ADS)

    Gunter, U.; Harms, H.; Haucke, M.; Aus, H. M.; ter Meulen, V.

    1982-11-01

    In a first clinical test, computer programs are being used to diagnose leukemias. The data collected include blood samples from patients suffering from acute myelomonocytic, acute monocytic, acute promyelocytic, myeloblastic, prolymphocytic, and chronic lymphocytic leukemias, as well as leukemic transformed immunocytoma. The proper differentiation of the leukemic cells is essential because the therapy depends on the type of leukemia. The algorithms analyse the fine chromatin texture and distribution in the nuclei as well as size and shape parameters of the cells and nuclei. Cells with similar nuclei from different leukemias can be distinguished from each other by analyzing the cell cytoplasm images. Recognition of these subtle differences in the cells requires an image sampling rate of 15-30 pixels/micron. The results for the entire data set correlate directly with established hematological parameters and support the previously published initial training set.

  3. A Study of the Gamma-Ray Burst Fundamental Plane

    DOE PAGES

    Dainotti, M. G.; Hernandez, X.; Postnikov, S.; ...

    2017-10-17

    Long gamma-ray bursts (GRBs) with a plateau phase in their X-ray afterglows obey a 3D relation between the rest-frame time at the end of the plateau, T_a, its corresponding X-ray luminosity, L_a, and the peak luminosity in the prompt emission, L_peak. This 3D relation identifies a GRB fundamental plane whose existence we here confirm. We include the most recent GRBs observed by Swift to define a "gold sample" (45 GRBs) and obtain an intrinsic scatter about the plane compatible within 1σ with the previous result. We compare GRB categories, such as short GRBs with extended emission (SEE), X-ray flashes, GRBs associated with supernovae, a sample of only long-duration GRBs (132), selected from the total sample by excluding GRBs of the previous categories, and the gold sample, composed of GRBs with light curves with good data coverage and relatively flat plateaus. We find that the relation planes for each of these categories are not statistically different from the gold fundamental plane, with the exception of the SEE, which are hence identified as a physically distinct class. The gold fundamental plane has an intrinsic scatter smaller than any plane derived from the other sample categories. Thus, the distance of any particular GRB category from this plane becomes a key parameter. We computed the category planes with T_a as a dependent parameter, obtaining for each category smaller intrinsic scatters (reaching a reduction of 24% for the long GRBs). The fundamental plane is independent of several prompt and afterglow parameters.

  4. Analysis of the contaminants released from municipal solid waste landfill site: A case study.

    PubMed

    Samadder, S R; Prabhakar, R; Khan, D; Kishan, D; Chauhan, M S

    2017-02-15

    Release and transport of leachate from municipal solid waste landfills pose a potential hazard to both surrounding ecosystems and human populations. In the present study, soil, groundwater, and surface water samples were collected from the periphery of a municipal solid waste landfill (located at Ranital in Jabalpur, Madhya Pradesh, India) for laboratory analysis to understand the release of contaminants. The landfill no longer receives any solid waste for dumping, as it is under a landfill closure plan. Groundwater and soil samples were collected from boreholes 15 m deep drilled along the periphery of the landfill, and surface water samples were collected from the existing surface water courses near the landfill. The landfill had neither a bottom liner nor any leachate collection and treatment system; thus the leachate generated from the landfill finds its way into the groundwater and the surrounding surface water courses. Concentrations of various physico-chemical parameters, including some toxic metals (in the collected groundwater, soil, and surface water samples), and microbiological parameters (in surface water samples) were determined. The analyzed data were integrated into an ArcGIS environment and the spatial distribution of the metals and other physico-chemical parameters was extrapolated across the landfill area. The statistical analysis and spatial variations indicated leaching of metals from the landfill into the groundwater aquifer system. The study will help readers and municipal engineers understand the release of contaminants from landfills for better management of municipal solid wastes. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Comparing molecular composition of dissolved organic matter in soil and stream water: Influence of land use and chemical characteristics.

    PubMed

    Seifert, Anne-Gret; Roth, Vanessa-Nina; Dittmar, Thorsten; Gleixner, Gerd; Breuer, Lutz; Houska, Tobias; Marxsen, Jürgen

    2016-11-15

    Electrospray ionization Fourier transform ion cyclotron resonance mass spectrometry (ESI-FT-ICR-MS) was used to examine the molecular composition of dissolved organic matter (DOM) from soils under different land use regimes and how the DOM composition in the catchment is reflected in adjacent streams. The study was carried out in a small area of the Schwingbach catchment, an anthropogenically influenced landscape in central Germany. We investigated 30 different soil water samples from 4 sites and different depths (managed meadow (0-5 cm, 40-50 cm), deciduous forest (0-5 cm), mixed-coniferous forest (0-5 cm) and agricultural land (0-5 cm, 40-50 cm)) and 8 stream samples. 6194 molecular formulae and their magnitude-weighted parameters ((O/C)w, (H/C)w, (N/C)w, (AI-mod)w, (DBE/C)w, (DBE/O)w, (DBE-O)w, (C#)w, (MW)w) were used to describe the molecular composition of the samples. The samples can be roughly divided into three groups. Group 1 contains samples from the managed meadow (40-50 cm) and stream water, which are characterized by high saturation compared to samples from group 2, comprising agricultural samples and samples from the surface meadow (0-5 cm), which contained more nitrogen-containing and aromatic compounds. Samples from both forested sites (group 3) are characterized by higher molecular weight and O/C ratio. Environmental parameters vary between sites, and among these parameters pH and nitrate significantly affect the chemical composition of DOM. Results indicate that most DOM in streams is of terrestrial origin. However, 120 molecular formulae were detected only in streams and not in any of the soil samples. These compounds share molecular formulae with peptides, unsaturated aliphatics and saturated FA-CHO/FA-CHOX. Compounds found only in soil samples are much more aromatic, have more double bonds and a much lower H/C ratio but higher oxygen content, which indicates the availability of fresh plant material and less microbially processed material compared to stream samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. [Non-destructive detection research for hollow heart of potato based on semi-transmission hyperspectral imaging and SVM].

    PubMed

    Huang, Tao; Li, Xiao-yu; Xu, Meng-ling; Jin, Rui; Ku, Jing; Xu, Sen-miao; Wu, Zhen-zhong

    2015-01-01

    The quality of potatoes is directly related to their edible value and industrial value. Hollow heart of potato, a physiological disease occurring inside the tuber, is difficult to detect. This paper puts forward a non-destructive detection method using semi-transmission hyperspectral imaging with a support vector machine (SVM) to detect hollow heart of potato. Compared to reflection and transmission hyperspectral imaging, semi-transmission hyperspectral imaging can produce clearer images that contain the internal quality information of agricultural products. In this study, 224 potato samples (149 normal samples and 75 hollow samples) were selected as the research object, and a semi-transmission hyperspectral image acquisition system was constructed to acquire the hyperspectral images (390-1 040 nm) of the potato samples; the average spectra of the regions of interest were then extracted for spectral characteristics analysis. Normalization was used to preprocess the original spectra, and a prediction model was developed based on SVM using all wave bands; the recognition rate on the test set was only 87.5%. In order to simplify the model, the competitive adaptive reweighted sampling algorithm (CARS) and the successive projection algorithm (SPA) were utilized to select important variables from all 520 spectral variables, and 8 variables were selected (454, 601, 639, 664, 748, 827, 874 and 936 nm). A recognition rate of 94.64% on the test set was obtained by using the 8 variables to develop the SVM model. Parameter optimization algorithms, including the artificial fish swarm algorithm (AFSA), the genetic algorithm (GA) and the grid search algorithm, were used to optimize the SVM model parameters: the penalty parameter c and the kernel parameter g. After comparative analysis, AFSA, a new bionic optimization algorithm based on the foraging behavior of fish swarms, was shown to give the optimal model parameters (c = 10.6591, g = 0.3497), and the best recognition accuracy was obtained with the AFSA-SVM model. The results indicate that combining semi-transmission hyperspectral imaging technology with CARS-SPA and AFSA-SVM can accurately detect hollow heart of potato, and also provide technical support for rapid non-destructive detection of hollow heart of potato.
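
    The classification pipeline described above (band selection followed by SVM parameter tuning) can be illustrated with a short sketch. This is not the authors' code: a plain grid search over the penalty parameter C and RBF kernel parameter gamma stands in for the AFSA optimisation, and random data stand in for the eight selected wavelengths; all array names and values are hypothetical.

    ```python
    # Illustrative sketch (not the authors' code): SVM classification of potato
    # spectra using a small set of selected wavelengths and a grid search over
    # the penalty parameter C and RBF kernel parameter gamma. AFSA/GA tuning is
    # replaced here by sklearn's GridSearchCV for simplicity.
    import numpy as np
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Hypothetical data: 224 samples x 8 selected spectral variables,
    # labels 0 = normal, 1 = hollow heart.
    X = rng.normal(size=(224, 8))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=224) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)

    pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    param_grid = {"svc__C": 10.0 ** np.arange(-1, 4),
                  "svc__gamma": 10.0 ** np.arange(-3, 1)}
    search = GridSearchCV(pipe, param_grid, cv=5)
    search.fit(X_train, y_train)

    print("best parameters:", search.best_params_)
    print("test-set recognition rate:", search.score(X_test, y_test))
    ```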

  7. Spectral Line Parameters Including Temperature Dependences of Self- and Air-Broadening in the 2 (left arrow) 0 Band of CO at 2.3 micrometers

    NASA Technical Reports Server (NTRS)

    Devi, V. Malathy; Benner, D. Chris; Smith, M. A. H.; Mantz, A. W.; Sung, K.; Brown, L. R.; Predoi-Cross, A.

    2012-01-01

    Temperature dependences of pressure-broadened half-width and pressure-induced shift coefficients, along with accurate positions and intensities, have been determined for transitions in the 2<--0 band of C-12 O-16 from analyzing high-resolution and high signal-to-noise spectra recorded with two different Fourier transform spectrometers. A total of 28 spectra, 16 self-broadened and 12 air-broadened, recorded using high-purity (greater than or equal to 99.5% C-12-enriched) CO samples and CO diluted with dry air (research grade) at different temperatures and pressures, were analyzed simultaneously to maximize the accuracy of the retrieved parameters. The sample temperatures ranged from 150 to 298 K and the total pressures varied between 5 and 700 Torr. A multispectrum nonlinear least squares spectrum fitting technique was used to adjust the rovibrational constants (G, B, D, etc.) and intensity parameters (including Herman-Wallis coefficients), rather than determining individual line positions and intensities. Self- and air-broadened Lorentz half-width coefficients, their temperature dependence exponents, self- and air-pressure-induced shift coefficients, their temperature dependences, self- and air-line mixing coefficients, their temperature dependences, and speed dependence have been retrieved from the analysis. Speed-dependent line shapes with line mixing employing an off-diagonal relaxation matrix element formalism were needed to minimize the fit residuals. This study presents a precise and complete set of spectral line parameters that consistently reproduce the spectrum of carbon monoxide over terrestrial atmospheric conditions.

  8. STRUCTURAL PARAMETERS FOR 10 HALO GLOBULAR CLUSTERS IN M33

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jun, E-mail: majun@nao.cas.cn

    2015-05-15

    In this paper, we present the properties of 10 halo globular clusters (GCs) with luminosities L ≃ 5–7 × 10^5 L_⊙ in the Local Group galaxy M33 using images from the Hubble Space Telescope WFPC2 in the F555W and F814W bands. We obtained the ellipticities, position angles, and surface brightness profiles for each GC. In general, the ellipticities of the M33 sample clusters are similar to those of the M31 clusters. The structural and dynamical parameters are derived by fitting the profiles to three different models combined with mass-to-light ratios (M/L values) from population-synthesis models. The structural parameters include core radii, concentration, half-light radii, and central surface brightness. The dynamical parameters include the integrated cluster mass, integrated binding energy, central surface mass density, and predicted line-of-sight velocity dispersion at the cluster center. The velocity dispersions of the four clusters predicted here agree well with the dispersions observed by Larsen et al. The results show that the majority of the sample halo GCs are better fitted by both the King model and the Wilson model than by the Sérsic model. In general, the properties of the clusters in M33, M31, and the Milky Way fall in the same regions of parameter space. The tight correlations of cluster properties indicate a “fundamental plane” for clusters, which reflects some universal physical conditions and processes operating at the epoch of cluster formation.

  9. Orbit/attitude estimation with LANDSAT Landmark data

    NASA Technical Reports Server (NTRS)

    Hall, D. L.; Waligora, S.

    1979-01-01

    The use of LANDSAT landmark data for orbit/attitude and camera bias estimation was studied. The preliminary results of these investigations are presented. The Goddard Trajectory Determination System (GTDS) error analysis capability was used to perform error analysis studies. A number of questions were addressed, including parameter observability and sensitivity, and the effects on the solve-for parameter errors of data span, density, and distribution, and of a priori covariance weighting. The use of the GTDS differential correction capability with actual landmark data was examined. The rms line and element observation residuals were studied as a function of the solve-for parameter set, a priori covariance weighting, force model, attitude model, and data characteristics. Sample results are presented. Finally, verification and preliminary system evaluation of the LANDSAT NAVPAK system for sequential (extended Kalman filter) estimation of orbit, attitude, and camera bias parameters are given.

  10. Comparison of sampling techniques for Bayesian parameter estimation

    NASA Astrophysics Data System (ADS)

    Allison, Rupert; Dunkley, Joanna

    2014-02-01

    The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
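
    As a concrete illustration of the simplest of the three prescriptions, the sketch below runs a random-walk Metropolis-Hastings sampler on a two-dimensional Gaussian toy likelihood; the step size, chain length and burn-in are arbitrary choices for illustration, not values from the paper.

    ```python
    # Minimal random-walk Metropolis-Hastings sampler on a toy 2D Gaussian
    # log-posterior (illustrative only; step size and chain length are arbitrary).
    import numpy as np

    def log_post(theta):
        # Independent standard-normal "posterior" in two parameters.
        return -0.5 * np.sum(theta ** 2)

    rng = np.random.default_rng(1)
    n_steps, step = 20000, 0.5
    chain = np.empty((n_steps, 2))
    theta = np.zeros(2)
    lp = log_post(theta)
    accepted = 0
    for i in range(n_steps):
        proposal = theta + step * rng.normal(size=2)   # symmetric proposal
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:       # accept/reject step
            theta, lp = proposal, lp_prop
            accepted += 1
        chain[i] = theta

    print("acceptance fraction:", accepted / n_steps)
    print("posterior mean estimate:", chain[5000:].mean(axis=0))  # discard burn-in
    ```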

  11. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
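
    The effect described above can be reproduced qualitatively with a small simulation: generate concentrations from a one-compartment (monoexponential) model under repeated dosing, perturb the reported dose times, and refit the model to sparse samples. The dose, parameter values and error magnitudes below are hypothetical, chosen only so that the dosing interval is shorter than the terminal half-life; this is a sketch of the idea, not the authors' simulation setup.

    ```python
    # Illustrative simulation (hypothetical values): bias in one-compartment
    # parameter estimates when reported dose times are in error.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(2)
    CL_true, V_true, dose, tau = 5.0, 100.0, 100.0, 12.0  # L/h, L, mg, h
    # elimination rate k = CL/V = 0.05 /h, so t1/2 = 13.9 h > dosing interval of 12 h

    def conc(t_obs, CL, V, dose_times):
        """Superposition of IV bolus doses for a one-compartment model."""
        k = CL / V
        dt = t_obs[:, None] - dose_times[None, :]
        dt = np.where(dt >= 0.0, dt, np.inf)             # only past doses contribute
        return (dose / V) * np.exp(-k * dt).sum(axis=1)

    true_doses = tau * np.arange(10)                     # 10 doses, every 12 h
    t_obs = np.array([100.0, 106.0, 112.0])              # sparse sampling times (h)
    y = conc(t_obs, CL_true, V_true, true_doses) * np.exp(rng.normal(0, 0.1, t_obs.size))

    # Reported dose times carry random errors of up to +/- 2 h.
    reported = true_doses + rng.uniform(-2.0, 2.0, true_doses.size)

    fit = lambda t, CL, V: conc(t, CL, V, reported)      # fit using the *reported* times
    (CL_hat, V_hat), _ = curve_fit(fit, t_obs, y, p0=[4.0, 80.0])
    print(f"true CL={CL_true}, V={V_true};  estimated CL={CL_hat:.2f}, V={V_hat:.2f}")
    ```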

  12. Adaptable structural synthesis using advanced analysis and optimization coupled by a computer operating system

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Bhat, R. B.

    1979-01-01

    A finite element program is linked with a general purpose optimization program in a 'programming system' which includes user-supplied codes that contain problem-dependent formulations of the design variables, objective function and constraints. The result is a system adaptable to a wide spectrum of structural optimization problems. In a sample of numerical examples, the design variables are the cross-sectional dimensions and the parameters of overall shape geometry, constraints are applied to stresses, displacements, buckling and vibration characteristics, and structural mass is the objective function. Thin-walled, built-up structures and frameworks are included in the sample. Details of the system organization and characteristics of the component programs are given.

  13. VizieR Online Data Catalog: The ESO DIBs Large Exploration Survey (Cox+, 2017)

    NASA Astrophysics Data System (ADS)

    Cox, N. L. J.; Cami, J.; Farhang, A.; Smoker, J.; Monreal-Ibero, A.; Lallement, R.; Sarre, P. J.; Marshall, C. C. M.; Smith, K. T.; Evans, C. J.; Royer, P.; Linnartz, H.; Cordiner, M. A.; Joblin, C.; van Loon, J. T.; Foing, B. H.; Bhatt, N. H.; Bron, E.; Elyajouri, M.; de Koter, A.; Ehrenfreund, P.; Javadi, A.; Kaper, L.; Khosroshadi, H. G.; Laverick, M.; Le Petit, F.; Mulas, G.; Roueff, E.; Salama, F.; Spaans, M.

    2018-01-01

    We constructed a statistically representative survey sample that probes a wide range of interstellar environment parameters, including reddening E(B-V), visual extinction AV, total-to-selective extinction ratio RV, and molecular hydrogen fraction fH2. EDIBLES provides the community with optical (~305-1042nm) spectra at high spectral resolution (R~70000 in the blue arm and 100000 in the red arm) and high signal-to-noise (S/N; median value ~500-1000) for a statistically significant sample of interstellar sightlines. Many of the >100 sightlines included in the survey already have auxiliary ultraviolet, infrared and/or polarisation data available on the dust and gas components. (2 data files).

  14. Measurement of sample temperatures under magic-angle spinning from the chemical shift and spin-lattice relaxation rate of 79Br in KBr powder

    PubMed Central

    Thurber, Kent R.; Tycko, Robert

    2009-01-01

    Accurate determination of sample temperatures in solid state nuclear magnetic resonance (NMR) with magic-angle spinning (MAS) can be problematic, particularly because frictional heating and heating by radio-frequency irradiation can make the internal sample temperature significantly different from the temperature outside the MAS rotor. This paper demonstrates the use of 79Br chemical shifts and spin-lattice relaxation rates in KBr powder as temperature-dependent parameters for the determination of internal sample temperatures. Advantages of this method include high signal-to-noise, proximity of the 79Br NMR frequency to that of 13C, applicability from 20 K to 320 K or higher, and simultaneity with adjustment of the MAS axis direction. We show that spin-lattice relaxation in KBr is driven by a quadrupolar mechanism. We demonstrate a simple approach to including KBr powder in hydrated samples, such as biological membrane samples, hydrated amyloid fibrils, and hydrated microcrystalline proteins, that allows direct assessment of the effects of frictional and radio-frequency heating under experimentally relevant conditions. PMID:18930418

  15. Measurement of sample temperatures under magic-angle spinning from the chemical shift and spin-lattice relaxation rate of 79Br in KBr powder.

    PubMed

    Thurber, Kent R; Tycko, Robert

    2009-01-01

    Accurate determination of sample temperatures in solid state nuclear magnetic resonance (NMR) with magic-angle spinning (MAS) can be problematic, particularly because frictional heating and heating by radio-frequency irradiation can make the internal sample temperature significantly different from the temperature outside the MAS rotor. This paper demonstrates the use of (79)Br chemical shifts and spin-lattice relaxation rates in KBr powder as temperature-dependent parameters for the determination of internal sample temperatures. Advantages of this method include high signal-to-noise, proximity of the (79)Br NMR frequency to that of (13)C, applicability from 20 K to 320 K or higher, and simultaneity with adjustment of the MAS axis direction. We show that spin-lattice relaxation in KBr is driven by a quadrupolar mechanism. We demonstrate a simple approach to including KBr powder in hydrated samples, such as biological membrane samples, hydrated amyloid fibrils, and hydrated microcrystalline proteins, that allows direct assessment of the effects of frictional and radio-frequency heating under experimentally relevant conditions.

  16. Applications of DART-MS for food quality and safety assurance in food supply chain.

    PubMed

    Guo, Tianyang; Yong, Wei; Jin, Yong; Zhang, Liya; Liu, Jiahui; Wang, Sai; Chen, Qilong; Dong, Yiyang; Su, Haijia; Tan, Tianwei

    2017-03-01

    Direct analysis in real time (DART) represents a new generation of ion source that is used for the rapid ionization of small molecules under ambient conditions. The combination of DART with various mass spectrometers allows analysis of multiple food samples with simple or no sample treatment, or in conjunction with prevailing protocolized sample preparation methods. Abundant applications of DART-MS are reviewed in this paper. The DART-MS strategy applied to the food supply chain (FSC), including production, processing, and storage and transportation, provides a comprehensive solution to various food components, contaminants, authenticity, and traceability issues. Additionally, typical applications available in food analysis by other ambient ionization mass spectrometers are summarized, and fundamentals, mainly including mechanisms, devices, and parameters, are discussed as well. © 2015 Wiley Periodicals, Inc. Mass Spec Rev. 36:161-187, 2017.

  17. Growth reference for Saudi preschool children: LMS parameters and percentiles.

    PubMed

    Shaik, Shaffi Ahamed; El Mouzan, Mohammad Issa; AlSalloum, Abdullah Abdulmohsin; AlHerbish, Abdullah Sulaiman

    2016-01-01

    Previous growth charts for Saudi children have not included the detailed tables and parameters needed for research and for incorporation into electronic records. The objective of this report is to publish the L, M, and S parameters and percentiles, as well as the corresponding growth charts, for Saudi preschool children. The design was a community-based survey and measurement of growth parameters in a sample selected by a multistage probability procedure from a stratified listing of the Saudi population. Raw data from the previous nationally representative sample were reanalyzed using the Lambda-Mu-Sigma (LMS) methodology to calculate the L, M, and S parameters and percentiles (from 3rd to 97th) for weight, length/height, head circumference, body mass index-for-age, and weight-for-length/height for boys and girls from birth to 60 months. The outcome measures were the length or height and weight of Saudi preschool children. There were 15601 Saudi children younger than 60 months of age, of whom 7896 (50.6%) were boys. The LMS parameters for weight-for-age from birth to 60 months (5 years) are reported for the 3rd, 5th, 10th, 25th, 50th, 75th, 90th, 95th, and 97th percentiles, together with the corresponding graphs. Similarly, the LMS parameters for length/height-for-age, head circumference-for-age, weight-for-length/height and body mass index-for-age (BMI) are shown with the corresponding graphs for boys and girls. Using the data in this report, clinicians and researchers can assess the growth of Saudi preschool children. The report does not reflect interregional variations in growth.
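
    For reference, the LMS method converts a measurement into a z-score, and a chosen percentile back into a measurement, via Cole's transformation. The sketch below implements both directions with hypothetical L, M, S values for a single age point; these are illustrative numbers, not the published Saudi reference parameters.

    ```python
    # Cole's LMS transformation (illustrative; the L, M, S values below are
    # hypothetical, not the published Saudi reference parameters).
    import numpy as np
    from scipy.stats import norm

    def lms_zscore(y, L, M, S):
        """z-score of measurement y given the LMS parameters at that age."""
        if abs(L) > 1e-8:
            return ((y / M) ** L - 1.0) / (L * S)
        return np.log(y / M) / S

    def lms_centile(p, L, M, S):
        """Measurement corresponding to percentile p (0-100)."""
        z = norm.ppf(p / 100.0)
        if abs(L) > 1e-8:
            return M * (1.0 + L * S * z) ** (1.0 / L)
        return M * np.exp(S * z)

    L, M, S = -0.35, 12.5, 0.11   # hypothetical weight-for-age parameters (kg)
    print("z-score of 14.0 kg:", round(lms_zscore(14.0, L, M, S), 2))
    for p in (3, 50, 97):
        print(f"{p}th percentile:", round(lms_centile(p, L, M, S), 2), "kg")
    ```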

  18. Modeling the shape and composition of the human body using dual energy X-ray absorptiometry images

    PubMed Central

    Shepherd, John A.; Fan, Bo; Schwartz, Ann V.; Cawthon, Peggy; Cummings, Steven R.; Kritchevsky, Stephen; Nevitt, Michael; Santanasto, Adam; Cootes, Timothy F.

    2017-01-01

    There is growing evidence that body shape and regional body composition are strong indicators of metabolic health. The purpose of this study was to develop statistical models that accurately describe holistic body shape, thickness, and leanness. We hypothesized that there are unique body shape features that are predictive of mortality beyond standard clinical measures. We developed algorithms to process whole-body dual-energy X-ray absorptiometry (DXA) scans into body thickness and leanness images. We performed statistical appearance modeling (SAM) and principal component analysis (PCA) to efficiently encode the variance of body shape, leanness, and thickness across a sample of 400 older Americans from the Health ABC study. The sample included 200 cases and 200 controls based on 6-year mortality status, matched on sex, race and BMI. The final model contained 52 points outlining the torso, upper arms, thighs, and bony landmarks. Correlation analyses were performed on the PCA parameters to identify body shape features that vary across groups and with metabolic risk. Stepwise logistic regression was performed to identify sex and race, and to predict mortality risk, as a function of body shape parameters. These parameters are novel body composition features that uniquely identify the body phenotypes of different groups and predict mortality risk. Three parameters from a SAM of body leanness and thickness accurately identified sex (training AUC = 0.99) and six accurately identified race (training AUC = 0.91) in the sample dataset. Three parameters from a SAM of only body thickness predicted mortality (training AUC = 0.66, validation AUC = 0.62). Further study is warranted to identify specific shape/composition features that predict other health outcomes. PMID:28423041
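
    The modelling chain described above, encode the images with PCA and feed a few principal-component scores into a logistic regression, can be sketched as follows; the data are random stand-ins for the thickness/leanness images and mortality labels, so the numbers mean nothing beyond illustrating the mechanics.

    ```python
    # Illustrative sketch of the statistical appearance modelling chain:
    # PCA compression of (synthetic) body-thickness images followed by
    # logistic regression on a few component scores. Data are random stand-ins.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n, n_pixels = 400, 1024                 # 400 subjects, flattened 32x32 "images"
    images = rng.normal(size=(n, n_pixels))
    labels = rng.integers(0, 2, size=n)     # 200 cases / 200 controls (synthetic)

    pca = PCA(n_components=6)               # keep a handful of shape/thickness modes
    scores = pca.fit_transform(images)

    clf = LogisticRegression(max_iter=1000)
    auc = cross_val_score(clf, scores, labels, cv=5, scoring="roc_auc")
    print("explained variance of kept modes:", pca.explained_variance_ratio_.round(3))
    print("cross-validated AUC:", auc.mean().round(2))
    ```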

  19. The ionization parameter of star-forming galaxies evolves with the specific star formation rate

    NASA Astrophysics Data System (ADS)

    Kaasinen, Melanie; Kewley, Lisa; Bian, Fuyan; Groves, Brent; Kashino, Daichi; Silverman, John; Kartaltepe, Jeyhan

    2018-07-01

    We investigate the evolution of the ionization parameter of star-forming galaxies using a high-redshift (z ~ 1.5) sample from the FMOS-COSMOS (Fibre Multi-Object Spectrograph-COSMic evOlution Survey) and matched low-redshift samples from the Sloan Digital Sky Survey. By constructing samples of low-redshift galaxies for which the stellar mass (M*), star formation rate (SFR), and specific star formation rate (sSFR) are matched to the high-redshift sample, we remove the effects of an evolution in these properties. We also account for the effect of metallicity by jointly constraining the metallicity and ionization parameter of each sample. We find an evolution in the ionization parameter for main-sequence, star-forming galaxies and show that this evolution is driven by the evolution of sSFR. By analysing the matched samples as well as a larger sample of z < 0.3 star-forming galaxies, we show that high ionization parameters are directly linked to high sSFRs and are not simply the by-product of an evolution in metallicity. Our results are physically consistent with the definition of the ionization parameter, a measure of the hydrogen-ionizing photon flux relative to the number density of hydrogen atoms.

  20. Effect of intravenous sodium salicylate administration prior to castration on plasma cortisol and electroencephalography parameters in calves.

    PubMed

    Bergamasco, L; Coetzee, J F; Gehring, R; Murray, L; Song, T; Mosher, R A

    2011-12-01

    Nociception is an unavoidable consequence of many routine management procedures, such as castration, in cattle. This study investigated electroencephalography (EEG) parameters and cortisol levels in calves receiving intravenous sodium salicylate in response to a castration model. Twelve Holstein calves were randomly assigned to the following groups: (i) castrated, untreated controls, and (ii) 50 mg/kg sodium salicylate IV precastration. Calves were blood sampled at 0, 5, 10, 20, 30, 45, 60, 90, 120, 150, 180, 240, 360, and 480 min postcastration. The EEG recording included baseline, castration, immediate recovery (0-5 min after castration), middle recovery (5-10 min after castration), and late recovery (10-20 min after castration) periods. Samples were analyzed by competitive chemiluminescent immunoassay and fluorescence polarization immunoassay for cortisol and salicylate, respectively. EEG visual inspection and spectral analysis were performed. Statistical analyses included repeated-measures ANOVA and correlations between response variables. No treatment effect was noted between the two groups for cortisol and EEG measurements, namely an attenuation of the acute cortisol response and EEG desynchronization in the sodium salicylate group. Time effects were noted for the EEG measurements and for cortisol and salicylate levels. Significant correlations between cortisol and EEG parameters were noted. These findings have implications for designing effective analgesic regimens, and they suggest that EEG can be useful to monitor pain attributable to castration. © 2011 Blackwell Publishing Ltd.

  1. Comparison of two blood sampling techniques for the determination of coagulation parameters in the horse: Jugular venipuncture and indwelling intravenous catheter.

    PubMed

    Mackenzie, C J; McGowan, C M; Pinchbeck, G; Carslake, H B

    2018-05-01

    Evaluation of coagulation status is an important component of critical care. Ongoing monitoring of coagulation status in hospitalised horses has previously been performed via serial venipuncture due to concerns that sampling directly from the intravenous catheter (IVC) may alter the accuracy of the results. Adverse effects such as patient anxiety and trauma to the sampled vessel could be avoided by the use of an indwelling IVC for repeat blood sampling. The objective of this prospective observational study was to compare coagulation parameters from blood obtained by jugular venipuncture with IVC sampling in critically ill horses. A single set of paired blood samples was obtained from horses (n = 55) admitted to an intensive care unit by direct jugular venipuncture and, following removal of a presample, via an indwelling IVC. The following coagulation parameters were measured on venipuncture and IVC samples: whole blood prothrombin time (PT), fresh plasma PT and activated partial thromboplastin time (aPTT), and stored plasma antithrombin activity (AT) and fibrinogen concentration. D-dimer concentration was also measured in some horses (n = 22). Comparison of venipuncture and IVC results was performed using Lin's concordance correlation coefficient. Agreement between paired results was assessed using Bland-Altman analysis. Correlation was substantial and agreement was good between sampling methods for all parameters except AT and D-dimers. Each coagulation parameter was tested using only one assay. Sampling was limited to a convenience sample, and the timing of sample collection was not standardised in relation to when the catheter was flushed with heparinised saline. With the exception of AT and D-dimers, coagulation parameters measured on blood samples obtained via an IVC have clinically equivalent values to those obtained by jugular venipuncture. © 2017 EVJ Ltd.
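
    The two agreement statistics used here are straightforward to compute directly; the sketch below implements Lin's concordance correlation coefficient and the Bland-Altman bias with 95% limits of agreement for a pair of hypothetical paired measurements (the simulated values are illustrative, not the study data).

    ```python
    # Lin's concordance correlation coefficient and Bland-Altman limits of
    # agreement for paired measurements (illustrative data, not the study data).
    import numpy as np

    def lins_ccc(x, y):
        """Concordance correlation coefficient between two measurement methods."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        return 2.0 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

    def bland_altman(x, y):
        """Mean difference (bias) and 95% limits of agreement."""
        d = np.asarray(x, float) - np.asarray(y, float)
        bias, sd = d.mean(), d.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    rng = np.random.default_rng(4)
    venipuncture = rng.normal(15.0, 1.5, 55)            # e.g. PT in seconds (hypothetical)
    catheter = venipuncture + rng.normal(0.1, 0.4, 55)  # IVC samples, small offset

    print("Lin's CCC:", round(lins_ccc(venipuncture, catheter), 3))
    bias, loa = bland_altman(venipuncture, catheter)
    print(f"bias = {bias:.2f}, 95% limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
    ```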

  2. Implementing reduced-risk integrated pest management in fresh-market cabbage: influence of sampling parameters, and validation of binomial sequential sampling plans for the cabbage looper (Lepidoptera Noctuidae).

    PubMed

    Burkness, Eric C; Hutchison, W D

    2009-10-01

    Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for the upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
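
    For presence/absence counts, Wald's sequential probability ratio test reduces to a pair of straight-line stop boundaries on the cumulative number of infested plants. The sketch below computes those boundaries for the parameter values quoted above (lower boundary p0 = 0.05, upper boundary p1 = 0.15, alpha = beta = 0.1); it is a generic SPRT illustration, not the authors' resampling software.

    ```python
    # Wald SPRT stop boundaries for binomial (presence/absence) sequential
    # sampling, using the parameter values quoted in the abstract.
    import numpy as np

    p0, p1 = 0.05, 0.15          # lower and upper decision boundaries (proportion infested)
    alpha = beta = 0.10          # sequential error rates

    logA = np.log((1 - beta) / alpha)        # "treat" (reject H0) threshold on the LLR
    logB = np.log(beta / (1 - alpha))        # "do not treat" (accept H0) threshold
    slope_num = np.log((1 - p0) / (1 - p1))
    slope_den = np.log(p1 / p0) + np.log((1 - p0) / (1 - p1))

    for n in (10, 20, 30, 40, 50):           # number of plants inspected so far
        lower = (logB + n * slope_num) / slope_den   # stop, "below threshold", if count <= lower
        upper = (logA + n * slope_num) / slope_den   # stop, "treat", if count >= upper
        print(f"n={n:2d}: continue sampling while {lower:5.2f} < infested count < {upper:5.2f}")
    ```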

  3. Uncertainty quantification and risk analyses of CO2 leakage in heterogeneous geological formations

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Murray, C. J.; Rockhold, M. L.

    2012-12-01

    A stochastic sensitivity analysis framework is adopted to evaluate the impact of spatial heterogeneity in permeability on CO2 leakage risk. Leakage is defined as the total mass of CO2 moving into the overburden through the caprock-overburden interface, in both gaseous and liquid (dissolved) phases. The entropy-based framework has the ability to quantify the uncertainty associated with the input parameters in the form of prior pdfs (probability density functions). Effective sampling of the prior pdfs enables us to fully explore the parameter space and systematically evaluate the individual and combined effects of the parameters of interest on CO2 leakage risk. The parameters considered in the study include the mean, variance, and horizontal-to-vertical spatial anisotropy ratio for caprock permeability, and the same parameters for reservoir permeability. Given the sampled spatial variogram parameters, multiple realizations of permeability fields were generated using GSLIB subroutines. For each permeability field, a numerical simulator, STOMP (in the water-salt-CO2-energy operational mode), is used to simulate the CO2 migration within the reservoir and caprock up to 50 years after injection. Due to the intensive computational demand, we ran both the scalable simulator eSTOMP and the serial STOMP code on various supercomputers. We then performed statistical analyses and summarized the relationships between the parameters of interest (mean/variance/anisotropy ratio of caprock and reservoir permeability) and the CO2 leakage ratio. We also present the effects of those parameters on CO2 plume radius and reservoir injectivity. The statistical analysis provides a reduced-order model that can be used to estimate the impact of heterogeneity on caprock leakage.

  4. Optimal design of monitoring networks for multiple groundwater quality parameters using a Kalman filter: application to the Irapuato-Valle aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J

    2016-01-01

    A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions and a higher weight to priority zones. The objective of the monitoring network in this specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, it was sought to obtain a low-uncertainty estimate of these parameters for the entire aquifer, with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with the highest priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs by avoiding redundancy in data acquisition.
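
    The core selection step of such a design, adding one at a time the sampling location that most reduces the total estimate error variance via a Kalman measurement update, can be sketched as follows. The grid, covariance model and noise level are hypothetical placeholders, not the geostatistical model of the Irapuato-Valle aquifer, and the greedy rule below is one simple heuristic rather than the authors' exact optimization method.

    ```python
    # Greedy monitoring-network selection sketch: at each step, add the candidate
    # well whose measurement most reduces the total estimation error variance,
    # using the Kalman filter covariance update. The covariance model is a
    # hypothetical exponential variogram on a small 1D transect.
    import numpy as np

    x = np.linspace(0.0, 10.0, 60)                       # estimation locations (km)
    P = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)   # prior covariance (exponential)
    R = 0.05                                             # measurement error variance
    candidates = list(range(0, 60, 3))                   # candidate well positions
    selected = []

    for _ in range(5):                                   # pick 5 wells
        best, best_var, best_P = None, np.inf, None
        for j in candidates:
            h = P[:, j]                                  # cross-covariance with location j
            P_new = P - np.outer(h, h) / (P[j, j] + R)   # Kalman update of the covariance
            if np.trace(P_new) < best_var:
                best, best_var, best_P = j, np.trace(P_new), P_new
        selected.append(best)
        candidates.remove(best)
        P = best_P
        print(f"added well at x={x[best]:.2f} km, total error variance={best_var:.2f}")
    ```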

  5. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
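
    A minimal sketch of the idea, under the interpretation given in the abstract (propagate the empirical distribution one step forward with the candidate transition matrix and score the observed configurations against the propagated distribution, which is equivalent to minimising the relative entropy mentioned above): a two-state Markov chain in which one switching probability is known and the other is inferred from independent steady-state snapshots. The parametrisation and data are hypothetical, not a system from the paper.

    ```python
    # Minimal propagator-likelihood sketch (illustrative, not the authors' code):
    # two-state Markov chain, b = P(1->0) known, a = P(0->1) inferred from
    # independent steady-state snapshots.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(6)
    a_true, b_known = 0.2, 0.4
    pi_true = np.array([b_known, a_true]) / (a_true + b_known)   # exact steady state
    samples = rng.choice(2, size=5000, p=pi_true)                # independent snapshots
    counts = np.bincount(samples, minlength=2)

    def neg_propagator_loglik(a):
        T = np.array([[1 - a, a], [b_known, 1 - b_known]])  # row-stochastic propagator
        q = (counts / counts.sum()) @ T                     # propagate empirical distribution
        return -np.sum(counts * np.log(q))                  # score samples against q

    res = minimize_scalar(neg_propagator_loglik, bounds=(1e-3, 1 - 1e-3), method="bounded")
    print("true a:", a_true, " estimated a:", round(res.x, 3))
    ```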

  6. A Principle Component Analysis of Galaxy Properties from a Large, Gas-Selected Sample

    DOE PAGES

    Chang, Yu-Yen; Chao, Rikon; Wang, Wei-Hao; ...

    2012-01-01

    Disney et al. (2008) found a striking correlation among global parameters of H I-selected galaxies and concluded that this is in conflict with the CDM model. Considering the importance of the issue, we reinvestigate the problem using principal component analysis on a fivefold larger sample and additional near-infrared data. We use databases from the Arecibo Legacy Fast ALFA (Arecibo L-band Feed Array) Survey for the gas properties, the Sloan Digital Sky Survey for the optical properties, and the Two Micron All Sky Survey for the near-infrared properties. We confirm that the parameters are indeed correlated, with a single physical parameter able to explain 83% of the variations. When color (g - i) is included, the first component still dominates but a second principal component develops. In addition, the near-infrared color (i - J) shows an obvious second principal component that might provide evidence of complex old star formation. Based on our data, we suggest that it is premature to pronounce the failure of the CDM model, and this motivates more theoretical work.

  7. Influence of combined pretreatments on color parameters during convective drying of Mirabelle plum ( Prunus domestica subsp. syriaca)

    NASA Astrophysics Data System (ADS)

    Dehghannya, Jalal; Gorbani, Rasoul; Ghanbarzadeh, Babak

    2017-07-01

    Discoloration and browning are caused primarily by various reactions, including Maillard condensation of hexoses and amino components, phenol polymerization and pigment destruction. Convective drying can be combined with various pretreatments to help reduce undesired color changes and improve the color parameters of dried products. In this study, the effects of ultrasound-assisted osmotic dehydration as a pretreatment before convective drying on the color parameters of Mirabelle plum were investigated. Variations of L* (lightness), a* (redness/greenness), b* (yellowness/blueness), total color change (ΔE), chroma, hue angle and browning index values are presented versus drying time during convective drying of control and pretreated Mirabelle plums, as influenced by ultrasonication time, osmotic solution concentration and immersion time in the osmotic solution. Samples pretreated with ultrasound for 30 min at an osmotic solution concentration of 70% had the most desirable color among all pretreated samples, with the closest L*, a* and b* values to the fresh fruit, showing that ultrasound and osmotic dehydration are beneficial to the color of the final products after drying.
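
    The derived colour parameters used above follow directly from the measured L*, a*, b* values. The sketch below computes total colour change ΔE, chroma, hue angle and a browning index for a hypothetical fresh/dried pair; the browning index formula is a commonly used definition from the drying literature and is not necessarily the exact expression used by the authors.

    ```python
    # Derived colour parameters from CIELAB readings (hypothetical values).
    # The browning index formula below is one commonly used definition and may
    # differ from the authors' exact expression.
    import math

    def colour_parameters(L, a, b, L0, a0, b0):
        dE = math.sqrt((L - L0) ** 2 + (a - a0) ** 2 + (b - b0) ** 2)  # total colour change
        chroma = math.sqrt(a ** 2 + b ** 2)
        hue = math.degrees(math.atan2(b, a))                           # hue angle in degrees
        x = (a + 1.75 * L) / (5.645 * L + a - 3.012 * b)
        browning = 100.0 * (x - 0.31) / 0.172                          # browning index
        return dE, chroma, hue, browning

    fresh = (38.0, 12.0, 18.0)       # hypothetical L*, a*, b* of the fresh plum
    dried = (30.0, 16.0, 14.0)       # hypothetical values after drying
    dE, C, h, BI = colour_parameters(*dried, *fresh)
    print(f"dE={dE:.2f}, chroma={C:.2f}, hue angle={h:.1f} deg, browning index={BI:.1f}")
    ```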

  8. Plasma biochemistry values in emperor geese (Chen canagica) in Alaska: comparisons among age, sex, incubation, and molt.

    USGS Publications Warehouse

    Franson, J. Christian; Hoffman, D.J.; Schmutz, J.A.

    2009-01-01

    Reduced populations of emperor geese (Chen canagica), a Bering Sea endemic, provided the need to assess plasma biochemistry values as indicators of population health. A preliminary step in such an investigation was to evaluate patterns of variability in plasma biochemistry values among age, sex, and reproductive period. Plasma from 63 emperor geese was collected on their breeding grounds on the Yukon-Kuskokwim Delta in western Alaska, USA. The geese sampled included 18 incubating adult females captured on their nests in mid-June using bow nets, and 30 adults and 15 goslings captured in corral traps in late July and early August, when the adults were molting their wing feathers and the goslings were 5-6 weeks old. Plasma was evaluated for 15 biochemical parameters, comparing results among age, sex, and sampling period (incubation versus wing-feather molt). Ten of the 15 biochemical parameters assayed differed among adults during incubation, adults during molt, and goslings at molt, whereas sex differences were noted in only a few parameters.

  9. Investigating the Effect of Cosmic Opacity on Standard Candles

    NASA Astrophysics Data System (ADS)

    Hu, J.; Yu, H.; Wang, F. Y.

    2017-02-01

    Standard candles can probe the evolution of dark energy over a large redshift range, but cosmic opacity can degrade the quality of standard candles. In this paper, we use the latest observations, including Type Ia supernovae (SNe Ia) from the “joint light-curve analysis” sample and Hubble parameters, to probe the opacity of the universe. A joint fit of the SNe Ia light-curve parameters, cosmological parameters, and opacity is used in order to avoid the cosmological dependence of SNe Ia luminosity distances. The latest gamma-ray bursts are used in order to explore the cosmic opacity at high redshifts, and the cosmic reionization process is considered at these redshifts. We find that the sample supports an almost transparent universe for flat ΛCDM and XCDM models. Meanwhile, free electrons deplete photons from standard candles through (inverse) Compton scattering, which is known to be an important component of opacity. This Compton dimming may play an important role in future supernova surveys. From this analysis, we find that a few per cent of the cosmic opacity is caused by Compton dimming in the two models, which can be corrected for.

  10. Protonation of Different Goethite Surfaces - Unified Models for NaNO3 and NaCl Media.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lutzenkirchen, Johannes; Boily, Jean F.; Gunneriusson, Lars

    2008-01-01

    Acid-base titration data for two goethite samples in sodium nitrate and sodium chloride media are discussed. The data are modelled using various surface complexation models in the framework of the MUlti SIte Complexation (MUSIC) model. Various assumptions with respect to the goethite morphology are considered in determining the site density of the surface functional groups. The results from the various model applications do not differ significantly in terms of goodness of fit. More importantly, various published assumptions with respect to the goethite morphology (i.e. the contributions of different crystal planes and their repercussions on the “overall” site densities of the various surface functional groups) do not significantly affect the final model parameters. The simultaneous fit of the chloride and nitrate data results in electrolyte binding constants that are applicable over a wide range of electrolyte concentrations, including mixtures of chloride and nitrate. Model parameters for the high surface area goethite sample are in excellent agreement with parameters that were independently obtained by another group on different goethite titration data sets.

  11. Analysing neutron scattering data using McStas virtual experiments

    NASA Astrophysics Data System (ADS)

    Udby, L.; Willendrup, P. K.; Knudsen, E.; Niedermayer, Ch.; Filges, U.; Christensen, N. B.; Farhi, E.; Wells, B. O.; Lefmann, K.

    2011-04-01

    With the intention of developing a new data analysis method using virtual experiments we have built a detailed virtual model of the cold triple-axis spectrometer RITA-II at PSI, Switzerland, using the McStas neutron ray-tracing package. The parameters characterising the virtual instrument were carefully tuned against real experiments. In the present paper we show that virtual experiments reproduce experimentally observed linewidths within 1-3% for a variety of samples. Furthermore we show that the detailed knowledge of the instrumental resolution found from virtual experiments, including sample mosaicity, can be used for quantitative estimates of linewidth broadening resulting from, e.g., finite domain sizes in single-crystal samples.

  12. 300 Area treated effluent disposal facility sampling schedule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loll, C.M.

    1994-10-11

    This document is the interface between the 300 Area Liquid Effluent Process Engineering (LEPE) group and the Waste Sampling and Characterization Facility (WSCF), concerning process control samples. It contains a schedule for process control samples at the 300 Area TEDF which describes the parameters to be measured, the frequency of sampling and analysis, the sampling point, and the purpose for each parameter.

  13. Bayesian Modal Estimation of the Four-Parameter Item Response Model in Real, Realistic, and Idealized Data Sets.

    PubMed

    Waller, Niels G; Feuerstahler, Leah

    2017-01-01

    In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler item response theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in the mirt package (Chalmers, 2012).
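
    For readers unfamiliar with the 4PM, its item response function adds an upper asymptote to the three-parameter logistic model. A minimal sketch, with hypothetical item parameter values chosen for illustration:

    ```python
    # Four-parameter logistic item response function (illustrative parameter values).
    import numpy as np

    def irf_4pl(theta, a, b, g, u):
        """P(keyed response | theta) with discrimination a, difficulty b,
        lower asymptote g ("guessing") and upper asymptote u ("slipping")."""
        return g + (u - g) / (1.0 + np.exp(-a * (theta - b)))

    theta = np.linspace(-4, 4, 9)
    print(irf_4pl(theta, a=1.5, b=0.0, g=0.10, u=0.95).round(3))
    ```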

  14. Retarding potential analyzer for the Pioneer-Venus Orbiter Mission

    NASA Technical Reports Server (NTRS)

    Knudsen, W. C.; Bakke, J.; Spenner, K.; Novak, V.

    1979-01-01

    The retarding potential analyzer on the Pioneer-Venus Orbiter Mission has been designed to measure most of the thermal plasma parameters within and near the Venusian ionosphere. Parameters include total ion concentration, concentrations of the more abundant ions, ion temperatures, ion drift velocity, electron temperature, and low-energy (0-50 eV) electron distribution function. To accomplish these measurements on a spinning vehicle with a small telemetry bit rate, several functions, including decision functions not previously used in RPA's, have been developed and incorporated into this instrument. The more significant functions include automatic electrometer ranging with background current compensation; digital, quadratic retarding potential step generation for the ion and low-energy electron scans; a current sampling interval of 2 ms throughout all scans; digital logic inflection point detection and data selection; and automatic ram direction detection. Extensive numerical simulation and plasma chamber tests have been conducted to verify adequacy of the design for the Pioneer Mission.

  15. VizieR Online Data Catalog: Optical spectroscopic atlas of MOJAVE AGNs (Torrealba+, 2012)

    NASA Astrophysics Data System (ADS)

    Torrealba, J.; Chavushyan, V.; Cruz-Gonzalez, I.; Arshakian, T. G.; Bertone, E.; Rosa-Gonzalez, D.

    2014-09-01

    The atlas includes spectral parameters for the emission lines Hβ, [OIII] 5007, MgII 2798 and/or CIV 1549 and corresponding data for the continuum, as well as the luminosities and equivalent widths of the FeII UV/optical emission. It also contains homogeneous photometric information in the B-band for 242 sources of the MOJAVE/2cm sample. These data were acquired at 2.1 m Mexican telescopes: the Observatorio Astronomico Nacional in San Pedro Martir (OAN-SPM), B.C., Mexico, and the Observatorio Astronomico Guillermo Haro (OAGH) in Cananea, Sonora, Mexico. They are supplemented with spectroscopic data found in the archives of the Sloan Digital Sky Survey (SDSS) and the Hubble Space Telescope (HST), in the AGN sample of Marziani et al. (2003ApJS..145..199M, Cat. J/ApJS/145/199), and in Lawrence et al. 1996ApJS..107..541L. We present the continuum emission and/or line parameters for 41 sources in the Hβ region, 78 in the MgII region, and 35 in the CIV region. Also, there are 14 sources with information available for both the Hβ and MgII regions, 12 with MgII and CIV, and 5 with Hβ, MgII and CIV. The spectroscopic information for the statistically complete sample MOJAVE-1 (Lister & Homan, 2005AJ....130.1389L, Cat. J/AJ/130/1389) included in the Atlas is as follows: 28 sources in the Hβ region, 46 in the MgII region, and 23 in the CIV region. All the emission line parameters are for the broad component of the line, except for [OIII] 5007. (7 data files).

  16. Trajectory Dispersed Vehicle Process for Space Launch System

    NASA Technical Reports Server (NTRS)

    Statham, Tamara; Thompson, Seth

    2017-01-01

    The Space Launch System (SLS) vehicle is part of NASA's deep space exploration plans, which include manned missions to Mars. Manufacturing uncertainties in design parameters are key considerations throughout SLS development, as they have significant effects on focus parameters such as lift-off thrust-to-weight ratio, vehicle payload, maximum dynamic pressure, and compression loads. This presentation discusses how the SLS program captures these uncertainties by utilizing a 3 degree-of-freedom (DOF) process called Trajectory Dispersed (TD) analysis. This analysis biases nominal trajectories to identify extremes in the design parameters for various potential SLS configurations and missions. The process utilizes a Design of Experiments (DOE) and response surface methodology (RSM) to statistically sample the uncertainties, and develops the resulting vehicles using a Maximum Likelihood Estimate (MLE) process to target the uncertainty biases. These vehicles represent various missions and configurations which are used as key inputs to a variety of analyses in the SLS design process, including 6 DOF dispersions, separation clearances, and engine-out failure studies.

  17. Noise in NC-AFM measurements with significant tip–sample interaction

    PubMed Central

    Lübbe, Jannis; Temmen, Matthias

    2016-01-01

    The frequency shift noise in non-contact atomic force microscopy (NC-AFM) imaging and spectroscopy consists of thermal noise and detection system noise with an additional contribution from amplitude noise if there are significant tip–sample interactions. The total noise power spectral density D_Δf(f_m) is, however, not just the sum of these noise contributions. Instead its magnitude and spectral characteristics are determined by the strongly non-linear tip–sample interaction, by the coupling between the amplitude and tip–sample distance control loops of the NC-AFM system as well as by the characteristics of the phase locked loop (PLL) detector used for frequency demodulation. Here, we measure D_Δf(f_m) for various NC-AFM parameter settings representing realistic measurement conditions and compare experimental data to simulations based on a model of the NC-AFM system that includes the tip–sample interaction. The good agreement between predicted and measured noise spectra confirms that the model covers the relevant noise contributions and interactions. Results yield a general understanding of noise generation and propagation in the NC-AFM and provide a quantitative prediction of noise for given experimental parameters. We derive strategies for noise-optimised imaging and spectroscopy and outline a full optimisation procedure for the instrumentation and control loops. PMID:28144538

  18. A human fecal contamination index for ranking impaired ...

    EPA Pesticide Factsheets

    Human fecal pollution of surface water remains a public health concern worldwide. As a result, there is a growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for recreational water quality risk management. The transition from a research subject to a management tool requires the integration of standardized water sampling, laboratory, and data analysis procedures. In this study, a standardized HF183/BacR287 qPCR method was combined with a water sampling strategy and a Bayesian data algorithm to establish a human fecal contamination index that can be used to rank impaired recreational water sites polluted with human waste. Stability and bias of index predictions were investigated under various parameters including sites with different pollution levels, sampling period time range (1-15 weeks), and number of qPCR replicates per sample (2-14 replicates). Sensitivity analyses were conducted with simulated data sets (100 iterations) seeded with HF183/BacR287 qPCR laboratory measurements from water samples collected from three Southern California sites (588 qPCR measurements). Findings suggest that site ranking is feasible and that all parameters tested influence stability and bias in human fecal contamination index scoring. Trends identified by sensitivity analyses will provide managers with the information needed to design and conduct field studies to rank impaired recreational water sites based

  19. Noise in NC-AFM measurements with significant tip-sample interaction.

    PubMed

    Lübbe, Jannis; Temmen, Matthias; Rahe, Philipp; Reichling, Michael

    2016-01-01

    The frequency shift noise in non-contact atomic force microscopy (NC-AFM) imaging and spectroscopy consists of thermal noise and detection system noise with an additional contribution from amplitude noise if there are significant tip-sample interactions. The total noise power spectral density D_Δf(f_m) is, however, not just the sum of these noise contributions. Instead its magnitude and spectral characteristics are determined by the strongly non-linear tip-sample interaction, by the coupling between the amplitude and tip-sample distance control loops of the NC-AFM system as well as by the characteristics of the phase locked loop (PLL) detector used for frequency demodulation. Here, we measure D_Δf(f_m) for various NC-AFM parameter settings representing realistic measurement conditions and compare experimental data to simulations based on a model of the NC-AFM system that includes the tip-sample interaction. The good agreement between predicted and measured noise spectra confirms that the model covers the relevant noise contributions and interactions. Results yield a general understanding of noise generation and propagation in the NC-AFM and provide a quantitative prediction of noise for given experimental parameters. We derive strategies for noise-optimised imaging and spectroscopy and outline a full optimisation procedure for the instrumentation and control loops.

  20. Dimensions of design space: a decision-theoretic approach to optimal research design.

    PubMed

    Conti, Stefano; Claxton, Karl

    2009-01-01

    Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, sample sizes, and allocations between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
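
    The ENBS logic can be illustrated with a toy calculation. The sketch below is not the authors' zanamivir model: it assumes a simple normal-normal model for the incremental net benefit estimated by a single two-arm trial, with entirely hypothetical values for the prior, the per-patient cost and the beneficiary population, and it searches a grid of sample sizes for the one that maximises ENBS = population-scaled EVSI minus trial cost.

```python
import numpy as np

rng = np.random.default_rng(7)

mu0, sd0 = 200.0, 500.0        # prior mean/sd of incremental net benefit per patient (hypothetical)
sd_ind = 4000.0                # individual-level sd of net benefit (hypothetical)
population = 100_000           # patients affected by the adoption decision (hypothetical)
cost_per_patient = 1500.0      # marginal cost per trial participant (hypothetical)
n_sim = 200_000

def enbs(n_per_arm):
    """Monte Carlo ENBS for a two-arm trial with n_per_arm participants per arm."""
    theta = rng.normal(mu0, sd0, n_sim)              # prior draws of the true mean benefit
    se = sd_ind * np.sqrt(2.0 / n_per_arm)           # standard error of the trial estimate
    xbar = rng.normal(theta, se)                     # simulated trial results
    post_var = 1.0 / (1.0 / sd0**2 + 1.0 / se**2)
    post_mean = post_var * (mu0 / sd0**2 + xbar / se**2)
    evsi = np.mean(np.maximum(post_mean, 0.0)) - max(mu0, 0.0)   # per-patient value of the trial
    return population * evsi - 2 * n_per_arm * cost_per_patient  # net of trial cost

sizes = [50, 100, 200, 400, 800, 1600]
print({n: round(float(enbs(n))) for n in sizes})     # choose the n that maximises ENBS
```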

  1. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach aimed at improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated against Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty in a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) it is more effective and efficient than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is about nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in appropriate ranges rather than being uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced substantially with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
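
    For readers unfamiliar with GLUE itself, the sketch below shows the generic behavioral-thresholding step that any sampler (LHS or ɛ-NSGAII) feeds into. It is not the paper's XAJ/Qing River setup: the two-parameter model, the Nash–Sutcliffe likelihood measure, the 0.7 behavioral threshold and the unweighted prediction bounds are all illustrative simplifications (full GLUE weights the bounds by the likelihoods).

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(theta, t):
    """Hypothetical two-parameter recession-type response (not the XAJ model)."""
    a, b = theta
    return a * np.exp(-b * t)

t = np.linspace(0.0, 10.0, 50)
observed = toy_model((2.0, 0.3), t) + rng.normal(0.0, 0.05, t.size)

# Monte Carlo sampling of the parameter space (stand-in for LHS or ɛ-NSGAII)
n = 5000
samples = np.column_stack([rng.uniform(0.5, 4.0, n),    # parameter a
                           rng.uniform(0.05, 1.0, n)])  # parameter b

def nse(sim, obs):
    """Nash–Sutcliffe efficiency, used here as the informal GLUE likelihood."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

likelihood = np.array([nse(toy_model(th, t), observed) for th in samples])

behavioral = likelihood > 0.7                 # illustrative behavioral threshold
preds = np.array([toy_model(th, t) for th in samples[behavioral]])

# Simple 5-95% prediction bounds from the behavioral runs
# (full GLUE would weight these quantiles by the likelihoods)
lower, upper = np.percentile(preds, [5, 95], axis=0)
print(int(behavioral.sum()), "behavioral sets; mean bound width:", float((upper - lower).mean()))
```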

  2. Studies of the micromorphology of sputtered TiN thin films by autocorrelation techniques

    NASA Astrophysics Data System (ADS)

    Smagoń, Kamil; Stach, Sebastian; Ţălu, Ştefan; Arman, Ali; Achour, Amine; Luna, Carlos; Ghobadi, Nader; Mardani, Mohsen; Hafezi, Fatemeh; Ahmadpourian, Azin; Ganji, Mohsen; Grayeli Korpi, Alireza

    2017-12-01

    Autocorrelation techniques are crucial tools for the study of the micromorphology of surfaces: They provide the description of anisotropic properties and the identification of repeated patterns on the surface, facilitating the comparison of samples. In the present investigation, some fundamental concepts of these techniques including the autocorrelation function and autocorrelation length have been reviewed and applied in the study of titanium nitride thin films by atomic force microscopy (AFM). The studied samples were grown on glass substrates by reactive magnetron sputtering at different substrate temperatures (from 25 °C to 400 °C), and their micromorphology was studied by AFM. The obtained AFM data were analyzed using MountainsMap Premium software obtaining the correlation function, the structure of isotropy and the spatial parameters according to ISO 25178 and EUR 15178N. These studies indicated that the substrate temperature during the deposition process is an important parameter to modify the micromorphology of sputtered TiN thin films and to find optimized surface properties. For instance, the autocorrelation length exhibited a maximum value for the sample prepared at a substrate temperature of 300 °C, and the sample obtained at 400 °C presented a maximum angle of the direction of the surface structure.

  3. Extracting galactic structure parameters from multivariated density estimation

    NASA Technical Reports Server (NTRS)

    Chen, B.; Creze, M.; Robin, A.; Bienayme, O.

    1992-01-01

    Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification) and principal component analysis (a dimensionality reduction method), together with nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure that is otherwise unrecognizable and to place important constraints on the spatial distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a set of n equations (where n is the total number of observed points) in m unknown parameters (where m is the number of predefined groups). A least-squares estimation then allows us to determine the density law of the different groups and components in the Galaxy. The output from our software, which can be used in many research fields, also gives the systematic error between the model and the observation via a Bayes rule.

  4. Vitrification of neat semen alters sperm parameters and DNA integrity.

    PubMed

    Khalili, Mohammad Ali; Adib, Maryam; Halvaei, Iman; Nabi, Ali

    2014-05-06

    Our aim was to evaluate the effect of neat semen vitrification on human sperm vital parameters and DNA integrity in men with normal and abnormal sperm parameters. The semen samples comprised 17 normozoospermic samples and 17 specimens with abnormal sperm parameters. Semen analysis was performed according to World Health Organization (WHO) criteria. A smear was then prepared from each sample and fixed for terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) staining. Vitrification of neat semen was done by plunging cryoloops directly into liquid nitrogen, and the samples were preserved for 7 days. The samples were warmed and re-evaluated for sperm parameters as well as DNA integrity. In addition, the correlation between sperm parameters and DNA fragmentation was assessed pre- and post-vitrification. Cryopreserved spermatozoa showed a significant decrease in sperm motility, viability and normal morphology after thawing in both normal and abnormal semen. The rate of sperm DNA fragmentation was also significantly higher after vitrification compared with fresh samples in the normal (24.76 ± 5.03 vs. 16.41 ± 4.53, P = .002) and abnormal (34.29 ± 10.02 vs. 23.5 ± 8.31, P < .0001) groups, respectively. There was a negative correlation between sperm motility and sperm DNA integrity in both groups after vitrification. Vitrification of neat ejaculates has a negative impact on sperm parameters as well as DNA integrity, particularly among subjects with abnormal semen. It is, therefore, recommended to process semen samples and vitrify the sperm pellets.

  5. FGGE/ERBZ tape specification and shipping letter description

    NASA Technical Reports Server (NTRS)

    Han, D.; Lo, H.

    1983-01-01

    The FGGE/ERBZ tape contains 5 parameters which are extracted and reformatted from the Nimbus-7 ERB Zonal Means Tape. There are three types of files on an FGGE/ERBZ tape, including a tape header file and data files. Physical characteristics, gross format, and file specifications are given. A sample tape check/document printout (shipping letter) is included.

  6. Agents Which Mediate Pulmonary Edema

    DTIC Science & Technology

    1990-12-01

    described in this report has focused on delineating various approaches to understanding mechanisms of pathological changes leading to pulmonary edema. Baseline parameters including hemodynamics, gas exchange and lymph flow were determined for the sheep... mediastinal lymph node is catheterized, permitting frequent sampling of lung lymph. This model permits the monitoring of changes in pulmonary vascular...

  7. Radiation budget and related measurements in 1985 and beyond. [earth radiation budget satellite system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Development of systems for obtaining radiation budget and cloud data is discussed. Instruments for measuring total solar irradiance, total infrared flux, reflected solar flux, and cloud heights and properties are considered. Other topics discussed include sampling by multiple satellites, user identification, and determination of the parameters that need to be measured.

  8. Gender in Adolescent Autonomy: Distinction between Boys and Girls Accelerates at 16 Years of Age

    ERIC Educational Resources Information Center

    Fleming, Manuela

    2005-01-01

    Introduction: Autonomy is a major developmental feature of adolescents. Its success mediates transition into adulthood. It involves a number of psychological parameters, including desire, conflict with parents and actual achievement. Method: How male and female adolescents view autonomy was investigated in a large sample of 12-17 year-old…

  9. In-flight friction and wear mechanism

    NASA Technical Reports Server (NTRS)

    Devine, E. J.; Evans, H. E.

    1975-01-01

    A unique mechanism developed for conducting friction and wear experiments in orbit is described. The device is capable of testing twelve material samples simultaneously. Parameters considered critical include: power, weight, volume, mounting, cleanliness, and thermal designs. The device performed flawlessly in orbit over an eighteen month period and demonstrated the usefulness of this design for future unmanned spacecraft or shuttle applications.

  10. Buy now, saved later? The critical impact of time-to-pandemic uncertainty on pandemic cost-effectiveness analyses.

    PubMed

    Drake, Tom; Chalabi, Zaid; Coker, Richard

    2015-02-01

    Investment in pandemic preparedness is a long-term gamble, with the return on investment coming at an unknown point in the future. Many countries have chosen to stockpile key resources, and the number of pandemic economic evaluations has risen sharply since 2009. We assess the importance of uncertainty in time-to-pandemic (and associated discounting) in pandemic economic evaluation, a factor frequently neglected in the literature to date. We use a probability tree model and Monte Carlo parameter sampling to consider the cost effectiveness of antiviral stockpiling in Cambodia under parameter uncertainty. Mean elasticity and mutual information (MI) are used to assess the importance of time-to-pandemic compared with other parameters. We also consider the sensitivity to the choice of sampling distribution used to model time-to-pandemic uncertainty. Time-to-pandemic and discount rate are the primary drivers of sensitivity and uncertainty in pandemic cost effectiveness models. Base case cost effectiveness of antiviral stockpiling ranged between US$112 and US$3599 per DALY averted using historical pandemic intervals for time-to-pandemic. The mean elasticities for time-to-pandemic and discount rate were greater than those for all other parameters. Similarly, the MI scores for time-to-pandemic and discount rate were greater than those for other parameters. Time-to-pandemic and discount rate were key drivers of uncertainty in cost-effectiveness results regardless of the time-to-pandemic sampling distribution choice. Time-to-pandemic assumptions can substantially affect cost-effectiveness results and, in our model, time-to-pandemic is a greater contributor to uncertainty in cost-effectiveness results than any other parameter. We strongly recommend that cost-effectiveness models include probabilistic analysis of time-to-pandemic uncertainty. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2013; all rights reserved.
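
    A hedged toy version of the core sensitivity mechanism is sketched below: a stockpile cost paid today, a health benefit realised only at the (uncertain) time of the next pandemic, and discounting back to the present. All numbers and distributions are hypothetical placeholders, not values from the Cambodia model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

stockpile_cost = 5.0e6                        # paid today (hypothetical, US$)
dalys_averted_at_pandemic = 20_000.0          # hypothetical benefit when the pandemic occurs
discount_rate = rng.uniform(0.03, 0.05, n)    # sampled discount rate (hypothetical range)
time_to_pandemic = rng.exponential(30.0, n)   # years until the next pandemic (illustrative prior)

# Discount the future health benefit back to the present
discounted_dalys = dalys_averted_at_pandemic / (1.0 + discount_rate) ** time_to_pandemic
cost_per_daly = stockpile_cost / discounted_dalys

print("median cost per DALY averted:", round(float(np.median(cost_per_daly)), 1))
print("5th-95th percentile:", np.round(np.percentile(cost_per_daly, [5, 95]), 1))
```

    Even in this stripped-down form, the spread of the output distribution is dominated by the sampled time-to-pandemic and discount rate, which is the qualitative point the abstract makes.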

  11. Effects of sampling close relatives on some elementary population genetics analyses.

    PubMed

    Wang, Jinliang

    2018-01-01

    Many molecular ecology analyses assume the genotyped individuals are sampled at random from a population and thus are representative of the population. Realistically, however, a sample may contain excessive close relatives (ECR) because, for example, localized juveniles are drawn from fecund species. Our knowledge is limited about how ECR affect the routinely conducted elementary genetics analyses, and how ECR are best dealt with to yield unbiased and accurate parameter estimates. This study quantifies the effects of ECR on some popular population genetics analyses of marker data, including the estimation of allele frequencies, F-statistics, expected heterozygosity (H e ), effective and observed numbers of alleles, and the tests of Hardy-Weinberg equilibrium (HWE) and linkage equilibrium (LE). It also investigates several strategies for handling ECR to mitigate their impact and to yield accurate parameter estimates. My analytical work, assisted by simulations, shows that ECR have large and global effects on all of the above marker analyses. The naïve approach of simply ignoring ECR could yield low-precision and often biased parameter estimates, and could cause too many false rejections of HWE and LE. The bold approach, which simply identifies and removes ECR, and the cautious approach, which estimates target parameters (e.g., H e ) by accounting for ECR and using naïve allele frequency estimates, eliminate the bias and the false HWE and LE rejections, but could reduce estimation precision substantially. The likelihood approach, which accounts for ECR in estimating allele frequencies and thus target parameters relying on allele frequencies, usually yields unbiased and the most accurate parameter estimates. Which of the four approaches is the most effective and efficient may depend on the particular marker analysis to be conducted. The results are discussed in the context of using marker data for understanding population properties and marker properties. © 2017 John Wiley & Sons Ltd.

  12. Characterization of bovine cartilage by fiber Bragg grating-based stress relaxation measurements

    NASA Astrophysics Data System (ADS)

    Baier, V.; Marchi, G.; Foehr, P.; Burgkart, R.; Roths, J.

    2017-04-01

    A fiber-based device for testing the mechanical properties of cartilage is presented in this study. The measurement principle is based on stepwise indentation into the tissue and observation of the corresponding stress relaxation. The indenter tip consists of a cleaved optical fiber containing a fiber Bragg grating that serves as the force sensor. Stress relaxation measurements at 25 different positions on a healthy bovine cartilage sample were performed to assess the behavior of healthy cartilage. For each indentation step, good agreement was found with a viscoelastic model that included two time constants. The model parameters showed low variability and a clear dependence on indentation depth. The parameters can be used as reference values for discriminating healthy from degenerated cartilage.

  13. 1976 water-quality data in Bear Creek basin, Medford, Oregon

    USGS Publications Warehouse

    McKenzie, Stuart W.; Wittenberg, Loren A.

    1977-01-01

    The U.S. Geological Survey, in cooperation with the Rogue Valley Council of Governments, is studying surface-water-quality problems and their causes in the Bear Creek basin of southwestern Oregon. Two specific areas of investigation include: measurements of the quality and quantity of water in the irrigation canals and drainage system and the diel (during a 24-hour period) variation of water-quality parameters in the main stem of Bear Creek. The irrigation and drainage study involves 25 sites in canals and natural drainageways. One hundred thirty-three samples were collected for analysis, and discharge was determined at the time of collection. The diel study includes six sites on Bear Creek. On August 23-24, four parameters were monitored at all six sites during a 24-hour period.

  14. Water-quality data for selected stations in the East Everglades, Florida

    USGS Publications Warehouse

    Waller, Bradley G.

    1981-01-01

    The results of water-quality samples collected from April 1978 through April 1980 from three canal stations, four marsh stations, and two ground-water stations within the East Everglades, Dade County, Florida, are tabulated in 37 tables. The major categories of parameters analyzed are field measurements, physical characteristics, macronutrients (carbon, nitrogen, and phosphorus), major ions, trace elements, and algae. Chemical data for bulk-precipitation stations within and adjacent to the East Everglades are also given. The parameters analyzed include macronutrients, major ions, and trace elements. The period of record for these stations is October 1977 through April 1980. Bottom material at the canal and marsh stations was collected twice during the investigation. These data include analyses for macronutrients, trace elements, and chlorinated-hydrocarbon insecticides. (USGS)

  15. Pioneer Venus Orbiter planar retarding potential analyzer plasma experiment

    NASA Technical Reports Server (NTRS)

    Knudsen, W. C.; Bakke, J.; Spenner, K.; Novak, V.

    1980-01-01

    The retarding potential analyzer (RPA) on the Pioneer Venus Orbiter Mission measures most of the thermal plasma parameters within and near the Venusian ionosphere. Parameters include total ion concentration, concentrations of the more abundant ions, ion temperatures, ion drift velocity, electron temperature, and low-energy (0-50 eV) electron distribution function. Several functions not previously used in RPAs were developed and incorporated into this instrument to accomplish these measurements on a spinning spacecraft with a small bit rate. The more significant functions include automatic electrometer ranging with background current compensation; digital, quadratic retarding potential step generation for the ion and low-energy electron scans; a current sampling interval of 2 ms throughout all scans; digital logic inflection point detection and data selection; and automatic ram direction detection.

  16. THE MAYAK WORKER DOSIMETRY SYSTEM (MWDS-2013) FOR INTERNALLY DEPOSITED PLUTONIUM: AN OVERVIEW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchall, A.; Vostrotin, V.; Puncher, M.

    The Mayak Worker Dosimetry System (MWDS-2013) is a system for interpreting measurement data from Mayak workers from both internal and external sources. This paper is concerned with the calculation of annual organ doses for Mayak workers exposed to plutonium aerosols, where the measurement data consists mainly of activity of plutonium in urine samples. The system utilises the latest biokinetic and dosimetric models, and unlike its predecessors, takes explicit account of uncertainties in both the measurement data and model parameters. The aim of this paper is to describe the complete MWDS-2013 system (including model parameter values and their uncertainties), the methodology used (including all the relevant equations) and the assumptions made. Where necessary, supplementary papers which justify specific assumptions are cited.

  17. Dictionary Indexing of Electron Channeling Patterns.

    PubMed

    Singh, Saransh; De Graef, Marc

    2017-02-01

    The dictionary-based approach to the indexing of diffraction patterns is applied to electron channeling patterns (ECPs). The main ingredients of the dictionary method are introduced, including the generalized forward projector (GFP), the relevant detector model, and a scheme to uniformly sample orientation space using the "cubochoric" representation. The GFP is used to compute an ECP "master" pattern. Derivative free optimization algorithms, including the Nelder-Mead simplex and the bound optimization by quadratic approximation are used to determine the correct detector parameters and to refine the orientation obtained from the dictionary approach. The indexing method is applied to poly-silicon and shows excellent agreement with the calibrated values. Finally, it is shown that the method results in a mean disorientation error of 1.0° with 0.5° SD for a range of detector parameters.
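
    The matching step of dictionary indexing reduces to a normalised dot product between the experimental pattern and every dictionary entry. The sketch below shows only that step, with a random stand-in dictionary; the master-pattern projection, detector model and cubochoric orientation sampling described above are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

n_orientations, n_pixels = 2000, 40 * 40
dictionary = rng.random((n_orientations, n_pixels)).astype(np.float32)

# Normalise every dictionary pattern to unit length once, up front
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def index_pattern(experimental):
    """Return (best_index, dot_product) for one experimental pattern."""
    exp = experimental.ravel().astype(np.float32)
    exp /= np.linalg.norm(exp)
    scores = dictionary @ exp            # normalised dot products against all entries
    best = int(np.argmax(scores))
    return best, float(scores[best])

# Fake "experimental" pattern: a noisy copy of dictionary entry 1234
truth = 1234
pattern = dictionary[truth] + 0.01 * rng.standard_normal(n_pixels)
print(index_pattern(pattern))            # expected to recover index 1234
```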

  18. Sampling design for long-term regional trends in marine rocky intertidal communities

    USGS Publications Warehouse

    Irvine, Gail V.; Shelley, Alice

    2013-01-01

    Probability-based designs reduce bias and allow inference of results to the pool of sites from which they were chosen. We developed and tested probability-based designs for monitoring marine rocky intertidal assemblages at Glacier Bay National Park and Preserve (GLBA), Alaska. A multilevel design was used that varied in scale and inference. The levels included aerial surveys, extensive sampling of 25 sites, and more intensive sampling of 6 sites. Aerial surveys of a subset of intertidal habitat indicated that the original target habitat of bedrock-dominated sites with slope ≤30° was rare. This unexpected finding illustrated one value of probability-based surveys and led to a shift in the target habitat type to include steeper, more mixed rocky habitat. Subsequently, we evaluated the statistical power of different sampling methods and sampling strategies to detect changes in the abundances of the predominant sessile intertidal taxa: barnacles Balanomorpha, the mussel Mytilus trossulus, and the rockweed Fucus distichus subsp. evanescens. There was greatest power to detect trends in Mytilus and lesser power for barnacles and Fucus. Because of its greater power, the extensive, coarse-grained sampling scheme was adopted in subsequent years over the intensive, fine-grained scheme. The sampling attributes that had the largest effects on power included sampling of “vertical” line transects (vs. horizontal line transects or quadrats) and increasing the number of sites. We also evaluated the power of several management-set parameters. Given equal sampling effort, sampling more sites fewer times had greater power. The information gained through intertidal monitoring is likely to be useful in assessing changes due to climate, including ocean acidification; invasive species; trampling effects; and oil spills.

  19. Contributions to ultrasound monitoring of the process of milk curdling.

    PubMed

    Jiménez, Antonio; Rufo, Montaña; Paniagua, Jesús M; Crespo, Abel T; Guerrero, M Patricia; Riballo, M José

    2017-04-01

    Ultrasound evaluation permits the state of milk being curdled to be determined quickly and cheaply, thus satisfying the demands faced by today's dairy product producers. This paper describes a non-invasive ultrasonic method for in situ monitoring of the changing physical properties of milk during the renneting process. The basic objectives of the study were, on the one hand, to confirm the usefulness of conventional non-destructive ultrasonic testing (time-of-flight and attenuation of the ultrasound waves) in monitoring the process in the case of ewe's milk, and, on the other, to include other ultrasound parameters which have not previously been considered in studies on this topic, in particular, parameters provided by the Fast Fourier Transform technique. The experimental study was carried out in a dairy industry environment on four 52-L samples of raw milk in which 500 kHz ultrasound transducers were immersed. Other physicochemical parameters of the raw milk (pH, dry matter, protein, Gerber fat test, and lactose) were measured, as were the pH and temperature of the curdled samples, recorded simultaneously with the ultrasound tests. Another contribution of this study is the linear correlation analysis of the aforementioned ultrasound parameters and the physicochemical properties of the curdled milk. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Building Better Planet Populations for EXOSIMS

    NASA Astrophysics Data System (ADS)

    Garrett, Daniel; Savransky, Dmitry

    2018-01-01

    The Exoplanet Open-Source Imaging Mission Simulator (EXOSIMS) software package simulates ensembles of space-based direct imaging surveys to provide a variety of science and engineering yield distributions for proposed mission designs. These mission simulations rely heavily on assumed distributions of planetary population parameters including semi-major axis, planetary radius, eccentricity, albedo, and orbital orientation to provide heuristics for target selection and to simulate planetary systems for detection and characterization. The distributions are encoded in PlanetPopulation modules within EXOSIMS which are selected by the user in the input JSON script when a simulation is run. The earliest written PlanetPopulation modules available in EXOSIMS are based on planet population models where the planetary parameters are considered to be independent from one another. While independent parameters allow for quick computation of heuristics and sampling for simulated planetary systems, results from planet-finding surveys have shown that many parameters (e.g., semi-major axis/orbital period and planetary radius) are not independent. We present new PlanetPopulation modules for EXOSIMS which are built on models based on planet-finding survey results where semi-major axis and planetary radius are not independent and provide methods for sampling their joint distribution. These new modules enhance the ability of EXOSIMS to simulate realistic planetary systems and give more realistic science yield distributions.
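
    A minimal sketch of joint (rather than independent) sampling of semi-major axis and planetary radius is given below. It is not EXOSIMS code: the occurrence-rate grid is a random placeholder and the bin edges are arbitrary; the point is simply that drawing (a, Rp) pairs from a joint table preserves correlations that independent marginal sampling would destroy.

```python
import numpy as np

rng = np.random.default_rng(3)

a_edges = np.logspace(-1, 1, 21)      # semi-major axis bins [AU], hypothetical
r_edges = np.logspace(-0.5, 1.3, 16)  # planet radius bins [Earth radii], hypothetical

# Hypothetical joint occurrence-rate grid (rows: a bins, cols: radius bins)
occurrence = rng.random((a_edges.size - 1, r_edges.size - 1))

def sample_joint(n):
    """Draw n (a, Rp) pairs from the joint grid, then jitter within each cell."""
    p = occurrence.ravel() / occurrence.sum()
    cells = rng.choice(p.size, size=n, p=p)
    i, j = np.unravel_index(cells, occurrence.shape)
    # Log-uniform jitter inside the selected cell
    a = np.exp(rng.uniform(np.log(a_edges[i]), np.log(a_edges[i + 1])))
    r = np.exp(rng.uniform(np.log(r_edges[j]), np.log(r_edges[j + 1])))
    return a, r

a, r = sample_joint(10_000)
print(a.size, "planets; a range:", a.min(), a.max(), " Rp range:", r.min(), r.max())
```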

  1. Relationship between the properties of raw and cooked spaghetti - new indices for pasta quality evaluation

    NASA Astrophysics Data System (ADS)

    Biernacka, Beata; Dziki, Dariusz; Różyło, Renata; Wójcik, Monika; Miś, Antoni; Romankiewicz, Daria; Krzysiak, Zbigniew

    2018-04-01

    The quality of pasta can be evaluated by measuring the characteristics which encompass the most important quality parameters, such as colour, cooking properties and texture. The aim of the study was to suggest new indices which can be used to evaluate the quality of pasta. For the tests, 15 samples of spaghetti (produced from either semolina or common wheat flour) were used. The bending test was performed for the determination of the strength properties of raw pasta, while the pasta colour parameters were evaluated via the Commission Internationale de l'Eclairage system. The pasta cooking test included the evaluation of optimum cooking time, weight increase index and cooking loss. The samples of cooked spaghetti were cut, and the parameters describing pasta texture were determined. Statistical analysis showed significant correlations (α = 0.05) between colour parameters (lightness and redness) and pasta ash content (R = -0.90 and 0.84, respectively). The mechanical properties of raw pasta correlated positively with pasta density. The strongest correlation was found between pasta density and flexural strength. The destruction force for raw spaghetti during the bending test correlated significantly and positively with the cutting force of the cooked pasta. The obtained correlations can be helpful in pasta quality evaluation.

  2. Uncertainty quantification analysis of the dynamics of an electrostatically actuated microelectromechanical switch model

    NASA Astrophysics Data System (ADS)

    Snow, Michael G.; Bajaj, Anil K.

    2015-08-01

    This work presents an uncertainty quantification (UQ) analysis of a comprehensive model for an electrostatically actuated microelectromechanical system (MEMS) switch. The goal is to elucidate the effects of parameter variations on certain key performance characteristics of the switch. A sufficiently detailed model of the electrostatically actuated switch in the basic configuration of a clamped-clamped beam is developed. This multi-physics model accounts for various physical effects, including the electrostatic fringing field, finite length of electrodes, squeeze film damping, and contact between the beam and the dielectric layer. The performance characteristics of immediate interest are the static and dynamic pull-in voltages for the switch. Numerical approaches for evaluating these characteristics are developed and described. Using Latin Hypercube Sampling and other sampling methods, the model is evaluated to find these performance characteristics when variability in the model's geometric and physical parameters is specified. Response surfaces of these results are constructed via a Multivariate Adaptive Regression Splines (MARS) technique. Using a Direct Simulation Monte Carlo (DSMC) technique on these response surfaces gives smooth probability density functions (PDFs) of the output characteristics when the input probability characteristics are specified. The relative variation in the two pull-in voltages due to each of the input parameters is used to determine the critical parameters.
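
    The pipeline of design-of-experiments sampling, surrogate fitting and Monte Carlo propagation can be sketched as follows. This is not the authors' MEMS model: the pull-in-voltage function and parameter ranges are placeholders, and an ordinary quadratic least-squares fit stands in for the MARS response surface.

```python
import numpy as np
from scipy.stats import qmc

def pull_in_voltage(x):
    """Placeholder response, not the MEMS multi-physics model."""
    gap, thickness = x[:, 0], x[:, 1]
    return 8.0 * gap ** 1.5 / np.sqrt(thickness)

lb, ub = np.array([1.0, 0.5]), np.array([3.0, 2.0])   # hypothetical parameter ranges (µm)

# 1) Latin hypercube design over the two uncertain geometric parameters
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(200), lb, ub)
y = pull_in_voltage(X)

# 2) Fit a simple quadratic response surface (stand-in for MARS)
def features(X):
    g, t = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(g), g, t, g * t, g ** 2, t ** 2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# 3) Dense Monte Carlo on the cheap surrogate to get the output distribution
rng = np.random.default_rng(0)
Xmc = rng.uniform(lb, ub, size=(200_000, 2))
y_surrogate = features(Xmc) @ coef
print("mean:", y_surrogate.mean(), " 5-95%:", np.percentile(y_surrogate, [5, 95]))
```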

  3. 300 Area treated effluent disposal facility sampling schedule. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loll, C.M.

    1995-03-28

    This document is the interface between the 300 Area liquid effluent process engineering (LEPE) group and the waste sampling and characterization facility (WSCF), concerning process control samples. It contains a schedule for process control samples at the 300 Area TEDF which describes the parameters to be measured, the frequency of sampling and analysis, the sampling point, and the purpose for each parameter.

  4. Does the cognitive reflection test measure cognitive reflection? A mathematical modeling approach.

    PubMed

    Campitelli, Guillermo; Gerrans, Paul

    2014-04-01

    We used a mathematical modeling approach, based on a sample of 2,019 participants, to better understand what the cognitive reflection test (CRT; Frederick, Journal of Economic Perspectives, 19, 25-42, 2005) measures. This test, which is typically completed in less than 10 min, contains three problems and aims to measure the ability or disposition to resist reporting the response that first comes to mind. However, since the test contains three mathematically based problems, it is possible that the test only measures mathematical abilities, and not cognitive reflection. We found that the models that included an inhibition parameter (i.e., the probability of inhibiting an intuitive response), as well as a mathematical parameter (i.e., the probability of using an adequate mathematical procedure), fitted the data better than a model that only included a mathematical parameter. We also found that the inhibition parameter in males is best explained by both rational thinking ability and the disposition toward actively open-minded thinking, whereas in females this parameter was better explained by rational thinking only. With these findings, this study contributes to the understanding of the processes involved in solving the CRT, and will be particularly useful for researchers who are considering using this test in their research.

  5. Quantitative analysis of iris parameters in keratoconus patients using optical coherence tomography.

    PubMed

    Bonfadini, Gustavo; Arora, Karun; Vianna, Lucas M; Campos, Mauro; Friedman, David; Muñoz, Beatriz; Jun, Albert S

    2015-01-01

    To investigate the relationship between quantitative iris parameters and the presence of keratoconus. Cross-sectional observational study that included 15 affected eyes of 15 patients with keratoconus and 26 eyes of 26 normal age- and sex-matched controls. Iris parameters (area, thickness, and pupil diameter) of affected and unaffected eyes were measured under standardized light and dark conditions using anterior segment optical coherence tomography (AS-OCT). To identify optimal iris thickness cutoff points to maximize the sensitivity and specificity when discriminating keratoconus eyes from normal eyes, the analysis included the use of receiver operating characteristic (ROC) curves. Iris thickness and area were lower in keratoconus eyes than in normal eyes. The mean thickness at the pupillary margin under both light and dark conditions was found to be the best parameter for discriminating normal patients from keratoconus patients. Diagnostic performance was assessed by the area under the ROC curve (AROC), which had a value of 0.8256 with 80.0% sensitivity and 84.6% specificity, using a cutoff of 0.4125 mm. The sensitivity increased to 86.7% when a cutoff of 0.4700 mm was used. In our sample, iris thickness was lower in keratoconus eyes than in normal eyes. These results suggest that tomographic parameters may provide novel adjunct approaches for keratoconus screening.
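
    The ROC/cutoff analysis itself is standard and can be reproduced on synthetic data as below, assuming scikit-learn is available. The thickness values are made up and only mimic the qualitative finding that keratoconus irides are thinner; the cutoff here is chosen by Youden's J rather than by the authors' criterion.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(4)
thickness_normal = rng.normal(0.50, 0.05, 26)        # hypothetical mm values, 26 controls
thickness_keratoconus = rng.normal(0.42, 0.05, 15)   # hypothetical mm values, 15 patients

values = np.concatenate([thickness_normal, thickness_keratoconus])
labels = np.concatenate([np.zeros(26), np.ones(15)])  # 1 = keratoconus

# Lower thickness indicates disease, so use the negated value as the score
fpr, tpr, thresholds = roc_curve(labels, -values)
print("AUC:", roc_auc_score(labels, -values))

# Pick the cutoff maximising Youden's J = sensitivity + specificity - 1
j = tpr - fpr
best = int(np.argmax(j))
print("cutoff (mm):", -thresholds[best],
      "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```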

  6. Laser Shot Peening System Final Report CRADA No. TC-1369-96

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuart, B. C.; Harris, F.

    This CRADA project was established with a primary goal to develop a laser shot peening system which could operate at production throughput rates and produce the desired depth and intensity of induced stresses. The first objective was to understand all parameters required for acceptable peening, including pulse energy, pulse temporal format, pulse spatial format, sample configuration and tamping mechanism. The next objective was to demonstrate the technique on representative samples and then on representative parts. The final objective was to implement the technology in a meaningful industrial peening application.

  7. Lubricant Evaluation and Performance 2

    DTIC Science & Technology

    1992-01-01

    IDENTIFICATION OF SAMPLES USED IN ANALYTICAL FERROGRAPHY STUDY INCLUDING DESCRIPTION OF FERROGRAM DEBRIS. ANALYTICAL FERROGRAPH DATA FOR DOD-L-85734(AS)... testing under various test parameters for determining effects on lubricant stability. Ferrography of the wear test samples showed a change in type of... [Figure/table residue; legible fits: Y = 1.04X - 1.60 (r = 0.99997) and Y = 1.03X - 1.08 (r = 0.99999).]

  8. Sub-nanometer Resolution Imaging with Amplitude-modulation Atomic Force Microscopy in Liquid

    PubMed Central

    Farokh Payam, Amir; Piantanida, Luca; Cafolla, Clodomiro; Voïtchovsky, Kislon

    2016-01-01

    Atomic force microscopy (AFM) has become a well-established technique for nanoscale imaging of samples in air and in liquid. Recent studies have shown that when operated in amplitude-modulation (tapping) mode, atomic or molecular-level resolution images can be achieved over a wide range of soft and hard samples in liquid. In these situations, small oscillation amplitudes (SAM-AFM) enhance the resolution by exploiting the solvated liquid at the surface of the sample. Although the technique has been successfully applied across fields as diverse as materials science, biology and biophysics and surface chemistry, obtaining high-resolution images in liquid can still remain challenging for novice users. This is partly due to the large number of variables to control and optimize such as the choice of cantilever, the sample preparation, and the correct manipulation of the imaging parameters. Here, we present a protocol for achieving high-resolution images of hard and soft samples in fluid using SAM-AFM on a commercial instrument. Our goal is to provide a step-by-step practical guide to achieving high-resolution images, including the cleaning and preparation of the apparatus and the sample, the choice of cantilever and optimization of the imaging parameters. For each step, we explain the scientific rationale behind our choices to facilitate the adaptation of the methodology to every user's specific system. PMID:28060262

  9. Effects of slow recovery rates on water column geochemistry in aquitard wells

    USGS Publications Warehouse

    Schilling, K.E.

    2011-01-01

    Monitoring wells are often installed in aquitards to verify their effectiveness in preventing migration of surface contaminants to underlying aquifers. However, water sampling of aquitard wells presents a challenge due to the slow recovery times for water recharging the wells, which can take weeks, months or years depending on the sample volume needed. In this study, downhole profiling and sampling of aquitard wells were used to assess geochemical changes that occur in aquitard wells during water level recovery. Wells were sampled on three occasions, spanning 11 years, 1 year and 1 week after they were purged, and the casing water showed substantial water chemistry variations. Temperature decreased with depth, whereas pH and specific conductance increased with depth in the water column after 11 years of water level recovery. Less stable parameters such as dissolved O2 (DO) and Eh showed strong zonation in the well column, with DO stratification occurring as the groundwater slowly entered the well. Oxidation of reduced till groundwater along with degassing of CO2 from till pore water affects mineral solubility and dissolved solid concentrations. Recommendations for sampling slowly recovering aquitard wells include identifying the zone of DO and Eh stratification in the well column and collecting water samples from below the boundary to better measure unstable geochemical parameters. © 2011 Elsevier Ltd.

  10. Toxicological study of pesticides in air and precipitations of Paris by means of a bioluminescence method.

    PubMed

    Trajkovska, S; Mbaye, M; Gaye Seye, M D; Aaron, J J; Chevreuil, M; Blanchoud, H

    2009-06-01

    A detailed toxicological study on several pesticides, including chlorothalonil, cyprodinil, dichlobenil, pendimethalin, trifluralin, and alpha-endosulfan, present at trace levels in air and total atmospheric precipitations of Paris is presented. The pesticides contained in the atmospheric samples, collected during sampling campaigns in February-March 2007, are identified and quantified by a high-performance liquid chromatographic (HPLC)-UV detection method. The toxicity measurements are performed by means of the Microtox bioluminescence method, based on the evaluation of the bioluminescence inhibition of the Vibrio fischeri marine bacteria at two exposure times to the pesticide solutions. The specific toxicity, corresponding to the particular toxicity of the compound under study and represented by the EC50 parameter, is determined for these pesticides. Also, the global toxicity, which is the toxicity of all micro-pollutants present in the sample under study, is estimated for the extracts of air and atmospheric precipitation (rainwater) samples. The specific toxicities vary strongly with the nature of the pesticide, with EC50 values ranging between 0.17 and 0.83 mg/mL and between 0.15 and 0.66 mg/mL for exposure times of 5 and 15 min, respectively. The importance of the atmospheric samples' global toxicity and the respective contribution of the toxic potency of the various pesticides contained in these samples are discussed.

  11. Development of automated high throughput single molecular microfluidic detection platform for signal transduction analysis

    NASA Astrophysics Data System (ADS)

    Huang, Po-Jung; Baghbani Kordmahale, Sina; Chou, Chao-Kai; Yamaguchi, Hirohito; Hung, Mien-Chie; Kameoka, Jun

    2016-03-01

    Signal transduction events, including multiple protein post-translational modifications (PTM), protein-protein interactions (PPI), and protein-nucleic acid interactions (PNI), play critical roles in cell proliferation and differentiation and are directly related to cancer biology. Traditional methods, such as mass spectrometry, immunoprecipitation, fluorescence resonance energy transfer, and fluorescence correlation spectroscopy, require a large amount of sample and long processing times. The "microchannel for multiple-parameter analysis of proteins in single-complex" (mMAPS) approach we proposed can reduce the processing time and sample volume because this system is composed of microfluidic channels, fluorescence microscopy, and computerized data analysis. In this paper, we present an automated mMAPS, including an integrated microfluidic device, automated stage and electrical relay, for high-throughput clinical screening. Based on these results, we estimate that this automated detection system will be able to screen approximately 150 patient samples in a 24-hour period, providing a practical way to analyze tissue samples in a clinical setting.

  12. Food-service establishment wastewater characterization.

    PubMed

    Lesikar, B J; Garza, O A; Persyn, R A; Kenimer, A L; Anderson, M T

    2006-08-01

    Food-service establishments that use on-site wastewater treatment systems are experiencing pretreatment system and/or drain field hydraulic and/or organic overloading. This study included characterization of four wastewater parameters (five-day biochemical oxygen demand [BOD5]; total suspended solids [TSS]; food, oil, and grease [FOG]; and flow) from 28 restaurants located in Texas during June, July, and August 2002. The field sampling methodology included taking a grab sample from each restaurant for 6 consecutive days at approximately the same time each day, followed by a 2-week break, and then sampling again for another 6 consecutive days, for a total of 12 samples per restaurant and 336 total observations. The analysis indicates higher organic (BOD5) and hydraulic values for restaurants than those typically found in the literature. The design values for this study for BOD5, TSS, FOG, and flow were 1523, 664, and 197 mg/L, and 96 L/day-seat respectively, which captured over 80% of the data collected.

  13. Environmental parameters of the Tennessee River in Alabama. 1: Thermal stratification

    NASA Technical Reports Server (NTRS)

    Rosing, L. M.

    1976-01-01

    Thermal stratification data from a transect across Wheeler Reservoir are correlated with the climatological data at the time of sampling. This portion of the Tennessee River is used as a heat sink for the effluent from the three-reactor Browns Ferry Nuclear Power Plant. The transect sampling line is 1.3 miles below this point of effluence. Data are presented from weekly samplings for one year prior to plant operations. Post-operational data are presented with one reactor in operation and with two reactors in partial operation. Data gathering was terminated when the plant ceased operations. The results regarding the effluent impact during partial plant operation were inconclusive. As a result, recommendations include continuing the sampling when the plant resumes operation at full capacity. Recommendations also include developing mathematical models with the presented thermal and climatological data to be used for predicting the effluent impact in the river under varying climatological conditions and also to predict the effectiveness of the cooling towers.

  14. The ARIEL mission reference sample

    NASA Astrophysics Data System (ADS)

    Zingales, Tiziano; Tinetti, Giovanna; Pillitteri, Ignazio; Leconte, Jérémy; Micela, Giuseppina; Sarkar, Subhajit

    2018-02-01

    The ARIEL (Atmospheric Remote-sensing Exoplanet Large-survey) mission concept is one of the three M4 mission candidates selected by the European Space Agency (ESA) for a Phase A study, competing for a launch in 2026. ARIEL has been designed to study the physical and chemical properties of a large and diverse sample of exoplanets and, through those, understand how planets form and evolve in our galaxy. Here we describe the assumptions made to estimate an optimal sample of exoplanets - including already known exoplanets and expected ones yet to be discovered - observable by ARIEL and define a realistic mission scenario. To achieve the mission objectives, the sample should include gaseous and rocky planets with a range of temperatures around stars of different spectral type and metallicity. The current ARIEL design enables the observation of ˜1000 planets, covering a broad range of planetary and stellar parameters, during its four year mission lifetime. This nominal list of planets is expected to evolve over the years depending on the new exoplanet discoveries.

  15. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037

  16. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
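
    In practice, a classical (search-curve based) FAST analysis of the kind discussed above can be run with the SALib Python package, assuming it is installed; the Ishigami function below is a standard test case, not one of the authors' models. The analysis returns first-order (main-effect) indices and total-order indices that fold in interaction effects.

```python
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[-np.pi, np.pi]] * 3,
}

# Periodic search-curve samples (N samples per input variable)
X = fast_sampler.sample(problem, 1000)

# Ishigami test function evaluated on the FAST design
Y = np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

Si = fast.analyze(problem, Y)
print(Si["S1"])   # first-order (main-effect) sensitivity indices
print(Si["ST"])   # total-order indices, including interaction contributions
```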

  17. Residential distance to major roadways and semen quality, sperm DNA integrity, chromosomal disomy, and serum reproductive hormones among men attending a fertility clinic.

    PubMed

    Nassan, Feiby L; Chavarro, Jorge E; Mínguez-Alarcón, Lidia; Williams, Paige L; Tanrikut, Cigdem; Ford, Jennifer B; Dadd, Ramace; Perry, Melissa J; Hauser, Russ; Gaskins, Audrey J

    2018-06-01

    We examined associations of residential distance to major roadways, as a proxy for traffic-related air pollution exposures, with sperm characteristics and male reproductive hormones. The cohort included 797 men recruited from Massachusetts General Hospital Fertility Center between 2000 and 2015 to participate in fertility research studies. Men reported their residential addresses at enrollment and provided 1-6 semen samples and a blood sample during follow-up. We estimated the Euclidean distance to major roadways (e.g. interstates and highways: limited access highways, multi-lane highways (not limited access), other numbered routes, and major roads) using information from the Massachusetts Department of Geographic Information Systems. Semen parameters (1238 semen samples), sperm DNA integrity (389 semen samples), chromosomal disomy (101 semen samples), and serum reproductive hormones (405 serum samples) were assessed following standard procedures. Men in this cohort were primarily Caucasian (86%), not current smokers (92%), with a college or higher education (88%), and had an average age of 36 years and BMI of 27.7 kg/m 2 . The median (interquartile range) residential distance to a major roadway was 111 (37, 248) meters. Residential proximity to major roadways was not associated with semen parameters, sperm DNA integrity, chromosomal disomy, or serum reproductive hormone concentrations. The adjusted percent change (95% CI) in semen quality parameters associated with a 500 m increase in residential distance to a major roadway was -1.0% (-6.3, 4.5) for semen volume, 4.3% (-5.8, 15.7) for sperm concentration, 3.1% (-7.2, 14.5) for sperm count, 1.1% (-1.2, 3.4) for % total motile sperm, and 0.1% (-0.3, 0.5) for % morphologically normal sperm. Results were consistent when we modeled the semen parameters dichotomized according to WHO 2010 reference values. Residential distance to major roadways, as a proxy for traffic-related air pollution exposure, was not related to sperm characteristics or serum reproductive hormones among men attending a fertility clinic in Massachusetts. Copyright © 2018 Elsevier GmbH. All rights reserved.

  18. Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods

    NASA Astrophysics Data System (ADS)

    Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.

    2011-12-01

    Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performance of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as 'Method 1'), (2) PCA with MCMC sampling (referred to as 'Method 2'), and (3) PCA with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noised data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noised data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, while Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA analysis is inconsistent with the true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
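
    A minimal sketch of the PCA reduction idea, not the authors' code, is shown below: a spatially correlated parameter field is expressed through a handful of leading principal components of its prior covariance, and a linear (convolution-like) forward model is inverted for those few coefficients instead of the full grid. The covariance, forward operator and noise level are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100                                   # grid cells
x = np.arange(n)

# Prior covariance with exponential correlation; PCA basis from its eigenvectors
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
k = 10                                    # retained principal components
basis = eigvecs[:, order[:k]] * np.sqrt(eigvals[order[:k]])

# "True" field drawn from the prior, and a smoothing (convolution-like) forward model
true_field = basis @ rng.standard_normal(k)
G = np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0)
G /= G.sum(axis=1, keepdims=True)
data = G @ true_field + 0.01 * rng.standard_normal(n)

# Invert for the k PCA coefficients instead of the n grid values
A = G @ basis
coef, *_ = np.linalg.lstsq(A, data, rcond=None)
recovered = basis @ coef
print("relative error:", np.linalg.norm(recovered - true_field) / np.linalg.norm(true_field))
```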

  19. Development of a copula-based particle filter (CopPF) approach for hydrologic data assimilation under consideration of parameter interdependence

    NASA Astrophysics Data System (ADS)

    Fan, Y. R.; Huang, G. H.; Baetz, B. W.; Li, Y. P.; Huang, K.

    2017-06-01

    In this study, a copula-based particle filter (CopPF) approach was developed for sequential hydrological data assimilation by considering parameter correlation structures. In CopPF, multivariate copulas are proposed to reflect parameter interdependence before the resampling procedure, with new particles then being sampled from the obtained copulas. Such a process can overcome both particle degeneration and sample impoverishment. The applicability of CopPF is illustrated with three case studies using a two-parameter simplified model and two conceptual hydrologic models. The results for the simplified model indicate that model parameters are highly correlated in the data assimilation process, suggesting a demand for a full description of their dependence structure. Synthetic experiments on hydrologic data assimilation indicate that CopPF can rejuvenate particle evolution in large spaces and thus achieve good performance even with small sample sizes. The applicability of CopPF is further illustrated through two real-case studies. It is shown that, compared with traditional particle filter (PF) and particle Markov chain Monte Carlo (PMCMC) approaches, the proposed method provides more accurate results for both deterministic and probabilistic prediction with a sample size of 100. Furthermore, the sample size does not significantly influence the performance of CopPF. The copula resampling approach dominates parameter evolution in CopPF, with more than 50% of particles sampled from copulas in most sample size scenarios.
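
    The copula resampling step can be illustrated with a Gaussian copula, which is an assumption here (the study considers multivariate copulas more generally): particles are rank-transformed to normal scores, their correlation is estimated, and new particles are drawn so that the parameter dependence structure is preserved.

```python
# Minimal sketch (an assumption-laden illustration, not the CopPF code):
# resampling particles from a Gaussian copula fitted to the current particle
# cloud, so that new particles keep the parameter correlation structure.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical particle cloud for two correlated model parameters.
particles = rng.multivariate_normal([1.0, 0.5], [[0.04, 0.03], [0.03, 0.04]], 300)
n, d = particles.shape

# 1) Transform each marginal to normal scores via ranks (empirical CDF).
ranks = particles.argsort(axis=0).argsort(axis=0) + 1
z = norm.ppf(ranks / (n + 1.0))

# 2) Gaussian copula parameter: correlation matrix of the normal scores.
corr = np.corrcoef(z, rowvar=False)

# 3) Sample new normal scores from the copula and map them back through the
#    empirical marginal quantiles of the original particles.
z_new = rng.multivariate_normal(np.zeros(d), corr, n)
u_new = norm.cdf(z_new)
new_particles = np.column_stack([
    np.quantile(particles[:, j], u_new[:, j]) for j in range(d)
])
print("original corr:", np.corrcoef(particles, rowvar=False)[0, 1])
print("resampled corr:", np.corrcoef(new_particles, rowvar=False)[0, 1])
```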

  20. Application of LANDSAT to the surveillance of lake eutrophication in the Great Lakes basin. [Saginaw Bay, Michigan]

    NASA Technical Reports Server (NTRS)

    Rogers, R. H.; Smith, V. E.; Scherz, J. P.; Woelkerling, W. J.; Adams, M. S.; Gannon, J. E. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. A step-by-step procedure for establishing and monitoring the trophic status of inland lakes with the use of LANDSAT data, surface sampling, laboratory analysis, and aerial observations was demonstrated. Biomass was related to chlorophyll-a concentrations, water clarity, and trophic state. A procedure was developed for using surface sampling, LANDSAT data, and linear regression equations to produce a color-coded image of large lakes showing the distribution and concentrations of water quality parameters that cause eutrophication, as well as parameters that indicate its effects. Cover categories readily derived from LANDSAT were those for which loading rates were available and which were known to have major effects on the quality and quantity of runoff and on lake eutrophication. Urban, barren land, cropland, grassland, forest, wetlands, and water were included.
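
    A hedged sketch of the regression step mentioned above, with made-up numbers rather than the study's data: surface-sampled chlorophyll-a is regressed on a LANDSAT-derived predictor and the fitted relation is then applied to other lake pixels.

```python
# Hypothetical illustration of the linear-regression mapping step; values are
# placeholders, not measurements from Saginaw Bay.
import numpy as np

band_ratio = np.array([0.42, 0.55, 0.61, 0.70, 0.83])   # imagery values at sample sites
chl_a = np.array([3.1, 6.0, 7.4, 9.8, 13.2])            # surface-sampled chlorophyll-a, µg/L

slope, intercept = np.polyfit(band_ratio, chl_a, 1)     # fit the regression equation
pixels = np.array([0.45, 0.66, 0.79])                   # any other lake pixels
print(slope * pixels + intercept)                       # predicted chlorophyll-a for the map
```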

  1. New software to model energy dispersive X-ray diffraction in polycrystalline materials

    NASA Astrophysics Data System (ADS)

    Ghammraoui, B.; Tabary, J.; Pouget, S.; Paulus, C.; Moulin, V.; Verger, L.; Duvauchelle, Ph.

    2012-02-01

    Detection of illicit materials, such as explosives or drugs, within mixed samples is a major issue, both for general security and as part of forensic analyses. In this paper, we describe a new code simulating energy dispersive X-ray diffraction patterns in polycrystalline materials. This program, SinFullscat, models diffraction from any object in any diffractometer system, taking all physical phenomena, including amorphous background, into account. Many system parameters can be tuned: geometry, collimators (slit and cylindrical), sample properties, X-ray source and detector energy resolution. Good agreement between simulations and experimental data was obtained. Simulations using explosive materials indicated that parameters such as the diffraction angle or the energy resolution of the detector have a significant impact on the diffraction signature of the material inspected. This software will be a convenient tool to test many diffractometer configurations, providing information on the one that best restores the spectral diffraction signature of the materials of interest.

  2. Reconnaissance study of water quality in the mining-affected Aries River Basin, Romania

    USGS Publications Warehouse

    Friedel, Michael J.; Tindall, James A.; Sardan, Daniel; Fey, David L.; Poputa, G.L.

    2008-01-01

    The Aries River basin of western Romania has been subject to mining activities as far back as Roman times. Present mining activities are associated with the extraction and processing of various metals including Au, Cu, Pb, and Zn. To understand the effects of these mining activities on the environment, this study focused on three objectives: (1) establish a baseline set of physical parameters, and water- and sediment-associated concentrations of metals in river-valley floors and floodplains; (2) establish a baseline set of physical and chemical measurements of pore water and sediment in tailings; and (3) provide training in sediment and water sampling to personnel in the National Agency for Mineral Resources and the Rosia Poieni Mine. This report summarizes basin findings of physical parameters and chemistry (sediment and water), and ancillary data collected during the low-flow synoptic sampling of May 2006.

  3. Comparative treatment effectiveness of conventional trench and seepage pit systems.

    PubMed

    Field, J P; Farrell-Poe, K L; Walworth, J L

    2007-03-01

    On-site wastewater treatment systems can be a potential source of groundwater contamination in regions throughout the United States and other parts of the world. Here, we evaluate four conventional trench systems and four seepage pit systems to determine the relative effectiveness of these systems for the treatment of septic tank effluent in medium- to coarse-textured arid and semiarid soils. Soil borings were advanced up to twice the depth of the trenches (4 m) and seepage pits (15 m) at two horizontal distances (30 cm and 1.5 m) from the sidewalls of the systems. Soil samples were analyzed for various biological and chemical parameters, including Escherichia coli, total coliform, pH, total organic carbon, total dissolved solids, total nitrogen, ammonium-nitrogen, and nitrate-nitrogen. Most soil parameters investigated approached background levels more rapidly near the trenches than the seepage pits, as sampling distance increased both vertically and horizontally from the sidewalls of the systems.

  4. New isotonic drinks with antioxidant and biological capacities from berries (maqui, açaí and blackthorn) and lemon juice.

    PubMed

    Gironés-Vilaplana, Amadeo; Villaño, Débora; Moreno, Diego A; García-Viguera, Cristina

    2013-11-01

    The aim of the study was to design new isotonic drinks with lemon juice and berries: maqui [Aristotelia chilensis (Molina) Stuntz], açaí (Euterpe oleracea Mart.) and blackthorn (Prunus spinosa L.), following on from previous research. Quality parameters - including colour (CIELab parameters), minerals, phytochemical identification and quantification by high-performance liquid chromatography with diode array detector, total phenolic content by the Folin-Ciocalteu reagent, the antioxidant capacity (ABTS(+), DPPH• and [Formula: see text] assays) and biological activities (in vitro alpha-glucosidase and lipase inhibitory effects) - were tested in the samples and compared to commercially available isotonic drinks. The new isotonic blends with lemon and anthocyanins-rich berries showed an attractive colour, especially in maqui samples, which is essential for consumer acceptance. Significantly higher antioxidant and biological effects were determined in the new blends, in comparison with the commercial isotonic beverages.

  5. Psychomotor development, environmental stimulation, and socioeconomic level of preschoolers in Temuco, Chile.

    PubMed

    Doussoulin Sanhueza, Arlette

    2006-01-01

    This research was designed to describe the psychomotor development, environmental stimulation, and the socioeconomic condition of preschool children attending three educational institutions in the city of Temuco, Chile. The sample included 81 boys and girls whose age ranged from three to four years. The Test de Desarrollo Psicomotor (The Psychomotor Development Test), or TEPSI, was used to assess psychomotor development; the Home Observation Measurement of the Environment (HOME) Scale was used to evaluate environmental stimulation; and the Socioeconomic Standardization Model was used to categorize children's socioeconomic status. The highest statistical correlation was observed between psychomotor development and environmental stimulation when comparing all three parameters across the sample. Environmental stimulation may be the most relevant parameter in the study of psychomotor development of children. Socioeconomic status alone does not seem to be strongly related to children's psychomotor development in the Temuco region of Chile.

  6. VizieR Online Data Catalog: PTPS stars. III. The evolved stars sample (Niedzielski+, 2016)

    NASA Astrophysics Data System (ADS)

    Niedzielski, A.; Deka-Szymankiewicz, B.; Adamczyk, M.; Adamow, M.; Nowak, G.; Wolszczan, A.

    2015-11-01

    We present basic atmospheric parameters (Teff, logg, vt and [Fe/H]), rotation velocities and absolute radial velocities, as well as luminosities, masses, ages and radii, for 402 stars (including 11 single-lined spectroscopic binaries), mostly subgiants and giants. For 272 of them we present parameters for the first time. For another 53 stars we present estimates of Teff and log g based on photometric calibrations. We also present basic properties of the complete list of 744 stars that form the PTPS evolved stars sample. We examined stellar masses for 1255 stars in five other planet searches and found some of them likely to be significantly overestimated. Applying our uniformly determined stellar masses, we confirm the apparent increase of companion masses for evolved stars, and we attribute it, as well as the lack of close-in planets, to the limited effective radial velocity precision for those stars due to activity. (5 data files).

  7. Apparatus and Methods for Manipulation and Optimization of Biological Systems

    NASA Technical Reports Server (NTRS)

    Sun, Ren (Inventor); Ho, Chih-Ming (Inventor); Wong, Pak Kin (Inventor); Yu, Fuqu (Inventor)

    2014-01-01

    The invention provides systems and methods for manipulating biological systems, for example to elicit a more desired biological response from a biological sample, such as a tissue, organ, and/or a cell. In one aspect, the invention operates by efficiently searching through a large parametric space of stimuli and system parameters to manipulate, control, and optimize the response of biological samples sustained in the system. In one aspect, the systems and methods of the invention use at least one optimization algorithm to modify the actuator's control inputs for stimulation, responsive to the sensor's output of response signals. The invention can be used, e.g., to optimize any biological system, e.g., bioreactors for proteins, small molecules, polysaccharides, lipids, and the like. Another use of the apparatus and methods is the discovery of key parameters in complex biological systems.

  8. Anisotropic analysis of trabecular architecture in human femur bone radiographs using quaternion wavelet transforms.

    PubMed

    Sangeetha, S; Sujatha, C M; Manamalli, D

    2014-01-01

    In this work, the anisotropy of the compressive and tensile strength regions of femur trabecular bone is analysed using quaternion wavelet transforms. Normal and abnormal femur trabecular bone radiographic images are considered for this study. The sub-anatomic regions, which include the compressive and tensile regions, are delineated using pre-processing procedures. These delineated regions are subjected to quaternion wavelet transforms, and statistical parameters are derived from the transformed images. These parameters are correlated with apparent porosity, which is derived from the strength regions. Anisotropy is also calculated from the transformed images and analysed. Results show that the anisotropy values derived from the second and third phase components of the quaternion wavelet transform are distinct for normal and abnormal samples, with high statistical significance for both compressive and tensile regions. These investigations demonstrate that architectural anisotropy derived from QWT analysis is able to differentiate normal and abnormal samples.

  9. Experimental investigation of effective parameters on signal enhancement in spark assisted laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Hassanimatin, M. M.; Tavassoli, S. H.

    2018-05-01

    A combination of electrical spark and laser induced breakdown spectroscopy (LIBS), called spark assisted LIBS (SA-LIBS), has shown its capability for plasma spectral emission enhancement. The aim of this paper is a detailed study of plasma emission to determine the effect of plasma and experimental parameters on the increase of the spectral signal. An enhancement ratio of SA-LIBS spectral lines relative to LIBS is introduced theoretically. The parameters affecting the spectral enhancement ratio, including ablated mass, plasma temperature, the lifetime of neutral and ionic spectral lines, plasma volume, and electron density, are experimentally investigated and discussed. By substitution of these effective parameters, the theoretical spectral enhancement ratio is calculated and compared with the experimental one. Two samples, granite as a dielectric and aluminum as a metal, are studied at different laser pulse energies. There is good agreement between the calculated and the experimental enhancement ratios.

  10. Color separation in forensic image processing using interactive differential evolution.

    PubMed

    Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb

    2015-01-01

    Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of interactive differential evolution (IDE) and a color separation technique that no longer requires users to guess the required control parameters. The IDE algorithm optimizes these parameters in an interactive manner by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent as it effectively optimizes the color separation parameters at a level indiscernible to the naked eye. © 2014 American Academy of Forensic Sciences.
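
    The sketch below is not the authors' interactive system; it replaces the human visual judgment with a numeric surrogate objective so that an off-the-shelf differential evolution routine can tune two hypothetical color-separation parameters (target hue and tolerance).

```python
# Minimal sketch: differential evolution tuning color-separation parameters on a
# fake hue channel; in interactive DE, the objective below would be a human score.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
image_hue = rng.uniform(0, 180, size=(64, 64))      # fake hue channel
mask_true = np.abs(image_hue - 35) < 10             # "covered text" pixels to reveal

def objective(params):
    target_hue, tol = params
    mask = np.abs(image_hue - target_hue) < tol
    # Negative overlap with the desired region (a human would judge this visually).
    return -np.mean(mask == mask_true)

result = differential_evolution(objective, bounds=[(0, 180), (1, 40)], seed=2)
print(result.x, -result.fun)
```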

  11. Relative effectiveness of kinetic analysis vs single point readings for classifying environmental samples based on community-level physiological profiles (CLPP)

    NASA Technical Reports Server (NTRS)

    Garland, J. L.; Mills, A. L.; Young, J. S.

    2001-01-01

    The relative effectiveness of average-well-color-development-normalized single-point absorbance readings (AWCD) versus the kinetic parameters μm, λ, A, and integral (AREA) of the modified Gompertz equation was compared. The equation was fit to the color development curves resulting from the reduction of a redox-sensitive dye during microbial respiration of 95 separate sole carbon sources in microplate wells, for a dilution series of rhizosphere samples from hydroponically grown wheat and potato ranging in inoculum density from 1 × 10⁴ to 4 × 10⁶ cells mL⁻¹. Patterns generated with each parameter were analyzed using principal component analysis (PCA) and discriminant function analysis (DFA) to test relative resolving power. Samples of equivalent cell density (undiluted samples) were correctly classified by rhizosphere type for all parameters based on DFA of the first five PC scores. Analysis of undiluted and 1:4 diluted samples resulted in misclassification of at least two of the wheat samples for all parameters except the AWCD-normalized (0.50 abs. units) data, and analysis of undiluted, 1:4, and 1:16 diluted samples resulted in misclassification for all parameter types. Ordination of samples along the first principal component (PC) was correlated with inoculum density in analyses performed on all of the kinetic parameters, but no such influence was seen for the AWCD-derived results. The carbon sources responsible for classification differed among the variable types, with the exception of AREA and A, which were strongly correlated. These results indicate that the use of kinetic parameters for pattern analysis in CLPP may provide some additional information, but only if the influence of inoculum density is carefully considered. Copyright © 2001 Elsevier Science Ltd. All rights reserved.
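
    For reference, a minimal fit of the modified Gompertz equation (the Zwietering parameterisation is assumed here) to a synthetic well-colour curve, recovering μm, λ, A and the AREA integral:

```python
# Illustrative sketch with synthetic data, not the study's plate readings.
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

def gompertz(t, A, mu_m, lam):
    # Zwietering et al. parameterisation of the modified Gompertz equation.
    return A * np.exp(-np.exp(mu_m * np.e / A * (lam - t) + 1.0))

t = np.linspace(0, 72, 37)                        # hours
rng = np.random.default_rng(3)
y = gompertz(t, 1.2, 0.08, 12.0) + 0.02 * rng.normal(size=t.size)

(A, mu_m, lam), _ = curve_fit(gompertz, t, y, p0=[1.0, 0.05, 10.0])
area = trapezoid(gompertz(t, A, mu_m, lam), t)    # the AREA (integral) parameter
print(A, mu_m, lam, area)
```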

  12. SPOTting Model Parameters Using a Ready-Made Python Package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2017-04-01

    The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from the workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: to parameterize the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code, while providing maximal power for parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
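
    The snippet below does not reproduce SPOTPY's API; it is a package-free sketch of the task such a tool automates, namely drawing parameter samples from assumed uniform distributions and scoring them against the Rosenbrock function mentioned above.

```python
# Minimal, hand-rolled parameter search; consult the SPOTPY documentation for
# the package's own setup classes and sampling algorithms.
import numpy as np

rng = np.random.default_rng(4)

def rosenbrock(x, y):
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def objective(params):
    x, y = params
    return rosenbrock(x, y)          # 0 at the optimum (1, 1)

n_samples = 5000
samples = rng.uniform([-2, -2], [2, 2], size=(n_samples, 2))  # assumed uniform priors
scores = np.array([objective(p) for p in samples])
best = samples[scores.argmin()]
print("best parameter set:", best, "objective:", scores.min())
```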

  13. SPOTting Model Parameters Using a Ready-Made Python Package.

    PubMed

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from the workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: to parameterize the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code, while providing maximal power for parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.

  14. SPOTting Model Parameters Using a Ready-Made Python Package

    PubMed Central

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from the workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: to parameterize the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code, while providing maximal power for parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function. PMID:26680783

  15. Estimating the Expected Value of Sample Information Using the Probabilistic Sensitivity Analysis Sample

    PubMed Central

    Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny

    2015-01-01

    Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
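
    A minimal sketch of the regression-based idea, under simplifying assumptions (two decision options, a normally distributed outcome, a cubic polynomial standing in for the nonparametric smoother, and made-up net-benefit numbers):

```python
# Regression-based EVSI sketch: regress incremental net benefit on a summary
# statistic of simulated study data, then compare expected maxima. Illustrative only.
import numpy as np

rng = np.random.default_rng(5)
n_psa = 5000

# PSA sample: uncertain mean treatment effect theta and the resulting
# incremental net benefit (INB) of option B versus option A.
theta = rng.normal(0.2, 0.4, n_psa)
inb = 10000 * theta - 1500                      # hypothetical net-benefit model

# Proposed study: n new patients measuring theta with known noise; the summary
# statistic of each simulated data set is its sample mean.
n_new, sd_obs = 50, 1.0
summary = theta + rng.normal(0, sd_obs / np.sqrt(n_new), n_psa)

# Regress INB on the summary statistic (cubic fit as a stand-in smoother).
coeffs = np.polyfit(summary, inb, 3)
fitted = np.polyval(coeffs, summary)            # estimates E[INB | simulated data]

evsi = np.mean(np.maximum(fitted, 0.0)) - max(np.mean(inb), 0.0)
print("per-patient EVSI estimate:", evsi)
```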

  16. Comparison of haematology, coagulation and clinical chemistry parameters in blood samples from the sublingual vein and vena cava in Sprague-Dawley rats.

    PubMed

    Seibel, J; Bodié, K; Weber, S; Bury, D; Kron, M; Blaich, G

    2010-10-01

    The investigation of clinical pathology parameters (haematology, clinical chemistry and coagulation) is an important part of the preclinical evaluation of drug safety. However, the blood sampling method employed should avoid or minimize stress and injury in laboratory animals. In the present study, we compared the clinical pathology results from blood samples collected terminally from the vena cava (VC) immediately before necropsy with samples taken from the sublingual vein (VS) also prior to necropsy in order to determine whether the sampling method has an influence on clinical pathology parameters. Forty-six 12-week-old male Sprague-Dawley rats were assigned to two groups (VC or VS; n = 23 each). All rats were anaesthetized with isoflurane prior to sampling. In the VC group, blood was withdrawn from the inferior VC. For VS sampling, the tongue was gently pulled out and the VS was punctured. The haematology, coagulation and clinical chemistry parameters were compared. Equivalence was established for 13 parameters, such as mean corpuscular volume, white blood cells and calcium. No equivalence was found for the remaining 26 parameters, although they were considered to be similar when compared with the historical data and normal ranges. The most conspicuous finding was that activated prothrombin time was 30.3% less in blood taken from the VC (16.6 ± 0.89 s) than that in the VS samples (23.8 ± 1.58 s). Summing up, blood sampling from the inferior VC prior to necropsy appears to be a suitable and reliable method for terminal blood sampling that reduces stress and injury to laboratory rats in preclinical drug safety studies.

  17. Time Courses of Inflammatory Markers after Aneurysmal Subarachnoid Hemorrhage and Their Possible Relevance for Future Studies.

    PubMed

    Höllig, Anke; Stoffel-Wagner, Birgit; Clusmann, Hans; Veldeman, Michael; Schubert, Gerrit A; Coburn, Mark

    2017-01-01

    Aneurysmal subarachnoid hemorrhage triggers an intense inflammatory response, which is suspected to increase the risk for secondary complications such as delayed cerebral ischemia (DCI). However, to date, monitoring of the inflammatory response to detect secondary complications such as DCI has not become part of routine clinical diagnostics. Here, we aim to illustrate the time courses of inflammatory parameters after aneurysmal subarachnoid hemorrhage (aSAH) and to discuss the problems of inflammatory parameters as biomarkers, but also their possible relevance for a deeper understanding of the pathophysiology after aSAH and for sophisticated planning of future studies. In this prospective cohort study, 109 patients with aSAH were initially included; n = 28 patients had to be excluded. Serum and, if possible, cerebrospinal fluid (CSF) samples (n = 48) were retrieved at days 1, 4, 7, 10, and 14 after aSAH. Samples were analyzed for leukocyte count and C-reactive protein (CRP) (serum samples only) as well as matrix metallopeptidase 9 (MMP9), intercellular adhesion molecule 1 (ICAM1), and leukemia inhibitory factor (LIF) (both serum and CSF samples). Time courses of the inflammatory parameters were displayed and related to the occurrence of DCI. We illustrate the time courses of leukocyte count, CRP, MMP9, ICAM1, and LIF in patients' serum samples from the first until the 14th day after aSAH. Time courses of MMP9, ICAM1, and LIF in CSF samples are also demonstrated. No significant difference in the time courses was found in relation to the occurrence of DCI. We suspect that the wide range of the measured values hampers their interpretation and use as biomarkers. However, understanding the inflammatory response after aSAH and generating a multicenter database may facilitate further studies: realistic sample size calculations on the basis of a multicenter database will increase the quality and clinical relevance of the acquired results.

  18. CosmoSIS: A system for MC parameter estimation

    DOE PAGES

    Bridle, S.; Dodelson, S.; Jennings, E.; ...

    2015-12-23

    CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, there is nothing about the framework itself, nor in the Markov Chain Monte Carlo technique, that is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two different communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.

  19. Design of state-feedback controllers including sensitivity reduction, with applications to precision pointing

    NASA Technical Reports Server (NTRS)

    Hadass, Z.

    1974-01-01

    The design procedure for state-feedback controllers was described and the considerations for the selection of the design parameters were given. The frequency domain properties of single-input single-output systems using state feedback controllers are analyzed, and desirable phase and gain margin properties are demonstrated. Special consideration is given to the design of controllers for tracking systems, especially those designed to track polynomial commands. As an example, a controller was designed for a tracking telescope with a polynomial tracking requirement and some special features such as actuator saturation and multiple measurements, one of which is sampled. The resulting system has a tracking performance that compares favorably with a much more complicated digital aided tracker. Parameter sensitivity reduction was treated by considering the variable parameters as random variables. A performance index is defined as a weighted sum of the state and control covariances that result from both the random system disturbances and the parameter uncertainties, and is minimized numerically by adjusting a set of free parameters.

  20. Extraction of acetanilides in rice using ionic liquid-based matrix solid phase dispersion-solvent flotation.

    PubMed

    Zhang, Liyuan; Wang, Changyuan; Li, Zuotong; Zhao, Changjiang; Zhang, Hanqi; Zhang, Dongjie

    2018-04-15

    Ionic liquid-based matrix solid phase dispersion-solvent flotation coupled with high performance liquid chromatography was developed for the determination of acetanilide herbicides, including metazachlor, propanil, alachlor, propisochlor, pretilachlor, and butachlor, in rice samples. Several experimental parameters, including the type of dispersant, the mass ratio of dispersant to sample, the pH of the sample solution, the type of extraction solvent, the type of ionic liquid, the flotation time, and the flow rate of N₂, were optimized. The average recoveries of the acetanilide herbicides at spiked concentrations of 50, 125, and 250 µg/kg ranged from 89.4% to 108.7%, relative standard deviations were equal to or lower than 7.1%, and the limits of quantification were in the range of 38.0 to 84.7 µg/kg. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Statistical and population genetics issues of two Hungarian datasets from the aspect of DNA evidence interpretation.

    PubMed

    Szabolcsi, Zoltán; Farkas, Zsuzsa; Borbély, Andrea; Bárány, Gusztáv; Varga, Dániel; Heinrich, Attila; Völgyi, Antónia; Pamjav, Horolma

    2015-11-01

    When the DNA profile from a crime scene matches that of a suspect, the weight of the DNA evidence depends on the unbiased estimation of the match probability of the profiles. For this reason, it is necessary to establish and expand databases that reflect the actual allele frequencies in the population concerned. 21,473 complete DNA profiles from Databank samples were used to establish the allele frequency database to represent the population of Hungarian suspects. We used fifteen STR loci (PowerPlex ESI16), including five new ESS loci. The aim was to calculate the statistical, forensic efficiency parameters for the Databank samples and compare the newly detected data to the earlier report. Population substructure caused by relatedness may influence the estimated frequency of profiles. As our Databank profiles were considered non-random samples, possible relationships between the suspects can be assumed. Therefore, the population inbreeding effect was estimated using the FIS calculation. The overall inbreeding parameter was found to be 0.0106. Furthermore, we tested the impact of the two allele frequency datasets on 101 randomly chosen STR profiles, including full and partial profiles. The 95% confidence interval estimates for the profile frequencies (pM) resulted in a tighter range when we used the new dataset compared to the previously published ones. We found that FIS had less effect on frequency values in the 21,473 samples than the application of a minimum allele frequency. No genetic substructure was detected by STRUCTURE analysis. Due to the low level of the inbreeding effect and the high number of samples, the new dataset provides unbiased and precise estimates of LR for statistical interpretation of forensic casework and allows us to use lower allele frequencies. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
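
    As a small illustration of how an inbreeding coefficient of this magnitude enters genotype-frequency (and hence match-probability) calculations, the standard adjustment P(AA) = p² + p(1−p)F and P(AB) = 2pq(1−F) can be evaluated with hypothetical allele frequencies:

```python
# Hypothetical allele frequencies; only the F_IS value is taken from the abstract.
p, q, f_is = 0.12, 0.08, 0.0106

p_homozygote = p ** 2 + p * (1 - p) * f_is     # P(AA) with inbreeding adjustment
p_heterozygote = 2 * p * q * (1 - f_is)        # P(AB) with inbreeding adjustment
print(p_homozygote, p_heterozygote)
```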

  2. [Synchronous extraction and determination of phenoxy acid herbicides in water by on-line monolithic solid phase microextraction-high performance liquid chromatography].

    PubMed

    Wang, Jiabin; Wu, Fangling; Zhao, Qi

    2015-08-01

    A C18 monolithic capillary column was utilized as the solid phase microextraction column to construct an in-tube SPME-HPLC system, which was used to simultaneously extract and detect five phenoxy acid herbicides, including 2,4-dichlorophenoxyacetic acid (2,4-D), 2-(2-chloro)-phenoxy propionic acid (2,2-CPPA), 2-(3-chloro)-phenoxy propionic acid (2,3-CPPA), phenoxy propionic acid (PPA) and 2-(2,4-dichlorophenoxy) propionic acid (2,4-DP). The operating parameters of the in-tube SPME-HPLC system, including the length of the monolithic column, the sampling flow rate, the sampling time, the elution flow rate and the elution time, were investigated in detail. The optimized operating parameters were as follows: the length of the monolithic column was 20 cm, the sampling flow rate was 0.04 mL/min, the sampling time was 13 min, the elution flow rate was 0.02 mL/min, and the elution time was 5 min. Under the optimized conditions, the detection limits of the five phenoxy acid herbicides were as follows: 9 µg/L for PPA, 4 µg/L for 2,2-CPPA, 4 µg/L for 2,3-CPPA, 5 µg/L for 2,4-D, and 5 µg/L for 2,4-DP. Compared with the HPLC method with direct injection, the combined system showed good enrichment factors for the analytes. The recoveries of the five phenoxy acid herbicides were between 79.0% and 98.0% (RSD ≤ 3.9%). This method was successfully used to detect the five phenoxy acid herbicides in water samples with satisfactory results.

  3. Blood gases, biochemistry and haematology of Galápagos hawksbill turtles (Eretmochelys imbricata)

    PubMed Central

    Muñoz-Pérez, Juan Pablo; Hirschfeld, Maximilian; Alarcón-Ruales, Daniela; Denkinger, Judith; Castañeda, Jason Guillermo; García, Juan; Lohmann, Kenneth J.

    2017-01-01

    The hawksbill turtle, Eretmochelys imbricata, is a marine chelonian with a circum-global distribution, but the species is critically endangered and has nearly vanished from the eastern Pacific. Although reference blood parameter intervals have been published for many chelonian species and populations, including nesting Atlantic hawksbills, no such baseline biochemical and blood gas values have been reported for wild Pacific hawksbill turtles. Blood samples were drawn from eight hawksbill turtles captured in near shore foraging locations within the Galápagos archipelago over a period of four sequential years; three of these turtles were recaptured and sampled on multiple occasions. Of the eight sea turtles sampled, five were immature and of unknown sex, and the other three were females. A portable blood analyzer was used to obtain near immediate field results for a suite of blood gas and chemistry parameters. Values affected by temperature were corrected in two ways: (i) with standard formulas and (ii) with auto-corrections made by the portable analyzer. A bench top blood chemistry analyzer was used to measure a series of biochemistry parameters from plasma. Standard laboratory haematology techniques were employed for red and white blood cell counts and to determine haematocrit manually, which was compared to the haematocrit values generated by the portable analyzer. The values reported in this study provide reference data that may be useful in comparisons among populations and in detecting changes in health status among Galápagos sea turtles. The findings might also be helpful in future efforts to demonstrate associations between specific biochemical parameters and disease or environmental disasters. PMID:28496982

  4. Blood gases, biochemistry and haematology of Galápagos hawksbill turtles (Eretmochelys imbricata).

    PubMed

    Muñoz-Pérez, Juan Pablo; Lewbart, Gregory A; Hirschfeld, Maximilian; Alarcón-Ruales, Daniela; Denkinger, Judith; Castañeda, Jason Guillermo; García, Juan; Lohmann, Kenneth J

    2017-01-01

    The hawksbill turtle, Eretmochelys imbricata, is a marine chelonian with a circum-global distribution, but the species is critically endangered and has nearly vanished from the eastern Pacific. Although reference blood parameter intervals have been published for many chelonian species and populations, including nesting Atlantic hawksbills, no such baseline biochemical and blood gas values have been reported for wild Pacific hawksbill turtles. Blood samples were drawn from eight hawksbill turtles captured in near shore foraging locations within the Galápagos archipelago over a period of four sequential years; three of these turtles were recaptured and sampled on multiple occasions. Of the eight sea turtles sampled, five were immature and of unknown sex, and the other three were females. A portable blood analyzer was used to obtain near immediate field results for a suite of blood gas and chemistry parameters. Values affected by temperature were corrected in two ways: (i) with standard formulas and (ii) with auto-corrections made by the portable analyzer. A bench top blood chemistry analyzer was used to measure a series of biochemistry parameters from plasma. Standard laboratory haematology techniques were employed for red and white blood cell counts and to determine haematocrit manually, which was compared to the haematocrit values generated by the portable analyzer. The values reported in this study provide reference data that may be useful in comparisons among populations and in detecting changes in health status among Galápagos sea turtles. The findings might also be helpful in future efforts to demonstrate associations between specific biochemical parameters and disease or environmental disasters.

  5. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase, multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field-observed values. Four sensitivity analysis (SA) approaches are investigated: analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine. Results suggest that these approaches give consistent measures of the impacts of the major hydrologic parameters on the response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets, output response variables, and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
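
    One of the four approaches, standardized regression coefficients, can be sketched in a few lines; the "model" below is a made-up function rather than the Community Land Model, and the parameter ranges are assumptions:

```python
# Standardized regression coefficients (SRC) sensitivity sketch: standardize the
# sampled parameters and the response, fit a linear model, and rank parameters
# by coefficient magnitude.
import numpy as np

rng = np.random.default_rng(6)
n, k = 500, 4
params = rng.uniform(0, 1, size=(n, k))                       # sampled parameter sets
response = 3 * params[:, 0] + 0.5 * params[:, 2] ** 2 + 0.1 * rng.normal(size=n)

X = (params - params.mean(axis=0)) / params.std(axis=0)       # standardize inputs
y = (response - response.mean()) / response.std()             # standardize output
src, *_ = np.linalg.lstsq(X, y, rcond=None)                   # standardized coefficients
print("parameter ranking:", np.argsort(-np.abs(src)), src)
```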

  6. On the relation between correlation dimension, approximate entropy and sample entropy parameters, and a fast algorithm for their calculation

    NASA Astrophysics Data System (ADS)

    Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw

    2012-12-01

    We explore the relation between correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To get an exact algorithmic relation between the three parameters we construct a very fast algorithm for simultaneous calculations of the above, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10⁴ points within minutes with the use of an average notebook computer.
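
    For readers who want a reference point, here is a brute-force (not speed-optimized) sample entropy implementation that, like the fast algorithm described above, uses every point of the series as a template:

```python
# Straightforward SampEn(m, r): count template matches of length m and m+1
# under the Chebyshev distance and return -ln(A/B).
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy with tolerance r given as a fraction of the signal SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev
            count += np.sum(dist <= tol) - 1                         # exclude self-match
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(7)
print(sample_entropy(rng.normal(size=1000)))
```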

  7. Model-based quantification of image quality

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Miller, Keith W.; Park, Stephen K.

    1989-01-01

    In 1982, Park and Schowengerdt published an end-to-end analysis of a digital imaging system quantifying three principal degradation components: (1) image blur - blurring caused by the acquisition system, (2) aliasing - caused by insufficient sampling, and (3) reconstruction blur - blurring caused by the imperfect interpolative reconstruction. This analysis, which measures degradation as the square of the radiometric error, includes the sample-scene phase as an explicit random parameter and characterizes the image degradation caused by imperfect acquisition and reconstruction together with the effects of undersampling and random sample-scene phases. In a recent paper Mitchell and Netravelli displayed the visual effects of the above mentioned degradations and presented subjective analysis about their relative importance in determining image quality. The primary aim of the research is to use the analysis of Park and Schowengerdt to correlate their mathematical criteria for measuring image degradations with subjective visual criteria. Insight gained from this research can be exploited in the end-to-end design of optical systems, so that system parameters (transfer functions of the acquisition and display systems) can be designed relative to each other, to obtain the best possible results using quantitative measurements.

  8. Multi-parameters monitoring during traditional Chinese medicine concentration process with near infrared spectroscopy and chemometrics

    NASA Astrophysics Data System (ADS)

    Liu, Ronghua; Sun, Qiaofeng; Hu, Tian; Li, Lian; Nie, Lei; Wang, Jiayue; Zhou, Wanhui; Zang, Hengchang

    2018-03-01

    As a powerful process analytical technology (PAT) tool, near infrared (NIR) spectroscopy has been widely used in real-time monitoring. In this study, NIR spectroscopy was applied to monitor multiple parameters of the traditional Chinese medicine (TCM) Shenzhiling oral liquid during the concentration process to guarantee the quality of the product. Five lab scale batches were employed to construct quantitative models to determine five chemical ingredients and a physical property (sample density) during the concentration process. Paeoniflorin, albiflorin, liquiritin and sample density were modeled by partial least squares regression (PLSR), while the contents of glycyrrhizic acid and cinnamic acid were modeled by support vector machine regression (SVMR). Standard normal variate (SNV) and/or Savitzky-Golay (SG) smoothing with derivative methods were adopted for spectral pretreatment. Variable selection methods including correlation coefficient (CC), competitive adaptive reweighted sampling (CARS) and interval partial least squares regression (iPLS) were performed to optimize the models. The results indicated that NIR spectroscopy is an effective tool for successfully monitoring the concentration process of Shenzhiling oral liquid.
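
    A minimal sketch of the PLSR modelling step on synthetic spectra; the real models were built on SNV/Savitzky-Golay pretreated NIR spectra and reference assay values, which are not reproduced here.

```python
# PLS regression sketch with scikit-learn on synthetic "spectra"; the analyte
# relation is fabricated for illustration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_samples, n_wavelengths = 60, 200
spectra = rng.normal(size=(n_samples, n_wavelengths))
paeoniflorin = spectra[:, 40] * 2.0 + spectra[:, 120] + 0.05 * rng.normal(size=n_samples)

pls = PLSRegression(n_components=5)
scores = cross_val_score(pls, spectra, paeoniflorin, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```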

  9. Optimization of the coplanar interdigital capacitive sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yunzhi; Zhan, Zheng; Bowler, Nicola

    2017-02-01

    Interdigital capacitive sensors are applied in nondestructive testing and material property characterization of low-conductivity materials. The sensor performance is typically described based on the penetration depth of the electric field into the sample material, the sensor signal strength and its sensitivity. These factors all depend on the geometry and material properties of the sensor and sample. In this paper, a detailed analysis is provided, through finite element simulations, of the ways in which the sensor's geometrical parameters affect its performance. The geometrical parameters include the number of digits forming the interdigital electrodes and the ratio of digit width to their separation. In addition, the influence of the presence or absence of a metal backplane on the sample is analyzed. Further, the effects of sensor substrate thickness and material on signal strength are studied. The results of the analysis show that it is necessary to take into account a trade-off between the desired sensitivity and penetration depth when designing the sensor. Parametric equations are presented to assist the sensor designer or nondestructive evaluation specialist in optimizing the design of a capacitive sensor.

  10. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    PubMed Central

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external coefficient or internal coefficient has a negative influence on the sampling level. The changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a complete picture of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when the parameter values are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847

  11. Probing microstructural information of anisotropic scattering media using rotation-independent polarization parameters.

    PubMed

    Sun, Minghao; He, Honghui; Zeng, Nan; Du, E; Guo, Yihong; Peng, Cheng; He, Yonghong; Ma, Hui

    2014-05-10

    Polarization parameters contain rich information on the micro- and macro-structure of scattering media. However, many of these parameters are sensitive to the spatial orientation of anisotropic media, and may not effectively reveal the microstructural information. In this paper, we take polarization images of different textile samples at different azimuth angles. The results demonstrate that the rotation insensitive polarization parameters from rotating linear polarization imaging and Mueller matrix transformation methods can be used to distinguish the characteristic features of different textile samples. Further examinations using both experiments and Monte Carlo simulations reveal that the residue rotation dependence in these polarization parameters is due to the oblique incidence illumination. This study shows that such rotation independent parameters are potentially capable of quantitatively classifying anisotropic samples, such as textiles or biological tissues.

  12. On the effect of model parameters on forecast objects

    NASA Astrophysics Data System (ADS)

    Marzban, Caren; Jones, Corinne; Li, Ning; Sandgathe, Scott

    2018-04-01

    Many physics-based numerical models produce a gridded, spatial field of forecasts, e.g., a temperature map. The field for some quantities generally consists of spatially coherent and disconnected objects. Such objects arise in many problems, including precipitation forecasts in atmospheric models, eddy currents in ocean models, and models of forest fires. Certain features of these objects (e.g., location, size, intensity, and shape) are generally of interest. Here, a methodology is developed for assessing the impact of model parameters on the features of forecast objects. The main ingredients of the methodology include the use of (1) Latin hypercube sampling for varying the values of the model parameters, (2) statistical clustering algorithms for identifying objects, (3) multivariate multiple regression for assessing the impact of multiple model parameters on the distribution (across the forecast domain) of object features, and (4) methods for reducing the number of hypothesis tests and controlling the resulting errors. The final output of the methodology is a series of box plots and confidence intervals that visually display the sensitivities. The methodology is demonstrated on precipitation forecasts from a mesoscale numerical weather prediction model.
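
    Step (1) of the methodology can be sketched with SciPy's quasi-Monte Carlo module; the parameter dimensions and bounds below are placeholders rather than the model's actual parameters:

```python
# Latin hypercube sample of model-parameter values, scaled to assumed bounds.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=9)
unit_sample = sampler.random(n=50)                       # 50 runs, 3 parameters in [0, 1)
lower, upper = np.array([0.1, 1.0, 0.0]), np.array([0.9, 5.0, 2.0])
param_sets = qmc.scale(unit_sample, lower, upper)        # scaled to physical ranges
print(param_sets[:3])
```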

  13. Reporting of various methodological and statistical parameters in negative studies published in prominent Indian Medical Journals: a systematic review.

    PubMed

    Charan, J; Saxena, D

    2014-01-01

    Biased negative studies not only reflect poor research effort but also have an impact on 'patient care', as they prevent further research with similar objectives, leading to potential research areas remaining unexplored. Hence, published 'negative studies' should be methodologically strong. All parameters that may help a reader to judge the validity of the results and conclusions should be reported in published negative studies. There is a paucity of data on the reporting of statistical and methodological parameters in negative studies published in Indian medical journals. The present systematic review was designed with the aim of critically evaluating negative studies published in prominent Indian medical journals for the reporting of statistical and methodological parameters. Systematic review. All negative studies published in 15 Science Citation Indexed (SCI) medical journals published from India were included in the present study. Investigators involved in the study evaluated all negative studies for the reporting of various parameters. The primary endpoints were the reporting of "power" and "confidence interval." Power was reported in 11.8% of studies. Confidence interval was reported in 15.7% of studies. The majority of parameters, such as sample size calculation (13.2%), type of sampling method (50.8%), name of statistical tests (49.1%), adjustment for multiple endpoints (1%), and post hoc power calculation (2.1%), were reported poorly. The frequency of reporting was higher in clinical trials than in other study designs and in journals with an impact factor greater than 1 compared with journals with an impact factor less than 1. Negative studies published in prominent Indian medical journals do not report statistical and methodological parameters adequately, and this may create problems in the critical appraisal of the findings reported in these journals by their readers.

  14. Improved Horvitz-Thompson Estimation of Model Parameters from Two-phase Stratified Samples: Applications in Epidemiology

    PubMed Central

    Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal

    2009-01-01

    The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455
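
    A hedged, self-contained sketch of the weighting idea (not the survey package's implementation): a Horvitz-Thompson total estimated from a phase-two subsample, followed by a simple ratio calibration of the weights to a known cohort total of an auxiliary variable correlated with the outcome.

```python
# Horvitz-Thompson (IPW) estimation with a basic ratio calibration; all data
# below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(10)
n_cohort = 10000
aux = rng.normal(50, 10, n_cohort)               # auxiliary variable, known for everyone
outcome = 2.0 * aux + rng.normal(0, 5, n_cohort) # outcome, observed only at phase two

# Phase-two subsample with known inclusion probability.
pi = 0.05
sampled = rng.random(n_cohort) < pi
w = np.full(sampled.sum(), 1.0 / pi)             # design (IPW) weights

ht_total = np.sum(w * outcome[sampled])          # Horvitz-Thompson estimate

# Ratio calibration: rescale weights so they reproduce the known auxiliary total.
w_cal = w * aux.sum() / np.sum(w * aux[sampled])
cal_total = np.sum(w_cal * outcome[sampled])

print("true total:", outcome.sum())
print("HT estimate:", ht_total, " calibrated estimate:", cal_total)
```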

  15. Solid-phase microextraction gas chromatography-mass spectrometry determination of fragrance allergens in baby bathwater.

    PubMed

    Lamas, J Pablo; Sanchez-Prado, Lucia; Garcia-Jares, Carmen; Llompart, Maria

    2009-07-01

    A method based on solid-phase microextraction (SPME) and gas chromatography-mass spectrometry (GC-MS) has been optimized for the determination of fragrance allergens in water samples. This is the first SPME-based study devoted to this family of cosmetic ingredients. The influence of parameters such as fibre coating, extraction and desorption temperatures, salting-out effect and sampling mode on the extraction efficiency has been studied by means of a mixed-level factorial design, which allowed the study of the main effects as well as two-factor interactions. Excluding desorption temperature, the other parameters were, in general, very important for the achievement of high response. The final procedure was based on headspace sampling at 100 degrees C, using polydimethylsiloxane/divinylbenzene fibres. The method showed good linearity and precision for all compounds, with detection limits ranging from 0.001 to 0.3 ng mL⁻¹. Reliability was demonstrated through the evaluation of the recoveries in different real water samples, including baby bathwater and swimming pool water. The absence of matrix effects allowed the use of external standard calibration to quantify the target compounds in the samples. The proposed procedure was applied to the determination of allergens in several real samples. All the target compounds were found in the samples, and, in some cases, at quite high concentrations. The presence and the levels of these chemicals in baby bathwater should be a matter of concern.

  16. Ultra-low velocity zones beneath the Philippine and Tasman Seas revealed by a trans-dimensional Bayesian waveform inversion

    NASA Astrophysics Data System (ADS)

    Pachhai, Surya; Dettmer, Jan; Tkalčić, Hrvoje

    2015-11-01

    Ultra-low velocity zones (ULVZs) are small-scale structures in the Earth's lowermost mantle inferred from the analysis of seismological observations. These structures exhibit a strong decrease in compressional (P)-wave velocity and shear (S)-wave velocity, and an increase in density. Quantifying the elastic properties of ULVZs is crucial for understanding their physical origin, which has been hypothesized either as partial melting, iron enrichment, or a combination of the two. Possible disambiguation of these hypotheses can lead to a better understanding of the dynamic processes of the lowermost mantle, such as percolation, stirring and thermochemical convection. To date, ULVZs have been predominantly studied by forward waveform modelling of seismic waves that sample the core-mantle boundary region. However, ULVZ parameters (i.e. velocity, density, and vertical and lateral extent) obtained through forward modelling are poorly constrained because inferring Earth structure from seismic observations is a non-linear inverse problem with inherent non-uniqueness. To address these issues, we developed a trans-dimensional hierarchical Bayesian inversion that enables rigorous estimation of ULVZ parameter values and their uncertainties, including the effects of model selection. The model selection includes treating the number of layers and the vertical extent of the ULVZ as unknowns. The posterior probability density (solution to the inverse problem) of the ULVZ parameters is estimated by reversible jump Markov chain Monte Carlo sampling that employs parallel tempering to improve efficiency/convergence. First, we apply our method to study the resolution of complex ULVZ structure (including gradually varying structure) by probabilistically inverting simulated noisy waveforms. Then, two data sets sampling the CMB beneath the Philippine and Tasman Seas are considered in the inversion. Our results indicate that both ULVZs are more complex than previously suggested. For the Philippine Sea data, we find a strong decrease in S-wave velocity, which indicates the presence of iron-rich material, although this result is accompanied by larger parameter uncertainties than in a previous study. For the Tasman Sea data, our analysis yields a well-constrained S-wave velocity that gradually decreases with depth. We conclude that this ULVZ represents a partial melt of iron-enriched material with higher melt content near its bottom.
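    As a rough illustration of the parallel tempering used to keep such samplers from getting trapped in a single mode, the sketch below runs tempered Metropolis chains on a toy bimodal posterior and proposes swaps between adjacent temperatures. It is a generic illustration under assumed settings, not the authors' reversible-jump waveform-inversion code.

```python
# Sketch: Metropolis-Hastings with parallel tempering on a toy bimodal target.
# Temperature ladder, proposal scale and target are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):
    # toy bimodal "posterior": mixture of two well-separated Gaussians
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

betas = np.array([1.0, 0.5, 0.25, 0.1])   # inverse temperatures; beta=1 is the target
x = np.zeros(len(betas))                  # one chain per temperature
samples = []

for it in range(20_000):
    # within-temperature Metropolis updates on the tempered posterior
    prop = x + rng.normal(0, 1.0, size=len(betas))
    accept = np.log(rng.random(len(betas))) < betas * (log_post(prop) - log_post(x))
    x = np.where(accept, prop, x)

    # propose a swap between a random pair of adjacent temperatures
    i = rng.integers(len(betas) - 1)
    log_alpha = (betas[i] - betas[i + 1]) * (log_post(x[i + 1]) - log_post(x[i]))
    if np.log(rng.random()) < log_alpha:
        x[i], x[i + 1] = x[i + 1], x[i]

    samples.append(x[0])                  # keep only the beta=1 chain

samples = np.array(samples[5_000:])       # discard burn-in
print("fraction of samples in the x>0 mode:", np.mean(samples > 0))
```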

  17. Occurrence of pesticides in groundwater and sediments and mineralogy of sediments and grain coatings underlying the Rutgers Agricultural Research and Extension Center, Upper Deerfield, New Jersey, 2007

    USGS Publications Warehouse

    Reilly, Timothy J.; Smalling, Kelly L.; Meyer, Michael T.; Sandstrom, Mark W.; Hladik, Michelle; Boehlke, Adam R.; Fishman, Neil S.; Battaglin, William A.; Kuivila, Kathryn

    2014-01-01

    Water and sediment samples were collected from June through October 2007 from seven plots at the Rutgers Agricultural Research and Extension Center in Upper Deerfield, New Jersey, and analyzed for a suite of pesticides (including fungicides) and other physical and chemical parameters (including sediment mineralogy) by the U.S. Geological Survey. Plots were selected for inclusion in this study on the basis of the crops grown and the pesticides used. Forty-one pesticides were detected in 14 water samples; these include 5 fungicides, 13 herbicides, 1 insecticide, and 22 pesticide degradates. The following pesticides and pesticide degradates were detected in 50 percent or more of the groundwater samples: 1-amide-4-hydroxy-chlorothalonil, alachlor sulfonic acid, metolachlor oxanilic acid, metolachlor sulfonic acid, metalaxyl, and simazine. Dissolved-pesticide concentrations ranged from below their instrumental limit of detection to 36 micrograms per liter (for metolachlor sulfonic acid, a degradate of the herbicide metolachlor). The total number of pesticides found in groundwater samples ranged from 0 to 29. Fourteen pesticides were detected in sediment samples from continuous cores collected within each of the seven sampled plots; these include 4 fungicides, 2 herbicides, and 7 pesticide degradates. Pesticide concentrations in sediment samples ranged from below their instrumental limit of detection to 34.2 nanograms per gram (for azoxystrobin). The total number of pesticides found in sediment samples ranged from 0 to 8. Quantitative whole-rock and grain-coating mineralogy of sediment samples was determined by X-ray diffraction. Whole-rock analysis indicated that sediments were predominantly composed of quartz. The materials coating the quartz grains were removed to allow quantification of the trace mineral phases present.

  18. A MegaCam Survey of Outer Halo Satellites. III. Photometric and Structural Parameters

    NASA Astrophysics Data System (ADS)

    Muñoz, Ricardo R.; Côté, Patrick; Santana, Felipe A.; Geha, Marla; Simon, Joshua D.; Oyarzún, Grecco A.; Stetson, Peter B.; Djorgovski, S. G.

    2018-06-01

    We present structural parameters from a wide-field homogeneous imaging survey of Milky Way satellites carried out with the MegaCam imagers on the 3.6 m Canada–France–Hawaii Telescope and 6.5 m Magellan-Clay telescope. Our survey targets an unbiased sample of “outer halo” satellites (i.e., substructures having galactocentric distances greater than 25 kpc) and includes classical dSph galaxies, ultra-faint dwarfs, and remote globular clusters. We combine deep, panoramic gr imaging for 44 satellites and archival gr imaging for 14 additional objects (primarily obtained with the DECam instrument as part of the Dark Energy Survey) to measure photometric and structural parameters for 58 outer halo satellites. This is the largest and most uniform analysis of Milky Way satellites undertaken to date and represents roughly three-quarters (58/81 ≃ 72%) of all known outer halo satellites. We use a maximum-likelihood method to fit four density laws to each object in our survey: exponential, Plummer, King, and Sérsic models. We systematically examine the isodensity contour maps and color–magnitude diagrams for each of our program objects, present a comparison with previous results, and tabulate our best-fit photometric and structural parameters, including ellipticities, position angles, effective radii, Sérsic indices, absolute magnitudes, and surface brightness measurements. We investigate the distribution of outer halo satellites in the size–magnitude diagram and show that the current sample of outer halo substructures spans a wide range in effective radius, luminosity, and surface brightness, with little evidence for a clean separation into star cluster and galaxy populations at the faintest luminosities and surface brightnesses.
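    A maximum-likelihood fit of one of the four density laws named above can be sketched on synthetic data; the example below fits a circular Plummer profile to projected radii of member stars. The synthetic sample, the scale radius and the SciPy-based optimiser are illustrative assumptions, not the survey's actual pipeline.

```python
# Sketch: ML fit of a Plummer surface-density profile to projected radii.
# The Plummer pdf of the projected radius R is p(R) = 2 a^2 R / (a^2 + R^2)^2.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

a_true = 5.0                                   # assumed Plummer scale radius (arcmin)
u = rng.random(2_000)
r = a_true * np.sqrt(u / (1.0 - u))            # inverse-CDF draw from the Plummer law

def neg_log_like(a, radii=r):
    return -np.sum(np.log(2 * a**2 * radii) - 2 * np.log(a**2 + radii**2))

fit = minimize_scalar(neg_log_like, bounds=(0.1, 50.0), method="bounded")
print(f"true a = {a_true:.2f}, ML estimate = {fit.x:.2f}")
```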

  19. Extremes in ecology: Avoiding the misleading effects of sampling variation in summary analyses

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1996-01-01

    Surveys such as the North American Breeding Bird Survey (BBS) produce large collections of parameter estimates. One's natural inclination when confronted with lists of parameter estimates is to look for the extreme values: in the BBS, these correspond to the species that appear to have the greatest changes in population size through time. Unfortunately, extreme estimates are liable to correspond to the most poorly estimated parameters. Consequently, the most extreme parameters may not match up with the most extreme parameter estimates. The ranking of parameter values on the basis of their estimates is a difficult statistical problem. We use data from the BBS and simulations to illustrate the potentially misleading effects of sampling variation in rankings of parameters. We describe empirical Bayes and constrained empirical Bayes procedures which provide partial solutions to the problem of ranking in the presence of sampling variation.
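    The empirical Bayes idea can be sketched with a simple normal-normal shrinkage estimator: noisy estimates with unequal standard errors are pulled toward the overall mean in proportion to their sampling variance, so poorly estimated parameters no longer dominate the extremes. The simulated "trends" and standard errors below are illustrative assumptions, not BBS data.

```python
# Sketch: normal-normal empirical Bayes shrinkage and its effect on which
# parameters appear "most extreme". All simulated quantities are assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 200
theta = rng.normal(0.0, 1.0, n)          # true parameter values
se = rng.uniform(0.2, 3.0, n)            # very unequal sampling standard errors
est = theta + rng.normal(0.0, se)        # raw estimates

# crude method-of-moments estimates of the prior mean and variance
mu = np.average(est, weights=1 / se**2)
tau2 = max(np.var(est) - np.mean(se**2), 1e-6)

shrink = tau2 / (tau2 + se**2)           # shrinkage factor per estimate
eb = mu + shrink * (est - mu)            # empirical Bayes posterior means

top_raw = np.argsort(est)[-5:]           # "most extreme" by raw estimate
top_eb = np.argsort(eb)[-5:]             # "most extreme" after shrinkage
print("mean SE of raw top-5:", se[top_raw].mean())
print("mean SE of EB  top-5:", se[top_eb].mean())
```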

  20. Influence of growth conditions on exchange bias of NiMn-based spin valves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wienecke, Anja; Kruppe, Rahel; Rissing, Lutz

    2015-05-07

    As shown in previous investigations, a correlation between a NiMn-based spin valve's thermal stability and its inherent exchange bias exists, even if the blocking temperature of the antiferromagnet is clearly above the heating temperature and the reason for thermal degradation is mainly diffusion and not the loss of exchange bias. Samples with high exchange bias are thermally more stable than samples with low exchange bias. Those structures promoting a high exchange bias seem to be the same ones that suppress thermally induced diffusion processes (A. Wienecke and L. Rissing, "Relationship between thermal stability and layer-stack/structure of NiMn-based GMR systems," in IEEE Transaction on Magnetic Conference (EMSA 2014)). Many investigations were carried out on the influence of the sputtering parameters as well as the layer thickness on the magnetoresistive effect. The influence of these parameters on the exchange bias and the sample's thermal stability, respectively, has hardly been taken into account. The investigation described here concentrates on the latter issue. The focus lies on the influence of the sputtering parameters and layer thickness of the "starting layers" in the stack and the layers forming the (synthetic) antiferromagnet. This paper includes a guideline for the evaluated sputtering conditions and layer thicknesses to realize a high exchange bias and presumably good thermal stability for NiMn-based spin valves with a synthetic antiferromagnet.

  1. Optimization of crystallization conditions for biological macromolecules.

    PubMed

    McPherson, Alexander; Cudney, Bob

    2014-11-01

    For the successful X-ray structure determination of macromolecules, it is first necessary to identify, usually by matrix screening, conditions that yield some sort of crystals. Initial crystals are frequently microcrystals or clusters, and often have unfavorable morphologies or yield poor diffraction intensities. It is therefore generally necessary to improve upon these initial conditions in order to obtain better crystals of sufficient quality for X-ray data collection. Even when the initial samples are suitable, often marginally, refinement of conditions is recommended in order to obtain the highest quality crystals that can be grown. The quality of an X-ray structure determination is directly correlated with the size and the perfection of the crystalline samples; thus, refinement of conditions should always be a primary component of crystal growth. The improvement process is referred to as optimization, and it entails sequential, incremental changes in the chemical parameters that influence crystallization, such as pH, ionic strength and precipitant concentration, as well as physical parameters such as temperature, sample volume and overall methodology. It also includes the application of some unique procedures and approaches, and the addition of novel components such as detergents, ligands or other small molecules that may enhance nucleation or crystal development. Here, an attempt is made to provide guidance on how optimization might best be applied to crystal-growth problems, and what parameters and factors might most profitably be explored to accelerate and achieve success.

  2. Optimization of crystallization conditions for biological macromolecules

    PubMed Central

    McPherson, Alexander; Cudney, Bob

    2014-01-01

    For the successful X-ray structure determination of macromolecules, it is first necessary to identify, usually by matrix screening, conditions that yield some sort of crystals. Initial crystals are frequently microcrystals or clusters, and often have unfavorable morphologies or yield poor diffraction intensities. It is therefore generally necessary to improve upon these initial conditions in order to obtain better crystals of sufficient quality for X-ray data collection. Even when the initial samples are suitable, often marginally, refinement of conditions is recommended in order to obtain the highest quality crystals that can be grown. The quality of an X-ray structure determination is directly correlated with the size and the perfection of the crystalline samples; thus, refinement of conditions should always be a primary component of crystal growth. The improvement process is referred to as optimization, and it entails sequential, incremental changes in the chemical parameters that influence crystallization, such as pH, ionic strength and precipitant concentration, as well as physical parameters such as temperature, sample volume and overall methodology. It also includes the application of some unique procedures and approaches, and the addition of novel components such as detergents, ligands or other small molecules that may enhance nucleation or crystal development. Here, an attempt is made to provide guidance on how optimization might best be applied to crystal-growth problems, and what parameters and factors might most profitably be explored to accelerate and achieve success. PMID:25372810

  3. Global Seabed Materials and Habitats Mapped: The Computational Methods

    NASA Astrophysics Data System (ADS)

    Jenkins, C. J.

    2016-02-01

    What the seabed is made of has proven difficult to map on the scale of whole ocean-basins. Direct sampling and observation can be augmented with proxy-parameter methods such as acoustics. Both avenues are essential to obtain enough detail and coverage, and also to validate the mapping methods. We focus on the direct observations such as samplings, photo and video, probes, diver and sub reports, and surveyed features. These are often in word-descriptive form: over 85% of the records for site materials are in this form, whether as sample/view descriptions or classifications, or described parameters such as consolidation, color, odor, structures and components. Descriptions are absolutely necessary for unusual materials and for processes - in other words, for research. The dbSEABED project not only holds the largest collection of seafloor materials data worldwide, but also uses advanced computational methods to obtain the best possible coverage and detail. Included in those techniques are linguistic text analysis (e.g., Natural Language Processing, NLP), fuzzy set theory (FST), and machine learning (ML, e.g., Random Forest). These techniques allow efficient and accurate import of huge datasets, thereby making full use of the data that exist. They merge quantitative and qualitative types of data for rich parameter sets, and extrapolate where the data are sparse for best map production. The dbSEABED data resources are now very widely used worldwide in oceanographic research, environmental management, the geosciences, engineering and survey.

  4. The effect of cryopreservation on goat semen characteristics related to sperm freezability.

    PubMed

    Dorado, J; Muñoz-Serrano, A; Hidalgo, M

    2010-08-01

    Seminal quality parameters were used to evaluate the effect of the freeze-thawing procedure on goat sperm characteristics, and to relate possible changes in sperm parameters to cryopreservation success. Semen samples (n=110) were frozen with TRIS and milk-based extenders and thawed. Sperm quality parameters (motility, morphology and acrosome) were compared between fresh and frozen-thawed samples. Sperm freezability was judged by classifying the semen samples as "suitable" or "not suitable" according to the sperm quality parameters assessed before and after thawing. Fertility data were obtained after cervical insemination with frozen semen doses. The ejaculates were grouped into two categories according to their fertility results. In experiment 1, significant differences were found between semen extenders (P<0.001), bucks (P<0.05) and ejaculates within the same male (P<0.05) in terms of sperm quality. There was no seasonal effect (P>0.05) on the majority of the sperm parameters assessed after thawing. Moreover, significant differences (P<0.001) in semen parameters assessed in fresh semen and frozen-thawed samples were found between groups. The effect of the freeze-thawing procedure on sperm quality parameters was also different (P<0.05) between extenders within the same group. The number of sperm quality parameters that had changed after cryopreservation was lower in "suitable" semen samples before and after thawing. In experiment 2, no differences (P>0.05) in semen parameters assessed in fresh semen and frozen-thawed samples were found between groups. The effect of freezing and thawing on sperm quality parameters was different (P<0.05) between extenders within the same group. Only mean beat cross frequency (BCF) values were significantly higher (P<0.05) in TRIS diluted samples that led to successful pregnancies after artificial insemination. In conclusion, CASA-derived motility parameters, together with traditional semen assessment methods, give valuable information on sperm quality before and after freezing. Therefore, the identification of ejaculates as "good" or "bad" based on the fresh and post-thaw semen parameters studied in the present experiment was a good indicator of goat semen freezability, although the fertilizing capacity of frozen-thawed goat spermatozoa is not revealed by this quality study. Copyright © 2010 Elsevier B.V. All rights reserved.

  5. The Impact of Uncertain Physical Parameters on HVAC Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Elizondo, Marcelo A.; Lu, Shuai

    HVAC units are currently one of the major resources providing demand response (DR) in residential buildings. Models of HVAC with DR function can improve understanding of its impact on power system operations and facilitate the deployment of DR technologies. This paper investigates the importance of various physical parameters and their distributions to the HVAC response to DR signals, which is a key step to the construction of HVAC models for a population of units with insufficient data. These parameters include the size of floors, insulation efficiency, the amount of solid mass in the house, and efficiency of the HVAC units. These parameters are usually assumed to follow Gaussian or Uniform distributions. We study the effect of uncertainty in the chosen parameter distributions on the aggregate HVAC response to DR signals, during transient phase and in steady state. We use a quasi-Monte Carlo sampling method with linear regression and Prony analysis to evaluate sensitivity of DR output to the uncertainty in the distribution parameters. The significance ranking on the uncertainty sources is given for future guidance in the modeling of HVAC demand response.
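    A minimal sketch of the quasi-Monte Carlo sampling plus regression-based sensitivity ranking is shown below, using a Sobol sequence from SciPy and standardized regression coefficients; the stand-in HVAC response function and the parameter ranges are illustrative assumptions, not the paper's model.

```python
# Sketch: Sobol (quasi-Monte Carlo) sampling over uncertain parameters and a
# linear-regression sensitivity ranking. Ranges and model are assumptions.
import numpy as np
from scipy.stats import qmc

lower = np.array([100.0, 0.6, 1000.0, 2.5])   # floor area, insulation, thermal mass, COP
upper = np.array([300.0, 0.95, 4000.0, 4.0])

sampler = qmc.Sobol(d=4, scramble=True, seed=5)
X = qmc.scale(sampler.random_base2(m=9), lower, upper)   # 2**9 = 512 samples

def dr_response(x):
    # toy aggregate response to a DR signal
    area, insul, mass, cop = x.T
    return 0.02 * area * (1.2 - insul) - 1e-4 * mass + 0.5 * cop

y = dr_response(X)

# standardized regression coefficients as a first-order sensitivity measure
Xs = (X - X.mean(0)) / X.std(0)
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), Xs]),
                           (y - y.mean()) / y.std(), rcond=None)
for name, b in zip(["floor area", "insulation", "thermal mass", "HVAC COP"], beta[1:]):
    print(f"{name:13s} SRC = {b:+.2f}")
```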

  6. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
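    The simulation idea can be illustrated with a short sketch that draws Poisson-gamma counts at a low mean and small sample size, then compares a method-of-moments estimate of the dispersion parameter with a simplified maximum-likelihood estimate. The chosen mean, dispersion value and sample size are illustrative assumptions, and the MLE below fixes the mean at the sample mean as a simplification.

```python
# Sketch: Poisson-gamma (negative binomial) counts with Var = mu + alpha*mu^2,
# dispersion alpha estimated by method of moments and (simplified) ML.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(6)
mu, alpha, n = 0.5, 1.0, 50          # low sample mean, small sample size (assumed)

def draw(n):
    lam = rng.gamma(shape=1 / alpha, scale=alpha * mu, size=n)   # gamma mixing
    return rng.poisson(lam)

def alpha_mom(y):
    m, s2 = y.mean(), y.var(ddof=1)
    return (s2 - m) / m**2            # can be <= 0 for under-dispersed samples

def alpha_mle(y):
    def nll(log_a):
        r = 1.0 / np.exp(log_a)       # r = 1/alpha; mean fixed at sample mean
        return -stats.nbinom.logpmf(y, n=r, p=r / (r + y.mean())).sum()
    return np.exp(optimize.minimize_scalar(nll, bounds=(-5, 5), method="bounded").x)

est = np.array([(alpha_mom(y), alpha_mle(y)) for y in (draw(n) for _ in range(500))])
print("true alpha = 1.0")
print("MoM: mean %.2f, share non-positive %.2f" % (est[:, 0].mean(), (est[:, 0] <= 0).mean()))
print("MLE: mean %.2f" % est[:, 1].mean())
```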

  7. Sampling design optimization for spatial functions

    USGS Publications Warehouse

    Olea, R.A.

    1984-01-01

    A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
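    The two design indices (average and maximum standard error of estimation) can be sketched for a candidate sampling pattern with ordinary kriging and an assumed covariance model; universal kriging only adds drift terms to the same linear system. The exponential covariance, range and the 3 × 3 candidate pattern below are illustrative assumptions.

```python
# Sketch: ordinary-kriging standard error over a grid for a candidate design,
# summarised by its average and maximum. Covariance model and grid are assumed.
import numpy as np

def cov(h, sill=1.0, corr_range=30.0):
    return sill * np.exp(-h / corr_range)           # exponential covariance model

def ok_std_error(samples, targets):
    d_ss = np.linalg.norm(samples[:, None] - samples[None, :], axis=-1)
    d_st = np.linalg.norm(samples[:, None] - targets[None, :], axis=-1)
    n = len(samples)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d_ss)
    A[-1, -1] = 0.0                                 # Lagrange-multiplier row/column
    b = np.ones((n + 1, len(targets)))
    b[:n] = cov(d_st)
    w = np.linalg.solve(A, b)                       # kriging weights + multiplier
    var = cov(0.0) - np.sum(w * b, axis=0)          # kriging variance at each target
    return np.sqrt(np.maximum(var, 0.0))

grid = np.array([(x, y) for x in np.linspace(0, 100, 21)
                        for y in np.linspace(0, 100, 21)])
design = np.array([(x, y) for x in (10, 50, 90) for y in (10, 50, 90)])  # 3x3 pattern

se = ok_std_error(design, grid)
print(f"average SE = {se.mean():.3f}, maximum SE = {se.max():.3f}")
```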

  8. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    PubMed

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
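    The profile-likelihood construction is easiest to see on a deliberately simple model; the sketch below uses a binomial proportion rather than an IRT model, inverting the likelihood-ratio statistic against the chi-square 0.95 quantile and comparing with the Wald interval. The data values are illustrative assumptions.

```python
# Sketch: profile-likelihood CI for a binomial proportion, i.e. the set of p
# with 2*(logL(p_hat) - logL(p)) <= chi2_{1,0.95}, versus the Wald interval.
import numpy as np
from scipy.stats import chi2

k, n = 3, 20                                    # small sample: 3 successes in 20 trials
p_hat = k / n

def loglik(p):
    return k * np.log(p) + (n - k) * np.log(1 - p)

grid = np.linspace(1e-4, 1 - 1e-4, 100_000)
inside = 2 * (loglik(p_hat) - loglik(grid)) <= chi2.ppf(0.95, df=1)
pl_ci = grid[inside][[0, -1]]

wald_se = np.sqrt(p_hat * (1 - p_hat) / n)
wald_ci = p_hat + np.array([-1, 1]) * 1.96 * wald_se

print("profile-likelihood CI:", np.round(pl_ci, 3))   # stays inside (0, 1)
print("Wald CI:              ", np.round(wald_ci, 3)) # can dip below 0 here
```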

  9. Trans-dimensional joint inversion of seabed scattering and reflection data.

    PubMed

    Steininger, Gavin; Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2013-03-01

    This paper examines joint inversion of acoustic scattering and reflection data to resolve seabed interface roughness parameters (spectral strength, exponent, and cutoff) and geoacoustic profiles. Trans-dimensional (trans-D) Bayesian sampling is applied with both the number of sediment layers and the order (zeroth or first) of auto-regressive parameters in the error model treated as unknowns. A prior distribution that allows fluid sediment layers over an elastic basement in a trans-D inversion is derived and implemented. Three cases are considered: Scattering-only inversion, joint scattering and reflection inversion, and joint inversion with the trans-D auto-regressive error model. Including reflection data improves the resolution of scattering and geoacoustic parameters. The trans-D auto-regressive model further improves scattering resolution and correctly differentiates between strongly and weakly correlated residual errors.

  10. Dirac-Born-Infeld inflation using a one-parameter family of throat geometries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gmeiner, Florian; White, Chris D, E-mail: fgmeiner@nikhef.nl, E-mail: cwhite@nikhef.nl

    2008-02-15

    We demonstrate the possibility of examining cosmological signatures in the Dirac-Born-Infeld (DBI) inflation setup using the BGMPZ solution, a one-parameter family of geometries for the warped throat which interpolate between the Maldacena-Nunez and Klebanov-Strassler solutions. The warp factor is determined numerically and is subsequently used to calculate cosmological observables, including the scalar and tensor spectral indices, for a sample point in the parameter space. As one moves away from the Klebanov-Strassler (KS) solution for the throat, the warp factor is qualitatively different, which leads to a significant change for the observables, but also generically increases the non-Gaussianity of the models. We argue that the different models can potentially be differentiated by current and future experiments.

  11. Physical and chemical parameter correlations with technical and technological characteristics of heating systems and the presence of Legionella spp. in the hot water supply.

    PubMed

    Rakić, Anita; Štambuk-Giljanović, Nives

    2016-02-01

    The purpose of this study was to evaluate the prevalence of Legionella spp. and compare the quality of hot water between four facilities for accommodation located in Southern Croatia (the Split-Dalmatian County). The research included data collection on the technical and technological characteristics in the period from 2009 to 2012. The survey included a type of construction material for the distribution and internal networks, heating system water heater type, and water consumption. Changes in water quality were monitored by determination of the physical and chemical parameters (temperature, pH, free chlorine residual concentrations, iron, zinc, copper and manganese) in the samples, as well as the presence and concentration of bacteria Legionella spp. The temperature is an important factor for the development of biofilms, and it is in negative correlation with the appearance of Legionella spp. Positive correlations between the Fe and Zn concentrations and Legionella spp. were established, while the inhibitory effect of a higher Cu concentration on the Legionella spp. concentration was proven. Legionella spp. were identified in 38/126 (30.2%) of the water samples from the heating system with zinc-coated pipes, as well as in 78/299 (26.1%) of the samples from systems with plastic pipes. A similar number of Legionella spp. positive samples were established regardless of the type of the water heating system (central or independent). The study confirms the necessity of regular microbial contamination monitoring of the drinking water distribution systems (DWDSs).

  12. Stability evaluation of quality parameters for palm oil products at low temperature storage.

    PubMed

    Ramli, Nur Aainaa Syahirah; Mohd Noor, Mohd Azmil; Musa, Hajar; Ghazali, Razmah

    2018-07-01

    Palm oil is one of the major oils and fats produced and traded worldwide. The value of palm oil products is mainly influenced by their quality. According to ISO 17025:2005, accredited laboratories require a quality control procedure with respect to monitoring the validity of tests for determination of quality parameters. This includes the regular use of internal quality control using secondary reference materials. Unfortunately, palm oil reference materials are not currently available. To establish internal quality control samples, the stability of quality parameters needs to be evaluated. In the present study, the stability of quality parameters for palm oil products was examined over 10 months at low temperature storage (6 ± 2 °C). The palm oil products tested included crude palm oil (CPO); refined, bleached and deodorized (RBD) palm oil (RBDPO); RBD palm olein (RBDPOo); and RBD palm stearin (RBDPS). The quality parameters of the oils [i.e. moisture content, free fatty acid content (FFA), iodine value (IV), fatty acids composition (FAC) and slip melting point (SMP)] were determined prior to and throughout the storage period. The moisture, FFA, IV, FAC and SMP for palm oil products changed significantly (P < 0.05), whereas the moisture content for CPO, IV for RBDPO and RBDPOo, stearic acid composition for CPO and linolenic acid composition for CPO, RBDPO, RBDPOo and RBDPS did not (P > 0.05). The stability study indicated that the quality of the palm oil products was stable within the specified limits throughout the storage period at low temperature. The storage conditions preserved the quality of palm oil products throughout the storage period. These findings qualify the use of the palm oil products CPO, RBDPO, RBDPOo and RBDPS as control samples in the validation of test results. © 2017 Society of Chemical Industry.

  13. Global fits of GUT-scale SUSY models with GAMBIT

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin

    2017-12-01

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.

  14. The inference of vector magnetic fields from polarization measurements with limited spectral resolution

    NASA Technical Reports Server (NTRS)

    Lites, B. W.; Skumanich, A.

    1985-01-01

    A method is presented for recovery of the vector magnetic field and thermodynamic parameters from polarization measurement of photospheric line profiles measured with filtergraphs. The method includes magneto-optic effects and may be utilized on data sampled at arbitrary wavelengths within the line profile. The accuracy of this method is explored through inversion of synthetic Stokes profiles subjected to varying levels of random noise, instrumental wavelength resolution, and line profile sampling. The level of error introduced by the systematic effect of profile sampling over a finite fraction of the 5 minute oscillation cycle is also investigated. The results presented here are intended to guide instrumental design and observational procedure.

  15. Generation of uniformly distributed dose points for anatomy-based three-dimensional dose optimization methods in brachytherapy.

    PubMed

    Lahanas, M; Baltas, D; Giannouli, S; Milickovic, N; Zamboglou, N

    2000-05-01

    We have studied the accuracy of statistical parameters of dose distributions in brachytherapy using actual clinical implants. These include the mean, minimum and maximum dose values and the variance of the dose distribution inside the PTV (planning target volume), and on the surface of the PTV. These properties have been studied as a function of the number of uniformly distributed sampling points. These parameters, or the variants of these parameters, are used directly or indirectly in optimization procedures or for a description of the dose distribution. The accurate determination of these parameters depends on the sampling point distribution from which they have been obtained. Some optimization methods ignore catheters and critical structures surrounded by the PTV or alternatively consider as surface dose points only those on the contour lines of the PTV. D_min and D_max are extreme dose values which are either on the PTV surface or within the PTV. They must be avoided for specification and optimization purposes in brachytherapy. Using D_mean and the variance of D, which we have shown to be stable parameters, achieves a more reliable description of the dose distribution on the PTV surface and within the PTV volume than do D_min and D_max. Generation of dose points on the real surface of the PTV is obligatory and the consideration of catheter volumes results in a realistic description of anatomical dose distributions.
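    The stability argument can be sketched numerically: sample increasing numbers of uniformly distributed points inside a spherical stand-in for the PTV, compute a toy dose from a few dwell positions, and watch D_mean settle while D_min and D_max keep drifting. The geometry, source positions and inverse-square dose kernel below are illustrative assumptions, not a clinical dose engine.

```python
# Sketch: convergence of dose-distribution statistics with the number of
# uniformly distributed sampling points. All physical settings are assumed.
import numpy as np

rng = np.random.default_rng(7)
dwells = np.array([[0.0, 0.0, -1.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # cm

def uniform_in_sphere(n, radius=2.0):
    pts = rng.normal(size=(n, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)        # random directions
    return radius * pts * rng.random((n, 1)) ** (1 / 3)      # uniform radii in volume

def dose(points):
    r2 = np.sum((points[:, None, :] - dwells[None, :, :]) ** 2, axis=-1)
    return np.sum(1.0 / np.maximum(r2, 0.01), axis=1)        # toy inverse-square kernel

for n in (100, 1_000, 10_000, 100_000):
    d = dose(uniform_in_sphere(n))
    print(f"n={n:6d}  Dmean={d.mean():7.2f}  var={d.var():9.1f}  "
          f"Dmin={d.min():6.2f}  Dmax={d.max():8.1f}")
```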

  16. Effects of diluting medium and holding time on sperm motility analysis by CASA in ram.

    PubMed

    Mostafapor, Somayeh; Farrokhi Ardebili, Farhad

    2014-01-01

    The aim of this study was to evaluate the effects of dilution rate and holding time on various motility parameters using computer-assisted sperm analysis (CASA). The semen samples were collected from three Ghezel rams. Samples were diluted in seminal plasma (SP), phosphate-buffered saline (PBS) containing 1% bovine serum albumin (BSA) and Bioexcell. The motility parameters computed and recorded by CASA included curvilinear velocity (VCL), straight line velocity (VSL), average path velocity (VAP), straightness (STR), linearity (LIN), amplitude of lateral head displacement (ALH), and beat cross frequency (BCF). In all diluters, the averages of all three sperm movement velocity parameters decreased over time, but this decrease was more pronounced in SP. The average ALH differed significantly between diluters, being higher in Bioexcell than in SP and PBS. The average LIN of sperm diluted in Bioexcell was lower than in the other two diluters at all three times. The motility parameters of sperm diluted in Bioexcell and PBS differed considerably from those of sperm diluted in SP. According to these results, Bioexcell preserves sperm motility better than the other diluters; however, because SP is considered the physiological environment for sperm, motility parameters evaluated in Bioexcell and PBS may not be accurately comparable with those evaluated in SP.

  17. High-Resolution Infrared Spectroscopy and Analysis of the ν_2/ν_4 Bending Dyad and ν_3 Stretching Fundamental of Ruthenium Tetroxide

    NASA Astrophysics Data System (ADS)

    Faye, Mbaye; Reymond-Laruinaz, Sébastien; Vander Auwera, Jean; Boudon, Vincent; Doizi, Denis; Manceron, Laurent

    2017-06-01

    RuO_4 is a heavy tetrahedral molecule which has practical uses for several industrial fields. Due to its chemical toxicity and the radiological impact of its 103 and 106 isotopologues, the possible remote sensing of this compound in the atmosphere has renewed interest in its spectroscopic properties. We investigate here for the first time at high resolution the bending dyad region in the far IR and the line intensities in the ν_3 stretching region. Firstly, new high resolution FTIR spectra of the bending modes region in the far infrared have been recorded at room temperature, using a specially constructed cell and an isotopically pure sample of ^{102}RuO_4. New assignments and effective Hamiltonian parameter fits for this main isotopologue have been performed, treating the whole ν_2/ν_4 bending mode dyad. We provide precise effective Hamiltonian parameters, including band centers and Coriolis interaction parameters. Secondly, we investigate the line intensities for the strongly infrared active stretching mode ν_3, in the mid infrared window near 10 μm. New high resolution FTIR spectra have also been recorded at room temperature, using the same cell and sample. Using assignments and effective Hamiltonian parameters for ^{102}RuO_4, line intensities have been retrieved and the dipole moment parameters fitted for the ν_3 fundamental. A frequency and intensity line list is proposed.

  18. Quality parameters and antioxidant and antibacterial properties of some Mexican honeys.

    PubMed

    Rodríguez, Beatriz A; Mendoza, Sandra; Iturriga, Montserrat H; Castaño-Tostado, Eduardo

    2012-01-01

    A total of 14 Mexican honeys were screened for quality parameters including color, moisture, proline, and acidity. Antioxidant properties of complete honey and its methanolic extracts were evaluated by the DPPH, ABTS, and FRAP assays. In addition, the antimicrobial activity of complete honeys against Bacillus cereus ATCC 10876, Listeria monocytogenes Scott A, Salmonella Typhimurium ATCC 14028, and Staphylococcus aureus ATCC 6538 was determined. Most of the honeys analyzed showed values within quality parameters established by the Codex Alimentarius Commission in 2001. Eucalyptus flower honey and orange blossom honey showed the highest phenolic contents and antioxidant capacity. Bell flower, orange blossom, and eucalyptus flower honeys inhibited the growth of the 4 evaluated microorganisms. The remaining honeys affected at least 1 of the estimated growth parameters (increased lag phase, decreased growth rate, and/or maximum population density). Microorganism sensitivity to the antimicrobial activity of honeys followed the order B. cereus > L. monocytogenes > Salmonella Typhimurium > S. aureus. The monofloral honey samples from orange blossoms and eucalyptus flowers proved to be good sources of antioxidant and antimicrobial compounds. All the Mexican honey samples examined proved to be good sources of antioxidants and antimicrobial agents that might serve to maintain health and protect against several diseases. The results of the study showed that Mexican honeys display good quality parameters and antioxidant and antimicrobial activities. Mexican honey can be used as an additive in the food industry to increase the nutraceutical value of products. © 2011 Institute of Food Technologists®

  19. Accelerated Brain DCE-MRI Using Iterative Reconstruction With Total Generalized Variation Penalty for Quantitative Pharmacokinetic Analysis: A Feasibility Study.

    PubMed

    Wang, Chunhao; Yin, Fang-Fang; Kirkpatrick, John P; Chang, Zheng

    2017-08-01

    To investigate the feasibility of using undersampled k-space data and an iterative image reconstruction method with a total generalized variation penalty in the quantitative pharmacokinetic analysis of clinical brain dynamic contrast-enhanced magnetic resonance imaging. Eight brain dynamic contrast-enhanced magnetic resonance imaging scans were retrospectively studied. Two k-space sparse sampling strategies were designed to achieve a simulated image acquisition acceleration factor of 4: (1) a golden ratio-optimized 32-ray radial sampling profile and (2) a Cartesian-based random sampling profile with spatiotemporal-regularized sampling density constraints. The undersampled data were reconstructed to yield images using the investigated reconstruction technique. In quantitative pharmacokinetic analysis on a voxel-by-voxel basis, the rate constant K_trans in the extended Tofts model and blood flow F_B and blood volume V_B from the 2-compartment exchange model were analyzed. Finally, the quantitative pharmacokinetic parameters calculated from the undersampled data were compared with the corresponding values calculated from the fully sampled data. To quantify the accuracy of each parameter calculated using the undersampled data, the error in volume mean, total relative error, and cross-correlation were calculated. The pharmacokinetic parameter maps generated from the undersampled data appeared comparable to the ones generated from the original fully sampled data. Within the region of interest, most derived error in volume mean values was about 5% or lower, and the average error in volume mean of all parameter maps generated through either sampling strategy was about 3.54%. The average total relative error value of all parameter maps in the region of interest was about 0.115, and the average cross-correlation of all parameter maps in the region of interest was about 0.962. All investigated pharmacokinetic parameters showed no significant differences between the results from the original data and the reduced sampling data. With sparsely sampled k-space data simulating acquisition accelerated by a factor of 4, the investigated total generalized variation-based iterative image reconstruction method can accurately estimate the dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic parameters, supporting reliable clinical application.
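    The voxel-wise pharmacokinetic step can be sketched by fitting the extended Tofts model, C_t(t) = v_p C_p(t) + Ktrans ∫_0^t C_p(u) exp(-(Ktrans/v_e)(t-u)) du, to a simulated tissue curve. The biexponential arterial input function, time grid, parameter values and noise level below are illustrative assumptions, not the study's acquisition or reconstruction pipeline.

```python
# Sketch: extended Tofts model fit to a simulated concentration-time curve.
# AIF, time axis (minutes), true parameters and noise are all assumptions.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 5, 0.05)                            # min, uniform sampling
aif = 6.0 * (np.exp(-0.3 * t) - np.exp(-3.0 * t))    # toy arterial input function

def ext_tofts(t, ktrans, ve, vp, cp=aif):
    dt = t[1] - t[0]                                 # assumes uniform time steps
    kernel = np.exp(-(ktrans / ve) * t)
    conv = np.convolve(cp, kernel)[: len(t)] * dt    # discrete convolution integral
    return vp * cp + ktrans * conv

true = dict(ktrans=0.25, ve=0.3, vp=0.02)            # 1/min, -, -
rng = np.random.default_rng(8)
ct = ext_tofts(t, **true) + rng.normal(0, 0.02, len(t))

popt, _ = curve_fit(ext_tofts, t, ct, p0=[0.1, 0.2, 0.05],
                    bounds=([1e-3, 0.01, 0.0], [2.0, 1.0, 0.2]))
print("fitted Ktrans, ve, vp:", np.round(popt, 3), " true:", list(true.values()))
```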

  20. Optimization and Simulation of SLM Process for High Density H13 Tool Steel Parts

    NASA Astrophysics Data System (ADS)

    Laakso, Petri; Riipinen, Tuomas; Laukkanen, Anssi; Andersson, Tom; Jokinen, Antero; Revuelta, Alejandro; Ruusuvuori, Kimmo

    This paper demonstrates the successful printing and optimization of processing parameters of high-strength H13 tool steel by Selective Laser Melting (SLM). D-Optimal Design of Experiments (DOE) approach is used for parameter optimization of laser power, scanning speed and hatch width. With 50 test samples (1 × 1 × 1 cm) we establish parameter windows for these three parameters in relation to part density. The calculated numerical model is found to be in good agreement with the density data obtained from the samples using image analysis. A thermomechanical finite element simulation model is constructed of the SLM process and validated by comparing the calculated densities retrieved from the model with the experimentally determined densities. With the simulation tool one can explore the effect of different parameters on density before making any printed samples. Establishing a parameter window provides the user with freedom for parameter selection such as choosing parameters that result in fastest print speed.

  1. The Dutch Pancreas Biobank Within the Parelsnoer Institute: A Nationwide Biobank of Pancreatic and Periampullary Diseases.

    PubMed

    Strijker, Marin; Gerritsen, Arja; van Hilst, Jony; Bijlsma, Maarten F; Bonsing, Bert A; Brosens, Lodewijk A; Bruno, Marco J; van Dam, Ronald M; Dijk, Frederike; van Eijck, Casper H; Farina Sarasqueta, Arantza; Fockens, Paul; Gerhards, Michael F; Groot Koerkamp, Bas; van der Harst, Erwin; de Hingh, Ignace H; van Hooft, Jeanin E; Huysentruyt, Clément J; Kazemier, Geert; Klaase, Joost M; van Laarhoven, Cornelis J; van Laarhoven, Hanneke W; Liem, Mike S; de Meijer, Vincent E; van Rijssen, L Bengt; van Santvoort, Hjalmar C; Suker, Mustafa; Verhagen, Judith H; Verheij, Joanne; Verspaget, Hein W; Wennink, Roos A; Wilmink, Johanna W; Molenaar, I Quintus; Boermeester, Marja A; Busch, Olivier R; Besselink, Marc G

    2018-04-01

    Large biobanks with uniform collection of biomaterials and associated clinical data are essential for translational research. The Netherlands has traditionally been well organized in multicenter clinical research on pancreatic diseases, including the nationwide multidisciplinary Dutch Pancreatic Cancer Group and Dutch Pancreatitis Study Group. To enable high-quality translational research on pancreatic and periampullary diseases, these groups established the Dutch Pancreas Biobank. The Dutch Pancreas Biobank is part of the Parelsnoer Institute and involves all 8 Dutch university medical centers and 5 nonacademic hospitals. Adult patients undergoing pancreatic surgery (all indications) are eligible for inclusion. Preoperative blood samples, tumor tissue from resected specimens, pancreatic cyst fluid, and follow-up blood samples are collected. Clinical parameters are collected in conjunction with the mandatory Dutch Pancreatic Cancer Audit. Between January 2015 and May 2017, 488 patients were included in the first 5 participating centers: 4 university medical centers and 1 nonacademic hospital. Over 2500 samples were collected: 1308 preoperative blood samples, 864 tissue samples, and 366 follow-up blood samples. Prospective collection of biomaterials and associated clinical data has started in the Dutch Pancreas Biobank. Subsequent translational research will aim to improve treatment decisions based on disease characteristics.

  2. Recent BESII Results of Charmonium Decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mo, X. H.

    2006-02-11

    Using the 14 million ψ′ data sample collected with BES at BEPC, recent results relevant to the χ_cJ states are presented, which include measurement of the χ_cJ resonance parameters, study of χ_cJ → VV and K_S^0 K_S^0 final states, and a partial wave analysis of χ_c0 → π+π−K+K−.

  3. A COMPARATIVE STUDY ON PARAMETERS USED FOR CHARACTERIZING COTTON SHORT FIBERS

    USDA-ARS's Scientific Manuscript database

    The quantity of short cotton fibers in a cotton sample is an important cotton quality parameter which impacts yarn production performance and yarn quality. Researchers have proposed different parameters for characterizing the amount of short fibers in a cotton sample. A comprehensive study was car...

  4. Safe Use of Acoustic Vestibular-Evoked Myogenic Potential Stimuli: Protocol and Patient-Specific Considerations.

    PubMed

    Portnuff, Cory D F; Kleindienst, Samantha; Bogle, Jamie M

    2017-09-01

    Vestibular-evoked myogenic potentials (VEMPs) are commonly used clinical assessments for patients with complaints of dizziness. However, relatively high air-conducted stimuli are required to elicit the VEMP, and ultimately may compromise safe noise exposure limits. Recently, research has reported the potential for noise-induced hearing loss (NIHL) from VEMP stimulus exposure through studies of reduced otoacoustic emission levels after VEMP testing, as well as a recent case study showing permanent sensorineural hearing loss associated with VEMP exposure. The purpose of this report is to review the potential for hazardous noise exposure from VEMP stimuli and to suggest clinical parameters for safe VEMP testing. Literature review with presentation of clinical guidelines and a clinical tool for estimating noise exposure. The literature surrounding VEMP stimulus-induced hearing loss is reviewed, including several cases of overexposure. The article then presents a clinical calculation tool for the estimation of a patient's safe noise exposure from VEMP stimuli, considering stimulus parameters, and includes a discussion of how varying stimulus parameters affect a patient's noise exposure. Finally, recommendations are provided for recognizing and managing specific patient populations who may be at higher risk for NIHL from VEMP stimulus exposure. A sample protocol is provided that allows for safe noise exposure. VEMP stimuli have the potential to cause NIHL due to high sound exposure levels. However, with proper safety protocols in place, clinicians may reduce or eliminate this risk to their patients. Use of the tools provided, including the noise exposure calculation tool and sample protocols, may help clinicians to understand and ensure safe use of VEMP stimuli. American Academy of Audiology

  5. A Spectroscopic Survey and Analysis of Bright, Hydrogen-rich White Dwarfs

    NASA Astrophysics Data System (ADS)

    Gianninas, A.; Bergeron, P.; Ruiz, M. T.

    2011-12-01

    We have conducted a spectroscopic survey of over 1300 bright (V <= 17.5), hydrogen-rich white dwarfs based largely on the last published version of the McCook & Sion catalog. The complete results from our survey, including the spectroscopic analysis of over 1100 DA white dwarfs, are presented. High signal-to-noise ratio optical spectra were obtained for each star and were subsequently analyzed using our standard spectroscopic technique where the observed Balmer line profiles are compared to synthetic spectra computed from the latest generation of model atmospheres appropriate for these stars. First, we present the spectroscopic content of our sample, which includes many misclassifications as well as several DAB, DAZ, and magnetic white dwarfs. Next, we look at how the new Stark broadening profiles affect the determination of the atmospheric parameters. When necessary, specific models and analysis techniques are used to derive the most accurate atmospheric parameters possible. In particular, we employ M dwarf templates to obtain better estimates of the atmospheric parameters for those white dwarfs that are in DA+dM binary systems. Certain unique white dwarfs and double-degenerate binary systems are also analyzed in greater detail. We then examine the global properties of our sample including the mass distribution and their distribution as a function of temperature. We then proceed to test the accuracy and robustness of our method by comparing our results to those of other surveys such as SPY and Sloan Digital Sky Survey. Finally, we revisit the ZZ Ceti instability strip and examine how the determination of its empirical boundaries is affected by the latest line profile calculations. Based on observations made with ESO Telescopes at the La Silla or Paranal Observatories under program ID 078.D-0824(A).

  6. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R² = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m²), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  7. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task, when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that is recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of their accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improves predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
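    The path-sampling idea can be sketched on a conjugate normal model whose marginal likelihood is known in closed form, so the thermodynamic integration estimate can be checked: short Metropolis chains are run at a ladder of power coefficients, and the expected log-likelihood is integrated over the ladder. The temperature schedule, chain lengths and the toy model below are illustrative assumptions.

```python
# Sketch: thermodynamic integration (power posteriors) for log marginal
# likelihood, using d(log Z)/d(beta) = E_beta[log L]. Toy model is assumed.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(9)
n = 20
y = rng.normal(1.0, 1.0, n)                 # data: y_i ~ N(theta, 1), prior theta ~ N(0, 1)

def log_like(th):
    return -0.5 * np.sum((y - th) ** 2) - 0.5 * n * np.log(2 * np.pi)

def log_prior(th):
    return -0.5 * th ** 2 - 0.5 * np.log(2 * np.pi)

betas = np.linspace(0.0, 1.0, 21) ** 5      # ladder from prior (0) to posterior (1)
mean_ll = []
for beta in betas:
    th, ll = 0.0, log_like(0.0)
    kept = []
    for it in range(3_000):                 # short Metropolis chain per temperature
        prop = th + rng.normal(0, 0.5)
        ll_prop = log_like(prop)
        if np.log(rng.random()) < (log_prior(prop) + beta * ll_prop
                                   - log_prior(th) - beta * ll):
            th, ll = prop, ll_prop
        if it >= 500:
            kept.append(ll)
    mean_ll.append(np.mean(kept))

log_z_ti = np.trapz(mean_ll, betas)         # trapezoid rule over the ladder

exact = multivariate_normal.logpdf(y, mean=np.zeros(n), cov=np.eye(n) + np.ones((n, n)))
print(f"thermodynamic integration: {log_z_ti:.2f}   analytic: {exact:.2f}")
```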

  8. Impact of Reservoir Fluid Saturation on Seismic Parameters: Endrod Gas Field, Hungary

    NASA Astrophysics Data System (ADS)

    El Sayed, Abdel Moktader A.; El Sayed, Nahla A.

    2017-12-01

    Outlining the reservoir fluid types and saturation is the main objective of the present research work. Thirty-seven core samples were collected from three different gas bearing zones in the Endrod gas field in Hungary. These samples belong to the Miocene and the Upper-Lower Pliocene. The samples were prepared and laboratory measurements were conducted. Compressional and shear wave velocities were measured using the Sonic Viewer-170-OYO. The sonic velocities were measured at frequencies of 63 and 33 kHz for the compressional and shear wave, respectively. All samples were subjected to complete petrophysical investigations. Sonic velocities and mechanical parameters such as Young's modulus, rigidity, and bulk modulus were measured when samples were saturated with 100%, 75% and 0% brine water. Several plots have been performed to show the relationship between seismic parameters and saturation percentages. Robust relationships were obtained, showing the impact of fluid saturation on seismic parameters. Seismic velocity, Poisson's ratio, bulk modulus and rigidity prove to be applicable during hydrocarbon exploration or production stages. Relationships among the measured seismic parameters in gas/water fully and partially saturated samples are useful to outline the fluid type and saturation percentage, especially in gas/water transitional zones.

  9. Simulation-based Extraction of Key Material Parameters from Atomic Force Microscopy

    NASA Astrophysics Data System (ADS)

    Alsafi, Huseen; Peninngton, Gray

    Models for the atomic force microscopy (AFM) tip and sample interaction contain numerous material parameters that are often poorly known. This is especially true when dealing with novel material systems or when imaging samples that are exposed to complicated interactions with the local environment. In this work we use Monte Carlo methods to extract sample material parameters from the experimental AFM analysis of a test sample. The parameterized theoretical model that we use is based on the Virtual Environment for Dynamic AFM (VEDA) [1]. The extracted material parameters are then compared with the accepted values for our test sample. Using this procedure, we suggest a method that can be used to successfully determine unknown material properties in novel and complicated material systems. We acknowledge Fisher Endowment Grant support from the Jess and Mildred Fisher College of Science and Mathematics, Towson University.
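
    A hedged sketch of the general Monte Carlo extraction loop described above: random candidate parameter sets are scored against a measured curve and the best-fitting set is kept. The forward model, parameter names, and ranges below are illustrative placeholders, not the VEDA model used by the authors.

```python
# Monte Carlo parameter extraction: draw random candidate parameter sets,
# score each against a measured curve, keep the best. The forward model here
# is a simple Hertzian-like placeholder, purely for illustration.
import numpy as np

rng = np.random.default_rng(1)

def forward_model(depth_nm, E_eff_GPa, adhesion_nN):
    """Placeholder tip-sample force model (illustrative only)."""
    return E_eff_GPa * np.clip(depth_nm, 0, None) ** 1.5 - adhesion_nN

# Hypothetical "measured" force-indentation data
depth = np.linspace(0, 10, 50)
measured = forward_model(depth, 2.0, 5.0) + rng.normal(0, 0.5, depth.size)

# Assumed prior ranges for the unknown parameters
bounds = {"E_eff_GPa": (0.1, 10.0), "adhesion_nN": (0.0, 20.0)}

best_params, best_cost = None, np.inf
for _ in range(20000):
    cand = {k: rng.uniform(*b) for k, b in bounds.items()}
    resid = measured - forward_model(depth, **cand)
    cost = np.sum(resid ** 2)            # sum-of-squares misfit
    if cost < best_cost:
        best_params, best_cost = cand, cost

print(best_params, best_cost)
```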

  10. Safety of Reiki Therapy for Newborns at Risk for Neonatal Abstinence Syndrome

    PubMed Central

    Wright-Esber, Sandra; Zupancic, Julie; Gargiulo, Deb; Woodall, Patricia

    2018-01-01

    The incidence of opioid abuse and subsequent drug withdrawal is exponentially on the rise in the United States for many populations including newborns who are born to drug-addicted mothers. These newborns often exhibit symptoms of neonatal abstinence syndrome (NAS) within 24 to 72 hours of birth. Treatment of NAS includes monitoring of withdrawal symptoms, managing physiological parameters, and the use of supportive and pharmacologic treatments. Although a few randomized controlled trials exist, studies on supportive intervention are generally limited by small sample sizes, case study reports, expert opinions, and descriptive design. Few studies address the safety of Reiki for newborns at risk for NAS using neonatal parameters. This pilot study addresses feasibility and demonstrates that Reiki is safe when administered to this high-risk population. Considerations for future studies are discussed. PMID:29315084

  11. Discrete element weld model, phase 2

    NASA Technical Reports Server (NTRS)

    Prakash, C.; Samonds, M.; Singhal, A. K.

    1987-01-01

    A numerical method was developed for analyzing the tungsten inert gas (TIG) welding process. The phenomena being modeled include melting under the arc and the flow in the melt under the action of buoyancy, surface tension, and electromagnetic forces. The latter entails the calculation of the electric potential and the computation of electric current and magnetic field therefrom. Melting may occur at a single temperature or over a temperature range, and the electrical and thermal conductivities can be a function of temperature. Results of sample calculations are presented and discussed at length. A major research contribution has been the development of numerical methodology for the calculation of phase change problems in a fixed grid framework. The model has been implemented on CHAM's general purpose computer code PHOENICS. The inputs to the computer model include: geometric parameters, material properties, and weld process parameters.

  12. Copper(II)-rubeanic acid coprecipitation system for separation-preconcentration of trace metal ions in environmental samples for their flame atomic absorption spectrometric determinations.

    PubMed

    Soylak, Mustafa; Erdogan, Nilgun D

    2006-09-21

    A simple and facile preconcentration procedure based on the coprecipitation of trace heavy metal ions with a copper(II)-rubeanic acid complex has been developed. The analytical parameters, including pH, amount of rubeanic acid, and sample volume, were investigated for the quantitative recoveries of Pb(II), Fe(III), Cd(II), Au(III), Pd(II), and Ni(II). No interfering effects were observed from the concomitant ions. The detection limits for the analyte ions, based on the 3-sigma criterion, ranged from 0.14 µg/l for iron to 3.4 µg/l for lead. The proposed coprecipitation method was successfully applied to water samples from Palas Lake (Kayseri) and to soil and sediment samples from Kayseri and Yozgat, Turkey.

  13. Stack Characterization in CryoSat Level1b SAR/SARin Baseline C

    NASA Astrophysics Data System (ADS)

    Scagliola, Michele; Fornari, Marco; Di Giacinto, Andrea; Bouffard, Jerome; Féménias, Pierre; Parrinello, Tommaso

    2015-04-01

    CryoSat was launched on 8 April 2010 and is the first European ice mission dedicated to the monitoring of precise changes in the thickness of polar ice sheets and floating sea ice. CryoSat is the first altimetry mission operating in SAR mode, and it carries an innovative radar altimeter called the Synthetic Aperture Interferometric Altimeter (SIRAL), which transmits pulses at a high pulse repetition frequency, making the received echoes phase coherent and suitable for azimuth processing. The current CryoSat IPF (Instrument Processing Facility), Baseline B, was released into operation in February 2012. After more than two years of development, the release into operations of Baseline C is expected in the first half of 2015. It is worth recalling that the CryoSat SAR/SARin IPF1 generates 20 Hz waveforms at an approximately equally spaced set of ground locations on the Earth's surface, i.e., surface samples, and that a surface sample gathers a collection of single-look echoes coming from the processed bursts during the time of visibility. Thus, for a given surface sample, the stack can be defined as the collection of all the single-look echoes pointing to the current surface sample, after applying all the necessary range corrections. The L1B product contains the power average of all the single-look echoes in the stack: the multi-looked L1B waveform. This reduces the data volume, while removing some information contained in the single looks that is useful for characterizing the surface and modelling the L1B waveform. To recover such information, a set of parameters has been added to the L1B product: the stack characterization, or beam behaviour, parameters. The stack characterization, already included in previous Baselines, has been reviewed and expanded in Baseline C. This poster describes all the stack characterization parameters, detailing what they represent and how they have been computed. In detail, these parameters can be summarized as: stack statistical parameters, such as skewness and kurtosis; the look angle (the angle at which the surface sample is seen with respect to the nadir direction of the satellite) and the Doppler angle (the angle at which the surface sample is seen with respect to the normal to the velocity vector) for the first and the last single-look echoes in the stack; and the number of single looks averaged in the stack (in Baseline C a stack weighting has been applied that reduces the number of looks). With the correct use of these parameters, users will be able to retrieve some of the 'lost' information contained within the stack and fully exploit the L1B product.
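
    A minimal sketch of the multi-looking and stack-statistics step described above, using a synthetic stack; real SIRAL Baseline C processing applies range corrections and stack weighting that are not reproduced here, and the choice of evaluating the statistics at the peak range bin is an assumption for illustration.

```python
# Multi-looked L1B waveform and simple stack statistical parameters for one
# surface sample, computed on a synthetic stack of single-look echo powers.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(2)
n_looks, n_range_bins = 120, 128
stack_power = rng.exponential(scale=1.0, size=(n_looks, n_range_bins))  # looks x range bins

multilooked_waveform = stack_power.mean(axis=0)   # power average over the stack

# Stack characterization from the along-stack power at the waveform peak bin
peak_bin = multilooked_waveform.argmax()
beam_power = stack_power[:, peak_bin]
stack_skewness = skew(beam_power)
stack_kurtosis = kurtosis(beam_power)
n_averaged_looks = stack_power.shape[0]

print(stack_skewness, stack_kurtosis, n_averaged_looks)
```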

  14. Usage of K-cluster and factor analysis for grouping and evaluation the quality of olive oil in accordance with physico-chemical parameters

    NASA Astrophysics Data System (ADS)

    Milev, M.; Nikolova, Kr.; Ivanova, Ir.; Dobreva, M.

    2015-11-01

    Twenty-five olive oils, differing in origin and method of extraction, were studied with respect to 17 physico-chemical parameters: color parameters a and b, lightness, fluorescence peaks, pigments (chlorophyll and β-carotene), and fatty-acid content. The goals of the study were: to conduct correlation analysis to find the inner relations between the studied indices; to apply factor analysis based on the method of Principal Components (PCA) to reduce the large number of variables to a few factors that are of main importance for distinguishing the different types of olive oil; and to use K-means clustering to compare and group the tested olive oils based on their similarity. The inner relations between the studied indices were found by applying correlation analysis. A factor analysis using PCA was then applied on the basis of the resulting correlation matrix. The number of studied indices was thus reduced to four factors, which explained 79.3% of the total variation. The first factor unified the color parameters, β-carotene, and the fluorescence peak related to oxidation products (about 520 nm). The second was determined mainly by the chlorophyll content and the related fluorescence peak (about 670 nm). The third and fourth factors were determined by the fatty-acid content of the samples: the third unified the fatty acids that allow olive oil to be distinguished from other plant oils (oleic, linoleic, and stearic acids), while the fourth included fatty acids with relatively low content in the studied samples. K-means clustering requires the number of clusters to be specified in advance; K = 3 was chosen because three types of olive oil were present. The first cluster unified all salad and pomace olive oils, and the second unified the samples of extra virgin oils taken as controls from producers, which were bought from the trade network. The third cluster unified pomace and extra virgin oil samples whose parameters differ from those of natural olive oils because of the presence of plant-oil impurities.
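
    A minimal sketch of the PCA-plus-K-means workflow described above; the data matrix is a random placeholder standing in for the 25 samples by 17 parameters, and the standardization step is an assumption made because the parameters are on very different scales.

```python
# PCA followed by K-means (K = 3) on a table of physico-chemical parameters.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = rng.normal(size=(25, 17))               # placeholder measurements (25 oils x 17 parameters)

X_std = StandardScaler().fit_transform(X)   # put parameters on a common scale
pca = PCA(n_components=4).fit(X_std)        # four factors, as in the study
scores = pca.transform(X_std)
print("explained variance:", pca.explained_variance_ratio_.sum())

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(labels)                               # cluster assignment of each oil
```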

  15. Influence of scanning parameters on the estimation accuracy of control points of B-spline surfaces

    NASA Astrophysics Data System (ADS)

    Aichinger, Julia; Schwieger, Volker

    2018-04-01

    This contribution deals with the influence of scanning parameters such as scanning distance, incidence angle, surface quality, and sampling width on the average estimated standard deviations of the positions of control points of B-spline surfaces, which are used to model surfaces from terrestrial laser scanning data. The influence of the scanning parameters is analyzed by Monte Carlo based variance analysis. Samples were generated for both non-correlated and correlated data, using Latin hypercube and replicated Latin hypercube sampling algorithms, respectively. The investigations show that the most influential scanning parameter is the distance from the laser scanner to the object. The angle of incidence shows a significant effect for distances of 50 m and longer, while the surface quality contributes only negligible effects and the sampling width has no influence. Optimal scanning parameters are therefore found at the smallest possible object distance, at an angle of incidence close to 0°, and with the highest surface quality. The consideration of correlations improves the estimation accuracy and underlines the importance of complete stochastic models for TLS measurements.
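
    A minimal sketch of a plain Latin hypercube sample over a few scanning parameters; the parameter names, ranges, and sample size are illustrative, and the replicated-LHS variant used for correlated data is not shown.

```python
# Plain Latin hypercube sampling: one point per stratum in each dimension,
# with the strata randomly permuted independently per dimension.
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        u[:, j] = u[rng.permutation(n_samples), j]   # shuffle strata per dimension
    return u

rng = np.random.default_rng(4)
ranges = {"distance_m": (5.0, 100.0),        # assumed scanning-parameter ranges
          "incidence_deg": (0.0, 60.0),
          "sampling_width_mm": (1.0, 20.0)}
unit = latin_hypercube(500, len(ranges), rng)
samples = {name: lo + unit[:, j] * (hi - lo)
           for j, (name, (lo, hi)) in enumerate(ranges.items())}
print({k: (v.min().round(2), v.max().round(2)) for k, v in samples.items()})
```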

  16. Development of an infrared analyzer following the

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A radar calibration subsystem for measuring the radar backscattering characteristics of an imaged terrain is described. To achieve the required accuracy for the backscattering coefficient measurement (about 2 dB with 80 percent confidence), the space hardware design includes a means of monitoring the state parameters of the radar. For example, the transmitter output power is sampled and a replica of its output waveform is circulated through the receiver. These are recorded digitally and are used on the ground to determine such radar parameters as the transmitter power and the receiver gain. This part of the data is needed by the ground processor to measure the terrain backscattering characteristics.

  17. R2 Water Quality Portal Monitoring Stations

    EPA Pesticide Factsheets

    The Water Quality Portal (WQP) provides an easy way to access data stored in various large water quality databases. The WQP provides various input parameters on the form, including location, site, sampling, and date parameters, to filter and customize the returned results. The WQP is a cooperative service sponsored by the United States Geological Survey (USGS), the Environmental Protection Agency (EPA), and the National Water Quality Monitoring Council (NWQMC) that integrates publicly available water quality data from the USGS National Water Information System (NWIS), the EPA STOrage and RETrieval (STORET) Data Warehouse, and the USDA ARS Sustaining The Earth's Watersheds - Agricultural Research Database System (STEWARDS).

  18. Visual Inspection of Surfaces

    NASA Technical Reports Server (NTRS)

    Hughes, David; Perez, Xavier

    2007-01-01

    This presentation evaluates the parameters that affect visual inspection of cleanliness. Factors tested include surface reflectance, surface roughness, size of the largest particle, exposure time, inspector, and distance from the sample surface. It is concluded that the distance predictions were not strong, in part because the distance at which contamination can be seen may depend on more variables than those tested. Most parameter estimates had a confidence of 95% or better, except for exposure and reflectance. The distance at which a surface is visibly contaminated decreases with increasing reflectance, roughness, and exposure, and increases with the size of the largest particle. These results were only slightly affected by the observer.

  19. Effects of whirling disease on selected hematological parameters in rainbow trout

    USGS Publications Warehouse

    Densmore, Christine L.; Blazer, V.S.; Waldrop, T.B.; Pooler, P.S.

    2001-01-01

    Hematological responses to whirling disease in rainbow trout (Oncorhynchus mykiss) were investigated. Two-mo-old fingerling rainbow trout were exposed to cultured triactinomyxon spores of Myxobolus cerebralis at 9,000 spores/fish in December, 1997. Twenty-four wks post-exposure, fish were taken from infected and uninfected groups for peripheral blood and cranial tissue sampling. Histological observations on cranial tissues confirmed M. cerebralis infection in all exposed fish. Differences in hematological parameters between the two groups included significantly lower total leukocyte and small lymphocyte counts for the infected fish. No effects on hematocrit, plasma protein concentration, or other differential leukocyte counts were noted.

  20. Variation of semen parameters in healthy medical students due to exam stress.

    PubMed

    Lampiao, Fanuel

    2009-12-01

    This study was aimed at investigating which semen parameters vary most in samples from healthy donors undergoing a stressful examination period. Samples were left to liquefy in an incubator at 37 degrees C and 5% CO2 for 30 minutes before the volume was measured. Concentration and motility parameters were measured by means of computer-assisted semen analysis (CASA) using the Sperm Class Analyzer (Microptic S.L., Madrid, Spain). Sperm concentration was significantly decreased in samples donated close to the exam period, as well as in samples donated during the exam period, when compared to samples donated at the beginning of the semester. The stress levels of donors might prove to be clinically relevant and important when designing experimental protocols.

  1. Bootstrap rolling window estimation approach to analysis of the Environment Kuznets Curve hypothesis: evidence from the USA.

    PubMed

    Aslan, Alper; Destek, Mehmet Akif; Okumus, Ilyas

    2018-01-01

    This study aims to examine the validity of the inverted U-shaped Environmental Kuznets Curve by investigating the relationship between economic growth and environmental pollution in the USA over the period 1966 to 2013. Previous studies are based on the assumption of parameter stability, i.e., that the estimated parameters do not change over the full sample. This study uses a bootstrap rolling window estimation method to detect possible changes in causal relations and to obtain parameters for sub-sample periods. The results show that the parameter on economic growth has an increasing trend in the 1982-1996 sub-sample periods and a decreasing trend in the 1996-2013 sub-sample periods. Therefore, the existence of an inverted U-shaped Environmental Kuznets Curve is confirmed for the USA.
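
    A hedged sketch of the rolling-window idea with a residual bootstrap for the slope in each sub-sample; the simulated series, window length, and step are illustrative, and the published study estimates bootstrap causality relations rather than this simple OLS illustration.

```python
# Rolling-window OLS of a pollution-growth relation with a residual bootstrap
# confidence interval for the slope in each sub-sample window.
import numpy as np

rng = np.random.default_rng(5)
T = 48                                        # annual observations, e.g. 1966-2013
growth = np.cumsum(rng.normal(0.02, 0.01, T))            # placeholder series
emissions = 0.8 * growth + rng.normal(0, 0.05, T)        # placeholder series

window, n_boot = 20, 500
for start in range(0, T - window + 1, 4):
    y = emissions[start:start + window]
    x = growth[start:start + window]
    X = np.column_stack([np.ones(window), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    boot_slopes = []
    for _ in range(n_boot):
        y_b = X @ beta + rng.choice(resid, size=window, replace=True)
        boot_slopes.append(np.linalg.lstsq(X, y_b, rcond=None)[0][1])
    lo, hi = np.percentile(boot_slopes, [2.5, 97.5])
    print(f"window {start}-{start + window - 1}: slope={beta[1]:.3f}  95% CI=({lo:.3f}, {hi:.3f})")
```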

  2. Model-based recovery of histological parameters from multispectral images of the colon

    NASA Astrophysics Data System (ADS)

    Hidovic-Rowe, Dzena; Claridge, Ela

    2005-04-01

    Colon cancer alters the macroarchitecture of the colon tissue. Common changes include angiogenesis and the distortion of the tissue collagen matrix. Such changes affect the colon colouration. This paper presents the principles of a novel optical imaging method capable of extracting parameters depicting histological quantities of the colon. The method is based on a computational, physics-based model of light interaction with tissue. The colon structure is represented by three layers: mucosa, submucosa and muscle layer. Optical properties of the layers are defined by molar concentration and absorption coefficients of haemoglobins; the size and density of collagen fibres; the thickness of the layer and the refractive indexes of collagen and the medium. Using the entire histologically plausible ranges for these parameters, a cross-reference is created computationally between the histological quantities and the associated spectra. The output of the model was compared to experimental data acquired in vivo from 57 histologically confirmed normal and abnormal tissue samples and histological parameters were extracted. The model produced spectra which match well the measured data, with the corresponding spectral parameters being well within histologically plausible ranges. Parameters extracted for the abnormal spectra showed the increase in blood volume fraction and changes in collagen pattern characteristic of the colon cancer. The spectra extracted from multi-spectral images of ex-vivo colon including adenocarcinoma show the characteristic features associated with normal and abnormal colon tissue. These findings suggest that it should be possible to compute histological quantities for the colon from the multi-spectral images.

  3. Spectral parameters for Dawn FC color data: Carbonaceous chondrites and aqueous alteration products as potential cerean analog materials

    NASA Astrophysics Data System (ADS)

    Schäfer, Tanja; Nathues, Andreas; Mengel, Kurt; Izawa, Matthew R. M.; Cloutis, Edward A.; Schäfer, Michael; Hoffmann, Martin

    2016-02-01

    We identified a set of spectral parameters based on Dawn Framing Camera (FC) bandpasses, covering the wavelength range 0.4-1.0 μm, for mineralogical mapping of potential chondritic material and aqueous alteration products on dwarf planet Ceres. Our parameters are inferred from laboratory spectra of well-described and clearly classified carbonaceous chondrites representative for a dark component. We additionally investigated the FC signatures of candidate bright materials including carbonates, sulfates and hydroxide (brucite), which can possibly be exposed on the cerean surface by impact craters or plume activity. Several materials mineralogically related to carbonaceous chondrites, including pure ferromagnesian phyllosilicates, and serpentinites were also investigated. We tested the potential of the derived FC parameters for distinguishing between different carbonaceous chondritic materials, and between other plausible cerean surface materials. We found that the major carbonaceous chondrite groups (CM, CO, CV, CK, and CR) are distinguishable using the FC filter ratios 0.56/0.44 μm and 0.83/0.97 μm. The absorption bands of Fe-bearing phyllosilicates at 0.7 and 0.9 μm in terrestrial samples and CM carbonaceous chondrites can be detected by a combination of FC band parameters using the filters at 0.65, 0.75, 0.83, 0.92 and 0.97 μm. This set of parameters serves as a basis to identify and distinguish different lithologies on the cerean surface by FC multispectral data.
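
    A minimal sketch of evaluating the two diagnostic filter ratios from a laboratory reflectance spectrum; the spectrum is a synthetic placeholder, and real FC bandpasses have finite widths that are ignored here.

```python
# Framing Camera band-ratio parameters from a reflectance spectrum,
# approximated by interpolating at the filter centre wavelengths.
import numpy as np

wavelength_um = np.linspace(0.40, 1.00, 301)
reflectance = 0.05 + 0.02 * wavelength_um      # placeholder, featureless red slope

def band(center_um):
    """Reflectance interpolated at an FC filter centre (bandpass width ignored)."""
    return np.interp(center_um, wavelength_um, reflectance)

ratio_vis = band(0.56) / band(0.44)   # 0.56/0.44 um ratio
ratio_nir = band(0.83) / band(0.97)   # 0.83/0.97 um ratio
print(ratio_vis, ratio_nir)
```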

  4. Investigating the Effect of Cosmic Opacity on Standard Candles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, J.; Yu, H.; Wang, F. Y., E-mail: fayinwang@nju.edu.cn

    Standard candles can probe the evolution of dark energy over a large redshift range, but cosmic opacity can degrade the quality of standard candles. In this paper, we use the latest observations, including Type Ia supernovae (SNe Ia) from the “joint light-curve analysis” sample and Hubble parameters, to probe the opacity of the universe. A joint fitting of the SNe Ia light-curve parameters, cosmological parameters, and opacity is used in order to avoid the cosmological dependence of SNe Ia luminosity distances. The latest gamma-ray bursts are used in order to explore the cosmic opacity at high redshifts, where the cosmic reionization process is taken into account. We find that the sample supports an almost transparent universe for flat ΛCDM and XCDM models. Meanwhile, free electrons deplete photons from standard candles through (inverse) Compton scattering, which is known as an important component of opacity. This Compton dimming may play an important role in future supernova surveys. From the analysis, we find that a few per cent of the cosmic opacity is caused by Compton dimming in the two models, which can be corrected for.

  5. THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Habib, Salman; Biswas, Rahul

    2016-04-01

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.

  6. The mira-titan universe. Precision predictions for dark energy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Bingham, Derek; Lawrence, Earl

    2016-03-28

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.

  7. The Mira-Titan Universe. II. Matter Power Spectrum Emulation

    NASA Astrophysics Data System (ADS)

    Lawrence, Earl; Heitmann, Katrin; Kwan, Juliana; Upadhye, Amol; Bingham, Derek; Habib, Salman; Higdon, David; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas

    2017-09-01

    We introduce a new cosmic emulator for the matter power spectrum covering eight cosmological parameters. Targeted at optical surveys, the emulator provides accurate predictions out to a wavenumber k ~ 5 Mpc^-1 and redshift z ≤ 2. In addition to covering the standard set of ΛCDM parameters, massive neutrinos and a dynamical dark energy equation of state are included. The emulator is built on a sample set of 36 cosmological models, carefully chosen to provide accurate predictions over the wide and large parameter space. For each model, we have performed a high-resolution simulation, augmented with 16 medium-resolution simulations and TimeRG perturbation theory results to provide accurate coverage over a wide k-range; the data set generated as part of this project is more than 1.2 Pbytes. With the current set of simulated models, we achieve an accuracy of approximately 4%. Because the sampling approach used here has established convergence and error-control properties, follow-up results with more than a hundred cosmological models will soon achieve ~1% accuracy. We compare our approach with other prediction schemes that are based on halo model ideas and remapping approaches. The new emulator code is publicly available.
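
    A minimal emulation sketch in the spirit of the approach described above: a Gaussian-process surrogate is fitted to a small design of expensive runs and then queried at new parameter settings. The design, toy simulation function, and kernel are illustrative; the actual Mira-Titan emulator uses a dedicated space-filling design and a basis decomposition of the power spectrum rather than this single-output model.

```python
# Toy emulator: Gaussian-process regression on a small design of "simulation" runs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(6)
n_design, n_params = 36, 8                      # 36 models over 8 parameters, as above
X = rng.random((n_design, n_params))            # design in the unit hypercube (placeholder)

def expensive_simulation(theta):
    """Placeholder for a simulation output (e.g. P(k) amplitude at a fixed k)."""
    return np.sin(2 * np.pi * theta[0]) + 0.5 * theta[1] ** 2 + 0.1 * theta[2:].sum()

y = np.array([expensive_simulation(t) for t in X])

kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(n_params))  # anisotropic RBF
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = rng.random((5, n_params))               # new parameter settings to predict
mean, std = emulator.predict(X_new, return_std=True)
print(mean, std)
```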

  8. The Mira-Titan Universe. II. Matter Power Spectrum Emulation

    DOE PAGES

    Lawrence, Earl; Heitmann, Katrin; Kwan, Juliana; ...

    2017-09-20

    We introduce a new cosmic emulator for the matter power spectrum covering eight cosmological parameters. Targeted at optical surveys, the emulator provides accurate predictions out to a wavenumber k ~ 5 Mpc^-1 and redshift z ≤ 2. Besides covering the standard set of ΛCDM parameters, massive neutrinos and a dynamical dark energy equation of state are included. The emulator is built on a sample set of 36 cosmological models, carefully chosen to provide accurate predictions over the wide and large parameter space. For each model, we have performed a high-resolution simulation, augmented with sixteen medium-resolution simulations and TimeRG perturbation theory results to provide accurate coverage of a wide k-range; the dataset generated as part of this project is more than 1.2 Pbyte. With the current set of simulated models, we achieve an accuracy of approximately 4%. Because the sampling approach used here has established convergence and error-control properties, follow-on results with more than a hundred cosmological models will soon achieve ~1% accuracy. We compare our approach with other prediction schemes that are based on halo model ideas and remapping approaches. The new emulator code is publicly available.

  9. The Mira-Titan Universe. II. Matter Power Spectrum Emulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawrence, Earl; Heitmann, Katrin; Kwan, Juliana

    We introduce a new cosmic emulator for the matter power spectrum covering eight cosmological parameters. Targeted at optical surveys, the emulator provides accurate predictions out to a wavenumber k ~ 5 Mpc^-1 and redshift z ≤ 2. In addition to covering the standard set of ΛCDM parameters, massive neutrinos and a dynamical dark energy equation of state are included. The emulator is built on a sample set of 36 cosmological models, carefully chosen to provide accurate predictions over the wide and large parameter space. For each model, we have performed a high-resolution simulation, augmented with 16 medium-resolution simulations and TimeRG perturbation theory results to provide accurate coverage over a wide k-range; the data set generated as part of this project is more than 1.2 Pbytes. With the current set of simulated models, we achieve an accuracy of approximately 4%. Because the sampling approach used here has established convergence and error-control properties, follow-up results with more than a hundred cosmological models will soon achieve ~1% accuracy. We compare our approach with other prediction schemes that are based on halo model ideas and remapping approaches.

  10. The Mira-Titan Universe. II. Matter Power Spectrum Emulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawrence, Earl; Heitmann, Katrin; Kwan, Juliana

    We introduce a new cosmic emulator for the matter power spectrum covering eight cosmological parameters. Targeted at optical surveys, the emulator provides accurate predictions out to a wavenumber k ~ 5 Mpc^-1 and redshift z ≤ 2. Besides covering the standard set of ΛCDM parameters, massive neutrinos and a dynamical dark energy equation of state are included. The emulator is built on a sample set of 36 cosmological models, carefully chosen to provide accurate predictions over the wide and large parameter space. For each model, we have performed a high-resolution simulation, augmented with sixteen medium-resolution simulations and TimeRG perturbation theory results to provide accurate coverage of a wide k-range; the dataset generated as part of this project is more than 1.2 Pbyte. With the current set of simulated models, we achieve an accuracy of approximately 4%. Because the sampling approach used here has established convergence and error-control properties, follow-on results with more than a hundred cosmological models will soon achieve ~1% accuracy. We compare our approach with other prediction schemes that are based on halo model ideas and remapping approaches. The new emulator code is publicly available.

  11. Time-varying BRDFs.

    PubMed

    Sun, Bo; Sunkavalli, Kalyan; Ramamoorthi, Ravi; Belhumeur, Peter N; Nayar, Shree K

    2007-01-01

    The properties of virtually all real-world materials change with time, causing their bidirectional reflectance distribution functions (BRDFs) to be time varying. However, none of the existing BRDF models and databases take time variation into consideration; they represent the appearance of a material at a single time instance. In this paper, we address the acquisition, analysis, modeling, and rendering of a wide range of time-varying BRDFs (TVBRDFs). We have developed an acquisition system that is capable of sampling a material's BRDF at multiple time instances, with each time sample acquired within 36 sec. We have used this acquisition system to measure the BRDFs of a wide range of time-varying phenomena, which include the drying of various types of paints (watercolor, spray, and oil), the drying of wet rough surfaces (cement, plaster, and fabrics), the accumulation of dusts (household and joint compound) on surfaces, and the melting of materials (chocolate). Analytic BRDF functions are fit to these measurements and the model parameters' variations with time are analyzed. Each category exhibits interesting and sometimes nonintuitive parameter trends. These parameter trends are then used to develop analytic TVBRDF models. The analytic TVBRDF models enable us to apply effects such as paint drying and dust accumulation to arbitrary surfaces and novel materials.

  12. An overview of STRUCTURE: applications, parameter settings, and supporting software

    PubMed Central

    Porras-Hurtado, Liliana; Ruiz, Yarimar; Santos, Carla; Phillips, Christopher; Carracedo, Ángel; Lareu, Maria V.

    2013-01-01

    Objectives: We present an up-to-date review of STRUCTURE software: one of the most widely used population analysis tools that allows researchers to assess patterns of genetic structure in a set of samples. STRUCTURE can identify subsets of the whole sample by detecting allele frequency differences within the data and can assign individuals to those sub-populations based on analysis of likelihoods. The review covers STRUCTURE's most commonly used ancestry and frequency models, plus an overview of the main applications of the software in human genetics including case-control association studies (CCAS), population genetics, and forensic analysis. The review is accompanied by supplementary material providing a step-by-step guide to running STRUCTURE. Methods: With reference to a worked example, we explore the effects of changing the principal analysis parameters on STRUCTURE results when analyzing a uniform set of human genetic data. Use of the supporting software: CLUMPP and distruct is detailed and we provide an overview and worked example of STRAT software, applicable to CCAS. Conclusion: The guide offers a simplified view of how STRUCTURE, CLUMPP, distruct, and STRAT can be applied to provide researchers with an informed choice of parameter settings and supporting software when analyzing their own genetic data. PMID:23755071

  13. Outcome-Dependent Sampling with Interval-Censored Failure Time Data

    PubMed Central

    Zhou, Qingning; Cai, Jianwen; Zhou, Haibo

    2017-01-01

    Summary Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method works well for practical situations and is more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664

  14. Estimation of genetic parameters and their sampling variances of quantitative traits in the type 2 modified augmented design

    USDA-ARS?s Scientific Manuscript database

    We proposed a method to estimate the error variance among non-replicated genotypes, thus to estimate the genetic parameters by using replicated controls. We derived formulas to estimate sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...

  15. QCScreen: a software tool for data quality control in LC-HRMS based metabolomics.

    PubMed

    Simader, Alexandra Maria; Kluger, Bernhard; Neumann, Nora Katharina Nicole; Bueschl, Christoph; Lemmens, Marc; Lirk, Gerald; Krska, Rudolf; Schuhmacher, Rainer

    2015-10-24

    Metabolomics experiments often comprise large numbers of biological samples resulting in huge amounts of data. This data needs to be inspected for plausibility before data evaluation to detect putative sources of error e.g. retention time or mass accuracy shifts. Especially in liquid chromatography-high resolution mass spectrometry (LC-HRMS) based metabolomics research, proper quality control checks (e.g. for precision, signal drifts or offsets) are crucial prerequisites to achieve reliable and comparable results within and across experimental measurement sequences. Software tools can support this process. The software tool QCScreen was developed to offer a quick and easy data quality check of LC-HRMS derived data. It allows a flexible investigation and comparison of basic quality-related parameters within user-defined target features and the possibility to automatically evaluate multiple sample types within or across different measurement sequences in a short time. It offers a user-friendly interface that allows an easy selection of processing steps and parameter settings. The generated results include a coloured overview plot of data quality across all analysed samples and targets and, in addition, detailed illustrations of the stability and precision of the chromatographic separation, the mass accuracy and the detector sensitivity. The use of QCScreen is demonstrated with experimental data from metabolomics experiments using selected standard compounds in pure solvent. The application of the software identified problematic features, samples and analytical parameters and suggested which data files or compounds required closer manual inspection. QCScreen is an open source software tool which provides a useful basis for assessing the suitability of LC-HRMS data prior to time consuming, detailed data processing and subsequent statistical analysis. It accepts the generic mzXML format and thus can be used with many different LC-HRMS platforms to process both multiple quality control sample types as well as experimental samples in one or more measurement sequences.
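
    A minimal sketch of two QC checks of the kind reported by such tools: precision (coefficient of variation of peak areas) and retention-time drift for target features across repeated QC injections. The arrays and acceptance thresholds are placeholders, not QCScreen defaults.

```python
# Basic LC-HRMS quality-control metrics across repeated QC injections.
import numpy as np

rng = np.random.default_rng(7)
n_qc_injections, n_targets = 12, 5
peak_area = rng.normal(1e6, 5e4, size=(n_qc_injections, n_targets))          # placeholder areas
retention_time = (300 + np.arange(n_qc_injections)[:, None] * 0.4
                  + rng.normal(0, 0.2, size=(n_qc_injections, n_targets)))   # placeholder RTs, s

cv_percent = 100 * peak_area.std(axis=0, ddof=1) / peak_area.mean(axis=0)
rt_drift_s = retention_time[-1] - retention_time[0]    # per-target drift over the sequence

for t in range(n_targets):
    # Thresholds of 15% CV and 10 s drift are assumed, illustrative limits
    flag = "OK" if cv_percent[t] < 15 and abs(rt_drift_s[t]) < 10 else "CHECK"
    print(f"target {t}: CV={cv_percent[t]:.1f}%  RT drift={rt_drift_s[t]:.1f}s  {flag}")
```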

  16. A multi-channel setup to study fractures in scintillators

    NASA Astrophysics Data System (ADS)

    Tantot, A.; Bouard, C.; Briche, R.; Lefèvre, G.; Manier, B.; Zaïm, N.; Deschanel, S.; Vanel, L.; Di Stefano, P. C. F.

    2016-12-01

    To investigate fractoluminescence in scintillating crystals used for particle detection, we have developed a multi-channel setup built around samples of double-cleavage drilled compression (DCDC) geometry in a controllable atmosphere. The setup allows the continuous digitization over hours of various parameters, including the applied load and the compressive strain of the sample, as well as the acoustic emission. Emitted visible light is recorded with nanosecond resolution, and crack propagation is monitored using infrared lighting and a camera. An example of application to Bi4Ge3O12 (BGO) is provided.

  17. Method And Apparatus For Evaluation Of High Temperature Superconductors

    DOEpatents

    Fishman, Ilya M.; Kino, Gordon S.

    1996-11-12

    A technique for evaluation of high-T_c superconducting films and single crystals is based on measurement of the temperature dependence of the differential optical reflectivity of high-T_c materials. In the claimed method, specific parameters of the superconducting transition, such as the critical temperature, the anisotropy of the differential optical reflectivity response, and the part of the optical losses related to sample quality, are measured. The apparatus for performing this technique includes pump and probe sources, cooling means for sweeping the sample temperature across the critical temperature, and a polarization controller for controlling the state of polarization of a probe light beam.

  18. Water-quality data for the ground-water network in eastern Broward County, Florida, 1983-84

    USGS Publications Warehouse

    Waller, B.G.; Cannon, F.L.

    1986-01-01

    During 1983-84, groundwater from 63 wells located at 31 sites throughout eastern Broward County, Florida, was sampled and analyzed to determine baseline water quality conditions. The physical and chemical parameters analyzed included field measurements (pH and temperature), physical characteristics (color, turbidity, and specific conductance), major inorganic ions, nutrients (nitrogen, phosphorus, and carbon), selected metals, and total phenolic compounds. Groundwater samples were collected at the end of the dry season (April) and during the wet season (July and September). These data are tabulated, by well, in this report. (USGS)

  19. A multi-particle crushing apparatus for studying rock fragmentation due to repeated impacts

    NASA Astrophysics Data System (ADS)

    Huang, S.; Mohanty, B.; Xia, K.

    2017-12-01

    Rock crushing is a common process in mining and related operations. Although a number of particle crushing tests have been proposed in the literature, most of them are concerned with single-particle crushing, i.e., a single rock sample is crushed in each test. Considering the realistic scenario in crushers where many fragments are involved, a laboratory crushing apparatus is developed in this study. This device consists of a Hopkinson pressure bar system and a piston-holder system. The Hopkinson pressure bar system is used to apply calibrated dynamic loads to the piston-holder system, and the piston-holder system is used to hold rock samples and to recover fragments for subsequent particle size analysis. The rock samples are subjected to three to seven impacts under three impact velocities (2.2, 3.8, and 5.0 m/s), with the feed size of the rock particle samples limited between 9.5 and 12.7 mm. Several key parameters are determined from this test, including particle size distribution parameters, impact velocity, loading pressure, and total work. The results show that the total work correlates well with resulting fragmentation size distribution, and the apparatus provides a useful tool for studying the mechanism of crushing, which further provides guidelines for the design of commercial crushers.
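
    One common way to obtain particle size distribution parameters from sieve data is to fit a Rosin-Rammler distribution; the study does not state which distribution model it uses, so this choice and the sieve data below are purely illustrative.

```python
# Fitting a Rosin-Rammler distribution, P(d) = 1 - exp(-(d/d0)^n), to sieve
# data by linearisation: ln(-ln(1 - P)) = n*ln(d) - n*ln(d0).
import numpy as np

sieve_size_mm = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.7])        # assumed sieve sizes
percent_passing = np.array([8.0, 18.0, 35.0, 60.0, 85.0, 99.0])  # assumed sieve results

P = percent_passing / 100.0
y = np.log(-np.log(1.0 - P))
x = np.log(sieve_size_mm)
n, intercept = np.polyfit(x, y, 1)
d0 = np.exp(-intercept / n)       # characteristic size (63.2% passing), mm

print(f"uniformity exponent n = {n:.2f}, characteristic size d0 = {d0:.2f} mm")
```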

  20. Estimation of laser beam pointing parameters in the presence of atmospheric turbulence.

    PubMed

    Borah, Deva K; Voelz, David G

    2007-08-10

    The problem of estimating mechanical boresight and jitter performance of a laser pointing system in the presence of atmospheric turbulence is considered. A novel estimator based on maximizing an average probability density function (pdf) of the received signal is presented. The proposed estimator uses a Gaussian far-field mean irradiance profile, and the irradiance pdf is assumed to be lognormal. The estimates are obtained using a sequence of return signal values from the intended target. Alternatively, one can think of the estimates being made by a cooperative target using the received signal samples directly. The estimator does not require sample-to-sample atmospheric turbulence parameter information. The approach is evaluated using wave optics simulation for both weak and strong turbulence conditions. Our results show that very good boresight and jitter estimation performance can be obtained under the weak turbulence regime. We also propose a novel technique to include the effect of very low received intensity values that cannot be measured well by the receiving device. The proposed technique provides significant improvement over a conventional approach where such samples are simply ignored. Since our method is derived from the lognormal irradiance pdf, the performance under strong turbulence is degraded. However, the ideas can be extended with appropriate pdf models to obtain more accurate results under strong turbulence conditions.

  1. Exercise Capacity Assessment by the Modified Shuttle Walk Test and its Correlation with Biochemical Parameters in Obese Children and Adolescents.

    PubMed

    de Assumpção, Priscila Kurz; Heinzmann-Filho, João Paulo; Isaia, Heloisa Ataíde; Holzschuh, Flávia; Dalcul, Tiéle; Donadio, Márcio Vinícius Fagundes

    2018-03-23

    To evaluate exercise capacity of obese children and adolescents compared with normal-weight individuals and to investigate possible correlations with blood biochemical parameters. In this study, children and adolescents between 6 and 18 y were included and divided into control (eutrophic) and obese groups according to body mass index (BMI). Data were collected regarding demographic, anthropometric, waist circumference and exercise capacity through the Modified Shuttle Walk Test (MSWT). In the obese group, biochemical parameters in the blood (total cholesterol, HDL, LDL, triglycerides and glucose) were evaluated, and a physical activity questionnaire was applied. Seventy seven participants were included; 27 in the control group and 50 obese. There was no significant difference between the two groups regarding sample characteristics, except for body weight, BMI and waist circumference. Most obese children presented results of biochemical tests within the desirable limit, though none were considered active. There was a significant exercise capacity reduction (p < 0.001) in the obese group compared to control subjects. Positive correlations were identified for the MSWT with age and height, and a negative correlation with BMI. However, there were no correlations with the biochemical parameters analyzed. Obese children and adolescents have reduced exercise capacity when compared to normal individuals. The MSWT performance seems to have a negative association with BMI, but is not correlated with blood biochemical parameters.

  2. Quadrant III RFI draft report: Appendix B-I, Volume 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-12-01

    In order to determine the nature and extent of contamination at a RCRA site it is often necessary to investigate and characterize the chemical composition of the medium in question that represents background conditions. Background is defined as current conditions present at a site which are unaffected by past treatment, storage, or disposal of hazardous waste (OEPA, 1991). The background composition of soils at the Portsmouth Gaseous Diffusion Plant (PORTS) site was characterized for the purpose of comparing investigative soil data to a background standard for each metal on the Target Compound List/Target Analyte List and each radiological parameter of concern in this RFI. Characterization of background compositions with respect to organic parameters was not performed because the organic parameters in the TCL/TAL are not naturally occurring at the site and because the site is not located in a highly industrialized area nor downgradient from another unrelated hazardous waste site. Characterization of the background soil composition with respect to metals and radiological parameters was performed by collecting and analyzing soil boring and hand-auger samples in areas deemed unaffected by past treatment, storage, or disposal of hazardous waste. Criteria used in determining whether a soil sample location would be representative of the true background condition included: environmental history of the location, relation to Solid Waste Management Units (SWMUs), prevailing wind direction, surface runoff direction, and ground-water flow direction.

  3. Quadrant III RFI draft report: Appendix B-I, Volume 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-12-01

    In order to determine the nature and extent of contamination at a RCRA site it is often necessary to investigate and characterize the chemical composition of the medium in question that represents background conditions. Background is defined as current conditions present at a site which are unaffected by past treatment, storage, or disposal of hazardous waste (OEPA, 1991). The background composition of soils at the Portsmouth Gaseous Diffusion Plant (PORTS) site was characterized for the purpose of comparing investigative soil data to a background standard for each metal on the Target Compound List/Target Analyte List and each radiological parameter of concern in this RFI. Characterization of background compositions with respect to organic parameters was not performed because the organic parameters in the TCL/TAL are not naturally occurring at the site and because the site is not located in a highly industrialized area nor downgradient from another unrelated hazardous waste site. Characterization of the background soil composition with respect to metals and radiological parameters was performed by collecting and analyzing soil boring and hand-auger samples in areas deemed unaffected by past treatment, storage, or disposal of hazardous waste. Criteria used in determining whether a soil sample location would be representative of the true background condition included: environmental history of the location, relation to Solid Waste Management Units (SWMUs), prevailing wind direction, surface runoff direction, and ground-water flow direction.

  4. A Gaussian Approximation Approach for Value of Information Analysis.

    PubMed

    Jalal, Hawre; Alarid-Escudero, Fernando

    2018-02-01

    Most decisions are associated with uncertainty. Value of information (VOI) analysis quantifies the opportunity loss associated with choosing a suboptimal intervention based on current imperfect information. VOI can inform the value of collecting additional information, resource allocation, research prioritization, and future research designs. However, in practice, VOI remains underused due to many conceptual and computational challenges associated with its application. Expected value of sample information (EVSI) is rooted in Bayesian statistical decision theory and measures the value of information from a finite sample. The past few years have witnessed a dramatic growth in computationally efficient methods to calculate EVSI, including metamodeling. However, little research has been done to simplify the experimental data collection step inherent to all EVSI computations, especially for correlated model parameters. This article proposes a general Gaussian approximation (GA) of the traditional Bayesian updating approach based on the original work by Raiffa and Schlaifer to compute EVSI. The proposed approach uses a single probabilistic sensitivity analysis (PSA) data set and involves 2 steps: 1) a linear metamodel step to compute the EVSI on the preposterior distributions and 2) a GA step to compute the preposterior distribution of the parameters of interest. The proposed approach is efficient and can be applied for a wide range of data collection designs involving multiple non-Gaussian parameters and unbalanced study designs. Our approach is particularly useful when the parameters of an economic evaluation are correlated or interact.
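
    A heavily simplified two-strategy sketch of the two-step idea described above: a linear metamodel of incremental net benefit fitted to PSA samples, followed by a Gaussian-approximation step that shrinks the fitted values by sqrt(n/(n + n0)), with n0 an assumed effective prior sample size. The data, the simple linear metamodel, and the shrinkage form are illustrative and not the authors' full procedure.

```python
# Simplified EVSI calculation from a single PSA data set (two strategies).
import numpy as np

rng = np.random.default_rng(8)
n_psa = 10_000
theta = rng.normal(0.6, 0.2, n_psa)                       # parameter of interest (placeholder)
inb = 5_000 * (theta - 0.5) + rng.normal(0, 800, n_psa)   # incremental net benefit (placeholder)

# 1) Linear metamodel: conditional expectation of INB given theta
X = np.column_stack([np.ones(n_psa), theta])
coef = np.linalg.lstsq(X, inb, rcond=None)[0]
g_hat = X @ coef

# 2) Gaussian-approximation step: shrink fitted values for a study of size n
n_study, n0 = 200, 50                       # n0: assumed effective prior sample size
shrink = np.sqrt(n_study / (n_study + n0))
g_tilde = g_hat.mean() + shrink * (g_hat - g_hat.mean())

evsi = np.mean(np.maximum(g_tilde, 0.0)) - max(np.mean(inb), 0.0)
print(f"EVSI estimate: {evsi:.1f} (same monetary units as net benefit)")
```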

  5. Assessing the Impact of Model Parameter Uncertainty in Simulating Grass Biomass Using a Hybrid Carbon Allocation Strategy

    NASA Astrophysics Data System (ADS)

    Reyes, J. J.; Adam, J. C.; Tague, C.

    2016-12-01

    Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.

  6. Integrating satellite actual evapotranspiration patterns into distributed model parametrization and evaluation for a mesoscale catchment

    NASA Astrophysics Data System (ADS)

    Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.

    2016-12-01

    Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. For that it is important to identify the model parameters that can change spatial patterns before the satellite based hydrologic model calibration. Our study is based on two main pillars: first we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET). Second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected as it allows for a change in the spatial distribution of key soil parameters through the calibration of pedo-transfer function parameters and includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition the simulated AET can be estimated at the spatial resolution suitable for comparison to the spatial patterns observed using MODIS data. We introduce a new dynamic scaling function employing remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 parameters of 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters whereas streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibration of mHM against observed streamflow does not reduce the spatial errors in AET while they improve only the streamflow simulations. We will further examine the results of model calibration using only multi spatial objective functions measuring the association between observed AET and simulated AET maps and another case including spatial and streamflow metrics together.

  7. Assessing uncertainty and sensitivity of model parameterizations and parameters in WRF affecting simulated surface fluxes and land-atmosphere coupling over the Amazon region

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.

    2016-12-01

    This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate compared to the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
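
    A minimal sketch of attributing ensemble variance to the physics-scheme families via the between-group variance of group means, in the spirit of a multi-way analysis of variance; the ensemble table and output values are random placeholders standing in for the stratified 120-member WRF design.

```python
# First-order variance attribution across scheme families in a physics ensemble.
import itertools
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)
full = list(itertools.product(range(6), range(3), range(6), range(3)))   # all 6x3x6x3 combinations
members = rng.choice(len(full), size=120, replace=False)                 # illustrative 120-member subset
df = pd.DataFrame([full[i] for i in members],
                  columns=["microphysics", "convection", "pbl", "land_surface"])
# Placeholder output: a simulated surface flux for each ensemble member
df["latent_heat_flux"] = (10 * df["pbl"] + 5 * df["land_surface"]
                          + rng.normal(0, 8, len(df)))

total_var = df["latent_heat_flux"].var(ddof=0)
for factor in ["microphysics", "convection", "pbl", "land_surface"]:
    group_means = df.groupby(factor)["latent_heat_flux"].mean()
    counts = df.groupby(factor)["latent_heat_flux"].count()
    between = np.average((group_means - df["latent_heat_flux"].mean()) ** 2, weights=counts)
    print(f"{factor:13s} explains {100 * between / total_var:.1f}% of ensemble variance")
```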

  8. An uncertainty analysis of the hydrogen source term for a station blackout accident in Sequoyah using MELCOR 1.8.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles

    2014-03-01

    A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate uncertainty in various predicted quantities, such as hydrogen generation or release of fission products, under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing the range of possible outcomes. Forty MELCOR code realizations of Sequoyah were conducted that included 10 uncertain parameters, producing a range of in-vessel hydrogen quantities. The total hydrogen produced was approximately 583 kg ± 131 kg. Sensitivity analyses revealed expected trends with respect to the parameters of greatest importance; however, considerable scatter in the results was observed when plotted against any of the uncertain parameters, with no parameter manifesting dominant effects on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, in order to further reduce predicted hydrogen uncertainty, it would be necessary to reduce all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled and synergistic phenomena is an inherent aspect of complex systems and cannot be reduced to point value estimates. Probabilistic analyses such as the one demonstrated in this work are important to properly characterize the response of complex systems such as severe accident progression in nuclear power plants.
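
    A minimal sketch of propagating Latin hypercube samples drawn from user-specified distributions through a model to characterize the spread of an output is shown below; the toy hydrogen model and its distributions are illustrative assumptions, not MELCOR or the report's inputs, and SciPy's stats.qmc module (SciPy >= 1.7) is assumed to be available.

```python
# Latin hypercube propagation of parameter uncertainty through a placeholder model.
import numpy as np
from scipy.stats import qmc, norm, uniform  # requires scipy >= 1.7 for stats.qmc

def toy_hydrogen_model(zr_oxidation_rate, debris_porosity, relocation_temp):
    """Stand-in for a code realization returning in-vessel hydrogen mass (kg)."""
    return 400.0 + 300.0 * zr_oxidation_rate + 150.0 * debris_porosity \
        - 0.05 * (relocation_temp - 2200.0)

n_real, n_par = 40, 3
unit = qmc.LatinHypercube(d=n_par, seed=42).random(n_real)

# Map unit-cube samples onto user-specified likelihood distributions.
samples = np.column_stack([
    uniform(loc=0.2, scale=0.8).ppf(unit[:, 0]),     # oxidation rate multiplier
    uniform(loc=0.3, scale=0.4).ppf(unit[:, 1]),     # debris porosity
    norm(loc=2400.0, scale=100.0).ppf(unit[:, 2]),   # relocation temperature (K)
])

h2 = np.array([toy_hydrogen_model(*row) for row in samples])
print(f"hydrogen mass: mean {h2.mean():.0f} kg, spread +/- {h2.std():.0f} kg "
      f"(min {h2.min():.0f}, max {h2.max():.0f})")
```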

  9. RCRA Facility investigation report for Waste Area Grouping 6 at Oak Ridge National Laboratory, Oak Ridge, Tennessee. Volume 5, Technical Memorandums 06-09A, 06-10A, and 06-12A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    This report provides a detailed summary of the activities carried out to sample groundwater at Waste Area Grouping (WAG) 6. The analytical results for samples collected during Phase 1, Activity 2 of the WAG 6 Resource Conservation and Recovery Act Facility Investigation (RFI) are also presented. In addition, analytical results for Phase 1 activity sampling events for which data were not previously reported are included in this TM. A summary of the groundwater sampling activities at WAG 6 to date is given in the Introduction. The Methodology section describes the sampling procedures and analytical parameters. Six attachments are included. Attachments 1 and 2 provide analytical results for selected RFI groundwater samples and an ORNL sampling event. Attachment 3 provides a summary of the contaminants detected in each well sampled for all sampling events conducted at WAG 6. Bechtel National Inc. (BNI)/IT Corporation Contract Laboratory (IT) RFI analytical methods and detection limits are given in Attachment 4. Attachment 5 provides the Oak Ridge National Laboratory (ORNL)/Analytical Chemistry Division (ACD) analytical methods and detection limits and Resource Conservation and Recovery Act (RCRA) quarterly compliance monitoring (1988-1989). Attachment 6 provides ORNL/ACD groundwater analytical methods and detection limits (for the 1990 RCRA semi-annual compliance monitoring).

  10. Design and Development of Microcontroller-Based Clinical Chemistry Analyser for Measurement of Various Blood Biochemistry Parameters

    PubMed Central

    Taneja, S. R.; Kumar, Jagdish; Thariyan, K. K.; Verma, Sanjeev

    2005-01-01

    Clinical chemistry analyser is a high-performance microcontroller-based photometric biochemical analyser used to measure various blood biochemical parameters such as blood glucose, urea, protein, bilirubin, and so forth, and also to measure and observe enzyme reaction kinetics while performing other biochemical tests such as ALT (alanine aminotransferase), amylase, AST (aspartate aminotransferase), and so forth. These tests are of great significance in biochemistry and are used for diagnostic purposes and for classifying various disorders and diseases such as diabetes, liver malfunction, renal diseases, and so forth. An inexpensive clinical chemistry analyser developed by the authors is described in this paper. It is an open system in which any reagent kit available in the market can be used. The system is based on the principle of absorbance-transmittance photometry. The system design is based around an 80C31 microcontroller with RAM, EPROM, and peripheral interface devices. The developed system incorporates a light source, an optical module, interference filters of various wavelengths, a Peltier device for maintaining the required temperature of the mixture in the flow cell, a peristaltic pump for sample aspiration, a graphic LCD for displaying blood parameters, patients' test results, and kinetic test graphs, a 40-column mini thermal printer, and a 32-key keyboard for executing various functions. The laboratory tests conducted on the instrument covered the versatility of the analyser, the flexibility of the software, and the treatment of samples. The prototype was tested and evaluated on over 1000 blood samples for seventeen blood parameters. The evaluation was carried out at the Department of Biochemistry, Government Medical College and Hospital. The test results were found to be comparable with those of other standard instruments. PMID:18924737
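
    For readers unfamiliar with absorbance-transmittance photometry, the sketch below shows the Beer-Lambert relation and a generic end-point calculation against a standard of known concentration; the detector counts, wavelength, and standard concentration are hypothetical and not values from the instrument described.

```python
# Beer-Lambert absorbance photometry with an end-point calculation (illustrative only).
import math

def absorbance(i_sample, i_blank):
    """A = -log10(T), with transmittance T = I_sample / I_blank."""
    return -math.log10(i_sample / i_blank)

def concentration_endpoint(a_sample, a_standard, c_standard):
    """End-point method: concentration scales linearly with absorbance."""
    return c_standard * a_sample / a_standard

# Example: glucose read at 505 nm against a 100 mg/dL standard (hypothetical counts).
a_std = absorbance(i_sample=6200, i_blank=10000)
a_smp = absorbance(i_sample=5400, i_blank=10000)
print(f"glucose ~ {concentration_endpoint(a_smp, a_std, 100.0):.1f} mg/dL")
```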

  11. Design and development of microcontroller-based clinical chemistry analyser for measurement of various blood biochemistry parameters.

    PubMed

    Taneja, S R; Gupta, R C; Kumar, Jagdish; Thariyan, K K; Verma, Sanjeev

    2005-01-01

    Clinical chemistry analyser is a high-performance microcontroller-based photometric biochemical analyser used to measure various blood biochemical parameters such as blood glucose, urea, protein, bilirubin, and so forth, and also to measure and observe enzyme reaction kinetics while performing other biochemical tests such as ALT (alanine aminotransferase), amylase, AST (aspartate aminotransferase), and so forth. These tests are of great significance in biochemistry and are used for diagnostic purposes and for classifying various disorders and diseases such as diabetes, liver malfunction, renal diseases, and so forth. An inexpensive clinical chemistry analyser developed by the authors is described in this paper. It is an open system in which any reagent kit available in the market can be used. The system is based on the principle of absorbance-transmittance photometry. The system design is based around an 80C31 microcontroller with RAM, EPROM, and peripheral interface devices. The developed system incorporates a light source, an optical module, interference filters of various wavelengths, a Peltier device for maintaining the required temperature of the mixture in the flow cell, a peristaltic pump for sample aspiration, a graphic LCD for displaying blood parameters, patients' test results, and kinetic test graphs, a 40-column mini thermal printer, and a 32-key keyboard for executing various functions. The laboratory tests conducted on the instrument covered the versatility of the analyser, the flexibility of the software, and the treatment of samples. The prototype was tested and evaluated on over 1000 blood samples for seventeen blood parameters. The evaluation was carried out at the Department of Biochemistry, Government Medical College and Hospital. The test results were found to be comparable with those of other standard instruments.

  12. A Systematic Review and Meta-Analysis of the Effects of Transcranial Direct Current Stimulation (tDCS) Over the Dorsolateral Prefrontal Cortex in Healthy and Neuropsychiatric Samples: Influence of Stimulation Parameters.

    PubMed

    Dedoncker, Josefien; Brunoni, Andre R; Baeken, Chris; Vanderhasselt, Marie-Anne

    2016-01-01

    Research into the effects of transcranial direct current stimulation (tDCS) of the dorsolateral prefrontal cortex (DLPFC) on cognitive functioning is increasing rapidly. However, methodological heterogeneity in prefrontal tDCS research is also increasing, particularly in the technical stimulation parameters that might influence tDCS effects. Our aim was to systematically examine the influence of technical stimulation parameters on DLPFC-tDCS effects. We performed a systematic review and meta-analysis of tDCS studies targeting the DLPFC published from the first data available to February 2016. Only single-session, sham-controlled, within-subject studies reporting the effects of tDCS on cognition in healthy controls and neuropsychiatric patients were included. Evaluation of 61 studies showed that after single-session anodal tDCS (a-tDCS), but not cathodal tDCS (c-tDCS), participants responded faster and more accurately on cognitive tasks. Sub-analyses showed that following a-tDCS, healthy subjects responded faster, while neuropsychiatric patients responded more accurately. Importantly, different stimulation parameters affected a-tDCS effects, but not c-tDCS effects, on accuracy in healthy samples versus neuropsychiatric patients: increased current density and charge density resulted in improved accuracy in healthy samples, most prominently in females; for neuropsychiatric patients, task performance during a-tDCS resulted in stronger increases in accuracy rates compared to task performance following a-tDCS. In sum, healthy participants respond faster, but not more accurately, on cognitive tasks after a-tDCS. However, increasing the current density and/or charge density might enhance response accuracy, particularly in females. In contrast, online task performance leads to greater increases in response accuracy than offline task performance in neuropsychiatric patients. Possible implications and practical recommendations are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Impact of heavy smoking on the clinical, microbiological and immunological parameters of patients with dental implants: a prospective cross-sectional study.

    PubMed

    Ata-Ali, Javier; Flichy-Fernández, Antonio Juan; Alegre-Domingo, Teresa; Ata-Ali, Fadi; Peñarrocha-Diago, Miguel

    2016-11-01

    The aim of the present study was to investigate how heavy smoking influences the clinical, microbiological, and host-response characteristics of the peri-implant sulcus fluid of patients with healthy dental implants. A total of 29 individuals with 74 dental implants were included in the present study; 20 implants were in heavy smokers and 54 were in non-smokers. The modified gingival index, modified plaque index, and probing pocket depth were evaluated. The periodontopathogenic bacteria Tannerella forsythia, Treponema denticola, and Porphyromonas gingivalis were evaluated, together with the total bacterial load. Peri-implant sulcus fluid samples were analyzed for the quantification of interleukin-8, interleukin-1β, interleukin-6, interleukin-10, and tumor necrosis factor-α. No significant differences in the clinical parameters evaluated were found between the groups, although smokers had poorer peri-implant parameters. Among the smokers, the subgingival microbiota was composed of a greater number of periodontal pathogens, although these differences were not statistically significant. Smokers showed a greater expression of interleukin-1β, interleukin-6, interleukin-10, and tumor necrosis factor-α, while interleukin-8 was slightly, though not significantly, higher among non-smokers. Although smokers presented deeper probing depths, bleeding on probing, and a peri-implant microbiota composed of a greater number of periodontal pathogens than non-smoking patients, these differences were not significant. In the present study, and in relation to the samples analyzed, smoking alone did not influence the immunological and microbiological parameters of dental implants with healthy peri-implant tissues. Further studies with larger samples are required to better evaluate the influence of smoking on dental implants. © 2015 Wiley Publishing Asia Pty Ltd.

  14. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    PubMed

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing the model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
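
    As a small illustration of the hierarchical autoregressive error-model idea, the sketch below evaluates the exact Gaussian log-likelihood of residuals under an AR(1) process, with the AR coefficient and innovation standard deviation treated as sampled hyperparameters; the residual series is synthetic and the code is not the paper's implementation.

```python
# AR(1) correlated-error log-likelihood for residuals (data minus model prediction).
import numpy as np

def ar1_loglike(residuals, phi, sigma):
    """Exact Gaussian log-likelihood of residuals under an AR(1) error model.

    phi   : autoregressive coefficient (|phi| < 1), sampled as a hyperparameter
    sigma : innovation standard deviation, sampled as a hyperparameter
    """
    r = np.asarray(residuals, dtype=float)
    # First residual comes from the stationary distribution N(0, sigma^2/(1-phi^2)).
    var0 = sigma ** 2 / (1.0 - phi ** 2)
    ll = -0.5 * (np.log(2 * np.pi * var0) + r[0] ** 2 / var0)
    # Remaining residuals: innovations e_t = r_t - phi * r_{t-1} ~ N(0, sigma^2).
    innov = r[1:] - phi * r[:-1]
    ll += -0.5 * np.sum(np.log(2 * np.pi * sigma ** 2) + innov ** 2 / sigma ** 2)
    return ll

# Synthetic correlated residuals for illustration.
rng = np.random.default_rng(3)
true_phi, true_sigma = 0.7, 0.1
res = np.zeros(200)
for t in range(1, res.size):
    res[t] = true_phi * res[t - 1] + rng.normal(0.0, true_sigma)

print("log-likelihood at true hyperparameters :", round(ar1_loglike(res, 0.7, 0.1), 1))
print("log-likelihood assuming independence   :", round(ar1_loglike(res, 0.0, 0.1), 1))
```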

  15. Design and performance evaluation of the imaging payload for a remote sensing satellite

    NASA Astrophysics Data System (ADS)

    Abolghasemi, Mojtaba; Abbasi-Moghadam, Dariush

    2012-11-01

    In this paper, an analysis method and corresponding analytical tools for the design of the experimental imaging payload (IMPL) of a remote sensing satellite (SINA-1) are presented. We begin with the top-level customer system performance requirements and constraints, derive the critical system and component parameters, and then analyze imaging payload performance until a preliminary design that meets the customer requirements is reached. We consider the system parameters and components composing the image chain for the imaging payload system, which include aperture, focal length, field of view, image plane dimensions, pixel dimensions, detection quantum efficiency, and optical filter requirements. The performance analysis is accomplished by calculating the imaging payload's SNR (signal-to-noise ratio) and imaging resolution. The noise components include photon noise due to the signal scene and atmospheric background, the cold shield, out-of-band optical filter leakage, and electronic noise. System resolution is simulated through cascaded modulation transfer functions (MTFs) and includes effects due to optics, image sampling, and system motion. Calculation results for the SINA-1 satellite are also presented.
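
    A minimal sketch of a cascaded-MTF budget of the kind described is shown below: the system MTF is taken as the product of a diffraction-limited optics MTF, a detector sampling-aperture MTF, and a linear motion-smear MTF. The wavelength, F-number, pixel pitch, and smear values are illustrative assumptions, not SINA-1 design parameters.

```python
# Cascaded MTF budget: optics x detector aperture x motion smear (illustrative values).
import numpy as np

wavelength = 0.55e-6        # m
f_number = 8.0              # optics F/#
pixel_pitch = 10e-6         # m, detector sampling aperture
smear = 5e-6                # m, image motion during integration

f_cut = 1.0 / (wavelength * f_number)            # optics cutoff frequency (cy/m)
f = np.linspace(1.0, 0.5 / pixel_pitch, 200)     # up to the detector Nyquist frequency

def mtf_optics(f, f_cut):
    """Diffraction-limited MTF of a circular aperture (incoherent illumination)."""
    x = np.clip(f / f_cut, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x ** 2))

mtf_detector = np.abs(np.sinc(f * pixel_pitch))  # np.sinc(x) = sin(pi x)/(pi x)
mtf_motion = np.abs(np.sinc(f * smear))
mtf_system = mtf_optics(f, f_cut) * mtf_detector * mtf_motion

nyquist = 0.5 / pixel_pitch
print(f"system MTF at Nyquist ({nyquist:.0f} cy/m): {mtf_system[-1]:.3f}")
```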

  16. The Importance of Behavioral Thresholds and Objective Functions in Contaminant Transport Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Kang, M.; Thomson, N. R.

    2007-12-01

    The TCE release from The Lockformer Company in Lisle, Illinois resulted in a plume in a confined aquifer that is more than 4 km long and has impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region of the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities, and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and the observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo samples were found to be inadequate for the uncertainty analysis of this case study due to the method's inability to find parameter sets that meet the predefined physical criteria. Successful results were achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For the uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria performs the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in this work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
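
    To make the contrast between objective functions concrete, the sketch below compares a least-squares objective with a Cauchy-type M-estimator and a dead-zone variant on synthetic residuals containing outliers; the tuning constant and tolerance are illustrative choices, and the code does not reproduce the study's modified estimator.

```python
# Least-squares vs. robust (Cauchy) objectives, with an optional dead zone.
import numpy as np

def l2_objective(residuals):
    return 0.5 * np.sum(residuals ** 2)

def cauchy_objective(residuals, c=2.385):
    """Cauchy (Lorentzian) M-estimator: grows logarithmically, down-weighting outliers."""
    return np.sum(0.5 * c ** 2 * np.log1p((residuals / c) ** 2))

def dead_zone(residuals, tol):
    """Zero-out residuals smaller than a tolerance (e.g., observational noise floor)."""
    return np.sign(residuals) * np.maximum(np.abs(residuals) - tol, 0.0)

rng = np.random.default_rng(7)
res = rng.normal(0.0, 1.0, 100)
res[:5] += 25.0  # a few gross outliers, e.g. biased observations

print("L2 objective              :", round(l2_objective(res), 1))
print("Cauchy M-estimator        :", round(cauchy_objective(res), 1))
print("Cauchy with 0.5 dead zone :", round(cauchy_objective(dead_zone(res, 0.5)), 1))
```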

  17. ECCM Scheme against Interrupted Sampling Repeater Jammer Based on Parameter-Adjusted Waveform Design

    PubMed Central

    Wei, Zhenhua; Peng, Bo; Shen, Rui

    2018-01-01

    Interrupted sampling repeater jamming (ISRJ) is an effective way of deceiving coherent radar sensors, especially linear frequency modulated (LFM) radar. In this paper, for a simplified scenario with a single jammer, we propose a dynamic electronic counter-countermeasure (ECCM) scheme based on jammer parameter estimation and transmitted signal design. Firstly, the LFM waveform is transmitted to estimate the main jamming parameters by exploiting the discontinuity of the ISRJ's time-frequency (TF) characteristics. Then, a parameter-adjusted intra-pulse frequency-coded signal, whose ISRJ signal after matched filtering forms only a single false target, is designed adaptively according to the estimated parameters, i.e., the sampling interval, sampling duration, and number of repeater retransmissions. Ultimately, for typical jamming scenes with different jamming-to-signal ratios (JSR) and duty cycles, we propose two particular ISRJ suppression approaches. Simulation results validate the effective performance of the proposed scheme for countering ISRJ, and the trade-off relationship between the two approaches is demonstrated. PMID:29642508
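
    The sketch below illustrates, on synthetic signals, why ISRJ is effective against LFM pulse compression: slices of the pulse sampled and retransmitted by the jammer still compress into strong spurious peaks after the matched filter. The sample rate, bandwidth, and jammer slice parameters are arbitrary choices, not those of the paper.

```python
# LFM pulse compression and the matched-filter response to an interrupted-sampling
# repeater signal (illustrative parameters only).
import numpy as np

fs, T, B = 20e6, 50e-6, 5e6                  # sample rate, pulse width, bandwidth
t = np.arange(int(fs * T)) / fs
k = B / T                                    # chirp rate
lfm = np.exp(1j * np.pi * k * t ** 2)        # baseband LFM pulse

def matched_filter(x, ref):
    """Correlate the received signal with the transmitted pulse."""
    return np.convolve(x, np.conj(ref[::-1]), mode="same")

# ISRJ: the jammer samples short slices of the pulse and retransmits them,
# producing a comb of slices rather than the full waveform.
slice_len, period = int(2e-6 * fs), int(8e-6 * fs)
gate = (np.arange(lfm.size) % period) < slice_len
isrj = lfm * gate

echo_peak = np.abs(matched_filter(lfm, lfm)).max()
jam_out = np.abs(matched_filter(isrj, lfm))
print("true echo peak (normalized) : 1.00")
print(f"strongest ISRJ false target : {jam_out.max() / echo_peak:.2f}")
```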

  18. Application of municipal biosolids to dry-land wheat fields - A monitoring program near Deer Trail, Colorado (USA). A presentation for an international conference: "The Future of Agriculture: Science, Stewardship, and Sustainability", August 7-9, 2006, Sacramento, CA

    USGS Publications Warehouse

    Crock, James G.; Smith, David B.; Yager, Tracy J.B.

    2006-01-01

    Since late 1993, Metro Wastewater Reclamation District of Denver (Metro District), a large wastewater treatment plant in Denver, Colorado, has applied Grade I, Class B biosolids to about 52,000 acres of non-irrigated farmland and rangeland near Deer Trail, Colorado. In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring ground water at part of this site. In 1999, the USGS began a more comprehensive study of the entire site to address stakeholder concerns about the chemical effects of biosolids applications. This more comprehensive monitoring program has recently been extended through 2010. Monitoring components of the more comprehensive study included biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock ground water, and stream bed sediment. Streams at the site are dry most of the year, so samples of stream bed sediment deposited after rain were used to indicate surface-water effects. This presentation will only address biosolids, soil, and crops. More information about these and the other monitoring components is presented in the literature (e.g., Yager and others, 2004a, b, c, d) and at the USGS Web site for the Deer Trail area studies at http://co.water.usgs.gov/projects/CO406/CO406.html. Priority parameters identified by the stakeholders for all monitoring components included the total concentrations of nine trace elements (arsenic, cadmium, copper, lead, mercury, molybdenum, nickel, selenium, and zinc), plutonium isotopes, and gross alpha and beta activity, which are regulated by Colorado for biosolids to be used as an agricultural soil amendment. Nitrogen and chromium also were priority parameters for the ground water and sediment components. In general, the objective of each component of the study was to determine whether concentrations of priority parameters (1) were higher than regulatory limits, (2) were increasing with time, or (3) were significantly higher in biosolids-applied areas than in a similar farmed area where biosolids were not applied. Where sufficient samples could be collected, statistical methods were used to evaluate effects. Rigorous quality assurance was included in all aspects of the study. The roles of hydrology and geology also were considered in the design, data collection, and interpretation phases of the study. Study results indicate that the chemistry of the biosolids from the Denver plant was consistent during 1999-2005, and total concentrations of regulated trace elements were consistently lower than the regulatory limits. Plutonium isotopes were not detected in the biosolids. Leach tests using deionized water to simulate natural precipitation indicate arsenic, molybdenum, and nickel were the most soluble priority parameters in the biosolids. Study results show no significant difference in concentrations of priority parameters between biosolids-applied soils and unamended soils where no biosolids were applied. However, biosolids were applied only twice during 1999-2003. The next soil sampling is not scheduled until 2010. To date, concentrations of most of the priority parameters were not much greater in the biosolids than in natural soil from the sites. Therefore, many more biosolids applications would need to occur before biosolids effects on the soil priority constituents can be quantified. Leach tests using deionized water to simulate precipitation indicate that molybdenum and selenium were the priority parameters that were most soluble in both biosolids-applied soil and natural or unamended soil. Study results do not indicate significant differences in concentrations of priority parameters between crops grown in biosolids-applied areas and crops grown where no biosolids were applied. However, crops were grown only twice during 1999-2003, so only two crop samples could be collected. The wheat-grain elemental data collected during 1999-2003 for both biosolids-applied areas and unamended areas are similar.

  19. CFL3D User's Manual (Version 5.0)

    NASA Technical Reports Server (NTRS)

    Krist, Sherrie L.; Biedron, Robert T.; Rumsey, Christopher L.

    1998-01-01

    This document is the User's Manual for the CFL3D computer code, a thin-layer Reynolds-averaged Navier-Stokes flow solver for structured multiple-zone grids. Descriptions of the code's input parameters, non-dimensionalizations, file formats, boundary conditions, and equations are included. Sample 2-D and 3-D test cases are also described, and many helpful hints for using the code are provided.

  20. Environmental magnetism and effective radium concentration: the case study of the painted cave of Pech Merle, France

    NASA Astrophysics Data System (ADS)

    Isambert, Aude; Girault, Frédéric; Perrier, Frédéric; Bouquerel, Hélène; Bourges, François

    2017-04-01

    Painted caves, showing testimony of prehistoric art, are nowadays subject to intense attention in order to understand the conditions of their stability and avoid degradation. The preservation of cultural sequences and archaeological artefacts represents a particularly crucial issue in the case of caves opened to visitors. For this purpose, a better knowledge of these preserved environments, which imprint the paleoenvironmental conditions at the time of deposition, is needed. In this context, different environmental parameters of the Pech Merle cave, in France, are currently being actively monitored, including temperature, hygrometry, and gases such as CO2 and radon-222 (a decay product of radium-226). This temporal monitoring needs to be complemented by a detailed characterisation of the site, including its petrophysical and mineralogical properties. To better constrain the environmental and paleoenvironmental context, more than 100 samples including soils, sediments, rocks and speleothems were collected inside and outside the cave area. We report here the magnetic properties of powdered samples (low-field susceptibility, hysteresis parameters, and saturation magnetization) coupled with effective radium concentration (ECRa) measurements. We observe that the magnetic susceptibility, which ranges over 5 orders of magnitude from calcareous rocks to topsoils and argillaceous filling deposits, correlates well with the ECRa values. This correlation, previously observed (Girault et al., 2016) in very different geological contexts, could be interpreted as a common concentration of sources, and as a signature of natural samples, in contrast to anthropic environments disturbed by human activities, where the association is blurred. This study demonstrates the general interest of combining two different parameters - here low-field magnetic susceptibility and effective radium concentration determined using non-destructive techniques in the field and in the laboratory - to physically characterize geosystems and to propose a novel approach to discriminate naturally preserved and human-impacted or polluted sites. Girault, F., Perrier, F., Poitou, C., Isambert, A., Théveniaut, H., Laperche, V., ... & Douay, F. (2016). Effective radium concentration in topsoils contaminated by lead and zinc smelters. Science of The Total Environment, 566, 865-876.

  1. Metagenomic covariation along densely sampled environmental gradients in the Red Sea

    PubMed Central

    Thompson, Luke R; Williams, Gareth J; Haroon, Mohamed F; Shibl, Ahmed; Larsen, Peter; Shorenstein, Joshua; Knight, Rob; Stingl, Ulrich

    2017-01-01

    Oceanic microbial diversity covaries with physicochemical parameters. Temperature, for example, explains approximately half of global variation in surface taxonomic abundance. It is unknown, however, whether covariation patterns hold over narrower parameter gradients and spatial scales, or whether they extend to mesopelagic depths. We collected and sequenced 45 epipelagic and mesopelagic microbial metagenomes on a meridional transect through the eastern Red Sea. We asked which environmental parameters explain the most variation in relative abundances of taxonomic groups, gene ortholog groups, and pathways—at a spatial scale of <2000 km, along narrow but well-defined latitudinal and depth-dependent gradients. We also asked how microbes are adapted to gradients and extremes in irradiance, temperature, salinity, and nutrients, examining the responses of individual gene ortholog groups to these parameters. Functional and taxonomic metrics were equally well explained (75–79%) by environmental parameters. However, only functional and not taxonomic covariation patterns were conserved when comparing with an intruding water mass with different physicochemical properties. Temperature explained the most variation in each metric, followed by nitrate, chlorophyll, phosphate, and salinity. That nitrate explained more variation than phosphate suggested nitrogen limitation, consistent with low surface N:P ratios. Covariation of gene ortholog groups with environmental parameters revealed patterns of functional adaptation to the challenging Red Sea environment: high irradiance, temperature, salinity, and low nutrients. Nutrient-acquisition gene ortholog groups were anti-correlated with concentrations of their respective nutrient species, recapturing trends previously observed across much larger distances and environmental gradients. This dataset of metagenomic covariation along densely sampled environmental gradients includes online data exploration supplements, serving as a community resource for marine microbial ecology. PMID:27420030

  2. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

    Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in the Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted via analysis of variance to obtain the preliminary influential parameters. The number of parameters was greatly reduced, from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of the efficiency criteria did not necessarily indicate excellent performance for the hydrological signatures. For most samples from the Sobol sensitivity analysis, water yield was simulated very well; however, the lowest and maximum annual daily runoffs were underestimated, and most of the seven-day minimum runoffs were overestimated. Nevertheless, good performance of these three signatures still exists in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. The work in this study supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.
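
    As a compact illustration of the variance-based step, the sketch below estimates first-order Sobol' indices with the standard Saltelli pick-and-freeze estimator, using the Ishigami test function as a stand-in for DHSVM; it is not the study's implementation.

```python
# First-order Sobol' indices via the Saltelli pick-and-freeze estimator.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Classic three-parameter test function with known sensitivity indices."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
n, d = 20000, 3
# Two independent sample matrices over the input space [-pi, pi]^3.
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))

f_A, f_B = ishigami(A), ishigami(B)
var_total = np.var(np.concatenate([f_A, f_B]))

first_order = []
for i in range(d):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]                 # swap in column i from B ("pick and freeze")
    f_ABi = ishigami(AB_i)
    s_i = np.mean(f_B * (f_ABi - f_A)) / var_total
    first_order.append(s_i)

print("estimated first-order Sobol' indices:", np.round(first_order, 3))
print("analytical values for Ishigami      : [0.314 0.442 0.   ]")
```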

  3. MALDI (matrix assisted laser desorption ionization) Imaging Mass Spectrometry (IMS) of skin: Aspects of sample preparation.

    PubMed

    de Macedo, Cristiana Santos; Anderson, David M; Schey, Kevin L

    2017-11-01

    MALDI (matrix assisted laser desorption ionization) Imaging Mass Spectrometry (IMS) allows molecular analysis of biological materials making possible the identification and localization of molecules in tissues, and has been applied to address many questions on skin pathophysiology, as well as on studies about drug absorption and metabolism. Sample preparation for MALDI IMS is the most important part of the workflow, comprising specimen collection and preservation, tissue embedding, cryosectioning, washing, and matrix application. These steps must be carefully optimized for specific analytes of interest (lipids, proteins, drugs, etc.), representing a challenge for skin analysis. In this review, critical parameters for MALDI IMS sample preparation of skin samples will be described. In addition, specific applications of MALDI IMS of skin samples will be presented including wound healing, neoplasia, and infection. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Communication: Multiple atomistic force fields in a single enhanced sampling simulation

    NASA Astrophysics Data System (ADS)

    Hoang Viet, Man; Derreumaux, Philippe; Nguyen, Phuong H.

    2015-07-01

    The main concerns of biomolecular dynamics simulations are the convergence of the conformational sampling and the dependence of the results on the force fields. While the first issue can be addressed by employing enhanced sampling techniques such as simulated tempering or replica exchange molecular dynamics, repeating these simulations with different force fields is very time consuming. Here, we propose an automatic method that includes different force fields in a single advanced sampling simulation. Conformational sampling using three all-atom force fields is enhanced by simulated tempering, and by formulating the weight parameters of the simulated tempering method in terms of the energy fluctuations, the system is able to perform a random walk in both temperature and force-field space. The method is first demonstrated on a 1D system and then validated by the folding of the 10-residue chignolin peptide in explicit water.

  5. Inverse Analysis of Irradiated Nuclear Material Gamma Spectra via Nonlinear Optimization

    NASA Astrophysics Data System (ADS)

    Dean, Garrett James

    Nuclear forensics is the collection of technical methods used to identify the provenance of nuclear material interdicted outside of regulatory control. Techniques employed in nuclear forensics include optical microscopy, gas chromatography, mass spectrometry, and alpha, beta, and gamma spectrometry. This dissertation focuses on the application of inverse analysis to gamma spectroscopy to estimate the history of pulse-irradiated nuclear material. Previous work in this area has (1) utilized destructive analysis techniques to supplement the nondestructive gamma measurements, and (2) been applied to samples composed of spent nuclear fuel with long irradiation and cooling times. Previous analyses have employed local nonlinear solvers, simple empirical models of gamma spectral features, and simple detector models of gamma spectral features. The algorithm described in this dissertation uses a forward model of the irradiation and measurement process within a global nonlinear optimizer to estimate the unknown irradiation history of pulse-irradiated nuclear material. The forward model includes a detector response function for photopeaks only. The algorithm uses a novel hybrid global and local search algorithm to quickly estimate the irradiation parameters, including neutron fluence, cooling time, and original composition. Sequential, time-correlated series of measurements are used to reduce the uncertainty in the estimated irradiation parameters. This algorithm allows for in situ measurements of interdicted irradiated material. The increase in analysis speed comes with a decrease in the information that can be determined, but the sample fluence, cooling time, and composition can be determined within minutes of a measurement. Furthermore, pulse-irradiated nuclear material has the characteristic feature that irradiation time and flux cannot be independently estimated. The algorithm has been tested against pulse-irradiated samples of pure special nuclear material with cooling times of four minutes to seven hours. The algorithm described is capable of determining the cooling time and the fluence the sample was exposed to within 10%, as well as roughly estimating the relative concentrations of nuclides present in the original composition.
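
    A minimal sketch of the hybrid global-plus-local optimization idea is shown below, here using SciPy's differential evolution followed by a Nelder-Mead polish to fit a toy two-parameter forward model of photopeak intensities; the forward model, decay constants, and data are synthetic placeholders, not the dissertation's algorithm or detector response.

```python
# Hybrid global-then-local nonlinear fit of a toy irradiation forward model.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def forward_model(params, decay_constants):
    """Toy photopeak intensities: activity ~ fluence * exp(-lambda * cooling_time)."""
    fluence, cooling_time = params
    return fluence * np.exp(-decay_constants * cooling_time)

decay_constants = np.array([2.9e-3, 1.2e-4, 6.4e-6])      # 1/s, hypothetical nuclides
true_params = (5.0e14, 1800.0)                            # fluence, cooling time (s)
measured = forward_model(true_params, decay_constants) * \
    (1 + 0.02 * np.random.default_rng(4).normal(size=3))

def misfit(params):
    """Relative least-squares misfit between modeled and measured peak intensities."""
    model = forward_model(params, decay_constants)
    return np.sum(((model - measured) / measured) ** 2)

bounds = [(1e13, 1e16), (60.0, 3600.0 * 24)]
# Global stage: differential evolution explores the full parameter space.
coarse = differential_evolution(misfit, bounds, seed=1, tol=1e-8)
# Local stage: gradient-free polish around the global candidate.
refined = minimize(misfit, coarse.x, method="Nelder-Mead")

print("estimated fluence, cooling time:", refined.x)
```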

  6. Laser confocal microscope for analysis of 3013 inner container closure weld region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez-Rodriguez, M. J.

    As part of the protocol to investigate corrosion in the inner container closure weld region (ICCWR), a laser confocal microscope (LCM) was used to perform close visual examination of the surface and measurements of corrosion features on the surface. However, initial analysis of selected destructively evaluated (DE) containers using the LCM revealed several challenges for acquiring, processing, and interpreting the data. These challenges include the topography of the ICCWR sample, surface features, and the amount of surface area for collecting data under high-magnification conditions. In FY17, the LCM parameters were investigated to identify the appropriate parameter values for data acquisition and identification of regions of interest. Using these parameter values, selected DE containers were analyzed to determine the extent of the ICCWR to be examined.

  7. Relationship of In Vivo MR Parameters to Histopathological and Molecular Characteristics of Newly Diagnosed, Nonenhancing Lower-Grade Gliomas.

    PubMed

    Luks, Tracy L; McKnight, Tracy Richmond; Jalbert, Llewellyn E; Williams, Aurelia; Neill, Evan; Lobo, Khadjia A; Persson, Anders I; Perry, Arie; Phillips, Joanna J; Molinaro, Annette M; Chang, Susan M; Nelson, Sarah J

    2018-06-05

    The goal of this research was to elucidate the relationship between WHO 2016 molecular classifications of newly diagnosed, nonenhancing lower-grade gliomas (LrGG), tissue sample histopathology, and magnetic resonance (MR) parameters derived from diffusion, perfusion, and 1H spectroscopic imaging from the tissue sample locations and the entire tumor. A total of 135 patients were scanned prior to initial surgery, with tumor cellularity scores obtained from 88 image-guided tissue samples. MR parameters were obtained from corresponding sample locations, and histograms of normalized MR parameters within the T2 fluid-attenuated inversion recovery lesion were analyzed in order to evaluate differences between subgroups. For tissue samples, higher tumor scores were related to increased normalized apparent diffusion coefficient (nADC), lower fractional anisotropy (nFA), lower cerebral blood volume (nCBV), higher choline (nCho), and lower N-acetylaspartate (nNAA). Within the T2 lesion, higher tumor grade was associated with higher nADC, lower nFA, and higher Cho to NAA index. Pathological analysis confirmed that diffusion and metabolic parameters increased and perfusion decreased with tumor cellularity. This information can be used to select targets for tissue sampling and to aid in making decisions about treating residual disease. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Characterization of Whole Grain Pasta: Integrating Physical, Chemical, Molecular, and Instrumental Sensory Approaches.

    PubMed

    Marti, Alessandra; Cattaneo, Stefano; Benedetti, Simona; Buratti, Susanna; Abbasi Parizad, Parisa; Masotti, Fabio; Iametti, Stefania; Pagani, Maria Ambrogina

    2017-11-01

    The consumption of whole-grain foods, including pasta, has been increasing steadily. In the case of whole-grain pasta, given the many different producers, it seems important to have objective parameters to define its overall quality. In this study, commercial whole-grain pasta samples representative of the Italian market were characterized from both a molecular and an electronic-senses (electronic nose and electronic tongue) standpoint in order to provide a survey of the properties of different commercial samples. Only one pasta product showed very low levels of heat-damage markers (furosine and pyrraline), suggesting that this sample underwent a low-temperature drying treatment. In all samples, the furosine content was directly correlated with protein structural indices, since protein structure compactness increased with increasing levels of heat-damage markers. The electronic senses were able to discriminate among pasta samples according to the intensity of the heat treatment during the drying step. The pasta sample with low furosine content was discriminated by umami taste and by sensors responding to aliphatic and inorganic compounds. Data obtained with this multidisciplinary approach are meant to provide hints for identifying useful indices of pasta quality. As observed for semolina pasta, objective parameters based on heat damage were best suited to define the overall quality of whole-grain pasta, almost independently of compositional differences among commercial samples. Drying treatments of different intensity also had an impact on instrumental sensory traits, which may provide a reliable alternative to the analytical determination of chemical markers of heat damage whenever time-consuming procedures need to be avoided. © 2017 Institute of Food Technologists®.

  9. The Impact of Item Position Change on Item Parameters and Common Equating Results under the 3PL Model

    ERIC Educational Resources Information Center

    Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet

    2012-01-01

    Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…
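
    For reference, the three-parameter logistic (3PL) item response function named in the title can be written in a few lines; the item parameter values below are arbitrary illustrative choices.

```python
# 3PL item response function: discrimination a, difficulty b, pseudo-guessing c.
import numpy as np

def p_correct_3pl(theta, a, b, c):
    """P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Under parameter invariance, the same (a, b, c) estimates should describe the item
# for any adequately sampled examinee group.
theta = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(np.round(p_correct_3pl(theta, a=1.2, b=0.3, c=0.2), 3))
```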

  10. Neutron activation analysis of certified samples by the absolute method

    NASA Astrophysics Data System (ADS)

    Kadem, F.; Belouadah, N.; Idiri, Z.

    2015-07-01

    Nuclear reaction analysis techniques are mainly based on the relative method or on the use of activation cross sections. In order to validate nuclear data for the calculated cross sections evaluated from systematic studies, we used the neutron activation analysis (NAA) technique to determine the concentrations of the various constituents of certified samples of animal blood, milk, and hay. In this analysis, the absolute method is used. The neutron activation technique involves irradiating the sample and subsequently measuring its activity. The fundamental activation equation connects several physical parameters, including the cross section, which is essential for the quantitative determination of the different elements composing the sample without resorting to a standard sample. Called the absolute method, it allows a measurement as accurate as the relative method. The results obtained by the absolute method show that the values are as precise as those of the relative method, which requires a standard sample for each element to be quantified.
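
    A minimal sketch of the absolute method's core step is given below: inverting the activation equation to recover the element mass from the measured activity. Detector efficiency and gamma emission probability are omitted for brevity, and the numerical inputs are illustrative, not the study's.

```python
# Activation equation inverted for element mass (idealized; efficiency/gamma yield omitted):
# A = (m * N_A * theta / M) * sigma * phi * (1 - exp(-lambda*t_irr)) * exp(-lambda*t_decay)
import math

N_A = 6.022e23            # Avogadro's number (1/mol)

def element_mass(activity, molar_mass, isotopic_abundance, cross_section_cm2,
                 flux, half_life, t_irr, t_decay):
    """Invert the activation equation for the mass (g) of the target element."""
    lam = math.log(2.0) / half_life
    saturation = 1.0 - math.exp(-lam * t_irr)      # build-up during irradiation
    decay = math.exp(-lam * t_decay)               # decay before counting
    per_gram = (N_A * isotopic_abundance / molar_mass) * cross_section_cm2 \
        * flux * saturation * decay
    return activity / per_gram

# Hypothetical example: sodium via 23Na(n,gamma)24Na, thermal cross section ~0.53 b.
mass_g = element_mass(activity=1.5e3,             # Bq at counting time
                      molar_mass=23.0,            # g/mol
                      isotopic_abundance=1.0,     # 23Na is 100% abundant
                      cross_section_cm2=0.53e-24, # 0.53 barn
                      flux=1.0e13,                # n/cm^2/s
                      half_life=14.96 * 3600.0,   # s (24Na)
                      t_irr=6 * 3600.0,           # s
                      t_decay=2 * 3600.0)         # s
print(f"estimated Na mass: {mass_g * 1e6:.3f} micrograms")
```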

  11. Nonequilibrium umbrella sampling in spaces of many order parameters

    NASA Astrophysics Data System (ADS)

    Dickson, Alex; Warmflash, Aryeh; Dinner, Aaron R.

    2009-02-01

    We recently introduced an umbrella sampling method for obtaining nonequilibrium steady-state probability distributions projected onto an arbitrary number of coordinates that characterize a system (order parameters) [A. Warmflash, P. Bhimalapuram, and A. R. Dinner, J. Chem. Phys. 127, 154112 (2007)]. Here, we show how our algorithm can be combined with the image update procedure from the finite-temperature string method for reversible processes [E. Vanden-Eijnden and M. Venturoli, "Revisiting the finite temperature string method for calculation of reaction tubes and free energies," J. Chem. Phys. (in press)] to enable restricted sampling of a nonequilibrium steady state in the vicinity of a path in a many-dimensional space of order parameters. For the study of transitions between stable states, the adapted algorithm results in improved scaling with the number of order parameters and the ability to progressively refine the regions of enforced sampling. We demonstrate the algorithm by applying it to a two-dimensional model of driven Brownian motion and a coarse-grained (Ising) model for nucleation under shear. It is found that the choice of order parameters can significantly affect the convergence of the simulation; local magnetization variables other than those used previously for sampling transition paths in Ising systems are needed to ensure that the reactive flux is primarily contained within a tube in the space of order parameters. The relation of this method to other algorithms that sample the statistics of path ensembles is discussed.

  12. A New Method for Generating Probability Tables in the Unresolved Resonance Region

    DOE PAGES

    Holcomb, Andrew M.; Leal, Luiz C.; Rahnema, Farzad; ...

    2017-04-18

    A new method for constructing probability tables in the unresolved resonance region (URR) has been developed. This new methodology is an extensive modification of the single-level Breit-Wigner (SLBW) pseudo-resonance pair sequence method commonly used to generate probability tables in the URR. The new method uses a Monte Carlo process to generate many pseudo-resonance sequences by first sampling the average resonance parameter data in the URR and then converting the sampled resonance parameters to the more robust R-matrix limited (RML) format. Furthermore, for each sampled set of pseudo-resonance sequences, the temperature-dependent cross sections are reconstructed on a small grid around the energy of reference using the Reich-Moore formalism and the Leal-Hwang Doppler broadening methodology. We then use the effective cross sections calculated at the energies of reference to construct probability tables in the URR. The RML cross-section reconstruction algorithm has been rigorously tested for a variety of isotopes, including 16O, 19F, 35Cl, 56Fe, 63Cu, and 65Cu. The new URR method also produced normalized cross-section factor probability tables for 238U that were found to be in agreement with current standards. The modified 238U probability tables were shown to produce results in excellent agreement with several standard benchmarks, including the IEU-MET-FAST-007 (BIG TEN), IEU-MET-FAST-003, and IEU-COMP-FAST-004 benchmarks.

  13. Radio Evolution of Supernova Remnants Including Nonlinear Particle Acceleration: Insights from Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Pavlović, Marko Z.; Urošević, Dejan; Arbutina, Bojan; Orlando, Salvatore; Maxted, Nigel; Filipović, Miroslav D.

    2018-01-01

    We present a model for the radio evolution of supernova remnants (SNRs) obtained by using three-dimensional hydrodynamic simulations coupled with nonlinear kinetic theory of cosmic-ray (CR) acceleration in SNRs. We model the radio evolution of SNRs on a global level by performing simulations for a wide range of the relevant physical parameters, such as the ambient density, supernova (SN) explosion energy, acceleration efficiency, and magnetic field amplification (MFA) efficiency. We attribute the observed spread of radio surface brightnesses for corresponding SNR diameters to the spread of these parameters. In addition to our simulations of Type Ia SNRs, we also considered SNR radio evolution in denser, nonuniform circumstellar environments modified by the progenitor star wind. These simulations start with the mass of the ejecta substantially higher than in the case of a Type Ia SN and presumably lower shock speed. The magnetic field is understandably seen as very important for the radio evolution of SNRs. In terms of MFA, we include both resonant and nonresonant modes in our large-scale simulations by implementing models obtained from first-principles, particle-in-cell simulations and nonlinear magnetohydrodynamical simulations. We test the quality and reliability of our models on a sample consisting of Galactic and extragalactic SNRs. Our simulations give Σ ‑ D slopes between ‑4 and ‑6 for the full Sedov regime. Recent empirical slopes obtained for the Galactic samples are around ‑5, while those for the extragalactic samples are around ‑4.

  14. Extraction and derivatization of polar herbicides for GC-MS analyses.

    PubMed

    Ranz, Andreas; Maier, Eveline; Motter, Herbert; Lankmayr, Ernst

    2008-09-01

    A sample preparation procedure including simultaneous microwave-assisted (MA) extraction and derivatization for the determination of chlorophenoxy acids in soil samples is presented. For a selective and sensitive measurement, an analytical technique such as GC coupled with MS needs to be adopted. For GC analyses, chlorophenoxy acids have to be converted into more volatile and thermally stable derivatives. Derivatization by means of microwave radiation offers new alternatives in terms of shorter derivatization times and reduced susceptibility to the formation of artefacts. Extraction and derivatization into methyl esters (ME) were performed with sulphuric acid and methanol. Due to the novelty of the simultaneous extraction and derivatization assisted by microwave radiation, a careful investigation and optimization of the influential reaction parameters was necessary. It could be shown that the combination of sulphuric acid and methanol provides a fast sample preparation, including an efficient clean-up procedure. The data obtained by the described method are in good agreement with those published for the reference material. Finally, compared to conventional heating and also to the standard EPA procedure, the sample preparation time could be considerably shortened.

  15. Aqueous geochemical data from the analysis of stream-water samples collected in June and August 2008—Taylor Mountains 1:250,000- and Dillingham D-4 1:63,360-scale quadrangles, Alaska

    USGS Publications Warehouse

    Wang, Bronwen; Owens, Victoria; Bailey, Elizabeth; Lee, Greg

    2011-01-01

    We report on the chemical analysis of water samples collected from the Taylor Mountains 1:250,000- and Dillingham D-4 1:63,360-scale quadrangles, Alaska. Reported parameters include pH, conductivity, water temperature, major cation and anion concentrations, and trace-element concentrations. We collected the samples as part of a multiyear U.S. Geological Survey project entitled "Geologic and Mineral Deposit Data for Alaskan Economic Development." Data presented here are from samples collected in June and August 2008. Minimal interpretation accompanies this data release. This is the fourth release of aqueous geochemical data from this project; data from samples collected in 2004, 2005, and 2006 were published previously. The data in this report augment but do not duplicate or supersede the previous data releases. Site selection was based on a regional sampling strategy that focused on first- and second-order drainages. Water sample sites were selected on the basis of landscape parameters that included physiography, wetland extent, lithological changes, and a cursory field review of mineralogy from pan concentrates. Stream water in the study area is dominated by bicarbonate (HCO3-), although in a few samples more than 50 percent of the anionic charge can be attributed to sulfate (SO42-). The major-cation chemistry of these samples ranges from Ca2+-Mg2+ dominated to a mix of Ca2+, Mg2+, Na+, and K+. In most cases, analysis of duplicate samples showed good agreement for the major cations and major anions, with the exception of the duplicate pair at site 08TA565, where the differences for Ca, Mg, Cl, and CaCO3 exceeded 25 percent and the differences for the trace elements As, Fe, and Mn also exceeded 25 percent. Chloride concentration varied by more than 25 percent in 5 of the 11 duplicate samples. Trace-element concentrations in these samples generally were at or near the detection limit for the method used and, except for Co at site 08TA565, generally good agreement was observed between duplicate samples for elements with detectable concentrations. Major-ion concentrations were below detection limits in all field blanks, and trace-element concentrations also were generally below detection limits; however, Co, Mn, Na, Zn, Cl, and Hg were detected in one or more field blank samples.

  16. Sensor-Web Operations Explorer

    NASA Technical Reports Server (NTRS)

    Meemong, Lee; Miller, Charles; Bowman, Kevin; Weidner, Richard

    2008-01-01

    Understanding the atmospheric state and its impact on air quality requires observations of trace gases, aerosols, clouds, and physical parameters across temporal and spatial scales that range from minutes to days and from meters to more than 10,000 kilometers. Observations include continuous local monitoring for particle formation; field campaigns for emissions, local transport, and chemistry; and periodic global measurements for continental transport and chemistry. Understanding includes a global data assimilation framework capable of hierarchical coupling, dynamic integration of chemical data and atmospheric models, and feedback loops between models and observations. Because the objective of the sensor-web system is to observe trace gases, aerosols, clouds, and physical parameters, an integrated observation infrastructure composed of space-borne, airborne, and in-situ sensors will be simulated based on their measurement physics properties. Because the objective of sensor-web operation is to optimally plan for multiple heterogeneous sensors, sampling strategies will be explored and the science impact will be analyzed based on comprehensive modeling of atmospheric phenomena, including convection, transport, and chemical processes. Topics include system architecture, software architecture, hardware architecture, process flow, technology infusion, challenges, and future directions.

  17. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  18. Critical evaluation of methodology commonly used in sample collection, storage and preparation for the analysis of pharmaceuticals and illicit drugs in surface water and wastewater by solid phase extraction and liquid chromatography-mass spectrometry.

    PubMed

    Baker, David R; Kasprzyk-Hordern, Barbara

    2011-11-04

    The main aim of this manuscript is to provide a comprehensive and critical verification of methodology commonly used for sample collection, storage and preparation in studies concerning the analysis of pharmaceuticals and illicit drugs in aqueous environmental samples with the usage of SPE-LC/MS techniques. This manuscript reports the results of investigations into several sample preparation parameters that to the authors' knowledge have not been reported or have received very little attention. This includes: (i) effect of evaporation temperature and (ii) solvent with regards to solid phase extraction (SPE) extracts; (iii) effect of silanising glassware; (iv) recovery of analytes during vacuum filtration through glass fibre filters and (v) pre LC-MS filter membranes. All of these parameters are vital to develop efficient and reliable extraction techniques; an essential factor given that target drug residues are often present in the aqueous environment at ng L(-1) levels. Presented is also the first comprehensive review of the stability of illicit drugs and pharmaceuticals in wastewater. Among the parameters studied are: time of storage, temperature and pH. Over 60 analytes were targeted including stimulants, opioid and morphine derivatives, benzodiazepines, antidepressants, dissociative anaesthetics, drug precursors, human urine indicators and their metabolites. The lack of stability of analytes in raw wastewater was found to be significant for many compounds. For instance, 34% of compounds studied reported a stability change >15% after only 12 h in raw wastewater stored at 2 °C; a very important finding given that wastewater is typically collected with the use of 24 h composite samplers. The stability of these compounds is also critical given the recent development of so-called 'sewage forensics' or 'sewage epidemiology' in which concentrations of target drug residues in wastewater are used to back-calculate drug consumption. Without an understanding of stability, under (or over) reporting of consumption estimations may take place. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Parameters of Concrete Modified with Glass Meal and Chalcedonite Dust

    NASA Astrophysics Data System (ADS)

    Kotwa, Anna

    2017-10-01

    Additives used in the production of concrete mixtures affect the rheological properties and the parameters of hardened concrete, including compressive strength, water resistance, durability and shrinkage. Their application can reduce the use of cement and production costs. The scheduled program of laboratory tests included the preparation of six batches of concrete mixtures with the addition of glass meal and/or chalcedonite dust. Mineral dust is a waste product obtained from crushed aggregate mining, with grain size below 0.063 μm. The main ingredient of chalcedonite dust is silica. The glass meal used in the study is a material with very fine grain size, less than 65 μm; 60% to 90% of the sample falls within this particle size. The additives were used to replace cement in the concrete mixes in amounts of 15% and 25%; the amount of aggregate was left unchanged. The study used Portland cement CEM I 42.5R. Concrete mixes were prepared with a constant ratio w/s = 0.4. The aim of the study was to identify the effect of the addition of chalcedonite dust and/or glass meal on the parameters of hardened concrete, i.e. compressive strength, water absorption and capillarity. The additives used in the laboratory tests significantly affect the compressive strength. The largest decrease in compressive strength, 34.35%, was recorded for the samples with 50% cement substitution by additives. The smallest decrease, an average of 15%, was noted in concrete with the addition of 15% chalcedonite dust or 15% glass meal. The absorption study shows that all concretes with the addition of chalcedonite dust and glass meal gained between 2.7% and 3.1% in weight for the test batches. This is a very good result, probably due to grout sealing. In the capillary action test, the percentage weight gains of the samples range from 4.6% to 5.1%. However, the reference concrete obtained the lowest water absorption compared to the other batches.

  20. Efficient computation of the joint sample frequency spectra for multiple populations.

    PubMed

    Kamm, John A; Terhorst, Jonathan; Song, Yun S

    2017-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
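
    The joint SFS itself is straightforward to tabulate once derived-allele counts are available; the hard part addressed by the paper is computing the expected SFS under a demographic model, which momi does. As a minimal sketch (using made-up counts rather than momi's own API), the observed joint SFS for two populations can be built as follows.

```python
import numpy as np

def joint_sfs(derived_counts_pop1, derived_counts_pop2, n1, n2):
    """Tabulate the joint SFS: entry (i, j) counts sites where i of the n1 sampled
    alleles in population 1 and j of the n2 in population 2 carry the derived allele."""
    sfs = np.zeros((n1 + 1, n2 + 1), dtype=int)
    for i, j in zip(derived_counts_pop1, derived_counts_pop2):
        sfs[i, j] += 1
    return sfs

# Hypothetical data: derived-allele counts at 1000 polymorphic sites,
# for samples of 10 and 8 haploid sequences from two populations.
rng = np.random.default_rng(1)
c1 = rng.integers(0, 11, size=1000)
c2 = rng.integers(0, 9, size=1000)
print(joint_sfs(c1, c2, 10, 8).shape)   # (11, 9)
```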

  1. Efficient computation of the joint sample frequency spectra for multiple populations

    PubMed Central

    Kamm, John A.; Terhorst, Jonathan; Song, Yun S.

    2016-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity. PMID:28239248

  2. Hydrology and trout populations of cold-water rivers of Michigan and Wisconsin

    USGS Publications Warehouse

    Hendrickson, G.E.; Knutilla, R.L.

    1974-01-01

    Statistical multiple-regression analyses showed significant relationships between trout populations and hydrologic parameters. Parameters showing the higher levels of significance were temperature, hardness of water, percentage of gravel bottom, percentage of bottom vegetation, variability of streamflow, and discharge per unit drainage area. Trout populations increase with lower annual maximum water temperatures, with increasing water hardness, and with increasing percentage of gravel and bottom vegetation. Trout populations also increase with decreasing variability of streamflow and with increasing discharge per unit drainage area. Most hydrologic parameters were significant when evaluated collectively, but no parameter, by itself, showed a high degree of correlation with trout populations in regression analyses that included all the streams sampled. Regression analyses of stream segments that were restricted to certain limits of hardness, temperature, or percentage of gravel bottom showed improvements in correlation. Analyses of trout populations (in pounds per acre and pounds per mile) against hydrologic parameters resulted in regression equations from which trout populations could be estimated with standard errors of 89 and 84 percent, respectively.
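
    As a rough sketch of the kind of multiple-regression analysis described (with entirely hypothetical data and coefficients, not the USGS measurements), an ordinary least-squares fit of trout standing crop on several hydrologic predictors can be set up as follows.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60  # hypothetical stream segments

# Hypothetical hydrologic predictors (not the USGS data):
max_temp = rng.uniform(15, 27, n)        # annual max water temperature, deg C
hardness = rng.uniform(50, 300, n)       # mg/L as CaCO3
gravel = rng.uniform(0, 100, n)          # % gravel bottom
flow_var = rng.uniform(1, 10, n)         # streamflow variability index

# Hypothetical response, with the signs of effect reported in the abstract.
trout = (120 - 3.0 * max_temp + 0.2 * hardness + 0.5 * gravel - 4.0 * flow_var
         + rng.normal(0, 15, n))         # pounds per acre

X = np.column_stack([np.ones(n), max_temp, hardness, gravel, flow_var])
coef, *_ = np.linalg.lstsq(X, trout, rcond=None)
print(dict(zip(["intercept", "max_temp", "hardness", "gravel", "flow_var"],
               np.round(coef, 2))))
```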

  3. Manual versus automated blood sampling: impact of repeated blood sampling on stress parameters and behavior in male NMRI mice

    PubMed Central

    Kalliokoski, Otto; Sørensen, Dorte B; Hau, Jann; Abelson, Klas S P

    2014-01-01

    Facial vein (cheek blood) and caudal vein (tail blood) phlebotomy are two commonly used techniques for obtaining blood samples from laboratory mice, while automated blood sampling through a permanent catheter is a relatively new technique in mice. The present study compared physiological parameters, glucocorticoid dynamics as well as the behavior of mice sampled repeatedly for 24 h by cheek blood, tail blood or automated blood sampling from the carotid artery. Mice subjected to cheek blood sampling lost significantly more body weight, had elevated levels of plasma corticosterone, excreted more fecal corticosterone metabolites, and expressed more anxious behavior than did the mice of the other groups. Plasma corticosterone levels of mice subjected to tail blood sampling were also elevated, although less significantly. Mice subjected to automated blood sampling were less affected with regard to the parameters measured, and expressed less anxious behavior. We conclude that repeated blood sampling by automated blood sampling and from the tail vein is less stressful than cheek blood sampling. The choice between automated blood sampling and tail blood sampling should be based on the study requirements, the resources of the laboratory and skills of the staff. PMID:24958546

  4. QA/QC requirements for physical properties sampling and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Innis, B.E.

    1993-07-21

    This report presents results of an assessment of the available information concerning US Environmental Protection Agency (EPA) quality assurance/quality control (QA/QC) requirements and guidance applicable to sampling, handling, and analyzing physical parameter samples at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) investigation sites. Geotechnical testing laboratories measure the following physical properties of soil and sediment samples collected during CERCLA remedial investigations (RI) at the Hanford Site: moisture content, grain size by sieve, grain size by hydrometer, specific gravity, bulk density/porosity, saturated hydraulic conductivity, moisture retention, unsaturated hydraulic conductivity, and permeability of rocks by flowing air. Geotechnical testing laboratories also measure the following chemical parameters of soil and sediment samples collected during Hanford Site CERCLA RI: calcium carbonate and saturated column leach testing. Physical parameter data are used for (1) characterization of vadose and saturated zone geology and hydrogeology, (2) selection of monitoring well screen sizes, (3) support of modeling and analysis of the vadose and saturated zones, and (4) engineering design. The objectives of this report are to determine the QA/QC levels accepted in EPA Region 10 for the sampling, handling, and analysis of soil samples for physical parameters during CERCLA RI.

  5. Cost-constrained optimal sampling for system identification in pharmacokinetics applications with population priors and nuisance parameters.

    PubMed

    Sorzano, Carlos Oscars S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar

    2015-06-01

    Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is typically performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (cost-constrained). We use Monte Carlo simulations to estimate the average Fisher information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed with a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient, can accommodate any dosing regimen, and allows flexible therapeutic strategies. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
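
    A minimal sketch of the general approach is given below, with several simplifications: a hypothetical one-compartment oral-absorption model, a made-up lognormal population prior, and an exhaustive search over candidate sampling-time subsets in place of the genetic algorithm used in the paper. The minimax score is the largest relative parameter variance implied by the inverse Fisher information matrix, averaged over prior draws.

```python
import numpy as np
from itertools import combinations

def conc(t, ka, ke, V, dose=100.0):
    """One-compartment oral-absorption model (hypothetical): concentration at times t."""
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def fisher_info(times, theta, sigma=0.1, eps=1e-5):
    """Numerical Fisher information matrix for theta = (ka, ke, V) at the sampling times."""
    J = np.zeros((len(times), len(theta)))
    for k in range(len(theta)):
        up, dn = np.array(theta, dtype=float), np.array(theta, dtype=float)
        up[k] += eps
        dn[k] -= eps
        J[:, k] = (conc(times, *up) - conc(times, *dn)) / (2 * eps)
    return J.T @ J / sigma**2

rng = np.random.default_rng(3)
# Population prior (hypothetical): lognormal spread around typical (ka, ke, V).
priors = np.exp(rng.normal(np.log([1.2, 0.2, 30.0]), 0.2, size=(200, 3)))

candidates = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0, 24.0])  # hours
best = None
for subset in combinations(range(len(candidates)), 4):  # fixed cost: 4 samples
    times = candidates[list(subset)]
    # Minimax criterion: worst relative parameter variance, averaged over the prior.
    score = np.mean([(np.diag(np.linalg.inv(fisher_info(times, th))) / th**2).max()
                     for th in priors])
    if best is None or score < best[0]:
        best = (score, times)
print("chosen sampling times (h):", best[1])
```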

  6. Structural properties of H13 tool steel parts produced with use of selective laser melting technology

    NASA Astrophysics Data System (ADS)

    Šafka, J.; Ackermann, M.; Voleský, L.

    2016-04-01

    This paper deals with establishing building parameters for 1.2344 (H13) tool steel processed using Selective Laser Melting (SLM) technology with a layer thickness of 50 µm. In the first part of the work, a test matrix of models in the form of a cube with a chamfered edge was built under various building parameters, such as laser scanning speed and laser power. The resulting models were subjected to a set of tests including measurement of surface roughness, inspection of the inner structure with the aid of Light Optical Microscopy and Scanning Electron Microscopy, and evaluation of micro-hardness. These tests helped us evaluate the influence of changes in building strategy on the properties of the resulting model. In the second part of the work, the mechanical properties of the H13 steel were examined. For this purpose, a set of samples in the form of a “dog bone” was printed at three different alignments relative to the building plate and tested on a universal testing machine. Mechanical testing of the samples should then reveal whether the different orientation, and thus different layering of the material, influences its mechanical properties. For this type of material, the producer provides parameters for a layer thickness of 30 µm only; our 50 µm building strategy thus shortens the building time, which is especially valuable for large models. Results of the mechanical tests show slight variations in mechanical properties for the various sample alignments.

  7. A broadband toolbox for scanning microwave microscopy transmission measurements

    NASA Astrophysics Data System (ADS)

    Lucibello, Andrea; Sardi, Giovanni Maria; Capoccia, Giovanni; Proietti, Emanuela; Marcelli, Romolo; Kasper, Manuel; Gramse, Georg; Kienberger, Ferry

    2016-05-01

    In this paper, we present in detail the electromagnetic and mechanical design, the fabrication, and the testing of the first prototype of a Scanning Microwave Microscope (SMM) suitable for two-port transmission measurements, recording and processing the high-frequency transmission scattering parameter S21 passing through the investigated sample. The S21 toolbox is composed of a microwave emitter, placed below the sample, which excites an electromagnetic wave that passes through the sample under test and is collected by the cantilever, used as the detector and electrically matched for high-frequency measurements. This prototype enhances the actual capability of the instrument for sub-surface imaging at the nanoscale. Moreover, it allows the study of the electromagnetic properties of the material under test obtained through the measurement of the reflection (S11) and transmission (S21) parameters at the same time. The SMM operates between 1 GHz and 20 GHz, the current limit for the microwave matching of the cantilever, and the high-frequency signal is recorded by means of a two-port Vector Network Analyzer, using both contact and non-contact modes of operation, the latter especially intended for a fully nondestructive and topography-free characterization. This tool is an upgrade of the already established setup for the reflection-mode S11 measurement. Indeed, the proposed setup gives richer information in terms of scattering parameters, including amplitude and phase measurements, by means of the two-port arrangement.

  8. Calculation of Weibull strength parameters, Batdorf flaw density constants and related statistical quantities using PC-CARES

    NASA Technical Reports Server (NTRS)

    Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.

    1990-01-01

    This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running the PC-DOS/MS-DOS or IBM/MS OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. Included in the manual is the description of the calculation of shape and scale parameters of the two-parameter Weibull distribution using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. The methods for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, as well as the techniques for calculating the Batdorf flaw-density constants, are also described.
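
    As a small illustration of the maximum likelihood part of such an analysis (not the PC-CARES code itself), the two-parameter Weibull shape and scale can be estimated from a complete, uncensored strength sample with scipy, and a Kolmogorov-Smirnov statistic computed for the fitted distribution; the strength values below are synthetic.

```python
import numpy as np
from scipy import stats

# Synthetic complete (uncensored) sample of fracture strengths (MPa).
rng = np.random.default_rng(4)
true_shape, true_scale = 10.0, 350.0
strengths = stats.weibull_min.rvs(true_shape, scale=true_scale, size=30, random_state=rng)

# Maximum-likelihood estimates of the two-parameter Weibull (location fixed at 0).
shape_mle, loc, scale_mle = stats.weibull_min.fit(strengths, floc=0)
print(f"Weibull modulus (shape) m = {shape_mle:.2f}, characteristic strength = {scale_mle:.1f} MPa")

# Kolmogorov-Smirnov goodness-of-fit statistic for the fitted distribution.
ks = stats.kstest(strengths, "weibull_min", args=(shape_mle, 0, scale_mle))
print(f"K-S statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```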

  9. Stochastic differential equation (SDE) model of opening gold share price of bursa saham malaysia

    NASA Astrophysics Data System (ADS)

    Hussin, F. N.; Rahman, H. A.; Bahar, A.

    2017-09-01

    The Black-Scholes option pricing model is one of the most recognized stochastic differential equation models in mathematical finance. Two parameter estimation methods have been utilized for the geometric Brownian motion (GBM) model: the historical and the discrete method. The historical method is a statistical method that uses the independence and normality of logarithmic returns, giving the simplest parameter estimates. The discrete method, meanwhile, uses the transition density function of the lognormal diffusion process, derived via maximum likelihood. These two methods are used to obtain parameter estimates for samples of Malaysian gold share price data: Financial Times and Stock Exchange (FTSE) Bursa Malaysia Emas, and Financial Times and Stock Exchange (FTSE) Bursa Malaysia Emas Shariah. Modelling of the gold share price is essential since fluctuations in gold affect the worldwide economy, including Malaysia. It is found that the discrete method gives better parameter estimates than the historical method, yielding the smallest Root Mean Square Error (RMSE) value.
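
    The historical method reduces to estimating the drift and volatility of GBM from the sample mean and variance of logarithmic returns. A minimal sketch with a synthetic price series (not the FTSE Bursa Malaysia data) is shown below.

```python
import numpy as np

# Synthetic daily closing prices (not the FTSE Bursa Malaysia Emas series).
rng = np.random.default_rng(5)
prices = 1800 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, size=250)))

dt = 1.0 / 250.0                       # one trading day in years
log_ret = np.diff(np.log(prices))      # logarithmic returns

# Historical method: for dS = mu*S*dt + sigma*S*dW, log returns are
# N((mu - sigma^2/2)*dt, sigma^2*dt), so moments of log returns give the parameters.
sigma_hat = log_ret.std(ddof=1) / np.sqrt(dt)
mu_hat = log_ret.mean() / dt + 0.5 * sigma_hat**2
print(f"mu = {mu_hat:.3f} per year, sigma = {sigma_hat:.3f} per sqrt(year)")
```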

  10. Analytical Results for Municipal Biosolids Samples from a Monitoring Program Near Deer Trail, Colorado (U.S.A.), 2008

    USGS Publications Warehouse

    Crock, J.G.; Smith, D.B.; Yager, T.J.B.; Berry, C.J.; Adams, M.G.

    2009-01-01

    Since late 1993, Metro Wastewater Reclamation District of Denver (Metro District), a large wastewater treatment plant in Denver, Colo., has applied Grade I, Class B biosolids to about 52,000 acres of nonirrigated farmland and rangeland near Deer Trail, Colo. (U.S.A.). In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring groundwater at part of this site. In 1999, the USGS began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications to water, soil, and vegetation. This more comprehensive monitoring program has recently been extended through 2010. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock groundwater, and stream-bed sediment. Streams at the site are dry most of the year, so samples of stream-bed sediment deposited after rain were used to indicate surface-water effects. This report presents only the analytical results for the biosolids samples collected at the Metro District wastewater treatment plant in Denver and analyzed during 2008. Crock and others previously presented a compilation of analytical results for the biosolids samples collected and analyzed from 1999 through 2006, and data for the 2007 biosolids are reported separately. More information about the other monitoring components is presented elsewhere in the literature. Priority parameters for biosolids identified by the stakeholders and also regulated by Colorado when used as an agricultural soil amendment include the total concentrations of nine trace elements (arsenic, cadmium, copper, lead, mercury, molybdenum, nickel, selenium, and zinc), plutonium isotopes, and gross alpha and beta activity. Nitrogen and chromium also were priority parameters for the groundwater and sediment components.

  11. Climate is associated with prevalence and severity of radiographic hand osteoarthritis.

    PubMed

    Kalichman, L; Korosteshevsky, M; Batsevich, V; Kobyliansky, E

    2011-08-01

    The aim of this study was to evaluate whether geographic location and climatic factors are associated with the prevalence and severity of radiographic hand osteoarthritis (OA) in several samples of the same ethnicity. The total sample included 2079 ethnic Russians (900 males and 1179 females), belonging to 7 samples from different geographic locations in the former USSR. Places of residence were characterized by latitude, longitude, altitude and climatic parameters (mean temperatures, humidity, and daylight duration of January and July). Radiographs of the left hand were obtained from each individual. Osteoarthritis (OA) was evaluated in 14 hand joints according to Kellgren and Lawrence's grading system. OA was characterized by the presence of at least one affected joint and its severity by the number of affected joints (NAJ). Statistical analysis included prevalence estimation, polynomial and logistic regressions, ANOVA and correlation analyses. The prevalence of hand OA and NAJ were significantly associated with latitude and altitude and with most climatic parameters (except the inter-seasonal temperature amplitude and the mean atmospheric pressure of January and July). The highest correlations of hand OA prevalence were found with altitude (r=0.29, p<0.001), annual precipitation (r=-0.26, p<0.001) and the mean temperature of July (r=0.26, p<0.001). The highest correlations of NAJ were found with altitude (r=0.51, p<0.001), mean humidity in January (r=-0.44, p<0.001) and the mean daylight duration in January (r=0.37, p<0.001). The present study demonstrates that the differences in prevalence and severity of radiographic hand OA among Russian samples are most likely dependent on climatic conditions in the place of residence. Copyright © 2011 Elsevier GmbH. All rights reserved.

  12. Automatic tree parameter extraction by a Mobile LiDAR System in an urban context.

    PubMed

    Herrero-Huerta, Mónica; Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo

    2018-01-01

    In an urban context, tree data are used in city planning, in locating hazardous trees and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System at city level. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC. In the case of non-circular trees, DBH is calculated from the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the length to the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, procedures of mesh generation and α-shape methods were implemented. Also, tree location coordinates were obtained by means of Principal Component Analysis. The workflow was validated on 29 trees of different species sampled along a 750 m stretch of road in Delft (The Netherlands) and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH parameter had a correlation R2 value of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yields correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology, providing scalability to a comprehensive analysis of urban trees.
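
    As an illustration of the RANSAC circle-fitting step for DBH (a simplified sketch on synthetic points, not the authors' implementation), circles can be fitted to random three-point subsets of a breast-height slice and the fit with the most inliers retained.

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Center and radius of the circle through three 2-D points."""
    A = 2 * np.array([p2 - p1, p3 - p1])
    b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
    center = np.linalg.solve(A, b)
    return center, np.linalg.norm(p1 - center)

def ransac_circle(points, n_iter=500, tol=0.02, rng=None):
    """Best circle (center, radius) by counting inliers within tol metres of the rim."""
    if rng is None:
        rng = np.random.default_rng()
    best, best_inliers = None, -1
    for _ in range(n_iter):
        idx = rng.choice(len(points), 3, replace=False)
        try:
            center, r = circle_from_3pts(*points[idx])
        except np.linalg.LinAlgError:
            continue  # collinear sample, skip
        inliers = np.sum(np.abs(np.linalg.norm(points - center, axis=1) - r) < tol)
        if inliers > best_inliers:
            best, best_inliers = (center, r), inliers
    return best

# Synthetic breast-height slice of a trunk: noisy points on a 0.35 m diameter circle.
rng = np.random.default_rng(6)
ang = rng.uniform(0, 2 * np.pi, 200)
slice_pts = (np.column_stack([0.175 * np.cos(ang), 0.175 * np.sin(ang)])
             + rng.normal(0, 0.005, (200, 2)))
center, radius = ransac_circle(slice_pts, rng=rng)
print(f"estimated DBH = {2 * radius:.3f} m")
```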

  13. Automatic tree parameter extraction by a Mobile LiDAR System in an urban context

    PubMed Central

    Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo

    2018-01-01

    In an urban context, tree data are used in city planning, in locating hazardous trees and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System at city level. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC. In the case of non-circular trees, DBH is calculated from the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the length to the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, procedures of mesh generation and α-shape methods were implemented. Also, tree location coordinates were obtained by means of Principal Component Analysis. The workflow was validated on 29 trees of different species sampled along a 750 m stretch of road in Delft (The Netherlands) and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH parameter had a correlation R2 value of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yields correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology, providing scalability to a comprehensive analysis of urban trees. PMID:29689076

  14. Predicting nonstationary flood frequencies: Evidence supports an updated stationarity thesis in the United States

    NASA Astrophysics Data System (ADS)

    Luke, Adam; Vrugt, Jasper A.; AghaKouchak, Amir; Matthew, Richard; Sanders, Brett F.

    2017-07-01

    Nonstationary extreme value analysis (NEVA) can improve the statistical representation of observed flood peak distributions compared to stationary (ST) analysis, but management of flood risk relies on predictions of out-of-sample distributions for which NEVA has not been comprehensively evaluated. In this study, we apply split-sample testing to 1250 annual maximum discharge records in the United States and compare the predictive capabilities of NEVA relative to ST extreme value analysis using a log-Pearson Type III (LPIII) distribution. The parameters of the LPIII distribution in the ST and nonstationary (NS) models are estimated from the first half of each record using Bayesian inference. The second half of each record is reserved to evaluate the predictions under the ST and NS models. The NS model is applied for prediction by (1) extrapolating the trend of the NS model parameters throughout the evaluation period and (2) using the NS model parameter values at the end of the fitting period to predict with an updated ST model (uST). Our analysis shows that the ST predictions are preferred, overall. NS model parameter extrapolation is rarely preferred. However, if fitting period discharges are influenced by physical changes in the watershed, for example from anthropogenic activity, the uST model is strongly preferred relative to ST and NS predictions. The uST model is therefore recommended for evaluation of current flood risk in watersheds that have undergone physical changes. Supporting information includes a MATLAB® program that estimates the (ST/NS/uST) LPIII parameters from annual peak discharge data through Bayesian inference.
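
    A stripped-down sketch of the split-sample idea is shown below: a stationary LPIII model is fitted to the first half of a synthetic annual-peak record by fitting a Pearson type III distribution to the log-transformed peaks (plain maximum likelihood here, rather than the Bayesian inference used in the study), and the resulting 100-year estimate is checked against the held-out second half.

```python
import numpy as np
from scipy import stats

# Synthetic annual peak discharges (m^3/s), 60 years of record.
rng = np.random.default_rng(7)
peaks = np.exp(rng.normal(5.0, 0.4, size=60))

# Split-sample test: fit on the first half, evaluate on the second half.
fit_half, eval_half = peaks[:30], peaks[30:]

# Stationary log-Pearson III: Pearson type III fitted to log10 of the peaks.
skew, loc, scale = stats.pearson3.fit(np.log10(fit_half))

# 100-year flood (annual exceedance probability 0.01) implied by the fitted model.
q100 = 10 ** stats.pearson3.ppf(0.99, skew, loc=loc, scale=scale)

# Out-of-sample check: how often the held-out peaks exceed the fitted 99th percentile.
exceed = np.mean(eval_half > q100)
print(f"estimated 100-year flood = {q100:.0f} m^3/s, held-out exceedance rate = {exceed:.3f}")
```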

  15. Homogeneous spectroscopic parameters for bright planet host stars from the northern hemisphere . The impact on stellar and planetary mass

    NASA Astrophysics Data System (ADS)

    Sousa, S. G.; Santos, N. C.; Mortier, A.; Tsantaki, M.; Adibekyan, V.; Delgado Mena, E.; Israelian, G.; Rojas-Ayala, B.; Neves, V.

    2015-04-01

    Aims: In this work we derive new precise and homogeneous parameters for 37 stars with planets. For this purpose, we analyze high resolution spectra obtained by the NARVAL spectrograph for a sample composed of bright planet host stars in the northern hemisphere. The new parameters are included in the SWEET-Cat online catalogue. Methods: To ensure that the catalogue is homogeneous, we use our standard spectroscopic analysis procedure, ARES+MOOG, to derive effective temperatures, surface gravities, and metallicities. These spectroscopic stellar parameters are then used as input to compute the stellar mass and radius, which are fundamental for the derivation of the planetary mass and radius. Results: We show that the spectroscopic parameters, masses, and radii are generally in good agreement with the values available in online databases of exoplanets. There are some exceptions, especially for the evolved stars. These are analyzed in detail, focusing on the effect of the stellar mass on the derived planetary mass. Conclusions: We conclude that stellar mass estimates for giant stars should be treated with extreme caution when used to compute planetary masses. We report examples within this sample where the differences in planetary mass can be as high as 100% in the most extreme cases. Based on observations obtained at the Telescope Bernard Lyot (USR5026) operated by the Observatoire Midi-Pyrénées and the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France (Run ID L131N11 - OPTICON_2013A_027).
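
    The abstract does not spell out the propagation, but for radial-velocity planets, under the standard assumption m_p << M_*, the semi-amplitude relation implies m_p sin i ∝ M_*^(2/3) at fixed period and eccentricity, so a revision of the stellar mass rescales the derived planetary mass accordingly. The numbers below are purely illustrative.

```python
# For a radial-velocity planet with m_p << M_*, the semi-amplitude relation
#   K ∝ m_p * sin(i) * M_*^(-2/3)   (at fixed period and eccentricity)
# gives m_p * sin(i) ∝ M_*^(2/3), so a fractional error in the stellar mass
# propagates as roughly two-thirds of that fraction into the planetary mass.

def planet_mass_scaling(mp_old, mstar_old, mstar_new):
    """Rescale an RV planetary mass when the stellar mass estimate changes."""
    return mp_old * (mstar_new / mstar_old) ** (2.0 / 3.0)

# Hypothetical evolved host: stellar mass revised from 2.0 to 1.2 solar masses.
mp = planet_mass_scaling(mp_old=3.0, mstar_old=2.0, mstar_new=1.2)
print(f"revised m_p sin(i) = {mp:.2f} Jupiter masses")  # about 2.13, a ~29% change
```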

  16. Flavor of fresh blueberry juice and the comparison to amount of sugars, acids, anthocyanidins, and physicochemical measurements.

    PubMed

    Bett-Garber, Karen L; Lea, Jeanne M; Watson, Michael A; Grimm, Casey C; Lloyd, Steven W; Beaulieu, John C; Stein-Chisholm, Rebecca E; Andrzejewski, Brett P; Marshall, Donna A

    2015-04-01

    Six cultivars of southern highbush (SHB) and rabbiteye (RE) blueberry samples were harvested on 2 different dates. Each treatment combination was pressed 2 times for repeated measures. Fresh juice was characterized for 18 flavor/taste/feeling factor attributes by a descriptive flavor panel. Each sample was measured for sugars, acids, anthocyanidins, Folin-Ciocalteu, soluble solids (BRIX), titratable acidity (TA), and antioxidant capacity (ORACFL). Flavors were correlated with the composition and physicochemical data. Blueberry flavor correlated with 3 parameters and negatively correlated with 2. Strawberry correlated with oxalic acid and negatively correlated with sucrose and quinic acid. Sweet aroma correlated with oxalic and citric acid, but negatively correlated with sucrose, quinic, and total acids. Sweet taste correlated with 11 parameters, including the anthocyanidins, and negatively correlated with 3 parameters. Neither bitter nor astringent correlated with any of the antioxidant parameters, but both correlated with total acids. Sour correlated with total acids and TA, while negatively correlating with pH and BRIX:TA. Throat burn correlated with total acids and TA. Principal component analysis negatively related blueberry, sweet aroma, and sweet to sour, bitter, astringent, tongue tingle, and tongue numbness. The information in this component was related to pH, TA, and the BRIX:TA ratio. Another principal component related the nonblueberry fruit flavors to BRIX. This PC also divided the SHB berries from the RE. This work shows that the impact of juice composition on flavor is very complicated and that estimating flavor from physicochemical parameters is complicated by the composition of the juice. © 2015 Institute of Food Technologists®

  17. MORFOMETRYKA—A NEW WAY OF ESTABLISHING MORPHOLOGICAL CLASSIFICATION OF GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrari, F.; Carvalho, R. R. de; Trevisan, M., E-mail: fabricio@ferrari.pro.br

    We present an extended morphometric system to automatically classify galaxies from astronomical images. The new system includes the original and modified versions of the CASGM coefficients (Concentration C_1, Asymmetry A_3, and Smoothness S_3), and the new parameters entropy, H, and spirality σ_ψ. The new parameters A_3, S_3, and H are better at discriminating galaxy classes than A_1, S_1, and G, respectively. The new parameter σ_ψ captures the amount of non-radial pattern in the image and is almost linearly dependent on T-type. Using a sample of spiral and elliptical galaxies from the Galaxy Zoo project as a training set, we employed the Linear Discriminant Analysis (LDA) technique to classify the EFIGI (Baillard et al., 4458 galaxies), Nair and Abraham (14,123 galaxies), and SDSS Legacy (779,235 galaxies) samples. The cross-validation test shows that we can achieve an accuracy of more than 90% with our classification scheme. Therefore, we are able to define a plane in the morphometric parameter space that separates the elliptical and spiral classes with a mismatch between classes smaller than 10%. We use the distance to this plane as a morphometric index (M_i) and we show that it follows the human-based T-type index very closely. We calculate the morphometric index M_i for ∼780k galaxies from the SDSS Legacy Survey–DR7. We discuss how M_i correlates with stellar population parameters obtained using the spectra available from SDSS–DR7.
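
    As a schematic illustration of the LDA step (using synthetic stand-in features rather than the MORFOMETRYKA measurements or Galaxy Zoo labels), scikit-learn's LinearDiscriminantAnalysis can be cross-validated and its signed distance to the separating plane used as a morphometric index.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic morphometric features (stand-ins for C, A, S, H, sigma_psi),
# not the MORFOMETRYKA measurements themselves.
rng = np.random.default_rng(8)
n = 1000
ellipticals = rng.normal([3.5, 0.05, 0.05, 0.4, 0.1], 0.15, size=(n, 5))
spirals = rng.normal([2.5, 0.20, 0.25, 0.7, 0.5], 0.15, size=(n, 5))
X = np.vstack([ellipticals, spirals])
y = np.array([0] * n + [1] * n)      # 0 = elliptical, 1 = spiral

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()

# The signed distance to the separating plane can serve as a morphometric index.
lda.fit(X, y)
index = lda.decision_function(X)
print(f"cross-validated accuracy = {acc:.3f}; index range = ({index.min():.1f}, {index.max():.1f})")
```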

  18. Impact of Processing Method on Recovery of Bacteria from Wipes Used in Biological Surface Sampling

    PubMed Central

    Olson, Nathan D.; Filliben, James J.; Morrow, Jayne B.

    2012-01-01

    Environmental sampling for microbiological contaminants is a key component of hygiene monitoring and risk characterization practices utilized across diverse fields of application. However, confidence in surface sampling results, both in the field and in controlled laboratory studies, has been undermined by large variation in sampling performance results. Sources of variation include controlled parameters, such as sampling materials and processing methods, which often differ among studies, as well as random and systematic errors; however, the relative contributions of these factors remain unclear. The objective of this study was to determine the relative impacts of sample processing methods, including extraction solution and physical dissociation method (vortexing and sonication), on recovery of Gram-positive (Bacillus cereus) and Gram-negative (Burkholderia thailandensis and Escherichia coli) bacteria from directly inoculated wipes. This work showed that target organism had the largest impact on extraction efficiency and recovery precision, as measured by traditional colony counts. The physical dissociation method (PDM) had negligible impact, while the effect of the extraction solution was organism dependent. Overall, however, extraction of organisms from wipes using phosphate-buffered saline with 0.04% Tween 80 (PBST) resulted in the highest mean recovery across all three organisms. The results from this study contribute to a better understanding of the factors that influence sampling performance, which is critical to the development of efficient and reliable sampling methodologies relevant to public health and biodefense. PMID:22706055

  19. User's Guide for Monthly Vector Wind Profile Model

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1999-01-01

    The background, theoretical concepts, and methodology for construction of vector wind profiles based on a statistical model are presented. The derived monthly vector wind profiles are to be applied by the launch vehicle design community for establishing realistic estimates of critical vehicle design parameter dispersions related to wind profile dispersions. During initial studies a number of months are used to establish the model profiles that produce the largest monthly dispersions of ascent vehicle aerodynamic load indicators. The largest monthly dispersions for wind, which occur during the winter high-wind months, are used for establishing the design reference dispersions for the aerodynamic load indicators. This document includes a description of the computational process for the vector wind model including specification of input data, parameter settings, and output data formats. Sample output data listings are provided to aid the user in the verification of test output.

  20. LEP precision electroweak measurements from the Z{sup 0} resonance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strom, D.

    1997-01-01

    Preliminary electroweak measurements from the LEP Collaboration, based on data taken at the Z⁰ resonance, are presented. Most of the results presented are based on a total data sample of 12 × 10⁶ recorded Z⁰ events, which included data from the 1993 and 1994 LEP runs. The Z⁰ resonance parameters, including hadronic and leptonic cross sections and asymmetries, τ polarization and its asymmetry, and heavy-quark asymmetries and partial widths, are evaluated and confronted with the predictions of the Standard Model. This comparison incorporates the constraints provided by the recent determination of the top-quark mass at the Tevatron. The Z⁰ resonance parameters are found to be in good agreement with the Standard Model prediction using the Tevatron top-quark mass, with the exception of the partial widths for Z⁰ decays to pairs of b and c quarks.

Top