ERIC Educational Resources Information Center
Pfaffel, Andreas; Spiel, Christiane
2016-01-01
Approaches to correcting correlation coefficients for range restriction have been developed under the framework of large sample theory. The accuracy of missing data techniques for correcting correlation coefficients for range restriction has thus far only been investigated with relatively large samples. However, researchers and evaluators are…
Accurate EPR radiosensitivity calibration using small sample masses
NASA Astrophysics Data System (ADS)
Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.
2000-03-01
We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.
Improving the analysis of composite endpoints in rare disease trials.
McMenamin, Martina; Berglind, Anna; Wason, James M S
2018-05-22
Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. For the difference in response probabilities, the two methods exhibit similar power, but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints. We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replacing them.
Li, Peng; Redden, David T.
2014-01-01
The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of the GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small sample properties of the GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z test should be avoided in the analyses of CRTs with few clusters even when bias-corrected sandwich estimators are used. With a t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and is robust to moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and minimum total number of clusters one needs using the t test and KC correction for CRTs with binary outcomes. The power levels as predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates under small sample sizes, the GEE approach is recommended in CRTs with binary outcomes due to fewer assumptions and robustness to misspecification of the covariance structure. PMID:25345738
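As a rough, textbook-style illustration of the kind of power calculation described (not the exact formula derived in the paper), the sketch below inflates the usual two-proportion variance by the design effect 1 + (m - 1)*ICC and refers the Wald statistic to a t distribution with 2(k - 1) degrees of freedom:

```python
import numpy as np
from scipy import stats

def crt_power(p1, p2, k_per_arm, m, icc, alpha=0.05):
    """Approximate power of a two-arm cluster randomized trial with a binary
    outcome: design-effect variance inflation plus a t reference with
    2*(k-1) degrees of freedom (a common small-sample choice)."""
    de = 1 + (m - 1) * icc                                  # design effect
    var = de * (p1 * (1 - p1) + p2 * (1 - p2)) / (k_per_arm * m)
    ncp = abs(p1 - p2) / np.sqrt(var)                       # approximate noncentrality
    df = 2 * (k_per_arm - 1)
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return 1 - stats.t.cdf(t_crit - ncp, df) + stats.t.cdf(-t_crit - ncp, df)

print(round(crt_power(0.30, 0.15, k_per_arm=10, m=50, icc=0.05), 3))
```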
Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette
2018-03-01
In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
TableSim--A program for analysis of small-sample categorical data.
David J. Rugg
2003-01-01
Documents a computer program for calculating correct P-values of 1-way and 2-way tables when sample sizes are small. The program is written in Fortran 90; the executable code runs in 32-bit Microsoft command-line environments.
Reversing the Signaled Magnitude Effect in Delayed Matching to Sample: Delay-Specific Remembering?
ERIC Educational Resources Information Center
White, K. Geoffrey; Brown, Glenn S.
2011-01-01
Pigeons performed a delayed matching-to-sample task in which large or small reinforcers for correct remembering were signaled during the retention interval. Accuracy was low when small reinforcers were signaled, and high when large reinforcers were signaled (the signaled magnitude effect). When the reinforcer-size cue was switched from small to…
Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.
ERIC Educational Resources Information Center
Ramsay, J. O.
1980-01-01
Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)
Tests of Independence in Contingency Tables with Small Samples: A Comparison of Statistical Power.
ERIC Educational Resources Information Center
Parshall, Cynthia G.; Kromrey, Jeffrey D.
1996-01-01
Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardcastle, Nicholas; Bayliss, Adam; Wong, Jeannie Hsiu Ding
2012-08-15
Purpose: A recent field safety notice from TomoTherapy detailed the underdosing of small, off-axis targets when receiving high doses per fraction. This is due to angular undersampling in the dose calculation gantry angles. This study evaluates a correction method to reduce the underdosing, to be implemented in the current version (v4.1) of the TomoTherapy treatment planning software. Methods: The correction method, termed 'Super Sampling', involved the tripling of the number of gantry angles from which the dose is calculated during optimization and dose calculation. Radiochromic film was used to measure the dose to small targets at various off-axis distances receiving a minimum of 21 Gy in one fraction. Measurements were also performed for single small targets at the center of the Lucy phantom, using radiochromic film and the dose magnifying glass (DMG). Results: Without super sampling, the peak dose deficit increased from 0% to 18% for a 10 mm target and 0% to 30% for a 5 mm target as off-axis target distances increased from 0 to 16.5 cm. When super sampling was turned on, the dose deficit trend was removed and all peak doses were within 5% of the planned dose. For measurements in the Lucy phantom at 9.7 cm off-axis, the positional and dose magnitude accuracy using super sampling was verified using radiochromic film and the DMG. Conclusions: A correction method implemented in the TomoTherapy treatment planning system which triples the angular sampling of the gantry angles used during optimization and dose calculation removes the underdosing for targets as small as 5 mm diameter, up to 16.5 cm off-axis receiving up to 21 Gy.
Neurons from the adult human dentate nucleus: neural networks in the neuron classification.
Grbatinić, Ivan; Marić, Dušica L; Milošević, Nebojša T
2015-04-07
The aim was the topological (central vs. border neuron type) and morphological classification of adult human dentate nucleus neurons according to their quantified histomorphological properties, using neural networks on real and virtual neuron samples. In the real sample, 53.1% and 14.1% of central and border neurons, respectively, are classified correctly, with a total of 32.8% of neurons misclassified. The most important result is the 62.2% of misclassified neurons in the border neuron group, which exceeds the number of correctly classified neurons (37.8%) in that group, showing a clear failure of the network to classify these neurons correctly based on the computational parameters used in our study. In the virtual sample, the 97.3% of misclassified neurons in the border neuron group, again far exceeding the correctly classified neurons (2.7%) in that group, confirms this failure. Statistical analysis shows that there is no statistically significant difference between central and border neurons for any measured parameter (p>0.05). In total, 96.74% of neurons are morphologically classified correctly by the neural networks, each belonging to one of four histomorphological types: (a) neurons with small soma and short dendrites, (b) neurons with small soma and long dendrites, (c) neurons with large soma and short dendrites, and (d) neurons with large soma and long dendrites. Statistical analysis supports these results (p<0.05). Human dentate nucleus neurons can thus be classified into four neuron types according to their quantitative histomorphological properties. These types comprise two sets, small and large with respect to their perikarya, with subtypes differing in dendrite length, i.e., neurons with short vs. long dendrites. Besides confirming the classification into small and large neurons already reported in the literature, we found two new subtypes, i.e., neurons with small soma and long dendrites and neurons with large soma and short dendrites. These neurons are most probably equally distributed throughout the dentate nucleus, as no significant difference in their topological distribution is observed. Copyright © 2015 Elsevier Ltd. All rights reserved.
Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B
2017-04-01
Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has two advantages over the more commonly used mixed models: (1) the population-average parameters have an important interpretation for public health applications, and (2) it avoids untestable assumptions on latent variable distributions and parametric assumptions about error distributions, therefore providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equations for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
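For readers who want to try such an analysis, the sketch below simulates a small parallel CRT and fits a marginal logistic model with statsmodels, comparing the usual sandwich standard error against the bias-reduced covariance the library ships (a Mancl-DeRouen-type correction; the four corrections studied in the paper may differ in detail):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulate a small parallel CRT: 12 clusters, exchangeable binary outcomes.
rng = np.random.default_rng(0)
rows = []
for c in range(12):
    treat = c % 2
    u = rng.normal(0, 0.4)                        # cluster-level random effect
    p = 1 / (1 + np.exp(-(-1.0 + 0.5 * treat + u)))
    for _ in range(25):
        rows.append((c, treat, rng.binomial(1, p)))
df = pd.DataFrame(rows, columns=["cluster", "treat", "y"])

model = sm.GEE.from_formula("y ~ treat", groups="cluster", data=df,
                            family=sm.families.Binomial(),
                            cov_struct=sm.cov_struct.Exchangeable())
res_robust = model.fit()                          # usual sandwich variance
res_br = model.fit(cov_type="bias_reduced")       # small-sample-corrected variance
print(res_robust.bse["treat"], res_br.bse["treat"])   # corrected SE is larger
```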
Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data
ERIC Educational Resources Information Center
McNeish, Daniel; Harring, Jeffrey R.
2017-01-01
To date, small sample problems with latent growth models (LGMs) have not received the same amount of attention in the literature as related mixed-effects models (MEMs). Although many models can be interchangeably framed as an LGM or an MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…
Accurate and fast multiple-testing correction in eQTL studies.
Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm
2015-06-04
In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. For performing multiple-testing correction, a permutation test is widely used. Because of growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck in eQTL studies. In this paper, we propose an efficient approach for correcting for multiple testing and assessing eGene p values by utilizing a multivariate normal distribution. Our approach properly takes into account the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying our small-sample correction techniques, our method achieves high accuracy in both small and large studies. We have shown that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
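A minimal Monte Carlo rendering of the multivariate-normal idea (the paper's implementation is more refined, and this toy version ignores uncertainty in the LD matrix):

```python
import numpy as np
from scipy import stats

def mvn_gene_pvalue(z_obs_max, ld_corr, n_draws=100_000, seed=1):
    """Gene-level p-value for the best cis-variant: the probability that the
    maximum |Z| over variants exceeds the observed maximum, under a
    multivariate normal null whose correlation matrix is the LD matrix.
    Note the cost depends on the number of variants, not the sample size."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(ld_corr.shape[0]), ld_corr, size=n_draws)
    return (np.abs(z).max(axis=1) >= z_obs_max).mean()

# Toy example: 5 variants with AR(1)-like LD, best variant two-sided p = 1e-4.
r = 0.8
ld = r ** np.abs(np.subtract.outer(np.arange(5), np.arange(5)))
z_obs = stats.norm.isf(1e-4 / 2)
print(mvn_gene_pvalue(z_obs, ld))   # gene-level p, corrected for 5 correlated tests
```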
ERIC Educational Resources Information Center
Tipton, Elizabeth; Pustejovsky, James E.
2015-01-01
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
Marshall, S J; Biddle, S J H; Gorely, T; Cameron, N; Murdey, I
2004-10-01
To review the empirical evidence of associations between television (TV) viewing, video/computer game use and (a) body fatness, and (b) physical activity, using a meta-analysis. Published English-language studies were located from computerized literature searches, bibliographies of primary studies and narrative reviews, and manual searches of personal archives. Included studies presented at least one empirical association between TV viewing, video/computer game use and body fatness or physical activity among samples of children and youth aged 3-18 y. The main outcome measure was the mean sample-weighted corrected effect size (Pearson r). Based on data from 52 independent samples, the mean sample-weighted effect size between TV viewing and body fatness was 0.066 (95% CI=0.056-0.078; total N=44,707). The sample-weighted fully corrected effect size was 0.084. Based on data from six independent samples, the mean sample-weighted effect size between video/computer game use and body fatness was 0.070 (95% CI=-0.048 to 0.188; total N=1,722). The sample-weighted fully corrected effect size was 0.128. Based on data from 39 independent samples, the mean sample-weighted effect size between TV viewing and physical activity was -0.096 (95% CI=-0.080 to -0.112; total N=141,505). The sample-weighted fully corrected effect size was -0.129. Based on data from 10 independent samples, the mean sample-weighted effect size between video/computer game use and physical activity was -0.104 (95% CI=-0.080 to -0.128; total N=119,942). The sample-weighted fully corrected effect size was -0.141. A statistically significant relationship exists between TV viewing and body fatness among children and youth although it is likely to be too small to be of substantial clinical relevance. The relationship between TV viewing and physical activity is small but negative. The strength of these relationships remains virtually unchanged even after correcting for common sources of bias known to impact study outcomes. While the total amount of time per day engaged in sedentary behavior is inevitably prohibitive of physical activity, media-based inactivity may be unfairly implicated in recent epidemiologic trends of overweight and obesity among children and youth. Relationships between sedentary behavior and health are unlikely to be explained using single markers of inactivity, such as TV viewing or video/computer game use.
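In outline, the aggregation reported above looks like the following sketch: a sample-weighted mean correlation with an approximate Fisher-z confidence interval. The study numbers below are invented, and the review's artifact corrections are not reproduced:

```python
import numpy as np

def sample_weighted_r(r, n):
    """Sample-weighted mean correlation (weights = study N) with a rough 95% CI
    from Fisher's z transform, treating the pooled N as one sample. A sketch of
    the aggregation style reported, not the review's full procedure."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    r_bar = np.sum(n * r) / np.sum(n)
    se_z = 1.0 / np.sqrt(np.sum(n) - 3.0)
    z = np.arctanh(r_bar)
    return r_bar, (np.tanh(z - 1.96 * se_z), np.tanh(z + 1.96 * se_z))

# e.g. three hypothetical TV-viewing/body-fatness studies
print(sample_weighted_r([0.05, 0.09, 0.07], [12000, 20000, 12707]))
```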
Method for Measuring Thermal Conductivity of Small Samples Having Very Low Thermal Conductivity
NASA Technical Reports Server (NTRS)
Miller, Robert A.; Kuczmarski, Maria A.
2009-01-01
This paper describes the development of a hot plate method capable of using air as a standard reference material for the steady-state measurement of the thermal conductivity of very small test samples having thermal conductivity on the order of air. As with other approaches, care is taken to ensure that the heat flow through the test sample is essentially one-dimensional. However, unlike other approaches, no attempt is made to use heated guards to block the flow of heat from the hot plate to the surroundings. It is argued that since large correction factors must be applied to account for guard imperfections when sample dimensions are small, it may be preferable to simply measure and correct for the heat that flows from the heater disc to directions other than into the sample. Experimental measurements taken in a prototype apparatus, combined with extensive computational modeling of the heat transfer in the apparatus, show that sufficiently accurate measurements can be obtained to allow determination of the thermal conductivity of low thermal conductivity materials. Suggestions are made for further improvements in the method based on results from regression analyses of the generated data.
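The underlying arithmetic is Fourier's law solved for conductivity after subtracting the separately measured parasitic heat flow; a toy sketch with made-up numbers:

```python
def thermal_conductivity(q_heater_w, q_parasitic_w, thickness_m, area_m2, dT_k):
    """One-dimensional steady-state conduction solved for k, after subtracting
    the separately characterized parasitic heat flow from heater to
    surroundings. Illustrative only; the real apparatus geometry and
    corrections are more involved, and all numbers below are invented."""
    q_through_sample = q_heater_w - q_parasitic_w
    return q_through_sample * thickness_m / (area_m2 * dT_k)

# 50 mW supplied, 30 mW lost to surroundings, 5 mm thick, 1 cm^2, 10 K drop:
print(thermal_conductivity(0.050, 0.030, 0.005, 1e-4, 10.0))  # ~0.1 W/(m*K)
```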
A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies
2014-01-01
Background The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. Methods The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. Results The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. Conclusions If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used. PMID:24552686
A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.
Kottas, Martina; Kuss, Oliver; Zapf, Antonia
2014-02-19
The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.
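Reconstructing the interval from the description above (AUC treated as a single proportion, with the total sample size as the denominator) gives something like the following; consult the paper for the exact definition:

```python
from statistics import NormalDist

def modified_wald_auc_ci(auc, n_total, alpha=0.05, continuity=False):
    """Wald-type interval for the AUC treated as a single proportion with the
    total sample size as denominator -- a reading of the abstract, not a
    verified transcription of the paper's formula."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * (auc * (1 - auc) / n_total) ** 0.5
    if continuity:                     # recommended by the authors for small n
        half += 1 / (2 * n_total)
    return max(0.0, auc - half), min(1.0, auc + half)

print(modified_wald_auc_ci(0.85, 40, continuity=True))
```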
Secondary School Students' Reasoning about Conditional Probability, Samples, and Sampling Procedures
ERIC Educational Resources Information Center
Prodromou, Theodosia
2016-01-01
In the Australian mathematics curriculum, Year 12 students (aged 16-17) are asked to solve conditional probability problems that involve the representation of the problem situation with two-way tables or three-dimensional diagrams and consider sampling procedures that result in different correct answers. In a small exploratory study, we…
ERIC Educational Resources Information Center
Chromy, James R.
This study addressed statistical techniques that might ameliorate some of the sampling problems currently facing states with small populations participating in State National Assessment of Educational Progress (NAEP) assessments. The study explored how the application of finite population correction factors to the between-school component of…
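For reference, the standard finite population correction shrinks the variance of a sample mean by the sampled fraction; how it enters NAEP's between-school variance component is more involved than this sketch:

```python
def fpc_variance_of_mean(s2, n, N):
    """Variance of a sample mean under simple random sampling without
    replacement from a finite population of size N: (s^2 / n) * (1 - n/N).
    The standard correction, shown generically rather than as applied in NAEP."""
    return (s2 / n) * (1 - n / N)

# Sampling 40 of a small state's 60 schools cuts the naive variance by 2/3:
print(fpc_variance_of_mean(s2=9.0, n=40, N=60))   # 0.075 vs 0.225 uncorrected
```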
Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods
2016-11-01
ABBREVIATIONS: AICc = Akaike's Information Criterion with small sample size correction; AZGFD = Arizona Game and Fish Department; BMGR = Barry M. Goldwater…; MNKA = Minimum Number Known Alive; N = Abundance; Ne = Effective Population Size; NGS = Noninvasive Genetic Sampling; NGS-CR = Noninvasive Genetic… …parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities, and low capture biases. For NGS-CR, sample…
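For reference, the small-sample correction named in the abbreviation list above is the standard AICc adjustment to Akaike's criterion (a well-known formula, stated here for context rather than taken from this report):

$$\mathrm{AIC}_c = -2\ln\hat{L} + 2k + \frac{2k(k+1)}{n - k - 1},$$

where $k$ is the number of estimated parameters and $n$ the sample size; the extra penalty vanishes as $n \to \infty$, recovering the ordinary AIC.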
ERIC Educational Resources Information Center
Patry, Marc W.; Magaletta, Philip R.; Diamond, Pamela M.; Weinman, Beth A.
2011-01-01
Although not originally designed for implementation in correctional settings, researchers and clinicians have begun to use the Personality Assessment Inventory (PAI) to assess offenders. A relatively small number of studies have made attempts to validate the alcohol and drug abuse scales of the PAI, and only a very few studies have validated those…
NASA Astrophysics Data System (ADS)
Bhattacharyya, Kaustuve; Ke, Chih-Ming; Huang, Guo-Tsai; Chen, Kai-Hsiung; Smilde, Henk-Jan H.; Fuchs, Andreas; Jak, Martin; van Schijndel, Mark; Bozkurt, Murat; van der Schaar, Maurits; Meyer, Steffen; Un, Miranda; Morgan, Stephen; Wu, Jon; Tsai, Vincent; Liang, Frida; den Boef, Arie; ten Berge, Peter; Kubis, Michael; Wang, Cathy; Fouquet, Christophe; Terng, L. G.; Hwang, David; Cheng, Kevin; Gau, TS; Ku, Y. C.
2013-04-01
Aggressive on-product overlay requirements in advanced nodes are setting a formidable challenge for the semiconductor industry. This forces the industry to look beyond the traditional way of working and invest in several new technologies. Integrated metrology, in-chip overlay control, advanced sampling, and process correction mechanisms (using the highest order of correction possible with the scanner interface today) are a few of the technologies considered in this publication.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, prior multivariate normal distributions of the parameters of the models, and prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large and small sample behavior of the sequential adaptive procedure.
Green, Michael V.; Ostrow, Harold G.; Seidel, Jurgen; Pomper, Martin G.
2013-01-01
Human and small-animal positron emission tomography (PET) scanners with cylindrical geometry and conventional detectors exhibit a progressive reduction in radial spatial resolution with increasing radial distance from the geometric axis of the scanner. This “depth-of-interaction” (DOI) effect is sufficiently deleterious that many laboratories have devised novel schemes to reduce the magnitude of this effect and thereby yield PET images of greater quantitative accuracy. Here we examine experimentally the effects of a particular DOI correction method (dual-scintillator phoswich detectors with pulse shape discrimination) implemented in a small-animal PET scanner by comparing the same phantom and same mouse images with and without DOI correction. The results suggest that even this relatively coarse, two-level estimate of radial gamma ray interaction position significantly reduces the DOI parallax error. This study also confirms two less appreciated advantages of DOI correction: a reduction in radial distortion and radial source displacement as a source is moved toward the edge of the field of view and a resolution improvement detectable in the central field of view likely owing to improved spatial sampling. PMID:21084028
Green, Michael V; Ostrow, Harold G; Seidel, Jurgen; Pomper, Martin G
2010-12-01
Human and small-animal positron emission tomography (PET) scanners with cylindrical geometry and conventional detectors exhibit a progressive reduction in radial spatial resolution with increasing radial distance from the geometric axis of the scanner. This "depth-of-interaction" (DOI) effect is sufficiently deleterious that many laboratories have devised novel schemes to reduce the magnitude of this effect and thereby yield PET images of greater quantitative accuracy. Here we examine experimentally the effects of a particular DOI correction method (dual-scintillator phoswich detectors with pulse shape discrimination) implemented in a small-animal PET scanner by comparing the same phantom and same mouse images with and without DOI correction. The results suggest that even this relatively coarse, two-level estimate of radial gamma ray interaction position significantly reduces the DOI parallax error. This study also confirms two less appreciated advantages of DOI correction: a reduction in radial distortion and radial source displacement as a source is moved toward the edge of the field of view and a resolution improvement detectable in the central field of view likely owing to improved spatial sampling.
Haverkamp, Nicolas; Beauducel, André
2017-01-01
We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodological approaches to repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results argue for the use of rANOVA with the Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The proportionality of bias and number of measurement occasions should be considered when MLM-UN is used. The good news is that this proportionality can be compensated by means of large sample sizes. Accordingly, MLM-UN can be recommended even for small sample sizes for about three measurement occasions and for large sample sizes for about nine measurement occasions.
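For concreteness, both the Greenhouse-Geisser and Huynh-Feldt procedures rescale the rANOVA degrees of freedom by an epsilon estimated from the occasion covariance matrix. The sketch below implements the standard estimators (not this study's simulation code); the Huynh-Feldt form shown is the common single-group version:

```python
import numpy as np

def gg_epsilon(S):
    """Greenhouse-Geisser epsilon from the m x m sample covariance matrix of
    the m occasions: (sum lam)^2 / ((m-1) * sum lam^2), with lam the nonzero
    eigenvalues of the double-centered covariance matrix."""
    m = S.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m          # double-centering projector
    lam = np.linalg.eigvalsh(H @ S @ H)
    lam = lam[lam > 1e-10]                       # drop the structural zero
    return lam.sum() ** 2 / ((m - 1) * (lam ** 2).sum())

def hf_epsilon(eps_gg, n, m):
    """Huynh-Feldt correction (single-group form), capped at 1."""
    return min(1.0, (n * (m - 1) * eps_gg - 2)
               / ((m - 1) * (n - 1 - (m - 1) * eps_gg)))

S = np.array([[1.0, 0.8, 0.2],
              [0.8, 1.0, 0.2],
              [0.2, 0.2, 1.0]])                  # sphericity clearly violated
e = gg_epsilon(S)
print(e, hf_epsilon(e, n=20, m=3))               # epsilon < 1 shrinks the df
```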
Incorporating Biological Knowledge into Evaluation of Causal Regulatory Hypothesis
NASA Technical Reports Server (NTRS)
Chrisman, Lonnie; Langley, Pat; Bay, Stephen; Pohorille, Andrew; DeVincenzi, D. (Technical Monitor)
2002-01-01
Biological data can be scarce and costly to obtain. The small number of samples available typically limits statistical power and makes reliable inference of causal relations extremely difficult. However, we argue that statistical power can be increased substantially by incorporating prior knowledge and data from diverse sources. We present a Bayesian framework that combines information from different sources and we show empirically that this lets one make correct causal inferences with small sample sizes that otherwise would be impossible.
NASA Astrophysics Data System (ADS)
Malys, Brian J.; Piotrowski, Michelle L.; Owens, Kevin G.
2018-02-01
Frustrated by worse than expected error for both peak area and time-of-flight (TOF) in matrix assisted laser desorption ionization (MALDI) experiments using samples prepared by electrospray deposition, it was finally determined that there was a correlation between sample location on the target plate and the measured TOF/peak area. Variations in both TOF and peak area were found to be due to small differences in the initial position of ions formed in the source region of the TOF mass spectrometer. These differences arise largely from misalignment of the instrument sample stage, with a smaller contribution arising from the non-ideal shape of the target plates used. By physically measuring the target plates used and comparing TOF data collected from three different instruments, an estimate of the magnitude and direction of the sample stage misalignment was determined for each of the instruments. A correction method was developed to correct the TOFs and peak areas obtained for a given combination of target plate and instrument. Two correction factors are determined, one by initially collecting spectra from each sample position used and another by using spectra from a single position for each set of samples on a target plate. For TOF and mass values, use of the correction factor reduced the error by a factor of 4, with the relative standard deviation (RSD) of the corrected masses being reduced to 12-24 ppm. For the peak areas, the RSD was reduced from 28% to 16% for samples deposited twice onto two target plates over two days.
NASA Astrophysics Data System (ADS)
Malys, Brian J.; Piotrowski, Michelle L.; Owens, Kevin G.
2017-12-01
Frustrated by worse than expected error for both peak area and time-of-flight (TOF) in matrix assisted laser desorption ionization (MALDI) experiments using samples prepared by electrospray deposition, it was finally determined that there was a correlation between sample location on the target plate and the measured TOF/peak area. Variations in both TOF and peak area were found to be due to small differences in the initial position of ions formed in the source region of the TOF mass spectrometer. These differences arise largely from misalignment of the instrument sample stage, with a smaller contribution arising from the non-ideal shape of the target plates used. By physically measuring the target plates used and comparing TOF data collected from three different instruments, an estimate of the magnitude and direction of the sample stage misalignment was determined for each of the instruments. A correction method was developed to correct the TOFs and peak areas obtained for a given combination of target plate and instrument. Two correction factors are determined, one by initially collecting spectra from each sample position used and another by using spectra from a single position for each set of samples on a target plate. For TOF and mass values, use of the correction factor reduced the error by a factor of 4, with the relative standard deviation (RSD) of the corrected masses being reduced to 12-24 ppm. For the peak areas, the RSD was reduced from 28% to 16% for samples deposited twice onto two target plates over two days.
Targeted stock identification using multilocus genotype 'familyprinting'
Letcher, B.H.; King, T.L.
1999-01-01
We present an approach to stock identification of small, targeted populations that uses multilocus microsatellite genotypes of individual mating adults to uniquely identify first- and second-generation offspring in a mixture. We call the approach 'familyprinting'; unlike DNA fingerprinting where tissue samples of individuals are matched, offspring from various families are assigned to pairs of parents or sets of four grandparents with known genotypes. The basic unit of identification is the family, but families can be nested within a variety of stock units ranging from naturally reproducing groups of fish in a small tributary or pond from which mating adults can be sampled to large or small collections of families produced in hatcheries and stocked in specific locations. We show that, with as few as seven alleles per locus using four loci without error, first-generation offspring can be uniquely assigned to the correct family. For second-generation applications in a hatchery, more alleles per locus (10) and loci (10) are required for correct assignment of all offspring to the correct set of grandparents. Using microsatellite DNA variation from an Atlantic salmon (Salmo salar) restoration river (Connecticut River, USA), we also show that this population contains sufficient genetic diversity in sea-run returns for 100% correct first-generation assignment and 97% correct second-generation assignment using 14 loci. We are currently using first- and second-generation familyprinting in this population with the ultimate goal of identifying the stocking tributary. In addition to within-river familyprinting, there also appears to be sufficient genetic diversity within and between Atlantic salmon populations for identification of 'familyprinted' fish in a mixture of multiple populations. We also suggest that second-generation familyprinting with multiple populations may provide a tool for examining stock structure. Familyprinting with microsatellite DNA markers is a viable method for identification of offspring of randomly mating adults from small, targeted stocks and should provide a useful addition to current mixed stock analyses with genetic markers.
Prevalence of psychiatric disorders in the Texas juvenile correctional system.
Harzke, Amy Jo; Baillargeon, Jacques; Baillargeon, Gwen; Henry, Judith; Olvera, Rene L; Torrealday, Ohiana; Penn, Joseph V; Parikh, Rajendra
2012-04-01
Most studies assessing the burden of psychiatric disorders in juvenile correctional facilities have been based on small or male-only samples or have focused on a single disorder. Using electronic data routinely collected by the Texas juvenile correctional system and its contracted medical provider organization, we estimated the prevalence of selected psychiatric disorders among youths committed to Texas juvenile correctional facilities between January 1, 2004, and December 31, 2008 (N = 11,603). Ninety-eight percent were diagnosed with at least one of the disorders. Highest estimated prevalence was for conduct disorder (83.2%), followed by any substance use disorder (75.6%), any bipolar disorder (19.4%), attention-deficit/hyperactivity disorder (18.3%), and any depressive disorder (12.6%). The estimated prevalence of psychiatric disorders among these youths was exceptionally high and showed patterns by sex, race/ethnicity, and age that were both consistent and inconsistent with other juvenile justice samples.
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches-the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation-with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
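For context, Firth's correction (in its standard general form, not specific to this paper) replaces the log-likelihood with a Jeffreys-prior-penalized version

$$\ell^{*}(\beta) = \ell(\beta) + \tfrac{1}{2}\log\bigl|I(\beta)\bigr|,$$

where $I(\beta)$ is the Fisher information matrix. The penalty removes the leading $O(n^{-1})$ term of the maximum likelihood bias and keeps estimates finite even under separation; the same penalty can be applied to the conditional Poisson likelihood used in the SCCS design, which is the setting evaluated in this study.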
Explanation of Two Anomalous Results in Statistical Mediation Analysis.
Fritz, Matthew S; Taylor, Aaron B; Mackinnon, David P
2012-01-01
Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed.
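A minimal sketch of the bias-corrected bootstrap for the indirect effect a*b in the simple X -> M -> Y mediation model (data and settings invented for illustration):

```python
import numpy as np
from scipy import stats

def bc_bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05, seed=0):
    """Bias-corrected (BC) percentile bootstrap CI for a*b, where a is the
    slope of M~X and b the partial slope of M in Y~X+M (least squares)."""
    rng = np.random.default_rng(seed)
    n = len(x)

    def ab(idx):
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                       # slope of M on X
        X = np.column_stack([np.ones(n), xs, ms])
        b = np.linalg.lstsq(X, ys, rcond=None)[0][2]       # slope of Y on M | X
        return a * b

    est = ab(np.arange(n))
    boot = np.array([ab(rng.integers(0, n, n)) for _ in range(n_boot)])
    z0 = stats.norm.ppf(np.mean(boot < est))               # bias-correction term
    zlo, zhi = stats.norm.ppf([alpha / 2, 1 - alpha / 2])
    lo_p, hi_p = stats.norm.cdf(2 * z0 + zlo), stats.norm.cdf(2 * z0 + zhi)
    return est, np.quantile(boot, [lo_p, hi_p])

rng = np.random.default_rng(1)
x = rng.normal(size=100)
m_ = 0.5 * x + rng.normal(size=100)
y = 0.4 * m_ + rng.normal(size=100)
print(bc_bootstrap_ci(x, m_, y))
```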
2013-09-09
A multivariate correction method (Lawley, 1943) was used for all scores except the MAB FSIQ, which used the univariate method (Thorndike, 1949). FSIQ… References: Thorndike, R. L. (1949). Personnel selection. NY: Wiley. Tupes, E. C., & Christal, R. C. (1961). Recurrent personality factors based on trait ratings… Table note: the correlations for 1995 were not corrected due to the small sample size (N = 17). *p < .05. Consistency of Pilot Attributes.
78 FR 59798 - Small Business Subcontracting: Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-30
... SMALL BUSINESS ADMINISTRATION 13 CFR Part 125 RIN 3245-AG22 Small Business Subcontracting: Correction AGENCY: U.S. Small Business Administration. ACTION: Correcting amendments. SUMMARY: This document... business subcontracting to implement provisions of the Small Business Jobs Act of 2010. This correction...
Uncertainty budgets for liquid waveguide CDOM absorption measurements.
Lefering, Ina; Röttgers, Rüdiger; Utschig, Christian; McKee, David
2017-08-01
Long path length liquid waveguide capillary cell (LWCC) systems using simple spectrometers to determine the spectral absorption by colored dissolved organic matter (CDOM) have previously been shown to have better measurement sensitivity compared to high-end spectrophotometers using 10 cm cuvettes. Information on the magnitude of measurement uncertainties for LWCC systems, however, has remained scarce. Cross-comparison of three different LWCC systems with three different path lengths (50, 100, and 250 cm) and two different cladding materials enabled quantification of measurement precision and accuracy, revealing strong wavelength dependency in both parameters. Stable pumping of the sample through the capillary cell was found to improve measurement precision over measurements made with the sample kept stationary. Results from the 50 and 100 cm LWCC systems, with higher refractive index cladding, showed systematic artifacts including small but unphysical negative offsets and high-frequency spectral perturbations due to limited performance of the salinity correction. In comparison, the newer 250 cm LWCC with lower refractive index cladding returned small positive offsets that may be physically correct. After null correction of measurements at 700 nm, overall agreement of CDOM absorption data at 440 nm was found to be within 5% root mean square percentage error.
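The post-processing described above (path-length scaling plus a 700 nm null correction) amounts to the following generic conversion, with ln(10) turning base-10 absorbance into a Napierian absorption coefficient:

```python
import numpy as np

def cdom_absorption(absorbance, wavelengths, path_m, null_wl=700.0):
    """Convert base-10 LWCC absorbance to a Napierian absorption coefficient
    a(lambda) = ln(10) * A(lambda) / L in 1/m, then subtract the value at a
    null wavelength where CDOM absorption is assumed negligible (the 700 nm
    null correction mentioned in the study)."""
    a = np.log(10.0) * np.asarray(absorbance, float) / path_m
    wl = np.asarray(wavelengths, float)
    return a - a[np.argmin(np.abs(wl - null_wl))]

# e.g. a 2.5 m waveguide: tiny absorbances still resolve a_CDOM(440)
wl = np.array([440.0, 550.0, 700.0])
print(cdom_absorption([0.020, 0.008, 0.003], wl, path_m=2.5))
```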
Closed loop adaptive optics for microscopy without a wavefront sensor.
Kner, Peter; Winoto, Lukman; Agard, David A; Sedat, John W
2010-02-24
A three-dimensional wide-field image of a small fluorescent bead contains more than enough information to accurately calculate the wavefront in the microscope objective back pupil plane using the phase retrieval technique. The phase-retrieved wavefront can then be used to set a deformable mirror to correct the point-spread function (PSF) of the microscope without the use of a wavefront sensor. This technique will be useful for aligning the deformable mirror in a widefield microscope with adaptive optics and could potentially be used to correct aberrations in samples where small fluorescent beads or other point sources are used as reference beacons. Another advantage is the high resolution of the retrieved wavefront as compared with current Shack-Hartmann wavefront sensors. Here we demonstrate effective correction of the PSF in 3 iterations. Starting from a severely aberrated system, we achieve a Strehl ratio of 0.78 and a greater than 10-fold increase in maximum intensity.
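To illustrate the phase retrieval idea, here is the classic two-plane Gerchberg-Saxton loop; the work described retrieves the pupil wavefront from a 3D through-focus bead image, which this minimal sketch does not attempt:

```python
import numpy as np

def gerchberg_saxton(pupil_amp, focal_amp, n_iter=50):
    """Two-plane Gerchberg-Saxton phase retrieval: alternate between pupil and
    focal planes, imposing the measured amplitude in each plane while keeping
    the current phase estimate. pupil_amp and focal_amp are 2D amplitude
    (not intensity) arrays of equal shape."""
    field = pupil_amp.astype(complex)                       # start with zero phase
    for _ in range(n_iter):
        focal = np.fft.fftshift(np.fft.fft2(field))
        focal = focal_amp * np.exp(1j * np.angle(focal))    # impose PSF amplitude
        field = np.fft.ifft2(np.fft.ifftshift(focal))
        field = pupil_amp * np.exp(1j * np.angle(field))    # impose pupil amplitude
    return np.angle(field)                                  # retrieved wavefront
```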
Ion beam machining error control and correction for small scale optics.
Xie, Xuhui; Zhou, Lin; Dai, Yifan; Li, Shengyi
2011-09-20
Ion beam figuring (IBF) technology for small scale optical components is discussed. Because a small removal function can be obtained in IBF, computer-controlled optical surfacing technology can machine precision centimeter- or millimeter-scale optical components deterministically. When using a small ion beam to machine small optical components, there are some key problems, such as small ion beam positioning on the optical surface, the material removal rate, and ion beam scanning pitch control on the optical surface, that must be seriously considered. The main reason is that a small ion beam is more sensitive to these problems than a big ion beam because of its small beam diameter and lower material removal rate. In this paper, we discuss these problems and their influences in machining small optical components in detail. Based on the identification-compensation principle, an iterative machining compensation method is deduced for correcting the positioning error of an ion beam, with the material removal rate estimated by a selected optimal scanning pitch. Experiments on ϕ10 mm Zerodur planar and spherical samples are made, and the final surface errors are both smaller than λ/100 measured by a Zygo GPI interferometer.
Determination of small quantities of fluoride in water: A modified zirconium-alizarin method
Lamar, W.L.; Seegmiller, C.G.
1941-01-01
The zirconium-alizarin method has been modified to facilitate the convenient and accurate determination of small amounts of fluoride in a large number of water samples. Sulfuric acid is used to acidify the samples to reduce the interference of sulfate. The pH is accurately controlled to give the most sensitive comparisons. Most natural waters can be analyzed by the modified procedure without resorting to correction curves. The fluoride content of waters containing less than 500 parts per million of sulfate, 500 parts per million of bicarbonate, and 1000 parts per million of chloride may be determined within a limit of about 0.1 part per million when a 100-ml. sample is used.
Sequencing small genomic targets with high efficiency and extreme accuracy
Schmitt, Michael W.; Fox, Edward J.; Prindle, Marc J.; Reid-Bayliss, Kate S.; True, Lawrence D.; Radich, Jerald P.; Loeb, Lawrence A.
2015-01-01
The detection of minority variants in mixed samples demands methods for enrichment and accurate sequencing of small genomic intervals. We describe an efficient approach based on sequential rounds of hybridization with biotinylated oligonucleotides, enabling more than one-million fold enrichment of genomic regions of interest. In conjunction with error correcting double-stranded molecular tags, our approach enables the quantification of mutations in individual DNA molecules. PMID:25849638
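The tag logic can be caricatured as a toy consensus caller: a call is accepted only when reads within each strand family agree and the two strand consensuses match. This is greatly simplified (no alignment, base qualities, or tag-error handling):

```python
from collections import defaultdict, Counter

def duplex_consensus(reads, min_per_strand=3):
    """Toy duplex consensus: reads are (tag, strand, sequence) tuples sharing
    one locus. A sequence is reported for a tag only if each strand family is
    large enough, is near-unanimous internally, and the two strand consensuses
    agree -- the core idea behind double-stranded molecular tags."""
    groups = defaultdict(lambda: defaultdict(list))
    for tag, strand, seq in reads:
        groups[tag][strand].append(seq)
    out = {}
    for tag, strands in groups.items():
        if set(strands) != {"+", "-"}:
            continue                                  # need both strands
        cons = []
        for s in "+-":
            seqs = strands[s]
            if len(seqs) < min_per_strand:
                break
            top, k = Counter(seqs).most_common(1)[0]
            if k < 0.9 * len(seqs):                   # require near-unanimity
                break
            cons.append(top)
        if len(cons) == 2 and cons[0] == cons[1]:
            out[tag] = cons[0]                        # duplex-confirmed call
    return out
```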
Reum, J C P
2011-12-01
Three lipid correction models were evaluated for liver and white dorsal muscle from Squalus acanthias. For muscle, all three models performed well, based on the Akaike Information Criterion value corrected for small sample sizes (AICc), and predicted similar lipid corrections to δ(13) C that were up to 2.8 ‰ higher than those predicted using previously published models based on multispecies data. For liver, which possessed higher bulk C:N values than white muscle, all three models performed poorly, and lipid-corrected δ(13) C values were best approximated by simply adding 5.74 ‰ to bulk δ(13) C values. © 2011 The Author. Journal of Fish Biology © 2011 The Fisheries Society of the British Isles.
Kitchen, Robert R; Sabine, Vicky S; Sims, Andrew H; Macaskill, E Jane; Renshaw, Lorna; Thomas, Jeremy S; van Hemert, Jano I; Dixon, J Michael; Bartlett, John M S
2010-02-24
Microarray technology is a popular means of producing whole genome transcriptional profiles, however high cost and scarcity of mRNA has led many studies to be conducted based on the analysis of single samples. We exploit the design of the Illumina platform, specifically multiple arrays on each chip, to evaluate intra-experiment technical variation using repeated hybridisations of universal human reference RNA (UHRR) and duplicate hybridisations of primary breast tumour samples from a clinical study. A clear batch-specific bias was detected in the measured expressions of both the UHRR and clinical samples. This bias was found to persist following standard microarray normalisation techniques. However, when mean-centering or empirical Bayes batch-correction methods (ComBat) were applied to the data, inter-batch variation in the UHRR and clinical samples were greatly reduced. Correlation between replicate UHRR samples improved by two orders of magnitude following batch-correction using ComBat (ranging from 0.9833-0.9991 to 0.9997-0.9999) and increased the consistency of the gene-lists from the duplicate clinical samples, from 11.6% in quantile normalised data to 66.4% in batch-corrected data. The use of UHRR as an inter-batch calibrator provided a small additional benefit when used in conjunction with ComBat, further increasing the agreement between the two gene-lists, up to 74.1%. In the interests of practicalities and cost, these results suggest that single samples can generate reliable data, but only after careful compensation for technical bias in the experiment. We recommend that investigators appreciate the propensity for such variation in the design stages of a microarray experiment and that the use of suitable correction methods become routine during the statistical analysis of the data.
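Of the two corrections compared, per-batch mean-centering is simple enough to sketch directly; ComBat additionally shrinks batch location and scale parameters with an empirical Bayes model. Illustrative code for a genes x samples matrix:

```python
import numpy as np

def mean_center_batches(expr, batches):
    """Per-gene mean-centering within batches: each batch's gene-wise mean is
    replaced by the overall gene mean so batches share a common location.
    Scale differences between batches are left untouched, one reason ComBat's
    empirical Bayes adjustment can do better."""
    expr = np.asarray(expr, float)
    batches = np.asarray(batches)
    out = expr.copy()
    grand = expr.mean(axis=1, keepdims=True)
    for b in np.unique(batches):
        cols = batches == b
        out[:, cols] = (expr[:, cols]
                        - expr[:, cols].mean(axis=1, keepdims=True) + grand)
    return out

# 100 genes, two batches of 5 samples with an artificial batch shift
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 10))
data[:, 5:] += 2.0
corrected = mean_center_batches(data, ["A"] * 5 + ["B"] * 5)
print(corrected[:, :5].mean(), corrected[:, 5:].mean())  # batch means now agree
```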
2010-01-01
Background Microarray technology is a popular means of producing whole genome transcriptional profiles, however high cost and scarcity of mRNA has led many studies to be conducted based on the analysis of single samples. We exploit the design of the Illumina platform, specifically multiple arrays on each chip, to evaluate intra-experiment technical variation using repeated hybridisations of universal human reference RNA (UHRR) and duplicate hybridisations of primary breast tumour samples from a clinical study. Results A clear batch-specific bias was detected in the measured expressions of both the UHRR and clinical samples. This bias was found to persist following standard microarray normalisation techniques. However, when mean-centering or empirical Bayes batch-correction methods (ComBat) were applied to the data, inter-batch variation in the UHRR and clinical samples were greatly reduced. Correlation between replicate UHRR samples improved by two orders of magnitude following batch-correction using ComBat (ranging from 0.9833-0.9991 to 0.9997-0.9999) and increased the consistency of the gene-lists from the duplicate clinical samples, from 11.6% in quantile normalised data to 66.4% in batch-corrected data. The use of UHRR as an inter-batch calibrator provided a small additional benefit when used in conjunction with ComBat, further increasing the agreement between the two gene-lists, up to 74.1%. Conclusion In the interests of practicalities and cost, these results suggest that single samples can generate reliable data, but only after careful compensation for technical bias in the experiment. We recommend that investigators appreciate the propensity for such variation in the design stages of a microarray experiment and that the use of suitable correction methods become routine during the statistical analysis of the data. PMID:20181233
Late-Onset Alzheimer's Disease Polygenic Risk Profile Score Predicts Hippocampal Function.
Xiao, Ena; Chen, Qiang; Goldman, Aaron L; Tan, Hao Yang; Healy, Kaitlin; Zoltick, Brad; Das, Saumitra; Kolachana, Bhaskar; Callicott, Joseph H; Dickinson, Dwight; Berman, Karen F; Weinberger, Daniel R; Mattay, Venkata S
2017-11-01
We explored the cumulative effect of several late-onset Alzheimer's disease (LOAD) risk loci using a polygenic risk profile score (RPS) approach on measures of hippocampal function, cognition, and brain morphometry. In a sample of 231 healthy control subjects (19-55 years of age), we used an RPS to study the effect of several LOAD risk loci reported in a recent meta-analysis on hippocampal function (determined by its engagement with blood oxygen level-dependent functional magnetic resonance imaging during episodic memory) and several cognitive metrics. We also studied effects on brain morphometry in an overlapping sample of 280 subjects. There was almost no significant association of LOAD-RPS with cognitive or morphometric measures. However, there was a significant negative relationship between LOAD-RPS and hippocampal function (familywise error [small volume correction-hippocampal region of interest] p < .05). There were also similar associations for risk score based on APOE haplotype, and for a combined LOAD-RPS + APOE haplotype risk profile score (p < .05 familywise error [small volume correction-hippocampal region of interest]). Of the 29 individual single nucleotide polymorphisms used in calculating LOAD-RPS, variants in CLU, PICALM, BCL3, PVRL2, and RELB showed strong effects (p < .05 familywise error [small volume correction-hippocampal region of interest]) on hippocampal function, though none survived further correction for the number of single nucleotide polymorphisms tested. There is a cumulative deleterious effect of LOAD risk genes on hippocampal function even in healthy volunteers. The effect of LOAD-RPS on hippocampal function in the relative absence of any effect on cognitive and morphometric measures is consistent with the reported temporal characteristics of LOAD biomarkers with the earlier manifestation of synaptic dysfunction before morphometric and cognitive changes. Copyright © 2017 Society of Biological Psychiatry. All rights reserved.
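The risk profile score itself is the usual additive construction, summing risk-allele dosages weighted by reported effect sizes (a generic sketch; the study's SNP set and weights come from the cited meta-analysis):

```python
import numpy as np

def risk_profile_score(dosages, weights):
    """Additive polygenic risk profile score: for each subject, the sum over
    SNPs of risk-allele dosage (0-2) times its reported effect size."""
    return np.asarray(dosages, float) @ np.asarray(weights, float)

# 3 subjects x 4 SNPs; log-odds weights from a hypothetical meta-analysis
d = np.array([[0, 1, 2, 1], [1, 1, 0, 2], [2, 0, 1, 0]])
w = np.array([0.15, 0.08, 0.11, 0.05])
print(risk_profile_score(d, w))   # one score per subject
```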
NASA Astrophysics Data System (ADS)
Bonczyk, Michal
2018-07-01
This article deals with the problem of the self-attenuation of low-energy gamma rays from the lead isotope 210Pb (46.5 keV) in industrial waste. A total of 167 industrial waste samples, belonging to nine categories, were tested by means of gamma spectrometry in order to determine the 210Pb activity concentration. An experimental method for self-attenuation corrections for gamma rays emitted by the lead isotope was applied. Mass attenuation coefficients were determined for an energy of 46.5 keV. Correction factors were calculated based on the mass attenuation coefficients, sample density, and thickness. A mathematical formula for the correction calculation was evaluated. The 210Pb activity concentration obtained varied from several Bq·kg-1 up to 19,810 Bq·kg-1. The mass attenuation coefficients varied across the range of 0.19-4.42 cm2·g-1. However, the variation of the mass attenuation coefficient within some categories of waste was relatively small. The calculated self-attenuation corrections ranged from 0.98 to 6.97. Such high correction factors must not be neglected in radiation risk assessment.
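For a uniform slab counted in far-field geometry, the self-attenuation correction follows directly from the optical depth, the product of the mass attenuation coefficient, density, and thickness. The sketch below shows that textbook form; it is an assumption that the paper's experimentally derived formula reduces to something similar, and the function name is ours.

```python
import math

def self_attenuation_correction(mu_mass, density, thickness):
    """Multiplicative correction for gamma self-attenuation in a slab sample.

    mu_mass   : mass attenuation coefficient at 46.5 keV [cm^2/g]
    density   : sample bulk density [g/cm^3]
    thickness : sample thickness along the detector axis [cm]
    """
    mu_t = mu_mass * density * thickness          # dimensionless optical depth
    if mu_t == 0.0:
        return 1.0
    mean_transmission = (1.0 - math.exp(-mu_t)) / mu_t
    return 1.0 / mean_transmission                # multiply the measured rate
```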
ERIC Educational Resources Information Center
Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M.; Begeer, Sander
2014-01-01
The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to the focus on research with relatively small sample sizes. Additionally, it is proposed that although children with ASD may correctly identify emotion…
Research on the magnetorheological finishing (MRF) technology with dual polishing heads
NASA Astrophysics Data System (ADS)
Huang, Wen; Zhang, Yunfei; He, Jianguo; Zheng, Yongcheng; Luo, Qing; Hou, Jing; Yuan, Zhigang
2014-08-01
Magnetorheological finishing (MRF) is a key polishing technique capable of rapidly converging to the required surface figure. To overcome the limitations of conventional single-polishing-head MRF technology, a dual-polishing-head MRF technology was studied and an 8-axis dual-polishing-head MRF machine was developed. The machine is able to manufacture large-aperture optics with high figure accuracy. The large polishing head is suitable for polishing large-aperture optics, controlling wave structures of long spatial wavelength, and correcting low-to-medium-frequency errors with high removal rates, while the small polishing head has advantages in manufacturing small-aperture optics, controlling wave structures of short spatial wavelength, correcting mid-to-high-frequency errors, and removing material at the nanoscale. The material removal characteristics and figure correction ability of both the large and small polishing heads were studied, and each head achieved a stable, valid removal function and an ultra-precision flat sample. After a single polishing iteration using the small polishing head, the figure error over a 45 mm diameter of a 50 mm diameter plano optic improved from 0.21λ to 0.08λ PV (RMS 0.053λ to 0.015λ). After three polishing iterations using the large polishing head, the figure error over 410 mm × 410 mm of a 430 mm × 430 mm large plano optic improved from 0.40λ to 0.10λ PV (RMS 0.068λ to 0.013λ). These results show that the dual-polishing-head MRF machine has both good material removal stability and excellent figure correction capability.
Method and apparatus for measuring thermal conductivity of small, highly insulating specimens
NASA Technical Reports Server (NTRS)
Miller, Robert A. (Inventor); Kuczmarski, Maria A. (Inventor)
2012-01-01
A hot plate method and apparatus for the measurement of thermal conductivity combines the following capabilities: 1) measurements of very small specimens; 2) measurements of specimens with thermal conductivity on the same order as that of air; and 3) the ability to use air as a reference material. Care is taken to ensure that the heat flow through the test specimen is essentially one-dimensional. No attempt is made to use heated guards to minimize the flow of heat from the hot plate to the surroundings. Results indicate that since large correction factors must be applied to account for guard imperfections when specimen dimensions are small, simply measuring and correcting for heat from the heater disc that does not flow into the specimen is preferable. The invention is a hot plate method capable of using air as a standard reference material for the steady-state measurement of the thermal conductivity of very small test samples having thermal conductivity on the order of that of air.
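Once the parasitic heat flow is measured, the correction reduces to Fourier's law applied to the heat that actually crosses the specimen. A minimal sketch with hypothetical variable names (the invention's calibration of the loss term, e.g. against air as a reference, is not shown):

```python
def thermal_conductivity(q_heater, q_loss, area, thickness, delta_t):
    """One-dimensional steady-state hot-plate conductivity estimate.

    q_heater  : total electrical power supplied to the heater disc [W]
    q_loss    : heat from the heater disc that bypasses the specimen [W]
    area      : specimen cross-sectional area [m^2]
    thickness : specimen thickness [m]
    delta_t   : hot-side minus cold-side temperature difference [K]
    """
    q_specimen = q_heater - q_loss                    # heat crossing the specimen
    return q_specimen * thickness / (area * delta_t)  # Fourier's law, 1-D
```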
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, Patrick
Corrective Action Unit (CAU) 541 is co-located on the boundary of Area 5 of the Nevada National Security Site and Range 65C of the Nevada Test and Training Range, approximately 65 miles northwest of Las Vegas, Nevada. CAU 541 is a grouping of sites where there has been a suspected release of contamination associated with nuclear testing. This document describes the planned investigation of CAU 541, which comprises the following corrective action sites (CASs): 05-23-04, Atmospheric Tests (6) - BFa Site; 05-45-03, Atmospheric Test Site - Small Boy. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). Additional information will be obtained by conducting a corrective action investigation before evaluating CAAs and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable CAAs that will be presented in the investigation report. The sites will be investigated based on the data quality objectives (DQOs) developed on April 1, 2014, by representatives of the Nevada Division of Environmental Protection; U.S. Air Force; and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Field Office. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 541. The site investigation process also will be conducted in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices to be applied to this activity. The potential contamination sources associated with CASs 05-23-04 and 05-45-03 are from nuclear testing activities conducted at the Atmospheric Tests (6) - BFa Site and Atmospheric Test Site - Small Boy sites. The presence and nature of contamination at CAU 541 will be evaluated based on information collected from field investigations. Radiological contamination will be evaluated based on a comparison of the total effective dose at sample locations to the dose-based final action level. The total effective dose will be calculated as the total of separate estimates of internal and external dose. Results from the analysis of soil samples will be used to calculate internal radiological dose. Thermoluminescent dosimeters placed at the center of each sample location will be used to measure external radiological dose. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS.
Syfert, Mindy M; Smith, Matthew J; Coomes, David A
2013-01-01
Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and necessity of sampling bias correction within MaxEnt.
Bayes plus Brass: Estimating Total Fertility for Many Small Areas from Sparse Census Data
Schmertmann, Carl P.; Cavenaghi, Suzana M.; Assunção, Renato M.; Potter, Joseph E.
2013-01-01
Small-area fertility estimates are valuable for analysing demographic change, and important for local planning and population projection. In countries lacking complete vital registration, however, small-area estimates are possible only from sparse survey or census data that are potentially unreliable. Such estimation requires new methods for old problems: procedures must be automated if thousands of estimates are required; they must deal with extreme sampling variability in many areas; and they should also incorporate corrections for possible data errors. We present a two-step algorithm for estimating total fertility in such circumstances, and we illustrate by applying the method to 2000 Brazilian Census data for over five thousand municipalities. Our proposed algorithm first smooths local age-specific rates using Empirical Bayes methods, and then applies a new variant of Brass's P/F parity correction procedure that is robust under conditions of rapid fertility decline. PMID:24143946
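The first (smoothing) step can be illustrated with a generic moment-based gamma-Poisson empirical Bayes shrinkage of local rates toward the global rate. This is a sketch of the idea only, not the authors' estimator for age-specific rates:

```python
import numpy as np

def eb_smooth_rates(events, exposure):
    """Shrink noisy small-area rates toward the global rate.

    events, exposure : per-area event counts and person-years at risk
    """
    events = np.asarray(events, float)
    exposure = np.asarray(exposure, float)
    raw = events / exposure
    global_rate = events.sum() / exposure.sum()
    # method-of-moments estimate of true between-area variance:
    # observed variance minus the average Poisson sampling variance
    sampling_var = np.mean(global_rate / exposure)
    between_var = max(np.average((raw - global_rate) ** 2, weights=exposure)
                      - sampling_var, 1e-12)
    # areas with little exposure are pulled hardest toward the global rate
    weight = between_var / (between_var + global_rate / exposure)
    return weight * raw + (1.0 - weight) * global_rate
```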
Is First-Order Vector Autoregressive Model Optimal for fMRI Data?
Ting, Chee-Ming; Seghouane, Abd-Krim; Khalid, Muhammad Usman; Salleh, Sh-Hussain
2015-09-01
We consider the problem of selecting the optimal orders of vector autoregressive (VAR) models for fMRI data. Many previous studies used a model order of one and ignored that it may vary considerably across data sets depending on different data dimensions, subjects, tasks, and experimental designs. In addition, the classical information criteria (IC) used (e.g., the Akaike IC (AIC)) are biased and inappropriate for high-dimensional fMRI data, which typically have a small sample size. We examine the mixed results on the optimal VAR orders for fMRI, especially the validity of the order-one hypothesis, by a comprehensive evaluation using different model selection criteria over three typical data types--a resting state, an event-related design, and a block design data set--with varying time series dimensions obtained from distinct functional brain networks. We use a more balanced criterion, Kullback's IC (KIC), based on Kullback's symmetric divergence, which combines two directed divergences. We also consider the bias-corrected versions (AICc and KICc) to improve VAR model selection in small samples. Simulation results show better small-sample selection performance of the proposed criteria over the classical ones. Both bias-corrected ICs provide more accurate and consistent model order choices than their biased counterparts, which suffer from overfitting, with KICc performing the best. Results on real data show that orders greater than one were selected by all criteria across all data sets for the small to moderate dimensions, particularly from small, specific networks such as the resting-state default mode network and the task-related motor networks, whereas low orders close to one but not necessarily one were chosen for the large dimensions of full-brain networks.
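For orientation, the criteria being compared have the following generic likelihood-based forms; the paper uses multivariate VAR-specific versions of AICc and KICc, so the exact correction terms differ, which makes this a sketch rather than a reimplementation. In practice one fits VAR(p) for p = 1, ..., pmax and keeps the order that minimizes the chosen criterion.

```python
def info_criteria(loglik, n_params, n_obs):
    """Generic information criteria for model order selection.

    AIC  = -2 lnL + 2k
    AICc = AIC + 2k(k+1)/(n - k - 1)   (small-sample bias correction)
    KIC  = -2 lnL + 3k                 (Kullback symmetric-divergence IC)
    """
    aic = -2.0 * loglik + 2.0 * n_params
    aicc = aic + 2.0 * n_params * (n_params + 1) / max(n_obs - n_params - 1, 1)
    kic = -2.0 * loglik + 3.0 * n_params
    return {"AIC": aic, "AICc": aicc, "KIC": kic}
```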
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.
The primary goal of this project is to evaluate x-ray spectra generated within a scanning electron microscope (SEM) to determine elemental composition of small samples. This will be accomplished by performing Monte Carlo simulations of the electron and photon interactions in the sample and in the x-ray detector. The elemental inventories will be determined by an inverse process that progressively reduces the difference between the measured and simulated x-ray spectra by iteratively adjusting composition and geometric variables in the computational model. The intended benefit of this work will be to develop a method to perform quantitative analysis on substandard samples (heterogeneous phases, rough surfaces, small sizes, etc.) without involving standard elemental samples or empirical matrix corrections (i.e., true standardless quantitative analysis).
NASA Astrophysics Data System (ADS)
Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy
2016-11-01
Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Although potentially more efficient computationally, the system statistics they generate suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient, as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
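The pairing idea can be sketched for a toy pure-death process (X -> X - 1 with propensity c*x), using the Anderson-Higham split-Poisson coupling of one coarse step of size tau with two fine steps of tau/2. The adaptive choice of tau proposed in the paper is not shown, and clamping states at zero is a simplification:

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_tau_leap_death(x0, c, t_end, tau):
    """One (coarse, fine) pair of tau-leap paths sharing randomness."""
    xc, xf = x0, x0
    t = 0.0
    while t < t_end:
        ac = c * xc                      # coarse propensity, frozen for tau
        for _ in range(2):               # two fine sub-steps of tau/2
            af = c * xf                  # fine propensity, updated each sub-step
            m = min(ac, af)              # shared ("common") intensity
            shared = rng.poisson(m * tau / 2)
            extra_c = rng.poisson((ac - m) * tau / 2)
            extra_f = rng.poisson((af - m) * tau / 2)
            xc = max(xc - shared - extra_c, 0)
            xf = max(xf - shared - extra_f, 0)
        t += tau
    return xc, xf

# the level correction E[fine - coarse] has low variance because the
# paired paths share the 'shared' Poisson counts
pairs = [coupled_tau_leap_death(100, 0.5, 1.0, 0.1) for _ in range(1000)]
correction = np.mean([xf - xc for xc, xf in pairs])
```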
On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.
Westgate, Philip M; Burchett, Woodrow W
2017-03-15
The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.
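A minimal sketch of this style of analysis in Python, assuming the statsmodels GEE implementation and its "bias_reduced" covariance option (a Mancl-DeRouen-type small-sample correction); the paper's specific bias-corrected covariance estimator and its correlation-selection strategy are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_subj, n_time = 10, 4                       # deliberately tiny sample
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subj), n_time),
    "time": np.tile(np.arange(n_time), n_subj),
    "group": np.repeat(rng.integers(0, 2, n_subj), n_time),
})
df["y"] = 0.5 * df["group"] + rng.normal(size=len(df))

model = sm.GEE.from_formula("y ~ group + time", groups="id", data=df,
                            cov_struct=sm.cov_struct.Exchangeable())
# "bias_reduced" requests a small-sample-corrected empirical covariance
result = model.fit(cov_type="bias_reduced")
print(result.summary())
```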
Real-Time Microfluidic Blood-Counting System for PET and SPECT Preclinical Pharmacokinetic Studies.
Convert, Laurence; Lebel, Réjean; Gascon, Suzanne; Fontaine, Réjean; Pratte, Jean-François; Charette, Paul; Aimez, Vincent; Lecomte, Roger
2016-09-01
Small-animal nuclear imaging modalities have become essential tools in the development process of new drugs, diagnostic procedures, and therapies. Quantification of metabolic or physiologic parameters is based on pharmacokinetic modeling of radiotracer biodistribution, which requires the blood input function in addition to tissue images. Such measurements are challenging in small animals because of their small blood volume. In this work, we propose a microfluidic counting system to monitor rodent blood radioactivity in real time, with high efficiency and small detection volume (∼1 μL). A microfluidic channel is built directly above unpackaged p-i-n photodiodes to detect β-particles with maximum efficiency. The device is embedded in a compact system comprising dedicated electronics, shielding, and pumping unit controlled by custom firmware to enable measurements next to small-animal scanners. Data corrections required to use the input function in pharmacokinetic models were established using calibrated solutions of the most common PET and SPECT radiotracers. Sensitivity, dead time, propagation delay, dispersion, background sensitivity, and the effect of sample temperature were characterized. The system was tested for pharmacokinetic studies in mice by quantifying myocardial perfusion and oxygen consumption with (11)C-acetate (PET) and by measuring the arterial input function using (99m)TcO4 (-) (SPECT). Sensitivity for PET isotopes reached 20%-47%, a 2- to 10-fold improvement relative to conventional catheter-based geometries. Furthermore, the system detected (99m)Tc-based SPECT tracers with an efficiency of 4%, an outcome not possible through a catheter. Correction for dead time was found to be unnecessary for small-animal experiments, whereas propagation delay and dispersion within the microfluidic channel were accurately corrected. Background activity and sample temperature were shown to have no influence on measurements. Finally, the system was successfully used in animal studies. A fully operational microfluidic blood-counting system for preclinical pharmacokinetic studies was developed. Microfluidics enabled reliable and high-efficiency measurement of the blood concentration of most common PET and SPECT radiotracers with high temporal resolution in small blood volume. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Modeling bias and variation in the stochastic processes of small RNA sequencing
Etheridge, Alton; Sakhanenko, Nikita; Galas, David
2017-01-01
The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear-quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. PMID:28369495
Closed loop adaptive optics for microscopy without a wavefront sensor
Kner, Peter; Winoto, Lukman; Agard, David A.; Sedat, John W.
2013-01-01
A three-dimensional wide-field image of a small fluorescent bead contains more than enough information to accurately calculate the wavefront in the microscope objective back pupil plane using the phase retrieval technique. The phase-retrieved wavefront can then be used to set a deformable mirror to correct the point-spread function (PSF) of the microscope without the use of a wavefront sensor. This technique will be useful for aligning the deformable mirror in a widefield microscope with adaptive optics and could potentially be used to correct aberrations in samples where small fluorescent beads or other point sources are used as reference beacons. Another advantage is the high resolution of the retrieved wavefront as compared with current Shack-Hartmann wavefront sensors. Here we demonstrate effective correction of the PSF in 3 iterations. Starting from a severely aberrated system, we achieve a Strehl ratio of 0.78 and a greater than 10-fold increase in maximum intensity. PMID:24392198
The Top-of-Instrument corrections for nuclei with AMS on the Space Station
NASA Astrophysics Data System (ADS)
Ferris, N. G.; Heil, M.
2018-05-01
The Alpha Magnetic Spectrometer (AMS) is a large acceptance, high precision magnetic spectrometer on the International Space Station (ISS). The top-of-instrument correction for nuclei flux measurements with AMS accounts for backgrounds due to the fragmentation of nuclei with higher charge. Upon entry in the detector, nuclei may interact with AMS materials and split into fragments of lower charge based on their cross-section. The redundancy of charge measurements along the particle trajectory with AMS allows for the determination of inelastic interactions and for the selection of high purity nuclei samples with small uncertainties. The top-of-instrument corrections for nuclei with 2 < Z ≤ 6 are presented.
NASA Astrophysics Data System (ADS)
Bezur, L.; Marshall, J.; Ottaway, J. M.
A square-wave wavelength modulation system, based on a rotating quartz chopper with four quadrants of different thicknesses, has been developed and evaluated as a method for automatic background correction in carbon furnace atomic emission spectrometry. Accurate background correction is achieved for the residual black body radiation (Rayleigh scatter) from the tube wall and Mie scatter from particles generated by a sample matrix and formed by condensation of atoms in the optical path. Intensity modulation caused by overlap at the edges of the quartz plates and by the divergence of the optical beam at the position of the modulation chopper has been investigated and is likely to be small.
An internal pilot design for prospective cancer screening trials with unknown disease prevalence.
Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2015-10-13
For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I error inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves the goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample sizes.
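The interim recalculation can be sketched with a normal-approximation formula in which both the prevalence and the variance estimates are refreshed from the accrued pilot data. The function and its inputs are illustrative, not the authors' exact design, and the small-sample critical-value adjustment is omitted:

```python
import math
from scipy.stats import norm

def recompute_sample_size(prev_hat, var_hat, delta, alpha=0.05, power=0.9):
    """Re-estimate the required total sample size at the internal pilot.

    prev_hat : disease prevalence estimated from the pilot data
    var_hat  : estimated variance of the accuracy-difference statistic
    delta    : clinically meaningful difference in diagnostic accuracy
    """
    z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
    n_cases = (z / delta) ** 2 * var_hat      # diseased subjects needed
    return math.ceil(n_cases / prev_hat)      # total, inflated for prevalence
```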
Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich
2011-12-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
Ion beam figuring of small optical components
NASA Astrophysics Data System (ADS)
Drueding, Thomas W.; Fawcett, Steven C.; Wilson, Scott R.; Bifano, Thomas G.
1995-12-01
Ion beam figuring provides a highly deterministic method for the final precision figuring of optical components with advantages over conventional methods. The process involves bombarding a component with a stable beam of accelerated particles that selectively removes material from the surface. Figure corrections are achieved by rastering the fixed-current beam across the workpiece at appropriate, time-varying velocities. Unlike conventional methods, ion figuring is a noncontact technique and thus avoids such problems as edge rolloff effects, tool wear, and force loading of the workpiece. This work is directed toward the development of the precision ion machining system at NASA's Marshall Space Flight Center. This system is designed for processing small (approximately 10-cm diam) optical components. Initial experiments were successful in figuring 8-cm-diam fused silica and chemical-vapor-deposited SiC samples. The experiments, procedures, and results of figuring the sample workpieces to shallow spherical, parabolic (concave and convex), and non-axially-symmetric shapes are discussed. Several difficulties and limitations encountered with the current system are discussed. The use of a 1-cm aperture for making finer corrections on optical components is also reported.
Stability and bias of classification rates in biological applications of discriminant analysis
Williams, B.K.; Titus, K.; Hines, J.E.
1990-01-01
We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
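The optimism of resubstitution rates under small, overlapping groups is easy to reproduce; the following hypothetical simulation contrasts apparent and cross-validated classification rates for linear discriminant analysis:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n, p = 15, 6                                  # few samples, several variables
X = np.vstack([rng.normal(0.0, 1.0, (n, p)),  # two heavily overlapping groups
               rng.normal(0.5, 1.0, (n, p))])
y = np.repeat([0, 1], n)

apparent = LinearDiscriminantAnalysis().fit(X, y).score(X, y)  # biased upward
cv = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"apparent rate = {apparent:.2f}, cross-validated rate = {cv:.2f}")
```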
Can quantile mapping improve precipitation extremes from regional climate models?
NASA Astrophysics Data System (ADS)
Tani, Satyanarayana; Gobiet, Andreas
2015-04-01
The ability of quantile mapping to accurately bias-correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots, and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
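For reference, the standard empirical quantile mapping that the paper extends (its QMα baseline) can be sketched as below; the QMβ variants, which blend parametric and nonparametric corrections and add a controlling shape parameter, are not reproduced here. Note the constant extrapolation beyond the calibration range, which is exactly where empirical methods become unstable for new extremes.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping of future model values onto observations."""
    model_hist = np.sort(np.asarray(model_hist, float))
    obs_hist = np.sort(np.asarray(obs_hist, float))
    # empirical non-exceedance probability of each future value
    probs = np.searchsorted(model_hist, model_future) / len(model_hist)
    probs = np.clip(probs, 0.0, 1.0)          # constant extrapolation at tails
    return np.quantile(obs_hist, probs)
```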
Intensity-corrected Herschel Observations of Nearby Isolated Low-mass Clouds
NASA Astrophysics Data System (ADS)
Sadavoy, Sarah I.; Keto, Eric; Bourke, Tyler L.; Dunham, Michael M.; Myers, Philip C.; Stephens, Ian W.; Di Francesco, James; Webb, Kristi; Stutz, Amelia M.; Launhardt, Ralf; Tobin, John J.
2018-01-01
We present intensity-corrected Herschel maps at 100, 160, 250, 350, and 500 μm for 56 isolated low-mass clouds. We determine the zero-point corrections for Herschel Photodetector Array Camera and Spectrometer (PACS) and Spectral Photometric Imaging Receiver (SPIRE) maps from the Herschel Science Archive (HSA) using Planck data. Since these HSA maps are small, we cannot correct them using typical methods. Here we introduce a technique to measure the zero-point corrections for small Herschel maps. We use radial profiles to identify offsets between the observed HSA intensities and the expected intensities from Planck. Most clouds have reliable offset measurements with this technique. In addition, we find that roughly half of the clouds have underestimated HSA-SPIRE intensities in their outer envelopes relative to Planck, even though the HSA-SPIRE maps were previously zero-point corrected. Using our technique, we produce corrected Herschel intensity maps for all 56 clouds and determine their line-of-sight average dust temperatures and optical depths from modified blackbody fits. The clouds have typical temperatures of ∼14–20 K and optical depths of ∼10⁻⁵–10⁻³. Across the whole sample, we find an anticorrelation between temperature and optical depth. We also find lower temperatures than what was measured in previous Herschel studies, which subtracted out a background level from their intensity maps to circumvent the zero-point correction. Accurate Herschel observations of clouds are key to obtaining accurate density and temperature profiles. To make such future analyses possible, intensity-corrected maps for all 56 clouds are publicly available in the electronic version. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
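Each line-of-sight fit amounts to fitting a modified blackbody, optical depth times the Planck function, to the five band intensities. Below is a self-contained sketch with fabricated data and assumed conventions (β fixed at 2 and a 1 THz reference frequency); the paper's actual fitting choices may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

H, KB, C = 6.626e-34, 1.381e-23, 2.998e8      # SI constants

def modified_blackbody(nu, tau0, temp, beta=2.0, nu0=1.0e12):
    """I_nu = tau(nu) * B_nu(T) for optically thin dust emission."""
    tau = tau0 * (nu / nu0) ** beta
    planck = 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * temp))
    return tau * planck

wav_um = np.array([100.0, 160.0, 250.0, 350.0, 500.0])   # Herschel bands
nu = C / (wav_um * 1e-6)
intensity = modified_blackbody(nu, 3e-4, 16.0)           # fake "observed" data

(tau0_fit, t_fit), _ = curve_fit(
    lambda n, t0, tt: modified_blackbody(n, t0, tt), nu, intensity,
    p0=[1e-4, 15.0])
print(f"fitted T = {t_fit:.1f} K, tau at 1 THz = {tau0_fit:.1e}")
```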
Super-global distortion correction for a rotational C-arm x-ray image intensifier.
Liu, R R; Rudin, S; Bednarek, D R
1999-09-01
Image intensifier (II) distortion changes as a function of C-arm rotation angle because of changes in the orientation of the II with the earth's or other stray magnetic fields. For cone-beam computed tomography (CT), distortion correction for all angles is essential. The new super-global distortion correction consists of a model to continuously correct II distortion not only at each location in the image but for every rotational angle of the C arm. Calibration bead images were acquired with a standard C arm in 9 in. II mode. The super-global (SG) model is obtained from the single-plane global correction of the selected calibration images with a given sampling angle interval. The fifth-order single-plane global corrections yielded a residual rms error of 0.20 pixels, while the SG model yielded an rms error of 0.21 pixels, a negligibly small difference. We evaluated the accuracy dependence of the SG model on various factors, such as the single-plane global fitting order, SG order, and angular sampling interval. We found that a good SG model can be obtained using a sixth-order SG polynomial fit based on the fifth-order single-plane global correction, and that a 10 degree sampling interval was sufficient. Thus, the SG model saves processing resources and storage space. The residual errors from the mechanical errors of the x-ray system were also investigated, and found to be comparable with the SG residual error. Additionally, a single-plane global correction was done in the cylindrical coordinate system, and physical information about pincushion distortion and S distortion was observed and analyzed; however, this method is not recommended due to a lack of computational efficiency. In conclusion, the SG model provides an accurate, fast, and simple correction for rotational C-arm images, which may be used for cone-beam CT.
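The single-plane step can be illustrated as an ordinary least-squares fit of a 2-D polynomial mapping distorted bead positions to their known true positions. The super-global extension, which makes these coefficients smooth polynomial functions of the C-arm angle, is omitted from this sketch:

```python
import numpy as np

def fit_polynomial_distortion(xd, yd, xt, yt, order=5):
    """Fit x_true = Px(xd, yd), y_true = Py(xd, yd) for one C-arm angle.

    xd, yd : distorted bead coordinates (1-D arrays)
    xt, yt : corresponding true grid coordinates
    """
    xd, yd = np.asarray(xd, float), np.asarray(yd, float)
    terms = [xd**i * yd**j
             for i in range(order + 1) for j in range(order + 1 - i)]
    design = np.column_stack(terms)           # one column per polynomial term
    coef_x, *_ = np.linalg.lstsq(design, xt, rcond=None)
    coef_y, *_ = np.linalg.lstsq(design, yt, rcond=None)
    return coef_x, coef_y
```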
Rosenblum, Michael A; Laan, Mark J van der
2009-01-07
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
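A sketch of the Bernstein-inequality interval mentioned above, for i.i.d. variables bounded by M and with a known variance bound (the known bound is our simplifying assumption; the paper also develops the narrower intervals better suited to practice):

```python
import math

def bernstein_ci(xbar, n, var_bound, m_bound, alpha=0.05):
    """Finite-sample CI for a mean from Bernstein's inequality.

    Bernstein:  P(|Xbar - mu| >= t) <= 2 exp(-n t^2 / (2 v + 2 M t / 3)),
    with v a variance bound and M a bound on |X|. Setting the right-hand
    side equal to alpha gives a quadratic in t, solved below.
    """
    log_term = math.log(2.0 / alpha)
    b = 2.0 * m_bound * log_term / (3.0 * n)
    c = 2.0 * var_bound * log_term / n
    t = (b + math.sqrt(b * b + 4.0 * c)) / 2.0
    return xbar - t, xbar + t
```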
Chen, Xiao; Lu, Bin; Yan, Chao-Gan
2018-01-01
Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that a permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
Pedroza, Claudia; Truong, Van Thi Thanh
2017-11-02
Analyses of multicenter studies often need to account for center clustering to ensure valid inference. For binary outcomes, it is particularly challenging to properly adjust for center when the number of centers or total sample size is small, or when there are few events per center. Our objective was to evaluate the performance of generalized estimating equation (GEE) log-binomial and Poisson models, generalized linear mixed models (GLMMs) assuming binomial and Poisson distributions, and a Bayesian binomial GLMM to account for center effect in these scenarios. We conducted a simulation study with few centers (≤30) and 50 or fewer subjects per center, using both a randomized controlled trial and an observational study design to estimate relative risk. We compared the GEE and GLMM models with a log-binomial model without adjustment for clustering in terms of bias, root mean square error (RMSE), and coverage. For the Bayesian GLMM, we used informative neutral priors that are skeptical of large treatment effects that are almost never observed in studies of medical interventions. All frequentist methods exhibited little bias, and the RMSE was very similar across the models. The binomial GLMM had poor convergence rates, ranging from 27% to 85%, but performed well otherwise. The results show that both GEE models need to use small sample corrections for robust SEs to achieve proper coverage of 95% CIs. The Bayesian GLMM had similar convergence rates but resulted in slightly more biased estimates for the smallest sample sizes. However, it had the smallest RMSE and good coverage across all scenarios. These results were very similar for both study designs. For the analyses of multicenter studies with a binary outcome and few centers, we recommend adjustment for center with either a GEE log-binomial or Poisson model with appropriate small sample corrections or a Bayesian binomial GLMM with informative priors.
Correction of image drift and distortion in a scanning electron microscope.
Jin, P; Li, X
2015-12-01
Continuous research on small-scale mechanical structures and systems has created strong demand for ultrafine deformation and strain measurements. Conventional optical microscopes cannot meet such requirements owing to their lower spatial resolution. Therefore, the high-resolution scanning electron microscope has become the preferred system for high-spatial-resolution imaging and measurement. However, scanning electron microscope images are usually contaminated by distortion and drift aberrations, which cause serious errors in precise imaging and measurement of tiny structures. This paper develops a new method to correct drift and distortion aberrations of scanning electron microscope images and evaluates the effect of the correction by comparing corrected images with a scanning electron microscope image of a standard sample. The drift correction is based on an interpolation scheme, in which a series of images is captured at one location of the sample and image correlation is performed between the first image and subsequent images to interpolate the drift-time relationship of scanning electron microscope images. The distortion correction applies the axial symmetry model of charged-particle imaging theory to two images of the same location on one object under different imaging fields of view. The difference between these two images, apart from rigid displacement, yields the distortion parameters. Third-order precision is considered in the model, and experiments show that a maximum correction of one pixel is obtained for the high-resolution electron microscope system employed. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
Elongation measurement using 1-dimensional image correlation method
NASA Astrophysics Data System (ADS)
Phongwisit, Phachara; Kamoldilok, Surachart; Buranasiri, Prathan
2016-11-01
The aim of this paper was to study, set up, and calibrate an elongation measurement using the 1-Dimensional Image Correlation (1-DIC) method. To confirm the correctness of our method and setup, we calibrated it against another method. In this paper, we used a small spring as a sample and expressed the result in terms of the spring constant. Following the fundamentals of the image correlation method, images of the sample before and after deformation were compared to characterise the deformation process. By comparing the pixel locations of a reference point in both images, the spring's elongation was calculated. The results were then compared with the spring constant obtained from Hooke's law, and an error of about 5 percent was found. This DIC method would then be applied to measure the elongation of different kinds of small fiber samples.
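The core of 1-DIC is a one-dimensional cross-correlation that locates the shift of an intensity profile between the reference and deformed images. A minimal integer-pixel sketch (practical DIC adds subpixel interpolation, which is omitted here):

```python
import numpy as np

def displacement_1d(ref_line, def_line):
    """Integer-pixel shift of def_line relative to ref_line via
    normalised cross-correlation."""
    r = (ref_line - ref_line.mean()) / ref_line.std()
    d = (def_line - def_line.mean()) / def_line.std()
    corr = np.correlate(d, r, mode="full")
    return int(np.argmax(corr)) - (len(r) - 1)   # positive = rightward shift

# toy usage: a Gaussian feature shifted by 10 samples
x = np.linspace(0.0, 1.0, 200)
ref = np.exp(-((x - 0.30) / 0.05) ** 2)
deformed = np.exp(-((x - 0.35) / 0.05) ** 2)
print(displacement_1d(ref, deformed))            # -> 10
```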
Population entropies estimates of proteins
NASA Astrophysics Data System (ADS)
Low, Wai Yee
2017-05-01
The Shannon entropy equation provides a way to estimate variability of amino acid sequences in a multiple sequence alignment of proteins. Knowledge of protein variability is useful in many areas such as vaccine design, identification of antibody binding sites, and exploration of protein 3D structural properties. In cases where the population entropies of a protein are of interest but only a small sample size can be obtained, a method based on linear regression and random subsampling can be used to estimate the population entropy. This method is useful for comparisons of entropies where the actual sequence counts differ and thus correction for alignment size bias is needed. In the current work, an R-based package named EntropyCorrect that enables estimation of population entropy is presented, and an empirical study on how well this new algorithm performs on simulated datasets of various combinations of population and sample sizes is discussed. The package is available at https://github.com/lloydlow/EntropyCorrect. This article, which was originally published online on 12 May 2017, contained an error in Eq. (1), where the summation sign was missing. The corrected equation appears in the Corrigendum attached to the pdf.
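The regression-and-subsampling idea can be sketched as follows: compute entropies of random subsamples at several sizes, regress the mean entropy on 1/n, and take the intercept (1/n approaching 0) as the population estimate. The package itself is written in R; this Python sketch and its regression design are our assumptions, not the package's code.

```python
import numpy as np

def shannon_entropy(column):
    """Entropy (bits) of one alignment column from observed residue counts."""
    _, counts = np.unique(column, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def population_entropy(column, fractions=(0.4, 0.6, 0.8, 1.0),
                       n_rep=200, seed=0):
    """Extrapolate column entropy to infinite sample size."""
    rng = np.random.default_rng(seed)
    column = np.asarray(column)
    inv_n, mean_ent = [], []
    for f in fractions:
        n = max(int(f * len(column)), 2)
        ents = [shannon_entropy(rng.choice(column, n, replace=False))
                for _ in range(n_rep)]
        inv_n.append(1.0 / n)
        mean_ent.append(np.mean(ents))
    slope, intercept = np.polyfit(inv_n, mean_ent, 1)
    return intercept                              # entropy at 1/n = 0
```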
NASA Astrophysics Data System (ADS)
Choi, Y.; Park, S.; Baik, S.; Jung, J.; Lee, S.; Yoo, J.
A small-scale laboratory adaptive optics system using a Shack-Hartmann wave-front sensor (WFS) and a membrane deformable mirror (DM) has been built for robust image acquisition. In this study, an adaptive limited-control technique is adopted to maintain the long-term correction stability of the adaptive optics system. To avoid wasting dynamic correction range on small residual wave-front distortions that are inefficient to correct, the system limits wave-front correction when a similar small difference wave-front pattern is repeatedly generated. In addition, the effect of mechanical distortion in an adaptive optics system is studied, and a pre-recognition method for the distortion is devised to prevent low-performance system operation. A confirmation process for a balanced work assignment among the DM actuators is adopted for the pre-recognition. The correction results obtained with the small-scale adaptive optics system are described in this paper.
Consideration of Kaolinite Interference Correction for Quartz Measurements in Coal Mine Dust
Lee, Taekhee; Chisholm, William P.; Kashon, Michael; Key-Schwartz, Rosa J.; Harper, Martin
2015-01-01
Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed “deviation,” not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected. PMID:23767881
Single image non-uniformity correction using compressive sensing
NASA Astrophysics Data System (ADS)
Jian, Xian-zhong; Lu, Rui-zhi; Guo, Qiang; Wang, Gui-pu
2016-05-01
A non-uniformity correction (NUC) method for an infrared focal plane array imaging system was proposed. The algorithm, based on compressive sensing (CS) of a single image, overcame the disadvantages of "ghost artifacts" and heavy computational costs in traditional NUC algorithms. A point-sampling matrix was designed to validate the measurements of CS in the time domain. The measurements were corrected using the midway infrared equalization algorithm, and the missing pixels were recovered with the regularized orthogonal matching pursuit algorithm. Experimental results showed that the proposed method can reconstruct the entire image with only 25% of the pixels. A small difference was found between the correction results using 100% of the pixels and the reconstruction results using 40% of the pixels. Evaluation of the proposed method on the basis of the root-mean-square error, peak signal-to-noise ratio, and roughness index (ρ) proved the method to be robust and highly applicable.
Feng, Dai; Cortese, Giuliana; Baumgartner, Richard
2017-12-01
The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
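For concreteness, here is one standard way to pair the Mann-Whitney AUC estimate with a Wald-type interval, using the Hanley-McNeil variance approximation. This is a textbook baseline for illustration, not necessarily one of the 29 methods the authors compare, and the marker values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def auc_ci_hanley_mcneil(pos, neg, alpha=0.05):
    """AUC via the Mann-Whitney statistic with a Hanley-McNeil Wald CI."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    diff = pos[:, None] - neg[None, :]
    auc = (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size
    n1, n2 = len(pos), len(neg)
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    var = (auc * (1 - auc) + (n1 - 1) * (q1 - auc**2)
           + (n2 - 1) * (q2 - auc**2)) / (n1 * n2)
    half = norm.ppf(1 - alpha / 2) * np.sqrt(var)
    return auc, max(0.0, auc - half), min(1.0, auc + half)

# Small-sample example with hypothetical marker values
diseased = [2.1, 3.4, 2.8, 4.0, 3.1]
healthy = [1.0, 2.2, 1.7, 2.5, 1.4, 2.0]
print(auc_ci_hanley_mcneil(diseased, healthy))
```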
Reducing representativeness and sampling errors in radio occultation-radiosonde comparisons
NASA Astrophysics Data System (ADS)
Gilpin, Shay; Rieckh, Therese; Anthes, Richard
2018-05-01
Radio occultation (RO) and radiosonde (RS) comparisons provide a means of analyzing errors associated with both observational systems. Since RO and RS observations are not taken at the exact same time or location, temporal and spatial sampling errors resulting from atmospheric variability can be significant and inhibit error analysis of the observational systems. In addition, the vertical resolutions of RO and RS profiles vary, and vertical representativeness errors may also affect the comparison. In RO-RS comparisons, RO observations are co-located with RS profiles within a fixed time window and distance, i.e., within 3-6 h and circles of radii ranging between 100 and 500 km. In this study, we first show that vertical filtering of RO and RS profiles to a common vertical resolution reduces representativeness errors. We then test two methods of reducing horizontal sampling errors during RO-RS comparisons: restricting co-location pairs to within ellipses oriented along the direction of wind flow rather than circles, and applying a spatial-temporal sampling correction based on model data. Using data from 2011 to 2014, we compare RO and RS differences at four GCOS Reference Upper-Air Network (GRUAN) RS stations in different climatic locations, in which co-location pairs were constrained to a large circle (~666 km radius), small circle (~300 km radius), and ellipse parallel to the wind direction (~666 km semi-major axis, ~133 km semi-minor axis). We also apply a spatial-temporal sampling correction using European Centre for Medium-Range Weather Forecasts Interim Reanalysis (ERA-Interim) gridded data. Restricting co-locations to within the ellipse reduces root mean square (RMS) refractivity, temperature, and water vapor pressure differences relative to RMS differences within the large circle, and produces differences that are comparable to or less than the RMS differences within circles of similar area. Applying the sampling correction shows the most significant reduction in RMS differences, such that RMS differences become nearly identical regardless of the geometric constraints. We conclude that implementing the spatial-temporal sampling correction using a reliable model will most effectively reduce sampling errors during RO-RS comparisons; however, if a reliable model is not available, restricting spatial comparisons to within an ellipse parallel to the wind flow will reduce sampling errors caused by horizontal atmospheric variability.
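A sketch of the wind-oriented ellipse criterion described above: an RO-RS pair is accepted when its offset, rotated into a wind-aligned frame, falls inside the ellipse. A flat, local east/north approximation is assumed, and the axis lengths follow the values quoted in the abstract; the function name and arguments are illustrative.

```python
import numpy as np

def within_wind_ellipse(dx_km, dy_km, wind_dir_deg, a_km=666.0, b_km=133.0):
    """Check whether an RO-RS offset lies inside an ellipse whose semi-major
    axis is aligned with the wind direction.

    dx_km, dy_km  -- east/north offset of the RO point from the RS station
    wind_dir_deg  -- direction the wind blows toward, degrees from east
    a_km, b_km    -- semi-major / semi-minor axes
    """
    theta = np.deg2rad(wind_dir_deg)
    # rotate offsets into the wind-aligned frame
    along = dx_km * np.cos(theta) + dy_km * np.sin(theta)
    cross = -dx_km * np.sin(theta) + dy_km * np.cos(theta)
    return (along / a_km) ** 2 + (cross / b_km) ** 2 <= 1.0

print(within_wind_ellipse(400.0, 50.0, wind_dir_deg=10.0))  # True
```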
Iglesias, María Teresa; De Lorenzo, Cristina; Del Carmen Polo, María; Martín-Alvarez, Pedro Jésus; Pueyo, Encarnacíon
2004-01-14
With the aim of finding methods that could constitute a solid alternative to melissopalynological and physicochemical analyses to determine the botanical origin (floral or honeydew) of honeys, the free amino acid content of 46 honey samples has been determined. The honeys were collected in a small geographic area of approximately 2000 km² in central Spain. Twenty-seven honey samples were classified as floral and 19 as honeydew according to their palynological and physicochemical analyses. The resulting data were subjected to different multivariate analysis techniques. One hundred percent of the honey samples were correctly classified into either the floral or the honeydew group according to their glutamic acid and tryptophan content. It is concluded that free amino acids are good indicators of the botanical origin of honeys, saving time compared with more tedious analyses.
Apparatus and method for detecting gamma radiation
Sigg, Raymond A.
1994-01-01
A high efficiency radiation detector for measuring X-ray and gamma radiation from small-volume, low-activity liquid samples with an overall uncertainty better than 0.7% (one sigma SD). The radiation detector includes a hyperpure germanium well detector, a collimator, and a reference source. The well detector monitors gamma radiation emitted by the reference source and a radioactive isotope or isotopes in a sample source. The radiation from the reference source is collimated to avoid attenuation of reference source gamma radiation by the sample. Signals from the well detector are processed and stored, and the stored data is analyzed to determine the radioactive isotope(s) content of the sample. Minor self-attenuation corrections are calculated from chemical composition data.
Effects of Sample Selection Bias on the Accuracy of Population Structure and Ancestry Inference
Shringarpure, Suyash; Xing, Eric P.
2014-01-01
Population stratification is an important task in genetic analyses. It provides information about the ancestry of individuals and can be an important confounder in genome-wide association studies. Public genotyping projects have made a large number of datasets available for study. However, practical constraints dictate that of a geographical/ethnic population, only a small number of individuals are genotyped. The resulting data are a sample from the entire population. If the distribution of sample sizes is not representative of the populations being sampled, the accuracy of population stratification analyses of the data could be affected. We attempt to understand the effect of biased sampling on the accuracy of population structure analysis and individual ancestry recovery. We examined two commonly used methods for analyses of such datasets, ADMIXTURE and EIGENSOFT, and found that the accuracy of recovery of population structure is affected to a large extent by the sample used for analysis and how representative it is of the underlying populations. Using simulated data and real genotype data from cattle, we show that sample selection bias can affect the results of population structure analyses. We develop a mathematical framework for sample selection bias in models for population structure and also propose a correction for sample selection bias using auxiliary information about the sample. We demonstrate that such a correction is effective in practice using simulated and real data. PMID:24637351
Corrections for the geometric distortion of the tube detectors on SANS instruments at ORNL
He, Lilin; Do, Changwoo; Qian, Shuo; ...
2014-11-25
Small-angle neutron scattering instruments at the Oak Ridge National Laboratory's High Flux Isotope Reactor have had their area detectors upgraded from the large, single-volume crossed-wire detectors originally installed to staggered arrays of linear position-sensitive detectors (LPSDs). The specific geometry of the LPSD array requires that traditional approaches to data reduction be modified. Here, two methods for correcting the geometric distortion produced by the LPSD array are presented and compared. The first method applies a correction derived from a detector sensitivity measurement performed using the same configuration in which the samples are measured. In the second method, a solid angle correction is derived that can be applied to data collected in any instrument configuration during the data reduction process, in conjunction with a detector sensitivity measurement collected at a sufficiently long camera length where the geometric distortions are negligible. Both methods produce consistent results and yield a maximum deviation of corrected data from isotropic scattering samples of less than 5% for scattering angles up to a maximum of 35°. The results are broadly applicable to any SANS instrument employing LPSD array detectors, which will be increasingly common as instruments having higher incident flux are constructed at various neutron scattering facilities around the world.
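The LPSD-specific correction derived in the paper is not spelled out in this abstract. As a baseline, the sketch below applies the conventional flat-detector per-pixel solid-angle weighting (proportional to cos³θ) that such geometry-specific schemes refine; the function names and pixel geometry are assumptions for illustration.

```python
import numpy as np

def solid_angle(x_m, y_m, L_m, pixel_area_m2):
    """Solid angle (sr) subtended by a flat-detector pixel at offset (x, y),
    a distance L from the sample along the beam axis."""
    r = np.sqrt(x_m**2 + y_m**2 + L_m**2)
    cos_theta = L_m / r
    return pixel_area_m2 * cos_theta**3 / L_m**2

# Normalize counts per unit solid angle so oblique pixels are not biased low.
on_axis = solid_angle(0.0, 0.0, 18.0, 1e-4)
edge = solid_angle(0.5, 0.0, 18.0, 1e-4)
print(f"relative sensitivity of edge pixel: {edge / on_axis:.5f}")  # ~cos^3(theta)
```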
Observing System Simulations for Small Satellite Formations Estimating Bidirectional Reflectance
NASA Technical Reports Server (NTRS)
Nag, Sreeja; Gatebe, Charles K.; de Weck, Olivier
2015-01-01
The bidirectional reflectance distribution function (BRDF) gives the reflectance of a target as a function of illumination geometry and viewing geometry, hence carries information about the anisotropy of the surface. BRDF is needed in remote sensing for the correction of view and illumination angle effects (for example in image standardization and mosaicing), for deriving albedo, for land cover classification, for cloud detection, for atmospheric correction, and other applications. However, current spaceborne instruments provide sparse angular sampling of BRDF and airborne instruments are limited in spatial and temporal coverage. To fill the gaps in angular coverage within spatial, spectral and temporal requirements, we propose a new measurement technique: use of small satellites in formation flight, each satellite with a VNIR (visible and near infrared) imaging spectrometer, to make multi-spectral, near-simultaneous measurements of every ground spot in the swath at multiple angles. This paper describes an observing system simulation experiment (OSSE) to evaluate the proposed concept and select the optimal formation architecture that minimizes BRDF uncertainties. The variables of the OSSE are identified: number of satellites, measurement spread in the view zenith and relative azimuth with respect to the solar plane, solar zenith angle, BRDF models and wavelength of reflection. Analyzing the sensitivity of BRDF estimation errors to the variables allows simplification of the OSSE, enabling its use to rapidly evaluate formation architectures. A 6-satellite formation is shown to produce lower BRDF estimation errors, purely in terms of angular sampling as evaluated by the OSSE, than a single spacecraft with 9 forward-aft sensors. We demonstrate the ability to use OSSEs to design small satellite formations as complements to flagship mission data. The formations can fill angular sampling gaps and enable better BRDF products than currently possible.
Observing system simulations for small satellite formations estimating bidirectional reflectance
NASA Astrophysics Data System (ADS)
Nag, Sreeja; Gatebe, Charles K.; Weck, Olivier de
2015-12-01
The bidirectional reflectance distribution function (BRDF) gives the reflectance of a target as a function of illumination geometry and viewing geometry, hence carries information about the anisotropy of the surface. BRDF is needed in remote sensing for the correction of view and illumination angle effects (for example in image standardization and mosaicing), for deriving albedo, for land cover classification, for cloud detection, for atmospheric correction, and other applications. However, current spaceborne instruments provide sparse angular sampling of BRDF and airborne instruments are limited in spatial and temporal coverage. To fill the gaps in angular coverage within spatial, spectral and temporal requirements, we propose a new measurement technique: use of small satellites in formation flight, each satellite with a VNIR (visible and near infrared) imaging spectrometer, to make multi-spectral, near-simultaneous measurements of every ground spot in the swath at multiple angles. This paper describes an observing system simulation experiment (OSSE) to evaluate the proposed concept and select the optimal formation architecture that minimizes BRDF uncertainties. The variables of the OSSE are identified: number of satellites, measurement spread in the view zenith and relative azimuth with respect to the solar plane, solar zenith angle, BRDF models and wavelength of reflection. Analyzing the sensitivity of BRDF estimation errors to the variables allows simplification of the OSSE, enabling its use to rapidly evaluate formation architectures. A 6-satellite formation is shown to produce lower BRDF estimation errors, purely in terms of angular sampling as evaluated by the OSSE, than a single spacecraft with 9 forward-aft sensors. We demonstrate the ability to use OSSEs to design small satellite formations as complements to flagship mission data. The formations can fill angular sampling gaps and enable better BRDF products than currently possible.
TRIPPy: Trailed Image Photometry in Python
NASA Astrophysics Data System (ADS)
Fraser, Wesley; Alexandersen, Mike; Schwamb, Megan E.; Marsset, Michaël; Pike, Rosemary E.; Kavelaars, J. J.; Bannister, Michele T.; Benecchi, Susan; Delsanti, Audrey
2016-06-01
Photometry of moving sources typically suffers from a reduced signal-to-noise ratio (S/N) or from flux measurements biased to incorrectly low values through the use of circular apertures. To address this issue, we present the software package TRIPPy: TRailed Image Photometry in Python. TRIPPy introduces the pill aperture, which is the natural extension of the circular aperture appropriate for linearly trailed sources. The pill shape is a rectangle with two semicircular end-caps and is described by three parameters: the trail length and angle, and the radius. The TRIPPy software package also includes a new technique to generate accurate model point-spread functions (PSFs) and trailed PSFs (TSFs) from stationary background sources in sidereally tracked images. The TSF is simply the convolution of the model PSF (a Moffat profile plus a super-sampled lookup table) with the source's linear trail. From the TSF, accurate pill aperture corrections can be estimated as a function of pill radius with an accuracy of 10 mmag for highly trailed sources. Analogous to the use of small circular apertures and associated aperture corrections, small-radius pill apertures can be used to preserve the S/N of low-flux sources, with an appropriate aperture correction applied to provide an accurate, unbiased flux measurement at all S/Ns.
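The pill geometry itself is easy to state: a 2r-wide rectangle of the trail length capped by two semicircles. The sketch below computes the analytic aperture area and a boolean pixel mask from the three pill parameters; it is an independent geometric illustration, not TRIPPy's API.

```python
import numpy as np

def pill_area(radius, trail_length):
    """Area of a pill aperture: a 2r-wide rectangle of the trail length
    plus two semicircular end-caps (together one full circle)."""
    return 2.0 * radius * trail_length + np.pi * radius**2

def pill_mask(shape, x0, y0, radius, trail_length, angle_deg):
    """Boolean mask of a pill aperture centred on (x0, y0): each pixel's
    distance to the central trail segment is compared with the radius."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    t = np.deg2rad(angle_deg)
    # coordinates in the trail-aligned frame
    u = (xx - x0) * np.cos(t) + (yy - y0) * np.sin(t)
    v = -(xx - x0) * np.sin(t) + (yy - y0) * np.cos(t)
    u_clamped = np.clip(u, -trail_length / 2.0, trail_length / 2.0)
    return (u - u_clamped) ** 2 + v**2 <= radius**2

mask = pill_mask((64, 64), 32, 32, radius=5.0, trail_length=20.0, angle_deg=30.0)
print(mask.sum(), "pixels vs analytic area", round(pill_area(5.0, 20.0), 1))
```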
Occupational exposure decisions: can limited data interpretation training help improve accuracy?
Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul
2009-06-01
Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment and other programs designed to protect workers. A desktop study was performed using videos, task information and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered to participants, in which they were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants were given exposure data interpretation or "rule of thumb" training, which included a simple set of rules for estimating 95th percentiles for small data sets from a log-normal population. The DIT was given to each participant before and after the rule of thumb training. Results of each DIT and the qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments for all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DITs and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training. In addition, the rule of thumb training reduced the amount of bias in the DITs and quantitative judgments. The mean DIT % correct scores increased from 47 to 64% after the rule of thumb training (P < 0.001). The accuracy for quantitative desktop judgments increased from 43 to 63% correct after the rule of thumb training (P < 0.001). The rule of thumb training did not significantly impact accuracy for qualitative desktop judgments. The finding that even simple statistical rules of thumb significantly improve judgment accuracy suggests that hygienists need to routinely use statistical tools when making exposure judgments from monitoring data.
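The abstract does not spell out the exact rules taught; the textbook quantity such rules approximate is the lognormal 95th percentile, sketched below from a small sample. The data values are hypothetical.

```python
import math

def lognormal_p95(samples):
    """Point estimate of the 95th percentile of a lognormal exposure
    distribution from a small sample: exp(mean_log + 1.645 * sd_log)."""
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mean_log = sum(logs) / n
    sd_log = math.sqrt(sum((v - mean_log) ** 2 for v in logs) / (n - 1))
    return math.exp(mean_log + 1.645 * sd_log)

# hypothetical shift-long air samples, mg/m^3
print(round(lognormal_p95([0.12, 0.31, 0.22, 0.08, 0.19]), 3))
```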
NASA Astrophysics Data System (ADS)
Hatakeyama, Rokuro; Yoshizawa, Masazumi; Moriya, Tadashi
2000-11-01
Precise correction for γ-ray attenuation in skull bone has been a significant problem in obtaining quantitative single photon emission computed tomography (SPECT) images. The correction for γ-ray attenuation is approximately proportional to the density and thickness of the bone under investigation. If the acoustic impedance and the speed of sound in bone are measurable using ultrasonic techniques, then the density and thickness of the bone sample can be calculated. Whole bone usually consists of three layers, and each layer has a different ultrasonic character. Thus, the speed of sound must be measured in a small section of each layer in order to determine the overall density of whole bone. It is also important to measure the attenuation constant in order to determine the appropriate level for the ultrasonic input signal. We have developed a method for measuring the acoustic impedance, speed of sound, and attenuation constant in a small region of a bone sample using a fused quartz rod as a transmission line. In the present study, we obtained the following results for compact bone: impedance, 5.30 (±0.40) × 10⁶ kg/(m²·s); speed of sound, 3780 ± 250 m/s; and attenuation constant, 2.70 ± 0.50 Np/m. These results were used to obtain the densities of compact bone, spongy bone and bone marrow in a bovine bone sample, as well as the density of pig skull bone, which were found to be 1.40 ± 0.30 g/cm³, 1.19 ± 0.50 g/cm³, 0.90 ± 0.30 g/cm³ and 1.26 ± 0.30 g/cm³, respectively. Using a thin solid transmission line, the proposed method makes it possible to determine the density of a small region of a bone sample. It is expected that the proposed method, which is based on ultrasonic measurement, will be useful for application in brain SPECT.
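The density figures follow directly from the acoustic relation ρ = Z/c; a one-line check reproduces the compact-bone value quoted above.

```python
# Density from acoustic impedance and speed of sound (rho = Z / c),
# reproducing the compact-bone figure quoted in the abstract.
Z = 5.30e6        # acoustic impedance, kg/(m^2 s)
c = 3780.0        # speed of sound, m/s
rho = Z / c       # kg/m^3
print(f"{rho / 1000:.2f} g/cm^3")  # -> 1.40 g/cm^3
```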
Preterm newborns at Kangaroo Mother Care: a cohort follow-up from birth to six months
Menezes, Maria Alexsandra da S.; Garcia, Daniela Cavalcante; de Melo, Enaldo Vieira; Cipolotti, Rosana
2014-01-01
OBJECTIVE: To evaluate clinical outcomes, growth and exclusive breastfeeding rates in premature infants assisted by Kangaroo Mother Care at birth, at discharge and at six months of life. METHODS: Prospective study of a cohort of premature infants assisted by Kangaroo Mother Care in a tertiary public maternity hospital in Northeast Brazil, with birth weight ≤1750 g and clinical conditions suitable for Kangaroo care. RESULTS: The sample comprised 137 premature infants, 62.8% female, with an average birth weight of 1365±283 g and an average gestational age of 32±3 weeks; 26.2% were adequate for gestational age. They were admitted to the Kangaroo Ward at a median of 13 days of life, weighing 1430±167 g, and, at this time, 57.7% were classified as small for corrected gestational age. They were discharged at 36.8±21.8 days of chronological age, weighing 1780±165 g, and 67.9% were small for corrected gestational age. At six months of life (n=76), they had an average weight of 5954±971 g, and 68.4% presented corrected weight for gestational age between percentiles 15 and 85 of the World Health Organization (WHO) weight curve. The exclusive breastfeeding rate was 56.2% at discharge and 14.4% at six months of life. CONCLUSIONS: In the studied sample, almost two thirds of the children assisted by Kangaroo Mother Care were, at six months of life, between percentiles 15 and 85 of the WHO weight curves. The frequency of exclusive breastfeeding at six months was low. PMID:25119747
Apparatus and method for detecting gamma radiation
Sigg, R.A.
1994-12-13
A high efficiency radiation detector is disclosed for measuring X-ray and gamma radiation from small-volume, low-activity liquid samples with an overall uncertainty better than 0.7% (one sigma SD). The radiation detector includes a hyperpure germanium well detector, a collimator, and a reference source. The well detector monitors gamma radiation emitted by the reference source and a radioactive isotope or isotopes in a sample source. The radiation from the reference source is collimated to avoid attenuation of reference source gamma radiation by the sample. Signals from the well detector are processed and stored, and the stored data is analyzed to determine the radioactive isotope(s) content of the sample. Minor self-attenuation corrections are calculated from chemical composition data. 4 figures.
Li, Yulong; Zhang, Rui; Peng, Rongxue; Ding, Jiansheng; Han, Yanxi; Wang, Guojing; Zhang, Kuo; Lin, Guigao; Li, Jinming
2016-06-01
Currently, several approaches are being used to detect echinoderm microtubule associated protein like 4 gene (EML4)-anaplastic lymphoma receptor tyrosine kinase gene (ALK) rearrangement, but the performance of laboratories in China is unknown. To evaluate the proficiency of different laboratories in detecting EML4-ALK rearrangement, we organized a proficiency test (PT). We prepared formalin-fixed, paraffin-embedded samples derived from the xenograft tumor tissue of three non-small cell lung cancer cell lines with different EML4-ALK rearrangements and used PTs to evaluate the detection performance of laboratories in China. We received results from 94 laboratories that used different methods. Of the participants, 75.53% correctly identified all samples in the PT panel. Among the errors made by participants, false-negative errors were likely to occur. According to the methodology applied, 82.86%, 76.67%, 77.78%, and 66.67% of laboratories using reverse transcriptase polymerase chain reaction, fluorescence in situ hybridization, next-generation sequencing, and immunohistochemical analysis, respectively, could analyze all the samples correctly. Moreover, we have found that the laboratories' genotyping capacity is high, especially for variant 3. Our PT survey revealed that the performance and methodological problems of laboratories must be addressed to further increase the reproducibility and accuracy of detection of EML4-ALK rearrangement to ensure reliable results for selection of appropriate patients. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-05
... Conservation Program: Energy Conservation Standards for Small Electric Motors; Correction AGENCY: Office of... standards for small electric motors, which was published on March 9, 2010. In that final rule, the U.S... titled ``Energy Conservation Standards for Small Electric Motors.'' 75 FR 10874. Since the publication of...
How to justify small-refinery info/control system modernization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haskins, D.E.
1993-05-01
Information and control systems modernization can be justified by successful implementation of advanced process control (APC) in nearly all refineries, even the small ones. However, the small refineries require special solutions to meet the challenges of limited resources in both finance and manpower. Based on a number of case studies, a typical small refinery as it operates today is described. A sample information and control system modernization plan is described, and the typical costs and benefits show how the project cost can be justified. Business objectives of an HPI plant are to satisfy customers by providing specific products, to satisfy the owners by maximizing profits and to satisfy the public by being safe and environmentally correct. Managers have always tried to meet these objectives with functions for the total plant.
High-temperature calibration of a multi-anvil high pressure apparatus
NASA Astrophysics Data System (ADS)
Sokol, Alexander G.; Borzdov, Yury M.; Palyanov, Yury N.; Khokhryakov, Alexander F.
2015-04-01
Fusion and solidification of Al and Ag samples, as well as Fe93-Al3-C4, Fe56-Co37-Al3-C4, and Fe57.5-Co38-Al1-Pb0.5-C3 alloys (in wt%), have been investigated at 6.3 GPa. Heater power jumps due to heat consumption and release on metal fusion and solidification, respectively, were used to calibrate the thermal electromotive force of the thermocouple against the melting points (mp) of Ag and Al. The corrections thus obtained are +100°C (for the sample periphery) and +65°C (center) within the 1070-1320°C range. For small samples positioned randomly in the low-gradient zone of a high pressure cell, the corrections should be +80°C and +84°C at 1070°C and 1320°C, respectively. The temperature contrast recorded in the low-gradient cell zone gives an error of about ±17°C. The method has been applied to identify the melting points of these systems, which is especially important for temperature-gradient growth of large type IIa synthetic diamonds.
Schwertner, M; Booth, M J; Neil, M A A; Wilson, T
2004-01-01
Confocal or multiphoton microscopes, which deliver optical sections and three-dimensional (3D) images of thick specimens, are widely used in biology. These techniques, however, are sensitive to aberrations that may originate from the refractive index structure of the specimen itself. The aberrations cause reduced signal intensity, and the 3D resolution of the instrument is compromised. It has been suggested that aberrations in confocal microscopes be corrected using adaptive optics. In order to define the design specifications for such adaptive optics systems, one has to know the amount of aberration present in typical applications, such as imaging of biological samples. We have built a phase-stepping interferometer microscope that directly measures the aberration of the wavefront. The modal content of the wavefront is extracted by employing Zernike mode decomposition. Results for typical biological specimens are presented. It was found for all samples investigated that higher-order Zernike modes give only a small contribution to the overall aberration. Therefore, these higher-order modes can be neglected in future adaptive optics sensing and correction schemes implemented in confocal or multiphoton microscopes, leading to more efficient designs.
Performance of SMARTer at Very Low Scattering Vector q-Range Revealed by Monodisperse Nanoparticles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putra, E. Giri Rachman; Ikram, A.; Bharoto
2008-03-17
A monodisperse nanoparticle sample of polystyrene has been employed to determine the performance of the 36 m small-angle neutron scattering (SANS) BATAN spectrometer (SMARTer) at the Neutron Scattering Laboratory (NSL), Serpong, Indonesia, in a very low scattering vector q-range. A detector position 18 m from the sample, a beam stopper of 50 mm in diameter, a neutron wavelength of 5.66 Å, and an 18 m-long collimator were set up to achieve the very low scattering vector q-range of SMARTer. A polydisperse smeared-spherical particle model was applied to fit the corrected small-angle scattering data of the monodisperse polystyrene nanoparticle sample. A mean particle radius of 610 Å, a volume fraction of 0.0026, and a polydispersity of 0.1 were obtained from the fits. The experimental results from SMARTer are comparable to those from SANS-J (JAEA, Japan), showing that SMARTer can reach scattering vectors as low as 0.002 Å⁻¹.
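The fitted model is built on the homogeneous-sphere form factor; a minimal monodisperse version is sketched below. The reported analysis additionally smears this kernel over a size distribution and the instrument resolution, which is not reproduced here.

```python
import numpy as np

def sphere_form_factor(q, radius):
    """Normalized scattering form factor P(q) of a homogeneous sphere."""
    qr = np.asarray(q) * radius
    amp = 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3
    return amp**2

q = np.linspace(0.002, 0.05, 200)   # 1/Angstrom, spanning SMARTer's low-q range
print(sphere_form_factor(q, radius=610.0)[:3])
```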
Serrano-Fernandez, Pablo; Dymerska, Dagmara; Kurzawski, Grzegorz; Derkacz, Róża; Sobieszczańska, Tatiana; Banaszkiewicz, Zbigniew; Roomere, Hanno; Oitmaa, Eneli; Metspalu, Andres; Janavičius, Ramūnas; Elsakov, Pavel; Razumas, Mindaugas; Petrulis, Kestutis; Irmejs, Arvīds; Miklaševičs, Edvīns; Scott, Rodney J.; Lubiński, Jan
2015-01-01
The continued identification of new low-penetrance genetic variants for colorectal cancer (CRC) raises the question of their potential cumulative effect among compound carriers. We focused on 6 SNPs (rs380284, rs4464148, rs4779584, rs4939827, rs6983267, and rs10795668), already described as risk markers, and tested their possible independent and combined contribution to CRC predisposition. Material and Methods. DNA was collected and genotyped from 2330 unselected consecutive CRC cases and controls from Estonia (166 cases and controls), Latvia (81 cases and controls), Lithuania (123 cases and controls), and Poland (795 cases and controls). Results. Beyond individual effects, the analysis revealed statistically significant linear cumulative effects for these 6 markers in all samples except the Latvian one (corrected P value = 0.018 for the Estonian, corrected P value = 0.0034 for the Lithuanian, and corrected P value = 0.0076 for the Polish sample). Conclusions. The significant linear cumulative effects demonstrated here support the idea of using sets of low-risk markers for delimiting new groups with high risk of CRC in clinical practice that are not carriers of the usual CRC high-risk markers. PMID:26101521
Serrano-Fernandez, Pablo; Dymerska, Dagmara; Kurzawski, Grzegorz; Derkacz, Róża; Sobieszczańska, Tatiana; Banaszkiewicz, Zbigniew; Roomere, Hanno; Oitmaa, Eneli; Metspalu, Andres; Janavičius, Ramūnas; Elsakov, Pavel; Razumas, Mindaugas; Petrulis, Kestutis; Irmejs, Arvīds; Miklaševičs, Edvīns; Scott, Rodney J; Lubiński, Jan
2015-01-01
The continued identification of new low-penetrance genetic variants for colorectal cancer (CRC) raises the question of their potential cumulative effect among compound carriers. We focused on 6 SNPs (rs380284, rs4464148, rs4779584, rs4939827, rs6983267, and rs10795668), already described as risk markers, and tested their possible independent and combined contribution to CRC predisposition. Material and Methods. DNA was collected and genotyped from 2330 unselected consecutive CRC cases and controls from Estonia (166 cases and controls), Latvia (81 cases and controls), Lithuania (123 cases and controls), and Poland (795 cases and controls). Results. Beyond individual effects, the analysis revealed statistically significant linear cumulative effects for these 6 markers in all samples except the Latvian one (corrected P value = 0.018 for the Estonian, corrected P value = 0.0034 for the Lithuanian, and corrected P value = 0.0076 for the Polish sample). Conclusions. The significant linear cumulative effects demonstrated here support the idea of using sets of low-risk markers for delimiting new groups with high risk of CRC in clinical practice that are not carriers of the usual CRC high-risk markers.
Qian, Yishan; Huang, Jia; Zhou, Xingtao; Wang, Yutung
2015-11-01
To compare the efficacy of correcting myopic astigmatism with femtosecond laser small-incision lenticule extraction (SMILE, Carl Zeiss Meditec AG) versus laser-assisted subepithelial keratectomy (LASEK). The study was conducted at the Ophthalmology Department, Eye and ENT Hospital, Shanghai, China. A retrospective, cross-sectional study. This study included patients who underwent small-incision lenticule extraction or LASEK for the correction of myopia and myopic astigmatism. Preoperative and 6-month postoperative astigmatism values were analyzed. The efficacies of the 2 surgeries to correct astigmatism were compared. A total of 180 right eyes of 180 patients (small-incision lenticule extraction: n = 113, LASEK: n = 67) were included. No significant difference was found between the 2 groups in the preoperative astigmatism (small-incision lenticule extraction: 1.16 ± 0.85D, LASEK: 1.16 ± 0.83D, P > .05) or the postoperative astigmatism (small-incision lenticule extraction: 0.35 ± 0.37D; LASEK: 0.31 ± 0.42D, P > .05), determined by manifest refraction. No significant difference was found between the 2 groups in surgically induced astigmatism vector (small-incision lenticule extraction: 1.13 ± 0.83D, LASEK: 1.01 ± 0.65D, P > .05). The correction index was higher for the small-incision lenticule extraction group (1.05 ± 0.53) than for the LASEK group (0.95 ± 0.21, P = .045). The postoperative astigmatism was significantly higher for the small-incision lenticule extraction group when the preoperative astigmatism was 1.0 D or less (small-incision lenticule extraction: 0.26 ± 0.30D, LASEK: 0.12 ± 0.20D, P = .007) and lower for the small-incision lenticule extraction group when the preoperative astigmatism was more than 2.0 D (small-incision lenticule extraction: 0.48 ± 0.37D, LASEK: 0.89 ± 0.46D, P = .002). An adjustment of nomograms for correcting low astigmatism (≤1.0 D) by small-incision lenticule extraction is suggested due to the tendency toward overcorrection, whereas a nomogram adjustment for tissue-saving ablation profile is needed for the correction of high astigmatism (>2.0 D) by LASEK due to the tendency toward undercorrection. The authors declare that they have no competing financial interests. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
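The astigmatism comparisons above rest on standard vector analysis: a cylinder (magnitude, axis) maps to a double-angle vector, and the surgically induced astigmatism (SIA) is the vector difference of the postoperative and preoperative states. Below is a sketch of this generic Alpins-style calculation with hypothetical values; it is not the authors' code.

```python
import numpy as np

def to_double_angle(cyl, axis_deg):
    """Map a cylinder (magnitude, axis) to its double-angle vector."""
    t = np.deg2rad(2.0 * axis_deg)
    return np.array([cyl * np.cos(t), cyl * np.sin(t)])

def surgically_induced_astigmatism(pre_cyl, pre_axis, post_cyl, post_axis):
    """Magnitude of the SIA vector (difference of double-angle vectors)."""
    return float(np.linalg.norm(
        to_double_angle(post_cyl, post_axis) - to_double_angle(pre_cyl, pre_axis)))

# hypothetical eye: 1.25 D x 180 before surgery, 0.25 D x 95 after
sia = surgically_induced_astigmatism(1.25, 180, 0.25, 95)
tia = 1.25  # target induced astigmatism when aiming for zero cylinder
print(f"SIA = {sia:.2f} D, correction index = {sia / tia:.2f}")
```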
Meteorite heat capacities: Results to date
NASA Astrophysics Data System (ADS)
Consolmagno, G.; Macke, R.; Britt, D.
2014-07-01
Heat capacity is an essential thermal property for modeling asteroid internal metamorphism or differentiation, and dynamical effects like YORP or Yarkovsky perturbations. We have developed a rapid, inexpensive, and non-destructive method for measuring the heat capacity of meteorites at low temperature [1]. A sample is introduced into a dewar of liquid nitrogen and an electronic scale measures the amount of nitrogen boiled away as the sample is cooled from room temperature to the liquid nitrogen temperature; given the heat of vaporization of liquid nitrogen, one can then calculate the heat lost from the sample during the cooling process. Note that heat capacity in this temperature range is a strong function of temperature, but this functional relation is essentially the same for all materials; the values we determine are equivalent to the heat capacity of the sample at 175 K. To correct for systematic errors, samples of laboratory-grade quartz are measured along with the meteorite samples. To date, more than 70 samples of more than 50 different meteorites have been measured in this way, including ordinary chondrites [1], irons [2], basaltic achondrites [3], and a limited number of carbonaceous chondrites [1]. In general, one can draw a number of important conclusions from these results. First, the heat capacity of a meteorite is a function of its mineral composition, independent of shock, metamorphism, or other physical state. Second, given this relation, heat capacity can be strongly altered by terrestrial weathering. Third, the measurement of heat capacity in small (less than 1 g) samples, as typically done by commercial systems, runs a serious risk of giving misleading results for samples that are heterogeneous on scales of tens of grams or more. Finally, we demonstrate that heat capacity is a useful tool for determining and classifying a sample, especially if used in conjunction with other intrinsic variables such as grain density and magnetic susceptibility. We will present an updated list of our results, incorporating our latest corrections for a variety of small but measurable systematic errors, and new results for meteorites and meteorite types not previously measured or reported.
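The measurement principle reduces to a heat balance: nitrogen boil-off mass times latent heat equals sample mass times mean heat capacity times the temperature drop. A sketch with textbook constants follows; the latent heat, temperatures, and masses are illustrative assumptions, not the authors' calibration values.

```python
# Average heat capacity from liquid-nitrogen boil-off calorimetry.
L_VAP_N2 = 199.0           # J per g of nitrogen vaporized (textbook value)
T_ROOM, T_LN2 = 293.0, 77.0  # K

def mean_heat_capacity(boiloff_g, sample_g):
    heat_removed = boiloff_g * L_VAP_N2                   # J given up by the sample
    return heat_removed / (sample_g * (T_ROOM - T_LN2))   # J/(g K)

# This average over the cooling range maps to cp at roughly 175 K,
# as noted in the abstract.
print(round(mean_heat_capacity(boiloff_g=9.8, sample_g=20.0), 3))  # ~0.45 J/(g K)
```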
Transfer of SIMNET Training in the Armor Officer Basic Course
1991-01-01
group correctly performed more tasks in the posttest, but the difference was not statistically significant for these small samples. Gains from pretest ... to posttest were not compared statistically, but the field-trained group showed little average gain. Based on these results and other supporting data ... that serve as a control group, and (b) SIMNET classes after the change that serve as a treatment group. The comparison is termed quasi-experimental
Small refractive errors--their correction and practical importance.
Skrbek, Matej; Petrová, Sylvie
2013-04-01
Small refractive errors present a group of specific far-sighted refractive dispositions that are compensated by increased accommodative effort and are not manifested as a loss of visual acuity. This paper addresses several questions about their correction, following from the theoretical presumptions and expectations surrounding this dilemma. The main goal of this research was to confirm or refute hypotheses about the convenience, efficiency and frequency of corrections that do not raise visual acuity (or whose improvement is not noticeable). The next goal was to examine the connection between this correction and other factors (age, size of the refractive error, etc.). The last aim was to describe the subjective personal rating of the correction of these small refractive errors, and to determine the minimal improvement of visual acuity that is attractive enough for the client to purchase the correction (glasses, contact lenses). It was confirmed that there is an appreciable group of subjects with good visual acuity in whom the correction is applicable even though it does not improve visual acuity much; its main value is in eliminating asthenopia. The prime reason for accepting the correction typically changes over the course of life as accommodation declines. Young people prefer the correction on the grounds of asthenopia caused by a small refractive error or latent strabismus; older people acquire the correction for the improvement of visual acuity. Overall, the correction was found useful in more than 30% of cases if the gain in visual acuity was at least 0.3 on the decimal scale.
Real-time fMRI processing with physiological noise correction - Comparison with off-line analysis.
Misaki, Masaya; Barzigar, Nafise; Zotev, Vadim; Phillips, Raquel; Cheng, Samuel; Bodurka, Jerzy
2015-12-30
While applications of real-time functional magnetic resonance imaging (rtfMRI) are growing rapidly, there are still limitations in real-time data processing compared to off-line analysis. We developed a proof-of-concept real-time fMRI processing (rtfMRIp) system utilizing a personal computer (PC) with a dedicated graphics processing unit (GPU) to demonstrate that it is now possible to perform intensive whole-brain fMRI data processing in real-time. The rtfMRIp performs slice-timing correction, motion correction, spatial smoothing, signal scaling, and general linear model (GLM) analysis with multiple noise regressors, including physiological noise modeled with cardiac (RETROICOR) and respiration volume per time (RVT) regressors. The whole-brain data analysis, with more than 100,000 voxels and more than 250 volumes, is completed in less than 300 ms, much faster than the time required to acquire an fMRI volume. Real-time processing implementation cannot be identical to off-line analysis when time-course information is used, such as in slice-timing correction, signal scaling, and GLM. We verified that the reduced slice-timing correction for real-time analysis had output comparable to off-line analysis. The real-time GLM analysis, however, showed over-fitting when the number of sampled volumes was small. Our system implemented real-time RETROICOR and RVT physiological noise corrections for the first time, and it is capable of processing these steps on all available data at a given time, without the need for recursive algorithms. Comprehensive data processing in rtfMRI is possible with a PC, although the number of samples should be considered in real-time GLM. Copyright © 2015 Elsevier B.V. All rights reserved.
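At its core, the GLM step is one least-squares solve across all voxels per volume update; a toy numpy version with a task regressor plus nuisance (motion/physiological-style) regressors is sketched below. All shapes, regressors, and names are illustrative assumptions, not the authors' GPU implementation.

```python
import numpy as np

# Minimal GLM sketch: fit a task regressor plus nuisance regressors
# (motion parameters, RETROICOR/RVT-style physiological terms) by least squares.
rng = np.random.default_rng(1)
n_vol, n_vox = 250, 1000
task = (np.arange(n_vol) % 40 < 20).astype(float)   # toy block design
nuisance = rng.normal(size=(n_vol, 8))              # stand-in motion + physio regressors
X = np.column_stack([np.ones(n_vol), task, nuisance])

Y = rng.normal(size=(n_vol, n_vox))                 # toy voxel time series
Y[:, :100] += 0.5 * task[:, None]                   # activation in the first 100 voxels

beta, *_ = np.linalg.lstsq(X, Y, rcond=None)        # one solve, all voxels at once
print("mean task beta, active vs. null voxels:",
      beta[1, :100].mean().round(2), beta[1, 100:].mean().round(2))
```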
Xia, Yan; Li, Ming; Kučerka, Norbert; Li, Shutao; Nieh, Mu-Ping
2015-02-01
We have designed and constructed a temperature-controllable shear flow cell for in situ studies of flow-alignable systems. The device has been tested in neutron diffraction and has the potential to be applied in the small angle neutron scattering configuration to characterize the nanostructures of materials under flow. The required sample amount is as small as 1 ml. The shear rate on the sample is controlled by the flow rate produced by an external pump and can potentially vary from 0.11 to 3.8 × 10⁵ s⁻¹. Both unidirectional and oscillational flows are achievable by the setting of the pump. The instrument is validated using a lipid bicellar mixture, which yields non-alignable nanodisc-like bicelles at low T and shear-alignable membranes at high T. Using the shear cell, the bicellar membranes can be aligned at 31 °C under flow with a shear rate of 11.11 s⁻¹. Multiple high-order Bragg peaks are observed, and the full width at half maximum of the "rocking curve" around the Bragg condition is found to be 3.5°-4.1°. It is noteworthy that a portion of the membranes remains aligned even after the flow stops. A detailed and comprehensive intensity correction for the rocking curve has been derived based on the finite rectangular sample geometry and the absorption of the neutrons as a function of sample angle [see supplementary material at http://dx.doi.org/10.1063/1.4908165 for the detailed derivation of the absorption correction]. The device offers a new capability to study the conformational or orientational anisotropy of solvated macromolecules or aggregates induced by hydrodynamic interaction in a flow field.
40 CFR 1065.690 - Buoyancy correction for PM sample media.
Code of Federal Regulations, 2014 CFR
2014-07-01
... media. 1065.690 Section 1065.690 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Buoyancy correction for PM sample media. (a) General. Correct PM sample media for their buoyancy in air if you weigh them on a balance. The buoyancy correction depends on the sample media density, the density...
40 CFR 1065.690 - Buoyancy correction for PM sample media.
Code of Federal Regulations, 2011 CFR
2011-07-01
... media. 1065.690 Section 1065.690 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Buoyancy correction for PM sample media. (a) General. Correct PM sample media for their buoyancy in air if you weigh them on a balance. The buoyancy correction depends on the sample media density, the density...
40 CFR 1065.690 - Buoyancy correction for PM sample media.
Code of Federal Regulations, 2012 CFR
2012-07-01
... media. 1065.690 Section 1065.690 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Buoyancy correction for PM sample media. (a) General. Correct PM sample media for their buoyancy in air if you weigh them on a balance. The buoyancy correction depends on the sample media density, the density...
40 CFR 1065.690 - Buoyancy correction for PM sample media.
Code of Federal Regulations, 2013 CFR
2013-07-01
... media. 1065.690 Section 1065.690 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Buoyancy correction for PM sample media. (a) General. Correct PM sample media for their buoyancy in air if you weigh them on a balance. The buoyancy correction depends on the sample media density, the density...
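The correction cited in the entries above has the standard buoyancy form: the weighed mass is scaled by the ratio of air-buoyancy factors for the calibration weight and the sample media. A sketch follows with illustrative densities; the regulatory default values should be taken from §1065.690 itself.

```python
# Buoyancy correction for PM filter weighing, following the general form in
# 40 CFR 1065.690. Density values below are illustrative assumptions,
# not the regulatory defaults.

def buoyancy_corrected_mass(m_uncorrected_mg,
                            rho_air=1.18,       # kg/m^3, lab air
                            rho_weight=8000.0,  # kg/m^3, steel calibration mass
                            rho_media=2140.0):  # kg/m^3, e.g. PTFE-like media
    return m_uncorrected_mg * (1 - rho_air / rho_weight) / (1 - rho_air / rho_media)

print(f"{buoyancy_corrected_mass(100.000):.3f} mg")  # slightly above 100.000
```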
Evaluation of wastewater contaminant transport in surface waters using verified Lagrangian sampling
Antweiler, Ronald C.; Writer, Jeffrey H.; Murphy, Sheila F.
2014-01-01
Contaminants released from wastewater treatment plants can persist in surface waters for substantial distances. Much research has gone into evaluating the fate and transport of these contaminants, but this work has often assumed constant flow from wastewater treatment plants. However, effluent discharge commonly varies widely over a 24-hour period, and this variation controls contaminant loading and can profoundly influence interpretations of environmental data. We show that methodologies relying on the normalization of downstream data to conservative elements can give spurious results, and should not be used unless it can be verified that the same parcel of water was sampled. Lagrangian sampling, which in theory samples the same water parcel as it moves downstream (the Lagrangian parcel), links hydrologic and chemical transformation processes so that the in-stream fate of wastewater contaminants can be quantitatively evaluated. However, precise Lagrangian sampling is difficult, and small deviations – such as missing the Lagrangian parcel by less than 1 h – can cause large differences in measured concentrations of all dissolved compounds at downstream sites, leading to erroneous conclusions regarding in-stream processes controlling the fate and transport of wastewater contaminants. Therefore, we have developed a method termed “verified Lagrangian” sampling, which can be used to determine if the Lagrangian parcel was actually sampled, and if it was not, a means for correcting the data to reflect the concentrations which would have been obtained had the Lagrangian parcel been sampled. To apply the method, it is necessary to have concentration data for a number of conservative constituents from the upstream, effluent, and downstream sites, along with upstream and effluent concentrations that are constant over the short-term (typically 2–4 h). These corrections can subsequently be applied to all data, including non-conservative constituents. Finally, we show how data from other studies can be corrected.
Evaluation of wastewater contaminant transport in surface waters using verified Lagrangian sampling.
Antweiler, Ronald C; Writer, Jeffrey H; Murphy, Sheila F
2014-02-01
Contaminants released from wastewater treatment plants can persist in surface waters for substantial distances. Much research has gone into evaluating the fate and transport of these contaminants, but this work has often assumed constant flow from wastewater treatment plants. However, effluent discharge commonly varies widely over a 24-hour period, and this variation controls contaminant loading and can profoundly influence interpretations of environmental data. We show that methodologies relying on the normalization of downstream data to conservative elements can give spurious results, and should not be used unless it can be verified that the same parcel of water was sampled. Lagrangian sampling, which in theory samples the same water parcel as it moves downstream (the Lagrangian parcel), links hydrologic and chemical transformation processes so that the in-stream fate of wastewater contaminants can be quantitatively evaluated. However, precise Lagrangian sampling is difficult, and small deviations - such as missing the Lagrangian parcel by less than 1 h - can cause large differences in measured concentrations of all dissolved compounds at downstream sites, leading to erroneous conclusions regarding in-stream processes controlling the fate and transport of wastewater contaminants. Therefore, we have developed a method termed "verified Lagrangian" sampling, which can be used to determine if the Lagrangian parcel was actually sampled, and if it was not, a means for correcting the data to reflect the concentrations which would have been obtained had the Lagrangian parcel been sampled. To apply the method, it is necessary to have concentration data for a number of conservative constituents from the upstream, effluent, and downstream sites, along with upstream and effluent concentrations that are constant over the short-term (typically 2-4 h). These corrections can subsequently be applied to all data, including non-conservative constituents. Finally, we show how data from other studies can be corrected. © 2013.
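One plausible reading of the verification step, sketched as a mixing balance on conservative constituents: if the downstream sample reproduces the flow-weighted upstream/effluent mixture for conservative species, the Lagrangian parcel was sampled; otherwise the mismatch defines a correction factor applied to all constituents. The formulation and every value below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def mixing_prediction(c_up, c_eff, q_up, q_eff):
    """Flow-weighted downstream concentration expected from simple mixing."""
    return (np.asarray(c_up) * q_up + np.asarray(c_eff) * q_eff) / (q_up + q_eff)

# conservative constituents (e.g., Cl, B, Na), mg/L -- hypothetical values
c_up, c_eff = np.array([12.0, 0.05, 20.0]), np.array([95.0, 0.40, 120.0])
q_up, q_eff = 0.8, 0.2                       # discharges, m^3/s
c_down_obs = np.array([26.5, 0.11, 38.0])    # observed downstream sample

factor = np.mean(mixing_prediction(c_up, c_eff, q_up, q_eff) / c_down_obs)
print(f"correction factor applied to all constituents: {factor:.3f}")
```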
Correction of small imperfections on white glazed china surfaces by laser radiation
NASA Astrophysics Data System (ADS)
Képíró, I.; Osvay, K.; Divall, M.
2007-07-01
A laser-assisted technique has been developed for the correction of small-diameter (1 mm), shallow (0.5 mm) imperfections on the surface of gloss-fired porcelain. To study the physics and establish the important parameters, artificially made holes in a porcelain sample were first filled with correction material, then covered with raw glaze and treated with a pulsed, 7 kHz repetition rate CO₂ laser at 10.6 μm. The modification of the surface and the surrounding area has been quantified and studied over a large range of parameters: incident laser power (1-10 W), laser pulse width (10-125 μs) and duration of laser heating (60-480 s). Although the shine of the treated area, defined by the distribution of micro-droplets on the surface, is very similar to that of untreated surfaces, the surroundings of the treated area usually show cracks. Measurement of both the spatial temperature distribution and the temporal cooling rate of the treated surface has revealed that a simple melting process always results in a high-gradient temperature distribution within the irradiated zone. Its inhomogeneous and fast cooling always generates at least micro-cracks on the surface within a few seconds after the laser is turned off. The duration and intensity of the laser irradiation have therefore been optimized to achieve the fastest possible melting of the surface without producing such high temperature gradients. To eliminate the cracks, more elaborate pre-heating and slowed-cooling processes have been tried, with promising results. These achievements complete our previous study, making it possible to repair the most common surface imperfections and holes of gloss-fired china samples.
Precise Th/U-dating of small and heavily coated samples of deep sea corals
NASA Astrophysics Data System (ADS)
Lomitschka, Michael; Mangini, Augusto
1999-07-01
Marine carbonate skeletons like deep-sea corals are frequently coated with iron and manganese oxides/hydroxides, which adsorb additional thorium and uranium out of the sea water. A new cleaning procedure has been developed to reduce this contamination. In this further cleaning step, a solution of Na₂EDTA and ascorbic acid is used whose composition is optimised especially for samples of 20 mg weight. It was first tested on aliquots of a reef-building coral which had been artificially contaminated with powdered ferromanganese nodule. Applied to heavily contaminated deep-sea corals (scleractinia), it reduced excess ²³⁰Th by another order of magnitude beyond usual cleaning procedures. The measurement of at least three fractions of different contamination, together with an additional standard correction for contaminated carbonates, results in Th/U-ages corrected for the authigenic component. Good agreement between Th/U- and ¹⁴C-ages can be achieved even for extremely coated corals.
Berglund, Lars; Garmo, Hans; Lindbäck, Johan; Svärdsudd, Kurt; Zethelius, Björn
2008-09-30
The least-squares estimator of the slope in a simple linear regression model is biased towards zero when the predictor is measured with random error. A corrected slope may be estimated by adding data from a reliability study, which comprises a subset of subjects from the main study. The precision of this corrected slope depends on the design of the reliability study and the choice of estimator. Previous work has assumed that the reliability study constitutes a random sample from the main study. A more efficient design is to use subjects with extreme values on their first measurement. Previously, we published a variance formula for the corrected slope when the correction factor is the slope in the regression of the second measurement on the first. In this paper we show that both designs are improved by maximum likelihood estimation (MLE). The precision gain is explained by the inclusion of data from all subjects for estimation of the predictor's variance and by the use of the second measurement for estimation of the covariance between response and predictor. The gain from MLE increases with a stronger true relationship between response and predictor and with lower precision in the predictor measurements. We present a real data example on the relationship between fasting insulin, a surrogate marker, and true insulin sensitivity measured by a gold-standard euglycaemic insulin clamp, and simulations, where the behavior of profile-likelihood-based confidence intervals is examined. MLE was shown to be a robust estimator for non-normal distributions and efficient for small sample situations. Copyright (c) 2008 John Wiley & Sons, Ltd.
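The simpler estimator mentioned above (dividing the naive slope by the slope of the second measurement regressed on the first) can be demonstrated on simulated data; this sketch illustrates that correction only and is not the authors' MLE, which pools further information and is more efficient.

```python
import numpy as np

# Correcting regression dilution with a reliability study.
rng = np.random.default_rng(7)
n, n_rel = 500, 100
x_true = rng.normal(0, 1, n)
y = 2.0 * x_true + rng.normal(0, 1, n)           # true slope = 2
x1 = x_true + rng.normal(0, 0.5, n)              # first error-prone measurement

naive = np.polyfit(x1, y, 1)[0]                  # biased towards zero

x2 = x_true[:n_rel] + rng.normal(0, 0.5, n_rel)  # replicate in reliability study
reliability = np.polyfit(x1[:n_rel], x2, 1)[0]   # estimates the reliability ratio

print(f"naive {naive:.2f}, corrected {naive / reliability:.2f} (true 2.00)")
```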
40 CFR 1065.690 - Buoyancy correction for PM sample media.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Buoyancy correction for PM sample media. (a) General. Correct PM sample media for their buoyancy in air if you weigh them on a balance. The buoyancy correction depends on the sample media density, the density of air, and the density of the calibration weight used to calibrate the balance. The buoyancy...
Frye, M A; Nassan, M; Jenkins, G D; Kung, S; Veldic, M; Palmer, B A; Feeder, S E; Tye, S J; Choi, D S; Biernacka, J M
2015-01-01
The objective of this study was to determine whether proteomic profiling in serum samples can be utilized in identifying and differentiating mood disorders. A consecutive sample of patients with a confirmed diagnosis of unipolar (UP n=52) or bipolar depression (BP-I n=46, BP-II n=49) and controls (n=141) were recruited. A 7.5-ml blood sample was drawn for proteomic multiplex profiling of 320 proteins utilizing the Myriad RBM Discovery Multi-Analyte Profiling platform. After correcting for multiple testing and adjusting for covariates, growth differentiation factor 15 (GDF-15), hemopexin (HPX), hepsin (HPN), matrix metalloproteinase-7 (MMP-7), retinol-binding protein 4 (RBP-4) and transthyretin (TTR) all showed statistically significant differences among groups. In a series of three post hoc analyses correcting for multiple testing, MMP-7 was significantly different in mood disorder (BP-I+BP-II+UP) vs controls; MMP-7, GDF-15 and HPN were significantly different in bipolar cases (BP-I+BP-II) vs controls; and GDF-15, HPX, HPN, RBP-4 and TTR were all significantly different in BP-I vs controls. Good diagnostic accuracy (ROC-AUC ≥ 0.8) was obtained, most notably for GDF-15, RBP-4 and TTR, when comparing BP-I vs controls. While based on a small sample not adjusted for medication state, this discovery sample analyzed with a conservative method of correction suggests the feasibility of using proteomic panels to assist in identifying and distinguishing mood disorders, in particular bipolar I disorder. Replication studies for confirmation, consideration of state vs trait serial assays to delineate proteomic expression of bipolar depression vs previous mania, and utility studies to assess proteomic expression profiling as an advanced decision-making tool or companion diagnostic are encouraged. PMID:26645624
NASA Astrophysics Data System (ADS)
Roether, Wolfgang; Vogt, Martin; Vogel, Sandra; Sültenfuß, Jürgen
2013-06-01
We present a new method of obtaining samples for the measurement of helium isotopes and neon in water, to replace the classical sampling procedure using clamped-off Cu-tubing containers that we have used so far. The new method eliminates the gas extraction step prior to admission to the mass spectrometer that the classical method requires. Water is drawn into evacuated glass ampoules, which are then flame-sealed. Approximately 50% headspace is left, from which admission into the mass spectrometer occurs without further treatment. Extensive testing has shown that, with due care and with small corrections applied, the samples represent the gas concentrations in the water within ±0.07% (95% confidence level; ±0.05% with special handling). Fast evacuation is achieved by pumping on a small charge of water placed in the ampoule. The new method was successfully tested at sea in comparison with Cu-tubing sampling. We found that the ampoule samples were superior in data precision and that a lower percentage of samples were lost prior to measurement. Further measurements revealed agreement between the two methods in helium, ³He and neon within ±0.1%. The new method facilitates dealing with large sample sets and minimizes the delay between sampling and measurement. The method is also applicable to gases other than helium and neon.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scolnic, D.; Kessler, R.
Simulations of Type Ia supernova (SN Ia) surveys are a critical tool for correcting biases in the analysis of SNe Ia to infer cosmological parameters. Large-scale Monte Carlo simulations include a thorough treatment of observation history, measurement noise, intrinsic scatter models, and selection effects. In this Letter, we improve simulations with a robust technique to evaluate the underlying populations of SN Ia color and stretch that correlate with luminosity. In typical analyses, the standardized SN Ia brightness is determined from linear "Tripp" relations between light curve color and luminosity and between stretch and luminosity. However, this solution produces Hubble-residual biases because intrinsic scatter and measurement noise result in measured color and stretch values that do not follow the Tripp relation. We find a 10σ bias (up to 0.3 mag) in Hubble residuals versus color and a 5σ bias (up to 0.2 mag) in Hubble residuals versus stretch in a joint sample of 920 spectroscopically confirmed SNe Ia from PS1, SNLS, SDSS, and several low-z surveys. After we determine the underlying color and stretch distributions, we use simulations to predict and correct the biases in the data. We show that removing these biases has a small impact on the low-z sample, but reduces the intrinsic scatter σ_int from 0.101 to 0.083 in the combined PS1, SNLS, and SDSS sample. Past estimates of the underlying populations were too broad, leading to a small bias in the equation of state of dark energy w of Δw = 0.005.
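The Tripp standardization the abstract refers to is a simple linear relation. A minimal sketch follows; the coefficient values (alpha, beta, M) are illustrative assumptions, not the paper's fitted values.

```python
# A minimal sketch of the linear "Tripp" standardization: distance modulus
# from SALT2-style light-curve parameters. Coefficients are illustrative
# assumptions, not the paper's fitted values.
def tripp_mu(m_B, x1, c, alpha=0.14, beta=3.1, M_abs=-19.36):
    """Standardized distance modulus: mu = m_B - M + alpha*x1 - beta*c."""
    return m_B - M_abs + alpha * x1 - beta * c

# One supernova with stretch x1 = 0.5 and color c = -0.05:
print(f"mu = {tripp_mu(m_B=23.0, x1=0.5, c=-0.05):.3f}")
```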
A universal TaqMan-based RT-PCR protocol for cost-efficient detection of small noncoding RNA.
Jung, Ulrike; Jiang, Xiaoou; Kaufmann, Stefan H E; Patzel, Volker
2013-12-01
Several methods for the detection of RNA have been developed over time. For small RNA detection, a stem-loop reverse primer-based protocol relying on TaqMan RT-PCR has been described. This protocol requires an individual specific TaqMan probe for each target RNA and, hence, is highly cost-intensive for experiments with small sample sizes or large numbers of different samples. We describe a universal TaqMan-based probe protocol which can be used to detect any target sequence and demonstrate its applicability for the detection of endogenous as well as artificial eukaryotic and bacterial small RNAs. While the specific and the universal probe-based protocols showed the same sensitivity, the absolute sensitivity of detection was found for both to be more than 100-fold lower than previously reported. In subsequent experiments, we found previously unknown limitations intrinsic to the method that affect its feasibility for determining incorporation of mature templates into RISC, as well as for multiplexing. Both protocols were equally specific in discriminating between correct and incorrect small RNA targets, and between mature miRNA and its unprocessed RNA precursor, indicating that the stem-loop RT primer, not the TaqMan probe, confers target specificity. The presented universal TaqMan-based RT-PCR protocol represents a cost-efficient method for the detection of small RNAs.
Azangwe, Godfrey; Grochowska, Paulina; Georg, Dietmar; Izewska, Joanna; Hopfgartner, Johannes; Lechner, Wolfgang; Andersen, Claus E; Beierholm, Anders R; Helt-Hansen, Jakob; Mizuno, Hideyuki; Fukumura, Akifumi; Yajima, Kaori; Gouldstone, Clare; Sharpe, Peter; Meghzifene, Ahmed; Palmans, Hugo
2014-07-01
The aim of the present study is to provide a comprehensive set of detector-specific correction factors for beam output measurements in small beams, for a wide range of real-time and passive detectors. The detector-specific correction factors determined in this study may be potentially useful as a reference data set for small-beam dosimetry measurements. The dose response of passive and real-time detectors was investigated for small field sizes shaped with a micromultileaf collimator, ranging from 0.6 × 0.6 cm² to 4.2 × 4.2 cm², and the measurements were extended to larger fields of up to 10 × 10 cm². Measurements were performed at 5 cm depth in a 6 MV photon beam. Detectors used included alanine, thermoluminescent dosimeters (TLDs), a stereotactic diode, an electron diode, a photon diode, radiophotoluminescent dosimeters (RPLDs), a radioluminescence detector based on carbon-doped aluminium oxide (Al2O3:C), organic plastic scintillators, diamond detectors, a liquid-filled ion chamber, and a range of small-volume air-filled ionization chambers (volumes ranging from 0.002 cm³ to 0.3 cm³). All detector measurements were corrected for the volume averaging effect and compared with dose ratios determined from alanine to derive detector correction factors that account for beam perturbation related to the non-water equivalence of the detector materials. For the detectors used in this study, volume averaging corrections ranged from unity for the smallest detectors, such as the diodes, through 1.148 for the 0.14 cm³ air-filled ionization chamber, up to 1.924 for the 0.3 cm³ ionization chamber. After applying volume averaging corrections, the detector readings were consistent among themselves and with alanine measurements for several small detectors, but they differed for larger detectors, in particular for some small ionization chambers with volumes larger than 0.1 cm³. The results demonstrate how important it is to apply the appropriate corrections to obtain consistent and accurate measurements for a range of detectors in small-beam geometry. They further demonstrate that, depending on the choice of detectors, there is a potential for large errors when effects such as volume averaging, perturbation and differences in the material properties of detectors are not taken into account. As the commissioning of small fields for clinical treatment has to rely on accurate dose measurements, the authors recommend the use of detectors that require relatively little correction, such as unshielded diodes, diamond detectors or microchambers, and solid-state detectors such as alanine, TLD, Al2O3:C, or scintillators.
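The volume-averaging correction is the ratio of the dose at the detector centre to the dose averaged over the detector's extent. A hedged one-dimensional sketch with an assumed Gaussian lateral beam profile (parameter values are illustrative, not the study's):

```python
# A hedged sketch of a volume-averaging correction: dose at the detector
# centre divided by the dose averaged over the detector width, for an
# assumed 1-D Gaussian beam profile. Values are illustrative only.
import numpy as np

def volume_averaging_factor(detector_width_mm, beam_sigma_mm, n=2001):
    """k_vol = D(0) / <D> over the detector width."""
    x = np.linspace(-detector_width_mm / 2, detector_width_mm / 2, n)
    profile = np.exp(-x**2 / (2 * beam_sigma_mm**2))
    return 1.0 / profile.mean()

# A long chamber in a narrow beam needs a large correction;
# a small detector needs almost none.
for width in (1.0, 3.0, 6.0):
    print(width, "mm:", round(volume_averaging_factor(width, beam_sigma_mm=2.5), 3))
```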
Anomalous waveforms observed in laboratory-formed gas hydrate-bearing and ice-bearing sediments
Lee, Myung W.; Waite, William F.
2011-01-01
Acoustic transmission measurements of compressional, P, and shear, S, wave velocities rely on correctly identifying the P- and S-body wave arrivals in the measured waveform. In cylindrical samples for which the sample is much longer than the acoustic wavelength, these body waves can be obscured by high-amplitude waveform features arriving just after the relatively small-amplitude P-body wave. In this study, a normal mode approach is used to analyze this type of waveform, observed in sediment containing gas hydrate or ice. This analysis extends an existing normal-mode waveform propagation theory by including the effects of the confining medium surrounding the sample, and provides guidelines for estimating S-wave velocities from waveforms containing multiple large-amplitude arrivals. PMID:21476628
Soulakova, Julia N; Bright, Brianna C
2013-01-01
The large-sample problem of demonstrating noninferiority of an experimental treatment relative to a referent treatment for binary outcomes is considered. The methods for demonstrating noninferiority involve constructing the lower two-sided confidence bound for the difference between the binomial proportions of the experimental and referent treatments and comparing it with the negative value of the noninferiority margin. The three methods considered (Anbar, Falk-Koch, and Reduced Falk-Koch) handle the comparison in an asymmetric way; that is, of the two proportions, only the referent proportion is directly involved in the expression for the variance of the difference between the two sample proportions. Five continuity corrections (including zero) are considered for each approach. The key properties of the corresponding methods are evaluated via simulations. First, the uncorrected two-sided confidence intervals can, potentially, have smaller coverage probability than the nominal level even for moderately large sample sizes, for example, 150 per group. Next, the 15 testing methods are discussed in terms of their Type I error rate and power. In settings with a relatively small referent proportion (about 0.4 or smaller), the Anbar approach with Yates' continuity correction is recommended for balanced designs and the Falk-Koch method with Yates' correction for unbalanced designs. For relatively moderate (about 0.6) and large (about 0.8 or greater) referent proportions, the uncorrected Reduced Falk-Koch method is recommended, although in this case all methods tend to be over-conservative. These results are expected to be used in the design stage of a noninferiority study when asymmetric comparisons are envisioned. Copyright © 2013 John Wiley & Sons, Ltd.
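The comparison at the heart of these methods can be sketched generically: a lower confidence bound for the difference in proportions, optionally with a Yates-style continuity correction, tested against the negative margin. This is a hedged, generic Wald construction, not the exact Anbar or Falk-Koch variance expression.

```python
# A hedged sketch of the noninferiority comparison: Wald lower confidence
# bound for p_experimental - p_referent with an optional Yates-style
# continuity correction, compared against -margin. Generic construction,
# not the Anbar or Falk-Koch variance expression. Counts are made up.
import math

def lower_bound_diff(x_e, n_e, x_r, n_r, yates=True):
    p_e, p_r = x_e / n_e, x_r / n_r
    z = 1.959963984540054  # two-sided 95%
    se = math.sqrt(p_e * (1 - p_e) / n_e + p_r * (1 - p_r) / n_r)
    cc = 0.5 * (1 / n_e + 1 / n_r) if yates else 0.0
    return (p_e - p_r) - z * se - cc

margin = 0.10
lb = lower_bound_diff(x_e=125, n_e=150, x_r=123, n_r=150)
print(f"lower bound {lb:.3f}; noninferior: {lb > -margin}")
```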
Bias correction for selecting the minimal-error classifier from many machine learning models.
Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C
2014-11-15
Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrate the probabilistic framework of the problem and explore the statistical and asymptotic properties. We propose a new bias correction method based on learning curve fitting by inverse power law (IPL) and compare it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate-size real datasets and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction, with smaller variance, and has the additional advantage of extrapolating error estimates to larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier accuracy. An R package, 'MLbias', and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved.
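The core idea of learning-curve fitting by inverse power law is easy to sketch: fit err(n) = a·n^(-b) + c to cross-validation error measured at several training sizes, then extrapolate. The data points below are invented for illustration.

```python
# A hedged sketch of learning-curve fitting by inverse power law (IPL):
# fit err(n) = a * n**(-b) + c to cross-validation error at several
# sample sizes, then extrapolate. Data points are made up.
import numpy as np
from scipy.optimize import curve_fit

def ipl(n, a, b, c):
    return a * n ** (-b) + c

n_train = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
cv_error = np.array([0.42, 0.36, 0.33, 0.31, 0.30])

(a, b, c), _ = curve_fit(ipl, n_train, cv_error, p0=(1.0, 0.5, 0.2))
print(f"extrapolated error at n=200: {ipl(200.0, a, b, c):.3f}")
```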
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleh, Ahmed A.
Even with the use of X-ray polycapillary lenses, sample tilting during pole figure measurement results in a decrease in the recorded X-ray intensity. The magnitude of this error is affected by the sample size and/or the finite detector size. These errors can typically be corrected by measuring the intensity loss as a function of the tilt angle using a texture-free reference sample (ideally made of the same alloy as the investigated material). Since texture-free reference samples are not readily available for all alloys, the present study employs an empirical procedure to estimate the correction curve for a particular experimental configuration. It involves using the texture-free reference samples that pre-exist in any X-ray diffraction laboratory to first establish empirical correlations between X-ray intensity, sample tilt and Bragg angle, and thereafter to generate correction curves for any Bragg angle. It will be shown that the empirically corrected textures are in very good agreement with the experimentally corrected ones.
Highlights:
• Sample tilting during X-ray pole figure measurement leads to intensity-loss errors.
• Texture-free reference samples are typically used to correct the pole figures.
• An empirical correction procedure is proposed in the absence of reference samples.
• The procedure relies on reference samples that pre-exist in any texture laboratory.
• Experimentally and empirically corrected textures are in very good agreement.
NASA Astrophysics Data System (ADS)
Xiong, Guoming; Cumming, Paul; Todica, Andrei; Hacker, Marcus; Bartenstein, Peter; Böning, Guido
2012-12-01
Absolute quantitation of the cerebral metabolic rate for glucose (CMRglc) can be obtained in positron emission tomography (PET) studies when serial measurements of the arterial [18F]-fluoro-deoxyglucose (FDG) input are available. Since this is not always practical in PET studies of rodents, there has been considerable interest in defining an image-derived input function (IDIF) by placing a volume of interest (VOI) within the left ventricle of the heart. However, spill-in arising from trapping of FDG in the myocardium often leads to progressive contamination of the IDIF, which propagates to underestimation of the magnitude of CMRglc. We therefore developed a novel, non-invasive method for correcting the IDIF without scaling to a blood sample. To this end, we first obtained serial arterial samples and dynamic FDG-PET data of the head and heart in a group of eight anaesthetized rats. We fitted a bi-exponential function to the serial measurements of the IDIF, and then used the linear graphical Gjedde-Patlak method to describe the accumulation in myocardium. We next estimated the magnitude of myocardial spill-in reaching the left ventricle VOI by assuming a Gaussian point-spread function, and corrected the measured IDIF for this estimated spill-in. Finally, we calculated parametric maps of CMRglc using the corrected IDIF and, for the sake of comparison, relative to serial blood sampling from the femoral artery. The uncorrected IDIF resulted in 20% underestimation of the magnitude of CMRglc relative to the gold standard arterial input method. However, there was no bias with the corrected IDIF, which was robust to the variable extent of myocardial tracer uptake, such that there was a very high correlation between individual CMRglc measurements using the corrected IDIF and the gold-standard arterial input results. Based on simulation, we furthermore found that electrocardiogram (ECG) gating is not necessary for IDIF quantitation using our approach.
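One step in this pipeline, fitting a bi-exponential to the input-function tail, is easy to sketch. Times and activities below are synthetic stand-ins, not the study's rat data.

```python
# A hedged sketch of fitting a bi-exponential to the tail of an
# image-derived input function (IDIF). Synthetic stand-in data.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

t = np.linspace(2, 60, 30)  # minutes post-injection
idif = biexp(t, 80, 0.8, 20, 0.02)
idif_noisy = idif * (1 + 0.03 * np.random.default_rng(1).standard_normal(t.size))

popt, _ = curve_fit(biexp, t, idif_noisy, p0=(50, 0.5, 10, 0.01))
print("fitted (a1, k1, a2, k2):", np.round(popt, 3))
```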
Miyashita, Shin-Ichi; Mitsuhashi, Hiroaki; Fujii, Shin-Ichiro; Takatsu, Akiko; Inagaki, Kazumi; Fujimoto, Toshiyuki
2017-02-01
In order to facilitate reliable and efficient determination of both the particle number concentration (PNC) and the size of nanoparticles (NPs) by single-particle ICP-MS (spICP-MS) without the need to correct for the particle transport efficiency (TE, a possible source of bias in the results), a total-consumption sample introduction system consisting of a large-bore, high-performance concentric nebulizer and a small-volume on-axis cylinder chamber was utilized. Such a system potentially permits a particle TE of 100 %, meaning that there is no need to include a particle TE correction when calculating the PNC and the NP size. When the particle TE through the sample introduction system was evaluated by comparing the observed frequency of sharp transient NP signals for a measured suspension with the frequency expected from an NP standard of precisely known PNC, the TE for platinum NPs with a nominal diameter of 70 nm was found to be very high (93 %) and showed satisfactory repeatability (relative standard deviation of 1.0 % for four consecutive measurements). These results indicated that employing this total-consumption system allows the particle TE correction to be ignored when calculating the PNC. When the particle size was determined using a solution-standard-based calibration approach without an NP standard, the particle diameters of platinum and silver NPs with nominal diameters of 30-100 nm agreed well with the particle diameters determined by transmission electron microscopy, regardless of whether a correction was performed for the particle TE. Thus, applying the proposed system enables NP size to be accurately evaluated using a solution-standard-based calibration approach without the need to correct for the particle TE.
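The bookkeeping behind the PNC determination is simple once TE can be taken as ~1: the particle-event frequency divided by the volumetric uptake rate. A hedged sketch with illustrative values, not the paper's:

```python
# A hedged sketch of spICP-MS particle number concentration: with a
# total-consumption system the transport efficiency (TE) is ~1, so
# PNC follows directly from event frequency and uptake rate.
# Values are illustrative, not the paper's.
def pnc_per_mL(particle_events, acquisition_s, uptake_uL_min, te=1.0):
    """PNC [particles/mL] = f / (Q * TE)."""
    freq_per_s = particle_events / acquisition_s
    uptake_mL_s = uptake_uL_min / 1000.0 / 60.0
    return freq_per_s / (uptake_mL_s * te)

print(f"{pnc_per_mL(particle_events=1200, acquisition_s=60, uptake_uL_min=10):.3g}")
# -> 1.2e+05 particles/mL
```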
The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations
NASA Astrophysics Data System (ADS)
Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.
2017-09-01
We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that depend only on I-band absolute magnitude (M_I), or, for a small subset of our sample, on M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.
Method of absorbance correction in a spectroscopic heating value sensor
Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John
2013-09-17
A method and apparatus for absorbance correction in a spectroscopic heating value sensor: a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and the measured light absorbance of the sample fluid is determined. A corrective light intensity measurement is then made on the sample fluid at a non-absorbing wavelength, from which an absorbance correction factor is determined. The absorbance correction factor is applied to the measured light absorbance of the sample fluid to arrive at a true, accurate absorbance for the sample fluid.
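The correction idea can be sketched simply: apparent absorbance at a wavelength where the sample does not absorb reflects scattering, fouling, or drift, and is subtracted from the absorbance at the analytical wavelength. Intensity values below are invented for illustration.

```python
# A hedged sketch of the correction idea in this patent abstract: subtract
# the apparent absorbance at a non-absorbing wavelength (scattering/drift)
# from the absorbance at the analytical wavelength. Numbers are invented.
import math

def absorbance(i_sample, i_reference):
    return -math.log10(i_sample / i_reference)

A_measured = absorbance(i_sample=0.42, i_reference=1.00)  # analytical wavelength
A_offset = absorbance(i_sample=0.93, i_reference=1.00)    # non-absorbing wavelength
A_true = A_measured - A_offset
print(f"A_measured={A_measured:.3f}, offset={A_offset:.3f}, A_true={A_true:.3f}")
```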
Bakbergenuly, Ilyas; Morgenthaler, Stephan
2016-01-01
We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated with the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from the standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and when combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in the intracluster correlation coefficient ρ for small values of ρ. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis, and they result in abysmal coverage of the combined effect for large K. We also propose a bias correction for the arcsine transformation. Our simulations demonstrate that this bias correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. PMID:27192062
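The bias phenomenon is easy to reproduce in a small simulation: with overdispersed (beta-binomial) counts, the mean of the arcsine-transformed estimate drifts from the transform of the true probability as ρ grows. A hedged sketch with illustrative parameters, not the paper's simulation design:

```python
# A hedged simulation sketch: bias of the arcsine-transformed estimate
# under beta-binomial overdispersion with intracluster correlation rho.
# Parameters are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(2)
p, n, reps = 0.2, 50, 200_000

for rho in (0.0, 0.05, 0.1):
    if rho == 0.0:
        x = rng.binomial(n, p, reps)
    else:
        # Beta-binomial with ICC = rho: a = p(1-rho)/rho, b = (1-p)(1-rho)/rho.
        a, b = p * (1 - rho) / rho, (1 - p) * (1 - rho) / rho
        x = rng.binomial(n, rng.beta(a, b, reps))
    bias = np.mean(np.arcsin(np.sqrt(x / n))) - np.arcsin(np.sqrt(p))
    print(f"rho={rho:.2f}: bias of arcsine-transformed estimate = {bias:+.4f}")
```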
Eblen, Denise R; Barlow, Kristina E; Naugle, Alecia Larew
2006-11-01
The U.S. Food Safety and Inspection Service (FSIS) pathogen reduction-hazard analysis critical control point systems final rule, published in 1996, established Salmonella performance standards for broiler chicken, cow and bull, market hog, and steer and heifer carcasses and for ground beef, chicken, and turkey meat. In 1998, the FSIS began testing to verify that establishments are meeting performance standards. Samples are collected in sets in which the number of samples is defined but varies according to product class. A sample set fails when the number of positive Salmonella samples exceeds the maximum number of positive samples allowed under the performance standard. Salmonella sample sets collected at 1,584 establishments from 1998 through 2003 were examined to identify factors associated with failure of one or more sets. Overall, 1,282 (80.9%) of establishments never had failed sets. In establishments that did experience set failure(s), generally the failed sets were collected early in the establishment testing history, with the exception of broiler establishments where failure(s) occurred both early and late in the course of testing. Small establishments were more likely to have experienced a set failure than were large or very small establishments, and broiler establishments were more likely to have failed than were ground beef, market hog, or steer-heifer establishments. Agency response to failed Salmonella sample sets in the form of in-depth verification reviews and related establishment-initiated corrective actions have likely contributed to declines in the number of establishments that failed sets. A focus on food safety measures in small establishments and broiler processing establishments should further reduce the number of sample sets that fail to meet the Salmonella performance standard.
Methodological considerations for measuring glucocorticoid metabolites in feathers
Berk, Sara A.; McGettrick, Julie R.; Hansen, Warren K.; Breuner, Creagh W.
2016-01-01
In recent years, researchers have begun to use corticosteroid metabolites in feathers (fCORT) as a metric of stress physiology in birds. However, there remain substantial questions about how to measure fCORT most accurately. Notably, small samples contain artificially high amounts of fCORT per millimetre of feather (the small sample artefact). Furthermore, it appears that fCORT is correlated with circulating plasma corticosterone only when levels are artificially elevated by the use of corticosterone implants. Here, we used several approaches to address current methodological issues with the measurement of fCORT. First, we verified that the small sample artefact exists across species and feather types. Second, we attempted to correct for this effect by increasing the amount of methanol relative to the amount of feather during extraction. We consistently detected more fCORT per millimetre or per milligram of feather in small samples than in large samples even when we adjusted methanol:feather concentrations. We also used high-performance liquid chromatography to identify hormone metabolites present in feathers and measured the reactivity of these metabolites against the most commonly used antibody for measuring fCORT. We verified that our antibody is mainly identifying corticosterone (CORT) in feathers, but other metabolites have significant cross-reactivity. Lastly, we measured faecal glucocorticoid metabolites in house sparrows and correlated these measurements with corticosteroid metabolites deposited in concurrently grown feathers; we found no correlation between faecal glucocorticoid metabolites and fCORT. We suggest that researchers should be cautious in their interpretation of fCORT in wild birds and should seek alternative validation methods to examine species-specific relationships between environmental challenges and fCORT. PMID:27335650
NASA Technical Reports Server (NTRS)
Moul, T. M.
1983-01-01
The nature of corrections for flow direction measurements obtained with a wing-tip mounted sensor was investigated. Corrections for the angle of attack and sideslip, measured by sensors mounted in front of each wing tip of a general aviation airplane, were determined. These flow corrections were obtained from both wind-tunnel and flight tests over a large angle-of-attack range. Both the angle-of-attack and angle-of-sideslip flow corrections were found to be substantial. The corrections were a function of the angle of attack and angle of sideslip. The effects of wing configuration changes, small changes in Reynolds number, and spinning rotation on the angle-of-attack flow correction were found to be small. The angle-of-attack flow correction determined from the static wind-tunnel tests agreed reasonably well with the correction determined from flight tests.
Large-particle calcium hydroxylapatite injection for correction of facial wrinkles and depressions.
Alam, Murad; Havey, Jillian; Pace, Natalie; Pongprutthipan, Marisa; Yoo, Simon
2011-07-01
Small-particle calcium hydroxylapatite (Radiesse, Merz, Frankfurt, Germany) is safe and effective for facial wrinkle reduction, and has medium-term persistence for this indication. There is patient demand for similar fillers that may be longer lasting. We sought to assess the safety and persistence of effect in vivo associated with use of large-particle calcium hydroxylapatite (Coaptite, Merz) for facial augmentation and wrinkle reduction. This was a case series of 3 patients injected with large-particle calcium hydroxylapatite. Large-particle calcium hydroxylapatite appears to be effective and well tolerated for correction of facial depressions, including upper mid-cheek atrophy, nasolabial creases, and HIV-associated lipoatrophy. Adverse events included erythema and edema, and transient visibility of the injection sites. Treated patients, all of whom had received small-particle calcium hydroxylapatite correction before, noted improved persistence at 6 and 15 months with the large-particle injections as compared with prior small-particle injections. This is a small case series, and there was no direct control to compare the persistence of small-particle versus large-particle correction. For facial wrinkle correction, large-particle calcium hydroxylapatite has a safety profile comparable with that of small-particle calcium hydroxylapatite. The large-particle variant may have longer persistence that may be useful in selected clinical circumstances. Copyright © 2010 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.
Identification and Correction of Sample Mix-Ups in Expression Genetic Data: A Case Study
Broman, Karl W.; Keller, Mark P.; Broman, Aimee Teo; Kendziorski, Christina; Yandell, Brian S.; Sen, Śaunak; Attie, Alan D.
2015-01-01
In a mouse intercross with more than 500 animals and genome-wide gene expression data on six tissues, we identified a high proportion (18%) of sample mix-ups in the genotype data. Local expression quantitative trait loci (eQTL; genetic loci influencing gene expression) with extremely large effect were used to form a classifier to predict an individual’s eQTL genotype based on expression data alone. By considering multiple eQTL and their related transcripts, we identified numerous individuals whose predicted eQTL genotypes (based on their expression data) did not match their observed genotypes, and then went on to identify other individuals whose genotypes did match the predicted eQTL genotypes. The concordance of predictions across six tissues indicated that the problem was due to mix-ups in the genotypes (although we further identified a small number of sample mix-ups in each of the six panels of gene expression microarrays). Consideration of the plate positions of the DNA samples indicated a number of off-by-one and off-by-two errors, likely the result of pipetting errors. Such sample mix-ups can be a problem in any genetic study, but eQTL data allow us to identify, and even correct, such problems. Our methods have been implemented in an R package, R/lineup. PMID:26290572
NASA Astrophysics Data System (ADS)
Shen, Chuan-Chou; Lin, Huei-Ting; Chu, Mei-Fei; Yu, Ein-Fen; Wang, Xianfeng; Dorale, Jeffrey A.
2006-09-01
A new analytical technique using inductively coupled plasma-quadrupole mass spectrometry (ICP-QMS) has been developed that produces permil-level precision in the measurement of uranium concentration ([U]) and isotopic composition (δ234U) in natural materials. A 233U-236U double-spike method was used to correct for mass fractionation during analysis. To correct for ratio drift, samples were bracketed by uranium standard measurements. A sensitivity of 6-7 × 10^8 cps/ppm was obtained with a sample solution uptake rate of 30 μL/min. With a measurement time of 15-20 min, standards of 30 ng uranium produced a within-run precision better than 3‰ (±2 R.S.D.) for δ234U and better than 2‰ for [U]. Replicate measurements made on standards show that a between-run reproducibility of 3.5‰ for δ234U and 2‰ for [U] can be achieved. ICP-QMS data for δ234U and [U] in seawater, coral, and speleothem materials are consistent with data measured by other ICP-MS and TIMS techniques. Advantages of the ICP-QMS method include low cost, easy maintenance, simple instrumental operation, and few sample preparation steps. Sample size requirements are small, e.g., 10-14 mg of coral material. The results demonstrate that this technique can be applied to natural samples with various matrices.
Matching-to-sample by an echolocating dolphin (Tursiops truncatus).
Roitblat, H L; Penner, R H; Nachtigall, P E
1990-01-01
An adult male dolphin was trained to perform a three-alternative delayed matching-to-sample task while wearing eyecups to occlude its vision. Sample and comparison stimuli consisted of a small and a large PVC plastic tube, a water-filled stainless steel sphere, and a solid aluminum cone. Stimuli were presented under water and the dolphin was allowed to identify the stimuli through echolocation. The echolocation clicks emitted by the dolphin to each sample and each comparison stimulus were recorded and analyzed. Over 48 sessions of testing, choice accuracy averaged 94.5% correct. This high level of accuracy was apparently achieved by varying the number of echolocation clicks emitted to various stimuli. Performance appeared to reflect a preexperimental stereotyped search pattern that dictated the order in which comparison items were examined and a complex sequential-sampling decision process. A model for the dolphin's decision-making processes is described.
Patient satisfaction with nursing staff in bone marrow transplantation and hematology units.
Piras, A; Poddigue, M; Angelucci, E
2010-01-01
Several validated questionnaires for assessment of hospitalized patient satisfaction have been reported in the literature. Many have been designed specifically for patients with cancer. User satisfaction is one indicator of service quality and benefits. Thus, we conducted a small qualitative survey managed by nursing staff in our Bone Marrow Transplantation Unit and Acute Leukemia Unit, with the objectives of assessing patient satisfaction, determining critical existing problems, and developing required interventions. The sample was not probabilistic. A questionnaire was developed using the Delphi method in a pilot study with 30 patients. Analysis of the data suggested a good level of patient satisfaction with medical and nursing staffs (100%), but poor satisfaction with food (48%), services (38%), and amenities (31%). Limitations of the study were that the questionnaire was unvalidated and the sample was small. However, for the first time, patient satisfaction was directly measured at our hospital. Another qualitative study will be conducted after correction of the critical points that emerged during this initial study, in a larger sample of patients. Copyright 2010 Elsevier Inc. All rights reserved.
Examination of multi-model ensemble seasonal prediction methods using a simple climate system
NASA Astrophysics Data System (ADS)
Kang, In-Sik; Yoo, Jin Ho
2006-02-01
A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240-year) historical hindcast predictions was performed with the various prediction models and used to examine issues in multi-model ensemble seasonal prediction, such as the best way to blend multiple models and how to select models. Based on these results, we suggest a feasible way of maximizing the benefit of using multiple models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, the superensemble, and the composite of individually bias-corrected predictions (corrected composite), are examined and compared to each other. The superensemble suffers more from overfitting than the others, especially for small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
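The two better-behaved combinations are easy to sketch side by side: a plain average of the models, and an average after each model has been bias-corrected by regression against observations over a training period. The data below are synthetic stand-ins, not the paper's hindcasts.

```python
# A hedged sketch of two multi-model combinations: simple composite
# (plain average) vs corrected composite (regress observations on each
# model over a training period, then average the calibrated predictions).
# Synthetic stand-in data, not the paper's hindcasts.
import numpy as np

rng = np.random.default_rng(3)
t, n_models = 240, 5
obs = rng.standard_normal(t)
# Each "model" sees the signal with its own bias, scale error and noise.
preds = np.stack([0.2 * m + (0.6 + 0.1 * m) * obs + 0.8 * rng.standard_normal(t)
                  for m in range(n_models)])

simple = preds.mean(axis=0)

train = slice(0, t // 2)  # calibrate on the first half
corrected = np.zeros(t)
for pm in preds:
    b, a = np.polyfit(pm[train], obs[train], 1)  # slope, intercept
    corrected += (a + b * pm) / n_models

for name, pred in (("simple", simple), ("corrected", corrected)):
    r = np.corrcoef(pred[t // 2:], obs[t // 2:])[0, 1]  # verify on second half
    print(f"{name} composite skill (r): {r:.3f}")
```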
NASA Astrophysics Data System (ADS)
Yang, Chun; Quarles, C. A.
2007-10-01
We have used positron Doppler broadening spectroscopy (DBS) to investigate the uniformity of rubber-carbon black composite samples. The amount of carbon black added to a rubber sample is characterized by phr, the number of grams of carbon black per hundred grams of rubber. Typical concentrations in rubber tires are 50 phr. It has been shown that the S parameter measured by DBS depends on the phr of the sample, so the variation in carbon black concentration can easily be measured to 0.5 phr. In doing the experiments we observed a dependence of the S parameter on small variations in the counting rate, i.e., the deadtime. By carefully calibrating this deadtime correction we can significantly reduce the experimental run time and thus determine the uniformity of extended samples faster.
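For reference, a standard non-paralyzable deadtime correction of the kind such a calibration addresses is a one-liner; the deadtime value below is an illustrative assumption, not the instrument's.

```python
# A hedged sketch of a standard non-paralyzable deadtime correction:
# recover the true count rate from the measured rate given deadtime tau.
# tau is an illustrative assumption.
def true_rate(measured_cps, tau_s=2e-6):
    """n = m / (1 - m * tau) for a non-paralyzable counting system."""
    return measured_cps / (1.0 - measured_cps * tau_s)

for m in (1e4, 5e4, 1e5):
    print(f"measured {m:.0e} cps -> true {true_rate(m):.3e} cps")
```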
Correcting for Optimistic Prediction in Small Data Sets
Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.
2014-01-01
The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
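Bootstrapping, one of the unbiased approaches evaluated here, is commonly implemented in the Harrell style: estimate the optimism as the average gap between bootstrap-sample and original-sample performance of models refit on each resample. A hedged sketch with synthetic data:

```python
# A hedged sketch of Harrell-style bootstrap optimism correction for the
# C statistic, using synthetic data and logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n, p = 120, 5
X = rng.standard_normal((n, p))
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))

model = LogisticRegression().fit(X, y)
c_apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    if len(np.unique(y[idx])) < 2:
        continue  # skip degenerate resamples
    m = LogisticRegression().fit(X[idx], y[idx])
    c_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    c_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(c_boot - c_orig)

print(f"apparent C = {c_apparent:.3f}, "
      f"optimism-corrected C = {c_apparent - np.mean(optimism):.3f}")
```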
Sex determination from calcification of costal cartilages in a Scottish sample.
Middleham, Helen P; Boyd, Laura E; Mcdonald, Stuart W
2015-10-01
The pelvic bones and skull are not always available when human remains are discovered in a forensic setting. This study investigates the suitability, for a Scottish sample, of existing sexing methods based on calcification patterns in the costal cartilages. Radiographs of chest plates of 41 cadavers, 22 male and 19 female, aged 57-96 years, were analyzed for their calcification patterns according to the methods of McCormick et al. (1985, Am. J. Phys. Anthropol. 68:173-195) and Rejtarova et al. (2004, Biomed. Pap. Med. Fac. Univ. Palacky. Olomouc. Czech. Repub. 148:241-243). With the method of Rejtarova et al., none of the male specimens was sexed correctly. Of the chest plates that were suitable for sexing, the method of McCormick et al. correctly sexed 82.4% of the female specimens but only 41.2% of the males. To improve reliability, we suggest a new method of sex determination based on whether the calcified deposits in the second to seventh costal cartilages are predominantly trabecular bone or sclerotic calcified deposits. Specimens with minimal amounts, or similar amounts, of trabecular bone and sclerotic deposits in the costal cartilages are not appropriate for our method. When such specimens (10 specimens) were excluded, our method correctly sexed 16 of 17 (94%) males and 12 of 14 (86%) females. The authors acknowledge that their sample is small, that many of their subjects were elderly, and that the method should be tested on a larger sample group before application in a forensic context. © 2014 Wiley Periodicals, Inc.
Efficient free energy calculations by combining two complementary tempering sampling methods.
Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun
2017-01-14
Although energy barriers can be efficiently crossed in the reaction coordinate (RC) guided sampling, this type of method suffers from identification of the correct RCs or requirements of high dimensionality of the defined RCs for a given system. If only the approximate RCs with significant barriers are used in the simulations, hidden energy barriers with small to medium height would exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause the problem of insufficient sampling. To address the sampling in this so-called hidden barrier situation, here we propose an effective approach to combine temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with the integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD to three systems in the processes with hidden barriers has been examined. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five times even if in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work shows more potential applications of the ITS-TAMD method as the efficient and powerful tool for the investigation of a broad range of interesting cases.
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
This paper focuses on the problem of information reconciliation (IR) for continuous-variable quantum key distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The difficulty is that most of the samples, assuming that the Gaussian variable is zero-mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective signal-to-noise ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bit-per-sample accuracy, which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over the public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.
NASA Astrophysics Data System (ADS)
Schulz-Hildebrandt, H.; Münter, Michael; Ahrens, M.; Spahr, H.; Hillmann, D.; König, P.; Hüttmann, G.
2018-03-01
Optical coherence tomography (OCT) images scattering tissues with 5 to 15 μm resolution, which is usually not sufficient to distinguish cellular and subcellular structures. Achieving cellular and subcellular resolution requires increased axial and lateral resolution and compensation of artifacts caused by dispersion and aberrations, including defocus, which limits the usable depth of field at high lateral resolution. OCT gives access to the phase of the scattered light, and hence correction of dispersion and aberrations is possible by numerical algorithms. Here we present a unified dispersion/aberration correction which is based on a polynomial parameterization of the phase error and an optimization of the image quality using Shannon's entropy. For validation, a supercontinuum light source and a custom-made spectrometer with 400 nm bandwidth were combined with a high-NA microscope objective in a setup for tissue and small-animal imaging. Using this setup and computational corrections, volumetric imaging at 1.5 μm resolution is possible. Cellular and near-cellular resolution is demonstrated in porcine cornea and the drosophila larva when computational correction of dispersion and aberrations is used. Due to the excellent correction of the microscope objective used, defocus was the main contribution to the aberrations. In addition, higher-order aberrations caused by the sample itself were successfully corrected. Dispersion and aberrations are closely related artifacts in microscopic OCT imaging; hence they can be corrected in the same way by optimization of the image quality. In this way, microscopic resolution is easily achieved in OCT imaging of static biological tissues.
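The optimization idea generalizes beyond OCT and can be sketched in one dimension: parameterize the phase error with a polynomial, apply it in the spatial-frequency domain, and choose coefficients minimizing the Shannon entropy of the image magnitude (a sharper image has lower entropy). This is a toy illustration of the principle, not the authors' implementation.

```python
# A hedged 1-D toy sketch of entropy-minimizing phase correction:
# recover a quadratic (defocus-like) phase coefficient by minimizing
# Shannon entropy of the image magnitude. Not the authors' implementation.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
field = np.zeros(256, complex)
field[rng.integers(0, 256, 12)] = 1.0  # point scatterers
k = np.fft.fftfreq(256)
spec_aberrated = np.fft.fft(field) * np.exp(1j * 40 * (2 * np.pi * k) ** 2)

def entropy(c2):
    spec = spec_aberrated * np.exp(-1j * c2 * (2 * np.pi * k) ** 2)
    img = np.abs(np.fft.ifft(spec)) ** 2
    p = img / img.sum()
    return -(p * np.log(p + 1e-12)).sum()

best = minimize_scalar(entropy, bounds=(0, 80), method="bounded")
print(f"recovered quadratic phase coefficient: {best.x:.1f} (true: 40)")
```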
Radiometric Correction of Multitemporal Hyperspectral Uas Image Mosaics of Seedling Stands
NASA Astrophysics Data System (ADS)
Markelin, L.; Honkavaara, E.; Näsi, R.; Viljanen, N.; Rosnell, T.; Hakala, T.; Vastaranta, M.; Koivisto, T.; Holopainen, M.
2017-10-01
Novel miniaturized multi- and hyperspectral imaging sensors on board unmanned aerial vehicles have recently shown great potential in various environmental monitoring and measuring tasks such as precision agriculture and forest management. These systems can be used to collect dense 3D point clouds and spectral information over small areas such as single forest stands or sample plots. Accurate radiometric processing and atmospheric correction are required when data sets from different dates and sensors, collected in varying illumination conditions, are combined. The performance of a novel radiometric block adjustment method, developed at the Finnish Geospatial Research Institute, is evaluated with a multitemporal hyperspectral data set of seedling stands collected during spring and summer 2016. Illumination conditions during the campaigns varied from bright to overcast. We use two different methods to produce homogeneous image mosaics and hyperspectral point clouds: image-wise relative correction, and image-wise relative correction with BRDF. Radiometric datasets are converted to reflectance using reference panels, and changes in reflectance spectra are analysed. The tested methods improved image mosaic homogeneity by 5% to 25%. The results show that the evaluated method can produce consistent reflectance mosaics and consistent reflectance spectrum shapes between different areas and dates.
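The reference-panel conversion step mentioned here is usually an empirical-line fit: a linear digital-number-to-reflectance relation per band, estimated from panels of known reflectance. A hedged sketch with invented values:

```python
# A hedged sketch of empirical-line reflectance conversion: fit a linear
# DN-to-reflectance relation per band from reference panels of known
# reflectance. All values are invented for illustration.
import numpy as np

panel_dn = np.array([520.0, 2410.0, 4350.0])      # measured digital numbers
panel_reflectance = np.array([0.05, 0.25, 0.50])  # known panel reflectances

gain, offset = np.polyfit(panel_dn, panel_reflectance, 1)
image_dn = np.array([800.0, 1500.0, 3000.0])
print(np.round(gain * image_dn + offset, 3))      # converted reflectances
```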
Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis
2009-02-01
Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used the multivariate delta method (MDM) and a bootstrapping method (BM) to construct CIs around relative changes in level and trend, and around absolute changes in outcome, based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for medications interacting with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both methods produced similar results. The BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when the sample size is small.
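The segmented regression itself and a residual-bootstrap CI for the level change can be sketched in a few lines; for brevity the autocorrelation correction the paper applies is omitted, and the data are synthetic.

```python
# A hedged sketch of segmented (interrupted time series) regression with a
# residual-bootstrap CI for the absolute level change. Autocorrelation
# correction omitted for brevity; data are synthetic.
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(48)
post = (t >= 24).astype(float)
y = 50 + 0.2 * t - 6 * post - 0.3 * post * (t - 24) + rng.normal(0, 2, t.size)

# Design: intercept, baseline trend, level change, trend change.
X = np.column_stack([np.ones_like(t), t, post, post * (t - 24)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

level_changes = []
for _ in range(2000):
    y_b = X @ beta + rng.choice(resid, resid.size, replace=True)
    b_b, *_ = np.linalg.lstsq(X, y_b, rcond=None)
    level_changes.append(b_b[2])

lo, hi = np.percentile(level_changes, [2.5, 97.5])
print(f"level change = {beta[2]:.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")
```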
Determination of A FB b at the Z pole using inclusive charge reconstruction and lifetime tagging
NASA Astrophysics Data System (ADS)
DELPHI Collaboration
2005-03-01
A novel high-precision method measures the b-quark forward-backward asymmetry at the Z pole on a sample of 3,560,890 hadronic events collected with the DELPHI detector from 1992 to 2000. An enhanced impact parameter tag provides a high-purity b sample. For event hemispheres with a reconstructed secondary vertex, the charge of the corresponding quark or anti-quark is determined using a neural network which optimally combines the full available charge information from the vertex charge, the jet charge and identified leptons and hadrons. The probability of correctly identifying b-quarks and anti-quarks is measured on the data themselves by comparing the rates of double-hemisphere-tagged like-sign and unlike-sign events. The b-quark forward-backward asymmetry is determined from the differential asymmetry, taking into account small corrections due to hemisphere correlations and background contributions. The results for different centre-of-mass energies are: A_FB^b(89.449 GeV) = 0.0637 ± 0.0143 (stat.) ± 0.0017 (syst.)
Diagnosis and management of small intestinal bacterial overgrowth.
Bohm, Matthew; Siwiec, Robert M; Wo, John M
2013-06-01
Small intestinal bacterial overgrowth (SIBO) can result from failure of the gastric acid barrier, failure of small intestinal motility, anatomic alterations, or impairment of systemic and local immunity. The currently accepted criterion for the diagnosis of SIBO is the presence of coliform bacteria isolated from the proximal jejunum at >10^5 colony-forming units/mL. A major concern with luminal aspiration is that it is only one random sampling of the small intestine and may not always be representative of the underlying microbiota. A newer approach to examining the underlying microbiota uses rapid molecular sequencing, but its clinical utilization is still under active investigation. Clinical manifestations of SIBO are variable and include bloating, flatulence, abdominal distention, abdominal pain, and diarrhea. Severe cases may present with nutrition deficiencies due to malabsorption of micro- and macronutrients. Current management strategies for SIBO center on identifying and correcting underlying causes, addressing nutrition deficiencies, and judicious utilization of antibiotics to treat symptomatic SIBO.
Efficient logistic regression designs under an imperfect population identifier.
Albert, Paul S; Liu, Aiyi; Nansel, Tonja
2014-03-01
Motivated by actual study designs, this article considers efficient logistic regression designs where the population is identified with a binary test that is subject to diagnostic error. We consider the case where the imperfect test is obtained on all participants, while the gold standard test is measured on a small chosen subsample. Under maximum-likelihood estimation, we evaluate the optimal design in terms of sample selection as well as verification. We show that there may be substantial efficiency gains by choosing a small percentage of individuals who test negative on the imperfect test for inclusion in the sample (e.g., verifying 90% test-positive cases). We also show that a two-stage design may be a good practical alternative to a fixed design in some situations. Under optimal and nearly optimal designs, we compare maximum-likelihood and semi-parametric efficient estimators under correct and misspecified models with simulations. The methodology is illustrated with an analysis from a diabetes behavioral intervention trial. © 2013, The International Biometric Society.
Survey of background scattering from materials found in small-angle neutron scattering.
Barker, J G; Mildner, D F R
2015-08-01
Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300-700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed.
MacArthur, Katherine E; Brown, Hamish G; Findlay, Scott D; Allen, Leslie J
2017-11-01
Advances in microscope stability, aberration correction and detector design now make it readily possible to achieve atomic resolution energy dispersive X-ray mapping for dose-resilient samples. These maps show impressive atomic-scale qualitative detail as to where the elements reside within a given sample. Unfortunately, while electron channelling is exploited to provide atomic resolution data, this very process makes the images rather more complex to interpret quantitatively than if no electron channelling occurred. Here we propose small sample tilt as a means for suppressing channelling and improving quantification of composition, whilst maintaining atomic-scale resolution. Only by knowing the composition and thickness of the sample is it possible to determine the atomic configuration within each column. The effects of neighbouring atomic columns with differing composition and of residual channelling on our ability to extract exact column-by-column composition are also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
SUSANS With Polarized Neutrons.
Wagh, Apoorva G; Rakhecha, Veer Chand; Strobl, Markus; Treimer, Wolfgang
2005-01-01
Super Ultra-Small Angle Neutron Scattering (SUSANS) studies over wave vector transfers of 10⁻⁴ nm⁻¹ to 10⁻³ nm⁻¹ afford information on micrometer-size agglomerates in samples. Using a right-angled magnetic air prism, we have achieved a separation of ≈10 arcsec between ≈2 arcsec wide up- and down-spin peaks of 0.54 nm neutrons. The SUSANS instrument has thus been equipped with the polarized neutron option. The samples are placed in a uniform vertical field of 8.8 × 10⁴ A/m (1.1 kOe). Several magnetic alloy ribbon samples broaden the up-spin neutron peak significantly over the ±1.3 × 10⁻³ nm⁻¹ range, while leaving the down-spin peak essentially unaltered. Fourier transforms of these SUSANS spectra, corrected for the instrument resolution, yield micrometer-range pair distribution functions for up- and down-spin neutrons as well as the nuclear and magnetic scattering length density distributions in the samples.
Alió Del Barrio, Jorge L; Vargas, Verónica; Al-Shymali, Olena; Alió, Jorge L
2017-01-01
Small Incision Lenticule Extraction (SMILE) is a flap-free intrastromal technique for the correction of myopia and myopic astigmatism. To date, this technique lacks automated centration and cyclotorsion control, so several concerns have been raised regarding its capability to correct moderate or high levels of astigmatism. The objective of this paper is to review the reported SMILE outcomes for the correction of myopic astigmatism associated with a cylinder over 0.75 D, and to compare them with the outcomes reported for excimer laser-based corneal refractive surgery techniques. A total of five studies clearly reporting SMILE astigmatic outcomes were identified. SMILE shows acceptable outcomes for the correction of myopic astigmatism, although a general agreement exists about the superiority of the excimer laser-based techniques for low to moderate levels of astigmatism. Manual correction of the static cyclotorsion should be adopted for any SMILE astigmatic correction over 0.75 D.
No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.
van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B
2016-11-24
Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
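For readers wanting to experiment with the approach the abstract describes, below is a minimal numpy sketch of Firth's bias-reduced logistic regression (the penalized-score form of Firth, 1993); the data, tolerance, and function name are illustrative, not taken from the paper.

```python
import numpy as np

def firth_logistic(X, y, max_iter=100, tol=1e-8):
    """Bias-reduced logistic regression (Firth, 1993) via Newton steps.
    X : (n, p) design matrix (include an intercept column yourself).
    y : (n,) binary outcomes."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(max_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))    # fitted probabilities
        W = mu * (1.0 - mu)                       # logistic variance weights
        XtWX_inv = np.linalg.inv(X.T @ (W[:, None] * X))
        # Diagonal of the hat matrix H = W^{1/2} X (X'WX)^{-1} X' W^{1/2}
        h = W * np.einsum('ij,jk,ik->i', X, XtWX_inv, X)
        # Firth-adjusted score: standard score plus Jeffreys-prior term
        U = X.T @ (y - mu + h * (0.5 - mu))
        step = XtWX_inv @ U
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Tiny demo on separated data, where plain ML estimates would diverge:
rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=20))
y = (x > 0).astype(float)                         # perfectly separated
X = np.column_stack([np.ones_like(x), x])
print(firth_logistic(X, y))                       # finite estimates
```

The separated toy data illustrate the abstract's point: the ordinary maximum-likelihood fit has no finite solution there, while the Firth-penalized fit returns finite coefficients.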
NASA Astrophysics Data System (ADS)
Zhao, Ye; Hsieh, Yu-Te; Belshaw, Nick
2015-04-01
Silicon (Si) stable isotopes have been used in a broad range of geochemical and cosmochemical applications. A precise and accurate determination of Si isotopes is desirable to distinguish their small natural variations (< 0.2‰) in many of these studies. In the past decade, the advent of the MC-ICP-MS has spurred a remarkable improvement in the precision and accuracy of Si isotopic analysis. The instrumental mass fractionation correction is one crucial aspect of the analysis of Si isotopes. Two options are currently available: the sample-standard bracketing approach and the Mg doping approach. However, there has been a debate over the validity of the Mg doping approach. Some studies (Cardinal et al., 2003; Engström et al., 2006) favoured it over the sample-standard bracketing approach, whereas some other studies (e.g. De La Rocha, 2002) considered it unsuitable. This study investigates the Mg doping approach on both the Nu Plasma II and the Nu Plasma 1700. Experiments were performed in both the wet plasma and the dry plasma modes, using a number of different combinations of cones. A range of different Mg to Si ratios as well as different matrices have been used in the experiments. A sample-standard bracketing approach has also been adopted for the Si mass fractionation correction to compare with the Mg doping approach. Through assessing the mass fractionation behaviours of both Si and Mg under different instrument settings, this study aims to identify the factors which may affect the Mg doping approach and answer some key questions in the debate.
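The external-normalization idea behind Mg doping can be sketched with the exponential mass fractionation law: derive the fractionation exponent from the measured Mg ratio of the dopant, then apply it to the Si ratios. The measured ratios below are placeholder values, not data from this study.

```python
import numpy as np

M24, M25 = 23.985042, 24.985837   # Mg isotope masses (u)
M28, M29 = 27.976927, 28.976495   # Si isotope masses (u)

R_Mg_true = 0.12663               # assumed reference 25Mg/24Mg
R_Mg_meas = 0.12940               # measured 25Mg/24Mg of the dopant (placeholder)

# Exponential law: R_meas = R_true * (m_heavy / m_light)**beta
beta = np.log(R_Mg_meas / R_Mg_true) / np.log(M25 / M24)

R_Si_meas = 0.05115               # measured 29Si/28Si (placeholder)
R_Si_corr = R_Si_meas / (M29 / M28) ** beta   # mass-bias-corrected ratio
print(f"beta = {beta:.3f}, corrected 29Si/28Si = {R_Si_corr:.5f}")
```

The key assumption, which is exactly what the debate summarized above concerns, is that Si and Mg fractionate with the same exponent under the chosen plasma conditions.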
NASA Technical Reports Server (NTRS)
Knudsen, William C.
1992-01-01
The effect of finite grid radius and thickness on the electron current measured by planar retarding potential analyzers (RPAs) is analyzed numerically. Depending on the plasma environment, the current is significantly reduced below that which is calculated using a theoretical equation derived for an idealized RPA having grids with infinite radius and vanishingly small thickness. A correction factor to the idealized theoretical equation is derived for the Pioneer Venus (PV) orbiter RPA (ORPA) for electron gases consisting of one or more components obeying Maxwell statistics. The error in density and temperature of Maxwellian electron distributions previously derived from ORPA data using the theoretical expression for the idealized ORPA is evaluated by comparing the densities and temperatures derived from a sample of PV ORPA data using the theoretical expression with and without the correction factor.
Aide, Nicolas; Louis, Marie-Hélène; Dutoit, Soizic; Labiche, Alexandre; Lemoisson, Edwige; Briand, Mélanie; Nataf, Valérie; Poulain, Laurent; Gauduchon, Pascal; Talbot, Jean-Noël; Montravers, Françoise
2007-10-01
To evaluate the accuracy of semi-quantitative small-animal PET data, uncorrected for attenuation, and then of the same semi-quantitative data corrected by means of recovery coefficients (RCs) based on phantom studies. A phantom containing six fillable spheres (diameter range: 4.4-14 mm) was filled with an ¹⁸F-FDG solution (spheres/background activity = 10.1, 5.1 and 2.5). RCs, defined as measured activity/expected activity, were calculated. Nude rats harbouring tumours (n = 50) were imaged after injection of ¹⁸F-FDG and sacrificed. The standardized uptake value (SUV) in tumours was determined with small-animal PET and compared to ex-vivo counting (ex-vivo SUV). Small-animal PET SUVs were corrected with RCs based on the greatest tumour diameter. Tumour proliferation was assessed with cyclin A immunostaining and correlated to the SUV. RCs ranged from 0.33 for the smallest sphere to 0.72 for the largest. A sigmoidal correlation was found between RCs and sphere diameters (r² = 0.99). Small-animal PET SUVs were well correlated with ex-vivo SUVs (y = 0.48x - 0.2; r² = 0.71) and the use of RCs based on the greatest tumour diameter significantly improved regression (y = 0.84x - 0.81; r² = 0.77), except for tumours with important necrosis. Similar results were obtained without sacrificing animals, by using PET images to estimate tumour dimensions. RC-based corrections improved correlation between small-animal PET SUVs and tumour proliferation (uncorrected data: Rho = 0.79; corrected data: Rho = 0.83). Recovery correction significantly improves both accuracy of small-animal PET semi-quantitative data in rat studies and their correlation with tumour proliferation, except for largely necrotic tumours.
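A hedged sketch of the correction scheme described above: fit a sigmoidal recovery-coefficient curve to phantom data, then divide each measured SUV by the RC evaluated at the tumour's greatest diameter. The logistic functional form and all numbers are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical phantom data in the spirit of the study:
# recovery coefficient (RC) versus sphere diameter (mm).
diam = np.array([4.4, 5.5, 7.0, 9.0, 11.0, 14.0])
rc = np.array([0.33, 0.42, 0.53, 0.62, 0.68, 0.72])

def sigmoid(d, rc_max, d50, k):
    """Generic logistic curve: one plausible sigmoidal RC model."""
    return rc_max / (1.0 + np.exp(-(d - d50) / k))

popt, _ = curve_fit(sigmoid, diam, rc, p0=[0.75, 6.0, 2.0])

def corrected_suv(suv_pet, tumour_diameter_mm):
    """Partial-volume correction: divide measured SUV by the fitted RC."""
    return suv_pet / sigmoid(tumour_diameter_mm, *popt)

print(corrected_suv(2.1, 8.0))   # example tumour with 8 mm greatest diameter
```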
Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I
2017-12-01
The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio $M_{Q_{clin}}^{f_{clin}}/M_{Q_{msr}}^{f_{msr}}$ with Monte Carlo (MC) based field output factors $\Omega_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC simulated reference values from the manufacturer. PTW microDiamond correction factors for CyberKnife field sizes of 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% for the 8 mm and 4 mm collimators, respectively, needed to be applied to PTW microDiamond measurements for the LGK Perfexion. Finally, the PTW microDiamond $M_{Q_{clin}}^{f_{clin}}/M_{Q_{msr}}^{f_{msr}}$ for the linear accelerator varied from MC corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm² field size with 1.3% deviation). Given the low resulting correction factor values, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Yan; Li, Ming; Kučerka, Norbert
We have designed and constructed a temperature-controllable shear flow cell for in-situ study of flow-alignable systems. The device has been tested in neutron diffraction and has the potential to be applied in the small angle neutron scattering configuration to characterize the nanostructures of materials under flow. The required sample amount is as small as 1 ml. The shear rate on the sample is controlled by the flow rate produced by an external pump and can potentially vary from 0.11 to 3.8 × 10⁵ s⁻¹. Both unidirectional and oscillational flows are achievable by the setting of the pump. The instrument is validated using a lipid bicellar mixture, which yields non-alignable nanodisc-like bicelles at low temperature and shear-alignable membranes at high temperature. Using the shear cell, the bicellar membranes can be aligned at 31 °C under flow with a shear rate of 11.11 s⁻¹. Multiple high-order Bragg peaks are observed and the full width at half maximum of the "rocking curve" around the Bragg condition is found to be 3.5°-4.1°. It is noteworthy that a portion of the membranes remains aligned even after the flow stops. A detailed and comprehensive intensity correction for the rocking curve has been derived based on the finite rectangular sample geometry and the absorption of the neutrons as a function of sample angle [see supplementary material at http://dx.doi.org/10.1063/1.4908165 for the detailed derivation of the absorption correction]. The device offers a new capability to study the conformational or orientational anisotropy of solvated macromolecules or aggregates induced by the hydrodynamic interaction in a flow field.
Park, Seok Chan; Kim, Minjung; Noh, Jaegeun; Chung, Hoeil; Woo, Youngah; Lee, Jonghwa; Kemper, Mark S
2007-06-12
The concentration of acetaminophen in a turbid pharmaceutical suspension has been measured successfully using Raman spectroscopy. The spectrometer was equipped with a large spot probe which enabled the coverage of a representative area during sampling. This wide area illumination (WAI) scheme (coverage area 28.3 mm²) for Raman data collection proved to be more reliable for the compositional determination of these pharmaceutical suspensions, especially when the samples were turbid. The reproducibility of measurement using the WAI scheme was compared to that of a conventional small-spot scheme which employed a much smaller illumination area (about 100 µm spot size). A layer of isobutyric anhydride was placed in front of the sample vials to correct the variation in the Raman intensity due to the fluctuation of laser power. Corrections were accomplished using the isolated carbonyl band of isobutyric anhydride. The acetaminophen concentrations of prediction samples were accurately estimated using a partial least squares (PLS) calibration model. The prediction accuracy was maintained even with changes in laser power. It was noted that the prediction performance was somewhat degraded for turbid suspensions with high acetaminophen contents. When comparing the results of reproducibility obtained with the WAI scheme and those obtained using the conventional scheme, it was concluded that the quantitative determination of the active pharmaceutical ingredient (API) in turbid suspensions is much improved when employing a larger laser coverage area. This is presumably due to the improvement in representative sampling.
Dosimetry for Small and Nonstandard Fields
NASA Astrophysics Data System (ADS)
Junell, Stephanie L.
The proposed small and non-standard field dosimetry protocol from the joint International Atomic Energy Agency (IAEA) and American Association of Physicists in Medicine working group introduces new reference field conditions for ionization chamber based reference dosimetry. Absorbed dose beam quality conversion factors (k_Q factors) corresponding to this formalism were determined for three different models of ionization chambers: a Farmer-type ionization chamber, a thimble ionization chamber, and a small volume ionization chamber. Beam quality correction factor measurements were made in a specially developed cylindrical polymethyl methacrylate (PMMA) phantom and a water phantom using thermoluminescent dosimeters (TLDs) and alanine dosimeters to determine dose to water. The TLD system for absorbed dose to water determination in high energy photon and electron beams was fully characterized as part of this dissertation. The behavior of the beam quality correction factor was observed as it transfers the calibration coefficient from the University of Wisconsin Accredited Dosimetry Calibration Laboratory (UWADCL) ⁶⁰Co reference beam to the small field calibration conditions of the small field formalism. TLD-determined beam quality correction factors for the calibration conditions investigated ranged from 0.97 to 1.30 and had associated standard deviations from 1% to 3%. The alanine-determined beam quality correction factors ranged from 0.996 to 1.293. Volume averaging effects were observed with the Farmer-type ionization chamber in the small static field conditions. The proposed protocol's new composite-field reference condition demonstrated its potential to reduce or remove ionization chamber volume dependencies, but the measured beam quality correction factors were not equal to the standard CoP's k_Q, indicating a change in beam quality in the new composite-field reference condition relative to the standard broad beam reference conditions. The TLD- and alanine-determined beam quality correction factors in the composite-field reference conditions were approximately 3% greater and differed by more than one standard deviation from the published TG-51 k_Q values for all three chambers.
A beam hardening and dispersion correction for x-ray dark-field radiography.
Pelzer, Georg; Anton, Gisela; Horn, Florian; Rieger, Jens; Ritter, André; Wandner, Johannes; Weber, Thomas; Michel, Thilo
2016-06-01
X-ray dark-field imaging promises information on the small angle scattering properties even of large samples. However, the dark-field image is correlated with the object's attenuation and phase shift if a polychromatic x-ray spectrum is used. A method to remove part of these correlations is proposed. The experimental setup for image acquisition was modeled in a wave-field simulation to quantify the dark-field signals originating solely from a material's attenuation and phase shift. A calibration matrix was simulated for ICRU46 breast tissue. Using the simulated data, a dark-field image of a human mastectomy sample was corrected for the fingerprint of the attenuation and phase images. Comparing the simulated, attenuation-based dark-field values to a phantom measurement, good agreement was found. Applying the proposed method to mammographic dark-field data, a reduction of the dark-field background and anatomical noise was achieved. The contrast between microcalcifications and their surrounding background was increased. The authors show that the influence of beam hardening and dispersion can be quantified by simulation and, thus, measured image data can be corrected. The simulation allows one to determine the corresponding dark-field artifacts for a wide range of setup parameters, such as tube voltage and filtration. The application of the proposed method to mammographic dark-field data shows an increase in contrast compared to the original image, which might simplify subsequent image-based diagnosis.
Range of validity for perturbative treatments of relativistic sum rules
NASA Astrophysics Data System (ADS)
Cohen, Scott M.
2003-10-01
The range of validity of perturbative calculations of relativistic sum rules is investigated by calculating the second-order relativistic corrections to the Bethe sum rule and its small momentum limit, the Thomas-Reiche-Kuhn (TRK) sum rule. For the TRK sum rule and atomic systems, the second-order correction is found to be less than 0.5% up to about Z=70. The total relativistic corrections should then be accurate at least through this range of Z, and probably beyond this range if the second-order terms are included. For Rn (Z=86), however, the second-order corrections are nearly 1%. The total corrections to the Bethe sum rule are largest at small momentum, never being significantly larger than the corresponding corrections to the TRK sum rule. The first-order corrections to the Bethe sum rule also give better than 0.5% accuracy for Z<70, and inclusion of the second-order corrections should extend this range, as well.
TNO/Centaurs grouping tested with asteroid data sets
NASA Astrophysics Data System (ADS)
Fulchignoni, M.; Birlan, M.; Barucci, M. A.
2001-11-01
Recently, we have discussed the possible subdivision into a few groups of a sample of 22 TNOs and Centaurs for which BVRIJ photometry was available (Barucci et al., 2001, A&A, 371, 1150). We obtained these results using the multivariate statistics adopted to define the current asteroid taxonomy, namely Principal Components Analysis and the G-mode method (Tholen & Barucci, 1989, in ASTEROIDS II). How do these methods work with a very small statistical sample such as the TNO/Centaur one? Theoretically, the number of degrees of freedom of the sample is adequate: it is 88 in our case, and it has to be larger than 50 to cope with the requirements of the G-mode. Does a random sampling of a small number of members of a large population contain enough information to reveal some structure in the population? We extracted several samples of 22 asteroids out of a database of 86 objects of known taxonomic type for which BVRIJ photometry is available from ECAS (Zellner et al., 1985, ICARUS 61, 355), SMASS II (S.W. Bus, 1999, PhD Thesis, MIT), and the Bell et al. atlas of asteroid infrared spectra. The objects constituting the first sample were selected to give a good representation of the major asteroid taxonomic classes (at least three members per class): C, S, D, A, and G. Both methods were able to distinguish all these groups, confirming the validity of the adopted methods. The S class is hard to individuate as a consequence of the choice of the I and J variables, which implies a lack of information on the absorption band at 1 micron. The other samples were obtained by random choice of objects. Not all the major groups were well represented (fewer than three members per group), but the general trend of the asteroid taxonomy was always recovered. We conclude that the quoted grouping of TNOs/Centaurs is representative of some physico-chemical structure of the outer solar system small body population.
Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean
2014-01-01
MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, this method seems to be the most efficient in correcting sampling bias and should be advised in most cases.
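One plausible implementation of the best-performing correction, systematic (spatial) sampling of records, is to thin occurrences to at most one record per grid cell. The sketch below assumes simple lon/lat arrays and is not the authors' exact procedure; cell size and names are illustrative.

```python
import numpy as np

def grid_thin(lon, lat, cell_deg=0.5, seed=0):
    """Thin occurrence records to one randomly chosen point per grid
    cell -- a simple reading of 'systematic sampling' of records.
    lon, lat : 1-D arrays of record coordinates (decimal degrees)."""
    rng = np.random.default_rng(seed)
    ix = np.floor(lon / cell_deg).astype(int)
    iy = np.floor(lat / cell_deg).astype(int)
    keep = {}
    for i in rng.permutation(len(lon)):   # random record wins each cell
        keep.setdefault((ix[i], iy[i]), i)
    idx = np.sort(np.fromiter(keep.values(), dtype=int))
    return lon[idx], lat[idx]

# Demo: spatially clustered records thin to at most one per cell
rng = np.random.default_rng(1)
lon = rng.normal(-75, 2, 1000)
lat = rng.normal(40, 2, 1000)
print(len(grid_thin(lon, lat)[0]), "records kept of", len(lon))
```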
NASA Astrophysics Data System (ADS)
Bonin, Timothy A.; Goines, David C.; Scott, Aaron K.; Wainwright, Charlotte E.; Gibbs, Jeremy A.; Chilson, Phillip B.
2015-06-01
The structure function is often used to quantify the intensity of spatial inhomogeneities within turbulent flows. Here, the Small Multifunction Research and Teaching Sonde (SMARTSonde), an unmanned aerial system, is used to measure horizontal variations in temperature and to calculate the structure function of temperature at various heights for a range of separation distances. A method for correcting for the advection of turbulence in the calculation of the structure function is discussed. This advection correction improves the data quality, particularly when wind speeds are high. The temperature structure-function parameter C_T² can be calculated from the structure function of temperature. Two case studies in which the SMARTSonde took measurements used to derive C_T² at several heights during multiple consecutive flights are discussed and compared with sodar measurements, for which the return power is directly related to C_T². Profiles of C_T² from both the sodar and SMARTSonde for an afternoon case exhibited generally good agreement. However, the profiles agreed poorly for a morning case. The discrepancies are partially attributed to different averaging times for the two instruments in a rapidly evolving environment, and to the measurement errors associated with the SMARTSonde sampling within the stable boundary layer.
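A minimal sketch of the underlying computation (without the advection correction, which is the paper's contribution): estimate the second-order structure function from a horizontal transect and convert it to C_T² via the inertial-range 2/3 law. All names and data below are illustrative.

```python
import numpy as np

def structure_function(T, x, separations):
    """Second-order structure function D_T(r) = <[T(x+r) - T(x)]^2>
    from an approximately horizontal transect of temperatures T at
    positions x (metres). No advection correction is applied here."""
    dx = np.mean(np.diff(x))
    D = np.empty(len(separations))
    for k, r in enumerate(separations):
        lag = max(1, int(round(r / dx)))
        dT = T[lag:] - T[:-lag]
        D[k] = np.mean(dT ** 2)
    return D

x = np.arange(0, 500, 1.0)                         # 1 m spacing transect
rng = np.random.default_rng(0)
T = 290 + np.cumsum(rng.normal(0, 0.01, x.size))   # toy temperature series
r = np.array([5.0, 10.0, 20.0, 40.0])
D = structure_function(T, x, r)
CT2 = D / r ** (2.0 / 3.0)       # C_T^2 from D_T(r) = C_T^2 * r^(2/3)
print(CT2)                       # roughly constant only if the 2/3 law holds
```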
Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan
2016-07-01
We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
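The flavor of this result can be reproduced with a short simulation: draw overdispersed counts from a beta-binomial model (whose intracluster correlation is 1/(a+b+1)) and measure the bias of the arcsine-transformed proportion. All parameters are illustrative; the paper's own bias-correction is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 50, 0.2, 200_000    # cluster size, true probability, replicates

for rho in (0.0, 0.05, 0.1, 0.2):    # intracluster correlation
    if rho == 0.0:
        x = rng.binomial(n, p, reps)
    else:
        ab = (1.0 - rho) / rho        # beta-binomial with ICC = 1/(a+b+1)
        pi = rng.beta(p * ab, (1.0 - p) * ab, reps)
        x = rng.binomial(n, pi)
    phat = x / n
    bias = np.mean(np.arcsin(np.sqrt(phat))) - np.arcsin(np.sqrt(p))
    print(f"rho={rho:.2f}  bias on arcsine scale = {bias:+.4f}")
```

Consistent with the abstract, the printed bias grows roughly linearly with rho (a small residual bias at rho = 0 comes from the transform's curvature alone).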
Systematic investigation of NLTE phenomena in the limit of small departures from LTE
NASA Astrophysics Data System (ADS)
Libby, S. B.; Graziani, F. R.; More, R. M.; Kato, T.
1997-04-01
In this paper, we begin a systematic study of Non-Local Thermal Equilibrium (NLTE) phenomena in near equilibrium (LTE) high energy density, highly radiative plasmas. It is shown that the principle of minimum entropy production rate characterizes NLTE steady states for average atom rate equations in the case of small departures from LTE. With the aid of a novel hohlraum-reaction box thought experiment, we use the principles of minimum entropy production and detailed balance to derive Onsager reciprocity relations for the NLTE responses of a near equilibrium sample to non-Planckian perturbations in different frequency groups. This result is a significant symmetry constraint on the linear corrections to Kirchhoff's law. We envisage applying our strategy to a number of test problems which include: the NLTE corrections to the ionization state of an ion located near the edge of an otherwise LTE medium; the effect of a monochromatic radiation field perturbation on an LTE medium; the deviation of Rydberg state populations from LTE in recombining or ionizing plasmas; multi-electron temperature models such as that of Busquet; and finally, the effect of NLTE population shifts on opacity models.
A simple on-line arterial time-activity curve detector for [O-15] water PET studies
NASA Astrophysics Data System (ADS)
Wollenweber, S. D.; Hichwa, R. D.; Ponto, L. L. B.
1997-08-01
A simple, automated on-line detector system has been fabricated and implemented to detect the arterial time-activity curve (TAC) for bolus-injection [O-15] water PET studies. This system offers two significant improvements over existing systems: a pump mechanism is not required to control arterial blood flow through the detector, and correction of the time-activity curve for dispersion in external tubing is unnecessary. The [O-15] positrons emanating from blood within a thin-walled, 0.134 cm inner-diameter plastic tube are detected by a 0.5 cm wide by 1.0 cm long by 0.1 cm thick plastic scintillator mounted to a miniature PMT. Photon background is reduced to insignificant levels by a 2.0 cm thick cylindrical lead shield. Mean cerebral blood flow (mCBF) determined from an autoradiographic model and from the TAC measured by 1-second automated sampling was compared to that calculated from a TAC acquired using 5-second integrated manual samples. Improvements in timing resolution (1 s vs. 5 s) cause small but significant differences between the two sampling methods. Dispersion is minimized due to small tubing diameters, short lengths of tubing between the radial arterial sampling site and the detector, and the presence of a 3-way valve 10 cm proximal to the detector.
Rogers, Paul; Stoner, Julie
2016-01-01
Regression models for correlated binary outcomes are commonly fit using a Generalized Estimating Equations (GEE) methodology. GEE uses the Liang and Zeger sandwich estimator to produce unbiased standard error estimators for regression coefficients in large sample settings even when the covariance structure is misspecified. The sandwich estimator performs optimally in balanced designs when the number of participants is large, and there are few repeated measurements. The sandwich estimator is not without drawbacks; its asymptotic properties do not hold in small sample settings. In these situations, the sandwich estimator is biased downwards, underestimating the variances. In this project, a modified form for the sandwich estimator is proposed to correct this deficiency. The performance of this new sandwich estimator is compared to the traditional Liang and Zeger estimator as well as alternative forms proposed by Morel, Pan and Mancl and DeRouen. The performance of each estimator was assessed with 95% coverage probabilities for the regression coefficient estimators using simulated data under various combinations of sample sizes and outcome prevalence values with an Independence (IND), Autoregressive (AR) and Compound Symmetry (CS) correlation structure. This research is motivated by investigations involving rare-event outcomes in aviation data. PMID:26998504
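For context, below is a sketch of one published small-sample correction of this kind: the Mancl and DeRouen (2001) leverage-adjusted sandwich for logistic GEE with a working-independence correlation. It is not the modified estimator proposed in the abstract above, and the toy data are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def md_sandwich(X, y, cluster):
    """Mancl & DeRouen (2001)-type bias-corrected sandwich covariance
    for logistic GEE with an independence working correlation."""
    fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
    mu = fit.predict(X)
    W = mu * (1.0 - mu)
    bread = np.linalg.inv(X.T @ (W[:, None] * X))   # (X'WX)^{-1}
    meat = np.zeros_like(bread)
    for g in np.unique(cluster):
        i = cluster == g
        Xi, ei, Wi = X[i], y[i] - mu[i], W[i]
        # Cluster leverage H_i = A_i X_i (X'WX)^{-1} X_i'
        Hi = (Wi[:, None] * Xi) @ bread @ Xi.T
        ei_adj = np.linalg.solve(np.eye(i.sum()) - Hi, ei)  # inflate residuals
        meat += np.outer(Xi.T @ ei_adj, Xi.T @ ei_adj)
    return bread @ meat @ bread                      # covariance of beta

# Toy use: 20 clusters of 5 correlated binary outcomes
rng = np.random.default_rng(0)
cl = np.repeat(np.arange(20), 5)
x = rng.normal(size=100)
b = rng.normal(0, 0.5, 20)[cl]                       # shared cluster effects
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 * x + b))))
X = np.column_stack([np.ones(100), x])
print(np.sqrt(np.diag(md_sandwich(X, y, cl))))       # corrected SEs
```

The leverage adjustment inflates each cluster's residual contribution, which offsets the downward bias of the plain Liang-Zeger estimator when the number of clusters is small.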
Unsupervised Learning — A Novel Clustering Method for Rolling Bearing Faults Identification
NASA Astrophysics Data System (ADS)
Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang
2017-12-01
To promptly process the massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. Among these studies, supervised learning methods such as artificial neural networks, support vector machines, and decision trees are commonly used. These methods can detect the failure of rolling bearings effectively, but to achieve better detection results they often require a large number of training samples. On this basis, a novel clustering method is proposed in this paper. This novel method is able to find the correct number of clusters automatically. The effectiveness of the proposed method is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect the fault types of small samples. Meanwhile, the diagnosis accuracy remains relatively high even for massive samples.
NASA Technical Reports Server (NTRS)
Wyman, D.; Steinman, R. M.
1973-01-01
Recently Timberlake, Wyman, Skavenski, and Steinman (1972) concluded in a study of the oculomotor error signal in the fovea that 'the oculomotor dead zone is surely smaller than 10 min and may even be less than 5 min (smaller than the 0.25 to 0.5 deg dead zone reported by Rashbass (1961) with similar stimulus conditions).' The Timberlake et al. speculation is confirmed by demonstrating that the fixating eye consistently and accurately corrects target displacements as small as 3.4 min. The contact lens optical lever technique was used to study the manner in which the oculomotor system responds to small step displacements of the fixation target. Subjects did, without prior practice, use saccades to correct step displacements of the fixation target just as they correct small position errors during maintained fixation.
NASA Astrophysics Data System (ADS)
Oelze, Michael L.; O'Brien, William D.
2004-11-01
Backscattered rf signals used to construct conventional ultrasound B-mode images contain frequency-dependent information that can be examined through the backscattered power spectrum. The backscattered power spectrum is found by taking the magnitude squared of the Fourier transform of a gated time segment corresponding to a region in the scattering volume. When a time segment is gated, the edges of the gated regions change the frequency content of the backscattered power spectrum due to truncating of the waveform. Tapered windows, like the Hanning window, and longer gate lengths reduce the relative contribution of the gate-edge effects. A new gate-edge correction factor was developed that partially accounted for the edge effects. The gate-edge correction factor gave more accurate estimates of scatterer properties at small gate lengths compared to conventional windowing functions. The gate-edge correction factor gave estimates of scatterer properties within 5% of actual values at very small gate lengths (less than 5 spatial pulse lengths) in both simulations and from measurements on glass-bead phantoms. While the gate-edge correction factor gave higher accuracy of estimates at smaller gate lengths, the precision of estimates was not improved at small gate lengths over conventional windowing functions.
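A minimal sketch of the baseline computation the abstract builds on: gate a time segment of the RF line, apply a tapered (Hanning) window, and take the magnitude-squared FFT. The gate-edge correction factor itself is not reproduced; names and parameters are illustrative.

```python
import numpy as np

def gated_power_spectrum(rf, fs, t0, gate_us, window="hann"):
    """Power spectrum of a gated RF segment: |FFT of windowed gate|^2.
    rf : 1-D backscattered RF signal, fs : sampling rate (Hz),
    t0 : gate start (s), gate_us : gate length (microseconds)."""
    i0 = int(t0 * fs)
    n = int(gate_us * 1e-6 * fs)
    seg = rf[i0:i0 + n].astype(float)
    if window == "hann":
        seg = seg * np.hanning(n)     # tapered gate reduces edge effects
    spec = np.abs(np.fft.rfft(seg)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

# Toy RF line: 5 MHz scattering signal sampled at 40 MHz
fs = 40e6
t = np.arange(0, 40e-6, 1 / fs)
rng = np.random.default_rng(0)
rf = rng.normal(size=t.size) * np.sin(2 * np.pi * 5e6 * t)
f, P = gated_power_spectrum(rf, fs, t0=10e-6, gate_us=4.0)
print(f[np.argmax(P)])                # spectral peak near the 5 MHz band
```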
Color and Vector Flow Imaging in Parallel Ultrasound With Sub-Nyquist Sampling.
Madiena, Craig; Faurie, Julia; Poree, Jonathan; Garcia, Damien
2018-05-01
RF acquisition with a high-performance multichannel ultrasound system generates massive data sets in short periods of time, especially in "ultrafast" ultrasound when digital receive beamforming is required. Sampling at a rate four times the carrier frequency is the standard procedure since this rule complies with the Nyquist-Shannon sampling theorem and simplifies quadrature sampling. Bandpass sampling (or undersampling) outputs a bandpass signal at a rate lower than the maximal frequency without harmful aliasing. Advantages over Nyquist sampling are reduced storage volumes and data workflow, and simplified digital signal processing tasks. We used RF undersampling in color flow imaging (CFI) and vector flow imaging (VFI) to decrease data volume significantly (factor of 3 to 13 in our configurations). CFI and VFI with Nyquist and sub-Nyquist samplings were compared in vitro and in vivo. The estimate errors due to undersampling were small or marginal, which illustrates that Doppler and vector Doppler images can be correctly computed with a drastically reduced amount of RF samples. Undersampling can be a method of choice in CFI and VFI to avoid information overload and reduce data transfer and storage.
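The undersampling idea can be illustrated with a toy simulation: choosing fs = 4·f0/(2k+1) aliases the carrier to fs/4 (convenient for quadrature processing) while preserving the band as long as it fits within fs/2. The parameters below are illustrative, not the paper's acquisition settings.

```python
import numpy as np

f0, bw = 5e6, 2.0e6             # illustrative carrier and bandwidth
fs_nyq = 4 * f0                 # conventional rate: 4x the carrier
k = 1
fs_sub = 4 * f0 / (2 * k + 1)   # bandpass sampling: carrier aliases to fs/4

# Simulate a short RF burst at fine resolution, then "sample" it
fhi = 200e6
t = np.arange(0, 20e-6, 1 / fhi)
env = np.exp(-((t - 10e-6) ** 2) / (2 * (1 / bw) ** 2))   # Gaussian envelope
rf = env * np.cos(2 * np.pi * f0 * t)

def sample(rf, fhi, fs):
    step = int(round(fhi / fs))
    return rf[::step], fhi / step

for fs in (fs_nyq, fs_sub):
    x, fs_eff = sample(rf, fhi, fs)
    freqs = np.fft.rfftfreq(x.size, 1 / fs_eff)
    peak = freqs[np.argmax(np.abs(np.fft.rfft(x)))]
    print(f"fs = {fs_eff/1e6:5.2f} MHz -> spectral peak at {peak/1e6:4.2f} MHz")
# Expected: ~5 MHz at 20 MHz sampling; ~fs/4 = 1.67 MHz at 6.67 MHz sampling
```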
NASA Astrophysics Data System (ADS)
Glazner, Allen F.; Sadler, Peter M.
2016-12-01
The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ~80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is (n + 1)/(n - 1).
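A quick Monte Carlo check of these figures (a sketch; names illustrative): for n dates drawn uniformly over an interval of length T, the expected sample range is T(n - 1)/(n + 1), so multiplying the observed range by (n + 1)/(n - 1) recovers the true duration on average.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0                              # true interval length (arbitrary units)
for n in (5, 10):
    ages = rng.uniform(0.0, T, size=(100_000, n))
    spans = ages.max(axis=1) - ages.min(axis=1)
    mean_span = spans.mean()                  # ~ T * (n - 1)/(n + 1)
    corrected = mean_span * (n + 1) / (n - 1)
    print(f"n={n}: mean span = {mean_span:.3f} T, corrected = {corrected:.3f} T")
# n=5 captures ~2/3 of T (true interval ~50% longer); n=10 captures ~82%
```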
Using a Divided Bar Apparatus to Measure Thermal Conductivity of Samples of Odd Sizes and Shapes
NASA Astrophysics Data System (ADS)
Crowell, J.; Gosnold, W. D.
2012-12-01
Standard procedure for measuring thermal conductivity using a divided bar apparatus requires a sample that has the same surface dimensions as the heat sink/source surface in the divided bar. Heat flow is assumed to be constant throughout the column, and thermal conductivity (K) is determined by measuring temperatures (T) across the sample and across standard layers and using the basic relationship K_sample = K_standard × ((ΔT₁ + ΔT₂)/2)/ΔT_sample. Sometimes samples are not large enough or of the correct proportions to match the surface of the heat sink/source; however, using the equations presented here, the thermal conductivity of these samples can still be measured with a divided bar. Measurements were done on the UND Geothermal Laboratory's stationary divided bar apparatus (SDB). This SDB has been designed to mimic many in-situ conditions, with a temperature range of −20 °C to 150 °C and a pressure range of 0 to 10,000 psi for samples with parallel surfaces and 0 to 3,000 psi for samples with non-parallel surfaces. The heat sink/source surfaces are copper disks with a surface area of 1,772 mm² (2.74 in²). Layers of polycarbonate 6 mm thick with the same surface area as the copper disks are located in the heat sink and in the heat source as standards. For this study, all samples were prepared from a single piece of 4 inch limestone core. Thermal conductivities were measured for each sample as it was cut successively smaller. The above equation was adjusted to include the thicknesses (Th) of the samples and the standards and the surface areas (A) of the heat sink/source and of the sample: K_sample = (K_standard × A_standard × Th_sample × (ΔT₁ + ΔT₃))/(ΔT_sample × A_sample × 2 × Th_standard). Measuring the thermal conductivity of samples of multiple sizes, shapes, and thicknesses gave consistent values for samples with surfaces as small as 50% of the heat sink/source surface, regardless of the shape of the sample. Measuring samples with surfaces smaller than 50% of the heat sink/source surface resulted in thermal conductivity values which were too high. The cause of the error with the smaller samples is being examined, as is the relationship between the amount of error in the thermal conductivity and the difference in surface areas. As more measurements are made, an equation to mathematically correct for the error is being developed in case a way to physically correct the problem cannot be determined.
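The adjusted relation above is easy to wrap as a function; the numbers in the demo call are invented, and the polycarbonate conductivity (~0.2 W m⁻¹ K⁻¹) is an assumed textbook value rather than the laboratory's calibration.

```python
def k_sample(k_std, A_std, A_samp, th_std, th_samp, dT1, dT3, dT_samp):
    """Thermal conductivity from the adjusted divided-bar relation:
    K_sample = K_std * A_std * Th_sample * (dT1 + dT3)
               / (dT_sample * A_sample * 2 * Th_std).
    Areas in mm^2 and thicknesses in mm cancel; result keeps k_std's
    units (W m^-1 K^-1)."""
    return (k_std * A_std * th_samp * (dT1 + dT3)) / (
        dT_samp * A_samp * 2.0 * th_std)

# Illustrative numbers only (polycarbonate standard K ~ 0.2 W/m/K):
print(k_sample(k_std=0.2, A_std=1772.0, A_samp=1100.0,
               th_std=6.0, th_samp=25.0,
               dT1=2.1, dT3=2.3, dT_samp=1.5))
```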
Chan, Tommy C Y; Wang, Yan; Ng, Alex L K; Zhang, Jiamei; Yu, Marco C Y; Jhanji, Vishal; Cheng, George P M
2018-06-13
To compare the astigmatic correction in high myopic astigmatism between small-incision lenticule extraction and laser in situ keratomileusis (LASIK) using vector analysis. Hong Kong Laser Eye Center, Hong Kong. Retrospective case series. Patients who had correction of myopic astigmatism of 3.0 diopters (D) or more and had either small-incision lenticule extraction or femtosecond laser-assisted LASIK were included. Only the left eye was included for analysis. Visual and refractive results were presented and compared between groups. The study comprised 105 patients (40 eyes in the small-incision lenticule extraction group and 65 eyes in the femtosecond laser-assisted LASIK group). The mean preoperative manifest cylinder was -3.42 ± 0.55 D (SD) in the small-incision lenticule extraction group and -3.47 ± 0.49 D in the LASIK group (P = .655). At 3 months, there was no significant between-group difference in uncorrected distance visual acuity (P = .915) and manifest spherical equivalent (P = .145). Ninety percent and 95.4% of eyes were within ±0.5 D of the attempted cylindrical correction for the small-incision lenticule extraction and LASIK groups, respectively (P = .423). Vector analysis showed comparable target-induced astigmatism (P = .709), surgically induced astigmatism vector (P = .449), difference vector (P = .335), and magnitude of error (P = .413) between groups. The absolute angle of error was 1.88 ± 2.25 degrees in the small-incision lenticule extraction group and 1.37 ± 1.58 degrees in the LASIK group (P = .217). Small-incision lenticule extraction offered astigmatic correction comparable to LASIK in eyes with high myopic astigmatism. Copyright © 2018 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
O'Brien, D J; León-Vintró, L; McClean, B
2016-01-01
The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree, and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived based on small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived based on calculations from a Geant4 Monte Carlo model of the detectors and the Linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by 3.7% for a 1.1 cm diameter field and more for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
X-ray microanalytical surveys of minor element concentrations in unsectioned biological samples
NASA Astrophysics Data System (ADS)
Schofield, R. M. S.; Lefevre, H. W.; Overley, J. C.; Macdonald, J. D.
1988-03-01
Approximate concentration maps of small unsectioned biological samples are made using the pixel by pixel ratio of PIXE images to areal density images. Areal density images are derived from scanning transmission ion microscopy (STIM) proton energy-loss images. Corrections for X-ray production cross section variations, X-ray attenuation, and depth averaging are approximated or ignored. Estimates of the magnitude of the resulting error are made. Approximate calcium concentrations within the head of a fruit fly are reported. Concentrations in the retinula cell region of the eye average about 1 mg/g dry weight. Concentrations of zinc in the mandible of several ant species average about 40 mg/g. Zinc concentrations in the stomachs of these ants are at least 1 mg/g.
Shankar, Vijay; Reo, Nicholas V; Paliy, Oleg
2015-12-09
We previously showed that stool samples of pre-adolescent and adolescent US children diagnosed with diarrhea-predominant IBS (IBS-D) had different compositions of microbiota and metabolites compared to healthy age-matched controls. Here we explored whether the observed fecal microbiota and metabolite differences between these two adolescent populations can be used to discriminate between IBS and health. We constructed individual microbiota- and metabolite-based sample classification models based on partial least squares multivariate analysis and then applied a Bayesian approach to integrate the individual models into a single classifier. The resulting combined classification achieved 84% accuracy of correct sample group assignment and 86% prediction accuracy for IBS-D in cross-validation tests. The performance of the cumulative classification model was further validated by the de novo analysis of stool samples from a small independent IBS-D cohort. High-throughput microbial and metabolite profiling of subject stool samples can be used to facilitate IBS diagnosis.
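A hedged sketch of the general strategy: build a PLS-based classifier per data block, then fuse their probabilities with a naive-Bayes product rule. sklearn's PLSRegression is used as a common stand-in for PLS-DA; nothing here reproduces the authors' exact pipeline or data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pls_da_prob(X_train, y_train, X_test, n_components=2):
    """PLS-DA stand-in: regress binary labels on the block with PLS
    and clip scores to rough probabilities (illustrative only)."""
    pls = PLSRegression(n_components=n_components).fit(X_train, y_train)
    return np.clip(pls.predict(X_test).ravel(), 0.01, 0.99)

def combine_bayes(p1, p2, prior=0.5):
    """Naive-Bayes fusion of two classifiers' posterior probabilities."""
    prior_odds = prior / (1 - prior)
    odds = (p1 / (1 - p1)) * (p2 / (1 - p2)) / prior_odds
    return odds / (1 + odds)

# Toy data: microbiota block (30 features) and metabolite block (15)
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 60)
Xm = rng.normal(size=(60, 30)) + 0.8 * y[:, None]
Xb = rng.normal(size=(60, 15)) + 0.8 * y[:, None]
p = combine_bayes(pls_da_prob(Xm[:40], y[:40], Xm[40:]),
                  pls_da_prob(Xb[:40], y[:40], Xb[40:]))
print(np.mean((p > 0.5) == y[40:]))   # held-out accuracy
```

The product rule assumes the two blocks are conditionally independent given the class, which is the usual simplification when fusing classifiers trained on separate data types.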
Chang, Melinda Y.; Pineles, Stacy L.; Velez, Federico G.
2015-01-01
PURPOSE To evaluate the effectiveness of adjustable small-incision selective tenotomy and plication of vertical rectus muscles in correcting vertical strabismus incomitant in horizontal gaze positions and cyclotorsion. METHODS The medical records of all patients who underwent adjustable small-incision selective tenotomy or plication of a vertical rectus muscle for correction of horizontally incomitant vertical strabismus or cyclotorsion by a single surgeon at a single eye institute from July 2013 to September 2014 were retrospectively reviewed. Selective tenotomy and plication were performed on either the nasal or temporal side of vertical rectus muscles, based on the direction of cyclotorsion and incomitance of vertical strabismus. RESULTS Of 9 patients identified, 8 (89%) had successful correction of horizontally incomitant vertical strabismus, with postoperative vertical alignment within 4Δ of orthotropia in primary position, lateral gazes, and downgaze. Of the 8 patients with preoperative cyclotorsion, 4 (50%) were successfully corrected, with <5° of cyclotorsion postoperatively. Of the 4 patients in whom cyclotorsion did not improve, 3 had undergone prior strabismus surgery, and 2 had restrictive strabismus. Eight of the 9 patients (89%) reported postoperative resolution of diplopia. CONCLUSIONS Adjustable small-incision selective tenotomy and plication effectively treat horizontally incomitant vertical strabismus. These surgeries may be less effective for correcting cyclotorsion in patients with restriction or prior strabismus surgery. Advantages are that they may be performed in an adjustable manner and, in some cases, under topical anesthesia. PMID:26486021
Pozzi, P; Wilding, D; Soloviev, O; Verstraete, H; Bliek, L; Vdovin, G; Verhaegen, M
2017-01-23
The quality of fluorescence microscopy images is often impaired by the presence of sample-induced optical aberrations. Adaptive optical elements such as deformable mirrors or spatial light modulators can be used to correct aberrations. However, previously reported techniques either require special sample preparation or time-consuming optimization procedures for the correction of static aberrations. This paper reports a technique for optical sectioning fluorescence microscopy capable of correcting dynamic aberrations in any fluorescent sample during the acquisition. This is achieved by implementing adaptive optics in a non-conventional confocal microscopy setup, with multiple programmable confocal apertures, in which out-of-focus light can be separately detected and used to optimize the correction performance with a sampling frequency an order of magnitude faster than the imaging rate of the system. The paper reports results comparing the correction performance to traditional image optimization algorithms, and demonstrates how the system can compensate for dynamic changes in the aberrations, such as those introduced during a focal stack acquisition through a thick sample.
Bobby, Zachariah; Nandeesha, H; Sridhar, M G; Soundravally, R; Setiya, Sajita; Babu, M Sathish; Niranjan, G
2014-01-01
Graduate medical students often get little opportunity to clarify their doubts and to reinforce their concepts after lecture classes. The Medical Council of India (MCI) encourages group discussions among students. We evaluated the effect of identifying mistakes in a given set of wrong statements and correcting them through a small group discussion among graduate medical students as a revision exercise. At the end of a module, a pre-test consisting of multiple-choice questions (MCQs) was conducted. Later, a set of incorrect statements related to the topic was given to the students, and they were asked to identify the mistakes and correct them in a small group discussion. The effects on low, medium and high achievers were evaluated by a post-test and delayed post-tests with the same set of MCQs. The mean post-test marks were significantly higher among all three groups compared to the pre-test marks. The gain from the small group discussion was equal among low, medium and high achievers. The gain from the exercise was retained among low, medium and high achievers after 15 days. Identification of mistakes in statements and their correction by a small group discussion is an effective, but unconventional, revision exercise in biochemistry. Copyright 2014, NMJI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tyrey, L.; Hammond, C.B.
1976-05-15
Antiserum generated against the hormone-specific β-subunit of hCG was used with different labeled antigens to measure circulating hCG in patients having trophoblastic disease. When ¹²⁵I-hCGβ served as the labeled antigen, a small number of patient sera failed to show parallelism with the second IS-hCG reference and erroneous estimates of hormone concentrations were obtained. Replacement of the ¹²⁵I-hCGβ with labeled hCG corrected the nonparallelism exhibited by these samples. Inhibition curves obtained with purified hCG and hCGβ suggested that both the nonparallelism and its correction with the change in labeled antigen would be consistent with the possibility that this assay aberration may result from the presence of free hCGβ in these sera. (auth)
Computing correct truncated excited state wavefunctions
NASA Astrophysics Data System (ADS)
Bacalis, N. C.; Xiong, Z.; Zang, J.; Karaoulanis, D.
2016-12-01
We demonstrate that, if a wave function's truncated expansion is small, then the standard excited states computational method, of optimizing one "root" of a secular equation, may lead to an incorrect wave function - despite the correct energy according to the theorem of Hylleraas, Undheim and McDonald - whereas our proposed method [J. Comput. Meth. Sci. Eng. 8, 277 (2008)] (independent of orthogonality to lower lying approximants) leads to correct reliable small truncated wave functions. The demonstration is done in He excited states, using truncated series expansions in Hylleraas coordinates, as well as standard configuration-interaction truncated expansions.
76 FR 44010 - Medicare Program; Hospice Wage Index for Fiscal Year 2012; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
.... 93.774, Medicare-- Supplementary Medical Insurance Program) Dated: July 15, 2011. Dawn L. Smalls... corrects technical errors that appeared in the notice of CMS ruling published in the Federal Register on... FR 26731), there were technical errors that are identified and corrected in the Correction of Errors...
Imaging single atoms using secondary electrons with an aberration-corrected electron microscope.
Zhu, Y; Inada, H; Nakamura, K; Wall, J
2009-10-01
Aberration correction has embarked on a new frontier in electron microscopy by overcoming the limitations of conventional round lenses, providing sub-angstrom-sized probes. However, improvement of spatial resolution using aberration correction so far has been limited to the use of transmitted electrons both in scanning and stationary mode, with an improvement of 20-40% (refs 3-8). In contrast, advances in the spatial resolution of scanning electron microscopes (SEMs), which are by far the most widely used instrument for surface imaging at the micrometre-nanometre scale, have been stagnant, despite several recent efforts. Here, we report a new SEM, with aberration correction, able to image single atoms by detecting electrons emerging from its surface as a result of interaction with the small probe. The spatial resolution achieved represents a fourfold improvement over the best-reported resolution in any SEM (refs 10-12). Furthermore, we can simultaneously probe the sample through its entire thickness with transmitted electrons. This ability is significant because it permits the selective visualization of bulk atoms and surface ones, beyond a traditional two-dimensional projection in transmission electron microscopy. It has the potential to revolutionize the field of microscopy and imaging, thereby opening the door to a wide range of applications, especially when combined with simultaneous nanoprobe spectroscopy.
Abe, Hitoshi; Niwa, Yasuhiro; Kimura, Masao; Murakami, Youichi; Yokoyama, Toshiharu; Hosono, Hideo
2016-04-05
A gritty-surface sample holder has been invented to obtain correct XAFS spectra for concentrated samples by fluorescence yield (FY). Materials are usually mixed with boron nitride (BN) to prepare proper concentrations for measuring XAFS spectra. Some materials, however, cannot be mixed with BN and would be measured in conditions too concentrated to obtain correct XAFS spectra. Consequently, the XAFS spectra will typically be incorrect, with decreased peak intensities. We have invented gritty-surface sample holders to obtain correct XAFS spectra even for concentrated materials in FY measurements. Pure Cu and CuO powders were measured mounted on the sample holders, and the same spectra were obtained as transmission spectra of properly prepared samples. This sample holder is useful for measuring XAFS of any concentrated material.
NASA Astrophysics Data System (ADS)
Hanke, Ulrich M.; McIntyre, Cameron P.; Schmidt, Michael W. I.; Wacker, Lukas; Eglinton, Timothy I.
2016-04-01
Measurements of the natural abundance of radiocarbon (¹⁴C) in inorganic and organic carbon-containing materials can be used to investigate their date of origin. In particular, the biogeochemical cycling of specific compounds in the environment may be investigated by applying molecular marker analyses. However, the isolation of specific molecules from environmental matrices requires a complex processing procedure resulting in small sample sizes that often contain less than 30 μg C. Such small samples are sensitive to extraneous carbon (Cex) that is introduced during the purification of the compounds (Shah and Pearson, 2007). We present a thorough radiocarbon blank assessment for benzene polycarboxylic acids (BPCA), a proxy for combustion products that are formed during the oxidative degradation of condensed polyaromatic structures (Wiedemeier et al., in press). The extraneous carbon assessment includes reference material for (1) chemical extraction, (2) preparative liquid chromatography, and (3) wet chemical oxidation, which are subsequently measured with gas ion source AMS (accelerator mass spectrometry, 5-100 μg C). We always use pairs of reference materials, radiocarbon-depleted (fossil) and modern, to determine the fraction modern (F¹⁴C) of Cex. Our results include detailed information about the quantification of Cex in radiocarbon molecular marker analysis using BPCA. Error propagation calculations indicate that ultra-microscale samples (20-30 μg) are feasible with uncertainties of less than 10%. Calculations of the constant contamination reveal important information about the source (F¹⁴C) and mass (μg) of Cex (Wacker and Christl, 2011) for each sub-procedure. An external correction of compound-specific radiocarbon data is essential for robust results that allow for a high degree of confidence in the ¹⁴C results. References: Shah and Pearson, 2007. Ultra-microscale (5-25 μg C) analysis of individual lipids by ¹⁴C AMS: Assessment and correction for sample processing blanks. Radiocarbon 49(1), 69-82. Wacker, L. and Christl, M., 2011. Data reduction for small radiocarbon samples - error propagation using the model of constant contamination. Ion Beam Physics, ETH Zurich, Annual Report 2011. Wiedemeier, D.B., Lang, S.Q., Gierga, M., Abiven, S., Bernasconi, S.M., Bernasconi-Green, G.L., Hajdas, I., Hanke, U.M., Hilf, M.D., McIntyre, C.P., Schneider, M.P.W., Smittenberg, R.H., Wacker, L., Wiesenberg, G.L.B., Schmidt, M.W.I. Characterization, quantification and compound-specific isotopic analysis of pyrogenic carbon using benzene polycarboxylic acids (BPCA). Journal of Visualized Experiments, in press.
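The constant-contamination model cited above (Wacker and Christl, 2011) can be sketched as follows: the measured F¹⁴C of a standard of mass m is a mass-weighted mixture with a fixed contaminant of mass m_c and fraction modern F_c; paired fossil and modern standards measured at several masses constrain (m_c, F_c), which then corrects unknowns. All numbers below are invented for the sketch.

```python
import numpy as np
from scipy.optimize import least_squares

# Constant-contamination model:
#   F_meas = (F_true * m + F_c * m_c) / (m + m_c)
# Paired standards: fossil (F_true ~ 0) and modern (F_true ~ 1).
m = np.array([10., 20., 40., 10., 20., 40.])     # sample mass, ug C
F_true = np.array([0., 0., 0., 1., 1., 1.])      # nominal F14C
F_meas = np.array([0.055, 0.029, 0.015, 0.945, 0.971, 0.985])  # made up

def resid(theta):
    mc, Fc = theta
    return (F_true * m + Fc * mc) / (m + mc) - F_meas

mc, Fc = least_squares(resid, x0=[0.5, 0.5], bounds=([0, 0], [5, 1])).x
print(f"contaminant: {mc:.2f} ug C with F14C = {Fc:.2f}")

def correct(F_meas_unknown, m_unknown):
    """Invert the mixing model for the sample's true F14C."""
    return (F_meas_unknown * (m_unknown + mc) - Fc * mc) / m_unknown

print(correct(0.45, 25.0))   # corrected F14C of a 25 ug C sample
```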
Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging
NASA Astrophysics Data System (ADS)
Konik, Arda Bekir
Positron emission tomography (PET) and single photon emission tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrade the accuracy of images. In clinical emission tomography, sophisticated correction methods employing additional x-ray CT or radionuclide transmission scans are often required. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods for smaller objects, or about what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, Geant4 application for emission tomography (GATE). In the IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of scatter counts to total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and human), MOBY (realistic mouse/rat model) and XCAT (realistic human model) digital phantoms. In addition, PET projection files for different sizes of MOBY phantoms were reconstructed under 6 different conditions including attenuation and scatter corrections. Selected regions were analyzed for these different reconstruction conditions and object sizes. Finally, real mouse data from the physical counterpart of the small animal PET scanner modeled in our simulations were analyzed under similar reconstruction conditions. Both our IDL and GATE simulations showed that, for small animal PET and SPECT, even the smallest objects (~2 cm diameter) showed ~15% error when neither attenuation nor scatter was corrected. However, a simple attenuation correction using a uniform attenuation map and an object boundary obtained from emission data significantly reduces this error in non-lung regions (~1% for the smallest size and ~6% for the largest size). In lungs, emission values were overestimated when only attenuation correction was performed. In addition, we did not observe any significant difference between using a uniform and the actual attenuation map (e.g., only ~0.5% for the largest size in PET studies). Scatter correction was not significant for smaller objects, but became increasingly important for larger objects. These results suggest that for all mouse sizes and most rat sizes, uniform attenuation correction can be performed using emission data only. For smaller sizes up to ~4 cm, scatter correction is not required even in lung regions.
For larger sizes, if accurate quantification is needed, an additional transmission scan may be required to estimate an accurate attenuation map for both attenuation and scatter corrections.
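As a rough plausibility check on the size dependence reported above, the survival probability of an annihilation-photon pair along a central line of response through a water-equivalent cylinder falls as exp(-μd) with diameter d. The sketch below assumes the standard narrow-beam attenuation coefficient of water at 511 keV (μ ≈ 0.096 cm⁻¹); it illustrates the scaling only and is not a reproduction of the study's simulations.

```python
import math

MU_WATER_511KEV = 0.096  # cm^-1, linear attenuation coefficient of water

def pet_attenuation_loss(diameter_cm: float) -> float:
    """Fraction of coincidences lost along a central line of response
    through a uniform water cylinder; both 511 keV photons must escape,
    so the combined path length equals the full diameter."""
    return 1.0 - math.exp(-MU_WATER_511KEV * diameter_cm)

for d in (2, 4, 8, 20, 40):  # mouse-like to human-like diameters, cm
    print(f"{d:>3} cm: {100 * pet_attenuation_loss(d):5.1f}% of counts lost")
# ~17% at 2 cm, of the same order as the ~15% error quoted above for
# the smallest uncorrected objects.
```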
40 CFR 80.1622 - Approval for small refiner and small volume refinery status.
Code of Federal Regulations, 2014 CFR
2014-07-01
... appropriate data to correct the record when the company submits its application. (ii) Foreign small refiners... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Approval for small refiner and small... Approval for small refiner and small volume refinery status. (a) Applications for small refiner or small...
Barrientos, Rafael; Ponce, Carlos; Palacín, Carlos; Martín, Carlos A.; Martín, Beatriz; Alonso, Juan Carlos
2012-01-01
Background Collision with electric power lines is a conservation problem for many bird species. Although the implementation of flight diverters is rapidly increasing, few well-designed studies supporting the effectiveness of this costly conservation measure have been published. Methodology/Principal Findings We provide information on the largest worldwide marking experiment to date, including carcass searches at 35 (15 experimental, 20 control) power lines totalling 72.5 km, at both transmission (220 kV) and distribution (15 kV–45 kV) lines. We found carcasses of 45 species, 19 of conservation concern. Numbers of carcasses found were corrected to account for carcass losses due to removal by scavengers or being overlooked by researchers, resulting in an estimated collision rate of 8.2 collisions per km per month. We observed a small (9.6%) but significant decrease in the number of casualties after line marking compared to before line marking in experimental lines. This was not observed in control lines. We found no influence of either marker size (large vs. small spirals, sample of distribution lines only) or power line type (transmission vs. distribution, sample of large spirals only) on the collision rate when we analyzed all species together. However, great bustard mortality was slightly lower when lines were marked with large spirals and in transmission lines after marking. Conclusions Our results confirm the overall effectiveness of wire marking as a way to reduce, but not eliminate, bird collisions with power lines. If raw field data are not corrected by carcass losses due to scavengers and missed observations, findings may be biased. The high cost of this conservation measure suggests a need for more studies to improve its application, including wire marking with non-visual devices. Our findings suggest that different species may respond differently to marking, implying that species-specific patterns should be explored, at least for species of conservation concern. PMID:22396776
Correction of Population Stratification in Large Multi-Ethnic Association Studies
Serre, David; Montpetit, Alexandre; Paré, Guillaume; Engert, James C.; Yusuf, Salim; Keavney, Bernard; Hudson, Thomas J.; Anand, Sonia
2008-01-01
Background The vast majority of genetic risk factors for complex diseases have, taken individually, a small effect on the end phenotype. Population-based association studies therefore need very large sample sizes to detect significant differences between affected and non-affected individuals. Including thousands of affected individuals in a study requires recruitment in numerous centers, possibly from different geographic regions. Unfortunately such a recruitment strategy is likely to complicate the study design and to generate concerns regarding population stratification. Methodology/Principal Findings We analyzed 9,751 individuals representing three main ethnic groups - Europeans, Arabs and South Asians - that had been enrolled from 154 centers involving 52 countries for a global case/control study of acute myocardial infarction. All individuals were genotyped at 103 candidate genes using 1,536 SNPs selected with a tagging strategy that captures most of the genetic diversity in different populations. We show that relying solely on self-reported ethnicity is not sufficient to exclude population stratification and we present additional methods to identify and correct for stratification. Conclusions/Significance Our results highlight the importance of carefully addressing population stratification and of carefully “cleaning” the sample prior to analyses to obtain stronger signals of association and to avoid spurious results. PMID:18196181
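A common way to implement the stratification correction discussed above is to include the top principal components of the genotype matrix as covariates in each per-SNP association test. The sketch below shows that generic pattern on simulated data using numpy and statsmodels; it is an illustration of the idea, not the pipeline used in the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, m = 500, 200                                        # individuals, SNPs
geno = rng.integers(0, 3, size=(n, m)).astype(float)   # 0/1/2 allele counts
case = rng.integers(0, 2, size=n)                      # case/control status

# Top principal components of the column-centred genotype matrix serve
# as axes of ancestry.
g = geno - geno.mean(axis=0)
pcs = np.linalg.svd(g, full_matrices=False)[0][:, :5]  # first 5 PC scores

def assoc_pvalue(snp: np.ndarray) -> float:
    """Logistic regression of status on one SNP, adjusted for the PCs."""
    X = sm.add_constant(np.column_stack([snp, pcs]))
    fit = sm.Logit(case, X).fit(disp=0)
    return fit.pvalues[1]                              # p-value of the SNP term

pvals = [assoc_pvalue(geno[:, j]) for j in range(m)]
```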
DOE Office of Scientific and Technical Information (OSTI.GOV)
ITLV.
1999-03-01
The Corrective Action Investigation Plan for Corrective Action Unit 428, Area 3 Septic Waste Systems 1 and 5, has been developed in accordance with the Federal Facility Agreement and Consent Order that was agreed to by the U.S. Department of Energy, Nevada Operations Office; the State of Nevada Division of Environmental Protection; and the U.S. Department of Defense. Corrective Action Unit 428 consists of Corrective Action Sites 03-05-002-SW01 and 03-05-002-SW05, respectively known as Area 3 Septic Waste System 1 and Septic Waste System 5. This Corrective Action Investigation Plan is used in combination with the Work Plan for Leachfield Corrective Action Units: Nevada Test Site and Tonopah Test Range, Nevada, Rev. 1 (DOE/NV, 1998c). The Leachfield Work Plan was developed to streamline investigations at leachfield Corrective Action Units by incorporating management, technical, quality assurance, health and safety, public involvement, field sampling, and waste management information common to a set of Corrective Action Units with similar site histories and characteristics into a single document that can be referenced. This Corrective Action Investigation Plan provides investigative details specific to Corrective Action Unit 428. A system of leachfields and associated collection systems was used for wastewater disposal at Area 3 of the Tonopah Test Range until a consolidated sewer system was installed in 1990 to replace the discrete septic waste systems. Operations within various buildings at Area 3 generated sanitary and industrial wastewaters potentially contaminated with contaminants of potential concern and disposed of in septic tanks and leachfields. Corrective Action Unit 428 is composed of two leachfield systems in the northern portion of Area 3. Based on site history collected to support the Data Quality Objectives process, contaminants of potential concern for the site include oil/diesel-range total petroleum hydrocarbons, and Resource Conservation and Recovery Act characteristic volatile organic compounds, semivolatile organic compounds, and metals. A limited number of samples will be analyzed for gamma-emitting radionuclides and isotopic uranium from four of the septic tanks and if radiological field screening levels are exceeded. Additional samples will be analyzed for geotechnical and hydrological properties and a bioassessment may be performed. The technical approach for investigating this Corrective Action Unit consists of the following activities: Perform video surveys of the discharge and outfall lines. Collect samples of material in the septic tanks. Conduct exploratory trenching to locate and inspect subsurface components. Collect subsurface soil samples in areas of the collection system including the septic tanks and outfall end of distribution boxes. Collect subsurface soil samples underlying the leachfield distribution pipes via trenching. Collect surface and near-surface samples near potential locations of the Acid Sewer Outfall if the Septic Waste System 5 Leachfield cannot be located. Field screen samples for volatile organic compounds, total petroleum hydrocarbons, and radiological activity. Drill boreholes and collect subsurface soil samples if required. Analyze samples for total volatile organic compounds, total semivolatile organic compounds, total Resource Conservation and Recovery Act metals, and total petroleum hydrocarbons (oil/diesel-range organics).
A limited number of samples will be analyzed for gamma-emitting radionuclides and isotopic uranium from particular septic tanks and if radiological field-screening levels are exceeded. Collect samples from native soils beneath the distribution system and analyze for geotechnical/hydrologic parameters. Collect and analyze bioassessment samples at the discretion of the Site Supervisor if total petroleum hydrocarbons exceed field-screening levels.
Analysis of RNA structure using small-angle X-ray scattering
Cantara, William A.; Olson, Erik D.; Musier-Forsyth, Karin
2016-01-01
In addition to their role in correctly attaching specific amino acids to cognate tRNAs, aminoacyl-tRNA synthetases (aaRS) have been found to possess many alternative functions and often bind to and act on other nucleic acids. In contrast to the well-defined 3D structure of tRNA, the structures of many of the other RNAs recognized by aaRSs have not been solved. Despite advances in the use of X-ray crystallography (XRC), nuclear magnetic resonance (NMR) spectroscopy and cryo-electron microscopy (cryo-EM) for structural characterization of biomolecules, significant challenges to solving RNA structures still exist. Recently, small-angle X-ray scattering (SAXS) has been increasingly employed to characterize the 3D structures of RNAs and RNA-protein complexes. SAXS is capable of providing low-resolution tertiary structure information under physiological conditions and with less intensive sample preparation and data analysis requirements than XRC, NMR and cryo-EM. In this article, we describe best practices involved in the process of RNA and RNA-protein sample preparation, SAXS data collection, data analysis, and structural model building. PMID:27777026
NASA Astrophysics Data System (ADS)
Li, Yinlin; Kundu, Bijoy K.
2018-03-01
The three-compartment model with spillover (SP) and partial volume (PV) corrections has been widely used for noninvasive kinetic parameter studies of dynamic 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography images of small animal hearts in vivo. However, the approach still suffers from estimation uncertainty or slow convergence caused by the commonly used optimization algorithms. The aim of this study was to develop an improved optimization algorithm with better estimation performance. Femoral artery blood samples, image-derived input functions from heart ventricles and myocardial time-activity curves (TACs) were derived from data on 16 C57BL/6 mice obtained from the UCLA Mouse Quantitation Program. Parametric equations of the average myocardium and the blood pool TACs with SP and PV corrections in a three-compartment tracer kinetic model were formulated. A hybrid method integrating artificial immune-system and interior-reflective Newton methods was developed to solve the equations. Two penalty functions and one late time-point tail vein blood sample were used to constrain the objective function. The estimation accuracy of the method was validated by comparing results with experimental values using the errors in the areas under curves (AUCs) of the model-corrected input function (MCIF) and the 18F-FDG influx constant Ki. Moreover, the elapsed time was used to measure the convergence speed. The overall AUC error of MCIF for the 16 mice averaged -1.4 ± 8.2%, with a correlation coefficient of 0.9706. Similar results can be seen in the overall Ki error percentage, which was 0.4 ± 5.8% with a correlation coefficient of 0.9912. The t-test P value for both showed no significant difference. The mean and standard deviation of the MCIF AUC and Ki percentage errors have lower values compared to the previously published methods. The computation time of the hybrid method is also several times lower than using just a stochastic algorithm. The proposed method significantly improved the model estimation performance in terms of the accuracy of the MCIF and Ki, as well as the convergence speed.
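For orientation, the influx constant Ki above is a composite of the rate constants of the irreversible two-tissue (three-compartment) FDG model, Ki = K1*k3/(k2 + k3). The sketch below simulates a tissue time-activity curve for hypothetical rate constants and a toy input function; the spillover and partial-volume terms of the paper are omitted for brevity.

```python
import numpy as np
from scipy.integrate import odeint

# Irreversible two-tissue compartment model for FDG:
#   dC1/dt = K1*Cp(t) - (k2 + k3)*C1
#   dC2/dt = k3*C1
# The tissue TAC is C1 + C2; the net influx constant is Ki = K1*k3/(k2+k3).
K1, k2, k3 = 0.1, 0.2, 0.05              # hypothetical values, min^-1

def cp(t):
    """Toy plasma input function (arbitrary units)."""
    return t * np.exp(-t / 2.0)

def rhs(c, t):
    c1, c2 = c
    return [K1 * cp(t) - (k2 + k3) * c1, k3 * c1]

t = np.linspace(0, 60, 601)              # minutes
c1, c2 = odeint(rhs, [0.0, 0.0], t).T
tissue_tac = c1 + c2                     # what the PET image measures

print("Ki =", K1 * k3 / (k2 + k3))       # 0.02 min^-1
```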
Magnetic irreversibility: An important amendment in the zero-field-cooling and field-cooling method
NASA Astrophysics Data System (ADS)
Teixeira Dias, Fábio; das Neves Vieira, Valdemar; Esperança Nunes, Sabrina; Pureur, Paulo; Schaf, Jacob; Fernanda Farinela da Silva, Graziele; de Paiva Gouvêa, Cristol; Wolff-Fabris, Frederik; Kampert, Erik; Obradors, Xavier; Puig, Teresa; Roa Rovira, Joan Josep
2016-02-01
The present work reports on experimental procedures to correct significant deviations in magnetization data caused by magnetic relaxation, which arises from small field cycling as the sample is transported through the inhomogeneous applied field of commercial magnetometers. The extensively used method for measuring the magnetic irreversibility, first cooling the sample in zero field, switching on a constant applied magnetic field and measuring the magnetization M(T) while slowly warming the sample, and subsequently measuring M(T) while slowly cooling it back in the same field, is very sensitive even to small displacements of the magnetization curve. In our melt-processed YBaCuO superconducting sample we observed displacements of the irreversibility limit of up to 7 K in high fields. Such displacements are detected only when the magnetic irreversibility limit is confronted with other measurements, for instance zero resistance, in which the sample remains fixed and is therefore unaffected by such relaxation. We measured the magnetic irreversibility, Tirr(H), using a vibrating sample magnetometer (VSM) from Quantum Design. The zero resistance data, Tc0(H), were obtained using a PPMS from Quantum Design. On confronting our irreversibility lines with those of zero resistance, we observed that the Tc0(H) data fell several kelvin above the Tirr(H) data, which obviously contradicts the well-known properties of superconductivity. In order to obtain consistent Tirr(H) data in the H-T plane, it was necessary to perform numerous additional measurements as a function of the sample-transport amplitude and to extrapolate the Tirr(H) data for each applied field to zero amplitude.
ERIC Educational Resources Information Center
Espin, Christine; Wallace, Teri; Campbell, Heather; Lembke, Erica S.; Long, Jeffrey D.; Ticha, Renata
2008-01-01
We examined the technical adequacy of writing progress measures as indicators of success on state standards tests. Tenth-grade students wrote for 10 min, marking their samples at 3, 5, and 7 min. Samples were scored for words written, words spelled correctly, and correct and correct minus incorrect word sequences. The number of correct minus…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strydhorst, Jared H., E-mail: jared.strydhorst@gmail.com; Ruddy, Terrence D.; Wells, R. Glenn
2015-04-15
Purpose: Our goal in this work was to investigate the impact of CT-based attenuation correction on measurements of rat myocardial perfusion with 99mTc and 201Tl single photon emission computed tomography (SPECT). Methods: Eight male Sprague-Dawley rats were injected with 99mTc-tetrofosmin and scanned in a small animal pinhole SPECT/CT scanner. Scans were repeated weekly over a period of 5 weeks. Eight additional rats were injected with 201Tl and also scanned following a similar protocol. The images were reconstructed with and without attenuation correction, and the relative perfusion was analyzed with the commercial cardiac analysis software. The absolute uptake of 99mTc in the heart was also quantified with and without attenuation correction. Results: For 99mTc imaging, relative segmental perfusion changed by up to +2.1%/−1.8% as a result of attenuation correction. Relative changes of +3.6%/−1.0% were observed for the 201Tl images. Interscan and inter-rat reproducibilities of relative segmental perfusion were 2.7% and 3.9%, respectively, for the uncorrected 99mTc scans, and 3.6% and 4.3%, respectively, for the 201Tl scans, and were not significantly affected by attenuation correction for either tracer. Attenuation correction also significantly increased the measured absolute uptake of tetrofosmin and significantly altered the relationship between the rat weight and tracer uptake. Conclusions: Our results show that attenuation correction has a small but statistically significant impact on the relative perfusion measurements in some segments of the heart and does not adversely affect reproducibility. Attenuation correction had a small but statistically significant impact on measured absolute tracer uptake.
Desmopressin to Prevent Rapid Sodium Correction in Severe Hyponatremia: A Systematic Review.
MacMillan, Thomas E; Tang, Terence; Cavalcanti, Rodrigo B
2015-12-01
Hyponatremia is common among inpatients and is associated with severe adverse outcomes such as osmotic demyelination syndrome. Current guidelines recommend serum sodium concentration correction targets of no more than 8 mEq/L per day in patients at high risk of osmotic demyelination syndrome. Desmopressin is recommended to control high rates of serum sodium concentration correction in severe hyponatremia. However, recommendations are based on limited data. The objective of this study is to review current strategies for desmopressin (DDAVP) use in severe hyponatremia. A systematic literature search of 4 databases of peer-reviewed studies was performed, and study quality was appraised. The literature search identified 17 observational studies with 80 patients. We found 3 strategies for desmopressin administration in hyponatremia: 1) proactive, where desmopressin is administered early based on initial serum sodium concentration; 2) reactive, where desmopressin is administered based on changes in serum sodium concentration or urine output; and 3) rescue, where desmopressin is administered after serum sodium correction targets are exceeded or when osmotic demyelination appears imminent. A proactive strategy of desmopressin administration with hypertonic saline was associated with a lower incidence of exceeding serum sodium concentration correction targets, although this evidence is derived from a small case series. Three distinct strategies for desmopressin administration are described in the literature. Limitations in study design and sample size prevent definitive conclusions about the optimal strategy for desmopressin administration to correct hyponatremia. There is a pressing need for better quality research to guide clinicians in managing severe hyponatremia. Copyright © 2015 Elsevier Inc. All rights reserved.
78 FR 23970 - Interagency Task Force on Veterans Small Business Development
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-23
... SMALL BUSINESS ADMINISTRATION Interagency Task Force on Veterans Small Business Development AGENCY: U.S. Small Business Administration. ACTION: Notice of open Federal Interagency Task Force Meeting. SUMMARY: This document corrects the SBA's Interagency Task Force on Veterans Small Business Developments...
NASA Astrophysics Data System (ADS)
Udalski, A.; Pietrzynski, G.; Szymanski, M.; Kubiak, M.; Zebrun, K.; Soszynski, I.; Szewczyk, O.; Wyrzykowski, L.
2003-06-01
The photometric data collected by OGLE-III during the 2001 and 2002 observational campaigns, aimed at detecting transits of planets or low-luminosity objects, were corrected for small-scale systematic effects using the data pipeline by Kruszewski and Semeniuk and searched again for low-amplitude transits. Sixteen new objects with small transiting companions, in addition to the previously found samples, were discovered. Most of them are small-amplitude cases which remained undetected in the original data. Several new objects seem to be very promising candidates for systems containing substellar objects: extrasolar planets or brown dwarfs. These include OGLE-TR-122, OGLE-TR-125, OGLE-TR-130, OGLE-TR-131 and a few others. These objects are particularly worthy of spectroscopic follow-up observations for radial-velocity measurements and mass determination. With the photometric orbit well known, only a few RV measurements should suffice to confirm their actual status. All photometric data for the presented objects are available to the astronomical community from the OGLE Internet archive.
NASA Astrophysics Data System (ADS)
Sargent, S.; Somers, J. M.
2015-12-01
Trace-gas eddy covariance flux measurements can be made with open-path or closed-path analyzers. Traditional closed-path trace-gas analyzers use multipass absorption cells that behave as mixing volumes, requiring high sample flow rates to achieve useful frequency response. The high sample flow rate and the need to keep the multipass cell extremely clean dictate the use of a fine-pore filter that may clog quickly. A large-capacity filter cannot be used because it would degrade the EC system frequency response. The high flow rate also requires a powerful vacuum pump, which typically consumes on the order of 1000 W. The analyzer must measure water vapor for spectroscopic and dilution corrections. Open-path analyzers are available for methane, but not for nitrous oxide. The currently available methane analyzers have low power consumption but are very large. Their large size degrades frequency response and disturbs the air flow near the sonic anemometer. They require significant maintenance to keep the exposed multipass optical surfaces clean, and water vapor measurements for dilution and spectroscopic corrections require a separate water vapor analyzer. A new closed-path eddy covariance system for measuring nitrous oxide or methane fluxes provides an elegant solution. The analyzer (TGA200A, Campbell Scientific, Inc.) uses a thermoelectrically cooled interband cascade laser. Its small sample-cell volume and unique sample-cell configuration (200 ml, 1.5 m single pass) provide excellent frequency response with a low-power scroll pump (240 W). A new single-tube Nafion® dryer removes most of the water vapor and attenuates fluctuations in the residual water vapor. Finally, a vortex intake assembly eliminates the need for an intake filter without adding volume that would degrade system frequency response. Laboratory testing shows the system attenuates the water vapor dilution term by more than 99% and achieves a half-power bandwidth of 3.5 Hz.
78 FR 27442 - Coal Mine Dust Sampling Devices; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-10
... DEPARTMENT OF LABOR Mine Safety and Health Administration Coal Mine Dust Sampling Devices; Correction AGENCY: Mine Safety and Health Administration, Labor. ACTION: Notice; correction. SUMMARY: On April 30, 2013, Mine Safety and Health Administration (MSHA) published a notice in the Federal Register...
Guardabassi, Luca; Hedberg, Sandra; Jessen, Lisbeth Rem; Damborg, Peter
2015-10-26
Urinary tract infection (UTI) is a common reason for antimicrobial prescription in dogs and cats. The objective of this study was to optimize and evaluate a culture-based point-of-care test for detection, identification and antimicrobial susceptibility testing of bacterial uropathogens in veterinary practice. Seventy-two urine samples from dogs and cats with suspected UTI presenting to seven veterinary facilities were used by clinical staff and an investigator to estimate the sensitivity and specificity of Flexicult Vet A compared to laboratory reference standards for culture and susceptibility testing. Subsequently, the test was modified by inclusion of an oxacillin-containing compartment for detection of methicillin-resistant staphylococci. The performance of the modified product (Flexicult Vet B) for susceptibility testing was evaluated in vitro using a collection of 110 clinical isolates. Bacteriuria was reported by the laboratory in 25 (35%) samples from the field study. The sensitivity and specificity of Flexicult Vet A for detection of bacteriuria were 83% and 100%, respectively. Bacterial species were correctly identified in 53% and 100% of the positive samples by clinical staff and the investigator, respectively. The susceptibility results were interpreted correctly by clinical staff for 70% of the 94 drug-strain combinations. Higher percentages of correct interpretation were observed when the results were interpreted by the investigator in both the field (76%) and the in vitro study (94%). The most frequent errors were false resistance to β-lactams (ampicillin, amoxicillin-clavulanate and cephalotin) in Escherichia coli for Flexicult Vet A, and false amoxicillin-clavulanate resistance in E. coli and false ampicillin susceptibility in Staphylococcus pseudintermedius for Flexicult Vet B. The latter error can be prevented by categorizing staphylococcal strains growing in the oxacillin compartment as resistant to all β-lactams. Despite the shortcomings regarding species identification by clinical staff and β-lactam susceptibility testing of E. coli, Flexicult Vet B (commercial name Flexicult® Vet) is a time- and cost-effective point-of-care test to guide antimicrobial choice and facilitate implementation of antimicrobial use guidelines for treatment of UTIs in small animals, provided that clinical staff are adequately trained to interpret the results and that clinics meet minimum standards to operate in-house culture.
Nakagawa, Seiji
2011-04-01
Mechanical properties (seismic velocities and attenuation) of geological materials are often frequency dependent, which necessitates measurements of the properties at frequencies relevant to the problem at hand. Conventional acoustic resonant bar tests allow measuring seismic properties of rocks and sediments at sonic frequencies (several kilohertz) that are close to the frequencies employed for geophysical exploration of oil and gas resources. However, the tests require a long, slender sample, which is often difficult to obtain from the deep subsurface or from weak and fractured geological formations. In this paper, an alternative measurement technique to conventional resonant bar tests is presented. This technique uses only a small, jacketed rock or sediment core sample placed between a pair of long metal extension bars with attached seismic source and receiver, the same geometry as the split Hopkinson pressure bar test for large-strain, dynamic impact experiments. Because of the length and mass added to the sample, the resonance frequency of the entire system can be lowered significantly compared to the sample alone. The experiment can be conducted under elevated confining pressures up to tens of MPa and temperatures above 100 °C, and concurrently with x-ray CT imaging. The described split Hopkinson resonant bar test is applied in two steps. First, extension and torsion-mode resonance frequencies and attenuation of the entire system are measured. Next, numerical inversions for the complex Young's and shear moduli of the sample are performed. One particularly important step is the correction of the inverted Young's moduli for the effect of sample-rod interfaces. Examples of the application are given for homogeneous, isotropic polymer samples, and a natural rock sample. © 2011 American Institute of Physics
SUSANS With Polarized Neutrons
Wagh, Apoorva G.; Rakhecha, Veer Chand; Strobl, Makus; Treimer, Wolfgang
2005-01-01
Super Ultra-Small Angle Neutron Scattering (SUSANS) studies over wave vector transfers of 10⁻⁴ nm⁻¹ to 10⁻³ nm⁻¹ afford information on micrometer-size agglomerates in samples. Using a right-angled magnetic air prism, we have achieved a separation of ≈10 arcsec between ≈2 arcsec wide up- and down-spin peaks of 0.54 nm neutrons. The SUSANS instrument has thus been equipped with the polarized neutron option. The samples are placed in a uniform vertical field of 8.8 × 10⁴ A/m (1.1 kOe). Several magnetic alloy ribbon samples broaden the up-spin neutron peak significantly over the ±1.3 × 10⁻³ nm⁻¹ range, while leaving the down-spin peak essentially unaltered. Fourier transforms of these SUSANS spectra, corrected for the instrument resolution, yield micrometer-range pair distribution functions for up- and down-spin neutrons as well as the nuclear and magnetic scattering length density distributions in the samples. PMID:27308127
An Accurate Framework for Arbitrary View Pedestrian Detection in Images
NASA Astrophysics Data System (ADS)
Fan, Y.; Wen, G.; Qiu, S.
2018-01-01
We consider the problem of detecting pedestrians in images collected from various viewpoints. This paper utilizes a novel framework called locality-constrained affine subspace coding (LASC). First, the positive training samples are clustered into groups that represent similar viewpoints. Then Principal Component Analysis (PCA) is used to obtain the shared features of each viewpoint. Finally, samples that can be reconstructed with small error by linear approximation using their top-k nearest shared features are regarded as correct detections. No negative samples are required for our method. Histograms of oriented gradients (HOG) are used as the feature descriptors, and the sliding window scheme is adopted to detect humans in images. The proposed method exploits the sparse property of intrinsic information and the correlations among multiple-view samples. Experimental results on the INRIA and SDL human datasets show that the proposed method achieves a higher performance than state-of-the-art methods in terms of both effectiveness and efficiency.
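The acceptance test described above can be sketched compactly: cluster the positive samples, fit one low-rank PCA subspace per cluster, and accept a window when its reconstruction error against the nearest subspaces is small. In the sketch below the cluster count, subspace rank, and threshold tau are hypothetical choices, and random vectors stand in for extracted HOG descriptors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X_pos = rng.normal(size=(600, 324))      # stand-in for HOG descriptors

# 1) Cluster positives into viewpoint-like groups.
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X_pos)

# 2) Fit one low-rank PCA (affine) subspace per cluster.
subspaces = [PCA(n_components=10).fit(X_pos[labels == c]) for c in range(6)]

def reconstruction_error(x: np.ndarray, pca: PCA) -> float:
    """L2 error of projecting x onto one cluster's affine subspace."""
    x_hat = pca.inverse_transform(pca.transform(x[None, :]))[0]
    return float(np.linalg.norm(x - x_hat))

def is_pedestrian(x: np.ndarray, k: int = 2, tau: float = 12.0) -> bool:
    """Accept if the best of the k nearest subspaces reconstructs x well."""
    errs = sorted(reconstruction_error(x, p) for p in subspaces)
    return min(errs[:k]) < tau

print(is_pedestrian(X_pos[0]))
```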
Paschou, Peristera
2010-01-01
Recent large-scale studies of European populations have demonstrated the existence of population genetic structure within Europe and the potential to accurately infer individual ancestry when information from hundreds of thousands of genetic markers is used. In fact, when genomewide genetic variation of European populations is projected down to a two-dimensional Principal Components Analysis plot, a surprising correlation with the actual geographic coordinates of self-reported ancestry has been reported. This substructure can hamper the search for susceptibility genes for common complex disorders, leading to spurious correlations. The identification of genetic markers that can correct for population stratification therefore becomes of paramount importance. Analyzing 1,200 individuals from 11 populations genotyped for more than 500,000 SNPs (Population Reference Sample), we present a systematic exploration of the extent to which geographic coordinates of origin within Europe can be predicted with small panels of SNPs. Markers are selected to correlate with the top principal components of the dataset, as we have previously demonstrated. Performing thorough cross-validation experiments, we show that it is indeed possible to predict individual ancestry within Europe down to a few hundred kilometers from actual individual origin, using information from carefully selected panels of 500 or 1,000 SNPs. Furthermore, we show that these panels can be used to correctly assign the HapMap Phase 3 European populations to their geographic origin. The SNPs that we propose can prove extremely useful in a variety of different settings, such as stratification correction or genetic ancestry testing, and the study of the history of European populations. PMID:20805874
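The prediction scheme summarized above, selecting SNPs by their weight on the top principal components and regressing geographic coordinates on the selected panel, can be outlined as follows. All data, the panel size, and the plain linear regressor are illustrative stand-ins rather than the study's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n, m = 1200, 5000
geno = rng.integers(0, 3, size=(n, m)).astype(float)   # toy genotypes
latlon = rng.normal(size=(n, 2))                       # toy coordinates

# Rank SNPs by their loading on the top principal components.
g = geno - geno.mean(axis=0)
u, s, vt = np.linalg.svd(g, full_matrices=False)
pc_loadings = np.abs(vt[:2]).sum(axis=0)   # weight on the top-2 PCs
panel = np.argsort(pc_loadings)[-500:]     # keep the 500 most informative

# Regress coordinates on the selected panel; in practice this would be
# evaluated with the cross-validation scheme the abstract describes.
model = LinearRegression().fit(geno[:, panel], latlon)
pred = model.predict(geno[:, panel])
```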
Longitudinal plasma metabolic profiles, infant feeding, and islet autoimmunity in the MIDIA study.
Jørgenrud, Benedicte; Stene, Lars C; Tapia, German; Bøås, Håkon; Pepaj, Milaim; Berg, Jens P; Thorsby, Per M; Orešič, Matej; Hyötyläinen, Tuulia; Rønningen, Kjersti S
2017-03-01
The aim of this study was to investigate the longitudinal plasma metabolic profiles in healthy infants and the potential association with breastfeeding duration and islet autoantibodies predictive of type 1 diabetes. Up to four longitudinal plasma samples from age 3 months from case children who developed islet autoimmunity (n = 29) and autoantibody-negative control children (n = 29) with the HLA DR4-DQ8/DR3-DQ2 genotype were analyzed using two-dimensional gas chromatography coupled to a time-of-flight mass spectrometer for detection of small polar metabolites. Plasma metabolite levels were found to depend strongly on age, with fold changes varying up to 50% from age 3 to 24 months (p < 0.001 after correction for multiple testing). Tyrosine levels tended to be lower in case children, but this was not significant after correction for multiple testing. Ornithine levels were lower in case children compared with the controls at the time of seroconversion, but the difference was not statistically significant after correcting for multiple testing. Breastfeeding for at least 3 months as compared with shorter duration was associated with higher plasma levels of isoleucine, and lower levels of methionine and 3,4-dihydroxybutyric acid at 3 months of age. Plasma levels of several small, polar metabolites changed with age during early childhood, independent of later islet autoimmunity status and sex. Breastfeeding was associated with higher levels of branched-chain amino acids, and lower levels of methionine and 3,4-dihydroxybutyric acid. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
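The phrase "after correction for multiple testing" above refers to controlling the error rate across the many metabolites tested at once; the abstract does not name the exact procedure, so the sketch below shows a generic Benjamini-Hochberg false-discovery-rate correction as one common choice.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR <= alpha."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    thresh = alpha * np.arange(1, m + 1) / m     # step-up thresholds
    passed = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()          # largest rank passing
        reject[order[:k + 1]] = True
    return reject

# e.g. one p-value per metabolite for an age or group effect:
print(benjamini_hochberg([0.0001, 0.02, 0.03, 0.4, 0.9]))
```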
Bruza, Petr; Gollub, Sarah L; Andreozzi, Jacqueline M; Tendler, Irwin I; Williams, Benjamin B; Jarvis, Lesley A; Gladstone, David J; Pogue, Brian W
2018-05-02
The purpose of this study was to measure surface dose by remote time-gated imaging of plastic scintillators. A novel technique for time-gated, intensified camera imaging of scintillator emission was demonstrated, and key parameters influencing the signal were analyzed, including distance, angle and thickness. A set of scintillator samples was calibrated using thermoluminescent detector (TLD) response as reference. Examples of use in total skin electron therapy are described. The data showed excellent room light rejection (scintillation signal-to-noise ratio SNR ≈ 470), ideal scintillation dose response linearity, and 2% dose rate error. Individual sample scintillation response varied by 7% due to sample preparation. Corrections for the inverse-square distance dependence and for lens throughput error (8% per meter) were needed. At scintillator-to-source and observation angles <50°, the radiant energy fluence error was smaller than 1%. The achieved standard error of the scintillator cumulative dose measurement compared to the TLD dose was 5%. The results from this proof-of-concept study documented the first use of small scintillator targets for remote surface dosimetry in ambient room lighting. The measured dose accuracy renders our method comparable to thermoluminescent detector dosimetry, with the ultimate realization of accuracy likely to be better than shown here. Once optimized, this approach to remote dosimetry may substantially reduce the time and effort required for surface dosimetry.
Consistency of ARESE II Cloud Absorption Estimates and Sampling Issues
NASA Technical Reports Server (NTRS)
Oreopoulos, L.; Marshak, A.; Cahalan, R. F.; Lau, William K. M. (Technical Monitor)
2002-01-01
Data from three cloudy days (March 3, 21, 29, 2000) of the ARM Enhanced Shortwave Experiment II (ARESE II) were analyzed. Grand averages of broadband absorptance among three sets of instruments were compared. Fractional solar absorptances were approx. 0.21-0.22, with the exception of March 3, when two sets of instruments gave values smaller by approx. 0.03-0.04. The robustness of these values was investigated by looking into possible sampling problems with the aid of 500 nm spectral fluxes. Grand averages of 500 nm apparent absorptance cover a wide range of values for these three days, namely from a large positive (approx. 0.011) average for March 3, to a small negative (approx. -0.03) for March 21, to near zero (approx. 0.01) for March 29. We present evidence suggesting that a large part of the discrepancies among the three days is due to the different nature of the clouds and their non-uniform sampling. Hence, corrections to the grand average broadband absorptance values may be necessary. However, application of the known correction techniques may be precarious due to the sparsity of collocated flux measurements above and below the clouds. Our analysis leads to the conclusion that only March 29 fulfills all requirements for reliable estimates of cloud absorption, that is, the presence of thick, overcast, homogeneous clouds.
Evangelista, P.; Kumar, S.; Stohlgren, T.J.; Crall, A.W.; Newman, G.J.
2007-01-01
Predictive models of aboveground biomass of nonnative Tamarix ramosissima of various sizes were developed using destructive sampling techniques on 50 individuals and four 100-m² plots. Each sample was measured for average height (m) of stems and canopy area (m²) prior to cutting, drying, and weighing. Five competing regression models (P < 0.05) were developed to estimate aboveground biomass of T. ramosissima using average height and/or canopy area measurements and were evaluated using Akaike's Information Criterion corrected for small sample size (AICc). Our best model (AICc = -148.69, ΔAICc = 0) successfully predicted T. ramosissima aboveground biomass (R² = 0.97) and used average height and canopy area as predictors. Our 2nd-best model, using the same predictors, was also successful in predicting aboveground biomass (R² = 0.97, AICc = -131.71, ΔAICc = 16.98). A 3rd model demonstrated high correlation between only aboveground biomass and canopy area (R² = 0.95), while 2 additional models found high correlations between aboveground biomass and average height measurements only (R² = 0.90 and 0.70, respectively). These models illustrate how simple field measurements, such as height and canopy area, can be used in allometric relationships to accurately predict aboveground biomass of T. ramosissima. Although a correction factor may be necessary for predictions at larger scales, the models presented will prove useful for many research and management initiatives.
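For reference, AICc adds a small-sample bias-correction term to AIC; for least-squares regression it can be computed directly from the residual sum of squares. The sketch below is generic, and the RSS values and parameter counts are made-up numbers, not results from the study.

```python
import math

def aicc_least_squares(rss: float, n: int, k: int) -> float:
    """AICc for a least-squares model with k estimated parameters
    (including the error variance) fitted to n observations:
        AIC  = n * ln(RSS / n) + 2k
        AICc = AIC + 2k(k + 1) / (n - k - 1)
    """
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Two hypothetical biomass models fitted to the same n = 54 samples:
m1 = aicc_least_squares(rss=2.1, n=54, k=4)  # height + canopy area
m2 = aicc_least_squares(rss=2.9, n=54, k=3)  # canopy area only
print(m1, m2, "delta AICc =", m2 - m1)       # smaller AICc is better
```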
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Southern Medical University, Guangzhou; Bai, T
2014-06-15
Purpose: Scatter artifacts severely degrade image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to automatically finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impacts of photon histories and volume down-sampling factors on the accuracy of scatter estimation. A Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01) and the Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003).
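Step 4 of the pipeline, spreading scatter estimated at sparse gantry angles to every projection angle, is straightforward to express. The sketch below uses periodic linear interpolation over the gantry rotation with scipy; the 31 sparse views match the abstract, while the detector dimensions and random arrays are placeholders.

```python
import numpy as np
from scipy.interpolate import interp1d

n_views, n_u, n_v = 360, 64, 48            # example scan geometry
sparse_idx = np.linspace(0, n_views, 31, endpoint=False).astype(int)
scatter_sparse = np.random.rand(len(sparse_idx), n_u, n_v)  # MC estimates

# Wrap the first sparse view to the end so interpolation is periodic
# over the full gantry rotation.
ext_idx = np.append(sparse_idx, sparse_idx[0] + n_views)
ext_val = np.concatenate([scatter_sparse, scatter_sparse[:1]], axis=0)

# Linear interpolation over angle for every detector pixel at once.
scatter_all = interp1d(ext_idx, ext_val, axis=0)(np.arange(n_views))

# Step 5 of the pipeline: remove scatter from the raw projections.
raw = np.random.rand(n_views, n_u, n_v)
corrected = np.clip(raw - scatter_all, 0.0, None)
```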
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soh, R; Lee, J; Harianto, F
Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a heterogeneous lung phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulates the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermoscientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small-field lung medium for Stereotactic Body Radiation Therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: -743 ± 11) sandwiched between 4 cm thick Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with field size 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSxyznrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material composition for comparison to the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. It is expected that the correction factor obtained by EGSnrc will be more accurate as it is able to simulate the actual phantom material compositions. AXB has a limited material library and therefore only approximates the composition of the TLD, composite cork and Plastic Water, contributing to uncertainties in the TLD correction factors. Conclusion: It is expected that the correction factors obtained by EGSnrc will be more accurate. Studies will be done to investigate the correction factors for higher energies where perturbation may be more pronounced.
Underwood, T S A; Rowland, B C; Ferrand, R; Vieillevigne, L
2015-09-07
In this work we use EBT3 film measurements at 10 MV to demonstrate the suitability of the Exradin W1 (plastic scintillator) for relative dosimetry within small photon fields. We then use the Exradin W1 to measure the small field correction factors required by two other detectors: the PTW unshielded Ediode 60017 and the PTW microDiamond 60019. We consider on-axis correction factors for small fields collimated using MLCs for four different TrueBeam energies: 6 FFF, 6 MV, 10 FFF and 10 MV. We also investigate percentage depth dose and lateral profile perturbations. In addition to high-density effects from its silicon sensitive region, the Ediode exhibited a dose-rate dependence, and its known over-response to low-energy scatter was found to be greater for 6 FFF than for 6 MV. For clinical centres without access to a W1 scintillator, we recommend the microDiamond over the Ediode and suggest that 'limits of usability', field sizes below which a detector introduces unacceptable errors, can form a practical alternative to small-field correction factors. For a dosimetric tolerance of 2% on-axis, the microDiamond might be utilised down to 10 mm and 15 mm field sizes for 6 MV and 10 MV, respectively.
Brain Based Instruction in Correctional Settings: Strategies for Teachers.
ERIC Educational Resources Information Center
Becktold, Toni Hill
2001-01-01
Brain-based learning strategies (learner choice, movement, small groups) may be inappropriate in corrections for security reasons. Problems encountered in correctional education (attention deficit disorder, learned helplessness) complicate the use of these strategies. Incorporating brain-based instruction in these settings requires creativity and…
A Meta-Analytic Review of Stand-Alone Interventions to Improve Body Image
Alleva, Jessica M.; Sheeran, Paschal; Webb, Thomas L.; Martijn, Carolien; Miles, Eleanor
2015-01-01
Objective Numerous stand-alone interventions to improve body image have been developed. The present review used meta-analysis to estimate the effectiveness of such interventions, and to identify the specific change techniques that lead to improvement in body image. Methods The inclusion criteria were that (a) the intervention was stand-alone (i.e., solely focused on improving body image), (b) a control group was used, (c) participants were randomly assigned to conditions, and (d) at least one pretest and one posttest measure of body image was taken. Effect sizes were meta-analysed and moderator analyses were conducted. A taxonomy of 48 change techniques used in interventions targeted at body image was developed; all interventions were coded using this taxonomy. Results The literature search identified 62 tests of interventions (N = 3,846). Interventions produced a small-to-medium improvement in body image (d+ = 0.38), a small-to-medium reduction in beauty ideal internalisation (d+ = -0.37), and a large reduction in social comparison tendencies (d+ = -0.72). However, the effect size for body image was inflated by bias both within and across studies, and was reliable but of small magnitude once corrections for bias were applied. Effect sizes for the other outcomes were no longer reliable once corrections for bias were applied. Several features of the sample, intervention, and methodology moderated intervention effects. Twelve change techniques were associated with improvements in body image, and three techniques were contra-indicated. Conclusions The findings show that interventions engender only small improvements in body image, and underline the need for large-scale, high-quality trials in this area. The review identifies effective techniques that could be deployed in future interventions. PMID:26418470
Xu, Deshun; Wu, Xiaofang; Han, Jiankang; Chen, Liping; Ji, Lei; Yan, Wei; Shen, Yuehua
2015-12-01
Vibrio parahaemolyticus is a marine seafood-borne pathogen that causes gastrointestinal disorders in humans. In this study, we developed a cross-priming amplification (CPA) assay coupled with vertical flow (VF) visualization for rapid and sensitive detection of V. parahaemolyticus. This assay correctly detected all target strains (n = 13) and none of the non-target strains (n = 27). Small concentrations of V. parahaemolyticus (1.8 CFU/mL for pure cultures and 18 CFU/g for reconstituted samples) were detected within 1 h. CPA-VF can be applied at a large scale and can be used to detect V. parahaemolyticus strains rapidly in seafood and environmental samples, being especially useful in the field. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Optical free piston cell with constant diameter for use under high pressure
NASA Astrophysics Data System (ADS)
Ishihara, Koji; Takagi, Masahiro
1994-02-01
An optical free piston cell (a modified le Noble and Schlott type optical cell) is described for use in spectrophotometric studies under high pressure. The cell consists of a disk, a cylinder, and a free piston, which are made of quartz and are mounted within a stainless-steel holder. Only a small amount of sample solution (~0.6 cm³), which contacts only quartz, is required for measurements. The path length is fixed (1.2 cm) at ambient pressure, but is self-adjusting at elevated pressure so that no compressibility corrections are necessary.
Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields.
Czarnecki, D; Zink, K
2013-04-21
The application of small photon fields in modern radiotherapy requires the determination of total scatter factors S_cp or field factors Ω^{f_clin,f_msr}_{Q_clin,Q_msr} with high precision. Both quantities require knowledge of the field-size-dependent and detector-dependent correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr}. The aim of this study is the determination of the correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr} for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. Besides this, the mean water-to-air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr}; this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k^{f_clin,f_msr}_{Q_clin,Q_msr} is of the order of 1.2 at a field size of 1 × 1 cm² for the large-volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW60017), the correction factor deviates no more than 2% from unity in field sizes between 10 × 10 and 1 × 1 cm², but below this field size there is a steep decrease of k^{f_clin,f_msr}_{Q_clin,Q_msr} below unity, i.e. a strong overestimation of dose. Besides the field size and detector dependence, the results reveal a clear dependence of the correction factor on the accelerator geometry for field sizes below 1 × 1 cm², i.e. on the beam spot size of the primary electrons hitting the target. This effect is especially pronounced for the ionization chambers. In conclusion, comparing all detectors, the unshielded diode PTW60017 is highly recommended for small field dosimetry, since its correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr} is closest to unity in small fields and mainly independent of the electron beam spot size.
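Operationally, the correction factor defined above is a double ratio of Monte Carlo dose estimates: dose-to-water over dose-to-detector in the clinical small field, divided by the same ratio in the machine-specific reference field. A minimal sketch, with made-up dose values standing in for EGSnrc outputs:

```python
def k_qclin_qmsr(dw_clin: float, ddet_clin: float,
                 dw_msr: float, ddet_msr: float) -> float:
    """k^{f_clin,f_msr}_{Q_clin,Q_msr} as a double ratio of MC doses:
    (D_water / D_detector) in the clinical small field divided by
    (D_water / D_detector) in the machine-specific reference field."""
    return (dw_clin / ddet_clin) / (dw_msr / ddet_msr)

# Hypothetical MC results (dose per primary history) for a 1x1 cm^2
# field versus a 10x10 cm^2 reference field:
print(k_qclin_qmsr(dw_clin=1.00e-16, ddet_clin=0.85e-16,
                   dw_msr=1.00e-16, ddet_msr=1.00e-16))  # ~1.18
```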
Dobie, Robert A; Wojcik, Nancy C
2015-01-01
Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999–2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Methods Regression analysis was used to derive new age-correction values using audiometric data from the 1999–2006 US NHANES. Using the NHANES median, better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20–75 years. Results The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20–75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61–75 years. Conclusions Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers. PMID:26169804
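The regression step in the Methods, fitting median better-ear thresholds to a simple polynomial of age and taking differences between two ages as the correction, can be sketched as follows. The threshold values generated below are fabricated placeholders to show the mechanics, not NHANES data, and the cubic degree is an arbitrary choice.

```python
import numpy as np

# Hypothetical median better-ear thresholds (dB HL) at one audiometric
# frequency, one value per age 20..75 -- placeholders, NOT NHANES data.
ages = np.arange(20, 76)
thresholds = 0.009 * (ages - 20) ** 2 + 5.0 \
    + np.random.default_rng(4).normal(0, 0.5, ages.size)

# Fit a simple polynomial to the medians, as the paper describes.
fitted = np.poly1d(np.polyfit(ages, thresholds, deg=3))

def age_correction(age_now: int, age_baseline: int) -> float:
    """Expected ageing-related threshold shift between two audiograms."""
    return fitted(age_now) - fitted(age_baseline)

# e.g. correction between a baseline audiogram at 30 and a test at 62:
print(round(age_correction(62, 30), 1))  # dB
```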
Safe and sensible preprocessing and baseline correction of pupil-size data.
Mathôt, Sebastiaan; Fabius, Jasper; Van Heusden, Elle; Van der Stigchel, Stefan
2018-02-01
Measurement of pupil size (pupillometry) has recently gained renewed interest from psychologists, but there is little agreement on how pupil-size data is best analyzed. Here we focus on one aspect of pupillometric analyses: baseline correction, i.e., analyzing changes in pupil size relative to a baseline period. Baseline correction is useful in experiments that investigate the effect of some experimental manipulation on pupil size. In such experiments, baseline correction improves statistical power by taking into account random fluctuations in pupil size over time. However, we show that baseline correction can also distort data if unrealistically small pupil sizes are recorded during the baseline period, which can easily occur due to eye blinks, data loss, or other distortions. Divisive baseline correction (corrected pupil size = pupil size/baseline) is affected more strongly by such distortions than subtractive baseline correction (corrected pupil size = pupil size - baseline). We discuss the role of baseline correction as a part of preprocessing of pupillometric data, and make five recommendations: (1) before baseline correction, perform data preprocessing to mark missing and invalid data, but assume that some distortions will remain in the data; (2) use subtractive baseline correction; (3) visually compare your corrected and uncorrected data; (4) be wary of pupil-size effects that emerge faster than the latency of the pupillary response allows (within ±220 ms after the manipulation that induces the effect); and (5) remove trials on which baseline pupil size is unrealistically small (indicative of blinks and other distortions).
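A minimal sketch of recommendations (2) and (5) above — subtractive baseline correction with removal of trials whose baselines are unrealistically small; the threshold and the simulated traces are arbitrary:

```python
import numpy as np

def baseline_correct(trials, baseline_window, min_baseline=1000):
    """Subtractive baseline correction for pupil traces.

    trials: (n_trials, n_samples) array of pupil sizes (arbitrary units)
    baseline_window: slice over the pre-stimulus samples
    min_baseline: trials with a smaller baseline (blinks, data loss) are dropped
    """
    baselines = np.nanmedian(trials[:, baseline_window], axis=1)
    keep = baselines >= min_baseline                          # recommendation 5
    corrected = trials[keep] - baselines[keep, np.newaxis]    # recommendation 2
    return corrected, keep

rng = np.random.default_rng(0)
trials = 4000 + rng.normal(0, 50, size=(100, 600))
trials[3, :50] = 20                     # one blink-distorted baseline
corrected, keep = baseline_correct(trials, slice(0, 50))
print(corrected.shape, (~keep).sum(), "trial(s) removed")
```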
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kidman, Raymond; Matthews, Patrick
The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 541 based on the no further action alternative listed in Table ES-1.
ERIC Educational Resources Information Center
Sampson, Andrew
2012-01-01
This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…
Fukata, Kyohei; Sugimoto, Satoru; Kurokawa, Chie; Saito, Akito; Inoue, Tatsuya; Sasai, Keisuke
2018-06-01
The difficulty of measuring the output factor (OPF) in small fields has been frequently discussed in recent publications. This study aimed to determine the OPF in small fields using a 10-MV photon beam and stereotactic conical collimators (cones). The OPF was measured by two diode detectors (SFD, EDGE detector) and one micro-ion chamber (PinPoint 3D chamber) in a water phantom. A Monte Carlo simulation using a simplified detector model was performed to obtain the correction factor for the detector measurements. An OPF difference of about 12% was observed between the EDGE detector and the PinPoint 3D chamber at the smallest field (7.5 mm diameter). By applying the Monte Carlo-based correction factors to the measurements, the maximum discrepancy among the three detectors was reduced to within 3%. The results indicate that determination of the OPF in a small field should be performed carefully; detector choice and application of an appropriate correction factor are especially important in this regard.
Coplen, Tyler B.; Wassenaar, Leonard I.
2015-01-01
Although laser absorption spectrometry (LAS) instrumentation is easy to use, its incorporation into laboratory operations is not easy, owing to extensive offline manipulation of comma-separated-values files for outlier detection, between-sample memory correction, nonlinearity (δ-variation with water amount) correction, drift correction, normalization to VSMOW-SLAP scales, and difficulty in performing long-term QA/QC audits. METHODS: A Microsoft Access relational-database application, LIMS (Laboratory Information Management System) for Lasers 2015, was developed. It automates LAS data corrections and manages clients, projects, samples, instrument-sample lists, and triple-isotope (δ17O, δ18O, and δ2H values) instrumental data for liquid-water samples. It enables users to (1) graphically evaluate sample injections for variable water yields and high isotope-delta variance; (2) correct for between-sample carryover, instrumental drift, and δ nonlinearity; and (3) normalize final results to VSMOW-SLAP scales. RESULTS: Cost-free LIMS for Lasers 2015 enables users to obtain improved δ17O, δ18O, and δ2H values with liquid-water LAS instruments, even those with under-performing syringes. For example, LAS δ2H(VSMOW) measurements of USGS50 Lake Kyoga (Uganda) water using an under-performing syringe having ±10% variation in water concentration gave +31.7 ± 1.6 ‰ (2-σ standard deviation), compared with the reference value of +32.8 ± 0.4 ‰, after correction for variation in δ value with water concentration, between-sample memory, and normalization to the VSMOW-SLAP scale. CONCLUSIONS: LIMS for Lasers 2015 enables users to create systematic, well-founded instrument templates, import δ2H, δ17O, and δ18O results, evaluate performance with automatic graphical plots, correct for δ nonlinearity due to variable water concentration, correct for between-sample memory, adjust for drift, perform VSMOW-SLAP normalization, and perform long-term QA/QC audits easily.
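A toy sketch of two of the corrections such software automates — two-point VSMOW-SLAP normalization and a crude between-sample memory correction; all numbers below are invented:

```python
import numpy as np

# Hypothetical measured delta2H values (per mil) for the two reference
# waters bracketing the run; true values define the VSMOW-SLAP scale.
meas_vsmow, meas_slap = 1.8, -421.0     # measured
true_vsmow, true_slap = 0.0, -428.0     # assigned

# Two-point normalization: linear map from measured to VSMOW-SLAP scale.
slope = (true_slap - true_vsmow) / (meas_slap - meas_vsmow)
intercept = true_vsmow - slope * meas_vsmow

def normalize(delta_measured):
    return slope * delta_measured + intercept

# Simple between-sample memory correction: ignore the first injections of
# each sample and keep the later, memory-free injections.
injections = np.array([-60.2, -58.9, -58.4, -58.3, -58.3, -58.2])
memory_free = injections[2:].mean()
print(round(normalize(memory_free), 2))
```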
Biener, Lois; Bogen, Karen; Connolly, Gregory
2007-01-01
Objective To determine whether providing corrective health information can reduce the tendency of consumers to believe that the implied marketing message that two “potentially reduced exposure products” (PREPs) are safer than regular cigarettes. Design Face‐to‐face interviews with smokers assigned to one of four conditions, which varied in terms of the presence or absence of health information that qualified claims made in advertising for two PREPs. Subjects A convenience sample of 177 smokers in Boston area. Interventions Health information detailed the extent to which exposure to toxins and health risks of the brands were unknown. Main outcome measures Respondents' assessments of the health risks and toxicity of the two combustible PREPs, Advance and Eclipse. Results The health information had a modest but significant effect on ratings of health risk, and reduced perceptions that switching to the new brands would lower a smoker's risk of cancer (OR 0.75; p<0.05). The health information had no effect on perceptions of toxicity. Conclusions A small dose of corrective information was effective in tempering smokers' perceptions. A higher dose of public health campaigns would be needed to affect misperceptions likely to follow a full‐scale tobacco marketing effort. PMID:17897988
Figure correction of a metallic ellipsoidal neutron focusing mirror
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Jiang, E-mail: jiang.guo@riken.jp; Yamagata, Yutaka; Morita, Shin-ya
2015-06-15
An increasing number of neutron focusing mirrors is being adopted in neutron scattering experiments in order to provide high fluxes at sample positions, reduce measurement time, and/or increase statistical reliability. To realize a small focusing spot and high beam intensity, mirrors with both high form accuracy and low surface roughness are required. To achieve this, we propose a new figure correction technique to fabricate a two-dimensional neutron focusing mirror made with electroless nickel-phosphorus (NiP) by effectively combining ultraprecision shaper cutting and fine polishing. An arc envelope shaper cutting method is introduced to generate high form accuracy, while a fine polishing method, in which the material is removed effectively without losing profile accuracy, is developed to reduce the surface roughness of the mirror. High form accuracy in the minor-axis and the major-axis is obtained through tool profile error compensation and corrective polishing, respectively, and low surface roughness is acquired under a low polishing load. As a result, an ellipsoidal neutron focusing mirror is successfully fabricated with high form accuracy of 0.5 μm peak-to-valley and low surface roughness of 0.2 nm root-mean-square.
Thermal Ionization Mass Spectrometry Techniques for the Determination of δ34S and Δ33S
NASA Astrophysics Data System (ADS)
Mann, J. L.; Kelly, W. R.
2006-12-01
Mass-dependent (MD) and mass-independent (MI) sulfur isotopic compositions are measured by gas source isotope ratio mass spectrometry (GIRMS) using either SO2 or SF6 gas. The variations in sulfur isotopes are used for tracing sources of sulfur and elucidating the sulfur cycle. The recent discovery of MI sulfur isotopic effects provides a tracer for atmospheric processes that may yield insight into the atmospheric sulfur cycle. Determinations of δ^{34}S and Δ^{33}S as well as sulfur concentration in low concentration (ppb) samples are now possible by multi-collector thermal ionization mass spectrometry (MCTIMS) by measuring arsenic sulfide molecular ions (AsS+) using silica gel as an emitter. δ^{34}S is determined using a ^{33}S/^{36}S double spike to correct for instrumental mass fractionation. The spike is added to the sample before chemical processing, which permits the simultaneous determination of the natural MD isotopic fractionation and the sulfur concentration. The addition of the double spike before sample processing has the important additional advantage that any isotopic fractionation that may occur during the chemistry will be removed by the double spike correction procedure. The accuracy and precision of the double spike technique are comparable to modern GIRMS, but it requires about a factor of 10 less sample. Δ^{33}S effects can also be measured by MCTIMS on unspiked samples using internal normalization. In GIRMS, Δ^{33}S effects are defined by the following equation: Δ^{33}S = δ^{33}S - k δ^{34}S. A resolvable effect is governed by both the precision and reproducibility of the δ^{33}S and δ^{34}S measurements and the k value. It is claimed that effects of 0.05 to 0.20 Δ^{33}S units are resolvable. MI effects in mass 33 using MCTIMS are determined on an unspiked sample using internal normalization. Because mass 33 falls between and adjacent to the masses 32 and 34 that are used for correction, the interpolation correction is over the smallest possible range. A resolvable Δ^{33}S effect depends only on the precision of the measurement. It is direct in that, unlike GIRMS, it does not require measurement of δ^{33}S and δ^{34}S or any assumption as to the value of the parameter k. GIRMS could also potentially use the internal normalization procedure to perform direct measurements of Δ^{33}S. The double spike MCTIMS procedure was evaluated by measuring the international standards (IAEA-S-1, S-2, and S-3). The δ^{34}S values (relative to Vienna Canyon Diablo Troilite (VCDT)) determined were 0.32‰ ± 0.04‰ (1σ, n=4) and 0.31‰ ± 0.13‰ (1σ, n=8) for S-1, 22.65‰ ± 0.04‰ (1σ, n=7) and 22.60‰ ± 0.06‰ (1σ, n=5) for S-2, and 32.47‰ ± 0.07‰ (1σ, n=8) for S-3. The uncertainties reported are comparable to or better than those obtained by GIRMS. Δ^{33}S determinations for S-1, also reported relative to VCDT, ranged from -0.67‰ ± 2.2‰ (1σ) to 0.71‰ ± 2.1‰ (1σ) and averaged 0.0028‰ ± 0.55‰ (1σ, n=6), suggesting there is no MI effect (Δ^{33}S = 0). Although the MCTIMS procedure requires the use of a mass fractionation law, previous work on MD standards showed that the change in fractionation during data collection was small (2 to 3‰), and thus the correction required was small and unlikely to produce measurable artifacts in Δ^{33}S.
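The GIRMS-style definition quoted above is easy to state in code; the k value of 0.515 is the commonly assumed mass-dependent slope, and the sample values are hypothetical:

```python
# Capital-delta 33S as defined for GIRMS in the abstract:
# D33S = d33S - k * d34S, with k ~ 0.515 for mass-dependent fractionation.
def cap_delta_33S(d33S, d34S, k=0.515):
    return d33S - k * d34S

# A purely mass-dependent sample should give D33S ~ 0 per mil.
print(cap_delta_33S(d33S=11.64, d34S=22.60))
```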
ELECTRODYNAMIC CORRECTIONS TO MAGNETIC MOMENT OF ELECTRON
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulehla, I.
1960-01-01
Values obtained for fourth-order corrections to the magnetic moment of the electron were compared and recalculated. The regularization for small momenta was modified so that each diverging integral was regularized by expanding the denominator by an infinitely small part. The value obtained for the magnetic moment, μ = μ₀(1 + α/2π − 0.328 α²/π²), agreed with that of Petermann. (M.C.G.)
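The quoted series is straightforward to evaluate numerically; a quick check of the fourth-order value:

```python
import math

alpha = 1 / 137.035999   # fine-structure constant (approximate)

# mu/mu_0 = 1 + alpha/(2*pi) - 0.328 * alpha**2 / pi**2
correction = alpha / (2 * math.pi) - 0.328 * alpha**2 / math.pi**2
print(f"mu/mu_0 = {1 + correction:.9f}")   # ~1.00115964
```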
Improved method for fluorescence cytometric immunohematology testing.
Roback, John D; Barclay, Sheilagh; Hillyer, Christopher D
2004-02-01
A method for accurate immunohematology testing by fluorescence cytometry (FC) was previously described. Nevertheless, the use of vacuum filtration to wash RBCs and a standard-flow cytometer for data acquisition hindered efforts to incorporate this method into an automated platform. A modified procedure was developed that used low-speed centrifugation of 96-well filter plates for RBC staining. Small-footprint benchtop capillary cytometers (PCA and PCA-96, Guava Technologies, Inc.) were used for data acquisition. Authentic clinical samples from hospitalized patients were tested for ABO group and the presence of D antigen (n = 749) as well as for the presence of RBC alloantibodies (n = 428). Challenging samples with mixed-field reactions and weak antibodies were included. Results were compared to those obtained by column agglutination technology (CAT), and discrepancies were resolved by standard tube methods. Detailed investigations of FC sensitivity and reproducibility were also performed. The modified FC method with the PCA determined the correct ABO group and D type for 98.7 percent of 520 samples, compared to 98.8 percent for CAT (p > 0.05). No-type-determined (NTD) rates were 1.2 percent for both methods. In testing for unexpected alloantibodies, FC determined the correct result for 98.6 percent of 215 samples, compared to 96.3 percent for CAT (p > 0.05). When samples were automatically acquired in the 96-well plate format with the PCA-96, 98.7 percent of 229 samples had correct ABO group and D type determined by FC, compared to 97.4 percent for CAT (p > 0.05). NTD rates were 0.9 and 2.6 percent, respectively. Antibody screens were accurate for 99.1 percent of 213 samples with the PCA-96, compared to 99.5 percent for CAT (p > 0.05). Further investigations demonstrated that FC with the PCA-96 was better than CAT at detecting weak anti-A (p < 0.0001) and alloantibodies. An improved method for FC immunohematology testing has been described. This assay was comparable in accuracy to standard CAT techniques, but had better sensitivity for detecting weak antibodies and was superior in detecting mixed-field reactions (p < 0.005). The FC method demonstrated excellent reproducibility. The compatibility of this assay with the PCA-96 capillary cytometer with plate-handling capabilities should simplify development of a completely automated platform.
Channel Capacity Calculation at Large SNR and Small Dispersion within Path-Integral Approach
NASA Astrophysics Data System (ADS)
Reznichenko, A. V.; Terekhov, I. S.
2018-04-01
We consider the optical fiber channel modelled by the nonlinear Schrödinger equation with additive white Gaussian noise. Using the Feynman path-integral approach for the model with small dispersion, we find the first nonzero corrections to the conditional probability density function and the channel capacity estimations at large signal-to-noise ratio. We demonstrate that the correction to the channel capacity in the small dimensionless dispersion parameter is quadratic and positive, therefore increasing the earlier calculated capacity for a nondispersive nonlinear optical fiber channel in the intermediate power region. Also, for the small dispersion case we find analytical expressions for simple correlators of the output signals in our noisy channel.
Liu, Hui; Yang, Yuelian; Cui, Jinghua; Liu, Lanzheng; Liu, Huiyuan; Hu, Guangchun; Shi, Yuwen; Li, Jian
2013-07-01
A membrane filter (MF) method was evaluated for its suitability for qualitative and quantitative analyses of Cronobacter spp. in drinking water, using pure strains of Cronobacter and non-Cronobacter species and samples spiked with chlorinated Cronobacter sakazakii ATCC 29544. Applicability was verified by the following tests: for pure strains, the sensitivity and the specificity were both 100%; for spiked samples, the MF method recovered 82.8 ± 10.4% of chlorinated ATCC 29544 cells. The MF method was also applied to screen for Cronobacter spp. in drinking water samples from municipal water supplies on premises (MWSP) and small community water supplies on premises (SCWSP). The isolation rate of Cronobacter spp. from SCWSP samples was 31/114, significantly higher than that from MWSP samples (1/131). In addition, the study confirmed the possibility of using total coliform as an indicator of the contamination level of Cronobacter spp. in drinking water; the correct positive rate was 96%.
ICESat laser altimetry over small mountain glaciers
NASA Astrophysics Data System (ADS)
Treichler, Désirée; Kääb, Andreas
2016-09-01
Using sparsely glaciated southern Norway as a case study, we assess the potential and limitations of ICESat laser altimetry for analysing regional glacier elevation change in rough mountain terrain. Differences between ICESat GLAS elevations and reference elevation data are plotted over time to derive a glacier surface elevation trend for the ICESat acquisition period 2003-2008. We find spatially varying biases between ICESat and three tested digital elevation models (DEMs): the Norwegian national DEM, the SRTM DEM, and a high-resolution lidar DEM. For regional glacier elevation change, the spatial inconsistency of reference DEMs - a result of spatio-temporal merging - has the potential to significantly affect or dilute trends. Elevation uncertainties of all three tested DEMs exceed the ICESat elevation uncertainty by an order of magnitude, and thus limit the accuracy of the method more than ICESat uncertainty does. ICESat matches the glacier size distribution of the study area well and measures small ice patches not commonly monitored in situ. The sample is large enough for spatial and thematic subsetting. Vertical offsets to ICESat elevations vary for different glaciers in southern Norway due to spatially inconsistent reference DEM age. We introduce a per-glacier correction that removes these spatially varying offsets and considerably increases trend significance. Only after application of this correction do individual campaigns fit observed in situ glacier mass balance. Our correction also has the potential to improve glacier trend significance for other causes of spatially varying vertical offsets, for instance due to radar penetration into ice and snow for the SRTM DEM, or as a consequence of the mosaicking and merging that is common for national or global DEMs. After correction of reference elevation bias, we find that ICESat provides a robust and realistic estimate of a moderately negative glacier mass balance of around -0.36 ± 0.07 m ice per year. This regional estimate agrees well with the heterogeneous but overall negative in situ glacier mass balance observed in the area.
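A schematic version of the per-glacier offset correction, assuming a toy table of elevation differences; each glacier's median offset is removed before fitting the regional trend:

```python
import numpy as np

# Hypothetical table: glacier id, acquisition time (decimal year), and
# ICESat-minus-DEM elevation difference (m) for each footprint.
glacier = np.array([0, 0, 0, 1, 1, 1, 1])
t = np.array([2003.8, 2005.9, 2007.9, 2003.8, 2004.9, 2006.9, 2008.0])
dh = np.array([4.1, 3.4, 2.7, -6.0, -6.5, -6.9, -7.8])

# Per-glacier correction: remove each glacier's median offset so spatially
# varying DEM biases do not leak into the regional trend.
dh_corr = dh.copy()
for g in np.unique(glacier):
    sel = glacier == g
    dh_corr[sel] -= np.median(dh[sel])

trend = np.polyfit(t, dh_corr, 1)[0]   # m/yr of surface elevation change
print(f"regional elevation trend: {trend:+.2f} m/yr")
```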
NASA Astrophysics Data System (ADS)
Reveil, Mardochee; Sorg, Victoria C.; Cheng, Emily R.; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O.
2017-09-01
This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.
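As a concrete illustration of how such correction factors are applied, here is the textbook thin-sample four-point probe relation with the geometry factor made explicit; the measured values and the finite-size factor of 4.20 are hypothetical:

```python
import math

def sheet_resistance(V, I, correction_factor=math.pi / math.log(2)):
    """Four-point probe sheet resistance R_s = f * V / I (ohm/sq).

    For a thin, laterally infinite sample with equally spaced collinear
    probes, f = pi/ln(2) ~ 4.532; finite sample size and asymmetric probe
    placement replace f with a tabulated or derived correction factor.
    """
    return correction_factor * V / I

V, I = 2.1e-3, 1.0e-3                                   # volts, amps
print(sheet_resistance(V, I))                           # ideal geometry
print(sheet_resistance(V, I, correction_factor=4.20))   # hypothetical finite-size f
```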
Irurita Olivares, Javier; Alemán Aguilera, Inmaculada
2016-11-01
Sex estimation of juveniles in the Physical and Forensic Anthropology context is currently a task with serious difficulties because the discriminatory bone characteristics are minimal until puberty. In addition, the small number of osteological collections of children available for research has made it difficult to develop effective methodologies. This study tested the characteristics of the ilium and jaw proposed by Schutkowski in 1993 for sex estimation in subadults. The study sample consisted of 109 boys and 76 girls, ranging in age from 5 months of gestation to 6 years, from the identified osteological collection of Granada (Spain). For the analysis and interpretation of the results, we propose changes relative to previous studies, because we believe those studies contained methodological errors relating to the calculation of probabilities of correct assignment and the sex distribution of the sample. The results showed probabilities of correct assignment much lower than those obtained by Schutkowski as well as by other authors. The best results were obtained with the angle and depth of the sciatic notch, with 0.73 and 0.80 probability of correct assignment, respectively, if the male trait was observed. The results obtained with the other criteria were too poor to be valid in the context of Physical or Forensic Anthropology. From our results, we conclude that the Schutkowski method should not be used in a forensic context, and that the sciatic notch is the most dimorphic trait in subadults and, therefore, the most appropriate for developing more effective methods of sex estimation.
Gritti, Fabrice; Guiochon, Georges
2011-08-05
The corrected heights equivalent to a theoretical plate (HETP) of three 4.6 mm I.D. monolithic Onyx-C18 columns (Onyx, Phenomenex, Torrance, CA) of different lengths (2.5, 5, and 10 cm) are reported for retained (toluene, naphthalene) and non-retained (uracil, caffeine) small molecules. The moments of the peak profiles were measured according to an accurate numerical integration method. Correction for the extra-column contributions was systematically applied. The peak parking method was used to measure the bulk diffusion coefficients of the sample molecules, their longitudinal diffusion terms, and the eddy diffusion term of the three monolithic columns. The experimental results demonstrate that the maximum efficiency was 60,000 plates/m for retained compounds. The column length has a large impact on the plate height of non-retained species. These observations were unambiguously explained by a large trans-column eddy diffusion term in the van Deemter HETP equation. This large trans-rod eddy diffusion term is due to the combination of a large trans-rod velocity bias (≃3%), a small radial dispersion coefficient in silica monolithic columns, and a poorly designed distribution and collection of the sample streamlets at the inlet and outlet of the monolithic rod. Improving the performance of large I.D. monolithic columns will require (1) a detailed knowledge of the actual flow distribution across and along these monolithic rods and (2) the design of appropriate inlet and outlet distributors that minimize the impact of the radial flow heterogeneity on band broadening.
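The van Deemter decomposition referred to above can be illustrated with a small fit; the (velocity, plate height) data below are invented stand-ins:

```python
import numpy as np

u = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])          # linear velocity, mm/s
H = np.array([28.0, 21.0, 18.5, 18.3, 18.8, 20.5])    # plate height, um

# H(u) = A + B/u + C*u (eddy, longitudinal, mass-transfer terms) is linear
# in (A, B, C), so ordinary least squares suffices.
M = np.column_stack([np.ones_like(u), 1.0 / u, u])
(A, B, C), *_ = np.linalg.lstsq(M, H, rcond=None)

u_opt = np.sqrt(B / C)              # velocity minimizing H
H_min = A + 2 * np.sqrt(B * C)
print(f"A={A:.1f}, B={B:.1f}, C={C:.2f}, u_opt={u_opt:.2f} mm/s, H_min={H_min:.1f} um")
```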
Meta-analysis of alcohol price and income elasticities – with corrections for publication bias
2013-01-01
Background This paper contributes to the evidence-base on prices and alcohol use by presenting meta-analytic summaries of price and income elasticities for alcohol beverages. The analysis improves on previous meta-analyses by correcting for outliers and publication bias. Methods Adjusting for outliers is important to avoid assigning too much weight to studies with very small standard errors or large effect sizes. Trimmed samples are used for this purpose. Correcting for publication bias is important to avoid giving too much weight to studies that reflect selection by investigators or others involved with publication processes. Cumulative meta-analysis is proposed as a method to avoid or reduce publication bias, resulting in more robust estimates. The literature search obtained 182 primary studies for aggregate alcohol consumption, which exceeds the database used in previous reviews and meta-analyses. Results For individual beverages, corrected price elasticities are smaller (less elastic) by 28-29 percent compared with consensus averages frequently used for alcohol beverages. The average price and income elasticities are: beer, -0.30 and 0.50; wine, -0.45 and 1.00; and spirits, -0.55 and 1.00. For total alcohol, the price elasticity is -0.50 and the income elasticity is 0.60. Conclusions These new results imply that attempts to reduce alcohol consumption through price or tax increases will be less effective or more costly than previously claimed. PMID:23883547
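A minimal sketch of the outlier-trimmed, inverse-variance pooling idea (not the authors' exact estimator); the study-level elasticities and standard errors are fabricated for illustration:

```python
import numpy as np

# Hypothetical study-level price elasticities for one beverage and their SEs.
beta = np.array([-0.18, -0.25, -0.31, -0.35, -0.42, -0.55, -1.20])
se = np.array([0.10, 0.08, 0.09, 0.12, 0.11, 0.15, 0.05])

# Trim extreme effect sizes before pooling, so single outliers (or studies
# with implausibly small SEs) cannot dominate the weighted mean.
lo, hi = np.percentile(beta, [10, 90])
keep = (beta >= lo) & (beta <= hi)

w = 1.0 / se[keep] ** 2                       # inverse-variance weights
pooled = np.sum(w * beta[keep]) / np.sum(w)
print(f"pooled price elasticity: {pooled:.2f}")
```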
Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro
2011-01-01
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed. PMID:21289029
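A schematic of the weight construction, assuming simulated covariates and censoring; real analyses would model censoring over time, but the core step — inverting the modeled probability of remaining uncensored — looks like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 2))              # common predictors of censoring/outcome
# Artificial censoring indicator (1 = censored), depending on X.
p_cens = 1 / (1 + np.exp(-(-2.0 + 0.8 * X[:, 0])))
censored = rng.binomial(1, p_cens)

# Model the probability of remaining uncensored given covariates, then
# weight uncensored participants by the inverse of that probability.
model = LogisticRegression().fit(X, censored)
p_uncens = 1 - model.predict_proba(X)[:, 1]
weights = np.where(censored == 0, 1 / p_uncens, 0.0)

# Very large weights flag the near-positivity violations that make IPCW
# unstable in small samples; weight truncation is a common remedy.
print(weights.max(), np.percentile(weights[weights > 0], 99))
```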
Ferrazzi, Giulio; Kuklisova Murgasova, Maria; Arichi, Tomoki; Malamateniou, Christina; Fox, Matthew J; Makropoulos, Antonios; Allsop, Joanna; Rutherford, Mary; Malik, Shaihan; Aljabar, Paul; Hajnal, Joseph V
2014-11-01
There is growing interest in exploring fetal functional brain development, particularly with Resting State fMRI. However, during a typical fMRI acquisition, the womb moves due to maternal respiration and the fetus may perform large-scale and unpredictable movements. Conventional fMRI processing pipelines, which assume that brain movements are infrequent or at least small, are not suitable. Previous published studies have tackled this problem by adopting conventional methods and discarding as much as 40% or more of the acquired data. In this work, we developed and tested a processing framework for fetal Resting State fMRI, capable of correcting gross motion. The method comprises bias field and spin history corrections in the scanner frame of reference, combined with slice to volume registration and scattered data interpolation to place all data into a consistent anatomical space. The aim is to recover an ordered set of samples suitable for further analysis using standard tools such as Group Independent Component Analysis (Group ICA). We have tested the approach using simulations and in vivo data acquired at 1.5 T. After full motion correction, Group ICA performed on a population of 8 fetuses extracted 20 networks, 6 of which were identified as matching those previously observed in preterm babies.
Twenty years of experience with particulate silicone in plastic surgery.
Planas, J; del Cacho, C
1992-01-01
The use of particulate silicone in plastic surgery involves the introduction of solid silicone into the body. The silicone is in small pieces in order for it to adapt to the shape of the defect. This way large quantities can be introduced through small incisions. It is also possible to distribute the silicone particles from outside the skin to make the corrections more regular. This method has been very useful for correcting post-traumatic depressions in the face and all areas where the depression has a rigid back support. We consider it the treatment of choice for correcting the funnel chest deformity.
Open-target sparse sensing of biological agents using DNA microarray
2011-01-01
Background Current biosensors are designed to target and react to specific nucleic acid sequences or structural epitopes. These 'target-specific' platforms require creation of new physical capture reagents when new organisms are targeted. An 'open-target' approach to DNA microarray biosensing is proposed and substantiated using laboratory generated data. The microarray consisted of 12,900 25 bp oligonucleotide capture probes derived from a statistical model trained on randomly selected genomic segments of pathogenic prokaryotic organisms. Open-target detection of organisms was accomplished using a reference library of hybridization patterns for three test organisms whose DNA sequences were not included in the design of the microarray probes. Results A multivariate mathematical model based on partial least squares regression (PLSR) was developed to detect the presence of three test organisms in mixed samples. When all 12,900 probes were used, the model correctly detected the signature of the three test organisms in all mixed samples (mean R² = 0.76, CI = 0.95), with a 6% false positive rate. A sampling algorithm was then developed to sparsely sample the probe space for a minimal number of probes required to capture the hybridization imprints of the test organisms. The PLSR detection model was capable of correctly identifying the presence of the three test organisms in all mixed samples using only 47 probes (mean R² = 0.77, CI = 0.95) with nearly 100% specificity. Conclusions We conceived an 'open-target' approach to biosensing, and hypothesized that a relatively small, non-specifically designed DNA microarray is capable of identifying the presence of multiple organisms in mixed samples. Coupled with a mathematical model applied to laboratory generated data, and sparse sampling of capture probes, the prototype microarray platform was able to capture the signature of each organism in all mixed samples with high sensitivity and specificity. It was demonstrated that this new approach to biosensing closely follows the principles of sparse sensing. PMID:21801424
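A toy version of the detection setup using scikit-learn's PLS regression; the simulated imprints and mixture design below are stand-ins for the laboratory hybridization data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_probes, n_organisms = 60, 200, 3

# Simulated hybridization intensities: each organism leaves a fixed imprint
# over the probe set plus noise; Y holds the mixture proportions.
imprints = rng.normal(size=(n_organisms, n_probes))
Y = rng.uniform(0, 1, size=(n_samples, n_organisms))
X = Y @ imprints + 0.3 * rng.normal(size=(n_samples, n_probes))

pls = PLSRegression(n_components=5).fit(X, Y)
print("R^2 on training mixtures:", round(pls.score(X, Y), 3))
```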
DOE Office of Scientific and Technical Information (OSTI.GOV)
DOE/NV
1999-03-26
The Corrective Action Investigation Plan for Corrective Action Unit 428, Area 3 Septic Waste Systems 1 and 5, has been developed in accordance with the Federal Facility Agreement and Consent Order that was agreed to by the U.S. Department of Energy, Nevada Operations Office; the State of Nevada Division of Environmental Protection; and the U.S. Department of Defense. Corrective Action Unit 428 consists of Corrective Action Sites 03-05-002-SW01 and 03-05-002-SW05, respectively known as Area 3 Septic Waste System 1 and Septic Waste System 5. This Corrective Action Investigation Plan is used in combination with the Work Plan for Leachfield Corrective Action Units: Nevada Test Site and Tonopah Test Range, Nevada, Rev. 1 (DOE/NV, 1998c). The Leachfield Work Plan was developed to streamline investigations at leachfield Corrective Action Units by incorporating management, technical, quality assurance, health and safety, public involvement, field sampling, and waste management information common to a set of Corrective Action Units with similar site histories and characteristics into a single document that can be referenced. This Corrective Action Investigation Plan provides investigative details specific to Corrective Action Unit 428. A system of leachfields and associated collection systems was used for wastewater disposal at Area 3 of the Tonopah Test Range until a consolidated sewer system was installed in 1990 to replace the discrete septic waste systems. Operations within various buildings at Area 3 generated sanitary and industrial wastewaters potentially contaminated with contaminants of potential concern and disposed of in septic tanks and leachfields. Corrective Action Unit 428 is composed of two leachfield systems in the northern portion of Area 3. Based on site history collected to support the Data Quality Objectives process, contaminants of potential concern for the site include oil/diesel-range total petroleum hydrocarbons, and Resource Conservation and Recovery Act characteristic volatile organic compounds, semivolatile organic compounds, and metals. A limited number of samples will be analyzed for gamma-emitting radionuclides and isotopic uranium from four of the septic tanks, and additionally if radiological field screening levels are exceeded. Additional samples will be analyzed for geotechnical and hydrological properties, and a bioassessment may be performed. The technical approach for investigating this Corrective Action Unit consists of the following activities: (1) Perform video surveys of the discharge and outfall lines. (2) Collect samples of material in the septic tanks. (3) Conduct exploratory trenching to locate and inspect subsurface components. (4) Collect subsurface soil samples in areas of the collection system, including the septic tanks and the outfall ends of distribution boxes. (5) Collect subsurface soil samples underlying the leachfield distribution pipes via trenching. (6) Collect surface and near-surface samples near potential locations of the Acid Sewer Outfall if the Septic Waste System 5 Leachfield cannot be located. (7) Field screen samples for volatile organic compounds, total petroleum hydrocarbons, and radiological activity. (8) Drill boreholes and collect subsurface soil samples if required. (9) Analyze samples for total volatile organic compounds, total semivolatile organic compounds, total Resource Conservation and Recovery Act metals, and total petroleum hydrocarbons (oil/diesel-range organics); a limited number of samples from particular septic tanks will be analyzed for gamma-emitting radionuclides and isotopic uranium if radiological field screening levels are exceeded. (10) Collect samples from native soils beneath the distribution system and analyze for geotechnical/hydrologic parameters. (11) Collect and analyze bioassessment samples at the discretion of the Site Supervisor if total petroleum hydrocarbons exceed field-screening levels.
Photometric Modeling of Simulated Surface-Resolved Bennu Images
NASA Astrophysics Data System (ADS)
Golish, D.; DellaGiustina, D. N.; Clark, B.; Li, J. Y.; Zou, X. D.; Bennett, C. A.; Lauretta, D. S.
2017-12-01
The Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) is a NASA mission to study and return a sample of asteroid (101955) Bennu. Imaging data from the mission will be used to develop empirical surface-resolved photometric models of Bennu at a series of wavelengths. These models will be used to photometrically correct panchromatic and color base maps of Bennu, compensating for variations due to shadows and photometric angle differences, thereby minimizing seams in mosaicked images. Well-corrected mosaics are critical to the generation of a global hazard map and a global 1064-nm reflectance map which predicts LIDAR response. These data products directly feed into the selection of a site from which to safely acquire a sample. We also require photometric correction for the creation of color ratio maps of Bennu. Color ratios maps provide insight into the composition and geological history of the surface and allow for comparison to other Solar System small bodies. In advance of OSIRIS-REx's arrival at Bennu, we use simulated images to judge the efficacy of both the photometric modeling software and the mission observation plan. Our simulation software is based on USGS's Integrated Software for Imagers and Spectrometers (ISIS) and uses a synthetic shape model, a camera model, and an empirical photometric model to generate simulated images. This approach gives us the flexibility to create simulated images of Bennu based on analog surfaces from other small Solar System bodies and to test our modeling software under those conditions. Our photometric modeling software fits image data to several conventional empirical photometric models and produces the best fit model parameters. The process is largely automated, which is crucial to the efficient production of data products during proximity operations. The software also produces several metrics on the quality of the observations themselves, such as surface coverage and the completeness of the data set for evaluating the phase and disk functions of the surface. Application of this software to simulated mission data has revealed limitations in the initial mission design, which has fed back into the planning process. The entire photometric pipeline further serves as an exercise of planned activities for proximity operations.
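One conventional empirical model of the kind such a pipeline fits is Minnaert's; a sketch of fitting it in log space and correcting pixels to a standard geometry, with invented photometric angles and I/F values (the mission software's actual models and parameters are not specified here):

```python
import numpy as np

# Hypothetical image samples: incidence i, emission e (degrees) and the
# observed radiance factor I/F for each pixel.
i = np.radians(np.array([20.0, 35.0, 50.0, 60.0, 70.0]))
e = np.radians(np.array([10.0, 25.0, 40.0, 55.0, 65.0]))
iof = np.array([0.058, 0.049, 0.037, 0.026, 0.017])

# Minnaert model: I/F = A * mu0^k * mu^(k-1), i.e. (I/F)*mu = A*(mu0*mu)^k,
# which is linear in log space.
mu0, mu = np.cos(i), np.cos(e)
k, logA = np.polyfit(np.log(mu0 * mu), np.log(iof * mu), 1)

# Photometric correction to a standard geometry (e.g. i = 30 deg, e = 0).
mu0_std, mu_std = np.cos(np.radians(30)), 1.0
corrected = iof * (mu0_std**k * mu_std**(k - 1)) / (mu0**k * mu**(k - 1))
print(f"k = {k:.2f}, corrected I/F = {np.round(corrected, 4)}")
```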
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-11
... size may be reduced by the finite population correction factor. The finite population correction is a statistical formula utilized to determine sample size where the population is considered finite rather than... program may notify us and the annual sample size will be reduced by the finite population correction...
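The notice does not spell the formula out, but the standard finite population correction for a required sample size is easy to state; a sketch with made-up numbers:

```python
def fpc_adjusted_sample_size(n0, N):
    """Reduce an infinite-population sample size n0 for a finite population N.

    Standard finite population correction: n = n0 / (1 + (n0 - 1) / N).
    """
    return n0 / (1 + (n0 - 1) / N)

# E.g. a nominal n0 = 384 shrinks to ~322 when the population is only 2000.
print(round(fpc_adjusted_sample_size(n0=384, N=2000)))
```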
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hershey, Ronald L.; Fereday, Wyall; Thomas, James M
Dissolved inorganic carbon (DIC) carbon-14 (14C) ages must be corrected for complex chemical and physical reactions and processes that change the amount of 14C in groundwater as it flows from recharge to downgradient areas. Because of these reactions, DIC 14C can produce unrealistically old ages and long groundwater travel times that may, or may not, agree with travel times estimated by other methods. Dissolved organic carbon (DOC) 14C ages are often younger than DIC 14C ages because there are few chemical reactions or physical processes that change the amount of DOC 14C in groundwater. However, there are several issues that create uncertainty in DOC 14C groundwater ages, including limited knowledge of the initial (A0) DOC 14C in groundwater recharge and potential changes in DOC composition as water moves through an aquifer. This study examines these issues by quantifying A0 DOC 14C in recharge areas of southern Nevada groundwater flow systems and by evaluating changes in DOC composition as water flows from recharge areas to downgradient areas. The effect of these processes on DOC 14C groundwater ages is evaluated, and DOC and DIC 14C ages are then compared along several southern Nevada groundwater flow paths. Twenty-seven groundwater samples were collected from springs and wells in southern Nevada in upgradient, midgradient, and downgradient locations. DOC 14C for upgradient samples ranged from 96 to 120 percent modern carbon (pmc) with an average of 106 pmc, verifying modern DOC 14C ages in recharge areas, which decreases uncertainty in DOC 14C A0 values, groundwater ages, and travel times. The HPLC spectra of groundwater along a flow path in the Spring Mountains show the same general pattern, indicating that the DOC compound composition does not change along this flow path. Although DOC concentration decreases from recharge-area to downgradient groundwater, the organic compounds are similar, indicating that DOC 14C is unaffected by other processes such as microbial degradation. A small amount of organic carbon was leached from crushed volcanic and carbonate aquifer outcrop rock in rock-leaching experiments. The leached DOC was high in 14C (75 pmc for carbonate rocks, 91 pmc for volcanic), suggesting that the leached DOC likely came from microbes in the rock samples. The small amount of DOC and high 14C indicate that the amount of old organic carbon in these rocks is low, so there should be minimal impact on groundwater DOC 14C ages. Based on the results from this study, DOC 14C ages do not require additional corrections. Several correction models were applied to DIC 14C ages to correct for water-rock reactions along two carbonate and two volcanic flow paths, and the corresponding travel times were compared with DOC 14C travel times. The DOC 14C travel times were hundreds to thousands of years shorter than uncorrected and corrected DIC 14C travel times, except for the upper section of one carbonate flow path. DOC 14C travel times ranged from 400 to 5,400 years, as compared with DIC 14C travel times that ranged from modern to 20,900 years. The DIC 14C ages are greatly influenced by carbonate mineral and gas reactions and other processes such as matrix diffusion, isotope exchange, or adsorption, which are not always adequately accounted for in DIC 14C groundwater age correction models.
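For orientation, an apparent 14C age follows from the measured and initial activities; the sketch below uses the conventional Libby mean life (8033 yr) and the ~106 pmc recharge value reported above, with an invented downgradient activity:

```python
import math

def c14_age(a_pmc, a0_pmc):
    """Apparent 14C age (years) from measured activity a and initial A0,
    both in percent modern carbon; conventional Libby mean life 8033 yr."""
    return -8033.0 * math.log(a_pmc / a0_pmc)

# A0 = 106 pmc in recharge; a hypothetical downgradient sample at 55 pmc
# gives an apparent travel time of roughly 5,300 years.
print(round(c14_age(55.0, 106.0)))
```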
Zhong, Sheng; McPeek, Mary Sara
2016-01-01
We consider the problem of genetic association testing of a binary trait in a sample that contains related individuals, where we adjust for relevant covariates and allow for missing data. We propose CERAMIC, an estimating equation approach that can be viewed as a hybrid of logistic regression and linear mixed-effects model (LMM) approaches. CERAMIC extends the recently proposed CARAT method to allow samples with related individuals and to incorporate partially missing data. In simulations, we show that CERAMIC outperforms existing LMM and generalized LMM approaches, maintaining high power and correct type 1 error across a wider range of scenarios. CERAMIC results in a particularly large power increase over existing methods when the sample includes related individuals with some missing data (e.g., when some individuals with phenotype and covariate information have missing genotype), because CERAMIC is able to make use of the relationship information to incorporate partially missing data in the analysis while correcting for dependence. Because CERAMIC is based on a retrospective analysis, it is robust to misspecification of the phenotype model, resulting in better control of type 1 error and higher power than that of prospective methods, such as GMMAT, when the phenotype model is misspecified. CERAMIC is computationally efficient for genomewide analysis in samples of related individuals of almost any configuration, including small families, unrelated individuals and even large, complex pedigrees. We apply CERAMIC to data on type 2 diabetes (T2D) from the Framingham Heart Study. In a genome scan, 9 of the 10 smallest CERAMIC p-values occur in or near either known T2D susceptibility loci or plausible candidates, verifying that CERAMIC is able to home in on the important loci in a genome scan. PMID:27695091
NASA Astrophysics Data System (ADS)
Evans, M. N.; Selmer, K. J.; Breeden, B. T.; Lopatka, A. S.; Plummer, R. E.
2016-09-01
We describe an algorithm to correct for scale compression, runtime drift, and amplitude effects in carbonate and cellulose oxygen and carbon isotopic analyses made on two online continuous flow isotope ratio mass spectrometry (CF-IRMS) systems using gas chromatographic (GC) separation. We validate the algorithm by correcting measurements of samples of known isotopic composition which are not used to estimate the corrections. For carbonate δ13C (δ18O) data, median precision of validation estimates for two reference materials and two calibrated working standards is 0.05‰ (0.07‰); median bias is 0.04‰ (0.02‰) over a range of 49.2‰ (24.3‰). For α-cellulose δ13C (δ18O) data, median precision of validation estimates for one reference material and five working standards is 0.11‰ (0.27‰); median bias is 0.13‰ (-0.10‰) over a range of 16.1‰ (19.1‰). These results are within the 5th-95th percentile range of subsequent routine runtime validation exercises in which one working standard is used to calibrate the other. Analysis of the relative importance of correction steps suggests that drift and scale-compression corrections are most reliable and valuable. If validation precisions are not already small, routine cross-validated precision estimates are improved by up to 50% (80%). The results suggest that correction for systematic error may enable these particular CF-IRMS systems to produce δ13C and δ18O carbonate and cellulose isotopic analyses with higher validated precision, accuracy, and throughput than is typically reported for these systems. The correction scheme may be used in support of replication-intensive research projects in paleoclimatology and other data-intensive applications within the geosciences.
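A toy version of the runtime drift step, assuming a drift-monitor standard measured repeatedly through the run; the positions and values are invented:

```python
import numpy as np

# Run positions and measured d13C (per mil) of a drift-monitor standard
# inserted throughout the sequence; its assigned value is -10.45.
pos_std = np.array([1, 10, 20, 30, 40])
meas_std = np.array([-10.31, -10.38, -10.44, -10.52, -10.60])
true_std = -10.45

# Model instrumental drift as linear in run position and subtract it.
drift = np.polyfit(pos_std, meas_std - true_std, 1)

def drift_correct(delta, position):
    return delta - np.polyval(drift, position)

print(round(drift_correct(-24.91, position=25), 2))
```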
Bolte, John F B
2016-09-01
Personal exposure measurements of radio frequency electromagnetic fields are important for epidemiological studies and for developing prediction models. Minimizing biases and uncertainties and handling spatial and temporal variability are important aspects of these measurements. This paper reviews the lessons learnt from testing the different types of exposimeters and from personal exposure measurement surveys performed between 2005 and 2015. Applying them will improve the comparability and ranking of exposure levels for different microenvironments, activities, or (groups of) people, such that epidemiological studies are better capable of finding potentially weak correlations with health effects. Over 20 papers have been published on how to prevent biases and minimize uncertainties due to: mechanical errors; design of hardware and software filters; anisotropy; and influence of the body. A number of biases can be corrected for by determining multiplicative correction factors. In addition, a good protocol on how to wear the exposimeter, a sufficiently small sampling interval, and a sufficiently long measurement duration will minimize biases. Corrections to biases are possible for: non-detects (through the detection limit), erroneous manufacturer calibration, and temporal drift. Corrections not deemed necessary, because no significant biases have been observed, are: linearity in response and resolution. Corrections difficult to perform after measurements are for: modulation/duty-cycle sensitivity; out-of-band response (cross talk); and temperature and humidity sensitivity. Corrections not possible to perform after measurements are for: multiple-signal detection in one band; flatness of response within a frequency band; and anisotropy to waves of different elevation angle. An analysis of 20 microenvironmental surveys showed that early studies using exposimeters with logarithmic detectors overestimated exposure to signals with bursts, such as uplink signals from mobile phones and WiFi appliances. Further, the possible corrections for biases have not been fully applied. The main finding is that if the biases are not corrected for, the actual exposure will on average be underestimated.
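A minimal sketch of two of the correctable biases named above — a multiplicative calibration correction and non-detect substitution at the detection limit; the readings, LOD, and factor are invented:

```python
import numpy as np

# Measured field strengths (V/m) from one exposimeter band; zeros denote
# non-detects below the detection limit (hypothetical LOD = 0.05 V/m).
raw = np.array([0.00, 0.12, 0.07, 0.00, 0.30])
lod = 0.05
calib_factor = 1.25   # multiplicative correction, e.g. from recalibration

# Simple non-detect substitution (LOD/sqrt(2)) followed by calibration.
vals = np.where(raw < lod, lod / np.sqrt(2), raw)
corrected = calib_factor * vals
print(corrected.round(3))
```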
NASA Astrophysics Data System (ADS)
Doherty, W.; Lightfoot, P. C.; Ames, D. E.
2014-08-01
The effects of polynomial interpolation and internal standardization drift corrections on the inter-measurement dispersion (statistical) of isotope ratios measured with a multi-collector plasma mass spectrometer were investigated using the (analyte, internal standard) isotope systems (Ni, Cu), (Cu, Ni), (Zn, Cu), (Zn, Ga), (Sm, Eu), (Hf, Re), and (Pb, Tl). The performance of five different correction factors was compared using a (statistical) range-based merit function ωm, which measures the accuracy and inter-measurement range of the instrument calibration. The frequency distribution of optimal correction factors over two hundred data sets uniformly favored three particular correction factors, while the remaining two correction factors accounted for a small but still significant contribution to the reduction of the inter-measurement dispersion. Application of the merit function is demonstrated using the detection of Cu and Ni isotopic fractionation in laboratory and geologic-scale chemical reactor systems. Solvent extraction (diphenylthiocarbazone for Cu and Pb; dimethylglyoxime for Ni) was used either to isotopically fractionate the metal during extraction using the method of competition or to isolate the Cu and Ni from the sample (sulfides and associated silicates). In the best case, differences in isotopic composition of ± 3 in the fifth significant figure could be routinely and reliably detected for Cu65/63 and Ni61/62. One of the internal standardization drift correction factors uses a least squares estimator to obtain a linear functional relationship between the measured analyte and internal standard isotope ratios. Graphical analysis demonstrates that the points on these graphs are defined by highly non-linear parametric curves and not by two linearly correlated quantities, which is the usual interpretation of these graphs. The success of this particular internal standardization correction factor was found in some cases to be due to a fortuitous, scale-dependent, parametric-curve effect.
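A schematic of the least-squares internal standardization idea described above (not the authors' exact estimator): regress the analyte ratio on the internal standard ratio and project each measurement back to the certified IS value; all ratios below are invented:

```python
import numpy as np

# Hypothetical time series of measured isotope ratios for the analyte
# (e.g. a Cu ratio) and the admixed internal standard (e.g. a Ni ratio),
# both drifting together over the run.
r_analyte = np.array([0.44563, 0.44571, 0.44580, 0.44592, 0.44601])
r_istd = np.array([0.13862, 0.13865, 0.13868, 0.13872, 0.13875])
r_istd_true = 0.13860                     # certified IS ratio

# Least-squares line through (IS ratio, analyte ratio) pairs; remove the
# common drift by projecting each point back to the certified IS ratio.
slope, intercept = np.polyfit(r_istd, r_analyte, 1)
corrected = r_analyte - slope * (r_istd - r_istd_true)

# Relative dispersion before and after the correction.
print(r_analyte.std() / r_analyte.mean(), corrected.std() / corrected.mean())
```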
The accuracy of parent-reported height and weight for 6-12 year old U.S. children.
Wright, Davene R; Glanz, Karen; Colburn, Trina; Robson, Shannon M; Saelens, Brian E
2018-02-12
Previous studies have examined correlations between BMI calculated using parent-reported and directly measured child height and weight. The objective of this study was to validate correction factors for parent-reported child measurements. Concordance between parent-reported and investigator-measured child height, weight, and BMI (kg/m²) among participants in the Neighborhood Impact on Kids Study (n = 616) was examined using the Lin coefficient, where a value of ±1.0 indicates perfect concordance and a value of zero denotes non-concordance. A correction model for parent-reported height, weight, and BMI based on commonly collected demographic information was developed using 75% of the sample. This model was used to estimate corrected measures for the remaining 25% of the sample and to measure concordance between corrected parent-reported and investigator-measured values. Accuracy of corrected values in classifying children as overweight/obese was assessed by sensitivity and specificity. Concordance between parent-reported and measured height, weight, and BMI was low (0.007, -0.039, and -0.005, respectively). Concordance in the corrected test samples improved to 0.752 for height, 0.616 for weight, and 0.227 for BMI. Sensitivity of corrected parent-reported measures for predicting overweight and obesity among children in the test sample decreased from 42.8% to 25.6%, while specificity improved from 79.5% to 88.6%. Correction factors improved concordance for height and weight but did not improve the sensitivity of parent-reported measures for measuring child overweight and obesity. Future research should be conducted using larger and more nationally representative samples that allow researchers to fully explore demographic variance in correction coefficients.
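Lin's concordance coefficient used above has a closed form; a small implementation with made-up height pairs:

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

measured = np.array([120.1, 131.5, 125.0, 140.2, 118.7])   # cm, investigator
reported = np.array([118.0, 133.0, 124.0, 145.0, 115.0])   # cm, parent
print(round(lin_ccc(reported, measured), 3))
```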
Corrections to the Eckhaus' stability criterion for one-dimensional stationary structures
NASA Astrophysics Data System (ADS)
Malomed, B. A.; Staroselsky, I. E.; Konstantinov, A. B.
1989-01-01
Two amendments to the well-known Eckhaus stability criterion for small-amplitude non-linear structures, generated by weak instability of a spatially uniform state of a non-equilibrium one-dimensional system against small perturbations with finite wavelengths, are obtained. Firstly, we evaluate small corrections to the main Eckhaus term which, in contrast to that term, do not have a universal form. Comparison of those non-universal corrections with experimental or numerical results makes it possible to select a more relevant form of an effective nonlinear evolution equation. In particular, the comparison with such results for convective rolls and Taylor vortices gives arguments in favor of the Swift-Hohenberg equation. Secondly, we derive an analog of the Eckhaus criterion for systems that are degenerate in the sense that, in an expansion of their non-linear parts in powers of dynamical variables, the second and third degree terms are absent.
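For reference, the classical leading-order criterion that these amendments refine can be stated for the standard real Ginzburg-Landau amplitude equation (textbook form, not taken from this abstract):

```latex
% Amplitude equation for a weakly unstable one-dimensional system:
%   \partial_t A = \varepsilon A + \partial_x^2 A - |A|^2 A .
% Stationary periodic solutions A_q(x) = \sqrt{\varepsilon - q^2}\, e^{iqx}
% exist for q^2 < \varepsilon, but only the inner band
\[
    q^{2} < \frac{\varepsilon}{3}
\]
% is stable against long-wavelength side-band (Eckhaus) perturbations.
```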
Power in Bayesian Mediation Analysis for Small Sample Research
Miočević, Milica; MacKinnon, David P.; Levy, Roy
2018-01-01
It was suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N ≤ 200. Bayesian methods with diffuse priors had power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N ≥ 100 and effects were small, when N < 60 and effects were large, or when N < 200 and effects were medium. An empirical example from psychology illustrates a Bayesian analysis of the single mediator model, from prior selection to interpretation of results. PMID:29662296
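For comparison with the intervals studied above, a minimal sketch of the frequentist product-of-coefficients estimate with a percentile bootstrap interval; the code is illustrative (x, m, y are NumPy arrays for the predictor, mediator, and outcome) and is not the authors' implementation:

import numpy as np

rng = np.random.default_rng(1)

def ab_estimate(x, m, y):
    # Single-mediator model: a is the slope of M on X; b is the slope
    # of Y on M adjusting for X; the mediated effect is a*b.
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

def percentile_boot_ci(x, m, y, reps=2000, alpha=0.05):
    n = len(x)
    stats = []
    for _ in range(reps):
        i = rng.integers(0, n, n)     # resample cases with replacement
        stats.append(ab_estimate(x[i], m[i], y[i]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi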
García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M
2018-01-01
Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors (k_Qclin,Qmsr) have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of k_Qclin,Qmsr to in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data for in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated for simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. The results of this work lead to important conclusions for the use of detector-specific beam correction factors (k_Qclin,Qmsr) in a treatment planning system. The use of k_Qclin,Qmsr for total scatter factors has an important impact on monitor unit calculation. In contrast, the use of k_Qclin,Qmsr for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, the technique can be applied to other treatment planning systems, detectors, and correction factors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mathew, D; Tanny, S; Parsai, E
2015-06-15
Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may be only weakly dependent on machine-to-machine variations, allowing for determination of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent change of correction factors for the chambers is presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6 FFF and 6 MV beams. The A16 chamber demonstrates a 5% and 3% difference in 6 FFF and 6 MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific correction may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar between machines and may be useful for class-specific reference conditions.
NASA Astrophysics Data System (ADS)
Thomas, Philipp; Straube, Arthur V.; Grima, Ramon
2010-11-01
Chemical reactions inside cells occur in compartment volumes in the range of atto- to femtoliters. Physiological concentrations realized in such small volumes imply low copy numbers of interacting molecules, with the consequence of considerable fluctuations in the concentrations. In contrast, rate equation models are based on the implicit assumption of infinitely large numbers of interacting molecules, or equivalently, that reactions occur in infinite volumes at constant macroscopic concentrations. In this article we compute the finite-volume corrections (or equivalently the finite copy number corrections) to the solutions of the rate equations for chemical reaction networks composed of arbitrarily large numbers of enzyme-catalyzed reactions which are confined inside a small subcellular compartment. This is achieved by applying a mesoscopic version of the quasi-steady-state assumption to the exact Fokker-Planck equation associated with the Poisson representation of the chemical master equation. The procedure yields impressively simple and compact expressions for the finite-volume corrections. We prove that the predictions of the rate equations will always underestimate the actual steady-state substrate concentrations for an enzyme-reaction network confined in a small volume. In particular we show that the finite-volume corrections increase with decreasing subcellular volume, decreasing Michaelis-Menten constants, and increasing enzyme saturation. The magnitude of the corrections depends sensitively on the topology of the network. The predictions of the theory are shown to be in excellent agreement with stochastic simulations for two types of networks typically associated with protein methylation and metabolism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calderon, E; Siergiej, D
2014-06-01
Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes decrease, and no standard exists to resolve this difference in measurement. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source-to-surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the relation between published Monte Carlo correction factors and field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we decreased the deviation between the two detectors from 14.8% to 3.4%.
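A minimal sketch of the daisy-chaining normalization and subsequent correction described above; the readings and the k-factor table are placeholders, not the study's values:

# Daisy-chaining ties a small-cone reading to the 10.4 cm x 10.4 cm
# reference through an intermediate field that the small-field detector
# and a reference-class detector can both measure reliably.
def daisy_chained_output_factor(m_cone, m_inter_small, m_inter_ref, m_ref):
    return (m_cone / m_inter_small) * (m_inter_ref / m_ref)

# Published Monte Carlo correction factors indexed by cone diameter (mm);
# hypothetical numbers for illustration only.
k_edge = {20.0: 0.99, 10.0: 0.97, 5.0: 0.95}

of_corrected = daisy_chained_output_factor(0.62, 0.80, 0.85, 1.00) * k_edge[5.0]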
Yildiz, Yesna O; Eckersley, Robert J; Senior, Roxy; Lim, Adrian K P; Cosgrove, David; Tang, Meng-Xing
2015-07-01
Non-linear propagation of ultrasound creates artifacts in contrast-enhanced ultrasound images that significantly affect both qualitative and quantitative assessments of tissue perfusion. This article describes the development and evaluation of a new algorithm to correct for this artifact. The correction is a post-processing method that estimates and removes the non-linear artifact in the contrast-specific image using the simultaneously acquired B-mode image data. The method was evaluated on carotid artery flow phantoms with large and small vessels containing microbubbles of various concentrations at different acoustic pressures. The algorithm significantly reduces non-linear artifacts while maintaining the contrast signal from bubbles, increasing the contrast-to-tissue ratio by up to 11 dB. Contrast signal from a small vessel 600 μm in diameter, buried in tissue artifact before correction, was recovered after correction. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasper, Ahren W.; Gruey, Zackery B.; Harding, Lawrence B.
Monte Carlo phase space integration (MCPSI) is used to compute full-dimensional and fully anharmonic, but classical, rovibrational partition functions for 22 small- and medium-sized molecules and radicals. Several of the species considered here feature multiple minima and low-frequency nonlocal motions, and efficient sampling of these systems is facilitated using curvilinear (stretch, bend, and torsion) coordinates. The curvilinear-coordinate MCPSI method is demonstrated to be applicable to the treatment of fluxional species with complex rovibrational structures and as many as 21 fully coupled rovibrational degrees of freedom. Trends in the computed anharmonicity corrections are discussed. For many systems, rovibrational anharmonicities at elevated temperatures are shown to vary consistently with the number of degrees of freedom and with temperature once rovibrational coupling and torsional anharmonicity are accounted for. Larger corrections are found for systems with complex vibrational structures, such as systems with multiple large-amplitude modes and/or multiple minima.
Quality Assurance of NCI Thesaurus by Mining Structural-Lexical Patterns
Abeysinghe, Rashmie; Brooks, Michael A.; Talbert, Jeffery; Cui, Licong
2017-01-01
Quality assurance of biomedical terminologies such as the National Cancer Institute (NCI) Thesaurus is an essential part of the terminology management lifecycle. We investigate a structural-lexical approach based on non-lattice subgraphs to automatically identify missing hierarchical relations and missing concepts in the NCI Thesaurus. We mine six structural-lexical patterns exhibited in non-lattice subgraphs: containment, union, intersection, union-intersection, inference-contradiction, and inference-union. Each pattern indicates a potential specific type of error and suggests a potential type of remediation. We found 809 non-lattice subgraphs with these patterns in the NCI Thesaurus (version 16.12d). Domain experts evaluated a random sample of 50 small non-lattice subgraphs, of which 33 were confirmed to contain errors and to make correct suggestions (33/50 = 66%). Of the 25 evaluated subgraphs exhibiting multiple patterns, 22 were verified correct (22/25 = 88%). This shows the effectiveness of our structural-lexical-pattern-based approach in detecting errors and suggesting remediations in the NCI Thesaurus. PMID:29854100
Rank-based permutation approaches for non-parametric factorial designs.
Umlauft, Maria; Konietschke, Frank; Pauly, Markus
2017-11-01
Inference methods for null hypotheses formulated in terms of distribution functions in general non-parametric factorial designs are studied. The methods can be applied to continuous, ordinal, or even ordered categorical data in a unified way, and are based only on ranks. In this set-up, Wald-type statistics and ANOVA-type statistics are the current state of the art. The former is asymptotically exact but rather liberal for small to moderate sample sizes, while the latter is only an approximation and does not possess the correct asymptotic α level under the null. To bridge these gaps, a novel permutation approach is proposed which can be seen as a flexible generalization of the Kruskal-Wallis test to all kinds of factorial designs with independent observations. It is proven that the permutation principle is asymptotically correct while keeping its finite-sample exactness property when data are exchangeable. The results of extensive simulation studies support these theoretical findings. A real data set exemplifies the method's applicability. © 2017 The British Psychological Society.
Huckins, J.N.; Petty, J.D.; Orazio, C.E.; Lebo, J.A.; Clark, R.C.; Gibson, V.L.; Gala, W.R.; Echols, K.R.
1999-01-01
The use of lipid-containing semipermeable membrane devices (SPMDs) is becoming commonplace, but very little sampling-rate data are available for the estimation of ambient contaminant concentrations from analyte levels in exposed SPMDs. We determined the aqueous sampling rates (Rs values; expressed as effective volumes of water extracted daily) of the standard (commercially available design) 1-g triolein SPMD for 15 of the priority pollutant (PP) polycyclic aromatic hydrocarbons (PAHs) at multiple temperatures and concentrations. Under the experimental conditions of this study, recovery-corrected Rs values for PP PAHs ranged from ~1.0 to 8.0 L/d. These values would be expected to be influenced by significant changes (relative to this study) in water temperature, degree of biofouling, and current velocity/turbulence. Included in this paper is a discussion of the effects of temperature and the octanol-water partition coefficient (Kow); the impacts of biofouling and hydrodynamics are reported separately. Overall, SPMDs responded proportionally to aqueous PAH concentrations; i.e., SPMD Rs values and SPMD-water concentration factors were independent of aqueous concentrations. Temperature effects (10, 18, and 26 °C) on Rs values appeared to be complex but were relatively small.
Pagès, Loïc; Picon-Cochard, Catherine
2014-10-01
Our objective was to calibrate a model of root system architecture on several Poaceae species and to assess its value for simulating several 'integrated' traits measured at the root system level: specific root length (SRL), maximum root depth, and root mass. We used the model ArchiSimple, made up of sub-models that represent and combine the basic developmental processes, and an experiment on 13 perennial grassland Poaceae species grown in 1.5-m-deep containers and sampled at two different dates after planting (80 and 120 d). Model parameters were estimated almost independently using small samples of the root systems taken at both dates. The relationships obtained for calibration validated the sub-models and showed species effects on the parameter values. The simulations of integrated traits were reasonably accurate for SRL and good for root depth and root mass at the two dates. Some systematic discrepancies were related to the slight decline of root growth in the last period of the experiment. Because the model allowed correct predictions for a large set of Poaceae species without global fitting, we consider it a suitable tool for linking root traits at different organisation levels. © 2014 INRA. New Phytologist © 2014 New Phytologist Trust.
Dielectric spectroscopy of Dy2O3 doped (K0.5Na0.5)NbO3 piezoelectric ceramics
NASA Astrophysics Data System (ADS)
Mahesh, P.; Subhash, T.; Pamu, D.
2014-06-01
We report the dielectric properties of (K0.5Na0.5)NbO3 (KNN) ceramics doped with x wt% of Dy2O3 (x = 0.0-1.5 wt%), studied using broadband dielectric spectroscopy. X-ray diffraction studies showed the formation of a perovskite structure, signifying that Dy2O3 diffuses into the KNN lattice. Samples doped with x > 0.5 wt% exhibit smaller grain size and lower relative densities. The dielectric properties of KNN ceramics doped with Dy2O3 are enhanced by increasing the Dy3+ content; among the compositions studied, x = 0.5 wt% exhibited the highest dielectric constant and lowest loss at 1 MHz over the temperature range of 30 °C to 400 °C. All the samples exhibit a maximum dielectric constant at the Curie temperature (~326 °C), and a small peak in the dielectric constant at around 165 °C is due to a structural phase transition. At the request of all authors, and by agreement with the Proceedings Editors, a corrected version of this article was published on 19 June 2014. The full text of the Corrigendum is attached to the corrected article PDF file.
Brookes, Emre; Vachette, Patrice; Rocco, Mattia; Pérez, Javier
2016-01-01
Size-exclusion chromatography coupled with SAXS (small-angle X-ray scattering), often performed using a flow-through capillary, should allow direct collection of monodisperse sample data. However, capillary fouling issues and non-baseline-resolved peaks can hamper its efficacy. The UltraScan solution modeler (US-SOMO) HPLC-SAXS (high-performance liquid chromatography coupled with SAXS) module provides a comprehensive framework to analyze such data, starting with a simple linear baseline correction and symmetrical Gaussian decomposition tools [Brookes, Pérez, Cardinali, Profumo, Vachette & Rocco (2013). J. Appl. Cryst. 46, 1823-1833]. In addition to several new features, substantial improvements to both routines have now been implemented, comprising the evaluation of outcomes by advanced statistical tools. The novel integral baseline-correction procedure is based on the more sound assumption that the effect of capillary fouling on scattering increases monotonically with the intensity scattered by the material within the X-ray beam. Overlapping peaks, often skewed because of sample interaction with the column matrix, can now be accurately decomposed using non-symmetrical modified Gaussian functions. As an example, the case of a polydisperse solution of aldolase is analyzed: from heavily convoluted peaks, individual SAXS profiles of tetramers, octamers and dodecamers are extracted and reliably modeled. PMID:27738419
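As a sketch of the peak-decomposition step, skewed elution peaks can be fitted with exponentially modified Gaussians; the exact non-symmetrical modified Gaussian used by US-SOMO may differ, and the frame/intensity arrays here are assumed inputs:

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(t, area, mu, sigma, tau):
    # Exponentially modified Gaussian: a Gaussian of width sigma
    # convolved with an exponential tail of decay constant tau.
    arg = (sigma / tau - (t - mu) / sigma) / np.sqrt(2.0)
    return (area / (2.0 * tau)) * np.exp(sigma**2 / (2.0 * tau**2)
                                         - (t - mu) / tau) * erfc(arg)

def two_peaks(t, a1, m1, s1, k1, a2, m2, s2, k2):
    return emg(t, a1, m1, s1, k1) + emg(t, a2, m2, s2, k2)

# t: frame index; i_tot: total intensity per frame (assumed inputs);
# p0: rough guesses for (area, center, width, tail) of each peak.
# popt, pcov = curve_fit(two_peaks, t, i_tot, p0=p0)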
Effect of Malmquist bias on correlation studies with IRAS data base
NASA Technical Reports Server (NTRS)
Verter, Frances
1993-01-01
The relationships between galaxy properties in the sample of Trinchieri et al. (1989) are reexamined with corrections for Malmquist bias. Linear correlations are tested and linear regressions are fit for log-log plots of L(FIR), L(H-alpha), and L(B), as well as ratios of these quantities. The linear correlations are corrected for Malmquist bias using the method of Verter (1988), in which each galaxy observation is weighted by the inverse of its sampling volume. The linear regressions are corrected for Malmquist bias by a new method invented here, in which each galaxy observation is weighted by its sampling volume. The results of the correlations and regressions are significantly changed in the anticipated sense: the corrected correlation confidences are lower and the corrected slopes of the linear regressions are lower. The elimination of Malmquist bias eliminates the nonlinear rise in luminosity that has caused some authors to hypothesize additional components of FIR emission.
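A minimal sketch of the inverse-sampling-volume weighting used for the correlations (after the method attributed to Verter 1988 above); the v_max values are assumed to be the per-galaxy sampling volumes:

import numpy as np

def weighted_corr(x, y, v_max):
    # Each galaxy is weighted by 1/V_max, the inverse of its sampling
    # volume, to undo the over-representation of luminous objects.
    w = 1.0 / np.asarray(v_max, float)
    w = w / w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    return cov / np.sqrt(np.sum(w * (x - mx) ** 2) * np.sum(w * (y - my) ** 2))

For the regressions, the abstract's new method weights each observation by its sampling volume instead, which would simply replace w above with the (normalized) V_max values themselves.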
Real-time 3D motion tracking for small animal brain PET
NASA Astrophysics Data System (ADS)
Kyme, A. Z.; Zhou, V. W.; Meikle, S. R.; Fulton, R. R.
2008-05-01
High-resolution positron emission tomography (PET) imaging of conscious, unrestrained laboratory animals presents many challenges. Some form of motion correction will normally be necessary to avoid motion artefacts in the reconstruction. The aim of the current work was to develop and evaluate a motion tracking system potentially suitable for use in small animal PET. This system is based on the commercially available stereo-optical MicronTracker S60 which we have integrated with a Siemens Focus-220 microPET scanner. We present measured performance limits of the tracker and the technical details of our implementation, including calibration and synchronization of the system. A phantom study demonstrating motion tracking and correction was also performed. The system can be calibrated with sub-millimetre accuracy, and small lightweight markers can be constructed to provide accurate 3D motion data. A marked reduction in motion artefacts was demonstrated in the phantom study. The techniques and results described here represent a step towards a practical method for rigid-body motion correction in small animal PET. There is scope to achieve further improvements in the accuracy of synchronization and pose measurements in future work.
An overview of the thematic mapper geometric correction system
NASA Technical Reports Server (NTRS)
Beyer, E. P.
1983-01-01
Geometric accuracy specifications for LANDSAT 4 are reviewed and the processing concepts which form the basis of NASA's thematic mapper geometric correction system are summarized for both the flight and ground segments. The flight segment includes the thematic mapper instrument, attitude measurement devices, attitude control, and ephemeris processing. For geometric correction the ground segment uses mirror scan correction data, payload correction data, and control point information to determine where TM detector samples fall on output map projection systems. Then the raw imagery is reformatted and resampled to produce image samples on a selected output projection grid system.
Coplen, Tyler B.; Wassenaar, Leonard I
2015-01-01
Rationale: Although laser absorption spectrometry (LAS) instrumentation is easy to use, its incorporation into laboratory operations is not easy, owing to extensive offline manipulation of comma-separated-values files for outlier detection, between-sample memory correction, nonlinearity (δ-variation with water amount) correction, drift correction, normalization to VSMOW-SLAP scales, and difficulty in performing long-term QA/QC audits. Methods: A Microsoft Access relational-database application, LIMS (Laboratory Information Management System) for Lasers 2015, was developed. It automates LAS data corrections and manages clients, projects, samples, instrument-sample lists, and triple-isotope (δ17O, δ18O, and δ2H values) instrumental data for liquid-water samples. It enables users to (1) graphically evaluate sample injections for variable water yields and high isotope-delta variance; (2) correct for between-sample carryover, instrumental drift, and δ nonlinearity; and (3) normalize final results to VSMOW-SLAP scales. Results: Cost-free LIMS for Lasers 2015 enables users to obtain improved δ17O, δ18O, and δ2H values with liquid-water LAS instruments, even those with under-performing syringes. For example, LAS δ2H(VSMOW) measurements of USGS50 Lake Kyoga (Uganda) water using an under-performing syringe having ±10% variation in water concentration gave +31.7 ± 1.6‰ (2-σ standard deviation), compared with the reference value of +32.8 ± 0.4‰, after correction for variation in δ value with water concentration and between-sample memory, and normalization to the VSMOW-SLAP scale. Conclusions: LIMS for Lasers 2015 enables users to create systematic, well-founded instrument templates, import δ2H, δ17O, and δ18O results, evaluate performance with automatic graphical plots, correct for δ nonlinearity due to variable water concentration, correct for between-sample memory, adjust for drift, perform VSMOW-SLAP normalization, and perform long-term QA/QC audits easily. Published in 2015. This article is a U.S. Government work and is in the public domain in the USA.
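A minimal sketch of the two-point VSMOW-SLAP normalization that tools like this automate; the anchor values are the defined ones (VSMOW = 0‰; SLAP = −55.5‰ for δ18O and −427.5‰ for δ2H), while the measured standard values are illustrative:

def vsmow_slap_normalize(delta_meas, meas_vsmow, meas_slap, true_slap):
    # Linear stretch mapping the measured values of the two reference
    # waters onto their defined values on the VSMOW-SLAP scale.
    slope = true_slap / (meas_slap - meas_vsmow)
    return (delta_meas - meas_vsmow) * slope

# e.g., a sample d18O measured at -10.30 permil, with in-run standards
# measuring +0.40 (VSMOW) and -54.10 (SLAP):
d18o_normalized = vsmow_slap_normalize(-10.30, 0.40, -54.10, -55.5)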
Ellipsometric study of peptide layers - island-like character, depolarization and quasi-absorption
NASA Astrophysics Data System (ADS)
Pápa, Z.; Ramakrishnan, S.; Martin, M.; Cloitre, T.; Zimányi, L.; Tóth, Z.; Gergely, C.; Budai, J.
2017-11-01
In this work, ellipsometric measurements of small-molecular-size polypeptides deposited onto silicon are analyzed. Results of ellipsometric evaluation procedures based on transparent-layer, absorbing-layer, and discontinuous-layer approaches are compared. Although these models result in similar fitting quality and can predict the amount of deposited material, the optical properties obtained can be rather different owing to the models' different assumptions. To choose the physically correct results, independent measurements such as atomic force microscopy or transmission measurements of peptide solutions are necessary. It is shown that the measured ellipsometric depolarization can also provide useful information about the sample properties.
78 FR 36715 - VA Veteran-Owned Small Business (VOSB) Verification Guidelines; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-19
... DEPARTMENT OF VETERANS AFFAIRS 38 CFR Part 74 RIN 2900-AO63 VA Veteran-Owned Small Business (VOSB... Department of Veterans Affairs (VA) amended its Veteran-Owned Small Business (VOSB) Verification Guidelines... Office of Small and Disadvantaged Business Utilization (00SB), Department of Veterans Affairs, 810...
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2012 CFR
2012-07-01
... EPA with appropriate data to correct the record when the company submits its application for small... a small refiner? 80.1340 Section 80.1340 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner...
NASA Astrophysics Data System (ADS)
Zhang, Shengjun; Li, Jiancheng; Jin, Taoyong; Che, Defu
2018-04-01
Marine gravity anomalies derived from satellite altimetry can be computed using either sea surface height or sea surface slope measurements. Here we consider the slope method and evaluate the errors in the slope of the corrections supplied with the Jason-1 geodetic mission data. The slope corrections are divided into three groups based on whether they are small, comparable, or large with respect to the 1-microradian error in current sea surface slope models. (1) The small and thus negligible corrections include the dry tropospheric correction, inverted barometer correction, solid earth tide, and geocentric pole tide. (2) The moderately important corrections include the wet tropospheric correction, dual-frequency ionospheric correction, and sea state bias. Radiometer measurements are preferable to model values in the geophysical data records for constraining the wet tropospheric effect, owing to the highly variable water-vapor structure of the atmosphere. The dual-frequency ionospheric correction and sea state bias should not be added directly to range observations when deriving sea surface slopes, since their inherent errors may produce abnormal slopes; along-track smoothing with uniform weights over a suitable window is an effective strategy for avoiding the introduction of extra noise. The slopes calculated from radiometer wet tropospheric corrections, and from along-track-smoothed dual-frequency ionospheric corrections and sea state bias, are generally within ±0.5 microradians and no larger than 1 microradian. (3) The ocean tide has the largest influence on sea surface slopes, although most ocean tide slopes lie within ±3 microradians. Larger ocean tide slopes mostly occur over marginal and island-surrounding seas, and tidal models with better precision or with an extending process (e.g. Got-e) are strongly recommended for updating the corrections in geophysical data records.
NASA Astrophysics Data System (ADS)
Schoenberg, Ronny; von Blanckenburg, Friedhelm
2005-04-01
Multicollector ICP-MS-based stable isotope procedures provide the capability to determine small variations in the metal isotope composition of materials, but they are prone to substantial bias introduced by inadequate sample preparation. Such a "cryptic" bias is not necessarily identifiable from the measured isotope ratios. The analytical protocol for Fe isotope analyses of organic and inorganic materials described here identifies and avoids such pitfalls. In medium-mass-resolution mode of the ThermoFinnigan Neptune MC-ICP-MS, a 1-ppm Fe solution with an uptake rate of 50-70 μL min⁻¹ yielded 3 × 10⁻¹¹ A on 56Fe for the ThermoFinnigan stable introduction system and 1.2-1.8 × 10⁻¹⁰ A for the ESI Apex-Q uptake system. Sensitivity increased a further 3-5-fold when using Finnigan X-cones instead of the standard H-cones. The combination of the ESI Apex-Q apparatus and X-cones allowed the determination of the isotope composition on as little as 50 ng of Fe. Fe isotope compositions were corrected for mass bias both with the standard-sample bracketing (SSB) method and by using the 65Cu/63Cu ratio of added synthetic copper (Cu-doping) as an internal monitor of mass discrimination. Both methods provide identical results on high-purity Fe solutions of either synthetic or natural samples. We prefer the SSB method because of its shorter analysis time and more straightforward correction of instrumental mass bias compared to Cu-doping. Strong error correlations of the data are observed in three-isotope diagrams. Thus, we suggest that the quality assessment in such diagrams should be performed with error ellipses rather than error bars. Reproducibility of δ56Fe, δ57Fe and δ58Fe values of natural samples alone is not a sufficient criterion for accuracy. A set of tests is outlined that identifies cryptic matrix effects and ensures a reproducible level of quality control. Using these criteria and the SSB correction method, we determined the external reproducibilities for δ56Fe, δ57Fe and δ58Fe at the 95% confidence interval from 318 measurements of 95 natural samples to be 0.049, 0.071 and 0.28‰, respectively.
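A minimal sketch of the standard-sample bracketing calculation; the ratio values are illustrative, and a real protocol would include the blank, interference, and quality checks the paper describes:

def delta_ssb(r_sample, r_std_before, r_std_after):
    # The instrumental mass bias is interpolated as the mean of the
    # bracketing standard measurements; the sample ratio is reported
    # relative to it in permil.
    r_std = 0.5 * (r_std_before + r_std_after)
    return (r_sample / r_std - 1.0) * 1000.0

# d56Fe from measured 56Fe/54Fe ratios (illustrative numbers):
d56fe = delta_ssb(15.7062, 15.6980, 15.6984)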
Fluid therapy in small ruminants and camelids.
Jones, Meredyth; Navarre, Christine
2014-07-01
Body water, electrolytes, and acid-base balance are important considerations in the evaluation and treatment of small ruminants and camelids with any disease process, with restoration of these a priority as adjunctive therapy. The goals of fluid therapy should be to maintain cardiac output and tissue perfusion, and to correct acid-base and electrolyte abnormalities. Hypoglycemia, hyperkalemia, and acidosis are the most life-threatening abnormalities, and require most immediate correction. Copyright © 2014 Elsevier Inc. All rights reserved.
Class III dento-skeletal anomalies: rotational growth and treatment timing.
Mosca, G; Grippaudo, C; Marchionni, P; Deli, R
2006-03-01
The interception of a Class III malocclusion requires a long-term growth prediction in order to estimate the subject's evolution from the prepubertal phase to adulthood. The aim of this retrospective longitudinal study was to highlight the differences in facial morphology in relation to the direction of mandibular growth in a sample of subjects with Class III skeletal anomalies, divided on the basis of their Petrovic's auxological categories and rotational types. The study involved 20 patients (11 females and 9 males) who started therapy before reaching their pubertal peak and were followed up for a mean of 4.3 years (range: 3.9-5.5 years). Despite the small sample size, the definition of the rotational type of growth was the main diagnostic element for setting the correct individualised therapy. We therefore believe that observation of a larger sample would reinforce the diagnostic-therapeutic validity of Petrovic's auxological categories, allow an evaluation of all rotational types, and improve the statistical significance of the results obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, E.J.; Hoffman, G.L.; Duce, R.A.
1980-10-20
Three cascade impactor samples were collected from a 20-m-high tower on the southeastern coast of Bermuda. These samples were analyzed for Na, K, Ca, Mg, and Fe by atomic absorption spectrophotometry. When the alkali and alkaline earth metal concentrations are corrected for a soil-derived component, utilizing the atmospheric Fe concentrations, Mg, Ca, and Na are found to be present in the same relative abundances as in seawater for all particle sizes sampled. Potassium also shows no deviation from a bulk seawater composition for particles with radii greater than approximately 0.5 μm. However, excess K above that expected from either a bulk seawater or soil source is observed on particles with radii less than approximately 0.5 μm. While oceanic chemical fractionation processes during bubble bursting may be responsible for this excess small-particle K, it is most likely due to long-range transport of K-rich particles of terrestrial vegetative origin.
NASA Astrophysics Data System (ADS)
Simon, S. B.; Grossman, L.
2004-10-01
Analyses of coarse-grained refractory inclusions typically do not have the solar CaO/Al2O3 ratio, probably reflecting nonrepresentative sampling of them in the laboratory. Many previous studies, especially those done by instrumental neutron activation analysis (INAA), were based on very small amounts of material removed from those restricted portions of inclusions that happened to be exposed on surfaces of bulk meteorite samples. Here, we address the sampling problem by studying thin sections of large inclusions, and by analyzing much larger aliquots of powders of these inclusions by INAA than has typically been done in the past. These results do show convergence toward the solar CaO/Al2O3 ratio of 0.792. The bulk compositions of 15 coarse-grained inclusions determined by INAA of samples >2 mg have an average CaO/Al2O3 ratio of 0.80 ± 0.18. When bulk compositions are obtained by modal recombination based on analysis of thin sections with cross-sections of entire, large, unbroken inclusions, the average of 11 samples (0.79 ± 0.15) also matches the solar value. Among those analyzed by INAA and by modal recombination, there were no inclusions for which both techniques agreed on a CaO/Al2O3 ratio deviating by >~15% from the solar value. These results suggest that: individual inclusions may have the solar CaO/Al2O3 ratio; departures from this value are due to sample heterogeneity and nonrepresentative sampling in the laboratory; and it is therefore valid to correct compositions to this value. We present a method for doing so by mathematical addition or subtraction of melilite, spinel, or pyroxene. This yields a set of multiple, usually slightly different, corrected compositions for each inclusion. The best estimate of the bulk composition of an inclusion is the average of these corrected compositions, which simultaneously accounts for errors in sampling of all major phases. Results show that Type B2 inclusions tend to be more SiO2-rich and have higher normative Anorthite/Gehlenite component ratios than Type B1s. The inclusion bulk compositions lie in a field that can result from evaporation at 1700-2000 K of CMAS liquids with solar CaO/Al2O3, but with a wide range of initial MgO (30-60 wt%) and SiO2 (15-50 wt%) contents.
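A minimal sketch of the correction by mathematical addition or subtraction of a phase; the gehlenitic melilite composition below is an assumption for illustration, not a composition from the paper:

def phase_mass_for_solar(cao, al2o3, cao_ph, al2o3_ph, target=0.792):
    # Solve (CaO + x*cao_ph) / (Al2O3 + x*al2o3_ph) = target for the
    # mass x of phase per unit sample mass; x < 0 means subtraction.
    # All inputs are oxide wt%.
    return (target * al2o3 - cao) / (cao_ph - target * al2o3_ph)

# e.g., an inclusion with 28 wt% CaO and 30 wt% Al2O3 corrected with
# gehlenite (about 41 wt% CaO, 37 wt% Al2O3): x comes out negative,
# so melilite is subtracted to reach the solar ratio.
x = phase_mass_for_solar(28.0, 30.0, 41.0, 37.0)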
Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.
Spiess, Martin; Jordan, Pascal; Wendt, Mike
2018-05-07
In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests, where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and of the bootstrap techniques for calculating confidence intervals and conducting hypothesis tests in small and large samples, under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task-switch experiment based on a within-subjects design with 32 cells and 33 participants.
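A minimal sketch of the bias-corrected and accelerated bootstrap interval considered above, using SciPy; the data stand in for one condition's per-participant effects and are not from the experiment:

import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(7)
effects = rng.normal(0.4, 1.0, size=33)   # placeholder per-participant effects

res = bootstrap((effects,), np.mean, confidence_level=0.95,
                n_resamples=9999, method='BCa')
print(res.confidence_interval)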
Measurement of dissolved oxygen during red wines tank aging with chips and micro-oxygenation.
Nevares, I; del Alamo, M
2008-07-21
Micro-oxygenation is an important technique for aging wines in order to improve their characteristics. Wine tank aging techniques involve applying small doses of oxygen and adding pieces of oak wood to the wine. Considering the low dissolved oxygen (DO) levels used by the micro-oxygenation technique, it is necessary to choose the appropriate measurement principle to apply a precise oxygen dosage to the wine at any time, in order to assure its correct assimilation. This knowledge allows the oenologist to control and run the wine aging correctly. This work is a thorough review of the main DO measurement technologies applied to oenology. It describes the strengths and weaknesses of each technology and compares their performance in wine measurement. Both traditional electrochemical probes and newer photoluminescence-based probes, adapted to the study of red wine aging, are compared. This paper also details the first results on the evolution of dissolved oxygen content in red wines during traditional and alternative tank aging. Samples were treated by three different aging systems: oak barrels, and stainless-steel tanks with small oak wood pieces (chips) or with bigger oak pieces (staves) under low micro-oxygenation levels. French and American oak barrels manufactured by the same cooperage were used.
Gilmore, Adam Matthew
2014-01-01
Contemporary spectrofluorimeters comprise exciting light sources, excitation and emission monochromators, and detectors that without correction yield data not conforming to an ideal spectral response. The correction of the spectral properties of the exciting and emission light paths first requires calibration of the wavelength and spectral accuracy. The exciting beam path can be corrected up to the sample position using a spectrally corrected reference detection system. The corrected reference response accounts for both the spectral intensity and drift of the exciting light source relative to emission and/or transmission detector responses. The emission detection path must also be corrected for the combined spectral bias of the sample compartment optics, emission monochromator, and detector. There are several crucial issues associated with both excitation and emission correction including the requirement to account for spectral band-pass and resolution, optical band-pass or neutral density filters, and the position and direction of polarizing elements in the light paths. In addition, secondary correction factors are described including (1) subtraction of the solvent's fluorescence background, (2) removal of Rayleigh and Raman scattering lines, as well as (3) correcting for sample concentration-dependent inner-filter effects. The importance of the National Institute of Standards and Technology (NIST) traceable calibration and correction protocols is explained in light of valid intra- and interlaboratory studies and effective spectral qualitative and quantitative analyses including multivariate spectral modeling.
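As an example of one of the secondary corrections listed above, a minimal sketch of the classic absorbance-based inner-filter correction (Lakowicz's approximation for a 1 cm cuvette with central illumination; the half-pathlength geometry factor is an assumption of this form):

def inner_filter_correct(f_obs, a_ex, a_em):
    # a_ex and a_em are absorbances at the excitation and emission
    # wavelengths; each beam is attenuated over roughly half the cell.
    return f_obs * 10.0 ** ((a_ex + a_em) / 2.0)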
Qualifying the benefit of Advanced Traveler Information Systems (ATIS)
DOT National Transportation Integrated Search
2000-11-21
ATIS Yields Time Management Benefits: no conflict between survey and empirical research. ATIS users correctly perceive that they save time; field studies correctly measured only small changes in in-vehicle travel times. When travel behavior f...
Model selection with multiple regression on distance matrices leads to incorrect inferences.
Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H
2017-01-01
In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
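For reference, the criteria under study have simple closed forms for least-squares fits; a minimal sketch (note that in MRM the nominal n is the number of pairwise distances, which is exactly the inflation the authors identify):

import numpy as np

def aic_aicc_bic(rss, n, k):
    # Gaussian log-likelihood profiled over the error variance;
    # k counts all estimated coefficients including the intercept,
    # and n - k - 1 must be positive for AICc to be defined.
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, aicc, bic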
Late-stage galaxy mergers in cosmos to z ∼ 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lackner, C. N.; Silverman, J. D.; Salvato, M.
2014-12-01
The role of major mergers in galaxy and black hole formation is not well constrained. To help address this, we develop an automated method to identify late-stage galaxy mergers before coalescence of the galactic cores. The resulting sample of mergers is distinct from those obtained using pair-finding and morphological indicators. Our method relies on median-filtering of high-resolution images to distinguish two concentrated galaxy nuclei at small separations. This method does not rely on low-surface-brightness features to identify mergers, and is therefore reliable to high redshift. Using mock images, we derive statistical contamination and incompleteness corrections for the fraction of late-stage mergers. The mock images show that our method returns an uncontaminated (<10%) sample of mergers with projected separations between 2.2 and 8 kpc out to z ∼ 1. We apply our new method to a magnitude-limited (m_F814W < 23) sample of 44,164 galaxies from the COSMOS HST/ACS catalog, using a mass-complete sample with log M∗/M☉ > 10.6 and 0.25
Moffa-Sánchez, Paola; Hall, Ian R
2018-02-15
In the original version of this Article, the third sentence of the first paragraph of the "Changes in the input of polar waters into the Labrador Sea" section of the Results originally incorrectly read 'During the spring-summer months, after the winter convection has ceased in the Labrador Sea, its northwest boundary currents (the EGC and IC) support restratification of the surface ocean through lateral transport.' The correct version states 'northeast' instead of 'northwest'. The fifth sentence of the second paragraph of the same section originally incorrectly read "In contrast, in the western section of the Nordic Seas, under the presence of warm Atlantic waters of the Norwegian Current, Nps was found to calcify deeper in the water column (100-200 m), whereas in the east under the influence of the EGC polar waters it calcified closer to the surface at a similar depth as Tq [23]." The correct version states 'eastern' instead of 'western' and 'west' instead of 'east'. The seventh sentence of the same paragraph originally incorrectly read "Small/large differences in Δδ18O(Nps-Tq) indicating increased/decreased presence of warm and salty Atlantic IC waters vs. polar EGC waters in the upper water column, respectively." The correct version starts 'Large/small' rather than 'Small/large'. These errors have been corrected in both the PDF and HTML versions of the Article.
Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media
Cooley, R.L.; Christensen, S.
2006-01-01
Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Kβ*, where K is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Kβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) − f(Kβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Kβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem, for which model error is large, to test the robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
Schultz, Natalie M; Griffis, Timothy J; Lee, Xuhui; Baker, John M
2011-11-15
Plant water extracts typically contain organic materials that may cause spectral interference when using isotope ratio infrared spectroscopy (IRIS), resulting in errors in the measured isotope ratios. Manufacturers of IRIS instruments have developed post-processing software to identify the degree of contamination in water samples and potentially correct the isotope ratios of water with known contaminants. Here, the correction method proposed by an IRIS manufacturer, Los Gatos Research, Inc., was employed and the results were compared with those obtained from isotope ratio mass spectrometry (IRMS). Deionized water was spiked with methanol and ethanol to create correction curves for δ(18)O and δ(2)H. The contamination effects of different sample types (leaf, stem, soil) and different species from agricultural fields, grasslands, and forests were compared. The average corrections in leaf samples ranged from 0.35 to 15.73‰ for δ(2)H and 0.28 to 9.27‰ for δ(18)O. The average corrections in stem samples ranged from 1.17 to 13.70‰ for δ(2)H and 0.47 to 7.97‰ for δ(18)O. There was no contamination observed in soil water. Cleaning plant samples with activated charcoal had minimal effects on the degree of spectral contamination, reducing the corrections by, on average, 0.44‰ for δ(2)H and 0.25‰ for δ(18)O. The correction method eliminated the discrepancies between IRMS and IRIS for δ(18)O, and greatly reduced the discrepancies for δ(2)H. The mean differences in isotope ratios between IRMS and the corrected IRIS method were 0.18‰ for δ(18)O and -3.39‰ for δ(2)H. The inability to create an ethanol correction curve for δ(2)H probably caused the larger discrepancies. We conclude that ethanol and methanol are the primary compounds causing interference in IRIS analyzers, and that each individual analyzer will probably require customized correction curves. Copyright © 2011 John Wiley & Sons, Ltd.
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size and marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, ..., nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
Individualized correction of insulin measurement in hemolyzed serum samples.
Wu, Zhi-Qi; Lu, Ju; Chen, Huanhuan; Chen, Wensen; Xu, Hua-Guo
2017-06-01
Insulin measurement plays a key role in the investigation of patients with hypoglycemia, subtype classification of diabetes mellitus, insulin resistance, and impaired beta cell function. However, even slight hemolysis can negatively affect insulin measurement due to RBC insulin-degrading enzyme (IDE). Here, we derived and validated an individualized correction equation in an attempt to eliminate the effects of hemolysis on insulin measurement. The effects of hemolysis on insulin measurement were studied by adding lysed self-RBCs to serum. A correction equation was derived, accounting for both the percentage and the exposure time of hemolysis. The performance of this individualized correction was evaluated in intentionally hemolyzed samples. Insulin concentration decreased with increasing percentage and exposure time of hemolysis. Based on the effects of hemolysis on insulin measurement in samples from 17 donors (baseline insulin concentrations ranged from 156 to 2119 pmol/L), the individualized hemolysis correction equation was derived: INS_corr = INS_meas / (0.705·lg(Hb_plasma/Hb_serum) − 0.001·Time − 0.612). This equation can revert insulin concentrations of the intentionally hemolyzed samples to values that were statistically not different from the corresponding baseline insulin concentrations (p = 0.1564). Hemolysis can lead to negative interference in insulin measurement; with the individualized hemolysis correction equation, reliable serum insulin results can be reported for a wide range of degrees of sample hemolysis. This correction would increase diagnostic accuracy, reduce inappropriate therapeutic decisions, and improve patient satisfaction with care.
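A minimal sketch of that correction equation as printed above; the grouping of the logarithm and the units of the inputs are assumptions based on the abstract's notation:

import math

def insulin_corrected(ins_meas, hb_plasma, hb_serum, time_):
    # INS_corr = INS_meas / (0.705*lg(Hb_plasma/Hb_serum)
    #                        - 0.001*Time - 0.612), lg = log base 10;
    # Hb concentrations and Time in the units used by the paper.
    denom = 0.705 * math.log10(hb_plasma / hb_serum) - 0.001 * time_ - 0.612
    return ins_meas / denom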
Star formation rate and extinction in faint z ∼ 4 Lyman break galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
To, Chun-Hao; Wang, Wei-Hao; Owen, Frazer N.
We present a statistical detection of 1.5 GHz radio continuum emission from a sample of faint z ∼ 4 Lyman break galaxies (LBGs). To constrain their extinction and intrinsic star formation rate (SFR), we combine the latest ultradeep Very Large Array 1.5 GHz radio image and the Hubble Space Telescope Advanced Camera for Surveys (ACS) optical images in the GOODS-N. We select a large sample of 1771 z ∼ 4 LBGs from the ACS catalog using B_F435W-dropout color criteria. Our LBG samples have I_F775W ∼ 25-28 (AB), ∼0-3 mag fainter than M*_UV at z ∼ 4. In our stacked radio images, we find the LBGs to be point-like under our 2'' angular resolution. We measure their mean 1.5 GHz flux by stacking the measurements on the individual objects. We achieve a statistical detection of S_1.5GHz = 0.210 ± 0.075 μJy at ∼3σ for the first time on such a faint LBG population at z ∼ 4. The measurement takes into account the effects of source size and blending of multiple objects. The detection is visually confirmed by stacking the radio images of the LBGs, and the uncertainty is quantified with Monte Carlo simulations on the radio image. The stacked radio flux corresponds to an obscured SFR of 16.0 ± 5.7 M_☉ yr^−1, and implies a rest-frame UV extinction correction factor of 3.8. This extinction correction is in excellent agreement with that derived from the observed UV continuum spectral slope, using the local calibration of Meurer et al. This result supports the use of the local calibration on high-redshift LBGs to derive the extinction correction and SFR, and also disfavors a steep reddening curve such as that of the Small Magellanic Cloud.
Can small field diode correction factors be applied universally?
Liu, Paul Z Y; Suchowerska, Natalka; McKenzie, David R
2014-09-01
Diode detectors are commonly used in dosimetry, but have been reported to over-respond in small fields. Diode correction factors have been reported in the literature. The purpose of this study is to determine whether correction factors for a given diode type can be universally applied over a range of irradiation conditions including beams of different qualities. A mathematical relation of diode over-response as a function of field size was developed using previously published experimental data in which diodes were compared to an air core scintillation dosimeter. Correction factors calculated from the mathematical relation were then compared with those available in the literature. The mathematical relation established between diode over-response and field size was found to predict the measured diode correction factors for fields between 5 and 30 mm in width. The average deviation between measured and predicted over-response was 0.32% for IBA SFD and PTW Type E diodes. Diode over-response was found not to be strongly dependent on the type of linac, the method of collimation or the measurement depth. The mathematical relation was found to agree with published diode correction factors derived from Monte Carlo simulations and measurements, indicating that correction factors are robust in their transportability between different radiation beams. Copyright © 2014. Published by Elsevier Ireland Ltd.
Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu
2015-06-01
Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
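The core of the empirical Bartlett-style idea can be sketched in a few lines (our illustration of the principle, not the authors' formulas): simulate T_ML under the fitted model, then rescale the observed statistic so that its Monte Carlo mean matches the nominal degrees of freedom.

```python
import numpy as np

def empirically_corrected_t(t_ml_observed, t_ml_simulated, df):
    # t_ml_simulated: T_ML values computed on data simulated from the fitted model
    b = np.mean(t_ml_simulated) / df   # empirical Bartlett-type inflation factor
    return t_ml_observed / b           # corrected statistic has mean ~ df

# e.g. df = 35 and simulated statistics averaging 42 give b = 1.2,
# so an observed T_ML of 50 is corrected to about 41.7.
```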
75 FR 13145 - SBA Lender Risk Rating System
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-18
... SMALL BUSINESS ADMINISTRATION [Docket No. SBA-2010-0004] SBA Lender Risk Rating System AGENCY: Small Business Administration. ACTION: Notice; extension of comment period and correction. SUMMARY: On March 1, 2010, the Small Business Administration (SBA) published a notice in the Federal Register to...
Radiative corrections from heavy fast-roll fields during inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Rajeev Kumar; Sandora, McCullen; Sloth, Martin S., E-mail: jain@cp3.dias.sdu.dk, E-mail: sandora@cp3.dias.sdu.dk, E-mail: sloth@cp3.dias.sdu.dk
2015-06-01
We investigate radiative corrections to the inflaton potential from heavy fields undergoing a fast-roll phase transition. We find that a logarithmic one-loop correction to the inflaton potential involving this field can induce a temporary running of the spectral index. The induced running can be a short burst of strong running, which may be related to the observed anomalies on large scales in the cosmic microwave spectrum, or extend over many e-folds, sustaining an effectively constant running to be searched for in the future. We implement this in a general class of models, where effects are mediated through a heavy messenger field sitting in its minimum. Interestingly, within the present framework it is a generic outcome that a large running implies a small field model with a vanishing tensor-to-scalar ratio, circumventing the normal expectation that small field models typically lead to an unobservably small running of the spectral index. An observable level of tensor modes can also be accommodated, but, surprisingly, this requires running to be induced by a curvaton. If upcoming observations are consistent with a small tensor-to-scalar ratio as predicted by small field models of inflation, then the present study serves as an explicit example contrary to the general expectation that the running will be unobservable.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-31
...The Food and Drug Administration (FDA) is correcting a notice that appeared in the Federal Register of October 25, 2011 (76 FR 66074). The document announced the availability of a guidance for industry entitled ``Required Warnings for Cigarette Packages and Advertisements--Small Entity Compliance Guide'' for a final rule that published in the Federal Register of June 22, 2011 (76 FR 36628). The notice published with an incorrect docket number. This document corrects that error.
Forbes, Jessica L.; Kim, Regina E. Y.; Paulsen, Jane S.; Johnson, Hans J.
2016-01-01
The creation of high-quality medical imaging reference atlas datasets with consistent dense anatomical region labels is a challenging task. Reference atlases have many uses in medical image applications and are essential components of atlas-based segmentation tools commonly used for producing personalized anatomical measurements for individual subjects. The process of manual identification of anatomical regions by experts is regarded as a so-called gold standard; however, it is usually impractical because of the labor-intensive costs. Further, as the number of regions of interest increases, these manually created atlases often contain many small inconsistently labeled or disconnected regions that need to be identified and corrected. This project proposes an efficient process to drastically reduce the time necessary for manual revision in order to improve atlas label quality. We introduce the LabelAtlasEditor tool, a SimpleITK-based open-source label atlas correction tool distributed within the image visualization software 3D Slicer. LabelAtlasEditor incorporates several 3D Slicer widgets into one consistent interface and provides label-specific correction tools, allowing for rapid identification, navigation, and modification of the small, disconnected erroneous labels within an atlas. The technical details for the implementation and performance of LabelAtlasEditor are demonstrated using an application of improving a set of 20 Huntington's Disease-specific multi-modal brain atlases. Additionally, we present the advantages and limitations of automatic atlas correction. After the correction of atlas inconsistencies and small, disconnected regions, the number of unidentified voxels for each dataset was reduced on average by 68.48%. PMID:27536233
Sample Dimensionality Effects on d' and Proportion of Correct Responses in Discrimination Testing.
Bloom, David J; Lee, Soo-Yeun
2016-09-01
Products in the food and beverage industry have varying levels of dimensionality ranging from pure water to multicomponent food products, which can modify sensory perception and possibly influence discrimination testing results. The objectives of the study were to determine the impact of (1) sample dimensionality and (2) complex formulation changes on the d' and proportion of correct response of the 3-AFC and triangle methods. Two experiments were conducted using 47 prescreened subjects who performed either triangle or 3-AFC test procedures. In Experiment I, subjects performed 3-AFC and triangle tests using model solutions with different levels of dimensionality. Samples increased in dimensionality from 1-dimensional sucrose in water solution to 3-dimensional sucrose, citric acid, and flavor in water solution. In Experiment II, subjects performed 3-AFC and triangle tests using 3-dimensional solutions. Sample pairs differed in all 3 dimensions simultaneously to represent complex formulation changes. Two forms of complexity were compared: dilution, where all dimensions decreased in the same ratio, and compensation, where a dimension was increased to compensate for a reduction in another. The proportion of correct responses decreased for both methods when the dimensionality was increased from 1- to 2-dimensional samples. No reduction in correct responses was observed from 2- to 3-dimensional samples. No significant differences in d' were demonstrated between the 2 methods when samples with complex formulation changes were tested. Results reveal an impact on proportion of correct responses due to sample dimensionality and should be explored further using a wide range of sample formulations. © 2016 Institute of Food Technologists®
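The Thurstonian link between d' and the proportion of correct responses differs between the two methods, which is why the study compares them on the d' scale. A Monte Carlo sketch of the standard decision rules (ours, not the study's analysis code):

```python
import numpy as np

rng = np.random.default_rng(1)

def pc_3afc(d, n=200_000):
    target = rng.normal(d, 1.0, n)              # sample carrying the attribute
    distractors = rng.normal(0.0, 1.0, (n, 2))
    return np.mean(target > distractors.max(axis=1))

def pc_triangle(d, n=200_000):
    x = rng.normal([0.0, 0.0, d], 1.0, (n, 3))  # two alike, one odd
    # comparison-of-distances rule: pick the sample farthest from the other two
    dist = np.abs(x - x[:, [1, 2, 0]]) + np.abs(x - x[:, [2, 0, 1]])
    return np.mean(dist.argmax(axis=1) == 2)

print(pc_3afc(1.0), pc_triangle(1.0))  # ~0.63 vs ~0.42: same d', fewer correct in triangle
```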
The purpose of this paper is to provide guidelines for sub-slab sampling using dedicated vapor probes. Use of dedicated vapor probes allows for multiple sample events before and after corrective action and for vacuum testing to enhance the design and monitoring of a corrective m...
Empirical Validation of a Procedure to Correct Position and Stimulus Biases in Matching-to-Sample
ERIC Educational Resources Information Center
Kangas, Brian D.; Branch, Marc N.
2008-01-01
The development of position and stimulus biases often occurs during initial training on matching-to-sample tasks. Furthermore, without intervention, these biases can be maintained via intermittent reinforcement provided by matching-to-sample contingencies. The present study evaluated the effectiveness of a correction procedure designed to…
Understanding the atmospheric measurement and behavior of perfluorooctanoic acid.
Webster, Eva M; Ellis, David A
2012-09-01
The recently reported quantification of the atmospheric sampling artifact for perfluorooctanoic acid (PFOA) was applied to existing gas and particle concentration measurements. Specifically, gas phase concentrations were increased by a factor of 3.5 and particle-bound concentrations by a factor of 0.1. The correlation constants in two particle-gas partition coefficient (K(QA)) estimation equations were determined for multiple studies with and without correcting for the sampling artifact. Correction for the sampling artifact gave correlation constants with improved agreement to those reported for other neutral organic contaminants, thus supporting the application of the suggested correction factors for perfluorinated carboxylic acids. Applying the corrected correlation constant to a recent multimedia modeling study improved model agreement with corrected, reported, atmospheric concentrations. This work confirms that there is sufficient partitioning to the gas phase to support the long-range atmospheric transport of PFOA. Copyright © 2012 SETAC.
Thomas B. Lynch; Jeffrey H. Gove
2014-01-01
The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...
Ji, Xiaohong; Liu, Peng; Sun, Zhenqi; Su, Xiaohui; Wang, Wei; Gao, Yanhui; Sun, Dianjun
2016-01-01
Objective To determine the effect of statistical correction for intra-individual variation on estimated urinary iodine concentration (UIC) by sampling on 3 consecutive days in four seasons in children. Setting School-aged children from urban and rural primary schools in Harbin, Heilongjiang, China. Participants 748 and 640 children aged 8–11 years were recruited from urban and rural schools, respectively, in Harbin. Primary and secondary outcome measures The spot urine samples were collected once a day for 3 consecutive days in each season over 1 year. The UIC of the first day was corrected by two statistical correction methods: the average correction method (average of days 1 and 2; average of days 1, 2 and 3) and the variance correction method (UIC of day 1 corrected by two replicates and by three replicates). The variance correction method determined the SD between subjects (S_b) and within subjects (S_w), and calculated the correction coefficient F_i = S_b/(S_b + S_w/d_i), where d_i is the number of observations; the UIC of day 1 was then corrected using this coefficient. Results The variance correction methods showed the overall F_i was 0.742 for 2 days' correction and 0.829 for 3 days' correction; the values for spring, summer, autumn and winter were 0.730, 0.684, 0.706 and 0.703 for 2 days' correction and 0.809, 0.742, 0.796 and 0.804 for 3 days' correction, respectively. After removal of the individual effect, the correlation coefficient between consecutive days was 0.224, and between non-consecutive days 0.050. Conclusions The variance correction method is effective for correcting intra-individual variation in estimated UIC following sampling on 3 consecutive days in four seasons in children. The method varies little between ages, sexes and urban or rural setting, but does vary between seasons. PMID:26920442
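The correction equation itself did not survive in this record. A form consistent with the described coefficient, and the one we assume here, shrinks the day-1 value toward the group mean:

```python
def variance_corrected_uic(uic_day1, group_mean, s_b, s_w, d_i):
    """Assumed shrinkage form: UIC_corr = mean + F_i * (UIC_day1 - mean),
    with F_i = S_b / (S_b + S_w / d_i) as given in the abstract."""
    f_i = s_b / (s_b + s_w / d_i)
    return group_mean + f_i * (uic_day1 - group_mean)

# with F_i ~ 0.742 (two-day correction), an extreme day-1 value is pulled
# roughly a quarter of the way back toward the group mean
print(variance_corrected_uic(300.0, 180.0, s_b=60.0, s_w=47.0, d_i=2))
```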
Method of wavefront tilt correction for optical heterodyne detection systems under strong turbulence
NASA Astrophysics Data System (ADS)
Xiang, Jing-song; Tian, Xin; Pan, Le-chun
2014-07-01
Atmospheric turbulence decreases the heterodyne mixing efficiency of optical heterodyne detection systems. Wavefront tilt correction is often used to improve the heterodyne mixing efficiency, but the performance of traditional centroid-tracking tilt correction is poor under strong turbulence conditions. In this paper, a tilt correction method that tracks the peak value of the laser spot on the focal plane is proposed. Simulation results show that, under strong turbulence conditions, the performance of peak-value-tracking tilt correction is distinctly better than that of the traditional centroid-tracking method; moreover, the situation in which a large antenna performs worse than a small one, which can occur with centroid tracking, is avoided with peak-value tracking.
Heintz, Sonja; Ruch, Willibald; Platt, Tracey; Pang, Dandan; Carretero-Dios, Hugo; Dionigi, Alberto; Argüello Gutiérrez, Catalina; Brdar, Ingrid; Brzozowska, Dorota; Chen, Hsueh-Chih; Chłopicki, Władysław; Collins, Matthew; Ďurka, Róbert; Yahfoufi, Najwa Y. El; Quiroga-Garza, Angélica; Isler, Robert B.; Mendiburo-Seguel, Andrés; Ramis, TamilSelvan; Saglam, Betül; Shcherbakova, Olga V.; Singh, Kamlesh; Stokenberga, Ieva; Wong, Peter S. O.; Torres-Marín, Jorge
2018-01-01
Recently, two forms of virtue-related humor, benevolent and corrective, have been introduced. Benevolent humor treats human weaknesses and wrongdoings benevolently, while corrective humor aims at correcting and bettering them. Twelve marker items for benevolent and corrective humor (the BenCor) were developed, and it was demonstrated that they fill the gap between humor as temperament and virtue. The present study investigates responses to the BenCor from 25 samples in 22 countries (overall N = 7,226). The psychometric properties of the BenCor were found to be sufficient in most of the samples, including internal consistency, unidimensionality, and factorial validity. Importantly, benevolent and corrective humor were clearly established as two positively related, yet distinct dimensions of virtue-related humor. Metric measurement invariance was supported across the 25 samples, and scalar invariance was supported across six age groups (from 18 to 50+ years) and across gender. Comparisons of samples within and between four countries (Malaysia, Switzerland, Turkey, and the UK) showed that the item profiles were more similar within than between countries, though some evidence for regional differences was also found. This study thus supported, for the first time, the suitability of the 12 marker items of benevolent and corrective humor in different countries, enabling a cumulative cross-cultural research and eventually applications of humor aiming at the good. PMID:29479326
40Ar/39Ar technique of K-Ar dating: a comparison with the conventional technique
Brent, Dalrymple G.; Lanphere, M.A.
1971-01-01
K-Ar ages have been determined by the 40Ar/39Ar total fusion technique on 19 terrestrial samples whose conventional K-Ar ages range from 3.4 m.y. to nearly 1700 m.y. Sample materials included biotite, muscovite, sanidine, adularia, plagioclase, hornblende, actinolite, alunite, dacite, and basalt. For 18 samples there are no significant differences at the 95% confidence level between the K-Ar ages obtained by these two techniques; for one sample the difference is 4.3% and is statistically significant. For the neutron doses used in these experiments (≈4 × 10^18 nvt) it appears that corrections for interfering Ca- and K-derived Ar isotopes can be made without significant loss of precision for samples with K/Ca > 1 as young as about 5 × 10^5 yr, and for samples with K/Ca < 1 as young as about 10^7 yr. For younger samples the combination of large atmospheric Ar corrections and large corrections for Ca- and K-derived Ar may make the precision of the 40Ar/39Ar technique less than that of the conventional technique unless the irradiation parameters are adjusted to minimize these corrections. © 1971.
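For context, the age equation underlying the total-fusion technique takes this standard form (our sketch; the decay constant shown is a modern reference value, not necessarily the one used in 1971):

```python
import math

LAMBDA_K40 = 5.543e-10   # total 40K decay constant, 1/yr (assumed value)

def ar_ar_age(ar40_star_over_ar39, J):
    """t = (1/lambda) * ln(1 + J * 40Ar*/39Ar_K), where J comes from a
    co-irradiated flux monitor of known age and 40Ar* is radiogenic argon
    after the atmospheric and Ca/K interference corrections discussed above."""
    return math.log(1.0 + J * ar40_star_over_ar39) / LAMBDA_K40

print(ar_ar_age(20.0, 0.01))   # illustrative inputs -> age of ~3.3e8 yr
```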
Predicting the helix packing of globular proteins by self-correcting distance geometry.
Mumenthaler, C; Braun, W
1995-05-01
A new self-correcting distance geometry method for predicting the three-dimensional structure of small globular proteins was assessed with a test set of 8 helical proteins. With the knowledge of the amino acid sequence and the helical segments, our completely automated method calculated the correct backbone topology of six proteins. The accuracy of the predicted structures ranged from 2.3 Å to 3.1 Å for the helical segments compared to the experimentally determined structures. For two proteins, the predicted constraints were not restrictive enough to yield a conclusive prediction. The method can be applied to all small globular proteins, provided the secondary structure is known from NMR analysis or can be predicted with high reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mark Krauss
2011-09-01
The purpose of this CADD/CAP is to present the corrective action alternatives (CAAs) evaluated for CAU 547, provide justification for selection of the recommended alternative, and describe the plan for implementing the selected alternative. Corrective Action Unit 547 consists of the following three corrective action sites (CASs): (1) CAS 02-37-02, Gas Sampling Assembly; (2) CAS 03-99-19, Gas Sampling Assembly; and (3) CAS 09-99-06, Gas Sampling Assembly. The gas sampling assemblies consist of inactive process piping, equipment, and instrumentation that were left in place after completion of underground safety experiments. The purpose of these safety experiments was to confirm that a nuclear explosion would not occur in the case of an accidental detonation of the high-explosive component of the device. The gas sampling assemblies allowed for the direct sampling of the gases and particulates produced by the safety experiments. Corrective Action Site 02-37-02 is located in Area 2 of the Nevada National Security Site (NNSS) and is associated with the Mullet safety experiment conducted in emplacement borehole U2ag on October 17, 1963. Corrective Action Site 03-99-19 is located in Area 3 of the NNSS and is associated with the Tejon safety experiment conducted in emplacement borehole U3cg on May 17, 1963. Corrective Action Site 09-99-06 is located in Area 9 of the NNSS and is associated with the Player safety experiment conducted in emplacement borehole U9cc on August 27, 1964. The CAU 547 CASs were investigated in accordance with the data quality objectives (DQOs) developed by representatives of the Nevada Division of Environmental Protection (NDEP) and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office. The DQO process was used to identify and define the type, amount, and quality of data needed to determine and implement appropriate corrective actions for CAU 547. Existing radiological survey data and historical knowledge of the CASs were sufficient to meet the DQOs and evaluate CAAs without additional investigation. As a result, further investigation of the CAU 547 CASs was not required. The following CAAs were identified for the gas sampling assemblies: (1) clean closure, (2) closure in place, (3) modified closure in place, (4) no further action (with administrative controls), and (5) no further action. Based on the CAAs evaluation, the recommended corrective action for the three CASs in CAU 547 is closure in place. This corrective action will involve construction of a soil cover on top of the gas sampling assembly components and establishment of use restrictions at each site. The closure in place alternative was selected as the best and most appropriate corrective action for the CASs at CAU 547 based on the following factors: (1) Provides long-term protection of human health and the environment; (2) Minimizes short-term risk to site workers in implementing corrective action; (3) Is easily implemented using existing technology; (4) Complies with regulatory requirements; (5) Fulfills FFACO requirements for site closure; (6) Does not generate transuranic waste requiring offsite disposal; (7) Is consistent with anticipated future land use of the areas (i.e., testing and support activities); and (8) Is consistent with other NNSS site closures where contamination was left in place.
Finite amplitude effects on drop levitation for material properties measurement
NASA Astrophysics Data System (ADS)
Ansari Hosseinzadeh, Vahideh; Holt, R. Glynn
2017-05-01
The method of exciting shape oscillation of drops to extract material properties has a long history, which is most often coupled with the technique of acoustic levitation to achieve non-contact manipulation of the drop sample. We revisit this method with application to the inference of bulk shear viscosity and surface tension. The literature is replete with references to a "10% oscillation amplitude" as a sufficient condition for the application of Lamb's analytical expressions for the shape oscillations of viscous liquids. Our results show that even a 10% oscillation amplitude leads to dynamic effects which render Lamb's results inapplicable. By comparison with samples of known viscosity and surface tension, we illustrate the complicating finite-amplitude effects (mode-splitting and excess dissipation associated with vorticity) that can occur and then show that sufficiently small oscillations allow us to recover the correct material properties using Lamb's formula.
Babin, Volodymyr; Roland, Christopher; Darden, Thomas A.; Sagui, Celeste
2007-01-01
There is considerable interest in developing methodologies for the accurate evaluation of free energies, especially in the context of biomolecular simulations. Here, we report on a reexamination of the recently developed metadynamics method, which is explicitly designed to probe “rare events” and areas of phase space that are typically difficult to access with a molecular dynamics simulation. Specifically, we show that the accuracy of the free energy landscape calculated with the metadynamics method may be considerably improved when combined with umbrella sampling techniques. As test cases, we have studied the folding free energy landscape of two prototypical peptides: Ace-(Gly)2-Pro-(Gly)3-Nme in vacuo and trialanine solvated by both implicit and explicit water. The method has been implemented in the classical biomolecular code AMBER and is to be distributed in the next scheduled release of the code. © 2006 American Institute of Physics. PMID:17144742
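For readers unfamiliar with the baseline method being reexamined, a minimal one-dimensional metadynamics loop is sketched below. This is our toy illustration on a double-well potential; the potential, hill parameters, and deposition schedule are arbitrary and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
U_prime = lambda x: 4.0 * x * (x**2 - 1.0)     # gradient of the double well (x^2 - 1)^2

kT, dt, w, sigma = 0.1, 1e-3, 0.002, 0.1       # temperature, step, hill height/width
x, centers = -1.0, []

def bias_force(x):
    if not centers:
        return 0.0
    c = np.asarray(centers)
    return np.sum(w * (x - c) / sigma**2 * np.exp(-(x - c)**2 / (2.0 * sigma**2)))

for step in range(100_000):                    # overdamped Langevin dynamics
    x += (-U_prime(x) + bias_force(x)) * dt + np.sqrt(2.0 * kT * dt) * rng.normal()
    if step % 500 == 0:
        centers.append(x)                      # deposit a repulsive Gaussian hill

# the negated sum of hills approximates the free energy surface; the umbrella
# sampling refinement advocated above would then reweight windows around each basin
grid = np.linspace(-1.5, 1.5, 61)
F = -np.array([np.sum(w * np.exp(-(g - np.asarray(centers))**2 / (2.0 * sigma**2))) for g in grid])
print("deepest estimated basin near x =", grid[F.argmin()])
```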
Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M; Begeer, Sander
2014-09-01
The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to the focus on research with relatively small sample sizes. Additionally, it is proposed that although children with ASD may correctly identify emotion expression they rely on more deliberate, more time-consuming strategies in order to accurately recognize emotion expressions when compared to typically developing children. In the current study, we examine both emotion recognition accuracy and response time in a large sample of children, and explore the moderating influence of verbal ability on these findings. The sample consisted of 86 children with ASD (M age = 10.65) and 114 typically developing children (M age = 10.32) between 7 and 13 years of age. All children completed a pre-test (emotion word-word matching), and test phase consisting of basic emotion recognition, whereby they were required to match a target emotion expression to the correct emotion word; accuracy and response time were recorded. Verbal IQ was controlled for in the analyses. We found no evidence of a systematic deficit in emotion recognition accuracy or response time for children with ASD, controlling for verbal ability. However, when controlling for children's accuracy in word-word matching, children with ASD had significantly lower emotion recognition accuracy when compared to typically developing children. The findings suggest that the social impairments observed in children with ASD are not the result of marked deficits in basic emotion recognition accuracy or longer response times. However, children with ASD may be relying on other perceptual skills (such as advanced word-word matching) to complete emotion recognition tasks at a similar level as typically developing children.
Improved spatial resolution in PET scanners using sampling techniques
Surti, Suleman; Scheuermann, Ryan; Werner, Matthew E.; Karp, Joel S.
2009-01-01
Increased focus towards improved detector spatial resolution in PET has led to the use of smaller crystals in some form of light sharing detector design. In this work we evaluate two sampling techniques that can be applied during calibrations for pixelated detector designs in order to improve the reconstructed spatial resolution. The inter-crystal positioning technique utilizes sub-sampling in the crystal flood map to better sample the Compton scatter events in the detector. The Compton scatter rejection technique, on the other hand, rejects those events that are located further from individual crystal centers in the flood map. We performed Monte Carlo simulations followed by measurements on two whole-body scanners for point source data. The simulations and measurements were performed for scanners using scintillators with Zeff ranging from 46.9 to 63 for LaBr3 and LYSO, respectively. Our results show that near the center of the scanner, inter-crystal positioning technique leads to a gain of about 0.5-mm in reconstructed spatial resolution (FWHM) for both scanner designs. In a small animal LYSO scanner the resolution improves from 1.9-mm to 1.6-mm with the inter-crystal technique. The Compton scatter rejection technique shows higher gains in spatial resolution but at the cost of reduction in scanner sensitivity. The inter-crystal positioning technique represents a modest acquisition software modification for an improvement in spatial resolution, but at a cost of potentially longer data correction and reconstruction times. The Compton scatter rejection technique, while also requiring a modest acquisition software change with no increased data correction and reconstruction times, will be useful in applications where the scanner sensitivity is very high and larger improvements in spatial resolution are desirable. PMID:19779586
The Impact of Transcription Writing Interventions for First-Grade Students
Wanzek, Jeanne; Gatlin, Brandy; Al Otaiba, Stephanie; Kim, Young-Suk Grace
2016-01-01
We examined the effects of transcription instruction for students in first grade. Students in the lowest 70% of the participating schools were selected for the study. These 81 students were randomly assigned to: (a) spelling instruction, (b) handwriting instruction, (c) combination spelling and handwriting instruction, or (d) no intervention. Intervention was provided in small groups of 4 students, 25 min a day, 4 days a week for 8 weeks. Students in the spelling condition outperformed the control group on spelling measures with moderate effect sizes noted on curriculum-based writing measures (e.g., correct word sequence; g range = 0.34 to 0.68). Students in the handwriting condition outperformed the control group on correct word sequences with small to moderate effects on other handwriting and writing measures (g range = 0.31 to 0.71). Students in the combined condition outperformed the control group on correct word sequences with a small effect on total words written (g range = 0.39 to 0.84). PMID:28989267
Identification of fecal contamination sources in water using host-associated markers.
Krentz, Corinne A; Prystajecky, Natalie; Isaac-Renton, Judith
2013-03-01
In British Columbia, Canada, drinking water is tested for total coliforms and Escherichia coli, but there is currently no routine follow-up testing to investigate fecal contamination sources in samples that test positive for indicator bacteria. Reliable microbial source tracking (MST) tools to rapidly test water samples for multiple fecal contamination markers simultaneously are currently lacking. The objectives of this study were (i) to develop a qualitative MST tool to identify fecal contamination from different host groups, and (ii) to evaluate the MST tool using water samples with evidence of fecal contamination. Singleplex and multiplex polymerase chain reaction (PCR) were used to test (i) water from polluted sites and (ii) raw and drinking water samples for presence of bacterial genetic markers associated with feces from humans, cattle, seagulls, pigs, chickens, and geese. The multiplex MST assay correctly identified suspected contamination sources in contaminated waterways, demonstrating that this test may have utility for heavily contaminated sites. Most raw and drinking water samples analyzed using singleplex PCR contained at least one host-associated marker. Singleplex PCR was capable of detecting host-associated markers in small sample volumes and is therefore a promising tool to further analyze water samples submitted for routine testing and provide information useful for water quality management.
Smith, Stephen D. A.; Markic, Ana
2013-01-01
Marine debris is a global issue with impacts on marine organisms, ecological processes, aesthetics and economies. Consequently, there is increasing interest in quantifying the scale of the problem. Accumulation rates of debris on beaches have been advocated as a useful proxy for at-sea debris loads. However, here we show that past studies may have vastly underestimated the quantity of available debris because sampling was too infrequent. Our study of debris on a small beach in eastern Australia indicates that estimated daily accumulation rates decrease rapidly with increasing intervals between surveys, and the quantity of available debris is underestimated by 50% after only 3 days and by an order of magnitude after 1 month. As few past studies report sampling frequencies of less than a month, estimates of the scale of the marine debris problem need to be critically re-examined and scaled-up accordingly. These results reinforce similar, recent work advocating daily sampling as a standard approach for accurate quantification of available debris in coastal habitats. We outline an alternative approach whereby site-specific accumulation models are generated to correct bias when daily sampling is impractical. PMID:24367607
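A site-specific accumulation model of the kind advocated here can be as simple as constant input with first-order removal. The functional form below is our assumption, with arbitrary parameters chosen to echo the reported loss rates:

```python
import numpy as np

def apparent_daily_rate(I, r, t_days):
    # standing stock after t days under input I and removal rate r:
    # S(t) = (I / r) * (1 - exp(-r * t)), so S(t) / t underestimates I as t grows
    S = (I / r) * (1.0 - np.exp(-r * t_days))
    return S / t_days

def corrected_input_rate(S_obs, r, t_days):
    return S_obs * r / (1.0 - np.exp(-r * t_days))   # invert the model for I

for t in (1, 3, 30):
    print(t, round(apparent_daily_rate(100.0, 0.5, t), 1))
# -> 78.7, 51.8, 6.7 items/day: roughly 50% low after 3 days and an order of
#    magnitude low after a month, mirroring the pattern reported above
```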
Using large volume samplers for the monitoring of particle bound micro pollutants in rivers
NASA Astrophysics Data System (ADS)
Kittlaus, Steffen; Fuchs, Stephan
2015-04-01
The requirements of the WFD as well as substance emission modelling at the river basin scale require stable monitoring data for micro pollutants. The monitoring concepts applied by local authorities as well as by many scientists use single-sample techniques: samples of about one litre are usually taken from water bodies, either at predetermined time steps or when discharge thresholds are exceeded. For predominantly particle-bound micro pollutants, a sample of about one litre contains only a very small amount of suspended particles. Measuring micro pollutant concentrations in such samples is demanding and yields highly uncertain values, if the concentration is above the detection limit in the first place. In many monitoring programs most of the measured values are below the detection limit, which results in high uncertainty in river loads calculated from these data sets. The authors propose a different approach to obtain stable concentration values for particle-bound micro pollutants from river monitoring: a mixed sample of about 1000 L is pumped into a tank with a dirty-water pump. Sampling is usually discharge-dependent, using a gauge signal as input for the control unit. After the discharge event is over or the tank is fully filled, the suspended solids settle in the tank for 2 days, after which a clear separation of water and solids can be shown. A sample (1 L) from the water phase and the total mass of the settled solids (about 10 L) are taken to the laboratory for analysis. While the micro pollutants can hardly be detected in the water phase, the signal from the sediment is far above the detection limit and therefore very stable. From the pollutant concentration in the solid phase and the total tank volume, the initial pollutant concentration in the sample can be calculated; if the concentration in the water phase is detectable, it can be used to correct the total load. This relatively low-cost approach (lower analysis costs because of the small number of samples) allows the pollutant load to be quantified, dissolved-solid partition coefficients to be derived, and the pollutant load in different particle size classes to be quantified.
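Back-calculating the event concentration from the settled-solids tank is simple mass balance. The sketch below uses our own variable names and example units:

```python
def event_concentration(mass_solids_g, conc_solid_mg_per_kg, tank_volume_L,
                        conc_water_ng_per_L=0.0):
    # pollutant mass bound to the settled solids, in ng
    mass_particulate_ng = (mass_solids_g / 1000.0) * conc_solid_mg_per_kg * 1e6
    # initial concentration in the ~1000 L mixed sample, ng/L; the (often
    # non-detectable) water-phase term corrects the total load when available
    return mass_particulate_ng / tank_volume_L + conc_water_ng_per_L

print(event_concentration(500.0, 2.0, 1000.0))  # 0.5 kg solids at 2 mg/kg -> 1000 ng/L
```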
Neural network approach to proximity effect corrections in electron-beam lithography
NASA Astrophysics Data System (ADS)
Frye, Robert C.; Cummings, Kevin D.; Rietman, Edward A.
1990-05-01
The proximity effect, caused by electron beam backscattering during resist exposure, is an important concern in writing submicron features. It can be compensated by appropriate local changes in the incident beam dose, but computation of the optimal correction usually requires a prohibitively long time. We present an example of such a computation on a small test pattern, which we performed by an iterative method. We then used this solution as a training set for an adaptive neural network. After training, the network computed the same correction as the iterative method, but in a much shorter time. Correcting the image with a software based neural network resulted in a decrease in the computation time by a factor of 30, and a hardware based network enhanced the computation speed by more than a factor of 1000. Both methods had an acceptably small error of 0.5% compared to the results of the iterative computation. Additionally, we verified that the neural network correctly generalized the solution of the problem to include patterns not contained in its training set.
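As a rough picture of the surrogate idea (a toy stand-in, not the authors' network, features, or training set): a small regressor learns the mapping from local pattern density to the dose correction that the iterative solver would otherwise have to compute.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
# toy features: local pattern density sampled in a 3x3 neighborhood of each pixel
X = rng.uniform(0.0, 1.0, (5000, 9))
# stand-in for the iterative solver's output: dose rises with backscatter load
y = 1.0 / (0.4 + 0.6 * X.mean(axis=1))

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X[:4000], y[:4000])
print("mean |error|:", np.abs(net.predict(X[4000:]) - y[4000:]).mean())
```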
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons however render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast to noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, −26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
77 FR 39452 - Substantial Business Activities; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-03
... Substantial Business Activities; Correction AGENCY: Internal Revenue Service (IRS), Treasury. ACTION... whether a foreign corporation has substantial business activities in a foreign country. FOR FURTHER... the Code, the regulations have been submitted to the Chief Counsel for Advocacy of the Small Business...
Matsuda, Atsushi; Schermelleh, Lothar; Hirano, Yasuhiro; Haraguchi, Tokuko; Hiraoka, Yasushi
2018-05-15
Correction of chromatic shift is necessary for precise registration of multicolor fluorescence images of biological specimens. New emerging technologies in fluorescence microscopy with increasing spatial resolution and penetration depth have prompted the need for more accurate methods to correct chromatic aberration. However, the amount of chromatic shift of the region of interest in biological samples often deviates from the theoretical prediction because of unknown dispersion in the biological samples. To measure and correct chromatic shift in biological samples, we developed a quadrisection phase correlation approach to computationally calculate translation, rotation, and magnification from reference images. Furthermore, to account for local chromatic shifts, images are split into smaller elements, for which the phase correlation between channels is measured individually and corrected accordingly. We implemented this method in an easy-to-use open-source software package, called Chromagnon, that is able to correct shifts with a 3D accuracy of approximately 15 nm. Applying this software, we quantified the level of uncertainty in chromatic shift correction, depending on the imaging modality used, and for different existing calibration methods, along with the proposed one. Finally, we provide guidelines to choose the optimal chromatic shift registration method for any given situation.
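The core operation, estimating the inter-channel shift by phase correlation, fits in a few lines. Below is a minimal whole-image version (our sketch; Chromagnon's quadrisection approach applies this per image element and also recovers rotation and magnification):

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    # normalized cross-power spectrum; its inverse FFT peaks at the shift
    F_ref, F_mov = np.fft.fft2(ref), np.fft.fft2(moving)
    cross = np.conj(F_ref) * F_mov
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(r), r.shape)
    # convert wrapped peak coordinates to signed shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, r.shape))

img = np.random.default_rng(4).random((64, 64))
print(phase_correlation_shift(img, np.roll(img, (3, -5), axis=(0, 1))))  # -> (3, -5)
```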
Attenuation correction factors for cylindrical, disc and box geometry
NASA Astrophysics Data System (ADS)
Agarwal, Chhavi; Poi, Sanhita; Mhatre, Amol; Goswami, A.; Gathibandhe, M.
2009-08-01
In the present study, attenuation correction factors have been experimentally determined for samples having cylindrical, disc and box geometry and compared with the attenuation correction factors calculated by the Hybrid Monte Carlo (HMC) method [C. Agarwal, S. Poi, A. Goswami, M. Gathibandhe, R.A. Agrawal, Nucl. Instr. and Meth. A 597 (2008) 198] and with the near-field and far-field formulations available in the literature. It has been observed that the near-field formulae, although said to be applicable at close sample-detector geometry, do not work at very close sample-detector configurations. The advantage of the HMC method is that it is found to be valid for all sample-detector geometries.
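For comparison with such measured factors, the textbook far-field self-attenuation factor for a uniform slab or disc viewed along its thickness is easy to evaluate (a generic formula, not the HMC code):

```python
import math

def slab_attenuation_factor(mu_cm_inv, thickness_cm):
    """Ratio of attenuated to unattenuated count rate for a uniform slab
    viewed along its thickness: f = (1 - exp(-mu*t)) / (mu*t)."""
    mt = mu_cm_inv * thickness_cm
    return (1.0 - math.exp(-mt)) / mt if mt > 0 else 1.0

print(slab_attenuation_factor(0.2, 2.0))   # ~0.82 for mu*t = 0.4
```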
Discrimination of almonds (Prunus dulcis) geographical origin by minerals and fatty acids profiling.
Amorello, Diana; Orecchio, Santino; Pace, Andrea; Barreca, Salvatore
2016-09-01
Twenty-one almond samples from three different geographical origins (Sicily, Spain and California) were investigated by determining mineral and fatty acid compositions. The data were used to discriminate almond origin chemometrically by linear discriminant analysis. With respect to previous PCA profiling studies, this work provides a simpler analytical protocol for identifying the geographical origin of almonds. Classification using mineral content data only was correct for 77% of the samples, while using fatty acid profiles the percentage of correctly classified samples reached 82%. Coupling mineral contents with fatty acid profiles led to increased classification efficiency, with 87% of samples correctly classified.
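The workflow maps directly onto standard chemometrics tooling. A sketch with scikit-learn on synthetic placeholder data (the paper's measurements are not reproduced here):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
# rows: almond samples; columns: mineral contents plus fatty acid fractions
X = np.vstack([rng.normal(loc, 1.0, (7, 10)) for loc in (0.0, 0.8, 1.6)])
y = np.repeat(["Sicily", "Spain", "California"], 7)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=7)   # cross-validated classification rate
print("correctly classified:", scores.mean())
```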
NASA Technical Reports Server (NTRS)
Garriz, Javier A.; Haigler, Kara J.
1992-01-01
A three dimensional transonic Wind-tunnel Interference Assessment and Correction (WIAC) procedure developed specifically for use in the National Transonic Facility (NTF) at NASA Langley Research Center is discussed. This report is a user manual for the codes comprising the correction procedure. It also includes listings of sample procedures and input files for running a sample case and plotting the results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Chuan, E-mail: chuan.huang@stonybrookmedicine.edu; Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115; Departments of Radiology, Psychiatry, Stony Brook Medicine, Stony Brook, New York 11794
2015-02-15
Purpose: Degradation of image quality caused by cardiac and respiratory motions hampers the diagnostic quality of cardiac PET. It has been shown that improved diagnostic accuracy of myocardial defect can be achieved by tagged MR (tMR) based PET motion correction using simultaneous PET-MR. However, one major hurdle for the adoption of tMR-based PET motion correction in the PET-MR routine is the long acquisition time needed for the collection of fully sampled tMR data. In this work, the authors propose an accelerated tMR acquisition strategy using parallel imaging and/or compressed sensing and assess the impact on the tMR-based motion corrected PET using phantom and patient data. Methods: Fully sampled tMR data were acquired simultaneously with PET list-mode data on two simultaneous PET-MR scanners for a cardiac phantom and a patient. Parallel imaging and compressed sensing were retrospectively performed by GRAPPA and kt-FOCUSS algorithms with various acceleration factors. Motion fields were estimated using nonrigid B-spline image registration from both the accelerated and fully sampled tMR images. The motion fields were incorporated into a motion corrected ordered subset expectation maximization reconstruction algorithm with motion-dependent attenuation correction. Results: Although tMR acceleration introduced image artifacts into the tMR images for both phantom and patient data, motion corrected PET images yielded similar image quality as those obtained using the fully sampled tMR images for low to moderate acceleration factors (<4). Quantitative analysis of myocardial defect contrast over ten independent noise realizations showed similar results. It was further observed that although the image quality of the motion corrected PET images deteriorates for high acceleration factors, the images were still superior to the images reconstructed without motion correction. Conclusions: Accelerated tMR images obtained with more than 4 times acceleration can still provide relatively accurate motion fields and yield tMR-based motion corrected PET images with similar image quality as those reconstructed using fully sampled tMR data. The reduction of tMR acquisition time makes it more compatible with routine clinical cardiac PET-MR studies.
An Investigation of the Sample Performance of Two Nonnormality Corrections for RMSEA
ERIC Educational Resources Information Center
Brosseau-Liard, Patricia E.; Savalei, Victoria; Li, Libo
2012-01-01
The root mean square error of approximation (RMSEA) is a popular fit index in structural equation modeling (SEM). Typically, RMSEA is computed using the normal theory maximum likelihood (ML) fit function. Under nonnormality, the uncorrected sample estimate of the ML RMSEA tends to be inflated. Two robust corrections to the sample ML RMSEA have…
45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations
Code of Federal Regulations, 2012 CFR
2012-10-01
... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more than...
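Generic survey-sampling code shows how an FPC-style adjustment shrinks the required sample for small populations (our sketch; the regulation's exact constants and thresholds are not reproduced):

```python
import math

def sample_size_with_fpc(n0, N):
    """Adjust an infinite-population sample size n0 for a finite population N:
    n = n0 / (1 + (n0 - 1) / N)."""
    return math.ceil(n0 / (1.0 + (n0 - 1.0) / N))

# n0 = 384 is a common 95% confidence / 5% margin design figure (our example)
print(sample_size_with_fpc(384, 3000))   # -> 341 youth instead of 384
```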
45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations
Code of Federal Regulations, 2014 CFR
2014-10-01
... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more than...
45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations
Code of Federal Regulations, 2011 CFR
2011-10-01
... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more than...
Rao, Amrita; Stahlman, Shauna; Hargreaves, James; Weir, Sharon; Edwards, Jessie; Rice, Brian; Kochelani, Duncan; Mavimbela, Mpumelelo; Baral, Stefan
2018-01-15
[This corrects the article DOI: 10.2196/publichealth.8116.]. ©Amrita Rao, Shauna Stahlman, James Hargreaves, Sharon Weir, Jessie Edwards, Brian Rice, Duncan Kochelani, Mpumelelo Mavimbela, Stefan Baral. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 15.01.2018.
Age determination of bottled Chinese rice wine by VIS-NIR spectroscopy
NASA Astrophysics Data System (ADS)
Yu, Haiyan; Lin, Tao; Ying, Yibin; Pan, Xingxiang
2006-10-01
The feasibility of non-invasive visible and near infrared (VIS-NIR) spectroscopy for determining the age (1, 2, 3, 4, and 5 years) of Chinese rice wine was investigated. Samples of Chinese rice wine were analyzed in 600 mL square brown glass bottles with a side length of approximately 64 mm at room temperature. VIS-NIR spectra of 100 bottled Chinese rice wine samples were collected in transmission mode in the wavelength range of 350-1200 nm by a fiber spectrometer system. Discriminant models were developed based on discriminant analysis (DA) together with raw, first and second derivative spectra. Alcoholic degree, total acid, and °Brix were determined to validate the NIR results. The calibration result for raw spectra was better than that for first and second derivative spectra. The percentage of samples correctly classified for raw spectra was 98%. For the 1-, 2-, and 3-year-old sample groups, the samples were all correctly classified, and for the 4- and 5-year-old sample groups, the percentage of samples correctly classified was 92.9% in each case. In validation analysis, the percentage of samples correctly classified was 100%. The results demonstrated that the VIS-NIR spectroscopic technique could be used as a non-invasive, rapid and reliable method for predicting the age of bottled Chinese rice wine.
Oberacher, Herbert
2013-01-01
The “Critical Assessment of Small Molecule Identification” (CASMI) contest was aimed at testing strategies for small molecule identification that are currently available in the experimental and computational mass spectrometry community. We have applied tandem mass spectral library search to solve Category 2 of the CASMI Challenge 2012 (best identification for high resolution LC/MS data). More than 230,000 tandem mass spectra from four well-established libraries (MassBank, the collection of tandem mass spectra of the “NIST/NIH/EPA Mass Spectral Library 2012”, METLIN, and the ‘Wiley Registry of Tandem Mass Spectral Data, MSforID’) were searched. The sample spectra acquired in positive ion mode were processed. Seven out of 12 challenges did not produce putative positive matches, simply because reference spectra were not available for the compounds searched. This suggests that to some extent the limited coverage of chemical space with high-quality reference spectra is still a problem encountered in tandem mass spectral library search. Solutions were submitted for five challenges. Three compounds were correctly identified (kanamycin A, benzyldiphenylphosphine oxide, and 1-isopropyl-5-methyl-1H-indole-2,3-dione). In the absence of any reference spectrum, a false positive identification was obtained for 1-aminoanthraquinone by matching the corresponding sample spectrum to the structurally related compounds N-phenylphthalimide and 2-aminoanthraquinone. Another false positive result was submitted for 1H-benz[g]indole; for the 1H-benz[g]indole-specific sample spectra provided, carbazole was listed as the best matching compound. In this case, the quality of the available 1H-benz[g]indole-specific reference spectra was found to hamper unequivocal identification. PMID:24957994
A modified TEW approach to scatter correction for In-111 and Tc-99m dual-isotope small-animal SPECT.
Prior, Paul; Timmins, Rachel; Petryk, Julia; Strydhorst, Jared; Duan, Yin; Wei, Lihui; Glenn Wells, R
2016-10-01
In dual-isotope (Tc-99m/In-111) small-animal single-photon emission computed tomography (SPECT), quantitative accuracy of Tc-99m activity measurements is degraded due to the detection of Compton-scattered photons in the Tc-99m photopeak window, which originate from the In-111 emissions (cross talk) and from the Tc-99m emission (self-scatter). The standard triple-energy window (TEW) estimates the total scatter (self-scatter and cross talk) using one scatter window on either side of the Tc-99m photopeak window, but the estimate is biased due to the presence of unscattered photons in the scatter windows. The authors present a modified TEW method to correct for total scatter that compensates for this bias and evaluate the method in phantoms and in vivo. The number of unscattered Tc-99m and In-111 photons present in each scatter-window projection is estimated based on the number of photons detected in the photopeak of each isotope, using the isotope-dependent energy resolution of the detector. The camera-head-specific energy resolutions for the 140 keV Tc-99m and 171 keV In-111 emissions were determined experimentally by separately sampling the energy spectra of each isotope. Each sampled spectrum was fit with a Linear + Gaussian function. The fitted Gaussian functions were integrated across each energy window to determine the proportion of unscattered photons from each emission detected in the scatter windows. The method was first tested and compared to the standard TEW in phantoms containing Tc-99m:In-111 activity ratios between 0.15 and 6.90. True activities were determined using a dose calibrator, and SPECT activities were estimated from CT-attenuation-corrected images with and without scatter-correction. The method was then tested in vivo in six rats using In-111-liposome and Tc-99m-tetrofosmin to generate cross talk in the area of the myocardium. The myocardium was manually segmented using the SPECT and CT images, and partial-volume correction was performed using a template-based approach. The rat heart was counted in a well-counter to determine the true activity. In the phantoms without correction for Compton-scatter, Tc-99m activity quantification errors as high as 85% were observed. The standard TEW method quantified Tc-99m activity with an average accuracy of -9.0% ± 0.7%, while the modified TEW was accurate within 5% of truth in phantoms with Tc-99m:In-111 activity ratios ≥0.52. Without scatter-correction, In-111 activity was quantified with an average accuracy of 4.1%, and there was no dependence of accuracy on the activity ratio. In rat myocardia, uncorrected images were overestimated by an average of 23% ± 5%, and the standard TEW had an accuracy of -13.8% ± 1.6%, while the modified TEW yielded an accuracy of -4.0% ± 1.6%. Cross talk and self-scatter were shown to produce quantification errors in phantoms as well as in vivo. The standard TEW provided inaccurate results due to the inclusion of unscattered photons in the scatter windows. The modified TEW improved the scatter estimate and reduced the quantification errors in phantoms and in vivo.
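In sketch form, the standard TEW estimate and the Gaussian-based compensation for unscattered photons described above look like this (our simplified rendering; the window edges and energy resolution are illustrative, not the authors' calibrated values):

```python
from scipy.stats import norm

def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    # standard TEW: trapezoidal scatter estimate under the photopeak window,
    # S = (C_l / W_l + C_u / W_u) * W_p / 2
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

def unscattered_fraction(e_kev, fwhm_kev, win_lo, win_hi):
    # fraction of a Gaussian photopeak landing inside [win_lo, win_hi];
    # used to subtract unscattered photons from the scatter-window counts
    sigma = fwhm_kev / 2.355
    return norm.cdf(win_hi, loc=e_kev, scale=sigma) - norm.cdf(win_lo, loc=e_kev, scale=sigma)

# a 140 keV Tc-99m peak with 10% FWHM spills ~3% of its unscattered counts
# into a 126-129.5 keV lower scatter window (illustrative window edges)
print(unscattered_fraction(140.0, 14.0, 126.0, 129.5))
```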
Dippold, Michaela A; Boesel, Stefanie; Gunina, Anna; Kuzyakov, Yakov; Glaser, Bruno
2014-03-30
Amino sugars build up microbial cell walls and are important components of soil organic matter. To evaluate their sources and turnover, δ(13)C analysis of soil-derived amino sugars by liquid chromatography was recently suggested. However, amino sugar δ(13)C determination remains challenging because of (1) a strong matrix effect, (2) CO2-binding by alkaline eluents, and (3) the strongly different chromatographic behavior and concentrations of basic and acidic amino sugars. To overcome these difficulties, we established an ion chromatography-oxidation-isotope ratio mass spectrometry method to improve and facilitate soil amino sugar analysis. After acid hydrolysis of soil samples, the extract was purified of salts and other components impeding chromatographic resolution. The amino sugar concentrations and δ(13)C values were determined by coupling an ion chromatograph to an isotope ratio mass spectrometer. The accuracy and precision of quantification and δ(13)C determination were assessed. Internal standards enabled correction for losses during analysis, with a relative standard deviation <6%. The higher peak magnitudes of basic compared with acidic amino sugars required an amount-dependent correction of δ(13)C values. This correction improved the accuracy of δ(13)C determination to <1.5‰ and the precision to <0.5‰ for basic and acidic amino sugars in a single run. The method enables parallel quantification and δ(13)C determination of basic and acidic amino sugars in a single chromatogram owing to the advantages of coupling an ion chromatograph to the isotope ratio mass spectrometer. Small adjustments of sample amount and injection volume are necessary to optimize precision and accuracy for individual soils. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Chen, Chun-Chi; Lin, Shih-Hao; Lin, Yi
2014-06-01
This paper proposes a time-domain CMOS smart temperature sensor featuring on-chip curvature correction and one-point calibration support for thermal management systems. Time-domain inverter-based temperature sensors, which offer the advantages of low power and low cost, have been proposed for on-chip thermal monitoring. However, the thermal transfer curve exhibits large curvature, which substantially degrades accuracy as the temperature range increases. Another problem is that the inverter is sensitive to process variations, making it difficult for such sensors to achieve acceptable accuracy with one-point calibration. To overcome these two problems, a temperature-dependent oscillator with curvature correction is proposed to increase the linearity of the oscillation pulse width, thereby removing the need for costly off-chip second-order master-curve fitting. For one-point calibration support, an adjustable-gain time amplifier was adopted to eliminate the effect of process variations, with the assistance of a calibration circuit. The proposed circuit occupied a small area of 0.073 mm2 and was fabricated in a TSMC CMOS 0.35-μm 2P4M digital process. The linearization of the oscillator and the cancellation of process variations enabled the sensor, which features a fixed resolution of 0.049 °C/LSB, to achieve an inaccuracy of -0.8 °C to 1.2 °C after one-point calibration of 12 test chips from -40 °C to 120 °C. The power consumption was 35 μW at a sample rate of 10 samples/s.
Sekundo, Walter; Kunert, Kathleen S; Blum, Marcus
2011-03-01
This 6-month prospective multi-centre study evaluated the feasibility of performing myopic femtosecond lenticule extraction (FLEx) through a small incision using the small incision lenticule extraction (SMILE) procedure. Prospective, non-randomised clinical trial. Participants: Ninety-one eyes of 48 patients with myopia with and without astigmatism completed the final 6-month follow-up. The patients' mean age was 35.3 years. Their preoperative mean spherical equivalent (SE) was −4.75±1.56 D. A refractive lenticule of intrastromal corneal tissue was cut utilising a prototype of the Carl Zeiss Meditec AG VisuMax femtosecond laser system. Simultaneously, two opposite small ‘pocket’ incisions were created by the laser system. Thereafter, the lenticule was manually dissected with a spatula and removed through one of the incisions using modified McPherson forceps. Outcome measures were uncorrected visual acuity (UCVA) and best spectacle-corrected visual acuity (BSCVA) after 6 months, objective and manifest refraction, slit-lamp examination, side effects and a questionnaire. Six months postoperatively, the mean SE was −0.01±0.49 D. Most treated eyes (95.6%) were within ±1.0 D, and 80.2% were within ±0.5 D of the intended correction. Of the eyes treated, 83.5% had a UCVA of 1.0 (20/20) or better; 53% remained unchanged, 32.3% gained one line, 3.3% gained two lines, 8.8% lost one line and 1.1% lost ≥2 lines of BSCVA. When answering a standardised questionnaire, 93.3% of patients were satisfied with the results obtained and would undergo the procedure again. SMILE is a promising new flapless, minimally invasive refractive procedure to correct myopia.
Baumrind, S; Korn, E L; Isaacson, R J; West, E E; Molthen, R
1983-12-01
This article analyzes differences in the measured displacement of the condyle and of pogonion when different vectors of force are delivered to the maxilla in the course of non-full-banded, Phase 1, mixed-dentition treatment for the correction of Class II malocclusion. The 238-case sample is identical to that for which changes in other parameters of facial form have been reported previously. Relative to superimposition on the anterior cranial base, and measured in a Frankfort-plane-determined coordinate system, we have attempted to identify and quantify (1) the displacement of each structure which results from local remodeling and (2) the displacement of each structure which occurs as a secondary consequence of changes in other regions of the skull. We have also attempted to isolate treatment effects from those attributable to spontaneous growth and development. At the condyle, we note that in all three treatment groups and in the control group there is a small but real downward and backward displacement of the glenoid fossa. This change is not treatment induced but, rather, is associated with spontaneous growth and development. (See Fig. 5.) Some interesting differences in the pattern of "growth at the condyle" were noted between samples. In the intraoral (modified activator) sample, there were small but statistically significant increases in growth rate as compared with the untreated group of Class II controls. To our surprise, similar statistically significant increases over the growth rate of the control group were noted in the cervical sample. (See Table III, variables 17 and 18.) Small but statistically significant differences between treatments were also noted in the patterns of change at pogonion. As compared with the untreated control group, the rate of total displacement in the modified activator group was significantly greater in the forward direction, while the rate of total displacement in the cervical group was significantly greater in the downward direction. There were no statistically significant differences in the rate of total displacement of pogonion between the high-pull sample and the control sample. (See Table IV, variables 21 and 22.)
Benmakhlouf, Hamza; Andreo, Pedro
2017-02-01
Correction factors for the relative dosimetry of narrow megavoltage photon beams have recently been determined in several publications. These corrections are required because of the various small-field effects generally thought to be caused by the lack of lateral charged particle equilibrium (LCPE) in narrow beams. Correction factors for relative dosimetry are ultimately necessary to account for the fluence perturbation caused by the detector. For most small-field detectors the perturbation depends on field size, resulting in large correction factors as the field size is decreased. In this work, electron and photon fluence differential in energy were calculated within the radiation sensitive volume of a number of small-field detectors for 6 MV linear accelerator beams. The calculated electron spectra were used to determine the electron fluence perturbation as a function of field size, and its implications for small-field dosimetry were analyzed. Fluence spectra were calculated with the user code PenEasy, based on the PENELOPE Monte Carlo system. The detectors simulated were one liquid ionization chamber, two air ionization chambers, one diamond detector, and six silicon diodes, all manufactured either by PTW or IBA. The spectra were calculated for broad (10 cm × 10 cm) and narrow (0.5 cm × 0.5 cm) photon beams in order to investigate the influence of field size on the fluence spectra and the resulting perturbation. The photon fluence spectra were used to analyze the impact of the absorption and generation of photons, which have a direct influence on the electrons generated in the detector radiation sensitive volume. The electron fluence spectra were used to quantify the perturbation effects and their relation to output correction factors. The photon fluence spectra obtained for all detectors were similar to the spectrum in water except for the shielded silicon diodes. The photon fluence in the latter group was strongly influenced, mostly in the low-energy region, by photoabsorption in the high-Z shielding material. For the ionization chambers and the diamond detector, the electron fluence spectra were found to be similar to that in water for both field sizes. In contrast, the electron spectra in the silicon diodes were much higher than that in water for both field sizes. The estimated perturbations of the fluence spectra for the silicon diodes were 11-21% for the large fields and 14-27% for the small fields. These perturbations are related to the atomic number, density and mean excitation energy (I-value) of silicon, as well as to the influence of the "extracameral" components surrounding the detector sensitive volume. For most detectors the fluence perturbation was also found to increase when the field size was decreased, consistent with the increased small-field effects observed for the smallest field sizes. The present work improves the understanding of small-field effects by relating output correction factors to spectral fluence perturbations in small-field detectors. It is shown that the main reasons for the well-known small-field effects in silicon diodes are the high atomic number and density of the "extracameral" detector components and the high I-value of silicon relative to that of water and diamond. Compared with these parameters, the density and atomic number of the radiation sensitive volume material play a less significant role. © 2016 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Munafo, I.; Malagnini, L.; Chiaraluce, L.; Valoroso, L.
2015-12-01
The relation between moment magnitude (MW) and local magnitude (ML) is still a debated issue (Bath, 1966, 1981; Ristau et al., 2003, 2005). Theoretical considerations and empirical observations show that, in the magnitude range between 3 and 5, MW and ML scale 1:1, whilst for smaller magnitudes this 1:1 scaling breaks down (Bethmann et al., 2011). To investigate this issue, we analyzed the source parameters of about 1500 well-located small earthquakes (30,000 waveforms) that occurred in the Upper Tiber Valley (Northern Apennines) in the range -1.5≤ML≤3.8. Among these earthquakes are 300 events that repeatedly ruptured the same fault patch, generally twice within a short time interval (less than 24 hours; Chiaraluce et al., 2007). We use high-resolution short-period and broadband recordings acquired between 2010 and 2014 by 50 permanent seismic stations deployed to monitor the activity of a regional low-angle normal fault (named the Alto Tiberina fault, ATF) in the framework of the Alto Tiberina Near Fault Observatory project (TABOO; Chiaraluce et al., 2014). For this study the direct determination of MW for small earthquakes is essential, but unfortunately the computation of MW for small earthquakes (MW < 3) is not a routine procedure in seismology. We apply the contributions of source, site, and crustal attenuation computed for this area in order to obtain precise spectral corrections to be used in the calculation of the spectral plateaus of small earthquakes. The aim of this analysis is to obtain moment magnitudes of small events through a procedure that uses our previously calibrated crustal attenuation parameters (geometrical spreading g(r), quality factor Q(f), and the residual parameter k) to correct for path effects. We determine the MW-ML relationships in two selected fault zones (on-fault and fault-hanging-wall) of the ATF by an orthogonal regression analysis, providing a semi-automatic and robust procedure for moment magnitude determination within a region characterized by small-to-moderate seismicity. Finally, we present, for a subset of data, corner frequency values computed by spectral analysis of S-waves, using data from three nearby shallow borehole stations sampled at 500 sps.
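A minimal sketch of the orthogonal (total least squares) regression used to relate ML and MW, which treats both magnitudes as uncertain rather than minimizing vertical residuals only. The synthetic magnitudes, the 2/3 slope of the toy scaling, and the equal error variances are illustrative assumptions, not the study's calibration.

```python
import numpy as np

def orthogonal_regression(x, y):
    """Fit y = a + b*x minimizing perpendicular distances (Deming, lambda=1)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = np.var(x), np.var(y)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    b = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4.0 * sxy ** 2)) / (2.0 * sxy)
    return y.mean() - b * x.mean(), b  # intercept, slope

# Toy data mimicking a small-magnitude breakdown of 1:1 scaling.
rng = np.random.default_rng(0)
ml_true = rng.uniform(-1.5, 3.8, 300)
mw = (2.0 / 3.0) * ml_true + 0.9 + rng.normal(0, 0.1, ml_true.size)
ml = ml_true + rng.normal(0, 0.1, ml_true.size)   # ML carries error too
a, b = orthogonal_regression(ml, mw)
print(f"MW = {a:.2f} + {b:.2f} * ML")
```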
Retrofit designs for small bench-type blood cell counters.
Ferris, C D
1991-01-01
This paper describes several retrofit designs to correct operational problems associated with small bench-type blood cell counters. Replacement electronic circuits as well as modifications to the vacuum systems are discussed.
48 CFR 552.219-73 - Goals for Subcontracting Plan.
Code of Federal Regulations, 2012 CFR
2012-10-01
...: Goals for Subcontracting Plan (JUN 2005) (a) Maximum practicable utilization of small, HUBZone small... correct deficiencies in a plan within the time specified by the Contracting Officer shall make the offeror...
48 CFR 552.219-73 - Goals for Subcontracting Plan.
Code of Federal Regulations, 2014 CFR
2014-10-01
...: Goals for Subcontracting Plan (JUN 2005) (a) Maximum practicable utilization of small, HUBZone small... correct deficiencies in a plan within the time specified by the Contracting Officer shall make the offeror...
48 CFR 552.219-73 - Goals for Subcontracting Plan.
Code of Federal Regulations, 2013 CFR
2013-10-01
...: Goals for Subcontracting Plan (JUN 2005) (a) Maximum practicable utilization of small, HUBZone small... correct deficiencies in a plan within the time specified by the Contracting Officer shall make the offeror...
48 CFR 552.219-73 - Goals for Subcontracting Plan.
Code of Federal Regulations, 2011 CFR
2011-10-01
...: Goals for Subcontracting Plan (JUN 2005) (a) Maximum practicable utilization of small, HUBZone small... correct deficiencies in a plan within the time specified by the Contracting Officer shall make the offeror...
Grabitz, Clara R; Button, Katherine S; Munafò, Marcus R; Newbury, Dianne F; Pernet, Cyril R; Thompson, Paul A; Bishop, Dorothy V M
2018-01-01
Genetics and neuroscience are two areas of science that pose particular methodological problems because they involve detecting weak signals (i.e., small effects) in noisy data. In recent years, increasing numbers of studies have attempted to bridge these disciplines by looking for genetic factors associated with individual differences in behavior, cognition, and brain structure or function. However, different methodological approaches to guarding against false positives have evolved in the two disciplines. To explore methodological issues affecting neurogenetic studies, we conducted an in-depth analysis of 30 consecutive articles in 12 top neuroscience journals that reported on genetic associations in nonclinical human samples. It was often difficult to estimate effect sizes in neuroimaging paradigms. Where effect sizes could be calculated, the studies reporting the largest effect sizes tended to have two features: (i) they had the smallest samples and were generally underpowered to detect genetic effects, and (ii) they did not fully correct for multiple comparisons. Furthermore, only a minority of studies used statistical methods for multiple comparisons that took into account correlations between phenotypes or genotypes, and only nine studies included a replication sample or explicitly set out to replicate a prior finding. Finally, presentation of methodological information was not standardized and was often distributed across Methods sections and Supplementary Material, making it challenging to assemble basic information from many studies. Space limits imposed by journals could mean that highly complex statistical methods were described in only a superficial fashion. In summary, methods that have become standard in the genetics literature-stringent statistical standards, use of large samples, and replication of findings-are not always adopted when behavioral, cognitive, or neuroimaging phenotypes are used, leading to an increased risk of false-positive findings. Studies need to correct not just for the number of phenotypes collected but also for the number of genotypes examined, genetic models tested, and subsamples investigated. The field would benefit from more widespread use of methods that take into account correlations between the factors corrected for, such as spectral decomposition, or permutation approaches. Replication should become standard practice; this, together with the need for larger sample sizes, will entail greater emphasis on collaboration between research groups. We conclude with some specific suggestions for standardized reporting in this area.
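A hedged sketch of one of the remedies recommended above: a permutation-based (max-statistic) correction whose null distribution automatically respects correlations among phenotypes. The two-group design, t-statistics, and simulated data are illustrative assumptions, not a reconstruction of any reviewed study's analysis.

```python
import numpy as np

def maxt_permutation_pvalues(genotype, phenotypes, n_perm=2000, seed=1):
    """Adjusted p-values for a 0/1 genotype effect on correlated phenotypes.
    genotype: (n,) carrier status; phenotypes: (n, p) matrix."""
    rng = np.random.default_rng(seed)

    def t_stats(g):
        a, b = phenotypes[g == 1], phenotypes[g == 0]
        se = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
        return np.abs(a.mean(0) - b.mean(0)) / se

    observed = t_stats(genotype)
    max_null = np.array([t_stats(rng.permutation(genotype)).max()
                         for _ in range(n_perm)])
    # Adjusted p-value: how often the null maximum beats each observed statistic.
    return (1 + (max_null[:, None] >= observed).sum(0)) / (n_perm + 1)

rng = np.random.default_rng(0)
pheno = rng.multivariate_normal(np.zeros(3),
                                [[1, .8, .5], [.8, 1, .5], [.5, .5, 1]], 60)
geno = rng.integers(0, 2, 60)
print(maxt_permutation_pvalues(geno, pheno))
```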
Sarrazin, Samuel; Poupon, Cyril; Linke, Julia; Wessa, Michèle; Phillips, Mary; Delavest, Marine; Versace, Amelia; Almeida, Jorge; Guevara, Pamela; Duclap, Delphine; Duchesnay, Edouard; Mangin, Jean-François; Le Dudal, Katia; Daban, Claire; Hamdani, Nora; D'Albis, Marc-Antoine; Leboyer, Marion; Houenou, Josselin
2014-04-01
Tractography studies investigating white matter (WM) abnormalities in patients with bipolar disorder have yielded heterogeneous results owing to small sample sizes. The small size limits their generalizability, a critical issue for neuroimaging studies of biomarkers of bipolar I disorder (BPI). To study WM abnormalities using whole-brain tractography in a large international multicenter sample of BPI patients and to compare these alterations between patients with or without a history of psychotic features during mood episodes. A cross-sectional, multicenter, international, Q-ball imaging tractography study comparing 118 BPI patients and 86 healthy control individuals. In addition, among the patient group, we compared those with and without a history of psychotic features. University hospitals in France, Germany, and the United States contributed participants. Participants underwent assessment using the Diagnostic Interview for Genetic Studies at the French sites or the Structured Clinical Interview for DSM-IV at the German and US sites. Diffusion-weighted magnetic resonance images were acquired using the same acquisition parameters and scanning hardware at each site. We reconstructed 22 known deep WM tracts using Q-ball imaging tractography and an automatized segmentation technique. Generalized fractional anisotropy values along each reconstructed WM tract. Compared with controls, BPI patients had significant reductions in mean generalized fractional anisotropy values along the body and the splenium of the corpus callosum, the left cingulum, and the anterior part of the left arcuate fasciculus when controlling for age, sex, and acquisition site (corrected for multiple testing). Patients with a history of psychotic features had a lower mean generalized fractional anisotropy value than those without along the body of the corpus callosum (corrected for multiple testing). In this multicenter sample, BPI patients had reduced WM integrity in interhemispheric, limbic, and arcuate WM tracts. Interhemispheric pathways are more disrupted in patients with than in those without psychotic symptoms. Together these results highlight the existence of an anatomic disconnectivity in BPI and further underscore a role for interhemispheric disconnectivity in the pathophysiological features of psychosis in BPI.
Bilgmann, Kerstin; Möller, Luciana M.; Harcourt, Robert G.; Kemper, Catherine M.; Beheregaray, Luciano B.
2011-01-01
Advances in molecular techniques have enabled the study of genetic diversity and population structure in many different contexts. Studies that assess the genetic structure of cetacean populations often use biopsy samples from free-ranging individuals and tissue samples from stranded animals or individuals that became entangled in fishery or aquaculture equipment. This leads to the question of how representative the location of a stranded or entangled animal is with respect to its natural range, and whether similar results would be obtained when comparing carcass samples with samples from free-ranging individuals in studies of population structure. Here we use tissue samples from carcasses of dolphins that stranded or died as a result of bycatch in South Australia to investigate spatial population structure in two species: coastal bottlenose (Tursiops sp.) and short-beaked common dolphins (Delphinus delphis). We compare these results with those previously obtained from biopsy sampled free-ranging dolphins in the same area to test whether carcass samples yield similar patterns of genetic variability and population structure. Data from dolphin carcasses were gathered using seven microsatellite markers and a fragment of the mitochondrial DNA control region. Analyses based on carcass samples alone failed to detect genetic structure in Tursiops sp., a species previously shown to exhibit restricted dispersal and moderate genetic differentiation across a small spatial scale in this region. However, genetic structure was correctly inferred in D. delphis, a species previously shown to have reduced genetic structure over a similar geographic area. We propose that in the absence of corroborating data, and when population structure is assessed over relatively small spatial scales, the sole use of carcasses may lead to an underestimate of genetic differentiation. This can lead to a failure in identifying management units for conservation. Therefore, this risk should be carefully assessed when planning population genetic studies of cetaceans. PMID:21655285
Performance of a Line Loss Correction Method for Gas Turbine Emission Measurements
NASA Astrophysics Data System (ADS)
Hagen, D. E.; Whitefield, P. D.; Lobo, P.
2015-12-01
International concern for the environmental impact of jet engine exhaust emissions in the atmosphere has led to increased attention on gas turbine engine emission testing. The Society of Automotive Engineers Aircraft Exhaust Emissions Measurement Committee (E-31) has published an Aerospace Information Report (AIR) 6241 detailing the sampling system for the measurement of non-volatile particulate matter from aircraft engines, and is developing an Aerospace Recommended Practice (ARP) for methodology and system specification. The Missouri University of Science and Technology (MST) Center for Excellence for Aerospace Particulate Emissions Reduction Research has led numerous jet engine exhaust sampling campaigns to characterize emissions at different locations in the expanding exhaust plume. Particle loss, due to various mechanisms, occurs in the sampling train that transports the exhaust sample from the engine exit plane to the measurement instruments. To account for the losses, both the size dependent penetration functions and the size distribution of the emitted particles need to be known. However in the proposed ARP, particle number and mass are measured, but size is not. Here we present a methodology to generate number and mass correction factors for line loss, without using direct size measurement. A lognormal size distribution is used to represent the exhaust aerosol at the engine exit plane and is defined by the measured number and mass at the downstream end of the sample train. The performance of this line loss correction is compared to corrections based on direct size measurements using data taken by MST during numerous engine test campaigns. The experimental uncertainty in these correction factors is estimated. Average differences between the line loss correction method and size based corrections are found to be on the order of 10% for number and 2.5% for mass.
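A hedged numerical sketch of the line-loss correction idea described above: assume a lognormal exit-plane size distribution, constrain its median diameter with the downstream mass-to-number ratio (expressed as a mean diameter cubed), then form number and mass correction factors from penetration-weighted integrals. The penetration function eta(d), the fixed geometric standard deviation, and the example value are illustrative stand-ins, not the ARP methodology's actual parameters.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq

d = np.logspace(0.3, 3, 600)   # diameters, nm (about 2 to 1000 nm)

def lognormal(d, gmd, gsd):
    s = np.log(gsd)
    return (np.exp(-0.5 * ((np.log(d) - np.log(gmd)) / s) ** 2)
            / (d * s * np.sqrt(2.0 * np.pi)))

def eta(d):
    """Assumed size-dependent line penetration (diffusion-loss-like stand-in)."""
    return np.exp(-8.0 / d)

def mean_d3_downstream(gmd, gsd):
    pdf = lognormal(d, gmd, gsd)
    return trapezoid(pdf * eta(d) * d ** 3, d) / trapezoid(pdf * eta(d), d)

def correction_factors(mean_d3_meas, gsd=1.8):
    """mean_d3_meas: downstream mass-to-number mean diameter cubed (nm^3),
    proportional to the measured M/N via the particle effective density."""
    gmd = brentq(lambda g: mean_d3_downstream(g, gsd) - mean_d3_meas, 3.0, 800.0)
    pdf = lognormal(d, gmd, gsd)
    k_num = trapezoid(pdf, d) / trapezoid(pdf * eta(d), d)
    k_mass = trapezoid(pdf * d ** 3, d) / trapezoid(pdf * eta(d) * d ** 3, d)
    return k_num, k_mass, gmd

k_num, k_mass, gmd = correction_factors(mean_d3_meas=60.0 ** 3)
print(f"exit-plane GMD ~ {gmd:.0f} nm, number x{k_num:.2f}, mass x{k_mass:.2f}")
```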
Pribil, Michael; Ridley, William I.; Emsbo, Poul
2015-01-01
Isotope ratio measurements using a multi-collector inductively coupled plasma mass spectrometer (MC-ICP-MS) commonly use standard-sample bracketing with a single isotope standard for mass bias correction of elements with narrow-range isotope systems, e.g., Cu, Fe, Zn, and Hg. However, the sulfur (S) isotopic composition (δ34S) found in nature ranges from at least −40 to +40‰, potentially exceeding the ability of standard-sample bracketing with a single sulfur isotope standard to accurately correct for mass bias. Isotopic fractionation via solution and laser ablation introduction was determined during sulfate sulfur (Ssulfate) isotope measurements. An external isotope calibration curve was constructed using in-house and National Institute of Standards and Technology (NIST) Ssulfate isotope reference materials (RM) to correct for the difference. The performance of the external isotope correction for Ssulfate isotope measurements was evaluated by analyzing NIST and United States Geological Survey (USGS) Ssulfate isotope reference materials as unknowns. Differences in δ34Ssulfate between standard-sample bracketing and standard-sample bracketing with external isotope correction for sulfate samples ranged from 0.72‰ to 2.35‰ over a δ34S range of 1.40‰ to 21.17‰. No isotopic differences were observed when analyzing Ssulfide reference materials over a δ34Ssulfide range of −32.1‰ to 17.3‰ and a δ33S range of −16.5‰ to 8.9‰ via laser ablation (LA)-MC-ICP-MS. Here, we identify a possible plasma-induced fractionation for Ssulfate and describe a new method using external isotope calibration corrections with solution and LA-MC-ICP-MS.
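A minimal sketch of the two correction stages described above: standard-sample bracketing converts a raw ratio to a delta value, and an external calibration line built from several reference materials then removes the residual, composition-dependent offset. The reference and measured delta values below are invented for illustration, not the NIST/USGS values used in the study.

```python
import numpy as np

def bracketed_delta(r_sample, r_std_before, r_std_after):
    """Standard-sample bracketing: delta (permil) against the mean of the
    bracketing standard measurements."""
    r_std = 0.5 * (r_std_before + r_std_after)
    return (r_sample / r_std - 1.0) * 1000.0

# External calibration: regress bracketed deltas of several reference
# materials against their accepted values, then invert for unknowns.
accepted = np.array([-32.1, 1.4, 17.3, 21.2])   # accepted d34S, permil (toy)
measured = np.array([-30.6, 1.9, 18.6, 23.1])   # bracketed results, permil (toy)
slope, intercept = np.polyfit(accepted, measured, 1)

def externally_corrected(delta_measured):
    return (delta_measured - intercept) / slope

print(round(externally_corrected(10.0), 2))
```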
LD Score Regression Distinguishes Confounding from Polygenicity in Genome-Wide Association Studies
Bulik-Sullivan, Brendan K.; Loh, Po-Ru; Finucane, Hilary; Ripke, Stephan; Yang, Jian; Patterson, Nick; Daly, Mark J.; Price, Alkes L.; Neale, Benjamin M.
2015-01-01
Both polygenicity (i.e., many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield an inflated distribution of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from true polygenic signal and bias. We have developed an approach, LD Score regression, that quantifies the contribution of each by examining the relationship between test statistics and linkage disequilibrium (LD). The LD Score regression intercept can be used to estimate a more powerful and accurate correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of test statistic inflation in many GWAS of large sample size. PMID:25642630
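A hedged sketch of the central idea: regress GWAS chi-square statistics on LD scores, read confounding off the intercept, and use it as a correction factor. The LD scores and test statistics are simulated from the expectation model only; the regression weights and block-jackknife standard errors of the published method are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 1_000_000, 50_000            # SNPs, GWAS sample size
ld = rng.gamma(5.0, 20.0, m)        # LD scores (mean ~100)
h2, inflation = 0.4, 1.05           # polygenic heritability and confounding
# E[chi2] = n*h2*ld/m + intercept; noise stands in for sampling variation.
chi2 = inflation + (n * h2 / m) * ld + rng.normal(0, 0.5, m)

X = np.column_stack([np.ones(m), ld])
intercept, slope = np.linalg.lstsq(X, chi2, rcond=None)[0]
chi2_corrected = chi2 / intercept   # rescale by confounding, keep polygenic slope
print(f"intercept ~ {intercept:.3f}, implied h2 ~ {slope * m / n:.3f}")
```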
Accurate elevation and normal moveout corrections of seismic reflection data on rugged topography
Liu, J.; Xia, J.; Chen, C.; Zhang, G.
2005-01-01
The application of the seismic reflection method is often limited in areas of complex terrain. The problem is the incorrect correction of time shifts caused by topography. To apply the normal moveout (NMO) correction to reflection data correctly, static corrections must first be applied to compensate for the time distortions of topography and the time delays from near-surface weathered layers. In environmental and engineering investigations, the weathered layers are themselves the targets, so the static correction mainly serves to adjust the time shifts due to an undulating surface. In practice, seismic reflected raypaths are assumed to be almost vertical through the near-surface layers because these have much lower velocities than the layers below. This assumption is acceptable in most cases since it results in little residual error for small elevation changes and small offsets in reflection events. Although static algorithms based on choosing a floating datum related to common-midpoint gathers or on residual surface-consistent functions are available and effective, errors caused by the assumption of vertical raypaths often generate pseudo-indications of structures. This paper presents a comparison of corrections based on vertical raypaths and on biased (non-vertical) raypaths. It also provides an approach for combining elevation and NMO corrections. The advantages of the approach are demonstrated by synthetic and real-world examples of multi-coverage seismic reflection surveys on rough topography. © The Royal Society of New Zealand 2005.
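An illustrative sketch of the two corrections combined above: a vertical-raypath elevation static shift to a flat datum, followed by hyperbolic NMO flattening of a reflection time. The velocities, elevations, and picked time are assumed values, and the vertical-raypath static is exactly the approximation the paper scrutinizes.

```python
import numpy as np

def elevation_static(z_source, z_receiver, z_datum, v_near_surface):
    """Time shift (s) to move source and receiver onto a flat datum,
    assuming near-vertical raypaths through the low-velocity layer."""
    return ((z_source - z_datum) + (z_receiver - z_datum)) / v_near_surface

def nmo_time(t0, offset, v_nmo):
    """Hyperbolic reflection moveout: t(x) = sqrt(t0^2 + x^2 / v^2)."""
    return np.sqrt(t0 ** 2 + (offset / v_nmo) ** 2)

# Remove the static, then map the picked time t(x) back to zero-offset t0.
t_picked = 0.487                       # s, picked reflection at 200 m offset
static = elevation_static(12.0, 8.0, 0.0, 600.0)
t_corr = t_picked - static
t0 = np.sqrt(max(t_corr ** 2 - (200.0 / 1800.0) ** 2, 0.0))
print(f"static = {static*1e3:.1f} ms, t0 = {t0*1e3:.1f} ms")
```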
Fat fraction bias correction using T1 estimates and flip angle mapping.
Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A
2014-01-01
To develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high flip angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large flip angle measurements for accuracy and precision. Simulations show similar results in PDFF error between small flip angle measurements and corrected large flip angle measurements as long as T1 estimates were within one standard deviation from the true value. Compared to low flip angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF measurements if images were acquired at a high flip angle, with T1 bias corrected using T1 estimates and flip angle mapping. T1 bias correction of large flip angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low flip angle acquisitions. Copyright © 2013 Wiley Periodicals, Inc.
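A hedged sketch of the correction idea: divide the water and fat signals by their spoiled-gradient-echo saturation factors, computed from a priori T1 estimates and the mapped (actual) flip angle, before forming the fat fraction. The TR, T1 values, flip angle, and signals below are illustrative assumptions, not the study's protocol.

```python
import numpy as np

def spgr_saturation(flip_deg, tr_ms, t1_ms):
    """Relative spoiled gradient echo signal per unit proton density."""
    a = np.deg2rad(flip_deg)
    e1 = np.exp(-tr_ms / t1_ms)
    return np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

def t1_corrected_pdff(w_signal, f_signal, flip_map_deg, tr_ms=10.0,
                      t1_water_ms=1100.0, t1_fat_ms=350.0):
    """Remove T1 weighting from each species, then compute PDFF = F/(W+F)."""
    w = w_signal / spgr_saturation(flip_map_deg, tr_ms, t1_water_ms)
    f = f_signal / spgr_saturation(flip_map_deg, tr_ms, t1_fat_ms)
    return f / (w + f)

# A high flip angle weights the short-T1 fat signal more heavily;
# the correction removes that bias using the mapped flip angle.
print(t1_corrected_pdff(w_signal=0.62, f_signal=0.38, flip_map_deg=17.0))
```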
Correcting for Systematic Bias in Sample Estimates of Population Variances: Why Do We Divide by n-1?
ERIC Educational Resources Information Center
Mittag, Kathleen Cage
An important topic presented in introductory statistics courses is the estimation of population parameters using samples. Students learn that when estimating population variances using sample data, we always get an underestimate of the population variance if we divide by n rather than n-1. One implication of this correction is that the degree of…
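A short simulation of the point made above: dividing the sum of squared deviations by n systematically underestimates the population variance, while Bessel's n-1 correction is unbiased. The population and sample size are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
pop_var = 4.0
n, trials = 5, 200_000
samples = rng.normal(0.0, np.sqrt(pop_var), size=(trials, n))

dev2 = (samples - samples.mean(axis=1, keepdims=True)) ** 2
var_n = dev2.sum(axis=1) / n          # biased: expectation (n-1)/n * sigma^2
var_n1 = dev2.sum(axis=1) / (n - 1)   # Bessel-corrected: expectation sigma^2

print(f"divide by n:   {var_n.mean():.3f} (expect {(n-1)/n*pop_var:.1f})")
print(f"divide by n-1: {var_n1.mean():.3f} (expect {pop_var:.1f})")
```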
Gravity or turbulence? IV. Collapsing cores in out-of-virial disguise
NASA Astrophysics Data System (ADS)
Ballesteros-Paredes, Javier; Vázquez-Semadeni, Enrique; Palau, Aina; Klessen, Ralf S.
2018-06-01
We study the dynamical state of massive cores by using a simple analytical model, an observational sample, and numerical simulations of collapsing massive cores. From the analytical model, we find that cores increase their column density and velocity dispersion as they collapse, resulting in a time evolution path in the Larson velocity dispersion-size diagram from large sizes and small velocity dispersions to small sizes and large velocity dispersions, while they tend toward equipartition between gravitational and kinetic energy. From the observational sample, we find that: (a) cores with substantially different column densities do not follow a Larson-like linewidth-size relation; instead, cores with higher column densities tend to be located in the upper-left corner of the Larson velocity dispersion σv, 3D-size R diagram, a result explained in the hierarchical and chaotic collapse scenario; and (b) cores appear to be overvirial. Finally, our numerical simulations reproduce the behavior predicted by the analytical model and depicted in the observational sample: collapsing cores evolve towards larger velocity dispersions and smaller sizes as they collapse and increase their column density. More importantly, however, they exhibit overvirial states. This apparent excess is due to the assumption that the gravitational energy is given by the energy of an isolated homogeneous sphere; the excess disappears when the gravitational energy is correctly calculated from the actual spatial mass distribution. We conclude that the observed energy budget of cores is consistent with their non-thermal motions being driven by self-gravity and with the cores being in the process of dynamical collapse.
NASA Astrophysics Data System (ADS)
Kouhpeima, A.; Feiznia, S.; Ahmadi, H.; Hashemi, S. A.; Zareiee, A. R.
2010-09-01
The targeting of sediment management strategies is a key requirement in developing countries, including Iran, because of the limited resources available. Such targeting is, however, hampered by the lack of reliable information on catchment sediment sources. This paper reports the results of using a quantitative composite fingerprinting technique to estimate the relative importance of the primary potential sources within the Amrovan and Royan catchments in Semnan Province, Iran. Fifteen tracers were first selected, and the samples were analyzed in the laboratory for these parameters. Statistical methods were applied to the data, including the nonparametric Kruskal-Wallis test and discriminant function analysis (DFA). For the Amrovan catchment, three parameters (N, Cr and Co) were found not to contribute significantly to the discrimination. The optimum fingerprint, comprising OC, pH, kaolinite and K, was able to classify correctly 100% of the source material samples. For the Royan catchment, all 15 properties were able to distinguish between the six source types, and the optimum fingerprint provided by stepwise DFA (chlorite, XFD, N and C) correctly classified 92.9% of the source material samples. The mean contributions from each sediment source obtained by the multivariate mixing model differed between the two catchments. For the Amrovan catchment, the Upper Red formation is the main sediment source, supplying approximately 36% of the reservoir sediment, whereas the dominant sediment source for the Royan catchment is the Karaj formation, which supplies 33% of the reservoir sediments. The results indicate that the source fingerprinting approach works well in the study catchments and generates reliable results.
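A hedged sketch of the multivariate mixing model step: find non-negative source proportions summing to one that best reproduce the sediment sample's fingerprint properties. The tracer values are invented, and real applications typically weight the properties and apply particle-size and organic-matter corrections omitted here.

```python
import numpy as np
from scipy.optimize import minimize

sources = np.array([        # rows: mean tracer values per potential source
    [12.0, 7.1, 30.0, 1.8],
    [25.0, 7.9, 12.0, 2.6],
    [18.0, 8.4, 22.0, 2.1],
])
sediment = np.array([17.5, 7.9, 21.0, 2.2])  # reservoir sediment fingerprint

def objective(p):
    """Sum of squared relative errors between mixed and observed tracers."""
    mixed = p @ sources
    return np.sum(((sediment - mixed) / sediment) ** 2)

n = len(sources)
res = minimize(objective, np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints={"type": "eq", "fun": lambda p: p.sum() - 1.0})
print("estimated source proportions:", np.round(res.x, 3))
```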
The galaxy-subhalo connection in low-redshift galaxy clusters from weak gravitational lensing
NASA Astrophysics Data System (ADS)
Sifón, Cristóbal; Herbonnet, Ricardo; Hoekstra, Henk; van der Burg, Remco F. J.; Viola, Massimo
2018-07-01
We measure the gravitational lensing signal around satellite galaxies in a sample of galaxy clusters at z < 0.15 by combining high-quality imaging data from the Canada-France-Hawaii Telescope with a large sample of spectroscopically confirmed cluster members. We use extensive image simulations to assess the accuracy of shape measurements of faint, background sources in the vicinity of bright satellite galaxies. We find a small but significant bias, as light from the lenses makes the shapes of background galaxies appear radially aligned with the lens. We account for this bias by applying a correction that depends on both lens size and magnitude. We also correct for contamination of the source sample by cluster members. We use a physically motivated definition of subhalo mass, namely the mass bound to the subhalo, mbg, similar to definitions used by common subhalo finders in numerical simulations. Binning the satellites by stellar mass we provide a direct measurement of the subhalo-to-stellar-mass relation, log mbg/M⊙ = (11.54 ± 0.05) + (0.95 ± 0.10)log [m⋆/(2 × 1010 M⊙)]. This best-fitting relation implies that, at a stellar mass m⋆ ˜ 3 × 1010 M⊙, subhalo masses are roughly 50 per cent of those of central galaxies, and this fraction decreases at higher stellar masses. We find some evidence for a sharp change in the total-to-stellar mass ratio around the clusters' scale radius, which could be interpreted as galaxies within the scale radius having suffered more strongly from tidal stripping, but remain cautious regarding this interpretation.
Frömke, Cornelia; Hothorn, Ludwig A; Kropf, Siegfried
2008-01-27
In many research areas it is necessary to find differences between treatment groups with respect to several variables. For example, studies of microarray data seek, for each variable, a significant difference in location parameters from zero, or from one for ratios. However, in some studies a significant deviation of the difference in locations from zero (or of the ratio from 1) is biologically meaningless. A relevant difference or ratio is sought in such cases. This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered; hence, the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses that achieve exact control of the familywise error rate. Whereas the shift of the null hypothesis alone would give straightforward solutions, the difficulties motivating the empirical considerations discussed here arise from the fact that the shift is considered in both directions and the whole parameter space between these two limits has to be accepted as the null hypothesis. The first procedure uses a permutation algorithm and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes; for these, the second procedure, in which multiplicity is corrected according to a concept of data-driven ordering of hypotheses, may be more appropriate.
Wang, San-Yuan; Kuo, Ching-Hua; Tseng, Yufeng J
2015-03-03
Able to detect known and unknown metabolites, untargeted metabolomics has shown great potential in identifying novel biomarkers. However, elucidating all possible liquid chromatography/time-of-flight mass spectrometry (LC/TOF-MS) ion signals in a complex biological sample remains challenging, since many ions are not the products of metabolites. Methods that reduce the number of ions not related to metabolites, or that directly detect metabolite-related (pure) ions, are therefore important. In this work, we describe PITracer, a novel algorithm that accurately detects the pure ions of an LC/TOF-MS profile to extract pure ion chromatograms and detect chromatographic peaks. PITracer estimates the relative mass difference tolerance of ions and calibrates the mass over charge (m/z) values for the peak detection algorithms, with an additional option of further mass correction with respect to a user-specified metabolite. PITracer was evaluated using two data sets containing 373 human metabolite standards, including 5 saturated standards prone to split peaks resulting from large m/z fluctuations, and 12 urine samples spiked with 50 forensic drugs of varying concentrations. Analysis of these data sets shows that PITracer outperformed an existing state-of-the-art algorithm, extracted the pure ion chromatograms of the 5 saturated standards without generating split peaks, and detected the forensic drugs with high recall, precision, and F-score and small mass error.
The lick-index calibration of the Gemini multi-object spectrographs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puzia, Thomas H.; Miller, Bryan W.; Trancho, Gelys
2013-06-01
We present the calibration of the spectroscopic Lick/IDS standard line-index system for measurements obtained with the Gemini Multi-Object Spectrographs known as GMOS-North and GMOS-South. We provide linear correction functions for each of the 25 standard Lick line indices for the B600 grism and two instrumental setups, one with 0.5'' slit width and 1 × 1 CCD pixel binning (corresponding to ∼2.5 Å spectral resolution) and the other with 0.75'' slit width and 2 × 2 binning (∼4 Å). We find small and well-defined correction terms for the set of Balmer indices Hβ, HγA, and HδA along with the metallicity-sensitive indices Fe5015, Fe5270, Fe5335, Fe5406, Mg2, and Mgb that are widely used for stellar population diagnostics of distant stellar systems. We find other indices that sample molecular absorption bands, such as TiO1 and TiO2, with very wide wavelength coverage, or indices that sample very weak molecular and atomic absorption features, such as Mg1, as well as indices with particularly narrow passband definitions, such as Fe4384, Ca4455, Fe4531, Ca4227, and Fe5782, which are less robustly calibrated. These indices should be used with caution.
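A minimal sketch of how per-index linear correction functions of the kind derived above would be applied: I_Lick = a + b * I_GMOS. The coefficients below are placeholders for illustration, not the published calibration terms.

```python
# Hypothetical (a, b) terms per index; substitute the published values.
correction_terms = {
    "Hbeta":  (0.05, 0.98),
    "Mgb":    (-0.10, 1.02),
    "Fe5270": (0.02, 1.01),
}

def to_lick_system(index_name, gmos_value):
    """Transform a raw GMOS index measurement onto the Lick/IDS system."""
    a, b = correction_terms[index_name]
    return a + b * gmos_value

print(round(to_lick_system("Hbeta", 2.41), 3))
```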
NASA Technical Reports Server (NTRS)
Myint, S. W.; Walker, N. D.
2002-01-01
The ability to quantify suspended sediment concentrations accurately over both time and space using satellite data has been a goal of many environmental researchers over the past few decades. This study utilizes data acquired by the NOAA Advanced Very High Resolution Radiometer (AVHRR) and the Orbview-2 Sea-viewing Wide Field-of-view (SeaWiFS) ocean colour sensor, coupled with field measurements, to develop statistical models for the estimation of near-surface suspended sediment and suspended solids. "Ground truth" water samples were obtained via helicopter, small boat and automatic water sampler within a few hours of satellite overpasses. The NOAA AVHRR atmospheric correction was modified for the high levels of turbidity along the Louisiana coast. Models were developed based on the field measurements and reflectance/radiance measurements in the visible and near-infrared channels of NOAA-14 and Orbview-2 SeaWiFS. The best models for predicting surface suspended sediment concentrations were obtained with a NOAA AVHRR Channel 1 (580-680 nm) cubic model, Channel 2 (725-1100 nm) linear model and SeaWiFS Channel 6 (660-680 nm) power model. The suspended sediment models developed using SeaWiFS Channel 5 (545-565 nm) were inferior, a result that we attribute mainly to the atmospheric correction technique, the shallow depth of the water samples and absorption effects from non-sediment water constituents.
Newsome, Mary R; Scheibel, Randall S; Mayer, Andrew R; Chu, Zili D; Wilde, Elisabeth A; Hanten, Gerri; Steinberg, Joel L; Lin, Xiaodi; Li, Xiaoqi; Merkley, Tricia L; Hunter, Jill V; Vasquez, Ana C; Cook, Lori; Lu, Hanzhang; Vinton, Kami; Levin, Harvey S
2013-09-01
Outcome of moderate to severe traumatic brain injury (TBI) includes impaired emotion regulation. Emotion regulation has been associated with the amygdala and the rostral anterior cingulate cortex (rACC). However, functional connectivity between the two structures after injury has not been reported. A preliminary examination of the functional connectivity of the rACC and right amygdala was conducted in adolescents 2 to 3 years after moderate to severe TBI and in typically developing (TD) control adolescents, with the hypothesis that the TBI adolescents would demonstrate altered functional connectivity in the two regions. Functional connectivity was determined by correlating fluctuations in the blood oxygen level dependent (BOLD) signal of the rACC and right amygdala with those of other brain regions. In the TBI adolescents, the rACC was found to be significantly less functionally connected to medial prefrontal cortices and to right temporal regions near the amygdala (height threshold T = 2.5, cluster-level p < .05, FDR corrected), while the right amygdala showed a trend toward reduced functional connectivity with the rACC (height threshold T = 2.5, cluster-level p = .06, FDR corrected). The data suggest disrupted functional connectivity in emotion regulation regions. Limitations include the small sample sizes. Studies with larger sample sizes are necessary to characterize the persistent neural damage resulting from moderate to severe TBI during development.
NASA Astrophysics Data System (ADS)
Vagnetti, F.; Middei, R.; Antonucci, M.; Paolillo, M.; Serafinelli, R.
2016-09-01
Context. Most investigations of the X-ray variability of active galactic nuclei (AGN) have concentrated on detailed analyses of individual, nearby sources. A relatively small number of studies have treated the ensemble behaviour of the more general AGN population in wider regions of the luminosity-redshift plane. Aims: We want to determine the ensemble variability properties of a rich AGN sample, called the Multi-Epoch XMM Serendipitous AGN Sample (MEXSAS), extracted from the fifth release of the XMM-Newton Serendipitous Source Catalogue (XMMSSC-DR5), with redshifts between ~0.1 and ~5 and X-ray luminosities in the 0.5-4.5 keV band between ~10^42 erg/s and ~10^47 erg/s. Methods: We urge caution in the use of the normalised excess variance (NXS), noting that it may lead to an underestimate of variability if used improperly. We use the structure function (SF), updating our previous analysis for a smaller sample. We propose a correction to the NXS variability estimator that takes into account the duration of the light curve in the rest frame, on the basis of the knowledge of the variability behaviour gained from SF studies. Results: We find an ensemble increase of the X-ray variability with the rest-frame time lag τ, given by SF ∝ τ^0.12. We confirm an inverse dependence on the X-ray luminosity, approximately as SF ∝ L_X^-0.19. We analyse the SF in different X-ray bands, finding a dependence of the variability on frequency as SF ∝ ν^-0.15, corresponding to a so-called softer-when-brighter trend. In turn, this dependence allows us to parametrically correct the variability estimated in observer-frame bands to the rest frame, resulting in a moderate (≲15%) shift upwards (V-correction). Conclusions: The ensemble X-ray variability of AGNs is best described by the structure function. An improper use of the normalised excess variance may lead to an underestimate of the intrinsic variability, so appropriate corrections to the data or the models must be applied to prevent these effects. Full Table 1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/593/A55
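A hedged sketch of an ensemble structure function: pool every epoch pair from every source, convert time lags to the rest frame, bin them, and average the squared magnitude differences. This simplified version omits the subtraction of the photometric noise term that a production analysis would include.

```python
import numpy as np

def ensemble_sf(light_curves, bins):
    """light_curves: list of (times_days, mags, redshift); bins: lag edges (days)."""
    lags, dm2 = [], []
    for t, m, z in light_curves:
        t, m = np.asarray(t, float), np.asarray(m, float)
        i, j = np.triu_indices(len(t), k=1)       # all epoch pairs
        lags.append((t[j] - t[i]) / (1.0 + z))    # rest-frame lags
        dm2.append((m[j] - m[i]) ** 2)
    lags, dm2 = np.concatenate(lags), np.concatenate(dm2)
    which = np.digitize(lags, bins)
    sf = [np.sqrt(dm2[which == k].mean()) if np.any(which == k) else np.nan
          for k in range(1, len(bins))]
    return np.array(sf)

# Toy usage: two sources with a handful of epochs each.
curves = [([0, 30, 200, 700], [15.2, 15.3, 15.1, 15.6], 1.2),
          ([0, 90, 400], [16.0, 16.2, 15.8], 0.4)]
print(ensemble_sf(curves, bins=np.array([1, 50, 200, 1000])))
```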
Relativistic Corrections to the Properties of the Alkali Fluorides
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.; Partridge, Harry
1993-01-01
Relativistic corrections to the bond lengths, dissociation energies and harmonic frequencies of KF, RbF and CsF have been obtained at the self-consistent field level by dissociating to ions. The relativistic corrections to the bond lengths, harmonic frequencies and dissociation energies to the ions are very small, due to the ionic nature of these molecules and the similarity of the relativistic and nonrelativistic ionic radii.
76 FR 78182 - Application of the Segregation Rules to Small Shareholders; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-16
... CONTACT: Concerning the proposed regulations, Stephen R. Cleary, (202) 622-7750 (not a toll-free number... ``regard to Sec. 1.382-2T(h)(i)(A)) or a first'' is corrected to read ``regard to Sec. 1.382-2T(h)(2)(i)(A.... Clarification of Sec. 1.382-2T(j)(3)'', last line of the paragraph, the language ``2T(h)(i)(A).'' is corrected...
BLIND ordering of large-scale transcriptomic developmental timecourses.
Anavy, Leon; Levin, Michal; Khair, Sally; Nakanishi, Nagayasu; Fernandez-Valverde, Selene L; Degnan, Bernard M; Yanai, Itai
2014-03-01
RNA-Seq enables the efficient transcriptome sequencing of many samples from small amounts of material, but the analysis of these data remains challenging. In particular, in developmental studies, RNA-Seq is challenged by the morphological staging of samples, such as embryos, since these often lack clear markers at any particular stage. In such cases, the automatic identification of the stage of a sample would enable previously infeasible experimental designs. Here we present the 'basic linear index determination of transcriptomes' (BLIND) method for ordering samples comprising different developmental stages. The method is an implementation of a traveling salesman algorithm to order the transcriptomes according to their inter-relationships as defined by principal components analysis. To establish the direction of the ordered samples, we show that an appropriate indicator is the entropy of transcriptomic gene expression levels, which increases over developmental time. Using BLIND, we correctly recover the annotated order of previously published embryonic transcriptomic timecourses for frog, mosquito, fly and zebrafish. We further demonstrate the efficacy of BLIND by collecting 59 embryos of the sponge Amphimedon queenslandica and ordering their transcriptomes according to developmental stage. BLIND is thus useful in establishing the temporal order of samples within large datasets and is of particular relevance to the study of organisms with asynchronous development and when morphological staging is difficult.
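A hedged sketch of the BLIND idea: embed the sample transcriptomes with PCA, order them with a greedy traveling-salesman heuristic, and orient the path so that expression entropy increases through developmental time. The greedy nearest-neighbour tour, the three-component embedding, and the random demo data are simplifications; the published method may differ in detail.

```python
import numpy as np

def blind_order(expr):
    """expr: (samples, genes) non-negative expression matrix. Returns an order."""
    X = expr - expr.mean(axis=0)
    u, s, _ = np.linalg.svd(X, full_matrices=False)
    pcs = u[:, :3] * s[:3]                         # PCA scores

    # Greedy nearest-neighbour tour through PCA space (a TSP heuristic).
    n = len(pcs)
    unvisited, path = set(range(1, n)), [0]
    while unvisited:
        last = pcs[path[-1]]
        nxt = min(unvisited, key=lambda k: np.linalg.norm(pcs[k] - last))
        path.append(nxt)
        unvisited.remove(nxt)

    # Orient the path: entropy of expression proportions rises with time.
    p = expr / expr.sum(axis=1, keepdims=True)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    if entropy[path[0]] > entropy[path[-1]]:
        path.reverse()
    return path

expr = np.abs(np.random.default_rng(3).normal(1.0, 0.5, (10, 200)))
print(blind_order(expr))
```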
Kobashi, Hidenaga; Kamiya, Kazutaka; Ali, Mohamed A.; Igarashi, Akihito; Elewa, Mohamed Ehab M.; Shimizu, Kimiya
2015-01-01
Purpose: To compare postoperative astigmatic correction between femtosecond lenticule extraction (FLEx) and small-incision lenticule extraction (SMILE) in eyes with myopic astigmatism. Methods: We examined 26 eyes of 26 patients undergoing FLEx and 26 eyes of 26 patients undergoing SMILE to correct myopic astigmatism (manifest astigmatism of 1 diopter (D) or more). Visual acuity, cylindrical refraction, the predictability of the astigmatic correction, and the astigmatic vector components according to the Alpins method were compared between the two groups 3 months postoperatively. Results: We found no statistically significant difference in manifest cylindrical refraction (p=0.74) or in the percentage of eyes within ±0.50 D of their refraction (p=0.47) after the two surgical procedures. Moreover, no statistically significant difference was detected between the groups in the astigmatic vector components, namely, surgically induced astigmatism (p=0.80), target induced astigmatism (p=0.87), astigmatic correction index (p=0.77), angle of error (p=0.24), difference vector (p=0.76), index of success (p=0.91), flattening effect (p=0.79), and flattening index (p=0.84). Conclusions: Both FLEx and SMILE are essentially equivalent in correcting myopic astigmatism according to vector analysis, suggesting that the lifting or non-lifting of the flap does not significantly affect astigmatic outcomes after these surgical procedures. PMID:25849381
Measurement-free implementations of small-scale surface codes for quantum-dot qubits
NASA Astrophysics Data System (ADS)
Ercan, H. Ekmel; Ghosh, Joydip; Crow, Daniel; Premakumar, Vickram N.; Joynt, Robert; Friesen, Mark; Coppersmith, S. N.
2018-01-01
The performance of quantum-error-correction schemes depends sensitively on the physical realizations of the qubits and the implementations of various operations. For example, in quantum-dot spin qubits, readout is typically much slower than gate operations, and conventional surface-code implementations that rely heavily on syndrome measurements could therefore be challenging. However, fast and accurate reset of quantum-dot qubits, without readout, can be achieved via tunneling to a reservoir. Here we propose small-scale surface-code implementations for which syndrome measurements are replaced by a combination of Toffoli gates and qubit reset. For quantum-dot qubits, this enables much faster error correction than measurement-based schemes, but requires additional ancilla qubits and non-nearest-neighbor interactions. We have performed numerical simulations of two different coding schemes, obtaining error thresholds on the order of 10^-2 for a one-dimensional architecture that corrects only bit-flip errors and 10^-4 for a two-dimensional architecture that corrects bit- and phase-flip errors.
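A classical toy model of the behaviour the one-dimensional architecture targets: a distance-3 repetition (bit-flip) code whose majority-vote correction is the logical operation a measurement-free Toffoli-plus-reset circuit effects on the code space. This Monte Carlo ignores circuit noise, ancillas, and phase errors entirely, so it only illustrates how correction suppresses the logical error rate (~3p^2 for small p), not the authors' simulated thresholds.

```python
import numpy as np

def logical_error_rate(p, trials=200_000, d=3, seed=7):
    """Fraction of trials in which majority voting fails to correct d qubits."""
    rng = np.random.default_rng(seed)
    flips = rng.random((trials, d)) < p        # independent bit-flip errors
    logical_flip = flips.sum(axis=1) > d // 2  # majority corrupted -> failure
    return logical_flip.mean()

for p in (0.01, 0.05, 0.1):
    print(p, logical_error_rate(p))
```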
Brady, Amie M. G.; Plona, Meg B.
2015-07-30
A computer program was developed to manage the nowcasts by running the predictive models and posting the results to a publicly accessible Web site daily by 9 a.m. The nowcasts were able to correctly predict E. coli concentrations above or below the water-quality standard at Jaite for 79 percent of the samples compared with the measured concentrations. In comparison, the persistence model (using the previous day’s sample concentration) correctly predicted concentrations above or below the water-quality standard in only 68 percent of the samples. To determine if the Jaite nowcast could be used for the stretch of the river between Lock 29 and Jaite, the model predictions for Jaite were compared with the measured concentrations at Lock 29. The Jaite nowcast provided correct responses for 77 percent of the Lock 29 samples, which was a greater percentage than the percentage of correct responses (58 percent) from the persistence model at Lock 29.
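A small sketch of the two evaluations reported above: the fraction of days on which a prediction correctly classifies the measured E. coli concentration relative to the water-quality standard, for both the nowcast and the persistence model (yesterday's measurement). The standard value and all concentrations below are invented for illustration.

```python
import numpy as np

STANDARD = 235.0  # illustrative E. coli standard, CFU/100 mL

def pct_correct(predicted, measured, threshold=STANDARD):
    """Percent of days where the exceedance classification matches."""
    return 100.0 * np.mean((predicted > threshold) == (measured > threshold))

measured = np.array([120.0, 340.0, 80.0, 510.0, 150.0, 260.0, 90.0])
nowcast = np.array([100.0, 300.0, 140.0, 480.0, 200.0, 180.0, 70.0])
persistence = np.concatenate([[measured[0]], measured[:-1]])  # yesterday's value

print(f"nowcast:     {pct_correct(nowcast, measured):.0f}% correct")
print(f"persistence: {pct_correct(persistence, measured):.0f}% correct")
```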
NASA Astrophysics Data System (ADS)
Jenk, Theo Manuel; Rubino, Mauro; Etheridge, David; Ciobanu, Viorela Gabriela; Blunier, Thomas
2016-08-01
Palaeoatmospheric records of carbon dioxide and its stable carbon isotope composition (δ13C) obtained from polar ice cores provide important constraints on the natural variability of the carbon cycle. However, the measurements are both analytically challenging and time-consuming; thus data exist from only a limited number of sampling sites and time periods. Additional analytical resources with high analytical precision and throughput are therefore desirable to extend the existing datasets. Moreover, consistent measurements derived by independent laboratories and a variety of analytical systems help to further increase confidence in the global CO2 palaeo-reconstructions. Here, we describe our new set-up for simultaneous measurements of atmospheric CO2 mixing ratios and atmospheric δ13C and δ18O-CO2 in air extracted from ice core samples. The centrepiece of the system is a newly designed needle cracker for the mechanical release of air entrapped in ice core samples of 8-13 g, operated at -45 °C. The small sample size allows for high-resolution and replicate sampling schemes. In our method, CO2 is cryogenically and chromatographically separated from the bulk air and its isotopic composition subsequently determined by continuous-flow isotope ratio mass spectrometry (IRMS). In combination with a thermal conductivity measurement of the bulk air, the CO2 mixing ratio is calculated. The analytical precision determined from standard air sample measurements over ice is ±1.9 ppm for CO2 and ±0.09‰ for δ13C. In a laboratory intercomparison study with CSIRO (Aspendale, Australia), good agreement between CO2 and δ13C results is found for Law Dome ice core samples. Replicate analysis of these samples resulted in a pooled standard deviation of 2.0 ppm for CO2 and 0.11‰ for δ13C. These numbers are good, though rather conservative, estimates of the overall analytical precision achieved for single ice sample measurements. Facilitated by the small sample requirement, replicate measurements are feasible, potentially allowing the method precision to be improved further. In addition, new analytical approaches are introduced for the accurate correction of the procedural blank and for a consistent detection of measurement outliers, based on δ18O-CO2 and the exchange of oxygen between CO2 and the surrounding ice (H2O).
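A hedged sketch of a procedural-blank correction by isotopic mass balance: the measured gas is treated as a two-component mixture of sample air and a characterized blank, here with the blank expressed as a ppm-equivalent of the extracted air. The blank size and composition are illustrative assumptions, not the values determined for this system.

```python
def blank_corrected(c_meas_ppm, d13c_meas, c_blank_ppm, d13c_blank):
    """Remove a procedural blank from a CO2 mixing ratio and its d13C (permil)."""
    c_sample = c_meas_ppm - c_blank_ppm
    # Isotope mass balance: measured delta is the amount-weighted mixture.
    d13c_sample = (d13c_meas * c_meas_ppm - d13c_blank * c_blank_ppm) / c_sample
    return c_sample, d13c_sample

# Example: a small, isotopically light blank shifts d13C noticeably.
print(blank_corrected(282.4, -6.35, 1.8, -30.0))
```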
Method and Apparatus for Measuring Thermal Conductivity of Small, Highly Insulating Specimens
NASA Technical Reports Server (NTRS)
Miller, Robert A (Inventor); Kuczmarski, Maria A (Inventor)
2013-01-01
A method and apparatus for the measurement of thermal conductivity combines the following capabilities: 1) measurements of very small specimens; 2) measurements of specimens with thermal conductivity of the same order as that of air; and 3) the ability to use air as a reference material. Care is taken to ensure that the heat flow through the test specimen is essentially one-dimensional. No attempt is made to use heated guards to minimize the flow of heat from the hot plate to the surroundings. Results indicate that since large correction factors must be applied to account for guard imperfections when specimen dimensions are small, simply measuring and correcting for heat from the heater disc that does not flow into the specimen is preferable.
Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele
2016-12-07
Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact, at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated with nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while simultaneously correcting for finite-size effects. We demonstrate our approach by studying the condensation of argon, and showing that characteristic nucleation times on the order of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.
NASA Astrophysics Data System (ADS)
Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele
2016-12-01
Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact, at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated with nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while simultaneously correcting for finite-size effects. We demonstrate our approach by studying the condensation of argon, and showing that characteristic nucleation times on the order of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.
NASA Technical Reports Server (NTRS)
Nechyba, Michael C.; Ettinger, Scott M.; Ifju, Peter G.; Wazak, Martin
2002-01-01
Recently, substantial progress has been made towards designing, building, and test-flying remotely piloted Micro Air Vehicles (MAVs). This progress in overcoming the aerodynamic obstacles to flight at very small scales has, unfortunately, not been matched by similar progress in autonomous MAV flight. Thus, we propose a robust, vision-based horizon detection algorithm as the first step towards autonomous MAVs. In this paper, we first motivate the use of computer vision for the horizon detection task by examining the flight of birds (biological MAVs) and considering other practical factors. We then describe our vision-based horizon detection algorithm, which has been demonstrated at 30 Hz with over 99.9% correct horizon identification, over terrain that includes roads, buildings large and small, meadows, wooded areas, and a lake. We conclude with some sample horizon detection results and preview a companion paper, where the work discussed here forms the core of a complete autonomous flight stability system.
A novel method for correcting scanline-observational bias of discontinuity orientation
Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong
2016-01-01
Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) correcting the cumulative probabilities of orientation according to the solutions, and 2) determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rock mass stability evaluation, rock mass permeability analysis, rock mass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249
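The paper's correction operates on full orientation distributions, but the classical device for the same bias, Terzaghi weighting, conveys the core idea in a few lines: a discontinuity whose plane lies nearly parallel to the scanline is undersampled in proportion to the cosine of the angle between its pole and the scanline, so each observation is upweighted accordingly. A hedged sketch of that simpler precursor (the paper's own method, with its derived numerical solutions and K-S fitting step, is not reproduced here):

    import numpy as np

    def terzaghi_weights(poles, scanline, max_weight=10.0):
        """Weights ~ 1/|cos(theta)| between each discontinuity pole
        (unit normal) and the scanline direction; capped for stability."""
        poles = np.asarray(poles, dtype=float)
        scanline = np.asarray(scanline, dtype=float)
        scanline = scanline / np.linalg.norm(scanline)
        cos_theta = np.abs(poles @ scanline)
        return np.minimum(1.0 / np.maximum(cos_theta, 1e-6), max_weight)

    # Scanline along x: a pole nearly parallel to the scanline (plane well
    # sampled) gets weight ~1; a nearly perpendicular pole is upweighted.
    poles = [[0.99, 0.1, 0.1], [0.1, 0.99, 0.1]]
    print(terzaghi_weights(poles, [1.0, 0.0, 0.0]))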
Tie Points Extraction for SAR Images Based on Differential Constraints
NASA Astrophysics Data System (ADS)
Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.
2018-04-01
Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TPs extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in the azimuth direction and large in the range direction. Image pyramids are built first, and then corresponding layers of the pyramids are matched from the top to the bottom. In the process, the similarity is measured by the normalized cross correlation (NCC) algorithm, which is calculated from a rectangular window with the long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which appends strong constraints in the azimuth direction and weak constraints in the range direction. Matching points in the lower pyramid images are predicted with the local bilinear transformation model in the range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.
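Normalized cross correlation over a rectangular window is the similarity measure at the heart of the matching step. A minimal numpy sketch of NCC between an image patch and a template (window shape and data are placeholders; the method above additionally orients the long side of the window along the azimuth direction):

    import numpy as np

    def ncc(patch, template):
        """Normalized cross correlation of two equally sized windows,
        in [-1, 1]; 1 means a perfect match up to gain and offset."""
        p = patch - patch.mean()
        t = template - template.mean()
        denom = np.sqrt((p**2).sum() * (t**2).sum())
        return float((p * t).sum() / denom) if denom > 0 else 0.0

    rng = np.random.default_rng(0)
    t = rng.standard_normal((9, 21))     # long side ~ azimuth direction
    print(ncc(2.0 * t + 3.0, t))         # -> 1.0, invariant to gain/offset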
Statistical tests and identifiability conditions for pooling and analyzing multisite datasets
Zhou, Hao Henry; Singh, Vikas; Johnson, Sterling C.; Wahba, Grace
2018-01-01
When sample sizes are small, the ability to identify weak (but scientifically interesting) associations between a set of predictors and a response may be enhanced by pooling existing datasets. However, variations in acquisition methods and the distribution of participants or observations between datasets, especially due to the distributional shifts in some predictors, may obfuscate real effects when datasets are combined. We present a rigorous statistical treatment of this problem and identify conditions where we can correct the distributional shift. We also provide an algorithm for the situation where the correction is identifiable. We analyze various properties of the framework for testing model fit, constructing confidence intervals, and evaluating consistency characteristics. Our technical development is motivated by Alzheimer’s disease (AD) studies, and we present empirical results showing that our framework enables harmonizing of protein biomarkers, even when the assays across sites differ. Our contribution may, in part, mitigate a bottleneck that researchers face in clinical research when pooling smaller sized datasets and may offer benefits when the subjects of interest are difficult to recruit or when resources prohibit large single-site studies. PMID:29386387
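A much simpler classical relative of the pooling question studied here is the Chow test: fit one linear model to the pooled data and separate models per site, then test whether the separate fits reduce the residual sum of squares by more than chance would allow. A sketch for two simulated sites (unlike the paper's framework, this test does not model or correct distributional shift):

    import numpy as np
    from scipy import stats

    def chow_test(X1, y1, X2, y2):
        """F-test of 'one shared linear model' vs 'one model per site'."""
        def rss(X, y):
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            r = y - X @ beta
            return float(r @ r)
        Xp, yp = np.vstack([X1, X2]), np.concatenate([y1, y2])
        k = X1.shape[1]
        rss_pooled, rss_sep = rss(Xp, yp), rss(X1, y1) + rss(X2, y2)
        df2 = len(yp) - 2 * k
        F = ((rss_pooled - rss_sep) / k) / (rss_sep / df2)
        return F, stats.f.sf(F, k, df2)

    rng = np.random.default_rng(1)
    X1 = np.column_stack([np.ones(60), rng.standard_normal(60)])
    X2 = np.column_stack([np.ones(60), rng.standard_normal(60)])
    y1 = X1 @ [1.0, 2.0] + 0.5 * rng.standard_normal(60)
    y2 = X2 @ [1.0, 2.5] + 0.5 * rng.standard_normal(60)  # shifted slope
    print(chow_test(X1, y1, X2, y2))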
Chung, W Joon; Goeckeler-Fried, Jennifer L; Havasi, Viktoria; Chiang, Annette; Rowe, Steven M; Plyler, Zackery E; Hong, Jeong S; Mazur, Marina; Piazza, Gary A; Keeton, Adam B; White, E Lucile; Rasmussen, Lynn; Weissman, Allan M; Denny, R Aldrin; Brodsky, Jeffrey L; Sorscher, Eric J
2016-01-01
Small molecules that correct the folding defects and enhance surface localization of the F508del mutation in the Cystic Fibrosis Transmembrane conductance Regulator (CFTR) comprise an important therapeutic strategy for cystic fibrosis lung disease. However, compounds that rescue the F508del mutant protein to wild type (WT) levels have not been identified. In this report, we consider obstacles to obtaining robust and therapeutically relevant levels of F508del CFTR. For example, markedly diminished steady state amounts of F508del CFTR compared to WT CFTR are present in recombinant bronchial epithelial cell lines, even when much higher levels of mutant transcript are present. In human primary airway cells, the paucity of Band B F508del is even more pronounced, although F508del and WT mRNA concentrations are comparable. Therefore, to augment levels of "repairable" F508del CFTR and identify small molecules that then correct this pool, we developed compound library screening protocols based on automated protein detection. First, cell-based imaging measurements were used to semi-quantitatively estimate distribution of F508del CFTR by high content analysis of two-dimensional images. We evaluated ~2,000 known bioactive compounds from the NIH Roadmap Molecular Libraries Small Molecule Repository in a pilot screen and identified agents that increase the F508del protein pool. Second, we analyzed ~10,000 compounds representing diverse chemical scaffolds for effects on total CFTR expression using a multi-plate fluorescence protocol and describe compounds that promote F508del maturation. Together, our findings demonstrate proof of principle that agents identified in this fashion can augment the level of endoplasmic reticulum (ER) resident "Band B" F508del CFTR suitable for pharmacologic correction. As further evidence in support of this strategy, PYR-41, a compound that inhibits the E1 ubiquitin-activating enzyme, was shown to synergistically enhance F508del rescue by C18, a small molecule corrector. Our combined results indicate that increasing the levels of ER-localized CFTR available for repair provides a novel route to correct F508del CFTR.
ERIC Educational Resources Information Center
Garcia, Andres; Benjumea, Santiago
2006-01-01
In Experiment 1, 10 pigeons were exposed to a successive symbolic matching-to-sample procedure in which the sample was generated by the pigeons' own behavior. Each trial began with both response keys illuminated white, one being the "correct" key and the other the "incorrect" key. The pigeons had no way of discriminating which key was correct and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benmakhlouf, H; Andreo, P; Brualla, L
2016-06-15
Purpose: To calculate output correction factors for Varian Clinac 2100iX beams for seven small field detectors and use the values to determine the small field output factors for the linacs at Karolinska university hospital. Methods: Phase space files (psf) for square fields between 0.25cm and 10cm were calculated using the PENELOPE-based PRIMO software. The linac MC-model was tuned by comparing PRIMO-estimated and experimentally determined depth doses and lateral dose-profiles for 40cmx40cm fields. The calculated psf were used as radiation sources to calculate the correction factors of IBA and PTW detectors with the code penEasy/PENELOPE. Results: The optimal tuning parameters of the MC linac model in PRIMO were 5.4 MeV incident electron energy and zero energy spread, focal spot size and beam divergence. Correction factors obtained for the liquid ion chamber (PTW-T31018) are within 1% down to 0.5 cm fields. For unshielded diodes (IBA-EFD, IBA-SFD, PTW-T60017 and PTW-T60018) the corrections are up to 2% at intermediate fields (>1cm side), falling to −11% for fields smaller than 1cm. The shielded diodes (IBA-PFD and PTW-T60016) have corrections varying with field size from 0 to −4%. Volume averaging effects are found for most detectors in the presence of 0.25cm fields. Conclusion: Good agreement was found between correction factors based on PRIMO-generated psf and those from other publications. The calculated factors will be implemented in output factor measurements (using several detectors) in the clinic. PRIMO is a user-friendly general code capable of generating small field psf and can be used without having to code one's own linac geometry. It can therefore be used to improve the clinical dosimetry, especially in the commissioning of linear accelerators. Important dosimetry data, such as dose-profiles and output factors, can be determined more accurately for a specific machine, geometry and setup by using PRIMO and an MC model of the detector used.
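The output correction factor in the Alfonso formalism reduces, for each field, to a ratio of ratios: the dose-to-water ratio between the clinical field and the machine-specific reference field, divided by the corresponding detector reading ratio. A worked sketch with made-up numbers:

    # Alfonso et al. output correction factor (illustrative numbers only):
    # k = (Dw_clin / Dw_msr) / (M_clin / M_msr)
    Dw_clin, Dw_msr = 0.672, 1.000   # MC dose to water, 0.5 cm vs 10 cm field
    M_clin, M_msr = 0.700, 1.000     # detector readings in the same fields

    k = (Dw_clin / Dw_msr) / (M_clin / M_msr)
    print(f"k = {k:.3f}")  # < 1: the detector over-responds in the small field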
Buoyancy-corrected gravimetric analysis of lightly loaded filters.
Rasmussen, Pat E; Gardner, H David; Niu, Jianjun
2010-09-01
Numerous sources of uncertainty are associated with the gravimetric analysis of lightly loaded air filter samples (< 100 microg). The purpose of the study presented here is to investigate the effectiveness and limitations of air buoyancy corrections over experimentally adjusted conditions of temperature (21-25 degrees C) and relative humidity (RH) (16-60% RH). Conditioning (24 hr) and weighing were performed inside the Archimedes M3 environmentally controlled chamber. The measurements were performed using 20 size-fractionated samples of resuspended house dust loaded onto Teflo (PTFE) filters using a Micro-Orifice Uniform Deposit Impactor representing a wide range of mass loading (7.2-3130 microg) and cut sizes (0.056-9.9 microm). By maintaining tight controls on humidity (within 0.5% RH of control setting) throughout pre- and postweighing at each stepwise increase in RH, it was possible to quantify error due to water absorption: 45% of the total mass change due to water absorption occurred between 16 and 50% RH, and 55% occurred between 50 and 60% RH. The buoyancy corrections ranged from -3.5 to +5.8 microg in magnitude and improved relative standard deviation (RSD) from 21.3% (uncorrected) to 5.6% (corrected) for a 7.2 microg sample. It is recommended that protocols for weighing low-mass particle samples (e.g., nanoparticle samples) should include buoyancy corrections and tight temperature/humidity controls. In some cases, conditioning times longer than 24 hr may be warranted.
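The buoyancy correction itself is standard gravimetry: both the filter and the (effectively steel) calibration weights displace air, and the imbalance between the two displaced air masses biases the reading. A sketch of the conventional correction with a simplified moist-air density formula (densities and conditions below are illustrative assumptions, not the study's values):

    import math

    def air_density(t_c, rh_pct, p_pa=101325.0):
        """Approximate moist-air density (kg/m^3); a simplified formula,
        adequate for illustration but not for metrology."""
        p_sat = 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))  # Magnus, Pa
        p_v = rh_pct / 100.0 * p_sat
        t_k = t_c + 273.15
        return (p_pa - p_v) / (287.05 * t_k) + p_v / (461.5 * t_k)

    def buoyancy_corrected_mass(m_reading, rho_air, rho_sample=2200.0,
                                rho_weights=8000.0):
        """Conventional buoyancy correction: the reading is exact only when
        the sample and the calibration weights displace equal air masses."""
        return m_reading * (1 - rho_air / rho_weights) / (1 - rho_air / rho_sample)

    # A hypothetical 100 mg loaded filter weighed at 23 degC / 40% RH; in
    # difference weighings, the residual few-microgram corrections are driven
    # mostly by the change in air density between weighing sessions.
    rho = air_density(23.0, 40.0)
    print(rho, buoyancy_corrected_mass(100_000.0, rho))  # micrograms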
Braaf, Boy; Donner, Sabine; Nam, Ahhyun S.; Bouma, Brett E.; Vakoc, Benjamin J.
2018-01-01
Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase-instabilities of the imaging system and small-scale axial bulk motion. However, like all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise-bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias corrected CDV algorithm was implemented into an experimental 1 μm wavelength OCT system for retinal imaging that used an eye tracking scanner laser ophthalmoscope at 815 nm for compensation of lateral eye motions. The noise-bias correction improved the CDV imaging of the blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High quality cross-section and motion-corrected en face angiograms of the retina and choroid are presented. PMID:29552388
Braaf, Boy; Donner, Sabine; Nam, Ahhyun S; Bouma, Brett E; Vakoc, Benjamin J
2018-02-01
Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase-instabilities of the imaging system and small-scale axial bulk motion. However, like all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise-bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias corrected CDV algorithm was implemented into an experimental 1 μm wavelength OCT system for retinal imaging that used an eye tracking scanner laser ophthalmoscope at 815 nm for compensation of lateral eye motions. The noise-bias correction improved the CDV imaging of the blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High quality cross-section and motion-corrected en face angiograms of the retina and choroid are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, J C; Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI; Knill, C
Purpose: To determine small field correction factors for PTW's microDiamond detector in Elekta's Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively-new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to thank PTW (Freiburg, Germany) for providing the PTW microDiamond detector for this research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfred Wickline
2009-04-01
Corrective Action Unit 562 is located in Areas 2, 23, and 25 of the Nevada Test Site, which is approximately 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 562 comprises the 13 corrective action sites (CASs) listed below:
• 02-26-11, Lead Shot
• 02-44-02, Paint Spills and French Drain
• 02-59-01, Septic System
• 02-60-01, Concrete Drain
• 02-60-02, French Drain
• 02-60-03, Steam Cleaning Drain
• 02-60-04, French Drain
• 02-60-05, French Drain
• 02-60-06, French Drain
• 02-60-07, French Drain
• 23-60-01, Mud Trap Drain and Outfall
• 23-99-06, Grease Trap
• 25-60-04, Building 3123 Outfalls
These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on December 11, 2008, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and National Security Technologies, LLC. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 562. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the corrective action investigation for CAU 562 includes the following activities:
• Move surface debris and/or materials, as needed, to facilitate sampling.
• Conduct radiological surveys.
• Perform field screening.
• Collect and submit environmental samples for laboratory analysis to determine the nature and extent of any contamination released by each CAS.
• Collect samples of source material to determine the potential for a release.
• Collect samples of potential remediation wastes.
• Collect quality control samples.
This Corrective Action Investigation Plan has been developed in accordance with the Federal Facility Agreement and Consent Order that was agreed to by the State of Nevada; DOE, Environmental Management; U.S. Department of Defense; and DOE, Legacy Management (FFACO, 1996; as amended February 2008). Under the Federal Facility Agreement and Consent Order, this Corrective Action Investigation Plan will be submitted to the Nevada Division of Environmental Protection for approval. Fieldwork will be conducted following approval of the plan.
Quantum Loop Expansion to High Orders, Extended Borel Summation, and Comparison with Exact Results
NASA Astrophysics Data System (ADS)
Noreen, Amna; Olaussen, Kåre
2013-07-01
We compare predictions of the quantum loop expansion to (essentially) infinite orders with (essentially) exact results in a simple quantum mechanical model. We find that there are exponentially small corrections to the loop expansion, which cannot be explained by any obvious “instanton”-type corrections. It is not the mathematical occurrence of exponential corrections but their seeming lack of any physical origin which we find surprising and puzzling.
CMB-lensing beyond the Born approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marozzi, Giovanni; Fanizza, Giuseppe; Durrer, Ruth
2016-09-01
We investigate the weak lensing corrections to the cosmic microwave background temperature anisotropies considering effects beyond the Born approximation. To this aim, we use the small deflection angle approximation to connect the lensed and unlensed power spectra, via expressions for the deflection angles up to third order in the gravitational potential. While the small deflection angle approximation has the drawback of being reliable only for multipoles ℓ ≲ 2500, it allows us to consistently take into account the non-Gaussian nature of cosmological perturbation theory beyond the linear level. The contribution to the lensed temperature power spectrum coming from the non-Gaussian nature of the deflection angle at higher order is a new effect which has not been taken into account in the literature so far. It turns out to be the leading contribution among the post-Born lensing corrections. On the other hand, the effect is smaller than corrections coming from non-linearities in the matter power spectrum, and its imprint on CMB lensing is too small to be seen in present experiments.
Six Degrees-of-Freedom Ascent Control for Small-Body Touch and Go
NASA Technical Reports Server (NTRS)
Blackmore, Lars James C.
2011-01-01
A document discusses a method of controlling touch and go (TAG) of a spacecraft to correct attitude, while ensuring a safe ascent. TAG is a concept whereby a spacecraft is in contact with the surface of a small body, such as a comet or asteroid, for a few seconds or less before ascending to a safe location away from the small body. The report describes a controller that corrects attitude and ensures that the spacecraft ascends to a safe state as quickly as possible. The approach allocates a certain amount of control authority to attitude control, and uses the rest to accelerate the spacecraft as quickly as possible in the ascent direction. The relative allocation to attitude and position is a parameter whose optimal value is determined using a ground software tool. This new approach makes use of the full control authority of the spacecraft to correct the errors imparted by the contact, and ascend as quickly as possible. This is in contrast to prior approaches, which do not optimize the ascent acceleration.
Newsome, Andrew G.; Nikolic, Dejan
2014-01-01
The Critical Assessment of Small Molecule Identification (CASMI) contest was initiated in 2012 to evaluate manual and automated strategies for the identification of small molecules from raw mass spectrometric data. The authors participated in both category 1 (molecular formula determination) and category 2 (molecular structure determination) of the second annual CASMI contest (CASMI 2013) using slow but effective manual methods. The provided high resolution mass spectrometric data were interpreted manually using a combination of molecular formula calculators, fragment and neutral loss analysis, literature consultation, manual database searches, deductive logic, and experience. The authors submitted correct formulas as lead candidates for 16 of 16 challenges and submitted correct structure solutions as lead candidates for 14 of 16 challenges. One structure submission (Challenge 3) was very close but not exact (N2-acetylglutaminylisoleucinamide instead of the correct N2-acetylglutaminylleucinamide). A solution for one (Challenge 13) was not submitted due to an inability to reconcile the provided fragmentation pattern with any known structures with the provided molecular composition. PMID:26819877
Multicategory nets of single-layer perceptrons: complexity and sample-size issues.
Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras
2010-05-01
The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that an excessive minimization of inexact criteria becomes harmful at times. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that employment of SLP-based pairwise classifiers is comparable to, and as often as not outperforms, the linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for facilitating finite sample problems in moderate-dimensional PR tasks.
NASA Technical Reports Server (NTRS)
White, Raymond E., III
1994-01-01
Preliminary results on the elliptical galaxy NGC 1407 were published in the proceedings of the first ROSAT symposium. NGC 1407 is embedded in diffuse X-ray-emitting gas which is extensive enough that it is likely to be related to the surrounding group of galaxies, rather than just NGC 1407. Spectral data for NGC 1407 (AO2) and IC 1459 (AO3) are also included in a complete sample of elliptical galaxies I compiled in collaboration with David Davis. This allowed us to construct the first complete X-ray sample of optically-selected elliptical galaxies. The complete sample allows us to apply Malmquist bias corrections to the observed correlation between X-ray and optical luminosities. I continue to work on the implications of this first complete X-ray sample of elliptical galaxies. Paul Eskridge, Dave Davis, and I also analyzed three long ROSAT PSPC observations of the small (but not dwarf) elliptical galaxy M32. We found the X-ray spectra and variability to be consistent with either a Low Mass X-Ray Binary (LMXRB) or a putative 'micro'-AGN.
Selenium isotope ratios as indicators of selenium sources and oxyanion reduction
Johnson, T.M.; Herbel, M.J.; Bullen, T.D.; Zawislanski, P.T.
1999-01-01
Selenium stable isotope ratio measurements should serve as indicators of sources and biogeochemical transformations of Se. We report measurements of Se isotope fractionation during selenate reduction, selenite sorption, oxidation of reduced Se in soils, and Se volatilization by algae and soil samples. These results, combined with previous work with Se isotopes, indicate that reduction of soluble oxyanions is the dominant cause of Se isotope fractionation. Accordingly, Se isotope ratios should be useful as indicators of oxyanion reduction, which can transform mobile species to forms that are less mobile and less bioavailable. Additional investigations of Se isotope fractionation are needed to confirm this preliminary assessment. We have developed a new method for measurement of natural Se isotope ratio variation which requires less than 500 ng Se per analysis and yields ±0.2‰ precision on 80Se/76Se. A double isotope spike technique corrects for isotopic fractionation during sample preparation and mass spectrometry. The small minimum sample size is important, as Se concentrations are often below 1 ppm in solids and 1 µg/L in fluids. The Se purification process is rapid and compatible with various sample matrices, including acidic rock or sediment digests.
Selenium isotope ratios as indicators of selenium sources and oxyanion reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, T.M.; Herbel, M.J.; Bullen, T.D.
1999-09-01
Selenium stable isotope ratio measurements should serve as indicators of sources and biogeochemical transformations of Se. The authors report measurements of Se isotope fractionation during selenate reduction, selenite sorption, oxidation of reduced Se in soils, and Se volatilization by algae and soil samples. These results, combined with previous work with Se isotopes, indicate that reduction of soluble oxyanions is the dominant cause of Se isotope fractionation. Accordingly, Se isotope ratios should be useful as indicators of oxyanion reduction, which can transform mobile species to forms that are less mobile and less bioavailable. Additional investigations of Se isotope fractionation are needed to confirm this preliminary assessment. The authors have developed a new method for measurement of natural Se isotope ratio variation which requires less than 500 ng Se per analysis and yields ±0.2‰ precision on 80Se/76Se. A double isotope spike technique corrects for isotopic fractionation during sample preparation and mass spectrometry. The small minimum sample size is important, as Se concentrations are often below 1 ppm in solids and 1 µg/L in fluids. The Se purification process is rapid and compatible with various sample matrices, including acidic rock or sediment digests.
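A full double-spike inversion solves simultaneously for the spike contribution and the natural plus instrumental fractionation, which requires an iterative solver; the per-mass fractionation correction underneath it, however, fits in a few lines. A sketch of the exponential-law correction that double-spike and internal-normalization schemes build on (masses and ratios below are placeholders, not real Se data):

    import math

    # Exponential mass-fractionation law: R_meas = R_true * (m1/m2)**beta.
    # beta is fixed from a ratio whose true value is known (e.g. the spike
    # or a normalizing ratio), then applied to the ratio of interest.
    def beta_from_known(r_meas, r_true, m1, m2):
        return math.log(r_meas / r_true) / math.log(m1 / m2)

    def correct(r_meas, m1, m2, beta):
        return r_meas / (m1 / m2) ** beta

    b = beta_from_known(r_meas=0.980, r_true=1.000, m1=78.0, m2=76.0)
    print(correct(0.510, m1=80.0, m2=76.0, beta=b))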
An evaluation of parturition indices in fishers
Frost, H.C.; York, E.C.; Krohn, W.B.; Elowe, K.D.; Decker, T.A.; Powell, S.M.; Fuller, T.K.
1999-01-01
Fishers (Martes pennanti) are important forest carnivores and furbearers that are susceptible to overharvest. Traditional indices used to monitor fisher populations typically overestimate litter size and proportion of females that give birth. We evaluated the usefulness of 2 indices of reproduction to determine proportion of female fishers that gave birth in a particular year. We used female fishers of known age and reproductive histories to compare appearance of placental scars with incidence of pregnancy and litter size. Microscopic observation of freshly removed reproductive tracts correctly identified pregnant fishers and correctly estimated litter size in 3 of 4 instances, but gross observation of placental scars failed to correctly identify pregnant fishers and litter size. Microscopic observations of reproductive tracts in carcasses that were not fresh also failed to identify pregnant animals and litter size. We evaluated mean sizes of anterior nipples to see if different reproductive classes could be distinguished. Mean anterior nipple size of captive and wild fishers correctly identified current-year breeders from nonbreeders. Former breeders were misclassified in 4 of 13 instances. Presence of placental scars accurately predicted parturition in a small sample size of fishers, but absence of placental scars did not signify that a female did not give birth. In addition to enabling the estimation of parturition rates in live animals more accurately than traditional indices, mean anterior nipple size also provided an estimate of the percentage of adult females that successfully raised young. Though using mean anterior nipple size to index reproductive success looks promising, additional data are needed to evaluate effects of using dried, stretched pelts on nipple size for management purposes.
Correcting Estimates of the Occurrence Rate of Earth-like Exoplanets for Stellar Multiplicity
NASA Astrophysics Data System (ADS)
Cantor, Elliot; Dressing, Courtney D.; Ciardi, David R.; Christiansen, Jessie
2018-06-01
One of the most prominent questions in the exoplanet field has been determining the true occurrence rate of potentially habitable Earth-like planets. NASA’s Kepler mission has been instrumental in answering this question by searching for transiting exoplanets, but follow-up observations of Kepler target stars are needed to determine whether or not the surveyed Kepler targets are in multi-star systems. While many researchers have searched for companions to Kepler planet host stars, few studies have investigated the larger target sample. Regardless of physical association, the presence of nearby stellar companions biases our measurements of a system’s planetary parameters and reduces our sensitivity to small planets. Assuming that all Kepler target stars are single (as is done in many occurrence rate calculations) would overestimate our search completeness and result in an underestimate of the frequency of potentially habitable Earth-like planets. We aim to correct for this bias by characterizing the set of targets for which Kepler could have detected Earth-like planets. We are using adaptive optics (AO) imaging to reveal potential stellar companions and near-infrared spectroscopy to refine stellar parameters for a subset of the Kepler targets that are most amenable to the detection of Earth-like planets. We will then derive correction factors to correct for the biases in the larger set of target stars and determine the true frequency of systems with Earth-like planets. Due to the prevalence of stellar multiples, we expect to calculate an occurrence rate for Earth-like exoplanets that is higher than current figures.
Position Corrections for Airspeed and Flow Angle Measurements on Fixed-Wing Aircraft
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2017-01-01
This report addresses position corrections made to airspeed and aerodynamic flow angle measurements on fixed-wing aircraft. These corrections remove the effects of angular rates, which contribute to the measurements when the sensors are installed away from the aircraft center of mass. Simplified corrections, which are routinely used in practice and assume small flow angles and angular rates, are reviewed. The exact, nonlinear corrections are then derived. The simplified corrections are sufficient in most situations; however, accuracy diminishes for smaller aircraft that incur higher angular rates, and for flight at high air flow angles. This is demonstrated using both flight test data and a nonlinear flight dynamics simulation of a subscale transport aircraft in a variety of low-speed, subsonic flight conditions.
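The kinematics behind these corrections is the rigid-body relation v_sensor = v_cm + ω × r: a sensor mounted a lever arm r away from the center of mass picks up an extra velocity ω × r, which leaks into indicated airspeed, angle of attack, and sideslip. A sketch of the exact (nonlinear) form of such a correction under the usual body-axis conventions (function and variable names are mine, not the report's):

    import numpy as np

    def correct_to_cm(V_meas, alpha_meas, beta_meas, omega, pos):
        """Remove angular-rate effects from air data measured at a sensor
        mounted at body-axis position pos (m) relative to the center of
        mass, given body rates omega = (p, q, r) in rad/s."""
        # Body-axis air-relative velocity seen at the sensor:
        u = V_meas * np.cos(alpha_meas) * np.cos(beta_meas)
        v = V_meas * np.sin(beta_meas)
        w = V_meas * np.sin(alpha_meas) * np.cos(beta_meas)
        # Sensor velocity = cm velocity + omega x pos, so subtract omega x pos.
        u, v, w = np.array([u, v, w]) - np.cross(omega, pos)
        V = np.sqrt(u*u + v*v + w*w)
        return V, np.arctan2(w, u), np.arcsin(v / V)

    # Example: nose boom 2 m ahead of the cm, pitch rate 0.3 rad/s.
    print(correct_to_cm(30.0, 0.05, 0.0, omega=[0.0, 0.3, 0.0], pos=[2.0, 0.0, 0.0]))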
Corrective Strategies in Reading for At-Risk Community College Students.
ERIC Educational Resources Information Center
Yevoli, Carole
Focusing on corrective strategies for improving reading skills of at-risk community college students, this document reviews the history of such strategies, highlights current efforts, and assesses future needs. The first section traces the history of remedial reading programs at community colleges, beginning with small individualized sections…
Multichannel error correction code decoder
NASA Technical Reports Server (NTRS)
Wagner, Paul K.; Ivancic, William D.
1993-01-01
A brief overview of a processing satellite for a mesh very-small-aperture-terminal (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.
Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy
2016-01-01
Detecting changes in performance, sales, markets, risks, social relations, or public opinion constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are, however, confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.
Performance of statistical models to predict mental health and substance abuse cost.
Montez-Rath, Maria; Christiansen, Cindy L; Ettner, Susan L; Loveland, Susan; Rosen, Amy K
2006-10-26
Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model among the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used and the Gamma with square-root link model had convergence problems with small samples. Models with square-root transformation or link fit the data best. This function (whether used as transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-08
... product mix. That paragraph reads as follows: But assuming a situation in which there are substantial small cigar marketings in the actual ``small cigar'' tax category, changing the Step B method would...
In Defense of the Chi-Square Continuity Correction.
ERIC Educational Resources Information Center
Veldman, Donald J.; McNemar, Quinn
Published studies of the sampling distribution of chi-square with and without Yates' correction for continuity have been interpreted as discrediting the correction. Yates' correction actually produces a biased chi-square value which in turn yields a better estimate of the exact probability of the discrete event concerned when used in conjunction…
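Yates' correction subtracts 0.5 from each |O − E| before squaring, pulling the chi-square statistic toward the exact (hypergeometric) tail probability for a 2×2 table. Both variants are one call away in scipy, which makes the comparison at issue here easy to reproduce (the table below is made up):

    import numpy as np
    from scipy.stats import chi2_contingency, fisher_exact

    table = np.array([[12, 5],
                      [ 6, 14]])

    chi2_plain, p_plain, *_ = chi2_contingency(table, correction=False)
    chi2_yates, p_yates, *_ = chi2_contingency(table, correction=True)
    _, p_exact = fisher_exact(table)

    print(f"uncorrected: chi2={chi2_plain:.3f}, p={p_plain:.4f}")
    print(f"Yates:       chi2={chi2_yates:.3f}, p={p_yates:.4f}")
    print(f"Fisher exact p={p_exact:.4f}")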
Net thermal radiation in the atmosphere of Venus
NASA Technical Reports Server (NTRS)
Revercomb, H. E.; Sromovsky, L. A.; Suomi, V. E.; Boese, R. W.
1985-01-01
Estimates of the true atmospheric net fluxes at the four Pioneer Venus entry sites are obtained by correcting the measured values; the corrections are relatively small within the clouds but generally large deeper in the atmosphere. The correction procedure for both the small and large probe fluxes used model results near 14 km to establish the size of the correction. The thermal net fluxes obtained imply that the contribution of mode 3 particles to the IR opacity of the middle and lower clouds is smaller than indicated by the Pioneer Venus cloud particle spectrometer measurements, and the day probe results favor a reduction of only about 50 percent. The fluxes at all sites imply that a yet-undetermined source of considerable opacity is present in the upper cloud. Beneath the clouds, the thermal net fluxes generally increase with increasing latitude.
Jeong, Hyunjo; Barnard, Daniel; Cho, Sungjong; Zhang, Shuzeng; Li, Xiongbing
2017-11-01
This paper presents analytical and experimental techniques for accurate determination of the nonlinearity parameter (β) in thick solid samples. When piezoelectric transducers are used for β measurements, receiver calibration is required to determine the transfer function from which the absolute displacement can be calculated. The measured fundamental and second harmonic displacement amplitudes should be modified to account for beam diffraction and material absorption. All these issues are addressed in this study and the proposed technique is validated through β measurements of thick solid samples. A simplified self-reciprocity calibration procedure for a broadband receiver is described. The diffraction and attenuation corrections for the fundamental and second harmonics are explicitly derived. Aluminum alloy samples in five different thicknesses (4, 6, 8, 10, and 12 cm) are prepared and β measurements are made using the finite amplitude, through-transmission method. The effects of diffraction and attenuation corrections on β measurements are systematically investigated. When diffraction and attenuation corrections are all properly made, the variation of β between different thickness samples is found to be less than 3.2%.
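For a through-transmission measurement at angular frequency ω, the quadratic nonlinearity parameter is commonly evaluated as β = 8A₂/(k²zA₁²), where A₁ and A₂ are the absolute fundamental and second-harmonic displacement amplitudes, k the fundamental wavenumber, and z the propagation distance; diffraction and attenuation enter as multiplicative corrections to the amplitudes. A hedged sketch of the bookkeeping (the correction factors come from the paper's derivations; the numbers used here are hypothetical):

    import math

    def beta_measure(A1, A2, freq_hz, c, z, D1=1.0, D2=1.0, M1=1.0, M2=1.0):
        """beta = 8*A2 / (k^2 * z * A1^2), with displacement amplitudes first
        corrected for diffraction (D) and attenuation (M); the correction
        factors are placeholders for the paper's derived expressions."""
        A1c, A2c = A1 * D1 * M1, A2 * D2 * M2
        k = 2 * math.pi * freq_hz / c
        return 8 * A2c / (k**2 * z * A1c**2)

    # Illustrative numbers for aluminium: 5 MHz, c ~ 6350 m/s, 8 cm sample.
    print(beta_measure(A1=1.0e-9, A2=2.0e-12, freq_hz=5e6, c=6350.0, z=0.08,
                       D1=1.05, D2=1.10, M1=1.20, M2=1.45))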
Egg embryo development detection with hyperspectral imaging
NASA Astrophysics Data System (ADS)
Lawrence, Kurt C.; Smith, Douglas P.; Windham, William R.; Heitschmidt, Gerald W.; Park, Bosoon
2006-10-01
In the U.S. egg industry, anywhere from 130 million to over one billion infertile eggs are incubated each year. Some of these infertile eggs explode in the hatching cabinet and can potentially spread molds or bacteria to all the eggs in the cabinet. A method to detect the embryo development of incubated eggs was developed. Twelve brown-shell hatching eggs from two replicates (n=24) were incubated and imaged to identify embryo development. A hyperspectral imaging system was used to collect transmission images from 420 to 840 nm of brown-shell eggs positioned with the air cell vertical and normal to the camera lens. Raw transmission images from about 400 to 900 nm were collected for every egg on days 0, 1, 2, and 3 of incubation. A total of 96 images were collected and eggs were broken out on day 6 to determine fertility. After breakout, all eggs were found to be fertile. Therefore, this paper presents results for egg embryo development, not fertility. The original hyperspectral data and spectral means for each egg were both used to create embryo development models. With the hyperspectral data range reduced to about 500 to 700 nm, a minimum noise fraction transformation was used, along with a Mahalanobis distance classification model, to predict development. Days 2 and 3 were all correctly classified (100%), while day 0 and day 1 were classified at 95.8% and 91.7%, respectively. Alternatively, the mean spectra from each egg were used to develop a partial least squares regression (PLSR) model. First, a PLSR model was developed with all eggs and all days. The data were multiplicative scatter corrected, spectrally smoothed, and the wavelength range was reduced to 539 - 770 nm. With leave-one-out cross-validation, all eggs for all days were correctly classified (100%). Second, a PLSR model was developed with data from day 0 and day 3, and the model was validated with data from days 1 and 2. For day 1, 22 of 24 eggs were correctly classified (91.7%) and for day 2, all eggs were correctly classified (100%). Although the results are based on relatively small sample sizes, they are encouraging. However, larger sample sizes, from multiple flocks, will be needed to fully validate and verify these models. Additionally, future experiments must also include non-fertile eggs so the fertile / non-fertile effect can be determined.
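A PLSR day-of-incubation model of the kind described takes a few lines in scikit-learn: spectra become rows of X, the incubation day is the response, and rounding turns the regression output into a class. A sketch on synthetic spectra (a real model would also need the multiplicative scatter correction and smoothing described above):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(0)
    days = np.repeat([0, 1, 2, 3], 24)           # 24 eggs x 4 days
    X = rng.standard_normal((96, 120))           # synthetic "spectra", 120 bands
    X += np.outer(days, np.linspace(0, 1, 120))  # embryo signal grows with day

    correct = 0
    for train, test in LeaveOneOut().split(X):
        pls = PLSRegression(n_components=5).fit(X[train], days[train].astype(float))
        pred = int(np.clip(np.rint(pls.predict(X[test])[0, 0]), 0, 3))
        correct += int(pred == days[test][0])
    print(f"LOO accuracy: {correct / len(days):.1%}")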
Spelling Equivalency Awareness
ERIC Educational Resources Information Center
Berk, Barbara; Mazurkiewicz, Albert J.
1976-01-01
Concludes that despite instructional emphasis on one correct spelling, a large segment of the sample populations in this study spell differently from that usually thought correct and that a number of students, teachers, and parents recognize the existence of equally correct alternatives. (RB)
Liu, Yan; Cai, Wensheng; Shao, Xueguang
2016-12-05
Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, named the master and slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. As a result, the coefficients of the linear models constructed from the spectra measured on different instruments are similar in profile. Therefore, by using the constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications.
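One simple way to realize the idea of reusing a master model's coefficient profile on a slave instrument, in the spirit of (though not identical to) the LMC method described above, is ridge-style shrinkage of the slave coefficients toward the master coefficients, estimated from a handful of slave-measured spectra:

    import numpy as np

    def transfer_coefficients(X_slave, y_slave, b_master, lam=10.0):
        """Fit slave-model coefficients shrunk toward the master profile:
        min ||X b - y||^2 + lam * ||b - b_master||^2  (closed form).
        A simple stand-in for the constrained optimization in the paper."""
        n_feat = X_slave.shape[1]
        A = X_slave.T @ X_slave + lam * np.eye(n_feat)
        rhs = X_slave.T @ y_slave + lam * b_master
        return np.linalg.solve(A, rhs)

    rng = np.random.default_rng(2)
    b_master = np.sin(np.linspace(0, 3, 50))      # smooth coefficient profile
    X = rng.standard_normal((8, 50))               # only 8 slave spectra
    y = X @ (1.1 * b_master) + 0.01 * rng.standard_normal(8)
    b_slave = transfer_coefficients(X, y, b_master)
    print(np.corrcoef(b_slave, b_master)[0, 1])    # profiles stay similar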
O'Doherty, Jim; Chilcott, Anna; Dunn, Joel
2015-11-01
Arterial sampling with dispersion correction is routinely performed for kinetic analysis of PET studies. With the advent of PET-MRI systems, non-MR-safe instrumentation must be kept outside the scan room, which requires the length of the tubing between the patient and detector to increase, thus worsening the effects of dispersion. We examined the effects of dispersion in idealized radioactive blood studies using various lengths of tubing (1.5, 3, and 4.5 m) and applied a well-known transmission-dispersion model to attempt to correct the resulting traces. A simulation study was also carried out to examine the noise characteristics of the model. The model was applied to patient traces using 1.5 m acquisition tubing and extended to its use at 3 m. Satisfactory dispersion correction of the blood traces was achieved in the 1.5 m line. Predictions on the basis of experimental measurements, numerical simulations and noise analysis of resulting traces show that corrections of blood data can also be achieved using the 3 m tubing. The effects of dispersion could not be corrected for the 4.5 m line by the selected transmission-dispersion model. On the basis of our setup, correction of dispersion in arterial sampling tubing up to 3 m by the transmission-dispersion model can be performed. The model could not dispersion-correct data acquired using 4.5 m arterial tubing.
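For the classical single-exponential dispersion kernel d(t) = (1/τ)e^{−t/τ}, the measured curve is the convolution c_meas = c_true ⊗ d, and the inversion is one line: c_true(t) = c_meas(t) + τ·dc_meas/dt. A sketch of that textbook correction (the transmission-dispersion model used in the paper is a refinement of this idea, not reproduced here):

    import numpy as np

    def dispersion_correct(t, c_meas, tau):
        """Invert c_meas = c_true (*) (1/tau) exp(-t/tau):
        c_true = c_meas + tau * dc_meas/dt. Noisy data usually needs
        smoothing before the derivative is taken."""
        return c_meas + tau * np.gradient(c_meas, t)

    # Round-trip check on a synthetic input function:
    t = np.linspace(0.0, 120.0, 1201)
    c_true = t * np.exp(-t / 15.0)                 # gamma-variate-like bolus
    tau = 5.0
    kern = np.exp(-t / tau) / tau
    c_meas = np.convolve(c_true, kern)[: len(t)] * (t[1] - t[0])
    resid = dispersion_correct(t, c_meas, tau) - c_true
    print(np.max(np.abs(resid)))                   # small relative to the peak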
NASA Astrophysics Data System (ADS)
Boughezal, Radja; Isgrò, Andrea; Petriello, Frank
2018-04-01
We present a detailed derivation of the power corrections to the factorization theorem for the 0-jettiness event shape variable T. Our calculation is performed directly in QCD without using the formalism of effective field theory. We analytically calculate the next-to-leading logarithmic power corrections for small T at next-to-leading order in the strong coupling constant, extending previous computations which obtained only the leading-logarithmic power corrections. We address a discrepancy in the literature between results for the leading-logarithmic power corrections to a particular definition of 0-jettiness. We present a numerical study of the power corrections in the context of their application to the N-jettiness subtraction method for higher-order calculations, using gluon-fusion Higgs production as an example. The inclusion of the next-to-leading-logarithmic power corrections further improves the numerical efficiency of the approach beyond the improvement obtained from the leading-logarithmic power corrections.
Malik, Marek; Hnatkova, Katerina; Batchvarov, Velislav; Gang, Yi; Smetana, Peter; Camm, A John
2004-12-01
Regulatory authorities require new drugs to be investigated using a so-called "thorough QT/QTc study" to identify compounds with a potential of influencing cardiac repolarization in man. Presently drafted regulatory consensus requires these studies to be powered for the statistical detection of QTc interval changes as small as 5 ms. Since this translates into a noticeable drug development burden, strategies need to be identified allowing the size and thus the cost of thorough QT/QTc studies to be minimized. This study investigated the influence of QT and RR interval data quality and the precision of heart rate correction on the sample sizes of thorough QT/QTc studies. In 57 healthy subjects (26 women, age range 19-42 years), a total of 4,195 drug-free digital electrocardiograms (ECG) were obtained (65-84 ECGs per subject). All ECG parameters were measured manually using the most accurate approach with reconciliation of measurement differences between different cardiologists and aligning the measurements of corresponding ECG patterns. From the data derived in this measurement process, seven different levels of QT/RR data quality were obtained, ranging from the simplest approach of measuring 3 beats in one ECG lead to the most exact approach. Each of these QT/RR data-sets was processed with eight different heart rate corrections ranging from Bazett and Fridericia corrections to the individual QT/RR regression modelling with optimization of QT/RR curvature. For each combination of data quality and heart rate correction, standard deviation of individual mean QTc values and mean of individual standard deviations of QTc values were calculated and used to derive the size of thorough QT/QTc studies with an 80% power to detect 5 ms QTc changes at the significance level of 0.05. Irrespective of data quality and heart rate corrections, the necessary sample sizes of studies based on between-subject comparisons (e.g., parallel studies) are very substantial requiring >140 subjects per group. However, the required study size may be substantially reduced in investigations based on within-subject comparisons (e.g., crossover studies or studies of several parallel groups each crossing over an active treatment with placebo). While simple measurement approaches with ad-hoc heart rate correction still lead to requirements of >150 subjects, the combination of best data quality with most accurate individualized heart rate correction decreases the variability of QTc measurements in each individual very substantially. In the data of this study, the average of standard deviations of QTc values calculated separately in each individual was only 5.2 ms. Such a variability in QTc data translates to only 18 subjects per study group (e.g., the size of a complete one-group crossover study) to detect 5 ms QTc change with an 80% power. Cost calculations show that by involving the most stringent ECG handling and measurement, the cost of a thorough QT/QTc study may be reduced to approximately 25%-30% of the cost imposed by the simple ECG reading (e.g., three complexes in one lead only).
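The 18-subject figure can be reproduced with the standard paired-comparison sample-size formula: with a within-subject QTc standard deviation of about 5.2 ms, the SD of a within-subject treatment-placebo difference is roughly 5.2·√2 ms, and detecting a 5 ms shift with 80% power at two-sided α = 0.05 then needs n = ((z_{0.975} + z_{0.80})·σ_diff/δ)². A worked sketch (the √2 step assumes independent on-drug and off-drug means):

    import math
    from scipy.stats import norm

    sd_within = 5.2                       # ms, SD of QTc within one subject
    delta = 5.0                           # ms, change to detect
    sd_diff = sd_within * math.sqrt(2)    # SD of a within-subject difference

    z = norm.ppf(0.975) + norm.ppf(0.80)
    n = (z * sd_diff / delta) ** 2
    print(math.ceil(n))   # -> 17, consistent with the ~18 subjects quoted above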
Sturrock, Hugh J W; Gething, Pete W; Ashton, Ruth A; Kolaczinski, Jan H; Kabatereine, Narcis B; Brooker, Simon
2011-09-01
In schistosomiasis control, there is a need to geographically target treatment to populations at high risk of morbidity. This paper evaluates alternative sampling strategies for surveys of Schistosoma mansoni to target mass drug administration in Kenya and Ethiopia. Two main designs are considered: lot quality assurance sampling (LQAS) of children from all schools; and a geostatistical design that samples a subset of schools and uses semi-variogram analysis and spatial interpolation to predict prevalence in the remaining unsurveyed schools. Computerized simulations are used to investigate the performance of sampling strategies in correctly classifying schools according to treatment needs and their cost-effectiveness in identifying high prevalence schools. LQAS performs better than geostatistical sampling in correctly classifying schools, but at a higher cost per high-prevalence school correctly classified. It is suggested that the optimal surveying strategy for S. mansoni needs to take into account the goals of the control programme and the financial and drug resources available.
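An LQAS rule is fully specified by a sample size n and a decision threshold d: sample n children per school and classify the school as high prevalence when d or more are positive. Its operating characteristic curve, the probability of a "high" classification as a function of true prevalence, is one binomial tail (n and d below are illustrative, not the values used in the paper):

    from scipy.stats import binom

    def oc_curve(prevalence, n=15, d=7):
        """P(classify 'high') = P(X >= d), X ~ Binomial(n, prevalence)."""
        return binom.sf(d - 1, n, prevalence)

    for p in (0.1, 0.3, 0.5, 0.7):
        print(f"prevalence {p:.0%}: P(classified high) = {oc_curve(p):.3f}")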
Correcting for Sample Contamination in Genotype Calling of DNA Sequence Data
Flickinger, Matthew; Jun, Goo; Abecasis, Gonçalo R.; Boehnke, Michael; Kang, Hyun Min
2015-01-01
DNA sample contamination is a frequent problem in DNA sequencing studies and can result in genotyping errors and reduced power for association testing. We recently described methods to identify within-species DNA sample contamination based on sequencing read data, showed that our methods can reliably detect and estimate contamination levels as low as 1%, and suggested strategies to identify and remove contaminated samples from sequencing studies. Here we propose methods to model contamination during genotype calling as an alternative to removal of contaminated samples from further analyses. We compare our contamination-adjusted calls to calls that ignore contamination and to calls based on uncontaminated data. We demonstrate that, for moderate contamination levels (5%–20%), contamination-adjusted calls eliminate 48%–77% of the genotyping errors. For lower levels of contamination, our contamination correction methods produce genotypes nearly as accurate as those based on uncontaminated data. Our contamination correction methods are useful generally, but are particularly helpful for sample contamination levels from 2% to 20%. PMID:26235984
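The core of contamination-aware calling is a mixture likelihood: each read is assumed to come from the contaminating individual with probability alpha. The sketch below is a simplified biallelic version under our own notation (g_s, g_c are alt-allele dosages of the sample and contaminant), not the authors' implementation.

```python
# Sketch: likelihood of observed read bases under a contamination-aware
# genotype model (simplified biallelic version; notation is ours).
import numpy as np

def base_prob(base_is_alt, g, err=0.01):
    """P(read base | genotype dosage g in {0,1,2}), with sequencing error err."""
    p_alt = (g / 2) * (1 - err) + (1 - g / 2) * err
    return p_alt if base_is_alt else 1 - p_alt

def read_likelihood(bases, g_s, g_c, alpha):
    """Each read originates from the contaminant with probability alpha."""
    probs = [(1 - alpha) * base_prob(b, g_s) + alpha * base_prob(b, g_c)
             for b in bases]
    return float(np.prod(probs))

# A contamination-adjusted caller would maximize this over (g_s, g_c),
# weighting g_c by population allele frequencies.
bases = [True, True, False, True]   # alt/ref calls at one site
print(read_likelihood(bases, g_s=1, g_c=0, alpha=0.1))
```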
On the robustness of bucket brigade quantum RAM
NASA Astrophysics Data System (ADS)
Arunachalam, Srinivasan; Gheorghiu, Vlad; Jochym-O'Connor, Tomas; Mosca, Michele; Varshinee Srinivasan, Priyaa
2015-12-01
We study the robustness of the bucket brigade quantum random access memory model introduced by Giovannetti et al (2008 Phys. Rev. Lett. 100 160501). Due to a result of Regev and Schiff (ICALP '08 733), we show that for a class of error models the error rate per gate in the bucket brigade quantum memory has to be of order o(2^(-n/2)) (where N = 2^n is the size of the memory) whenever the memory is used as an oracle for the quantum searching problem. We conjecture that this is the case for any realistic error model that will be encountered in practice, and that for algorithms with super-polynomially many oracle queries the error rate must be super-polynomially small, which further motivates the need for quantum error correction. By contrast, for algorithms such as matrix inversion (Harrow et al 2009 Phys. Rev. Lett. 103 150502) or quantum machine learning (Rebentrost et al 2014 Phys. Rev. Lett. 113 130503) that only require a polynomial number of queries, the error rate only needs to be polynomially small and quantum error correction may not be required. We introduce a circuit model for the quantum bucket brigade architecture and argue that quantum error correction for the circuit causes the quantum bucket brigade architecture to lose its primary advantage of a small number of 'active' gates, since all components have to be actively error corrected.
Quan, H T
2014-06-01
We study the maximum efficiency of a heat engine based on a small system. It is revealed that, due to the finiteness of the system, irreversibility may arise when the working substance is brought into contact with a heat reservoir. As a result, there is a working-substance-dependent correction to the Carnot efficiency. We derive a general and simple expression for the maximum efficiency of a Carnot-cycle heat engine in terms of the relative entropy. This maximum efficiency approaches the Carnot efficiency asymptotically as the size of the working substance increases toward the thermodynamic limit. Our study extends Carnot's result for the maximum efficiency to an arbitrary working substance and elucidates the subtlety of thermodynamic laws in small systems.
NASA Astrophysics Data System (ADS)
Kokott, Sebastian; Levchenko, Sergey V.; Rinke, Patrick; Scheffler, Matthias
2018-03-01
We present a density functional theory (DFT) based supercell approach for modeling small polarons with proper account for the long-range elastic response of the material. Our analysis of the supercell dependence of the polaron properties (e.g., atomic structure, binding energy, and the polaron level) reveals long-range electrostatic effects and the electron–phonon (el–ph) interaction as the two main contributors. We develop a correction scheme for DFT polaron calculations that significantly reduces the dependence of polaron properties on the DFT exchange-correlation functional and the size of the supercell in the limit of strong el–ph coupling. Using our correction approach, we present accurate all-electron full-potential DFT results for small polarons in rocksalt MgO and rutile TiO2.
Finite-size radiation force correction for inviscid spheres in standing waves.
Marston, Philip L
2017-09-01
Yosioka and Kawasima gave a widely used approximation for the acoustic radiation force on small liquid spheres surrounded by an immiscible liquid in 1955. Considering the liquids to be inviscid with negligible thermal dissipation, in their approximation the force on the sphere is proportional to the sphere's volume and the levitation position in a vertical standing wave becomes independent of the size. The analysis given here introduces a small correction term proportional to the square of the sphere's radius relative to the aforementioned small-sphere force. The significance of this term also depends on the relative density and sound velocity of the sphere. The improved approximation is supported by comparison with the exact partial-wave-series based radiation force for ideal fluid spheres in ideal fluids.
77 FR 17352 - Federal Acquisition Regulation; Women-Owned Small Business (WOSB) Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-26
...) Concerns (APR 2012) (15 U.S.C. 637(m)). -- (25) 52.219-30, Notice of Set-Aside for Women-Owned Small...-AL97 Federal Acquisition Regulation; Women-Owned Small Business (WOSB) Program Correction In rule document 2012-4475 appearing on pages 12913 through 12924 in the issue of Friday, March 2, 2012 make the...
Farias, Paulo R S; Barbosa, José C; Busoli, Antonio C; Overal, William L; Miranda, Vicente S; Ribeiro, Susane M
2008-01-01
The fall armyworm, Spodoptera frugiperda (J.E. Smith), is one of the chief pests of maize in the Americas. The study of its spatial distribution is fundamental for designing correct control strategies, improving sampling methods, determining actual and potential crop losses, and adopting precision agriculture techniques. In São Paulo state, Brazil, a maize field was sampled in quadrats at weekly intervals, from germination through harvest, for caterpillar densities. In each of 200 quadrats, 10 plants were sampled per week. Harvest weights were obtained in the field for each quadrat, and ear diameters and lengths were also sampled (15 ears per quadrat) and used to estimate the potential productivity of the quadrat. Geostatistical analysis of caterpillar densities showed the greatest ranges for small caterpillars when semivariograms were fitted with a spherical model, which gave the best fit. As the caterpillars developed in the field, their spatial distribution became increasingly random, as shown by a model adjusted to a straight line, indicating a lack of spatial dependence among samples. Harvest weight and ear length followed the spherical model, indicating the existence of spatial variability of the production parameters in the maize field. Geostatistics shows promise for the application of precision methods in the integrated control of pests.
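The spherical semivariogram used in such analyses has a standard closed form. The sketch below fits it to empirical semivariances with least squares; the lag distances and semivariance values are synthetic placeholders, not data from the study.

```python
# Sketch: fitting a spherical semivariogram model to empirical semivariances,
# as used to assess spatial dependence of caterpillar counts. Data are
# synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, rng_):
    """Spherical semivariogram: rises to nugget+sill at range rng_, flat beyond."""
    h = np.asarray(h, dtype=float)
    inside = nugget + sill * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h < rng_, inside, nugget + sill)

lags = np.array([10, 20, 30, 40, 50, 60, 70, 80.0])        # lag distances (m)
gamma = np.array([0.8, 1.4, 1.9, 2.2, 2.4, 2.5, 2.5, 2.5])  # empirical semivariance

(nugget, sill, rng_), _ = curve_fit(spherical, lags, gamma, p0=[0.5, 2.0, 50.0])
print(f"nugget={nugget:.2f}, partial sill={sill:.2f}, range={rng_:.1f} m")
```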
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hathcock, Charles Dean
The proposed action being assessed in this document occurs in TA-02 in the bottom of Los Alamos Canyon. The DOE proposes to conduct soil sampling at AOC 02-011(d), AOC 02-011(a)(ii), and SWMU 02-005, and excavate soils in AOC 02-011(a)(ii) as part of a corrective actions effort. Additional shallow surface soil samples (soil grab samples) will be collected throughout the TA-02 area, including within the floodplain, to perform ecotoxicology studies (Figures 1 and 2). The excavation boundaries in AOC 02-011(a)(ii) are slightly within the delineated 100-year floodplain. The project will use a variety of techniques for soil sampling and remediation efforts, including hand digging, standard hand auger sampling, and excavation using machinery such as a backhoe, front-end loader, and small drill rig. Heavy equipment will traverse the floodplain, and spoils piles will be staged in the floodplain within developed or previously disturbed areas (e.g., existing paved roads and parking areas). The project will utilize and maintain appropriate best management practices (BMPs) to contain excavated materials and all pollutants, including oil from machinery/vehicles. The project will stabilize disturbed areas as appropriate at the end of the project.
Theory of Irregular Waveguides with Slowly Changing Parameters
1979-04-05
different waves are orthogonal between themselves. The conditions of orthogonality we will record in such a way that it would be correct during any... replace the strain of the surface, on which (3.2) is correct, by a small change in this condition on the undisturbed surface. Let us establish this... it is possible to show that (6.1) is correct with any sign of L. The replacement of the strain by the boundary condition (6.1) introduces into the calculation...
Monovision correction for small-angle diplopia.
Bujak, Matthew C; Leung, Andrea K; Kisilevsky, Mila; Margolin, Edward
2012-09-01
To assess quantitatively the efficacy of monovision correction in the treatment of acquired small-angle binocular diplopia in adult patients. Prospective, interventional case series. Twenty patients with symptomatic diplopia were enrolled in a prospective treatment trial at a tertiary university neuro-ophthalmology practice. All had stable deviations of 10 prism diopters or less for more than 3 months. Each received monovision spectacles, contact lenses, or both, with distance correction in the dominant eye. Half received a +3.00-diopter add and the others received +2.50 diopters. The validated and standardized Diplopia Questionnaire and Amblyopia and Strabismus Questionnaire were used to quantify the efficacy of monovision correction for diplopia by measuring the functional impact on vision-specific quality of life. Primary outcome: based on the results of the Diplopia Questionnaire, 85% of patients experienced significant improvement in diplopia symptoms after monovision correction, with a statistically significant 58.6% improvement in the Diplopia Questionnaire score (P < .0001). Secondary outcome: the Amblyopia and Strabismus Questionnaire scores demonstrated improved quality of life and daily function after monovision correction (P = .03), especially in the areas of double vision (P = .0003) and social contact and appearance (P = .0002). Monovision decreased the frequency of diplopia and improved subjects' quality of life. Monovision may be a feasible alternative for presbyopic diplopic patients who are dissatisfied with other conservative treatment options. Copyright © 2012 Elsevier Inc. All rights reserved.
Terlier, T; Lee, J; Lee, K; Lee, Y
2018-02-06
Technological progress has spurred the development of increasingly sophisticated analytical devices. The full characterization of structures in terms of sample volume and composition is now highly complex. Here, a highly improved solution for 3D characterization of samples, based on an advanced method for 3D data correction, is proposed. Traditionally, secondary ion mass spectrometry (SIMS) provides the chemical distribution of sample surfaces. Combining successive sputtering with 2D surface projections enables a 3D volume rendering to be generated. However, surface topography can distort the volume rendering by necessitating the projection of a nonflat surface onto a planar image. Moreover, the sputtering is highly dependent on the probed material. Local variation of composition affects the sputter yield and the beam-induced roughness, which in turn alters the 3D render. To circumvent these drawbacks, the correlation of atomic force microscopy (AFM) with SIMS has been proposed in previous studies as a solution for the 3D chemical characterization. To extend the applicability of this approach, we have developed a methodology using AFM-time-of-flight (ToF)-SIMS combined with an empirical sputter model, "dynamic-model-based volume correction", to universally correct 3D structures. First, the simulation of 3D structures highlighted the great advantages of this new approach compared with classical methods. Then, we explored the applicability of this new correction to two types of samples, a patterned metallic multilayer and a diblock copolymer film presenting surface asperities. In both cases, the dynamic-model-based volume correction produced an accurate 3D reconstruction of the sample volume and composition. The combination of AFM-SIMS with the dynamic-model-based volume correction improves the understanding of the surface characteristics. Beyond the useful 3D chemical information provided by dynamic-model-based volume correction, the approach permits us to enhance the correlation of chemical information from spectroscopic techniques with the physical properties obtained by AFM.
Peer-led small groups: Are we on the right track?
Moore, Fraser
2017-10-01
Peer tutor-led small group sessions are a valuable learning strategy but students may lack confidence in the absence of a content expert. This study examined whether faculty reinforcement of peer tutor-led small group content was beneficial. Two peer tutor-led small group sessions were compared with one faculty-led small group session using questionnaires sent to student participants and interviews with the peer tutors. One peer tutor-led session was followed by a lecture with revision of the small group content; after the second, students submitted a group report which was corrected and returned to them with comments. Student participants and peer tutors identified increased discussion and opportunity for personal reflection as major benefits of the peer tutor-led small group sessions, but students did express uncertainty about gaps in their learning following these sessions. Both methods of subsequent faculty reinforcement were perceived as valuable by student participants and peer tutors. Knowing in advance that the group report would be corrected reduced discussion in some groups, potentially negating one of the major benefits of the peer tutor-led sessions. Faculty reinforcement of peer-tutor led small group content benefits students but close attention should be paid to the method of reinforcement.
Applications of potential theory computations to transonic aeroelasticity
NASA Technical Reports Server (NTRS)
Edwards, J. W.
1986-01-01
Unsteady aerodynamic and aeroelastic stability calculations based upon transonic small disturbance (TSD) potential theory are presented. Results from the two-dimensional XTRAN2L code and the three-dimensional XTRAN3S code are compared with experiment to demonstrate the ability of TSD codes to treat transonic effects. The necessity of nonisentropic corrections to transonic potential theory is demonstrated. Dynamic computational effects resulting from the choice of grid and boundary conditions are illustrated. Unsteady airloads for a number of parameter variations including airfoil shape and thickness, Mach number, frequency, and amplitude are given. Finally, samples of transonic aeroelastic calculations are given. A key observation is the extent to which unsteady transonic airloads calculated by inviscid potential theory may be treated in a locally linear manner.
Terahertz imaging with compressed sensing and phase retrieval.
Chan, Wai Lam; Moravec, Matthew L; Baraniuk, Richard G; Mittleman, Daniel M
2008-05-01
We describe a novel, high-speed pulsed terahertz (THz) Fourier imaging system based on compressed sensing (CS), a new signal processing theory, which allows image reconstruction with fewer samples than traditionally required. Using CS, we successfully reconstruct a 64 x 64 image of an object with pixel size 1.4 mm using a randomly chosen subset of the 4096 pixels that define the image in the Fourier plane, and observe improved reconstruction quality when we apply phase correction. For our chosen image, only about 12% of the pixels are required for reassembling the image. In combination with phase retrieval, our system has the capability to reconstruct images with only a small subset of Fourier amplitude measurements and thus has potential application in THz imaging with cw sources.
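The recovery step in this kind of system can be illustrated with a generic CS solver. The sketch below recovers a synthetic sparse 64 x 64 object from ~12% of its Fourier samples using plain ISTA with image-domain sparsity; the sampling fraction, step size, and threshold are illustrative, and the paper's actual reconstruction algorithm may differ.

```python
# Sketch: compressed-sensing recovery from a random subset of 2D Fourier
# samples, in the spirit of the THz Fourier imaging described above.
import numpy as np

rng = np.random.default_rng(1)
n = 64
x_true = np.zeros((n, n))
x_true[20:28, 30:38] = 1.0             # a simple sparse test object

mask = rng.random((n, n)) < 0.12       # keep ~12% of Fourier-plane samples
y = mask * np.fft.fft2(x_true)         # observed Fourier measurements

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros((n, n))
for _ in range(200):                   # ISTA iterations
    resid = mask * np.fft.fft2(x) - y
    grad = np.real(np.fft.ifft2(resid))    # adjoint of the masked FFT
    x = soft(x - grad, 0.01)               # gradient step + soft threshold

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```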
Discovery of Super-Thin Disks in Nearby Edge-on Spiral Galaxies
NASA Astrophysics Data System (ADS)
Schechtman-Rook, A.; Bershady, M. A.
2014-03-01
We report the identification of a super-thin disk (hz ≈ 60 pc) in the edge-on spiral galaxy NGC 891. This component is only apparent after we perform a physically motivated attenuation correction, based on detailed radiation transfer models, to our sub-arcsecond resolution near-infrared imaging. In addition to the super-thin disk, we also find several structural features near the center of NGC 891, including an inner disk truncation at ≈3 kpc. Inner disk truncations may be commonplace among massive spiral galaxies, possibly due to the effects of instabilities, such as bars. Having successfully demonstrated our methods, we are poised to apply them to a small sample of nearby edge-on galaxies, consisting of both massive and low-mass spirals.
Visualisation and Quantification of Transport in Barrier Rocks with Positron Emission Tomography
NASA Astrophysics Data System (ADS)
Kulenkampff, J.; Gajewski, C.; Gründig, M.; Lippmann-Pipke, J.; Mittmann, H.; Richter, M.; Wolf, M.
2009-04-01
In tight barrier rocks, laboratory observation of radionuclide transport and determination of transport parameters are demanding and time-consuming tasks because of slow transport rates, small concentrations, and intricate chemical interactions. The validity of results from common laboratory methods, such as flow and diffusion experiments on small samples, is limited by the heterogeneity of the pathways and the associated upscaling issues, because homogeneous conditions have to be presumed for these input-output investigations. Nano-pores or micro-fractures could be present, however, providing pathways for heterogeneous transport processes, and the transport properties of these pathways are highly influential boundary conditions for reactions between fluid components and crystal surfaces. We propose positron emission tomography (GEO-PET) as an appropriate method for direct observation of heterogeneous transport of radiotracers in tight material on the laboratory scale. With high-resolution PET scanners, which are common instruments of biomedical research ("small animal PET"), it is possible to determine the spatio-temporal distribution of the tracer activity with a resolution of almost 1 mm over about three periods of the tracer half-life (half-lives of some applicable PET tracers: 18F: 1.8 h; 124I: 4.2 days; 58Co: 70.8 days). The PET tracer is applied as an ion in solution or as a marker for compounds such as colloids. The most considerable difference between PET applications on geomaterials and on biological tissue is the stronger attenuation and scattering of radiation because of the higher density of rock material. After travelling the positron attenuation length in dense material (about 1 mm), the positron annihilates in contact with an electron, emitting two 511 keV photons propagating in antiparallel directions. The sample size of geomaterial is limited by the attenuation length of these photons. By applying an appropriate attenuation correction, it is possible to investigate transport processes in rock cores with diameters up to 10 cm; then at least 20% of the initial annihilation events are recorded as coincidences. However, one photon of the annihilation pair may be recorded while the other is absorbed; therefore, the signal-to-noise ratio is degraded by attenuation. Other sources of noise are scattered events and the loss of one coinciding photon due to gaps between the detectors and other detection-probability effects. The ratio of random coincidences also increases with the noise level and impairs the image quality of the tomographic reconstruction. Reducing these reconstruction artefacts with enhanced data-correction methods is an important requirement for the development of the GEO-PET method. Another problem is the development of special methods for the quantitative evaluation of the extensive spatio-temporal data sets. We present results from high-resolution PET for tomographic process observation during transport of colloids and conservative tracers in macroscopic samples of clays, saline rocks, and granites (diameter 5 to 10 cm, length 5 to 20 cm). In most cases we observed localized transport zones, even in a homogenized compressed clay sample. This reflects the non-representative sample volume, which probably is not achievable for any laboratory method; at least the PET tomograms reveal these deviations from representativeness.
To date, breakthrough-curve parameters can be determined from spatially resolved tracer concentration measurements at distinct regions of the sample, without requiring the tracer to penetrate the complete sample. A multiscale model-based inversion scheme for continuous, scale-dependent parameter determination is currently under development.
ERIC Educational Resources Information Center
Servetti, Sara
2010-01-01
This paper focuses on cooperative learning (CL) used as a correction and grammar revision technique and considers data collected in six Italian parallel classes, three of which (sample classes) corrected mistakes and revised grammar through cooperative learning, while the other three (control classes) did so in a traditional way. All the classes…
Evaluation of the Klobuchar model in TaiWan
NASA Astrophysics Data System (ADS)
Li, Jinghua; Wan, Qingtao; Ma, Guanyi; Zhang, Jie; Wang, Xiaolan; Fan, Jiangtao
2017-09-01
Ionospheric delay is the main error source in Global Navigation Satellite Systems (GNSS), and ionospheric models are one way to correct for it: single-frequency GNSS users correct the ionospheric delay using correction parameters broadcast by the satellites. The Klobuchar model is widely used in the Global Positioning System (GPS) and COMPASS because it is simple and convenient for real-time calculation. The model was established on observations mainly from Europe and the USA and does not describe the equatorial anomaly region. Southern China is located near the northern crest of the equatorial anomaly, where the ionosphere exhibits complex spatial and temporal variation, so assessing the validity of the Klobuchar model in this area is important for improving the model. Eleven years (2003-2014) of data from a GPS receiver located at Taoyuan, Taiwan (121°E, 25°N) are used to assess the validity of the Klobuchar model in Taiwan. Total electron content (TEC) from the dual-frequency GPS observations is calculated and used as the reference, and TEC from the Klobuchar model is compared against it. The residual is defined as the difference between the Klobuchar-model TEC and the reference; it reflects the absolute error of the model. The RMS correction percentage represents the performance of the model relative to the observations. The long-term variation of the residuals, the RMS correction percentage, and their changes with latitude are analyzed to assess the model. In some months the RMS correction did not reach the goal of 50% proposed by Klobuchar, especially in the winters of low-solar-activity years and at nighttime. The RMS correction depended neither on the 11-year solar cycle nor on latitude. Unlike the RMS correction, the residuals changed with solar activity, similar to the variation of TEC. The residuals were large in the daytime, during the equinox seasons, and in high-solar-activity years; they were small at night, during the solstice seasons, and in low-activity years. During 1300-1500 BJT in high-activity years, the mean bias was negative, implying that the model underestimated TEC on average. The maximum mean bias was 33 TECU (April 2014), and the maximum underestimation reached 97 TECU (October 2011). During 0000-0200 BJT, the residuals had a small mean bias, a small variation range, and a small standard deviation, suggesting that the model describes the nighttime ionosphere better than the daytime ionosphere. Besides varying with solar activity, the residuals also vary with latitude. The mean bias reached its maximum at 20-22°N, corresponding to the northern crest of the equatorial anomaly; at this latitude, the maximum mean bias was 47 TECU below the observations in high-activity years and 12 TECU below in low-activity years. The minimum variation range appeared at 30-32°N in both high- and low-activity years, but the minimum mean bias appeared at different latitudes: 30-32°N in high-activity years and 24-26°N in low-activity years. For an ideal model, the residuals should have a small mean bias and a small variation range. Further study is needed to characterize the distribution of the residuals and to improve the model.
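A minimal sketch of the statistics reported above. We assume the common definition of the correction percentage, (1 - RMS(residual)/RMS(observed TEC)) x 100, i.e., how much of the observed TEC the model removes in the RMS sense; the paper may define it slightly differently.

```python
# Sketch: residual and RMS-correction statistics for an ionospheric model,
# under our assumed definition of the correction percentage.
import numpy as np

def rms(x):
    return float(np.sqrt(np.mean(np.square(x))))

def klobuchar_stats(tec_obs, tec_model):
    residual = tec_model - tec_obs            # TECU; > 0 means overestimation
    pct = (1.0 - rms(residual) / rms(tec_obs)) * 100.0
    return {"mean_bias": float(np.mean(residual)),
            "residual_rms": rms(residual),
            "correction_pct": pct}

tec_obs = np.array([40.0, 55.0, 30.0, 70.0])    # dual-frequency GPS TEC (reference)
tec_model = np.array([30.0, 42.0, 25.0, 50.0])  # Klobuchar-model TEC
print(klobuchar_stats(tec_obs, tec_model))
```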
An experimental verification of laser-velocimeter sampling bias and its correction
NASA Technical Reports Server (NTRS)
Johnson, D. A.; Modarress, D.; Owen, F. K.
1982-01-01
The existence of 'sampling bias' in individual-realization laser velocimeter measurements is experimentally verified and shown to be independent of sample rate. The experiments were performed in a simple two-stream mixing shear flow with the standard for comparison being laser-velocimeter results obtained under continuous-wave conditions. It is also demonstrated that the errors resulting from sampling bias can be removed by a proper interpretation of the sampling statistics. In addition, data obtained in a shock-induced separated flow and in the near-wake of airfoils are presented, both bias-corrected and uncorrected, to illustrate the effects of sampling bias in the extreme.
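One classical "proper interpretation of the sampling statistics" for this bias is inverse-velocity weighting of individual realizations (McLaughlin and Tiederman, 1973): faster particles cross the measurement volume more often and are oversampled. Whether this is exactly the correction used in the paper above is an assumption on our part; the sketch below shows the weighting idea.

```python
# Sketch: velocity-bias correction of individual-realization LV data by
# inverse-velocity weighting (one common correction; assumed here, not
# necessarily the paper's exact estimator).
import numpy as np

def bias_corrected_mean(u):
    """Velocity-bias-corrected mean from individual realizations u."""
    u = np.asarray(u, dtype=float)
    w = 1.0 / np.abs(u)          # down-weight oversampled fast particles
    return np.sum(w * u) / np.sum(w)

u = np.array([1.2, 0.8, 2.5, 1.9, 0.6])   # m/s, individual realizations
print("arithmetic mean:     ", u.mean())
print("bias-corrected mean: ", bias_corrected_mean(u))
```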
Is the PTW 60019 microDiamond a suitable candidate for small field reference dosimetry?
NASA Astrophysics Data System (ADS)
De Coste, Vanessa; Francescon, Paolo; Marinelli, Marco; Masi, Laura; Paganini, Lucia; Pimpinella, Maria; Prestopino, Giuseppe; Russo, Serenella; Stravato, Antonella; Verona, Claudio; Verona-Rinati, Gianluca
2017-09-01
A systematic study of the PTW microDiamond (MD) output factors (OF) is reported, aimed at clarifying its response in small fields and investigating its suitability for small field reference dosimetry. Ten MDs were calibrated under 60Co irradiation. OF measurements were performed in 6 MV photon beams from a CyberKnife M6, a Varian DHX, and an Elekta Synergy linac. Two PTW silicon diodes E (Si-D) were used for comparison. The results obtained by the MDs were evaluated in terms of absorbed dose to water determination in reference conditions and OF measurements, and compared to the results reported in the recent literature. To this purpose, the Monte Carlo (MC) beam-quality correction factor, kQMD, was calculated for the MD, and the small field output correction factors, kQclin,Qmsr (fclin,fmsr), were calculated for both the MD and the Si-D by two different research groups. An empirical function was also derived, providing output correction factors within 0.5% of the MC values calculated for all three linacs. High reproducibility of the dosimetric properties was observed among the ten MDs. The experimental kQMD values are in agreement within 1% with the MC-calculated ones. Output correction factors within +0.7% and -1.4% were obtained down to field sizes as narrow as 5 mm. The resulting MD and Si-D field factors are in agreement within 0.2% in the case of CyberKnife measurements and within 1.6% in the other cases. This higher spread of the data was demonstrated to be due to a lower reproducibility of small beam sizes defined by jaws or multi-leaf collimators. The results of the present study demonstrate the reproducibility of the MD response and provide a validation of the MC modelling of this device. In principle, accurate reference dosimetry is thus feasible using the microDiamond dosimeter for field sizes down to 5 mm.
Liu, Paul Z.Y.; Lee, Christopher; McKenzie, David R.; Suchowerska, Natalka
2016-01-01
Flattening filter‐free (FFF) beams are becoming the preferred beam type for stereotactic radiosurgery (SRS) and stereotactic ablative radiation therapy (SABR), as they enable an increase in dose rate and a decrease in treatment time. This work assesses the effects of the flattening filter on small field output factors for 6 MV beams generated by both Elekta and Varian linear accelerators, and determines differences between detector response in flattened (FF) and FFF beams. Relative output factors were measured with a range of detectors (diodes, ionization chambers, radiochromic film, and microDiamond) and referenced to the relative output factors measured with an air core fiber optic dosimeter (FOD), a scintillation dosimeter developed at Chris O'Brien Lifehouse, Sydney. Small field correction factors were generated for both FF and FFF beams. Diode measured detector response was compared with a recently published mathematical relation to predict diode response corrections in small fields. The effect of flattening filter removal on detector response was quantified using a ratio of relative detector responses in FFF and FF fields for the same field size. The removal of the flattening filter was found to have a small but measurable effect on ionization chamber response with maximum deviations of less than ±0.9% across all field sizes measured. Solid‐state detectors showed an increased dependence on the flattening filter of up to ±1.6%. Measured diode response was within ±1.1% of the published mathematical relation for all fields up to 30 mm, independent of linac type and presence or absence of a flattening filter. For 6 MV beams, detector correction factors between FFF and FF beams are interchangeable for a linac between FF and FFF modes, providing that an additional uncertainty of up to ±1.6% is accepted. PACS number(s): 87.55.km, 87.56.bd, 87.56.Da PMID:27167280
Charles, P H; Cranmer-Sargison, G; Thwaites, D I; Kairn, T; Crowe, S B; Pedrazzini, G; Aland, T; Kenny, J; Langton, C M; Trapp, J V
2014-10-01
Two diodes which do not require correction factors for small field relative output measurements are designed and validated using experimental methodology. This was achieved by adding an air layer above the active volume of the diode detectors, which canceled out the increase in response of the diodes in small fields relative to standard field sizes. Due to the increased density of silicon and other components within a diode, additional electrons are created. In very small fields, a very small air gap acts as an effective filter of electrons with a high angle of incidence. The aim was to design a diode that balanced these perturbations to give a response similar to a water-only geometry. Three thicknesses of air were placed at the proximal end of a PTW 60017 electron diode (PTWe) using an adjustable "air cap". A set of output ratios (ORDet (fclin) ) for square field sizes of side length down to 5 mm was measured using each air thickness and compared to ORDet (fclin) measured using an IBA stereotactic field diode (SFD). kQclin,Qmsr (fclin,fmsr) was transferred from the SFD to the PTWe diode and plotted as a function of air gap thickness for each field size. This enabled the optimal air gap thickness to be obtained by observing which thickness of air was required such that kQclin,Qmsr (fclin,fmsr) was equal to 1.00 at all field sizes. A similar procedure was used to find the optimal air thickness required to make a modified Sun Nuclear EDGE detector (EDGEe) which is "correction-free" in small field relative dosimetry. In addition, the feasibility of experimentally transferring kQclin,Qmsr (fclin,fmsr) values from the SFD to unknown diodes was tested by comparing the experimentally transferred kQclin,Qmsr (fclin,fmsr) values for unmodified PTWe and EDGEe diodes to Monte Carlo simulated values. 1.0 mm of air was required to make the PTWe diode correction-free. This modified diode (PTWeair) produced output factors equivalent to those in water at all field sizes (5-50 mm). The optimal air thickness required for the EDGEe diode was found to be 0.6 mm. The modified diode (EDGEeair) produced output factors equivalent to those in water, except at field sizes of 8 and 10 mm where it measured approximately 2% greater than the relative dose to water. The experimentally calculated kQclin,Qmsr (fclin,fmsr) for both the PTWe and the EDGEe diodes (without air) matched Monte Carlo simulated results, thus proving that it is feasible to transfer kQclin,Qmsr (fclin,fmsr) from one commercially available detector to another using experimental methods and the recommended experimental setup. It is possible to create a diode which does not require corrections for small field output factor measurements. This has been performed and verified experimentally. The ability of a detector to be "correction-free" depends strongly on its design and composition. A nonwater-equivalent detector can only be "correction-free" if competing perturbations of the beam cancel out at all field sizes. This should not be confused with true water equivalency of a detector.
Bolea, Juan; Pueyo, Esther; Orini, Michele; Bailón, Raquel
2016-01-01
The purpose of this study is to characterize and attenuate the influence of mean heart rate (HR) on nonlinear heart rate variability (HRV) indices (correlation dimension, sample entropy, and approximate entropy), a consequence of HR being the intrinsic sampling rate of the HRV signal. This influence can notably alter nonlinear HRV indices and lead to biased information regarding autonomic nervous system (ANS) modulation. First, a simulation study was carried out to characterize the dependence of nonlinear HRV indices on HR under similar ANS modulation. Second, two HR-correction approaches were proposed: one based on regression formulas and another based on interpolating RR time series. Finally, standard and HR-corrected HRV indices were studied in a body-position-change database. The simulation study showed the HR dependence of nonlinear indices to be a sampling-rate effect, as well as the ability of the proposed HR corrections to attenuate the influence of mean HR. Analysis of the body-position-change database showed that correlation dimension was reduced by around 21% in median values in standing with respect to supine position (p < 0.05), concomitant with a 28% increase in mean HR (p < 0.05). After HR correction, correlation dimension decreased by around 18% in standing with respect to supine position, with the decrease remaining significant. Sample and approximate entropy showed similar trends. HR-corrected nonlinear HRV indices could represent an improvement in their applicability as markers of ANS modulation when mean HR changes.
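A sketch of the interpolation-style correction as we read it: resample the unevenly sampled RR series onto a uniform time grid so that entropy and correlation-dimension estimates are not confounded by mean HR acting as the intrinsic sampling rate. The 4 Hz resampling rate is a conventional choice, not necessarily the authors'.

```python
# Sketch: HR correction by resampling the RR tachogram onto a uniform grid.
import numpy as np

def resample_rr(rr_s, fs=4.0):
    """Linearly interpolate an RR-interval series (seconds) onto a uniform grid."""
    t = np.cumsum(rr_s)                          # beat occurrence times
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    return t_uniform, np.interp(t_uniform, t, rr_s)

rr = np.array([0.80, 0.82, 0.78, 0.85, 0.90, 0.88, 0.84])  # seconds
t_u, rr_u = resample_rr(rr)
print(len(rr), "beats ->", len(rr_u), "uniform samples")
```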
The Impact of In-Service Training of Correctional Counselors.
ERIC Educational Resources Information Center
Smith, Thomas H.
An empirical study was made on treatment atmosphere and shifts in interpersonal behavior in a military correctional treatment setting. The program studied was a small rehabilitation unit housing 100 to 140 enlisted men convicted by special or general court martial of various offenses ranging from AWOL to manslaughter. The objective of the unit was…
Galaxy two-point covariance matrix estimation for next generation surveys
NASA Astrophysics Data System (ADS)
Howlett, Cullan; Percival, Will J.
2017-12-01
We perform a detailed analysis of the covariance matrix of the spherically averaged galaxy power spectrum and present a new, practical method for estimating this within an arbitrary survey without the need for running mock galaxy simulations that cover the full survey volume. The method uses theoretical arguments to modify the covariance matrix measured from a set of small-volume cubic galaxy simulations, which are computationally cheap to produce compared to larger simulations and match the measured small-scale galaxy clustering more accurately than is possible using theoretical modelling. We include prescriptions to analytically account for the window function of the survey, which convolves the measured covariance matrix in a non-trivial way. We also present a new method to include the effects of super-sample covariance and modes outside the small simulation volume which requires no additional simulations and still allows us to scale the covariance matrix. As validation, we compare the covariance matrix estimated using our new method to that from a brute-force calculation using 500 simulations originally created for analysis of the Sloan Digital Sky Survey Main Galaxy Sample. We find excellent agreement on all scales of interest for large-scale structure analysis, including those dominated by the effects of the survey window, and on scales where theoretical models of the clustering normally break down, but the new method produces a covariance matrix with significantly better signal-to-noise ratio. Although only formally correct in real space, we also discuss how our method can be extended to incorporate the effects of redshift space distortions.
NASA Astrophysics Data System (ADS)
Komatsu, Nobuyoshi
2017-11-01
A power-law corrected entropy based on quantum entanglement is considered to be a viable black-hole entropy. In this study, as an alternative to the Bekenstein-Hawking entropy, a power-law corrected entropy is applied to Padmanabhan's holographic equipartition law to thermodynamically examine an extra driving term in the cosmological equations for a flat Friedmann-Robertson-Walker universe at late times. Deviations from the Bekenstein-Hawking entropy generate an extra driving term (proportional to the αth power of the Hubble parameter, where α is a dimensionless constant for the power-law correction) in the acceleration equation, which can be derived from the holographic equipartition law. Interestingly, the value of the extra driving term in the present model is constrained by the second law of thermodynamics. From the thermodynamic constraint, the order of the driving term is found to be consistent with the order of the cosmological constant measured by observations. In addition, the driving term tends to be constant-like when α is small, i.e., when the deviation from the Bekenstein-Hawking entropy is small.
NASA Astrophysics Data System (ADS)
Drake, A. B.; Garel, T.; Wisotzki, L.; Leclercq, F.; Hashimoto, T.; Richard, J.; Bacon, R.; Blaizot, J.; Caruana, J.; Conseil, S.; Contini, T.; Guiderdoni, B.; Herenz, E. C.; Inami, H.; Lewis, J.; Mahler, G.; Marino, R. A.; Pello, R.; Schaye, J.; Verhamme, A.; Ventou, E.; Weilbacher, P. M.
2017-11-01
We present the deepest study to date of the Lyα luminosity function in a blank field using blind integral field spectroscopy from MUSE. We constructed a sample of 604 Lyα emitters (LAEs) across the redshift range 2.91 < z < 6.64 using automatic detection software in the Hubble Ultra Deep Field. The deep data cubes allowed us to calculate accurate total Lyα fluxes capturing low surface-brightness extended Lyα emission now known to be a generic property of high-redshift star-forming galaxies. We simulated realistic extended LAEs to fully characterise the selection function of our samples, and performed flux-recovery experiments to test and correct for bias in our determination of total Lyα fluxes. We find that an accurate completeness correction accounting for extended emission reveals a very steep faint-end slope of the luminosity function, α, down to luminosities of log10(L [erg s-1]) < 41.5, applying both the 1/Vmax and maximum likelihood estimators. Splitting the sample into three broad redshift bins, we see the faint-end slope increasing from -2.03 (+1.42/-0.07) at z ≈ 3.44 to -2.86 (+0.76/-∞) at z ≈ 5.48; however, no strong evolution is seen between the 68% confidence regions in L∗-α parameter space. Using the Lyα line flux as a proxy for star formation activity, and integrating the observed luminosity functions, we find that LAEs' contribution to the cosmic star formation rate density rises with redshift until it is comparable to that from continuum-selected samples by z ≈ 6. This implies that LAEs may contribute more to the star-formation activity of the early Universe than previously thought, as any additional intergalactic medium (IGM) correction would act to further boost the Lyα luminosities. Finally, assuming fiducial values for the escape of Lyα and LyC radiation, and the clumpiness of the IGM, we integrated the maximum likelihood luminosity function at 5.00
König, Gerhard; Hudson, Phillip S; Boresch, Stefan; Woodcock, H Lee
2014-04-08
The reliability of free energy simulations (FES) is limited by two factors: (a) the need for correct sampling and (b) the accuracy of the computational method employed. Classical methods (e.g., force fields) are typically used for FES and present a myriad of challenges, with parametrization being a principal one. On the other hand, parameter-free quantum mechanical (QM) methods tend to be too computationally expensive for adequate sampling. One widely used approach is a combination of methods, where the free energy difference between the two end states is computed by, e.g., molecular mechanics (MM), and the end states are corrected by more accurate methods, such as QM or hybrid QM/MM techniques. Here we report two new approaches that significantly improve the aforementioned scheme, with a focus on how to compute corrections between, e.g., the MM and the more accurate QM calculations. First, a molecular dynamics trajectory that properly samples relevant conformational degrees of freedom is generated. Next, potential energies of each trajectory frame are generated with a QM or QM/MM Hamiltonian. Free energy differences are then calculated based on the QM or QM/MM energies using either a non-Boltzmann Bennett approach (QM-NBB) or non-Boltzmann free energy perturbation (NB-FEP). Both approaches are applied to calculate relative and absolute solvation free energies in explicit and implicit solvent environments. Solvation free energy differences (relative and absolute) between ethane and methanol in explicit solvent are used as the initial test case for QM-NBB. Next, implicit solvent methods are employed in conjunction with both QM-NBB and NB-FEP to compute absolute solvation free energies for 21 compounds. These compounds range from small molecules such as ethane and methanol to fairly large, flexible solutes, such as triacetyl glycerol. Several technical aspects were investigated. Ultimately some best practices are suggested for improving methods that seek to connect MM to QM (or QM/MM) levels of theory in FES.
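The end-state correction idea, reduced to its simplest form, is a one-sided free energy perturbation from the MM to the QM Hamiltonian over MM-sampled frames: dA(MM->QM) = -kT ln < exp(-beta (E_QM - E_MM)) >_MM. The paper's QM-NBB and NB-FEP estimators are more elaborate; the sketch below shows only this basic building block, with placeholder energies.

```python
# Sketch: one-sided FEP correction from an MM to a QM Hamiltonian using
# MM-sampled frames (the basic building block behind the schemes above).
import numpy as np

def fep_mm_to_qm(e_mm, e_qm, kT=0.5922):   # kT in kcal/mol at ~298 K
    de = np.asarray(e_qm) - np.asarray(e_mm)
    m = np.min(de)                          # log-sum-exp for numerical stability
    return m - kT * np.log(np.mean(np.exp(-(de - m) / kT)))

e_mm = np.array([-10.2, -9.8, -10.5, -10.1])    # MM energies per frame (placeholder)
e_qm = np.array([-12.0, -11.1, -12.3, -11.8])   # QM energies, same frames
print("dA(MM->QM) =", round(fep_mm_to_qm(e_mm, e_qm), 3), "kcal/mol")
```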
Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine
2011-03-01
International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
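The nonparametric calculation such tools automate is straightforward to sketch: take the central 95% of the reference sample and bootstrap the 90% confidence interval of each limit. The n >= 40 convention and the 2.5th/97.5th percentiles follow the recommendations cited in the abstract; the code below is a generic illustration, not the tool itself.

```python
# Sketch: nonparametric reference interval with bootstrap 90% CIs on the limits.
import numpy as np

rng = np.random.default_rng(42)

def reference_interval(values, n_boot=2000):
    values = np.asarray(values, dtype=float)
    lo, hi = np.percentile(values, [2.5, 97.5])
    boots = rng.choice(values, size=(n_boot, values.size), replace=True)
    lo_ci = np.percentile(np.percentile(boots, 2.5, axis=1), [5, 95])
    hi_ci = np.percentile(np.percentile(boots, 97.5, axis=1), [5, 95])
    return (lo, lo_ci), (hi, hi_ci)

sample = rng.normal(100, 10, size=120)   # e.g., a measured analyte
(lo, lo_ci), (hi, hi_ci) = reference_interval(sample)
print(f"lower limit {lo:.1f} (90% CI {lo_ci[0]:.1f}-{lo_ci[1]:.1f})")
print(f"upper limit {hi:.1f} (90% CI {hi_ci[0]:.1f}-{hi_ci[1]:.1f})")
```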
NASA Astrophysics Data System (ADS)
Hudson, R. E.; Holder, A. J.; Hawkins, K. M.; Williams, P. R.; Curtis, D. J.
2017-12-01
The rheological characterisation of viscoelastic materials undergoing a sol-gel transition at the Gel Point (GP) has important applications in a wide range of industrial, biological, and clinical environments and can provide information regarding both kinetic and microstructural aspects of gelation. The most rigorous basis for identifying the GP involves exploiting the frequency dependence of the real and imaginary parts of the complex shear modulus of the critical gel (the system at the GP) measured under small amplitude oscillatory shear conditions. This approach to GP identification requires that rheological data be obtained over a range of oscillatory shear frequencies. Such measurements are limited by sample mutation considerations (at low frequencies) and, when experiments are conducted using combined motor-transducer (CMT) rheometers, by instrument inertia considerations (at high frequencies). Together, sample mutation and inertia induced artefacts can lead to significant errors in the determination of the GP. Overcoming such artefacts is important, however, as the extension of the range of frequencies available to the experimentalist promises both more accurate GP determination and the ability to study rapidly gelling samples. Herein, we exploit the frequency independent viscoelastic properties of the critical gel to develop and evaluate an enhanced rheometer inertia correction procedure. The procedure allows acquisition of valid GP data at previously inaccessible frequencies (using CMT rheometers) and is applied in a study of the concentration dependence of bovine gelatin gelation GP parameters. A previously unreported concentration dependence of the stress relaxation exponent (α) for critical gelatin gels has been identified, which approaches a limiting value (α = 0.7) at low gelatin concentrations, this being in agreement with previous studies and theoretical predictions for percolating systems at the GP.
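The frequency independence exploited here is the Winter-Chambon criterion: at the GP, tan(delta) is the same at all frequencies, and the relaxation exponent follows from tan(delta) = tan(alpha*pi/2). A generic sketch of locating the GP this way is below; the data layout is our assumption (rows = time points, columns = frequencies), not the paper's processing code.

```python
# Sketch: locating the gel point via the Winter-Chambon criterion
# (tan delta independent of frequency at the GP).
import numpy as np

def gel_point_index(g_prime, g_dprime):
    """Time index where tan(delta) varies least across frequency.
    Inputs have shape (n_times, n_freqs)."""
    tan_delta = g_dprime / g_prime
    spread = np.std(tan_delta, axis=1)      # frequency spread at each time
    return int(np.argmin(spread))

def relaxation_exponent(tan_delta_at_gp):
    """At the GP, tan(delta) = tan(alpha * pi / 2)."""
    return 2.0 / np.pi * np.arctan(tan_delta_at_gp)

print(relaxation_exponent(np.tan(0.7 * np.pi / 2)))  # recovers alpha = 0.7
```

The final print ties back to the abstract's limiting value of alpha = 0.7 for dilute gelatin.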
Qiu, Jianjun; Li, Yangyang; Huang, Qin; Wang, Yang; Li, Pengcheng
2013-11-18
In laser speckle contrast imaging, it was usually suggested that speckle size should exceed two camera pixels to eliminate the spatial averaging effect. In this work, we show the benefit of enhancing signal to noise ratio by correcting the speckle contrast at small speckle size. Through simulations and experiments, we demonstrated that local speckle contrast, even at speckle size much smaller than one pixel size, can be corrected through dividing the original speckle contrast by the static speckle contrast. Moreover, we show a 50% higher signal to noise ratio of the speckle contrast image at speckle size below 0.5 pixel size than that at speckle size of two pixels. These results indicate the possibility of selecting a relatively large aperture to simultaneously ensure sufficient light intensity and high accuracy and signal to noise ratio, making the laser speckle contrast imaging more flexible.
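The correction stated in the abstract is simply K_corr = K_raw / K_static, applied to local contrast maps computed with the same optics. The sliding-window size below is illustrative.

```python
# Sketch: local speckle contrast and the static-scatterer correction
# K_corr = K_raw / K_static described above.
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img, win=7):
    """Local speckle contrast K = sigma/mean over a sliding window."""
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return std / np.maximum(mean, 1e-12)

def corrected_contrast(raw_img, static_img, win=7):
    return local_contrast(raw_img, win) / np.maximum(
        local_contrast(static_img, win), 1e-12)

rng = np.random.default_rng(0)
raw, static = rng.random((64, 64)), rng.random((64, 64))
print(corrected_contrast(raw, static).mean())
```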
Methods and apparatus for measuring small leaks from carbon dioxide sequestration facilities
Nelson, Jr., David D.; Herndon, Scott C.
2018-01-02
In one embodiment, a CO.sub.2 leak detection instrument detects leaks from a site (e.g., a CO.sub.2 sequestration facility) using rapid concentration measurements of CO.sub.2, O.sub.2 and optionally water concentration that are achieved, for example, using laser spectroscopy (e.g. direct absorption laser spectroscopy). Water vapor in the sample gas may not be removed, or only partially removed. The sample gas may be collected using a multiplexed inlet assembly from a plurality of locations. CO.sub.2 and O.sub.2 concentrations may be corrected based on the water concentration. A resulting dataset of the CO.sub.2 and O.sub.2 concentrations is analyzed over time intervals to detect any changes in CO.sub.2 concentration that are not anti-correlated with O.sub.2 concentration, and to identify a potential CO.sub.2 leak in response thereto. The analysis may include determining eddy covariance flux measurements of sub-surface potential carbon.
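A sketch of the screening logic as we read the patent abstract: flag time windows where CO2 rises without the anti-correlated O2 dip that combustion or respiration would produce. The window length, correlation threshold, and rise threshold are illustrative, not values from the patent.

```python
# Sketch: flag CO2 increases that are NOT anti-correlated with O2
# (our reading of the leak-screening logic; thresholds illustrative).
import numpy as np

def flag_leak_windows(co2, o2, win=60, r_thresh=-0.5, rise_thresh=2.0):
    """Return start indices of windows consistent with a CO2 leak."""
    flags = []
    for i in range(0, len(co2) - win + 1, win):
        c, o = co2[i:i + win], o2[i:i + win]
        rise = c[-1] - c[0]                       # CO2 increase over window (ppm)
        r = np.corrcoef(c, o)[0, 1]               # CO2-O2 correlation
        if rise > rise_thresh and r > r_thresh:   # rising, but not anti-correlated
            flags.append(i)
    return flags

rng = np.random.default_rng(2)
t = np.arange(600)
o2 = 209000.0 + rng.normal(0, 5, t.size)          # ppm, no compensating O2 dip
co2 = 400.0 + 0.05 * t + rng.normal(0, 0.5, t.size)
print(flag_leak_windows(co2, o2))
```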
Study design in high-dimensional classification analysis.
Sánchez, Brisa N; Wu, Meihua; Song, Peter X K; Wang, Wen
2016-10-01
Advances in high throughput technology have accelerated the use of hundreds to millions of biomarkers to construct classifiers that partition patients into different clinical conditions. Prior to classifier development in actual studies, a critical need is to determine the sample size required to reach a specified classification precision. We develop a systematic approach for sample size determination in high-dimensional (large p, small n) classification analysis. Our method utilizes the probability of correct classification (PCC) as the optimization objective function and incorporates the higher criticism thresholding procedure for classifier development. Further, we derive the theoretical bound of maximal PCC gain from feature augmentation (e.g. when molecular and clinical predictors are combined in classifier development). Our methods are motivated and illustrated by a study using proteomics markers to classify post-kidney transplantation patients into stable and rejecting classes. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
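For readers unfamiliar with the higher criticism (HC) thresholding step: given per-feature p-values, one selects the threshold maximizing the HC objective of Donoho and Jin, and features with smaller p-values enter the classifier. The sketch below is our rendering of that standard procedure, not code from the paper.

```python
# Sketch: higher criticism threshold selection over per-feature p-values
# (standard Donoho-Jin form; alpha0 restricts the search range).
import numpy as np

def hc_threshold(pvals, alpha0=0.10):
    p = np.sort(np.asarray(pvals, dtype=float))
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
    imax = int(np.argmax(hc[: max(1, int(alpha0 * n))]))
    return p[imax]          # keep features with p-value <= this threshold

rng = np.random.default_rng(3)
pvals = np.concatenate([rng.uniform(0, 1, 950),      # null features
                        rng.uniform(0, 1e-4, 50)])   # informative features
print("HC threshold:", hc_threshold(pvals))
```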
Method of Menu Selection by Gaze Movement Using AC EOG Signals
NASA Astrophysics Data System (ADS)
Kanoh, Shin'ichiro; Futami, Ryoko; Yoshinobu, Tatsuo; Hoshimiya, Nozomu
A method to detect the direction and distance of voluntary eye gaze movements from EOG (electrooculogram) signals was proposed and tested. In this method, AC-amplified vertical and horizontal transient EOG signals were classified into 8 classes of direction and 2 classes of distance of voluntary eye gaze movements. The horizontal and vertical EOGs at each sampling time during a gaze movement were treated as a two-dimensional vector, and the center of gravity of the sample vectors whose norms were more than 80% of the maximum norm was used as the feature vector to be classified. Classification using the k-nearest neighbor algorithm yielded averaged correct detection rates for the individual subjects of 98.9%, 98.7%, and 94.4%, respectively. This method avoids strict EOG-based eye tracking, which requires DC amplification of very small signals. It would be useful for developing robust menu-selection-based human interfacing systems for severely paralyzed patients.
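The feature extraction and classification described here are simple to sketch: the feature is the centroid of the large-norm portion of the (horizontal, vertical) EOG trajectory, and classification is plain k-NN over the 16 classes (8 directions x 2 distances). Amplitudes and labels below are placeholders.

```python
# Sketch: EOG gaze-movement feature (centroid of samples with norm >= 80%
# of the maximum) and k-NN classification, per the description above.
import numpy as np

def eog_feature(h, v):
    """Centroid of the large-norm portion of a 2-D EOG trajectory."""
    pts = np.stack([h, v], axis=1)
    norms = np.linalg.norm(pts, axis=1)
    big = pts[norms >= 0.8 * norms.max()]
    return big.mean(axis=0)

def knn_predict(x, train_X, train_y, k=3):
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

X = np.array([[300.0, 0.0], [0.0, 300.0], [600.0, 0.0]])  # example centroids (uV)
y = np.array(["right-short", "up-short", "right-long"])
print(knn_predict(np.array([580.0, 20.0]), X, y, k=1))
```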
Takeda, Kohsuke; Norisuye, Tomohisa; Tran-Cong-Miyata, Qui
2013-07-01
Multi-echo reflection ultrasound spectroscopy (MERUS), which enables one to simultaneously evaluate the attenuation coefficient α, the sound velocity v and the density ρ, has been developed for measurements of elastic moduli. In the present study, the technique was further developed to analyze systems undergoing gelation where an unphysical decrease in the apparent density was previously observed after polymerization. The main reason for this problem was that the shrinkage accompanying the gelation led to a small gap between the cell wall and the sample, resulting in the superposition of the reflected signals which were not separable into individual components. By taking into account the multiply reflecting echoes at the interface of the gap, the corrected densities were systematically obtained and compared with the results for the floating test. The present technique opens a new route to simultaneously evaluate the three parameters α, v and ρ and also the sample thickness for solid thin films. Copyright © 2013 Elsevier B.V. All rights reserved.
New Entamoeba group in howler monkeys (Alouatta spp.) associated with parasites of reptiles.
Villanueva-García, Claudia; Gordillo-Chávez, Elías José; Baños-Ojeda, Carlos; Rendón-Franco, Emilio; Muñoz-García, Claudia Irais; Carrero, Julio César; Córdoba-Aguilar, Alex; Maravilla, Pablo; Galian, José; Martínez-Hernández, Fernando; Villalobos, Guiehdani
2017-08-01
Our knowledge of the parasite species present in wildlife hosts is incomplete. Protozoans such as amoebae of the genus Entamoeba infect a large variety of vertebrate species, including nonhuman primates (NHPs). Traditionally, however, their identification has been accomplished through microscopic evaluation, so amoeba species have not always been identified correctly. We searched for Entamoeba spp. using a fragment of the small subunit rDNA in free-ranging howler monkeys (Alouatta palliata and A. pigra) from southeast Mexico. One hundred fifty-five samples were collected (46 from A. palliata and 109 from A. pigra), of which 8 were positive. We detected a new clade of Entamoeba, separated from other described species but closest to E. insolita, as well as an unnamed sequence typically found in iguana species, with low shared identity values (<90%). We designated this new clade conditional lineage 8 (CL8) and have shown that members of this group are not exclusive to reptiles.
Wavelet-domain de-noising technique for THz pulsed spectroscopy
NASA Astrophysics Data System (ADS)
Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Gavdush, Arsenii A.; Fokina, Irina N.; Karasik, Valeriy E.; Reshetov, Igor V.; Kudrin, Konstantin G.; Nosov, Pavel A.; Yurchenko, Stanislav O.
2014-09-01
De-noising of terahertz (THz) pulsed spectroscopy (TPS) data is an essential problem, since noise in TPS data prevents correct reconstruction of the sample's spectral dielectric properties and hinders study of the sample's internal structure. There are certain regions of the TPS signal's Fourier spectrum where the Fourier-domain signal-to-noise ratio is relatively small. Effective de-noising could therefore expand the spectrometer's range of spectral sensitivity and reduce the waveform registration time, which is essential for biomedical applications of TPS. In this work, we show how recent progress in wavelet-domain signal processing can be used to de-noise TPS waveforms, and demonstrate effective de-noising of TPS data using the Fast Wavelet Transform (FWT). Results of the optimal wavelet basis selection and the wavelet-domain thresholding technique selection are reported. The developed technique is applied to reconstructing the spectral characteristics of in vivo healthy and diseased skin samples in the THz frequency range.
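A minimal sketch of FWT-based soft-threshold de-noising, assuming the PyWavelets library. The basis ('db4'), decomposition level, and universal threshold are illustrative choices; the paper reports its own basis and threshold selection.

```python
# Sketch: wavelet-domain soft-threshold de-noising of a 1-D waveform.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise scale from the finest detail coefficients (robust MAD estimate)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 1024)
noisy = np.exp(-((t - 0.5) / 0.05) ** 2) + 0.1 * rng.normal(size=t.size)
print("residual std:", np.std(wavelet_denoise(noisy) - noisy))
```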
NASA Astrophysics Data System (ADS)
Kelson, Julia R.; Huntington, Katharine W.; Schauer, Andrew J.; Saenger, Casey; Lechler, Alex R.
2017-01-01
Carbonate clumped isotope (Δ47) thermometry has been applied to a wide range of problems in earth, ocean and biological sciences over the last decade, but is still plagued by discrepancies among empirical calibrations that show a range of Δ47-temperature sensitivities. The most commonly suggested causes of these discrepancies are the method of mineral precipitation and analytical differences, including the temperature of the phosphoric acid used to digest carbonates. However, these mechanisms have yet to be tested in a consistent analytical setting, which makes it difficult to isolate the cause(s) of discrepancies and to evaluate which synthetic calibration is most appropriate for natural samples. Here, we systematically explore the impact of synthetic carbonate precipitation by replicating precipitation experiments of previous workers under a constant analytical setting. We (1) precipitate 56 synthetic carbonates at temperatures of 4-85 °C using different procedures to degas CO2, with and without the use of the enzyme carbonic anhydrase (CA) to promote rapid dissolved inorganic carbon (DIC) equilibration; (2) digest samples in phosphoric acid at both 90 °C and 25 °C; (3) hold constant all analytical methods including acid preparation, CO2 purification, and mass spectrometry; and (4) reduce our data with 17O corrections that are appropriate for our samples. We find that the CO2 degassing method does not influence Δ47 values of these synthetic carbonates, and therefore probably only influences natural samples with very rapid degassing rates, like speleothems that precipitate from drip solutions with high pCO2. CA in solution does not influence Δ47 values in this work, suggesting that disequilibrium in the DIC pool is negligible. We also find the Δ47 values of samples reacted in 25 and 90 °C acid are within error of each other (once corrected with a constant acid fractionation factor). Taken together, our results show that the Δ47-temperature relationship does not measurably change with either the precipitation methods used in this study or the acid digestion temperature. This leaves phosphoric acid preparation, CO2 gas purification, and/or data reduction methods as the possible sources of the discrepancy among published calibrations. In particular, the use of appropriate 17O corrections has the potential to reduce disagreement among calibrations. Our study nearly doubles the available synthetic carbonate calibration data for Δ47 thermometry (adding 56 samples to the 74 previously published samples). This large population size creates a robust calibration and enables us to examine the potential for calibration slope aliasing due to small sample size. The similarity of Δ47 values among carbonates precipitated under such diverse conditions suggests that many natural samples grown at 4-85 °C in moderate pH conditions (6-10) may also be described by our Δ47-temperature relationship.
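A minimal sketch of how such a calibration line is fit and inverted, assuming the standard linear form in 10⁶/T²; the data points below are invented for illustration, not the study's measurements:

```python
import numpy as np

# Hedged sketch: Delta47 is regressed against 1e6 / T^2 (T in kelvin), the
# conventional linear calibration form. All numbers are hypothetical.

T_kelvin = np.array([277.15, 298.15, 313.15, 333.15, 358.15])
D47 = np.array([0.712, 0.664, 0.636, 0.604, 0.569])   # made-up values

x = 1e6 / T_kelvin**2
slope, intercept = np.polyfit(x, D47, 1)
print(f"Delta47 = {slope:.4f} * 1e6/T^2 + {intercept:.4f}")

# Inverting the calibration to estimate growth temperature from a measured
# Delta47 value:
D47_meas = 0.650
T_est = np.sqrt(1e6 * slope / (D47_meas - intercept))
print(f"estimated growth temperature: {T_est - 273.15:.1f} C")
```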
Thonusin, Chanisa; IglayReger, Heidi B; Soni, Tanu; Rothberg, Amy E; Burant, Charles F; Evans, Charles R
2017-11-10
In recent years, mass spectrometry-based metabolomics has increasingly been applied to large-scale epidemiological studies of human subjects. However, the successful use of metabolomics in this context is subject to the challenge of detecting biologically significant effects despite substantial intensity drift that often occurs when data are acquired over a long period or in multiple batches. Numerous computational strategies and software tools have been developed to aid in correcting for intensity drift in metabolomics data, but most of these techniques are implemented using command-line-driven software and custom scripts which are not accessible to all end users of metabolomics data. Further, it has not yet become routine practice to assess the quantitative accuracy of drift correction against techniques which enable true absolute quantitation, such as isotope dilution mass spectrometry. We developed an Excel-based tool, MetaboDrift, to visually evaluate and correct for intensity drift in a multi-batch liquid chromatography-mass spectrometry (LC-MS) metabolomics dataset. The tool enables drift correction based either on quality control (QC) samples analyzed throughout the batches or on QC-sample-independent methods. We applied MetaboDrift to an original set of clinical metabolomics data from a mixed-meal tolerance test (MMTT). The performance of the method was evaluated for multiple classes of metabolites by comparison with normalization using isotope-labeled internal standards (IS). QC sample-based intensity drift correction significantly improved correlation with IS-normalized data, and resulted in detection of additional metabolites with significant physiological response to the MMTT. The relative merits of different QC-sample curve fitting strategies are discussed in the context of batch size and drift pattern complexity. Our drift correction tool offers a practical, simplified approach to drift correction and batch combination in large metabolomics studies. Copyright © 2017 Elsevier B.V. All rights reserved.
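MetaboDrift itself is an Excel tool; the following Python sketch (with hypothetical names and data) illustrates the underlying QC-based correction idea: fit a smooth curve to QC intensities versus injection order, then divide each sample by the fitted drift at its position:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hedged Python analogue of QC-based drift correction (not MetaboDrift's
# actual code). A spline is fit to QC-sample intensities versus injection
# order; every sample is divided by the fitted drift and rescaled to the
# median QC intensity.

def drift_correct(intensity, order, qc_mask, smoothing=None):
    """intensity: 1-D metabolite intensities in acquisition order.
    qc_mask: boolean array marking the interspersed QC injections."""
    fit = UnivariateSpline(order[qc_mask], intensity[qc_mask], s=smoothing)
    drift = fit(order)
    return intensity * np.median(intensity[qc_mask]) / drift

order = np.arange(100.0)
qc_mask = (order % 10 == 0)                 # one QC every 10 injections
rng = np.random.default_rng(1)
signal = 1000 * (1 - 0.003 * order) + 20 * rng.standard_normal(100)
corrected = drift_correct(signal, order, qc_mask)
```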
Martins, Thomas B.
2002-01-01
The ability of the Luminex system to simultaneously quantitate multiple analytes from a single sample source has proven to be a feasible and cost-effective technology for assay development. In previous studies, my colleagues and I introduced two multiplex profiles consisting of 20 individual assays into the clinical laboratory. With the Luminex instrument’s ability to classify up to 100 distinct microspheres, however, we have only begun to realize the enormous potential of this technology. By utilizing additional microspheres, it is now possible to add true internal controls to each individual sample. During the development of a seven-analyte serologic viral respiratory antibody profile, internal controls for detecting sample addition and interfering rheumatoid factor (RF) were investigated. To determine if the correct sample was added, distinct microspheres were developed for measuring the presence of sufficient quantities of immunoglobulin G (IgG) or IgM in the diluted patient sample. In a multiplex assay of 82 samples, the IgM verification control correctly identified 23 out of 23 samples with low levels (<20 mg/dl) of this antibody isotype. An internal control microsphere for RF detected 30 out of 30 samples with significant levels (>10 IU/ml) of IgM RF. Additionally, RF-positive samples causing false-positive adenovirus and influenza A virus IgM results were correctly identified. By exploiting the Luminex instrument’s multiplexing capabilities, I have developed true internal controls to ensure correct sample addition and identify interfering RF as part of a respiratory viral serologic profile that includes influenza A and B viruses, adenovirus, parainfluenza viruses 1, 2, and 3, and respiratory syncytial virus. Since these controls are not assay specific, they can be incorporated into any serologic multiplex assay. PMID:11777827
NASA Astrophysics Data System (ADS)
Fitz-Diaz, E.; Hall, C. M.; van der Pluijm, B.
2013-12-01
One of the fundamentals of 40Ar-39Ar systematics of illite concerns the effects of 39Ar recoil (ejection of 39Ar from tiny illite crystallites during the nuclear reaction 39K(n,p)39Ar), for which sample vacuum encapsulation prior to irradiation has been used since the 1990s. This technique separately measures the fraction of recoiled 39Ar and the Ar (39Ar and 40Ar) retained within illite crystals as they degas during step heating in vacuum. Total-gas ages (TGA) are calculated using both recoiled and retained argon, while retention ages (RA) involve only retained Ar. Observations from numerous natural examples have shown that TGA fit stratigraphic constraints on geological processes when the average illite crystallite thickness (ICT) is smaller than 10 nm, and that RA better match these constraints for larger ICTs. Illite crystals with ICT >50 nm show total-gas and retention ages within a few My of each other, and the two are identical, within analytical error, when ICT exceeds 150 nm. We propose a new age correction that takes into account the average ICT and the corresponding recoil for a sample, with the corrected ages (XCA) lying between the TGA and RA end-member ages. We apply this correction to samples containing one generation of illite; it particularly affects illite populations formed in the anchizone, with typical ICT values between 10 and 40 nm. We analyzed bentonitic samples (S1, S2 and S3) from sites in Cretaceous carbonates at the front of the Monterrey salient in northern Mexico. Four size fractions (<0.05, 0.05-0.2, 0.2-1 & 1-2 μm) were separated, analyzed with XRD and dated by Ar-Ar. XRD analysis provides mineralogic characterization, illite polytype quantification, and ICT determination using half-height peak width (illite crystallinity) and the Scherrer equation. All samples contain illite as the main mineral phase, with ICT values between 8 and 27 nm from the fine to the coarser grain-size fractions. TGA ranges among the size fractions of S1, S2 and S3 are 46-49, 36-43 and 40-52 My, respectively; the corresponding RA ranges are 54-64, 47-52 and 53-54 My. XCA calculations produce more tightly constrained ranges (53-57, 45.5-48.5 and 49-52 My) with an overall average of 51.1 ± 3.9 My. In the ICT vs. apparent age plot, authigenic illite grains show a slope that is in general slightly positive for TGA, slightly negative for RA, and close to zero for XCA. In the ICT vs. XCA plot, thinner crystallites show more dispersion than thicker ones. To test whether this dispersion in the ages of the finer/thinner illite is due to a different formation history at each site or is the result of retention capability, degassing spectra were modeled for site XCA averages and for the overall XCA average. The modeling shows that local site ages best match the measured spectra, rather than a single age for the combined sites. The closeness between experimental and artificial degassing patterns also supports the hypothesis that each sample preserves a single age population. All illite grains in these samples grew progressively during folding, in a time window constrained by the three sites. Small and large grains represent the same population in each sample, recording progressive degrees of grain growth (Ostwald ripening).
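The abstract does not give the functional form of the XCA correction, so the sketch below is only schematic: it assumes the recoiled fraction scales inversely with ICT for some effective recoil depth, and interpolates between the RA and TGA end members accordingly:

```python
# Schematic sketch only -- the paper's actual XCA formula is not given in the
# abstract. Here the recoiled fraction is modeled as f = min(1, 2*d/ICT) for
# an assumed effective recoil depth d (nm); the corrected age interpolates
# between the retention age (RA, f -> 0) and the total-gas age (TGA, f -> 1).

def xca(tga_my, ra_my, ict_nm, recoil_depth_nm=0.5):
    f = min(1.0, 2.0 * recoil_depth_nm / ict_nm)
    return f * tga_my + (1.0 - f) * ra_my

# Thick crystallites (ICT >> recoil depth) converge toward RA ~ TGA,
# consistent with the observed merging of ages above ~150 nm.
print(xca(tga_my=46.0, ra_my=54.0, ict_nm=10.0))    # anchizone-like illite
print(xca(tga_my=52.0, ra_my=53.0, ict_nm=150.0))   # coarse illite
```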
SU-E-T-623: Polarity Effects for Small Volume Ionization Chambers in Cobalt-60 Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Bhatnagar, J; Huq, M Saiful
2015-06-15
Purpose: To investigate the polarity effects for small volume ionization chambers in 60Co gamma-ray beams using the Leksell Gamma Knife Perfexion. Methods: Measurements were made for 7 small volume ionization chambers (a PTW 31016, an Exradin A14, 2 Capintec PR0-5P, and 3 Exradin A16) using a PTW UNIDOSwebline Universal Dosemeter and an ELEKTA solid water phantom with proper inserts. For each ion chamber, temperature/pressure-corrected electric charge readings were obtained for 16 voltage values (±50V, ±100V, ±200V, ±300V, ±400V, ±500V, ±600V, ±700V). For each voltage, a five-minute leakage charge reading and a series of 2-minute readings were taken continuously during irradiation until 5 stable signals (less than 0.05% variation) were obtained. The average of the 5 readings was then used to calculate the polarity correction at that voltage and to generate the saturation curves. Results: The polarity effects are more pronounced at high or low voltages than at medium voltages for all chambers studied. The voltage dependence of the 3 Exradin A16 chambers is similar in shape. The polarity corrections for the Exradin A16 chambers change rapidly from about 1 at 500V to about 0.98 at 700V. The polarity corrections for the 7 ion chambers at 300V range from 0.9925 (for the PTW 31016) to 1.0035 (for an Exradin A16). Conclusion: The polarity corrections for certain micro-chambers are large even at normal operating voltage.
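For reference, a minimal sketch of the standard (TG-51-style) polarity correction factor, which the quoted corrections presumably follow; the charge readings are made up:

```python
# Hedged sketch of the conventional polarity correction factor:
# P_pol = (|M_plus| + |M_minus|) / (2 * |M|), where M is the reading at the
# polarity routinely used for measurements.

def polarity_correction(m_plus, m_minus, m_routine):
    return (abs(m_plus) + abs(m_minus)) / (2.0 * abs(m_routine))

# Illustrative charge readings (nC) at +300 V and -300 V:
print(polarity_correction(m_plus=20.05, m_minus=-19.91, m_routine=20.05))
# -> ~0.9965
```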
Automated liver sampling using a gradient dual-echo Dixon-based technique.
Bashir, Mustafa R; Dale, Brian M; Merkle, Elmar M; Boll, Daniel T
2012-05-01
Magnetic resonance spectroscopy of the liver requires input from a physicist or physician at the time of acquisition to ensure proper voxel selection, while in multiecho chemical shift imaging, numerous regions of interest must be manually selected in order to ensure analysis of a representative portion of the liver parenchyma. A fully automated technique could improve workflow by selecting representative portions of the liver prior to human analysis. Complete volumes from three-dimensional gradient dual-echo acquisitions with two-point Dixon reconstruction acquired at 1.5 and 3 T were analyzed in 100 subjects, using an automated liver sampling algorithm based on ratio pairs calculated from signal intensity image data as fat-only/water-only and log(in-phase/opposed-phase) on a voxel-by-voxel basis. Using different gridding variations of the algorithm, the average correct liver volume samples ranged from 527 to 733 mL. The average percentage of sample located within the liver ranged from 95.4 to 97.1%, whereas the average incorrect volume selected was 16.5-35.4 mL (2.9-4.6%). Average run time was 19.7-79.0 s. The algorithm consistently selected large samples of the hepatic parenchyma with small amounts of erroneous extrahepatic sampling, and run times were feasible for execution on an MRI system console during exam acquisition. Copyright © 2011 Wiley Periodicals, Inc.
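A minimal sketch of the voxel-wise ratio pairs described above (array names and the eps guard are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

# Hedged sketch: from two-point Dixon reconstructions, compute the two ratio
# maps named in the abstract per voxel. Homogeneous liver parenchyma should
# form a tight cluster in (r1, r2) space, which is what lets an automated
# algorithm pick representative samples.

def dixon_ratio_pairs(fat, water, in_phase, opposed_phase, eps=1e-6):
    r1 = fat / (water + eps)                              # fat-only / water-only
    r2 = np.log((in_phase + eps) / (opposed_phase + eps)) # log(IP / OP)
    return r1, r2

rng = np.random.default_rng(0)
shape = (4, 4)
fat, water = rng.random(shape), rng.random(shape) + 1.0
ip, op = water + fat, water - fat + 1e-3                  # toy IP/OP images
r1, r2 = dixon_ratio_pairs(fat, water, ip, op)
```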
Multivariate classification of the infrared spectra of cell and tissue samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haaland, D.M.; Jones, H.D.; Thomas, E.V.
1997-03-01
Infrared microspectroscopy of biopsied canine lymph cells and tissue was performed to investigate the possibility of using IR spectra coupled with multivariate classification methods to classify the samples as normal, hyperplastic, or neoplastic (malignant). IR spectra were obtained in transmission mode through BaF2 windows and in reflection mode from samples prepared on gold-coated microscope slides. Cytology and histopathology samples were prepared by a variety of methods to identify the optimal methods of sample preparation. Cytospinning procedures that yielded a monolayer of cells on the BaF2 windows produced a limited set of IR transmission spectra. These transmission spectra were converted to absorbance and formed the basis for a classification rule that yielded 100% correct classification in a cross-validated context. Classifications of normal, hyperplastic, and neoplastic cell sample spectra were achieved by using both partial least-squares (PLS) and principal component regression (PCR) classification methods. Linear discriminant analysis applied to principal components obtained from the spectral data yielded a small number of misclassifications. PLS weight loading vectors yield valuable qualitative insight into the molecular changes that are responsible for the success of the infrared classification. These successful classification results show promise for assisting pathologists in the diagnosis of cell types and offer future potential for in vivo IR detection of some types of cancer. © 1997 Society for Applied Spectroscopy
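A hedged sketch of PLS-based classification (PLS-DA) on synthetic stand-in spectra: classes are dummy-coded and regressed with PLS, and the predicted class is the column with the largest score. This mirrors the PLS classification approach named above without being the authors' code:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hedged sketch of PLS-DA; synthetic data stand in for real IR spectra.
rng = np.random.default_rng(0)
n_per_class, n_wavenumbers = 30, 400
X = np.vstack([rng.standard_normal((n_per_class, n_wavenumbers)) + shift
               for shift in (0.0, 0.5, 1.0)])   # normal/hyperplastic/neoplastic
y = np.repeat([0, 1, 2], n_per_class)
Y = np.eye(3)[y]                                # one-hot dummy coding

pls = PLSRegression(n_components=5)
Y_hat = cross_val_predict(pls, X, Y, cv=10)     # cross-validated scores
accuracy = np.mean(Y_hat.argmax(axis=1) == y)
print(f"cross-validated accuracy: {accuracy:.2%}")
```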
Sub-nanometer Resolution Imaging with Amplitude-modulation Atomic Force Microscopy in Liquid
Farokh Payam, Amir; Piantanida, Luca; Cafolla, Clodomiro; Voïtchovsky, Kislon
2016-01-01
Atomic force microscopy (AFM) has become a well-established technique for nanoscale imaging of samples in air and in liquid. Recent studies have shown that when operated in amplitude-modulation (tapping) mode, atomic or molecular-level resolution images can be achieved over a wide range of soft and hard samples in liquid. In these situations, small oscillation amplitudes (SAM-AFM) enhance the resolution by exploiting the solvated liquid at the surface of the sample. Although the technique has been successfully applied across fields as diverse as materials science, biology, biophysics, and surface chemistry, obtaining high-resolution images in liquid can still remain challenging for novice users. This is partly due to the large number of variables to control and optimize, such as the choice of cantilever, the sample preparation, and the correct manipulation of the imaging parameters. Here, we present a protocol for achieving high-resolution images of hard and soft samples in fluid using SAM-AFM on a commercial instrument. Our goal is to provide a step-by-step practical guide to achieving high-resolution images, including the cleaning and preparation of the apparatus and the sample, the choice of cantilever and optimization of the imaging parameters. For each step, we explain the scientific rationale behind our choices to facilitate the adaptation of the methodology to every user's specific system. PMID:28060262
Steele, L. P. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Aspendale, Victoria, Australia; Krummel, P. B. [Commonwealth Scientific and Industrial Research Organization (CSIRO),; Langenfelds, R. L. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Aspendale, Victoria, Australia
2008-01-01
Individual measurements have been obtained from flask air samples returned to the CSIRO GASLAB. Typical sample storage times range from days to weeks for some sites (e.g. Cape Grim, aircraft over Tasmania and Bass Strait) to as much as one year for Macquarie Island and the Antarctic sites. Experiments carried out to test for changes in sample CO2 mixing ratio during storage have shown significant drifts in some flask types over test periods of several months to years (Cooper et al., 1999). Corrections derived from the test results are applied to network data according to flask type. These measurements indicate a rise in annual average atmospheric CO2 concentration from 357.72 parts per million by volume (ppmv) in 1992 to 383.05 ppmv in 2006, an increase in the annual average of about 1.81 ppmv/year. These flask data may be compared with other flask measurements from the Scripps Institution of Oceanography, available through 2004 in TRENDS; both indicate an annual average increase of 1.72 ppmv/year through 2004. Differences may be attributed to different sampling times or days, different numbers of samples, and different curve-fitting techniques used to obtain monthly and annual averages from flask data. Measurement error in flask data is believed to be small (Masarie et al., 2001).
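As a quick reader's check on the quoted growth rate (a verification, not part of the original record):

```latex
\frac{383.05 - 357.72\ \mathrm{ppmv}}{2006 - 1992\ \mathrm{yr}}
  \;=\; \frac{25.33\ \mathrm{ppmv}}{14\ \mathrm{yr}}
  \;\approx\; 1.81\ \mathrm{ppmv/yr}
```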
Fischer, H Felix; Wahl, Inka; Nolte, Sandra; Liegl, Gregor; Brähler, Elmar; Löwe, Bernd; Rose, Matthias
2017-12-01
To investigate differential item functioning (DIF) of PROMIS Depression items between US and German samples, we compared data from the US PROMIS calibration sample (n = 780), a German general population survey (n = 2,500) and a German clinical sample (n = 621). DIF was assessed in an ordinal logistic regression framework, with 0.02 as the criterion for R²-change and 0.096 for Raju's non-compensatory DIF. Item parameters were initially fixed to the PROMIS Depression metric; we used plausible values to account for uncertainty in depression estimates. Only four items showed DIF. Accounting for DIF led to negligible effects for the full item bank as well as for a post hoc simulated computer-adaptive test (<0.1 point on the PROMIS metric [mean = 50, standard deviation = 10]), while the effect on the short forms was small (<1 point). The mean depression severity (43.6) in the German general population sample was considerably lower than the US reference value of 50. Overall, we found little evidence for language DIF between US and German samples; it could be addressed either by replacing the DIF items with items not showing DIF or by scoring the short form in German samples with the corrected item parameters reported. Copyright © 2016 John Wiley & Sons, Ltd.
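A simplified sketch of DIF testing in a logistic regression framework; the study used ordinal logistic regression, whereas this toy example uses a single dichotomous item with plain logistic regression and McFadden's pseudo-R², applying the 0.02 R²-change criterion from the abstract:

```python
import numpy as np
import statsmodels.api as sm

# Hedged, simplified DIF sketch: compare a base model (item ~ trait) against
# a model adding group membership; flag DIF when pseudo-R^2 rises by > 0.02.
rng = np.random.default_rng(0)
n = 1000
theta = rng.standard_normal(n)                  # latent severity estimate
group = rng.integers(0, 2, n)                   # 0/1 group (hypothetical)
p = 1 / (1 + np.exp(-(theta + 0.6 * group)))    # built-in uniform DIF
item = rng.binomial(1, p)

base = sm.Logit(item, sm.add_constant(theta)).fit(disp=0)
full = sm.Logit(item, sm.add_constant(np.column_stack([theta, group]))).fit(disp=0)
r2_change = full.prsquared - base.prsquared
verdict = "DIF" if r2_change > 0.02 else "no DIF"
print(f"pseudo-R^2 change: {r2_change:.3f} -> {verdict}")
```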
Comparing State SAT Scores: Problems, Biases, and Corrections.
ERIC Educational Resources Information Center
Gohmann, Stephen F.
1988-01-01
One method to correct for selection bias in comparing Scholastic Aptitude Test (SAT) scores among states is presented, which is a modification of J. J. Heckman's Selection Bias Correction (1976, 1979). Empirical results suggest that sample selection bias is present in SAT score regressions. (SLD)
Length bias correction in one-day cross-sectional assessments - The nutritionDay study.
Frantal, Sophie; Pernicka, Elisabeth; Hiesmayr, Michael; Schindler, Karin; Bauer, Peter
2016-04-01
A major problem occurring in cross-sectional studies is sampling bias. Length of hospital stay (LOS) differs strongly between patients and causes a length bias, as patients with longer LOS are more likely to be included and are therefore overrepresented in this type of study. To adjust for the length bias, higher weights are allocated to patients with shorter LOS. We determined the effect of length-bias adjustment in two independent populations. Length-bias correction is applied to the data of the nutritionDay project, a one-day multinational cross-sectional audit capturing data on disease and nutrition of patients admitted to hospital wards, with right-censoring after 30 days of follow-up. We applied the weighting method for estimating the distribution function of patient baseline variables based on the method of non-parametric maximum likelihood. Results are validated using data from all patients admitted to the General Hospital of Vienna between 2005 and 2009, where the distribution of LOS can be assumed to be known. Additionally, a simplified calculation scheme for estimating the adjusted distribution function of LOS is demonstrated on a small patient example. The crude median (lower quartile; upper quartile) LOS in the cross-sectional sample was 14 (8; 24) and decreased to 7 (4; 12) when adjusted. Hence, adjustment for length bias in cross-sectional studies is essential to obtain appropriate estimates. Copyright © 2015 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
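A minimal sketch of the weighting idea: under length-biased sampling the inclusion probability is proportional to LOS, so weighting each observation by 1/LOS recovers the admission-cohort distribution. The simulation below is illustrative only:

```python
import numpy as np

# Hedged sketch of length-bias correction via 1/LOS weighting; a weighted
# quantile gives the adjusted median LOS.

def weighted_quantile(values, weights, q):
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w) / np.sum(w)
    return np.interp(q, cum, v)

rng = np.random.default_rng(0)
los_cohort = rng.lognormal(mean=2.0, sigma=0.8, size=100_000)  # admissions
# Cross-sectional sampling: inclusion probability proportional to LOS.
pick = rng.random(los_cohort.size) < los_cohort / los_cohort.max()
los_sampled = los_cohort[pick]

print(f"crude median:    {np.median(los_sampled):.1f}")
print(f"adjusted median: {weighted_quantile(los_sampled, 1/los_sampled, 0.5):.1f}")
print(f"true median:     {np.median(los_cohort):.1f}")
```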
Classification of plum spirit drinks by synchronous fluorescence spectroscopy.
Sádecká, J; Jakubíková, M; Májek, P; Kleinová, A
2016-04-01
Synchronous fluorescence spectroscopy was used in combination with principal component analysis (PCA) and linear discriminant analysis (LDA) for the differentiation of plum spirits according to their geographical origin. A total of 14 Czech, 12 Hungarian and 18 Slovak plum spirit samples were used. The samples were divided into two categories: colorless (22 samples) and colored (22 samples). Synchronous fluorescence spectra (SFS) obtained at a wavelength difference of 60 nm provided the best results. Considering the PCA-LDA applied to the SFS of all samples, Czech, Hungarian and Slovak colorless samples were properly classified in both the calibration and prediction sets. Correct classification of 100% was also obtained for Czech and Hungarian colored samples. However, one group of Slovak colored samples was classified as belonging to the Hungarian group in the calibration set. Thus, the total correct classifications obtained were 94% and 100% for the calibration and prediction steps, respectively. The results were compared with those obtained using near-infrared (NIR) spectroscopy. Applying PCA-LDA to NIR spectra (5500-6000 cm⁻¹), the total correct classifications were 91% and 92% for the calibration and prediction steps, respectively, slightly lower than those obtained using SFS. Copyright © 2015 Elsevier Ltd. All rights reserved.
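A minimal sketch of the PCA-LDA chain on synthetic stand-in spectra (not the study's data; component counts are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hedged sketch: spectra are compressed to a few principal components,
# then LDA separates the geographic origins.
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((15, 300)) + shift
               for shift in (0.0, 0.4, 0.8)])   # Czech / Hungarian / Slovak
y = np.repeat(["CZ", "HU", "SK"], 15)

model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(f"mean cross-validated correct classification: {scores.mean():.2%}")
```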
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neitzel, R.; Naeher, L., P.; Paulsen, M.
2009-04-01
Urinary methoxyphenols (MPs) have been proposed as biomarkers of woodsmoke exposure. However, few field studies have been undertaken to evaluate the relationship between woodsmoke exposure and urinary MP concentrations. We conducted a pilot study at the US Forest Service Savannah River Site, in which carbon monoxide (CO), levoglucosan (LG), and particulate matter (PM2.5) exposures were measured in wildland firefighters on prescribed burn days. Pre- and post-shift urine samples were collected from each subject, and cross-shift changes in creatinine-corrected urinary MP concentrations were calculated. Correlations between exposure measures and creatinine-adjusted urinary MP concentrations were explored, and regression models were developed relating changes in urinary MP concentrations to measured exposure levels. Full-shift measurements were made on 13 firefighters over 20 work shifts in winter 2004 at the US Forest Service Savannah River Site, a National Environmental Research Park. The average work-shift length across the 20 measured shifts was 701 ± 95 min. LG and CO exposures were significantly correlated for samples where the filter measurement captured at least 60% of the work shift (16 samples), as well as for the smaller set of full-shift exposure samples (n = 9). PM2.5 and CO exposures were not significantly correlated, and LG and PM2.5 exposures were only significantly correlated for samples representing at least 60% of the work shift. Creatinine-corrected urinary concentrations for 20 of the 22 MPs showed cross-shift increases, with 14 of these changes showing statistical significance. Individual and summed creatinine-adjusted guaiacol urinary MPs were highly associated with CO (and, to a lesser degree, LG) exposure levels, and random-effects regression models including CO and LG exposure levels explained up to 80% of the variance in cross-shift changes in summed creatinine-adjusted guaiacol urinary MP concentrations. Although limited by the small sample size, this pilot study demonstrates that urinary MP concentrations may be effective biomarkers of occupational exposure to wood smoke among wildland firefighters.
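A minimal sketch of the creatinine correction used above; concentrations and units are illustrative, not study data:

```python
# Hedged sketch: spot-urine analyte concentrations are divided by urinary
# creatinine to adjust for dilution; the cross-shift change is the post-
# minus pre-shift corrected value.

def creatinine_corrected(analyte_ug_per_L, creatinine_g_per_L):
    return analyte_ug_per_L / creatinine_g_per_L   # ug analyte / g creatinine

pre = creatinine_corrected(12.0, 1.4)
post = creatinine_corrected(31.0, 0.9)
print(f"cross-shift change: {post - pre:.1f} ug/g creatinine")
```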
Correction of bias in belt transect studies of immotile objects
Anderson, D.R.; Pospahala, R.S.
1970-01-01
Unless a correction is made, population estimates derived from a sample of belt transects will be biased if a fraction of the individuals on the sample transects is not counted. An approach useful for correcting this bias when sampling immotile populations using transects of a fixed width is presented. The method assumes that a searcher's ability to find objects near the center of the transect is nearly perfect. The method uses a mathematical equation, estimated from the data, to represent the searcher's inability to find all objects at increasing distances from the center of the transect. An example of the analysis of data, formation of the equation, and application is presented using waterfowl nesting data collected in Colorado.
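A hedged sketch of the correction idea in modern terms: fit a detection function g(x) that decays with distance from the transect centerline (assuming near-perfect detection at the center), integrate it to get an effective strip half-width, and scale the raw count. The half-normal form and all numbers are assumptions, not the paper's equation:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

def g(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2))   # assumed half-normal detection

# Hypothetical detection fractions binned by distance from centerline (m):
x_mid = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
detected_fraction = np.array([0.98, 0.90, 0.64, 0.37, 0.15])
(sigma_hat,), _ = curve_fit(g, x_mid, detected_fraction, p0=[5.0])

w = 10.0                                     # nominal strip half-width (m)
mu, _ = quad(lambda x: g(x, sigma_hat), 0.0, w)   # effective half-width
n_seen = 42
print(f"corrected count: {n_seen * w / mu:.0f}")
```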
IMPROVED SPECTROPHOTOMETRIC CALIBRATION OF THE SDSS-III BOSS QUASAR SAMPLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margala, Daniel; Kirkby, David; Dawson, Kyle
2016-11-10
We present a model for spectrophotometric calibration errors in observations of quasars from the third generation of the Sloan Digital Sky Survey Baryon Oscillation Spectroscopic Survey (BOSS) and describe the correction procedure we have developed and applied to this sample. Calibration errors are primarily due to atmospheric differential refraction and guiding offsets during each exposure. The corrections potentially reduce the systematics for any studies of BOSS quasars, including the measurement of baryon acoustic oscillations using the Ly α forest. Our model suggests that, on average, the observed quasar flux in BOSS is overestimated by ∼19% at 3600 Å and underestimated by ∼24% at 10,000 Å. Our corrections for the entire BOSS quasar sample are publicly available.
A Ground Validation Network for the Global Precipitation Measurement Mission
NASA Technical Reports Server (NTRS)
Schwaller, Mathew R.; Morris, K. Robert
2011-01-01
A prototype Validation Network (VN) is currently operating as part of the Ground Validation System for NASA's Global Precipitation Measurement (GPM) mission. The VN supports precipitation retrieval algorithm development in the GPM prelaunch era. Postlaunch, the VN will be used to validate GPM spacecraft instrument measurements and retrieved precipitation data products. The period of record for the VN prototype starts on 8 August 2006 and runs to the present day. The VN database includes spacecraft data from the Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) and coincident ground radar (GR) data from operational meteorological networks in the United States, Australia, Korea, and the Kwajalein Atoll in the Marshall Islands. Satellite and ground radar data products are collected whenever the PR satellite track crosses within 200 km of a VN ground radar, and these data are stored permanently in the VN database. VN products are generated from coincident PR and GR observations when a significant rain event occurs. The VN algorithm matches PR and GR radar data (including retrieved precipitation data in the case of the PR) by calculating averages of PR reflectivity (both raw and attenuation corrected) and rain rate, and GR reflectivity at the geometric intersection of the PR rays with the individual GR elevation sweeps. The algorithm thus averages the minimum PR and GR sample volumes needed to "matchup" the spatially coincident PR and GR data types. The result of this technique is a set of vertical profiles for a given rainfall event, with coincident PR and GR samples matched at specified heights throughout the profile. VN data can be used to validate satellite measurements and to track ground radar calibration over time. A comparison of matched TRMM PR and GR radar reflectivity factor data found a remarkably small difference between the PR and GR radar reflectivity factor averaged over this period of record in stratiform and convective rain cases when samples were taken from high in the atmosphere. A significant difference in PR and GR reflectivity was found in convective cases, particularly in convective samples from the lower part of the atmosphere. In this case, the mean difference between PR and corrected GR reflectivity was -1.88 dBZ. The PR-GR bias was found to increase with the amount of PR attenuation correction applied, with the PR-GR bias reaching -3.07 dBZ in cases where the attenuation correction applied is greater than 6 dBZ. Additional analysis indicated that the version 6 TRMM PR retrieval algorithm underestimates rainfall in case of convective rain in the lower part of the atmosphere by 30%-40%.
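One practical detail worth making explicit, though the abstract does not state it: reflectivity comparisons like these are conventionally averaged in linear Z units rather than in dBZ, since dBZ is logarithmic and averaging it directly biases the result low. A minimal sketch:

```python
import numpy as np

# Hedged sketch of a standard consideration when averaging radar
# reflectivity: convert dBZ to linear Z (mm^6/m^3), average, convert back.

def mean_dbz(dbz_samples):
    z_linear = 10.0 ** (np.asarray(dbz_samples) / 10.0)
    return 10.0 * np.log10(z_linear.mean())

samples = [30.0, 40.0, 50.0]
print(f"linear-average: {mean_dbz(samples):.1f} dBZ")   # ~45.7 dBZ
print(f"naive dB mean:  {np.mean(samples):.1f} dBZ")    # 40.0 dBZ
```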
Experiences from the testing of a theory for modelling groundwater flow in heterogeneous media
Christensen, S.; Cooley, R.L.
2002-01-01
Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift, and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.
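A minimal statement of the corrected interval in standard regression notation (the factor c comes from the true-error second-moment matrix described above; the notation is assumed, not quoted from the paper):

```latex
\hat{y} \;\pm\; c \, t_{1-\alpha/2,\; n-p} \, s_{\hat{y}}
```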
Acharya, Sayantan; Nandi, Manoj K; Mandal, Arkajit; Sarkar, Sucharita; Bhattacharyya, Sarika Maitra
2015-08-27
We study the diffusion of small solute particles through a solvent, keeping the solute-solvent interaction repulsive and varying the solvent properties. The study involves computer simulations, development of a new model to describe the diffusion of small solutes in a solvent, and mode coupling theory (MCT) calculations. In a viscous solvent, a small solute diffuses via coupling to the solvent hydrodynamic modes and also through the transient cages formed by the solvent. The model developed can estimate the independent contributions from these two different channels of diffusion. Although solute diffusion is amplified in all the systems, the degree of amplification increases with solvent viscosity. The model correctly predicts that when the solvent viscosity is high, the solute primarily diffuses by exploiting the solvent cages. In such a scenario, the MCT diffusion computed for a static solvent provides a correct estimate of the cage diffusion.
Howard, Michelle; Lytwyn, Alice; Lohfeld, Lynne; Redwood-Campbell, Lynda; Fowler, Nancy; Karwalajtys, Tina
2009-01-01
Immigrant and low socio-economic (SES) women in North America underutilize Papanicolaou screening. Vaginal swab self-sampling for oncogenic human papillomavirus (HPV) has the potential to increase cervical cancer screening participation. The purpose of this qualitative study was to understand the perceptions of lower SES and immigrant women regarding self-sampling for HPV. Eleven focus-group interviews were conducted: one with Canadian-born English-speaking lower SES women, and two groups each with Arabic, Cantonese, Dari (Afghani), Somali and Spanish (Latino)-speaking women (one group conducted in English, the other in the native language) recently immigrated to Canada. Five to nine women aged 35 to 65 years and married with children participated in each group. Themes included 1) who might use self-sampling and why; 2) aversion to self-sampling and reasons to prefer physician; 3) ways to improve the appeal of self-sampling. Women generally perceived benefits of self-sampling and a small number felt they might use the method, but all groups had some reservations. Reasons included: uncertainty over performing the sampling correctly; fear of hurting themselves; concern about obtaining appropriate material; and concerns about test accuracy. Women preferred testing by a health care professional because they were accustomed to pelvic examinations, it was more convenient, or they trusted the results. Perceptions of self-sampling for HPV were similar across cultures and pertained to issues of confidence in self-sampling and need for physician involvement in care. These findings can inform programs and studies planning to employ self-sampling as a screening modality for cervical cancer.
Supergravity inflation free from harmful relics
NASA Astrophysics Data System (ADS)
Greene, Patrick B.; Kadota, Kenji; Murayama, Hitoshi
2003-08-01
We present a realistic supergravity inflation model that is free from the overproduction of potentially dangerous relics in cosmology, namely, moduli and gravitinos, which can lead to inconsistencies with the predictions of baryon asymmetry and nucleosynthesis. The radiative correction turns out to play a crucial role in our analysis, raising the mass of the supersymmetry breaking field to an intermediate scale. We pay particular attention to the nonthermal production of gravitinos using the nonminimal Kähler potential we obtained from loop correction. This nonthermal gravitino production is diminished, however, because of the relatively small scale of the inflaton mass and the small amplitudes of the hidden sector fields.
A single-scattering correction for the seismo-acoustic parabolic equation.
Collins, Michael D
2012-04-01
An efficient single-scattering correction that does not require iterations is derived and tested for the seismo-acoustic parabolic equation. The approach is applicable to problems involving gradual range dependence in a waveguide with fluid and solid layers, including the key case of a sloping fluid-solid interface. The single-scattering correction is asymptotically equivalent to a special case of a single-scattering correction for problems that only have solid layers [Küsel et al., J. Acoust. Soc. Am. 121, 808-813 (2007)]. The single-scattering correction has a simple interpretation (conservation of interface conditions in an average sense) that facilitated its generalization to problems involving fluid layers. Promising results are obtained for problems in which the ocean bottom interface has a small slope.
Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models
ERIC Educational Resources Information Center
Raykov, Tenko
2005-01-01
A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between the average of resampled conventional noncentrality parameter estimates and their sample counterpart. The…
NASA Technical Reports Server (NTRS)
Waegell, Mordecai J.; Palacios, David M.
2011-01-01
Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
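A hedged Python analogue of the approach (not the MATLAB source): measure each frame's sub-pixel translation by Fourier-domain phase correlation, then shift it back:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

# Hedged sketch: each frame's (row, col) offset relative to a reference frame
# is measured with sub-pixel upsampled phase correlation, then undone.

def dejitter(frames, ref_index=0, upsample=20):
    ref = frames[ref_index]
    corrected = []
    for frame in frames:
        offset, _, _ = phase_cross_correlation(ref, frame,
                                               upsample_factor=upsample)
        corrected.append(nd_shift(frame, offset))
    return np.stack(corrected)

rng = np.random.default_rng(0)
base = rng.random((64, 64))
frames = np.stack([base, np.roll(base, (2, -3), axis=(0, 1))])  # jittered copy
aligned = dejitter(frames)
```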
Correcting for batch effects in case-control microbiome studies
Gibbons, Sean M.; Duvallet, Claire
2018-01-01
High-throughput data generation platforms, such as mass spectrometry, microarrays, and second-generation sequencing, are susceptible to batch effects due to run-to-run variation in reagents, equipment, protocols, or personnel. Currently, batch correction methods are not commonly applied to microbiome sequencing datasets. In this paper, we compare different batch-correction methods applied to microbiome case-control studies. We introduce a model-free normalization procedure where features (i.e. bacterial taxa) in case samples are converted to percentiles of the equivalent features in control samples within a study prior to pooling data across studies. We look at how this percentile-normalization method compares to traditional meta-analysis methods for combining independent p-values and to limma and ComBat, widely used batch-correction models developed for RNA microarray data. Overall, we show that percentile-normalization is a simple, non-parametric approach for correcting batch effects and improving sensitivity in case-control meta-analyses. PMID:29684016
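A minimal sketch of percentile-normalization as described, using hypothetical data: within one study, each case-sample feature value is replaced by its percentile in the control distribution for that feature:

```python
import numpy as np
from scipy.stats import percentileofscore

# Hedged sketch of the model-free percentile-normalization procedure.

def percentile_normalize(cases, controls):
    """cases, controls: 2-D arrays (samples x taxa) from one study."""
    out = np.empty_like(cases, dtype=float)
    for j in range(cases.shape[1]):
        out[:, j] = [percentileofscore(controls[:, j], v)
                     for v in cases[:, j]]
    return out

rng = np.random.default_rng(0)
controls = rng.lognormal(size=(40, 5))
cases = rng.lognormal(mean=0.3, size=(30, 5))
print(percentile_normalize(cases, controls)[:2].round(1))
```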
Drag Corrections in High-Speed Wind Tunnels
NASA Technical Reports Server (NTRS)
Ludwieg, H.
1947-01-01
In the vicinity of a body in a wind tunnel, the displacement effect of the wake, due to the finite dimensions of the stream, produces a pressure gradient which evokes a change of drag. In incompressible flow this change of drag is, in general, so small that one does not have to take it into account in wind-tunnel measurements; in compressible flow, however, it becomes considerably larger, so that a correction factor is necessary for measured values. Correction factors for a closed tunnel and an open jet with circular cross sections are calculated and compared with the drag corrections already known for high-speed tunnels.
Hayashi, Toshiyuki; Fukui, Tomoyasu; Nakanishi, Noriko; Yamamoto, Saki; Tomoyasu, Masako; Osamura, Anna; Ohara, Makoto; Yamamoto, Takeshi; Ito, Yasuki; Hirano, Tsutomu
2017-11-13
Following publication of the original article [1], the authors identified a number of errors. In Result (P.3), Table 1 (P.4), Table 5 (P.9) and Supplementary Table 1, the correct unit for adiponectin was μg/mL. In Table 1 (P.4), the correct value for the post treatment body weight in dapagliflozin was 76.2±14.8. In Table 6 (P.10), the correct value for the pre treatment sd LDL/LDL-C in decreased LDL-C group was 0.38±0.10.
Dryland pasture and crop conditions as seen by HCMM. [Washita River watershed, Oklahoma
NASA Technical Reports Server (NTRS)
Rosenthal, W. D.; Harlan, J. C.; Blanchard, B. J. (Principal Investigator)
1980-01-01
Heat capacity mapping mission (HCMM) data were obtained for use in enhancing estimates of soil moisture content. Day-to-day thermal IR differences between data from August 31 and October 17 were analyzed. Atmospheric corrections on HCMM pass dates were calculated using the RADTRA model. Differences between corrections using lake temperatures and calculated temperatures were small.
A Comparison of EFL Teachers' and Students' Attitudes to Oral Corrective Feedback
ERIC Educational Resources Information Center
Roothooft, Hanne; Breeze, Ruth
2016-01-01
A relatively small number of studies on beliefs about oral corrective feedback (CF) have uncovered a mismatch between teachers' and students' attitudes which is potentially harmful to the language learning process, not only because students may become demotivated when their expectations are not met, but also because teachers appear to be reluctant…
ERIC Educational Resources Information Center
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim
2014-01-01
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
A method for rapidly marking adult varroa mites for use in brood inoculation experiments
USDA-ARS?s Scientific Manuscript database
We explored a method for marking varroa mites using correction fluid (PRESTO!TM Jumbo Correction Pen, Pentel Co., Ltd., Japan). Individual mites were placed on a piece of nylon mesh (165 mesh) to prevent the mites from moving during marking. A small piece of nylon fishing line (diameter = 0.30 mm)...
Holographic corrections to the Veneziano amplitude
NASA Astrophysics Data System (ADS)
Armoni, Adi; Ireson, Edwin
2017-08-01
We propose a holographic computation of the 2 → 2 meson scattering in a curved string background, dual to a QCD-like theory. We recover the Veneziano amplitude and compute a perturbative correction due to the background curvature. The result implies a small deviation from a linear trajectory, which is a requirement of the UV regime of QCD.
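For reference, the four-point Veneziano amplitude recovered by the computation, written with a linear Regge trajectory (its standard textbook form; the holographic correction computed in the paper perturbs α(x) away from linearity):

```latex
A(s,t) \;=\; \frac{\Gamma\!\left(-\alpha(s)\right)\,\Gamma\!\left(-\alpha(t)\right)}
                  {\Gamma\!\left(-\alpha(s)-\alpha(t)\right)},
\qquad \alpha(x) \;=\; \alpha(0) + \alpha' x .
```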