Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, cross-validation is commonly used; however, we show that cross-validation not only is computationally intensive but can also be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
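For readers unfamiliar with the Cholesky-based banded estimator and the cross-validation band-size selection discussed above, the following Python sketch illustrates both; it is a minimal illustration under a Gaussian working model, not the authors' proposed test, and the function names and the likelihood-based CV score are our own choices.

```python
import numpy as np

def banded_cholesky_precision(X, k):
    """Modified-Cholesky estimate of a banded precision matrix.
    Each variable j is regressed on at most its k immediate predecessors
    (given the variable ordering); Omega = T' D^{-1} T."""
    X = X - X.mean(axis=0)
    n, p = X.shape
    T = np.eye(p)
    d = np.empty(p)
    d[0] = X[:, 0].var()
    for j in range(1, p):
        lo = max(0, j - k)
        Z, y = X[:, lo:j], X[:, j]
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        T[j, lo:j] = -coef
        d[j] = np.mean((y - Z @ coef) ** 2)
    return T.T @ np.diag(1.0 / d) @ T

def cv_band_size(X, k_max, n_splits=5, seed=0):
    """Pick the band size by Gaussian-likelihood cross-validation
    (the common practice the paper argues can be unstable)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, n_splits)
    scores = []
    for k in range(k_max + 1):
        score = 0.0
        for fold in folds:
            train = np.setdiff1d(idx, fold)
            omega = banded_cholesky_precision(X[train], k)
            S_val = np.cov(X[fold], rowvar=False)
            _, logdet = np.linalg.slogdet(omega)
            # Gaussian validation log-likelihood, up to constants
            score += logdet - np.trace(S_val @ omega)
        scores.append(score)
    return int(np.argmax(scores))
```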
ERIC Educational Resources Information Center
Spencer, Bryden
2016-01-01
Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…
Cubison, M. J.; Jimenez, J. L.
2015-06-05
Least-squares fitting of overlapping peaks is often needed to separately quantify ions in high-resolution mass spectrometer data. A statistical simulation approach is used to assess the statistical precision of the retrieved peak intensities. The sensitivity of the fitted peak intensities to statistical noise due to ion counting is probed for synthetic data systems consisting of two overlapping ion peaks whose positions are pre-defined and fixed in the fitting procedure. The fitted intensities are sensitive to imperfections in the m/Q calibration. These propagate as a limiting precision in the fitted intensities that may greatly exceed the precision arising from counting statistics. The precision on the fitted peak intensity falls into one of three regimes. In the "counting-limited regime" (regime I), above a peak separation χ ~ 2 to 3 half-widths at half-maximum (HWHM), the intensity precision is similar to that due to counting error for an isolated ion. For smaller χ and higher ion counts (~ 1000 and higher), the intensity precision rapidly degrades as the peak separation is reduced ("calibration-limited regime", regime II). Alternatively, for χ < 1.6 but lower ion counts (e.g. 10–100), the intensity precision is dominated by the additional ion count noise from the overlapping ion and is not affected by the imprecision in the m/Q calibration ("overlapping-limited regime", regime III). The transition between the counting and m/Q calibration-limited regimes is shown to be weakly dependent on resolving power and data spacing and can thus be approximated by a simple parameterisation based only on peak intensity ratios and separation. A simple equation can be used to find potentially problematic ion pairs when evaluating results from fitted spectra containing many ions. Longer integration times can improve the precision in regimes I and III, but a given ion pair can only be moved out of regime II through increased spectrometer resolving power. As a result, studies presenting data obtained from least-squares fitting procedures applied to mass spectral peaks should explicitly consider these limits on statistical precision.
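The three regimes can be summarized in a small helper function; the thresholds below simply restate the approximate boundaries quoted in the abstract and are indicative only, since the exact transitions depend on resolving power, data spacing, and intensity ratio.

```python
def precision_regime(chi_hwhm, ion_counts):
    """Rough classification of which error source limits the fitted peak
    intensity (chi in HWHM units; thresholds are indicative only)."""
    if chi_hwhm >= 2.5:                              # well separated peaks
        return "I: counting-limited"
    if ion_counts >= 1000:                           # close peaks, high counts
        return "II: m/Q calibration-limited"
    if chi_hwhm < 1.6 and ion_counts <= 100:         # close peaks, low counts
        return "III: overlapping-ion-limited"
    return "transition region"
```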
NASA Astrophysics Data System (ADS)
Adams, T.; Batra, P.; Bugel, L.; Camilleri, L.; Conrad, J. M.; de Gouvêa, A.; Fisher, P. H.; Formaggio, J. A.; Jenkins, J.; Karagiorgi, G.; Kobilarcik, T. R.; Kopp, S.; Kyle, G.; Loinaz, W. A.; Mason, D. A.; Milner, R.; Moore, R.; Morfín, J. G.; Nakamura, M.; Naples, D.; Nienaber, P.; Olness, F. I.; Owens, J. F.; Pate, S. F.; Pronin, A.; Seligman, W. G.; Shaevitz, M. H.; Schellman, H.; Schienbein, I.; Syphers, M. J.; Tait, T. M. P.; Takeuchi, T.; Tan, C. Y.; van de Water, R. G.; Yamamoto, R. K.; Yu, J. Y.
We extend the physics case for a new high-energy, ultra-high statistics neutrino scattering experiment, NuSOnG (Neutrino Scattering On Glass), to address a variety of issues including precision QCD measurements, extraction of structure functions, and the derived Parton Distribution Functions (PDFs). This experiment uses a Tevatron-based neutrino beam to obtain a sample of Deep Inelastic Scattering (DIS) events which is over two orders of magnitude larger than past samples. We outline an innovative method for fitting the structure functions using a parametrized energy shift which yields reduced systematic uncertainties. High statistics measurements, in combination with improved systematics, will enable NuSOnG to perform discerning tests of fundamental Standard Model parameters as we search for deviations which may hint at "Beyond the Standard Model" physics.
NASA Astrophysics Data System (ADS)
Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.
2018-02-01
In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.
A spatial scan statistic for nonisotropic two-level risk cluster.
Li, Xiao-Zhou; Wang, Jin-Feng; Yang, Wei-Zhong; Li, Zhong-Jie; Lai, Sheng-Jie
2012-01-30
Spatial scan statistic methods are commonly used for geographical disease surveillance and cluster detection. The standard spatial scan statistic does not model any variability in the underlying risks of subregions belonging to a detected cluster. For a multilevel risk cluster, the isotonic spatial scan statistic can model a centralized high-risk kernel in the cluster. Because variations in disease risks are anisotropic owing to different social, economic, or transport factors, the real high-risk kernel will not necessarily take the central place in a whole cluster area. We propose a spatial scan statistic for a nonisotropic two-level risk cluster, which can be used to detect a whole cluster and a noncentralized high-risk kernel within the cluster simultaneously. The performance of the three methods (the standard, isotonic, and proposed nonisotropic two-level statistics) was evaluated through an intensive simulation study. Our proposed nonisotropic two-level method showed better power and geographical precision with two-level risk cluster scenarios, especially for a noncentralized high-risk kernel. Our proposed method is illustrated using the hand-foot-mouth disease data in Pingdu City, Shandong, China in May 2009, compared with the two other methods. In this practical study, the nonisotropic two-level method is the only way to precisely detect a high-risk area in a detected whole cluster.
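As background, the score assigned to a single candidate zone by the standard Poisson scan statistic (which the isotonic and proposed two-level statistics generalize) can be written as a log-likelihood ratio; the sketch below shows only that baseline quantity, with the significance of the maximum over zones normally assessed by Monte Carlo replication.

```python
import numpy as np

def poisson_scan_llr(cases_in, expected_in, cases_total, expected_total):
    """Kulldorff-type log-likelihood ratio for one candidate zone under a
    Poisson model; zero if the zone is not at elevated relative risk."""
    c, e = cases_in, expected_in
    C, E = cases_total, expected_total
    if c / e <= (C - c) / (E - e):
        return 0.0
    return c * np.log(c / e) + (C - c) * np.log((C - c) / (E - e))

# In a full scan the statistic is max_z LLR(z) over all candidate zones z,
# and its null distribution is obtained by Monte Carlo randomization.
print(poisson_scan_llr(cases_in=40, expected_in=20, cases_total=400, expected_total=400))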
A Study of Particle Beam Spin Dynamics for High Precision Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiedler, Andrew J.
In the search for physics beyond the Standard Model, high precision experiments to measure fundamental properties of particles are an important frontier. One group of such measurements involves magnetic dipole moment (MDM) values as well as searching for an electric dipole moment (EDM), both of which could provide insights about how particles interact with their environment at the quantum level and if there are undiscovered new particles. For these types of high precision experiments, minimizing statistical uncertainties in the measurements plays a critical role. This work leverages computer simulations to quantify the effects of statistical uncertainty for experiments investigating spin dynamics. In it, analysis of beam properties and lattice design effects on the polarization of the beam is performed. As a case study, the beam lines that will provide polarized muon beams to the Fermilab Muon g-2 experiment are analyzed to determine the effects of correlations between the phase space variables and the overall polarization of the muon beam.
Towards Precision Spectroscopy of Baryonic Resonances
NASA Astrophysics Data System (ADS)
Döring, Michael; Mai, Maxim; Rönchen, Deborah
2017-01-01
Recent progress in baryon spectroscopy is reviewed. In a common effort, various groups have analyzed a set of new high-precision polarization observables from ELSA. The Jülich-Bonn group has finalized the analysis of pion-induced meson-baryon production, the photoproduction of pions and eta mesons, and (almost) the KΛ final state. As data become more precise, statistical aspects in the analysis of excited baryons become increasingly relevant, and several advances in this direction are proposed.
Towards precision spectroscopy of baryonic resonances
Doring, Michael; Mai, Maxim; Ronchen, Deborah
2017-01-26
Recent progress in baryon spectroscopy is reviewed. In a common effort, various groups have analyzed a set of new high-precision polarization observables from ELSA. The Julich-Bonn group has finalized the analysis of pion-induced meson-baryon production, the photoproduction of pions and eta mesons, and (almost) the KΛ final state. Lastly, as data become more precise, statistical aspects in the analysis of excited baryons become increasingly relevant, and several advances in this direction are proposed.
Design of a novel instrument for active neutron interrogation of artillery shells.
Bélanger-Champagne, Camille; Vainionpää, Hannes; Peura, Pauli; Toivonen, Harri; Eerola, Paula; Dendooven, Peter
2017-01-01
The most common explosives can be uniquely identified by measuring the elemental H/N ratio with a precision better than 10%. Monte Carlo simulations were used to design two variants of a new prompt gamma neutron activation instrument that can achieve this precision. The instrument features an intense pulsed neutron generator with precise timing. Measuring the hydrogen peak from the target explosive is especially challenging because the instrument itself contains hydrogen, which is needed for neutron moderation and shielding. By iterative design optimization, the fraction of the hydrogen peak counts coming from the explosive under interrogation increased from 53(+7/−7)% to 74(+8/−10)% (statistical only) for the benchmark design. In the optimized design variants, the hydrogen signal from a high-explosive shell can be measured to a statistics-only precision better than 1% in less than 30 minutes for an average neutron production yield of 10^9 n/s.
Design of a novel instrument for active neutron interrogation of artillery shells
Vainionpää, Hannes; Peura, Pauli; Toivonen, Harri; Eerola, Paula; Dendooven, Peter
2017-01-01
The most common explosives can be uniquely identified by measuring the elemental H/N ratio with a precision better than 10%. Monte Carlo simulations were used to design two variants of a new prompt gamma neutron activation instrument that can achieve this precision. The instrument features an intense pulsed neutron generator with precise timing. Measuring the hydrogen peak from the target explosive is especially challenging because the instrument itself contains hydrogen, which is needed for neutron moderation and shielding. By iterative design optimization, the fraction of the hydrogen peak counts coming from the explosive under interrogation increased from 53(+7/−7)% to 74(+8/−10)% (statistical only) for the benchmark design. In the optimized design variants, the hydrogen signal from a high-explosive shell can be measured to a statistics-only precision better than 1% in less than 30 minutes for an average neutron production yield of 10^9 n/s. PMID:29211773
NASA Astrophysics Data System (ADS)
Sarkar, Arnab; Karki, Vijay; Aggarwal, Suresh K.; Maurya, Gulab S.; Kumar, Rohit; Rai, Awadhesh K.; Mao, Xianglei; Russo, Richard E.
2015-06-01
Laser induced breakdown spectroscopy (LIBS) was applied for elemental characterization of high alloy steel using partial least squares regression (PLSR), with the objective of evaluating the analytical performance of this multivariate approach. The optimization of the number of principal components for minimizing error in the PLSR algorithm was investigated. The effect of different pre-treatment procedures on the raw spectral data before PLSR analysis was evaluated based on several statistical parameters (standard error of prediction, percentage relative error of prediction, etc.). The pre-treatment with the "NORM" parameter gave the optimum statistical results. The analytical performance of the PLSR model improved by increasing the number of laser pulses accumulated per spectrum as well as by truncating the spectrum to an appropriate wavelength region. It was found that the statistical benefit of truncating the spectrum can also be accomplished by increasing the number of laser pulses per accumulation without spectral truncation. The constituents (Co and Mo) present at hundreds of ppm were determined with a relative precision of 4-9% (2σ), whereas the major constituents Cr and Ni (present at a few percent levels) were determined with a relative precision of ~2% (2σ).
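A minimal scikit-learn sketch of the PLSR workflow described above follows; the interpretation of the "NORM" pretreatment as scaling each spectrum to unit total intensity is an assumption, and the variable names and fold count are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def norm_pretreatment(spectra):
    """'NORM'-style pretreatment assumed here: scale each spectrum to unit total intensity."""
    return spectra / spectra.sum(axis=1, keepdims=True)

def pls_sep(spectra, conc, n_components):
    """Fit PLSR and return the cross-validated standard error of prediction (SEP).

    spectra: (n_samples, n_channels) LIBS intensities; conc: reference concentrations."""
    X = norm_pretreatment(spectra)
    pls = PLSRegression(n_components=n_components)
    pred = cross_val_predict(pls, X, conc, cv=5).ravel()
    return np.sqrt(np.mean((pred - conc) ** 2))

# Choosing the number of latent variables that minimizes SEP (the optimization step described above):
# best_k = min(range(1, 16), key=lambda k: pls_sep(spectra, conc, k))
```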
NASA Astrophysics Data System (ADS)
Ford, Eric B.
2009-05-01
We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce GTX 280 and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ²) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating n_sys ≥ 1024 model planetary systems, each containing n_pl = 4 planets, and assuming n_obs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
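The core numerical ingredients are easy to sketch in NumPy (the production code described above is CUDA): a Newton iteration for Kepler's equation, and a small demonstration of why the mean anomaly, computed from the observation time and orbital period, needs double precision; the epoch and period values below are hypothetical.

```python
import numpy as np

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation E - e*sin(E) = M for the eccentric anomaly by Newton iteration."""
    E = M + e * np.sin(M)                  # reasonable starting guess for e < 1
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.all(np.abs(dE) < tol):
            break
    return E

# Why the mean anomaly needs double precision: for a ~10 yr baseline and a ~3 d period,
# t/P is ~1000 orbits, and float32 (~7 significant digits) leaves only ~1e-4 of a cycle
# of phase resolution. Hypothetical values:
t, P = 3650.25, 3.3                                        # days
M64 = 2 * np.pi * (np.float64(t) / np.float64(P) % 1.0)
M32 = 2 * np.pi * np.float64(np.float32(t) / np.float32(P) % np.float32(1.0))
print(abs(M64 - M32))                                      # phase error from single-precision arithmetic
```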
Siebers, Jeffrey V
2008-04-04
Monte Carlo (MC) is rarely used for IMRT plan optimization outside of research centres due to the extensive computational resources or long computation times required to complete the process. Time can be reduced by degrading the statistical precision of the MC dose calculation used within the optimization loop. However, this eventually introduces optimization convergence errors (OCEs). This study determines the statistical noise levels tolerated during MC-IMRT optimization under the condition that the optimized plan has OCEs <100 cGy (1.5% of the prescription dose) for MC-optimized IMRT treatment plans. Seven-field prostate IMRT treatment plans for 10 prostate patients are used in this study. Pre-optimization is performed for deliverable beams with a pencil-beam (PB) dose algorithm. Further deliverable-based optimization proceeds using: (1) MC-based optimization, where dose is recomputed with MC after each intensity update, or (2) a once-corrected (OC) MC-hybrid optimization, where a MC dose computation defines beam-by-beam dose correction matrices that are used during a PB-based optimization. Optimizations are performed with nominal per beam MC statistical precisions of 2, 5, 8, 10, 15, and 20%. Following optimizer convergence, beams are re-computed with MC using 2% per beam nominal statistical precision, and the 2 PTV and 10 OAR dose indices used in the optimization objective function are tallied. For both the MC-optimization and OC-optimization methods, statistical equivalence tests found that OCEs are less than 1.5% of the prescription dose for plans optimized with nominal statistical uncertainties of up to 10% per beam. The achieved statistical uncertainty in the patient for the 10% per beam simulations from the combination of the 7 beams is ~3% with respect to maximum dose for voxels with D > 0.5 D_max. The MC dose computation time for the OC-optimization is only 6.2 minutes on a single 3 GHz processor, with results clinically equivalent to high-precision MC computations.
Verzotto, Davide; M Teo, Audrey S; Hillmer, Axel M; Nagarajan, Niranjan
2016-01-01
Resolution of complex repeat structures and rearrangements in the assembly and analysis of large eukaryotic genomes is often aided by a combination of high-throughput sequencing and genome-mapping technologies (for example, optical restriction mapping). In particular, mapping technologies can generate sparse maps of large DNA fragments (150 kilo base pairs (kbp) to 2 Mbp) and thus provide a unique source of information for disambiguating complex rearrangements in cancer genomes. Despite their utility, combining high-throughput sequencing and mapping technologies has been challenging because of the lack of efficient and sensitive map-alignment algorithms for robustly aligning error-prone maps to sequences. We introduce a novel seed-and-extend glocal (short for global-local) alignment method, OPTIMA (and a sliding-window extension for overlap alignment, OPTIMA-Overlap), which is the first to create indexes for continuous-valued mapping data while accounting for mapping errors. We also present a novel statistical model, agnostic with respect to technology-dependent error rates, for conservatively evaluating the significance of alignments without relying on expensive permutation-based tests. We show that OPTIMA and OPTIMA-Overlap outperform other state-of-the-art approaches (1.6-2 times more sensitive) and are more efficient (170-200 %) and precise in their alignments (nearly 99 % precision). These advantages are independent of the quality of the data, suggesting that our indexing approach and statistical evaluation are robust, provide improved sensitivity and guarantee high precision.
Ultra-High Precision Half-Life Measurement for the Superallowed β+ Emitter ^26Al^m
NASA Astrophysics Data System (ADS)
Finlay, P.; Demand, G.; Garrett, P. E.; Leach, K. G.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Grinyer, G. F.; Leslie, J. R.; Andreoiu, C.; Cross, D.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Djongolov, M.; Ettenauer, S.; Hackman, G.; Pearson, C. J.; Williams, S. J.
2009-10-01
The calculated nuclear structure dependent correction for ^26Al^m (δC − δNS = 0.305(27)% [1]) is smaller by nearly a factor of two than the other twelve precision superallowed cases, making it an ideal case to pursue a reduction in the experimental errors contributing to the Ft value. An ultra-high precision half-life measurement for the superallowed β+ emitter ^26Al^m has been made at the Isotope Separator and Accelerator (ISAC) facility at TRIUMF in Vancouver, Canada. A beam of ~10^5 ^26Al^m/s was delivered in October 2007 and its decay was observed using a 4π continuous gas flow proportional counter as part of an ongoing experimental program in superallowed Fermi β decay studies. With a statistical precision of ~0.008%, the present work represents the single most precise measurement of any superallowed half-life to date. [1] I.S. Towner and J.C. Hardy, Phys. Rev. C 79, 055502 (2009).
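As a rough illustration of how such a statistical precision arises, the toy fit below extracts a half-life and its statistical uncertainty from simulated binned decay counts; the rates, background, and binning are hypothetical, and this is not the collaboration's analysis chain.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, n0, half_life, bkg):
    """Counts per time bin: exponential decay plus a flat background."""
    return n0 * np.exp(-np.log(2) * t / half_life) + bkg

rng = np.random.default_rng(1)
t = np.arange(0.0, 60.0, 0.5)                       # bin times in seconds (hypothetical)
expected = decay(t, n0=5e4, half_life=6.35, bkg=50.0)   # ^26Al^m half-life is ~6.35 s
counts = rng.poisson(expected)

popt, pcov = curve_fit(decay, t, counts, p0=(4e4, 6.0, 10.0),
                       sigma=np.sqrt(np.maximum(counts, 1)), absolute_sigma=True)
t_half, sigma_t_half = popt[1], np.sqrt(pcov[1, 1])
print(f"T1/2 = {t_half:.4f} +/- {sigma_t_half:.4f} s "
      f"({100 * sigma_t_half / t_half:.3f}% statistical)")
```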
Ultra-High Precision Half-Life Measurement for the Superallowed β+ Emitter ^26Al^m
NASA Astrophysics Data System (ADS)
Finlay, P.; Demand, G.; Garrett, P. E.; Leach, K. G.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Ball, G. C.; Bandyopadhyay, D.; Djongolov, M.; Ettenauer, S.; Hackman, G.; Pearson, C. J.; Williams, S. J.; Andreoiu, C.; Cross, D.; Austin, R. A. E.; Grinyer, G. F.; Leslie, J. R.
2008-10-01
The calculated nuclear structure dependent correction for ^26Al^m (δC − δNS = 0.305(27)% [1]) is smaller by nearly a factor of two than the other twelve precision superallowed cases, making it an ideal case to pursue a reduction in the experimental errors contributing to the Ft value. An ultra-high precision half-life measurement for the superallowed β+ emitter ^26Al^m has been made using a 4π continuous gas flow proportional counter as part of an ongoing experimental program in superallowed Fermi β decay studies at the Isotope Separator and Accelerator (ISAC) facility at TRIUMF in Vancouver, Canada, which delivered a beam of ~10^5 ^26Al^m/s in October 2007. With a statistical precision of ~0.008%, the present work represents the single most precise measurement of any superallowed half-life to date. [1] I.S. Towner and J.C. Hardy, Phys. Rev. C 77, 025501 (2008).
Influence of Waveform Characteristics on LiDAR Ranging Accuracy and Precision
Yang, Bingwei; Xie, Xinhao; Li, Duan
2018-01-01
Time-of-flight (TOF) light detection and ranging (LiDAR) calculates distance from the time of flight between start/stop signals. In a lab-built LiDAR system, two ranging systems measure this flight time: a time-to-digital converter (TDC) that counts the time between trigger signals, and an analog-to-digital converter (ADC) that processes the sampled start/stop pulse waveforms for time estimation. We study the influence of waveform characteristics on the range accuracy and precision of the two kinds of ranging system. Comparing waveform-based ranging (WR) with analog discrete-return ranging (AR), a peak detection method (WR-PK) shows the best ranging performance because of its short execution time, high ranging accuracy, and stable precision. Based on the maximal information coefficient (MIC), a novel statistical measure of dependence, WR-PK precision has a strong linear relationship with the standard deviation of the received pulse width. Thus, keeping the received pulse width as stable as possible when measuring a constant distance can improve ranging precision. PMID:29642639
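A minimal sketch of the two quantities involved, assuming synthetic waveforms and ideal sampling: converting a round-trip time of flight to range, and a WR-PK-style estimate that takes the peak sample of each recorded pulse.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_from_tof(t_flight_s):
    """Round-trip time of flight to one-way distance."""
    return 0.5 * C * t_flight_s

def wr_pk_range(t_samples, start_wave, stop_wave):
    """Waveform-based ranging with peak detection (WR-PK style):
    use the sample times of the start- and stop-pulse maxima."""
    t_start = t_samples[np.argmax(start_wave)]
    t_stop = t_samples[np.argmax(stop_wave)]
    return range_from_tof(t_stop - t_start)
```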
Demonstration of improved sensitivity of echo interferometers to gravitational acceleration
NASA Astrophysics Data System (ADS)
Mok, C.; Barrett, B.; Carew, A.; Berthiaume, R.; Beattie, S.; Kumarakrishnan, A.
2013-08-01
We have developed two configurations of an echo interferometer that rely on standing-wave excitation of a laser-cooled sample of rubidium atoms. Both configurations can be used to measure acceleration a along the axis of excitation. For a two-pulse configuration, the signal from the interferometer is modulated at the recoil frequency and exhibits a sinusoidal frequency chirp as a function of pulse spacing. In comparison, for a three-pulse stimulated-echo configuration, the signal is observed without recoil modulation and exhibits a modulation at a single frequency as a function of pulse spacing. The three-pulse configuration is less sensitive to effects of vibrations and magnetic field curvature, leading to a longer experimental time scale. For both configurations of the atom interferometer (AI), we show that a measurement of acceleration with a statistical precision of 0.5% can be realized by analyzing the shape of the echo envelope that has a temporal duration of a few microseconds. Using the two-pulse AI, we obtain measurements of acceleration that are statistically precise to 6 parts per million (ppm) on a 25 ms time scale. In comparison, using the three-pulse AI, we obtain measurements of acceleration that are statistically precise to 0.4 ppm on a time scale of 50 ms. A further statistical enhancement is achieved by analyzing the data across the echo envelope so that the statistical error is reduced to 75 parts per billion (ppb). The inhomogeneous field of a magnetized vacuum chamber limited the experimental time scale and resulted in prominent systematic effects. Extended time scales and improved signal-to-noise ratio observed in recent echo experiments using a nonmagnetic vacuum chamber suggest that echo techniques are suitable for a high-precision measurement of gravitational acceleration g. We discuss methods for reducing systematic effects and improving the signal-to-noise ratio. Simulations of both AI configurations with a time scale of 300 ms suggest that an optimized experiment with improved vibration isolation and atoms selected in the mF=0 state can result in measurements of g statistically precise to 0.3 ppb for the two-pulse AI and 0.6 ppb for the three-pulse AI.
Status and outlook of CHIP-TRAP: The Central Michigan University high precision Penning trap
NASA Astrophysics Data System (ADS)
Redshaw, M.; Bryce, R. A.; Hawks, P.; Gamage, N. D.; Hunt, C.; Kandegedara, R. M. E. B.; Ratnayake, I. S.; Sharp, L.
2016-06-01
At Central Michigan University we are developing a high-precision Penning trap mass spectrometer (CHIP-TRAP) that will focus on measurements with long-lived radioactive isotopes. CHIP-TRAP will consist of a pair of hyperbolic precision-measurement Penning traps, and a cylindrical capture/filter trap in a 12 T magnetic field. Ions will be produced by external ion sources, including a laser ablation source, and transported to the capture trap at low energies, enabling ions of a given m/q ratio to be selected via their time-of-flight. In the capture trap, contaminant ions will be removed with a mass-selective rf dipole excitation and the ion of interest will be transported to the measurement traps. A phase-sensitive image charge detection technique will be used for simultaneous cyclotron frequency measurements on single ions in the two precision traps, resulting in a reduction in statistical uncertainty due to magnetic field fluctuations.
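The measurement principle rests on the free-space cyclotron frequency relation ν_c = qB/(2πm), so a ratio of simultaneously measured frequencies yields a mass ratio with the magnetic field cancelling to first order; the helper below is a generic sketch, not CHIP-TRAP analysis code.

```python
import numpy as np

def cyclotron_frequency(mass_kg, charge_C, B_tesla):
    """Free-space cyclotron frequency nu_c = q * B / (2 * pi * m)."""
    return charge_C * B_tesla / (2 * np.pi * mass_kg)

def mass_ratio(nu_ref, nu_ion, q_ref=1, q_ion=1):
    """m_ion / m_ref from simultaneously measured cyclotron frequencies;
    the shared magnetic field cancels in the ratio."""
    return (q_ion / q_ref) * (nu_ref / nu_ion)
```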
ERIC Educational Resources Information Center
Essid, Hedi; Ouellette, Pierre; Vigeant, Stephane
2010-01-01
The objective of this paper is to measure the efficiency of high schools in Tunisia. We use a statistical data envelopment analysis (DEA)-bootstrap approach with quasi-fixed inputs to estimate the precision of our measure. To do so, we developed a statistical model serving as the foundation of the data generation process (DGP). The DGP is…
1998 Conference on Precision Electromagnetic Measurements Digest. Proceedings.
NASA Astrophysics Data System (ADS)
Nelson, T. L.
The following topics were dealt with: fundamental constants; caesium standards; AC-DC transfer; impedance measurement; length measurement; units; statistics; cryogenic resonators; time transfer; QED; resistance scaling and bridges; mass measurement; atomic fountains and clocks; single electron transport; Newtonian constant of gravitation; stabilised lasers and frequency measurements; cryogenic current comparators; optical frequency standards; high voltage devices and systems; international compatibility; magnetic measurement; precision power measurement; high resolution spectroscopy; DC transport standards; waveform acquisition and analysis; ion trap standards; optical metrology; quantised Hall effect; Josephson array comparisons; signal generation and measurement; Avogadro constant; microwave networks; wideband power standards; antennas, fields and EMC; quantum-based standards.
Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G
2016-05-09
The aim of this study was to evaluate the suitability of several statistics as measures of the degree of experimental precision in trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy, the F-test values for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy, the F-test values for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) for evaluating experimental precision in trials with cowpea genotypes.
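For orientation, one commonly used relation (assumed here, and not necessarily the exact definition used in the study) links selective accuracy to the genotype F-test value, SA = sqrt(1 − 1/F); the coefficient of variation is included for comparison since the study found it less suitable.

```python
import numpy as np

def selective_accuracy(f_genotype):
    """Selective accuracy from the genotype F-test value, using the commonly
    used relation SA = sqrt(1 - 1/F) (assumed here; meaningful for F >= 1)."""
    return np.sqrt(1.0 - 1.0 / f_genotype) if f_genotype >= 1 else 0.0

def cv_percent(residual_mean_square, grand_mean):
    """Coefficient of experimental variation, CV% = 100 * sqrt(MSE) / mean."""
    return 100.0 * np.sqrt(residual_mean_square) / grand_mean
```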
Fully stabilized mid-infrared frequency comb for high-precision molecular spectroscopy.
Vainio, Markku; Karhu, Juho
2017-02-20
A fully stabilized mid-infrared optical frequency comb spanning from 2.9 to 3.4 µm is described in this article. The comb is based on half-harmonic generation in a femtosecond optical parametric oscillator, which transfers the high phase coherence of a fully stabilized near-infrared Er-doped fiber laser comb to the mid-infrared region. The method is simple, as no phase-locked loops or reference lasers are needed. Precise locking of optical frequencies of the mid-infrared comb to the pump comb is experimentally verified at the sub-20 mHz level, which corresponds to a fractional statistical uncertainty of 2 × 10^-16 at the center frequency of the mid-infrared comb. The fully stabilized mid-infrared comb is an ideal tool for high-precision molecular spectroscopy, as well as for optical frequency metrology in the mid-infrared region, which is difficult to access with other stabilized frequency comb techniques.
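A quick arithmetic check of the quoted fractional uncertainty, assuming a comb centre near 3.15 µm (the midpoint of the stated 2.9–3.4 µm span):

```python
# Rough check of the quoted fractional uncertainty (centre wavelength assumed ~3.15 um)
c = 299_792_458.0                    # speed of light, m/s
nu_center = c / 3.15e-6              # ~9.5e13 Hz
lock_uncertainty = 20e-3             # Hz (sub-20 mHz residual)
print(lock_uncertainty / nu_center)  # ~2e-16, matching the value quoted above
```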
Dylla, Daniel P.; Megison, Susan D.
2015-01-01
Objective. We compared the precision of a search strategy designed specifically to retrieve randomized controlled trials (RCTs) and systematic reviews of RCTs with search strategies designed for broader purposes. Methods. We designed an experimental search strategy that automatically revised searches up to five times by using increasingly restrictive queries as long as at least 50 citations were retrieved. We compared the ability of the experimental and alternative strategies to retrieve studies relevant to 312 test questions. The primary outcome, search precision, was defined for each strategy as the proportion of relevant, high quality citations among the first 50 citations retrieved. Results. The experimental strategy had the highest median precision (5.5%; interquartile range [IQR]: 0%–12%) followed by the narrow strategy of the PubMed Clinical Queries (4.0%; IQR: 0%–10%). The experimental strategy found the most high quality citations (median 2; IQR: 0–6) and was the strategy most likely to find at least one high quality citation (73% of searches; 95% confidence interval 68%–78%). All comparisons were statistically significant. Conclusions. The experimental strategy performed the best in all outcomes although all strategies had low precision. PMID:25922798
PRECISE:PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare
Chen, Feng; Wang, Shuang; Mohammed, Noman; Cheng, Samuel; Jiang, Xiaoqian
2015-01-01
Quality improvement (QI) requires systematic and continuous efforts to enhance healthcare services. A healthcare provider might wish to compare local statistics with those from other institutions in order to identify problems and develop intervention to improve the quality of care. However, the sharing of institution information may be deterred by institutional privacy as publicizing such statistics could lead to embarrassment and even financial damage. In this article, we propose a PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare (PRECISE), which aims at enabling cross-institution comparison of healthcare statistics while protecting privacy. The proposed framework relies on a set of state-of-the-art cryptographic protocols including homomorphic encryption and Yao’s garbled circuit schemes. By securely pooling data from different institutions, PRECISE can rank the encrypted statistics to facilitate QI among participating institutes. We conducted experiments using MIMIC II database and demonstrated the feasibility of the proposed PRECISE framework. PMID:26146645
PRECISE:PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare.
Chen, Feng; Wang, Shuang; Mohammed, Noman; Cheng, Samuel; Jiang, Xiaoqian
2014-10-01
Quality improvement (QI) requires systematic and continuous efforts to enhance healthcare services. A healthcare provider might wish to compare local statistics with those from other institutions in order to identify problems and develop intervention to improve the quality of care. However, the sharing of institution information may be deterred by institutional privacy as publicizing such statistics could lead to embarrassment and even financial damage. In this article, we propose a PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare (PRECISE), which aims at enabling cross-institution comparison of healthcare statistics while protecting privacy. The proposed framework relies on a set of state-of-the-art cryptographic protocols including homomorphic encryption and Yao's garbled circuit schemes. By securely pooling data from different institutions, PRECISE can rank the encrypted statistics to facilitate QI among participating institutes. We conducted experiments using MIMIC II database and demonstrated the feasibility of the proposed PRECISE framework.
Determination of the pion-nucleon coupling constant and scattering lengths
NASA Astrophysics Data System (ADS)
Ericson, T. E.; Loiseau, B.; Thomas, A. W.
2002-07-01
We critically evaluate the isovector Goldberger-Miyazawa-Oehme (GMO) sum rule for forward πN scattering using the recent precision measurements of π⁻p and π⁻d scattering lengths from pionic atoms. We deduce the charged-pion-nucleon coupling constant, with careful attention to systematic and statistical uncertainties. This determination gives, directly from data, g_c^2(GMO)/4π = 14.11 ± 0.05 (statistical) ± 0.19 (systematic), or f_c^2/4π = 0.0783(11). This value is intermediate between that of indirect methods and the direct determination from backward np differential scattering cross sections. We also use the pionic atom data to deduce the coherent symmetric and antisymmetric sums of the pion-proton and pion-neutron scattering lengths with high precision, namely, (a_π⁻p + a_π⁻n)/2 = [−12 ± 2 (statistical) ± 8 (systematic)] × 10^-4 m_π^-1 and (a_π⁻p − a_π⁻n)/2 = [895 ± 3 (statistical) ± 13 (systematic)] × 10^-4 m_π^-1. For the needs of the present analysis, we improve the theoretical description of the pion-deuteron scattering length.
NASA Astrophysics Data System (ADS)
Takabayashi, Sadao; Klein, William P.; Onodera, Craig; Rapp, Blake; Flores-Estrada, Juan; Lindau, Elias; Snowball, Lejmarc; Sam, Joseph T.; Padilla, Jennifer E.; Lee, Jeunghoon; Knowlton, William B.; Graugnard, Elton; Yurke, Bernard; Kuang, Wan; Hughes, William L.
2014-10-01
High precision, high yield, and high density self-assembly of nanoparticles into arrays is essential for nanophotonics. Spatial deviations as small as a few nanometers can alter the properties of near-field coupled optical nanostructures. Several studies have reported assemblies of few nanoparticle structures with controlled spacing using DNA nanostructures with variable yield. Here, we report multi-tether design strategies and attachment yields for homo- and hetero-nanoparticle arrays templated by DNA origami nanotubes. Nanoparticle attachment yield via DNA hybridization is comparable with streptavidin-biotin binding. Independent of the number of binding sites, >97% site-occupation was achieved with four tethers and 99.2% site-occupation is theoretically possible with five tethers. The interparticle distance was within 2 nm of all design specifications and the nanoparticle spatial deviations decreased with interparticle spacing. Modified geometric, binomial, and trinomial distributions indicate that site-bridging, steric hindrance, and electrostatic repulsion were not dominant barriers to self-assembly and both tethers and binding sites were statistically independent at high particle densities.
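As an illustration of why a handful of independent tethers is enough, assume each tether hybridizes with probability p and tethers act independently; the per-tether probability below is back-calculated from the quoted five-tether figure and is therefore only indicative.

```python
def site_occupation(p_tether, n_tethers):
    """Site occupation assuming independent tethers: 1 - (1 - p)^k."""
    return 1.0 - (1.0 - p_tether) ** n_tethers

# Per-tether probability implied by 99.2% occupation at five tethers (~0.62)
p = 1.0 - 0.008 ** (1 / 5)
print(site_occupation(p, 4))   # ~0.98, consistent with the >97% reported for four tethers
print(site_occupation(p, 5))   # 0.992 by construction
```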
Possibility of New Precise Measurements of Muonic Helium Atom HFS at J-PARC MUSE
NASA Astrophysics Data System (ADS)
Strasser, P.; Shimomura, K.; Torii, H. A.
We propose the next generation of precision microwave spectroscopy measurements of the ground state hyperfine structure (HFS) of the muonic helium atom. The HFS interval is a sensitive tool to test three-body atomic systems and bound-state QED theory, as well as a precise direct determination of the negative muon magnetic moment and hence its mass. Previous measurements performed in the 1980s at PSI and LAMPF had uncertainties dominated by statistical errors. The new high-intensity pulsed negative muon beam at J-PARC MUSE gives an opportunity to improve these measurements by nearly two orders of magnitude for the HFS interval, and almost tenfold for the negative muon mass, thus providing a more precise test of CPT invariance and a determination of the negative counterpart of the anomalous g-factor for the existing BNL muon g-2 experiment. Both measurements at zero field and at high magnetic field are considered. An overview of the different aspects of these new muonic helium HFS measurements is presented.
Achieving metrological precision limits through postselection
NASA Astrophysics Data System (ADS)
Alves, G. Bié; Pimentel, A.; Hor-Meyll, M.; Walborn, S. P.; Davidovich, L.; Filho, R. L. de Matos
2017-01-01
Postselection strategies have been proposed with the aim of amplifying weak signals, which may help to overcome detection thresholds associated with technical noise in high-precision measurements. Here we use an optical setup to experimentally explore two different postselection protocols for the estimation of a small parameter: a weak-value amplification procedure and an alternative method that does not provide amplification but nonetheless is shown to be more robust for the sake of parameter estimation. Each technique leads approximately to the saturation of quantum limits for the estimation precision, expressed by the Cramér-Rao bound. For both situations, we show that parameter estimation is improved when the postselection statistics are considered together with the measurement device.
Nedelcu, R; Olsson, P; Nyström, I; Rydén, J; Thor, A
2018-02-01
To evaluate a novel methodology using industrial scanners as a reference, and assess in vivo accuracy of 3 intraoral scanners (IOS) and conventional impressions. Further, to evaluate IOS precision in vivo. Four reference-bodies were bonded to the buccal surfaces of upper premolars and incisors in five subjects. After three reference-scans, ATOS Core 80 (ATOS), subjects were scanned three times with three IOS systems: 3M True Definition (3M), CEREC Omnicam (OMNI) and Trios 3 (TRIOS). One conventional impression (IMPR) was taken, 3M Impregum Penta Soft, and poured models were digitized with laboratory scanner 3shape D1000 (D1000). Best-fit alignment of reference-bodies and 3D Compare Analysis was performed. Precision of ATOS and D1000 was assessed for quantitative evaluation and comparison. Accuracy of IOS and IMPR were analyzed using ATOS as reference. Precision of IOS was evaluated through intra-system comparison. Precision of ATOS reference scanner (mean 0.6 μm) and D1000 (mean 0.5 μm) was high. Pairwise multiple comparisons of reference-bodies located in different tooth positions displayed a statistically significant difference of accuracy between two scanner-groups: 3M and TRIOS, over OMNI (p value range 0.0001 to 0.0006). IMPR did not show any statistically significant difference to IOS. However, deviations of IOS and IMPR were within a similar magnitude. No statistical difference was found for IOS precision. The methodology can be used for assessing accuracy of IOS and IMPR in vivo in up to five units bilaterally from midline. 3M and TRIOS had a higher accuracy than OMNI. IMPR overlapped both groups. Intraoral scanners can be used as a replacement for conventional impressions when restoring up to ten units without extended edentulous spans.
NASA Astrophysics Data System (ADS)
Roberts, B. M.; Blewitt, G.; Dailey, C.; Derevianko, A.
2018-04-01
We analyze the prospects of employing a distributed global network of precision measurement devices as a dark matter and exotic physics observatory. In particular, we consider the atomic clocks of the global positioning system (GPS), consisting of a constellation of 32 medium-Earth orbit satellites equipped with either Cs or Rb microwave clocks and a number of Earth-based receiver stations, some of which employ highly stable H-maser atomic clocks. High-accuracy timing data is available for almost two decades. By analyzing the satellite and terrestrial atomic clock data, it is possible to search for transient signatures of exotic physics, such as "clumpy" dark matter and dark energy, effectively transforming the GPS constellation into a 50 000 km aperture sensor array. Here we characterize the noise of the GPS satellite atomic clocks, describe the search method based on Bayesian statistics, and test the method using simulated clock data. We present the projected discovery reach using our method, and demonstrate that it can surpass the existing constraints by several orders of magnitude for certain models. Our method is not limited in scope to GPS or atomic clock networks, and can also be applied to other networks of precision measurement devices.
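A toy version of the transient-search idea, with made-up noise level, transient size, and duration: simulate white clock-bias residuals, inject a step-like perturbation, and score each candidate epoch with a Gaussian matched-filter log-likelihood ratio. This is only a sketch of the detection principle, not the GPS analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_epochs, sigma = 2000, 0.1e-9          # epochs and clock-bias noise in seconds (hypothetical)
template_len, amp = 20, 0.3e-9          # transient duration (epochs) and size (hypothetical)

bias = rng.normal(0.0, sigma, n_epochs)
t0 = 1200
bias[t0:t0 + template_len] += amp       # injected transient signature

# Log-likelihood ratio of "transient starting at epoch k" vs "noise only" for Gaussian noise:
# with a rectangular template, the maximum-likelihood amplitude is the window mean.
template = np.ones(template_len)
amp_hat = np.correlate(bias, template, mode="valid") / template_len
llr = template_len * amp_hat**2 / (2 * sigma**2)
print(int(np.argmax(llr)), float(llr.max()))   # recovered start epoch and its significance
```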
Accardo, L; Aguilar, M; Aisa, D; Alpat, B; Alvino, A; Ambrosi, G; Andeen, K; Arruda, L; Attig, N; Azzarello, P; Bachlechner, A; Barao, F; Barrau, A; Barrin, L; Bartoloni, A; Basara, L; Battarbee, M; Battiston, R; Bazo, J; Becker, U; Behlmann, M; Beischer, B; Berdugo, J; Bertucci, B; Bigongiari, G; Bindi, V; Bizzaglia, S; Bizzarri, M; Boella, G; de Boer, W; Bollweg, K; Bonnivard, V; Borgia, B; Borsini, S; Boschini, M J; Bourquin, M; Burger, J; Cadoux, F; Cai, X D; Capell, M; Caroff, S; Carosi, G; Casaus, J; Cascioli, V; Castellini, G; Cernuda, I; Cerreta, D; Cervelli, F; Chae, M J; Chang, Y H; Chen, A I; Chen, H; Cheng, G M; Chen, H S; Cheng, L; Chikanian, A; Chou, H Y; Choumilov, E; Choutko, V; Chung, C H; Cindolo, F; Clark, C; Clavero, R; Coignet, G; Consolandi, C; Contin, A; Corti, C; Coste, B; Cui, Z; Dai, M; Delgado, C; Della Torre, S; Demirköz, M B; Derome, L; Di Falco, S; Di Masso, L; Dimiccoli, F; Díaz, C; von Doetinchem, P; Du, W J; Duranti, M; D'Urso, D; Eline, A; Eppling, F J; Eronen, T; Fan, Y Y; Farnesini, L; Feng, J; Fiandrini, E; Fiasson, A; Finch, E; Fisher, P; Galaktionov, Y; Gallucci, G; García, B; García-López, R; Gast, H; Gebauer, I; Gervasi, M; Ghelfi, A; Gillard, W; Giovacchini, F; Goglov, P; Gong, J; Goy, C; Grabski, V; Grandi, D; Graziani, M; Guandalini, C; Guerri, I; Guo, K H; Haas, D; Habiby, M; Haino, S; Han, K C; He, Z H; Heil, M; Henning, R; Hoffman, J; Hsieh, T H; Huang, Z C; Huh, C; Incagli, M; Ionica, M; Jang, W Y; Jinchi, H; Kanishev, K; Kim, G N; Kim, K S; Kirn, Th; Kossakowski, R; Kounina, O; Kounine, A; Koutsenko, V; Krafczyk, M S; Kunz, S; La Vacca, G; Laudi, E; Laurenti, G; Lazzizzera, I; Lebedev, A; Lee, H T; Lee, S C; Leluc, C; Levi, G; Li, H L; Li, J Q; Li, Q; Li, Q; Li, T X; Li, W; Li, Y; Li, Z H; Li, Z Y; Lim, S; Lin, C H; Lipari, P; Lippert, T; Liu, D; Liu, H; Lolli, M; Lomtadze, T; Lu, M J; Lu, Y S; Luebelsmeyer, K; Luo, F; Luo, J Z; Lv, S S; Majka, R; Malinin, A; Mañá, C; Marín, J; Martin, T; Martínez, G; Masi, N; Massera, F; Maurin, D; Menchaca-Rocha, A; Meng, Q; Mo, D C; Monreal, B; Morescalchi, L; Mott, P; Müller, M; Ni, J Q; Nikonov, N; Nozzoli, F; Nunes, P; Obermeier, A; Oliva, A; Orcinha, M; Palmonari, F; Palomares, C; Paniccia, M; Papi, A; Pauluzzi, M; Pedreschi, E; Pensotti, S; Pereira, R; Pilastrini, R; Pilo, F; Piluso, A; Pizzolotto, C; Plyaskin, V; Pohl, M; Poireau, V; Postaci, E; Putze, A; Quadrani, L; Qi, X M; Rancoita, P G; Rapin, D; Ricol, J S; Rodríguez, I; Rosier-Lees, S; Rossi, L; Rozhkov, A; Rozza, D; Rybka, G; Sagdeev, R; Sandweiss, J; Saouter, P; Sbarra, C; Schael, S; Schmidt, S M; Schuckardt, D; Schulz von Dratzig, A; Schwering, G; Scolieri, G; Seo, E S; Shan, B S; Shan, Y H; Shi, J Y; Shi, X Y; Shi, Y M; Siedenburg, T; Son, D; Spada, F; Spinella, F; Sun, W; Sun, W H; Tacconi, M; Tang, C P; Tang, X W; Tang, Z C; Tao, L; Tescaro, D; Ting, Samuel C C; Ting, S M; Tomassetti, N; Torsti, J; Türkoğlu, C; Urban, T; Vagelli, V; Valente, E; Vannini, C; Valtonen, E; Vaurynovich, S; Vecchi, M; Velasco, M; Vialle, J P; Vitale, V; Volpini, G; Wang, L Q; Wang, Q L; Wang, R S; Wang, X; Wang, Z X; Weng, Z L; Whitman, K; Wienkenhöver, J; Wu, H; Wu, K Y; Xia, X; Xie, M; Xie, S; Xiong, R Q; Xin, G M; Xu, N S; Xu, W; Yan, Q; Yang, J; Yang, M; Ye, Q H; Yi, H; Yu, Y J; Yu, Z Q; Zeissler, S; Zhang, J H; Zhang, M T; Zhang, X B; Zhang, Z; Zheng, Z M; Zhou, F; Zhuang, H L; Zhukov, V; Zichichi, A; Zimmermann, N; Zuccon, P; Zurbach, C
2014-09-19
A precision measurement by AMS of the positron fraction in primary cosmic rays in the energy range from 0.5 to 500 GeV based on 10.9 million positron and electron events is presented. This measurement extends the energy range of our previous observation and increases its precision. The new results show, for the first time, that above ∼200 GeV the positron fraction no longer exhibits an increase with energy.
Detection of non-Gaussian fluctuations in a quantum point contact.
Gershon, G; Bomze, Yu; Sukhorukov, E V; Reznikov, M
2008-07-04
An experimental study of current fluctuations through a tunable transmission barrier, a quantum point contact, is reported. We measure the probability distribution function of transmitted charge with precision sufficient to extract the first three cumulants. To obtain the intrinsic quantities, corresponding to voltage-biased barrier, we employ a procedure that accounts for the response of the external circuit and the amplifier. The third cumulant, obtained with a high precision, is found to agree with the prediction for the statistics of transport in the non-Poissonian regime.
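Estimating the first three cumulants from repeated charge-counting windows is straightforward once the intrinsic distribution is in hand; the sketch below omits the circuit/amplifier deconvolution step described above.

```python
import numpy as np

def first_three_cumulants(charge_samples):
    """First three cumulants of the transmitted-charge distribution:
    kappa1 = mean, kappa2 = variance, kappa3 = third central moment."""
    q = np.asarray(charge_samples, dtype=float)
    mu = q.mean()
    return mu, q.var(), np.mean((q - mu) ** 3)

# For a Poisson process all three cumulants are equal; deviations of kappa2/kappa1
# (the Fano factor) and kappa3/kappa1 signal non-Poissonian transport statistics.
counts = np.random.default_rng(2).poisson(50.0, size=100_000)
print(first_three_cumulants(counts))
```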
Detection of Non-Gaussian Fluctuations in a Quantum Point Contact
NASA Astrophysics Data System (ADS)
Gershon, G.; Bomze, Yu.; Sukhorukov, E. V.; Reznikov, M.
2008-07-01
An experimental study of current fluctuations through a tunable transmission barrier, a quantum point contact, is reported. We measure the probability distribution function of transmitted charge with precision sufficient to extract the first three cumulants. To obtain the intrinsic quantities, corresponding to voltage-biased barrier, we employ a procedure that accounts for the response of the external circuit and the amplifier. The third cumulant, obtained with a high precision, is found to agree with the prediction for the statistics of transport in the non-Poissonian regime.
NASA Astrophysics Data System (ADS)
Accardo, L.; Aguilar, M.; Aisa, D.; Alvino, A.; Ambrosi, G.; Andeen, K.; Arruda, L.; Attig, N.; Azzarello, P.; Bachlechner, A.; Barao, F.; Barrau, A.; Barrin, L.; Bartoloni, A.; Basara, L.; Battarbee, M.; Battiston, R.; Bazo, J.; Becker, U.; Behlmann, M.; Beischer, B.; Berdugo, J.; Bertucci, B.; Bigongiari, G.; Bindi, V.; Bizzaglia, S.; Bizzarri, M.; Boella, G.; de Boer, W.; Bollweg, K.; Bonnivard, V.; Borgia, B.; Borsini, S.; Boschini, M. J.; Bourquin, M.; Burger, J.; Cadoux, F.; Cai, X. D.; Capell, M.; Caroff, S.; Casaus, J.; Cascioli, V.; Castellini, G.; Cernuda, I.; Cervelli, F.; Chae, M. J.; Chang, Y. H.; Chen, A. I.; Chen, H.; Cheng, G. M.; Chen, H. S.; Cheng, L.; Chikanian, A.; Chou, H. Y.; Choumilov, E.; Choutko, V.; Chung, C. H.; Clark, C.; Clavero, R.; Coignet, G.; Consolandi, C.; Contin, A.; Corti, C.; Coste, B.; Cui, Z.; Dai, M.; Delgado, C.; Della Torre, S.; Demirköz, M. B.; Derome, L.; Di Falco, S.; Di Masso, L.; Dimiccoli, F.; Díaz, C.; von Doetinchem, P.; Du, W. J.; Duranti, M.; D'Urso, D.; Eline, A.; Eppling, F. J.; Eronen, T.; Fan, Y. Y.; Farnesini, L.; Feng, J.; Fiandrini, E.; Fiasson, A.; Finch, E.; Fisher, P.; Galaktionov, Y.; Gallucci, G.; García, B.; García-López, R.; Gast, H.; Gebauer, I.; Gervasi, M.; Ghelfi, A.; Gillard, W.; Giovacchini, F.; Goglov, P.; Gong, J.; Goy, C.; Grabski, V.; Grandi, D.; Graziani, M.; Guandalini, C.; Guerri, I.; Guo, K. H.; Habiby, M.; Haino, S.; Han, K. C.; He, Z. H.; Heil, M.; Hoffman, J.; Hsieh, T. H.; Huang, Z. C.; Huh, C.; Incagli, M.; Ionica, M.; Jang, W. Y.; Jinchi, H.; Kanishev, K.; Kim, G. N.; Kim, K. S.; Kirn, Th.; Kossakowski, R.; Kounina, O.; Kounine, A.; Koutsenko, V.; Krafczyk, M. S.; Kunz, S.; La Vacca, G.; Laudi, E.; Laurenti, G.; Lazzizzera, I.; Lebedev, A.; Lee, H. T.; Lee, S. C.; Leluc, C.; Li, H. L.; Li, J. Q.; Li, Q.; Li, Q.; Li, T. X.; Li, W.; Li, Y.; Li, Z. H.; Li, Z. Y.; Lim, S.; Lin, C. H.; Lipari, P.; Lippert, T.; Liu, D.; Liu, H.; Lomtadze, T.; Lu, M. J.; Lu, Y. S.; Luebelsmeyer, K.; Luo, F.; Luo, J. Z.; Lv, S. S.; Majka, R.; Malinin, A.; Mañá, C.; Marín, J.; Martin, T.; Martínez, G.; Masi, N.; Maurin, D.; Menchaca-Rocha, A.; Meng, Q.; Mo, D. C.; Morescalchi, L.; Mott, P.; Müller, M.; Ni, J. Q.; Nikonov, N.; Nozzoli, F.; Nunes, P.; Obermeier, A.; Oliva, A.; Orcinha, M.; Palmonari, F.; Palomares, C.; Paniccia, M.; Papi, A.; Pedreschi, E.; Pensotti, S.; Pereira, R.; Pilo, F.; Piluso, A.; Pizzolotto, C.; Plyaskin, V.; Pohl, M.; Poireau, V.; Postaci, E.; Putze, A.; Quadrani, L.; Qi, X. M.; Rancoita, P. G.; Rapin, D.; Ricol, J. S.; Rodríguez, I.; Rosier-Lees, S.; Rozhkov, A.; Rozza, D.; Sagdeev, R.; Sandweiss, J.; Saouter, P.; Sbarra, C.; Schael, S.; Schmidt, S. M.; Schuckardt, D.; von Dratzig, A. Schulz; Schwering, G.; Scolieri, G.; Seo, E. S.; Shan, B. S.; Shan, Y. H.; Shi, J. Y.; Shi, X. Y.; Shi, Y. M.; Siedenburg, T.; Son, D.; Spada, F.; Spinella, F.; Sun, W.; Sun, W. H.; Tacconi, M.; Tang, C. P.; Tang, X. W.; Tang, Z. C.; Tao, L.; Tescaro, D.; Ting, Samuel C. C.; Ting, S. M.; Tomassetti, N.; Torsti, J.; Türkoǧlu, C.; Urban, T.; Vagelli, V.; Valente, E.; Vannini, C.; Valtonen, E.; Vaurynovich, S.; Vecchi, M.; Velasco, M.; Vialle, J. P.; Wang, L. Q.; Wang, Q. L.; Wang, R. S.; Wang, X.; Wang, Z. X.; Weng, Z. L.; Whitman, K.; Wienkenhöver, J.; Wu, H.; Xia, X.; Xie, M.; Xie, S.; Xiong, R. Q.; Xin, G. M.; Xu, N. S.; Xu, W.; Yan, Q.; Yang, J.; Yang, M.; Ye, Q. H.; Yi, H.; Yu, Y. J.; Yu, Z. Q.; Zeissler, S.; Zhang, J. H.; Zhang, M. T.; Zhang, X. B.; Zhang, Z.; Zheng, Z. M.; Zhuang, H. 
L.; Zhukov, V.; Zichichi, A.; Zimmermann, N.; Zuccon, P.; Zurbach, C.; AMS Collaboration
2014-09-01
A precision measurement by AMS of the positron fraction in primary cosmic rays in the energy range from 0.5 to 500 GeV based on 10.9 million positron and electron events is presented. This measurement extends the energy range of our previous observation and increases its precision. The new results show, for the first time, that above ∼200 GeV the positron fraction no longer exhibits an increase with energy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.
2014-04-15
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.
2014-01-01
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence. PMID:24694150
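A simplified version of the sample-size idea, keeping only the scan-to-scan variation (the full scheme also budgets for MOSFET calibration error via the constrained Lagrange-multiplier solution): the number of repeat scans needed so that the mean ED reaches a relative precision ε at confidence 1 − α is roughly (z·CV/ε)². The CV value below is hypothetical.

```python
from math import ceil
from scipy.stats import norm

def scans_needed(cv, rel_precision=0.05, confidence=0.95):
    """Repeat scans needed for the sample-mean ED to reach the requested relative
    precision at the requested confidence (simplified: scan-to-scan variation only)."""
    z = norm.ppf(0.5 + confidence / 2)
    return ceil((z * cv / rel_precision) ** 2)

# e.g. a 12% scan-to-scan coefficient of variation (hypothetical) at 5% precision, 95% confidence
print(scans_needed(cv=0.12))   # ~23 scans
```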
Non-convex Statistical Optimization for Sparse Tensor Graphical Model
Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang
2016-01-01
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which is unobserved in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459
Baudot, Pierre; Levy, Manuel; Marre, Olivier; Monier, Cyril; Pananceau, Marc; Frégnac, Yves
2013-01-01
Synaptic noise is thought to be a limiting factor for computational efficiency in the brain. In visual cortex (V1), ongoing activity is present in vivo, and spiking responses to simple stimuli are highly unreliable across trials. Stimulus statistics used to plot receptive fields, however, are quite different from those experienced during natural visuomotor exploration. We recorded V1 neurons intracellularly in the anaesthetized and paralyzed cat and compared their spiking and synaptic responses to full field natural images animated by simulated eye-movements to those evoked by simpler (grating) or higher dimensionality statistics (dense noise). In most cells, natural scene animation was the only condition where high temporal precision (in the 10–20 ms range) was maintained during sparse and reliable activity. At the subthreshold level, irregular but highly reproducible membrane potential dynamics were observed, even during long (several 100 ms) “spike-less” periods. We showed that both the spatial structure of natural scenes and the temporal dynamics of eye-movements increase the signal-to-noise ratio by a non-linear amplification of the signal combined with a reduction of the subthreshold contextual noise. These data support the view that the sparsening and the time precision of the neural code in V1 may depend primarily on three factors: (1) broadband input spectrum: the bandwidth must be rich enough for recruiting optimally the diversity of spatial and time constants during recurrent processing; (2) tight temporal interplay of excitation and inhibition: conductance measurements demonstrate that natural scene statistics narrow selectively the duration of the spiking opportunity window during which the balance between excitation and inhibition changes transiently and reversibly; (3) signal energy in the lower frequency band: a minimal level of power is needed below 10 Hz to reach consistently the spiking threshold, a situation rarely reached with visual dense noise. PMID:24409121
NASA Astrophysics Data System (ADS)
Hahn, Gitte Holst; Christensen, Karl Bang; Leung, Terence S.; Greisen, Gorm
2010-05-01
Coherence between spontaneous fluctuations in arterial blood pressure (ABP) and the cerebral near-infrared spectroscopy signal can detect cerebral autoregulation. Because reliable measurement depends on signals with high signal-to-noise ratio, we hypothesized that coherence is more precisely determined when fluctuations in ABP are large rather than small. Therefore, we investigated whether adjusting for variability in ABP (variabilityABP) improves precision. We examined the impact of variabilityABP within the power spectrum in each measurement and between repeated measurements in preterm infants. We also examined total monitoring time required to discriminate among infants with a simulation study. We studied 22 preterm infants (GA<30) yielding 215 10-min measurements. Surprisingly, adjusting for variabilityABP within the power spectrum did not improve the precision. However, adjusting for the variabilityABP among repeated measurements (i.e., weighting measurements with high variabilityABP in favor of those with low) improved the precision. The evidence of drift in individual infants was weak. Minimum monitoring time needed to discriminate among infants was 1.3-3.7 h. Coherence analysis in low frequencies (0.04-0.1 Hz) had higher precision and statistically more power than in very low frequencies (0.003-0.04 Hz). In conclusion, a reliable detection of cerebral autoregulation takes hours and the precision is improved by adjusting for variabilityABP between repeated measurements.
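The band-averaged coherence at the heart of this approach can be sketched in a few lines of Python; the sampling rate, record length, and surrogate ABP/NIRS signals below are assumptions for illustration, not the study's data or its weighting scheme.

    import numpy as np
    from scipy.signal import coherence

    fs = 2.0                                   # assumed sampling rate (Hz)
    rng = np.random.default_rng(0)
    abp = rng.standard_normal(7200)            # stand-in for 1 h of ABP samples
    nirs = 0.5 * abp + rng.standard_normal(7200)   # stand-in for the NIRS signal

    f, coh = coherence(abp, nirs, fs=fs, nperseg=512)
    lf = coh[(f >= 0.04) & (f <= 0.10)].mean()     # low-frequency band
    vlf = coh[(f >= 0.003) & (f < 0.04)].mean()    # very-low-frequency band
    print(f"LF coherence {lf:.2f}, VLF coherence {vlf:.2f}")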
Egbewale, Bolaji E; Lewis, Martyn; Sim, Julius
2014-04-09
Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. 126 hypothetical trial scenarios were evaluated (126,000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. Apparently greater power of ANOVA and CSA at certain imbalances is achieved in respect of a biased treatment effect. Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power.
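A compact simulation in the spirit of this comparison is sketched below for a single balanced, randomised scenario (the sample size, correlation and effect size are assumed values, not the authors' 126-scenario grid); it reproduces the qualitative finding that all three estimators are unbiased under randomisation while ANCOVA is the most precise.

    import numpy as np

    rng = np.random.default_rng(1)
    n, rho, effect, reps = 50, 0.6, 5.0, 2000      # assumed scenario
    est = {"anova": [], "change-score": [], "ancova": []}
    for _ in range(reps):
        pre = rng.normal(50, 10, 2 * n)
        post = 50 + rho * (pre - 50) + rng.normal(0, 10 * np.sqrt(1 - rho**2), 2 * n)
        g = np.repeat([0, 1], n)                   # randomised 1:1 allocation
        post = post + effect * g
        est["anova"].append(post[g == 1].mean() - post[g == 0].mean())
        change = post - pre
        est["change-score"].append(change[g == 1].mean() - change[g == 0].mean())
        X = np.column_stack([np.ones(2 * n), g, pre])     # ANCOVA design matrix
        est["ancova"].append(np.linalg.lstsq(X, post, rcond=None)[0][1])
    for name, vals in est.items():
        vals = np.asarray(vals)
        print(f"{name:13s} bias {vals.mean() - effect:+.3f}   SD {vals.std(ddof=1):.3f}")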
Corpus-based Statistical Screening for Phrase Identification
Kim, Won; Wilbur, W. John
2000-01-01
Purpose: The authors study the extraction of useful phrases from a natural language database by statistical methods. The aim is to leverage human effort by providing preprocessed phrase lists with a high percentage of useful material. Method: The approach is to develop six different scoring methods that are based on different aspects of phrase occurrence. The emphasis here is not on lexical information or syntactic structure but rather on the statistical properties of word pairs and triples that can be obtained from a large database. Measurements: The Unified Medical Language System (UMLS) incorporates a large list of humanly acceptable phrases in the medical field as a part of its structure. The authors use this list of phrases as a gold standard for validating their methods. A good method is one that ranks the UMLS phrases high among all phrases studied. Measurements are 11-point average precision values and precision-recall curves based on the rankings. Result: The authors find that each of the six scoring methods proves effective in identifying UMLS quality phrases in a large subset of MEDLINE. These methods are applicable both to word pairs and word triples. All six methods are optimally combined to produce composite scoring methods that are more effective than any single method. The quality of the composite methods appears sufficient to support the automatic placement of hyperlinks in text at the site of highly ranked phrases. Conclusion: Statistical scoring methods provide a promising approach to the extraction of useful phrases from a natural language database for the purpose of indexing or providing hyperlinks in text. PMID:10984469
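The 11-point average precision used as the figure of merit can be computed as in the short sketch below; the toy phrase list and gold standard are invented for illustration.

    import numpy as np

    def eleven_point_ap(gold, ranked):
        # 11-point interpolated average precision of a ranked phrase list
        # against a gold-standard set (e.g. UMLS phrases).
        hits, prec, rec = 0, [], []
        for i, phrase in enumerate(ranked, start=1):
            hits += phrase in gold
            prec.append(hits / i)
            rec.append(hits / len(gold))
        prec, rec = np.array(prec), np.array(rec)
        interp = [prec[rec >= r].max() if np.any(rec >= r) else 0.0
                  for r in np.linspace(0.0, 1.0, 11)]
        return float(np.mean(interp))

    print(eleven_point_ap({"heart rate", "blood pressure"},
                          ["heart rate", "of the", "blood pressure", "in a"]))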
Precise measurement of scleral radius using anterior eye profilometry.
Jesus, Danilo A; Kedzia, Renata; Iskander, D Robert
2017-02-01
To develop a new and precise methodology to measure the scleral radius based on the anterior eye surface. The Eye Surface Profiler (ESP, Eaglet-Eye, Netherlands) was used to acquire the anterior eye surface of 23 emmetropic subjects aged 28.1 ± 6.6 years (mean ± standard deviation), ranging from 20 to 45 years. Scleral radius was obtained by approximating the topographical scleral data to a sphere using least squares fitting and considering the axial length as a reference point. To better understand the role of scleral radius in ocular biometry, measurements of corneal radius, central corneal thickness, anterior chamber depth and white-to-white corneal diameter were acquired with the IOLMaster 700 (Carl Zeiss Meditec AG, Jena, Germany). The estimated scleral radius (11.2 ± 0.3 mm) was shown to be highly precise, with a coefficient of variation of 0.4%. A statistically significant correlation between axial length and scleral radius (R² = 0.957, p < 0.001) was observed. Moreover, corneal radius (R² = 0.420, p < 0.001), anterior chamber depth (R² = 0.141, p = 0.039) and white-to-white corneal diameter (R² = 0.146, p = 0.036) also showed statistically significant correlations with the scleral radius. Lastly, no correlation was observed between scleral radius and central corneal thickness (R² = 0.047, p = 0.161). Three-dimensional topography of the anterior eye acquired with the Eye Surface Profiler, together with an estimate of the axial length, can be used to calculate the scleral radius with high precision. Copyright © 2016 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
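The sphere approximation by least squares can be written compactly using the usual linearisation of the sphere equation; the synthetic scleral-cap points below are stand-ins, not ESP data.

    import numpy as np

    def fit_sphere(points):
        # Linear least-squares sphere fit: x^2 + y^2 + z^2 = 2a*x + 2b*y + 2c*z + d,
        # with centre (a, b, c) and radius sqrt(d + a^2 + b^2 + c^2).
        pts = np.asarray(points, dtype=float)
        A = np.column_stack([2 * pts, np.ones(len(pts))])
        rhs = (pts ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        centre, d = sol[:3], sol[3]
        return centre, np.sqrt(d + centre @ centre)

    rng = np.random.default_rng(0)
    theta, phi = rng.uniform(0, 0.6, 500), rng.uniform(0, 2 * np.pi, 500)
    cap = 11.2 * np.column_stack([np.sin(theta) * np.cos(phi),
                                  np.sin(theta) * np.sin(phi),
                                  np.cos(theta)]) + rng.normal(0, 0.02, (500, 3))
    print(round(fit_sphere(cap)[1], 2))    # close to the true 11.2 mm radius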
2015-05-12
Deficiencies that affect the reliability of estimates: statistical precision could be improved. The statistical precision of improper payments estimates in seven of the DoD payment programs could be improved through the use of stratified sample designs. DoD improper payments not subject to sampling made the results statistically invalid; a recommendation to correct this problem was made in a previous report.
Raymond, Mark R; Clauser, Brian E; Furman, Gail E
2010-10-01
The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution; and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.
Provably unbounded memory advantage in stochastic simulation using quantum mechanics
NASA Astrophysics Data System (ADS)
Garner, Andrew J. P.; Liu, Qing; Thompson, Jayne; Vedral, Vlatko; Gu, Mile
2017-10-01
Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.
NASA Astrophysics Data System (ADS)
Monna, F.; Loizeau, J.-L.; Thomas, B. A.; Guéguen, C.; Favarger, P.-Y.
1998-08-01
One of the factors limiting the precision of inductively coupled plasma mass spectrometry is the counting statistics, which depend upon acquisition time and ion fluxes. In the present study, the precision of the isotopic measurements of Pb and Sr is examined. The measurement time is optimally shared among the isotopes, using a mathematical simulation, to provide the lowest theoretical analytical error. Different algorithms of mass bias correction are also taken into account and evaluated in terms of improvement of overall precision. Several experiments allow a comparison of real conditions with theory. The present method significantly improves the precision, regardless of the instrument used. However, this benefit is more important for equipment which originally yields a precision close to that predicted by counting statistics. Additionally, the procedure is flexible enough to be easily adapted to other problems, such as isotopic dilution.
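Under pure Poisson counting statistics, the optimal sharing of a fixed acquisition time between two isotopes has the familiar closed form t_i proportional to 1/sqrt(rate_i); the sketch below illustrates that relation with invented count rates and is not the paper's full simulation (which also treats mass bias correction).

    import numpy as np

    def ratio_rel_variance(rates, times):
        # Relative variance of an isotope ratio under Poisson counting,
        # given count rates (counts/s) and dwell times (s) of the two isotopes.
        counts = np.asarray(rates) * np.asarray(times)
        return float(np.sum(1.0 / counts))

    rates, total_time = np.array([2.0e5, 5.0e3]), 60.0   # illustrative rates only
    equal = np.full(2, total_time / 2)
    optimal = total_time * (1 / np.sqrt(rates)) / np.sum(1 / np.sqrt(rates))
    for label, t in [("equal split  ", equal), ("optimal split", optimal)]:
        rsd = 100 * np.sqrt(ratio_rel_variance(rates, t))
        print(f"{label} dwell times {t.round(1)} s -> ratio RSD {rsd:.3f}%")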
Jones, David T; Kandathil, Shaun M
2018-04-26
In addition to substitution frequency data from protein sequence alignments, many state-of-the-art methods for contact prediction rely on additional sources of information, or features, of protein sequences in order to predict residue-residue contacts, such as solvent accessibility, predicted secondary structure, and scores from other contact prediction methods. It is unclear how much of this information is needed to achieve state-of-the-art results. Here, we show that using deep neural network models, simple alignment statistics contain sufficient information to achieve state-of-the-art precision. Our prediction method, DeepCov, uses fully convolutional neural networks operating on amino-acid pair frequency or covariance data derived directly from sequence alignments, without using global statistical methods such as sparse inverse covariance or pseudolikelihood estimation. Comparisons against CCMpred and MetaPSICOV2 show that using pairwise covariance data calculated from raw alignments as input allows us to match or exceed the performance of both of these methods. Almost all of the achieved precision is obtained when considering relatively local windows (around 15 residues) around any member of a given residue pairing; larger window sizes have comparable performance. Assessment on a set of shallow sequence alignments (fewer than 160 effective sequences) indicates that the new method is substantially more precise than CCMpred and MetaPSICOV2 in this regime, suggesting that improved precision is attainable on smaller sequence families. Overall, the performance of DeepCov is competitive with the state of the art, and our results demonstrate that global models, which employ features from all parts of the input alignment when predicting individual contacts, are not strictly needed in order to attain precise contact predictions. DeepCov is freely available at https://github.com/psipred/DeepCov. d.t.jones@ucl.ac.uk.
Precision electron-beam polarimetry at 1 GeV using diamond microstrip detectors
Narayan, A.; Jones, D.; Cornejo, J. C.; ...
2016-02-16
We report on the highest precision yet achieved in the measurement of the polarization of a low-energy, O(1 GeV), continuous-wave (CW) electron beam, accomplished using a new polarimeter based on electron-photon scattering, in Hall C at Jefferson Lab. A number of technical innovations were necessary, including a novel method for precise control of the laser polarization in a cavity and a novel diamond microstrip detector that was able to capture most of the spectrum of scattered electrons. The data analysis technique exploited track finding, the high granularity of the detector, and its large acceptance. The polarization of the 180-μA, 1.16-GeV electron beam was measured with a statistical precision of <1% per hour and a systematic uncertainty of 0.59%. This exceeds the level of precision required by the Qweak experiment, a measurement of the weak vector charge of the proton. Proposed future low-energy experiments require polarization uncertainty < 0.4%, and this result represents an important demonstration of that possibility. This measurement is the first use of diamond detectors for particle tracking in an experiment. As a result, it demonstrates the stable operation of a diamond-based tracking detector in a high radiation environment, for two years.
Vittuari, Luca; Tini, Maria Alessandra; Sarti, Pierguido; Serantoni, Eugenio; Borghi, Alessandra; Negusini, Monia; Guillaume, Sébastien
2016-01-01
This paper compares three different methods capable of estimating the deflection of the vertical (DoV): one is based on the joint use of high precision spirit leveling and Global Navigation Satellite Systems (GNSS), a second uses astro-geodetic measurements and the third gravimetric geoid models. The working data sets refer to the geodetic International Terrestrial Reference Frame (ITRF) co-location sites of Medicina (Northern Italy) and Noto (Sicily), which are excellent test beds for our investigations. The measurements were planned and realized to estimate the DoV with a level of precision comparable to the angular accuracy achievable in high precision networks measured by modern high-end total stations. The three methods are in excellent agreement, with an operational supremacy of the astro-geodetic method, being faster and more precise than the others. The method that combines leveling and GNSS has slightly larger standard deviations, although well within the 1 arcsec level, which was assumed as a threshold. Finally, the geoid model based method, whose 2.5 arcsec standard deviations exceed this threshold, is also statistically consistent with the others and should be used to determine the DoV components where local ad hoc measurements are lacking. PMID:27104544
Precision of guided scanning procedures for full-arch digital impressions in vivo.
Zimmermann, Moritz; Koller, Christina; Rumetsch, Moritz; Ender, Andreas; Mehl, Albert
2017-11-01
System-specific scanning strategies have been shown to influence the accuracy of full-arch digital impressions. Special guided scanning procedures have been implemented for specific intraoral scanning systems with special regard to the digital orthodontic workflow. The aim of this study was to evaluate the precision of guided scanning procedures compared to conventional impression techniques in vivo. Two intraoral scanning systems with implemented full-arch guided scanning procedures (Cerec Omnicam Ortho; Ormco Lythos) were included along with one conventional impression technique with irreversible hydrocolloid material (alginate). Full-arch impressions were taken three times each from 5 participants (n = 15). Impressions were then compared within the test groups using a point-to-surface distance method after best-fit model matching (OraCheck). Precision was calculated using the (90-10%)/2 quantile, and statistical analysis was performed with one-way repeated-measures ANOVA and post hoc Bonferroni tests. The conventional impression technique with alginate showed the lowest precision for full-arch impressions, at 162.2 ± 71.3 µm. Both guided scanning procedures performed statistically significantly better than the conventional impression technique (p < 0.05). Mean values were 74.5 ± 39.2 µm for the Cerec Omnicam Ortho group and 91.4 ± 48.8 µm for the Ormco Lythos group. The in vivo precision of guided scanning procedures exceeds that of the conventional impression technique with the irreversible hydrocolloid material alginate. Guided scanning procedures may be highly promising for clinical applications, especially for digital orthodontic workflows.
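The (90-10%)/2 precision summary reduces to two percentiles of the signed point-to-surface distances; a minimal sketch with synthetic deviations (in place of OraCheck output) is given below.

    import numpy as np

    def iqr_precision(distances_um):
        # (90% - 10%)/2 inter-quantile half-range of signed point-to-surface
        # distances between two superimposed models, in micrometres.
        q10, q90 = np.percentile(distances_um, [10, 90])
        return (q90 - q10) / 2.0

    rng = np.random.default_rng(2)
    deviations = rng.normal(0, 60, 100_000)       # synthetic deviations (µm)
    print(f"{iqr_precision(deviations):.1f} µm")  # about 1.28 * 60 for normal noise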
A High Precision Prediction Model Using Hybrid Grey Dynamic Model
ERIC Educational Resources Information Center
Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro
2008-01-01
In this paper, we propose a new prediction analysis model which combines the first order one variable Grey differential equation Model (abbreviated as GM(1,1) model) from grey system theory and time series Autoregressive Integrated Moving Average (ARIMA) model from statistics theory. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…
High throughput single cell counting in droplet-based microfluidics.
Lu, Heng; Caen, Ouriel; Vrignon, Jeremy; Zonta, Eleonora; El Harrak, Zakaria; Nizard, Philippe; Baret, Jean-Christophe; Taly, Valérie
2017-05-02
Droplet-based microfluidics is extensively and increasingly used for high-throughput single-cell studies. However, the accuracy of the cell counting method directly impacts the robustness of such studies. We describe here a simple and precise method to accurately count a large number of adherent and non-adherent human cells as well as bacteria. Our microfluidic hemocytometer provides statistically relevant data on large populations of cells at a high-throughput, used to characterize cell encapsulation and cell viability during incubation in droplets.
Constraining the mass–richness relationship of redMaPPer clusters with angular clustering
Baxter, Eric J.; Rozo, Eduardo; Jain, Bhuvnesh; ...
2016-08-04
The potential of using cluster clustering for calibrating the mass–richness relation of galaxy clusters has been recognized theoretically for over a decade. In this paper, we demonstrate the feasibility of this technique to achieve high-precision mass calibration using redMaPPer clusters in the Sloan Digital Sky Survey North Galactic Cap. By including cross-correlations between several richness bins in our analysis, we significantly improve the statistical precision of our mass constraints. The amplitude of the mass–richness relation is constrained to 7 per cent statistical precision by our analysis. However, the error budget is systematics dominated, reaching a 19 per cent total error that is dominated by theoretical uncertainty in the bias–mass relation for dark matter haloes. We confirm the result from Miyatake et al. that the clustering amplitude of redMaPPer clusters depends on galaxy concentration as defined therein, and we provide additional evidence that this dependence cannot be sourced by mass dependences: some other effect must account for the observed variation in clustering amplitude with galaxy concentration. Assuming that the observed dependence of redMaPPer clustering on galaxy concentration is a form of assembly bias, we find that such effects introduce a systematic error on the amplitude of the mass–richness relation that is comparable to the error bar from statistical noise. Finally, the results presented here demonstrate the power of cluster clustering for mass calibration and cosmology provided the current theoretical systematics can be ameliorated.
Computational Calorimetry: High-Precision Calculation of Host–Guest Binding Thermodynamics
2015-01-01
We present a strategy for carrying out high-precision calculations of binding free energy and binding enthalpy values from molecular dynamics simulations with explicit solvent. The approach is used to calculate the thermodynamic profiles for binding of nine small molecule guests to either the cucurbit[7]uril (CB7) or β-cyclodextrin (βCD) host. For these systems, calculations using commodity hardware can yield binding free energy and binding enthalpy values with a precision of ∼0.5 kcal/mol (95% CI) in a matter of days. Crucially, the self-consistency of the approach is established by calculating the binding enthalpy directly, via end point potential energy calculations, and indirectly, via the temperature dependence of the binding free energy, i.e., by the van’t Hoff equation. Excellent agreement between the direct and van’t Hoff methods is demonstrated for both host–guest systems and an ion-pair model system for which particularly well-converged results are attainable. Additionally, we find that hydrogen mass repartitioning allows marked acceleration of the calculations with no discernible cost in precision or accuracy. Finally, we provide guidance for accurately assessing numerical uncertainty of the results in settings where complex correlations in the time series can pose challenges to statistical analysis. The routine nature and high precision of these binding calculations opens the possibility of including measured binding thermodynamics as target data in force field optimization so that simulations may be used to reliably interpret experimental data and guide molecular design. PMID:26523125
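The indirect, van't Hoff route mentioned above amounts to a straight-line fit of the binding free energy against temperature when the enthalpy and entropy are treated as constant; the numbers in this sketch are invented, not results from the host-guest calculations.

    import numpy as np

    # Binding free energies (kcal/mol) at several temperatures (K); illustrative only.
    T = np.array([280.0, 290.0, 300.0, 310.0, 320.0])
    dG = np.array([-9.1, -8.8, -8.5, -8.2, -7.9])

    # With dH and dS assumed temperature-independent, dG(T) = dH - T*dS,
    # so the intercept of a linear fit estimates dH and the slope estimates -dS.
    slope, intercept = np.polyfit(T, dG, 1)
    dH, dS = intercept, -slope
    print(f"van't Hoff estimate: dH = {dH:.1f} kcal/mol, dS = {dS * 1000:.1f} cal/(mol K)")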
Alania, M; De Backer, A; Lobato, I; Krause, F F; Van Dyck, D; Rosenauer, A; Van Aert, S
2017-10-01
In this paper, we investigate how precisely the atoms of a small nanocluster can ultimately be located in three dimensions (3D) from a tilt series of images acquired using annular dark field (ADF) scanning transmission electron microscopy (STEM). To this end, we derive an expression for the statistical precision with which the 3D atomic position coordinates can be estimated in a quantitative analysis. Evaluating this statistical precision as a function of the microscope settings also allows us to derive the optimal experimental design. In this manner, the optimal angular tilt range, required electron dose, optimal detector angles, and number of projection images can be determined. Copyright © 2016 Elsevier B.V. All rights reserved.
High precision measurements of 26Na β- decay
NASA Astrophysics Data System (ADS)
Grinyer, G. F.; Svensson, C. E.; Andreoiu, C.; Andreyev, A. N.; Austin, R. A.; Ball, G. C.; Chakrawarthy, R. S.; Finlay, P.; Garrett, P. E.; Hackman, G.; Hardy, J. C.; Hyland, B.; Iacob, V. E.; Koopmans, K. A.; Kulp, W. D.; Leslie, J. R.; MacDonald, J. A.; Morton, A. C.; Ormand, W. E.; Osborne, C. J.; Pearson, C. J.; Phillips, A. A.; Sarazin, F.; Schumaker, M. A.; Scraggs, H. C.; Schwarzenberg, J.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Wood, J. L.; Zganjar, E. F.
2005-04-01
High-precision measurements of the half-life and β-branching ratios for the β- decay of 26Na to 26Mg have been measured in β-counting and γ-decay experiments, respectively. A 4π proportional counter and fast tape transport system were employed for the half-life measurement, whereas the γ rays emitted by the daughter nucleus 26Mg were detected with the 8π γ-ray spectrometer, both located at TRIUMF's isotope separator and accelerator radioactive beam facility. The half-life of 26Na was determined to be T1/2=1.07128±0.00013±0.00021s, where the first error is statistical and the second systematic. The logft values derived from these experiments are compared with theoretical values from a full sd-shell model calculation.
AMMI adjustment for statistical analysis of an international wheat yield trial.
Crossa, J; Fox, P N; Pfeiffer, W H; Rajaram, S; Gauch, H G
1991-01-01
Multilocation trials are important for the CIMMYT Bread Wheat Program in producing high-yielding, adapted lines for a wide range of environments. This study investigated procedures for improving predictive success of a yield trial, grouping environments and genotypes into homogeneous subsets, and determining the yield stability of 18 CIMMYT bread wheats evaluated at 25 locations. Additive Main effects and Multiplicative Interaction (AMMI) analysis gave more precise estimates of genotypic yields within locations than means across replicates. This precision facilitated formation by cluster analysis of more cohesive groups of genotypes and locations for biological interpretation of interactions than occurred with unadjusted means. Locations were clustered into two subsets for which genotypes with positive interactions manifested in high, stable yields were identified. The analyses highlighted superior selections with both broad and specific adaptation.
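The core of an AMMI adjustment, removing additive genotype and location effects and then approximating the interaction residual by its leading singular components, can be sketched as follows; the random yield matrix stands in for real trial data.

    import numpy as np

    def ammi_adjust(Y, n_axes=2):
        # Y: genotype-by-location matrix of mean yields. Remove additive main
        # effects, keep the leading IPCA axes of the interaction residual, and
        # rebuild adjusted cell means.
        grand = Y.mean()
        g = Y.mean(axis=1, keepdims=True) - grand      # genotype main effects
        e = Y.mean(axis=0, keepdims=True) - grand      # location main effects
        resid = Y - grand - g - e                      # interaction residuals
        U, s, Vt = np.linalg.svd(resid, full_matrices=False)
        s[n_axes:] = 0.0
        return grand + g + e + (U * s) @ Vt

    rng = np.random.default_rng(3)
    yields = rng.normal(4.0, 0.5, (18, 25))   # 18 genotypes x 25 locations (synthetic)
    print(ammi_adjust(yields).shape)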
Reduction to Outside the Atmosphere and Statistical Tests Used in Geneva Photometry
NASA Technical Reports Server (NTRS)
Rufener, F.
1984-01-01
Conditions for creating a precise photometric system are investigated. The analytical and discriminatory potential of a photometric system results from the localization of the passbands in the spectrum; it also depends critically on the precision attained. This precision is the result of two different types of precautions. Two procedures which contribute efficiently to achieving greater precision are examined; these two methods are known as hardware-related precision and software-related precision.
Statistical issues in the design, conduct and analysis of two large safety studies.
Gaffney, Michael
2016-10-01
The emergence, post approval, of serious medical events, which may be associated with the use of a particular drug or class of drugs, is an important public health and regulatory issue. The best method to address this issue is through a large, rigorously designed safety study. Therefore, it is important to elucidate the statistical issues involved in these large safety studies. Two such studies are PRECISION and EAGLES. PRECISION is the primary focus of this article. PRECISION is a non-inferiority design with a clinically relevant non-inferiority margin. Statistical issues in the design, conduct and analysis of PRECISION are discussed. Quantitative and clinical aspects of the selection of the composite primary endpoint, the determination and role of the non-inferiority margin in a large safety study and the intent-to-treat and modified intent-to-treat analyses in a non-inferiority safety study are shown. Protocol changes that were necessary during the conduct of PRECISION are discussed from a statistical perspective. Issues regarding the complex analysis and interpretation of the results of PRECISION are outlined. EAGLES is presented as a large, rigorously designed safety study when a non-inferiority margin was not able to be determined by a strong clinical/scientific method. In general, when a non-inferiority margin is not able to be determined, the width of the 95% confidence interval is a way to size the study and to assess the cost-benefit of relative trial size. A non-inferiority margin, when able to be determined by a strong scientific method, should be included in a large safety study. Although these studies could not be called "pragmatic," they are examples of best real-world designs to address safety and regulatory concerns. © The Author(s) 2016.
Machine vision system for measuring conifer seedling morphology
NASA Astrophysics Data System (ADS)
Rigney, Michael P.; Kranzler, Glenn A.
1995-01-01
A PC-based machine vision system providing rapid measurement of bare-root tree seedling morphological features has been designed. The system uses backlighting and a 2048-pixel line- scan camera to acquire images with transverse resolutions as high as 0.05 mm for precise measurement of stem diameter. Individual seedlings are manually loaded on a conveyor belt and inspected by the vision system in less than 0.25 seconds. Designed for quality control and morphological data acquisition by nursery personnel, the system provides a user-friendly, menu-driven graphical interface. The system automatically locates the seedling root collar and measures stem diameter, shoot height, sturdiness ratio, root mass length, projected shoot and root area, shoot-root area ratio, and percent fine roots. Sample statistics are computed for each measured feature. Measurements for each seedling may be stored for later analysis. Feature measurements may be compared with multi-class quality criteria to determine sample quality or to perform multi-class sorting. Statistical summary and classification reports may be printed to facilitate the communication of quality concerns with grading personnel. Tests were conducted at a commercial forest nursery to evaluate measurement precision. Four quality control personnel measured root collar diameter, stem height, and root mass length on each of 200 conifer seedlings. The same seedlings were inspected four times by the machine vision system. Machine stem diameter measurement precision was four times greater than that of manual measurements. Machine and manual measurements had comparable precision for shoot height and root mass length.
High precision mass measurements for wine metabolomics
Roullier-Gall, Chloé; Witting, Michael; Gougeon, Régis D.; Schmitt-Kopplin, Philippe
2014-01-01
An overview of the critical steps for the non-targeted Ultra-High Performance Liquid Chromatography coupled with Quadrupole Time-of-Flight Mass Spectrometry (UPLC-Q-ToF-MS) analysis of wine chemistry is given, ranging from the study design, data preprocessing and statistical analyses, to markers identification. UPLC-Q-ToF-MS data was enhanced by the alignment of exact mass data from FTICR-MS, and marker peaks were identified using UPLC-Q-ToF-MS2. In combination with multivariate statistical tools and the annotation of peaks with metabolites from relevant databases, this analytical process provides a fine description of the chemical complexity of wines, as exemplified in the case of red (Pinot noir) and white (Chardonnay) wines from various geographic origins in Burgundy. PMID:25431760
Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan
2010-09-01
Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
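A much-simplified, hedged illustration of the Monte Carlo step is given below: the distributions of assay sensitivity, cross-reacting signal, and log-scale precision error are placeholders, not those estimated from reference fecal samples, and the single algebraic relation used here only gestures at the full set of total-probability equations.

    import numpy as np

    rng = np.random.default_rng(4)
    c_obs = 1.0e4                       # measured marker concentration (illustrative units)

    n = 100_000
    sens = rng.beta(90, 10, n)              # assumed probability of detecting the true host
    cross = rng.beta(2, 98, n) * 2.0e3      # assumed signal from non-target hosts
    log_err = rng.normal(0.0, 0.1, n)       # assumed qPCR precision error (log10 scale)

    # Simplified total-probability relation: E[observed] = sens * true + cross,
    # so each Monte Carlo draw implies one candidate true concentration.
    c_true = (c_obs / 10.0 ** log_err - cross) / sens
    c_true = c_true[c_true > 0]
    lo, med, hi = np.percentile(c_true, [2.5, 50, 97.5])
    print(f"true concentration ~ {med:.0f} (95% interval {lo:.0f} to {hi:.0f})")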
Skeletal Correlates for Body Mass Estimation in Modern and Fossil Flying Birds
Field, Daniel J.; Lynner, Colton; Brown, Christian; Darroch, Simon A. F.
2013-01-01
Scaling relationships between skeletal dimensions and body mass in extant birds are often used to estimate body mass in fossil crown-group birds, as well as in stem-group avialans. However, useful statistical measurements for constraining the precision and accuracy of fossil mass estimates are rarely provided, which prevents the quantification of robust upper and lower bound body mass estimates for fossils. Here, we generate thirteen body mass correlations and associated measures of statistical robustness using a sample of 863 extant flying birds. By providing robust body mass regressions with upper- and lower-bound prediction intervals for individual skeletal elements, we address the longstanding problem of body mass estimation for highly fragmentary fossil birds. We demonstrate that the most precise proxy for estimating body mass in the overall dataset, measured both as the coefficient of determination of ordinary least squares regression and as percent prediction error, is the maximum diameter of the coracoid’s humeral articulation facet (the glenoid). We further demonstrate that this result is consistent among the majority of investigated avian orders (10 out of 18). As a result, we suggest that, in the majority of cases, this proxy may provide the most accurate estimates of body mass for volant fossil birds. Additionally, by presenting statistical measurements of body mass prediction error for thirteen different body mass regressions, this study provides a much-needed quantitative framework for the accurate estimation of body mass and associated ecological correlates in fossil birds. The application of these regressions will enhance the precision and robustness of many mass-based inferences in future paleornithological studies. PMID:24312392
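The kind of regression with explicit prediction bounds described above can be sketched as a log-log ordinary least squares fit; the synthetic "glenoid diameter" data below are invented and the function is a generic illustration, not the paper's thirteen fitted equations.

    import numpy as np
    from scipy.stats import t

    def mass_predictor(x, mass, level=0.95):
        # Log-log OLS of body mass on a skeletal measurement; returns a function
        # giving the point estimate and a prediction interval for a new specimen.
        lx, ly = np.log10(x), np.log10(mass)
        n = len(lx)
        slope, intercept = np.polyfit(lx, ly, 1)
        resid = ly - (slope * lx + intercept)
        s = np.sqrt(resid @ resid / (n - 2))          # residual standard error
        tcrit = t.ppf(0.5 + level / 2, n - 2)
        def predict(x_new):
            lx0 = np.log10(x_new)
            se = s * np.sqrt(1 + 1 / n + (lx0 - lx.mean()) ** 2 / ((lx - lx.mean()) ** 2).sum())
            mid = slope * lx0 + intercept
            return 10 ** mid, 10 ** (mid - tcrit * se), 10 ** (mid + tcrit * se)
        return predict

    rng = np.random.default_rng(5)
    diam = rng.uniform(3, 30, 200)                                        # mm, synthetic
    body = 10 ** (2.3 * np.log10(diam) + 0.8 + rng.normal(0, 0.08, 200))  # g, synthetic
    print(mass_predictor(diam, body)(12.0))   # (estimate, lower, upper)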
NASA Astrophysics Data System (ADS)
Vorobel, Vit; Daya Bay Collaboration
2017-07-01
The Daya Bay Reactor Neutrino Experiment was designed to measure θ13, the smallest mixing angle in the three-neutrino mixing framework, with unprecedented precision. The experiment consists of eight functionally identical detectors placed underground at different baselines from three pairs of nuclear reactors in South China. Since Dec. 2011, the experiment has been running stably for more than 4 years, and has collected the largest reactor anti-neutrino sample to date. Daya Bay is able to greatly improve the precision on θ13 and to make an independent measurement of the effective mass splitting in the electron antineutrino disappearance channel. Daya Bay can also perform a number of other precise measurements, such as a high-statistics determination of the absolute reactor antineutrino flux and spectrum, as well as a search for sterile neutrino mixing, among others. The most recent results from Daya Bay are discussed in this paper, as well as the current status and future prospects of the experiment.
Dynamics of statistical distance: Quantum limits for two-level clocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braunstein, S.L.; Milburn, G.J.
1995-03-01
We study the evolution of statistical distance on the Bloch sphere under unitary and nonunitary dynamics. This corresponds to studying the limits to clock precision for a clock constructed from a two-state system. We find that the initial motion away from pure states under nonunitary dynamics yields the greatest accuracy for a "one-tick" clock; in this case the clock's precision is not limited by the largest frequency of the system.
Time Delay Embedding Increases Estimation Precision of Models of Intraindividual Variability
ERIC Educational Resources Information Center
von Oertzen, Timo; Boker, Steven M.
2010-01-01
This paper investigates the precision of parameters estimated from local samples of time dependent functions. We find that "time delay embedding," i.e., structuring data prior to analysis by constructing a data matrix of overlapping samples, increases the precision of parameter estimates and in turn statistical power compared to standard…
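Time delay embedding itself is just the construction of a data matrix of overlapping samples; a minimal sketch follows, with the embedding dimension and lag as assumed inputs.

    import numpy as np

    def time_delay_embed(x, dim, lag=1):
        # Row i of the returned matrix is (x[i], x[i+lag], ..., x[i+(dim-1)*lag]),
        # i.e. the overlapping-sample data matrix used in time delay embedding.
        x = np.asarray(x)
        n_rows = len(x) - (dim - 1) * lag
        return np.column_stack([x[j * lag: j * lag + n_rows] for j in range(dim)])

    series = np.arange(8.0)                  # short illustrative series
    print(time_delay_embed(series, dim=3))   # six overlapping rows of length three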
NASA Technical Reports Server (NTRS)
Stanev, T.
1986-01-01
The first generation of large and precise detectors, some initially dedicated to the search for nucleon decay, has accumulated significant statistics on neutrinos and high-energy muons. A second generation of even better and bigger detectors is already in operation or at an advanced construction stage. The present set of experimental data on muon groups and neutrinos is qualitatively better than several years ago, and the expectations for the following years are high. Composition studies with underground muon groups, neutrino detection, and expected extraterrestrial neutrino fluxes are discussed.
Sub-sampling genetic data to estimate black bear population size: A case study
Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.
2007-01-01
Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.
Precision Branching Ratio Measurement for the Superallowed 0+ Emitter 62Ga
NASA Astrophysics Data System (ADS)
Finlay, Paul; Svensson, C. E.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Chaffey, A.; Chakrawarthy, R. S.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hyland, B.; Kanungo, R.; Leslie, J. R.; Mattoon, C.; Morton, A. C.; Pearson, C. J.; Ressler, J. J.; Sarazin, F.; Savajols, H.
2007-10-01
A high-precision branching ratio measurement for the superallowed 0+ emitter 62Ga has been made using the 8π γ-ray spectrometer in conjunction with the SCintillating Electron-Positron Tagging ARray (SCEPTAR) as part of an ongoing experimental program in superallowed Fermi beta decay studies at the Isotope Separator and Accelerator (ISAC) facility at TRIUMF in Vancouver, Canada, which delivered a high-purity beam of ~10^4 62Ga/s in December 2005. The present work represents the highest statistics measurement of the 62Ga superallowed branching ratio to date. 25 γ rays emitted following non-superallowed decay branches of 62Ga have been identified and their intensities determined. These data yield a superallowed branching ratio with 10^-4 precision, and our observed branch to the first nonanalogue 0+ state sets a new upper limit on the isospin-mixing correction δC1. By comparing our ft value with the world average Ft, we make stringent tests of the different calculations for the isospin-symmetry-breaking correction δC, which is predicted to be large for 62Ga.
Statistical Symbolic Execution with Informed Sampling
NASA Technical Reports Server (NTRS)
Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco
2014-01-01
Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that the informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
McAlinden, Colm; Khadka, Jyoti; Pesudovs, Konrad
2011-07-01
The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous and should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.
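For reference, the limits-of-agreement calculation itself is only a mean and standard deviation of paired differences; the paired readings below are invented.

    import numpy as np

    def limits_of_agreement(a, b):
        # Bland-Altman bias and 95% limits of agreement between paired
        # measurements from two instruments or clinical tests.
        diff = np.asarray(a, float) - np.asarray(b, float)
        bias, sd = diff.mean(), diff.std(ddof=1)
        return bias, bias - 1.96 * sd, bias + 1.96 * sd

    dev_a = np.array([23.10, 23.85, 24.02, 22.97, 25.10, 23.40])   # e.g. device A (mm)
    dev_b = np.array([23.05, 23.90, 24.10, 22.95, 25.00, 23.52])   # e.g. device B (mm)
    print(limits_of_agreement(dev_a, dev_b))   # (bias, lower LoA, upper LoA)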
Statistical evaluation of rainfall-simulator and erosion testing procedure : final report.
DOT National Transportation Integrated Search
1977-01-01
The specific aims of this study were (1) to supply documentation of statistical repeatability and precision of the rainfall-simulator and to document the statistical repeatability of the soil-loss data when using the previously recommended tentative l...
Calculation of precise firing statistics in a neural network model
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2017-08-01
A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network which works depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model based on the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.
Holloway, Andrew J; Oshlack, Alicia; Diyagama, Dileepa S; Bowtell, David DL; Smyth, Gordon K
2006-01-01
Background Concerns are often raised about the accuracy of microarray technologies and the degree of cross-platform agreement, but there are yet no methods which can unambiguously evaluate precision and sensitivity for these technologies on a whole-array basis. Results A methodology is described for evaluating the precision and sensitivity of whole-genome gene expression technologies such as microarrays. The method consists of an easy-to-construct titration series of RNA samples and an associated statistical analysis using non-linear regression. The method evaluates the precision and responsiveness of each microarray platform on a whole-array basis, i.e., using all the probes, without the need to match probes across platforms. An experiment is conducted to assess and compare four widely used microarray platforms. All four platforms are shown to have satisfactory precision but the commercial platforms are superior for resolving differential expression for genes at lower expression levels. The effective precision of the two-color platforms is improved by allowing for probe-specific dye-effects in the statistical model. The methodology is used to compare three data extraction algorithms for the Affymetrix platforms, demonstrating poor performance for the commonly used proprietary algorithm relative to the other algorithms. For probes which can be matched across platforms, the cross-platform variability is decomposed into within-platform and between-platform components, showing that platform disagreement is almost entirely systematic rather than due to measurement variability. Conclusion The results demonstrate good precision and sensitivity for all the platforms, but highlight the need for improved probe annotation. They quantify the extent to which cross-platform measures can be expected to be less accurate than within-platform comparisons for predicting disease progression or outcome. PMID:17118209
Berg, Wolfgang; Bechler, Robin; Laube, Norbert
2009-01-01
Since its first publication in 2000, the BONN-Risk-Index (BRI) has been successfully used to determine the calcium oxalate (CaOx) crystallization risk from urine samples. To date, a BRI-measuring device, the "Urolizer", has been developed, operating automatically and requiring only a minimum of preparation. Two major objectives were pursued: determination of Urolizer precision, and determination of the influence of 24-h urine storage at moderate temperatures on BRI. 24-h urine samples from 52 CaOx stone-formers were collected. A total of 37 urine samples were used for the investigation of Urolizer precision by performing six independent BRI determinations in series. In total, 30 samples were taken for additional investigation of urine storability. Each sample was measured thrice: directly after collection, after 24-h storage at T=21 degrees C, and after 24-h cooling at T=4 degrees C. Outcomes were statistically tested for identity with regard to the immediately obtained results. Repeat measurements for evaluation of Urolizer precision revealed statistical identity of data (p > 0.05). 24-h storage of urine at both tested temperatures did not significantly affect BRI (p > 0.05). The pilot-run Urolizer shows high analytical reliability. The innovative analysis device may be especially suited for urologists specializing in urolithiasis treatment. The possibility for urine storage at moderate temperatures without loss of analysis quality further demonstrates the applicability of the BRI method.
Marsh, Adam G.; Cottrell, Matthew T.; Goldman, Morton F.
2016-01-01
Epigenetics is a rapidly developing field focused on deciphering chemical fingerprints that accumulate on human genomes over time. As the nascent idea of precision medicine expands to encompass epigenetic signatures of diagnostic and prognostic relevance, there is a need for methodologies that provide high-throughput DNA methylation profiling measurements. Here we report a novel quantification methodology for computationally reconstructing site-specific CpG methylation status from next generation sequencing (NGS) data using methyl-sensitive restriction endonucleases (MSRE). An integrated pipeline efficiently incorporates raw NGS metrics into a statistical discrimination platform to identify functional linkages between shifts in epigenetic DNA methylation and disease phenotypes in samples being analyzed. In this pilot proof-of-concept study we quantify and compare DNA methylation in blood serum of individuals with Parkinson's Disease relative to matched healthy blood profiles. Even with a small study of only six samples, a high degree of statistical discrimination was achieved based on CpG methylation profiles between groups, with 1008 statistically different CpG sites (p < 0.0025, after false discovery rate correction). A methylation load calculation was used to assess higher order impacts of methylation shifts on genes and pathways and most notably identified FGF3, FGF8, HTT, KMTA5, MIR8073, and YWHAG as differentially methylated genes with high relevance to Parkinson's Disease and neurodegeneration (based on PubMed literature citations). Of these, KMTA5 is a histone methyl-transferase gene and HTT is Huntington Disease Protein or Huntingtin, for which there are well established neurodegenerative impacts. The future need for precision diagnostics now requires more tools for exploring epigenetic processes that may be linked to cellular dysfunction and subsequent disease progression. PMID:27853465
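A generic Benjamini-Hochberg step of the kind implied by "after false discovery rate correction" is sketched below; the threshold and the toy p-values are assumptions, and the study's own pipeline may differ in detail.

    import numpy as np

    def benjamini_hochberg(pvals, alpha=0.0025):
        # Return a boolean mask of CpG sites declared significant at FDR level alpha.
        p = np.asarray(pvals)
        order = np.argsort(p)
        ranked = p[order]
        crit = alpha * np.arange(1, p.size + 1) / p.size
        passing = np.nonzero(ranked <= crit)[0]
        keep = np.zeros(p.size, dtype=bool)
        if passing.size:
            keep[order[: passing[-1] + 1]] = True
        return keep

    rng = np.random.default_rng(6)
    pvals = np.concatenate([rng.uniform(0, 1e-4, 50), rng.uniform(0, 1, 5000)])  # toy p-values
    print(int(benjamini_hochberg(pvals).sum()), "sites pass the FDR threshold")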
de Andrade, Jucimara Kulek; de Andrade, Camila Kulek; Komatsu, Emy; Perreault, Hélène; Torres, Yohandra Reyes; da Rosa, Marcos Roberto; Felsner, Maria Lurdes
2017-08-01
Corn syrups, important ingredients used in food and beverage industries, often contain high levels of 5-hydroxymethyl-2-furfural (HMF), a toxic contaminant. In this work, an in-house validation of a difference spectrophotometric method for HMF analysis in corn syrups was developed using sophisticated statistical tools for the first time. The methodology showed excellent analytical performance with good selectivity, linearity (R² = 99.9%, r > 0.99), accuracy and low limits (LOD = 0.10 mg L⁻¹ and LOQ = 0.34 mg L⁻¹). Excellent precision was confirmed by repeatability (RSD = 0.30%) and intermediate precision (RSD = 0.36%) estimates and by the HorRat value (0.07). A detailed study of method precision using a nested design demonstrated that variation sources such as instruments, operators and time did not contribute to the within-laboratory variability of results and, consequently, to its intermediate precision. The developed method is environmentally friendly, fast, cheap and easy to implement, resulting in an attractive alternative for corn syrup quality control in industry and official laboratories. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Aguilar-Arevalo, A. A.; Anderson, C. E.; Bazarko, A. O.; Brice, S. J.; Brown, B. C.; Bugel, L.; Cao, J.; Coney, L.; Conrad, J. M.; Cox, D. C.; Curioni, A.; Djurcic, Z.; Finley, D. A.; Fleming, B. T.; Ford, R.; Garcia, F. G.; Garvey, G. T.; Green, C.; Green, J. A.; Hart, T. L.; Hawker, E.; Imlay, R.; Johnson, R. A.; Karagiorgi, G.; Kasper, P.; Katori, T.; Kobilarcik, T.; Kourbanis, I.; Koutsoliotas, S.; Laird, E. M.; Linden, S. K.; Link, J. M.; Liu, Y.; Liu, Y.; Louis, W. C.; Mahn, K. B. M.; Marsh, W.; McGary, V. T.; McGregor, G.; Metcalf, W.; Meyers, P. D.; Mills, F.; Mills, G. B.; Monroe, J.; Moore, C. D.; Nelson, R. H.; Nienaber, P.; Nowak, J. A.; Osmanov, B.; Ouedraogo, S.; Patterson, R. B.; Perevalov, D.; Polly, C. C.; Prebys, E.; Raaf, J. L.; Ray, H.; Roe, B. P.; Russell, A. D.; Sandberg, V.; Schirato, R.; Schmitz, D.; Shaevitz, M. H.; Shoemaker, F. C.; Smith, D.; Soderberg, M.; Sorel, M.; Spentzouris, P.; Spitz, J.; Stancu, I.; Stefanski, R. J.; Sung, M.; Tanaka, H. A.; Tayloe, R.; Tzanov, M.; van de Water, R.; Wascko, M. O.; White, D. H.; Wilking, M. J.; Yang, H. J.; Zeller, G. P.; Zimmerman, E. D.
2009-08-01
Using high statistics samples of charged-current νμ interactions, the MiniBooNE Collaboration reports a measurement of the single-charged-pion production to quasielastic cross section ratio on mineral oil (CH2), both with and without corrections for hadron reinteractions in the target nucleus. The result is provided as a function of neutrino energy in the range 0.4 GeV…
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
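The threshold-and-count idea can be illustrated with a short sketch: count the fraction of power samples below each of several fixed thresholds in a single pass, invert the exceedance probability, and keep the best-conditioned counter. The exponential noise-power model and the counter-combination rule are assumptions made for this example, not details of the HRMS hardware.

```python
# Minimal sketch of a single-pass threshold-and-count noise power estimator,
# assuming exponentially distributed power samples in a spectrometer bin.
# Several parallel thresholds extend the usable dynamic range; the rule of
# keeping the counter nearest a 50% crossing fraction is illustrative only.
import numpy as np

def threshold_count_estimate(samples, thresholds):
    estimates = []
    for T in thresholds:
        f = np.count_nonzero(samples < T) / samples.size   # one counter per threshold
        if 0.0 < f < 1.0:
            estimates.append((abs(f - 0.5), -T / np.log(1.0 - f)))
    # pick the best-conditioned counter (crossing fraction closest to 1/2)
    return min(estimates)[1] if estimates else np.nan

rng = np.random.default_rng(0)
noise = rng.exponential(scale=3.0, size=100_000)           # true noise power = 3.0
print(threshold_count_estimate(noise, thresholds=[0.5, 2.0, 8.0, 32.0]))
```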
Maximum entropy models as a tool for building precise neural controls.
Savin, Cristina; Tkačik, Gašper
2017-10-01
Neural responses are highly structured, with population activity restricted to a small subset of the astronomical range of possible activity patterns. Characterizing these statistical regularities is important for understanding circuit computation, but challenging in practice. Here we review recent approaches based on the maximum entropy principle used for quantifying collective behavior in neural activity. We highlight recent models that capture population-level statistics of neural data, yielding insights into the organization of the neural code and its biological substrate. Furthermore, the MaxEnt framework provides a general recipe for constructing surrogate ensembles that preserve aspects of the data, but are otherwise maximally unstructured. This idea can be used to generate a hierarchy of controls against which rigorous statistical tests are possible. Copyright © 2017 Elsevier Ltd. All rights reserved.
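As a concrete illustration of the surrogate-ensemble idea, the simplest MaxEnt control is the independent model, which preserves each neuron's firing rate and nothing else. A sketch under that assumption follows; the synchrony statistic is chosen purely for illustration.

```python
# Hedged sketch: rate-preserving surrogates (the independent MaxEnt model) used as a
# null ensemble against which a population-synchrony statistic is tested.
import numpy as np

def independent_surrogate(raster, rng):
    """raster: (n_neurons, n_bins) binary array; returns a rate-preserving shuffle."""
    return np.array([rng.permutation(row) for row in raster])

def synchrony_excess(raster, n_surrogates=1000, seed=0):
    rng = np.random.default_rng(seed)
    observed = np.var(raster.sum(axis=0))            # variance of the population count
    null = np.array([np.var(independent_surrogate(raster, rng).sum(axis=0))
                     for _ in range(n_surrogates)])
    p_value = np.mean(null >= observed)              # fraction of surrogates as extreme
    return observed, null.mean(), p_value
```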
Kovač, Marko; Bauer, Arthur; Ståhl, Göran
2014-01-01
Background, Materials and Methods: To meet the demands of sustainable forest management and international commitments, European nations have designed a variety of forest-monitoring systems for specific needs. While the majority of countries are committed to independent, single-purpose inventorying, a minority of countries have merged their single-purpose forest inventory systems into integrated forest resource inventories. The statistical efficiencies of the Bavarian, Slovene and Swedish integrated forest resource inventory designs are investigated with the various statistical parameters of the variables of growing stock volume, shares of damaged trees, and deadwood volume. The parameters are derived by using the estimators for the given inventory designs. The required sample sizes are derived via the general formula for non-stratified independent samples and via statistical power analyses. The cost effectiveness of the designs is compared via two simple cost-effectiveness ratios. Results: In terms of precision, the most illustrative parameters of the variables are relative standard errors; their values range between 1% and 3% if the variables' variations are low (s% < 80%) and are higher in the case of higher variations. A comparison of the actual and required sample sizes shows that the actual sample sizes were deliberately set high to provide precise estimates for the majority of variables and strata. In turn, the successive inventories are statistically efficient, because they allow detection of the mean changes of variables with powers higher than 90%; the highest precision is attained for the changes of growing stock volume and the lowest for the changes of the shares of damaged trees. Two indicators of cost effectiveness also show that the time input spent measuring one variable decreases with the complexity of inventories. Conclusion: There is an increasing need for credible information on forest resources to be used for decision making and national and international policy making. Such information can be cost-efficiently provided through integrated forest resource inventories. PMID:24941120
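The sample-size step mentioned above can be sketched with the textbook formula for a non-stratified simple random sample, n = (t · CV% / E%)², where CV% is the coefficient of variation and E% the target relative error. The normal approximation for t and the numbers below are illustrative only.

```python
# Sketch of the standard sample-size formula for a non-stratified simple random sample,
# n = (t * CV% / E%)^2; a large-sample z is used in place of the iterated Student t.
from scipy import stats

def required_sample_size(cv_percent, target_error_percent, confidence=0.95):
    t = stats.norm.ppf(0.5 + confidence / 2)       # ~1.96 for 95% confidence
    return int(round((t * cv_percent / target_error_percent) ** 2))

# e.g. growing stock volume with CV ~80% and a 3% target relative error
print(required_sample_size(cv_percent=80, target_error_percent=3))
```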
Developing Statistical Knowledge for Teaching during Design-Based Research
ERIC Educational Resources Information Center
Groth, Randall E.
2017-01-01
Statistical knowledge for teaching is not precisely equivalent to statistics subject matter knowledge. Teachers must know how to make statistics understandable to others as well as understand the subject matter themselves. This dual demand on teachers calls for the development of viable teacher education models. This paper offers one such model,…
Thermospheric temperature measurement technique.
NASA Technical Reports Server (NTRS)
Hueser, J. E.; Fowler, P.
1972-01-01
A method for measurement of temperature in the earth's lower thermosphere from a high-velocity probe is described. An undisturbed atmospheric sample is admitted to the instrument by means of a free molecular flow inlet system of skimmers which avoids surface collisions of the molecules prior to detection. Measurement of the time-of-flight distribution of an initially well-localized group of nitrogen metastable molecular states produced in an open, crossed electron-molecular beam source yields information on the atmospheric temperature. It is shown that for high vehicle velocities, the time-of-flight distribution of the metastable flux is a sensitive indicator of atmospheric temperature. The temperature measurement precision should be greater than 94% at the 99% confidence level over the range of altitudes from 120-170 km. These precision and altitude range estimates are based on the statistical consideration of the counting rates achieved with a multichannel analyzer using realistic values for system parameters.
Nucleon Charges from 2+1+1-flavor HISQ and 2+1-flavor clover lattices
Gupta, Rajan
2016-07-24
Precise estimates of the nucleon charges g_A, g_S and g_T are needed in many phenomenological analyses of SM and BSM physics. In this talk, we present results from two sets of calculations using clover fermions on 9 ensembles of 2+1+1-flavor HISQ and 4 ensembles of 2+1-flavor clover lattices. In addition, we show that high statistics can be obtained cost-effectively using the truncated solver method with bias correction and the coherent source sequential propagator technique. By performing simulations at 4–5 values of the source-sink separation t_sep, we demonstrate control over excited-state contamination using 2- and 3-state fits. Using the high-precision 2+1+1-flavor data, we perform a simultaneous fit in a, M_π and M_π L to obtain our final results for the charges.
Multicharged and/or water-soluble fluorescent dendrimers: properties and uses.
Caminade, Anne-Marie; Hameau, Aurélien; Majoral, Jean-Pierre
2009-09-21
The fluorescence of water-soluble dendritic compounds can be due to the whole structure or to fluorophores used as core, as peripheral groups, or as branches. Highly sophisticated precisely defined structures with other functional groups usable for material or biological purposes have been synthesised, but many recent examples have shown that dendrimers can be used as versatile platforms for statistically linking various types of functional groups.
Precision measurements of g1 of the proton and of the deuteron with 6 GeV electrons
NASA Astrophysics Data System (ADS)
Prok, Y.; Bosted, P.; Kvaltine, N.; Adhikari, K. P.; Adikaram, D.; Aghasyan, M.; Amaryan, M. J.; Anderson, M. D.; Anefalos Pereira, S.; Avakian, H.; Baghdasaryan, H.; Ball, J.; Baltzell, N. A.; Battaglieri, M.; Biselli, A. S.; Bono, J.; Briscoe, W. J.; Brock, J.; Brooks, W. K.; Bültmann, S.; Burkert, V. D.; Carlin, C.; Carman, D. S.; Celentano, A.; Chandavar, S.; Colaneri, L.; Cole, P. L.; Contalbrigo, M.; Cortes, O.; Crabb, D.; Crede, V.; D'Angelo, A.; Dashyan, N.; De Vita, R.; De Sanctis, E.; Deur, A.; Djalali, C.; Dodge, G. E.; Doughty, D.; Dupre, R.; El Alaoui, A.; El Fassi, L.; Elouadrhiri, L.; Fedotov, G.; Fegan, S.; Fersch, R.; Fleming, J. A.; Forest, T. A.; Garçon, M.; Garillon, B.; Gevorgyan, N.; Ghandilyan, Y.; Gilfoyle, G. P.; Girod, F. X.; Giovanetti, K. L.; Goetz, J. T.; Gohn, W.; Gothe, R. W.; Griffioen, K. A.; Guegan, B.; Guler, N.; Hafidi, K.; Hanretty, C.; Harrison, N.; Hattawy, M.; Hicks, K.; Ho, D.; Holtrop, M.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Isupov, E. L.; Jawalkar, S.; Jiang, X.; Jo, H. S.; Joo, K.; Kalantarians, N.; Keith, C.; Keller, D.; Khandaker, M.; Kim, A.; Kim, W.; Klein, A.; Klein, F. J.; Koirala, S.; Kubarovsky, V.; Kuhn, S. E.; Kuleshov, S. V.; Lenisa, P.; Livingston, K.; Lu, H. Y.; MacGregor, I. J. D.; Markov, N.; Mayer, M.; McKinnon, B.; Meekins, D.; Mineeva, T.; Mirazita, M.; Mokeev, V.; Montgomery, R. A.; Moutarde, H.; Movsisyan, A.; Munevar, E.; Munoz Camacho, C.; Nadel-Turonski, P.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Osipenko, M.; Ostrovidov, A. I.; Pappalardo, L. L.; Paremuzyan, R.; Park, K.; Peng, P.; Phillips, J. J.; Pierce, J.; Pisano, S.; Pogorelko, O.; Pozdniakov, S.; Price, J. W.; Procureur, S.; Protopopescu, D.; Puckett, A. J. R.; Raue, B. A.; Rimal, D.; Ripani, M.; Rizzo, A.; Rosner, G.; Rossi, P.; Roy, P.; Sabatié, F.; Saini, M. S.; Salgado, C.; Schott, D.; Schumacher, R. A.; Seder, E.; Sharabian, Y. G.; Simonyan, A.; Smith, C.; Smith, G.; Sober, D. I.; Sokhan, D.; Stepanyan, S. S.; Stepanyan, S.; Strakovsky, I. I.; Strauch, S.; Sytnik, V.; Taiuti, M.; Tang, W.; Tkachenko, S.; Ungaro, M.; Vernarsky, B.; Vlassov, A. V.; Voskanyan, H.; Voutier, E.; Walford, N. K.; Watts, D. P.; Weinstein, L. B.; Zachariou, N.; Zana, L.; Zhang, J.; Zhao, B.; Zhao, Z. W.; Zonta, I.; CLAS Collaboration
2014-08-01
The inclusive polarized structure functions of the proton and deuteron, g_1^p and g_1^d, were measured with high statistical precision using polarized 6 GeV electrons incident on a polarized ammonia target in Hall B at Jefferson Laboratory. Electrons scattered at laboratory angles between 18 and 45 degrees were detected using the CEBAF Large Acceptance Spectrometer (CLAS). For the usual deep inelastic region kinematics, Q^2 > 1 GeV^2 and the final-state invariant mass W > 2 GeV, the ratio of polarized to unpolarized structure functions g_1/F_1 is found to be nearly independent of Q^2 at fixed x. Significant resonant structure is apparent at values of W up to 2.3 GeV. In the framework of perturbative quantum chromodynamics, the high-W results can be used to better constrain the polarization of quarks and gluons in the nucleon, as well as high-twist contributions.
CALET On-orbit Calibration and Performance
NASA Astrophysics Data System (ADS)
Akaike, Yosui; Calet Collaboration
2017-01-01
The CALorimetric Electron Telescope (CALET) was installed on the International Space Station (ISS) in August 2015, and has been accumulating high-statistics data to perform high-precision measurements of cosmic ray electrons, nuclei and gamma-rays. CALET has an imaging and a fully active calorimeter, with a total thickness of 30 radiation lengths and 1.3 proton interaction lengths, that allow measurements well into the TeV energy region with excellent energy resolution, 2% for electrons above 100 GeV, and powerful particle identification. CALET's performance has been confirmed by Monte Carlo simulations and beam tests. In order to maximize the detector performance and maintain high resolution over long-term observation on the ISS, precise calibration of each detector component is required. We have therefore evaluated the detector response and monitored it by using penetrating cosmic ray events such as protons and helium nuclei. In this paper, we will present the on-orbit calibration and detector performance of CALET on the ISS. This research was supported by JSPS postdoctoral fellowships for research abroad.
Precision measurements of g1 of the proton and the deuteron with 6 GeV electrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prok, Yelena; Bosted, Peter; Kvaltine, Nicholas
2014-08-01
The inclusive polarized structure functions of the proton and deuteron, g_1^p and g_1^d, were measured with high statistical precision using polarized 6 GeV electrons incident on a polarized ammonia target in Hall B at Jefferson Laboratory. Electrons scattered at lab angles between 18 and 45 degrees were detected using the CEBAF Large Acceptance Spectrometer (CLAS). For the usual DIS kinematics, Q^2 > 1 GeV^2 and the final-state invariant mass W > 2 GeV, the ratio of polarized to unpolarized structure functions g_1/F_1 is found to be nearly independent of Q^2 at fixed x. Significant resonant structure is apparent at values of W up to 2.3 GeV. In the framework of perturbative QCD, the high-W results can be used to better constrain the polarization of quarks and gluons in the nucleon, as well as high-twist contributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoegg, Edward D.; Marcus, R. Kenneth; Hager, George J.
RATIONALE: The field of highly accurate and precise isotope ratio (IR) analysis has been dominated by inductively coupled plasma and thermal ionization mass spectrometers. While these instruments are considered the gold standard for IR analysis, the International Atomic Energy Agency desires a field-deployable instrument capable of accurately and precisely measuring U isotope ratios. METHODS: The proposed system interfaces the liquid sampling – atmospheric pressure glow discharge (LS-APGD) ion source with a high resolution Exactive Orbitrap mass spectrometer. With this experimental setup, certified U isotope standards and unknown samples were analyzed, and the accuracy and precision of the system were then determined. RESULTS: The LS-APGD/Exactive instrument measures a certified reference material of natural U (235U/238U = 0.007258) as 0.007041 with a relative standard deviation of 0.158%, meeting the International Target Values for Uncertainty for the destructive analysis of U. Additionally, when three unknowns were measured and compared to the results from an ICP multi-collector instrument, there was no statistical difference between the two instruments. CONCLUSIONS: The LS-APGD/Orbitrap system, while still in the preliminary stages of development, offers highly accurate and precise IR analysis that suggests a paradigm shift in the world of IR analysis. Furthermore, the portability of the LS-APGD as an elemental ion source combined with the low overhead and small size of the Orbitrap suggests that the instrumentation is capable of being field deployable. With liquid sampling glow discharge-Orbitrap MS, isotope ratio and precision performance improves with rejection of concomitant ion species.
Object detection in cinematographic video sequences for automatic indexing
NASA Astrophysics Data System (ADS)
Stauder, Jurgen; Chupeau, Bertrand; Oisel, Lionel
2003-06-01
This paper presents an object detection framework applied to cinematographic post-processing of video sequences. Post-processing is done after production and before editing. At the beginning of each shot of a video, a slate (also called clapperboard) is shown. The slate contains notably an electronic audio timecode that is necessary for audio-visual synchronization. This paper presents an object detection framework to detect slates in video sequences for automatic indexing and post-processing. It is based on five steps. The first two steps aim to drastically reduce the video data to be analyzed. They ensure a high recall rate but have low precision. The first step detects images at the beginning of a shot possibly showing a slate, while the second step searches these images for candidate regions with a color distribution similar to slates. The objective is to not miss any slate while eliminating long parts of video without slate appearance. The third and fourth steps are statistical classification and pattern matching to detect and precisely locate slates in candidate regions. These steps ensure high recall rate and high precision. The objective is to detect slates with very few false alarms to minimize interactive corrections. In a last step, electronic timecodes are read from slates to automate audio-visual synchronization. The presented slate detector has a recall rate of 89% and a precision of 97.5%. By temporal integration, much more than 89% of shots in dailies are detected. By timecode coherence analysis, the precision can be raised too. Issues for future work are to accelerate the system to be faster than real-time and to extend the framework to several slate types.
Díaz-González, Lorena; Quiroz-Ruiz, Alfredo
2014-01-01
Using highly precise and accurate Monte Carlo simulations of 20,000,000 replications and 102 independent simulation experiments with extremely low simulation errors and total uncertainties, we evaluated the performance of four single outlier discordancy tests (Grubbs test N2, Dixon test N8, skewness test N14, and kurtosis test N15) for normal samples of sizes 5 to 20. Statistical contaminations of a single observation resulting from parameters called δ from ±0.1 up to ±20 for modeling the slippage of central tendency or ε from ±1.1 up to ±200 for slippage of dispersion, as well as no contamination (δ = 0 and ε = ±1), were simulated. Because of the use of precise and accurate random and normally distributed simulated data, very large replications, and a large number of independent experiments, this paper presents a novel approach for precise and accurate estimations of power functions of four popular discordancy tests and, therefore, should not be considered as a simple simulation exercise unrelated to probability and statistics. From both criteria of the Power of Test proposed by Hayes and Kinsella and the Test Performance Criterion of Barnett and Lewis, Dixon test N8 performs less well than the other three tests. The overall performance of these four tests could be summarized as N2≅N15 > N14 > N8. PMID:24737992
Verma, Surendra P; Díaz-González, Lorena; Rosales-Rivera, Mauricio; Quiroz-Ruiz, Alfredo
2014-01-01
Using highly precise and accurate Monte Carlo simulations of 20,000,000 replications and 102 independent simulation experiments with extremely low simulation errors and total uncertainties, we evaluated the performance of four single outlier discordancy tests (Grubbs test N2, Dixon test N8, skewness test N14, and kurtosis test N15) for normal samples of sizes 5 to 20. Statistical contaminations of a single observation resulting from parameters called δ from ±0.1 up to ±20 for modeling the slippage of central tendency or ε from ±1.1 up to ±200 for slippage of dispersion, as well as no contamination (δ = 0 and ε = ±1), were simulated. Because of the use of precise and accurate random and normally distributed simulated data, very large replications, and a large number of independent experiments, this paper presents a novel approach for precise and accurate estimations of power functions of four popular discordancy tests and, therefore, should not be considered as a simple simulation exercise unrelated to probability and statistics. From both criteria of the Power of Test proposed by Hayes and Kinsella and the Test Performance Criterion of Barnett and Lewis, Dixon test N8 performs less well than the other three tests. The overall performance of these four tests could be summarized as N2≅N15 > N14 > N8.
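A scaled-down sketch of this kind of Monte Carlo power estimation is given below for a Grubbs-type extreme studentized deviate statistic (not necessarily the exact N2 variant used in the paper). Critical values are simulated under the null rather than taken from tables, and the replication counts are far smaller than the 20,000,000 used above.

```python
# Minimal Monte Carlo sketch: power of a Grubbs-type single-outlier test under
# slippage of central tendency by delta; replication counts are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def grubbs_stat(x):
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

def critical_value(n, alpha=0.05, reps=100_000):
    null_stats = np.array([grubbs_stat(rng.standard_normal(n)) for _ in range(reps)])
    return np.quantile(null_stats, 1 - alpha)

def power(n, delta, alpha=0.05, reps=20_000):
    crit = critical_value(n, alpha)
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        x[0] += delta                    # contaminate a single observation
        hits += grubbs_stat(x) > crit
    return hits / reps

print(power(n=10, delta=4.0))            # estimated power for n = 10, delta = 4
```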
NASA Astrophysics Data System (ADS)
Rosenheim, B. E.; Firesinger, D.; Roberts, M. L.; Burton, J. R.; Khan, N.; Moyer, R. P.
2016-12-01
Radiocarbon (14C) sediment core chronologies benefit from a high density of dates, even when precision of individual dates is sacrificed. This is demonstrated by a combined approach of rapid 14C analysis of CO2 gas generated from carbonates and organic material coupled with Bayesian statistical modeling. Analysis of 14C is facilitated by the gas ion source on the Continuous Flow Accelerator Mass Spectrometry (CFAMS) system at the Woods Hole Oceanographic Institution's National Ocean Sciences Accelerator Mass Spectrometry facility. This instrument is capable of producing a 14C determination of +/- 100 14C y precision every 4-5 minutes, with limited sample handling (dissolution of carbonates and/or combustion of organic carbon in evacuated containers). Rapid analysis allows over-preparation of samples to include replicates at each depth and/or comparison of different sample types at particular depths in a sediment or peat core. Analysis priority is given to depths that have the least chronologic precision as determined by Bayesian modeling of the chronology of calibrated ages. Use of such a statistical approach to determine the order in which samples are run ensures that the chronology constantly improves so long as material is available for the analysis of chronologic weak points. Ultimately, accuracy of the chronology is determined by the material that is actually being dated, and our combined approach allows testing of different constituents of the organic carbon pool and the carbonate minerals within a core. We will present preliminary results from a deep-sea sediment core abundant in deep-sea foraminifera as well as coastal wetland peat cores to demonstrate statistical improvements in sediment- and peat-core chronologies obtained by increasing the quantity and decreasing the quality of individual dates.
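The prioritization rule described above, dating next whichever depth the age-depth model currently constrains worst, reduces to a small piece of bookkeeping. The sketch below assumes the posterior age uncertainties come from an external Bayesian age-depth run (e.g. the kind of model mentioned above) and uses invented numbers.

```python
# Toy sketch of the iterative prioritization: send to the AMS the depth whose modeled
# age is least precise and that still has material left. The age-depth model itself is
# not reimplemented here; posterior_sd_yr is assumed to come from such a run.
def next_depth_to_date(depths_cm, posterior_sd_yr, material_available):
    candidates = [(sd, d) for d, sd, ok in
                  zip(depths_cm, posterior_sd_yr, material_available) if ok]
    return max(candidates)[1] if candidates else None

print(next_depth_to_date([10, 50, 90, 130],
                         posterior_sd_yr=[120, 340, 610, 280],
                         material_available=[True, True, False, True]))   # -> 50
```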
Huang, Yang; Lowe, Henry J; Klein, Dan; Cucina, Russell J
2005-01-01
The aim of this study was to develop and evaluate a method of extracting noun phrases with full phrase structures from a set of clinical radiology reports using natural language processing (NLP) and to investigate the effects of using the UMLS(R) Specialist Lexicon to improve noun phrase identification within clinical radiology documents. The noun phrase identification (NPI) module is composed of a sentence boundary detector, a statistical natural language parser trained on a nonmedical domain, and a noun phrase (NP) tagger. The NPI module processed a set of 100 XML-represented clinical radiology reports in Health Level 7 (HL7)(R) Clinical Document Architecture (CDA)-compatible format. Computed output was compared with manual markups made by four physicians and one author for maximal (longest) NP and those made by one author for base (simple) NP, respectively. An extended lexicon of biomedical terms was created from the UMLS Specialist Lexicon and used to improve NPI performance. The test set was 50 randomly selected reports. The sentence boundary detector achieved 99.0% precision and 98.6% recall. The overall maximal NPI precision and recall were 78.9% and 81.5% before using the UMLS Specialist Lexicon and 82.1% and 84.6% after. The overall base NPI precision and recall were 88.2% and 86.8% before using the UMLS Specialist Lexicon and 93.1% and 92.6% after, reducing false-positives by 31.1% and false-negatives by 34.3%. The sentence boundary detector performs excellently. After the adaptation using the UMLS Specialist Lexicon, the statistical parser's NPI performance on radiology reports increased to levels comparable to the parser's native performance in its newswire training domain and to that reported by other researchers in the general nonmedical domain.
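For reference, the precision and recall figures quoted above follow the usual bookkeeping against manual markups. A minimal sketch, assuming exact-span matching of noun phrases, is shown below.

```python
# Simple sketch of precision/recall scoring of system noun-phrase spans against a
# gold standard; the exact-span matching criterion is an assumption for the example.
def precision_recall(system_spans, gold_spans):
    system, gold = set(system_spans), set(gold_spans)
    tp = len(system & gold)                         # true positives
    precision = tp / len(system) if system else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

p, r = precision_recall({(0, 3), (5, 9), (12, 20)}, {(0, 3), (5, 9), (12, 18)})
print(f"precision={p:.2f} recall={r:.2f}")          # 0.67, 0.67
```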
Moraes, Carolina Borsoi; Yang, Gyongseon; Kang, Myungjoo; Freitas-Junior, Lucio H.; Hansen, Michael A. E.
2014-01-01
We present a customized high content (image-based) and high throughput screening algorithm for the quantification of Trypanosoma cruzi infection in host cells. Based solely on DNA staining and single-channel images, the algorithm precisely segments and identifies the nuclei and cytoplasm of mammalian host cells as well as the intracellular parasites infecting the cells. The algorithm outputs statistical parameters including the total number of cells, the number of infected cells, the total number of parasites per image, the average number of parasites per infected cell, and the infection ratio (defined as the number of infected cells divided by the total number of cells). Accurate and precise estimation of these parameters allows quantification of both compound activity against parasites and compound cytotoxicity, thus eliminating the need for an additional toxicity assay and thereby reducing screening costs significantly. We validate the performance of the algorithm using two known drugs against T. cruzi: Benznidazole and Nifurtimox. We also checked the performance of the cell detection against manual inspection of the images. Finally, from the titration of the two compounds, we confirm that the algorithm provides the expected half maximal effective concentration (EC50) of the anti-T. cruzi activity. PMID:24503652
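Two of the outputs described above, the per-image infection ratio and an EC50 from a compound titration, can be sketched as follows. The four-parameter logistic (Hill) form, the starting guesses and the data points are illustrative, not values from the study.

```python
# Hedged sketch: infection ratio per image and EC50 from a four-parameter logistic fit.
import numpy as np
from scipy.optimize import curve_fit

def infection_ratio(n_infected_cells, n_total_cells):
    return n_infected_cells / n_total_cells

def hill(conc, bottom, top, ec50, slope):
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** slope)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])        # µM, invented titration
ratio = np.array([0.52, 0.50, 0.45, 0.33, 0.18, 0.07, 0.04])   # infection ratios
popt, _ = curve_fit(hill, conc, ratio, p0=[0.05, 0.5, 0.3, 1.0])
print(f"EC50 ~ {popt[2]:.2f} uM")
```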
What to use to express the variability of data: Standard deviation or standard error of mean?
Barde, Mohini P; Barde, Prajakt J
2012-07-01
Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and the Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. SEM quantifies uncertainty in the estimate of the mean, whereas SD indicates dispersion of the data around the mean. As readers are generally interested in the variability within the sample, descriptive data should be summarized with the SD. Use of the SEM should be limited to computing confidence intervals (CI), which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
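A minimal numerical illustration of the distinction: the SD describes the spread of the observations, the SEM (SD/√n) describes the precision of the mean, and the confidence interval is built from the SEM. The data below are invented.

```python
# Sketch: SD vs SEM vs 95% confidence interval for a small invented sample.
import numpy as np
from scipy import stats

x = np.array([5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 5.2, 4.9])
n = x.size
sd = x.std(ddof=1)                       # variability of the observations
sem = sd / np.sqrt(n)                    # uncertainty of the mean
ci = stats.t.interval(0.95, df=n - 1, loc=x.mean(), scale=sem)
print(f"mean={x.mean():.2f}  SD={sd:.2f}  SEM={sem:.2f}  "
      f"95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
```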
Statistical analysis of radioimmunoassay. In comparison with bioassay (in Japanese)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakano, R.
1973-01-01
Using the data of RIA (radioimmunoassay), statistical procedures for dealing with two problems, linearization of the dose-response curve and calculation of relative potency, were described. There were three methods for linearization of the dose-response curve of RIA. In each method, the following parameters were shown on the horizontal and vertical axes: dose x, (B/T)^-1; c/(x + c), B/T (C: dose which makes B/T 50%); log x, logit B/T. Among them, the last method seems to be most practical. The statistical procedures for bioassay were employed for calculating the relative potency of unknown samples compared to the standard samples from dose-response curves of standard and unknown samples using the regression coefficient. It is desirable that relative potency is calculated by plotting more than 5 points in the standard curve and plotting more than 2 points in unknown samples. For examining the statistical limit of precision of measurement, LH activity of gonadotropin in urine was measured and the relative potency, precision coefficient and the upper and lower limits of relative potency at the 95% confidence limit were calculated. On the other hand, bioassay (by the ovarian ascorbic acid reduction method and the anterior lobe of prostate weighing method) was done on the same samples, and the precision was compared with that of RIA. In these examinations, the upper and lower limits of the relative potency at the 95% confidence limit were near each other, while in bioassay, a considerable difference was observed between the upper and lower limits. The necessity of standardization and systematization of the statistical procedures for increasing the precision of RIA was pointed out. (JA)
NASA Astrophysics Data System (ADS)
Sikora, Mark; Compton@HIGS Team
2017-01-01
The electric (αn) and magnetic (βn) polarizabilities of the neutron are fundamental properties arising from its internal structure which describe the nucleon's response to applied electromagnetic fields. Precise measurements of the polarizabilities provide crucial constraints on models of Quantum Chromodynamics (QCD) in the low energy regime such as Chiral Effective Field Theories as well as emerging ab initio calculations from lattice-QCD. These values also contribute the most uncertainty to theoretical determinations of the proton-neutron mass difference. Historically, the experimental challenges to measuring αn and βn have been due to the difficulty in obtaining suitable targets and sufficiently intense beams, leading to significant statistical uncertainties. To address these issues, a program of Compton scattering experiments on the deuteron is underway at the High Intensity Gamma Source (HI γS) at Duke University with the aim of providing the world's most precise measurement of αn and βn. We report measurements of the Compton scattering differential cross section obtained at an incident photon energy of 65 MeV and discuss the sensitivity of these data to the polarizabilities.
NASA Astrophysics Data System (ADS)
Sikora, Mark
2016-09-01
The electric (αn) and magnetic (βn) polarizabilities of the neutron are fundamental properties arising from its internal structure which describe the nucleon's response to applied electromagnetic fields. Precise measurements of the polarizabilities provide crucial constraints on models of Quantum Chromodynamics (QCD) in the low energy regime such as Chiral Effective Field Theories as well as emerging ab initio calculations from lattice-QCD. These values also contribute the most uncertainty to theoretical determinations of the proton-neutron mass difference. Historically, the experimental challenges to measuring αn and βn have been due to the difficulty in obtaining suitable targets and sufficiently intense beams, leading to significant statistical uncertainties. To address these issues, a program of Compton scattering experiments on the deuteron is underway at the High Intensity Gamma Source (HI γS) at Duke University with the aim of providing the world's most precise measurement of αn and βn. We report measurements of the Compton scattering differential cross section obtained at incident photon energies of 65 and 85 MeV and discuss the sensitivity of these data to the polarizabilities.
THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Habib, Salman; Biswas, Rahul
2016-04-01
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
The mira-titan universe. Precision predictions for dark energy surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Bingham, Derek; Lawrence, Earl
2016-03-28
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
Precision measurement of the nuclear polarization in laser-cooled, optically pumped 37 K
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenker, B.; Behr, J. A.; Melconian, D.
We report a measurement of the nuclear polarization of laser-cooled, optically pumped 37K atoms which will allow us to precisely measure angular correlation parameters in the β+ decay of the same atoms. These results will be used to test the V − A framework of the weak interaction at high precision. At the TRIUMF neutral atom trap (TRINAT), a magneto-optical trap confines and cools neutral 37K atoms and optical pumping spin-polarizes them. We monitor the nuclear polarization of the same atoms that are decaying in situ by photoionizing a small fraction of the partially polarized atoms and then use the standard optical Bloch equations to model their population distribution. We obtain an average nuclear polarization of P̄ = 0.9913 ± 0.0009, which is significantly more precise than previous measurements with this technique. Since our current measurement of the β-asymmetry has 0.2% statistical uncertainty, the polarization measurement reported here will not limit its overall uncertainty. This result also demonstrates the capability to measure the polarization to <0.1%, allowing for a measurement of angular correlation parameters to this level of precision, which would be competitive in searches for new physics.
Precision measurement of the nuclear polarization in laser-cooled, optically pumped 37 K
Fenker, B.; Behr, J. A.; Melconian, D.; ...
2016-07-13
We report a measurement of the nuclear polarization of laser-cooled, optically pumped 37K atoms which will allow us to precisely measure angular correlation parameters in the β+ decay of the same atoms. These results will be used to test the V − A framework of the weak interaction at high precision. At the TRIUMF neutral atom trap (TRINAT), a magneto-optical trap confines and cools neutral 37K atoms and optical pumping spin-polarizes them. We monitor the nuclear polarization of the same atoms that are decaying in situ by photoionizing a small fraction of the partially polarized atoms and then use the standard optical Bloch equations to model their population distribution. We obtain an average nuclear polarization of P̄ = 0.9913 ± 0.0009, which is significantly more precise than previous measurements with this technique. Since our current measurement of the β-asymmetry has 0.2% statistical uncertainty, the polarization measurement reported here will not limit its overall uncertainty. This result also demonstrates the capability to measure the polarization to <0.1%, allowing for a measurement of angular correlation parameters to this level of precision, which would be competitive in searches for new physics.
D'Agostino, M F; Sanz, J; Martínez-Castro, I; Giuffrè, A M; Sicari, V; Soria, A C
2014-07-01
Statistical analysis has been used for the first time to evaluate the dispersion of quantitative data in the solid-phase microextraction (SPME) followed by gas chromatography-mass spectrometry (GC-MS) analysis of blackberry (Rubus ulmifolius Schott) volatiles with the aim of improving their precision. Experimental and randomly simulated data were compared using different statistical parameters (correlation coefficients, Principal Component Analysis loadings and eigenvalues). Non-random factors were shown to significantly contribute to total dispersion; groups of volatile compounds could be associated with these factors. A significant improvement of precision was achieved when considering percent concentration ratios, rather than percent values, among those blackberry volatiles with a similar dispersion behavior. As novelty over previous references, and to complement this main objective, the presence of non-random dispersion trends in data from simple blackberry model systems was evidenced. Although the influence of the type of matrix on data precision was proved, the possibility of a better understanding of the dispersion patterns in real samples was not possible from model systems. The approach here used was validated for the first time through the multicomponent characterization of Italian blackberries from different harvest years. Copyright © 2014 Elsevier B.V. All rights reserved.
Comparison of Accuracy Between a Conventional and Two Digital Intraoral Impression Techniques.
Malik, Junaid; Rodriguez, Jose; Weisbloom, Michael; Petridis, Haralampos
To compare the accuracy (ie, precision and trueness) of full-arch impressions fabricated using either a conventional polyvinyl siloxane (PVS) material or one of two intraoral optical scanners. Full-arch impressions of a reference model were obtained using addition silicone impression material (Aquasil Ultra; Dentsply Caulk) and two optical scanners (Trios, 3Shape, and CEREC Omnicam, Sirona). Surface matching software (Geomagic Control, 3D Systems) was used to superimpose the scans within groups to determine the mean deviations in precision and trueness (μm) between the scans, which were calculated for each group and compared statistically using one-way analysis of variance with post hoc Bonferroni (trueness) and Games-Howell (precision) tests (IBM SPSS ver 24, IBM UK). Qualitative analysis was also carried out from three-dimensional maps of differences between scans. Means and standard deviations (SD) of deviations in precision for conventional, Trios, and Omnicam groups were 21.7 (± 5.4), 49.9 (± 18.3), and 36.5 (± 11.12) μm, respectively. Means and SDs for deviations in trueness were 24.3 (± 5.7), 87.1 (± 7.9), and 80.3 (± 12.1) μm, respectively. The conventional impression showed statistically significantly improved mean precision (P < .006) and mean trueness (P < .001) compared to both digital impression procedures. There were no statistically significant differences in precision (P = .153) or trueness (P = .757) between the digital impressions. The qualitative analysis revealed local deviations along the palatal surfaces of the molars and incisal edges of the anterior teeth of < 100 μm. Conventional full-arch PVS impressions exhibited improved mean accuracy compared to two direct optical scanners. No significant differences were found between the two digital impression methods.
First high-statistics and high-resolution recoil-ion data from the WITCH retardation spectrometer
NASA Astrophysics Data System (ADS)
Finlay, P.; Breitenfeldt, M.; Porobić, T.; Wursten, E.; Ban, G.; Beck, M.; Couratin, C.; Fabian, X.; Fléchard, X.; Friedag, P.; Glück, F.; Herlert, A.; Knecht, A.; Kozlov, V. Y.; Liénard, E.; Soti, G.; Tandecki, M.; Traykov, E.; Van Gorp, S.; Weinheimer, Ch.; Zákoucký, D.; Severijns, N.
2016-07-01
The first high-statistics and high-resolution data set for the integrated recoil-ion energy spectrum following the β^+ decay of 35Ar has been collected with the WITCH retardation spectrometer located at CERN-ISOLDE. Over 25 million recoil-ion events were recorded on a large-area multichannel plate (MCP) detector with a time-stamp precision of 2ns and position resolution of 0.1mm due to the newly upgraded data acquisition based on the LPC Caen FASTER protocol. The number of recoil ions was measured for more than 15 different settings of the retardation potential, complemented by dedicated background and half-life measurements. Previously unidentified systematic effects, including an energy-dependent efficiency of the main MCP and a radiation-induced time-dependent background, have been identified and incorporated into the analysis. However, further understanding and treatment of the radiation-induced background requires additional dedicated measurements and remains the current limiting factor in extracting a beta-neutrino angular correlation coefficient for 35Ar decay using the WITCH spectrometer.
Fast and precise dense grid size measurement method based on coaxial dual optical imaging system
NASA Astrophysics Data System (ADS)
Guo, Jiping; Peng, Xiang; Yu, Jiping; Hao, Jian; Diao, Yan; Song, Tao; Li, Ameng; Lu, Xiaowei
2015-10-01
Test sieves with a dense grid structure are widely used in many fields, and accurate grid size calibration is critical for successful grading analysis and test sieving. However, traditional calibration methods suffer from low measurement efficiency and a shortage of sampled grids, which can lead to quality-judgment risk. Here, a fast and precise test sieve inspection method is presented. First, a coaxial imaging system with low and high optical magnification probes is designed to capture grid images of the test sieve. Then, a scaling ratio between the low and high magnification probes is obtained from corresponding grids in the captured images. With this, all grid dimensions in the low magnification image can be obtained with high accuracy by measuring a few corresponding grids in the high magnification image. Finally, by scanning the stage of the tri-axis platform of the measuring apparatus, the whole surface of the test sieve can be quickly inspected. Experimental results show that the proposed method can measure test sieves with higher efficiency compared to traditional methods: it can measure 0.15 million grids (grid size 0.1 mm) within only 60 seconds, and it can precisely measure grid sizes ranging from 20 μm to 5 mm. In short, the presented method can calibrate the grid size of a test sieve automatically with high efficiency and accuracy, so that surface evaluation based on statistical methods can be effectively implemented and the quality judgment will be more reasonable.
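The cross-calibration step at the heart of the method, deriving a scale factor from grids visible in both probes and applying it to every grid in the low-magnification image, can be sketched as follows; the pixel and micrometre values are illustrative.

```python
# Hedged sketch of the low/high magnification cross-calibration described above.
import numpy as np

def scale_factor(high_mag_sizes_um, low_mag_sizes_px):
    """Micrometres per pixel, estimated from the same grids seen by both probes."""
    return np.mean(np.asarray(high_mag_sizes_um) / np.asarray(low_mag_sizes_px))

k = scale_factor(high_mag_sizes_um=[100.4, 99.7, 100.9],
                 low_mag_sizes_px=[25.1, 24.9, 25.3])
all_low_mag_px = np.array([24.8, 25.2, 25.0, 25.6])   # every grid in the low-mag image
print(all_low_mag_px * k)                              # calibrated grid sizes in um
```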
Finding the "true" age: ways to read high-precision U-Pb zircon dates
NASA Astrophysics Data System (ADS)
Schaltegger, U.; Schoene, B.; Ovtcharova, M.; Sell, B. K.; Broderick, C. A.; Wotzlaw, J.
2011-12-01
Refined U-Pb dating techniques, applying an empirical chemical abrasion treatment prior to analysis [1], and using a precisely calibrated double isotope Pb, U EARTHTIME tracer solution, have led to an unprecedented <0.1% precision and accuracy of obtained 206Pb/238U dates of single zircon crystals or fragments. Results very often range over 10^4 to 10^6 years and cannot be treated as statistically singular age populations. The interpretation of precise zircon U-Pb ages is biased by two problems: (A) Post-crystallization Pb loss from decay-damaged areas is considered to be mitigated by applying chemical abrasion techniques. The success of such treatment can, however, not be assumed a priori. The following examples demonstrate that youngest zircons are not biased by lead loss but represent close-to-youngest zircon growth: (i) coincidence of youngest zircon dates with co-magmatic titanite in tonalite; (ii) coincidence with statistically equivalent clusters of 206Pb/238U dates from zircon in residual melts of cogenetic mafic magmas; (iii) youngest zircons in ash beds of sedimentary sequences do not violate the stratigraphic superposition, whereas conventional statistical interpretation (mean or median values) does; (iv) results of published inter-laboratory cross-calibration tests using chemical abrasion on natural zircon crystals of the same sample arrive at the same 206Pb/238U result within <0.1% (e.g., [2]); (v) youngest crystals coincide in age with the astronomical age of hosting cyclic sediments. Residual lead loss may, however, still be identified in the case of single, significantly younger dates (>3 sigma), and is common in many pre-Triassic and hydrothermally altered rocks. (B) Pre-eruptive/pre-intrusive growth is found to be the main reason for scattered zircon ages in igneous rocks. Zircons crystallizing from the final magma batch are called autocrystic [3]. Autocrystic growth will happen in a moving or stagnant magma shortly before or after the rheological lockup by the crystals. Last crystallizing zircons in the interstitial melt may therefore postdate emplacement of the magma. The range of 206Pb/238U ages may yield a time frame for the cooling of a given magma batch, which could be added to quantitative thermal models of magma emplacement and cooling. Hf isotopes and trace elements of the dated zircon are used to trace the nature of the dated grains [4], specifically for identification of crystals that form earlier at lower crustal levels (antecrysts). Autocrystic zircons typically show, e.g., distinctly different (higher or lower) Th/U ratios. Cautiously interpreted high-precision U-Pb data of chemically abraded zircons may resolve the evolution of a magmatic system from its roots to final emplacement or eruption, trace fractional crystallization of zircon and other accessory and major phases in a magma batch, and add quantitative temporal constraints to thermal models. The proposed interpretation scheme thus adds significant information compared to conventional statistics. [1] Mattinson J., 2005, Chem. Geol. 200, 47-66; [2] Slama et al., 2008, Chem. Geol. 249, 1-35; [3] Miller et al., 2007, J. Volc. Geotherm. Res. 167, 282-299; [4] Schoene et al., 2010, Geochim. Cosmochim. Acta 74, 7144-7159
Precision of computer-assisted core decompression drilling of the knee.
Beckmann, J; Goetz, J; Bäthis, H; Kalteis, T; Grifka, J; Perlick, L
2006-06-01
Core decompression by exact drilling into the ischemic areas is the treatment of choice in early stages of osteonecrosis of the femoral condyle. Computer-aided surgery might enhance the precision of the drilling and lower the radiation exposure time of both staff and patients. The aim of this study was to evaluate the precision of the fluoroscopically based VectorVision-navigation system in an in vitro model. Thirty sawbones were prepared with a defect filled up with a radiopaque gypsum sphere mimicking the osteonecrosis. 20 sawbones were drilled by guidance of an intraoperative navigation system VectorVision (BrainLAB, Munich, Germany). Ten sawbones were drilled by fluoroscopic control only. A statistically significant difference with a mean distance of 0.58 mm in the navigated group and 0.98 mm in the control group regarding the distance to the desired mid-point of the lesion could be stated. Significant difference was further found in the number of drilling corrections as well as radiation time needed. The fluoroscopic-based VectorVision-navigation system shows a high feasibility and precision of computer-guided drilling with simultaneously reduction of radiation time and therefore could be integrated into clinical routine.
Precision Distances with the Tip of the Red Giant Branch Method
NASA Astrophysics Data System (ADS)
Beaton, Rachael Lynn; Carnegie-Chicago Hubble Program Team
2018-01-01
The Carnegie-Chicago Hubble Program aims to construct a distance ladder that utilizes old stellar populations in the outskirts of galaxies to produce a high precision measurement of the Hubble Constant that is independent of Cepheids. The CCHP uses the tip of the red giant branch (TRGB) method, which is a statistical measurement technique that utilizes the termination of the red giant branch. Two innovations combine to make the TRGB a competitive route to the Hubble Constant: (i) the large-scale measurement of trigonometric parallax by the Gaia mission and (ii) the development of both precise and accurate means of determining the TRGB in both nearby (~1 Mpc) and distant (~20 Mpc) galaxies. Here I will summarize our progress in developing these standardized techniques, focusing on both our edge-detection algorithm and our field selection strategy. Using these methods, the CCHP has determined equally precise (~2%) distances to galaxies in the Local Group (< 1 Mpc) and across the Local Volume (< 20 Mpc). The TRGB is, thus, an incredibly powerful and straightforward means to determine distances to galaxies of any Hubble Type and therefore has enormous potential for putting any number of astrophysical phenomena on an absolute scale.
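A first-derivative edge detector applied to a binned luminosity function captures the basic idea of a TRGB measurement. The sketch below is generic; the bin width, smoothing scale and smoothing kernel are assumptions rather than the CCHP implementation.

```python
# Hedged sketch of a first-derivative edge detector on a binned RGB luminosity
# function; the TRGB is taken as the magnitude of the strongest rise in star counts.
import numpy as np

def trgb_magnitude(mags, bin_width=0.02, smooth_sigma_bins=3):
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    lf, edges = np.histogram(mags, bins=bins)               # luminosity function
    k = np.arange(-3 * smooth_sigma_bins, 3 * smooth_sigma_bins + 1)
    kernel = np.exp(-0.5 * (k / smooth_sigma_bins) ** 2)
    lf_smooth = np.convolve(lf, kernel / kernel.sum(), mode="same")
    response = np.gradient(lf_smooth)                       # first derivative
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argmax(response)]                     # magnitude of the edge
```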
[Evaluation of the Performance of Two Kinds of Anti-TP Enzyme-Linked Immunosorbent Assay].
Gao, Nan; Huang, Li-Qin; Wang, Rui; Jia, Jun-Jie; Wu, Shuo; Zhang, Jing; Ge, Hong-Wei
2018-06-01
To evaluate the accuracy and precision of 2 kinds of anti-Treponema pallidum (anti-TP) ELISA reagents used in our laboratory for detecting anti-TP in voluntary blood donors, so as to provide data to support the use of ELISA reagents after the introduction of chemiluminescence immunoassay (CLIA). Routine detection of anti-TP was performed using the 2 kinds of ELISA reagents; 546 reactive samples detected by anti-TP ELISA were collected, and the infection status of the samples was confirmed by the Treponema pallidum particle agglutination (TPPA) test. The confirmed results of reactive samples detected by the 2 kinds of anti-TP ELISA reagents were compared, the accuracy of the 2 reagents was analyzed by drawing ROC curves and comparing the area under the curve (AUC), and the precision of the 2 reagents was compared by statistical analysis of quality control data from July 1, 2016 to June 30, 2017. There was no statistical difference in the confirmed positive rate of reactive samples and weakly positive samples between the 2 kinds of anti-TP ELISA reagents. The reactive samples detected by both reagents accounted for 85.53% (467/546) of all reactive samples, and the positive rate confirmed by the TPPA test was 82.87%. 44 reactive samples detected by anti-TP ELISA reagent A and 35 reactive samples detected by reagent B were confirmed to be negative by the TPPA test. Comparison of AUC showed that the accuracy of both anti-TP ELISA reagents was high, and the difference between the 2 reagents was not statistically significant. The coefficients of variation (CV) of anti-TP ELISA reagents A and B were 14.98% and 18.04% respectively, which met the precision requirement of the ELISA test. The accuracy and precision of the 2 kinds of anti-TP ELISA reagents used in our laboratory are similar, and either reagent can satisfy the requirements of blood screening.
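The precision figure quoted above is the coefficient of variation of repeated quality-control measurements, CV% = 100 · SD / mean; a minimal sketch with invented QC values follows.

```python
# Sketch: coefficient of variation of repeated ELISA quality-control measurements.
import numpy as np

qc_values = np.array([2.10, 1.85, 2.30, 1.95, 2.40, 2.05, 1.75, 2.20])  # invented QC data
cv_percent = 100 * qc_values.std(ddof=1) / qc_values.mean()
print(f"CV = {cv_percent:.1f}%")
```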
DOT National Transportation Integrated Search
2010-03-01
This document provides guidance for using the ACS Statistical Analyzer. It is an Excel-based template for users of estimates from the American Community Survey (ACS) to assess the precision of individual estimates and to compare pairs of estimates fo...
Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling
ERIC Educational Resources Information Center
Banjanovic, Erin S.; Osborne, Jason W.
2016-01-01
Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…
Townsend, A T
2000-08-01
A magnetic sector ICP-MS with enhanced sensitivity was used to measure Os isotope ratios in solutions of low Os concentration (approximately 1 ng g(-1) or less). Ratios with 192Os as the basis were determined, while the geologically useful 187Os/188Os ratio was also measured. Sample introduction was via the traditional nebuliser-spray chamber method. A capacitive decoupling Pt shield torch was developed "in-house" and was found to increase Os signals by approximately 5 x under "moderate" plasma conditions (1050 W) over that found during normal operation (1250 W). Sensitivity using the guard electrode for 192Os was approximately 250-350,000 counts s(-1) per ng g(-1) Os. For a 1 ng g(-1) Os solution with no guard electrode, precisions of the order of 0.2-0.3% (189Os/192Os and 190Os/192Os) to approximately 1% or greater (186Os/192Os, 187Os/192Os and 187Os/188Os) were found (values as 1 sigma for n = 10). With the guard electrode in use, ratio precisions were found to improve to 0.2 to 0.8%. The total amount of Os used in the acquisition of this data was approximately 2.5 ng per measurement per replicate. At the higher concentration of 10 ng g(-1), precisions of the order of 0.15-0.3% were measured (for all ratios), irrespective of whether the shield torch was used. Ratio accuracy was confirmed by comparison with independently obtained NTIMS data. For both Os concentrations considered, the improvement in precision offered by the guard electrode (if any) was small in comparison to calculated theoretical values based on Poisson counting statistics, suggesting noise contributions from other sources (such as the sample introduction system, plasma flicker etc). At lower Os concentrations (to 100 pg g(-1)) no appreciable loss of ratio accuracy was observed, although as expected based on counting statistics, poorer precisions of the order of 0.45-3% (1 sigma, n = 5) were noted. Re was found to have a detrimental effect on the precision of Os ratios involving 187Os, indicating that separation of Re and Os samples is a necessary pre-requisite for highly accurate and precise Os isotope ratio measurements.
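The Poisson counting-statistics limit that the measured precisions were compared against can be written down directly: for a ratio of two ion counts N1 and N2, the theoretical relative standard deviation is √(1/N1 + 1/N2). The count rates and acquisition settings in the sketch below are illustrative, not the paper's values.

```python
# Sketch of the counting-statistics limit on an isotope-ratio precision,
# RSD = sqrt(1/N1 + 1/N2), for illustrative count rates and acquisition times.
import math

def ratio_rsd_percent(count_rate_1, count_rate_2, dwell_s, sweeps):
    n1 = count_rate_1 * dwell_s * sweeps     # total counts on isotope 1
    n2 = count_rate_2 * dwell_s * sweeps     # total counts on isotope 2
    return 100 * math.sqrt(1.0 / n1 + 1.0 / n2)

# e.g. two isotopes at a few x 10^4 counts/s, 0.1 s dwell, 100 sweeps
print(f"{ratio_rsd_percent(4.5e4, 3.7e4, dwell_s=0.1, sweeps=100):.2f}%")
```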
Evaluation on the use of cerium in the NBL Titrimetric Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zebrowski, J.P.; Orlowicz, G.J.; Johnson, K.D.
An alternative to potassium dichromate as titrant in the New Brunswick Laboratory Titrimetric Method for uranium analysis was sought, since chromium in the waste makes disposal difficult. Substitution of a ceric-based titrant was statistically evaluated. Analysis of the data indicated statistically equivalent precisions for the two methods, but a significant overall bias of +0.035% for the ceric titrant procedure. The cause of the bias was investigated, alterations to the procedure were made, and a second statistical study was performed. This second study revealed no statistically significant bias, nor any analyst-to-analyst variation in the ceric titration procedure. A statistically significant day-to-day variation was detected, but this was physically small (0.015%) and was only detected because of the within-day precision of the method. The combined mean and standard deviation of the %RD for a single measurement was found to be 0.031%. A comparison with quality control blind dichromate titration data again indicated similar overall precision. The effects of ten elements (Co, Ti, Cu, Ni, Na, Mg, Gd, Zn, Cd, and Cr) on the ceric titration's performance were determined; in previous work at NBL these impurities did not interfere with the potassium dichromate titrant. This study indicated similar results for the ceric titrant, with the exception of Ti. All the elements (excluding Ti and Cr) caused no statistically significant bias in uranium measurements at levels of 10 mg impurity per 20-40 mg uranium. The presence of Ti was found to cause a bias of −0.05%; this is attributed to the presence of sulfate ions, resulting in precipitation of titanium sulfate and occlusion of uranium. A negative bias of 0.012% was also statistically observed in the samples containing chromium impurities.
Error, Power, and Blind Sentinels: The Statistics of Seagrass Monitoring
Schultz, Stewart T.; Kruschel, Claudia; Bakran-Petricioli, Tatjana; Petricioli, Donat
2015-01-01
We derive statistical properties of standard methods for monitoring of habitat cover worldwide, and criticize them in the context of mandated seagrass monitoring programs, as exemplified by Posidonia oceanica in the Mediterranean Sea. We report the novel result that cartographic methods with non-trivial classification errors are generally incapable of reliably detecting habitat cover losses less than about 30 to 50%, and the field labor required to increase their precision can be orders of magnitude higher than that required to estimate habitat loss directly in a field campaign. We derive a universal utility threshold of classification error in habitat maps that represents the minimum habitat map accuracy above which direct methods are superior. Widespread government reliance on blind-sentinel methods for monitoring seafloor can obscure the gradual and currently ongoing losses of benthic resources until the time has long passed for meaningful management intervention. We find two classes of methods with very high statistical power for detecting small habitat cover losses: 1) fixed-plot direct methods, which are over 100 times as efficient as direct random-plot methods in a variable habitat mosaic; and 2) remote methods with very low classification error such as geospatial underwater videography, which is an emerging, low-cost, non-destructive method for documenting small changes at millimeter visual resolution. General adoption of these methods and their further development will require a fundamental cultural change in conservation and management bodies towards the recognition and promotion of requirements of minimal statistical power and precision in the development of international goals for monitoring these valuable resources and the ecological services they provide. PMID:26367863
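As a rough illustration of the kind of power calculation underlying such comparisons of monitoring methods, the sketch below uses a two-sample t-test power analysis from statsmodels; the effect size, plot counts, and alpha level are illustrative placeholders rather than values from the paper:

    from statsmodels.stats.power import TTestIndPower

    # Illustrative: a 30% cover loss against between-plot variability of 40% cover
    effect_size = 0.30 / 0.40          # Cohen's d (loss / SD of plot-level cover)
    analysis = TTestIndPower()

    power = analysis.solve_power(effect_size=effect_size, nobs1=50, alpha=0.05)
    n_needed = analysis.solve_power(effect_size=effect_size, power=0.8, alpha=0.05)
    print(f"Power with 50 plots per survey: {power:.2f}")
    print(f"Plots per survey for 80% power: {n_needed:.0f}")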
Radial velocity detection of extra-solar planetary systems
NASA Technical Reports Server (NTRS)
Cochran, William D.
1991-01-01
The goal of this program was to detect planetary systems in orbit around other stars through the ultra high precision measurement of the orbital motion of the star around the star-planet barycenter. The survey of 33 nearby solar-type stars is the essential first step in understanding the overall problem of planet formation. The program will accumulate the necessary statistics to determine the frequency of planet formation as a function of stellar mass, age, and composition.
2012 Workplace and Gender Relations Survey of Active Duty Members: Nonresponse Bias Analysis Report
2014-01-01
Control and Prevention), or command climate surveys (e.g., DEOCS). ... Table 1. Comparison of Trends in WGRA and SOFS-A Response Rates (Shown in ... DMDC draws optimized samples to reduce survey burden on members as well as produce high levels of precision for important domain estimates by using ... statistical significance at α = .05. Because paygrade is a significant predictor of survey response, we next examined the odds ratio of each paygrade level
Single photon laser altimeter simulator and statistical signal processing
NASA Astrophysics Data System (ADS)
Vacek, Michael; Prochazka, Ivan
2013-05-01
Spaceborne altimeters are common instruments onboard deep space rendezvous spacecraft. They provide range and topographic measurements critical in spacecraft navigation. Simultaneously, the receiver part may be utilized for an Earth-to-satellite link, one-way time transfer, and precise optical radiometry. The main advantage of the single photon counting approach is the ability to process signals with very low signal-to-noise ratio, eliminating the need for large telescopes and a high power laser source. Extremely small, rugged and compact microchip lasers can be employed. The major limiting factor, on the other hand, is the acquisition time needed to gather a sufficient volume of data in repetitive measurements in order to process and evaluate the data appropriately. Statistical signal processing is adopted to detect signals with average strength much lower than one photon per measurement. A comprehensive simulator design and range signal processing algorithm are presented to identify a mission specific altimeter configuration. Typical mission scenarios (celestial body surface landing and topographical mapping) are simulated and evaluated. The most promising single photon altimeter applications are low-orbit (˜10 km) and low-radial-velocity (several m/s) topographical mapping (asteroids, Phobos and Deimos) and landing altimetry (˜10 km), where range evaluation repetition rates of ˜100 Hz and 0.1 m precision may be achieved. Moon landing and asteroid Itokawa topographical mapping scenario simulations are discussed in more detail.
NASA Astrophysics Data System (ADS)
Coelho, Carlos A.; Marques, Filipe J.
2013-09-01
In this paper the authors combine the equicorrelation and equivariance test introduced by Wilks [13] with the likelihood ratio test (l.r.t.) for independence of groups of variables to obtain the l.r.t. of block equicorrelation and equivariance. This test, or its single-block version, may find applications in many areas such as psychology, education, medicine and genetics, and it is important "in many tests of multivariate analysis, e.g. in MANOVA, Profile Analysis, Growth Curve analysis, etc" [12, 9]. By decomposing the overall hypothesis into the hypotheses of independence of groups of variables and the hypothesis of equicorrelation and equivariance we are able to obtain the expressions for the overall l.r.t. statistic and its moments. From these we obtain a suitable factorization of the characteristic function (c.f.) of the logarithm of the l.r.t. statistic, which enables us to develop highly manageable and precise near-exact distributions for the test statistic.
Spatial variability effects on precision and power of forage yield estimation
USDA-ARS?s Scientific Manuscript database
Spatial analyses of yield trials are important, as they adjust cultivar means for spatial variation and improve the statistical precision of yield estimation. While the relative efficiency of spatial analysis has been frequently reported in several yield trials, its application on long-term forage y...
Pretorius, Etheresia
2017-01-01
The latest statistics from the 2016 heart disease and stroke statistics update show that cardiovascular disease is the leading global cause of death, currently accounting for more than 17.3 million deaths per year. Type II diabetes is also on the rise, with out-of-control numbers. To address these pandemics, we need to treat patients using an individualized patient care approach, but simultaneously gather data to support the precision medicine initiative. Last year the NIH announced the precision medicine initiative to generate novel knowledge regarding diseases, with a near-term focus on cancers, followed by a longer-term aim applicable to a whole range of health applications and diseases. The focus of this paper is to suggest a combined effort between the latest precision medicine initiative, researchers and clinicians, whereby novel techniques could immediately make a difference in patient care but in the long term also add to the knowledge used in precision medicine. We discuss the intricate relationship between individualized patient care and precision medicine and the current thoughts regarding which data are actually suitable for precision medicine data gathering. The uses of viscoelastic techniques in precision medicine are discussed, and how these techniques might give novel perspectives on the success of treatment regimes for cardiovascular patients is explored. Thrombo-embolic stroke, rheumatoid arthritis and type II diabetes are used as examples of diseases where precision medicine and a patient-orientated approach can possibly be implemented. In conclusion, it is suggested that only if all role players work together, embracing a new way of thinking in treating and managing cardiovascular disease and diabetes, will we be able to adequately address these out-of-control conditions. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
High-precision gauging of metal rings
NASA Astrophysics Data System (ADS)
Carlin, Mats; Lillekjendlie, Bjorn
1994-11-01
Raufoss AS designs and produces air brake fittings for trucks and buses on the international market. One of the critical components in the fittings is a small, circular metal ring, which goes through 100% dimensional control. This article describes a low-price, high-accuracy solution developed at SINTEF Instrumentation based on image metrology and a subpixel resolution algorithm. The measurement system consists of a PC plug-in transputer video board, a CCD camera, telecentric optics and a machine vision strobe. We describe the measurement technique in some detail, as well as the robust statistical techniques found to be essential in the real-life environment.
Removing the Impact of Correlated PSF Uncertainties in Weak Lensing
NASA Astrophysics Data System (ADS)
Lu, Tianhuan; Zhang, Jun; Dong, Fuyu; Li, Yingke; Liu, Dezi; Fu, Liping; Li, Guoliang; Fan, Zuhui
2018-05-01
Accurate reconstruction of the spatial distributions of the point-spread function (PSF) is crucial for high precision cosmic shear measurements. Nevertheless, current methods are not good at recovering the PSF fluctuations of high spatial frequencies. In general, the residual PSF fluctuations are spatially correlated, and therefore can significantly contaminate the correlation functions of the weak lensing signals. We propose a method to correct for this contamination statistically, without any assumptions on the PSF and galaxy morphologies or their spatial distribution. We demonstrate our idea with the data from the W2 field of CFHTLenS.
Mendez, Andreas S L; Steppe, Martin; Schapoval, Elfrides E S
2003-12-04
A high-performance liquid chromatographic method and a UV spectrophotometric method for the quantitative determination of meropenem, a highly active carbapenem antibiotic, in powder for injection were developed in the present work. The parameters linearity, precision, accuracy, specificity, robustness, limit of detection and limit of quantitation were studied according to International Conference on Harmonization guidelines. Chromatography was carried out by the reversed-phase technique on an RP-18 column with a mobile phase composed of 30 mM monobasic phosphate buffer and acetonitrile (90:10; v/v), adjusted to pH 3.0 with orthophosphoric acid. The UV spectrophotometric method was performed at 298 nm. The samples were prepared in water and the stability of meropenem in aqueous solution at 4 and 25 degrees C was studied. The results were satisfactory, with good stability after 24 h at 4 degrees C. Statistical analysis by Student's t-test showed no significant difference between the results obtained by the two methods. The proposed methods are highly sensitive, precise and accurate and can be used for the reliable quantitation of meropenem in pharmaceutical dosage form.
Srinubabu, Gedela; Sudharani, Batchu; Sridhar, Lade; Rao, Jvln Seshagiri
2006-06-01
A high-performance liquid chromatographic method and a UV derivative spectrophotometric method for the determination of famciclovir, a highly active antiviral agent, in tablets were developed in the present work. The various parameters, such as linearity, precision, accuracy, specificity, robustness, limit of detection and limit of quantitation, were studied according to International Conference on Harmonization guidelines. HPLC was carried out using the reversed-phase technique on an RP-18 column with a mobile phase composed of 50 mM monobasic phosphate buffer and methanol (50 : 50; v/v), adjusted to pH 3.05 with orthophosphoric acid. The mobile phase was pumped at a flow rate of 1 ml/min and detection was made at 242 nm with a UV dual-absorbance detector. The first derivative UV spectrophotometric method was performed at 226.5 nm. Statistical analysis was done by Student's t-test and F-test, which showed no significant difference between the results obtained by the two methods. The proposed methods are highly sensitive, precise and accurate and therefore can be used for their intended purpose.
Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete
2008-08-20
Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.
Optimization of the MINERVA Exoplanet Search Strategy via Simulations
NASA Astrophysics Data System (ADS)
Nava, Chantell; Johnson, Samson; McCrady, Nate; Minerva
2015-01-01
Detection of low-mass exoplanets requires high spectroscopic precision and high observational cadence. MINERVA is a dedicated observatory capable of sub-meter-per-second radial velocity precision. As a dedicated observatory, MINERVA can observe with the every-clear-night cadence that is essential for low-mass exoplanet detection. However, this cadence complicates the determination of an optimal observing strategy. We simulate MINERVA observations to optimize our observing strategy and maximize exoplanet detections. A dispatch scheduling algorithm provides observations of MINERVA targets every day over a three-year observing campaign. An exoplanet population with a distribution informed by Kepler statistics is assigned to the targets, and radial velocity curves induced by the planets are constructed. We apply a correlated noise model that realistically simulates stellar astrophysical noise sources. The simulated radial velocity data are fed to the MINERVA planet detection code and the expected exoplanet yield is calculated. The full simulation provides a tool to test different strategies for scheduling observations of our targets and optimizing the MINERVA exoplanet search strategy.
El-Didamony, Akram M; Gouda, Ayman A
2011-01-01
A new highly sensitive and specific spectrofluorimetric method has been developed to determine the sympathomimetic drug pseudoephedrine hydrochloride. The present method was based on derivatization with 4-chloro-7-nitrobenzofurazan in phosphate buffer at pH 7.8 to produce a highly fluorescent product which was measured at 532 nm (excitation at 475 nm). Under the optimized conditions a linear relationship and good correlation were found between the fluorescence intensity and pseudoephedrine hydrochloride concentration in the range of 0.5-5 µg mL(-1). The proposed method was successfully applied to the assay of pseudoephedrine hydrochloride in commercial pharmaceutical formulations with good accuracy and precision and without interferences from common additives. Statistical comparison of the results with a well-established method showed excellent agreement and proved that there was no significant difference in the accuracy and precision. The stoichiometry of the reaction was determined and the reaction pathway was postulated. Copyright © 2010 John Wiley & Sons, Ltd.
Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method
Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni
2017-01-01
The real-time accurate measurement of the geomagnetic field is the foundation for achieving high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. This paper, on the basis of systematically analyzing the sources of geomagnetic-field measurement error, built a complete measurement model, into which the previously unconsidered geomagnetic daily variation field was introduced. The paper proposed an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating parameters to obtain the optimal solution in the statistical sense. The experimental results showed that the compensated strength of the geomagnetic field remained close to the real value and the measurement error was basically controlled within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and its removal of the dependence on a high-precision measurement instrument. PMID:28445508
Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun
2016-01-01
An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. PMID:26833260
Collection of Medical Original Data with Search Engine for Decision Support.
Orthuber, Wolfgang
2016-01-01
Medicine is becoming more and more complex, and humans can capture total medical knowledge only partially. For specific access, a high-resolution search engine is demonstrated which allows, besides conventional text search, the search of precise quantitative data from medical findings, therapies and results. Users can define metric spaces ("Domain Spaces", DSs) with all searchable quantitative data ("Domain Vectors", DVs). An implementation of the search engine is online at http://numericsearch.com. In future medicine the doctor could first make a rough diagnosis and check which fine diagnostics (quantitative data) colleagues had collected in such a case. Then the doctor decides about fine diagnostics, and the results are sent (semi-automatically) to the search engine, which filters a group of patients that best fits these data. In this specific group, variable therapies can be checked with the associated therapeutic results, like in an individual scientific study for the current patient. The statistical (anonymous) results could be used for specific decision support. Conversely, the therapeutic decision (in the best case with later results) could be used to enhance the collection of precise pseudonymous medical original data, which is used for better and better statistical (anonymous) search results.
NASA Astrophysics Data System (ADS)
Böhm, Fabian; Grosse, Nicolai B.; Kolarczik, Mirco; Herzog, Bastian; Achtstein, Alexander; Owschimikow, Nina; Woggon, Ulrike
2017-09-01
Quantum state tomography and the reconstruction of the photon number distribution are techniques to extract the properties of a light field from measurements of its mean and fluctuations. These techniques are particularly useful when dealing with macroscopic or mesoscopic systems, where a description limited to the second order autocorrelation soon becomes inadequate. In particular, the emission of nonclassical light is expected from mesoscopic quantum dot systems strongly coupled to a cavity or in systems with large optical nonlinearities. We analyze the emission of a quantum dot-semiconductor optical amplifier system by quantifying the modifications of a femtosecond laser pulse propagating through the device. Using a balanced detection scheme in a self-heterodyning setup, we achieve precise measurements of the quadrature components and their fluctuations at the quantum noise limit. We resolve the photon number distribution and the thermal-to-coherent evolution in the photon statistics of the emission. The interferometric detection achieves a high sensitivity in the few-photon limit. From our data, we can also reconstruct the second order autocorrelation function with higher precision and time resolution compared with classical Hanbury Brown-Twiss experiments.
NASA Astrophysics Data System (ADS)
Glavanović, Siniša; Glavanović, Marija; Tomišić, Vladislav
2016-03-01
The UV spectrophotometric methods for simultaneous quantitative determination of paracetamol and tramadol in paracetamol-tramadol tablets were developed. The spectrophotometric data obtained were processed by means of partial least squares (PLS) and genetic algorithm coupled with PLS (GA-PLS) methods in order to determine the content of active substances in the tablets. The results gained by chemometric processing of the spectroscopic data were statistically compared with those obtained by means of validated ultra-high performance liquid chromatographic (UHPLC) method. The accuracy and precision of data obtained by the developed chemometric models were verified by analysing the synthetic mixture of drugs, and by calculating recovery as well as relative standard error (RSE). A statistically good agreement was found between the amounts of paracetamol determined using PLS and GA-PLS algorithms, and that obtained by UHPLC analysis, whereas for tramadol GA-PLS results were proven to be more reliable compared to those of PLS. The simplest and the most accurate and precise models were constructed by using the PLS method for paracetamol (mean recovery 99.5%, RSE 0.89%) and the GA-PLS method for tramadol (mean recovery 99.4%, RSE 1.69%).
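A minimal sketch of the PLS-regression step of such a chemometric calibration, using scikit-learn; the simulated spectra and the two-component mixture below are placeholders for the real absorbance data, not the study's data set:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    wavelengths = 200                      # number of spectral points
    n_cal = 30                             # calibration mixtures

    # Simulated pure-component spectra and mixture concentrations (illustrative)
    pure = rng.random((2, wavelengths))
    conc = rng.uniform(0.1, 1.0, size=(n_cal, 2))          # e.g. paracetamol, tramadol
    spectra = conc @ pure + rng.normal(0, 0.005, (n_cal, wavelengths))

    pls = PLSRegression(n_components=4)
    pls.fit(spectra, conc)

    # Predict the composition of a new "tablet extract" spectrum
    test_conc = np.array([[0.6, 0.3]])
    test_spectrum = test_conc @ pure + rng.normal(0, 0.005, (1, wavelengths))
    print("Predicted concentrations:", pls.predict(test_spectrum).round(3))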
Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations
NASA Astrophysics Data System (ADS)
Kozak, P.
2014-12-01
The Monte-Carlo method (method of statistical trials) as applied to the processing of meteor observations was developed in the author's Ph.D. thesis in 2005 and first used in his works in 2008. The idea of the method is that if we generate random values of the input data - the equatorial coordinates of the meteor head in a sequence of TV frames - in accordance with their statistical distributions, we can plot the probability density distributions for all of the meteor's kinematical parameters and obtain their mean values and dispersions. This also opens the theoretical possibility of refining the most important parameter - the geocentric velocity of a meteor - which has the strongest influence on the precision of the computed meteor heliocentric orbit elements. In the classical approach the velocity vector is calculated in two stages: first the vector direction is calculated as the vector product of the poles of the meteor trajectory great circles obtained from the two observational points. Then the absolute value of the velocity is calculated independently from each observational point, and one of the two values is selected, for whatever reason, as the final parameter. In the given method we propose instead to obtain the statistical distribution of the velocity absolute value as the intersection of the two distributions corresponding to the velocity values obtained from the different points. We expect such an approach to substantially increase the precision of the meteor velocity calculation and to remove subjective inaccuracies.
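A toy sketch of the statistical-trials idea: perturb the measured input within an assumed error distribution many times, recompute the derived quantity each time, and read the mean and dispersion off the resulting distribution. The velocity function and noise level are illustrative, not the actual astrometric reduction used in the work:

    import numpy as np

    rng = np.random.default_rng(42)

    def velocity_from_positions(positions, dt):
        """Toy reduction: mean frame-to-frame speed from 1-D positions (km)."""
        return np.mean(np.diff(positions)) / dt

    measured = np.array([0.0, 1.98, 4.02, 6.01, 7.99])  # km, one value per TV frame
    sigma = 0.05                                         # assumed astrometric error, km
    dt = 0.04                                            # frame interval, s

    trials = np.array([
        velocity_from_positions(measured + rng.normal(0, sigma, measured.size), dt)
        for _ in range(10_000)
    ])
    print(f"Velocity: {trials.mean():.2f} +/- {trials.std(ddof=1):.2f} km/s")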
Spatio-temporal conditional inference and hypothesis tests for neural ensemble spiking precision
Harrison, Matthew T.; Amarasingham, Asohan; Truccolo, Wilson
2014-01-01
The collective dynamics of neural ensembles create complex spike patterns with many spatial and temporal scales. Understanding the statistical structure of these patterns can help resolve fundamental questions about neural computation and neural dynamics. Spatio-temporal conditional inference (STCI) is introduced here as a semiparametric statistical framework for investigating the nature of precise spiking patterns from collections of neurons that is robust to arbitrarily complex and nonstationary coarse spiking dynamics. The main idea is to focus statistical modeling and inference, not on the full distribution of the data, but rather on families of conditional distributions of precise spiking given different types of coarse spiking. The framework is then used to develop families of hypothesis tests for probing the spatio-temporal precision of spiking patterns. Relationships among different conditional distributions are used to improve multiple hypothesis testing adjustments and to design novel Monte Carlo spike resampling algorithms. Of special note are algorithms that can locally jitter spike times while still preserving the instantaneous peri-stimulus time histogram (PSTH) or the instantaneous total spike count from a group of recorded neurons. The framework can also be used to test whether first-order maximum entropy models with possibly random and time-varying parameters can account for observed patterns of spiking. STCI provides a detailed example of the generic principle of conditional inference, which may be applicable in other areas of neurostatistical analysis. PMID:25380339
ERIC Educational Resources Information Center
Cassel, Russell N.
This paper relates educational and psychological statistics to certain "Research Statistical Tools" (RSTs) necessary to accomplish and understand general research in the behavioral sciences. Emphasis is placed on acquiring an effective understanding of the RSTs, and to this end they are ordered on a continuum scale in terms of individual…
Precision of natural satellite ephemerides from observations of different types
NASA Astrophysics Data System (ADS)
Emelyanov, N. V.
2017-08-01
Currently, various types of observations of natural planetary satellites are used to refine their ephemerides. A new type of measurement - determining the instants of apparent satellite encounters - has recently been proposed by Morgado and co-workers. The problem that arises is which type of measurement to choose in order to obtain an ephemeris precision that is as high as possible. The answer can be obtained only by modelling the entire process: observations, obtaining the measured values, refining the satellite motion parameters, and generating the ephemeris. The explicit dependence of the ephemeris precision on observational accuracy as well as on the type of observations is unknown. In this paper, such a dependence is investigated using the Monte Carlo statistical method. The relationship between the ephemeris precision for different types of observations is then assessed. The possibility of using the instants of apparent satellite encounters to obtain an ephemeris is investigated. A method is proposed that can be used to fit the satellite orbital parameters to this type of measurement. It is shown that, in the absence of systematic scale errors in the CCD frame, the use of the instants of apparent encounters leads to less precise ephemerides. However, in the presence of significant scale errors, which is often the case, this type of measurement becomes effective because the instants of apparent satellite encounters do not depend on scale errors.
Sutherland, Andrew M; Parrella, Michael P
2011-08-01
Western flower thrips, Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae), is a major horticultural pest and an important vector of plant viruses in many parts of the world. Methods for assessing thrips population density for pest management decision support are often inaccurate or imprecise due to thrips' positive thigmotaxis, small size, and naturally aggregated populations. Two established methods, flower tapping and an alcohol wash, were compared with a novel method, plant desiccation coupled with passive trapping, using accuracy, precision and economic efficiency as comparative variables. Observed accuracy was statistically similar and low (37.8-53.6%) for all three methods. Flower tapping was the least expensive method, in terms of person-hours, whereas the alcohol wash method was the most expensive. Precision, expressed by relative variation, depended on location within the greenhouse, location on greenhouse benches, and the sampling week, but it was generally highest for the flower tapping and desiccation methods. Economic efficiency, expressed by relative net precision, was highest for the flower tapping method and lowest for the alcohol wash method. Advantages and disadvantages are discussed for all three methods used. If relative density assessment methods such as these can all be assumed to accurately estimate a constant proportion of absolute density, then high precision becomes the methodological goal in terms of measuring insect population density, decision making for pest management, and pesticide efficacy assessments.
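The two comparison statistics named above can be written down compactly. Relative variation is commonly defined as the standard error of the mean expressed as a percentage of the mean, and relative net precision additionally divides by sampling cost; this is an assumed reading of the terms, and the counts and costs below are illustrative, not the study's data:

    import numpy as np

    def relative_variation(counts):
        """RV (%) = 100 * SEM / mean (assumed standard definition)."""
        counts = np.asarray(counts, dtype=float)
        sem = counts.std(ddof=1) / np.sqrt(counts.size)
        return 100.0 * sem / counts.mean()

    def relative_net_precision(counts, cost_person_hours):
        """RNP = 100 / (RV * cost); higher means more cost-efficient (assumed form)."""
        return 100.0 / (relative_variation(counts) * cost_person_hours)

    flower_tap = [12, 8, 15, 10, 9, 14]      # thrips per sample (illustrative)
    alcohol_wash = [11, 13, 9, 12, 10, 14]

    print("Flower tap RNP:", round(relative_net_precision(flower_tap, cost_person_hours=1.5), 2))
    print("Alcohol wash RNP:", round(relative_net_precision(alcohol_wash, cost_person_hours=4.0), 2))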
Estimating the color of maxillary central incisors based on age and gender
Gozalo-Diaz, David; Johnston, William M.; Wee, Alvin G.
2008-01-01
Statement of problem: There is no scientific information regarding the selection of the color of teeth for edentulous patients. Purpose: The purpose of this study was to evaluate linear regression models that may be used to predict color parameters for central incisors of edentulous patients based on some characteristics of dentate subjects. Material and methods: A spectroradiometer and an external light source were set in a noncontacting 45/0 degree (45-degree illumination and 0-degree observer) optical configuration to measure the color of subjects' vital craniofacial structures (maxillary central incisor, attached gingiva, and facial skin). The subjects (n=120) were stratified into 5 age groups with 4 racial groups and balanced for gender. Linear first-order regression was used to determine the significant factors (α=.05) in the prediction model for each color direction of the color of the maxillary central incisor. Age, gender, and color of the other craniofacial structures were studied as potential predictors. Final predictions in each color direction were based only on the statistically significant factors, and then the color differences between observed and predicted CIELAB values for the central incisors were calculated and summarized. Results: The statistically significant predictors of age and gender accounted for 36% of the total variability in L*. The statistically significant predictor of age accounted for 16% of the total variability in a*. The statistically significant predictors of age and gender accounted for 21% of the variability in b*. The mean ΔE (SD) between predicted and observed CIELAB values for the central incisor was 5.8 (3.2). Conclusions: Age and gender were found to be statistically significant determinants in predicting the natural color of central incisors. Although the precision of these predictions was less than the median color difference found for all pairs of teeth studied, and may be considered acceptable, further study is needed to improve this precision to the limit of detection. Clinical Implications: Age is highly correlated with the natural color of the central incisors. As age increases, the central incisor becomes darker, more reddish, and more yellow. Also, the women subjects in this study had lighter and less yellow central incisors than the men. PMID:18672125
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gagné, Jonathan; Plavchan, Peter; Gao, Peter
2016-05-01
We present the results of a precise near-infrared (NIR) radial velocity (RV) survey of 32 low-mass stars with spectral types K2–M4 using CSHELL at the NASA InfraRed Telescope Facility in the K band with an isotopologue methane gas cell to achieve wavelength calibration and a novel, iterative RV extraction method. We surveyed 14 members of young (≈25–150 Myr) moving groups, the young field star ε Eridani, and 18 nearby (<25 pc) low-mass stars and achieved typical single-measurement precisions of 8–15 m s(-1) with a long-term stability of 15–50 m s(-1) over longer baselines. We obtain the best NIR RV constraints to date on 27 targets in our sample, 19 of which were never followed by high-precision RV surveys. Our results indicate that very active stars can display long-term RV variations as low as ∼25–50 m s(-1) at ≈2.3125 μm, thus constraining the effect of jitter at these wavelengths. We provide the first multiwavelength confirmation of GJ 876 bc and independently retrieve orbital parameters consistent with previous studies. We recovered RV variabilities for HD 160934 AB and GJ 725 AB that are consistent with their known binary orbits, and nine other targets are candidate RV variables with a statistical significance of 3σ–5σ. Our method, combined with the new iSHELL spectrograph, will yield long-term RV precisions of ≲5 m s(-1) in the NIR, which will allow the detection of super-Earths near the habitable zone of mid-M dwarfs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreras, Ignacio; Trujillo, Ignacio, E-mail: i.ferreras@ucl.ac.uk
At the core of the standard cosmological model lies the assumption that the redshift of distant galaxies is independent of photon wavelength. This invariance of cosmological redshift with wavelength is routinely found in all galaxy spectra with a precision of Δz ∼ 10^(-4). The combined use of approximately half a million high-quality galaxy spectra from the Sloan Digital Sky Survey (SDSS) allows us to explore this invariance down to a nominal precision in redshift of 10^(-6) (statistical). Our analysis is performed over the redshift interval 0.02 < z < 0.25. We use the centroids of spectral lines over the 3700–6800 Å rest-frame optical window. We do not find any difference in redshift between the blue and red sides down to a precision of 10^(-6) at z ≲ 0.1 and 10^(-5) at 0.1 ≲ z ≲ 0.25 (i.e., at least an order of magnitude better than with single galaxy spectra). This is the first time the wavelength-independence of the (1 + z) redshift law is confirmed over a wide spectral window at this precision level. This result holds independently of the stellar population of the galaxies and their kinematical properties. This result is also robust against wavelength calibration issues. The limited spectral resolution (R ∼ 2000) of the SDSS data, combined with the asymmetric wavelength sampling of the spectral features in the observed rest frame due to the (1 + z) stretching of the lines, prevents our methodology from achieving a precision higher than 10^(-5) at z > 0.1. Future attempts to constrain this law will require high quality galaxy spectra at higher resolution (R ≳ 10,000).
Characterizing Giant Exoplanets through Multiwavelength Transit Observations: HD 189733b
NASA Astrophysics Data System (ADS)
Kar, Aman; Cole, Jackson Lane; Gardner, Cristilyn N.; Garver, Bethany Ray; Jarka, Kyla L.; McGough, Aylin Marie; PeQueen, David Jeffrey; Rivera, Daniel Ivan; Kasper, David; Jang-Condell, Hannah; Kobulnicky, Henry; Dale, Daniel
2018-01-01
Observing the transits of exoplanets in multiple wavelengths enables the characterization of their atmospheres. We used the Wyoming Infrared Observatory to obtain high precision photometry on HD 189733b, one of the most studied exoplanets. We employed the photometry package AIJ and Bayesian statistics in our analysis. Preliminary results suggest a wavelength dependence in the size of the exoplanet, indicative of scattering in the atmosphere. This work is supported by the National Science Foundation under REU grant AST 1560461.
NASA Astrophysics Data System (ADS)
Chen, Liang; Zhao, Qile; Hu, Zhigang; Jiang, Xinyuan; Geng, Changjiang; Ge, Maorong; Shi, Chuang
2018-01-01
The large number of ambiguities in the un-differenced (UD) model lowers computational efficiency, which is not appropriate for high-frequency (e.g., 1 Hz) real-time GNSS clock estimation. A mixed differenced model fusing UD pseudo-range and epoch-differenced (ED) phase observations has been introduced into real-time clock estimation. In this contribution, we extend the mixed differenced model to multi-GNSS real-time high-frequency clock updating and perform a rigorous comparison and analysis under the same conditions to achieve the best real-time clock estimation performance, taking efficiency, accuracy, consistency and reliability into consideration. Based on the multi-GNSS real-time data streams provided by the multi-GNSS Experiment (MGEX) and Wuhan University, a GPS + BeiDou + Galileo global real-time augmentation positioning prototype system is designed and constructed, including real-time precise orbit determination, real-time precise clock estimation, real-time Precise Point Positioning (RT-PPP) and real-time Standard Point Positioning (RT-SPP). The statistical analysis of the 6 h-predicted real-time orbits shows that the root mean square (RMS) in the radial direction is about 1-5 cm for GPS, BeiDou MEO and Galileo satellites and about 10 cm for BeiDou GEO and IGSO satellites. Using the mixed differenced estimation model, the prototype system can realize highly efficient real-time satellite absolute clock estimation with no constant clock bias and can be used for high-frequency augmentation message updating (such as 1 Hz). The real-time augmentation message signal-in-space ranging error (SISRE), a comprehensive measure of orbit and clock accuracy that affects the users' actual positioning performance, is introduced to evaluate and analyze the performance of the GPS + BeiDou + Galileo global real-time augmentation positioning system. The statistical analysis shows that the real-time augmentation message SISRE is about 4-7 cm for GPS, 10 cm for BeiDou IGSO/MEO and Galileo, and about 30 cm for BeiDou GEO satellites. The real-time positioning results prove that GPS + BeiDou + Galileo RT-PPP, compared to GPS-only, can effectively shorten the convergence time by about 60%, improve the positioning accuracy by about 30%, and obtain an averaged RMS of 4 cm in the horizontal and 6 cm in the vertical component; additionally, RT-SPP in the prototype system can achieve a positioning accuracy of about 1 m (averaged RMS) in the horizontal and 1.5-2 m in the vertical component, improvements of 60% and 70%, respectively, over SPP based on the broadcast ephemeris.
ERIC Educational Resources Information Center
Richardson, William H., Jr.
2006-01-01
Computational precision is sometimes given short shrift in a first programming course. Treating this topic requires discussing integer and floating-point number representations and inaccuracies that may result from their use. An example of a moderately simple programming problem from elementary statistics was examined. It forced students to…
ERIC Educational Resources Information Center
Bloom, Howard S.; Richburg-Hayes, Lashawn; Black, Alison Rebeck
2007-01-01
This article examines how controlling statistically for baseline covariates, especially pretests, improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement. Empirical findings from five urban school districts indicate that (1) pretests can reduce the number of randomized…
Inverse probability weighting for covariate adjustment in randomized studies
Li, Xiaochun; Li, Lingling
2013-01-01
Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a "favorable" model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining the objectivity is at least as important as precision gain if not more, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a "favorable" model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. PMID:24038458
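A hedged sketch of the inverse-probability-weighting idea described above (not the authors' exact two-stage procedure): fit a working treatment model on baseline covariates only, form weights, and then compare weighted outcome means. All variable names and the simulated data are illustrative:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 500
    x = rng.normal(size=(n, 3))                  # baseline covariates
    treat = rng.binomial(1, 0.5, size=n)         # randomized assignment
    y = 1.0 * treat + x[:, 0] + rng.normal(size=n)

    # Stage 1 (outcome-blind): model P(treatment | covariates) and build stabilized weights
    propensity = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
    weights = np.where(treat == 1, treat.mean() / propensity,
                       (1 - treat.mean()) / (1 - propensity))

    # Stage 2: weighted difference in outcome means
    mu1 = np.average(y[treat == 1], weights=weights[treat == 1])
    mu0 = np.average(y[treat == 0], weights=weights[treat == 0])
    print(f"IPW-adjusted treatment effect estimate: {mu1 - mu0:.3f}")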
NASA Astrophysics Data System (ADS)
Jones, Bernard J. T.
2017-04-01
Preface; Notation and conventions; Part I. 100 Years of Cosmology: 1. Emerging cosmology; 2. The cosmic expansion; 3. The cosmic microwave background; 4. Recent cosmology; Part II. Newtonian Cosmology: 5. Newtonian cosmology; 6. Dark energy cosmological models; 7. The early universe; 8. The inhomogeneous universe; 9. The inflationary universe; Part III. Relativistic Cosmology: 10. Minkowski space; 11. The energy momentum tensor; 12. General relativity; 13. Space-time geometry and calculus; 14. The Einstein field equations; 15. Solutions of the Einstein equations; 16. The Robertson-Walker solution; 17. Congruences, curvature and Raychaudhuri; 18. Observing and measuring the universe; Part IV. The Physics of Matter and Radiation: 19. Physics of the CMB radiation; 20. Recombination of the primeval plasma; 21. CMB polarisation; 22. CMB anisotropy; Part V. Precision Tools for Precision Cosmology: 23. Likelihood; 24. Frequentist hypothesis testing; 25. Statistical inference: Bayesian; 26. CMB data processing; 27. Parametrising the universe; 28. Precision cosmology; 29. Epilogue; Appendix A. SI, CGS and Planck units; Appendix B. Magnitudes and distances; Appendix C. Representing vectors and tensors; Appendix D. The electromagnetic field; Appendix E. Statistical distributions; Appendix F. Functions on a sphere; Appendix G. Acknowledgements; References; Index.
Inverse probability weighting for covariate adjustment in randomized studies.
Shen, Changyu; Li, Xiaochun; Li, Lingling
2014-02-20
Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining the objectivity is at least as important as precision gain if not more, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. Copyright © 2013 John Wiley & Sons, Ltd.
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
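As a generic illustration of propagating individual measurement uncertainties through a defining functional expression (not the specific calibration models developed in the paper), the sketch below applies the standard first-order formula with numerically estimated partial derivatives; the example function and uncertainty values are placeholders:

    import numpy as np

    def dynamic_pressure(rho, v):
        """Illustrative functional expression: q = 0.5 * rho * v**2."""
        return 0.5 * rho * v**2

    # Measured values and their standard uncertainties (placeholders)
    values = {"rho": 1.20, "v": 45.0}
    sigmas = {"rho": 0.01, "v": 0.30}

    # First-order propagation: var(q) = sum_i (df/dx_i)^2 * sigma_i^2
    q0 = dynamic_pressure(**values)
    var_q = 0.0
    for name, sigma in sigmas.items():
        step = 1e-6 * max(abs(values[name]), 1.0)
        bumped = dict(values, **{name: values[name] + step})
        dfdx = (dynamic_pressure(**bumped) - q0) / step
        var_q += (dfdx * sigma) ** 2

    print(f"q = {q0:.1f} Pa +/- {np.sqrt(var_q):.1f} Pa (1-sigma)")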
Identifiability of PBPK Models with Applications to Dimethylarsinic Acid Exposure
Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss diff...
Characterization of Piezoelectric Stacks for Space Applications
NASA Technical Reports Server (NTRS)
Sherrit, Stewart; Jones, Christopher; Aldrich, Jack; Blodget, Chad; Bao, Xiaoqi; Badescu, Mircea; Bar-Cohen, Yoseph
2008-01-01
Future NASA missions are increasingly seeking to actuate mechanisms to precision levels in the nanometer range and below. Co-fired multilayer piezoelectric stacks offer the required actuation precision that is needed for such mechanisms. To obtain performance statistics and determine reliability for extended use, sets of commercial PZT stacks were tested in various AC and DC conditions at both nominal and high temperatures and voltages. In order to study the lifetime performance of these stacks, five actuators were driven sinusoidally for up to ten billion cycles. An automated data acquisition system was developed and implemented to monitor each stack's electrical current and voltage waveforms over the life of the test. As part of the monitoring tests, the displacement, impedance, capacitance and leakage current were measured to assess the operation degradation. This paper presents some of the results of this effort.
Improved half-life determination and β-delayed γ-ray spectroscopy for 18Ne decay
NASA Astrophysics Data System (ADS)
Grinyer, G. F.; Ball, G. C.; Bouzomita, H.; Ettenauer, S.; Finlay, P.; Garnsworthy, A. B.; Garrett, P. E.; Green, K. L.; Hackman, G.; Leslie, J. R.; Pearson, C. J.; Rand, E. T.; Sumithrarachchi, C. S.; Svensson, C. E.; Thomas, J. C.; Triambak, S.; Williams, S. J.
2013-04-01
The half-life of the superallowed Fermi β+ emitter 18Ne has been determined to ±0.07% precision by counting 1042 keV delayed γ rays that follow approximately 8% of all β decays. The deduced half-life, T1/2=1.6648(11) s, includes a 0.7% correction that accounts for systematic losses associated with rate-dependent detector pulse pileup that was determined using a recently developed γ-ray photopeak-counting technique. This result is a factor of two times more precise than, and in excellent agreement with, a previous lower-statistics measurement that employed the same experimental setup. High-resolution β-delayed γ-ray spectroscopy results for the relative γ-ray intensities and β-decay branching ratios to excited states in the daughter 18F are also presented.
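A minimal sketch of extracting a half-life from time-binned delayed gamma-ray counts by fitting an exponential decay plus constant background with scipy; the simulated counts, binning, and background level are illustrative, and the pileup correction discussed above is not modeled:

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(3)
    t = np.linspace(0, 15, 150)                      # seconds, bin centers
    true_half_life = 1.6648
    expected = 5000 * np.exp(-np.log(2) * t / true_half_life) + 20   # counts per bin
    counts = rng.poisson(expected)

    def model(t, a, half_life, bkg):
        return a * np.exp(-np.log(2) * t / half_life) + bkg

    popt, pcov = curve_fit(model, t, counts, p0=(4000, 1.5, 10),
                           sigma=np.sqrt(np.maximum(counts, 1)), absolute_sigma=True)
    half_life, err = popt[1], np.sqrt(pcov[1, 1])
    print(f"Fitted half-life: {half_life:.4f} +/- {err:.4f} s")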
HPTLC Determination of Artemisinin and Its Derivatives in Bulk and Pharmaceutical Dosage
NASA Astrophysics Data System (ADS)
Agarwal, Suraj P.; Ahuja, Shipra
A simple, selective, accurate, and precise high-performance thin-layer chromatographic (HPTLC) method has been established and validated for the analysis of artemisinin and its derivatives (artesunate, artemether, and arteether) in bulk drugs and formulations. Artemisinin, artesunate, artemether, and arteether were separated on aluminum-backed silica gel 60 F254 plates with toluene:ethyl acetate (10:1), toluene:ethyl acetate:acetic acid (2:8:0.2), toluene:butanol (10:1), and toluene:dichloromethane (0.5:10) mobile phases, respectively. The linear detector response for concentrations between 100 and 600 ng/spot showed a good linear relationship, with r values of 0.9967, 0.9989, 0.9981, and 0.9989 for artemisinin, artesunate, artemether, and arteether, respectively. Statistical analysis proves that the method is precise, accurate, and reproducible and hence can be employed for routine analysis.
Conn, Paul B.; Johnson, Devin S.; Ver Hoef, Jay M.; Hooten, Mevin B.; London, Joshua M.; Boveng, Peter L.
2015-01-01
Ecologists often fit models to survey data to estimate and explain variation in animal abundance. Such models typically require that animal density remains constant across the landscape where sampling is being conducted, a potentially problematic assumption for animals inhabiting dynamic landscapes or otherwise exhibiting considerable spatiotemporal variation in density. We review several concepts from the burgeoning literature on spatiotemporal statistical models, including the nature of the temporal structure (i.e., descriptive or dynamical) and strategies for dimension reduction to promote computational tractability. We also review several features as they specifically relate to abundance estimation, including boundary conditions, population closure, choice of link function, and extrapolation of predicted relationships to unsampled areas. We then compare a suite of novel and existing spatiotemporal hierarchical models for animal count data that permit animal density to vary over space and time, including formulations motivated by resource selection and allowing for closed populations. We gauge the relative performance (bias, precision, computational demands) of alternative spatiotemporal models when confronted with simulated and real data sets from dynamic animal populations. For the latter, we analyze spotted seal (Phoca largha) counts from an aerial survey of the Bering Sea where the quantity and quality of suitable habitat (sea ice) changed dramatically while surveys were being conducted. Simulation analyses suggested that multiple types of spatiotemporal models provide reasonable inference (low positive bias, high precision) about animal abundance, but have potential for overestimating precision. Analysis of spotted seal data indicated that several model formulations, including those based on a log-Gaussian Cox process, had a tendency to overestimate abundance. By contrast, a model that included a population closure assumption and a scale prior on total abundance produced estimates that largely conformed to our a priori expectation. Although care must be taken to tailor models to match the study population and survey data available, we argue that hierarchical spatiotemporal statistical models represent a powerful way forward for estimating abundance and explaining variation in the distribution of dynamical populations.
Huang, Yang; Lowe, Henry J.; Klein, Dan; Cucina, Russell J.
2005-01-01
Objective: The aim of this study was to develop and evaluate a method of extracting noun phrases with full phrase structures from a set of clinical radiology reports using natural language processing (NLP) and to investigate the effects of using the UMLS® Specialist Lexicon to improve noun phrase identification within clinical radiology documents. Design: The noun phrase identification (NPI) module is composed of a sentence boundary detector, a statistical natural language parser trained on a nonmedical domain, and a noun phrase (NP) tagger. The NPI module processed a set of 100 XML-represented clinical radiology reports in Health Level 7 (HL7)® Clinical Document Architecture (CDA)–compatible format. Computed output was compared with manual markups made by four physicians and one author for maximal (longest) NP and those made by one author for base (simple) NP, respectively. An extended lexicon of biomedical terms was created from the UMLS Specialist Lexicon and used to improve NPI performance. Results: The test set was 50 randomly selected reports. The sentence boundary detector achieved 99.0% precision and 98.6% recall. The overall maximal NPI precision and recall were 78.9% and 81.5% before using the UMLS Specialist Lexicon and 82.1% and 84.6% after. The overall base NPI precision and recall were 88.2% and 86.8% before using the UMLS Specialist Lexicon and 93.1% and 92.6% after, reducing false-positives by 31.1% and false-negatives by 34.3%. Conclusion: The sentence boundary detector performs excellently. After the adaptation using the UMLS Specialist Lexicon, the statistical parser's NPI performance on radiology reports increased to levels comparable to the parser's native performance in its newswire training domain and to that reported by other researchers in the general nonmedical domain. PMID:15684131
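The precision and recall figures quoted above follow the usual definitions; a short sketch with hypothetical counts of matched noun phrases, not the study's numbers:

    def precision_recall_f1(true_positives, false_positives, false_negatives):
        precision = true_positives / (true_positives + false_positives)
        recall = true_positives / (true_positives + false_negatives)
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    # Hypothetical counts of base noun phrases matched against manual markup
    p, r, f1 = precision_recall_f1(true_positives=930, false_positives=69,
                                   false_negatives=74)
    print(f"precision={p:.3f}, recall={r:.3f}, F1={f1:.3f}")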
PSF estimation for defocus blurred image based on quantum back-propagation neural network
NASA Astrophysics Data System (ADS)
Gao, Kun; Zhang, Yan; Shao, Xiao-guang; Liu, Ying-hui; Ni, Guoqiang
2010-11-01
Images obtained by an aberration-free system can still be blurred by defocus due to motion in depth and/or zooming. The precondition for restoring the degraded image is to estimate the point spread function (PSF) of the imaging system as precisely as possible. However, it is difficult to identify the analytic model of the PSF precisely due to the complexity of the degradation process. Inspired by the similarity between the quantum process and the imaging process in the probability and statistics fields, a reformed multilayer quantum neural network (QNN) is proposed to estimate the PSF of the defocus-blurred image. Different from the conventional artificial neural network (ANN), an improved quantum neuron model is used in the hidden layer, which introduces a 2-bit controlled-NOT quantum gate to control the output and adopts 2 texture and edge features as the input vectors. The supervised back-propagation learning rule is adopted to train the network on training sets from historical images. Test results show that this method offers high precision and strong generalization ability.
Topics in inference and decision-making with partial knowledge
NASA Technical Reports Server (NTRS)
Safavian, S. Rasoul; Landgrebe, David
1990-01-01
Two essential elements needed in the process of inference and decision-making are prior probabilities and likelihood functions. When both of these components are known accurately and precisely, the Bayesian approach provides a consistent and coherent solution to the problems of inference and decision-making. In many situations, however, either one or both of the above components may not be known, or at least may not be known precisely. This problem of partial knowledge about prior probabilities and likelihood functions is addressed. There are at least two ways to cope with this lack of precise knowledge: robust methods, and interval-valued methods. First, ways of modeling imprecision and indeterminacies in prior probabilities and likelihood functions are examined; then how imprecision in the above components carries over to the posterior probabilities is examined. Finally, the problem of decision making with imprecise posterior probabilities and the consequences of such actions are addressed. Application areas where the above problems may occur are in statistical pattern recognition problems, for example, the problem of classification of high-dimensional multispectral remote sensing image data.
Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher
2018-01-01
Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377
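The abstract defines ECR only verbally, as slope variance scaled against effective error. The sketch below assumes a classical reliability-style ratio for illustration; it is not the authors' exact expression, and the derivation of the effective error term is not reproduced.

```python
def effective_curve_reliability(slope_variance, effective_error):
    """Reliability-style ratio: true slope variance scaled against effective error.
    Assumed form slope_variance / (slope_variance + effective_error); the paper's
    derivation of the effective error term itself is not reproduced here."""
    return slope_variance / (slope_variance + effective_error)

print(effective_curve_reliability(slope_variance=25.0, effective_error=75.0))  # 0.25
```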
Detecting Patchy Reionization in the Cosmic Microwave Background.
Smith, Kendrick M; Ferraro, Simone
2017-07-14
Upcoming cosmic microwave background (CMB) experiments will measure temperature fluctuations on small angular scales with unprecedented precision. Small-scale CMB fluctuations are a mixture of late-time effects: gravitational lensing, Doppler shifting of CMB photons by moving electrons [the kinematic Sunyaev-Zel'dovich (KSZ) effect], and residual foregrounds. We propose a new statistic which separates the KSZ signal from the others, and also allows the KSZ signal to be decomposed in redshift bins. The decomposition extends to high redshift and does not require external data sets such as galaxy surveys. In particular, the high-redshift signal from patchy reionization can be cleanly isolated, enabling future CMB experiments to make high-significance and qualitatively new measurements of the reionization era.
The European Society for Medical Oncology (ESMO) Precision Medicine Glossary.
Yates, L R; Seoane, J; Le Tourneau, C; Siu, L L; Marais, R; Michiels, S; Soria, J C; Campbell, P; Normanno, N; Scarpa, A; Reis-Filho, J S; Rodon, J; Swanton, C; Andre, F
2018-01-01
Precision medicine is rapidly evolving within the field of oncology and has brought many new concepts and terminologies that are often poorly defined when first introduced, which may subsequently lead to miscommunication within the oncology community. The European Society for Medical Oncology (ESMO) recognises these challenges and is committed to support the adoption of precision medicine in oncology. To add clarity to the language used by oncologists and basic scientists within the context of precision medicine, the ESMO Translational Research and Personalised Medicine Working Group has developed a standardised glossary of relevant terms. Relevant terms for inclusion in the glossary were identified via an ESMO member survey conducted in Autumn 2016, and by the ESMO Translational Research and Personalised Medicine Working Group members. Each term was defined by experts in the field, discussed and, if necessary, modified by the Working Group before reaching consensus approval. A literature search was carried out to determine which of the terms, 'precision medicine' and 'personalised medicine', is most appropriate to describe this field. A total of 43 terms are included in the glossary, grouped into five main themes: (i) mechanisms of decision, (ii) characteristics of molecular alterations, (iii) tumour characteristics, (iv) clinical trials and statistics and (v) new research tools. The glossary classes 'precision medicine' or 'personalised medicine' as technically interchangeable but the term 'precision medicine' is favoured as it more accurately reflects the highly precise nature of new technologies that permit base pair resolution dissection of cancer genomes and is less likely to be misinterpreted. The ESMO Precision Medicine Glossary provides a resource to facilitate consistent communication in this field by clarifying and raising awareness of the language employed in cancer research and oncology practice. The glossary will be a dynamic entity, undergoing expansion and refinement over the coming years. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
"Describing our whole experience": the statistical philosophies of W. F. R. Weldon and Karl Pearson.
Pence, Charles H
2011-12-01
There are two motivations commonly ascribed to historical actors for taking up statistics: to reduce complicated data to a mean value (e.g., Quetelet), and to take account of diversity (e.g., Galton). Different motivations will, it is assumed, lead to different methodological decisions in the practice of the statistical sciences. Karl Pearson and W. F. R. Weldon are generally seen as following directly in Galton's footsteps. I argue for two related theses in light of this standard interpretation, based on a reading of several sources in which Weldon, independently of Pearson, reflects on his own motivations. First, while Pearson does approach statistics from this "Galtonian" perspective, he is, consistent with his positivist philosophy of science, utilizing statistics to simplify the highly variable data of biology. Weldon, on the other hand, is brought to statistics by a rich empiricism and a desire to preserve the diversity of biological data. Secondly, we have here a counterexample to the claim that divergence in motivation will lead to a corresponding separation in methodology. Pearson and Weldon, despite embracing biometry for different reasons, settled on precisely the same set of statistical tools for the investigation of evolution. Copyright © 2011 Elsevier Ltd. All rights reserved.
Frontal sinus parameters in computed tomography and sex determination.
Akhlaghi, Mitra; Bakhtavar, Khadijeh; Moarefdoost, Jhale; Kamali, Artin; Rafeifar, Shahram
2016-03-01
The frontal sinus is a sturdy part of the skull that is likely to be retrieved for forensic investigations. We evaluated frontal sinus parameters in paranasal sinus computed tomography (CT) images for sex determination. The study was conducted on 200 normal paranasal sinus CT images of 100 men and 100 women of Persian origin. We categorized the studied population into three age groups of 20-34, 35-49 and ⩾ 50 years. The number of partial septa in the right frontal sinus and the maximum height and width were significantly different between the two sexes. The highest precision for sex determination was for the maximum height of the left frontal sinus (61.3%). In the 20-34 years age-group, height and width of the frontal sinus were significantly different between the two sexes and the height of the left sinus had the highest precision (60.8%). In the 35-49 years age-group, right anterior-posterior diameter had a sex determination precision of 52.3%. No frontal sinus parameter reached a statistically significant level for sex determination in the ⩾ 50 years age-group. The number of septa and scallopings were not useful in sex determination. Frontal sinus parameters did not have a high precision in sex determination among Persian adults. Copyright © 2016. Published by Elsevier Ireland Ltd.
Monitoring the soil degradation by Metastatistical Analysis
NASA Astrophysics Data System (ADS)
Oleschko, K.; Gaona, C.; Tarquis, A.
2009-04-01
The effectiveness of the fractal toolbox in capturing the critical behavior of soil structural patterns during chemical and physical degradation has been documented by our numerous experiments (Oleschko et al., 2008a; 2008b). The spatio-temporal dynamics of these patterns was measured and mapped with high precision in terms of fractal descriptors. All tested fractal techniques were able to detect statistically significant differences in structure between the perfectly spongy and the massive patterns of uncultivated and sodium-saline agricultural soils, respectively. For instance, the Hurst exponent, extracted from Chernozem micromorphological images and from time series of its physical and mechanical properties measured in situ, detected the decrease in roughness (and therefore the increase in H, from 0.17 to 0.30 for images) caused by the loss of the original structural complexity. The combined use of different fractal descriptors brings statistical precision to the quantification of natural system degradation and provides a means for objective soil structure comparison (Oleschko et al., 2000). The ability of fractal parameters to capture critical behavior and phase transitions was documented for contrasting situations, ranging from Andosol deforestation and erosion to strong Vertisol fracturing and consolidation. The Hurst exponent is used to measure the type of persistence and the degree of complexity of structure dynamics. We conclude that there is an urgent need to select and adopt a standardized toolbox of fractal analysis and complexity measures in the Earth sciences. We propose to use second-order (meta-) statistics as subtle measures of complexity (Atmanspacher et al., 1997). A high degree of correlation was documented between the fractal and higher-order statistical descriptors (the four central moments of a stochastic variable's distribution) used for the analysis of system heterogeneity and variability. We propose to call this combined fractal/statistical toolbox Metastatistical Analysis and recommend it for projects aimed at soil degradation monitoring. References: 1. Oleschko, K., Figueroa, B.S., Miranda, M.E., Vuelvas, M.A., Solleiro, E.R., 2000. Soil & Tillage Research, 55: 43. 2. Oleschko, K., Korvin, G., Figueroa, S.B., Vuelvas, M.A., Balankin, A., Flores, L., Carreño, D., 2003. Fractal radar scattering from soil. Physical Review E, 67: 041403. 3. Zamora-Castro, S., Oleschko, K., Flores, L., Ventura, E. Jr., Parrot, J.-F., 2008. Fractal mapping of pore and solid attributes. Vadose Zone Journal, 7(2): 473-492. 4. Oleschko, K., Korvin, G., Muñoz, A., Velásquez, J., Miranda, M.E., Carreon, D., Flores, L., Martínez, M., Velásquez-Valle, M., Brambilla, F., Parrot, J.-F., Ronquillo, G., 2008. Fractal mapping of soil moisture content from remote sensed multi-scale data. Nonlinear Processes in Geophysics, 15: 711-725. 5. Atmanspacher, H., Räth, Ch., Wiedenmann, G., 1997. Statistics and meta-statistics in the concept of complexity. Physica A, 234: 819-829.
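The abstract does not state how the Hurst exponent H was estimated. For orientation only, a rescaled-range (R/S) estimator, one standard way of obtaining H from a one-dimensional series, can be sketched as follows (not the authors' code):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of a 1-D series by rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = [s for s in (2 ** k for k in range(3, 20)) if min_chunk <= s <= n // 2]
    log_s, log_rs = [], []
    for s in sizes:
        rs_vals = []
        for start in range(0, n - s + 1, s):
            chunk = x[start:start + s]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation from the chunk mean
            r = dev.max() - dev.min()               # range of the cumulative deviation
            sd = chunk.std(ddof=0)
            if sd > 0:
                rs_vals.append(r / sd)
        if rs_vals:
            log_s.append(np.log(s))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_s, log_rs, 1)         # H is the slope of log(R/S) vs log(scale)
    return slope

rng = np.random.default_rng(0)
print(hurst_rs(rng.standard_normal(4096)))          # white noise gives H near 0.5
```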
Validity of a smartphone protractor to measure sagittal parameters in adult spinal deformity.
Kunkle, William Aaron; Madden, Michael; Potts, Shannon; Fogelson, Jeremy; Hershman, Stuart
2017-10-01
Smartphones have become an integral tool in the daily life of health-care professionals (Franko 2011). Their ease of use and wide availability often make smartphones the first tool surgeons use to perform measurements. This technique has been validated for certain orthopedic pathologies (Shaw 2012; Quek 2014; Milanese 2014; Milani 2014), but never to assess sagittal parameters in adult spinal deformity (ASD). This study was designed to assess the validity, reproducibility, precision, and efficiency of using a smartphone protractor application to measure sagittal parameters commonly measured in ASD assessment and surgical planning. This study aimed to (1) determine the validity of smartphone protractor applications, (2) determine the intra- and interobserver reliability of smartphone protractor applications when used to measure sagittal parameters in ASD, (3) determine the efficiency of using a smartphone protractor application to measure sagittal parameters, and (4) elucidate whether a physician's level of experience impacts the reliability or validity of using a smartphone protractor application to measure sagittal parameters in ASD. An experimental validation study was carried out. Thirty standard 36″ standing lateral radiographs were examined. Three separate measurements were performed using a marker and protractor; then at a separate time point, three separate measurements were performed using a smartphone protractor application for all 30 radiographs. The first 10 radiographs were then re-measured two more times, for a total of three measurements from both the smartphone protractor and marker and protractor. The parameters included lumbar lordosis, pelvic incidence, and pelvic tilt. Three raters performed all measurements-a junior level orthopedic resident, a senior level orthopedic resident, and a fellowship-trained spinal deformity surgeon. All data, including the time to perform the measurements, were recorded, and statistical analysis was performed to determine intra- and interobserver reliability, as well as accuracy, efficiency, and precision. Statistical analysis using the intra- and interclass correlation coefficient was calculated using R (version 3.3.2, 2016) to determine the degree of intra- and interobserver reliability. High rates of intra- and interobserver reliability were observed between the junior resident, senior resident, and attending surgeon when using the smartphone protractor application as demonstrated by high inter- and intra-class correlation coefficients greater than 0.909 and 0.874 respectively. High rates of inter- and intraobserver reliability were also seen between the junior resident, senior resident, and attending surgeon when a marker and protractor were used as demonstrated by high inter- and intra-class correlation coefficients greater than 0.909 and 0.807 respectively. The lumbar lordosis, pelvic incidence, and pelvic tilt values were accurately measured by all three raters, with excellent inter- and intra-class correlation coefficient values. When the first 10 radiographs were re-measured at different time points, a high degree of precision was noted. Measurements performed using the smartphone application were consistently faster than using a marker and protractor-this difference reached statistical significance of p<.05. Adult spinal deformity radiographic parameters can be measured accurately, precisely, reliably, and more efficiently using a smartphone protractor application than with a standard protractor and wax pencil. 
A high degree of intra- and interobserver reliability was seen between the residents and attending surgeon, indicating measurements made with a smartphone protractor are unaffected by an observer's level of experience. As a result, smartphone protractors may be used when planning ASD surgery. Copyright © 2017 Elsevier Inc. All rights reserved.
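The abstract does not state which ICC variant was computed. As an illustration, a minimal implementation of ICC(2,1) (two-way random effects, absolute agreement, single measurement; Shrout and Fleiss, 1979) applied to a hypothetical radiographs-by-raters table might look like:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    `ratings` is an (n_subjects, k_raters) array."""
    y = np.asarray(ratings, dtype=float)
    n, k = y.shape
    grand = y.mean()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()     # between subjects
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()     # between raters
    ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols   # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical lumbar-lordosis readings (rows = radiographs, columns = three raters):
example = [[52, 54, 53], [38, 37, 39], [61, 60, 62], [45, 47, 46]]
print(round(icc_2_1(example), 3))
```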
Validation of the Filovirus Plaque Assay for Use in Preclinical Studies
Shurtleff, Amy C.; Bloomfield, Holly A.; Mort, Shannon; Orr, Steven A.; Audet, Brian; Whitaker, Thomas; Richards, Michelle J.; Bavari, Sina
2016-01-01
A plaque assay for quantitating filoviruses in virus stocks, prepared viral challenge inocula and samples from research animals has recently been fully characterized and standardized for use across multiple institutions performing Biosafety Level 4 (BSL-4) studies. After standardization studies were completed, Good Laboratory Practices (GLP)-compliant plaque assay method validation studies to demonstrate suitability for reliable and reproducible measurement of the Marburg Virus Angola (MARV) variant and Ebola Virus Kikwit (EBOV) variant commenced at the United States Army Medical Research Institute of Infectious Diseases (USAMRIID). The validation parameters tested included accuracy, precision, linearity, robustness, stability of the virus stocks and system suitability. The MARV and EBOV assays were confirmed to be accurate to ±0.5 log10 PFU/mL. Repeatability precision, intermediate precision and reproducibility precision were sufficient to return viral titers with a coefficient of variation (%CV) of ≤30%, deemed acceptable variation for a cell-based bioassay. Intraclass correlation statistical techniques for the evaluation of the assay’s precision when the same plaques were quantitated by two analysts returned values passing the acceptance criteria, indicating high agreement between analysts. The assay was shown to be accurate and specific when run on Nonhuman Primates (NHP) serum and plasma samples diluted in plaque assay medium, with negligible matrix effects. Virus stocks demonstrated stability for freeze-thaw cycles typical of normal usage during assay retests. The results demonstrated that the EBOV and MARV plaque assays are accurate, precise and robust for filovirus titration in samples associated with the performance of GLP animal model studies. PMID:27110807
J-GFT NMR for precise measurement of mutually correlated nuclear spin-spin couplings.
Atreya, Hanudatta S; Garcia, Erwin; Shen, Yang; Szyperski, Thomas
2007-01-24
G-matrix Fourier transform (GFT) NMR spectroscopy is presented for accurate and precise measurement of chemical shifts and nuclear spin-spin couplings correlated according to spin system. The new approach, named "J-GFT NMR", is based on a largely extended GFT NMR formalism and promises to have a broad impact on projection NMR spectroscopy. Specifically, constant-time J-GFT (6,2)D (HA-CA-CO)-N-HN was implemented for simultaneous measurement of five mutually correlated NMR parameters, that is, 15N backbone chemical shifts and the four one-bond spin-spin couplings 13Calpha-1Halpha, 13Calpha-13C', 15N-13C', and 15N-1HN. The experiment was applied for measuring residual dipolar couplings (RDCs) in an 8 kDa protein Z-domain aligned with Pf1 phages. Comparison with RDC values extracted from conventional NMR experiments reveals that RDCs are measured with high precision and accuracy, which is attributable to the facts that (i) the use of constant time evolution ensures that signals do not broaden whenever multiple RDCs are jointly measured in a single dimension and (ii) RDCs are multiply encoded in the multiplets arising from the joint sampling. This corresponds to measuring the couplings multiple times in a statistically independent manner. A key feature of J-GFT NMR, i.e., the correlation of couplings according to spin systems without reference to sequential resonance assignments, promises to be particularly valuable for rapid identification of backbone conformation and classification of protein fold families on the basis of statistical analysis of dipolar couplings.
Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size
ERIC Educational Resources Information Center
Shieh, Gwowen
2015-01-01
Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…
Whole vertebral bone segmentation method with a statistical intensity-shape model based approach
NASA Astrophysics Data System (ADS)
Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer
2011-03-01
An automatic segmentation algorithm for the vertebrae in human body CT images is presented. Especially we focused on constructing and utilizing 4 different statistical intensity-shape combined models for the cervical, upper / lower thoracic and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as a pre-processing to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge on both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed as fitting of this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In the experimental result by using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, 0.939 mm for cervical, upper and lower thoracic, lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for cervical, thoracic and lumbar vertebrae.
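The linear statistical model at the heart of the precise segmentation step, PCA of stacked training vectors with a new instance expressed as the mean plus a linear combination of principal components, can be sketched as below. The toy data and the 95% variance threshold are assumptions, and the MAP fitting with geodesic active contours used in the paper is not reproduced.

```python
import numpy as np

def build_pca_model(training, var_kept=0.95):
    """PCA model of stacked shape+intensity vectors (one row per training case)."""
    X = np.asarray(training, dtype=float)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s ** 2 / (len(X) - 1)
    keep = np.searchsorted(np.cumsum(var) / var.sum(), var_kept) + 1
    return mean, Vt[:keep]                        # principal-component basis

def project(sample, mean, basis):
    """Parameters b such that sample is approximately mean + b @ basis."""
    return (np.asarray(sample, dtype=float) - mean) @ basis.T

rng = np.random.default_rng(1)
train = rng.standard_normal((10, 200))            # 10 toy training vectors
mean, basis = build_pca_model(train)
b = project(train[0], mean, basis)
recon = mean + b @ basis                          # parametric reconstruction
print(basis.shape, float(np.abs(recon - train[0]).max()))
```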
NASA Astrophysics Data System (ADS)
Wang, Haipeng; Chen, Jianhui; Zhang, Shengda; Zhang, David D.; Wang, Zongli; Xu, Qinghai; Chen, Shengqian; Wang, Shijin; Kang, Shichang; Chen, Fahu
2018-03-01
Long-term, high-resolution temperature records which combine an unambiguous proxy and precise dating are rare in China. In addition, the societal implications of past temperature change on a regional scale have not been sufficiently assessed. Here, based on the modern relationship between chironomids and temperature, we use fossil chironomid assemblages in a precisely dated sediment core from Gonghai Lake to explore temperature variability during the past 4000 years in northern China. Subsequently, we address the possible regional societal implications of temperature change through a statistical analysis of the occurrence of wars. Our results show the following. (1) The mean annual temperature (TANN) was relatively high during 4000-2700 cal yr BP, decreased gradually during 2700-1270 cal yr BP and then fluctuated during the last 1270 years. (2) A cold event in the Period of Disunity, the Sui-Tang Warm Period (STWP), the Medieval Warm Period (MWP) and the Little Ice Age (LIA) can all be recognized in the paleotemperature record, as well as in many other temperature reconstructions in China. This suggests that our chironomid-inferred temperature record for the Gonghai Lake region is representative. (3) Local wars in Shanxi Province, documented in the historical literature during the past 2700 years, are statistically significantly correlated with changes in temperature, and the relationship is a good example of the potential societal implications of temperature change on a regional scale.
A Highly Efficient Design Strategy for Regression with Outcome Pooling
Mitchell, Emily M.; Lyles, Robert H.; Manatunga, Amita K.; Perkins, Neil J.; Schisterman, Enrique F.
2014-01-01
The potential for research involving biospecimens can be hindered by the prohibitive cost of performing laboratory assays on individual samples. To mitigate this cost, strategies such as randomly selecting a portion of specimens for analysis or randomly pooling specimens prior to performing laboratory assays may be employed. These techniques, while effective in reducing cost, are often accompanied by a considerable loss of statistical efficiency. We propose a novel pooling strategy based on the k-means clustering algorithm to reduce laboratory costs while maintaining a high level of statistical efficiency when predictor variables are measured on all subjects, but the outcome of interest is assessed in pools. We perform simulations motivated by the BioCycle study to compare this k-means pooling strategy with current pooling and selection techniques under simple and multiple linear regression models. While all of the methods considered produce unbiased estimates and confidence intervals with appropriate coverage, pooling under k-means clustering provides the most precise estimates, closely approximating results from the full data and losing minimal precision as the total number of pools decreases. The benefits of k-means clustering evident in the simulation study are then applied to an analysis of the BioCycle dataset. In conclusion, when the number of lab tests is limited by budget, pooling specimens based on k-means clustering prior to performing lab assays can be an effective way to save money with minimal information loss in a regression setting. PMID:25220822
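A rough sketch of the pooling idea follows: cluster subjects on their fully observed predictors with k-means, let each pool's outcome be the within-pool average (as a pooled assay would approximately return), and regress pool-level outcomes on pool-level predictor means. The simulated data, pool count and the simple pool-level least-squares fit are illustrative assumptions, not the authors' estimator.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n, n_pools = 300, 30
x = rng.normal(size=(n, 2))                               # predictors known for everyone
y = 1.0 + x @ np.array([0.5, -0.3]) + rng.normal(scale=0.5, size=n)

# Form pools of similar subjects by clustering on the predictors.
labels = KMeans(n_clusters=n_pools, n_init=10, random_state=0).fit_predict(x)

# A pooled assay returns (approximately) the average outcome within each pool.
x_pool = np.vstack([x[labels == g].mean(axis=0) for g in range(n_pools)])
y_pool = np.array([y[labels == g].mean() for g in range(n_pools)])

# Ordinary least squares on the pool-level data.
design = np.column_stack([np.ones(n_pools), x_pool])
beta, *_ = np.linalg.lstsq(design, y_pool, rcond=None)
print(beta)   # close to the generating coefficients (1.0, 0.5, -0.3)
```

With unequal pool sizes, weighting each pool by its size would be the natural refinement.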
Pincus, Steven M; Schmidt, Peter J; Palladino-Negro, Paula; Rubinow, David R
2008-04-01
Enhanced statistical characterization of mood-rating data holds the potential to more precisely classify and sub-classify recurrent mood disorders like premenstrual dysphoric disorder (PMDD) and recurrent brief depressive disorder (RBD). We applied several complementary statistical methods to differentiate mood rating dynamics among women with PMDD, RBD, and normal controls (NC). We compared three subgroups of women: NC (n=8); PMDD (n=15); and RBD (n=9) on the basis of daily self-ratings of sadness, with study lengths between 50 and 120 days. We analyzed mean levels; overall variability, SD; sequential irregularity, approximate entropy (ApEn); and a quantification of the extent of brief and staccato dynamics, denoted 'Spikiness'. For each of SD, irregularity (ApEn), and Spikiness, we showed highly significant subgroup differences, ANOVA p < 0.001 for each statistic; additionally, many paired subgroup comparisons showed highly significant differences. In contrast, mean levels were indistinct among the subgroups. For SD, normal controls had much smaller levels than the other subgroups, with RBD intermediate. ApEn showed PMDD to be significantly more regular than the other subgroups. Spikiness showed NC and RBD data sets to be much more staccato than their PMDD counterparts, and appears to suitably characterize the defining feature of RBD dynamics. Compound criteria based on these statistical measures discriminated diagnostic subgroups with high sensitivity and specificity. Taken together, the statistical suite provides well-defined specifications of each subgroup. This can facilitate accurate diagnosis, and augment the prediction and evaluation of response to treatment. The statistical methodologies have broad and direct applicability to behavioral studies for many psychiatric disorders, and indeed to similar analyses of associated biological signals across multiple axes.
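Approximate entropy is a standard irregularity statistic; a minimal implementation of Pincus's ApEn(m, r), assuming the common defaults m = 2 and r = 0.2 times the series SD, is sketched below (not the study's code):

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """ApEn(m, r) of a 1-D series: Phi_m(r) - Phi_{m+1}(r)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                       # common default tolerance

    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)            # includes the self-match, as in the definition
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(3)
print(approximate_entropy(rng.standard_normal(200)))   # an irregular series gives larger ApEn
```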
Approximate Single-Diode Photovoltaic Model for Efficient I-V Characteristics Estimation
Ting, T. O.; Zhang, Nan; Guan, Sheng-Uei; Wong, Prudence W. H.
2013-01-01
Precise photovoltaic (PV) behavior models are normally described by nonlinear analytical equations. To solve such equations, it is necessary to use iterative procedures. Aiming to make the computation easier, this paper proposes an approximate single-diode PV model that enables high-speed predictions for the electrical characteristics of commercial PV modules. Based on the experimental data, statistical analysis is conducted to validate the approximate model. Simulation results show that the calculated current-voltage (I-V) characteristics fit the measured data with high accuracy. Furthermore, compared with the existing modeling methods, the proposed model reduces the simulation time by approximately 30% in this work. PMID:24298205
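The paper's specific approximation is not reproduced in the abstract. For orientation, the sketch below evaluates the ideal single-diode relation with the series and shunt resistances neglected, which is explicit and therefore needs no iteration; all parameter values are illustrative assumptions.

```python
import numpy as np

def ideal_single_diode_current(v, i_ph=8.0, i_0=1e-9, n=1.3, n_cells=60, t_cell=298.15):
    """Explicit (no iteration) ideal single-diode I-V relation, neglecting the
    series and shunt resistances; illustrative parameter values only."""
    k, q = 1.380649e-23, 1.602176634e-19
    v_t = n * n_cells * k * t_cell / q           # modified thermal voltage of the module
    return i_ph - i_0 * (np.exp(np.asarray(v, dtype=float) / v_t) - 1.0)

v = np.linspace(0.0, 40.0, 9)
print(np.round(ideal_single_diode_current(v), 3))    # sampled I-V curve
```

The full five-parameter model reinstates the series and shunt resistances, making the current implicit and normally requiring the iterative solution that fast approximate models aim to avoid.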
Quasi-Monochromatic Visual Environments and the Resting Point of Accommodation
1988-01-01
accommodation. No statistically significant differences were revealed to support the possibility of color mediated differential regression to resting...discussed with respect to the general findings of the total sample as well as the specific behavior of individual participants. The summarized statistics ...remaining ten varied considerably with respect to the averaged trends reported in the above descriptive statistics as well as with respect to precision
A design of optical measurement laboratory for space-based illumination condition emulation
NASA Astrophysics Data System (ADS)
Xu, Rong; Zhao, Fei; Yang, Xin
2015-10-01
Space Object Identification (SOI) and related technologies have attracted wide attention from spacefaring nations due to the increasingly severe space environment. Multiple ground-based assets have been employed to acquire statistical survey data, detect faint debris, and acquire photometric and spectroscopic data. Great efforts have been made to characterize different space objects using the statistical data acquired by telescopes. Furthermore, detailed laboratory data are needed to optimize the characterization of orbital debris and satellites via material composition and potential rotation axes, which calls for a high-precision and flexible optical measurement system. A typical method of taking optical measurements of a space object (or model) is to move the light source and sensors through every possible orientation around it while keeping the target still. However, moving equipment to accurate orientations in the air is difficult, especially for large precision instruments sensitive to vibrations. Here, a rotation structure of "3+1" axes, with a three-axis turntable manipulating the attitude of the target and the sensor revolving around a single axis, is utilized to emulate every possible illumination condition in space while avoiding the inconvenience of moving large apparatus. First, the source-target-sensor orientation of a real satellite is analyzed, with vectors and coordinate systems built to illustrate their spatial relationship. By bending the Reference Coordinate Frame to the Phase Angle plane, the sensor only needs to revolve around a single axis while the other three degrees of freedom (DOF) are associated with the Euler angles of the satellite. Then, according to practical engineering requirements, an integrated four-axis rotation system is proposed. Schematic diagrams of the three-axis turntable and other equipment give an overview of the future laboratory layout. Finally, proposals on environment arrangement, light-source precautions and sensor selection are provided. Compared to current methods, this design offers advantages in device simplification, automatic control and high-precision measurement.
The Too-Much-Precision Effect.
Loschelder, David D; Friese, Malte; Schaerer, Michael; Galinsky, Adam D
2016-12-01
Past research has suggested a fundamental principle of price precision: The more precise an opening price, the more it anchors counteroffers. The present research challenges this principle by demonstrating a too-much-precision effect. Five experiments (involving 1,320 experts and amateurs in real-estate, jewelry, car, and human-resources negotiations) showed that increasing the precision of an opening offer had positive linear effects for amateurs but inverted-U-shaped effects for experts. Anchor precision backfired because experts saw too much precision as reflecting a lack of competence. This negative effect held unless first movers gave rationales that boosted experts' perception of their competence. Statistical mediation and experimental moderation established the critical role of competence attributions. This research disentangles competing theoretical accounts (attribution of competence vs. scale granularity) and qualifies two putative truisms: that anchors affect experts and amateurs equally, and that more precise prices are linearly more potent anchors. The results refine current theoretical understanding of anchoring and have significant implications for everyday life.
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, He
2016-11-20
Angular velocity information is a requisite for a spacecraft guidance, navigation, and control system. In this paper, an approach for angular velocity estimation based merely on star vector measurement with an improved current statistical model Kalman filter is proposed. High-precision angular velocity estimation can be achieved under dynamic conditions. The amount of calculation is also reduced compared to a Kalman filter. Different trajectories are simulated to test this approach, and experiments with real starry sky observation are implemented for further confirmation. The estimation accuracy is proved to be better than 10^-4 rad/s under various conditions. Both the simulation and the experiment demonstrate that the described approach is effective and shows an excellent performance under both static and dynamic conditions.
NASA Astrophysics Data System (ADS)
Zender, Charles S.
2016-09-01
Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80 and 5-65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
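A minimal sketch of the quantization idea, alternately shaving (zeroing) and setting (to one) the trailing mantissa bits of consecutive float32 values so that roughly nsd significant decimal digits survive, is given below. The bits-per-digit rule of thumb is an assumption, and the production implementation described in the paper differs in detail.

```python
import numpy as np

def bit_groom(values, nsd=3):
    """Alternately shave (zero) and set (to one) the trailing mantissa bits of
    consecutive float32 values, keeping roughly nsd significant decimal digits.
    Sketch of the idea only."""
    x = np.ascontiguousarray(values, dtype=np.float32)
    keep_bits = int(np.ceil(nsd * np.log2(10)))      # assumed bits-per-digit rule of thumb
    zero_bits = max(23 - keep_bits, 0)               # float32 has 23 explicit mantissa bits
    shave_mask = (0xFFFFFFFF << zero_bits) & 0xFFFFFFFF
    set_mask = ~shave_mask & 0xFFFFFFFF
    ints = x.view(np.uint32).copy()
    ints[0::2] &= np.uint32(shave_mask)              # shave: magnitude quantized toward zero
    ints[1::2] |= np.uint32(set_mask)                # set: magnitude quantized away from zero
    return ints.view(np.float32)

data = np.linspace(0.1, 1.0, 8, dtype=np.float32)
print(data)
print(bit_groom(data, nsd=3))
```

Alternating shave and set is what removes the systematic low bias that pure Bit Shaving would otherwise introduce into array means.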
Characterizing the D2 statistic: word matches in biological sequences.
Forêt, Sylvain; Wilson, Susan R; Burden, Conrad J
2009-01-01
Word matches are often used in sequence comparison methods, either as a measure of sequence similarity or in the first search steps of algorithms such as BLAST or BLAT. The D2 statistic is the number of matches of words of k letters between two sequences. Recent advances have been made in the characterization of this statistic and in the approximation of its distribution. Here, these results are extended to the case of approximate word matches. We compute the exact value of the variance of the D2 statistic for the case of a uniform letter distribution, and introduce a method to provide accurate approximations of the variance in the remaining cases. This enables the distribution of D2 to be approximated for typical situations arising in biological research. We apply these results to the identification of cis-regulatory modules, and show that this method detects such sequences with a high accuracy. The ability to approximate the distribution of D2 for both exact and approximate word matches will enable the use of this statistic in a more precise manner for sequence comparison, database searches, and identification of transcription factor binding sites.
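For exact matches, D2 is simply the inner product of the two sequences' k-word count vectors. A minimal sketch with toy sequences follows; the approximate-word-match extension discussed above is not implemented.

```python
from collections import Counter

def word_counts(seq, k):
    """Counts of all overlapping k-letter words in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2_statistic(seq_a, seq_b, k=4):
    """D2 for exact matches: inner product of the two k-word count vectors."""
    ca, cb = word_counts(seq_a, k), word_counts(seq_b, k)
    return sum(n * cb[w] for w, n in ca.items())

print(d2_statistic("ACGTACGTGG", "TTACGTCCACGT", k=4))
```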
How Large Should a Statistical Sample Be?
ERIC Educational Resources Information Center
Menil, Violeta C.; Ye, Ruili
2012-01-01
This study serves as a teaching aid for teachers of introductory statistics. The aim of this study was limited to determining various sample sizes when estimating population proportion. Tables on sample sizes were generated using a C[superscript ++] program, which depends on population size, degree of precision or error level, and confidence…
Visualizing Teacher Education as a Complex System: A Nested Simplex System Approach
ERIC Educational Resources Information Center
Ludlow, Larry; Ell, Fiona; Cochran-Smith, Marilyn; Newton, Avery; Trefcer, Kaitlin; Klein, Kelsey; Grudnoff, Lexie; Haigh, Mavis; Hill, Mary F.
2017-01-01
Our purpose is to provide an exploratory statistical representation of initial teacher education as a complex system comprised of dynamic influential elements. More precisely, we reveal what the system looks like for differently-positioned teacher education stakeholders based on our framework for gathering, statistically analyzing, and graphically…
First β-ν correlation measurement from the recoil-energy spectrum of Penning trapped Ar35 ions
NASA Astrophysics Data System (ADS)
Van Gorp, S.; Breitenfeldt, M.; Tandecki, M.; Beck, M.; Finlay, P.; Friedag, P.; Glück, F.; Herlert, A.; Kozlov, V.; Porobic, T.; Soti, G.; Traykov, E.; Wauters, F.; Weinheimer, Ch.; Zákoucký, D.; Severijns, N.
2014-08-01
We demonstrate a novel method to search for physics beyond the standard model by determining the β-ν angular correlation from the recoil-ion energy distribution after β decay of ions stored in a Penning trap. This recoil-ion energy distribution is measured with a retardation spectrometer. The unique combination of the spectrometer with a Penning trap provides a number of advantages, e.g., a high recoil-ion count rate and low sensitivity to the initial position and velocity distribution of the ions and completely different sources of systematic errors compared to other state-of-the-art experiments. Results of a first measurement with the isotope Ar35 are presented. Although currently at limited precision, we show that a statistical precision of about 0.5% is achievable with this unique method, thereby opening up the possibility of contributing to state-of-the-art searches for exotic currents in weak interactions.
Reproducible and Verifiable Equations of State Using Microfabricated Materials
NASA Astrophysics Data System (ADS)
Martin, J. F.; Pigott, J. S.; Panero, W. R.
2017-12-01
Accurate interpretation of observable geophysical data, relevant to the structure, composition, and evolution of planetary interiors, requires precise determination of appropriate equations of state. We present the synthesis of controlled-geometry nanofabricated samples and insulation layers for the laser-heated diamond anvil cell. We present electron-gun evaporation, sputter deposition, and photolithography methods to mass-produce Pt/SiO2/Fe/SiO2 stacks and MgO insulating disks to be used in LHDAC experiments to reduce uncertainties in equation of state measurements due to large temperature gradients. We present a reanalysis of published iron PVT data to establish a statistically-valid extrapolation of the equation of state to inner core conditions with quantified uncertainties, addressing the complication of covariance in equation of state parameters. We use this reanalysis, together with the synthesized samples, to propose a scheme for measurement and validation of high-precision equations of state relevant to the Earth and super-Earth exoplanets.
Potential Application of a Graphical Processing Unit to Parallel Computations in the NUBEAM Code
NASA Astrophysics Data System (ADS)
Payne, J.; McCune, D.; Prater, R.
2010-11-01
NUBEAM is a comprehensive computational Monte Carlo based model for neutral beam injection (NBI) in tokamaks. NUBEAM computes NBI-relevant profiles in tokamak plasmas by tracking the deposition and the slowing of fast ions. At the core of NUBEAM are vector calculations used to track fast ions. These calculations have recently been parallelized to run on MPI clusters. However, cost and interlink bandwidth limit the ability to fully parallelize NUBEAM on an MPI cluster. Recent implementation of double precision capabilities for Graphical Processing Units (GPUs) presents a cost effective and high performance alternative or complement to MPI computation. Commercially available graphics cards can achieve up to 672 GFLOPS double precision and can handle hundreds of thousands of threads. The ability to execute at least one thread per particle simultaneously could significantly reduce the execution time and the statistical noise of NUBEAM. Progress on implementation on a GPU will be presented.
NASA Astrophysics Data System (ADS)
Bianchi, Eugenio; Haggard, Hal M.; Rovelli, Carlo
2017-08-01
We show that in Oeckl's boundary formalism the boundary vectors that do not have a tensor form represent, in a precise sense, statistical states. Therefore the formalism incorporates quantum statistical mechanics naturally. We formulate general-covariant quantum statistical mechanics in this language. We illustrate the formalism by showing how it accounts for the Unruh effect. We observe that the distinction between pure and mixed states weakens in the general covariant context, suggesting that local gravitational processes are naturally statistical without a sharp quantal versus probabilistic distinction.
Characterizations of linear sufficient statistics
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Reoner, R.; Decell, H. P., Jr.
1977-01-01
A surjective bounded linear operator T from a Banach space X to a Banach space Y must be a sufficient statistic for a dominated family of probability measures defined on the Borel sets of X. These results were applied, so that they characterize linear sufficient statistics for families of the exponential type, including as special cases the Wishart and multivariate normal distributions. The latter result was used to establish precisely which procedures for sampling from a normal population had the property that the sample mean was a sufficient statistic.
AlKhalidi, Bashar A; Shtaiwi, Majed; AlKhatib, Hatim S; Mohammad, Mohammad; Bustanji, Yasser
2008-01-01
A fast and reliable method for the determination of repaglinide is highly desirable to support formulation screening and quality control. A first-derivative UV spectroscopic method was developed for the determination of repaglinide in tablet dosage form and for dissolution testing. First-derivative UV absorbance was measured at 253 nm. The developed method was validated for linearity, accuracy, precision, limit of detection (LOD), and limit of quantitation (LOQ) in comparison to the U.S. Pharmacopeia (USP) column high-performance liquid chromatographic (HPLC) method. The first-derivative UV spectrophotometric method showed excellent linearity [correlation coefficient (r) = 0.9999] in the concentration range of 1-35 microg/mL and precision (relative standard deviation < 1.5%). The LOD and LOQ were 0.23 and 0.72 microg/mL, respectively, and good recoveries were achieved (98-101.8%). Statistical comparison of results of the first-derivative UV spectrophotometric and the USP HPLC methods using the t-test showed that there was no significant difference between the 2 methods. Additionally, the method was successfully used for the dissolution test of repaglinide and was found to be reliable, simple, fast, and inexpensive.
Rhee, Seung Joon; Park, Shi Hwan; Cho, He Myung
2014-01-01
Purpose The purpose of this study is to compare and analyze the precision of optical and electromagnetic navigation systems in total knee arthroplasty (TKA). Materials and Methods We retrospectively reviewed 60 patients who underwent TKA using an optical navigation system and 60 patients who underwent TKA using an electromagnetic navigation system from June 2010 to March 2012. The mechanical axis that was measured on preoperative radiographs and by the intraoperative navigation systems were compared between the groups. The postoperative positions of the femoral and tibial components in the sagittal and coronal plane were assessed. Results The difference of the mechanical axis measured on the preoperative radiograph and by the intraoperative navigation systems was 0.6 degrees more varus in the electromagnetic navigation system group than in the optical navigation system group, but showed no statistically significant difference between the two groups (p>0.05). The positions of the femoral and tibial components in the sagittal and coronal planes on the postoperative radiographs also showed no statistically significant difference between the two groups (p>0.05). Conclusions In TKA, both optical and electromagnetic navigation systems showed high accuracy and reproducibility, and the measurements from the postoperative radiographs showed no significant difference between the two groups. PMID:25505703
Automated brain volumetrics in multiple sclerosis: a step closer to clinical application
Beadnall, H N; Hatton, S N; Bader, G; Tomic, D; Silva, D G
2016-01-01
Background Whole brain volume (WBV) estimates in patients with multiple sclerosis (MS) correlate more robustly with clinical disability than traditional, lesion-based metrics. Numerous algorithms to measure WBV have been developed over the past two decades. We compare Structural Image Evaluation using Normalisation of Atrophy-Cross-sectional (SIENAX) to NeuroQuant and MSmetrix, for assessment of cross-sectional WBV in patients with MS. Methods MRIs from 61 patients with relapsing-remitting MS and 2 patients with clinically isolated syndrome were analysed. WBV measurements were calculated using SIENAX, NeuroQuant and MSmetrix. Statistical agreement between the methods was evaluated using linear regression and Bland-Altman plots. Precision and accuracy of WBV measurement was calculated for (1) NeuroQuant versus SIENAX and (2) MSmetrix versus SIENAX. Results Precision (Pearson's r) of WBV estimation for NeuroQuant and MSmetrix versus SIENAX was 0.983 and 0.992, respectively. Accuracy (Cb) was 0.871 and 0.994, respectively. NeuroQuant and MSmetrix showed a 5.5% and 1.0% volume difference compared with SIENAX, respectively, that was consistent across low and high values. Conclusions In the analysed population, NeuroQuant and MSmetrix both quantified cross-sectional WBV with comparable statistical agreement to SIENAX, a well-validated cross-sectional tool that has been used extensively in MS clinical studies. PMID:27071647
An Evaluation of Different Statistical Targets for Assembling Parallel Forms in Item Response Theory
Ali, Usama S.; van Rijn, Peter W.
2015-01-01
Assembly of parallel forms is an important step in the test development process. Therefore, choosing a suitable theoretical framework to generate well-defined test specifications is critical. The performance of different statistical targets of test specifications using the test characteristic curve (TCC) and the test information function (TIF) was investigated. Test length, the number of test forms, and content specifications are considered as well. The TCC target results in forms that are parallel in difficulty, but not necessarily in terms of precision. Vice versa, test forms created using a TIF target are parallel in terms of precision, but not necessarily in terms of difficulty. As sometimes the focus is either on TIF or TCC, differences in either difficulty or precision can arise. Differences in difficulty can be mitigated by equating, but differences in precision cannot. In a series of simulations using a real item bank, the two-parameter logistic model, and mixed integer linear programming for automated test assembly, these differences were found to be quite substantial. When both TIF and TCC are combined into one target with manipulation to relative importance, these differences can be made to disappear.
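Under the two-parameter logistic model used in the simulations, both targets are direct sums over items: the TCC is the sum of item response probabilities and the TIF is the sum of item Fisher informations. A minimal sketch with hypothetical item parameters:

```python
import numpy as np

def tcc_tif(theta, a, b):
    """Test characteristic curve and test information function for the 2PL model."""
    theta = np.asarray(theta, dtype=float)[:, None]
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))       # item response probabilities
    tcc = p.sum(axis=1)                              # expected number-correct score
    tif = (a ** 2 * p * (1.0 - p)).sum(axis=1)       # Fisher information, summed over items
    return tcc, tif

theta = np.linspace(-3, 3, 7)
a = [1.2, 0.8, 1.5, 1.0]            # hypothetical discriminations
b = [-0.5, 0.0, 0.7, 1.2]           # hypothetical difficulties
tcc, tif = tcc_tif(theta, a, b)
print(np.round(tcc, 2), np.round(tif, 2))
```

Two forms matched on one of these curves need not be matched on the other, which is exactly the divergence in difficulty or precision discussed above.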
In vivo precision of conventional and digital methods for obtaining quadrant dental impressions.
Ender, Andreas; Zimmermann, Moritz; Attin, Thomas; Mehl, Albert
2016-09-01
Quadrant impressions are commonly used as alternative to full-arch impressions. Digital impression systems provide the ability to take these impressions very quickly; however, few studies have investigated the accuracy of the technique in vivo. The aim of this study is to assess the precision of digital quadrant impressions in vivo in comparison to conventional impression techniques. Impressions were obtained via two conventional (metal full-arch tray, CI, and triple tray, T-Tray) and seven digital impression systems (Lava True Definition Scanner, T-Def; Lava Chairside Oral Scanner, COS; Cadent iTero, ITE; 3Shape Trios, TRI; 3Shape Trios Color, TRC; CEREC Bluecam, Software 4.0, BC4.0; CEREC Bluecam, Software 4.2, BC4.2; and CEREC Omnicam, OC). Impressions were taken three times for each of five subjects (n = 15). The impressions were then superimposed within the test groups. Differences from model surfaces were measured using a normal surface distance method. Precision was calculated using the Perc90_10 value. The values for all test groups were statistically compared. The precision ranged from 18.8 (CI) to 58.5 μm (T-Tray), with the highest precision in the CI, T-Def, BC4.0, TRC, and TRI groups. The deviation pattern varied distinctly depending on the impression method. Impression systems with single-shot capture exhibited greater deviations at the tooth surface whereas high-frame rate impression systems differed more in gingival areas. Triple tray impressions displayed higher local deviation at the occlusal contact areas of upper and lower jaw. Digital quadrant impression methods achieve a level of precision, comparable to conventional impression techniques. However, there are significant differences in terms of absolute values and deviation pattern. With all tested digital impression systems, time efficient capturing of quadrant impressions is possible. The clinical precision of digital quadrant impression models is sufficient to cover a broad variety of restorative indications. Yet the precision differs significantly between the digital impression systems.
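The sketch below treats the Perc90_10 value, by assumption, as the spread between the 10th and 90th percentiles of the signed surface deviations obtained when repeated impressions are superimposed; synthetic deviations stand in for real scan data.

```python
import numpy as np

def perc90_10(deviations):
    """Spread between the 90th and 10th percentiles of signed surface deviations
    (assumed definition of the Perc90_10 precision value)."""
    d = np.asarray(deviations, dtype=float)
    p10, p90 = np.percentile(d, [10, 90])
    return p90 - p10

rng = np.random.default_rng(4)
print(round(perc90_10(rng.normal(0.0, 12.0, 10_000)), 1))   # micrometres, roughly 2*1.28*sigma
```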
Robust functional statistics applied to Probability Density Function shape screening of sEMG data.
Boudaoud, S; Rix, H; Al Harrach, M; Marin, F
2014-01-01
Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data according to several contexts like fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample size in the estimation process. This small sample size induces errors in the estimated HOS parameters restraining real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine both kernel density estimation and PDF shape distances to evaluate shape modifications even in presence of small sample size. Then, the proposed statistics are tested, using Monte Carlo simulations, on both normal and Log-normal PDFs that mimic observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small sample size effect and more accurate in sEMG PDF shape screening applications.
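The functional (CSM-inspired) statistics themselves are specific to the paper, but the ingredients they combine, moment-based skewness and kurtosis and a kernel density estimate of the sEMG amplitude PDF, can be computed as in the sketch below; the Laplace-distributed surrogate samples are an assumption for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
semg = rng.laplace(0.0, 1.0, 500)        # surrogate sEMG amplitude samples (assumption)

# Moment-based HOS descriptors; with small samples these are noisy, as noted above.
print("skewness:", stats.skew(semg), "excess kurtosis:", stats.kurtosis(semg))

# Kernel density estimate of the amplitude PDF, one ingredient the functional
# shape statistics build on.
kde = stats.gaussian_kde(semg)
print("estimated PDF at 0:", float(kde(0.0)[0]))
```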
Precision determination of the πN scattering lengths and the charged πNN coupling constant
NASA Astrophysics Data System (ADS)
Ericson, T. E. O.; Loiseau, B.; Thomas, A. W.
2000-01-01
We critically evaluate the isovector GMO sum rule for the charged πNN coupling constant using recent precision data from π⁻p and π⁻d atoms and with careful attention to systematic errors. From the π⁻d scattering length we deduce the pion-nucleon scattering lengths (a(π⁻p) + a(π⁻n))/2 = (−20 ± 6 (statistical) ± 10 (systematic)) × 10^-4 m_π^-1 and (a(π⁻p) − a(π⁻n))/2 = (903 ± 14) × 10^-4 m_π^-1. From this a direct evaluation gives g_c^2(GMO)/4π = 14.20 ± 0.07 (statistical) ± 0.13 (systematic), or f_c^2/4π = 0.0786 ± 0.0008.
Phenomenological constraints on the bulk viscosity of QCD
NASA Astrophysics Data System (ADS)
Paquet, Jean-François; Shen, Chun; Denicol, Gabriel; Jeon, Sangyong; Gale, Charles
2017-11-01
While small at very high temperature, the bulk viscosity of Quantum Chromodynamics is expected to grow in the confinement region. Although its precise magnitude and temperature-dependence in the cross-over region is not fully understood, recent theoretical and phenomenological studies provided evidence that the bulk viscosity can be sufficiently large to have measurable consequences on the evolution of the quark-gluon plasma. In this work, a Bayesian statistical analysis is used to establish probabilistic constraints on the temperature-dependence of bulk viscosity using hadronic measurements from RHIC and LHC.
Reiffsteck, A; Dehennin, L; Scholler, R
1982-11-01
Estrone, 2-methoxyestrone and estradiol-17 beta have been definitively identified in the seminal plasma of man, bull, boar and stallion by high-resolution gas chromatography associated with selective monitoring of characteristic ions of suitable derivatives. Quantitative estimations were performed by isotope dilution with deuterated analogues and by monitoring molecular ions of the trimethylsilyl ethers of labelled and unlabelled compounds. Concentrations of unconjugated and total estrogens are reported together with a statistical evaluation of accuracy and precision.
Precise determination of neutron binding energy of 64Cu
NASA Astrophysics Data System (ADS)
Telezhnikov, S. A.; Granja, C.; Honzatko, J.; Pospisil, S.; Tomandl, I.
2016-05-01
The neutron binding energy in 64Cu has been accurately measured in thermal neutron capture. A composite target of natural Cu and NaCl was used on a high-flux neutron beam with a long measuring time. The γ-ray spectrum emitted in the (n, γ) reaction was measured with an HPGe detector with high statistics (up to 10⁶ events per channel). Intrinsic limitations of HPGe detectors, which restrict the accuracy of energy calibration, were determined. The neutron binding energy B_n of 64Cu was determined to be 7915.867(24) keV.
Direct computational approach to lattice supersymmetric quantum mechanics
NASA Astrophysics Data System (ADS)
Kadoh, Daisuke; Nakayama, Katsumasa
2018-07-01
We study lattice supersymmetric models numerically using the transfer matrix approach. This method consists only of deterministic processes and has no statistical uncertainties. We improve it by performing a scale transformation of variables such that the Witten index is correctly reproduced from the lattice model, and the other prescriptions are shown in detail. Compared to previous Monte Carlo results, we can estimate the effective masses, the SUSY Ward identity and the cut-off dependence of the results with high precision. This kind of information is useful for improving lattice formulations of supersymmetric models.
SPIPS: Spectro-Photo-Interferometry of Pulsating Stars
NASA Astrophysics Data System (ADS)
Mérand, Antoine
2017-10-01
SPIPS (Spectro-Photo-Interferometry of Pulsating Stars) combines radial velocimetry, interferometry, and photometry to estimate physical parameters of pulsating stars, including the presence of infrared excess, color excess, Teff, and the distance/p-factor ratio. The global model-based parallax-of-pulsation method is implemented in Python. Derived parameters have a high level of confidence: statistical precision is improved (compared to other methods) due to the large number of data taken into account, accuracy is improved by using consistent physical modeling, and the reliability of the derived parameters is strengthened by redundancy in the data.
Amplitude analysis and the nature of the Z c(3900)
Pilloni, A.; Fernandez-Ramirez, C.; Jackura, A.; ...
2017-06-21
The microscopic nature of the XYZ states remains an unsettled topic. We show how a thorough amplitude analysis of the data can help constrain models of these states. Specifically, we consider the case of the Z c(3900) peak and discuss possible scenarios of a QCD state, virtual state, or a kinematical enhancement. Here, we conclude that current data are not precise enough to distinguish between these hypotheses; however, the method we propose, when applied to the forthcoming high-statistics measurements, should shed light on the nature of these exotic enhancements.
Results from the HARP Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Catanesi, M. G.
2008-02-21
Hadron production is a key ingredient in many aspects of ν physics. Precise prediction of atmospheric ν fluxes, characterization of accelerator ν beams, and quantification of π production and capture for ν-factory designs would all profit from hadron production measurements. HARP at the CERN PS was the first hadron production experiment designed specifically to meet all of these requirements. It combines a large, full phase space acceptance with low systematic errors and high statistics. HARP was operated in the range from 3 GeV to 15 GeV. We briefly describe here the most recent results.
Hassan-Esfahani, Leila; Ebtehaj, Ardeshir M; Torres-Rua, Alfonso; McKee, Mac
2017-09-14
Applications of satellite-borne observations in precision agriculture (PA) are often limited due to the coarse spatial resolution of satellite imagery. This paper uses high-resolution airborne observations to increase the spatial resolution of satellite data for related applications in PA. A new variational downscaling scheme is presented that uses coincident aerial imagery products from "AggieAir", an unmanned aerial system, to increase the spatial resolution of Landsat satellite data. This approach is primarily tested for downscaling individual-band Landsat images that can be used to derive the normalized difference vegetation index (NDVI) and surface soil moisture (SSM). Quantitative and qualitative results demonstrate promising capabilities of the downscaling approach, enabling an effective increase of the spatial resolution of Landsat imagery by factors of 2 to 4. Specifically, the downscaling scheme retrieved the missing high-resolution features of the imagery and reduced the root mean squared error by 15, 11, and 10 percent in the visible, near-infrared, and thermal infrared bands, respectively. This metric is reduced by 9% in the derived NDVI and remains negligible for the soil moisture products.
Ha, Jung-Yun; Chun, Ju-Na; Son, Jun Sik; Kim, Kyo-Han
2014-01-01
Dental modeling resins have been developed for use in areas where highly precise resin structures are needed. The manufacturers claim that these polymethyl methacrylate/methyl methacrylate (PMMA/MMA) resins show little or no shrinkage after polymerization. This study examined the polymerization shrinkage of five dental modeling resins as well as one temporary PMMA/MMA resin (control). The morphology and the particle size of the prepolymerized PMMA powders were investigated by scanning electron microscopy and laser diffraction particle size analysis, respectively. Linear polymerization shrinkage strains of the resins were monitored for 20 minutes using a custom-made linometer, and the final values (at 20 minutes) were converted into volumetric shrinkages. The final volumetric shrinkage values for the modeling resins were statistically similar to (P > 0.05) or significantly larger than (P < 0.05) that of the control resin and were related to the polymerization kinetics (P < 0.05) rather than the PMMA bead size (P = 0.335). Therefore, optimal control of the polymerization kinetics seems to be more important for producing high-precision resin structures than the use of dental modeling resins. PMID:24779020
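The conversion from linear to volumetric shrinkage is not stated above; under the usual isotropic assumption a linear strain L corresponds to a volumetric shrinkage of 1 - (1 - L)^3, roughly 3L for small strains. A small sketch under that assumption (the paper's exact conversion may differ):

    def volumetric_shrinkage(linear_strain):
        # Isotropic conversion of a linear shrinkage strain (as a fraction)
        # to a volumetric shrinkage fraction; an assumption, not necessarily
        # the formula used in the study.
        return 1.0 - (1.0 - linear_strain) ** 3

    # Example: a 2% linear shrinkage corresponds to about 5.9% volumetric shrinkage
    print(round(volumetric_shrinkage(0.02) * 100, 2))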
A new global anthropogenic heat estimation based on high-resolution nighttime light data
Yang, Wangming; Luan, Yibo; Liu, Xiaolei; Yu, Xiaoyong; Miao, Lijuan; Cui, Xuefeng
2017-01-01
Consumption of fossil fuel resources leads to global warming and climate change. Apart from the negative impact of greenhouse gases on the climate, the increasing emission of anthropogenic heat from energy consumption also has significant impacts on urban ecosystems and the surface energy balance. The objective of this work is to develop a new method of estimating the global anthropogenic heat budget and validate it on the global scale with a high-precision, high-resolution dataset. A statistical algorithm was applied to estimate the annual mean anthropogenic heat (AH-DMSP) from 1992 to 2010 at 1 × 1 km² spatial resolution for the entire planet. AH-DMSP was validated at both provincial and city scales, and the results indicate that our dataset performs well at both scales. Compared with other global anthropogenic heat datasets, the AH-DMSP has a higher precision and finer spatial distribution. Although there are some limitations, the AH-DMSP could provide reliable, multi-scale anthropogenic heat information, which could be used for further research on regional or global climate change and urban ecosystems. PMID:28829436
Sterling, D A; Lewis, R D; Luke, D A; Shadel, B N
2000-06-01
Dust wipe samples collected in the field were tested by nondestructive X-ray fluorescence (XRF) followed by laboratory analysis with flame atomic absorption spectrophotometry (FAAS). Data were analyzed for precision and accuracy of measurement. Replicate samples with the XRF show high precision with an intraclass correlation coefficient (ICC) of 0.97 (P<0.0001) and an overall coefficient of variation of 11.6%. Paired comparison indicates no statistical difference (P=0.272) between XRF and FAAS analysis. Paired samples are highly correlated with an R² ranging between 0.89 for samples that contain paint chips and 0.93 for samples that do not contain paint chips. The ICC for absolute agreement between XRF and laboratory results was 0.95 (P<0.0001). The relative error over the concentration range of 25 to 14,200 microgram Pb is -12% (95% CI, -18 to -5). The XRF appears to be an excellent method for rapid on-site evaluation of dust wipes for clearance and risk assessment purposes, although there are indications of some confounding when paint chips are present. Copyright 2000 Academic Press.
Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun
2016-05-05
An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Infrared Thermal Imaging During Ultrasonic Aspiration of Bone
NASA Astrophysics Data System (ADS)
Cotter, D. J.; Woodworth, G.; Gupta, S. V.; Manandhar, P.; Schwartz, T. H.
Ultrasonic surgical aspirator tips target removal of bone in approaches to tumors or aneurysms. Low-profile angled tips provide increased visualization and safety in many high-risk surgical situations that were commonly approached using a high-speed rotary drill. Utilization of the ultrasonic aspirator for bone removal raised questions about the relative amounts of local and transmitted heat energy. In the sphenoid wing of a cadaver section, ultrasonic bone aspiration yielded a lower thermal rise in precision bone removal than rotary mechanical drills, with a maximum temperature of 31 °C versus 69 °C for fluted and 79 °C for diamond drill bits. Mean ultrasonic fragmentation power was about 8 Watts. Statistical studies using tenacious porcine cranium yielded mean power levels of about 4.5 Watts to 11 Watts and mean temperatures of less than 41.1 °C. Excessively loading the tip yielded momentarily higher power; however, the mean thermal rise was less than 8 °C, with bone removal starting at near body temperature of about 37 °C. Precision bone removal and thermal management were possible with the conditions tested for ultrasonic bone aspiration.
Collette, Laurence; Burzykowski, Tomasz; Carroll, Kevin J; Newling, Don; Morris, Tom; Schröder, Fritz H
2005-09-01
The long duration of phase III clinical trials of overall survival (OS) slows down the treatment-development process. It could be shortened by using surrogate end points. Prostate-specific antigen (PSA) is the most studied biomarker in prostate cancer (PCa). This study attempts to validate PSA end points as surrogates for OS in advanced PCa. Individual data from 2,161 advanced PCa patients treated in studies comparing bicalutamide to castration were used in a meta-analytic approach to surrogate end-point validation. PSA response, PSA normalization, time to PSA progression, and longitudinal PSA measurements were considered. The known association between PSA and OS at the individual patient level was confirmed. The association between the effect of intervention on any PSA end point and on OS was generally low (determination coefficient, < 0.69). It is a common misconception that a high correlation between biomarkers and the true end point justifies the use of the former as surrogates. To statistically validate surrogate end points, a high correlation between the treatment effects on the surrogate and true end point needs to be established across groups of patients treated with two alternative interventions. The levels of association observed in this study indicate that the effect of hormonal treatment on OS cannot be predicted with a high degree of precision from observed treatment effects on PSA end points, and thus statistical validity is unproven. In practice, non-null treatment effects on OS can be predicted only from precisely estimated large effects on time to PSA progression (TTPP; hazard ratio, < 0.50).
NASA Technical Reports Server (NTRS)
Sprowls, D. O.; Bucci, R. J.; Ponchel, B. M.; Brazill, R. L.; Bretz, P. E.
1984-01-01
A technique is demonstrated for accelerated stress corrosion testing of high-strength aluminum alloys. The method offers better precision and shorter exposure times than traditional pass/fail procedures. The approach uses data from tension tests performed on replicate groups of smooth specimens after various lengths of exposure to static stress. The breaking strength measures the degradation in the test specimen's load-carrying ability due to the environmental attack. Analysis of breaking load data by extreme value statistics enables the calculation of survival probabilities and a statistically defined threshold stress applicable to the specific test conditions. A fracture mechanics model is given which quantifies the depth of attack in the stress-corroded specimen by an effective flaw size calculated from the breaking stress and the material strength and fracture toughness properties. Comparisons are made with experimental results from three tempers of 7075 alloy plate tested by the breaking load method and by traditional tests of statically loaded smooth tension bars and conventional precracked specimens.
Direct, experimental evidence of the Fermi surface in YBa2Cu3O(7-x)
NASA Astrophysics Data System (ADS)
Haghighi, H.; Kaiser, J. H.; Rayner, S. L.; West, R. N.; Liu, J. Z.; Shelton, R.; Howell, R. H.; Sterne, P. A.; Solal, F. R.; Fluss, M. J.
1991-04-01
We report new measurements of the electron-positron momentum spectra of YBa2Cu3O(7-x) performed with ultra-high statistical precision. These data differ from previous results in two significant respects: they show the D₂ symmetry appropriate for untwinned crystals and, more importantly, they show unmistakable, statistically significant discontinuities that are evidence of a major Fermi surface section. These results provide a partial answer to a question of special significance to the study of high-temperature superconductors, i.e., the distribution of the electrons in the material, the electronic structure. Special consideration has been given both experimentally and theoretically to the existence and shape of a Fermi surface in these materials and to the superconducting gap. There are only three experimental techniques that can provide details of the electronic structure at useful resolutions. They are angular correlation of positron annihilation radiation (ACAR), angle-resolved photoemission (PE), and de Haas-van Alphen measurements.
Carroll, Robert; Lee, Chi; Tsai, Che-Wei; ...
2015-11-23
High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences compared to conventional alloys with fewer elements. For a specific range of temperatures and strain rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain rate, and material composition. The ratio of the weak spots' healing rate to the strain rate is the main tuning parameter, reminiscent of the Portevin-Le Chatelier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely applicable deformation mechanism is useful for deformation control and alloy design.
Singh, Gyanender P.; Gonczy, Steve T.; Deck, Christian P.; ...
2018-04-19
An interlaboratory round robin study was conducted on the tensile strength of SiC–SiC ceramic matrix composite (CMC) tubular test specimens at room temperature with the objective of expanding the database of mechanical properties of nuclear-grade SiC–SiC and establishing the precision and bias statement for standard test method ASTM C1773. The mechanical property statistics from the round robin study and the precision statistics and precision statement are presented herein. The data show reasonable consistency across the laboratories, indicating that the current C1773-13 ASTM standard is adequate for testing ceramic fiber reinforced ceramic matrix composite tubular test specimens. Furthermore, it was found that the distribution of ultimate tensile strength data was best described with a two-parameter Weibull distribution, while a lognormal distribution provided a good description of the distribution of proportional limit stress data.
The precise time-dependent solution of the Fokker–Planck equation with anomalous diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Ran; Du, Jiulin, E-mail: jiulindu@aliyun.com
2015-08-15
We study the time behavior of the Fokker–Planck equation in Zwanzig's rule (the backward Ito rule) based on the Langevin equation of Brownian motion with anomalous diffusion in a complex medium. The diffusion coefficient is a function in momentum space and follows a generalized fluctuation–dissipation relation. We obtain the precise time-dependent analytical solution of the Fokker–Planck equation, and at long times the solution approaches a stationary power-law distribution in nonextensive statistics. As a test, we have numerically demonstrated the accuracy and validity of the time-dependent solution. Highlights: • The precise time-dependent solution of the Fokker–Planck equation with anomalous diffusion is found. • The anomalous diffusion satisfies a generalized fluctuation–dissipation relation. • At long times the time-dependent solution approaches a power-law distribution in nonextensive statistics. • Numerically we have demonstrated the accuracy and validity of the time-dependent solution.
Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng
2017-12-01
Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
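The statistical analysis is only described qualitatively above. A simplified stand-in for the dilution-series idea is to pool the replicate coefficients of variation (repeatability) and to check how well the mean counts follow a line through the origin versus the dilution fraction (proportionality). A hedged sketch with hypothetical data (not the published statistical model):

    import numpy as np

    def dilution_series_quality(dilution_fractions, counts):
        # counts: rows = dilution levels, columns = replicate observations.
        # Returns (pooled replicate CV, R^2 of a through-origin proportional fit).
        fractions = np.asarray(dilution_fractions, dtype=float)
        counts = np.asarray(counts, dtype=float)
        means = counts.mean(axis=1)
        pooled_cv = np.mean(counts.std(axis=1, ddof=1) / means)
        slope = np.sum(fractions * means) / np.sum(fractions ** 2)
        resid = means - slope * fractions
        r2 = 1.0 - np.sum(resid ** 2) / np.sum((means - means.mean()) ** 2)
        return pooled_cv, r2

    fractions = [1.0, 0.75, 0.5, 0.25]
    replicates = [[102, 98, 101], [77, 74, 76], [49, 52, 50], [26, 24, 25]]
    print(dilution_series_quality(fractions, replicates))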
Reliable Classification of Geologic Surfaces Using Texture Analysis
NASA Astrophysics Data System (ADS)
Foil, G.; Howarth, D.; Abbey, W. J.; Bekker, D. L.; Castano, R.; Thompson, D. R.; Wagstaff, K.
2012-12-01
Communication delays and bandwidth constraints are major obstacles for remote exploration spacecraft. Due to such restrictions, spacecraft could make use of onboard science data analysis to maximize scientific gain, through capabilities such as the generation of bandwidth-efficient representative maps of scenes, autonomous instrument targeting to exploit targets of opportunity between communications, and downlink prioritization to ensure fast delivery of tactically-important data. Of particular importance to remote exploration is the precision of such methods and their ability to reliably reproduce consistent results in novel environments. Spacecraft resources are highly oversubscribed, so any onboard data analysis must provide a high degree of confidence in its assessment. The TextureCam project is constructing a "smart camera" that can analyze surface images to autonomously identify scientifically interesting targets and direct narrow field-of-view instruments. The TextureCam instrument incorporates onboard scene interpretation and mapping to assist these autonomous science activities. Computer vision algorithms map scenes such as those encountered during rover traverses. The approach, based on a machine learning strategy, trains a statistical model to recognize different geologic surface types and then classifies every pixel in a new scene according to these categories. We describe three methods for increasing the precision of the TextureCam instrument. The first uses ancillary data to segment challenging scenes into smaller regions having homogeneous properties. These subproblems are individually easier to solve, preventing uncertainty in one region from contaminating those that can be confidently classified. The second involves a Bayesian approach that maximizes the likelihood of correct classifications by abstaining from ambiguous ones. We evaluate these two techniques on a set of images acquired during field expeditions in the Mojave Desert. Finally, the algorithm was expanded to perform robust texture classification across a wide range of lighting conditions. We characterize both the increase in precision achieved using different input data representations as well as the range of conditions under which reliable performance can be achieved. An ensemble learning approach is used to increase performance by leveraging the illumination-dependent statistics of an image. Our results show that the three algorithmic modifications lead to a significant increase in classification performance as well as an increase in precision using an adjustable and human-understandable metric of confidence.
Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R
ERIC Educational Resources Information Center
Dogan, C. Deha
2017-01-01
Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…
Statistical inference of selection and divergence of rice blast resistance gene Pi-ta
USDA-ARS?s Scientific Manuscript database
The resistance gene Pi-ta has been effectively used to control rice blast disease worldwide. A few recent studies have described the possible evolution of Pi-ta in cultivated and weedy rice. However, evolutionary statistics used for the studies are too limited to precisely understand selection and d...
NASA Astrophysics Data System (ADS)
Grimm, Guido W.; Potts, Alastair J.
2016-03-01
The Coexistence Approach has been used to infer palaeoclimates for many Eurasian fossil plant assemblages. However, the theory that underpins the method has never been examined in detail. Here we discuss acknowledged and implicit assumptions and assess the statistical nature and pseudo-logic of the method. We also compare the Coexistence Approach theory with the active field of species distribution modelling. We argue that the assumptions will inevitably be violated to some degree and that the method lacks any substantive means to identify or quantify these violations. The absence of a statistical framework makes the method highly vulnerable to the vagaries of statistical outliers and exotic elements. In addition, we find numerous logical inconsistencies, such as how climate shifts are quantified (the use of a "centre value" of a coexistence interval) and the ability to reconstruct "extinct" climates from modern plant distributions. Given the problems that have surfaced in species distribution modelling, accurate and precise quantitative reconstructions of palaeoclimates (or even climate shifts) using the nearest-living-relative principle and rectilinear niches (the basis of the method) will not be possible. The Coexistence Approach can be summarised as an exercise that shoehorns a plant fossil assemblage into coexistence and then assumes that this must be the climate. Given the theoretical issues and methodological issues highlighted elsewhere, we suggest that the method be discontinued and that all past reconstructions be disregarded and revisited using less fallacious methods. We outline six steps for (further) validation of available and future taxon-based methods and advocate developing (semi-quantitative) methods that prioritise robustness over precision.
NASA Astrophysics Data System (ADS)
Profe, Jörn; Ohlendorf, Christian
2017-04-01
XRF scanning has been the state-of-the-art technique for geochemical analyses in marine and lacustrine sedimentology for more than a decade. However, little attention has been paid to data precision and technical limitations so far. Using homogenized, dried and powdered samples (certified geochemical reference standards and samples from a lithologically contrasting loess-paleosol sequence) minimizes many adverse effects that influence the XRF signal when analyzing wet sediment cores. This allows the investigation of data precision under ideal conditions and documents a new application of the XRF core-scanner technology at the same time. Reliable interpretation of XRF results requires evaluation of the data precision of single elements as a function of X-ray tube, measurement time, sample compaction and quality of peak fitting. Data precision was determined from ten-fold measurement of each sample. The precision of XRF measurements theoretically obeys Poisson statistics. Fe and Ca exhibit the largest deviations from Poisson statistics. The same elements show the smallest mean relative standard deviations, in the range from 0.5% to 1%. This represents the technical limit of data precision achievable by the installed detector. Measurement times ≥ 30 s yield mean relative standard deviations below 4% for most elements. The quality of peak fitting is only relevant for elements with overlapping fluorescence lines, such as Ba, Ti and Mn, or for elements with low concentrations, such as Y. Differences in sample compaction are marginal and do not change the mean relative standard deviation considerably. Data precision is in the range reported for geochemical reference standards measured by conventional techniques. Therefore, XRF scanning of discrete samples provides a cost- and time-efficient alternative to conventional multi-element analyses. As the best trade-off between economical operation and data quality, we recommend a measurement time of 30 s, resulting in a total scan time of 30 minutes for 30 samples.
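For counting-limited data, Poisson statistics imply a relative standard deviation of about 1/sqrt(N) for N counted photons, which is the yardstick against which the deviations quoted above are judged. A small sketch of that comparison (illustrative, hypothetical counts; not the authors' exact procedure):

    import numpy as np

    def precision_vs_poisson(replicate_counts):
        # Compare the observed relative standard deviation of replicate XRF
        # counts with the Poisson expectation 1/sqrt(mean); ratios well above 1
        # indicate extra scatter from, e.g., peak fitting or calibration.
        counts = np.asarray(replicate_counts, dtype=float)
        mean = counts.mean()
        rsd_observed = counts.std(ddof=1) / mean
        rsd_poisson = 1.0 / np.sqrt(mean)
        return rsd_observed, rsd_poisson, rsd_observed / rsd_poisson

    # Ten-fold measurement of one element in one sample (hypothetical counts)
    print(precision_vs_poisson([52100, 51800, 52600, 51950, 52300,
                                52050, 51700, 52450, 52200, 51900]))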
The RICH detector of AMS-02: 5 years of operation in space
NASA Astrophysics Data System (ADS)
Liu, Hu; Casaus, J.; Giovacchini, F.; Oliva, A.; Xia, X.; AMS02-RICH Collaboration
2017-12-01
AMS-02 is a high-energy particle physics magnetic spectrometer installed on the International Space Station since May 2011 and operating continuously since then. The AMS-02 Ring Imaging Čerenkov counter (RICH) is a specialised sub-detector for the precise measurement of the particle velocity β, with a resolution of Δβ/β = 0.7 (2.4) × 10⁻³ for helium nuclei passing through the Aerogel (NaF) radiator. From the number of emitted photons, the absolute charge magnitude |Z| of the particle can be estimated with an uncertainty of 0.3 charge units for helium nuclei. In 5 years of operation the optical properties of the RICH have shown no significant degradation and the performance of the detector has been stable in time. By means of the simultaneous use of the Silicon Tracker and the RICH, AMS is able to investigate the isotopic composition of cosmic rays in the kinetic energy range from a few GeV/n to ∼10 GeV/n for elements with charge |Z| up to 4 with unprecedented statistics. Precise measurements of light-nuclei isotope ratios in cosmic rays, such as 3He/4He and 10Be/9Be, provide important constraints on the free parameters in models of cosmic-ray propagation in our Galaxy. In particular, the mass distinction between 3He and 4He needed for the measurement of the 3He/4He flux ratio is obtained by statistical methods. The excellent simulation of the AMS detector provides the precise description needed for this analysis. Moreover, the use of the geomagnetic field for selecting control samples of CRs with enhanced abundances of heavy isotopes provides an independent tool for the study of the light-nuclei isotopic composition with AMS.
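Two textbook relations sit behind the RICH quantities quoted above: the Čerenkov angle gives the velocity through cos(θ_c) = 1/(nβ), and the photon yield scales roughly as Z², so the charge follows from the observed photoelectron count relative to a Z = 1 reference at the same velocity. A simplified sketch of both (idealised; all numbers are hypothetical and the actual AMS-02 reconstruction is far more detailed):

    import numpy as np

    def beta_from_cherenkov(theta_c_rad, n):
        # Particle velocity from the Cherenkov angle: cos(theta_c) = 1 / (n * beta).
        return 1.0 / (n * np.cos(theta_c_rad))

    def charge_from_photons(n_pe, n_pe_z1):
        # |Z| from photon counting: yield scales as Z^2 at fixed velocity (idealised).
        return np.sqrt(n_pe / n_pe_z1)

    print(beta_from_cherenkov(0.05, n=1.05), charge_from_photons(36.0, 9.0))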
Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A
2018-05-28
To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than RMC with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using an optimal-removal order were not statistically different from random-removal when averaged over the entire facility. No statistical difference was observed for optimal- and random-removal methods for RMCs that were less variable in time and space than PNCs. Optimized removal performed better than random removal in preserving high temporal variability and accuracy of the hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.
NASA Astrophysics Data System (ADS)
Eason, Thomas J.; Bond, Leonard J.; Lozev, Mark G.
2016-02-01
The accuracy, precision, and reliability of ultrasonic thickness structural health monitoring systems are discussed, including the influence of systematic and environmental factors. To quantify some of these factors, a compression-wave ultrasonic thickness structural health monitoring experiment is conducted on a flat calibration block at ambient temperature with forty-four thin-film sol-gel transducers and various time-of-flight thickness calculation methods. As an initial calibration, the voltage response signals from each sensor are used to determine the common material velocity as well as the signal offset unique to each calculation method. Next, the measurement precision of the thickness error of each method is determined with a proposed weighted censored relative maximum likelihood analysis technique incorporating the propagation of asymmetric measurement uncertainty. The results are presented as upper and lower confidence limits analogous to the a90/95 terminology used in industry-recognized Probability-of-Detection assessments. Future work is proposed to apply the statistical analysis technique to quantify the measurement precision of various thickness calculation methods under different environmental conditions such as high temperature, rough back-wall surface, and system degradation, with an intended application to monitor naphthenic acid corrosion in oil refineries.
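In pulse-echo mode the wall thickness follows from the time of flight as d = v·t/2, and the calibration described above determines a common material velocity plus a per-method signal offset. A minimal sketch assuming a linear calibration of the form d = 0.5·v·t - offset (the assumed form and all numbers are hypothetical):

    import numpy as np

    def thickness_from_tof(tof_s, velocity_m_s, offset_m=0.0):
        # Pulse-echo thickness: the wave crosses the wall twice, hence the factor 1/2.
        return 0.5 * velocity_m_s * tof_s - offset_m

    def calibrate(tof_s, known_thickness_m):
        # Least-squares estimate of a common velocity and a signal offset from
        # calibration-block data, assuming known_thickness = 0.5*v*tof - offset.
        A = np.column_stack([0.5 * np.asarray(tof_s), -np.ones(len(tof_s))])
        (velocity, offset), *_ = np.linalg.lstsq(A, np.asarray(known_thickness_m), rcond=None)
        return velocity, offset

    tofs = np.array([3.37e-6, 3.39e-6, 3.38e-6])                     # seconds, hypothetical
    v, off = calibrate(tofs, np.array([0.00995, 0.01001, 0.00998]))  # metres, hypothetical
    print(thickness_from_tof(3.40e-6, v, off))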
Higgs physics at the CLIC electron-positron linear collider.
Abramowicz, H; Abusleme, A; Afanaciev, K; Alipour Tehrani, N; Balázs, C; Benhammou, Y; Benoit, M; Bilki, B; Blaising, J-J; Boland, M J; Boronat, M; Borysov, O; Božović-Jelisavčić, I; Buckland, M; Bugiel, S; Burrows, P N; Charles, T K; Daniluk, W; Dannheim, D; Dasgupta, R; Demarteau, M; Díaz Gutierrez, M A; Eigen, G; Elsener, K; Felzmann, U; Firlej, M; Firu, E; Fiutowski, T; Fuster, J; Gabriel, M; Gaede, F; García, I; Ghenescu, V; Goldstein, J; Green, S; Grefe, C; Hauschild, M; Hawkes, C; Hynds, D; Idzik, M; Kačarević, G; Kalinowski, J; Kananov, S; Klempt, W; Kopec, M; Krawczyk, M; Krupa, B; Kucharczyk, M; Kulis, S; Laštovička, T; Lesiak, T; Levy, A; Levy, I; Linssen, L; Lukić, S; Maier, A A; Makarenko, V; Marshall, J S; Martin, V J; Mei, K; Milutinović-Dumbelović, G; Moroń, J; Moszczyński, A; Moya, D; Münker, R M; Münnich, A; Neagu, A T; Nikiforou, N; Nikolopoulos, K; Nürnberg, A; Pandurović, M; Pawlik, B; Perez Codina, E; Peric, I; Petric, M; Pitters, F; Poss, S G; Preda, T; Protopopescu, D; Rassool, R; Redford, S; Repond, J; Robson, A; Roloff, P; Ros, E; Rosenblat, O; Ruiz-Jimeno, A; Sailer, A; Schlatter, D; Schulte, D; Shumeiko, N; Sicking, E; Simon, F; Simoniello, R; Sopicki, P; Stapnes, S; Ström, R; Strube, J; Świentek, K P; Szalay, M; Tesař, M; Thomson, M A; Trenado, J; Uggerhøj, U I; van der Kolk, N; van der Kraaij, E; Vicente Barreto Pinto, M; Vila, I; Vogel Gonzalez, M; Vos, M; Vossebeld, J; Watson, M; Watson, N; Weber, M A; Weerts, H; Wells, J D; Weuste, L; Winter, A; Wojtoń, T; Xia, L; Xu, B; Żarnecki, A F; Zawiejski, L; Zgura, I-S
2017-01-01
The Compact Linear Collider (CLIC) is an option for a future [Formula: see text] collider operating at centre-of-mass energies up to [Formula: see text], providing sensitivity to a wide range of new physics phenomena and precision physics measurements at the energy frontier. This paper is the first comprehensive presentation of the Higgs physics reach of CLIC operating at three energy stages: [Formula: see text], 1.4 and [Formula: see text]. The initial stage of operation allows the study of Higgs boson production in Higgsstrahlung ([Formula: see text]) and [Formula: see text]-fusion ([Formula: see text]), resulting in precise measurements of the production cross sections, the Higgs total decay width [Formula: see text], and model-independent determinations of the Higgs couplings. Operation at [Formula: see text] provides high-statistics samples of Higgs bosons produced through [Formula: see text]-fusion, enabling tight constraints on the Higgs boson couplings. Studies of the rarer processes [Formula: see text] and [Formula: see text] allow measurements of the top Yukawa coupling and the Higgs boson self-coupling. This paper presents detailed studies of the precision achievable with Higgs measurements at CLIC and describes the interpretation of these measurements in a global fit.
[Precision medicine: new opportunities and challenges for molecular epidemiology].
Song, Jing; Hu, Yonghua
2016-04-01
Since the completion of the Human Genome Project in 2003 and the announcement of the Precision Medicine Initiative by U.S. President Barack Obama in January 2015, human beings have initially completed the "three steps" of "genomics to biology, genomics to health, and genomics to society". As a new inter-discipline, the emergence and development of precision medicine have relied on the support and promotion from biological science, basic medicine, clinical medicine, epidemiology, statistics, sociology and information science, etc. Meanwhile, molecular epidemiology, as a cross-discipline of epidemiology and molecular biology, is considered to be the core force promoting precision medicine. This article is based on the characteristics and research progress of precision medicine and molecular epidemiology respectively, focusing on the contribution and significance of molecular epidemiology to precision medicine, and exploring the possible opportunities and challenges in the future.
Collective flow measurements with HADES in Au+Au collisions at 1.23A GeV
NASA Astrophysics Data System (ADS)
Kardan, Behruz; Hades Collaboration
2017-11-01
HADES has a large acceptance combined with a good mass resolution and therefore allows the study of dielectron and hadron production in heavy-ion collisions with unprecedented precision. With the statistics of seven billion Au-Au collisions at 1.23A GeV recorded in 2012, the investigation of higher-order flow harmonics is possible. At the BEVALAC and SIS18, directed and elliptic flow have been measured for pions, charged kaons, protons, neutrons and fragments, but higher-order harmonics have not yet been studied. They provide additional important information on the properties of the dense hadronic medium produced in heavy-ion collisions. We present here a high-statistics, multi-differential measurement of v1 and v2 for protons in Au+Au collisions at 1.23A GeV.
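The flow harmonics v1 and v2 discussed above are conventionally estimated, in the event-plane method, as v_n = <cos n(φ - Ψ)>, with φ the particle azimuth and Ψ the event-plane angle. A toy sketch of that estimator (no event-plane resolution correction, which a real analysis would apply; the data below are synthetic):

    import numpy as np

    def vn_event_plane(phi, psi, n):
        # v_n = <cos n(phi - Psi)> with respect to the event-plane angle Psi.
        return np.mean(np.cos(n * (np.asarray(phi) - psi)))

    rng = np.random.default_rng(2)
    psi = 0.3                                    # hypothetical event-plane angle (rad)
    phi = rng.uniform(-np.pi, np.pi, 20000)
    # build in v1 = 0.1 and v2 = 0.05 via accept-reject sampling
    weights = 1 + 2 * 0.1 * np.cos(phi - psi) + 2 * 0.05 * np.cos(2 * (phi - psi))
    phi = phi[rng.uniform(0, weights.max(), phi.size) < weights]
    print(vn_event_plane(phi, psi, 1), vn_event_plane(phi, psi, 2))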
Xu, Xiaobin; Li, Zhenghui; Li, Guo; Zhou, Zhe
2017-04-21
Estimating the state of a dynamic system from noisy sensor measurements is a common problem in sensor methods and applications. Most state estimation methods assume that measurement noise and state perturbations can be modeled as random variables with known statistical properties. However, in some practical applications, engineers can only obtain the range of the noises rather than their precise statistical distributions. Hence, in the framework of Dempster-Shafer (DS) evidence theory, a novel state estimation method is presented that fuses dependent evidence generated from the state equation, the observation equation and the actual observations of the system states under bounded noises. It can be iteratively implemented to provide state estimates calculated from the fusion results at every time step. Finally, the proposed method is applied to a low-frequency acoustic resonance level gauge to obtain high-accuracy measurement results.
van 't Hoff, Marcel; Reuter, Marcel; Dryden, David T F; Oheim, Martin
2009-09-21
Bacteriophage lambda-DNA molecules are frequently used as a scaffold to characterize the action of single proteins unwinding, translocating, digesting or repairing DNA. However, scaling up such single-DNA-molecule experiments under identical conditions to attain statistically relevant sample sizes remains challenging. Additionally, the movies obtained are frequently noisy and difficult to analyse with any precision. We address these two problems here using, firstly, a novel variable-angle total internal reflection fluorescence (VA-TIRF) reflector composed of a minimal set of optical reflective elements, and secondly, singular value decomposition (SVD) to improve the signal-to-noise ratio prior to analysing time-lapse image stacks. As an example, we visualize under identical optical conditions hundreds of surface-tethered single lambda-DNA molecules, stained with the intercalating dye YOYO-1 iodide, and stretched out in a microcapillary flow. Another novelty of our approach is that we arrange on a mechanically driven stage several capillaries containing saline, calibration buffer and lambda-DNA, respectively, thus extending the approach to high-content, high-throughput screening of single molecules. Our length measurements of individual DNA molecules from noise-reduced kymograph images using SVD display a 6-fold enhanced precision compared to raw-data analysis, reaching approximately 1 kbp resolution. Combining these two methods, our approach provides a straightforward yet powerful way of collecting statistically relevant amounts of data in a semi-automated manner. We believe that our conceptually simple technique should be of interest for a broader range of single-molecule studies, well beyond the specific example of lambda-DNA shown here.
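The SVD-based noise reduction referred to above amounts to keeping only the leading singular components of a kymograph image and discarding the rest. A minimal sketch of rank-truncated SVD denoising (the rank choice and the toy data are assumptions; the paper's rank-selection rule is not reproduced here):

    import numpy as np

    def svd_denoise(image, rank):
        # Keep only the `rank` largest singular components of a 2-D image.
        u, s, vt = np.linalg.svd(image, full_matrices=False)
        return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

    rng = np.random.default_rng(3)
    clean = np.outer(np.hanning(200), np.hanning(300))        # toy low-rank structure
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)
    denoised = svd_denoise(noisy, rank=3)
    print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))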
How to classify plantar plate injuries: parameters from history and physical examination.
Nery, Caio; Coughlin, Michael; Baumfeld, Daniel; Raduan, Fernando; Mann, Tania Szejnfeld; Catena, Fernanda
2015-01-01
To find the best clinical parameters for defining and classifying the degree of plantar plate injuries. Sixty-eight patients (100 metatarsophalangeal joints) were classified in accordance with the Arthroscopic Anatomical Classification for plantar plate injuries and were divided into five groups (0 to IV). Their medical files were reviewed and the incidence of each parameter in the respective group was correlated. These parameters were: use of high heels, sports, acute pain, local edema, Mulder's sign, widening of the interdigital space, pain at the head of the corresponding metatarsal, touching the ground, the "drawer test", toe grip and toe deformities (in the sagittal, coronal and transversal planes). There were no statistically significant associations between the degree of injury and use of high-heel shoes, sports trauma, pain at the head of the metatarsal, Mulder's sign, deformity in pronation or displacement in the transversal and sagittal planes (although their combination, i.e. "cross toe", showed a statistically significant correlation). Positive correlations with the severity of the injuries were found for initial acute pain, progressive widening of the interdigital space, loss of "touching the ground", positive results from the "drawer test" on the metatarsophalangeal joint, diminished grip strength and toe deformity in supination. The "drawer test" was the most reliable and precise tool for classifying the degree of plantar plate injury, followed by "touching the ground" and rotational deformities. It is possible to improve the precision of the diagnosis and the predictions of the anatomical classification for plantar plate injuries by combining the clinical history and data from the physical examination.
Revolution of Alzheimer Precision Neurology Passageway of Systems Biology and Neurophysiology.
Hampel, Harald; Toschi, Nicola; Babiloni, Claudio; Baldacci, Filippo; Black, Keith L; Bokde, Arun L W; Bun, René S; Cacciola, Francesco; Cavedo, Enrica; Chiesa, Patrizia A; Colliot, Olivier; Coman, Cristina-Maria; Dubois, Bruno; Duggento, Andrea; Durrleman, Stanley; Ferretti, Maria-Teresa; George, Nathalie; Genthon, Remy; Habert, Marie-Odile; Herholz, Karl; Koronyo, Yosef; Koronyo-Hamaoui, Maya; Lamari, Foudil; Langevin, Todd; Lehéricy, Stéphane; Lorenceau, Jean; Neri, Christian; Nisticò, Robert; Nyasse-Messene, Francis; Ritchie, Craig; Rossi, Simone; Santarnecchi, Emiliano; Sporns, Olaf; Verdooner, Steven R; Vergallo, Andrea; Villain, Nicolas; Younesi, Erfan; Garaci, Francesco; Lista, Simone
2018-03-16
The Precision Neurology development process implements systems theory with system biology and neurophysiology in a parallel, bidirectional research path: a combined hypothesis-driven investigation of systems dysfunction within distinct molecular, cellular, and large-scale neural network systems in both animal models as well as through tests for the usefulness of these candidate dynamic systems biomarkers in different diseases and subgroups at different stages of pathophysiological progression. This translational research path is paralleled by an "omics"-based, hypothesis-free, exploratory research pathway, which will collect multimodal data from progressing asymptomatic, preclinical, and clinical neurodegenerative disease (ND) populations, within the wide continuous biological and clinical spectrum of ND, applying high-throughput and high-content technologies combined with powerful computational and statistical modeling tools, aimed at identifying novel dysfunctional systems and predictive marker signatures associated with ND. The goals are to identify common biological denominators or differentiating classifiers across the continuum of ND during detectable stages of pathophysiological progression, characterize systems-based intermediate endophenotypes, validate multi-modal novel diagnostic systems biomarkers, and advance clinical intervention trial designs by utilizing systems-based intermediate endophenotypes and candidate surrogate markers. Achieving these goals is key to the ultimate development of early and effective individualized treatment of ND, such as Alzheimer's disease. The Alzheimer Precision Medicine Initiative (APMI) and cohort program (APMI-CP), as well as the Paris based core of the Sorbonne University Clinical Research Group "Alzheimer Precision Medicine" (GRC-APM) were recently launched to facilitate the passageway from conventional clinical diagnostic and drug development toward breakthrough innovation based on the investigation of the comprehensive biological nature of aging individuals. The APMI movement is gaining momentum to systematically apply both systems neurophysiology and systems biology in exploratory translational neuroscience research on ND.
Proton radius from electron scattering data
NASA Astrophysics Data System (ADS)
Higinbotham, Douglas W.; Kabir, Al Amin; Lin, Vincent; Meekins, David; Norum, Blaine; Sawatzky, Brad
2016-05-01
Background: The proton charge radius extracted from recent muonic hydrogen Lamb shift measurements is significantly smaller than that extracted from atomic hydrogen and electron scattering measurements. The discrepancy has become known as the proton radius puzzle. Purpose: In an attempt to understand the discrepancy, we review high-precision electron scattering results from Mainz, Jefferson Lab, Saskatoon, and Stanford. Methods: We make use of stepwise regression techniques using the F test as well as the Akaike information criterion to systematically determine the predictive variables to use for a given set and range of electron scattering data as well as to provide multivariate error estimates. Results: Starting with the precision, low four-momentum-transfer (Q²) data from Mainz (1980) and Saskatoon (1974), we find that a stepwise regression of the Maclaurin series using the F test as well as the Akaike information criterion justifies using a linear extrapolation, which yields a value for the proton radius that is consistent with the result obtained from muonic hydrogen measurements. Applying the same Maclaurin series and statistical criteria to the 2014 Rosenbluth results on G_E from Mainz, we again find that the stepwise regression tends to favor a radius consistent with the muonic hydrogen radius but produces results that are extremely sensitive to the range of data included in the fit. Making use of the high-Q² data on G_E to select functions which extrapolate to high Q², we find that a Padé (N = M = 1) statistical model works remarkably well, as does a dipole function with a 0.84 fm radius, G_E(Q²) = (1 + Q²/0.66 GeV²)⁻². Conclusions: Rigorous applications of stepwise regression techniques and multivariate error estimates result in the extraction of a proton charge radius that is consistent with the muonic hydrogen result of 0.84 fm, either from linear extrapolation of the extremely-low-Q² data or by use of the Padé approximant for extrapolation using a larger range of data. Thus, based on a purely statistical analysis of electron scattering data, we conclude that the electron scattering results and the muonic hydrogen results are consistent. It is the atomic hydrogen results that are the outliers.
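At low Q² the form factor expands as G_E(Q²) ≈ 1 - r²Q²/6, so a linear extrapolation yields the radius from the fitted slope, r = sqrt(-6·dG_E/dQ²). A small sketch of that step on synthetic dipole data (values hypothetical; this is not the published fit or its error analysis):

    import numpy as np

    HBARC_GEV_FM = 0.1973269804       # hbar*c in GeV*fm, to convert GeV^-1 to fm

    def radius_from_linear_fit(q2_gev2, ge):
        # Fit G_E = a + b*Q^2 at low Q^2 and return r_E = sqrt(-6*b) in fm.
        b, a = np.polyfit(q2_gev2, ge, 1)
        return np.sqrt(-6.0 * b) * HBARC_GEV_FM

    q2 = np.linspace(0.0005, 0.005, 30)           # GeV^2, hypothetical low-Q^2 range
    ge_dipole = (1 + q2 / 0.66) ** -2             # dipole form quoted in the abstract
    print(round(radius_from_linear_fit(q2, ge_dipole), 3))   # close to 0.84 fm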
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, David R.; Bershady, Matthew A., E-mail: david.andersen@nrc-cnrc.gc.ca, E-mail: mab@astro.wisc.edu
2013-05-01
Using the integral field unit DensePak on the WIYN 3.5 m telescope we have obtained Hα velocity fields of 39 nearly face-on disks at echelle resolutions. High-quality, uniform kinematic data and a new modeling technique enabled us to derive accurate and precise kinematic inclinations with mean i_kin = 23° for 90% of these galaxies. Modeling the kinematic data as single, inclined disks in circular rotation improves upon the traditional tilted-ring method. We measure kinematic inclinations with a precision in sin i of 25% at 20° and 6% at 30°. Kinematic inclinations are consistent with photometric and inverse Tully-Fisher inclinations when the sample is culled of galaxies with kinematic asymmetries, for which we give two specific prescriptions. Kinematic inclinations can therefore be used in statistical "face-on" Tully-Fisher studies. A weighted combination of multiple, independent inclination measurements yields the most precise and accurate inclination. Combining inverse Tully-Fisher inclinations with kinematic inclinations yields joint probability inclinations with a precision in sin i of 10% at 15° and 5% at 30°. This level of precision makes accurate mass decompositions of galaxies possible even at low inclination. We find scaling relations between rotation speed and disk scale length identical to results from more inclined samples. We also observe the trend of more steeply rising rotation curves with increased rotation speed and light concentration. This trend appears to be uncorrelated with disk surface brightness.
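The weighted combination of independent inclination measurements mentioned above suggests standard inverse-variance weighting; since the quoted precisions are fractions of sin i, a natural (assumed) implementation combines the estimates in sin i. A hedged sketch:

    import numpy as np

    def combine_inclinations(sin_i_values, frac_errors):
        # Inverse-variance weighted mean of independent sin(i) estimates;
        # frac_errors are fractional uncertainties on sin(i). The paper's exact
        # weighting scheme may differ from this assumption.
        sin_i = np.asarray(sin_i_values, dtype=float)
        sigma = sin_i * np.asarray(frac_errors, dtype=float)
        w = 1.0 / sigma ** 2
        sin_i_comb = np.sum(w * sin_i) / np.sum(w)
        sigma_comb = 1.0 / np.sqrt(np.sum(w))
        return np.degrees(np.arcsin(sin_i_comb)), sigma_comb / sin_i_comb

    # kinematic estimate (25% at ~20 deg) combined with a hypothetical 15%
    # inverse Tully-Fisher estimate at ~22 deg
    print(combine_inclinations([np.sin(np.radians(20)), np.sin(np.radians(22))], [0.25, 0.15]))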
Zender, Charles S.
2016-09-19
Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25–80% and 5–65%, respectively, for single-precision values (the most common case for climate data) quantized to retain 1–5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1–2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
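A minimal Python sketch of the shave/set idea for float32 arrays follows. The mapping from decimal digits to retained mantissa bits is an assumption of this sketch, not necessarily the exact heuristic of the NCO implementation.

```python
import numpy as np

def bit_groom(arr, nsd=3):
    """Quantize float32 values, keeping roughly `nsd` significant decimal digits."""
    keep = int(np.ceil(nsd * np.log2(10))) + 1    # mantissa bits to retain (assumed mapping)
    drop = max(0, 23 - keep)                       # float32 has 23 explicit mantissa bits
    bits = arr.astype(np.float32).view(np.uint32).copy()
    mask_shave = np.uint32((0xFFFFFFFF >> drop) << drop)   # zeroes the low `drop` bits
    mask_set = np.uint32((1 << drop) - 1)                   # sets the low `drop` bits to one
    bits[0::2] &= mask_shave    # shave even-indexed values ...
    bits[1::2] |= mask_set      # ... and set odd-indexed values, alternating to avoid bias
    return bits.view(np.float32)

data = np.linspace(0.1234567, 98.7654321, 8, dtype=np.float32)
print(data)
print(bit_groom(data, nsd=2))
```

The groomed array still looks like ordinary IEEE floats to readers; the storage saving comes from running a lossless compressor such as DEFLATE over the now highly compressible trailing bits.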
Nonlinear Statistical Estimation with Numerical Maximum Likelihood
1974-10-01
...probably most directly attributable to the speed, precision and compactness of the linear programming algorithm exercised; the mutual primal-dual ... discriminant analysis is to classify the individual as a member of population π1 or π2 according to the relative ... Contents fragments: Introduction to the Dissertation; Introduction to Statistical Estimation Theory; Choice of Estimator; Density Functions.
2009-12-01
...events. Work associated with aperiodic tasks has the same statistical behavior and the same timing requirements; the timing deadlines are soft. Sporadic ... answers, but it is possible to calculate how precise the estimates are. Simulation-based performance analysis of a model includes a statistical ... to evaluate all possible states in a timely manner. This is the principal reason for resorting to simulation and statistical analysis to evaluate ...
Accuracy of complete-arch dental impressions: a new method of measuring trueness and precision.
Ender, Andreas; Mehl, Albert
2013-02-01
A new approach to both 3-dimensional (3D) trueness and precision is necessary to assess the accuracy of intraoral digital impressions and compare them to conventionally acquired impressions. The purpose of this in vitro study was to evaluate whether a new reference scanner is capable of measuring conventional and digital intraoral complete-arch impressions for 3D accuracy. A steel reference dentate model was fabricated and measured with a reference scanner (digital reference model). Conventional impressions were made from the reference model, poured with Type IV dental stone, scanned with the reference scanner, and exported as digital models. Additionally, digital impressions of the reference model were made and the digital models were exported. Precision was measured by superimposing the digital models within each group. Superimposing the digital models on the digital reference model assessed the trueness of each impression method. Statistical significance was assessed with an independent sample t test (α=.05). The reference scanner delivered high accuracy over the entire dental arch with a precision of 1.6 ±0.6 µm and a trueness of 5.3 ±1.1 µm. Conventional impressions showed significantly higher precision (12.5 ±2.5 µm) and trueness values (20.4 ±2.2 µm) with small deviations in the second molar region (P<.001). Digital impressions were significantly less accurate with a precision of 32.4 ±9.6 µm and a trueness of 58.6 ±15.8 µm (P<.001). More systematic deviations of the digital models were visible across the entire dental arch. The new reference scanner is capable of measuring the precision and trueness of both digital and conventional complete-arch impressions. The digital impression is less accurate and shows a different pattern of deviation than the conventional impression. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Jin, Zhenyu; Lin, Jing; Liu, Zhong
2008-07-01
By studying the classical testing techniques (such as the Shack-Hartmann wave-front sensor) adopted for measuring the aberrations of ground-based astronomical optical telescopes, we put forward two testing methods built on high-resolution image reconstruction technology. One is based on the averaged short-exposure OTF and the other on the speckle interferometric OTF of Antoine Labeyrie. Research by J. Ohtsubo, F. Roddier, Richard Barakat and J.-Y. Zhang indicated that the SITF statistics are affected by the telescope optical aberrations, which means the SITF statistics are a function of the optical system aberration and the atmospheric Fried parameter (seeing). Telescope diffraction-limited information can be obtained through two statistical treatments of abundant speckle images: with the first method, we can extract low-frequency information such as the full width at half maximum (FWHM) of the telescope PSF to estimate the optical quality; with the second method, we can obtain a more precise description of the telescope PSF including high-frequency information. We will apply the two testing methods to the 2.4 m optical telescope of the GMG Observatory in China to validate their repeatability and correctness, and compare the testing results with those obtained by the Shack-Hartmann wave-front sensor. This part will be described in detail in our paper.
Identifiability of PBPK Models with Applications to ...
Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss different types of identifiability that occur in PBPK models and give reasons why they occur. We particularly focus on how the mathematical structure of a PBPK model and lack of appropriate data can lead to statistical models in which it is impossible to estimate at least some parameters precisely. Methods are reviewed which can determine whether a purely linear PBPK model is globally identifiable. We propose a theorem which determines when identifiability at a set of finite and specific values of the mathematical PBPK model (global discrete identifiability) implies identifiability of the statistical model. However, we are unable to establish conditions that imply global discrete identifiability, and conclude that the only safe approach to analysis of PBPK models involves Bayesian analysis with truncated priors. Finally, computational issues regarding posterior simulations of PBPK models are discussed. The methodology is very general and can be applied to numerous PBPK models which can be expressed as linear time-invariant systems. A real data set of a PBPK model for exposure to dimethyl arsinic acid (DMA(V)) is presented to illustrate the proposed methodology.
Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR).
Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan
2013-11-01
Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule developments. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification with dose and slice thickness as additional variables. Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR; B: iNtuition), and analyzed for accuracy and precision. Precision was found to be generally comparable between FBP and iterative reconstruction with no statistically significant difference noted for different dose levels, slice thickness, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of accuracy on reconstruction algorithms, such that volumes quantified from scans of different reconstruction algorithms can be compared. The small difference found between the precision of FBP and iterative reconstructions could be a result of both iterative reconstruction's diminished noise reduction at the edge of the nodules and the loss of resolution at high noise levels with iterative reconstruction. The findings do not rule out potential advantages of iterative reconstruction that might be evident in a study that uses a larger number of nodules or repeated scans.
Yue Xu, Selene; Nelson, Sandahl; Kerr, Jacqueline; Godbole, Suneeta; Patterson, Ruth; Merchant, Gina; Abramson, Ian; Staudenmayer, John; Natarajan, Loki
2018-04-01
Physical inactivity is a recognized risk factor for many chronic diseases. Accelerometers are increasingly used as an objective means to measure daily physical activity. One challenge in using these devices is missing data due to device nonwear. We used a well-characterized cohort of 333 overweight postmenopausal breast cancer survivors to examine missing data patterns of accelerometer outputs over the day. Based on these observed missingness patterns, we created pseudo-simulated datasets with realistic missing data patterns. We developed statistical methods to design imputation and variance weighting algorithms to account for missing data effects when fitting regression models. Bias and precision of each method were evaluated and compared. Our results indicated that not accounting for missing data in the analysis yielded unstable estimates in the regression analysis. Incorporating variance weights and/or subject-level imputation improved precision by >50%, compared to ignoring missing data. We recommend that these simple, easy-to-implement statistical tools be used to improve analysis of accelerometer data.
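A toy Python sketch of the two fixes recommended above, subject-level imputation plus variance weighting in the regression. All data, the missingness rate, and the covariate are simulated here and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_days = 50, 7
activity = rng.normal(300, 60, (n_subj, n_days))         # minutes/day of activity, simulated
activity[rng.random((n_subj, n_days)) < 0.2] = np.nan     # ~20% nonwear (missing) days

# Subject-level imputation: replace a subject's missing days by her own observed mean
subj_mean = np.nanmean(activity, axis=1, keepdims=True)
imputed = np.where(np.isnan(activity), subj_mean, activity)
y = imputed.mean(axis=1)                                   # subject-level outcome

# Variance weights: subjects with more wear days and less day-to-day scatter get more weight
n_obs = np.sum(~np.isnan(activity), axis=1)
w = n_obs / (np.nanvar(activity, axis=1) + 1e-9)

x = rng.normal(27, 4, n_subj)                              # simulated covariate (e.g. BMI)
X = np.column_stack([np.ones(n_subj), x])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))   # weighted least squares
print(beta)
```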
NASA Astrophysics Data System (ADS)
Cabalín, L. M.; González, A.; Ruiz, J.; Laserna, J. J.
2010-08-01
Statistical uncertainty in the quantitative analysis of solid samples in motion by laser-induced breakdown spectroscopy (LIBS) has been assessed. For this purpose, a LIBS demonstrator was designed and constructed in our laboratory. The LIBS system consisted of a laboratory-scale conveyor belt, a compact optical module and a Nd:YAG laser operating at 532 nm. The speed of the conveyor belt was variable and could be adjusted up to a maximum speed of 2 m s⁻¹. Statistical uncertainty in the analytical measurements was estimated in terms of precision (reproducibility and repeatability) and accuracy. The results obtained by LIBS on shredded scrap samples under real conditions have demonstrated that the analytical precision and accuracy of LIBS is dependent on the sample geometry, position on the conveyor belt and surface cleanliness. Flat, relatively clean scrap samples exhibited acceptable reproducibility and repeatability; by contrast, samples with an irregular shape or a dirty surface exhibited a poor relative standard deviation.
Metz, Thomas; Walewski, Joachim; Kaminski, Clemens F
2003-03-20
Evaluation schemes, e.g., least-squares fitting, are not generally applicable to all types of experiments. If the evaluation scheme is not derived from a measurement model that properly describes the experiment to be evaluated, poorer precision or accuracy than attainable from the measured data can result. We outline ways in which statistical data evaluation schemes should be derived for all types of experiment, and we demonstrate them for laser-spectroscopic experiments, in which pulse-to-pulse fluctuations of the laser power cause correlated variations of laser intensity and generated signal intensity. The method of maximum likelihood is demonstrated in the derivation of an appropriate fitting scheme for this type of experiment. Statistical data evaluation contains the following steps. First, one has to provide a measurement model that considers statistical variation of all enclosed variables. Second, an evaluation scheme applicable to this particular model has to be derived or provided. Third, the scheme has to be characterized in terms of accuracy and precision. A criterion for accepting an evaluation scheme is that it have accuracy and precision as close as possible to the theoretical limit. The fitting scheme derived for experiments with pulsed lasers is compared to well-established schemes in terms of fitting power and rational functions. The precision is found to be as much as three times better than for simple least-squares fitting. Our scheme also suppresses the bias on the estimated model parameters that other methods may exhibit if they are applied in an uncritical fashion. We focus on experiments in nonlinear spectroscopy, but the fitting scheme derived is applicable in many scientific disciplines.
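A generic Python sketch of the workflow described above: write down a measurement model first, then derive the estimator by maximum likelihood rather than plugging the data into a default least-squares fit. The power-law signal model and the multiplicative noise assumption below are illustrative choices, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
E = rng.normal(1.0, 0.15, 500)            # pulse-to-pulse laser energy (a.u.), simulated
true_c, true_n = 2.0, 1.8
S = true_c * E**true_n * (1 + rng.normal(0, 0.05, E.size))   # signal with multiplicative noise

def neg_log_lik(theta):
    c, n, log_sigma = theta
    mu = c * E**n
    # Gaussian likelihood whose variance scales with the model value, so large
    # pulses carry proportionally larger fluctuations (the assumed measurement model)
    var = (np.exp(log_sigma) * mu) ** 2 + 1e-12
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (S - mu) ** 2 / var)

fit = minimize(neg_log_lik, x0=[1.0, 1.0, np.log(0.1)], method="Nelder-Mead")
c_hat, n_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(c_hat, n_hat, sigma_hat)
```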
Flow Chamber System for the Statistical Evaluation of Bacterial Colonization on Materials
Menzel, Friederike; Conradi, Bianca; Rodenacker, Karsten; Gorbushina, Anna A.; Schwibbert, Karin
2016-01-01
Biofilm formation on materials leads to high costs in industrial processes, as well as in medical applications. This fact has stimulated interest in the development of new materials with improved surfaces to reduce bacterial colonization. Standardized tests relying on statistical evidence are indispensable to evaluate the quality and safety of these new materials. We describe here a flow chamber system for biofilm cultivation under controlled conditions with a total capacity for testing up to 32 samples in parallel. In order to quantify the surface colonization, bacterial cells were DAPI (4′,6-diamidino-2-phenylindole)-stained and examined with epifluorescence microscopy. More than 100 images of each sample were automatically taken and the surface coverage was estimated using the free open source software g'mic, followed by a precise statistical evaluation. Overview images of all gathered pictures were generated to dissect the colonization characteristics of the selected model organism Escherichia coli W3310 on different materials (glass and implant steel). With our approach, differences in bacterial colonization on different materials can be quantified in a statistically validated manner. This reliable test procedure will support the design of improved materials for medical, industrial, and environmental (subaquatic or subaerial) applications. PMID:28773891
NASA Astrophysics Data System (ADS)
Haagmans, G. G.; Verhagen, S.; Voûte, R. L.; Verbree, E.
2017-09-01
Since GPS tends to fail for indoor positioning purposes, alternative methods like indoor positioning systems (IPS) based on Bluetooth low energy (BLE) are developing rapidly. Generally, IPS are deployed in environments covered with obstacles such as furniture, walls, people and electronics influencing the signal propagation. The major factor influencing the system performance, and hence the ability to acquire optimal positioning results, is the geometry of the beacons. The geometry of the beacons is limited by the available infrastructure that can be deployed (number of beacons, base stations and tags), which leads to the following challenge: Given a limited number of beacons, where should they be placed in a specified indoor environment, such that the geometry contributes to optimal positioning results? This paper aims to propose a statistical model that is able to select the optimal configuration that satisfies the user requirements in terms of precision. The model requires the definition of a chosen 3D space (in our case 7 × 10 × 6 meters), the number of beacons, possible user tag locations and a performance threshold (e.g. required precision). For any given set of beacon and receiver locations, the precision and the internal and external reliability can be determined beforehand. As validation, the modeled precision has been compared with observed precision results. The measurements have been performed with an IPS of BlooLoc at a chosen set of user tag locations for a given geometric configuration. Eventually, the model is able to select the optimal geometric configuration out of millions of possible configurations based on a performance threshold (e.g. required precision).
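A small Python sketch of how positioning precision can be predicted from geometry alone, in the spirit of the model above: linearize range-type observations around the user location and propagate a per-measurement standard deviation through the least-squares design matrix. The beacon coordinates and the 1 m ranging noise are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

beacons = np.array([[0, 0, 3], [7, 0, 3], [0, 10, 3], [7, 10, 6]], float)  # assumed positions (m)
user = np.array([3.5, 5.0, 1.5])
sigma_range = 1.0                        # assumed ranging standard deviation (m)

diffs = user - beacons
ranges = np.linalg.norm(diffs, axis=1)
A = diffs / ranges[:, None]              # Jacobian of each range w.r.t. the user position
cov = sigma_range**2 * np.linalg.inv(A.T @ A)
print(np.sqrt(np.diag(cov)))             # predicted 1-sigma precision per axis
print(np.sqrt(np.trace(cov)))            # overall (PDOP-like) figure of merit
```

Evaluating this figure of merit over candidate beacon layouts and user locations is one way to rank configurations against a required-precision threshold before any hardware is installed.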
Tarone, Aaron M; Foran, David R
2011-01-01
Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies. © 2010 American Academy of Forensic Sciences.
Metal ion transport quantified by ICP-MS in intact cells
Figueroa, Julio A. Landero; Stiner, Cory A.; Radzyukevich, Tatiana L.; Heiny, Judith A.
2016-01-01
The use of ICP-MS to measure metal ion content in biological tissues offers a highly sensitive means to study metal-dependent physiological processes. Here we describe the application of ICP-MS to measure membrane transport of Rb and K ions by the Na,K-ATPase in mouse skeletal muscles and human red blood cells. The ICP-MS method provides greater precision and statistical power than possible with conventional tracer flux methods. The method is widely applicable to studies of other metal ion transporters and metal-dependent processes in a range of cell types and conditions. PMID:26838181
Spectrofluorimetric estimation of the new antiviral agent ledipasvir in the presence of sofosbuvir
NASA Astrophysics Data System (ADS)
Salama, Fathy M.; Attia, Khalid A.; Abouserie, Ahmed A.; El-Olemy, Ahmed; Abolmagd, Ebrahim
2018-02-01
A spectrofluorimetric method has been developed and validated for the selective quantitative determination of ledipasvir in the presence of sofosbuvir. In this method the native fluorescence of ledipasvir in ethanol at 405 nm was measured after excitation at 340 nm. The proposed method was validated according to ICH guidelines and showed high sensitivity, accuracy and precision. Furthermore, this method was successfully applied to the analysis of ledipasvir in its pharmaceutical dosage form without interference from sofosbuvir and other additives, and the results were statistically compared to a reported method, with no significant difference found.
Mass Conservation and Inference of Metabolic Networks from High-Throughput Mass Spectrometry Data
Bandaru, Pradeep; Bansal, Mukesh
2011-01-01
We present a step towards the metabolome-wide computational inference of cellular metabolic reaction networks from metabolic profiling data, such as mass spectrometry. The reconstruction is based on identification of irreducible statistical interactions among the metabolite activities using the ARACNE reverse-engineering algorithm and on constraining possible metabolic transformations to satisfy the conservation of mass. The resulting algorithms are validated on synthetic data from an abridged computational model of Escherichia coli metabolism. Precision rates upwards of 50% are routinely observed for identification of full metabolic reactions, and recalls upwards of 20% are also seen. PMID:21314454
VizieR Online Data Catalog: Fundamental parameters of Kepler stars (Silva Aguirre+, 2015)
NASA Astrophysics Data System (ADS)
Silva Aguirre, V.; Davies, G. R.; Basu, S.; Christensen-Dalsgaard, J.; Creevey, O.; Metcalfe, T. S.; Bedding, T. R.; Casagrande, L.; Handberg, R.; Lund, M. N.; Nissen, P. E.; Chaplin, W. J.; Huber, D.; Serenelli, A. M.; Stello, D.; van Eylen, V.; Campante, T. L.; Elsworth, Y.; Gilliland, R. L.; Hekker, S.; Karoff, C.; Kawaler, S. D.; Kjeldsen, H.; Lundkvist, M. S.
2016-02-01
Our sample has been extracted from the 77 exoplanet host stars presented in Huber et al. (2013, Cat. J/ApJ/767/127). We have made use of the full time-base of observations from the Kepler satellite to uniformly determine precise fundamental stellar parameters, including ages, for a sample of exoplanet host stars where high-quality asteroseismic data were available. We devised a Bayesian procedure flexible in its input and applied it to different grids of models to study systematics from input physics and extract statistically robust properties for all stars. (4 data files).
Effect of high altitude on blood glucose meter performance.
Fink, Kenneth S; Christensen, Dale B; Ellsworth, Allan
2002-01-01
Participation in high-altitude wilderness activities may expose persons to extreme environmental conditions, and for those with diabetes mellitus, euglycemia is important to ensure safe travel. We conducted a field assessment of the precision and accuracy of seven commonly used blood glucose meters while mountaineering on Mount Rainier, located in Washington State (elevation 14,410 ft). At various elevations each climber-subject used the randomly assigned device to measure the glucose level of capillary blood and three different concentrations of standardized control solutions, and a venous sample was also collected for later glucose analysis. Ordinary least squares regression was used to assess the effect of elevation and of other potential environmental covariates on the precision and accuracy of blood glucose meters. Elevation affects glucometer precision (p = 0.08), but the effect becomes less significant (p = 0.21) when adjusted for temperature and relative humidity. The overall effect of elevation was to underestimate glucose levels by approximately 1-2% (unadjusted) for each 1,000 ft gain in elevation. Blood glucose meter accuracy was affected by elevation (p = 0.03), temperature (p < 0.01), and relative humidity (p = 0.04) after adjustment for the other variables. The interaction between elevation and relative humidity had a meaningful but not statistically significant effect on accuracy (p = 0.07). Thus, elevation, temperature, and relative humidity affect blood glucose meter performance, and elevated glucose levels are more greatly underestimated at higher elevations. Further research will help to identify which blood glucose meters are best suited for specific environments.
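A small sketch of the kind of regression the study describes: meter error (percent difference from the laboratory value) modeled on elevation, temperature and relative humidity by ordinary least squares. All numbers below are simulated and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 120
elev_kft = rng.uniform(5, 14.4, n)          # elevation in thousands of feet
temp_c = rng.uniform(-10, 20, n)
rh = rng.uniform(10, 90, n)
# simulate roughly 1.5% underestimation per 1,000 ft plus temperature/humidity effects
err_pct = -1.5 * elev_kft + 0.2 * temp_c - 0.03 * rh + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), elev_kft, temp_c, rh])
beta, *_ = np.linalg.lstsq(X, err_pct, rcond=None)
print(dict(zip(["intercept", "per_1000ft", "per_degC", "per_pctRH"], beta)))
```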
Nasso, Sara; Goetze, Sandra; Martens, Lennart
2015-09-04
Selected reaction monitoring (SRM) MS is a highly selective and sensitive technique to quantify protein abundances in complex biological samples. To enhance the pace of large SRM studies, a validated, robust method to fully automate absolute quantification and to substitute for interactive evaluation would be valuable. To address this demand, we present Ariadne, a Matlab software package. To quantify monitored targets, Ariadne exploits metadata imported from the transition lists, and targets can be filtered according to mProphet output. Signal processing and statistical learning approaches are combined to compute peptide quantifications. To robustly estimate absolute abundances, the external calibration curve method is applied, ensuring linearity over the measured dynamic range. Ariadne was benchmarked against mProphet and Skyline by comparing its quantification performance on three different dilution series, featuring either noisy/smooth traces without background or smooth traces with complex background. Results, evaluated as efficiency, linearity, accuracy, and precision of quantification, showed that Ariadne's performance is independent of data smoothness and the presence of a complex background, that Ariadne outperforms mProphet on the noisier data set, and that it improves Skyline's accuracy and precision 2-fold for the lowest-abundance dilution with complex background. Remarkably, Ariadne could statistically distinguish from each other all different abundances, discriminating dilutions as low as 0.1 and 0.2 fmol. These results suggest that Ariadne offers reliable and automated analysis of large-scale SRM differential expression studies.
PHYSICAL PROPERTIES OF THE 0.94-DAY PERIOD TRANSITING PLANETARY SYSTEM WASP-18
DOE Office of Scientific and Technical Information (OSTI.GOV)
Southworth, John; Anderson, D. R.; Maxted, P. F. L.
2009-12-10
We present high-precision photometry of five consecutive transits of WASP-18, an extrasolar planetary system with one of the shortest orbital periods known. Through the use of telescope defocusing we achieve a photometric precision of 0.47-0.83 mmag per observation over complete transit events. The data are analyzed using the JKTEBOP code and three different sets of stellar evolutionary models. We find the mass and radius of the planet to be M_b = 10.43 ± 0.30 ± 0.24 M_Jup and R_b = 1.165 ± 0.055 ± 0.014 R_Jup (statistical and systematic errors), respectively. The systematic errors in the orbital separation and the stellar and planetary masses, arising from the use of theoretical predictions, are of a similar size to the statistical errors and set a limit on our understanding of the WASP-18 system. We point out that seven of the nine known massive transiting planets (M_b > 3 M_Jup) have eccentric orbits, whereas significant orbital eccentricity has been detected for only four of the 46 less-massive planets. This may indicate that there are two different populations of transiting planets, but could also be explained by observational biases. Further radial velocity observations of low-mass planets will make it possible to choose between these two scenarios.
Redundant Array Configurations for 21 cm Cosmology
NASA Astrophysics Data System (ADS)
Dillon, Joshua S.; Parsons, Aaron R.
2016-08-01
Realizing the potential of 21 cm tomography to statistically probe the intergalactic medium before and during the Epoch of Reionization requires large telescopes and precise control of systematics. Next-generation telescopes are now being designed and built to meet these challenges, drawing lessons from first-generation experiments that showed the benefits of densely packed, highly redundant arrays—in which the same mode on the sky is sampled by many antenna pairs—for achieving high sensitivity, precise calibration, and robust foreground mitigation. In this work, we focus on the Hydrogen Epoch of Reionization Array (HERA) as an interferometer with a dense, redundant core designed following these lessons to be optimized for 21 cm cosmology. We show how modestly supplementing or modifying a compact design like HERA’s can still deliver high sensitivity while enhancing strategies for calibration and foreground mitigation. In particular, we compare the imaging capability of several array configurations, both instantaneously (to address instrumental and ionospheric effects) and with rotation synthesis (for foreground removal). We also examine the effects that configuration has on calibratability using instantaneous redundancy. We find that improved imaging with sub-aperture sampling via “off-grid” antennas and increased angular resolution via far-flung “outrigger” antennas is possible with a redundantly calibratable array configuration.
Liu, Jen-Pei; Lu, Li-Tien; Liao, C T
2009-09-01
Intermediate precision is one of the most important characteristics for evaluation of precision in assay validation. The current methods for evaluation of within-device precision recommended by the Clinical Laboratory Standard Institute (CLSI) guideline EP5-A2 are based on the point estimator. On the other hand, in addition to point estimators, confidence intervals can provide a range for the within-device precision with a probability statement. Therefore, we suggest a confidence interval approach for assessment of the within-device precision. Furthermore, under the two-stage nested random-effects model recommended by the approved CLSI guideline EP5-A2, in addition to the current Satterthwaite's approximation and the modified large sample (MLS) methods, we apply the technique of generalized pivotal quantities (GPQ) to derive the confidence interval for the within-device precision. The data from the approved CLSI guideline EP5-A2 illustrate the applications of the confidence interval approach and comparison of results between the three methods. Results of a simulation study on the coverage probability and expected length of the three methods are reported. The proposed method of the GPQ-based confidence intervals is also extended to consider the between-laboratories variation for precision assessment.
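For readers unfamiliar with the setting, a simplified Python sketch of a Satterthwaite-type confidence interval for within-device precision under a one-factor (day/replicate) nested design, rather than the full EP5-A2 two-run-per-day layout; the simulated standard deviations are arbitrary, and the GPQ and MLS alternatives discussed in the paper are not shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
days, reps = 20, 2
day_effect = rng.normal(0, 1.0, days)                                  # between-day SD 1.0, simulated
data = 100 + day_effect[:, None] + rng.normal(0, 1.5, (days, reps))    # repeatability SD 1.5, simulated

ms_between = reps * np.sum((data.mean(axis=1) - data.mean()) ** 2) / (days - 1)
ms_within = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (days * (reps - 1))

# Within-device variance as a linear combination of the ANOVA mean squares
a1, a2 = 1.0 / reps, (reps - 1.0) / reps
s2_wd = a1 * ms_between + a2 * ms_within

# Satterthwaite degrees of freedom for that combination, then a chi-square interval
df = s2_wd**2 / ((a1 * ms_between) ** 2 / (days - 1) +
                 (a2 * ms_within) ** 2 / (days * (reps - 1)))
lo = np.sqrt(df * s2_wd / stats.chi2.ppf(0.975, df))
hi = np.sqrt(df * s2_wd / stats.chi2.ppf(0.025, df))
print(np.sqrt(s2_wd), (lo, hi))            # within-device SD and its approximate 95% CI
```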
Identifying Galactic Cosmic Ray Origins With Super-TIGER
NASA Technical Reports Server (NTRS)
deNolfo, Georgia; Binns, W. R.; Israel, M. H.; Christian, E. R.; Mitchell, J. W.; Hams, T.; Link, J. T.; Sasaki, M.; Labrador, A. W.; Mewaldt, R. A.;
2009-01-01
Super-TIGER (Super Trans-Iron Galactic Element Recorder) is a new long-duration balloon-borne instrument designed to test and clarify an emerging model of cosmic-ray origins and models for the atomic processes by which nuclei are selected for acceleration. A sensitive test of the origin of cosmic rays is the measurement of ultra-heavy elemental abundances (Z ≥ 30). Super-TIGER is a large-area (5 m²) instrument designed to measure the elements in the interval 30 ≤ Z ≤ 42 with individual-element resolution and high statistical precision, and to make exploratory measurements through Z = 60. It will also measure with high statistical accuracy the energy spectra of the more abundant elements in the interval 14 ≤ Z ≤ 30 at energies 0.8 ≤ E ≤ 10 GeV/nucleon. These spectra will give a sensitive test of the hypothesis that microquasars or other sources could superpose spectral features on the otherwise smooth energy spectra previously measured with less statistical accuracy. Super-TIGER builds on the heritage of the smaller TIGER, which produced the first well-resolved measurements of the elemental abundances of gallium (Z = 31), germanium (Z = 32), and selenium (Z = 34). We present the Super-TIGER design, schedule, and progress to date, and discuss the relevance of ultra-heavy measurements to cosmic-ray origins.
Lotfy, Hayam Mahmoud; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom
2014-05-21
Two smart and novel spectrophotometric methods, namely absorbance subtraction (AS) and amplitude modulation (AM), were developed and validated for the determination of a binary mixture of timolol maleate (TIM) and dorzolamide hydrochloride (DOR) in the presence of benzalkonium chloride without prior separation, using a unified regression equation. Additionally, simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for simultaneous determination of the binary mixture, namely simultaneous ratio subtraction (SRS), ratio difference (RD), ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), the constant multiplication method (CM) and mean centering of ratio spectra (MCR). The proposed spectrophotometric procedures do not require any separation steps. Accuracy, precision and linearity ranges of the proposed methods were determined, and the specificity was assessed by analyzing synthetic mixtures of both drugs. They were applied to their pharmaceutical formulation and the results obtained were statistically compared to those of a reported spectrophotometric method. The statistical comparison showed that there is no significant difference between the proposed methods and the reported one regarding both accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.
Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colon O
2011-01-01
The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. Finally, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, a change that requires a less precise substitution in this subset of participants. A score function that is based on the U-statistic can address these issues of (1) intercurrent SCEs and (2) response variable ascertainments that use different measurements of different precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
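A generic Python illustration of a U-statistic-style score in this spirit: every treated/control pair is compared first on the clinical event, and only if neither patient had the event on the continuous endpoint change. This is a sketch of the idea under simulated data, not the authors' exact score function or power formulas.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(n, benefit):
    died = rng.random(n) < 0.15
    change = rng.normal(benefit, 1.0, n)
    change[died] = np.nan                     # intervening event precludes the measurement
    return died, change

t_died, t_chg = simulate(60, 0.4)             # treated arm, simulated benefit
c_died, c_chg = simulate(60, 0.0)             # control arm

score = 0.0
for i in range(len(t_died)):
    for j in range(len(c_died)):
        if t_died[i] != c_died[j]:
            score += -1 if t_died[i] else 1    # survival outranks the continuous measure
        elif not t_died[i]:
            score += np.sign(t_chg[i] - c_chg[j])
U = score / (len(t_died) * len(c_died))        # mean pairwise score in [-1, 1]
print(U)
```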
A precise measurement of the B_s^0 meson oscillation frequency.
Aaij, R; Abellán Beteta, C; Adeva, B; Adinolfi, M; Affolder, A; Ajaltouni, Z; Akar, S; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Anderson, J; Andreassi, G; Andreotti, M; Andrews, J E; Appleby, R B; Aquines Gutierrez, O; Archilli, F; d'Argent, P; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Bachmann, S; Back, J J; Badalov, A; Baesso, C; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Batozskaya, V; Battista, V; Bay, A; Beaucourt, L; Beddow, J; Bedeschi, F; Bediaga, I; Bel, L J; Bellee, V; Belloli, N; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Benton, J; Berezhnoy, A; Bernet, R; Bertolin, A; Bettler, M-O; van Beuzekom, M; Bien, A; Bifani, S; Billoir, P; Bird, T; Birnkraut, A; Bizzeti, A; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borghi, S; Borsato, M; Bowcock, T J V; Bowen, E; Bozzi, C; Braun, S; Britsch, M; Britton, T; Brodzicka, J; Brook, N H; Buchanan, E; Bursche, A; Buytaert, J; Cadeddu, S; Calabrese, R; Calvi, M; Calvo Gomez, M; Campana, P; Campora Perez, D; Capriotti, L; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carniti, P; Carson, L; Carvalho Akiba, K; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cauet, Ch; Cavallero, G; Cenci, R; Charles, M; Charpentier, Ph; Chefdeville, M; Chen, S; Cheung, S-F; Chiapolini, N; Chrzaszcz, M; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coco, V; Cogan, J; Cogneras, E; Cogoni, V; Cojocariu, L; Collazuol, G; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombes, M; Coquereau, S; Corti, G; Corvo, M; Couturier, B; Cowan, G A; Craik, D C; Crocombe, A; Cruz Torres, M; Cunliffe, S; Currie, R; D'Ambrosio, C; Dall'Occo, E; Dalseno, J; David, P N Y; Davis, A; De Aguiar Francisco, O; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Simone, P; Dean, C-T; Decamp, D; Deckenhoff, M; Del Buono, L; Déléage, N; Demmer, M; Derkach, D; Deschamps, O; Dettori, F; Dey, B; Di Canto, A; Di Ruscio, F; Dijkstra, H; Donleavy, S; Dordei, F; Dorigo, M; Dosil Suárez, A; Dossett, D; Dovbnya, A; Dreimanis, K; Dufour, L; Dujany, G; Dupertuis, F; Durante, P; Dzhelyadin, R; Dziurda, A; Dzyuba, A; Easo, S; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; El Rifai, I; Elsasser, Ch; Ely, S; Esen, S; Evans, H M; Evans, T; Falabella, A; Färber, C; Farley, N; Farry, S; Fay, R; Ferguson, D; Fernandez Albor, V; Ferrari, F; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fohl, K; Fol, P; Fontana, M; Fontanelli, F; C Forshaw, D; Forty, R; Frank, M; Frei, C; Frosini, M; Fu, J; Furfaro, E; Gallas Torreira, A; Galli, D; Gallorini, S; Gambetta, S; Gandelman, M; Gandini, P; Gao, Y; García Pardiñas, J; Garra Tico, J; Garrido, L; Gascon, D; Gaspar, C; Gauld, R; Gavardi, L; Gazzoni, G; Gerick, D; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianì, S; Gibson, V; Girard, O G; Giubega, L; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gotti, C; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graverini, E; Graziani, G; Grecu, A; Greening, E; Gregson, S; Griffith, P; Grillo, L; Grünberg, O; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Hadavizadeh, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hall, S; Hamilton, B; Han, X; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; He, J; Head, 
T; Heijne, V; Heister, A; Hennessy, K; Henrard, P; Henry, L; Hernando Morata, J A; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hoballah, M; Hombach, C; Hulsbergen, W; Humair, T; Hussain, N; Hutchcroft, D; Hynds, D; Idzik, M; Ilten, P; Jacobsson, R; Jaeger, A; Jalocha, J; Jans, E; Jawahery, A; Jing, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kandybei, S; Kanso, W; Karacson, M; Karbach, T M; Karodia, S; Kecke, M; Kelsey, M; Kenyon, I R; Kenzie, M; Ketel, T; Khanji, B; Khurewathanakul, C; Kirn, T; Klaver, S; Klimaszewski, K; Kochebina, O; Kolpin, M; Komarov, I; Koopman, R F; Koppenburg, P; Kozeiha, M; Kravchuk, L; Kreplin, K; Kreps, M; Krocker, G; Krokovny, P; Kruse, F; Krzemien, W; Kucewicz, W; Kucharczyk, M; Kudryavtsev, V; K Kuonen, A; Kurek, K; Kvaratskheliya, T; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lanfranchi, G; Langenbruch, C; Langhans, B; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Lees, J-P; Lefèvre, R; Leflat, A; Lefrançois, J; Lemos Cid, E; Leroy, O; Lesiak, T; Leverington, B; Li, Y; Likhomanenko, T; Liles, M; Lindner, R; Linn, C; Lionetto, F; Liu, B; Liu, X; Loh, D; Longstaff, I; Lopes, J H; Lucchesi, D; Lucio Martinez, M; Luo, H; Lupato, A; Luppi, E; Lupton, O; Lusardi, N; Lusiani, A; Machefert, F; Maciuc, F; Maev, O; Maguire, K; Malde, S; Malinin, A; Manca, G; Mancinelli, G; Manning, P; Mapelli, A; Maratas, J; Marchand, J F; Marconi, U; Marin Benito, C; Marino, P; Marks, J; Martellotti, G; Martin, M; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massafferri, A; Matev, R; Mathad, A; Mathe, Z; Matteuzzi, C; Mauri, A; Maurin, B; Mazurov, A; McCann, M; McCarthy, J; McNab, A; McNulty, R; Meadows, B; Meier, F; Meissner, M; Melnychuk, D; Merk, M; Michielin, E; Milanes, D A; Minard, M-N; Mitzel, D S; Molina Rodriguez, J; Monroy, I A; Monteil, S; Morandin, M; Morawski, P; Mordà, A; Morello, M J; Moron, J; Morris, A B; Mountain, R; Muheim, F; Müller, D; Müller, J; Müller, K; Müller, V; Mussini, M; Muster, B; Naik, P; Nakada, T; Nandakumar, R; Nandi, A; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, A D; Nguyen, T D; Nguyen-Mau, C; Niess, V; Niet, R; Nikitin, N; Nikodem, T; Novoselov, A; O'Hanlon, D P; Oblakowska-Mucha, A; Obraztsov, V; Ogilvy, S; Okhrimenko, O; Oldeman, R; Onderwater, C J G; Osorio Rodrigues, B; Otalora Goicochea, J M; Otto, A; Owen, P; Oyanguren, A; Palano, A; Palombo, F; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Pappalardo, L L; Pappenheimer, C; Parkes, C; Passaleva, G; Patel, G D; Patel, M; Patrignani, C; Pearce, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perret, P; Pescatore, L; Petridis, K; Petrolini, A; Petruzzo, M; Picatoste Olloqui, E; Pietrzyk, B; Pilař, T; Pinci, D; Pistone, A; Piucci, A; Playfer, S; Plo Casasus, M; Poikela, T; Polci, F; Poluektov, A; Polyakov, I; Polycarpo, E; Popov, A; Popov, D; Popovici, B; Potterat, C; Price, E; Price, J D; Prisciandaro, J; Pritchard, A; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, W; Quagliani, R; Rachwal, B; Rademacker, J H; Rama, M; Rangel, M S; Raniuk, I; Rauschmayr, N; Raven, G; Redi, F; Reichert, S; Reid, M M; Dos Reis, A C; Ricciardi, S; Richards, S; Rihl, M; Rinnert, K; Rives Molina, V; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Lopez, J A; Rodriguez Perez, P; Roiser, S; Romanovsky, V; Romero Vidal, A; W Ronayne, J; Rotondo, M; Rouvinet, J; Ruf, T; Ruiz Valls, P; Saborido Silva, J J; Sagidova, N; Sail, P; Saitta, B; Salustino Guimaraes, V; Sanchez Mayordomo, C; Sanmartin 
Sedes, B; Santacesaria, R; Santamarina Rios, C; Santimaria, M; Santovetti, E; Sarti, A; Satriano, C; Satta, A; Saunders, D M; Savrina, D; Schael, S; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmelzer, T; Schmidt, B; Schneider, O; Schopper, A; Schubiger, M; Schune, M-H; Schwemmer, R; Sciascia, B; Sciubba, A; Semennikov, A; Sergi, A; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, V; Shires, A; Siddi, B G; Silva Coutinho, R; Silva de Oliveira, L; Simi, G; Sirendi, M; Skidmore, N; Skwarnicki, T; Smith, E; Smith, E; Smith, I T; Smith, J; Smith, M; Snoek, H; Sokoloff, M D; Soler, F J P; Soomro, F; Souza, D; Souza De Paula, B; Spaan, B; Spradlin, P; Sridharan, S; Stagni, F; Stahl, M; Stahl, S; Stefkova, S; Steinkamp, O; Stenyakin, O; Stevenson, S; Stoica, S; Stone, S; Storaci, B; Stracka, S; Straticiuc, M; Straumann, U; Sun, L; Sutcliffe, W; Swientek, K; Swientek, S; Syropoulos, V; Szczekowski, M; Szczypka, P; Szumlak, T; T'Jampens, S; Tayduganov, A; Tekampe, T; Teklishyn, M; Tellarini, G; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Todd, J; Tolk, S; Tomassetti, L; Tonelli, D; Topp-Joergensen, S; Torr, N; Tournefier, E; Tourneur, S; Trabelsi, K; Tran, M T; Tresch, M; Trisovic, A; Tsaregorodtsev, A; Tsopelas, P; Tuning, N; Ukleja, A; Ustyuzhanin, A; Uwer, U; Vacca, C; Vagnoni, V; Valenti, G; Vallier, A; Vazquez Gomez, R; Vazquez Regueiro, P; Vázquez Sierra, C; Vecchi, S; van Veghel, M; Velthuis, J J; Veltri, M; Veneziano, G; Vesterinen, M; Viaud, B; Vieira, D; Vieites Diaz, M; Vilasis-Cardona, X; Vollhardt, A; Volyanskyy, D; Voong, D; Vorobyev, A; Vorobyev, V; Voß, C; de Vries, J A; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wandernoth, S; Wang, J; Ward, D R; Watson, N K; Websdale, D; Weiden, A; Whitehead, M; Wilkinson, G; Wilkinson, M; Williams, M; Williams, M P; Williams, M; Williams, T; Wilson, F F; Wimberley, J; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wright, S; Wyllie, K; Xie, Y; Xu, Z; Yang, Z; Yu, J; Yuan, X; Yushchenko, O; Zangoli, M; Zavertyaev, M; Zhang, L; Zhang, Y; Zhelezov, A; Zhokhov, A; Zhong, L; Zhukov, V; Zucchelli, S
2016-01-01
The oscillation frequency, Δm_s, of B_s^0 mesons is measured using semileptonic decays with a [Formula: see text] or [Formula: see text] meson in the final state. The data sample corresponds to 3.0 fb⁻¹ of pp collisions, collected by the LHCb experiment at centre-of-mass energies √s = 7 and 8 TeV. A combination of the two decay modes gives [Formula: see text], where the first uncertainty is statistical and the second is systematic. This is the most precise single measurement of this parameter. It is consistent with the current world average and has similar precision.
Biomarker development in the precision medicine era: lung cancer as a case study.
Vargas, Ashley J; Harris, Curtis C
2016-08-01
Precision medicine relies on validated biomarkers with which to better classify patients by their probable disease risk, prognosis and/or response to treatment. Although affordable 'omics'-based technology has enabled faster identification of putative biomarkers, the validation of biomarkers is still stymied by low statistical power and poor reproducibility of results. This Review summarizes the successes and challenges of using different types of molecule as biomarkers, using lung cancer as a key illustrative example. Efforts at the national level of several countries to tie molecular measurement of samples to patient data via electronic medical records are the future of precision medicine research.
Turbulence statistics with quantified uncertainty in cold-wall supersonic channel flow
NASA Astrophysics Data System (ADS)
Ulerich, Rhys; Moser, Robert D.
2012-11-01
To investigate compressibility effects in wall-bounded turbulence, a series of direct numerical simulations of compressible channel flow with isothermal (cold) walls have been conducted. All combinations of Re = {3000, 5000} and Ma = {0.1, 0.5, 1.5, 3.0} have been simulated, where the Reynolds and Mach numbers are based on bulk velocity and sound speed at the wall temperature. Turbulence statistics with precisely quantified uncertainties computed from these simulations will be presented and are being made available in a public database at http://turbulence.ices.utexas.edu/. The simulations were performed using a new pseudo-spectral code called Suzerain, which was designed to efficiently produce high quality data on compressible, wall-bounded turbulent flows using a semi-implicit Fourier/B-spline numerical formulation. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].
Mani, D R; Abbatiello, Susan E; Carr, Steven A
2012-01-01
Multiple reaction monitoring mass spectrometry (MRM-MS) with stable isotope dilution (SID) is increasingly becoming a widely accepted assay for the quantification of proteins and peptides. These assays have shown great promise in relatively high throughput verification of candidate biomarkers. While the use of MRM-MS assays is well established in the small molecule realm, their introduction and use in proteomics is relatively recent. As such, statistical and computational methods for the analysis of MRM-MS data from proteins and peptides are still being developed. Based on our extensive experience with analyzing a wide range of SID-MRM-MS data, we set forth a methodology for analysis that encompasses significant aspects ranging from data quality assessment, assay characterization including calibration curves, limits of detection (LOD) and quantification (LOQ), and measurement of intra- and interlaboratory precision. We draw upon publicly available seminal datasets to illustrate our methods and algorithms.
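A short Python sketch (not the authors' pipeline) of one common way to characterize such an assay from a dilution series: fit a linear calibration curve and derive LOD/LOQ from the residual scatter using the 3.3·σ/slope and 10·σ/slope conventions. The concentrations and responses below are simulated.

```python
import numpy as np

conc = np.array([0.1, 0.5, 1, 5, 10, 50, 100.0])            # fmol on column, simulated levels
resp = 0.8 * conc + np.random.default_rng(3).normal(0, 0.4, conc.size)   # peak-area ratio

slope, intercept = np.polyfit(conc, resp, 1)                 # calibration curve
resid_sd = np.std(resp - (slope * conc + intercept), ddof=2)

lod = 3.3 * resid_sd / slope
loq = 10.0 * resid_sd / slope
print(f"slope={slope:.3f}  LOD={lod:.2f} fmol  LOQ={loq:.2f} fmol")
```

In practice the same curve also supplies the back-calculated concentrations used to assess intra- and interlaboratory precision at each level.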
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping; Nadler, Walder; Grassberger, Peter
2005-07-01
The scaling behavior of randomly branched polymers in a good solvent is studied in two to nine dimensions, modeled by lattice animals on simple hypercubic lattices. For the simulations, we use a biased sequential sampling algorithm with re-sampling, similar to the pruned-enriched Rosenbluth method (PERM) used extensively for linear polymers. We obtain high statistics of animals with up to several thousand sites in all dimensions 2 ⩽ d ⩽ 9. The partition sum (number of different animals) and gyration radii are estimated. In all dimensions we verify the Parisi-Sourlas prediction, and we verify all exactly known critical exponents in dimensions 2, 3, 4, and ⩾8. In addition, we present the hitherto most precise estimates for growth constants in d ⩾ 3. For clusters with one site attached to an attractive surface, we verify the superuniversality of the cross-over exponent at the adsorption transition predicted by Janssen and Lyssy.
NASA Astrophysics Data System (ADS)
Xie, Yanan; Zhou, Mingliang; Pan, Dengke
2017-10-01
The forward-scattering model is introduced to describe the response of the normalized radar cross section (NRCS) of precipitation observed with synthetic aperture radar (SAR). Since the distribution of near-surface rainfall is related to the near-surface rainfall rate and a horizontal distribution factor, a retrieval algorithm called modified regression empirical and model-oriented statistical (M-M), based on Volterra integration theory, is proposed. Compared with the model-oriented statistical and Volterra integration (MOSVI) algorithm, the biggest difference is that the M-M algorithm is based on the modified regression empirical algorithm rather than a linear regression formula to retrieve the near-surface rainfall rate. Half of the empirical parameters in the weighted integration are eliminated, and a smaller average relative error is obtained when the rainfall rate is less than 100 mm/h. Therefore, the algorithm proposed in this paper can obtain high-precision rainfall information.
NASA Astrophysics Data System (ADS)
Renne, Paul R.; Fulford, Madeleine M.; Busby-Spera, Cathy
1991-03-01
Laser probe 40Ar/39Ar analyses of individual sanidine grains from four tuffs in the alluvial Late Cretaceous (Campanian) El Gallo Formation yield statistically distinct mean dates ranging from 74.87±0.05 Ma to 73.59±0.09 Ma. The exceptional precision of these dates permits calculation of statistically significant sediment accumulation rates that are much higher than passive sediment loading would cause, implying rapid tectonically induced subsidence. The dates bracket tightly the age of important dinosaur and mammalian faunas previously reported from the El Gallo Formation. The dates support an age less than 73 Ma for the Campanian/Maastrichtian stage boundary, younger than indicated by several currently used time scales. Further application of the single grain 40Ar/39Ar technique may be expected to greatly benefit stratigraphic studies of Mesozoic sedimentary basins and contribute to calibration of biostratigraphic and magnetostratigraphic time scales.
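A quick worked sketch of how an accumulation rate and its uncertainty follow from two high-precision dates bracketing a stratigraphic interval. The 120 m thickness is a made-up illustrative number, not a value from the paper; only the two quoted dates are taken from the abstract.

```python
import numpy as np

t_lower, s_lower = 74.87, 0.05     # Ma, older tuff (from the abstract)
t_upper, s_upper = 73.59, 0.09     # Ma, younger tuff (from the abstract)
thickness_m = 120.0                # stratigraphic separation, assumed for illustration

dt = t_lower - t_upper
s_dt = np.hypot(s_lower, s_upper)                  # uncertainty on the time interval
rate = thickness_m / dt                            # accumulation rate, m per Myr
s_rate = rate * s_dt / dt                          # first-order error propagation
print(f"{rate:.0f} ± {s_rate:.0f} m/Myr over {dt:.2f} ± {s_dt:.2f} Myr")
```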
NASA Astrophysics Data System (ADS)
Abdellatef, Hisham E.
2007-04-01
Picric acid, bromocresol green, bromothymol blue, cobalt thiocyanate and molybdenum(V) thiocyanate have been tested as spectrophotometric reagents for the determination of disopyramide and irbesartan. Reaction conditions have been optimized to obtain coloured complexes of higher sensitivity and longer stability. The absorbance of the ion-pair complexes formed was found to increase linearly with increasing concentrations of disopyramide and irbesartan, as corroborated by the correlation coefficient values. The developed methods have been successfully applied for the determination of disopyramide and irbesartan in bulk drugs and pharmaceutical formulations. The common excipients and additives did not interfere in their determination. The results obtained by the proposed methods have been statistically compared by means of the Student t-test and the variance-ratio F-test. The validity was assessed by applying the standard addition technique. The results were compared statistically with the official or reference methods, showing good agreement with high precision and accuracy.
Implications of MOLA Global Roughness, Statistics, and Topography
NASA Technical Reports Server (NTRS)
Aharonson, O.; Zuber, M. T.; Neumann, G. A.
1999-01-01
New insights are emerging as the ongoing high-quality measurements of the Martian surface topography by the Mars Orbiter Laser Altimeter (MOLA) on board the Mars Global Surveyor (MGS) spacecraft increase in coverage, resolution, and diversity. For the first time, a global characterization of the statistical properties of topography is possible. The data were collected during the aerobraking hiatus, science phasing, and mapping orbits of MGS, and have a resolution of 300-400 m along track, a range resolution of 37.5 cm, a range precision of 1-10 m for surface slopes up to 30 deg., and an absolute accuracy of topography of 13 m. The spacecraft's orbit inclination dictates that nadir observations have latitude coverage of about 87.1S to 87.1N; the addition of observations obtained during a period of off-nadir pointing over the north pole extended coverage to 90N. Additional information is contained in the original extended abstract.
Characterization and photometric performance of the Hyper Suprime-Cam Software Pipeline
NASA Astrophysics Data System (ADS)
Huang, Song; Leauthaud, Alexie; Murata, Ryoma; Bosch, James; Price, Paul; Lupton, Robert; Mandelbaum, Rachel; Lackner, Claire; Bickerton, Steven; Miyazaki, Satoshi; Coupon, Jean; Tanaka, Masayuki
2018-01-01
The Subaru Strategic Program (SSP) is an ambitious multi-band survey using the Hyper Suprime-Cam (HSC) on the Subaru telescope. The Wide layer of the SSP is both wide and deep, reaching a detection limit of i ˜ 26.0 mag. At these depths, it is challenging to achieve accurate, unbiased, and consistent photometry across all five bands. The HSC data are reduced using a pipeline that builds on the prototype pipeline for the Large Synoptic Survey Telescope. We have developed a Python-based, flexible framework to inject synthetic galaxies into real HSC images, called SynPipe. Here we explain the design and implementation of SynPipe and generate a sample of synthetic galaxies to examine the photometric performance of the HSC pipeline. For stars, we achieve 1% photometric precision at i ˜ 19.0 mag and 6% precision at i ˜ 25.0 mag in the i band (corresponding to statistical scatters of ˜0.01 and ˜0.06 mag respectively). For synthetic galaxies with single-Sérsic profiles, forced CModel photometry achieves 13% photometric precision at i ˜ 20.0 mag and 18% precision at i ˜ 25.0 mag in the i band (corresponding to statistical scatters of ˜0.15 and ˜0.22 mag respectively). We show that both forced point spread function and CModel photometry yield unbiased color estimates that are robust to seeing conditions. We identify several caveats that apply to the version of the HSC pipeline used for the first public HSC data release (DR1) that need to be taken into consideration. First, the degree to which an object is blended with other objects impacts the overall photometric performance. This is especially true for point sources. Highly blended objects tend to have larger photometric uncertainties, systematically underestimated fluxes, and slightly biased colors. Secondly, >20% of stars at 22.5 < i < 25.0 mag can be misclassified as extended objects. Thirdly, the current CModel algorithm tends to strongly underestimate the half-light radius and ellipticity of galaxies with i > 21.5 mag.
NASA Astrophysics Data System (ADS)
Riad, Safaa M.; El-Rahman, Mohamed K. Abd; Fawaz, Esraa M.; Shehata, Mostafa A.
2015-06-01
Three sensitive, selective, and precise stability indicating spectrophotometric methods for the determination of the X-ray contrast agent, diatrizoate sodium (DTA) in the presence of its acidic degradation product (highly cytotoxic 3,5-diamino metabolite) and in pharmaceutical formulation, were developed and validated. The first method is ratio difference, the second one is the bivariate method, and the third one is the dual wavelength method. The calibration curves for the three proposed methods are linear over a concentration range of 2-24 μg/mL. The selectivity of the proposed methods was tested using laboratory prepared mixtures. The proposed methods have been successfully applied to the analysis of DTA in pharmaceutical dosage forms without interference from other dosage form additives. The results were statistically compared with the official US pharmacopeial method. No significant difference for either accuracy or precision was observed.
sPHENIX: The next generation heavy ion detector at RHIC
NASA Astrophysics Data System (ADS)
Campbell, Sarah;
2017-04-01
sPHENIX is a new collaboration and future detector project at Brookhaven National Laboratory's Relativistic Heavy Ion Collider (RHIC). It seeks to answer fundamental questions on the nature of the quark gluon plasma (QGP), including its coupling strength and temperature dependence, by using a suite of precision jet and upsilon measurements that probe different length scales of the QGP. This is possible with full-acceptance (|η| < 1 and 0-2π in φ) electromagnetic and hadronic calorimeters and precision tracking enabled by a 1.5 T superconducting magnet. With the increased luminosity afforded by accelerator upgrades, sPHENIX is going to perform high-statistics measurements extending the kinematic reach at RHIC to overlap the LHC's. This overlap is going to facilitate a better understanding of the role of temperature, density and parton virtuality in QGP dynamics and, specifically, jet quenching. This paper focuses on key future measurements and the current state of the sPHENIX project.
Prostate biopsies assisted by comanipulated probe-holder: first in man.
Vitrani, Marie-Aude; Baumann, Michael; Reversat, David; Morel, Guillaume; Moreau-Gaudry, Alexandre; Mozer, Pierre
2016-06-01
A comanipulator for assisting endorectal prostate biopsies is evaluated through a first-in-man clinical trial. This lightweight system, based on conventional robotic components, possesses six degrees of freedom. It uses three electric motors and three brakes. It features a free mode, where its low friction and inertia allow for natural manipulation of the probe, and a locked mode, exhibiting both a very low stiffness and a high steady-state precision. Clinical trials focusing on the free mode and the locked mode of the robot are presented. The objective was to evaluate the practical usability and performance of the robot during clinical procedures. A research protocol for a prospective randomized clinical trial has been designed; its specific goal was to compare the accuracy of biopsies performed with and without the assistance of the comanipulator. This comparison was carried out across the first 10 patients included in the trial. Results show a statistically significant increase in precision.
NASA Astrophysics Data System (ADS)
Lu, Shan; Zhang, Hanmo
2016-01-01
To meet the requirement of autonomous orbit determination, this paper proposes a fast curve-fitting method based on Earth ultraviolet features to obtain an accurate Earth vector direction and thereby achieve high-precision autonomous navigation. Firstly, combining the stable character of Earth's ultraviolet radiance with atmospheric radiative transfer modelling software, the paper simulates the Earth ultraviolet radiation model at different times and chooses a proper observation band. Then a fast, improved edge-extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which both eliminates noise efficiently and extracts Earth ultraviolet limb features accurately. Earth-centroid locations on the simulated images are then estimated via least-squares fitting using part of the limb edges. Taking advantage of the estimated Earth vector direction and Earth distance, an extended Kalman filter (EKF) is finally applied to realize autonomous navigation. Experimental results indicate that the proposed method achieves sub-pixel Earth-centroid location estimation and greatly enhances autonomous celestial navigation precision.
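As an illustration of the limb-fitting step, the following sketch fits a circle to hypothetical limb edge pixels by algebraic least squares to recover the Earth-centroid location. It is not the authors' implementation; the synthetic limb coordinates, radius, and noise level are assumptions.

    import numpy as np

    def fit_circle(x, y):
        """Algebraic (Kasa) least-squares circle fit to limb edge points.

        Solves x^2 + y^2 = 2*a*x + 2*b*y + c in the least-squares sense
        and returns the centre (a, b) and the radius."""
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        rhs = x**2 + y**2
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return a, b, np.sqrt(c + a**2 + b**2)

    # Synthetic partial limb: arc of a circle (centre 512.3, 498.7; radius 400 px)
    # plus half-pixel noise, standing in for the Sobel/LBP edge output.
    rng = np.random.default_rng(0)
    theta = np.linspace(0.2, 1.4, 200)
    x = 512.3 + 400 * np.cos(theta) + rng.normal(0, 0.5, theta.size)
    y = 498.7 + 400 * np.sin(theta) + rng.normal(0, 0.5, theta.size)

    cx, cy, r = fit_circle(x, y)
    print(f"estimated Earth centroid: ({cx:.2f}, {cy:.2f}) px, radius {r:.1f} px")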
Vacuum ultraviolet spectropolarimeter design for precise polarization measurements.
Narukage, Noriyuki; Auchère, Frédéric; Ishikawa, Ryohko; Kano, Ryouhei; Tsuneta, Saku; Winebarger, Amy R; Kobayashi, Ken
2015-03-10
Precise polarization measurements in the vacuum ultraviolet (VUV) region provide a new means for inferring weak magnetic fields in the upper atmosphere of the Sun and stars. We propose a VUV spectropolarimeter design ideally suited for this purpose. This design is proposed and adopted for the NASA-JAXA Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), which will record the linear polarization (Stokes Q and U) of the hydrogen Lyman-α line (121.567 nm) profile. The expected degree of polarization is on the order of 0.1%. Our spectropolarimeter has two optically symmetric channels to simultaneously measure orthogonal linear polarization states with a single concave diffraction grating that serves both as the spectral dispersion element and as the beam splitter. This design has a minimal number of reflective components with a high VUV throughput. Consequently, these design features allow us to minimize the polarization errors caused by possible time variation of the VUV flux during the polarization modulation and by statistical photon noise.
Foong, Shaohui; Sun, Zhenglong
2016-08-12
In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
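A minimal sketch of the PCA-plus-ANN mapping idea, assuming a simulated 9-sensor array, a toy field model, and scikit-learn's PCA and MLPRegressor; the geometry, noise level, and network size are illustrative assumptions, not the authors' setup.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline

    # Synthetic training data: a 1-D translation stage sampled at known positions,
    # read by a 9-sensor array through a toy field model plus Gaussian noise.
    rng = np.random.default_rng(1)
    positions = np.linspace(0.0, 100.0, 2000)             # stage positions (mm)
    sensor_x = np.linspace(-40.0, 40.0, 9)                # sensor locations (mm)
    field = 1.0 / (1.0 + ((positions[:, None] - sensor_x) / 15.0) ** 2)
    field += rng.normal(0, 0.005, field.shape)            # measurement noise

    # PCA acts as a pseudo-linear filter reducing the 9-D sensor output space;
    # an ANN then learns the reduced-field -> position mapping.
    model = make_pipeline(
        PCA(n_components=3),
        MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0),
    )
    model.fit(field, positions)

    test = 1.0 / (1.0 + ((np.array([[37.2]]) - sensor_x) / 15.0) ** 2)
    print("estimated position (mm):", model.predict(test)[0])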
Scientific applications of frequency-stabilized laser technology in space
NASA Technical Reports Server (NTRS)
Schumaker, Bonny L.
1990-01-01
A synoptic investigation of the uses of frequency-stabilized lasers for scientific applications in space is presented. It begins by summarizing properties of lasers, characterizing their frequency stability, and describing limitations and techniques to achieve certain levels of frequency stability. Limits to precision set by laser frequency stability for various kinds of measurements are investigated and compared with other sources of error. These other sources include photon-counting statistics, scattered laser light, fluctuations in laser power, and intensity distribution across the beam, propagation effects, mechanical and thermal noise, and radiation pressure. Methods are explored to improve the sensitivity of laser-based interferometric and range-rate measurements. Several specific types of science experiments that rely on highly precise measurements made with lasers are analyzed, and anticipated errors and overall performance are discussed. Qualitative descriptions are given of a number of other possible science applications involving frequency-stabilized lasers and related laser technology in space. These applications will warrant more careful analysis as technology develops.
Rezende, Patrícia Sueli; Carmo, Geraldo Paulo do; Esteves, Eduardo Gonçalves
2015-06-01
We report the use of a method to determine the refractive index of copper(II) serum (RICS) in milk as a tool to detect the fraudulent addition of water. This practice is highly profitable, unlawful, and difficult to deter. The method was optimized and validated and is simple, fast and robust. The optimized method yielded statistically equivalent results compared to the reference method, with an accuracy of 0.4% and quadrupled analytical throughput. Trueness, precision (repeatability and intermediate precision) and ruggedness were determined to be satisfactory at a 95.45% confidence level. The expanded uncertainty of the measurement was ±0.38°Zeiss at the 95.45% confidence level (k=3.30), corresponding to 1.03% of the minimum measurement expected in adequate samples (>37.00°Zeiss).
Probing the top-quark width using the charge identification of b jets
Giardino, Pier Paolo; Zhang, Cen
2017-07-18
We propose a new method for measuring the top-quark width based on the on-/off-shell ratio of b-charge asymmetry in pp → Wbj production at the LHC. The charge asymmetry removes virtually all backgrounds and related uncertainties, while remaining systematic and theoretical uncertainties can be taken under control by the ratio of cross sections. Limited only by statistical error, in an optimistic scenario, we find that our approach leads to good precision at high integrated luminosity, at the level of a few hundred MeV assuming 300-3000 fb⁻¹ at the LHC. The approach directly probes the total width, in such a way that model-dependence can be minimized. It is complementary to existing cross section measurements, which always leave a degeneracy between the total rate and the branching ratio, and provides valuable information about the properties of the top quark. This proposal opens up new opportunities for precision top measurements using a b-charge identification algorithm.
Gated Sensor Fusion: A way to Improve the Precision of Ambulatory Human Body Motion Estimation.
Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, Gonzalo
2014-01-01
Human body motion is usually variable in terms of intensity and, therefore, any Inertial Measurement Unit attached to a subject will measure both low and high angular rate and accelerations. This can be a problem for the accuracy of orientation estimation algorithms based on adaptive filters such as the Kalman filter, since both the variances of the process noise and the measurement noise are set at the beginning of the algorithm and remain constant during its execution. Setting fixed noise parameters burdens the adaptation capability of the filter if the intensity of the motion changes rapidly. In this work we present a conjoint novel algorithm which uses a motion intensity detector to dynamically vary the noise statistical parameters of different approaches of the Kalman filter. Results show that the precision of the estimated orientation in terms of the RMSE can be improved up to 29% with respect to the standard fixed-parameters approaches.
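The gating idea can be illustrated with a scalar orientation filter in which a simple motion-intensity detector switches the measurement-noise variance. The thresholds, noise values, and synthetic signals below are assumptions for illustration, not the published algorithm.

    import numpy as np

    def gated_kalman(gyro, acc_angle, acc_norm, dt=0.01,
                     q=1e-4, r_static=0.05, r_dynamic=5.0, g=9.81, gate=0.5):
        """Scalar orientation filter: gyro-driven prediction, accelerometer update.
        A motion-intensity detector (deviation of |acc| from gravity) gates the
        measurement-noise variance between a static and a dynamic value."""
        theta, p = 0.0, 1.0
        out = np.empty(len(gyro))
        for k in range(len(gyro)):
            theta += gyro[k] * dt          # prediction with the gyroscope rate
            p += q
            r = r_static if abs(acc_norm[k] - g) < gate else r_dynamic
            kgain = p / (p + r)            # update with the accelerometer angle
            theta += kgain * (acc_angle[k] - theta)
            p *= 1.0 - kgain
            out[k] = theta
        return out

    # Synthetic check: constant 10 deg tilt, noisy sensors, a burst of shaking.
    rng = np.random.default_rng(2)
    n = 1000
    gyro = rng.normal(0, 0.02, n)                          # rad/s, no real rotation
    acc_angle = np.deg2rad(10) + rng.normal(0, 0.03, n)    # accelerometer tilt angle
    acc_norm = np.full(n, 9.81)
    acc_norm[400:600] += rng.normal(0, 3.0, 200)           # high-intensity motion
    est = gated_kalman(gyro, acc_angle, acc_norm)
    print("final tilt estimate (deg):", round(float(np.rad2deg(est[-1])), 2))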
Murillo, Gabriel H; You, Na; Su, Xiaoquan; Cui, Wei; Reilly, Muredach P; Li, Mingyao; Ning, Kang; Cui, Xinping
2016-05-15
Single nucleotide variant (SNV) detection procedures are being utilized as never before to analyze the recent abundance of high-throughput DNA sequencing data, both on single and multiple sample datasets. Building on previously published work with the single sample SNV caller genotype model selection (GeMS), a multiple sample version of GeMS (MultiGeMS) is introduced. Unlike other popular multiple sample SNV callers, the MultiGeMS statistical model accounts for enzymatic substitution sequencing errors. It also addresses the multiple testing problem endemic to multiple sample SNV calling and utilizes high performance computing (HPC) techniques. A simulation study demonstrates that MultiGeMS ranks highest in precision among a selection of popular multiple sample SNV callers, while showing exceptional recall in calling common SNVs. Further, both simulation studies and real data analyses indicate that MultiGeMS is robust to low-quality data. We also demonstrate that accounting for enzymatic substitution sequencing errors not only improves SNV call precision at low mapping quality regions, but also improves recall at reference allele-dominated sites with high mapping quality. The MultiGeMS package can be downloaded from https://github.com/cui-lab/multigems. Supplementary data are available at Bioinformatics online.
NASA Technical Reports Server (NTRS)
Huneke, J. C.; Armstrong, J. T.; Wasserburg, G. J.
1983-01-01
Isotopic ratios have been determined, at a precision level approaching that of counting statistics using beam switching, by employing PANURGE, a modified CAMECA IMS3F ion microprobe at a mass resolving power of 5000. This technique is used to determine the isotopic composition of Mg and Si and the atomic ratio of Al/Mg in minerals from the Allende inclusion WA and the Allende FUN inclusion C1. Results show enrichment in Mg-26 of up to 260 percent. Results of Mg and Al/Mg measurements on cogenetic spinel inclusions and host plagioclase crystals show Mg-Al isochrons in excellent agreement with precise mineral isochrons determined by thermal emission mass spectrometry. The measurements are found to confirm the presence of substantial excess Mg-26 in WA and its near absence in C1. Data are obtained which indicate a metamorphic re-equilibration of Mg in Allende plagioclase at least 0.6 Myr after WA formation. Ion probe measurements are obtained which confirm that the Mg composition in Allende C1 is highly fractionated and is uniform among pyroxene, melilite, plagioclase and spinel crystals, and spinel included in melilite and plagioclase crystals.
A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics
NASA Astrophysics Data System (ADS)
Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.
2017-03-01
Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.
Data processing in neutron protein crystallography using position-sensitive detectors
NASA Astrophysics Data System (ADS)
Schoenborn, B. P.
Neutrons provide a unique probe for localizing hydrogen atoms and for distinguishing hydrogen from deuterons. Hydrogen atoms largely determine the three dimensional structure of proteins and are responsible for many catalytic reactions. The study of hydrogen bonding and hydrogen exchange will therefore give insight into reaction mechanisms and conformational fluctuations. In addition, neutrons provide the ability to distinguish N from C and O and to allow correct orientation of groups such as histidine and glutamine. To take advantage of these unique features of neutron crystallography, one needs accurate Fourier maps depicting atomic structure to a high precision. Special attention is given to subtraction of the high background associated with hydrogen containing molecules, which produces a disproportionately large statistical error.
A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding
NASA Astrophysics Data System (ADS)
Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui
2016-02-01
In order to control and monitor the quality of middle frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of the welding current up to 100 kA is required, for which Rogowski coils are the only viable current transducers at present. Thus, a highly accurate analogue integrator is the key to restoring the converted signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor influencing the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-time pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed to fabricate a precision analogue integrator system for MFDC RSW. A voltage holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system constrains both drift and capacitive leakage error, and its effect is robust to different voltage levels of the output signals. The total integration error is limited to within ±0.09 mV s⁻¹, or 0.005% s⁻¹ of full scale, at a 95% confidence level, which makes it possible to achieve precision measurement of the welding current of MFDC RSW with Rogowski coils of the 0.1% accuracy class.
The Super-TIGER Instrument to Probe Galactic Cosmic-Ray Origins
NASA Astrophysics Data System (ADS)
Ward, John E.
2013-04-01
Super-TIGER is a large area (5.4 m^2) balloon-borne instrument designed to measure cosmic-ray nuclei in the charge interval 30 <= Z <= 42 with individual-element resolution and high statistical precision, and make exploratory measurements through Z = 56. These measurements will provide sensitive tests of the emerging model of cosmic-ray origins in OB associations and models of the mechanism for selection of nuclei for acceleration. Furthermore, Super-TIGER will measure with high statistical accuracy the energy spectra of the more abundant elements in the interval 10 <= Z <= 28 at energies 0.8 < E < 10 GeV/nucleon to test the hypothesis that nearby micro-quasars could superpose features on the energy spectra. Super-TIGER, which builds on the heritage of the smaller TIGER, was constructed by a collaboration involving WUSTL, NASA/GSFC, Caltech, JPL and U Minn. It was successfully launched from Antarctica in December 2012, collecting high-quality data for over one month. Particle charge and energy were measured with a combination of plastic scintillators, acrylic and silica-aerogel Cherenkov detectors, and a scintillating fiber hodoscope. Details of the flight, instrument performance, data analysis and preliminary results of the Super-TIGER flight will be presented.
Statistical clumped isotope signatures
Röckmann, T.; Popa, M. E.; Krol, M. C.; Hofmann, M. E. G.
2016-01-01
High precision measurements of molecules containing more than one heavy isotope may provide novel constraints on element cycles in nature. These so-called clumped isotope signatures are reported relative to the random (stochastic) distribution of heavy isotopes over all available isotopocules of a molecule, which is the conventional reference. When multiple indistinguishable atoms of the same element are present in a molecule, this reference is calculated from the bulk (≈average) isotopic composition of the involved atoms. We show here that this referencing convention leads to apparent negative clumped isotope anomalies (anti-clumping) when the indistinguishable atoms originate from isotopically different populations. Such statistical clumped isotope anomalies must occur in any system where two or more indistinguishable atoms of the same element, but with different isotopic composition, combine in a molecule. The size of the anti-clumping signal is closely related to the difference of the initial isotope ratios of the indistinguishable atoms that have combined. Therefore, a measured statistical clumped isotope anomaly, relative to an expected (e.g. thermodynamical) clumped isotope composition, may allow assessment of the heterogeneity of the isotopic pools of atoms that are the substrate for formation of molecules. PMID:27535168
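The sign and size of such a statistical anomaly follow from elementary probability. The short numerical illustration below uses arbitrary isotope ratios and a molecule with two indistinguishable atoms drawn from two isotopically different pools; it is an illustration of the referencing effect, not a reproduction of the paper's calculations.

    # Statistical (anti-)clumping when two indistinguishable atoms come from
    # isotopically different pools.  The isotope ratios below are arbitrary.
    R1 = 0.0110            # rare/abundant isotope ratio of pool 1
    R2 = 0.0114            # rare/abundant isotope ratio of pool 2

    # Fractional abundance of the rare isotope in each pool.
    p1, p2 = R1 / (1 + R1), R2 / (1 + R2)

    # Abundance of the doubly substituted isotopologue when one atom is drawn
    # from each pool, versus the stochastic reference built from the bulk
    # (average) composition of the two indistinguishable atoms.
    actual = p1 * p2
    p_bulk = 0.5 * (p1 + p2)
    stochastic = p_bulk ** 2

    # Clumped anomaly relative to the stochastic reference (per mil); it is
    # always <= 0 because the geometric mean never exceeds the arithmetic mean.
    delta = (actual / stochastic - 1) * 1000
    print(f"statistical clumped anomaly: {delta:.3f} per mil")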
Effect of Different Ceramic Crown Preparations on Tooth Structure Loss: An In Vitro Study
NASA Astrophysics Data System (ADS)
Ebrahimpour, Ashkan
Objective: To quantify and compare the amount of tooth-structure reduction following the full-coverage preparations for crown materials of porcelain-fused-to-metal, lithium disilicate glass-ceramic and yttria-stabilized tetragonal zirconia polycrystalline for three tooth morphologies. Methods: Groups of resin teeth of different morphologies were individually weighed to high precision, then prepared following the preparation guidelines. The teeth were re-weighed after preparation and the amount of structural reduction was calculated. Statistical analyses were performed to find out if there was a significant difference among the groups. Results: Amount of tooth reduction for zirconia crown preparations was the lowest and statistically different compared with the other two materials. No statistical significance was found between the amount of reduction for porcelain-fused-to-metal and lithium disilicate glass-ceramic crowns. Conclusion: Within the limitations of this study, more tooth structure can be saved when utilizing zirconia full-coverage restorations compared with lithium disilicate glass-ceramic and porcelain-fused-to-metal crowns in maxillary central incisors, first premolars and first molars.
Statistical characterization of short wind waves from stereo images of the sea surface
NASA Astrophysics Data System (ADS)
Mironov, Alexey; Yurovskaya, Maria; Dulov, Vladimir; Hauser, Danièle; Guérin, Charles-Antoine
2013-04-01
We propose a methodology to extract short-scale statistical characteristics of the sea surface topography by means of stereo image reconstruction. The possibilities and limitations of the technique are discussed and tested on a data set acquired from an oceanographic platform at the Black Sea. The analysis shows that reconstruction of the topography based on the stereo method is an efficient way to derive non-trivial statistical properties of short and intermediate surface waves (say from 1 centimeter to 1 meter). Most technical issues pertaining to this type of dataset (limited range of scales, lacunarity of data or irregular sampling) can be partially overcome by appropriate processing of the available points. The proposed technique also allows one to avoid linear interpolation, which dramatically corrupts the properties of retrieved surfaces. The processing technique imposes that the field of elevation be polynomially detrended, which has the effect of filtering out the large scales. Hence the statistical analysis can only address the small-scale components of the sea surface. The precise cut-off wavelength, which is approximately half the patch size, can be obtained by applying a high-pass frequency filter on the reference gauge time records. The results obtained for the one- and two-point statistics of small-scale elevations are shown to be consistent, at least in order of magnitude, with the corresponding gauge measurements as well as other experimental measurements available in the literature. The calculation of the structure functions provides a powerful tool to investigate spectral and statistical properties of the field of elevations. Experimental parametrization of the third-order structure function, the so-called skewness function, is one of the most important and original outcomes of this study. This function is of primary importance in analytical scattering models of the sea surface and was up to now unavailable in field conditions. Due to the lack of precise reference measurements for the small-scale wave field, we could not quantify exactly the accuracy of the retrieval technique. However, it appeared clearly that the obtained accuracy is good enough for the estimation of second-order statistical quantities (such as the correlation function), acceptable for third-order quantities (such as the skewness function) and insufficient for fourth-order quantities (such as the kurtosis). Therefore, the stereo technique in the present stage should not be thought of as a self-contained universal tool to characterize the surface statistics. Instead, it should be used in conjunction with other well calibrated but sparse reference measurements (such as wave gauges) for cross-validation and calibration. It then completes the statistical analysis inasmuch as it provides a snapshot of the three-dimensional field and allows for the evaluation of higher-order spatial statistics.
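A minimal sketch of how structure functions can be estimated from a detrended elevation map; the synthetic wave field, grid step, and lag range are illustrative assumptions, not the processing chain used in the study.

    import numpy as np

    def structure_functions(eta, dx, max_lag=50, orders=(2, 3)):
        """Estimate 1-D structure functions S_n(r) = <(eta(x+r) - eta(x))^n>
        along the rows of a detrended elevation map eta with grid step dx."""
        lags = np.arange(1, max_lag + 1)
        s = {n: np.empty(lags.size) for n in orders}
        for i, lag in enumerate(lags):
            diff = eta[:, lag:] - eta[:, :-lag]
            for n in orders:
                s[n][i] = np.mean(diff ** n)
        return lags * dx, s

    # Synthetic small-scale elevation patch: a few short waves on a 1 cm grid.
    rng = np.random.default_rng(3)
    x = np.arange(512) * 0.01
    profile = sum(a * np.sin(2 * np.pi * x / lam + rng.uniform(0, 2 * np.pi))
                  for a, lam in [(0.02, 0.30), (0.01, 0.12), (0.005, 0.05)])
    eta = np.tile(profile, (128, 1)) + rng.normal(0, 1e-3, (128, 512))

    r, s = structure_functions(eta, dx=0.01)
    i = 9                                   # lag of 10 grid steps, i.e. r = 10 cm
    print("S2 at r = 10 cm:", s[2][i])
    print("skewness function S3 / S2**1.5 at r = 10 cm:", s[3][i] / s[2][i] ** 1.5)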
Melching, C.S.; Coupe, R.H.
1995-01-01
During water years 1985-91, the U.S. Geological Survey (USGS) and the Illinois Environmental Protection Agency (IEPA) cooperated in the collection and analysis of concurrent and split stream-water samples from selected sites in Illinois. Concurrent samples were collected independently by field personnel from each agency at the same time and sent to the IEPA laboratory, whereas the split samples were collected by USGS field personnel and divided into aliquots that were sent to each agency's laboratory for analysis. The water-quality data from these programs were examined by means of the Wilcoxon signed ranks test to identify statistically significant differences between results of the USGS and IEPA analyses. The data sets for constituents and properties identified by the Wilcoxon test as having significant differences were further examined by use of the paired t-test, mean relative percentage difference, and scattergrams to determine if the differences were important. Of the 63 constituents and properties in the concurrent-sample analysis, differences in only 2 (pH and ammonia) were statistically significant and large enough to concern water-quality engineers and planners. Of the 27 constituents and properties in the split-sample analysis, differences in 9 (turbidity, dissolved potassium, ammonia, total phosphorus, dissolved aluminum, dissolved barium, dissolved iron, dissolved manganese, and dissolved nickel) were statistically significant and large enough to concern water-quality engineers and planners. The differences in concentration between pairs of concurrent samples were compared to the precision of the laboratory or field method used, and the differences in concentration between pairs of split samples were compared to the precision of the laboratory method used and the interlaboratory precision of measuring a given concentration or property. Consideration of method precision indicated that differences between concurrent samples were insignificant for all concentrations and properties except pH, and that differences between split samples were significant for all concentrations and properties. Consideration of interlaboratory precision indicated that the differences between the split samples were not unusually large. The results for the split samples illustrate the difficulty in obtaining comparable and accurate water-quality data.
Accurate Black Hole Spin Measurements using ABC
NASA Astrophysics Data System (ADS)
Connolly, Andrew
Measuring the spin of black holes provides important insights into the supernova formation mechanism of stellar-mass black holes, galaxy merger scenarios for supermassive black holes, and the launching mechanisms of ballistic jets. It is therefore of crucial importance to measure black hole spins to a high degree of accuracy. Stellar-mass black holes in binary systems (BHBs) have two major advantages over Active Galactic Nuclei (AGN): (1) owing to their proximity and brightness, observations of BHBs are not as limited by counting statistics as their supermassive counterparts; (2) unlike in AGN, one can use two largely independent methods to measure the spin in BHBs, providing a check on spin measurements. However, the high flux that makes BHBs such excellent targets for spin measurements also proves to be their Achilles heel: modern CCD cameras are optimized for observing faint sources. Consequently, observations of bright BHBs with CCD cameras are subject to non-linear instrumental effects, among them pile-up and grade migration, that strongly distort the spectrum. Since spin measurements rely on a very precise model of both the continuum X-ray flux and the disc reflection signatures superimposed on top of the former, these instrumental effects may cause inferred spin measurements to differ by a factor of two or more. Current mitigation strategies are aimed at removing instrumental effects either during the observations themselves, by requiring simultaneous observations with multiple telescopes, or in post-processing. Even when these techniques are employed, pile-up may remain unrecognized and still distort results, whereas mitigation strategies may introduce additional systematic biases, e.g. due to increased (cross-)calibration uncertainties. Advances in modern statistical methodology allow for efficient modeling of instrumental effects during the analysis stage, largely eliminating the requirements for observations with multiple instruments or increased observation time. In particular, a class of methods collectively called Approximate Bayesian Computation (ABC) is capable of exploiting the fact that it is possible to simulate instrumental effects to a high degree of accuracy in order to build reliable statistical models incorporating pile-up and related effects. With the loss of the Hitomi spacecraft, it is more important than ever to make full use of the data we collect with current instruments. We propose an ambitious program to estimate the spins of 13 black holes in X-ray binaries using observations with XMM-Newton's EPIC MOS and pn, Suzaku's XIS, and Chandra's ACIS and HETG instruments. We will build a general framework for dealing with pile-up in spectral modeling using ABC and refine current instrumental simulators for inclusion in this framework. Coupled with state-of-the-art sampling methods, this will allow us to take advantage of dozens of observations in the archives of all three instruments. We will be able to estimate spins to much better accuracy than ever before and test current models for black hole formation as well as jet launching mechanisms. The program will deliver a considerable legacy, because the statistical and methodological framework will be general. Application to other instruments suffering from photon pile-up, e.g. Swift/XRT, Fermi/GBM, ASCA/SIS, and GALEX, will only require a model capable of simulating the relevant instrumental effects.
This will enable other science cases beyond the one proposed here which rely on precise spectral measurements, or cases where pile-up cannot be avoided, e.g. high-precision radius measurements in neutron stars, understanding X-ray dust scattering, and stellar evolution studies of globular clusters.
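To make the ABC idea concrete, the following toy sketch infers a power-law photon index from a forward simulator that includes a crude pile-up step, keeping the prior draws whose simulated summaries lie closest to the observed ones. The simulator, summary statistics, prior range, and acceptance tolerance are all illustrative assumptions, not the proposed analysis pipeline.

    import numpy as np

    rng = np.random.default_rng(4)

    def simulate_spectrum(gamma, n_photons=20000, pileup_frac=0.08):
        """Toy forward simulator: power-law photon energies (index gamma, E >= 1 keV)
        followed by a crude pile-up step that records a fraction of photon pairs
        as single summed events."""
        u = rng.random(n_photons)
        e = (1.0 - u) ** (-1.0 / (gamma - 1.0))
        n_pile = int(pileup_frac * n_photons) // 2
        idx = rng.permutation(n_photons)
        piled = e[idx[:n_pile]] + e[idx[n_pile:2 * n_pile]]
        return np.concatenate([e[idx[2 * n_pile:]], piled])

    def summary(e):
        # Simple summary statistics of the recorded spectrum.
        return np.array([np.log(np.mean(e)), np.log(np.median(e))])

    # "Observed" data generated with a known photon index.
    obs = summary(simulate_spectrum(gamma=2.3))

    # ABC rejection: keep prior draws whose simulated summaries are closest to the data.
    draws = rng.uniform(1.5, 3.5, 3000)                    # flat prior on gamma
    dist = np.array([np.linalg.norm(summary(simulate_spectrum(g)) - obs) for g in draws])
    posterior = draws[dist < np.quantile(dist, 0.02)]      # accept the closest 2%
    print(f"ABC posterior for gamma: {posterior.mean():.2f} +/- {posterior.std():.2f}")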
Development of statistical linear regression model for metals from transportation land uses.
Maniquiz, Marla C; Lee, Soyoung; Lee, Eunju; Kim, Lee-Hyung
2009-01-01
Transportation land uses with impervious surfaces, such as highways, parking lots, roads, and bridges, are recognized as highly polluted non-point sources (NPSs) in urban areas. Large amounts of pollutants from urban transportation accumulate on paved surfaces during dry periods and are washed off during storms. In Korea, the identification and monitoring of NPSs still represent a great challenge. Since 2004, the Ministry of Environment (MOE) has been engaged in several research and monitoring efforts to develop stormwater management policies and treatment systems for future implementation. Data from 131 storm events at eleven sites between May 2004 and September 2008 were analyzed to identify correlations between particulates and metals and to develop simple linear regression (SLR) models to estimate event mean concentrations (EMCs). Results indicate that there was no significant relationship between metal and TSS EMCs. However, although the SLR estimation models did not provide useful results, they are valuable indicators of the high uncertainties that NPS pollution possesses. Therefore, long-term monitoring employing proper methods and precise statistical analysis of the data should be undertaken to eliminate these uncertainties.
NASA Astrophysics Data System (ADS)
Friedrich, Oliver; Eifler, Tim
2018-01-01
Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >105 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case finding a similar performance.
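A minimal numerical sketch of the idea, assuming a Neumann-type expansion of (A+B)^-1 around the analytic part A together with a simulation estimate of B; the matrix sizes, amplitudes, and simulation counts are toy values, and the exact expansion terms may differ from those derived in the paper.

    import numpy as np

    rng = np.random.default_rng(5)
    ndim, nsim = 20, 400

    # Analytic, noise-free part A (e.g. shape noise) and a "true" extra contribution B.
    A = np.diag(np.linspace(1.0, 2.0, ndim))
    L = 0.05 * rng.normal(size=(ndim, ndim))
    B_true = L @ L.T

    # Simulations with A switched off give a direct (noisy) estimate of B alone.
    B_hat = np.cov(rng.multivariate_normal(np.zeros(ndim), B_true, size=nsim),
                   rowvar=False)

    # Expand the precision matrix around the analytic model:
    # (A + B)^-1 ~= A^-1 - A^-1 B A^-1 + A^-1 B A^-1 B A^-1
    Ainv = np.linalg.inv(A)
    P_exp = Ainv - Ainv @ B_hat @ Ainv + Ainv @ B_hat @ Ainv @ B_hat @ Ainv

    # Compare with the standard estimator: invert the sample covariance of A + B.
    P_true = np.linalg.inv(A + B_true)
    C_samp = np.cov(rng.multivariate_normal(np.zeros(ndim), A + B_true, size=nsim),
                    rowvar=False)
    P_samp = np.linalg.inv(C_samp)
    err = lambda P: np.linalg.norm(P - P_true) / np.linalg.norm(P_true)
    print(f"expansion error: {err(P_exp):.4f}   sample-inverse error: {err(P_samp):.4f}")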
Cartagena, Alvaro; Bakhshandeh, Azam; Ekstrand, Kim Rud
2018-02-07
With this in vitro study we aimed to assess the possibility of precise application of sealant on accessible artificial white spot lesions (WSLs) on approximal surfaces next to a tooth surface under operative treatment. A secondary aim was to evaluate whether the use of magnifying glasses improved the application precision. Fifty-six extracted premolars were selected; approximal WSLs were created with 15% HCl gel and standardized photographs were taken. The premolars were mounted in plaster models in contact with a neighbouring molar with a Class II/I-II restoration (Sample 1) or an approximal, cavitated dentin lesion (Sample 2). The restorations or the lesion were removed, and Clinpro Sealant was placed over the WSL. Magnifying glasses were used when sealing half the study material. The sealed premolar was removed from the plaster model and photographed. Adobe Photoshop was used to measure the size of the WSL and the sealed area, and the degree of match between the areas was determined in Photoshop. Interclass agreement for WSL, sealed, and matched areas was found to be excellent (κ = 0.98-0.99). The sealant covered 48-100% of the WSL area (median = 93%) in Sample 1 and 68-100% of the WSL area (median = 95%) in Sample 2. No statistical differences were observed concerning uncovered proportions of the WSL area between groups with and without magnifying glasses (p values ≥ .19). However, overextended sealed areas were more pronounced when magnification was used (p = .01). The precision did not differ between the samples (p = .31). It was possible to seal accessible approximal lesions with high precision. Use of magnifying glasses did not improve the precision.
Rudolph, Heike; Quaas, Sebastian; Haim, Manuela; Preißler, Jörg; Walter, Michael H; Koch, Rainer; Luthardt, Ralph G
2013-06-01
The use of fast-setting impression materials with different viscosities for the one-stage impression technique demands precise working times when mixing. We examined the effect of varying working time on impression precision in a randomized clinical trial. Focusing on tooth 46, three impressions were made from each of 96 volunteers, using either a polyether (PE: Impregum Penta H/L DuoSoft Quick, 3 M ESPE) or an addition-curing silicone (AS: Aquasil Ultra LV, Dentsply/DeTrey), one with the manufacturer's recommended working time (used as a reference) and two with altered working times. All stages of the impression-taking were subject to randomization. The three-dimensional precision of the non-standard working time impressions was digitally analyzed compared to the reference impression. Statistical analysis was performed using multivariate models. The mean difference in the position of the lower right first molar (vs. the reference impression) ranged from ±12 μm for PE to +19 and -14 μm for AS. Significantly higher mean values (+62 to -40 μm) were found for AS compared to PE (+21 to -26 μm) in the area of the distal adjacent tooth. Fast-set impression materials offer high precision when used for single tooth restorations as part of a one-stage impression technique, even when the working time (mixing plus application of the light- and heavy-body components) diverges significantly from the manufacturer's recommended protocol. Best accuracy was achieved with machine-mixed heavy-body/light-body polyether. Both materials examined met the clinical requirements regarding precision when the teeth were completely syringed with light material.
Samadzadeh, Gholam Reza; Rigi, Tahereh; Ganjali, Ali Reza
2013-01-01
Retrieving valuable and up-to-date information from the internet has become vital for researchers and scholars, because thousands and perhaps millions of scientific works are published every day as digital resources, and researchers cannot ignore this great resource when searching for related documents that may not be found in any library. Given the variety of documents presented on the internet, search engines are among the most effective tools for finding information. The aim of this study is to evaluate three criteria (recall, precision, and importance) for four search engines (PubMed, Science Direct, Google Scholar, and the federated search of the Iranian National Medical Digital Library) in the field of addiction (prevention and treatment), in order to select the most effective search engine for literature research. This research was a cross-sectional study in which four popular search engines in the medical sciences were evaluated. Medical Subject Headings (MeSH) were used to select keywords. The keywords were entered in the search engines and the first 10 entries of each search were evaluated. Direct observation was used for data collection, and the data were analyzed with descriptive statistics (number, percentage, and mean) and inferential statistics (one-way analysis of variance (ANOVA) and post hoc Tukey tests) in SPSS 15; P < 0.05 was considered statistically significant. Results showed that the search engines performed differently with regard to the evaluated criteria: P values of 0.004 for precision and 0.002 for importance indicate significant differences among the search engines. PubMed, Science Direct, and Google Scholar were the best in recall, precision, and importance, respectively. Since literature research is one of the most important stages of research, researchers, especially Substance-Related Disorders scholars, should use the search engines with the best recall, precision, and importance in their subject field rather than depending on just one search engine.
NASA Astrophysics Data System (ADS)
Capozzi, F.; Lisi, E.; Marrone, A.
2015-11-01
Nuclear reactors provide intense sources of electron antineutrinos, characterized by few-MeV energy E and an unoscillated spectral shape Φ(E). High-statistics observations of reactor neutrino oscillations over medium-baseline distances L ~ O(50) km would provide unprecedented opportunities to probe both the long-wavelength mass-mixing parameters (δm² and θ12) and the short-wavelength ones (Δm²ee and θ13), together with the subtle interference effects associated with the neutrino mass hierarchy (either normal or inverted). In a given experimental setting (here taken as in the JUNO project for definiteness), the achievable hierarchy sensitivity and parameter accuracy depend not only on the accumulated statistics but also on systematic uncertainties, which include (but are not limited to) the mass-mixing priors and the normalizations of signals and backgrounds. We examine, in addition, the effect of introducing smooth deformations of the detector energy scale, E → E'(E), and of the reactor flux shape, Φ(E) → Φ'(E), within reasonable error bands inspired by state-of-the-art estimates. It turns out that energy-scale and flux-shape systematics can noticeably affect the performance of a JUNO-like experiment, both for the hierarchy discrimination and for precision oscillation physics. It is shown that a significant reduction of the assumed energy-scale and flux-shape uncertainties (by, say, a factor of 2) would be highly beneficial to the physics program of medium-baseline reactor projects. Our results also shed some light on the role of the inverse-beta decay threshold, of geoneutrino backgrounds, and of matter effects in the analysis of future reactor oscillation data.
Extracting laboratory test information from biomedical text
Kang, Yanna Shen; Kayaalp, Mehmet
2013-01-01
Background: No previous study reported the efficacy of current natural language processing (NLP) methods for extracting laboratory test information from narrative documents. This study investigates the pathology informatics question of how accurately such information can be extracted from text with the current tools and techniques, especially machine learning and symbolic NLP methods. The study data came from a text corpus maintained by the U.S. Food and Drug Administration, containing a rich set of information on laboratory tests and test devices. Methods: The authors developed a symbolic information extraction (SIE) system to extract device and test specific information about four types of laboratory test entities: Specimens, analytes, units of measures and detection limits. They compared the performance of SIE and three prominent machine learning based NLP systems, LingPipe, GATE and BANNER, each implementing a distinct supervised machine learning method, hidden Markov models, support vector machines and conditional random fields, respectively. Results: Machine learning systems recognized laboratory test entities with moderately high recall, but low precision rates. Their recall rates were relatively higher when the number of distinct entity values (e.g., the spectrum of specimens) was very limited or when lexical morphology of the entity was distinctive (as in units of measures), yet SIE outperformed them with statistically significant margins on extracting specimen, analyte and detection limit information in both precision and F-measure. Its high recall performance was statistically significant on analyte information extraction. Conclusions: Despite its shortcomings against machine learning methods, a well-tailored symbolic system may better discern relevancy among a pile of information of the same type and may outperform a machine learning system by tapping into lexically non-local contextual information such as the document structure. PMID:24083058
Arce, Pedro; Lagares, Juan Ignacio
2018-01-25
We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.
Optical nano artifact metrics using silicon random nanostructures
NASA Astrophysics Data System (ADS)
Matsumoto, Tsutomu; Yoshida, Naoki; Nishio, Shumpei; Hoga, Morihisa; Ohyagi, Yasuyuki; Tate, Naoya; Naruse, Makoto
2016-08-01
Nano-artifact metrics exploit unique physical attributes of nanostructured matter for authentication and clone resistance, which is vitally important in the age of Internet-of-Things where securing identities is critical. However, expensive and huge experimental apparatuses, such as scanning electron microscopy, have been required in the former studies. Herein, we demonstrate an optical approach to characterise the nanoscale-precision signatures of silicon random structures towards realising low-cost and high-value information security technology. Unique and versatile silicon nanostructures are generated via resist collapse phenomena, which contains dimensions that are well below the diffraction limit of light. We exploit the nanoscale precision ability of confocal laser microscopy in the height dimension; our experimental results demonstrate that the vertical precision of measurement is essential in satisfying the performances required for artifact metrics. Furthermore, by using state-of-the-art nanostructuring technology, we experimentally fabricate clones from the genuine devices. We demonstrate that the statistical properties of the genuine and clone devices are successfully exploited, showing that the liveness-detection-type approach, which is widely deployed in biometrics, is valid in artificially-constructed solid-state nanostructures. These findings pave the way for reasonable and yet sufficiently secure novel principles for information security based on silicon random nanostructures and optical technologies.
NASA Astrophysics Data System (ADS)
Li, Da; Cheung, Chifai; Zhao, Xing; Ren, Mingjun; Zhang, Juan; Zhou, Liqiu
2016-10-01
Autostereoscopy-based three-dimensional (3D) digital reconstruction has been widely applied in the fields of medical science, entertainment, design, industrial manufacture, precision measurement and many other areas. The 3D digital model of the target can be reconstructed from the series of two-dimensional (2D) information acquired by the autostereoscopic system, which consists of multiple lenses and can provide information about the target from multiple angles. This paper presents a generalized and precise autostereoscopic 3D digital reconstruction method based on Direct Extraction of Disparity Information (DEDI), which can be applied to any autostereoscopic system and provides accurate 3D reconstruction results through an error-elimination process based on statistical analysis. The feasibility of the DEDI method has been successfully verified through a series of optical 3D digital reconstruction experiments on different autostereoscopic systems; the method efficiently performs direct full 3D digital model construction through a tomography-like operation on every depth plane, excluding defocused information. With the focused information processed by the DEDI method, the 3D digital model of the target can be directly and precisely formed along the axial direction with the depth information.
The theory precision analyse of RFM localization of satellite remote sensing imagery
NASA Astrophysics Data System (ADS)
Zhang, Jianqing; Xv, Biao
2009-11-01
The traditional method of assessing the precision of the Rational Function Model (RFM) is to use a large number of check points and to calculate the mean square error by comparing computed coordinates with known coordinates. This method comes from probability theory: the mean square error is estimated statistically from a large number of samples, and the estimate can be considered to approach its true value when enough samples are available. This paper instead starts from the perspective of survey adjustment, takes the law of propagation of error as its theoretical basis, and calculates the theoretical precision of RFM localization. SPOT5 three-line-array imagery is then used as experimental data, and the results of the traditional method and the method described in this paper are compared; this confirms that the traditional method is feasible and answers the question of its theoretical precision from the survey-adjustment point of view.
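The propagation-of-error route can be illustrated generically: given a design matrix and the a priori observation noise, the theoretical covariance of the estimated coordinates follows without any check points. The sketch below uses a toy linear adjustment rather than the full RFM; the matrix, noise level, and parameter values are assumptions.

    import numpy as np

    # Toy adjustment: image observations l = A @ x + noise, with x the unknown
    # ground-coordinate corrections and sigma0 the a priori observation noise.
    rng = np.random.default_rng(6)
    A = rng.normal(size=(40, 3))            # design matrix: 40 observations, 3 unknowns
    sigma0 = 0.5                            # pixel measurement noise

    # Theoretical precision from the law of propagation of error:
    # Cov(x_hat) = sigma0^2 * (A^T A)^-1
    cov_theory = sigma0**2 * np.linalg.inv(A.T @ A)

    # Monte-Carlo check, i.e. the "many check points" route the paper argues against.
    x_true = np.array([1.0, -2.0, 0.5])
    est = [np.linalg.lstsq(A, A @ x_true + rng.normal(0, sigma0, 40), rcond=None)[0]
           for _ in range(5000)]
    print("theoretical std :", np.sqrt(np.diag(cov_theory)).round(4))
    print("Monte-Carlo std :", np.std(est, axis=0, ddof=1).round(4))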
Bit-Grooming: Shave Your Bits with Razor-sharp Precision
NASA Astrophysics Data System (ADS)
Zender, C. S.; Silver, J.
2017-12-01
Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision, those bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to the competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression ratio, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long-term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.
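The core idea, removing trailing mantissa bits that carry no scientific meaning, can be illustrated with a short sketch. This is a minimal, simplified illustration in the spirit of Bit Grooming, not the NCO implementation: it alternately shaves (zeroes) and sets (ones) trailing float32 mantissa bits so that the rounding stays statistically unbiased on average. The keep_bits value, the example data, and the omission of special-value handling (zeros, NaNs, float64) are assumptions for illustration.

```python
import numpy as np

def groom(values, keep_bits=12):
    """Quantise float32 data, keeping `keep_bits` explicit mantissa bits."""
    ints = np.asarray(values, dtype=np.float32).view(np.uint32)
    drop = 23 - keep_bits                                   # float32 mantissa width is 23 bits
    mask = np.uint32((0xFFFFFFFF << drop) & 0xFFFFFFFF)
    shaved = ints & mask                                    # zero the trailing bits ("shave")
    set_up = ints | np.uint32(~int(mask) & 0xFFFFFFFF)      # set the trailing bits ("set")
    alternate = np.arange(ints.size) % 2 == 0               # alternate shave/set to stay unbiased
    return np.where(alternate, shaved, set_up).view(np.float32)

data = np.linspace(0.1, 1.0, 8, dtype=np.float32)
print(groom(data))   # groomed values have long runs of identical trailing bits, so they compress better
```

The groomed array is then handed to an ordinary lossless compressor; the runs of identical trailing bits are what produce the additional storage savings.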
On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.
Li, Bing; Chun, Hyonho; Zhao, Hongyu
2014-09-01
We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use one-dimensional kernels regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a structure parallel to the Gaussian graphical model in which the precision matrix is replaced by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.
Passage relevance models for genomics search.
Urbain, Jay; Frieder, Ophir; Goharian, Nazli
2009-03-19
We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and documents are represented as potential functions within a Markov random field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance-model feedback of top-ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence, including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.
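A toy sketch of the scoring idea described above: if each potential function contributes a nonnegative factor, the joint (unnormalised) relevance of a passage is their product, or equivalently a weighted sum of log-potentials. The potential values and weights below are illustrative assumptions, not the paper's learned components.

```python
import math

def passage_log_score(potentials, weights):
    """Joint log-score = weighted sum of log potential values (log of their weighted product)."""
    return sum(w * math.log(p) for p, w in zip(potentials, weights))

# term-match, concept-match and topic-match potentials for one hypothetical passage
print(passage_log_score([0.8, 0.6, 0.9], [1.0, 1.5, 0.7]))
```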
Real-time movement detection and analysis for video surveillance applications
NASA Astrophysics Data System (ADS)
Hueber, Nicolas; Hennequin, Christophe; Raymond, Pierre; Moeglin, Jean-Pierre
2014-06-01
Pedestrian movement along critical infrastructures such as pipelines, railways, or highways is of major interest in surveillance applications, as is pedestrian behaviour in urban environments. The goal is to anticipate illicit or dangerous human activities. For this purpose, we propose an all-in-one small autonomous system that delivers high-level statistics and reports alerts in specific cases. This situational awareness project requires managing the scene efficiently through movement analysis. A dynamic background extraction algorithm is developed to achieve robustness against natural and urban environmental perturbations and to satisfy the constraints of the embedded implementation. When changes are detected in the scene, specific patterns are applied to detect and highlight relevant movements. Depending on the application, specific descriptors can be extracted and fused in order to reach a high level of interpretation. In this paper, our approach is applied to two operational use cases: pedestrian urban statistics and railway surveillance. In the first case, a grid of prototypes is deployed over a city centre to collect pedestrian movement statistics up to a macroscopic level of analysis. The results demonstrate the relevance of the delivered information; in particular, the flow density map highlights pedestrian preferential paths along the streets. In the second case, one prototype is set next to high-speed train tracks to secure the area. The results exhibit a low false alarm rate and support our approach of a large sensor network delivering a precise operational picture without overwhelming a supervisor.
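The abstract does not specify the background extraction algorithm; a minimal running-average background model is a common stand-in and illustrates the change-detection step. The update rate alpha, the threshold, and the synthetic frames below are assumptions for illustration only, not the authors' embedded implementation.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponentially blend the new frame into the background estimate."""
    return (1.0 - alpha) * background + alpha * frame

def detect_motion(background, frame, threshold=25.0):
    """Return a boolean mask of pixels that deviate strongly from the background."""
    return np.abs(frame - background) > threshold

# Illustrative use on synthetic 64x64 grayscale frames.
rng = np.random.default_rng(0)
background = rng.normal(128, 2, (64, 64))
frame = background.copy()
frame[20:30, 20:30] += 80            # a "moving object" entering the scene
mask = detect_motion(background, frame)
background = update_background(background, frame)
print(mask.sum(), "changed pixels")
```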
Accuracy evaluation of intraoral optical impressions: A clinical study using a reference appliance.
Atieh, Mohammad A; Ritter, André V; Ko, Ching-Chang; Duqum, Ibrahim
2017-09-01
Trueness and precision are used to evaluate the accuracy of intraoral optical impressions. Although the in vivo precision of intraoral optical impressions has been reported, in vivo trueness has not been evaluated because of limitations in the available protocols. The purpose of this clinical study was to compare the accuracy (trueness and precision) of optical and conventional impressions by using a novel study design. Five study participants consented and were enrolled. For each participant, optical and conventional (vinylsiloxanether) impressions of a custom-made intraoral Co-Cr alloy reference appliance fitted to the mandibular arch were obtained by 1 operator. Three-dimensional (3D) digital models were created for stone casts obtained from the conventional impression group and for the reference appliances by using a validated high-accuracy reference scanner. For the optical impression group, 3D digital models were obtained directly from the intraoral scans. The total mean trueness of each impression system was calculated by averaging the mean absolute deviations of the impression replicates from their 3D reference model for each participant, followed by averaging the obtained values across all participants. The total mean precision for each impression system was calculated by averaging the mean absolute deviations between all the impression replicas for each participant (10 pairs), followed by averaging the obtained values across all participants. Data were analyzed using repeated measures ANOVA (α=.05), first to assess whether a systematic difference in trueness or precision of replicate impressions could be found among participants and second to assess whether the mean trueness and precision values differed between the 2 impression systems. Statistically significant differences were found between the 2 impression systems for both mean trueness (P=.010) and mean precision (P=.007). Conventional impressions had higher accuracy with a mean trueness of 17.0 ±6.6 μm and mean precision of 16.9 ±5.8 μm than optical impressions with a mean trueness of 46.2 ±11.4 μm and mean precision of 61.1 ±4.9 μm. Complete arch (first molar-to-first molar) optical impressions were less accurate than conventional impressions but may be adequate for quadrant impressions. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Privacy, confidentiality and abortion statistics: a question of public interest?
McHale, Jean V; Jones, June
2012-01-01
The precise nature and scope of healthcare confidentiality has long been the subject of debate. While the obligation of confidentiality is integral to professional ethical codes and is also safeguarded under English law through the equitable remedy of breach of confidence, underpinned by the right to privacy enshrined in Article 8 of the Human Rights Act 1998, it has never been regarded as absolute. But when can and should personal information be made available for statistical and research purposes, and what if the information in question is highly sensitive, such as that relating to the termination of pregnancy after 24 weeks? This article explores the case of In the Matter of an Appeal to the Information Tribunal under section 57 of the Freedom of Information Act 2000, concerning the decision of the Department of Health to withhold some statistical data from the publication of its annual abortion statistics. The specific data being withheld concerned terminations for serious fetal handicap under section 1(1)(d) of the Abortion Act 1967. The paper explores the implications of this case, which relate to the nature and scope of personal privacy. It suggests that lessons can be drawn from this case about the public interest and the use of statistical information, and also about general policy issues concerning the legal regulation of confidentiality and privacy in the future.
A precise measurement of the $B^0$ meson oscillation frequency
Aaij, R.; Abellán Beteta, C.; Adeva, B.; ...
2016-07-21
The oscillation frequency, Δm_d, of B^0 mesons is measured using semileptonic decays with a D^- or D*^- meson in the final state. The data sample corresponds to 3.0 fb^-1 of pp collisions, collected by the LHCb experiment at centre-of-mass energies √s = 7 and 8 TeV. A combination of the two decay modes gives Δm_d = (505.0 ± 2.1 ± 1.0) ns^-1, where the first uncertainty is statistical and the second is systematic. This is the most precise single measurement of this parameter. It is consistent with the current world average and has similar precision.
Advances and Best Practices in Airborne Gravimetry from the U.S. GRAV-D Project
NASA Astrophysics Data System (ADS)
Diehl, Theresa; Childers, Vicki; Preaux, Sandra; Holmes, Simon; Weil, Carly
2013-04-01
The Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, an official policy of the U.S. National Geodetic Survey as of 2007, is working to survey the entire U.S. and its holdings with high-altitude airborne gravimetry. The goal of the project is to provide a consistent, high-quality gravity dataset that will become the cornerstone of a new gravimetric geoid and national vertical datum in 2022. Over the last five years, the GRAV-D project has surveyed more than 25% of the country, accomplishing almost 500 flights on six different aircraft platforms and producing more than 3.7 million square km of data thus far. This wealth of experience has led to advances in the collection, processing, and evaluation of high-altitude (20,000 - 35,000 ft) airborne gravity data. This presentation will highlight the most important practical and theoretical advances of the GRAV-D project, giving an introduction to each. Examples of innovation include: 1. Use of navigation-grade inertial measurement unit data and precise lever arm measurements for positioning; 2. New quality control tests and software for near real-time analysis of data in the field; 3. Increased accuracy of gravity post-processing by reexamining assumptions and simplifications that were inconsistent with a goal of 1 mGal precision; and 4. Better final data evaluation through crossovers, additional statistics, and inclusion of airborne data into harmonic models that use EGM08 as a base model. The increases in data quality that resulted from implementation of the above advances (and others) will be shown with a case study of the GRAV-D 2008 southern Alaska survey near Anchorage, over Cook Inlet. The case study's statistics and comparisons to global models illustrate the impact that these advances have had on the final airborne gravity data quality. Finally, the presentation will summarize the best practices identified by the project from its last five years of experience.
Research on the tool holder mode in high speed machining
NASA Astrophysics Data System (ADS)
Zhenyu, Zhao; Yongquan, Zhou; Houming, Zhou; Xiaomei, Xu; Haibin, Xiao
2018-03-01
High-speed machining technology can improve processing efficiency and precision while reducing processing cost, and it is therefore widely valued in industry. With the extensive application of high-speed machining technology, high-speed tool systems place increasingly demanding requirements on the tool chuck. At present, several new kinds of chucks are used in high-speed precision machining, including the heat-shrinkage tool holder, the high-precision spring chuck, the hydraulic tool holder, and the three-rib deformation chuck. Among them, the heat-shrinkage tool holder has the advantages of high precision, high clamping force, high bending rigidity, and good dynamic balance, and it is therefore widely used. It is thus of great significance to study the new requirements placed on the machining tool system. In order to meet the requirements of high-speed precision machining, this paper reviews the common tool holder technologies for high-precision machining and proposes how to select the correct tool clamping system in practice. The characteristics and existing problems of these tool clamping systems are analyzed.
How Do Statistical Detection Methods Compare to Entropy Measures
2012-08-28
October 2001. It is known as the RS attack or "Reliable Detection of LSB Steganography in Grayscale and Color Images". The algorithm they use is very...precise for the detection of pseudo-random LSB steganography. Its precision varies with the image but, its referential value is a 0.005 bits by...Jessica Fridrich, Miroslav Goljan, Rui Du, "Detecting LSB Steganography in Color and Gray-Scale Images," IEEE Multimedia, vol. 8, no. 4, pp. 22-28, Oct
Corpus and Method for Identifying Citations in Non-Academic Text (Open Access, Publisher’s Version)
2014-05-31
patents, train a CRF classifier to find new citations, and apply a reranker to incorporate non-local information. Our best system achieves 0.83 F-score on...report precision, recall, and F-scores at the chunk level. CRF training and decoding is performed with the CRF++ package (http://crfpp.sourceforge.net) using its default settings. 5.1...only obtain a very small number of training examples for statistical rerankers. Precision Recall F-score TEXT 0.7997 0.7805
Study of the effect of cloud inhomogeneity on the earth radiation budget experiment
NASA Technical Reports Server (NTRS)
Smith, Phillip J.
1988-01-01
The Earth Radiation Budget Experiment (ERBE) is the most recent and probably the most intensive mission designed to gather precise measurements of the Earth's radiation components. The data obtained from ERBE are of great importance for future climatological studies. A statistical study reveals that the ERBE scanner data are highly correlated and that instantaneous measurements corresponding to neighboring pixels contain almost the same information. It is therefore suggested that only a fraction of the data set be analyzed when sampling, and applications of this strategy are given for the calculation of the Earth's albedo and of the cloud forcing over the ocean.
Differential cross sections for the reactions γp → pη and γp → pη′
Williams, M.; Krahn, Z.; Applegate, D.; ...
2009-10-29
High-statistics differential cross sections for the reactions γp → pη and γp → pη′ have been measured using the CLAS at Jefferson Lab for center-of-mass energies from near threshold up to 2.84 GeV. The η′ results are the most precise to date and provide the largest energy and angular coverage. The η measurements extend the energy range of the world's large-angle results by approximately 300 MeV. These new data, in particular the η′ measurements, are likely to help constrain the analyses being performed to search for new baryon resonance states.
High-precision surface analysis of the roughness of Michelangelo's David
NASA Astrophysics Data System (ADS)
Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Materazzi, Marzia; Pampaloni, Enrico; Pezzati, Luca
2003-10-01
The knowledge of the shape of an artwork is an important element for its study and conservation. When dealing with a statue, roughness measurement is a very useful contribution to documenting its surface condition, to assessing changes due to restoration interventions or surface decay due to wearing agents, and to monitoring its evolution over time in terms of shape variations. In this work we present the preliminary results of the statistical analysis carried out on data acquired from six areas of Michelangelo's David marble statue, representative of differently degraded surfaces. Determination of the roughness and of the associated characteristic wavelength is shown.
Nickel and chromium isotopes in Allende inclusions
NASA Technical Reports Server (NTRS)
Birck, J. L.; Lugmair, G. W.
1988-01-01
High-precision nickel and chromium isotopic measurements were carried out on nine Allende inclusions. It is found that Ni-62 and Ni-64 excesses are present in at least three of the samples. The results suggest that the most likely mechanism for the anomalies is a neutron-rich statistical equilibrium process. An indication of elevated Ni-60 is found in almost every inclusion measured. This effect is thought to be related to the decay of the now extinct Fe-60. An upper limit of 1.6 × 10^-6 is calculated for the Fe-60/Fe-56 ratio at the time these Allende inclusions crystallized.
Single atom catalysts on amorphous supports: A quenched disorder perspective
NASA Astrophysics Data System (ADS)
Peters, Baron; Scott, Susannah L.
2015-03-01
Phenomenological models that invoke catalyst sites with different adsorption constants and rate constants are well-established, but computational and experimental methods are just beginning to provide atomically resolved details about amorphous surfaces and their active sites. This letter develops a statistical transformation from the quenched disorder distribution of site structures to the distribution of activation energies for sites on amorphous supports. We show that the overall kinetics are highly sensitive to the precise nature of the low energy tail in the activation energy distribution. Our analysis motivates further development of systematic methods to identify and understand the most reactive members of the active site distribution.
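A small numerical illustration of the sensitivity claim above: when site activation energies are distributed, the ensemble-averaged Arrhenius rate is dominated by the low-energy tail. The Gaussian site-energy distribution, the temperature, the unit prefactor, and the 0.8 eV cutoff are assumptions for illustration, not values from the letter.

```python
import numpy as np

kB_T = 0.0257 * 400 / 298            # k_B*T in eV at roughly 400 K (0.0257 eV at 298 K, scaled)
rng = np.random.default_rng(3)

def mean_rate(ea_samples):
    """Ensemble-averaged rate constant with a unit prefactor."""
    return np.mean(np.exp(-ea_samples / kB_T))

ea_full = rng.normal(1.0, 0.10, 100_000)          # Gaussian distribution of activation energies (eV)
ea_trimmed = ea_full[ea_full > 0.8]               # same ensemble with the low-energy tail removed
print(mean_rate(ea_full) / mean_rate(ea_trimmed)) # removing the tail changes the mean rate dramatically
```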
Musmade, Kranti P.; Trilok, M.; Dengale, Swapnil J.; Bhat, Krishnamurthy; Reddy, M. S.; Musmade, Prashant B.; Udupa, N.
2014-01-01
A simple, precise, accurate, rapid, and sensitive reverse-phase high-performance liquid chromatography (RP-HPLC) method with UV detection has been developed and validated for quantification of naringin (NAR) in a novel pharmaceutical formulation. NAR is a polyphenolic flavonoid present in most citrus plants and has a variety of pharmacological activities. Method optimization was carried out by considering various parameters such as the effect of pH and column. The analyte was separated on a C18 (250.0 × 4.6 mm, 5 μm) column at ambient temperature under isocratic conditions using phosphate buffer pH 3.5 : acetonitrile (75 : 25% v/v) as the mobile phase pumped at a flow rate of 1.0 mL/min. UV detection was carried out at 282 nm. The developed method was validated according to ICH guideline Q2(R1). The method was found to be precise and accurate on statistical evaluation, with a linearity range of 0.1 to 20.0 μg/mL for NAR. The intra- and inter-day precision studies showed good reproducibility with coefficients of variation (CV) less than 1.0%. The mean recovery of NAR was found to be 99.33 ± 0.16%. The proposed method was found to be highly accurate, sensitive, and robust. The proposed liquid chromatographic method was successfully employed for the routine analysis of the said compound in the developed novel nanopharmaceuticals. The presence of excipients did not show any interference with the determination of NAR, indicating method specificity. PMID:26556205
An Improved Snake Model for Refinement of Lidar-Derived Building Roof Contours Using Aerial Images
NASA Astrophysics Data System (ADS)
Chen, Qi; Wang, Shugen; Liu, Xiuguo
2016-06-01
Building roof contours are considered very important geometric data and have been widely applied in many fields, including but not limited to urban planning, land investigation, change detection, and military reconnaissance. Currently, the demand for building contours at a finer scale (especially in urban areas) has been raised in a growing number of studies such as urban environment quality assessment, urban sprawl monitoring, and urban air pollution modelling. LiDAR is known as an effective means of acquiring 3D roof points with high elevation accuracy. However, the precision of the building contour obtained from LiDAR data is restricted by its relatively low scanning resolution. With the use of texture information from high-resolution imagery, this precision can be improved. In this study, an improved snake model is proposed to refine the initial building contours extracted from LiDAR. First, an improved snake model is constructed with constraints on the deviation angle, image gradient, and area. Then, the nodes of the contour are moved within a certain range to find the best optimized result using a greedy algorithm. Considering both precision and efficiency, the candidate shift positions of the contour nodes are constrained, and the searching strategy for the candidate nodes is explicitly designed. The experiments on three datasets indicate that the proposed method for building contour refinement is effective and feasible. The average quality index is improved from 91.66% to 93.34%. The statistics of the evaluation results for every single building demonstrate that 77.0% of the total number of contours are updated with a higher quality index.
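A hedged sketch of the greedy refinement idea: each contour node is tested within a small window and moved to the position minimising a combined energy. The particular energy terms used here (negative gradient magnitude plus a discrete smoothness term), the weights, and the toy image are assumptions standing in for the paper's deviation-angle, gradient, and area constraints.

```python
import numpy as np

def refine_contour(contour, grad_mag, window=2, w_grad=1.0, w_smooth=0.5):
    """One greedy pass over contour nodes (N x 2 integer array of row, col)."""
    refined = contour.copy()
    n = len(refined)
    for i in range(n):
        prev_pt, next_pt = refined[i - 1], refined[(i + 1) % n]
        best, best_e = refined[i].copy(), np.inf
        for dr in range(-window, window + 1):
            for dc in range(-window, window + 1):
                cand = refined[i] + np.array([dr, dc])
                r, c = int(cand[0]), int(cand[1])
                if not (0 <= r < grad_mag.shape[0] and 0 <= c < grad_mag.shape[1]):
                    continue
                e_img = -grad_mag[r, c]                                   # favour strong edges
                e_smooth = np.linalg.norm(prev_pt + next_pt - 2 * cand)   # favour locally straight segments
                energy = w_grad * e_img + w_smooth * e_smooth
                if energy < best_e:
                    best, best_e = cand, energy
        refined[i] = best
    return refined

# Illustrative call on a synthetic gradient image and a square initial contour.
grad = np.zeros((50, 50)); grad[10:40, 10:40] = 1.0
square = np.array([[12, 12], [12, 38], [38, 38], [38, 12]])
print(refine_contour(square, grad))
```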
Selecting the optimum plot size for a California design-based stream and wetland mapping program.
Lackey, Leila G; Stein, Eric D
2014-04-01
Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km^2) for three geographic regions of California. Simulation results showed smaller plot sizes (1 and 4 km^2) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to selection of 4 km^2 for the California status and trends program.
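The simulated-sampling comparison can be mimicked with a few lines of Monte Carlo: draw random plots of different sizes from a synthetic wetland map and compare the variability of the resulting extent estimates. The landscape, cover fraction, plot sizes, and sample counts below are illustrative assumptions, not the California data or the study's actual design.

```python
import numpy as np

rng = np.random.default_rng(42)
landscape = (rng.random((400, 400)) < 0.03).astype(float)   # synthetic map with ~3% wetland cover
true_fraction = landscape.mean()

def estimate_cv(plot_side, n_plots=50, n_reps=500):
    """Coefficient of variation of the estimated wetland fraction over repeated random samples."""
    estimates = []
    for _ in range(n_reps):
        rows = rng.integers(0, 400 - plot_side, n_plots)
        cols = rng.integers(0, 400 - plot_side, n_plots)
        fracs = [landscape[r:r + plot_side, c:c + plot_side].mean()
                 for r, c in zip(rows, cols)]
        estimates.append(np.mean(fracs))
    estimates = np.array(estimates)
    return estimates.std() / estimates.mean()

for side in (10, 20, 30, 40):        # stand-ins for the 1, 4, 9 and 16 km^2 plot sizes
    print(side, round(estimate_cv(side), 3))
```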
Proton radius from electron scattering data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Higinbotham, Douglas W.; Kabir, Al Amin; Lin, Vincent
Background: The proton charge radius extracted from recent muonic hydrogen Lamb shift measurements is significantly smaller than that extracted from atomic hydrogen and electron scattering measurements. The discrepancy has become known as the proton radius puzzle. Purpose: In an attempt to understand the discrepancy, we review high-precision electron scattering results from Mainz, Jefferson Lab, Saskatoon and Stanford. Methods: We make use of stepwise regression techniques using the F-test as well as the Akaike information criterion to systematically determine the predictive variables to use for a given set and range of electron scattering data, as well as to provide multivariate error estimates. Results: Starting with the precision, low four-momentum transfer (Q^2) data from Mainz (1980) and Saskatoon (1974), we find that a stepwise regression of the Maclaurin series using the F-test as well as the Akaike information criterion justifies using a linear extrapolation, which yields a value for the proton radius that is consistent with the result obtained from muonic hydrogen measurements. Applying the same Maclaurin series and statistical criteria to the 2014 Rosenbluth results on G_E from Mainz, we again find that the stepwise regression tends to favor a radius consistent with the muonic hydrogen radius but produces results that are extremely sensitive to the range of data included in the fit. Making use of the high-Q^2 data on G_E to select functions which extrapolate to high Q^2, we find that a Padé (N = M = 1) statistical model works remarkably well, as does a dipole function with a 0.84 fm radius, G_E(Q^2) = (1 + Q^2/0.66 GeV^2)^-2. Conclusions: Rigorous application of stepwise regression techniques and multivariate error estimates results in the extraction of a proton charge radius that is consistent with the muonic hydrogen result of 0.84 fm, either from linear extrapolation of the extreme low-Q^2 data or by use of the Padé approximant for extrapolation over a larger range of data. Thus, based on a purely statistical analysis of electron scattering data, we conclude that the electron scattering result and the muonic hydrogen result are consistent. Lastly, it is the atomic hydrogen results that are the outliers.
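A sketch of the kind of polynomial-order selection described above, under stated assumptions: Maclaurin-series fits of increasing order in Q^2 are compared by a simple AIC, and the charge radius is read off the linear coefficient via G_E(Q^2) ≈ 1 - r^2 Q^2/6. The synthetic dipole data, the assumed uncertainty, and the bare-bones AIC are illustrative; they are not the Mainz or Saskatoon data, nor the paper's full stepwise-regression and multivariate-error machinery.

```python
import numpy as np

def fit_radius(q2, ge, sigma, max_order=4):
    """Fit G_E(Q^2) with polynomials of increasing order, pick by AIC, return (order, radius in fm)."""
    best = None
    for order in range(1, max_order + 1):
        X = np.vander(q2, order + 1, increasing=True)      # columns 1, Q^2, Q^4, ...
        w = 1.0 / sigma
        coef, *_ = np.linalg.lstsq(X * w[:, None], ge * w, rcond=None)
        chi2 = np.sum(((ge - X @ coef) * w) ** 2)
        aic = chi2 + 2 * (order + 1)                       # AIC for Gaussian errors, up to a constant
        if best is None or aic < best[0]:
            best = (aic, order, coef)
    _, order, coef = best
    radius_fm2 = -6.0 * coef[1] * 0.0389379                # convert the slope from GeV^-2 to fm^2
    return order, np.sqrt(radius_fm2)

q2 = np.linspace(0.005, 0.05, 20)                          # GeV^2, illustrative low-Q^2 range
ge = (1 + q2 / 0.66) ** -2                                 # dipole form factor with ~0.84 fm radius
order, r = fit_radius(q2, ge, sigma=np.full_like(q2, 1e-4))
print(order, round(r, 3), "fm")
```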
Krägeloh, Christian U; Medvedev, Oleg N; Hill, Erin M; Webster, Craig S; Booth, Roger J; Henning, Marcus A
2018-01-01
Measuring competitiveness is necessary to fully understand variables affecting student learning. The 14-item Revised Competitiveness Index has become a widely used measure to assess trait competitiveness. The current study reports on a Rasch analysis to investigate the psychometric properties of the Revised Competitiveness Index and to improve its precision for international comparisons. Students were recruited from medical studies at a university in New Zealand, undergraduate health sciences courses at another New Zealand university, and a psychology undergraduate class at a university in the United States. Rasch model estimate parameters were affected by local dependency and item misfit. Best fit to the Rasch model (χ^2(20) = 15.86, p = .73, person separation index = .95) was obtained for the Enjoyment of Competition subscale after combining locally dependent items into a subtest and discarding the highly misfitting Item 9. The only modifications required to obtain a suitable fit (χ^2(25) = 25.81, p = .42, person separation index = .77) for the Contentiousness subscale were a subtest to combine two locally dependent items and splitting this subtest by country to deal with differential item functioning. The results support reliability and internal construct validity of the modified Revised Competitiveness Index. Precision of the measure may be enhanced using the ordinal-to-interval conversion algorithms presented here, allowing the use of parametric statistics without breaking fundamental statistical assumptions.
Selecting relevant 3D image features of margin sharpness and texture for lung nodule retrieval.
Ferreira, José Raniery; de Azevedo-Marques, Paulo Mazzoncini; Oliveira, Marcelo Costa
2017-03-01
Lung cancer is the leading cause of cancer-related deaths in the world. Its diagnosis is a challenging task for specialists due to several aspects of the classification of lung nodules. Therefore, it is important to integrate content-based image retrieval methods into the lung nodule classification process, since they are capable of retrieving similar, previously diagnosed cases from databases. However, this mechanism depends on extracting relevant image features in order to achieve high efficiency. The goal of this paper is to perform the selection of 3D image features of margin sharpness and texture that can be relevant for the retrieval of similar cancerous and benign lung nodules. A total of 48 3D image attributes were extracted from the nodule volume. Border sharpness features were extracted from perpendicular lines drawn over the lesion boundary. Second-order texture features were extracted from a co-occurrence matrix. Relevant features were selected by a correlation-based method and a statistical significance analysis. Retrieval performance was assessed according to the nodule's potential malignancy on the 10 most similar cases and by the parameters of precision and recall. Statistically significant features reduced retrieval performance. The correlation-based method selected 2 margin sharpness attributes and 6 texture attributes and obtained higher precision compared to all 48 extracted features in similar nodule retrieval. A feature space dimensionality reduction of 83% obtained higher retrieval performance and proved to be a computationally low-cost method for retrieving similar nodules for the diagnosis of lung cancer.
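A minimal sketch of the retrieval-evaluation idea, under assumptions: nodules are ranked by Euclidean distance in the selected feature space, and precision is computed over the 10 nearest neighbours sharing the query's malignancy label. The synthetic feature matrix, the labels, and the choice of Euclidean distance are illustrative, not the paper's data or similarity measure.

```python
import numpy as np

def precision_at_10(features, labels, query_idx):
    """Fraction of the 10 nearest neighbours that share the query nodule's label."""
    d = np.linalg.norm(features - features[query_idx], axis=1)
    nearest = np.argsort(d)[1:11]                 # skip index 0, which is the query itself
    return np.mean(labels[nearest] == labels[query_idx])

rng = np.random.default_rng(7)
features = rng.normal(size=(100, 8))              # e.g. 8 selected margin/texture attributes
labels = rng.integers(0, 2, 100)                  # 0 = benign, 1 = malignant (synthetic)
print(precision_at_10(features, labels, query_idx=0))
```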
Peters, Thomas M; Sawvel, Eric J; Willis, Robert; West, Roger R; Casuccio, Gary S
2016-07-19
We report on the precision and accuracy of measuring PM10-2.5 and its components with particles collected by passive aerosol samplers and analyzed by computer-controlled scanning electron microscopy with energy dispersive X-ray spectroscopy. Passive samplers were deployed for week-long intervals in triplicate and colocated with a federal reference method sampler at three sites for 5 weeks in summer 2009 and 5 weeks in winter 2010 in Cleveland, OH. The limit of detection of the passive method for PM10-2.5 determined from blank analysis was 2.8 μg m^-3. Overall precision expressed as root-mean-square coefficient of variation (CV_RMS) improved with increasing concentrations (37% for all samples, n = 30; 19% for PM10-2.5 > 10 μg m^-3, n = 9; and 10% for PM10-2.5 > 15 μg m^-3, n = 4). The linear regression of PM10-2.5 measured passively on that measured with the reference sampler exhibited an intercept not statistically different from zero (p = 0.46) and a slope not statistically different from unity (p = 0.92). Triplicates with high CVs (CV > 40%, n = 5) were attributed to low particle counts (and mass concentrations), spurious counts attributed to salt particles, and Al-rich particles. This work provides important quantitative observations that can help guide future development and use of passive samplers for measuring atmospheric particulate matter.
Automated brain volumetrics in multiple sclerosis: a step closer to clinical application.
Wang, C; Beadnall, H N; Hatton, S N; Bader, G; Tomic, D; Silva, D G; Barnett, M H
2016-07-01
Whole brain volume (WBV) estimates in patients with multiple sclerosis (MS) correlate more robustly with clinical disability than traditional, lesion-based metrics. Numerous algorithms to measure WBV have been developed over the past two decades. We compare Structural Image Evaluation using Normalisation of Atrophy-Cross-sectional (SIENAX) to NeuroQuant and MSmetrix, for assessment of cross-sectional WBV in patients with MS. MRIs from 61 patients with relapsing-remitting MS and 2 patients with clinically isolated syndrome were analysed. WBV measurements were calculated using SIENAX, NeuroQuant and MSmetrix. Statistical agreement between the methods was evaluated using linear regression and Bland-Altman plots. Precision and accuracy of WBV measurement was calculated for (1) NeuroQuant versus SIENAX and (2) MSmetrix versus SIENAX. Precision (Pearson's r) of WBV estimation for NeuroQuant and MSmetrix versus SIENAX was 0.983 and 0.992, respectively. Accuracy (Cb) was 0.871 and 0.994, respectively. NeuroQuant and MSmetrix showed a 5.5% and 1.0% volume difference compared with SIENAX, respectively, that was consistent across low and high values. In the analysed population, NeuroQuant and MSmetrix both quantified cross-sectional WBV with comparable statistical agreement to SIENAX, a well-validated cross-sectional tool that has been used extensively in MS clinical studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Neher, Christopher; Bair, Lucas S.; Duffield, John; Patterson, David A.; Neher, Katherine
2018-01-01
We directly compare trip willingness to pay (WTP) values between dichotomous choice contingent valuation (DCCV) and discrete choice experiment (DCE) stated preference surveys of private party Grand Canyon whitewater boaters. The consistency of DCCV and DCE estimates is debated in the literature, and this study contributes to the body of work comparing the methods. Comparisons were made of mean WTP estimates for four hypothetical Colorado River flow-level scenarios. Boaters were found to most highly value mid-range flows, with very low and very high flows eliciting lower WTP estimates across both DCE and DCCV surveys. Mean WTP precision was estimated through simulation. No statistically significant differences were detected between the two methods at three of the four hypothetical flow levels.
NASA Astrophysics Data System (ADS)
Förster, Matthias; Rashev, Mikhail; Haaland, Stein
2017-04-01
The Electron Drift Instrument (EDI) onboard Cluster can measure 500 eV and 1 keV electron fluxes with high time resolution during passive operation phases in its Ambient Electron (AE) mode. Data from this mode have been available in the Cluster Science Archive since October 2004 with a cadence of 16 Hz in the normal mode or 128 Hz for burst-mode telemetry intervals. The fluxes are recorded at pitch angles of 0, 90, and 180 degrees. This paper describes the calibration and validation of these measurements. The high-resolution AE data allow precise temporal and spatial diagnostics of magnetospheric boundaries and will be used for case studies and statistical studies of low-energy electron fluxes in near-Earth space. We show examples of applications.
The Joint Physics Analysis Center: Recent results
NASA Astrophysics Data System (ADS)
Fernández-Ramírez, César
2016-10-01
We review some of the recent achievements of the Joint Physics Analysis Center, a theoretical collaboration with ties to experimental collaborations, that aims to provide amplitudes suitable for the analysis of the current and forthcoming experimental data on hadron physics. Since its foundation in 2013, the group is focused on hadron spectroscopy in preparation for the forthcoming high statistics and high precision experimental data from BELLEII, BESIII, CLAS12, COMPASS, GlueX, LHCb and (hopefully) PANDA collaborations. So far, we have developed amplitudes for πN scattering, KN scattering, pion and J/ψ photoproduction, two kaon photoproduction and three-body decays of light mesons (η, ω, ϕ). The codes for the amplitudes are available to download from the group web page and can be straightforwardly incorporated to the analysis of the experimental data.
Changing computing paradigms towards power efficiency
Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro
2014-01-01
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
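A minimal sketch of the mixed-precision idea the paragraph alludes to, under assumptions: the linear system is solved cheaply in single precision and the solution is then corrected by iterative refinement, with residuals computed in double precision. Re-solving instead of reusing a stored factorisation, the matrix size, and the well-conditioned test problem are simplifications for illustration, not the authors' implementation.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b: cheap float32 solves, float64 residual corrections."""
    A32, b32 = A.astype(np.float32), b.astype(np.float32)
    x = np.linalg.solve(A32, b32).astype(np.float64)        # initial low-precision solution
    for _ in range(iters):
        r = b - A @ x                                        # residual in double precision
        dx = np.linalg.solve(A32, r.astype(np.float32))      # correction in single precision
        x += dx.astype(np.float64)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)     # well-conditioned test matrix
b = rng.standard_normal(200)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))                             # residual approaches double-precision level
```

Most of the arithmetic (and hence most of the energy) is spent in the low-precision solves, which is the trade-off the paragraph describes.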
Minimizing magnetic fields for precision experiments
NASA Astrophysics Data System (ADS)
Altarev, I.; Fierlinger, P.; Lins, T.; Marino, M. G.; Nießen, B.; Petzoldt, G.; Reisner, M.; Stuiber, S.; Sturm, M.; Taggart Singh, J.; Taubenheim, B.; Rohrer, H. K.; Schläpfer, U.
2015-06-01
An increasing number of measurements in fundamental and applied physics rely on magnetically shielded environments with sub-nanotesla residual magnetic fields. State-of-the-art magnetically shielded rooms (MSRs) consist of up to seven layers of high-permeability materials in combination with highly conductive shields. Proper magnetic equilibration is crucial to obtain such low magnetic fields with small gradients in any MSR. Here, we report on a scheme to magnetically equilibrate MSRs with a 10 times shorter magnetic equilibration sequence and a significantly lower magnetic field with improved homogeneity. For the search for the neutron's electric dipole moment, our finding corresponds to a 40% improvement in the statistical reach of the measurement. Moreover, this versatile procedure can improve the performance of any MSR for any application.
2018-01-01
Objective To investigate the psychometric properties of the activities of daily living (ADL) instrument used in the analysis of the Korean Longitudinal Study of Ageing (KLoSA) dataset. Methods A retrospective study was carried out involving 2006 KLoSA records of community-dwelling adults diagnosed with stroke. The ADL instrument used for the analysis of KLoSA included 17 items, which were analyzed using Rasch modeling to develop a robust outcome measure. The unidimensionality of the ADL instrument was examined based on confirmatory factor analysis with a one-factor model. Item-level psychometric analysis of the ADL instrument included fit statistics, internal consistency, precision, and the item difficulty hierarchy. Results The study sample included a total of 201 community-dwelling adults (1.5% of the Korean population with an age over 45 years; mean age=70.0 years, SD=9.7) having a history of stroke. The ADL instrument demonstrated a unidimensional construct. Two misfit items, money management (mean square [MnSq]=1.56, standardized Z-statistics [ZSTD]=2.3) and phone use (MnSq=1.78, ZSTD=2.3), were removed from the analysis. The remaining 15 items demonstrated good item fit, high internal consistency (person reliability=0.91), and good precision (person strata=3.48). The instrument precisely estimated person measures within a wide range of theta (−4.75 logits < θ < 3.97 logits) and a reliability of 0.9, with a conceptual hierarchy of item difficulty. Conclusion The findings indicate that the 15 ADL items met Rasch expectations of unidimensionality and demonstrated good psychometric properties. It is proposed that the validated ADL instrument can be used as a primary outcome measure for assessing longitudinal disability trajectories in the Korean adult population and can be employed for comparative analysis of international disability across national aging studies. PMID:29765888
Ultrasonic wave velocity measurement in small polymeric and cortical bone specimens
NASA Technical Reports Server (NTRS)
Kohles, S. S.; Bowers, J. R.; Vailas, A. C.; Vanderby, R. Jr
1997-01-01
A system was refined for the determination of the bulk ultrasonic wave propagation velocity in small cortical bone specimens. Longitudinal and shear wave propagations were measured using ceramic, piezoelectric 20 and 5 MHz transducers, respectively. Results of the pulse transmission technique were refined via the measurement of the system delay time. The precision and accuracy of the system were quantified using small specimens of polyoxymethylene, polystyrene-butadiene, and high-density polyethylene. These polymeric materials had known acoustic properties, similarity of propagation velocities to cortical bone, and minimal sample inhomogeneity. Dependence of longitudinal and transverse specimen dimensions upon propagation times was quantified. To confirm the consistency of longitudinal wave propagation in small cortical bone specimens (< 1.0 mm), cut-down specimens were prepared from a normal rat femur. Finally, cortical samples were prepared from each of ten normal rat femora, and Young's moduli (E_ii), shear moduli (G_ij), and Poisson ratios (ν_ij) were measured. For all specimens (bone, polyoxymethylene, polystyrene-butadiene, and high-density polyethylene), strong linear correlations (R^2 > 0.997) were maintained between propagation time and distance throughout the size ranges down to less than 0.4 mm. Results for polyoxymethylene, polystyrene-butadiene, and high-density polyethylene were accurate to within 5 percent of reported literature values. Measurement repeatability (precision) improved with an increase in the wave transmission distance (propagating dimension). No statistically significant effect due to the transverse dimension was detected.
NASA Astrophysics Data System (ADS)
Cai, Y.
2017-12-01
Accurately forecasting crop yields has broad implications for economic trading, food production monitoring, and global food security. However, the variation of environmental variables presents challenges for modelling yields accurately, especially when the lack of highly accurate measurements creates difficulties in creating models that can succeed across space and time. In 2016, we developed a sequence of machine-learning based models forecasting end-of-season corn yields for the US at both the county and national levels. We combined machine learning algorithms in a hierarchical way, and used an understanding of physiological processes in temporal feature selection, to achieve high precision in our intra-season forecasts, including in very anomalous seasons. During the live run, we predicted the national corn yield within 1.40% of the final USDA number as early as August. In backtesting over the 2000-2015 period, our model predicts national yield within 2.69% of the actual yield on average by mid-August. At the county level, our model predicts 77% of the variation in final yield using data through the beginning of August and improves to 80% by the beginning of October, with the percentage of counties predicted within 10% of the average yield increasing from 68% to 73%. Further, the lowest errors are in the most significant producing regions, resulting in very high precision national-level forecasts. In addition, we identify the changes in important variables throughout the season, specifically early-season land surface temperature, and mid-season land surface temperature and vegetation index. For the 2017 season, we feed 2016 data into the training set, together with additional geospatial data sources, aiming to make the current model even more precise. We will show how our 2017 US corn yield forecasts converge in time, which factors affect the yield the most, and present our plans for 2018 model adjustments.
Ma, Shu-Ching; Li, Yu-Chi; Yui, Mei-Shu
2014-01-01
Background Workplace bullying is a prevalent problem in contemporary work places that has adverse effects on both the victims of bullying and organizations. With the rapid development of computer technology in recent years, there is an urgent need to prove whether item response theory–based computerized adaptive testing (CAT) can be applied to measure exposure to workplace bullying. Objective The purpose of this study was to evaluate the relative efficiency and measurement precision of a CAT-based test for hospital nurses compared to traditional nonadaptive testing (NAT). Under the preliminary conditions of a single domain derived from the scale, a CAT module bullying scale model with polytomously scored items is provided as an example for evaluation purposes. Methods A total of 300 nurses were recruited and responded to the 22-item Negative Acts Questionnaire-Revised (NAQ-R). All NAT (or CAT-selected) items were calibrated with the Rasch rating scale model and all respondents were randomly selected for a comparison of the advantages of CAT and NAT in efficiency and precision by paired t tests and the area under the receiver operating characteristic curve (AUROC). Results The NAQ-R is a unidimensional construct that can be applied to measure exposure to workplace bullying through CAT-based administration. Nursing measures derived from both tests (CAT and NAT) were highly correlated (r=.97) and their measurement precisions were not statistically different (P=.49) as expected. CAT required fewer items than NAT (an efficiency gain of 32%), suggesting a reduced burden for respondents. There were significant differences in work tenure between the 2 groups (bullied and nonbullied) at a cutoff point of 6 years at 1 worksite. An AUROC of 0.75 (95% CI 0.68-0.79) with logits greater than –4.2 (or >30 in summation) was defined as being highly likely bullied in a workplace. Conclusions With CAT-based administration of the NAQ-R for nurses, their burden was substantially reduced without compromising measurement precision. PMID:24534113
De Backer, A; Martinez, G T; Rosenauer, A; Van Aert, S
2013-11-01
In the present paper, a statistical model-based method to count the number of atoms of monotype crystalline nanostructures from high resolution high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) images is discussed in detail together with a thorough study on the possibilities and inherent limitations. In order to count the number of atoms, it is assumed that the total scattered intensity scales with the number of atoms per atom column. These intensities are quantitatively determined using model-based statistical parameter estimation theory. The distribution describing the probability that intensity values are generated by atomic columns containing a specific number of atoms is inferred on the basis of the experimental scattered intensities. Finally, the number of atoms per atom column is quantified using this estimated probability distribution. The number of atom columns available in the observed STEM image, the number of components in the estimated probability distribution, the width of the components of the probability distribution, and the typical shape of a criterion to assess the number of components in the probability distribution directly affect the accuracy and precision with which the number of atoms in a particular atom column can be estimated. It is shown that single atom sensitivity is feasible taking the latter aspects into consideration. © 2013 Elsevier B.V. All rights reserved.
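One way to make the counting idea concrete is a Gaussian-mixture sketch: column intensities are modelled as a mixture, the number of components is chosen with an information criterion, and each column is assigned to a component. The synthetic intensities, the use of BIC rather than the ICL-type criterion discussed in the paper, and the fixed search range are assumptions for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated scattered intensities for columns containing three different atom counts.
intensities = np.concatenate(
    [rng.normal(mu, 0.05, 80) for mu in (1.0, 1.3, 1.6)]
).reshape(-1, 1)

best_model, best_bic = None, np.inf
for k in range(1, 8):                                     # search over the number of components
    gm = GaussianMixture(n_components=k, random_state=0).fit(intensities)
    bic = gm.bic(intensities)
    if bic < best_bic:
        best_model, best_bic = gm, bic

counts = best_model.predict(intensities)                  # component label per atomic column
order = np.argsort(best_model.means_.ravel())             # components ordered by mean intensity
print("estimated number of components:", best_model.n_components)
print("columns assigned to the thickest class:", np.sum(counts == order[-1]))
```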
Influence of Running on Pistol Shot Hit Patterns.
Kerkhoff, Wim; Bolck, Annabel; Mattijssen, Erwin J A T
2016-01-01
In shooting scene reconstructions, risk assessment of the situation can be important for the legal system. Shooting accuracy and precision, and thus risk assessment, might be correlated with the shooter's physical movement and experience. The hit patterns of inexperienced and experienced shooters, while shooting stationary (10 shots) and in running motion (10 shots) with a semi-automatic pistol, were compared visually (with confidence ellipses) and statistically. The results show a significant difference in precision (circumference of the hit patterns) between stationary shots and shots fired in motion for both inexperienced and experienced shooters. The decrease in precision for all shooters was significantly larger in the y-direction than in the x-direction. The precision of the experienced shooters is overall better than that of the inexperienced shooters. No significant change in accuracy (shift in the hit pattern center) between stationary shots and shots fired in motion can be seen for all shooters. © 2015 American Academy of Forensic Sciences.
Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.
2008-08-11
Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures are suitable as quality assurance routines that allow the verification of the equipment's geometrical accuracy and precision. In this work, mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, the isocenter accuracy was shown to be smaller than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with mask. All uncertainties were below tolerances. The highest contribution to mechanical variations is due to table rotation, so it is important to correct these variations using a localization frame with printed overlays. Knowledge of the mechanical precision would allow the statistical errors to be considered in the treatment planning volume margins.
Error analysis of high-rate GNSS precise point positioning for seismic wave measurement
NASA Astrophysics Data System (ADS)
Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan
2017-06-01
High-rate GNSS precise point positioning (PPP) has been playing an increasingly important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has recently been reported, on the basis of experiments, to reach a few millimeters in the horizontal components and sub-centimeter level in the vertical component when measuring seismic motion, which is several times better than conventional kinematic PPP practice. To fully understand the mechanism behind this seemingly surprising performance of high-rate PPP within a short period of time, we have carried out a theoretical error analysis of PPP and conducted the corresponding simulations. The theoretical analysis clearly indicates that high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and the simulated results are fully consistent with, and thus unambiguously confirm, the reported high precision of high-rate PPP, which is further affirmed here by real-data experiments, indicating that high-rate PPP can indeed achieve millimeter-level precision in the horizontal components and sub-centimeter-level precision in the vertical component when measuring motion within a short period of time. The simulation results clearly show that the random noise of carrier phases and higher-order ionospheric errors are the two major factors affecting the precision of high-rate PPP within a short period of time. The experiments with real data also indicate that the precision of PPP solutions can degrade to the centimeter level in both the horizontal and vertical components if the satellite geometry is poor, with a large DOP value.
Raees Ahmad, Sufiyan Ahmad; Patil, Lalit; Mohammed Usman, Mohammed Rageeb; Imran, Mohammad; Akhtar, Rashid
2018-01-01
A simple, rapid, accurate, precise, and reproducible validated reverse-phase high-performance liquid chromatography (HPLC) method was developed for the determination of Abacavir (ABAC) and Lamivudine (LAMI) in bulk and tablet dosage forms. The quantification was carried out using a Symmetry Premsil C18 (250 mm × 4.6 mm, 5 μm) column run isocratically with a mobile phase comprising methanol : water (0.05% orthophosphoric acid, pH 3) 83:17 v/v, a detection wavelength of 245 nm, an injection volume of 20 μl, and a flow rate of 1 ml/min. In the developed method, the retention times of ABAC and LAMI were found to be 3.5 min and 7.4 min, respectively. The method was validated in terms of linearity, precision, accuracy, limit of detection, limit of quantitation, and robustness in accordance with the International Conference on Harmonization (ICH) guidelines. The assay of the proposed method was found to be 99% - 101%. Recovery studies were also carried out, and the mean % recovery was found to be 99% - 101%. The % relative standard deviation for reproducibility was found to be <2%. The proposed method was statistically evaluated and can be applied for routine quality control analysis of ABAC and LAMI in bulk and in tablet dosage form. The linearity, precision, range, and robustness were within the limits specified by the ICH guidelines. Hence the method was found to be simple, accurate, precise, economical, and reproducible, and it can be used for the routine quality control analysis of Abacavir and Lamivudine in bulk drug as well as in formulations. Abbreviations used: HPLC: High-performance liquid chromatography; UV: Ultraviolet; ICH: International Conference on Harmonization; ABAC: Abacavir; LAMI: Lamivudine; HIV: Human immunodeficiency virus; AIDS: Acquired immunodeficiency syndrome; NRTI: Nucleoside reverse transcriptase inhibitors; ARV: Antiretroviral; RSD: Relative standard deviation; RT: Retention time; SD: Standard deviation.
Automatic computational labeling of glomerular textural boundaries
NASA Astrophysics Data System (ADS)
Ginley, Brandon; Tomaszewski, John E.; Sarder, Pinaki
2017-03-01
The glomerulus, a specialized bundle of capillaries, is the blood-filtering unit of the kidney. Each human kidney contains about 1 million glomeruli. Structural damage in the glomerular micro-compartments gives rise to several renal conditions, the most severe of which is proteinuria, where excessive blood proteins flow freely into the urine. The sole way to confirm glomerular structural damage in renal pathology is by examining histopathological or immunofluorescence-stained needle biopsies under a light microscope. However, this method is extremely tedious and time consuming, and requires manual scoring of the number and volume of structures. Computational quantification of equivalent features promises to greatly ease this manual burden. The largest obstacle to computational quantification of renal tissue is the ability to recognize complex glomerular textural boundaries automatically. Here we present a computational pipeline to identify glomerular boundaries automatically with high precision and accuracy. The pipeline employs an integrated approach composed of Gabor filtering, Gaussian blurring, statistical F-testing, and the distance transform, and performs significantly better than a standard Gabor-based textural segmentation method. Our integrated approach provides a mean accuracy/precision of 0.89/0.97 on n = 200 Hematoxylin and Eosin (HE) glomerulus images, and a mean accuracy/precision of 0.88/0.94 on n = 200 Periodic Acid Schiff (PAS) glomerulus images. The respective accuracy/precision of the Gabor filter bank based method is 0.83/0.84 for HE and 0.78/0.80 for PAS. Our method will simplify computational partitioning of glomerular micro-compartments hidden within dense textural boundaries. Automatic quantification of glomeruli will streamline structural analysis in the clinic, and can help realize real-time diagnoses and interventions.
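A hedged sketch of the texture-energy front end of such a pipeline: Gabor filtering followed by Gaussian smoothing of the response magnitude yields a texture map that can be thresholded toward a boundary mask. The filter frequency, smoothing sigma, threshold, and synthetic image are assumptions, and the statistical F-testing and distance-transform steps of the published pipeline are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import gabor

rng = np.random.default_rng(0)
image = rng.random((128, 128))
image[32:96, 32:96] += np.sin(np.arange(64) * 0.8)        # textured region standing in for a glomerulus

real, imag = gabor(image, frequency=0.15)                  # single Gabor response (a bank would use many)
energy = gaussian_filter(np.hypot(real, imag), sigma=4)    # smoothed Gabor energy map
mask = energy > energy.mean() + energy.std()               # crude textural mask
print(mask.sum(), "pixels flagged as glomerular texture")
```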
Qualitative computer aided evaluation of dental impressions in vivo.
Luthardt, Ralph G; Koch, Rainer; Rudolph, Heike; Walter, Michael H
2006-01-01
Clinical investigations dealing with the precision of different impression techniques are rare. The objective of the present study was to develop and evaluate a procedure for the qualitative analysis of three-dimensional impression precision based on an established in vitro procedure. The null hypothesis to be tested was that the precision of impressions does not differ depending on the impression technique used (single-step, monophase and two-step techniques) or on clinical variables. Digital surface data of patients' teeth prepared for crowns were gathered from standardized manufactured master casts after impressions with three different techniques were taken in randomized order. Data sets were analyzed for each patient in comparison with the one-step impression chosen as the reference. The qualitative analysis was limited to data points within the 99.5% range. Based on the color-coded representation, areas with maximum deviations were determined (preparation margin, and the mantle and occlusal surfaces). To qualitatively analyze the precision of the impression techniques, the hypothesis was tested in linear models for repeated-measures factors (p < 0.05). For the positive 99.5% deviations, no variables with significant influence were identified in the statistical analysis. In contrast, the impression technique and the position of the preparation margin significantly influenced the negative 99.5% deviations. The influence of clinical parameters on the deviations between impression techniques can be determined reliably using the 99.5th percentile of the deviations. An analysis of the areas with maximum deviations showed high clinical relevance. The preparation margin emerged as the weak spot of impression taking.
Tactile display landing safety and precision improvements for the Space Shuttle
NASA Astrophysics Data System (ADS)
Olson, John M.
A tactile display belt using 24 electro-mechanical tactile transducers (tactors) was used to determine if a modified tactile display system, known as the Tactile Situation Awareness System (TSAS) improved the safety and precision of a complex spacecraft (i.e. the Space Shuttle Orbiter) in guided precision approaches and landings. The goal was to determine if tactile cues enhance safety and mission performance through reduced workload, increased situational awareness (SA), and an improved operational capability by increasing secondary cognitive workload capacity and human-machine interface efficiency and effectiveness. Using both qualitative and quantitative measures such as NASA's Justiz Numerical Measure and Synwork1 scores, an Overall Workload (OW) measure, the Cooper-Harper rating scale, and the China Lake Situational Awareness scale, plus Pre- and Post-Flight Surveys, the data show that tactile displays decrease OW, improve SA, counteract fatigue, and provide superior warning and monitoring capacity for dynamic, off-nominal, high concurrent workload scenarios involving complex, cognitive, and multi-sensory critical scenarios. Use of TSAS for maintaining guided precision approaches and landings was generally intuitive, reduced training times, and improved task learning effects. Ultimately, the use of a homogeneous, experienced, and statistically robust population of test pilots demonstrated that the use of tactile displays for Space Shuttle approaches and landings with degraded vehicle systems, weather, and environmental conditions produced substantial improvements in safety, consistency, reliability, and ease of operations under demanding conditions. Recommendations for further analysis and study are provided in order to leverage the results from this research and further explore the potential to reduce the risk of spaceflight and aerospace operations in general.
Satellite laser ranging to low Earth orbiters: orbit and network validation
NASA Astrophysics Data System (ADS)
Arnold, Daniel; Montenbruck, Oliver; Hackel, Stefan; Sośnica, Krzysztof
2018-04-01
Satellite laser ranging (SLR) to low Earth orbiters (LEOs) provides optical distance measurements with mm-to-cm-level precision. SLR residuals, i.e., differences between measured and modeled ranges, serve as a common figure of merit for the quality assessment of orbits derived by radiometric tracking techniques. We discuss relevant processing standards for the modeling of SLR observations and highlight the importance of line-of-sight-dependent range corrections for the various types of laser retroreflector arrays. A 1-3 cm consistency of SLR observations and GPS-based precise orbits is demonstrated for a wide range of past and present LEO missions supported by the International Laser Ranging Service (ILRS). A parameter estimation approach is presented to investigate systematic orbit errors, and it is shown that SLR validation of LEO satellites can detect not only radial but also along-track and cross-track offsets. SLR residual statistics clearly depend on the employed precise orbit determination technique (kinematic vs. reduced-dynamic, float vs. fixed ambiguities) but also reveal pronounced differences in ILRS station performance. Using the residual-based parameter estimation approach, corrections to ILRS station coordinates, range biases, and timing offsets are derived. As a result, root-mean-square residuals of 5-10 mm have been achieved over a 1-year data arc in 2016 using observations from a subset of high-performance stations and ambiguity-fixed orbits of four LEO missions. As a final contribution, we demonstrate that SLR can validate not only single-satellite orbit solutions but also precise baseline solutions of formation-flying missions such as GRACE, TanDEM-X, and Swarm.
Larkin, J D; Publicover, N G; Sutko, J L
2011-01-01
In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
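The central idea of photon event distribution sampling, summing a per-photon probability density at each detected photon's maximum-likelihood position instead of binning photons into pixels, can be sketched as follows. The isotropic Gaussian kernel and its fixed width are illustrative assumptions, not the exact intensity-related PDF used by the authors:

```python
import numpy as np

def peds_image(photon_xy, shape, sigma=1.5):
    """Form an image by summing a Gaussian PDF at each photon's
    maximum-likelihood position instead of binning photons into pixels."""
    img = np.zeros(shape)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    for x, y in photon_xy:                      # one kernel per detected photon
        img += np.exp(-((xx - x)**2 + (yy - y)**2) / (2.0 * sigma**2))
    return img / (2.0 * np.pi * sigma**2)       # normalise each kernel to unit area

def binned_image(photon_xy, shape):
    """Conventional pixel-binned image, for comparison."""
    img = np.zeros(shape)
    for x, y in photon_xy:
        img[int(round(y)), int(round(x))] += 1
    return img

# ~200 photons scattered around a single emitter near (32, 32), as in the tracking example
rng = np.random.default_rng(0)
photons = rng.normal(loc=32.0, scale=2.0, size=(200, 2))
img = peds_image(photons, (64, 64))
```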
Quantifying time in sedimentary successions by radio-isotopic dating of ash beds
NASA Astrophysics Data System (ADS)
Schaltegger, Urs
2014-05-01
Sedimentary rock sequences are an accurate record of geological, chemical and biological processes throughout the history of our planet. If we want to know more about the duration or the rates of some of these processes, we can apply methods of absolute age determination, i.e. radio-isotopic dating. Data of the highest precision and accuracy, and therefore of the highest degree of confidence, are obtained by chemical-abrasion isotope-dilution thermal-ionization mass spectrometry (CA-ID-TIMS) 238U-206Pb dating techniques, applied to magmatic zircon from ash beds that are interbedded with the sediments. This technique allows high-precision age estimates at the 0.1% uncertainty level for single analyses, and down to 0.03% uncertainty for groups of statistically equivalent 206Pb/238U dates. Such high precision is needed, since we would like the precision to be approximately equivalent to, or better than, the (interpolated) duration of ammonoid zones in the Mesozoic (e.g., Ovtcharova et al. 2006), or to match the short feedback rates of biological, climatic, or geochemical cycles after giant volcanic eruptions in large igneous provinces (LIPs), e.g., at the Permian/Triassic or the Triassic/Jurassic boundaries. We also wish to establish as precisely as possible temporal coincidence between the sedimentary record and short-lived volcanic events within the LIPs. Precision and accuracy of the U-Pb data have to be traceable and quantifiable in absolute terms, achieved by direct reference to the international kilogram, via an absolute calibration of the standard and isotopic tracer solutions. Only with perfect control of the precision and accuracy of radio-isotopic data can we confidently determine whether two ages of geological events are really different, and avoid mistaking interlaboratory or interchronometer biases for age differences. The development of unprecedented precision of CA-ID-TIMS 238U-206Pb dates led to the recognition of protracted growth of zircon in a magmatic liquid (see, e.g., Schoene et al. 2012), which is then transferred into volcanic ashes as excess dispersion of 238U-206Pb dates (see, e.g., Guex et al. 2012). Zircon crystallizes in the magmatic liquid until shortly before the volcanic eruption; we therefore aim at finding the youngest zircon date, or the youngest statistically equivalent cluster of 238U-206Pb dates, as an approximation of the age of ash deposition (Wotzlaw et al. 2013). Time gaps between last zircon crystallization and eruption ("Δt") may be as large as 100-200 ka, at the limits of analytical precision. Understanding the magmatic crystallization history of zircon is the fundamental background for interpreting ash bed dates in a sedimentary succession. Ash beds of different stratigraphic position and age may be generated within different magmatic systems, showing different crystallization histories. A sufficient number of samples (N) is therefore of paramount importance in order not to lose stratigraphic age control in a given section and to be able to discard samples with large Δt - but how large does "N" have to be? In order to use the youngest zircon or zircons as an approximation of the age of eruption and ash deposition, we need to be sure that we have quantitatively solved the problem of post-crystallization lead loss - but how can we be sure? Ash bed zircons are prone to partial loss of radiogenic lead, because the ashes have been flushed by volcanic gases, as well as by brines during sediment compaction.
We therefore need to analyze a sufficient number of zircons (n) to be sure not to miss the youngest - but how large does "n" have to be? Analysis of trace elements or of oxygen and hafnium isotopic compositions in dated zircon may sometimes help to distinguish zircon that is in equilibrium with the last magmatic liquid from zircon recycled from earlier crystallization episodes, or to recognize zircon with partial lead loss (Schoene et al. 2010). Respecting these constraints, we may arrive at an accurate correlation of periods of global environmental and biotic disturbance (from ash bed analysis in biostratigraphically or cyclostratigraphically well constrained marine sections) with volcanic activity; examples are the Triassic-Jurassic boundary and the Central Atlantic Magmatic Province (Schoene et al. 2010), or the lower Toarcian oceanic anoxic event and the Karoo Province volcanism (Sell et al., in prep.). High-precision temporal correlations may also be obtained by combining high-precision U-Pb dating with biochronology in the Middle Triassic (Ovtcharova et al., in prep.), or by comparing U-Pb dates with astronomical timescales in the Upper Miocene (Wotzlaw et al., in prep.). References: Guex, J., Schoene, B., Bartolini, A., Spangenberg, J., Schaltegger, U., O'Dogherty, L., et al. (2012). Geochronological constraints on post-extinction recovery of the ammonoids and carbon cycle perturbations during the Early Jurassic. Palaeogeography, Palaeoclimatology, Palaeoecology, 346-347(C), 1-11. Ovtcharova, M., Bucher, H., Schaltegger, U., Galfetti, T., Brayard, A., & Guex, J. (2006). New Early to Middle Triassic U-Pb ages from South China: Calibration with ammonoid biochronozones and implications for the timing of the Triassic biotic recovery. Earth and Planetary Science Letters, 243(3-4), 463-475. Ovtcharova, M., Goudemand, N., Galfetti, Th., Guodun, K., Hammer, O., Schaltegger, U., & Bucher, H. Improving accuracy and precision of radio-isotopic and biochronological approaches in dating geological boundaries: The Early-Middle Triassic boundary case. In preparation. Schoene, B., Schaltegger, U., Brack, P., Latkoczy, C., Stracke, A., & Günther, D. (2012). Rates of magma differentiation and emplacement in a ballooning pluton recorded by U-Pb TIMS-TEA, Adamello batholith, Italy. Earth and Planetary Science Letters, 355-356, 162-173. Schoene, B., Latkoczy, C., Schaltegger, U., & Günther, D. (2010). A new method integrating high-precision U-Pb geochronology with zircon trace element analysis (U-Pb TIMS-TEA). Geochimica et Cosmochimica Acta, 74(24), 7144-7159. Schoene, B., Guex, J., Bartolini, A., Schaltegger, U., & Blackburn, T. J. (2010). Correlating the end-Triassic mass extinction and flood basalt volcanism at the 100 ka level. Geology, 38(5), 387-390. Sell, B., Ovtcharova, M., Guex, J., Jourdan, F., & Schaltegger, U. Evaluating the link between the Karoo LIP and climatic-biologic events of the Toarcian Stage with high-precision U-Pb geochronology. In preparation. Wotzlaw, J. F., Schaltegger, U., Frick, D. A., Dungan, M. A., Gerdes, A., & Günther, D. (2013). Tracking the evolution of large-volume silicic magma reservoirs from assembly to supereruption. Geology, 41(8), 867-870. Wotzlaw, J. F., Hüsing, S. K., Hilgen, F. J., & Schaltegger, U. Testing the gold standard of geochronology against astronomical time: High-precision U-Pb geochronology of orbitally tuned ash beds from the Mediterranean Miocene. In preparation.
Validation of a Spectral Method for Quantitative Measurement of Color in Protein Drug Solutions.
Yin, Jian; Swartz, Trevor E; Zhang, Jian; Patapoff, Thomas W; Chen, Bartolo; Marhoul, Joseph; Shih, Norman; Kabakoff, Bruce; Rahimi, Kimia
2016-01-01
A quantitative spectral method has been developed to precisely measure the color of protein solutions. In this method, a spectrophotometer is utilized for capturing the visible absorption spectrum of a protein solution, which can then be converted to color values (L*a*b*) that represent human perception of color in a quantitative three-dimensional space. These quantitative values (L*a*b*) allow for calculating the best match of a sample's color to a European Pharmacopoeia reference color solution. In order to qualify this instrument and assay for use in clinical quality control, a technical assessment was conducted to evaluate the assay suitability and precision. Setting acceptance criteria for this study required development and implementation of a unique statistical method for assessing precision in 3-dimensional space. Different instruments, cuvettes, protein solutions, and analysts were compared in this study. The instrument accuracy, repeatability, and assay precision were determined. The instrument and assay are found suitable for use in assessing the color of drug substances and drug products and are comparable to the current European Pharmacopoeia visual assessment method. In the biotechnology industry, a visual assessment is the most commonly used method for color characterization, batch release, and stability testing of liquid protein drug solutions. Using this method, an analyst visually determines the color of the sample by choosing the closest match to a standard color series. This visual method can be subjective because it requires an analyst to make a judgment of the best match of color of the sample to the standard color series, and it does not capture data on hue and chroma that would allow for improved product characterization and the ability to detect subtle differences between samples. To overcome these challenges, we developed a quantitative spectral method for color determination that greatly reduces the variability in measuring color and allows for a more precise understanding of color differences. In this study, we established a statistical method for assessing precision in 3-dimensional space and demonstrated that the quantitative spectral method is comparable with respect to precision and accuracy to the current European Pharmacopoeia visual assessment method. © PDA, Inc. 2016.
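Matching a sample's measured L*a*b* coordinates to the closest reference color solution reduces to a nearest-neighbour search in that three-dimensional space. A minimal sketch, assuming the standard CIE76 color difference and using made-up reference coordinates (the actual European Pharmacopoeia values and the paper's matching criterion may differ):

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return math.dist(lab1, lab2)

# Hypothetical L*a*b* values for a few reference solutions (placeholders only)
references = {
    "ref_1": (98.1, -1.2, 7.4),
    "ref_2": (99.0, -0.6, 3.9),
    "ref_3": (99.3, -0.9, 1.8),
}

sample_lab = (98.7, -0.8, 4.2)   # measured sample color (hypothetical)
best = min(references, key=lambda name: delta_e76(sample_lab, references[name]))
print("closest reference:", best)
```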
Note: High precision measurements using high frequency gigahertz signals
NASA Astrophysics Data System (ADS)
Jin, Aohan; Fu, Siyuan; Sakurai, Atsunori; Liu, Liang; Edman, Fredrik; Pullerits, Tõnu; Öwall, Viktor; Karki, Khadga Jung
2014-12-01
Generalized lock-in amplifiers use digital cavities with Q-factors as high as 5 × 10⁸ to measure signals with very high precision. In this Note, we show that generalized lock-in amplifiers can be used to analyze microwave (gigahertz) signals with a precision of a few tens of hertz. We propose that physical changes in the medium of propagation can be measured precisely through such ultra-high-precision measurement of the signal. We provide evidence for this proposition by verifying Newton's law of cooling, measuring the effect of temperature changes on the phase and amplitude of signals propagating through two calibrated cables. The technique could be used to precisely measure different physical properties of the propagation medium, for example changes in length, resistance, etc. Real-time implementation of the technique can open up new methodologies of in situ virtual metrology in material design.
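The essence of a lock-in measurement, projecting the signal onto quadrature references at a known frequency and averaging to recover amplitude and phase, can be sketched at baseband as follows. This is a generic illustration, not the generalized lock-in algorithm or the GHz digitization hardware used in the Note:

```python
import numpy as np

def lock_in(signal, f_ref, fs):
    """Return amplitude and phase of `signal` at reference frequency f_ref (Hz)."""
    t = np.arange(len(signal)) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f_ref * t))   # in-phase projection
    q = np.mean(signal * np.sin(2 * np.pi * f_ref * t))   # quadrature projection
    amplitude = 2.0 * np.hypot(i, q)
    phase = np.arctan2(-q, i)
    return amplitude, phase

# Synthetic test: 1 MHz tone sampled at 50 MHz with additive noise
fs, f0 = 50e6, 1e6
t = np.arange(500_000) / fs
sig = 0.8 * np.cos(2 * np.pi * f0 * t + 0.3) + 0.05 * np.random.randn(t.size)
print(lock_in(sig, f0, fs))   # ~ (0.8, 0.3)
```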
Precision Measurement of the e+e− → Λc+Λ̄c− Cross Section Near Threshold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ablikim, M.; Achasov, M. N.; Ahmed, S.
2018-03-01
The cross section of the e+e− → Λc+Λ̄c− process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. The non-zero cross section near the Λc+Λ̄c− production threshold is cleared. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.
Precision Measurement of the e+e− → Λc+Λ̄c− Cross Section Near Threshold.
Ablikim, M; Achasov, M N; Ahmed, S; Albrecht, M; Alekseev, M; Amoroso, A; An, F F; An, Q; Bai, J Z; Bai, Y; Bakina, O; Baldini Ferroli, R; Ban, Y; Begzsuren, K; Bennett, D W; Bennett, J V; Berger, N; Bertani, M; Bettoni, D; Bianchi, F; Boger, E; Boyko, I; Briere, R A; Cai, H; Cai, X; Cakir, O; Calcaterra, A; Cao, G F; Cetin, S A; Chai, J; Chang, J F; Chelkov, G; Chen, G; Chen, H S; Chen, J C; Chen, M L; Chen, P L; Chen, S J; Chen, X R; Chen, Y B; Chu, X K; Cibinetto, G; Cossio, F; Dai, H L; Dai, J P; Dbeyssi, A; Dedovich, D; Deng, Z Y; Denig, A; Denysenko, I; Destefanis, M; De Mori, F; Ding, Y; Dong, C; Dong, J; Dong, L Y; Dong, M Y; Dou, Z L; Du, S X; Duan, P F; Fang, J; Fang, S S; Fang, Y; Farinelli, R; Fava, L; Fegan, S; Feldbauer, F; Felici, G; Feng, C Q; Fioravanti, E; Fritsch, M; Fu, C D; Gao, Q; Gao, X L; Gao, Y; Gao, Y G; Gao, Z; Garillon, B; Garzia, I; Gilman, A; Goetzen, K; Gong, L; Gong, W X; Gradl, W; Greco, M; Gu, M H; Gu, Y T; Guo, A Q; Guo, R P; Guo, Y P; Guskov, A; Haddadi, Z; Han, S; Hao, X Q; Harris, F A; He, K L; He, X Q; Heinsius, F H; Held, T; Heng, Y K; Holtmann, T; Hou, Z L; Hu, H M; Hu, J F; Hu, T; Hu, Y; Huang, G S; Huang, J S; Huang, X T; Huang, X Z; Huang, Z L; Hussain, T; Ikegami Andersson, W; Ji, Q; Ji, Q P; Ji, X B; Ji, X L; Jiang, X S; Jiang, X Y; Jiao, J B; Jiao, Z; Jin, D P; Jin, S; Jin, Y; Johansson, T; Julin, A; Kalantar-Nayestanaki, N; Kang, X S; Kavatsyuk, M; Ke, B C; Khan, T; Khoukaz, A; Kiese, P; Kliemt, R; Koch, L; Kolcu, O B; Kopf, B; Kornicer, M; Kuemmel, M; Kuhlmann, M; Kupsc, A; Kühn, W; Lange, J S; Lara, M; Larin, P; Lavezzi, L; Leithoff, H; Li, C; Li, Cheng; Li, D M; Li, F; Li, F Y; Li, G; Li, H B; Li, H J; Li, J C; Li, J W; Li, Jin; Li, K J; Li, Kang; Li, Ke; Li, Lei; Li, P L; Li, P R; Li, Q Y; Li, W D; Li, W G; Li, X L; Li, X N; Li, X Q; Li, Z B; Liang, H; Liang, Y F; Liang, Y T; Liao, G R; Libby, J; Lin, C X; Lin, D X; Liu, B; Liu, B J; Liu, C X; Liu, D; Liu, F H; Liu, Fang; Liu, Feng; Liu, H B; Liu, H L; Liu, H M; Liu, Huanhuan; Liu, Huihui; Liu, J B; Liu, J Y; Liu, K; Liu, K Y; Liu, Ke; Liu, L D; Liu, Q; Liu, S B; Liu, X; Liu, Y B; Liu, Z A; Liu, Zhiqing; Long, Y F; Lou, X C; Lu, H J; Lu, J G; Lu, Y; Lu, Y P; Luo, C L; Luo, M X; Luo, X L; Lusso, S; Lyu, X R; Ma, F C; Ma, H L; Ma, L L; Ma, M M; Ma, Q M; Ma, T; Ma, X N; Ma, X Y; Ma, Y M; Maas, F E; Maggiora, M; Malik, Q A; Mao, Y J; Mao, Z P; Marcello, S; Meng, Z X; Messchendorp, J G; Mezzadri, G; Min, J; Mitchell, R E; Mo, X H; Mo, Y J; Morales Morales, C; Muchnoi, N Yu; Muramatsu, H; Mustafa, A; Nefedov, Y; Nerling, F; Nikolaev, I B; Ning, Z; Nisar, S; Niu, S L; Niu, X Y; Olsen, S L; Ouyang, Q; Pacetti, S; Pan, Y; Papenbrock, M; Patteri, P; Pelizaeus, M; Pellegrino, J; Peng, H P; Peng, Z Y; Peters, K; Pettersson, J; Ping, J L; Ping, R G; Pitka, A; Poling, R; Prasad, V; Qi, H R; Qi, M; Qi, T Y; Qian, S; Qiao, C F; Qin, N; Qin, X S; Qin, Z H; Qiu, J F; Rashid, K H; Redmer, C F; Richter, M; Ripka, M; Rolo, M; Rong, G; Rosner, Ch; Sarantsev, A; Savrié, M; Schnier, C; Schoenning, K; Shan, W; Shan, X Y; Shao, M; Shen, C P; Shen, P X; Shen, X Y; Sheng, H Y; Shi, X; Song, J J; Song, W M; Song, X Y; Sosio, S; Sowa, C; Spataro, S; Sun, G X; Sun, J F; Sun, L; Sun, S S; Sun, X H; Sun, Y J; Sun, Y K; Sun, Y Z; Sun, Z J; Sun, Z T; Tan, Y T; Tang, C J; Tang, G Y; Tang, X; Tapan, I; Tiemens, M; Tsednee, B; Uman, I; Varner, G S; Wang, B; Wang, B L; Wang, D; Wang, D Y; Wang, Dan; Wang, K; Wang, L L; Wang, L S; Wang, M; Wang, Meng; Wang, P; Wang, P L; Wang, W P; Wang, X F; Wang, Y; Wang, Y D; Wang, Y F; 
Wang, Y Q; Wang, Z; Wang, Z G; Wang, Z Y; Wang, Zongyuan; Weber, T; Wei, D H; Wei, J H; Weidenkaff, P; Wen, S P; Wiedner, U; Wolke, M; Wu, L H; Wu, L J; Wu, Z; Xia, L; Xia, Y; Xiao, D; Xiao, Y J; Xiao, Z J; Xie, Y G; Xie, Y H; Xiong, X A; Xiu, Q L; Xu, G F; Xu, J J; Xu, L; Xu, Q J; Xu, Q N; Xu, X P; Yan, F; Yan, L; Yan, W B; Yan, W C; Yan, Y H; Yang, H J; Yang, H X; Yang, L; Yang, Y H; Yang, Y X; Yang, Yifan; Ye, M; Ye, M H; Yin, J H; You, Z Y; Yu, B X; Yu, C X; Yu, J S; Yuan, C Z; Yuan, Y; Yuncu, A; Zafar, A A; Zeng, Y; Zeng, Z; Zhang, B X; Zhang, B Y; Zhang, C C; Zhang, D H; Zhang, H H; Zhang, H Y; Zhang, J; Zhang, J L; Zhang, J Q; Zhang, J W; Zhang, J Y; Zhang, J Z; Zhang, K; Zhang, L; Zhang, S Q; Zhang, X Y; Zhang, Y; Zhang, Y H; Zhang, Y T; Zhang, Yang; Zhang, Yao; Zhang, Yu; Zhang, Z H; Zhang, Z P; Zhang, Z Y; Zhao, G; Zhao, J W; Zhao, J Y; Zhao, J Z; Zhao, Lei; Zhao, Ling; Zhao, M G; Zhao, Q; Zhao, S J; Zhao, T C; Zhao, Y B; Zhao, Z G; Zhemchugov, A; Zheng, B; Zheng, J P; Zheng, Y H; Zhong, B; Zhou, L; Zhou, Q; Zhou, X; Zhou, X K; Zhou, X R; Zhou, X Y; Zhu, A N; Zhu, J; Zhu, K; Zhu, K J; Zhu, S; Zhu, S H; Zhu, X L; Zhu, Y C; Zhu, Y S; Zhu, Z A; Zhuang, J; Zou, B S; Zou, J H
2018-03-30
The cross section of the e+e− → Λc+Λ̄c− process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. The nonzero cross section near the Λc+Λ̄c− production threshold is cleared. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form-factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.
Precision Measurement of the e+e− → Λc+Λ̄c− Cross Section Near Threshold
NASA Astrophysics Data System (ADS)
Ablikim, M.; Achasov, M. N.; Ahmed, S.; Albrecht, M.; Alekseev, M.; Amoroso, A.; An, F. F.; An, Q.; Bai, J. Z.; Bai, Y.; Bakina, O.; Baldini Ferroli, R.; Ban, Y.; Begzsuren, K.; Bennett, D. W.; Bennett, J. V.; Berger, N.; Bertani, M.; Bettoni, D.; Bianchi, F.; Boger, E.; Boyko, I.; Briere, R. A.; Cai, H.; Cai, X.; Cakir, O.; Calcaterra, A.; Cao, G. F.; Cetin, S. A.; Chai, J.; Chang, J. F.; Chelkov, G.; Chen, G.; Chen, H. S.; Chen, J. C.; Chen, M. L.; Chen, P. L.; Chen, S. J.; Chen, X. R.; Chen, Y. B.; Chu, X. K.; Cibinetto, G.; Cossio, F.; Dai, H. L.; Dai, J. P.; Dbeyssi, A.; Dedovich, D.; Deng, Z. Y.; Denig, A.; Denysenko, I.; Destefanis, M.; de Mori, F.; Ding, Y.; Dong, C.; Dong, J.; Dong, L. Y.; Dong, M. Y.; Dou, Z. L.; Du, S. X.; Duan, P. F.; Fang, J.; Fang, S. S.; Fang, Y.; Farinelli, R.; Fava, L.; Fegan, S.; Feldbauer, F.; Felici, G.; Feng, C. Q.; Fioravanti, E.; Fritsch, M.; Fu, C. D.; Gao, Q.; Gao, X. L.; Gao, Y.; Gao, Y. G.; Gao, Z.; Garillon, B.; Garzia, I.; Gilman, A.; Goetzen, K.; Gong, L.; Gong, W. X.; Gradl, W.; Greco, M.; Gu, M. H.; Gu, Y. T.; Guo, A. Q.; Guo, R. P.; Guo, Y. P.; Guskov, A.; Haddadi, Z.; Han, S.; Hao, X. Q.; Harris, F. A.; He, K. L.; He, X. Q.; Heinsius, F. H.; Held, T.; Heng, Y. K.; Holtmann, T.; Hou, Z. L.; Hu, H. M.; Hu, J. F.; Hu, T.; Hu, Y.; Huang, G. S.; Huang, J. S.; Huang, X. T.; Huang, X. Z.; Huang, Z. L.; Hussain, T.; Ikegami Andersson, W.; Ji, Q.; Ji, Q. P.; Ji, X. B.; Ji, X. L.; Jiang, X. S.; Jiang, X. Y.; Jiao, J. B.; Jiao, Z.; Jin, D. P.; Jin, S.; Jin, Y.; Johansson, T.; Julin, A.; Kalantar-Nayestanaki, N.; Kang, X. S.; Kavatsyuk, M.; Ke, B. C.; Khan, T.; Khoukaz, A.; Kiese, P.; Kliemt, R.; Koch, L.; Kolcu, O. B.; Kopf, B.; Kornicer, M.; Kuemmel, M.; Kuhlmann, M.; Kupsc, A.; Kühn, W.; Lange, J. S.; Lara, M.; Larin, P.; Lavezzi, L.; Leithoff, H.; Li, C.; Li, Cheng; Li, D. M.; Li, F.; Li, F. Y.; Li, G.; Li, H. B.; Li, H. J.; Li, J. C.; Li, J. W.; Li, Jin; Li, K. J.; Li, Kang; Li, Ke; Li, Lei; Li, P. L.; Li, P. R.; Li, Q. Y.; Li, W. D.; Li, W. G.; Li, X. L.; Li, X. N.; Li, X. Q.; Li, Z. B.; Liang, H.; Liang, Y. F.; Liang, Y. T.; Liao, G. R.; Libby, J.; Lin, C. X.; Lin, D. X.; Liu, B.; Liu, B. J.; Liu, C. X.; Liu, D.; Liu, F. H.; Liu, Fang; Liu, Feng; Liu, H. B.; Liu, H. L.; Liu, H. M.; Liu, Huanhuan; Liu, Huihui; Liu, J. B.; Liu, J. Y.; Liu, K.; Liu, K. Y.; Liu, Ke; Liu, L. D.; Liu, Q.; Liu, S. B.; Liu, X.; Liu, Y. B.; Liu, Z. A.; Liu, Zhiqing; Long, Y. F.; Lou, X. C.; Lu, H. J.; Lu, J. G.; Lu, Y.; Lu, Y. P.; Luo, C. L.; Luo, M. X.; Luo, X. L.; Lusso, S.; Lyu, X. R.; Ma, F. C.; Ma, H. L.; Ma, L. L.; Ma, M. M.; Ma, Q. M.; Ma, T.; Ma, X. N.; Ma, X. Y.; Ma, Y. M.; Maas, F. E.; Maggiora, M.; Malik, Q. A.; Mao, Y. J.; Mao, Z. P.; Marcello, S.; Meng, Z. X.; Messchendorp, J. G.; Mezzadri, G.; Min, J.; Mitchell, R. E.; Mo, X. H.; Mo, Y. J.; Morales Morales, C.; Muchnoi, N. Yu.; Muramatsu, H.; Mustafa, A.; Nefedov, Y.; Nerling, F.; Nikolaev, I. B.; Ning, Z.; Nisar, S.; Niu, S. L.; Niu, X. Y.; Olsen, S. L.; Ouyang, Q.; Pacetti, S.; Pan, Y.; Papenbrock, M.; Patteri, P.; Pelizaeus, M.; Pellegrino, J.; Peng, H. P.; Peng, Z. Y.; Peters, K.; Pettersson, J.; Ping, J. L.; Ping, R. G.; Pitka, A.; Poling, R.; Prasad, V.; Qi, H. R.; Qi, M.; Qi, T. Y.; Qian, S.; Qiao, C. F.; Qin, N.; Qin, X. S.; Qin, Z. H.; Qiu, J. F.; Rashid, K. H.; Redmer, C. F.; Richter, M.; Ripka, M.; Rolo, M.; Rong, G.; Rosner, Ch.; Sarantsev, A.; Savrié, M.; Schnier, C.; Schoenning, K.; Shan, W.; Shan, X. Y.; Shao, M.; Shen, C. P.; Shen, P. X.; Shen, X. Y.; Sheng, H. Y.; Shi, X.; Song, J. 
J.; Song, W. M.; Song, X. Y.; Sosio, S.; Sowa, C.; Spataro, S.; Sun, G. X.; Sun, J. F.; Sun, L.; Sun, S. S.; Sun, X. H.; Sun, Y. J.; Sun, Y. K.; Sun, Y. Z.; Sun, Z. J.; Sun, Z. T.; Tan, Y. T.; Tang, C. J.; Tang, G. Y.; Tang, X.; Tapan, I.; Tiemens, M.; Tsednee, B.; Uman, I.; Varner, G. S.; Wang, B.; Wang, B. L.; Wang, D.; Wang, D. Y.; Wang, Dan; Wang, K.; Wang, L. L.; Wang, L. S.; Wang, M.; Wang, Meng; Wang, P.; Wang, P. L.; Wang, W. P.; Wang, X. F.; Wang, Y.; Wang, Y. D.; Wang, Y. F.; Wang, Y. Q.; Wang, Z.; Wang, Z. G.; Wang, Z. Y.; Wang, Zongyuan; Weber, T.; Wei, D. H.; Wei, J. H.; Weidenkaff, P.; Wen, S. P.; Wiedner, U.; Wolke, M.; Wu, L. H.; Wu, L. J.; Wu, Z.; Xia, L.; Xia, Y.; Xiao, D.; Xiao, Y. J.; Xiao, Z. J.; Xie, Y. G.; Xie, Y. H.; Xiong, X. A.; Xiu, Q. L.; Xu, G. F.; Xu, J. J.; Xu, L.; Xu, Q. J.; Xu, Q. N.; Xu, X. P.; Yan, F.; Yan, L.; Yan, W. B.; Yan, W. C.; Yan, Y. H.; Yang, H. J.; Yang, H. X.; Yang, L.; Yang, Y. H.; Yang, Y. X.; Yang, Yifan; Ye, M.; Ye, M. H.; Yin, J. H.; You, Z. Y.; Yu, B. X.; Yu, C. X.; Yu, J. S.; Yuan, C. Z.; Yuan, Y.; Yuncu, A.; Zafar, A. A.; Zeng, Y.; Zeng, Z.; Zhang, B. X.; Zhang, B. Y.; Zhang, C. C.; Zhang, D. H.; Zhang, H. H.; Zhang, H. Y.; Zhang, J.; Zhang, J. L.; Zhang, J. Q.; Zhang, J. W.; Zhang, J. Y.; Zhang, J. Z.; Zhang, K.; Zhang, L.; Zhang, S. Q.; Zhang, X. Y.; Zhang, Y.; Zhang, Y. H.; Zhang, Y. T.; Zhang, Yang; Zhang, Yao; Zhang, Yu; Zhang, Z. H.; Zhang, Z. P.; Zhang, Z. Y.; Zhao, G.; Zhao, J. W.; Zhao, J. Y.; Zhao, J. Z.; Zhao, Lei; Zhao, Ling; Zhao, M. G.; Zhao, Q.; Zhao, S. J.; Zhao, T. C.; Zhao, Y. B.; Zhao, Z. G.; Zhemchugov, A.; Zheng, B.; Zheng, J. P.; Zheng, Y. H.; Zhong, B.; Zhou, L.; Zhou, Q.; Zhou, X.; Zhou, X. K.; Zhou, X. R.; Zhou, X. Y.; Zhu, A. N.; Zhu, J.; Zhu, K.; Zhu, K. J.; Zhu, S.; Zhu, S. H.; Zhu, X. L.; Zhu, Y. C.; Zhu, Y. S.; Zhu, Z. A.; Zhuang, J.; Zou, B. S.; Zou, J. H.; Besiii Collaboration
2018-03-01
The cross section of the e+e− → Λc+Λ̄c− process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. The nonzero cross section near the Λc+Λ̄c− production threshold is cleared. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form-factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.
Precision Measurement of the e+e− → Λc+Λ̄c− Cross Section Near Threshold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ablikim, M.; Achasov, M. N.; Ahmed, S.
The cross section of the e+e− → Λc+Λ̄c− process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. The non-zero cross section near the Λc+Λ̄c− production threshold is cleared. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, B.; Erni, W.; Krusche, B.
Simulation results for future measurements of electromagnetic proton form factors at P̄ANDA (FAIR) within the PandaRoot software framework are reported. The statistical precision with which the proton form factors can be determined is estimated. The signal channel p̄p → e+e− is studied on the basis of two different but consistent procedures. The suppression of the main background channel, i.e. p̄p → π+π−, is studied. Furthermore, the background versus signal efficiency and the statistical and systematic uncertainties on the extracted proton form factors are evaluated using two different procedures. The results are consistent with those of a previous simulation study using an older, simplified framework. Furthermore, a slightly better precision is achieved in the PandaRoot study over a large range of momentum transfer, assuming the nominal beam conditions and detector performance.
Singh, B.; Erni, W.; Krusche, B.; ...
2016-10-28
Simulation results for future measurements of electromagnetic proton form factors at P̄ANDA (FAIR) within the PandaRoot software framework are reported. The statistical precision with which the proton form factors can be determined is estimated. The signal channel p̄p → e+e− is studied on the basis of two different but consistent procedures. The suppression of the main background channel, i.e. p̄p → π+π−, is studied. Furthermore, the background versus signal efficiency and the statistical and systematic uncertainties on the extracted proton form factors are evaluated using two different procedures. The results are consistent with those of a previous simulation study using an older, simplified framework. Furthermore, a slightly better precision is achieved in the PandaRoot study over a large range of momentum transfer, assuming the nominal beam conditions and detector performance.
Precision Cosmology: The First Half Million Years
NASA Astrophysics Data System (ADS)
Jones, Bernard J. T.
2017-06-01
Cosmology seeks to characterise our Universe in terms of models based on well-understood and tested physics. Today we know our Universe with a precision that once would have been unthinkable. This book develops the entire mathematical, physical and statistical framework within which this has been achieved. It tells the story of how we arrive at our profound conclusions, starting from the early twentieth century and following developments up to the latest data analysis of big astronomical datasets. It provides an enlightening description of the mathematical, physical and statistical basis for understanding and interpreting the results of key space- and ground-based data. Subjects covered include general relativity, cosmological models, the inhomogeneous Universe, physics of the cosmic background radiation, and methods and results of data analysis. Extensive online supplementary notes, exercises, teaching materials, and exercises in Python make this the perfect companion for researchers, teachers and students in physics, mathematics, and astrophysics.
Precision Measurement of the e+e− → Λc+Λ̄c− Cross Section Near Threshold
Ablikim, M.; Achasov, M. N.; Ahmed, S.; ...
2018-03-29
The cross section of the e+e− → Λc+Λ̄c− process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. The non-zero cross section near the Λc+Λ̄c− production threshold is cleared. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.
NASA Astrophysics Data System (ADS)
Li, Junye; Hu, Jinglei; Wang, Binyu; Sheng, Liang; Zhang, Xinming
2018-03-01
In order to investigate the effect of abrasive flow polishing on variable-diameter pipe parts, high-precision dispensing needles were taken as the research object and a numerical simulation of the process of polishing a high-precision dispensing needle was carried out. Under different volume-fraction conditions, the distributions of the dynamic pressure and the turbulence viscosity of the abrasive flow field in the high-precision dispensing needle were analyzed. The comparative analysis demonstrates the effectiveness of abrasive-grain polishing of high-precision dispensing needles: controlling the volume fraction of silicon carbide changes the viscosity characteristics of the abrasive flow during the polishing process, so that the polishing quality of the abrasive grains can be controlled.
Methodologies for the Statistical Analysis of Memory Response to Radiation
NASA Astrophysics Data System (ADS)
Bosser, Alexandre L.; Gupta, Viyas; Tsiligiannis, Georgios; Frost, Christopher D.; Zadeh, Ali; Jaatinen, Jukka; Javanainen, Arto; Puchner, Helmut; Saigné, Frédéric; Virtanen, Ari; Wrobel, Frédéric; Dilillo, Luigi
2016-08-01
Methodologies are proposed for in-depth statistical analysis of Single Event Upset data. The motivation for using these methodologies is to obtain precise information on the intrinsic defects and weaknesses of the tested devices, and to gain insight on their failure mechanisms, at no additional cost. The case study is a 65 nm SRAM irradiated with neutrons, protons and heavy ions. This publication is an extended version of a previous study [1].
Quantitative spectroscopy of Galactic BA-type supergiants. I. Atmospheric parameters
NASA Astrophysics Data System (ADS)
Firnstein, M.; Przybilla, N.
2012-07-01
Context. BA-type supergiants show a high potential as versatile indicators for modern astronomy. This paper constitutes the first in a series that aims at a systematic spectroscopic study of Galactic BA-type supergiants. Various problems will be addressed, including in particular observational constraints on the evolution of massive stars and a determination of abundance gradients in the Milky Way. Aims: The focus here is on the determination of accurate and precise atmospheric parameters for a sample of Galactic BA-type supergiants as prerequisite for all further analysis. Some first applications include a recalibration of functional relationships between spectral-type, intrinsic colours, bolometric corrections and effective temperature, and an exploration of the reddening-free Johnson Q and Strömgren [c1] and β-indices as photometric indicators for effective temperatures and gravities of BA-type supergiants. Methods: An extensive grid of theoretical spectra is computed based on a hybrid non-LTE approach, covering the relevant parameter space in effective temperature, surface gravity, helium abundance, microturbulence and elemental abundances. The atmospheric parameters are derived spectroscopically by line-profile fits of our theoretical models to high-resolution and high-S/N spectra obtained at various observatories. Ionization equilibria of multiple metals and the Stark-broadened hydrogen and the neutral helium lines constitute our primary indicators for the parameter determination, supplemented by (spectro-)photometry from the UV to the near-IR. Results: We obtain accurate atmospheric parameters for 35 sample supergiants from a homogeneous analysis. Data on effective temperatures, surface gravities, helium abundances, microturbulence, macroturbulence and rotational velocities are presented. The interstellar reddening and the ratio of total-to-selective extinction towards the stars are determined. Our empirical spectral-type-Teff scale is steeper than reference relations from the literature, the stars are significantly bluer than usually assumed, and bolometric corrections differ significantly from established literature values. Photometric Teff-determinations based on the reddening-free Q-index are found to be of limited use for studies of BA-type supergiants because of large errors of typically ±5% (1σ statistical) ±3% (1σ systematic), compared to a spectroscopically achieved precision of 1-2% (combined statistical and systematic uncertainty with our methodology). The reddening-free [c1] -index and β on the other hand are found to provide useful starting values for high-precision/accuracy analyses, with uncertainties of ±1% ± 2.5% in Teff, and ±0.04 ± 0.13 dex in log g (1σ-statistical, 1σ-systematic, respectively). Based on observations collected at the Centro Astronómico Hispano Alemán at Calar Alto (CAHA), operated jointly by the Max-Planck Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC), proposals H2001-2.2-011 and H2005-2.2-016.Based on observations obtained at the European Southern Observatory, proposals 62.H-0176 and 079.B-0856(A). Additional data were adopted from the UVES Paranal Observatory Project (ESO DDT Program ID 266.D-5655).
Classifying the Basic Parameters of Ultraviolet Copper Bromide Laser
NASA Astrophysics Data System (ADS)
Gocheva-Ilieva, S. G.; Iliev, I. P.; Temelkov, K. A.; Vuchkov, N. K.; Sabotinov, N. V.
2009-10-01
The performance of deep-ultraviolet copper bromide lasers is of great importance because of their applications in medicine, microbiology, high-precision processing of new materials, high-resolution laser lithography in microelectronics, high-density optical recording of information, laser-induced fluorescence in plasmas and wide-gap semiconductors, and more. In this paper we present a statistical study on the classification of 12 basic lasing parameters using different agglomerative methods of cluster analysis. The results are based on a large body of experimental data for the UV Cu+ Ne-CuBr laser with wavelengths of 248.6 nm, 252.9 nm, 260.0 nm and 270.3 nm, obtained at the Georgi Nadjakov Institute of Solid State Physics, Bulgarian Academy of Sciences. The relative influence of the parameters on laser generation is also evaluated. The results are applicable to computer modeling, planning of experiments, and further laser development with improved output characteristics.
NASA Technical Reports Server (NTRS)
Yee, J. H.; Gjerloev, J.; Wu, D.; Schwartz, M. J.
2017-01-01
Using the O2 118 GHz spectral radiance measurements obtained by the Microwave Limb Sounder instrument on board the Aura spacecraft, we demonstrate that the Zeeman effect can be used to remotely measure the magnetic field perturbations produced by the auroral electrojet near the Hall current closure altitudes. Our derived current-induced magnetic field perturbations are found to be highly correlated with those coincidently obtained by ground magnetometers. These perturbations are also found to be linearly correlated with auroral electrojet strength. The statistically derived polar maps of our measured magnetic field perturbation reveal a spatial-temporal morphology consistent with that produced by the Hall current during substorms and storms. With today's technology, a constellation of compact, low-power, high spectral-resolution cubesats would have the capability to provide high precision and spatiotemporal magnetic field samplings needed for auroral electrojet measurements to gain insights into the spatiotemporal behavior of the auroral electrojet system.
Correcting systematic errors in high-sensitivity deuteron polarization measurements
NASA Astrophysics Data System (ADS)
Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10⁻⁵ for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10⁻⁶ in a search for an electric dipole moment using a storage ring.
NASA Astrophysics Data System (ADS)
Moreland, Blythe; Oman, Kenji; Curfman, John; Yan, Pearlly; Bundschuh, Ralf
Methyl-binding domain (MBD) protein pulldown experiments have been a valuable tool in measuring the levels of methylated CpG dinucleotides. Due to the frequent use of this technique, high-throughput sequencing data sets are available that allow a detailed quantitative characterization of the underlying interaction between methylated DNA and MBD proteins. Analyzing such data sets, we first found that two such proteins cannot bind closer to each other than 2 bp, consistent with structural models of the DNA-protein interaction. Second, the large amount of sequencing data allowed us to find rather weak but nevertheless clearly statistically significant sequence preferences for several bases around the required CpG. These results demonstrate that pulldown sequencing is a high-precision tool in characterizing DNA-protein interactions. This material is based upon work supported by the National Science Foundation under Grant No. DMR-1410172.
Touch Precision Modulates Visual Bias.
Misceo, Giovanni F; Jones, Maurice D
2018-01-01
The sensory precision hypothesis holds that different seen and felt cues about the size of an object resolve themselves in favor of the more reliable modality. To examine this precision hypothesis, 60 college students were asked to look at one size while manually exploring another unseen size either with their bare fingers or, to lessen the reliability of touch, with their fingers sleeved in rigid tubes. Afterwards, the participants estimated either the seen size or the felt size by finding a match from a visual display of various sizes. Results showed that the seen size biased the estimates of the felt size when the reliability of touch decreased. This finding supports the interaction between touch reliability and visual bias predicted by statistically optimal models of sensory integration.
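The statistically optimal (maximum-likelihood) integration models that the authors invoke predict that each cue is weighted by its relative reliability (inverse variance), so degrading touch shifts the combined size estimate toward vision. A small sketch of that prediction, with illustrative sizes and variances that are not taken from the study:

```python
def optimal_estimate(s_vision, var_vision, s_touch, var_touch):
    """Reliability-weighted (inverse-variance) cue combination."""
    w_v = (1 / var_vision) / (1 / var_vision + 1 / var_touch)
    w_t = 1.0 - w_v
    return w_v * s_vision + w_t * s_touch

# Hypothetical sizes (mm) and variances: sleeving the fingers makes touch less reliable
print(optimal_estimate(30.0, 4.0, 34.0, 4.0))    # bare fingers: 32.0 (equal weights)
print(optimal_estimate(30.0, 4.0, 34.0, 16.0))   # sleeved: 30.8 (estimate biased toward vision)
```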
A novel alignment-free method for detection of lateral genetic transfer based on TF-IDF.
Cong, Yingnan; Chan, Yao-Ban; Ragan, Mark A
2016-07-25
Lateral genetic transfer (LGT) plays an important role in the evolution of microbes. Existing computational methods for detecting genomic regions of putative lateral origin scale poorly to large data. Here, we propose a novel method based on TF-IDF (Term Frequency-Inverse Document Frequency) statistics to detect not only regions of lateral origin, but also their origin and direction of transfer, in sets of hierarchically structured nucleotide or protein sequences. This approach is based on the frequency distributions of k-mers in the sequences. If a set of contiguous k-mers appears sufficiently more frequently in another phyletic group than in its own, we infer that they have been transferred from the first group to the second. We performed rigorous tests of TF-IDF using simulated and empirical datasets. With the simulated data, we tested our method under different parameter settings for sequence length, substitution rate between and within groups and post-LGT, deletion rate, length of transferred region and k size, and found that we can detect LGT events with high precision and recall. Our method performs better than an established method, ALFY, which has high recall but low precision. Our method is efficient, with runtime increasing approximately linearly with sequence length.
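One simplified way to realize the TF-IDF idea on k-mers is sketched below: each phyletic group is treated as a "document", and a k-mer scores highly in a group when it is frequent there but rare in the other groups. The published method additionally requires contiguous runs of such k-mers and works on hierarchically structured groups to infer the direction of transfer, which this sketch does not reproduce:

```python
import math
from collections import Counter

def kmer_counts(seq, k):
    """Count all overlapping k-mers in one sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def tf_idf_scores(groups, k=8):
    """groups: dict mapping group name -> list of sequences.
    Returns {group: {kmer: tf-idf score}}, with groups playing the role of documents."""
    per_group = {g: sum((kmer_counts(s, k) for s in seqs), Counter())
                 for g, seqs in groups.items()}
    n_groups = len(groups)
    scores = {}
    for g, counts in per_group.items():
        total = sum(counts.values())
        scores[g] = {}
        for kmer, c in counts.items():
            tf = c / total                                            # term frequency in this group
            df = sum(1 for other in per_group.values() if kmer in other)
            idf = math.log(n_groups / df)                             # 0 if present in every group
            scores[g][kmer] = tf * idf
    return scores

# Placeholder sequences for two hypothetical phyletic groups
groups = {
    "donor":     ["ACGTACGTGGCCAATTACGTGGCA", "ACGTACGTGGCCAATTCCGT"],
    "recipient": ["TTGACCATGGCCAATTACGTATTA", "TTGACCATGGTTAACCGGAA"],
}
scores = tf_idf_scores(groups, k=6)
```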
A High-precision Trigonometric Parallax to an Ancient Metal-poor Globular Cluster
NASA Astrophysics Data System (ADS)
Brown, T. M.; Casertano, S.; Strader, J.; Riess, A.; VandenBerg, D. A.; Soderblom, D. R.; Kalirai, J.; Salinas, R.
2018-03-01
Using the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST), we have obtained a direct trigonometric parallax for the nearest metal-poor globular cluster, NGC 6397. Although trigonometric parallaxes have been previously measured for many nearby open clusters, this is the first parallax for an ancient metal-poor population—one that is used as a fundamental template in many stellar population studies. This high-precision measurement was enabled by the HST/WFC3 spatial-scanning mode, providing hundreds of astrometric measurements for dozens of stars in the cluster and also for Galactic field stars along the same sightline. We find a parallax of 0.418 ± 0.013 ± 0.018 mas (statistical, systematic), corresponding to a true distance modulus of 11.89 ± 0.07 ± 0.09 mag (2.39 ± 0.07 ± 0.10 kpc). The V luminosity at the stellar main-sequence turnoff implies an absolute cluster age of 13.4 ± 0.7 ± 1.2 Gyr. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs GO-13817, GO-14336, and GO-14773.
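The quoted distance and true distance modulus follow directly from the measured parallax; a quick arithmetic check (ignoring uncertainty propagation and any systematic corrections):

```python
import math

parallax_mas = 0.418
d_pc = 1000.0 / parallax_mas                 # distance in parsecs from parallax in milliarcseconds
mu = 5.0 * math.log10(d_pc) - 5.0            # true distance modulus
print(f"d = {d_pc/1000:.2f} kpc, mu = {mu:.2f} mag")   # ~2.39 kpc, ~11.89 mag, as quoted above
```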
Krahenbuhl, Jason T; Cho, Seok-Hwan; Irelan, Jon; Bansal, Naveen K
2016-08-01
Little peer-reviewed information is available regarding the accuracy and precision of the occlusal contact reproduction of digitally mounted stereolithographic casts. The purpose of this in vitro study was to evaluate the accuracy and precision of occlusal contacts among stereolithographic casts mounted by digital occlusal registrations. Four complete anatomic dentoforms were arbitrarily mounted on a semi-adjustable articulator in maximal intercuspal position and served as the 4 different simulated patients (SP). A total of 60 digital impressions and digital interocclusal registrations were made with a digital intraoral scanner to fabricate 15 sets of mounted stereolithographic (SLA) definitive casts for each dentoform. After the 60 SLA casts were received, polyvinyl siloxane (PVS) interocclusal records were made for each set. The occlusal contacts for each set of SLA casts were measured by recording the amount of light transmitted through the interocclusal records. To evaluate the accuracy between the SP and their respective SLA casts, the areas of actual contact (AC) and near contact (NC) were calculated. For the precision analysis, the coefficient of variation (CoV) was used. The data were analyzed with t tests for accuracy and the McKay and Vangel test for precision (α=.05). The accuracy analysis showed a statistically significant difference between the SP and the SLA casts of each dentoform (P<.05). For the AC in all dentoforms, a significant increase was found in the areas of actual contact of the SLA casts compared with the contacts present in the SP (P<.05). Conversely, for the NC in all dentoforms, a significant decrease was found in the occlusal contact areas of the SLA casts compared with the contacts in the SP (P<.05). The precision analysis showed different CoV values for the AC (5.8% to 8.8%) and NC (21.4% to 44.6%) of the digitally mounted SLA casts, indicating that the overall precision of the SLA casts was low. For the accuracy evaluation, statistically significant differences were found between the occlusal contacts of all digitally mounted SLA cast groups, with an increase in AC values and a decrease in NC values. For the precision assessment, the CoV values of the AC and NC showed the digitally articulated casts' inability to reproduce uniform occlusal contacts. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Statistical and Economic Techniques for Site-specific Nematode Management.
Liu, Zheng; Griffin, Terry; Kirkpatrick, Terrence L
2014-03-01
Recent advances in precision agriculture technologies and spatial statistics allow realistic, site-specific estimation of nematode damage to field crops and provide a platform for the site-specific delivery of nematicides within individual fields. This paper reviews the spatial statistical techniques that model correlations among neighboring observations and develop a spatial economic analysis to determine the potential of site-specific nematicide application. The spatial econometric methodology applied in the context of site-specific crop yield response contributes to closing the gap between data analysis and realistic site-specific nematicide recommendations and helps to provide a practical method of site-specifically controlling nematodes.
NASA Astrophysics Data System (ADS)
Luo, Hanjun; Ouyang, Zhengbiao; Liu, Qiang; Chen, Zhiliang; Lu, Hualan
2017-10-01
Cumulative pulse detection with an appropriate cumulative pulse number and threshold can improve the detection performance of a pulsed laser ranging system with a GM-APD. In this paper, based on Poisson statistics and the multi-pulse cumulative process, the cumulative detection probabilities and the factors that influence them are investigated. With the normalized probability distribution of each time bin, a theoretical model of the range accuracy and precision is established, and the factors limiting range accuracy and precision are discussed. The results show that cumulative pulse detection can produce a higher target detection probability and a lower false alarm probability. However, for a heavy noise level and extremely weak echo intensity, the false alarm suppression performance of cumulative pulse detection deteriorates quickly. The range accuracy and precision are another important measure of detection performance; the echo intensity and pulse width are the main factors influencing them, and higher range accuracy and precision are obtained with stronger echo intensity and narrower echo pulse width. For a 5-ns echo pulse width, when the echo intensity is larger than 10, a range accuracy and precision better than 7.5 cm can be achieved.
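Under a Poisson model, the per-pulse probability that the GM-APD fires in the signal bin depends on the mean signal and noise photoelectron numbers, and cumulative detection declares a target when at least a threshold number of the accumulated pulses fire in that bin. A minimal sketch along these lines, with illustrative numbers rather than the paper's parameter values:

```python
from math import exp
from scipy.stats import binom

def fire_prob(n_signal, n_noise):
    """Per-pulse probability that the GM-APD triggers in a given time bin (Poisson model)."""
    return 1.0 - exp(-(n_signal + n_noise))

def cumulative_prob(p_single, n_pulses, threshold):
    """Probability that at least `threshold` of `n_pulses` shots trigger in the bin."""
    return binom.sf(threshold - 1, n_pulses, p_single)

n_s, n_b = 2.0, 0.05          # mean signal / noise photoelectrons per bin (illustrative)
N, k = 20, 5                  # accumulated pulses and decision threshold
p_detect = cumulative_prob(fire_prob(n_s, n_b), N, k)   # target detection probability
p_false  = cumulative_prob(fire_prob(0.0, n_b), N, k)   # false alarm probability (noise only)
print(p_detect, p_false)
```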
Stochastic Downscaling of Digital Elevation Models
NASA Astrophysics Data System (ADS)
Rasera, Luiz Gustavo; Mariethoz, Gregoire; Lane, Stuart N.
2016-04-01
High-resolution digital elevation models (HR-DEMs) are extremely important for the understanding of small-scale geomorphic processes in Alpine environments. In the last decade, remote sensing techniques have experienced a major technological evolution, enabling fast and precise acquisition of HR-DEMs. However, sensors designed to measure elevation data still feature different spatial resolution and coverage capabilities. Terrestrial altimetry allows the acquisition of HR-DEMs with centimeter- to millimeter-level precision, but only within small spatial extents and often with dead-ground problems. Conversely, satellite radiometric sensors are able to gather elevation measurements over large areas but with limited spatial resolution. In the present study, we propose an algorithm to downscale low-resolution satellite-based DEMs using topographic patterns extracted from HR-DEMs derived, for example, from ground-based and airborne altimetry. The method consists of a multiple-point geostatistical simulation technique able to generate high-resolution elevation data from low-resolution digital elevation models (LR-DEMs). Initially, two collocated DEMs with different spatial resolutions serve as an input to construct a database of topographic patterns, which is also used to infer the statistical relationships between the two scales. High-resolution elevation patterns are then retrieved from the database to downscale a LR-DEM through a stochastic simulation process. The outputs of the simulations are multiple equally probable DEMs with higher spatial resolution that also depict the large-scale geomorphic structures present in the original LR-DEM. As these multiple models reflect the uncertainty related to the downscaling, they can be employed to quantify the uncertainty of phenomena that are dependent on fine topography, such as catchment hydrological processes. The proposed methodology is illustrated for a case study in the Swiss Alps. A swissALTI3D HR-DEM (with 5 m resolution) and an SRTM-derived LR-DEM from the Western Alps are used to downscale an SRTM-based LR-DEM from the eastern part of the Alps. The results show that the method is capable of generating multiple high-resolution synthetic DEMs that reproduce the spatial structure and statistics of the original DEM.
High-Precision Registration of Point Clouds Based on Sphere Feature Constraints.
Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter
2016-12-30
Point cloud registration is a key process in multi-view 3D measurements, and its precision directly affects the measurement precision. However, for point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. In this paper, a high-precision registration method based on sphere feature constraints is presented to overcome this difficulty. Known sphere features with constraints are used to construct virtual overlapping areas, which provide more accurate corresponding point pairs and reduce the influence of noise. The transformation parameters between the registered point clouds are then solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds can be reduced and high-precision registration is achieved. Simulation and experiments validate the proposed method.
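The abstract does not spell out how the transformation is computed from the sphere features; as a minimal sketch under that reading, the rotation and translation can be estimated from matched sphere centres with a weighted Kabsch/SVD solution (the sphere fitting itself and the weight function are assumed to be available from elsewhere in the pipeline):

```python
import numpy as np

def rigid_transform_from_spheres(src_centres, dst_centres, weights=None):
    """Illustrative sketch: estimate the rotation R and translation t that map
    sphere centres fitted in one point cloud onto the corresponding centres in
    the other, using a weighted Kabsch/SVD solution."""
    src, dst = np.asarray(src_centres, float), np.asarray(dst_centres, float)
    w = np.ones(len(src)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    src_c = (w[:, None] * src).sum(axis=0)          # weighted centroids
    dst_c = (w[:, None] * dst).sum(axis=0)
    H = (src - src_c).T @ (w[:, None] * (dst - dst_c))
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t                                      # dst ≈ R @ src + t
```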
A statistical model of the human core-temperature circadian rhythm
NASA Technical Reports Server (NTRS)
Brown, E. N.; Choe, Y.; Luithardt, H.; Czeisler, C. A.
2000-01-01
We formulate a statistical model of the human core-temperature circadian rhythm in which the circadian signal is modeled as a van der Pol oscillator, the thermoregulatory response is represented as a first-order autoregressive process, and the evoked effect of activity is modeled with a function specific for each circadian protocol. The new model directly links differential equation-based simulation models and harmonic regression analysis methods and permits statistical analysis of both static and dynamical properties of the circadian pacemaker from experimental data. We estimate the model parameters by using numerically efficient maximum likelihood algorithms and analyze human core-temperature data from forced desynchrony, free-run, and constant-routine protocols. By representing explicitly the dynamical effects of ambient light input to the human circadian pacemaker, the new model can estimate with high precision the correct intrinsic period of this oscillator (approximately 24 h) from both free-run and forced desynchrony studies. Although the van der Pol model approximates well the dynamical features of the circadian pacemaker, the optimal dynamical model of the human biological clock may have a harmonic structure different from that of the van der Pol oscillator.
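As a rough illustration of the model structure described above (a van der Pol circadian signal plus a first-order autoregressive thermoregulatory term), the following sketch simulates synthetic core-temperature data; all parameter values are placeholders, not the fitted values from the paper:

```python
import numpy as np

def simulate_core_temp(days=5, dt_min=1.0, period_h=24.2, stiffness=0.3,
                       ar_coef=0.9, ar_sd=0.02, mean_temp=37.0, amp=0.5, seed=0):
    """Toy simulation: van der Pol circadian oscillator plus AR(1) noise."""
    rng = np.random.default_rng(seed)
    n = int(days * 24 * 60 / dt_min)
    dt = dt_min / 60.0                       # time step in hours
    omega = 2 * np.pi / period_h             # angular frequency of the pacemaker
    x, v, ar = 1.0, 0.0, 0.0                 # oscillator and AR(1) states
    temp = np.empty(n)
    for i in range(n):
        # van der Pol dynamics: x'' - mu*(1 - x^2)*x' + omega^2 * x = 0
        a = stiffness * (1 - x**2) * v - omega**2 * x
        v += a * dt
        x += v * dt
        # first-order autoregressive thermoregulatory response
        ar = ar_coef * ar + rng.normal(0.0, ar_sd)
        temp[i] = mean_temp + amp * x + ar
    return temp
```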
Blynn, Emily; Ahmed, Saifuddin; Gibson, Dustin; Pariyo, George; Hyder, Adnan A
2017-01-01
In low- and middle-income countries (LMICs), historically, household surveys have been carried out by face-to-face interviews to collect survey data related to risk factors for noncommunicable diseases. The proliferation of mobile phone ownership and the access it provides in these countries offers a new opportunity to remotely conduct surveys with increased efficiency and reduced cost. However, the near-ubiquitous ownership of phones, high population mobility, and low cost require a re-examination of statistical recommendations for mobile phone surveys (MPS), especially when surveys are automated. As with landline surveys, random digit dialing remains the most appropriate approach to develop an ideal survey-sampling frame. Once the survey is complete, poststratification weights are generally applied to reduce estimate bias and to adjust for selectivity due to mobile ownership. Since weights increase design effects and reduce sampling efficiency, we introduce the concept of automated active strata monitoring to improve representativeness of the sample distribution to that of the source population. Although some statistical challenges remain, MPS represent a promising emerging means for population-level data collection in LMICs. PMID:28476726
Microbiological assay for the determination of meropenem in pharmaceutical dosage form.
Mendez, Andreas S L; Weisheimer, Vanessa; Oppe, Tércio P; Steppe, Martin; Schapoval, Elfrides E S
2005-04-01
Meropenem is a highly active carbapenem antibiotic used in the treatment of a wide range of serious infections. The present work reports a microbiological assay, applying the cylinder-plate method, for the determination of meropenem in powder for injection. The validation of the method yielded good results and included linearity, precision, accuracy and specificity. The assay is based on the inhibitory effect of meropenem upon the strain of Micrococcus luteus ATCC 9341 used as the test microorganism. The results of the assay were analysed statistically by analysis of variance (ANOVA) and were found to be linear (r = 0.9999) in the range of 1.5-6.0 µg ml⁻¹, precise (intra-assay: R.S.D. = 0.29; inter-assay: R.S.D. = 0.94) and accurate. A preliminary stability study of meropenem was performed to show that the microbiological assay is specific for the determination of meropenem in the presence of its degradation products. The degraded samples were also analysed by the HPLC method. The proposed method allows the quantitation of meropenem in pharmaceutical dosage form and can be used for drug analysis in routine quality control.
García-Estévez, Ignacio; Alcalde-Eon, Cristina; Escribano-Bailón, M Teresa
2017-08-09
The determination of the detailed flavanol composition in food matrices is not a simple task because of the structural similarities of monomers and, consequently, oligomers and polymers. The aim of this study was the development and validation of an HPLC-MS/MS-multiple reaction monitoring (MRM) method that would allow the accurate and precise quantification of catechins, gallocatechins, and oligomeric proanthocyanidins. The high correlation coefficients of the calibration curves (>0.993), the recoveries not statistically different from 100%, the good intra- and interday precisions (<5%), and the LOD and LOQ values, low enough to quantify flavanols in grapes, are good results from the method validation procedure. Its usefulness has also been tested by determining the detailed composition of Vitis vinifera L. cv. Rufete grapes. Seventy-two (38 nongalloylated and 34 galloylated) and 53 (24 procyanidins and 29 prodelphinidins) flavanols have been identified and quantified in grape seed and grape skin, respectively. The use of HCA and PCA on the detailed flavanol composition has allowed differentiation among Rufete clones.
Transportable Optical Lattice Clock with 7×10⁻¹⁷ Uncertainty.
Koller, S B; Grotti, J; Vogt, St; Al-Masoudi, A; Dörscher, S; Häfner, S; Sterr, U; Lisdat, Ch
2017-02-17
We present a transportable optical clock (TOC) with ⁸⁷Sr. Its complete characterization against a stationary lattice clock resulted in a systematic uncertainty of 7.4×10⁻¹⁷, which is currently limited by the statistics of the determination of the residual lattice light shift, and an instability of 1.3×10⁻¹⁵/√τ with an averaging time τ in seconds. Measurements confirm that the systematic uncertainty can be reduced to below the design goal of 1×10⁻¹⁷. To our knowledge, these are the best uncertainties and instabilities reported for any transportable clock to date. For autonomous operation, the TOC has been installed in an air-conditioned car trailer. It is suitable for chronometric leveling with submeter resolution as well as for intercontinental cross-linking of optical clocks, which is essential for a redefinition of the International System of Units (SI) second. In addition, the TOC will be used for high-precision experiments for fundamental science that are commonly tied to precise frequency measurements, and its development is an important step toward space-borne optical clocks.
Zhang, Juwei; Tan, Xiaojiang; Zheng, Pengbo
2017-01-01
Electromagnetic methods are commonly employed to detect wire rope discontinuities. However, determining the residual strength of wire rope based on the quantitative recognition of discontinuities remains problematic. We have designed a prototype device based on the residual magnetic field (RMF) of ferromagnetic materials, which overcomes the disadvantages associated with in-service inspections, such as large volume, inconvenient operation, low precision, and poor portability by providing a relatively small and lightweight device with improved detection precision. A novel filtering system consisting of the Hilbert-Huang transform and compressed sensing wavelet filtering is presented. Digital image processing was applied to achieve the localization and segmentation of defect RMF images. The statistical texture and invariant moment characteristics of the defect images were extracted as the input of a radial basis function neural network. Experimental results show that the RMF device can detect defects in various types of wire rope and prolong the service life of test equipment by reducing the friction between the detection device and the wire rope by accommodating a high lift-off distance. PMID:28300790
A survey of eight hot Jupiters in secondary eclipse using WIRCam at CFHT
NASA Astrophysics Data System (ADS)
Martioli, Eder; Colón, Knicole D.; Angerhausen, Daniel; Stassun, Keivan G.; Rodriguez, Joseph E.; Zhou, George; Gaudi, B. Scott; Pepper, Joshua; Beatty, Thomas G.; Tata, Ramarao; James, David J.; Eastman, Jason D.; Wilson, Paul Anthony; Bayliss, Daniel; Stevens, Daniel J.
2018-03-01
We present near-infrared high-precision photometry for eight transiting hot Jupiters observed during their predicted secondary eclipses. Our observations were carried out using the staring mode of the WIRCam instrument on the Canada-France-Hawaii Telescope (CFHT). We present the observing strategies and data reduction methods which delivered time series photometry with statistical photometric precision as low as 0.11 per cent. We performed a Bayesian analysis to model the eclipse parameters and systematics simultaneously. The measured planet-to-star flux ratios allowed us to constrain the thermal emission from the day side of these hot Jupiters, as we derived the planet brightness temperatures. Our results combined with previously observed eclipses reveal an excess in the brightness temperatures relative to the blackbody prediction for the equilibrium temperatures of the planets for a wide range of heat redistribution factors. We find a trend that this excess appears to be larger for planets with lower equilibrium temperatures. This may imply some additional sources of radiation, such as reflected light from the host star and/or thermal emission from residual internal heat from the formation of the planet.
Camera Calibration with Radial Variance Component Estimation
NASA Astrophysics Data System (ADS)
Mélykuti, B.; Kruck, E. J.
2014-11-01
Camera calibration plays an increasingly important role. Besides true digital aerial survey cameras, the photogrammetric market is dominated by a large number of non-metric digital cameras mounted on UAVs or other lightweight flying platforms. In-flight calibration of those systems plays a significant role in considerably enhancing the geometric accuracy of survey photos. Photo measurements are expected to be more precise at the center of images than along the edges or in the corners. Using statistical methods, the accuracy of photo measurements was analyzed as a function of the distance of points from the image center. This test provides a curve of measurement precision as a function of the photo radius. A large number of camera types were tested with dense, well-distributed point measurements in image space. The tests demonstrate a functional relationship between accuracy and radial distance and provide a method to check and enhance the geometric capability of the cameras with respect to these results.
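A simple way to obtain the precision-versus-radius curve described above is to bin image-measurement residuals by radial distance from the image centre and fit a smooth function to the per-bin scatter; the sketch below assumes residuals and image coordinates are already available from a bundle adjustment:

```python
import numpy as np

def radial_precision_curve(residuals, xy, center, n_bins=10, deg=2):
    """Illustrative sketch: per-bin standard deviation of residuals as a
    function of radial distance, with a low-order polynomial fit sigma(r)."""
    r = np.hypot(xy[:, 0] - center[0], xy[:, 1] - center[1])
    edges = np.linspace(0.0, r.max(), n_bins + 1)
    mids, sigmas = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (r >= lo) & (r < hi)
        if sel.sum() > 5:                      # need enough points per bin
            mids.append(0.5 * (lo + hi))
            sigmas.append(residuals[sel].std(ddof=1))
    coeffs = np.polyfit(mids, sigmas, deg)     # sigma(r) approximated by a polynomial
    return np.asarray(mids), np.asarray(sigmas), coeffs
```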
Cluster mass estimators from CMB temperature and polarization lensing
NASA Astrophysics Data System (ADS)
Hu, Wayne; DeDeo, Simon; Vale, Chris
2007-12-01
Upcoming Sunyaev-Zel'dovich surveys are expected to return ~10⁴ intermediate-mass clusters at high redshift. Their average masses must be known to the same accuracy as desired for the dark energy properties. Internal to the surveys, the cosmic microwave background (CMB) potentially provides a source for lensing mass measurements whose distance is precisely known and behind all clusters. We develop statistical mass estimators from six quadratic combinations of CMB temperature and polarization fields that can simultaneously recover large-scale structure and cluster mass profiles. The performance of these estimators on idealized Navarro-Frenk-White (NFW) clusters suggests that surveys with a ~1' beam and 10 μK-arcmin noise in uncontaminated temperature maps can make a ~10σ detection, or equivalently a ~10% mass measurement for each set of 10³ clusters. With internal or external acoustic-scale E-polarization measurements, the ET cross-correlation estimator can provide a stringent test for contaminants on a first detection at ~1/3 the significance. For surveys that reach below 3 μK-arcmin, the EB cross-correlation estimator should provide the most precise measurements and potentially the strongest control over contaminants.
Evaluation of a role functioning computer adaptive test (RF-CAT).
Anatchkova, M; Rose, M; Ware, J; Bjorner, J B
2013-06-01
To evaluate the validity and participants' acceptance of an online assessment of role function using computer adaptive test (RF-CAT). The RF-CAT and a set of established quality of life instruments were administered in a cross-sectional study in a panel sample (n = 444) recruited from the general population with over-selection of participants with selected self-report chronic conditions (n = 225). The efficiency, score accuracy, validity, and acceptability of the RF-CAT were evaluated and compared to existing measures. The RF-CAT with a stopping rule of six items with content balancing used 25 of the available bank items and was completed on average in 66 s. RF-CAT and the legacy tools scores were highly correlated (.64-.84) and successfully discriminated across known groups. The RF-CAT produced a more precise assessment over a wider range than the SF-36 Role Physical scale. Patients' evaluations of the RF-CAT system were positive overall, with no differences in ratings observed between the CAT and static assessments. The RF-CAT was feasible, more precise than the static SF-36 RP and equally acceptable to participants as legacy measures. In empirical tests of validity, the better performance of the CAT was not uniformly statistically significant. Further research exploring the relationship between gained precision and discriminant power of the CAT assessment is needed.
Code of Federal Regulations, 2010 CFR
2010-07-01
... by the Administrator. (1) Statistical analysis of initial water penetration data performed to support ASTM Designation D2099-00 indicates that poor quantitative precision is associated with this testing...
High-precision thermal expansion measurements using small Fabry-Perot etalons
NASA Astrophysics Data System (ADS)
Davis, Mark J.; Hayden, Joseph S.; Farber, Daniel L.
2007-09-01
Coefficient of thermal expansion (CTE) measurements using small Fabry-Perot etalons were conducted on high and low thermal expansion materials differing in CTE by a factor of nearly 400. The smallest detectable change in length was ~10⁻¹² m. The sample consisted of a mm-sized Fabry-Perot etalon equipped with spherical mirrors; the material-under-test served as the 2.5 mm-thick spacer between the mirrors. A heterodyne optical setup was used with one laser locked to an ~780 nm hyperfine line of Rb gas and the other locked to a resonance of the sample etalon; changes in the beat frequency between the two lasers as a function of temperature directly provided a CTE value. The measurement system was tested using the high-CTE SCHOTT optical glass N-KF9 (CTE = 9.5 ppm/K at 23 °C). Measurements conducted under reproducibility conditions using five identically-prepared N-KF9 etalons demonstrate a precision of 0.1 ppm/K; absolute values (accuracy) agree within 2-sigma errors with those made using mechanical dilatometers with 100-mm long sample rods. Etalon-based CTE measurements were also made on a high-CTE (~10.5 ppm/K), proprietary glass-ceramic used for high peak-pressure electrical feedthroughs and revealed statistically significant differences among parts made under what were assumed to be identical conditions. Finally, CTE measurements were made on etalons constructed from SCHOTT's ultra-low CTE Zerodur® glass-ceramic (CTE about -20 ppb/K at 50 °C for the material tested herein).
Feature recognition and detection for ancient architecture based on machine vision
NASA Astrophysics Data System (ADS)
Zou, Zheng; Wang, Niannian; Zhao, Peng; Zhao, Xuefeng
2018-03-01
Ancient architecture has very high historical and artistic value. Ancient buildings feature a wide variety of textures and decorative paintings, which carry a great deal of historical meaning. Research on and statistics of these compositional and decorative features therefore play an important role in subsequent studies. Until recently, however, such components have mainly been catalogued manually, which consumes a lot of labor and time and is inefficient. At present, supported by big data and GPU-accelerated training, machine vision with deep learning at its core has developed rapidly and is widely used in many fields. This paper proposes an approach to recognize and detect the textures, decorations and other features of ancient buildings based on machine vision. First, a large number of surface texture images of ancient building components are classified manually to form a sample set. Then, a convolutional neural network is trained on the samples to obtain a classification detector. Finally, its precision is verified.
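As a hedged illustration of the proposed pipeline (manual labelling of texture images, then training a convolutional classifier), a minimal PyTorch sketch might look as follows; the architecture, image size and class count are assumptions, not the network used by the authors:

```python
import torch
import torch.nn as nn

class TextureCNN(nn.Module):
    """Minimal convolutional classifier for component-texture images,
    assuming 3-channel inputs resized to 64x64 and n_classes texture types."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One training step on a (hypothetical) batch of labelled texture images.
model = TextureCNN(n_classes=10)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
images, labels = torch.randn(8, 3, 64, 64), torch.randint(0, 10, (8,))
loss = loss_fn(model(images), labels)
optimiser.zero_grad()
loss.backward()
optimiser.step()
```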
Statistical U-Th dating results of speleothem from south Europe and the orbital-scale implication
NASA Astrophysics Data System (ADS)
Hu, H. M.
2016-12-01
Reconstructing the hydroclimate of the Mediterranean on orbital time scales helps improve our understanding of the interaction between orbital forcing and Northern Hemisphere climate. We collected 180 speleothem subsamples from Observatoire Cave (Monaco), Prince Cave (southern France), Chateaueuf Cave (southern France), Arago Cave (southern France), and Basura Cave (northern Italy) from 2013 to 2015 C.E. Uranium-thorium dating was conducted in the High-Precision Mass Spectrometry and Environment Change Laboratory (HISPEC), National Taiwan University. The results show that most of the speleothems formed during interglacial periods, particularly in marine isotope stages (MIS) 1, 5, and 11. However, only a few speleothems were dated between 180 and 250 thousand years ago (ka). This interval is approximately equivalent to MIS 7, a period with contrasting orbital parameters compared to MIS 1, 5, and 11. Our statistical dating results imply that the orbital-scale humid/dry conditions in southern Europe could be dominantly controlled by orbital forcing.
Emancipation through interaction--how eugenics and statistics converged and diverged.
Louçã, Francisco
2009-01-01
The paper discusses the scope and influence of eugenics in defining the scientific programme of statistics and the impact of the evolution of biology on social scientists. It argues that eugenics was instrumental in providing a bridge between sciences, and therefore created both the impulse and the institutions necessary for the birth of modern statistics in its applications first to biology and then to the social sciences. Looking at the question from the point of view of the history of statistics and the social sciences, and mostly concentrating on evidence from the British debates, the paper discusses how these disciplines became emancipated from eugenics precisely because of the inspiration of biology. It also relates how social scientists were fascinated and perplexed by the innovations taking place in statistical theory and practice.
Tenkanen, Henrikki; Di Minin, Enrico; Heikinheimo, Vuokko; Hausmann, Anna; Herbst, Marna; Kajala, Liisa; Toivonen, Tuuli
2017-12-14
Social media data is increasingly used as a proxy for human activity in different environments, including protected areas, where collecting visitor information is often laborious and expensive, but important for management and marketing. Here, we compared data from Instagram, Twitter and Flickr, and assessed systematically how park popularity and temporal visitor counts derived from social media data perform against high-precision visitor statistics in 56 national parks in Finland and South Africa in 2014. We show that social media activity is highly associated with park popularity, and social media-based monthly visitation patterns match relatively well with the official visitor counts. However, there were considerable differences between platforms as Instagram clearly outperformed Twitter and Flickr. Furthermore, we show that social media data tend to perform better in more visited parks, and should always be used with caution. Based on stakeholder discussions we identified potential reasons why social media data and visitor statistics might not match: the geography and profile of the park, the visitor profile, and sudden events. Overall the results are encouraging in broader terms: Over 60% of the national parks globally have Twitter or Instagram activity, which could potentially inform global nature conservation.
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
Precision measurements of solar energetic particle elemental composition
NASA Technical Reports Server (NTRS)
Breneman, H.; Stone, E. C.
1985-01-01
Using data from the Cosmic Ray Subsystem (CRS) aboard the Voyager 1 and 2 spacecraft, solar energetic particle abundances or upper limits for all elements with 3 ≤ Z ≤ 30 from a combined set of 10 solar flares during the 1977 to 1982 time period were determined. Statistically meaningful abundances have been determined for the first time for several rare elements including P, Cl, K, Ti and Mn, while the precision of the mean abundances for the more abundant elements has been improved by typically a factor of approximately 3 over previously reported values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B.
The precision of experimental double-beta (ββ) decay half-lives and their uncertainties is reanalyzed. The method of Benford's distributions has been applied to nuclear reaction, structure and decay data sets. The first-digit distribution trend for ββ-decay T1/2(2ν) values is consistent with large nuclear reaction and structure data sets and provides validation of the experimental half-lives. A complementary analysis of the decay uncertainties indicates deficiencies due to the small size of statistical samples and the incomplete collection of experimental information. Further experimental and theoretical efforts would lead toward more precise values of ββ-decay half-lives and nuclear matrix elements.
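A first-digit (Benford) check of the kind applied above can be sketched as follows; the chi-square comparison is one simple choice of test statistic and is not necessarily the statistic used in the paper:

```python
import numpy as np
from scipy import stats

def benford_first_digit_test(values):
    """Compare the leading-digit frequencies of positive measured values
    (e.g. half-lives) with Benford's law via a chi-square statistic."""
    values = np.asarray(values, dtype=float)
    first_digits = np.array([int(f"{v:.6e}"[0]) for v in values if v > 0])
    observed = np.array([(first_digits == d).sum() for d in range(1, 10)])
    benford = np.log10(1 + 1 / np.arange(1, 10))      # P(d) = log10(1 + 1/d)
    expected = benford * observed.sum()
    chi2, p = stats.chisquare(observed, expected)
    return observed, expected, chi2, p
```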
Gao, Xing; He, Yao; Hu, Hongpu
2017-01-01
Allowing for differences in economic development, degree of informatization, characteristics of the population served, and other factors among community health service organizations, a precision fund appropriation system for community health services based on performance management is designed, which can support the government in appropriating financial funds scientifically and rationally for primary care. The system is flexible and practical and comprises five subsystems: data acquisition, parameter setting, fund appropriation, statistical analysis and user management.
Olah, Emoke; Poto, Laszlo; Hegyi, Peter; Szabo, Imre; Hartmann, Petra; Solymar, Margit; Petervari, Erika; Balasko, Marta; Habon, Tamas; Rumbus, Zoltan; Tenk, Judit; Rostas, Ildiko; Weinberg, Jordan; Romanovsky, Andrej A; Garami, Andras
2018-04-21
Therapeutic hypothermia has been investigated repeatedly as a tool to improve the outcome of severe traumatic brain injury (TBI), but previous clinical trials and meta-analyses found contradictory results. We aimed to determine the effectiveness of therapeutic whole-body hypothermia on the mortality of adult patients with severe TBI by using a novel approach to meta-analysis. We searched the PubMed, EMBASE, and Cochrane Library databases from inception to February 2017. The identified human studies were evaluated with regard to statistical, clinical, and methodological design to ensure inter-study homogeneity. We extracted data on TBI severity, body temperature, mortality, and cooling parameters; we then calculated the cooling index, an integrated measure of therapeutic hypothermia. A forest plot of all identified studies showed no difference in the outcome of TBI between cooled and non-cooled patients, but inter-study heterogeneity was high. In contrast, a meta-analysis of RCTs that were homogeneous with regard to statistical and clinical design and that precisely reported the cooling protocol showed a decreased odds ratio for mortality with therapeutic hypothermia compared to no cooling. As independent factors, milder and longer cooling and rewarming at < 0.25°C/h were associated with better outcome. Therapeutic hypothermia was beneficial only if the cooling index (an integrated measure of the cooling parameters) was sufficiently high. We conclude that high methodological and statistical inter-study heterogeneity could underlie the contradictory results obtained in previous studies. By analyzing methodologically homogeneous studies, we show that cooling improves the outcome of severe TBI and that this beneficial effect depends on certain cooling parameters and on their integrated measure, the cooling index.
NASA Astrophysics Data System (ADS)
Athron, Peter; Balázs, Csaba; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Kvellestad, Anders; McKay, James; Putze, Antje; Rogan, Chris; Scott, Pat; Weniger, Christoph; White, Martin
2018-01-01
We present the GAMBIT modules SpecBit, DecayBit and PrecisionBit. Together they provide a new framework for linking publicly available spectrum generators, decay codes and other precision observable calculations in a physically and statistically consistent manner. This allows users to automatically run various combinations of existing codes as if they are a single package. The modular design allows software packages fulfilling the same role to be exchanged freely at runtime, with the results presented in a common format that can easily be passed to downstream dark matter, collider and flavour codes. These modules constitute an essential part of the broader GAMBIT framework, a major new software package for performing global fits. In this paper we present the observable calculations, data, and likelihood functions implemented in the three modules, as well as the conventions and assumptions used in interfacing them with external codes. We also present 3-BIT-HIT, a command-line utility for computing mass spectra, couplings, decays and precision observables in the MSSM, which shows how the three modules can easily be used independently of GAMBIT.
Holman, B W B; Alvarenga, T I R C; van de Ven, R J; Hopkins, D L
2015-07-01
The Warner-Bratzler shear force (WBSF) of 335 lamb m. longissimus lumborum (LL) caudal and cranial ends was measured to examine and simulate the effect of replicate number (r: 1-8) on the precision of mean WBSF estimates and to compare LL caudal and cranial end WBSF means. All LL were sourced from two experimental flocks as part of the Information Nucleus slaughter programme (CRC for Sheep Industry Innovation) and analysed using a Lloyd Texture analyser with a Warner-Bratzler blade attachment. WBSF data were natural logarithm (ln) transformed before statistical analysis. Mean ln(WBSF) precision improved as r increased; however the practical implications support an r equal to 6, as precision improves only marginally with additional replicates. Increasing LL sample replication results in better ln(WBSF) precision compared with increasing r, provided that sample replicates are removed from the same LL end. Cranial end mean WBSF was 11.2 ± 1.3% higher than the caudal end. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Efficient summary statistical representation when change localization fails.
Haberman, Jason; Whitney, David
2011-10-01
People are sensitive to the summary statistics of the visual world (e.g., average orientation/speed/facial expression). We readily derive this information from complex scenes, often without explicit awareness. Given the fundamental and ubiquitous nature of summary statistical representation, we tested whether this kind of information is subject to the attentional constraints imposed by change blindness. We show that information regarding the summary statistics of a scene is available despite limited conscious access. In a novel experiment, we found that while observers can suffer from change blindness (i.e., not localize where change occurred between two views of the same scene), observers could nevertheless accurately report changes in the summary statistics (or "gist") about the very same scene. In the experiment, observers saw two successively presented sets of 16 faces that varied in expression. Four of the faces in the first set changed from one emotional extreme (e.g., happy) to another (e.g., sad) in the second set. Observers performed poorly when asked to locate any of the faces that changed (change blindness). However, when asked about the ensemble (which set was happier, on average), observer performance remained high. Observers were sensitive to the average expression even when they failed to localize any specific object change. That is, even when observers could not locate the very faces driving the change in average expression between the two sets, they nonetheless derived a precise ensemble representation. Thus, the visual system may be optimized to process summary statistics in an efficient manner, allowing it to operate despite minimal conscious access to the information presented.
Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David
2015-01-01
New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.
NASA Astrophysics Data System (ADS)
Xiong, Qiufen; Hu, Jianglin
2013-05-01
The minimum/maximum (Min/Max) temperature in the Yangtze River valley is decomposed into a climatic mean and an anomaly component. A spatial interpolation is developed which combines a 3D thin-plate spline scheme for the climatological mean with a 2D Barnes scheme for the anomaly component to create a daily Min/Max temperature dataset. The climatic mean field is obtained with the 3D thin-plate spline scheme because the decrease of Min/Max temperature with elevation is robust and reliable on long time scales. The anomaly field is only weakly related to elevation variation, and the anomaly component is adequately analyzed by the 2D Barnes procedure, which is computationally efficient and readily tunable. With this hybrid interpolation method, a daily Min/Max temperature dataset covering the domain from 99°E to 123°E and from 24°N to 36°N with 0.1° longitudinal and latitudinal resolution is obtained by utilizing daily Min/Max temperature data from three kinds of station observations (national reference climatological stations, basic meteorological observing stations and ordinary meteorological observing stations) in 15 provinces and municipalities in the Yangtze River valley from 1971 to 2005. The error of the gridded dataset is assessed by examining cross-validation statistics. The daily Min/Max temperature interpolation shows a high correlation coefficient (0.99) and interpolation efficiency (0.98), with a mean bias error of 0.00 °C. For the maximum temperature, the root mean square error is 1.1 °C and the mean absolute error is 0.85 °C; for the minimum temperature, the root mean square error is 0.89 °C and the mean absolute error is 0.67 °C. Thus, the new dataset provides the distribution of Min/Max temperature over the Yangtze River valley as realistic, continuous gridded data with 0.1° × 0.1° spatial resolution and daily temporal resolution. The primary factors influencing the dataset precision are elevation and terrain complexity; in general, the gridded dataset has relatively high precision in plains and flatlands and relatively low precision in mountainous areas.
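As a rough sketch of the hybrid scheme (3D thin-plate spline for the climatological mean, Barnes-style weighting for the anomaly), the following uses SciPy's thin-plate-spline radial basis interpolator and a single-pass Gaussian-weighted analysis; the smoothing and length-scale values are placeholders, and a full Barnes analysis would normally iterate over several passes:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def interpolate_daily_temperature(stn_lon, stn_lat, stn_elev, clim_mean, anomaly,
                                  grid_lon, grid_lat, grid_elev):
    """Hybrid interpolation sketch: 3D thin-plate spline for the climatological
    mean (lon, lat, elevation) plus a Gaussian-weighted anomaly analysis.
    All inputs are hypothetical station/grid arrays."""
    # 3D thin-plate spline for the climatological mean field
    pts = np.column_stack([stn_lon, stn_lat, stn_elev])
    grid = np.column_stack([grid_lon.ravel(), grid_lat.ravel(), grid_elev.ravel()])
    mean_field = RBFInterpolator(pts, clim_mean, kernel="thin_plate_spline",
                                 smoothing=1e-3)(grid)

    # Single-pass Gaussian-weighted (Barnes-style) analysis of the 2D anomaly
    length_scale = 1.0                                   # degrees, assumed
    d2 = ((grid[:, :2][:, None, :] - pts[None, :, :2]) ** 2).sum(axis=2)
    w = np.exp(-d2 / length_scale**2)
    anom_field = (w @ anomaly) / w.sum(axis=1)

    return (mean_field + anom_field).reshape(grid_lon.shape)
```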
High precision locating control system based on VCM for Talbot lithography
NASA Astrophysics Data System (ADS)
Yao, Jingwei; Zhao, Lixin; Deng, Qian; Hu, Song
2016-10-01
To meet the high-precision and high-efficiency requirements of Z-direction positioning in Talbot lithography, a control system based on a voice coil motor (VCM) was designed. In this paper, a mathematical model of the VCM is built and its motion characteristics are analyzed. A double closed-loop control strategy comprising a position loop and a current loop was implemented. The current loop is handled by the driver in order to achieve fast current following, while the position loop is implemented on a digital signal processor (DSP) with position feedback provided by high-precision linear scales. Feedforward control and proportional-integral-derivative (PID) position feedback control were applied to compensate for dynamic lag and improve the response speed of the system. The high precision and efficiency of the system were verified by simulation and experiments. The results demonstrate that the performance of the Z-direction gantry is markedly improved, with high precision, quick response, strong real-time behavior, and easy extension to higher precision.
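The position-loop logic described above (PID feedback plus feedforward to compensate dynamic lag) can be sketched as a simple discrete-time controller; the gains, sample time and feedforward form are illustrative assumptions, not the values used on the DSP:

```python
class PIDWithFeedforward:
    """Discrete position controller sketch: PID feedback on the position error
    plus a velocity feedforward term to anticipate the commanded motion."""
    def __init__(self, kp=2.0, ki=0.5, kd=0.05, kff=1.0, dt=1e-3):
        self.kp, self.ki, self.kd, self.kff, self.dt = kp, ki, kd, kff, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, setpoint_rate, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        feedback = self.kp * error + self.ki * self.integral + self.kd * derivative
        feedforward = self.kff * setpoint_rate      # compensates dynamic lag
        return feedback + feedforward               # command passed to the current loop
```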
Study on high-precision measurement of long radius of curvature
NASA Astrophysics Data System (ADS)
Wu, Dongcheng; Peng, Shijun; Gao, Songtao
2016-09-01
It is difficult to measure the radius of curvature (ROC) with high precision because many factors affect the measurement accuracy, and for long radii of curvature some of these factors become dominant. This paper first examines which factors are related to the long measurement distance and analyses the uncertainty of the measurement accuracy. It then studies the influence of the support condition and of the adjustment errors at the cat's eye and confocal positions. Finally, a convex surface with a 1055 micrometer radius of curvature is measured in a high-precision laboratory. Experimental results show that a proper, stable support (three-point support) can guarantee high-precision measurement of the radius of curvature, and that calibrating the gain at the cat's eye and confocal positions helps to locate these positions precisely and thus increase the measurement accuracy. With this process, high-precision measurement of a long ROC is realized.
Optimetrics for Precise Navigation
NASA Technical Reports Server (NTRS)
Yang, Guangning; Heckler, Gregory; Gramling, Cheryl
2017-01-01
Optimetrics for Precise Navigation will be implemented on existing optical communication links, with ranging and Doppler measurements conducted over the communication data frames and clock. The measurement accuracy is two orders of magnitude better than TDRSS. The high optical carrier frequency provides immunity from the ionospheric and interplanetary plasma noise floor, which limits the performance of RF tracking, and the high antenna gain reduces terminal size and volume, enabling high-precision tracking on a Cubesat and on deep space smallsats. High optical pointing precision provides spacecraft orientation information, and only minimal additional hardware is needed to implement precise optimetrics over the optical comm link. Continuous optical carrier phase measurement will enable the system presented here to accept future optical frequency standards with much higher clock accuracy.
A Measurement of the Absolute Reactor Antineutrino Flux and Spectrum at Daya Bay
NASA Astrophysics Data System (ADS)
An, Fengpeng
2017-12-01
The Daya Bay Reactor Neutrino Experiment uses an array of eight underground detectors to study antineutrinos from six reactor cores with different baselines. Since the start of data-taking in late 2011, Daya Bay has collected the largest sample of reactor antineutrino events to date, and has made the most precise measurement of the neutrino oscillation parameters sin²2θ₁₃ and Δm²ₑₑ. Using the data from the four detectors in the near experimental halls, Daya Bay has made a high-statistics measurement of the absolute reactor antineutrino flux and spectrum. In this paper we will present this measurement and its comparison to predictions based on different flux models.
NASA Astrophysics Data System (ADS)
Martin, Jeffery
2016-09-01
The free neutron is an excellent laboratory for searches for physics beyond the standard model. Ultracold neutrons (UCN) are free neutrons that can be confined to material, magnetic, and gravitational traps. UCN are compelling for experiments requiring long observation times, high polarization, or low energies. The challenge of experiments has been to create enough UCN to reach the statistical precision required. Production techniques involving neutron interactions with condensed matter systems have resulted in some successes, and new UCN sources are being pursued worldwide to exploit higher UCN densities offered by these techniques. I will review the physics of how the UCN sources work, along with the present status of the world's efforts. research supported by NSERC, CFI, and CRC.
Development and Characterization of a Low-Pressure Calibration System for Hypersonic Wind Tunnels
NASA Technical Reports Server (NTRS)
Green, Del L.; Everhart, Joel L.; Rhode, Matthew N.
2004-01-01
Minimization of uncertainty is essential for accurate ESP measurements at very low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources requires a well defined and controlled calibration method. A calibration system has been constructed and environmental control software developed to control experimentation to eliminate human induced error sources. The initial stability study of the calibration system shows a high degree of measurement accuracy and precision in temperature and pressure control. Control manometer drift and reference pressure instabilities induce uncertainty into the repeatability of voltage responses measured from the PSI System 8400 between calibrations. Methods of improving repeatability are possible through software programming and further experimentation.
NASA Technical Reports Server (NTRS)
Leake, M. A.
1982-01-01
Various linear and areal measurements of Mercury's first quadrant which were used in geological map preparation, map analysis, and statistical surveys of crater densities are discussed. Accuracy of each method rests on the determination of the scale of the photograph, i.e., the conversion factor between distances on the planet (in km) and distances on the photograph (in cm). Measurement errors arise due to uncertainty in Mercury's radius, poor resolution, poor coverage, high Sun angle illumination in the limb regions, planetary curvature, limited precision in measuring instruments, and inaccuracies in the printed map scales. Estimates are given for these errors.
User's Manual for Downscaler Fusion Software
Recently, a series of 3 papers has been published in the statistical literature that details the use of downscaling to obtain more accurate and precise predictions of air pollution across the conterminous U.S. This downscaling approach combines CMAQ gridded numerical model output...
15 CFR 200.103 - Consulting and advisory services.
Code of Federal Regulations, 2013 CFR
2013-01-01
...., details of design and construction, operational aspects, unusual or extreme conditions, methods of statistical control of the measurement process, automated acquisition of laboratory data, and data reduction... group seminars on the precision measurement of specific types of physical quantities, offering the...
15 CFR 200.103 - Consulting and advisory services.
Code of Federal Regulations, 2011 CFR
2011-01-01
...., details of design and construction, operational aspects, unusual or extreme conditions, methods of statistical control of the measurement process, automated acquisition of laboratory data, and data reduction... group seminars on the precision measurement of specific types of physical quantities, offering the...
A roughness-corrected index of relative bed stability for regional stream surveys
Quantitative regional assessments of streambed sedimentation and its likely causes are hampered because field investigations typically lack the requisite sample size, measurements, or precision for sound geomorphic and statistical interpretation. We adapted an index of relative b...
Study on manufacturing method of optical surface with high precision in angle and surface
NASA Astrophysics Data System (ADS)
Yu, Xin; Li, Xin; Yu, Ze; Zhao, Bin; Zhang, Xuebin; Sun, Lipeng; Tong, Yi
2016-10-01
This paper studies a manufacturing process for optical surfaces with high precision in both angle and surface figure. The relationship between angle precision and surface figure is analyzed theoretically, the technical indicators are converted into measurable quantities, the optical-cement method is applied, and the optical-cement tooling is designed. The experiment was completed successfully and the processing method was verified; it can also be used in the manufacturing of other optical surfaces with similarly high precision in angle and surface figure.
Properties of an eclipsing double white dwarf binary NLTT 11748
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplan, David L.; Walker, Arielle N.; Marsh, Thomas R.
2014-01-10
We present high-quality ULTRACAM photometry of the eclipsing detached double white dwarf binary NLTT 11748. This system consists of a carbon/oxygen white dwarf and an extremely low mass (<0.2 M⊙) helium-core white dwarf in a 5.6 hr orbit. To date, such extremely low-mass white dwarfs, which can have thin, stably burning outer layers, have been modeled via poorly constrained atmosphere and cooling calculations where uncertainties in the detailed structure can strongly influence the eventual fates of these systems when mass transfer begins. With precise (individual precision ≈1%), high-cadence (≈2 s), multicolor photometry of multiple primary and secondary eclipses spanning >1.5 yr, we constrain the masses and radii of both objects in the NLTT 11748 system to a statistical uncertainty of a few percent. However, we find that overall uncertainty in the thickness of the envelope of the secondary carbon/oxygen white dwarf leads to a larger (≈13%) systematic uncertainty in the primary He WD's mass. Over the full range of possible envelope thicknesses, we find that our primary mass (0.136-0.162 M⊙) and surface gravity (log g = 6.32-6.38; radii are 0.0423-0.0433 R⊙) constraints do not agree with previous spectroscopic determinations. We use precise eclipse timing to detect the Rømer delay at 7σ significance, providing an additional weak constraint on the masses and limiting the eccentricity to e cos ω = (−4 ± 5) × 10⁻⁵. Finally, we use multicolor data to constrain the secondary's effective temperature (7600 ± 120 K) and cooling age (1.6-1.7 Gyr).
NASA Astrophysics Data System (ADS)
Zavarygin, E. O.; Webb, J. K.; Dumont, V.; Riemer-Sørensen, S.
2018-04-01
The spectrum of the zem = 2.63 quasar Q1009+2956 has been observed extensively on the Keck telescope. The Lyman limit absorption system at zabs = 2.504 was previously used to measure D/H by Burles & Tytler using a spectrum with signal to noise approximately 60 per pixel in the continuum near Ly α at zabs = 2.504. The larger dataset now available combines to form an exceptionally high signal to noise spectrum, around 147 per pixel. Several heavy element absorption lines are detected in this LLS, providing strong constraints on the kinematic structure. We explore a suite of absorption system models and find that the deuterium feature is likely to be contaminated by weak interloping Ly α absorption from a low column density H I cloud, reducing the expected D/H precision. We find D/H = (2.48 +0.41/−0.35) × 10⁻⁵ for this system. Combining this new measurement with others from the literature and applying the method of Least Trimmed Squares to a statistical sample of 15 D/H measurements results in a "reliable" sample of 13 values. This sample yields a primordial deuterium abundance of (D/H)p = (2.545 ± 0.025) × 10⁻⁵. The corresponding mean baryonic density of the Universe is Ωbh² = 0.02174 ± 0.00025. The quasar absorption data is of the same precision as, and marginally inconsistent with, the 2015 CMB Planck (TT+lowP+lensing) measurement, Ωbh² = 0.02226 ± 0.00023. Further quasar and more precise nuclear data are required to establish whether this is a random fluctuation.
Zhang, X Y; Li, H; Zhao, Y J; Wang, Y; Sun, Y C
2016-07-01
To quantitatively evaluate the quality and accuracy of three-dimensional (3D) data acquired by two kinds of structured-light intra-oral scanners used to scan typical tooth crown preparations. A model with eight typical tooth crown preparations was scanned three times with each of two structured-light intra-oral scanners (A and B) to form the test groups. A high-precision model scanner was used to scan the model as the true value group. The data above the cervical margin were extracted. Quality indexes including non-manifold edges, self-intersections, highly-creased edges, spikes, small components, small tunnels, small holes and the number of triangles were measured with the Mesh Doctor tool in Geomagic Studio 2012. The scanned data of the test groups were aligned to the data of the true value group, and 3D deviations of the test groups relative to the true value group were measured for each scanned point, each preparation and each group. An independent-samples Mann-Whitney U test was applied to analyze the 3D deviations for each scanned point of groups A and B, and correlation analysis was applied to the index values and 3D deviation values. The total number of spikes in group A was 96, while in group B and the true value group it was 5 and 0, respectively. Trueness was 8.0 (8.3) μm for group A and 9.5 (11.5) μm for group B (P > 0.05). The correlation of the number of spikes with data precision in group A was r = 0.46. In this study, the quality of scanner B is better than that of scanner A, while the difference in accuracy is not statistically significant. There is a correlation between quality and data precision for the data scanned with scanner A.
Investigation of fast ion pressure effects in ASDEX Upgrade by spectral MSE measurements
NASA Astrophysics Data System (ADS)
Reimer, René; Dinklage, Andreas; Wolf, Robert; Dunne, Mike; Geiger, Benedikt; Hobirk, Jörg; Reich, Matthias; ASDEX Upgrade Team; McCarthy, Patrick J.
2017-04-01
High-precision measurements of fast ion effects on the magnetic equilibrium in the ASDEX Upgrade tokamak have been conducted in a high-power (10 MW) neutral-beam injection discharge. An improved analysis of the spectral motional Stark effect data based on forward modeling, including the Zeeman effect, fine structure and a non-statistical sub-level distribution, revealed changes on the order of 1% in |B|. The results were found to be consistent with results from the equilibrium solver CLISTE. The measurements allowed us to derive a fast ion pressure fraction of Δp_FI/p_mhd ≈ 10%, and the variations of the fast ion pressure are consistent with calculations of the transport code TRANSP. The results advance the understanding of fast ion confinement and magneto-hydrodynamic stability in the presence of fast ions.
Changing computing paradigms towards power efficiency.
Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro
2014-06-28
Power awareness is fast becoming immensely important in computing, ranging from traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations, which finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
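The combination of low- and high-precision arithmetic for linear systems is commonly realised as iterative refinement; the following generic sketch (not the authors' implementation) factorises in single precision and corrects the solution with double-precision residuals:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, iters=5):
    """Classic iterative refinement: cheap LU factorisation in float32,
    residual computation and solution updates accumulated in float64."""
    lu = lu_factor(A.astype(np.float32))                       # low-precision LU
    x = lu_solve(lu, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                          # residual in float64
        x += lu_solve(lu, r.astype(np.float32)).astype(np.float64)
    return x

# Example usage on a small, well-conditioned random system.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100)) + 100 * np.eye(100)
b = rng.standard_normal(100)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))
```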
Method of high precision interval measurement in pulse laser ranging system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong
2013-09-01
Laser ranging is well suited to laser systems because it offers high measurement precision, fast measurement speed, no need for cooperative targets, and strong resistance to electromagnetic interference; the time interval measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is determined by the precision of the time interval measurement. The principle and structure of the laser ranging system are introduced, and a method of high-precision time interval measurement for a pulsed laser ranging system is established in this paper. Based on an analysis of the factors that affect the range measurement precision, a pulse rising-edge discriminator is adopted to produce the timing marks for start-stop time discrimination, and a TDC-GP2 high-precision interval measurement system based on a TMS320F2812 DSP is designed to improve the measurement precision. Experimental results indicate that the time interval measurement method in this paper achieves higher range accuracy. Compared with traditional time interval measurement systems, the method simplifies the system design and reduces the influence of bad weather conditions; furthermore, it satisfies the requirements of low cost and miniaturization.
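The link between timing precision and range precision follows directly from the two-way time-of-flight relation; in the sketch below, the 65 ps bin width is only an assumed illustrative value for a TDC-GP2-class converter:

```python
# range = c * t / 2, so a timing jitter dt maps to a range error of c * dt / 2.
C = 299_792_458.0                      # speed of light, m/s

def range_from_interval(t_seconds):
    """Convert a measured round-trip time interval into a one-way range."""
    return C * t_seconds / 2.0

# An assumed TDC bin width of 65 ps corresponds to roughly 1 cm of range.
print(range_from_interval(65e-12))     # ~0.0097 m per timing bin
```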
Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C
2018-03-07
Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
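For concreteness, a few of the summary-statistic approximations referred to above can be written as simple formulas; these are generic textbook-style approximations (assuming roughly normal data), not necessarily the exact estimators evaluated in the review:

```python
def sd_from_range(minimum, maximum):
    """Rough estimate of a missing SD from the reported range."""
    return (maximum - minimum) / 4.0

def sd_from_iqr(q1, q3):
    """Rough estimate of a missing SD from the interquartile range (IQR ~ 1.35 SD)."""
    return (q3 - q1) / 1.35

def mean_from_quartiles(q1, median, q3):
    """Estimate of a missing mean from the median and quartiles."""
    return (q1 + median + q3) / 3.0

# Hypothetical trial reporting median 12.0 (IQR 8.0-18.0), range 2.0-30.0:
print(mean_from_quartiles(8.0, 12.0, 18.0))   # ~12.7
print(sd_from_range(2.0, 30.0))               # 7.0
print(sd_from_iqr(8.0, 18.0))                 # ~7.4
```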
The statistical challenge of constraining the low-mass IMF in Local Group dwarf galaxies
NASA Astrophysics Data System (ADS)
El-Badry, Kareem; Weisz, Daniel R.; Quataert, Eliot
2017-06-01
We use Monte Carlo simulations to explore the statistical challenges of constraining the characteristic mass (mc) and width (σ) of a lognormal sub-solar initial mass function (IMF) in Local Group dwarf galaxies using direct star counts. For a typical Milky Way (MW) satellite (MV = -8), jointly constraining mc and σ to a precision of ≲ 20 per cent requires that observations be complete to ≲ 0.2 M⊙, if the IMF is similar to the MW IMF. A similar statistical precision can be obtained if observations are only complete down to 0.4 M⊙, but this requires measurement of nearly 100× more stars, and thus, a significantly more massive satellite (MV ˜ -12). In the absence of sufficiently deep data to constrain the low-mass turnover, it is common practice to fit a single-sloped power law to the low-mass IMF, or to fit mc for a lognormal while holding σ fixed. We show that the former approximation leads to best-fitting power-law slopes that vary with the mass range observed and can largely explain existing claims of low-mass IMF variations in MW satellites, even if satellite galaxies have the same IMF as the MW. In addition, fixing σ during fitting leads to substantially underestimated uncertainties in the recovered value of mc (by a factor of ˜4 for typical observations). If the IMFs of nearby dwarf galaxies are lognormal and do vary, observations must reach down to ˜mc in order to robustly detect these variations. The high-sensitivity, near-infrared capabilities of the James Webb Space Telescope and Wide-Field Infrared Survey Telescope have the potential to dramatically improve constraints on the low-mass IMF. We present an efficient observational strategy for using these facilities to measure the IMFs of Local Group dwarf galaxies.
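A minimal Monte Carlo sketch of the kind of experiment described: draw stellar masses from a lognormal IMF, truncate at an assumed completeness limit, refit by maximum likelihood and inspect the scatter of the recovered parameters. The parameter values (mc = 0.22 M⊙, σ = 0.57), the completeness limit and the star counts are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
MC_TRUE, SIGMA_TRUE = 0.22, 0.57   # assumed MW-like lognormal IMF parameters
M_LIM = 0.2                        # assumed completeness limit (solar masses)

def draw_masses(n):
    """Draw n stellar masses above the completeness limit."""
    m = stats.lognorm.rvs(SIGMA_TRUE, scale=MC_TRUE, size=5 * n, random_state=rng)
    return m[m > M_LIM][:n]

def negloglike(theta, m):
    """Negative log-likelihood of a lognormal IMF truncated at M_LIM."""
    log_mc, sigma = theta
    if sigma <= 0:
        return np.inf
    dist = stats.lognorm(sigma, scale=np.exp(log_mc))
    norm = dist.sf(M_LIM)                    # renormalise for the completeness cut
    return -np.sum(dist.logpdf(m) - np.log(norm))

def fit_once(n_stars):
    m = draw_masses(n_stars)
    res = optimize.minimize(negloglike, x0=[np.log(0.3), 0.4], args=(m,),
                            method="Nelder-Mead")
    return np.exp(res.x[0]), res.x[1]

# Repeat the experiment to estimate the statistical precision on m_c and sigma.
fits = np.array([fit_once(3000) for _ in range(100)])
print("relative scatter on m_c:   %.2f" % (fits[:, 0].std() / fits[:, 0].mean()))
print("relative scatter on sigma: %.2f" % (fits[:, 1].std() / fits[:, 1].mean()))
```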
Precision mechatronics based on high-precision measuring and positioning systems and machines
NASA Astrophysics Data System (ADS)
Jäger, Gerd; Manske, Eberhard; Hausotte, Tino; Mastylo, Rostyslav; Dorozhovets, Natalja; Hofmann, Norbert
2007-06-01
Precision mechatronics is defined in the paper as the science and engineering of a new generation of high-precision systems and machines. Nanomeasuring and nanopositioning engineering represent important fields of precision mechatronics. Nanometrology is described as today's limit of precision engineering. The problem of how to design nanopositioning machines with uncertainties as small as possible is discussed. The integration of several optical and tactile nanoprobes makes the 3D nanopositioning machine suitable for various tasks, such as long-range scanning probe microscopy, mask and wafer inspection, nanotribology, nanoindentation, free-form surface measurement, as well as measurement of microoptics, precision molds, microgears, ring gauges and small holes.
Classification of LIDAR Data for Generating a High-Precision Roadway Map
NASA Astrophysics Data System (ADS)
Jeong, J.; Lee, I.
2016-06-01
The generation of highly precise maps is growing in importance with the development of autonomous driving vehicles. A highly precise map offers centimetre-level precision, unlike existing commercial maps with metre-level precision. Understanding road environments and supporting driving decisions is essential because robust localization is one of the critical challenges for autonomous driving cars. One key data source is a LiDAR, because it provides highly dense point cloud data with three-dimensional positions, intensities and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a vehicle-mounted LiDAR and classify objects on the road for the highly precise map. In particular, we propose a combination of a feature descriptor and a machine learning classification algorithm. Objects can be distinguished by geometric features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and the output will be utilized to generate a highly precise road map.
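The abstract names surface-normal-based geometric features and a Support Vector Machine but does not give the exact feature set. The sketch below, assuming three hypothetical per-point features (normal z-component, height above ground, intensity) and synthetic data, shows the general shape of such a pipeline with scikit-learn; the study's real features, classes and data differ.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)

def synth_points(n, label, nz_mean, height_mean, intensity_mean):
    """Synthetic per-point features: |normal_z|, height above ground, intensity."""
    nz = np.clip(rng.normal(nz_mean, 0.1, n), 0, 1)
    h = np.abs(rng.normal(height_mean, 0.3, n))
    inten = np.abs(rng.normal(intensity_mean, 5.0, n))
    return np.column_stack([nz, h, inten]), np.full(n, label)

# Hypothetical classes: 0 = road surface, 1 = curb/barrier, 2 = vegetation.
X0, y0 = synth_points(500, 0, nz_mean=0.95, height_mean=0.0, intensity_mean=30)
X1, y1 = synth_points(500, 1, nz_mean=0.40, height_mean=0.3, intensity_mean=25)
X2, y2 = synth_points(500, 2, nz_mean=0.50, height_mean=2.0, intensity_mean=15)
X, y = np.vstack([X0, X1, X2]), np.concatenate([y0, y1, y2])

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(Xtr, ytr)
print(classification_report(yte, clf.predict(Xte)))
```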
Enhanced detection and visualization of anomalies in spectral imagery
NASA Astrophysics Data System (ADS)
Basener, William F.; Messinger, David W.
2009-05-01
Anomaly detection algorithms applied to hyperspectral imagery are able to reliably identify man-made objects from a natural environment based on statistical/geometric likelihood. The process is more robust than target identification, which requires precise prior knowledge of the object of interest, but has an inherently higher false alarm rate. Standard anomaly detection algorithms measure deviation of pixel spectra from a parametric model (either statistical or linear mixing) estimating the image background. The topological anomaly detector (TAD) creates a fully non-parametric, graph theory-based, topological model of the image background and measures deviation from this background using codensity. In this paper we present a large-scale comparative test of TAD against 80+ targets in four full HYDICE images using the entire canonical target set for generation of ROC curves. TAD is compared against several statistics-based detectors including local RX and subspace RX. Even a perfect anomaly detection algorithm would have a high practical false alarm rate in most scenes simply because the user/analyst is not interested in every anomalous object. To assist the analyst in identifying and sorting objects of interest, we investigate coloring of the anomalies with principal components projections using statistics computed from the anomalies. This gives a very useful colorization of anomalies in which objects of similar material tend to have the same color, enabling an analyst to quickly sort and identify anomalies of highest interest.
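TAD itself is graph-based and not reproduced here; for context, the sketch below implements the global RX baseline that such studies compare against, scoring each pixel by its squared Mahalanobis distance from the scene mean. The synthetic cube and implanted anomaly are illustrative only.

```python
import numpy as np

def global_rx(cube):
    """Global RX anomaly score: squared Mahalanobis distance of each pixel
    spectrum from the scene mean, using the scene covariance."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(bands)  # regularise
    cov_inv = np.linalg.inv(cov)
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)
    return scores.reshape(rows, cols)

# Synthetic 50x50 scene with 20 bands and one implanted anomalous pixel.
rng = np.random.default_rng(2)
cube = rng.normal(0.0, 1.0, (50, 50, 20))
cube[25, 25] += 6.0                          # anomalous spectrum
scores = global_rx(cube)
print("anomaly at", np.unravel_index(scores.argmax(), scores.shape))
```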
Assessment of Hammocks (Petenes) Resilience to Sea Level Rise Due to Climate Change in Mexico
Posada Vanegas, Gregorio; de Jong, Bernardus H. J.
2016-01-01
There is a pressing need to assess resilience of coastal ecosystems against sea level rise. To develop appropriate response strategies against future climate disturbances, it is important to estimate the magnitude of disturbances that these ecosystems can absorb and to better understand their underlying processes. Hammocks (petenes) coastal ecosystems are highly vulnerable to sea level rise linked to climate change; their vulnerability is mainly due to its close relation with the sea through underground drainage in predominantly karstic soils. Hammocks are biologically important because of their high diversity and restricted distribution. This study proposes a strategy to assess resilience of this coastal ecosystem when high-precision data are scarce. Approaches and methods used to derive ecological resilience maps of hammocks are described and assessed. Resilience models were built by incorporating and weighting appropriate indicators of persistence to assess hammocks resilience against flooding due to climate change at “Los Petenes Biosphere Reserve”, in the Yucatán Peninsula, Mexico. According to the analysis, 25% of the study area is highly resilient (hot spots), whereas 51% has low resilience (cold spots). The most significant hot spot clusters of resilience were located in areas distant to the coastal zone, with indirect tidal influence, and consisted mostly of hammocks surrounded by basin mangrove and floodplain forest. This study revealed that multi-criteria analysis and the use of GIS for qualitative, semi-quantitative and statistical spatial analyses constitute a powerful tool to develop ecological resilience maps of coastal ecosystems that are highly vulnerable to sea level rise, even when high-precision data are not available. This method can be applied in other sites to help develop resilience analyses and decision-making processes for management and conservation of coastal areas worldwide. PMID:27611802
Assessment of Hammocks (Petenes) Resilience to Sea Level Rise Due to Climate Change in Mexico.
Hernández-Montilla, Mariana C; Martínez-Morales, Miguel Angel; Posada Vanegas, Gregorio; de Jong, Bernardus H J
2016-01-01
There is a pressing need to assess resilience of coastal ecosystems against sea level rise. To develop appropriate response strategies against future climate disturbances, it is important to estimate the magnitude of disturbances that these ecosystems can absorb and to better understand their underlying processes. Hammocks (petenes) coastal ecosystems are highly vulnerable to sea level rise linked to climate change; their vulnerability is mainly due to its close relation with the sea through underground drainage in predominantly karstic soils. Hammocks are biologically important because of their high diversity and restricted distribution. This study proposes a strategy to assess resilience of this coastal ecosystem when high-precision data are scarce. Approaches and methods used to derive ecological resilience maps of hammocks are described and assessed. Resilience models were built by incorporating and weighting appropriate indicators of persistence to assess hammocks resilience against flooding due to climate change at "Los Petenes Biosphere Reserve", in the Yucatán Peninsula, Mexico. According to the analysis, 25% of the study area is highly resilient (hot spots), whereas 51% has low resilience (cold spots). The most significant hot spot clusters of resilience were located in areas distant to the coastal zone, with indirect tidal influence, and consisted mostly of hammocks surrounded by basin mangrove and floodplain forest. This study revealed that multi-criteria analysis and the use of GIS for qualitative, semi-quantitative and statistical spatial analyses constitute a powerful tool to develop ecological resilience maps of coastal ecosystems that are highly vulnerable to sea level rise, even when high-precision data are not available. This method can be applied in other sites to help develop resilience analyses and decision-making processes for management and conservation of coastal areas worldwide.
Zhang, Fan; Allen, Andrew J; Levine, Lyle E; Mancini, Derrick C; Ilavsky, Jan
2015-05-01
The needs both for increased experimental throughput and for in operando characterization of functional materials under increasingly realistic experimental conditions have emerged as major challenges across the whole of crystallography. A novel measurement scheme that allows multiplexed simultaneous measurements from multiple nearby sample volumes is presented. This new approach enables better measurement statistics or direct probing of heterogeneous structure, dynamics or elemental composition. To illustrate, the submicrometer precision that optical lithography provides has been exploited to create a multiplexed form of ultra-small-angle scattering based X-ray photon correlation spectroscopy (USAXS-XPCS) using micro-slit arrays fabricated by photolithography. Multiplexed USAXS-XPCS is applied to follow the equilibrium dynamics of a simple colloidal suspension. While the dependence of the relaxation time on momentum transfer, and its relationship with the diffusion constant and the static structure factor, follow previous findings, this measurements-in-parallel approach reduces the statistical uncertainties of this photon-starved technique to below those associated with the instrument resolution. More importantly, we note the potential of the multiplexed scheme to elucidate the response of different components of a heterogeneous sample under identical experimental conditions in simultaneous measurements. In the context of the X-ray synchrotron community, this scheme is, in principle, applicable to all in-line synchrotron techniques. Indeed, it has the potential to open a new paradigm for in operando characterization of heterogeneous functional materials, a situation that will be even further enhanced by the ongoing development of multi-bend achromat storage ring designs as the next evolution of large-scale X-ray synchrotron facilities around the world.
Zhang, Fan; Allen, Andrew J.; Levine, Lyle E.; ...
2015-01-01
Here, the needs both for increased experimental throughput and for in operando characterization of functional materials under increasingly realistic experimental conditions have emerged as major challenges across the whole of crystallography. A novel measurement scheme that allows multiplexed simultaneous measurements from multiple nearby sample volumes is presented. This new approach enables better measurement statistics or direct probing of heterogeneous structure, dynamics or elemental composition. To illustrate, the submicrometer precision that optical lithography provides has been exploited to create a multiplexed form of ultra-small-angle scattering based X-ray photon correlation spectroscopy (USAXS-XPCS) using micro-slit arrays fabricated by photolithography. Multiplexed USAXS-XPCS is applied to follow the equilibrium dynamics of a simple colloidal suspension. While the dependence of the relaxation time on momentum transfer, and its relationship with the diffusion constant and the static structure factor, follow previous findings, this measurements-in-parallel approach reduces the statistical uncertainties of this photon-starved technique to below those associated with the instrument resolution. More importantly, we note the potential of the multiplexed scheme to elucidate the response of different components of a heterogeneous sample under identical experimental conditions in simultaneous measurements. Lastly, in the context of the X-ray synchrotron community, this scheme is, in principle, applicable to all in-line synchrotron techniques. Indeed, it has the potential to open a new paradigm for in operando characterization of heterogeneous functional materials, a situation that will be even further enhanced by the ongoing development of multi-bend achromat storage ring designs as the next evolution of large-scale X-ray synchrotron facilities around the world.
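The core quantity in any XPCS analysis, multiplexed or not, is the normalised intensity autocorrelation g2(τ), whose decay gives the relaxation time. The sketch below computes g2 for a single synthetic speckle time series generated as an AR(1) process; it is a conceptual illustration, not the instrument's actual per-q, multi-slit reduction.

```python
import numpy as np

def g2(intensity):
    """Normalised intensity autocorrelation g2(tau) = <I(t) I(t+tau)> / <I>^2,
    computed for positive lags of a single time series."""
    I = np.asarray(intensity, dtype=float)
    n = I.size
    mean_sq = I.mean() ** 2
    lags = np.arange(1, n // 2)
    return lags, np.array([np.mean(I[:-k] * I[k:]) for k in lags]) / mean_sq

# Synthetic speckle intensity with an exponential relaxation time of 50 frames.
rng = np.random.default_rng(3)
n, tau_relax = 5000, 50.0
noise = rng.normal(size=n)
signal = np.empty(n)
signal[0] = noise[0]
alpha = np.exp(-1.0 / tau_relax)
for t in range(1, n):                      # AR(1) process ~ exponential decay
    signal[t] = alpha * signal[t - 1] + np.sqrt(1 - alpha**2) * noise[t]
intensity = 100.0 + 10.0 * signal          # positive mean intensity
lags, corr = g2(intensity)
print("g2 at lag 1: %.3f, at lag 500: %.3f" % (corr[0], corr[499]))
```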
Differential cross sections and recoil polarizations for the reaction γ p → K + Σ 0
Dey, B.; Meyer, C. A.; Bellis, M.; ...
2010-08-06
Here, high-statistics measurements of differential cross sections and recoil polarizations for the reaction $\gamma p \rightarrow K^+ \Sigma^0$ have been obtained using the CLAS detector at Jefferson Lab. We cover center-of-mass energies ($\sqrt{s}$) from 1.69 to 2.84 GeV, with an extensive coverage in the $K^+$ production angle. Independent measurements were made using the $K^{+}p\pi^{-}(\gamma)$ and $K^{+}p(\pi^-,\gamma)$ final-state topologies, and were found to exhibit good agreement. Our differential cross sections show good agreement with earlier CLAS, SAPHIR and LEPS results, while offering better statistical precision and a 300-MeV increase in $\sqrt{s}$ coverage. Above $\sqrt{s} \approx 2.5$ GeV, $t$- and $u$-channel Regge scaling behavior can be seen at forward and backward angles, respectively. Our recoil polarization ($P_\Sigma$) measurements represent a substantial increase in kinematic coverage and enhanced precision over previous world data. At forward angles we find that $P_\Sigma$ is of the same magnitude but opposite sign as $P_\Lambda$, in agreement with the static SU(6) quark model prediction of $P_\Sigma \approx -P_\Lambda$. This expectation is violated in some mid- and backward-angle kinematic regimes, where $P_\Sigma$ and $P_\Lambda$ are of similar magnitudes but also have the same signs. In conjunction with several other meson photoproduction results recently published by CLAS, the present data will help constrain the partial wave analyses being performed to search for missing baryon resonances.
Doshi, Urmi; Hamelberg, Donald
2012-11-13
In enhanced sampling techniques, the precision of the reweighted ensemble properties is often decreased due to large variation in statistical weights and reduction in the effective sampling size. To abate this reweighting problem, here, we propose a general accelerated molecular dynamics (aMD) approach in which only the rotatable dihedrals are subjected to aMD (RaMD), unlike the typical implementation wherein all dihedrals are boosted (all-aMD). Nonrotatable and improper dihedrals are marginally important to conformational changes or the different rotameric states. Not accelerating them avoids the sharp increases in the potential energies due to small deviations from their minimum energy conformations and leads to improvement in the precision of RaMD. We present benchmark studies on two model dipeptides, Ace-Ala-Nme and Ace-Trp-Nme, simulated with normal MD, all-aMD, and RaMD. We carry out a systematic comparison between the performances of both forms of aMD using a theory that allows quantitative estimation of the effective number of sampled points and the associated uncertainty. Our results indicate that, for the same level of acceleration and simulation length, as used in all-aMD, RaMD results in significantly less loss in the effective sample size and, hence, increased accuracy in the sampling of φ-ψ space. RaMD yields an accuracy comparable to that of all-aMD, from simulation lengths 5 to 1000 times shorter, depending on the peptide and the acceleration level. Such improvement in speed and accuracy over all-aMD is highly remarkable, suggesting RaMD as a promising method for sampling larger biomolecules.
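The abstract refers to a theory for estimating the effective number of sampled points under reweighting without stating it. As a hedged illustration, the sketch below uses Kish's effective sample size applied to exponential aMD reweighting factors, with hypothetical boost-potential distributions standing in for all-aMD versus RaMD; the actual boost statistics and the paper's uncertainty estimator may differ.

```python
import numpy as np

def effective_sample_size(weights):
    """Kish effective sample size: (sum w)^2 / sum(w^2);
    a stand-in for the estimate referenced in the abstract."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / np.sum(w ** 2)

def reweighting_weights(boost, kT=0.596):
    """aMD reweighting factors exp(dV/kT) for boost potentials dV (kcal/mol);
    kT of about 0.596 kcal/mol corresponds to ~300 K."""
    return np.exp(np.asarray(boost) / kT)

rng = np.random.default_rng(4)
n = 10000
# Hypothetical boost-potential distributions: boosting all dihedrals produces
# larger, broader dV than boosting only rotatable dihedrals (RaMD).
dV_all  = rng.gamma(shape=6.0, scale=1.0, size=n)   # mean ~6 kcal/mol
dV_ramd = rng.gamma(shape=2.0, scale=0.5, size=n)   # mean ~1 kcal/mol

for name, dV in [("all-aMD", dV_all), ("RaMD", dV_ramd)]:
    ess = effective_sample_size(reweighting_weights(dV))
    print(f"{name:8s} effective sample size: {ess:8.1f} of {n}")
```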
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George
2017-03-01
Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
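A minimal sketch of the Box-Cox residual error idea evaluated in the study: transforming flows with λ = 0.2 before taking residuals largely removes the growth of error spread with simulated streamflow. The synthetic flows and multiplicative error model are assumptions for illustration only.

```python
import numpy as np

def boxcox(q, lam=0.2):
    """Box-Cox transform z = (q^lam - 1)/lam for lam != 0, or log(q) for lam = 0."""
    q = np.asarray(q, dtype=float)
    return np.log(q) if lam == 0 else (q ** lam - 1.0) / lam

rng = np.random.default_rng(5)
n = 2000
simulated = rng.lognormal(mean=1.0, sigma=1.0, size=n)        # synthetic flows
# Heteroscedastic "observations": error scale grows with simulated flow.
observed = simulated * np.exp(rng.normal(0.0, 0.3, size=n))

raw_resid = observed - simulated
bc_resid = boxcox(observed) - boxcox(simulated)

# Compare residual spread in the lowest and highest flow terciles.
order = np.argsort(simulated)
low, high = order[: n // 3], order[-n // 3:]
for name, r in [("raw", raw_resid), ("Box-Cox (lam=0.2)", bc_resid)]:
    print(f"{name:18s} sd low flows: {r[low].std():7.3f}  sd high flows: {r[high].std():7.3f}")
```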
A high-precision Jacob's staff with improved spatial accuracy and laser sighting capability
NASA Astrophysics Data System (ADS)
Patacci, Marco
2016-04-01
A new Jacob's staff design incorporating a 3D positioning stage and a laser sighting stage is described. The first combines a compass and a circular spirit level on a movable bracket and the second introduces a laser able to slide vertically and rotate on a plane parallel to bedding. The new design allows greater precision in stratigraphic thickness measurement while restricting the cost and maintaining speed of measurement to levels similar to those of a traditional Jacob's staff. Greater precision is achieved as a result of: a) improved 3D positioning of the rod through the use of the integrated compass and spirit level holder; b) more accurate sighting of geological surfaces by tracing with height adjustable rotatable laser; c) reduced error when shifting the trace of the log laterally (i.e. away from the dip direction) within the trace of the laser plane, and d) improved measurement of bedding dip and direction necessary to orientate the Jacob's staff, using the rotatable laser. The new laser holder design can also be used to verify parallelism of a geological surface with structural dip by creating a visual planar datum in the field and thus allowing determination of surfaces which cut the bedding at an angle (e.g., clinoforms, levees, erosion surfaces, amalgamation surfaces, etc.). Stratigraphic thickness measurements and estimates of measurement uncertainty are valuable to many applications of sedimentology and stratigraphy at different scales (e.g., bed statistics, reconstruction of palaeotopographies, depositional processes at bed scale, architectural element analysis), especially when a quantitative approach is applied to the analysis of the data; the ability to collect larger data sets with improved precision will increase the quality of such studies.
2016-01-01
PURPOSE The trueness and precision of images acquired by intraoral digital scanners can be influenced by restoration type, preparation outline form, scanning technology and the application of powder. The aim of this study was to perform a comparative evaluation of the 3-dimensional reproducibility of intraoral scanners (IOSs). MATERIALS AND METHODS A phantom containing five prepared teeth was scanned by the reference scanner (Dental Wings) and 5 test IOSs (E4D dentist, Fastscan, iTero, Trios and Zfx Intrascan). The acquired images of the scanner groups were compared with the image from the reference scanner (trueness) and within each scanner group (precision). Statistical analysis was performed using independent two-sample t-tests and analysis of variance (α=.05). RESULTS The average deviations in trueness and precision of Fastscan, iTero and Trios were significantly lower than those of the other scanners. According to restoration type, significantly higher trueness was observed for crowns and inlays than for bridges. However, no significant difference was observed among the four sites of the preparation outline form. When compared by the characteristics of the IOSs, high trueness was observed in the group adopting active triangulation and using powder. However, there was no significant difference between the still-image acquisition and video acquisition groups. CONCLUSION Apart from two intraoral scanners, Fastscan, iTero and Trios displayed comparable levels of trueness and precision in the tested phantom model. Differences in trueness were observed depending on the restoration type, the preparation outline form and the characteristics of the IOS, which should be taken into consideration when intraoral scanning data are utilized. PMID:27826385
Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M
2017-11-21
One pillar to monitoring progress towards the Sustainable Development Goals is the investment in high quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base, monitoring, and evaluation of health metrics. However, little is known about the optimal precisions of various population-level health and development indicators that remains unquantified in nationally-representative household surveys. Here, a retrospective analysis of the precision of prevalence from these surveys was conducted. Using malaria indicators, data were assembled in nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed nets (ITNs) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the estimated sample sizes for the current nationally-representative surveys increases with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible intervals 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study highlights the need for improved approaches to cost effective sampling.
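The Bayesian hierarchical model itself is not reproduced here; as a simplified illustration of why lower prevalence inflates the required sample size, the sketch below uses the standard cluster-survey design-effect formula n = deff · z²p(1−p)/e², with deff = 1 + (m−1)·ICC and e the absolute margin of error. The prevalence values, ICC, cluster size and precision target are illustrative, not the study's estimates.

```python
import math

def required_sample_size(p, rel_error, icc, cluster_size, z=1.96):
    """Cluster-survey sample size for estimating a prevalence p to within a
    relative margin of error, inflated by the design effect 1 + (m-1)*ICC."""
    abs_error = rel_error * p
    n_srs = (z ** 2) * p * (1 - p) / abs_error ** 2      # simple random sampling
    deff = 1.0 + (cluster_size - 1) * icc                # design effect
    return math.ceil(n_srs * deff)

# Illustrative numbers only: lower prevalence drives the sample size up sharply.
for prevalence in (0.30, 0.10, 0.05):
    n = required_sample_size(p=prevalence, rel_error=0.2, icc=0.05, cluster_size=25)
    print(f"prevalence {prevalence:.2f}: ~{n} children required")
```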
Rajendra Reddy, Gangireddy; Ravindra Reddy, Papammagari; Siva Jyothi, Polisetty
2015-01-01
A novel, simple, precise, and stability-indicating stereoselective method was developed and validated for the accurate quantification of the enantiomer in the drug substance and pharmaceutical dosage forms of Rosuvastatin Calcium. The method is capable of quantifying the enantiomer in the presence of other related substances. The chromatographic separation was achieved with an immobilized cellulose stationary phase (Chiralpak IB) 250 mm x 4.6 mm x 5.0 μm particle size column with a mobile phase containing a mixture of n-hexane, dichloromethane, 2-propanol, and trifluoroacetic acid in the ratio 82:10:8:0.2 (v/v/v/v). The eluted compounds were monitored at 243 nm and the run time was 18 min. Multivariate analysis and statistical tools were used to develop this highly robust method in a short span of time. The stability-indicating power of the method was established by subjecting Rosuvastatin Calcium to the stress conditions (forced degradation) of acid, base, oxidative, thermal, humidity, and photolytic degradation. Major degradation products were identified and found to be well-resolved from the enantiomer peak, proving the stability-indicating power of the method. The developed method was validated as per International Conference on Harmonization (ICH) guidelines with respect to specificity, limit of detection and limit of quantification, precision, linearity, accuracy, and robustness. The method exhibited consistent, high-quality recoveries (100 ± 10%) with a high precision for the enantiomer. Linear regression analysis revealed an excellent correlation between the peak responses and concentrations (r2 value of 0.9977) for the enantiomer. The method is sensitive enough to quantify the enantiomer above 0.04% and detect the enantiomer above 0.015% in Rosuvastatin Calcium. The stability tests were also performed on the drug substances as per ICH norms. PMID:26839815
42 CFR 493.1256 - Standard: Control procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... for having control procedures that monitor the accuracy and precision of the complete analytic process..., include two control materials, including one that is capable of detecting errors in the extraction process... control materials having previously determined statistical parameters. (e) For reagent, media, and supply...
High resolution Florida IR silicon immersion grating spectrometer and an M dwarf planet survey
NASA Astrophysics Data System (ADS)
Ge, Jian; Powell, Scott; Zhao, Bo; Wang, Ji; Fletcher, Adam; Schofield, Sidney; Liu, Jian; Muterspaugh, Matthew; Blake, Cullen; Barnes, Rory
2012-09-01
We report the system design and predicted performance of the Florida IR Silicon immersion grating spectromeTer (FIRST). This new generation cryogenic IR spectrograph offers broad-band high resolution IR spectroscopy with R=72,000 at 1.4-1.8 μm and R=60,000 at 0.8-1.35 μm in a single exposure with a 2kx2k H2RG IR array. It is enabled by a compact design using an extremely high dispersion silicon immersion grating (SIG) and an R4 echelle with a 50 mm diameter pupil in combination with an Image Slicer. This instrument is operated in vacuum with temperature precisely controlled to reach long term stability for high precision radial velocity (RV) measurements of nearby stars, especially M dwarfs and young stars. The primary technical goal is to reach better than 4 m/s long term RV precision with J<9 M dwarfs within 30 min exposures. This instrument is scheduled to be commissioned at the Tennessee State University (TSU) 2-m Automatic Spectroscopic Telescope (AST) at Fairborn Observatory in spring 2013. FIRST can also be used for observing transiting planets, young stellar objects (YSOs), magnetic fields, binaries, brown dwarfs (BDs), ISM and stars. We plan to launch the FIRST NIR M dwarf planet survey in 2014 after FIRST is commissioned at the AST. This NIR M dwarf survey is the first large-scale NIR high precision Doppler survey dedicated to detecting and characterizing planets around 215 nearby M dwarfs with J< 10. Our primary science goal is to look for habitable Super-Earths around the late M dwarfs and also to identify transiting systems for follow-up observations with JWST to measure the planetary atmospheric compositions and study their habitability. Our secondary science goal is to detect and characterize a large number of planets around M dwarfs to understand the statistics of planet populations around these low mass stars and constrain planet formation and evolution models. Our survey baseline is expected to detect ~30 exoplanets, including 10 Super Earths, within 100 day periods. About half of the Super-Earths are in their habitable zones and one of them may be a transiting planet. The AST, with its robotic control and ease of switching between instruments (in seconds), enables great flexibility and efficiency, and enables an optimal strategy, in terms of schedule and cadence, for this NIR M dwarf planet survey.
Escobar-Bahamondes, P; Oba, M; Beauchemin, K A
2017-01-01
The study determined the performance of equations to predict enteric methane (CH4) from beef cattle fed forage- and grain-based diets. Many equations are available to predict CH4 from beef cattle and the predictions vary substantially among equations. The aims were to (1) construct a database of CH4 emissions for beef cattle from published literature, and (2) identify the most precise and accurate extant CH4 prediction models for beef cattle fed diets varying in forage content. The database comprised treatment means of CH4 production from in vivo beef studies published from 2000 to 2015. Criteria to include data in the database were as follows: animal description, intakes, diet composition and CH4 production. In all, 54 published equations that predict CH4 production from diet composition were evaluated. Precision and accuracy of the equations were evaluated using the concordance correlation coefficient (r_c), root mean square prediction error (RMSPE), model efficiency and analysis of errors. Equations were ranked using a combined index of the various statistical assessments based on principal component analysis. The final database contained 53 studies and 207 treatment means that were divided into two data sets: diets containing ⩾400 g/kg dry matter (DM) forage (n=116) and diets containing ⩽200 g/kg DM forage (n=42). Diets containing between 200 and 400 g/kg DM forage were not included in the analysis because of their limited numbers (n=6). Outliers, treatment means where feed was fed restrictively and diets with CH4 mitigation additives were omitted (n=43). Using the high-forage data set, the best-fit equations were the Intergovernmental Panel on Climate Change Tier 2 method, 3 equations for steers that considered gross energy intake (GEI) and body weight, and an equation that considered dry matter intake and starch:neutral detergent fiber, with r_c ranging from 0.60 to 0.73 and RMSPE from 35.6 to 45.9 g/day. For the high-grain diets, the 5 best-fit equations considered intakes of metabolisable energy, cellulose, hemicellulose and fat, or for steers GEI and body weight, with r_c ranging from 0.35 to 0.52 and RMSPE from 47.4 to 62.9 g/day. Ranking of extant CH4 prediction equations for their accuracy and precision differed with the forage content of the diet. When used for cattle fed high-grain diets, extant CH4 prediction models were generally imprecise and lacked accuracy.
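The two headline fit statistics used to rank the equations, RMSPE and the concordance correlation coefficient r_c, can be computed directly from observed and predicted emissions. A minimal sketch with invented methane values follows; only the standard definitions are used, nothing is taken from the paper's data.

```python
import numpy as np

def rmspe(observed, predicted):
    """Root mean square prediction error, in the units of the observations."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((p - o) ** 2))

def concordance_cc(observed, predicted):
    """Lin's concordance correlation coefficient r_c, combining precision
    (correlation) and accuracy (bias correction) in a single statistic."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    mo, mp = o.mean(), p.mean()
    vo, vp = o.var(), p.var()
    cov = np.mean((o - mo) * (p - mp))
    return 2.0 * cov / (vo + vp + (mo - mp) ** 2)

# Illustrative methane data (g/day): predictions with a mild systematic bias.
obs = np.array([180.0, 210.0, 150.0, 250.0, 195.0, 230.0])
pred = obs * 0.9 + 10.0
print("RMSPE: %.1f g/day, r_c: %.2f" % (rmspe(obs, pred), concordance_cc(obs, pred)))
```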
Kenyon, Brian J; Van Zyl, Ian; Louie, Kenneth G
2005-08-01
The high-speed high-torque (electric motor) handpiece is becoming more popular in dental offices and laboratories in the United States. It is reported to cut more precisely and to assist in the creation of finer margins that enhance cavity preparations. The authors conducted an in vitro study to compare the quality of cavity preparations fabricated with a high-speed high-torque (electric motor) handpiece and a high-speed low-torque (air turbine) handpiece. Eighty-six dental students each cut two Class I preparations, one with an air turbine handpiece and the other with an electric motor high-speed handpiece. The authors asked the students to cut each preparation accurately to a circular outline and to establish a flat pulpal floor with 1.5 millimeters' depth, 90-degree exit angles, parallel vertical walls and sharp internal line angles, as well as to refine the preparation to achieve flat, smooth walls with a well-defined cavosurface margin. A single faculty member scored the preparations for criteria and refinement using a nine-point scale (range, 1-9). The authors analyzed the data statistically using paired t tests. In preparation criteria, the electric motor high-speed handpiece had a higher average grade than did the air turbine handpiece (5.07 and 4.90, respectively). For refinement, the average grade for the air turbine high-speed handpiece was greater than that for the electric motor high-speed handpiece (5.72 and 5.52, respectively). The differences were not statistically significant. The electric motor high-speed handpiece performed as well as, but not better than, the air turbine handpiece in the fabrication of high-quality cavity preparations.
Metabolomics through the lens of precision cardiovascular medicine.
Lam, Sin Man; Wang, Yuan; Li, Bowen; Du, Jie; Shui, Guanghou
2017-03-20
Metabolomics, which targets at the extensive characterization and quantitation of global metabolites from both endogenous and exogenous sources, has emerged as a novel technological avenue to advance the field of precision medicine principally driven by genomics-oriented approaches. In particular, metabolomics has revealed the cardinal roles that the environment exerts in driving the progression of major diseases threatening public health. Herein, the existent and potential applications of metabolomics in two key areas of precision cardiovascular medicine will be critically discussed: 1) the use of metabolomics in unveiling novel disease biomarkers and pathological pathways; 2) the contribution of metabolomics in cardiovascular drug development. Major issues concerning the statistical handling of big data generated by metabolomics, as well as its interpretation, will be briefly addressed. Finally, the need for integration of various omics branches and adopting a multi-omics approach to precision medicine will be discussed. Copyright © 2017 Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, and Genetics Society of China. Published by Elsevier Ltd. All rights reserved.
Photon Statistics of Propagating Thermal Microwaves.
Goetz, J; Pogorzalek, S; Deppe, F; Fedorov, K G; Eder, P; Fischer, M; Wulschner, F; Xie, E; Marx, A; Gross, R
2017-03-10
In experiments with superconducting quantum circuits, characterizing the photon statistics of propagating microwave fields is a fundamental task. We quantify the n^{2}+n photon number variance of thermal microwave photons emitted from a blackbody radiator for mean photon numbers, 0.05≲n≲1.5. We probe the fields using either correlation measurements or a transmon qubit coupled to a microwave resonator. Our experiments provide a precise quantitative characterization of weak microwave states and information on the noise emitted by a Josephson parametric amplifier.
Photon Statistics of Propagating Thermal Microwaves
NASA Astrophysics Data System (ADS)
Goetz, J.; Pogorzalek, S.; Deppe, F.; Fedorov, K. G.; Eder, P.; Fischer, M.; Wulschner, F.; Xie, E.; Marx, A.; Gross, R.
2017-03-01
In experiments with superconducting quantum circuits, characterizing the photon statistics of propagating microwave fields is a fundamental task. We quantify the n²+n photon number variance of thermal microwave photons emitted from a blackbody radiator for mean photon numbers 0.05 ≲ n ≲ 1.5. We probe the fields using either correlation measurements or a transmon qubit coupled to a microwave resonator. Our experiments provide a precise quantitative characterization of weak microwave states and information on the noise emitted by a Josephson parametric amplifier.
Brain tissues volume measurements from 2D MRI using parametric approach
NASA Astrophysics Data System (ADS)
L'vov, A. A.; Toropova, O. A.; Litovka, Yu. V.
2018-04-01
The purpose of this paper is to propose a fully automated method for assessing the volume of structures within the human brain. Our statistical approach uses the maximum interdependency principle in the decision-making process to judge the consistency of measurements and identify unequal observations. Outlier detection is performed using the maximum normalized residual test. We propose a statistical model that utilizes knowledge of the tissue distribution in the human brain and applies partial data restoration to improve precision. The approach is computationally efficient and independent of the segmentation algorithm used in the application.
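The maximum normalized residual test named in the abstract is commonly implemented as Grubbs' test. The sketch below shows one pass of that test with the usual t-based critical value; the measurement values are hypothetical and the paper's exact decision rule may differ.

```python
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.05):
    """One pass of the maximum normalized residual (Grubbs') test.
    Returns the index of the most extreme value if it exceeds the critical
    value at level alpha, otherwise None."""
    x = np.asarray(x, dtype=float)
    n = x.size
    g = np.abs(x - x.mean()).max() / x.std(ddof=1)      # max normalized residual
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t ** 2 / (n - 2 + t ** 2))
    return int(np.abs(x - x.mean()).argmax()) if g > g_crit else None

# Hypothetical regional volume estimates (mL) from repeated 2D measurements.
volumes = np.array([152.1, 149.8, 151.5, 150.9, 148.7, 171.3, 150.2])
print("outlier index:", grubbs_outlier(volumes))
```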
Synchrotron radiation μCT and histology evaluation of bone-to-implant contact.
Neldam, Camilla Albeck; Sporring, Jon; Rack, Alexander; Lauridsen, Torsten; Hauge, Ellen-Margrethe; Jørgensen, Henrik L; Jørgensen, Niklas Rye; Feidenhansl, Robert; Pinholt, Else Marie
2017-09-01
The purpose of this study was to evaluate bone-to-implant contact (BIC) in two-dimensional (2D) histology compared to high-resolution three-dimensional (3D) synchrotron radiation micro computed tomography (SR micro-CT). High spatial resolution, excellent signal-to-noise ratio, and contrast establish SR micro-CT as the leading imaging modality for hard X-ray microtomography. Using SR micro-CT at voxel size 5 μm in an experimental goat mandible model, no statistically significant difference was found between the different treatment modalities nor between recipient and reconstructed bone. The histological evaluation showed a statistically significant difference between BIC in reconstructed and recipient bone (p < 0.0001). Further, no statistically significant difference was found between the different treatment modalities which we found was due to large variation and subsequently due to low power. Comparing histology and SR micro-CT evaluation a bias of 5.2% was found in reconstructed area, and 15.3% in recipient bone. We conclude that for evaluation of BIC with histology and SR micro-CT, SR micro-CT cannot be proven more precise than histology for evaluation of BIC, however, with this SR micro-CT method, one histologic bone section is comparable to the 3D evaluation. Further, the two methods complement each other with knowledge on BIC in 2D and 3D. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
High precision spectroscopy and imaging in THz frequency range
NASA Astrophysics Data System (ADS)
Vaks, Vladimir L.
2014-03-01
The application of microwave methods to the development of the THz frequency range has resulted in the elaboration of high-precision THz spectrometers based on nonstationary effects. The spectrometer characteristics (spectral resolution and sensitivity) meet the requirements for high-precision analysis. Gas analyzers based on these high-precision spectrometers have been successfully applied to analytical investigations of gas impurities in high-purity substances. These investigations can be carried out both in an absorption cell and in a reactor. The devices can be used for ecological monitoring and for detecting components of chemical weapons and explosives in the atmosphere. A further broad field of THz investigation is medical application: using the THz spectrometers developed, one can detect markers of some diseases in exhaled air.
Fischer, Susan L; Koshland, Catherine P
2007-03-01
Rural kitchens of solid-fuel burning households constitute the microenvironment responsible for the majority of human exposures to health-damaging air pollutants, particularly respirable particles and carbon monoxide. Portable nephelometers facilitate cheaper, more precise, time-resolved characterization of particles in rural homes than are attainable by gravitational methods alone. However, field performance of nephelometers must contend with aerosols that are highly variable in terms of chemical content, size, and relative humidity. Previous field validations of nephelometer performance in residential settings explore relatively low particle concentrations, with the vast majority of 24-h average gravitational PM2.5 concentrations falling below 40 microg/m3. We investigate relationships between 24-h gravitational particle measurements and nephelometric data logged by the personal DataRAM (pDR) in highly polluted rural Chinese kitchens, where gravitationally determined 24-h average respirable particle concentrations were as high as 700 microg/m3. We find that where relative humidity remained below 95%, nephelometric response was strongly linear despite complex mixtures of aerosols and variable ambient conditions. Where 95% relative humidity was exceeded for even a brief duration, nephelometrically determined 24-h mean particle concentrations were nonsystematically distorted relative to gravitational data, and neither concurrent relative humidity measurements nor use of robust statistical measures of central tendency offered means of correction. This nonsystematic distortion is particularly problematic for rural exposure assessment studies, which emphasize upper quantiles of time-resolved particle measurements within 24-h samples. Precise, accurate interpretation of nephelometrically resolved short-term particle concentrations requires calibration based on short-term gravitational sampling.
Analysis of video-recorded images to determine linear and angular dimensions in the growing horse.
Hunt, W F; Thomas, V G; Stiefel, W
1999-09-01
Studies of growth and conformation require statistical methods that are not applicable to subjective conformation standards used by breeders and trainers. A new system was developed to provide an objective approach for both science and industry, based on analysis of video images to measure aspects of conformation that were represented by angles or lengths. A studio crush was developed in which video images of horses of different sizes were taken after bone protuberances, located by palpation, were marked with white paper stickers. Screen pixel coordinates of calibration marks, bone markers and points on horse outlines were digitised from captured images and corrected for aspect ratio and 'fish-eye' lens effects. Calculations from the corrected coordinates produced linear dimensions and angular dimensions useful for comparison of horses for conformation and experimental purposes. The precision achieved by the method in determining linear and angular dimensions was examined through systematically determining variance for isolated steps of the procedure. Angles of the front limbs viewed from in front were determined with a standard deviation of 2-5 degrees and effects of viewing angle were detectable statistically. The height of the rump and wither were determined with precision closely related to the limitations encountered in locating a point on a screen, which was greater for markers applied to the skin than for points at the edge of the image. Parameters determined from markers applied to the skin were, however, more variable (because their relation to bone position was affected by movement), but still provided a means by which a number of aspects of size and conformation can be determined objectively for many horses during growth. Sufficient precision was achieved to detect statistically relatively small effects on calculated parameters of camera height position.
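Once the marker coordinates have been corrected, the angular dimensions reduce to the angle at a middle marker between two limb segments, and the linear dimensions to scaled inter-marker distances. The sketch below illustrates both calculations with hypothetical marker positions and an assumed calibration factor; the landmark names and scale are not taken from the study.

```python
import numpy as np

def joint_angle(p_upper, p_joint, p_lower):
    """Angle (degrees) at the middle marker between the two limb segments,
    computed from 2D image coordinates already corrected for lens distortion."""
    a = np.asarray(p_upper, float) - np.asarray(p_joint, float)
    b = np.asarray(p_lower, float) - np.asarray(p_joint, float)
    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def segment_length(p1, p2, mm_per_pixel):
    """Linear dimension between two markers, scaled by the calibration factor."""
    return np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)) * mm_per_pixel

# Hypothetical corrected marker positions (pixels) on a front-limb view.
elbow, knee, fetlock = (412.0, 310.0), (418.0, 520.0), (430.0, 700.0)
print("knee angle: %.1f deg" % joint_angle(elbow, knee, fetlock))
print("cannon length: %.0f mm" % segment_length(knee, fetlock, mm_per_pixel=0.8))
```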
A Methodological Approach to Quantifying Plyometric Intensity.
Jarvis, Mark M; Graham-Smith, Phil; Comfort, Paul
2016-09-01
Jarvis, MM, Graham-Smith, P, and Comfort, P. A Methodological approach to quantifying plyometric intensity. J Strength Cond Res 30(9): 2522-2532, 2016-In contrast to other methods of training, the quantification of plyometric exercise intensity is poorly defined. The purpose of this study was to evaluate the suitability of a range of neuromuscular and mechanical variables to describe the intensity of plyometric exercises. Seven male recreationally active subjects performed a series of 7 plyometric exercises. Neuromuscular activity was measured using surface electromyography (SEMG) at vastus lateralis (VL) and biceps femoris (BF). Surface electromyography data were divided into concentric (CON) and eccentric (ECC) phases of movement. Mechanical output was measured by ground reaction forces and processed to provide peak impact ground reaction force (PF), peak eccentric power (PEP), and impulse (IMP). Statistical analysis was conducted to assess the reliability intraclass correlation coefficient and sensitivity smallest detectable difference of all variables. Mean values of SEMG demonstrate high reliability (r ≥ 0.82), excluding ECC VL during a 40-cm drop jump (r = 0.74). PF, PEP, and IMP demonstrated high reliability (r ≥ 0.85). Statistical power for force variables was excellent (power = 1.0), and good for SEMG (power ≥0.86) excluding CON BF (power = 0.57). There was no significant difference (p > 0.05) in CON SEMG between exercises. Eccentric phase SEMG only distinguished between exercises involving a landing and those that did not (percentage of maximal voluntary isometric contraction [%MVIC] = no landing -65 ± 5, landing -140 ± 8). Peak eccentric power, PF, and IMP all distinguished between exercises. In conclusion, CON neuromuscular activity does not appear to vary when intent is maximal, whereas ECC activity is dependent on the presence of a landing. Force characteristics provide a reliable and sensitive measure enabling precise description of intensity in plyometric exercises. The present findings provide coaches and scientists with an insightful and precise method of measuring intensity in plyometrics, which will allow for greater control of programming variables.
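The abstract reports reliability as an intraclass correlation coefficient and sensitivity as a smallest detectable difference. One standard way to connect the two, shown in the hedged sketch below with invented peak-force numbers, is SEM = SD·√(1−ICC) and SDD = 1.96·√2·SEM; the paper's own computation may differ.

```python
import math

def sem_from_icc(sd_between, icc):
    """Standard error of measurement from between-subject SD and reliability (ICC)."""
    return sd_between * math.sqrt(1.0 - icc)

def smallest_detectable_difference(sem):
    """95% smallest detectable difference: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem

# Illustrative peak impact force data (N): between-subject SD and test reliability.
sd_between, icc = 310.0, 0.90
sem = sem_from_icc(sd_between, icc)
sdd = smallest_detectable_difference(sem)
print(f"SEM: {sem:.0f} N, smallest detectable difference: {sdd:.0f} N")
```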
Attaining the Photometric Precision Required by Future Dark Energy Projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stubbs, Christopher
2013-01-21
This report outlines our progress towards achieving the high-precision astronomical measurements needed to derive improved constraints on the nature of the Dark Energy. Our approach to obtaining higher precision flux measurements has three basic components: 1) determination of the optical transmission of the atmosphere, 2) mapping out the instrumental photon sensitivity function vs. wavelength, calibrated by referencing the measurements to the known sensitivity curve of a high-precision silicon photodiode, and 3) using the self-consistency of the spectra of stars to achieve precise color calibrations.
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Yokum, Jeffrey S.; Pryputniewicz, Ryszard J.
2002-06-01
Sensitivity, accuracy, and precision characteristics in quantitative optical metrology techniques, and specifically in optoelectronic holography based on fiber optics and high-spatial- and high-digital-resolution cameras, are discussed in this paper. It is shown that sensitivity, accuracy, and precision depend on both the effective determination of optical phase and the effective characterization of the illumination-observation conditions. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gages, demonstrating the applicability of quantitative optical metrology techniques to satisfy constantly increasing needs for the study and development of emerging technologies.
NASA Astrophysics Data System (ADS)
Knop, R. A.; Aldering, G.; Amanullah, R.; Astier, P.; Blanc, G.; Burns, M. S.; Conley, A.; Deustua, S. E.; Doi, M.; Ellis, R.; Fabbro, S.; Folatelli, G.; Fruchter, A. S.; Garavini, G.; Garmond, S.; Garton, K.; Gibbons, R.; Goldhaber, G.; Goobar, A.; Groom, D. E.; Hardin, D.; Hook, I.; Howell, D. A.; Kim, A. G.; Lee, B. C.; Lidman, C.; Mendez, J.; Nobili, S.; Nugent, P. E.; Pain, R.; Panagia, N.; Pennypacker, C. R.; Perlmutter, S.; Quimby, R.; Raux, J.; Regnault, N.; Ruiz-Lapuente, P.; Sainton, G.; Schaefer, B.; Schahmaneche, K.; Smith, E.; Spadafora, A. L.; Stanishev, V.; Sullivan, M.; Walton, N. A.; Wang, L.; Wood-Vasey, W. M.; Yasuda, N.
2003-11-01
We report measurements of ΩM, ΩΛ, and w from 11 supernovae (SNe) at z=0.36-0.86 with high-quality light curves measured using WFPC2 on the Hubble Space Telescope (HST). This is an independent set of high-redshift SNe that confirms previous SN evidence for an accelerating universe. The high-quality light curves available from photometry on WFPC2 make it possible for these 11 SNe alone to provide measurements of the cosmological parameters comparable in statistical weight to the previous results. Combined with earlier Supernova Cosmology Project data, the new SNe yield a measurement of the mass density ΩM=0.25+0.07-0.06(statistical)+/-0.04 (identified systematics), or equivalently, a cosmological constant of ΩΛ=0.75+0.06-0.07(statistical)+/-0.04 (identified systematics), under the assumptions of a flat universe and that the dark energy equation-of-state parameter has a constant value w=-1. When the SN results are combined with independent flat-universe measurements of ΩM from cosmic microwave background and galaxy redshift distortion data, they provide a measurement of w=-1.05+0.15-0.20(statistical)+/-0.09 (identified systematic), if w is assumed to be constant in time. In addition to high-precision light-curve measurements, the new data offer greatly improved color measurements of the high-redshift SNe and hence improved host galaxy extinction estimates. These extinction measurements show no anomalous negative E(B-V) at high redshift. The precision of the measurements is such that it is possible to perform a host galaxy extinction correction directly for individual SNe without any assumptions or priors on the parent E(B-V) distribution. Our cosmological fits using full extinction corrections confirm that dark energy is required with P(ΩΛ>0)>0.99, a result consistent with previous and current SN analyses that rely on the identification of a low-extinction subset or prior assumptions concerning the intrinsic extinction distribution. Based in part on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs GO-7336, GO-7590, and GO-8346. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. Based in part on observations obtained at the WIYN Observatory, which is a joint facility of the University of Wisconsin at Madison, Indiana University, Yale University, and the National Optical Astronomy Observatory. Based in part on observations made with the European Southern Observatory telescopes (ESO programs 60.A-0586 and 265.A-5721). Based in part on observations made with the Canada-France-Hawaii Telescope, operated by the National Research Council of Canada, le Centre National de la Recherche Scientifique de France, and the University of Hawaii.
High-precision processing and detection of the high-caliber off-axis aspheric mirror
NASA Astrophysics Data System (ADS)
Dai, Chen; Li, Ang; Xu, Lingdi; Zhang, Yingjie
2017-10-01
To achieve efficient, controllable, digital processing and high-precision testing of large-aperture off-axis aspheric mirrors, and to meet the development needs of modern high-resolution, wide-field space optical remote sensing cameras, we carried out research on high-precision machining and testing technology for off-axis aspheric mirrors. First, an off-axis aspheric sample with dimensions of 574 mm × 302 mm was formed by milling, and intelligent robotic equipment was then used for high-precision polishing of the off-axis asphere. After fine polishing with ion-beam equipment, the surface of the sample was measured using both off-axis aspheric contact profilometry and off-axis aspheric interferometric testing. The final surface accuracy is RMS 12 nm.
Kurz, Christopher; Bauer, Julia; Conti, Maurizio; Guérin, Laura; Eriksson, Lars; Parodi, Katia
2015-07-01
External beam radiotherapy with protons and heavier ions enables a tighter conformation of the applied dose to arbitrarily shaped tumor volumes with respect to photons, but is more sensitive to uncertainties in the radiotherapeutic treatment chain. Consequently, an independent verification of the applied treatment is highly desirable. For this purpose, the irradiation-induced β(+)-emitter distribution within the patient is detected shortly after irradiation by a commercial full-ring positron emission tomography/x-ray computed tomography (PET/CT) scanner installed next to the treatment rooms at the Heidelberg Ion-Beam Therapy Center (HIT). A major challenge to this approach is posed by the small number of detected coincidences. This contribution aims at characterizing the performance of the used PET/CT device and identifying the best-performing reconstruction algorithm under the particular statistical conditions of PET-based treatment monitoring. Moreover, this study addresses the impact of radiation background from the intrinsically radioactive lutetium-oxyorthosilicate (LSO)-based detectors at low counts. The authors have acquired 30 subsequent PET scans of a cylindrical phantom emulating a patientlike activity pattern and spanning the entire patient counting regime in terms of true coincidences and random fractions (RFs). Accuracy and precision of activity quantification, image noise, and geometrical fidelity of the scanner have been investigated for various reconstruction algorithms and settings in order to identify a practical, well-suited reconstruction scheme for PET-based treatment verification. Truncated listmode data have been utilized for separating the effects of small true count numbers and high RFs on the reconstructed images. A corresponding simulation study enabled extending the results to an even wider range of counting statistics and to additionally investigate the impact of scatter coincidences. Eventually, the recommended reconstruction scheme has been applied to exemplary postirradiation patient data-sets. Among the investigated reconstruction options, the overall best results in terms of image noise, activity quantification, and accurate geometrical recovery were achieved using the ordered subset expectation maximization reconstruction algorithm with time-of-flight (TOF) and point-spread function (PSF) information. For this algorithm, reasonably accurate (better than 5%) and precise (uncertainty of the mean activity below 10%) imaging can be provided down to 80,000 true coincidences at 96% RF. Image noise and geometrical fidelity are generally improved for fewer iterations. The main limitation for PET-based treatment monitoring has been identified in the small number of true coincidences, rather than the high intrinsic random background. Application of the optimized reconstruction scheme to patient data-sets results in a 25% - 50% reduced image noise at a comparable activity quantification accuracy and an improved geometrical performance with respect to the formerly used reconstruction scheme at HIT, adopted from nuclear medicine applications. Under the poor statistical conditions in PET-based treatment monitoring, improved results can be achieved by considering PSF and TOF information during image reconstruction and by applying less iterations than in conventional nuclear medicine imaging. Geometrical fidelity and image noise are mainly limited by the low number of true coincidences, not the high LSO-related random background. 
The retrieved results might also impact other emerging PET applications at low counting statistics.
Precision Measurements of Solar Energetic Particle Elemental Composition
NASA Technical Reports Server (NTRS)
Breneman, H.; Stone, E. C.
1985-01-01
Data from the Cosmic Ray Subsystem (CRS) aboard the Voyager 1 and 2 spacecraft were used to determine solar energetic particle abundances or upper limits for all elements with Z ≤ 30 from a combined set of 10 solar flares during the 1977 to 1982 time period. Statistically meaningful abundances were determined for several rare elements including P, Cl, K, Ti and Mn, while the precision of the mean abundances for the more abundant elements was improved. When compared to solar photospheric spectroscopic abundances, these new SEP abundances more clearly exhibit the step-function dependence on first ionization potential previously reported.
PV cells electrical parameters measurement
NASA Astrophysics Data System (ADS)
Cibira, Gabriel
2017-12-01
When measuring the optical parameters of a photovoltaic silicon cell, precise results yield good estimates of the electrical parameters through well-known physical-mathematical models. Nevertheless, considerable recombination phenomena may occur in both surface and intrinsic thin layers of novel materials. Moreover, rear contact surface parameters may also influence recombination phenomena in the adjacent region. Therefore, precise electrical measurement is the only way to verify the assumed cell electrical parameters. Based on a theoretical approach supported by experiments, this paper analyses, as a case study, problems within the measurement procedures and equipment used to acquire the electrical parameters of a photovoltaic silicon cell. A statistical appraisal of the measurement quality is also provided.
Automatic Bone Drilling - More Precise, Reliable and Safe Manipulation in the Orthopaedic Surgery
NASA Astrophysics Data System (ADS)
Boiadjiev, George; Kastelov, Rumen; Boiadjiev, Tony; Delchev, Kamen; Zagurski, Kazimir
2016-06-01
Bone drilling is a frequent manipulation in orthopaedic surgery. Statistics indicate that about one million people in Europe alone now need such an operation every year, in which bone implants are inserted. The drilling is almost always performed by hand, which cannot exclude the influence of subjective factors. The question of reducing this subjective factor has its answer: automatic bone drilling. The specific features and problems of the orthopaedic drilling manipulation are considered in this work. Automatic drilling is presented in terms of the capabilities of the robotized system Orthopaedic Drilling Robot (ODRO) for ensuring the accuracy, precision, reliability and safety of the manipulation.
DOT National Transportation Integrated Search
2012-01-01
Statistics project that crash/injury/fatality rates of older drivers will increase with the future growth of this population. Accurate and precise measurement of older driver behaviors becomes imperative to curtail these crash trends and resultin...
NASA Astrophysics Data System (ADS)
Laura, J. R.; Miller, D.; Paul, M. V.
2012-03-01
An accuracy assessment of AMES Stereo Pipeline derived DEMs for lunar site selection using weighted spatial dependence simulation and a call for outside AMES derived DEMs to facilitate a statistical precision analysis.
METHODS OF DEALING WITH VALUES BELOW THE LIMIT OF DETECTION USING SAS
Due to limitations of chemical analysis procedures, small concentrations cannot be precisely measured. These concentrations are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such ...
The lawful imprecision of human surface tilt estimation in natural scenes.
Kim, Seha; Burge, Johannes
2018-01-31
Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. © 2018, Kim et al.
Long-term impact of precision agriculture on a farmer’s field
USDA-ARS?s Scientific Manuscript database
Targeting management practices and inputs with precision agriculture has high potential to meet some of the grand challenges of sustainability in the coming century. Although potential is high, few studies have documented long-term effects of precision agriculture on crop production and environmenta...
Design and algorithm research of high precision airborne infrared touch screen
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Bing; Wang, Shuang-Jie; Fu, Yan; Chen, Zhao-Quan
2016-10-01
Infrared touch screens suffer from low precision, touch jitter, and a sharp drop in touch precision when emitting or receiving tubes fail. A high-precision positioning algorithm based on an extended axis is proposed to solve these problems. First, the unimpeded state of the beam between an emitting and a receiving tube is recorded as 0, while the impeded state is recorded as 1. Then, an oblique scan is used, in which the light of one emitting tube is received by five receiving tubes, and the impeded-state information of all emitting and receiving tubes is collected as a matrix. Finally, the position of the touch object is calculated as the arithmetic average. The extended-axis positioning algorithm maintains high precision even when individual infrared tubes fail, with only a slight loss of accuracy. The experimental results show that over 90% of the display area the touch error is less than 0.25D, where D is the distance between adjacent emitting tubes. It is concluded that the extended-axis algorithm offers high precision, is only slightly affected by the failure of individual infrared tubes, and is easy to use.
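A minimal sketch of the averaging step described in this abstract, for one axis only. The layout (tubes spaced D apart, each blocked beam reduced to the midpoint of its emitter and receiver coordinates) and the example indices are assumptions for illustration, not the authors' exact design.

```python
import numpy as np

# Hypothetical one-axis layout: emitting tubes along one edge, receiving tubes
# along the opposite edge, both spaced D apart. With the oblique scan, each
# emitter is paired with the five nearest receivers, so a blocked beam is a
# slanted chord; here it is reduced to the midpoint of its end coordinates.
D = 1.0  # distance between adjacent tubes (the unit of the result)

def touch_coordinate(blocked_pairs):
    """Arithmetic average of the blocked-beam midpoints along one axis.

    blocked_pairs: list of (emitter_index, receiver_index) whose state is 1,
    i.e. beams interrupted by the touch object.
    """
    if not blocked_pairs:
        return None
    midpoints = [(e + r) / 2.0 * D for e, r in blocked_pairs]
    return float(np.mean(midpoints))

# Example: a touch near tube 10 interrupts these emitter/receiver pairs.
print(touch_coordinate([(9, 10), (10, 8), (10, 10), (10, 12), (11, 10)]))
# The orthogonal axis would be scanned and averaged in the same way.
```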
A Lane-Level LBS System for Vehicle Network with High-Precision BDS/GPS Positioning
Guo, Chi; Guo, Wenfei; Cao, Guangyi; Dong, Hongbo
2015-01-01
In recent years, research on vehicle network location service has begun to focus on its intelligence and precision. The accuracy of space-time information has become a core factor for vehicle network systems in a mobile environment. However, difficulties persist in vehicle satellite positioning since deficiencies in the provision of high-quality space-time references greatly limit the development and application of vehicle networks. In this paper, we propose a high-precision-based vehicle network location service to solve this problem. The major components of this study include the following: (1) application of wide-area precise positioning technology to the vehicle network system. An adaptive correction message broadcast protocol is designed to satisfy the requirements for large-scale target precise positioning in the mobile Internet environment; (2) development of a concurrence service system with a flexible virtual expansion architecture to guarantee reliable data interaction between vehicles and the background; (3) verification of the positioning precision and service quality in the urban environment. Based on this high-precision positioning service platform, a lane-level location service is designed to solve a typical traffic safety problem. PMID:25755665
Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis
Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.
2015-01-01
Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505
Precision Crystal Calorimeters in High Energy Physics
Ren-Yuan Zhu
2017-12-09
Precision crystal calorimeters traditionally play an important role in high energy physics experiments. In the last two decades, they have faced the challenge of maintaining their precision in a hostile radiation environment. This paper reviews the performance of crystal calorimeters constructed for high energy physics experiments and the progress achieved in understanding crystal radiation damage as well as in developing high quality scintillating crystals for particle physics. Potential applications of a new generation of scintillating crystals of high density and high light yield, such as LSO and LYSO, in particle physics experiments are also discussed.
NASA Technical Reports Server (NTRS)
Allton, J. H.
2017-01-01
There is widespread agreement among planetary scientists that much of what we know about the workings of the solar system comes from accurate, high precision measurements on returned samples. Precision is a function of the number of atoms the instrumentation is able to count. Accuracy depends on the calibration or standardization technique. For Genesis, the solar wind sample return mission, acquiring enough atoms to ensure precise SW measurements and then accurately quantifying those measurements were steps known to be non-trivial pre-flight. The difficulty of precise and accurate measurements on returned samples, and why they cannot be made remotely, is not communicated well to the public. In part, this is because "high precision" is abstract and error bars are not very exciting topics. This paper explores ideas for collecting and compiling compelling metaphors and colorful examples as a resource for planetary science public speakers.
Fully accelerating quantum Monte Carlo simulations of real materials on GPU clusters
NASA Astrophysics Data System (ADS)
Esler, Kenneth
2011-03-01
Quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles, combining very high accuracy with extreme parallel scalability. By solving the many-body Schrödinger equation through a stochastic projection, it achieves greater accuracy than mean-field methods and better scaling with system size than quantum chemical methods, enabling scientific discovery across a broad spectrum of disciplines. In recent years, graphics processing units (GPUs) have provided a high-performance and low-cost new approach to scientific computing, and GPU-based supercomputers are now among the fastest in the world. The multiple forms of parallelism afforded by QMC algorithms make the method an ideal candidate for acceleration in the many-core paradigm. We present the results of porting the QMCPACK code to run on GPU clusters using the NVIDIA CUDA platform. Using mixed precision on GPUs and MPI for intercommunication, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core CPUs alone, while reproducing the double-precision CPU results within statistical error. We discuss the algorithm modifications necessary to achieve good performance on this heterogeneous architecture and present the results of applying our code to molecules and bulk materials. Supported by the U.S. DOE under Contract No. DOE-DE-FG05-08OR23336 and by the NSF under No. 0904572.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, Andrew J.; Fast, James E.; Fulsom, Bryan G.
For many nuclear material safeguards inspections, spectroscopic gamma detectors are required which can achieve high event rates (in excess of 10^6 s^-1) while maintaining very good energy resolution for discrimination of neighboring gamma signatures in complex backgrounds. Such spectra can be useful for non-destructive assay (NDA) of spent nuclear fuel with long cooling times, which contains many potentially useful low-rate gamma lines, e.g., Cs-134, in the presence of a few dominating gamma lines, such as Cs-137. Detectors in use typically sacrifice energy resolution for count rate, e.g., LaBr3, or vice versa, e.g., CdZnTe. In contrast, we anticipate that beginning with a detector with high energy resolution, e.g., high-purity germanium (HPGe), and adapting the data acquisition for high throughput will be able to achieve the goals of the ideal detector. In this work, we present quantification of Cs-134 and Cs-137 activities, useful for fuel burn-up quantification, in fuel that has been cooling for 22.3 years. A segmented, planar HPGe detector is used for this inspection, which has been adapted for a high-rate throughput in excess of 500k counts/s. Using a very-high-statistics spectrum of 2.4*10^11 counts, isotope activities can be determined with very low statistical uncertainty. However, it is determined that systematic uncertainties dominate in such a data set, e.g., the uncertainty in the pulse line shape. This spectrum offers a unique opportunity to quantify this uncertainty and subsequently determine required counting times for given precision on values of interest.
ZERODUR - bending strength: review of achievements
NASA Astrophysics Data System (ADS)
Hartmann, Peter
2017-08-01
Increased demand for using the glass ceramic ZERODUR® under high mechanical loads called for strength data based on larger statistical samples. Design calculations for a failure probability target value below 1:100 000 cannot be made reliable with parameters derived from 20-specimen samples. The data now available for a variety of surface conditions, ground with different grain sizes and acid etched for full micro-crack removal, allow stresses four to ten times higher than before. The large sample revealed that breakage stresses of ground surfaces follow the three-parameter Weibull distribution instead of the two-parameter version. This is more reasonable considering that the micro cracks of such surfaces have a maximum depth, which is reflected in the existence of a threshold breakage stress below which the breakage probability is zero. This minimum strength allows calculating minimum lifetimes. Fatigue under load can be taken into account by using the stress corrosion coefficient for the actual environmental humidity. For fully etched surfaces, Weibull statistics fails: the precondition of the Weibull distribution, the existence of one unique failure mechanism, is no longer given. ZERODUR® with fully etched surfaces free from damage introduced after etching easily endures 100 MPa tensile stress. The possibility of using ZERODUR® for combined high-precision and high-stress applications was confirmed by the successful launch and continuing operation of LISA Pathfinder, the precursor experiment for the gravitational wave antenna satellite array eLISA.
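For readers unfamiliar with the distinction between the two Weibull forms mentioned above, a minimal sketch of both failure-probability models; the parameter values are invented for illustration only and are not ZERODUR® data.

```python
import numpy as np

def weibull2_cdf(sigma, m, sigma0):
    """Two-parameter Weibull: failure probability at tensile stress sigma."""
    return 1.0 - np.exp(-np.power(sigma / sigma0, m))

def weibull3_cdf(sigma, m, sigma0, sigma_u):
    """Three-parameter Weibull: a threshold stress sigma_u below which the
    failure probability is exactly zero (finite maximum micro-crack depth)."""
    s = np.clip(sigma - sigma_u, 0.0, None)
    return 1.0 - np.exp(-np.power(s / sigma0, m))

# Illustrative (invented) parameters for a ground surface, stresses in MPa.
stresses = np.array([20.0, 40.0, 60.0, 80.0])
print(weibull2_cdf(stresses, m=5.0, sigma0=90.0))
print(weibull3_cdf(stresses, m=2.5, sigma0=60.0, sigma_u=30.0))
```

Below sigma_u the three-parameter form returns exactly zero, which is what makes a guaranteed minimum strength, and hence a minimum lifetime, calculable.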
NASA Astrophysics Data System (ADS)
Balasis, Georgios; Potirakis, Stelios M.; Papadimitriou, Constantinos; Zitis, Pavlos I.; Eftaxias, Konstantinos
2015-04-01
The field of study of complex systems considers that the dynamics of complex systems are founded on universal principles that may be used to describe a great variety of scientific and technological approaches to different types of natural, artificial, and social systems. We apply concepts of nonextensive statistical physics to time-series data of observable manifestations of the underlying complex processes leading up to different extreme events, in order to support the suggestion that a common dynamical analogy characterizes the generation of a single magnetic storm, solar flare, earthquake (in terms of pre-seismic electromagnetic signals), epileptic seizure, and economic crisis. The analysis reveals that all of the above-mentioned extreme events can be analyzed within a similar mathematical framework. More precisely, we show that the populations of magnitudes of fluctuations included in all the above-mentioned pulse-like time series follow the traditional Gutenberg-Richter law as well as a nonextensive model for earthquake dynamics, with similar nonextensive q-parameter values. Moreover, based on a multidisciplinary statistical analysis, we show that the extreme events are characterized by crucial common symptoms, namely: (i) high organization, high compressibility, low complexity, high information content; (ii) strong persistency; and (iii) the existence of a clear preferred direction of the emerged activities. These symptoms clearly discriminate the appearance of the extreme events under study from the corresponding background noise.
SAGITTARIUS STREAM THREE-DIMENSIONAL KINEMATICS FROM SLOAN DIGITAL SKY SURVEY STRIPE 82
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koposov, Sergey E.; Belokurov, Vasily; Evans, N. Wyn
2013-04-01
Using multi-epoch observations of the Stripe 82 region from the Sloan Digital Sky Survey (SDSS), we measure precise statistical proper motions of the stars in the Sagittarius (Sgr) stellar stream. The multi-band photometry and SDSS radial velocities allow us to efficiently select Sgr members and thus enhance the proper-motion precision to ≈0.1 mas yr^-1. We measure separately the proper motion of a photometrically selected sample of the main-sequence turn-off stars, as well as spectroscopically selected Sgr giants. The data allow us to determine the proper motion separately for the two Sgr streams in the south found in Koposov et al. Together with the precise velocities from SDSS, our proper motions provide exquisite constraints on the three-dimensional motions of the stars in the Sgr streams.
Precision electroweak physics at LEP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mannelli, M.
1994-12-01
Copious event statistics, a precise understanding of the LEP energy scale, and a favorable experimental situation at the Z^0 resonance have allowed the LEP experiments both to provide dramatic confirmation of the Standard Model of strong and electroweak interactions and to place substantially improved constraints on the parameters of the model. The author concentrates on those measurements relevant to the electroweak sector. It will be seen that the precision of these measurements sensitively probes the structure of the Standard Model at the one-loop level, where the calculation of the observables measured at LEP is affected by the value chosen for the top quark mass. One finds that the LEP measurements are consistent with the Standard Model, but only if the mass of the top quark is measured to be within a restricted range of about 20 GeV.
NASA Astrophysics Data System (ADS)
Lotfy, Hayam Mahmoud; Hegazy, Maha Abdel Monem
2013-09-01
Four simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for simultaneous determination of simvastatin (SM) and ezetimibe (EZ) namely; extended ratio subtraction (EXRSM), simultaneous ratio subtraction (SRSM), ratio difference (RDSM) and absorption factor (AFM). The proposed spectrophotometric procedures do not require any preliminary separation step. The accuracy, precision and linearity ranges of the proposed methods were determined, and the methods were validated and the specificity was assessed by analyzing synthetic mixtures containing the cited drugs. The four methods were applied for the determination of the cited drugs in tablets and the obtained results were statistically compared with each other and with those of a reported HPLC method. The comparison showed that there is no significant difference between the proposed methods and the reported method regarding both accuracy and precision.
Using statistical text classification to identify health information technology incidents
Chai, Kevin E K; Anthony, Stephen; Coiera, Enrico; Magrabi, Farah
2013-01-01
Objective: To examine the feasibility of using statistical text classification to automatically identify health information technology (HIT) incidents in the USA Food and Drug Administration (FDA) Manufacturer and User Facility Device Experience (MAUDE) database. Design: We used a subset of 570 272 incidents including 1534 HIT incidents reported to MAUDE between 1 January 2008 and 1 July 2010. Text classifiers using regularized logistic regression were evaluated with both 'balanced' (50% HIT) and 'stratified' (0.297% HIT) datasets for training, validation, and testing. Dataset preparation, feature extraction, feature selection, cross-validation, classification, performance evaluation, and error analysis were performed iteratively to further improve the classifiers. Feature-selection techniques such as removing short words and stop words, stemming, lemmatization, and principal component analysis were examined. Measurements: κ statistic, F1 score, precision and recall. Results: Classification performance was similar on both the stratified (0.954 F1 score) and balanced (0.995 F1 score) datasets. Stemming was the most effective technique, reducing the feature set size to 79% while maintaining comparable performance. Training with balanced datasets improved recall (0.989) but reduced precision (0.165). Conclusions: Statistical text classification appears to be a feasible method for identifying HIT reports within large databases of incidents. Automated identification should enable more HIT problems to be detected, analyzed, and addressed in a timely manner. Semi-supervised learning may be necessary when applying machine learning to big data analysis of patient safety incidents and requires further investigation. PMID:23666777
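A minimal sketch of the kind of pipeline described above (regularized logistic regression over bag-of-words features, evaluated with precision, recall and F1), assuming scikit-learn is available; the toy incident narratives and labels are placeholders, not MAUDE data, and stemming/lemmatization are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder incident narratives and labels (1 = HIT incident, 0 = other).
texts = ["interface froze during order entry", "pump battery failed",
         "wrong patient record displayed", "catheter tip fractured"] * 25
labels = [1, 0, 1, 0] * 25

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, random_state=0, stratify=labels)

# L2-regularized logistic regression over TF-IDF features with stop-word
# removal; stemming or lemmatization would be added as a preprocessing step.
clf = make_pipeline(TfidfVectorizer(stop_words="english"),
                    LogisticRegression(C=1.0, max_iter=1000))
clf.fit(X_train, y_train)

p, r, f1, _ = precision_recall_fscore_support(
    y_test, clf.predict(X_test), average="binary")
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
```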
Statistical analysis of regulatory ecotoxicity tests.
Isnard, P; Flammarion, P; Roman, G; Babut, M; Bastien, P; Bintein, S; Esserméant, L; Férard, J F; Gallotti-Schmitt, S; Saouter, E; Saroli, M; Thiébaud, H; Tomassone, R; Vindimian, E
2001-11-01
ANOVA-type data analysis, i.e., determination of lowest-observed-effect concentrations (LOECs) and no-observed-effect concentrations (NOECs), has been widely used for statistical analysis of chronic ecotoxicity data. However, it is increasingly criticised for several reasons, the most important probably being that the NOEC depends on the choice of test concentrations and the number of replications, and rewards poor experiments, i.e., high variability, with high NOEC values. Thus, a recent OECD workshop concluded that the use of the NOEC should be phased out and that a regression-based estimation procedure should be used. Following this workshop, a working group was established at the French level between government, academia and industry representatives. Twenty-seven sets of chronic data (algae, daphnia, fish) were collected and analysed by ANOVA and regression procedures. Several regression models were compared, and relations between NOECs and ECx, for different values of x, were established in order to find an alternative summary parameter to the NOEC. Biological arguments are scarce to help in defining a negligible level of effect x for the ECx. With regard to their use in risk assessment procedures, a convenient methodology would be to choose x so that ECx values are on average similar to the present NOEC. This would lead to no major change in the risk assessment procedure. However, experimental data show that ECx values depend on the regression models and that their accuracy decreases in the low-effect zone. This disadvantage could probably be reduced by adapting existing experimental protocols, but it could mean more experimental effort and higher cost. ECx values (derived with existing test guidelines, e.g., regarding the number of replicates) whose lower confidence bounds are on average similar to the present NOEC would improve this approach by a priori encouraging more precise experiments. However, narrow confidence intervals are not only linked to good experimental practices, but also depend on the distance between the best model fit and the experimental data. Moreover, these approaches still use the NOEC as a reference, although this reference is statistically not correct. In contrast, EC50 values are the most precise to estimate on a concentration-response curve, but they are clearly different from the NOEC and their use would require a modification of existing assessment factors.
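A minimal sketch of the regression-based alternative discussed above: fit a two-parameter log-logistic concentration-response model and read off ECx for a chosen effect level x. The concentration and response values are invented, and the log-logistic form is just one of the candidate models such a working group might compare.

```python
import numpy as np
from scipy.optimize import curve_fit

def loglogistic(conc, ec50, b):
    """Fraction of the control response remaining at concentration conc."""
    return 1.0 / (1.0 + (conc / ec50) ** b)

# Invented chronic-test data: concentrations and responses relative to control.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = np.array([0.98, 0.95, 0.85, 0.60, 0.30, 0.08])

(ec50, b), _ = curve_fit(loglogistic, conc, resp, p0=[2.0, 1.5])

def ecx(x, ec50, b):
    """Concentration producing an x % reduction from the control response."""
    return ec50 * (x / (100.0 - x)) ** (1.0 / b)

print(f"EC50 = {ec50:.2f}, EC10 = {ecx(10, ec50, b):.2f}, "
      f"EC20 = {ecx(20, ec50, b):.2f}")
```

As the abstract notes, the further x lies in the low-effect zone (EC10, EC5, ...), the wider the confidence interval on ECx tends to become for a fixed experimental design.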
NASA Astrophysics Data System (ADS)
Vandergoes, Marcus J.; Howarth, Jamie D.; Dunbar, Gavin B.; Turnbull, Jocelyn C.; Roop, Heidi A.; Levy, Richard H.; Li, Xun; Prior, Christine; Norris, Margaret; Keller, Liz D.; Baisden, W. Troy; Ditchburn, Robert; Fitzsimons, Sean J.; Bronk Ramsey, Christopher
2018-05-01
Annually resolved (varved) lake sequences are important palaeoenvironmental archives as they offer a direct incremental dating technique for high-frequency reconstruction of environmental and climate change. Despite the importance of these records, establishing a robust chronology and quantifying its precision and accuracy (estimations of error) remains an essential but challenging component of their development. We outline an approach for building reliable independent chronologies, testing the accuracy of layer counts and integrating all chronological uncertainties to provide quantitative age and error estimates for varved lake sequences. The approach incorporates (1) layer counts and estimates of counting precision; (2) radiometric and biostratigraphic dating techniques to derive an independent chronology; and (3) the application of Bayesian age modelling to produce an integrated age model. This approach is applied to a case study of an annually resolved sediment record from Lake Ohau, New Zealand. The most robust age model provides an average error of 72 years across the whole depth range. This represents a fractional uncertainty of ∼5%, higher than the <3% quoted for most published varve records. However, the age model and reported uncertainty represent the best fit between layer counts and independent chronology, and the uncertainties account for both layer-counting precision and the chronological accuracy of the layer counts. This integrated approach provides a more representative estimate of age uncertainty and therefore represents a statistically more robust chronology.
A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.
Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa
2016-05-17
Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss Kappa statistic was 0.601). This method is presented as a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large project linkages where the number of record pairs produced may be very large, often running into millions.
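A minimal sketch of the extrapolation step that such a sampling scheme implies: each similarity-score bin contributes its total pair count, the size of the clerically reviewed sample, and the number of sampled pairs confirmed as true matches. The bin totals, sample sizes and cut-off below are invented for illustration and do not reproduce the authors' procedure in detail.

```python
# For each similarity-score bin: total pairs in the bin, number sampled for
# clerical review, and how many of the sampled pairs were true matches.
bins = [
    {"total": 5000, "sampled": 200, "true_in_sample": 196},  # above cut-off
    {"total": 1200, "sampled": 200, "true_in_sample": 118},  # above cut-off
    {"total": 8000, "sampled": 200, "true_in_sample": 9},    # below cut-off
]
accepted = [True, True, False]  # which bins lie above the acceptance cut-off

tp = fp = fn = 0.0
for b, acc in zip(bins, accepted):
    # Scale the sampled proportion of true matches up to the whole bin.
    est_true = b["total"] * b["true_in_sample"] / b["sampled"]
    if acc:
        tp += est_true
        fp += b["total"] - est_true
    else:
        fn += est_true  # true matches lost below the cut-off

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"estimated precision={precision:.3f} recall={recall:.3f}")
```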
NASA Astrophysics Data System (ADS)
Kunert, Anna Theresa; Scheel, Jan Frederik; Helleis, Frank; Klimach, Thomas; Pöschl, Ulrich; Fröhlich-Nowoisky, Janine
2016-04-01
Freezing of water above homogeneous freezing is catalyzed by ice nucleation active (INA) particles called ice nuclei (IN), which can be of various inorganic or biological origin. The freezing temperatures reach up to -1 °C for some biological samples and are dependent on the chemical composition of the IN. The standard method to analyze IN in solution is the droplet freezing assay (DFA) established by Gabor Vali in 1970. Several modifications and improvements were already made within the last decades, but they are still limited by either small droplet numbers, large droplet volumes or inadequate separation of the single droplets resulting in mutual interferences and therefore improper measurements. The probability that miscellaneous IN are concentrated together in one droplet increases with the volume of the droplet, which can be described by the Poisson distribution. At a given concentration, the partition of a droplet into several smaller droplets leads to finely dispersed IN resulting in better statistics and therefore in a better resolution of the nucleation spectrum. We designed a new customized high-performance droplet freezing assay (HP-DFA), which represents an upgrade of the previously existing DFAs in terms of temperature range and statistics. The necessity of observing freezing events at temperatures lower than homogeneous freezing due to freezing point depression, requires high-performance thermostats combined with an optimal insulation. Furthermore, we developed a cooling setup, which allows both huge and tiny temperature changes within a very short period of time. Besides that, the new DFA provides the analysis of more than 750 droplets per run with a small droplet volume of 5 μL. This enables a fast and more precise analysis of biological samples with complex IN composition as well as better statistics for every sample at the same time.
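A minimal sketch of how the frozen fraction from a droplet freezing assay is commonly converted into a cumulative ice-nuclei concentration under the Poisson assumption mentioned above (Vali-type analysis). The frozen-fraction values are invented; only the 5 μL droplet volume is taken from the abstract.

```python
import numpy as np

V_DROP_ML = 5e-3  # droplet volume: 5 microlitres expressed in millilitres

def cumulative_in_concentration(frozen_fraction, v_drop_ml=V_DROP_ML):
    """Cumulative IN concentration per mL of sample active at or above a
    given temperature, assuming the number of IN per droplet is Poisson
    distributed: K(T) = -ln(1 - f(T)) / V_drop."""
    f = np.asarray(frozen_fraction, dtype=float)
    return -np.log(1.0 - f) / v_drop_ml

# Invented frozen fractions at a few temperatures (out of ~750 droplets).
temps_C = np.array([-5.0, -10.0, -15.0, -20.0])
frozen = np.array([0.02, 0.15, 0.60, 0.95])
print(cumulative_in_concentration(frozen))
```

Smaller droplets at the same sample concentration shift the frozen fractions away from saturation, which is why splitting the sample into many small droplets resolves the nucleation spectrum more finely.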
Pohl, Lydia; Kölbl, Angelika; Werner, Florian; Mueller, Carsten W; Höschen, Carmen; Häusler, Werner; Kögel-Knabner, Ingrid
2018-04-30
Aluminium (Al)-substituted goethite is ubiquitous in soils and sediments. The extent of Al-substitution affects the physicochemical properties of the mineral and influences its macroscale properties. Bulk analysis only provides total Al/Fe ratios without providing information with respect to the Al-substitution of single minerals. Here, we demonstrate that nanoscale secondary ion mass spectrometry (NanoSIMS) enables the precise determination of Al-content in single minerals, while simultaneously visualising the variation of the Al/Fe ratio. Al-substituted goethite samples were synthesized with increasing Al concentrations of 0.1, 3, and 7 % and analysed by NanoSIMS in combination with established bulk spectroscopic methods (XRD, FTIR, Mössbauer spectroscopy). The high spatial resolution (50-150 nm) of NanoSIMS is accompanied by a high number of single-point measurements. We statistically evaluated the Al/Fe ratios derived from NanoSIMS, while maintaining the spatial information and reassigning it to its original localization. XRD analyses confirmed increasing concentration of incorporated Al within the goethite structure. Mössbauer spectroscopy revealed 11 % of the goethite samples generated at high Al concentrations consisted of hematite. The NanoSIMS data show that the Al/Fe ratios are in agreement with bulk data derived from total digestion and demonstrated small spatial variability between single-point measurements. More advantageously, statistical analysis and reassignment of single-point measurements allowed us to identify distinct spots with significantly higher or lower Al/Fe ratios. NanoSIMS measurements confirmed the capacity to produce images, which indicated the uniform increase in Al-concentrations in goethite. Using a combination of statistical analysis with information from complementary spectroscopic techniques (XRD, FTIR and Mössbauer spectroscopy) we were further able to reveal spots with lower Al/Fe ratios as hematite. Copyright © 2018 John Wiley & Sons, Ltd.
Physics opportunities with meson beams
Briscoe, William J.; Doring, Michael; Haberzettl, Helmut; ...
2015-10-20
Over the past two decades, meson photo- and electro-production data of unprecedented quality and quantity have been measured at electromagnetic facilities worldwide. By contrast, the meson-beam data for the same hadronic final states are mostly outdated and largely of poor quality, or even nonexistent, and thus provide inadequate input to help interpret, analyze, and exploit the full potential of the new electromagnetic data. To reap the full benefit of the high-precision electromagnetic data, new high-statistics data from measurements with meson beams, with good angle and energy coverage for a wide range of reactions, are critically needed to advance our knowledge in baryon and meson spectroscopy and other related areas of hadron physics. To address this situation, a state-of-the-art meson-beam facility needs to be constructed. Furthermore, the present paper summarizes unresolved issues in hadron physics and outlines the vast opportunities and advances that only become possible with such a facility.
Physics opportunities with meson beams
NASA Astrophysics Data System (ADS)
Briscoe, William J.; Döring, Michael; Haberzettl, Helmut; Manley, D. Mark; Naruki, Megumi; Strakovsky, Igor I.; Swanson, Eric S.
2015-10-01
Over the past two decades, meson photo- and electroproduction data of unprecedented quality and quantity have been measured at electromagnetic facilities worldwide. By contrast, the meson-beam data for the same hadronic final states are mostly outdated and largely of poor quality, or even non-existent, and thus provide inadequate input to help interpret, analyze, and exploit the full potential of the new electromagnetic data. To reap the full benefit of the high-precision electromagnetic data, new high-statistics data from measurements with meson beams, with good angle and energy coverage for a wide range of reactions, are critically needed to advance our knowledge in baryon and meson spectroscopy and other related areas of hadron physics. To address this situation, a state-of-the-art meson-beam facility needs to be constructed. The present paper summarizes unresolved issues in hadron physics and outlines the vast opportunities and advances that only become possible with such a facility.
[Hang-gliding accidents in high mountains. Apropos of 200 cases].
Foray, J; Abrassart, S; Femmy, T; Aldilli, M
1991-01-01
A review of 200 cases of "paragliding" accidents in high mountain areas has been completed. The first flights proved deadly: a thesis written in 1987 in Grenoble reported seven deaths among 97 casualties. Since then the statistics seem to be improving as a consequence of the introduction of regulations and the establishment of "paragliding" schools. The most frequent accidents happen on landing: in 70% of the cases, fractures of the ankle (tibiotarsal joint), the wrist and the spinal column prevail. They happen to young adults between 20 and 40 years old, with variable experience. Preventive measures consist of greater prudence, good physical condition and precise aerological knowledge. The adepts of this sport have understood that wearing a helmet and appropriate shoes can reduce the severity of the accidents. "Paragliding", if not a dangerous sport, is certainly a risky one.
The branching ratio ω → π ^+π ^- revisited
NASA Astrophysics Data System (ADS)
Hanhart, C.; Holz, S.; Kubis, B.; Kupść, A.; Wirzba, A.; Xiao, C. W.
2017-02-01
We analyze the most recent data for the pion vector form factor in the timelike region, employing a model-independent approach based on dispersion theory. We confirm earlier observations about the inconsistency of different modern high-precision data sets. Excluding the BaBar data, we find an updated value for the isospin-violating branching ratio B(ω → π^+π^-) = (1.46 ± 0.08) × 10^-2. As a side result, we also extract an improved value for the pion vector or charge radius, √⟨r_V^2⟩ = 0.6603(5)(4) fm, where the first uncertainty is statistical as derived from the fit, while the second estimates the possible size of nonuniversal radiative corrections. In addition, we demonstrate that modern high-quality data for the decay η′ → π^+π^-γ will allow for a further improved determination of the transition strength ω → π^+π^-.
The STAR Detector Upgrades and Electromagnetic Probes in Beam Energy Scan Phase II
NASA Astrophysics Data System (ADS)
Yang, Chi
The Beam Energy Scan Phase II at RHIC, BES-II, is scheduled from year 2019 to 2020 and will explore the high baryon density region of the QCD phase diagram with high precision. The program will focus on the interesting energy region determined from the results of BES-I. Some of the key measurements anticipated are the chiral symmetry restoration and QGP thermal radiation in the dilepton and direct photon channels. The measurements will be possible with an order of magnitude better statistics provided by the electron cooling upgrade of RHIC and with the detector upgrades planned to extend STAR experimental reach. The upgrades are: the inner Time Projection Chamber sectors (iTPC), the Event Plane Detector (EPD), and the end-cap Time of Flight (eTOF). We present the BES-II program details and the physics opportunities in the dilepton and direct photon channels enabled by the upgrades.
High-Precision Half-Life Measurements for the Superallowed Fermi β+ Emitters 14O and 18Ne
NASA Astrophysics Data System (ADS)
Laffoley, A. T.; Andreoiu, C.; Austin, R. A. E.; Ball, G. C.; Bender, P. C.; Bidaman, H.; Bildstein, V.; Blank, B.; Bouzomita, H.; Cross, D. S.; Deng, G.; Diaz Varela, A.; Dunlop, M. R.; Dunlop, R.; Finlay, P.; Garnsworthy, A. B.; Garrett, P.; Giovinazzo, J.; Grinyer, G. F.; Grinyer, J.; Hadinia, B.; Jamieson, D. S.; Jigmeddorj, B.; Ketelhut, S.; Kisliuk, D.; Leach, K. G.; Leslie, J. R.; MacLean, A.; Miller, D.; Mills, B.; Moukaddam, M.; Radich, A. J.; Rajabali, M. M.; Rand, E. T.; Svensson, C. E.; Tardiff, E.; Thomas, J. C.; Turko, J.; Voss, P.; Unsworth, C.
High-precision half-life measurements, at the level of ±0.04%, for the superallowed Fermi emitters 14O and 18Ne have been performed at TRIUMF's Isotope Separator and Accelerator facility. Using 3 independent detector systems, a gas-proportional counter, a fast plastic scintillator, and a high-purity germanium array, a series of direct β and γ counting measurements were performed for each of the isotopes. In the case of 14O, these measurements were made to help resolve an existing discrepancy between detection methods, whereas for 18Ne the half-life precision has been improved in anticipation of forthcoming high-precision branching ratio measurements.
A new hearing protector rating: The Noise Reduction Statistic for use with A weighting (NRSA).
NASA Astrophysics Data System (ADS)
Berger, Elliott H.; Gauger, Dan
2004-05-01
An important question to ask in regard to hearing protection devices (HPDs) is how much hearing protection they can provide. With respect to the law, at least, this question was answered in 1979 when the U.S. Environmental Protection Agency (EPA) promulgated a labeling regulation specifying a Noise Reduction Rating (NRR) measured in decibels (dB). In the intervening 25 years many concerns have arisen over this regulation. Currently the EPA is considering proposing a revised rule. This report examines the relevant issues in order to provide recommendations for new ratings and a new method of obtaining the test data. The conclusion is that a Noise Reduction Statistic for use with A weighting (NRSA), an A-A' rating computed in a manner that considers both intersubject and interspectrum variation in protection, yields sufficient precision. Two such statistics ought to be specified on the primary package label: the smaller one to indicate the protection that is possible for most users to exceed, and a larger one such that the range between the two numbers conveys to the user the uncertainty in protection provided. Guidance on how to employ these numbers, and a suggestion for an additional, more precise, graphically oriented rating to be provided on a secondary label, are also included.
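A minimal sketch of the general idea of a two-number rating that reflects both intersubject and interspectrum spread in protection. The attenuation matrix is randomly generated and the percentile choices are illustrative assumptions; this is not the NRSA computation specified in the report.

```python
import numpy as np

# Hypothetical A-weighted noise reductions (dB) for 10 subjects (rows)
# measured against 5 different noise spectra (columns).
rng = np.random.default_rng(1)
protection_db = rng.normal(loc=22.0, scale=4.0, size=(10, 5))

# Pool the intersubject and interspectrum variation, then report a lower
# number that most users should exceed and a higher number whose distance
# from it conveys the uncertainty in protection.
values = protection_db.ravel()
low_rating = np.percentile(values, 20)   # illustrative percentile choice
high_rating = np.percentile(values, 80)  # illustrative percentile choice
print(f"rating pair: {low_rating:.1f} dB / {high_rating:.1f} dB")
```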
Precision injection molding of freeform optics
NASA Astrophysics Data System (ADS)
Fang, Fengzhou; Zhang, Nan; Zhang, Xiaodong
2016-08-01
Precision injection molding is the most efficient mass production technology for manufacturing plastic optics. Applications of plastic optics in the fields of imaging, illumination, and concentration call for a variety of complex surface forms, evolving from conventional plano and spherical surfaces to aspheric and freeform surfaces. These require high optical quality with high form accuracy and low residual stresses, which challenges both the machining of optical tool inserts and the precision injection molding process. The present paper reviews recent progress in mold tool machining and precision injection molding, with more emphasis on precision injection molding. The challenges and future development trends are also discussed.
Sawmill simulation: concepts and computer use
Hugh W. Reynolds; Charles J. Gatchell
1969-01-01
Product specifications were fed into a computer so that the yield of products from the same sample of logs could be determined for simulated sawing methods. Since different sawing patterns were tested on the same sample, variation among log samples was eliminated; hence, the statistical conclusions are very precise.