Characterizations of linear sufficient statistics
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Redner, R.; Decell, H. P., Jr.
1977-01-01
Necessary and sufficient conditions were developed for a surjective bounded linear operator T from a Banach space X to a Banach space Y to be a sufficient statistic for a dominated family of probability measures defined on the Borel sets of X. These results were applied to characterize linear sufficient statistics for families of the exponential type, including as special cases the Wishart and multivariate normal distributions. The latter result was used to establish precisely which procedures for sampling from a normal population have the property that the sample mean is a sufficient statistic.
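As a concrete illustration of the multivariate normal special case mentioned above (a textbook sketch, not the operator-theoretic argument of the report): for i.i.d. observations from N_p(mu, Sigma) with Sigma known, the Fisher–Neyman factorization shows that the linear map sending a sample to its mean is sufficient for mu.

```latex
% Sufficiency of the sample mean for i.i.d. N_p(\mu,\Sigma), \Sigma known
\prod_{i=1}^{n} f(x_i\mid\mu)
  = (2\pi)^{-np/2}\,|\Sigma|^{-n/2}
    \exp\!\Big(-\tfrac12\sum_{i=1}^{n}(x_i-\mu)^{\top}\Sigma^{-1}(x_i-\mu)\Big)
  = \underbrace{\exp\!\Big(n\,\mu^{\top}\Sigma^{-1}\bar{x}
        -\tfrac{n}{2}\,\mu^{\top}\Sigma^{-1}\mu\Big)}_{g(\bar{x};\,\mu)}
    \;
    \underbrace{(2\pi)^{-np/2}\,|\Sigma|^{-n/2}
        \exp\!\Big(-\tfrac12\sum_{i=1}^{n}x_i^{\top}\Sigma^{-1}x_i\Big)}_{h(x_1,\dots,x_n)} ,
```

so by the factorization theorem the statistic T(x_1, ..., x_n) = x̄ depends on the data only linearly and is sufficient for mu.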
ERIC Educational Resources Information Center
Spencer, Bryden
2016-01-01
Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…
Detection of non-Gaussian fluctuations in a quantum point contact.
Gershon, G; Bomze, Yu; Sukhorukov, E V; Reznikov, M
2008-07-04
An experimental study of current fluctuations through a tunable transmission barrier, a quantum point contact, is reported. We measure the probability distribution function of transmitted charge with precision sufficient to extract the first three cumulants. To obtain the intrinsic quantities, corresponding to a voltage-biased barrier, we employ a procedure that accounts for the response of the external circuit and the amplifier. The third cumulant, obtained with high precision, is found to agree with the prediction for the statistics of transport in the non-Poissonian regime.
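For reference, the first three cumulants referred to here are the mean, the variance, and the third central moment of the transmitted-charge distribution; for a Poisson process all three coincide, so their ratios quantify non-Poissonian statistics. A minimal numerical sketch with synthetic counts (not the measured data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for repeated measurements of transmitted charge
# (in units of the electron charge); Poisson numbers are used only as
# an illustration of a counting distribution.
q = rng.poisson(lam=50.0, size=200_000)

k1 = q.mean()                 # first cumulant: mean transmitted charge
k2 = q.var()                  # second cumulant: variance (shot noise)
k3 = np.mean((q - k1) ** 3)   # third cumulant: third central moment

print(f"k1={k1:.2f}  k2={k2:.2f}  k3={k3:.2f}  (Poisson reference: k1=k2=k3)")
```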
NASA Astrophysics Data System (ADS)
Rusu-Anghel, S.
2017-01-01
Analytical modeling of the cement manufacturing process is difficult because of its complexity and has not yielded sufficiently precise mathematical models. In this paper, based on a statistical model of the process and using the knowledge of human experts, a fuzzy system was designed for automatic control of the clinkering process.
Study samples are too small to produce sufficiently precise reliability coefficients.
Charter, Richard A
2003-04-01
In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
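To make "sufficiently precise" concrete for a correlation-based (e.g., retest) reliability coefficient, the width of a Fisher-z confidence interval shrinks roughly as 1/sqrt(N - 3). The sketch below uses that standard approximation with the sample sizes reported in the survey; it is an illustration, not the author's own analysis, and internal consistency coefficients require different interval formulas.

```python
import numpy as np
from scipy import stats

def fisher_ci(r, n, conf=0.95):
    """Approximate CI for a correlation-type reliability coefficient."""
    z = np.arctanh(r)                      # Fisher z-transform
    se = 1.0 / np.sqrt(n - 3)              # standard error of z
    zcrit = stats.norm.ppf(0.5 + conf / 2)
    return np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)

for n in (36, 64, 90, 182, 260):           # median/mean sample sizes from the survey
    lo, hi = fisher_ci(0.80, n)
    print(f"N={n:4d}  95% CI for r=0.80: [{lo:.3f}, {hi:.3f}]  width={hi - lo:.3f}")
```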
Jones, David T; Kandathil, Shaun M
2018-04-26
In addition to substitution frequency data from protein sequence alignments, many state-of-the-art methods for contact prediction rely on additional sources of information, or features, of protein sequences in order to predict residue-residue contacts, such as solvent accessibility, predicted secondary structure, and scores from other contact prediction methods. It is unclear how much of this information is needed to achieve state-of-the-art results. Here, we show that using deep neural network models, simple alignment statistics contain sufficient information to achieve state-of-the-art precision. Our prediction method, DeepCov, uses fully convolutional neural networks operating on amino-acid pair frequency or covariance data derived directly from sequence alignments, without using global statistical methods such as sparse inverse covariance or pseudolikelihood estimation. Comparisons against CCMpred and MetaPSICOV2 show that using pairwise covariance data calculated from raw alignments as input allows us to match or exceed the performance of both of these methods. Almost all of the achieved precision is obtained when considering relatively local windows (around 15 residues) around any member of a given residue pairing; larger window sizes have comparable performance. Assessment on a set of shallow sequence alignments (fewer than 160 effective sequences) indicates that the new method is substantially more precise than CCMpred and MetaPSICOV2 in this regime, suggesting that improved precision is attainable on smaller sequence families. Overall, the performance of DeepCov is competitive with the state of the art, and our results demonstrate that global models, which employ features from all parts of the input alignment when predicting individual contacts, are not strictly needed in order to attain precise contact predictions. DeepCov is freely available at https://github.com/psipred/DeepCov. d.t.jones@ucl.ac.uk.
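The covariance features described here are, in essence, column-pair amino-acid frequencies minus the product of single-column frequencies. A toy sketch of computing them from a small alignment (a simplified illustration of this kind of featurization, not the DeepCov code; sequence weighting and pseudocounts are omitted):

```python
import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWY-"          # 20 amino acids + gap
AA_INDEX = {a: i for i, a in enumerate(ALPHABET)}

def pair_covariances(alignment):
    """Return cov[i, j, a, b] = f_ij(a,b) - f_i(a) * f_j(b) for all column pairs."""
    n_seq, n_col = len(alignment), len(alignment[0])
    onehot = np.zeros((n_seq, n_col, len(ALPHABET)))
    for s, seq in enumerate(alignment):
        for c, aa in enumerate(seq):
            onehot[s, c, AA_INDEX[aa]] = 1.0
    f_i = onehot.mean(axis=0)                                   # single-column frequencies
    f_ij = np.einsum("sia,sjb->ijab", onehot, onehot) / n_seq   # pair frequencies
    return f_ij - np.einsum("ia,jb->ijab", f_i, f_i)

toy_alignment = ["MKV-LA", "MRV-LA", "MKVQLA", "MRIQLA"]        # invented mini-alignment
cov = pair_covariances(toy_alignment)
print(cov.shape)   # (6, 6, 21, 21): one 21x21 covariance block per column pair
```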
NASA Astrophysics Data System (ADS)
Friedrich, Oliver; Eifler, Tim
2018-01-01
Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case, finding a similar performance.
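The core idea can be illustrated with a first-order expansion of the precision matrix around an analytic covariance model M: writing C = M + ΔC and estimating ΔC from a modest number of simulations gives C^{-1} ≈ M^{-1} - M^{-1} ΔC M^{-1}. The sketch below is a generic linear expansion under that assumption, not the specific estimator of the paper; dimensions and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_sims = 50, 400

# "True" covariance and an analytic model that captures most of it
A = rng.normal(size=(d, d))
C_true = A @ A.T / d + np.eye(d)
M = C_true + 0.05 * np.diag(np.diag(C_true))        # slightly mis-modelled covariance

# Sample covariance estimated from n_sims simulated data vectors
sims = rng.multivariate_normal(np.zeros(d), C_true, size=n_sims)
C_hat = np.cov(sims, rowvar=False)

Minv = np.linalg.inv(M)
prec_expansion = Minv - Minv @ (C_hat - M) @ Minv    # first-order expansion around M
prec_brute = np.linalg.inv(C_hat)                    # noisy brute-force sample inverse

truth = np.linalg.inv(C_true)
err = lambda P: np.linalg.norm(P - truth) / np.linalg.norm(truth)
print(f"model only: {err(Minv):.3f}  expansion: {err(prec_expansion):.3f}  "
      f"sample inverse: {err(prec_brute):.3f}")
```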
Phenomenological constraints on the bulk viscosity of QCD
NASA Astrophysics Data System (ADS)
Paquet, Jean-François; Shen, Chun; Denicol, Gabriel; Jeon, Sangyong; Gale, Charles
2017-11-01
While small at very high temperature, the bulk viscosity of Quantum Chromodynamics is expected to grow in the confinement region. Although its precise magnitude and temperature-dependence in the cross-over region is not fully understood, recent theoretical and phenomenological studies provided evidence that the bulk viscosity can be sufficiently large to have measurable consequences on the evolution of the quark-gluon plasma. In this work, a Bayesian statistical analysis is used to establish probabilistic constraints on the temperature-dependence of bulk viscosity using hadronic measurements from RHIC and LHC.
Natural gas odor level testing: Instruments and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, E.H.
1995-12-01
An odor in natural and LP gases is necessary. The statistics are overwhelming; when gas customers can smell a leak before the percentage of gas in air reaches a combustible mixture, the chances of an accident are greatly reduced. How do gas companies determine if there is sufficient odor reaching every gas customer's home? Injection equipment is important. The rate and quality of odorant is important. Nevertheless, precision odorization alone does not guarantee that customers' homes always have gas with a readily detectable odor. To secure that goal, odor monitoring instruments are necessary.
A new hearing protector rating: The Noise Reduction Statistic for use with A weighting (NRSA).
NASA Astrophysics Data System (ADS)
Berger, Elliott H.; Gauger, Dan
2004-05-01
An important question to ask in regard to hearing protection devices (HPDs) is how much hearing protection they can provide. With respect to the law, at least, this question was answered in 1979 when the U.S. Environmental Protection Agency (EPA) promulgated a labeling regulation specifying a Noise Reduction Rating (NRR) measured in decibels (dB). In the intervening 25 years many concerns have arisen over this regulation. Currently the EPA is considering proposing a revised rule. This report examines the relevant issues in order to provide recommendations for new ratings and a new method of obtaining the test data. The conclusion is that a Noise Reduction Statistic for use with A weighting (NRSA), an A-A' rating computed in a manner that considers both intersubject and interspectrum variation in protection, yields sufficient precision. Two such statistics ought to be specified on the primary package label: the smaller one to indicate the protection that most users can be expected to exceed, and a larger one such that the range between the two numbers conveys to the user the uncertainty in the protection provided. Guidance on how to employ these numbers, and a suggestion for an additional, more precise, graphically oriented rating to be provided on a secondary label, are also included.
Nanoscale temperature mapping in operating microelectronic devices
Mecklenburg, Matthew; Hubbard, William A.; White, E. R.; ...
2015-02-05
We report that modern microelectronic devices have nanoscale features that dissipate power nonuniformly, but fundamental physical limits frustrate efforts to detect the resulting temperature gradients. Contact thermometers disturb the temperature of a small system, while radiation thermometers struggle to beat the diffraction limit. Exploiting the same physics as Fahrenheit’s glass-bulb thermometer, we mapped the thermal expansion of Joule-heated, 80-nanometer-thick aluminum wires by precisely measuring changes in density. With a scanning transmission electron microscope (STEM) and electron energy loss spectroscopy (EELS), we quantified the local density via the energy of aluminum’s bulk plasmon. Rescaling density to temperature yields maps with a statistical precision of 3 kelvin hertz^(-1/2), an accuracy of 10%, and nanometer-scale resolution. Lastly, many common metals and semiconductors have sufficiently sharp plasmon resonances to serve as their own thermometers.
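The "rescaling density to temperature" step follows from the free-electron relation E_p ∝ sqrt(n): thermal expansion lowers the electron density, so a small fractional plasmon-energy shift maps to a temperature rise via ΔT ≈ -2ΔE/(3αE), with α the linear thermal-expansion coefficient. The sketch below is a simplified free-electron estimate with handbook constants; it ignores band-structure and substrate-constraint corrections that an actual calibration must handle.

```python
# Convert a measured bulk-plasmon energy shift in aluminum to a temperature change,
# using E_p proportional to sqrt(n) and n proportional to 1/V with V = V0*(1 + 3*alpha*dT).
ALPHA_AL = 23.1e-6        # linear thermal expansion coefficient of Al, 1/K (handbook value)
E_P_AL = 15.0             # nominal bulk plasmon energy of Al near room temperature, eV

def delta_T_from_plasmon_shift(delta_E_eV, e_p=E_P_AL, alpha=ALPHA_AL):
    """Free-electron estimate: dE/E = -(3/2)*alpha*dT  =>  dT = -2*dE/(3*alpha*E)."""
    return -2.0 * delta_E_eV / (3.0 * alpha * e_p)

# Example: a 50 meV red-shift of the plasmon corresponds to roughly +96 K in this model
print(f"{delta_T_from_plasmon_shift(-0.050):.1f} K")
```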
Analysis of video-recorded images to determine linear and angular dimensions in the growing horse.
Hunt, W F; Thomas, V G; Stiefel, W
1999-09-01
Studies of growth and conformation require statistical methods that are not applicable to subjective conformation standards used by breeders and trainers. A new system was developed to provide an objective approach for both science and industry, based on analysis of video images to measure aspects of conformation that were represented by angles or lengths. A studio crush was developed in which video images of horses of different sizes were taken after bone protuberances, located by palpation, were marked with white paper stickers. Screen pixel coordinates of calibration marks, bone markers and points on horse outlines were digitised from captured images and corrected for aspect ratio and 'fish-eye' lens effects. Calculations from the corrected coordinates produced linear dimensions and angular dimensions useful for comparison of horses for conformation and experimental purposes. The precision achieved by the method in determining linear and angular dimensions was examined through systematically determining variance for isolated steps of the procedure. Angles of the front limbs viewed from in front were determined with a standard deviation of 2-5 degrees and effects of viewing angle were detectable statistically. The height of the rump and wither were determined with precision closely related to the limitations encountered in locating a point on a screen, which was greater for markers applied to the skin than for points at the edge of the image. Parameters determined from markers applied to the skin were, however, more variable (because their relation to bone position was affected by movement), but still provided a means by which a number of aspects of size and conformation can be determined objectively for many horses during growth. Sufficient precision was achieved to detect statistically relatively small effects on calculated parameters of camera height position.
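A minimal sketch of the core geometric step described above: converting digitized pixel coordinates of three bone markers into a joint angle after a simple aspect-ratio correction. The numbers and the aspect-ratio value are placeholders, and the published method additionally corrects for lens distortion using calibration marks.

```python
import numpy as np

def correct_aspect(points_px, aspect_ratio=1.09):
    """Scale the vertical pixel coordinate so x and y are in the same units.
    The aspect_ratio value here is a placeholder, not the study's calibration."""
    pts = np.asarray(points_px, dtype=float)
    pts[:, 1] *= aspect_ratio
    return pts

def joint_angle(p_prox, p_joint, p_dist):
    """Angle (degrees) at p_joint between the proximal and distal markers."""
    v1 = np.asarray(p_prox, float) - np.asarray(p_joint, float)
    v2 = np.asarray(p_dist, float) - np.asarray(p_joint, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

markers = correct_aspect([[412, 221], [405, 384], [398, 540]])  # three hypothetical marker positions
print(f"{joint_angle(*markers):.1f} degrees")
```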
Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?
Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R
2018-04-30
Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
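For readers unfamiliar with the "two-stage" terminology: stage one estimates a treatment effect and standard error within each trial, and stage two pools them, classically by inverse-variance weighting. A minimal sketch of the second stage under a fixed-effect (common-effect) assumption, with invented numbers:

```python
import numpy as np

# Stage-two pooling of per-trial estimates (theta) and standard errors (se).
# The values below are made up for illustration.
theta = np.array([0.42, 0.18, 0.35, 0.27, 0.51])
se = np.array([0.15, 0.10, 0.20, 0.12, 0.25])

w = 1.0 / se**2                          # inverse-variance weights
theta_pooled = np.sum(w * theta) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))

print(f"pooled effect = {theta_pooled:.3f} +/- {1.96 * se_pooled:.3f} (95% CI half-width)")
```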
Corpus-based Statistical Screening for Phrase Identification
Kim, Won; Wilbur, W. John
2000-01-01
Purpose: The authors study the extraction of useful phrases from a natural language database by statistical methods. The aim is to leverage human effort by providing preprocessed phrase lists with a high percentage of useful material. Method: The approach is to develop six different scoring methods that are based on different aspects of phrase occurrence. The emphasis here is not on lexical information or syntactic structure but rather on the statistical properties of word pairs and triples that can be obtained from a large database. Measurements: The Unified Medical Language System (UMLS) incorporates a large list of humanly acceptable phrases in the medical field as a part of its structure. The authors use this list of phrases as a gold standard for validating their methods. A good method is one that ranks the UMLS phrases high among all phrases studied. Measurements are 11-point average precision values and precision-recall curves based on the rankings. Result: The authors find that each of the six scoring methods proves effective in identifying UMLS-quality phrases in a large subset of MEDLINE. These methods are applicable both to word pairs and word triples. All six methods are optimally combined to produce composite scoring methods that are more effective than any single method. The quality of the composite methods appears sufficient to support the automatic placement of hyperlinks in text at the site of highly ranked phrases. Conclusion: Statistical scoring methods provide a promising approach to the extraction of useful phrases from a natural language database for the purpose of indexing or providing hyperlinks in text. PMID:10984469
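One simple corpus statistic of the kind described (an association score based only on occurrence counts, with no lexical or syntactic information) is pointwise mutual information for adjacent word pairs. The sketch below is a generic illustration of such a score, not necessarily one of the six methods the authors define.

```python
import math
from collections import Counter

def pair_scores(documents):
    """Score adjacent word pairs by pointwise mutual information (PMI)."""
    word_counts, pair_counts, n_words, n_pairs = Counter(), Counter(), 0, 0
    for doc in documents:
        tokens = doc.lower().split()
        word_counts.update(tokens)
        n_words += len(tokens)
        pairs = list(zip(tokens, tokens[1:]))
        pair_counts.update(pairs)
        n_pairs += len(pairs)
    scores = {}
    for (w1, w2), c in pair_counts.items():
        p_pair = c / n_pairs
        p1, p2 = word_counts[w1] / n_words, word_counts[w2] / n_words
        scores[(w1, w2)] = math.log2(p_pair / (p1 * p2))
    return scores

docs = ["myocardial infarction is an acute event",
        "acute myocardial infarction requires rapid treatment",
        "the treatment of infarction is rapid"]
for pair, s in sorted(pair_scores(docs).items(), key=lambda kv: -kv[1])[:3]:
    print(pair, round(s, 2))
```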
Role of sufficient statistics in stochastic thermodynamics and its implication to sensory adaptation
NASA Astrophysics Data System (ADS)
Matsumoto, Takumi; Sagawa, Takahiro
2018-04-01
A sufficient statistic is an important concept in statistics: a random variable that carries all the information required for a given inference task. We investigate the roles of sufficient statistics and related quantities in stochastic thermodynamics. Specifically, we prove that for general continuous-time bipartite networks, the existence of a sufficient statistic implies that an informational quantity called the sensory capacity takes its maximum value. Since the maximal sensory capacity imposes a constraint that the energetic efficiency cannot exceed one-half, our result implies that the existence of a sufficient statistic is inevitably accompanied by energetic dissipation. We also show that, in a particular parameter region of linear Langevin systems, there exists an optimal noise intensity at which the sensory capacity, the information-thermodynamic efficiency, and the total entropy production are optimized at the same time. We apply our general result to a model of sensory adaptation of E. coli and find that the sensory capacity is nearly maximal with experimentally realistic parameters.
NASA Astrophysics Data System (ADS)
Sikora, Mark; Compton@HIGS Team
2017-01-01
The electric (αn) and magnetic (βn) polarizabilities of the neutron are fundamental properties arising from its internal structure which describe the nucleon's response to applied electromagnetic fields. Precise measurements of the polarizabilities provide crucial constraints on models of Quantum Chromodynamics (QCD) in the low energy regime such as Chiral Effective Field Theories as well as emerging ab initio calculations from lattice-QCD. These values also contribute the most uncertainty to theoretical determinations of the proton-neutron mass difference. Historically, the experimental challenges to measuring αn and βn have been due to the difficulty in obtaining suitable targets and sufficiently intense beams, leading to significant statistical uncertainties. To address these issues, a program of Compton scattering experiments on the deuteron is underway at the High Intensity Gamma Source (HI γS) at Duke University with the aim of providing the world's most precise measurement of αn and βn. We report measurements of the Compton scattering differential cross section obtained at an incident photon energy of 65 MeV and discuss the sensitivity of these data to the polarizabilities.
NASA Astrophysics Data System (ADS)
Sikora, Mark
2016-09-01
The electric (αn) and magnetic (βn) polarizabilities of the neutron are fundamental properties arising from its internal structure which describe the nucleon's response to applied electromagnetic fields. Precise measurements of the polarizabilities provide crucial constraints on models of Quantum Chromodynamics (QCD) in the low energy regime such as Chiral Effective Field Theories as well as emerging ab initio calculations from lattice-QCD. These values also contribute the most uncertainty to theoretical determinations of the proton-neutron mass difference. Historically, the experimental challenges to measuring αn and βn have been due to the difficulty in obtaining suitable targets and sufficiently intense beams, leading to significant statistical uncertainties. To address these issues, a program of Compton scattering experiments on the deuteron is underway at the High Intensity Gamma Source (HI γS) at Duke University with the aim of providing the world's most precise measurement of αn and βn. We report measurements of the Compton scattering differential cross section obtained at incident photon energies of 65 and 85 MeV and discuss the sensitivity of these data to the polarizabilities.
Validation of the Filovirus Plaque Assay for Use in Preclinical Studies
Shurtleff, Amy C.; Bloomfield, Holly A.; Mort, Shannon; Orr, Steven A.; Audet, Brian; Whitaker, Thomas; Richards, Michelle J.; Bavari, Sina
2016-01-01
A plaque assay for quantitating filoviruses in virus stocks, prepared viral challenge inocula and samples from research animals has recently been fully characterized and standardized for use across multiple institutions performing Biosafety Level 4 (BSL-4) studies. After standardization studies were completed, Good Laboratory Practices (GLP)-compliant plaque assay method validation studies to demonstrate suitability for reliable and reproducible measurement of the Marburg Virus Angola (MARV) variant and Ebola Virus Kikwit (EBOV) variant commenced at the United States Army Medical Research Institute of Infectious Diseases (USAMRIID). The validation parameters tested included accuracy, precision, linearity, robustness, stability of the virus stocks and system suitability. The MARV and EBOV assays were confirmed to be accurate to ±0.5 log10 PFU/mL. Repeatability precision, intermediate precision and reproducibility precision were sufficient to return viral titers with a coefficient of variation (%CV) of ≤30%, deemed acceptable variation for a cell-based bioassay. Intraclass correlation statistical techniques for the evaluation of the assay’s precision when the same plaques were quantitated by two analysts returned values passing the acceptance criteria, indicating high agreement between analysts. The assay was shown to be accurate and specific when run on Nonhuman Primates (NHP) serum and plasma samples diluted in plaque assay medium, with negligible matrix effects. Virus stocks demonstrated stability for freeze-thaw cycles typical of normal usage during assay retests. The results demonstrated that the EBOV and MARV plaque assays are accurate, precise and robust for filovirus titration in samples associated with the performance of GLP animal model studies. PMID:27110807
Sharp predictions from eternal inflation patches in D-brane inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertog, Thomas; Janssen, Oliver, E-mail: thomas.hertog@fys.kuleuven.be, E-mail: opj202@nyu.edu
We numerically generate the six-dimensional landscape of D3-brane inflation and identify patches of eternal inflation near sufficiently flat inflection points of the potential. We show that reasonable measures that select patches of eternal inflation in the landscape yield sharp predictions for the spectral properties of primordial perturbations on observable scales. These include a scalar tilt of 0.936, a running of the scalar tilt of −0.00103, undetectably small tensors and non-Gaussianity, and no observable spatial curvature. Our results explicitly demonstrate that precision cosmology probes the combination of the statistical properties of the string landscape and the measure implied by the universe's quantum state.
Single photon laser altimeter simulator and statistical signal processing
NASA Astrophysics Data System (ADS)
Vacek, Michael; Prochazka, Ivan
2013-05-01
Spaceborne altimeters are common instruments onboard deep-space rendezvous spacecraft. They provide range and topographic measurements critical in spacecraft navigation. Simultaneously, the receiver part may be utilized for an Earth-to-satellite link, one-way time transfer, and precise optical radiometry. The main advantage of the single-photon counting approach is the ability to process signals with a very low signal-to-noise ratio, eliminating the need for large telescopes and a high-power laser source. Extremely small, rugged and compact microchip lasers can be employed. The major limiting factor, on the other hand, is the acquisition time needed to gather a sufficient volume of data in repetitive measurements in order to process and evaluate the data appropriately. Statistical signal processing is adopted to detect signals with average strength much lower than one photon per measurement. A comprehensive simulator design and range signal processing algorithm are presented to identify a mission-specific altimeter configuration. Typical mission scenarios (celestial body surface landing and topographical mapping) are simulated and evaluated. Promising single-photon altimeter applications of high interest are low-orbit (˜10 km), low-radial-velocity (several m/s) topographical mapping (asteroids, Phobos and Deimos) and landing altimetry (˜10 km), where range evaluation repetition rates of ˜100 Hz and 0.1 m precision may be achieved. Moon landing and asteroid Itokawa topographical mapping scenario simulations are discussed in more detail.
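The statistical signal processing referred to here amounts to accumulating time-of-flight histograms over many laser shots so that a return signal far weaker than one photon per shot emerges above the background. A toy sketch of that accumulation-and-peak-detection step, with made-up rates and bin counts rather than the simulator's parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
n_shots = 20_000                 # repetitive measurements
n_bins = 2_000                   # range gate divided into time bins
signal_bin = 731                 # true target return falls in this bin
p_signal = 0.05                  # mean signal photons per shot (<< 1)
bg_per_bin = 2e-4                # mean background photons per bin per shot

hist = np.zeros(n_bins)
for _ in range(n_shots):
    hist += rng.poisson(bg_per_bin, size=n_bins)       # dark counts + solar background
    if rng.random() < p_signal:                         # occasional single signal photon
        hist[signal_bin + rng.integers(-1, 2)] += 1     # small timing jitter

# Detection: the bin that stands out against the Poisson background estimate
bg_est = np.median(hist)
snr = (hist - bg_est) / np.sqrt(bg_est + 1e-9)
print("detected bin:", int(np.argmax(snr)), " expected:", signal_bin)
```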
Variable-pulse switching circuit accurately controls solenoid-valve actuations
NASA Technical Reports Server (NTRS)
Gillett, J. D.
1967-01-01
Solid state circuit generating adjustable square wave pulses of sufficient power operates a 28 volt dc solenoid valve at precise time intervals. This circuit is used for precise time control of fluid flow in combustion experiments.
Precision constraints on the top-quark effective field theory at future lepton colliders
NASA Astrophysics Data System (ADS)
Durieux, G.
We examine the constraints that future lepton colliders would impose on the effective field theory describing modifications of top-quark interactions beyond the standard model, through measurements of the $e^+e^-\\to bW^+\\:\\bar bW^-$ process. Statistically optimal observables are exploited to constrain simultaneously and efficiently all relevant operators. Their constraining power is sufficient for quadratic effective-field-theory contributions to have negligible impact on limits which are therefore basis independent. This is contrasted with the measurements of cross sections and forward-backward asymmetries. An overall measure of constraints strength, the global determinant parameter, is used to determine which run parameters impose the strongest restriction on the multidimensional effective-field-theory parameter space.
Optical nano artifact metrics using silicon random nanostructures
NASA Astrophysics Data System (ADS)
Matsumoto, Tsutomu; Yoshida, Naoki; Nishio, Shumpei; Hoga, Morihisa; Ohyagi, Yasuyuki; Tate, Naoya; Naruse, Makoto
2016-08-01
Nano-artifact metrics exploit unique physical attributes of nanostructured matter for authentication and clone resistance, which is vitally important in the age of the Internet of Things, where securing identities is critical. However, expensive and bulky experimental apparatuses, such as scanning electron microscopy, have been required in previous studies. Herein, we demonstrate an optical approach to characterise the nanoscale-precision signatures of silicon random structures towards realising low-cost and high-value information security technology. Unique and versatile silicon nanostructures, with dimensions well below the diffraction limit of light, are generated via resist collapse phenomena. We exploit the nanoscale precision of confocal laser microscopy in the height dimension; our experimental results demonstrate that the vertical precision of measurement is essential in satisfying the performances required for artifact metrics. Furthermore, by using state-of-the-art nanostructuring technology, we experimentally fabricate clones from the genuine devices. We demonstrate that the statistical properties of the genuine and clone devices are successfully exploited, showing that the liveness-detection-type approach, which is widely deployed in biometrics, is valid in artificially-constructed solid-state nanostructures. These findings pave the way for reasonable and yet sufficiently secure novel principles for information security based on silicon random nanostructures and optical technologies.
[Calm thinking for precision medicine of breast cancer in the boom].
Jiang, Z F; Xu, F R
2017-02-01
In the past two years, researchers have been exploring precision medicine. Thanks to the development of the sequencing industry and of clinical studies, the big data underpinning precision medicine have become increasingly sufficient. However, how to handle these data is still a question for clinicians. We focus on the hot issues that concern clinicians most, aiming to help them make suitable decisions between traditional and precision medicine for breast cancer. We believe precision medicine is on the way.
In vivo precision of conventional and digital methods for obtaining quadrant dental impressions.
Ender, Andreas; Zimmermann, Moritz; Attin, Thomas; Mehl, Albert
2016-09-01
Quadrant impressions are commonly used as an alternative to full-arch impressions. Digital impression systems provide the ability to take these impressions very quickly; however, few studies have investigated the accuracy of the technique in vivo. The aim of this study is to assess the precision of digital quadrant impressions in vivo in comparison to conventional impression techniques. Impressions were obtained via two conventional (metal full-arch tray, CI, and triple tray, T-Tray) and seven digital impression systems (Lava True Definition Scanner, T-Def; Lava Chairside Oral Scanner, COS; Cadent iTero, ITE; 3Shape Trios, TRI; 3Shape Trios Color, TRC; CEREC Bluecam, Software 4.0, BC4.0; CEREC Bluecam, Software 4.2, BC4.2; and CEREC Omnicam, OC). Impressions were taken three times for each of five subjects (n = 15). The impressions were then superimposed within the test groups. Differences from model surfaces were measured using a normal surface distance method. Precision was calculated using the Perc90_10 value. The values for all test groups were statistically compared. The precision ranged from 18.8 μm (CI) to 58.5 μm (T-Tray), with the highest precision in the CI, T-Def, BC4.0, TRC, and TRI groups. The deviation pattern varied distinctly depending on the impression method. Impression systems with single-shot capture exhibited greater deviations at the tooth surface, whereas high-frame-rate impression systems differed more in gingival areas. Triple tray impressions displayed higher local deviation at the occlusal contact areas of the upper and lower jaws. Digital quadrant impression methods achieve a level of precision comparable to conventional impression techniques. However, there are significant differences in terms of absolute values and deviation pattern. With all tested digital impression systems, time-efficient capturing of quadrant impressions is possible. The clinical precision of digital quadrant impression models is sufficient to cover a broad variety of restorative indications. Yet the precision differs significantly between the digital impression systems.
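A small sketch of the precision metric used here, assuming (as in related accuracy studies) that Perc90_10 denotes the spread between the 10th and 90th percentiles of the signed surface deviations obtained after superimposing repeated impressions; the deviation fields below are synthetic and purely illustrative.

```python
import numpy as np

def perc90_10(deviations_um):
    """Spread containing the central 80% of signed surface deviations (micrometres)."""
    d = np.asarray(deviations_um, dtype=float)
    return np.percentile(d, 90) - np.percentile(d, 10)

rng = np.random.default_rng(3)
# Synthetic deviation fields from superimposing two repeated scans (illustrative only)
conventional = rng.normal(0.0, 7.0, size=100_000)    # tight, symmetric deviations
triple_tray = rng.normal(5.0, 22.0, size=100_000)    # broader, locally biased deviations

print(f"conventional-like scan: Perc90_10 = {perc90_10(conventional):.1f} um")
print(f"triple-tray-like scan:  Perc90_10 = {perc90_10(triple_tray):.1f} um")
```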
Probabilistic seismic loss estimation via endurance time method
NASA Astrophysics Data System (ADS)
Tafakori, Ehsan; Pourzeynali, Saeid; Estekanchi, Homayoon E.
2017-01-01
Probabilistic Seismic Loss Estimation is a methodology used as a quantitative and explicit expression of the performance of buildings using terms that address the interests of both owners and insurance companies. Applying the ATC 58 approach for seismic loss assessment of buildings requires using Incremental Dynamic Analysis (IDA), which needs hundreds of time-consuming analyses, which in turn hinders its wide application. The Endurance Time Method (ETM) is proposed herein as part of a demand propagation prediction procedure and is shown to be an economical alternative to IDA. Various scenarios were considered to achieve this purpose and their appropriateness has been evaluated using statistical methods. The most precise and efficient scenario was validated through comparison against IDA driven response predictions of 34 code conforming benchmark structures and was proven to be sufficiently precise while offering a great deal of efficiency. The loss values were estimated by replacing IDA with the proposed ETM-based procedure in the ATC 58 procedure and it was found that these values suffer from varying inaccuracies, which were attributed to the discretized nature of damage and loss prediction functions provided by ATC 58.
Compensation for time delay in flight simulator visual-display systems
NASA Technical Reports Server (NTRS)
Crane, D. F.
1983-01-01
A piloted aircraft can be viewed as a closed-loop, man-machine control system. When a simulator pilot is performing a precision maneuver, a delay in the visual display of aircraft response to pilot-control input decreases the stability of the pilot-aircraft system. The less stable system is more difficult to control precisely. Pilot dynamic response and performance change as the pilot attempts to compensate for the decrease in system stability, and these changes bias the simulation results by influencing the pilot's rating of the handling qualities of the simulated aircraft. Delay compensation, designed to restore pilot-aircraft system stability, was evaluated in several studies which are reported here. The studies range from single-axis, tracking-task experiments (with sufficient subjects and trials to establish statistical significance of the results) to a brief evaluation of compensation of a computer-generated-imagery (CGI) visual display system in a full six-degree-of-freedom simulation. The compensation was effective - improvements in pilot performance and workload or aircraft handling-qualities rating (HQR) were observed. Results from recent aircraft handling-qualities research literature which support the compensation design approach are also reviewed.
1982-12-01
Recoverable nomenclature from this record: dj, estimate of the desired signal; DEL, sampling time interval; DS, direct sequence; c, sufficient statistic; E/T, signal power; Erfc, complementary error function. Recoverable abstract fragments: a white Gaussian noise (WGN) generator was added, along with a statistical subroutine to assess performance improvement; the received signal is correlated against a reference code and passed through a correlation detector whose output is the sufficient statistic, which is then applied to a threshold device.
Olah, Emoke; Poto, Laszlo; Hegyi, Peter; Szabo, Imre; Hartmann, Petra; Solymar, Margit; Petervari, Erika; Balasko, Marta; Habon, Tamas; Rumbus, Zoltan; Tenk, Judit; Rostas, Ildiko; Weinberg, Jordan; Romanovsky, Andrej A; Garami, Andras
2018-04-21
Therapeutic hypothermia has been investigated repeatedly as a tool to improve the outcome of severe traumatic brain injury (TBI), but previous clinical trials and meta-analyses found contradictory results. We aimed to determine the effectiveness of therapeutic whole-body hypothermia on the mortality of adult patients with severe TBI by using a novel approach of meta-analysis. We searched the PubMed, EMBASE, and Cochrane Library databases from inception to February 2017. The identified human studies were evaluated regarding statistical, clinical, and methodological designs to ensure inter-study homogeneity. We extracted data on TBI severity, body temperature, mortality, and cooling parameters; then we calculated the cooling index, an integrated measure of therapeutic hypothermia. A forest plot of all identified studies showed no difference in the outcome of TBI between cooled and not cooled patients, but inter-study heterogeneity was high. On the contrary, by meta-analysis of RCTs which were homogeneous with regard to statistical and clinical designs and which precisely reported the cooling protocol, we showed a decreased odds ratio for mortality with therapeutic hypothermia compared to no cooling. As independent factors, milder and longer cooling, and rewarming at < 0.25°C/h, were associated with better outcome. Therapeutic hypothermia was beneficial only if the cooling index (a measure combining the cooling parameters) was sufficiently high. We conclude that high methodological and statistical inter-study heterogeneity could underlie the contradictory results obtained in previous studies. By analyzing methodologically homogeneous studies, we show that cooling improves the outcome of severe TBI and that this beneficial effect depends on certain cooling parameters and on their integrated measure, the cooling index.
Minimal sufficient positive-operator valued measure on a separable Hilbert space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuramochi, Yui, E-mail: kuramochi.yui.22c@st.kyoto-u.ac.jp
We introduce a concept of a minimal sufficient positive-operator valued measure (POVM), which is the least redundant POVM among the POVMs that have the equivalent information about the measured quantum system. Assuming the system Hilbert space to be separable, we show that for a given POVM, a sufficient statistic called a Lehmann-Scheffé-Bahadur statistic induces a minimal sufficient POVM. We also show that every POVM has an equivalent minimal sufficient POVM and that such a minimal sufficient POVM is unique up to relabeling neglecting null sets. We apply these results to discrete POVMs and information conservation conditions proposed by the author.
On sufficient statistics of least-squares superposition of vector sets.
Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M
2015-06-01
The problem of superposition of two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant time) computation of superpositions (and sufficient statistics) of vector sets that are composed from its constituent vector sets under addition or deletion operation, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of the methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
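The additivity the article exploits can be made concrete: the optimal least-squares rotation (and the residual) depends on the correspondences only through a handful of sums, so the statistics of a merged set are just the element-wise sums of the constituents' statistics. Below is a hedged sketch using the familiar SVD (Kabsch) solution; the paper's own derivation and implementation may organize these quantities differently.

```python
import numpy as np

def suff_stats(X, Y):
    """Sufficient statistics for superposing corresponding vector sets X onto Y."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    return {"n": len(X), "sx": X.sum(0), "sy": Y.sum(0),
            "sxy": X.T @ Y, "sxx": (X**2).sum(), "syy": (Y**2).sum()}

def combine(a, b):
    """Statistics of the union of two correspondence sets: just add them."""
    return {k: a[k] + b[k] for k in a}

def superpose_from_stats(s):
    """Optimal rotation and sum-of-squares error from the statistics alone (SVD/Kabsch)."""
    n, mx, my = s["n"], s["sx"] / s["n"], s["sy"] / s["n"]
    H = s["sxy"] - n * np.outer(mx, my)                 # centred cross-correlation matrix
    U, sig, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    e2 = (s["sxx"] - n * mx @ mx) + (s["syy"] - n * my @ my) - 2.0 * (sig[:2].sum() + d * sig[2])
    return R, max(e2, 0.0)

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(10, 3)), rng.normal(size=(6, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Y1, Y2 = X1 @ R_true.T + 0.5, X2 @ R_true.T + 0.5       # rotated + translated copies

R, e2 = superpose_from_stats(combine(suff_stats(X1, Y1), suff_stats(X2, Y2)))
print(np.allclose(R, R_true), round(e2, 8))             # True, ~0 for noise-free data
```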
3D-Printing for Analytical Ultracentrifugation
Desai, Abhiksha; Krynitsky, Jonathan; Pohida, Thomas J.; Zhao, Huaying
2016-01-01
Analytical ultracentrifugation (AUC) is a classical technique of physical biochemistry providing information on size, shape, and interactions of macromolecules from the analysis of their migration in centrifugal fields while free in solution. A key mechanical element in AUC is the centerpiece, a component of the sample cell assembly that is mounted between the optical windows to allow imaging and to seal the sample solution column against high vacuum while exposed to gravitational forces in excess of 300,000 g. For sedimentation velocity it needs to be precisely sector-shaped to allow unimpeded radial macromolecular migration. During the history of AUC a great variety of centerpiece designs have been developed for different types of experiments. Here, we report that centerpieces can now be readily fabricated by 3D printing at low cost, from a variety of materials, and with customized designs. The new centerpieces can exhibit sufficient mechanical stability to withstand the gravitational forces at the highest rotor speeds and be sufficiently precise for sedimentation equilibrium and sedimentation velocity experiments. Sedimentation velocity experiments with bovine serum albumin as a reference molecule in 3D printed centerpieces with standard double-sector design result in sedimentation boundaries virtually indistinguishable from those in commercial double-sector epoxy centerpieces, with sedimentation coefficients well within the range of published values. The statistical error of the measurement is slightly above that obtained with commercial epoxy, but still below 1%. Facilitated by modern open-source design and fabrication paradigms, we believe 3D printed centerpieces and AUC accessories can spawn a variety of improvements in AUC experimental design, efficiency and resource allocation. PMID:27525659
NASA Astrophysics Data System (ADS)
James, Ryan G.; Mahoney, John R.; Crutchfield, James P.
2017-06-01
One of the most basic characterizations of the relationship between two random variables, X and Y, is the value of their mutual information. Unfortunately, calculating it analytically and estimating it empirically are often stymied by the extremely large dimension of the variables. One might hope to replace such a high-dimensional variable by a smaller one that preserves its relationship with the other. It is well known that either X (or Y) can be replaced by its minimal sufficient statistic about Y (or X) while preserving the mutual information. While intuitively reasonable, it is not obvious or straightforward that both variables can be replaced simultaneously. We demonstrate that this is in fact possible: the information X's minimal sufficient statistic preserves about Y is exactly the information that Y's minimal sufficient statistic preserves about X. We call this procedure information trimming. As an important corollary, we consider the case where one variable is a stochastic process' past and the other its future. In this case, the mutual information is the channel transmission rate between the channel's effective states. That is, the past-future mutual information (the excess entropy) is the amount of information about the future that can be predicted using the past. Translating our result about minimal sufficient statistics, this is equivalent to the mutual information between the forward- and reverse-time causal states of computational mechanics. We close by discussing multivariate extensions to this use of minimal sufficient statistics.
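For discrete variables, the minimal sufficient statistic of X about Y is obtained by lumping together the values of X that induce the same conditional distribution over Y; the lumped variable preserves the mutual information exactly. A small numerical sketch of that construction (a generic illustration, not the computational-mechanics machinery of the paper):

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits for a joint distribution given as a 2-D array."""
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

def trim_x(pxy, decimals=10):
    """Lump x-values with identical conditional distributions p(y|x)."""
    cond = pxy / pxy.sum(1, keepdims=True)
    labels = {}
    groups = [labels.setdefault(tuple(np.round(row, decimals)), len(labels)) for row in cond]
    trimmed = np.zeros((len(labels), pxy.shape[1]))
    for g, row in zip(groups, pxy):
        trimmed[g] += row                 # merge probability mass within each equivalence class
    return trimmed

# Joint distribution in which x=0 and x=2 carry identical information about Y
pxy = np.array([[0.10, 0.10],
                [0.05, 0.25],
                [0.20, 0.20],
                [0.02, 0.08]])
pxy /= pxy.sum()

print(mutual_information(pxy), mutual_information(trim_x(pxy)))   # identical values
```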
The least channel capacity for chaos synchronization.
Wang, Mogei; Wang, Xingyuan; Liu, Zhenzhen; Zhang, Huaguang
2011-03-01
Recently researchers have found that a channel with capacity exceeding the Kolmogorov-Sinai entropy of the drive system (h(KS)) is theoretically necessary and sufficient to sustain the unidirectional synchronization to arbitrarily high precision. In this study, we use symbolic dynamics and the automaton reset sequence to distinguish the information that is required in identifying the current drive word and obtaining the synchronization. Then, we show that the least channel capacity that is sufficient to transmit the distinguished information and attain the synchronization of arbitrarily high precision is h(KS). Numerical simulations provide support for our conclusions.
NASA Astrophysics Data System (ADS)
Hu, Zhaoying; Tulevski, George S.; Hannon, James B.; Afzali, Ali; Liehr, Michael; Park, Hongsik
2015-06-01
Carbon nanotubes (CNTs) have been widely studied as a channel material of scaled transistors for high-speed and low-power logic applications. In order to have sufficient drive current, it is widely assumed that CNT-based logic devices will have multiple CNTs in each channel. Understanding the effects of the number of CNTs on device performance can aid in the design of CNT field-effect transistors (CNTFETs). We have fabricated multi-CNT-channel CNTFETs with an 80-nm channel length using precise self-assembly methods. We describe compact statistical models and Monte Carlo simulations to analyze failure probability and the variability of the on-state current and threshold voltage. The results show that multichannel CNTFETs are more resilient to process variation and random environmental fluctuations than single-CNT devices.
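The benefit of multiple CNTs per channel can be illustrated with a compact Monte Carlo along the lines described: each tube is placed successfully with some yield and contributes a randomly varying on-current, so devices with more tubes fail less often and show a smaller relative spread in drive current. The yield and current distribution below are invented for illustration and are not the paper's extracted parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def device_stats(n_cnt_per_channel, n_devices=100_000,
                 p_working=0.9, mu_ua=10.0, sigma_rel=0.3):
    """Failure probability and on-current variability for multi-CNT channels."""
    working = rng.random((n_devices, n_cnt_per_channel)) < p_working
    # Per-tube on-currents (uA) with log-normal device-to-device variation
    i_tube = rng.lognormal(mean=np.log(mu_ua), sigma=sigma_rel,
                           size=(n_devices, n_cnt_per_channel))
    i_on = (working * i_tube).sum(axis=1)
    fail = np.mean(i_on == 0.0)                        # no conducting tube at all
    cv = i_on[i_on > 0].std() / i_on[i_on > 0].mean()  # spread among working devices
    return fail, cv

for n in (1, 2, 4, 8):
    fail, cv = device_stats(n)
    print(f"{n} CNTs/channel: P(fail) = {fail:.4f}, on-current CV = {cv:.2f}")
```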
Quantitative analysis of the correlations in the Boltzmann-Grad limit for hard spheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pulvirenti, M.
2014-12-09
In this contribution I consider the problem of the validity of the Boltzmann equation for a system of hard spheres in the Boltzmann-Grad limit. I briefly review the results available nowadays, with a particular emphasis on the celebrated validity theorem of Lanford. Finally I present some recent results, obtained in collaboration with S. Simonella, concerning a quantitative analysis of the propagation of chaos. More precisely, we introduce a quantity (the correlation error) measuring how far a j-particle rescaled correlation function at a (sufficiently small) time t is from full statistical independence. Roughly speaking, a correlation error of order k measures (in the context of the BBGKY hierarchy) the event in which k tagged particles form a recolliding group.
NASA Astrophysics Data System (ADS)
Wang, Haipeng; Chen, Jianhui; Zhang, Shengda; Zhang, David D.; Wang, Zongli; Xu, Qinghai; Chen, Shengqian; Wang, Shijin; Kang, Shichang; Chen, Fahu
2018-03-01
Long-term, high-resolution temperature records which combine an unambiguous proxy and precise dating are rare in China. In addition, the societal implications of past temperature change on a regional scale have not been sufficiently assessed. Here, based on the modern relationship between chironomids and temperature, we use fossil chironomid assemblages in a precisely dated sediment core from Gonghai Lake to explore temperature variability during the past 4000 years in northern China. Subsequently, we address the possible regional societal implications of temperature change through a statistical analysis of the occurrence of wars. Our results show the following. (1) The mean annual temperature (TANN) was relatively high during 4000-2700 cal yr BP, decreased gradually during 2700-1270 cal yr BP and then fluctuated during the last 1270 years. (2) A cold event in the Period of Disunity, the Sui-Tang Warm Period (STWP), the Medieval Warm Period (MWP) and the Little Ice Age (LIA) can all be recognized in the paleotemperature record, as well as in many other temperature reconstructions in China. This suggests that our chironomid-inferred temperature record for the Gonghai Lake region is representative. (3) Local wars in Shanxi Province, documented in the historical literature during the past 2700 years, are statistically significantly correlated with changes in temperature, and the relationship is a good example of the potential societal implications of temperature change on a regional scale.
Quantitative topographic differentiation of the neonatal EEG.
Paul, Karel; Krajca, Vladimír; Roth, Zdenek; Melichar, Jan; Petránek, Svojmil
2006-09-01
To test the discriminatory topographic potential of a new method of automatic EEG analysis in neonates. A quantitative description of the neonatal EEG can contribute to the objective assessment of the functional state of the brain, and may improve the precision of diagnosing cerebral dysfunctions manifested by 'disorganization', 'dysrhythmia' or 'dysmaturity'. 21 healthy, full-term newborns were examined polygraphically during sleep (EEG with 8 referential derivations, respiration, ECG, EOG, EMG). From each EEG record, two 5-min samples (one from the middle of quiet sleep, the other from the middle of active sleep) were subjected to automatic analysis and were described by 13 variables: spectral features and features describing the shape and variability of the signal. The data from individual infants were averaged and the number of variables was reduced by factor analysis. All factors identified by factor analysis were statistically significantly influenced by the location of derivation. A large number of statistically significant differences were also established when comparing the effects of individual derivations on each of the 13 measured variables. Both spectral features and features describing the shape and variability of the signal largely account for the topographic differentiation of the neonatal EEG. The presented method of automatic EEG analysis is capable of assessing the topographic characteristics of the neonatal EEG; it is adequately sensitive and describes the neonatal electroencephalogram with sufficient precision. The discriminatory capability of the method represents a promise for its application in clinical practice.
Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G
2016-05-09
The aim of this study was to evaluate the suitability of various statistics as measures of the degree of experimental precision in trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated in a randomized block design with four replications. Ten statistics estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy, the F-test value for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy, the F-test value for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) for evaluating experimental precision in trials with cowpea genotypes.
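Several of the statistics compared in the study can be computed directly from the ANOVA of a randomized complete block trial, using the conventional formulas selective accuracy = sqrt(1 - 1/F), heritability of genotype means = (MSg - MSe)/MSg, and CV% = 100*sqrt(MSe)/mean. The sketch below applies them to simulated yields; it illustrates the statistics and is not a re-analysis of the cowpea data.

```python
import numpy as np

rng = np.random.default_rng(5)
g, r = 20, 4                                   # genotypes, blocks (replications)
geno = rng.normal(0.0, 150.0, size=g)          # genotypic effects (kg/ha, invented)
block = rng.normal(0.0, 80.0, size=r)
y = 1200.0 + geno[:, None] + block[None, :] + rng.normal(0.0, 120.0, size=(g, r))

grand = y.mean()
ss_geno = r * ((y.mean(axis=1) - grand) ** 2).sum()
ss_block = g * ((y.mean(axis=0) - grand) ** 2).sum()
ss_error = ((y - grand) ** 2).sum() - ss_geno - ss_block

ms_g = ss_geno / (g - 1)
ms_e = ss_error / ((g - 1) * (r - 1))
F = ms_g / ms_e
selective_accuracy = np.sqrt(max(0.0, 1.0 - 1.0 / F))
heritability_means = max(0.0, (ms_g - ms_e) / ms_g)
cv_percent = 100.0 * np.sqrt(ms_e) / grand

print(f"F = {F:.2f}, selective accuracy = {selective_accuracy:.3f}, "
      f"H2(means) = {heritability_means:.3f}, CV = {cv_percent:.1f}%")
```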
Cubison, M. J.; Jimenez, J. L.
2015-06-05
Least-squares fitting of overlapping peaks is often needed to separately quantify ions in high-resolution mass spectrometer data. A statistical simulation approach is used to assess the statistical precision of the retrieved peak intensities. The sensitivity of the fitted peak intensities to statistical noise due to ion counting is probed for synthetic data systems consisting of two overlapping ion peaks whose positions are pre-defined and fixed in the fitting procedure. The fitted intensities are sensitive to imperfections in the m/Q calibration. These propagate as a limiting precision in the fitted intensities that may greatly exceed the precision arising from counting statistics. The precision on the fitted peak intensity falls into one of three regimes. In the "counting-limited regime" (regime I), above a peak separation χ ~ 2 to 3 half-widths at half-maximum (HWHM), the intensity precision is similar to that due to counting error for an isolated ion. For smaller χ and higher ion counts (~ 1000 and higher), the intensity precision rapidly degrades as the peak separation is reduced ("calibration-limited regime", regime II). Alternatively, for χ < 1.6 but lower ion counts (e.g. 10–100), the intensity precision is dominated by the additional ion count noise from the overlapping ion and is not affected by the imprecision in the m/Q calibration ("overlapping-limited regime", regime III). The transition between the counting and m/Q calibration-limited regimes is shown to be weakly dependent on resolving power and data spacing and can thus be approximated by a simple parameterisation based only on peak intensity ratios and separation. A simple equation can be used to find potentially problematic ion pairs when evaluating results from fitted spectra containing many ions. Longer integration times can improve the precision in regimes I and III, but a given ion pair can only be moved out of regime II through increased spectrometer resolving power. As a result, studies presenting data obtained from least-squares fitting procedures applied to mass spectral peaks should explicitly consider these limits on statistical precision.
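The simulation approach described can be reproduced in miniature: generate two Gaussian ion peaks with fixed, known positions, add Poisson counting noise, fit the two amplitudes by linear least squares, and repeat to measure the scatter of the retrieved intensities as the separation shrinks. The sketch below probes only the counting-limited behaviour (regimes I and III); modelling the m/Q-calibration imprecision of regime II would additionally require perturbing the assumed peak positions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitted_intensity_scatter(separation_hwhm, counts=(1000, 1000), n_trials=2000):
    """Relative std of the fitted intensity of peak 1 for a given peak separation."""
    hwhm = 1.0
    sigma = hwhm / np.sqrt(2.0 * np.log(2.0))
    x = np.linspace(-8.0, 8.0, 400)
    profiles = np.stack([np.exp(-0.5 * ((x - 0.0) / sigma) ** 2),
                         np.exp(-0.5 * ((x - separation_hwhm * hwhm) / sigma) ** 2)])
    profiles /= profiles.sum(axis=1, keepdims=True)       # unit-area peak shapes
    truth = counts[0] * profiles[0] + counts[1] * profiles[1]
    fitted = []
    for _ in range(n_trials):
        noisy = rng.poisson(truth)
        # Linear least squares for the two amplitudes with fixed peak positions
        amps, *_ = np.linalg.lstsq(profiles.T, noisy, rcond=None)
        fitted.append(amps[0])
    return np.std(fitted) / counts[0]

for sep in (4.0, 2.0, 1.0, 0.5):
    print(f"separation = {sep:.1f} HWHM -> relative intensity scatter = "
          f"{fitted_intensity_scatter(sep):.3f}")
```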
Achieving the Heisenberg limit in quantum metrology using quantum error correction.
Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang
2018-01-08
Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.
van Elk, Michiel; Matzke, Dora; Gronau, Quentin F.; Guan, Maime; Vandekerckhove, Joachim; Wagenmakers, Eric-Jan
2015-01-01
According to a recent meta-analysis, religious priming has a positive effect on prosocial behavior (Shariff et al., 2015). We first argue that this meta-analysis suffers from a number of methodological shortcomings that limit the conclusions that can be drawn about the potential benefits of religious priming. Next we present a re-analysis of the religious priming data using two different meta-analytic techniques. A Precision-Effect Testing–Precision-Effect-Estimate with Standard Error (PET-PEESE) meta-analysis suggests that the effect of religious priming is driven solely by publication bias. In contrast, an analysis using Bayesian bias correction suggests the presence of a religious priming effect, even after controlling for publication bias. These contradictory statistical results demonstrate that meta-analytic techniques alone may not be sufficiently robust to firmly establish the presence or absence of an effect. We argue that a conclusive resolution of the debate about the effect of religious priming on prosocial behavior – and about theoretically disputed effects more generally – requires a large-scale, preregistered replication project, which we consider to be the sole remedy for the adverse effects of experimenter bias and publication bias. PMID:26441741
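PET-PEESE is, mechanically, a pair of weighted regressions of study effect sizes on their standard errors (PET) or variances (PEESE), with the intercept read off as the bias-corrected effect. A compact sketch with invented study-level data, not the religious-priming dataset:

```python
import numpy as np

def pet_peese(effects, ses):
    """Return the PET and PEESE intercepts from inverse-variance weighted regressions."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2
    def wls_intercept(predictor):
        X = np.column_stack([np.ones_like(predictor), predictor])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effects)
        return beta[0]
    return wls_intercept(ses), wls_intercept(ses**2)      # PET, PEESE

rng = np.random.default_rng(11)
n_studies = 40
ses = rng.uniform(0.05, 0.40, size=n_studies)
# Simulated small-study (publication-bias-like) pattern: observed effects grow with SE
effects = 0.0 + 1.2 * ses + rng.normal(0.0, ses)

pet, peese = pet_peese(effects, ses)
print(f"naive mean effect = {effects.mean():.3f}, PET intercept = {pet:.3f}, "
      f"PEESE intercept = {peese:.3f}")
```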
PRECISE:PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare
Chen, Feng; Wang, Shuang; Mohammed, Noman; Cheng, Samuel; Jiang, Xiaoqian
2015-01-01
Quality improvement (QI) requires systematic and continuous efforts to enhance healthcare services. A healthcare provider might wish to compare local statistics with those from other institutions in order to identify problems and develop intervention to improve the quality of care. However, the sharing of institution information may be deterred by institutional privacy as publicizing such statistics could lead to embarrassment and even financial damage. In this article, we propose a PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare (PRECISE), which aims at enabling cross-institution comparison of healthcare statistics while protecting privacy. The proposed framework relies on a set of state-of-the-art cryptographic protocols including homomorphic encryption and Yao’s garbled circuit schemes. By securely pooling data from different institutions, PRECISE can rank the encrypted statistics to facilitate QI among participating institutes. We conducted experiments using MIMIC II database and demonstrated the feasibility of the proposed PRECISE framework. PMID:26146645
PRECISE:PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare.
Chen, Feng; Wang, Shuang; Mohammed, Noman; Cheng, Samuel; Jiang, Xiaoqian
2014-10-01
Quality improvement (QI) requires systematic and continuous efforts to enhance healthcare services. A healthcare provider might wish to compare local statistics with those from other institutions in order to identify problems and develop interventions to improve the quality of care. However, the sharing of institution information may be deterred by institutional privacy concerns, as publicizing such statistics could lead to embarrassment and even financial damage. In this article, we propose a PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare (PRECISE), which aims at enabling cross-institution comparison of healthcare statistics while protecting privacy. The proposed framework relies on a set of state-of-the-art cryptographic protocols including homomorphic encryption and Yao's garbled circuit schemes. By securely pooling data from different institutions, PRECISE can rank the encrypted statistics to facilitate QI among participating institutes. We conducted experiments using the MIMIC II database and demonstrated the feasibility of the proposed PRECISE framework.
Accessible Information Without Disturbing Partially Known Quantum States on a von Neumann Algebra
NASA Astrophysics Data System (ADS)
Kuramochi, Yui
2018-04-01
This paper addresses the problem of how much information we can extract without disturbing a statistical experiment, which is a family of partially known normal states on a von Neumann algebra. We define the classical part of a statistical experiment as the restriction of the equivalent minimal sufficient statistical experiment to the center of the outcome space, which, in the case of density operators on a Hilbert space, corresponds to the classical probability distributions appearing in the maximal decomposition by Koashi and Imoto (Phys. Rev. A 66, 022318, 2002). We show that we can access by a Schwarz or completely positive channel at most the classical part of a statistical experiment if we do not disturb the states. We apply this result to the broadcasting problem of a statistical experiment. We also show that the classical part of the direct product of statistical experiments is the direct product of the classical parts of the statistical experiments. The proof of the latter result is based on the theorem that the direct product of minimal sufficient statistical experiments is also minimal sufficient.
Scaling properties of multiscale equilibration
NASA Astrophysics Data System (ADS)
Detmold, W.; Endres, M. G.
2018-04-01
We investigate the lattice spacing dependence of the equilibration time for a recently proposed multiscale thermalization algorithm for Markov chain Monte Carlo simulations. The algorithm uses a renormalization-group matched coarse lattice action and prolongation operation to rapidly thermalize decorrelated initial configurations for evolution using a corresponding target lattice action defined at a finer scale. Focusing on nontopological long-distance observables in pure SU(3) gauge theory, we provide quantitative evidence that the slow modes of the Markov process, which provide the dominant contribution to the rethermalization time, have a suppressed contribution toward the continuum limit, despite their associated timescales increasing. Based on these numerical investigations, we conjecture that the prolongation operation used herein will produce ensembles that are indistinguishable from the target fine-action distribution for a sufficiently fine coupling at a given level of statistical precision, thereby eliminating the cost of rethermalization.
Are patient specific meshes required for EIT head imaging?
Jehl, Markus; Aristovich, Kirill; Faulkner, Mayo; Holder, David
2016-06-01
Head imaging with electrical impedance tomography (EIT) is usually done with time-differential measurements, to reduce time-invariant modelling errors. Previous research suggested that more accurate head models improved image quality, but no thorough analysis has been done on the required accuracy. We propose a novel pipeline for creation of precise head meshes from magnetic resonance imaging and computed tomography scans, which was applied to four different heads. Voltages were simulated on all four heads for perturbations of different magnitude, haemorrhage and ischaemia, in five different positions and for three levels of instrumentation noise. Statistical analysis showed that reconstructions on the correct mesh were on average 25% better than on the other meshes. However, the stroke detection rates were not improved. We conclude that a generic head mesh is sufficient for monitoring patients for secondary strokes following head trauma.
Anti-aliasing techniques in photon-counting depth imaging using GHz clock rates
NASA Astrophysics Data System (ADS)
Krichel, Nils J.; McCarthy, Aongus; Collins, Robert J.; Buller, Gerald S.
2010-04-01
Single-photon detection technologies in conjunction with low laser illumination powers allow for the eye-safe acquisition of time-of-flight range information on non-cooperative target surfaces. We previously presented a photon-counting depth imaging system designed for the rapid acquisition of three-dimensional target models by steering a single scanning pixel across the field angle of interest. To minimise the per-pixel dwelling times required to obtain sufficient photon statistics for accurate distance resolution, periodic illumination at multi-MHz repetition rates was applied. Modern time-correlated single-photon counting (TCSPC) hardware allowed for depth measurements with sub-mm precision. Resolving the absolute target range with a fast periodic signal is only possible at sufficiently short distances: if the round-trip time towards an object is extended beyond the timespan between two trigger pulses, the return signal cannot be assigned to an unambiguous range value. Whereas constructing a precise depth image based on relative results may still be possible, problems emerge for large or unknown pixel-by-pixel separations or in applications with a wide range of possible scene distances. We introduce a technique to avoid range ambiguity effects in time-of-flight depth imaging systems at high average pulse rates. A long pseudo-random bitstream is used to trigger the illuminating laser. A cyclic, fast-Fourier supported analysis algorithm is used to search for the pattern within return photon events. We demonstrate this approach at base clock rates of up to 2 GHz with varying pattern lengths, allowing for unambiguous distances of several kilometers. Scans at long stand-off distances and of scenes with large pixel-to-pixel range differences are presented. Numerical simulations are performed to investigate the relative merits of the technique.
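The FFT-supported pattern search can be illustrated with a short numerical sketch (a toy model, not the authors' implementation): a pseudo-random on/off pattern triggers the laser, sparse photon returns are simulated with a round-trip delay longer than any single pulse period, and the circular cross-correlation between returns and pattern, computed via FFT, recovers the unambiguous delay. All rates and lengths below are made-up parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pseudo-random on/off trigger pattern for the laser (assumed length and rates).
n_bins = 1 << 14                              # pattern length in clock bins
pattern = rng.integers(0, 2, n_bins).astype(float)

# Sparse photon returns from a target whose round-trip delay exceeds any single
# pulse period, plus a uniform Poisson-like background.
true_delay = 12345                            # bins; the quantity to recover
signal_rate, bg_rate = 0.02, 0.005            # detection probabilities per bin
returns = (rng.random(n_bins) < (np.roll(pattern, true_delay) * signal_rate + bg_rate)).astype(float)

# Circular cross-correlation via FFT: the peak location gives the unambiguous delay.
xcorr = np.fft.irfft(np.fft.rfft(returns) * np.conj(np.fft.rfft(pattern)), n=n_bins)
print(f"true delay {true_delay} bins, estimated delay {int(np.argmax(xcorr))} bins")
```

Because the pseudo-random pattern has a sharply peaked autocorrelation, the correlation maximum identifies the delay modulo the full pattern length rather than modulo a single pulse period.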
On the statistical assessment of classifiers using DNA microarray data
Ancona, N; Maglietta, R; Piepoli, A; D'Addabbo, A; Cotugno, R; Savino, M; Liuni, S; Carella, M; Pesole, G; Perri, F
2006-01-01
Background In this paper we present a method for the statistical assessment of cancer predictors which make use of gene expression profiles. The methodology is applied to a new data set of microarray gene expression data collected in Casa Sollievo della Sofferenza Hospital, Foggia – Italy. The data set is made up of normal (22) and tumor (25) specimens extracted from 25 patients affected by colon cancer. We propose to give answers to some questions which are relevant for the automatic diagnosis of cancer such as: Is the size of the available data set sufficient to build accurate classifiers? What is the statistical significance of the associated error rates? In what ways can accuracy be considered dependent on the adopted classification scheme? How many genes are correlated with the pathology and how many are sufficient for an accurate colon cancer classification? The method we propose answers these questions whilst avoiding the potential pitfalls hidden in the analysis and interpretation of microarray data. Results We estimate the generalization error, evaluated through the Leave-K-Out Cross Validation error, for three different classification schemes by varying the number of training examples and the number of genes used. The statistical significance of the error rate is measured by using a permutation test. We provide a statistical analysis in terms of the frequencies of the genes involved in the classification. Using the whole set of genes, we found that the Weighted Voting Algorithm (WVA) classifier learns the distinction between normal and tumor specimens with 25 training examples, providing e = 21% (p = 0.045) as an error rate. This remains constant even when the number of examples increases. Moreover, Regularized Least Squares (RLS) and Support Vector Machines (SVM) classifiers can learn with only 15 training examples, with an error rate of e = 19% (p = 0.035) and e = 18% (p = 0.037) respectively. Moreover, the error rate decreases as the training set size increases, reaching its best performances with 35 training examples. In this case, RLS and SVM have error rates of e = 14% (p = 0.027) and e = 11% (p = 0.019). Concerning the number of genes, we found about 6000 genes (p < 0.05) correlated with the pathology, resulting from the signal-to-noise statistic. Moreover, the performances of RLS and SVM classifiers do not change when 74% of the genes are used; they progressively degrade, reaching e = 16% (p < 0.05), when only 2 genes are employed. The biological relevance of a set of genes determined by our statistical analysis and the major roles they play in colorectal tumorigenesis are discussed. Conclusions The method proposed provides statistically significant answers to precise questions relevant for the diagnosis and prognosis of cancer. We found that, with as few as 15 examples, it is possible to train statistically significant classifiers for colon cancer diagnosis. As for the definition of the number of genes sufficient for a reliable classification of colon cancer, our results suggest that it depends on the accuracy required. PMID:16919171
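As a sketch of the permutation-test idea used above to attach p-values to cross-validated error rates (illustrative only: a simple nearest-centroid classifier on synthetic data stands in for the WVA/RLS/SVM classifiers and the colon-cancer data of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def loo_error(X, y):
    """Leave-one-out error of a nearest-centroid classifier (illustrative stand-in
    for the classification schemes assessed in the paper)."""
    errors = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        errors += pred != y[i]
    return errors / len(y)

# Synthetic "expression" data: 40 samples, 200 genes, a handful informative.
n, p = 40, 200
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, p))
X[y == 1, :5] += 1.0                          # 5 genes carry the class signal

observed = loo_error(X, y)

# Permutation test: re-estimate the error after shuffling the class labels.
n_perm = 200
null_errors = np.array([loo_error(X, rng.permutation(y)) for _ in range(n_perm)])
p_value = (np.sum(null_errors <= observed) + 1) / (n_perm + 1)
print(f"LOO error = {observed:.2f}, permutation p-value = {p_value:.3f}")
```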
High-precision arithmetic in mathematical physics
Bailey, David H.; Borwein, Jonathan M.
2015-05-12
For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation, in the context of mathematical physics, and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
NASA Astrophysics Data System (ADS)
Rumsey, Ian C.; Walker, John T.
2016-06-01
The dry component of total nitrogen and sulfur atmospheric deposition remains uncertain. The lack of measurements of sufficient chemical speciation and temporal extent makes it difficult to develop accurate mass budgets, and sufficient process-level detail is not available to improve current air-surface exchange models. Over the past decade, significant advances have been made in the development of continuous air sampling measurement techniques, resulting in instruments of sufficient sensitivity and temporal resolution to directly quantify air-surface exchange of nitrogen and sulfur compounds. However, their applicability is generally restricted to only one or a few of the compounds within the deposition budget. Here, the performance of the Monitor for AeRosols and GAses in ambient air (MARGA 2S), a commercially available online ion-chromatography-based analyzer, is characterized for the first time as applied for air-surface exchange measurements of HNO3, NH3, NH4+, NO3-, SO2 and SO42-. Analytical accuracy and precision are assessed under field conditions. Chemical concentration gradient precision is determined at the same sampling site. Flux uncertainty measured by the aerodynamic gradient method is determined for a representative 3-week period in fall 2012 over a grass field. Analytical precision and chemical concentration gradient precision were found to compare favorably with previous studies. During the 3-week period, percentages of hourly chemical concentration gradients greater than the corresponding chemical concentration gradient detection limit were 86, 42, 82, 73, 74 and 69 % for NH3, NH4+, HNO3, NO3-, SO2 and SO42-, respectively. As expected, percentages were lowest for aerosol species, owing to their relatively low deposition velocities and correspondingly smaller gradients relative to gas phase species. Relative hourly median flux uncertainties were 31, 121, 42, 43, 67 and 56 % for NH3, NH4+, HNO3, NO3-, SO2 and SO42-, respectively. Flux uncertainty is dominated by uncertainty in the chemical concentration gradients during the day, but uncertainty in the chemical concentration gradients and transfer velocity are of the same order at night. Results show the instrument is sufficiently precise for flux gradient applications.
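For readers unfamiliar with the aerodynamic gradient method, the following simplified sketch shows how uncertainty in a concentration gradient and in the transfer velocity propagates into a flux uncertainty; it assumes neutral stability, omits stability corrections, and uses made-up numbers rather than the study's values.

```python
import numpy as np

# Illustrative (made-up) values, not those reported in the study.
k = 0.4                    # von Karman constant
u_star = 0.25              # friction velocity (m/s) and its uncertainty
sigma_u_star = 0.05
z1, z2 = 0.5, 2.0          # measurement heights (m)
c1, c2 = 2.40, 2.10        # NH3 concentrations (ug/m^3) at z1, z2
sigma_dc = 0.10            # uncertainty of the concentration difference (ug/m^3)

dc = c2 - c1
flux = -k * u_star * dc / np.log(z2 / z1)      # aerodynamic gradient method, neutral stability

# First-order propagation: gradient and transfer-velocity terms add in quadrature.
rel_unc = np.sqrt((sigma_dc / abs(dc)) ** 2 + (sigma_u_star / u_star) ** 2)
print(f"flux = {flux:.3f} ug m-2 s-1, relative uncertainty = {100 * rel_unc:.0f}%")
```

With these toy numbers the gradient term dominates, mirroring the daytime behaviour described in the abstract.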
Using statistical equivalence testing logic and mixed model theory, an approach has been developed that extends the work of Stork et al. (JABES, 2008) to define sufficient similarity in dose-response for chemical mixtures containing the same chemicals with different ratios ...
ERIC Educational Resources Information Center
Meyer, Heinz-Dieter
2017-01-01
Quantitative measures of student performance are increasingly used as proxies of educational quality and teacher ability. Such assessments assume that the quality of educational practices can be unambiguously quantitatively measured and that such measures are sufficiently precise and robust to be aggregated into policy-relevant rankings like…
Precision cosmology with time delay lenses: High resolution imaging requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Xiao -Lei; Treu, Tommaso; Agnello, Adriano
Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as "Einstein Rings" in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(−γ') for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainties. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems that will be discovered by current and future surveys, targeted follow-up will be required. Furthermore, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation Adaptive Optics System, and TMT, will only be of order a few minutes per system, thus making the follow-up of hundreds of systems a practical and efficient cosmological probe.
Precision cosmology with time delay lenses: high resolution imaging requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Xiao-Lei; Liao, Kai; Treu, Tommaso
Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as "Einstein Rings" in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(−γ') for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainties. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems that will be discovered by current and future surveys, targeted follow-up will be required. However, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation Adaptive Optics System, and TMT, will only be of order a few minutes per system, thus making the follow-up of hundreds of systems a practical and efficient cosmological probe.
C.W. Woodall
2008-01-01
Increment cores are invaluable for assessing tree attributes such as inside bark diameter, radial growth, and sapwood area. However, because trees accrue growth and sapwood unevenly around their pith, tree attributes derived from one increment core may not provide sufficient precision for forest management/research activities. To assess the variability in a tree's...
Egbewale, Bolaji E; Lewis, Martyn; Sim, Julius
2014-04-09
Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. 126 hypothetical trial scenarios were evaluated (126,000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. Apparently greater power of ANOVA and CSA at certain imbalances is achieved in respect of a biased treatment effect. Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power.
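The qualitative behaviour reported above can be reproduced with a short simulation (a simplified sketch, not the authors' 126-scenario study): generate correlated pretest-posttest data with a deliberate baseline imbalance, then compare the three estimators. ANOVA and CSA are pulled in opposite directions by the imbalance, while ANCOVA stays centred on the true effect with a smaller spread.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_trial(n=100, rho=0.5, effect=0.3, imbalance=0.3):
    """One two-arm trial with pretest-posttest correlation rho, a true treatment
    effect, and a deliberate baseline imbalance (all in SD units; made-up values)."""
    group = np.repeat([0, 1], n)
    pre = rng.normal(size=2 * n) + imbalance * group
    post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(size=2 * n) + effect * group
    return group, pre, post

def estimates(group, pre, post):
    anova = post[group == 1].mean() - post[group == 0].mean()          # posttest only
    csa = anova - (pre[group == 1].mean() - pre[group == 0].mean())    # change scores
    X = np.column_stack([np.ones_like(pre), group, pre])               # ANCOVA design
    ancova = np.linalg.lstsq(X, post, rcond=None)[0][1]                # group coefficient
    return anova, csa, ancova

results = np.array([estimates(*simulate_trial()) for _ in range(2000)])
for name, est in zip(["ANOVA", "CSA", "ANCOVA"], results.T):
    print(f"{name:7s} mean estimate = {est.mean():+.3f} (true effect = +0.300), SD = {est.std():.3f}")
```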
2014-01-01
Background Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. Methods 126 hypothetical trial scenarios were evaluated (126 000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Results Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. Apparently greater power of ANOVA and CSA at certain imbalances is achieved in respect of a biased treatment effect. Conclusions Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power. PMID:24712304
The statistical challenge of constraining the low-mass IMF in Local Group dwarf galaxies
NASA Astrophysics Data System (ADS)
El-Badry, Kareem; Weisz, Daniel R.; Quataert, Eliot
2017-06-01
We use Monte Carlo simulations to explore the statistical challenges of constraining the characteristic mass (mc) and width (σ) of a lognormal sub-solar initial mass function (IMF) in Local Group dwarf galaxies using direct star counts. For a typical Milky Way (MW) satellite (MV = -8), jointly constraining mc and σ to a precision of ≲ 20 per cent requires that observations be complete to ≲ 0.2 M⊙, if the IMF is similar to the MW IMF. A similar statistical precision can be obtained if observations are only complete down to 0.4 M⊙, but this requires measurement of nearly 100× more stars, and thus, a significantly more massive satellite (MV ˜ -12). In the absence of sufficiently deep data to constrain the low-mass turnover, it is common practice to fit a single-sloped power law to the low-mass IMF, or to fit mc for a lognormal while holding σ fixed. We show that the former approximation leads to best-fitting power-law slopes that vary with the mass range observed and can largely explain existing claims of low-mass IMF variations in MW satellites, even if satellite galaxies have the same IMF as the MW. In addition, fixing σ during fitting leads to substantially underestimated uncertainties in the recovered value of mc (by a factor of ˜4 for typical observations). If the IMFs of nearby dwarf galaxies are lognormal and do vary, observations must reach down to ˜mc in order to robustly detect these variations. The high-sensitivity, near-infrared capabilities of the James Webb Space Telescope and Wide-Field Infrared Survey Telescope have the potential to dramatically improve constraints on the low-mass IMF. We present an efficient observational strategy for using these facilities to measure the IMFs of Local Group dwarf galaxies.
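A toy version of the fitting problem, assuming a lognormal IMF observed only above a completeness limit, can be written as a truncated maximum-likelihood fit (parameter values and the log-mass convention are illustrative assumptions, not those of the paper's simulations):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

# "True" IMF: lognormal with characteristic mass mc and width sigma (assumed values;
# sigma here is the width in natural-log mass), observed only between a completeness
# limit and an upper sub-solar cut.
mc_true, sigma_true = 0.22, 0.57
m_lo, m_hi = 0.2, 0.8                              # solar masses

masses = lognorm.rvs(s=sigma_true, scale=mc_true, size=20000, random_state=7)
masses = masses[(masses > m_lo) & (masses < m_hi)]

def neg_loglike(theta):
    """Lognormal likelihood truncated to the observed mass window."""
    mc, sigma = theta
    if mc <= 0 or sigma <= 0:
        return np.inf
    norm = lognorm.cdf(m_hi, s=sigma, scale=mc) - lognorm.cdf(m_lo, s=sigma, scale=mc)
    return -np.sum(lognorm.logpdf(masses, s=sigma, scale=mc) - np.log(norm))

fit = minimize(neg_loglike, x0=[0.3, 0.4], method="Nelder-Mead")
mc_hat, sigma_hat = fit.x
print(f"{masses.size} stars above completeness: "
      f"mc = {mc_hat:.3f}, sigma = {sigma_hat:.3f} (truth {mc_true}, {sigma_true})")
```

Lowering the completeness limit adds stars near the turnover and is what tightens the joint constraint on mc and σ, as the abstract describes.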
Challenges in the optical system of GAIA
NASA Astrophysics Data System (ADS)
Le Poole, Rudolf S.
2017-11-01
The precision aimed at by ESA's Astrometry and Radial Velocity mission GAIA surpasses that of the successful HIPPARCOS mission by more than 2 orders of magnitude, while at the same time increasing the number of objects 10000 times. This overwhelming increase in performance (statistical weight increased by 8 orders of magnitude) is achieved by insisting on a full description in terms of photon shot noise as the fundamental limiting factor. Yet such measurements refer to wavefront topography that must be understood at the level of better than 100 picometers, in an optical system a few meters across. Obviously such understanding relies heavily on the expected stability, and chromatic effects also are of dominant importance, requiring stellar spectral energy distributions to be determined. It is fascinating that the experience of HIPPARCOS can indeed generate sufficient confidence for these performance specifications to be within reach. Elaborating the design specifications and tolerances, I hope to convince you of GAIA's imminent success.
Targeted On-Demand Team Performance App Development
2016-10-01
... from three sites; 6) preliminary analysis indicates a larger than estimated effect size and the study is sufficiently powered for generalizable outcomes ... statistical analyses, and examine any resulting qualitative data for trends or connections to statistical outcomes ...
ERIC Educational Resources Information Center
Andrich, David
2016-01-01
This article reproduces correspondence between Georg Rasch of The University of Copenhagen and Benjamin Wright of The University of Chicago in the period from January 1966 to July 1967. This correspondence reveals their struggle to operationalize a unidimensional measurement model with sufficient statistics for responses in a set of ordered…
Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, crossvalidation is commonly used; however, we show that crossvalidation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
A canonical theory of dynamic decision-making.
Fox, John; Cooper, Richard P; Glasspool, David W
2013-01-01
Decision-making behavior is studied in many very different fields, from medicine and economics to psychology and neuroscience, with major contributions from mathematics and statistics, computer science, AI, and other technical disciplines. However the conceptualization of what decision-making is and methods for studying it vary greatly and this has resulted in fragmentation of the field. A theory that can accommodate various perspectives may facilitate interdisciplinary working. We present such a theory in which decision-making is articulated as a set of canonical functions that are sufficiently general to accommodate diverse viewpoints, yet sufficiently precise that they can be instantiated in different ways for specific theoretical or practical purposes. The canons cover the whole decision cycle, from the framing of a decision based on the goals, beliefs, and background knowledge of the decision-maker to the formulation of decision options, establishing preferences over them, and making commitments. Commitments can lead to the initiation of new decisions and any step in the cycle can incorporate reasoning about previous decisions and the rationales for them, and lead to revising or abandoning existing commitments. The theory situates decision-making with respect to other high-level cognitive capabilities like problem solving, planning, and collaborative decision-making. The canonical approach is assessed in three domains: cognitive and neuropsychology, artificial intelligence, and decision engineering.
A Canonical Theory of Dynamic Decision-Making
Fox, John; Cooper, Richard P.; Glasspool, David W.
2012-01-01
Decision-making behavior is studied in many very different fields, from medicine and economics to psychology and neuroscience, with major contributions from mathematics and statistics, computer science, AI, and other technical disciplines. However the conceptualization of what decision-making is and methods for studying it vary greatly and this has resulted in fragmentation of the field. A theory that can accommodate various perspectives may facilitate interdisciplinary working. We present such a theory in which decision-making is articulated as a set of canonical functions that are sufficiently general to accommodate diverse viewpoints, yet sufficiently precise that they can be instantiated in different ways for specific theoretical or practical purposes. The canons cover the whole decision cycle, from the framing of a decision based on the goals, beliefs, and background knowledge of the decision-maker to the formulation of decision options, establishing preferences over them, and making commitments. Commitments can lead to the initiation of new decisions and any step in the cycle can incorporate reasoning about previous decisions and the rationales for them, and lead to revising or abandoning existing commitments. The theory situates decision-making with respect to other high-level cognitive capabilities like problem solving, planning, and collaborative decision-making. The canonical approach is assessed in three domains: cognitive and neuropsychology, artificial intelligence, and decision engineering. PMID:23565100
2015-05-12
Deficiencies That Affect the Reliability of Estimates: Statistical Precision Could Be Improved ... statistical precision of improper payments estimates in seven of the DoD payment programs through the use of stratified sample designs. DoD improper ... payments not subject to sampling, which made the results statistically invalid. We made a recommendation to correct this problem in a previous report.
NASA Astrophysics Data System (ADS)
Fisher, W. P., Jr.; Elbaum, B.; Coulter, A.
2010-07-01
Reliability coefficients indicate the proportion of total variance attributable to differences among measures separated along a quantitative continuum by a testing, survey, or assessment instrument. Reliability is usually considered to be influenced by both the internal consistency of a data set and the number of items, though textbooks and research papers rarely evaluate the extent to which these factors independently affect the data in question. Probabilistic formulations of the requirements for unidimensional measurement separate consistency from error by modelling individual response processes instead of group-level variation. The utility of this separation is illustrated via analyses of small sets of simulated data, and of subsets of data from a 78-item survey of over 2,500 parents of children with disabilities. Measurement reliability ultimately concerns the structural invariance specified in models requiring sufficient statistics, parameter separation, unidimensionality, and other qualities that historically have made quantification simple, practical, and convenient for end users. The paper concludes with suggestions for a research program aimed at focusing measurement research more on the calibration and wide dissemination of tools applicable to individuals, and less on the statistical study of inter-variable relations in large data sets.
Human metabolic profiles are stably controlled by genetic and environmental variation
Nicholson, George; Rantalainen, Mattias; Maher, Anthony D; Li, Jia V; Malmodin, Daniel; Ahmadi, Kourosh R; Faber, Johan H; Hallgrímsdóttir, Ingileif B; Barrett, Amy; Toft, Henrik; Krestyaninova, Maria; Viksna, Juris; Neogi, Sudeshna Guha; Dumas, Marc-Emmanuel; Sarkans, Ugis; The MolPAGE Consortium; Silverman, Bernard W; Donnelly, Peter; Nicholson, Jeremy K; Allen, Maxine; Zondervan, Krina T; Lindon, John C; Spector, Tim D; McCarthy, Mark I; Holmes, Elaine; Baunsgaard, Dorrit; Holmes, Chris C
2011-01-01
1H Nuclear Magnetic Resonance spectroscopy (1H NMR) is increasingly used to measure metabolite concentrations in sets of biological samples for top-down systems biology and molecular epidemiology. For such purposes, knowledge of the sources of human variation in metabolite concentrations is valuable, but currently sparse. We conducted and analysed a study to create such a resource. In our unique design, identical and non-identical twin pairs donated plasma and urine samples longitudinally. We acquired 1H NMR spectra on the samples, and statistically decomposed variation in metabolite concentration into familial (genetic and common-environmental), individual-environmental, and longitudinally unstable components. We estimate that stable variation, comprising familial and individual-environmental factors, accounts on average for 60% (plasma) and 47% (urine) of biological variation in 1H NMR-detectable metabolite concentrations. Clinically predictive metabolic variation is likely nested within this stable component, so our results have implications for the effective design of biomarker-discovery studies. We provide a power-calculation method which reveals that sample sizes of a few thousand should offer sufficient statistical precision to detect 1H NMR-based biomarkers quantifying predisposition to disease. PMID:21878913
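The order of magnitude of the quoted sample sizes can be checked with a generic two-group power calculation (not the paper's own power-calculation method); the stable-variance fraction simply rescales the standardized effect size.

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Two-sample z-approximation: subjects per group needed to detect a standardized
    mean difference `effect_size` (illustrative, not the paper's exact method)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

# A metabolite shift of 0.1 SD of the total (stable + unstable) variation corresponds
# to about 0.1 / sqrt(0.6) ~ 0.13 SD of the stable component if ~60% of the variance
# is stable; either way, a few thousand subjects are required.
for d in (0.10, 0.13):
    print(f"effect size {d:.2f} SD -> about {n_per_group(d):,.0f} subjects per group")
```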
High-precision measurement of chlorine stable isotope ratios
Long, A.; Eastoe, C.J.; Kaufmann, R.S.; Martin, J.G.; Wirt, L.; Finley, J.B.
1993-01-01
We present an analysis procedure that allows stable isotopes of chlorine to be analyzed with precision sufficient for geological and hydrological studies. The total analytical precision is ±0.09‰, and the present known range of chloride in the surface and near-surface environment is 3.5‰. As Cl- is essentially nonreactive in natural aquatic environments, it is a conservative tracer and its δ37Cl is also conservative. Thus, the δ37Cl parameter is valuable for quantitative evaluation of mixing of different sources of chloride in brines and aquifers. © 1993.
A novel alignment-free method for detection of lateral genetic transfer based on TF-IDF.
Cong, Yingnan; Chan, Yao-Ban; Ragan, Mark A
2016-07-25
Lateral genetic transfer (LGT) plays an important role in the evolution of microbes. Existing computational methods for detecting genomic regions of putative lateral origin scale poorly to large data. Here, we propose a novel method based on TF-IDF (Term Frequency-Inverse Document Frequency) statistics to detect not only regions of lateral origin, but also their origin and direction of transfer, in sets of hierarchically structured nucleotide or protein sequences. This approach is based on the frequency distributions of k-mers in the sequences. If a set of contiguous k-mers appears sufficiently more frequently in another phyletic group than in its own, we infer that they have been transferred from the first group to the second. We performed rigorous tests of TF-IDF using simulated and empirical datasets. With the simulated data, we tested our method under different parameter settings for sequence length, substitution rate between and within groups and post-LGT, deletion rate, length of transferred region and k size, and found that we can detect LGT events with high precision and recall. Our method performs better than an established method, ALFY, which has high recall but low precision. Our method is efficient, with runtime increasing approximately linearly with sequence length.
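A minimal sketch of the k-mer TF-IDF idea (not the authors' implementation or exact weighting): build per-group k-mer frequency profiles, weight k-mers by how specific they are to a group, and scan a query sequence for windows whose k-mers score highly in a foreign group. The sequences below are short, hypothetical toy strings.

```python
import numpy as np
from collections import Counter

def kmers(seq, k=8):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def tf_idf(groups, k=8):
    """Simplified per-group TF-IDF weights for k-mers: term frequency within a group
    times the log inverse group frequency."""
    tf = {g: Counter(km for s in seqs for km in kmers(s, k)) for g, seqs in groups.items()}
    df = Counter(km for cnt in tf.values() for km in cnt)
    n = len(groups)
    return {g: {km: cnt[km] / sum(cnt.values()) * np.log(n / df[km]) for km in cnt}
            for g, cnt in tf.items()}

# Toy, hypothetical sequences: the query (a group-B genome) carries a segment copied
# from group A, mimicking a lateral transfer.  The query is left out of the group
# profiles so that its foreign k-mers stay specific to the donor group.
donor_unit = "ACGTACGTTTGACCGTAGGCTAAC"
query = "TTTTGGGGCCCC" + donor_unit[:20] + "AAAATTTTGGGG"
profiles = tf_idf({"A": [donor_unit * 4], "B": ["TTTTGGGGCCCCAAAATTTTGGGG" * 4]})

# Slide a window along the query and sum the weights its k-mers carry in group A.
k, win = 8, 20
scores = [sum(profiles["A"].get(km, 0.0) for km in kmers(query[i:i + win], k))
          for i in range(len(query) - win + 1)]
print("highest group-A (donor) score at window starting at position", int(np.argmax(scores)))
```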
Raymond, Mark R; Clauser, Brian E; Furman, Gail E
2010-10-01
The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution; and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.
Peripheral vascular effects on auscultatory blood pressure measurement.
Rabbany, S Y; Drzewiecki, G M; Noordergraaf, A
1993-01-01
Experiments were conducted to examine the accuracy of the conventional auscultatory method of blood pressure measurement. The influence of the physiologic state of the vascular system in the forearm distal to the site of Korotkoff sound recording and its impact on the precision of the measured blood pressure is discussed. The peripheral resistance in the arm distal to the cuff was changed noninvasively by heating and cooling effects and by induction of reactive hyperemia. All interventions were preceded by an investigation of their effect on central blood pressure to distinguish local effects from changes in central blood pressure. These interventions were sufficiently moderate to make their effect on central blood pressure, recorded in the other arm, statistically insignificant (i.e., changes in systolic [p < 0.3] and diastolic [p < 0.02]). Nevertheless, such alterations were found to modify the amplitude of the Korotkoff sound, which can manifest itself as an apparent change in arterial blood pressure that is readily discerned by the human ear. The increase in diastolic pressure for the cooling experiments was statistically significant (p < 0.001). Moreover, both measured systolic (p < 0.004) and diastolic (p < 0.001) pressure decreases during the reactive hyperemia experiments were statistically significant. The findings demonstrate that alteration in vascular state generates perplexing changes in blood pressure, hence confirming experimental observations by earlier investigators as well as predictions by our model studies.
Brigo, Francesco; Bragazzi, Nicola; Nardone, Raffaele; Trinka, Eugen
2016-11-01
The aim of this study was to conduct a meta-analysis of published studies to directly compare intravenous (IV) levetiracetam (LEV) with IV phenytoin (PHT) or IV valproate (VPA) as second-line treatment of status epilepticus (SE), to indirectly compare IV LEV with IV VPA using common reference-based indirect comparison meta-analysis, and to verify whether results of indirect comparisons are consistent with results of head-to-head randomized controlled trials (RCTs) directly comparing IV LEV with IV VPA. Random-effects Mantel-Haenszel meta-analyses to obtain odds ratios (ORs) for efficacy and safety of LEV versus VPA and LEV or VPA versus PHT were used. Adjusted indirect comparisons between LEV and VPA were used. Two RCTs comparing LEV with PHT (144 episodes of SE) and 3 RCTs comparing VPA with PHT (227 episodes of SE) were included. Direct comparisons showed no difference in clinical seizure cessation, either between VPA and PHT (OR: 1.07; 95% CI: 0.57 to 2.03) or between LEV and PHT (OR: 1.18; 95% CI: 0.50 to 2.79). Indirect comparisons showed no difference between LEV and VPA for clinical seizure cessation (OR: 1.16; 95% CI: 0.45 to 2.97). Results of indirect comparisons are consistent with results of a recent RCT directly comparing LEV with VPA. The absence of a statistically significant difference in direct and indirect comparisons is due to the lack of sufficient statistical power to detect a difference. Conducting an RCT that does not include enough patients to detect a clinically important difference or to estimate an effect with sufficient precision can be regarded as a waste of time and resources and may raise several ethical concerns, especially in RCTs on SE. Copyright © 2016 Elsevier Inc. All rights reserved.
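A common-reference (Bucher-style) adjusted indirect comparison of the kind described above reduces to simple arithmetic on log odds ratios; the sketch below uses made-up 2×2 counts, not the data of the included trials.

```python
import numpy as np
from scipy.stats import norm

def log_or(events_a, n_a, events_b, n_b):
    """Log odds ratio (A vs B) and its standard error from 2x2 counts."""
    a, b = events_a, n_a - events_a
    c, d = events_b, n_b - events_b
    return np.log((a * d) / (b * c)), np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)

# Hypothetical seizure-cessation counts (responders / treated), NOT the trial data.
lor_lev_pht, se_lev_pht = log_or(52, 72, 48, 72)     # LEV vs PHT
lor_vpa_pht, se_vpa_pht = log_or(80, 114, 76, 113)   # VPA vs PHT

# Bucher adjusted indirect comparison: LEV vs VPA through the common PHT arm.
lor_ind = lor_lev_pht - lor_vpa_pht
se_ind = np.sqrt(se_lev_pht**2 + se_vpa_pht**2)
ci = np.exp(lor_ind + np.array([-1, 1]) * norm.ppf(0.975) * se_ind)
print(f"indirect OR (LEV vs VPA) = {np.exp(lor_ind):.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```

The wide interval obtained from such modest hypothetical counts illustrates the power problem the authors describe: the indirect standard error combines the uncertainty of both direct comparisons.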
A non-perturbative exploration of the high energy regime in Nf=3 QCD. ALPHA Collaboration
NASA Astrophysics Data System (ADS)
Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer
2018-05-01
Using continuum extrapolated lattice data we trace a family of running couplings in three-flavour QCD over a large range of scales from about 4 to 128 GeV. The scale is set by the finite space time volume so that recursive finite size techniques can be applied, and Schrödinger functional (SF) boundary conditions enable direct simulations in the chiral limit. Compared to earlier studies we have improved on both statistical and systematic errors. Using the SF coupling to implicitly define a reference scale 1/L_0 ≈ 4 GeV through \bar{g}^2(L_0) = 2.012, we quote L_0 Λ^{N_f=3}_{\overline{MS}} = 0.0791(21). This error is dominated by statistics; in particular, the remnant perturbative uncertainty is negligible and very well controlled, by connecting to infinite renormalization scale from different scales 2^n/L_0 for n = 0, 1, ..., 5. An intermediate step in this connection may involve any member of a one-parameter family of SF couplings. This provides an excellent opportunity for tests of perturbation theory, some of which have been published in a letter (ALPHA collaboration, M. Dalla Brida et al. in Phys Rev Lett 117(18):182001, 2016). The results indicate that for our target precision of 3 per cent in L_0 Λ^{N_f=3}_{\overline{MS}}, a reliable estimate of the truncation error requires non-perturbative data for a sufficiently large range of values of α_s = \bar{g}^2/(4π). In the present work we reach this precision by studying scales that vary by a factor 2^5 = 32, reaching down to α_s ≈ 0.1. We here provide the details of our analysis and an extended discussion.
[Precision nutrition in the era of precision medicine].
Chen, P Z; Wang, H
2016-12-06
Precision medicine has been increasingly incorporated into clinical practice and is enabling a new era for disease prevention and treatment. As an important constituent of precision medicine, precision nutrition has also been drawing more attention during physical examinations. The main aim of precision nutrition is to provide safe and efficient intervention methods for disease treatment and management, through fully considering the genetics, lifestyle (dietary, exercise and lifestyle choices), metabolic status, gut microbiota and physiological status (nutrient level and disease status) of individuals. Three major components should be considered in precision nutrition, including individual criteria for sufficient nutritional status, biomarker monitoring or techniques for nutrient detection and the applicable therapeutic or intervention methods. It was suggested that, in clinical practice, many inherited and chronic metabolic diseases might be prevented or managed through precision nutritional intervention. For generally healthy populations, because lifestyles, dietary factors, genetic factors and environmental exposures vary among individuals, precision nutrition is warranted to improve their physical activity and reduce disease risks. In summary, research and practice is leading toward precision nutrition becoming an integral constituent of clinical nutrition and disease prevention in the era of precision medicine.
NASA Astrophysics Data System (ADS)
Monna, F.; Loizeau, J.-L.; Thomas, B. A.; Guéguen, C.; Favarger, P.-Y.
1998-08-01
One of the factors limiting the precision of inductively coupled plasma mass spectrometry is the counting statistics, which depend upon acquisition time and ion fluxes. In the present study, the precision of the isotopic measurements of Pb and Sr is examined. The measurement time is optimally shared among the isotopes, using a mathematical simulation, to provide the lowest theoretical analytical error. Different algorithms of mass bias correction are also taken into account and evaluated in terms of improvement of overall precision. Several experiments allow a comparison of real conditions with theory. The present method significantly improves the precision, regardless of the instrument used. However, this benefit is greater for equipment which originally yields a precision close to that predicted by counting statistics. Additionally, the procedure is flexible enough to be easily adapted to other problems, such as isotopic dilution.
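The counting-statistics part of this optimization has a simple closed form: for a ratio of two isotope beams measured over a fixed total time, the relative variance is 1/N_major + 1/N_minor, and it is minimized by dwell times proportional to the inverse square root of each count rate. The sketch below (made-up count rates, not the paper's instrument settings) compares an equal split with this optimal split.

```python
import numpy as np

def ratio_rsd(rates, times):
    """Relative standard deviation of an isotope ratio limited by counting statistics
    only: (sigma_R / R)^2 = 1/N_major + 1/N_minor."""
    counts = np.asarray(rates) * np.asarray(times)
    return np.sqrt(np.sum(1.0 / counts))

# Hypothetical count rates (counts/s) for a major and a minor isotope beam, and a
# fixed total acquisition time per measurement cycle.
rates = np.array([2.0e6, 4.0e4])
total_time = 60.0

equal_split = np.array([0.5, 0.5]) * total_time
optimal = total_time * (1 / np.sqrt(rates)) / np.sum(1 / np.sqrt(rates))

print("equal split   RSD = {:.2e}".format(ratio_rsd(rates, equal_split)))
print("optimal split RSD = {:.2e} (dwell times {:.1f} s / {:.1f} s)".format(
    ratio_rsd(rates, optimal), *optimal))
```

The gain is largest when the beam intensities are very unequal, which is consistent with the abstract's observation that the benefit is greatest when the instrument is already close to the counting-statistics limit.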
Finding Mars-Sized Planets in Inner Orbits of Other Stars by Photometry
NASA Technical Reports Server (NTRS)
Borucki, W.; Cullers, K.; Dunham, E.; Koch, D.; Mena-Werth, J.; Cuzzi, Jeffrey N. (Technical Monitor)
1995-01-01
High precision photometry from a spaceborne telescope has the potential of discovering sub-Earth-sized inner planets. Model calculations by Wetherill indicate that Mars-sized planets can be expected to form throughout the range of orbits from that of Mercury to Mars. While a transit of an Earth-sized planet causes a 0.0084% decrease in brightness from a solar-like star, a transit of a planet as small as Mars causes a flux decrease of only 0.0024%. Stellar variability will be the limiting factor for transit measurements. Recent analysis of solar variability from the SOLSTICE experiment shows that much of the variability is in the UV at <400 nm. Combining this result with the total flux variability measured by the ACRIM-1 photometer implies that the Sun has relative amplitude variations of about 0.0007% in the 17-69 µHz bandpass and is presumably typical for solar-like stars. Tests were conducted at Lick Observatory to determine the photometric precision of CCD detectors in the 17-69 µHz bandpass. With frame-by-frame corrections of the image centroids it was found that a precision of 0.001% could be readily achieved, corresponding to a signal to noise ratio of 1.4, provided the telescope aperture was sufficient to keep the statistical noise below 0.0006%. With 24 transits a planet as small as Mars should be reliably detectable. If Wetherill's models are correct in postulating that Mars-like planets are present in Mercury-like orbits, then a six year search should be able to find them.
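A back-of-envelope check of these numbers (using the noise terms quoted in the abstract; the abstract's own signal-to-noise figure presumably combines them somewhat differently):

```python
import numpy as np

R_SUN, R_EARTH, R_MARS = 696_000.0, 6_371.0, 3_390.0   # radii in km

def transit_depth(r_planet, r_star=R_SUN):
    """Fractional flux drop when the planet crosses the stellar disk."""
    return (r_planet / r_star) ** 2

# Noise per transit-length sample: detector precision, stellar variability and
# photon statistics added in quadrature (the three amplitudes quoted in the abstract).
noise = np.sqrt((0.001e-2) ** 2 + (0.0007e-2) ** 2 + (0.0006e-2) ** 2)

for name, r in [("Earth", R_EARTH), ("Mars", R_MARS)]:
    depth = transit_depth(r)
    print(f"{name}: depth = {100 * depth:.4f}%, "
          f"single-transit SNR ~ {depth / noise:.1f}, "
          f"24-transit SNR ~ {depth / noise * np.sqrt(24):.1f}")
```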
Design and control of a macro-micro robot for precise force applications
NASA Technical Reports Server (NTRS)
Wang, Yulun; Mangaser, Amante; Laby, Keith; Jordan, Steve; Wilson, Jeff
1993-01-01
Creating a robot which can delicately interact with its environment has been the goal of much research. Primarily two difficulties have made this goal hard to attain. Control strategies which enable precise force manipulations are difficult to execute in real time because such algorithms have been too computationally complex for available controllers. Also, a robot mechanism which can quickly and precisely execute a force command is difficult to design. Actuation joints must be sufficiently stiff, frictionless, and lightweight so that desired torques can be accurately applied. This paper describes a robotic system which is capable of delicate manipulations. A modular high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8 degree of freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load balanced for maximum execution speed on the multiprocessor system. Delicate force tasks such as polishing, finishing, cleaning, and deburring are the target applications of the robot.
Extension of Fong and Alvarez: When is a lower limit of detection low enough
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, W.E.
1999-11-01
The work of Fong and Alvarez is easily extended for counting when varied counting is not employed. It remains true in the general case that the precision of a counting method can be no less than about 30% at the LLD and so it is desirable to have decision levels at least several times larger than the LLD so that measurements have sufficient precision to make valid decisions.
Extension of Fong and Alvarez: When is a lower limit of detection low enough?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, W.E.
1999-11-01
The work of Fong and Alvarez is easily extended for counting when varied counting is not employed. It remains true in the general case that the precision of a counting method can be no less than about 30% at the LLD and so it is desirable to have decision levels at least several times larger than the LLD so that measurements have sufficient precision to make valid decisions.
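The roughly 30% figure quoted above can be verified with Currie-style counting statistics: for a paired blank, the detection limit is L_D ≈ 2.71 + 4.65√B net counts, and the standard deviation of a net count equal to L_D is √(L_D + 2B), which gives a relative precision near √2/4.65 ≈ 30% for any sizeable background. A small check:

```python
import math

def lld_relative_precision(background_counts):
    """Currie-style check: detection limit L_D for a paired blank and the relative
    standard deviation of a net count equal to L_D (counting statistics only)."""
    B = float(background_counts)
    L_D = 2.71 + 4.65 * math.sqrt(B)          # detection limit (net counts)
    sigma_net = math.sqrt(L_D + 2 * B)        # var(net) = var(gross) + var(blank)
    return L_D, sigma_net / L_D

for B in (100, 1000, 10000):
    L_D, rel = lld_relative_precision(B)
    print(f"background = {B:6d} counts: L_D = {L_D:7.1f}, relative precision = {rel:.0%}")
```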
Alania, M; De Backer, A; Lobato, I; Krause, F F; Van Dyck, D; Rosenauer, A; Van Aert, S
2017-10-01
In this paper, we investigate how precisely the atoms of a small nanocluster can ultimately be located in three dimensions (3D) from a tilt series of images acquired using annular dark field (ADF) scanning transmission electron microscopy (STEM). To this end, we derive an expression for the statistical precision with which the 3D atomic position coordinates can be estimated in a quantitative analysis. Evaluating this statistical precision as a function of the microscope settings also allows us to derive the optimal experimental design. In this manner, the optimal angular tilt range, required electron dose, optimal detector angles, and number of projection images can be determined. Copyright © 2016 Elsevier B.V. All rights reserved.
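The flavour of such a precision expression can be illustrated with a one-dimensional toy model: for a Gaussian-shaped atom column recorded with Poisson counting statistics, the Cramér-Rao lower bound on the column position follows from the per-pixel Fisher information and scales as the peak width divided by the square root of the dose. This is only a sketch under assumed values (widths and pixel sizes in nm), not the full ADF STEM image model of the paper.

```python
import numpy as np

def crlb_position(dose, width, pixel=0.02, extent=0.5):
    """Cramer-Rao lower bound on a 1-D peak position estimated from Poisson counts
    of a Gaussian-shaped atom column (toy model, lengths in nm)."""
    x = np.arange(-extent, extent, pixel)
    mu = dose * pixel * np.exp(-x**2 / (2 * width**2)) / (np.sqrt(2 * np.pi) * width)
    dmu = mu * x / width**2                    # d mu_i / d x0 evaluated at x0 = 0
    fisher = np.sum(dmu**2 / mu)               # Fisher information of the Poisson pixels
    return 1.0 / np.sqrt(fisher)

# Precision scales as width / sqrt(dose): quadrupling the electron dose per atom
# column halves the attainable position error.
for dose in (1e3, 4e3, 1.6e4):                 # detected electrons per column (assumed)
    print(f"dose {dose:8.0f} e-  ->  CRLB ~ {1e3 * crlb_position(dose, width=0.05):.2f} pm")
```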
[Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].
Krimmel, M; Kluba, S; Dietz, K; Reinert, S
2005-03-01
The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations.
Reduction to Outside the Atmosphere and Statistical Tests Used in Geneva Photometry
NASA Technical Reports Server (NTRS)
Rufener, F.
1984-01-01
Conditions for creating a precise photometric system are investigated. The analytical and discriminatory potentials of a photometry obviously result from the localization of the passbands in the spectrum; they do, however, also depend critically on the precision attained. This precision is the result of two different types of precautions. Two procedures which contribute in an efficient manner to achieving greater precision are examined. These two methods are known as hardware related precision and software related precision.
Aucouturier, Jean-Julien; Defreville, Boris; Pachet, François
2007-08-01
The "bag-of-frames" approach (BOF) to audio pattern recognition represents signals as the long-term statistical distribution of their local spectral features. This approach has proved nearly optimal for simulating the auditory perception of natural and human environments (or soundscapes), and is also the most predominent paradigm to extract high-level descriptions from music signals. However, recent studies show that, contrary to its application to soundscape signals, BOF only provides limited performance when applied to polyphonic music signals. This paper proposes to explicitly examine the difference between urban soundscapes and polyphonic music with respect to their modeling with the BOF approach. First, the application of the same measure of acoustic similarity on both soundscape and music data sets confirms that the BOF approach can model soundscapes to near-perfect precision, and exhibits none of the limitations observed in the music data set. Second, the modification of this measure by two custom homogeneity transforms reveals critical differences in the temporal and statistical structure of the typical frame distribution of each type of signal. Such differences may explain the uneven performance of BOF algorithms on soundscapes and music signals, and suggest that their human perception rely on cognitive processes of a different nature.
Statistical issues in the design, conduct and analysis of two large safety studies.
Gaffney, Michael
2016-10-01
The emergence, post approval, of serious medical events, which may be associated with the use of a particular drug or class of drugs, is an important public health and regulatory issue. The best method to address this issue is through a large, rigorously designed safety study. Therefore, it is important to elucidate the statistical issues involved in these large safety studies. Two such studies are PRECISION and EAGLES. PRECISION is the primary focus of this article. PRECISION is a non-inferiority design with a clinically relevant non-inferiority margin. Statistical issues in the design, conduct and analysis of PRECISION are discussed. Quantitative and clinical aspects of the selection of the composite primary endpoint, the determination and role of the non-inferiority margin in a large safety study and the intent-to-treat and modified intent-to-treat analyses in a non-inferiority safety study are shown. Protocol changes that were necessary during the conduct of PRECISION are discussed from a statistical perspective. Issues regarding the complex analysis and interpretation of the results of PRECISION are outlined. EAGLES is presented as a large, rigorously designed safety study when a non-inferiority margin was not able to be determined by a strong clinical/scientific method. In general, when a non-inferiority margin is not able to be determined, the width of the 95% confidence interval is a way to size the study and to assess the cost-benefit of relative trial size. A non-inferiority margin, when able to be determined by a strong scientific method, should be included in a large safety study. Although these studies could not be called "pragmatic," they are examples of best real-world designs to address safety and regulatory concerns. © The Author(s) 2016.
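The "width of the 95% confidence interval" way of sizing a safety study can be sketched as follows: for an assumed background event rate, compute how the confidence interval around a relative risk of 1.0 shrinks as the per-arm sample size grows (illustrative rates only, not those of PRECISION or EAGLES).

```python
import numpy as np
from scipy.stats import norm

def rr_ci(n_per_arm, rate=0.01):
    """Approximate 95% CI around a relative risk of 1.0 when both arms share the
    given event rate: exp(+/- 1.96 * SE of log RR)."""
    events = n_per_arm * rate
    se_log_rr = np.sqrt(2 * (1 - rate) / events)     # var(log RR) = sum(1/e_i - 1/n_i)
    half = norm.ppf(0.975) * se_log_rr
    return np.exp(-half), np.exp(half)

for n in (2000, 4000, 8000, 16000):
    lo, hi = rr_ci(n)
    print(f"n per arm = {n:6d}: 95% CI for RR = 1.0 spans {lo:.2f} to {hi:.2f}")
```

Reading the output as a cost-benefit table (how much extra enrolment buys how much extra interval narrowing) is exactly the trade-off the article describes when no non-inferiority margin can be set by a strong clinical argument.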
All-digital precision processing of ERTS images
NASA Technical Reports Server (NTRS)
Bernstein, R. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Digital techniques have been developed and used to apply precision-grade radiometric and geometric corrections to ERTS MSS and RBV scenes. Geometric accuracies sufficient for mapping at 1:250,000 scale have been demonstrated. Radiometric quality has been superior to ERTS NDPF precision products. A configuration analysis has shown that feasible, cost-effective all-digital systems for correcting ERTS data are easily obtainable. This report contains a summary of all results obtained during this study and includes: (1) radiometric and geometric correction techniques, (2) reseau detection, (3) GCP location, (4) resampling, (5) alternative configuration evaluations, and (6) error analysis.
NASA Astrophysics Data System (ADS)
Gillaspy, J. D.; Chantler, C. T.; Paterson, D.; Hudson, L. T.; Serpa, F. G.; Takács, E.
2010-04-01
The first measurement of hydrogen-like vanadium x-ray Lyman alpha transitions has been made. The measurement was made on an absolute scale, fully independent of atomic structure calculations. Sufficient signal was obtained to reduce the statistical uncertainty to a small fraction of the total uncertainty budget. Potential sources of systematic error due to Doppler shifts were eliminated by performing the measurement on trapped ions. The energies for Ly α1 (1s-2p3/2) and Ly α2 (1s-2p1/2) are found to be 5443.95(25) eV and 5431.10(25) eV, respectively. These results are within approximately 1.5 σ (experimental) of the theoretical values 5443.63 eV and 5430.70 eV. The results are discussed in terms of their relation to the Lamb shift and the development of an x-ray wavelength standard based on a compact source of trapped highly charged ions.
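As a quick consistency check using only the numbers quoted above (experimental uncertainties of 0.25 eV), the deviations from theory are

\[
\frac{5443.95 - 5443.63}{0.25} \approx 1.3\,\sigma, \qquad
\frac{5431.10 - 5430.70}{0.25} = 1.6\,\sigma,
\]

in line with the stated agreement of approximately 1.5 σ (experimental).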
Explicit formula for the Holevo bound for two-parameter qubit-state estimation problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, Jun, E-mail: junsuzuki@uec.ac.jp
The main contribution of this paper is to derive an explicit expression for the fundamental precision bound, the Holevo bound, for estimating any two-parameter family of qubit mixed states in terms of quantum versions of Fisher information. The obtained formula depends solely on the symmetric logarithmic derivative (SLD) Fisher information, the right logarithmic derivative (RLD) Fisher information, and a given weight matrix. This result immediately provides necessary and sufficient conditions for two important classes of quantum statistical models: those for which the Holevo bound coincides with the SLD Cramér-Rao bound, and those for which it coincides with the RLD Cramér-Rao bound. One of the important results of this paper is that a general model other than these two special cases exhibits an unexpected property: the structure of the Holevo bound changes smoothly when the weight matrix varies. In particular, it always coincides with the RLD Cramér-Rao bound for a certain choice of the weight matrix. Several examples illustrate these findings.
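For reference, the Holevo bound referred to above is conventionally written, for a weight matrix W and a two-parameter model ρ_θ, as

\[
\operatorname{Tr}\!\left[ W V_\theta \right] \;\ge\; c_H(W)
  = \min_{\vec{X}} \left\{ \operatorname{Tr}\!\left[ W\, \mathrm{Re}\, Z(\vec{X}) \right]
  + \left\| \sqrt{W}\, \mathrm{Im}\, Z(\vec{X}) \, \sqrt{W} \right\|_1 \right\},
\qquad Z_{jk}(\vec{X}) = \operatorname{Tr}\!\left[ \rho_\theta X_k X_j \right],
\]

where V_θ is the covariance matrix of a locally unbiased estimator, the minimum runs over locally unbiased collections of observables X = (X_1, X_2), and ||·||_1 is the trace norm; the SLD Cramér-Rao bound is Tr[W J_S^{-1}], with an analogous expression (involving the real and imaginary parts of J_R^{-1}) for the RLD bound. Ordering and sign conventions vary between references and may differ in detail from those used in the paper.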
NASA Astrophysics Data System (ADS)
Escobar Martínez, S. D.; Fabela Enríquez, B.; Pedraza Morales, M. I.; REDTOP Collaboration
2017-10-01
REDTOP is a novel experiment proposed at the Delivery Ring of Fermilab with the intent of producing more than 10^13 η mesons per year to detect possible rare η decays which would be clear evidence of the existence of physics beyond the Standard Model. Such statistics are sufficient for investigating several discrete symmetry violations, searching for new particles and interactions, and performing precision studies. One of the golden processes to study is the η → π+π-π0 decay [7], where the π0 decays promptly into two photons. In the context of the Standard Model, the dynamics of the charged pions is symmetric in this process. Thus, any mirror asymmetry in the Dalitz plot would be a direct indication of C and CP violation. We present a study of the performance of the REDTOP detector, reconstructing the invariant mass of the final state π+π-γγ using Monte Carlo samples.
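For concreteness, the conventional Dalitz-plot variables for η → π+π−π0 and the mirror (left-right) asymmetry that a C-violating charge asymmetry would produce are

\[
X = \sqrt{3}\,\frac{T_{\pi^+} - T_{\pi^-}}{Q}, \qquad
Y = \frac{3\,T_{\pi^0}}{Q} - 1, \qquad
Q = T_{\pi^+} + T_{\pi^-} + T_{\pi^0},
\]
\[
A_{LR} = \frac{N(X>0) - N(X<0)}{N(X>0) + N(X<0)},
\]

where T denotes the kinetic energy of each pion in the η rest frame; A_LR consistent with zero is expected if C is conserved. These are the standard definitions from the literature and the notation may differ in detail from the REDTOP analysis.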
Simultaneous measurement of two noncommuting quantum variables: Solution of a dynamical model
NASA Astrophysics Data System (ADS)
Perarnau-Llobet, Martí; Nieuwenhuizen, Theodorus Maria
2017-05-01
The possibility of performing simultaneous measurements in quantum mechanics is investigated in the context of the Curie-Weiss model for a projective measurement. Concretely, we consider a spin-1/2 system simultaneously interacting with two magnets, which act as measuring apparatuses of two different spin components. We work out the dynamics of this process and determine the final state of the measuring apparatuses, from which we can find the probabilities of the four possible outcomes of the measurements. The measurement is found to be nonideal, as (i) the joint statistics do not coincide with the one obtained by separately measuring each spin component, and (ii) the density matrix of the spin does not collapse in either of the measured observables. However, we give an operational interpretation of the process as a generalized quantum measurement, and show that it is fully informative: The expected value of the measured spin components can be found with arbitrary precision for sufficiently many runs of the experiment.
NASA Astrophysics Data System (ADS)
Howell, E. J.; Chan, M. L.; Chu, Q.; Jones, D. H.; Heng, I. S.; Lee, H.-M.; Blair, D.; Degallaix, J.; Regimbau, T.; Miao, H.; Zhao, C.; Hendry, M.; Coward, D.; Messenger, C.; Ju, L.; Zhu, Z.-H.
2018-03-01
The detection of black hole binary coalescence events by Advanced LIGO allows the science benefits of future detectors to be evaluated. In this paper, we report the science benefits of one or two 8 km arm length detectors based on the doubling of key parameters in an Advanced LIGO-type detector, combined with realizable enhancements. It is shown that the total detection rate for sources similar to those already detected would increase to ~10^3-10^5 per year. Within 0.4 Gpc, we find that around 10 of these events would be localizable to within ~10^-1 deg^2. This is sufficient to make unique associations, or to rule out a direct association, with the brightest galaxies in optical surveys (at r-band magnitudes of 17 or above), or, for deeper limits (down to r-band magnitudes of 20), to yield statistically significant associations. The combination of angular resolution and event rate would benefit precision testing of formation models, cosmic evolution, and cosmological studies.
Identification of phases, symmetries and defects through local crystallography
Belianinov, Alex; He, Qian; Kravchenko, Mikhail; ...
2015-07-20
Here we report that advances in electron and probe microscopies allow atomic positions to be measured with a precision of 10 pm or better. This level of fidelity is sufficient to correlate the length (and hence energy) of bonds, as well as bond angles, to functional properties of materials. Traditionally, this has relied on mapping locally measured parameters to macroscopic variables, for example the average unit cell. This description effectively ignores the information contained in the microscopic degrees of freedom available in a high-resolution image. Here we introduce an approach for local analysis of material structure based on statistical analysis of individual atomic neighbourhoods. Clustering and multivariate algorithms such as principal component analysis explore the connectivity of lattice and bond structure, as well as identify minute structural distortions, thus allowing for chemical description and identification of phases. This analysis lays the framework for building image genomes and structure–property libraries, based on conjoining structural and spectral realms through local atomic behaviour.
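A minimal sketch of this kind of neighbourhood-based analysis is given below: each atom is described by the displacement vectors to its nearest neighbours, and PCA plus k-means clustering are applied to those descriptors. The descriptor, the number of neighbours and the number of clusters are illustrative choices, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def neighborhood_descriptors(positions: np.ndarray, k: int = 6) -> np.ndarray:
    """One row per atom: displacement vectors to its k nearest neighbours,
    sorted by distance and flattened into a single descriptor."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(positions)
    _, idx = nn.kneighbors(positions)                   # idx[:, 0] is the atom itself
    disp = positions[idx[:, 1:]] - positions[:, None, :]
    return disp.reshape(len(positions), -1)

def local_crystallography(positions: np.ndarray, n_components: int = 2,
                          n_phases: int = 2):
    """PCA scores act as local 'distortion modes'; k-means labels are
    candidate phase assignments for each atomic neighbourhood."""
    descriptors = neighborhood_descriptors(positions)
    scores = PCA(n_components=n_components).fit_transform(descriptors)
    labels = KMeans(n_clusters=n_phases, n_init=10).fit_predict(scores)
    return scores, labels
```

In practice the positions would come from peak fitting of the image, and the component scores can be mapped back onto the image to visualize spatially varying distortions.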
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escoda, J.; Departement Materiaux et Mecanique des Composants, Electricite de France, Moret-sur-Loing; Willot, F., E-mail: francois.willot@ensmp.f
2011-05-15
This study concerns the prediction of the elastic properties of a 3D mortar image, obtained by micro-tomography, using a combined image segmentation and numerical homogenization approach. The microstructure is obtained by segmentation of the 3D image into aggregates, voids and cement paste. Full-field computations of the elastic response of mortar are undertaken using the Fast Fourier Transform method. Emphasis is placed on highly contrasted properties between aggregates and matrix, to anticipate needs for creep or damage computation. The representative volume element, i.e. the volume size necessary to compute the effective properties with a prescribed accuracy, is given. Overall, the volumes used in this work were sufficient to estimate the effective response of mortar with a precision of 5%, 6% and 10% for contrast ratios of 100, 1000 and 10,000, respectively. Finally, a statistical and local characterization of the component of the stress field parallel to the applied loading is carried out.
Physical habitat in the national wadeable streams assessment
Effective environmental policy decisions require stream habitat information that is accurate, precise, and relevant. The recent National Wadeable Streams Assessment (NWSA) carried out by the U.S. EPA required physical habitat information sufficiently comprehensive to facilitate i...
The Die Is Cast: Precision Electrophilic Modifications Contribute to Cellular Decision Making
2016-01-01
This perspective sets out to critically evaluate the scope of reactive electrophilic small molecules as unique chemical signal carriers in biological information transfer cascades. We consider these electrophilic cues as a new volatile cellular currency and compare them to canonical signaling circulation such as phosphate in terms of chemical properties, biological specificity, sufficiency, and necessity. The fact that nonenzymatic redox sensing properties are found in proteins undertaking varied cellular tasks suggests that electrophile signaling is a moonlighting phenomenon manifested within a privileged set of sensor proteins. The latest interrogations into these on-target electrophilic responses set forth a new horizon in the molecular mechanism of redox signal propagation wherein direct low-occupancy electrophilic modifications on a single sensor target are biologically sufficient to drive functional redox responses with precision timing. We detail how the various mechanisms through which redox signals function could contribute to their interesting phenotypic responses, including hormesis. PMID:27617777
The Die Is Cast: Precision Electrophilic Modifications Contribute to Cellular Decision Making.
Long, Marcus J C; Aye, Yimon
2016-10-02
This perspective sets out to critically evaluate the scope of reactive electrophilic small molecules as unique chemical signal carriers in biological information transfer cascades. We consider these electrophilic cues as a new volatile cellular currency and compare them to canonical signaling circulation such as phosphate in terms of chemical properties, biological specificity, sufficiency, and necessity. The fact that nonenzymatic redox sensing properties are found in proteins undertaking varied cellular tasks suggests that electrophile signaling is a moonlighting phenomenon manifested within a privileged set of sensor proteins. The latest interrogations into these on-target electrophilic responses set forth a new horizon in the molecular mechanism of redox signal propagation wherein direct low-occupancy electrophilic modifications on a single sensor target are biologically sufficient to drive functional redox responses with precision timing. We detail how the various mechanisms through which redox signals function could contribute to their interesting phenotypic responses, including hormesis.
Ultra precision and reliable bonding method
NASA Technical Reports Server (NTRS)
Gwo, Dz-Hung (Inventor)
2001-01-01
The bonding of two materials through hydroxide-catalyzed hydration/dehydration is achieved at room temperature by applying hydroxide ions to at least one of the two bonding surfaces and by placing the surfaces sufficiently close to each other to form a chemical bond between them. The surfaces may be placed sufficiently close to each other by simply placing one surface on top of the other. A silicate material may also be used as a filling material to help fill gaps between the surfaces caused by surface figure mismatches. A powder of a silica-based or silica-containing material may also be used as an additional filling material. The hydroxide-catalyzed bonding method forms bonds which are not only as precise and transparent as optical contact bonds, but also as strong and reliable as high-temperature frit bonds. The hydroxide-catalyzed bonding method is also simple and inexpensive.
Gorresen, P. Marcos; Camp, Richard J.; Brinck, Kevin W.; Farmer, Chris
2012-01-01
Point-transect surveys indicated that millerbirds were more abundant than shown by the strip-transect method, and were estimated at 802 birds in 2010 (95% CI = 652–964) and 704 birds in 2011 (95% CI = 579–837). Point-transect surveys yielded population estimates with improved precision, which will permit trends to be detected in shorter time periods and with greater statistical power than is available from strip-transect survey methods. Mean finch population estimates and associated uncertainty were not markedly different among the three survey methods, but the performance of models used to estimate density and population size is expected to improve as the data from additional surveys are incorporated. Using the point-transect survey, the mean finch population size was estimated at 2,917 birds in 2010 (95% CI = 2,037–3,965) and 2,461 birds in 2011 (95% CI = 1,682–3,348). Preliminary testing of the line-transect method in 2011 showed that it would not generate sufficient detections to effectively model bird density and, consequently, relatively precise population size estimates. Both species were fairly evenly distributed across Nihoa and appear to occur in all or nearly all available habitat. The time expended and area traversed by observers was similar among survey methods; however, point-transect surveys do not require that observers walk a straight transect line, thereby allowing them to avoid culturally or biologically sensitive areas and minimize the adverse effects of recurrent travel to any particular area. In general, point-transect surveys detect more birds than strip-survey methods, thereby improving precision and the resulting population size and trend estimation. The method is also better suited for the steep and uneven terrain of Nihoa.
Eser, Alexander; Primas, Christian; Reinisch, Sieglinde; Vogelsang, Harald; Novacek, Gottfried; Mould, Diane R; Reinisch, Walter
2018-01-30
Despite a robust exposure-response relationship of infliximab in inflammatory bowel disease (IBD), attempts to adjust dosing to individually predicted serum concentrations of infliximab (SICs) are lacking. Compared with labor-intensive conventional software for pharmacokinetic (PK) modeling (eg, NONMEM) dashboards are easy-to-use programs incorporating complex Bayesian statistics to determine individual pharmacokinetics. We evaluated various infliximab detection assays and the number of samples needed to precisely forecast individual SICs using a Bayesian dashboard. We assessed long-term infliximab retention in patients being dosed concordantly versus discordantly with Bayesian dashboard recommendations. Three hundred eighty-two serum samples from 117 adult IBD patients on infliximab maintenance therapy were analyzed by 3 commercially available assays. Data from each assay was modeled using NONMEM and a Bayesian dashboard. PK parameter precision and residual variability were assessed. Forecast concentrations from both systems were compared with observed concentrations. Infliximab retention was assessed by prediction for dose intensification via Bayesian dashboard versus real-life practice. Forecast precision of SICs varied between detection assays. At least 3 SICs from a reliable assay are needed for an accurate forecast. The Bayesian dashboard performed similarly to NONMEM to predict SICs. Patients dosed concordantly with Bayesian dashboard recommendations had a significantly longer median drug survival than those dosed discordantly (51.5 versus 4.6 months, P < .0001). The Bayesian dashboard helps to assess the diagnostic performance of infliximab detection assays. Three, not single, SICs provide sufficient information for individualized dose adjustment when incorporated into the Bayesian dashboard. Treatment adjusted to forecasted SICs is associated with longer drug retention of infliximab. © 2018, The American College of Clinical Pharmacology.
A High-Precision Counter Using the DSP Technique
2004-09-01
DSP is not good enough to process all the 1-second samples. The cache memory is also not sufficient to store all the sampling data. So we cut the... sampling number in a cycle is not good enough to achieve an accuracy less than 2×10^-11. For this reason, a correlation operation is performed for... We will solve this...
A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data
Chen, Yi-Hau
2017-01-01
Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data remains difficult because of limited sample size. This limitation also leads to the common practice of using a competitive null, which fundamentally treats genes or proteins as independent units. The independence assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and cause false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, the sample covariance may not be a precise estimate if the sample size is very limited, which is usually the case for data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed from the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; and the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions in agreement with the discussion of the original publication. We implemented the T2-statistic into an R package T2GA, which is available at https://github.com/roqe/T2GA. PMID:28622336
A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data.
Lai, En-Yu; Chen, Yi-Hau; Wu, Kun-Pin
2017-06-01
Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data remains difficult because of limited sample size. This limitation also leads to the common practice of using a competitive null, which fundamentally treats genes or proteins as independent units. The independence assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and cause false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, the sample covariance may not be a precise estimate if the sample size is very limited, which is usually the case for data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed from the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; and the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions in agreement with the discussion of the original publication. We implemented the T2-statistic into an R package T2GA, which is available at https://github.com/roqe/T2GA.
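A minimal sketch of a Hotelling-type T2 statistic with a knowledge-based covariance matrix is shown below: the correlation structure is supplied from external confidence scores (e.g., STRING) instead of being estimated from the small sample. The scaling of the knowledge-based matrix and the chi-square reference distribution are simplifications for exposition and do not reproduce the calibration used in the published T2GA procedure.

```python
import numpy as np
from scipy import stats

def knowledge_based_t2(log_ratios: np.ndarray, prior_corr: np.ndarray):
    """Test H0: the mean log expression-ratio vector of a pathway is zero.

    log_ratios : (n_samples, n_proteins) array of protein log ratios
    prior_corr : (n_proteins, n_proteins) correlation-like matrix built from
                 interaction confidence scores (ones on the diagonal)
    """
    n, p = log_ratios.shape
    xbar = log_ratios.mean(axis=0)
    sd = log_ratios.std(axis=0, ddof=1)
    sigma = prior_corr * np.outer(sd, sd)          # scale priors to a covariance
    t2 = n * xbar @ np.linalg.solve(sigma, xbar)   # n * xbar' Sigma^{-1} xbar
    p_value = stats.chi2.sf(t2, df=p)              # covariance treated as known
    return t2, p_value
```

The self-contained null tested here ("no change in this pathway") is what distinguishes the approach from competitive gene-set tests that treat proteins as independent units.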
Precise Near IR Radial Velocity First Light Observations With iSHELL
NASA Astrophysics Data System (ADS)
Cale, Bryson L.; Plavchan, Peter; Gagné, Jonathan; Gao, Peter; Nishimoto, America; Tanner, Angelle; Walp, Bernie; Brinkworth, Carolyn; Johnson, John Asher; Vasisht, Gautam
2018-01-01
We present our current progress on obtaining precise radial velocities with the new iSHELL spectrograph at NASA's Infrared Telescope Facility. To obtain precise RVs, we use a methane isotopologue absorption gas cell in the calibration unit. Over the past year, we've collected 3-12 epochs of 17 bright G, K, and M dwarfs at high SNR. By focusing on late-type stars, we obtain relatively higher SNR in the near infrared. We've successfully updated both our spectral and RV extraction pipelines, with a few exceptions. Inherent to the iSHELL data is a wavelength-dependent fringing component, which must be incorporated into our model to obtain adequate RV precision. With iSHELL's predecessor, CSHELL, we obtained a precision of 3 m/s on the bright M giant SV Peg. With further progress on our fringing and telluric models, we hope to obtain a precision of <3 m/s with iSHELL, sufficient to detect terrestrial planets in the habitable zone of nearby M dwarfs.
Siebers, Jeffrey V
2008-04-04
Monte Carlo (MC) is rarely used for IMRT plan optimization outside of research centres due to the extensive computational resources or long computation times required to complete the process. Time can be reduced by degrading the statistical precision of the MC dose calculation used within the optimization loop. However, this eventually introduces optimization convergence errors (OCEs). This study determines the statistical noise levels tolerated during MC-IMRT optimization under the condition that the optimized plan has OCEs <100 cGy (1.5% of the prescription dose) for MC-optimized IMRT treatment plans. Seven-field prostate IMRT treatment plans for 10 prostate patients are used in this study. Pre-optimization is performed for deliverable beams with a pencil-beam (PB) dose algorithm. Further deliverable-based optimization proceeds using: (1) MC-based optimization, where dose is recomputed with MC after each intensity update, or (2) a once-corrected (OC) MC-hybrid optimization, where an MC dose computation defines beam-by-beam dose correction matrices that are used during a PB-based optimization. Optimizations are performed with nominal per-beam MC statistical precisions of 2, 5, 8, 10, 15, and 20%. Following optimizer convergence, beams are re-computed with MC using 2% per-beam nominal statistical precision, and the 2 PTV and 10 OAR dose indices used in the optimization objective function are tallied. For both the MC-optimization and OC-optimization methods, statistical equivalence tests found that OCEs are less than 1.5% of the prescription dose for plans optimized with nominal statistical uncertainties of up to 10% per beam. The achieved statistical uncertainty in the patient for the 10% per-beam simulations from the combination of the 7 beams is ~3% with respect to maximum dose for voxels with D>0.5D(max). The MC dose computation time for the OC-optimization is only 6.2 minutes on a single 3 GHz processor, with results clinically equivalent to high-precision MC computations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kilcher, Levi; Thomson, Jim; Talbert, Joe
This work details a methodology for measuring hub-height inflow turbulence using moored acoustic Doppler velocimeters (ADVs). This approach is motivated by the shortcomings of the alternatives. For example, remote velocity measurements (i.e., from acoustic Doppler profilers) lack sufficient precision for device simulation, and rigid tower-mounted measurements are very expensive and technically challenging in the tidal environment. Moorings offer a low-cost, site-adaptable and robust deployment platform, and ADVs provide the necessary precision to accurately quantify turbulence.
Grinding Parts For Automatic Welding
NASA Technical Reports Server (NTRS)
Burley, Richard K.; Hoult, William S.
1989-01-01
Rollers guide grinding tool along prospective welding path. Skatelike fixture holds rotary grinder or file for machining large-diameter rings or ring segments in preparation for welding. Operator grasps handles to push rolling fixture along part. Rollers maintain precise dimensional relationship so grinding wheel cuts precise depth. Fixture-mounted grinder machines surface to quality sufficient for automatic welding; manual welding with attendant variations and distortion not necessary. Developed to enable automatic welding of parts, manual welding of which resulted in weld bead permeated with microscopic fissures.
Dynamics of statistical distance: Quantum limits for two-level clocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braunstein, S.L.; Milburn, G.J.
1995-03-01
We study the evolution of statistical distance on the Bloch sphere under unitary and nonunitary dynamics. This corresponds to studying the limits to clock precision for a clock constructed from a two-state system. We find that the initial motion away from pure states under nonunitary dynamics yields the greatest accuracy for a "one-tick" clock; in this case the clock's precision is not limited by the largest frequency of the system.
Time Delay Embedding Increases Estimation Precision of Models of Intraindividual Variability
ERIC Educational Resources Information Center
von Oertzen, Timo; Boker, Steven M.
2010-01-01
This paper investigates the precision of parameters estimated from local samples of time dependent functions. We find that "time delay embedding," i.e., structuring data prior to analysis by constructing a data matrix of overlapping samples, increases the precision of parameter estimates and in turn statistical power compared to standard…
McAlinden, Colm; Khadka, Jyoti; Pesudovs, Konrad
2011-07-01
The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous and should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.
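A minimal sketch of the limits-of-agreement calculation referred to above (bias ± 1.96 × SD of the paired differences) is shown below; confidence intervals for the limits themselves, repeatability coefficients and checks for proportional bias are omitted.

```python
import numpy as np

def limits_of_agreement(method_a, method_b, z: float = 1.96):
    """Bland-Altman 95% limits of agreement for paired measurements
    of the same subjects by two instruments or clinical tests."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return {
        "bias": bias,
        "lower_loa": bias - z * sd,
        "upper_loa": bias + z * sd,
        "means": (a + b) / 2,   # x-axis of the Bland-Altman plot
    }
```

Plotting the differences against the pairwise means then gives the familiar Bland-Altman plot, with the two limit lines judged against a clinically acceptable difference rather than against a correlation coefficient.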
Statistical evaluation of rainfall-simulator and erosion testing procedure : final report.
DOT National Transportation Integrated Search
1977-01-01
The specific aims of this study were (1) to supply documentation of statistical repeatability and precision of the rainfall-simulator and to document the statistical repeatabiity of the soil-loss data when using the previously recommended tentative l...
Calculation of precise firing statistics in a neural network model
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2017-08-01
A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network, which works depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model with the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.
Holloway, Andrew J; Oshlack, Alicia; Diyagama, Dileepa S; Bowtell, David DL; Smyth, Gordon K
2006-01-01
Background Concerns are often raised about the accuracy of microarray technologies and the degree of cross-platform agreement, but there are yet no methods which can unambiguously evaluate precision and sensitivity for these technologies on a whole-array basis. Results A methodology is described for evaluating the precision and sensitivity of whole-genome gene expression technologies such as microarrays. The method consists of an easy-to-construct titration series of RNA samples and an associated statistical analysis using non-linear regression. The method evaluates the precision and responsiveness of each microarray platform on a whole-array basis, i.e., using all the probes, without the need to match probes across platforms. An experiment is conducted to assess and compare four widely used microarray platforms. All four platforms are shown to have satisfactory precision but the commercial platforms are superior for resolving differential expression for genes at lower expression levels. The effective precision of the two-color platforms is improved by allowing for probe-specific dye-effects in the statistical model. The methodology is used to compare three data extraction algorithms for the Affymetrix platforms, demonstrating poor performance for the commonly used proprietary algorithm relative to the other algorithms. For probes which can be matched across platforms, the cross-platform variability is decomposed into within-platform and between-platform components, showing that platform disagreement is almost entirely systematic rather than due to measurement variability. Conclusion The results demonstrate good precision and sensitivity for all the platforms, but highlight the need for improved probe annotation. They quantify the extent to which cross-platform measures can be expected to be less accurate than within-platform comparisons for predicting disease progression or outcome. PMID:17118209
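A minimal sketch of the kind of non-linear regression that such a titration design makes possible is given below: the expected log-intensity of a probe is modelled as the log of a linear mixture of the two pure RNA samples, with the known mixing proportions as the predictor. The specific two-parameter model and the starting values are illustrative assumptions, not the exact model fitted in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def titration_model(p, a, b):
    """Expected log2 intensity of one probe when a proportion p of sample A
    is mixed with (1 - p) of sample B; a and b are the linear-scale signals
    of the two pure samples."""
    return np.log2(p * a + (1.0 - p) * b)

def fit_probe(proportions, log2_intensities):
    """Fit one probe's titration response; a/b estimates its fold change and
    the residual scatter reflects the platform's precision for that probe."""
    p = np.asarray(proportions, dtype=float)
    y = np.asarray(log2_intensities, dtype=float)
    popt, _ = curve_fit(titration_model, p, y,
                        p0=(100.0, 100.0), bounds=(1e-6, np.inf))
    a, b = popt
    residuals = y - titration_model(p, a, b)
    return {"log2_fold_change": float(np.log2(a / b)),
            "residual_sd": float(residuals.std(ddof=2))}
```

In this framing, responsiveness (sensitivity) corresponds to how strongly the fitted fold changes deviate from zero across probes, while precision corresponds to the residual scatter around each probe's fitted curve.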
Design of a novel instrument for active neutron interrogation of artillery shells.
Bélanger-Champagne, Camille; Vainionpää, Hannes; Peura, Pauli; Toivonen, Harri; Eerola, Paula; Dendooven, Peter
2017-01-01
The most common explosives can be uniquely identified by measuring the elemental H/N ratio with a precision better than 10%. Monte Carlo simulations were used to design two variants of a new prompt gamma neutron activation instrument that can achieve this precision. The instrument features an intense pulsed neutron generator with precise timing. Measuring the hydrogen peak from the target explosive is especially challenging because the instrument itself contains hydrogen, which is needed for neutron moderation and shielding. By iterative design optimization, the fraction of the hydrogen peak counts coming from the explosive under interrogation increased from 53 (+7/-7)% to 74 (+8/-10)% (statistical only) for the benchmark design. In the optimized design variants, the hydrogen signal from a high-explosive shell can be measured to a statistics-only precision better than 1% in less than 30 minutes for an average neutron production yield of 10^9 n/s.
Design of a novel instrument for active neutron interrogation of artillery shells
Vainionpää, Hannes; Peura, Pauli; Toivonen, Harri; Eerola, Paula; Dendooven, Peter
2017-01-01
The most common explosives can be uniquely identified by measuring the elemental H/N ratio with a precision better than 10%. Monte Carlo simulations were used to design two variants of a new prompt gamma neutron activation instrument that can achieve this precision. The instrument features an intense pulsed neutron generator with precise timing. Measuring the hydrogen peak from the target explosive is especially challenging because the instrument itself contains hydrogen, which is needed for neutron moderation and shielding. By iterative design optimization, the fraction of the hydrogen peak counts coming from the explosive under interrogation increased from 53 (+7/-7)% to 74 (+8/-10)% (statistical only) for the benchmark design. In the optimized design variants, the hydrogen signal from a high-explosive shell can be measured to a statistics-only precision better than 1% in less than 30 minutes for an average neutron production yield of 10^9 n/s. PMID:29211773
After Behaviourism, Navigationism?
ERIC Educational Resources Information Center
Moran, Sean
2008-01-01
Two previous articles in this journal advocate the greater use of a behaviourist methodology called "Precision Teaching" (PT). From a position located within virtue ethics, this article argues that the technical feat of raising narrowly defined performance in mathematics and other subjects is not sufficient justification for the…
An audit of the statistics and the comparison with the parameter in the population
NASA Astrophysics Data System (ADS)
Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad
2015-10-01
The sample size that is sufficient to closely estimate the statistics for particular parameters has long been an issue. Although the sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the statistics are close to the parameters for a particular population. Meanwhile, a guideline that uses a p-value less than 0.05 is widely used as inferential evidence. Therefore, this study audited results that were analyzed from various subsamples and statistical analyses and compared the results with the parameters in three different populations. Eight types of statistical analysis and eight subsamples for each statistical analysis were analyzed. The results show that the statistics were consistent and close to the parameters when the sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters that involve categorical variables compared with numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.
Demonstration of improved sensitivity of echo interferometers to gravitational acceleration
NASA Astrophysics Data System (ADS)
Mok, C.; Barrett, B.; Carew, A.; Berthiaume, R.; Beattie, S.; Kumarakrishnan, A.
2013-08-01
We have developed two configurations of an echo interferometer that rely on standing-wave excitation of a laser-cooled sample of rubidium atoms. Both configurations can be used to measure acceleration a along the axis of excitation. For a two-pulse configuration, the signal from the interferometer is modulated at the recoil frequency and exhibits a sinusoidal frequency chirp as a function of pulse spacing. In comparison, for a three-pulse stimulated-echo configuration, the signal is observed without recoil modulation and exhibits a modulation at a single frequency as a function of pulse spacing. The three-pulse configuration is less sensitive to effects of vibrations and magnetic field curvature, leading to a longer experimental time scale. For both configurations of the atom interferometer (AI), we show that a measurement of acceleration with a statistical precision of 0.5% can be realized by analyzing the shape of the echo envelope that has a temporal duration of a few microseconds. Using the two-pulse AI, we obtain measurements of acceleration that are statistically precise to 6 parts per million (ppm) on a 25 ms time scale. In comparison, using the three-pulse AI, we obtain measurements of acceleration that are statistically precise to 0.4 ppm on a time scale of 50 ms. A further statistical enhancement is achieved by analyzing the data across the echo envelope so that the statistical error is reduced to 75 parts per billion (ppb). The inhomogeneous field of a magnetized vacuum chamber limited the experimental time scale and resulted in prominent systematic effects. Extended time scales and improved signal-to-noise ratio observed in recent echo experiments using a nonmagnetic vacuum chamber suggest that echo techniques are suitable for a high-precision measurement of gravitational acceleration g. We discuss methods for reducing systematic effects and improving the signal-to-noise ratio. Simulations of both AI configurations with a time scale of 300 ms suggest that an optimized experiment with improved vibration isolation and atoms selected in the mF=0 state can result in measurements of g statistically precise to 0.3 ppb for the two-pulse AI and 0.6 ppb for the three-pulse AI.
Developing Statistical Knowledge for Teaching during Design-Based Research
ERIC Educational Resources Information Center
Groth, Randall E.
2017-01-01
Statistical knowledge for teaching is not precisely equivalent to statistics subject matter knowledge. Teachers must know how to make statistics understandable to others as well as understand the subject matter themselves. This dual demand on teachers calls for the development of viable teacher education models. This paper offers one such model,…
Information content analysis: the potential for methane isotopologue retrieval from GOSAT-2
NASA Astrophysics Data System (ADS)
Malina, Edward; Yoshida, Yukio; Matsunaga, Tsuneo; Muller, Jan-Peter
2018-02-01
Atmospheric methane comprises multiple isotopic molecules, with the most abundant being 12CH4 and 13CH4, making up 98% and 1.1% of atmospheric methane respectively. It has been shown that it is possible to distinguish between sources of methane (biogenic methane, e.g. marshland, or abiogenic methane, e.g. fracking) via the ratio of these main methane isotopologues, otherwise known as the δ13C value. δ13C values typically range between -10 and -80 ‰, with abiogenic sources closer to zero and biogenic sources showing more negative values. We initially suggest that a δ13C difference of 10 ‰ is sufficient to differentiate between methane source types; based on this, we derive that a precision of 0.2 ppbv on 13CH4 retrievals may achieve the target δ13C variance. Using an application of the well-established information content analysis (ICA) technique for assumed clear-sky conditions, this paper shows that using a combination of the shortwave infrared (SWIR) bands on the planned Greenhouse gases Observing SATellite (GOSAT-2) mission, 13CH4 can be measured with sufficient information content to a precision of between 0.7 and 1.2 ppbv from a single sounding (assuming a total column average value of 19.14 ppbv), which can then be reduced to the target precision through spatial and temporal averaging techniques. We therefore suggest that GOSAT-2 can be used to differentiate between methane source types. We find that large unconstrained covariance matrices are required in order to achieve sufficient information content, while the solar zenith angle has limited impact on the information content.
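For reference, the δ13C notation used above is conventionally defined relative to a standard (VPDB) isotope ratio, expressed in parts per thousand (‰):

\[
\delta^{13}\mathrm{C} \;=\; \left( \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}
{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}} \;-\; 1 \right) \times 1000 .
\]

Distinguishing source classes separated by about 10 ‰ therefore requires resolving roughly a 1% relative change in the 13CH4/12CH4 column ratio, which is consistent with the 0.2 ppbv target on the ~19 ppbv 13CH4 column average quoted above.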
Infants Segment Continuous Events Using Transitional Probabilities
ERIC Educational Resources Information Center
Stahl, Aimee E.; Romberg, Alexa R.; Roseberry, Sarah; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathryn
2014-01-01
Throughout their 1st year, infants adeptly detect statistical structure in their environment. However, little is known about whether statistical learning is a primary mechanism for event segmentation. This study directly tests whether statistical learning alone is sufficient to segment continuous events. Twenty-eight 7- to 9-month-old infants…
Statistical Learning of Phonetic Categories: Insights from a Computational Approach
ERIC Educational Resources Information Center
McMurray, Bob; Aslin, Richard N.; Toscano, Joseph C.
2009-01-01
Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model…
Nilsen, Vegard; Wyller, John
2016-01-01
Dose-response models are essential to quantitative microbial risk assessment (QMRA), providing a link between levels of human exposure to pathogens and the probability of negative health outcomes. In drinking water studies, the class of semi-mechanistic models known as single-hit models, such as the exponential and the exact beta-Poisson, has seen widespread use. In this work, an attempt is made to carefully develop the general mathematical single-hit framework while explicitly accounting for variation in (1) host susceptibility and (2) pathogen infectivity. This allows a precise interpretation of the so-called single-hit probability and precise identification of a set of statistical independence assumptions that are sufficient to arrive at single-hit models. Further analysis of the model framework is facilitated by formulating the single-hit models compactly using probability generating and moment generating functions. Among the more practically relevant conclusions drawn are: (1) for any dose distribution, variation in host susceptibility always reduces the single-hit risk compared to a constant host susceptibility (assuming equal mean susceptibilities), (2) the model-consistent representation of complete host immunity is formally demonstrated to be a simple scaling of the response, (3) the model-consistent expression for the total risk from repeated exposures deviates (gives lower risk) from the conventional expression used in applications, and (4) a model-consistent expression for the mean per-exposure dose that produces the correct total risk from repeated exposures is developed. © 2016 Society for Risk Analysis.
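For concreteness, the two single-hit models named above are usually written (for mean dose d) as

\[
P_{\mathrm{exp}}(d) = 1 - e^{-r d}, \qquad
P_{\beta\text{-}\mathrm{P}}(d) = 1 - {}_{1}F_{1}\!\left(\alpha,\ \alpha + \beta,\ -d\right),
\]

where r is the per-organism probability of initiating infection, ₁F₁ is Kummer's confluent hypergeometric function, and α, β parameterize the Beta-distributed single-hit probability; the familiar approximate beta-Poisson form P(d) ≈ 1 − (1 + d/β)^(−α) is valid only when β ≫ 1 and α ≪ β. These are the standard textbook forms, and the notation may differ slightly from the paper's.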
Assessing Quality of Care and Elder Abuse in Nursing Homes via Google Reviews.
Mowery, Jared; Andrei, Amanda; Le, Elizabeth; Jian, Jing; Ward, Megan
2016-01-01
It is challenging to assess the quality of care and detect elder abuse in nursing homes, since patients may be incapable of reporting quality issues or abuse themselves, and resources for sending inspectors are limited. This study correlates Google reviews of nursing homes with Centers for Medicare and Medicaid Services (CMS) inspection results in the Nursing Home Compare (NHC) data set, to quantify the extent to which the reviews reflect the quality of care and the presence of elder abuse. A total of 16,160 reviews were collected, spanning 7,170 nursing homes. Two approaches were tested: using the average rating as an overall estimate of the quality of care at a nursing home, and using the average scores from a maximum entropy classifier trained to recognize indications of elder abuse. The classifier achieved an F-measure of 0.81, with precision 0.74 and recall 0.89. The correlation for the classifier is weak but statistically significant: correlation = 0.13, P < .001, with 95% confidence interval (0.10, 0.16). The ratings exhibit a slightly higher correlation: 0.15, P < .001. Both the classifier and rating correlations approach approximately 0.65 when the effective average number of reviews per provider is increased by aggregating similar providers. These results indicate that an analysis of Google reviews of nursing homes can be used to detect indications of elder abuse with high precision and to assess the quality of care, but only when a sufficient number of reviews are available.
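The reported F-measure is consistent with the quoted precision and recall, since

\[
F_1 \;=\; \frac{2PR}{P + R} \;=\; \frac{2 \times 0.74 \times 0.89}{0.74 + 0.89} \;\approx\; 0.81 .
\]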
Successful reproduction depends upon the precise orchestration of many physiological processes. With respect to male reproductive performance, normal copulatory behavior and ejaculatory function are required to insure that semen is deposited in the female tract. Then, a suffici...
What to use to express the variability of data: Standard deviation or standard error of mean?
Barde, Mohini P; Barde, Prajakt J
2012-07-01
Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and the Standard Deviation (SD) are used interchangeably to express variability, even though they measure different parameters. The SEM quantifies uncertainty in the estimate of the mean, whereas the SD indicates the dispersion of the data from the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be summarized with the SD. Use of the SEM should be limited to computing the confidence interval (CI), which measures the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
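Written out (with s the sample standard deviation and n the sample size), the quantities discussed are

\[
\mathrm{SD} = s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}, \qquad
\mathrm{SEM} = \frac{s}{\sqrt{n}}, \qquad
95\%\ \mathrm{CI} \approx \bar{x} \pm 1.96 \times \mathrm{SEM}
\]

(the last expression uses the large-sample normal approximation; a t-multiplier replaces 1.96 for small n).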
Robust stability of second-order systems
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1995-01-01
It has been shown recently how virtual passive controllers can be designed for second-order dynamic systems to achieve robust stability. The virtual controllers were visualized as systems made up of spring, mass and damping elements. In this paper, a new approach to the same second-order dynamic systems is used, emphasizing the notion of positive realness. Necessary and sufficient conditions for positive realness are presented for scalar spring-mass-dashpot systems. For multi-input multi-output systems, we show how a mass-spring-dashpot system can be made positive real by properly choosing its output variables. In particular, sufficient conditions are shown for the system without output velocity. Furthermore, if velocity cannot be measured, then the system parameters must be precise to keep the system positive real. In practice, system parameters are not always constant and cannot be measured precisely. Therefore, in order to be useful, positive real systems must be robust to some degree. This can be achieved with the design presented in this paper.
NASA Astrophysics Data System (ADS)
Iwaya, Takamitsu; Akao, Shingo; Sakamoto, Toshihiro; Tsuji, Toshihiro; Nakaso, Noritaka; Yamanaka, Kazushi
2012-07-01
In the field of environmental measurement and security, a portable gas chromatograph (GC) is required for the on-site analysis of multiple hazardous gases. Although the gas separation column has been downsized using micro-electro-mechanical-systems (MEMS) technology, an MEMS column made of silicon and glass still does not have sufficient robustness or a sufficiently low fabrication cost for a portable GC. In this study, we fabricated a robust and inexpensive high-precision metal MEMS column by combining diffusion-bonded etched stainless-steel plates with alignment evaluation using acoustic microscopy. The separation performance was evaluated using a desktop GC with a flame ionization detector, and we achieved separation performance comparable to the best silicon MEMS column fabricated using a dynamic coating method. As an application, we fabricated a palm-size surface acoustic wave (SAW) GC combining this column with a ball SAW sensor and succeeded in separating and detecting a mixture of volatile organic compounds.
Valente, Nina Leão Marques; Vallada, Homero; Cordeiro, Quirino; Miguita, Karen; Bressan, Rodrigo Affonseca; Andreoli, Sergio Baxter; Mari, Jair Jesus; Mello, Marcelo Feijó
2011-05-01
Posttraumatic stress disorder (PTSD) is a prevalent, disabling anxiety disorder marked by behavioral and physiologic alterations which commonly follows a chronic course. Exposure to a traumatic event constitutes a necessary, but not sufficient, factor. There is evidence from twin studies supporting a significant genetic predisposition to PTSD. However, the precise genetic loci still remain unclear. The objective of the present study was to identify, in a case-control study, whether the brain-derived neurotrophic factor (BDNF) val66met polymorphism (rs6265), the dopamine transporter (DAT1) three prime untranslated region (3'UTR) variable number of tandem repeats (VNTR), and the serotonin transporter (5-HTTPRL) short/long variants are associated with the development of PTSD in a group of victims of urban violence. All polymorphisms were genotyped in 65 PTSD patients as well as in 34 victims of violence without PTSD and in a community control group (n = 335). We did not find a statistical significant difference between the BDNF val66met and 5-HTTPRL polymorphism and the traumatic phenotype. However, a statistical association was found between DAT1 3'UTR VNTR nine repeats and PTSD (OR = 1.82; 95% CI, 1.20-2.76). This preliminary result confirms previous reports supporting a susceptibility role for allele 9 and PTSD.
Meckley, Trevor D.; Holbrook, Christopher M.; Wagner, C. Michael; Binder, Thomas R.
2014-01-01
The use of position precision estimates that reflect the confidence in the positioning process should be considered prior to the use of biological filters that rely on a priori expectations of the subject’s movement capacities and tendencies. Position confidence goals should be determined based upon the needs of the research questions and analysis requirements versus arbitrary selection, in which filters of previous studies are adopted. Data filtering with this approach ensures that data quality is sufficient for the selected analyses and presents the opportunity to adjust or identify a different analysis in the event that the requisite precision was not attained. Ignoring these steps puts a practitioner at risk of reporting errant findings.
Constructing and Modifying Sequence Statistics for relevent Using informR in R
Marcum, Christopher Steven; Butts, Carter T.
2015-01-01
The informR package greatly simplifies the analysis of complex event histories in R by providing user-friendly tools to build sufficient statistics for the relevent package. Historically, building sufficient statistics to model event sequences (of the form a→b) using the egocentric generalization of Butts' (2008) relational event framework for modeling social action has been cumbersome. The informR package simplifies the construction of the complex list of arrays needed by the rem() model-fitting function for a variety of cases involving egocentric event data, multiple event types, and/or support constraints. This paper introduces these tools using examples from real data extracted from the American Time Use Survey. PMID:26185488
Statistical analysis of radioimmunoassay. In comparison with bioassay (in Japanese)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakano, R.
1973-01-01
Using RIA (radioimmunoassay) data, statistical procedures for dealing with two problems, the linearization of the dose-response curve and the calculation of relative potency, are described. There are three methods for linearizing the RIA dose-response curve. In each method, the following quantities are plotted on the horizontal and vertical axes, respectively: dose x and (B/T)^-1; c/(x + c) and B/T (c: the dose at which B/T is 50%); log x and logit B/T. Among them, the last method seems to be the most practical. The statistical procedures of bioassay were employed to calculate the relative potency of unknown samples compared to the standard samples from the dose-response curves of standard and unknown samples using the regression coefficient. It is desirable that the relative potency be calculated by plotting more than 5 points on the standard curve and more than 2 points for unknown samples. To examine the statistical limit of precision of measurement, the LH activity of gonadotropin in urine was measured, and the relative potency, the precision coefficient, and the upper and lower limits of the relative potency at the 95% confidence limit were calculated. In addition, bioassay (by the ovarian ascorbic acid reduction method and the anterior prostate lobe weighing method) was performed on the same samples, and its precision was compared with that of RIA. In these examinations, the upper and lower limits of the relative potency at the 95% confidence limit were near each other for RIA, while in bioassay a considerable difference was observed between the upper and lower limits. The necessity of standardization and systematization of the statistical procedures for increasing the precision of RIA is pointed out. (JA)
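For reference, the logit linearization mentioned as the third method transforms the bound fraction as

\[
\operatorname{logit}\!\left(\frac{B}{T}\right) \;=\; \ln\!\left( \frac{B/T}{1 - B/T} \right),
\]

which is approximately linear in log dose over the working range of the assay, so standard parallel-line potency calculations can be applied to the transformed standard and unknown curves.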
Statistics and Title VII Proof: Prima Facie Case and Rebuttal.
ERIC Educational Resources Information Center
Whitten, David
1978-01-01
The method and means by which statistics can raise a prima facie case of Title VII violation are analyzed. A standard is identified that can be applied to determine whether a statistical disparity is sufficient to shift the burden to the employer to rebut a prima facie case of discrimination. (LBH)
Accuracy assessment with complex sampling designs
Raymond L. Czaplewski
2010-01-01
A reliable accuracy assessment of remotely sensed geospatial data requires a sufficiently large probability sample of expensive reference data. Complex sampling designs reduce cost or increase precision, especially with regional, continental and global projects. The General Restriction (GR) Estimator and the Recursive Restriction (RR) Estimator separate a complex...
Some Automated Cartography Developments at the Defense Mapping Agency.
1981-01-01
on a pantographic router, creating a laminate step model which was moulded in plaster for carving into a terrain model. This section will trace DMA's... offering economical automation. Precision flatbed Concord plotters were brought into DMA with sufficiently programmable control computers to perform these...
World Class Schools: An Evolving Concept.
ERIC Educational Resources Information Center
Jenkins, John M., Ed.; And Others
The concept of "world class," often used in reference to education, lacks a precise, universal definition. This book presents case studies of exemplary schools. The foreword by Fenwick W. English presents a developmental concept of world-class education, in which fair and comparable standards, with sufficient room for sociocultural…
A Dynamic Precision Evaluation Method for the Star Sensor in the Stellar-Inertial Navigation System.
Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang
2017-06-28
Integrating the advantages of an INS (inertial navigation system) and the star sensor, the stellar-inertial navigation system has been used for a wide variety of applications. The star sensor is a high-precision attitude measurement instrument; therefore, determining how to validate its accuracy is critical to guaranteeing its practical precision. The dynamic precision evaluation of the star sensor is more difficult than a static precision evaluation because of dynamic reference values and other effects. This paper proposes a dynamic precision verification method for the star sensor, with the aid of an inertial navigation device, to realize real-time attitude accuracy measurement. Based on the gold-standard reference generated by the star simulator, the altitude and azimuth angle errors of the star sensor are calculated as the evaluation criteria. To diminish the impact of factors such as sensor drift and device imperfections, the innovative aspect of this method is to employ the static accuracy as the basis for comparison. If the dynamic results are as good as the static results, which have accuracy comparable to the single star sensor's precision, the practical precision of the star sensor is sufficiently high to meet the requirements of the system specification. The experiments demonstrate the feasibility and effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Ribatet, M.; Stübi, R.; Weihs, P.; Holawe, F.; Peter, T.; Davison, A. C.
2009-04-01
Over the last few decades negative trends in stratospheric ozone have been studied because of the direct link between decreasing stratospheric ozone and increasing surface UV-radiation. Recently a discussion on ozone recovery has begun. Long-term measurements of total ozone extending back earlier than 1958 are limited and only available from a few stations in the northern hemisphere. The world's longest total ozone record is available from Arosa, Switzerland (Staehelin et al., 1998a,b). At this site total ozone measurements have been made since late 1926 through the present day. Within this study (Rieder et al., 2009) new tools from extreme value theory (e.g. Coles, 2001; Ribatet, 2007) are applied to select mathematically well-defined thresholds for extreme low and extreme high total ozone. A heavy-tail focused approach is used by fitting the Generalized Pareto Distribution (GPD) to the Arosa time series. Asymptotic arguments (Pickands, 1975) justify the use of the GPD for modeling exceedances over a sufficiently high (or below a sufficiently low) threshold (Coles, 2001). More precisely, the GPD is the limiting distribution of normalized excesses over a threshold, as the threshold approaches the endpoint of the distribution. In practice, GPD parameters are fitted, to exceedances by maximum likelihood or other methods - such as the probability weighted moments. A preliminary step consists in defining an appropriate threshold for which the asymptotic GPD approximation holds. Suitable tools for threshold selection as the MRL-plot (mean residual life plot) and TC-plot (stability plot) from the POT-package (Ribatet, 2007) are presented. The frequency distribution of extremes in low (termed ELOs) and high (termed EHOs) total ozone and their influence on the long-term changes in total ozone are analyzed. Further it is shown that from the GPD-model the distribution of so-called ozone mini holes (e.g. Bojkov and Balis, 2001) can be precisely estimated and that the "extremes concept" provides new information on the data distribution and variability within the Arosa record as well as on the influence of ELOs and EHOs on the long-term trends of the ozone time series. References: Bojkov, R. D., and Balis, D.S.: Characteristics of episodes with extremely low ozone values in the northern middle latitudes 1975-2000, Ann. Geophys., 19, 797-807, 2001. Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Pickands, J.: Statistical inference using extreme order statistics, Ann. Stat., 3, 1, 119-131, 1975. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Stübi, R., Weihs, P., Holawe, F., and M. Ribatet: From ozone mini holes and mini highs towards extreme value theory: New insights from extreme events and non stationarity, submitted to J. Geophys. Res., 2009. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998a. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998b.
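A minimal sketch of the peaks-over-threshold step described above, fitting a GPD to exceedances over a chosen threshold, is given below using scipy rather than the R POT package used in the study; threshold selection with MRL and stability plots, declustering and seasonality handling are not reproduced here.

```python
import numpy as np
from scipy.stats import genpareto

def fit_pot(total_ozone: np.ndarray, threshold: float) -> dict:
    """Fit a Generalized Pareto Distribution to exceedances over `threshold`
    (for extreme highs; negate the series and threshold to model extreme lows)."""
    exceedances = total_ozone[total_ozone > threshold] - threshold
    shape, _, scale = genpareto.fit(exceedances, floc=0.0)    # location fixed at 0
    return {"shape": shape, "scale": scale,
            "rate": exceedances.size / total_ozone.size,      # exceedance rate
            "n_exceedances": int(exceedances.size)}

def return_level(fit: dict, m_observations: float, threshold: float) -> float:
    """Level exceeded on average once every m observations (Coles, 2001)."""
    xi, sigma, zeta = fit["shape"], fit["scale"], fit["rate"]
    if abs(xi) < 1e-9:                                        # exponential tail limit
        return threshold + sigma * np.log(m_observations * zeta)
    return threshold + (sigma / xi) * ((m_observations * zeta) ** xi - 1.0)
```

The fitted shape and scale parameters, together with the exceedance rate, are what allow the frequency of ozone mini holes and mini highs to be estimated directly from the tail model rather than from counting individual events.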
NASA Astrophysics Data System (ADS)
Stapp, Henry P.
2011-11-01
The principle of sufficient reason asserts that anything that happens does so for a reason: no definite state of affairs can come into being unless there is a sufficient reason why that particular thing should happen. This principle is usually attributed to Leibniz, although the first recorded Western philosopher to use it was Anaximander of Miletus. The demand that nature be rational, in the sense that it be compatible with the principle of sufficient reason, conflicts with a basic feature of contemporary orthodox physical theory, namely the notion that nature's response to the probing action of an observer is determined by pure chance, and hence on the basis of absolutely no reason at all. This appeal to pure chance can be deemed to have no rational fundamental place in reason-based Western science. It is argued here, on the basis of the other basic principles of quantum physics, that in a world that conforms to the principle of sufficient reason, the usual quantum statistical rules will naturally emerge at the pragmatic level, in cases where the reason behind nature's choice of response is unknown, but that the usual statistics can become biased in an empirically manifest way when the reason for the choice is empirically identifiable. It is shown here that if the statistical laws of quantum mechanics were to be biased in this way then the basically forward-in-time unfolding of empirical reality described by orthodox quantum mechanics would generate the appearances of backward-time-effects of the kind that have been reported in the scientific literature.
D'Agostino, M F; Sanz, J; Martínez-Castro, I; Giuffrè, A M; Sicari, V; Soria, A C
2014-07-01
Statistical analysis has been used for the first time to evaluate the dispersion of quantitative data in the solid-phase microextraction (SPME) followed by gas chromatography-mass spectrometry (GC-MS) analysis of blackberry (Rubus ulmifolius Schott) volatiles with the aim of improving their precision. Experimental and randomly simulated data were compared using different statistical parameters (correlation coefficients, Principal Component Analysis loadings and eigenvalues). Non-random factors were shown to significantly contribute to total dispersion; groups of volatile compounds could be associated with these factors. A significant improvement of precision was achieved when considering percent concentration ratios, rather than percent values, among those blackberry volatiles with a similar dispersion behavior. As a novelty over previous references, and to complement this main objective, the presence of non-random dispersion trends in data from simple blackberry model systems was evidenced. Although the influence of the type of matrix on data precision was proved, a better understanding of the dispersion patterns in real samples could not be obtained from the model systems. The approach used here was validated for the first time through the multicomponent characterization of Italian blackberries from different harvest years. Copyright © 2014 Elsevier B.V. All rights reserved.
Comparison of Accuracy Between a Conventional and Two Digital Intraoral Impression Techniques.
Malik, Junaid; Rodriguez, Jose; Weisbloom, Michael; Petridis, Haralampos
To compare the accuracy (ie, precision and trueness) of full-arch impressions fabricated using either a conventional polyvinyl siloxane (PVS) material or one of two intraoral optical scanners. Full-arch impressions of a reference model were obtained using addition silicone impression material (Aquasil Ultra; Dentsply Caulk) and two optical scanners (Trios, 3Shape, and CEREC Omnicam, Sirona). Surface matching software (Geomagic Control, 3D Systems) was used to superimpose the scans within groups to determine the mean deviations in precision and trueness (μm) between the scans, which were calculated for each group and compared statistically using one-way analysis of variance with post hoc Bonferroni (trueness) and Games-Howell (precision) tests (IBM SPSS ver 24, IBM UK). Qualitative analysis was also carried out from three-dimensional maps of differences between scans. Means and standard deviations (SD) of deviations in precision for conventional, Trios, and Omnicam groups were 21.7 (± 5.4), 49.9 (± 18.3), and 36.5 (± 11.12) μm, respectively. Means and SDs for deviations in trueness were 24.3 (± 5.7), 87.1 (± 7.9), and 80.3 (± 12.1) μm, respectively. The conventional impression showed statistically significantly improved mean precision (P < .006) and mean trueness (P < .001) compared to both digital impression procedures. There were no statistically significant differences in precision (P = .153) or trueness (P = .757) between the digital impressions. The qualitative analysis revealed local deviations along the palatal surfaces of the molars and incisal edges of the anterior teeth of < 100 μm. Conventional full-arch PVS impressions exhibited improved mean accuracy compared to two direct optical scanners. No significant differences were found between the two digital impression methods.
Franesqui, Miguel A; Yepes, Jorge; García-González, Cándida
2017-08-01
This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled "Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves" (Franesqui et al., 2017) [1].
Evaluation of Cross-Protocol Stability of a Fully Automated Brain Multi-Atlas Parcellation Tool.
Liang, Zifei; He, Xiaohai; Ceritoglu, Can; Tang, Xiaoying; Li, Yue; Kutten, Kwame S; Oishi, Kenichi; Miller, Michael I; Mori, Susumu; Faria, Andreia V
2015-01-01
Brain parcellation tools based on multiple-atlas algorithms have recently emerged as a promising method with which to accurately define brain structures. When dealing with data from various sources, it is crucial that these tools are robust for many different imaging protocols. In this study, we tested the robustness of a multiple-atlas, likelihood fusion algorithm using Alzheimer's Disease Neuroimaging Initiative (ADNI) data with six different protocols, comprising three manufacturers and two magnetic field strengths. The entire brain was parceled into five different levels of granularity. In each level, which defines a set of brain structures, ranging from eight to 286 regions, we evaluated the variability of brain volumes related to the protocol, age, and diagnosis (healthy or Alzheimer's disease). Our results indicated that, with proper pre-processing steps, the impact of different protocols is minor compared to biological effects, such as age and pathology. A precise knowledge of the sources of data variation enables sufficient statistical power and ensures the reliability of an anatomical analysis when using this automated brain parcellation tool on datasets from various imaging protocols, such as clinical databases.
Hong, Young-seoub; Ye, Byeong-jin; Kim, Yu-mi; Kim, Byoung-gwon; Kang, Gyeong-hui; Kim, Jeong-jin; Song, Ki-hoon; Kim, Young-hun
2017-01-01
Recent epidemiological studies have reported adverse health effects, including skin cancer, due to low concentrations of arsenic in drinking water. We conducted a study to assess whether ground water contaminated with low concentrations of arsenic affected the health of the residents who consumed it. For precise biomonitoring results, the inorganic (trivalent arsenite (As III) and pentavalent arsenate (As V)) and organic forms (monomethylarsonate (MMA) and dimethylarsinate (DMA)) of arsenic were separately quantified in urine samples by combining high-performance liquid chromatography and inductively coupled plasma mass spectrometry. In conclusion, urinary As III, As V, MMA, and hair arsenic concentrations were significantly higher in residents who consumed arsenic contaminated ground water than in control participants who consumed tap water. However, most health screening results did not show a statistically significant difference between exposed and control subjects. We presume that the elevated arsenic concentrations may not be sufficient to cause detectable health effects. Consumption of arsenic contaminated ground water could result in elevated urinary organic and inorganic arsenic concentrations. We recommend immediate discontinuation of ground water supply in this area for the safety of the residents. PMID:29186890
Hybrid Approaches and Industrial Applications of Pattern Recognition,
1980-10-01
emphasized that the probability distribution in (9) is correct only under the assumption that P(ω|x) is known exactly. In practice this assumption will... sufficient precision. The alternative would be to take the probability distribution of estimates of P(ω|x) into account in the analysis. However, from the...
Thin-Slice Perception Develops Slowly
ERIC Educational Resources Information Center
Balas, Benjamin; Kanwisher, Nancy; Saxe, Rebecca
2012-01-01
Body language and facial gesture provide sufficient visual information to support high-level social inferences from "thin slices" of behavior. Given short movies of nonverbal behavior, adults make reliable judgments in a large number of tasks. Here we find that the high precision of adults' nonverbal social perception depends on the slow…
Effective environmental policy decisions benefit from stream habitat information that is accurate, precise, and relevant. The recent National Wadeable Streams Assessment (NWSA) carried out by the U.S. EPA required physical habitat information sufficiently comprehensive to facilit...
Vibration Transmission through Rolling Element Bearings in Geared Rotor Systems
1990-11-01
...4.8 Concluding Remarks... V. Statistical Energy Analysis... and dynamic finite element techniques are used to develop the discrete vibration models, while the statistical energy analysis method is used for the broad... bearing system studies, geared rotor system studies, and statistical energy analysis. Each chapter is self-sufficient since it is written in a...
DOT National Transportation Integrated Search
2010-03-01
This document provides guidance for using the ACS Statistical Analyzer. It is an Excel-based template for users of estimates from the American Community Survey (ACS) to assess the precision of individual estimates and to compare pairs of estimates fo...
Ground control requirements for precision processing of ERTS images
Burger, Thomas C.
1973-01-01
With the successful flight of the ERTS-1 satellite, orbital height images are available for precision processing into products such as 1:1,000,000-scale photomaps and enlargements up to 1:250,000 scale. In order to maintain positional error below 100 meters, control points for the precision processing must be carefully selected, clearly definitive on photos in both X and Y. Coordinates of selected control points measured on existing 7½-minute and 15-minute standard maps provide sufficient accuracy for any space imaging system thus far defined. This procedure references the points to accepted horizontal and vertical datums. Maps as small as 1:250,000 scale can be used as source material for coordinates, but to maintain the desired accuracy, maps of 1:100,000 and larger scale should be used when available.
Precision measurement of the three 2(3)P(J) helium fine structure intervals.
Zelevinsky, T; Farkas, D; Gabrielse, G
2005-11-11
The three 2(3)P fine structure intervals of 4He are measured at an improved accuracy that is sufficient to test two-electron QED theory and to determine the fine structure constant alpha to 14 parts in 10(9). The more accurate determination of alpha, to a precision higher than attained with the quantum Hall and Josephson effects, awaits the reconciliation of two inconsistent theoretical calculations now being compared term by term. A low-pressure helium discharge presents experimental uncertainties quite different from those of earlier measurements and allows direct measurements of light pressure shifts.
Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling
ERIC Educational Resources Information Center
Banjanovic, Erin S.; Osborne, Jason W.
2016-01-01
Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistical testing. Although confidence intervals have been recommended by scholars for many years,…
Evaluation on the use of cerium in the NBL Titrimetric Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zebrowski, J.P.; Orlowicz, G.J.; Johnson, K.D.
An alternative to potassium dichromate as titrant in the New Brunswick Laboratory Titrimetric Method for uranium analysis was sought, since chromium in the waste makes disposal difficult. Substitution of a ceric-based titrant was statistically evaluated. Analysis of the data indicated statistically equivalent precisions for the two methods, but a significant overall bias of +0.035% for the ceric titrant procedure. The cause of the bias was investigated, alterations to the procedure were made, and a second statistical study was performed. This second study revealed no statistically significant bias, nor any analyst-to-analyst variation in the ceric titration procedure. A statistically significant day-to-day variation was detected, but this was physically small (0.015%) and was only detected because of the within-day precision of the method. The added mean and standard deviation of the %RD for a single measurement was found to be 0.031%. A comparison with quality control blind dichromate titration data again indicated similar overall precision. The effects of ten elements (Co, Ti, Cu, Ni, Na, Mg, Gd, Zn, Cd, and Cr) on the ceric titration's performance were determined; in previous work at NBL these impurities did not interfere with the potassium dichromate titrant. This study indicated similar results for the ceric titrant, with the exception of Ti. All the elements (excluding Ti and Cr) caused no statistically significant bias in uranium measurements at levels of 10 mg impurity per 20-40 mg uranium. The presence of Ti was found to cause a bias of -0.05%; this is attributed to the presence of sulfate ions, resulting in precipitation of titanium sulfate and occlusion of uranium. A negative bias of 0.012% was also statistically observed in the samples containing chromium impurities.
Towards Precision Spectroscopy of Baryonic Resonances
NASA Astrophysics Data System (ADS)
Döring, Michael; Mai, Maxim; Rönchen, Deborah
2017-01-01
Recent progress in baryon spectroscopy is reviewed. In a common effort, various groups have analyzed a set of new high-precision polarization observables from ELSA. The Jülich-Bonn group has finalized the analysis of pion-induced meson-baryon production, the photoproduction of pions and eta mesons, and (almost) the KΛ final state. As data become more precise, statistical aspects in the analysis of excited baryons become increasingly relevant, and several advances in this direction are proposed.
Towards precision spectroscopy of baryonic resonances
Doring, Michael; Mai, Maxim; Ronchen, Deborah
2017-01-26
Recent progress in baryon spectroscopy is reviewed. In a common effort, various groups have analyzed a set of new high-precision polarization observables from ELSA. The Julich-Bonn group has finalized the analysis of pion-induced meson-baryon production, the photoproduction of pions and eta mesons, and (almost) the KΛ final state. Lastly, as data become more precise, statistical aspects in the analysis of excited baryons become increasingly relevant, and several advances in this direction are proposed.
Bragança, Sara; Arezes, Pedro; Carvalho, Miguel; Ashdown, Susan P; Castellucci, Ignacio; Leão, Celina
2018-01-01
Collecting anthropometric data for real-life applications demands a high degree of precision and reliability, so it is important to test new equipment that will be used for data collection. The objective was to compare two anthropometric data gathering techniques - manual methods and a Kinect-based 3D body scanner - to understand which of them gives more precise and reliable results. The data were collected using a measuring tape and a Kinect-based 3D body scanner. Precision was evaluated by considering the regular and relative Technical Error of Measurement, and reliability was evaluated using the Intraclass Correlation Coefficient, Reliability Coefficient, Standard Error of Measurement and Coefficient of Variation. The results obtained showed that both methods presented better results for reliability than for precision. Both methods showed relatively good results for these two variables; however, manual methods had better results for some body measurements. Despite being considered sufficiently precise and reliable for certain applications (e.g. the apparel industry), the 3D scanner tested showed, for almost every anthropometric measurement, a different result than the manual technique. Many companies design their products based on data obtained from 3D scanners; hence, understanding the precision and reliability of the equipment used is essential to obtain feasible results.
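For readers unfamiliar with the precision statistics named above, here is a minimal sketch of the Technical Error of Measurement (TEM), its relative form, and the coefficient of reliability for two repeated measurements per subject. The data values are invented; the study's ICC and other reliability indices are not reproduced here.

```python
# Sketch of standard anthropometric precision summaries for paired repeats:
# absolute TEM, relative TEM (%), and the coefficient of reliability.
# All measurement values below are invented.
import numpy as np

m1 = np.array([92.1, 88.4, 101.3, 95.0, 84.7])   # 1st measurement (e.g. cm)
m2 = np.array([91.6, 89.0, 100.8, 95.6, 85.1])   # 2nd measurement, same subjects

d = m1 - m2
n = d.size

tem = np.sqrt(np.sum(d**2) / (2 * n))             # absolute TEM
rel_tem = 100 * tem / np.mean(np.r_[m1, m2])      # relative TEM (%)
total_var = np.var(np.r_[m1, m2], ddof=1)
r = 1 - tem**2 / total_var                        # coefficient of reliability

print(f"TEM={tem:.2f}, %TEM={rel_tem:.2f}, R={r:.3f}")
```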
Classification of LIDAR Data for Generating a High-Precision Roadway Map
NASA Astrophysics Data System (ADS)
Jeong, J.; Lee, I.
2016-06-01
The generation of highly precise maps is growing in importance with the development of autonomous driving vehicles. A highly precise map offers centimetre-level precision, unlike existing commercial maps with metre-level precision. It is important to understand road environments and make decisions for autonomous driving, since robust localization is one of the critical challenges for the autonomous driving car. One source of data is a Lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities, and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a Lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination of a feature descriptor and a classification algorithm in machine learning. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm from machine learning is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and the approach will be utilized to generate a highly precise road map.
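A toy version of the classification step described in this abstract might look as follows: normal-based geometric features per point feeding a Support Vector Machine. The feature definitions, class labels, and data are illustrative assumptions, not the authors' descriptor or dataset.

```python
# Toy sketch: per-point geometric features (e.g. derived from surface normals)
# classified with an SVM. Features and labels are invented for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 1000
# Fake per-point features: [normal_z, height above ground, local roughness]
ground = np.c_[rng.normal(0.98, 0.02, n), rng.normal(0.0, 0.1, n), rng.normal(0.02, 0.01, n)]
poles  = np.c_[rng.normal(0.10, 0.05, n), rng.normal(2.0, 1.0, n), rng.normal(0.05, 0.02, n)]
X = np.vstack([ground, poles])
y = np.r_[np.zeros(n), np.ones(n)]                # 0 = road surface, 1 = pole-like object

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```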
Turnlund, Judith R; Keyes, William R
2002-09-01
Stable isotopes are used with increasing frequency to trace the metabolic fate of minerals in human nutrition studies. The precision of the analytical methods used must be sufficient to permit reliable measurement of low enrichments and the accuracy should permit comparisons between studies. Two methods most frequently used today are thermal ionization mass spectrometry (TIMS) and inductively coupled plasma mass spectrometry (ICP-MS). This study was conducted to compare the two methods. Multiple natural samples of copper, zinc, molybdenum, and magnesium were analyzed by both methods to compare their internal and external precision. Samples with a range of isotopic enrichments that were collected from human studies or prepared from standards were analyzed to compare their accuracy. TIMS was more precise and accurate than ICP-MS. However, the cost, ease, and speed of analysis were better for ICP-MS. Therefore, for most purposes, ICP-MS is the method of choice, but when the highest degrees of precision and accuracy are required and when enrichments are very low, TIMS is the method of choice.
Spatial variability effects on precision and power of forage yield estimation
USDA-ARS?s Scientific Manuscript database
Spatial analyses of yield trials are important, as they adjust cultivar means for spatial variation and improve the statistical precision of yield estimation. While the relative efficiency of spatial analysis has been frequently reported in several yield trials, its application on long-term forage y...
NASA Astrophysics Data System (ADS)
Ford, Eric B.
2009-05-01
We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600, when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
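The core computation the abstract describes can be sketched on the CPU with NumPy: solve Kepler's equation by Newton iteration for many mean anomalies, build a single-planet radial velocity model, and evaluate a chi-squared statistic. This is an illustration of the idea only; the paper's CUDA implementation and mixed-precision details are not reproduced, and all orbital parameters and data are invented.

```python
# CPU/NumPy sketch: solve M = E - e*sin(E) by Newton iteration, form a
# single-planet radial-velocity model, and evaluate chi^2 against fake data.
import numpy as np

def kepler_E(M, e, n_iter=8):
    """Eccentric anomaly E from mean anomaly M (radians) and eccentricity e."""
    E = M + e * np.sin(M)                      # reasonable starting guess
    for _ in range(n_iter):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def rv_model(t, P, e, K, omega, tp, gamma):
    """Single-planet radial velocity at times t (angles in radians)."""
    M = 2.0 * np.pi * (t - tp) / P
    E = kepler_E(np.mod(M, 2.0 * np.pi), e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

# chi^2 for one trial model against synthetic observations
t = np.linspace(0.0, 200.0, 256)
rv_obs = rv_model(t, 38.0, 0.1, 55.0, 0.7, 3.0, 0.0) + np.random.normal(0, 3.0, t.size)
sigma = 3.0
rv_pred = rv_model(t, 38.0, 0.1, 55.0, 0.7, 3.0, 0.0)
chi2 = np.sum(((rv_obs - rv_pred) / sigma) ** 2)
print("chi^2 =", chi2)
```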
Pretorius, Etheresia
2017-01-01
The latest statistics from the 2016 heart disease and stroke statistics update show that cardiovascular disease is the leading global cause of death, currently accounting for more than 17.3 million deaths per year. Type II diabetes is also on the rise with out-of-control numbers. To address these pandemics, we need to treat patients using an individualized patient care approach, but simultaneously gather data to support the precision medicine initiative. Last year the NIH announced the precision medicine initiative to generate novel knowledge regarding diseases, with a near-term focus on cancers, followed by a longer-term aim applicable to a whole range of health applications and diseases. The focus of this paper is to suggest a combined effort between the precision medicine initiative, researchers, and clinicians, whereby novel techniques could immediately make a difference in patient care and, in the long term, add to the knowledge base for precision medicine. We discuss the intricate relationship between individualized patient care and precision medicine and the current thoughts regarding which data are actually suitable for precision medicine data gathering. The uses of viscoelastic techniques in precision medicine are discussed, and how these techniques might give novel perspectives on the success of treatment regimens for cardiovascular patients is explored. Thrombo-embolic stroke, rheumatoid arthritis and type II diabetes are used as examples of diseases where precision medicine and a patient-orientated approach can possibly be implemented. In conclusion, it is suggested that only if all role players work together, embracing a new way of thinking about treating and managing cardiovascular disease and diabetes, will we be able to adequately address these out-of-control conditions. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stapp, Henry P.
2011-05-10
The principle of sufficient reason asserts that anything that happens does so for a reason: no definite state of affairs can come into being unless there is a sufficient reason why that particular thing should happen. This principle is usually attributed to Leibniz, although the first recorded Western philosopher to use it was Anaximander of Miletus. The demand that nature be rational, in the sense that it be compatible with the principle of sufficient reason, conflicts with a basic feature of contemporary orthodox physical theory, namely the notion that nature's response to the probing action of an observer is determined by pure chance, and hence on the basis of absolutely no reason at all. This appeal to pure chance can be deemed to have no rational fundamental place in reason-based Western science. It is argued here, on the basis of the other basic principles of quantum physics, that in a world that conforms to the principle of sufficient reason, the usual quantum statistical rules will naturally emerge at the pragmatic level, in cases where the reason behind nature's choice of response is unknown, but that the usual statistics can become biased in an empirically manifest way when the reason for the choice is empirically identifiable. It is shown here that if the statistical laws of quantum mechanics were to be biased in this way then the basically forward-in-time unfolding of empirical reality described by orthodox quantum mechanics would generate the appearances of backward-time-effects of the kind that have been reported in the scientific literature.
Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete
2008-08-20
Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.
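As a sketch of the design-of-experiments idea, the snippet below builds a two-level 2^(5-1) fractional factorial (defining relation E = ABCD) for five hypothetical incubation-time factors. The factor names and levels are invented; the paper's split-plot structure and the Tecan EVO specifics are not modeled here.

```python
# Minimal fractional-factorial sketch: 16 runs instead of 32 for five
# two-level factors, using the generator E = A*B*C*D (resolution V design).
# Factor names and minute values are illustrative assumptions.
import itertools
import numpy as np

factors = ["coat", "block", "primary", "secondary", "substrate"]  # minutes
low_high = {"coat": (30, 60), "block": (30, 60), "primary": (45, 90),
            "secondary": (30, 60), "substrate": (10, 20)}

runs = []
for a, b, c, d in itertools.product([-1, 1], repeat=4):
    e = a * b * c * d                      # generator: E = ABCD
    runs.append((a, b, c, d, e))

design = np.array(runs)                    # 16 coded runs
print("run  " + "  ".join(factors))
for i, row in enumerate(design, 1):
    levels = [low_high[f][(s + 1) // 2] for f, s in zip(factors, row)]
    print(f"{i:3d}  " + "  ".join(f"{v:3d}" for v in levels))
```

A split-plot analysis would additionally group runs by hard-to-change factors, which this sketch omits.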
NASA Astrophysics Data System (ADS)
Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.
2018-02-01
In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.
Correcting Measurement Error in Latent Regression Covariates via the MC-SIMEX Method
ERIC Educational Resources Information Center
Rutkowski, Leslie; Zhou, Yan
2015-01-01
Given the importance of large-scale assessments to educational policy conversations, it is critical that subpopulation achievement is estimated reliably and with sufficient precision. Despite this importance, biased subpopulation estimates have been found to occur when variables in the conditioning model side of a latent regression model contain…
49 CFR 395.16 - Electronic on-board recording devices.
Code of Federal Regulations, 2011 CFR
2011-10-01
... “sufficiently precise,” for purposes of this paragraph means the nearest city, town or village. (3) When the CMV... driving, and where released from work), the name of the nearest city, town, or village, with State... password) that identifies the driver or to provide other information (such as smart cards, biometrics) that...
We the Peoples: When American Education Began
ERIC Educational Resources Information Center
Warren, Donald
2007-01-01
"The accomplishments of Indians and their actual place in the story of the United States have never been remotely touched by ... [most] historians. The major reason for this omission is that a substantial number of practicing historians simply do not know the source documents with sufficient precision to make sense of them; ... They spend a…
Blowing Polymer Bubbles in an Acoustic Levitator
NASA Technical Reports Server (NTRS)
Lee, M. C.
1985-01-01
In new manufacturing process, small gas-filled polymer shells made by injecting gas directly into acoustically levitated prepolymer drops. New process allows sufficient time for precise control of shell geometry. Applications foreseen in fabrication of deuterium/tritium-filled fusion targets and in pharmaceutical coatings. New process also useful in glass blowing and blow molding.
The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival
Kordjazi, Ziya; Frusher, Stewart; Buxton, Colin; Gardner, Caleb; Bird, Tomas
2016-01-01
Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required for obtaining precise survival estimations for males and females, separately. Reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was sufficient for management of the fishery. PMID:26990561
Guo, Nannan; Soden, Marta E; Herber, Charlotte; Kim, Michael TaeWoo; Besnard, Antoine; Lin, Paoyan; Ma, Xiang; Cepko, Constance L; Zweifel, Larry S; Sahay, Amar
2018-05-01
Memories become less precise and generalized over time as memory traces reorganize in hippocampal-cortical networks. Increased time-dependent loss of memory precision is characterized by an overgeneralization of fear in individuals with post-traumatic stress disorder (PTSD) or age-related cognitive impairments. In the hippocampal dentate gyrus (DG), memories are thought to be encoded by so-called 'engram-bearing' dentate granule cells (eDGCs). Here we show, using rodents, that contextual fear conditioning increases connectivity between eDGCs and inhibitory interneurons (INs) in the downstream hippocampal CA3 region. We identify actin-binding LIM protein 3 (ABLIM3) as a mossy-fiber-terminal-localized cytoskeletal factor whose levels decrease after learning. Downregulation of ABLIM3 expression in DGCs was sufficient to increase connectivity with CA3 stratum lucidum INs (SLINs), promote parvalbumin (PV)-expressing SLIN activation, enhance feedforward inhibition onto CA3 and maintain a fear memory engram in the DG over time. Furthermore, downregulation of ABLIM3 expression in DGCs conferred conditioned context-specific reactivation of memory traces in hippocampal-cortical and amygdalar networks and decreased fear memory generalization at remote (i.e., distal) time points. Consistent with the observation of age-related hyperactivity of CA3, learning failed to increase DGC-SLIN connectivity in 17-month-old mice, whereas downregulation of ABLIM3 expression was sufficient to restore DGC-SLIN connectivity, increase PV+ SLIN activation and improve the precision of remote memories. These studies exemplify a connectivity-based strategy that targets a molecular brake of feedforward inhibition in DG-CA3 and may be harnessed to decrease time-dependent memory generalization in individuals with PTSD and improve memory precision in aging individuals.
Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations
NASA Astrophysics Data System (ADS)
Kozak, P.
2014-12-01
The Monte-Carlo method (method of statistical trials) as an application for processing meteor observations was developed in the author's Ph.D. thesis in 2005 and first used in his works in 2008. The idea of the method is that if we generate random values of the input data - the equatorial coordinates of the meteor head in a sequence of TV frames - in accordance with their statistical distributions, we can plot the probability density distributions for all of its kinematical parameters and obtain their mean values and dispersions. This also opens the theoretical possibility of refining the most important parameter - the geocentric velocity of a meteor - which has the greatest influence on the precision of the calculated elements of the meteor's heliocentric orbit. In the classical approach, the velocity vector was calculated in two stages: first, the vector direction was computed as the vector product of the poles of the meteor-trajectory great circles determined from the two observational points; then the absolute value of the velocity was calculated independently from each observational point, and one of the values was selected, for various reasons, as the final parameter. In the present method we propose to obtain the statistical distribution of the velocity's absolute value as the intersection of the two distributions corresponding to the velocity values obtained from the different points. We expect that such an approach will substantially increase the precision of the calculated meteor velocity and remove subjective inaccuracies.
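A minimal sketch of the proposed combination step, under invented numbers and error models: propagate input noise into a per-station velocity distribution by Monte Carlo, then form the "intersection" by multiplying the two estimated densities on a common grid and renormalizing.

```python
# Monte-Carlo sketch: combine two independent per-station velocity
# distributions by multiplying their estimated densities. All numbers and
# error models are invented stand-ins for the full astrometric reduction.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
n_trials = 20_000

# Velocity samples from each station's Monte-Carlo trials [km/s]
v1 = rng.normal(60.0, 1.5, n_trials)       # station 1
v2 = rng.normal(59.2, 2.5, n_trials)       # station 2

grid = np.linspace(50.0, 70.0, 1000)
dx = grid[1] - grid[0]
p1 = gaussian_kde(v1)(grid)
p2 = gaussian_kde(v2)(grid)

combined = p1 * p2                          # "intersection" of the two densities
combined /= combined.sum() * dx             # renormalize to a density

v_mean = np.sum(grid * combined) * dx
v_std = np.sqrt(np.sum((grid - v_mean) ** 2 * combined) * dx)
print(f"combined geocentric velocity: {v_mean:.2f} +/- {v_std:.2f} km/s")
```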
Spatio-temporal conditional inference and hypothesis tests for neural ensemble spiking precision
Harrison, Matthew T.; Amarasingham, Asohan; Truccolo, Wilson
2014-01-01
The collective dynamics of neural ensembles create complex spike patterns with many spatial and temporal scales. Understanding the statistical structure of these patterns can help resolve fundamental questions about neural computation and neural dynamics. Spatio-temporal conditional inference (STCI) is introduced here as a semiparametric statistical framework for investigating the nature of precise spiking patterns from collections of neurons that is robust to arbitrarily complex and nonstationary coarse spiking dynamics. The main idea is to focus statistical modeling and inference, not on the full distribution of the data, but rather on families of conditional distributions of precise spiking given different types of coarse spiking. The framework is then used to develop families of hypothesis tests for probing the spatio-temporal precision of spiking patterns. Relationships among different conditional distributions are used to improve multiple hypothesis testing adjustments and to design novel Monte Carlo spike resampling algorithms. Of special note are algorithms that can locally jitter spike times while still preserving the instantaneous peri-stimulus time histogram (PSTH) or the instantaneous total spike count from a group of recorded neurons. The framework can also be used to test whether first-order maximum entropy models with possibly random and time-varying parameters can account for observed patterns of spiking. STCI provides a detailed example of the generic principle of conditional inference, which may be applicable in other areas of neurostatistical analysis. PMID:25380339
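As a simplified illustration of the spike resampling idea, the sketch below implements a basic interval jitter: each spike is redrawn uniformly within its fixed window, so per-window spike counts are preserved. The paper's algorithms additionally preserve quantities such as the instantaneous PSTH or the population spike count; this basic variant does not.

```python
# Basic interval-jitter surrogate: each spike is redrawn uniformly within the
# fixed window it falls in, preserving per-window spike counts per neuron.
import numpy as np

def interval_jitter(spike_times, window=0.025, rng=None):
    """Return surrogate spike times; each spike stays inside its own window."""
    rng = np.random.default_rng() if rng is None else rng
    spike_times = np.asarray(spike_times)
    bins = np.floor(spike_times / window)            # window index of each spike
    surrogate = (bins + rng.random(spike_times.size)) * window
    return np.sort(surrogate)

spikes = np.array([0.012, 0.031, 0.034, 0.090, 0.121])   # seconds, invented
print(interval_jitter(spikes, window=0.025, rng=np.random.default_rng(0)))
```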
ERIC Educational Resources Information Center
Cassel, Russell N.
This paper relates educational and psychological statistics to certain "Research Statistical Tools" (RSTs) necessary to accomplish and understand general research in the behavioral sciences. Emphasis is placed on acquiring an effective understanding of the RSTs, and to this end they are ordered on a continuum scale in terms of individual…
Tataru, Paula; Hobolth, Asger
2011-12-05
Continuous-time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid, or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The R implementation of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore, we find that UNI is usually faster than EVD.
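The EXPM approach lends itself to a compact sketch: the integral of exp(Qs) B exp(Q(t-s)) ds over [0, t] appears as the upper-right block of the matrix exponential of the block matrix [[Q, B], [0, Q]] scaled by t (Van Loan's construction), and dividing elementwise by the end-point transition probabilities gives conditional expectations. The Python version below is an illustration with a toy rate matrix, not the authors' R implementation.

```python
# EXPM-style sketch: expected time spent in state i over [0, t], conditioned
# on the chain's end-points, via a block-matrix exponential.
import numpy as np
from scipy.linalg import expm

def expected_time_in_state(Q, t, i):
    n = Q.shape[0]
    B = np.zeros_like(Q)
    B[i, i] = 1.0                            # reward: occupancy of state i
    A = np.block([[Q, B], [np.zeros_like(Q), Q]])
    integral = expm(A * t)[:n, n:]           # integral of exp(Qs) B exp(Q(t-s)) ds
    P = expm(Q * t)                          # end-point transition probabilities
    return integral / P                      # entry (a, b): E[time in i | a -> b]

# Toy symmetric 3-state rate matrix (rows sum to zero)
Q = np.array([[-1.0, 0.5, 0.5],
              [0.5, -1.0, 0.5],
              [0.5, 0.5, -1.0]])
print(expected_time_in_state(Q, t=1.0, i=0))
```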
Rapid Prototyping Technology for Manufacturing GTE Turbine Blades
NASA Astrophysics Data System (ADS)
Balyakin, A. V.; Dobryshkina, E. M.; Vdovin, R. A.; Alekseev, V. P.
2018-03-01
The conventional approach to manufacturing turbine blades by investment casting is expensive and time-consuming, as it takes a lot of time to make geometrically precise and complex wax patterns. Turbine blade manufacturing in pilot production can be sped up by accelerating the casting process while keeping the geometric precision of the final product. This paper compares the rapid prototyping method (casting the wax pattern composition into elastic silicone molds) to the conventional technology. Analysis of the size precision of blade casts shows that silicone-mold casting features sufficient geometric precision. Thus, this method for making wax patterns can be a cost-efficient solution for small-batch or pilot production of turbine blades for gas-turbine units (GTU) and gas-turbine engines (GTE). The paper demonstrates how additive technology and thermographic analysis can speed up the cooling of wax patterns in silicone molds. This is possible at an optimal temperature and solidification time, which make the process more cost-efficient while keeping the geometric quality of the final product.
Deciphering the MSSM Higgs mass at future hadron colliders
Agrawal, Prateek; Fan, JiJi; Reece, Matthew; ...
2017-06-06
Here, future hadron colliders will have a remarkable capacity to discover massive new particles, but their capabilities for precision measurements of couplings that can reveal underlying mechanisms have received less study. In this work we study the capability of future hadron colliders to shed light on a precise, focused question: is the Higgs mass of 125 GeV explained by the MSSM? If supersymmetry is realized near the TeV scale, a future hadron collider could produce huge numbers of gluinos and electroweakinos. We explore whether precision measurements of their properties could allow inference of the scalar masses and tan β with sufficient accuracy to test whether physics beyond the MSSM is needed to explain the Higgs mass. We also discuss dark matter direct detection and precision Higgs physics as complementary probes of tan β. For concreteness, we focus on the mini-split regime of MSSM parameter space at a 100 TeV pp collider, with scalar masses ranging from 10s to about 1000 TeV.
Knollmann, Friedrich D; Kumthekar, Rohan; Fetzer, David; Socinski, Mark A
2014-03-01
We set out to investigate whether volumetric tumor measurements allow for a prediction of treatment response, as measured by patient survival, in patients with advanced non-small-cell lung cancer (NSCLC). Patients with nonresectable NSCLC (stage III or IV, n = 100) who were repeatedly evaluated for treatment response by computed tomography (CT) were included in a Health Insurance Portability and Accountability Act (HIPAA)-compliant retrospective study. Tumor response was measured by comparing tumor volumes over time. Patient survival was compared with Response Evaluation Criteria in Solid Tumors (RECIST) using Kaplan-Meier survival statistics and Cox regression analysis. The median overall patient survival was 553 days (standard error, 146 days); for patients with stage III NSCLC, it was 822 days, and for patients with stage IV disease, 479 days. The survival differences were not statistically significant (P = .09). According to RECIST, 5 patients demonstrated complete response, 39 partial response, 44 stable disease, and 12 progressive disease. Patient survival was not significantly associated with RECIST class, the change of the sum of tumor diameters (P = .98), nor the change of the sum of volumetric tumor dimensions (P = .17). In a group of 100 patients with advanced-stage NSCLC, neither volumetric CT measurements of changes in tumor size nor RECIST class significantly predicted patient survival. This observation suggests that size response may not be a sufficiently precise surrogate marker of success to steer treatment decisions in individual patients. Copyright © 2014 Elsevier Inc. All rights reserved.
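To make the survival comparison concrete, here is a minimal hand-rolled Kaplan-Meier estimator applied to two invented response groups. The grouping, numbers, and censoring pattern are assumptions; the study's RECIST classes, volumetric measurements, and Cox regression are not reproduced.

```python
# Minimal Kaplan-Meier estimator for right-censored survival data, compared
# across two invented "response" groups. All data below are synthetic.
import numpy as np

def kaplan_meier(times, events):
    """Return (event times, survival probabilities) for right-censored data."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    uniq = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(times >= t)
        deaths = np.sum((times == t) & (events == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def km_median(times, events):
    t, s = kaplan_meier(times, events)
    below = t[s <= 0.5]
    return below[0] if below.size else np.inf

rng = np.random.default_rng(3)
t_resp, t_nonr = rng.exponential(700, 50), rng.exponential(450, 50)   # days
e_resp, e_nonr = rng.random(50) < 0.8, rng.random(50) < 0.8           # 1 = death observed

print("median survival, responders    :", round(km_median(t_resp, e_resp), 1))
print("median survival, non-responders:", round(km_median(t_nonr, e_nonr), 1))
```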
Stability of MINERVA Spectrograph's Instrumental Profile
NASA Astrophysics Data System (ADS)
Wilson, Maurice; Eastman, Jason; Johnson, John Asher
2018-01-01
For most Earth-like exoplanets, their physical properties cannot be determined without high precision photometry and radial velocities. For this reason, the MINiature Exoplanet Radial Velocity Array (MINERVA) was designed to obtain photometric and radial velocity measurements with precision sufficient for finding, confirming, and characterizing rocky planets around our nearest stars. MINERVA is an array of four robotic telescopes located on Mt. Hopkins in Arizona. We aim to improve our radial velocity precision with MINERVA by analyzing the stability of our spectrograph’s instrumental profile. We have taken several spectra of the daytime sky each month and have checked for variability over a span of six months. We investigate the variation over time to see if it correlates with temperature and pressure changes in the spectrograph. We discuss the implications of our daytime sky spectra and how the instrumental profile’s stability may be improved.
Space Geodesy and the New Madrid Seismic Zone
NASA Astrophysics Data System (ADS)
Smalley, Robert; Ellis, Michael A.
2008-07-01
One of the most contentious issues related to earthquake hazards in the United States centers on the midcontinent and the origin, magnitudes, and likely recurrence intervals of the 1811-1812 New Madrid earthquakes that occurred there. The stakeholder groups in the debate (local and state governments, reinsurance companies, American businesses, and the scientific community) are similar to the stakeholder groups in regions more famous for large earthquakes. However, debate about New Madrid seismic hazard has been fiercer because of the lack of two fundamental components of seismic hazard estimation: an explanatory model for large, midplate earthquakes; and sufficient or sufficiently precise data about the causes, effects, and histories of such earthquakes.
ERIC Educational Resources Information Center
Richardson, William H., Jr.
2006-01-01
Computational precision is sometimes given short shrift in a first programming course. Treating this topic requires discussing integer and floating-point number representations and inaccuracies that may result from their use. An example of a moderately simple programming problem from elementary statistics was examined. It forced students to…
ERIC Educational Resources Information Center
Bloom, Howard S.; Richburg-Hayes, Lashawn; Black, Alison Rebeck
2007-01-01
This article examines how controlling statistically for baseline covariates, especially pretests, improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement. Empirical findings from five urban school districts indicate that (1) pretests can reduce the number of randomized…
Inverse probability weighting for covariate adjustment in randomized studies
Li, Xiaochun; Li, Lingling
2013-01-01
Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity as it opens the possibility of selecting a “favorable” model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a “favorable” model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. PMID:24038458
NASA Astrophysics Data System (ADS)
Jones, Bernard J. T.
2017-04-01
Preface; Notation and conventions; Part I. 100 Years of Cosmology: 1. Emerging cosmology; 2. The cosmic expansion; 3. The cosmic microwave background; 4. Recent cosmology; Part II. Newtonian Cosmology: 5. Newtonian cosmology; 6. Dark energy cosmological models; 7. The early universe; 8. The inhomogeneous universe; 9. The inflationary universe; Part III. Relativistic Cosmology: 10. Minkowski space; 11. The energy momentum tensor; 12. General relativity; 13. Space-time geometry and calculus; 14. The Einstein field equations; 15. Solutions of the Einstein equations; 16. The Robertson-Walker solution; 17. Congruences, curvature and Raychaudhuri; 18. Observing and measuring the universe; Part IV. The Physics of Matter and Radiation: 19. Physics of the CMB radiation; 20. Recombination of the primeval plasma; 21. CMB polarisation; 22. CMB anisotropy; Part V. Precision Tools for Precision Cosmology: 23. Likelihood; 24. Frequentist hypothesis testing; 25. Statistical inference: Bayesian; 26. CMB data processing; 27. Parametrising the universe; 28. Precision cosmology; 29. Epilogue; Appendix A. SI, CGS and Planck units; Appendix B. Magnitudes and distances; Appendix C. Representing vectors and tensors; Appendix D. The electromagnetic field; Appendix E. Statistical distributions; Appendix F. Functions on a sphere; Appendix G. Acknowledgements; References; Index.
Inverse probability weighting for covariate adjustment in randomized studies.
Shen, Changyu; Li, Xiaochun; Li, Lingling
2014-02-20
Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. Copyright © 2013 John Wiley & Sons, Ltd.
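A minimal sketch of the inverse-probability-weighting idea on invented data: model treatment assignment from baseline covariates (a step that never touches the outcome), then form a weighted difference in outcome means. This is only the basic mechanism, not the authors' two-stage procedure in detail.

```python
# IPW sketch: estimate P(A=1 | X) from baseline covariates, then compute a
# Horvitz-Thompson style weighted difference in outcome means. Data and
# model are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 2000
x = rng.normal(size=(n, 3))                     # baseline covariates
a = rng.integers(0, 2, n)                       # randomized treatment indicator
y = 1.0 * a + x @ np.array([0.8, -0.5, 0.3]) + rng.normal(0, 1, n)

# Stage 1: model treatment assignment using covariates only (no outcome)
p = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]

# Stage 2: weighted difference in outcome means
w1, w0 = a / p, (1 - a) / (1 - p)
effect = np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)
print("IPW treatment effect estimate:", round(effect, 3))
```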
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
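One simple way to see the propagation step the abstract refers to is a Monte Carlo sketch: sample each measured quantity from its assumed uncertainty and push the samples through the defining functional expression. The example quantity (dynamic pressure q = 0.5*rho*V^2) and the uncertainty values are invented; the paper itself develops analytical propagation and calibration-standard treatments not shown here.

```python
# Monte Carlo propagation of measurement uncertainties through a defining
# functional expression. Quantities and uncertainties are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

rho = rng.normal(1.225, 0.005, n)        # air density samples [kg/m^3]
V = rng.normal(45.0, 0.3, n)             # velocity samples [m/s]

q = 0.5 * rho * V**2                     # propagated dynamic pressure [Pa]

mean_q = q.mean()
u_q = q.std(ddof=1)
lo, hi = np.percentile(q, [2.5, 97.5])   # ~95% interval
print(f"q = {mean_q:.1f} Pa, u(q) = {u_q:.1f} Pa, 95% interval [{lo:.1f}, {hi:.1f}]")
```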
Identifiability of PBPK Models with Applications to Dimethylarsinic Acid Exposure
Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss diff...
Routine Microsecond Molecular Dynamics Simulations with AMBER on GPUs. 1. Generalized Born
2012-01-01
We present an implementation of generalized Born implicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA enabled NVIDIA graphics processing units (GPUs). We discuss the algorithms that are used to exploit the processing power of the GPUs and show the performance that can be achieved in comparison to simulations on conventional CPU clusters. The implementation supports three different precision models in which the contributions to the forces are calculated in single precision floating point arithmetic but accumulated in double precision (SPDP), or everything is computed in single precision (SPSP) or double precision (DPDP). In addition to performance, we have focused on understanding the implications of the different precision models on the outcome of implicit solvent MD simulations. We show results for a range of tests including the accuracy of single point force evaluations and energy conservation as well as structural properties pertaining to protein dynamics. The numerical noise due to rounding errors within the SPSP precision model is sufficiently large to lead to an accumulation of errors which can result in unphysical trajectories for long time scale simulations. We recommend the use of the mixed-precision SPDP model since the numerical results obtained are comparable with those of the full double precision DPDP model and the reference double precision CPU implementation but at significantly reduced computational cost. Our implementation provides performance for GB simulations on a single desktop that is on par with, and in some cases exceeds, that of traditional supercomputers. PMID:22582031
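The precision-model distinction can be illustrated with a toy accumulation experiment: summing many float32 contributions in a float32 accumulator (the SPSP situation) drifts from the reference, while accumulating the same float32 terms in a float64 accumulator (the SPDP idea) matches the error-free sum of those terms. This is a generic numerical demonstration, not AMBER code.

```python
# Toy demonstration of single- vs. mixed-precision accumulation error.
import numpy as np

vals = np.full(1_000_000, np.float32(0.1), dtype=np.float32)

acc32 = np.float32(0.0)
acc64 = 0.0
for v in vals:
    acc32 = acc32 + v            # "SPSP": float32 terms, float32 accumulator
    acc64 = acc64 + float(v)     # "SPDP": float32 terms, float64 accumulator

exact = 1_000_000 * float(np.float32(0.1))   # error-free sum of the float32 terms
print("float32 accumulator :", float(acc32))
print("float64 accumulator :", acc64)
print("error-free reference:", exact)
```

The float64 accumulator reproduces the reference (limited only by the initial rounding of 0.1 to float32), while the float32 accumulator visibly drifts away from it.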
NASA Technical Reports Server (NTRS)
French, R. A.; Cohen, B. A.; Miller, J. S.
2014-01-01
KArLE (Potassium-Argon Laser Experiment) has been developed for in situ planetary geochronology using the K-Ar (potassium-argon) isotope system, where material ablated by LIBS (Laser-Induced Breakdown Spectroscopy) is used to calculate isotope abundances. We are determining the accuracy and precision of volume measurements of these pits using stereo and laser microscope data to better understand the ablation process for isotope abundance calculations. If a characteristic volume can be determined with sufficient accuracy and precision for specific rock types, KArLE will prove to be a useful instrument for future planetary rover missions.
Nedelcu, R; Olsson, P; Nyström, I; Rydén, J; Thor, A
2018-02-01
To evaluate a novel methodology using industrial scanners as a reference, and assess in vivo accuracy of 3 intraoral scanners (IOS) and conventional impressions. Further, to evaluate IOS precision in vivo. Four reference-bodies were bonded to the buccal surfaces of upper premolars and incisors in five subjects. After three reference-scans, ATOS Core 80 (ATOS), subjects were scanned three times with three IOS systems: 3M True Definition (3M), CEREC Omnicam (OMNI) and Trios 3 (TRIOS). One conventional impression (IMPR) was taken, 3M Impregum Penta Soft, and poured models were digitized with laboratory scanner 3shape D1000 (D1000). Best-fit alignment of reference-bodies and 3D Compare Analysis was performed. Precision of ATOS and D1000 was assessed for quantitative evaluation and comparison. Accuracy of IOS and IMPR were analyzed using ATOS as reference. Precision of IOS was evaluated through intra-system comparison. Precision of ATOS reference scanner (mean 0.6 μm) and D1000 (mean 0.5 μm) was high. Pairwise multiple comparisons of reference-bodies located in different tooth positions displayed a statistically significant difference of accuracy between two scanner-groups: 3M and TRIOS, over OMNI (p value range 0.0001 to 0.0006). IMPR did not show any statistically significant difference to IOS. However, deviations of IOS and IMPR were within a similar magnitude. No statistical difference was found for IOS precision. The methodology can be used for assessing accuracy of IOS and IMPR in vivo in up to five units bilaterally from midline. 3M and TRIOS had a higher accuracy than OMNI. IMPR overlapped both groups. Intraoral scanners can be used as a replacement for conventional impressions when restoring up to ten units without extended edentulous spans. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher
2018-01-01
Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377
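A heavily simplified sketch of the index, under assumptions that go beyond what the abstract states: if the intercept variance and intercept-slope covariance contributions are ignored, the effective error of the slope for a linear design reduces to the OLS slope sampling variance, sigma_e^2 / sum((t - t_bar)^2), and an ECR-like quantity can be formed as slope variance divided by slope variance plus effective error. The paper's full definition is more general; all numbers below are invented.

```python
# Simplified sketch of "effective error" and an ECR-like reliability index
# for a linear growth design. The reduction used here (ignoring intercept
# variance and intercept-slope covariance) is an assumption for illustration.
import numpy as np

timepoints = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # measurement occasions (years)
sigma2_error = 25.0                                 # residual (error) variance
sigma2_slope = 4.0                                  # true inter-individual slope variance

effective_error = sigma2_error / np.sum((timepoints - timepoints.mean()) ** 2)
ecr = sigma2_slope / (sigma2_slope + effective_error)

print(f"effective error = {effective_error:.2f}, ECR ~ {ecr:.2f}")
```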
NASA Astrophysics Data System (ADS)
Dimitropoulos, Dimitrios
Electricity industries are experiencing upward cost pressures in many parts of the world. Chapter 1 of this thesis studies the production technology of electricity distributors. Although production and cost functions are mathematical duals, practitioners typically estimate only one or the other. This chapter proposes an approach for joint estimation of production and costs. Combining such quantity and price data has the effect of adding statistical information without introducing additional parameters into the model. We define a GMM estimator that produces internally consistent parameter estimates for both the production function and the cost function. We consider a multi-output framework, and show how to account for the presence of certain types of simultaneity and measurement error. The methodology is applied to data on 73 Ontario distributors for the period 2002-2012. As expected, the joint model results in a substantial improvement in the precision of parameter estimates. Chapter 2 focuses on productivity trends in electricity distribution. We apply two methodologies for estimating productivity growth, an index-based approach and an econometric cost-based approach, to our data on the 73 Ontario distributors for the period 2002 to 2012. The resulting productivity growth estimates are approximately -1% per year, suggesting a reversal of the positive estimates that have generally been reported in previous periods. We implement flexible semi-parametric variants to assess the robustness of these conclusions and discuss the use of such statistical analyses for calibrating productivity and relative efficiencies within a price-cap framework. In chapter 3, I turn to the historically important problem of vertical contractual relations. While the existing literature has established that resale price maintenance is sufficient to coordinate the distribution network of a manufacturer, this chapter asks whether such vertical restraints are necessary. Specifically, I study the vertical contracting problem between an upstream manufacturer and its downstream distributors in a setting where spot market contracts fail, but resale price maintenance cannot be appealed to due to legal prohibition. I show that a bonus scheme based on retail revenues is sufficient to provide incentives to decentralized retailers to elicit the correct levels of both price and service.
NASA Astrophysics Data System (ADS)
Dimitropoulos, Dimitrios
Electricity industries are experiencing upward cost pressures in many parts of the world. Chapter 1 of this thesis studies the production technology of electricity distributors. Although production and cost functions are mathematical duals, practitioners typically estimate only one or the other. This chapter proposes an approach for joint estimation of production and costs. Combining such quantity and price data has the effect of adding statistical information without introducing additional parameters into the model. We define a GMM estimator that produces internally consistent parameter estimates for both the production function and the cost function. We consider a multi-output framework, and show how to account for the presence of certain types of simultaneity and measurement error. The methodology is applied to data on 73 Ontario distributors for the period 2002-2012. As expected, the joint model results in a substantial improvement in the precision of parameter estimates. Chapter 2 focuses on productivity trends in electricity distribution. We apply two methodologies for estimating productivity growth, an index-based approach and an econometric cost-based approach, to our data on the 73 Ontario distributors for the period 2002 to 2012. The resulting productivity growth estimates are approximately -1% per year, suggesting a reversal of the positive estimates that have generally been reported in previous periods. We implement flexible semi-parametric variants to assess the robustness of these conclusions and discuss the use of such statistical analyses for calibrating productivity and relative efficiencies within a price-cap framework. In chapter 3, I turn to the historically important problem of vertical contractual relations. While the existing literature has established that resale price maintenance is sufficient to coordinate the distribution network of a manufacturer, this chapter asks whether such vertical restraints are necessary. Specifically, I study the vertical contracting problem between an upstream manufacturer and its downstream distributors in a setting where spot market contracts fail, but resale price maintenance cannot be appealed to due to legal prohibition. I show that a bonus scheme based on retail revenues is sufficient to provide incentives to decentralized retailers to elicit the correct levels of both price and service.
Statistical Analysis Experiment for Freshman Chemistry Lab.
ERIC Educational Resources Information Center
Salzsieder, John C.
1995-01-01
Describes a laboratory experiment dissolving zinc from galvanized nails in which data can be gathered very quickly for statistical analysis. The data have sufficient significant figures and the experiment yields a nice distribution of random errors. Freshman students can gain an appreciation of the relationships between random error, number of…
Statistical correlations in an ideal gas of particles obeying fractional exclusion statistics.
Pellegrino, F M D; Angilella, G G N; March, N H; Pucci, R
2007-12-01
After a brief discussion of the concepts of fractional exchange and fractional exclusion statistics, we report partly analytical and partly numerical results on thermodynamic properties of assemblies of particles obeying fractional exclusion statistics. The effect of dimensionality is one focal point, the ratio μ/k_B T of chemical potential to thermal energy being obtained numerically as a function of a scaled particle density. Pair correlation functions are also presented as a function of the statistical parameter, with Friedel oscillations developing close to the fermion limit, for sufficiently large density.
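For readers unfamiliar with exclusion statistics, the mean occupation number usually adopted for such assemblies (Wu's form, quoted here from the general literature rather than from this paper) interpolates between bosons (g = 0) and fermions (g = 1):

n(\varepsilon) = \frac{1}{w(\varepsilon) + g}, \qquad w(\varepsilon)^{\,g}\,\bigl[1 + w(\varepsilon)\bigr]^{\,1-g} = e^{(\varepsilon - \mu)/k_B T}

with g the statistical parameter referred to in the abstract.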
Quantifying lost information due to covariance matrix estimation in parameter inference
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heavens, Alan F.
2017-02-01
Parameter inference with an estimated covariance matrix systematically loses information due to the remaining uncertainty of the covariance matrix. Here, we quantify this loss of precision and develop a framework to hypothetically restore it, which allows one to judge how far away a given analysis is from the ideal case of a known covariance matrix. We point out that it is insufficient to estimate this loss by debiasing the Fisher matrix as previously done, due to a fundamental inequality that describes how biases arise in non-linear functions. We therefore develop direct estimators for parameter credibility contours and the figure of merit, finding that significantly fewer simulations than previously thought are sufficient to reach satisfactory precisions. We apply our results to DES Science Verification weak lensing data, detecting a 10 per cent loss of information that increases their credibility contours. No significant loss of information is found for KiDS. For a Euclid-like survey, with about 10 nuisance parameters we find that 2900 simulations are sufficient to limit the systematically lost information to 1 per cent, with an additional uncertainty of about 2 per cent. Without any nuisance parameters, 1900 simulations are sufficient to only lose 1 per cent of information. We further derive estimators for all quantities needed for forecasting with estimated covariance matrices. Our formalism allows one to determine the sweet spot between running sophisticated simulations to reduce the number of nuisance parameters, and running as many fast simulations as possible.
Efficient Scores, Variance Decompositions and Monte Carlo Swindles.
1984-08-28
... Then a version of Pythagoras' theorem gives the variance decomposition (6.1): Var_P0(T) = Var_P0(S) + Var_P0(T - S). One way to see this is to note ... complete sufficient statistics for (β, σ), and that the standardized residuals (y - Xβ̂)/σ̂ are ancillary. Basu's sufficiency-ancillarity theorem ...
Nutrition and Musculoskeletal Function: Skylab Experiment Series Number M070
NASA Technical Reports Server (NTRS)
Raumbaut, P. C.
1972-01-01
The M070 experiments are expected to give medical investigators precise information on a variety of biochemical changes occurring during exposure to space flight. Sufficient control data are being generated by baseline studies to differentiate those effects that are caused by weightless flight and those that are caused by other abnormal conditions that normally accompany spaceflight.
Small area estimation in forests affected by wildfire in the Interior West
G. G. Moisen; J. A. Blackard; M. Finco
2004-01-01
Recent emphasis has been placed on estimating amount and characteristics of forests affected by wildfire in the Interior West. Data collected by FIA is intended for estimation over large geographic areas and is too sparse to construct sufficiently precise estimates within burn perimeters. This paper illustrates how recently built MODIS-based maps of forest/nonforest and...
Evaluating Classified MODIS Satellite Imagery as a Stratification Tool
Greg C. Liknes; Mark D. Nelson; Ronald E. McRoberts
2004-01-01
The Forest Inventory and Analysis (FIA) program of the USDA Forest Service collects forest attribute data on permanent plots arranged on a hexagonal network across all 50 states and Puerto Rico. Due to budget constraints, sample sizes sufficient to satisfy national FIA precision standards are seldom achieved for most inventory variables unless the estimation process is...
NASA Astrophysics Data System (ADS)
Adams, T.; Batra, P.; Bugel, L.; Camilleri, L.; Conrad, J. M.; de Gouvêa, A.; Fisher, P. H.; Formaggio, J. A.; Jenkins, J.; Karagiorgi, G.; Kobilarcik, T. R.; Kopp, S.; Kyle, G.; Loinaz, W. A.; Mason, D. A.; Milner, R.; Moore, R.; Morfín, J. G.; Nakamura, M.; Naples, D.; Nienaber, P.; Olness, F. I.; Owens, J. F.; Pate, S. F.; Pronin, A.; Seligman, W. G.; Shaevitz, M. H.; Schellman, H.; Schienbein, I.; Syphers, M. J.; Tait, T. M. P.; Takeuchi, T.; Tan, C. Y.; van de Water, R. G.; Yamamoto, R. K.; Yu, J. Y.
We extend the physics case for a new high-energy, ultra-high statistics neutrino scattering experiment, NuSOnG (Neutrino Scattering On Glass) to address a variety of issues including precision QCD measurements, extraction of structure functions, and the derived Parton Distribution Functions (PDF's). This experiment uses a Tevatron-based neutrino beam to obtain a sample of Deep Inelastic Scattering (DIS) events which is over two orders of magnitude larger than past samples. We outline an innovative method for fitting the structure functions using a parametrized energy shift which yields reduced systematic uncertainties. High statistics measurements, in combination with improved systematics, will enable NuSOnG to perform discerning tests of fundamental Standard Model parameters as we search for deviations which may hint of "Beyond the Standard Model" physics.
Optimization of deformation monitoring networks using finite element strain analysis
NASA Astrophysics Data System (ADS)
Alizadeh-Khameneh, M. Amin; Eshagh, Mehdi; Jensen, Anna B. O.
2018-04-01
An optimal design of a geodetic network can fulfill the requested precision and reliability of the network, and decrease the expenses of its execution by removing unnecessary observations. The role of an optimal design is highlighted in deformation monitoring networks due to the repeatability of these networks. The core design problem is how to define precision and reliability criteria. This paper proposes a solution, where the precision criterion is defined based on the precision of deformation parameters, i.e., the precision of strain and differential rotations. A strain analysis can be performed to obtain some information about the possible deformation of a deformable object. In this study, we split an area into a number of three-dimensional finite elements with the help of the Delaunay triangulation and performed the strain analysis on each element. According to the obtained precision of deformation parameters in each element, the precision criterion of displacement detection at each network point is then determined. The developed criterion is implemented to optimize the observations from the Global Positioning System (GPS) in the Skåne monitoring network in Sweden. The network was established in 1989 and straddled the Tornquist zone, which is one of the most active faults in southern Sweden. The numerical results show that 17 out of all 21 possible GPS baseline observations are sufficient to detect a minimum displacement of 3 mm at each network point.
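As a rough illustration of the strain-analysis step, the sketch below computes plane strain for a single constant-strain triangle from nodal displacements; it is a simplified two-dimensional stand-in for the authors' three-dimensional Delaunay elements, and all names and numbers are hypothetical.

import numpy as np

def triangle_strain(xy, uv):
    """Plane strain of a linear (constant-strain) triangle.
    xy: (3, 2) node coordinates; uv: (3, 2) node displacements.
    Returns (eps_xx, eps_yy, gamma_xy)."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    two_A = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)  # twice the signed area
    b = np.array([y2 - y3, y3 - y1, y1 - y2])  # shape-function x-derivatives * 2A
    c = np.array([x3 - x2, x1 - x3, x2 - x1])  # shape-function y-derivatives * 2A
    u, v = uv[:, 0], uv[:, 1]
    eps_xx = b @ u / two_A
    eps_yy = c @ v / two_A
    gamma_xy = (c @ u + b @ v) / two_A
    return eps_xx, eps_yy, gamma_xy

# Example: a unit triangle stretched by 1e-3 in the x direction
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
uv = np.array([[0.0, 0.0], [1e-3, 0.0], [0.0, 0.0]])
print(triangle_strain(xy, uv))  # approximately (1e-3, 0.0, 0.0)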
NASA Astrophysics Data System (ADS)
Sarkar, Arnab; Karki, Vijay; Aggarwal, Suresh K.; Maurya, Gulab S.; Kumar, Rohit; Rai, Awadhesh K.; Mao, Xianglei; Russo, Richard E.
2015-06-01
Laser induced breakdown spectroscopy (LIBS) was applied for elemental characterization of high alloy steel using partial least squares regression (PLSR) with an objective to evaluate the analytical performance of this multivariate approach. The optimization of the number of principal components for minimizing error in the PLSR algorithm was investigated. The effect of different pre-treatment procedures on the raw spectral data before PLSR analysis was evaluated based on several statistical (standard error of prediction, percentage relative error of prediction etc.) parameters. The pre-treatment with the "NORM" parameter gave the optimum statistical results. The analytical performance of the PLSR model improved by increasing the number of laser pulses accumulated per spectrum as well as by truncating the spectrum to an appropriate wavelength region. It was found that the statistical benefit of truncating the spectrum can also be accomplished by increasing the number of laser pulses per accumulation without spectral truncation. The constituents (Co and Mo) present in hundreds of ppm were determined with relative precision of 4-9% (2σ), whereas the major constituents Cr and Ni (present at a few percent levels) were determined with a relative precision of ~2% (2σ).
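A minimal sketch of the multivariate calibration described above, assuming hypothetical arrays of accumulated LIBS spectra and reference concentrations; the paper's exact pre-treatments and component-selection criteria are not reproduced, and the "max" normalization is only a crude stand-in for its "NORM" option.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.preprocessing import normalize

# Hypothetical data: rows = accumulated LIBS spectra, columns = wavelength channels
spectra = np.load("libs_spectra.npy")            # shape (n_samples, n_channels)
concentrations = np.load("reference_conc.npy")   # shape (n_samples,), e.g. Cr wt.%

X = normalize(spectra, norm="max")  # normalize each spectrum by its maximum

# Choose the number of latent components by cross-validated prediction error
errors = []
for n in range(1, 16):
    pls = PLSRegression(n_components=n)
    pred = cross_val_predict(pls, X, concentrations, cv=5)
    errors.append(np.sqrt(np.mean((pred.ravel() - concentrations) ** 2)))
best_n = int(np.argmin(errors)) + 1

model = PLSRegression(n_components=best_n).fit(X, concentrations)
print(best_n, errors[best_n - 1])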
Chen, Po-Chia; Hologne, Maggy; Walker, Olivier
2017-03-02
Rotational diffusion (D_rot) is a fundamental property of biomolecules that contains information about molecular dimensions and solute-solvent interactions. While ab initio D_rot prediction can be achieved by explicit all-atom molecular dynamics simulations, this is hindered by both computational expense and limitations in water models. We propose coarse-grained force fields as a complementary solution, and show that the MARTINI force field with elastic networks is sufficient to compute D_rot in >10 proteins spanning 5-157 kDa. We also adopt a quaternion-based approach that computes D_rot orientation directly from autocorrelations of best-fit rotations as used in, e.g., RMSD algorithms. Over 2 μs trajectories, isotropic MARTINI+EN tumbling replicates experimental values to within 10-20%, with convergence analyses suggesting a minimum sampling of >50 × τ_theor to achieve sufficient precision. Transient fluctuations in anisotropic tumbling cause decreased precision in predictions of axisymmetric anisotropy and rhombicity, the latter of which cannot be precisely evaluated within 2000 × τ_theor for GB3. Thus, we encourage reporting of axial decompositions D_x, D_y, D_z to ease comparability between experiment and simulation. Where protein disorder is absent, we observe close replication of MARTINI+EN D_rot orientations versus CHARMM22*/TIP3p and experimental data. This work anticipates the ab initio prediction of NMR-relaxation by combining coarse-grained global motions with all-atom local motions.
Interpretation; Apollo 9 photography of parts of southern Arizona and southern New Mexico
Owen, J. Robert; Shown, Lynn M.
1973-01-01
Examination of small-scale (approximately 1:650,000) multispectral photographs obtained on the Apollo 9 mission in March 1969 revealed that in semiarid regions, features due to differences in soils or quantity of vegetation could most easily be discriminated on the color infrared photographs. Where there is sufficient ground truth, it is possible to delineate regional wildland plant communities on the basis of tone; however, the precision of the method may be improved by using photographs obtained two or more times during the year. Sites where vegetation-improvement practices have been completed are not always discernible. For example, where waterspreaders have been constructed, there was sufficient change in the density of vegetation to be readily detected on the photographs; however, pinyon-juniper to grass conversions or contour furrowing did not always produce a sufficient change in the vegetation to be detected on the photographs.
Efficient exploration of cosmology dependence in the EFT of LSS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cataneo, Matteo; Foreman, Simon; Senatore, Leonardo, E-mail: matteoc@dark-cosmology.dk, E-mail: sfore@stanford.edu, E-mail: senatore@stanford.edu
The most effective use of data from current and upcoming large scale structure (LSS) and CMB observations requires the ability to predict the clustering of LSS with very high precision. The Effective Field Theory of Large Scale Structure (EFTofLSS) provides an instrument for performing analytical computations of LSS observables with the required precision in the mildly nonlinear regime. In this paper, we develop efficient implementations of these computations that allow for an exploration of their dependence on cosmological parameters. They are based on two ideas. First, once an observable has been computed with high precision for a reference cosmology, for a new cosmology the same can be easily obtained with comparable precision just by adding the difference in that observable, evaluated with much less precision. Second, most cosmologies of interest are sufficiently close to the Planck best-fit cosmology that observables can be obtained from a Taylor expansion around the reference cosmology. These ideas are implemented for the matter power spectrum at two loops and are released as public codes. When applied to cosmologies that are within 3σ of the Planck best-fit model, the first method evaluates the power spectrum in a few minutes on a laptop, with results that have 1% or better precision, while with the Taylor expansion the same quantity is instantly generated with similar precision. The ideas and codes we present may easily be extended for other applications or higher-precision results.
Efficient exploration of cosmology dependence in the EFT of LSS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cataneo, Matteo; Foreman, Simon; Senatore, Leonardo
The most effective use of data from current and upcoming large scale structure (LSS) and CMB observations requires the ability to predict the clustering of LSS with very high precision. The Effective Field Theory of Large Scale Structure (EFTofLSS) provides an instrument for performing analytical computations of LSS observables with the required precision in the mildly nonlinear regime. In this paper, we develop efficient implementations of these computations that allow for an exploration of their dependence on cosmological parameters. They are based on two ideas. First, once an observable has been computed with high precision for a reference cosmology, for a new cosmology the same can be easily obtained with comparable precision just by adding the difference in that observable, evaluated with much less precision. Second, most cosmologies of interest are sufficiently close to the Planck best-fit cosmology that observables can be obtained from a Taylor expansion around the reference cosmology. These ideas are implemented for the matter power spectrum at two loops and are released as public codes. When applied to cosmologies that are within 3σ of the Planck best-fit model, the first method evaluates the power spectrum in a few minutes on a laptop, with results that have 1% or better precision, while with the Taylor expansion the same quantity is instantly generated with similar precision. Finally, the ideas and codes we present may easily be extended for other applications or higher-precision results.
Efficient exploration of cosmology dependence in the EFT of LSS
Cataneo, Matteo; Foreman, Simon; Senatore, Leonardo
2017-04-18
The most effective use of data from current and upcoming large scale structure (LSS) and CMB observations requires the ability to predict the clustering of LSS with very high precision. The Effective Field Theory of Large Scale Structure (EFTofLSS) provides an instrument for performing analytical computations of LSS observables with the required precision in the mildly nonlinear regime. In this paper, we develop efficient implementations of these computations that allow for an exploration of their dependence on cosmological parameters. They are based on two ideas. First, once an observable has been computed with high precision for a reference cosmology, for a new cosmology the same can be easily obtained with comparable precision just by adding the difference in that observable, evaluated with much less precision. Second, most cosmologies of interest are sufficiently close to the Planck best-fit cosmology that observables can be obtained from a Taylor expansion around the reference cosmology. These ideas are implemented for the matter power spectrum at two loops and are released as public codes. When applied to cosmologies that are within 3σ of the Planck best-fit model, the first method evaluates the power spectrum in a few minutes on a laptop, with results that have 1% or better precision, while with the Taylor expansion the same quantity is instantly generated with similar precision. Finally, the ideas and codes we present may easily be extended for other applications or higher-precision results.
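A schematic of the two ideas described in the abstracts above, with placeholder arrays and functions rather than the released codes:

import numpy as np

# Idea 1: expensive high-precision result for a reference cosmology, plus a cheap
# low-precision difference evaluated for the new cosmology:
#   P_new(k) ~= P_ref_high(k) + [P_new_low(k) - P_ref_low(k)]
def shifted_power_spectrum(P_ref_high, P_ref_low, P_new_low):
    return P_ref_high + (P_new_low - P_ref_low)

# Idea 2: first-order Taylor expansion around the reference parameters theta_ref:
#   P(theta, k) ~= P_ref(k) + sum_i dP/dtheta_i(k) * (theta_i - theta_ref_i)
def taylor_power_spectrum(theta, theta_ref, P_ref, dP_dtheta):
    dtheta = np.asarray(theta) - np.asarray(theta_ref)
    return P_ref + dP_dtheta.T @ dtheta   # dP_dtheta has shape (n_params, n_k)

# Toy demonstration with made-up numbers (placeholders, not the public codes)
k = np.linspace(0.01, 0.3, 5)
P_ref = 1.0 / k
dP = np.vstack([0.1 * P_ref, -0.05 * P_ref])   # sensitivities to two parameters
print(taylor_power_spectrum([0.32, 0.81], [0.31, 0.80], P_ref, dP))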
Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics
Dowding, Irene; Haufe, Stefan
2018-01-01
Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
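A minimal sketch of the general idea of precision-weighting subject-level summaries before a group-level test; this illustrates simple inverse-variance weighting (ignoring between-subject variance for brevity) and is not the exact estimator derived in the paper. The numbers are hypothetical.

import numpy as np
from scipy import stats

def weighted_group_test(subject_means, subject_vars, subject_ns):
    """Test whether the group-level mean effect is zero, weighting each
    subject's summary statistic by the inverse of its estimated variance."""
    w = 1.0 / (np.asarray(subject_vars) / np.asarray(subject_ns))  # precision weights
    mu = np.sum(w * subject_means) / np.sum(w)    # weighted group-level mean
    se = np.sqrt(1.0 / np.sum(w))                 # its standard error
    z = mu / se
    p = 2 * stats.norm.sf(abs(z))                 # two-sided p-value
    return mu, z, p

# Hypothetical per-subject class-mean differences, variances, and trial counts
means = np.array([0.8, 1.1, 0.4, 0.9])
vars_ = np.array([2.0, 1.5, 3.0, 1.0])
ns    = np.array([120, 90, 60, 150])
print(weighted_group_test(means, vars_, ns))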
Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size
ERIC Educational Resources Information Center
Shieh, Gwowen
2015-01-01
Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…
Whole vertebral bone segmentation method with a statistical intensity-shape model based approach
NASA Astrophysics Data System (ADS)
Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer
2011-03-01
An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing 4 different statistical intensity-shape combined models for the cervical, upper/lower thoracic and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as a pre-processing step to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge on both the intensities and the shapes of the objects. After PCA of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed as fitting of this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbosacral area). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, 0.939 mm for the cervical, upper thoracic, lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for cervical, thoracic and lumbar vertebrae.
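The parametric model referred to here follows the usual statistical shape/appearance form (notation assumed, not taken from the paper): a combined shape-intensity vector is written as the training mean plus a linear combination of principal components, and segmentation becomes a maximum a posteriori search over the coefficients b,

x \approx \bar{x} + \Phi\, b, \qquad \hat{b} = \arg\max_{b}\; p(I \mid x(b))\, p(b)

where \bar{x} is the training mean, \Phi holds the leading principal component vectors, and I is the target image.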
Quasi-Monochromatic Visual Environments and the Resting Point of Accommodation
1988-01-01
accommodation. No statistically significant differences were revealed to support the possibility of color mediated differential regression to resting...discussed with respect to the general findings of the total sample as well as the specific behavior of individual participants. The summarized statistics ...remaining ten varied considerably with respect to the averaged trends reported in the above descriptive statistics as well as with respect to precision
Franzen, Delwen L; Gleiss, Sarah A; Berger, Christina; Kümpfbeck, Franziska S; Ammer, Julian J; Felmy, Felix
2015-01-15
Passive and active membrane properties determine the voltage responses of neurons. Within the auditory brain stem, refinements in these intrinsic properties during late postnatal development usually generate short integration times and precise action-potential generation. This developmentally acquired temporal precision is crucial for auditory signal processing. How the interactions of these intrinsic properties develop in concert to enable auditory neurons to transfer information with high temporal precision has not yet been elucidated in detail. Here, we show how the developmental interaction of intrinsic membrane parameters generates high firing precision. We performed in vitro recordings from neurons of postnatal days 9-28 in the ventral nucleus of the lateral lemniscus of Mongolian gerbils, an auditory brain stem structure that converts excitatory to inhibitory information with high temporal precision. During this developmental period, the input resistance and capacitance decrease, and action potentials acquire faster kinetics and enhanced precision. Depending on the stimulation time course, the input resistance and capacitance contribute differentially to action-potential thresholds. The decrease in input resistance, however, is sufficient to explain the enhanced action-potential precision. Alterations in passive membrane properties also interact with a developmental change in potassium currents to generate the emergence of the mature firing pattern, characteristic of coincidence-detector neurons. Cholinergic receptor-mediated depolarizations further modulate this intrinsic excitability profile by eliciting changes in the threshold and firing pattern, irrespective of the developmental stage. Thus our findings reveal how intrinsic membrane properties interact developmentally to promote temporally precise information processing. Copyright © 2015 the American Physiological Society.
The Too-Much-Precision Effect.
Loschelder, David D; Friese, Malte; Schaerer, Michael; Galinsky, Adam D
2016-12-01
Past research has suggested a fundamental principle of price precision: The more precise an opening price, the more it anchors counteroffers. The present research challenges this principle by demonstrating a too-much-precision effect. Five experiments (involving 1,320 experts and amateurs in real-estate, jewelry, car, and human-resources negotiations) showed that increasing the precision of an opening offer had positive linear effects for amateurs but inverted-U-shaped effects for experts. Anchor precision backfired because experts saw too much precision as reflecting a lack of competence. This negative effect held unless first movers gave rationales that boosted experts' perception of their competence. Statistical mediation and experimental moderation established the critical role of competence attributions. This research disentangles competing theoretical accounts (attribution of competence vs. scale granularity) and qualifies two putative truisms: that anchors affect experts and amateurs equally, and that more precise prices are linearly more potent anchors. The results refine current theoretical understanding of anchoring and have significant implications for everyday life.
Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping
NASA Astrophysics Data System (ADS)
Rehak, M.; Skaloud, J.
2015-08-01
In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that together with a pre-calibrated camera enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to MAVs and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.
Performance of Planar-Waveguide External Cavity Laser for Precision Measurements
NASA Technical Reports Server (NTRS)
Numata, Kenji; Camp, Jordan; Krainak, Michael A.; Stolpner, Lew
2010-01-01
A 1542-nm planar-waveguide external cavity laser (PW-ECL) is shown to have a sufficiently low level of frequency and intensity noise to be suitable for precision measurement applications. The frequency noise and intensity noise of the PW-ECL were comparable to or better than those of the nonplanar ring oscillator (NPRO) and fiber laser between 0.1 mHz and 100 kHz. Controllability of the PW-ECL was demonstrated by stabilizing its frequency to acetylene (13C2H2) at the 10^-13 level of Allan deviation. The PW-ECL also has the advantage of the compactness of a standard butterfly package, low cost, and a simple design consisting of a semiconductor gain medium coupled to a planar-waveguide Bragg reflector. These features would make the PW-ECL suitable for precision measurements, including compact optical frequency standards, space lidar, and space interferometry.
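For reference, the Allan deviation quoted here is the standard two-sample measure of fractional frequency stability; in its non-overlapping form, for averaging time \tau,

\sigma_y(\tau) = \sqrt{\tfrac{1}{2}\,\bigl\langle (\bar{y}_{k+1} - \bar{y}_k)^2 \bigr\rangle}

where \bar{y}_k are consecutive \tau-averaged fractional frequency offsets of the laser against the acetylene reference.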
Research about the high precision temperature measurement
NASA Astrophysics Data System (ADS)
Lin, J.; Yu, J.; Zhu, X.; Zeng, Z.; Deng, Y.
2012-12-01
A high-precision temperature control system is one of the most important supporting conditions for a tunable birefringent filter. As a first step, we investigated several high-precision temperature measurement methods for it. Firstly, circuits with a 24-bit ADC as the sensor readout were carefully designed. Secondly, an ARM processor is used as the central processing unit; it provides sufficient reading and processing capability. Thirdly, three kinds of sensors were tested: a PT100, a Dale 01T1002-5 thermistor, and a Wheatstone bridge (constructed from pure copper and manganin) as the temperature sensor. The resolution of the measurement with all three kinds of sensors is better than 0.001, which is sufficient for 0.01-level temperature stability control. Comparatively, the Dale 01T1002-5 thermistor gives the most accurate temperature at a key point, while the Wheatstone bridge gives the most accurate mean temperature of the whole layer; both of them will be used in our future temperature control system.
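As a side note on the PT100 channel, resistance readings from such a sensor are conventionally converted to temperature with the Callendar-Van Dusen relation; the sketch below uses the standard IEC 60751 coefficients for temperatures at or above 0 degC, not any calibration from the paper.

import numpy as np

R0 = 100.0          # PT100 resistance at 0 degC (ohm)
A = 3.9083e-3       # IEC 60751 Callendar-Van Dusen coefficient (1/degC)
B = -5.775e-7       # IEC 60751 coefficient (1/degC^2)

def pt100_temperature(R):
    """Temperature (degC) from PT100 resistance for T >= 0 degC,
    inverting R = R0 * (1 + A*T + B*T^2) with the quadratic formula."""
    return (-A + np.sqrt(A * A - 4.0 * B * (1.0 - R / R0))) / (2.0 * B)

print(pt100_temperature(138.506))  # approximately 100 degC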
Role of sufficient phosphorus in biodiesel production from diatom Phaeodactylum tricornutum.
Yu, Shi-Jin; Shen, Xiao-Fei; Ge, Huo-Qing; Zheng, Hang; Chu, Fei-Fei; Hu, Hao; Zeng, Raymond J
2016-08-01
In order to study the role of sufficient phosphorus (P) in biodiesel production by microalgae, Phaeodactylum tricornutum were cultivated in six different media treatments with combinations of nitrogen (N) sufficiency/deprivation and phosphorus sufficiency/limitation/deprivation. Profiles of N and P, biomass, and fatty acids (FAs) content and compositions were measured during a 7-day cultivation period. The results showed that the FA content in microalgae biomass was promoted by P deprivation. However, statistical analysis showed that FA productivity had no significant difference (p = 0.63, >0.05) under the treatments of N deprivation with P sufficiency (N-P) and N deprivation with P deprivation (N-P-), indicating P sufficiency in N deprivation medium has little effect on increasing biodiesel productivity from P. tricornutum. It was also found that the P absorption in N-P medium was 1.41 times higher than that in N sufficiency and P sufficiency (NP) medium. N deprivation with P limitation (N-P-l) was the optimal treatment for producing biodiesel from P. tricornutum because of both the highest FA productivity and good biodiesel quality.
Decoding and disrupting left midfusiform gyrus activity during word reading
Hirshorn, Elizabeth A.; Ward, Michael J.; Fiez, Julie A.; Ghuman, Avniel Singh
2016-01-01
The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation. PMID:27325763
NASA Technical Reports Server (NTRS)
Crane, D. F.
1984-01-01
When human operators are performing precision tracking tasks, their dynamic response can often be modeled by quasilinear describing functions. That fact permits analysis of the effects of delay in certain man machine control systems using linear control system analysis techniques. The analysis indicates that a reduction in system stability is the immediate effect of additional control system delay, and that system characteristics moderate or exaggerate the importance of the delay. A selection of data (simulator and flight test) consistent with the analysis is reviewed. Flight simulator visual-display delay compensation, designed to restore pilot aircraft system stability, was evaluated in several studies which are reviewed here. The studies range from single-axis, tracking-task experiments (with sufficient subjects and trials to establish the statistical significance of the results) to a brief evaluation of compensation of a computer generated imagery (CGI) visual display system in a full six degree of freedom simulation. The compensation was effective, improvements in pilot performance and workload or aircraft handling qualities rating (HQR) were observed. Results from recent aircraft handling qualities research literature, which support the compensation design approach, are also reviewed.
Decoding and disrupting left midfusiform gyrus activity during word reading.
Hirshorn, Elizabeth A; Li, Yuanning; Ward, Michael J; Richardson, R Mark; Fiez, Julie A; Ghuman, Avniel Singh
2016-07-19
The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation.
Stationary conditions for stochastic differential equations
NASA Technical Reports Server (NTRS)
Adomian, G.; Walker, W. W.
1972-01-01
This is a preliminary study of possible necessary and sufficient conditions to insure stationarity in the solution process for a stochastic differential equation. It indirectly sheds some light on ergodicity properties and shows that the spectral density is generally inadequate as a statistical measure of the solution. Further work is proceeding on a more general theory which gives necessary and sufficient conditions in a form useful for applications.
NASA Astrophysics Data System (ADS)
Zender, Charles S.
2016-09-01
Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80 and 5-65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
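A simplified sketch of the quantization step for float32 values; the mantissa-bit budget, guard bits, and fill-value handling of the released implementation differ in detail, so this only illustrates the alternating shave/set idea.

import numpy as np

def bit_groom(values, nsd=3):
    """Quantize a float32 array to roughly nsd significant decimal digits by
    alternately shaving (zeroing) and setting (to one) the least significant
    mantissa bits of consecutive values. Illustrative only."""
    keep = int(np.ceil(nsd * np.log2(10)))   # mantissa bits needed for nsd digits
    drop = 23 - keep                         # float32 has 23 explicit mantissa bits
    if drop <= 0:
        return np.asarray(values, dtype=np.float32)
    bits = np.asarray(values, dtype=np.float32).copy().view(np.uint32)
    low = np.uint32((1 << drop) - 1)         # mask covering the groomed bits
    bits[0::2] &= ~low                       # even elements: shave low bits to 0
    bits[1::2] |= low                        # odd elements: set low bits to 1
    return bits.view(np.float32)

x = (np.linspace(0.0, 1.0, 8) + 1.0 / 3.0).astype(np.float32)
print(x)
print(bit_groom(x, nsd=3))   # agrees with x to roughly 3 significant digits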
How Large Should a Statistical Sample Be?
ERIC Educational Resources Information Center
Menil, Violeta C.; Ye, Ruili
2012-01-01
This study serves as a teaching aid for teachers of introductory statistics. The aim of this study was limited to determining various sample sizes when estimating population proportion. Tables on sample sizes were generated using a C[superscript ++] program, which depends on population size, degree of precision or error level, and confidence…
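A small worked example of the kind of computation that underlies such tables: the standard large-sample formula for estimating a population proportion within a given error bound, with a finite population correction. The function name and defaults are illustrative, not the article's.

from math import ceil
from scipy.stats import norm

def sample_size_proportion(N, error, confidence=0.95, p=0.5):
    """Sample size needed to estimate a population proportion within
    +/- error at the given confidence level, for population size N."""
    z = norm.ppf(1 - (1 - confidence) / 2)     # two-sided critical value
    n0 = z**2 * p * (1 - p) / error**2         # infinite-population sample size
    return ceil(n0 / (1 + (n0 - 1) / N))       # finite population correction

print(sample_size_proportion(N=10_000, error=0.05))  # about 370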
Visualizing Teacher Education as a Complex System: A Nested Simplex System Approach
ERIC Educational Resources Information Center
Ludlow, Larry; Ell, Fiona; Cochran-Smith, Marilyn; Newton, Avery; Trefcer, Kaitlin; Klein, Kelsey; Grudnoff, Lexie; Haigh, Mavis; Hill, Mary F.
2017-01-01
Our purpose is to provide an exploratory statistical representation of initial teacher education as a complex system comprised of dynamic influential elements. More precisely, we reveal what the system looks like for differently-positioned teacher education stakeholders based on our framework for gathering, statistically analyzing, and graphically…
Glideslope Descent-Rate Cuing to Aid Carrier Landings
1980-10-01
provided a sufficiently precise indication of descent rate close to the ship. Sensitivity of the arrows was set at a level that during pretesting...
ERIC Educational Resources Information Center
Glassman, Jill R.; Potter, Susan C.; Baumler, Elizabeth R.; Coyle, Karin K.
2015-01-01
Introduction: Group-randomized trials (GRTs) are one of the most rigorous methods for evaluating the effectiveness of group-based health risk prevention programs. Efficiently designing GRTs with a sample size that is sufficient for meeting the trial's power and precision goals while not wasting resources exceeding them requires estimates of the…
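The central quantity in such calculations is the design effect induced by clustering; with m members per group and intraclass correlation \rho, the standard result (stated here for orientation, not quoted from the article) is

\mathrm{DEFF} = 1 + (m - 1)\,\rho, \qquad n_{\mathrm{eff}} = \frac{n_{\mathrm{total}}}{\mathrm{DEFF}}

so even modest values of \rho can substantially inflate the sample size needed to meet a power or precision goal.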
NASA Astrophysics Data System (ADS)
Bianchi, Eugenio; Haggard, Hal M.; Rovelli, Carlo
2017-08-01
We show that in Oeckl's boundary formalism the boundary vectors that do not have a tensor form represent, in a precise sense, statistical states. Therefore the formalism incorporates quantum statistical mechanics naturally. We formulate general-covariant quantum statistical mechanics in this language. We illustrate the formalism by showing how it accounts for the Unruh effect. We observe that the distinction between pure and mixed states weakens in the general covariant context, suggesting that local gravitational processes are naturally statistical without a sharp quantal versus probabilistic distinction.
Clouds in the atmosphere of the super-Earth exoplanet GJ 1214b.
Kreidberg, Laura; Bean, Jacob L; Désert, Jean-Michel; Benneke, Björn; Deming, Drake; Stevenson, Kevin B; Seager, Sara; Berta-Thompson, Zachory; Seifahrt, Andreas; Homeier, Derek
2014-01-02
Recent surveys have revealed that planets intermediate in size between Earth and Neptune ('super-Earths') are among the most common planets in the Galaxy. Atmospheric studies are the next step towards developing a comprehensive understanding of this new class of object. Much effort has been focused on using transmission spectroscopy to characterize the atmosphere of the super-Earth archetype GJ 1214b (refs 7 - 17), but previous observations did not have sufficient precision to distinguish between two interpretations for the atmosphere. The planet's atmosphere could be dominated by relatively heavy molecules, such as water (for example, a 100 per cent water vapour composition), or it could contain high-altitude clouds that obscure its lower layers. Here we report a measurement of the transmission spectrum of GJ 1214b at near-infrared wavelengths that definitively resolves this ambiguity. The data, obtained with the Hubble Space Telescope, are sufficiently precise to detect absorption features from a high mean-molecular-mass atmosphere. The observed spectrum, however, is featureless. We rule out cloud-free atmospheric models with compositions dominated by water, methane, carbon monoxide, nitrogen or carbon dioxide at greater than 5σ confidence. The planet's atmosphere must contain clouds to be consistent with the data.
Risk assessment in the 21st century: roadmap and matrix.
Embry, Michelle R; Bachman, Ammie N; Bell, David R; Boobis, Alan R; Cohen, Samuel M; Dellarco, Michael; Dewhurst, Ian C; Doerrer, Nancy G; Hines, Ronald N; Moretto, Angelo; Pastoor, Timothy P; Phillips, Richard D; Rowlands, J Craig; Tanir, Jennifer Y; Wolf, Douglas C; Doe, John E
2014-08-01
The RISK21 integrated evaluation strategy is a problem formulation-based exposure-driven risk assessment roadmap that takes advantage of existing information to graphically represent the intersection of exposure and toxicity data on a highly visual matrix. This paper describes in detail the process for using the roadmap and matrix. The purpose of this methodology is to optimize the use of prior information and testing resources (animals, time, facilities, and personnel) to efficiently and transparently reach a risk and/or safety determination. Based on the particular problem, exposure and toxicity data should have sufficient precision to make such a decision. Estimates of exposure and toxicity, bounded by variability and/or uncertainty, are plotted on the X- and Y-axes of the RISK21 matrix, respectively. The resulting intersection is a highly visual representation of estimated risk. Decisions can then be made to increase precision in the exposure or toxicity estimates or declare that the available information is sufficient. RISK21 represents a step forward in the goal to introduce new methodologies into 21st century risk assessment. Indeed, because of its transparent and visual process, RISK21 has the potential to widen the scope of risk communication beyond those with technical expertise.
An Evaluation of Different Statistical Targets for Assembling Parallel Forms in Item Response Theory
Ali, Usama S.; van Rijn, Peter W.
2015-01-01
Assembly of parallel forms is an important step in the test development process. Therefore, choosing a suitable theoretical framework to generate well-defined test specifications is critical. The performance of different statistical targets of test specifications using the test characteristic curve (TCC) and the test information function (TIF) was investigated. Test length, the number of test forms, and content specifications are considered as well. The TCC target results in forms that are parallel in difficulty, but not necessarily in terms of precision. Vice versa, test forms created using a TIF target are parallel in terms of precision, but not necessarily in terms of difficulty. As sometimes the focus is either on TIF or TCC, differences in either difficulty or precision can arise. Differences in difficulty can be mitigated by equating, but differences in precision cannot. In a series of simulations using a real item bank, the two-parameter logistic model, and mixed integer linear programming for automated test assembly, these differences were found to be quite substantial. When both TIF and TCC are combined into one target with manipulation to relative importance, these differences can be made to disappear.
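Under the two-parameter logistic model used in these simulations, both targets are sums over the assembled items; for item i with discrimination a_i and difficulty b_i,

P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}, \qquad \mathrm{TCC}(\theta) = \sum_i P_i(\theta), \qquad \mathrm{TIF}(\theta) = \sum_i a_i^2\, P_i(\theta)\,\bigl[1 - P_i(\theta)\bigr]

which makes concrete why matching TCCs equalizes difficulty while matching TIFs equalizes precision.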
Explicit-Duration Hidden Markov Model Inference of UP-DOWN States from Continuous Signals
McFarland, James M.; Hahn, Thomas T. G.; Mehta, Mayank R.
2011-01-01
Neocortical neurons show UP-DOWN state (UDS) oscillations under a variety of conditions. These UDS have been extensively studied because of the insight they can yield into the functioning of cortical networks, and their proposed role in putative memory formation. A key element in these studies is determining the precise duration and timing of the UDS. These states are typically determined from the membrane potential of one or a small number of cells, which is often not sufficient to reliably estimate the state of an ensemble of neocortical neurons. The local field potential (LFP) provides an attractive method for determining the state of a patch of cortex with high spatio-temporal resolution; however current methods for inferring UDS from LFP signals lack the robustness and flexibility to be applicable when UDS properties may vary substantially within and across experiments. Here we present an explicit-duration hidden Markov model (EDHMM) framework that is sufficiently general to allow statistically principled inference of UDS from different types of signals (membrane potential, LFP, EEG), combinations of signals (e.g., multichannel LFP recordings) and signal features over long recordings where substantial non-stationarities are present. Using cortical LFPs recorded from urethane-anesthetized mice, we demonstrate that the proposed method allows robust inference of UDS. To illustrate the flexibility of the algorithm we show that it performs well on EEG recordings as well. We then validate these results using simultaneous recordings of the LFP and membrane potential (MP) of nearby cortical neurons, showing that our method offers significant improvements over standard methods. These results could be useful for determining functional connectivity of different brain regions, as well as understanding network dynamics. PMID:21738730
A laser frequency comb that enables radial velocity measurements with a precision of 1 cm s^-1.
Li, Chih-Hao; Benedick, Andrew J; Fendel, Peter; Glenday, Alexander G; Kärtner, Franz X; Phillips, David F; Sasselov, Dimitar; Szentgyorgyi, Andrew; Walsworth, Ronald L
2008-04-03
Searches for extrasolar planets using the periodic Doppler shift of stellar spectral lines have recently achieved a precision of 60 cm s^-1 (ref. 1), which is sufficient to find a 5-Earth-mass planet in a Mercury-like orbit around a Sun-like star. To find a 1-Earth-mass planet in an Earth-like orbit, a precision of approximately 5 cm s^-1 is necessary. The combination of a laser frequency comb with a Fabry-Pérot filtering cavity has been suggested as a promising approach to achieve such Doppler shift resolution via improved spectrograph wavelength calibration, with recent encouraging results. Here we report the fabrication of such a filtered laser comb with up to 40-GHz (approximately 1 Å) line spacing, generated from a 1-GHz repetition-rate source, without compromising long-term stability, reproducibility or spectral resolution. This wide-line-spacing comb, or 'astro-comb', is well matched to the resolving power of high-resolution astrophysical spectrographs. The astro-comb should allow a precision as high as 1 cm s^-1 in astronomical radial velocity measurements.
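As a back-of-envelope check (not taken from the paper), the quoted radial velocity precisions map onto wavelength-calibration requirements through the non-relativistic Doppler relation:

\frac{\Delta\lambda}{\lambda} = \frac{v}{c}, \qquad v = 1\ \mathrm{cm\,s^{-1}} \;\Rightarrow\; \frac{\Delta\lambda}{\lambda} \approx 3\times10^{-11}

i.e. a line shift of only about 2\times10^{-7} Å at 5500 Å, which is why comb-based wavelength calibration is needed.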
Airborne Precision Spacing (APS) Dependent Parallel Arrivals (DPA)
NASA Technical Reports Server (NTRS)
Smith, Colin L.
2012-01-01
The Airborne Precision Spacing (APS) team at the NASA Langley Research Center (LaRC) has been developing a concept of operations to extend the current APS concept to support dependent approaches to parallel or converging runways along with the required pilot and controller procedures and pilot interfaces. A staggered operations capability for the Airborne Spacing for Terminal Arrival Routes (ASTAR) tool was developed and designated as ASTAR10. ASTAR10 has reached a sufficient level of maturity to be validated and tested through a fast-time simulation. The purpose of the experiment was to identify and resolve any remaining issues in the ASTAR10 algorithm, as well as put the concept of operations through a practical test.
Identifying natural flow regimes using fish communities
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Tsai, Wen-Ping; Wu, Tzu-Ching; Chen, Hung-kwai; Herricks, Edwin E.
2011-10-01
Modern water resources management has adopted natural flow regimes as reasonable targets for river restoration and conservation. The characterization of a natural flow regime begins with the development of hydrologic statistics from flow records. However, little guidance exists for defining the period of record needed for regime determination. In Taiwan, the Taiwan Eco-hydrological Indicator System (TEIS), a group of hydrologic statistics selected for fisheries relevance, is being used to evaluate ecological flows. The TEIS consists of a group of hydrologic statistics selected to characterize the relationships between flow and the life history of indigenous species. Using the TEIS and biosurvey data for Taiwan, this paper identifies the length of hydrologic record sufficient for natural flow regime characterization. To define the ecological hydrology of fish communities, this study connected hydrologic statistics to fish communities by using methods to define antecedent conditions that influence existing community composition. A moving average method was applied to TEIS statistics to reflect the effects of antecedent flow conditions, and a point-biserial correlation method was used to relate fisheries collections with TEIS statistics. The resulting fish species-TEIS (FISH-TEIS) hydrologic statistics matrix takes full advantage of historical flows and fisheries data. The analysis indicates that, in the watersheds analyzed, averaging TEIS statistics for the present year and 3 years prior to the sampling date, termed MA(4), is sufficient to develop a natural flow regime. This result suggests that flow regimes based on hydrologic statistics for the period of record can be replaced by regimes developed for sampled fish communities.
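A sketch of the linkage step, assuming hypothetical stand-ins for the Taiwan flow and biosurvey data: a 4-year moving average of one TEIS statistic is correlated against species presence/absence with the point-biserial coefficient.

import numpy as np
from scipy.stats import pointbiserialr

def ma4(annual_values):
    """Average of the sampling year and the three prior years, MA(4)."""
    v = np.asarray(annual_values, dtype=float)
    return np.convolve(v, np.ones(4) / 4.0, mode="valid")  # window i covers years i..i+3

# Hypothetical data: one TEIS statistic per year, and fish presence (0/1)
# at surveys aligned with the sampling year of each MA(4) window.
teis_annual = np.array([12.0, 9.5, 14.2, 11.1, 8.7, 10.4, 13.0, 9.9])
fish_present = np.array([1, 0, 1, 1, 0])          # one value per MA(4) window

r, p = pointbiserialr(fish_present, ma4(teis_annual))
print(r, p)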
Precision determination of the πN scattering lengths and the charged πNN coupling constant
NASA Astrophysics Data System (ADS)
Ericson, T. E. O.; Loiseau, B.; Thomas, A. W.
2000-01-01
We critically evaluate the isovector GMO sum rule for the charged πNN coupling constant using recent precision data from π-p and π-d atoms and with careful attention to systematic errors. From the π-d scattering length we deduce the pion-proton scattering lengths 1/2(a_π-p + a_π-n) = (-20 +/- 6 (statistic) +/- 10 (systematic)) × 10^-4 m_π^-1 and 1/2(a_π-p - a_π-n) = (903 +/- 14) × 10^-4 m_π^-1. From this a direct evaluation gives g_c^2(GMO)/4π = 14.20 +/- 0.07 (statistic) +/- 0.13 (systematic) or f_c^2/4π = 0.0786 +/- 0.0008.
NASA Astrophysics Data System (ADS)
Khaire, Vikram
2017-08-01
There exists a large void in our understanding of the intergalactic medium (IGM) at z=0.5-1.5, spanning a significant cosmic time of 4 Gyr. This hole resulted from a paucity of near-UV QSO spectra, which were historically very expensive to obtain. However, with the advent of COS and the HST UV initiative, sufficient STIS/COS NUV spectra have finally become available, enabling the first statistical analyses. We propose a comprehensive study of the z ~ 1 IGM using the Ly-alpha forest of 26 archival QSO spectra. This analysis will: (1) measure the distribution of HI absorbers to several percent precision down to log N_HI < 13 to test our model of the IGM, and determine the extragalactic UV background (UVB) at that epoch; (2) measure the Ly-alpha forest power spectrum to 12%, providing another precision test of LCDM and our theory of the IGM; (3) measure the thermal state of the IGM, which reflects the balance of heating (photoheating, HI/HeII reionization) and cooling (Hubble expansion) of cosmic baryons, and directly verify the predicted cooldown of IGM gas after reionization for the first time; (4) generate high-quality reductions, coadds, and continuum fits that will be released to the public to enable other science cases. These results, along with our state-of-the-art hydrodynamical simulations, and theoretical models of the UVB, will fill the 4 Gyr hole in our understanding of the IGM. When combined with existing HST and ground-based data from lower and higher z, they will lead to a complete, empirical description of the IGM from HI reionization to the present, spanning more than 10 Gyr of cosmic history, adding substantially to Hubble's legacy of discovery on the IGM.
Gamma-ray imaging and holdup assays of 235-F PuFF cells 1 & 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aucott, T.
Savannah River National Laboratory (SRNL) Nuclear Measurements (L4120) was tasked with performing enhanced characterization of the holdup in the PuFF shielded cells. Assays were performed in accordance with L16.1-ADS-2460 using two high-resolution gamma-ray detectors. The first detector, an In Situ Object Counting System (ISOCS)-characterized detector, was used in conjunction with the ISOCS Geometry Composer software to quantify grams of holdup. The second detector, a Germanium Gamma-ray Imager (GeGI), was used to visualize the location and relative intensity of the holdup in the cells. Carts and collimators were specially designed to perform optimum assays of the cells. Thick, pencil-beam tungsten collimators were fabricated to allow for extremely precise targeting of items of interest inside the cells. Carts were designed with a wide range of motion to position and align the detectors. A total of 24 measurements were made, each typically 24 hours or longer to provide sufficient statistical precision. This report presents the results of the enhanced characterization for cells 1 and 2. The measured gram values agree very well with results from the 2014 study. In addition, images were created using both the 2014 data and the new GeGI data. The GeGI images of the cell walls reveal significant Pu-238 holdup on the surface of the walls in cells 1 and 2. Additionally, holdup is visible in the two pass-throughs from cell 1 to the wing cabinets. This report documents the final element (exterior measurements coupled with gamma-ray imaging and modeling) of the enhanced characterization of cells 1-5 (East Cell Line).
Current Strategies for Quantitating Fibrosis in Liver Biopsy
Wang, Yan; Hou, Jin-Lin
2015-01-01
Objective: The present mini-review updated the progress in methodologies based on liver biopsy. Data Sources: Articles for the study of liver fibrosis, liver biopsy, or fibrosis assessment published in high-impact peer-reviewed journals from 1980 to 2014. Study Selection: Key articles were selected mainly according to their relevance to this topic and their citations. Results: With the recent progress in chronic liver disease therapeutics comes a pressing need for precise, accurate, and dynamic assessment of hepatic fibrosis and cirrhosis in individual patients. Histopathological information is recognized as the most valuable data for fibrosis assessment. Conventional categorical histology systems describe changes in fibrosis patterns in liver tissue, but the simplified ordinal scores assigned by these systems cannot reflect fibrosis dynamics with sufficient precision and reproducibility. Morphometric assessment by computer-assisted digital image analysis, such as collagen proportionate area (CPA), measures the amount of fibrosis in a tissue section as a continuous variable and has shown independent diagnostic value for the assessment of advanced or late-stage fibrosis. Because of its evident sensitivity to sampling variance, morphometric measurement is best treated as a statistical parameter for the study of large cohorts. Combining state-of-the-art imaging technology with fundamental principles of tissue engineering, structure-based quantitation was recently introduced with a novel proof-of-concept tool, qFibrosis. qFibrosis not only outperformed CPA in accurately and reproducibly differentiating adjacent stages of fibrosis, but also showed promise for facilitating analysis of fibrotic regression and cirrhosis sub-staging. Conclusions: With input from multidisciplinary innovation, liver biopsy assessment, as a new "gold standard," is anticipated to substantially support the accelerated progress of hepatology. PMID:25591571
Quantifying time in sedimentary successions by radio-isotopic dating of ash beds
NASA Astrophysics Data System (ADS)
Schaltegger, Urs
2014-05-01
Sedimentary rock sequences are an accurate record of geological, chemical and biological processes throughout the history of our planet. If we want to know more about the duration or the rates of some of these processes, we can apply methods of absolute age determination, i.e. radio-isotopic dating. Data of highest precision and accuracy, and therefore of highest degree of confidence, are obtained by chemical abrasion, isotope-dilution, thermal ionization mass spectrometry (CA-ID-TIMS) 238U-206Pb dating techniques, applied to magmatic zircon from ash beds that are interbedded with the sediments. This technique allows high-precision age estimates at the 0.1% uncertainty level for single analyses, and down to 0.03% uncertainty for groups of statistically equivalent 206Pb/238U dates. Such high precision is needed, since we would like the precision to be approximately equivalent to, or better than, the (interpolated) duration of ammonoid zones in the Mesozoic (e.g., Ovtcharova et al. 2006), or to match the short feedback rates of biological, climatic, or geochemical cycles after giant volcanic eruptions in large igneous provinces (LIPs), e.g., at the Permian/Triassic or the Triassic/Jurassic boundaries. We also wish to establish as precisely as possible temporal coincidence between the sedimentary record and short-lived volcanic events within the LIPs. Precision and accuracy of the U-Pb data have to be traceable and quantifiable in absolute terms, achieved by direct reference to the international kilogram, via an absolute calibration of the standard and isotopic tracer solutions. Only with perfect control of the precision and accuracy of radio-isotopic data can we confidently determine whether two ages of geological events are really different, and avoid mistaking interlaboratory or interchronometer biases for age differences. The development of unprecedented precision of CA-ID-TIMS 238U-206Pb dates led to the recognition of protracted growth of zircon in a magmatic liquid (see, e.g., Schoene et al. 2012), which then becomes transferred into volcanic ashes as excess dispersion of 238U-206Pb dates (see, e.g., Guex et al. 2012). Zircon crystallizes in the magmatic liquid shortly before the volcanic eruption; we therefore aim at finding the youngest zircon date, or the youngest statistically equivalent cluster of 238U-206Pb dates, as an approximation of ash deposition (Wotzlaw et al. 2013). Time gaps between last zircon crystallization and eruption ("Δt") may be as large as 100-200 ka, at the limits of analytical precision. Understanding the magmatic crystallization history of zircon is the fundamental background for interpreting ash bed dates in a sedimentary succession. Ash beds of different stratigraphic position and age may be generated within different magmatic systems, showing different crystallization histories. A sufficient number of samples (N) is therefore of paramount importance, so as not to lose stratigraphic age control in a given section and to be able to discard samples with large Δt - but how large does "N" have to be? In order to use the youngest zircon or zircons as an approximation of the age of eruption and ash deposition, we need to be sure that we have quantitatively solved the problem of post-crystallization lead loss - but how can we be sure? Ash bed zircons are prone to partial loss of radiogenic lead, because the ashes have been flushed by volcanic gases, as well as by brines during sediment compaction.
We therefore need to analyze a sufficient number of zircons (n) to be sure not to miss the youngest - but how large does "n" have to be? Analysis of trace elements, or of oxygen and hafnium isotopic compositions, in dated zircon may sometimes help to distinguish zircon that is in equilibrium with the last magmatic liquid from zircon recycled from earlier crystallization episodes, or to recognize zircon with partial lead loss (Schoene et al. 2010). Respecting these constraints, we may arrive at accurate correlation of periods of global environmental and biotic disturbance (from ash bed analysis in biostratigraphically or cyclostratigraphically well constrained marine sections) with volcanic activity; examples are the Triassic-Jurassic boundary and the Central Atlantic Magmatic Province (Schoene et al. 2010), or the lower Toarcian oceanic anoxic event and the Karoo Province volcanism (Sell et al. in prep.). High-precision temporal correlations may also be obtained by combining high-precision U-Pb dating with biochronology in the Middle Triassic (Ovtcharova et al., in prep.), or by comparing U-Pb dates with astronomical timescales in the Upper Miocene (Wotzlaw et al., in prep.). References: Guex, J., Schoene, B., Bartolini, A., Spangenberg, J., Schaltegger, U., O'Dogherty, L., et al. (2012). Geochronological constraints on post-extinction recovery of the ammonoids and carbon cycle perturbations during the Early Jurassic. Palaeogeography, Palaeoclimatology, Palaeoecology, 346-347(C), 1-11. Ovtcharova, M., Bucher, H., Schaltegger, U., Galfetti, T., Brayard, A., & Guex, J. (2006). New Early to Middle Triassic U-Pb ages from South China: Calibration with ammonoid biochronozones and implications for the timing of the Triassic biotic recovery. Earth and Planetary Science Letters, 243(3-4), 463-475. Ovtcharova, M., Goudemand, N., Galfetti, Th., Guodun, K., Hammer, O., Schaltegger, U., Bucher, H. Improving accuracy and precision of radio-isotopic and biochronological approaches in dating geological boundaries: The Early-Middle Triassic boundary case. In preparation. Schoene, B., Schaltegger, U., Brack, P., Latkoczy, C., Stracke, A., & Günther, D. (2012). Rates of magma differentiation and emplacement in a ballooning pluton recorded by U-Pb TIMS-TEA, Adamello batholith, Italy. Earth and Planetary Science Letters, 355-356, 162-173. Schoene, B., Latkoczy, C., Schaltegger, U., & Günther, D. (2010). A new method integrating high-precision U-Pb geochronology with zircon trace element analysis (U-Pb TIMS-TEA). Geochimica et Cosmochimica Acta, 74(24), 7144-7159. Schoene, B., Guex, J., Bartolini, A., Schaltegger, U., & Blackburn, T. J. (2010). Correlating the end-Triassic mass extinction and flood basalt volcanism at the 100 ka level. Geology, 38(5), 387-390. Sell, B., Ovtcharova, M., Guex, J., Jourdan, F., Schaltegger, U. Evaluating the link between the Karoo LIP and climatic-biologic events of the Toarcian Stage with high-precision U-Pb geochronology. In preparation. Wotzlaw, J. F., Schaltegger, U., Frick, D. A., Dungan, M. A., Gerdes, A., & Günther, D. (2013). Tracking the evolution of large-volume silicic magma reservoirs from assembly to supereruption. Geology, 41(8), 867-870. Wotzlaw, J. F., Hüsing, S. K., Hilgen, F. J., Schaltegger, U. Testing the gold standard of geochronology against astronomical time: High-precision U-Pb geochronology of orbitally tuned ash beds from the Mediterranean Miocene. In preparation.
ERIC Educational Resources Information Center
National Association for Welfare Research and Statistics, Olympia, WA.
The presentations compiled in these proceedings on welfare and self-sufficiency reflect much of the current research in areas of housing, health, employment and training, welfare and reform, nutrition, child support, child care, and youth. The first section provides information on the conference and on the National Association for Welfare Research…
Conducting Precision Medicine Research with African Americans.
Halbert, Chanita Hughes; McDonald, Jasmine; Vadaparampil, Susan; Rice, LaShanta; Jefferson, Melanie
2016-01-01
Precision medicine is an approach to detecting, treating, and managing disease that is based on individual variation in genetic, environmental, and lifestyle factors. Precision medicine is expected to reduce health disparities, but this will be possible only if studies have adequate representation of racial minorities. It is critical to anticipate the rates at which individuals from diverse populations are likely to participate in precision medicine studies as research initiatives are being developed. We evaluated the likelihood of participating in a clinical study for precision medicine in an observational study conducted between October 2010 and February 2011 in a national sample of African Americans. The outcome was intention to participate in a government-sponsored study that involves providing a biospecimen and generates data that could be shared with other researchers to conduct future studies. One third of respondents would participate in a clinical study for precision medicine. Only gender had a significant independent association with participation intentions. Men had a 1.86-fold (95% CI = 1.11, 3.12; p = 0.02) increased likelihood of participating in a precision medicine study compared to women in the model that included overall barriers and facilitators. In the model with specific participation barriers, distrust was associated with a reduced likelihood of participating in the research described in the vignette (OR = 0.57, 95% CI = 0.34, 0.96, p = 0.04). African Americans may have low enrollment in precision medicine research. As this research is implemented, extensive efforts will be needed to ensure adequate representation. Additional research is needed to identify optimal ways of ethically describing precision medicine studies to ensure sufficient recruitment of racial minorities.
A Study of Particle Beam Spin Dynamics for High Precision Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiedler, Andrew J.
In the search for physics beyond the Standard Model, high precision experiments to measure fundamental properties of particles are an important frontier. One group of such measurements involves magnetic dipole moment (MDM) values as well as searching for an electric dipole moment (EDM), both of which could provide insights about how particles interact with their environment at the quantum level and whether there are undiscovered new particles. For these types of high precision experiments, minimizing statistical uncertainties in the measurements plays a critical role. This work leverages computer simulations to quantify the effects of statistical uncertainty for experiments investigating spin dynamics. In it, analysis of beam properties and lattice design effects on the polarization of the beam is performed. As a case study, the beam lines that will provide polarized muon beams to the Fermilab Muon g-2 experiment are analyzed to determine the effects of correlations between the phase-space variables and the overall polarization of the muon beam.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Gyanender P.; Gonczy, Steve T.; Deck, Christian P.
An interlaboratory round robin study was conducted on the tensile strength of SiC–SiC ceramic matrix composite (CMC) tubular test specimens at room temperature with the objective of expanding the database of mechanical properties of nuclear grade SiC–SiC and establishing the precision and bias statement for standard test method ASTM C1773. The mechanical properties statistics from the round robin study and the precision statistics and precision statement are presented herein. The data show reasonable consistency across the laboratories, indicating that the current C1773-13 ASTM standard is adequate for testing ceramic fiber reinforced ceramic matrix composite tubular test specimens. Furthermore, it was found that the distribution of ultimate tensile strength data was best described with a two-parameter Weibull distribution, while a lognormal distribution provided a good description of the distribution of proportional limit stress data.
Singh, Gyanender P.; Gonczy, Steve T.; Deck, Christian P.; ...
2018-04-19
An interlaboratory round robin study was conducted on the tensile strength of SiC–SiC ceramic matrix composite (CMC) tubular test specimens at room temperature with the objective of expanding the database of mechanical properties of nuclear grade SiC–SiC and establishing the precision and bias statement for standard test method ASTM C1773. The mechanical properties statistics from the round robin study and the precision statistics and precision statement are presented herein. The data show reasonable consistency across the laboratories, indicating that the current C1773-13 ASTM standard is adequate for testing ceramic fiber reinforced ceramic matrix composite tubular test specimens. Furthermore, it was found that the distribution of ultimate tensile strength data was best described with a two-parameter Weibull distribution, while a lognormal distribution provided a good description of the distribution of proportional limit stress data.
The precise time-dependent solution of the Fokker–Planck equation with anomalous diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Ran; Du, Jiulin, E-mail: jiulindu@aliyun.com
2015-08-15
We study the time behavior of the Fokker–Planck equation in Zwanzig's rule (the backward Ito rule) based on the Langevin equation of Brownian motion with an anomalous diffusion in a complex medium. The diffusion coefficient is a function in momentum space and follows a generalized fluctuation–dissipation relation. We obtain the precise time-dependent analytical solution of the Fokker–Planck equation, and at long times the solution approaches a stationary power-law distribution in nonextensive statistics. As a test, we have numerically demonstrated the accuracy and validity of the time-dependent solution. - Highlights: • The precise time-dependent solution of the Fokker–Planck equation with anomalous diffusion is found. • The anomalous diffusion satisfies a generalized fluctuation–dissipation relation. • At long times the time-dependent solution approaches a power-law distribution in nonextensive statistics. • Numerically we have demonstrated the accuracy and validity of the time-dependent solution.
ERIC Educational Resources Information Center
Greer, Wil
2013-01-01
This study identified the variables associated with data-driven instruction (DDI) that are perceived to best predict student achievement. Of the DDI variables discussed in the literature, 51 had a sufficient research base to warrant statistical analysis. Of these, 26 were statistically significant. Multiple regression and an…
Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng
2017-12-01
Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
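The following Python sketch illustrates the kind of dilution-series evaluation described above, using synthetic replicate counts; it is not the published protocol, only an outline of how precision (CV per dilution) and proportionality (a slope fit through the origin) might be quantified.

```python
# Dilution-series sketch: replicate counts at each dilution fraction, then
# precision (CV per dilution) and proportionality (counts ~ k * dilution).
import numpy as np

dilution = np.repeat([1.0, 0.8, 0.6, 0.4, 0.2], 3)       # 5 dilution fractions, 3 replicates
true_conc = 1.0e6                                          # cells/mL in the undiluted sample (assumed)
rng = np.random.default_rng(1)
counts = rng.normal(loc=true_conc * dilution, scale=0.05 * true_conc * dilution)

# Precision: coefficient of variation within each dilution level
for d in np.unique(dilution):
    rep = counts[dilution == d]
    print(f"dilution {d:.1f}: CV = {rep.std(ddof=1) / rep.mean():.3f}")

# Proportionality: least-squares slope through the origin and residual check
k = (dilution @ counts) / (dilution @ dilution)
residuals = counts - k * dilution
print(f"slope k = {k:.3e} cells/mL, RMS relative residual = "
      f"{np.sqrt(np.mean((residuals / (k * dilution))**2)):.3f}")
```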
Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R
ERIC Educational Resources Information Center
Dogan, C. Deha
2017-01-01
Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…
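As a minimal illustration of the resampling idea behind the article (which works in R), here is a percentile bootstrap confidence interval for a mean, written in Python with synthetic data.

```python
# Percentile bootstrap CI for a sample mean (synthetic data).
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=50.0, scale=10.0, size=30)        # observed sample (synthetic)

n_boot = 10_000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_boot)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {sample.mean():.2f}, 95% percentile bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```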
Statistical inference of selection and divergence of rice blast resistance gene Pi-ta
USDA-ARS?s Scientific Manuscript database
The resistance gene Pi-ta has been effectively used to control rice blast disease worldwide. A few recent studies have described the possible evolution of Pi-ta in cultivated and weedy rice. However, evolutionary statistics used for the studies are too limited to precisely understand selection and d...
Determination of the pion-nucleon coupling constant and scattering lengths
NASA Astrophysics Data System (ADS)
Ericson, T. E.; Loiseau, B.; Thomas, A. W.
2002-07-01
We critically evaluate the isovector Goldberger-Miyazawa-Oehme (GMO) sum rule for forward πN scattering using the recent precision measurements of π-p and π-d scattering lengths from pionic atoms. We deduce the charged-pion-nucleon coupling constant, with careful attention to systematic and statistical uncertainties. This determination gives, directly from data, g²c(GMO)/4π = 14.11 ± 0.05(statistical) ± 0.19(systematic) or f²c/4π = 0.0783(11). This value is intermediate between that of indirect methods and the direct determination from backward np differential scattering cross sections. We also use the pionic atom data to deduce the coherent symmetric and antisymmetric sums of the pion-proton and pion-neutron scattering lengths with high precision, namely, (aπ-p + aπ-n)/2 = [-12 ± 2(statistical) ± 8(systematic)] × 10^-4 m_π^-1 and (aπ-p - aπ-n)/2 = [895 ± 3(statistical) ± 13(systematic)] × 10^-4 m_π^-1. For the needs of the present analysis, we improve the theoretical description of the pion-deuteron scattering length.
Wind speed statistics for Goldstone, California, anemometer sites
NASA Technical Reports Server (NTRS)
Berg, M.; Levy, R.; Mcginness, H.; Strain, D.
1981-01-01
An exploratory wind survey at an antenna complex was summarized statistically for application to future windmill designs. Data were collected at six locations from a total of 10 anemometers. Statistics include means, standard deviations, cubes, pattern factors, correlation coefficients, and exponents for power law profile of wind speed. Curves presented include: mean monthly wind speeds, moving averages, and diurnal variation patterns. It is concluded that three of the locations have sufficiently strong winds to justify consideration for windmill sites.
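For readers unfamiliar with the power-law profile mentioned above, a minimal sketch (with made-up heights and speeds) of how the exponent can be estimated from two anemometer levels:

```python
# Power-law wind profile v(z) ≈ v_ref * (z / z_ref)**alpha; estimate alpha
# from mean speeds at two heights. Values are illustrative only.
import numpy as np

z_ref, v_ref = 10.0, 5.2     # m, m/s  (lower anemometer)
z, v = 30.0, 6.4             # m, m/s  (upper anemometer)
alpha = np.log(v / v_ref) / np.log(z / z_ref)
print(f"power-law exponent alpha = {alpha:.2f}")
print(f"extrapolated speed at 50 m: {v_ref * (50.0 / z_ref)**alpha:.2f} m/s")
```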
The Strong Lensing Time Delay Challenge (2014)
NASA Astrophysics Data System (ADS)
Liao, Kai; Dobler, G.; Fassnacht, C. D.; Treu, T.; Marshall, P. J.; Rumbaugh, N.; Linder, E.; Hojjati, A.
2014-01-01
Time delays between multiple images in strong lensing systems are a powerful probe of cosmology. At the moment the application of this technique is limited by the number of lensed quasars with measured time delays. However, the number of such systems is expected to increase dramatically in the next few years. Hundreds of such systems are expected within this decade, while the Large Synoptic Survey Telescope (LSST) is expected to deliver of order 1000 time delays in the 2020s. In order to exploit this bounty of lenses we need to make sure the time delay determination algorithms have sufficiently high precision and accuracy. As a first step to test current algorithms and identify potential areas for improvement we have started a "Time Delay Challenge" (TDC). An "evil" team has created realistic simulated light curves, to be analyzed blindly by "good" teams. The challenge is open to all interested parties. The initial challenge consists of two steps (TDC0 and TDC1). TDC0 consists of a small number of datasets to be used as a training template. The non-mandatory deadline is December 1, 2013. The "good" teams that complete TDC0 will be given access to TDC1. TDC1 consists of thousands of light curves, a number sufficient to test precision and accuracy at the subpercent level necessary for time-delay cosmography. The deadline for responding to TDC1 is July 1, 2014. Submissions will be analyzed and compared in terms of predefined metrics to establish the goodness-of-fit, efficiency, precision, and accuracy of current algorithms. This poster describes the challenge in detail and gives instructions for participation.
NASA Astrophysics Data System (ADS)
Profe, Jörn; Ohlendorf, Christian
2017-04-01
XRF scanning has been the state-of-the-art technique for geochemical analyses in marine and lacustrine sedimentology for more than a decade. However, little attention has been paid to data precision and technical limitations so far. Using homogenized, dried and powdered samples (certified geochemical reference standards and samples from a lithologically-contrasting loess-paleosol sequence) minimizes many adverse effects that influence the XRF signal when analyzing wet sediment cores. This allows the investigation of data precision under ideal conditions and documents a new application of the XRF core-scanner technology at the same time. Reliable interpretations of XRF results require evaluation of the data precision of single elements as a function of X-ray tube, measurement time, sample compaction and quality of peak fitting. Data precision was determined from ten-fold measurement of each sample. The data precision of XRF measurements theoretically obeys Poisson statistics. Fe and Ca exhibit the largest deviations from Poisson statistics; the same elements show the smallest mean relative standard deviations, in the range from 0.5% to 1%. This represents the technical limit of data precision achievable by the installed detector. Measurement times ≥ 30 s reveal mean relative standard deviations below 4% for most elements. The quality of peak fitting is only relevant for elements with overlapping fluorescence lines, such as Ba, Ti and Mn, or for elements with low concentrations, such as Y. Differences in sample compaction are marginal and do not change the mean relative standard deviation considerably. Data precision is in the range reported for geochemical reference standards measured by conventional techniques. Therefore, XRF scanning of discrete samples provides a cost- and time-efficient alternative to conventional multi-element analyses. As the best trade-off between economical operation and data quality, we recommend a measurement time of 30 s, resulting in a total scan time of 30 minutes for 30 samples.
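A minimal sketch of the Poisson benchmark invoked above, using synthetic counts: the theoretical relative standard deviation of a counting measurement is 1/sqrt(counts), which can be compared against the spread observed over ten repeat scans.

```python
# Compare observed relative standard deviation over ten repeats with the
# Poisson counting limit. Counts are synthetic; 40,000 counts per 30 s scan is assumed.
import numpy as np

rng = np.random.default_rng(7)
mean_counts = 40_000
repeats = rng.poisson(lam=mean_counts, size=10)

rsd_observed = repeats.std(ddof=1) / repeats.mean()
rsd_poisson = 1.0 / np.sqrt(repeats.mean())
print(f"observed RSD  = {100 * rsd_observed:.2f} %")
print(f"Poisson limit = {100 * rsd_poisson:.2f} %")   # excess scatter points to instrument effects
```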
Simplified Rotation In Acoustic Levitation
NASA Technical Reports Server (NTRS)
Barmatz, M. B.; Gaspar, M. S.; Trinh, E. H.
1989-01-01
New technique based on old discovery used to control orientation of object levitated acoustically in axisymmetric chamber. Method does not require expensive equipment like additional acoustic drivers of precisely adjustable amplitude, phase, and frequency. Reflecting object acts as second source of sound. If reflecting object large enough, close enough to levitated object, or focuses reflected sound sufficiently, Rayleigh torque exerted on levitated object by reflected sound controls orientation of object.
ERIC Educational Resources Information Center
In'nami, Yo; Koizumi, Rie
2013-01-01
The importance of sample size, although widely discussed in the literature on structural equation modeling (SEM), has not been widely recognized among applied SEM researchers. To narrow this gap, we focus on second language testing and learning studies and examine the following: (a) Is the sample size sufficient in terms of precision and power of…
Application of high-precision two-way ranging to Galileo Earth-1 encounter navigation
NASA Technical Reports Server (NTRS)
Pollmeier, V. M.; Thurman, S. W.
1992-01-01
The application of precision two-way ranging to orbit determination with relatively short data arcs is investigated for the Galileo spacecraft's approach to its first Earth encounter (December 8, 1990). Analysis of previous S-band (2.3-GHz) ranging data acquired from Galileo indicated that under good signal conditions submeter precision and 10-m ranging accuracy were achieved. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. A range data filtering technique, in which explicit modeling of range measurement bias parameters for each station pass is utilized, is shown to largely remove the systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The accuracy of the Galileo orbit solutions obtained with S-band Doppler and precision ranging were found to be consistent with simple theoretical calculations, which predicted that angular accuracies of 0.26-0.34 microrad were achievable. In addition, the navigation accuracy achieved with precision ranging was marginally better than that obtained using delta-differenced one-way range (delta DOR), the principal data type that was previously used to obtain spacecraft angular position measurements operationally.
[Precision medicine: new opportunities and challenges for molecular epidemiology].
Song, Jing; Hu, Yonghua
2016-04-01
Since the completion of the Human Genome Project in 2003 and the announcement of the Precision Medicine Initiative by U.S. President Barack Obama in January 2015, human beings have initially completed the "three steps" of "genomics to biology, genomics to health, and genomics to society." As a new inter-discipline, the emergence and development of precision medicine have relied on the support and promotion from biological science, basic medicine, clinical medicine, epidemiology, statistics, sociology and information science, etc. Meanwhile, molecular epidemiology, as a cross-discipline of epidemiology and molecular biology, is considered to be the core force promoting precision medicine. This article is based on the characteristics and research progress of precision medicine and molecular epidemiology, focusing on the contribution and significance of molecular epidemiology to precision medicine, and exploring possible opportunities and challenges in the future.
Why Current Statistics of Complementary Alternative Medicine Clinical Trials is Invalid.
Pandolfi, Maurizio; Carreras, Giulia
2018-06-07
It is not sufficiently known that frequentist statistics cannot provide direct information on the probability that the research hypothesis tested is correct. The error resulting from this misunderstanding is compounded when the hypotheses under scrutiny have precarious scientific bases, as those of complementary alternative medicine (CAM) generally do. In such cases, it is mandatory to use inferential statistics that take into account the prior probability that the hypothesis tested is true, such as Bayesian statistics. The authors show that, under such circumstances, no real statistical significance can be achieved in CAM clinical trials. In this respect, CAM trials involving human material are also hardly defensible from an ethical viewpoint.
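A small numerical illustration of the authors' point, using the standard positive-predictive-value algebra (not taken from the article): even a statistically significant result leaves a low posterior probability when the prior probability of the hypothesis is low.

```python
# P(hypothesis true | p < alpha) = power*prior / (power*prior + alpha*(1 - prior))
def posterior_given_significance(prior, alpha=0.05, power=0.8):
    """Probability the tested hypothesis is true given a significant result."""
    return (power * prior) / (power * prior + alpha * (1.0 - prior))

for prior in (0.50, 0.10, 0.01):   # 0.01 ~ a scientifically precarious hypothesis
    print(f"prior = {prior:.2f} -> posterior after significant result = "
          f"{posterior_given_significance(prior):.2f}")
```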
Assessment of Alternative [U] and [Th] Zircon Standards for SIMS
NASA Astrophysics Data System (ADS)
Monteleone, B. D.; van Soest, M. C.; Hodges, K.; Moore, G. M.; Boyce, J. W.; Hervig, R. L.
2009-12-01
The quality of in situ (U-Th)/He zircon dates is dependent upon the accuracy and precision of spatially distributed [U] and [Th] measurements on often complexly zoned zircon crystals. Natural zircon standards for SIMS traditionally have been used to obtain precise U-Pb ages rather than precise U and Th concentration. [U] and [Th] distributions within even the most homogeneous U-Pb age standards are not sufficient to make good microbeam standards (i.e., yield good precision: 2σ < 5%) for (U-Th)/He dates. In the absence of sufficiently homogeneous natural zircon crystals, we evaluate the use of the NIST 610 glass standard and a synthetic polycrystalline solid “zircon synrock” made by powdering and pressing natural zircon crystals at 2 GPa and 1100°C within a 13 mm piston cylinder for 24 hours. SIMS energy spectra and multiple spot analyses help assess the matrix-dependence of secondary ion emission and [U] and [Th] homogeneity of these materials. Although spot analyses on NIST 610 glass yielded spatially consistent ratios of 238U/30Si and 232Th/30Si (2σ = 2%, n = 14), comparison of energy spectra collected on glass and zircon reveal significant differences in U, UO, Th, and ThO ion intensities over the range of initial kinetic energies commonly used for trace element analyses. Computing [U] and [Th] in zircon using NIST glass yields concentrations that vary by more than 10% for [U] and [Th], depending on the initial kinetic energy and ion mass (elemental, oxide, or sum of elemental and oxide) used for the analysis. The observed effect of chemistry on secondary ion energy spectra suggests that NIST glass cannot be used as a standard for trace [U] and [Th] in zircon without a correction factor (presently unknown). Energy spectra of the zircon synrock are similar to those of natural zircon, suggesting matrix compatibility and therefore potential for accurate standardization. Spot analyses on the zircon powder pellets, however, show that adequate homogeneity of [U] and [Th] (2σ = 37% and 33% for 238U/30Si and 232Th/30Si, respectively, n = 8) has yet to be achieved. Modeling shows that homogenization of [U] and [Th] within these pellets requires preparation of powders with <2 micron sized particles, which has yet to be achieved in sample preparation. Thus, the zircon synrock pellet remains a viable potential [U], [Th] standard, although the preparation of a sufficiently fine grained, homogeneous pellet is a work in progress.
2014-11-01
Kullback, S., & Leibler, R. (1951). On information and sufficiency. Annals of Mathematical Statistics, 22, 79... cognitive challenges of sensemaking only informally, using conceptual notions like "framing" and "re-framing", which are not sufficient to support T&E in... appropriate frame(s) from memory. Assess the Frame: Evaluate the quality of fit between data and frame. Generate Hypotheses: Use the current
Development of Pulsar Detection Methods for a Galactic Center Search
NASA Astrophysics Data System (ADS)
Thornton, Stephen; Wharton, Robert; Cordes, James; Chatterjee, Shami
2018-01-01
Finding pulsars within the inner parsec of the galactic center would be incredibly beneficial: for pulsars sufficiently close to Sagittarius A*, extremely precise tests of general relativity in the strong field regime could be performed through measurement of post-Keplerian parameters. Binary pulsar systems with sufficiently short orbital periods could provide the same laboratories with which to test existing theories. Fast and efficient methods are needed to parse large sets of time-domain data from different telescopes to search for periodicity in signals and differentiate radio frequency interference (RFI) from pulsar signals. Here we demonstrate several techniques to reduce red noise (low-frequency interference), generate signals from pulsars in binary orbits, and create plots that allow for fast detection of both RFI and pulsars.
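The sketch below illustrates, with synthetic data and assumed parameters, the two processing steps named above: a running-median high-pass filter to suppress red noise, followed by an FFT-based periodicity search. It is not the project's actual pipeline.

```python
# Red-noise removal + periodicity search on a synthetic pulsed time series.
import numpy as np
from scipy.ndimage import median_filter

fs = 250.0                                               # samples per second (assumed)
t = np.arange(0, 60.0, 1.0 / fs)
rng = np.random.default_rng(3)
red_noise = np.cumsum(rng.normal(0, 0.02, t.size))       # slow wander (red noise)
pulses = 0.5 * (np.sin(2 * np.pi * 7.3 * t) > 0.95)      # narrow pulses at 7.3 Hz
data = pulses + red_noise + rng.normal(0, 0.3, t.size)

detrended = data - median_filter(data, size=int(fs))     # remove structure slower than ~1 s
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(detrended.size, d=1.0 / fs)
mask = freqs > 1.0
print(f"strongest periodicity above 1 Hz: {freqs[mask][np.argmax(power[mask])]:.2f} Hz")
```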
$B$- and $D$-meson leptonic decay constants from four-flavor lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bazavov, A.; Bernard, C.; Brown, N.
We calculate the leptonic decay constants of heavy-light pseudoscalar mesons with charm and bottom quarks in lattice quantum chromodynamics on four-flavor QCD gauge-field configurations with dynamical u, d, s, and c quarks. We analyze over twenty isospin-symmetric ensembles with six lattice spacings down to a ≈ 0.03 fm and several values of the light-quark mass down to the physical value (m_u + m_d)/2. We employ the highly-improved staggered-quark (HISQ) action for the sea and valence quarks; on the finest lattice spacings, discretization errors are sufficiently small that we can calculate the B-meson decay constants with the HISQ action for the first time directly at the physical b-quark mass. We obtain the most precise determinations to date of the D- and B-meson decay constants and their ratios, f(D+) = 212.6(0.5) MeV, f(Ds) = 249.8(0.4) MeV, f(Ds)/f(D+) = 1.1749(11), f(B+) = 189.4(1.4) MeV, f(Bs) = 230.7(1.2) MeV, f(Bs)/f(B+) = 1.2180(49), where the errors include statistical and all systematic uncertainties. Our results for the B-meson decay constants are three times more precise than the previous best lattice-QCD calculations, and bring the QCD errors in the Standard-Model predictions for the rare leptonic decays B(Bs → μ+μ-) = 3.65(11) × 10^-9, B(B0 → μ+μ-) = 1.00(3) × 10^-11, and B(B0 → μ+μ-)/B(Bs → μ+μ-) = 0.00264(7) to well below other sources of uncertainty. As a byproduct of our analysis, we also update our previously published results for the light-quark-mass ratios and the scale-setting quantities f(p4s), M(p4s), and R(p4s). We obtain the most precise lattice-QCD determination to date of the ratio f(K+)/f(π+) = 1.1950(+15/-22).
Feder, Paul I; Ma, Zhenxu J; Bull, Richard J; Teuschler, Linda K; Rice, Glenn
2009-01-01
In chemical mixtures risk assessment, the use of dose-response data developed for one mixture to estimate risk posed by a second mixture depends on whether the two mixtures are sufficiently similar. While evaluations of similarity may be made using qualitative judgments, this article uses nonparametric statistical methods based on the "bootstrap" resampling technique to address the question of similarity among mixtures of chemical disinfectant by-products (DBP) in drinking water. The bootstrap resampling technique is a general-purpose, computer-intensive approach to statistical inference that substitutes empirical sampling for theoretically based parametric mathematical modeling. Nonparametric, bootstrap-based inference involves fewer assumptions than parametric normal theory based inference. The bootstrap procedure is appropriate, at least in an asymptotic sense, whether or not the parametric, distributional assumptions hold, even approximately. The statistical analysis procedures in this article are initially illustrated with data from 5 water treatment plants (Schenck et al., 2009), and then extended using data developed from a study of 35 drinking-water utilities (U.S. EPA/AMWA, 1989), which permits inclusion of a greater number of water constituents and increased structure in the statistical models.
A Concept for Directly Coupled Pulsed Electromagnetic Acceleration of Plasmas
NASA Technical Reports Server (NTRS)
Thio, Y.C. Francis; Cassibry, Jason T.; Eskridge, Richard; Smith, James; Wu, S. T.; Rodgers, Stephen L. (Technical Monitor)
2001-01-01
Plasma jets with high momentum flux density are required for a variety of applications in propulsion research. Methods of producing these plasma jets are being investigated at NASA Marshall Space Flight Center. The experimental goal in the immediate future is to develop plasma accelerators which are capable of producing plasma jets with momentum flux density represented by velocities up to 200 km/s and ion density up to 10^24 per cubic meter, with sufficient precision and reproducibility in their properties, and with sufficiently high efficiency. The jets must be sufficiently focused to allow them to be transported over several meters. A plasma accelerator concept is presented that might be able to meet these requirements. It is a self-switching, shaped coaxial pulsed plasma thruster, with focusing of the plasma flow by shaping the muzzle current distribution as in plasma focus devices, and by mechanical tapering of the gun walls. Some 2-D MHD modeling in support of the conceptual design will be presented.
Zender, Charles S.
2016-09-19
Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25–80% and 5–65%, respectively, for single-precision values (the most common case for climate data) quantized to retain 1–5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1–2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
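A minimal Python sketch of the Bit Grooming idea (not the NCO implementation); the number of retained mantissa bits per decimal digit and the single guard bit are assumptions, and special values (zero, NaN, Inf) are ignored.

```python
# Alternately zero ("shave") and one ("set") the discarded mantissa bits so the
# quantization error is two-sided rather than biased low. Assumes float64 input.
import numpy as np

def bitgroom(data, nsd=3):
    keep = int(np.ceil(nsd * np.log2(10))) + 1       # mantissa bits kept for nsd decimal digits (+1 guard bit, assumed)
    drop = 52 - keep                                 # IEEE-754 double has 52 explicit mantissa bits
    ints = np.asarray(data, dtype=np.float64).copy().view(np.uint64)
    shave_mask = (~np.uint64(0)) << np.uint64(drop)  # zeros in the low `drop` bits
    set_mask = ~shave_mask                           # ones in the low `drop` bits
    flat = ints.ravel()                              # alternate over elements in memory order
    flat[0::2] &= shave_mask                         # shave even-indexed values
    flat[1::2] |= set_mask                           # set odd-indexed values
    return ints.view(np.float64)

x = np.linspace(0.0001, 1.0, 8)
print(bitgroom(x, nsd=3))                            # agrees with x to roughly 3 significant digits
```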
Nonlinear Statistical Estimation with Numerical Maximum Likelihood
1974-10-01
probably most directly attributable to the speed, precision and compactness of the linear programming algorithm exercised; the mutual primal-dual... discriminant analysis is to classify the individual as a member of π1 or π2 according to the relative... Introduction to the Dissertation; Introduction to Statistical Estimation Theory; Choice of Estimator... Density Functions; Choice of Estimator
Direct visualization of atomically precise nitrogen-doped graphene nanoribbons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yi; Zhang, Yanfang; Li, Geng
2014-07-14
We have fabricated atomically precise nitrogen-doped chevron-type graphene nanoribbons by using the on-surface synthesis technique combined with nitrogen substitution of the precursors. Scanning tunneling microscopy and spectroscopy indicate that the well-defined nanoribbons tend to align side-by-side with their neighbors, with a band gap of 1.02 eV, which is in good agreement with the density functional theory calculation result. The influence of high precursor coverage on the quality of the nanoribbons is also studied. We find that graphene nanoribbons with sufficient aspect ratios can only be fabricated at sub-monolayer precursor coverage. This work provides a way to construct atomically precise nitrogen-doped graphene nanoribbons.
Characterization of a Combined CARS and Interferometric Rayleigh Scattering System
NASA Technical Reports Server (NTRS)
Tedder, Sarah A.; Bivolaru, Daniel; Danehy, Paul M.; Weikl, M. C.; Beyrau, F.; Seeger, T.; Cutler, Andrew D.
2007-01-01
This paper describes the characterization of a combined Coherent anti-Stokes Raman Spectroscopy and Interferometric Rayleigh Scattering (CARS-IRS) system by reporting the accuracy and precision of the measurements of temperature, species mole fractions of N2, O2, and H2, and two components of velocity. A near-adiabatic H2-air Hencken burner flame was used to provide known properties for measurements made with the system. The measurement system is also demonstrated in a small-scale Mach 1.6 H2-air combustion-heated supersonic jet with a co-flow of H2. The system is found to have a precision that is sufficient to resolve fluctuations of flow properties in the mixing layer of the jet.
Scanner imaging systems, aircraft
NASA Technical Reports Server (NTRS)
Ungar, S. G.
1982-01-01
The causes and effects of distortion in aircraft scanner data are reviewed, and an approach to reduce distortions by modelling the effect of aircraft motion on the scanner scene is discussed. With the advent of advanced satellite-borne scanner systems, the geometric and radiometric correction of aircraft scanner data has become increasingly important. Corrections are needed to reliably simulate observations obtained by such systems for purposes of evaluation. It is found that if sufficient navigational information is available, aircraft scanner coordinates may be related very precisely to planimetric ground coordinates. However, the potential for a multivalued remapping transformation (i.e., scan lines crossing each other) adds an inherent uncertainty to any radiometric resampling scheme, one that depends on the precise geometry of the scan and ground pattern.
Lawson concepts and criticality in DT fusion reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lartigue, J.G.
1987-12-01
The original Lawson concepts (amplification factor R and parameter nτ) as well as their applications in DT reactors are discussed in two cases: the ignition regime and the subignition regime in a self-sufficient plant. The modified Lawson factor, or internal amplification factor R_a (a function of alpha power), is proposed as a means to measure the ignition level reached by the plasma in a more precise way than that given by the collective parameter (nτkT). The self-sufficiency factor (δ) is proposed as a means to measure the plant self-sufficiency, δ being more significant than the traditional Q factor. It is stated that the ignition regime (R_a = 1) is equivalent to a critical state (energy equilibrium); then, the corresponding critical mass concept is proposed. The analysis of the R_a relationship with temperature (kT), (nτ), and the recirculating factor (ε) gives the conditions for the reactor to reach ignition or for the plant to reach self-sufficiency; it also shows that an approach to ignition is not improved by heating from 50 to 100 keV.
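For context, a standard DT power-balance sketch of the Lawson-type ignition condition (not the author's R_a formalism) is given below.

```latex
% Assumed standard power balance for a 50/50 DT plasma (n_D = n_T = n/2):
%   alpha heating   P_alpha = (n^2/4) <sigma v> E_alpha ,  E_alpha = 3.5 MeV
%   thermal losses  P_loss  = 3 n k T / tau_E
% Ignition (alpha heating alone sustains the plasma) requires P_alpha >= P_loss:
\[
  \frac{n^{2}}{4}\,\langle\sigma v\rangle\,E_{\alpha} \;\ge\; \frac{3\,n\,kT}{\tau_{E}}
  \quad\Longrightarrow\quad
  n\,\tau_{E} \;\ge\; \frac{12\,kT}{\langle\sigma v\rangle\,E_{\alpha}} .
\]
```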
2009-12-01
events. Work associated with aperiodic tasks has the same statistical behavior and the same timing requirements. The timing deadlines are soft. • Sporadic... answers, but it is possible to calculate how precise the estimates are. Simulation-based performance analysis of a model includes a statistical... to evaluate all possible states in a timely manner. This is the principal reason for resorting to simulation and statistical analysis to evaluate
Karimi, Davood; Samei, Golnoosh; Kesch, Claudia; Nir, Guy; Salcudean, Septimiu E
2018-05-15
Most of the existing convolutional neural network (CNN)-based medical image segmentation methods are based on methods originally developed for segmentation of natural images. Therefore, they largely ignore the differences between the two domains, such as the smaller degree of variability in the shape and appearance of the target volume and the smaller amounts of training data in medical applications. We propose a CNN-based method for prostate segmentation in MRI that employs statistical shape models to address these issues. Our CNN predicts the location of the prostate center and the parameters of the shape model, which determine the position of prostate surface keypoints. To train such a large model for segmentation of 3D images using small data, (1) we adopt a stage-wise training strategy by first training the network to predict the prostate center and subsequently adding modules for predicting the parameters of the shape model and prostate rotation, (2) we propose a data augmentation method whereby the training images and their prostate surface keypoints are deformed according to the displacements computed based on the shape model, and (3) we employ various regularization techniques. Our proposed method achieves a Dice score of 0.88, which is obtained by using both elastic-net and spectral dropout for regularization. Compared with a standard CNN-based method, our method shows significantly better segmentation performance on the prostate base and apex. Our experiments also show that data augmentation using the shape model significantly improves the segmentation results. Prior knowledge about the shape of the target organ can improve the performance of CNN-based segmentation methods, especially where image features are not sufficient for a precise segmentation. Statistical shape models can also be employed to synthesize additional training data that can ease the training of large CNNs.
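A hedged sketch of the shape-model-based augmentation idea (not the authors' code): build a PCA statistical shape model from training keypoint sets and sample its modes to synthesize new, plausible keypoint configurations. All data and dimensions below are synthetic.

```python
# PCA statistical shape model: mean shape + principal modes, sampled to
# generate augmented keypoint sets.
import numpy as np

rng = np.random.default_rng(5)
n_shapes, n_keypoints = 40, 100
# training surface keypoints, flattened to (n_shapes, 3*n_keypoints); synthetic here
shapes = rng.normal(size=(n_shapes, 3 * n_keypoints))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
U, s, Vt = np.linalg.svd(centered, full_matrices=False)   # principal modes of variation
eigvals = s**2 / (n_shapes - 1)
n_modes = 5

def sample_shape(scale=1.0):
    """Draw one augmented shape: mean + random combination of the first modes."""
    b = rng.normal(0.0, scale * np.sqrt(eigvals[:n_modes]))
    return (mean_shape + Vt[:n_modes].T @ b).reshape(n_keypoints, 3)

augmented = [sample_shape() for _ in range(10)]            # deformed keypoint sets for augmentation
print(len(augmented), augmented[0].shape)
```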
Identifiability of PBPK Models with Applications to ...
Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss different types of identifiability that occur in PBPK models and give reasons why they occur. We particularly focus on how the mathematical structure of a PBPK model and lack of appropriate data can lead to statistical models in which it is impossible to estimate at least some parameters precisely. Methods are reviewed which can determine whether a purely linear PBPK model is globally identifiable. We propose a theorem which determines when identifiability at a set of finite and specific values of the mathematical PBPK model (global discrete identifiability) implies identifiability of the statistical model. However, we are unable to establish conditions that imply global discrete identifiability, and conclude that the only safe approach to analysis of PBPK models involves Bayesian analysis with truncated priors. Finally, computational issues regarding posterior simulations of PBPK models are discussed. The methodology is very general and can be applied to numerous PBPK models which can be expressed as linear time-invariant systems. A real data set of a PBPK model for exposure to dimethyl arsinic acid (DMA(V)) is presented to illustrate the proposed methodology. We consider statistical analy
A spatial scan statistic for nonisotropic two-level risk cluster.
Li, Xiao-Zhou; Wang, Jin-Feng; Yang, Wei-Zhong; Li, Zhong-Jie; Lai, Sheng-Jie
2012-01-30
Spatial scan statistic methods are commonly used for geographical disease surveillance and cluster detection. The standard spatial scan statistic does not model any variability in the underlying risks of subregions belonging to a detected cluster. For a multilevel risk cluster, the isotonic spatial scan statistic could model a centralized high-risk kernel in the cluster. Because variations in disease risks are anisotropic owing to different social, economic, or transport factors, the real high-risk kernel will not necessarily take the central place in a whole cluster area. We propose a spatial scan statistic for a nonisotropic two-level risk cluster, which could be used to detect a whole cluster and a noncentralized high-risk kernel within the cluster simultaneously. The performance of the three methods was evaluated through an intensive simulation study. Our proposed nonisotropic two-level method showed better power and geographical precision with two-level risk cluster scenarios, especially for a noncentralized high-risk kernel. Our proposed method is illustrated using the hand-foot-mouth disease data in Pingdu City, Shandong, China in May 2009, compared with two other methods. In this practical study, the nonisotropic two-level method is the only way to precisely detect a high-risk area in a detected whole cluster. Copyright © 2011 John Wiley & Sons, Ltd.
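For orientation, the core computation of a standard single-level Poisson spatial scan statistic is the log-likelihood ratio sketched below; the two-level extension proposed in the paper builds on this idea. The numbers are synthetic.

```python
# Kulldorff-type Poisson LLR for one candidate zone.
# c = observed cases in the zone, E = expected cases, C = total cases in the region.
import numpy as np

def poisson_scan_llr(c, E, C):
    """LLR for a candidate zone; 0 unless the zone is high-risk (c > E)."""
    if c <= E or c <= 0 or c >= C:
        return 0.0
    return c * np.log(c / E) + (C - c) * np.log((C - c) / (C - E))

C_total = 500                                      # total cases in the study region (synthetic)
for c_zone, E_zone in [(60, 40.0), (45, 40.0), (30, 40.0)]:
    print(f"c={c_zone}, E={E_zone}: LLR = {poisson_scan_llr(c_zone, E_zone, C_total):.2f}")
```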
A global view on the Higgs self-coupling at lepton colliders
Di Vita, Stefano; Durieux, Gauthier; Grojean, Christophe; ...
2018-02-28
We perform a global effective-field-theory analysis to assess the precision on the determination of the Higgs trilinear self-coupling at future lepton colliders. Two main scenarios are considered, depending on whether the center-of-mass energy of the colliders is sufficient or not to access Higgs pair production processes. Low-energy machines allow for ~40% precision on the extraction of the Higgs trilinear coupling through the exploitation of next-to-leading-order effects in single Higgs measurements, provided that runs at both 240/250 GeV and 350 GeV are available with integrated luminosities in the few ab^-1 range. A global fit, including possible deviations in other SM couplings, is essential in this case to obtain a robust determination of the Higgs self-coupling. High-energy machines can easily achieve a ~20% precision through Higgs pair production processes. In this case, the impact of additional coupling modifications is milder, although not completely negligible.
FLEET Velocimetry Measurements on a Transonic Airfoil
NASA Technical Reports Server (NTRS)
Burns, Ross A.; Danehy, Paul M.
2017-01-01
Femtosecond laser electronic excitation tagging (FLEET) velocimetry was used to study the flowfield around a symmetric, transonic airfoil in the NASA Langley 0.3-m TCT facility. A nominal Mach number of 0.85 was investigated with a total pressure of 125 kPa and total temperature of 280 K. Two components of velocity were measured along vertical profiles at different locations above, below, and aft of the airfoil at angles of attack of 0 deg, 3.5 deg, and 7 deg. Measurements were assessed for their accuracy, precision, dynamic range, spatial resolution, and overall measurement uncertainty in the context of the applied flowfield. Measurement precisions as low as 1 m/s were observed, while overall uncertainties ranged from 4 to 5 percent. Velocity profiles within the wake showed sufficient accuracy, precision, and sensitivity to resolve both the mean and fluctuating velocities and general flow physics such as shear layer growth. Evidence of flow separation is found at high angles of attack.
A global view on the Higgs self-coupling at lepton colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Vita, Stefano; Durieux, Gauthier; Grojean, Christophe
We perform a global effective-field-theory analysis to assess the precision on the determination of the Higgs trilinear self-coupling at future lepton colliders. Two main scenarios are considered, depending on whether the center-of-mass energy of the colliders is sufficient or not to access Higgs pair production processes. Low-energy machines allow for ~40% precision on the extraction of the Higgs trilinear coupling through the exploitation of next-to-leading-order effects in single Higgs measurements, provided that runs at both 240/250 GeV and 350 GeV are available with integrated luminosities in the few ab^-1 range. A global fit, including possible deviations in other SM couplings, is essential in this case to obtain a robust determination of the Higgs self-coupling. High-energy machines can easily achieve a ~20% precision through Higgs pair production processes. In this case, the impact of additional coupling modifications is milder, although not completely negligible.
Crowdsourcing as an Analytical Method: Metrology of Smartphone Measurements in Heritage Science.
Brigham, Rosie; Grau-Bové, Josep; Rudnicka, Anna; Cassar, May; Strlic, Matija
2018-06-18
This research assesses the precision, repeatability, and accuracy of crowdsourced scientific measurements, and whether their quality is sufficient to provide usable results. Measurements of colour and area were chosen because of the possibility of producing them with smartphone cameras. The quality of the measurements was estimated experimentally by comparing data contributed by anonymous participants in heritage sites with reference measurements of known accuracy and precision. Participants performed the measurements by taking photographs with their smartphones, from which colour and dimensional data could be extracted. The results indicate that smartphone measurements provided by citizen scientists can be used to measure changes in colour, but that the performance is strongly dependent on the measured colour coordinate. The same method can be used to measure areas when the difference in colour with the neighbouring areas is large enough. These results render the method useful in some heritage science contexts, but higher precision would be desirable. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
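As a rough illustration of how such crowdsourced colour readings can be scored against a reference (this is not the authors' pipeline), the sketch below computes the CIE76 colour difference dE*ab between hypothetical smartphone-derived CIELAB readings and a lab-grade reference value, using the mean difference as an accuracy figure and the spread of the readings as a precision figure; all numbers are made up.

```python
import numpy as np

# Hypothetical CIELAB readings: each row is (L*, a*, b*) extracted from one
# participant's smartphone photo of the same target patch.
crowd_lab = np.array([
    [52.1, 10.3, -4.8],
    [50.7,  9.6, -5.5],
    [53.0, 11.1, -4.1],
    [51.4, 10.0, -5.0],
])
reference_lab = np.array([51.5, 10.2, -4.9])   # lab-grade reference measurement

# CIE76 colour difference: Euclidean distance in CIELAB space.
delta_e = np.linalg.norm(crowd_lab - reference_lab, axis=1)

accuracy = delta_e.mean()            # mean deviation from the reference
precision = crowd_lab.std(axis=0)    # spread of the crowd readings per coordinate

print(f"mean dE*ab vs reference: {accuracy:.2f}")
print("per-coordinate std (L*, a*, b*):", np.round(precision, 2))
```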
Simulation of scattered fields: Some guidelines for the equivalent source method
NASA Astrophysics Data System (ADS)
Gounot, Yves J. R.; Musafir, Ricardo E.
2011-07-01
Three different approaches to the equivalent source method for simulating scattered fields are compared: two of them deal with monopole sets, the other with multipole expansions. In the first monopole approach, the sources have fixed positions given by specific rules, while in the second one (ESGA), the optimal positions are determined via a genetic algorithm. The 'pros and cons' of each of these approaches are discussed with the aim of providing practical guidelines for the user. It is shown that while both monopole techniques furnish quite good pressure field reconstructions with simple source arrangements, ESGA requires significantly fewer monopoles and, for an equal number of sources, yields better precision. As for the multipole technique, the main advantage is that in principle any precision can be reached, provided the source order is sufficiently high. On the other hand, the results point out that the lack of rules for determining the proper multipole order necessary for a desired precision may constitute a handicap for the user.
Observing exoplanet populations with high-precision astrometry
NASA Astrophysics Data System (ADS)
Sahlmann, Johannes
2012-06-01
This thesis deals with the application of the astrometry technique, consisting in measuring the position of a star in the plane of the sky, for the discovery and characterisation of extra-solar planets. It is feasible only with a very high measurement precision, which motivates the use of space observatories, the development of new ground-based astronomical instrumentation and of innovative data analysis methods: The study of Sun-like stars with substellar companions using CORALIE radial velocities and HIPPARCOS astrometry leads to the determination of the frequency of close brown dwarf companions and to the discovery of a dividing line between massive planets and brown dwarf companions; An observation campaign employing optical imaging with a very large telescope demonstrates sufficient astrometric precision to detect planets around ultra-cool dwarf stars and the first results of the survey are presented; Finally, the design and initial astrometric performance of PRIMA, a new dual-feed near-infrared interferometric observing facility for relative astrometry, are presented.
Yue Xu, Selene; Nelson, Sandahl; Kerr, Jacqueline; Godbole, Suneeta; Patterson, Ruth; Merchant, Gina; Abramson, Ian; Staudenmayer, John; Natarajan, Loki
2018-04-01
Physical inactivity is a recognized risk factor for many chronic diseases. Accelerometers are increasingly used as an objective means to measure daily physical activity. One challenge in using these devices is missing data due to device nonwear. We used a well-characterized cohort of 333 overweight postmenopausal breast cancer survivors to examine missing data patterns of accelerometer outputs over the day. Based on these observed missingness patterns, we created pseudo-simulated datasets with realistic missing data patterns. We developed statistical methods to design imputation and variance weighting algorithms to account for missing data effects when fitting regression models. Bias and precision of each method were evaluated and compared. Our results indicated that not accounting for missing data in the analysis yielded unstable estimates in the regression analysis. Incorporating variance weights and/or subject-level imputation improved precision by >50%, compared to ignoring missing data. We recommend that these simple, easy-to-implement statistical tools be used to improve analysis of accelerometer data.
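A minimal sketch of the two ideas named in the abstract, subject-level imputation and variance weighting, is given below; it is not the authors' algorithm, and the cohort size, missingness rate, outcome model and weights are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: daily activity (counts) for n subjects over d days,
# with some day-level values missing because the device was not worn.
n, d = 50, 7
true_daily = rng.gamma(shape=5.0, scale=100.0, size=(n, d))
missing = rng.random((n, d)) < 0.25              # ~25% non-wear days
daily = np.where(missing, np.nan, true_daily)

# Subject-level imputation: replace a subject's missing days with that
# subject's own mean over observed days.
subj_mean = np.nanmean(daily, axis=1, keepdims=True)
imputed = np.where(np.isnan(daily), subj_mean, daily)
activity = imputed.mean(axis=1)

# Variance weights: subjects with more observed days get more weight,
# a crude stand-in for the paper's variance-weighting idea.
n_obs = (~np.isnan(daily)).sum(axis=1)
w = n_obs / d

# Weighted least squares of a hypothetical outcome (e.g. BMI) on mean activity.
bmi = 30.0 - 0.002 * activity + rng.normal(0, 1.5, n)
X = np.column_stack([np.ones(n), activity])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ bmi)
print("weighted-LS intercept and slope:", np.round(beta, 4))
```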
NASA Astrophysics Data System (ADS)
Cabalín, L. M.; González, A.; Ruiz, J.; Laserna, J. J.
2010-08-01
Statistical uncertainty in the quantitative analysis of solid samples in motion by laser-induced breakdown spectroscopy (LIBS) has been assessed. For this purpose, a LIBS demonstrator was designed and constructed in our laboratory. The LIBS system consisted of a laboratory-scale conveyor belt, a compact optical module and a Nd:YAG laser operating at 532 nm. The speed of the conveyor belt was variable and could be adjusted up to a maximum speed of 2 m s^-1. Statistical uncertainty in the analytical measurements was estimated in terms of precision (reproducibility and repeatability) and accuracy. The results obtained by LIBS on shredded scrap samples under real conditions have demonstrated that the analytical precision and accuracy of LIBS is dependent on the sample geometry, position on the conveyor belt and surface cleanliness. Flat, relatively clean scrap samples exhibited acceptable reproducibility and repeatability; by contrast, samples with an irregular shape or a dirty surface exhibited a poor relative standard deviation.
Liu, Hongcheng; Yao, Tao; Li, Runze; Ye, Yinyu
2017-11-01
This paper concerns the folded concave penalized sparse linear regression (FCPSLR), a class of popular sparse recovery methods. Although FCPSLR yields desirable recovery performance when solved globally, computing a global solution is NP-complete. Despite some existing statistical performance analyses on local minimizers or on specific FCPSLR-based learning algorithms, it remains an open question whether local solutions that are known to admit fully polynomial-time approximation schemes (FPTAS) may already be sufficient to ensure the statistical performance, and whether that statistical performance can be non-contingent on the specific designs of computing procedures. To address the questions, this paper presents the following threefold results: (i) Any local solution (stationary point) is a sparse estimator, under some conditions on the parameters of the folded concave penalties. (ii) Perhaps more importantly, any local solution satisfying a significant subspace second-order necessary condition (S3ONC), which is weaker than the second-order KKT condition, yields a bounded error in approximating the true parameter with high probability. In addition, if the minimal signal strength is sufficient, the S3ONC solution likely recovers the oracle solution. This result also explicates that the goal of improving the statistical performance is consistent with the optimization criteria of minimizing the suboptimality gap in solving the non-convex programming formulation of FCPSLR. (iii) We apply (ii) to the special case of FCPSLR with minimax concave penalty (MCP) and show that under the restricted eigenvalue condition, any S3ONC solution with a better objective value than the Lasso solution entails the strong oracle property. In addition, such a solution generates a model error (ME) comparable to the optimal but exponential-time sparse estimator given a sufficient sample size, while the worst-case ME is comparable to the Lasso in general. Furthermore, computing a solution guaranteed to satisfy the S3ONC admits an FPTAS.
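For readers unfamiliar with FCPSLR, the sketch below illustrates what a "local solution" looks like in practice: a cyclic coordinate descent for least squares with the minimax concave penalty (MCP), using the standard univariate MCP thresholding rule for standardized predictors. It is a generic illustration, not the paper's algorithm, and it does not certify the S3ONC; the data and tuning parameters are made up.

```python
import numpy as np

def mcp_threshold(z, lam, gamma):
    """Univariate MCP solution for a standardized coordinate (gamma > 1)."""
    if abs(z) <= gamma * lam:
        return np.sign(z) * max(abs(z) - lam, 0.0) / (1.0 - 1.0 / gamma)
    return z

def mcp_coordinate_descent(X, y, lam=0.1, gamma=3.0, n_iter=200):
    """Cyclic coordinate descent to a stationary point of the MCP objective.
    Assumes columns of X are standardized (mean 0, sum of squares = n)."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta
    for _ in range(n_iter):
        for j in range(p):
            zj = X[:, j] @ r / n + beta[j]          # partial-residual correlation
            bj = mcp_threshold(zj, lam, gamma)
            r += X[:, j] * (beta[j] - bj)           # update residual in place
            beta[j] = bj
    return beta

# Hypothetical sparse-recovery example.
rng = np.random.default_rng(1)
n, p, s = 200, 50, 5
X = rng.normal(size=(n, p))
X = (X - X.mean(0)) / X.std(0)                      # standardize columns
beta_true = np.zeros(p)
beta_true[:s] = 2.0
y = X @ beta_true + rng.normal(scale=0.5, size=n)

beta_hat = mcp_coordinate_descent(X, y, lam=0.15, gamma=3.0)
print("nonzeros recovered:", np.flatnonzero(np.abs(beta_hat) > 1e-6))
```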
Metz, Thomas; Walewski, Joachim; Kaminski, Clemens F
2003-03-20
Evaluation schemes, e.g., least-squares fitting, are not generally applicable to all types of experiment. If the evaluation schemes were not derived from a measurement model that properly described the experiment to be evaluated, poorer precision or accuracy than attainable from the measured data could result. We outline ways in which statistical data evaluation schemes should be derived for all types of experiment, and we demonstrate them for laser-spectroscopic experiments, in which pulse-to-pulse fluctuations of the laser power cause correlated variations of laser intensity and generated signal intensity. The method of maximum likelihood is demonstrated in the derivation of an appropriate fitting scheme for this type of experiment. Statistical data evaluation contains the following steps. First, one has to provide a measurement model that considers statistical variation of all enclosed variables. Second, an evaluation scheme applicable to this particular model has to be derived or provided. Third, the scheme has to be characterized in terms of accuracy and precision. A criterion for accepting an evaluation scheme is that it have accuracy and precision as close as possible to the theoretical limit. The fitting scheme derived for experiments with pulsed lasers is compared to well-established schemes in terms of fitting power and rational functions. The precision is found to be as much as three times better than for simple least-squares fitting. Our scheme also suppresses the bias on the estimated model parameters that other methods may exhibit if they are applied in an uncritical fashion. We focus on experiments in nonlinear spectroscopy, but the fitting scheme derived is applicable in many scientific disciplines.
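The three-step recipe (measurement model, evaluation scheme derived from it, characterization) can be illustrated generically. The sketch below is not the authors' pulsed-laser likelihood; it fits a hypothetical power-law signal whose noise scales with the signal by minimizing the corresponding negative log-likelihood, and runs a naive unweighted least-squares fit alongside for comparison.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Step 1 -- measurement model: signal s = a * x**b, with noise whose standard
# deviation grows with the signal (a crude stand-in for shot-to-shot fluctuations).
a_true, b_true = 2.0, 1.8
x = np.linspace(0.5, 5.0, 60)
s_true = a_true * x**b_true
y = s_true + rng.normal(scale=0.1 * s_true)

# Step 2 -- evaluation scheme derived from the model: minimize the negative
# log-likelihood with signal-dependent variance.
def neg_log_lik(theta):
    a, b, rel_sigma = theta
    if a <= 0 or rel_sigma <= 0:
        return np.inf
    mu = a * x**b
    sigma = rel_sigma * mu
    return np.sum(np.log(sigma) + 0.5 * ((y - mu) / sigma) ** 2)

ml = minimize(neg_log_lik, x0=[1.0, 1.0, 0.2], method="Nelder-Mead")

# Naive alternative: unweighted least squares ignores the noise structure.
ls = minimize(lambda t: np.sum((y - t[0] * x**t[1]) ** 2), x0=[1.0, 1.0],
              method="Nelder-Mead")

# Step 3 -- characterization would repeat this over many synthetic data sets to
# compare bias and scatter of the two schemes; here we just print one fit.
print("ML  estimate (a, b):", np.round(ml.x[:2], 3))
print("LSQ estimate (a, b):", np.round(ls.x, 3))
```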
NASA Astrophysics Data System (ADS)
Haagmans, G. G.; Verhagen, S.; Voûte, R. L.; Verbree, E.
2017-09-01
Since GPS tends to fail for indoor positioning purposes, alternative methods like indoor positioning systems (IPS) based on Bluetooth low energy (BLE) are developing rapidly. Generally, IPS are deployed in environments covered with obstacles such as furniture, walls, people and electronics influencing the signal propagation. The major factor influencing system performance and the attainment of optimal positioning results is the geometry of the beacons. The geometry of the beacons is limited to the available infrastructure that can be deployed (number of beacons, base stations and tags), which leads to the following challenge: Given a limited number of beacons, where should they be placed in a specified indoor environment, such that the geometry contributes to optimal positioning results? This paper aims to propose a statistical model that is able to select the optimal configuration that satisfies the user requirements in terms of precision. The model requires the definition of a chosen 3D space (in our case 7 × 10 × 6 m), number of beacons, possible user tag locations and a performance threshold (e.g. required precision). For any given set of beacon and receiver locations, the precision and the internal and external reliability can be determined beforehand. As validation, the modeled precision has been compared with observed precision results. The measurements have been performed with an IPS of BlooLoc at a chosen set of user tag locations for a given geometric configuration. Eventually, the model is able to select the optimal geometric configuration out of millions of possible configurations based on a performance threshold (e.g. required precision).
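The geometry-driven core of such a model can be sketched as follows (a simplification, not the authors' full model, which also treats internal and external reliability and BLE-specific error behaviour): linearized range observations give a covariance of the position estimate, i.e. a DOP-style precision figure, and the candidate layout with the best worst-case precision over the tag locations is selected. Beacon sites, tag locations and the per-range noise below are hypothetical.

```python
import numpy as np
from itertools import combinations

def predicted_precision(beacons, tag, sigma_range=0.5):
    """Std. dev. of the 3-D position estimate from linearized range observations
    (a DOP-style figure); sigma_range is the assumed per-range noise in metres."""
    diff = tag - beacons
    A = diff / np.linalg.norm(diff, axis=1, keepdims=True)   # unit line-of-sight rows
    ata = A.T @ A
    if np.linalg.cond(ata) > 1e8:                            # degenerate geometry
        return np.inf
    cov = sigma_range**2 * np.linalg.inv(ata)
    return np.sqrt(np.trace(cov))

# Hypothetical 7 x 10 x 6 m room: candidate beacon sites and possible tag locations.
candidate_sites = np.array([[0, 0, 3], [7, 0, 3], [0, 10, 3], [7, 10, 3],
                            [3.5, 5, 6], [0, 5, 0], [7, 5, 0], [3.5, 0, 0]])
tag_locations = np.array([[2, 3, 1.2], [5, 7, 1.2], [3.5, 5, 1.2], [1, 9, 1.2]])

best = None
for idx in combinations(range(len(candidate_sites)), 4):     # place 4 beacons
    layout = candidate_sites[list(idx)]
    worst = max(predicted_precision(layout, t) for t in tag_locations)
    if best is None or worst < best[0]:
        best = (worst, idx)

print(f"best layout {best[1]} -> worst-case precision {best[0]:.2f} m")
```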
Tarone, Aaron M; Foran, David R
2011-01-01
Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies. © 2010 American Academy of Forensic Sciences.
U.S. Navy Interoperability with its High-End Allies
2000-10-01
Precision weapons require tremendous amounts of information from multiple sensors. Information is first used to plan missions. Then when the weapon is...programed and launched, information must be continuously transmitted at very high rates of speed. The U.S. has developed systems capable of...liberal, on the assumption that advanced sensors can provide sufficient information to judge the severity of incoming threats U.S. allies develop
Response of a WB-47E Airplane to Runway Roughness at Eielson AFB, Alaska, September 1964
NASA Technical Reports Server (NTRS)
Morris, Garland J.; Hall, Albert W.
1965-01-01
An investigation has been conducted to measure the response of a WB-47E airplane to the roughness of the runway at Eielson AFB, Alaska. The acceleration level in the pilot's compartment and the pitching oscillation of the airplane were found to be sufficiently high to possibly cause pilot discomfort and have an adverse effect on the precision of take-off.
ERIC Educational Resources Information Center
Halpern, Arthur M.; Liu, Allen
2008-01-01
Using an easy-to-make cylindrical resonator, students can measure the speed of sound in a gas, u, with sufficiently high precision (by locating standing-wave Lissajous patterns on an oscilloscope) to observe real gas properties at one atmosphere and 300 K. For CO2 and SF6, u is found to be 268.83 and 135.25 m s^-1…
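As a quick consistency check, under assumptions not taken from the article (ideal-gas behaviour and approximate literature heat-capacity ratios of about 1.29 for CO2 and 1.10 for SF6 near 300 K), the ideal-gas prediction u = sqrt(gamma*R*T/M) lands roughly 0.5-1.5% above the quoted measurements, which is the size of real-gas deviation the experiment is designed to resolve.

```python
import math

R, T = 8.314, 300.0                        # J mol^-1 K^-1, K

# Approximate heat-capacity ratios near 300 K (assumed values, not from the
# article) and molar masses in kg mol^-1; measured u values from the abstract.
gases = {"CO2": (1.29, 0.04401, 268.83),
         "SF6": (1.10, 0.14606, 135.25)}

for name, (gamma, M, u_meas) in gases.items():
    u_ideal = math.sqrt(gamma * R * T / M)  # ideal-gas speed of sound
    dev = (u_ideal - u_meas) / u_meas * 100
    print(f"{name}: ideal-gas u = {u_ideal:6.2f} m/s, "
          f"measured {u_meas:6.2f} m/s, deviation {dev:+.1f}%")
```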
Arbitrary nonlinearity is sufficient to represent all functions by neural networks - A theorem
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik YA.
1991-01-01
It is proved that if we have neurons implementing arbitrary linear functions and a neuron implementing one (arbitrary but smooth) nonlinear function g(x), then for every continuous function f(x_1, ..., x_m) of arbitrarily many variables, and for arbitrary ε > 0, we can construct a network that consists of g-neurons and linear neurons, and computes f with precision ε.
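The theorem is an existence result; the following numerical sketch (not the paper's construction) merely illustrates the flavour: linear neurons feed a single smooth nonlinearity g (tanh stands in here for the "arbitrary but smooth" g), only the final linear neuron is fitted by least squares, and the sup-norm error over a grid is reported. Driving the error below a prescribed ε in general requires adding more g-neurons.

```python
import numpy as np

rng = np.random.default_rng(3)

# Target continuous function of two variables on [0, 1]^2.
def f(x1, x2):
    return np.sin(3 * x1) * np.cos(2 * x2) + x1 * x2

g = np.tanh                     # one fixed smooth nonlinearity

# Linear neurons compute w.x + c; each feeds the single nonlinearity g;
# a final linear neuron combines the g-outputs.
n_hidden = 400
W = rng.normal(scale=3.0, size=(n_hidden, 2))
c = rng.uniform(-3, 3, size=n_hidden)

grid = np.stack(np.meshgrid(np.linspace(0, 1, 40),
                            np.linspace(0, 1, 40)), -1).reshape(-1, 2)
targets = f(grid[:, 0], grid[:, 1])

H = g(grid @ W.T + c)                            # hidden-layer activations
a, *_ = np.linalg.lstsq(H, targets, rcond=None)  # fit the output linear neuron
approx = H @ a

eps = np.max(np.abs(approx - targets))
print(f"max |f - network| on the grid: {eps:.4f}")
```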
HPF: The Habitable Zone Planet Finder at the Hobby-Eberly Telescope
NASA Astrophysics Data System (ADS)
Wright, Jason T.; Mahadevan, Suvrath; Hearty, Fred; Monson, Andy; Stefansson, Gudmundur; Ramsey, Larry; Ninan, Joe; Bender, Chad; Kaplan, Kyle; Roy, Arpita; Terrien, Ryan; Robertson, Paul; Halverson, Sam; Schwab, Christian; Kanodia, Shubham
2018-01-01
The Habitable Zone Planet Finder (HPF) is an ultra-stable NIR (ZYJ) high resolution echelle spectrograph on the 10-m Hobby-Eberly Telescope capable of 1-3 m/s Doppler velocimetry on nearby late M dwarfs (M4-M9). This precision is sufficient to detect terrestrial planets in the Habitable Zones of these relatively unexplored stars. Here we present its capabilities and early commissioning results.
Clinical evaluation of a miniaturized desktop breath hydrogen analyzer.
Duan, L P; Braden, B; Clement, T; Caspary, W F; Lembcke, B
1994-10-01
A small desktop electrochemical H2 analyzer (EC-60-Hydrogen monitor) was compared with a stationary electrochemical H2 monitor (GMI-exhaled Hydrogen monitor). The EC-60-H2 monitor shows a high degree of precision for repetitive (n = 10) measurements of standard hydrogen mixtures (CV 1-8%). The response time for completion of measurement is shorter than that of the GMI-exhaled H2 monitor (37 sec. vs 53 sec.; p < 0.0001), while reset times are almost identical (54 sec. vs 51 sec.; n.s.). In a clinical setting, breath H2-concentrations measured with the EC-60-H2 monitor and the GMI-exhaled H2 monitor were in excellent agreement with a linear correlation (Y = 1.12X + 1.022, r2 = 0.9617, n = 115). With increasing H2-concentrations the EC-60-H2 monitor required larger sample volumes for maintaining sufficient precision, and sample volumes greater than 200 ml were required with H2-concentrations > 30 ppm. For routine gastrointestinal function testing, the EC-60-H2 monitor is a satisfactory, reliable, easy-to-use and inexpensive desktop breath hydrogen analyzer, whereas in patients with difficulty in cooperating (children, people with severe pulmonary insufficiency), special care has to be taken to obtain sufficiently large breath samples.
Tutino, Lorenzo; Cianchi, Giovanni; Barbani, Francesco; Batacchi, Stefano; Cammelli, Rita; Peris, Adriano
2010-08-12
The use of lung ultrasound (LUS) in the ICU is increasing, but ultrasonographic patterns of the lung are often difficult for different operators to quantify. The aim of this study was to evaluate the accuracy and quality of LUS reporting after the introduction of a standardized electronic recording sheet. Intensivists were trained in LUS following a teaching programme. From April 2008, an electronic sheet was designed and introduced into the ICU database in order to standardize LUS examination reporting. A mark from 0 to 24 was given for each exam by two senior intensivists not involved in the survey. The mark assigned was based on the completeness of a precise reporting scheme concerning the main findings of LUS. A cut-off of 15 was considered sufficient. The study comprised 12 months of observations and a total of 637 LUS examinations. Initially, although the completeness of the reports improved somewhat, the accuracy and precision of examination reporting remained below 15. The time required to reach sufficient quality was 7 months. A linear trend in the physicians' progress was observed. Uniformity in the teaching programme and the examination reporting system improves the completeness and accuracy of LUS reporting, helping physicians follow the evolution of lung pathology.
NASA Astrophysics Data System (ADS)
Shui, Tao; Yang, Wen-Xing; Chen, Ai-Xi; Liu, Shaopeng; Li, Ling; Zhu, Zhonghu
2018-03-01
We propose a scheme for high-precision two-dimensional (2D) atom localization via four-wave mixing (FWM) in a four-level double-Λ atomic system. Due to the position-dependent atom-field interaction, the 2D position information of the atoms can be directly determined by the measurement of the normalized light intensity of the output FWM-generated field. We further show that, when the position-dependent generated FWM field has become sufficiently intense, efficient back-coupling to the FWM generating state becomes important. This back-coupling pathway leads to competitive multiphoton destructive interference of the FWM generating state by the three supplied fields and one internally generated field. We find that the precision of 2D atom localization can be improved significantly by the multiphoton destructive interference and depends sensitively on the frequency detunings and the pump field intensity. Interestingly enough, we show that adjusting the frequency detunings and the pump field intensity can modify the FWM efficiency significantly, and consequently lead to a redistribution of the atoms. As a result, the atom can be localized in one of four quadrants while maintaining the precision of atom localization.
NASA Technical Reports Server (NTRS)
Pollmeier, Vincent M.; Kallemeyn, Pieter H.; Thurman, Sam W.
1993-01-01
The application of high-accuracy S/S-band (2.1 GHz uplink/2.3 GHz downlink) ranging to orbit determination with relatively short data arcs is investigated for the approach phase of each of the Galileo spacecraft's two Earth encounters (8 December 1990 and 8 December 1992). Analysis of S-band ranging data from Galileo indicated that under favorable signal levels, meter-level precision was attainable. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. Explicit modeling of ranging bias parameters for each station pass is used to largely remove systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The accuracy achieved using the precision range filtering strategy proved markedly better when compared to post-flyby reconstructions than did solutions utilizing a traditional Doppler/range filter strategy. In addition, the navigation accuracy achieved with precision ranging was comparable to that obtained using delta-Differenced One-Way Range, an interferometric measurement of spacecraft angular position relative to a natural radio source, which was also used operationally.
Weber, P.K.; Bacon, C.R.; Hutcheon, I.D.; Ingram, B.L.; Wooden, J.L.
2005-01-01
The ion microprobe has the capability to generate high resolution, high precision isotopic measurements, but analysis of the isotopic composition of strontium, as measured by the 87Sr/86Sr ratio, has been hindered by isobaric interferences. Here we report the first high precision measurements of 87Sr/86Sr by ion microprobe in calcium carbonate samples with moderate Sr concentrations. We use the high mass resolving power (7000 to 9000 M.R.P.) of the SHRIMP-RG ion microprobe in combination with its high transmission to reduce the number of interfering species while maintaining sufficiently high count rates for precise isotopic measurements. The isobaric interferences are characterized by peak modeling and repeated analyses of standards. We demonstrate that by sample-standard bracketing, 87Sr/86Sr ratios can be measured in inorganic and biogenic carbonates with Sr concentrations between 400 and 1500 ppm with ~2‰ external precision (2σ) for a single analysis, and subpermil external precision with repeated analyses. Explicit correction for isobaric interferences (peak-stripping) is found to be less accurate and precise than sample-standard bracketing. Spatial resolution is ~25 μm laterally and 2 μm deep for a single analysis, consuming on the order of 2 ng of material. The method is tested on otoliths from salmon to demonstrate its accuracy and utility. In these growth-banded aragonitic structures, one-week temporal resolution can be achieved. The analytical method should be applicable to other calcium carbonate samples with similar Sr concentrations. Copyright © 2005 Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Pongs, Guido; Bresseler, Bernd; Bergs, Thomas; Menke, Gert
2012-10-01
Today isothermal precision molding of imaging glass optics has become a widely applied and integrated production technology in the optical industry. Especially in consumer electronics (e.g. digital cameras, mobile phones, Blu-ray) many optical systems contain rotationally symmetrical aspherical lenses produced by precision glass molding. But due to higher demands on the complexity and miniaturization of optical elements, the established process chain for precision glass molding is no longer sufficient. Wafer-based molding processes for glass optics manufacturing are becoming more and more interesting for mobile phone applications. Cylindrical lens arrays can also be used in high-power laser systems. The use of unsymmetrical free-form optics allows an increase in the efficiency of optical laser systems. Aixtooling is working on different aspects in the fields of mold manufacturing technologies and molding processes for extremely complex optical components. In terms of array molding technologies, Aixtooling has developed a manufacturing technology for the ultra-precision machining of carbide molds together with European partners. The development covers the machining of multi-lens arrays as well as cylindrical lens arrays. The biggest challenge is the molding of complex free-form optics having no axis of symmetry. A comprehensive CAD/CAM data management along the entire process chain is essential to reach high accuracies on the molded lenses. Within a nationally funded project, Aixtooling is working on a consistent data handling procedure in the process chain for precision molding of free-form optics.
Determinants of Whether or not Mixtures of Disinfection By-products are Similar
This project summary and its related publications provide information on the development of chemical, toxicological and statistical criteria for determining the sufficient similarity of complex chemical mixtures.
Liu, Jen-Pei; Lu, Li-Tien; Liao, C T
2009-09-01
Intermediate precision is one of the most important characteristics for evaluation of precision in assay validation. The current methods for evaluation of within-device precision recommended by the Clinical and Laboratory Standards Institute (CLSI) guideline EP5-A2 are based on the point estimator. On the other hand, in addition to point estimators, confidence intervals can provide a range for the within-device precision with a probability statement. Therefore, we suggest a confidence interval approach for assessment of the within-device precision. Furthermore, under the two-stage nested random-effects model recommended by the approved CLSI guideline EP5-A2, in addition to the current Satterthwaite's approximation and the modified large sample (MLS) methods, we apply the technique of generalized pivotal quantities (GPQ) to derive the confidence interval for the within-device precision. The data from the approved CLSI guideline EP5-A2 illustrate the applications of the confidence interval approach and comparison of results between the three methods. Results of a simulation study on the coverage probability and expected length of the three methods are reported. The proposed method of the GPQ-based confidence intervals is also extended to consider the between-laboratories variation for precision assessment.
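For orientation, a Satterthwaite-style interval of the kind discussed can be sketched for simulated EP5-type data (k runs, n replicates per run); the guideline's exact formulas, and the MLS and GPQ alternatives studied in the paper, differ in detail, so the values below are illustrative only.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)

# Hypothetical EP5-style layout: k runs (e.g. days), n replicates per run.
k, n = 20, 2
between_sd, within_sd = 0.8, 1.2
run_effect = rng.normal(0, between_sd, size=(k, 1))
data = 100 + run_effect + rng.normal(0, within_sd, size=(k, n))

# One-way random-effects ANOVA mean squares.
run_means = data.mean(axis=1)
ms_within = ((data - run_means[:, None]) ** 2).sum() / (k * (n - 1))
ms_between = n * ((run_means - run_means.mean()) ** 2).sum() / (k - 1)

# Within-device variance = repeatability + between-run component,
# written as a linear combination of the mean squares.
c1, c2 = 1.0 / n, (n - 1) / n
var_wd = c1 * ms_between + c2 * ms_within

# Satterthwaite approximate degrees of freedom and chi-square interval.
df1, df2 = k - 1, k * (n - 1)
df_eff = var_wd**2 / ((c1 * ms_between) ** 2 / df1 + (c2 * ms_within) ** 2 / df2)
alpha = 0.05
ci = (df_eff * var_wd / chi2.ppf(1 - alpha / 2, df_eff),
      df_eff * var_wd / chi2.ppf(alpha / 2, df_eff))

print(f"within-device SD estimate: {np.sqrt(var_wd):.3f}")
print(f"95% CI for within-device SD: ({np.sqrt(ci[0]):.3f}, {np.sqrt(ci[1]):.3f})")
```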
Lotfy, Hayam Mahmoud; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom
2014-05-21
Two smart and novel spectrophotometric methods, namely absorbance subtraction (AS) and amplitude modulation (AM), were developed and validated for the determination of a binary mixture of timolol maleate (TIM) and dorzolamide hydrochloride (DOR) in the presence of benzalkonium chloride without prior separation, using a unified regression equation. Additionally, simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for simultaneous determination of the binary mixture, namely simultaneous ratio subtraction (SRS), ratio difference (RD), ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), constant multiplication method (CM) and mean centering of ratio spectra (MCR). The proposed spectrophotometric procedures do not require any separation steps. Accuracy, precision and linearity ranges of the proposed methods were determined and the specificity was assessed by analyzing synthetic mixtures of both drugs. They were applied to their pharmaceutical formulation and the results obtained were statistically compared to those of a reported spectrophotometric method. The statistical comparison showed that there is no significant difference between the proposed methods and the reported one regarding both accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.
Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colon O
2011-01-01
The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. Finally, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, a change that requires a less precise substitution in this subset of participants. A score function that is based on the U-statistic can address these issues of (1) intercurrent SCEs and (2) response variable ascertainments that use different measurements of different precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
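A generic pairwise score in the spirit described (comparable to Finkelstein-Schoenfeld-type comparisons, not the authors' exact statistic) first compares each treated-control pair on the significant clinical event and, only if that comparison is uninformative, on the change in the continuous measure; averaging the pair scores gives a U-statistic. The data below are hypothetical.

```python
import numpy as np

def pair_score(t, c):
    """Compare one treated vs one control patient.
    Each patient is (had_sce, delta); delta is None if the SCE precluded follow-up."""
    t_sce, t_delta = t
    c_sce, c_delta = c
    if t_sce != c_sce:                       # clinical event decides the pair
        return -1 if t_sce else 1
    if t_delta is None or c_delta is None:   # tie on SCE, measurement unavailable
        return 0
    if t_delta == c_delta:
        return 0
    return 1 if t_delta > c_delta else -1    # larger improvement wins

def u_score(treated, control):
    """Average pairwise score; positive values favour the treated group."""
    scores = [pair_score(t, c) for t in treated for c in control]
    return np.mean(scores)

# Hypothetical Phase II data: (significant clinical event, change in measure).
treated = [(False, 5.0), (False, 3.2), (True, None), (False, 1.1)]
control = [(False, 2.0), (True, None), (True, None), (False, 0.5)]

print(f"U-type score (treated vs control): {u_score(treated, control):+.3f}")
```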
Research gaps identified during systematic reviews of clinical trials: glass-ionomer cements.
Mickenautsch, Steffen
2012-06-29
To report the results of an audit concerning research gaps in clinical trials that were accepted for appraisal in authored and published systematic reviews regarding the application of glass-ionomer cements (GIC) in dental practice. Information concerning research gaps in trial precision was extracted, following a framework that included classification of the research gap reasons: 'imprecision of information (results)', 'biased information', 'inconsistency or unknown consistency' and 'not the right information', as well as research gap characterization using PICOS elements: population (P), intervention (I), comparison (C), outcomes (O) and setting (S). Internal trial validity assessment was based on the understanding that successful control for systematic error cannot be assured on the basis of inclusion of adequate methods alone, but also requires empirical evidence about whether such an attempt was successful. A comprehensive and interconnected coverage of GIC-related clinical topics was established. The most common reasons found for gaps in trial precision were lack of sufficient trials and lack of sufficiently large sample sizes. Only a few research gaps were ascribed to 'Lack of information' caused by a focus mainly on surrogate trial outcomes. According to the chosen assessment criteria, a lack of adequate randomisation, allocation concealment and blinding/masking in trials covering all reviewed GIC topics was noted (selection- and detection/performance bias risk). Trial results appear to be less affected by loss-to-follow-up (attrition bias risk). This audit represents an adjunct to the systematic review articles it has covered. Its results do not change the systematic reviews' conclusions but highlight existing research gaps concerning the precision and internal validity of reviewed trials in detail. These gaps should be addressed in future GIC-related clinical research.
A precise measurement of the [Formula: see text] meson oscillation frequency.
Aaij, R; Abellán Beteta, C; Adeva, B; Adinolfi, M; Affolder, A; Ajaltouni, Z; Akar, S; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Anderson, J; Andreassi, G; Andreotti, M; Andrews, J E; Appleby, R B; Aquines Gutierrez, O; Archilli, F; d'Argent, P; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Bachmann, S; Back, J J; Badalov, A; Baesso, C; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Batozskaya, V; Battista, V; Bay, A; Beaucourt, L; Beddow, J; Bedeschi, F; Bediaga, I; Bel, L J; Bellee, V; Belloli, N; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Benton, J; Berezhnoy, A; Bernet, R; Bertolin, A; Bettler, M-O; van Beuzekom, M; Bien, A; Bifani, S; Billoir, P; Bird, T; Birnkraut, A; Bizzeti, A; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borghi, S; Borsato, M; Bowcock, T J V; Bowen, E; Bozzi, C; Braun, S; Britsch, M; Britton, T; Brodzicka, J; Brook, N H; Buchanan, E; Bursche, A; Buytaert, J; Cadeddu, S; Calabrese, R; Calvi, M; Calvo Gomez, M; Campana, P; Campora Perez, D; Capriotti, L; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carniti, P; Carson, L; Carvalho Akiba, K; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cauet, Ch; Cavallero, G; Cenci, R; Charles, M; Charpentier, Ph; Chefdeville, M; Chen, S; Cheung, S-F; Chiapolini, N; Chrzaszcz, M; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coco, V; Cogan, J; Cogneras, E; Cogoni, V; Cojocariu, L; Collazuol, G; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombes, M; Coquereau, S; Corti, G; Corvo, M; Couturier, B; Cowan, G A; Craik, D C; Crocombe, A; Cruz Torres, M; Cunliffe, S; Currie, R; D'Ambrosio, C; Dall'Occo, E; Dalseno, J; David, P N Y; Davis, A; De Aguiar Francisco, O; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Simone, P; Dean, C-T; Decamp, D; Deckenhoff, M; Del Buono, L; Déléage, N; Demmer, M; Derkach, D; Deschamps, O; Dettori, F; Dey, B; Di Canto, A; Di Ruscio, F; Dijkstra, H; Donleavy, S; Dordei, F; Dorigo, M; Dosil Suárez, A; Dossett, D; Dovbnya, A; Dreimanis, K; Dufour, L; Dujany, G; Dupertuis, F; Durante, P; Dzhelyadin, R; Dziurda, A; Dzyuba, A; Easo, S; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; El Rifai, I; Elsasser, Ch; Ely, S; Esen, S; Evans, H M; Evans, T; Falabella, A; Färber, C; Farley, N; Farry, S; Fay, R; Ferguson, D; Fernandez Albor, V; Ferrari, F; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fohl, K; Fol, P; Fontana, M; Fontanelli, F; C Forshaw, D; Forty, R; Frank, M; Frei, C; Frosini, M; Fu, J; Furfaro, E; Gallas Torreira, A; Galli, D; Gallorini, S; Gambetta, S; Gandelman, M; Gandini, P; Gao, Y; García Pardiñas, J; Garra Tico, J; Garrido, L; Gascon, D; Gaspar, C; Gauld, R; Gavardi, L; Gazzoni, G; Gerick, D; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianì, S; Gibson, V; Girard, O G; Giubega, L; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gotti, C; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graverini, E; Graziani, G; Grecu, A; Greening, E; Gregson, S; Griffith, P; Grillo, L; Grünberg, O; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Hadavizadeh, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hall, S; Hamilton, B; Han, X; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; He, J; Head, 
T; Heijne, V; Heister, A; Hennessy, K; Henrard, P; Henry, L; Hernando Morata, J A; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hoballah, M; Hombach, C; Hulsbergen, W; Humair, T; Hussain, N; Hutchcroft, D; Hynds, D; Idzik, M; Ilten, P; Jacobsson, R; Jaeger, A; Jalocha, J; Jans, E; Jawahery, A; Jing, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kandybei, S; Kanso, W; Karacson, M; Karbach, T M; Karodia, S; Kecke, M; Kelsey, M; Kenyon, I R; Kenzie, M; Ketel, T; Khanji, B; Khurewathanakul, C; Kirn, T; Klaver, S; Klimaszewski, K; Kochebina, O; Kolpin, M; Komarov, I; Koopman, R F; Koppenburg, P; Kozeiha, M; Kravchuk, L; Kreplin, K; Kreps, M; Krocker, G; Krokovny, P; Kruse, F; Krzemien, W; Kucewicz, W; Kucharczyk, M; Kudryavtsev, V; K Kuonen, A; Kurek, K; Kvaratskheliya, T; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lanfranchi, G; Langenbruch, C; Langhans, B; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Lees, J-P; Lefèvre, R; Leflat, A; Lefrançois, J; Lemos Cid, E; Leroy, O; Lesiak, T; Leverington, B; Li, Y; Likhomanenko, T; Liles, M; Lindner, R; Linn, C; Lionetto, F; Liu, B; Liu, X; Loh, D; Longstaff, I; Lopes, J H; Lucchesi, D; Lucio Martinez, M; Luo, H; Lupato, A; Luppi, E; Lupton, O; Lusardi, N; Lusiani, A; Machefert, F; Maciuc, F; Maev, O; Maguire, K; Malde, S; Malinin, A; Manca, G; Mancinelli, G; Manning, P; Mapelli, A; Maratas, J; Marchand, J F; Marconi, U; Marin Benito, C; Marino, P; Marks, J; Martellotti, G; Martin, M; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massafferri, A; Matev, R; Mathad, A; Mathe, Z; Matteuzzi, C; Mauri, A; Maurin, B; Mazurov, A; McCann, M; McCarthy, J; McNab, A; McNulty, R; Meadows, B; Meier, F; Meissner, M; Melnychuk, D; Merk, M; Michielin, E; Milanes, D A; Minard, M-N; Mitzel, D S; Molina Rodriguez, J; Monroy, I A; Monteil, S; Morandin, M; Morawski, P; Mordà, A; Morello, M J; Moron, J; Morris, A B; Mountain, R; Muheim, F; Müller, D; Müller, J; Müller, K; Müller, V; Mussini, M; Muster, B; Naik, P; Nakada, T; Nandakumar, R; Nandi, A; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, A D; Nguyen, T D; Nguyen-Mau, C; Niess, V; Niet, R; Nikitin, N; Nikodem, T; Novoselov, A; O'Hanlon, D P; Oblakowska-Mucha, A; Obraztsov, V; Ogilvy, S; Okhrimenko, O; Oldeman, R; Onderwater, C J G; Osorio Rodrigues, B; Otalora Goicochea, J M; Otto, A; Owen, P; Oyanguren, A; Palano, A; Palombo, F; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Pappalardo, L L; Pappenheimer, C; Parkes, C; Passaleva, G; Patel, G D; Patel, M; Patrignani, C; Pearce, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perret, P; Pescatore, L; Petridis, K; Petrolini, A; Petruzzo, M; Picatoste Olloqui, E; Pietrzyk, B; Pilař, T; Pinci, D; Pistone, A; Piucci, A; Playfer, S; Plo Casasus, M; Poikela, T; Polci, F; Poluektov, A; Polyakov, I; Polycarpo, E; Popov, A; Popov, D; Popovici, B; Potterat, C; Price, E; Price, J D; Prisciandaro, J; Pritchard, A; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, W; Quagliani, R; Rachwal, B; Rademacker, J H; Rama, M; Rangel, M S; Raniuk, I; Rauschmayr, N; Raven, G; Redi, F; Reichert, S; Reid, M M; Dos Reis, A C; Ricciardi, S; Richards, S; Rihl, M; Rinnert, K; Rives Molina, V; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Lopez, J A; Rodriguez Perez, P; Roiser, S; Romanovsky, V; Romero Vidal, A; W Ronayne, J; Rotondo, M; Rouvinet, J; Ruf, T; Ruiz Valls, P; Saborido Silva, J J; Sagidova, N; Sail, P; Saitta, B; Salustino Guimaraes, V; Sanchez Mayordomo, C; Sanmartin 
Sedes, B; Santacesaria, R; Santamarina Rios, C; Santimaria, M; Santovetti, E; Sarti, A; Satriano, C; Satta, A; Saunders, D M; Savrina, D; Schael, S; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmelzer, T; Schmidt, B; Schneider, O; Schopper, A; Schubiger, M; Schune, M-H; Schwemmer, R; Sciascia, B; Sciubba, A; Semennikov, A; Sergi, A; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, V; Shires, A; Siddi, B G; Silva Coutinho, R; Silva de Oliveira, L; Simi, G; Sirendi, M; Skidmore, N; Skwarnicki, T; Smith, E; Smith, E; Smith, I T; Smith, J; Smith, M; Snoek, H; Sokoloff, M D; Soler, F J P; Soomro, F; Souza, D; Souza De Paula, B; Spaan, B; Spradlin, P; Sridharan, S; Stagni, F; Stahl, M; Stahl, S; Stefkova, S; Steinkamp, O; Stenyakin, O; Stevenson, S; Stoica, S; Stone, S; Storaci, B; Stracka, S; Straticiuc, M; Straumann, U; Sun, L; Sutcliffe, W; Swientek, K; Swientek, S; Syropoulos, V; Szczekowski, M; Szczypka, P; Szumlak, T; T'Jampens, S; Tayduganov, A; Tekampe, T; Teklishyn, M; Tellarini, G; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Todd, J; Tolk, S; Tomassetti, L; Tonelli, D; Topp-Joergensen, S; Torr, N; Tournefier, E; Tourneur, S; Trabelsi, K; Tran, M T; Tresch, M; Trisovic, A; Tsaregorodtsev, A; Tsopelas, P; Tuning, N; Ukleja, A; Ustyuzhanin, A; Uwer, U; Vacca, C; Vagnoni, V; Valenti, G; Vallier, A; Vazquez Gomez, R; Vazquez Regueiro, P; Vázquez Sierra, C; Vecchi, S; van Veghel, M; Velthuis, J J; Veltri, M; Veneziano, G; Vesterinen, M; Viaud, B; Vieira, D; Vieites Diaz, M; Vilasis-Cardona, X; Vollhardt, A; Volyanskyy, D; Voong, D; Vorobyev, A; Vorobyev, V; Voß, C; de Vries, J A; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wandernoth, S; Wang, J; Ward, D R; Watson, N K; Websdale, D; Weiden, A; Whitehead, M; Wilkinson, G; Wilkinson, M; Williams, M; Williams, M P; Williams, M; Williams, T; Wilson, F F; Wimberley, J; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wright, S; Wyllie, K; Xie, Y; Xu, Z; Yang, Z; Yu, J; Yuan, X; Yushchenko, O; Zangoli, M; Zavertyaev, M; Zhang, L; Zhang, Y; Zhelezov, A; Zhokhov, A; Zhong, L; Zhukov, V; Zucchelli, S
2016-01-01
The oscillation frequency, [Formula: see text], of [Formula: see text] mesons is measured using semileptonic decays with a [Formula: see text] or [Formula: see text] meson in the final state. The data sample corresponds to 3.0[Formula: see text] of pp collisions, collected by the LHCb experiment at centre-of-mass energies [Formula: see text] = 7 and 8[Formula: see text]. A combination of the two decay modes gives [Formula: see text], where the first uncertainty is statistical and the second is systematic. This is the most precise single measurement of this parameter. It is consistent with the current world average and has similar precision.
Biomarker development in the precision medicine era: lung cancer as a case study.
Vargas, Ashley J; Harris, Curtis C
2016-08-01
Precision medicine relies on validated biomarkers with which to better classify patients by their probable disease risk, prognosis and/or response to treatment. Although affordable 'omics'-based technology has enabled faster identification of putative biomarkers, the validation of biomarkers is still stymied by low statistical power and poor reproducibility of results. This Review summarizes the successes and challenges of using different types of molecule as biomarkers, using lung cancer as a key illustrative example. Efforts at the national level of several countries to tie molecular measurement of samples to patient data via electronic medical records are the future of precision medicine research.
Galili, Tal; Meilijson, Isaac
2016-01-02
The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].
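The article's own counterexample uses a more subtle uniform family, but the basic Rao-Blackwell mechanism is easy to see in the textbook Uniform(0, θ) case: conditioning the crude unbiased estimator 2·X1 on the sufficient statistic max(X) gives (n+1)/n · max(X), whose variance is far smaller, while the MLE max(X) is biased downwards. A quick simulation (not from the paper) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(5)
theta, n, reps = 10.0, 8, 200_000

x = rng.uniform(0, theta, size=(reps, n))
crude = 2 * x[:, 0]                      # unbiased but noisy: E[2*X1] = theta
m = x.max(axis=1)                        # sufficient statistic
rao_blackwell = (n + 1) / n * m          # E[2*X1 | max] = (n+1)/n * max
mle = m                                  # maximum likelihood estimator, biased low

for name, est in [("crude 2*X1", crude),
                  ("Rao-Blackwell", rao_blackwell),
                  ("MLE max", mle)]:
    print(f"{name:15s} mean={est.mean():6.3f}  sd={est.std():.3f}")
```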
Mackiewicz, Dorota; de Oliveira, Paulo Murilo Castro; Moss de Oliveira, Suzana; Cebrat, Stanisław
2013-01-01
Recombination is the main cause of genetic diversity. Thus, errors in this process can lead to chromosomal abnormalities. Recombination events are confined to narrow chromosome regions called hotspots in which characteristic DNA motifs are found. Genomic analyses have shown that both recombination hotspots and DNA motifs are distributed unevenly along human chromosomes and are much more frequent in the subtelomeric regions of chromosomes than in their central parts. Clusters of motifs roughly follow the distribution of recombination hotspots whereas single motifs show a negative correlation with the hotspot distribution. To model the phenomena related to recombination, we carried out computer Monte Carlo simulations of genome evolution. Computer simulations generated uneven distribution of hotspots with their domination in the subtelomeric regions of chromosomes. They also revealed that purifying selection eliminating defective alleles is strong enough to cause such hotspot distribution. After sufficiently long time of simulations, the structure of chromosomes reached a dynamic equilibrium, in which number and global distribution of both hotspots and defective alleles remained statistically unchanged, while their precise positions were shifted. This resembles the dynamic structure of human and chimpanzee genomes, where hotspots change their exact locations but the global distributions of recombination events are very similar.
Ji, Qinqin; Salomon, Arthur R.
2015-01-01
The activation of T-lymphocytes through antigen-mediated T-cell receptor (TCR) clustering is vital in regulating the adaptive-immune response. Although T cell receptor signaling has been extensively studied, the fundamental mechanisms for signal initiation are not fully understood. Reduced temperature initiated some of the hallmarks of TCR signaling such as increased phosphorylation and activation of ERK and calcium release from the endoplasmic reticulum, as well as coalescence of T-cell membrane microdomains. The precise mechanism of TCR signaling initiation due to temperature change remains obscure. One critical question is whether signaling initiated by cold treatment of T cells differs from signaling initiated by crosslinking of the T cell receptor. To address this uncertainty, a wide-scale, quantitative mass spectrometry-based phosphoproteomic analysis was performed on T cells stimulated either by temperature shift or through crosslinking of the TCR. Careful statistical comparison between the two stimulations revealed a striking level of identity between the subset of 339 sites that changed significantly with both stimulations. This study demonstrates for the first time, at unprecedented detail, that T cell cold treatment was sufficient to initiate signaling patterns nearly identical to soluble antibody stimulation, shedding new light on the mechanism of activation of these critically important immune cells. PMID:25839225
X-Ray Characteristics of Megamaser Galaxies
NASA Astrophysics Data System (ADS)
Leiter, K.; Kadler, M.; Wilms, J.; Braatz, J.; Grossberger, C.; Krauss, F.; Kreikenbohm, A.; Langejahn, M.; Litzinger, E.; Markowitz, A.
2017-10-01
Water megamaser galaxies are a rare subclass of Active Galactic Nuclei (AGN). They play a key role in modern cosmology, providing a significant improvement for measuring geometrical distances with high precision. Megamaser studies presently measure H0 to about 5%. The goal of modern programs is to reach 3%, which strongly constrains the equation of state of dark energy. An increasing number of independent measurements of suitable water masers is providing the statistics necessary to decrease the uncertainties. X-ray studies of maser galaxies yield important constraints on target-selection criteria for future surveys, increasing their detection rate. We studied the X-ray properties of a homogeneous sample of Type 2 AGN with water maser activity observed by XMM-Newton to investigate the properties of megamaser-hosting galaxies compared to a control sample of non-maser galaxies. A comparison of the luminosity distributions confirms previous results that water maser galaxies appear more luminous than non-maser sources. The maser phenomenon goes along with more complex X-ray spectra, higher column densities and higher equivalent widths of the Fe Kα line. Both a sufficiently luminous X-ray source and a high absorbing column density in the line of sight are necessary prerequisites to favour the appearance of the water megamaser phenomenon in AGN.
Pluto-Charon Stellar Occultation Candidates: 1990-1995
NASA Technical Reports Server (NTRS)
Dunham, E. W.; McDonald, S. W.; Elliot, J. L.
1991-01-01
We have carried out a search to identify stars that might be occulted by Pluto or Charon during the period 1990-1995 and part of 1996. This search was made with an unfiltered CCD camera operated in the strip scanning mode, and it reaches an R magnitude of approximately 17.5, about 1.5 mag fainter than previous searches. Circumstances for each of the 162 potential occultations are given, including an approximate R magnitude of the star, which allows estimation of the signal-to-noise ratio (S/N) for observation of each occultation. The faintest stars in our list would yield an S/N of about 20 for a 1 s integration when observed with a CCD detector on an 8 m telescope under a dark sky. Our astrometric precision (+/- 0.2 arcsec, with larger systematic errors possible for individual cases) is insufficient to serve as a final prediction for these potential occultations, but is sufficient to identify stars deserving of further, more accurate, astrometric observations. Statistically, we expect about 32 of these events to be observable somewhere on Earth. The number of events actually observed will be substantially smaller because of clouds and the sparse distribution of large telescopes. Finder charts for each of the 91 stars involved are presented.
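The abstract's anchor point (S/N of about 20 for an R = 17.5 star in a 1 s integration on an 8 m telescope) allows a rough photon-limited scaling to other magnitudes, apertures and exposure times; the sketch below ignores sky background and Pluto's own flux, so it is only a guide.

```python
import math

def occultation_snr(r_mag, diameter_m, t_exp_s):
    """Photon-limited S/N scaled from the abstract's anchor point:
    S/N ~ 20 for R = 17.5, an 8 m aperture and a 1 s integration
    (sky background and Pluto's own flux are ignored)."""
    photons_rel = (diameter_m / 8.0) ** 2 * t_exp_s * 10 ** (-0.4 * (r_mag - 17.5))
    return 20.0 * math.sqrt(photons_rel)

for mag, d, t in [(17.5, 8.0, 1.0), (16.0, 8.0, 1.0), (17.5, 4.0, 1.0), (17.5, 8.0, 0.2)]:
    print(f"R={mag:5.1f}  D={d:3.1f} m  t={t:3.1f} s  ->  S/N ~ {occultation_snr(mag, d, t):5.1f}")
```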
Sheen, Seungsoo; Lee, Keu Sung; Chung, Wou Young; Nam, Saeil; Kang, Dae Ryong
2016-01-01
Lung cancer is a leading cause of cancer-related death in the world. Smoking is definitely the most important risk factor for lung cancer. Radon ((222)Rn) is a natural gas produced from radium ((226)Ra) in the decay series of uranium ((238)U). Radon exposure is the second most common cause of lung cancer and the first risk factor for lung cancer in never-smokers. Case-control studies have provided epidemiological evidence of the causative relationship between indoor radon exposure and lung cancer. Twenty-four case-control study papers were found by our search strategy from the PubMed database. Among them, seven studies showed that indoor radon has a statistically significant association with lung cancer. The studies performed in radon-prone areas showed a more positive association between radon and lung cancer. Reviewed papers had inconsistent results on the dose-response relationship between indoor radon and lung cancer risk. Further refined case-control studies will be required to evaluate the relationship between radon and lung cancer. Sufficient study sample size, proper interview methods, valid and precise indoor radon measurement, wide range of indoor radon, and appropriate control of confounders such as smoking status should be considered in further case-control studies.
Residential Agricultural Pesticide Exposures and Risks of Spontaneous Preterm Birth.
Shaw, Gary M; Yang, Wei; Roberts, Eric M; Kegley, Susan E; Stevenson, David K; Carmichael, Suzan L; English, Paul B
2018-01-01
Pesticide exposures are aspects of the human exposome that have not been sufficiently studied for their contribution to risk for preterm birth. We investigated risks of spontaneous preterm birth from potential residential exposures to 543 individual chemicals and 69 physicochemical groupings that were applied in the San Joaquin Valley of California during the study period, 1998-2011. The study population was derived from birth certificate data linked with Office of Statewide Health Planning and Development maternal and infant hospital discharge data. After exclusions, the analytic study base included 197,461 term control births and 27,913 preterm case births. Preterm cases were more narrowly defined as 20-23 weeks (n = 515), 24-27 weeks (n = 1,792), 28-31 weeks (n = 3,098), or 32-36 weeks (n = 22,508). The frequency of any (versus no) pesticide exposure was uniformly lower in each preterm case group relative to the frequency in term controls, irrespective of gestational month of exposure. All odds ratios were below 1.0 for these any versus no exposure comparisons. The majority of odds ratios were below 1.0, many of them statistically precise, for preterm birth and exposures to specific chemical groups or chemicals. This study showed a general lack of increased risk of preterm birth associated with a range of agricultural pesticide exposures near women's residences.
Estimating Divergence Parameters With Small Samples From a Large Number of Loci
Wang, Yong; Hey, Jody
2010-01-01
Most methods for studying divergence with gene flow rely upon data from many individuals at few loci. Such data can be useful for inferring recent population history but they are unlikely to contain sufficient information about older events. However, the growing availability of genome sequences suggests a different kind of sampling scheme, one that may be more suited to studying relatively ancient divergence. Data sets extracted from whole-genome alignments may represent very few individuals but contain a very large number of loci. To take advantage of such data we developed a new maximum-likelihood method for genomic data under the isolation-with-migration model. Unlike many coalescent-based likelihood methods, our method does not rely on Monte Carlo sampling of genealogies, but rather provides a precise calculation of the likelihood by numerical integration over all genealogies. We demonstrate that the method works well on simulated data sets. We also consider two models for accommodating mutation rate variation among loci and find that the model that treats mutation rates as random variables leads to better estimates. We applied the method to the divergence of Drosophila melanogaster and D. simulans and detected a low, but statistically significant, signal of gene flow from D. simulans to D. melanogaster. PMID:19917765
Pluto-Charon stellar occultation candidates - 1990-1995
NASA Technical Reports Server (NTRS)
Dunham, E. W.; Mcdonald, S. W.; Elliot, J. L.
1991-01-01
A search to identify stars that might be occulted by Pluto or Charon during the period 1990-1995 and part of 1996 is reported. This search was made with an unfiltered CCD camera operated in the strip scanning mode, and it reaches an R magnitude of approximately 17.5 - about 1.5 mag fainter than previous searches. Circumstances for each of the 162 potential occultations are given, including an approximate R magnitude of the star, which allows estimation of the signal-to-noise ratio (S/N) for observation of each occultation. The faintest stars in the list would yield an S/N of about 20 for a 1 s integration when observed with a CCD detector on an 8 m telescope under a dark sky. The astrometric precision (+/- 0.2 arcsec, with larger systematic errors possible for individual cases) is insufficient to serve as a final prediction for these potential occultations, but is sufficient to identify stars deserving of further, more accurate, astrometric observations. Statistically, about 32 of these events are expected to be observable somewhere on Earth. The number of events actually observed will be substantially smaller because of clouds and the sparse distribution of large telescopes. Finder charts for each of the 91 stars involved are presented.
Fuzzy neural network for flow estimation in sewer systems during wet weather.
Shen, Jun; Shen, Wei; Chang, Jian; Gong, Ning
2006-02-01
Estimation of the water flow from rainfall intensity during storm events is important in hydrology, sewer system control, and environmental protection. The runoff-producing behavior of a sewer system changes from one storm event to another because rainfall loss depends not only on rainfall intensities, but also on the state of the soil and vegetation, the general condition of the climate, and so on. As such, it would be difficult to obtain a precise flowrate estimation without sufficient a priori knowledge of these factors. To establish a model for flow estimation, one can also use statistical methods, such as the neural network STORMNET, software developed at Lyonnaise des Eaux, France, analyzing the relation between rainfall intensity and flowrate data of the known storm events registered in the past for a given sewer system. In this study, the authors propose a fuzzy neural network to estimate the flowrate from rainfall intensity. The fuzzy neural network combines four STORMNETs and fuzzy deduction to better estimate the flowrates. This study's system for flow estimation can be calibrated automatically by using known storm events; no data regarding the physical characteristics of the drainage basins are required. Compared with the neural network STORMNET, this method reduces the mean square error of the flow estimates by approximately 20%. Experimental results are reported herein.
Potassium Stable Isotopic Compositions Measured by High-Resolution MC-ICP-MS
NASA Technical Reports Server (NTRS)
Morgan, Leah E.; Lloyd, Nicholas S.; Ellam, Robert M.; Simon, Justin I.
2012-01-01
Potassium isotopic (K-41/K-39) compositions are notoriously difficult to measure. TIMS measurements are hindered by variable fractionation patterns throughout individual runs and too few isotopes to apply an internal spike method for instrumental mass fractionation corrections. Internal fractionation corrections via the K-40/K-39 ratio can provide precise values but assume identical K-40/K-39 ratios (e.g. 0.05% (1sigma) in [1]); this is appropriate in some cases (e.g. identifying excess K-41) but not others (e.g., determining mass fractionation effects and metrologically traceable isotopic abundances). SIMS analyses have yielded measurements with 0.25% precisions (1sigma) [2]. ICP-MS analyses are significantly affected by interferences from molecular species such as Ar-38H(+) and Ar-40H(+) and instrument mass bias. Single collector ICP-MS instruments in "cold plasma" mode have yielded uncertainties as low as 2% (1sigma, e.g. [3]). Although these precisions may be acceptable for some concentration determinations, they do not resolve isotopic variation in terrestrial materials. Here we present data from a series of measurements made on the Thermo Scientific NEPTUNE Plus multi-collector ICP-MS that demonstrate the ability to make K-41/K-39 ratio measurements with 0.07% precisions (1sigma). These data, collected on NIST K standards, indicate the potential for MC-ICP-MS measurements to look for K isotopic variations at the sub-permil level. The NEPTUNE Plus can sufficiently resolve 39K and 41K from the interfering 38ArH+ and 40ArH+ peaks in wet cold plasma and high-resolution mode. Measurements were made on small but flat, interference-free, plateaus (ca. 50 ppm by mass width for K-41). Although ICP-MS does not yield accurate K-41/K-39 values due to significant instrumental mass fractionation (ca. 6%), this bias can be sufficiently stable over the time required for several measurements so that relative K-41/K-39 values can be precisely determined via sample-standard bracketing. As cold plasma conditions can amplify matrix effects, experiments were conducted to test the matrix tolerance of measurements; the use of clean, matrix-matched samples and standards is critical. Limitations of the cold-plasma high-resolution MC-ICP-MS methodology with respect to matrix tolerance are discussed and compared with the limitations of TIMS methodologies.
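The sample-standard bracketing step described above reduces, in its simplest form, to comparing each sample ratio against the mean of the standards measured immediately before and after it. The following minimal Python sketch illustrates only that calculation; the function name and the example ratios are hypothetical and are not taken from the measurements reported here.

```python
def delta_41k(sample_ratio, std_before, std_after):
    """Per-mil deviation of a sample 41K/39K ratio from the mean of the
    bracketing standard runs (standard-sample-standard sequence)."""
    reference = 0.5 * (std_before + std_after)
    return (sample_ratio / reference - 1.0) * 1000.0

# Hypothetical raw ratios measured in a standard-sample-standard sequence
print(delta_41k(sample_ratio=0.072015, std_before=0.072000, std_after=0.072010))
```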
Melching, C.S.; Coupe, R.H.
1995-01-01
During water years 1985-91, the U.S. Geological Survey (USGS) and the Illinois Environmental Protection Agency (IEPA) cooperated in the collection and analysis of concurrent and split stream-water samples from selected sites in Illinois. Concurrent samples were collected independently by field personnel from each agency at the same time and sent to the IEPA laboratory, whereas the split samples were collected by USGS field personnel and divided into aliquots that were sent to each agency's laboratory for analysis. The water-quality data from these programs were examined by means of the Wilcoxon signed ranks test to identify statistically significant differences between results of the USGS and IEPA analyses. The data sets for constituents and properties identified by the Wilcoxon test as having significant differences were further examined by use of the paired t-test, mean relative percentage difference, and scattergrams to determine if the differences were important. Of the 63 constituents and properties in the concurrent-sample analysis, differences in only 2 (pH and ammonia) were statistically significant and large enough to concern water-quality engineers and planners. Of the 27 constituents and properties in the split-sample analysis, differences in 9 (turbidity, dissolved potassium, ammonia, total phosphorus, dissolved aluminum, dissolved barium, dissolved iron, dissolved manganese, and dissolved nickel) were statistically significant and large enough to concern water-quality engineers and planners. The differences in concentration between pairs of concurrent samples were compared to the precision of the laboratory or field method used. The differences in concentration between pairs of split samples were compared to the precision of the laboratory method used and the interlaboratory precision of measuring a given concentration or property. Consideration of method precision indicated that differences between concurrent samples were insignificant for all concentrations and properties except pH, and that differences between split samples were significant for all concentrations and properties. Consideration of interlaboratory precision indicated that the differences between the split samples were not unusually large. The results for the split samples illustrate the difficulty in obtaining comparable and accurate water-quality data.
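The statistical screening described above (a Wilcoxon signed ranks test followed by a paired t-test and the mean relative percentage difference) can be illustrated with a short Python sketch. The data below are synthetic and the variable names are hypothetical; the sketch shows only the sequence of tests, not the USGS/IEPA data or decision thresholds.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic paired concentrations (mg/L) reported by two laboratories
lab_a = rng.normal(1.00, 0.10, size=30)
lab_b = lab_a + rng.normal(0.03, 0.05, size=30)        # small systematic offset

w_stat, w_p = stats.wilcoxon(lab_a, lab_b)             # nonparametric screen
t_stat, t_p = stats.ttest_rel(lab_a, lab_b)            # follow-up paired t-test
mrpd = np.mean(200 * (lab_a - lab_b) / (lab_a + lab_b))  # mean relative % difference

print(f"Wilcoxon p={w_p:.3f}, paired-t p={t_p:.3f}, MRPD={mrpd:.1f}%")
```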
Rapid manufacturing of metallic molds for parts in automobiles
NASA Astrophysics Data System (ADS)
Zhang, Renji; Xu, Da; Liu, Yuan; Yan, Xudong; Yan, Yongnian
1998-03-01
The recent research on RPM (Rapid Prototyping Manufacturing) in our lab has focused on the rapid creation of alloyed cast iron (ACI) molds. There are many mechanical parts in an automobile, so many metallic molds are needed in the automobile industry. A new mold manufacturing technology has been proposed, and a new large-scale RP machine has now been set up in our lab. Rapid prototypes can thus be manufactured by means of laminated object manufacturing (LOM) technology. The molds for automobile parts have been produced by ceramic shell precision casting. An example is a drawing mold for automobile cover parts. Sufficient precision and surface roughness have been obtained. It is proved that this is a new kind of technology. Work supported by the National Science Foundation of China.
Boehm, K. -J.; Gibson, C. R.; Hollaway, J. R.; ...
2016-09-01
This study presents the design of a flexure-based mount allowing adjustment in three rotational degrees of freedom (DOFs) through high-precision set-screw actuators. The requirements of the application called for small but controlled angular adjustments for mounting a cantilevered beam. The proposed design is based on an array of parallel beams to provide sufficiently high stiffness in the translational directions while allowing angular adjustment through the actuators. A simplified physical model in combination with standard beam theory was applied to estimate the deflection profile and maximum stresses in the beams. A finite element model was built to calculate the stresses and beam profiles for scenarios in which the flexure is simultaneously actuated in more than one DOF.
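As a rough illustration of the "standard beam theory" step mentioned above, the Euler-Bernoulli formulas for a rectangular cantilever loaded at its free end give the tip deflection and peak bending stress directly. The Python sketch below uses hypothetical blade dimensions and material properties, not the dimensions of the mount described in the study.

```python
def cantilever_tip_deflection(force, length, elastic_modulus, width, thickness):
    """Euler-Bernoulli tip deflection and peak bending stress for a
    rectangular cantilever beam loaded at its free end (SI units)."""
    inertia = width * thickness**3 / 12.0                   # second moment of area
    deflection = force * length**3 / (3.0 * elastic_modulus * inertia)
    stress = force * length * (thickness / 2.0) / inertia   # at the fixed end
    return deflection, stress

# Hypothetical flexure blade: 20 mm long, 10 mm wide, 0.5 mm thick, aluminium
d, s = cantilever_tip_deflection(force=1.0, length=0.020,
                                 elastic_modulus=70e9, width=0.010,
                                 thickness=0.0005)
print(f"deflection = {d*1e6:.1f} um, peak stress = {s/1e6:.1f} MPa")
```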
Wavelength metrology with a color sensor integrated chip
NASA Astrophysics Data System (ADS)
Jackson, Jarom; Jones, Tyler; Otterstrom, Nils; Archibald, James; Durfee, Dallin
2016-03-01
We have developed a method of wavelength sensing using the TCS3414 from AMS, a color sensor developed for use in cell phones and consumer electronics. The sensor datasheet specifies 16 bits of precision and 200 ppm/°C temperature dependence, which preliminary calculations showed might be sufficient for picometer-level wavelength discrimination of narrow-linewidth sources. We have successfully shown that this is possible by using internal etalon effects in addition to the filters' wavelength responses, and recently published our findings in Optics Express. Our device demonstrates sub-picometer precision over short time periods, with about 10 pm drift over a one-month period. This method requires no moving or delicate optics, and has the potential to produce inexpensive and mechanically robust devices. Funded by Brigham Young University and NSF Grant Number PHY-1205736.
Exploring Flavor Physics with Lattice QCD
NASA Astrophysics Data System (ADS)
Du, Daping; Fermilab/MILC Collaborations Collaboration
2016-03-01
The Standard Model has been a very good description of subatomic particle physics. In the search for physics beyond the Standard Model in the context of flavor physics, it is important to sharpen our probes using some gold-plated processes (such as B rare decays), which requires knowledge of the input parameters, such as the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements and other nonperturbative quantities, with sufficient precision. Lattice QCD is so far the only first-principles method that can compute these quantities with competitive and systematically improvable precision using state-of-the-art simulation techniques. I will discuss the recent progress of lattice QCD calculations of some of these nonperturbative quantities and their applications in flavor physics. I will also discuss the implications and future perspectives of these calculations in flavor physics.
Aaltonen, T.; Álvarez González, B.; Amerio, S.; ...
2012-09-26
The transverse momentum cross section of e⁺e⁻ pairs in the Z-boson mass region of 66–116 GeV/c² is precisely measured using Run II data corresponding to 2.1 fb⁻¹ of integrated luminosity recorded by the Collider Detector at Fermilab. The cross section is compared with two quantum chromodynamic calculations. One is a fixed-order perturbative calculation at O(α_s²), and the other combines perturbative predictions at high transverse momentum with the gluon resummation formalism at low transverse momentum. Comparisons of the measurement with calculations show reasonable agreement. The measurement is of sufficient precision to allow refinements in the understanding of the transverse momentum distribution.
Precision of guided scanning procedures for full-arch digital impressions in vivo.
Zimmermann, Moritz; Koller, Christina; Rumetsch, Moritz; Ender, Andreas; Mehl, Albert
2017-11-01
System-specific scanning strategies have been shown to influence the accuracy of full-arch digital impressions. Special guided scanning procedures have been implemented for specific intraoral scanning systems with special regard to the digital orthodontic workflow. The aim of this study was to evaluate the precision of guided scanning procedures compared to conventional impression techniques in vivo. Two intraoral scanning systems with implemented full-arch guided scanning procedures (Cerec Omnicam Ortho; Ormco Lythos) were included, along with one conventional impression technique with irreversible hydrocolloid material (alginate). Full-arch impressions were taken three times each from 5 participants (n = 15). Impressions were then compared within the test groups using a point-to-surface distance method after best-fit model matching (OraCheck). Precision was calculated using the (90%-10%)/2 quantile, and statistical analysis with one-way repeated-measures ANOVA and a post hoc Bonferroni test was performed. The conventional impression technique with alginate showed the lowest precision for full-arch impressions, at 162.2 ± 71.3 µm. Both guided scanning procedures performed statistically significantly better than the conventional impression technique (p < 0.05). Mean values were 74.5 ± 39.2 µm for group Cerec Omnicam Ortho and 91.4 ± 48.8 µm for group Ormco Lythos. The in vivo precision of guided scanning procedures exceeds that of conventional impression techniques with the irreversible hydrocolloid material alginate. Guided scanning procedures may be highly promising for clinical applications, especially for digital orthodontic workflows.
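The precision metric used above, the (90%-10%)/2 quantile of the signed point-to-surface distances after best-fit matching, is straightforward to compute. The sketch below uses synthetic deviations; it is not the OraCheck workflow, only the quantile calculation.

```python
import numpy as np

def precision_9010(signed_distances):
    """(90th percentile - 10th percentile) / 2 of signed point-to-surface
    distances between two best-fit-matched impressions."""
    q10, q90 = np.percentile(signed_distances, [10, 90])
    return (q90 - q10) / 2.0

# Synthetic deviations (micrometres) between two matched full-arch scans
rng = np.random.default_rng(1)
print(f"precision = {precision_9010(rng.normal(0, 80, 50_000)):.1f} um")
```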
Beyond-Standard-Model Tensor Interaction and Hadron Phenomenology.
Courtoy, Aurore; Baeßler, Stefan; González-Alonso, Martín; Liuti, Simonetta
2015-10-16
We evaluate the impact of recent developments in hadron phenomenology on extracting possible fundamental tensor interactions beyond the standard model. We show that a novel class of observables, including the chiral-odd generalized parton distributions, and the transversity parton distribution function can contribute to the constraints on this quantity. Experimental extractions of the tensor hadronic matrix elements, if sufficiently precise, will provide a, so far, absent testing ground for lattice QCD calculations.
1989-08-01
machinery design, precision machining, proper maintenance, and proper lubrication. Ordinarily, wear is thought of only in terms of abrasive wear occurring in... operate under this principle. However, the design must allow the plates to lift and tilt properly and provide sufficient area to lift the load. 38. Another... friction and wear to a minimum. Boundary Lubrication 42. Lubrication designed to protect against frictional effects when asperities meet is called
ERIC Educational Resources Information Center
Borkum, Evan; He, Fang; Linden, Leigh L.
2012-01-01
We conduct a randomized evaluation of a school library program on children's language skills. We find that the program had little impact on students' scores on a language test administered 16 months after implementation. The estimates are sufficiently precise to rule out effects larger than 0.13 and 0.11 standard deviations based on the 95 and 90…
Mørkrid, Lars; Rowe, Alexander D; Elgstoen, Katja B P; Olesen, Jess H; Ruijter, George; Hall, Patricia L; Tortorelli, Silvia; Schulze, Andreas; Kyriakopoulou, Lianna; Wamelink, Mirjam M C; van de Kamp, Jiddeke M; Salomons, Gajja S; Rinaldo, Piero
2015-05-01
Urinary concentrations of creatine and guanidinoacetic acid divided by creatinine are informative markers for cerebral creatine deficiency syndromes (CDSs). The renal excretion of these substances varies substantially with age and sex, challenging the sensitivity and specificity of postanalytical interpretation. Results from 155 patients with CDS and 12 507 reference individuals were contributed by 5 diagnostic laboratories. They were binned into 104 adjacent age intervals and renormalized with Box-Cox transforms (Ξ). Estimates for central tendency (μ) and dispersion (σ) of Ξ were obtained for each bin. Polynomial regression analysis was used to establish the age dependence of both μ[log(age)] and σ[log(age)]. The regression residuals were then calculated as z-scores = {Ξ - μ[log(age)]}/σ[log(age)]. The process was iterated until all z-scores outside Tukey fences ±3.372 were identified and removed. Continuous percentile charts were then calculated and plotted by retransformation. Statistically significant and biologically relevant subgroups of z-scores were identified. Significantly higher marker values were seen in females than males, necessitating separate reference intervals in both adolescents and adults. Comparison between our reconstructed reference percentiles and current standard age-matched reference intervals highlights an underlying risk of false-positive and false-negative events at certain ages. Disease markers depending strongly on covariates such as age and sex require large numbers of reference individuals to establish peripheral percentiles with sufficient precision. This is feasible only through collaborative data sharing and the use of appropriate statistical methods. Broad application of this approach can be implemented through freely available Web-based software. © 2015 American Association for Clinical Chemistry.
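A simplified version of the percentile-chart construction described above (Box-Cox renormalization, polynomial models for the age dependence of the centre and spread, z-scores, and a Tukey-fence outlier step) can be sketched as follows. The choice of distributions, the polynomial degree, and the use of scaled absolute residuals to estimate the dispersion are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def age_dependent_zscores(age_years, marker, degree=3):
    """Sketch: Box-Cox transform the marker, model mean and SD as polynomials
    in log(age), and return z-scores relative to those age-dependent curves."""
    x = np.log(age_years)
    xi, lam = stats.boxcox(marker)                     # renormalising transform
    mu_fit = np.polynomial.Polynomial.fit(x, xi, degree)
    resid = xi - mu_fit(x)
    # E|resid| = sigma * sqrt(2/pi) for normal residuals, hence the rescaling
    sd_fit = np.polynomial.Polynomial.fit(x, np.abs(resid) * np.sqrt(np.pi / 2), degree)
    z = resid / sd_fit(x)
    return z, lam

# Synthetic reference data: marker falls with age, roughly constant log-spread
rng = np.random.default_rng(2)
age = rng.uniform(0.1, 60, 2000)
marker = np.exp(2.0 - 0.4 * np.log(age) + rng.normal(0, 0.3, age.size))
z, lam = age_dependent_zscores(age, marker)
outliers = np.abs(z) > 3.372                           # Tukey-fence removal step
print(f"lambda={lam:.2f}, flagged={outliers.sum()} of {z.size}")
```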
Sforza, Chiarella; De Menezes, Marcio; Bresciani, Elena; Cerón-Zapata, Ana M; López-Palacio, Ana M; Rodriguez-Ardila, Myriam J; Berrio-Gutiérrez, Lina M
2012-07-01
To assess a three-dimensional stereophotogrammetric method for palatal cast digitization in children with unilateral cleft lip and palate. As part of a collaboration between the University of Milan (Italy) and the University CES of Medellin (Colombia), 96 palatal cast models from neonatal patients with unilateral cleft lip and palate were obtained and digitized using a three-dimensional stereophotogrammetric imaging system. Three-dimensional measurements (cleft width, depth, length) were made separately for the longer and shorter cleft segments on the digital dental cast surface between previously marked landmarks. Seven linear measurements were computed. Systematic and random errors between operators' tracings, and accuracy on geometric objects of known size, were calculated. In addition, mean measurements from three-dimensional stereophotographs were compared statistically with those from direct anthropometry. The three-dimensional method showed a low accuracy error (<0.9%) when measuring geometric objects. No systematic errors between operators' measurements were found (p > .05). Statistically significant differences (p < .05) were noted between methods (caliper versus stereophotogrammetry) for almost all distances analyzed, with mean absolute difference values ranging between 0.22 and 3.41 mm. Therefore, rates for the technical error of measurement and relative error magnitude were scored as moderate for the Ag-Am distance and poor for the Ag-Pg and Am-Pm distances. Generally, caliper values were larger than three-dimensional stereophotogrammetric values. Three-dimensional stereophotogrammetric systems have some advantages over direct anthropometry, and the method could therefore be sufficiently precise and accurate for palatal cast digitization in unilateral cleft lip and palate. This would be useful for clinical analyses in maxillofacial, plastic, and aesthetic surgery.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
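For readers unfamiliar with the Gaussian baseline that the thesis benchmarks against, the sketch below fits a graphical lasso (L1-penalised Gaussian maximum likelihood) to synthetic data with a sparse precision matrix. It illustrates only that baseline, not the nonparametric exponential-series or regularized score-matching estimators proposed in the thesis; the dimensions and penalty are arbitrary choices.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Synthetic Gaussian data with a sparse (tridiagonal) precision matrix
rng = np.random.default_rng(3)
p = 10
theta = np.eye(p) + np.diag(np.full(p - 1, 0.4), 1) + np.diag(np.full(p - 1, 0.4), -1)
cov = np.linalg.inv(theta)
X = rng.multivariate_normal(np.zeros(p), cov, size=500)

model = GraphicalLasso(alpha=0.05).fit(X)      # L1-penalised Gaussian MLE
edges = np.abs(model.precision_) > 1e-3        # recovered conditional dependencies
print("estimated number of edges:", (edges.sum() - p) // 2)
```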
COSMOS2015 photometric redshifts probe the impact of filaments on galaxy properties
NASA Astrophysics Data System (ADS)
Laigle, C.; Pichon, C.; Arnouts, S.; McCracken, H. J.; Dubois, Y.; Devriendt, J.; Slyz, A.; Le Borgne, D.; Benoit-Lévy, A.; Hwang, Ho Seong; Ilbert, O.; Kraljic, K.; Malavasi, N.; Park, Changbom; Vibert, D.
2018-03-01
The variation of galaxy stellar masses and colour types with the distance to projected cosmic filaments is quantified using the precise photometric redshifts of the COSMOS2015 catalogue extracted from the Cosmological Evolution Survey (COSMOS) field (2 deg²). Realistic mock catalogues are also extracted from the lightcone of the cosmological hydrodynamical simulation HORIZON-AGN. They show that the photometric redshift accuracy of the observed catalogue (σz < 0.015 at M* > 10¹⁰ M⊙ and z < 0.9) is sufficient to provide two-dimensional (2D) filaments that closely match their projected three-dimensional (3D) counterparts. Transverse stellar mass gradients are measured in projected slices of thickness 75 Mpc between 0.5 < z < 0.9, showing that the most massive galaxies are statistically closer to their neighbouring filament. At fixed stellar mass, passive galaxies are also found closer to their filament, while active star-forming galaxies statistically lie further away. The contributions of nodes and local density are removed from these gradients to highlight the specific role played by the geometry of the filaments. We find that the measured signal does persist after this removal, clearly demonstrating that proximity to a filament is not equivalent to proximity to an overdensity. These findings are in agreement with gradients measured in both 2D and 3D in the HORIZON-AGN simulation and those observed in the spectroscopic surveys VIPERS and GAMA (which both rely on the identification of 3D filaments). They are consistent with a picture in which the influence of the geometry of the large-scale environment drives anisotropic tides that impact the assembly history of galaxies, and hence their observed properties.
Rare Event Simulation for T-cell Activation
NASA Astrophysics Data System (ADS)
Lipsmeier, Florian; Baake, Ellen
2009-02-01
The problem of statistical recognition is considered, as it arises in immunobiology, namely, the discrimination of foreign antigens against a background of the body's own molecules. The precise mechanism of this foreign-self-distinction, though one of the major tasks of the immune system, continues to be a fundamental puzzle. Recent progress has been made by van den Berg, Rand, and Burroughs (J. Theor. Biol. 209:465-486, 2001), who modelled the probabilistic nature of the interaction between the relevant cell types, namely, T-cells and antigen-presenting cells (APCs). Here, the stochasticity is due to the random sample of antigens present on the surface of every APC, and to the random receptor type that characterises individual T-cells. It has been shown previously (van den Berg et al. in J. Theor. Biol. 209:465-486, 2001; Zint et al. in J. Math. Biol. 57:841-861, 2008) that this model, though highly idealised, is capable of reproducing important aspects of the recognition phenomenon, and of explaining them on the basis of stochastic rare events. These results were obtained with the help of a refined large deviation theorem and were thus asymptotic in nature. Simulations have, so far, been restricted to the straightforward simple sampling approach, which does not allow for sample sizes large enough to address more detailed questions. Building on the available large deviation results, we develop an importance sampling technique that allows for a convenient exploration of the relevant tail events by means of simulation. With its help, we investigate the mechanism of statistical recognition in some depth. In particular, we illustrate how a foreign antigen can stand out against the self background if it is present in sufficiently many copies, although no a priori difference between self and nonself is built into the model.
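The importance-sampling idea used above (shift the proposal distribution toward the rare region and correct with likelihood-ratio weights) can be illustrated on a toy problem. The sketch below estimates a Gaussian tail probability; it is not the T-cell model of the paper, and the proposal shift and sample size are arbitrary illustrative choices.

```python
import numpy as np

def tail_prob_importance_sampling(threshold, n=100_000, seed=4):
    """Estimate P(X > threshold) for X ~ N(0,1) by sampling from a proposal
    centred on the rare region and reweighting by the likelihood ratio."""
    rng = np.random.default_rng(seed)
    y = rng.normal(loc=threshold, scale=1.0, size=n)     # shifted proposal
    # likelihood ratio  phi(y) / phi(y - threshold)
    weights = np.exp(-0.5 * y**2 + 0.5 * (y - threshold)**2)
    return np.mean((y > threshold) * weights)

exact = 9.87e-10                 # P(Z > 6) for a standard normal, for comparison
print(tail_prob_importance_sampling(6.0), "vs", exact)
```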
Monitoring tigers with confidence.
Linkie, Matthew; Guillera-Arroita, Gurutzeta; Smith, Joseph; Rayan, D Mark
2010-12-01
With only 5% of the world's wild tigers (Panthera tigris Linnaeus, 1758) remaining since the last century, conservationists urgently need to know whether or not the management strategies currently being employed are effectively protecting these tigers. This knowledge is contingent on the ability to reliably monitor tiger populations, or subsets, over space and time. In this paper, we focus on the 2 seminal methodologies (camera trap and occupancy surveys) that have enabled the monitoring of tiger populations with greater confidence. Specifically, we: (i) describe their statistical theory and application in the field; (ii) discuss issues associated with their survey designs and state variable modeling; and, (iii) discuss their future directions. These methods have had an unprecedented influence on increasing statistical rigor within tiger surveys and, also, surveys of other carnivore species. Nevertheless, only 2 published camera trap studies have gone beyond single baseline assessments and actually monitored population trends. For low-density tiger populations (e.g. <1 adult tiger/100 km²), obtaining sufficient precision for state variable estimates from camera trapping remains a challenge because of insufficient detection probabilities and/or sample sizes. Occupancy surveys have overcome this problem by redefining the sampling unit (e.g. grid cells and not individual tigers). Current research is focusing on developing spatially explicit capture-mark-recapture models and estimating abundance indices from landscape-scale occupancy surveys, as well as the use of genetic information for identifying and monitoring tigers. The widespread application of these monitoring methods in the field now enables complementary studies on the impact of the different threats to tiger populations and their response to varying management intervention. © 2010 ISZS, Blackwell Publishing and IOZ/CAS.
On information, negentropy and H-theorem
NASA Astrophysics Data System (ADS)
Chakrabarti, C. G.; Sarker, N. G.
1983-09-01
The paper deals with the importance of the Kullback discrimination information in the statistical characterization of the negentropy of a non-equilibrium state and the irreversibility of a classical dynamical system. The theory, based on the Kullback discrimination information as the H-function, gives new insight into the interrelation between the concepts of coarse-graining and the principle of sufficiency, leading to an important statistical characterization of the thermal equilibrium of a closed system.
Purpose Restrictions on Information Use
2013-06-03
Employees are authorized to access Customer Information for business purposes only." [5]. The HIPAA Privacy Rule requires that healthcare providers in the... outcomes can be probabilistic since the network does not know what ad will be best for each visitor but does have statistical information about various... beliefs as such beliefs are a sufficient statistic. Thus, the agent need only consider, for each possible belief β it can have, what action it would
Samadzadeh, Gholam Reza; Rigi, Tahereh; Ganjali, Ali Reza
2013-01-01
Background: Retrieving valuable and up-to-date information from the internet has become vital for researchers and scholars, because every day thousands, and perhaps millions, of scientific works are published as digital resources; researchers cannot ignore this vast resource when searching for documents relevant to their literature review, many of which may not be found in any library. Given the variety of documents available on the internet, search engines are among the most effective tools for finding information. Objectives: The aim of this study was to evaluate three criteria (recall, preciseness, and importance) for four search engines (PubMed, Science Direct, Google Scholar, and the federated search of the Iranian National Medical Digital Library) in the field of addiction (prevention and treatment), in order to select the most effective search engine for literature research. Materials and Methods: This research was a cross-sectional study in which four popular search engines in the medical sciences were evaluated. Keywords were selected using Medical Subject Headings (MeSH). The keywords were entered into each search engine, and the first 10 results were evaluated. Direct observation was used for data collection, and the data were analyzed with descriptive statistics (number, percentage, and mean) and inferential statistics (one-way analysis of variance (ANOVA) and post hoc Tukey tests) in SPSS 15. A P value < 0.05 was considered statistically significant. Results: The search engines performed differently with respect to the evaluated criteria. Since the P value was 0.004 for preciseness and 0.002 for importance (both < 0.05), there were significant differences among the search engines. PubMed, Science Direct, and Google Scholar were the best in recall, preciseness, and importance, respectively. Conclusions: Because literature research is one of the most important stages of research, researchers, and especially scholars of substance-related disorders, should use the search engines with the best recall, preciseness, and importance in their subject field rather than depending on a single search engine. PMID:24971257
ERIC Educational Resources Information Center
Essid, Hedi; Ouellette, Pierre; Vigeant, Stephane
2010-01-01
The objective of this paper is to measure the efficiency of high schools in Tunisia. We use a statistical data envelopment analysis (DEA)-bootstrap approach with quasi-fixed inputs to estimate the precision of our measure. To do so, we developed a statistical model serving as the foundation of the data generation process (DGP). The DGP is…
NASA Astrophysics Data System (ADS)
Ng, C. S.; Bhattacharjee, A.
1996-08-01
A sufficient condition is obtained for the development of a finite-time singularity in a highly symmetric Euler flow, first proposed by Kida [J. Phys. Soc. Jpn. 54, 2132 (1985)] and recently simulated by Boratav and Pelz [Phys. Fluids 6, 2757 (1994)]. It is shown that if the second-order spatial derivative of the pressure (p_xx) is positive following a Lagrangian element (on the x axis), then a finite-time singularity must occur. Under some assumptions, this Lagrangian sufficient condition can be reduced to an Eulerian sufficient condition which requires that the fourth-order spatial derivative of the pressure (p_xxxx) at the origin be positive for all times leading up to the singularity. Analytical as well as direct numerical evaluation over a large ensemble of initial conditions demonstrates that, for fixed total energy, p_xxxx is predominantly positive with the average value growing with the number of modes.
Dexter, Franklin; Shafer, Steven L
2017-03-01
Considerable attention has been drawn to poor reproducibility in the biomedical literature. One explanation is inadequate reporting of statistical methods by authors and inadequate assessment of statistical reporting and methods during peer review. In this narrative review, we examine scientific studies of several well-publicized efforts to improve statistical reporting. We also review several retrospective assessments of the impact of these efforts. These studies show that instructions to authors and statistical checklists are not sufficient; no findings suggested that either improves the quality of statistical methods and reporting. Second, even basic statistics, such as power analyses, are frequently missing or incorrectly performed. Third, statistical review is needed for all papers that involve data analysis. A consistent finding in the studies was that nonstatistical reviewers (eg, "scientific reviewers") and journal editors generally poorly assess statistical quality. We finish by discussing our experience with statistical review at Anesthesia & Analgesia from 2006 to 2016.
Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan
2010-09-01
Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
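A heavily simplified Monte Carlo sketch of the approach described above is given below: plausible values for sensitivity, non-target (false positive) signal, and measurement error are drawn from assumed distributions and used to back-calculate a distribution of plausible true concentrations. All distributions and parameter values here are illustrative assumptions, not the fitted distributions of the study.

```python
import numpy as np

def true_conc_distribution(observed, n=50_000, seed=5):
    """Monte Carlo sketch: propagate host specificity (false positives),
    sensitivity (false negatives) and qPCR measurement error to obtain a
    distribution of plausible true marker concentrations."""
    rng = np.random.default_rng(seed)
    sensitivity = rng.beta(90, 10, n)            # P(marker detected | present), assumed
    false_signal = rng.gamma(2.0, 5.0, n)        # copies/100 mL from non-target hosts, assumed
    log_error = rng.normal(0.0, 0.1, n)          # sample-processing precision error, assumed
    corrected = observed / np.exp(log_error) - false_signal
    return np.clip(corrected, 0.0, None) / sensitivity

draws = true_conc_distribution(observed=500.0)
lo, mid, hi = np.percentile(draws, [2.5, 50, 97.5])
print(f"median={mid:.0f}, 95% interval=({lo:.0f}, {hi:.0f}) copies/100 mL")
```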
A novel approach for choosing summary statistics in approximate Bayesian computation.
Aeschbacher, Simon; Beaumont, Mark A; Futschik, Andreas
2012-11-01
The choice of summary statistics is a crucial step in approximate Bayesian computation (ABC). Since statistics are often not sufficient, this choice involves a trade-off between loss of information and reduction of dimensionality. The latter may increase the efficiency of ABC. Here, we propose an approach for choosing summary statistics based on boosting, a technique from the machine-learning literature. We consider different types of boosting and compare them to partial least-squares regression as an alternative. To mitigate the lack of sufficiency, we also propose an approach for choosing summary statistics locally, in the putative neighborhood of the true parameter value. We study a demographic model motivated by the reintroduction of Alpine ibex (Capra ibex) into the Swiss Alps. The parameters of interest are the mean and standard deviation across microsatellites of the scaled ancestral mutation rate (θ_anc = 4N_e u) and the proportion of males obtaining access to matings per breeding season (ω). By simulation, we assess the properties of the posterior distribution obtained with the various methods. According to our criteria, ABC with summary statistics chosen locally via boosting with the L2-loss performs best. Applying that method to the ibex data, we estimate θ_anc ≈ 1.288 and find that most of the variation across loci of the ancestral mutation rate u is between 7.7 × 10⁻⁴ and 3.5 × 10⁻³ per locus per generation. The proportion of males with access to matings is estimated as ω ≈ 0.21, which is in good agreement with recent independent estimates.
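The role that summary statistics play in ABC can be seen in the most basic rejection sampler, sketched below for a toy normal model in which the sample mean and standard deviation happen to be sufficient. This is not the boosting-based selection method proposed above; it only shows where the choice of summaries enters the algorithm.

```python
import numpy as np

def rejection_abc(observed, prior_draws=100_000, keep=500, seed=6):
    """Basic rejection ABC for the mean of a N(theta, 1) model, using the
    sample mean and SD as summary statistics."""
    rng = np.random.default_rng(seed)
    s_obs = np.array([observed.mean(), observed.std(ddof=1)])
    theta = rng.uniform(-5, 5, prior_draws)                # prior on the mean
    sims = rng.normal(theta[:, None], 1.0, (prior_draws, observed.size))
    s_sim = np.column_stack([sims.mean(axis=1), sims.std(axis=1, ddof=1)])
    dist = np.linalg.norm(s_sim - s_obs, axis=1)
    return theta[np.argsort(dist)[:keep]]                  # keep closest simulations

data = np.random.default_rng(7).normal(1.3, 1.0, 50)
post = rejection_abc(data)
print(f"approximate posterior mean ~ {post.mean():.2f} +/- {post.std():.2f}")
```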
Theoretical precision analysis of RFM localization of satellite remote sensing imagery
NASA Astrophysics Data System (ADS)
Zhang, Jianqing; Xv, Biao
2009-11-01
The traditional method for assessing the precision of the Rational Function Model (RFM) is to use a large number of check points and to compute the mean square error by comparing the calculated coordinates with known coordinates. This method comes from probability theory: the mean square error is estimated statistically from a large number of samples, and the estimate can be considered to approach its true value when the sample is large enough. This paper instead starts from the perspective of survey adjustment, takes the law of propagation of error as its theoretical basis, and calculates the theoretical precision of RFM localization. SPOT5 three-line array imagery is then used as experimental data, and the results of the traditional method and of the method described in this paper are compared; this confirms that the traditional method is feasible and answers the question of its theoretical precision from the perspective of survey adjustment.
Zhou, Ping; Schechter, Clyde; Cai, Ziyong; Markowitz, Morri
2011-06-01
To highlight complexities in defining vitamin D sufficiency in children. Serum 25-(OH) vitamin D [25(OH)D] levels from 140 healthy obese children age 6 to 21 years living in the inner city were compared with multiple health outcome measures, including bone biomarkers and cardiovascular risk factors. Several statistical analytic approaches were used, including Pearson correlation, analysis of covariance (ANCOVA), and "hockey stick" regression modeling. Potential threshold levels for vitamin D sufficiency varied by outcome variable and analytic approach. Only systolic blood pressure (SBP) was significantly correlated with 25(OH)D (r = -0.261; P = .038). ANCOVA revealed that SBP and triglyceride levels were statistically significant in the test groups [25(OH)D <10, <15 and <20 ng/mL] compared with the reference group [25(OH)D >25 ng/mL]. ANCOVA also showed that only children with severe vitamin D deficiency [25(OH)D <10 ng/mL] had significantly higher parathyroid hormone levels (Δ = 15; P = .0334). Hockey stick model regression analyses found evidence of a threshold level in SBP, with a 25(OH)D breakpoint of 27 ng/mL, along with a 25(OH)D breakpoint of 18 ng/mL for triglycerides, but no relationship between 25(OH)D and parathyroid hormone. Defining vitamin D sufficiency should take into account different vitamin D-related health outcome measures and analytic methodologies. Copyright © 2011 Mosby, Inc. All rights reserved.
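"Hockey stick" regression, as used above, fits a piecewise-linear curve with an estimated breakpoint. The sketch below fits such a model to synthetic systolic blood pressure data with scipy; the simulated breakpoint, slope, and noise level are hypothetical and are chosen only to mirror the shape of the reported relationship.

```python
import numpy as np
from scipy.optimize import curve_fit

def hockey_stick(x, breakpoint, plateau, slope):
    """Linear decline below the breakpoint, flat plateau above it."""
    return plateau + slope * np.maximum(breakpoint - x, 0.0)

# Synthetic data shaped like the reported SBP vs 25(OH)D relationship
rng = np.random.default_rng(8)
vit_d = rng.uniform(5, 45, 140)                       # 25(OH)D, ng/mL
sbp = 110 + 0.9 * np.maximum(27 - vit_d, 0) + rng.normal(0, 6, 140)

params, _ = curve_fit(hockey_stick, vit_d, sbp, p0=[25.0, 110.0, 1.0])
print(f"estimated breakpoint ~ {params[0]:.1f} ng/mL")
```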
Bit-Grooming: Shave Your Bits with Razor-sharp Precision
NASA Astrophysics Data System (ADS)
Zender, C. S.; Silver, J.
2017-12-01
Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision: those bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating-point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long-term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.
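The core of Bit Grooming is quantization of the IEEE-754 mantissa: bits beyond the requested number of significant digits are shaved (zeroed) or, in alternation, set to one so that the rounding is unbiased on average. The sketch below implements only the shave half for float64 arrays and is a simplified illustration of the idea, not the NCO implementation.

```python
import numpy as np

def bit_shave(x, keep_bits):
    """Zero the trailing (52 - keep_bits) mantissa bits of a float64 array.
    This is the 'shave' half of Bit Grooming; the full algorithm alternates
    shaving with setting trailing bits to 1 to keep the quantisation unbiased."""
    drop = 52 - keep_bits                            # mantissa bits to discard
    mask = np.uint64(~((1 << drop) - 1) & 0xFFFFFFFFFFFFFFFF)
    return (x.view(np.uint64) & mask).view(np.float64)

# Keeping ~4 significant decimal digits needs about ceil(4 * log2(10)) = 14 mantissa bits
data = np.array([3.14159265358979, 2.71828182845905, 101325.01234])
print(bit_shave(data, keep_bits=14))   # agrees with the originals to ~4 significant digits
```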
Non-convex Statistical Optimization for Sparse Tensor Graphical Model
Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang
2016-01-01
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which is unobserved in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459
On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.
Li, Bing; Chun, Hyonho; Zhao, Hongyu
2014-09-01
We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.
Passage relevance models for genomics search.
Urbain, Jay; Frieder, Ophir; Goharian, Nazli
2009-03-19
We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and document are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.
Constraining the mass–richness relationship of redMaPPer clusters with angular clustering
Baxter, Eric J.; Rozo, Eduardo; Jain, Bhuvnesh; ...
2016-08-04
The potential of using cluster clustering for calibrating the mass–richness relation of galaxy clusters has been recognized theoretically for over a decade. In this paper, we demonstrate the feasibility of this technique to achieve high-precision mass calibration using redMaPPer clusters in the Sloan Digital Sky Survey North Galactic Cap. By including cross-correlations between several richness bins in our analysis, we significantly improve the statistical precision of our mass constraints. The amplitude of the mass–richness relation is constrained to 7 per cent statistical precision by our analysis. However, the error budget is systematics dominated, reaching a 19 per cent total error that is dominated by theoretical uncertainty in the bias–mass relation for dark matter haloes. We confirm the result from Miyatake et al. that the clustering amplitude of redMaPPer clusters depends on galaxy concentration as defined therein, and we provide additional evidence that this dependence cannot be sourced by mass dependences: some other effect must account for the observed variation in clustering amplitude with galaxy concentration. Assuming that the observed dependence of redMaPPer clustering on galaxy concentration is a form of assembly bias, we find that such effects introduce a systematic error on the amplitude of the mass–richness relation that is comparable to the error bar from statistical noise. Finally, the results presented here demonstrate the power of cluster clustering for mass calibration and cosmology provided the current theoretical systematics can be ameliorated.
Verzotto, Davide; M Teo, Audrey S; Hillmer, Axel M; Nagarajan, Niranjan
2016-01-01
Resolution of complex repeat structures and rearrangements in the assembly and analysis of large eukaryotic genomes is often aided by a combination of high-throughput sequencing and genome-mapping technologies (for example, optical restriction mapping). In particular, mapping technologies can generate sparse maps of large DNA fragments (150 kilo base pairs (kbp) to 2 Mbp) and thus provide a unique source of information for disambiguating complex rearrangements in cancer genomes. Despite their utility, combining high-throughput sequencing and mapping technologies has been challenging because of the lack of efficient and sensitive map-alignment algorithms for robustly aligning error-prone maps to sequences. We introduce a novel seed-and-extend glocal (short for global-local) alignment method, OPTIMA (and a sliding-window extension for overlap alignment, OPTIMA-Overlap), which is the first to create indexes for continuous-valued mapping data while accounting for mapping errors. We also present a novel statistical model, agnostic with respect to technology-dependent error rates, for conservatively evaluating the significance of alignments without relying on expensive permutation-based tests. We show that OPTIMA and OPTIMA-Overlap outperform other state-of-the-art approaches (1.6-2 times more sensitive) and are more efficient (170-200%) and precise in their alignments (nearly 99% precision). These advantages are independent of the quality of the data, suggesting that our indexing approach and statistical evaluation are robust, provide improved sensitivity and guarantee high precision.
DNA origami based Au-Ag-core-shell nanoparticle dimers with single-molecule SERS sensitivity
NASA Astrophysics Data System (ADS)
Prinz, J.; Heck, C.; Ellerik, L.; Merk, V.; Bald, I.
2016-03-01
DNA origami nanostructures are a versatile tool to arrange metal nanostructures and other chemical entities with nanometer precision. In this way gold nanoparticle dimers with defined distance can be constructed, which can be exploited as novel substrates for surface enhanced Raman scattering (SERS). We have optimized the size, composition and arrangement of Au/Ag nanoparticles to create intense SERS hot spots, with Raman enhancement up to 10¹⁰, which is sufficient to detect single molecules by Raman scattering. This is demonstrated using single dye molecules (TAMRA and Cy3) placed into the center of the nanoparticle dimers. In conjunction with the DNA origami nanostructures novel SERS substrates are created, which can in the future be applied to the SERS analysis of more complex biomolecular targets, whose position and conformation within the SERS hot spot can be precisely controlled.
UNCERTAINTY ON RADIATION DOSES ESTIMATED BY BIOLOGICAL AND RETROSPECTIVE PHYSICAL METHODS.
Ainsbury, Elizabeth A; Samaga, Daniel; Della Monaca, Sara; Marrale, Maurizio; Bassinet, Celine; Burbidge, Christopher I; Correcher, Virgilio; Discher, Michael; Eakins, Jon; Fattibene, Paola; Güçlü, Inci; Higueras, Manuel; Lund, Eva; Maltar-Strmecki, Nadica; McKeever, Stephen; Rääf, Christopher L; Sholom, Sergey; Veronese, Ivan; Wieser, Albrecht; Woda, Clemens; Trompier, Francois
2018-03-01
Biological and physical retrospective dosimetry are recognised as key techniques to provide individual estimates of dose following unplanned exposures to ionising radiation. Whilst there has been a relatively large amount of recent development in the biological and physical procedures, development of statistical analysis techniques has failed to keep pace. The aim of this paper is to review the current state of the art in uncertainty analysis techniques across the 'EURADOS Working Group 10-Retrospective dosimetry' members, to give concrete examples of implementation of the techniques recommended in the international standards, and to further promote the use of Monte Carlo techniques to support characterisation of uncertainties. It is concluded that sufficient techniques are available and in use by most laboratories for acute, whole body exposures to highly penetrating radiation, but further work will be required to ensure that statistical analysis is always wholly sufficient for the more complex exposure scenarios.
A precise measurement of the $B^0$ meson oscillation frequency
Aaij, R.; Abellán Beteta, C.; Adeva, B.; ...
2016-07-21
The oscillation frequency, Δm_d, of B⁰ mesons is measured using semileptonic decays with a D⁻ or D*⁻ meson in the final state. The data sample corresponds to 3.0 fb⁻¹ of pp collisions, collected by the LHCb experiment at centre-of-mass energies √s = 7 and 8 TeV. A combination of the two decay modes gives Δm_d = (505.0 ± 2.1 ± 1.0) ns⁻¹, where the first uncertainty is statistical and the second is systematic. This is the most precise single measurement of this parameter. It is consistent with the current world average and has similar precision.
Achieving metrological precision limits through postselection
NASA Astrophysics Data System (ADS)
Alves, G. Bié; Pimentel, A.; Hor-Meyll, M.; Walborn, S. P.; Davidovich, L.; Filho, R. L. de Matos
2017-01-01
Postselection strategies have been proposed with the aim of amplifying weak signals, which may help to overcome detection thresholds associated with technical noise in high-precision measurements. Here we use an optical setup to experimentally explore two different postselection protocols for the estimation of a small parameter: a weak-value amplification procedure and an alternative method that does not provide amplification but nonetheless is shown to be more robust for the sake of parameter estimation. Each technique leads approximately to the saturation of quantum limits for the estimation precision, expressed by the Cramér-Rao bound. For both situations, we show that parameter estimation is improved when the postselection statistics are considered together with the measurement device.
Ultra-High Precision Half-Life Measurement for the Superallowed β⁺ Emitter ^26Al^m
NASA Astrophysics Data System (ADS)
Finlay, P.; Demand, G.; Garrett, P. E.; Leach, K. G.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Grinyer, G. F.; Leslie, J. R.; Andreoiu, C.; Cross, D.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Djongolov, M.; Ettenauer, S.; Hackman, G.; Pearson, C. J.; Williams, S. J.
2009-10-01
The calculated nuclear structure dependent correction for ^26Al^m (δ_C - δ_NS = 0.305(27)% [1]) is smaller by nearly a factor of two than the other twelve precision superallowed cases, making it an ideal case to pursue a reduction in the experimental errors contributing to the Ft value. An ultra-high precision half-life measurement for the superallowed β⁺ emitter ^26Al^m has been made at the Isotope Separator and Accelerator (ISAC) facility at TRIUMF in Vancouver, Canada. A beam of ~10^5 ^26Al^m/s was delivered in October 2007 and its decay was observed using a 4π continuous gas flow proportional counter as part of an ongoing experimental program in superallowed Fermi β decay studies. With a statistical precision of ~0.008%, the present work represents the single most precise measurement of any superallowed half-life to date. [1] I.S. Towner and J.C. Hardy, Phys. Rev. C 79, 055502 (2009).
Ultra-High Precision Half-Life Measurement for the Superallowed β+ Emitter ^26Al^m
NASA Astrophysics Data System (ADS)
Finlay, P.; Demand, G.; Garrett, P. E.; Leach, K. G.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Ball, G. C.; Bandyopadhyay, D.; Djongolov, M.; Ettenauer, S.; Hackman, G.; Pearson, C. J.; Williams, S. J.; Andreoiu, C.; Cross, D.; Austin, R. A. E.; Grinyer, G. F.; Leslie, J. R.
2008-10-01
The calculated nuclear structure dependent correction for ^26Al^m (δ_C - δ_NS = 0.305(27)% [1]) is smaller by nearly a factor of two than the other twelve precision superallowed cases, making it an ideal case to pursue a reduction in the experimental errors contributing to the Ft value. An ultra-high precision half-life measurement for the superallowed β+ emitter ^26Al^m has been made using a 4π continuous gas flow proportional counter as part of an ongoing experimental program in superallowed Fermi β decay studies at the Isotope Separator and Accelerator (ISAC) facility at TRIUMF in Vancouver, Canada, which delivered a beam of ˜10^5 ^26Al^m/s in October 2007. With a statistical precision of ˜0.008%, the present work represents the single most precise measurement of any superallowed half-life to date. [1] I.S. Towner and J.C. Hardy, Phys. Rev. C 77, 025501 (2008).
Influence of Waveform Characteristics on LiDAR Ranging Accuracy and Precision
Yang, Bingwei; Xie, Xinhao; Li, Duan
2018-01-01
Time-of-flight (TOF) based light detection and ranging (LiDAR) calculates distance from the time of flight between start and stop signals. In a lab-built LiDAR, two ranging systems measure this flight time: a time-to-digital converter (TDC) that counts time between trigger signals, and an analog-to-digital converter (ADC) that processes the sampled start/stop pulse waveforms for time estimation. We study the influence of waveform characteristics on the range accuracy and precision of the two kinds of ranging system. Comparing waveform-based ranging (WR) with analog discrete-return ranging (AR), a peak detection method (WR-PK) shows the best ranging performance because of its short execution time, high ranging accuracy, and stable precision. Based on the maximal information coefficient (MIC), a statistical measure of dependence, WR-PK precision has a strong linear relationship with the standard deviation of the received pulse width. Keeping the received pulse width as stable as possible when measuring a constant distance can therefore improve ranging precision. PMID:29642639
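As a rough sketch of the waveform-based peak detection idea (WR-PK), the snippet below simulates a sampled return pulse, refines the sampled maximum with three-point parabolic interpolation, and converts the estimated delay to range; the pulse shape, sampling rate and noise level are illustrative assumptions rather than parameters of the lab-built system:

```python
import numpy as np

# Minimal sketch of waveform-based peak detection for TOF ranging.
# Pulse shape, ADC rate and noise level are assumed, not the lab system's.
c = 299_792_458.0                      # speed of light, m/s
fs = 1e9                               # assumed 1 GS/s ADC sampling rate
t = np.arange(0, 2e-6, 1 / fs)         # time axis, s
t0 = 666.7e-9                          # true round-trip delay (~100 m range)
rng = np.random.default_rng(1)
pulse = np.exp(-0.5 * ((t - t0) / 5e-9) ** 2) + 0.02 * rng.standard_normal(t.size)

# Three-point parabolic interpolation around the sampled maximum.
i = np.argmax(pulse)
y0, y1, y2 = pulse[i - 1], pulse[i], pulse[i + 1]
delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # sub-sample offset, in samples
t_peak = (i + delta) / fs
print(f"estimated range: {c * t_peak / 2:.3f} m (true {c * t0 / 2:.3f} m)")
```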
Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz
2015-03-01
FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
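The kind of a priori power analysis described above can be sketched with a standard two-sample calculation; the assumed between-subject variability (15%) below is a placeholder, not a value from this study or its published tool:

```python
from statsmodels.stats.power import TTestIndPower

# A priori power analysis sketch: subjects per group needed to detect a 10%
# between-group difference in a morphometric measure. The assumed coefficient
# of variation (15%) is a placeholder, not a FreeSurfer-derived value.
mean_diff = 0.10                        # 10% group difference, relative units
assumed_sd = 0.15                       # assumed between-subject SD, relative units
effect_size = mean_diff / assumed_sd    # Cohen's d ~ 0.67

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.8,
                                          alternative='two-sided')
print(f"required subjects per group: {n_per_group:.1f}")
```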
Statistical Approaches to Assess Biosimilarity from Analytical Data.
Burdick, Richard; Coffey, Todd; Gutka, Hiten; Gratzl, Gyöngyi; Conlon, Hugh D; Huang, Chi-Ting; Boyne, Michael; Kuehne, Henriette
2017-01-01
Protein therapeutics have unique critical quality attributes (CQAs) that define their purity, potency, and safety. The analytical methods used to assess CQAs must be able to distinguish clinically meaningful differences in comparator products, and the most important CQAs should be evaluated with the most statistical rigor. High-risk CQA measurements assess the most important attributes that directly impact the clinical mechanism of action or have known implications for safety, while the moderate- to low-risk characteristics may have a lower direct impact and thereby may have a broader range to establish similarity. Statistical equivalence testing is applied for high-risk CQA measurements to establish the degree of similarity (e.g., highly similar fingerprint, highly similar, or similar) of selected attributes. Notably, some high-risk CQAs (e.g., primary sequence or disulfide bonding) are qualitative (e.g., the same as the originator or not the same) and therefore not amenable to equivalence testing. For biosimilars, an important step is the acquisition of a sufficient number of unique originator drug product lots to measure the variability in the originator drug manufacturing process and provide sufficient statistical power for the analytical data comparisons. Together, these analytical evaluations, along with PK/PD and safety data (immunogenicity), provide the data necessary to determine if the totality of the evidence warrants a designation of biosimilarity and subsequent licensure for marketing in the USA. In this paper, a case study approach is used to provide examples of analytical similarity exercises and the appropriateness of statistical approaches for the example data.
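For the quantitative CQAs, equivalence is typically assessed with two one-sided tests (TOST). A minimal sketch under hypothetical lot values, sample sizes and equivalence margin (none of which come from the paper's case studies) is:

```python
import numpy as np
from scipy import stats

# Minimal two one-sided tests (TOST) sketch for equivalence of a quantitative
# CQA. Lot values, sample sizes and the margin are hypothetical placeholders.
rng = np.random.default_rng(2)
originator = rng.normal(100.0, 3.0, 10)   # hypothetical originator lot values
biosimilar = rng.normal(101.0, 3.0, 10)   # hypothetical biosimilar lot values
margin = 5.0                              # hypothetical equivalence margin

n1, n2 = originator.size, biosimilar.size
diff = biosimilar.mean() - originator.mean()
sp2 = ((n1 - 1) * originator.var(ddof=1) +
       (n2 - 1) * biosimilar.var(ddof=1)) / (n1 + n2 - 2)   # pooled variance
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

p_lower = 1 - stats.t.cdf((diff + margin) / se, df)   # H0: diff <= -margin
p_upper = stats.t.cdf((diff - margin) / se, df)       # H0: diff >= +margin
print(f"TOST p-value = {max(p_lower, p_upper):.4f} (declare equivalence if < 0.05)")
```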
How Do Statistical Detection Methods Compare to Entropy Measures
2012-08-28
October 2001. It is known as the RS attack or "Reliable Detection of LSB Steganography in Grayscale and Color Images". The algorithm they use is very...precise for the detection of pseudo-random LSB steganography. Its precision varies with the image, but its reference value is 0.005 bits per...Jessica Fridrich, Miroslav Goljan, Rui Du, "Detecting LSB Steganography in Color and Gray-Scale Images," IEEE Multimedia, vol. 8, no. 4, pp. 22-28, Oct
Corpus and Method for Identifying Citations in Non-Academic Text (Open Access, Publisher’s Version)
2014-05-31
patents, train a CRF classifier to find new citations, and apply a reranker to incorporate non-local information. Our best system achieves 0.83 F-score on...report precision, recall, and F-scores at the chunk level. CRF training and decoding is performed with the CRF++ package (http://crfpp.sourceforge.net) using its default settings. 5.1...only obtain a very small number of training examples for statistical rerankers. Precision Recall F-score TEXT 0.7997 0.7805
Real-Time Single Frequency Precise Point Positioning Using SBAS Corrections
Li, Liang; Jia, Chun; Zhao, Lin; Cheng, Jianhua; Liu, Jianxu; Ding, Jicheng
2016-01-01
Real-time single frequency precise point positioning (PPP) is a promising technique for high-precision navigation with sub-meter or even centimeter-level accuracy because of its convenience and low cost. The navigation performance of single frequency PPP heavily depends on the real-time availability and quality of correction products for satellite orbits and satellite clocks. The satellite-based augmentation system (SBAS) provides correction products in real time, but these are intended for wide-area differential positioning at the 1-meter precision level. By imposing constraints on the ionospheric error, we have developed a real-time single frequency PPP method that fully utilizes SBAS correction products. The proposed PPP method is tested with static and kinematic data, respectively. The static experimental results show that the position accuracy of the proposed PPP method can reach decimeter level, and achieves an improvement of at least 30% when compared with the traditional SBAS method. The positioning convergence of the proposed PPP method can be achieved within 636 epochs at most in static mode. In the kinematic experiment, the position accuracy of the proposed PPP method is improved by at least 20 cm relative to the SBAS method. Furthermore, it is shown that the proposed PPP method can achieve decimeter-level convergence within 500 s in the kinematic mode. PMID:27517930
Gotti, Riccardo; Gatti, Davide; Masłowski, Piotr; Lamperti, Marco; Belmonte, Michele; Laporta, Paolo; Marangoni, Marco
2017-10-07
We propose a novel approach to cavity ring-down spectroscopy (CRDS) in which spectra acquired with a frequency-agile rapid-scanning (FARS) scheme, i.e., with a laser sideband stepped across the modes of a high-finesse cavity, are interleaved with one another by a sub-millisecond readjustment of the cavity length. This brings acquisition times below 20 s for few-GHz-wide spectra composed of a very high number of spectral points, typically 3200. Thanks to a signal-to-noise ratio easily in excess of 10 000, each FARS-CRDS spectrum is shown to be sufficient to determine the line-centre frequency of a Doppler-broadened line with a precision of 2 parts in 10^11, thus very close to that of sub-Doppler regimes and on a few-second time scale. The referencing of the probe laser to a frequency comb provides absolute accuracy and long-term reproducibility to the spectrometer and makes it a powerful tool for precision spectroscopy and line-shape analysis. The experimental approach is discussed in detail together with experimental precision and accuracy tests on the (30012) ← (00001) P12e line of CO2 at ∼1.57 μm.
Does choice of estimators influence conclusions from true metabolizable energy feeding trials?
Sherfy, M.H.; Kirkpatrick, R.L.; Webb, K.E.
2005-01-01
True metabolizable energy (TME) is a measure of avian dietary quality that accounts for metabolic fecal and endogenous urinary energy losses (EL) of non-dietary origin. The TME is calculated using a bird fed the test diet and an estimate of EL derived from another bird (Paired Bird Correction), the same bird (Self Correction), or several other birds (Group Mean Correction). We evaluated precision of these estimators by using each to calculate TME of three seed diets in blue-winged teal (Anas discors). The TME varied by <2% among estimators for all three diets, and Self Correction produced the least variable TMEs for each. The TME did not differ between estimators in nine paired comparisons within diets, but variation between estimators within individual birds was sufficient to be of practical consequence. Although differences in precision among methods were slight, Self Correction required the lowest sample size to achieve a given precision. Feeding trial methods that minimize variation among individuals have several desirable properties, including higher precision of TME estimates and more rigorous experimental control. Consequently, we believe that Self Correction is most likely to accurately represent nutritional value of food items and should be considered the standard method for TME feeding trials. ?? Dt. Ornithologen-Gesellschaft e.V. 2005.
[Contemporary threat of influenza virus infection].
Płusa, Tadeusz
2010-01-01
Swine-origin H1N1 influenza virus (S-OIV) caused a great mobilization of health services around the world. It is now well known that a vaccine against the novel virus is expected to be the key point in that battle. In situations where the recommended treatment with neuraminidase inhibitors is not sufficient to control influenza A/H1N1 viral infection, quick and precise diagnostic procedures should be applied to save and protect our patients.
Mathematical model governing laser-produced dental cavity
NASA Astrophysics Data System (ADS)
Yilbas, Bekir S.; Karatoy, M.; Yilbas, Z.; Karakas, Eyup S.; Bilge, A.; Ustunbas, Hasan B.; Ceyhan, O.
1990-06-01
The formation of dental cavities may be improved by using a laser beam. This provides non-mechanical contact, precise location of the cavity, rapid processing and improved hygiene. Further examination of the interaction mechanism is needed to improve the application of lasers in dentistry. The present study examines the temperature rise and thermal stress development in the enamel during Nd:YAG laser irradiation. It is found that the stresses developed in the enamel are not sufficiently high to cause crack development in the enamel.
Guiding-center equations for electrons in ultraintense laser fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, J.E.; Fisch, N.J.
1994-01-01
The guiding-center equations are derived for electrons in arbitrarily intense laser fields also subject to external fields and ponderomotive forces. Exhibiting the relativistic mass increase of the oscillating electrons, a simple frame-invariant equation is shown to govern the behavior of the electrons for sufficiently weak background fields and ponderomotive forces. The parameter regime for which such a formulation is valid is made precise, and some predictions of the equation are checked by numerical simulation.
Selecting the optimum plot size for a California design-based stream and wetland mapping program.
Lackey, Leila G; Stein, Eric D
2014-04-01
Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km^2) for three geographic regions of California. Simulation results showed smaller plot sizes (1 and 4 km^2) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to selection of 4 km^2 for the California status and trends program.
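The simulated-sampling comparison can be sketched as follows: generate a synthetic landscape, draw random plots of each candidate size at a roughly fixed total sampled area, and compare the standard errors of the estimated cover. The clustered landscape and its ~5% cover are illustrative assumptions, not a reconstruction of the California data:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Simulated-sampling sketch: compare precision of extent estimates for
# different plot sizes at a fixed total sampled area, on a synthetic
# clustered landscape (an assumed stand-in for real wetland maps).
rng = np.random.default_rng(3)
field = gaussian_filter(rng.standard_normal((400, 400)), sigma=8)
land = (field > np.quantile(field, 0.95)).astype(float)   # ~5% clustered cover

def estimate(plot_side, total_cells=400):
    n_plots = total_cells // plot_side ** 2        # keep total sampled area fixed
    means = []
    for _ in range(n_plots):
        r = rng.integers(0, land.shape[0] - plot_side)
        c = rng.integers(0, land.shape[1] - plot_side)
        means.append(land[r:r + plot_side, c:c + plot_side].mean())
    return np.mean(means), np.std(means, ddof=1) / np.sqrt(len(means))

for side in (1, 2, 3, 4):                          # 1, 4, 9, 16 km^2 plots
    est, se = estimate(side)
    print(f"{side**2:2d} km^2 plots: estimated cover = {est:.3f} +/- {se:.3f}")
```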
Visser, Steven; van der Molen, Henk F; Sluiter, Judith K; Frings-Dresen, Monique H W
2018-03-26
To gain insight into the process of applying two guidance strategies - face-to-face (F2F) or e-guidance (EC) - of a Participatory Ergonomics (PE) intervention, and whether differences between these guidance strategies occur, 12 construction companies were randomly assigned to a strategy. The process evaluation covered reach, dose delivered, dose received, precision, competence, satisfaction and behavioural change of individual workers. Data were assessed by logbooks, and by questionnaires and interviews at baseline and/or after six months. Reach was low (1%). Dose delivered (F2F: 63%; EC: 44%) and dose received (F2F: 42%; EC: 16%) were not sufficient. Precision and competence were sufficient for both strategies, and satisfaction was strongly affected by dose received. For behavioural change, knowledge (F2F) and culture (EC) changed positively within companies. Neither strategy was delivered as intended. Compliance with the intervention was low, especially for EC. Starting with a face-to-face meeting might lead to higher compliance, especially in the EC group. Practitioner Summary: This study showed that compliance with a face-to-face and an e-guidance strategy is low. To improve compliance, it is advised to start with a face-to-face meeting to determine which parts of the intervention are needed and which guidance strategy can be used for those parts. ISRCTN73075751.
H0 from ten well-measured time delay lenses
NASA Astrophysics Data System (ADS)
Rathna Kumar, S.; Stalin, C. S.; Prabhu, T. P.
2015-08-01
In this work, we present a homogeneous curve-shifting analysis using the difference-smoothing technique of the publicly available light curves of 24 gravitationally lensed quasars, for which time delays have been reported in the literature. The uncertainty of each measured time delay was estimated using realistic simulated light curves. The recipe for generating such simulated light curves with known time delays in a plausible range around the measured time delay is introduced here. We identified 14 gravitationally lensed quasars that have light curves of sufficiently good quality to enable the measurement of at least one time delay between the images, adjacent to each other in terms of arrival-time order, to a precision of better than 20% (including systematic errors). We modeled the mass distribution of ten of those systems that have known lens redshifts, accurate astrometric data, and sufficiently simple mass distribution, using the publicly available PixeLens code to infer a value of H0 of 68.1 ± 5.9 km s-1 Mpc-1 (1σ uncertainty, 8.7% precision) for a spatially flat universe having Ωm = 0.3 and ΩΛ = 0.7. We note here that the lens modeling approach followed in this work is a relatively simple one and does not account for subtle systematics such as those resulting from line-of-sight effects and hence our H0 estimate should be considered as indicative.
Validity of Single Tract Microelectrode Recording in Subthalamic Nucleus Stimulation
Umemura, Atsushi; Oka, Yuichi; Yamada, Kazuo; Oyama, Genko; Shimo, Yasushi; Hattori, Nobutaka
2013-01-01
In surgery for subthalamic nucleus (STN) deep brain stimulation (DBS), precise implantation of the lead into the STN is essential. Physiological refinement with microelectrode recording (MER) is the gold standard for identifying STN. We studied single tract MER findings and surgical outcomes and verified our surgical method using single tract MER. The number of trajectories in MER and the final position of lead placement were retrospectively analyzed in 440 sides of STN DBS in 221 patients. Bilateral STN DBS yielded marked improvement in the motor score, dyskinesia/fluctuation score, and reduced requirement of dopaminergic medication in this series. The number of trajectories required to obtain sufficient activity of the STN was one in 79.0%, two in 18.2%, and three or more in 2.5% of 440 sides. In 92 sides requiring altered trajectory, the final direction of trajectory movement was posterior in 73.9%, anterior in 13.0%, lateral in 5.4%, and medial in 4.3%. In 18 patients, posterior moves were required due to significant brain shift with intracranial air caused by outflow of CSF during the second side procedure. Sufficient STN activity is obtained with minimum trajectories by proper targeting and precise interpretation of MER findings even in the single tract method. Anterior–posterior moves rather than medial–lateral moves should be attempted first in cases with insufficient recording of STN activity. PMID:24140767
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, Yassin; Anand, Nk
2016-03-30
A 1/16th scaled VHTR experimental model was constructed and preliminary testing was performed in this study. To produce benchmark data for CFD validation in the future, the facility was first run at partial operation with five pipes being heated. PIV was performed to extract the velocity vector field for three adjacent naturally convective jets at statistically steady state. A small recirculation zone was found between the pipes, and the jets entered the merging zone at 3 cm from the pipe outlet but diverged as the flow approached the top of the test geometry. Turbulence analysis shows the turbulence intensity peaked at 41-45% as the jets mixed. A sensitivity analysis confirmed that 1000 frames were sufficient to measure statistically steady state. The results were then validated by extracting the flow rate from the PIV jet velocity profile and comparing it with an analytic flow rate and an ultrasonic flowmeter; all flow rates lie within the uncertainty of the other two methods for Tests 1 and 2. This test facility can be used for further analysis of naturally convective mixing, and eventually to produce benchmark data for CFD validation of the VHTR during a PCC or DCC accident scenario. Next, a PTV study of 3000 images (1500 image pairs) was used to quantify the velocity field in the upper plenum. A sensitivity analysis confirmed that 1500 frames were sufficient to precisely estimate the flow. Subsequently, three Y-lines (3, 9, and 15 cm) from the pipe outlet were extracted to consider the output differences between 50 and 1500 frames. The average velocity field and the standard deviation error that accrued in the three different tests were calculated to assess repeatability. The error varied from 1 to 14%, depending on Y-elevation, and decreased as the flow moved farther from the outlet pipe. In addition, turbulent intensity was calculated and found to be high near the outlet. Reynolds stresses and turbulent intensity were used to validate the data by comparison with benchmark data; the experimental data showed the same pattern as the benchmark data. A turbulent single buoyant jet study was performed for the case of LOFC in the upper plenum of the scaled VHTR. Time-averaged profiles show that 3000 frames of images were sufficient for the study up to second-order statistics. Self-similarity is an important feature of jets since the behavior of jets is independent of Reynolds number and a sole function of geometry. Self-similarity profiles were well observed in the axial velocity and velocity magnitude profiles regardless of z/D, whereas the radial velocity did not show any similarity pattern. The normal components of the Reynolds stresses show self-similarity within the expected range. The study shows that large vortices were observed close to the dome wall, indicating that the geometry of the VHTR has a significant impact on its safety and performance. Near the dome surface, large vortices were shown to inhibit the flow, resulting in reduced axial jet velocity. The vortices that develop subsequently reduce the Reynolds stresses and the impact on the integrity of the VHTR upper plenum surface. Multiple-jet configurations, including two, three and five jets, were also investigated.
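The second-order statistics mentioned above (turbulence intensity and Reynolds stresses) reduce to simple moments of the velocity fluctuations at each PIV point. A minimal sketch with synthetic samples, using one common two-component convention for the intensity, is:

```python
import numpy as np

# Sketch of second-order turbulence statistics at one PIV point: mean velocity,
# a two-component turbulence intensity, and the u'v' Reynolds stress.
# The synthetic velocity samples stand in for measured PIV vectors.
rng = np.random.default_rng(4)
u = 0.20 + 0.05 * rng.standard_normal(1500)   # axial velocity samples, m/s
v = 0.00 + 0.03 * rng.standard_normal(1500)   # radial velocity samples, m/s

u_mean, v_mean = u.mean(), v.mean()
u_fluct, v_fluct = u - u_mean, v - v_mean
rms = np.sqrt(0.5 * (u_fluct.var() + v_fluct.var()))
turbulence_intensity = rms / np.hypot(u_mean, v_mean)
reynolds_stress_uv = np.mean(u_fluct * v_fluct)          # per unit density

print(f"TI = {100 * turbulence_intensity:.1f} %, "
      f"<u'v'> = {reynolds_stress_uv:.2e} m^2/s^2")
```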
Precision estimate for Odin-OSIRIS limb scatter retrievals
NASA Astrophysics Data System (ADS)
Bourassa, A. E.; McLinden, C. A.; Bathgate, A. F.; Elash, B. J.; Degenstein, D. A.
2012-02-01
The limb scatter measurements made by the Optical Spectrograph and Infrared Imaging System (OSIRIS) instrument on the Odin spacecraft are used to routinely produce vertically resolved trace gas and aerosol extinction profiles. Version 5 of the ozone and stratospheric aerosol extinction retrievals, which are available for download, are performed using a multiplicative algebraic reconstruction technique (MART). The MART inversion is a type of relaxation method, and as such the covariance of the retrieved state is estimated numerically, which, if done directly, is a computationally heavy task. Here we provide a methodology for the derivation of a numerical estimate of the covariance matrix for the retrieved state using the MART inversion that is sufficiently efficient to perform for each OSIRIS measurement. The resulting precision is compared with the variability in a large set of pairs of OSIRIS measurements that are close in time and space in the tropical stratosphere where the natural atmospheric variability is weak. These results are found to be highly consistent and thus provide confidence in the numerical estimate of the precision in the retrieved profiles.
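For reference, the textbook multiplicative algebraic reconstruction technique (MART) update is sketched below on a small synthetic linear system; this is the generic iteration, not the OSIRIS Version 5 implementation, and the forward matrix is a random non-negative placeholder that the iteration approximately inverts:

```python
import numpy as np

# Generic MART sketch: multiplicative updates drive the forward-modelled
# measurements toward the observed ones. The forward matrix and "true" state
# are random placeholders, not the OSIRIS retrieval problem.
rng = np.random.default_rng(5)
A = rng.random((20, 10))            # assumed forward model (e.g. path weights)
x_true = rng.random(10) + 0.5
y = A @ x_true                      # synthetic measurements

x = np.ones_like(x_true)            # positive initial guess
lam = 0.5                           # relaxation parameter
A_norm = A / A.max()                # keep exponents in [0, 1]
for _ in range(200):                # repeated sweeps through all measurements
    for i in range(A.shape[0]):
        ratio = y[i] / (A[i] @ x)
        x *= ratio ** (lam * A_norm[i])

print("max relative error:", np.max(np.abs(x - x_true) / x_true))
```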
Flexible coordinate measurement system based on robot for industries
NASA Astrophysics Data System (ADS)
Guo, Yin; Yang, Xue-you; Liu, Chang-jie; Ye, Sheng-hua
2010-10-01
The flexible coordinate measurement system based on a robot, which is applicable to multiple vehicle models, is designed to meet the needs of online measurement for the current mainstream mixed body-in-white (BIW) production line. Moderate precision, good flexibility and the absence of blind angles are the benefits of this measurement system. For this measurement system, a monocular structured-light vision sensor has been designed, which can measure not only edges, but also planes, apertures and other features. An effective way to perform fast on-site calibration of the whole system using a laser tracker has also been proposed, which unifies the various coordinate systems in industrial fields. The experimental results show a satisfactory precision of +/-0.30 mm for this measurement system, which is sufficient for the needs of online measurement of the body-in-white (BIW) in the auto production line. The system achieves real-time detection and monitoring of the whole process of the car body's manufacture, and provides complete data support for correcting manufacturing errors promptly and accurately and for improving manufacturing precision.
NASA Technical Reports Server (NTRS)
Zhou, Hanying
2007-01-01
PREDICTS is a computer program that predicts the frequencies, as functions of time, of signals to be received by a radio science receiver, in this case a special-purpose digital receiver dedicated to analysis of signals received by an antenna in NASA's Deep Space Network (DSN). Unlike other software used in the DSN, PREDICTS does not use interpolation early in the calculations; as a consequence, PREDICTS is more precise and more stable. The precision afforded by the other DSN software is sufficient for telemetry; the greater precision afforded by PREDICTS is needed for radio-science experiments. In addition to frequencies as a function of time, PREDICTS yields the rates of change and interpolation coefficients for the frequencies and the beginning and ending times of reception, transmission, and occultation. PREDICTS is applicable to S-, X-, and Ka-band signals and can accommodate the following link configurations: (1) one-way (spacecraft to ground), (2) two-way (from a ground station to a spacecraft to the same ground station), and (3) three-way (from a ground transmitting station to a spacecraft to a different ground receiving station).
Choosing a DIVA: a comparison of emerging digital imagery vegetation analysis techniques
Jorgensen, Christopher F.; Stutzman, Ryan J.; Anderson, Lars C.; Decker, Suzanne E.; Powell, Larkin A.; Schacht, Walter H.; Fontaine, Joseph J.
2013-01-01
Question: What is the precision of five methods of measuring vegetation structure using ground-based digital imagery and processing techniques? Location: Lincoln, Nebraska, USA Methods: Vertical herbaceous cover was recorded using digital imagery techniques at two distinct locations in a mixed-grass prairie. The precision of five ground-based digital imagery vegetation analysis (DIVA) methods for measuring vegetation structure was tested using a split-split plot analysis of covariance. Variability within each DIVA technique was estimated using coefficient of variation of mean percentage cover. Results: Vertical herbaceous cover estimates differed among DIVA techniques. Additionally, environmental conditions affected the vertical vegetation obstruction estimates for certain digital imagery methods, while other techniques were more adept at handling various conditions. Overall, percentage vegetation cover values differed among techniques, but the precision of four of the five techniques was consistently high. Conclusions: DIVA procedures are sufficient for measuring various heights and densities of standing herbaceous cover. Moreover, digital imagery techniques can reduce measurement error associated with multiple observers' standing herbaceous cover estimates, allowing greater opportunity to detect patterns associated with vegetation structure.
Feedforward hysteresis compensation in trajectory control of piezoelectrically-driven nanostagers
NASA Astrophysics Data System (ADS)
Bashash, Saeid; Jalili, Nader
2006-03-01
Complex structural nonlinearities of piezoelectric materials drastically degrade their performance in a variety of micro- and nano-positioning applications. From the precision positioning and control perspective, the multi-path, time-history-dependent hysteresis phenomenon is the nonlinearity of greatest concern in piezoelectric actuators. To understand the underlying physics of this phenomenon and to develop an efficient compensation strategy, the intelligent properties of hysteresis, together with the effects of non-local memories, are discussed. Through a set of experiments on a piezoelectrically-driven nanostager with a high-resolution capacitive position sensor, it is shown that precise prediction of the hysteresis path requires certain memory units to store previous hysteresis trajectory data. Based on the experimental observations, a constitutive memory-based mathematical modeling framework is developed and trained for precise prediction of the hysteresis path for arbitrarily assigned input profiles. Using the inverse hysteresis model, a feedforward control strategy is then developed and implemented on the nanostager to compensate for the system's ever-present nonlinearity. Experimental results demonstrate that the controller remarkably eliminates the nonlinear effect if memory units are sufficiently chosen for the inverse model.
Current approach to fertility preservation by embryo cryopreservation
Bedoschi, Giuliano; Oktay, Kutluk
2013-01-01
The ovaries are susceptible to damage following treatment with gonadotoxic chemotherapy, pelvic radiotherapy, and/or ovarian surgery. Gonadotoxic treatments have also been used in patients with various nonmalignant systemic diseases. Any woman of reproductive age with a sufficiently high risk of developing future ovarian failure due to those medical interventions may benefit from embryo cryopreservation, though the tools for assessing such a risk are still not very precise. Furthermore, the risk assessment can be influenced by many other factors, such as the delay expected after chemotherapy and the number of children desired in the future. Embryo cryopreservation is an established and most successful method of fertility preservation when there is sufficient time available to perform ovarian stimulation. This publication will review the current state, approach, and indications of embryo cryopreservation for fertility preservation. PMID:23535505
Contribution of Apollo lunar photography to the establishment of selenodetic control
NASA Technical Reports Server (NTRS)
Dermanis, A.
1975-01-01
Among the various types of available data relevant to the establishment of geometric control on the moon, the only one covering significant portions of the lunar surface (20%) with sufficient information content is lunar photography taken in the proximity of the moon from lunar orbiters. The idea of free geodetic networks is introduced as a tool for the statistical comparison of the geometric aspects of the various data used. Methods were developed for updating the statistics of observations and the a priori parameter estimates to obtain statistically consistent solutions by means of the optimum relative weighting concept.
[Lymphocytic infiltration in uveal melanoma].
Sach, J; Kocur, J
1993-11-01
Following our observation of lymphocytic infiltration in uveal melanomas, we present a theoretical review of this interesting topic. Due to the relatively low incidence of this feature, we do not yet have a sufficiently large collection of cases to present statistically significant conclusions.
Inflationary tensor perturbations after BICEP2.
Caligiuri, Jerod; Kosowsky, Arthur
2014-05-16
The measurement of B-mode polarization of the cosmic microwave background at large angular scales by the BICEP experiment suggests a stochastic gravitational wave background from early-Universe inflation with a surprisingly large amplitude. The power spectrum of these tensor perturbations can be probed both with further measurements of the microwave background polarization at smaller scales and also directly via interferometry in space. We show that sufficiently sensitive high-resolution B-mode measurements will ultimately have the ability to test the inflationary consistency relation between the amplitude and spectrum of the tensor perturbations, confirming their inflationary origin. Additionally, a precise B-mode measurement of the tensor spectrum will predict the tensor amplitude on solar system scales to 20% accuracy for an exact power-law tensor spectrum, so a direct detection will then measure the running of the tensor spectral index to high precision.
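The extrapolation argument rests on a single power law together with the single-field consistency relation n_t = -r/8. A minimal sketch with placeholder numbers (the amplitudes, the assumed r and the direct-detection wavenumber are illustrative order-of-magnitude choices, not fitted values) is:

```python
# Sketch: extrapolating an exact power-law tensor spectrum from the CMB pivot
# scale to much smaller scales, using the consistency relation n_t = -r/8.
# All numbers below are illustrative placeholders.
A_s = 2.1e-9            # assumed scalar amplitude at the pivot scale
r = 0.1                 # assumed tensor-to-scalar ratio at k_pivot
n_t = -r / 8.0          # single-field inflation consistency relation
k_pivot = 0.05          # Mpc^-1, CMB pivot scale
k_direct = 1.0e13       # Mpc^-1, rough order for direct-detection scales

A_t_pivot = r * A_s
A_t_direct = A_t_pivot * (k_direct / k_pivot) ** n_t
print(f"tensor power at pivot: {A_t_pivot:.2e}, extrapolated: {A_t_direct:.2e}")
```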
Comparison of Einstein-Boltzmann solvers for testing general relativity
NASA Astrophysics Data System (ADS)
Bellini, E.; Barreira, A.; Frusciante, N.; Hu, B.; Peirone, S.; Raveri, M.; Zumalacárregui, M.; Avilez-Lopez, A.; Ballardini, M.; Battye, R. A.; Bolliet, B.; Calabrese, E.; Dirian, Y.; Ferreira, P. G.; Finelli, F.; Huang, Z.; Ivanov, M. M.; Lesgourgues, J.; Li, B.; Lima, N. A.; Pace, F.; Paoletti, D.; Sawicki, I.; Silvestri, A.; Skordis, C.; Umiltà, C.; Vernizzi, F.
2018-01-01
We compare Einstein-Boltzmann solvers that include modifications to general relativity and find that, for a wide range of models and parameters, they agree to a high level of precision. We look at three general purpose codes that primarily model general scalar-tensor theories, three codes that model Jordan-Brans-Dicke (JBD) gravity, a code that models f (R ) gravity, a code that models covariant Galileons, a code that models Hořava-Lifschitz gravity, and two codes that model nonlocal models of gravity. Comparing predictions of the angular power spectrum of the cosmic microwave background and the power spectrum of dark matter for a suite of different models, we find agreement at the subpercent level. This means that this suite of Einstein-Boltzmann solvers is now sufficiently accurate for precision constraints on cosmological and gravitational parameters.
NASA Technical Reports Server (NTRS)
Payne, M. H.
1973-01-01
A computer program is described for the calculation of the zeroes of the associated Legendre functions, Pnm, and their derivatives, for the calculation of the extrema of Pnm and also the integral between pairs of successive zeroes. The program has been run for all n,m from (0,0) to (20,20) and selected cases beyond that for n up to 40. Up to (20,20), the program (written in double precision) retains nearly full accuracy, and indications are that up to (40,40) there is still sufficient precision (4-5 decimal digits for a 54-bit mantissa) for estimation of various bounds and errors involved in geopotential modelling, the purpose for which the program was written.
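One piece of such a program, locating the interior zeros of P_n^m by bracketing sign changes on a fine grid and refining with a root finder, can be sketched as below; the derivative zeros, extrema and integrals computed by the original program are not reproduced here:

```python
import numpy as np
from scipy.special import lpmv
from scipy.optimize import brentq

# Sketch: interior zeros of the associated Legendre function P_n^m(x) on (-1, 1),
# found by bracketing sign changes and refining with Brent's method.
def legendre_zeros(n, m, samples=4000):
    x = np.linspace(-1 + 1e-9, 1 - 1e-9, samples)
    y = lpmv(m, n, x)
    zeros = []
    for a, b, ya, yb in zip(x[:-1], x[1:], y[:-1], y[1:]):
        if ya == 0.0:
            zeros.append(a)
        elif ya * yb < 0:
            zeros.append(brentq(lambda t: lpmv(m, n, t), a, b))
    return np.array(zeros)

print(legendre_zeros(4, 2))   # the n - m = 2 interior zeros of P_4^2
```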
Spotting stellar activity cycles in Gaia astrometry
NASA Astrophysics Data System (ADS)
Morris, Brett M.; Agol, Eric; Davenport, James R. A.; Hawley, Suzanne L.
2018-06-01
Astrometry from Gaia will measure the positions of stellar photometric centroids to unprecedented precision. We show that the precision of Gaia astrometry is sufficient to detect starspot-induced centroid jitter for nearby stars in the Tycho-Gaia Astrometric Solution (TGAS) sample with magnetic activity similar to the young G-star KIC 7174505 or the active M4 dwarf GJ 1243, but is insufficient to measure centroid jitter for stars with Sun-like spot distributions. We simulate Gaia observations of stars with 10 year activity cycles to search for evidence of activity cycles, and find that Gaia astrometry alone likely cannot detect activity cycles for stars in the TGAS sample, even if they have spot distributions like KIC 7174505. We review the activity of the nearby low-mass stars in the TGAS sample for which we anticipate significant detections of spot-induced jitter.
Consistent calculation of the screening and exchange effects in allowed β- transitions
NASA Astrophysics Data System (ADS)
Mougeot, X.; Bisch, C.
2014-07-01
The atomic exchange effect has previously been demonstrated to have a great influence at low energy on the Pu241 β- transition. The screening effect has been given as a possible explanation for a remaining discrepancy. Improved calculations have been made to consistently evaluate these two atomic effects, compared here to the recent high-precision measurements of Pu241 and Ni63 β spectra. In this paper a screening correction has been defined to account for the spatial extension of the electron wave functions. Excellent overall agreement of about 1% from 500 eV to the end-point energy has been obtained for both β spectra, which demonstrates that a rather simple β decay model for allowed transitions, including atomic effects within an independent-particle model, is sufficient to describe well the current most precise measurements.
NASA Technical Reports Server (NTRS)
Gwo, Dz-Hung (Inventor)
2003-01-01
A method of bonding substrates by hydroxide-catalyzed hydration/dehydration involves applying a bonding material to at least one surface to be bonded, and placing the at least one surface sufficiently close to another surface such that a bonding interface is formed between them. A bonding material of the invention comprises a source of hydroxide ions, and may optionally include a silicate component, a particulate filling material, and a property-modifying component. Bonding methods of the invention reliably and reproducibly provide bonds which are strong and precise, and which may be tailored according to a wide range of possible applications. Possible applications for bonding materials of the invention include: forming composite materials, coating substrates, forming laminate structures, assembly of precision optical components, and preparing objects of defined geometry and composition. Bonding materials and methods of preparing the same are also disclosed.
SOLARIS 3-axis high load, low profile, high precision motorized positioner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acome, Eric; Van Every, Eric; Deyhim, Alex, E-mail: adc@adc9001.com
A 3-axis optical table, shown in Figure 1, was designed, fabricated, and assembled for the SOLARIS synchrotron facility at the Jagiellonian University in Krakow, Poland. To accommodate the facility, the table was designed to be very low profile, as seen in Figure 2, and bear a high load. The platform has degrees of freedom in the vertical (Z) direction as well as horizontal transversal (X and Y) directions. The table is intended to sustain loads as large as 1500 kg which will be sufficient to support a variety of equipment to measure and facilitate synchrotron radiation. After assembly, the table was tested and calibrated to find its position error in the vertical direction. ADC has extensive experience designing and building custom complex high precision motion systems [1,2].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.
2014-04-15
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.
2014-01-01
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence. PMID:24694150
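A much-simplified, normal-theory version of the sample-size question (how many scans so that the mean ED is within a chosen relative precision at a chosen confidence) is sketched below; it ignores the MOSFET-calibration variance component and the Lagrange-multiplier treatment used in the paper, and the per-scan coefficient of variation is a placeholder:

```python
from math import ceil
from scipy.stats import norm

# Simplified sample-size sketch: scans needed so the mean ED is estimated to
# within relative precision 'delta' at a given confidence, assuming per-scan
# ED estimates with relative standard deviation 'cv'. Both values are
# placeholders; the paper's full scheme also models calibration variance.
def scans_needed(cv, delta, confidence=0.95):
    z = norm.ppf(0.5 + confidence / 2)
    return ceil((z * cv / delta) ** 2)

print(scans_needed(cv=0.14, delta=0.05))   # e.g. 14% per-scan CV, 5% precision
```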
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahadevan, Suvrath; Halverson, Samuel; Ramsey, Lawrence
2014-05-01
Modal noise in optical fibers imposes limits on the signal-to-noise ratio (S/N) and velocity precision achievable with the next generation of astronomical spectrographs. This is an increasingly pressing problem for precision radial velocity spectrographs in the near-infrared (NIR) and optical that require both high stability of the observed line profiles and high S/N. Many of these spectrographs plan to use highly coherent emission-line calibration sources like laser frequency combs and Fabry-Perot etalons to achieve precision sufficient to detect terrestrial-mass planets. These high-precision calibration sources often use single-mode fibers or highly coherent sources. Coupling light from single-mode fibers to multi-mode fibers leads to only a very low number of modes being excited, thereby exacerbating the modal noise measured by the spectrograph. We present a commercial off-the-shelf solution that significantly mitigates modal noise at all optical and NIR wavelengths, and which can be applied to spectrograph calibration systems. Our solution uses an integrating sphere in conjunction with a diffuser that is moved rapidly using electrostrictive polymers, and is generally superior to most tested forms of mechanical fiber agitation. We demonstrate a high level of modal noise reduction with a narrow bandwidth 1550 nm laser. Our relatively inexpensive solution immediately enables spectrographs to take advantage of the innate precision of bright state-of-the art calibration sources by removing a major source of systematic noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redus, K.S.
2007-07-01
The foundation of statistics deals with (a) how to measure and collect data and (b) how to identify models using estimates of statistical parameters derived from the data. Risk is a term used by the statistical community, and by those that employ statistics, to express the results of a statistically based study. Statistical risk is represented as a probability that, for example, a statistical model is sufficient to describe a data set; but risk is also interpreted as a measure of the worth of one alternative when compared to another. The common thread of any risk-based problem is the combination of (a) the chance an event will occur with (b) the value of the event. This paper presents an introduction to, and some examples of, statistical risk-based decision making from quantitative, visual, and linguistic perspectives. This should help in understanding areas of radioactive waste management that can be suitably expressed using statistical risk and vice versa. (authors)
Hermassi, Souhail; Aouadi, Ridha; Khalifa, Riadh; van den Tillaar, Roland; Shephard, Roy J; Chelly, Mohamed Souhaiel
2015-03-29
The aim of the present study was to investigate relationships between a performance index derived from the Yo-Yo Intermittent Recovery Test level 1 (Yo-Yo IR1) and other measures of physical performance and skill in handball players. The other measures considered included peak muscular power of the lower limbs (Wpeak), jumping ability (squat and counter-movement jumps (SJ, CMJ), a handball skill test and the average sprinting velocities over the first step (VS) and the first 5 m (V5m). Test scores for 25 male national-level adolescent players (age: 17.2 ± 0.7 years) averaged 4.83 ± 0.34 m·s(-1) (maximal velocity reached at the Yo-Yo IR1); 917 ± 105 Watt, 12.7 ± 3 W·kg(-1) (Wpeak); 3.41 ± 0.5 m·s(-1) and 6.03 ± 0.6 m·s(-1) (sprint velocities for Vs and V5m respectively) and 10.3 ± 1 s (handball skill test). Yo-Yo IR1 test scores showed statistically significant correlations with all of the variables examined: Wpeak (W and W·kg(-1)) r = 0.80 and 0.65, respectively, p≤0.001); sprinting velocities (r = 0.73 and 0.71 for VS and V5m respectively; p≤0.001); jumping performance (SJ: r = 0.60, p≤0.001; CMJ: r= 0.66, p≤0.001) and the handball skill test (r = 0.71; p≤0.001). We concluded that the Yo-Yo test score showed a sufficient correlation with other potential means of assessing handball players, and that intra-individual changes of Yo-Yo IR1 score could provide a useful composite index of the response to training or rehabilitation, although correlations lack sufficient precision to help in players' selection.
Hermassi, Souhail; Aouadi, Ridha; Khalifa, Riadh; van den Tillaar, Roland; Shephard, Roy J.; Chelly, Mohamed Souhaiel
2015-01-01
The aim of the present study was to investigate relationships between a performance index derived from the Yo-Yo Intermittent Recovery Test level 1 (Yo-Yo IR1) and other measures of physical performance and skill in handball players. The other measures considered included peak muscular power of the lower limbs (Wpeak), jumping ability (squat and counter-movement jumps (SJ, CMJ), a handball skill test and the average sprinting velocities over the first step (VS) and the first 5 m (V5m). Test scores for 25 male national-level adolescent players (age: 17.2 ± 0.7 years) averaged 4.83 ± 0.34 m·s−1 (maximal velocity reached at the Yo-Yo IR1); 917 ± 105 Watt, 12.7 ± 3 W·kg−1 (Wpeak); 3.41 ± 0.5 m·s−1 and 6.03 ± 0.6 m·s−1 (sprint velocities for Vs and V5m respectively) and 10.3 ± 1 s (handball skill test). Yo-Yo IR1 test scores showed statistically significant correlations with all of the variables examined: Wpeak (W and W·kg−1) r = 0.80 and 0.65, respectively, p≤0.001); sprinting velocities (r = 0.73 and 0.71 for VS and V5m respectively; p≤0.001); jumping performance (SJ: r = 0.60, p≤0.001; CMJ: r= 0.66, p≤0.001) and the handball skill test (r = 0.71; p≤0.001). We concluded that the Yo-Yo test score showed a sufficient correlation with other potential means of assessing handball players, and that intra-individual changes of Yo-Yo IR1 score could provide a useful composite index of the response to training or rehabilitation, although correlations lack sufficient precision to help in players’ selection. PMID:25964822
Quantum Monte Carlo: Faster, More Reliable, And More Accurate
NASA Astrophysics Data System (ADS)
Anderson, Amos Gerald
2010-06-01
The Schrodinger Equation has been available for about 83 years, but today we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we are held back by a lack of sufficient computing power. Consequently, effort is applied to finding acceptable approximations to facilitate real-time solutions. In the meantime, computer technology has begun rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we have seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by optimizing for processor quantity instead of quality, graphics cards have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known use of a graphics card for computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that, notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two-particle correlation functions, designed with both flexibility and simplicity considerations, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation by manipulating configuration weights, thus facilitating efficient and robust calculations. Our combination of Generalized Valence Bond wavefunctions, improved correlation functions, and stabilized weighting techniques for calculations run on graphics cards represents a new way of using Quantum Monte Carlo to study arbitrarily sized molecules.
Influence of Running on Pistol Shot Hit Patterns.
Kerkhoff, Wim; Bolck, Annabel; Mattijssen, Erwin J A T
2016-01-01
In shooting scene reconstructions, risk assessment of the situation can be important for the legal system. Shooting accuracy and precision, and thus risk assessment, might be correlated with the shooter's physical movement and experience. The hit patterns of inexperienced and experienced shooters, while shooting stationary (10 shots) and in running motion (10 shots) with a semi-automatic pistol, were compared visually (with confidence ellipses) and statistically. The results show a significant difference in precision (circumference of the hit patterns) between stationary shots and shots fired in motion for both inexperienced and experienced shooters. The decrease in precision for all shooters was significantly larger in the y-direction than in the x-direction. The precision of the experienced shooters is overall better than that of the inexperienced shooters. No significant change in accuracy (shift in the hit pattern center) between stationary shots and shots fired in motion can be seen for all shooters. © 2015 American Academy of Forensic Sciences.
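The confidence ellipses used for the visual comparison can be derived from the sample covariance of the hit coordinates. A sketch with synthetic hits (not the study's data) is:

```python
import numpy as np
from scipy.stats import chi2

# Sketch: 95% confidence ellipse for a 2-D hit pattern from the sample
# covariance (eigenvectors give orientation, scaled eigenvalues the semi-axes).
# The hit coordinates are synthetic placeholders.
rng = np.random.default_rng(6)
hits = rng.multivariate_normal([0.0, 0.0], [[4.0, 1.0], [1.0, 9.0]], size=10)

center = hits.mean(axis=0)
cov = np.cov(hits, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
scale = chi2.ppf(0.95, df=2)                     # 95% coverage for 2 dof
semi_axes = np.sqrt(scale * eigvals)             # ellipse semi-axis lengths
angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))  # major-axis angle

print(f"center {center}, semi-axes {semi_axes}, orientation {angle:.1f} deg")
```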
Status and outlook of CHIP-TRAP: The Central Michigan University high precision Penning trap
NASA Astrophysics Data System (ADS)
Redshaw, M.; Bryce, R. A.; Hawks, P.; Gamage, N. D.; Hunt, C.; Kandegedara, R. M. E. B.; Ratnayake, I. S.; Sharp, L.
2016-06-01
At Central Michigan University we are developing a high-precision Penning trap mass spectrometer (CHIP-TRAP) that will focus on measurements with long-lived radioactive isotopes. CHIP-TRAP will consist of a pair of hyperbolic precision-measurement Penning traps, and a cylindrical capture/filter trap in a 12 T magnetic field. Ions will be produced by external ion sources, including a laser ablation source, and transported to the capture trap at low energies enabling ions of a given m / q ratio to be selected via their time-of-flight. In the capture trap, contaminant ions will be removed with a mass-selective rf dipole excitation and the ion of interest will be transported to the measurement traps. A phase-sensitive image charge detection technique will be used for simultaneous cyclotron frequency measurements on single ions in the two precision traps, resulting in a reduction in statistical uncertainty due to magnetic field fluctuations.
Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.
2008-08-11
Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures can serve as quality assurance routines that allow verification of the equipment's geometrical accuracy and precision. In this work mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, the isocenter accuracy was shown to be smaller than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with mask. All uncertainties were below tolerances. The highest contribution to mechanical variations is due to table rotation, so it is important to correct variations using a localization frame with printed overlays. Knowledge of the mechanical precision allows the statistical errors to be considered in the treatment planning volume margins.
Validation of a Spectral Method for Quantitative Measurement of Color in Protein Drug Solutions.
Yin, Jian; Swartz, Trevor E; Zhang, Jian; Patapoff, Thomas W; Chen, Bartolo; Marhoul, Joseph; Shih, Norman; Kabakoff, Bruce; Rahimi, Kimia
2016-01-01
A quantitative spectral method has been developed to precisely measure the color of protein solutions. In this method, a spectrophotometer is utilized for capturing the visible absorption spectrum of a protein solution, which can then be converted to color values (L*a*b*) that represent human perception of color in a quantitative three-dimensional space. These quantitative values (L*a*b*) allow for calculating the best match of a sample's color to a European Pharmacopoeia reference color solution. In order to qualify this instrument and assay for use in clinical quality control, a technical assessment was conducted to evaluate the assay suitability and precision. Setting acceptance criteria for this study required development and implementation of a unique statistical method for assessing precision in 3-dimensional space. Different instruments, cuvettes, protein solutions, and analysts were compared in this study. The instrument accuracy, repeatability, and assay precision were determined. The instrument and assay are found suitable for use in assessing color of drug substances and drug products and is comparable to the current European Pharmacopoeia visual assessment method. In the biotechnology industry, a visual assessment is the most commonly used method for color characterization, batch release, and stability testing of liquid protein drug solutions. Using this method, an analyst visually determines the color of the sample by choosing the closest match to a standard color series. This visual method can be subjective because it requires an analyst to make a judgment of the best match of color of the sample to the standard color series, and it does not capture data on hue and chroma that would allow for improved product characterization and the ability to detect subtle differences between samples. To overcome these challenges, we developed a quantitative spectral method for color determination that greatly reduces the variability in measuring color and allows for a more precise understanding of color differences. In this study, we established a statistical method for assessing precision in 3-dimensional space and demonstrated that the quantitative spectral method is comparable with respect to precision and accuracy to the current European Pharmacopoeia visual assessment method. © PDA, Inc. 2016.
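Once a spectrum has been converted to L*a*b* coordinates, the closest reference is simply the one at the smallest colour difference ΔE*ab. A sketch with made-up reference coordinates (not the Pharmacopoeia values) is:

```python
import numpy as np

# Sketch: match a measured colour to the closest reference solution by the
# Euclidean colour difference (Delta E*ab) in L*a*b* space. The reference
# coordinates and sample values are made-up placeholders.
references = {                       # hypothetical L*, a*, b* of reference colours
    "BY5": (99.0, -1.5, 6.0),
    "BY6": (99.5, -0.8, 3.0),
    "BY7": (99.8, -0.4, 1.5),
}
sample = np.array([99.3, -1.0, 4.1])   # measured L*a*b* of the protein solution

def delta_e(lab1, lab2):
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

best = min(references, key=lambda k: delta_e(sample, references[k]))
print({k: round(delta_e(sample, v), 2) for k, v in references.items()}, "->", best)
```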
Baumrind, D
1983-12-01
The claims based on causal models employing either statistical or experimental controls are examined and found to be excessive when applied to social or behavioral science data. An exemplary case, in which strong causal claims are made on the basis of a weak version of the regularity model of cause, is critiqued. O'Donnell and Clayton claim that in order to establish that marijuana use is a cause of heroin use (their "reformulated stepping-stone" hypothesis), it is necessary and sufficient to demonstrate that marijuana use precedes heroin use and that the statistically significant association between the two does not vanish when the effects of other variables deemed to be prior to both of them are removed. I argue that O'Donnell and Clayton's version of the regularity model is not sufficient to establish cause and that the planning of social interventions both presumes and requires a generative rather than a regularity causal model. Causal modeling using statistical controls is of value when it compels the investigator to make explicit and to justify a causal explanation but not when it is offered as a substitute for a generative analysis of causal connection.
Machine vision system for measuring conifer seedling morphology
NASA Astrophysics Data System (ADS)
Rigney, Michael P.; Kranzler, Glenn A.
1995-01-01
A PC-based machine vision system providing rapid measurement of bare-root tree seedling morphological features has been designed. The system uses backlighting and a 2048-pixel line-scan camera to acquire images with transverse resolutions as high as 0.05 mm for precise measurement of stem diameter. Individual seedlings are manually loaded on a conveyor belt and inspected by the vision system in less than 0.25 seconds. Designed for quality control and morphological data acquisition by nursery personnel, the system provides a user-friendly, menu-driven graphical interface. The system automatically locates the seedling root collar and measures stem diameter, shoot height, sturdiness ratio, root mass length, projected shoot and root area, shoot-root area ratio, and percent fine roots. Sample statistics are computed for each measured feature. Measurements for each seedling may be stored for later analysis. Feature measurements may be compared with multi-class quality criteria to determine sample quality or to perform multi-class sorting. Statistical summary and classification reports may be printed to facilitate the communication of quality concerns with grading personnel. Tests were conducted at a commercial forest nursery to evaluate measurement precision. Four quality control personnel measured root collar diameter, stem height, and root mass length on each of 200 conifer seedlings. The same seedlings were inspected four times by the machine vision system. Machine stem diameter measurement precision was four times greater than that of manual measurements. Machine and manual measurements had comparable precision for shoot height and root mass length.
Wickham, Hadley; Hofmann, Heike
2011-12-01
We propose a new framework for visualising tables of counts, proportions and probabilities. We call our framework product plots, alluding to the computation of area as a product of height and width, and the statistical concept of generating a joint distribution from the product of conditional and marginal distributions. The framework, with extensions, is sufficient to encompass over 20 visualisations previously described in fields of statistical graphics and infovis, including bar charts, mosaic plots, treemaps, equal area plots and fluctuation diagrams. © 2011 IEEE
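The area-as-product idea can be made concrete with a small sketch (not the authors' implementation): for a two-way table of counts, column widths follow the marginal distribution of one variable and, within each column, heights follow the conditional distribution of the other, so each rectangle's area equals the joint proportion, as in a mosaic or spine plot. The example table is hypothetical.

```python
def product_plot_rects(counts):
    """Partition the unit square for a two-way table of counts.

    Column widths follow the marginal distribution of the first variable;
    within each column, heights follow the conditional distribution of the
    second variable, so each rectangle's area equals the joint proportion.
    """
    total = sum(sum(row.values()) for row in counts.values())
    rects, x0 = [], 0.0
    for x_level, row in counts.items():
        col_total = sum(row.values())
        width = col_total / total          # marginal P(X = x_level)
        y0 = 0.0
        for y_level, n in row.items():
            height = n / col_total         # conditional P(Y = y_level | X)
            rects.append((x_level, y_level, x0, y0, width, height))
            y0 += height
        x0 += width
    return rects

# Hypothetical two-way table of counts.
table = {"male": {"yes": 30, "no": 10}, "female": {"yes": 20, "no": 40}}
for rect in product_plot_rects(table):
    print(rect)
```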
Cardiac gating with a pulse oximeter for dual-energy imaging
NASA Astrophysics Data System (ADS)
Shkumat, N. A.; Siewerdsen, J. H.; Dhanantwari, A. C.; Williams, D. B.; Paul, N. S.; Yorkston, J.; Van Metter, R.
2008-11-01
The development and evaluation of a prototype cardiac gating system for double-shot dual-energy (DE) imaging is described. By acquiring both low- and high-kVp images during the resting phase of the cardiac cycle (diastole), heart misalignment between images can be reduced, thereby decreasing the magnitude of cardiac motion artifacts. For this initial implementation, a fingertip pulse oximeter was employed to measure the peripheral pulse waveform ('plethysmogram'), offering potential logistic, cost and workflow advantages compared to an electrocardiogram. A gating method was developed that accommodates temporal delays due to physiological pulse propagation, oximeter waveform processing and the imaging system (software, filter-wheel, anti-scatter Bucky-grid and flat-panel detector). Modeling the diastolic period allowed the calculation of an implemented delay, t_imp, required to trigger correctly during diastole at any patient heart rate (HR). The model suggests a triggering scheme characterized by two HR regimes, separated by a threshold, HR_thresh. For rates at or below HR_thresh, sufficient time exists to expose on the same heartbeat as the plethysmogram pulse [t_imp(HR) = 0]. Above HR_thresh, a characteristic t_imp(HR) delays exposure to the subsequent heartbeat, accounting for all fixed and variable system delays. Performance was evaluated in terms of accuracy and precision of diastole-trigger coincidence and quantitative evaluation of artifact severity in gated and ungated DE images. Initial implementation indicated 85% accuracy in diastole-trigger coincidence. Through the identification of an improved HR estimation method (modified temporal smoothing of the oximeter waveform), trigger accuracy of 100% could be achieved with improved precision. To quantify the effect of the gating system on DE image quality, human observer tests were conducted to measure the magnitude of cardiac artifact under conditions of successful and unsuccessful diastolic gating. Six observers independently measured the artifact in 111 patient DE images. The data indicate that successful diastolic gating results in a statistically significant reduction (p < 0.001) in the magnitude of cardiac motion artifact, with residual artifact attributed primarily to gross patient motion.
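The two-regime triggering scheme can be sketched as follows. All timing constants here (the total fixed delay and the assumed position of diastole within the cardiac cycle) are hypothetical placeholders rather than the calibrated values from the study; the sketch only illustrates the logic of t_imp(HR) = 0 below the heart-rate threshold and a delay to the next beat above it.

```python
def implemented_delay(hr_bpm, fixed_delay_s=0.45, diastole_end_frac=0.55):
    """Illustrative two-regime trigger delay t_imp(HR) for diastolic gating.

    fixed_delay_s     : assumed sum of pulse-propagation, oximeter-processing
                        and imaging-system delays (hypothetical value).
    diastole_end_frac : assumed fraction of the cardiac cycle, measured from
                        the plethysmogram pulse, at which diastole ends
                        (hypothetical value).
    """
    beat_period = 60.0 / hr_bpm                     # seconds per heartbeat
    same_beat_window = diastole_end_frac * beat_period
    if fixed_delay_s <= same_beat_window:
        # At or below the implied HR threshold: expose on the same heartbeat.
        return 0.0
    # Above threshold: target the corresponding phase of the next heartbeat,
    # subtracting the delays that are incurred regardless.
    return beat_period + same_beat_window - fixed_delay_s

for hr in (50, 70, 90, 110):
    print(hr, "bpm ->", round(implemented_delay(hr), 3), "s")
```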
NASA Astrophysics Data System (ADS)
Paredes-Alvarez, Leonardo; Nusdeo, Daniel Anthony; Henry, Todd J.; Jao, Wei-Chun; Gies, Douglas R.; White, Russel; RECONS Team
2017-01-01
To understand fundamental aspects of stellar populations, astronomers need carefully vetted, volume-complete samples. In our K-KIDS effort, our goal is to survey a large sample of K dwarfs for their "kids", companions that may be stellar, brown dwarf, or planetary in nature. Four surveys for companions orbiting an initial set of 1048 K dwarfs with declinations between +30 and -30 have begun. Companions are being detected with separations less than 1 AU out to 10000 AU. Fortuitously, the combination of Hipparcos and Gaia DR1 astrometry with optical photometry from APASS and infrared photometry from 2MASS now allows us to create an effectively volume-complete sample of K dwarfs to a horizon of 50 pc. This sample facilitates rigorous studies of the luminosity and mass functions, as well as comprehensive mapping of the companions orbiting K dwarfs, that have never before been possible. Here we present two important results. First, we find that our initial sample of ~1000 K dwarfs can be expanded to 2000-3000 stars in what is an effectively volume-complete sample. This population is sufficiently large to provide superb statistics on the outcomes of star and planet formation processes. Second, initial results from our high-precision radial velocity survey of K dwarfs with the CHIRON spectrograph on the CTIO/SMARTS 1.5m reveal its short-term precision and indicate that stellar, brown dwarf and Jovian planet companions will be detectable. We present radial velocity curves for an initial sample of 8 K dwarfs with V = 7-10 using cross-correlation techniques on R=80,000 spectra, and illustrate the stability of CHIRON over hours, days, and weeks. Ultimately, the combination of all four surveys will provide an unprecedented portrait of K dwarfs and their kids. This effort has been supported by the NSF through grants AST-1412026 and AST-1517413, and via observations made possible by the SMARTS Consortium.
GPS/GLONASS Combined Precise Point Positioning with Receiver Clock Modeling
Wang, Fuhong; Chen, Xinghan; Guo, Fei
2015-01-01
Research has demonstrated that receiver clock modeling can reduce the correlation coefficients among the parameters of receiver clock bias, station height and zenith tropospheric delay. This paper introduces receiver clock modeling to GPS/GLONASS combined precise point positioning (PPP), aiming to better separate the receiver clock bias and station coordinates and therefore improve positioning accuracy. Firstly, the basic mathematical models, including the GPS/GLONASS observation equations, stochastic model, and receiver clock model, are briefly introduced. Then datasets from several IGS stations equipped with high-stability atomic clocks are used for kinematic PPP tests. To investigate the performance of PPP, including positioning accuracy and convergence time, one week (1–7 January 2014) of GPS/GLONASS data retrieved from these IGS stations is processed with different schemes. The results indicate that both positioning accuracy and convergence time benefit from the receiver clock modeling. This is particularly pronounced for the vertical component. RMS statistics show that the average improvement of three-dimensional positioning accuracy reaches up to 30%–40%. Sometimes, it even reaches over 60% for specific stations. Compared to the GPS-only PPP, solutions of the GPS/GLONASS combined PPP are much better whether or not the receiver clock offsets are modeled, indicating that the positioning accuracy and reliability are significantly improved with the additional GLONASS satellites in the case of an insufficient number of GPS satellites or poor geometry conditions. In addition to the receiver clock modeling, the impacts of different inter-system timing bias (ISB) models are investigated. For the case of a sufficient number of satellites with fairly good geometry, the PPP performance is not seriously affected by the ISB model due to the low correlation between the ISB and the other parameters. However, refinement of the ISB model weakens the correlation between the coordinates and the ISB estimates and finally enhances the PPP performance in the case of poor observation conditions. PMID:26134106
Kramers, Cornelis; Derijks, Hieronymus J.; Wensing, Michel; Wetzels, Jack F. M.
2015-01-01
Background The Modification of Diet in Renal Disease (MDRD) formula is widely used in clinical practice to assess the correct drug dose. This formula is based on serum creatinine levels, which may be influenced by the chronic disease itself or by its effects. We conducted a systematic review to determine the validity of the MDRD formula in specific patient populations with renal impairment: elderly, hospitalized and obese patients, patients with cardiovascular disease, cancer, chronic respiratory diseases, diabetes mellitus, liver cirrhosis and human immunodeficiency virus. Methods and Findings We searched for articles in Pubmed published from January 1999 through January 2014. Selection criteria were (1) patients with a glomerular filtration rate (GFR) < 60 ml/min/1.73 m2, (2) MDRD formula compared with a gold standard and (3) statistical analysis focused on bias, precision and/or accuracy. Data extraction was done by the first author and checked by a second author. A bias of 20% or less, a precision of 30% or less and an accuracy expressed as P30% of 80% or higher were indicators of the validity of the MDRD formula. In total we included 27 studies. The number of patients included ranged from 8 to 1831. The gold standard and measurement method used varied across the studies. For none of the specific patient populations did the studies provide sufficient evidence of the validity of the MDRD formula on all three parameters. For patients with diabetes mellitus and liver cirrhosis, hospitalized patients and elderly patients with moderate to severe renal impairment, we concluded that the MDRD formula is not valid. Limitations of the review are that it did not consider the method used to measure serum creatinine levels or the type of gold standard used. Conclusion In several specific patient populations with renal impairment the use of the MDRD formula is not valid or has uncertain validity. PMID:25741695
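For orientation, the sketch below shows one common re-expression of the 4-variable MDRD study equation for IDMS-standardized creatinine (the 175-coefficient form) together with one plausible operationalization of the review's three criteria (bias, precision and P30). The exact coefficients and the precise definitions of bias and precision used by the individual studies may differ, so treat this as an illustrative reading of the thresholds quoted above.

```python
import statistics

def mdrd_egfr(scr_mg_dl, age, female=False, black=False):
    """4-variable MDRD study equation (a commonly published re-expression).

    Returns eGFR in mL/min/1.73 m^2 from serum creatinine in mg/dL and age.
    """
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def validity_metrics(estimated, measured):
    """One plausible operationalization of the review's criteria:
    bias and precision as mean and SD of the percentage error, and P30 as
    the share of estimates within 30% of the gold-standard GFR."""
    pct_err = [100.0 * (e - m) / m for e, m in zip(estimated, measured)]
    bias = statistics.mean(pct_err)
    precision = statistics.stdev(pct_err)
    p30 = 100.0 * sum(abs(p) <= 30.0 for p in pct_err) / len(pct_err)
    return bias, precision, p30

print(round(mdrd_egfr(1.8, 72, female=True), 1))  # hypothetical patient
```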
NASA Astrophysics Data System (ADS)
Hu, Zhan; Lenting, Walther; van der Wal, Daphne; Bouma, Tjeerd
2015-04-01
Tidal flat morphology is continuously shaped by hydrodynamic forces, resulting in highly dynamic bed elevations. Knowledge of short-term bed-level changes is important both for understanding sediment transport processes and for assessing critical ecological processes such as vegetation recruitment chances on tidal flats. Due to the labour involved, manual discontinuous measurements lack the ability to continuously monitor bed-elevation changes. Existing methods for automated continuous monitoring of bed-level changes either lack vertical accuracy (e.g., Photo-Electronic Erosion Pin sensors and resistive rods) or are limited in spatial application because they rely on expensive technology (e.g., acoustic bed-level sensors). A method that provides sufficient accuracy at a reasonable cost is needed. In light of this, a high-accuracy sensor (2 mm) for continuously measuring short-term Surface-Elevation Dynamics (SED-sensor) was developed. This SED-sensor makes use of photovoltaic cells and operates stand-alone using an internal power supply and data logging system. The unit cost and the labour required in deployments are therefore reduced, which facilitates monitoring with a number of units. In this study, the performance of a group of SED-sensors was tested against data obtained with precise manual measurements using traditional Sediment Erosion Bars (SEB). An excellent agreement between the two methods was obtained, indicating the accuracy and precision of the SED-sensors. Furthermore, to demonstrate how the SED-sensors can be used for measuring short-term bed-level dynamics, two SED-sensors were deployed for 1 month at two sites with contrasting wave exposure conditions. Daily bed-level changes were obtained, including a severe storm erosion event. The difference in observed bed-level dynamics at the two sites was statistically explained by their different hydrodynamic conditions. Thus, the stand-alone SED-sensor can be applied to monitor sediment surface dynamics with high vertical and temporal resolution, which provides opportunities to pinpoint morphological responses to various forces in a range of environments (e.g. tidal flats, beaches, rivers and dunes).
Precision Measurement of the e+e− → Λc+Λ̄c− Cross Section Near Threshold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ablikim, M.; Achasov, M. N.; Ahmed, S.
2018-03-01
The cross section of the e+e− → Λc+Λ̄c− process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. The non-zero cross section near the Λc+Λ̄c− production threshold is cleared. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.
Precision Measurement of the e+e− → Λc+Λ̄c− Cross Section Near Threshold.
Ablikim, M; Achasov, M N; Ahmed, S; Albrecht, M; Alekseev, M; Amoroso, A; An, F F; An, Q; Bai, J Z; Bai, Y; Bakina, O; Baldini Ferroli, R; Ban, Y; Begzsuren, K; Bennett, D W; Bennett, J V; Berger, N; Bertani, M; Bettoni, D; Bianchi, F; Boger, E; Boyko, I; Briere, R A; Cai, H; Cai, X; Cakir, O; Calcaterra, A; Cao, G F; Cetin, S A; Chai, J; Chang, J F; Chelkov, G; Chen, G; Chen, H S; Chen, J C; Chen, M L; Chen, P L; Chen, S J; Chen, X R; Chen, Y B; Chu, X K; Cibinetto, G; Cossio, F; Dai, H L; Dai, J P; Dbeyssi, A; Dedovich, D; Deng, Z Y; Denig, A; Denysenko, I; Destefanis, M; De Mori, F; Ding, Y; Dong, C; Dong, J; Dong, L Y; Dong, M Y; Dou, Z L; Du, S X; Duan, P F; Fang, J; Fang, S S; Fang, Y; Farinelli, R; Fava, L; Fegan, S; Feldbauer, F; Felici, G; Feng, C Q; Fioravanti, E; Fritsch, M; Fu, C D; Gao, Q; Gao, X L; Gao, Y; Gao, Y G; Gao, Z; Garillon, B; Garzia, I; Gilman, A; Goetzen, K; Gong, L; Gong, W X; Gradl, W; Greco, M; Gu, M H; Gu, Y T; Guo, A Q; Guo, R P; Guo, Y P; Guskov, A; Haddadi, Z; Han, S; Hao, X Q; Harris, F A; He, K L; He, X Q; Heinsius, F H; Held, T; Heng, Y K; Holtmann, T; Hou, Z L; Hu, H M; Hu, J F; Hu, T; Hu, Y; Huang, G S; Huang, J S; Huang, X T; Huang, X Z; Huang, Z L; Hussain, T; Ikegami Andersson, W; Ji, Q; Ji, Q P; Ji, X B; Ji, X L; Jiang, X S; Jiang, X Y; Jiao, J B; Jiao, Z; Jin, D P; Jin, S; Jin, Y; Johansson, T; Julin, A; Kalantar-Nayestanaki, N; Kang, X S; Kavatsyuk, M; Ke, B C; Khan, T; Khoukaz, A; Kiese, P; Kliemt, R; Koch, L; Kolcu, O B; Kopf, B; Kornicer, M; Kuemmel, M; Kuhlmann, M; Kupsc, A; Kühn, W; Lange, J S; Lara, M; Larin, P; Lavezzi, L; Leithoff, H; Li, C; Li, Cheng; Li, D M; Li, F; Li, F Y; Li, G; Li, H B; Li, H J; Li, J C; Li, J W; Li, Jin; Li, K J; Li, Kang; Li, Ke; Li, Lei; Li, P L; Li, P R; Li, Q Y; Li, W D; Li, W G; Li, X L; Li, X N; Li, X Q; Li, Z B; Liang, H; Liang, Y F; Liang, Y T; Liao, G R; Libby, J; Lin, C X; Lin, D X; Liu, B; Liu, B J; Liu, C X; Liu, D; Liu, F H; Liu, Fang; Liu, Feng; Liu, H B; Liu, H L; Liu, H M; Liu, Huanhuan; Liu, Huihui; Liu, J B; Liu, J Y; Liu, K; Liu, K Y; Liu, Ke; Liu, L D; Liu, Q; Liu, S B; Liu, X; Liu, Y B; Liu, Z A; Liu, Zhiqing; Long, Y F; Lou, X C; Lu, H J; Lu, J G; Lu, Y; Lu, Y P; Luo, C L; Luo, M X; Luo, X L; Lusso, S; Lyu, X R; Ma, F C; Ma, H L; Ma, L L; Ma, M M; Ma, Q M; Ma, T; Ma, X N; Ma, X Y; Ma, Y M; Maas, F E; Maggiora, M; Malik, Q A; Mao, Y J; Mao, Z P; Marcello, S; Meng, Z X; Messchendorp, J G; Mezzadri, G; Min, J; Mitchell, R E; Mo, X H; Mo, Y J; Morales Morales, C; Muchnoi, N Yu; Muramatsu, H; Mustafa, A; Nefedov, Y; Nerling, F; Nikolaev, I B; Ning, Z; Nisar, S; Niu, S L; Niu, X Y; Olsen, S L; Ouyang, Q; Pacetti, S; Pan, Y; Papenbrock, M; Patteri, P; Pelizaeus, M; Pellegrino, J; Peng, H P; Peng, Z Y; Peters, K; Pettersson, J; Ping, J L; Ping, R G; Pitka, A; Poling, R; Prasad, V; Qi, H R; Qi, M; Qi, T Y; Qian, S; Qiao, C F; Qin, N; Qin, X S; Qin, Z H; Qiu, J F; Rashid, K H; Redmer, C F; Richter, M; Ripka, M; Rolo, M; Rong, G; Rosner, Ch; Sarantsev, A; Savrié, M; Schnier, C; Schoenning, K; Shan, W; Shan, X Y; Shao, M; Shen, C P; Shen, P X; Shen, X Y; Sheng, H Y; Shi, X; Song, J J; Song, W M; Song, X Y; Sosio, S; Sowa, C; Spataro, S; Sun, G X; Sun, J F; Sun, L; Sun, S S; Sun, X H; Sun, Y J; Sun, Y K; Sun, Y Z; Sun, Z J; Sun, Z T; Tan, Y T; Tang, C J; Tang, G Y; Tang, X; Tapan, I; Tiemens, M; Tsednee, B; Uman, I; Varner, G S; Wang, B; Wang, B L; Wang, D; Wang, D Y; Wang, Dan; Wang, K; Wang, L L; Wang, L S; Wang, M; Wang, Meng; Wang, P; Wang, P L; Wang, W P; Wang, X F; Wang, Y; Wang, Y D; Wang, Y F; 
Wang, Y Q; Wang, Z; Wang, Z G; Wang, Z Y; Wang, Zongyuan; Weber, T; Wei, D H; Wei, J H; Weidenkaff, P; Wen, S P; Wiedner, U; Wolke, M; Wu, L H; Wu, L J; Wu, Z; Xia, L; Xia, Y; Xiao, D; Xiao, Y J; Xiao, Z J; Xie, Y G; Xie, Y H; Xiong, X A; Xiu, Q L; Xu, G F; Xu, J J; Xu, L; Xu, Q J; Xu, Q N; Xu, X P; Yan, F; Yan, L; Yan, W B; Yan, W C; Yan, Y H; Yang, H J; Yang, H X; Yang, L; Yang, Y H; Yang, Y X; Yang, Yifan; Ye, M; Ye, M H; Yin, J H; You, Z Y; Yu, B X; Yu, C X; Yu, J S; Yuan, C Z; Yuan, Y; Yuncu, A; Zafar, A A; Zeng, Y; Zeng, Z; Zhang, B X; Zhang, B Y; Zhang, C C; Zhang, D H; Zhang, H H; Zhang, H Y; Zhang, J; Zhang, J L; Zhang, J Q; Zhang, J W; Zhang, J Y; Zhang, J Z; Zhang, K; Zhang, L; Zhang, S Q; Zhang, X Y; Zhang, Y; Zhang, Y H; Zhang, Y T; Zhang, Yang; Zhang, Yao; Zhang, Yu; Zhang, Z H; Zhang, Z P; Zhang, Z Y; Zhao, G; Zhao, J W; Zhao, J Y; Zhao, J Z; Zhao, Lei; Zhao, Ling; Zhao, M G; Zhao, Q; Zhao, S J; Zhao, T C; Zhao, Y B; Zhao, Z G; Zhemchugov, A; Zheng, B; Zheng, J P; Zheng, Y H; Zhong, B; Zhou, L; Zhou, Q; Zhou, X; Zhou, X K; Zhou, X R; Zhou, X Y; Zhu, A N; Zhu, J; Zhu, K; Zhu, K J; Zhu, S; Zhu, S H; Zhu, X L; Zhu, Y C; Zhu, Y S; Zhu, Z A; Zhuang, J; Zou, B S; Zou, J H
2018-03-30
The cross section of the e+e− → Λc+Λ̄c− process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. The nonzero cross section near the Λc+Λ̄c− production threshold is cleared. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form-factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.
Precision Measurement of the e+e− → Λc+Λ̄c− Cross Section Near Threshold
NASA Astrophysics Data System (ADS)
Ablikim, M.; Achasov, M. N.; Ahmed, S.; Albrecht, M.; Alekseev, M.; Amoroso, A.; An, F. F.; An, Q.; Bai, J. Z.; Bai, Y.; Bakina, O.; Baldini Ferroli, R.; Ban, Y.; Begzsuren, K.; Bennett, D. W.; Bennett, J. V.; Berger, N.; Bertani, M.; Bettoni, D.; Bianchi, F.; Boger, E.; Boyko, I.; Briere, R. A.; Cai, H.; Cai, X.; Cakir, O.; Calcaterra, A.; Cao, G. F.; Cetin, S. A.; Chai, J.; Chang, J. F.; Chelkov, G.; Chen, G.; Chen, H. S.; Chen, J. C.; Chen, M. L.; Chen, P. L.; Chen, S. J.; Chen, X. R.; Chen, Y. B.; Chu, X. K.; Cibinetto, G.; Cossio, F.; Dai, H. L.; Dai, J. P.; Dbeyssi, A.; Dedovich, D.; Deng, Z. Y.; Denig, A.; Denysenko, I.; Destefanis, M.; de Mori, F.; Ding, Y.; Dong, C.; Dong, J.; Dong, L. Y.; Dong, M. Y.; Dou, Z. L.; Du, S. X.; Duan, P. F.; Fang, J.; Fang, S. S.; Fang, Y.; Farinelli, R.; Fava, L.; Fegan, S.; Feldbauer, F.; Felici, G.; Feng, C. Q.; Fioravanti, E.; Fritsch, M.; Fu, C. D.; Gao, Q.; Gao, X. L.; Gao, Y.; Gao, Y. G.; Gao, Z.; Garillon, B.; Garzia, I.; Gilman, A.; Goetzen, K.; Gong, L.; Gong, W. X.; Gradl, W.; Greco, M.; Gu, M. H.; Gu, Y. T.; Guo, A. Q.; Guo, R. P.; Guo, Y. P.; Guskov, A.; Haddadi, Z.; Han, S.; Hao, X. Q.; Harris, F. A.; He, K. L.; He, X. Q.; Heinsius, F. H.; Held, T.; Heng, Y. K.; Holtmann, T.; Hou, Z. L.; Hu, H. M.; Hu, J. F.; Hu, T.; Hu, Y.; Huang, G. S.; Huang, J. S.; Huang, X. T.; Huang, X. Z.; Huang, Z. L.; Hussain, T.; Ikegami Andersson, W.; Ji, Q.; Ji, Q. P.; Ji, X. B.; Ji, X. L.; Jiang, X. S.; Jiang, X. Y.; Jiao, J. B.; Jiao, Z.; Jin, D. P.; Jin, S.; Jin, Y.; Johansson, T.; Julin, A.; Kalantar-Nayestanaki, N.; Kang, X. S.; Kavatsyuk, M.; Ke, B. C.; Khan, T.; Khoukaz, A.; Kiese, P.; Kliemt, R.; Koch, L.; Kolcu, O. B.; Kopf, B.; Kornicer, M.; Kuemmel, M.; Kuhlmann, M.; Kupsc, A.; Kühn, W.; Lange, J. S.; Lara, M.; Larin, P.; Lavezzi, L.; Leithoff, H.; Li, C.; Li, Cheng; Li, D. M.; Li, F.; Li, F. Y.; Li, G.; Li, H. B.; Li, H. J.; Li, J. C.; Li, J. W.; Li, Jin; Li, K. J.; Li, Kang; Li, Ke; Li, Lei; Li, P. L.; Li, P. R.; Li, Q. Y.; Li, W. D.; Li, W. G.; Li, X. L.; Li, X. N.; Li, X. Q.; Li, Z. B.; Liang, H.; Liang, Y. F.; Liang, Y. T.; Liao, G. R.; Libby, J.; Lin, C. X.; Lin, D. X.; Liu, B.; Liu, B. J.; Liu, C. X.; Liu, D.; Liu, F. H.; Liu, Fang; Liu, Feng; Liu, H. B.; Liu, H. L.; Liu, H. M.; Liu, Huanhuan; Liu, Huihui; Liu, J. B.; Liu, J. Y.; Liu, K.; Liu, K. Y.; Liu, Ke; Liu, L. D.; Liu, Q.; Liu, S. B.; Liu, X.; Liu, Y. B.; Liu, Z. A.; Liu, Zhiqing; Long, Y. F.; Lou, X. C.; Lu, H. J.; Lu, J. G.; Lu, Y.; Lu, Y. P.; Luo, C. L.; Luo, M. X.; Luo, X. L.; Lusso, S.; Lyu, X. R.; Ma, F. C.; Ma, H. L.; Ma, L. L.; Ma, M. M.; Ma, Q. M.; Ma, T.; Ma, X. N.; Ma, X. Y.; Ma, Y. M.; Maas, F. E.; Maggiora, M.; Malik, Q. A.; Mao, Y. J.; Mao, Z. P.; Marcello, S.; Meng, Z. X.; Messchendorp, J. G.; Mezzadri, G.; Min, J.; Mitchell, R. E.; Mo, X. H.; Mo, Y. J.; Morales Morales, C.; Muchnoi, N. Yu.; Muramatsu, H.; Mustafa, A.; Nefedov, Y.; Nerling, F.; Nikolaev, I. B.; Ning, Z.; Nisar, S.; Niu, S. L.; Niu, X. Y.; Olsen, S. L.; Ouyang, Q.; Pacetti, S.; Pan, Y.; Papenbrock, M.; Patteri, P.; Pelizaeus, M.; Pellegrino, J.; Peng, H. P.; Peng, Z. Y.; Peters, K.; Pettersson, J.; Ping, J. L.; Ping, R. G.; Pitka, A.; Poling, R.; Prasad, V.; Qi, H. R.; Qi, M.; Qi, T. Y.; Qian, S.; Qiao, C. F.; Qin, N.; Qin, X. S.; Qin, Z. H.; Qiu, J. F.; Rashid, K. H.; Redmer, C. F.; Richter, M.; Ripka, M.; Rolo, M.; Rong, G.; Rosner, Ch.; Sarantsev, A.; Savrié, M.; Schnier, C.; Schoenning, K.; Shan, W.; Shan, X. Y.; Shao, M.; Shen, C. P.; Shen, P. X.; Shen, X. Y.; Sheng, H. Y.; Shi, X.; Song, J. 
J.; Song, W. M.; Song, X. Y.; Sosio, S.; Sowa, C.; Spataro, S.; Sun, G. X.; Sun, J. F.; Sun, L.; Sun, S. S.; Sun, X. H.; Sun, Y. J.; Sun, Y. K.; Sun, Y. Z.; Sun, Z. J.; Sun, Z. T.; Tan, Y. T.; Tang, C. J.; Tang, G. Y.; Tang, X.; Tapan, I.; Tiemens, M.; Tsednee, B.; Uman, I.; Varner, G. S.; Wang, B.; Wang, B. L.; Wang, D.; Wang, D. Y.; Wang, Dan; Wang, K.; Wang, L. L.; Wang, L. S.; Wang, M.; Wang, Meng; Wang, P.; Wang, P. L.; Wang, W. P.; Wang, X. F.; Wang, Y.; Wang, Y. D.; Wang, Y. F.; Wang, Y. Q.; Wang, Z.; Wang, Z. G.; Wang, Z. Y.; Wang, Zongyuan; Weber, T.; Wei, D. H.; Wei, J. H.; Weidenkaff, P.; Wen, S. P.; Wiedner, U.; Wolke, M.; Wu, L. H.; Wu, L. J.; Wu, Z.; Xia, L.; Xia, Y.; Xiao, D.; Xiao, Y. J.; Xiao, Z. J.; Xie, Y. G.; Xie, Y. H.; Xiong, X. A.; Xiu, Q. L.; Xu, G. F.; Xu, J. J.; Xu, L.; Xu, Q. J.; Xu, Q. N.; Xu, X. P.; Yan, F.; Yan, L.; Yan, W. B.; Yan, W. C.; Yan, Y. H.; Yang, H. J.; Yang, H. X.; Yang, L.; Yang, Y. H.; Yang, Y. X.; Yang, Yifan; Ye, M.; Ye, M. H.; Yin, J. H.; You, Z. Y.; Yu, B. X.; Yu, C. X.; Yu, J. S.; Yuan, C. Z.; Yuan, Y.; Yuncu, A.; Zafar, A. A.; Zeng, Y.; Zeng, Z.; Zhang, B. X.; Zhang, B. Y.; Zhang, C. C.; Zhang, D. H.; Zhang, H. H.; Zhang, H. Y.; Zhang, J.; Zhang, J. L.; Zhang, J. Q.; Zhang, J. W.; Zhang, J. Y.; Zhang, J. Z.; Zhang, K.; Zhang, L.; Zhang, S. Q.; Zhang, X. Y.; Zhang, Y.; Zhang, Y. H.; Zhang, Y. T.; Zhang, Yang; Zhang, Yao; Zhang, Yu; Zhang, Z. H.; Zhang, Z. P.; Zhang, Z. Y.; Zhao, G.; Zhao, J. W.; Zhao, J. Y.; Zhao, J. Z.; Zhao, Lei; Zhao, Ling; Zhao, M. G.; Zhao, Q.; Zhao, S. J.; Zhao, T. C.; Zhao, Y. B.; Zhao, Z. G.; Zhemchugov, A.; Zheng, B.; Zheng, J. P.; Zheng, Y. H.; Zhong, B.; Zhou, L.; Zhou, Q.; Zhou, X.; Zhou, X. K.; Zhou, X. R.; Zhou, X. Y.; Zhu, A. N.; Zhu, J.; Zhu, K.; Zhu, K. J.; Zhu, S.; Zhu, S. H.; Zhu, X. L.; Zhu, Y. C.; Zhu, Y. S.; Zhu, Z. A.; Zhuang, J.; Zou, B. S.; Zou, J. H.; Besiii Collaboration
2018-03-01
The cross section of the e+e− → Λc+Λ̄c− process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. The nonzero cross section near the Λc+Λ̄c− production threshold is cleared. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form-factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.
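When a single combined uncertainty is wanted for ratios quoted with separate statistical and systematic uncertainties, such as the |GE/GM| values above, a common convention is to add the two in quadrature, assuming they are independent; this is a reporting choice, not something stated in the abstract.

```python
import math

def total_uncertainty(stat, syst):
    """Quadrature sum of statistical and systematic uncertainties,
    assuming the two sources are independent (a common reporting convention)."""
    return math.sqrt(stat ** 2 + syst ** 2)

# |GE/GM| values quoted in the abstract above.
for ratio, stat, syst in [(1.14, 0.14, 0.07), (1.23, 0.05, 0.03)]:
    print(f"|GE/GM| = {ratio} +/- {total_uncertainty(stat, syst):.2f} (combined)")
```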
Precision Measurement of the e+e− → Λc+Λ̄c− Cross Section Near Threshold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ablikim, M.; Achasov, M. N.; Ahmed, S.
The cross section of the e+e− → Λc+Λ̄c− process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. The non-zero cross section near the Λc+Λ̄c− production threshold is cleared. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, B.; Erni, W.; Krusche, B.
Simulation results for future measurements of electromagnetic proton form factors at P̄ANDA (FAIR) within the PandaRoot software framework are reported. The statistical precision with which the proton form factors can be determined is estimated. The signal channel p̄p → e+e− is studied on the basis of two different but consistent procedures. The suppression of the main background channel, i.e. p̄p → π+π−, is studied. Furthermore, the background versus signal efficiency, and the statistical and systematic uncertainties on the extracted proton form factors, are evaluated using two different procedures. The results are consistent with those of a previous simulation study using an older, simplified framework. Furthermore, a slightly better precision is achieved in the PandaRoot study over a large range of momentum transfer, assuming the nominal beam conditions and detector performance.
Singh, B.; Erni, W.; Krusche, B.; ...
2016-10-28
Simulation results for future measurements of electromagnetic proton form factors at P̄ANDA (FAIR) within the PandaRoot software framework are reported. The statistical precision with which the proton form factors can be determined is estimated. The signal channel p̄p → e+e− is studied on the basis of two different but consistent procedures. The suppression of the main background channel, i.e. p̄p → π+π−, is studied. Furthermore, the background versus signal efficiency, and the statistical and systematic uncertainties on the extracted proton form factors, are evaluated using two different procedures. The results are consistent with those of a previous simulation study using an older, simplified framework. Furthermore, a slightly better precision is achieved in the PandaRoot study over a large range of momentum transfer, assuming the nominal beam conditions and detector performance.
Precision Cosmology: The First Half Million Years
NASA Astrophysics Data System (ADS)
Jones, Bernard J. T.
2017-06-01
Cosmology seeks to characterise our Universe in terms of models based on well-understood and tested physics. Today we know our Universe with a precision that once would have been unthinkable. This book develops the entire mathematical, physical and statistical framework within which this has been achieved. It tells the story of how we arrive at our profound conclusions, starting from the early twentieth century and following developments up to the latest data analysis of big astronomical datasets. It provides an enlightening description of the mathematical, physical and statistical basis for understanding and interpreting the results of key space- and ground-based data. Subjects covered include general relativity, cosmological models, the inhomogeneous Universe, physics of the cosmic background radiation, and methods and results of data analysis. Extensive online supplementary notes, exercises, teaching materials, and exercises in Python make this the perfect companion for researchers, teachers and students in physics, mathematics, and astrophysics.
Precision Measurement of the e+e− → Λc+Λ̄c− Cross Section Near Threshold
Ablikim, M.; Achasov, M. N.; Ahmed, S.; ...
2018-03-29
The cross section of the e+e− → Λc+Λ̄c− process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. The non-zero cross section near the Λc+Λ̄c− production threshold is cleared. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.
Methodologies for the Statistical Analysis of Memory Response to Radiation
NASA Astrophysics Data System (ADS)
Bosser, Alexandre L.; Gupta, Viyas; Tsiligiannis, Georgios; Frost, Christopher D.; Zadeh, Ali; Jaatinen, Jukka; Javanainen, Arto; Puchner, Helmut; Saigné, Frédéric; Virtanen, Ari; Wrobel, Frédéric; Dilillo, Luigi
2016-08-01
Methodologies are proposed for in-depth statistical analysis of Single Event Upset data. The motivation for using these methodologies is to obtain precise information on the intrinsic defects and weaknesses of the tested devices, and to gain insight on their failure mechanisms, at no additional cost. The case study is a 65 nm SRAM irradiated with neutrons, protons and heavy ions. This publication is an extended version of a previous study [1].
The role of reference in cross-situational word learning.
Wang, Felix Hao; Mintz, Toben H
2018-01-01
Word learning involves massive ambiguity, since in a particular encounter with a novel word, there are an unlimited number of potential referents. One proposal for how learners surmount the problem of ambiguity is that learners use cross-situational statistics to constrain the ambiguity: When a word and its referent co-occur across multiple situations, learners will associate the word with the correct referent. Yu and Smith (2007) propose that these co-occurrence statistics are sufficient for word-to-referent mapping. Alternative accounts hold that co-occurrence statistics alone are insufficient to support learning, and that learners are further guided by knowledge that words are referential (e.g., Waxman & Gelman, 2009). However, no behavioral word learning studies we are aware of explicitly manipulate subjects' prior assumptions about the role of the words in the experiments in order to test the influence of these assumptions. In this study, we directly test whether, when faced with referential ambiguity, co-occurrence statistics are sufficient for word-to-referent mappings in adult word-learners. Across a series of cross-situational learning experiments, we varied the degree to which there was support for the notion that the words were referential. At the same time, the statistical information about the words' meanings was held constant. When we overrode support for the notion that words were referential, subjects failed to learn the word-to-referent mappings, but otherwise they succeeded. Thus, cross-situational statistics were useful only when learners had the goal of discovering mappings between words and referents. We discuss the implications of these results for theories of word learning in children's language acquisition. Copyright © 2017 Elsevier B.V. All rights reserved.
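A minimal sketch of the co-occurrence-counting idea under discussion (not Yu and Smith's actual model): tally word-referent co-occurrences across individually ambiguous situations and map each word to the referent it co-occurred with most often. The toy words and referents are invented for illustration.

```python
from collections import defaultdict

def cross_situational_map(situations):
    """Tally word-referent co-occurrences across situations and map each
    word to the referent it co-occurred with most often."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, referents in situations:
        for w in words:
            for r in referents:
                counts[w][r] += 1
    return {w: max(refs, key=refs.get) for w, refs in counts.items()}

# Each situation pairs spoken words with visible referents; any single
# situation is ambiguous, but the statistics across situations are not.
situations = [
    (["bosa", "gasser"], ["dog", "ball"]),
    (["bosa", "manu"],   ["dog", "cup"]),
    (["gasser", "manu"], ["ball", "cup"]),
]
print(cross_situational_map(situations))  # {'bosa': 'dog', 'gasser': 'ball', 'manu': 'cup'}
```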
Touch Precision Modulates Visual Bias.
Misceo, Giovanni F; Jones, Maurice D
2018-01-01
The sensory precision hypothesis holds that different seen and felt cues about the size of an object resolve themselves in favor of the more reliable modality. To examine this precision hypothesis, 60 college students were asked to look at one size while manually exploring another unseen size either with their bare fingers or, to lessen the reliability of touch, with their fingers sleeved in rigid tubes. Afterwards, the participants estimated either the seen size or the felt size by finding a match from a visual display of various sizes. Results showed that the seen size biased the estimates of the felt size when the reliability of touch decreased. This finding supports the interaction between touch reliability and visual bias predicted by statistically optimal models of sensory integration.
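The "statistically optimal models of sensory integration" invoked here are usually formalized as reliability-weighted (inverse-variance) cue combination, in which the less reliable cue receives less weight. The sketch below illustrates that logic with hypothetical size estimates and noise variances; the specific numbers are not from the study.

```python
def integrate_cues(x_vision, var_vision, x_touch, var_touch):
    """Reliability-weighted (inverse-variance) combination of two size cues."""
    w_v = (1.0 / var_vision) / (1.0 / var_vision + 1.0 / var_touch)
    w_t = 1.0 - w_v
    estimate = w_v * x_vision + w_t * x_touch
    combined_var = 1.0 / (1.0 / var_vision + 1.0 / var_touch)
    return estimate, combined_var, w_v

# Hypothetical sizes (mm) and noise variances: sleeving the fingers raises
# the variance of touch, so the visual cue pulls the felt-size estimate more.
print(integrate_cues(50.0, 4.0, 40.0, 4.0))   # bare fingers: equal weights
print(integrate_cues(50.0, 4.0, 40.0, 16.0))  # sleeved: vision dominates
```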
van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald
2017-12-04
Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, a standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for different numbers of GPs included in the dataset and for different frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that point, precision continued to improve, but the gain from each additional GP became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the measurement of hours worked each week by GPs varied strongly with the number of GPs included and the frequency of measurements per GP during the week measured. The best balance between both dimensions will depend upon different circumstances, such as the target group and the budget available.
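A simulation in the spirit of the one described, with entirely hypothetical parameters: each GP's true weekly hours vary around a population mean (between-GP fluctuation), each time-sampling measurement adds further noise (within-GP fluctuation), and the 95% CI half-width for the estimated mean is read off the simulated sampling distribution for a given number of GPs and measurements per GP.

```python
import random
import statistics

def ci_half_width(n_gps, n_measurements, true_mean=45.0,
                  sd_between=8.0, sd_within=12.0, n_sim=500, seed=1):
    """Simulated 95% CI half-width (hours/week) for mean weekly working time.

    sd_between models between-GP fluctuation, sd_within the fluctuation of
    individual time-sampling measurements; all values are hypothetical.
    """
    random.seed(seed)
    means = []
    for _ in range(n_sim):
        sample = []
        for _ in range(n_gps):
            gp_mean = random.gauss(true_mean, sd_between)
            # average of this GP's noisy time-sampling measurements
            obs = statistics.mean(random.gauss(gp_mean, sd_within)
                                  for _ in range(n_measurements))
            sample.append(obs)
        means.append(statistics.mean(sample))
    means.sort()
    lo, hi = means[int(0.025 * n_sim)], means[int(0.975 * n_sim)]
    return (hi - lo) / 2.0

# More GPs or more measurements per GP both narrow the interval.
for n_gps in (10, 50, 100):
    print(n_gps, "GPs:", round(ci_half_width(n_gps, n_measurements=40), 2), "h")
```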
NASA Astrophysics Data System (ADS)
O'Neill, P.
Accurate knowledge of the interplanetary Galactic Cosmic Ray (GCR) environment is critical to planning and operating manned space flight to the moon and beyond. In the early 1990s Badhwar and O'Neill developed a GCR model based on balloon and satellite data from 1954 to 1992. This model accurately accounts for solar modulation of each element (hydrogen through iron) by propagating the Local Interplanetary Spectrum (LIS) of each element through the heliosphere, solving the Fokker-Planck diffusion, convection, energy-loss boundary value problem. A single value of the deceleration parameter describes the modulation of each of the elements and determines the GCR energy spectrum at any distance from the sun for a given level of solar cycle modulation. Since August 1997 the Advanced Composition Explorer (ACE), stationed at the Earth-Sun L1 libration point (about 1.5 million km from Earth), has provided GCR energy spectra for boron through nickel. The Cosmic Ray Isotope Spectrometer (CRIS) provides "quiet time" spectra in the range of highest modulation, ~50-500 MeV/nucleon. The collection power of CRIS is much larger than that of any previous satellite or balloon GCR instrument: 250 cm²·sr compared to less than 10 cm²·sr. These new data were used to update the original Badhwar-O'Neill model and greatly improve the interplanetary GCR prediction accuracy. When the new, highly precise ACE CRIS data were analyzed, it became obvious that the LIS spectrum for each element precisely fits a very simple analytical energy power law that was suggested by Leonard Fisk over 30 years ago. The updated Badhwar-O'Neill model is shown to be accurate to within 5% for elements, such as oxygen, that have sufficient abundance that over 1000 ions are captured in each energy bin within a 30-day period. The paper clearly demonstrates the statistical relationship between the number of ions captured by the instrument in a given time and the precision of the model for each element. This is a significant model upgrade that should provide interplanetary mission planners with highly accurate GCR environment data for radiation protection for astronauts and radiation hardness assurance for electronic equipment.
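The stated link between the number of ions captured per energy bin and the model's precision follows the generic Poisson counting argument: the relative statistical uncertainty on N counts is roughly 1/√N, so more than 1000 ions per bin corresponds to counting fluctuations of only a few percent. A minimal illustration:

```python
import math

def relative_counting_uncertainty(n_ions):
    """Poisson counting statistics: relative 1-sigma uncertainty ~ 1/sqrt(N)."""
    return 1.0 / math.sqrt(n_ions)

for n in (100, 1000, 10000):
    print(f"{n:>6} ions per bin -> ~{100 * relative_counting_uncertainty(n):.1f}% statistical scatter")
```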
Krahenbuhl, Jason T; Cho, Seok-Hwan; Irelan, Jon; Bansal, Naveen K
2016-08-01
Little peer-reviewed information is available regarding the accuracy and precision of the occlusal contact reproduction of digitally mounted stereolithographic casts. The purpose of this in vitro study was to evaluate the accuracy and precision of occlusal contacts among stereolithographic casts mounted by digital occlusal registrations. Four complete anatomic dentoforms were arbitrarily mounted on a semi-adjustable articulator in maximal intercuspal position and served as the 4 different simulated patients (SP). A total of 60 digital impressions and digital interocclusal registrations were made with a digital intraoral scanner to fabricate 15 sets of mounted stereolithographic (SLA) definitive casts for each dentoform. After receiving a total of 60 SLA casts, polyvinyl siloxane (PVS) interocclusal records were made for each set. The occlusal contacts for each set of SLA casts were measured by recording the amount of light transmitted through the interocclusal records. To evaluate the accuracy between the SP and their respective SLA casts, the areas of actual contact (AC) and near contact (NC) were calculated. For precision analysis, the coefficient of variation (CoV) was used. The data was analyzed with t tests for accuracy and the McKay and Vangel test for precision (α=.05). The accuracy analysis showed a statistically significant difference between the SP and the SLA cast of each dentoform (P<.05). For the AC in all dentoforms, a significant increase was found in the areas of actual contact of SLA casts compared with the contacts present in the SP (P<.05). Conversely, for the NC in all dentoforms, a significant decrease was found in the occlusal contact areas of the SLA casts compared with the contacts in the SP (P<.05). The precision analysis demonstrated the different CoV values between AC (5.8 to 8.8%) and NC (21.4 to 44.6%) of digitally mounted SLA casts, indicating that the overall precision of the SLA cast was low. For the accuracy evaluation, statistically significant differences were found between the occlusal contacts of all digitally mounted SLA casts groups, with an increase in AC values and a decrease in NC values. For the precision assessment, the CoV values of the AC and NC showed the digitally articulated cast's inability to reproduce the uniform occlusal contacts. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
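The precision analysis above rests on the coefficient of variation of repeated measurements; a small sketch with hypothetical repeated contact-area values:

```python
import statistics

def coefficient_of_variation(values):
    """CoV (%) = 100 * standard deviation / mean of repeated measurements."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeated actual-contact (AC) area measurements (mm^2)
# for one dentoform across several digitally mounted SLA casts.
ac_areas = [41.2, 43.5, 39.8, 42.1, 40.7]
print(f"AC CoV = {coefficient_of_variation(ac_areas):.1f}%")
```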
Gaven, Jr., Joseph V.; Bak, Chan S.
1983-01-01
Minute, durable, plate-like thermal indicators are employed for precision measurement of static and dynamic temperatures of well drilling fluids. The indicators are small enough and sufficiently durable to be circulated in the well with drilling fluids during the drilling operation. The indicators include a heat-resistant indicating layer, a coacting meltable solid component and a retainer body which serves to unitize each indicator and which may carry permanent indicator-identifying indicia. The indicators are recovered from the drilling fluid at ground level by known techniques.
2010-03-01
sufficient replications often lead to models that lack precision in error estimation and thus imprecision in corresponding conclusions. This work develops, examines and tests methodologies for analyzing test results from split-plot designs. In particular, this work determines the applicability...
1979-04-01
crosshead of the piston assembly. Shock transients at this location cause demagnetization of the magnet. This is being alleviated by installation of magnets... substantial structure, such as bulkheads with edge caps. Second, the wire-cut foam core for the wing could not be sufficiently precise to prevent the... used for... characterize the power potential, fuel consumption, weight, bulk, and adaptability to closed-loop control of candidate carburetion systems to be employed with
Fabrication of precision glass shells by joining glass rods
Gac, Frank D.; Blake, Rodger D.; Day, Delbert E.; Haggerty, John S.
1988-01-01
A method for making uniform spherical shells. The present invention allows niform hollow spheres to be made by first making a void in a body of material. The material is heated so that the viscosity is sufficiently low so that the surface tension will transform the void into a bubble. The bubble is allowed to rise in the body until it is spherical. The excess material is removed from around the void to form a spherical shell with a uniform outside diameter.
1980-12-31
surfaces. Reactions involving the Pt(0)-triphenylphosphine complexes Pt(PPh3)n, where n = 2, 3, 4, have been shown to have precise analogues on Pt... [12], the triphenylphosphine (PPh3) group is modeled by the simpler but chemically similar phosphine (PH3) group. The appropriate Pt-P bond distances... typically refractory oxides) are of sufficient magnitude as to suggest significant chemical and electronic modifications of the metal at the metal-support
1984-06-25
Scientific Terms: the proper authorized terms that identify the major concept of the research and are sufficiently specific and precise to be used as index... ended terms written in descriptor form for those subjects for which no descriptor exists. (c). COSATI Field/Group. Field and Group assignments are to be... taken from the 1964 COSATI Subject Category List. Since the majority of documents are multidisciplinary in nature, the primary Field/Group assignment
Radio Frequency Microelectromechanical Systems [Book Chapter Manuscript
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordquist, Christopher; Olsson, Roy H.
2014-12-15
Radio frequency microelectromechanical system (RF MEMS) devices are microscale devices that achieve superior performance relative to other technologies by taking advantage of the accuracy, precision, materials, and miniaturization available through microfabrication. To do this, these devices use their mechanical and electrical properties to perform a specific RF electrical function such as switching, transmission, or filtering. RF MEMS has been a popular area of research since the early 1990s, and within the last several years, the technology has matured sufficiently for commercialization and use in commercial market systems.
Midrapidity Neutral-Pion Production in Proton-Proton Collisions at √(s)=200 GeV
NASA Astrophysics Data System (ADS)
Adler, S. S.; Afanasiev, S.; Aidala, C.; Ajitanand, N. N.; Akiba, Y.; Alexander, J.; Amirikas, R.; Aphecetche, L.; Aronson, S. H.; Averbeck, R.; Awes, T. C.; Azmoun, R.; Babintsev, V.; Baldisseri, A.; Barish, K. N.; Barnes, P. D.; Bassalleck, B.; Bathe, S.; Batsouli, S.; Baublis, V.; Bazilevsky, A.; Belikov, S.; Berdnikov, Y.; Bhagavatula, S.; Boissevain, J. G.; Borel, H.; Borenstein, S.; Brooks, M. L.; Brown, D. S.; Bruner, N.; Bucher, D.; Buesching, H.; Bumazhnov, V.; Bunce, G.; Burward-Hoy, J. M.; Butsyk, S.; Camard, X.; Chai, J.-S.; Chand, P.; Chang, W. C.; Chernichenko, S.; Chi, C. Y.; Chiba, J.; Chiu, M.; Choi, I. J.; Choi, J.; Choudhury, R. K.; Chujo, T.; Cianciolo, V.; Cobigo, Y.; Cole, B. A.; Constantin, P.; D'Enterria, D. G.; David, G.; Delagrange, H.; Denisov, A.; Deshpande, A.; Desmond, E. J.; Dietzsch, O.; Drapier, O.; Drees, A.; Drees, K. A.; Du Rietz, R.; Durum, A.; Dutta, D.; Efremenko, Y. V.; El Chenawi, K.; Enokizono, A.; En'yo, H.; Esumi, S.; Ewell, L.; Fields, D. E.; Fleuret, F.; Fokin, S. L.; Fox, B. D.; Fraenkel, Z.; Frantz, J. E.; Franz, A.; Frawley, A. D.; Fung, S.-Y.; Garpman, S.; Ghosh, T. K.; Glenn, A.; Gogiberidze, G.; Gonin, M.; Gosset, J.; Goto, Y.; Granier de Cassagnac, R.; Grau, N.; Greene, S. V.; Grosse Perdekamp, M.; Guryn, W.; Gustafsson, H.-Å.; Hachiya, T.; Haggerty, J. S.; Hamagaki, H.; Hansen, A. G.; Hartouni, E. P.; Harvey, M.; Hayano, R.; He, X.; Heffner, M.; Hemmick, T. K.; Heuser, J. M.; Hibino, M.; Hill, J. C.; Holzmann, W.; Homma, K.; Hong, B.; Hoover, A.; Ichihara, T.; Ikonnikov, V. V.; Imai, K.; Isenhower, D.; Ishihara, M.; Issah, M.; Isupov, A.; Jacak, B. V.; Jang, W. Y.; Jeong, Y.; Jia, J.; Jinnouchi, O.; Johnson, B. M.; Johnson, S. C.; Joo, K. S.; Jouan, D.; Kametani, S.; Kamihara, N.; Kang, J. H.; Kapoor, S. S.; Katou, K.; Kelly, S.; Khachaturov, B.; Khanzadeev, A.; Kikuchi, J.; Kim, D. H.; Kim, D. J.; Kim, D. W.; Kim, E.; Kim, G.-B.; Kim, H. J.; Kistenev, E.; Kiyomichi, A.; Kiyoyama, K.; Klein-Boesing, C.; Kobayashi, H.; Kochenda, L.; Kochetkov, V.; Koehler, D.; Kohama, T.; Kopytine, M.; Kotchetkov, D.; Kozlov, A.; Kroon, P. J.; Kuberg, C. H.; Kurita, K.; Kuroki, Y.; Kweon, M. J.; Kwon, Y.; Kyle, G. S.; Lacey, R.; Ladygin, V.; Lajoie, J. G.; Lebedev, A.; Leckey, S.; Lee, D. M.; Lee, S.; Leitch, M. J.; Li, X. H.; Lim, H.; Litvinenko, A.; Liu, M. X.; Liu, Y.; Maguire, C. F.; Makdisi, Y. I.; Malakhov, A.; Manko, V. I.; Mao, Y.; Martinez, G.; Marx, M. D.; Masui, H.; Matathias, F.; Matsumoto, T.; McGaughey, P. L.; Melnikov, E.; Messer, F.; Miake, Y.; Milan, J.; Miller, T. E.; Milov, A.; Mioduszewski, S.; Mischke, R. E.; Mishra, G. C.; Mitchell, J. T.; Mohanty, A. K.; Morrison, D. P.; Moss, J. M.; Mühlbacher, F.; Mukhopadhyay, D.; Muniruzzaman, M.; Murata, J.; Nagamiya, S.; Nagle, J. L.; Nakamura, T.; Nandi, B. K.; Nara, M.; Newby, J.; Nilsson, P.; Nyanin, A. S.; Nystrand, J.; O'Brien, E.; Ogilvie, C. A.; Ohnishi, H.; Ojha, I. D.; Okada, K.; Ono, M.; Onuchin, V.; Oskarsson, A.; Otterlund, I.; Oyama, K.; Ozawa, K.; Pal, D.; Palounek, A. P.; Pantuev, V. S.; Papavassiliou, V.; Park, J.; Parmar, A.; Pate, S. F.; Peitzmann, T.; Peng, J.-C.; Peresedov, V.; Pinkenburg, C.; Pisani, R. P.; Plasil, F.; Purschke, M. L.; Purwar, A. K.; Rak, J.; Ravinovich, I.; Read, K. F.; Reuter, M.; Reygers, K.; Riabov, V.; Riabov, Y.; Roche, G.; Romana, A.; Rosati, M.; Rosnet, P.; Ryu, S. S.; Sadler, M. E.; Saito, N.; Sakaguchi, T.; Sakai, M.; Sakai, S.; Samsonov, V.; Sanfratello, L.; Santo, R.; Sato, H. 
D.; Sato, S.; Sawada, S.; Schutz, Y.; Semenov, V.; Seto, R.; Shaw, M. R.; Shea, T. K.; Shibata, T.-A.; Shigaki, K.; Shiina, T.; Silva, C. L.; Silvermyr, D.; Sim, K. S.; Singh, C. P.; Singh, V.; Sivertz, M.; Soldatov, A.; Soltz, R. A.; Sondheim, W. E.; Sorensen, S. P.; Sourikova, I. V.; Staley, F.; Stankus, P. W.; Stenlund, E.; Stepanov, M.; Ster, A.; Stoll, S. P.; Sugitate, T.; Sullivan, J. P.; Takagui, E. M.; Taketani, A.; Tamai, M.; Tanaka, K. H.; Tanaka, Y.; Tanida, K.; Tannenbaum, M. J.; Tarján, P.; Tepe, J. D.; Thomas, T. L.; Tojo, J.; Torii, H.; Towell, R. S.; Tserruya, I.; Tsuruoka, H.; Tuli, S. K.; Tydesjö, H.; Tyurin, N.; van Hecke, H. W.; Velkovska, J.; Velkovsky, M.; Villatte, L.; Vinogradov, A. A.; Volkov, M. A.; Vznuzdaev, E.; Wang, X. R.; Watanabe, Y.; White, S. N.; Wohn, F. K.; Woody, C. L.; Xie, W.; Yang, Y.; Yanovich, A.; Yokkaichi, S.; Young, G. R.; Yushmanov, I. E.; Zajc, W. A.; Zhang, C.; Zhou, S.; Zolin, L.
2003-12-01
The invariant differential cross section for inclusive neutral-pion production in p+p collisions at √(s)=200 GeV has been measured at midrapidity (|η|<0.35) over the range 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ling, J. F., E-mail: josefinaf.ling@usc.es
This paper presents the first published orbits and masses for nine visual double stars: WDS 00149-3209 (B 1024), WDS 01006+4719 (MAD 1), WDS 03130+4417 (STT 51), WDS 04357+3944 (HU 1084), WDS 19083+2706 (HO 98 AB), WDS 19222-0735 (A 102 AB), WDS 20524+2008 (HO 144), WDS 21051+0757 (HDS 3004 AB), and WDS 22202+2931 (BU 1216). Masses were calculated from the updated Hipparcos parallax data when available and sufficiently precise, or from dynamical parallaxes otherwise. Other physical and orbital properties are also discussed.
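The mass determination described relies on Kepler's third law for visual binaries: with the angular semimajor axis a and the parallax π in arcseconds and the period P in years, the total system mass in solar masses is a³/(π³P²). The orbital elements in the example are hypothetical, not values from the paper.

```python
def total_mass_solar(a_arcsec, parallax_arcsec, period_years):
    """Kepler's third law for a visual binary: total mass in solar masses,
    with the angular semimajor axis and the parallax in arcseconds."""
    return a_arcsec ** 3 / (parallax_arcsec ** 3 * period_years ** 2)

# Hypothetical orbital elements and Hipparcos-style parallax:
# a = 0.4", parallax = 20 mas (50 pc), P = 60 yr.
print(f"M1 + M2 = {total_mass_solar(0.4, 0.020, 60.0):.2f} M_sun")
```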
Statistical and Economic Techniques for Site-specific Nematode Management.
Liu, Zheng; Griffin, Terry; Kirkpatrick, Terrence L
2014-03-01
Recent advances in precision agriculture technologies and spatial statistics allow realistic, site-specific estimation of nematode damage to field crops and provide a platform for the site-specific delivery of nematicides within individual fields. This paper reviews the spatial statistical techniques that model correlations among neighboring observations and develop a spatial economic analysis to determine the potential of site-specific nematicide application. The spatial econometric methodology applied in the context of site-specific crop yield response contributes to closing the gap between data analysis and realistic site-specific nematicide recommendations and helps to provide a practical method of site-specifically controlling nematodes.
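As one concrete example of the spatial statistics alluded to, the sketch below computes Moran's I, a standard measure of spatial autocorrelation, for hypothetical nematode counts on neighbouring plots. This is only an illustrative diagnostic, not the spatial econometric yield-response model the review discusses.

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I spatial autocorrelation: I = (n/S0) * z'Wz / z'z,
    where z are mean-centred values and S0 is the sum of all weights."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()
    return (len(x) / w.sum()) * (z @ w @ z) / (z @ z)

# Four plots along a transect; each plot is a neighbour of the adjacent ones.
# Counts are hypothetical; a positive I indicates clustering of similar counts.
counts = [120, 110, 30, 25]
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]])
print(round(morans_i(counts, adjacency), 3))  # about 0.38 for these values
```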
NASA Astrophysics Data System (ADS)
Luo, Hanjun; Ouyang, Zhengbiao; Liu, Qiang; Chen, Zhiliang; Lu, Hualan
2017-10-01
Cumulative pulse detection, with an appropriate number of accumulated pulses and an appropriate threshold, can improve the detection performance of a pulsed laser ranging system with a GM-APD. In this paper, based on Poisson statistics and the multi-pulse accumulation process, the cumulative detection probabilities and the factors that influence them are investigated. With the normalized probability distribution of each time bin, a theoretical model of the range accuracy and precision is established, and the factors limiting the range accuracy and precision are discussed. The results show that cumulative pulse detection can produce a higher target detection probability and a lower false alarm probability. However, for a heavy noise level and extremely weak echo intensity, the false alarm suppression performance of cumulative pulse detection deteriorates quickly. The range accuracy and precision are another important measure of detection performance; the echo intensity and pulse width are the main factors influencing them, and higher range accuracy and precision are obtained with stronger echo intensity and narrower echo pulse width. For a 5-ns echo pulse width, when the echo intensity is larger than 10, a range accuracy and precision better than 7.5 cm can be achieved.
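A simplified sketch of the cumulative scheme under the Poisson assumption: a Geiger-mode APD bin fires when at least one primary (signal or noise) event occurs, and a target is declared when the number of fired pulses in the same range bin reaches a threshold over n repeated shots. The photoelectron numbers, pulse count and threshold below are hypothetical, and the false-alarm figure is per range bin, ignoring the dead-time and timing details treated in the paper.

```python
from math import comb, exp

def per_pulse_fire_prob(mean_signal_pe, mean_noise_pe):
    """Poisson model: a Geiger-mode APD bin fires if >= 1 primary event occurs."""
    return 1.0 - exp(-(mean_signal_pe + mean_noise_pe))

def cumulative_detection_prob(n_pulses, threshold, p_fire):
    """Probability that at least `threshold` of `n_pulses` shots fire in the
    same range bin (binomial accumulation over repeated pulses)."""
    return sum(comb(n_pulses, k) * p_fire ** k * (1 - p_fire) ** (n_pulses - k)
               for k in range(threshold, n_pulses + 1))

p_target = per_pulse_fire_prob(mean_signal_pe=0.5, mean_noise_pe=0.05)
p_noise = per_pulse_fire_prob(mean_signal_pe=0.0, mean_noise_pe=0.05)
print("detection:", round(cumulative_detection_prob(20, 6, p_target), 3))
print("false alarm:", round(cumulative_detection_prob(20, 6, p_noise), 3))
```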
Hayama, Hironari; Fueki, Kenji; Wadachi, Juro; Wakabayashi, Noriyuki
2018-03-01
It remains unclear whether digital impressions obtained using an intraoral scanner are sufficiently accurate for use in fabrication of removable partial dentures. We therefore compared the trueness and precision between conventional and digital impressions in the partially edentulous mandible. Mandibular Kennedy Class I and III models with soft silicone simulated-mucosa placed on the residual edentulous ridge were used. The reference models were converted to standard triangulated language (STL) file format using an extraoral scanner. Digital impressions were obtained using an intraoral scanner with a large or small scanning head, and converted to STL files. For conventional impressions, pressure impressions of the reference models were made and working casts fabricated using modified dental stone; these were converted to STL file format using an extraoral scanner. Conversion to STL file format was performed 5 times for each method. Trueness and precision were evaluated by deviation analysis using three-dimensional image processing software. Digital impressions had superior trueness (54-108μm), but inferior precision (100-121μm) compared to conventional impressions (trueness 122-157μm, precision 52-119μm). The larger intraoral scanning head showed better trueness and precision than the smaller head, and on average required fewer scanned images of digital impressions than the smaller head (p<0.05). On the color map, the deviation distribution tended to differ between the conventional and digital impressions. Digital impressions are partially comparable to conventional impressions in terms of accuracy; the use of a larger scanning head may improve the accuracy for removable partial denture fabrication. Copyright © 2018 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
Skeletal Correlates for Body Mass Estimation in Modern and Fossil Flying Birds
Field, Daniel J.; Lynner, Colton; Brown, Christian; Darroch, Simon A. F.
2013-01-01
Scaling relationships between skeletal dimensions and body mass in extant birds are often used to estimate body mass in fossil crown-group birds, as well as in stem-group avialans. However, useful statistical measurements for constraining the precision and accuracy of fossil mass estimates are rarely provided, which prevents the quantification of robust upper and lower bound body mass estimates for fossils. Here, we generate thirteen body mass correlations and associated measures of statistical robustness using a sample of 863 extant flying birds. By providing robust body mass regressions with upper- and lower-bound prediction intervals for individual skeletal elements, we address the longstanding problem of body mass estimation for highly fragmentary fossil birds. We demonstrate that the most precise proxy for estimating body mass in the overall dataset, measured both as the coefficient of determination of ordinary least squares regression and as percent prediction error, is the maximum diameter of the coracoid's humeral articulation facet (the glenoid). We further demonstrate that this result is consistent among the majority of investigated avian orders (10 out of 18). As a result, we suggest that, in the majority of cases, this proxy may provide the most accurate estimates of body mass for volant fossil birds. Additionally, by presenting statistical measurements of body mass prediction error for thirteen different body mass regressions, this study provides a much-needed quantitative framework for the accurate estimation of body mass and associated ecological correlates in fossil birds. The application of these regressions will enhance the precision and robustness of many mass-based inferences in future paleornithological studies. PMID:24312392
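The kind of scaling regression described, an ordinary least squares fit on log-transformed data with a prediction interval for a new specimen, can be sketched as follows. The calibration data are simulated, the 1.96 factor is a large-sample normal approximation to the t quantile, and none of the coefficients correspond to the published regressions.

```python
import numpy as np

def fit_loglog_with_pi(x, y):
    """OLS fit of log10(y) on log10(x), plus the pieces needed for prediction
    intervals (large-sample normal approximation instead of a t quantile)."""
    lx, ly = np.log10(x), np.log10(y)
    n = len(lx)
    slope, intercept = np.polyfit(lx, ly, 1)
    resid = ly - (intercept + slope * lx)
    s = np.sqrt(resid @ resid / (n - 2))       # residual standard error
    return slope, intercept, s, lx.mean(), ((lx - lx.mean()) ** 2).sum(), n

def predict_mass(x_new, fit, z=1.96):
    """Point estimate and ~95% prediction interval, back-transformed to grams."""
    slope, intercept, s, lx_bar, sxx, n = fit
    lx0 = np.log10(x_new)
    ly0 = intercept + slope * lx0
    half = z * s * np.sqrt(1 + 1 / n + (lx0 - lx_bar) ** 2 / sxx)
    return 10 ** ly0, 10 ** (ly0 - half), 10 ** (ly0 + half)

# Simulated calibration data: glenoid diameter (mm) vs body mass (g).
rng = np.random.default_rng(0)
diam = rng.uniform(3, 15, 200)
mass = 10 ** (2.3 * np.log10(diam) + 0.8 + rng.normal(0, 0.08, 200))
fit = fit_loglog_with_pi(diam, mass)
print([round(v, 1) for v in predict_mass(8.0, fit)])  # estimate, lower, upper
```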
[Application of statistics on chronic-diseases-relating observational research papers].
Hong, Zhi-heng; Wang, Ping; Cao, Wei-hua
2012-09-01
To study the application of statistics in chronic-disease-related observational research papers recently published in Chinese Medical Association journals with an impact factor above 0.5. Using a self-developed assessment criterion, two investigators independently evaluated the statistical reporting in these journals, and disagreements were resolved through discussion. A total of 352 papers from 6 journals were reviewed, including the Chinese Journal of Epidemiology, Chinese Journal of Oncology, Chinese Journal of Preventive Medicine, Chinese Journal of Cardiology, Chinese Journal of Internal Medicine and Chinese Journal of Endocrinology and Metabolism. The rates of clear statements of research objectives, target population, sampling issues, inclusion criteria and variable definitions were 99.43%, 98.57%, 95.43%, 92.86% and 96.87%, respectively. The rates of correct description of quantitative and qualitative data were 90.94% and 91.46%, respectively. The rates of correctly reported results for statistical inference methods related to quantitative data, qualitative data and modeling were 100%, 95.32% and 87.19%, respectively, and 89.49% of the conclusions responded directly to the research objectives. However, 69.60% of the papers did not state the exact name of the study design used, 11.14% lacked a statement of the exclusion criteria, only 5.16% clearly explained the sample size estimation, and only 24.21% clearly described the variable value assignment. Regarding the introduction of the statistical procedures and database methods used, the rate was only 24.15%. In addition, 18.75% of the papers did not report the statistical inference methods sufficiently, and a quarter did not use 'standardization' appropriately. As for statistical inference, only 24.12% of the papers described the prerequisites of the statistical tests used, while 9.94% did not employ the statistical inference method that should have been used. The main deficiencies in the application of statistics in chronic-disease-related observational research papers were as follows: lack of sample size determination, insufficient description of variable value assignment, statistical methods not introduced clearly or properly, and lack of consideration of the prerequisites for statistical inference.
Statistical Learning in a Natural Language by 8-Month-Old Infants
Pelucchi, Bruna; Hay, Jessica F.; Saffran, Jenny R.
2013-01-01
Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real speech. To what extent can these conclusions be scaled up to natural language learning? In the current experiments, English-learning 8-month-old infants’ ability to track transitional probabilities in fluent infant-directed Italian speech was tested (N = 72). The results suggest that infants are sensitive to transitional probability cues in unfamiliar natural language stimuli, and support the claim that statistical learning is sufficiently robust to support aspects of real-world language acquisition. PMID:19489896
Statistical learning in a natural language by 8-month-old infants.
Pelucchi, Bruna; Hay, Jessica F; Saffran, Jenny R
2009-01-01
Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real speech. To what extent can these conclusions be scaled up to natural language learning? In the current experiments, English-learning 8-month-old infants' ability to track transitional probabilities in fluent infant-directed Italian speech was tested (N = 72). The results suggest that infants are sensitive to transitional probability cues in unfamiliar natural language stimuli, and support the claim that statistical learning is sufficiently robust to support aspects of real-world language acquisition.
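The statistic at the heart of these segmentation studies is the forward transitional probability between adjacent syllables. The following minimal sketch, assuming a toy syllable stream rather than the Italian stimuli used in the experiments, shows how that statistic is computed.

```python
# Sketch of the forward transitional-probability statistic described in the abstract:
# TP(x -> y) = frequency(xy) / frequency(x).  The syllable stream below is a toy
# example built from three made-up "words", not the experimental stimuli.
from collections import Counter

stream = "bidakupadotigolabubidakugolabupadoti"        # hypothetical fluent stream
syllables = [stream[i:i+2] for i in range(0, len(stream), 2)]

unigrams = Counter(syllables)
bigrams = Counter(zip(syllables, syllables[1:]))

def transitional_probability(x, y):
    """P(next syllable is y | current syllable is x)."""
    return bigrams[(x, y)] / unigrams[x] if unigrams[x] else 0.0

print(transitional_probability("bi", "da"))   # 1.0: transition within a "word"
print(transitional_probability("ku", "pa"))   # 0.5: transition across a word boundary
```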
Code of Federal Regulations, 2010 CFR
2010-07-01
... by the Administrator. (1) Statistical analysis of initial water penetration data performed to support ASTM Designation D2099-00 indicates that poor quantitative precision is associated with this testing...
Kline, Joshua C.
2014-01-01
Over the past four decades, various methods have been implemented to measure synchronization of motor-unit firings. In this work, we provide evidence that prior reports of the existence of universal common inputs to all motoneurons and the presence of long-term synchronization are misleading, because they did not use sufficiently rigorous statistical tests to detect synchronization. We developed a statistically based method (SigMax) for computing synchronization and tested it with data from 17,736 motor-unit pairs containing 1,035,225 firing instances from the first dorsal interosseous and vastus lateralis muscles—a data set one order of magnitude greater than that reported in previous studies. Only firing data, obtained from surface electromyographic signal decomposition with >95% accuracy, were used in the study. The data were not subjectively selected in any manner. Because of the size of our data set and the statistical rigor inherent to SigMax, we have confidence that the synchronization values that we calculated provide an improved estimate of physiologically driven synchronization. Compared with three other commonly used techniques, ours revealed three types of discrepancies that result from failing to use sufficient statistical tests necessary to detect synchronization. 1) On average, the z-score method falsely detected synchronization at 16 separate latencies in each motor-unit pair. 2) The cumulative sum method missed one out of every four synchronization identifications found by SigMax. 3) The common input assumption method identified synchronization from 100% of motor-unit pairs studied. SigMax revealed that only 50% of motor-unit pairs actually manifested synchronization. PMID:25210152
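The abstract argues that synchronization claims stand or fall on the rigor of the underlying statistical test. The sketch below is not the SigMax algorithm; it is a generic illustration of one statistically based approach, comparing cross-correlogram bin counts of two spike trains against a Poisson expectation under independence, with a Bonferroni correction over the latency bins tested. All quantities are hypothetical.

```python
# Illustrative only -- NOT the SigMax method.  A generic test: count near-coincident
# firings of two motor units in latency bins and compare each count to the Poisson
# expectation for independent trains, correcting for the number of bins tested.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
duration = 100.0                                   # seconds (hypothetical recording)
t1 = np.sort(rng.uniform(0, duration, 800))        # firing times of motor unit 1
t2 = np.sort(rng.uniform(0, duration, 750))        # firing times of motor unit 2

bin_w = 0.002                                      # 2 ms latency bins
lags = np.arange(-0.05, 0.05 + bin_w, bin_w)       # +/- 50 ms cross-correlogram edges
diffs = (t2[None, :] - t1[:, None]).ravel()
counts, _ = np.histogram(diffs, bins=lags)

# Expected count per bin if the two trains were independent Poisson processes
expected = len(t1) * len(t2) * bin_w / duration
n_bins = len(counts)
pvals = stats.poisson.sf(counts - 1, expected)     # P(count >= observed)
significant = pvals < 0.05 / n_bins                # Bonferroni over latency bins
print(np.flatnonzero(significant), counts.max(), expected)
```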
Ma, Li-Xin; Liu, Jian-Ping
2012-01-01
To investigate whether randomized controlled trials (RCTs) of Chinese medicine for the treatment of patients with type 2 diabetes mellitus (T2DM) had sample sizes adequate to provide sufficient statistical power for the reported effect sizes. The China Knowledge Resource Integrated Database (CNKI), VIP Database for Chinese Technical Periodicals (VIP), Chinese Biomedical Database (CBM), and Wangfang Data were systematically searched using terms such as "Xiaoke" or diabetes, Chinese herbal medicine, patent medicine, traditional Chinese medicine, randomized, controlled, blinded, and placebo-controlled. Trials were limited to an intervention course of 3 months or longer in order to identify information on outcome assessment and sample size. Data collection forms were designed according to the checklists in the CONSORT statement, and data were extracted independently in duplicate for all included trials. The statistical power of the effect size for each RCT was assessed using sample size calculation equations. (1) A total of 207 RCTs were included, comprising 111 superiority trials and 96 non-inferiority trials. (2) Among the 111 superiority trials, fasting plasma glucose (FPG) and glycosylated hemoglobin (HbA1c) outcomes were reported with a sample size > 150 in 9% and 12% of the RCTs, respectively. For HbA1c, only 10% of the RCTs had more than 80% power; for FPG, 23% of the RCTs had more than 80% power. (3) Among the 96 non-inferiority trials, the FPG and HbA1c outcomes were reported with a sample size > 150 in 31% and 36% of the RCTs, respectively. For HbA1c, only 36% of the RCTs had more than 80% power; for FPG, only 27% of the studies had more than 80% power. The sample sizes were distressingly low and most RCTs did not achieve 80% power. To obtain sufficient statistical power, it is recommended that clinical trials first establish a clear research objective and hypothesis, choose a scientific and evidence-based study design and outcome measures, and calculate the required sample size to ensure a precise research conclusion.
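For reference, the power and sample-size relationships invoked above can be illustrated with the standard normal-approximation formulas for a two-arm comparison of means (e.g., HbA1c). The sketch below uses hypothetical effect-size and standard-deviation values and is not taken from the paper's own equations.

```python
# Sketch of the standard normal-approximation formulas for a two-arm superiority
# trial comparing means.  The effect size and SD below are hypothetical.
from math import sqrt
from scipy.stats import norm

def power_two_sample(n_per_arm, delta, sd, alpha=0.05):
    """Power to detect a true mean difference `delta` with common SD `sd`."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z = delta / (sd * sqrt(2.0 / n_per_arm)) - z_alpha
    return norm.cdf(z)

def n_for_power(delta, sd, power=0.80, alpha=0.05):
    """Sample size per arm needed for the requested power."""
    z_alpha, z_beta = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

print(power_two_sample(n_per_arm=75, delta=0.4, sd=1.2))  # power for a hypothetical trial
print(n_for_power(delta=0.4, sd=1.2))                     # about 142 per arm for 80% power
```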
Emancipation through interaction--how eugenics and statistics converged and diverged.
Louçã, Francisco
2009-01-01
The paper discusses the scope and influence of eugenics in defining the scientific programme of statistics and the impact of the evolution of biology on social scientists. It argues that eugenics was instrumental in providing a bridge between sciences, and therefore created both the impulse and the institutions necessary for the birth of modern statistics in its applications first to biology and then to the social sciences. Looking at the question from the point of view of the history of statistics and the social sciences, and mostly concentrating on evidence from the British debates, the paper discusses how these disciplines became emancipated from eugenics precisely because of the inspiration of biology. It also relates how social scientists were fascinated and perplexed by the innovations taking place in statistical theory and practice.
Li, Jun; Wu, Chuanchuan; Wang, Hui; Liu, Huanyuan; Vuitton, Dominique A.; Wen, Hao; Zhang, Wenbao
2014-01-01
Proper disposal of carcasses and offal after home slaughter is difficult in poor and remote communities, and therefore dogs readily have access to livestock offal containing hydatid cysts, thus completing the parasite cycle of Echinococcus granulosus and putting communities at risk of cystic echinococcosis. Boiling livers and lungs which contain hydatid cysts could be a simple, efficient and energy- and time-saving way to kill the infectious protoscoleces. The aim of this study was to provide precise practical recommendations to livestock owners. Our results show that boiling the whole sheep liver and/or lung, with single or multiple hydatid cysts, for 30 min is necessary and sufficient to kill E. granulosus protoscoleces in hydatid cysts. Publicizing this simple rule in at-risk communities would be an efficient and cheap complement to other veterinary public health operations to control cystic echinococcosis. PMID:25456565
Retrieval of charge mobility from apparent charge packet movements in LDPE thin films
NASA Astrophysics Data System (ADS)
Meng, Jia; Zhang, Yewen; Holé, Stéphane; Zheng, Feihu; An, Zhenlian
2017-03-01
The charge packet phenomenon observed in polyethylene materials has been reported extensively during the last decades. To explain its movement, Negative Differential Mobility (NDM) theory is a competitive model among several proposed mechanisms. However, as a key concept of this theory, a sufficiently acute relationship between charge mobility and electric field has never been reported until now, which makes it hard to precisely describe the migration of charge packets with this theory. Based on substantial negative-charge packet observations over a sufficiently wide electric field range from 15 kV/mm to 50 kV/mm, the present contribution successfully retrieved the negative-charge mobility from the apparent charge packet movements, which reveals a much closer relationship between the NDM theory and charge packet migrations. Back simulations of charge packets with the retrieved charge mobility offer a good agreement with the experimental data.
Feng, Lei; Fang, Hui; Zhou, Wei-Jun; Huang, Min; He, Yong
2006-09-01
Site-specific variable nitrogen application is one of the major precision crop production management operations. Obtaining sufficient crop nitrogen stress information is essential for achieving effective site-specific nitrogen applications. The present paper describes the development of a multi-spectral nitrogen deficiency sensor, which uses three channels (green, red, near-infrared) of crop images to determine the nitrogen level of canola. This sensor assesses the nitrogen stress by means of estimated SPAD value of the canola based on canola canopy reflectance sensed using three channels (green, red, near-infrared) of the multi-spectral camera. The core of this investigation is the calibration methods between the multi-spectral references and the nitrogen levels in crops measured using a SPAD 502 chlorophyll meter. Based on the results obtained from this study, it can be concluded that a multi-spectral CCD camera can provide sufficient information to perform reasonable SPAD values estimation during field operations.
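A minimal sketch of the calibration step described above follows: regressing SPAD readings on an index computed from the green, red and near-infrared channels. The NDVI-style index, the linear model form, and all data values are assumptions for illustration; the paper does not state which calibration model it adopted.

```python
# Hedged sketch of a SPAD calibration from three-channel reflectance.  The
# NDVI-style index and the linear fit are assumptions, not the paper's method.
import numpy as np

# Hypothetical per-plot mean reflectances and SPAD-502 readings
green = np.array([0.12, 0.10, 0.09, 0.11, 0.08])
red   = np.array([0.10, 0.08, 0.06, 0.09, 0.05])
nir   = np.array([0.45, 0.50, 0.55, 0.48, 0.60])
spad  = np.array([32.0, 36.5, 41.2, 34.8, 45.1])

ndvi = (nir - red) / (nir + red)
A = np.column_stack([ndvi, np.ones_like(ndvi)])
coef, *_ = np.linalg.lstsq(A, spad, rcond=None)      # SPAD ~ a*NDVI + b

def estimate_spad(green_r, red_r, nir_r):
    """Estimate SPAD from channel reflectances using the fitted calibration."""
    idx = (nir_r - red_r) / (nir_r + red_r)
    return coef[0] * idx + coef[1]

print(estimate_spad(0.09, 0.07, 0.52))
```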
Sub-micron resolution rf cavity beam position monitor system at the SACLA XFEL facility
NASA Astrophysics Data System (ADS)
Maesaka, H.; Ego, H.; Inoue, S.; Matsubara, S.; Ohshima, T.; Shintake, T.; Otake, Y.
2012-12-01
We have developed and constructed a C-band (4.760 GHz) rf cavity beam position monitor (RF-BPM) system for the XFEL facility at SPring-8, SACLA. The demanded position resolution of the RF-BPM is less than 1 μm, because an electron beam and x-rays must be overlapped within 4 μm precision in the undulator section for sufficient FEL interaction between the electrons and x-rays. In total, 57 RF-BPMs, including IQ demodulators and high-speed waveform digitizers for signal processing, were produced and installed into SACLA. We evaluated the position resolutions of 20 RF-BPMs in the undulator section by using a 7 GeV electron beam having a 0.1 nC bunch charge. The position resolution was measured to be less than 0.6 μm, which was sufficient for the XFEL lasing in the wavelength region of 0.1 nm, or shorter.
Precision measurements of solar energetic particle elemental composition
NASA Technical Reports Server (NTRS)
Breneman, H.; Stone, E. C.
1985-01-01
Using data from the Cosmic Ray Subsystem (CRS) aboard the Voyager 1 and 2 spacecraft, solar energetic particle abundances or upper limits for all elements with 3 ≤ Z ≤ 30 from a combined set of 10 solar flares during the 1977 to 1982 time period were determined. Statistically meaningful abundances have been determined for the first time for several rare elements including P, Cl, K, Ti and Mn, while the precision of the mean abundances for the more abundant elements has been improved by typically a factor of approximately 3 over previously reported values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B.
The precision of double-beta (ββ) decay experimental half lives and their uncertainties is reanalyzed. The method of Benford's distributions has been applied to nuclear reaction, structure and decay data sets. First-digit distribution trend for the ββ-decay half-life T^{2ν}_{1/2} is consistent with large nuclear reaction and structure data sets and provides validation of experimental half-lives. A complementary analysis of the decay uncertainties indicates deficiencies due to small size of statistical samples, and incomplete collection of experimental information. Further experimental and theoretical efforts would lead toward more precise values of ββ-decay half-lives and nuclear matrix elements.
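For illustration, a first-digit (Benford) check of the kind described above can be coded in a few lines. The half-life values below are hypothetical placeholders, and, as the abstract itself notes for the uncertainty analysis, a meaningful test requires a far larger sample than shown here.

```python
# Sketch of a first-digit (Benford) analysis applied to a generic set of measured
# values.  The values are hypothetical placeholders; a real analysis needs a large
# data set for the chi-square comparison to be meaningful.
import numpy as np
from scipy.stats import chisquare

values = np.array([2.1e19, 7.1e18, 1.9e21, 9.3e19, 3.4e20, 1.1e21, 2.3e24, 6.8e20])

first_digits = np.array([int(f"{abs(v):e}"[0]) for v in values])
observed = np.bincount(first_digits, minlength=10)[1:10]

digits = np.arange(1, 10)
benford = np.log10(1 + 1.0 / digits)          # Benford's law: P(d) = log10(1 + 1/d)
expected = benford * len(values)

chi2, p = chisquare(observed, expected)
print(observed, chi2, p)
```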
Accardo, L; Aguilar, M; Aisa, D; Alpat, B; Alvino, A; Ambrosi, G; Andeen, K; Arruda, L; Attig, N; Azzarello, P; Bachlechner, A; Barao, F; Barrau, A; Barrin, L; Bartoloni, A; Basara, L; Battarbee, M; Battiston, R; Bazo, J; Becker, U; Behlmann, M; Beischer, B; Berdugo, J; Bertucci, B; Bigongiari, G; Bindi, V; Bizzaglia, S; Bizzarri, M; Boella, G; de Boer, W; Bollweg, K; Bonnivard, V; Borgia, B; Borsini, S; Boschini, M J; Bourquin, M; Burger, J; Cadoux, F; Cai, X D; Capell, M; Caroff, S; Carosi, G; Casaus, J; Cascioli, V; Castellini, G; Cernuda, I; Cerreta, D; Cervelli, F; Chae, M J; Chang, Y H; Chen, A I; Chen, H; Cheng, G M; Chen, H S; Cheng, L; Chikanian, A; Chou, H Y; Choumilov, E; Choutko, V; Chung, C H; Cindolo, F; Clark, C; Clavero, R; Coignet, G; Consolandi, C; Contin, A; Corti, C; Coste, B; Cui, Z; Dai, M; Delgado, C; Della Torre, S; Demirköz, M B; Derome, L; Di Falco, S; Di Masso, L; Dimiccoli, F; Díaz, C; von Doetinchem, P; Du, W J; Duranti, M; D'Urso, D; Eline, A; Eppling, F J; Eronen, T; Fan, Y Y; Farnesini, L; Feng, J; Fiandrini, E; Fiasson, A; Finch, E; Fisher, P; Galaktionov, Y; Gallucci, G; García, B; García-López, R; Gast, H; Gebauer, I; Gervasi, M; Ghelfi, A; Gillard, W; Giovacchini, F; Goglov, P; Gong, J; Goy, C; Grabski, V; Grandi, D; Graziani, M; Guandalini, C; Guerri, I; Guo, K H; Haas, D; Habiby, M; Haino, S; Han, K C; He, Z H; Heil, M; Henning, R; Hoffman, J; Hsieh, T H; Huang, Z C; Huh, C; Incagli, M; Ionica, M; Jang, W Y; Jinchi, H; Kanishev, K; Kim, G N; Kim, K S; Kirn, Th; Kossakowski, R; Kounina, O; Kounine, A; Koutsenko, V; Krafczyk, M S; Kunz, S; La Vacca, G; Laudi, E; Laurenti, G; Lazzizzera, I; Lebedev, A; Lee, H T; Lee, S C; Leluc, C; Levi, G; Li, H L; Li, J Q; Li, Q; Li, Q; Li, T X; Li, W; Li, Y; Li, Z H; Li, Z Y; Lim, S; Lin, C H; Lipari, P; Lippert, T; Liu, D; Liu, H; Lolli, M; Lomtadze, T; Lu, M J; Lu, Y S; Luebelsmeyer, K; Luo, F; Luo, J Z; Lv, S S; Majka, R; Malinin, A; Mañá, C; Marín, J; Martin, T; Martínez, G; Masi, N; Massera, F; Maurin, D; Menchaca-Rocha, A; Meng, Q; Mo, D C; Monreal, B; Morescalchi, L; Mott, P; Müller, M; Ni, J Q; Nikonov, N; Nozzoli, F; Nunes, P; Obermeier, A; Oliva, A; Orcinha, M; Palmonari, F; Palomares, C; Paniccia, M; Papi, A; Pauluzzi, M; Pedreschi, E; Pensotti, S; Pereira, R; Pilastrini, R; Pilo, F; Piluso, A; Pizzolotto, C; Plyaskin, V; Pohl, M; Poireau, V; Postaci, E; Putze, A; Quadrani, L; Qi, X M; Rancoita, P G; Rapin, D; Ricol, J S; Rodríguez, I; Rosier-Lees, S; Rossi, L; Rozhkov, A; Rozza, D; Rybka, G; Sagdeev, R; Sandweiss, J; Saouter, P; Sbarra, C; Schael, S; Schmidt, S M; Schuckardt, D; Schulz von Dratzig, A; Schwering, G; Scolieri, G; Seo, E S; Shan, B S; Shan, Y H; Shi, J Y; Shi, X Y; Shi, Y M; Siedenburg, T; Son, D; Spada, F; Spinella, F; Sun, W; Sun, W H; Tacconi, M; Tang, C P; Tang, X W; Tang, Z C; Tao, L; Tescaro, D; Ting, Samuel C C; Ting, S M; Tomassetti, N; Torsti, J; Türkoğlu, C; Urban, T; Vagelli, V; Valente, E; Vannini, C; Valtonen, E; Vaurynovich, S; Vecchi, M; Velasco, M; Vialle, J P; Vitale, V; Volpini, G; Wang, L Q; Wang, Q L; Wang, R S; Wang, X; Wang, Z X; Weng, Z L; Whitman, K; Wienkenhöver, J; Wu, H; Wu, K Y; Xia, X; Xie, M; Xie, S; Xiong, R Q; Xin, G M; Xu, N S; Xu, W; Yan, Q; Yang, J; Yang, M; Ye, Q H; Yi, H; Yu, Y J; Yu, Z Q; Zeissler, S; Zhang, J H; Zhang, M T; Zhang, X B; Zhang, Z; Zheng, Z M; Zhou, F; Zhuang, H L; Zhukov, V; Zichichi, A; Zimmermann, N; Zuccon, P; Zurbach, C
2014-09-19
A precision measurement by AMS of the positron fraction in primary cosmic rays in the energy range from 0.5 to 500 GeV based on 10.9 million positron and electron events is presented. This measurement extends the energy range of our previous observation and increases its precision. The new results show, for the first time, that above ∼200 GeV the positron fraction no longer exhibits an increase with energy.
Gao, Xing; He, Yao; Hu, Hongpu
2017-01-01
To allow for differences in economic development, degree of informatization, and characteristics of the population served among community health service organizations, a precision fund appropriation system for community health services based on performance management was designed, which can support the government in appropriating financial funds scientifically and rationally for primary care. The system is flexible and practical and comprises five subsystems: data acquisition, parameter setting, fund appropriation, statistical analysis, and user management.
NASA Astrophysics Data System (ADS)
Accardo, L.; Aguilar, M.; Aisa, D.; Alvino, A.; Ambrosi, G.; Andeen, K.; Arruda, L.; Attig, N.; Azzarello, P.; Bachlechner, A.; Barao, F.; Barrau, A.; Barrin, L.; Bartoloni, A.; Basara, L.; Battarbee, M.; Battiston, R.; Bazo, J.; Becker, U.; Behlmann, M.; Beischer, B.; Berdugo, J.; Bertucci, B.; Bigongiari, G.; Bindi, V.; Bizzaglia, S.; Bizzarri, M.; Boella, G.; de Boer, W.; Bollweg, K.; Bonnivard, V.; Borgia, B.; Borsini, S.; Boschini, M. J.; Bourquin, M.; Burger, J.; Cadoux, F.; Cai, X. D.; Capell, M.; Caroff, S.; Casaus, J.; Cascioli, V.; Castellini, G.; Cernuda, I.; Cervelli, F.; Chae, M. J.; Chang, Y. H.; Chen, A. I.; Chen, H.; Cheng, G. M.; Chen, H. S.; Cheng, L.; Chikanian, A.; Chou, H. Y.; Choumilov, E.; Choutko, V.; Chung, C. H.; Clark, C.; Clavero, R.; Coignet, G.; Consolandi, C.; Contin, A.; Corti, C.; Coste, B.; Cui, Z.; Dai, M.; Delgado, C.; Della Torre, S.; Demirköz, M. B.; Derome, L.; Di Falco, S.; Di Masso, L.; Dimiccoli, F.; Díaz, C.; von Doetinchem, P.; Du, W. J.; Duranti, M.; D'Urso, D.; Eline, A.; Eppling, F. J.; Eronen, T.; Fan, Y. Y.; Farnesini, L.; Feng, J.; Fiandrini, E.; Fiasson, A.; Finch, E.; Fisher, P.; Galaktionov, Y.; Gallucci, G.; García, B.; García-López, R.; Gast, H.; Gebauer, I.; Gervasi, M.; Ghelfi, A.; Gillard, W.; Giovacchini, F.; Goglov, P.; Gong, J.; Goy, C.; Grabski, V.; Grandi, D.; Graziani, M.; Guandalini, C.; Guerri, I.; Guo, K. H.; Habiby, M.; Haino, S.; Han, K. C.; He, Z. H.; Heil, M.; Hoffman, J.; Hsieh, T. H.; Huang, Z. C.; Huh, C.; Incagli, M.; Ionica, M.; Jang, W. Y.; Jinchi, H.; Kanishev, K.; Kim, G. N.; Kim, K. S.; Kirn, Th.; Kossakowski, R.; Kounina, O.; Kounine, A.; Koutsenko, V.; Krafczyk, M. S.; Kunz, S.; La Vacca, G.; Laudi, E.; Laurenti, G.; Lazzizzera, I.; Lebedev, A.; Lee, H. T.; Lee, S. C.; Leluc, C.; Li, H. L.; Li, J. Q.; Li, Q.; Li, Q.; Li, T. X.; Li, W.; Li, Y.; Li, Z. H.; Li, Z. Y.; Lim, S.; Lin, C. H.; Lipari, P.; Lippert, T.; Liu, D.; Liu, H.; Lomtadze, T.; Lu, M. J.; Lu, Y. S.; Luebelsmeyer, K.; Luo, F.; Luo, J. Z.; Lv, S. S.; Majka, R.; Malinin, A.; Mañá, C.; Marín, J.; Martin, T.; Martínez, G.; Masi, N.; Maurin, D.; Menchaca-Rocha, A.; Meng, Q.; Mo, D. C.; Morescalchi, L.; Mott, P.; Müller, M.; Ni, J. Q.; Nikonov, N.; Nozzoli, F.; Nunes, P.; Obermeier, A.; Oliva, A.; Orcinha, M.; Palmonari, F.; Palomares, C.; Paniccia, M.; Papi, A.; Pedreschi, E.; Pensotti, S.; Pereira, R.; Pilo, F.; Piluso, A.; Pizzolotto, C.; Plyaskin, V.; Pohl, M.; Poireau, V.; Postaci, E.; Putze, A.; Quadrani, L.; Qi, X. M.; Rancoita, P. G.; Rapin, D.; Ricol, J. S.; Rodríguez, I.; Rosier-Lees, S.; Rozhkov, A.; Rozza, D.; Sagdeev, R.; Sandweiss, J.; Saouter, P.; Sbarra, C.; Schael, S.; Schmidt, S. M.; Schuckardt, D.; von Dratzig, A. Schulz; Schwering, G.; Scolieri, G.; Seo, E. S.; Shan, B. S.; Shan, Y. H.; Shi, J. Y.; Shi, X. Y.; Shi, Y. M.; Siedenburg, T.; Son, D.; Spada, F.; Spinella, F.; Sun, W.; Sun, W. H.; Tacconi, M.; Tang, C. P.; Tang, X. W.; Tang, Z. C.; Tao, L.; Tescaro, D.; Ting, Samuel C. C.; Ting, S. M.; Tomassetti, N.; Torsti, J.; Türkoǧlu, C.; Urban, T.; Vagelli, V.; Valente, E.; Vannini, C.; Valtonen, E.; Vaurynovich, S.; Vecchi, M.; Velasco, M.; Vialle, J. P.; Wang, L. Q.; Wang, Q. L.; Wang, R. S.; Wang, X.; Wang, Z. X.; Weng, Z. L.; Whitman, K.; Wienkenhöver, J.; Wu, H.; Xia, X.; Xie, M.; Xie, S.; Xiong, R. Q.; Xin, G. M.; Xu, N. S.; Xu, W.; Yan, Q.; Yang, J.; Yang, M.; Ye, Q. H.; Yi, H.; Yu, Y. J.; Yu, Z. Q.; Zeissler, S.; Zhang, J. H.; Zhang, M. T.; Zhang, X. B.; Zhang, Z.; Zheng, Z. M.; Zhuang, H. 
L.; Zhukov, V.; Zichichi, A.; Zimmermann, N.; Zuccon, P.; Zurbach, C.; AMS Collaboration
2014-09-01
A precision measurement by AMS of the positron fraction in primary cosmic rays in the energy range from 0.5 to 500 GeV based on 10.9 million positron and electron events is presented. This measurement extends the energy range of our previous observation and increases its precision. The new results show, for the first time, that above ∼200 GeV the positron fraction no longer exhibits an increase with energy.
Onsite Gaseous Centrifuge Enrichment Plant UF6 Cylinder Destructive Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anheier, Norman C.; Cannon, Bret D.; Qiao, Hong
2012-07-17
The IAEA safeguards approach for gaseous centrifuge enrichment plants (GCEPs) includes measurements of gross, partial, and bias defects in a statistical sampling plan. These safeguard methods consist principally of mass and enrichment nondestructive assay (NDA) verification. Destructive assay (DA) samples are collected from a limited number of cylinders for high precision offsite mass spectrometer analysis. DA is typically used to quantify bias defects in the GCEP material balance. Under current safeguards measures, the operator collects a DA sample from a sample tap following homogenization. The sample is collected in a small UF6 sample bottle, then sealed and shipped under IAEA chain of custody to an offsite analytical laboratory. Current practice is expensive and resource intensive. We propose a new and novel approach for performing onsite gaseous UF6 DA analysis that provides rapid and accurate assessment of enrichment bias defects. DA samples are collected using a custom sampling device attached to a conventional sample tap. A few micrograms of gaseous UF6 is chemically adsorbed onto a sampling coupon in a matter of minutes. The collected DA sample is then analyzed onsite using Laser Ablation Absorption Ratio Spectrometry-Destructive Assay (LAARS-DA). DA results are determined in a matter of minutes at sufficient accuracy to support reliable bias defect conclusions, while greatly reducing DA sample volume, analysis time, and cost.
Estimation of sport fish harvest for risk and hazard assessment of environmental contaminants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poston, T.M.; Strenge, D.L.
1989-01-01
Consumption of contaminated fish flesh can be a significant route of human exposure to hazardous chemicals. Estimation of exposure resulting from the consumption of fish requires knowledge of fish consumption and contaminant levels in the edible portion of fish. Realistic figures of sport fish harvest are needed to estimate consumption. Estimates of freshwater sport fish harvest were developed from a review of 72 articles and reports. Descriptive statistics based on fishing pressure were derived from harvest data for four distinct groups of freshwater sport fish in three water types: streams, lakes, and reservoirs. Regression equations were developed to relate harvest to surface area fished where data bases were sufficiently large. Other aspects of estimating human exposure to contaminants in fish flesh that are discussed include use of bioaccumulation factors for trace metals and organic compounds. Using the bioaccumulation factor and the concentration of contaminants in water as variables in the exposure equation may also lead to less precise estimates of tissue concentration. For instance, muscle levels of contaminants may not increase proportionately with increases in water concentrations, leading to overestimation of risk. In addition, estimates of water concentration may be variable or expressed in a manner that does not truly represent biological availability of the contaminant. These factors are discussed. 45 refs., 1 fig., 7 tabs.
NASA Astrophysics Data System (ADS)
Tramutoli, V.; Armandi, B.; Coviello, I.; Eleftheriou, A.; Filizzola, C.; Genzano, N.; Lacava, T.; Lisi, M.; Paciello, R.; Pergola, N.; Satriano, V.; Vallianatos, F.
2014-12-01
A large body of scientific documentation is available to date about the appearance of anomalous space-time patterns in geophysical parameters measured from days to weeks before earthquake occurrence. Nevertheless, up to now no single measurable parameter or observational methodology has been demonstrated to be sufficiently reliable and effective for the implementation of an operational earthquake prediction system. In this context the PRE-EARTHQUAKES EU-FP7 project (www.pre-earthquakes.org) investigated to what extent the combined use of different observations/parameters, together with the refinement of data analysis methods, can reduce false alarm rates and improve the reliability and precision (in the space-time domain) of predictions. Among the different parameters/methodologies proposed to provide useful information in the earthquake prediction system, a statistical approach named RST (Robust Satellite Technique) has been used since 2001 to identify the space-time fluctuations of Earth's emitted Thermal Infrared (TIR) radiation observed from satellite in seismically active regions. In this paper, RST-based long-term analyses of the TIR satellite records collected by MSG/SEVIRI over European (Italy and Greece) regions and by GOES/IMAGER over the American (California) region are presented. Its enhanced potential, when applied in the framework of a time-Dependent Assessment of Seismic Hazard (t-DASH) system continuously integrating independent observations, is moreover discussed.
What Ever Happened to N-of-1 Trials? Insiders’ Perspectives and a Look to the Future
Kravitz, Richard L; Duan, Naihua; Niedzinski, Edmund J; Hay, M Cameron; Subramanian, Saskia K; Weisner, Thomas S
2008-01-01
Context When feasible, randomized, blinded single-patient (n-of-1) trials are uniquely capable of establishing the best treatment in an individual patient. Despite early enthusiasm, by the turn of the twenty-first century, few academic centers were conducting n-of-1 trials on a regular basis. Methods The authors reviewed the literature and conducted in-depth telephone interviews with leaders in the n-of-1 trial movement. Findings N-of-1 trials can improve care by increasing therapeutic precision. However, they have not been widely adopted, in part because physicians do not sufficiently value the reduction in uncertainty they yield weighed against the inconvenience they impose. Limited evidence suggests that patients may be receptive to n-of-1 trials once they understand the benefits. Conclusions N-of-1 trials offer a unique opportunity to individualize clinical care and enrich clinical research. While ongoing changes in drug discovery, manufacture, and marketing may ultimately spur pharmaceutical makers and health care payers to support n-of-1 trials, at present the most promising resuscitation strategy is stripping n-of-1 trials to their essentials and marketing them directly to patients. In order to optimize statistical inference from these trials, empirical Bayes methods can be used to combine individual patient data with aggregate data from comparable patients. PMID:19120979
Automatic physical inference with information maximizing neural networks
NASA Astrophysics Data System (ADS)
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.
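The quantity an IMNN is trained to maximize is the Fisher information of its compressed summary. The following plain-NumPy sketch, which is not the authors' implementation, estimates that quantity for a hand-chosen summary on the simplest test case mentioned above (inferring the variance of zero-mean Gaussian noise), using finite differences of the mean summary and the summary variance across simulations.

```python
# Not the authors' IMNN code -- a NumPy sketch of the quantity such a network
# maximizes: the Fisher information of a compressed summary, estimated from
# simulations.  Toy problem: infer the variance theta of zero-mean Gaussian noise.
import numpy as np

rng = np.random.default_rng(1)
n_data, n_sims, theta0, dtheta = 50, 2000, 1.0, 0.05

def simulate(theta, n):
    return rng.normal(0.0, np.sqrt(theta), size=(n, n_data))

def summary(d):
    # Candidate nonlinear compression of each simulated data vector to one number.
    return np.mean(d ** 2, axis=1)

s0 = summary(simulate(theta0, n_sims))
s_plus = summary(simulate(theta0 + dtheta, n_sims))
s_minus = summary(simulate(theta0 - dtheta, n_sims))

C = np.var(s0, ddof=1)                                  # summary covariance (scalar here)
dmu = (s_plus.mean() - s_minus.mean()) / (2 * dtheta)   # derivative of the mean summary
fisher = dmu ** 2 / C                                   # F = (dmu/dtheta)^2 / C
print(fisher, n_data / (2 * theta0 ** 2))               # compare with the exact Fisher info
```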
NASA Astrophysics Data System (ADS)
Zhang, Shun-Rong; Holt, John M.; Erickson, Philip J.; Goncharenko, Larisa P.
2018-05-01
Perrone and Mikhailov (2017, https://doi.org/10.1002/2017JA024193) and Mikhailov et al. (2017, https://doi.org/10.1002/2017JA023909) have recently examined thermospheric and ionospheric long-term trends using a data set of four thermospheric parameters (Tex, [O], [N2], and [O2]) and solar EUV flux. These data were derived from one single ionospheric parameter, foF1, using a nonlinear fitting procedure involving a photochemical model for the F1 peak. The F1 peak is assumed at the transition height ht with the linear recombination for atomic oxygen ions being equal to the quadratic recombination for molecular ions. This procedure has a number of obvious problems that are not addressed or not sufficiently justified. The potentially large ambiguities and biases in derived parameters make them unsuitable for precise quantitative ionospheric and thermospheric long-term trend studies. Furthermore, we assert that Perrone and Mikhailov (2017, https://doi.org/10.1002/2017JA024193) conclusions regarding incoherent scatter radar (ISR) ion temperature analysis for long-term trend studies are incorrect and in particular are based on a misunderstanding of the nature of the incoherent scatter radar measurement process. Large ISR data sets remain a consistent and statistically robust method for determining long term secular plasma temperature trends.
The beta distribution: A statistical model for world cloud cover
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, with its probability density function, is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
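A minimal sketch of fitting and checking a beta model for cloud-cover fractions is given below; the data are synthetic draws, not the 160 empirical distributions analyzed in the study, and the goodness-of-fit test shown is only one of several that could be used.

```python
# Hedged sketch: fit a beta distribution to cloud-cover fractions in [0, 1] and
# check the fit.  The sample is synthetic, not the study's empirical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
cloud_cover = rng.beta(a=0.9, b=0.6, size=500)        # synthetic daily cloud fractions
cloud_cover = np.clip(cloud_cover, 1e-6, 1 - 1e-6)    # avoid exact 0/1 in the fit

# Fix the support to [0, 1] so only the two shape parameters are estimated
a_hat, b_hat, loc, scale = stats.beta.fit(cloud_cover, floc=0, fscale=1)
ks_stat, p_value = stats.kstest(cloud_cover, "beta", args=(a_hat, b_hat, loc, scale))
print(a_hat, b_hat, ks_stat, p_value)
```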
NASA Astrophysics Data System (ADS)
Athron, Peter; Balázs, Csaba; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Kvellestad, Anders; McKay, James; Putze, Antje; Rogan, Chris; Scott, Pat; Weniger, Christoph; White, Martin
2018-01-01
We present the GAMBIT modules SpecBit, DecayBit and PrecisionBit. Together they provide a new framework for linking publicly available spectrum generators, decay codes and other precision observable calculations in a physically and statistically consistent manner. This allows users to automatically run various combinations of existing codes as if they are a single package. The modular design allows software packages fulfilling the same role to be exchanged freely at runtime, with the results presented in a common format that can easily be passed to downstream dark matter, collider and flavour codes. These modules constitute an essential part of the broader GAMBIT framework, a major new software package for performing global fits. In this paper we present the observable calculations, data, and likelihood functions implemented in the three modules, as well as the conventions and assumptions used in interfacing them with external codes. We also present 3-BIT-HIT, a command-line utility for computing mass spectra, couplings, decays and precision observables in the MSSM, which shows how the three modules can easily be used independently of GAMBIT.
Holman, B W B; Alvarenga, T I R C; van de Ven, R J; Hopkins, D L
2015-07-01
The Warner-Bratzler shear force (WBSF) of 335 lamb m. longissimus lumborum (LL) caudal and cranial ends was measured to examine and simulate the effect of replicate number (r: 1-8) on the precision of mean WBSF estimates and to compare LL caudal and cranial end WBSF means. All LL were sourced from two experimental flocks as part of the Information Nucleus slaughter programme (CRC for Sheep Industry Innovation) and analysed using a Lloyd Texture analyser with a Warner-Bratzler blade attachment. WBSF data were natural logarithm (ln) transformed before statistical analysis. Mean ln(WBSF) precision improved as r increased; however the practical implications support an r equal to 6, as precision improves only marginally with additional replicates. Increasing LL sample replication results in better ln(WBSF) precision compared with increasing r, provided that sample replicates are removed from the same LL end. Cranial end mean WBSF was 11.2 ± 1.3% higher than the caudal end. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
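The replicate-number question above can be illustrated with a small Monte Carlo simulation of how the spread of the per-sample mean ln(WBSF) shrinks as the number of replicates r grows. The variance components below are hypothetical, not the study's estimates.

```python
# Sketch of a replicate-number simulation: draw r replicate shear-force values per
# sample, take the mean of ln(WBSF), and track its spread as r grows.
import numpy as np

rng = np.random.default_rng(3)
true_ln_wbsf, sd_between, sd_within = np.log(30.0), 0.25, 0.20   # hypothetical components

def sd_of_sample_means(r, n_rep=2000):
    """Monte-Carlo SD of the per-sample mean ln(WBSF) when r replicates are cut."""
    sample_effect = rng.normal(0.0, sd_between, size=(n_rep, 1))
    replicates = true_ln_wbsf + sample_effect + rng.normal(0.0, sd_within, size=(n_rep, r))
    return replicates.mean(axis=1).std(ddof=1)

for r in range(1, 9):
    print(r, round(sd_of_sample_means(r), 4))
# The spread shrinks steeply at first and only marginally beyond roughly r = 6,
# consistent with the practical recommendation in the abstract.
```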
Kovač, Marko; Bauer, Arthur; Ståhl, Göran
2014-01-01
Backgrounds, Material and Methods To meet the demands of sustainable forest management and international commitments, European nations have designed a variety of forest-monitoring systems for specific needs. While the majority of countries are committed to independent, single-purpose inventorying, a minority of countries have merged their single-purpose forest inventory systems into integrated forest resource inventories. The statistical efficiencies of the Bavarian, Slovene and Swedish integrated forest resource inventory designs are investigated with the various statistical parameters of the variables of growing stock volume, shares of damaged trees, and deadwood volume. The parameters are derived by using the estimators for the given inventory designs. The required sample sizes are derived via the general formula for non-stratified independent samples and via statistical power analyses. The cost effectiveness of the designs is compared via two simple cost effectiveness ratios. Results In terms of precision, the most illustrative parameters of the variables are relative standard errors; their values range between 1% and 3% if the variables’ variations are low (s%<80%) and are higher in the case of higher variations. A comparison of the actual and required sample sizes shows that the actual sample sizes were deliberately set high to provide precise estimates for the majority of variables and strata. In turn, the successive inventories are statistically efficient, because they allow detecting the mean changes of variables with powers higher than 90%; the highest precision is attained for the changes of growing stock volume and the lowest for the changes of the shares of damaged trees. Two indicators of cost effectiveness also show that the time input spent for measuring one variable decreases with the complexity of inventories. Conclusion There is an increasing need for credible information on forest resources to be used for decision making and national and international policy making. Such information can be cost-efficiently provided through integrated forest resource inventories. PMID:24941120
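The required-sample-size calculation referenced above for a non-stratified independent sample follows the familiar relation between the coefficient of variation and the allowable relative error. The sketch below assumes the large-sample (normal) form of that formula; the exact expression used in the study may differ in detail.

```python
# Minimal sketch (assumed notation): sample size needed so that a variable with
# coefficient of variation s% is estimated within +/- e% at a given confidence level.
from scipy.stats import norm

def required_sample_size(cv_percent, allowable_error_percent, confidence=0.95):
    z = norm.ppf(0.5 + confidence / 2)
    return (z * cv_percent / allowable_error_percent) ** 2

# e.g. a variable with CV of 80% estimated to within +/- 3%
print(round(required_sample_size(80, 3)))     # roughly 2.7 thousand plots
```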
Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David
2015-01-01
New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.
A theoretical study on the bottlenecks of GPS phase ambiguity resolution in a CORS RTK Network
NASA Astrophysics Data System (ADS)
Odijk, D.; Teunissen, P.
2011-01-01
Crucial to the performance of GPS Network RTK positioning is that a user receives and applies correction information from a CORS Network. These corrections are necessary for the user to account for the atmospheric (ionospheric and tropospheric) delays and possibly orbit errors between his approximate location and the locations of the CORS Network stations. In order to provide the most precise corrections to users, the CORS Network processing should be based on integer resolution of the carrier phase ambiguities between the network's CORS stations. One of the main challenges is to reduce the convergence time, thus being able to quickly resolve the integer carrier phase ambiguities between the network's reference stations. Ideally, the network ambiguity resolution should be conducted within one single observation epoch, thus truly in real time. Unfortunately, single-epoch CORS Network RTK ambiguity resolution is currently not feasible and in the present contribution we study the bottlenecks preventing this. For current dual-frequency GPS the primary cause of these CORS Network integer ambiguity initialization times is the lack of a sufficiently large number of visible satellites. Although an increase in satellite number shortens the ambiguity convergence times, instantaneous CORS Network RTK ambiguity resolution is not feasible even with 14 satellites. It is further shown that increasing the number of stations within the CORS Network itself does not help ambiguity resolution much, since every new station introduces new ambiguities. The problem with CORS Network RTK ambiguity resolution is the presence of the atmospheric (mainly ionospheric) delays themselves and the fact that there are no external corrections that are sufficiently precise. We also show that external satellite clock corrections hardly contribute to CORS Network RTK ambiguity resolution, despite their quality, since the network satellite clock parameters and the ambiguities are almost completely uncorrelated. One positive is that the foreseen modernized GPS will have a very beneficial effect on CORS ambiguity resolution, because of an additional frequency with improved code precision.
A method for determining the weak statistical stationarity of a random process
NASA Technical Reports Server (NTRS)
Sadeh, W. Z.; Koper, C. A., Jr.
1978-01-01
A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
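A minimal sketch of the equivalent-ensemble construction described above is given below, using a synthetic stationary signal rather than a turbulence record: the long time history is segmented into equal records, ensemble averages are formed across records at each within-record time, and their time-invariance and agreement with a single-record time average are checked.

```python
# Sketch of the equivalent-ensemble idea: cut one long record into equal segments,
# average across segments at each within-segment time, and check that this
# "ensemble" average does not drift with time.  The signal here is synthetic.
import numpy as np

rng = np.random.default_rng(11)
signal = rng.normal(0.0, 1.0, size=200_000)            # long time history (stationary toy case)

n_records, record_len = 100, 2000
ensemble = signal[: n_records * record_len].reshape(n_records, record_len)

ensemble_mean = ensemble.mean(axis=0)                  # average over records at each time

# Weak stationarity: the ensemble averages should be time-invariant within sampling
# scatter; ergodicity: they should match the single-record time average.
print(ensemble_mean.std(), np.sqrt(1.0 / n_records))   # scatter vs. expected sampling error
print(ensemble_mean.mean(), ensemble[0].mean())        # ensemble vs. single-record time average
```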
Low-complexity stochastic modeling of wall-bounded shear flows
NASA Astrophysics Data System (ADS)
Zare, Armin
Turbulent flows are ubiquitous in nature and they appear in many engineering applications. Transition to turbulence, in general, increases skin-friction drag in air/water vehicles compromising their fuel-efficiency and reduces the efficiency and longevity of wind turbines. While traditional flow control techniques combine physical intuition with costly experiments, their effectiveness can be significantly enhanced by control design based on low-complexity models and optimization. In this dissertation, we develop a theoretical and computational framework for the low-complexity stochastic modeling of wall-bounded shear flows. Part I of the dissertation is devoted to the development of a modeling framework which incorporates data-driven techniques to refine physics-based models. We consider the problem of completing partially known sample statistics in a way that is consistent with underlying stochastically driven linear dynamics. Neither the statistics nor the dynamics are precisely known. Thus, our objective is to reconcile the two in a parsimonious manner. To this end, we formulate optimization problems to identify the dynamics and directionality of input excitation in order to explain and complete available covariance data. For problem sizes that general-purpose solvers cannot handle, we develop customized optimization algorithms based on alternating direction methods. The solution to the optimization problem provides information about critical directions that have maximal effect in bringing model and statistics in agreement. In Part II, we employ our modeling framework to account for statistical signatures of turbulent channel flow using low-complexity stochastic dynamical models. We demonstrate that white-in-time stochastic forcing is not sufficient to explain turbulent flow statistics and develop models for colored-in-time forcing of the linearized Navier-Stokes equations. We also examine the efficacy of stochastically forced linearized NS equations and their parabolized equivalents in the receptivity analysis of velocity fluctuations to external sources of excitation as well as capturing the effect of the slowly-varying base flow on streamwise streaks and Tollmien-Schlichting waves. In Part III, we develop a model-based approach to design surface actuation of turbulent channel flow in the form of streamwise traveling waves. This approach is capable of identifying the drag reducing trends of traveling waves in a simulation-free manner. We also use the stochastically forced linearized NS equations to examine the Reynolds number independent effects of spanwise wall oscillations on drag reduction in turbulent channel flows. This allows us to extend the predictive capability of our simulation-free approach to high Reynolds numbers.
Implantable optoelectronic probes for in vivo optogenetics.
Iseri, Ege; Kuzum, Duygu
2017-06-01
More than a decade has passed since optics and genetics came together and led to the emerging technologies of optogenetics. The advent of light-sensitive opsins made it possible to optically trigger the neurons into activation or inhibition by using visible light. The importance of spatiotemporally isolating a segment of a neural network and controlling nervous signaling in a precise manner has driven neuroscience researchers and engineers to invest great efforts in designing high precision in vivo implantable devices. These efforts have focused on delivery of sufficient power to deep brain regions, while monitoring neural activity with high resolution and fidelity. In this review, we report the progress made in the field of hybrid optoelectronic neural interfaces that combine optical stimulation with electrophysiological recordings. Different approaches that incorporate optical or electrical components on implantable devices are discussed in detail. Advantages of various different designs as well as practical and fundamental limitations are summarized to illuminate the future of neurotechnology development.
Following the dynamics of matter with femtosecond precision using the X-ray streaking method
David, C.; Karvinen, P.; Sikorski, M.; ...
2015-01-06
X-ray Free Electron Lasers (FELs) can produce extremely intense and very short pulses, down to below 10 femtoseconds (fs). Among the key applications are ultrafast time-resolved studies of dynamics of matter by observing responses to fast excitation pulses in a pump-probe manner. Detectors with sufficient time resolution for observing these processes are not available. Therefore, such experiments typically measure a sample's full dynamics by repeating multiple pump-probe cycles at different delay times. This conventional method assumes that the sample returns to an identical or very similar state after each cycle. Here we describe a novel approach that can provide a time trace of responses following a single excitation pulse, jitter-free, with fs timing precision. We demonstrate, in an X-ray diffraction experiment, how it can be applied to the investigation of ultrafast irreversible processes.
Stability, precision, and near-24-hour period of the human circadian pacemaker
NASA Technical Reports Server (NTRS)
Czeisler, C. A.; Duffy, J. F.; Shanahan, T. L.; Brown, E. N.; Mitchell, J. F.; Rimmer, D. W.; Ronda, J. M.; Silva, E. J.; Allan, J. S.; Emens, J. S.;
1999-01-01
Regulation of circadian period in humans was thought to differ from that of other species, with the period of the activity rhythm reported to range from 13 to 65 hours (median 25.2 hours) and the period of the body temperature rhythm reported to average 25 hours in adulthood, and to shorten with age. However, those observations were based on studies of humans exposed to light levels sufficient to confound circadian period estimation. Precise estimation of the periods of the endogenous circadian rhythms of melatonin, core body temperature, and cortisol in healthy young and older individuals living in carefully controlled lighting conditions has now revealed that the intrinsic period of the human circadian pacemaker averages 24.18 hours in both age groups, with a tight distribution consistent with other species. These findings have important implications for understanding the pathophysiology of disrupted sleep in older people.
Huang, Yimei; Lui, Harvey; Zhao, Jianhua; Wu, Zhenguo; Zeng, Haishan
2017-01-01
The successful application of lasers in the treatment of skin diseases and cosmetic surgery is largely based on the principle of conventional selective photothermolysis which relies strongly on the difference in the absorption between the therapeutic target and its surroundings. However, when the differentiation in absorption is not sufficient, collateral damage would occur due to indiscriminate and nonspecific tissue heating. To deal with such cases, we introduce a novel spatially selective photothermolysis method based on multiphoton absorption in which the radiant energy of a tightly focused near-infrared femtosecond laser beam can be directed spatially by aiming the laser focal point to the target of interest. We construct a multimodal optical microscope to perform and monitor the spatially selective photothermolysis. We demonstrate that precise alteration of the targeted tissue is achieved while leaving surrounding tissue intact by choosing appropriate femtosecond laser exposure with multimodal optical microscopy monitoring in real time.
Huang, Yimei; Lui, Harvey; Zhao, Jianhua; Wu, Zhenguo; Zeng, Haishan
2017-01-01
The successful application of lasers in the treatment of skin diseases and cosmetic surgery is largely based on the principle of conventional selective photothermolysis which relies strongly on the difference in the absorption between the therapeutic target and its surroundings. However, when the differentiation in absorption is not sufficient, collateral damage would occur due to indiscriminate and nonspecific tissue heating. To deal with such cases, we introduce a novel spatially selective photothermolysis method based on multiphoton absorption in which the radiant energy of a tightly focused near-infrared femtosecond laser beam can be directed spatially by aiming the laser focal point to the target of interest. We construct a multimodal optical microscope to perform and monitor the spatially selective photothermolysis. We demonstrate that precise alteration of the targeted tissue is achieved while leaving surrounding tissue intact by choosing appropriate femtosecond laser exposure with multimodal optical microscopy monitoring in real time. PMID:28255346
High accuracy LADAR scene projector calibration sensor development
NASA Astrophysics Data System (ADS)
Kim, Hajin J.; Cornell, Michael C.; Naumann, Charles B.; Bowden, Mark H.
2008-04-01
A sensor system for the characterization of infrared laser radar scene projectors has been developed. Available sensor systems do not provide sufficient range resolution to evaluate the high precision LADAR projector systems developed by the U.S. Army Research, Development and Engineering Command (RDECOM) Aviation and Missile Research, Development and Engineering Center (AMRDEC). With timing precision capability to a fraction of a nanosecond, it can confirm the accuracy of simulated return pulses from a nominal range of up to 6.5 km to a resolution of 4 cm. Increased range can be achieved through firmware reconfiguration. Two independent amplitude triggers measure both rise and fall time, providing a judgment of pulse shape and allowing estimation of the contained energy. Each return channel can measure up to 32 returns per trigger, characterizing each return pulse independently. Current efforts include extending the capability to 8 channels. This paper outlines the development, testing, capabilities and limitations of this new sensor system.
Probing the fermionic Higgs portal at lepton colliders
Fedderke, Michael A.; Lin, Tongyan; Wang, Lian -Tao
2016-04-26
Here, we study the sensitivity of future electron-positron colliders to UV completions of the fermionic Higgs portal operator H†Hχ̄χ. Measurements of precision electroweak S and T parameters and the e⁺e⁻ → Zh cross-section at the CEPC, FCC-ee, and ILC are considered. The scalar completion of the fermionic Higgs portal is closely related to the scalar Higgs portal, and we summarize existing results. We devote the bulk of our analysis to a singlet-doublet fermion completion. Assuming the doublet is sufficiently heavy, we construct the effective field theory (EFT) at dimension-6 in order to compute contributions to the observables. We also provide full one-loop results for S and T in the general mass parameter space. In both completions, future precision measurements can probe the new states at the (multi-)TeV scale, beyond the direct reach of the LHC.
Probing the fermionic Higgs portal at lepton colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fedderke, Michael A.; Lin, Tongyan; Wang, Lian -Tao
Here, we study the sensitivity of future electron-positron colliders to UV completions of the fermionic Higgs portal operator H†Hχ̄χ. Measurements of precision electroweak S and T parameters and the e⁺e⁻ → Zh cross-section at the CEPC, FCC-ee, and ILC are considered. The scalar completion of the fermionic Higgs portal is closely related to the scalar Higgs portal, and we summarize existing results. We devote the bulk of our analysis to a singlet-doublet fermion completion. Assuming the doublet is sufficiently heavy, we construct the effective field theory (EFT) at dimension-6 in order to compute contributions to the observables. We also provide full one-loop results for S and T in the general mass parameter space. In both completions, future precision measurements can probe the new states at the (multi-)TeV scale, beyond the direct reach of the LHC.
2011-01-01
Nanoscaled materials are attractive building blocks for the hierarchical assembly of functional nanodevices, which exhibit diverse performance and simultaneous functions. We fabricated semiconductor nano-probes of tapered ZnS nanowires through melting and solidification by an electro-thermal process; the as-prepared nano-probes can then manipulate nanomaterials, including semiconductor/metal nanowires and nanoparticles, to a desired location through sufficient electrostatic force and without structural or functional damage. With the advantages of high precision and a large working domain, we can move, position, and interconnect individual nanowires to construct nanodevices. Using this manipulation technique, a nanodevice made of three vertically interconnected nanowires, i.e., a diode, was realized and showed excellent electrical properties. This technique may be useful for fabricating electronic devices based on the moving, positioning, and interconnecting of nanowires, and may overcome fundamental limitations of conventional mechanical fabrication. PMID:21794151
Combined Feature Based and Shape Based Visual Tracker for Robot Navigation
NASA Technical Reports Server (NTRS)
Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.
2005-01-01
We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.
Pendulums, Pedagogy, and Matter: Lessons from the Editing of Newton's Principia
NASA Astrophysics Data System (ADS)
Biener, Zvi; Smeenk, Chris
Teaching Newtonian physics involves the replacement of students' ideas about physical situations with precise concepts appropriate for mathematical applications. This paper focuses on the concepts of 'matter' and 'mass'. We suggest that students, like some pre-Newtonian scientists we examine, use these terms in a way that conflicts with their Newtonian meaning. Specifically, 'matter' and 'mass' indicate to them the sorts of things that are tangible, bulky, and take up space. In Newtonian mechanics, however, the terms are defined by Newton's Second Law: 'mass' is simply a measure of the acceleration generated by an impressed force. We examine the relationship between these conceptions as it was discussed by Newton and his editor, Roger Cotes, when analyzing a series of pendulum experiments. We suggest that these experiments, as well as more sophisticated computer simulations, can be used in the classroom to sufficiently differentiate the colloquial and precise meaning of these terms.
Principles for the dynamic maintenance of cortical polarity
Marco, Eugenio; Wedlich-Soldner, Roland; Li, Rong; Altschuler, Steven J.; Wu, Lani F.
2007-01-01
Diverse cell types require the ability to dynamically maintain polarized membrane protein distributions through balancing transport and diffusion. However, design principles underlying dynamically maintained cortical polarity are not well understood. Here we constructed a mathematical model for characterizing the morphology of dynamically polarized protein distributions. We developed analytical approaches for measuring all model parameters from single-cell experiments. We applied our methods to a well-characterized system for studying polarized membrane proteins: budding yeast cells expressing activated Cdc42. We found that balanced diffusion and colocalized transport to and from the plasma membrane were sufficient for accurately describing polarization morphologies. Surprisingly, the model predicts that polarized regions are defined with a precision that is nearly optimal for measured transport rates, and that polarity can be dynamically stabilized through positive feedback with directed transport. Our approach provides a step towards understanding how biological systems shape spatially precise, unambiguous cortical polarity domains using dynamic processes. PMID:17448998
Aquacells — Flagellates under long-term microgravity and potential usage for life support systems
NASA Astrophysics Data System (ADS)
Häder, Donat-P.; Richter, Peter R.; Strauch, S. M.; Schuster, M.
2006-09-01
The motile behavior of the unicellular photosynthetic flagellate Euglena gracilis was studied during a two-week mission on the Russian satellite Foton M2. The precision of gravitactic orientation was high before launch and, as expected, the cells were unoriented during microgravity. While after previous short-term TEXUS flights the precision of orientation was as high as before launch, after this long-term mission it took several hours for the organisms to regain their gravitaxis. The percentage of motile cells and the swimming velocity of the remaining motile cells were also considerably lower than in the ground control. In preparatory experiments, the flagellate Euglena was shown to produce considerable amounts of photosynthetically generated oxygen. In a coupling experiment in a prototype for a planned space mission on Foton M3, the photosynthetic producers were shown to supply sufficient amounts of oxygen to a fish compartment with 35 larval cichlids, Oreochromis mossambicus.
Implantable optoelectronic probes for in vivo optogenetics
NASA Astrophysics Data System (ADS)
Iseri, Ege; Kuzum, Duygu
2017-06-01
More than a decade has passed since optics and genetics came together and led to the emerging technologies of optogenetics. The advent of light-sensitive opsins made it possible to optically drive neurons into activation or inhibition using visible light. The importance of spatiotemporally isolating a segment of a neural network and controlling nervous signaling in a precise manner has driven neuroscience researchers and engineers to invest great efforts in designing high-precision in vivo implantable devices. These efforts have focused on delivering sufficient power to deep brain regions while monitoring neural activity with high resolution and fidelity. In this review, we report the progress made in the field of hybrid optoelectronic neural interfaces that combine optical stimulation with electrophysiological recordings. Different approaches that incorporate optical or electrical components on implantable devices are discussed in detail. Advantages of various designs as well as practical and fundamental limitations are summarized to illuminate the future of neurotechnology development.
Mapped Landmark Algorithm for Precision Landing
NASA Technical Reports Server (NTRS)
Johnson, Andrew; Ansar, Adnan; Matthies, Larry
2007-01-01
A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precise, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and it can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which in turn improves the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching, which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
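As an illustration of the "correlation in the frequency domain" idea behind the FFT Map Matching step, the following Python sketch locates a template in a larger map image with an FFT-based cross-correlation. It is a simplified stand-in, not the flight algorithm: full normalized correlation would also divide by local image energy, and the subpixel refinement, feature selection, and homography steps are omitted.

    # Simplified frequency-domain template matching (zero-mean cross-correlation).
    import numpy as np

    def fft_match(map_img, template):
        """Return the integer (row, col) offset of the template within the map image."""
        m = map_img - map_img.mean()
        t = template - template.mean()
        t_pad = np.zeros_like(m)
        t_pad[:t.shape[0], :t.shape[1]] = t
        # Circular cross-correlation via the FFT; the peak marks the best match.
        corr = np.fft.ifft2(np.fft.fft2(m) * np.conj(np.fft.fft2(t_pad))).real
        return tuple(int(i) for i in np.unravel_index(np.argmax(corr), corr.shape))

    rng = np.random.default_rng(0)
    scene = rng.standard_normal((256, 256))
    print(fft_match(scene, scene[40:72, 90:122]))  # -> (40, 90)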
Bode, Rita K; Lai, Jin-shei; Dineen, Kelly; Heinemann, Allen W; Shevrin, Daniel; Von Roenn, Jamie; Cella, David
2006-01-01
We expanded an existing 33-item physical function (PF) item bank with a sufficient number of items to enable computerized adaptive testing (CAT). Ten items were written to expand the bank and the new item pool was administered to 295 people with cancer. For this analysis of the new pool, seven poorly performing items were identified for further examination. This resulted in a bank with items that define an essentially unidimensional PF construct, cover a wide range of that construct, reliably measure the PF of persons with cancer, and distinguish differences in self-reported functional performance levels. We also developed a 5-item (static) assessment form ("BriefPF") that can be used in clinical research to express scores on the same metric as the overall bank. The BriefPF was compared to the PF-10 from the Medical Outcomes Study SF-36. Both short forms significantly differentiated persons across functional performance levels. While the entire bank was more precise across the PF continuum than either short form, there were differences in the area of the continuum in which each short form was more precise: the BriefPF was more precise than the PF-10 at the lower functional levels and the PF-10 was more precise than the BriefPF at the higher levels. Future research on this bank will include the development of a CAT version, the PF-CAT.
Lossdörfer, Stefan; Schwestka-Polly, Rainer; Wiechmann, Dirk
2013-09-01
Bracket slots and orthodontic archwires offering high dimensional precision are needed for fully customized lingual appliances. We aimed to investigate whether high-precision appliances of this type enable dentoalveolar compensation of class III malocclusion so that lower incisor inclination at the end of treatment will closely match the anticipated situation as defined in a pretreatment setup. This retrospective study included a total of 34 consecutive patients who had worn a fully customized lingual appliance to achieve dentoalveolar compensation for class III malocclusion by intermaxillary elastics, or proximal enamel reduction, or extraction of teeth in one or both jaws. Casts fabricated at different points in time were three-dimensionally scanned to analyze how precisely the lower incisor inclinations envisioned in the setup were implemented in clinical practice. Aside from minor deviations of ±3.75°, the lower incisor inclinations were clinically implemented as planned even in patients with major sagittal discrepancies. Treatment goals predefined in a setup of dentoalveolar compensation for class III malocclusion can be very precisely achieved via a customized lingual appliance. Correct planning can prevent undesirable lingual tipping of the lower incisors. This finding should not encourage a more liberal use of dentoalveolar compensation, but it should heighten clinicians' awareness of how essential it is to sufficiently consider the individual anatomy of the dentoalveolar complex during treatment planning.
Detection of bio-signature by microscopy and mass spectrometry
NASA Astrophysics Data System (ADS)
Tulej, M.; Wiesendanger, R.; Neuland, M. B.; Meyer, S.; Wurz, P.; Neubeck, A.; Ivarsson, M.; Riedo, V.; Moreno-Garcia, P.; Riedo, A.; Knopp, G.
2017-09-01
We demonstrate the detection of micro-sized fossilized bacteria by means of microscopy and mass spectrometry. The characteristic structures of life-like forms are visualized with micrometre spatial resolution, and mass spectrometric analyses deliver the elemental and isotope composition of host and fossilized materials. Our studies show that high selectivity in isolating fossilized material from the host phase can be achieved by combining microscope visualization (location), a laser ablation ion source with a sufficiently small laser spot size, and a depth-profiling method. The mass spectrometric measurements can be conducted with sufficiently high accuracy and precision to yield the quantitative elemental and isotope composition of micro-sized objects. The current performance of the instrument allows measurement of isotope fractionation at the per mill level and an unambiguous determination of the origin of the investigated species by combining optical visualization of the samples (morphology and texture) with chemical characterization of the host and of the micro-sized structures embedded in it. Our isotope analyses involved the bio-relevant B, C, S, and Ni isotopes, which could be measured with sufficient accuracy to draw conclusions about the nature of the micro-sized objects.
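For context on the "per mill level" figure, isotope fractionation is conventionally reported in delta notation, delta = (R_sample/R_standard - 1) * 1000. The short sketch below illustrates the arithmetic with a made-up sample ratio and an approximate VPDB 13C/12C reference ratio; neither number comes from the record.

    # Delta (per mil) notation for isotope fractionation; values are illustrative only.
    def delta_permil(ratio_sample: float, ratio_standard: float) -> float:
        """delta = (R_sample / R_standard - 1) * 1000, in per mil."""
        return (ratio_sample / ratio_standard - 1.0) * 1000.0

    VPDB_13C_12C = 0.011180  # approximate reference ratio (assumed here)
    print(round(delta_permil(0.011170, VPDB_13C_12C), 2))  # about -0.89 per mil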
User's Manual for Downscaler Fusion Software
Recently, a series of 3 papers has been published in the statistical literature that details the use of downscaling to obtain more accurate and precise predictions of air pollution across the conterminous U.S. This downscaling approach combines CMAQ gridded numerical model output...
15 CFR 200.103 - Consulting and advisory services.
Code of Federal Regulations, 2013 CFR
2013-01-01
...., details of design and construction, operational aspects, unusual or extreme conditions, methods of statistical control of the measurement process, automated acquisition of laboratory data, and data reduction... group seminars on the precision measurement of specific types of physical quantities, offering the...
15 CFR 200.103 - Consulting and advisory services.
Code of Federal Regulations, 2011 CFR
2011-01-01
...., details of design and construction, operational aspects, unusual or extreme conditions, methods of statistical control of the measurement process, automated acquisition of laboratory data, and data reduction... group seminars on the precision measurement of specific types of physical quantities, offering the...
A roughness-corrected index of relative bed stability for regional stream surveys
Quantitative regional assessments of streambed sedimentation and its likely causes are hampered because field investigations typically lack the requisite sample size, measurements, or precision for sound geomorphic and statistical interpretation. We adapted an index of relative b...
ADEQUACY OF VISUALLY CLASSIFIED PARTICLE COUNT STATISTICS FROM REGIONAL STREAM HABITAT SURVEYS
Streamlined sampling procedures must be used to achieve a sufficient sample size with limited resources in studies undertaken to evaluate habitat status and potential management-related habitat degradation at a regional scale. At the same time, these sampling procedures must achi...
Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C
2018-03-07
Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
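As an illustration of the kinds of approximations surveyed (the review compares several variants and does not endorse a single formula), the sketch below shows a range-based stand-in for a missing SD and a quartile-based stand-in for a missing mean; the example numbers are invented.

    # Hedged sketch: two simple approximations for missing summary statistics.
    def sd_from_range(minimum: float, maximum: float) -> float:
        """Approximate the SD as range/4, a common rule of thumb for moderate samples."""
        return (maximum - minimum) / 4.0

    def mean_from_quartiles(q1: float, median: float, q3: float) -> float:
        """Approximate the mean as (q1 + median + q3) / 3."""
        return (q1 + median + q3) / 3.0

    # Example: a trial reporting only min=2, q1=5, median=8, q3=12, max=30.
    print(sd_from_range(2, 30))                      # 7.0
    print(round(mean_from_quartiles(5, 8, 12), 2))   # 8.33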
NASA Astrophysics Data System (ADS)
Roberts, B. M.; Blewitt, G.; Dailey, C.; Derevianko, A.
2018-04-01
We analyze the prospects of employing a distributed global network of precision measurement devices as a dark matter and exotic physics observatory. In particular, we consider the atomic clocks of the global positioning system (GPS), consisting of a constellation of 32 medium-Earth orbit satellites equipped with either Cs or Rb microwave clocks and a number of Earth-based receiver stations, some of which employ highly stable H-maser atomic clocks. High-accuracy timing data are available for almost two decades. By analyzing the satellite and terrestrial atomic clock data, it is possible to search for transient signatures of exotic physics, such as "clumpy" dark matter and dark energy, effectively transforming the GPS constellation into a 50 000 km aperture sensor array. Here we characterize the noise of the GPS satellite atomic clocks, describe the search method based on Bayesian statistics, and test the method using simulated clock data. We present the projected discovery reach of our method and demonstrate that it can surpass the existing constraints by several orders of magnitude for certain models. Our method is not limited in scope to GPS or atomic clock networks and can also be applied to other networks of precision measurement devices.
NASA GPM GV Science Requirements
NASA Technical Reports Server (NTRS)
Smith, E.
2003-01-01
An important scientific objective of the NASA portion of the GPM Mission is to generate quantitatively-based error characterization information along with the rainrate retrievals emanating from the GPM constellation of satellites. These data must serve four main purposes: (1) they must be of sufficient quality, uniformity, and timeliness to govern the observation weighting schemes used in the data assimilation modules of numerical weather prediction models; (2) they must extend over that portion of the globe accessible by the GPM core satellite on which the NASA GV program is focused (approx. 65° inclination); (3) they must have sufficient specificity to enable detection of physically-formulated microphysical and meteorological weaknesses in the standard physical level 2 rainrate algorithms to be used in the GPM Precipitation Processing System (PPS), i.e., algorithms which will have evolved from the TRMM standard physical level 2 algorithms; and (4) they must support the use of physical error modeling as a primary validation tool and as the eventual replacement of the conventional GV approach of statistically intercomparing surface rainrates from ground and satellite measurements. This approach to ground validation research represents a paradigm shift vis-à-vis the program developed for the TRMM mission, which conducted ground validation largely as a statistical intercomparison process between raingauge-derived or radar-derived rainrates and the TRMM satellite rainrate retrievals -- long after the original satellite retrievals were archived. This approach has been able to quantify averaged rainrate differences between the satellite algorithms and the ground instruments, but has not been able to explain causes of algorithm failures or produce error information directly compatible with the cost functions of data assimilation schemes. These schemes require periodic and near-realtime bias uncertainty (i.e., global space-time distributed conditional accuracy of the retrieved rainrates) and local error covariance structure (i.e., global space-time distributed error correlation information for the local 4-dimensional space-time domain -- or in simpler terms, the matrix form of precision error). This can only be accomplished by establishing a network of high-quality, heavily instrumented supersites selectively distributed at a few oceanic, continental, and coastal sites. Economics and pragmatics dictate that the network must be made up of a relatively small number of sites (6-8) created through international cooperation. This presentation will address some of the details of the methodology behind the error characterization approach, some proposed solutions for expanding site-developed error properties to regional scales, a data processing and communications concept that would enable rapid implementation of algorithm improvement by the algorithm developers, and the likely available options for developing the supersite network.
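To make the two error quantities named above concrete, the hypothetical Python sketch below computes a per-site bias and a site-by-site error covariance matrix from simulated collocated satellite-minus-ground rainrate differences; the numbers and the simple error model are assumptions for illustration, not GPM data.

    # Hypothetical sketch of bias and error covariance from collocated rainrate pairs.
    import numpy as np

    rng = np.random.default_rng(1)
    n_times, n_sites = 500, 4
    ground = rng.gamma(shape=2.0, scale=1.5, size=(n_times, n_sites))         # "supersite" rainrates
    satellite = 1.1 * ground + rng.normal(0.0, 0.8, size=(n_times, n_sites))  # toy retrievals

    errors = satellite - ground
    bias = errors.mean(axis=0)                 # per-site bias of the retrievals
    error_cov = np.cov(errors, rowvar=False)   # site-by-site error covariance matrix

    print("bias per site:", np.round(bias, 2))
    print("error covariance:\n", np.round(error_cov, 2))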
Lee, L.; Helsel, D.
2005-01-01
Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
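The sketch below is a much-simplified, single-detection-limit illustration of regression on order statistics; the published tools (the R/S add-on package) handle multiple detection limits and use more careful plotting-position formulas, so treat this only as a sketch of the idea.

    # Simplified single-detection-limit ROS: fit a lognormal to the detected values
    # by regressing log(concentration) on normal quantiles of plotting positions,
    # then impute the censored observations from the fitted line.
    import numpy as np
    from scipy import stats

    def simple_ros(detects, n_censored):
        n = len(detects) + n_censored
        pp = np.arange(1, n + 1) / (n + 1.0)     # Weibull plotting positions
        q = stats.norm.ppf(pp)                   # corresponding normal quantiles
        # Assumes all nondetects fall below all detected values (single detection limit).
        slope, intercept, *_ = stats.linregress(q[n_censored:], np.log(np.sort(detects)))
        imputed = np.exp(intercept + slope * q[:n_censored])
        return np.concatenate([imputed, detects])

    filled = simple_ros(np.array([2.1, 3.5, 4.0, 6.2, 9.8, 15.0]), n_censored=4)
    print(round(filled.mean(), 2), round(filled.std(ddof=1), 2))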
Precise measurement of scleral radius using anterior eye profilometry.
Jesus, Danilo A; Kedzia, Renata; Iskander, D Robert
2017-02-01
To develop a new and precise methodology to measure the scleral radius based on the anterior eye surface. The Eye Surface Profiler (ESP, Eaglet-Eye, Netherlands) was used to acquire the anterior eye surface of 23 emmetropic subjects aged 28.1±6.6 years (mean±standard deviation), ranging from 20 to 45 years. The scleral radius was obtained by approximating the topographical scleral data to a sphere using least-squares fitting and considering the axial length as a reference point. To better understand the role of the scleral radius in ocular biometry, measurements of corneal radius, central corneal thickness, anterior chamber depth, and white-to-white corneal diameter were acquired with the IOLMaster 700 (Carl Zeiss Meditec AG, Jena, Germany). The estimated scleral radius (11.2±0.3 mm) was shown to be highly precise, with a coefficient of variation of 0.4%. A statistically significant correlation between axial length and scleral radius (R² = 0.957, p < 0.001) was observed. Moreover, corneal radius (R² = 0.420, p < 0.001), anterior chamber depth (R² = 0.141, p = 0.039), and white-to-white corneal diameter (R² = 0.146, p = 0.036) also showed statistically significant correlations with the scleral radius. Lastly, no correlation was observed between the scleral radius and central corneal thickness (R² = 0.047, p = 0.161). Three-dimensional topography of the anterior eye acquired with the Eye Surface Profiler, together with an estimate of the axial length, can be used to calculate the scleral radius with high precision. Copyright © 2016 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
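The core numerical step described, approximating topographical points by a sphere with least squares, can be sketched as below; the paper's exact parameterisation and its use of the axial length as a reference point are not reproduced, and the synthetic points are invented.

    # Linear least-squares sphere fit: solve 2*c.p + d = |p|^2 with d = r^2 - |c|^2.
    import numpy as np

    def fit_sphere(points):
        """Return (center, radius) of the least-squares sphere through an Nx3 array."""
        A = np.column_stack([2.0 * points, np.ones(len(points))])
        b = (points ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center, d = sol[:3], sol[3]
        return center, float(np.sqrt(d + center @ center))

    rng = np.random.default_rng(2)
    directions = rng.standard_normal((500, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    pts = 11.2 * directions + rng.normal(0.0, 0.05, (500, 3))  # noisy 11.2 mm sphere
    _, radius = fit_sphere(pts)
    print(round(radius, 2))  # ~11.2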
Berg, Wolfgang; Bechler, Robin; Laube, Norbert
2009-01-01
Since its first publication in 2000, the BONN-Risk-Index (BRI) has been successfully used to determine the calcium oxalate (CaOx) crystallization risk from urine samples. To date, a BRI-measuring device, the "Urolizer", has been developed, operating automatically and requiring only a minimum of preparation. Two major objectives were pursued: determination of Urolizer precision, and determination of the influence of 24-h urine storage at moderate temperatures on BRI. 24-h urine samples from 52 CaOx stone-formers were collected. A total of 37 urine samples were used for the investigation of Urolizer precision by performing six independent BRI determinations in series. In total, 30 samples were taken for the additional investigation of urine storability. Each sample was measured three times: directly after collection, after 24-h storage at T = 21 °C, and after 24-h cooling at T = 4 °C. Outcomes were statistically tested for identity with regard to the immediately obtained results. Repeat measurements for the evaluation of Urolizer precision revealed statistical identity of the data (p > 0.05). 24-h storage of urine at both tested temperatures did not significantly affect BRI (p > 0.05). The pilot-run Urolizer shows high analytical reliability. The innovative analysis device may be especially suited for urologists specializing in urolithiasis treatment. The possibility of urine storage at moderate temperatures without loss of analysis quality further demonstrates the applicability of the BRI method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holladay, S.K.; Anderson, H.M.; Benson, S.B.
Quality assurance (QA) objectives for Phase 2 were that (1) scientific data generated would withstand scientific and legal scrutiny; (2) data would be gathered using appropriate procedures for sample collection, sample handling and security, chain of custody, laboratory analyses, and data reporting; (3) data would be of known precision and accuracy; and (4) data would meet data quality objectives defined in the Phase 2 Sampling and Analysis Plan. A review of the QA systems and quality control (QC) data associated with the Phase 2 investigation is presented to evaluate whether the data were of sufficient quality to satisfy Phase 2 objectives. The data quality indicators of precision, accuracy, representativeness, comparability, completeness, and sensitivity were evaluated to determine any limitations associated with the data. Data were flagged with qualifiers that were associated with appropriate reason codes and documentation relating the qualifiers to the reviewer of the data. These qualifiers were then consolidated into an overall final qualifier to represent the quality of the data to the end user. In summary, reproducible, precise, and accurate measurements consistent with CRRI objectives and the limitations of the sampling and analytical procedures used were obtained for the data collected in support of the Phase 2 Remedial Investigation.
Rajan, Sekar; Colaco, Socorrina; Ramesh, N; Meyyanathan, Subramania Nainar; Elango, K
2014-02-01
This study describes the development and validation of dissolution tests for sustained-release Dextromethorphan hydrobromide tablets using an HPLC method. Chromatographic separation was achieved on a C18 column utilizing 0.5% triethylamine (pH 7.5) and acetonitrile in the ratio of 50:50. The detection wavelength was 280 nm. The method was validated and the response was found to be linear in the drug concentration range of 10-80 µg mL⁻¹. Suitable conditions were established after testing sink conditions, dissolution medium, and agitation intensity. The best dissolution conditions tested were applied to evaluate the dissolution profiles of the Dextromethorphan hydrobromide tablets. The method was established to have sufficient intermediate precision, as similar separation was achieved on another instrument handled by different operators. Mean recovery was 101.82%. Intra-run precisions (% RSD) for three different concentrations were 1.23, 1.10, 0.72 and 1.57, 1.69, 0.95, and inter-run precisions were 0.83, 1.36, and 1.57%, respectively. The method was successfully applied to the dissolution study of the developed Dextromethorphan hydrobromide tablets.
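The precision figures quoted above are percent relative standard deviations; the short sketch below shows how such % RSD values and a linearity check over 10-80 µg/mL would be computed from replicate data. All numbers in the example are invented, not taken from the study.

    # Illustrative % RSD and calibration-linearity calculations on made-up data.
    import numpy as np

    def percent_rsd(values):
        return 100.0 * values.std(ddof=1) / values.mean()

    replicates = np.array([99.8, 101.2, 100.5, 100.9, 99.5, 100.3])  # % of nominal
    print(round(percent_rsd(replicates), 2))

    conc = np.array([10.0, 20.0, 40.0, 60.0, 80.0])        # ug/mL
    area = np.array([152.0, 305.0, 612.0, 915.0, 1224.0])  # detector response
    slope, intercept = np.polyfit(conc, area, 1)
    r2 = np.corrcoef(conc, area)[0, 1] ** 2
    print(round(slope, 2), round(intercept, 2), round(r2, 4))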
High-resolution 40Ar/39Ar chronology of Oligocene volcanic rocks, San Juan Mountains, Colorado
Lanphere, M.A.
1988-01-01
The central San Juan caldera complex consists of seven calderas from which eight major ash-flow tuffs were erupted during a period of intense volcanic activity that lasted for approximately 2 m.y., about 26-28 Ma. The analytical precision of conventional K-Ar dating in this time interval is not sufficient to unambiguously resolve this complex history. However, 40Ar/39Ar incremental-heating experiments provide data for a high-resolution chronology that is consistent with stratigraphic relations. Weighted-mean age-spectrum plateau ages of biotite and sanidine are the most precise, with standard deviations ranging from 0.08 to 0.21 m.y. The pooled estimate of standard deviation for the plateau ages of 12 minerals is about 0.5 percent, or about 125,000 to 135,000 years. Age measurements on coexisting minerals from one tuff and on two samples of each of two other tuffs indicate that a precision in the age of a tuff of better than 100,000 years can be achieved at 27 Ma. New data indicate that the San Luis caldera is the youngest caldera in the central complex, not the Creede caldera as previously thought. © 1988.
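The weighted-mean plateau ages referred to above are conventionally inverse-variance weighted means of the accepted heating steps; a minimal sketch of that calculation follows, with invented step ages and uncertainties.

    # Inverse-variance weighted mean age and its 1-sigma uncertainty (hypothetical steps).
    import numpy as np

    def weighted_mean_age(ages_ma, sigmas_ma):
        w = 1.0 / np.asarray(sigmas_ma) ** 2
        mean = np.sum(w * ages_ma) / np.sum(w)
        return mean, np.sqrt(1.0 / np.sum(w))

    steps = np.array([27.52, 27.61, 27.58, 27.55, 27.60])  # Ma, hypothetical plateau steps
    errors = np.array([0.10, 0.12, 0.09, 0.11, 0.10])      # Ma, 1 sigma
    mean, sigma = weighted_mean_age(steps, errors)
    print(f"{mean:.2f} +/- {sigma:.2f} Ma")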
Precise comparisons of bottom-pressure and altimetric ocean tides
NASA Astrophysics Data System (ADS)
Ray, R. D.
2013-09-01
A new set of pelagic tide determinations is constructed from seafloor pressure measurements obtained at 151 sites in the deep ocean. To maximize precision of estimated tides, only stations with long time series are used; median time series length is 567 days. Geographical coverage is considerably improved by use of the international tsunami network, but coverage in the Indian Ocean and South Pacific is still weak. As a tool for assessing global ocean tide models, the data set is considerably more reliable than older data sets: the root-mean-square difference with a recent altimetric tide model is approximately 5 mm for the M2 constituent. Precision is sufficiently high to allow secondary effects in altimetric and bottom-pressure tide differences to be studied. The atmospheric tide in bottom pressure is clearly detected at the S1, S2, and T2 frequencies. The altimetric tide model is improved if satellite altimetry is corrected for crustal loading by the atmospheric tide. Models of the solid body tide can also be constrained. The free core-nutation effect in the K1 Love number is easily detected, but the overall estimates are not as accurate as a recent determination with very long baseline interferometry.
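One common convention for a constituent-level root-mean-square difference such as the 5 mm figure quoted above (assumed here, since the record does not spell it out) treats each harmonic constant as a complex amplitude A·exp(i·phase) and averages the squared difference over time and stations; a short sketch with invented M2 values follows.

    # RMS difference between two sets of harmonic constants for one constituent.
    import numpy as np

    def constituent_rms(amp1, pha1_deg, amp2, pha2_deg):
        """The factor 1/2 comes from time-averaging the squared cosine signal."""
        z1 = amp1 * np.exp(1j * np.deg2rad(pha1_deg))
        z2 = amp2 * np.exp(1j * np.deg2rad(pha2_deg))
        return float(np.sqrt(0.5 * np.mean(np.abs(z1 - z2) ** 2)))

    # Hypothetical M2 amplitudes (cm) and phase lags (deg) at three stations.
    print(round(constituent_rms(np.array([50.0, 80.0, 30.0]), np.array([120.0, 45.0, 300.0]),
                                np.array([50.4, 79.5, 30.2]), np.array([121.0, 44.5, 299.0])), 2))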
Precise Comparisons of Bottom-Pressure and Altimetric Ocean Tides
NASA Technical Reports Server (NTRS)
Ray, Richard D.
2013-01-01
A new set of pelagic tide determinations is constructed from seafloor pressure measurements obtained at 151 sites in the deep ocean. To maximize precision of estimated tides, only stations with long time series are used; median time series length is 567 days. Geographical coverage is considerably improved by use of the international tsunami network, but coverage in the Indian Ocean and South Pacific is still weak. As a tool for assessing global ocean tide models, the data set is considerably more reliable than older data sets: the root-mean-square difference with a recent altimetric tide model is approximately 5 mm for the M2 constituent. Precision is sufficiently high to allow secondary effects in altimetric and bottom-pressure tide differences to be studied. The atmospheric tide in bottom pressure is clearly detected at the S1, S2, and T2 frequencies. The altimetric tide model is improved if satellite altimetry is corrected for crustal loading by the atmospheric tide. Models of the solid body tide can also be constrained. The free core-nutation effect in the K1 Love number is easily detected, but the overall estimates are not as accurate as a recent determination with very long baseline interferometry.
Limits of Risk Predictability in a Cascading Alternating Renewal Process Model.
Lin, Xin; Moussawi, Alaa; Korniss, Gyorgy; Bakdash, Jonathan Z; Szymanski, Boleslaw K
2017-07-27
Most risk analysis models systematically underestimate the probability and impact of catastrophic events (e.g., economic crises, natural disasters, and terrorism) by not taking into account interconnectivity and interdependence of risks. To address this weakness, we propose the Cascading Alternating Renewal Process (CARP) to forecast interconnected global risks. However, assessments of the model's prediction precision are limited by lack of sufficient ground truth data. Here, we establish prediction precision as a function of input data size by using alternative long ground truth data generated by simulations of the CARP model with known parameters. We illustrate the approach on a model of fires in artificial cities assembled from basic city blocks with diverse housing. The results confirm that parameter recovery variance exhibits power law decay as a function of the length of available ground truth data. Using CARP, we also demonstrate estimation using a disparate dataset that also has dependencies: real-world prediction precision for the global risk model based on the World Economic Forum Global Risk Report. We conclude that the CARP model is an efficient method for predicting catastrophic cascading events with potential applications to emerging local and global interconnected risks.
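The power-law decay of parameter-recovery variance with ground-truth length can be estimated with a simple log-log fit, as sketched below on synthetic variances (the exponent and data here are invented, not the paper's results).

    # Log-log fit of recovery variance versus ground-truth length (synthetic data).
    import numpy as np

    lengths = np.array([1e2, 3e2, 1e3, 3e3, 1e4])
    variances = 5.0 / lengths * np.exp(np.random.default_rng(3).normal(0.0, 0.05, 5))

    slope, intercept = np.polyfit(np.log(lengths), np.log(variances), 1)
    print(f"estimated decay exponent ~ {-slope:.2f}")  # near 1 for this synthetic example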
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.
2017-06-01
A new method for precise subpixel edge estimation is presented. The principle of the method is iterative image approximation in 2D with subpixel accuracy until a simulated image is found that matches the acquired image. A numerical image model is presented consisting of three parts: an edge model, an object and background brightness distribution model, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by means of conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally effective procedure for the merit function calculation, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point locations. The method is capable of compensating lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method with a digital micromirror device used to physically simulate an object with known edge geometry is shown. Experimental results for various high-temperature materials within the temperature range of 1000°C to 2400°C are presented.
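A much-reduced sketch of the fitting idea follows: simulate a blurred edge with a subpixel position parameter and minimise the L2 merit function with a conjugate-gradient optimiser. The real method models the full 2D image, brightness distributions, and lens aberrations with diffraction; this 1D example and its numbers are assumptions for illustration only.

    # 1D toy version: recover a subpixel edge position by CG minimisation of an L2 merit.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import erf

    x = np.arange(64, dtype=float)

    def simulate(edge_pos, lo, hi, blur):
        """Blurred step edge from background level 'lo' to object level 'hi'."""
        return lo + (hi - lo) * 0.5 * (1.0 + erf((x - edge_pos) / (np.sqrt(2.0) * blur)))

    rng = np.random.default_rng(4)
    acquired = simulate(31.37, 10.0, 200.0, 1.8) + rng.normal(0.0, 1.0, x.size)

    merit = lambda p: np.sum((simulate(*p) - acquired) ** 2)  # L2 distance
    fit = minimize(merit, x0=[30.0, 0.0, 255.0, 2.5], method="CG")
    print(round(fit.x[0], 2))  # recovered edge position, close to 31.37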
Population Neuroscience: Dementia Epidemiology Serving Precision Medicine and Population Health.
Ganguli, Mary; Albanese, Emiliano; Seshadri, Sudha; Bennett, David A; Lyketsos, Constantine; Kukull, Walter A; Skoog, Ingmar; Hendrie, Hugh C
2018-01-01
Over recent decades, epidemiology has made significant contributions to our understanding of dementia, translating scientific discoveries into population health. Here, we propose reframing dementia epidemiology as "population neuroscience," blending techniques and models from contemporary neuroscience with those of epidemiology and biostatistics. On the basis of emerging evidence and newer paradigms and methods, population neuroscience will minimize the bias typical of traditional clinical research, identify the relatively homogeneous subgroups that make up the general population, and investigate broader and denser phenotypes of dementia and cognitive impairment. Long-term follow-up of sufficiently large study cohorts will allow the identification of cohort effects and critical windows of exposure. Molecular epidemiology and omics will allow us to unravel the key distinctions within and among subgroups and better understand individuals' risk profiles. Interventional epidemiology will allow us to identify the different subgroups that respond to different treatment/prevention strategies. These strategies will inform precision medicine. In addition, insights into interactions between disease biology, personal and environmental factors, and social determinants of health will allow us to measure and track disease in communities and improve population health. By placing neuroscience within a real-world context, population neuroscience can fulfill its potential to serve both precision medicine and population health.