Sample records for statistical error bars

  1. Three-dimensional accuracy of different correction methods for cast implant bars

    PubMed Central

    Kwon, Ji-Yung; Kim, Chang-Whe; Lim, Young-Jun; Kwon, Ho-Beom

    2014-01-01

    PURPOSE The aim of the present study was to evaluate the accuracy of three techniques for correction of cast implant bars. MATERIALS AND METHODS Thirty cast implant bars were fabricated on a metal master model. All cast implant bars were sectioned at 5 mm from the left gold cylinder using a disk of 0.3 mm thickness, and then each group of ten specimens was corrected by gas-air torch soldering, laser welding, and an additional casting technique. Three-dimensional evaluation including horizontal, vertical, and twisting measurements was based on measurement and comparison of (1) gap distances of the right abutment replica-gold cylinder interface at the buccal, distal, and lingual sides, (2) changes of bar length, and (3) axis angle changes of the right gold cylinders at the post-correction measurement step for the three groups, using a contact and non-contact coordinate measuring machine. One-way analysis of variance (ANOVA) and paired t-tests were performed at the significance level of 5%. RESULTS Gap distances of the cast implant bars after the correction procedure showed no statistically significant difference among groups. Changes in bar length between pre-casting and post-correction measurements were statistically significant among groups. Axis angle changes of the right gold cylinders were not statistically significant among groups. CONCLUSION There was no statistically significant difference among the three techniques in horizontal, vertical, and axial errors. However, the gas-air torch soldering technique showed the most consistent and accurate trend in the correction of implant bar error. The laser welding technique showed a large mean and standard deviation in the vertical and twisting measurements and might be a technique-sensitive method. PMID:24605205
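
    As a hedged illustration of the analysis described above (one-way ANOVA across the three correction techniques and paired t-tests at the 5% significance level), the following minimal Python sketch uses simulated placeholder data, not the study's measurements:

```python
# Minimal sketch, assuming three groups of ten gap-distance measurements (mm);
# the arrays are simulated placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
solder = rng.normal(0.045, 0.010, 10)   # gas-air torch soldering group
laser = rng.normal(0.050, 0.020, 10)    # laser welding group
cast = rng.normal(0.048, 0.012, 10)     # additional casting group

# One-way ANOVA across the three correction techniques
f_stat, p_anova = stats.f_oneway(solder, laser, cast)

# Paired t-test, e.g. pre-casting vs. post-correction bar length within one group
pre = rng.normal(22.0, 0.05, 10)
post = pre + rng.normal(0.02, 0.03, 10)
t_stat, p_paired = stats.ttest_rel(pre, post)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_paired:.3f}")
```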

  2. OR14-V-Uncertainty-PD2La Uncertainty Quantification for Nuclear Safeguards and Nondestructive Assay Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Andrew D.; Croft, Stephen; McElroy, Robert Dennis

    2017-08-01

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically provide error bars and also partition total uncertainty into “random” and “systematic” components so that, for example, an error bar can be developed for the total mass estimate in multiple items. Uncertainty Quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods.
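
    As a rough sketch of the random/systematic partition mentioned above, the snippet below combines per-item random uncertainties in quadrature with a shared systematic component to form an error bar on a total mass; all numbers are hypothetical and this is not the report's own procedure:

```python
# Minimal sketch: error bar on the total mass of several assayed items, assuming
# independent random errors (add in quadrature) and one fully correlated
# systematic (calibration) error that adds linearly across items.
import numpy as np

masses = np.array([2.1, 1.8, 2.4])           # assayed mass per item (kg), hypothetical
sigma_random = np.array([0.05, 0.04, 0.06])  # per-item random uncertainties (kg)
sys_rel = 0.02                               # 2% shared systematic uncertainty, hypothetical

total_mass = masses.sum()
random_part = np.sqrt(np.sum(sigma_random**2))   # independent: quadrature sum
systematic_part = sys_rel * total_mass           # fully correlated: linear sum
total_sigma = np.hypot(random_part, systematic_part)

print(f"total mass = {total_mass:.2f} +/- {total_sigma:.2f} kg")
```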

  3. The Importance of Statistical Modeling in Data Analysis and Inference

    ERIC Educational Resources Information Center

    Rollins, Derrick, Sr.

    2017-01-01

    Statistical inference simply means to draw a conclusion based on information that comes from data. Error bars are the most commonly used tool for data analysis and inference in chemical engineering data studies. This work demonstrates, using common types of data collection studies, the importance of specifying the statistical model for sound…

  4. Metacontrast masking and attention do not interact.

    PubMed

    Agaoglu, Sevda; Breitmeyer, Bruno; Ogmen, Haluk

    2016-07-01

    Visual masking and attention have been known to control the transfer of information from sensory memory to visual short-term memory. A natural question is whether these processes operate independently or interact. Recent evidence suggests that studies that reported interactions between masking and attention suffered from ceiling and/or floor effects. The objective of the present study was to investigate whether metacontrast masking and attention interact by using an experimental design in which saturation effects are avoided. We asked observers to report the orientation of a target bar randomly selected from a display containing either two or six bars. The mask was a ring that surrounded the target bar. Attentional load was controlled by set-size and masking strength by the stimulus onset asynchrony between the target bar and the mask ring. We investigated interactions between masking and attention by analyzing two different aspects of performance: (i) the mean absolute response errors and (ii) the distribution of signed response errors. Our results show that attention affects observers' performance without interacting with masking. Statistical modeling of response errors suggests that attention and metacontrast masking exert their effects by independently modulating the probability of "guessing" behavior. Implications of our findings for models of attention are discussed.

  5. Reading color barcodes using visual snakes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaub, Hanspeter

    2004-05-01

    Statistical pressure snakes are used to track a mono-color target in an unstructured environment using a video camera. The report discusses an algorithm to extract a bar code signal that is embedded within the target. The target is assumed to be rectangular in shape, with the bar code printed in a slightly different saturation and value in HSV color space. Thus, the visual snake, which primarily weights hue tracking errors, will not be deterred by the presence of the color bar codes in the target. The bar code is generated with the standard 3 of 9 method. Using this method, the numeric bar codes reveal whether the target is right-side-up or upside-down.

  6. Heavy flavor decay of Zγ at CDF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timothy M. Harrington-Taber

    2013-01-01

    Diboson production is an important and frequently measured process in the Standard Model. This analysis considers the previously neglected pp̄ → Zγ → bb̄ channel, as measured at the Collider Detector at Fermilab. Using the entire Tevatron Run II dataset, the measured result is consistent with Standard Model predictions, but the statistical error associated with this method of measurement limits the strength of this agreement.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miftakov, V

    The BABAR experiment at SLAC provides an opportunity for measurement of the Standard Model parameters describing CP violation. A method of measuring the CKM matrix element |V_cb| using inclusive semileptonic B decays in events tagged by a fully reconstructed decay of one of the B mesons is presented here. This mode is considered to be one of the most powerful approaches due to its large branching fraction, the simplicity of the theoretical description, and very clean experimental signatures. Using fully reconstructed B mesons to flag BB̄ events, we were able to produce the spectrum and branching fraction for electron momenta P_C.M.S. > 0.5 GeV/c. Extrapolation to lower momenta has been carried out with Heavy Quark Effective Theory. The branching fractions are measured separately for charged and neutral B mesons. For 82 fb⁻¹ of data collected at BABAR we obtain: BR(B± → X e ν̄) = 10.63 ± 0.24 ± 0.29%, BR(B⁰ → X e ν̄) = 10.68 ± 0.34 ± 0.31%, averaged BR(B → X e ν̄) = 10.65 ± 0.19 ± 0.27%, and a ratio of branching fractions BR(B±)/BR(B⁰) = 0.996 ± 0.039 ± 0.015 (errors are statistical and systematic, respectively). We also obtain |V_cb| = 0.0409 ± 0.00074 ± 0.0010 ± 0.000858 (errors are statistical, systematic, and theoretical).

  8. Effect of bar-code technology on the safety of medication administration.

    PubMed

    Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K

    2010-05-06

    Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate)--a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society

  9. Time trend of injection drug errors before and after implementation of bar-code verification system.

    PubMed

    Sakushima, Ken; Umeki, Reona; Endoh, Akira; Ito, Yoichi M; Nasuhara, Yasuyuki

    2015-01-01

    Bar-code technology, used for verification of patients and their medication, could prevent medication errors in clinical practice. Retrospective analysis of electronically stored medical error reports was conducted in a university hospital. The number of reported medication errors of injected drugs, including wrong drug administration and administration to the wrong patient, was compared before and after implementation of the bar-code verification system for inpatient care. A total of 2867 error reports associated with injection drugs were extracted. Wrong patient errors decreased significantly after implementation of the bar-code verification system (17.4/year vs. 4.5/year, p< 0.05), although wrong drug errors did not decrease sufficiently (24.2/year vs. 20.3/year). The source of medication errors due to wrong drugs was drug preparation in hospital wards. Bar-code medication administration is effective for prevention of wrong patient errors. However, ordinary bar-code verification systems are limited in their ability to prevent incorrect drug preparation in hospital wards.

  10. [Medication error management climate and perception for system use according to construction of medication error prevention system].

    PubMed

    Kim, Myoung Soo

    2012-08-01

    The purpose of this cross-sectional study was to examine the current status of IT-based medication error prevention system construction and the relationships among system construction, medication error management climate, and perception for system use. The participants were 124 patient safety chief managers working for 124 hospitals with over 300 beds in Korea. The characteristics of the participants, construction status and perception of systems (electric pharmacopoeia, electric drug dosage calculation system, computer-based patient safety reporting, and bar-code system), and medication error management climate were measured in this study. The data were collected between June and August 2011. Descriptive statistics, partial Pearson correlation, and MANCOVA were used for data analysis. Electric pharmacopoeia systems were constructed in 67.7% of participating hospitals, computer-based patient safety reporting systems in 50.8%, and electric drug dosage calculation systems in 32.3%. Bar-code systems showed the lowest construction rate, at 16.1% of Korean hospitals. Higher rates of construction of IT-based medication error prevention systems were associated with greater safety and a more positive error management climate. Supportive strategies for improving the perception of IT-based systems would encourage further system construction, and a positive error management climate would be more easily promoted.

  11. Measurement of the τ Michel parameters η̄ and ξκ in the radiative leptonic decay τ⁻ → ℓ⁻ ν_τ ν̄_ℓ γ

    NASA Astrophysics Data System (ADS)

    Shimizu, N.; Aihara, H.; Epifanov, D.; Adachi, I.; Al Said, S.; Asner, D. M.; Aulchenko, V.; Aushev, T.; Ayad, R.; Babu, V.; Badhrees, I.; Bakich, A. M.; Bansal, V.; Barberio, E.; Bhardwaj, V.; Bhuyan, B.; Biswal, J.; Bobrov, A.; Bozek, A.; Bračko, M.; Browder, T. E.; Červenkov, D.; Chang, M.-C.; Chang, P.; Chekelian, V.; Chen, A.; Cheon, B. G.; Chilikin, K.; Cho, K.; Choi, S.-K.; Choi, Y.; Cinabro, D.; Czank, T.; Dash, N.; Di Carlo, S.; Doležal, Z.; Dutta, D.; Eidelman, S.; Fast, J. E.; Ferber, T.; Fulsom, B. G.; Garg, R.; Gaur, V.; Gabyshev, N.; Garmash, A.; Gelb, M.; Goldenzweig, P.; Greenwald, D.; Guido, E.; Haba, J.; Hayasaka, K.; Hayashii, H.; Hedges, M. T.; Hirose, S.; Hou, W.-S.; Iijima, T.; Inami, K.; Inguglia, G.; Ishikawa, A.; Itoh, R.; Iwasaki, M.; Jaegle, I.; Jeon, H. B.; Jia, S.; Jin, Y.; Joo, K. K.; Julius, T.; Kang, K. H.; Karyan, G.; Kawasaki, T.; Kiesling, C.; Kim, D. Y.; Kim, J. B.; Kim, S. H.; Kim, Y. J.; Kinoshita, K.; Kodyž, P.; Korpar, S.; Kotchetkov, D.; Križan, P.; Kroeger, R.; Krokovny, P.; Kulasiri, R.; Kuzmin, A.; Kwon, Y.-J.; Lange, J. S.; Lee, I. S.; Li, L. K.; Li, Y.; Li Gioi, L.; Libby, J.; Liventsev, D.; Masuda, M.; Merola, M.; Miyabayashi, K.; Miyata, H.; Mohanty, G. B.; Moon, H. K.; Mori, T.; Mussa, R.; Nakano, E.; Nakao, M.; Nanut, T.; Nath, K. J.; Natkaniec, Z.; Nayak, M.; Niiyama, M.; Nisar, N. K.; Nishida, S.; Ogawa, S.; Okuno, S.; Ono, H.; Pakhlova, G.; Pal, B.; Park, C. W.; Park, H.; Paul, S.; Pedlar, T. K.; Pestotnik, R.; Piilonen, L. E.; Popov, V.; Ritter, M.; Rostomyan, A.; Sakai, Y.; Salehi, M.; Sandilya, S.; Sato, Y.; Savinov, V.; Schneider, O.; Schnell, G.; Schwanda, C.; Seino, Y.; Senyo, K.; Sevior, M. E.; Shebalin, V.; Shibata, T.-A.; Shiu, J.-G.; Shwartz, B.; Sokolov, A.; Solovieva, E.; Starič, M.; Strube, J. F.; Sumisawa, K.; Sumiyoshi, T.; Tamponi, U.; Tanida, K.; Tenchini, F.; Trabelsi, K.; Uchida, M.; Uglov, T.; Unno, Y.; Uno, S.; Usov, Y.; Van Hulse, C.; Varner, G.; Vorobyev, V.; Vossen, A.; Wang, C. H.; Wang, M.-Z.; Wang, P.; Watanabe, M.; Widmann, E.; Won, E.; Yamashita, Y.; Ye, H.; Yuan, C. Z.; Zhang, Z. P.; Zhilich, V.; Zhukova, V.; Zhulanov, V.; Zupanc, A.

    2018-02-01

    We present a measurement of the Michel parameters of the τ lepton, η̄ and ξκ, in the radiative leptonic decay τ⁻ → ℓ⁻ ν_τ ν̄_ℓ γ using 711 fb⁻¹ of collision data collected with the Belle detector at the KEKB e⁺e⁻ collider. The Michel parameters are measured in an unbinned maximum likelihood fit to the kinematic distribution of e⁺e⁻ → τ⁺τ⁻ → (π⁺π⁰ ν̄_τ)(ℓ⁻ ν_τ ν̄_ℓ γ) (ℓ = e or μ). The measured values of the Michel parameters are η̄ = -1.3 ± 1.5 ± 0.8 and ξκ = 0.5 ± 0.4 ± 0.2, where the first error is statistical and the second is systematic. This is the first measurement of these parameters. These results are consistent with the Standard Model predictions within their uncertainties, and constrain the coupling constants of the generalized weak interaction.

  12. The Effects of Bar-coding Technology on Medication Errors: A Systematic Literature Review.

    PubMed

    Hutton, Kevin; Ding, Qian; Wellman, Gregory

    2017-02-24

    Bar-coding technology adoption has risen drastically in U.S. health systems in the past decade. However, few studies have addressed the impact of bar-coding technology with strong prospective methodologies, and the research that has been conducted covers both in-pharmacy and bedside implementations. This systematic literature review examines the effectiveness of bar-coding technology in preventing medication errors and the types of medication errors that may be prevented in the hospital setting. A systematic search of databases was performed from 1998 to December 2016. Studies measuring the effect of bar-coding technology on medication errors were included in a full-text review. Studies with outcomes other than medication errors, such as efficiency or workarounds, were excluded. The outcomes were measured and findings were summarized for each retained study. A total of 2603 articles were initially identified, and 10 studies, which used a prospective before-and-after study design, were fully reviewed in this article. Of the 10 included studies, 9 took place in the United States, whereas the remaining one was conducted in the United Kingdom. One research article focused on bar-coding implementation in a pharmacy setting, whereas the other 9 focused on bar coding within patient care areas. All 10 studies showed overall positive effects associated with bar-coding implementation. The results of this review show that bar-coding technology may reduce medication errors in hospital settings, particularly in preventing targeted wrong dose, wrong drug, wrong patient, unauthorized drug, and wrong route errors.

  13. Previous Estimates of Mitochondrial DNA Mutation Level Variance Did Not Account for Sampling Error: Comparing the mtDNA Genetic Bottleneck in Mice and Humans

    PubMed Central

    Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.

    2010-01-01

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
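
    For orientation, a standard normal-theory approximation for the standard error of a sample variance is SE(s²) ≈ s²·√(2/(n−1)); the paper develops its own derivation, so the sketch below is only illustrative and the data are simulated:

```python
# Minimal sketch: error bar on a sample variance using the normal-theory
# approximation SE(s^2) ~= s^2 * sqrt(2 / (n - 1)). Illustrative only; the
# mutation-level values below are simulated placeholders.
import numpy as np

def variance_error_bar(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    s2 = x.var(ddof=1)                 # unbiased sample variance
    se = s2 * np.sqrt(2.0 / (n - 1))   # approximate standard error of the variance
    return s2, se

rng = np.random.default_rng(1)
mutation_levels = rng.normal(0.35, 0.12, 15)   # hypothetical heteroplasmy fractions
s2, se = variance_error_bar(mutation_levels)
print(f"variance = {s2:.4f} +/- {se:.4f} (n = {mutation_levels.size})")
```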

  14. Bayesian aerosol retrieval algorithm for MODIS AOD retrieval over land

    NASA Astrophysics Data System (ADS)

    Lipponen, Antti; Mielonen, Tero; Pitkänen, Mikko R. A.; Levy, Robert C.; Sawyer, Virginia R.; Romakkaniemi, Sami; Kolehmainen, Ville; Arola, Antti

    2018-03-01

    We have developed a Bayesian aerosol retrieval (BAR) algorithm for the retrieval of aerosol optical depth (AOD) over land from the Moderate Resolution Imaging Spectroradiometer (MODIS). In the BAR algorithm, we simultaneously retrieve all dark land pixels in a granule, utilize spatial correlation models for the unknown aerosol parameters, use a statistical prior model for the surface reflectance, and take into account the uncertainties due to fixed aerosol models. The retrieved parameters are total AOD at 0.55 µm, fine-mode fraction (FMF), and surface reflectances at four different wavelengths (0.47, 0.55, 0.64, and 2.1 µm). The accuracy of the new algorithm is evaluated by comparing the AOD retrievals to Aerosol Robotic Network (AERONET) AOD. The results show that the BAR significantly improves the accuracy of AOD retrievals over the operational Dark Target (DT) algorithm. A reduction of about 29 % in the AOD root mean square error and decrease of about 80 % in the median bias of AOD were found globally when the BAR was used instead of the DT algorithm. Furthermore, the fraction of AOD retrievals inside the ±(0.05+15 %) expected error envelope increased from 55 to 76 %. In addition to retrieving the values of AOD, FMF, and surface reflectance, the BAR also gives pixel-level posterior uncertainty estimates for the retrieved parameters. The BAR algorithm always results in physical, non-negative AOD values, and the average computation time for a single granule was less than a minute on a modern personal computer.
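
    The "fraction inside the expected error envelope" quoted above is a simple tally; a minimal sketch, with placeholder arrays instead of MODIS/AERONET retrievals, is:

```python
# Minimal sketch of the expected-error-envelope metric: a retrieval counts as
# "inside" when |AOD_retrieved - AOD_reference| <= 0.05 + 0.15 * AOD_reference.
import numpy as np

def fraction_in_envelope(aod_retrieved, aod_reference):
    aod_retrieved = np.asarray(aod_retrieved, float)
    aod_reference = np.asarray(aod_reference, float)
    envelope = 0.05 + 0.15 * aod_reference
    return float(np.mean(np.abs(aod_retrieved - aod_reference) <= envelope))

rng = np.random.default_rng(2)
truth = rng.uniform(0.05, 1.0, 1000)                     # stand-in for AERONET AOD
retrieved = truth + rng.normal(0.0, 0.05 + 0.1 * truth)  # stand-in for algorithm output
print(f"{100 * fraction_in_envelope(retrieved, truth):.1f}% inside +/-(0.05 + 15%)")
```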

  15. Weak charge form factor and radius of 208Pb through parity violation in electron scattering

    DOE PAGES

    Horowitz, C. J.; Ahmed, Z.; Jen, C. -M.; ...

    2012-03-26

    We use distorted wave electron scattering calculations to extract the weak charge form factor F_W(q̄), the weak charge radius R_W, and the point neutron radius R_n of 208Pb from the PREX parity violating asymmetry measurement. The form factor is the Fourier transform of the weak charge density at the average momentum transfer q̄ = 0.475 fm⁻¹. We find F_W(q̄) = 0.204 ± 0.028(exp) ± 0.001(model). We use the Helm model to infer the weak radius from F_W(q̄). We find R_W = 5.826 ± 0.181(exp) ± 0.027(model) fm. Here the exp error includes PREX statistical and systematic errors, while the model error describes the uncertainty in R_W from uncertainties in the surface thickness σ of the weak charge density. The weak radius is larger than the charge radius, implying a 'weak charge skin' where the surface region is relatively enriched in weak charges compared to (electromagnetic) charges. We extract the point neutron radius R_n = 5.751 ± 0.175(exp) ± 0.026(model) ± 0.005(strange) fm from R_W. Here there is only a very small error (strange) from possible strange quark contributions. We find R_n to be slightly smaller than R_W because of the nucleon's size. As a result, we find a neutron skin thickness of R_n − R_p = 0.302 ± 0.175(exp) ± 0.026(model) ± 0.005(strange) fm, where R_p is the point proton radius.
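
    For reference, the relation between the weak charge form factor and the weak charge density described above takes the usual spherical Fourier-transform form (a sketch of the common convention; normalizations vary between papers):

```latex
F_W(\bar{q}) = \frac{1}{Q_W}\int d^3r\, \frac{\sin(\bar{q}r)}{\bar{q}r}\, \rho_W(r),
\qquad
Q_W = \int d^3r\, \rho_W(r),
\qquad
R_W^2 = \frac{1}{Q_W}\int d^3r\, r^2\, \rho_W(r).
```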

  16. Pioneer-Venus radio occultation (ORO) data reduction: Profiles of 13 cm absorptivity

    NASA Technical Reports Server (NTRS)

    Steffes, Paul G.

    1990-01-01

    In order to characterize possible variations in the abundance and distribution of subcloud sulfuric acid vapor, 13 cm radio occultation signals from 23 orbits that occurred in late 1986 and 1987 (Season 10) and 7 orbits that occurred in 1979 (Season 1) were processed. The data were inverted via inverse Abel transform to produce 13 cm absorptivity profiles. Pressure and temperature profiles obtained with the Pioneer-Venus night probe and the northern probe were used along with the absorptivity profiles to infer upper limits for vertical profiles of the abundance of gaseous H2SO4. In addition to inverting the data, error bars were placed on the absorptivity profiles and H2SO4 abundance profiles using the standard propagation of errors. These error bars were developed by considering the effects of statistical errors only. The profiles show a distinct pattern with regard to latitude which is consistent with latitude variations observed in data obtained during the occultation seasons nos. 1 and 2. However, when compared with the earlier data, the recent occultation studies suggest that the amount of sulfuric acid vapor occurring at and below the main cloud layer may have decreased between early 1979 and late 1986.
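
    The inversion mentioned above is, in its simplest straight-ray form, the standard Abel transform pair relating integrated attenuation along a ray to a radial absorptivity profile; the actual Pioneer-Venus processing also accounts for ray bending, so this is only a sketch of the idea:

```latex
\tau(a) = 2\int_{a}^{\infty} \frac{\alpha(r)\, r\, dr}{\sqrt{r^{2}-a^{2}}},
\qquad
\alpha(r) = -\frac{1}{\pi}\int_{r}^{\infty} \frac{d\tau}{da}\, \frac{da}{\sqrt{a^{2}-r^{2}}},
```

    where a is the ray's closest-approach radius.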

  17. Revision of laser-induced damage threshold evaluation from damage probability data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas

    2013-04-15

    In this study, the applicability of the commonly used Damage Frequency Method (DFM) is addressed in the context of Laser-Induced Damage Threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied for Monte Carlo experiments. The reproducibility of the LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. The widely accepted linear fitting resulted in systematic errors when estimating the LIDT and its error bars. To address this, a Bayesian approach was proposed. A novel concept of parametric regression based on a varying kernel and a maximum likelihood fitting technique is introduced and studied. This approach exhibited clear advantages over conventional linear fitting and led to more reproducible LIDT evaluation. Furthermore, LIDT error bars are obtained as a natural outcome of the parametric fitting and exhibit realistic values. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).

  18. Output Error Analysis of Planar 2-DOF Five-bar Mechanism

    NASA Astrophysics Data System (ADS)

    Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang

    2018-03-01

    To address the mechanism error caused by joint clearance in the planar 2-DOF five-bar linkage, the clearance of each kinematic pair is treated as an equivalent virtual link. A structural error model of revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of joint clearance on the output error of the mechanism is studied, and the calculation method and basis for the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the error rotation space, providing a new way to analyze errors in planar parallel mechanisms caused by joint clearance.

  19. Data free inference with processed data products

    DOE PAGES

    Chowdhary, K.; Najm, H. N.

    2014-07-12

    Here, we consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data is unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate consistent data with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors, to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.
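
    A toy version of the pooling idea described above, using a reported mean and error bar as the summary statistic (this is not the paper's maximum-entropy/ABC algorithm, just an illustration of pooling posteriors from consistent synthetic data sets):

```python
# Toy sketch: generate synthetic data sets consistent with a reported summary
# (mean +/- error bar), infer the parameter from each, and average the posteriors.
import numpy as np

rng = np.random.default_rng(3)
reported_mean, reported_err, n_obs = 4.2, 0.3, 10   # hypothetical reported summary
sigma_data = reported_err * np.sqrt(n_obs)          # per-observation scatter implied by the error bar

grid = np.linspace(2.0, 6.0, 401)                   # parameter grid, flat prior
pooled = np.zeros_like(grid)
n_sets = 200
for _ in range(n_sets):
    data = rng.normal(0.0, sigma_data, n_obs)
    data = data - data.mean() + reported_mean       # force consistency with the reported mean
    loglike = -0.5 * ((data[:, None] - grid[None, :]) ** 2).sum(axis=0) / sigma_data**2
    post = np.exp(loglike - loglike.max())
    pooled += post / np.trapz(post, grid)
pooled /= n_sets

print("pooled posterior mean:", np.trapz(grid * pooled, grid))
```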

  20. Precision modelling of M dwarf stars: the magnetic components of CM Draconis

    NASA Astrophysics Data System (ADS)

    MacDonald, J.; Mullan, D. J.

    2012-04-01

    The eclipsing binary CM Draconis (CM Dra) contains two nearly identical red dwarfs of spectral class dM4.5. The masses and radii of the two components have been reported with unprecedentedly small statistical errors: for M, these errors are 1 part in 260, while for R, the errors reported by Morales et al. are 1 part in 130. When compared with standard stellar models with appropriate mass and age (≈4 Gyr), the empirical results indicate that both components are discrepant from the models in the following sense: the observed stars are larger in R ('bloated'), by several standard deviations, than the models predict. The observed luminosities are also lower than the models predict. Here, we attempt at first to model the two components of CM Dra in the context of standard (non-magnetic) stellar models using a systematic array of different assumptions about helium abundances (Y), heavy element abundances (Z), opacities and mixing length parameter (α). We find no 4-Gyr-old models with plausible values of these four parameters that fit the observed L and R within the reported statistical error bars. However, CM Dra is known to contain magnetic fields, as evidenced by the occurrence of star-spots and flares. Here we ask: can inclusion of magnetic effects into stellar evolution models lead to fits of L and R within the error bars? Morales et al. have reported that the presence of polar spots results in a systematic overestimate of R by a few per cent when eclipses are interpreted with a standard code. In a star where spots cover a fraction f of the surface area, we find that the revised R and L for CM Dra A can be fitted within the error bars by varying the parameter α. The latter is often assumed to be reduced by the presence of magnetic fields, although the reduction in α as a function of B is difficult to quantify. An alternative magnetic effect, namely inhibition of the onset of convection, can be readily quantified in terms of a magnetic parameter δ ≈ B²/(4πγp_gas) (where B is the strength of the local vertical magnetic field). In the context of δ models in which B is not allowed to exceed a 'ceiling' of 10⁶ G, we find that the revised R and L can also be fitted, within the error bars, in a finite region of the f-δ plane. The permitted values of δ near the surface lead us to estimate that the vertical field strength on the surface of CM Dra A is about 500 G, in good agreement with independent observational evidence for similar low-mass stars. Recent results for another binary with parameters close to those of CM Dra suggest that metallicity differences cannot be the dominant explanation for the bloating of the two components of CM Dra.
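
    Written out, the magnetic inhibition parameter quoted above, and the surface field strength it implies for a given gas pressure (Gaussian units assumed), are:

```latex
\delta \approx \frac{B^{2}}{4\pi\,\gamma\, p_{\mathrm{gas}}}
\qquad\Longrightarrow\qquad
B \approx \sqrt{4\pi\,\gamma\, p_{\mathrm{gas}}\,\delta}.
```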

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Croft, Stephen; Jarman, Kenneth D.

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.

  2. Error analysis of mechanical system and wavelength calibration of monochromator

    NASA Astrophysics Data System (ADS)

    Zhang, Fudong; Chen, Chen; Liu, Jie; Wang, Zhihong

    2018-02-01

    This study focuses on improving the accuracy of a grating monochromator on the basis of the grating diffraction equation in combination with an analysis of the mechanical transmission relationship between the grating, the sine bar, and the screw of the scanning mechanism. First, the relationship between the mechanical error in the monochromator with the sine drive and the wavelength error is analyzed. Second, a mathematical model of the wavelength error and mechanical error is developed, and an accurate wavelength calibration method based on the sine bar's length adjustment and error compensation is proposed. Based on the mathematical model and calibration method, experiments using a standard light source with known spectral lines and a pre-adjusted sine bar length are conducted. The model parameter equations are solved, and subsequent parameter optimization simulations are performed to determine the optimal length ratio. Lastly, the length of the sine bar is adjusted. The experimental results indicate that the wavelength accuracy is ±0.3 nm, which is better than the original accuracy of ±2.6 nm. The results confirm the validity of the error analysis of the mechanical system of the monochromator as well as the validity of the calibration method.
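
    A minimal sketch of the sine-drive relation the calibration relies on: with a grating equation of the Czerny-Turner form mλ = 2d·cos(K)·sin(θ) and a sine bar giving sin(θ) = s/L, wavelength is linear in screw displacement s, so an error in the effective sine-bar length L maps directly into a wavelength error. The geometry and numbers below are hypothetical, not those of the instrument in the paper:

```python
# Hypothetical sine-drive monochromator: wavelength vs. screw displacement.
import numpy as np

d = 1e6 / 1200.0      # groove spacing in nm for a 1200 lines/mm grating (assumed)
m = 1                 # diffraction order
K = np.radians(15.0)  # half of the fixed included angle (assumed)
L_bar = 200.0         # nominal sine-bar length (mm, assumed)

def wavelength(s_mm, L=L_bar):
    """Wavelength (nm) for screw displacement s_mm (mm) and sine-bar length L (mm)."""
    return (2.0 * d * np.cos(K) / m) * (s_mm / L)

s = 50.0
lam_nominal = wavelength(s)
lam_perturbed = wavelength(s, L_bar * (1 + 1e-3))          # 0.1% error in sine-bar length
print(f"{lam_nominal:.2f} nm vs {lam_perturbed:.2f} nm")   # ~0.1% wavelength shift
```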

  3. 3DXRD at the Advanced Photon Source: Orientation Mapping and Deformation Studies

    DTIC Science & Technology

    2010-09-01

    statistics in the same sample (Hefferan et al. (2010)). This low orientation uncertainty or error bar might be surprising at first since we do measurements... may be a combination of noise and real gradients. Some of the intra-granular disorder in (b) should be interpreted as statistical and only... cooling (AC), but are not present after ice water quenching (IWQ). The presence of SRO domains is known to lead to planar slip bands during tensile

  4. The Impact of Bar Code Medication Administration Technology on Reported Medication Errors

    ERIC Educational Resources Information Center

    Holecek, Andrea

    2011-01-01

    The use of bar-code medication administration technology is on the rise in acute care facilities in the United States. The technology is purported to decrease medication errors that occur at the point of administration. How significantly this technology affects actual rate and severity of error is unknown. This descriptive, longitudinal research…

  5. Influence of survey strategy and interpolation model on DEM quality

    NASA Astrophysics Data System (ADS)

    Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.

    2009-11-01

    Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital Elevation Models were then produced using five different common interpolation algorithms. Each resultant DEM was differentiated from a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. Lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging was used as the interpolator. The magnitude of the errors between survey strategy exceeded those found between interpolation technique for a specific survey strategy. Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window), and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves are presented that demonstrate these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
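
    As a rough illustration of the survey-strategy/interpolation comparison described above (scipy's linear, cubic, and nearest-neighbour interpolators stand in for the TIN and kriging options; the surface and survey points are simulated):

```python
# Minimal sketch: grid scattered survey points with different interpolators and
# difference each DEM against a reference surface to map vertical error.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(6)

def surface(x, y):                      # stand-in for the TLS reference surface
    return 0.3 * np.sin(x / 5.0) + 0.1 * y

pts = rng.uniform(0, 50, size=(300, 2))            # sparse survey points (x, y), metres
z = surface(pts[:, 0], pts[:, 1])

gx, gy = np.meshgrid(np.linspace(1, 49, 100), np.linspace(1, 49, 100))
reference = surface(gx, gy)

for method in ("linear", "cubic", "nearest"):
    dem = griddata(pts, z, (gx, gy), method=method)
    err = dem - reference                           # vertical error surface
    print(method, "mean |error| =", float(np.nanmean(np.abs(err))))
```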

  6. Bandwagon effects and error bars in particle physics

    NASA Astrophysics Data System (ADS)

    Jeng, Monwhea

    2007-02-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.
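
    The comparison described above amounts to looking at the tail of the "pull" distribution (deviation divided by reported error bar); a minimal sketch with simulated measurements:

```python
# Minimal sketch: tail of the pull distribution vs. the Gaussian expectation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_value = 1.0
reported = true_value + 0.01 * rng.standard_t(df=3, size=5000)  # heavy-tailed "measurements"
errors = np.full(reported.size, 0.01)                           # reported error bars

pulls = np.abs(reported - true_value) / errors
for k in (1, 2, 3, 5):
    observed = float((pulls > k).mean())
    expected = 2 * stats.norm.sf(k)          # two-sided normal tail probability
    print(f"|pull| > {k}: observed {observed:.4f}, normal expectation {expected:.4f}")
```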

  7. Numerical modeling of the divided bar measurements

    NASA Astrophysics Data System (ADS)

    LEE, Y.; Keehm, Y.

    2011-12-01

    The divided-bar technique has been used to measure thermal conductivity of rocks and fragments in heat flow studies. Though widely used, divided-bar measurements can have errors, which have not yet been systematically quantified. We used an FEM and performed a series of numerical studies to evaluate various errors in divided-bar measurements and to suggest more reliable measurement techniques. A divided-bar measurement should be corrected against lateral heat loss on the sides of rock samples, and the thermal resistance at the contacts between the rock sample and the bar. We first investigated how the amount of these corrections would change with the thickness and thermal conductivity of rock samples through numerical modeling. When we fixed the sample thickness as 10 mm and varied thermal conductivity, errors in the measured thermal conductivity range from 2.02% for 1.0 W/m/K to 7.95% for 4.0 W/m/K. When we fixed thermal conductivity as 1.38 W/m/K and varied the sample thickness, we found that the error ranges from 2.03% for the 30 mm-thick sample to 11.43% for the 5 mm-thick sample. After corrections, a variety of error analyses for divided-bar measurements were conducted numerically. Thermal conductivity of two thin standard disks (2 mm in thickness) located at the top and the bottom of the rock sample slightly affects the accuracy of thermal conductivity measurements. When the thermal conductivity of a sample is 3.0 W/m/K and that of the two standard disks is 0.2 W/m/K, the relative error in measured thermal conductivity is very small (~0.01%). However, the relative error would reach up to -2.29% for the same sample when the thermal conductivity of the two disks is 0.5 W/m/K. The accuracy of thermal conductivity measurements strongly depends on the thermal conductivity and thickness of the thermal compound that is applied to reduce thermal resistance at contacts between the rock sample and the bar. When the thickness of the thermal compound (0.29 W/m/K) is 0.03 mm, we found that the relative error in measured thermal conductivity is 4.01%, while the relative error can be very significant (~12.2%) if the thickness increases to 0.1 mm. Then, we fixed the thickness (0.03 mm) and varied the thermal conductivity of the thermal compound. We found that the relative error with a 1.0 W/m/K compound is 1.28%, and the relative error with a 0.29 W/m/K compound is 4.06%. When we repeated this test with a different thickness of the thermal compound (0.1 mm), the relative error with a 1.0 W/m/K compound is 3.93%, and that with a 0.29 W/m/K compound is 12.2%. In addition, the cell technique by Sass et al. (1971), which is widely used to measure thermal conductivity of rock fragments, was evaluated using the FEM modeling. A total of 483 isotropic and homogeneous spherical rock fragments in the sample holder were used to test numerically the accuracy of the cell technique. The result shows a relative error of -9.61% for rock fragments with a thermal conductivity of 2.5 W/m/K. In conclusion, we report quantified errors in the divided-bar and the cell technique for thermal conductivity measurements for rocks and fragments. We found that the FEM modeling can accurately mimic these measurement techniques and can help us to estimate measurement errors quantitatively.
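
    The measurement rests on the ideal one-dimensional relation below: in steady state the same heat flux q passes through the reference (standard) disks and the sample, so the sample conductivity follows from the temperature drops and thicknesses, before the heat-loss and contact-resistance corrections discussed above:

```latex
q = k_{\mathrm{std}}\,\frac{\Delta T_{\mathrm{std}}}{t_{\mathrm{std}}}
  = k_{\mathrm{sample}}\,\frac{\Delta T_{\mathrm{sample}}}{t_{\mathrm{sample}}}
\qquad\Longrightarrow\qquad
k_{\mathrm{sample}} = k_{\mathrm{std}}\,
  \frac{\Delta T_{\mathrm{std}}}{t_{\mathrm{std}}}\,
  \frac{t_{\mathrm{sample}}}{\Delta T_{\mathrm{sample}}}.
```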

  8. Uncertainty quantification in application of the enrichment meter principle for nondestructive assay of special nuclear material

    DOE PAGES

    Burr, Tom; Croft, Stephen; Jarman, Kenneth D.

    2015-09-05

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.

  9. Technology and medication errors: impact in nursing homes.

    PubMed

    Baril, Chantal; Gascon, Viviane; St-Pierre, Liette; Lagacé, Denis

    2014-01-01

    The purpose of this paper is to study a medication distribution technology's (MDT) impact on medication errors reported in public nursing homes in Québec Province. The work was carried out in six nursing homes (800 patients). Medication error data were collected from nursing staff through a voluntary reporting process before and after MDT was implemented. The errors were analysed using: totals errors; medication error type; severity and patient consequences. A statistical analysis verified whether there was a significant difference between the variables before and after introducing MDT. The results show that the MDT detected medication errors. The authors' analysis also indicates that errors are detected more rapidly resulting in less severe consequences for patients. MDT is a step towards safer and more efficient medication processes. Our findings should convince healthcare administrators to implement technology such as electronic prescriber or bar code medication administration systems to improve medication processes and to provide better healthcare to patients. Few studies have been carried out in long-term healthcare facilities such as nursing homes. The authors' study extends what is known about MDT's impact on medication errors in nursing homes.

  10. Spirality: A Novel Way to Measure Spiral Arm Pitch Angle

    NASA Astrophysics Data System (ADS)

    Shields, Douglas W.; Boe, Benjamin; Henderson, Casey L.; Hartley, Matthew; Davis, Benjamin L.; Pour Imani, Hamed; Kennefick, Daniel; Kennefick, Julia D.

    2015-01-01

    We present the MATLAB code Spirality, a novel method for measuring spiral arm pitch angles by fitting galaxy images to spiral templates of known pitch. For a given pitch angle template, the mean pixel value is found along each of typically 1000 spiral axes. The fitting function, which shows a local maximum at the best-fit pitch angle, is the variance of these means. Error bars are found by varying the inner radius of the measurement annulus and finding the standard deviation of the best-fit pitches. Computation time is typically on the order of 2 minutes per galaxy, assuming at least 8 GB of working memory. We tested the code using 128 synthetic spiral images of known pitch. These spirals varied in the number of spiral arms, pitch angle, degree of logarithmicity, radius, SNR, inclination angle, bar length, and bulge radius. A correct result is defined as a result that matches the true pitch within the error bars, with error bars no greater than ±7°. For the non-logarithmic spiral sample, the correct answer is similarly defined, with the mean pitch as function of radius in place of the true pitch. For all synthetic spirals, correct results were obtained so long as SNR > 0.25, the bar length was no more than 60% of the spiral's diameter (when the bar was included in the measurement), the input center of the spiral was no more than 6% of the spiral radius away from the true center, and the inclination angle was no more than 30°. The synthetic spirals were not deprojected prior to measurement. The code produced the correct result for all barred spirals when the measurement annulus was placed outside the bar. Additionally, we compared the code's results against 2DFFT results for 203 visually selected spiral galaxies in GOODS North and South. Among the entire sample, Spirality's error bars overlapped 2DFFT's error bars 64% of the time. For those galaxies in which Source code is available by email request from the primary author.
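
    A simplified stand-in for the fitting function described above (mean pixel value along each of many spiral axes, scored by the variance of those means); this is not the Spirality code itself, and all parameters are illustrative:

```python
# Simplified pitch-angle score: variance of mean pixel values along logarithmic
# spiral axes of a candidate pitch. Illustrative only.
import numpy as np

def pitch_score(image, pitch_deg, n_axes=360, n_r=200, r_in=5.0, r_out=None):
    ny, nx = image.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    if r_out is None:
        r_out = min(cx, cy) - 1.0
    b = np.tan(np.radians(pitch_deg))            # log spiral r = r_in * exp(b * theta)
    theta = np.linspace(0.0, np.log(r_out / r_in) / b, n_r)
    r = r_in * np.exp(b * theta)
    means = []
    for phi0 in np.linspace(0.0, 2.0 * np.pi, n_axes, endpoint=False):
        x = np.clip(np.round(cx + r * np.cos(theta + phi0)).astype(int), 0, nx - 1)
        y = np.clip(np.round(cy + r * np.sin(theta + phi0)).astype(int), 0, ny - 1)
        means.append(image[y, x].mean())
    return float(np.var(means))                  # peaks near the best-fit pitch

# Usage sketch: best = pitches[np.argmax([pitch_score(img, p) for p in pitches])]
```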

  11. Measurement of the double differential dijet mass cross section in pp̄ collisions at √s = 1.96 TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rominsky, Mandy Kathleen

    2009-01-01

    This thesis presents the analysis of the double differential dijet mass cross section, measured at the D0 detector in Batavia, IL, using pp̄ collisions at a center of mass energy of √s = 1.96 TeV. The dijet mass was calculated using the two highest p_T jets in the event, with approximately 0.7 fb⁻¹ of data collected between 2004 and 2005. The analysis was presented in bins of dijet mass (M_JJ) and rapidity (y), and extends the measurement farther in M_JJ and y than any previous measurement. Corrections due to detector effects were calculated using a Monte Carlo simulation and applied to data. The errors on the measurement consist of statistical and systematic errors, of which the Jet Energy Scale was the largest. The final result was compared to next-to-leading order theory and good agreement was found. These results may be used in the determination of the proton parton distribution functions and to set limits on new physics.

  12. FastSim: A Fast Simulation for the SuperB Detector

    NASA Astrophysics Data System (ADS)

    Andreassen, R.; Arnaud, N.; Brown, D. N.; Burmistrov, L.; Carlson, J.; Cheng, C.-h.; Di Simone, A.; Gaponenko, I.; Manoni, E.; Perez, A.; Rama, M.; Roberts, D.; Rotondo, M.; Simi, G.; Sokoloff, M.; Suzuki, A.; Walsh, J.

    2011-12-01

    We have developed a parameterized (fast) simulation for detector optimization and physics reach studies of the proposed SuperB Flavor Factory in Italy. Detector components are modeled as thin sections of planes, cylinders, disks or cones. Particle-material interactions are modeled using simplified cross-sections and formulas. Active detectors are modeled using parameterized response functions. Geometry and response parameters are configured using xml files with a custom-designed schema. Reconstruction algorithms adapted from BaBar are used to build tracks and clusters. Multiple sources of background signals can be merged with primary signals. Pattern recognition errors are modeled statistically by randomly misassigning nearby tracking hits. Standard BaBar analysis tuples are used as an event output. Hadronic B meson pair events can be simulated at roughly 10Hz.

  13. Computer-assisted bar-coding system significantly reduces clinical laboratory specimen identification errors in a pediatric oncology hospital.

    PubMed

    Hayden, Randall T; Patterson, Donna J; Jay, Dennis W; Cross, Carl; Dotson, Pamela; Possel, Robert E; Srivastava, Deo Kumar; Mirro, Joseph; Shenep, Jerry L

    2008-02-01

    To assess the ability of a bar code-based electronic positive patient and specimen identification (EPPID) system to reduce identification errors in a pediatric hospital's clinical laboratory. An EPPID system was implemented at a pediatric oncology hospital to reduce errors in patient and laboratory specimen identification. The EPPID system included bar-code identifiers and handheld personal digital assistants supporting real-time order verification. System efficacy was measured in 3 consecutive 12-month time frames, corresponding to periods before, during, and immediately after full EPPID implementation. A significant reduction in the median percentage of mislabeled specimens was observed in the 3-year study period. A decline from 0.03% to 0.005% (P < .001) was observed in the 12 months after full system implementation. On the basis of the pre-intervention detected error rate, it was estimated that EPPID prevented at least 62 mislabeling events during its first year of operation. EPPID decreased the rate of misidentification of clinical laboratory samples. The diminution of errors observed in this study provides support for the development of national guidelines for the use of bar coding for laboratory specimens, paralleling recent recommendations for medication administration.

  14. A non-perturbative exploration of the high energy regime in Nf=3 QCD. ALPHA Collaboration

    NASA Astrophysics Data System (ADS)

    Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer

    2018-05-01

    Using continuum extrapolated lattice data we trace a family of running couplings in three-flavour QCD over a large range of scales from about 4 to 128 GeV. The scale is set by the finite space-time volume so that recursive finite size techniques can be applied, and Schrödinger functional (SF) boundary conditions enable direct simulations in the chiral limit. Compared to earlier studies we have improved on both statistical and systematic errors. Using the SF coupling to implicitly define a reference scale 1/L_0 ≈ 4 GeV through ḡ²(L_0) = 2.012, we quote L_0 Λ^{N_f=3}_{\overline{MS}} = 0.0791(21). This error is dominated by statistics; in particular, the remnant perturbative uncertainty is negligible and very well controlled, by connecting to infinite renormalization scale from different scales 2^n/L_0 for n = 0, 1, ..., 5. An intermediate step in this connection may involve any member of a one-parameter family of SF couplings. This provides an excellent opportunity for tests of perturbation theory, some of which have been published in a letter (ALPHA collaboration, M. Dalla Brida et al. in Phys Rev Lett 117(18):182001, 2016). The results indicate that for our target precision of 3 per cent in L_0 Λ^{N_f=3}_{\overline{MS}}, a reliable estimate of the truncation error requires non-perturbative data for a sufficiently large range of values of α_s = ḡ²/(4π). In the present work we reach this precision by studying scales that vary by a factor 2^5 = 32, reaching down to α_s ≈ 0.1. We here provide the details of our analysis and an extended discussion.

  15. Analysis of the technology acceptance model in examining hospital nurses' behavioral intentions toward the use of bar code medication administration.

    PubMed

    Song, Lunar; Park, Byeonghwa; Oh, Kyeung Mi

    2015-04-01

    Serious medication errors continue to exist in hospitals, even though there is technology that could potentially eliminate them such as bar code medication administration. Little is known about the degree to which the culture of patient safety is associated with behavioral intention to use bar code medication administration. Based on the Technology Acceptance Model, this study evaluated the relationships among patient safety culture and perceived usefulness and perceived ease of use, and behavioral intention to use bar code medication administration technology among nurses in hospitals. Cross-sectional surveys with a convenience sample of 163 nurses using bar code medication administration were conducted. Feedback and communication about errors had a positive impact in predicting perceived usefulness (β=.26, P<.01) and perceived ease of use (β=.22, P<.05). In a multiple regression model predicting for behavioral intention, age had a negative impact (β=-.17, P<.05); however, teamwork within hospital units (β=.20, P<.05) and perceived usefulness (β=.35, P<.01) both had a positive impact on behavioral intention. The overall bar code medication administration behavioral intention model explained 24% (P<.001) of the variance. Identified factors influencing bar code medication administration behavioral intention can help inform hospitals to develop tailored interventions for RNs to reduce medication administration errors and increase patient safety by using this technology.

  16. Minimizing human error in radiopharmaceutical preparation and administration via a bar code-enhanced nuclear pharmacy management system.

    PubMed

    Hakala, John L; Hung, Joseph C; Mosman, Elton A

    2012-09-01

    The objective of this project was to ensure correct radiopharmaceutical administration through the use of a bar code system that links patient and drug profiles with on-site information management systems. This new combined system would minimize the amount of manual human manipulation, which has proven to be a primary source of error. The most common reason for dosing errors is improper patient identification when a dose is obtained from the nuclear pharmacy or when a dose is administered. A standardized electronic transfer of information from radiopharmaceutical preparation to injection will further reduce the risk of misadministration. Value stream maps showing the flow of the patient dose information, as well as potential points of human error, were developed. Next, a future-state map was created that included proposed corrections for the most common critical sites of error. Transitioning the current process to the future state will require solutions that address these sites. To optimize the future-state process, a bar code system that links the on-site radiology management system with the nuclear pharmacy management system was proposed. A bar-coded wristband connects the patient directly to the electronic information systems. The bar code-enhanced process linking the patient dose with the electronic information reduces the number of crucial points for human error and provides a framework to ensure that the prepared dose reaches the correct patient. Although the proposed flowchart is designed for a site with an in-house central nuclear pharmacy, much of the framework could be applied by nuclear medicine facilities using unit doses. An electronic connection between information management systems to allow the tracking of a radiopharmaceutical from preparation to administration can be a useful tool in preventing the mistakes that are an unfortunate reality for any facility.

  17. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules.

    PubMed

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-12-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in terms of how faithfully the individual error bars represent the actual prediction error.
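
    Two of the ingredients mentioned here, Bayesian error bars from a Gaussian Process and a Mahalanobis-distance score to the training descriptors, can be sketched generically as below. This is a toy illustration with synthetic descriptors, not the authors' pipeline or data.

```python
# Sketch: two ways to obtain "error bars" / domain-of-applicability scores for a
# regression model: (1) Gaussian Process predictive standard deviations,
# (2) Mahalanobis distance of a query compound to the training descriptors.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))                     # synthetic descriptors
y_train = X_train[:, 0] - 0.5 * X_train[:, 1] + 0.1 * rng.normal(size=200)
X_query = rng.normal(size=(10, 5))

# (1) Bayesian error bars from a Gaussian Process
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)
y_pred, y_std = gp.predict(X_query, return_std=True)    # y_std = per-compound error bar

# (2) Distance-to-training-data score (larger => further outside the DOA)
mean = X_train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))
diff = X_query - mean
mahalanobis = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
print(y_pred, y_std, mahalanobis)
```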

  18. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules.

    PubMed

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in terms of how faithfully the individual error bars represent the actual prediction error.

  19. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-12-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in terms of how faithfully the individual error bars represent the actual prediction error.

  20. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in terms of how faithfully the individual error bars represent the actual prediction error.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, Adam Paul

    The authors present a measurement of the mass of the top quark. The event sample is selected from proton-antiproton collisions, at 1.96 TeV center-of-mass energy, observed with the CDF detector at Fermilab's Tevatron. They consider a 318 pb⁻¹ dataset collected between March 2002 and August 2004. They select events that contain one energetic lepton, large missing transverse energy, exactly four energetic jets, and at least one displaced-vertex b tag. The analysis uses leading-order tt̄ and background matrix elements along with parameterized parton showering to construct event-by-event likelihoods as a function of top quark mass. From the 63 events observed with the 318 pb⁻¹ dataset they extract a top quark mass of 172.0 ± 2.6 (stat) ± 3.3 (syst) GeV/c² from the joint likelihood. The mean expected statistical uncertainty is 3.2 GeV/c² for m_t = 178 GeV/c² and 3.1 GeV/c² for m_t = 172.5 GeV/c². The systematic error is dominated by the uncertainty of the jet energy scale.

  2. Analysis of D0 -> K anti-K X Decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessop, Colin P.

    2003-06-06

    Using data taken with the CLEO II detector, they have studied the decays of the D⁰ to K⁺K⁻, K⁰K̄⁰, K⁰_S K⁰_S, K⁰_S K⁰_S π⁰, and K⁺K⁻π⁰. The authors present significantly improved results for B(D⁰ → K⁺K⁻) = (0.454 ± 0.028 ± 0.035)%, B(D⁰ → K⁰K̄⁰) = (0.054 ± 0.012 ± 0.010)%, and B(D⁰ → K⁰_S K⁰_S K⁰_S) = (0.074 ± 0.010 ± 0.015)%, where the first errors are statistical and the second errors are the estimate of their systematic uncertainty. They also present a new upper limit B(D⁰ → K⁰_S K⁰_S π⁰) < 0.059% at the 90% confidence level and the first measurement of B(D⁰ → K⁺K⁻π⁰) = (0.14 ± 0.04)%.

  3. On the Bar Pattern Speed Determination of NGC 3367

    NASA Astrophysics Data System (ADS)

    Gabbasov, R. F.; Repetto, P.; Rosado, M.

    2009-09-01

    An important dynamic parameter of barred galaxies is the bar pattern speed, Ω_P. Among the several methods that are used for the determination of Ω_P, the Tremaine-Weinberg method has the advantage of model independence and accuracy. In this work, we apply the method to a simulated bar including gas dynamics and study the effect of two-dimensional spectroscopy data quality on the robustness of the method. We added white noise and a Gaussian random field to the data and measured the corresponding errors in Ω_P. We found that a signal-to-noise ratio of ~5 in surface density introduces errors of ~20% for the Gaussian noise, while for the white noise the corresponding errors reach ~50%. At the same time, the velocity field is less sensitive to contamination. On the basis of this study, we applied the method to the spiral galaxy NGC 3367 using Hα Fabry-Pérot interferometry data. We found Ω_P = 43 ± 6 km s⁻¹ kpc⁻¹ for this galaxy.
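
    For reference, a minimal single-slit form of the Tremaine-Weinberg estimator, Ω_P sin i = ∫Σ V_los dX / ∫Σ X dX for a slit parallel to the disc major axis, is sketched below with synthetic inputs; it is not the analysis pipeline used for NGC 3367.

```python
# Sketch of the Tremaine-Weinberg pattern-speed estimator for one slit parallel
# to the disc major axis: Omega_p * sin(i) = <Sigma * V_los> / <Sigma * X>,
# where both averages are density-weighted integrals along the slit.
import numpy as np

def tw_pattern_speed(sigma_slit, vlos_slit, x_slit, inclination_deg):
    """sigma_slit: surface density along the slit; vlos_slit: line-of-sight
    velocity; x_slit: position along the slit (same-length arrays)."""
    num = np.trapz(sigma_slit * vlos_slit, x_slit)   # integral of Sigma * V_los
    den = np.trapz(sigma_slit * x_slit, x_slit)      # integral of Sigma * X
    return num / den / np.sin(np.radians(inclination_deg))

# Usage with synthetic arrays (units of Omega_p follow from the units of v and x):
x = np.linspace(-5.0, 5.0, 201)
sigma = np.exp(-np.abs(x) / 2.0)                     # toy surface-density profile
vlos = 40.0 * np.tanh(x)                             # toy line-of-sight velocity field
print(tw_pattern_speed(sigma, vlos, x, inclination_deg=30.0))
```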

  4. A Search for Periodicity in the X-Ray Spectrum of Black Hole Candidate A0620-00

    DTIC Science & Technology

    1991-06-01

    They are observed as radio pulsars and as the X-ray emitting components of binary X-ray sources. The limits of stability of neutron stars are not...4 Lo ). The three candidates are CYG X-1, LMC X-3, and A0620. In this section all data such as mass functions, luminosities, distances, periods, etc...1.4. Finally, we discard data for which a/ lo > 1. Such a point is of little statistical significance since its error bars are so large. Figure 2.2d

  5. Uncertainty analysis technique for OMEGA Dante measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M. J.; Widmann, K.; Sorce, C.

    2010-10-15

    The Dante is an 18 channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums, etc.) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
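
    The Monte Carlo parameter-variation idea described above can be sketched generically: draw many perturbed channel-voltage sets from the per-channel 1σ error functions, push each through the unfold, and take statistics of the resulting fluxes. The unfold function below is only a placeholder standing in for the actual Dante unfold algorithm.

```python
# Sketch of Monte Carlo error propagation through an unfold algorithm:
# perturb each channel voltage by its 1-sigma Gaussian uncertainty, unfold,
# and build error bars from the spread of the resulting fluxes.
import numpy as np

def unfold(voltages):
    """Placeholder for the real unfold algorithm (returns a flux estimate)."""
    return voltages.sum()                 # stand-in only; not the actual physics

def monte_carlo_flux(voltages, sigmas, n_trials=1000, seed=0):
    rng = np.random.default_rng(seed)
    fluxes = np.empty(n_trials)
    for i in range(n_trials):
        perturbed = voltages + rng.normal(0.0, sigmas)   # one test voltage set
        fluxes[i] = unfold(perturbed)
    return fluxes.mean(), fluxes.std(ddof=1)             # flux and its error bar

volts = np.array([1.2, 0.8, 0.5, 0.3])                   # toy 4-channel example
sig = 0.05 * volts                                       # 5% per-channel uncertainty
print(monte_carlo_flux(volts, sig))
```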

  6. Uncertainty Analysis Technique for OMEGA Dante Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M J; Widmann, K; Sorce, C

    2010-05-07

    The Dante is an 18 channel X-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g. hohlraums, etc.) at X-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the X-ray diodes, filters and mirrors and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte-Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.

  7. Measurements of Time-Dependent CP-Asymmetry Parameters in B Meson Decays to η' K 0 and of Branching Fractions of SU(3) Related Modes with BaBar Experiment at SLAC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biassoni, Pietro

    2009-01-01

    In this thesis work we have measured the following upper limits at 90% confidence level for B meson decays (in units of 10⁻⁶), using a sample of 465.0 × 10⁶ BB̄ pairs: B(B⁰ → ηK⁰) < 1.6, B(B⁰ → ηη) < 1.4, B(B⁰ → η'η') < 2.1, B(B⁰ → ηΦ) < 0.52, B(B⁰ → ηω) < 1.6, B(B⁰ → η'Φ) < 1.2, and B(B⁰ → η'ω) < 1.7. We have no observation of any decay mode; the statistical significance of our measurements is in the range 1.3-3.5 standard deviations. We have a 3.5σ evidence for B → ηω and a 3.1σ evidence for B → η'ω. The absence of an observation of B⁰ → ηK⁰ opens an issue related to the large difference compared to the charged-mode B⁺ → ηK⁺ branching fraction, which is measured to be 3.7 ± 0.4 ± 0.1 [118]. Our results represent substantial improvements over the previous ones [109, 110, 111] and are consistent with theoretical predictions. All these results were presented at the Flavor Physics and CP Violation (FPCP) 2008 Conference in Taipei, Taiwan, and will soon be included in a paper to be submitted to Physical Review D. For the time-dependent analysis, we have reconstructed 1820 ± 48 flavor-tagged B⁰ → η'K⁰ events, using the final BABAR sample of 467.4 × 10⁶ BB̄ pairs. We use these events to measure the time-dependent asymmetry parameters S and C. We find S = 0.59 ± 0.08 ± 0.02 and C = -0.06 ± 0.06 ± 0.02. A non-zero value of C would represent a directly CP non-conserving component in B⁰ → η'K⁰, while S would be equal to sin2β measured in B⁰ → J/ΨK⁰_S [108], a mixing-decay interference effect, provided the decay is dominated by amplitudes with a single weak phase. The new measured value of S can be considered in agreement with the expectations of the Standard Model, within the experimental and theoretical uncertainties. The inconsistency of our result for S with CP conservation (S = 0) has a significance of 7.1 standard deviations (statistical and systematic included). Our result for the direct-CP violation parameter C is 0.9 standard deviations from zero (statistical and systematic included). Our results are in agreement with the previous ones [18]. Although the sample is only 20% larger than the one used in the previous measurement, we improved the error on S by 20% and the error on C by 14%. This is the smallest error ever achieved, by either BABAR or Belle, in a time-dependent CP-violation parameter measurement in a b → s transition.

  8. The opercular mouth-opening mechanism of largemouth bass functions as a 3D four-bar linkage with three degrees of freedom.

    PubMed

    Olsen, Aaron M; Camp, Ariel L; Brainerd, Elizabeth L

    2017-12-15

    The planar, one degree of freedom (1-DoF) four-bar linkage is an important model for understanding the function, performance and evolution of numerous biomechanical systems. One such system is the opercular mechanism in fishes, which is thought to function like a four-bar linkage to depress the lower jaw. While anatomical and behavioral observations suggest some form of mechanical coupling, previous attempts to model the opercular mechanism as a planar four-bar have consistently produced poor model fits relative to observed kinematics. Using newly developed, open source mechanism fitting software, we fitted multiple three-dimensional (3D) four-bar models with varying DoF to in vivo kinematics in largemouth bass to test whether the opercular mechanism functions instead as a 3D four-bar with one or more DoF. We examined link position error, link rotation error and the ratio of output to input link rotation to identify a best-fit model at two different levels of variation: for each feeding strike and across all strikes from the same individual. A 3D, 3-DoF four-bar linkage was the best-fit model for the opercular mechanism, achieving link rotational errors of less than 5%. We also found that the opercular mechanism moves with multiple degrees of freedom at the level of each strike and across multiple strikes. These results suggest that active motor control may be needed to direct the force input to the mechanism by the axial muscles and achieve a particular mouth-opening trajectory. Our results also expand the versatility of four-bar models in simulating biomechanical systems and extend their utility beyond planar or single-DoF systems. © 2017. Published by The Company of Biologists Ltd.

  9. A novel measure and significance testing in data analysis of cell image segmentation.

    PubMed

    Wu, Jin Chu; Halter, Michael; Kacker, Raghu N; Elliott, John T; Plant, Anne L

    2017-03-14

    Cell image segmentation (CIS) is an essential part of quantitative imaging of biological cells. Designing a performance measure and conducting significance testing are critical for evaluating and comparing the CIS algorithms for image-based cell assays in cytometry. Many measures and methods have been proposed and implemented to evaluate segmentation methods. However, computing the standard errors (SE) of the measures and their correlation coefficient is not described, and thus the statistical significance of performance differences between CIS algorithms cannot be assessed. We propose the total error rate (TER), a novel performance measure for segmenting all cells in the supervised evaluation. The TER statistically aggregates all misclassification error rates (MER) by taking cell sizes as weights. The MERs are for segmenting each single cell in the population. The TER is fully supported by the pairwise comparisons of MERs using 106 manually segmented ground-truth cells with different sizes and seven CIS algorithms taken from ImageJ. Further, the SE and 95% confidence interval (CI) of TER are computed based on the SE of MER that is calculated using the bootstrap method. An algorithm for computing the correlation coefficient of TERs between two CIS algorithms is also provided. Hence, the 95% CI error bars can be used to classify CIS algorithms. The SEs of TERs and their correlation coefficient can be employed to conduct the hypothesis testing, while the CIs overlap, to determine the statistical significance of the performance differences between CIS algorithms. A novel measure TER of CIS is proposed. The TER's SEs and correlation coefficient are computed. Thereafter, CIS algorithms can be evaluated and compared statistically by conducting the significance testing.
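
    A generic sketch of a size-weighted error aggregate with a bootstrap standard error, in the spirit of the TER described above, is given below. The details of the published TER and its SE may differ; treat this as an illustration of the idea rather than the authors' implementation.

```python
# Sketch: aggregate per-cell misclassification error rates (MER) into a total
# error rate (TER) using cell sizes as weights, and bootstrap its standard error.
import numpy as np

def total_error_rate(mer, sizes):
    return np.average(mer, weights=sizes)          # size-weighted aggregate

def bootstrap_se(mer, sizes, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(mer)
    stats = [total_error_rate(mer[idx], sizes[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.std(stats, ddof=1)

mer = np.random.default_rng(1).uniform(0.0, 0.2, size=106)    # toy per-cell MERs
sizes = np.random.default_rng(2).integers(50, 500, size=106)  # toy cell sizes (pixels)
ter = total_error_rate(mer, sizes)
se = bootstrap_se(mer, sizes)
print(f"TER = {ter:.3f} +/- {1.96 * se:.3f} (95% CI half-width)")
```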

  10. The statistical properties and possible causes of polar motion prediction errors

    NASA Astrophysics Data System (ADS)

    Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria

    2015-08-01

    The pole coordinate data predictions from different prediction contributors of the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts by looking at the time series of differences between them and the future IERS pole coordinate data. The mean absolute errors, standard deviations, skewness, and kurtosis of these differences were computed together with their error bars as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors or standard deviations; however, their skewness and kurtosis values are similar to those of the predictions from the individual contributors. The skewness and kurtosis make it possible to check whether these prediction differences satisfy a normal distribution. The kurtosis values diminish with the prediction length, which means that the probability distribution of these prediction differences becomes more platykurtic than leptokurtic. Nonzero skewness values result from the oscillating character of these differences for particular prediction lengths, which can be due to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase computed by the combination of the Fourier transform band-pass filter and the Hilbert transform from pole coordinate data, as well as from pole coordinate model data obtained from fluid excitations, are in good agreement.
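
    The kind of diagnostic described here, error statistics versus prediction length with their own error bars, can be reproduced with a few lines of code. The sketch below uses synthetic difference series, not the EOPCPPP data.

```python
# Sketch: statistics of prediction errors as a function of prediction length,
# with bootstrap error bars on each statistic.
import numpy as np
from scipy import stats

def error_stats(diffs):
    """diffs: array of (prediction - observed) for one prediction length."""
    return {"mae": np.mean(np.abs(diffs)),
            "std": np.std(diffs, ddof=1),
            "skew": stats.skew(diffs),
            "kurtosis": stats.kurtosis(diffs)}       # excess kurtosis (0 for a normal)

def bootstrap_error_bars(diffs, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    samples = [error_stats(rng.choice(diffs, size=len(diffs), replace=True))
               for _ in range(n_boot)]
    return {k: np.std([s[k] for s in samples], ddof=1) for k in samples[0]}

rng = np.random.default_rng(3)
for length_days in (10, 30, 90):                      # toy prediction lengths
    diffs = rng.normal(0, 1 + length_days / 50, size=500)
    print(length_days, error_stats(diffs), bootstrap_error_bars(diffs))
```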

  11. On the formulation of gravitational potential difference between the GRACE satellites based on energy integral in Earth fixed frame

    NASA Astrophysics Data System (ADS)

    Zeng, Y. Y.; Guo, J. Y.; Shang, K.; Shum, C. K.; Yu, J. H.

    2015-09-01

    Two methods for computing the gravitational potential difference (GPD) between the GRACE satellites using orbit data have been formulated based on the energy integral: one in the geocentric inertial frame (GIF) and another in the Earth fixed frame (EFF). Here we present a rigorous theoretical formulation in the EFF with particular emphasis on necessary approximations, provide a computational approach to mitigate the approximations to a negligible level, and verify our approach using simulations. We conclude that a term neglected or ignored in all former work without verification should be retained. In our simulations, 2 cycle-per-revolution (CPR) errors are present in the GPD computed using our formulation, and empirical removal of the 2 CPR and lower frequency errors can improve the precision of the Stokes coefficients (SCs) of degree 3 and above by 1-2 orders of magnitude. This is despite the fact that the result without removing these errors is already accurate enough. Furthermore, the relation between data errors and their influence on the GPD is analysed, and a formal examination is made of the possible precision that real GRACE data may attain. The result of removing 2 CPR errors may imply that, if not taken care of properly, the values of SCs computed by means of the energy integral method using real GRACE data may be seriously corrupted by aliasing errors from possibly very large 2 CPR errors, based on two facts: (1) errors of C̄_{2,0} manifest as 2 CPR errors in the GPD, and (2) errors of C̄_{2,0} in GRACE data (the differences between the CSR monthly values of C̄_{2,0} independently determined using GRACE and SLR are a reasonable measure of their magnitude) are very large. Our simulations show that, if 2 CPR errors in the GPD vary from day to day as much as those corresponding to errors of C̄_{2,0} from month to month, the aliasing errors of degree 15 and above SCs computed using a month's GPD data may attain a level comparable to the magnitude of the gravitational potential variation signal that GRACE was designed to recover. Consequently, we conclude that aliasing errors from 2 CPR errors in real GRACE data may be very large if not properly handled; we therefore propose an approach to reduce aliasing errors from 2 CPR and lower frequency errors when computing SCs above degree 2.

  12. Probability distributions of molecular observables computed from Markov models. II. Uncertainties in observables and their time-evolution

    NASA Astrophysics Data System (ADS)

    Chodera, John D.; Noé, Frank

    2010-09-01

    Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise from both approximation and statistical error), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the statistical component of correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.
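
    The core idea, propagating counting uncertainty in a Markov model into an observable, can be sketched with a simple Dirichlet posterior over transition-matrix rows; the paper's method additionally enforces microscopic reversibility and includes per-state observable uncertainty, both of which this toy omits.

```python
# Sketch: Bayesian uncertainty in an equilibrium observable computed from a
# Markov (master-equation) model, by sampling transition matrices from a
# row-wise Dirichlet posterior over observed transition counts.
import numpy as np

def stationary_distribution(T):
    # Left eigenvector of the row-stochastic matrix T for eigenvalue ~1.
    evals, evecs = np.linalg.eig(T.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    return pi / pi.sum()

def sample_observable(counts, obs_per_state, n_samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    values = np.empty(n_samples)
    for i in range(n_samples):
        # Sample each transition-matrix row from Dirichlet(counts_row + 1).
        T = np.vstack([rng.dirichlet(row + 1.0) for row in counts])
        values[i] = stationary_distribution(T) @ obs_per_state
    return values.mean(), values.std(ddof=1)          # mean and "error bar"

counts = np.array([[90, 10, 0], [5, 80, 15], [0, 20, 80]], dtype=float)
obs = np.array([0.1, 0.5, 0.9])                       # toy per-state signal
print(sample_observable(counts, obs))
```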

  13. Modified SPC for short run test and measurement process in multi-stations

    NASA Astrophysics Data System (ADS)

    Koh, C. K.; Chin, J. F.; Kamaruddin, S.

    2018-03-01

    Due to short production runs and the measurement error inherent in electronic test and measurement (T&M) processes, continuous quality monitoring through real-time statistical process control (SPC) is challenging. Industry practice allows the installation of a guard band using measurement uncertainty to reduce the width of the acceptance limit, as an indirect way to compensate for measurement errors. This paper presents a new SPC model combining a modified guard band and control charts (Z̄ chart and W chart) for short runs in T&M processes at multiple stations. The proposed model standardizes the observed value with the measurement target (T) and rationed measurement uncertainty (U). An S-factor (S_f) is introduced into the control limits to improve the sensitivity in detecting small shifts. The model was embedded in an automated quality control system and verified with a case study in real industry.

  14. Students' Understanding of Bar Graphs and Histograms: Results from the LOCUS Assessments

    ERIC Educational Resources Information Center

    Whitaker, Douglas; Jacobbe, Tim

    2017-01-01

    Bar graphs and histograms are core statistical tools that are widely used in statistical practice and commonly taught in classrooms. Despite their importance and the instructional time devoted to them, many students demonstrate misunderstandings when asked to read and interpret bar graphs and histograms. Much of the research that has been…

  15. Dalitz plot analysis of the decay B⁰(B̄⁰) → K±π∓π⁰

    DOE PAGES

    Aubert, B.; Bona, M.; Karyotakis, Y.; ...

    2008-09-12

    Here, we report a Dalitz-plot analysis of the charmless hadronic decays of neutral B mesons to K±π∓π⁰. With a sample of (231.8 ± 2.6) × 10⁶ Υ(4S) → BB̄ decays collected by the BABAR detector at the PEP-II asymmetric-energy B Factory at SLAC, we measure the magnitudes and phases of the intermediate resonant and nonresonant amplitudes for B⁰ and B̄⁰ decays and determine the corresponding CP-averaged branching fractions and charge asymmetries. Furthermore, we measure the inclusive branching fraction and CP-violating charge asymmetry and find them to be B(B⁰ → K⁺π⁻π⁰) = (35.7 +2.6/-1.5 ± 2.2) × 10⁻⁶ and A_CP = -0.030 +0.045/-0.051 ± 0.055, where the first errors are statistical and the second systematic. We observe the decay B⁰ → K*⁰(892)π⁰ with the branching fraction B(B⁰ → K*⁰(892)π⁰) = (3.6 +0.7/-0.8 ± 0.4) × 10⁻⁶. This measurement differs from zero by 5.6 standard deviations (including the systematic uncertainties). The selected sample also contains B⁰ → D̄⁰π⁰ decays where D̄⁰ → K⁺π⁻, and we measure B(B⁰ → D̄⁰π⁰) = (2.93 ± 0.17 ± 0.18) × 10⁻⁴.

  16. Constraining the mass–richness relationship of redMaPPer clusters with angular clustering

    DOE PAGES

    Baxter, Eric J.; Rozo, Eduardo; Jain, Bhuvnesh; ...

    2016-08-04

    The potential of using cluster clustering for calibrating the mass–richness relation of galaxy clusters has been recognized theoretically for over a decade. In this paper, we demonstrate the feasibility of this technique to achieve high-precision mass calibration using redMaPPer clusters in the Sloan Digital Sky Survey North Galactic Cap. By including cross-correlations between several richness bins in our analysis, we significantly improve the statistical precision of our mass constraints. The amplitude of the mass–richness relation is constrained to 7 per cent statistical precision by our analysis. However, the error budget is systematics dominated, reaching a 19 per cent total error that is dominated by theoretical uncertainty in the bias–mass relation for dark matter haloes. We confirm the result from Miyatake et al. that the clustering amplitude of redMaPPer clusters depends on galaxy concentration as defined therein, and we provide additional evidence that this dependence cannot be sourced by mass dependences: some other effect must account for the observed variation in clustering amplitude with galaxy concentration. Assuming that the observed dependence of redMaPPer clustering on galaxy concentration is a form of assembly bias, we find that such effects introduce a systematic error on the amplitude of the mass–richness relation that is comparable to the error bar from statistical noise. Finally, the results presented here demonstrate the power of cluster clustering for mass calibration and cosmology provided the current theoretical systematics can be ameliorated.

  17. Large-pT processes in pp̄ collisions at 2 TeV: measurement of the tt̄ production cross section in pp̄ collisions at √s = 1.96 TeV in the dielectron final states at the D0 experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Ashish; /Delhi U.

    2005-10-01

    The measurement of the top-antitop pair production cross section in pp̄ collisions at √s = 1.96 TeV in the dielectron decay channel using 384 pb⁻¹ of D0 data yields a tt̄ production cross section of σ_tt̄ = 7.9 +5.2/-3.8 (stat) +1.3/-1.0 (syst) ± 0.5 (lumi) pb. This measurement [98] is based on 5 observed events with a prediction of 1.04 background events. The cross section corresponds to a top mass of 175 GeV, and is in good agreement with the Standard Model expectation of 6.77 ± 0.42 pb based on next-to-next-to-leading-order (NNLO) perturbative QCD calculations [78]. This analysis shows significant improvement over our previous cross-section measurement in this channel [93] with a 230 pb⁻¹ dataset, in terms of a significantly better signal-to-background ratio and smaller uncertainties on the measured cross section. Combination of all the dilepton final states [98] yields a tt̄ cross section of σ_tt̄ = 8.6 +2.3/-2.0 (stat) +1.2/-1.0 (syst) ± 0.6 (lumi) pb, which again is in good agreement with theoretical predictions and with measurements in other final states. Hence, these results show no discernible deviation from the Standard Model. Fig. 6.1 shows the summary of cross-section measurements in different final states by D0 in Run II. This measurement of the cross section in the dilepton channels is the best dilepton result from D0 to date. The previous D0 result, based on an analysis of 230 pb⁻¹ of data (currently under publication in Physics Letters B), is σ_tt̄ = 8.6 +3.2/-2.7 (stat) +1.1/-1.1 (syst) ± 0.6 (lumi) pb. It can be seen that the present cross section suffers from less statistical uncertainty. This result is also quite consistent with the CDF collaboration's result of σ_tt̄ = 8.6 +2.5/-2.4 (stat) +1.1/-1.1 (syst) pb. These results were presented as D0's preliminary results at high-energy physics conferences in the summer of 2005 (Hadron Collider Physics Symposium, European Physical Society Conference, etc.). The uncertainty on the cross section is still dominated by statistics due to the small number of observed events. It can be seen that we are at a level where statistical uncertainties are becoming closer to the systematic ones. Future measurements of the cross section will benefit from considerably more integrated luminosity, leading to a smaller statistical error. Thus the next generation of measurements will be limited by systematic uncertainties. Monte Carlo samples with higher statistics are also being generated in order to decrease the uncertainty on the background estimation. In addition, as the jet energy scale, the electron energy scale, the detector resolutions, and the luminosity measurement are fine-tuned, the systematic uncertainties will continue to decrease.

  18. Astrostatistics in X-ray Astronomy: Systematics and Calibration

    NASA Astrophysics Data System (ADS)

    Siemiginowska, Aneta; Kashyap, Vinay; CHASC

    2014-01-01

    Astrostatistics has been emerging as a new field in X-ray and gamma-ray astronomy, driven by the analysis challenges arising from data collected by high performance missions since the beginning of this century. The development and implementation of new analysis methods and techniques require a close collaboration between astronomers and statisticians, and require support from a reliable and continuous funding source. The NASA AISR program was one such, and played a crucial part in our work. Our group (CHASC; http://heawww.harvard.edu/AstroStat/), composed of a mixture of high energy astrophysicists and statisticians, was formed ~15 years ago to address specific issues related to Chandra X-ray Observatory data (Siemiginowska et al. 1997) and was initially fully supported by Chandra. We have developed several statistical methods that have laid the foundation for extensive application of Bayesian methodologies to Poisson data in high-energy astrophysics. I will describe one such project, on dealing with systematic uncertainties (Lee et al. 2011, ApJ), and present the implementation of the method in Sherpa, the CIAO modeling and fitting application. This algorithm propagates systematic uncertainties in instrumental responses (e.g., ARFs) through the Sherpa spectral modeling chain to obtain realistic error bars on model parameters when the data quality is high. Recent developments include the ability to narrow the space of allowed calibration and obtain better parameter estimates as well as tighter error bars. Acknowledgements: This research is funded in part by NASA contract NAS8-03060. References: Lee, H., Kashyap, V.L., van Dyk, D.A., et al. 2011, ApJ, 731, 126; Siemiginowska, A., Elvis, M., Connors, A., et al. 1997, Statistical Challenges in Modern Astronomy II, 241

  19. Association between workarounds and medication administration errors in bar-code-assisted medication administration in hospitals.

    PubMed

    van der Veen, Willem; van den Bemt, Patricia M L A; Wouters, Hans; Bates, David W; Twisk, Jos W R; de Gier, Johan J; Taxis, Katja; Duyvendak, Michiel; Luttikhuis, Karen Oude; Ros, Johannes J W; Vasbinder, Erwin C; Atrafi, Maryam; Brasse, Bjorn; Mangelaars, Iris

    2018-04-01

    To study the association of workarounds with medication administration errors using barcode-assisted medication administration (BCMA), and to determine the frequency and types of workarounds and medication administration errors. A prospective observational study in Dutch hospitals using BCMA to administer medication. Direct observation was used to collect data. The primary outcome measure was the proportion of medication administrations with one or more medication administration errors. The secondary outcome was the frequency and types of workarounds and medication administration errors. Univariate and multivariate multilevel logistic regression analyses were used to assess the association between workarounds and medication administration errors. Descriptive statistics were used for the secondary outcomes. We included 5793 medication administrations for 1230 inpatients. Workarounds were associated with medication administration errors (adjusted odds ratio 3.06 [95% CI: 2.49-3.78]). Most commonly, procedural workarounds were observed, such as not scanning at all (36%), not scanning patients because they did not wear a wristband (28%), incorrect medication scanning, multiple medication scanning, and ignoring alert signals (11%). Common types of medication administration errors were omissions (78%), administration of non-ordered drugs (8.0%), and wrong doses given (6.0%). Workarounds are associated with medication administration errors in hospitals using BCMA. These data suggest that BCMA needs more post-implementation evaluation if it is to achieve the intended benefits for medication safety. In hospitals using barcode-assisted medication administration, workarounds occurred in 66% of medication administrations and were associated with large numbers of medication administration errors.

  20. Statistical inference with quantum measurements: methodologies for nitrogen vacancy centers in diamond

    NASA Astrophysics Data System (ADS)

    Hincks, Ian; Granade, Christopher; Cory, David G.

    2018-01-01

    The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
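
    A stripped-down version of the inference problem, a biased coin observed through Poisson photon counts, is sketched below. The bright and dark count rates are invented for illustration, and effects such as dark counts and imperfect state preparation mentioned above are omitted.

```python
# Sketch: maximum-likelihood estimate of a spin-state ("coin") probability p
# from Poisson-distributed photon counts, where each shot emits counts at
# either a bright rate (with probability p) or a dark rate (with probability 1-p).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

ALPHA_BRIGHT, ALPHA_DARK = 120.0, 80.0          # hypothetical mean counts per shot

def neg_log_likelihood(p, counts):
    # Mixture of two Poisson distributions with known rates.
    mix = p * poisson.pmf(counts, ALPHA_BRIGHT) + (1 - p) * poisson.pmf(counts, ALPHA_DARK)
    return -np.log(mix).sum()

def estimate_p(counts):
    res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6),
                          args=(counts,), method="bounded")
    return res.x

rng = np.random.default_rng(7)
true_p = 0.3
bright = rng.random(500) < true_p               # simulate the per-shot spin state
counts = rng.poisson(np.where(bright, ALPHA_BRIGHT, ALPHA_DARK))
print(estimate_p(counts))                       # should be close to 0.3
```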

  1. Reduction in specimen labeling errors after implementation of a positive patient identification system in phlebotomy.

    PubMed

    Morrison, Aileen P; Tanasijevic, Milenko J; Goonan, Ellen M; Lobo, Margaret M; Bates, Michael M; Lipsitz, Stuart R; Bates, David W; Melanson, Stacy E F

    2010-06-01

    Ensuring accurate patient identification is central to preventing medical errors, but it can be challenging. We implemented a bar code-based positive patient identification system for use in inpatient phlebotomy. A before-after design was used to evaluate the impact of the identification system on the frequency of mislabeled and unlabeled samples reported in our laboratory. Labeling errors fell from 5.45 in 10,000 before implementation to 3.2 in 10,000 afterward (P = .0013). An estimated 108 mislabeling events were prevented by the identification system in 1 year. Furthermore, a workflow step requiring manual preprinting of labels, which was accompanied by potential labeling errors in about one quarter of blood "draws," was removed as a result of the new system. After implementation, a higher percentage of patients reported having their wristband checked before phlebotomy. Bar code technology significantly reduced the rate of specimen identification errors.

  2. High-resolution smile measurement and control of wavelength-locked QCW and CW laser diode bars

    NASA Astrophysics Data System (ADS)

    Rosenkrantz, Etai; Yanson, Dan; Klumel, Genady; Blonder, Moshe; Rappaport, Noam; Peleg, Ophir

    2018-02-01

    High-power linewidth-narrowed applications of laser diode arrays demand high beam quality in the fast, or vertical, axis. This requires very high fast-axis collimation (FAC) quality with sub-mrad angular errors, especially where laser diode bars are wavelength-locked by a volume Bragg grating (VBG) to achieve high pumping efficiency in solid-state and fiber lasers. The micron-scale height deviation of emitters in a bar against the FAC lens causes the so-called smile effect with variable beam pointing errors and wavelength locking degradation. We report a bar smile imaging setup allowing FAC-free smile measurement in both QCW and CW modes. By Gaussian beam simulation, we establish optimum smile imaging conditions to obtain high resolution and accuracy with well-resolved emitter images. We then investigate the changes in the smile shape and magnitude under thermal stresses such as variable duty cycles in QCW mode and, ultimately, CW operation. Our smile measurement setup provides useful insights into the smile behavior and correlation between the bar collimation in QCW mode and operating conditions under CW pumping. With relaxed alignment tolerances afforded by our measurement setup, we can screen bars for smile compliance and potential VBG lockability prior to assembly, with benefits in both lower manufacturing costs and higher yield.

  3. Some conservative estimates in quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2006-08-15

    A relationship is established between the security of the BB84 quantum key distribution protocol and the forward and converse coding theorems for quantum communication channels. The upper bound Q_c ≈ 11% on the bit error rate compatible with secure key distribution is determined by solving the transcendental equation H(Q_c) = C̄(ρ)/2, where ρ is the density matrix of the input ensemble, C̄(ρ) is the classical capacity of a noiseless quantum channel, and H(Q) is the capacity of a classical binary symmetric channel with error rate Q.
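
    The quoted ~11% bound can be reproduced numerically. The sketch below solves H(Q_c) = 1/2 for the binary entropy H, which corresponds to taking C̄(ρ) = 1; that normalization is an assumption made here purely for illustration.

```python
# Sketch: solve the transcendental equation H(Q_c) = 1/2, where H is the
# binary (Shannon) entropy, giving the ~11% bit-error-rate bound quoted above.
import numpy as np
from scipy.optimize import brentq

def binary_entropy(q):
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

# H is monotonically increasing on (0, 1/2), so a bracketing root-finder works.
q_c = brentq(lambda q: binary_entropy(q) - 0.5, 1e-9, 0.5 - 1e-9)
print(f"Q_c ≈ {q_c:.4f}")   # ≈ 0.1100, i.e. about 11%
```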

  4. Instrument Reflections and Scene Amplitude Modulation in a Polychromatic Microwave Quadrature Interferometer

    NASA Technical Reports Server (NTRS)

    Dobson, Chris C.; Jones, Jonathan E.; Chavers, Greg

    2003-01-01

    A polychromatic microwave quadrature interferometer has been characterized using several laboratory plasmas. Reflections between the transmitter and the receiver have been observed, and the effects of including reflection terms in the data reduction equation have been examined. An error analysis which includes the reflections, modulation of the scene beam amplitude by the plasma, and simultaneous measurements at two frequencies has been applied to the empirical database, and the results are summarized. For reflection amplitudes around 10%, the reflection terms were found to reduce the calculated error bars for electron density measurements by about a factor of 2. The impact of amplitude modulation is also quantified. In the complete analysis, the mean error bar for high-density measurements is 7.5%, and the mean phase shift error for low-density measurements is 1.2°.

  5. The Thurgood Marshall School of Law Empirical Findings: A Report of the Statistical Analysis of the July 2010 TMSL Texas Bar Results

    ERIC Educational Resources Information Center

    Kadhi, Tau; Holley, D.

    2010-01-01

    The following report gives the statistical findings of the July 2010 TMSL Bar results. Procedures: Data is pre-existing and was given to the Evaluator by email from the Registrar and Dean. Statistical analyses were run using SPSS 17 to address the following research questions: 1. What are the statistical descriptors of the July 2010 overall TMSL…

  6. A Measurement of Gravitational Lensing of the Cosmic Microwave Background by Galaxy Clusters Using Data from the South Pole Telescope

    DOE PAGES

    Baxter, E. J.; Keisler, R.; Dodelson, S.; ...

    2015-06-22

    Clusters of galaxies are expected to gravitationally lens the cosmic microwave background (CMB) and thereby generate a distinct signal in the CMB on arcminute scales. Measurements of this effect can be used to constrain the masses of galaxy clusters with CMB data alone. Here we present a measurement of lensing of the CMB by galaxy clusters using data from the South Pole Telescope (SPT). We also develop a maximum likelihood approach to extract the CMB cluster lensing signal and validate the method on mock data. We quantify the effects on our analysis of several potential sources of systematic error and find that they generally act to reduce the best-fit cluster mass. It is estimated that this bias to lower cluster mass is roughly 0.85σ in units of the statistical error bar, although this estimate should be viewed as an upper limit. Furthermore, we apply our maximum likelihood technique to 513 clusters selected via their Sunyaev–Zeldovich (SZ) signatures in SPT data, and rule out the null hypothesis of no lensing at 3.1σ. The lensing-derived mass estimate for the full cluster sample is consistent with that inferred from the SZ flux: M_200,lens = 0.83 +0.38/-0.37 M_200,SZ (68% C.L., statistical error only).

  7. The intrinsic three-dimensional shape of galactic bars

    NASA Astrophysics Data System (ADS)

    Méndez-Abreu, J.; Costantin, L.; Aguerri, J. A. L.; de Lorenzo-Cáceres, A.; Corsini, E. M.

    2018-06-01

    We present the first statistical study on the intrinsic three-dimensional (3D) shape of a sample of 83 galactic bars extracted from the CALIFA survey. We use the galaXYZ code to derive the bar intrinsic shape with a statistical approach. The method uses only the geometric information (ellipticities and position angles) of bars and discs obtained from a multi-component photometric decomposition of the galaxy surface-brightness distributions. We find that bars are predominantly prolate-triaxial ellipsoids (68%), with a small fraction of oblate-triaxial ellipsoids (32%). The typical flattening (intrinsic C/A semiaxis ratio) of the bars in our sample is 0.34, which matches well the typical intrinsic flattening of stellar discs at these galaxy masses. We demonstrate that, for prolate-triaxial bars, the intrinsic shape of bars depends on the galaxy Hubble type and stellar mass (bars in massive S0 galaxies are thicker and more circular than those in less massive spirals). The bar intrinsic shape correlates with bulge, disc, and bar parameters, in particular with the bulge-to-total (B/T) luminosity ratio, the disc g - r color, and the central surface brightness of the bar, confirming the tight link between bars and their host galaxies. Combining the probability distributions of the intrinsic shape of bulges and bars in our sample, we show that 52% (16%) of bulges are thicker (flatter) than the surrounding bar at the 1σ level. We suggest that these percentages might be representative of the fraction of classical and disc-like bulges in our sample, respectively.

  8. Optics measurement algorithms and error analysis for the proton energy frontier

    NASA Astrophysics Data System (ADS)

    Langner, A.; Tomás, R.

    2015-03-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed; thanks to the improved algorithms, the derived optical parameters reach a significantly higher precision, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved to be fundamental in the understanding of emittance evolution during the energy ramp.

  9. The p-Value You Can't Buy.

    PubMed

    Demidenko, Eugene

    2016-01-02

    There is growing frustration with the concept of the p-value. Besides having an ambiguous interpretation, the p-value can be made as small as desired by increasing the sample size, n. The p-value is outdated and does not make sense with big data: everything becomes statistically significant. The root of the problem with the p-value is in the mean comparison. We argue that statistical uncertainty should be measured on the individual, not the group, level. Consequently, standard deviation (SD), not standard error (SE), error bars should be used to graphically present the data on two groups. We introduce a new measure based on the discrimination of individuals/objects from two groups, and call it the D-value. The D-value can be viewed as the n-of-1 p-value because it is computed in the same way as p while letting n equal 1. We show how the D-value is related to discrimination probability and the area above the receiver operating characteristic (ROC) curve. The D-value has a clear interpretation as the proportion of patients who get worse after the treatment, and as such makes it easier to weigh up the likelihood of events under different scenarios.
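
    A rough illustration of the distinction, under a normal two-group model and following the description above that the D-value is computed like a p-value with n set to 1, is sketched below; it is an interpretation for illustration, not the authors' reference implementation.

```python
# Sketch: p-value vs. "D-value" for a two-group comparison under a normal model.
# The p-value standardizes the mean difference by the standard error (shrinks
# with n); the D-value uses the standard deviation (does not shrink with n).
import numpy as np
from scipy import stats

def p_and_d_value(group1, group2):
    diff = abs(np.mean(group1) - np.mean(group2))
    sd = np.sqrt((np.var(group1, ddof=1) + np.var(group2, ddof=1)) / 2)  # pooled SD
    n = min(len(group1), len(group2))
    p_value = 2 * stats.norm.sf(diff / (sd * np.sqrt(2.0 / n)))   # SE-based
    d_value = 2 * stats.norm.sf(diff / (sd * np.sqrt(2.0)))       # "n = 1" version
    return p_value, d_value

rng = np.random.default_rng(11)
a = rng.normal(0.0, 1.0, size=10_000)      # large n: p becomes tiny,
b = rng.normal(0.1, 1.0, size=10_000)      # yet individuals barely differ
print(p_and_d_value(a, b))
```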

  10. Sediment Transport Variability in Global Rivers: Implications for the Interpretation of Paleoclimate Signals

    NASA Astrophysics Data System (ADS)

    Syvitski, J. P.; Hutton, E. W.

    2001-12-01

    A new numerical approach (HydroTrend, v.2) allows the daily flux of sediment to be estimated for any river, whether gauged or not. The model can be driven by actual climate measurements (precipitation, temperature) or with statistical estimates of climate (modeled climate, remotely-sensed climate). In both cases, the character (e.g. soil depth, relief, vegetation index) of the drainage terrain is needed to complete the model domain. The HydroTrend approach allows us to examine the effects of climate on the supply of sediment to continental margins, and the nature of supply variability. A new relationship is defined as Q_s = f(Ψ) Q̄_s (Q/Q̄)^(c ± σ), where Q̄_s is the long-term sediment load, Q̄ is the long-term discharge, c and σ are the mean and standard deviation of the inter-annual variability of the rating coefficient, and Ψ captures the measurement errors associated with Q and Q_s and the annual transients affecting the supply of sediment, including sediment and water source and river (flood-wave) dynamics; f = f(Ψ, s). Smaller-discharge rivers have larger values of s, and s asymptotes to a small but consistent value for larger-discharge rivers. The coefficient c is directly proportional to the long-term suspended load (Q̄_s) and basin relief (R), and inversely proportional to mean annual temperature (T). σ is directly proportional to the mean annual discharge. The long-term sediment load is given by Q̄_s = a R^1.5 A^0.5 T_T, where a is a global constant, A is basin area, and T_T is a function of mean annual temperature. This new approach provides estimates of sediment flux at the dynamic (daily) level and provides us a means to experiment on the sensitivity of marine sedimentary deposits in recording a paleoclimate signal. In addition the method provides us with spatial estimates for the flux of sediment to the coastal zone at the global scale.
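
    The structure of the relationship can be illustrated with a toy calculation. All constants below (the global constant a, the temperature dependence, and the rating-coefficient statistics) are invented placeholders, not calibrated HydroTrend values.

```python
# Sketch of the sediment-rating relationship described above:
#   Qs = Psi * Qs_bar * (Q / Q_bar)**C, with C drawn around its mean c with
#   spread sigma, and Qs_bar = a * R**1.5 * A**0.5 * f(T).
# All numerical constants are placeholders for illustration only.
import numpy as np

A_CONST = 2e-5                                 # placeholder global constant "a"

def temperature_factor(T):
    return np.exp(0.06 * T)                    # placeholder stand-in for T_T

def long_term_load(relief_m, area_km2, mean_temp_c):
    return A_CONST * relief_m**1.5 * area_km2**0.5 * temperature_factor(mean_temp_c)

def daily_load(q_daily, q_bar, qs_bar, c=1.4, sigma=0.15, psi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # The paper defines c +/- sigma inter-annually; drawn per time step here
    # only to keep the example short.
    C = rng.normal(c, sigma, size=len(q_daily))
    return psi * qs_bar * (q_daily / q_bar) ** C

q = np.array([80.0, 120.0, 400.0, 95.0])       # toy daily discharges (m^3/s)
qs_bar = long_term_load(relief_m=1500.0, area_km2=2.0e4, mean_temp_c=8.0)
print(qs_bar, daily_load(q, q_bar=q.mean(), qs_bar=qs_bar))
```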

  11. Introduction to Statistics. Learning Packages in the Policy Sciences Series, PS-26. Revised Edition.

    ERIC Educational Resources Information Center

    Policy Studies Associates, Croton-on-Hudson, NY.

    The primary objective of this booklet is to introduce students to basic statistical skills that are useful in the analysis of public policy data. A few, selected statistical methods are presented, and theory is not emphasized. Chapter 1 provides instruction for using tables, bar graphs, bar graphs with grouped data, trend lines, pie diagrams,…

  12. The Thurgood Marshall School of Law Empirical Findings: A Report of the Statistical Analysis of the February 2010 TMSL Texas Bar Results

    ERIC Educational Resources Information Center

    Kadhi, T.; Holley, D.; Rudley, D.; Garrison, P.; Green, T.

    2010-01-01

    The following report gives the statistical findings of the 2010 Thurgood Marshall School of Law (TMSL) Texas Bar results. This data was pre-existing and was given to the Evaluator by email from the Dean. Then, in-depth statistical analyses were run using the SPSS 17 to address the following questions: 1. What are the statistical descriptors of the…

  13. Mechanical design of deformation compensated flexural pivots structured for linear nanopositioning stages

    DOEpatents

    Shu, Deming; Kearney, Steven P.; Preissner, Curt A.

    2015-02-17

    A method and deformation compensated flexural pivots structured for precision linear nanopositioning stages are provided. A deformation-compensated flexural linear guiding mechanism includes a basic parallel mechanism including a U-shaped member and a pair of parallel bars linked to respective pairs of I-link bars and each of the I-bars coupled by a respective pair of flexural pivots. The basic parallel mechanism includes substantially evenly distributed flexural pivots minimizing center shift dynamic errors.

  14. Use the Bar Code System to Improve Accuracy of the Patient and Sample Identification.

    PubMed

    Chuang, Shu-Hsia; Yeh, Huy-Pzu; Chi, Kun-Hung; Ku, Hsueh-Chen

    2018-01-01

    Timely and correct sample collection is highly related to patient safety. The sample error rate was 11.1% during January to April 2016, owing to mislabeled patient information and wrong sample containers. We developed a bar code based "Specimens Identify System" through process reengineering of TRM, using bar code scanners, added sample container instructions, and a mobile app. In conclusion, the bar code system improved patient safety and created a green environment.

  15. Introducing 3D U-statistic method for separating anomaly from background in exploration geochemical data with associated software development

    NASA Astrophysics Data System (ADS)

    Ghannadpour, Seyyed Saeed; Hezarkhani, Ardeshir

    2016-03-01

    The U-statistic method is one of the most important structural methods to separate anomaly from background. It considers the location of samples and carries out the statistical analysis of the data without judging from a geochemical point of view, trying to separate subpopulations and determine anomalous areas. In the present study, to use the U-statistic method in a three-dimensional (3D) setting, the U-statistic is applied to the grades of two ideal test examples while considering the sample Z values (elevation). This is the first time that the method has been applied in a 3D setting. To evaluate the performance of the 3D U-statistic method, and in order to compare it with a non-structural method, the threshold-assessment method based on the median and standard deviation (MSD method) is applied to the two test examples. Results show that the samples indicated as anomalous by the U-statistic method are more regular and involve less dispersion than those indicated by the MSD method, so that, according to the location of the anomalous samples, their denser areas can be determined as promising zones. Moreover, results show that at a threshold of U = 0, the total error of misclassification for the U-statistic method is much smaller than the total error of the x̄ + n × s criterion. Finally, a 3D model of the two test examples for separating anomaly from background using the 3D U-statistic method is provided. The source code for a software program, developed in the MATLAB programming language to perform the calculations of the 3D U-spatial statistic method, is additionally provided. This software is compatible with all geochemical varieties and can be used in similar exploration projects.
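
    For comparison, the non-structural x̄ + n × s threshold referenced above is straightforward to compute; the sketch below flags samples whose grade exceeds the threshold, using synthetic grades and an arbitrarily chosen multiplier n.

```python
# Sketch: mean-plus-n-standard-deviations threshold for separating geochemical
# anomaly from background, the non-structural criterion referenced above.
import numpy as np

def msd_anomalies(grades, n=2.0):
    threshold = grades.mean() + n * grades.std(ddof=1)   # x_bar + n * s
    return threshold, grades > threshold                 # boolean anomaly mask

rng = np.random.default_rng(5)
background = rng.lognormal(mean=1.0, sigma=0.3, size=500)  # toy background grades
anomaly = rng.lognormal(mean=2.0, sigma=0.2, size=20)      # toy anomalous grades
grades = np.concatenate([background, anomaly])
thr, mask = msd_anomalies(grades, n=2.0)
print(f"threshold = {thr:.2f}, flagged = {mask.sum()} samples")
```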

  16. Validation of instrumentation to monitor dynamic performance of olympic weightlifters.

    PubMed

    Bruenger, Adam J; Smith, Sarah L; Sands, William A; Leigh, Michael R

    2007-05-01

    The purpose of this study was to validate the accuracy and reliability of the Weightlifting Video Overlay System (WVOS) used by coaches and sport biomechanists at the United States Olympic Training Center. Static trials with the bar set at specific positions and dynamic trials of a power snatch were performed. Static and dynamic values obtained by the WVOS were compared with values obtained by tape measure and standard video kinematic analysis. Coordinate positions (horizontal [X] and vertical [Y]) were compared on both ends (left and right) of the bar. The absolute technical error of measurement between WVOS and kinematic values was calculated (0.97 cm [left X], 0.98 cm [right X], 0.88 cm [left Y], and 0.53 cm [right Y]) for the static data. Pearson correlations for all dynamic trials exceeded r = 0.88. The greatest discrepancies between the two measuring systems were found to occur when there was twisting of the bar during the performance. This error was probably due to the location on the bar where the coordinates were measured. The WVOS appears to provide accurate position information when compared with standard kinematics; however, care must be taken in evaluating position measurements if there is a significant amount of twisting in the movement. The WVOS appears to be reliable and valid within reasonable error limits for the determination of weightlifting movement technique.

  17. Predicting Error Bars for QSAR Models

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14,556 drug discovery compounds from Bayer Schering Pharma. A blind test was conducted using 7,013 new measurements from recent months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.
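
    Gaussian Process models provide per-compound error bars directly as the predictive standard deviation. A minimal scikit-learn sketch with synthetic descriptors (the Bayer Schering data are not public) might look as follows; the kernel choice and the 95% coverage check are illustrative assumptions, not the paper's exact setup.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))            # stand-in molecular descriptors
    y = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.1]) + rng.normal(0, 0.3, 200)

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X[:150], y[:150])

    # predictive mean and standard deviation; the latter is the per-compound error bar
    mean, std = gp.predict(X[150:], return_std=True)
    inside = np.abs(y[150:] - mean) <= 1.96 * std
    print("fraction of test compounds inside the 95% band:", inside.mean())
    ```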

  18. Error-Detecting Identification Codes for Algebra Students.

    ERIC Educational Resources Information Center

    Sutherland, David C.

    1990-01-01

    Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)
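
    As a concrete illustration of the check-digit arithmetic such codes rely on, the sketch below validates an ISBN-10 (weighted sum divisible by 11) and a UPC-A (3-1 weighted sum divisible by 10); both rules are standard, although the article itself frames them in linear-algebra terms.

    ```python
    def isbn10_valid(isbn: str) -> bool:
        """ISBN-10 check: the weighted sum 10*d1 + 9*d2 + ... + 1*d10 must be
        divisible by 11; 'X' stands for the value 10 in the last position."""
        digits = [10 if c in "Xx" else int(c) for c in isbn if c.isalnum()]
        if len(digits) != 10:
            return False
        return sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0

    def upc_valid(upc: str) -> bool:
        """UPC-A check: 3*(sum of digits in odd positions) + (sum in even
        positions) must be divisible by 10 (positions counted from 1)."""
        digits = [int(c) for c in upc if c.isdigit()]
        if len(digits) != 12:
            return False
        return (3 * sum(digits[0::2]) + sum(digits[1::2])) % 10 == 0

    print(isbn10_valid("0-306-40615-2"))   # True: a classic ISBN-10 example
    print(upc_valid("036000291452"))       # True: passes the UPC-A check
    ```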

  19. A Substantive Process Analysis of Responses to Items from the Multistate Bar Examination

    ERIC Educational Resources Information Center

    Bonner, Sarah M.; D'Agostino, Jerome V.

    2012-01-01

    We investigated examinees' cognitive processes while they solved selected items from the Multistate Bar Exam (MBE), a high-stakes professional certification examination. We focused on ascertaining those mental processes most frequently used by examinees, and the most common types of errors in their thinking. We compared the relationships between…

  20. ENDF/B-IV fission-product files: summary of major nuclide data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    England, T.R.; Schenter, R.E.

    1975-09-01

    The major fission-product parameters [σ_th, RI, τ_1/2, Ē_β, Ē_γ, Ē_α, decay and (n,γ) branching, Q, and AWR] abstracted from ENDF/B-IV files for 824 nuclides are summarized. These data are most often requested by users concerned with reactor design, reactor safety, dose, and other sundry studies. The few known file errors are corrected to date. Tabular data are listed by increasing mass number. (auth)

  1. Random number generators tested on quantum Monte Carlo simulations.

    PubMed

    Hongo, Kenta; Maezono, Ryo; Miura, Kenichi

    2010-08-01

    We have tested and compared several (pseudo) random number generators (RNGs) applied to a practical application: ground state energy calculations of molecules using variational and diffusion Monte Carlo methods. A new multiple recursive generator with 8th-order recursion (MRG8) and the Mersenne twister generator (MT19937) are tested and compared with the RANLUX generator at five luxury levels (RANLUX-[0-4]). Both MRG8 and MT19937 are shown to give the same total energy as that evaluated with RANLUX-4 (highest luxury level) within the statistical error bars, at less computational cost to generate the sequence. We also tested RANDU, a notoriously flawed implementation of a linear congruential generator (LCG), for comparison. (c) 2010 Wiley Periodicals, Inc.
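
    The comparison criterion, agreement of an estimate within its statistical error bar while only the underlying RNG stream changes, can be sketched with the bit generators that ship with NumPy (RANLUX and MRG8 are not among them, so MT19937, PCG64, and Philox stand in here); a toy integrand replaces the quantum Monte Carlo energy.

    ```python
    import numpy as np

    def mc_estimate(bit_generator, n=200_000):
        """Toy Monte Carlo integral of f(x)=x^2 on [0,1] with a one-sigma
        statistical error bar (exact value 1/3); only the RNG stream differs."""
        rng = np.random.Generator(bit_generator)
        samples = rng.random(n) ** 2
        return samples.mean(), samples.std(ddof=1) / np.sqrt(n)

    for name, bg in [("MT19937", np.random.MT19937(42)),
                     ("PCG64",   np.random.PCG64(42)),
                     ("Philox",  np.random.Philox(42))]:
        mean, err = mc_estimate(bg)
        print(f"{name:8s}  {mean:.5f} +/- {err:.5f}")
    ```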

  2. An optimal scheme for top quark mass measurement near the t t̄ threshold at future e⁺e⁻ colliders

    NASA Astrophysics Data System (ADS)

    Chen, Wei-Guo; Wan, Xia; Wang, You-Kai

    2018-05-01

    A top quark mass measurement scheme near the t t̄ production threshold in future e⁺e⁻ colliders, e.g. the Circular Electron Positron Collider (CEPC), is simulated. A χ² fitting method is adopted to determine the number of energy points to be taken and their locations. Our results show that the optimal energy point is located near the largest slope of the cross section vs. beam energy curve, and the most efficient scheme is to concentrate all luminosity on this single energy point in the case of one-parameter top mass fitting. This suggests that the so-called data-driven method could be the best choice for future real experimental measurements. Conveniently, the top mass statistical uncertainty can also be calculated directly from the error matrix even without any sampling and fitting. The agreement of the above two optimization methods has been checked. Our conclusion is that by taking 50 fb⁻¹ of total effective integrated luminosity data, the statistical uncertainty of the top potential-subtracted mass can be suppressed to about 7 MeV and the total uncertainty is about 30 MeV. This precision will help to identify the stability of the electroweak vacuum at the Planck scale. Supported by the National Science Foundation of China (11405102) and the Fundamental Research Funds for the Central Universities of China (GK201603027, GK201803019)

  3. How High Do Sandbars Grow?

    NASA Astrophysics Data System (ADS)

    Alexander, J. S.; McElroy, B. J.

    2015-12-01

    Bar forms in wide sandy rivers store sediment, control channel hydraulics, and are fundamental units of riverine ecosystems. Bar form height is often used as a measure of channel depth in ancient fluvial deposits and is also a crucially important measure of habitat quality in modern rivers. In the Great Plains of North America, priority bird species use emergent bars to nest, and sandbar heights are a direct predictor of flood hazard for bird nests. Our current understanding of controls on bar height is limited to a few datasets and ad hoc observations from specific settings. We here examine a new dataset of bar heights and explore models of bar growth. We present a bar height dataset from the Platte and Niobrara Rivers in Nebraska, and an unchannelized reach of the Missouri River along the Nebraska-South Dakota border. Bar height data are normalized by flow frequency, and we examine parsimonious statistical models between expected controls (depth, stage, discharge, flow duration, work, etc.) and maximum bar heights. From this we generate empirical-statistical models of maximum bar height for wide, sand-bedded rivers in the Great Plains of the United States and rivers of similar morphology elsewhere. Migration of bar forms is driven by downstream slip-face additions of sediment sourced from their stoss sides, but bars also sequester sediment and grow vertically and longitudinally. We explore our empirical data with a geometric-kinematic model of bar growth driven by sediment transport from smaller-scale bedforms. Our goal is to understand physical limitations on bar growth and geometry, with implications for interpreting the rock record and predicting physically-driven riverine habitat variables.

  4. Patient safety with blood products administration using wireless and bar-code technology.

    PubMed

    Porcella, Aleta; Walker, Kristy

    2005-01-01

    Supported by a grant from the Agency for Healthcare Research and Quality, a University of Iowa Hospitals and Clinics interdisciplinary research team created an online data-capture-response tool utilizing wireless mobile devices and bar code technology to track and improve the blood products administration process. The tool captures 1) sample collection, 2) sample arrival in the blood bank, 3) blood product dispensing from the blood bank, and 4) administration. At each step, the scanned patient wristband ID bar code is automatically compared to the scanned identification bar code on the requisition, sample, and/or product, and the system presents either a confirmation or an error message to the user. Following an eight-month, five-unit, staged pilot, a 'big bang' hospital-wide implementation occurred on February 7, 2005. Preliminary results from pilot data indicate that the new bar code process captures errors 3 to 10 times better than the old manual process.

  5. Computerized bar code-based blood identification systems and near-miss transfusion episodes and transfusion errors.

    PubMed

    Nuttall, Gregory A; Abenstein, John P; Stubbs, James R; Santrach, Paula; Ereth, Mark H; Johnson, Pamela M; Douglas, Emily; Oliver, William C

    2013-04-01

    To determine whether the use of a computerized bar code-based blood identification system resulted in a reduction in transfusion errors or near-miss transfusion episodes. Our institution instituted a computerized bar code-based blood identification system in October 2006. After institutional review board approval, we performed a retrospective study of transfusion errors from January 1, 2002, through December 31, 2005, and from January 1, 2007, through December 31, 2010. A total of 388,837 U were transfused during the 2002-2005 period. There were 6 misidentification episodes of a blood product being transfused to the wrong patient during that period (incidence of 1 in 64,806 U or 1.5 per 100,000 transfusions; 95% CI, 0.6-3.3 per 100,000 transfusions). There was 1 reported near-miss transfusion episode (incidence of 0.3 per 100,000 transfusions; 95% CI, <0.1-1.4 per 100,000 transfusions). A total of 304,136 U were transfused during the 2007-2010 period. There was 1 misidentification episode of a blood product transfused to the wrong patient during that period when the blood bag and patient's armband were scanned after starting to transfuse the unit (incidence of 1 in 304,136 U or 0.3 per 100,000 transfusions; 95% CI, <0.1-1.8 per 100,000 transfusions; P=.14). There were 34 reported near-miss transfusion errors (incidence of 11.2 per 100,000 transfusions; 95% CI, 7.7-15.6 per 100,000 transfusions; P<.001). Institution of a computerized bar code-based blood identification system was associated with a large increase in discovered near-miss events. Copyright © 2013 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
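
    The incidence figures quoted above are Poisson rates per 100,000 transfusions with exact confidence intervals. A short sketch using the Garwood (chi-square based) interval reproduces the pre-implementation numbers; SciPy is assumed.

    ```python
    from scipy.stats import chi2

    def poisson_rate_ci(events, units, per=100_000, alpha=0.05):
        """Exact (Garwood) Poisson confidence interval for an event rate,
        expressed per `per` units transfused."""
        lo = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / 2
        hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
        scale = per / units
        return events * scale, lo * scale, hi * scale

    # pre-implementation period reported in the abstract: 6 events in 388,837 units
    rate, lo, hi = poisson_rate_ci(6, 388_837)
    print(f"{rate:.1f} per 100,000 (95% CI {lo:.1f}-{hi:.1f})")
    ```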

  6. Reanalyzing the visible colors of Centaurs and KBOs: what is there and what we might be missing

    NASA Astrophysics Data System (ADS)

    Peixinho, Nuno; Delsanti, Audrey; Doressoundiram, Alain

    2015-05-01

    Since the discovery of the Kuiper belt, broadband surface colors were thoroughly studied as a first approximation to the object reflectivity spectra. Visible colors (BVRI) have proven to be a reasonable proxy for real spectra, which are rather linear in this range. In contrast, near-IR colors (JHK bands) could be misleading when absorption features of ices are present in the spectra. Although the physical and chemical information provided by colors are rather limited, broadband photometry remains the best tool for establishing the bulk surface properties of Kuiper belt objects (KBOs) and Centaurs. In this work, we explore for the first time general, recurrent effects in the study of visible colors that could affect the interpretation of the scientific results: i) how a correlation could be missed or weakened as a result of the data error bars; ii) the "risk" of missing an existing trend because of low sampling, and the possibility of making quantified predictions on the sample size needed to detect a trend at a given significance level - assuming the sample is unbiased; iii) the use of partial correlations to distinguish the mutual effect of two or more (physical) parameters; and iv) the sensitivity of the "reddening line" tool to the central wavelength of the filters used. To illustrate and apply these new tools, we have compiled the visible colors and orbital parameters of about 370 objects available in the literature - assumed, by default, as unbiased samples - and carried out a traditional analysis per dynamical family. Our results show in particular how a) data error bars impose a limit on the detectable correlations regardless of sample size and that therefore, once that limit is achieved, it is important to diminish the error bars, but it is pointless to enlarge the sampling with the same or larger errors; b) almost all dynamical families still require larger samplings to ensure the detection of correlations stronger than ±0.5, that is, correlations that may explain ~25% or more of the color variability; c) the correlation strength between (V - R) vs. (R - I) is systematically lower than the one between (B - V) vs. (V - R) and is not related with error-bar differences between these colors; d) it is statistically equivalent to use any of the different flavors of orbital excitation or collisional velocity parameters regarding the famous color-inclination correlation among classical KBOs - which no longer appears to be a strong correlation - whereas the inclination and Tisserand parameter relative to Neptune cannot be separated from one another; and e) classical KBOs are the only dynamical family that shows neither (B - V) vs. (V - R) nor (V - R) vs. (R - I) correlations. It therefore is the family with the most unpredictable visible surface reflectivities. Tables 4 and 5 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/577/A35
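
    Point (a), that measurement error bars cap the detectable correlation regardless of sample size, can be illustrated with a toy Monte Carlo: a fixed underlying color trend is degraded with progressively larger Gaussian error bars and the recovered Spearman correlation shrinks. The numbers below are synthetic and only illustrate the effect; they are not drawn from the compiled KBO sample.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(7)
    n = 80                                   # sample size typical of one dynamical family
    x = rng.normal(size=n)                   # e.g. an orbital parameter
    color = 0.5 * x + rng.normal(0, 0.5, n)  # an underlying color-parameter trend

    # degrade the colors with increasingly large measurement error bars
    for sigma in (0.0, 0.1, 0.3, 0.6):
        noisy = color + rng.normal(0, sigma, n)
        rho, p = spearmanr(x, noisy)
        print(f"error bar {sigma:.1f} mag: rho = {rho:+.2f}, p = {p:.3f}")
    ```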

  7. Measurement of elliptic flow of light nuclei at √s_NN = 200, 62.4, 39, 27, 19.6, 11.5, and 7.7 GeV at the BNL Relativistic Heavy Ion Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamczyk, L.; Adkins, J. K.; Agakishiev, G.

    Here we present measurements of second-order azimuthal anisotropy (v_2) at midrapidity (|y| < 1.0) for light nuclei d, t, ³He (for √s_NN = 200, 62.4, 39, 27, 19.6, 11.5, and 7.7 GeV), antinuclei d̄ (√s_NN = 200, 62.4, 39, 27, and 19.6 GeV), and antihelium-3 (√s_NN = 200 GeV) in the STAR (Solenoidal Tracker at RHIC) experiment. The v_2 for these light nuclei produced in heavy-ion collisions is compared with those for p and p̄. We observe mass ordering in nuclei v_2(p_T) at low transverse momenta (p_T < 2.0 GeV/c). We also find a centrality dependence of v_2 for d and d̄. The magnitude of v_2 for t and ³He agree within statistical errors. Light-nuclei v_2 are compared with predictions from a blast-wave model. Atomic mass number (A) scaling of light-nuclei v_2(p_T) seems to hold for p_T/A < 1.5 GeV/c. Results on light-nuclei v_2 from a transport-plus-coalescence model are consistent with the experimental measurements.

  8. Measurement of elliptic flow of light nuclei at √s_NN = 200, 62.4, 39, 27, 19.6, 11.5, and 7.7 GeV at the BNL Relativistic Heavy Ion Collider

    DOE PAGES

    Adamczyk, L.; Adkins, J. K.; Agakishiev, G.; ...

    2016-09-23

    Here we present measurements of second-order azimuthal anisotropy (v_2) at midrapidity (|y| < 1.0) for light nuclei d, t, ³He (for √s_NN = 200, 62.4, 39, 27, 19.6, 11.5, and 7.7 GeV), antinuclei d̄ (√s_NN = 200, 62.4, 39, 27, and 19.6 GeV), and antihelium-3 (√s_NN = 200 GeV) in the STAR (Solenoidal Tracker at RHIC) experiment. The v_2 for these light nuclei produced in heavy-ion collisions is compared with those for p and p̄. We observe mass ordering in nuclei v_2(p_T) at low transverse momenta (p_T < 2.0 GeV/c). We also find a centrality dependence of v_2 for d and d̄. The magnitude of v_2 for t and ³He agree within statistical errors. Light-nuclei v_2 are compared with predictions from a blast-wave model. Atomic mass number (A) scaling of light-nuclei v_2(p_T) seems to hold for p_T/A < 1.5 GeV/c. Results on light-nuclei v_2 from a transport-plus-coalescence model are consistent with the experimental measurements.

  9. Impact of Frequent Interruption on Nurses' Patient-Controlled Analgesia Programming Performance.

    PubMed

    Campoe, Kristi R; Giuliano, Karen K

    2017-12-01

    The purpose was to add to the body of knowledge regarding the impact of interruption on acute care nurses' cognitive workload, total task completion times, nurse frustration, and medication administration error while programming a patient-controlled analgesia (PCA) pump. Data support that the severity of medication administration error increases with the number of interruptions, which is especially critical during the administration of high-risk medications. Bar code technology, interruption-free zones, and medication safety vests have been shown to decrease administration-related errors. However, there are few published data regarding the impact of number of interruptions on nurses' clinical performance during PCA programming. Nine acute care nurses completed three PCA pump programming tasks in a simulation laboratory. Programming tasks were completed under three conditions where the number of interruptions varied between two, four, and six. Outcome measures included cognitive workload (six NASA Task Load Index [NASA-TLX] subscales), total task completion time (seconds), nurse frustration (NASA-TLX Subscale 6), and PCA medication administration error (incorrect final programming). Increases in the number of interruptions were associated with significant increases in total task completion time ( p = .003). We also found increases in nurses' cognitive workload, nurse frustration, and PCA pump programming errors, but these increases were not statistically significant. Complex technology use permeates the acute care nursing practice environment. These results add new knowledge on nurses' clinical performance during PCA pump programming and high-risk medication administration.

  10. Recent Measurement of Flavor Asymmetry of Antiquarks in the Proton by Drell–Yan Experiment SeaQuest at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagai, Kei

    A measurement of the flavor asymmetry of the antiquarks (d̄ and ū) in the proton is described in this thesis. The proton consists of three valence quarks, sea quarks, and gluons. Antiquarks in the proton are sea quarks. They are generated from gluon splitting: g → q + q̄. According to QCD (Quantum Chromodynamics), gluon splitting is independent of quark flavor. This suggests that the amounts of d̄ and ū should be the same in the proton. However, the NMC experiment at CERN found in 1991, using deep inelastic scattering, that the amount of d̄ is larger than that of ū in the proton. This result is obtained for d̄ and ū integrated over Bjorken x, the fraction of the proton's momentum carried by the parton. The NA51 experiment (x ~ 0.2) at CERN and the E866/NuSea experiment (0.015 < x < 0.35) at Fermilab measured the flavor asymmetry of the antiquarks (d̄/ū) in the proton as a function of x using the Drell–Yan process. The experiments reported that the flavor symmetry is broken over all measured x values. Understanding the flavor asymmetry of the antiquarks in the proton is a challenge for QCD. Theoretical investigation from the first principles of QCD, such as lattice QCD calculations, is important. In addition, QCD effective models and hadron models such as the meson cloud model can also be tested with the flavor asymmetry of antiquarks. From the experimental side, it is important to measure with higher accuracy and in a wider x range. The SeaQuest (E906) experiment measures d̄/ū at large x (0.15 < x < 0.45) accurately to understand its behavior. The SeaQuest experiment is a Drell–Yan experiment at Fermi National Accelerator Laboratory (Fermilab). In the Drell–Yan process of the proton-proton reaction, an antiquark in one proton and a quark in the other proton annihilate and create a virtual photon, which then decays into a muon pair (qq̄ → γ* → µ⁺µ⁻). The SeaQuest experiment uses a 120 GeV proton beam extracted from Fermilab's Main Injector. The proton beam interacts with hydrogen and deuterium targets. The SeaQuest spectrometer detects the muon pairs from the Drell–Yan process. The d̄/ū ratio at 0.1 < x < 0.58 is extracted from the number of detected Drell–Yan muon pairs. After the detector construction, commissioning run, and detector upgrade, the SeaQuest experiment started physics data acquisition in 2013. We have so far finished three periods of physics data acquisition; the fourth period is in progress. The detector construction, detector performance evaluation, data taking, and data analysis for the flavor asymmetry of the antiquarks d̄/ū in the proton are my contributions to SeaQuest. The cross section ratio of the Drell–Yan process in p-p and p-d reactions is obtained from dimuon yields. In an experiment with high beam intensity, it is important to control the tracking efficiency of charged particles through the magnetic spectrometer. The tracking efficiency depends on the chamber occupancy, and an appropriate method for the correction is important. The chamber occupancy is the number of hits in the drift chambers. A new method of correction for the tracking efficiency is developed based on the occupancy and applied to the data. This method reflects the real response of the drift chambers, so the systematic error is well controlled.
The flavor asymmetry of antiquarks is obtained at 0.1 < x < 0.58. At 0.1 < x < 0.45, the result is d̄/ū > 1. The result at 0.1 < x < 0.24 agrees with the E866 result. The result at x > 0.24, however, disagrees with the E866 result. The result at 0.45 < x < 0.58 is limited by the statistical errors. The extracted d̄/ū results are used to investigate the validity of the theoretical models. The present experimental result provides data points in a wide x region. It is useful for understanding the proton structure in the light of QCD and effective hadron models. The present result has a practical application as well. Antiquark distributions are important as inputs to simulations of hadron reactions such as W± production in various experiments. The new knowledge on antiquark distributions helps to improve the precision of such simulations.

  11. Predicting Error Bars for QSAR Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeter, Timon; Technische Universitaet Berlin, Department of Computer Science, Franklinstrasse 28/29, 10587 Berlin; Schwaighofer, Anton

    2007-09-18

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D_7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14,556 drug discovery compounds from Bayer Schering Pharma. A blind test was conducted using 7,013 new measurements from recent months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.

  12. Rate Constants for Fine-Structure Excitations in O - H Collisions with Error Bars Obtained by Machine Learning

    NASA Astrophysics Data System (ADS)

    Vieira, Daniel; Krems, Roman

    2017-04-01

    Fine-structure transitions in collisions of O(3Pj) with atomic hydrogen are an important cooling mechanism in the interstellar medium; knowledge of the rate coefficients for these transitions has a wide range of astrophysical applications. The accuracy of the theoretical calculation is limited by inaccuracy in the ab initio interaction potentials used in the coupled-channel quantum scattering calculations from which the rate coefficients can be obtained. In this work we use the latest ab initio results for the O(3Pj) + H interaction potentials to improve on previous calculations of the rate coefficients. We further present a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate coefficients to variations of the underlying adiabatic interaction potentials. To account for the inaccuracy inherent in the ab initio calculations we compute error bars for the rate coefficients corresponding to 20% variation in each of the interaction potentials. We obtain these error bars by fitting a Gaussian Process model to a data set of potential curves and rate constants. We use the fitted model to do sensitivity analysis, determining the relative importance of individual adiabatic potential curves to a given fine-structure transition. NSERC.

  13. A large-scale test of free-energy simulation estimates of protein-ligand binding affinities.

    PubMed

    Mikulskis, Paulius; Genheden, Samuel; Ryde, Ulf

    2014-10-27

    We have performed a large-scale test of alchemical perturbation calculations with the Bennett acceptance-ratio (BAR) approach to estimate relative affinities for the binding of 107 ligands to 10 different proteins. Employing 20-Å truncated spherical systems and only one intermediate state in the perturbations, we obtain an error of less than 4 kJ/mol for 54% of the studied relative affinities and a precision of 0.5 kJ/mol on average. However, only four of the proteins gave acceptable errors, correlations, and rankings. The results could be improved by using nine intermediate states in the simulations or including the entire protein in the simulations using periodic boundary conditions. However, 27 of the calculated affinities still gave errors of more than 4 kJ/mol, and for three of the proteins the results were not satisfactory. This shows that the performance of BAR calculations depends on the target protein and that several transformations gave poor results owing to limitations in the molecular-mechanics force field or the restricted sampling possible within a reasonable simulation time. Still, the BAR results are better than docking calculations for most of the proteins.
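
    The BAR estimator itself combines forward and reverse work values self-consistently; as a simpler, hedged stand-in, the sketch below uses one-sided exponential averaging (the Zwanzig/FEP formula) with a bootstrap error bar to show how a relative free energy and its precision are obtained from sampled energy differences. The energy differences are synthetic, and this is not the protocol of the study.

    ```python
    import numpy as np

    kT = 2.479  # kJ/mol near 298 K

    def fep_delta_f(delta_u):
        """Zwanzig exponential-averaging estimate of a free-energy difference
        from energy differences delta_u = U_1 - U_0 (kJ/mol) sampled in state 0."""
        return -kT * np.log(np.mean(np.exp(-delta_u / kT)))

    def bootstrap_error(delta_u, n_boot=2000, seed=0):
        """Bootstrap standard error of the FEP estimate."""
        rng = np.random.default_rng(seed)
        reps = [fep_delta_f(rng.choice(delta_u, delta_u.size, replace=True))
                for _ in range(n_boot)]
        return np.std(reps, ddof=1)

    rng = np.random.default_rng(1)
    delta_u = rng.normal(4.0, 2.0, 5000)      # hypothetical perturbation energies
    print(f"dF = {fep_delta_f(delta_u):.2f} +/- {bootstrap_error(delta_u):.2f} kJ/mol")
    ```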

  14. A neural network for real-time retrievals of PWV and LWP from Arctic millimeter-wave ground-based observations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cadeddu, M. P.; Turner, D. D.; Liljegren, J. C.

    2009-07-01

    This paper presents a new neural network (NN) algorithm for real-time retrievals of low amounts of precipitable water vapor (PWV) and integrated liquid water from millimeter-wave ground-based observations. Measurements are collected by the 183.3-GHz G-band vapor radiometer (GVR) operating at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility, Barrow, AK. The NN provides the means to explore the nonlinear regime of the measurements and investigate the physical boundaries of the operability of the instrument. A methodology to compute individual error bars associated with the NN output is developed, and a detailed error analysis of the network output is provided. Through the error analysis, it is possible to isolate several components contributing to the overall retrieval errors and to analyze the dependence of the errors on the inputs. The network outputs and associated errors are then compared with results from a physical retrieval and with the ARM two-channel microwave radiometer (MWR) statistical retrieval. When the NN is trained with a seasonal training data set, the retrievals of water vapor yield results that are comparable to those obtained from a traditional physical retrieval, with a retrieval error percentage of ~5% when the PWV is between 2 and 10 mm, but with the advantages that the NN algorithm does not require vertical profiles of temperature and humidity as input and is significantly faster computationally. Liquid water path (LWP) retrievals from the NN have a significantly improved clear-sky bias (mean of ~2.4 g/m²) and a retrieval error varying from 1 to about 10 g/m² when the PWV amount is between 1 and 10 mm. As an independent validation of the LWP retrieval, the longwave downwelling surface flux was computed and compared with observations. The comparison shows a significant improvement with respect to the MWR statistical retrievals, particularly for LWP amounts of less than 60 g/m².

  15. Fabricating CAD/CAM Implant-Retained Mandibular Bar Overdentures: A Clinical and Technical Overview.

    PubMed

    Goo, Chui Ling; Tan, Keson Beng Choon

    2017-01-01

    This report describes the clinical and technical aspects in the oral rehabilitation of an edentulous patient with knife-edge ridge at the mandibular anterior edentulous region, using implant-retained overdentures. The application of computer-aided design and computer-aided manufacturing (CAD/CAM) in the fabrication of the overdenture framework simplifies the laboratory process of the implant prostheses. The Nobel Procera CAD/CAM System was utilised to produce a lightweight titanium overdenture bar with locator attachments. It is proposed that the digital workflow of CAD/CAM milled implant overdenture bar allows us to avoid numerous technical steps and possibility of casting errors involved in the conventional casting of such bars.

  16. Testing physical models for dipolar asymmetry with CMB polarization

    NASA Astrophysics Data System (ADS)

    Contreras, D.; Zibin, J. P.; Scott, D.; Banday, A. J.; Górski, K. M.

    2017-12-01

    The cosmic microwave background (CMB) temperature anisotropies exhibit a large-scale dipolar power asymmetry. To determine whether this is due to a real, physical modulation or is simply a large statistical fluctuation requires the measurement of new modes. Here we forecast how well CMB polarization data from Planck and future experiments will be able to confirm or constrain physical models for modulation. Fitting several such models to the Planck temperature data allows us to provide predictions for polarization asymmetry. While for some models and parameters Planck polarization will decrease error bars on the modulation amplitude by only a small percentage, we show, importantly, that cosmic-variance-limited (and in some cases even Planck) polarization data can decrease the errors by considerably better than the expectation of √2 based on simple ℓ-space arguments. We project that if the primordial fluctuations are truly modulated (with parameters as indicated by Planck temperature data) then Planck will be able to make a 2σ detection of the modulation model with 20%-75% probability, increasing to 45%-99% when cosmic-variance-limited polarization is considered. We stress that these results are quite model dependent. Cosmic variance in temperature is important: combining statistically isotropic polarization with temperature data will spuriously increase the significance of the temperature signal with 30% probability for Planck.

  17. Publisher Correction: Role of outer surface probes for regulating ion gating of nanochannels.

    PubMed

    Li, Xinchun; Zhai, Tianyou; Gao, Pengcheng; Cheng, Hongli; Hou, Ruizuo; Lou, Xiaoding; Xia, Fan

    2018-02-08

    The original version of this Article contained an error in Fig. 3. The scale bars in Figs 3c and 3d were incorrectly labelled as 50 μA. In the correct version, the scale bars are labelled as 0.5 μA. This has now been corrected in both the PDF and HTML versions of the Article.

  18. Defining the formative discharge for alternate bars in alluvial rivers

    NASA Astrophysics Data System (ADS)

    Redolfi, M.; Carlin, M.; Tubino, M.; Adami, L.; Zolezzi, G.

    2017-12-01

    We investigate the properties of alternate bars in long straight reaches of channelized streams subject to an unsteady, irregular flow regime. To this aim we propose a novel integration of a statistical approach with the analytical perturbation model of Tubino (1991), which predicts the evolution of bar properties (namely amplitude and wavelength) as a consequence of a flood. The outcomes of our integrated modelling approach are probability distributions of the bar properties, which depend essentially on two ingredients: (i) the statistical properties of the flow regime (duration, frequency, and magnitude of the flood events), and (ii) the reach-averaged hydro-geomorphic characteristics of the channel (bed material, channel gradient, and width). This allows us to define a "bar-forming" discharge as the flow value which would reproduce the most likely bar properties in a river reach under unsteady flow. Alternate bars often migrate downstream and grow or decline during flood events. The timescale of bar growth and migration is often comparable with the duration of the floods: consequently, bar properties such as height and wavelength do not respond instantaneously to discharge variations (i.e. a quasi-equilibrium response) but may depend on previous flood events. Theoretical results are compared with observations in three Alpine, channelized gravel-bed rivers with encouraging outcomes.

  19. A 5- to 8-year retrospective study comparing the clinical results of implant-supported telescopic crown versus bar overdentures in patients with edentulous maxillae.

    PubMed

    Zou, Duohong; Wu, Yiqun; Huang, Wei; Zhang, Zhiyong; Zhang, Zhiyuan

    2013-01-01

    The objective of this study was to compare implant survival and success rates, peri-implant parameters, and prosthodontic maintenance efforts for implant-supported telescopic crown overdentures and bar overdentures to restore maxillary edentulism. This retrospective clinical study involved patients with maxillary edentulism who were fitted with implant-supported overdentures from January 2004 to June 2007. During a 5- to 8-year follow-up period, the implant survival and success rates, biologic and mechanical complications, prosthodontic maintenance, and patient satisfaction were retrospectively analyzed. The data were evaluated statistically and P < .05 was considered to be statistically significant. Forty-four patients with maxillary edentulism received implant-supported removable overdentures. Twenty-one patients chose telescopic crown overdentures and 23 patients chose bar overdentures. A total of 41 patients and 201 implants were available for follow-up. The implant survival and success rates, average bone resorption, and subjective patient satisfaction scores showed no difference between the telescopic crown and the bar overdenture group at follow-up. However, there were higher values for Plaque and Calculus Indexes in the bar group compared with the telescopic crown group, and these values showed a statistically significant difference annually from the 3-year follow-up (P < .05). Each year, the number of prosthodontics maintenance procedures per patient did not significantly differ between the telescopic crown (approximately 0.36 to 0.58) and bar groups (approximately 0.30 to 0.49) (P = .16). Although there were higher plaque and calculus levels in the bar group and more maintenance was required for the telescopic crown group, overdentures provided a healthy peri-implant structure for implants in both groups. Implant-supported telescopic crown or bar overdentures can provide a good treatment option for patients with edentulous maxillae.

  20. Electoral surveys’ influence on the voting processes: a cellular automata model

    NASA Astrophysics Data System (ADS)

    Alves, S. G.; Oliveira Neto, N. M.; Martins, M. L.

    2002-12-01

    Nowadays, in societies threatened by atomization, selfishness, short-term thinking, and alienation from political life, there is a renewed debate about classical questions concerning the quality of democratic decision making. In this work a cellular automata model for the dynamics of free elections, based on social impact theory, is proposed. By using computer simulations, power-law distributions for the size of electoral clusters and decision time have been obtained. The major role of broadcasted electoral surveys in guiding opinion formation and stabilizing the "status quo" was demonstrated. Furthermore, it was shown that in societies where these surveys are manipulated within the universally accepted statistical error bars, even a majority opposition could be hindered from reaching power through the electoral path.

  1. Detecting Multiple Model Components with the Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Protassov, R. S.; van Dyk, D. A.

    2000-05-01

    The likelihood ratio test (LRT) and F-test, popularized in astrophysics by Bevington (Data Reduction and Error Analysis for the Physical Sciences) and Cash (1977, ApJ 228, 939), do not (even asymptotically) adhere to their nominal χ² and F distributions in many statistical tests commonly used in astrophysics. The many legitimate uses of the LRT (see, e.g., the examples given in Cash (1977)) notwithstanding, it can be impossible to compute the false positive rate of the LRT or related tests such as the F-test. For example, although Cash (1977) did not suggest the LRT for detecting a line profile in a spectral model, it has become common practice despite the lack of certain required mathematical regularity conditions. Contrary to common practice, the nominal distribution of the LRT statistic should not be used in these situations. In this paper, we characterize an important class of problems where the LRT fails, show the non-standard behavior of the test in this setting, and provide a Bayesian alternative to the LRT, i.e., posterior predictive p-values. We emphasize that there are many legitimate uses of the LRT in astrophysics, and even when the LRT is inappropriate, there remain several statistical alternatives (e.g., judicious use of error bars and Bayes factors). We illustrate this point in our analysis of GRB 970508, which was studied by Piro et al. (ApJ, 514:L73-L77, 1999).
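
    The central point, that the nominal χ² reference distribution can miscalibrate the LRT when the extra parameter (e.g., a line normalization) is constrained to be non-negative, can be checked by simulation. In the toy setup below (a flat Poisson spectrum with a non-negative "line" allowed in one fixed bin; not the GRB analysis), the empirical false-positive rate comes out near half the nominal 5%, as expected when the statistic follows a ½δ₀ + ½χ²₁ mixture rather than χ²₁.

    ```python
    import numpy as np
    from scipy.stats import chi2, poisson

    rng = np.random.default_rng(0)
    mu0, n_bins, n_sims = 10.0, 50, 4000
    crit = chi2.ppf(0.95, df=1)            # nominal 5% threshold for one extra parameter

    exceed = 0
    for _ in range(n_sims):
        counts = rng.poisson(mu0, n_bins)          # spectrum simulated under the null
        mu_null = np.full(n_bins, counts.mean())   # null: flat continuum, MLE = overall mean
        cont = counts[1:].mean()
        if counts[0] > cont:                       # alternative: continuum + line >= 0 in bin 0
            mu_alt = np.full(n_bins, cont)
            mu_alt[0] = counts[0]
        else:
            mu_alt = mu_null                       # constrained line amplitude sits at zero
        lrt = 2 * (poisson.logpmf(counts, mu_alt).sum()
                   - poisson.logpmf(counts, mu_null).sum())
        exceed += lrt > crit

    print(f"empirical false-positive rate: {exceed / n_sims:.3f} (nominal 0.05)")
    ```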

  2. THE HST/ACS COMA CLUSTER SURVEY. VIII. BARRED DISK GALAXIES IN THE CORE OF THE COMA CLUSTER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marinova, Irina; Jogee, Shardha; Weinzirl, Tim

    2012-02-20

    We use high-resolution (~0.1 arcsec) F814W Advanced Camera for Surveys (ACS) images from the Hubble Space Telescope ACS Treasury survey of the Coma cluster at z ~ 0.02 to study bars in massive disk galaxies (S0s), as well as low-mass dwarf galaxies in the core of the Coma cluster, the densest environment in the nearby universe. Our study helps to constrain the evolution of bars and disks in dense environments and provides a comparison point for studies in lower density environments and at higher redshifts. Our results are: (1) we characterize the fraction and properties of bars in a sample of 32 bright (M_V ≲ -18, M_* > 10^9.5 M_Sun) S0 galaxies, which dominate the population of massive disk galaxies in the Coma core. We find that the measurement of a bar fraction among S0 galaxies must be handled with special care due to the difficulty in separating unbarred S0s from ellipticals, and the potential dilution of the bar signature by light from a relatively large, bright bulge. The results depend sensitively on the method used: the bar fraction for bright S0s in the Coma core is 50% ± 11%, 65% ± 11%, and 60% ± 11% based on three methods of bar detection, namely, strict ellipse fit criteria, relaxed ellipse fit criteria, and visual classification. (2) We compare the S0 bar fraction across different environments (the Coma core, A901/902, and Virgo), adopting the critical step of using matched samples and matched methods in order to ensure robust comparisons. We find that the bar fraction among bright S0 galaxies does not show a statistically significant variation (within the error bars of ±11%) across environments which span two orders of magnitude in galaxy number density (n ~ 300-10,000 galaxies Mpc^-3) and include rich and poor clusters, such as the core of Coma, the A901/902 cluster, and Virgo. We speculate that the bar fraction among S0s is not significantly enhanced in rich clusters compared to low-density environments for two reasons. First, S0s in rich clusters are less prone to bar instabilities as they are dynamically heated by harassment and are gas poor as a result of ram pressure stripping and accelerated star formation. Second, high-speed encounters in rich clusters may be less effective than slow, strong encounters in inducing bars. (3) We also take advantage of the high resolution of the ACS (~50 pc) to analyze a sample of 333 faint (M_V > -18) dwarf galaxies in the Coma core. Using visual inspection of unsharp-masked images, we find only 13 galaxies with bar and/or spiral structure. An additional eight galaxies show evidence for an inclined disk. The paucity of disk structures in Coma dwarfs suggests that either disks are not common in these galaxies or that any disks present are too hot to develop instabilities.

  3. An Insertable Passive LC Pressure Sensor Based on an Alumina Ceramic for In Situ Pressure Sensing in High-Temperature Environments.

    PubMed

    Xiong, Jijun; Li, Chen; Jia, Pinggang; Chen, Xiaoyong; Zhang, Wendong; Liu, Jun; Xue, Chenyang; Tan, Qiulin

    2015-08-31

    Pressure measurements in high-temperature applications, including compressors, turbines, and others, have become increasingly critical. This paper proposes an implantable passive LC pressure sensor based on an alumina ceramic material for in situ pressure sensing in high-temperature environments. The inductance and capacitance elements of the sensor were designed independently and separated by a thermally insulating material, which is conducive to reducing the influence of the temperature on the inductance element and improving the quality factor of the sensor. In addition, the sensor was fabricated using thick film integrated technology from high-temperature materials that ensure stable operation of the sensor in high-temperature environments. Experimental results showed that the sensor accurately monitored pressures from 0 bar to 2 bar at temperatures up to 800 °C. The sensitivity, linearity, repeatability error, and hysteretic error of the sensor were 0.225 MHz/bar, 95.3%, 5.5%, and 6.2%, respectively.

  4. An Insertable Passive LC Pressure Sensor Based on an Alumina Ceramic for In Situ Pressure Sensing in High-Temperature Environments

    PubMed Central

    Xiong, Jijun; Li, Chen; Jia, Pinggang; Chen, Xiaoyong; Zhang, Wendong; Liu, Jun; Xue, Chenyang; Tan, Qiulin

    2015-01-01

    Pressure measurements in high-temperature applications, including compressors, turbines, and others, have become increasingly critical. This paper proposes an implantable passive LC pressure sensor based on an alumina ceramic material for in situ pressure sensing in high-temperature environments. The inductance and capacitance elements of the sensor were designed independently and separated by a thermally insulating material, which is conducive to reducing the influence of the temperature on the inductance element and improving the quality factor of the sensor. In addition, the sensor was fabricated using thick film integrated technology from high-temperature materials that ensure stable operation of the sensor in high-temperature environments. Experimental results showed that the sensor accurately monitored pressures from 0 bar to 2 bar at temperatures up to 800 °C. The sensitivity, linearity, repeatability error, and hysteretic error of the sensor were 0.225 MHz/bar, 95.3%, 5.5%, and 6.2%, respectively. PMID:26334279

  5. Using Modified-ISS Model to Evaluate Medication Administration Safety During Bar Code Medication Administration Implementation in Taiwan Regional Teaching Hospital.

    PubMed

    Ma, Pei-Luen; Jheng, Yan-Wun; Jheng, Bi-Wei; Hou, I-Ching

    2017-01-01

    Bar code medication administration (BCMA) could reduce medical errors and promote patient safety. This research uses a modified information systems success model (M-ISS model) to evaluate nurses' acceptance of BCMA. The results showed moderate correlations between medication administration safety (MAS) and system quality, information quality, service quality, user satisfaction, and limited satisfaction.

  6. Measurements of Γ(Z⁰ → bb̄)/Γ(Z⁰ → hadrons) using the SLD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neal, H.A. Jr. II

    1995-07-01

    The quantity R_b = Γ(Z⁰ → bb̄)/Γ(Z⁰ → hadrons) is a sensitive measure of corrections to the Zbb̄ vertex. The precision necessary to observe the top quark mass dependent corrections is close to being achieved. LEP is already observing a 1.8σ deviation from the Standard Model prediction. Knowledge of the top quark mass combined with the observation of deviations from the Standard Model prediction would indicate new physics. Models which include charged Higgs or light SUSY particles yield predictions for R_b appreciably different from the Standard Model. In this thesis two independent methods are used to measure R_b. One uses a general event tag which determines R_b from the rate at which events are tagged as Z⁰ → bb̄ in data and the estimated rates at which various flavors of events are tagged from the Monte Carlo. The second method reduces the reliance on the Monte Carlo by separately tagging each hemisphere as containing a b-decay. The rates of single hemisphere tagged events and both hemisphere tagged events are used to determine the tagging efficiency for b-quarks directly from the data, thus eliminating the main sources of systematic error present in the event tag. Both measurements take advantage of the unique environment provided by the SLAC Linear Collider (SLC) and the SLAC Large Detector (SLD). From the event tag a result of R_b = 0.230 ± 0.004 (stat) ± 0.013 (syst) is obtained. The higher precision hemisphere tag result obtained is R_b = 0.218 ± 0.004 (stat) ± 0.004 (syst) ± 0.003 (R_c).
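
    The hemisphere double-tag idea, determining the b-tag efficiency from the data themselves, reduces to simple counting algebra if non-b tags and hemisphere correlations are neglected. The sketch below is that idealized version with hypothetical counts; the actual SLD analysis corrects for both effects.

    ```python
    def rb_double_tag(n_events, n_tagged_hemispheres, n_double_tag_events):
        """Idealized double-hemisphere tag: with per-hemisphere b-tag efficiency
        eps_b and b-fraction R_b (non-b tags neglected, hemispheres independent),
        F1 = tagged hemispheres / (2 * events)  ~ R_b * eps_b
        F2 = double-tagged events / events      ~ R_b * eps_b**2
        so eps_b = F2 / F1 and R_b = F1**2 / F2, both obtained from data alone."""
        f1 = n_tagged_hemispheres / (2 * n_events)
        f2 = n_double_tag_events / n_events
        return f1 ** 2 / f2, f2 / f1   # (R_b, eps_b)

    # hypothetical counts, not SLD data
    rb, eps_b = rb_double_tag(n_events=100_000,
                              n_tagged_hemispheres=13_000,
                              n_double_tag_events=1_900)
    print(f"R_b = {rb:.3f}, eps_b = {eps_b:.3f}")
    ```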

  7. Haptic spatial matching in near peripersonal space.

    PubMed

    Kaas, Amanda L; Mier, Hanneke I van

    2006-04-01

    Research has shown that haptic spatial matching at intermanual distances over 60 cm is prone to large systematic errors. The error pattern has been explained by the use of reference frames intermediate between egocentric and allocentric coding. This study investigated haptic performance in near peripersonal space, i.e. at intermanual distances of 60 cm and less. Twelve blindfolded participants (six males and six females) were presented with two turn bars at equal distances from the midsagittal plane, 30 or 60 cm apart. Different orientations (vertical/horizontal or oblique) of the left bar had to be matched by adjusting the right bar to either a mirror symmetric (/ \\) or parallel (/ /) position. The mirror symmetry task can in principle be performed accurately in both an egocentric and an allocentric reference frame, whereas the parallel task requires an allocentric representation. Results showed that parallel matching induced large systematic errors which increased with distance. Overall error was significantly smaller in the mirror task. The task difference also held for the vertical orientation at 60 cm distance, even though this orientation required the same response in both tasks, showing a marked effect of task instruction. In addition, men outperformed women on the parallel task. Finally, contrary to our expectations, systematic errors were found in the mirror task, predominantly at 30 cm distance. Based on these findings, we suggest that haptic performance in near peripersonal space might be dominated by different mechanisms than those which come into play at distances over 60 cm. Moreover, our results indicate that both inter-individual differences and task demands affect task performance in haptic spatial matching. Therefore, we conclude that the study of haptic spatial matching in near peripersonal space might reveal important additional constraints for the specification of adequate models of haptic spatial performance.

  8. Tracking control of a closed-chain five-bar robot with two degrees of freedom by integration of an approximation-based approach and mechanical design.

    PubMed

    Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhang, W J

    2012-10-01

    The trajectory tracking problem of a closed-chain five-bar robot is studied in this paper. Based on an error transformation function and the backstepping technique, an approximation-based tracking algorithm is proposed, which can guarantee the control performance of the robotic system in both the stable and transient phases. In particular, the overshoot, settling time, and final tracking error of the robotic system can all be adjusted by properly setting the parameters in the error transformation function. The radial basis function neural network (RBFNN) is used to compensate for the complicated nonlinear terms in the closed-loop dynamics of the robotic system. The approximation error of the RBFNN is only required to be bounded, which simplifies the initial "trial-and-error" configuration of the neural network. Illustrative examples are given to verify the theoretical analysis and illustrate the effectiveness of the proposed algorithm. Finally, it is also shown that the proposed approximation-based controller can be simplified by a smart mechanical design of the closed-chain robot, which demonstrates the promise of the integrated design and control philosophy.

  9. Point-by-point compositional analysis for atom probe tomography.

    PubMed

    Stephenson, Leigh T; Ceguerra, Anna V; Li, Tong; Rojhirunsakool, Tanaporn; Nag, Soumya; Banerjee, Rajarshi; Cairney, Julie M; Ringer, Simon P

    2014-01-01

    This new alternate approach to data processing for analyses that traditionally employed grid-based counting methods is necessary because it removes a user-imposed coordinate system that not only limits an analysis but also may introduce errors. We have modified the widely used "binomial" analysis for APT data by replacing grid-based counting with coordinate-independent nearest neighbour identification, improving the measurements and the statistics obtained, allowing quantitative analysis of smaller datasets, and datasets from non-dilute solid solutions. It also allows better visualisation of compositional fluctuations in the data. Our modifications include: (1) using spherical k-atom blocks identified by each detected atom's first k nearest neighbours; (2) 3D data visualisation of block composition and nearest neighbour anisotropy; and (3) using z-statistics to directly compare experimental and expected composition curves. Similar modifications may be made to other grid-based counting analyses (contingency table, Langer-Bar-on-Miller, sinusoidal model) and could be instrumental in developing novel data visualisation options.
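
    A coordinate-independent block analysis of this kind can be sketched with a k-d tree: each atom's k nearest neighbours form its block, and the observed block-composition curve is compared with the binomial expectation for a random solution. The per-bin z-statistic below is a generic construction assumed for illustration, not necessarily the paper's exact formulation, and the reconstruction is synthetic.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.stats import binom

    def block_solute_counts(positions, is_solute, k=100):
        """For every detected atom, take its k nearest neighbours (a spherical
        k-atom block, no grid involved) and count the solute atoms in the block."""
        tree = cKDTree(positions)
        _, idx = tree.query(positions, k=k + 1)   # first neighbour is the atom itself
        return is_solute[idx[:, 1:]].sum(axis=1)

    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 50, size=(20_000, 3))    # synthetic reconstruction, nm
    is_solute = rng.random(20_000) < 0.05         # 5 at.% random solid solution

    k = 100
    counts = block_solute_counts(pos, is_solute, k)
    p = is_solute.mean()

    # compare the observed block-composition curve with the binomial expectation;
    # per-bin z-scores near 0 indicate a random (unclustered) solution
    obs = np.bincount(counts, minlength=k + 1) / counts.size
    exp = binom.pmf(np.arange(k + 1), k, p)
    se = np.sqrt(exp * (1 - exp) / counts.size)
    z = (obs - exp) / np.where(se > 0, se, np.inf)
    print("max |z| over composition bins:", round(float(np.abs(z).max()), 2))
    ```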

  10. The Brandeis Dice Problem and Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    van Enk, Steven J.

    2014-11-01

    Jaynes invented the Brandeis Dice Problem as a simple illustration of the MaxEnt (Maximum Entropy) procedure that he had demonstrated to work so well in Statistical Mechanics. I construct here two alternative solutions to his toy problem. One, like Jaynes' solution, uses MaxEnt and yields an analog of the canonical ensemble, but at a different level of description. The other uses Bayesian updating and yields an analog of the micro-canonical ensemble. Both, unlike Jaynes' solution, yield error bars, whose operational merits I discuss. These two alternative solutions are not equivalent for the original Brandeis Dice Problem, but become so in what must, therefore, count as the analog of the thermodynamic limit, M-sided dice with M → ∞. Whereas the mathematical analogies between the dice problem and Stat Mech are quite close, there are physical properties that the former lacks but that are crucial to the workings of the latter. Stat Mech is more than just MaxEnt.
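
    Jaynes' MaxEnt solution for the dice problem is easy to reproduce: the distribution maximizing entropy under a mean constraint is exponential in the face value, with the Lagrange multiplier fixed by that constraint. A minimal sketch for the original average-of-4.5 constraint (the error-bar construction discussed in the abstract is not reproduced here):

    ```python
    import numpy as np
    from scipy.optimize import brentq

    faces = np.arange(1, 7)

    def maxent_dice(mean_constraint):
        """MaxEnt distribution over die faces subject to a fixed mean:
        p_i proportional to exp(-lam * i), with lam chosen to match the mean."""
        def mean_gap(lam):
            w = np.exp(-lam * faces)
            return (faces * w).sum() / w.sum() - mean_constraint
        lam = brentq(mean_gap, -50, 50)
        p = np.exp(-lam * faces)
        return p / p.sum()

    p = maxent_dice(4.5)     # Jaynes' original constraint: an average of 4.5 spots
    print(np.round(p, 4), "mean =", round(float((faces * p).sum()), 4))
    ```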

  11. Implant-supported overdenture with prefabricated bar attachment system in mandibular edentulous patient

    PubMed Central

    Ha, Seung-Ryong; Song, Seung-Il; Hong, Seong-Tae; Kim, Gy-Young

    2012-01-01

    Implant-supported overdenture is a reliable treatment option for patients with an edentulous mandible when they have difficulty using complete dentures. Several options have been used for implant-supported overdenture attachments. Among these, the bar attachment system has greater retention and better maintainability than others. The SFI-Bar® is prefabricated and can be adjusted at chairside. Therefore, laboratory procedures such as soldering and welding are unnecessary, which leads to fewer errors and lower costs. A 67-year-old female patient presented complaining of mobility of the lower anterior teeth under an old denture. She had been wearing a complete denture in the maxilla and a removable partial denture in the mandible with severe bone loss. After the teeth were extracted, two implants were placed in front of the mental foramen, and the SFI-Bar® was connected. A tube bar was seated on the two adapters through large ball joints and fixation screws, connecting each implant. The length of the tube bar was adjusted according to the inter-implant distance. Then, a female part was attached to the bar beneath the new denture. This clinical report describes a two-implant-supported overdenture using the SFI-Bar® system in a mandibular edentulous patient. PMID:23236580

  12. Economic Statistical Design of Integrated X-bar-S Control Chart with Preventive Maintenance and General Failure Distribution

    PubMed Central

    Caballero Morales, Santiago Omar

    2013-01-01

    Preventive Maintenance (PM) and Statistical Process Control (SPC) are important practices for achieving high product quality, a low frequency of failures, and cost reduction in a production process. However, some points about their joint application have not been explored in depth. First, most SPC is performed with the X-bar control chart, which does not fully consider the variability of the production process. Second, many studies of the design of control charts consider just the economic aspect, while statistical restrictions must be considered to achieve charts with low probabilities of false detection of failures. Third, the effect of PM on processes with different failure probability distributions has not been studied. Hence, this paper covers these points, presenting the Economic Statistical Design (ESD) of joint X-bar-S control charts with a cost model that integrates PM with a general failure distribution. Experiments showed statistically significant reductions in costs when PM is performed on processes with high failure rates, and reductions in the sampling frequency of units for testing under SPC. PMID:23527082
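
    The statistical side of an X-bar-S scheme, the standard Shewhart control limits built from the c4 bias-correction constant, can be sketched as follows; the economic optimization and the preventive-maintenance integration that are the paper's actual contribution are not reproduced here.

    ```python
    import numpy as np
    from scipy.special import gammaln

    def c4(n):
        """Bias-correction constant for the sample standard deviation."""
        return np.sqrt(2.0 / (n - 1)) * np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))

    def xbar_s_limits(subgroups):
        """Classical (statistical, not economic) X-bar and S chart limits from
        rational subgroups of equal size n."""
        x = np.asarray(subgroups, float)
        n = x.shape[1]
        xbarbar = x.mean(axis=1).mean()
        sbar = x.std(axis=1, ddof=1).mean()
        a3 = 3.0 / (c4(n) * np.sqrt(n))
        b3 = max(0.0, 1.0 - 3.0 * np.sqrt(1.0 - c4(n) ** 2) / c4(n))
        b4 = 1.0 + 3.0 * np.sqrt(1.0 - c4(n) ** 2) / c4(n)
        return {"xbar": (xbarbar - a3 * sbar, xbarbar + a3 * sbar),
                "s": (b3 * sbar, b4 * sbar)}

    rng = np.random.default_rng(0)
    data = rng.normal(10.0, 0.2, size=(25, 5))    # 25 subgroups of size 5
    print(xbar_s_limits(data))
    ```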

  13. The influence of graphic display format on the interpretations of quantitative risk information among adults with lower education and literacy: a randomized experimental study.

    PubMed

    McCaffery, Kirsten J; Dixon, Ann; Hayen, Andrew; Jansen, Jesse; Smith, Sian; Simpson, Judy M

    2012-01-01

    To test optimal graphic risk communication formats for presenting small probabilities using graphics with a denominator of 1000 to adults with lower education and literacy. A randomized experimental study, which took place in adult basic education classes in Sydney, Australia. The participants were 120 adults with lower education and literacy. An experimental computer-based manipulation compared 1) pictographs in 2 forms, shaded "blocks" and unshaded "dots"; and 2) bar charts across different orientations (horizontal/vertical) and numerator size (small <100, medium 100-499, large 500-999). Accuracy (size of error) and ease of processing (reaction time) were assessed on a gist task (estimating the larger chance of survival) and a verbatim task (estimating the size of difference). Preferences for different graph types were also assessed. Accuracy on the gist task was very high across all conditions (>95%) and not tested further. For the verbatim task, optimal graph type depended on the numerator size. For small numerators, pictographs resulted in fewer errors than bar charts (blocks: odds ratio [OR] = 0.047, 95% confidence interval [CI] = 0.023-0.098; dots: OR = 0.049, 95% CI = 0.024-0.099). For medium and large numerators, bar charts were more accurate (e.g., medium dots: OR = 4.29, 95% CI = 2.9-6.35). Pictographs were generally processed faster for small numerators (e.g., blocks: 14.9 seconds v. bars: 16.2 seconds) and bar charts for medium or large numerators (e.g., large blocks: 41.6 seconds v. 26.7 seconds). Vertical formats were processed slightly faster than horizontal graphs with no difference in accuracy. Most participants preferred bar charts (64%); however, there was no relationship with performance. For adults with low education and literacy, pictographs are likely to be the best format to use when displaying small numerators (<100/1000) and bar charts for larger numerators (>100/1000).

  14. Studying W′ boson contributions in $\bar{B} \rightarrow D^{(*)}\ell^{-}\bar{\nu}_{\ell}$ decays

    NASA Astrophysics Data System (ADS)

    Wang, Yi-Long; Wei, Bin; Sheng, Jin-Huan; Wang, Ru-Min; Yang, Ya-Dong

    2018-05-01

    Recently, the Belle collaboration reported the first measurement of the τ lepton polarization $P_\tau(D^*)$ in the $\bar{B} \to D^{*}\tau^{-}\bar{\nu}_{\tau}$ decay and a new measurement of the ratio of branching ratios $R(D^*)$, which are consistent with the Standard Model (SM) predictions. These could be used to constrain New Physics (NP) beyond the SM. In this paper, we probe $\bar{B} \to D^{(*)}\ell^{-}\bar{\nu}_{\ell}$ (ℓ = e, μ, τ) decays in a model-independent way and in the specific G(221) models with lepton flavour universality. Considering the theoretical uncertainties and the experimental errors at the 95% C.L., we obtain quite strong bounds on the model-independent parameters $C'_{LL}$, $C'_{LR}$, $C'_{RR}$, $C'_{RL}$, $g_V$, $g_A$, $g'_V$, $g'_A$ and on the ratios of the specific G(221) model parameters. We find that the constrained NP couplings have no obvious effects on the (differential) branching ratios or their ratios; nevertheless, many NP couplings have very large effects on the lepton spin asymmetries of $\bar{B} \to D^{(*)}\ell^{-}\bar{\nu}_{\ell}$ decays and on the forward–backward asymmetries of $\bar{B} \to D^{*}\ell^{-}\bar{\nu}_{\ell}$. We therefore expect that precision measurements of these observables can be pursued by LHCb and Belle-II.

  15. Sine-Bar Attachment For Machine Tools

    NASA Technical Reports Server (NTRS)

    Mann, Franklin D.

    1988-01-01

    Sine-bar attachment for collets, spindles, and chucks helps machinists set up quickly for precise angular cuts that require greater precision than provided by graduations of machine tools. Machinist uses attachment to index head, carriage of milling machine or lathe relative to table or turning axis of tool. Attachment accurate to 1 minute of arc, depending on length of sine bar and precision of gauge blocks in setup. Attachment installs quickly and easily on almost any type of lathe or mill. Requires no special clamps or fixtures, and eliminates many trial-and-error measurements. More stable than improvised setups and not jarred out of position readily.
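
    The geometry behind any sine-bar setup is sin(θ) = h/L, where L is the distance between the sine-bar cylinders and h is the gauge-block stack height. The short sketch below, with illustrative numbers not taken from the article, computes the stack height for a desired angle and shows how a small stack error maps into minutes of arc.

```python
import math

def gauge_block_height(angle_deg, sine_bar_length):
    """Gauge-block stack height h that tilts a sine bar of length L to angle theta,
    from the sine-bar relation sin(theta) = h / L."""
    return sine_bar_length * math.sin(math.radians(angle_deg))

def angle_from_height(height, sine_bar_length):
    """Inverse relation: the angle actually set by a given block stack."""
    return math.degrees(math.asin(height / sine_bar_length))

L = 5.000                                   # sine-bar length in inches (illustrative)
h = gauge_block_height(30.0, L)
print(f"block stack for 30 deg: {h:.4f} in")

# Sensitivity: a 0.0005 in error in the stack shifts the set angle by about
err_arcmin = (angle_from_height(h + 0.0005, L) - 30.0) * 60
print(f"angle error: {err_arcmin:.2f} arc-min")
```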

  16. Effects of a direct refill program for automated dispensing cabinets on medication-refill errors.

    PubMed

    Helmons, Pieter J; Dalton, Ashley J; Daniels, Charles E

    2012-10-01

    The effects of a direct refill program for automated dispensing cabinets (ADCs) on medication-refill errors were studied. This study was conducted in designated acute care areas of a 386-bed academic medical center. A wholesaler-to-ADC direct refill program, consisting of prepackaged delivery of medications and bar-code-assisted ADC refilling, was implemented in the inpatient pharmacy of the medical center in September 2009. Medication-refill errors in 26 ADCs from the general medicine units, the infant special care unit, the surgical and burn intensive care units, and intermediate units were assessed before and after the implementation of this program. Medication-refill errors were defined as an ADC pocket containing the wrong drug, wrong strength, or wrong dosage form. ADC refill errors decreased by 77%, from 62 errors per 6829 refilled pockets (0.91%) to 8 errors per 3855 refilled pockets (0.21%) (p < 0.0001). The predominant error type detected before the intervention was the incorrect medication (wrong drug, wrong strength, or wrong dosage form) in the ADC pocket. Of the 54 incorrect medications found before the intervention, 38 (70%) were loaded in a multiple-drug drawer. After the implementation of the new refill process, 3 of the 5 incorrect medications were loaded in a multiple-drug drawer. There were 3 instances of expired medications before and only 1 expired medication after implementation of the program. A redesign of the ADC refill process using a wholesaler-to-ADC direct refill program that included delivery of prepackaged medication and bar-code-assisted refill significantly decreased the occurrence of ADC refill errors.
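
    Using the counts reported above (62 errors in 6829 refilled pockets before versus 8 in 3855 after), the comparison can be reproduced with a simple two-proportion test. The sketch below uses Fisher's exact test as one reasonable choice; the abstract does not state which test produced the quoted p-value.

```python
from scipy.stats import fisher_exact

# Refill errors before and after the direct-refill program (counts from the abstract).
before_errors, before_pockets = 62, 6829
after_errors, after_pockets = 8, 3855

table = [[before_errors, before_pockets - before_errors],
         [after_errors, after_pockets - after_errors]]
odds_ratio, p_value = fisher_exact(table)

reduction = 1 - (after_errors / after_pockets) / (before_errors / before_pockets)
print(f"error-rate reduction: {reduction:.0%}, Fisher exact p = {p_value:.2g}")
```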

  17. First observation of forward Z → bb̄ production in pp collisions at √s = 8 TeV

    NASA Astrophysics Data System (ADS)

    Aaij, R.; Adeva, B.; Adinolfi, M.; Ajaltouni, Z.; Akar, S.; Albrecht, J.; Alessio, F.; Alexander, M.; Alfonso Albero, A.; Ali, S.; Alkhazov, G.; Alvarez Cartelle, P.; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; An, L.; Anderlini, L.; Andreassi, G.; Andreotti, M.; Andrews, J. E.; Appleby, R. B.; Archilli, F.; d'Argent, P.; Arnau Romeu, J.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Baalouch, M.; Babuschkin, I.; Bachmann, S.; Back, J. J.; Badalov, A.; Baesso, C.; Baker, S.; Balagura, V.; Baldini, W.; Baranov, A.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Baryshnikov, F.; Batozskaya, V.; Battista, V.; Bay, A.; Beaucourt, L.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Beiter, A.; Bel, L. J.; Beliy, N.; Bellee, V.; Belloli, N.; Belous, K.; Belyaev, I.; Ben-Haim, E.; Bencivenni, G.; Benson, S.; Beranek, S.; Berezhnoy, A.; Bernet, R.; Berninghoff, D.; Bertholet, E.; Bertolin, A.; Betancourt, C.; Betti, F.; Bettler, M.-O.; van Beuzekom, M.; Bezshyiko, Ia.; Bifani, S.; Billoir, P.; Birnkraut, A.; Bitadze, A.; Bizzeti, A.; Bjørn, M.; Blake, T.; Blanc, F.; Blouw, J.; Blusk, S.; Bocci, V.; Boettcher, T.; Bondar, A.; Bondar, N.; Bonivento, W.; Bordyuzhin, I.; Borgheresi, A.; Borghi, S.; Borisyak, M.; Borsato, M.; Bossu, F.; Boubdir, M.; Bowcock, T. J. V.; Bowen, E.; Bozzi, C.; Braun, S.; Britton, T.; Brodzicka, J.; Brundu, D.; Buchanan, E.; Burr, C.; Bursche, A.; Buytaert, J.; Byczynski, W.; Cadeddu, S.; Cai, H.; Calabrese, R.; Calladine, R.; Calvi, M.; Calvo Gomez, M.; Camboni, A.; Campana, P.; Campora Perez, D. H.; Capriotti, L.; Carbone, A.; Carboni, G.; Cardinale, R.; Cardini, A.; Carniti, P.; Carson, L.; Carvalho Akiba, K.; Casse, G.; Cassina, L.; Castillo Garcia, L.; Cattaneo, M.; Cavallero, G.; Cenci, R.; Chamont, D.; Chapman, M. G.; Charles, M.; Charpentier, Ph.; Chatzikonstantinidis, G.; Chefdeville, M.; Chen, S.; Cheung, S. F.; Chitic, S.-G.; Chobanova, V.; Chrzaszcz, M.; Chubykin, A.; Ciambrone, P.; Cid Vidal, X.; Ciezarek, G.; Clarke, P. E. L.; Clemencic, M.; Cliff, H. V.; Closier, J.; Cogan, J.; Cogneras, E.; Cogoni, V.; Cojocariu, L.; Collins, P.; Colombo, T.; Comerma-Montells, A.; Contu, A.; Cook, A.; Coombs, G.; Coquereau, S.; Corti, G.; Corvo, M.; Costa Sobral, C. M.; Couturier, B.; Cowan, G. A.; Craik, D. C.; Crocombe, A.; Cruz Torres, M.; Currie, R.; D'Ambrosio, C.; Da Cunha Marinho, F.; Dall'Occo, E.; Dalseno, J.; Davis, A.; De Aguiar Francisco, O.; De Capua, S.; De Cian, M.; De Miranda, J. M.; De Paula, L.; De Serio, M.; De Simone, P.; Dean, C. T.; Decamp, D.; Del Buono, L.; Dembinski, H.-P.; Demmer, M.; Dendek, A.; Derkach, D.; Deschamps, O.; Dettori, F.; Dey, B.; Di Canto, A.; Di Nezza, P.; Dijkstra, H.; Dordei, F.; Dorigo, M.; Dosil Suárez, A.; Douglas, L.; Dovbnya, A.; Dreimanis, K.; Dufour, L.; Dujany, G.; Durante, P.; Dzhelyadin, R.; Dziewiecki, M.; Dziurda, A.; Dzyuba, A.; Easo, S.; Ebert, M.; Egede, U.; Egorychev, V.; Eidelman, S.; Eisenhardt, S.; Eitschberger, U.; Ekelhof, R.; Eklund, L.; Ely, S.; Esen, S.; Evans, H. M.; Evans, T.; Falabella, A.; Farley, N.; Farry, S.; Fazzini, D.; Federici, L.; Ferguson, D.; Fernandez, G.; Fernandez Declara, P.; Fernandez Prieto, A.; Ferrari, F.; Ferreira Rodrigues, F.; Ferro-Luzzi, M.; Filippov, S.; Fini, R. A.; Fiore, M.; Fiorini, M.; Firlej, M.; Fitzpatrick, C.; Fiutowski, T.; Fleuret, F.; Fohl, K.; Fontana, M.; Fontanelli, F.; Forshaw, D. 
C.; Forty, R.; Franco Lima, V.; Frank, M.; Frei, C.; Fu, J.; Funk, W.; Furfaro, E.; Färber, C.; Gabriel, E.; Gallas Torreira, A.; Galli, D.; Gallorini, S.; Gambetta, S.; Gandelman, M.; Gandini, P.; Gao, Y.; Garcia Martin, L. M.; García Pardiñas, J.; Garra Tico, J.; Garrido, L.; Garsed, P. J.; Gascon, D.; Gaspar, C.; Gavardi, L.; Gazzoni, G.; Gerick, D.; Gersabeck, E.; Gersabeck, M.; Gershon, T.; Ghez, Ph.; Gianì, S.; Gibson, V.; Girard, O. G.; Giubega, L.; Gizdov, K.; Gligorov, V. V.; Golubkov, D.; Golutvin, A.; Gomes, A.; Gorelov, I. V.; Gotti, C.; Govorkova, E.; Grabowski, J. P.; Graciani Diaz, R.; Granado Cardoso, L. A.; Graugés, E.; Graverini, E.; Graziani, G.; Grecu, A.; Greim, R.; Griffith, P.; Grillo, L.; Gruber, L.; Gruberg Cazon, B. R.; Grünberg, O.; Gushchin, E.; Guz, Yu.; Gys, T.; Göbel, C.; Hadavizadeh, T.; Hadjivasiliou, C.; Haefeli, G.; Haen, C.; Haines, S. C.; Hamilton, B.; Han, X.; Hancock, T. H.; Hansmann-Menzemer, S.; Harnew, N.; Harnew, S. T.; Harrison, J.; Hasse, C.; Hatch, M.; He, J.; Hecker, M.; Heinicke, K.; Heister, A.; Hennessy, K.; Henrard, P.; Henry, L.; van Herwijnen, E.; Heß, M.; Hicheur, A.; Hill, D.; Hombach, C.; Hopchev, P. H.; Huard, Z. C.; Hulsbergen, W.; Humair, T.; Hushchyn, M.; Hutchcroft, D.; Ibis, P.; Idzik, M.; Ilten, P.; Jacobsson, R.; Jalocha, J.; Jans, E.; Jawahery, A.; Jiang, F.; John, M.; Johnson, D.; Jones, C. R.; Joram, C.; Jost, B.; Jurik, N.; Kandybei, S.; Karacson, M.; Kariuki, J. M.; Karodia, S.; Kazeev, N.; Kecke, M.; Kelsey, M.; Kenzie, M.; Ketel, T.; Khairullin, E.; Khanji, B.; Khurewathanakul, C.; Kirn, T.; Klaver, S.; Klimaszewski, K.; Klimkovich, T.; Koliiev, S.; Kolpin, M.; Komarov, I.; Kopecna, R.; Koppenburg, P.; Kosmyntseva, A.; Kotriakhova, S.; Kozeiha, M.; Kravchuk, L.; Kreps, M.; Krokovny, P.; Kruse, F.; Krzemien, W.; Kucewicz, W.; Kucharczyk, M.; Kudryavtsev, V.; Kuonen, A. K.; Kurek, K.; Kvaratskheliya, T.; Lacarrere, D.; Lafferty, G.; Lai, A.; Lanfranchi, G.; Langenbruch, C.; Latham, T.; Lazzeroni, C.; Le Gac, R.; Leflat, A.; Lefrançois, J.; Lefèvre, R.; Lemaitre, F.; Lemos Cid, E.; Leroy, O.; Lesiak, T.; Leverington, B.; Li, P.-R.; Li, T.; Li, Y.; Li, Z.; Likhomanenko, T.; Lindner, R.; Lionetto, F.; Lisovskyi, V.; Liu, X.; Loh, D.; Loi, A.; Longstaff, I.; Lopes, J. H.; Lucchesi, D.; Lucio Martinez, M.; Luo, H.; Lupato, A.; Luppi, E.; Lupton, O.; Lusiani, A.; Lyu, X.; Machefert, F.; Maciuc, F.; Macko, V.; Mackowiak, P.; Maddrell-Mander, S.; Maev, O.; Maguire, K.; Maisuzenko, D.; Majewski, M. W.; Malde, S.; Malinin, A.; Maltsev, T.; Manca, G.; Mancinelli, G.; Manning, P.; Marangotto, D.; Maratas, J.; Marchand, J. F.; Marconi, U.; Marin Benito, C.; Marinangeli, M.; Marino, P.; Marks, J.; Martellotti, G.; Martin, M.; Martinelli, M.; Martinez Santos, D.; Martinez Vidal, F.; Martins Tostes, D.; Massacrier, L. M.; Massafferri, A.; Matev, R.; Mathad, A.; Mathe, Z.; Matteuzzi, C.; Mauri, A.; Maurice, E.; Maurin, B.; Mazurov, A.; McCann, M.; McNab, A.; McNulty, R.; Mead, J. V.; Meadows, B.; Meaux, C.; Meier, F.; Meinert, N.; Melnychuk, D.; Merk, M.; Merli, A.; Michielin, E.; Milanes, D. A.; Millard, E.; Minard, M.-N.; Minzoni, L.; Mitzel, D. S.; Mogini, A.; Molina Rodriguez, J.; Mombächer, T.; Monroy, I. A.; Monteil, S.; Morandin, M.; Morello, M. J.; Morgunova, O.; Moron, J.; Morris, A. B.; Mountain, R.; Muheim, F.; Mulder, M.; Müller, D.; Müller, J.; Müller, K.; Müller, V.; Naik, P.; Nakada, T.; Nandakumar, R.; Nandi, A.; Nasteva, I.; Needham, M.; Neri, N.; Neubert, S.; Neufeld, N.; Neuner, M.; Nguyen, T. 
D.; Nguyen-Mau, C.; Nieswand, S.; Niet, R.; Nikitin, N.; Nikodem, T.; Nogay, A.; O'Hanlon, D. P.; Oblakowska-Mucha, A.; Obraztsov, V.; Ogilvy, S.; Oldeman, R.; Onderwater, C. J. G.; Ossowska, A.; Otalora Goicochea, J. M.; Owen, P.; Oyanguren, A.; Pais, P. R.; Palano, A.; Palutan, M.; Papanestis, A.; Pappagallo, M.; Pappalardo, L. L.; Parker, W.; Parkes, C.; Passaleva, G.; Pastore, A.; Patel, M.; Patrignani, C.; Pearce, A.; Pellegrino, A.; Penso, G.; Pepe Altarelli, M.; Perazzini, S.; Perret, P.; Pescatore, L.; Petridis, K.; Petrolini, A.; Petrov, A.; Petruzzo, M.; Picatoste Olloqui, E.; Pietrzyk, B.; Pikies, M.; Pinci, D.; Pisani, F.; Pistone, A.; Piucci, A.; Placinta, V.; Playfer, S.; Plo Casasus, M.; Polci, F.; Poli Lener, M.; Poluektov, A.; Polyakov, I.; Polycarpo, E.; Pomery, G. J.; Ponce, S.; Popov, A.; Popov, D.; Poslavskii, S.; Potterat, C.; Price, E.; Prisciandaro, J.; Prouve, C.; Pugatch, V.; Puig Navarro, A.; Pullen, H.; Punzi, G.; Qian, W.; Quagliani, R.; Quintana, B.; Rachwal, B.; Rademacker, J. H.; Rama, M.; Ramos Pernas, M.; Rangel, M. S.; Raniuk, I.; Ratnikov, F.; Raven, G.; Ravonel Salzgeber, M.; Reboud, M.; Redi, F.; Reichert, S.; dos Reis, A. C.; Remon Alepuz, C.; Renaudin, V.; Ricciardi, S.; Richards, S.; Rihl, M.; Rinnert, K.; Rives Molina, V.; Robbe, P.; Robert, A.; Rodrigues, A. B.; Rodrigues, E.; Rodriguez Lopez, J. A.; Rodriguez Perez, P.; Rogozhnikov, A.; Roiser, S.; Rollings, A.; Romanovskiy, V.; Romero Vidal, A.; Ronayne, J. W.; Rotondo, M.; Rudolph, M. S.; Ruf, T.; Ruiz Valls, P.; Ruiz Vidal, J.; Saborido Silva, J. J.; Sadykhov, E.; Sagidova, N.; Saitta, B.; Salustino Guimaraes, V.; Sanchez Mayordomo, C.; Sanmartin Sedes, B.; Santacesaria, R.; Santamarina Rios, C.; Santimaria, M.; Santovetti, E.; Sarpis, G.; Sarti, A.; Satriano, C.; Satta, A.; Saunders, D. M.; Savrina, D.; Schael, S.; Schellenberg, M.; Schiller, M.; Schindler, H.; Schlupp, M.; Schmelling, M.; Schmelzer, T.; Schmidt, B.; Schneider, O.; Schopper, A.; Schreiner, H. F.; Schubert, K.; Schubiger, M.; Schune, M.-H.; Schwemmer, R.; Sciascia, B.; Sciubba, A.; Semennikov, A.; Sepulveda, E. S.; Sergi, A.; Serra, N.; Serrano, J.; Sestini, L.; Seyfert, P.; Shapkin, M.; Shapoval, I.; Shcheglov, Y.; Shears, T.; Shekhtman, L.; Shevchenko, V.; Siddi, B. G.; Silva Coutinho, R.; Silva de Oliveira, L.; Simi, G.; Simone, S.; Sirendi, M.; Skidmore, N.; Skwarnicki, T.; Smith, E.; Smith, I. T.; Smith, J.; Smith, M.; Soares Lavra, l.; Sokoloff, M. D.; Soler, F. J. P.; Souza De Paula, B.; Spaan, B.; Spradlin, P.; Sridharan, S.; Stagni, F.; Stahl, M.; Stahl, S.; Stefko, P.; Stefkova, S.; Steinkamp, O.; Stemmle, S.; Stenyakin, O.; Stepanova, M.; Stevens, H.; Stone, S.; Storaci, B.; Stracka, S.; Stramaglia, M. E.; Straticiuc, M.; Straumann, U.; Sun, J.; Sun, L.; Sutcliffe, W.; Swientek, K.; Syropoulos, V.; Szczekowski, M.; Szumlak, T.; Szymanski, M.; T'Jampens, S.; Tayduganov, A.; Tekampe, T.; Tellarini, G.; Teubert, F.; Thomas, E.; van Tilburg, J.; Tilley, M. J.; Tisserand, V.; Tobin, M.; Tolk, S.; Tomassetti, L.; Tonelli, D.; Toriello, F.; Tourinho Jadallah Aoude, R.; Tournefier, E.; Traill, M.; Tran, M. T.; Tresch, M.; Trisovic, A.; Tsaregorodtsev, A.; Tsopelas, P.; Tully, A.; Tuning, N.; Ukleja, A.; Usachov, A.; Ustyuzhanin, A.; Uwer, U.; Vacca, C.; Vagner, A.; Vagnoni, V.; Valassi, A.; Valat, S.; Valenti, G.; Vazquez Gomez, R.; Vazquez Regueiro, P.; Vecchi, S.; van Veghel, M.; Velthuis, J. J.; Veltri, M.; Veneziano, G.; Venkateswaran, A.; Verlage, T. A.; Vernet, M.; Vesterinen, M.; Viana Barbosa, J. 
V.; Viaud, B.; Vieira, D.; Vieites Diaz, M.; Viemann, H.; Vilasis-Cardona, X.; Vitti, M.; Volkov, V.; Vollhardt, A.; Voneki, B.; Vorobyev, A.; Vorobyev, V.; Voß, C.; de Vries, J. A.; Vázquez Sierra, C.; Waldi, R.; Wallace, C.; Wallace, R.; Walsh, J.; Wang, J.; Ward, D. R.; Wark, H. M.; Watson, N. K.; Websdale, D.; Weiden, A.; Whitehead, M.; Wicht, J.; Wilkinson, G.; Wilkinson, M.; Williams, M.; Williams, M. P.; Williams, M.; Williams, T.; Wilson, F. F.; Wimberley, J.; Winn, M.; Wishahi, J.; Wislicki, W.; Witek, M.; Wormser, G.; Wotton, S. A.; Wraight, K.; Wyllie, K.; Xie, Y.; Xu, Z.; Yang, Z.; Yang, Z.; Yao, Y.; Yin, H.; Yu, J.; Yuan, X.; Yushchenko, O.; Zarebski, K. A.; Zavertyaev, M.; Zhang, L.; Zhang, Y.; Zhelezov, A.; Zheng, Y.; Zhu, X.; Zhukov, V.; Zonneveld, J. B.; Zucchelli, S.; LHCb Collaboration

    2018-01-01

    The decay Z → bb̄ is reconstructed in pp collision data, corresponding to 2 fb⁻¹ of integrated luminosity, collected by the LHCb experiment at a centre-of-mass energy of √s = 8 TeV. The product of the Z production cross-section and the Z → bb̄ branching fraction is measured for candidates in the fiducial region defined by two particle-level b-quark jets with pseudorapidities in the range 2.2 < η < 4.2, with transverse momenta pT > 20 GeV and dijet invariant mass in the range 45

  18. Positional reference system for ultraprecision machining

    DOEpatents

    Arnold, Jones B.; Burleson, Robert R.; Pardue, Robert M.

    1982-01-01

    A stable positional reference system for use in improving the cutting tool-to-part contour position in numerical controlled-multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine strain induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.

  19. Positional reference system for ultraprecision machining

    DOEpatents

    Arnold, J.B.; Burleson, R.R.; Pardue, R.M.

    1980-09-12

    A stable positional reference system for use in improving the cutting tool-to-part contour position in numerical controlled-multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine strain induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.

  20. Quantitative NO-LIF imaging in high-pressure flames

    NASA Astrophysics Data System (ADS)

    Bessler, W. G.; Schulz, C.; Lee, T.; Shin, D.-I.; Hofmann, M.; Jeffries, J. B.; Wolfrum, J.; Hanson, R. K.

    2002-07-01

    Planar laser-induced fluorescence (PLIF) images of NO concentration are reported in premixed laminar flames from 1-60 bar exciting the A-X(0,0) band. The influence of O2 interference and gas composition, the variation with local temperature, and the effect of laser and signal attenuation by UV light absorption are investigated. Despite choosing a NO excitation and detection scheme with minimum O2-LIF contribution, this interference produces errors of up to 25% in a slightly lean 60 bar flame. The overall dependence of the inferred NO number density with temperature in the relevant (1200-2500 K) range is low (<±15%) because different effects cancel. The attenuation of laser and signal light by combustion products CO2 and H2O is frequently neglected, yet such absorption yields errors of up to 40% in our experiment despite the small scale (8 mm flame diameter). Understanding the dynamic range for each of these corrections provides guidance to minimize errors in single shot imaging experiments at high pressure.

  1. Measurement of the absolute branching fraction of $D^{+} \to \bar{K}^{0} e^{+} \nu_{e}$ via $\bar{K}^{0} \to \pi^{0}\pi^{0}$

    DOE PAGES

    Ablikim, M.; Achasov, M. N.; Ai, X. C.; ...

    2016-11-01

    By analyzing 2.93 fb⁻¹ of data collected with the BESIII detector, we measure the absolute branching fraction of the semileptonic decay $D^{+} \to \bar{K}^{0} e^{+} \nu_{e}$ to be $\mathcal{B}(D^{+} \to \bar{K}^{0} e^{+} \nu_{e}) = (8.59 \pm 0.14 \pm 0.21)\%$ using $\bar{K}^{0} \to K^{0}_{S} \to \pi^{0}\pi^{0}$, where the first uncertainty is statistical and the second systematic. Our result is consistent with previous measurements within uncertainties.

  2. Measurement of the absolute branching fraction of $D^{+} \to \bar{K}^{0} e^{+} \nu_{e}$ via $\bar{K}^{0} \to \pi^{0}\pi^{0}$

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ablikim, M.; Achasov, M. N.; Ai, X. C.

    By analyzing 2.93 fb⁻¹ of data collected with the BESIII detector, we measure the absolute branching fraction of the semileptonic decay $D^{+} \to \bar{K}^{0} e^{+} \nu_{e}$ to be $\mathcal{B}(D^{+} \to \bar{K}^{0} e^{+} \nu_{e}) = (8.59 \pm 0.14 \pm 0.21)\%$ using $\bar{K}^{0} \to K^{0}_{S} \to \pi^{0}\pi^{0}$, where the first uncertainty is statistical and the second systematic. Our result is consistent with previous measurements within uncertainties.

  3. The cost of implementing inpatient bar code medication administration.

    PubMed

    Sakowski, Julie Ann; Ketchel, Alan

    2013-02-01

    To calculate the costs associated with implementing and operating an inpatient bar-code medication administration (BCMA) system in the community hospital setting and to estimate the cost per harmful error prevented. This is a retrospective, observational study. Costs were calculated from the hospital perspective and a cost-consequence analysis was performed to estimate the cost per preventable adverse drug event averted. Costs were collected from financial records and key informant interviews at 4 not-for-profit community hospitals. Costs included direct expenditures on capital, infrastructure, additional personnel, and the opportunity costs of time for existing personnel working on the project. The number of adverse drug events prevented using BCMA was estimated by multiplying the number of doses administered using BCMA by the rate of harmful errors prevented by interventions in response to system warnings. Our previous work found that BCMA identified and intercepted medication errors in 1.1% of doses administered, 9% of which potentially could have resulted in lasting harm. The cost of implementing and operating BCMA, including electronic pharmacy management and drug repackaging, over 5 years is $40,000 (range: $35,600 to $54,600) per BCMA-enabled bed and $2000 (range: $1800 to $2600) per harmful error prevented. BCMA can be an effective and potentially cost-saving tool for preventing the harm and costs associated with medication errors.

  4. Star formation suppression and bar ages in nearby barred galaxies

    NASA Astrophysics Data System (ADS)

    James, P. A.; Percival, S. M.

    2018-03-01

    We present new spectroscopic data for 21 barred spiral galaxies, which we use to explore the effect of bars on disc star formation, and to place constraints on the characteristic lifetimes of bar episodes. The analysis centres on regions of heavily suppressed star formation activity, which we term `star formation deserts'. Long-slit optical spectroscopy is used to determine H β absorption strengths in these desert regions, and comparisons with theoretical stellar population models are used to determine the time since the last significant star formation activity, and hence the ages of the bars. We find typical ages of ˜1 Gyr, but with a broad range, much larger than would be expected from measurement errors alone, extending from ˜0.25 to >4 Gyr. Low-level residual star formation, or mixing of stars from outside the `desert' regions, could result in a doubling of these age estimates. The relatively young ages of the underlying populations coupled with the strong limits on the current star formation rule out a gradual exponential decline in activity, and hence support our assumption of an abrupt truncation event.

  5. Enhancing the sensitivity to new physics in the $t\bar{t}$ invariant mass distribution

    NASA Astrophysics Data System (ADS)

    Álvarez, Ezequiel

    2012-08-01

    We propose selection cuts on the LHC $t\bar{t}$ production sample which should enhance the sensitivity to new physics signals in the study of the $t\bar{t}$ invariant mass distribution. We show that selecting events in which the $t\bar{t}$ object has little transverse and large longitudinal momentum enlarges the quark-fusion fraction of the sample and therefore increases its sensitivity to new physics which couples to quarks and not to gluons. We find that systematic error bars play a fundamental role, and we assume a simple model for them. We check how a non-visible new particle would become visible after the selection cuts enhance its resonance bump. A final realistic analysis should be done by the experimental groups with a correct evaluation of the systematic error bars.

  6. The Thurgood Marshall School of Law Empirical Findings: A Report of the Relationship between Graduate GPAs and First-Time Texas Bar Scores of February 2010 and July 2009

    ERIC Educational Resources Information Center

    Kadhi, T.; Holley, D.; Palasota, A.

    2010-01-01

    The following report gives descriptive and correlational statistical findings of the Grade Point Averages (GPAs) of the February 2010 and July 2009 TMSL First Time Texas Bar Test Takers to their TMSL Final GPA. Data was pre-existing and was given to the Evaluator by email from the Dean and Registrar. Statistical analyses were run using SPSS 17 to…

  7. A novel single-ended readout depth-of-interaction PET detector fabricated using sub-surface laser engraving.

    PubMed

    Uchida, H; Sakai, T; Yamauchi, H; Hakamata, K; Shimizu, K; Yamashita, T

    2016-09-21

    We propose a novel scintillation detector design for positron emission tomography (PET), which has depth of interaction (DOI) capability and uses a single-ended readout scheme. The DOI detector contains a pair of crystal bars segmented using sub-surface laser engraving (SSLE). The two crystal bars are optically coupled to each other at their top segments and are coupled to two photo-sensors at their bottom segments. Initially, we evaluated the performance of different designs of single crystal bars coupled to photomultiplier tubes at both ends. We found that segmentation by SSLE results in superior performance compared to the conventional method. As the next step, we constructed a crystal unit composed of a 3 × 3 × 20 mm³ crystal bar pair, with each bar containing four layers segmented using the SSLE. We measured the DOI performance by changing the optical conditions for the crystal unit. Based on the experimental results, we then assessed the detector performance in terms of the DOI capability by evaluating the position error, energy resolution, and light collection efficiency for various crystal unit designs with different bar sizes and a different number of layers (four to seven layers). DOI encoding with small position error was achieved for crystal units composed of a 3 × 3 × 20 mm³ LYSO bar pair having up to seven layers, and with those composed of a 2 × 2 × 20 mm³ LYSO bar pair having up to six layers. The energy resolution of the segment in the seven-layer 3 × 3 × 20 mm³ crystal bar pair was 9.3%-15.5% for 662 keV gamma-rays, where the segments closer to the photo-sensors provided better energy resolution. SSLE provides high geometrical accuracy at low production cost due to the simplicity of the crystal assembly. Therefore, the proposed DOI detector is expected to be an attractive choice for practical small-bore PET systems dedicated to imaging of the brain, breast, and small animals.

  8. Automatic Identification Technology (AIT): The Development of Functional Capability and Card Application Matrices

    DTIC Science & Technology

    1994-09-01

    Around 650 B.C. in Asia Minor, coins were developed and used in acquiring goods and services. In France, during the eighteenth century, paper money made its... counterfeited. [INFO94, p. 23] Other weaknesses of bar code technology include limited data storage capability based on the bar code symbology used when... extremely accurate, with calculated error rates as low as 1 in 100 trillion, and are difficult to counterfeit. Strong magnetic fields cannot erase RF...

  9. Errors of Measurement, Theory, and Public Policy. William H. Angoff Memorial Lecture Series

    ERIC Educational Resources Information Center

    Kane, Michael

    2010-01-01

    The 12th annual William H. Angoff Memorial Lecture was presented by Dr. Michael T. Kane, ETS's (Educational Testing Service) Samuel J. Messick Chair in Test Validity and the former Director of Research at the National Conference of Bar Examiners. Dr. Kane argues that it is important for policymakers to recognize the impact of errors of measurement…

  10. The extended statistical analysis of toxicity tests using standardised effect sizes (SESs): a comparison of nine published papers.

    PubMed

    Festing, Michael F W

    2014-01-01

    The safety of chemicals, drugs, novel foods and genetically modified crops is often tested using repeat-dose sub-acute toxicity tests in rats or mice. It is important to avoid misinterpretations of the results, as these tests are used to help determine safe exposure levels in humans. Treated and control groups are compared for a range of haematological, biochemical and other biomarkers which may indicate tissue damage or other adverse effects. However, the statistical analysis and presentation of such data poses problems due to the large number of statistical tests which are involved. Often, it is not clear whether a "statistically significant" effect is real or a false positive (type I error) due to sampling variation. The authors' conclusions appear to be reached somewhat subjectively from the pattern of statistical significances, discounting those which they judge to be type I errors and ignoring any biomarker where the p-value is greater than p = 0.05. However, by using standardised effect sizes (SESs) a range of graphical methods and an over-all assessment of the mean absolute response can be made. The approach is an extension, not a replacement of existing methods. It is intended to assist toxicologists and regulators in the interpretation of the results. Here, the SES analysis has been applied to data from nine published sub-acute toxicity tests in order to compare the findings with those of the original authors. Line plots, box plots and bar plots show the pattern of response. Dose-response relationships are easily seen. A "bootstrap" test compares the mean absolute differences across dose groups. In four out of seven papers where the no observed adverse effect level (NOAEL) was estimated by the authors, it was set too high according to the bootstrap test, suggesting that possible toxicity is under-estimated.
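
    A minimal sketch of the standardised effect size (difference of group means divided by the pooled standard deviation) and of a bootstrap comparison of the mean absolute SES is given below. The data, group sizes, and resampling scheme are illustrative assumptions, not Festing's published implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def standardised_effect_size(treated, control):
    """SES: difference of group means divided by the pooled standard deviation."""
    n1, n2 = len(treated), len(control)
    pooled_var = ((n1 - 1) * treated.var(ddof=1) +
                  (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
    return (treated.mean() - control.mean()) / np.sqrt(pooled_var)

# Illustrative data: 10 control and 10 treated animals, 20 biomarkers each,
# with a small shift applied to every biomarker in the treated group.
control = rng.normal(0.0, 1.0, size=(10, 20))
treated = rng.normal(0.3, 1.0, size=(10, 20))

ses = np.array([standardised_effect_size(treated[:, j], control[:, j]) for j in range(20)])
observed = np.abs(ses).mean()

# Bootstrap the mean absolute SES by resampling animals within each group.
boot = []
for _ in range(2000):
    c = control[rng.integers(0, 10, 10)]
    t = treated[rng.integers(0, 10, 10)]
    b = [standardised_effect_size(t[:, j], c[:, j]) for j in range(20)]
    boot.append(np.abs(np.array(b)).mean())

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean |SES| = {observed:.2f}, bootstrap 95% interval [{lo:.2f}, {hi:.2f}]")
```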

  11. Study of aging and embrittlement of microalloyed steel bars

    NASA Astrophysics Data System (ADS)

    Campillo, B.; Perez, R.; Martinez, L.

    1996-10-01

    The aging of hooks, anchors, and other bent reinforcing steel bars in concrete structures is considered in modern international standards. Rebend test procedures have been designed to predict aging embrittlement susceptibility by submerging bent reinforcing bar specimens in boiling water. Subsequently, the bars are rebent or straightened in order to determine the loss of ductility, or embrittlement, of the aged material. The present work considers the influence of carbon, sulfur, and niobium on the performance of reinforcing bars in rebend tests of 300 heats of microalloyed steel bars with a variety of compositions. The microstructural evidence and the statistical results clearly indicate the strong influence of carbon and sulfur on rebend failure, while niobium-rich precipitates contribute to the hardening of the ferrite grains during aging.

  12. Bar Code Medication Administration Technology: Characterization of High-Alert Medication Triggers and Clinician Workarounds.

    PubMed

    Miller, Daniel F; Fortier, Christopher R; Garrison, Kelli L

    2011-02-01

    Bar code medication administration (BCMA) technology is gaining acceptance for its ability to prevent medication administration errors. However, studies suggest that improper use of BCMA technology can yield unsatisfactory error prevention and introduction of new potential medication errors. To evaluate the incidence of high-alert medication BCMA triggers and alert types and discuss the type of nursing and pharmacy workarounds occurring with the use of BCMA technology and the electronic medication administration record (eMAR). Medication scanning and override reports from January 1, 2008, through November 30, 2008, for all adult medical/surgical units were retrospectively evaluated for high-alert medication system triggers, alert types, and override reason documentation. An observational study of nursing workarounds on an adult medicine step-down unit was performed and an analysis of potential pharmacy workarounds affecting BCMA and the eMAR was also conducted. Seventeen percent of scanned medications triggered an error alert of which 55% were for high-alert medications. Insulin aspart, NPH insulin, hydromorphone, potassium chloride, and morphine were the top 5 high-alert medications that generated alert messages. Clinician override reasons for alerts were documented in only 23% of administrations. Observational studies assessing for nursing workarounds revealed a median of 3 clinician workarounds per administration. Specific nursing workarounds included a failure to scan medications/patient armband and scanning the bar code once the dosage has been removed from the unit-dose packaging. Analysis of pharmacy order entry process workarounds revealed the potential for missed doses, duplicate doses, and doses being scheduled at the wrong time. BCMA has the potential to prevent high-alert medication errors by alerting clinicians through alert messages. Nursing and pharmacy workarounds can limit the recognition of optimal safety outcomes and therefore workflow processes must be continually analyzed and restructured to yield the intended full benefits of BCMA technology. © 2011 SAGE Publications.

  13. Radial basis function network learns ceramic processing and predicts related strength and density

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.; Baaklini, George Y.; Vary, Alex; Tjia, Robert E.

    1993-01-01

    Radial basis function (RBF) neural networks were trained using the data from 273 Si3N4 modulus of rupture (MOR) bars which were tested at room temperature and 135 MOR bars which were tested at 1370 C. Milling time, sintering time, and sintering gas pressure were the processing parameters used as the input features. Flexural strength and density were the outputs by which the RBF networks were assessed. The 'nodes-at-data-points' method was used to set the hidden layer centers and output layer training used the gradient descent method. The RBF network predicted strength with an average error of less than 12 percent and density with an average error of less than 2 percent. Further, the RBF network demonstrated a potential for optimizing and accelerating the development and processing of ceramic materials.
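
    The sketch below illustrates the "nodes-at-data-points" idea: one Gaussian basis function is centred on every training point, and a linear output layer maps the activations to the targets. For brevity the output weights are fitted here by least squares rather than the gradient descent used in the paper, and the data are synthetic placeholders for the processing parameters and material properties named above.

```python
import numpy as np

class RBFNetwork:
    """Minimal radial basis function network: one Gaussian centre per training
    point, linear output layer fitted by least squares."""

    def __init__(self, width=1.0):
        self.width = width

    def _design(self, X):
        # Gaussian activations between every input row and every stored centre.
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        self.centers = X.copy()                          # "nodes at data points"
        Phi = self._design(X)
        self.weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        return self._design(X) @ self.weights

# Synthetic inputs standing in for milling time, sintering time, and sintering
# gas pressure (scaled to [0, 1]); outputs stand in for strength and density.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(50, 3))
y = np.column_stack([X @ [2.0, 1.0, 0.5], 1.0 + 0.2 * X[:, 2]]) + rng.normal(0, 0.01, (50, 2))

model = RBFNetwork(width=0.3).fit(X, y)
print("mean relative error:", np.mean(np.abs(model.predict(X) - y) / np.abs(y)))
```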

  14. Laser damage metrology in biaxial nonlinear crystals using different test beams

    NASA Astrophysics Data System (ADS)

    Hildenbrand, Anne; Wagner, Frank R.; Akhouayri, Hassan; Natoli, Jean-Yves; Commandre, Mireille

    2008-01-01

    Laser damage measurements in nonlinear optical crystals, in particular in biaxial crystals, may be influenced by several effects proper to these materials or greatly enhanced in these materials. Before discussion of these effects, we address the topic of error bar determination for probability measurements. Error bars for the damage probabilities are important because nonlinear crystals are often small and expensive, thus only few sites are used for a single damage probability measurement. We present the mathematical basics and a flow diagram for the numerical calculation of error bars for probability measurements that correspond to a chosen confidence level. Effects that possibly modify the maximum intensity in a biaxial nonlinear crystal are: focusing aberration, walk-off and self-focusing. Depending on focusing conditions, propagation direction, polarization of the light and the position of the focus point in the crystal, strong aberrations may change the beam profile and drastically decrease the maximum intensity in the crystal. A correction factor for this effect is proposed, but quantitative corrections are not possible without taking into account the experimental beam profile after the focusing lens. The characteristics of walk-off and self-focusing have quickly been reviewed for the sake of completeness of this article. Finally, parasitic second harmonic generation may influence the laser damage behavior of crystals. The important point for laser damage measurements is that the amount of externally observed SHG after the crystal does not correspond to the maximum amount of second harmonic light inside the crystal.
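
    One standard way to attach a confidence-level-based error bar to a damage probability estimated from a small number of test sites is the Clopper-Pearson binomial interval. The sketch below illustrates that approach; it is not necessarily the exact procedure or flow diagram described by the authors.

```python
from scipy.stats import beta

def damage_probability_interval(damaged, tested, confidence=0.95):
    """Clopper-Pearson confidence interval for a laser-damage probability
    estimated from `damaged` failures out of `tested` irradiated sites."""
    alpha = 1.0 - confidence
    lower = 0.0 if damaged == 0 else beta.ppf(alpha / 2, damaged, tested - damaged + 1)
    upper = 1.0 if damaged == tested else beta.ppf(1 - alpha / 2, damaged + 1, tested - damaged)
    return damaged / tested, (lower, upper)

# Example: 3 damaged sites out of 10 tested at one fluence (few sites, as is
# typical for small, expensive nonlinear crystals).
p_hat, (lo, hi) = damage_probability_interval(3, 10)
print(f"p = {p_hat:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```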

  15. Evaluation of Key Factors Impacting Feeding Safety in the Neonatal Intensive Care Unit: A Systematic Review.

    PubMed

    Matus, Bethany A; Bridges, Kayla M; Logomarsino, John V

    2018-06-21

    Individualized feeding care plans and safe handling of milk (human or formula) are critical in promoting growth, immune function, and neurodevelopment in the preterm infant. Feeding errors and disruptions or limitations to feeding processes in the neonatal intensive care unit (NICU) are associated with negative safety events. Feeding errors include contamination of milk and delivery of incorrect or expired milk and may result in adverse gastrointestinal illnesses. The purpose of this review was to evaluate the effect(s) of centralized milk preparation, use of trained technicians, use of bar code-scanning software, and collaboration between registered dietitians and registered nurses on feeding safety in the NICU. A systematic review of the literature was completed, and 12 articles were selected as relevant to search criteria. Study quality was evaluated using the Downs and Black scoring tool. An evaluation of human studies indicated that the use of centralized milk preparation, trained technicians, bar code-scanning software, and possible registered dietitian involvement decreased feeding-associated error in the NICU. A state-of-the-art NICU includes a centralized milk preparation area staffed by trained technicians, care supported by bar code-scanning software, and utilization of a registered dietitian to improve patient safety. These resources will provide nurses more time to focus on nursing-specific neonatal care. Further research is needed to evaluate the impact of factors related to feeding safety in the NICU as well as potential financial benefits of these quality improvement opportunities.

  16. Development of self-sensing BFRP bars with distributed optic fiber sensors

    NASA Astrophysics Data System (ADS)

    Tang, Yongsheng; Wu, Zhishen; Yang, Caiqian; Shen, Sheng; Wu, Gang; Hong, Wan

    2009-03-01

    In this paper, a new type of self-sensing basalt fiber reinforced polymer (BFRP) bar is developed using the Brillouin scattering-based distributed optic fiber sensing technique. During fabrication, an optic fiber without buffer or sheath is first reinforced by mechanically braiding a dry continuous basalt fiber sheath around it, so that the fiber survives the pulling-shoving process used to manufacture the BFRP bars. The optic fiber with its dry basalt fiber sheath, embedded as a core in the BFRP bar, is then thoroughly impregnated with epoxy resin during the pulling-shoving process. The bond between the optic fiber and the basalt fiber sheath, as well as between the basalt fiber sheath and the FRP bar, can thus be controlled and ensured, so the measuring error due to slippage between the optic fiber core and the coating is reduced. Moreover, the epoxy resin in the segments where optic fibers will be connected is left uncured by isolating these parts of the bar from heat during manufacture. Consequently, the optic fiber in these segments of the bar can easily be taken out, and the connection between optic fibers can be carried out smoothly. Finally, a series of experiments is performed to study the sensing and mechanical properties of the proposed BFRP bars. The experimental results show that the self-sensing BFRP bar is characterized not only by excellent accuracy, repeatability, and linearity in strain measurement but also by good mechanical properties.

  17. Unconscious analyses of visual scenes based on feature conjunctions.

    PubMed

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

    To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved.

  18. The Thurgood Marshall School of Law Empirical Findings: A Six-Year Study of the First-Time and Ultimate Bar Exam Results of Students According to Law School Admission Council (LSAC) Index

    ERIC Educational Resources Information Center

    Kadhi, T.; Holley, D.; Beard, J.

    2011-01-01

    The following report of descriptive statistics addresses the matriculating class of 2001-2007 according to their Law School Admission Council (LSAC) index. Generally, this report will offer information on the first time bar and ultimate performance on the Bar Exam of TMSL students. In addition, graduating GPA according to the LSAC index will also…

  19. Minimally Invasive Repair of Pectus Excavatum Without Bar Stabilizers Using Endo Close.

    PubMed

    Pio, Luca; Carlucci, Marcello; Leonelli, Lorenzo; Erminio, Giovanni; Mattioli, Girolamo; Torre, Michele

    2016-02-01

    Since the introduction of the Nuss technique for pectus excavatum (PE) repair, stabilization of the bar has been a matter of debate and a crucial point for the outcome, as bar dislocation remains one of the most frequent complications. Several techniques have been described, most of them including the use of a metal stabilizer, which, however, can increase morbidity and be difficult to remove. Our study compares bar stabilization techniques in two groups of patients, respectively, with and without the metal stabilizer. A retrospective study on patients affected by PE and treated by the Nuss technique from January 2012 to June 2013 at our institution was performed in order to evaluate the efficacy of metal stabilizers. Group 1 included patients who did not have the metal stabilizer inserted; stabilization was achieved with multiple (at least four) bilateral pericostal Endo Close™ (Auto Suture, US Surgical; Tyco Healthcare Group, Norwalk, CT) sutures. Group 2 included patients who had a metal stabilizer placed because pericostal sutures could not be used bilaterally. We compared the two groups in terms of bar dislocation rate, surgical operative time, and other complications. Statistical analysis was performed with the Mann-Whitney U test and Fisher's exact test. Fifty-seven patients were included in the study: 37 in Group 1 and 20 in Group 2. Two patients from Group 2 had a bar dislocation. Statistical analysis showed no difference between the two groups in dislocation rate or other complications. In our experience, the placement of a metal stabilizer did not reduce the rate of bar dislocation. Bar stabilization by the pericostal Endo Close suture technique appears to have no increase in morbidity or migration compared with the metal lateral stabilizer technique.

  20. Does a tow-bar increase the risk of neck injury in rear-end collisions?

    PubMed

    Olesen, Anne Vingaard; Elvik, Rune; Andersen, Camilla Sloth; Lahrmann, Harry S

    2018-06-01

    Does a tow-bar increase the risk of neck injury in the struck car in a rear-end collision? The rear part of a modern car has collision zones that are rendered nonoperational when the car is equipped with a tow-bar. Past crash tests have shown that a car's acceleration was higher in a car equipped with a tow-bar and also that a dummy placed in a car with a tow-bar had higher peak acceleration in the lower neck area. This study aimed to investigate the association between the risk of neck injury in drivers and passengers, and the presence of a registered tow-bar on the struck car in a rear-end collision. We performed a merger of police reports, the National Hospital Discharge Registry, and the National Registry of Motor Vehicles in Denmark. We identified 9,370 drivers and passengers of whom 1,519 were diagnosed with neck injury within the first year after the collision. We found a statistically insignificant 5% decrease in the risk of neck injury in the occupants of the struck car when a tow-bar was fitted compared to when it was not fitted (hazard ratio = 0.95; 95% confidence interval = 0.85-1.05; p = 0.32). The result was controlled for gender, age, and the seat of the occupant. Several other collision and car characteristics and demographic information on the drivers and passengers were evaluated as confounders but were not statistically significant. The present study may serve as valuable input for a meta-analysis on the effect of a tow-bar because negative results are necessary in order to avoid publication bias. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. The effect of smoke-free policies on revenue in bars in Tasmania, Australia.

    PubMed

    Lal, A; Siahpush, M

    2009-10-01

    To examine the impact of smoke-free policies on revenue in Tasmanian bars. Monthly sales turnover from January 2002 to March 2007, provided by the Australian Bureau of Statistics, was analysed. Two outcome measures were used: (1) the ratio of monthly bar sales turnover in Tasmania to monthly bar sales turnover in four other Australian states, and (2) the ratio of monthly bar turnover to monthly retail turnover in Tasmania. Linear regression was used to assess the impact of the smoke-free policy on expenditure. The smoke-free policy had no effect on sales turnover. The smoke-free policy protects hospitality workers and patrons from exposure to secondhand smoke and has had no adverse effect on sales turnover.

  2. Remote Sensing Global Surface Air Pressure Using Differential Absorption BArometric Radar (DiBAR)

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Harrah, Steven; Lawrence, Wes; Hu, Yongxiang; Min, Qilong

    2016-01-01

    Tropical storms and severe weathers are listed as one of core events that need improved observations and predictions in World Meteorological Organization and NASA Decadal Survey (DS) documents and have major impacts on public safety and national security. This effort tries to observe surface air pressure, especially over open seas, from space using a Differential-absorption BArometric Radar (DiBAR) operating at the 50-55 gigahertz O2 absorption band. Air pressure is among the most important variables that affect atmospheric dynamics, and currently can only be measured by limited in-situ observations over oceans. Analyses show that with the proposed space radar the errors in instantaneous (averaged) pressure estimates can be as low as approximately 4 millibars (approximately 1 millibar under all weather conditions). With these sea level pressure measurements, the forecasts of severe weathers such as hurricanes will be significantly improved. Since the development of the DiBAR concept about a decade ago, NASA Langley DiBAR research team has made substantial progress in advancing the concept. The feasibility assessment clearly shows the potential of sea surface barometry using existing radar technologies. The team has developed a DiBAR system design, fabricated a Prototype-DiBAR (P-DiBAR) for proof-of-concept, conducted lab, ground and airborne P-DiBAR tests. The flight test results are consistent with the instrumentation goals. Observational system simulation experiments for space DiBAR performance based on the existing DiBAR technology and capability show substantial improvements in tropical storm predictions, not only for the hurricane track and position but also for the hurricane intensity. DiBAR measurements will lead us to an unprecedented level of the prediction and knowledge on global extreme weather and climate conditions.

  3. [Clinical application of biofragmentable anastomosis ring for intestinal anastomosis].

    PubMed

    Ye, Feng; Lin, Jian-jiang

    2006-11-01

    To compare the efficacy of the biofragmentable anastomotic ring (BAR) with conventional hand-sutured and stapling techniques, and to evaluate the safety and applicability of the BAR in intestinal anastomosis. A total of 498 patients who underwent intestinal anastomosis from January 2000 to November 2005 were allocated to a BAR group (n=186), a hand-sutured group (n=177), and a linear cutter group (n=135). The operative time, postoperative convalescence, and corresponding complications were recorded. Postoperative anastomotic inflammation and anastomotic stenosis were observed during half-year or one-year follow-up of 436 patients. The operative time was (102 +/- 16) min in the BAR group, (121 +/- 15) min in the hand-sutured group, and (105 +/- 18) min in the linear cutter group. The difference was statistically significant (P < 0.05); the operative time in the BAR and linear cutter groups was shorter than in the hand-sutured group. One case of anastomotic leakage was noted in the BAR group, one case in the hand-sutured group, and none in the linear cutter group; both were cured by conservative methods. One case of anastomotic obstruction occurred in the BAR group and one case in the hand-sutured group; two of them were cured by conservative methods. Two cases of anastomotic obstruction occurred in the hand-sutured group, one of which required reoperation to remove the obstruction. In the BAR, hand-sutured, and linear cutter groups, the postoperative first flatus time was (67.2 +/- 4.6) h, (70.2 +/- 5.8) h, and (69.2 +/- 6.2) h, respectively, with no significant differences among the three groups (P > 0.05). The rate of postoperative anastomotic inflammation was 3.0% (5/164) in the BAR group, 47.8% (76/159) in the hand-sutured group, and 7.1% (8/113) in the linear cutter group. The difference was statistically significant (P < 0.05); the rate of postoperative anastomotic inflammation in the BAR and linear cutter groups was lower than in the hand-sutured group. The BAR is a rapid, safe, and effective method for intestinal anastomosis, with less anastomotic inflammatory reaction than the hand-sutured technique. It should be considered equal to manual and stapler methods.

  4. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
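
    A minimal sketch of the two ECDF-based statistics advocated above follows: the probability that a new absolute error falls below a chosen threshold, and the error amplitude not exceeded at a chosen confidence level, together with bootstrap standard errors that shrink as the reference dataset grows. The error data are synthetic.

```python
import numpy as np

def ecdf_statistics(errors, threshold, confidence=0.95, n_boot=2000, seed=0):
    """Two ECDF-based performance statistics for a set of model errors:
    (1) probability that a new absolute error falls below `threshold`,
    (2) absolute-error amplitude not exceeded at the chosen confidence level."""
    abs_err = np.abs(np.asarray(errors))
    p_below = np.mean(abs_err < threshold)
    q_conf = np.quantile(abs_err, confidence)

    # Bootstrap standard errors; they depend on the reference-set size.
    rng = np.random.default_rng(seed)
    boot = rng.choice(abs_err, size=(n_boot, abs_err.size), replace=True)
    se_p = np.std(np.mean(boot < threshold, axis=1), ddof=1)
    se_q = np.std(np.quantile(boot, confidence, axis=1), ddof=1)
    return (p_below, se_p), (q_conf, se_q)

# Illustrative benchmark errors (kcal/mol), deliberately skewed and offset.
rng = np.random.default_rng(3)
errors = rng.normal(0.5, 1.0, 200) + rng.exponential(0.5, 200)
(p, se_p), (q, se_q) = ecdf_statistics(errors, threshold=1.0)
print(f"P(|err| < 1.0) = {p:.2f} +/- {se_p:.2f}; 95% error amplitude = {q:.2f} +/- {se_q:.2f}")
```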

  5. Raising the bar for reproducible science at the U.S. Environmental Protection Agency Office of Research and Development.

    PubMed

    George, Barbara Jane; Sobus, Jon R; Phelps, Lara P; Rashleigh, Brenda; Simmons, Jane Ellen; Hines, Ronald N

    2015-05-01

    Considerable concern has been raised regarding research reproducibility both within and outside the scientific community. Several factors possibly contribute to a lack of reproducibility, including a failure to adequately employ statistical considerations during study design, bias in sample selection or subject recruitment, errors in developing data inclusion/exclusion criteria, and flawed statistical analysis. To address some of these issues, several publishers have developed checklists that authors must complete. Others have either enhanced statistical expertise on existing editorial boards, or formed distinct statistics editorial boards. Although the U.S. Environmental Protection Agency, Office of Research and Development, already has a strong Quality Assurance Program, an initiative was undertaken to further strengthen statistics consideration and other factors in study design and also to ensure these same factors are evaluated during the review and approval of study protocols. To raise awareness of the importance of statistical issues and provide a forum for robust discussion, a Community of Practice for Statistics was formed in January 2014. In addition, three working groups were established to develop a series of questions or criteria that should be considered when designing or reviewing experimental, observational, or modeling focused research. This article describes the process used to develop these study design guidance documents, their contents, how they are being employed by the Agency's research enterprise, and expected benefits to Agency science. The process and guidance documents presented here may be of utility for any research enterprise interested in enhancing the reproducibility of its science. © The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology.

  6. Selective recovery of tagatose from mixtures with galactose by direct extraction with supercritical CO2 and different cosolvents.

    PubMed

    Montañés, Fernando; Fornari, Tiziana; Martín-Alvarez, Pedro J; Corzo, Nieves; Olano, Agustin; Ibañez, Elena

    2006-10-18

    A selective fractionation method of carbohydrate mixtures of galactose/tagatose, using supercritical CO(2) and isopropanol as cosolvent, has been evaluated. Optimization was carried out using a central composite face design and considering as factors the extraction pressure (from 100 to 300 bar), the extraction temperature (from 60 to 100 degrees C), and the modifier flow rate (from 0.2 to 0.4 mL/min, which corresponded to a total cosolvent percentage ranging from 4 to 18% vol). The responses evaluated were the amount (milligrams) of tagatose and galactose extracted and their recoveries (percent). The statistical analysis of the results provided mathematical models for each response variable. The corresponding parameters were estimated by multiple linear regression, and high determination coefficients (>0.96) were obtained. The optimum conditions of the extraction process to get the maximum recovery of tagatose (37%) were 300 bar, 60 degrees C, and 0.4 mL/min of cosolvent. The predicted value was 24.37 mg of tagatose, whereas the experimental value was 26.34 mg, which is a 7% error from the predicted value. Cosolvent polarity effects on tagatose extraction from mixtures of galactose/tagatose were also studied using different alcohols and their mixtures with water. Although a remarkable increase of the amount of total carbohydrate extracted with polarity was found, selective extraction of tagatose decreased with increase of polarity of assayed cosolvents. To improve the recovery of extracted tagatose, additional experiments outside the experimental domain were carried out (300 bar, 80 degrees C, and 0.6 mL/min of isopropanol); recoveries >75% of tagatose with purity >90% were obtained.
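
    The kind of model described above, a quadratic response surface fitted by multiple linear regression to a central composite design, can be sketched as follows. The factor ranges match those quoted in the abstract, but the recovery data and coefficients are synthetic placeholders, not the published measurements.

```python
import numpy as np

# Synthetic design points over the quoted factor ranges.
rng = np.random.default_rng(4)
P = rng.uniform(100, 300, 30)      # extraction pressure, bar
T = rng.uniform(60, 100, 30)       # extraction temperature, deg C
F = rng.uniform(0.2, 0.4, 30)      # cosolvent flow rate, mL/min
recovery = 5 + 0.08 * P - 0.10 * T + 40 * F + rng.normal(0, 1, 30)   # placeholder response

# Full quadratic model: intercept, linear, square, and two-way interaction terms.
X = np.column_stack([np.ones_like(P), P, T, F, P*P, T*T, F*F, P*T, P*F, T*F])
coeffs, *_ = np.linalg.lstsq(X, recovery, rcond=None)
pred = X @ coeffs
r2 = 1 - np.sum((recovery - pred) ** 2) / np.sum((recovery - recovery.mean()) ** 2)
print(f"R^2 of fitted response surface: {r2:.3f}")
```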

  7. Study of Electron Anti-neutrinos Associated with Gamma-Ray Bursts Using KamLAND

    NASA Astrophysics Data System (ADS)

    Asakura, K.; Gando, A.; Gando, Y.; Hachiya, T.; Hayashida, S.; Ikeda, H.; Inoue, K.; Ishidoshiro, K.; Ishikawa, T.; Ishio, S.; Koga, M.; Matsuda, S.; Mitsui, T.; Motoki, D.; Nakamura, K.; Obara, S.; Oki, Y.; Oura, T.; Shimizu, I.; Shirahata, Y.; Shirai, J.; Suzuki, A.; Tachibana, H.; Tamae, K.; Ueshima, K.; Watanabe, H.; Xu, B. D.; Yoshida, H.; Kozlov, A.; Takemoto, Y.; Yoshida, S.; Fushimi, K.; Piepke, A.; Banks, T. I.; Berger, B. E.; Fujikawa, B. K.; O'Donnell, T.; Learned, J. G.; Maricic, J.; Sakai, M.; Winslow, L. A.; Efremenko, Y.; Karwowski, H. J.; Markoff, D. M.; Tornow, W.; Detwiler, J. A.; Enomoto, S.; Decowski, M. P.; KamLAND Collaboration

    2015-06-01

    We search for electron anti-neutrinos (ν̄e) from long- and short-duration gamma-ray bursts (GRBs) using data taken by the Kamioka Liquid Scintillator Anti-Neutrino Detector (KamLAND) from 2002 August to 2013 June. No statistically significant excess over the background level is found. We place the tightest upper limits on ν̄e fluence from GRBs below 7 MeV and place the first constraints on the relation between ν̄e luminosity and effective temperature.

  8. Interpolating Spherical Harmonics for Computing Antenna Patterns

    DTIC Science & Technology

    2011-07-01

    … If gNF denotes the spline computed from the uniform partition of NF + 1 frequency points, the splines converge as O(NF^−4): ‖gNF − g‖∞ ≤ C0‖g^(4)… splines. There is the possibility of estimating the error ‖g − gNF‖∞ even though the function g is unknown. Table 1 compares these unknown errors ‖g − gNF‖∞ … to the computable estimates ‖gNF − g2NF‖∞. The latter is a strong predictor of the unknown error. The triple bar is the sup-norm error over all the …
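    The computable estimate this record describes, comparing the spline built from NF points against the one built from 2NF points as a surrogate for the unknown error ‖g − gNF‖∞, can be sketched as follows. The test function, interval and grid sizes are assumptions chosen only to show the idea.

```python
import numpy as np
from scipy.interpolate import CubicSpline

g = lambda x: np.exp(np.sin(3 * x))      # stand-in for the (normally unknown) function
fine = np.linspace(0.0, 2.0, 5001)       # dense grid for sup-norm estimates

def spline_on(n_intervals):
    x = np.linspace(0.0, 2.0, n_intervals + 1)
    return CubicSpline(x, g(x))

for n in (8, 16, 32, 64):
    s_n, s_2n = spline_on(n), spline_on(2 * n)
    true_err = np.max(np.abs(g(fine) - s_n(fine)))     # needs g, usually unavailable
    est_err = np.max(np.abs(s_2n(fine) - s_n(fine)))   # computable surrogate
    print(f"N={n:3d}  true={true_err:.2e}  estimate={est_err:.2e}")
```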

  9. Publisher Correction: Unravelling the immune signature of Plasmodium falciparum transmission-reducing immunity.

    PubMed

    Stone, Will J R; Campo, Joseph J; Ouédraogo, André Lin; Meerstein-Kessel, Lisette; Morlais, Isabelle; Da, Dari; Cohuet, Anna; Nsango, Sandrine; Sutherland, Colin J; van de Vegte-Bolmer, Marga; Siebelink-Stoter, Rianne; van Gemert, Geert-Jan; Graumans, Wouter; Lanke, Kjerstin; Shandling, Adam D; Pablo, Jozelyn V; Teng, Andy A; Jones, Sophie; de Jong, Roos M; Fabra-García, Amanda; Bradley, John; Roeffen, Will; Lasonder, Edwin; Gremo, Giuliana; Schwarzer, Evelin; Janse, Chris J; Singh, Susheel K; Theisen, Michael; Felgner, Phil; Marti, Matthias; Drakeley, Chris; Sauerwein, Robert; Bousema, Teun; Jore, Matthijs M

    2018-04-11

    The original version of this Article contained errors in Fig. 3. In panel a, bars from a chart depicting the percentage of antibody-positive individuals in non-infectious and infectious groups were inadvertently included in place of bars depicting the percentage of infectious individuals, as described in the Article and figure legend. However, the p values reported in the Figure and the resulting conclusions were based on the correct dataset. The corrected Fig. 3a now shows the percentage of infectious individuals in antibody-negative and -positive groups, in both the PDF and HTML versions of the Article. The incorrect and correct versions of Figure 3a are also presented for comparison in the accompanying Publisher Correction as Figure 1.The HTML version of the Article also omitted a link to Supplementary Data 6. The error has now been fixed and Supplementary Data 6 is available to download.

  10. A PRELIMINARY JUPITER MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hubbard, W. B.; Militzer, B.

    In anticipation of new observational results for Jupiter's axial moment of inertia and gravitational zonal harmonic coefficients from the forthcoming Juno orbiter, we present a number of preliminary Jupiter interior models. We combine results from ab initio computer simulations of hydrogen–helium mixtures, including immiscibility calculations, with a new nonperturbative calculation of Jupiter's zonal harmonic coefficients, to derive a self-consistent model for the planet's external gravity and moment of inertia. We assume helium rain modified the interior temperature and composition profiles. Our calculation predicts zonal harmonic values to which measurements can be compared. Although some models fit the observed (pre-Juno) second- and fourth-order zonal harmonics to within their error bars, our preferred reference model predicts a fourth-order zonal harmonic whose absolute value lies above the pre-Juno error bars. This model has a dense core of about 12 Earth masses and a hydrogen–helium-rich envelope with approximately three times solar metallicity.

  11. The Carnegie-Irvine Galaxy Survey. V. Statistical Study of Bars and Buckled Bars

    NASA Astrophysics Data System (ADS)

    Li, Zhao-Yu; Ho, Luis C.; Barth, Aaron J.

    2017-08-01

    Simulations have shown that bars are subject to a vertical buckling instability that transforms thin bars into boxy or peanut-shaped structures, but the physical conditions necessary for buckling to occur are not fully understood. We use the large sample of local disk galaxies in the Carnegie-Irvine Galaxy Survey to examine the incidence of bars and buckled bars across the Hubble sequence. Depending on the disk inclination angle (I), a buckled bar reveals itself as either a boxy/peanut-shaped bulge (at high I) or as a barlens structure (at low I). We visually identify bars, boxy/peanut-shaped bulges, and barlenses, and examine the dependence of bar and buckled bar fractions on host galaxy properties, including Hubble type, stellar mass, color, and gas mass fraction. We find that the barred and unbarred disks show similar distributions in these physical parameters. The bar fraction is higher (70%-80%) in late-type disks with low stellar mass (M* < 10^10.5 M⊙) and high gas mass ratio. In contrast, the buckled bar fraction increases to 80% toward massive and early-type disks (M* > 10^10.5 M⊙), and decreases with higher gas mass ratio. These results suggest that bars are more difficult to grow in massive disks that are dynamically hotter than low-mass disks. However, once a bar forms, it can easily buckle in the massive disks, where a deeper potential can sustain the vertical resonant orbits. We also find a probable buckling bar candidate (ESO 506-G004) that could provide further clues to understand the timescale of the buckling process.

  12. Contribution to the theory of propeller vibrations

    NASA Technical Reports Server (NTRS)

    Liebers, F

    1930-01-01

    This report presents a calculation of the torsional frequencies of revolving bars with allowance for the air forces, and a calculation of the flexural or bending frequencies of revolving straight or tapered bars in terms of the angular velocity of revolution. The calculation is carried out on the basis of Rayleigh's principle of variation. There is also a discussion of error estimation and the accuracy of the results. The author then provides an application of the theory to screw propellers for airplanes and discusses the liability of propellers to damage through vibrations due to lack of uniform loading.

  13. Communication: Calculation of interatomic forces and optimization of molecular geometry with auxiliary-field quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Motta, Mario; Zhang, Shiwei

    2018-05-01

    We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.
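    The geometry-optimization step mentioned here amounts to a steepest-descent update along (statistically noisy) forces. A minimal sketch of that generic idea, with a toy harmonic "diatomic" and artificial Gaussian noise standing in for the stochastic error of quantum Monte Carlo force estimates, might look like the following; it is not the AFQMC algorithm itself, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forces(coords, k=1.0, r0=1.0, noise=0.02):
    """Toy diatomic: harmonic bond force plus Gaussian 'statistical' noise
    standing in for stochastic force estimates."""
    d = coords[1] - coords[0]
    r = np.linalg.norm(d)
    f1 = -k * (r - r0) * d / r + noise * rng.normal(size=3)   # force on atom 1
    return np.array([-f1, f1])

coords = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0]])   # start stretched beyond r0
step = 0.2
for _ in range(50):
    coords = coords + step * forces(coords)              # steepest descent along the force

print("final bond length:", np.linalg.norm(coords[1] - coords[0]))
```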

  14. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    NASA Astrophysics Data System (ADS)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to build protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient that may have benefited from a more individualised treatment. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For these cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.

  15. Success and High Predictability of Intraorally Welded Titanium Bar in the Immediate Loading Implants

    PubMed Central

    Fogli, Vaniel; Camerini, Michele; Carinci, Francesco

    2014-01-01

    Implant failure may be caused by micromotion and stress exerted on implants during the phase of bone healing. This is especially true for implants placed in atrophic ridges. Primary stabilization and fixation of implants is therefore an important goal, one that can also allow immediate loading and oral rehabilitation on the same day of surgery. This goal may be achieved with the technique of welding titanium bars onto implant abutments. The procedure can be performed directly in the mouth, eliminating the possibility of errors or distortions due to impression taking. This paper describes a case report and the most recent data about the long-term success and high predictability of intraorally welded titanium bars in immediate loading implants. PMID:24963419

  16. Quark fragmentation into spin-triplet S-wave quarkonium

    DOE PAGES

    Bodwin, Geoffrey T.; Chung, Hee Sok; Kim, U-Rae; ...

    2015-04-08

    We compute fragmentation functions for a quark to fragment to a quarkonium through an S-wave spin-triplet heavy quark-antiquark pair. We consider both color-singlet and color-octet heavy quark-antiquark (QQ̄) pairs. We give results for the case in which the fragmenting quark and the quark that is a constituent of the quarkonium have different flavors and for the case in which these quarks have the same flavors. Our results for the sum over all spin polarizations of the QQ̄ pairs confirm previous results. Our results for longitudinally polarized QQ̄ pairs agree with previous calculations for the same-flavor cases and correct an error in a previous calculation for the different-flavor case.

  17. 7 CFR 785.8 - Reports by qualifying States receiving mediation grant funds.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... reliable statistical data as may be obtained from State statistical sources including the certified State's bar association, State Department of Agriculture, State court system or Better Business Bureau, or...

  18. Ensuring Positiveness of the Scaled Difference Chi-Square Test Statistic

    ERIC Educational Resources Information Center

    Satorra, Albert; Bentler, Peter M.

    2010-01-01

    A scaled difference test statistic T̃_d that can be computed from standard software of structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (Psychometrika 66:507-514, 2001). The statistic T̃_d is asymptotically equivalent to the scaled difference test statistic T̄_…

  19. A model for flexi-bar to evaluate intervertebral disc and muscle forces in exercises.

    PubMed

    Abdollahi, Masoud; Nikkhoo, Mohammad; Ashouri, Sajad; Asghari, Mohsen; Parnianpour, Mohamad; Khalaf, Kinda

    2016-10-01

    This study developed and validated a lumped parameter model for the FLEXI-BAR, a popular training instrument that provides vibration stimulation. The model, which can be used in conjunction with musculoskeletal-modeling software for quantitative biomechanical analyses, consists of 3 rigid segments, 2 torsional springs, and 2 torsional dashpots. Two different sets of experiments were conducted to determine the model's key parameters, including the stiffness of the springs and the damping ratio of the dashpots. In the first set of experiments, the free vibration of the FLEXI-BAR with an initial displacement at its end was considered, while in the second set, forced oscillations of the bar were studied. The properties of the mechanical elements in the lumped parameter model were derived using a non-linear optimization algorithm that minimized the difference between the model's predictions and the experimental data. The results showed that the model is valid (8% error) and can be used for simulating exercises with the FLEXI-BAR for excitations in the range of the natural frequency. The model was then validated in combination with the AnyBody musculoskeletal modeling software, where lumbar disc, spinal muscle, and hand muscle forces were determined during different FLEXI-BAR exercise simulations. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
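    A minimal sketch of the identification step, nonlinear least squares minimizing the gap between a model response and a measured free-vibration record, is given below. It uses a single torsional spring and dashpot with synthetic data, a deliberate simplification of the paper's three-segment, two-spring, two-dashpot model; the inertia, bounds and noise level are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic free-vibration record of a damped torsional oscillator standing in
# for the measured FLEXI-BAR tip response (assumed inertia I, initial angle theta0).
I, theta0 = 0.05, 0.2
t = np.linspace(0.0, 2.0, 400)

def response(params, t):
    """Angular displacement of an underdamped single-DOF torsional oscillator."""
    k, c = params                                  # torsional stiffness and damping
    wn = np.sqrt(k / I)
    zeta = c / (2.0 * np.sqrt(k * I))
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    return theta0 * np.exp(-zeta * wn * t) * np.cos(wd * t)

true_k, true_c = 180.0, 0.8
measured = response((true_k, true_c), t) + 0.005 * np.random.default_rng(1).normal(size=t.size)

# Nonlinear least squares: minimise model-vs-experiment residuals, analogous to
# the optimisation used to identify the bar's spring and dashpot properties.
# The bounds keep the model underdamped so the closed-form response stays valid.
fit = least_squares(lambda p: response(p, t) - measured, x0=[100.0, 0.5],
                    bounds=([50.0, 0.0], [1000.0, 2.0]))
print("identified stiffness and damping:", fit.x)
```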

  20. Path synthesis of four-bar mechanisms using synergy of polynomial neural network and Stackelberg game theory

    NASA Astrophysics Data System (ADS)

    Ahmadi, Bahman; Nariman-zadeh, Nader; Jamali, Ali

    2017-06-01

    In this article, a novel approach based on game theory is presented for multi-objective optimal synthesis of four-bar mechanisms. The multi-objective optimization problem is modelled as a Stackelberg game. The more important objective function, tracking error, is considered as the leader, and the other objective function, deviation of the transmission angle from 90° (TA), is considered as the follower. In a new approach, a group method of data handling (GMDH)-type neural network is also utilized to construct an approximate model for the rational reaction set (RRS) of the follower. Using the proposed game-theoretic approach, the multi-objective optimal synthesis of a four-bar mechanism is then cast into a single-objective optimal synthesis using the leader variables and the obtained RRS of the follower. The superiority of using the synergy game-theoretic method of Stackelberg with a GMDH-type neural network is demonstrated for two case studies on the synthesis of four-bar mechanisms.
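    The reduction of the bilevel (Stackelberg) problem to a single-objective one can be sketched as follows: sample the follower's best response, fit a cheap surrogate to that rational reaction set (RRS), then optimize the leader objective along the surrogate. The toy objective functions and the polynomial surrogate below are assumptions for illustration only; the article uses a GMDH-type neural network and mechanism-synthesis objectives.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy Stackelberg setup: leader variable x, follower variable y.
leader_obj = lambda x, y: (x - 1.0) ** 2 + 0.5 * (y - 0.5) ** 2     # "tracking error" analogue
follower_obj = lambda x, y: (y - np.sin(x)) ** 2 + 0.1 * y ** 2      # "transmission angle" analogue

# 1) Sample the follower's best response (rational reaction set).
xs = np.linspace(-2.0, 2.0, 25)
ys = [minimize_scalar(lambda y, xi=xi: follower_obj(xi, y)).x for xi in xs]

# 2) Fit a cheap surrogate of the RRS (polynomial here, a neural network in the paper).
rrs = np.poly1d(np.polyfit(xs, ys, deg=5))

# 3) Single-objective problem in the leader variable only.
best = minimize_scalar(lambda x: leader_obj(x, rrs(x)), bounds=(-2.0, 2.0), method="bounded")
print("leader x*:", best.x, " follower y*(x*):", rrs(best.x))
```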

  1. Application of the BAR score as a predictor of short- and long-term survival in liver transplantation patients.

    PubMed

    de Campos Junior, Ivan Dias; Stucchi, Raquel Silveira Bello; Udo, Elisabete Yoko; Boin, Ilka de Fátima Santana Ferreira

    2015-01-01

    The balance of risk (BAR) score is a system for predicting outcome after liver transplantation. To assess the BAR system, a retrospective observational study was performed in 402 patients who had transplant surgery between 1997 and 2012. The BAR score was computed for each patient. Receiver operating characteristic curve analysis with the Hosmer-Lemeshow test was used to calculate sensitivity, specificity, and model calibration. The cutoff value with the best Youden index was selected. Statistical analysis employed the Kaplan-Meier method (log-rank test) for survival, the Mann-Whitney test for group comparison, and multiple logistic regression analysis. Three-month survival was 46% for BAR ≥ 11 and 77% for BAR < 11 (p = 0.001); 12-month survival was 44% for BAR ≥ 11 and 69% for BAR < 11 (p = 0.001). Factors associated with survival < 3 months were BAR ≥ 11 [odds ratio (OR) 3.08; 95% confidence interval (CI) 1.75-5.42; p = 0.001] and intrasurgical use of more than 6 units of packed red blood cells (RBC) (OR 4.49; 95% CI 2.73-7.39; p = 0.001). For survival < 12 months, the factors were BAR ≥ 11 (OR 2.94; 95% CI 1.67-5.16; p = 0.001) and RBC > 6 units (OR 2.99; 95% CI 1.92-4.64; p = 0.001). Our study supports the incorporation of the BAR system into Brazilian transplantation centers.
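    A hedged sketch of the cutoff-selection step, choosing the threshold that maximizes the Youden index on an ROC curve, is shown below with synthetic scores and outcomes; none of the numbers correspond to the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: BAR-like scores and 3-month mortality labels (1 = died).
score = np.concatenate([rng.normal(8, 3, 300), rng.normal(14, 4, 100)])
died = np.concatenate([np.zeros(300, dtype=int), np.ones(100, dtype=int)])

fpr, tpr, thresholds = roc_curve(died, score)
youden = tpr - fpr                       # Youden index J = sensitivity + specificity - 1
best = np.argmax(youden)

print("AUC:", round(roc_auc_score(died, score), 3))
print("best cutoff:", round(thresholds[best], 1),
      "sensitivity:", round(tpr[best], 2),
      "specificity:", round(1 - fpr[best], 2))
```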

  2. The Carnegie-Irvine Galaxy Survey. V. Statistical Study of Bars and Buckled Bars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhao-Yu; Ho, Luis C.; Barth, Aaron J., E-mail: lizy@shao.ac.cn

    Simulations have shown that bars are subject to a vertical buckling instability that transforms thin bars into boxy or peanut-shaped structures, but the physical conditions necessary for buckling to occur are not fully understood. We use the large sample of local disk galaxies in the Carnegie-Irvine Galaxy Survey to examine the incidence of bars and buckled bars across the Hubble sequence. Depending on the disk inclination angle (i), a buckled bar reveals itself as either a boxy/peanut-shaped bulge (at high i) or as a barlens structure (at low i). We visually identify bars, boxy/peanut-shaped bulges, and barlenses, and examine the dependence of bar and buckled bar fractions on host galaxy properties, including Hubble type, stellar mass, color, and gas mass fraction. We find that the barred and unbarred disks show similar distributions in these physical parameters. The bar fraction is higher (70%–80%) in late-type disks with low stellar mass (M* < 10^10.5 M⊙) and high gas mass ratio. In contrast, the buckled bar fraction increases to 80% toward massive and early-type disks (M* > 10^10.5 M⊙), and decreases with higher gas mass ratio. These results suggest that bars are more difficult to grow in massive disks that are dynamically hotter than low-mass disks. However, once a bar forms, it can easily buckle in the massive disks, where a deeper potential can sustain the vertical resonant orbits. We also find a probable buckling bar candidate (ESO 506−G004) that could provide further clues to understand the timescale of the buckling process.

  3. A comparison of two soldering techniques on the misfit of bar-retained implant-supported overdentures.

    PubMed

    Alvarez, Angel; Lafita, Pedro; de Llanos, Hector; Gago, Angel; Brizuela, Aritza; Ellacuria, Joseba J

    2014-02-01

    This study was conducted to measure and compare the effect of the soldering method (torch soldering or ceramic furnace soldering) used for soldering bars to bar-retained, implant-supported overdentures on the fit between the bar gold cylinder and the implant transgingival abutment. Thirty-two overdenture implant bars were manufactured and screw-retained onto two Brånemark implants, which were attached to a cow rib. The bars were randomly distributed into two groups: a torch-soldering group and a porcelain-furnace soldering group. All bars were then cut and soldered using either a torch or a ceramic furnace. The fit between the bar gold cylinders and implant transgingival abutments was measured with a light microscope on the side opposite to the screw-tightening side, before and after the bar soldering procedure. The data were analyzed with statistical tests for paired and independent samples. The average misfit for all bars before soldering was 33.83 to 54.04 μm. After cutting and soldering the bars, the misfit increased to a range of 71.74 to 78.79 μm. Both before and after the soldering procedure, the bars soldered using a torch showed a higher misfit than the bars soldered using a porcelain furnace. After the soldering procedure, the misfit was slightly lower on the left side of the bars that had been soldered using a ceramic furnace. According to our data, soldering bars using either the torch or the furnace soldering technique does not improve the misfit of one-piece cast bars on two implants. The lower misfit was obtained using the porcelain furnace soldering technique. © 2013 by the American College of Prosthodontists.
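    The statistical comparisons described here (paired pre/post tests within bars and independent tests between soldering methods) can be sketched as follows with synthetic misfit values; the numbers are placeholders, not the measured data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic misfit values in micrometres (placeholders, not the study's data).
torch_pre = rng.normal(50, 8, 16)
torch_post = torch_pre + rng.normal(28, 6, 16)      # misfit grows after soldering
furnace_pre = rng.normal(40, 8, 16)
furnace_post = furnace_pre + rng.normal(32, 6, 16)

# Paired test: the same bars measured before and after soldering.
t_paired, p_paired = stats.ttest_rel(torch_pre, torch_post)

# Independent test: torch-soldered vs furnace-soldered bars after soldering.
t_ind, p_ind = stats.ttest_ind(torch_post, furnace_post)

print(f"paired (pre vs post, torch): p = {p_paired:.3g}")
print(f"independent (torch vs furnace, post): p = {p_ind:.3g}")
```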

  4. Critical error fields for locked mode instability in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Haye, R.J.; Fitzpatrick, R.; Hender, T.C.

    1992-07-01

    Otherwise stable discharges can become nonlinearly unstable to disruptive locked modes when subjected to a resonant m=2, n=1 error field from irregular poloidal field coils, as in DIII-D (Nucl. Fusion 31, 875 (1991)), or from resonant magnetic perturbation coils as in COMPASS-C (Proceedings of the 18th European Conference on Controlled Fusion and Plasma Physics, Berlin (EPS, Petit-Lancy, Switzerland, 1991), Vol. 15C, Part II, p. 61). Experiments in Ohmically heated deuterium discharges with q ≈ 3.5, n̄ ≈ 2 × 10^19 m^−3 and B_T ≈ 1.2 T show that a much larger relative error field (B_r21/B_T ≈ 1 × 10^−3) is required to produce a locked mode in the small, rapidly rotating plasma of COMPASS-C (R_0 = 0.56 m, f ≈ 13 kHz) than in the medium-sized plasmas of DIII-D (R_0 = 1.67 m, f ≈ 1.6 kHz), where the critical relative error field is B_r21/B_T ≈ 2 × 10^−4. This dependence of the threshold for instability is explained by a nonlinear tearing theory of the interaction of resonant magnetic perturbations with rotating plasmas, which predicts that the critical error field scales as (fR_0/B_T)^(4/3) n̄^(2/3). Extrapolating from existing devices, the predicted critical field for locked modes in Ohmic discharges on the International Thermonuclear Experimental Reactor (ITER) (Nucl. Fusion 30, 1183 (1990)) (f = 0.17 kHz, R_0 = 6.0 m, B_T = 4.9 T, n̄ = 2 × 10^19 m^−3) is B_r21/B_T ≈ 2 × 10^−5.

  5. Comparison of vertical hydraulic conductivity in a streambed-point bar system of a gaining stream

    NASA Astrophysics Data System (ADS)

    Dong, Weihong; Chen, Xunhong; Wang, Zhaowei; Ou, Gengxin; Liu, Can

    2012-07-01

    Vertical hydraulic conductivities (Kv) of both streambeds and point bars can influence water and solute exchange between streams and surrounding groundwater systems. The sediments in point bars are relatively young compared to the older sediments in the adjacent aquifers but slightly older than submerged streambeds. Thus, the permeability of point bar sediments can differ not only from that of the regional aquifer but also from that of the modern streambed. However, there is a lack of detailed studies documenting the spatial variability of vertical hydraulic conductivity in point bars of meandering streams. In this study, we proposed an in situ permeameter test method to measure the vertical hydraulic conductivity of two point bars in Clear Creek, Nebraska, USA. We compared the Kv values in the streambed and the adjacent point bars through 45 test locations in the two point bars and 51 test locations in the streambed. The Kv values in the point bars were lower than those in the streambed. A Kruskal-Wallis test confirmed that the Kv values from the point bars and from the channel came from two statistically different populations. Within a point bar, the Kv values were higher along the point bar edges than in the inner point bar. Grain size analysis indicated that slightly more silt and clay particles existed in sediments from the inner point bars, compared to those from the streambed and from locations near the point bar edges. While point bars are the deposits of the adjacent channel, the comparison of the two groups of Kv values suggests that post-depositional processes had an effect on the evolution of Kv from channel to point bars in fluvial deposits. We believe that the transport of fine particles and gas ebullition in this gaining stream had significant effects on the distribution of Kv values in the streambed-point bar system. With the ageing of deposits in a floodplain, the permeability of point bar sediments is likely to decrease due to the reduced effects of upward flow and gas ebullition.
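    A minimal sketch of the Kruskal-Wallis comparison between the streambed and point-bar Kv populations, using synthetic values in place of the field measurements, might look like this.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Placeholder Kv values (m/day): the synthetic streambed population is made
# more permeable than the point-bar population, mirroring the study's finding.
kv_streambed = rng.lognormal(mean=3.0, sigma=0.4, size=51)
kv_point_bar = rng.lognormal(mean=2.4, sigma=0.5, size=45)

stat, p = stats.kruskal(kv_streambed, kv_point_bar)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3g}")
```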

  6. Monitoring Changes of Tropical Extreme Rainfall Events Using Differential Absorption Barometric Radar (DiBAR)

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Harrah, Steven; Lawrence, R. Wes; Hu, Yongxiang; Min, Qilong

    2015-01-01

    This work studies the potential of monitoring changes in tropical extreme rainfall events, such as tropical storms, from space using a Differential-absorption BArometric Radar (DiBAR) operating in the 50-55 GHz O2 absorption band to remotely measure sea surface air pressure. Air pressure is among the most important variables that affect atmospheric dynamics, and it can currently be measured only by limited in-situ observations over oceans. Analyses show that with the proposed radar the errors in instantaneous (averaged) pressure estimates can be as low as approximately 5 millibars (approximately 1 millibar) under all weather conditions. With these sea level pressure measurements, the forecasts, analyses and understanding of these extreme events on both short and long time scales can be improved. Severe weather events, especially hurricanes, are listed among the core areas that need improved observations and predictions in the WCRP (World Climate Research Program) and the NASA Decadal Survey (DS), and they have major impacts on public safety and national security through disaster mitigation. Since the development of the DiBAR concept about a decade ago, our team has made substantial progress in advancing the concept. Our feasibility assessment clearly shows the potential of sea surface barometry using existing radar technologies. We have developed a DiBAR system design, fabricated a Prototype-DiBAR (P-DiBAR) for proof-of-concept, and conducted lab, ground and airborne P-DiBAR tests. The flight test results are consistent with our instrumentation goals. Observational system simulation experiments for space DiBAR performance show substantial improvements in tropical storm predictions, not only for the hurricane track and position but also for the hurricane intensity. DiBAR measurements will lead us to an unprecedented level of prediction and knowledge of tropical extreme rainfall weather and climate conditions.

  7. Hubble Space Telescope secondary mirror vertex radius/conic constant test

    NASA Technical Reports Server (NTRS)

    Parks, Robert

    1991-01-01

    The Hubble Space Telescope backup secondary mirror was tested to determine the vertex radius and conic constant. Three completely independent tests (to the same procedure) were performed. Similar measurements in the three tests were highly consistent. The values obtained for the vertex radius and conic constant were the nominal design values within the error bars associated with the tests. Visual examination of the interferometric data did not show any measurable zonal figure error in the secondary mirror.

  8. Predicting vertical jump height from bar velocity.

    PubMed

    García-Ramos, Amador; Štirn, Igor; Padial, Paulino; Argüelles-Cienfuegos, Javier; De la Fuente, Blanca; Strojnik, Vojko; Feriche, Belén

    2015-06-01

    The objective of the study was to assess the use of maximum (Vmax) and final propulsive phase (FPV) bar velocity to predict jump height in the weighted jump squat. FPV was defined as the velocity reached just before bar acceleration fell below gravity (-9.81 m·s⁻²). Vertical jump height was calculated from the take-off velocity (Vtake-off) provided by a force platform. Thirty swimmers belonging to the National Slovenian swimming team performed a jump squat incremental loading test, lifting 25%, 50%, 75% and 100% of body weight in a Smith machine. Jump performance was simultaneously monitored using an AMTI portable force platform and a linear velocity transducer attached to the barbell. Simple linear regression was used to estimate jump height from the Vmax and FPV recorded by the linear velocity transducer. Vmax (y = 16.577x - 16.384) was able to explain 93% of jump height variance with a standard error of the estimate of 1.47 cm. FPV (y = 12.828x - 6.504) was able to explain 91% of jump height variance with a standard error of the estimate of 1.66 cm. Although both variables proved to be good predictors, heteroscedasticity in the differences between FPV and Vtake-off was observed (r² = 0.307), while the differences between Vmax and Vtake-off were homogeneously distributed (r² = 0.071). These results suggest that Vmax is a valid tool for estimating vertical jump height in a loaded jump squat test performed in a Smith machine. Key points: (1) Vertical jump height in the loaded jump squat can be estimated with acceptable precision from the maximum bar velocity recorded by a linear velocity transducer. (2) The relationship between the point at which bar acceleration is less than -9.81 m·s⁻² and the real take-off is affected by the velocity of movement. (3) Mean propulsive velocity recorded by a linear velocity transducer does not appear to be optimal for monitoring ballistic exercise performance.
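    A short sketch of how such a simple linear regression is obtained and applied is given below; the velocity/height pairs are invented for illustration, and only the final line uses the Vmax equation reported in the abstract.

```python
import numpy as np

# Hypothetical paired observations: maximum bar velocity (m/s) and jump
# height (cm) from a force platform. Values are illustrative only.
v_max = np.array([1.10, 1.25, 1.40, 1.55, 1.70, 1.85, 2.00])
height = np.array([2.1, 4.4, 6.8, 9.5, 11.8, 14.6, 16.9])

slope, intercept = np.polyfit(v_max, height, deg=1)
r2 = np.corrcoef(v_max, height)[0, 1] ** 2
print(f"height ≈ {slope:.3f} * Vmax + {intercept:.3f}  (R^2 = {r2:.2f})")

# Applying the published Vmax equation (y = 16.577x - 16.384) to a new velocity:
new_v = 1.62
print("predicted jump height (cm):", 16.577 * new_v - 16.384)
```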

  9. Predicting Vertical Jump Height from Bar Velocity

    PubMed Central

    García-Ramos, Amador; Štirn, Igor; Padial, Paulino; Argüelles-Cienfuegos, Javier; De la Fuente, Blanca; Strojnik, Vojko; Feriche, Belén

    2015-01-01

    The objective of the study was to assess the use of maximum (Vmax) and final propulsive phase (FPV) bar velocity to predict jump height in the weighted jump squat. FPV was defined as the velocity reached just before bar acceleration fell below gravity (-9.81 m·s⁻²). Vertical jump height was calculated from the take-off velocity (Vtake-off) provided by a force platform. Thirty swimmers belonging to the National Slovenian swimming team performed a jump squat incremental loading test, lifting 25%, 50%, 75% and 100% of body weight in a Smith machine. Jump performance was simultaneously monitored using an AMTI portable force platform and a linear velocity transducer attached to the barbell. Simple linear regression was used to estimate jump height from the Vmax and FPV recorded by the linear velocity transducer. Vmax (y = 16.577x - 16.384) was able to explain 93% of jump height variance with a standard error of the estimate of 1.47 cm. FPV (y = 12.828x - 6.504) was able to explain 91% of jump height variance with a standard error of the estimate of 1.66 cm. Although both variables proved to be good predictors, heteroscedasticity in the differences between FPV and Vtake-off was observed (r² = 0.307), while the differences between Vmax and Vtake-off were homogeneously distributed (r² = 0.071). These results suggest that Vmax is a valid tool for estimating vertical jump height in a loaded jump squat test performed in a Smith machine. Key points: (1) Vertical jump height in the loaded jump squat can be estimated with acceptable precision from the maximum bar velocity recorded by a linear velocity transducer. (2) The relationship between the point at which bar acceleration is less than -9.81 m·s⁻² and the real take-off is affected by the velocity of movement. (3) Mean propulsive velocity recorded by a linear velocity transducer does not appear to be optimal for monitoring ballistic exercise performance. PMID:25983572

  10. Edge profile analysis of Joint European Torus (JET) Thomson scattering data: Quantifying the systematic error due to edge localised mode synchronisation.

    PubMed

    Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R

    2016-01-01

    The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
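    A hedged sketch of a modified-tanh (mtanh) pedestal fit follows. The parameterization shown is one common form, and the synthetic profile is only a stand-in for ELM-synchronised HRTS data; the exact functional form and fitting machinery of the actual JET pedestal tool may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def mtanh_profile(r, height, width, r_ped, offset, slope):
    """Common modified-tanh pedestal shape: a tanh step of given height and
    width centred at r_ped, with a linear 'core slope' inside the pedestal."""
    x = np.clip((r_ped - r) / (width / 2.0), -50.0, 50.0)   # clip to avoid overflow
    mtanh = ((1.0 + slope * x) * np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
    return offset + 0.5 * height * (mtanh + 1.0)

# Synthetic 'ELM-synchronised' profile with noise standing in for HRTS scatter.
r = np.linspace(3.75, 3.90, 120)
true_params = (1.0, 0.03, 3.86, 0.05, 0.08)
ne = mtanh_profile(r, *true_params) + 0.03 * np.random.default_rng(4).normal(size=r.size)

popt, pcov = curve_fit(mtanh_profile, r, ne, p0=(0.8, 0.02, 3.85, 0.0, 0.0))
print("fitted pedestal width:", popt[1], "+/-", np.sqrt(pcov[1, 1]))
```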

  11. Precision matrix expansion - efficient use of numerical simulations in estimating errors on cosmological parameters

    NASA Astrophysics Data System (ADS)

    Friedrich, Oliver; Eifler, Tim

    2018-01-01

    Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >105 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case finding a similar performance.
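    The core idea, expanding the precision matrix (A + B)^-1 in powers of A^-1 B when A is known analytically and B is small and estimated separately, can be sketched numerically as follows; the matrices are toy examples, not survey covariances.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6

# A: analytic, well-understood part (e.g. shape noise); B: small simulation-estimated part.
A = np.diag(rng.uniform(1.0, 2.0, n))
M = 0.15 * rng.normal(size=(n, n))
B = M @ M.T                                   # symmetric positive semi-definite perturbation

Ainv = np.linalg.inv(A)

# Precision-matrix expansion: (A+B)^-1 = A^-1 - A^-1 B A^-1 + A^-1 B A^-1 B A^-1 - ...
first_order = Ainv - Ainv @ B @ Ainv
second_order = first_order + Ainv @ B @ Ainv @ B @ Ainv

exact = np.linalg.inv(A + B)
for name, approx in [("1st order", first_order), ("2nd order", second_order)]:
    err = np.max(np.abs(approx - exact)) / np.max(np.abs(exact))
    print(f"{name}: max relative deviation = {err:.2e}")
```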

  12. Analysis of the charmed semileptonic decay D+ → ρ0 μ+ ν

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luiggi, Eduardo E.

    2008-12-01

    The search for the fundamental constituents of matter has been pursued and studied since the dawn of civilization. As early as the fourth century BCE, Democritus, expanding the teachings of Leucippus, proposed small, indivisible entities called atoms, interacting with each other to form the Universe. Democritus was convinced of this by observing the environment around him. He observed, for example, how a collection of tiny grains of sand can make up smooth beaches. Today, following the lead set by Democritus more than 2500 years ago, at the heart of particle physics is the hypothesis that everything we can observe in the Universe is made of a small number of fundamental particles interacting with each other. In contrast to Democritus, for the last hundred years we have been able to perform experiments that probe deeper and deeper into matter in the search for the fundamental particles of nature. Today's knowledge is encapsulated in the Standard Model of particle physics, a model describing the fundamental particles and their interactions. It is within this model that the work in this thesis is presented. This work attempts to add to the understanding of the Standard Model by measuring the relative branching fraction of the charmed semileptonic decay D+ → ρ0 μ+ ν with respect to D+ → K̄*0 μ+ ν. Many theoretical models that describe hadronic interactions predict the value of this relative branching fraction, but only a handful of experiments have been able to measure it with any precision. By making a precise measurement of this relative branching fraction, theorists can distinguish between viable models as well as refine existing ones. In this thesis we presented the measurement of the branching fraction ratio of the Cabibbo-suppressed semileptonic decay mode D+ → ρ0 μ+ ν with respect to the Cabibbo-favored mode D+ → K̄*0 μ+ ν using data collected by the FOCUS collaboration. We used a binned maximum log-likelihood fit that included all known semileptonic backgrounds as well as combinatorial and muon-misidentification backgrounds to extract the yields for both the signal and normalization modes. We reconstructed 320 ± 44 D+ → ρ0 μ+ ν events and 11372 ± 161 D+ → K− π+ μ+ ν events. Taking into account the non-resonant contribution to the D+ → K− π+ μ+ ν yield due to an s-wave interference first measured by FOCUS, the branching fraction ratio is Γ(D+ → ρ0 μ+ ν)/Γ(D+ → K̄*0 μ+ ν) = 0.0412 ± 0.0057 ± 0.0040, where the first error is statistical and the second error is the systematic uncertainty. This represents a substantial improvement over the previous world average. More importantly, the new world average for Γ(D+ → ρ0 μ+ ν)/Γ(D+ → K̄*0 μ+ ν), along with the improved measurements in the electronic mode, can be used to discriminate among different theoretical approaches that aim to understand the hadronic current involved in the charm to light quark decay process. The average of the electronic and muonic modes indicates that predictions for the partial decay width Γ(D+ → ρ0 ℓ+ ν) and the ratio Γ(D+ → ρ0 ℓ+ ν)/Γ(D+ → K̄*0 ℓ+ ν) based on Sum Rules are too low. Using the same data used to extract Γ(D+ → ρ0 μ+ ν)/Γ(D+ → K̄*0 μ+ ν), we studied the feasibility of measuring the form factors for the D+ → ρ0 μ+ ν decay. We found that the need to further reduce the combinatorial and muon misidentification backgrounds left us with a much smaller sample of 52 ± 12 D+ → ρ0 μ+ ν events, not enough to make a statistically significant measurement of the form factors.

  13. Six1-Eya-Dach Network in Breast Cancer

    DTIC Science & Technology

    2009-05-01

    … Ctrl scramble controls. Responsiveness was tested using luciferase activity of the 3TP reporter construct and normalized to renilla luciferase activity. Data points show the mean of two individual clones from two experiments and error bars represent …

  14. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    Radar error statistics for the C-band and S-band radars recommended for use with the ground-tracking programs that process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high-frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High-frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High-frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line-of-sight scintillations were identified.

  15. Effect of Bar-code Technology on the Incidence of Medication Dispensing Errors and Potential Adverse Drug Events in a Hospital Pharmacy

    PubMed Central

    Poon, Eric G; Cina, Jennifer L; Churchill, William W; Mitton, Patricia; McCrea, Michelle L; Featherstone, Erica; Keohane, Carol A; Rothschild, Jeffrey M; Bates, David W; Gandhi, Tejal K

    2005-01-01

    We performed a direct observation pre-post study to evaluate the impact of barcode technology on medication dispensing errors and potential adverse drug events in the pharmacy of a tertiary-academic medical center. We found that barcode technology significantly reduced the rate of target dispensing errors leaving the pharmacy by 85%, from 0.37% to 0.06%. The rate of potential adverse drug events (ADEs) due to dispensing errors was also significantly reduced by 63%, from 0.19% to 0.069%. In a 735-bed hospital where 6 million doses of medications are dispensed per year, this technology is expected to prevent about 13,000 dispensing errors and 6,000 potential ADEs per year. PMID:16779372

  16. Effect of marital status on death rates. Part 1: High accuracy exploration of the Farr-Bertillon effect

    NASA Astrophysics Data System (ADS)

    Richmond, Peter; Roehner, Bertrand M.

    2016-05-01

    The Farr-Bertillon law says that for all age groups the death rate of married people is lower than the death rate of people who are not married (i.e. single, widowed or divorced). Although this law has been known for over 150 years, it has never been established with well-controlled accuracy (e.g. error bars). This has even led some authors to argue that it was a statistical artifact. It is true that the data must be selected with great care, especially for age groups of small size (e.g. widowers under 25). The observations reported in this paper were selected in the way experiments are designed in physics, that is to say with the objective of minimizing error bars. Data appropriate for mid-age groups may be unsuitable for young age groups and vice versa. The investigation led to the following results. (1) The FB effect is very similar for men and women, except that (at least in western countries) its amplitude is 20% higher for men. (2) There is a marked difference between single/divorced persons on the one hand, for whom the effect is largest around the age of 40, and widowed persons on the other hand, for whom the effect is largest around the age of 25. (3) When different causes of death are distinguished, the effect is largest for suicide and smallest for cancer. For heart disease and cerebrovascular accidents, the fact of being married divides the death rate by 2.2 compared to non-married persons. (4) For young widowers the death rates are up to 10 times higher than for married persons of the same age. This extreme form of the FB effect will be referred to as the "young widower effect". Chinese data are used to explore this effect more closely. A possible connection between the FB effect and Martin Raff's "stay alive" effect for the cells in an organism is discussed in the last section.

  17. THE NATURE AND NURTURE OF BARS AND DISKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendez-Abreu, J.; Aguerri, J. A. L.; Zarattini, S.

    The effects that interactions produce on galaxy disks and how they modify the subsequent formation of bars need to be distinguished to fully understand the relationship between bars and environment. To this aim we derive the bar fraction in three different environments ranging from the field to the Virgo and Coma Clusters, covering an unprecedentedly large range of galaxy luminosities (or, equivalently, stellar masses). We confirm that the fraction of barred galaxies strongly depends on galaxy luminosity. We also show that the difference between the bar fraction distributions as a function of galaxy luminosity (and mass) in the field and the Coma Cluster is statistically significant, with Virgo being an intermediate case. The fraction of barred galaxies shows a maximum of about 50% at M_r ≈ −20.5 in clusters, whereas the peak is shifted to M_r ≈ −19 in the field. We interpret this result as a variation of the effect of environment on bar formation depending on galaxy luminosity. We speculate that brighter disk galaxies are stable enough against interactions to keep their cold structure, and thus the interactions are able to trigger bar formation. For fainter galaxies, the interactions become strong enough to heat up the disks, inhibiting bar formation and even destroying the disks. Finally, we point out that the controversy regarding whether the bar fraction depends on environment could be resolved by taking into account the different luminosity ranges probed by the galaxy samples studied so far.

  18. Visual short-term memory deficits associated with GBA mutation and Parkinson's disease.

    PubMed

    Zokaei, Nahid; McNeill, Alisdair; Proukakis, Christos; Beavan, Michelle; Jarman, Paul; Korlipara, Prasad; Hughes, Derralynn; Mehta, Atul; Hu, Michele T M; Schapira, Anthony H V; Husain, Masud

    2014-08-01

    Individuals with mutation in the lysosomal enzyme glucocerebrosidase (GBA) gene are at significantly high risk of developing Parkinson's disease with cognitive deficit. We examined whether visual short-term memory impairments, long associated with patients with Parkinson's disease, are also present in GBA-positive individuals, both with and without Parkinson's disease. Precision of visual working memory was measured using a serial order task in which participants observed four bars, each of a different colour and orientation, presented sequentially at screen centre. Afterwards, they were asked to adjust a coloured probe bar's orientation to match the orientation of the bar of the same colour in the sequence. An additional attentional 'filtering' condition tested patients' ability to selectively encode one of the four bars while ignoring the others. A sensorimotor task using the same stimuli controlled for perceptual and motor factors. There was a significant deficit in memory precision in GBA-positive individuals (with or without Parkinson's disease), as well as in GBA-negative patients with Parkinson's disease, compared to healthy controls. The worst recall was observed in GBA-positive cases with Parkinson's disease. Although all groups were impaired in visual short-term memory, there was a double dissociation between the sources of error associated with GBA mutation and with Parkinson's disease. The deficit observed in GBA-positive individuals, regardless of whether they had Parkinson's disease, was explained by a systematic increase in interference from features of other items in memory: misbinding errors. In contrast, impairment in patients with Parkinson's disease, regardless of GBA status, was explained by increased random responses. Individuals who were GBA-positive and also had Parkinson's disease suffered from both types of error, demonstrating the worst performance. These findings provide evidence for dissociable signature deficits within the domain of visual short-term memory associated with GBA mutation and with Parkinson's disease. Identification of the specific pattern of cognitive impairment in GBA mutation versus Parkinson's disease is potentially important, as it might help to identify individuals at risk of developing Parkinson's disease. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain.

  19. Measurement of σ(Λb0)/σ(B̄0) × B(Λb0 → Λc+ π−)/B(B̄0 → D+ π−) in pp̄ collisions at √s = 1.96 TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abulencia, A.; Acosta, D.; Adelman, Jahred A.

    2006-01-01

    The authors present the first observation of the baryon decay Λb0 → Λc+ π−, followed by Λc+ → pK− π+, in 106 pb−1 of pp̄ collisions at √s = 1.96 TeV in the CDF experiment. In order to reduce the systematic error, the measured rate for the Λb0 decay is normalized to the kinematically similar meson decay B̄0 → D+ π−, followed by D+ → π+ K− π+. They report the ratio of production cross sections (σ) times the ratio of branching fractions (B) for the momentum region integrated above pT > 6 GeV/c and pseudorapidity range |η| < 1.3: σ(pp̄ → Λb0 X)/σ(pp̄ → B̄0 X) × B(Λb0 → Λc+ π−)/B(B̄0 → D+ π−) = 0.82 ± 0.08(stat) ± 0.11(syst) ± 0.22 (B(Λc+ → pK− π+)).

  20. Partial entrainment of gravel bars during floods

    USGS Publications Warehouse

    Konrad, Christopher P.; Booth, Derek B.; Burges, Stephen J.; Montgomery, David R.

    2002-01-01

    Spatial patterns of bed material entrainment by floods were documented at seven gravel bars using arrays of metal washers (bed tags) placed in the streambed. The observed patterns were used to test a general stochastic model that bed material entrainment is a spatially independent, random process where the probability of entrainment is uniform over a gravel bar and a function of the peak dimensionless shear stress τ0* of the flood. The fraction of tags missing from a gravel bar during a flood, or partial entrainment, had an approximately normal distribution with respect to τ0* with a mean value (50% of the tags entrained) of 0.085 and standard deviation of 0.022 (root‐mean‐square error of 0.09). Variation in partial entrainment for a given τ0* demonstrated the effects of flow conditioning on bed strength, with lower values of partial entrainment after intermediate magnitude floods (0.065 < τ0*< 0.08) than after higher magnitude floods. Although the probability of bed material entrainment was approximately uniform over a gravel bar during individual floods and independent from flood to flood, regions of preferential stability and instability emerged at some bars over the course of a wet season. Deviations from spatially uniform and independent bed material entrainment were most pronounced for reaches with varied flow and in consecutive floods with small to intermediate magnitudes.
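    Because partial entrainment is reported as approximately normal in τ0* (mean 0.085, standard deviation 0.022), the expected fraction of tags entrained at a given peak dimensionless shear stress can be read off the corresponding normal CDF, as in this small sketch; the example shear-stress values are arbitrary.

```python
from scipy.stats import norm

# Fraction of bed tags entrained as a function of peak dimensionless shear
# stress, using the reported mean (0.085) and standard deviation (0.022).
entrainment = norm(loc=0.085, scale=0.022)

for tau_star in (0.06, 0.085, 0.11):
    print(f"tau* = {tau_star:.3f} -> predicted partial entrainment "
          f"= {entrainment.cdf(tau_star):.0%}")
```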

  1. Analysis of Statistical Methods and Errors in the Articles Published in the Korean Journal of Pain

    PubMed Central

    Yim, Kyoung Hoon; Han, Kyoung Ah; Park, Soo Young

    2010-01-01

    Background: Statistical analysis is essential for obtaining objective reliability in medical research. However, medical researchers do not have enough statistical knowledge to properly analyze their study data. To help understand and potentially alleviate this problem, we have analyzed the statistical methods and errors of articles published in the Korean Journal of Pain (KJP), with the intention of improving the statistical quality of the journal. Methods: All the articles, except case reports and editorials, published from 2004 to 2008 in the KJP were reviewed. The types of applied statistical methods and errors in the articles were evaluated. Results: One hundred and thirty-nine original articles were reviewed. Inferential statistics and descriptive statistics were used in 119 papers and 20 papers, respectively. Only 20.9% of the papers were free from statistical errors. The most commonly adopted statistical method was the t-test (21.0%) followed by the chi-square test (15.9%). Errors of omission were encountered 101 times in 70 papers. Among the errors of omission, "no statistics used even though statistical methods were required" was the most common (40.6%). Errors of commission were encountered 165 times in 86 papers, among which "parametric inference for nonparametric data" was the most common (33.9%). Conclusions: We found various types of statistical errors in the articles published in the KJP. This suggests that meticulous attention should be given not only to applying statistical procedures but also to the review process, in order to improve the value of the article. PMID:20552071

  2. Driving out errors through tight integration between software and automation.

    PubMed

    Reifsteck, Mark; Swanson, Thomas; Dallas, Mary

    2006-01-01

    A clear case has been made for using clinical IT to improve medication safety, particularly bar-code point-of-care medication administration and computerized practitioner order entry (CPOE) with clinical decision support. The equally important role of automation has been overlooked. When the two are tightly integrated, with pharmacy information serving as a hub, the distinctions between software and automation become blurred. A true end-to-end medication management system drives out errors from the dockside to the bedside. Presbyterian Healthcare Services in Albuquerque has been building such a system since 1999, beginning by automating pharmacy operations to support bar-coded medication administration. Encouraged by those results, it then began layering on software to further support clinician workflow and improve communication, culminating with the deployment of CPOE and clinical decision support. This combination, plus a hard-wired culture of safety, has resulted in a dramatically lower mortality and harm rate that could not have been achieved with a partial solution.

  3. Machine learning models for lipophilicity and their domain of applicability.

    PubMed

    Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Laak, Antonius Ter; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-01-01

    Unfavorable lipophilicity and water solubility cause many drug failures; therefore these properties have to be taken into account early on in lead discovery. Commercial tools for predicting lipophilicity have usually been trained on small and neutral molecules, and are thus often unable to accurately predict in-house data. Using a modern Bayesian machine learning algorithm, a Gaussian process model, this study constructs a log D7 model based on 14,556 drug discovery compounds of Bayer Schering Pharma. Performance is compared with support vector machines, decision trees, ridge regression, and four commercial tools. In a blind test on 7013 new measurements from the last months (including compounds from new projects), 81% were predicted correctly within 1 log unit, compared to only 44% achieved by the commercial software. Additional evaluations using public data are presented. We consider error bars for each method (model-based, ensemble-based, and distance-based approaches), and investigate how well they quantify the domain of applicability of each model.
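    A minimal sketch of a Gaussian process regressor that returns predictive standard deviations, the kind of model-based error bars discussed here, is shown below using scikit-learn and synthetic one-dimensional data; the kernel choice and data are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)

# Synthetic stand-in: one molecular descriptor x and a logD-like response.
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.3 * X[:, 0] + 0.1 * rng.normal(size=40)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(0.01),
                              normalize_y=True)
gp.fit(X, y)

# Predictive mean and standard deviation: the std is a model-based "error bar"
# that grows outside the training domain (a rough proxy for domain of applicability).
X_test = np.array([[0.0], [2.5], [6.0]])
mean, std = gp.predict(X_test, return_std=True)
for x, m, s in zip(X_test[:, 0], mean, std):
    print(f"x = {x:4.1f}:  prediction = {m:5.2f} +/- {s:4.2f}")
```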

  4. Active Sensing Air Pressure Using Differential Absorption Barometric Radar

    NASA Astrophysics Data System (ADS)

    Lin, B.

    2016-12-01

    Tropical storms and other severe weather events cause enormous loss of life and property damage and have major impacts on public safety and national security. Observations and predictions of these events need to be significantly improved. This effort develops a feasible active microwave approach that measures surface air pressure, especially over open seas, from space using a Differential-absorption BArometric Radar (DiBAR) operating in the 50-55 GHz O2 absorption band, in order to constrain assimilated dynamic fields of numerical weather prediction (NWP) models close to actual conditions. Air pressure is the most important variable driving atmospheric dynamics, and currently it can only be measured by limited in-situ observations over oceans. Even over land there is no uniform coverage of surface air pressure measurements. Analyses show that with the proposed space radar the errors in instantaneous (averaged) pressure estimates can be as low as 4 mb (1 mb) under all weather conditions. The NASA Langley research team has made substantial progress in advancing the DiBAR concept. The feasibility assessment clearly shows the potential of surface barometry using existing radar technologies. The team has also developed a DiBAR system design, fabricated a Prototype-DiBAR (P-DiBAR) for proof of concept, and conducted laboratory, ground, and airborne P-DiBAR tests. The flight test results are consistent with the instrumentation goals. The precision and accuracy of radar surface pressure measurements are within the range of the theoretical analysis of the DiBAR concept. Observational system simulation experiments for space DiBAR performance, based on existing DiBAR technology and capability, show substantial improvements in tropical storm predictions, not only for hurricane track and position but also for hurricane intensity. DiBAR measurements would provide an unprecedented level of prediction skill and knowledge of global extreme weather and climate conditions.

  5. Putting Meaning Back Into the Mean: A Comment on the Misuse of Elementary Statistics in a Sample of Manuscripts Submitted to Clinical Therapeutics.

    PubMed

    Forrester, Janet E

    2015-12-01

    Errors in the statistical presentation and analyses of data in the medical literature remain common despite efforts to improve the review process, including the creation of guidelines for authors and the use of statistical reviewers. This article discusses common elementary statistical errors seen in manuscripts recently submitted to Clinical Therapeutics and describes some ways in which authors and reviewers can identify errors and thus correct them before publication. A nonsystematic sample of manuscripts submitted to Clinical Therapeutics over the past year was examined for elementary statistical errors. Clinical Therapeutics has many of the same errors that reportedly exist in other journals. Authors require additional guidance to avoid elementary statistical errors and incentives to use the guidance. Implementation of reporting guidelines for authors and reviewers by journals such as Clinical Therapeutics may be a good approach to reduce the rate of statistical errors. Copyright © 2015 Elsevier HS Journals, Inc. All rights reserved.

  6. Uncertainties in the deprojection of the observed bar properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Yanfei; Shen, Juntai; Li, Zhao-Yu, E-mail: jshen@shao.ac.cn

    2014-08-10

    In observations, it is important to deproject the two fundamental quantities characterizing a bar, i.e., its length (a) and ellipticity (e), to face-on values before any careful analyses. However, systematic estimation of the uncertainties of the commonly used deprojection methods is still lacking. Simulated galaxies are well suited for this study. We project two simulated barred galaxies onto a two-dimensional (2D) plane with different bar orientations and disk inclination angles (i). Bar properties are measured and deprojected with the popular deprojection methods in the literature. Generally speaking, deprojection uncertainties increase with increasing i. All of the deprojection methods behave badly when i is larger than 60°, due to the vertical thickness of the bar. Thus, future statistical studies of barred galaxies should exclude galaxies more inclined than 60°. At moderate inclination angles (i ≤ 60°), 2D deprojection methods (analytical and image stretching) and Fourier-based methods (Fourier decomposition and bar-interbar contrast) perform reasonably well with uncertainties ∼10% in both the bar length and ellipticity, whereas the uncertainties of the one-dimensional (1D) analytical deprojection can be as high as 100% in certain extreme cases. We find that different bar measurement methods show systematic differences in the deprojection uncertainties. We further discuss the deprojection uncertainty factors with the emphasis on the most important one, i.e., the three-dimensional structure of the bar itself. We construct two triaxial toy bar models that can qualitatively reproduce the results of the 1D and 2D analytical deprojections; they confirm that the vertical thickness of the bar is the main source of uncertainties.

  7. Effects of pressure on aqueous chemical equilibria at subzero temperatures with applications to Europa

    USGS Publications Warehouse

    Marion, G.M.; Kargel, J.S.; Catling, D.C.; Jakubowski, S.D.

    2005-01-01

    Pressure plays a critical role in controlling aqueous geochemical processes in deep oceans and deep ice. The putative ocean of Europa could have pressures of 1200 bars or higher on the seafloor, a pressure not dissimilar to the deepest ocean basin on Earth (the Mariana Trench at 1100 bars of pressure). At such high pressures, chemical thermodynamic relations need to explicitly consider pressure. A number of papers have addressed the role of pressure on equilibrium constants, activity coefficients, and the activity of water. None of these models deal, however, with processes at subzero temperatures, which may be important in cold environments on Earth and other planetary bodies. The objectives of this work were to (1) incorporate a pressure dependence into an existing geochemical model parameterized for subzero temperatures (FREZCHEM), (2) validate the model, and (3) simulate pressure-dependent processes on Europa. As part of objective 1, we examined two models for quantifying the volumetric properties of liquid water at subzero temperatures: one model is based on the measured properties of supercooled water, and the other model is based on the properties of liquid water in equilibrium with ice. The relative effect of pressure on solution properties falls in the order: equilibrium constants (K) > activity coefficients (γ) > activity of water (aw). The errors (%) in our model associated with these properties, however, fall in the order: γ > K > aw. The transposition between K and γ is due to a more accurate model for estimating K than for estimating γ. Only activity coefficients are likely to be significantly in error. However, even in this case, the errors are likely to be only in the range of 2 to 5% up to 1000 bars of pressure. Evidence based on the pressure/temperature melting of ice and salt solution densities argues in favor of the equilibrium water model, which depends on extrapolations, for characterizing the properties of liquid water in electrolyte solutions at subzero temperatures, rather than the supercooled water model. Model-derived estimates of mixed salt solution densities and chemical equilibria as a function of pressure are in reasonably good agreement with experimental measurements. To demonstrate the usefulness of this low-temperature, high-pressure model, we examined two hypothetical cases for Europa. Case 1 dealt with the ice cover of Europa, where we asked the question: How far above the putative ocean in the ice layer could we expect to find thermodynamically stable brine pockets that could serve as habitats for life? For a hypothetical nonconvecting 20 km icy shell, this potential life zone only extends 2.8 km into the icy shell before the eutectic is reached. For the case of a nonconvecting icy shell, the cold surface of Europa precludes stable aqueous phases (habitats for life) anywhere near the surface. Case 2 compared chemical equilibria at 1 bar (based on previous work) with a more realistic 1460 bars of pressure at the base of a 100 km Europan ocean. A pressure of 1460 bars, compared to 1 bar, caused a 12 K decrease in the temperature at which ice first formed and an 11 K increase in the temperature at which MgSO4·12H2O first formed. Remarkably, there was only a 1.2 K decrease in the eutectic temperatures between 1 and 1460 bars of pressure. Chemical systems and their response to pressure depend, ultimately, on the volumetric properties of individual constituents, which makes every system response highly individualistic. Copyright © 2005 Elsevier Ltd.

  8. Structural parameters of young star clusters: fractal analysis

    NASA Astrophysics Data System (ADS)

    Hetem, A.

    2017-07-01

    A unified view of star formation in the Universe demands detailed and in-depth studies of young star clusters. This work is related to our previous study of fractal statistics estimated for a sample of young stellar clusters (Gregorio-Hetem et al. 2015, MNRAS 448, 2504). The structural properties can lead to significant conclusions about the early stages of cluster formation: 1) virial conditions can be used to distinguish warm collapsed; 2) bound or unbound behaviour can lead to conclusions about expansion; and 3) fractal statistics are correlated with the dynamical evolution and age. The error-bar estimation technique most commonly used in the literature is to adopt inferential methods (such as the bootstrap) to estimate deviation and variance, which are valid only for an artificially generated cluster. In this paper, we expanded the number of studied clusters in order to enhance the investigation of the cluster properties and dynamic evolution. The structural parameters were compared with fractal statistics and reveal that the clusters' radial density profiles show a tendency for the mean separation of the stars to increase with the average surface density. The sample can be divided into two groups showing different dynamic behaviour, but they have the same dynamic evolution, since the entire sample was revealed as being expanding objects, for which the substructures do not seem to have been completely erased. These results are in agreement with the simulations adopting low surface densities and supervirial conditions.
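
    For context, the bootstrap error bars referred to above can be written in a few lines. The sketch below is a generic illustration on synthetic 2-D positions (not the cluster sample of the paper), attaching a bootstrap standard error to one structural statistic, the mean nearest-neighbour separation.

      # Generic nonparametric bootstrap error bar for a structural statistic (here the
      # mean nearest-neighbour separation); the 2-D positions are synthetic, not real data.
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(1)
      positions = rng.uniform(0, 10, size=(200, 2))   # synthetic projected positions

      def mean_nn_separation(points):
          dist, _ = cKDTree(points).query(points, k=2)  # k=2: nearest neighbour besides itself
          return dist[:, 1].mean()

      estimate = mean_nn_separation(positions)
      boot = np.array([
          mean_nn_separation(positions[rng.integers(0, len(positions), len(positions))])
          for _ in range(1000)
      ])
      print(f"mean separation = {estimate:.3f} +/- {boot.std(ddof=1):.3f} (bootstrap)")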

  9. Study of the Rare Hyperon Decay $$\Omega^\mp \to \Xi^\mp \: \pi^+\pi^-$$

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamaev, O.; Solomey, N.; Burnstein, R.A.

    The authors report a new measurement of the decay $\Omega^- \to \Xi^- \pi^+\pi^-$ with 76 events and a first observation of the decay $\bar{\Omega}^+ \to \bar{\Xi}^+ \pi^+\pi^-$ with 24 events, yielding a combined branching ratio of $(3.74^{+0.67}_{-0.56}) \times 10^{-4}$. This represents a factor of 25 increase in statistics over the best previous measurement. No evidence is seen for CP violation, with $B(\Omega^- \to \Xi^- \pi^+\pi^-) = (4.04^{+0.83}_{-0.71}) \times 10^{-4}$ and $B(\bar{\Omega}^+ \to \bar{\Xi}^+ \pi^+\pi^-) = (3.15^{+1.12}_{-0.89}) \times 10^{-4}$. Contrary to theoretical expectation, they see little evidence for the decays $\Omega^- \to \Xi^{*0}(1530) \pi^-$ and $\bar{\Omega}^+ \to \bar{\Xi}^{*0}(1530) \pi^+$ and place a 90% C.L. upper limit on the combined branching ratio $B(\Omega^- (\bar{\Omega}^+) \to \Xi^{*0}(1530) (\bar{\Xi}^{*0}(1530)) \pi^\mp) < 7.0 \times 10^{-5}$.

  10. The economic impact of a smoke-free bylaw on restaurant and bar sales in Ottawa, Canada.

    PubMed

    Luk, Rita; Ferrence, Roberta; Gmel, Gerhard

    2006-05-01

    On 1 August 2001, the City of Ottawa (Canada's Capital) implemented a smoke-free bylaw that completely prohibited smoking in work-places and public places, including restaurants and bars, with no exemption for separately ventilated smoking rooms. This paper evaluates the effects of this bylaw on restaurant and bar sales. DATA AND MEASURES: We used retail sales tax data from March 1998 to June 2002 to construct two outcome measures: the ratio of licensed restaurant and bar sales to total retail sales and the ratio of unlicensed restaurant sales to total retail sales. Restaurant and bar sales were subtracted from total retail sales in the denominator of these measures. We employed an interrupted time-series design. Autoregressive integrated moving average (ARIMA) intervention analysis was used to test for three possible impacts that the bylaw might have on the sales of restaurants and bars. We repeated the analysis using regression with autoregressive moving average (ARMA) errors method to triangulate our results. Outcome measures showed declining trends at baseline before the bylaw went into effect. Results from ARIMA intervention and regression analyses did not support the hypotheses that the smoke-free bylaw had an impact that resulted in (1) abrupt permanent, (2) gradual permanent or (3) abrupt temporary changes in restaurant and bar sales. While a large body of research has found no significant adverse impact of smoke-free legislation on restaurant and bar sales in the United States, Australia and elsewhere, our study confirms these results in a northern region with a bilingual population, which has important implications for impending policy in Europe and other areas.
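
    The regression-with-ARMA-errors approach described above can be sketched with statsmodels: a step regressor encodes an abrupt, permanent intervention, and its coefficient is the estimated effect on the outcome series. The monthly series, dates and model order below are illustrative, not the Ottawa sales data.

      # Illustrative interrupted time-series model: a monthly sales ratio with ARMA errors
      # and a step regressor for an abrupt, permanent intervention. The series, dates and
      # model order are made up; they are not the Ottawa retail sales data.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      rng = np.random.default_rng(7)
      dates = pd.date_range("1998-03-01", periods=52, freq="MS")
      baseline = np.linspace(0.12, 0.10, 52)                 # slowly declining baseline ratio
      y = baseline + rng.normal(0, 0.004, 52)                # no true intervention effect here
      step = (dates >= "2001-08-01").astype(float)           # 1 from the bylaw month onwards

      model = SARIMAX(y, exog=step, order=(1, 0, 1), trend="ct")
      fit = model.fit(disp=False)
      print(fit.summary())   # the exog ("x1") coefficient estimates the abrupt permanent effect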

  11. Nurses' attitudes toward the use of the bar-coding medication administration system.

    PubMed

    Marini, Sana Daya; Hasman, Arie; Huijer, Huda Abu-Saad; Dimassi, Hani

    2010-01-01

    This study determines nurses' attitudes toward bar-coding medication administration system use. Some of the factors underlying the successful use of bar-coding medication administration systems that are viewed as a connotative indicator of users' attitudes were used to gather data that describe the attitudinal basis for system adoption and use decisions in terms of subjective satisfaction. Only 67 nurses in the United States had the chance to respond to the e-questionnaire posted on the CARING list server for the months of June and July 2007. Participants rated their satisfaction with bar-coding medication administration system use based on system functionality, usability, and its positive/negative impact on the nursing practice. Results showed a somewhat positive attitude, but the image profile draws attention to nurses' concerns about improving certain system characteristics. Nurses with high bar-coding medication administration system skills showed a more negative perception of the system. The reasons underlying dissatisfaction with bar-coding medication administration use by skillful users are an important source of knowledge that can be helpful for system development as well as system deployment. As a result, strengthening bar-coding medication administration system usability by magnifying its ability to eliminate medication errors and the contributing factors, maximizing system functionality by ascertaining its power as an extra eye in the medication administration process, and impacting the clinical nursing practice positively by being helpful to nurses, speeding up the medication administration process, and being user-friendly can offer a congenial setting for establishing a positive attitude toward system use, which in turn leads to successful bar-coding medication administration system use.

  12. EIT Imaging of admittivities with a D-bar method and spatial prior: experimental results for absolute and difference imaging.

    PubMed

    Hamilton, S J

    2017-05-22

    Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, making image reconstruction challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high-confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated to be effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, absolute as well as time-difference imaging provides the user with great flexibility without a high computational cost.

  13. Experimental and artificial neural network based prediction of performance and emission characteristics of DI diesel engine using Calophyllum inophyllum methyl ester at different nozzle opening pressure

    NASA Astrophysics Data System (ADS)

    Vairamuthu, G.; Thangagiri, B.; Sundarapandian, S.

    2018-01-01

    The present work investigates the effect of varying Nozzle Opening Pressures (NOP) from 220 bar to 250 bar on the performance, emissions and combustion characteristics of Calophyllum inophyllum Methyl Ester (CIME) in a constant speed, Direct Injection (DI) diesel engine using an Artificial Neural Network (ANN) approach. An ANN model has been developed to predict specific fuel consumption (SFC), brake thermal efficiency (BTE), exhaust gas temperature (EGT), unburnt hydrocarbons (UBHC), CO, CO2, NOx and smoke density using load, blend (B0 and B100) and NOP as input data. A standard Back-Propagation Algorithm (BPA) is used to train the model. A Multi-Layer Perceptron (MLP) network is used for nonlinear mapping between the input and output parameters. The ANN model can predict the performance of the diesel engine and the exhaust emissions with a correlation coefficient (R2) in the range of 0.98-1. Mean Relative Error (MRE) values are in the range of 0.46-5.8%, while the Mean Square Errors (MSE) are found to be very low. It is evident that the ANN models are reliable tools for the prediction of DI diesel engine performance and emissions. The test results show that the optimum NOP is 250 bar with B100.
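
    A hedged sketch of the kind of network described above, using scikit-learn's MLPRegressor in place of the authors' back-propagation implementation: load, blend and NOP go in, one output comes back, and R2, MRE and MSE are computed on a hold-out set. All inputs and the target relationship are synthetic.

      # Hypothetical stand-in for the paper's MLP: map (load, blend, NOP) to one engine
      # output and report R^2, mean relative error (MRE) and mean squared error (MSE)
      # on a hold-out set. The data are synthetic, not measured engine data.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import r2_score, mean_squared_error

      rng = np.random.default_rng(3)
      load = rng.uniform(0, 100, 300)                  # engine load, %
      blend = rng.integers(0, 2, 300) * 100.0          # 0 = diesel (B0), 100 = biodiesel (B100)
      nop = rng.choice([220.0, 230.0, 240.0, 250.0], 300)
      X = np.column_stack([load, blend, nop])
      y = 20 + 0.1 * load - 0.01 * blend + 0.02 * (nop - 220) + rng.normal(0, 0.5, 300)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
      model.fit(X_tr, y_tr)
      pred = model.predict(X_te)

      mre = np.mean(np.abs((y_te - pred) / y_te)) * 100
      print(f"R^2 = {r2_score(y_te, pred):.3f}  MRE = {mre:.2f}%  MSE = {mean_squared_error(y_te, pred):.3f}")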

  14. Comparing the physiological and perceptual responses of construction workers (bar benders and bar fixers) in a hot environment.

    PubMed

    Wong, Del Pui-Lam; Chung, Joanne Wai-Yee; Chan, Albert Ping-Chuen; Wong, Francis Kwan-Wah; Yi, Wen

    2014-11-01

    This study aimed to (1) quantify the respective physical workloads of bar bending and fixing; and (2) compare the physiological and perceptual responses between bar benders and bar fixers. Field studies were conducted during the summer in Hong Kong from July 2011 to August 2011 over six construction sites. Synchronized physiological, perceptual, and environmental parameters were measured from construction rebar workers. The average duration of the 39 field measurements was 151.1 ± 22.4 min in a hot environment (WBGT = 31.4 ± 2.2 °C), during which physiological, perceptual and environmental parameters were synchronized. Energy expenditures of overall rebar work, bar bending, and bar fixing were 2.57, 2.26 and 2.67 kcal/min (179, 158 and 186 W), respectively. Bar fixing induced significantly higher physiological responses in heart rate (113.6 vs. 102.3 beats/min, p < 0.05), oxygen consumption (9.53 vs. 7.14 ml/min/kg, p < 0.05), and energy expenditure (2.67 vs. 2.26 kcal/min, p < 0.05) (186 vs. 158 W, p < 0.05) as compared to bar bending. Perceptual responses were higher in bar fixing, but the difference was not statistically significant. Findings of this study enable the calculation of the daily energy expenditure of rebar work. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  15. The Bologna Annotation Resource (BAR 3.0): improving protein functional annotation

    PubMed Central

    Casadio, Rita

    2017-01-01

    BAR 3.0 updates our server BAR (Bologna Annotation Resource) for predicting protein structural and functional features from sequence. We increase data volume, query capabilities and information conveyed to the user. The core of BAR 3.0 is a graph-based clustering procedure of UniProtKB sequences, following strict pairwise similarity criteria (sequence identity ≥40% with alignment coverage ≥90%). Each cluster contains the available annotation downloaded from UniProtKB, GO, PFAM and PDB. After statistical validation, GO terms and PFAM domains are cluster-specific and annotate new sequences entering the cluster after satisfying similarity constraints. BAR 3.0 includes 28 869 663 sequences in 1 361 773 clusters, of which 22.2% (22 241 661 sequences) and 47.4% (24 555 055 sequences) have at least one validated GO term and one PFAM domain, respectively. 1.4% of the clusters (36% of all sequences) include PDB structures, and each such cluster is associated with a hidden Markov model that allows building template-target alignments suitable for structural modeling. A further 3 399 026 sequences are singletons. BAR 3.0 offers an improved search interface, allowing queries by UniProtKB accession, FASTA sequence, GO term, PFAM domain, organism, PDB and ligand(s). When evaluated on the CAFA2 targets, BAR 3.0 largely outperforms our previous version and scores among state-of-the-art methods. BAR 3.0 is publicly available and accessible at http://bar.biocomp.unibo.it/bar3. PMID:28453653

  16. Using a Divided Bar Apparatus to Measure Thermal Conductivity of Samples of Odd Sizes and Shapes

    NASA Astrophysics Data System (ADS)

    Crowell, J.; Gosnold, W. D.

    2012-12-01

    Standard procedure for measuring thermal conductivity using a divided bar apparatus requires a sample that has the same surface dimensions as the heat sink/source surface in the divided bar. Heat flow is assumed to be constant throughout the column, and thermal conductivity (K) is determined by measuring temperatures (T) across the sample and across standard layers and using the basic relationship Ksample=(Kstandard*(ΔT1+ΔT2)/2)/(ΔTsample). Sometimes samples are not large enough or of the correct proportions to match the surface of the heat sink/source; however, using the equations presented here, the thermal conductivity of such samples can still be measured with a divided bar. Measurements were done on the UND Geothermal Laboratories' stationary divided bar apparatus (SDB). This SDB has been designed to mimic many in-situ conditions, with a temperature range of -20 °C to 150 °C and a pressure range of 0 to 10,000 psi for samples with parallel surfaces and 0 to 3000 psi for samples with non-parallel surfaces. The heat sink/source surfaces are copper disks and have a surface area of 1,772 mm2 (2.74 in2). Layers of polycarbonate 6 mm thick with the same surface area as the copper disks are located in the heat sink and in the heat source as standards. For this study, all samples were prepared from a single piece of 4 inch limestone core. Thermal conductivities were measured for each sample as it was cut successively smaller. The above equation was adjusted to include the thicknesses (Th) of the samples and the standards and the surface areas (A) of the heat sink/source and of the sample: Ksample=(Kstandard*Astandard*Thsample*(ΔT1+ΔT3))/(ΔTsample*Asample*2*Thstandard). Measuring the thermal conductivity of samples of multiple sizes, shapes, and thicknesses gave consistent values for samples with surfaces as small as 50% of the heat sink/source surface, regardless of the shape of the sample. Measuring samples with surfaces smaller than 50% of the heat sink/source surface resulted in thermal conductivity values which were too high. The cause of the error with the smaller samples is being examined, as is the relationship between the amount of error in the thermal conductivity and the difference in surface areas. As more measurements are made, an equation to mathematically correct for the error is being developed in case a way to physically correct the problem cannot be determined.
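
    The adjusted relation quoted above is easy to wrap in a small helper for undersized samples; the function and the example numbers below are illustrative (my own transcription), not the UND laboratory's code or data.

      # Transcription of the adjusted divided-bar relation quoted above:
      #   K_sample = (K_std * A_std * Th_sample * (dT1 + dT3)) / (dT_sample * A_sample * 2 * Th_std)
      # All variable names and example numbers are illustrative.

      def sample_conductivity(k_std, a_std, th_std, a_sample, th_sample,
                              dt_std_1, dt_std_3, dt_sample):
          """Thermal conductivity (W/m/K) of an undersized sample in a divided bar."""
          return (k_std * a_std * th_sample * (dt_std_1 + dt_std_3)) / (
              dt_sample * a_sample * 2.0 * th_std)

      # Example: a sample covering about 70% of the 1772 mm^2 heat sink/source surface.
      k = sample_conductivity(k_std=0.20, a_std=1772e-6, th_std=6e-3,
                              a_sample=1240e-6, th_sample=10e-3,
                              dt_std_1=2.1, dt_std_3=2.3, dt_sample=0.4)
      print(f"K_sample = {k:.2f} W/m/K")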

  17. The effect of 8.25% sodium hypochlorite on dental pulp dissolution and dentin flexural strength and modulus.

    PubMed

    Cullen, James K T; Wealleans, James A; Kirkpatrick, Timothy C; Yaccino, John M

    2015-06-01

    The purpose of this study was to evaluate the effect of various concentrations of sodium hypochlorite (NaOCl), including 8.25%, on dental pulp dissolution and dentin flexural strength and modulus. Sixty dental pulp samples and 55 plane parallel dentin bars were retrieved from extracted human teeth. Five test groups (n = 10) were formed consisting of a pulp sample and dentin bar immersed in various NaOCl solutions. The negative control group (n = 5) consisted of pulp samples and dentin bars immersed in saline. The positive control group (n = 5) consisted of pulp samples immersed in 8.25% NaOCl without a dentin bar. Every 6 minutes for 1 hour, the solutions were refreshed. The dentin bars were tested for flexural strength and modulus with a 3-point bend test. The time until total pulp dissolution and any changes in dentin bar flexural strength and modulus for the different NaOCl solutions were statistically analyzed. An increase in NaOCl concentration showed a highly significant decrease in pulp dissolution time. The pulp dissolution property of 8.25% NaOCl was significantly faster than any other tested concentration of NaOCl. The presence of dentin did not have a significant effect on the dissolution capacity of NaOCl if the solutions were refreshed. NaOCl concentration did not have a statistically significant effect on dentin flexural strength or modulus. Dilution of NaOCl decreases its pulp dissolution capacity. Refreshing the solution is essential to counteract the effects of dentin. In this study, NaOCl did not have a significant effect on dentin flexural strength or modulus. Published by Elsevier Inc.

  18. Evaluation of a responsible beverage service and enforcement program: Effects on bar patron intoxication and potential impaired driving by young adults.

    PubMed

    Fell, James C; Fisher, Deborah A; Yao, Jie; McKnight, A Scott

    2017-08-18

    Studies of alcohol-related harm (violence, injury, illness) suggest that the most significant risk factors are the amount of alcohol consumed and whether obviously intoxicated patrons continue to be served. This study's objective was to investigate the effects of a responsible beverage service (RBS)/enhanced alcohol enforcement intervention on bars, bar patrons, and impaired driving. Two communities-Monroe County, New York, and Cleveland, Ohio-participated in a demonstration program and evaluation. The intervention applied RBS training, targeted enforcement, and corrective actions by law enforcement to a random sample of 10 identified problem bars in each community compared to 10 matched nonintervention problem bars. Data were collected over 3 waves on bar serving practices, bar patron intoxication, drinking and driving, and other alcohol-related harm from intervention and control bars and treatment and comparison communities. In Monroe County, New York, of the 14 outcome measures analyzed, 7 measures showed statistically significant differences from pre- to postintervention. Six of those measures indicated changes in the desired or positive direction and 2 measures were in the undesired or negative direction. Of note in the positive direction, the percentage of intervention bar patrons who were intoxicated decreased from 44 to 27% and the average blood alcohol concentration of patrons decreased from 0.097 to 0.059 g/dL pre- to postintervention. In Cleveland, Ohio, 6 of the 14 measures showed statistically significant changes pre- to postintervention with 6 in the positive direction and 4 in the negative direction. Of note, the percentage of pseudo-intoxicated patrons denied service in intervention bars increased from 6 to 29%. Of the 14 outcome measures that were analyzed in each community, most indicated positive changes associated with the intervention, but others showed negative associations. About half of the measures showed no significance, the sample sizes were too small, or the data were unavailable. Therefore, at best, the results of these demonstration programs were mixed. There were, however, some positive indications from the intervention. It appears that when bar managers and owners are aware of the program and its enforcement and when servers are properly trained in RBS, fewer patrons may become intoxicated and greater efforts may be made to deny service to obviously intoxicated patrons. Given that about half of arrested impaired drivers had their last drink at a licensed establishment, widespread implementation of this strategy has the potential to help reduce impaired driving.

  19. On the Calculation of Uncertainty Statistics with Error Bounds for CFD Calculations Containing Random Parameters and Fields

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2016-01-01

    This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bound formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate probability densities of output quantities of interest, as a means of assessing the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.
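
    The idea of using a quadrature hierarchy to obtain a computable estimate of the statistics error can be illustrated in a few lines. The sketch below is a toy stand-in, not the framework of [8]: the "output quantity" is an analytic function of a single Gaussian parameter, and the difference between two Gauss-Hermite levels estimates the quadrature error in the mean.

      # Toy illustration: estimate E[q(xi)] for xi ~ N(0, 1) with Gauss-Hermite quadrature
      # at two resolution levels and take the level difference as a computable estimate of
      # the quadrature error in the moment statistic. q is an analytic stand-in, not a CFD solve.
      import numpy as np

      def q(xi):
          return np.exp(0.3 * xi) + 0.1 * xi**2   # surrogate output quantity of interest

      def mean_via_gauss_hermite(n_nodes):
          # Probabilists' Hermite rule: weight function exp(-x^2/2), weights sum to sqrt(2*pi).
          nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
          return np.sum(weights * q(nodes)) / np.sqrt(2.0 * np.pi)

      coarse = mean_via_gauss_hermite(4)
      fine = mean_via_gauss_hermite(8)
      print(f"E[q] ~ {fine:.6f}, estimated quadrature error ~ {abs(fine - coarse):.2e}")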

  20. Synthesis and optimization of four bar mechanism with six design parameters

    NASA Astrophysics Data System (ADS)

    Jaiswal, Ankur; Jawale, H. P.

    2018-04-01

    Function generation is the synthesis of a mechanism for a specific task; it becomes especially complex above five precision points of the coupler and thus involves large structural error. The methodology for arriving at a more precise solution is to use an optimization technique. The work presented herein considers methods for optimization of the structural error in a closed kinematic chain with a single degree of freedom, for generating functions such as log(x), e^x, tan(x) and sin(x) with five precision points. The equation in the Freudenstein-Chebyshev method is used to develop the five-point synthesis of the mechanism. An extended formulation is proposed, and results are obtained to verify existing results in the literature. Optimization of the structural error is carried out using a least-squares approach. A comparative structural error analysis is presented for the error optimized through the least-squares method and the extended Freudenstein-Chebyshev method.
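
    Because the Freudenstein equation is linear in its three link-ratio coefficients, a least-squares fit over the precision points reduces to one linear solve. The sketch below is a generic textbook-style illustration (my own choice of target function, angle ranges and conventions), not the paper's extended formulation.

      # Least-squares fit of the Freudenstein coefficients K1, K2, K3 for a function-
      # generating four-bar linkage. Target function, angle ranges and the five precision
      # points are illustrative; under one common convention K1 = d/a, K2 = d/c and
      # K3 = (a^2 - b^2 + c^2 + d^2) / (2ac), from which link lengths would follow.
      import numpy as np

      # Task: output angle psi should track log(x) while input angle phi tracks x.
      x = np.linspace(1.0, 2.0, 5)                             # five precision points
      phi = np.radians(30 + 90 * (x - 1.0))                    # input sweep 30..120 deg
      psi = np.radians(60 + 90 * np.log(x) / np.log(2.0))      # output sweep 60..150 deg

      # Freudenstein equation: K1*cos(phi) - K2*cos(psi) + K3 = cos(phi - psi), linear in K.
      A = np.column_stack([np.cos(phi), -np.cos(psi), np.ones_like(phi)])
      b = np.cos(phi - psi)
      (K1, K2, K3), *_ = np.linalg.lstsq(A, b, rcond=None)

      residual = A @ np.array([K1, K2, K3]) - b                # structural-error proxy
      print("K1, K2, K3 =", K1, K2, K3)
      print("max |residual| =", np.abs(residual).max())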

  1. The imprints of bars on the vertical stellar population gradients of galactic bulges

    NASA Astrophysics Data System (ADS)

    Molaeinezhad, A.; Falcón-Barroso, J.; Martínez-Valpuesta, I.; Khosroshahi, H. G.; Vazdekis, A.; La Barbera, F.; Peletier, R. F.; Balcells, M.

    2017-05-01

    This is the second paper of a series aimed at studying the stellar kinematics and population properties of bulges in highly inclined barred galaxies. In this work, we carry out a detailed analysis of the stellar age, metallicity and [Mg/Fe] of 28 highly inclined (i > 65°) disc galaxies, from S0 to S(B)c, observed with the SAURON integral-field spectrograph. The sample is divided into two clean samples of barred and unbarred galaxies, on the basis of the correlation between the stellar velocity and h3 profiles, as well as the level of cylindrical rotation within the bulge region. We find that while the mean stellar age, metallicity and [Mg/Fe] in the bulges of barred and unbarred galaxies are not statistically distinct, the [Mg/Fe] gradients along the minor axis (away from the disc) of barred galaxies are significantly different from those of galaxies without bars. For barred galaxies, stars that are vertically further away from the mid-plane are in general more [Mg/Fe]-enhanced, and thus the vertical gradients in [Mg/Fe] for barred galaxies are mostly positive, while for unbarred bulges the [Mg/Fe] profiles are typically negative or flat. This result, together with the old populations observed in the barred sample, indicates that bars are long-lasting structures and therefore are not easily destroyed. The marked [Mg/Fe] differences with the bulges of unbarred galaxies indicate that different formation/evolution scenarios are required to explain their build-up, and emphasize the role of bars in redistributing stellar material in the bulge-dominated regions.

  2. Prevalence of and Differences in Salad Bar Implementation in Rural Versus Urban Arizona Schools.

    PubMed

    Blumenschine, Michelle; Adams, Marc; Bruening, Meg

    2018-03-01

    Rural children consume more calories per day on average than urban children, and they are less likely to consume fruit. Self-service salad bars have been proposed as an effective approach to better meet the National School Lunch Program's fruit and vegetable recommendations. No studies have examined how rural and urban schools differ in the implementation of school salad bars. To compare the prevalence of school-lunch salad bars and differences in implementation between urban and rural Arizona schools. Secondary analysis of a cross-sectional web-based survey. School nutrition managers (N=596) in the state of Arizona. National Center for Education Statistics locale codes defined rural and urban classifications. Barriers to salad bar implementation were examined among schools that have never had, once had, and currently have a school salad bar. Promotional practices were examined among schools that once had and currently have a school salad bar. Generalized estimating equation models were used to compare urban and rural differences in presence and implementation of salad bars, adjusting for school-level demographics and the clustering of schools within districts. After adjustment, the prevalence of salad bars did not differ between urban and rural schools (46.9%±4.3% vs 46.8%±8.5%, respectively). Rural schools without salad bars more often reported perceived food waste and cost of produce as barriers to implementing salad bars, and funding was a necessary resource for offering a salad bar in the future, as compared with urban schools (P<0.05). No other geographic differences were observed in reported salad bar promotion, challenges, or resources among schools that currently have or once had a salad bar. After adjustment, salad bar prevalence, implementation practices, and concerns are similar across geographic settings. Future research is needed to investigate methods to address cost and food waste concerns in rural areas. Copyright © 2018 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  3. Stress tracking in thin bars by eigenstrain actuation

    NASA Astrophysics Data System (ADS)

    Schoeftner, J.; Irschik, H.

    2016-11-01

    This contribution focuses on stress tracking in slender structures. The axial stress distribution of a linear elastic bar is investigated, in particular, we seek for an answer to the following question: in which manner do we have to distribute eigenstrains, such that the axial stress in a bar is equal to a certain desired stress distribution, despite external forces or support excitations are present? In order to track a certain time- and space-dependent stress function, smart actuators, such as piezoelectric actuators, are needed to realize eigenstrains. Based on the equation of motion and the constitutive relation, which relate stress, strain, displacement and eigenstrains, an analytical solution for the stress tracking problem is derived. The starting point for the derivation of a solution for the stress tracking problem is a semi-positive definite integral depending on the error stress which is the difference between the actual stress and the desired stress. Our derived stress tracking theory is verified by two examples: first, a clamped-free bar which is harmonically excited is investigated. It is shown under which circumstances the axial stress vanishes at every location and at every time instant. The second example is a support-excited bar with end mass, where a desired stress profile is prescribed.

  4. Effect of bar cross-section and female housing material on retention of mandibular implant bar overdentures: A comparative in vitro study.

    PubMed

    Abdel-Khalek, Elsayed A; Ibrahim, Abdullah M

    2017-01-01

    The aim of this study is to evaluate the effect of different cross-sections of a bar connecting two implants on the retention of mandibular overdentures with a Hader clip or lined with a heat-cured resilient liner as the housing material. The retentive values after a simulated 1.5 years of service were also recorded. An edentulous mandibular acrylic model was constructed with two dummy implants located in the canine region and connected with a cast bar assembly. According to bar cross-section and anchoring method, four groups (n = 10) of identical overdentures were used: Hader bar/clip group (HCG), Hader bar/silicone liner female housing group (HSG), oval bar/silicone liner female housing group (OSG), and round bar/silicone liner female housing group (RSG). Each overdenture sample was subjected to simulated wear of up to 2740 manual insertions/separations. The mean retentive forces were measured at the baseline and after every 500 insertions. The data were statistically analyzed using one-way analysis of variance. The present study demonstrated that all bar cross-sections showed a significant difference at the baseline (P < 0.05), but HSG showed greater initial retention compared to HCG, OSG, and RSG. OSG showed significantly higher retention after 2740 insertions (simulating five insertions/day). Within the limitations of this in vitro study and for a similar period of service, a heat-cured silicone female housing for a Hader bar could maintain greater retention for two-implant-retained overdentures than that provided by a conventional plastic clip after 1.5 years. The oval bar recorded reasonable initial retention values and maintained these values for 1.5 years of service.

  5. Debris Flux Comparisons From The Goldstone Radar, Haystack Radar, and Hax Radar Prior, During, and After the Last Solar Maximum

    NASA Technical Reports Server (NTRS)

    Stokely, C. L.; Stansbery, E. G.; Goldstein, R. M.

    2006-01-01

    The continual monitoring of the low Earth orbit (LEO) debris environment using highly sensitive radars is essential for an accurate characterization of these dynamic populations. Debris populations are continually evolving since there are new debris sources, previously unrecognized debris sources, and debris loss mechanisms that are dependent on the dynamic space environment. Such radar data are used to supplement, update, and validate existing orbital debris models. NASA has been utilizing radar observations of the debris environment for over a decade from three complementary radars: the NASA JPL Goldstone radar, the MIT Lincoln Laboratory (MIT/LL) Long Range Imaging Radar (known as the Haystack radar), and the MIT/LL Haystack Auxiliary radar (HAX). All of these systems are highly sensitive radars that operate in a fixed staring mode to statistically sample orbital debris in the LEO environment. Each of these radars is ideally suited to measure debris within a specific size region. The Goldstone radar generally observes objects with sizes from 2 mm to 1 cm. The Haystack radar generally measures from 5 mm to several meters. The HAX radar generally measures from 2 cm to several meters. These overlapping size regions allow a continuous measurement of cumulative debris flux versus diameter from 2 mm to several meters for a given altitude window. This is demonstrated for all three radars by comparing the debris flux versus diameter over 200 km altitude windows for 3 nonconsecutive years from 1998 through 2003. These years correspond to periods before, during, and after the peak of the last solar cycle. Comparing the year-to-year flux from Haystack for each of these altitude regions indicates statistically significant changes in subsets of the debris populations. Potential causes of these changes are discussed. These analysis results include error bars that represent statistical sampling errors, and are detailed in this paper.
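
    The "error bars that represent statistical sampling errors" for a staring-mode flux measurement are, to first order, Poisson counting uncertainties on the number of detections in each size/altitude bin. A generic sketch with made-up numbers:

      # Generic Poisson counting error bar for a detection flux; the counts and the
      # area-time product are made up, not Goldstone/Haystack/HAX values.
      import numpy as np

      detections = 47          # hypothetical detections in one size/altitude bin
      area_time = 1.3e6        # hypothetical effective collecting area x time (m^2 yr)

      flux = detections / area_time
      flux_err = np.sqrt(detections) / area_time   # 1-sigma Poisson sampling error
      print(f"flux = {flux:.3e} +/- {flux_err:.3e} m^-2 yr^-1")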

  6. Errors in statistical decision making Chapter 2 in Applied Statistics in Agricultural, Biological, and Environmental Sciences

    USDA-ARS?s Scientific Manuscript database

    Agronomic and Environmental research experiments result in data that are analyzed using statistical methods. These data are unavoidably accompanied by uncertainty. Decisions about hypotheses, based on statistical analyses of these data are therefore subject to error. This error is of three types,...

  7. Uncovering the single top: observation of electroweak top quark production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benitez, Jorge Armando

    2009-01-01

    The top quark is generally produced in quark and anti-quark pairs. However, the Standard Model also predicts the production of only one top quark, mediated by the electroweak interaction, known as 'Single Top'. Single Top quark production is important because it provides a unique and direct way to measure the CKM matrix element V_tb, and can be used to explore physics possibilities beyond the Standard Model predictions. This dissertation presents the results of the observation of Single Top using 2.3 fb^-1 of data collected with the D0 detector at the Fermilab Tevatron collider. The analysis includes the Single Top muon+jets and electron+jets final states and employs Boosted Decision Trees as a method to separate the signal from the background. The resulting Single Top cross section measurement is σ(p$$\bar{p}$$ → tb + X, tqb + X) = 3.74 (+0.95/-0.74) pb, where the errors include both statistical and systematic uncertainties. The probability to measure a cross section at this value or higher in the absence of signal is p = 1.9 × 10^-6, which corresponds to a Gaussian equivalent significance of 4.6 standard deviations. When this result is combined with two other analysis methods, the resulting cross section measurement is σ(p$$\bar{p}$$ → tb + X, tqb + X) = 3.94 ± 0.88 pb, and the corresponding measurement significance is 5.0 standard deviations.

  8. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: Dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, Sparse tensorization methods[2] utilizing node-nested hierarchies, Sampling methods[4] for high-dimensional random variable spaces.

  9. Fifth through Eighth Grade Students' Difficulties in Constructing Bar Graphs: Data Organization, Data Aggregation, and Integration of a Second Variable

    ERIC Educational Resources Information Center

    Garcia-Mila, Merce; Marti, Eduard; Gilabert, Sandra; Castells, Marina

    2014-01-01

    Studies that consider the displays that students create to organize data are not common in the literature. This article compares fifth through eighth graders' difficulties with the creation of bar graphs using either raw data (Study 1, n = 155) or a provided table (Study 2, n = 152). Data in Study 1 showed statistical differences for the type of…

  10. The Thurgood Marshall School of Law Empirical Findings: A Report of the Bar Passing Percentages of Years 2005-2009

    ERIC Educational Resources Information Center

    Kadhi, T.; Holley, D.; Garrison, P.; Green, T.; Palasota, A.

    2010-01-01

    The following report of descriptive statistics gives the passing percentages of the Bar examination for the Thurgood Marshall School of Law (TMSL) for the calendar years of 2005-2009. A Five Year Analysis is given for the entire period, followed by a Three Year Analysis of years 2005-2007, 2006-2008, and 2007-2009. In addition, an Annual Analysis…

  11. Data visualization, bar naked: A free tool for creating interactive graphics.

    PubMed

    Weissgerber, Tracey L; Savic, Marko; Winham, Stacey J; Stanisavljevic, Dejana; Garovic, Vesna D; Milic, Natasa M

    2017-12-15

    Although bar graphs are designed for categorical data, they are routinely used to present continuous data in studies that have small sample sizes. This presentation is problematic, as many data distributions can lead to the same bar graph, and the actual data may suggest different conclusions from the summary statistics. To address this problem, many journals have implemented new policies that require authors to show the data distribution. This paper introduces a free, web-based tool for creating an interactive alternative to the bar graph (http://statistika.mfub.bg.ac.rs/interactive-dotplot/). This tool allows authors with no programming expertise to create customized interactive graphics, including univariate scatterplots, box plots, and violin plots, for comparing values of a continuous variable across different study groups. Individual data points may be overlaid on the graphs. Additional features facilitate visualization of subgroups or clusters of non-independent data. A second tool enables authors to create interactive graphics from data obtained with repeated independent experiments (http://statistika.mfub.bg.ac.rs/interactive-repeated-experiments-dotplot/). These tools are designed to encourage exploration and critical evaluation of the data behind the summary statistics and may be valuable for promoting transparency, reproducibility, and open science in basic biomedical research. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
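
    The same kind of display, individual points plus a summary mark instead of a bar of the mean, can also be produced offline with matplotlib; the stand-alone sketch below uses toy data and is not the web tool described above.

      # Stand-alone matplotlib sketch (not the web tool above): plot every observation as a
      # jittered point with a median bar, instead of a bar graph of the mean. Toy data, n = 8.
      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(5)
      groups = {"Control": rng.normal(10, 2, 8), "Treated": rng.normal(13, 4, 8)}

      fig, ax = plt.subplots()
      for i, (name, values) in enumerate(groups.items()):
          jitter = rng.uniform(-0.08, 0.08, len(values))
          ax.plot(np.full(len(values), i) + jitter, values, "o", alpha=0.7)   # raw points
          ax.hlines(np.median(values), i - 0.2, i + 0.2, colors="black")      # group median
      ax.set_xticks(range(len(groups)))
      ax.set_xticklabels(list(groups.keys()))
      ax.set_ylabel("Outcome (arbitrary units)")
      plt.show()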

  12. Inverse statistical estimation via order statistics: a resolution of the ill-posed inverse problem of PERT scheduling

    NASA Astrophysics Data System (ADS)

    Pickard, William F.

    2004-10-01

    The classical PERT inverse statistics problem requires estimation of the mean, $\bar{m}$, and standard deviation, s, of a unimodal distribution given estimates of its mode, m, and of the smallest, a, and largest, b, values likely to be encountered. After placing the problem in historical perspective and showing that it is ill-posed because it is underdetermined, this paper offers an approach to resolve the ill-posedness: (a) by interpreting a and b as modes of order statistic distributions; (b) by requiring also an estimate of the number of samples, N, considered in estimating the set {m, a, b}; and (c) by maximizing a suitable likelihood, having made the traditional assumption that the underlying distribution is beta. Exact formulae relating the four parameters of the beta distribution to {m, a, b, N} and the assumed likelihood function are then used to compute the four underlying parameters of the beta distribution; and from them, $\bar{m}$ and s are computed using exact formulae.
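
    For comparison, the traditional three-point PERT estimates that this paper revisits are simple closed forms; the snippet below gives only that textbook heuristic, not the order-statistics and maximum-likelihood resolution proposed in the paper.

      # Classical three-point PERT estimates (the textbook beta-distribution heuristic),
      # not the order-statistics / maximum-likelihood resolution proposed in the paper.
      def pert_estimates(a, m, b):
          """Traditional PERT mean and standard deviation from optimistic (a),
          most likely (m) and pessimistic (b) estimates."""
          mean = (a + 4.0 * m + b) / 6.0
          std = (b - a) / 6.0
          return mean, std

      mean, std = pert_estimates(a=4.0, m=6.0, b=13.0)   # hypothetical task durations (days)
      print(f"PERT mean = {mean:.2f} days, sd = {std:.2f} days")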

  13. Deficiency of "Thin" Stellar Bars in Seyfert Host Galaxies

    NASA Technical Reports Server (NTRS)

    Shlosman, Isaac; Peletier, Reynier F.; Knapen, Johan

    1999-01-01

    Using all available major samples of Seyfert galaxies and their corresponding control samples of closely matched non-active galaxies, we find that the bar ellipticities (or axial ratios) in Seyfert galaxies are systematically different from those in non-active galaxies. Overall, there is a deficiency of bars with large ellipticities (i.e., 'thin' or 'strong' bars) in Seyferts, compared to non-active galaxies. Accompanied by a large dispersion due to small number statistics, this effect is, strictly speaking, at the 2 sigma level. To obtain this result, the active galaxy samples of near-infrared surface photometry were matched to those of normal galaxies in type, host galaxy ellipticity, absolute magnitude, and, to some extent, in redshift. We discuss possible theoretical explanations of this phenomenon within the framework of galactic evolution, and, in particular, of radial gas redistribution in barred galaxies. Our conclusions provide further evidence that Seyfert hosts differ systematically from their non-active counterparts on scales of a few kpc.

  14. Neural network uncertainty assessment using Bayesian statistics: a remote sensing application

    NASA Technical Reports Server (NTRS)

    Aires, F.; Prigent, C.; Rossow, W. B.

    2004-01-01

    Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. Used for regression fitting, NN models can be used effectively to represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a blackbox model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to the first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians. A regularization strategy based on principal component analysis is proposed to suppress the multicollinearities in order to make these Jacobians robust and physically meaningful.
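
    The step from weight uncertainty to output error bars can be illustrated with a toy network: sample perturbed weights, push an input through, and read the spread of the outputs; a finite-difference Jacobian gives the input sensitivities. Everything below (the tiny network, its weights, the assumed diagonal weight variance) is made up, and the paper's analytic propagation is replaced by Monte Carlo.

      # Toy illustration of output error bars from weight uncertainty in a tiny one-hidden-layer
      # network. The weights and their (diagonal) posterior variance are made up, and the
      # analytic propagation used in the paper is replaced by simple Monte Carlo sampling.
      import numpy as np

      rng = np.random.default_rng(0)
      W1, b1 = rng.normal(0, 1, (4, 3)), np.zeros(4)      # "trained" hidden-layer parameters
      W2, b2 = rng.normal(0, 1, (1, 4)), np.zeros(1)      # "trained" output-layer parameters
      weight_sigma = 0.05                                  # assumed posterior std of every weight

      def forward(x, w1, c1, w2, c2):
          return w2 @ np.tanh(w1 @ x + c1) + c2

      x = np.array([0.2, -1.0, 0.5])                       # one input vector (e.g. channel values)

      # Monte Carlo propagation of weight uncertainty to the output.
      samples = np.array([
          forward(x, W1 + rng.normal(0, weight_sigma, W1.shape), b1,
                     W2 + rng.normal(0, weight_sigma, W2.shape), b2)[0]
          for _ in range(2000)
      ])
      print(f"output = {samples.mean():.3f} +/- {samples.std(ddof=1):.3f} (error bar)")

      # Finite-difference Jacobian d(output)/d(input) at x, for sensitivity checks.
      eps = 1e-5
      jac = np.array([(forward(x + eps * np.eye(3)[i], W1, b1, W2, b2)[0]
                       - forward(x - eps * np.eye(3)[i], W1, b1, W2, b2)[0]) / (2 * eps)
                      for i in range(3)])
      print("Jacobian:", jac)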

  15. Sediment Dynamics Over a Stable Point bar of the San Pedro River, Southeastern Arizona

    NASA Astrophysics Data System (ADS)

    Hamblen, J. M.; Conklin, M. H.

    2002-12-01

    Streams of the Southwest receive enormous inputs of sediment during storm events in the monsoon season due to the high intensity rainfall and large percentages of exposed soil in the semi-arid landscape. In the Upper San Pedro River, with a watershed area of approximately 3600 square kilometers, particle size ranges from clays to boulders with large fractions of sand and gravel. This study focuses on the mechanics of scour and fill on a stable point bar. An innovative technique using seven co-located scour chains and liquid-filled, load-cell scour sensors characterized sediment dynamics over the point bar during the monsoon season of July to September 2002. The sensors were set in two transects to document sediment dynamics near the head and toe of the bar. Scour sensors record area-averaged sediment depths while scour chains measure scour and fill at a point. The average area covered by each scour sensor is 11.1 square meters. Because scour sensors have never been used in a system similar to the San Pedro, one goal of the study was to test their ability to detect changes in sediment load with time in order to determine the extent of scour and fill during monsoonal storms. Because of the predominantly unconsolidated nature of the substrate it was hypothesized that dune bedforms would develop in events less than the 1-year flood. The weak 2002 monsoon season produced only two storms that completely inundated the point bar, both less than the 1-year flood event. The first event, 34 cms, produced net deposition in areas where Johnson grass had been present and was now buried. The scour sensor at the lowest elevation, in a depression which serves as a secondary channel during storm events, recorded scour during the rising limb of the hydrograph followed by pulses we interpret to be the passage of dunes. The second event, although smaller at 28 cms, resulted from rain more than 50 km upstream and had a much longer peak and a slowly declining falling limb. During the second flood, several areas with buried vegetation were scoured back to their original bed elevations. Pulses of sediment passed over the sensor in the secondary channel and the sensor in the vegetated zone. Scour sensor measurements agree with data from scour chains (error +/- 3 cm) and surveys (error +/- 0.6 cm) performed before and after the two storm events, within the range of error of each method. All load sensor data were recorded at five minute intervals. Use of a smaller interval could give more details about the shapes of sediment waves and aid in bedform determination. Results suggest that dune migration is the dominant mechanism for scour and backfill in the point bar setting. Scour sensors, when coupled with surveying and/or scour chains, are a tremendous addition to the geomorphologist's toolbox, allowing unattended real-time measurements of sediment depth with time.

  16. A measurement of CMB cluster lensing with SPT and DES year 1 data

    NASA Astrophysics Data System (ADS)

    Baxter, E. J.; Raghunathan, S.; Crawford, T. M.; Fosalba, P.; Hou, Z.; Holder, G. P.; Omori, Y.; Patil, S.; Rozo, E.; Abbott, T. M. C.; Annis, J.; Aylor, K.; Benoit-Lévy, A.; Benson, B. A.; Bertin, E.; Bleem, L.; Buckley-Geer, E.; Burke, D. L.; Carlstrom, J.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Chang, C. L.; Cho, H.-M.; Crites, A. T.; Crocce, M.; Cunha, C. E.; da Costa, L. N.; D'Andrea, C. B.; Davis, C.; de Haan, T.; Desai, S.; Dietrich, J. P.; Dobbs, M. A.; Dodelson, S.; Doel, P.; Drlica-Wagner, A.; Estrada, J.; Everett, W. B.; Fausti Neto, A.; Flaugher, B.; Frieman, J.; García-Bellido, J.; George, E. M.; Gaztanaga, E.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Halverson, N. W.; Harrington, N. L.; Hartley, W. G.; Holzapfel, W. L.; Honscheid, K.; Hrubes, J. D.; Jain, B.; James, D. J.; Jarvis, M.; Jeltema, T.; Knox, L.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Lee, A. T.; Leitch, E. M.; Li, T. S.; Lima, M.; Luong-Van, D.; Manzotti, A.; March, M.; Marrone, D. P.; Marshall, J. L.; Martini, P.; McMahon, J. J.; Melchior, P.; Menanteau, F.; Meyer, S. S.; Miller, C. J.; Miquel, R.; Mocanu, L. M.; Mohr, J. J.; Natoli, T.; Nord, B.; Ogando, R. L. C.; Padin, S.; Plazas, A. A.; Pryke, C.; Rapetti, D.; Reichardt, C. L.; Romer, A. K.; Roodman, A.; Ruhl, J. E.; Rykoff, E.; Sako, M.; Sanchez, E.; Sayre, J. T.; Scarpine, V.; Schaffer, K. K.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Shirokoff, E.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Staniszewski, Z.; Stark, A.; Story, K.; Suchyta, E.; Tarle, G.; Thomas, D.; Troxel, M. A.; Vanderlinde, K.; Vieira, J. D.; Walker, A. R.; Williamson, R.; Zhang, Y.; Zuntz, J.

    2018-05-01

    Clusters of galaxies gravitationally lens the cosmic microwave background (CMB) radiation, resulting in a distinct imprint in the CMB on arcminute scales. Measurement of this effect offers a promising way to constrain the masses of galaxy clusters, particularly those at high redshift. We use CMB maps from the South Pole Telescope Sunyaev-Zel'dovich (SZ) survey to measure the CMB lensing signal around galaxy clusters identified in optical imaging from first year observations of the Dark Energy Survey. The cluster catalogue used in this analysis contains 3697 members with mean redshift of $$\bar{z}$$ = 0.45. We detect lensing of the CMB by the galaxy clusters at 8.1σ significance. Using the measured lensing signal, we constrain the amplitude of the relation between cluster mass and optical richness to roughly 17 per cent precision, finding good agreement with recent constraints obtained with galaxy lensing. The error budget is dominated by statistical noise but includes significant contributions from systematic biases due to the thermal SZ effect and cluster miscentring.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darling, Christopher Lynn

    By determining the production cross sections for heavy flavor hadrons, we test the theoretical predictions from perturbative quantum chromodynamics (QCD). In the case of pion induced beauty production, the few published results do not resolve the issue of the applicability of perturbative QCD. This analysis is undertaken in order to help resolve this situation. We determine the total beauty and charm production cross sections using an analysis of single electron decay products. We extract the cross sections per nucleon from the two-dimensional distribution of electron p versus impact parameter (d) to the primary vertex. We place an upper limit on the beauty production cross section of σ$$b\bar{b}$$ < 105 nb at the 90% confidence level, where the limit includes both statistical and systematic errors. The charm production cross section is determined to be σ$$c\bar{c}$$ = 13.9 +2.4/-2.3 (stat) ± 1.8 (syst) μb, which is in good agreement with next-to-leading order QCD predictions and other measurements.

  18. Investigation of local evaporation flux and vapor-phase pressure at an evaporative droplet interface.

    PubMed

    Duan, Fei; Ward, C A

    2009-07-07

    In the steady-state experiments of water droplet evaporation, when heating was applied at the throat of a stainless steel conical funnel, the interfacial liquid temperature was found to increase parabolically from the center line to the rim of the funnel, with the global vapor-phase pressure at around 600 Pa. The energy conservation analysis at the interface indicates that the energy required for evaporation is maintained by thermal conduction to the interface from the liquid and vapor phases, thermocapillary convection at the interface, and viscous dissipation, both globally and locally. The local evaporation flux increases from the center line to the periphery as a result of multiple effects of energy transport at the interface. The local vapor-phase pressure predicted from statistical rate theory (SRT) is also found to increase monotonically toward the interface edge from the center line. However, the average value of the local vapor-phase pressures is in agreement with the measured global vapor-phase pressure within the measured error bar.

  19. Applying Quantum Monte Carlo to the Electronic Structure Problem

    NASA Astrophysics Data System (ADS)

    Powell, Andrew D.; Dawes, Richard

    2016-06-01

    Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC) were tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.; Abbott, B.; Abdallah, J.

    The ATLAS measurement of the inclusive top quark pair ($$t\bar{t}$$) cross-section σ$$t\bar{t}$$ in proton-proton collisions at √s = 8 TeV has been updated using the final 2012 luminosity calibration. The updated cross-section result is: σ$$t\bar{t}$$ = 242.9 ± 1.7 ± 5.5 ± 5.1 ± 4.2 pb, where the four uncertainties arise from data statistics, experimental and theoretical systematic effects, knowledge of the integrated luminosity and of the LHC beam energy. The result is consistent with theoretical QCD calculations at next-to-next-to-leading order. The measurement of the ratio of $$t\bar{t}$$ cross-sections at √s = 8 TeV and √s = 7 TeV, and the √s = 8 TeV fiducial measurement corresponding to the experimental acceptance of the leptons, have also been updated.

  1. The Bologna Annotation Resource (BAR 3.0): improving protein functional annotation.

    PubMed

    Profiti, Giuseppe; Martelli, Pier Luigi; Casadio, Rita

    2017-07-03

    BAR 3.0 updates our server BAR (Bologna Annotation Resource) for predicting protein structural and functional features from sequence. We increase data volume, query capabilities and information conveyed to the user. The core of BAR 3.0 is a graph-based clustering procedure of UniProtKB sequences, following strict pairwise similarity criteria (sequence identity ≥40% with alignment coverage ≥90%). Each cluster contains the available annotation downloaded from UniProtKB, GO, PFAM and PDB. After statistical validation, GO terms and PFAM domains are cluster-specific and annotate new sequences entering the cluster after satisfying similarity constraints. BAR 3.0 includes 28 869 663 sequences in 1 361 773 clusters, of which 22.2% (22 241 661 sequences) and 47.4% (24 555 055 sequences) have at least one validated GO term and one PFAM domain, respectively. 1.4% of the clusters (36% of all sequences) include PDB structures and the cluster is associated to a hidden Markov model that allows building template-target alignment suitable for structural modeling. Some other 3 399 026 sequences are singletons. BAR 3.0 offers an improved search interface, allowing queries by UniProtKB-accession, Fasta sequence, GO-term, PFAM-domain, organism, PDB and ligand/s. When evaluated on the CAFA2 targets, BAR 3.0 largely outperforms our previous version and scores among state-of-the-art methods. BAR 3.0 is publicly available and accessible at http://bar.biocomp.unibo.it/bar3. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
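
    A minimal sketch of the graph-based clustering idea described above, assuming pairwise identity and coverage values are already available: sequences whose alignment meets both thresholds end up in the same connected component (here via a simple union-find). The toy input is invented, and this is not the BAR pipeline itself.

```python
# Sketch: cluster sequences by pairwise similarity edges (identity >= 40%,
# coverage >= 90%); clusters are the connected components of the graph.
from collections import defaultdict

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path compression
        i = parent[i]
    return i

def cluster(n_seqs, pairwise):
    """pairwise: list of (i, j, identity, coverage) tuples."""
    parent = list(range(n_seqs))
    for i, j, ident, cov in pairwise:
        if ident >= 0.40 and cov >= 0.90:        # similarity criteria
            ri, rj = find(parent, i), find(parent, j)
            parent[ri] = rj                       # union
    groups = defaultdict(list)
    for i in range(n_seqs):
        groups[find(parent, i)].append(i)
    return list(groups.values())

# toy example: 5 sequences, two of the comparisons pass the thresholds
print(cluster(5, [(0, 1, 0.55, 0.95), (1, 2, 0.35, 0.99), (3, 4, 0.80, 0.92)]))
# -> e.g. [[0, 1], [2], [3, 4]]
```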

  2. Information technology and medication safety: what is the benefit?

    PubMed Central

    Kaushal, R; Bates, D

    2002-01-01

    

 Medication errors occur frequently and have significant clinical and financial consequences. Several types of information technologies can be used to decrease rates of medication errors. Computerized physician order entry with decision support significantly reduces serious inpatient medication error rates in adults. Other available information technologies that may prove effective for inpatients include computerized medication administration records, robots, automated pharmacy systems, bar coding, "smart" intravenous devices, and computerized discharge prescriptions and instructions. In outpatients, computerization of prescribing and patient oriented approaches such as personalized web pages and delivery of web based information may be important. Public and private mandates for information technology interventions are growing, but further development, application, evaluation, and dissemination are required. PMID:12486992

  3. The economic impact of Mexico City's smoke-free law.

    PubMed

    López, Carlos Manuel Guerrero; Ruiz, Jorge Alberto Jiménez; Shigematsu, Luz Myriam Reynales; Waters, Hugh R

    2011-07-01

    To evaluate the economic impact of Mexico City's 2008 smoke-free law--The Non-Smokers' Health Protection Law on restaurants, bars and nightclubs. We used the Monthly Services Survey of businesses from January 2005 to April 2009--with revenues, employment and payments to employees as the principal outcomes. The results are estimated using a differences-in-differences regression model with fixed effects. The states of Jalisco, Nuevo León and México, where the law was not in effect, serve as a counterfactual comparison group. In restaurants, after accounting for observable factors and the fixed effects, there was a 24.8% increase in restaurants' revenue associated with the smoke-free law. This difference is not statistically significant but shows that, on average, restaurants did not suffer economically as a result of the law. Total wages increased by 28.2% and employment increased by 16.2%. In nightclubs, bars and taverns there was a decrease of 1.5% in revenues and an increase of 0.1% and 3.0%, respectively, in wages and employment. None of these effects are statistically significant in multivariate analysis. There is no statistically significant evidence that the Mexico City smoke-free law had a negative impact on restaurants' income, employees' wages and levels of employment. On the contrary, the results show a positive, though statistically non-significant, impact of the law on most of these outcomes. Mexico City's experience suggests that smoke-free laws in Mexico and elsewhere will not hurt economic productivity in the restaurant and bar industries.
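
    For readers unfamiliar with the estimation strategy, the following is a hedged sketch of a differences-in-differences regression with unit and month fixed effects on synthetic data; the column names, panel layout, and effect sizes are assumptions for illustration, not the survey microdata used in the study.

```python
# Sketch of a differences-in-differences model with fixed effects; the main
# effects of "treated" and "post" are absorbed by the unit and month dummies,
# so only the interaction (the DiD term) is estimated explicitly.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
units, months = 40, 24
df = pd.DataFrame([
    {"unit": u, "month": m,
     "treated": int(u < 20),              # hypothetical treated establishments
     "post": int(m >= 12)}                # law assumed in force from month 12
    for u in range(units) for m in range(months)
])
df["log_revenue"] = (
    10 + 0.1 * df["treated"] + 0.05 * df["post"]
    + 0.20 * df["treated"] * df["post"]   # the DiD effect of interest
    + rng.normal(0, 0.3, len(df))
)

model = smf.ols("log_revenue ~ treated:post + C(unit) + C(month)", data=df).fit()
print(model.params["treated:post"], model.pvalues["treated:post"])
```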

  4. Microbial flora of in-use soap products.

    PubMed Central

    McBride, M E

    1984-01-01

    A comparison has been made of the in-use bacterial load of two bar soaps with and without antibacterials and two liquid soaps in five different locations over a 1-week period. Of the 25 samples taken from each soap, 92 to 96% of samples from bar soaps were culture positive as compared to 8% of those from liquid soaps. Bacterial populations ranged from 0 to 3.8 log CFU per sample for bar soaps and from 0 to 2.0 log CFU per sample for liquid soaps. The mean bacterial populations per sample were 1.96 and 2.47 log CFU for the two bar soaps, and 0.08 and 0.12 log CFU for the two liquid soaps. The difference in bacterial population between bar soaps and liquid soaps was statistically significant (P = 0.005). Staphylococcus aureus was isolated on three occasions from bar soaps but not from liquid soaps. S. aureus was isolated twice from the exterior of the plastic dispensers of liquid soap but not from the soap itself. Gram-negative bacteria were cultured only from soaps containing antibacterials. Bacterial populations on bar soaps were not high compared with bacterial populations on hands, and the flora was continually changing without evidence of a carrier state. PMID:6486782

  5. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.

  6. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.

  7. Fermi-Pasta-Ulam-Tsingou problems: Passage from Boltzmann to q-statistics

    NASA Astrophysics Data System (ADS)

    Bagchi, Debarshee; Tsallis, Constantino

    2018-02-01

    The Fermi-Pasta-Ulam (FPU) one-dimensional Hamiltonian includes a quartic term which guarantees ergodicity of the system in the thermodynamic limit. Consistently, the Boltzmann factor P(ε) ∼ e^(−βε) describes its equilibrium distribution of one-body energies, and its velocity distribution is Maxwellian, i.e., P(v) ∼ e^(−βv²/2). We consider here a generalized system where the quartic coupling constant between sites decays as 1/d_ij^α (α ≥ 0; d_ij = 1, 2, …). Through first-principle molecular dynamics we demonstrate that, for large α (above α ≃ 1), i.e., short-range interactions, Boltzmann statistics (based on the additive entropic functional S_B[P(z)] = −k ∫ dz P(z) ln P(z)) is verified. However, for small values of α (below α ≃ 1), i.e., long-range interactions, Boltzmann statistics dramatically fails and is replaced by q-statistics (based on the nonadditive entropic functional S_q[P(z)] = k(1 − ∫ dz [P(z)]^q)/(q − 1), with S_1 = S_B). Indeed, the one-body energy distribution is q-exponential, P(ε) ∼ e_{q_ε}^(−β_ε ε) ≡ [1 + (q_ε − 1)β_ε ε]^(−1/(q_ε − 1)) with q_ε > 1, and its velocity distribution is given by P(v) ∼ e_{q_v}^(−β_v v²/2) with q_v > 1. Moreover, within small error bars, we verify q_ε = q_v = q, which decreases from an extrapolated value q ≃ 5/3 to q = 1 when α increases from zero to α ≃ 1, and remains q = 1 thereafter.
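
    The q-exponential that replaces the Boltzmann factor above can be written compactly in code; the sketch below (illustrative parameter values only) shows that it reduces to the ordinary exponential as q → 1 and develops a fat tail for q > 1.

```python
# Sketch of the q-exponential: e_q^x = [1 + (1-q) x]^(1/(1-q)), which tends
# to exp(x) as q -> 1. Parameter values below are illustrative only.
import numpy as np

def q_exp(x, q):
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

eps = np.linspace(0, 5, 6)
beta = 1.0
print(q_exp(-beta * eps, 5 / 3))   # long-range-like case, q ~ 5/3: fat tail
print(q_exp(-beta * eps, 1.0))     # short-range-like case: Boltzmann factor
```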

  8. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here with a number of simplifying assumption, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.

  9. Improved Statistics for Genome-Wide Interaction Analysis

    PubMed Central

    Ueki, Masao; Cordell, Heather J.

    2012-01-01

    Recently, Wu and colleagues [1] proposed two novel statistics for genome-wide interaction analysis using case/control or case-only data. In computer simulations, their proposed case/control statistic outperformed competing approaches, including the fast-epistasis option in PLINK and logistic regression analysis under the correct model; however, reasons for its superior performance were not fully explored. Here we investigate the theoretical properties and performance of Wu et al.'s proposed statistics and explain why, in some circumstances, they outperform competing approaches. Unfortunately, we find minor errors in the formulae for their statistics, resulting in tests that have higher than nominal type 1 error. We also find minor errors in PLINK's fast-epistasis and case-only statistics, although theory and simulations suggest that these errors have only negligible effect on type 1 error. We propose adjusted versions of all four statistics that, both theoretically and in computer simulations, maintain correct type 1 error rates under the null hypothesis. We also investigate statistics based on correlation coefficients that maintain similar control of type 1 error. Although designed to test specifically for interaction, we show that some of these previously-proposed statistics can, in fact, be sensitive to main effects at one or both loci, particularly in the presence of linkage disequilibrium. We propose two new “joint effects” statistics that, provided the disease is rare, are sensitive only to genuine interaction effects. In computer simulations we find, in most situations considered, that highest power is achieved by analysis under the correct genetic model. Such an analysis is unachievable in practice, as we do not know this model. However, generally high power over a wide range of scenarios is exhibited by our joint effects and adjusted Wu statistics. We recommend use of these alternative or adjusted statistics and urge caution when using Wu et al.'s originally-proposed statistics, on account of the inflated error rate that can result. PMID:22496670

  10. ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.

    2011-04-20

    While considerable advance has been made to account for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have been generally ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
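
    A generic sketch of the principal-component summarization step described above, assuming an ensemble of plausible effective-area curves is available as rows of a matrix; the synthetic curves and the 95% variance cutoff are illustrative choices, not the Chandra calibration products or the authors' implementation.

```python
# Sketch: summarize an ensemble of plausible calibration curves with a PCA
# (via SVD of the residuals about the mean), then draw a new plausible curve.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_energy = 500, 200
base = 100 + 20 * np.sin(np.linspace(0, 3, n_energy))     # stand-in mean curve
areas = base + rng.normal(0, 2, (n_samples, n_energy)).cumsum(axis=1) * 0.1

mean_area = areas.mean(axis=0)
resid = areas - mean_area
u, s, vt = np.linalg.svd(resid, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()
n_keep = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
print(f"{n_keep} components capture 95% of the calibration variance")

# draw a new plausible curve: mean + sum_k g_k * sigma_k * PC_k
g = rng.normal(size=n_keep)
new_curve = mean_area + (g * s[:n_keep] / np.sqrt(n_samples - 1)) @ vt[:n_keep]
```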

  11. Author Correction: Nanoscale control of competing interactions and geometrical frustration in a dipolar trident lattice.

    PubMed

    Farhan, Alan; Petersen, Charlotte F; Dhuey, Scott; Anghinolfi, Luca; Qin, Qi Hang; Saccone, Michael; Velten, Sven; Wuth, Clemens; Gliga, Sebastian; Mellado, Paula; Alava, Mikko J; Scholl, Andreas; van Dijken, Sebastiaan

    2017-12-12

    The original version of this article contained an error in the legend to Figure 4. The yellow scale bar should have been defined as '~600 nm', not '~600 µm'. This has now been corrected in both the PDF and HTML versions of the article.

  12. LOCATING NEARBY SOURCES OF AIR POLLUTION BY NONPARAMETRIC REGRESSION OF ATMOSPHERIC CONCENTRATIONS ON WIND DIRECTION. (R826238)

    EPA Science Inventory

    The relationship of the concentration of air pollutants to wind direction has been determined by nonparametric regression using a Gaussian kernel. The results are smooth curves with error bars that allow for the accurate determination of the wind direction where the concentrat...
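
    A hedged sketch of the kind of kernel regression described, on invented data: a Nadaraya-Watson estimate of concentration versus wind direction with a Gaussian kernel wrapped at 360°, plus bootstrap error bars. The bandwidth and the data are assumptions for illustration, not the method's exact configuration in the cited work.

```python
# Sketch: Gaussian-kernel (Nadaraya-Watson) regression of concentration on
# wind direction, with the angular difference wrapped so 359 deg is near 1 deg,
# and pointwise bootstrap error bars on the smoothed curve.
import numpy as np

rng = np.random.default_rng(3)
theta = rng.uniform(0, 360, 500)                       # wind direction, degrees
conc = 5 + 8 * np.exp(-0.5 * ((theta - 120) / 20) ** 2) + rng.normal(0, 1, 500)

def nw_smooth(theta_obs, y_obs, grid, bw=10.0):
    d = (grid[:, None] - theta_obs[None, :] + 180) % 360 - 180
    w = np.exp(-0.5 * (d / bw) ** 2)
    return (w * y_obs).sum(axis=1) / w.sum(axis=1)

grid = np.arange(0, 360, 5.0)
fit = nw_smooth(theta, conc, grid)

boots = np.array([nw_smooth(theta[idx], conc[idx], grid)
                  for idx in rng.integers(0, 500, (200, 500))])
err = boots.std(axis=0)                                # pointwise error bars
print(grid[np.argmax(fit)], fit.max(), err[np.argmax(fit)])
```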

  13. Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock

    NASA Technical Reports Server (NTRS)

    Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.

    2001-01-01

    Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, it is observed that the probability distribution P̄(log E) is power-law in the wave field E, with the bar denoting this averaging over position. In this paper it is shown that stochastic growth theory (SGT) can explain a power-law spatially-averaged distribution P̄(log E), when the observed power-law variations of the mean and standard deviation of log E with position are combined with the log normal statistics predicted by SGT at each location.
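
    The mechanism can be illustrated with a toy Monte Carlo: draw log-normal fields whose mean and spread vary with position as power laws, pool them over position, and check that the averaged distribution of log E develops an approximately power-law form. All exponents and ranges below are invented for illustration.

```python
# Toy sketch of the SGT averaging argument: lognormal statistics at each
# position, power-law variation of the mean and spread with position, and a
# roughly power-law spatially averaged distribution of log E after pooling.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(1.0, 100.0, 200_000)       # positions in the foreshock (toy)
mu = -2.0 * np.log10(x)                    # mean of log10 E vs position
sigma = 0.5 + 0.3 * np.log10(x)            # spread of log10 E vs position
logE = rng.normal(mu, sigma)               # lognormal statistics at each x

hist, edges = np.histogram(logE, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
# a roughly straight line of log10 P(log E) vs log E signals a power law in E
slope = np.polyfit(centers[mask], np.log10(hist[mask]), 1)[0]
print(f"approximate power-law slope of P(log E): {slope:.2f}")
```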

  14. Color Histogram Diffusion for Image Enhancement

    NASA Technical Reports Server (NTRS)

    Kim, Taemin

    2011-01-01

    Various color histogram equalization (CHE) methods have been proposed to extend grayscale histogram equalization (GHE) for color images. In this paper a new method called histogram diffusion that extends the GHE method to arbitrary dimensions is proposed. Ranges in a histogram are specified as overlapping bars of uniform heights and variable widths which are proportional to their frequencies. This diagram is called the vistogram. As an alternative approach to GHE, the squared error of the vistogram from the uniform distribution is minimized. Each bar in the vistogram is approximated by a Gaussian function. Gaussian particles in the vistogram diffuse as a nonlinear autonomous system of ordinary differential equations. CHE results of color images showed that the approach is effective.

  15. Evaluating diffraction based overlay metrology for double patterning technologies

    NASA Astrophysics Data System (ADS)

    Saravanan, Chandra Saru; Liu, Yongdong; Dasari, Prasad; Kritsun, Oleg; Volkman, Catherine; Acheta, Alden; La Fontaine, Bruno

    2008-03-01

    Demanding sub-45 nm node lithographic methodologies such as double patterning (DPT) pose significant challenges for overlay metrology. In this paper, we investigate scatterometry methods as an alternative approach to meet these stringent new metrology requirements. We used a spectroscopic diffraction-based overlay (DBO) measurement technique in which registration errors are extracted from specially designed diffraction targets for double patterning. The results of overlay measurements are compared to traditional bar-in-bar targets. A comparison between DBO measurements and CD-SEM measurements is done to show the correlation between the two approaches. We discuss the total measurement uncertainty (TMU) requirements for sub-45 nm nodes and compare TMU from the different overlay approaches.

  16. The Thurgood Marshall School of Law Empirical Findings: A Report of Differences of Texas Bar Passing Percentages of Students Receiving the TMSL Scholarship during the Years 2005-2009 versus Those Not Receiving the TMSL Scholarship

    ERIC Educational Resources Information Center

    Kadhi, T.; Holley, D.; Garrison, P.; Green, T.

    2010-01-01

    The following report of descriptive/inferential statistics describes the population of students receiving the Thurgood Marshall School of Law (TMSL) scholarship versus those who do not, and their relationship with student Bar passing rate and GPA. The timeline observed is the calendar years 2005-2009. Data collection and analysis for this…

  17. The Red Edge Problem in asteroid band parameter analysis

    NASA Astrophysics Data System (ADS)

    Lindsay, Sean S.; Dunn, Tasha L.; Emery, Joshua P.; Bowles, Neil E.

    2016-04-01

    Near-infrared reflectance spectra of S-type asteroids contain two absorptions at 1 and 2 μm (band I and II) that are diagnostic of mineralogy. A parameterization of these two bands is frequently employed to determine the mineralogy of S(IV) asteroids through the use of ordinary chondrite calibration equations that link the mineralogy to band parameters. The most widely used calibration study uses a Band II terminal wavelength point (red edge) at 2.50 μm. However, due to the limitations of the NIR detectors on prominent telescopes used in asteroid research, spectral data for asteroids are typically only reliable out to 2.45 μm. We refer to this discrepancy as "The Red Edge Problem." In this report, we evaluate the associated errors for measured band area ratios (BAR = Area BII/BI) and calculated relative abundance measurements. We find that the Red Edge Problem is often not the dominant source of error for the observationally limited red edge set at 2.45 μm, but it frequently is for a red edge set at 2.40 μm. The error, however, is one sided and therefore systematic. As such, we provide equations to adjust measured BARs to values with a different red edge definition. We also provide new ol/(ol+px) calibration equations for red edges set at 2.40 and 2.45 μm.
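
    A simplified sketch of the band-parameter calculation at issue, on a synthetic reflectance spectrum: integrate each band's area below a straight-line continuum and form BAR = Area(BII)/Area(BI), once with the Band II red edge at 2.45 μm and once at 2.50 μm. The band limits and the toy spectrum are assumptions for illustration, not the calibration of the cited studies.

```python
# Sketch: band area ratio from a reflectance spectrum, showing how the choice
# of the Band II red edge (2.45 vs 2.50 um) shifts the measured BAR one-sidedly.
import numpy as np

def band_area(wave, refl, w_lo, w_hi):
    m = (wave >= w_lo) & (wave <= w_hi)
    w, r = wave[m], refl[m]
    continuum = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])   # linear continuum
    return np.trapz(continuum - r, w)                        # area below continuum

wave = np.linspace(0.75, 2.50, 500)
refl = (1.0 - 0.25 * np.exp(-0.5 * ((wave - 1.0) / 0.10) ** 2)
            - 0.15 * np.exp(-0.5 * ((wave - 1.9) / 0.25) ** 2))  # toy 1 & 2 um bands

bi = band_area(wave, refl, 0.8, 1.4)
bar_245 = band_area(wave, refl, 1.4, 2.45) / bi
bar_250 = band_area(wave, refl, 1.4, 2.50) / bi
print(f"BAR(2.45) = {bar_245:.3f}   BAR(2.50) = {bar_250:.3f}  (one-sided difference)")
```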

  18. Characteristic study of flat spray nozzle by using particle image velocimetry (PIV) and ANSYS simulation method

    NASA Astrophysics Data System (ADS)

    Pairan, M. Rasidi; Asmuin, Norzelawati; Isa, Nurasikin Mat; Sies, Farid

    2017-04-01

    Water mist sprays are used in a wide range of applications; however, the spray characteristics must suit the particular application. This project studies the water droplet velocity and penetration angle generated by a newly developed mist spray nozzle with a flat spray pattern. The research was conducted in two parts: experiments and simulation. The experiments used the particle image velocimetry (PIV) method, ANSYS software was used for the simulations, and ImageJ software was used to measure the penetration angle. Three different combinations of air and water pressure were tested: 1 bar (case A), 2 bar (case B) and 3 bar (case C). The flat spray generated by the newly developed nozzle was examined along a 9 cm vertical line located 8 cm from the nozzle orifice. The detailed analysis shows that the trend of velocity versus distance gives good agreement between simulation and experiment for all pressure combinations. As the water and air pressure increased from 1 bar to 2 bar, the velocity and penetration angle also increased; however, for case C, run at 3 bar, the water droplet velocity increased but the penetration angle decreased. All data were then validated by calculating the error between experiment and simulation. Comparing the simulation data to the experimental data for all cases, the standard deviations for case A, case B and case C are relatively small: 5.444, 0.8242 and 6.4023, respectively.

  19. Satellite Sampling and Retrieval Errors in Regional Monthly Rain Estimates from TMI AMSR-E, SSM/I, AMSU-B and the TRMM PR

    NASA Technical Reports Server (NTRS)

    Fisher, Brad; Wolff, David B.

    2010-01-01

    Passive and active microwave rain sensors onboard earth-orbiting satellites estimate monthly rainfall from the instantaneous rain statistics collected during satellite overpasses. It is well known that climate-scale rain estimates from meteorological satellites incur sampling errors resulting from the process of discrete temporal sampling and statistical averaging. Sampling and retrieval errors ultimately become entangled in the estimation of the mean monthly rain rate. The sampling component of the error budget effectively introduces statistical noise into climate-scale rain estimates that obscures the error component associated with the instantaneous rain retrieval. Estimating the accuracy of the retrievals on monthly scales therefore necessitates a decomposition of the total error budget into sampling and retrieval error quantities. This paper presents results from a statistical evaluation of the sampling and retrieval errors for five different space-borne rain sensors on board nine orbiting satellites. Using an error decomposition methodology developed by one of the authors, sampling and retrieval errors were estimated at 0.25° resolution within 150 km of ground-based weather radars located at Kwajalein, Marshall Islands and Melbourne, Florida. Error and bias statistics were calculated according to the land, ocean and coast classifications of the surface terrain mask developed for the Goddard Profiling (GPROF) rain algorithm. Variations in the comparative error statistics are attributed to various factors related to differences in the swath geometry of each rain sensor, the orbital and instrument characteristics of the satellite and the regional climatology. The most significant result from this study found that each of the satellites incurred negative long-term oceanic retrieval biases of 10 to 30%.

  20. Health and efficiency in trimix versus air breathing in compressed air workers.

    PubMed

    Van Rees Vellinga, T P; Verhoeven, A C; Van Dijk, F J H; Sterk, W

    2006-01-01

    The Western Scheldt Tunneling Project in the Netherlands provided a unique opportunity to evaluate the effects of trimix usage on the health of compressed air workers and the efficiency of the project. Data analysis addressed 318 exposures to compressed air at 3.9-4.4 bar gauge and 52 exposures to trimix (25% oxygen, 25% helium, and 50% nitrogen) at 4.6-4.8 bar gauge. Results revealed three incidents of decompression sickness all of which involved the use of compressed air. During exposure to compressed air, the effects of nitrogen narcosis were manifested in operational errors and increased fatigue among the workers. When using trimix, less effort was required for breathing, and mandatory decompression times for stays of a specific duration and maximum depth were considerably shorter. We conclude that it might be rational--for both medical and operational reasons--to use breathing gases with lower nitrogen fractions (e.g., trimix) for deep-caisson work at pressures exceeding 3 bar gauge, although definitive studies are needed.

  1. Three methods of presenting flight vector information in a head-up display during simulated STOL approaches

    NASA Technical Reports Server (NTRS)

    Dwyer, J. H., III; Palmer, E. A., III

    1975-01-01

    A simulator study was conducted to determine the usefulness of adding flight path vector symbology to a head-up display designed to improve glide-slope tracking performance during steep 7.5 deg visual approaches in STOL aircraft. All displays included a fixed attitude symbol, a pitch- and roll-stabilized horizon bar, and a glide-slope reference bar parallel to and 7.5 deg below the horizon bar. The displays differed with respect to the flight-path marker (FPM) symbol: display 1 had no FPM symbol; display 2 had an air-referenced FPM, and display 3 had a ground-referenced FPM. No differences between displays 1 and 2 were found on any of the performance measures. Display 3 was found to decrease height error in the early part of the approach and to reduce descent rate variation over the entire approach. Two measures of workload did not indicate any differences between the displays.

  2. Turbulent heat flux measurements in a transitional boundary layer

    NASA Technical Reports Server (NTRS)

    Sohn, K. H.; Zaman, K. B. M. Q.; Reshotko, E.

    1992-01-01

    During an experimental investigation of the transitional boundary layer over a heated flat plate, an unexpected result was encountered for the turbulent heat flux (bar-v't'). This quantity, representing the correlation between the fluctuating normal velocity and the temperature, was measured to be negative near the wall under certain conditions. The result was unexpected as it implied a counter-gradient heat transfer by the turbulent fluctuations. Possible reasons for this anomalous result were further investigated. The possible causes considered for this negative bar-v't' were: (1) plausible measurement error and peculiarity of the flow facility, (2) large probe size effect, (3) 'streaky structure' in the near wall boundary layer, and (4) contributions from other terms usually assumed negligible in the energy equation including the Reynolds heat flux in the streamwise direction (bar-u't'). Even though the energy balance has remained inconclusive, none of the items (1) to (3) appear to be contributing directly to the anomaly.

  3. Application of Gurson–Tvergaard–Needleman Constitutive Model to the Tensile Behavior of Reinforcing Bars with Corrosion Pits

    PubMed Central

    Xu, Yidong; Qian, Chunxiang

    2013-01-01

    Based on meso-damage mechanics and finite element analysis, the aim of this paper is to describe the feasibility of the Gurson–Tvergaard–Needleman (GTN) constitutive model in describing the tensile behavior of corroded reinforcing bars. The orthogonal test results showed that different fracture pattern and the related damage evolution process can be simulated by choosing different material parameters of GTN constitutive model. Compared with failure parameters, the two constitutive parameters are significant factors affecting the tensile strength. Both the nominal yield and ultimate tensile strength decrease markedly with the increase of constitutive parameters. Combining with the latest data and trial-and-error method, the suitable material parameters of GTN constitutive model were adopted to simulate the tensile behavior of corroded reinforcing bars in concrete under carbonation environment attack. The numerical predictions can not only agree very well with experimental measurements, but also simplify the finite element modeling process. PMID:23342140

  4. Quantifying surgical access in eyebrow craniotomy with and without orbital bar removal: cadaver and surgical phantom studies.

    PubMed

    Zador, Zsolt; Coope, David J; Gnanalingham, Kanna; Lawton, Michael T

    2014-04-01

    Eyebrow craniotomy is a recently described minimally invasive approach for tackling primarily pathology of the anterior skull base. The removal of the orbital bar may further expand the surgical corridor of this exposure, but the extent of benefit is poorly quantified. We assessed the effect of orbital bar removal with regards to surgical access in the eyebrow craniotomy using classic morphometric measurements in cadaver heads. Using surgical phantoms and neuronavigation, we also measured the 'working volume', a new parameter for characterising the volume of surgical access in these approaches. Silicon injected cadaver heads (n = 5) were used for morphometric analysis of the eyebrow craniotomy with and without orbital bar removal. Working depths and 'working areas' of surgical access were measured as defined by key anatomical landmarks. The eyebrow craniotomy with or without orbital bar removal was also simulated using surgical phantoms (n = 3, 90-120 points per trial), calibrated against a frameless neuronavigation system. Working volume was derived from reference coordinates recorded along the anatomical borders of the eyebrow craniotomy using the "α-shape algorithm" in R statistics. In cadaver heads, eyebrow craniotomy with removal of the orbital bar reduced the working depth to the ipsilateral anterior clinoid process (42 ± 2 versus 33 ± 3 mm; p < 0.05), but the working areas as defined by deep neurovascular and bony landmarks were statistically unchanged (total working areas of 418 ± 80 cm² versus 334 ± 48 cm²; p = 0.4). In surgical phantom studies, however, the working volume for the simulated eyebrow craniotomies was increased with orbital bar removal (16 ± 1 cm³ versus 21 ± 1 cm³; p < 0.01). In laboratory studies, orbital bar removal in eyebrow craniotomy provides a modest reduction in working depth and an increase in working volume. But this must be weighed up against the added morbidity of the procedure. Working volume, a newly developed parameter, may provide a more meaningful endpoint for characterising the surgical access for different surgical approaches and it could be applied to other operative cases undertaken with frameless neuronavigation.
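
    A rough sketch of turning navigation points recorded along the corridor borders into a working volume. The study used the α-shape algorithm in R; the stand-in below uses a convex hull from scipy on synthetic points, which is simpler and generally more permissive, so it should be read as an upper-bound-style approximation rather than the study's method.

```python
# Sketch: estimate a "working volume" from a cloud of neuronavigation points
# on the boundary of a surgical corridor, using a convex hull as a stand-in
# for the alpha-shape used in the study. Points below are synthetic (mm).
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)
pts = rng.normal(size=(100, 3)) * np.array([15.0, 10.0, 25.0])  # ~100 points

hull = ConvexHull(pts)
volume_cm3 = hull.volume / 1000.0        # mm^3 -> cm^3
print(f"working volume ~ {volume_cm3:.1f} cm^3")
```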

  5. Addendum to ‘Measurement of the $$t\\bar{t}$$ production cross-section using $$e\\mu $$ events with b-tagged jets in pp collisions at $$\\sqrt{s}$$ = 7 and 8 $$\\,\\mathrm{TeV}$$with the ATLAS detector’

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2016-11-23

    The ATLAS measurement of the inclusive top quark pair ($$t\bar{t}$$) cross-section σ$$t\bar{t}$$ in proton-proton collisions at √s = 8 TeV has been updated using the final 2012 luminosity calibration. The updated cross-section result is: σ$$t\bar{t}$$ = 242.9 ± 1.7 ± 5.5 ± 5.1 ± 4.2 pb, where the four uncertainties arise from data statistics, experimental and theoretical systematic effects, knowledge of the integrated luminosity and of the LHC beam energy. The result is consistent with theoretical QCD calculations at next-to-next-to-leading order. The measurement of the ratio of $$t\bar{t}$$ cross-sections at √s = 8 TeV and √s = 7 TeV, and the √s = 8 TeV fiducial measurement corresponding to the experimental acceptance of the leptons, have also been updated.

  6. Patient Safety: Moving the Bar in Prison Health Care Standards

    PubMed Central

    Greifinger, Robert B.; Mellow, Jeff

    2010-01-01

    Improvements in community health care quality through error reduction have been slow to transfer to correctional settings. We convened a panel of correctional experts, which recommended 60 patient safety standards focusing on such issues as creating safety cultures at organizational, supervisory, and staff levels through changes to policy and training and by ensuring staff competency, reducing medication errors, encouraging the seamless transfer of information between and within practice settings, and developing mechanisms to detect errors or near misses and to shift the emphasis from blaming staff to fixing systems. To our knowledge, this is the first published set of standards focusing on patient safety in prisons, adapted from the emerging literature on quality improvement in the community. PMID:20864714

  7. Galaxy-scale Bars in Late-type Sloan Digital Sky Survey Galaxies Do Not Influence the Average Accretion Rates of Supermassive Black Holes

    NASA Astrophysics Data System (ADS)

    Goulding, A. D.; Matthaey, E.; Greene, J. E.; Hickox, R. C.; Alexander, D. M.; Forman, W. R.; Jones, C.; Lehmer, B. D.; Griffis, S.; Kanek, S.; Oulmakki, M.

    2017-07-01

    Galaxy-scale bars are expected to provide an effective means for driving material toward the central region in spiral galaxies, and possibly feeding supermassive black holes (BHs). Here we present a statistically complete study of the effect of bars on average BH accretion. From a well-selected sample of 50,794 spiral galaxies (with M* ∼ 0.2-30 × 10¹⁰ M⊙) extracted from the Sloan Digital Sky Survey Galaxy Zoo 2 project, we separate those sources considered to contain galaxy-scale bars from those that do not. Using archival data taken by the Chandra X-ray Observatory, we identify X-ray luminous (L_X ≳ 10⁴¹ erg s⁻¹) active galactic nuclei and perform an X-ray stacking analysis on the remaining X-ray undetected sources. Through X-ray stacking, we derive a time-averaged look at accretion for galaxies at fixed stellar mass and star-formation rate, finding that the average nuclear accretion rates of galaxies with bar structures are fully consistent with those lacking bars (Ṁ_acc ≈ 3 × 10⁻⁵ M⊙ yr⁻¹). Hence, we robustly conclude that large-scale bars have little or no effect on the average growth of BHs in nearby (z < 0.15) galaxies over gigayear timescales.

  8. The economic impact of Mexico City's smoke-free law

    PubMed Central

    Guerrero López, Carlos Manuel; Jiménez Ruiz, Jorge Alberto; Reynales Shigematsu, Luz Myriam

    2011-01-01

    Objective To evaluate the economic impact of Mexico City's 2008 smoke-free law—The Non-Smokers' Health Protection Law on restaurants, bars and nightclubs. Material and methods We used the Monthly Services Survey of businesses from January 2005 to April 2009—with revenues, employment and payments to employees as the principal outcomes. The results are estimated using a differences-in-differences regression model with fixed effects. The states of Jalisco, Nuevo León and México, where the law was not in effect, serve as a counterfactual comparison group. Results In restaurants, after accounting for observable factors and the fixed effects, there was a 24.8% increase in restaurants' revenue associated with the smoke-free law. This difference is not statistically significant but shows that, on average, restaurants did not suffer economically as a result of the law. Total wages increased by 28.2% and employment increased by 16.2%. In nightclubs, bars and taverns there was a decrease of 1.5% in revenues and an increase of 0.1% and 3.0%, respectively, in wages and employment. None of these effects are statistically significant in multivariate analysis. Conclusions There is no statistically significant evidence that the Mexico City smoke-free law had a negative impact on restaurants' income, employees' wages and levels of employment. On the contrary, the results show a positive, though statistically non-significant, impact of the law on most of these outcomes. Mexico City's experience suggests that smoke-free laws in Mexico and elsewhere will not hurt economic productivity in the restaurant and bar industries. PMID:21292808

  9. Effects of a Hybrid Online and In-Person Training Program Designed to Reduce Alcohol Sales to Obviously Intoxicated Patrons.

    PubMed

    Toomey, Traci L; Lenk, Kathleen M; Erickson, Darin J; Horvath, Keith J; Ecklund, Alexandra M; Nederhoff, Dawn M; Hunt, Shanda L; Nelson, Toben F

    2017-03-01

    Overservice of alcohol (i.e., selling alcohol to intoxicated patrons) continues to be a problem at bars and restaurants, contributing to serious consequences such as traffic crashes and violence. We developed a training program for managers of bars and restaurants, eARM™, focusing on preventing overservice of alcohol. The program included online and face-to-face components to help create and implement establishment-specific policies. We conducted a large, randomized controlled trial in bars and restaurants in one metropolitan area in the midwestern United States to evaluate effects of the eARM program on the likelihood of selling alcohol to obviously intoxicated patrons. Our outcome measure was pseudo-intoxicated purchase attempts-buyers acted out signs of intoxication while attempting to purchase alcohol-conducted at baseline and then at 1 month, 3 months, and 6 months after training. We conducted intention-to-treat analyses on changes in purchase attempts in intervention (n = 171) versus control (n = 163) bars/restaurants using a Time × Condition interaction, as well as planned contrasts between baseline and follow-up purchase attempts. The overall Time × Condition interaction was not statistically significant. At 1 month after training, we observed a 6% relative reduction in likelihood of selling to obviously intoxicated patrons in intervention versus control bars/restaurants. At 3 months after training, this difference widened to a 12% relative reduction; however, at 6 months this difference dissipated. None of these specific contrasts were statistically significant (p = .05). The observed effects of this enhanced training program are consistent with prior research showing modest initial effects followed by a decay within 6 months of the core training. Unless better training methods are identified, training programs are inadequate as the sole approach to reduce overservice of alcohol.

  10. Erratum: Measurement of the electron charge asymmetry in $$\\boldsymbol{p\\bar{p}\\rightarrow W+X \\rightarrow e\

    DOE PAGES

    Abazov, Victor Mukhamedovich

    2015-04-30

    The recent paper on the charge asymmetry for electrons from W boson decay has an error in the Tables VII to XI that show the correlation coefficients of systematic uncertainties. Furthermore, the correlation matrix elements shown in the original publication were the square roots of the calculated values.

  11. Author Correction: Phase-resolved X-ray polarimetry of the Crab pulsar with the AstroSat CZT Imager

    NASA Astrophysics Data System (ADS)

    Vadawale, S. V.; Chattopadhyay, T.; Mithun, N. P. S.; Rao, A. R.; Bhattacharya, D.; Vibhute, A.; Bhalerao, V. B.; Dewangan, G. C.; Misra, R.; Paul, B.; Basu, A.; Joshi, B. C.; Sreekumar, S.; Samuel, E.; Priya, P.; Vinod, P.; Seetha, S.

    2018-05-01

    In the Supplementary Information file originally published for this Letter, in Supplementary Fig. 7 the error bars for the polarization fraction were provided as confidence intervals but instead should have been Bayesian credibility intervals. This has been corrected and does not alter the conclusions of the Letter in any way.

  12. National Centers for Environmental Prediction

    Science.gov Websites

    Influence of convective parameterization on the systematic errors of Climate Forecast System (CFS) model; Climate Dynamics, 41, 45-61, 2013. Saha, S., S. Pokhrel and H. S. Chaudhari: Influence of Eurasian snow …

  13. Author Correction: Circuit dissection of the role of somatostatin in itch and pain.

    PubMed

    Huang, Jing; Polgár, Erika; Solinski, Hans Jürgen; Mishra, Santosh K; Tseng, Pang-Yen; Iwagaki, Noboru; Boyle, Kieran A; Dickie, Allen C; Kriegbaum, Mette C; Wildner, Hendrik; Zeilhofer, Hanns Ulrich; Watanabe, Masahiko; Riddell, John S; Todd, Andrew J; Hoon, Mark A

    2018-06-01

    In the version of this article initially published online, the labels were switched for the right-hand pair of bars in Fig. 4e. The left one of the two should be Chloroquine + veh, the right one Chloroquine + CNO. The error has been corrected in the print, HTML and PDF versions of the article.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buras, Andrzej J.; /Munich, Tech. U.; Gorbahn, Martin

    The authors calculate the complete next-to-next-to-leading order QCD corrections to the charm contribution of the rare decay K⁺ → π⁺νν̄. They encounter several new features, which were absent in lower orders. They discuss them in detail and present the results for the two-loop matching conditions of the Wilson coefficients, the three-loop anomalous dimensions, and the two-loop matrix elements of the relevant operators that enter the next-to-next-to-leading order renormalization group analysis of the Z-penguin and the electroweak box contribution. The inclusion of the next-to-next-to-leading order QCD corrections leads to a significant reduction of the theoretical uncertainty from ±9.8% down to ±2.4% in the relevant parameter P_c(X), implying the leftover scale uncertainties in B(K⁺ → π⁺νν̄) and in the determination of |V_td|, sin 2β, and γ from the K → πνν̄ system to be ±1.3%, ±1.0%, ±0.006, and ±1.2°, respectively. For the charm quark MS-bar mass m_c(m_c) = (1.30 ± 0.05) GeV and |V_us| = 0.2248, the next-to-leading order value P_c(X) = 0.37 ± 0.06 is modified to P_c(X) = 0.38 ± 0.04 at the next-to-next-to-leading order level, with the latter error fully dominated by the uncertainty in m_c(m_c). They present tables for P_c(X) as a function of m_c(m_c) and α_s(M_Z) and a very accurate analytic formula that summarizes these two dependences as well as the dominant theoretical uncertainties. Adding the recently calculated long-distance contributions, they find B(K⁺ → π⁺νν̄) = (8.0 ± 1.1) × 10⁻¹¹, with the present uncertainties in m_c(m_c) and the Cabibbo-Kobayashi-Maskawa elements being the dominant individual sources in the quoted error. They also emphasize that improved calculations of the long-distance contributions to K⁺ → π⁺νν̄ and of the isospin breaking corrections in the evaluation of the weak current matrix elements from K⁺ → π⁰e⁺ν would be valuable in order to increase the potential of the two golden K → πνν̄ decays in the search for new physics.

  15. Selective versus routine patch metal allergy testing to select bar material for the Nuss procedure in 932 patients over 10 years.

    PubMed

    Obermeyer, Robert J; Gaffar, Sheema; Kelly, Robert E; Kuhn, M Ann; Frantz, Frazier W; McGuire, Margaret M; Paulson, James F; Kelly, Cynthia S

    2018-02-01

    The aim of the study was to determine the role of patch metal allergy testing to select bar material for the Nuss procedure. An IRB-approved (11-04-WC-0098) single-institution retrospective cohort study comparing selective versus routine patch metal allergy testing to select stainless steel or titanium bars for Nuss repair was performed. In Cohort A (9/2004-1/2011), selective patch testing was performed based on clinical risk factors. In Cohort B (2/2011-9/2014), all patients were patch tested. The cohorts were compared for incidence of bar allergy and resultant premature bar loss. Risk factors for stainless steel allergy or a positive patch test were evaluated. Cohort A had 628 patients with 63 (10.0%) selected for patch testing, while all 304 patients in Cohort B were tested. Over 10 years, 15 (1.8%) of the 842 stainless steel Nuss repairs resulted in a bar allergy, and 5 had a negative preoperative patch test. The incidence of stainless steel bar allergy (1.8% vs 1.7%, p=0.57) and resultant bar loss (0.5% vs 1.3%, p=0.23) was not statistically different between cohorts. An allergic reaction to a stainless steel bar or a positive patch test was more common in females (OR=2.3, p<0.001) and patients with a personal (OR=24.8, p<0.001) or family history (OR=3.1, p<0.001) of metal sensitivity. Stainless steel bar allergies occur at a low incidence with either routine or selective patch metal allergy testing. If selective testing is performed, it is advisable in females and patients with a personal or family history of metal sensitivity. A negative preoperative patch metal allergy test does not preclude the possibility of a postoperative stainless steel bar allergy. Level III Treatment Study and Study of Diagnostic Test. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Validation of strain gauges as a method of measuring precision of fit of implant bars.

    PubMed

    Hegde, Rashmi; Lemons, Jack E; Broome, James C; McCracken, Michael S

    2009-04-01

    Multiple articles in the literature have used strain gauges to estimate the precision of fit of implant bars. However, the accuracy of these measurements has not been fully documented. The purpose of this study was to evaluate the response of strain gauges to known amounts of misfit in an implant bar. This is an important step in validation of this device. A steel block was manufactured with five 4.0-mm externally hexed implant platforms machined into the block 7-mm apart. A 1.4-cm long gold alloy bar was cast to fit 2 of the platforms. Brass shims of varying thickness (150, 300, and 500 microm) were placed under one side of the bar to create misfit. A strain gage was used to record strain readings on top of the bar, one reading at first contact of the bar and one at maximum screw torque. Microgaps between the bar and the steel platforms were measured using a high-precision optical measuring device at 4 points around the platform. The experiment was repeated 3 times. Two-way analysis of variance and linear regression were used for statistical analyses. Shim thickness had a significant effect on strain (P < 0.0001). There was a significant positive correlation between shim thickness and strain (R(2) = 0.93) for strain at maximum torque, and for strain measurements at first contact (R(2) = 0.91). Microgap measurements showed no correlation with increasing misfit. Strain in the bar increased significantly with increasing levels of misfit. Strain measurements induced at maximum torque are not necessarily indicative of the maximum strains experienced by the bar. The presence or absence of a microgap between the bar and the platform is not necessarily indicative of passivity. These data suggest that microgap may not be clinically reliable as a measure of precision of fit.
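
    A short sketch of the statistical step reported above: regress measured strain on known shim thickness (the imposed misfit) and read off the slope and coefficient of determination. The numbers below are invented stand-ins, not the study's measurements.

```python
# Sketch: linear regression of strain on shim thickness with R^2, mirroring
# the kind of analysis described above on made-up data.
import numpy as np
from scipy import stats

shim_um = np.array([150, 150, 150, 300, 300, 300, 500, 500, 500])   # misfit
strain = np.array([210, 230, 205, 420, 440, 415, 690, 720, 700])    # microstrain

res = stats.linregress(shim_um, strain)
print(f"slope = {res.slope:.2f} microstrain/um, R^2 = {res.rvalue**2:.2f}, "
      f"p = {res.pvalue:.1e}")
```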

  17. Consistency errors in p-values reported in Spanish psychology journals.

    PubMed

    Caperos, José Manuel; Pardo, Antonio

    2013-01-01

    Recent reviews have drawn attention to frequent consistency errors when reporting statistical results. We have reviewed the statistical results reported in 186 articles published in four Spanish psychology journals. Of these articles, 102 contained at least one of the statistics selected for our study: Fisher-F, Student-t and Pearson-χ². Out of the 1,212 complete statistics reviewed, 12.2% presented a consistency error, meaning that the reported p-value did not correspond to the reported value of the statistic and its degrees of freedom. In 2.3% of the cases, the correct calculation would have led to a different conclusion than the reported one. In terms of articles, 48% included at least one consistency error, and 17.6% would have to change at least one conclusion. In meta-analytical terms, with a focus on effect size, consistency errors can be considered substantial in 9.5% of the cases. These results imply a need to improve the quality and precision with which statistical results are reported in Spanish psychology journals.
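
    The consistency check itself is straightforward to automate; the sketch below recomputes p-values from reported F, t, and χ² statistics with their degrees of freedom and flags mismatches, using invented example values rather than any figures from the review.

```python
# Sketch: recompute the p-value implied by a reported test statistic and its
# degrees of freedom, then compare it with the reported p-value.
from scipy import stats

def recomputed_p(kind, value, df):
    if kind == "F":            # F(df1, df2) = value
        return stats.f.sf(value, *df)
    if kind == "t":            # t(df) = value, two-tailed
        return 2 * stats.t.sf(abs(value), df)
    if kind == "chi2":         # chi2(df) = value
        return stats.chi2.sf(value, df)
    raise ValueError(kind)

reported = [("F", 4.52, (2, 57), 0.015),
            ("t", 2.10, 28, 0.030),
            ("chi2", 7.90, 3, 0.048)]

for kind, value, df, p_rep in reported:
    p_new = recomputed_p(kind, value, df)
    flag = "OK" if abs(p_new - p_rep) < 0.005 else "inconsistent"
    print(f"{kind:>4}: reported p={p_rep:.3f}, recomputed p={p_new:.3f} -> {flag}")
```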

  18. Applicability of polydimethylsiloxane (PDMS) and polyethersulfone (PES) as passive samplers of more hydrophobic organic compounds in intertidal estuarine environments.

    PubMed

    Posada-Ureta, Oscar; Olivares, Maitane; Delgado, Alejandra; Prieto, Ailette; Vallejo, Asier; Irazola, Mireia; Paschke, Albrecht; Etxebarria, Nestor

    2017-02-01

    The uptake calibration of three passive samplers, stir-bars, MESCO/stir-bars and polyethersulfone tubes (PESt), was assessed in seawater at different salinities for 17 organic compounds including organochlorine compounds, pesticides, phthalates, musk fragrances and triclosan. The calibration procedure was accomplished by exposing the samplers to a continuous flow of fortified seawater for up to 14 days under laboratory conditions. Prior to the exposure, stir-bars and MESCO/stir-bars were loaded with a known amount of deuterated PAH mixture as performance reference compounds (PRC). For most of the studied compounds, the sampling rates (Rs, mL·day⁻¹) were determined for each sampler at two salinities (15 and 30‰) and two nominal concentrations (25 and 50 ng·L⁻¹). Among the tested devices, though PES can be an outstanding cheap alternative to other passive samplers, naked or free stir-bars provided the best results in terms of uptake rates (i.e., the Rs values ranged from 30 to 350 mL·day⁻¹). Regarding the variation of the salinity, the Rs values obtained with naked stir-bars were statistically comparable in the full range of salinities (0-30‰), but the values obtained with MESCO/stir-bars and PESt were salinity dependent. Consequently, only stir-bars assured the required robustness to be used as passive samplers in intertidal estuarine environments. Finally, the stir-bars were applied to estimate the time-weighted average concentration of some of those contaminants in the feeding seawater of the experimental aquaria at the Plentzia Marine Station (Basque Country), and low levels of musk fragrances (0.1-0.2 ng·L⁻¹) were estimated. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Verification of image orthorectification techniques for low-cost geometric inspection of masonry arch bridges

    NASA Astrophysics Data System (ADS)

    González-Jorge, Higinio; Riveiro, Belén; Varela, María; Arias, Pedro

    2012-07-01

    A low-cost image orthorectification tool based on the utilization of compact cameras and scale bars is developed to obtain the main geometric parameters of masonry bridges for inventory and routine inspection purposes. The technique is validated in three different bridges by comparison with laser scanning data. The surveying process is very delicate and must strike a balance between working distance and angle. Three different cameras are used in the study to establish the relationship between the error and the camera model. Results show that the error does not depend on the length of the bridge element, the type of bridge, or the type of element. Error values for all the cameras are below 4 percent (95 percent of the data). A compact Canon camera, the model with the best technical specifications, shows an error level ranging from 0.5 to 1.5 percent.

  20. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  1. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    NASA Astrophysics Data System (ADS)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method, but is valid in the context of simulation of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
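
    The underlying point, that rare-event statistics carry an intrinsic uncertainty, can be illustrated with the elementary Poisson result that a count of N events has standard deviation √N, i.e. a relative statistical error of 1/√N. This is a generic illustration only, not the specific ion-conductivity error expression derived in the paper:

    ```python
    import numpy as np

    # Relative statistical error of a Poisson-distributed event count:
    # sigma_N / N = sqrt(N) / N = 1 / sqrt(N).
    for n_events in (10, 100, 10_000):
        rel_err = 1.0 / np.sqrt(n_events)
        print(f"{n_events:>6d} diffusion events -> ~{100 * rel_err:.1f}% statistical error")
    ```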

  2. Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.

    PubMed

    Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

    2016-11-14

    Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
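
    Although the article's contribution is an analytic derivation (provided as the R function check.norms in the mokken package), the basic idea that sampling error in multinomially distributed test scores propagates into a norm statistic can be checked by parametric bootstrap. The sketch below does this for one statistic, the percentile rank; the made-up score frequencies, the cutoff, and the "percentage at or below" rank definition are illustrative assumptions, not taken from the article:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def percentile_rank_se(score_counts, cutoff, n_boot=2000):
        """Bootstrap standard error of the percentile rank of `cutoff`
        under multinomial resampling of the norm group.
        (Simulation-based check only; the article derives analytic standard errors.)"""
        counts = np.asarray(score_counts)
        n = counts.sum()
        probs = counts / n
        scores = np.arange(counts.size)
        ranks = []
        for _ in range(n_boot):
            resample = rng.multinomial(n, probs)
            ranks.append(100.0 * resample[scores <= cutoff].sum() / n)
        return float(np.std(ranks, ddof=1))

    # Illustrative (made-up) norm-group frequencies over test scores 0..10
    counts = [2, 5, 9, 14, 20, 18, 13, 9, 6, 3, 1]
    print(percentile_rank_se(counts, cutoff=6))
    ```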

  3. Near-IR period-luminosity relations for pulsating stars in ω Centauri (NGC 5139)

    NASA Astrophysics Data System (ADS)

    Navarrete, C.; Catelan, M.; Contreras Ramos, R.; Alonso-García, J.; Gran, F.; Dékány, I.; Minniti, D.

    2017-08-01

    Aims: The globular cluster ω Centauri (NGC 5139) hosts hundreds of pulsating variable stars of different types, thus representing a treasure trove for studies of their corresponding period-luminosity (PL) relations. Our goal in this study is to obtain the PL relations for RR Lyrae and SX Phoenicis stars in the field of the cluster, based on high-quality, well-sampled light curves in the near-infrared (IR). Methods: Observations were carried out using the VISTA InfraRed CAMera (VIRCAM) mounted on the Visible and Infrared Survey Telescope for Astronomy (VISTA). A total of 42 epochs in J and 100 epochs in KS were obtained, spanning 352 days. Point-spread function photometry was performed using DoPhot and DAOPHOT crowded-field photometry packages in the outer and inner regions of the cluster, respectively. Results: Based on the comprehensive catalog of near-IR light curves thus secured, PL relations were obtained for the different types of pulsators in the cluster, both in the J and KS bands. This includes the first PL relations in the near-IR for fundamental-mode SX Phoenicis stars. The near-IR magnitudes and periods of Type II Cepheids and RR Lyrae stars were used to derive an updated true distance modulus to the cluster, with a resulting value of (m - M)0 = 13.708 ± 0.035 ± 0.10 mag, where the error bars correspond to the adopted statistical and systematic errors, respectively. Adding the errors in quadrature, this is equivalent to a heliocentric distance of 5.52 ± 0.27 kpc. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, with the VISTA telescope (project ID 087.D-0472, PI R. Angeloni).
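
    The conversion quoted above, from a true distance modulus of (m - M)₀ = 13.708 mag with statistical and systematic errors added in quadrature to a heliocentric distance of 5.52 ± 0.27 kpc, follows from the standard relation d = 10^((m-M)/5 + 1) pc together with first-order error propagation. A short check of the arithmetic:

    ```python
    import numpy as np

    mu, sig_stat, sig_sys = 13.708, 0.035, 0.10
    sig_mu = np.hypot(sig_stat, sig_sys)          # errors added in quadrature -> ~0.106 mag

    d_pc = 10 ** (mu / 5 + 1)                     # distance modulus relation
    sig_d = d_pc * np.log(10) / 5 * sig_mu        # first-order error propagation

    print(f"d = {d_pc / 1e3:.2f} +/- {sig_d / 1e3:.2f} kpc")   # ~5.52 +/- 0.27 kpc
    ```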

  4. Chemical Evolution and History of Star Formation in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Gustafsson, Bengt

    1995-07-01

    Large scale processes controlling star formation and nucleosynthesis are fundamental but poorly understood. This is especially true for external galaxies. A detailed study of individual main sequence stars in the LMC Bar is proposed. The LMC is close enough to allow this, has considerable spread in stellar ages and a structure permitting identification of stellar populations and their structural features. The Bar presumably plays a dominant role in the chemical and dynamical evolution of the galaxy. Our knowledge is, at best, based on educated guesses. Still, the major population of the Bar is quite old, and many member stars are relatively evolved. The Bar seems to contain stars similar to those of Intermediate to Extreme Pop II in the Galaxy. We want to study the history of star formation, chemical evolution and initial mass function of the population dominating the Bar. We will use field stars close to the turn-off point in the HR diagram. From earlier studies, we know that 250-500 such stars are available for uvby photometry in the PC field. We aim at an accuracy of 0.1-0.2 dex in Me/H and 25% or better in relative ages. This requires an accuracy of about 0.02 mag in the uvby indices, which can be reached, taking into account errors in calibration, flat fielding, guiding and problems due to crowding. For a study of the luminosity function fainter stars will be included as well. Calibration fields are available in Omega Cen and M 67.

  5. Galactic interstellar abundance surveys with IUE. III - Silicon, manganese, iron, sulfur, and zinc

    NASA Technical Reports Server (NTRS)

    Van Steenberg, Michael E.; Shull, J. Michael

    1988-01-01

    This paper continues a survey of interstellar densities, abundances, and cloud structure in the Galaxy using the IUE satellite. A statistical data set of 223 O3-B2.5 stars is constructed, including 53 stars in the Galactic halo. It is found that S II lines in B stars, of luminosity classes IV and V, have possible contamination from stellar S II, particularly for stars with v sin i less than 200 km/s. The mean logarithmic depletions are -1.00, -1.19, -0.63, and -0.23 (Si, Mn, Fe, S, Zn). Depletions of Si, Mn, and Fe correlate with the mean hydrogen density n-bar along the line of sight, with a turnover for n-bar greater than 1 cm⁻³. Sulfur depletions correlate with n-bar along the line of sight. The slight Zn depletion correlation also appears to be statistically insignificant. No correlation of depletion is found with the physical density derived from H2 rotational states in 21 lines of sight. Depletion variations in the disk are consistent with a Galactic abundance gradient or with enhanced mean depletions in the anticenter region.

  6. Measurement of the inclusive forward-backward tt̄ production asymmetry and its rapidity dependence dA_fb/d(Δy)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strycker, Glenn Loyd

    2010-01-01

    Early measurements of a large forward-backward asymmetry at the CDF and D0 experiments at Fermilab have generated much recent interest, but were hampered by large uncertainties. We present here a new measurement of the parton-level forward-backward asymmetry of pair-produced top quarks, using a high-statistics sample with much improved precision. We study the rapidity, y_top, of the top quark production angle with respect to the incoming parton momentum in both the lab and tt̄ rest frames. We find the parton-level forward-backward asymmetries to be A_fb(pp̄) = 0.150 ± 0.050 (stat) ± 0.024 (syst) and A_fb(tt̄) = 0.158 ± 0.072 (stat) ± 0.024 (syst). These results should be compared with the small pp̄-frame charge asymmetry expected in QCD at NLO, A_fb = 0.050 ± 0.015. Additionally, we introduce a measurement of the A_fb rapidity dependence, dA_fb/d(Δy). We find A_fb(pp̄, |Δy| < 1.0) = 0.026 ± 0.104 (stat) ± 0.012 (syst) and A_fb(pp̄, |Δy| > 1.0) = 0.611 ± 0.210 (stat) ± 0.246 (syst), which we compare with model predictions of 0.039 ± 0.006 and 0.123 ± 0.018 for the inner and outer rapidities, respectively.
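
    For context, such an asymmetry is conventionally built from the sign of a rapidity variable; in terms of the top-antitop rapidity difference Δy used for the differential measurement, the standard definition (the generic textbook form, with frame conventions as described in the thesis) is

        A_fb = [N(Δy > 0) - N(Δy < 0)] / [N(Δy > 0) + N(Δy < 0)],

    so that A_fb = 0 corresponds to forward-backward symmetric production.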

  7. Confidence, Concentration, and Competitive Performance of Elite Athletes: A Natural Experiment in Olympic Gymnastics.

    ERIC Educational Resources Information Center

    Grandjean, Burke D.; Taylor, Patricia A.; Weiner, Jay

    2002-01-01

    During the women's all-around gymnastics final at the 2000 Olympics, the vault was inadvertently set 5 cm too low for a random half of the gymnasts. The error was widely viewed as undermining their confidence and subsequent performance. However, data from pretest and posttest scores on the vault, bars, beam, and floor indicated that the vault…

  8. Implementing Material Surfaces with an Adhesive Switch

    DTIC Science & Technology

    2014-02-28

    [Truncated abstract fragment, apparently spilled from a figure legend: comparison of M15 (solid triangles), M13 (open circles), M11 (solid circles), and NC14 (open triangles) DNA primary targets, with error bars indicating variability; listed probe sequences include 5'–ATCAGGCGCAA–3', M13 = 5'–ATCAGCGGCAATC–3', M15 = 5'–ATCAGCCCCAATCCA–3', L3M9 = 5'–ATLCACLCCGLC–3', and L3M11 = 5'–…]

  9. Kinematic parameter estimation using close range photogrammetry for sport applications

    NASA Astrophysics Data System (ADS)

    Magre Colorado, Luz Alejandra; Martínez Santos, Juan Carlos

    2015-12-01

    In this article, we show the development of a low-cost hardware/software system based on close range photogrammetry to track the movement of a person performing weightlifting. The goal is to reduce the costs to the trainers and athletes dedicated to this sport when it comes to analyzing the performance of the sportsman and avoiding injuries or accidents. We used a web-cam as the data acquisition hardware and developed the software stack in Processing using the OpenCV library. Our algorithm extracts size, position, velocity, and acceleration measurements of the bar along the course of the exercise. We present detailed characteristics of the system with their results in a controlled setting. The current work improves the detection and tracking capabilities from a previous version of this system by using the HSV color model instead of RGB. Preliminary results show that the system is able to profile the movement of the bar as well as determine the size, position, velocity, and acceleration values of a marker/target in the scene. The average error in finding the size of an object at a distance of four meters is less than 4%, and the error in the acceleration value is 1.01% on average.
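
    The detection step described here, switching from RGB to HSV so the bar marker can be segmented by hue, can be sketched with OpenCV in Python. The HSV bounds, camera index, and use of the largest contour as the marker are placeholder assumptions for illustration, not values or choices from the study:

    ```python
    import cv2
    import numpy as np

    LOWER_HSV = np.array([100, 120, 70])    # placeholder bounds for a blue marker (assumed)
    UPPER_HSV = np.array([130, 255, 255])

    cap = cv2.VideoCapture(0)               # web-cam as the acquisition hardware
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)       # hue-based segmentation is less lighting-sensitive than RGB
        mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)      # keep only marker-colored pixels
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            c = max(contours, key=cv2.contourArea)         # largest blob taken as the bar marker
            x, y, w, h = cv2.boundingRect(c)
            # Successive bounding-box centers give position per frame; differencing them
            # over the frame interval yields velocity, and differencing again, acceleration.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
    ```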

  10. Teaching Statistics Online Using "Excel"

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2011-01-01

    As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…

  11. The Effects of Long-Term Feeding of Rodent Food Bars on Lipid Peroxidation And Antioxidant Enzyme Levels In Fisher Rats

    NASA Technical Reports Server (NTRS)

    Ramirez, Joel; Zirkle-Yoshida, M.; Piert, S.; Barrett, J.; Yul, D.; Dalton, B.; Girten, B.

    2001-01-01

    A specialized rodent food bar diet has been developed and utilized successfully for short-duration shuttle missions. Recent tests conducted in preparation for experiments aboard the International Space Station (ISS) indicated that long-term food bar feeding for three months induced hyperlipidemia in rats. This study examined oxidative stress status in livers of these same animals. Spectrophotometric analysis of 79 Fischer rat livers (40 female and 39 male) for lipid peroxidation (LPO) and superoxide dismutase (SOD) was conducted using Bioxytech LPO-587(TM) assay kit and SOD-525(TM) assay kit, respectively. The treatment groups consisted of 20 male CHOW and 19 male FOOD BAR rats and 20 female CHOW and 20 female FOOD BAR rats. Statistical analysis to compare differences between groups was performed by standard analysis of variance procedures. The male FOOD BAR group LPO mean (3.6 +/- 0.2 mmol/g) was significantly (p less than or equal to 0.05) greater than that of the male CHOW group (2.1 +/- 0.1 mmol/g). Moreover, the female FOOD BAR group LPO mean (2.9 +/- 0.1 mmol/g) was also significantly greater than the female CHOW group mean (2.2 +/- 0.1 mmol/g). The mean values for SOD in both male and female groups showed no significant differences between CHOW and FOOD BAR groups. These results show that LPO levels were significantly higher in both the male and female FOOD BAR groups compared to CHOW groups and that there was no concomitant increase in SOD levels across the groups. In addition, males showed a greater difference than females in terms of LPO levels. These findings suggest a need for further investigation into the use of the current food bar formulation for long-term experiments such as those planned for the ISS.

  12. Application of high-speed photography to chip refining

    NASA Astrophysics Data System (ADS)

    Stationwala, Mustafa I.; Miller, Charles E.; Atack, Douglas; Karnis, A.

    1991-04-01

    Several high speed photographic methods have been employed to elucidate the mechanistic aspects of producing mechanical pulp in a disc refiner. Material flow patterns of pulp in a refiner were previously recorded by means of a HYCAM camera and continuous lighting system which provided cine pictures at up to 10,000 pps. In the present work an IMACON camera was used to obtain several series of high resolution, high speed photographs, each photograph containing an eight-frame sequence obtained at a framing rate of 100,000 pps. These high-resolution photographs made it possible to identify the nature of the fibrous material trapped on the bars of the stationary disc. Tangential movement of fibre flocs, during the passage of bars on the rotating disc over bars on the stationary disc, was also observed on the stator bars. In addition, using a cinestroboscopic technique a large number of high resolution pictures were taken at three different positions of the rotating disc relative to the stationary disc. These pictures were computer-analyzed statistically to determine the fractional coverage of the bars of the stationary disc with pulp. Information obtained from these studies provides new insights into the mechanism of the refining process.

  13. Closing the wedge: Search strategies for extended Higgs sectors with heavy flavor final states

    DOE PAGES

    Gori, Stefania; Kim, Ian-Woo; Shah, Nausheen R.; ...

    2016-04-29

    We consider search strategies for an extended Higgs sector at the high-luminosity LHC14 utilizing multitop final states. In the framework of a two Higgs doublet model, the purely top final states (tt̄, 4t) are important channels for heavy Higgs bosons with masses in the wedge above 2m_t and at low values of tanβ, while a 2b2t final state is most relevant at moderate values of tanβ. We find, in the tt̄H channel, with H → tt̄, that both single- and three-lepton final states can provide statistically significant constraints at low values of tanβ for m_A as high as ~750 GeV. When systematics on the tt̄ background are taken into account, however, the three-lepton final state is more powerful, though the precise constraint depends fairly sensitively on lepton fake rates. We also find that neither 2b2t nor tt̄ final states provide constraints on additional heavy Higgs bosons with couplings to tops smaller than the top Yukawa, due to expected systematic uncertainties in the tt̄ background.

  14. Assessment of Candida species colonization and denture-related stomatitis in bar- and locator-retained overdentures.

    PubMed

    Kilic, Kerem; Koc, Ayse Nedret; Tekinsen, Fatma Filiz; Yildiz, Pinar; Kilic, Duygu; Zararsiz, Gokmen; Kilic, Erdem

    2014-10-01

    The aim of this study was to assess the prevalence of denture-related stomatitis (DRS) in different attachment-retained overdenture wearers and its association with particular colonizing Candida species. Thirty-seven edentulous patients with implant-supported maxillary or mandibular overdentures were enrolled. A full clinical history was obtained, including details of patients' oral hygiene practices and the levels of erythema based on Newton's classification scale. Swabs were taken from the palate and investigated mycologically to identify the yeast colonies. Quantitative and qualitative microbiological assessments were performed, which included recording the total numbers of colonies (cfu), their color, and their morphological characteristics. Significant differences were found in cfu values between the attachment and inner surfaces of locator- and bar-retained overdentures (P < .05). Candida albicans was the most common species in both evaluations, being isolated from 81.3% of bar-retained overdentures and 38.1% of locator-retained overdentures. DRS developed in all patients using bar-retained overdentures but in only 71.4% of those using locator-retained overdentures. No statistically significant relationship was found between bar and locator attachments according to smoking habit, overnight removal, or plaque and gingival indices (P > .05).

  15. From QCD-based hard-scattering to nonextensive statistical mechanical descriptions of transverse momentum spectra in high-energy pp and pp̄ collisions

    DOE PAGES

    Wong, Cheuk-Yin; Wilk, Grzegorz; Cirto, Leonardo J. L.; ...

    2015-06-22

    Transverse spectra of both jets and hadrons obtained in high-energy pp and pp̄ collisions at central rapidity exhibit power-law behavior of 1/p_T^n at high p_T. The power index n is 4-5 for jet production and is slightly greater for hadron production. Furthermore, the hadron spectra, spanning 14 orders of magnitude down to the lowest p_T region in pp collisions at the LHC, can be adequately described by a single nonextensive statistical mechanical distribution that is widely used in other branches of science. This suggests indirectly the dominance of the hard-scattering process over essentially the whole p_T region at central rapidity in pp collisions at the LHC. We show here direct evidence of such a dominance of the hard-scattering process by investigating the power index of UA1 jet spectra over an extended p_T region and the two-particle correlation data of the STAR and PHENIX Collaborations in high-energy pp and pp̄ collisions at central rapidity. We then study how the showering of the hard-scattering product partons alters the power index of the hadron spectra and leads to a hadron distribution that can be cast into a single-particle nonextensive statistical mechanical distribution. Lastly, because of such a connection, the nonextensive statistical mechanical distribution can be considered a lowest-order approximation of the hard scattering of partons followed by the subsequent process of parton showering that turns the jets into hadrons in high-energy pp and pp̄ collisions.
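
    For reference, a commonly used single-particle nonextensive (Tsallis-like) parametrization of such transverse-momentum spectra, which interpolates between an exponential at low p_T and the power law 1/p_T^n at high p_T, takes the generic form (the exact parametrization adopted in the paper may differ)

        dN/(p_T dp_T) ∝ (1 + p_T/(nT))^(-n),

    which tends to exp(-p_T/T) for p_T much smaller than nT and to a pure power law p_T^(-n) for p_T much larger than nT.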

  16. Measurement of the B̄ → X_s γ branching fraction with a sum of exclusive decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saito, T.; Ishikawa, A.; Yamamoto, H.

    We use 772 × 10⁶ BB̄ meson pairs collected at the Υ(4S) resonance with the Belle detector to measure the branching fraction for B̄ → X_s γ. Our measurement uses a sum-of-exclusives approach in which 38 of the hadronic final states with strangeness equal to +1, denoted by X_s, are reconstructed. The inclusive branching fraction for M_Xs < 2.8 GeV/c², which corresponds to a minimum photon energy of 1.9 GeV, is measured to be B(B̄ → X_s γ) = (3.51 ± 0.17 ± 0.33) × 10⁻⁴, where the first uncertainty is statistical and the second is systematic.

  17. Measurement of the B̄ → X_s γ branching fraction with a sum of exclusive decays

    DOE PAGES

    Saito, T.; Ishikawa, A.; Yamamoto, H.; ...

    2015-03-04

    We use 772 × 10⁶ BB̄ meson pairs collected at the Υ(4S) resonance with the Belle detector to measure the branching fraction for B̄ → X_s γ. Our measurement uses a sum-of-exclusives approach in which 38 of the hadronic final states with strangeness equal to +1, denoted by X_s, are reconstructed. The inclusive branching fraction for M_Xs < 2.8 GeV/c², which corresponds to a minimum photon energy of 1.9 GeV, is measured to be B(B̄ → X_s γ) = (3.51 ± 0.17 ± 0.33) × 10⁻⁴, where the first uncertainty is statistical and the second is systematic.

  18. Adequacy of rut bar data collection

    DOT National Transportation Integrated Search

    1994-01-01

    This publication brings together the 1994 annual series of selected statistical tabulations relating to highway transportation - highway use--the ownership and operation of motor vehicles; highway finance--the receipts and expenditures for highways b...

  19. Statistical methods and errors in family medicine articles between 2010 and 2014-Suez Canal University, Egypt: A cross-sectional study.

    PubMed

    Nour-Eldein, Hebatallah

    2016-01-01

    With limited statistical knowledge of most physicians it is not uncommon to find statistical errors in research articles. To determine the statistical methods and to assess the statistical errors in family medicine (FM) research articles that were published between 2010 and 2014. This was a cross-sectional study. All 66 FM research articles that were published over 5 years by FM authors with affiliation to Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of FM articles. Inferential methods were recorded in 62/66 (93.9%) of FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-test 17/66 (25.8%) were the most commonly used inferential tests. Within 60 FM articles with identified inferential statistics, no prior sample size 19/60 (31.7%), application of wrong statistical tests 17/60 (28.3%), incomplete documentation of statistics 59/60 (98.3%), reporting P value without test statistics 32/60 (53.3%), no reporting confidence interval with effect size measures 12/60 (20.0%), use of mean (standard deviation) to describe ordinal/nonnormal data 8/60 (13.3%), and errors related to interpretation were mainly for conclusions without support by the study data 5/60 (8.3%). Inferential statistics were used in the majority of FM articles. Data analysis and reporting statistics are areas for improvement in FM research articles.

  20. Statistical methods and errors in family medicine articles between 2010 and 2014-Suez Canal University, Egypt: A cross-sectional study

    PubMed Central

    Nour-Eldein, Hebatallah

    2016-01-01

    Background: With limited statistical knowledge of most physicians it is not uncommon to find statistical errors in research articles. Objectives: To determine the statistical methods and to assess the statistical errors in family medicine (FM) research articles that were published between 2010 and 2014. Methods: This was a cross-sectional study. All 66 FM research articles that were published over 5 years by FM authors with affiliation to Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of FM articles. Results: Inferential methods were recorded in 62/66 (93.9%) of FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-test 17/66 (25.8%) were the most commonly used inferential tests. Within 60 FM articles with identified inferential statistics, no prior sample size 19/60 (31.7%), application of wrong statistical tests 17/60 (28.3%), incomplete documentation of statistics 59/60 (98.3%), reporting P value without test statistics 32/60 (53.3%), no reporting confidence interval with effect size measures 12/60 (20.0%), use of mean (standard deviation) to describe ordinal/nonnormal data 8/60 (13.3%), and errors related to interpretation were mainly for conclusions without support by the study data 5/60 (8.3%). Conclusion: Inferential statistics were used in the majority of FM articles. Data analysis and reporting statistics are areas for improvement in FM research articles. PMID:27453839

  1. Benchmarking statistical averaging of spectra with HULLAC

    NASA Astrophysics Data System (ADS)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses statistically averaged descriptions of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of details/averaging. We will take advantage of this feature to check the effect of averaging with comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  2. A measurement of CMB cluster lensing with SPT and DES year 1 data

    DOE PAGES

    Baxter, E. J.; Raghunathan, S.; Crawford, T. M.; ...

    2018-02-09

    Clusters of galaxies gravitationally lens the cosmic microwave background (CMB) radiation, resulting in a distinct imprint in the CMB on arcminute scales. Measurement of this effect offers a promising way to constrain the masses of galaxy clusters, particularly those at high redshift. We use CMB maps from the South Pole Telescope Sunyaev-Zel'dovich (SZ) survey to measure the CMB lensing signal around galaxy clusters identified in optical imaging from first year observations of the Dark Energy Survey. The cluster catalog used in this analysis contains 3697 members with mean redshift of z̄ = 0.45. We detect lensing of the CMB by the galaxy clusters at 8.1σ significance. Using the measured lensing signal, we constrain the amplitude of the relation between cluster mass and optical richness to roughly 17% precision, finding good agreement with recent constraints obtained with galaxy lensing. The error budget is dominated by statistical noise but includes significant contributions from systematic biases due to the thermal SZ effect and cluster miscentering.

  3. Evidence for Direct CP Violation in the Measurement of the Cabibbo-Kobayashi-Maskawa Angle γ with B∓ → D(*)K(*)∓ Decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    del Amo Sanchez, P.; Lees, J.P.; Poireau, V.

    2011-08-19

    We report the measurement of the Cabibbo-Kobayashi-Maskawa CP-violating angle γ through a Dalitz plot analysis of neutral D meson decays to K_S⁰π⁺π⁻ and K_S⁰K⁺K⁻ produced in the processes B∓ → DK∓, B∓ → D*K∓ with D* → Dπ⁰, Dγ, and B∓ → DK*∓ with K*∓ → K_S⁰π∓, using 468 million BB̄ pairs collected by the BABAR detector at the PEP-II asymmetric-energy e⁺e⁻ collider at SLAC. We measure γ = (68 ± 14 ± 4 ± 3)° (modulo 180°), where the first error is statistical, the second is the experimental systematic uncertainty, and the third reflects the uncertainty in the description of the neutral D decay amplitudes. This result is inconsistent with γ = 0 (no direct CP violation) with a significance of 3.5 standard deviations.

  4. Credible occurrence probabilities for extreme geophysical events: earthquakes, volcanic eruptions, magnetic storms

    USGS Publications Warehouse

    Love, Jeffrey J.

    2012-01-01

    Statistical analysis is made of rare, extreme geophysical events recorded in historical data -- counting the number of events k with sizes that exceed chosen thresholds during specific durations of time τ. Under transformations that stabilize data and model-parameter variances, the most likely Poisson-event occurrence rate, k/τ, applies for frequentist inference and, also, for Bayesian inference with a Jeffreys prior that ensures posterior invariance under changes of variables. Frequentist confidence intervals and Bayesian (Jeffreys) credibility intervals are approximately the same and easy to calculate: (1/τ)[(√k - z/2)², (√k + z/2)²], where z is a parameter that specifies the width, z = 1 (z = 2) corresponding to 1σ, 68.3% (2σ, 95.4%). If only a few events have been observed, as is usually the case for extreme events, then these "error-bar" intervals might be considered to be relatively wide. From historical records, we estimate most likely long-term occurrence rates, 10-yr occurrence probabilities, and intervals of frequentist confidence and Bayesian credibility for large earthquakes, explosive volcanic eruptions, and magnetic storms.
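
    The quoted interval is easy to evaluate directly from the abstract's own formula. A minimal sketch, where the event count, observation span, and width parameter in the example are illustrative:

    ```python
    import math

    def occurrence_rate_interval(k, tau, z=1.0):
        """Approximate confidence/credibility interval for a Poisson occurrence rate:
        (1/tau) * [(sqrt(k) - z/2)**2, (sqrt(k) + z/2)**2]."""
        lo = (math.sqrt(k) - z / 2) ** 2 / tau
        hi = (math.sqrt(k) + z / 2) ** 2 / tau
        return lo, hi

    # e.g. 4 extreme events observed in 100 years, 68.3% (z = 1) interval on the yearly rate
    print(occurrence_rate_interval(4, 100.0, z=1.0))   # ~(0.0225, 0.0625) events per year
    ```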

  5. InGaAs tunnel diodes for the calibration of semi-classical and quantum mechanical band-to-band tunneling models

    NASA Astrophysics Data System (ADS)

    Smets, Quentin; Verreck, Devin; Verhulst, Anne S.; Rooyackers, Rita; Merckling, Clément; Van De Put, Maarten; Simoen, Eddy; Vandervorst, Wilfried; Collaert, Nadine; Thean, Voon Y.; Sorée, Bart; Groeseneken, Guido; Heyns, Marc M.

    2014-05-01

    Promising predictions are made for III-V tunnel field-effect transistors (TFETs), but there is still uncertainty on the parameters used in the band-to-band tunneling models. Therefore, two simulators are calibrated in this paper; the first one uses a semi-classical tunneling model based on Kane's formalism, and the second one is a quantum mechanical simulator implemented with an envelope function formalism. The calibration is done for In0.53Ga0.47As using several p+/intrinsic/n+ diodes with different intrinsic region thicknesses. The dopant profile is determined by SIMS and capacitance-voltage measurements. Error bars are used based on statistical and systematic uncertainties in the measurement techniques. The obtained parameters are in close agreement with theoretically predicted values and validate the semi-classical and quantum mechanical models. Finally, the models are applied to predict the input characteristics of In0.53Ga0.47As n- and p-line TFETs, with the n-line TFET showing competitive performance compared to MOSFET.

  6. On the precision of aero-thermal simulations for TMT

    NASA Astrophysics Data System (ADS)

    Vogiatzis, Konstantinos; Thompson, Hugh

    2016-08-01

    Environmental effects on the Image Quality (IQ) of the Thirty Meter Telescope (TMT) are estimated by aero-thermal numerical simulations. These simulations utilize Computational Fluid Dynamics (CFD) to estimate, among others, thermal (dome and mirror) seeing as well as wind jitter and blur. As the design matures, guidance obtained from these numerical experiments can influence significant cost-performance trade-offs and even component survivability. The stochastic nature of environmental conditions results in the generation of a large computational solution matrix in order to statistically predict Observatory Performance. Moreover, the relative contribution of selected key subcomponents to IQ increases the parameter space and thus computational cost, while dictating a reduced prediction error bar. The current study presents the strategy followed to minimize prediction time and computational resources, the subsequent physical and numerical limitations and finally the approach to mitigate the issues experienced. In particular, the paper describes a mesh-independence study, the effect of interpolation of CFD results on the TMT IQ metric, and an analysis of the sensitivity of IQ to certain important heat sources and geometric features.

  7. A measurement of CMB cluster lensing with SPT and DES year 1 data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baxter, E. J.; Raghunathan, S.; Crawford, T. M.

    Clusters of galaxies gravitationally lens the cosmic microwave background (CMB) radiation, resulting in a distinct imprint in the CMB on arcminute scales. Measurement of this effect offers a promising way to constrain the masses of galaxy clusters, particularly those at high redshift. We use CMB maps from the South Pole Telescope Sunyaev-Zel'dovich (SZ) survey to measure the CMB lensing signal around galaxy clusters identified in optical imaging from first year observations of the Dark Energy Survey. The cluster catalog used in this analysis contains 3697 members with mean redshift of z̄ = 0.45. We detect lensing of the CMB by the galaxy clusters at 8.1σ significance. Using the measured lensing signal, we constrain the amplitude of the relation between cluster mass and optical richness to roughly 17% precision, finding good agreement with recent constraints obtained with galaxy lensing. The error budget is dominated by statistical noise but includes significant contributions from systematic biases due to the thermal SZ effect and cluster miscentering.

  8. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.

  9. Moving beyond the Bar Plot and the Line Graph to Create Informative and Attractive Graphics

    ERIC Educational Resources Information Center

    Larson-Hall, Jenifer

    2017-01-01

    Graphics are often mistaken for a mere frill in the methodological arsenal of data analysis when in fact they can be one of the simplest and at the same time most powerful methods of communicating statistical information (Tufte, 2001). The first section of the article argues for the statistical necessity of graphs, echoing and amplifying similar…

  10. Accounting for measurement error: a critical but often overlooked process.

    PubMed

    Harris, Edward F; Smith, Richard N

    2009-12-01

    Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
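
    The article does not reproduce a specific estimator here, so the sketch below uses the Dahlberg formula, TEM = sqrt(Σd²/2N) over N pairs of duplicate measurements, which is the estimator most commonly reported in odontometrics and anthropometry; treat that choice as an assumption rather than the authors' method. The example measurements are made up:

    ```python
    import numpy as np

    def technical_error_of_measurement(session1, session2):
        """Dahlberg-style TEM for paired repeat measurements of the same specimens.
        (A standard estimator; assumed here, not taken from the article.)"""
        d = np.asarray(session1, float) - np.asarray(session2, float)
        return np.sqrt(np.sum(d ** 2) / (2 * d.size))

    # Illustrative (made-up) repeated crown-diameter measurements in mm
    s1 = [8.4, 9.1, 7.8, 8.9, 8.2]
    s2 = [8.6, 9.0, 7.9, 8.7, 8.2]
    print(f"TEM = {technical_error_of_measurement(s1, s2):.3f} mm")   # 0.100 mm for these values
    ```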

  11. Analysis of uncertainties and convergence of the statistical quantities in turbulent wall-bounded flows by means of a physically based criterion

    NASA Astrophysics Data System (ADS)

    Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu

    2018-04-01

    The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.

  12. Network problem threshold

    NASA Technical Reports Server (NTRS)

    Gejji, Raghvendra R.

    1992-01-01

    Network transmission errors such as collisions, CRC errors, misalignment, etc. are statistical in nature. Although errors can vary randomly, a high level of errors does indicate specific network problems, e.g. equipment failure. In this project, we have studied the random nature of collisions theoretically as well as by gathering statistics, and established a numerical threshold above which a network problem is indicated with high probability.
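
    One way to make such a threshold concrete, assuming collision counts in a fixed interval are approximately Poisson-distributed around a historical baseline (an assumption consistent with the abstract, but the exact model used in the project is not stated), is to flag counts whose upper-tail probability under the baseline rate is very small:

    ```python
    from scipy import stats

    def collisions_exceed_threshold(observed, baseline_mean, alpha=1e-3):
        """Flag a network problem if the observed collision count is improbably
        high under a Poisson model with the historical baseline mean."""
        p_tail = stats.poisson.sf(observed - 1, baseline_mean)  # P(X >= observed)
        return p_tail < alpha, p_tail

    # e.g. baseline of 20 collisions per interval, 55 observed in the current interval
    flag, p = collisions_exceed_threshold(55, 20.0)
    print(flag, p)   # flagged: such a count is extremely unlikely to be random variation
    ```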

  13. Design and application of hybrid maxillomandibular fixation for facial bone fractures.

    PubMed

    Park, Kang-Nam; Oh, Seung-Min; Lee, Chang-Youn; Kim, Jwa-Young; Yang, Byoung-Eun

    2013-01-01

    A novel maxillomandibular fixation (MMF) procedure using a skeletal anchorage screw (SAS) (in the maxilla) and an arch bar (in the mandible), which we call "hybrid maxillomandibular fixation," was explored in this study. The aims of the study were to examine the efficacy of our hybrid MMF method and to compare periodontal tissue health and occlusal rehabilitation among 3 MMF methods. In total, 112 patients who had undergone open reduction at the Department of Oral and Maxillofacial Surgery between September 2005 and December 2012 were selected for this study. The participants were assigned to one of the following groups: SAS (maxilla), SAS (mandible), SAS-arch bar, or arch bar-arch bar. Periodontal health was evaluated using the Gingival Index, and the perioperative occlusal reproducibility was evaluated using a score of 1 to 3. Statistical analysis was performed using parametric tests (Student t test or 1-way analysis of variance followed by post hoc Tukey test). In the Gingival Index comparison performed 1 month after the surgery, only the group using the arch bars and wiring was significantly different from the other groups (P < 0.05). The occlusal reproducibility scores were not significantly different. The pain and discomfort of the patients were reduced in the hybrid MMF group. The hybrid MMF takes advantage of MMF using both arch bars and SASs for mandibular fractures. In addition, it overcomes many problems presented by previous MMF methods.

  14. A novel automated rat catalepsy bar test system based on a RISC microcontroller.

    PubMed

    Alvarez-Cervera, Fernando J; Villanueva-Toledo, Jairo; Moo-Puc, Rosa E; Heredia-López, Francisco J; Alvarez-Cervera, Margarita; Pineda, Juan C; Góngora-Alfaro, José L

    2005-07-15

    Catalepsy tests performed in rodents treated with drugs that interfere with dopaminergic transmission have been widely used for the screening of drugs with therapeutic potential in the treatment of Parkinson's disease. The basic method for measuring catalepsy intensity is the "standard" bar test. We present here an easy to use microcontroller-based automatic system for recording bar test experiments. The design is simple, compact, and has a low cost. Recording intervals and total experimental time can be programmed within a wide range of values. The resulting catalepsy times are stored, and up to five simultaneous experiments can be recorded. A standard personal computer interface is included. The automated system also permits the elimination of human error associated with factors such as fatigue, distraction, and data transcription, occurring during manual recording. Furthermore, a uniform criterion for timing the cataleptic condition can be achieved. Correlation values between the results obtained with the automated system and those reported by two independent observers ranged between 0.88 and 0.99 (P<0.0001; three treatments, nine animals, 144 catalepsy time measurements).

  15. Three Axis Control of the Hubble Space Telescope Using Two Reaction Wheels and Magnetic Torquer Bars for Science Observations

    NASA Technical Reports Server (NTRS)

    Hur-Diaz, Sun; Wirzburger, John; Smith, Dan

    2008-01-01

    The Hubble Space Telescope (HST) is renowned for its superb pointing accuracy of less than 10 milli-arcseconds absolute pointing error. To accomplish this, the HST relies on its complement of four reaction wheel assemblies (RWAs) for attitude control and four magnetic torquer bars (MTBs) for momentum management. As with most satellites with reaction wheel control, the fourth RWA provides for fault tolerance to maintain three-axis pointing capability should a failure occur and a wheel is lost from operations. If an additional failure is encountered, the ability to maintain three-axis pointing is jeopardized. In order to prepare for this potential situation, HST Pointing Control Subsystem (PCS) Team developed a Two Reaction Wheel Science (TRS) control mode. This mode utilizes two RWAs and four magnetic torquer bars to achieve three-axis stabilization and pointing accuracy necessary for a continued science observing program. This paper presents the design of the TRS mode and operational considerations necessary to protect the spacecraft while allowing for a substantial science program.

  16. Single-variant and multi-variant trend tests for genetic association with next-generation sequencing that are robust to sequencing error.

    PubMed

    Kim, Wonkuk; Londono, Douglas; Zhou, Lisheng; Xing, Jinchuan; Nato, Alejandro Q; Musolf, Anthony; Matise, Tara C; Finch, Stephen J; Gordon, Derek

    2012-01-01

    As with any new technology, next-generation sequencing (NGS) has potential advantages and potential challenges. One advantage is the identification of multiple causal variants for disease that might otherwise be missed by SNP-chip technology. One potential challenge is misclassification error (as with any emerging technology) and the issue of power loss due to multiple testing. Here, we develop an extension of the linear trend test for association that incorporates differential misclassification error and may be applied to any number of SNPs. We call the statistic the linear trend test allowing for error, applied to NGS, or LTTae,NGS. This statistic allows for differential misclassification. The observed data are phenotypes for unrelated cases and controls, coverage, and the number of putative causal variants for every individual at all SNPs. We simulate data considering multiple factors (disease mode of inheritance, genotype relative risk, causal variant frequency, sequence error rate in cases, sequence error rate in controls, number of loci, and others) and evaluate type I error rate and power for each vector of factor settings. We compare our results with two recently published NGS statistics. Also, we create a fictitious disease model based on downloaded 1000 Genomes data for 5 SNPs and 388 individuals, and apply our statistic to those data. We find that the LTTae,NGS maintains the correct type I error rate in all simulations (differential and non-differential error), while the other statistics show large inflation in type I error for lower coverage. Power for all three methods is approximately the same for all three statistics in the presence of non-differential error. Application of our statistic to the 1000 Genomes data suggests that, for the data downloaded, there is a 1.5% sequence misclassification rate over all SNPs. Finally, application of the multi-variant form of LTTae,NGS shows high power for a number of simulation settings, although it can have lower power than the corresponding single-variant simulation results, most probably due to our specification of multi-variant SNP correlation values. In conclusion, our LTTae,NGS addresses two key challenges with NGS disease studies; first, it allows for differential misclassification when computing the statistic; and second, it addresses the multiple-testing issue in that there is a multi-variant form of the statistic that has only one degree of freedom, and provides a single p value, no matter how many loci. Copyright © 2013 S. Karger AG, Basel.
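
    For orientation, the LTTae,NGS extends the classical (Cochran-Armitage style) linear trend test. The sketch below implements only that underlying trend test, without the differential-misclassification adjustment that is the paper's contribution; the genotype scores and counts are illustrative:

    ```python
    import numpy as np
    from scipy import stats

    def linear_trend_test(case_counts, control_counts, scores=(0, 1, 2)):
        """Cochran-Armitage-style linear trend test across genotype categories.
        (Classical test only; the LTTae,NGS error adjustment is not included.)"""
        r = np.asarray(case_counts, float)        # cases per genotype category
        s = np.asarray(control_counts, float)     # controls per genotype category
        n = r + s
        x = np.asarray(scores, float)
        N, R = n.sum(), r.sum()
        u = np.sum(x * (r - n * R / N))           # trend numerator
        var = R * (N - R) / N * (np.sum(n * x ** 2) - np.sum(n * x) ** 2 / N) / N
        z = u / np.sqrt(var)
        return z, 2 * stats.norm.sf(abs(z))       # z statistic and two-sided p-value

    # Illustrative counts for genotypes coded 0, 1, 2
    print(linear_trend_test([30, 50, 20], [50, 40, 10]))
    ```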

  17. Single variant and multi-variant trend tests for genetic association with next generation sequencing that are robust to sequencing error

    PubMed Central

    Kim, Wonkuk; Londono, Douglas; Zhou, Lisheng; Xing, Jinchuan; Nato, Andrew; Musolf, Anthony; Matise, Tara C.; Finch, Stephen J.; Gordon, Derek

    2013-01-01

    As with any new technology, next generation sequencing (NGS) has potential advantages and potential challenges. One advantage is the identification of multiple causal variants for disease that might otherwise be missed by SNP-chip technology. One potential challenge is misclassification error (as with any emerging technology) and the issue of power loss due to multiple testing. Here, we develop an extension of the linear trend test for association that incorporates differential misclassification error and may be applied to any number of SNPs. We call the statistic the linear trend test allowing for error, applied to NGS, or LTTae,NGS. This statistic allows for differential misclassification. The observed data are phenotypes for unrelated cases and controls, coverage, and the number of putative causal variants for every individual at all SNPs. We simulate data considering multiple factors (disease mode of inheritance, genotype relative risk, causal variant frequency, sequence error rate in cases, sequence error rate in controls, number of loci, and others) and evaluate type I error rate and power for each vector of factor settings. We compare our results with two recently published NGS statistics. Also, we create a fictitious disease model, based on downloaded 1000 Genomes data for 5 SNPs and 388 individuals, and apply our statistic to that data. We find that the LTTae,NGS maintains the correct type I error rate in all simulations (differential and non-differential error), while the other statistics show large inflation in type I error for lower coverage. Power for all three methods is approximately the same for all three statistics in the presence of non-differential error. Application of our statistic to the 1000 Genomes data suggests that, for the data downloaded, there is a 1.5% sequence misclassification rate over all SNPs. Finally, application of the multi-variant form of LTTae,NGS shows high power for a number of simulation settings, although it can have lower power than the corresponding single variant simulation results, most probably due to our specification of multi-variant SNP correlation values. In conclusion, our LTTae,NGS addresses two key challenges with NGS disease studies; first, it allows for differential misclassification when computing the statistic; and second, it addresses the multiple-testing issue in that there is a multi-variant form of the statistic that has only one degree of freedom, and provides a single p-value, no matter how many loci. PMID:23594495

  18. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  19. Jovian Chromophore Characteristics from Multispectral HST Images

    NASA Technical Reports Server (NTRS)

    Strycker, Paul D.; Chanover, Nancy J.; Simon-Miller, Amy A.; Banfield, Don; Gierasch, Peter J.

    2011-01-01

    The chromophores responsible for coloring the jovian atmosphere are embedded within Jupiter's vertical aerosol structure. Sunlight propagates through this vertical distribution of aerosol particles, whose colors are defined by the single-scattering albedo ω̄0(λ), and we remotely observe the culmination of the radiative transfer as I/F(λ). In this study, we employed a radiative transfer code to retrieve ω̄0(λ) for particles in Jupiter's tropospheric haze at seven wavelengths in the near-UV and visible regimes. The data consisted of images of the 2008 passage of Oval BA to the south of the Great Red Spot obtained by the Wide Field Planetary Camera 2 on board the Hubble Space Telescope. We present derived particle colors for locations that were selected from 14 weather regions, which spanned a large range of observed colors. All ω̄0(λ) curves were absorbing in the blue, and ω̄0(λ) increased monotonically to approximately unity as wavelength increased. We found accurate fits to all ω̄0(λ) curves using an empirically derived functional form: ω̄0(λ) = 1 − A exp(−Bλ). The best-fit parameters for the mean ω̄0(λ) curve were A = 25.4 and B = 0.0149 for λ in units of nm. We performed a principal component analysis (PCA) on our ω̄0(λ) results and found that one or two independent chromophores were sufficient to produce the variations in ω̄0(λ). A PCA of I/F(λ) for the same jovian locations resulted in principal components (PCs) with roughly the same variances as the ω̄0(λ) PCA, but they did not result in a one-to-one mapping of PC amplitudes between the ω̄0(λ) PCA and the I/F(λ) PCA. We suggest that statistical analyses performed on I/F(λ) image cubes have limited applicability to the characterization of chromophores in the jovian atmosphere due to the sensitivity of I/F(λ) to horizontal variations in the vertical aerosol distribution.
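
    The quoted best-fit values can be sanity-checked numerically. The sketch below (Python/scipy) fits the same two-parameter form ω̄0(λ) = 1 − A exp(−Bλ); the wavelength grid and albedo values are made up for illustration and are not the paper's retrievals.

```python
import numpy as np
from scipy.optimize import curve_fit

def albedo_model(lam_nm, A, B):
    """Empirical form from the paper: omega0(lambda) = 1 - A * exp(-B * lambda)."""
    return 1.0 - A * np.exp(-B * lam_nm)

# Illustrative wavelengths (nm) and albedos only; not the published retrievals.
lam = np.array([255.0, 336.0, 410.0, 502.0, 547.0, 588.0, 631.0])
omega = np.array([0.45, 0.82, 0.94, 0.985, 0.993, 0.996, 0.998])

(A_fit, B_fit), _ = curve_fit(albedo_model, lam, omega, p0=(25.4, 0.0149))
print(A_fit, B_fit)   # compare with the paper's mean-curve values A = 25.4, B = 0.0149
```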

  20. Death Certification Errors and the Effect on Mortality Statistics.

    PubMed

    McGivern, Lauri; Shulman, Leanne; Carney, Jan K; Shapiro, Steven; Bundock, Elizabeth

    Errors in cause and manner of death on death certificates are common and affect families, mortality statistics, and public health research. The primary objective of this study was to characterize errors in the cause and manner of death on death certificates completed by non-Medical Examiners. A secondary objective was to determine the effects of errors on national mortality statistics. We retrospectively compared 601 death certificates completed between July 1, 2015, and January 31, 2016, from the Vermont Electronic Death Registration System with clinical summaries from medical records. Medical Examiners, blinded to original certificates, reviewed summaries, generated mock certificates, and compared mock certificates with original certificates. They then graded errors using a scale from 1 to 4 (higher numbers indicated increased impact on interpretation of the cause) to determine the prevalence of minor and major errors. They also compared International Classification of Diseases, 10th Revision (ICD-10) codes on original certificates with those on mock certificates. Of 601 original death certificates, 319 (53%) had errors; 305 (51%) had major errors; and 59 (10%) had minor errors. We found no significant differences by certifier type (physician vs nonphysician). We did find significant differences in major errors by place of death (P < .001): certificates for deaths occurring in hospitals were more likely to have major errors than certificates for deaths occurring at a private residence (59% vs 39%, P < .001). A total of 580 (93%) death certificates had a change in ICD-10 codes between the original and mock certificates, of which 348 (60%) had a change in the underlying cause-of-death code. Error rates on death certificates in Vermont are high and extend to ICD-10 coding, thereby affecting national mortality statistics. Surveillance and certifier education must expand beyond local and state efforts. Simplifying and standardizing underlying literal text for cause of death may improve accuracy, decrease coding errors, and improve national mortality statistics.

  1. Science 101: When Drawing Graphs from Collected Data, Why Don't You Just "Connect the Dots?"

    ERIC Educational Resources Information Center

    Robertson, William C.

    2007-01-01

    Using "error bars" on graphs is a good way to help students see that, within the inherent uncertainty of the measurements due to the instruments used for measurement, the data points do, in fact, lie along the line that represents the linear relationship. In this article, the author explains why connecting the dots on graphs of collected data is…

  2. Ionospheric Modeling: Development, Verification and Validation

    DTIC Science & Technology

    2007-08-15

    The University of Massachusetts (UMass), Lowell, has introduced a new version of their ionogram autoscaling program, ARTIST Version 5. A very... [Associated documents include: "Investigation of the Reliability of the ESIR Ionogram Autoscaling Method (Expert System for Ionogram Reduction)" (ESIR.book.pdf, Dec 06) and "Quality...Figures and Error Bars for Autoscaled Vertical Incidence Ionograms. Background and User Documentation for QualScan V2007.2" (AFRL_QualScan.book.pdf, Feb ...).]

  3. DataPlus™ - a revolutionary applications generator for DOS hand-held computers

    Treesearch

    David Dean; Linda Dean

    2000-01-01

    DataPlus allows the user to easily design data collection templates for DOS-based hand-held computers that mimic clipboard data sheets. The user designs and tests the application on the desktop PC and then transfers it to a DOS field computer. Other features include: error checking, missing data checks, and sensor input from RS-232 devices such as bar code wands,...

  4. Improving radiopharmaceutical supply chain safety by implementing bar code technology.

    PubMed

    Matanza, David; Hallouard, François; Rioufol, Catherine; Fessi, Hatem; Fraysse, Marc

    2014-11-01

    The aim of this study was to describe and evaluate an approach for improving radiopharmaceutical supply chain safety by implementing bar code technology. We first evaluated the current situation of our radiopharmaceutical supply chain and, by means of the ALARM protocol, analysed two dispensing errors that occurred in our department. Thereafter, we implemented a bar code system to secure selected key stages of the radiopharmaceutical supply chain. Finally, we evaluated the cost of this implementation, from overtime and overheads to additional radiation exposure to workers. An analysis of the events that occurred revealed a lack of identification of prepared or dispensed drugs. Moreover, the evaluation of the current radiopharmaceutical supply chain showed that the dispensation and injection steps needed to be further secured. The bar code system was used to reinforce product identification at three selected key stages: at usable stock entry; at preparation-dispensation; and during administration, making it possible to check conformity between the labelling of the delivered product (identity and activity) and the prescription. The extra time needed for all these steps had no impact on the number and successful conduct of examinations. The investment cost was limited (2600 euros for new material and 30 euros a year for additional supplies) because of pre-existing computing equipment. With regard to radiation exposure, the labelling and scanning of radiolabelled preparation vials caused only an insignificant additional hand exposure for workers under the new organization. Implementation of bar code technology is now an essential part of a global securing approach towards optimum patient management.

  5. The (mis)reporting of statistical results in psychology journals.

    PubMed

    Bakker, Marjan; Wicherts, Jelte M

    2011-09-01

    In order to study the prevalence, nature (direction), and causes of reporting errors in psychology, we checked the consistency of reported test statistics, degrees of freedom, and p values in a random sample of high- and low-impact psychology journals. In a second study, we established the generality of reporting errors in a random sample of recent psychological articles. Our results, on the basis of 281 articles, indicate that around 18% of statistical results in the psychological literature are incorrectly reported. Inconsistencies were more common in low-impact journals than in high-impact journals. Moreover, around 15% of the articles contained at least one statistical conclusion that proved, upon recalculation, to be incorrect; that is, recalculation rendered the previously significant result insignificant, or vice versa. These errors were often in line with researchers' expectations. We classified the most common errors and contacted authors to shed light on the origins of the errors.
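
    The consistency checks described here can be reproduced mechanically. The sketch below (Python/scipy) recomputes a p-value from a reported test statistic and degrees of freedom and flags gross inconsistencies; a two-tailed t test, the tolerance value, and the example numbers are illustrative assumptions, and this is not the authors' actual procedure.

```python
from scipy import stats

def check_t_report(t_value, df, reported_p, alpha=0.05, tol=0.005):
    """Recompute a two-tailed p-value from a reported t statistic and df.

    tol should reflect the rounding precision of the reported p-value.
    Returns the recomputed p, whether it is inconsistent with the report,
    and whether the inconsistency would flip the significance decision.
    """
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)
    inconsistent = abs(recomputed_p - reported_p) > tol
    decision_error = inconsistent and ((reported_p < alpha) != (recomputed_p < alpha))
    return recomputed_p, inconsistent, decision_error

# Example: a result reported as t(28) = 2.20, p = .02
print(check_t_report(2.20, 28, 0.02))
```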

  6. LEARNING STRATEGY REFINEMENT REVERSES EARLY SENSORY CORTICAL MAP EXPANSION BUT NOT BEHAVIOR: SUPPORT FOR A THEORY OF DIRECTED CORTICAL SUBSTRATES OF LEARNING AND MEMORY

    PubMed Central

    Elias, Gabriel A.; Bieszczad, Kasia M.; Weinberger, Norman M.

    2015-01-01

    Primary sensory cortical fields develop highly specific associative representational plasticity, notably enlarged area of representation of reinforced signal stimuli within their topographic maps. However, overtraining subjects after they have solved an instrumental task can reduce or eliminate the expansion while the successful behavior remains. As the development of this plasticity depends on the learning strategy used to solve a task, we asked whether the loss of expansion is due to the strategy used during overtraining. Adult male rats were trained in a three-tone auditory discrimination task to bar-press to the CS+ for water reward and refrain from doing so during the CS− tones and silent intertrial intervals; errors were punished by a flashing light and time-out penalty. Groups acquired this task to a criterion within seven training sessions by relying on a strategy that was “bar-press from tone-onset-to-error signal” (“TOTE”). Three groups then received different levels of overtraining: Group ST, none; Group RT, one week; Group OT, three weeks. Post-training mapping of their primary auditory fields (A1) showed that Groups ST and RT had developed significantly expanded representational areas, specifically restricted to the frequency band of the CS+ tone. In contrast, the A1 of Group OT was no different from naïve controls. Analysis of learning strategy revealed this group had shifted strategy to a refinement of TOTE in which they self-terminated bar-presses before making an error (“iTOTE”). Across all animals, the greater the use of iTOTE, the smaller was the representation of the CS+ in A1. Thus, the loss of cortical expansion is attributable to a shift or refinement in strategy. This reversal of expansion was considered in light of a novel theoretical framework (CONCERTO) highlighting four basic principles of brain function that resolve anomalous findings and explaining why even a minor change in strategy would involve concomitant shifts of involved brain sites, including reversal of cortical expansion. PMID:26596700

  7. A Comparison of Full and Empirical Bayes Techniques for Inferring Sea Level Changes from Tide Gauge Records

    NASA Astrophysics Data System (ADS)

    Piecuch, C. G.; Huybers, P. J.; Tingley, M.

    2016-12-01

    Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, featuring missing values, and reflecting land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
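
    The rank-histogram diagnostic used to compare the two methods is straightforward to reproduce on synthetic data. The sketch below (Python) shows how under-dispersed posteriors, like the too-narrow empirical Bayes intervals described above, pile up in the extreme ranks; all data here are synthetic stand-ins, not the tide gauge analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_histogram(truth, ensembles):
    """Rank of each true value within its posterior ensemble.

    truth:     (n,) array of known surrogate values.
    ensembles: (n, m) array of m posterior samples for each value.
    A roughly flat histogram indicates well-calibrated credible intervals;
    piling up in the lowest/highest ranks indicates intervals that are too narrow.
    """
    ranks = (ensembles < truth[:, None]).sum(axis=1)
    return np.bincount(ranks, minlength=ensembles.shape[1] + 1)

n, m = 2000, 99
centers = rng.normal(size=n)
truth = centers + rng.normal(size=n)                            # true spread = 1
well_calibrated = centers[:, None] + 1.0 * rng.normal(size=(n, m))
too_narrow = centers[:, None] + 0.7 * rng.normal(size=(n, m))   # 70% of the true spread

print(rank_histogram(truth, well_calibrated))   # approximately flat
print(rank_histogram(truth, too_narrow))        # U-shaped
```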

  8. Learning strategy refinement reverses early sensory cortical map expansion but not behavior: Support for a theory of directed cortical substrates of learning and memory.

    PubMed

    Elias, Gabriel A; Bieszczad, Kasia M; Weinberger, Norman M

    2015-12-01

    Primary sensory cortical fields develop highly specific associative representational plasticity, notably enlarged area of representation of reinforced signal stimuli within their topographic maps. However, overtraining subjects after they have solved an instrumental task can reduce or eliminate the expansion while the successful behavior remains. As the development of this plasticity depends on the learning strategy used to solve a task, we asked whether the loss of expansion is due to the strategy used during overtraining. Adult male rats were trained in a three-tone auditory discrimination task to bar-press to the CS+ for water reward and refrain from doing so during the CS- tones and silent intertrial intervals; errors were punished by a flashing light and time-out penalty. Groups acquired this task to a criterion within seven training sessions by relying on a strategy that was "bar-press from tone-onset-to-error signal" ("TOTE"). Three groups then received different levels of overtraining: Group ST, none; Group RT, one week; Group OT, three weeks. Post-training mapping of their primary auditory fields (A1) showed that Groups ST and RT had developed significantly expanded representational areas, specifically restricted to the frequency band of the CS+ tone. In contrast, the A1 of Group OT was no different from naïve controls. Analysis of learning strategy revealed this group had shifted strategy to a refinement of TOTE in which they self-terminated bar-presses before making an error ("iTOTE"). Across all animals, the greater the use of iTOTE, the smaller was the representation of the CS+ in A1. Thus, the loss of cortical expansion is attributable to a shift or refinement in strategy. This reversal of expansion was considered in light of a novel theoretical framework (CONCERTO) highlighting four basic principles of brain function that resolve anomalous findings and explaining why even a minor change in strategy would involve concomitant shifts of involved brain sites, including reversal of cortical expansion. Published by Elsevier Inc.

  9. Frequency and properties of bars in cluster and field galaxies at intermediate redshifts

    NASA Astrophysics Data System (ADS)

    Barazza, F. D.; Jablonka, P.; Desai, V.; Jogee, S.; Aragón-Salamanca, A.; De Lucia, G.; Saglia, R. P.; Halliday, C.; Poggianti, B. M.; Dalcanton, J. J.; Rudnick, G.; Milvang-Jensen, B.; Noll, S.; Simard, L.; Clowe, D. I.; Pelló, R.; White, S. D. M.; Zaritsky, D.

    2009-04-01

    We present a study of large-scale bars in field and cluster environments out to redshifts of ~0.8 using a final sample of 945 moderately inclined disk galaxies drawn from the EDisCS project. We characterize bars and their host galaxies and look for relations between the presence of a bar and the properties of the underlying disk. We investigate whether the fraction and properties of bars in clusters are different from their counterparts in the field. The properties of bars and disks are determined by ellipse fits to the surface brightness distribution of the galaxies using HST/ACS images in the F814W filter. The bar identification is based on quantitative criteria after highly inclined (> 60°) systems have been excluded. The total optical bar fraction in the redshift range z = 0.4-0.8 (median z = 0.60), averaged over the entire sample, is 25% (20% for strong bars). For the cluster and field subsamples, we measure bar fractions of 24% and 29%, respectively. We find that bars in clusters are on average longer than in the field and preferentially found close to the cluster center, where the bar fraction is somewhat higher (~31%) than at larger distances (~18%). These findings, however, rely on a relatively small subsample and might be affected by small number statistics. In agreement with local studies, we find that disk-dominated galaxies have a higher optical bar fraction (~45%) than bulge-dominated galaxies (~15%). This result is based on Hubble types and effective radii and does not change with redshift. The latter finding implies that bar formation or dissolution is strongly connected to the emergence of the morphological structure of a disk and is typically accompanied by a transition in the Hubble type. The question of whether internal or external factors are more important for bar formation and evolution cannot be answered definitively. On the one hand, the bar fraction and properties of cluster and field samples of disk galaxies are quite similar, indicating that internal processes are crucial for bar formation. On the other hand, we find evidence that cluster centers are favorable locations for bars, which suggests that the internal processes responsible for bar growth are supported by the typical interactions taking place in such environments. Based on observations collected at the European Southern Observatory, Chile, as part of large programme 166.A-0162 (the ESO Distant Cluster Survey). Also based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with proposal 9476. Support for this proposal was provided by NASA through a grant from Space Telescope Science Institute.

  10. A Novel Field-Based Approach for Analysing Bar Reworking: Trialled in the Tongariro River, New Zealand

    NASA Astrophysics Data System (ADS)

    Reid, H. E.; Williams, R. D.; Coleman, S.; Brierley, G. J.

    2012-04-01

    Bars are key morphological units within river systems, fashioning the sediment regime and bedload transport processes within a reach. Reworking of these features underpins channel adjustment at larger scales, thereby acting as a key determinant of channel stability. Yet, despite their fundamental importance to channel evolution, few investigations have acquired spatially continuous data on bar morphology and sediment particle size to facilitate detailed investigations on bar reworking. To this end, four bars along a 10 km reach of a wandering gravel-bed river were surveyed, capturing downstream changes in slope, bed material size and channel planform. High resolution surveys of bar topography and grain-size roughness were acquired using Terrestrial Laser Scanning (TLS). The resulting point clouds were then filtered to a quasi-uniform point spacing of 0.05 m and statistical attributes were extracted at a 1 m resolution. The detrended standard deviations from the TLS data were then correlated to the underlying median grain size (D50), which was measured using the Wolman transect method. The resulting linear regression model had a strong relationship (R² = 0.92) and was used to map median sediment size across each bar. Representative cross-sections were used to interpolate water surfaces across each bar, for flood events with recurrence intervals (RI) of 2.33, 10, 20, 50 and 100 years, enabling flow depth to be calculated. The ratio of dimensionless shear stress (from the depth raster and slope) over critical shear stress (from the D50 raster) was used to map entrainment across each bar at 1 m resolution for each flood event. This is referred to as 'relative erodibility'. The two downstream bars, which are characterised by low slope and smaller bed material, underwent greater entrainment during the more frequent 2.33 RI flood than the higher-energy upstream bars, which required floods with an RI of 10 or greater. Reworking was also assessed for within-bar geomorphic units. This work demonstrated that floods with a 2.33 year RI flush material on the bar tail, while 10 year RI floods rework the supra-platform and back channel deposits and only the largest flows (RI ≥ 50) are able to entrain the bar head materials. Interestingly, despite dramatic differences between slope, grain-size and planform, all bar heads were found to undergo minimal entrainment (between 10 and 20%) during the frequent 2.33 RI flood. This indicates that resistance at the bar head during frequent floods promotes the deposition of finer-grained, more transient units in their lee. This process-based appraisal explains channel adjustment at the reach-scale, whereby the proportion of the bar composed of more frequently entrained units (tail, backchannel, supra-platform) relative to more static units at the bar head exerts a direct influence upon the extent of adjustment of the bar and the reach as a whole.
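
    The abstract does not spell out the entrainment formulation. A common dimensionless (Shields) form of the 'relative erodibility' ratio, with assumed constants, might look like the sketch below (Python); the constants and example numbers are illustrative, not the study's values.

```python
import numpy as np

# Assumed constants (illustrative; not taken from the paper)
RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81    # water/sediment density (kg/m3), gravity (m/s2)
THETA_CRIT = 0.045                         # critical Shields number, a typical gravel value

def relative_erodibility(depth_m, slope, d50_m, theta_crit=THETA_CRIT):
    """Ratio of dimensionless shear stress to its critical value.

    Values > 1 indicate cells where the local D50 is predicted to be entrained
    for the modelled flow depth and slope.
    """
    tau = RHO_W * G * depth_m * slope                    # boundary shear stress (Pa)
    theta = tau / ((RHO_S - RHO_W) * G * d50_m)          # dimensionless (Shields) stress
    return theta / theta_crit

# Example: 1.2 m deep flow on a 0.004 slope over 45 mm gravel
print(relative_erodibility(np.array([1.2]), 0.004, np.array([0.045])))   # ~1.4, entrained
```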

  11. Globular Clusters: Absolute Proper Motions and Galactic Orbits

    NASA Astrophysics Data System (ADS)

    Chemel, A. A.; Glushkova, E. V.; Dambis, A. K.; Rastorguev, A. S.; Yalyalieva, L. N.; Klinichev, A. D.

    2018-04-01

    We cross-match objects from several different astronomical catalogs to determine the absolute proper motions of stars within the 30-arcmin radius fields of 115 Milky-Way globular clusters with an accuracy of 1-2 mas yr⁻¹. The proper motions are based on positional data recovered from the USNO-B1, 2MASS, URAT1, ALLWISE, UCAC5, and Gaia DR1 surveys with up to ten positions spanning an epoch difference of up to about 65 years, and reduced to the Gaia DR1 TGAS frame using UCAC5 as the reference catalog. Cluster members are photometrically identified by selecting horizontal- and red-giant branch stars on color-magnitude diagrams, and the mean absolute proper motions of the clusters with a typical formal error of about 0.4 mas yr⁻¹ are computed by averaging the proper motions of selected members. The inferred absolute proper motions of clusters are combined with available radial-velocity data and heliocentric distance estimates to compute the cluster orbits in terms of Galactic potential models based on a Miyamoto-Nagai disk, a Hernquist spheroid, and a modified isothermal dark-matter halo (axisymmetric model without a bar) and the same model plus a rotating Ferrers bar (non-axisymmetric). Five distant clusters have higher-than-escape velocities, most likely due to large errors in the computed transverse velocities, whereas the computed orbits of all other clusters remain bound to the Galaxy. Unlike previously published results, we find the bar to affect substantially the orbits of most of the clusters, even those at large Galactocentric distances, bringing appreciable chaotization, especially in the portions of the orbits close to the Galactic center, and stretching out the orbits of some of the thick-disk clusters.

  12. VizieR Online Data Catalog: CGS. V. Statistical study of bars and buckled bars (Li+, 2017)

    NASA Astrophysics Data System (ADS)

    Li, Z.-Y.; Ho, L. C.; Barth, A. J.

    2018-04-01

    Images in B-, V-, R-, and I-band filters were taken with the du Pont 2.5m telescope at Las Campanas Observatory, with a field of view (FOV) of 8.9'x8.9'. The typical depths of the B-, V-, R-, and I-band images are 27.5, 26.9, 26.4, and 25.3mag/arcsec2, respectively. More information about the Carnegie-Irvine Galaxy Survey (CGS) design, data reduction, and photometric measurements can be found in Papers I (Ho+, 2011, J/ApJS/197/21) and II (Li+, 2011, J/ApJS/197/22). In this work, we use the CGS I-band images to minimize the effect of dust extinction. The selected sample contains 376 disk galaxies with 264 disks hosting bars. (1 data file).

  13. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not part of the 2° by 1° GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.
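
    The covariance matching approach is only described at a high level in this summary. In the simplest setting, where the residual covariance is modelled as a linear combination of known candidate covariance structures, the "matching" reduces to an ordinary least-squares problem, as in the illustrative sketch below (Python); the decomposition into candidate structures is an assumption for illustration, not the thesis' actual parameterization.

```python
import numpy as np

def match_covariances(sample_cov, candidate_covs):
    """Fit weights a_i such that sample_cov ~ sum_i a_i * candidate_covs[i].

    Each covariance matrix is vectorized and the weights are obtained by
    ordinary least squares; the weights are the variances attributed to each
    candidate error structure (e.g. model error vs. measurement error).
    """
    A = np.column_stack([C.ravel() for C in candidate_covs])
    b = sample_cov.ravel()
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return weights
```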

  14. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis

    PubMed Central

    Lin, Johnny; Bentler, Peter M.

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne’s asymptotically distribution-free method and Satorra and Bentler’s mean-scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby’s study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511

  15. Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science.

    PubMed

    Veldkamp, Coosje L S; Nuijten, Michèle B; Dominguez-Alvarez, Linda; van Assen, Marcel A L M; Wicherts, Jelte M

    2014-01-01

    Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this 'co-piloting' currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.

  16. Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science

    PubMed Central

    Veldkamp, Coosje L. S.; Nuijten, Michèle B.; Dominguez-Alvarez, Linda; van Assen, Marcel A. L. M.; Wicherts, Jelte M.

    2014-01-01

    Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this ‘co-piloting’ currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors. PMID:25493918

  17. Effects of a Hybrid Online and In-Person Training Program Designed to Reduce Alcohol Sales to Obviously Intoxicated Patrons

    PubMed Central

    Toomey, Traci L.; Lenk, Kathleen M.; Erickson, Darin J.; Horvath, Keith J.; Ecklund, Alexandra M.; Nederhoff, Dawn M.; Hunt, Shanda L.; Nelson, Toben F.

    2017-01-01

    Objective: Overservice of alcohol (i.e., selling alcohol to intoxicated patrons) continues to be a problem at bars and restaurants, contributing to serious consequences such as traffic crashes and violence. We developed a training program for managers of bars and restaurants, eARM™, focusing on preventing overservice of alcohol. The program included online and face-to-face components to help create and implement establishment-specific policies. Method: We conducted a large, randomized controlled trial in bars and restaurants in one metropolitan area in the midwestern United States to evaluate effects of the eARM program on the likelihood of selling alcohol to obviously intoxicated patrons. Our outcome measure was pseudo-intoxicated purchase attempts—buyers acted out signs of intoxication while attempting to purchase alcohol—conducted at baseline and then at 1 month, 3 months, and 6 months after training. We conducted intention-to-treat analyses on changes in purchase attempts in intervention (n = 171) versus control (n = 163) bars/restaurants using a Time × Condition interaction, as well as planned contrasts between baseline and follow-up purchase attempts. Results: The overall Time × Condition interaction was not statistically significant. At 1 month after training, we observed a 6% relative reduction in likelihood of selling to obviously intoxicated patrons in intervention versus control bars/restaurants. At 3 months after training, this difference widened to a 12% relative reduction; however, at 6 months this difference dissipated. None of these specific contrasts was statistically significant at the .05 level. Conclusions: The observed effects of this enhanced training program are consistent with prior research showing modest initial effects followed by a decay within 6 months of the core training. Unless better training methods are identified, training programs are inadequate as the sole approach to reduce overservice of alcohol. PMID:28317507

  18. Evaluation of load transfer devices : final report.

    DOT National Transportation Integrated Search

    1975-11-01

    This report describes the procedures and findings of a study conducted to evaluate two types of load transfer devices used in Louisiana--steel dowel bars and starlugs (a patented device). A statistical comparison was accomplished by evaluating existi...

  19. Precision Measurement of the e⁺e⁻ → Λc⁺Λ̄c⁻ Cross Section Near Threshold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ablikim, M.; Achasov, M. N.; Ahmed, S.

    The cross section of the e⁺e⁻ → Λc⁺Λ̄c⁻ process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. A non-zero cross section near the Λc⁺Λ̄c⁻ production threshold is clearly observed. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaumier, Michael J.

    This thesis discusses the process of extracting the longitudinal asymmetry, $$A_L^{W±}$$, describing W → μ production in forward kinematic regimes. This asymmetry is used to constrain our understanding of the polarized parton distribution functions characterizing $$\bar{u}$$ and $$\bar{d}$$ sea quarks in the proton. This asymmetry will be used to constrain the overall contribution of the sea quarks to the total proton spin. The asymmetry is evaluated over the pseudorapidity range of the PHENIX Muon Arms, 2.1 < |η| < 2.6, for longitudinally polarized proton-proton collisions at √s = 510 GeV. In particular, I will discuss the statistical methods used to characterize real muonic W decays and the various background processes, including a discussion of likelihood event selection and the Extended Unbinned Maximum Likelihood fit. These statistical methods serve to estimate the yields of W muonic decays, which are used to calculate the longitudinal asymmetry.

  1. Precision Measurement of the e⁺e⁻ → Λc⁺Λ̄c⁻ Cross Section Near Threshold

    DOE PAGES

    Ablikim, M.; Achasov, M. N.; Ahmed, S.; ...

    2018-03-29

    The cross section of the e⁺e⁻ → Λc⁺Λ̄c⁻ process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. A non-zero cross section near the Λc⁺Λ̄c⁻ production threshold is clearly observed. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.

  2. Incorporating a Spatial Prior into Nonlinear D-Bar EIT Imaging for Complex Admittivities.

    PubMed

    Hamilton, Sarah J; Mueller, J L; Alsaker, M

    2017-02-01

    Electrical Impedance Tomography (EIT) aims to recover the internal conductivity and permittivity distributions of a body from electrical measurements taken on electrodes on the surface of the body. The reconstruction task is a severely ill-posed nonlinear inverse problem that is highly sensitive to measurement noise and modeling errors. Regularized D-bar methods have shown great promise in producing noise-robust algorithms by employing a low-pass filtering of nonlinear (nonphysical) Fourier transform data specific to the EIT problem. Including prior data with the approximate locations of major organ boundaries in the scattering transform provides a means of extending the radius of the low-pass filter to include higher frequency components in the reconstruction, in particular, features that are known with high confidence. This information is additionally included in the system of D-bar equations with an independent regularization parameter from that of the extended scattering transform. In this paper, this approach is used in the 2-D D-bar method for admittivity (conductivity as well as permittivity) EIT imaging. Noise-robust reconstructions are presented for simulated EIT data on chest-shaped phantoms with a simulated pneumothorax and pleural effusion. No assumption of the pathology is used in the construction of the prior, yet the method still produces significant enhancements of the underlying pathology (pneumothorax or pleural effusion) even in the presence of strong noise.

  3. Branching Fraction Measurements of the Color-Suppressed Decays B0bar to D(*)0 pi0, D(*)0 eta, D(*)0 omega, and D(*)0 eta_prime and Measurement of the Polarization in the Decay B0bar to D*0 omega

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lees, J.P.; Poireau, V.; Tisserand, V.

    2012-02-14

    We report updated branching fraction measurements of the color-suppressed decays B̄⁰ → D⁰π⁰, D*⁰π⁰, D⁰η, D*⁰η, D⁰ω, D*⁰ω, D⁰η′, and D*⁰η′. We measure the branching fractions (×10⁻⁴): B(B̄⁰ → D⁰π⁰) = 2.69 ± 0.09 ± 0.13, B(B̄⁰ → D*⁰π⁰) = 3.05 ± 0.14 ± 0.28, B(B̄⁰ → D⁰η) = 2.53 ± 0.09 ± 0.11, B(B̄⁰ → D*⁰η) = 2.69 ± 0.14 ± 0.23, B(B̄⁰ → D⁰ω) = 2.57 ± 0.11 ± 0.14, B(B̄⁰ → D*⁰ω) = 4.55 ± 0.24 ± 0.39, B(B̄⁰ → D⁰η′) = 1.48 ± 0.13 ± 0.07, and B(B̄⁰ → D*⁰η′) = 1.49 ± 0.22 ± 0.15. We also present the first measurement of the longitudinal polarization fraction of the decay channel D*⁰ω, f_L = (66.5 ± 4.7 ± 1.5)%. In the above, the first uncertainty is statistical and the second is systematic. The results are based on a sample of (454 ± 5) × 10⁶ BB̄ pairs collected at the Υ(4S) resonance, with the BABAR detector at the PEP-II storage rings at SLAC. The measurements are the most precise determinations of these quantities from a single experiment. They are compared to theoretical predictions obtained by factorization, Soft Collinear Effective Theory (SCET) and perturbative QCD (pQCD). We find that the presence of final state interactions is favored and the measurements are in better agreement with SCET than with pQCD.

  4. Spectroscopics database for warm Xenon and Iron in Astrophysics and Laboratory Astrophysics conditions

    NASA Astrophysics Data System (ADS)

    Busquet, Michel; Klapisch, Marcel; Bar-Shalom, Avi; Oreg, Josse

    2010-11-01

    The main contribution to the spectral properties of astrophysical mixtures often comes from Iron. On the other hand, in the so-called domain of "Laboratory Astrophysics," where astrophysics phenomena are scaled down to the laboratory, Xenon (and Argon) are commonly used gases. At so-called "warm" temperatures (T = 5-50 eV), L-shell Iron and M-shell Xenon present a very large number of spectral lines, originating from billions of levels. More often than not, Local Thermodynamical Equilibrium is assumed, leading to noticeable simplification of the computation. Nevertheless, complex and powerful atomic structure codes are required. We take advantage of the powerful statistics and numerics included in our atomic structure codes, STA[1] and HULLAC[2], to generate the required spectra. Recent improvements in these areas (statistics, numerics, and convergence control) allow us to obtain large databases (ρ × T grids of > 200×200 points, and > 10000 frequencies) for temperatures down to a few eV. We plan to port these improvements to the NLTE code SCROLL[3]. [1] A. Bar-Shalom, et al, Phys. Rev. A 40, 3183 (1989) [2] M. Busquet, et al, J. Phys. IV France 133, 973-975 (2006); A. Bar-Shalom, M. Klapisch, J. Oreg, JQSRT 71, 169 (2001) [3] A. Bar-Shalom, et al, Phys. Rev. E 56, R70 (1997)

  5. Impact of crystal orientation on ohmic contact resistance of enhancement-mode p-GaN gate high electron mobility transistors on 200 mm silicon substrates

    NASA Astrophysics Data System (ADS)

    Van Hove, Marleen; Posthuma, Niels; Geens, Karen; Wellekens, Dirk; Li, Xiangdong; Decoutere, Stefaan

    2018-04-01

    p-GaN gate enhancement mode power transistors were processed in a Si CMOS processing line on 200 mm Si(111) substrates using Au-free metallization schemes. Si/Ti/Al/Ti/TiN ohmic contacts were formed after full recessing of the AlGaN barrier, followed by a HCl-based wet cleaning step. The electrical performance of devices aligned to the [11\\bar{2}0] and the perpendicular [1\\bar{1}00] directions was compared. The ohmic contact resistance was decreased from 1 Ω·mm for the [11\\bar{2}0] direction to 0.35 Ω·mm for the [1\\bar{1}00] direction, resulting in an increase of the drain saturation current from 0.5 to 0.6 A/mm, and a reduction of the on-resistance from 6.4 to 5.1 Ω·mm. Moreover, wafer mapping of the device characteristics over the 200 mm wafer showed a tighter statistical distribution for the [1\\bar{1}00] direction. However, by using an optimized sulfuric/ammonia peroxide (SPM/APM) cleaning step, the ohmic contact resistance could be lowered to 0.3 Ω·mm for both perpendicular directions.

  6. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    NASA Astrophysics Data System (ADS)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

    We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d = 3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  7. hh + jet production at 100 TeV

    NASA Astrophysics Data System (ADS)

    Banerjee, Shankha; Englert, Christoph; Mangano, Michelangelo L.; Selvaggi, Michele; Spannowsky, Michael

    2018-04-01

    Higgs pair production is a crucial phenomenological process in deciphering the nature of the TeV scale and the mechanism underlying electroweak symmetry breaking. At the Large Hadron Collider, this process is statistically limited. Pushing the energy frontier beyond the LHC's reach will create new opportunities to exploit the rich phenomenology at higher centre-of-mass energies and luminosities. In this work, we perform a comparative analysis of the hh + jet channel at a future 100 TeV hadron collider. We focus on the hh → b\bar{b} b\bar{b} and hh → b\bar{b} τ^+τ^- channels and employ a range of analysis techniques to estimate the sensitivity potential that can be gained by adding this jet-associated Higgs pair production to the list of sensitive collider processes in such an environment. In particular, we observe that hh → b\bar{b} τ^+τ^- in the boosted regime exhibits a large sensitivity to the Higgs boson self-coupling, and the Higgs self-coupling could be constrained at the 8% level in this channel alone.

  8. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates arising from changes in rain statistics due to 1) evolution of the official algorithms used to process the data and 2) differences from other remote sensing systems, such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
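
    The estimation formula is not given in this summary. The sketch below (Python) illustrates the general idea with a Monte Carlo estimate of the sampling error incurred when a synthetic, intermittent rain-rate series is observed only a few times per day; all numbers are made up for illustration and this is not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

def sampling_error_mc(hourly_rain, visits_per_day, n_trials=2000):
    """Monte Carlo estimate of the sampling error of a monthly mean rain rate.

    hourly_rain: (n_hours,) 'truth' series a continuously observing sensor would see.
    visits_per_day: number of snapshot overpasses per day (sparse, TRMM-like sampling).
    Returns the RMS difference between sparse-sampling means and the full mean.
    """
    full_mean = hourly_rain.mean()
    hours_per_visit = 24 // visits_per_day
    errors = []
    for _ in range(n_trials):
        offset = rng.integers(0, hours_per_visit)        # random overpass phase
        sampled = hourly_rain[offset::hours_per_visit]
        errors.append(sampled.mean() - full_mean)
    return np.sqrt(np.mean(np.square(errors)))

# Synthetic intermittent rain: rains ~8% of hours, exponential intensities (mm/h)
rain = np.where(rng.random(24 * 30) < 0.08, rng.exponential(4.0, 24 * 30), 0.0)
print(sampling_error_mc(rain, visits_per_day=2))
```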

  9. [Character of refractive errors in population study performed by the Area Military Medical Commission in Lodz].

    PubMed

    Nowak, Michał S; Goś, Roman; Smigielski, Janusz

    2008-01-01

    To determine the prevalence of refractive errors in the population. A retrospective review of medical examinations for entry to the military service from The Area Military Medical Commission in Lodz. Ophthalmic examinations were performed. We used statistical analysis to review the results. Statistical analysis revealed that refractive errors occurred in 21.68% of the population. The most common refractive error was myopia. 1) The most common ocular diseases are refractive errors, especially myopia (21.68% in total). 2) Refractive surgery and contact lenses should be allowed as a possible correction of refractive errors for military service.

  10. Why hard-nosed executives should care about management theory.

    PubMed

    Christensen, Clayton M; Raynor, Michael E

    2003-09-01

    Theory often gets a bum rap among managers because it's associated with the word "theoretical," which connotes "impractical." But it shouldn't. Because experience is solely about the past, solid theories are the only way managers can plan future actions with any degree of confidence. The key word here is "solid." Gravity is a solid theory. As such, it lets us predict that if we step off a cliff we will fall, without actually having to do so. But business literature is replete with theories that don't seem to work in practice or actually contradict each other. How can a manager tell a good business theory from a bad one? The first step is understanding how good theories are built. They develop in three stages: gathering data, organizing it into categories highlighting significant differences, then making generalizations explaining what causes what, under which circumstances. For instance, professor Ananth Raman and his colleagues collected data showing that bar code-scanning systems generated notoriously inaccurate inventory records. These observations led them to classify the types of errors the scanning systems produced and the types of shops in which those errors most often occurred. Recently, some of Raman's doctoral students have worked as clerks to see exactly what kinds of behavior cause the errors. From this foundation, a solid theory predicting under which circumstances bar code systems work, and don't work, is beginning to emerge. Once we forgo one-size-fits-all explanations and insist that a theory describes the circumstances under which it does and doesn't work, we can bring predictable success to the world of management.

  11. Normality Tests for Statistical Analysis: A Guide for Non-Statisticians

    PubMed Central

    Ghasemi, Asghar; Zahediasl, Saleh

    2012-01-01

    Statistical errors are common in scientific literature and about 50% of the published articles have at least one error. The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it. The aim of this commentary is to overview checking for normality in statistical analysis using SPSS. PMID:23843808
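
    The commentary itself walks through SPSS; as a stand-in, the same kind of normality check is a one-liner in Python/scipy (the sample here is synthetic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=1.2, size=40)   # synthetic data

# Shapiro-Wilk is widely recommended for small-to-moderate samples;
# a p-value above the chosen alpha gives no evidence against normality.
w_stat, p_value = stats.shapiro(sample)
print(f"W = {w_stat:.3f}, p = {p_value:.3f}")
```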

  12. Development and Validation of Methods for Applying Pharmacokinetic Data in Risk Assessment. Volume 7. PBPK SIM

    DTIC Science & Technology

    1990-12-01

    Executing PBPKSIM presents a Main Menu with the items File Selection, Data, Simulation, All, Statistics, Change directory, DOS Shell, and Exit. The windows of the PBPKSIM program are based on a common layout: a title, a menu bar, an information line, a main display area, and a status area. The Title shows the location within the program by supplying the name of the window being executed. The Menu Bar displays the other windows or other...

  13. ERROR REDUCTION IN DUCT LEAKAGE TESTING THROUGH DATA CROSS-CHECKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ANDREWS, J.W.

    1998-12-31

    One way to reduce uncertainty in scientific measurement is to devise a protocol in which more quantities are measured than are absolutely required, so that the result is overconstrained. This report develops such a method for combining data from two different tests for air leakage in residential duct systems. An algorithm, which depends on the uncertainty estimates for the measured quantities, optimizes the use of the excess data. In many cases it can significantly reduce the error bar on at least one of the two measured duct leakage rates (supply or return), and it provides a rational method of reconciling any conflicting results from the two leakage tests.
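
    The report's actual algorithm is not reproduced in this summary. The general idea of exploiting an overconstrained measurement can be illustrated by the textbook inverse-variance combination of two independent estimates of the same leakage rate; the sketch below (Python) and its numbers are illustrative only.

```python
def combine_estimates(x1, sigma1, x2, sigma2):
    """Inverse-variance weighted combination of two independent estimates.

    Returns the combined estimate and its (smaller) standard error.
    """
    w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
    combined = (w1 * x1 + w2 * x2) / (w1 + w2)
    sigma = (w1 + w2) ** -0.5
    return combined, sigma

# Example: supply leakage of 120 +/- 30 cfm from one test, 90 +/- 20 cfm from another
print(combine_estimates(120.0, 30.0, 90.0, 20.0))   # ~ (99.2, 16.6)
```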

  14. A Comparison of Sleep and Performance of Sailors on an Operationally Deployed U.S. Navy Warship

    DTIC Science & Technology

    2013-09-01

    The crew’s mission on a deployed warship is inherently dangerous. The nature of the job means navigating restricted waters and conducting underway replenishments with less than 200 feet of lateral separation from... [List-of-figures fragments: "...concentration equivalent. Error bars ± s.e. (From Dawson & Reid, 1997)"; "Figure 4. Mean psychomotor vigilance task speed (and..."]

  15. Metal Ion Sensor with Catalytic DNA in a Nanofluidic Intelligent Processor

    DTIC Science & Technology

    2011-12-01

    ...attributed to decreased diffusion and less active DNAzyme complex because of pore constraints. [Figure-caption fragments: uncleavable Alexa 546 intensity is shown in gray, cleavable fluorescein in green, and the ratio of Fl/Alexa in red; error bars represent one standard deviation of four independent...; higher concentrations inhibiting cleaved fragment release.]

  16. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

    The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
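
    SOL syntax is not shown in this summary. As a stand-in, the classic two-bar truss problem described above can be posed directly to a general-purpose optimizer; the sketch below (Python/scipy) uses the standard textbook formulation with assumed values for the load, geometry, and material constants, not values from the report.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed constants for the classic symmetric two-bar truss (illustrative only):
# a load P at the apex, half-span B, thin tubes of wall thickness t.
P, B, t = 33_000.0, 30.0, 0.1           # lbf, in, in
E, rho = 3.0e7, 0.3                      # psi, lb/in^3
SIGMA_ALLOW = 100_000.0                  # allowable stress, psi

def weight(x):
    d, H = x                             # design variables: tube diameter, truss height
    return 2.0 * rho * np.pi * d * t * np.hypot(B, H)

def stress(x):
    d, H = x
    return P * np.hypot(B, H) / (2.0 * np.pi * d * t * H)

def buckling_margin(x):
    d, H = x
    L = np.hypot(B, H)
    sigma_cr = np.pi**2 * E * (d**2 + t**2) / (8.0 * L**2)   # Euler buckling stress
    return sigma_cr - stress(x)          # must stay >= 0

cons = [{"type": "ineq", "fun": lambda x: SIGMA_ALLOW - stress(x)},
        {"type": "ineq", "fun": buckling_margin}]
res = minimize(weight, x0=[2.0, 30.0], bounds=[(0.5, 5.0), (5.0, 60.0)],
               constraints=cons)
print(res.x, res.fun)                    # optimal (diameter, height) and weight
```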

  17. Effects of salt secretion on psychrometric determinations of water potential of cotton leaves.

    PubMed

    Klepper, B; Barrs, H D

    1968-07-01

    Thermocouple psychrometers gave lower estimates of water potential of cotton leaves than did a pressure chamber. This difference was considerable for turgid leaves, but progressively decreased for leaves with lower water potentials and fell to zero at water potentials below about -10 bars. The conductivity of washings from cotton leaves removed from the psychrometric equilibration chambers was related to the magnitude of this discrepancy in water potential, indicating that the discrepancy is due to salts on the leaf surface which make the psychrometric estimates too low. This error, which may be as great as 400 to 500%, cannot be eliminated by washing the leaves because salts may be secreted during the equilibration period. Therefore, a thermocouple psychrometer is not suitable for measuring the water potential of cotton leaves when it is above about -10 bars.

  18. A new photometric model of the Galactic bar using red clump giants

    NASA Astrophysics Data System (ADS)

    Cao, Liang; Mao, Shude; Nataf, David; Rattenbury, Nicholas J.; Gould, Andrew

    2013-09-01

    We present a study of the luminosity density distribution of the Galactic bar using number counts of red clump giants from the Optical Gravitational Lensing Experiment (OGLE) III survey. The data were recently published by Nataf et al. for 9019 fields towards the bulge and contain 2.94 × 10^6 RC stars over a viewing area of 90.25 deg^2. The data include the number counts, mean distance modulus (μ), dispersion in μ and full error matrix, from which we fit the data with several triaxial parametric models. We use the Markov Chain Monte Carlo method to explore the parameter space and find that the best-fitting model is the E3 model, with a distance to the Galactic Centre of 8.13 kpc and ratios of the semimajor, semiminor, and vertical bar scalelengths of x0 : y0 : z0 ≈ 1.00 : 0.43 : 0.40 (close to prolate). The scalelength of the stellar density profile along the bar's major axis is ~0.67 kpc, and the bar angle is 29.4°, slightly larger than the value obtained from a similar study based on OGLE-II data. The number of estimated RC stars within the field of view is 2.78 × 10^6, which is systematically lower than the observed value. We subtract the smooth parametric model from the observed counts and find that the residuals are consistent with the presence of an X-shaped structure in the Galactic Centre; the excess over the estimated mass content is ~5.8 per cent. We estimate that the total mass of the bar is ~1.8 × 10^10 M⊙. Our results can be used as a key ingredient to construct new density models of the Milky Way and will have implications for the predictions of the optical depth to gravitational microlensing and the patterns of hydrodynamical gas flow in the Milky Way.
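
    As a hedged illustration of the fitting approach described above (exploring a parametric model's posterior with Markov Chain Monte Carlo), the sketch below fits a toy exponential scalelength to simulated star counts with the emcee sampler; it does not reproduce the paper's triaxial models, data, or likelihood, and all numbers are invented.

    ```python
    import numpy as np
    import emcee

    # Toy stand-in for "explore the parameter space with MCMC": fit an exponential
    # scalelength to simulated star counts. Purely illustrative; not the paper's model.
    rng = np.random.default_rng(1)
    r = np.linspace(0.1, 3.0, 40)                           # kpc (assumed grid)
    truth = 0.67                                            # kpc, scalelength used to fake data
    counts = 1e3 * np.exp(-r / truth) * rng.normal(1.0, 0.05, r.size)
    err = 0.05 * counts

    def log_prob(theta):
        amp, x0 = theta
        if amp <= 0 or not 0.1 < x0 < 3.0:                  # flat priors with hard bounds
            return -np.inf
        model = amp * np.exp(-r / x0)
        return -0.5 * np.sum(((counts - model) / err) ** 2)

    start = np.column_stack([rng.normal(1e3, 50, 16), rng.normal(0.7, 0.05, 16)])
    sampler = emcee.EnsembleSampler(16, 2, log_prob)
    sampler.run_mcmc(start, 2000, progress=False)
    samples = sampler.get_chain(discard=500, flat=True)
    print(np.percentile(samples[:, 1], [16, 50, 84]))       # scalelength credible interval
    ```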

  19. Enamel coated steel reinforcement for improved durability and life-cycle performance of concrete structures: microstructure, corrosion, and deterioration

    NASA Astrophysics Data System (ADS)

    Tang, Fujian

    This study aims (a) to statistically characterize the corrosion-induced deterioration process of reinforced concrete structures (concrete cracking, steel mass loss, and rebar-concrete bond degradation), and (b) to develop and apply three types of enamel-coated steel bars for improved corrosion resistance of the structures. Commercially available pure enamel, mixed enamel with 50% calcium silicate, and double enamel with an inner layer of pure enamel and an outer layer of mixed enamel were considered as steel coatings. Electrochemical tests were conducted on steel plates, smooth bars embedded in concrete, and deformed bars with/without concrete cover in 3.5 wt.% NaCl or saturated Ca(OH)2 solution. The effects of enamel microstructure, coating thickness variation, potential damage, mortar protection, and corrosion environment on corrosion resistance of the steel members were investigated. Extensive test results indicated that corrosion-induced concrete cracking can be divided into four stages that gradually become less correlated with the corrosion process over time. The coefficient of variation of crack width increases with increasing level of corrosion. Corrosion changed the cross-sectional area rather than the mechanical properties of the steel bars. The bond-slip behavior between the corroded bars and concrete depends on the corrosion level and the distribution of corrosion pits. Although it can improve the chemical bond between steel and concrete, the mixed enamel coating is the least corrosion resistant. The double enamel coating provides the most consistent corrosion performance and is thus recommended for coating reinforcing steel bars in concrete structures used in corrosive environments. Corrosion pits in enamel-coated bars are limited to areas around damage locations.

  20. Economic Impact of Smoke-Free Legislation: Did the Spanish Tobacco Control Law Affect the Economic Activity of Bars and Restaurants?

    PubMed

    García-Altés, Anna; Pinilla, Jaime; Marí-Dell'Olmo, Marc; Fernández, Esteve; López, Maria José

    2015-11-01

    The potential of smoke-free bans to negatively impact the hospitality business has been an argument of the hospitality and tobacco industries against such legislation. Partial smoke-free legislation was introduced in Spain in 2006, allowing smoking in most bars and restaurants due to pressure from the hospitality sector. This partial ban was later amended in 2011 to include all hospitality premises without exception. The stepped Spanish process permits an evaluation of whether the entry into force of the smoke-free legislation had any effect on the economic activity of the hospitality sector. We employed a pooled time-series cross-sectional design, with national data over 6 years (2006-2011). The dependent variable was the total number of bars and restaurants per 100,000 inhabitants. The explanatory variables were the average amount of spending per household in bars and restaurants, and the total unemployment rate in Spain by region. For every 1% increase in the unemployment rate there was a 0.05% decrease in the number of bars and restaurants. In 2007, the number of bars and restaurants was significantly reduced by 13.06% (all other factors being held constant), by 4.87% in 2008, and by 10.42% in 2009. No statistically significant effect of the smoke-free legislation emerged from 2010 (6.76%) to 2011 (7.69%). The new Spanish smoke-free legislation had no effect on the number of bars and restaurants. © The Author 2015. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. Vertical and lateral forces applied to the bar during the bench press in novice lifters.

    PubMed

    Duffey, Michael J; Challis, John H

    2011-09-01

    The purpose of this study was to determine the vertical and lateral forces applied to the bar during maximal and submaximal effort bench press lifts. For this study, 10 male and 8 female recreational lifters were recruited (mean height: 1.71 ± 0.08 m; mass: 73.7 ± 13.6 kg) and were asked to perform a maximal and a submaximal (80% of maximal) bench press. These lifts were performed with a bar instrumented to record the forces applied to it, via the hands, in the vertical direction and along the long axis of the bar. To determine the position of the bar and the timing of events, 3D kinematic data were recorded and analyzed for both lifts. The subjects in this study averaged a maximal lift of 63 ± 29 kg (90 ± 31% bodyweight). The peak vertical force was 115 ± 22% of the load for the maximal condition and 113 ± 20% for the submaximal condition; the absolute forces differed significantly between conditions, but they did not when expressed as a percentage of the load (p > 0.05). During all the lifts, the lateral forces were always directed outward along the bar. The lateral force profile was similar to that of the vertical force, albeit at a lesser magnitude. During the lift phase, the peak lateral force was on average 26.3 ± 3.9% of the vertical force for the maximal lift and 23.7 ± 3.9% of the vertical force for the submaximal lift. Given that the amount of force applied laterally to the bar was a similar percentage of vertical force irrespective of load, it appears that the generation of lateral forces during the bench press is a result of having the muscles engaged in generating vertical force.

  2. Project ARM: alcohol risk management to prevent sales to underage and intoxicated patrons.

    PubMed

    Toomey, T L; Wagenaar, A C; Gehan, J P; Kilian, G; Murray, D M; Perry, C L

    2001-04-01

    Clear policies and expectations are key to increasing responsible service of alcohol in licensed establishments. Few training programs focus exclusively on owners and managers of alcohol establishments to reduce the risk of alcohol service. Project ARM: Alcohol Risk Management is a one-on-one consultation program for owners and managers. Participants received information on risk level, policies to prevent illegal sales, legal issues, and staff communication. This nonrandomized demonstration project was implemented in five diverse bars. Two waves of underage and pseudo-intoxicated purchase attempts were conducted pre- and postintervention in the five intervention bars and nine matched control bars. Underage sales decreased by 11.5%, and sales to pseudo-intoxicated buyers decreased by 46%. Results were in the hypothesized direction but not statistically significant. A one-on-one, outlet-specific training program for owners and managers is a promising way to reduce illegal alcohol sales, particularly to obviously intoxicated individuals.

  3. The effect of high concentrations of glufosinate ammonium on the yield components of transgenic spring wheat (Triticum aestivum L.) constitutively expressing the bar gene.

    PubMed

    Áy, Zoltán; Mihály, Róbert; Cserháti, Mátyás; Kótai, Éva; Pauk, János

    2012-01-01

    We present an experiment on a bar(+) wheat line treated with 14 different concentrations of glufosinate ammonium, the active component of nonselective herbicides, during seed germination in a closed experimental system. Yield components such as the number of spikes per plant, number of grains per spike, thousand kernel weight, and yield per plant were thoroughly analysed and statistically evaluated after harvesting. We found that a concentration of glufosinate ammonium 5000 times the lethal dose was not enough to inhibit the germination of transgenic plants expressing the bar gene. Extremely high concentrations of glufosinate ammonium caused a bushy phenotype and significantly lower numbers of grains per spike and thousand kernel weights. Concerning productivity, we observed that concentrations of glufosinate ammonium 64 times the lethal dose did not lead to yield depression. Our results draw attention to the possibilities offered by transgenic approaches.

  4. Induction motor broken rotor bar fault location detection through envelope analysis of start-up current using Hilbert transform

    NASA Astrophysics Data System (ADS)

    Abd-el-Malek, Mina; Abdelsalam, Ahmed K.; Hassan, Ola E.

    2017-09-01

    Robustness, low running cost, and reduced maintenance have made induction motors (IMs) the dominant choice in industrial drive systems. Broken rotor bars (BRBs) are an important fault that needs to be assessed early to minimize maintenance cost and labor time. The majority of recent BRB fault diagnostic techniques focus on differentiating between healthy and faulty rotor cages. In this paper, a new technique is proposed for detecting the location of the broken bar in the rotor. The proposed technique relies on monitoring certain statistical parameters estimated from the analysis of the start-up stator current envelope. The envelope of the signal is obtained using the Hilbert transform (HT). The proposed technique offers a non-invasive, computationally fast, and accurate location diagnostic process. Various simulation scenarios are presented that validate the effectiveness of the proposed technique.
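
    A minimal sketch of the core signal-processing step described above, envelope extraction with the Hilbert transform, is given below; the synthetic current waveform, sampling rate, and the monitored statistics are assumptions and do not reproduce the paper's fault-location diagnostics.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs = 10_000                                  # Hz, assumed sampling rate
    t = np.arange(0, 2.0, 1 / fs)
    # Synthetic "start-up current": 50 Hz carrier with a slow amplitude modulation
    # standing in for a broken-bar signature (purely illustrative).
    current = (1 + 0.2 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 50 * t)

    analytic = hilbert(current)                  # analytic signal
    envelope = np.abs(analytic)                  # instantaneous amplitude (the envelope)

    # Simple statistics of the envelope, of the kind one might monitor:
    print(envelope.mean(), envelope.std(), envelope.max())
    ```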

  5. Accuracy of fit of implant-supported bars fabricated on definitive casts made by different dental stones

    PubMed Central

    Kioleoglou, Ioannis; Pissiotis, Argirios

    2018-01-01

    Background The purpose of this study was to evaluate the accuracy of fit of an implant-supported screw-retained bar made on definitive casts produced by 4 different dental stone products. Material and Methods The dental stones tested were QuickRock (Protechno), FujiRock (GC), Jade Stone (Whip Mix) and Moldasynt (Heraeus). Three external hexagon implants were placed in a polyoxymethylene block. Definitive impressions were made using monophase high-viscosity polyvinylsiloxane in combination with custom trays. Then, definitive casts were fabricated from the different types of dental stone. Three castable cylinders with a machined non-engaging base were cast and connected with a very small quantity of PMMA to a cast bar, which was used to verify the marginal discrepancies between the abutments and the prosthetic platforms of the implants. For that purpose, special software and a camera mounted on an optical microscope were used. The gap was measured by taking 10 measurements on each abutment after the Sheffield test was applied. Twelve definitive casts were fabricated for each gypsum product and 40 measurements were performed for each cast. Mean, minimum, and maximum values were calculated. The Shapiro-Wilk test of normality was performed. The Mann-Whitney test (P<.06) was used for the statistical analysis of the measurements. Results The non-parametric Kruskal-Wallis test revealed a statistically significant effect of the stone factor on the marginal discrepancy for all Sheffield test combinations: 1. Abutment 2 when the screw was fastened on abutment 1 (χ2=35.33, df=3, P<0.01), 2. Abutment 3 when the screw was fastened on abutment 1 (χ2=37.74, df=3, P<0.01), 3. Abutment 1 when the screw was fastened on abutment 3 (χ2=39.79, df=3, P<0.01), 4. Abutment 2 when the screw was fastened on abutment 3 (χ2=37.26, df=3, P<0.01). Conclusions A significant correlation exists between marginal discrepancy and the dental gypsum product used for the fabrication of definitive casts for implant-supported bars. The smallest marginal discrepancy was noted on implant-supported bars fabricated on definitive casts made of Type III mounting stone. The biggest marginal discrepancy was noted on implant-supported bars fabricated on definitive casts made of Type V dental stone. The marginal discrepancies presented on implant-supported bars fabricated on definitive casts made of two Type IV dental stones were not significantly different. Key words: Dental implant, passive fit, dental stones, marginal discrepancy. PMID:29721227

  6. The State and Trends of Barcode, RFID, Biometric and Pharmacy Automation Technologies in US Hospitals.

    PubMed

    Uy, Raymonde Charles Y; Kury, Fabricio P; Fontelo, Paul A

    2015-01-01

    The standard of safe medication practice requires strict observance of the five rights of medication administration: the right patient, drug, time, dose, and route. Despite adherence to these guidelines, medication errors remain a public health concern that has generated health policies and hospital processes that leverage automation and computerization to reduce these errors. Bar code, RFID, biometric, and pharmacy automation technologies have been demonstrated in the literature to decrease the incidence of medication errors by minimizing the human factors involved in the process. Despite evidence suggesting the effectiveness of these technologies, adoption rates and trends vary across hospital systems. The objective of this study is to examine the state and adoption trends of automatic identification and data capture (AIDC) methods and pharmacy automation technologies in U.S. hospitals. A retrospective descriptive analysis of survey data from the HIMSS Analytics® Database was performed, demonstrating encouraging growth in the adoption of these patient safety solutions.

  7. Common Scientific and Statistical Errors in Obesity Research

    PubMed Central

    George, Brandon J.; Beasley, T. Mark; Brown, Andrew W.; Dawson, John; Dimova, Rositsa; Divers, Jasmin; Goldsby, TaShauna U.; Heo, Moonseong; Kaiser, Kathryn A.; Keith, Scott; Kim, Mimi Y.; Li, Peng; Mehta, Tapan; Oakes, J. Michael; Skinner, Asheley; Stuart, Elizabeth; Allison, David B.

    2015-01-01

    We identify 10 common errors and problems in the statistical analysis, design, interpretation, and reporting of obesity research and discuss how they can be avoided. The 10 topics are: 1) misinterpretation of statistical significance, 2) inappropriate testing against baseline values, 3) excessive and undisclosed multiple testing and “p-value hacking,” 4) mishandling of clustering in cluster randomized trials, 5) misconceptions about nonparametric tests, 6) mishandling of missing data, 7) miscalculation of effect sizes, 8) ignoring regression to the mean, 9) ignoring confirmation bias, and 10) insufficient statistical reporting. We hope that discussion of these errors can improve the quality of obesity research by helping researchers to implement proper statistical practice and to know when to seek the help of a statistician. PMID:27028280

  8. User-centered design of quality of life reports for clinical care of patients with prostate cancer

    PubMed Central

    Izard, Jason; Hartzler, Andrea; Avery, Daniel I.; Shih, Cheryl; Dalkin, Bruce L.; Gore, John L.

    2014-01-01

    Background Primary treatment of localized prostate cancer can result in bothersome urinary, sexual, and bowel symptoms. Yet clinical application of health-related quality-of-life (HRQOL) questionnaires is rare. We employed user-centered design to develop graphic dashboards of questionnaire responses from patients with prostate cancer to facilitate clinical integration of HRQOL measurement. Methods We interviewed 50 prostate cancer patients and 50 providers, assessed literacy with validated instruments (Rapid Estimate of Adult Literacy in Medicine short form, Subjective Numeracy Scale, Graphical Literacy Scale), and presented participants with prototype dashboards that display prostate cancer-specific HRQOL with graphic elements derived from patient focus groups. We assessed dashboard comprehension and preferences in table, bar, line, and pictograph formats with patient scores contextualized with HRQOL scores of similar patients serving as a comparison group. Results Health literacy (mean score, 6.8/7) and numeracy (mean score, 4.5/6) of patient participants was high. Patients favored the bar chart (mean rank, 1.8 [P = .12] vs line graph [P <.01] vs table and pictograph); providers demonstrated similar preference for table, bar, and line formats (ranked first by 30%, 34%, and 34% of providers, respectively). Providers expressed unsolicited concerns over presentation of comparison group scores (n = 19; 38%) and impact on clinic efficiency (n = 16; 32%). Conclusion Based on preferences of prostate cancer patients and providers, we developed the design concept of a dynamic HRQOL dashboard that permits a base patient-centered report in bar chart format that can be toggled to other formats and include error bars that frame comparison group scores. Inclusion of lower literacy patients may yield different preferences. PMID:24787105

  9. User-centered design of quality of life reports for clinical care of patients with prostate cancer.

    PubMed

    Izard, Jason; Hartzler, Andrea; Avery, Daniel I; Shih, Cheryl; Dalkin, Bruce L; Gore, John L

    2014-05-01

    Primary treatment of localized prostate cancer can result in bothersome urinary, sexual, and bowel symptoms. Yet clinical application of health-related quality-of-life (HRQOL) questionnaires is rare. We employed user-centered design to develop graphic dashboards of questionnaire responses from patients with prostate cancer to facilitate clinical integration of HRQOL measurement. We interviewed 50 prostate cancer patients and 50 providers, assessed literacy with validated instruments (Rapid Estimate of Adult Literacy in Medicine short form, Subjective Numeracy Scale, Graphical Literacy Scale), and presented participants with prototype dashboards that display prostate cancer-specific HRQOL with graphic elements derived from patient focus groups. We assessed dashboard comprehension and preferences in table, bar, line, and pictograph formats with patient scores contextualized with HRQOL scores of similar patients serving as a comparison group. Health literacy (mean score, 6.8/7) and numeracy (mean score, 4.5/6) of patient participants was high. Patients favored the bar chart (mean rank, 1.8 [P = .12] vs line graph [P < .01] vs table and pictograph); providers demonstrated similar preference for table, bar, and line formats (ranked first by 30%, 34%, and 34% of providers, respectively). Providers expressed unsolicited concerns over presentation of comparison group scores (n = 19; 38%) and impact on clinic efficiency (n = 16; 32%). Based on preferences of prostate cancer patients and providers, we developed the design concept of a dynamic HRQOL dashboard that permits a base patient-centered report in bar chart format that can be toggled to other formats and include error bars that frame comparison group scores. Inclusion of lower literacy patients may yield different preferences. Copyright © 2014 Mosby, Inc. All rights reserved.

  10. Change in indoor particle levels after a smoking ban in Minnesota bars and restaurants.

    PubMed

    Bohac, David L; Hewett, Martha J; Kapphahn, Kristopher I; Grimsrud, David T; Apte, Michael G; Gundel, Lara A

    2010-12-01

    Smoking bans in bars and restaurants have been shown to improve worker health and reduce hospital admissions for acute myocardial infarction. Several studies have also reported improved indoor air quality, although these studies generally used single visits before and after a ban for a convenience sample of venues. The primary objective of this study was to provide detailed time-of-day and day-of-week secondhand smoke-exposure data for representative bars and restaurants in Minnesota. This study improved on previous approaches by using a statistically representative sample of three venue types (drinking places, limited-service restaurants, and full-service restaurants), conducting repeat visits to the same venue prior to the ban, and matching the day of week and time of day for the before- and after-ban monitoring. The repeat visits included laser photometer fine particulate (PM₂.₅) concentration measurements, lit cigarette counts, and customer counts for 19 drinking places, eight limited-service restaurants, and 35 full-service restaurants in the Minneapolis/St. Paul metropolitan area. The more rigorous design of this study provides improved confidence in the findings and reduces the likelihood of systematic bias. The median reduction in PM₂.₅ was greater than 95% for all three venue types. Examination of data from repeated visits shows that making only one pre-ban visit to each venue would greatly increase the range of computed percentage reductions and lower the statistical power of pre-post tests. Variations in PM₂.₅ concentrations were found based on time of day and day of week when monitoring occurred. These comprehensive measurements confirm that smoking bans provide significant reductions in SHS constituents, protecting customers and workers from PM₂.₅ in bars and restaurants. Copyright © 2010 American Journal of Preventive Medicine. All rights reserved.

  11. On P values and effect modification.

    PubMed

    Mayer, Martin

    2017-12-01

    A crucial element of evidence-based healthcare is the sound understanding and use of statistics. As part of instilling sound statistical knowledge and practice, it seems useful to highlight instances of unsound statistical reasoning or practice, not merely in captious or vitriolic spirit, but rather, to use such error as a springboard for edification by giving tangibility to the concepts at hand and highlighting the importance of avoiding such error. This article aims to provide an instructive overview of two key statistical concepts: effect modification and P values. A recent article published in the Journal of the American College of Cardiology on side effects related to statin therapy offers a notable example of errors in understanding effect modification and P values, and although not so critical as to entirely invalidate the article, the errors still demand considerable scrutiny and correction. In doing so, this article serves as an instructive overview of the statistical concepts of effect modification and P values. Judicious handling of statistics is imperative to avoid muddying their utility. This article contributes to the body of literature aiming to improve the use of statistics, which in turn will help facilitate evidence appraisal, synthesis, translation, and application.

  12. A method for velocity signal reconstruction of AFDISAR/PDV based on crazy-climber algorithm

    NASA Astrophysics Data System (ADS)

    Peng, Ying-cheng; Guo, Xian; Xing, Yuan-ding; Chen, Rong; Li, Yan-jie; Bai, Ting

    2017-10-01

    The resolution of the continuous wavelet transform (CWT) varies with frequency. Exploiting this property, the time-frequency content of the coherent signal obtained by the All Fiber Displacement Interferometer System for Any Reflector (AFDISAR) is extracted. The Crazy-Climber algorithm is adopted to extract the wavelet ridge, from which the velocity history of the measured object is obtained. A numerical simulation is carried out; the reconstructed signal is fully consistent with the original signal, which verifies the accuracy of the algorithm. Vibration of a loudspeaker and of the free end of a Hopkinson incident bar under impact loading are measured by AFDISAR, the measured coherent signals are processed, and the velocity signals of the loudspeaker and of the free end of the Hopkinson incident bar are reconstructed. Compared with the theoretical calculation, the error in the particle-vibration arrival time difference at the free end of the Hopkinson incident bar is 2 μs. The results indicate that the algorithm is highly accurate and adapts well to signals with different time-frequency features. The algorithm overcomes the STFT limitation of having to adjust the time window manually according to the signal variation, and is suitable for extracting signals measured by AFDISAR.
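
    The Crazy-Climber ridge extractor itself is not reproduced here. As a hedged sketch of the simpler idea it refines, the code below computes a continuous wavelet transform of a synthetic chirp and follows, at each time sample, the scale at which |CWT| peaks, giving a naive instantaneous-frequency (ridge) track; the wavelet choice, sampling rate, and test signal are assumptions.

    ```python
    import numpy as np
    import pywt

    fs = 5_000                                        # Hz, assumed sampling rate
    t = np.arange(0, 0.5, 1 / fs)
    chirp = np.sin(2 * np.pi * (100 * t + 200 * t**2))   # synthetic "coherent signal"

    scales = np.arange(2, 128)
    coefs, freqs = pywt.cwt(chirp, scales, "cmor1.5-1.0", sampling_period=1 / fs)

    ridge_idx = np.abs(coefs).argmax(axis=0)          # naive ridge: per-sample maximum
    inst_freq = freqs[ridge_idx]                      # instantaneous-frequency estimate
    print(inst_freq[:5], inst_freq[-5:])
    ```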

  13. Technology-related medication errors in a tertiary hospital: a 5-year analysis of reported medication incidents.

    PubMed

    Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y

    2012-12-01

    Healthcare technology is meant to reduce medication errors. The objective of this study was to assess unintended errors related to technologies in the medication use process. Medication incidents reported from 2006 to 2010 in a main tertiary care hospital were analysed by a pharmacist and technology-related errors were identified. Technology-related errors were further classified as socio-technical errors and device errors. This analysis was conducted using data from medication incident reports, which may represent only a small proportion of the medication errors that actually take place in a hospital. Hence, interpretation of results must be tentative. 1538 medication incidents were reported. 17.1% of all incidents were technology-related, of which only 1.9% were device errors, whereas most were socio-technical errors (98.1%). Of these, 61.2% were linked to computerised prescription order entry, 23.2% to bar-coded patient identification labels, 7.2% to infusion pumps, 6.8% to computer-aided dispensing label generation and 1.5% to other technologies. The immediate causes for technology-related errors included poor interface between user and computer (68.1%), improper procedures or rule violations (22.1%), poor interface between user and infusion pump (4.9%), technical defects (1.9%) and others (3.0%). In 11.4% of the technology-related incidents, the error was detected after the drug had been administered. A considerable proportion of all incidents were technology-related. Most errors were due to socio-technical issues. Unintended and unanticipated errors may happen when using technologies. Therefore, when using technologies, system improvement, awareness, training and monitoring are needed to minimise medication errors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. Baryon-antibaryon dynamics in relativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Seifert, E.; Cassing, W.

    2018-04-01

    The dynamics of baryon-antibaryon annihilation and reproduction ($B\bar{B} \leftrightarrow 3M$) is studied within the Parton-Hadron-String Dynamics (PHSD) transport approach for Pb+Pb and Au+Au collisions as a function of centrality from lower Super Proton Synchrotron (SPS) up to Large Hadron Collider (LHC) energies on the basis of the quark rearrangement model. At Relativistic Heavy-Ion Collider (RHIC) energies we find a small net reduction of baryon-antibaryon ($B\bar{B}$) pairs, while for the LHC energy of $\sqrt{s_{NN}} = 2.76$ TeV a small net enhancement is found relative to calculations without annihilation (and reproduction) channels. Accordingly, the sizable difference between data and statistical calculations in Pb+Pb collisions at $\sqrt{s_{NN}} = 2.76$ TeV for proton and antiproton yields [ALICE Collaboration, B. Abelev et al., Phys. Rev. C 88, 044910 (2013), 10.1103/PhysRevC.88.044910], where a deviation of 2.7σ was claimed by the ALICE Collaboration, should not be attributed to a net antiproton annihilation. This is in line with the observation that no substantial deviation between the data and statistical hadronization model (SHM) calculations is seen for antihyperons, since according to the PHSD analysis the antihyperons should be modified by the same amount as antiprotons. As the PHSD results for particle ratios are in line with the ALICE data (within error bars), this might point towards a deviation from statistical equilibrium in the hadronization (at least for protons and antiprotons). Furthermore, we find that the $B\bar{B} \leftrightarrow 3M$ reactions are more effective at lower SPS energies, where a net suppression of antiprotons and antihyperons up to a factor of 2-2.5 can be extracted from the PHSD calculations for central Au+Au collisions.

  15. Statistical pattern analysis of surficial karst in the Pleistocene Miami oolite of South Florida

    NASA Astrophysics Data System (ADS)

    Harris, Paul (Mitch); Purkis, Sam; Reyes, Bella

    2018-05-01

    A robust airborne light detection and ranging digital terrain model (LiDAR DTM) and select outcrops are used to examine the extent and characteristics of the surficial karst overprint of the late Pleistocene Miami oolite in South Florida. Subaerial exposure of the Miami oolite barrier bar and shoals to a meteoric diagenetic environment, lasting ca. 120 kyr from the end of the last interglacial highstand MIS 5e until today, has resulted in diagenetic alteration including surface and shallow subsurface dissolution producing extensive dolines and a few small stratiform caves. Analysis of the LiDAR DTM suggests that >50% of the dolines in the Miami oolite have been obscured/lost to urbanization, though a large number of depressions remain apparent and can be examined for trends and spatial patterns. The verified dolines are analyzed for their size and depth, their lateral distribution and relation to depositional topography, and the separation distance between them. Statistical pattern analysis shows that the average separation distance and average density of dolines on the strike-oriented barrier bar versus dip-oriented shoals is statistically inseparable. Doline distribution on the barrier bar is clustered because of the control exerted on dissolution by the depositional topography of the shoal system, whereas patterning of dolines in the more platform-ward lower-relief shoals is statistically indistinguishable from random. The areal extent and depth of dissolution of the dolines are well described by simple mathematical functions, and the depth of the dolines increases as a function of their size. The separation and density results from the Miami oolite are compared to results from other carbonate terrains. Near-surface, stratiform caves in the Miami oolite occur in sites where the largest and deepest dolines are present, and sit at, or near, the top of the present water table.
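
    As a hedged illustration of one statistic used in such pattern analyses, nearest-neighbour separation distances between doline centres, the sketch below queries a k-d tree on hypothetical coordinates; the coordinates are random stand-ins, not the Miami oolite data, and the comparison against a random (Poisson) pattern is left implicit.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(2)
    dolines = rng.uniform(0, 10_000, size=(300, 2))   # hypothetical centre coordinates (m)

    tree = cKDTree(dolines)
    d, _ = tree.query(dolines, k=2)                   # k=1 is each point itself
    nearest = d[:, 1]                                 # nearest-neighbour separations
    print(nearest.mean(), nearest.std())
    ```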

  16. Rhythms at the bottom of the deep sea: Cyclic current flow changes and melatonin patterns in two species of demersal fish

    NASA Astrophysics Data System (ADS)

    Wagner, H.-J.; Kemp, K.; Mattheus, U.; Priede, I. G.

    2007-11-01

    We have studied physical and biological rhythms in the deep demersal habitat of the Northeastern Atlantic. Current velocity and direction changes occurred at intervals of 12.4 h, indicating that they could be an effect of tidal activity, and also showed indications of seasonal changes. As an indicator of biological rhythms, we measured the content of pineal and retinal melatonin in the grenadier Coryphaenoides armatus and the deep-sea eel Synaphobranchus kaupii, and determined the spontaneous release of melatonin in long-term (52 h minimum) cultures of isolated pineal organs and retinae in S. kaupii. The results of the release experiments show statistically significant signs of synchronicity and periodicity, suggesting the presence of an endogenous clock. The melatonin content data show large error bars typical of cross-sectional population studies. When the data are plotted according to a lunar cycle, taken as an indication of a tidal rhythm, both species show peak values at the beginning of the lunar day and night and lower values during the second half of lunar day and night and during moonrise and moonset. Statistical analysis, however, shows that the periodicity of the melatonin content is not significant. Taken together, these observations strongly suggest that (1) biological rhythms are present in demersal fish, (2) the melatonin metabolism shows signs of periodicity, and (3) tidal currents may act as a zeitgeber at the bottom of the deep sea.

  17. Bioavailability and Methylation Potential of Mercury Sulfides in Sediments

    DTIC Science & Technology

    2014-08-01

    …such as size separation (i.e., filtration with a particular pore size or molecular weight cutoff) or metal-ligand complexation from experimentally… and 6 nM HgS microparticles. The error bars represent ±1 s.d. for duplicate samples. Results of Hg fractionation by filtration and (ultra)… results from filtration (Figures S2). These differences in the data indicated that the nHgS dissolution rate could be overestimated by the filtration data. [Record excerpt truncated in the source.]

  18. Image decomposition of barred galaxies and AGN hosts

    NASA Astrophysics Data System (ADS)

    Gadotti, Dimitri Alexei

    2008-02-01

    I present the results of multicomponent decomposition of V and R broad-band images of a sample of 17 nearby galaxies, most of them hosting bars and active galactic nuclei (AGN). I use BUDDA v2.1 to produce the fits, allowing the inclusion of bars and AGN in the models. A comparison with previous results from the literature shows a fairly good agreement. It is found that the axial ratio of bars, as measured from ellipse fits, can be severely underestimated if the galaxy axisymmetric component is relatively luminous. Thus, reliable bar axial ratios can only be determined by taking into account the contributions of bulge and disc to the light distribution in the galaxy image. Through a number of tests, I show that neglecting bars when modelling barred galaxies can result in an overestimation of the bulge-to-total luminosity ratio of a factor of 2. Similar effects result when bright, type 1 AGN are not considered in the models. By artificially redshifting the images, I show that the structural parameters of more distant galaxies can in general be reliably retrieved through image fitting, at least up to the point where the physical spatial resolution is ~1.5 kpc. This corresponds, for instance, to images of galaxies at z = 0.05 with a seeing full width at half-maximum (FWHM) of 1.5 arcsec, typical of the Sloan Digital Sky Survey (SDSS). In addition, such a resolution is also similar to what can be achieved with the Hubble Space Telescope (HST), and ground-based telescopes with adaptive optics, at z ~ 1-2. Thus, these results also concern deeper studies such as COSMOS and SINS. This exercise shows that disc parameters are particularly robust, but bulge parameters are prone to errors if its effective radius is small compared to the seeing radius, and might suffer from systematic effects. For instance, the bulge-to-total luminosity ratio is systematically overestimated, on average, by 0.05 (i.e. 5 per cent of the galaxy total luminosity). In this low-resolution regime, the effects of ignoring bars are still present, but AGN light is smeared out. I briefly discuss the consequences of these results to studies of the structural properties of galaxies, in particular on the stellar mass budget in the local Universe. With reasonable assumptions, it is possible to show that the stellar content in bars can be similar to that in classical bulges and elliptical galaxies. Finally, I revisit the cases of NGC 4608 and NGC 5701 and show that the lack of stars in the disc region inside the bar radius is significant. Accordingly, the best-fitting model for the former uses a Freeman type II disc.

  19. BAO from Angular Clustering: Optimization and Mitigation of Theoretical Systematics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crocce, M.; et al.

    We study the theoretical systematics and optimize the methodology in Baryon Acoustic Oscillations (BAO) detections using the angular correlation function with tomographic bins. We calibrate and optimize the pipeline for the Dark Energy Survey Year 1 dataset using 1800 mocks. We compare the BAO fitting results obtained with three estimators: the Maximum Likelihood Estimator (MLE), Profile Likelihood, and Markov Chain Monte Carlo. The MLE method yields the least bias in the fit results (bias/spread $\sim 0.02$) and the error bar derived is the closest to the Gaussian results (1% from the 68% Gaussian expectation). When there is a mismatch between the template and the data, either due to incorrect fiducial cosmology or photo-$z$ error, the MLE again gives the least-biased results. The BAO angular shift that is estimated based on the sound horizon and the angular diameter distance agrees with the numerical fit. Various analysis choices are further tested: the number of redshift bins, cross-correlations, and angular binning. We propose two methods to correct the mock covariance when the final sample properties are slightly different from those used to create the mock. We show that the sample changes can be accommodated with the help of the Gaussian covariance matrix or, more effectively, using the eigenmode expansion of the mock covariance. The eigenmode expansion is significantly less susceptible to statistical fluctuations relative to the direct measurements of the covariance matrix because the number of free parameters is substantially reduced [$p$ parameters versus $p(p+1)/2$ from direct measurement].

  20. Estimation of shoreline position and change using airborne topographic lidar data

    USGS Publications Warehouse

    Stockdon, H.F.; Sallenger, A.H.; List, J.H.; Holman, R.A.

    2002-01-01

    A method has been developed for estimating shoreline position from airborne scanning laser data. This technique allows rapid estimation of objective, GPS-based shoreline positions over hundreds of kilometers of coast, essential for the assessment of large-scale coastal behavior. Shoreline position, defined as the cross-shore position of a vertical shoreline datum, is found by fitting a function to cross-shore profiles of laser altimetry data located in a vertical range around the datum and then evaluating the function at the specified datum. Error bars on horizontal position are directly calculated as the 95% confidence interval on the mean value based on the Student's t distribution of the errors of the regression. The technique was tested using lidar data collected with NASA's Airborne Topographic Mapper (ATM) in September 1997 on the Outer Banks of North Carolina. Estimated lidar-based shoreline position was compared to shoreline position as measured by a ground-based GPS vehicle survey system. The two methods agreed closely with a root mean square difference of 2.9 m. The mean 95% confidence interval for shoreline position was ?? 1.4 m. The technique has been applied to a study of shoreline change on Assateague Island, Maryland/Virginia, where three ATM data sets were used to assess the statistics of large-scale shoreline change caused by a major 'northeaster' winter storm. The accuracy of both the lidar system and the technique described provides measures of shoreline position and change that are ideal for studying storm-scale variability over large spatial scales.
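
    A hedged sketch of the regression-with-confidence-interval idea described above follows: cross-shore position is regressed on elevation near the datum, the fit is evaluated at the datum, and a Student's-t 95% confidence interval is attached. The elevations, positions, datum, and the linear functional form are assumptions for illustration, not the ATM processing chain.

    ```python
    import numpy as np
    from scipy import stats

    z = np.array([1.2, 1.4, 1.5, 1.7, 1.9, 2.1, 2.3])          # lidar elevations (m), assumed
    x = np.array([52.0, 50.5, 49.8, 48.1, 46.9, 45.2, 43.8])   # cross-shore positions (m), assumed
    z_datum = 1.8                                               # shoreline datum elevation (m), assumed

    n = z.size
    A = np.column_stack([np.ones(n), z])                        # design matrix for x = a + b*z
    coef, res_ss, _, _ = np.linalg.lstsq(A, x, rcond=None)
    x_shore = coef[0] + coef[1] * z_datum                       # shoreline position estimate

    dof = n - 2
    s2 = res_ss[0] / dof                                        # residual variance
    cov = s2 * np.linalg.inv(A.T @ A)                           # coefficient covariance
    v = np.array([1.0, z_datum])
    se_mean = np.sqrt(v @ cov @ v)                              # s.e. of the fitted mean at the datum
    ci95 = stats.t.ppf(0.975, dof) * se_mean
    print(f"shoreline at {x_shore:.1f} m +/- {ci95:.1f} m (95% CI)")
    ```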

  1. Hauser-Feshbach fission fragment de-excitation with calculated macroscopic-microscopic mass yields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaffke, Patrick John; Talou, Patrick; Sierk, Arnold John

    The Hauser-Feshbach statistical model is applied to the de-excitation of primary fission fragments using input mass yields calculated with macroscopic-microscopic models of the potential energy surface. We test the sensitivity of the prompt fission observables to the input mass yields for two important reactions, $^{235}$U(n$_{th}$, f) and $^{239}$Pu(n$_{th}$, f), for which good experimental data exist. General traits of the mass yields, such as the location of the peaks and their widths, can impact both the prompt neutron and γ-ray multiplicities, as well as their spectra. Specifically, we use several mass yields to determine a linear correlation between the calculated prompt neutron multiplicity $\bar{\nu}$ and the average heavy-fragment mass $\langle A_h \rangle$ of the input mass yields, $\partial\bar{\nu}/\partial\langle A_h \rangle = \pm 0.1$ (n/f)/u. The mass peak width influences the correlation between the total kinetic energy of the fission fragments and the total number of prompt neutrons emitted, $\bar{\nu}_T(\mathrm{TKE})$. Finally, typical biases on prompt particle observables from using calculated mass yields instead of experimental ones are $\delta\bar{\nu} = 4\%$ for the average prompt neutron multiplicity, $\delta\overline{M}_\gamma = 1\%$ for the average prompt γ-ray multiplicity, $\delta\bar{\varepsilon}_n^{\mathrm{LAB}} = 1\%$ for the average outgoing neutron energy, $\delta\bar{\varepsilon}_\gamma = 1\%$ for the average γ-ray energy, and $\delta\langle \mathrm{TKE} \rangle = 0.4\%$ for the average total kinetic energy of the fission fragments.

  2. Hauser-Feshbach fission fragment de-excitation with calculated macroscopic-microscopic mass yields

    DOE PAGES

    Jaffke, Patrick John; Talou, Patrick; Sierk, Arnold John; ...

    2018-03-15

    The Hauser-Feshbach statistical model is applied to the de-excitation of primary fission fragments using input mass yields calculated with macroscopic-microscopic models of the potential energy surface. We test the sensitivity of the prompt fission observables to the input mass yields for two important reactions, $^{235}$U(n$_{th}$, f) and $^{239}$Pu(n$_{th}$, f), for which good experimental data exist. General traits of the mass yields, such as the location of the peaks and their widths, can impact both the prompt neutron and γ-ray multiplicities, as well as their spectra. Specifically, we use several mass yields to determine a linear correlation between the calculated prompt neutron multiplicity $\bar{\nu}$ and the average heavy-fragment mass $\langle A_h \rangle$ of the input mass yields, $\partial\bar{\nu}/\partial\langle A_h \rangle = \pm 0.1$ (n/f)/u. The mass peak width influences the correlation between the total kinetic energy of the fission fragments and the total number of prompt neutrons emitted, $\bar{\nu}_T(\mathrm{TKE})$. Finally, typical biases on prompt particle observables from using calculated mass yields instead of experimental ones are $\delta\bar{\nu} = 4\%$ for the average prompt neutron multiplicity, $\delta\overline{M}_\gamma = 1\%$ for the average prompt γ-ray multiplicity, $\delta\bar{\varepsilon}_n^{\mathrm{LAB}} = 1\%$ for the average outgoing neutron energy, $\delta\bar{\varepsilon}_\gamma = 1\%$ for the average γ-ray energy, and $\delta\langle \mathrm{TKE} \rangle = 0.4\%$ for the average total kinetic energy of the fission fragments.

  3. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

    Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.

  4. Three-dimensional magnetic resonance imaging of physeal injury: reliability and clinical utility.

    PubMed

    Lurie, Brett; Koff, Matthew F; Shah, Parina; Feldmann, Eric James; Amacker, Nadja; Downey-Zayas, Timothy; Green, Daniel; Potter, Hollis G

    2014-01-01

    Injuries to the physis are common in children, with a subset resulting in an osseous bar and potential growth disturbance. Magnetic resonance imaging allows for detailed assessment of the physis, with the ability to generate 3-dimensional physeal models from volumetric data. The purpose of this study was to assess the interrater reliability of physeal bar area measurements generated using a validated semiautomated segmentation technique and to highlight the clinical utility of quantitative 3-dimensional (3D) physeal mapping in pediatric orthopaedic practice. The Radiology Information System/Picture Archiving Communication System (PACS) at our institution was searched to find consecutive patients who were imaged for the purpose of assessing a physeal bar or growth disturbance between December 2006 and October 2011. Physeal segmentation was retrospectively performed by 2 independent operators using semiautomated software to generate physeal maps and bar area measurements from 3-dimensional spoiled gradient recalled echo sequences. Interrater reliability was statistically analyzed. Subsequent surgical management for each patient was recorded from the patient notes and surgical records. We analyzed 24 patients (12 M/12 F) with a mean age of 11.4 years (range, 5 to 15 years) and 25 physeal bars. Of the physeal bars, 9 (36%) were located in the distal tibia; 8 (32%) in the proximal tibia; 5 (20%) in the distal femur; 1 (4%) in the proximal femur; 1 (4%) in the proximal humerus; and 1 (4%) in the distal radius. The independent operator measurements of physeal bar area were highly correlated, with a Pearson correlation coefficient (r) of 0.96 and an intraclass correlation coefficient for average measures of 0.99 (95% confidence interval, 0.97-0.99). Four patients underwent resection of the identified physeal bars, 9 patients were treated with epiphysiodesis, and 1 patient underwent bilateral tibial osteotomies. Semiautomated segmentation of the physis is a reproducible technique for generating physeal maps and accurately measuring physeal bars, providing quantitative and anatomic information that may inform surgical management and prognosis in patients with physeal injury. Level IV.

  5. A Measurement of the D+(s) lifetime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Link, J.M.; Yager, P.M. (UC Davis)

    2005-04-01

    A high-statistics measurement of the $D_s^+$ lifetime from the Fermilab fixed-target FOCUS photoproduction experiment is presented. The analysis of the two decay modes used for the measurement, $D_s^+ \to \phi(1020)\pi^+$ and $D_s^+ \to \bar{K}^*(892)^0 K^+$, is described. The measured lifetime is 507.4 ± 5.5 (stat.) ± 5.1 (syst.), using 8961 ± 105 $D_s^+ \to \phi(1020)\pi^+$ and 4680 ± 90 $D_s^+ \to \bar{K}^*(892)^0 K^+$ decays. This is a significant improvement over the present world average.

  6. Recent results on heavy flavor physics from LEP experiments using 1990-92 data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasparini, U.

    1994-12-01

    After three years of data taking, the four LEP experiments collected a total of about four million $Z^0$ hadronic decays, in which a heavy quark pair (either $b\bar{b}$ or $c\bar{c}$) is produced with 40% probability. Results are presented both in the sector of the electroweak precision measurements, with particular emphasis on the beauty quark, and in the determination of the beauty decay properties, where lifetimes and branching ratio measurements take advantage of the large statistics now available and of the recent improvements in the analysis based on microvertex detectors and particle identification devices.

  7. Measurement of $B^0$-$\bar{B}^0$ mixing using the MARK II at PEP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, F.

    $B^0\bar{B}^0$ mixing has now been observed by several experiments. The signature is the observation of an excess of same-sign dilepton events in datasets containing semileptonic B decays. Several years ago the MARK II published an upper limit on $B^0\bar{B}^0$ mixing at $E_{cm}$ = 29 GeV, using data taken at the $e^+e^-$ storage ring PEP. Here we report on the results of a new analysis with increased statistics, using refined methods with better sensitivity and control of systematic effects. 10 refs., 2 figs., 2 tab.

  8. Comparative evaluation of aqueous humor viscosity.

    PubMed

    Davis, Kyshia; Carter, Renee; Tully, Thomas; Negulescu, Ioan; Storey, Eric

    2015-01-01

    To evaluate aqueous humor viscosity in the raptor, dog, cat, and horse, with a primary focus on the barred owl (Strix varia). Twenty-six raptors, ten dogs, three cats, and one horse were included; animals were euthanized for reasons unrelated to this study. Immediately after horizontal and vertical corneal dimensions were measured, anterior chamber paracentesis was performed to quantify anterior chamber volume and obtain aqueous humor samples for viscosity analysis. Dynamic aqueous humor viscosity was measured using a dynamic shear rheometer (AR 1000 TA Instruments, New Castle, DE, USA) at 20 °C. Statistical analysis included descriptive statistics, unpaired t-tests, and Tukey's test to evaluate the mean ± standard deviation for corneal diameter, anterior chamber volume, and aqueous humor viscosity amongst groups, and calculation of Spearman's coefficient for correlation analyses. The mean aqueous humor viscosity in the barred owl was 14.1 centipoise (cP) ± 9, in the cat 4.4 cP ± 0.2, and in the dog 2.9 cP ± 1.3. The aqueous humor viscosity for the horse was 1 cP. Of the animals evaluated in this study, the raptor aqueous humor was the most viscous. The aqueous humor of the barred owl is significantly more viscous than that of the dog (P < 0.0001). The aqueous humor viscosity of the raptor, dog, cat, and horse can be successfully determined using a dynamic shear rheometer. © 2014 American College of Veterinary Ophthalmologists.

  9. Workplace exposure to secondhand smoke among non-smoking hospitality employees.

    PubMed

    Lawhorn, Nikki A; Lirette, David K; Klink, Jenna L; Hu, Chih-Yang; Contreras, Cassandra; Ajori Bryant, Ty-Runet Pinkney; Brown, Lisanne F; Diaz, James H

    2013-02-01

    This article examines salivary cotinine concentrations to characterize secondhand smoke (SHS) exposure among non-smoking hospitality employees (bar and casino employees and musicians who perform in bars) who are exposed to SHS in the workplace. A pre-post test study design was implemented to assess SHS exposure in the workplace. The convenience sample of 41 non-smoking hospitality employees included 10 controls (non-smoking hospitality employees not exposed to SHS in the workplace). The findings demonstrate that post-shift saliva cotinine levels of hospitality employees who are exposed to SHS in the workplace are significantly higher than controls who work in smoke-free venues. Findings also suggested a statistically significant increase between pre- and post-shift saliva cotinine levels of hospitality employees who are exposed in the workplace. No statistically significant difference was noted across labor categories, suggesting that all exposed employees are at increased risk. The study results indicate that non-smoking hospitality employees exposed to SHS in the workplace have significantly higher cotinine concentration levels compared with their counterparts who work in smoke-free venues. Findings from other studies suggest that these increased cotinine levels are harmful to health. Given the potential impact on the health of exposed employees, this study further supports the efforts of tobacco prevention and control programs in advocating for comprehensive smoke-free air policies to protect bar and casino employees.

  10. Calculating Potential Energy Curves with Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Powell, Andrew D.; Dawes, Richard

    2014-06-01

    Quantum Monte Carlo (QMC) is a computational technique that can be applied to the electronic Schrödinger equation for molecules. QMC methods such as Variational Monte Carlo (VMC) and Diffusion Monte Carlo (DMC) have demonstrated the capability of capturing large fractions of the correlation energy, thus suggesting their possible use for high-accuracy quantum chemistry calculations. QMC methods scale particularly well with respect to parallelization making them an attractive consideration in anticipation of next-generation computing architectures which will involve massive parallelization with millions of cores. Due to the statistical nature of the approach, in contrast to standard quantum chemistry methods, uncertainties (error-bars) are associated with each calculated energy. This study focuses on the cost, feasibility and practical application of calculating potential energy curves for small molecules with QMC methods. Trial wave functions were constructed with the multi-configurational self-consistent field (MCSCF) method from GAMESS-US.[1] The CASINO Monte Carlo quantum chemistry package [2] was used for all of the DMC calculations. An overview of our progress in this direction will be given. References: M. W. Schmidt et al. J. Comput. Chem. 14, 1347 (1993). R. J. Needs et al. J. Phys.: Condensed Matter 22, 023201 (2010).
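
    As a hedged illustration of how a statistical error bar is attached to a Monte Carlo energy estimate, the sketch below blocks synthetic local-energy samples before computing a standard error (a common practice for correlated QMC data); the samples, block count, and units are invented, and this is not the CASINO workflow.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    local_energies = -1.17 + 0.02 * rng.standard_normal(100_000)   # fake samples (Hartree)

    def blocked_error(samples, nblocks=100):
        """Mean and standard error after grouping samples into blocks."""
        usable = samples[: samples.size // nblocks * nblocks]
        blocks = usable.reshape(nblocks, -1).mean(axis=1)
        return blocks.mean(), blocks.std(ddof=1) / np.sqrt(nblocks)

    energy, err = blocked_error(local_energies)
    print(f"E = {energy:.5f} +/- {err:.5f} Ha")
    ```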

  11. Clustering of quasars in SDSS-IV eBOSS: study of potential systematics and bias determination

    NASA Astrophysics Data System (ADS)

    Laurent, Pierre; Eftekharzadeh, Sarah; Le Goff, Jean-Marc; Myers, Adam; Burtin, Etienne; White, Martin; Ross, Ashley J.; Tinker, Jeremy; Tojeiro, Rita; Bautista, Julian; Brinkmann, Jonathan; Comparat, Johan; Dawson, Kyle; du Mas des Bourboux, Hélion; Kneib, Jean-Paul; McGreer, Ian D.; Palanque-Delabrouille, Nathalie; Percival, Will J.; Prada, Francisco; Rossi, Graziano; Schneider, Donald P.; Weinberg, David; Yèche, Christophe; Zarrouk, Pauline; Zhao, Gong-Bo

    2017-07-01

    We study the first year of the eBOSS quasar sample in the redshift range 0.9

  12. Optimal heavy tail estimation - Part 1: Order selection

    NASA Astrophysics Data System (ADS)

    Mudelsee, Manfred; Bermejo, Miguel A.

    2017-12-01

    The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
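
    A common baseline for estimating the characteristic exponent α from the k largest order statistics is the Hill estimator, with error bars attached by simulation. The sketch below is only illustrative: it uses a fixed, user-supplied order k and a simple bootstrap for the error bar, whereas the paper's contribution is precisely a data-adaptive, simulation-based selection of k:

    ```python
    import numpy as np

    def hill_alpha(x, k):
        """Hill estimate of the tail exponent alpha from the k largest values."""
        xs = np.sort(np.asarray(x, dtype=float))[::-1]   # descending order
        top = xs[:k + 1]                                  # k+1 largest values
        return k / np.sum(np.log(top[:k] / top[k]))

    def hill_with_error(x, k, n_boot=500, seed=0):
        """Attach a bootstrap error bar to the Hill estimate (illustrative only;
        the paper selects the order k with a simulation-based brute-force search)."""
        rng = np.random.default_rng(seed)
        boot = [hill_alpha(rng.choice(x, size=len(x), replace=True), k)
                for _ in range(n_boot)]
        return hill_alpha(x, k), np.std(boot, ddof=1)

    # Pareto test data with true alpha = 1.5
    rng = np.random.default_rng(1)
    data = (1.0 / rng.uniform(size=5000)) ** (1.0 / 1.5)
    alpha, err = hill_with_error(data, k=200)
    print(f"alpha = {alpha:.2f} +/- {err:.2f}")
    ```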

  13. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated by using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The results of the LOOCV pricing error show that interpolation using the fourth-order polynomial provides the best fit to option prices, with the lowest pricing error.
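
    The LOOCV selection criterion can be sketched as follows: each option quote is held out in turn, the candidate interpolant is refit on the remaining strikes, and the squared error at the held-out strike is accumulated. This is only a minimal illustration, with simple polynomial fits standing in for the paper's candidate interpolants and made-up option prices; the scheme with the lowest cross-validated error would be selected:

    ```python
    import numpy as np

    def loocv_error(strikes, prices, fit, predict):
        """Leave-one-out cross-validated pricing error for one interpolation scheme."""
        errors = []
        for i in range(len(strikes)):
            keep = np.arange(len(strikes)) != i
            model = fit(strikes[keep], prices[keep])
            errors.append((predict(model, strikes[i]) - prices[i]) ** 2)
        return np.sqrt(np.mean(errors))   # RMS pricing error

    # Candidate schemes: 2nd- and 4th-order polynomials (as in the abstract);
    # a smoothing spline could be plugged in the same way.
    poly_fit = lambda deg: (lambda x, y: np.polyfit(x, y, deg))
    poly_eval = lambda c, x: np.polyval(c, x)

    # Hypothetical option quotes
    strikes = np.linspace(80, 120, 15)
    prices = np.maximum(100 - strikes, 0) + 5 * np.exp(-((strikes - 100) / 15) ** 2)
    for deg in (2, 4):
        rmse = loocv_error(strikes, prices, poly_fit(deg), poly_eval)
        print(f"degree {deg}: LOOCV RMSE = {rmse:.4f}")
    ```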

  14. Effects of Salt Secretion on Psychrometric Determinations of Water Potential of Cotton Leaves

    PubMed Central

    Klepper, Betty; Barrs, H. D.

    1968-01-01

    Thermocouple psychrometers gave lower estimates of water potential of cotton leaves than did a pressure chamber. This difference was considerable for turgid leaves, but progressively decreased for leaves with lower water potentials and fell to zero at water potentials below about −10 bars. The conductivity of washings from cotton leaves removed from the psychrometric equilibration chambers was related to the magnitude of this discrepancy in water potential, indicating that the discrepancy is due to salts on the leaf surface which make the psychrometric estimates too low. This error, which may be as great as 400 to 500%, cannot be eliminated by washing the leaves because salts may be secreted during the equilibration period. Therefore, a thermocouple psychrometer is not suitable for measuring the water potential of cotton leaves when it is above about −10 bars. PMID:16656895

  15. Relationship between visual binding, reentry and awareness.

    PubMed

    Koivisto, Mika; Silvanto, Juha

    2011-12-01

    Visual feature binding has been suggested to depend on reentrant processing. We addressed the relationship between binding, reentry, and visual awareness by asking the participants to discriminate the color and orientation of a colored bar (presented either alone or simultaneously with a white distractor bar) and to report their phenomenal awareness of the target features. The success of reentry was manipulated with object substitution masking and backward masking. The results showed that late reentrant processes are necessary for successful binding but not for phenomenal awareness of the bound features. Binding errors were accompanied by phenomenal awareness of the misbound feature conjunctions, demonstrating that they were experienced as real properties of the stimuli (i.e., illusory conjunctions). Our results suggest that early preattentive binding and local recurrent processing enable features to reach phenomenal awareness, while later attention-related reentrant iterations modulate the way in which the features are bound and experienced in awareness. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Probabilistic parameter estimation in a 2-step chemical kinetics model for n-dodecane jet autoignition

    NASA Astrophysics Data System (ADS)

    Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph

    2018-05-01

    This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.
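
    The way calibrated parameter variability turns into predictive error bars can be seen with plain Monte Carlo propagation: sample the uncertain Arrhenius parameters, evaluate a derived quantity for each sample, and report the spread. The sketch below uses a one-step surrogate for the ignition delay and hypothetical parameter distributions; it stands in for, but is not, the paper's two-step mechanism with model-error-inflated parameters or its spectral-projection propagation:

    ```python
    import numpy as np

    # Illustrative only: a one-step surrogate where the ignition delay scales as
    # tau ~ exp(Ea / (R T)) / A.  Parameter means and spreads are hypothetical
    # stand-ins for a calibrated posterior with model-error-inflated variability.
    R = 8.314          # J/mol/K
    T = 900.0          # K
    rng = np.random.default_rng(0)
    n = 20_000

    log_A = rng.normal(loc=np.log(2.0e9), scale=0.10, size=n)   # pre-exponential factor
    Ea    = rng.normal(loc=1.30e5, scale=4.0e3, size=n)          # activation energy (J/mol)

    tau = np.exp(Ea / (R * T)) / np.exp(log_A)                   # surrogate delay (s)

    mean, std = tau.mean(), tau.std(ddof=1)
    lo, hi = np.percentile(tau, [2.5, 97.5])
    print(f"ignition delay: {mean:.3e} s +/- {std:.1e} (95% band {lo:.3e} .. {hi:.3e})")
    ```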

  17. A wireless passive pressure microsensor fabricated in HTCC MEMS technology for harsh environments.

    PubMed

    Tan, Qiulin; Kang, Hao; Xiong, Jijun; Qin, Li; Zhang, Wendong; Li, Chen; Ding, Liqiong; Zhang, Xiansheng; Yang, Mingliang

    2013-08-02

    A wireless passive high-temperature pressure sensor without an evacuation channel, fabricated in high-temperature co-fired ceramics (HTCC) technology, is proposed. The properties of the HTCC material ensure that the sensor can be applied in harsh environments. The sensor without an evacuation channel can be completely gastight. The wireless data are obtained with a reader antenna by mutual inductance coupling. Experimental systems are designed to obtain the frequency-pressure characteristic, the frequency-temperature characteristic and the coupling distance. Experimental results show that the sensor can be coupled with an antenna at 600 °C and at a maximum distance of 2.8 cm at room temperature. The sensor sensitivity is about 860 Hz/bar, and the hysteresis and repeatability errors are quite low.

  18. Integration and evaluation of a needle-positioning robot with volumetric microcomputed tomography image guidance for small animal stereotactic interventions.

    PubMed

    Waspe, Adam C; McErlain, David D; Pitelka, Vasek; Holdsworth, David W; Lacefield, James C; Fenster, Aaron

    2010-04-01

    Preclinical research protocols often require insertion of needles to specific targets within small animal brains. To target biologically relevant locations in rodent brains more effectively, a robotic device has been developed that is capable of positioning a needle along oblique trajectories through a single burr hole in the skull under volumetric microcomputed tomography (micro-CT) guidance. An x-ray compatible stereotactic frame secures the head throughout the procedure using a bite bar, nose clamp, and ear bars. CT-to-robot registration enables structures identified in the image to be mapped to physical coordinates in the brain. Registration is accomplished by injecting a barium sulfate contrast agent as the robot withdraws the needle from predefined points in a phantom. Registration accuracy is affected by the robot-positioning error and is assessed by measuring the surface registration error for the fiducial and target needle tracks (FRE and TRE). This system was demonstrated in situ by injecting 200 microm tungsten beads into rat brains along oblique trajectories through a single burr hole on the top of the skull under micro-CT image guidance. Postintervention micro-CT images of each skull were registered with preintervention high-field magnetic resonance images of the brain to infer the anatomical locations of the beads. Registration using four fiducial needle tracks and one target track produced a FRE and a TRE of 96 and 210 microm, respectively. Evaluation with tissue-mimicking gelatin phantoms showed that locations could be targeted with a mean error of 154 +/- 113 microm. The integration of a robotic needle-positioning device with volumetric micro-CT image guidance should increase the accuracy and reduce the invasiveness of stereotactic needle interventions in small animals.

  19. Integration and evaluation of a needle-positioning robot with volumetric microcomputed tomography image guidance for small animal stereotactic interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waspe, Adam C.; McErlain, David D.; Pitelka, Vasek

    Purpose: Preclinical research protocols often require insertion of needles to specific targets within small animal brains. To target biologically relevant locations in rodent brains more effectively, a robotic device has been developed that is capable of positioning a needle along oblique trajectories through a single burr hole in the skull under volumetric microcomputed tomography (micro-CT) guidance. Methods: An x-ray compatible stereotactic frame secures the head throughout the procedure using a bite bar, nose clamp, and ear bars. CT-to-robot registration enables structures identified in the image to be mapped to physical coordinates in the brain. Registration is accomplished by injecting a barium sulfate contrast agent as the robot withdraws the needle from predefined points in a phantom. Registration accuracy is affected by the robot-positioning error and is assessed by measuring the surface registration error for the fiducial and target needle tracks (FRE and TRE). This system was demonstrated in situ by injecting 200 μm tungsten beads into rat brains along oblique trajectories through a single burr hole on the top of the skull under micro-CT image guidance. Postintervention micro-CT images of each skull were registered with preintervention high-field magnetic resonance images of the brain to infer the anatomical locations of the beads. Results: Registration using four fiducial needle tracks and one target track produced a FRE and a TRE of 96 and 210 μm, respectively. Evaluation with tissue-mimicking gelatin phantoms showed that locations could be targeted with a mean error of 154 ± 113 μm. Conclusions: The integration of a robotic needle-positioning device with volumetric micro-CT image guidance should increase the accuracy and reduce the invasiveness of stereotactic needle interventions in small animals.

  20. Raising the Bar for Reproducible Science at the US Environmental Protection Agency Office of Research and Development

    EPA Science Inventory

    Considerable concern has been raised regarding research reproducibility both within and outside the scientific community. Several factors possibly contribute to a lack of reproducibility, including a failure to adequately employ statistical considerations during study design, bia...

  1. SU-E-T-503: IMRT Optimization Using Monte Carlo Dose Engine: The Effect of Statistical Uncertainty.

    PubMed

    Tian, Z; Jia, X; Graves, Y; Uribe-Sanchez, A; Jiang, S

    2012-06-01

    With the development of ultra-fast GPU-based Monte Carlo (MC) dose engines, it becomes clinically realistic to compute the dose-deposition coefficients (DDC) for IMRT optimization using MC simulation. However, it is still time-consuming if we want to compute the DDC with small statistical uncertainty. This work studies the effects of the statistical error in the DDC matrix on IMRT optimization. The MC-computed DDC matrices are simulated here by adding statistical uncertainties at a desired level to the ones generated with a finite-size pencil beam algorithm. A statistical uncertainty model for MC dose calculation is employed. We adopt a penalty-based quadratic optimization model and a gradient descent method to optimize the fluence map and then recalculate the corresponding actual dose distribution using the noise-free DDC matrix. The impacts of DDC noise are assessed in terms of the deviation of the resulting dose distributions. We have also used a stochastic perturbation theory to theoretically estimate the statistical errors of dose distributions on a simplified optimization model. A head-and-neck case is used to investigate the perturbation to the IMRT plan due to MC's statistical uncertainty. The relative errors of the final dose distributions of the optimized IMRT plan are found to be much smaller than those in the DDC matrix, which is consistent with our theoretical estimation. When the history number is decreased from 10⁸ to 10⁶, the dose-volume histograms are still very similar to the error-free DVHs, while the error in the DDC is about 3.8%. The results illustrate that the statistical errors in the DDC matrix have a relatively small effect on IMRT optimization in the dose domain. This indicates that we can use a relatively small number of histories to obtain the DDC matrix with MC simulation within a reasonable amount of time, without considerably compromising the accuracy of the optimized treatment plan. This work is supported by Varian Medical Systems through a Master Research Agreement. © 2012 American Association of Physicists in Medicine.
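
    A toy version of this experiment can be set up by perturbing a dose-deposition matrix with relative Gaussian noise, optimizing the fluence against the noisy matrix, and then evaluating the resulting plan with the noise-free matrix. This is only a minimal sketch under assumed dimensions and noise levels, not the authors' uncertainty model or a clinical optimizer:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_vox, n_beamlets = 200, 40
    D_true = rng.random((n_vox, n_beamlets)) * 0.1        # noise-free DDC matrix (toy)
    d_presc = np.ones(n_vox)                              # prescribed dose

    def optimize(D, steps=2000, lr=0.01):
        """Quadratic-penalty fluence optimization with projected gradient descent."""
        x = np.zeros(D.shape[1])
        for _ in range(steps):
            grad = D.T @ (D @ x - d_presc)
            x = np.maximum(x - lr * grad, 0.0)            # fluence must stay non-negative
        return x

    for rel_noise in (0.0, 0.04):                         # ~4% statistical uncertainty in DDC
        D_noisy = D_true * (1 + rel_noise * rng.standard_normal(D_true.shape))
        x = optimize(D_noisy)
        dose = D_true @ x                                  # actual dose from noise-free DDC
        err = np.sqrt(np.mean((dose - d_presc) ** 2))
        print(f"DDC noise {rel_noise:.0%}: RMS dose deviation = {err:.4f}")
    ```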

  2. Study of B to pi l nu and B to rho l nu Decays and Determination of |V_ub|

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    del Amo Sanchez, P.; Lees, J.P.; Poireau, V.

    2011-12-09

    We present an analysis of exclusive charmless semileptonic B-meson decays based on 377 million BB̄ pairs recorded with the BABAR detector at the Υ(4S) resonance. We select four event samples corresponding to the decay modes B⁰ → π⁻ℓ⁺ν, B⁺ → π⁰ℓ⁺ν, B⁰ → ρ⁻ℓ⁺ν, and B⁺ → ρ⁰ℓ⁺ν, and find the measured branching fractions to be consistent with isospin symmetry. Assuming isospin symmetry, we combine the two B → πℓν samples, and similarly the two B → ρℓν samples, and measure the branching fractions B(B⁰ → π⁻ℓ⁺ν) = (1.41 ± 0.05 ± 0.07) × 10⁻⁴ and B(B⁰ → ρ⁻ℓ⁺ν) = (1.75 ± 0.15 ± 0.27) × 10⁻⁴, where the errors are statistical and systematic. We compare the measured distribution in q², the momentum transfer squared, with predictions for the form factors from QCD calculations and determine the CKM matrix element |V_ub|. Based on the measured partial branching fraction for B → πℓν in the range q² < 12 GeV² and the most recent LCSR calculations we obtain |V_ub| = (3.78 ± 0.13 +0.55/−0.40) × 10⁻³, where the errors refer to the experimental and theoretical uncertainties. From a simultaneous fit to the data over the full q² range and the FNAL/MILC lattice QCD results, we obtain |V_ub| = (2.95 ± 0.31) × 10⁻³ from B → πℓν, where the error is the combined experimental and theoretical uncertainty.

  3. Statistics of the radiated field of a space-to-earth microwave power transfer system

    NASA Technical Reports Server (NTRS)

    Stevens, G. H.; Leininger, G.

    1976-01-01

    Statistics such as average power density pattern, variance of the power density pattern and variance of the beam pointing error are related to hardware parameters such as transmitter rms phase error and rms amplitude error. Also a limitation on spectral width of the phase reference for phase control was established. A 1 km diameter transmitter appears feasible provided the total rms insertion phase errors of the phase control modules does not exceed 10 deg, amplitude errors do not exceed 10% rms, and the phase reference spectral width does not exceed approximately 3 kHz. With these conditions the expected radiation pattern is virtually the same as the error free pattern, and the rms beam pointing error would be insignificant (approximately 10 meters).
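
    The dependence of the average radiation pattern on rms phase and amplitude errors can be explored with a small Monte Carlo over an idealized array. The sketch below uses a uniform line array with half-wavelength spacing and per-element Gaussian errors; it is a toy model, not the 1 km transmitter aperture or the phase-control architecture analyzed in the study:

    ```python
    import numpy as np

    def array_pattern(n_elem=100, phase_rms_deg=10.0, amp_rms=0.10,
                      n_trials=200, seed=0):
        """Monte Carlo estimate of the average power pattern of a uniform line
        array with random per-element phase and amplitude errors (toy model)."""
        rng = np.random.default_rng(seed)
        theta = np.linspace(-0.05, 0.05, 801)          # radians off boresight
        k_d = np.pi                                     # half-wavelength spacing: k*d = pi
        n = np.arange(n_elem)
        steering = np.exp(1j * k_d * np.outer(n, np.sin(theta)))
        patterns = []
        for _ in range(n_trials):
            phase_err = np.deg2rad(phase_rms_deg) * rng.standard_normal(n_elem)
            amp = 1.0 + amp_rms * rng.standard_normal(n_elem)
            field = amp * np.exp(1j * phase_err)
            patterns.append(np.abs(field @ steering) ** 2 / n_elem ** 2)
        return theta, np.mean(patterns, axis=0)

    theta, p = array_pattern()
    print(f"boresight gain relative to error-free array: {p[len(p) // 2]:.3f}")
    ```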

  4. Predictors of Errors of Novice Java Programmers

    ERIC Educational Resources Information Center

    Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.

    2012-01-01

    This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…

  5. An Artificial Intelligence Approach to Analyzing Student Errors in Statistics.

    ERIC Educational Resources Information Center

    Sebrechts, Marc M.; Schooler, Lael J.

    1987-01-01

    Describes the development of an artificial intelligence system called GIDE that analyzes student errors in statistics problems by inferring the students' intentions. Learning strategies involved in problem solving are discussed and the inclusion of goal structures is explained. (LRW)

  6. Statistical error model for a solar electric propulsion thrust subsystem

    NASA Technical Reports Server (NTRS)

    Bantell, M. H.

    1973-01-01

    The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.

  7. Empirical investigation into depth-resolution of Magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Piana Agostinetti, N.; Ogaya, X.

    2017-12-01

    We investigate the depth-resolution of MT data by comparing reconstructed 1D resistivity profiles with measured resistivity and lithostratigraphy from borehole data. Inversion of MT data has been widely used to reconstruct the 1D fine-layered resistivity structure beneath an isolated Magnetotelluric (MT) station. Uncorrelated noise is generally assumed to be associated with MT data. However, wrong assumptions on error statistics have been proved to strongly bias the results obtained in geophysical inversions. In particular, the number of resolved layers at depth strongly depends on the error statistics. In this study, we applied a trans-dimensional McMC algorithm for reconstructing the 1D resistivity profile near the location of a 1500 m-deep borehole, using MT data. We solve the MT inverse problem imposing different models for the error statistics associated with the MT data. Following a Hierarchical Bayes' approach, we also inverted for the hyper-parameters associated with each error statistics model. Preliminary results indicate that assuming uncorrelated noise leads to a number of resolved layers larger than expected from the retrieved lithostratigraphy. Moreover, the inversion of synthetic resistivity data obtained from the "true" resistivity stratification measured along the borehole shows that a consistent number of resistivity layers can be obtained using a Gaussian model for the error statistics, with substantial correlation length.

  8. Inference of reaction rate parameters based on summary statistics from experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
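
    The central idea, generating data sets consistent with published nominal values and error bars and pooling the resulting inferences, can be illustrated with a much simpler stand-in: draw "consistent" rate-coefficient values from Gaussians defined by the reported error bars, fit Arrhenius parameters to each draw by least squares, and pool the samples. The temperatures, nominal values and error bars below are hypothetical placeholders, and the least-squares fit replaces the paper's ABC/MCMC and surrogate machinery:

    ```python
    import numpy as np

    # Hypothetical summary statistics: nominal ln(k) and 1-sigma error bars at a
    # few temperatures (stand-ins for the published shock-tube values).
    T     = np.array([1200., 1500., 1800., 2100.])            # K
    ln_k0 = np.array([25.1, 27.0, 28.3, 29.3])                # nominal ln k
    sigma = np.array([0.15, 0.10, 0.10, 0.12])                # error bars on ln k

    rng = np.random.default_rng(0)
    samples = []
    for _ in range(5000):
        # Draw one data set consistent with the summary statistics ...
        ln_k = rng.normal(ln_k0, sigma)
        # ... and fit ln k = ln A - Ea / (R T) by linear least squares.
        X = np.column_stack([np.ones_like(T), -1.0 / (8.314 * T)])
        (ln_A, Ea), *_ = np.linalg.lstsq(X, ln_k, rcond=None)
        samples.append((ln_A, Ea))

    samples = np.array(samples)
    print("pooled means:    ln A = %.2f,  Ea = %.0f J/mol" % tuple(samples.mean(0)))
    print("pooled std devs: ln A = %.2f,  Ea = %.0f J/mol" % tuple(samples.std(0)))
    ```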

  9. Inference of reaction rate parameters based on summary statistics from experiments

    DOE PAGES

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin; ...

    2016-10-15

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.

  10. Statistics, Handle with Care: Detecting Multiple Model Components with the Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Protassov, Rostislav; van Dyk, David A.; Connors, Alanna; Kashyap, Vinay L.; Siemiginowska, Aneta

    2002-05-01

    The likelihood ratio test (LRT) and the related F-test, popularized in astrophysics by Eadie and coworkers in 1971, Bevington in 1969, Lampton, Margon, & Bowyer, in 1976, Cash in 1979, and Avni in 1978, do not (even asymptotically) adhere to their nominal χ2 and F-distributions in many statistical tests common in astrophysics, thereby casting many marginal line or source detections and nondetections into doubt. Although the above authors illustrate the many legitimate uses of these statistics, in some important cases it can be impossible to compute the correct false positive rate. For example, it has become common practice to use the LRT or the F-test to detect a line in a spectral model or a source above background despite the lack of certain required regularity conditions. (These applications were not originally suggested by Cash or by Bevington.) In these and other settings that involve testing a hypothesis that is on the boundary of the parameter space, contrary to common practice, the nominal χ2 distribution for the LRT or the F-distribution for the F-test should not be used. In this paper, we characterize an important class of problems in which the LRT and the F-test fail and illustrate this nonstandard behavior. We briefly sketch several possible acceptable alternatives, focusing on Bayesian posterior predictive probability values. We present this method in some detail since it is a simple, robust, and intuitive approach. This alternative method is illustrated using the gamma-ray burst of 1997 May 8 (GRB 970508) to investigate the presence of an Fe K emission line during the initial phase of the observation. There are many legitimate uses of the LRT and the F-test in astrophysics, and even when these tests are inappropriate, there remain several statistical alternatives (e.g., judicious use of error bars and Bayes factors). Nevertheless, there are numerous cases of the inappropriate use of the LRT and similar tests in the literature, bringing substantive scientific results into question.
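
    When the nominal χ² reference distribution does not apply (for example, a non-negative line amplitude tested on the boundary of the parameter space), the test statistic can instead be calibrated by simulation, in the spirit of the posterior predictive checks advocated above. The sketch below does this with a parametric bootstrap for a toy Poisson spectrum with an optional Gaussian emission line; it is a simplified stand-in, not the paper's Bayesian procedure:

    ```python
    import numpy as np

    def lrt_stat(counts, model0_rate, extra_line):
        """Toy likelihood-ratio statistic for 'line present' vs 'continuum only'
        in Poisson-counted spectral bins (constant log(count!) terms cancel)."""
        ll0 = np.sum(counts * np.log(model0_rate) - model0_rate)
        # Best-fit non-negative line amplitude on a fixed line profile (grid search).
        amps = np.linspace(0.0, 10.0, 201)
        rates = model0_rate + amps[:, None] * extra_line[None, :]
        ll1 = np.max(np.sum(counts * np.log(rates) - rates, axis=1))
        return 2.0 * (ll1 - ll0)

    rng = np.random.default_rng(0)
    n_bins = 50
    continuum = np.full(n_bins, 5.0)
    line = np.exp(-0.5 * ((np.arange(n_bins) - 25) / 2.0) ** 2)

    obs = rng.poisson(continuum)                   # observed data (no line injected)
    t_obs = lrt_stat(obs, continuum, line)

    # Calibrate the null distribution by simulation instead of trusting chi^2_1.
    t_null = np.array([lrt_stat(rng.poisson(continuum), continuum, line)
                       for _ in range(2000)])
    p_value = np.mean(t_null >= t_obs)
    print(f"LRT statistic = {t_obs:.2f}, simulated p-value = {p_value:.3f}")
    ```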

  11. Linearised and non-linearised isotherm models optimization analysis by error functions and statistical means

    PubMed Central

    2014-01-01

    In adsorption studies, describing the sorption process and evaluating the best-fitting isotherm model is a key analysis for investigating the theoretical hypothesis. Hence, numerous statistical analyses have been extensively used to assess the agreement of the experimental equilibrium adsorption values with the predicted equilibrium values. Several statistical error analyses were carried out. In the present study, the following statistical analyses were carried out to evaluate the fit of the adsorption isotherm models: the Pearson correlation, the coefficient of determination and the Chi-square test. An ANOVA test was carried out to evaluate the significance of the various error functions, and the coefficient of dispersion was evaluated for the linearised and non-linearised models. The adsorption of phenol onto natural soil (local name: Kalathur soil) was carried out in batch mode at 30 ± 2 °C. To estimate the isotherm parameters and obtain a holistic view of the analysis, the linear and non-linear isotherm models were compared. The results revealed which of the above-mentioned error functions and statistical measures best determined the best-fitting isotherm. PMID:25018878

  12. Product plots.

    PubMed

    Wickham, Hadley; Hofmann, Heike

    2011-12-01

    We propose a new framework for visualising tables of counts, proportions and probabilities. We call our framework product plots, alluding to the computation of area as a product of height and width, and the statistical concept of generating a joint distribution from the product of conditional and marginal distributions. The framework, with extensions, is sufficient to encompass over 20 visualisations previously described in fields of statistical graphics and infovis, including bar charts, mosaic plots, treemaps, equal area plots and fluctuation diagrams. © 2011 IEEE

  13. Determination of Type I Error Rates and Power of Answer Copying Indices under Various Conditions

    ERIC Educational Resources Information Center

    Yormaz, Seha; Sünbül, Önder

    2017-01-01

    This study aims to determine the Type I error rates and power of the S₁ and S₂ indices and the kappa statistic at detecting copying on multiple-choice tests under various conditions. It also aims to determine how copying groups are created in order to calculate how kappa statistics affect Type I error rates and power. In this study,…

  14. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

    Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.

  15. Molecular Analysis of Motility in Metastatic Mammary Adenocarcinoma Cells

    DTIC Science & Technology

    1996-09-01

    elements of epidermoid carcinoma (A43 1) cells. J. Cell. Biol. 103: 87-94 Winkler, M. (1988). Translational regulation in sea urchin eggs: a complex...and Methods. Error bars show SEM . Figure 2. Rhodamine-actin polymerizes preferentially at the tips of lamellipods in EGF- stimulated cells. MTLn3...lamellipods. B) rhodamine-actin intensity at the cell center. Data for each time point is the average and SEM of 15 different cells. Images A and B

  16. The Effect of Information Level on Human-Agent Interaction for Route Planning

    DTIC Science & Technology

    2015-12-01

    13 Fig. 4 Experiment 1 shows regression results for time spent at DP predicting posttest trust group membership for the high LOI...decision time by pretest trust group membership. Bars denote standard error (SE). DT at DP was evaluated to see if it predicted posttest trust... group . Linear regression indicated that DT at DP was not a significant predictor of posttest trust for the Low or the Medium LOI conditions; however, it

  17. Thermal Conductivities of Some Polymers and Composites

    DTIC Science & Technology

    2018-02-01

    volume fraction of glass and fabric style. The experimental results are compared to modeled results for Kt in composites. 15. SUBJECT TERMS...entities in a polymer above TG increases, so Cp will increase at TG. For Kt to remain constant, there would have to be a comparable decrease in α due to...scanning calorimetry (DSC) method, and have error bars as large as the claimed effect. Their Kt values for their carbon fiber samples are comparable to

  18. New Methods for the Computational Fabrication of Appearance

    DTIC Science & Technology

    2015-06-01

    disadvantage is that it does not model phenomena such as retro-reflection and grazing-angle effects. We find that previously proposed BRDF metrics performed well...Figure 3.15-right shows the mean BRDF in blue and the corresponding error bars. In order to interpret our data, we fit a parametric model to slices of the...and Wojciech Matusik. Image-driven navigation of analytical brdf models. In Eurographics Symposium on Rendering, 2006. 107 [40] F. E. Nicodemus, J. C

  19. On-line vs off-line electrical conductivity characterization. Polycarbonate composites developed with multiwalled carbon nanotubes by compounding technology

    NASA Astrophysics Data System (ADS)

    Llorens-Chiralt, R.; Weiss, P.; Mikonsaari, I.

    2014-05-01

    Material characterization is one of the key steps when conductive polymers are developed. The dispersion of carbon nanotubes (CNTs) in a polymeric matrix using melt mixing influences the final composite properties. Compounding becomes trial and error, using a huge amount of material and spending time and money to obtain competitive composites. Traditional methods to carry out electrical conductivity characterization include compression and injection molding. Both methods need extra equipment and moulds to obtain standard bars. This study aims to investigate the accuracy of the data obtained from the absolute resistance recorded during melt compounding, using an on-line setup developed by our group, and to correlate these values with off-line characterization and processing parameters (screw/barrel configuration, throughput, screw speed, temperature profile and CNT percentage). Compounds developed with different percentages of multi-walled carbon nanotubes (MWCNTs) and polycarbonate have been characterized during and after extrusion. Measurements of on-line resistance and off-line resistivity showed parallel response and reproducibility, confirming the validity of the method. The significance of the results stems from the fact that we are able to measure on-line resistance and to change compounding parameters during production to achieve reference values, reducing production/testing cost and ensuring material quality. Also, this method removes errors which can be found in test bar development, showing better correlation with compounding parameters.

  20. [Constructing a database that can input record of use and product-specific information].

    PubMed

    Kawai, Satoru; Satoh, Kenichi; Yamamoto, Hideo

    2012-01-01

    In Japan, patients were generally infected with hepatitis C virus through the administration of specific fibrinogen injections. However, it has been difficult to identify patients who were infected as a result of the injections due to the lack of medical records. It is still not common practice at many medical facilities to maintain detailed records, because manual record keeping is extremely time consuming and subject to human error. For these reasons, the regulator required medical device manufacturers and pharmaceutical companies to attach a bar code called "GS1-128" effective March 28, 2008. Based on this new process, we have come up with the idea of constructing a new database whose records can be entered by bar code scanning to ensure data integrity. Upon examining the efficacy of this new data collection process from the perspective of time efficiency and data accuracy, "GS1-128" proved to significantly reduce time and record-keeping mistakes. Patients not only became easily identifiable by a lot number and a serial number when immediate care was required, but "GS1-128" also enhanced the ability to pinpoint manufacturing errors in the event that any trouble or side effects are reported. These data can be shared with and utilized by the entire medical industry and will help perfect the products and enhance record keeping. I believe this new process is extremely important.

  1. VizieR Online Data Catalog: R absolute magnitudes of Kuiper Belt objects (Peixinho+, 2012)

    NASA Astrophysics Data System (ADS)

    Peixinho, N.; Delsanti, A.; Guilbert-Lepoutre, A.; Gafeira, R.; Lacerda, P.

    2012-06-01

    Compilation of the absolute magnitude HRα, B−R color, and spectral features used in this work. For each object, we computed the average color index from the different papers presenting data obtained simultaneously in the B and R bands (e.g. contiguous observations within a same night). When the individual R apparent magnitude and date were available, we computed HRα = R − 5 log(rΔ), where R is the R-band magnitude, and r and Δ are the helio- and geocentric distances at the time of observation in AU, respectively. When V and V−R colors were available, we derived an R and then an HRα value. We did not correct for the phase-angle α effect. This table also includes spectral information on the presence of water ice, methanol, methane, or confirmed featureless spectra, as available in the literature. We highlight only the cases with clear bands in the spectrum, which were reported/confirmed by some other work. The 1st column indicates the object identification number and name or provisional designation; the 2nd column indicates the dynamical class; the 3rd column indicates the average HRα value and 1-σ error bars; the 4th column indicates the average B−R color and 1-σ error bars; the 5th column indicates the most important spectral features detected; and the 6th column points to the bibliographic references used for each object. (3 data files).
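
    The absolute-magnitude relation quoted above is straightforward to apply; the small helper below uses hypothetical inputs (apparent R magnitude and distances in AU) and, as in the catalogue, applies no phase-angle correction:

    ```python
    import numpy as np

    def absolute_magnitude(R, r_helio_au, delta_geo_au):
        """HRa = R - 5 log10(r * Delta), with r and Delta in AU.
        No phase-angle correction is applied, as in the catalogue."""
        return R - 5.0 * np.log10(r_helio_au * delta_geo_au)

    # Hypothetical apparent magnitude and distances for one KBO observation
    print(absolute_magnitude(R=22.4, r_helio_au=43.1, delta_geo_au=42.2))
    ```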

  2. The use of information technology to enhance patient safety and nursing efficiency.

    PubMed

    Lee, Tso-Ying; Sun, Gi-Tseng; Kou, Li-Tseng; Yeh, Mei-Ling

    2017-10-23

    Issues in patient safety and nursing efficiency have long been of concern. Advancing the role of nursing informatics is seen as the best way to address this. The aim of this study was to determine if the use, outcomes and satisfaction with a nursing information system (NIS) improved patient safety and the quality of nursing care in a hospital in Taiwan. This study adopts a quasi-experimental design. Nurses and patients were surveyed by questionnaire and data retrieval before and after the implementation of NIS in terms of blood drawing, nursing process, drug administration, bar code scanning, shift handover, and information and communication integration. Physiologic values were easier to read and interpret; it took less time to complete electronic records (3.7 vs. 9.1 min); the number of errors in drug administration was reduced (0.08% vs. 0.39%); bar codes reduced the number of errors in blood drawing (0 vs. 10) and transportation of specimens (0 vs. 0.42%); satisfaction with electronic shift handover increased significantly; there was a reduction in nursing turnover (14.9% vs. 16%); patient satisfaction increased significantly (3.46 vs. 3.34). Introduction of NIS improved patient safety and nursing efficiency and increased nurse and patient satisfaction. Medical organizations must continually improve the nursing information system if they are to provide patients with high quality service in a competitive environment.

  3. Evaluation of assumptions in soil moisture triple collocation analysis

    USDA-ARS?s Scientific Manuscript database

    Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...

  4. Two-dimensional multi-component photometric decomposition of CALIFA galaxies

    NASA Astrophysics Data System (ADS)

    Méndez-Abreu, J.; Ruiz-Lara, T.; Sánchez-Menguiano, L.; de Lorenzo-Cáceres, A.; Costantin, L.; Catalán-Torrecilla, C.; Florido, E.; Aguerri, J. A. L.; Bland-Hawthorn, J.; Corsini, E. M.; Dettmar, R. J.; Galbany, L.; García-Benito, R.; Marino, R. A.; Márquez, I.; Ortega-Minakata, R. A.; Papaderos, P.; Sánchez, S. F.; Sánchez-Blazquez, P.; Spekkens, K.; van de Ven, G.; Wild, V.; Ziegler, B.

    2017-02-01

    We present a two-dimensional multi-component photometric decomposition of 404 galaxies from the Calar Alto Legacy Integral Field Area data release 3 (CALIFA-DR3). They represent all possible galaxies with no clear signs of interaction and not strongly inclined in the final CALIFA data release. Galaxies are modelled in the g, r, and i Sloan Digital Sky Survey (SDSS) images including, when appropriate, a nuclear point source, bulge, bar, and an exponential or broken disc component. We use a human-supervised approach to determine the optimal number of structures to be included in the fit. The dataset, including the photometric parameters of the CALIFA sample, is released together with statistical errors and a visual analysis of the quality of each fit. The analysis of the photometric components reveals a clear segregation of the structural composition of galaxies with stellar mass. At high masses (log (M⋆/M⊙) > 11), the galaxy population is dominated by galaxies modelled with a single Sérsic or a bulge+disc with a bulge-to-total (B/T) luminosity ratio B/T > 0.2. At intermediate masses (9.5 < log (M⋆/M⊙) < 11), galaxies described with bulge+disc but B/T < 0.2 are preponderant, whereas, at the low mass end (log (M⋆/M⊙) < 9.5), the prevailing population is constituted by galaxies modelled with either pure discs or nuclear point sources+discs (i.e., no discernible bulge). We obtain that 57% of the volume-corrected sample of disc galaxies in the CALIFA sample host a bar. This bar fraction shows a significant drop with increasing galaxy mass in the range 9.5 < log (M⋆/M⊙) < 11.5. The analyses of the extended multi-component radial profiles result in a volume-corrected distribution of 62%, 28%, and 10% for the so-called Type I (pure exponential), Type II (down-bending), and Type III (up-bending) disc profiles, respectively. These fractions are in discordance with previous findings. We argue that the different methodologies used to detect the breaks are the main cause for these differences. The catalog of fitted parameters is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A32

  5. Issues with data and analyses: Errors, underlying themes, and potential solutions

    PubMed Central

    Allison, David B.

    2018-01-01

    Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge. PMID:29531079

  6. A Preservice Mathematics Teacher's Beliefs about Teaching Mathematics with Technology

    ERIC Educational Resources Information Center

    Belbase, Shashidhar

    2015-01-01

    This paper analyzed a preservice mathematics teacher's beliefs about teaching mathematics with technology. The researcher used five semi-structured task-based interviews in the problematic contexts of teaching fraction multiplications with JavaBars, functions and limits, and geometric transformations with Geometer's Sketchpad, and statistical data…

  7. Microscopic saw mark analysis: an empirical approach.

    PubMed

    Love, Jennifer C; Derrick, Sharon M; Wiersema, Jason M; Peters, Charles

    2015-01-01

    Microscopic saw mark analysis is a well published and generally accepted qualitative analytical method. However, little research has focused on identifying and mitigating potential sources of error associated with the method. The presented study proposes the use of classification trees and random forest classifiers as an optimal, statistically sound approach to mitigate the potential for variability and outcome error in microscopic saw mark analysis. The statistical model was applied to 58 experimental saw marks created with four types of saws. The saw marks were made in fresh human femurs obtained through anatomical gift and were analyzed using a Keyence digital microscope. The statistical approach weighted the variables based on discriminatory value and produced decision trees with an associated outcome error rate of 8.62-17.82%. © 2014 American Academy of Forensic Sciences.
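
    A random-forest workflow of the kind described above can be sketched with scikit-learn; the synthetic features below are hypothetical stand-ins for the measured saw-mark variables, and the out-of-bag error plays the role of the study's outcome error rate:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic stand-in for the measured saw-mark variables; 4 saw classes as in the study.
    rng = np.random.default_rng(0)
    n_per_class, n_features = 60, 6
    X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
                   for c in range(4)])
    y = np.repeat(np.arange(4), n_per_class)

    clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    clf.fit(X, y)

    # Out-of-bag error estimates the outcome error rate, and the feature
    # importances show how the variables are weighted by discriminatory value.
    print(f"OOB error rate: {1 - clf.oob_score_:.2%}")
    print("feature importances:", np.round(clf.feature_importances_, 3))
    ```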

  8. Investigating the role of background and observation error correlations in improving a model forecast of forest carbon balance using four dimensional variational data assimilation.

    NASA Astrophysics Data System (ADS)

    Pinnington, Ewan; Casella, Eric; Dance, Sarah; Lawless, Amos; Morison, James; Nichols, Nancy; Wilkinson, Matthew; Quaife, Tristan

    2016-04-01

    Forest ecosystems play an important role in sequestering human-emitted carbon dioxide from the atmosphere and therefore greatly reduce the effect of anthropogenically induced climate change. For that reason, understanding their response to climate change is of great importance. Efforts to implement variational data assimilation routines with functional ecology models and land surface models have been limited, with sequential and Markov chain Monte Carlo data assimilation methods being prevalent. When data assimilation has been used with models of carbon balance, background "prior" errors and observation errors have largely been treated as independent and uncorrelated. Correlations between background errors have long been known to be a key aspect of data assimilation in numerical weather prediction. More recently, it has been shown that accounting for correlated observation errors in the assimilation algorithm can considerably improve data assimilation results and forecasts. In this paper we implement a 4D-Var scheme with a simple model of forest carbon balance, for joint parameter and state estimation, and assimilate daily observations of Net Ecosystem CO2 Exchange (NEE) taken at the Alice Holt forest CO2 flux site in Hampshire, UK. We then investigate the effect of specifying correlations between parameter and state variables in background error statistics and the effect of specifying correlations in time between observation error statistics. The idea of including these correlations in time is new and has not been previously explored in carbon balance model data assimilation. In data assimilation, background and observation error statistics are often described by the background error covariance matrix and the observation error covariance matrix. We outline novel methods for creating correlated versions of these matrices, using a set of previously postulated dynamical constraints to include correlations in the background error statistics and a Gaussian correlation function to include time correlations in the observation error statistics. The methods used in this paper will allow the inclusion of time correlations between many different observation types in the assimilation algorithm, meaning that previously neglected information can be accounted for. In our experiments we compared the results using our new correlated background and observation error covariance matrices and those using diagonal covariance matrices. We found that using the new correlated matrices reduced the root mean square error in the 14 year forecast of daily NEE by 44%, decreasing from 4.22 g C m⁻² day⁻¹ to 2.38 g C m⁻² day⁻¹.
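
    The Gaussian correlation function mentioned for the time-correlated observation errors corresponds to a covariance matrix of the form R_ij = σ² exp(−½((t_i − t_j)/L)²). A minimal sketch of building such a matrix for daily observations is shown below; the error standard deviation and correlation length are placeholders, not the values used in the paper:

    ```python
    import numpy as np

    def gaussian_time_covariance(n_obs, sigma, length_scale_days):
        """Observation-error covariance with a Gaussian correlation in time:
        R_ij = sigma^2 * exp(-0.5 * ((t_i - t_j) / L)^2) for daily observations."""
        t = np.arange(n_obs, dtype=float)                 # days
        dt = t[:, None] - t[None, :]
        return sigma ** 2 * np.exp(-0.5 * (dt / length_scale_days) ** 2)

    # Placeholder values: 0.5 g C m^-2 day^-1 error std, 4-day correlation length
    R = gaussian_time_covariance(n_obs=365, sigma=0.5, length_scale_days=4.0)
    R += 1e-8 * np.eye(365)                               # jitter for numerical stability

    # In 4D-Var the inverse enters the observation term of the cost function:
    #   J_o(x) = 0.5 * (y - H(x))^T R^{-1} (y - H(x))
    print("condition number of R:", np.linalg.cond(R))
    ```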

  9. Lesion correlates of impairments in actual tool use following unilateral brain damage.

    PubMed

    Salazar-López, E; Schwaiger, B J; Hermsdörfer, J

    2016-04-01

    To understand how the brain controls actions involving tools, tests have been developed employing different paradigms such as pantomime, imitation and real tool use. The relevant areas have been localized in the premotor cortex, the middle temporal gyrus and the superior and inferior parietal lobe. This study employs Voxel Lesion Symptom Mapping to relate the functional impairment in actual tool use with extent and localization of the structural damage in the left (LBD, N=31) and right (RBD, N=19) hemisphere in chronic stroke patients. A series of 12 tools was presented to participants in a carousel. In addition, a non-tool condition tested the prescribed manipulation of a bar. The execution was scored according to an apraxic error scale based on the dimensions grasp, movement, direction and space. Results in the LBD group show that the ventro-dorsal stream constitutes the core of the defective network responsible for impaired tool use; it is composed of the inferior parietal lobe, the supramarginal and angular gyrus and the dorsal premotor cortex. In addition, involvement of regions in the temporal lobe, the rolandic operculum, the ventral premotor cortex and the middle occipital gyrus provide evidence of the role of the ventral stream in this task. Brain areas related to the use of the bar largely overlapped with this network. For patients with RBD data were less conclusive; however, a trend for the involvement of the temporal lobe in apraxic errors was manifested. Skilled bar manipulation depended on the same temporal area in these patients. Therefore, actual tool use depends on a well described left fronto-parietal-temporal network. RBD affects actual tool use, however the underlying neural processes may be more widely distributed and more heterogeneous. Goal directed manipulation of non-tool objects seems to involve very similar brain areas as tool use, suggesting that both types of manipulation share identical processes and neural representations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. A quality assessment of 3D video analysis for full scale rockfall experiments

    NASA Astrophysics Data System (ADS)

    Volkwein, A.; Glover, J.; Bourrier, F.; Gerber, W.

    2012-04-01

    The main goal of full-scale rockfall experiments is to retrieve the 3D trajectory of a boulder along the slope. Such trajectories can then be used to calibrate rockfall simulation models. This contribution presents the application of video analysis techniques to capture the rockfall velocity in some free-fall full-scale rockfall experiments along a rock face with an inclination of about 50 degrees. Different scaling methodologies have been evaluated. They mainly differ in the way the scaling factors between the movie frames and reality are determined. For this purpose, some scale bars and targets with known dimensions have been distributed in advance along the slope. The single scaling approaches are briefly described as follows: (i) The image raster is scaled to the distant fixed scale bar and then recalibrated to the plane of the passing rock boulder by taking the measured position of the nearest impact as the distance to the camera; the distances between the camera, scale bar, and passing boulder are surveyed. (ii) The image raster was scaled using the four nearest targets (identified using the frontal video) from the trajectory to be analyzed; the average of the scaling factors was taken as the scaling factor. (iii) The image raster was scaled using the four nearest targets from the trajectory to be analyzed; the scaling factor for one trajectory was calculated by balancing the mean scaling factors associated with the two nearest and the two farthest targets in relation to their mean distance to the analyzed trajectory. (iv) Same as the previous method but with scaling factors varying along the trajectory. It was shown that a direct measure of the scaling target and nearest impact zone is the most accurate. If a constant plane is assumed, it does not account for the lateral deviations of the rock boulder from the fall line, consequently adding error to the analysis. Thus, a combination of scaling methods (i) and (iv) is considered to give the best results. For best results regarding the lateral rough positioning along the slope, the frontal video must also be scaled. The error in scaling the video images can be evaluated by comparing the vertical component of the trajectory over time with the theoretical polynomial trend according to gravity. The different tracking techniques used to plot the position of the boulder's center of gravity all generated positional data with minimal error acceptable for trajectory analysis. However, when calculating instantaneous velocities, an amplification of this error becomes unacceptable. A regression analysis of the data helps to optimize the trajectory and velocity estimates.
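
    The gravity check described above amounts to fitting z(t) = z₀ + v₀t − ½gt² to the tracked vertical positions: a recovered g far from 9.81 m/s² indicates a scaling error, and the fitted polynomial also yields smoothed instantaneous velocities that avoid amplifying tracking noise. A minimal sketch on simulated data (the frame rate, noise level and trajectory are assumptions):

    ```python
    import numpy as np

    # Simulated vertical positions of a tracked boulder (m) at 25 fps with tracking noise
    g_true, fps, n = 9.81, 25.0, 60
    t = np.arange(n) / fps
    rng = np.random.default_rng(0)
    z = 30.0 - 0.5 * g_true * t ** 2 + rng.normal(scale=0.05, size=n)

    # Fit z(t) = z0 + v0*t - 0.5*g*t^2; a scaling error in the video analysis
    # shows up as a recovered g that deviates from 9.81 m/s^2.
    coeffs = np.polyfit(t, z, 2)                 # [a, b, c] with z = a t^2 + b t + c
    g_fit = -2.0 * coeffs[0]
    print(f"recovered g = {g_fit:.2f} m/s^2 (expected 9.81)")

    # Smoothed instantaneous velocity from the fitted polynomial instead of
    # noisy frame-to-frame differences:
    v = np.polyval(np.polyder(coeffs), t)
    print(f"velocity at last frame: {v[-1]:.2f} m/s")
    ```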

  11. Emotional intelligence and its correlation to performance as a resident: a preliminary study.

    PubMed

    Talarico, Joseph F; Metro, David G; Patel, Rita M; Carney, Patricia; Wetmore, Amy L

    2008-03-01

    To test the hypothesis that emotional intelligence, as measured by the Bar-On Emotional Quotient Inventory (EQ-I) 125 (Multi Health Systems, Toronto, Ontario, Canada) personal inventory, would correlate with resident performance. Prospective survey. University-affiliated, multi-institutional anesthesiology residency program. Current clinical anesthesiology years one to three (PGY 2-4) residents enrolled in the University of Pittsburgh Anesthesiology Residency Program. Participants confidentially completed the Bar-On EQ-I 125 survey. Results of the individual EQ-I 125 and daily evaluations by the faculty of the residency program were compiled and analyzed. There was no positive correlation between any facet of emotional intelligence and resident performance. There was a statistically significant negative correlation (-0.40; P < 0.05) between assertiveness and the "American Board of Anesthesiology essential attributes" component of the resident evaluation. Emotional intelligence, as measured by the Bar-On EQ-I personal inventory, does not strongly correlate with resident performance as defined at the University of Pittsburgh.

  12. Economic evaluation of a 100% smoke-free law on the hospitality industry in an Argentinean province.

    PubMed

    Candioti, Carlos; Rossini, Gustavo; Depetris de Guiguet, Edith; Costa, Oscar; Schoj, Verónica

    2012-06-01

    To assess the economic impact of a 100% smoke-free law on bars and restaurants in an Argentinean province. We conducted a time series analysis of restaurant and bar revenues in the province of Santa Fe 31 months before and 29 months after the implementation of the 100% smoke-free environment law. The neighboring province of Entre Rios, which had no smoking restrictions at the time of this study, was used as the control province. Average taxable revenues post-legislation in the province of Santa Fe as a whole and in the two most important cities were higher when compared to the total provincial revenue pre-legislation. No significant differences were observed with the total revenue from the province of Entre Rios. We found no statistically significant evidence that the 100% smoke-free environment legislation in the province of Santa Fe, Argentina, had a negative impact on the revenues of local bars and restaurants.

  13. Measurement-device-independent quantum key distribution with source state errors and statistical fluctuation

    NASA Astrophysics Data System (ADS)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2017-03-01

    We show how to calculate the secure final key rate in the four-intensity decoy-state measurement-device-independent quantum key distribution protocol with both source errors and statistical fluctuations with a certain failure probability. Our results rely only on the range of a few parameters in the source state. All imperfections in this protocol have been taken into consideration without assuming any specific error patterns of the source.

  14. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    PubMed Central

    Tian, Zengshan; Xu, Kunjie; Yu, Xiang

    2014-01-01

    This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future. PMID:24683349

  15. Error analysis for RADAR neighbor matching localization in linear logarithmic strength varying Wi-Fi environment.

    PubMed

    Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo

    2014-01-01

    This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future.
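
    A minimal sketch of the neighbor-matching (fingerprinting) localization analyzed in the two records above, assuming an RSS radio map has been collected at linearly deployed reference points; the k-nearest-neighbor estimator, the path-loss model, and all data are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def knn_localize(rss_map, rp_positions, rss_online, k=3):
    """Estimate position as the centroid of the k reference points (RPs)
    whose stored RSS fingerprints are closest (Euclidean) to the online RSS."""
    dists = np.linalg.norm(rss_map - rss_online, axis=1)
    nearest = np.argsort(dists)[:k]
    return rp_positions[nearest].mean(axis=0)

# Illustrative radio map: 10 RPs on a line, RSS from 3 access points following
# a simple log-distance path-loss model (-40 dBm at 1 m, exponent 3).
rng = np.random.default_rng(7)
rp_positions = np.linspace(0, 18, 10).reshape(-1, 1)   # RP coordinates (m)
ap_positions = np.array([[0.0], [10.0], [20.0]])        # access points (m)

def rss(p):
    d = np.maximum(np.abs(ap_positions - p).ravel(), 0.5)
    return -40.0 - 10 * 3 * np.log10(d)

rss_map = np.array([rss(p) for p in rp_positions])
target = np.array([7.3])
estimate = knn_localize(rss_map, rp_positions, rss(target) + rng.normal(0, 2, 3))
print(f"true position: {target[0]:.1f} m, estimated: {estimate[0]:.2f} m")
```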

  16. Could some aviation deep vein thrombosis be a form of decompression sickness?

    PubMed

    Buzzacott, Peter; Mollerlokken, Andreas

    2016-10-01

    Aviation deep vein thrombosis is a challenge poorly understood in modern aviation. The aim of the present project was to determine if cabin decompression might favor formation of vascular bubbles in commercial air travelers. Thirty commercial flights were taken. Cabin pressure was noted at take-off and at every minute following, until the pressure stabilized. These time-pressure profiles were imported into the statistics program R and analyzed using the package SCUBA. Greatest pressure differentials between tissues and cabin pressures were estimated for 20, 40, 60, 80 and 120 min half-time compartments. Time to decompress ranged from 11 to 47 min. The greatest drop in cabin pressure was from 1022 to 776 mBar, equivalent to a saturated diver ascending from 2.46 msw depth. Mean pressure drop in flights >2 h duration was 193 mBar, while mean pressure drop in flights <2 h was 165 mBar. The greatest drop in pressure over 1 min was 28 mBar. Over 30 commercial flights it was found that the drop in cabin pressure was commensurate with that found to cause bubbles in man. Both the US Navy and the Royal Navy mandate far slower decompression from states of saturation, being 1.7 and 1.9 mBar/min respectively. The median overall rate of decompression found in this study was 8.5 mBar/min, five times the rate prescribed for USN saturation divers. The tissues associated with hypobaric bubble formation are likely slower than those associated with bounce diving, with 60 min a potentially useful index.
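
    A minimal sketch of the half-time compartment calculation described above, assuming a single exponential tissue model so that the pressure differential between a slow compartment and the falling cabin pressure can be tracked minute by minute; the cabin profile and decompression rate are illustrative, not the recorded flights:

```python
import math

def max_supersaturation(cabin_profile_mbar, half_time_min):
    """Track the tissue pressure of one exponential compartment exposed to a
    cabin-pressure time series (one sample per minute) and return the largest
    tissue-minus-cabin pressure differential (mbar)."""
    k = math.log(2) / half_time_min
    tissue = cabin_profile_mbar[0]          # start saturated at ground pressure
    worst = 0.0
    for cabin in cabin_profile_mbar[1:]:
        tissue += (cabin - tissue) * (1 - math.exp(-k))  # one-minute update
        worst = max(worst, tissue - cabin)
    return worst

# Illustrative profile: 1022 mbar at take-off, dropping ~8.5 mbar/min to 776 mbar.
profile = [max(776.0, 1022.0 - 8.5 * t) for t in range(60)]
for ht in (20, 40, 60, 80, 120):
    print(f"{ht:3d} min compartment: max differential = "
          f"{max_supersaturation(profile, ht):.0f} mbar")
```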

  17. NOTE: Development and preliminary evaluation of a prototype audiovisual biofeedback device incorporating a patient-specific guiding waveform

    NASA Astrophysics Data System (ADS)

    Venkat, Raghu B.; Sawant, Amit; Suh, Yelin; George, Rohini; Keall, Paul J.

    2008-06-01

    The aim of this research was to investigate the effectiveness of a novel audio-visual biofeedback respiratory training tool to reduce respiratory irregularity. The audiovisual biofeedback system acquires sample respiratory waveforms of a particular patient and computes a patient-specific waveform to guide the patient's subsequent breathing. Two visual feedback models with different displays and cognitive loads were investigated: a bar model and a wave model. The audio instructions were ascending/descending musical tones played at inhale and exhale respectively to assist in maintaining the breathing period. Free-breathing, bar model and wave model training was performed on ten volunteers for 5 min for three repeat sessions. A total of 90 respiratory waveforms were acquired. It was found that the bar model was superior to free breathing with overall rms displacement variations of 0.10 and 0.16 cm, respectively, and rms period variations of 0.77 and 0.33 s, respectively. The wave model was superior to the bar model and free breathing for all volunteers, with an overall rms displacement of 0.08 cm and rms periods of 0.2 s. The reduction in the displacement and period variations for the bar model compared with free breathing was statistically significant (p = 0.005 and 0.002, respectively); the wave model was significantly better than the bar model (p = 0.006 and 0.005, respectively). Audiovisual biofeedback with a patient-specific guiding waveform significantly reduces variations in breathing. The wave model approach reduces cycle-to-cycle variations in displacement by greater than 50% and variations in period by over 70% compared with free breathing. The planned application of this device is anatomic and functional imaging procedures and radiation therapy delivery.

  18. Parametric Study of Shear Strength of Concrete Beams Reinforced with FRP Bars

    NASA Astrophysics Data System (ADS)

    Thomas, Job; Ramadass, S.

    2016-09-01

    Fibre Reinforced Polymer (FRP) bars have been widely used as internal reinforcement in structural elements over the last decade. The corrosion resistance of FRP bars qualifies their use in severe and marine exposure conditions in structures. A total of eight concrete beams longitudinally reinforced with FRP bars were cast and tested over shear span to depth ratios of 0.5 and 1.75. Shear strength test data for 188 beams published in the literature were also used. The model originally proposed by the Indian Standard code of practice for the prediction of shear strength of concrete beams reinforced with steel bars, IS:456 (Plain and reinforced concrete, code of practice, fourth revision. Bureau of Indian Standards, New Delhi, 2000), is considered, and a modification to account for the influence of the FRP bars is proposed based on regression analysis. Of the 196 test data, 110 are used for the regression analysis and 86 for the validation of the model. In addition, the shear strength of the 86 validation test data is assessed using eleven models proposed by various researchers. The proposed model accounts for the compressive strength of concrete (f_ck), the modulus of elasticity of FRP rebar (E_f), the longitudinal reinforcement ratio (ρ_f), the shear span to depth ratio (a/d), and the size effect of beams. The shear strength predicted by the proposed model and by the 11 models proposed by other researchers is compared with the corresponding experimental results. The mean ratio of predicted to experimental shear strength for the 86 validation beams is found to be 0.93 (a minimal sketch of this kind of ratio-based validation is given below). The statistical analysis indicates that predictions based on the proposed model corroborate the corresponding experimental data.
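
    A minimal sketch of the ratio-based validation referred to above, assuming arrays of predicted and experimental shear strengths are available; the data and function name are illustrative, not the authors' 86-beam dataset:

```python
import numpy as np

def validation_stats(v_pred, v_exp):
    """Summarize model performance as the ratio of predicted to experimental
    shear strength (mean, standard deviation, coefficient of variation)."""
    ratio = np.asarray(v_pred) / np.asarray(v_exp)
    return ratio.mean(), ratio.std(ddof=1), ratio.std(ddof=1) / ratio.mean()

# Illustrative values in kN (made up, not the published test data):
v_exp = np.array([120.0, 95.0, 150.0, 80.0])
v_pred = np.array([110.0, 90.0, 140.0, 78.0])
mean_r, sd_r, cov_r = validation_stats(v_pred, v_exp)
print(f"mean ratio = {mean_r:.2f}, SD = {sd_r:.2f}, COV = {cov_r:.2f}")
```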

  19. A large community outbreak of salmonellosis caused by intentional contamination of restaurant salad bars.

    PubMed

    Török, T J; Tauxe, R V; Wise, R P; Livengood, J R; Sokolow, R; Mauvais, S; Birkness, K A; Skeels, M R; Horan, J M; Foster, L R

    1997-08-06

    This large outbreak of foodborne disease highlights the challenge of investigating outbreaks caused by intentional contamination and demonstrates the vulnerability of self-service foods to intentional contamination. To investigate a large community outbreak of Salmonella Typhimurium infections. Epidemiologic investigation of patients with Salmonella gastroenteritis and possible exposures in The Dalles, Oregon. Cohort and case-control investigations were conducted among groups of restaurant patrons and employees to identify exposures associated with illness. A community in Oregon. Outbreak period was September and October 1984. A total of 751 persons with Salmonella gastroenteritis associated with eating or working at area restaurants. Most patients were identified through passive surveillance; active surveillance was conducted for selected groups. A case was defined either by clinical criteria or by a stool culture yielding S Typhimurium. The outbreak occurred in 2 waves, September 9 through 18 and September 19 through October 10. Most cases were associated with 10 restaurants, and epidemiologic studies of customers at 4 restaurants and of employees at all 10 restaurants implicated eating from salad bars as the major risk factor for infection. Eight (80%) of 10 affected restaurants compared with only 3 (11%) of the 28 other restaurants in The Dalles operated salad bars (relative risk, 7.5; 95% confidence interval, 2.4-22.7; P<.001). The implicated food items on the salad bars differed from one restaurant to another. The investigation did not identify any water supply, food item, supplier, or distributor common to all affected restaurants, nor were employees exposed to any single common source. In some instances, infected employees may have contributed to the spread of illness by inadvertently contaminating foods. However, no evidence was found linking ill employees to initiation of the outbreak. Errors in food rotation and inadequate refrigeration on ice-chilled salad bars may have facilitated growth of the S Typhimurium but could not have caused the outbreak. A subsequent criminal investigation revealed that members of a religious commune had deliberately contaminated the salad bars. An S Typhimurium strain found in a laboratory at the commune was indistinguishable from the outbreak strain. This outbreak of salmonellosis was caused by intentional contamination of restaurant salad bars by members of a religious commune.

  20. Local indicators of geocoding accuracy (LIGA): theory and application

    PubMed Central

    Jacquez, Geoffrey M; Rommel, Robert

    2009-01-01

    Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. Herein lies a paradox for spatial analysis: For a given level of positional error increasing sample density to more accurately follow the underlying population distribution increases perturbability and introduces error into the spatial weights matrix. In some studies positional error may not impact the statistical results, and in others it might invalidate the results. We therefore must understand the relationships between positional accuracy and the perturbability of the spatial weights in order to have confidence in a study's results. PMID:19863795

  1. Leptonic Decays of the Charged B Meson

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corwin, Luke A.

    2008-01-01

    We present a search for the decay B+ → ℓ+ν_ℓ (ℓ = τ, μ, or e) in (458.9 ± 5.1) × 10^6 Υ(4S) decays recorded with the BABAR detector at the SLAC PEP-II B-Factory. A sample of events with one reconstructed exclusive semileptonic B decay (B- → D0ℓ-ν̄X) is selected, and in the recoil a search for a B+ → ℓ+ν_ℓ signal is performed. The τ is identified in the following channels: τ+ → e+ν_e ν̄_τ, τ+ → μ+ν_μ ν̄_τ, τ+ → π+ν̄_τ, and τ+ → π+π0ν̄_τ. The analysis strategy and the statistical procedure are set up for branching-fraction extraction or upper-limit determination. We determine from the dataset a preliminary measurement of B(B+ → τ+ν_τ) = (1.8 ± 0.8 ± 0.1) × 10^-4, which excludes zero at 2.4σ, and f_B = 255 ± 58 MeV. Combination with the hadronically tagged measurement yields B(B+ → τ+ν_τ) = (1.8 ± 0.6) × 10^-4. We also set preliminary limits on the branching fractions of B(B+ → e+ν_e) < 7.7 × 10^-6 (90% C.L.), B(B+ → μ+ν_μ) < 11 × 10^-6 (90% C.L.), and B(B+ → τ+ν_τ) < 3.2 × 10^-4 (90% C.L.).
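
    A minimal sketch of how two independent branching-fraction measurements with statistical and systematic uncertainties can be combined by inverse-variance weighting, as in the combination quoted above; the semileptonic-tag numbers are taken from the abstract, the hadronic-tag numbers are hypothetical, and the simple quadrature treatment of systematics is an assumption rather than the analysis' actual procedure:

```python
import math

def combine(measurements):
    """Inverse-variance weighted average of (value, stat, syst) tuples,
    adding statistical and systematic errors in quadrature per measurement."""
    weights = []
    for value, stat, syst in measurements:
        sigma2 = stat**2 + syst**2
        weights.append((value, 1.0 / sigma2))
    wsum = sum(w for _, w in weights)
    mean = sum(v * w for v, w in weights) / wsum
    return mean, math.sqrt(1.0 / wsum)

semilep = (1.8e-4, 0.8e-4, 0.1e-4)   # from the abstract
hadronic = (1.8e-4, 0.9e-4, 0.2e-4)  # illustrative values only
value, error = combine([semilep, hadronic])
print(f"B(B+ -> tau nu) = ({value * 1e4:.1f} +/- {error * 1e4:.1f}) x 10^-4")
```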

  2. Sexing young snowy owls

    USGS Publications Warehouse

    Seidensticker, M.T.; Holt, D.W.; Detienne, J.; Talbot, S.; Gray, K.

    2011-01-01

    We predicted sex of 140 Snowy Owl (Bubo scandiacus) nestlings out of 34 nests at our Barrow, Alaska, study area to develop a technique for sexing these owls in the field. We primarily sexed young, flightless owls (38-44 d old) by quantifying plumage markings on the remiges and tail, predicting sex, and collecting blood samples to test our field predictions using molecular sexing techniques. We categorized and quantified three different plumage markings: two types of bars (defined as markings that touch the rachis) and spots (defined as markings that do not touch the rachis). We predicted sex in the field assuming that males had more spots than bars and females more bars than spots on the remiges and rectrices. Molecular data indicated that we correctly sexed 100% of the nestlings. We modeled the data using random forests and classification trees. Both models indicated that the number and type of markings on the secondary feathers were the most important in classifying nestling sex. The statistical models verified our initial qualitative prediction that males have more spots than bars and females more bars than spots on flight feathers P6-P10 for both wings and tail feathers T1 and T2. This study provides researchers with an easily replicable and highly accurate method for sexing young Snowy Owls in the field, which should aid further studies of sex ratios and sex-related variation in behavior and growth of this circumpolar owl species. © 2011 The Raptor Research Foundation, Inc.
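
    A minimal sketch of the classification approach described above, assuming the plumage markings have already been counted per feather group; the features and synthetic data are illustrative, not the study's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Illustrative features: counts of spots and bars on secondaries and rectrices.
rng = np.random.default_rng(0)
n = 140
is_male = rng.integers(0, 2, n)
spots = rng.poisson(8, n) + 4 * is_male        # males assumed to have more spots
bars = rng.poisson(8, n) + 4 * (1 - is_male)   # females assumed to have more bars
X = np.column_stack([spots, bars])
y = is_male

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
clf.fit(X, y)
print("feature importances (spots, bars):", clf.feature_importances_)
```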

  3. Scout trajectory error propagation computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1982-01-01

    Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consists of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in computer program Scout Trajectory Error Propagation (STEP) and is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
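
    A minimal sketch of the covariance-forming step described above, assuming the observed burnout errors are stacked in a matrix with one row per flight; the names, synthetic data, and propagation matrix are illustrative, not the STEP program's actual interface:

```python
import numpy as np

# Each row: observed errors in trajectory parameters at burnout for one flight
# (e.g. altitude, velocity, flight-path angle); ~50 flights in the original data set.
rng = np.random.default_rng(1)
flight_errors = rng.normal(0.0, [100.0, 5.0, 0.1], size=(50, 3))  # synthetic

# Sample covariance of the burnout errors.
P0 = np.cov(flight_errors, rowvar=False)

# Propagate the covariance with a (here arbitrary) state transition matrix Phi:
# P(t) = Phi P0 Phi^T.
Phi = np.eye(3) + 0.01 * rng.normal(size=(3, 3))
P_t = Phi @ P0 @ Phi.T
print("propagated standard deviations:", np.sqrt(np.diag(P_t)))
```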

  4. Measurement of D0–D̄0 mixing and search for CP violation in D0 → K+K−, π+π− decays with the full Belle data set

    DOE PAGES

    Staric, M.; Abdesselam, A.; Adachi, I.; ...

    2015-12-14

    Here, we report an improved measurement of D0–D̄0 mixing and a search for CP violation in D0 decays to CP-even final states K+K− and π+π−. The measurement is based on the final Belle data sample of 976 fb⁻¹. The results are y_CP = (1.11 ± 0.22 ± 0.09)% and A_Γ = (−0.03 ± 0.20 ± 0.07)%, where the first uncertainty is statistical and the second is systematic.

  5. Measurement of D0–D̄0 mixing and search for CP violation in D0 → K+K−, π+π− decays with the full Belle data set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staric, M.; Abdesselam, A.; Adachi, I.

    Here, we report an improved measurement of D0–D̄0 mixing and a search for CP violation in D0 decays to CP-even final states K+K− and π+π−. The measurement is based on the final Belle data sample of 976 fb⁻¹. The results are y_CP = (1.11 ± 0.22 ± 0.09)% and A_Γ = (−0.03 ± 0.20 ± 0.07)%, where the first uncertainty is statistical and the second is systematic.

  6. Fundamental Investigation of the Microstructural Parameters to Improve Dynamic Response in Al-Cu Model System

    DTIC Science & Technology

    2014-05-01

    grain size. Recrystallization was then induced via annealing just above the solvus temperature. After quenching , the bars were immediately placed into...that the values were statistically significant. Precipitate sizes ranged from approximately 100 nanometers in diameter up to 2-5 microns in diameter

  7. Modeling evaporation of Jet A, JP-7 and RP-1 drops at 1 to 15 bars

    NASA Technical Reports Server (NTRS)

    Harstad, K.; Bellan, J.

    2003-01-01

    A model describing the evaporation of an isolated drop of a multicomponent fuel containing hundreds of species has been developed. The model is based on Continuous Thermodynamics concepts wherein the composition of a fuel is statistically described using a Probability Distribution Function (PDF).

  8. What's holding us back? Raising the alfalfa yield bar

    USDA-ARS?s Scientific Manuscript database

    Measuring yield of commodity crops is easy – weight and moisture content are determined on delivery. Consequently, reports of production or yield for grain crops can be made reliably to the agencies that track crop production, such as the USDA-National Agricultural Statistics Service (NASS). The s...

  9. Action planning and position sense in children with Developmental Coordination Disorder.

    PubMed

    Adams, Imke L J; Ferguson, Gillian D; Lust, Jessica M; Steenbergen, Bert; Smits-Engelsman, Bouwien C M

    2016-04-01

    The present study examined action planning and position sense in children with Developmental Coordination Disorder (DCD). Participants performed two action planning tasks, the sword task and the bar grasping task, and an active elbow matching task to examine position sense. Thirty children were included in the DCD group (aged 6-10 years) and age-matched to 90 controls. The DCD group had a MABC-2 total score ⩽5th percentile, and the control group a total score ⩾25th percentile. Results from the sword task showed that children with DCD planned less for end-state comfort. On the bar grasping task no significant differences in planning for end-state comfort between the DCD and control group were found. There was also no significant difference in the position sense error between the groups. The present study shows that children with DCD plan less for end-state comfort, but that this result is task-dependent and becomes apparent when more precision is needed at the end of the task. In that respect, the sword task appeared to be a more sensitive task to assess action planning abilities than the bar grasping task. The action planning deficit in children with DCD cannot be explained by an impaired position sense during active movements. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. How to Create Automatically Graded Spreadsheets for Statistics Courses

    ERIC Educational Resources Information Center

    LoSchiavo, Frank M.

    2016-01-01

    Instructors often use spreadsheet software (e.g., Microsoft Excel) in their statistics courses so that students can gain experience conducting computerized analyses. Unfortunately, students tend to make several predictable errors when programming spreadsheets. Without immediate feedback, programming errors are likely to go undetected, and as a…

  11. Phase error statistics of a phase-locked loop synchronized direct detection optical PPM communication system

    NASA Technical Reports Server (NTRS)

    Natarajan, Suresh; Gardner, C. S.

    1987-01-01

    Receiver timing synchronization of an optical Pulse-Position Modulation (PPM) communication system can be achieved using a phase-locked loop (PLL), provided the photodetector output is suitably processed. The magnitude of the PLL phase error is a good indicator of the timing error at the receiver decoder. The statistics of the phase error are investigated while varying several key system parameters such as PPM order, signal and background strengths, and PLL bandwidth. A practical optical communication system utilizing a laser diode transmitter and an avalanche photodiode in the receiver is described, and the sampled phase error data are presented. A linear regression analysis is applied to the data to obtain estimates of the relational constants involving the phase error variance and incident signal power.
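
    A minimal sketch of the kind of regression described above, assuming the phase-error variance is modeled as a power law in incident signal power so that the fit can be done linearly in log-log space; the model form and data are assumptions for illustration, not the paper's measured values:

```python
import numpy as np

# Synthetic measurements: phase-error variance (rad^2) vs incident signal power (W).
power = np.array([1e-9, 2e-9, 5e-9, 1e-8, 2e-8, 5e-8])
phase_var = 2e-12 / power * (1 + np.random.default_rng(2).normal(0, 0.05, power.size))

# Fit log(var) = a * log(P) + b; a near -1 would indicate variance ~ 1/P.
a, b = np.polyfit(np.log(power), np.log(phase_var), deg=1)
print(f"fitted exponent a = {a:.2f}, constant exp(b) = {np.exp(b):.3g}")
```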

  12. Meta-analysis inside and outside particle physics: two traditions that should converge?

    PubMed

    Baker, Rose D; Jackson, Dan

    2013-06-01

    The use of meta-analysis in medicine and epidemiology really took off in the 1970s. However, in high-energy physics, the Particle Data Group has been carrying out meta-analyses of measurements of particle masses and other properties since 1957. Curiously, there has been virtually no interaction between those working inside and outside particle physics. In this paper, we use statistical models to study two major differences in practice. The first is the usefulness of systematic errors, which physicists are now beginning to quote in addition to statistical errors. The second is whether it is better to treat heterogeneity by scaling up errors as do the Particle Data Group or by adding a random effect as does the rest of the community. Besides fitting models, we derive and use an exact test of the error-scaling hypothesis. We also discuss the other methodological differences between the two streams of meta-analysis. Our conclusion is that systematic errors are not currently very useful and that the conventional random effects model, as routinely used in meta-analysis, has a useful role to play in particle physics. The moral we draw for statisticians is that we should be more willing to explore 'grassroots' areas of statistical application, so that good statistical practice can flow both from and back to the statistical mainstream. Copyright © 2012 John Wiley & Sons, Ltd.
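
    A minimal sketch contrasting the two heterogeneity treatments discussed above: a PDG-style weighted average with errors scaled by sqrt(chi2/dof) when chi2/dof exceeds one, and a DerSimonian-Laird random-effects estimate as used in mainstream meta-analysis. The data are illustrative, not from the paper:

```python
import numpy as np

def pdg_average(y, sigma):
    """Weighted mean with the error scaled by sqrt(chi2/dof) if chi2/dof > 1."""
    w = 1.0 / sigma**2
    mean = np.sum(w * y) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    chi2 = np.sum(w * (y - mean) ** 2)
    scale = max(1.0, np.sqrt(chi2 / (len(y) - 1)))
    return mean, err * scale

def dersimonian_laird(y, sigma):
    """Random-effects mean with the DerSimonian-Laird estimate of tau^2."""
    w = 1.0 / sigma**2
    mean_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mean_fe) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)
    w_re = 1.0 / (sigma**2 + tau2)
    return np.sum(w_re * y) / np.sum(w_re), np.sqrt(1.0 / np.sum(w_re))

y = np.array([1.02, 0.95, 1.10, 0.88])       # illustrative measurements
sigma = np.array([0.03, 0.04, 0.05, 0.03])   # their quoted errors
print("PDG-style average:", pdg_average(y, sigma))
print("Random-effects average:", dersimonian_laird(y, sigma))
```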

  13. Assessing colour-dependent occupation statistics inferred from galaxy group catalogues

    NASA Astrophysics Data System (ADS)

    Campbell, Duncan; van den Bosch, Frank C.; Hearin, Andrew; Padmanabhan, Nikhil; Berlind, Andreas; Mo, H. J.; Tinker, Jeremy; Yang, Xiaohu

    2015-09-01

    We investigate the ability of current implementations of galaxy group finders to recover colour-dependent halo occupation statistics. To test the fidelity of group catalogue inferred statistics, we run three different group finders used in the literature over a mock that includes galaxy colours in a realistic manner. Overall, the resulting mock group catalogues are remarkably similar, and most colour-dependent statistics are recovered with reasonable accuracy. However, it is also clear that certain systematic errors arise as a consequence of correlated errors in group membership determination, central/satellite designation, and halo mass assignment. We introduce a new statistic, the halo transition probability (HTP), which captures the combined impact of all these errors. As a rule of thumb, errors tend to equalize the properties of distinct galaxy populations (i.e. red versus blue galaxies or centrals versus satellites), and to result in inferred occupation statistics that are more accurate for red galaxies than for blue galaxies. A statistic that is particularly poorly recovered from the group catalogues is the red fraction of central galaxies as a function of halo mass. Group finders do a good job in recovering galactic conformity, but also have a tendency to introduce weak conformity when none is present. We conclude that proper inference of colour-dependent statistics from group catalogues is best achieved using forward modelling (i.e. running group finders over mock data) or by implementing a correction scheme based on the HTP, as long as the latter is not too strongly model dependent.

  14. The State and Trends of Barcode, RFID, Biometric and Pharmacy Automation Technologies in US Hospitals

    PubMed Central

    Uy, Raymonde Charles Y.; Kury, Fabricio P.; Fontelo, Paul A.

    2015-01-01

    The standard of safe medication practice requires strict observance of the five rights of medication administration: the right patient, drug, time, dose, and route. Despite adherence to these guidelines, medication errors remain a public health concern that has generated health policies and hospital processes that leverage automation and computerization to reduce these errors. Bar code, RFID, biometric and pharmacy automation technologies have been demonstrated in the literature to decrease the incidence of medication errors by minimizing human factors involved in the process. Despite evidence suggesting the effectiveness of these technologies, adoption rates and trends vary across hospital systems. The objective of this study is to examine the state and adoption trends of automatic identification and data capture (AIDC) methods and pharmacy automation technologies in U.S. hospitals. A retrospective descriptive analysis of survey data from the HIMSS Analytics® Database was done, demonstrating optimistic growth in the adoption of these patient safety solutions. PMID:26958264

  15. Creating a Satellite-Based Record of Tropospheric Ozone

    NASA Technical Reports Server (NTRS)

    Oetjen, Hilke; Payne, Vivienne H.; Kulawik, Susan S.; Eldering, Annmarie; Worden, John; Edwards, David P.; Francis, Gene L.; Worden, Helen M.

    2013-01-01

    The TES retrieval algorithm has been applied to IASI radiances. We compare the retrieved ozone profiles with ozone sonde profiles for mid-latitudes for the year 2008. We find a positive bias in the IASI ozone profiles in the UTLS region of up to 22 %. The spatial coverage of the IASI instrument allows sampling of effectively the same air mass with several IASI scenes simultaneously. Comparisons of the root-mean-square of an ensemble of IASI profiles to theoretical errors indicate that the measurement noise and the interference of temperature and water vapour on the retrieval together mostly explain the empirically derived random errors. The total degrees of freedom for signal of the retrieval for ozone are 3.1 +/- 0.2 and the tropospheric degrees of freedom are 1.0 +/- 0.2 for the described cases. IASI ozone profiles agree within the error bars with coincident ozone profiles derived from a TES stare sequence for the ozone sonde station at Bratt's Lake (50.2 deg N, 104.7 deg W).

  16. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Real-Time Identification of Wheel Terrain Interaction Models for Enhanced Autonomous Vehicle Mobility

    DTIC Science & Technology

    2014-04-24

    [Figure residue from the report: bar charts of position estimation error (cm) for a color-statistics approach (Angelova et al.), expressed relative to the average slip error (Color_Statistics_Error / Average_Slip_Error), and of position estimation error for the global pose. The report also notes that pose and odometry data (pending release clearance) were collected at the following sites: Taylor, Gascola, Somerset, and Fort Bliss.]

  18. Linear and Order Statistics Combiners for Pattern Classification

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)

    2001-01-01

    Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that to a first order approximation, the error rate obtained over and above the Bayes error rate, is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging. the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
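
    A minimal simulation sketch of the averaging result described above: combining N unbiased estimates with uncorrelated errors reduces the variance of the combined decision-boundary estimate by roughly a factor of N. The setup is illustrative only, not the chapter's derivation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, sigma = 10000, 1.0

for n_classifiers in (1, 5, 25):
    # Each classifier's boundary estimate = true boundary + zero-mean error.
    errors = rng.normal(0.0, sigma, size=(n_trials, n_classifiers))
    combined = errors.mean(axis=1)
    print(f"N={n_classifiers:2d}: variance of combined boundary error "
          f"= {combined.var():.3f} (expected ~{sigma**2 / n_classifiers:.3f})")
```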

  19. The GEOS Ozone Data Assimilation System: Specification of Error Statistics

    NASA Technical Reports Server (NTRS)

    Stajner, Ivanka; Riishojgaard, Lars Peter; Rood, Richard B.

    2000-01-01

    A global three-dimensional ozone data assimilation system has been developed at the Data Assimilation Office of the NASA/Goddard Space Flight Center. The Total Ozone Mapping Spectrometer (TOMS) total ozone and the Solar Backscatter Ultraviolet (SBUV or SBUV/2) partial ozone profile observations are assimilated. The assimilation, into an off-line ozone transport model, is done using the global Physical-space Statistical Analysis Scheme (PSAS). This system became operational in December 1999. A detailed description of the statistical analysis scheme, and in particular of the forecast and observation error covariance models, is given. A new global anisotropic horizontal forecast error correlation model accounts for a varying distribution of observations with latitude. Correlations are largest in the zonal direction in the tropics, where data are sparse. The forecast error variance model is proportional to the ozone field. The forecast error covariance parameters were determined by maximum likelihood estimation. The error covariance models are validated using χ² statistics. The analyzed ozone fields in winter 1992 are validated against independent observations from ozone sondes and the Halogen Occultation Experiment (HALOE). There is better than 10% agreement between mean HALOE and analysis fields between 70 and 0.2 hPa. The global root-mean-square (RMS) difference between TOMS observed and forecast values is less than 4%. The global RMS difference between SBUV observed and analyzed ozone between 50 and 3 hPa is less than 15%.

  20. Effects of Visual Communication Tool and Separable Status Display on Team Performance and Subjective Workload in Air Battle Management

    DTIC Science & Technology

    2008-06-01

    NASA-TLX (Hart & Staveland, 1987) was used to evaluate perceived task demands. In the modified version, participants were asked to estimate the...subjective workload (i.e., NASA-TLX) was assessed for each trial. Unweighted NASA-TLX ratings were submitted to a 5 (Subscale) × 2 (Communication...Figure 3. Mean unweighted NASA-TLX ratings as a function of communication modality. Error bars represent one

  1. The Effect of Information Level on Human-Agent Interaction for Route Planning

    DTIC Science & Technology

    2015-12-01

    χ2 (4, 60) = 11.41, p = 0.022, and Cramer’s V = 0.308, indicating there was no effect of experiment on posttest trust. Pretest trust was not a...decision time by pretest trust group membership. Bars denote standard error (SE). DT at DP was evaluated to see if it predicted posttest trust...0.007, Cramer’s V = 0.344, indicating there was no effect of experiment on posttest trust. Pretest trust was not a significant prediction of total DT

  2. Elimination of Sensor Artifacts from Infrared Data.

    DTIC Science & Technology

    1984-12-11

    channel to compensate detector responsivity nonuniformity. Before inspecting the bar target measurements, it was expected that the preceding sequence of...sample errors and by applying separate gain and offset constants to each channel for nonuniformity compensation. [OCR residue of a waveform figure omitted; recoverable caption:] Fig. 4 - Postamplifier output waveform for LWIR channel 3, for data frame shown in

  3. Correction of Thermal Gradient Errors in Stem Thermocouple Hygrometers

    PubMed Central

    Michel, Burlyn E.

    1979-01-01

    Stem thermocouple hygrometers were subjected to transient and stable thermal gradients while in contact with reference solutions of NaCl. Both dew point and psychrometric voltages were directly related to zero offset voltages, the latter reflecting the size of the thermal gradient. Although slopes were affected by absolute temperature, they were not affected by water potential. One hygrometer required a correction of 1.75 bars water potential per microvolt of zero offset, a value that was constant from 20 to 30 C. PMID:16660685

  4. Implementation of a pharmacy automation system (robotics) to ensure medication safety at Norwalk hospital.

    PubMed

    Bepko, Robert J; Moore, John R; Coleman, John R

    2009-01-01

    This article reports an intervention to improve the quality and safety of hospital patient care by introducing the use of pharmacy robotics into the medication distribution process. Medication safety is vitally important. The integration of pharmacy robotics with computerized practitioner order entry and bedside medication bar coding produces a significant reduction in medication errors. The creation of a safe medication process, from initial ordering to bedside administration, provides enormous benefits to patients, to health care providers, and to the organization as well.

  5. Watts Bar Nuclear Plant Title V Applicability

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  6. Volume Phase Masks in Photo-Thermo-Refractive Glass

    DTIC Science & Technology

    2014-10-06

    development when forming the nanocrystals. Fig. 1.1 shows the refractive index change curves for some common glass melts when exposed to a beam at 325 nm...integral curve to the curve for the ideal phase mask. If there is a deviation in the experimental curve from the ideal curve , whether the overlap...redevelopments of the sample. Note that the third point on the spherical curve and the third and fourth points on the coma y curve have larger error bars than

  7. Coupling constant for N*(1535)Nρ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie Jujun; Graduate University of Chinese Academy of Sciences, Beijing 100049; Wilkin, Colin

    2008-05-15

    The value of the N*(1535)Nρ coupling constant g_{N*Nρ} derived from the N*(1535) → Nρ → Nππ decay is compared with that deduced from the radiative decay N*(1535) → Nγ using the vector-meson-dominance model. On the basis of an effective Lagrangian approach, we show that the values of g_{N*Nρ} extracted from the available experimental data on the two decays are consistent, though the error bars are rather large.

  8. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
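
    A minimal sketch of the single-batch idea described above, assuming a scalar setting in which the innovation variance is the sum of a forecast error variance (to be estimated) and a known observation error variance; the maximum-likelihood estimate then follows from the sample variance of one batch of innovations. This simplification is for illustration only and is not the paper's full scheme:

```python
import numpy as np

def estimate_forecast_error_variance(innovations, obs_error_var):
    """ML estimate of the forecast error variance from a single batch of
    innovations d = y_obs - H x_forecast, assuming the components are i.i.d.
    with d ~ N(0, sigma_f^2 + sigma_o^2) and sigma_o^2 known."""
    sample_var = np.mean(np.asarray(innovations) ** 2)
    return max(0.0, sample_var - obs_error_var)

# Illustrative batch: 500 simultaneous observations, sigma_f = 2.0, sigma_o = 1.0.
rng = np.random.default_rng(4)
d = rng.normal(0.0, np.sqrt(2.0**2 + 1.0**2), 500)
print("estimated forecast error variance:",
      estimate_forecast_error_variance(d, obs_error_var=1.0))
```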

  9. Systematic review of statistical approaches to quantify, or correct for, measurement error in a continuous exposure in nutritional epidemiology.

    PubMed

    Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta

    2017-09-19

    Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.
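
    A minimal sketch of regression calibration, the most common correction approach noted above: in a calibration substudy, regress a reference measurement on the error-prone measurement, then replace the error-prone exposure by its predicted value when fitting the outcome model. This simple linear version assumes a classical measurement-error model and synthetic data; it is an illustration, not a full implementation:

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_cal = 2000, 300
true_intake = rng.normal(50, 10, n)                  # unobserved "true" exposure
ffq = true_intake + rng.normal(0, 8, n)              # error-prone instrument (all subjects)
reference = true_intake[:n_cal] + rng.normal(0, 2, n_cal)  # reference measure (substudy)
outcome = 0.05 * true_intake + rng.normal(0, 2, n)   # continuous outcome, true slope 0.05

# Step 1: calibration model E[reference | ffq] fitted in the substudy.
lam, intercept = np.polyfit(ffq[:n_cal], reference, 1)

# Step 2: replace ffq by its calibrated prediction and refit the outcome model.
x_calibrated = intercept + lam * ffq
naive_slope = np.polyfit(ffq, outcome, 1)[0]
corrected_slope = np.polyfit(x_calibrated, outcome, 1)[0]
print(f"naive slope: {naive_slope:.3f}, regression-calibrated slope: {corrected_slope:.3f}")
```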

  10. How allele frequency and study design affect association test statistics with misrepresentation errors.

    PubMed

    Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael

    2014-04-01

    We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central χ² distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and a great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct p-values.

  11. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

    PubMed

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
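
    A minimal sketch of the same calculation outside Excel, for comparison: the standard error of the mean and a t-based 95% confidence interval. The sample values are illustrative:

```python
import math
from statistics import mean, stdev
from scipy.stats import t

data = [4.1, 3.8, 4.5, 4.0, 3.9, 4.3]        # illustrative sample
n = len(data)
se = stdev(data) / math.sqrt(n)               # standard error of the mean
t_crit = t.ppf(0.975, df=n - 1)               # two-sided 95% t quantile
ci = (mean(data) - t_crit * se, mean(data) + t_crit * se)
print(f"mean = {mean(data):.2f}, SE = {se:.3f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```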

  12. Statistical Analysis Experiment for Freshman Chemistry Lab.

    ERIC Educational Resources Information Center

    Salzsieder, John C.

    1995-01-01

    Describes a laboratory experiment dissolving zinc from galvanized nails in which data can be gathered very quickly for statistical analysis. The data have sufficient significant figures and the experiment yields a nice distribution of random errors. Freshman students can gain an appreciation of the relationships between random error, number of…

  13. Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.

    PubMed

    Monica, Stefania; Ferrari, Gianluigi

    2018-05-17

    Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms with a reduction of the localization error up to 66%.
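
    A minimal sketch of the kind of linear least-squares range-error model described above: fit measured UWB distances against known reference distances, then invert the fit to correct new measurements before feeding them to a localization algorithm. The data and coefficients are illustrative only, not the paper's measurement campaign:

```python
import numpy as np

# Calibration campaign: true distances (m) and the corresponding UWB estimates (m).
true_d = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
meas_d = 1.03 * true_d + 0.12 + np.random.default_rng(6).normal(0, 0.03, true_d.size)

# Linear error model: measured = a * true + b  (least-squares fit).
a, b = np.polyfit(true_d, meas_d, deg=1)

def correct(measured):
    """Invert the fitted linear error model to de-bias a raw range estimate."""
    return (measured - b) / a

print(f"a = {a:.3f}, b = {b:.3f} m, corrected 5.30 m reading -> {correct(5.30):.2f} m")
```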

  14. Stellar mass distribution of S4G disk galaxies and signatures of bar-induced secular evolution

    NASA Astrophysics Data System (ADS)

    Díaz-García, S.; Salo, H.; Laurikainen, E.

    2016-12-01

    Context. Models of galaxy formation in a cosmological framework need to be tested against observational constraints, such as the average stellar density profiles (and their dispersion) as a function of fundamental galaxy properties (e.g. the total stellar mass). Simulation models predict that the torques produced by stellar bars efficiently redistribute the stellar and gaseous material inside the disk, pushing it outwards or inwards depending on whether it is beyond or inside the bar corotation resonance radius. Bars themselves are expected to evolve, getting longer and narrower as they trap particles from the disk and slow down their rotation speed. Aims: We use 3.6 μm photometry from the Spitzer Survey of Stellar Structure in Galaxies (S4G) to trace the stellar distribution in nearby disk galaxies (z ≈ 0) with total stellar masses 10^8.5 ≲ M∗/M⊙ ≲ 10^11 and mid-IR Hubble types -3 ≤ T ≤ 10. We characterize the stellar density profiles (Σ∗), the stellar contribution to the rotation curves (V3.6 μm), and the m = 2 Fourier amplitudes (A2) as a function of M∗ and T. We also describe the typical shapes and strengths of stellar bars in the S4G sample and link their properties to the total stellar mass and morphology of their host galaxy. Methods: For 1154 S4G galaxies with disk inclinations lower than 65°, we perform a Fourier decomposition and rescale their images to a common frame determined by the size in physical units, by their disk scalelength, and for 748 barred galaxies by both the length and orientation of their bars. We stack the resized density profiles and images to obtain statistically representative average stellar disks and bars in bins of M∗ and T. Based on the radial force profiles of individual galaxies we calculate the mean stellar contribution to the circular velocity. We also calculate average A2 profiles, where the radius is normalized to R25.5. Furthermore, we infer the gravitational potentials from the synthetic bars to obtain the tangential-to-radial force ratio (QT) and A2 profiles in the different bins. We also apply ellipse fitting to quantitatively characterize the shape of the bar stacks. Results: For M∗ ≥ 10^9 M⊙, we find a significant difference in the stellar density profiles of barred and non-barred systems: (I) disks in barred galaxies show larger scalelengths (hR) and fainter extrapolated central surface brightnesses (Σ°); (II) the mean surface brightness profiles (Σ∗) of barred and non-barred galaxies intersect each other slightly beyond the mean bar length, most likely at the bar corotation; and (III) the central mass concentration of barred galaxies is higher (by almost a factor 2 when T ≤ 5) than in their non-barred counterparts. The averaged Σ∗ profiles follow an exponential slope down to at least 10 M⊙ pc^-2, which is the typical depth beyond which the sample coverage in the radial direction starts to drop. Central mass concentrations in massive systems (≥10^10 M⊙) are substantially larger than in fainter galaxies, and their prominence scales with T. This segregation also manifests in the inner slope of the mean stellar component of the circular velocity: lenticular (S0) galaxies present the most sharply rising V3.6 μm. Based on the analysis of bar stacks, we show that early- and intermediate-type spirals (0 ≤ T < 5) have intrinsically narrower bars than later types and S0s, whose bars are oval-shaped. We show a clear agreement between galaxy family and quantitative estimates of bar strength.
In early- and intermediate-type spirals, A2 is larger within and beyond the typical bar region among barred galaxies than in the non-barred subsample. Strongly barred systems also tend to have larger A2 amplitudes at all radii than their weakly barred counterparts. Conclusions: Using near-IR wavelengths (S4G 3.6 μm), we provide observational constraints that galaxy formation models can be checked against. In particular, we calculate the mean stellar density profiles, and the disk(+bulge) component of the rotation curve (and their dispersion) in bins of M∗ and T. We find evidence for bar-induced secular evolution of disk galaxies in terms of disk spreading and enhanced central mass concentration. We also obtain average bars (2D), and we show that bars hosted by early-type galaxies are more centrally concentrated and have larger density amplitudes than their late-type counterparts. The FITS files of the synthetic images and the tabulated radial profiles of the mean (and dispersion of) stellar mass density, 3.6 μm surface brightness, Fourier amplitudes, gravitational force, and the stellar contribution to the circular velocity are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/596/A84

  15. Lagrangian particle statistics of numerically simulated shear waves

    NASA Astrophysics Data System (ADS)

    Kirby, J.; Briganti, R.; Brocchini, M.; Chen, Q. J.

    2006-12-01

    The properties of numerical solutions of various circulation models (Boussinesq-type and wave-averaged NLSWE) have been investigated on the basis of the induced horizontal flow mixing, for the case of shear waves. The mixing properties of the flow have been investigated using particle statistics, following the approach of LaCasce (2001) and Piattella et al. (2006). Both an idealized barred beach bathymetry and a test case taken from SANDYDUCK '97 have been considered. Random seeding patterns of passive tracer particles are used. The flow exhibits features similar to those discussed in literature. Differences are also evident due both to the physics (intense longshore shear shoreward of the bar) and the procedure used to obtain the statistics (lateral conditions limit the time/space window for the longshore flow). Within the Boussinesq framework, different formulations of Boussinesq type equations have been used and the results compared (Wei et al. 1995, Chen et al. (2003), Chen et al. (2006)). Analysis based on the Eulerian velocity fields suggests a close similarity between Wei et al. (1995) and Chen et. al (2006), while examination of particle displacements and implied mixing suggests a closer behaviour between Chen et al. (2003) and Chen et al. (2006). Two distinct stages of mixing are evident in all simulations: i) the first stage ends at t

  16. Spatial relationships between alcohol-related road crashes and retail alcohol availability.

    PubMed

    Morrison, Christopher; Ponicki, William R; Gruenewald, Paul J; Wiebe, Douglas J; Smith, Karen

    2016-05-01

    This study examines spatial relationships between alcohol outlet density and the incidence of alcohol-related crashes. The few prior studies conducted in this area used relatively large spatial units; here we use highly resolved units from Melbourne, Australia (Statistical Area level 1 [SA1] units: mean land area = 0.5 km²; SD = 2.2 km²), in order to assess different micro-scale spatial relationships for on- and off-premise outlets. Bayesian conditional autoregressive Poisson models were used to assess cross-sectional relationships of three-year counts of alcohol-related crashes (2010-2012) attended by Ambulance Victoria paramedics to densities of bars, restaurants, and off-premise outlets controlling for other land use, demographic and roadway characteristics. Alcohol-related crashes were not related to bar density within local SA1 units, but were positively related to bar density in adjacent SA1 units. Alcohol-related crashes were negatively related to off-premise outlet density in local SA1 units. Examined in one metropolitan area using small spatial units, bar density is related to greater crash risk in surrounding areas. Observed negative relationships for off-premise outlets may be because the origins and destinations of alcohol-affected journeys are in distal locations relative to outlets. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Treatment of oropharyngeal dysphagia secondary to idiopathic cricopharyngeal bar: Surgical cricopharyngeal muscle myotomy versus dilation.

    PubMed

    Marston, Alexander P; Maldonado, Francisco J; Ravi, Karthik; Kasperbauer, Jan L; Ekbom, Dale C

    To compare swallowing outcomes following cricopharyngeal (CP) dilation versus surgical myotomy in patients with dysphagia secondary to idiopathic CP bar. All patients had an idiopathic CP bar without a history of Zenker's diverticulum, head and neck cancer, or systemic neurologic disease treated between 2000 and 2013. The Functional Outcome Swallowing Scale (FOSS) was utilized to assess dysphagia symptoms. Twenty-three patients underwent 46 dilations and 20 patients had a myotomy. Nineteen of 23 (83%) patients in the dilation group and all patients in the myotomy group reported improved swallow function. The median difference in pre- versus post-intervention FOSS scores was not statistically significant (p=0.07) between the dilation and myotomy groups with mean reductions of 1.3 and 1.8, respectively. Seventeen of 23 (74%) dilation patients had persistent or recurrent dysphagia with 13 (57%) requiring repeat dilation and 4 (17%) undergoing CP myotomy. The median time to first reintervention in the dilation group was 13.6 months. Nineteen of 20 (95%) surgical myotomy patients did not experience recurrent dysphagia. Both endoscopic CP dilation and myotomy led to similar initial improvement in swallow function for patients with primary idiopathic CP bar; however, dilation is more likely to provide temporary benefit. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Technology utilization to prevent medication errors.

    PubMed

    Forni, Allison; Chu, Hanh T; Fanikos, John

    2010-01-01

    Medication errors have been increasingly recognized as a major cause of iatrogenic illness and system-wide improvements have been the focus of prevention efforts. Critically ill patients are particularly vulnerable to injury resulting from medication errors because of the severity of illness, the need for high-risk medications with a narrow therapeutic index and frequent use of intravenous infusions. Health information technology has been identified as a method to reduce medication errors as well as improve the efficiency and quality of care; however, few studies regarding the impact of health information technology have focused on patients in the intensive care unit. Computerized physician order entry and clinical decision support systems can play a crucial role in decreasing errors in the ordering stage of the medication use process through improving the completeness and legibility of orders, alerting physicians to medication allergies and drug interactions and providing a means for standardization of practice. Electronic surveillance, reminders and alerts identify patients susceptible to an adverse event, communicate critical changes in a patient's condition, and facilitate timely and appropriate treatment. Bar code technology, intravenous infusion safety systems, and electronic medication administration records can target prevention of errors in medication dispensing and administration where other technologies would not be able to intercept a preventable adverse event. Systems integration and compliance are vital components in the implementation of health information technology and achievement of a safe medication use process.

  19. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    PubMed

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity, and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models. Inaccuracy of influenza forecasts based on dynamical models is partly due to nonlinear error growth. Here the authors address the error structure of a compartmental influenza model, and develop a new improved forecast approach combining dynamical error correction and statistical filtering techniques.

  20. A simulation of GPS and differential GPS sensors

    NASA Technical Reports Server (NTRS)

    Rankin, James M.

    1993-01-01

    The Global Positioning System (GPS) is a revolutionary advance in navigation. Users can determine latitude, longitude, and altitude by receiving range information from at least four satellites. The statistical accuracy of the user's position is directly proportional to the statistical accuracy of the range measurement. Range errors are caused by clock errors, ephemeris errors, atmospheric delays, multipath errors, and receiver noise. Selective Availability, which the military uses to intentionally degrade accuracy for non-authorized users, is a major error source. The proportionality constant relating position errors to range errors is the Dilution of Precision (DOP) which is a function of the satellite geometry. Receivers separated by relatively short distances have the same satellite and atmospheric errors. Differential GPS (DGPS) removes these errors by transmitting pseudorange corrections from a fixed receiver to a mobile receiver. The corrected pseudorange at the moving receiver is now corrupted only by errors from the receiver clock, multipath, and measurement noise. This paper describes a software package that models position errors for various GPS and DGPS systems. The error model is used in the Real-Time Simulator and Cockpit Technology workstation simulations at NASA-LaRC. The GPS/DGPS sensor can simulate enroute navigation, instrument approaches, or on-airport navigation.
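
    The proportionality between range error and position error through the DOP, and the cancellation of common-mode errors in DGPS, can be illustrated with a toy error budget. The component values below are illustrative assumptions, not the figures used in the NASA-LaRC simulator.

        import numpy as np

        # Illustrative 1-sigma range error components in metres (assumed values).
        gps_budget  = {"clock": 2.0, "ephemeris": 2.5, "atmosphere": 4.0,
                       "selective_availability": 30.0, "multipath": 1.5, "receiver": 0.5}
        # DGPS removes the errors common to the fixed and mobile receivers.
        dgps_budget = {"multipath": 1.5, "receiver": 0.5}

        def horizontal_sigma(budget, hdop=1.5):
            # Total range error: root-sum-square of independent components;
            # position error scales with the dilution of precision.
            sigma_range = np.sqrt(sum(v ** 2 for v in budget.values()))
            return hdop * sigma_range

        print("GPS  horizontal 1-sigma: %.1f m" % horizontal_sigma(gps_budget))
        print("DGPS horizontal 1-sigma: %.1f m" % horizontal_sigma(dgps_budget))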

  1. Accuracy of non-resonant laser-induced thermal acoustics (LITA) in a convergent-divergent nozzle flow

    NASA Astrophysics Data System (ADS)

    Richter, J.; Mayer, J.; Weigand, B.

    2018-02-01

    Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied regarding misalignment of the interrogation beam and frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for the Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
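
    The frequency-analysis step described above amounts to locating the beat frequency of a damped oscillatory signal with a DFT, whose resolution is set by the number of recorded oscillations. A rough Python sketch with a synthetic signal (sample rate, beat frequency and damping are assumed values, not the experimental settings):

        import numpy as np

        fs, f_beat, n_osc = 500e6, 21.3e6, 30          # sample rate, beat frequency, oscillations (assumed)
        t = np.arange(int(n_osc * fs / f_beat)) / fs
        signal = np.exp(-t * f_beat / n_osc) * np.cos(2 * np.pi * f_beat * t)   # damped beat signal

        window = np.hanning(len(signal))               # reduce spectral leakage
        spec = np.abs(np.fft.rfft(signal * window))
        freqs = np.fft.rfftfreq(len(signal), 1 / fs)
        f_est = freqs[np.argmax(spec[1:]) + 1]         # ignore the DC bin

        print("estimated beat frequency: %.2f MHz" % (f_est / 1e6))
        print("relative error: %.2f %%" % (100 * abs(f_est - f_beat) / f_beat))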

  2. Improved simulation of aerosol, cloud, and density measurements by shuttle lidar

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Morley, B. M.; Livingston, J. M.; Grams, G. W.; Patterson, E. W.

    1981-01-01

    Data retrievals are simulated for a Nd:YAG lidar suitable for early flight on the space shuttle. Maximum assumed vertical and horizontal resolutions are 0.1 and 100 km, respectively, in the boundary layer, increasing to 2 and 2000 km in the mesosphere. Aerosol and cloud retrievals are simulated using 1.06 and 0.53 micron wavelengths independently. Error sources include signal measurement, conventional density information, atmospheric transmission, and lidar calibration. By day, tenuous clouds and Saharan and boundary layer aerosols are retrieved at both wavelengths. By night, these constituents are retrieved, plus upper tropospheric, stratospheric, and mesospheric aerosols and noctilucent clouds. Density, temperature, and improved aerosol and cloud retrievals are simulated by combining signals at 0.35, 1.06, and 0.53 microns. Particulate contamination limits the technique to the cloud-free upper troposphere and above. Error bars automatically show the effect of this contamination, as well as errors in absolute density normalization, reference temperature or pressure, and the sources listed above. For nonvolcanic conditions, relative density profiles have rms errors of 0.54 to 2% in the upper troposphere and stratosphere. Temperature profiles have rms errors of 1.2 to 2.5 K and can define the tropopause to 0.5 km and higher wave structures to 1 or 2 km.

  3. Looking for trouble? Diagnostics expanding disease and producing patients.

    PubMed

    Hofmann, Bjørn

    2018-05-23

    Novel tests give great opportunities for earlier and more precise diagnostics. At the same time, new tests expand disease, produce patients, and cause unnecessary harm in overdiagnosis and overtreatment. How can we evaluate diagnostics to obtain the benefits and avoid harm? One way is to pay close attention to the diagnostic process and its core concepts. Doing so reveals 3 errors that expand disease and increase overdiagnosis. The first error is to decouple diagnostics from harm, eg, by diagnosing insignificant conditions. The second error is to bypass proper validation of the relationship between test indicator and disease, eg, by introducing biomarkers for Alzheimer's disease before the tests are properly validated. The third error is to couple the name of disease to insignificant or indecisive indicators, eg, by lending the cancer name to preconditions, such as ductal carcinoma in situ. We need to avoid these errors to promote beneficial testing, bar harmful diagnostics, and evade unwarranted expansion of disease. Accordingly, we must stop identifying and testing for conditions that are only remotely associated with harm. We need more stringent verification of tests, and we must avoid naming indicators and indicative conditions after diseases. If not, we will end like ancient tragic heroes, succumbing because of our very best abilities. © 2018 John Wiley & Sons, Ltd.

  4. Uncovering Dangerous Cheats: How Do Avian Hosts Recognize Adult Brood Parasites?

    PubMed Central

    Trnka, Alfréd; Prokop, Pavol; Grim, Tomáš

    2012-01-01

    Background Co-evolutionary struggles between dangerous enemies (e.g., brood parasites) and their victims (hosts) lead to the emergence of sophisticated adaptations and counter-adaptations. Salient host tricks to reduce parasitism costs include, as front line defence, adult enemy discrimination. In contrast to the well studied egg stage, investigations addressing the specific cues for adult enemy recognition are rare. Previous studies have suggested barred underparts and yellow eyes may provide cues for the recognition of cuckoos Cuculus canorus by their hosts; however, no study to date has examined the role of the two cues simultaneously under a consistent experimental paradigm. Methodology/Principal Findings We modify and extend previous work using a novel experimental approach – custom-made dummies with various combinations of hypothesized recognition cues. The salient recognition cue turned out to be the yellow eye. Barred underparts, the only trait examined previously, had a statistically significant but small effect on host aggression highlighting the importance of effect size vs. statistical significance. Conclusion Relative importance of eye vs. underpart phenotypes may reflect ecological context of host-parasite interaction: yellow eyes are conspicuous from the typical direction of host arrival (from above), whereas barred underparts are poorly visible (being visually blocked by the upper part of the cuckoo's body). This visual constraint may reduce usefulness of barred underparts as a reliable recognition cue under a typical situation near host nests. We propose a novel hypothesis that recognition cues for enemy detection can vary in a context-dependent manner (e.g., depending on whether the enemy is approached from below or from above). Further we suggest a particular cue can trigger fear reactions (escape) in some hosts/populations whereas the same cue can trigger aggression (attack) in other hosts/populations depending on presence/absence of dangerous enemies that are phenotypically similar to brood parasites and costs and benefits associated with particular host responses. PMID:22624031

  5. A novel robot for imposing perturbations during overground walking: mechanism, control and normative stepping responses.

    PubMed

    Olenšek, Andrej; Zadravec, Matjaž; Matjačić, Zlatko

    2016-06-11

    The most common approach to studying dynamic balance during walking is by applying perturbations. Previous studies that investigated dynamic balance responses predominantly focused on applying perturbations in the frontal plane while walking on a treadmill. The goal of our work was to develop a balance assessment robot (BAR) that can be used during overground walking and to assess normative balance responses to perturbations in the transversal plane in a group of neurologically healthy individuals. BAR provides three passive degrees of freedom (DoF) and three actuated DoF at the pelvis that are admittance-controlled in such a way that the natural movement of the pelvis is not significantly affected. In this study BAR was used to assess normative balance responses in neurologically healthy individuals by applying linear perturbations in the frontal and sagittal planes and angular perturbations in the transversal plane of the pelvis. One-way repeated-measures ANOVA was used to statistically evaluate the effect of selected perturbations on stepping responses. Standard deviations of assessed responses were similar in unperturbed and perturbed walking. Perturbations in the frontal direction evoked substantial pelvis displacement and had a statistically significant effect on step length, step width and step time. Likewise, perturbations in the sagittal plane also had a statistically significant effect on step length, step width and step time, but with less explicit impact on pelvis movement in the frontal plane. On the other hand, apart from producing substantial pelvis rotation, angular perturbations did not have a substantial effect on pelvis movement in the frontal and sagittal planes, while a statistically significant effect was noted only in step length and step width after perturbation in the clockwise direction. Results indicate that the proposed device can repeatedly reproduce similar experimental conditions. Results also suggest that the "stepping strategy" is the dominant strategy for coping with perturbations in the frontal plane, perturbations in the sagittal plane are to a greater extent handled by the "ankle strategy", while angular perturbations in the transversal plane do not pose a substantial challenge for balance. Results also show that a specific perturbation in general elicits responses that extend to other planes of movement not directly associated with the plane of perturbation, as well as to spatiotemporal parameters of gait.

  6. The deficit of joint position sense in the chronic unstable ankle as measured by inversion angle replication error.

    PubMed

    Nakasa, Tomoyuki; Fukuhara, Kohei; Adachi, Nobuo; Ochi, Mitsuo

    2008-05-01

    Functional instability is defined as a repeated ankle inversion sprain and a giving way sensation. Previous studies have described the damage of sensori-motor control in ankle sprain as being a possible cause of functional instability. The aim of this study was to evaluate the inversion angle replication errors in patients with functional instability after ankle sprain. The difference between the index angle and replication angle was measured in 12 subjects with functional instability, with the aim of evaluating the replication error. As a control group, the replication errors of 17 healthy volunteers were investigated. The side-to-side differences of the replication errors were compared between both the groups, and the relationship between the side-to-side differences of the replication errors and the mechanical instability were statistically analyzed in the unstable group. The side-to-side difference of the replication errors was 1.0 +/- 0.7 degrees in the unstable group and 0.2 +/- 0.7 degrees in the control group. There was a statistically significant difference between both the groups. The side-to-side differences of the replication errors in the unstable group did not statistically correlate to the anterior talar translation and talar tilt. The patients with functional instability had the deficit of joint position sense in comparison with healthy volunteers. The replication error did not correlate to the mechanical instability. The patients with functional instability should be treated appropriately in spite of having less mechanical instability.

  7. Chandra Source Catalog: User Interface

    NASA Astrophysics Data System (ADS)

    Bonaventura, Nina; Evans, Ian N.; Rots, Arnold H.; Tibbetts, Michael S.; van Stone, David W.; Zografou, Panagoula; Primini, Francis A.; Glotfelty, Kenny J.; Anderson, Craig S.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Helen; Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Winkelman, Sherry L.

    2009-09-01

    The Chandra Source Catalog (CSC) is intended to be the definitive catalog of all X-ray sources detected by Chandra. For each source, the CSC provides positions and multi-band fluxes, as well as derived spatial, spectral, and temporal source properties. Full-field and source region data products are also available, including images, photon event lists, light curves, and spectra. The Chandra X-ray Center CSC website (http://cxc.harvard.edu/csc/) is the place to visit for high-level descriptions of each source property and data product included in the catalog, along with other useful information, such as step-by-step catalog tutorials, answers to FAQs, and a thorough summary of the catalog statistical characterization. Eight categories of detailed catalog documents may be accessed from the navigation bar on most of the 50+ CSC pages; these categories are: About the Catalog, Creating the Catalog, Using the Catalog, Catalog Columns, Column Descriptions, Documents, Conferences, and Useful Links. There are also prominent links to CSCview, the CSC data access GUI, and related help documentation, as well as a tutorial for using the new CSC/Google Earth interface. Catalog source properties are presented in seven scientific categories, within two table views: the Master Source and Source Observations tables. Each X-ray source has one ``master source'' entry and one or more ``source observation'' entries, the details of which are documented on the CSC ``Catalog Columns'' pages. The master source properties represent the best estimates of the properties of a source; these are extensively described on the following pages of the website: Position and Position Errors, Source Flags, Source Extent and Errors, Source Fluxes, Source Significance, Spectral Properties, and Source Variability. The eight tutorials (``threads'') available on the website serve as a collective guide for accessing, understanding, and manipulating the source properties and data products provided by the catalog.

  8. Supermassive Black Holes and Their Host Spheroids. I. Disassembling Galaxies

    NASA Astrophysics Data System (ADS)

    Savorgnan, G. A. D.; Graham, A. W.

    2016-01-01

    Several recent studies have performed galaxy decompositions to investigate correlations between the black hole mass and various properties of the host spheroid, but they have not converged on the same conclusions. This is because their models for the same galaxy were often significantly different and not consistent with each other in terms of fitted components. Using 3.6 μm Spitzer imagery, which is a superb tracer of the stellar mass (superior to the K band), we have performed state-of-the-art multicomponent decompositions for 66 galaxies with directly measured black hole masses. Our sample is the largest to date and, unlike previous studies, contains a large number (17) of spiral galaxies with low black hole masses. We paid careful attention to the image mosaicking, sky subtraction, and masking of contaminating sources. After a scrupulous inspection of the galaxy photometry (through isophotal analysis and unsharp masking) and—for the first time—2D kinematics, we were able to account for spheroids; large-scale, intermediate-scale, and nuclear disks; bars; rings; spiral arms; halos; extended or unresolved nuclear sources; and partially depleted cores. For each individual galaxy, we compared our best-fit model with previous studies, explained the discrepancies, and identified the optimal decomposition. Moreover, we have independently performed one-dimensional (1D) and two-dimensional (2D) decompositions and concluded that, at least when modeling large, nearby galaxies, 1D techniques have more advantages than 2D techniques. Finally, we developed a prescription to estimate the uncertainties on the 1D best-fit parameters for the 66 spheroids that takes into account systematic errors, unlike popular 2D codes that only consider statistical errors.

  9. SUPERMASSIVE BLACK HOLES AND THEIR HOST SPHEROIDS. I. DISASSEMBLING GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savorgnan, G. A. D.; Graham, A. W., E-mail: gsavorgn@astro.swin.edu.au

    Several recent studies have performed galaxy decompositions to investigate correlations between the black hole mass and various properties of the host spheroid, but they have not converged on the same conclusions. This is because their models for the same galaxy were often significantly different and not consistent with each other in terms of fitted components. Using 3.6 μm Spitzer imagery, which is a superb tracer of the stellar mass (superior to the K band), we have performed state-of-the-art multicomponent decompositions for 66 galaxies with directly measured black hole masses. Our sample is the largest to date and, unlike previous studies, contains a large number (17) of spiral galaxies with low black hole masses. We paid careful attention to the image mosaicking, sky subtraction, and masking of contaminating sources. After a scrupulous inspection of the galaxy photometry (through isophotal analysis and unsharp masking) and—for the first time—2D kinematics, we were able to account for spheroids; large-scale, intermediate-scale, and nuclear disks; bars; rings; spiral arms; halos; extended or unresolved nuclear sources; and partially depleted cores. For each individual galaxy, we compared our best-fit model with previous studies, explained the discrepancies, and identified the optimal decomposition. Moreover, we have independently performed one-dimensional (1D) and two-dimensional (2D) decompositions and concluded that, at least when modeling large, nearby galaxies, 1D techniques have more advantages than 2D techniques. Finally, we developed a prescription to estimate the uncertainties on the 1D best-fit parameters for the 66 spheroids that takes into account systematic errors, unlike popular 2D codes that only consider statistical errors.

  10. Measurement of the bottom hadron lifetime at the Z 0 resonance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujino, Donald Hideo

    1992-06-01

    We have measured the bottom hadron lifetime from bb̄ events produced at the Z 0 resonance. Using the precision vertex detectors of the Mark II detector at the Stanford Linear Collider, we developed an impact parameter tag to identify bottom hadrons. The vertex tracking system resolved impact parameters to 30 μm for high momentum tracks, and 70 μm for tracks with a momentum of 1 GeV. We selected B hadrons with an efficiency of 40% and a sample purity of 80%, by requiring there be at least two tracks in a single jet that significantly miss the Z 0 decay vertex. From a total of 208 hadronic Z 0 events collected by the Mark II detector in 1990, we tagged 53 jets, of which 22 came from 11 double-tagged events. The jets opposite the tagged ones, referred to as the "untagged" sample, are rich in B hadrons and unbiased in B decay times. The variable Σδ is the sum of impact parameters from tracks in the jet, and contains vital information on the B decay time. We measured the B lifetime from a one-parameter likelihood fit to the untagged Σδ distribution, obtaining τ_b = 1.53 +0.55/−0.45 ± 0.16 ps, which agrees with the current world average. The first error is statistical and the second is systematic. The systematic error was dominated by uncertainties in the track resolution function. As a check, we also obtained consistent results using the Σδ distribution from the tagged jets and from the entire hadronic sample without any bottom enrichment.
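
    The asymmetric errors quoted above are characteristic of a one-parameter likelihood fit, where the 1σ interval is read off where the negative log-likelihood rises by 0.5 above its minimum. A toy sketch with exponentially distributed decay times (illustrative only, not the Mark II Σδ fit):

        import numpy as np

        rng = np.random.default_rng(0)
        times = rng.exponential(1.53, size=200)        # toy proper decay times in ps

        def nll(tau):
            # Negative log-likelihood of an exponential decay model
            return np.sum(times / tau + np.log(tau))

        taus = np.linspace(1.0, 2.2, 1201)
        curve = np.array([nll(t) for t in taus])
        i_min = np.argmin(curve)
        tau_hat = taus[i_min]

        # Asymmetric 1-sigma interval: where -lnL exceeds its minimum by 0.5
        inside = taus[curve <= curve[i_min] + 0.5]
        print("tau_b = %.2f +%.2f -%.2f ps"
              % (tau_hat, inside.max() - tau_hat, tau_hat - inside.min()))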

  11. Measurement of the bottom hadron lifetime at the Z 0 resonance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujino, D.H.

    1992-06-01

    We have measured the bottom hadron lifetime from bb̄ events produced at the Z0 resonance. Using the precision vertex detectors of the Mark II detector at the Stanford Linear Collider, we developed an impact parameter tag to identify bottom hadrons. The vertex tracking system resolved impact parameters to 30 μm for high momentum tracks, and 70 μm for tracks with a momentum of 1 GeV. We selected B hadrons with an efficiency of 40% and a sample purity of 80%, by requiring there be at least two tracks in a single jet that significantly miss the Z0 decay vertex. From a total of 208 hadronic Z0 events collected by the Mark II detector in 1990, we tagged 53 jets, of which 22 came from 11 double-tagged events. The jets opposite the tagged ones, referred to as the "untagged" sample, are rich in B hadrons and unbiased in B decay times. The variable Σδ is the sum of impact parameters from tracks in the jet, and contains vital information on the B decay time. We measured the B lifetime from a one-parameter likelihood fit to the untagged Σδ distribution, obtaining τ_b = 1.53 +0.55/−0.45 ± 0.16 ps, which agrees with the current world average. The first error is statistical and the second is systematic. The systematic error was dominated by uncertainties in the track resolution function. As a check, we also obtained consistent results using the Σδ distribution from the tagged jets and from the entire hadronic sample without any bottom enrichment.

  12. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    NASA Astrophysics Data System (ADS)

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, the classic Monte Carlo often remains the preferred method of choice because its convergence rate O(n^{-1/2}), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via the polynomial chaos expansion. The first algorithm, which is for the situations where an exact SDR subspace exists, is proved to converge at rate O(n^{-1}), hence much faster than MC. The second algorithm, which doesn't require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain could still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several additional practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
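
    The control-variate idea behind the second algorithm can be sketched generically: a cheap reduced model correlated with the full model is subtracted (after optimal scaling) to lower the variance of the Monte Carlo estimate, and the reported error bar is simply the standard error of the corrected samples. The toy models below are stand-ins, not the IRUQ implementation:

        import numpy as np

        rng = np.random.default_rng(42)

        def full_model(x):       # expensive high-dimensional QoI (toy stand-in)
            return np.sin(x[:, 0]) + 0.05 * np.sum(x[:, 1:], axis=1)

        def reduced_model(x):    # cheap surrogate living on a low-dimensional subspace (toy)
            return np.sin(x[:, 0])

        n, d = 2000, 20
        x = rng.normal(size=(n, d))
        f, g = full_model(x), reduced_model(x)

        mu_g = 0.0                               # surrogate mean, assumed known from many cheap runs
        cfg = np.cov(f, g)
        beta = cfg[0, 1] / cfg[1, 1]             # optimal control-variate coefficient
        cv = f - beta * (g - mu_g)

        for name, est in [("plain MC        ", f), ("control variate ", cv)]:
            print("%s %.4f +/- %.4f" % (name, est.mean(), est.std(ddof=1) / np.sqrt(n)))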

  13. The Effects of Measurement Error on Statistical Models for Analyzing Change. Final Report.

    ERIC Educational Resources Information Center

    Dunivant, Noel

    The results of six major projects are discussed including a comprehensive mathematical and statistical analysis of the problems caused by errors of measurement in linear models for assessing change. In a general matrix representation of the problem, several new analytic results are proved concerning the parameters which affect bias in…

  14. Student Distractor Choices on the Mathematics Virginia Standards of Learning Middle School Assessments

    ERIC Educational Resources Information Center

    Lewis, Virginia Vimpeny

    2011-01-01

    Number Concepts; Measurement; Geometry; Probability; Statistics; and Patterns, Functions and Algebra. Procedural Errors were further categorized into the following content categories: Computation; Measurement; Statistics; and Patterns, Functions, and Algebra. The results of the analysis showed the main sources of error for 6th, 7th, and 8th…

  15. 76 FR 30245 - Post-Employment Conflict of Interest Restrictions; Revision of Departmental Component Designations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-25

    ...: Bureau of Labor Statistics. Employee Benefits Security Administration (formerly Pension and Welfare... senior employee served in any capacity during the year prior to termination from a senior employee... senior employee who served in a ``parent'' department or agency is not barred by 18 U.S.C. 207(c) from...

  16. We, the Asian and Pacific Islander Americans.

    ERIC Educational Resources Information Center

    Johnson, Dwight L.; And Others

    Demographic data are presented about the people who have immigrated to the United States from Asia and the Pacific Islands. Twelve figures (pie charts, bar graphs, and maps), and eight tables provide detailed, statistical information on such things as (1) distribution of Asians and Pacific Islanders in the United States, (2) states with the…

  17. Chasing the light sterile neutrino with the STEREO detector

    NASA Astrophysics Data System (ADS)

    Minotti, A.

    2017-09-01

    The standard three-family neutrino oscillation model is challenged by a number of observations, such as the reactor antineutrino anomaly (RAA), that can be explained by the existence of sterile neutrinos at the eV mass scale. The STEREO experiment detects ν̄e produced in the 58.3 MW(th) compact core of the ILL research reactor via inverse beta decay (IBD) interactions in a liquid scintillator. Using 6 identical target cells, STEREO compares ν̄e energy spectra at different baselines in order to observe possible distortions due to short-baseline oscillations toward eV sterile neutrinos. IBD events are effectively singled out from γ radiation by selecting events with a two-fold coincidence that is typical of an IBD interaction. External background is reduced by means of layers of shielding material. A Cherenkov veto partially removes the background produced by cosmic muons, and the remaining component is measured in reactor-off periods and subtracted statistically. If no evidence of sterile neutrinos is found after the full statistics of 6 reactor cycles has been gathered, STEREO is expected to fully exclude the RAA allowed region.

  18. Synthetic aperture imaging in ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.

    2014-03-01

    Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.
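
    Calibration quality above is reported through fiducial localization error and target registration error. As a hedged illustration of how a target registration error can be evaluated for a point-based rigid registration (a generic least-squares Kabsch fit on synthetic fiducials, not the authors' Z-bar procedure):

        import numpy as np

        rng = np.random.default_rng(2)

        def rigid_fit(src, dst):
            """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
            r = vt.T @ u.T
            if np.linalg.det(r) < 0:                   # guard against a reflection
                vt[-1] *= -1
                r = vt.T @ u.T
            return r, cd - r @ cs

        # Toy ground-truth transform and fiducials; segmentation noise plays the role of the FLE.
        true_R = np.linalg.qr(rng.normal(size=(3, 3)))[0]
        if np.linalg.det(true_R) < 0:
            true_R[:, 0] *= -1
        true_t = np.array([5.0, -3.0, 10.0])
        fiducials = rng.uniform(0, 50, (8, 3))
        measured = fiducials @ true_R.T + true_t + rng.normal(0, 0.15, (8, 3))

        R, t = rigid_fit(fiducials, measured)
        targets = rng.uniform(0, 50, (20, 3))          # points not used in the fit
        tre = np.linalg.norm((targets @ R.T + t) - (targets @ true_R.T + true_t), axis=1)
        print("mean target registration error: %.3f mm" % tre.mean())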

  19. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
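
    The essence of such a PLS-based normalization is to regress the quantity of interest on many spectral lines simultaneously, rather than dividing by a single reference line or the total spectral area. A minimal sketch with synthetic spectra using scikit-learn; the data generation and the number of latent components are assumptions, not the brass-alloy model of the paper:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(7)
        n_shots, n_lines = 300, 40
        conc = rng.uniform(55, 90, n_shots)                      # toy Cu concentration (%)
        fluct = rng.lognormal(0.0, 0.3, n_shots)                 # shot-to-shot plasma fluctuation

        # Analyte lines scale with concentration; matrix lines track the plasma
        # fluctuation, so a multivariate model can correct for it.
        strengths_a = rng.uniform(0.5, 1.5, n_lines // 2)
        strengths_m = rng.uniform(0.5, 1.5, n_lines // 2)
        spectra = np.hstack([np.outer(conc * fluct, strengths_a),
                             np.outer(70.0 * fluct, strengths_m)])
        spectra += rng.normal(0.0, 1.0, spectra.shape)           # detector noise

        pls = PLSRegression(n_components=3)
        pred = cross_val_predict(pls, spectra, conc, cv=5).ravel()

        rmsep = np.sqrt(np.mean((pred - conc) ** 2))
        print("cross-validated RMSEP: %.2f %% Cu" % rmsep)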

  20. The Neural-fuzzy Thermal Error Compensation Controller on CNC Machining Center

    NASA Astrophysics Data System (ADS)

    Tseng, Pai-Chung; Chen, Shen-Len

    The geometric errors and structural thermal deformation are factors that influence the machining accuracy of a Computer Numerical Control (CNC) machining center. Therefore, researchers pay attention to thermal error compensation technologies on CNC machine tools. Some real-time error compensation techniques have been successfully demonstrated in both laboratories and industrial sites. The compensation results still need to be enhanced. In this research, neural-fuzzy theory has been applied to derive a thermal prediction model. An IC-type thermometer has been used to detect the temperature variation of the heat sources. The thermal drifts are measured online by a touch-triggered probe with a standard bar. A thermal prediction model is then derived by neural-fuzzy theory based on the temperature variation and the thermal drifts. A Graphic User Interface (GUI) system providing a user-friendly operation interface is also built with Insprise C++ Builder. The experimental results show that the thermal prediction model developed by the neural-fuzzy methodology can improve machining accuracy from 80 µm to 3 µm. Compared with multi-variable linear regression analysis, the compensation accuracy is increased from ±10 µm to ±3 µm.

  1. A method to estimate the effect of deformable image registration uncertainties on daily dose mapping

    PubMed Central

    Murphy, Martin J.; Salguero, Francisco J.; Siebers, Jeffrey V.; Staub, David; Vaman, Constantin

    2012-01-01

    Purpose: To develop a statistical sampling procedure for spatially-correlated uncertainties in deformable image registration and then use it to demonstrate their effect on daily dose mapping. Methods: Sequential daily CT studies are acquired to map anatomical variations prior to fractionated external beam radiotherapy. The CTs are deformably registered to the planning CT to obtain displacement vector fields (DVFs). The DVFs are used to accumulate the dose delivered each day onto the planning CT. Each DVF has spatially-correlated uncertainties associated with it. Principal components analysis (PCA) is applied to measured DVF error maps to produce decorrelated principal component modes of the errors. The modes are sampled independently and reconstructed to produce synthetic registration error maps. The synthetic error maps are convolved with dose mapped via deformable registration to model the resulting uncertainty in the dose mapping. The results are compared to the dose mapping uncertainty that would result from uncorrelated DVF errors that vary randomly from voxel to voxel. Results: The error sampling method is shown to produce synthetic DVF error maps that are statistically indistinguishable from the observed error maps. Spatially-correlated DVF uncertainties modeled by our procedure produce patterns of dose mapping error that are different from that due to randomly distributed uncertainties. Conclusions: Deformable image registration uncertainties have complex spatial distributions. The authors have developed and tested a method to decorrelate the spatial uncertainties and make statistical samples of highly correlated error maps. The sample error maps can be used to investigate the effect of DVF uncertainties on daily dose mapping via deformable image registration. An initial demonstration of this methodology shows that dose mapping uncertainties can be sensitive to spatial patterns in the DVF uncertainties. PMID:22320766
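
    The sampling procedure described can be mimicked with ordinary PCA: decompose the observed error maps into modes, draw each mode amplitude independently with its observed spread, and reconstruct spatially correlated synthetic maps. A toy sketch with synthetic one-dimensional "maps" (not clinical DVF data):

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)

        # Toy "observed" DVF error maps: n_maps maps of n_vox voxels with spatial correlation.
        n_maps, n_vox, n_modes = 40, 500, 5
        base = rng.normal(size=(n_modes, n_vox))
        maps = rng.normal(size=(n_maps, n_modes)) @ base + 0.1 * rng.normal(size=(n_maps, n_vox))

        pca = PCA(n_components=n_modes)
        coeffs = pca.fit_transform(maps)                 # decorrelated mode amplitudes

        # Sample each mode independently with its observed standard deviation,
        # then reconstruct spatially correlated synthetic error maps.
        n_synth = 100
        synth_coeffs = rng.normal(size=(n_synth, n_modes)) * coeffs.std(axis=0)
        synthetic = synth_coeffs @ pca.components_ + pca.mean_

        print("observed map std: %.3f   synthetic map std: %.3f"
              % (maps.std(), synthetic.std()))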

  2. The intercrater plains of Mercury and the Moon: Their nature, origin and role in terrestrial planet evolution. Measurement and errors of crater statistics. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Leake, M. A.

    1982-01-01

    Planetary imagery techniques, errors in measurement or degradation assignment, and statistical formulas are presented with respect to cratering data. Base map photograph preparation, measurement of crater diameters and sampled area, and instruments used are discussed. Possible uncertainties, such as Sun angle, scale factors, degradation classification, and biases in crater recognition are discussed. The mathematical formulas used in crater statistics are presented.

  3. Visual Survey of Infantry Troops. Part 1. Visual Acuity, Refractive Status, Interpupillary Distance and Visual Skills

    DTIC Science & Technology

    1989-06-01

    letters on one line and several letters on the next line, there is no accurate way to credit these extra letters for statistical analysis. The decimal and...contains the descriptive statistics of the objective refractive error components of infantrymen. Figures 8-11 show the frequency distributions for sphere...equivalents. Nonspectacle wearers: Table 12 contains the descriptive statistics for non-spectacle wearers. Based on these refractive error data, about 30

  4. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
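
    The Type I error behaviour of multiple item-fit tests can be checked by simulation: generate data under the null, run the per-item tests at a fixed nominal level, and count false rejections with and without Bonferroni correction. A generic sketch (toy chi-square item statistics, not the RUMM software itself):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        n_items, n_sim, alpha, df = 25, 2000, 0.05, 8

        raw, bonf = 0, 0
        for _ in range(n_sim):
            # Under the null each item-fit statistic follows its reference chi-square
            # distribution, so the per-item p-values are uniform.
            p = stats.chi2.sf(rng.chisquare(df=df, size=n_items), df=df)
            raw  += np.any(p < alpha)
            bonf += np.any(p < alpha / n_items)

        print("family-wise Type I error, unadjusted : %.3f" % (raw / n_sim))
        print("family-wise Type I error, Bonferroni : %.3f" % (bonf / n_sim))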

  5. The peak altitude of H3+ auroral emission: comparison with the ultraviolet

    NASA Astrophysics Data System (ADS)

    Blake, J.; Stallard, T.; Miller, S.; Melin, H.; O'Donoghue, J.; Baines, K.

    2013-09-01

    The altitude of Saturn's peak auroral emission has previously been measured for specific cases in both the ultraviolet (UV) and the infrared (IR). Gerard et al [2009] conclude that the night side H2 UV emission is within the range of 800 to 1300 km above the 1-bar pressure surface. However, using colour ratio spectroscopy, Gustin et al [2009] located the emission layer at or above 610 km. Measurements of the infrared auroral altitude were conducted by Stallard et al [2012] on H3+ emissions from nine VIMS Cassini images, resulting in a measurement of 1155 ± 25 km above the 1-bar pressure surface. Here we present data analysed in a manner similar to Stallard et al [2012] on the observations of H3+ emission in twenty images taken by the Visual Infrared Mapping Spectrometer (VIMS) aboard the spacecraft Cassini from the years 2006, 2008 and 2012. The bins covered were 3.39872, 3.51284, 3.64853, 4.18299 and 4.33280 μm. These observations were selected from a set of 15,000 as they contained a useful alignment of the aurorae on the limb and the body of the planet. The specific conditions that had to be met for each image were as follows: a minimum integration time of 75 milliseconds per pixel, a minimum number of pixels in the x and y directions of 32, the image must include the latitude range of 70 to 90 degrees for either hemisphere, and the sub-spacecraft angle must be between 0 and 20 degrees. This alignment allowed for the altitudinal profiles to be analysed in terms of the difference between the latitude of aurorae on the limb and on the body of Saturn, thus permitting an investigation into the effects of misalignment. In this instance, misalignment was defined as the difference between the latitude of the peak emission on the planet and the latitude of the limb, assuming the aurorae to be approximately circular. A statistical study by Badman et al [2011] showed that the centre of the oval is on average offset anti-sunward of the pole by about 1.6 degrees. To account for this, the acceptable error in misalignment was set to be ± 4 degrees. The accepted error range for the altitudinal profiles was set to ± 250 km. It was determined that variations in the measured altitude of the aurorae are predominantly shifted by misalignment, though there is also some natural variation. Using a second order polynomial fit, the altitude with zero misalignment is measured at 1215 ± 119 km. Further still, through comparison of the IR and UV altitudinal emission profiles it has been discovered that, regardless of the alignment, the infrared auroral altitudinal profile drops off in intensity much faster than the ultraviolet counterpart, declining to less than 10% of maximum intensity before reaching an altitude of 2000 km above the 1 bar pressure surface. Further work is currently underway to investigate the implications for the emissive behaviour of H3+ with altitude.

  6. Data Analysis & Statistical Methods for Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these. We have also used goodness of fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
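
    A natural way to relate error counts to workload variables, in the spirit of the regression analyses described, is a Poisson regression with the number of files radiated as the exposure. The sketch below uses synthetic data and placeholder variable names, not the JPL dataset:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n = 120
        files_radiated = rng.integers(5, 60, n)             # exposure per period (toy)
        workload = rng.uniform(0, 1, n)                     # subjective workload score (toy)
        novelty = rng.uniform(0, 1, n)                      # operational novelty score (toy)

        # Simulate error counts whose per-file rate grows with workload and novelty.
        rate_per_file = 0.01 * np.exp(1.2 * workload + 0.8 * novelty)
        errors = rng.poisson(rate_per_file * files_radiated)

        X = sm.add_constant(np.column_stack([workload, novelty]))
        fit = sm.GLM(errors, X, family=sm.families.Poisson(),
                     exposure=files_radiated).fit()
        print(fit.params)        # log error rate per file vs. workload and novelty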

  7. Rank score and permutation testing alternatives for regression quantile estimates

    USGS Publications Warehouse

    Cade, B.S.; Richards, J.D.; Mielke, P.W.

    2006-01-01

    Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic distributed as a χ² random variable with q degrees of freedom (where q parameters are constrained by H0) and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvements in Type I errors over the T-test for models with > 2 parameters, smaller n, and more extreme quantiles but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors but the F-test provided some power at smaller n and extreme quantiles when the T-test had no power because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles. Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.
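
    The flavour of a permutation test for a regression quantile coefficient can be conveyed in a few lines: refit the quantile regression many times with the predictor permuted and compare the observed slope with the permutation distribution. This is a deliberately simplified illustration (plain permutation of the predictor under an intercept-only null), not the rank score or double permutation procedures evaluated in the study:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(9)
        n, tau = 80, 0.90
        x = rng.uniform(0, 10, n)
        y = 1.0 + rng.standard_normal(n)                 # null model: x has no effect

        def q_slope(xv, yv):
            res = sm.QuantReg(yv, sm.add_constant(xv)).fit(q=tau)
            return np.asarray(res.params)[1]

        t_obs = q_slope(x, y)
        perm = np.array([q_slope(rng.permutation(x), y) for _ in range(500)])
        p_value = np.mean(np.abs(perm) >= np.abs(t_obs))
        print("tau=%.2f slope = %.3f, permutation p = %.3f" % (tau, t_obs, p_value))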

  8. Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results

    PubMed Central

    Wicherts, Jelte M.; Bakker, Marjan; Molenaar, Dylan

    2011-01-01

    Background The widespread reluctance to share published research data is often hypothesized to be due to the authors' fear that reanalysis may expose errors in their work or may produce conclusions that contradict their own. However, these hypotheses have not previously been studied systematically. Methods and Findings We related the reluctance to share research data for reanalysis to 1148 statistically significant results reported in 49 papers published in two major psychology journals. We found the reluctance to share data to be associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance. Conclusions Our findings on the basis of psychological papers suggest that statistical results are particularly hard to verify when reanalysis is more likely to lead to contrasting conclusions. This highlights the importance of establishing mandatory data archiving policies. PMID:22073203

  9. Trends in statistical methods in articles published in Archives of Plastic Surgery between 2012 and 2017.

    PubMed

    Han, Kyunghwa; Jung, Inkyung

    2018-05-01

    This review article presents an assessment of trends in statistical methods and an evaluation of their appropriateness in articles published in the Archives of Plastic Surgery (APS) from 2012 to 2017. We reviewed 388 original articles published in APS between 2012 and 2017. We categorized the articles that used statistical methods according to the type of statistical method, the number of statistical methods, and the type of statistical software used. We checked whether there were errors in the description of statistical methods and results. A total of 230 articles (59.3%) published in APS between 2012 and 2017 used one or more statistical method. Within these articles, there were 261 applications of statistical methods with continuous or ordinal outcomes, and 139 applications of statistical methods with categorical outcome. The Pearson chi-square test (17.4%) and the Mann-Whitney U test (14.4%) were the most frequently used methods. Errors in describing statistical methods and results were found in 133 of the 230 articles (57.8%). Inadequate description of P-values was the most common error (39.1%). Among the 230 articles that used statistical methods, 71.7% provided details about the statistical software programs used for the analyses. SPSS was predominantly used in the articles that presented statistical analyses. We found that the use of statistical methods in APS has increased over the last 6 years. It seems that researchers have been paying more attention to the proper use of statistics in recent years. It is expected that these positive trends will continue in APS.

  10. Automatic Ammunition Identification Technology Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weil, B.

    1993-01-01

    The Automatic Ammunition Identification Technology (AAIT) Project is an activity of the Robotics Process Systems Division at the Oak Ridge National Laboratory (ORNL) for the US Army's Project Manager-Ammunition Logistics (PM-AMMOLOG) at the Picatinny Arsenal in Picatinny, New Jersey. The project objective is to evaluate new two-dimensional bar code symbologies for potential use in ammunition logistics systems and automated reloading equipment. These new symbologies are a significant improvement over typical linear bar codes since machine-readable alphanumeric messages up to 2000 characters long are achievable. These compressed data symbologies are expected to significantly improve logistics and inventory management tasks and permit automated feeding and handling of ammunition to weapon systems. The results will be increased throughput capability, better inventory control, reduction of human error, lower operation and support costs, and a more timely re-supply of various weapon systems. This paper will describe the capabilities of existing compressed data symbologies and the symbol testing activities being conducted at ORNL for the AAIT Project.

  11. Automatic Ammunition Identification Technology Project. Ammunition Logistics Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weil, B.

    1993-03-01

    The Automatic Ammunition Identification Technology (AAIT) Project is an activity of the Robotics & Process Systems Division at the Oak Ridge National Laboratory (ORNL) for the US Army's Project Manager-Ammunition Logistics (PM-AMMOLOG) at the Picatinny Arsenal in Picatinny, New Jersey. The project objective is to evaluate new two-dimensional bar code symbologies for potential use in ammunition logistics systems and automated reloading equipment. These new symbologies are a significant improvement over typical linear bar codes since machine-readable alphanumeric messages up to 2000 characters long are achievable. These compressed data symbologies are expected to significantly improve logistics and inventory management tasks and permit automated feeding and handling of ammunition to weapon systems. The results will be increased throughput capability, better inventory control, reduction of human error, lower operation and support costs, and a more timely re-supply of various weapon systems. This paper will describe the capabilities of existing compressed data symbologies and the symbol testing activities being conducted at ORNL for the AAIT Project.

  12. Universal behavior of the γ⁎γ→(π0,η,η′) transition form factors

    PubMed Central

    Melikhov, Dmitri; Stech, Berthold

    2012-01-01

    The photon transition form factors of π, η and η′ are discussed in view of recent measurements. It is shown that the exact axial anomaly sum rule allows a precise comparison of all three form factors at high Q² independent of the different structures and distribution amplitudes of the participating pseudoscalar mesons. We conclude: (i) The πγ form factor reported by Belle is in excellent agreement with the nonstrange I=0 component of the η and η′ form factors obtained from the BaBar measurements. (ii) Within errors, the πγ form factor from Belle is compatible with the asymptotic pQCD behavior, similar to the η and η′ form factors from BaBar. Still, the best fits to the data sets of πγ, ηγ, and η′γ form factors favor a universal small logarithmic rise Q²F_Pγ(Q²) ∼ log(Q²). PMID:23226917

  13. Interpretation of fast-ion signals during beam modulation experiments

    DOE PAGES

    Heidbrink, W. W.; Collins, C. S.; Stagner, L.; ...

    2016-07-22

    Fast-ion signals produced by a modulated neutral beam are used to infer fast-ion transport. The measured quantity is the divergence of the perturbed fast-ion flux from the phase-space volume measured by the diagnostic, ∇·Γ̄. Since velocity-space transport often contributes to this divergence, the phase-space sensitivity of the diagnostic (or “weight function”) plays a crucial role in the interpretation of the signal. The source and sink make major contributions to the signal, but their effects are accurately modeled by calculations that employ an exponential decay term for the sink. Recommendations for optimal design of a fast-ion transport experiment are given, illustrated by results from DIII-D measurements of fast-ion transport by Alfvén eigenmodes. Finally, the signal-to-noise ratio of the diagnostic, systematic uncertainties in the modeling of the source and sink, and the non-linearity of the perturbation all contribute to the error in ∇·Γ̄.
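
    As a rough illustration of the source-plus-exponential-sink picture mentioned above, the sketch below integrates a square-wave-modulated beam source with a linear decay term. The modulation period, decay time, and source amplitude are arbitrary illustrative assumptions, not DIII-D parameters, and the model is far simpler than the phase-space-resolved calculations in the paper.

        import numpy as np

        def fast_ion_signal(t, period=0.1, tau=0.03, source_on=1.0):
            """Toy model: fast-ion density n(t) driven by a square-wave beam
            source S(t) with an exponential-decay sink, dn/dt = S(t) - n/tau.
            All parameter values are illustrative, not experimental."""
            dt = t[1] - t[0]
            n = np.zeros_like(t)
            for i in range(1, len(t)):
                beam_on = (t[i] % period) < (period / 2)          # square-wave modulation
                source = source_on if beam_on else 0.0
                n[i] = n[i - 1] + dt * (source - n[i - 1] / tau)  # explicit Euler step
            return n

        t = np.linspace(0.0, 0.5, 5000)
        signal = fast_ion_signal(t)
        print(signal[-5:])  # settles into a periodic steady state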

  14. A 3-year prospective clinical study of telescopic crown, bar, and locator attachments for removable four implant-supported maxillary overdentures.

    PubMed

    Zou, Duohong; Wu, Yiqun; Huang, Wei; Wang, Feng; Wang, Shen; Zhang, Zhiyong; Zhang, Zhiyuan

    2013-01-01

    To evaluate telescopic crown (TC), bar, and locator attachments used in removable four implant-supported overdentures for patients with edentulous maxillae. A total of 30 maxillary edentulous patients were enrolled in a 3-year prospective study. Ten patients (group A) were treated with overdentures supported by TCs, 10 patients (group B) with overdentures supported by bar attachments, and 10 patients (group C) with overdentures supported by locator attachments. A total of 120 implants were used to restore oral function. During the 3-year follow-up period, implant survival and success rates, biologic and mechanical complications, prosthodontic maintenance efforts, and patient satisfaction were evaluated. All 30 patients were available for the 3-year follow-up and exhibited 100% implant survival and success rates. Differences in peri-implant marginal bone resorption among the three groups were not statistically significant. Plaque, bleeding, gingival, and calculus indices were lower in group C than in groups A and B. The prosthodontic maintenance visits revealed eight complications in the TC group, seven complications in the bar group, and four complications in the locator group. However, there were no differences in the clinical effects of the overdentures among the three groups. Within the limits of this prospective study, it was concluded that the locator system produced superior clinical results compared with the TC and bar attachments in terms of peri-implant hygiene parameters, the frequency of prosthodontic maintenance measures, cost, and ease of denture preparation. However, longer-term prospective studies are required to confirm these results.

  15. New estimates of the CMB angular power spectra from the WMAP 5 year low-resolution data

    NASA Astrophysics Data System (ADS)

    Gruppuso, A.; de Rosa, A.; Cabella, P.; Paci, F.; Finelli, F.; Natoli, P.; de Gasperis, G.; Mandolesi, N.

    2009-11-01

    A quadratic maximum likelihood (QML) estimator is applied to the Wilkinson Microwave Anisotropy Probe (WMAP) 5 year low-resolution maps to compute the cosmic microwave background angular power spectra (APS) at large scales for both temperature and polarization. Estimates and error bars for the six APS are provided up to l = 32 and compared, when possible, to those obtained by the WMAP team, without finding any inconsistency. The conditional likelihood slices are also computed for the C_l of all six power spectra from l = 2 to 10 through a pixel-based likelihood code. Both the codes treat the covariance for (T, Q, U) in a single matrix without employing any approximation. The inputs of both the codes (foreground-reduced maps, related covariances and masks) are provided by the WMAP team. The peaks of the likelihood slices are always consistent with the QML estimates within the error bars; however, an excellent agreement occurs when the QML estimates are used as a fiducial power spectrum instead of the best-fitting theoretical power spectrum. By the full computation of the conditional likelihood on the estimated spectra, the value of the temperature quadrupole C^TT_{l=2} is found to be less than 2σ away from the WMAP 5 year Λ cold dark matter best-fitting value. The BB spectrum is found to be well consistent with zero, and upper limits on the B modes are provided. The parity odd signals TB and EB are found to be consistent with zero.
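
    The pixel-based likelihood referred to above is, in its standard Gaussian form, a function of the full (T, Q, U) covariance. A minimal statement of that standard form (with m the low-resolution data vector, C(C_l) the signal-plus-noise covariance, and additive constants dropped) is:

        % Standard Gaussian pixel-based likelihood for the (T,Q,U) data vector m;
        % C(C_l) is the full signal-plus-noise covariance matrix.
        -2 \ln \mathcal{L}(C_\ell) = \mathbf{m}^{\mathsf T} C(C_\ell)^{-1} \mathbf{m} + \ln \det C(C_\ell) + \mathrm{const}.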

  16. Reconstruction of primordial tensor power spectra from B -mode polarization of the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Hiramatsu, Takashi; Komatsu, Eiichiro; Hazumi, Masashi; Sasaki, Misao

    2018-06-01

    Given observations of the B -mode polarization power spectrum of the cosmic microwave background (CMB), we can reconstruct power spectra of primordial tensor modes from the early Universe without assuming their functional form such as a power-law spectrum. The shape of the reconstructed spectra can then be used to probe the origin of tensor modes in a model-independent manner. We use the Fisher matrix to calculate the covariance matrix of tensor power spectra reconstructed in bins. We find that the power spectra are best reconstructed at wave numbers in the vicinity of k ≈ 6×10⁻⁴ and 5×10⁻³ Mpc⁻¹, which correspond to the "reionization bump" at ℓ≲6 and "recombination bump" at ℓ≈80 of the CMB B -mode power spectrum, respectively. The error bar between these two wave numbers is larger because of the lack of the signal between the reionization and recombination bumps. The error bars increase sharply toward smaller (larger) wave numbers because of the cosmic variance (CMB lensing and instrumental noise). To demonstrate the utility of the reconstructed power spectra, we investigate whether we can distinguish between various sources of tensor modes including those from the vacuum metric fluctuation and SU(2) gauge fields during single-field slow-roll inflation, open inflation, and massive gravity inflation. The results depend on the model parameters, but we find that future CMB experiments are sensitive to differences in these models. We make our calculation tool available online.
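
    The binned covariance quoted above follows from a Fisher-matrix calculation. A minimal sketch of the standard full-sky Fisher matrix for parameters p_i (here, binned tensor-spectrum amplitudes entering the B-mode spectrum) is given below; the sky fraction f_sky, noise spectrum N_l, and binning are generic assumptions rather than the authors' specific experimental configuration.

        % Standard full-sky Fisher matrix for parameters p_i of the BB spectrum;
        % f_sky and N_\ell are illustrative instrument assumptions.
        F_{ij} = \sum_{\ell} \frac{2\ell+1}{2} f_{\mathrm{sky}}
                 \frac{1}{\left(C_\ell^{BB} + N_\ell\right)^{2}}
                 \frac{\partial C_\ell^{BB}}{\partial p_i}
                 \frac{\partial C_\ell^{BB}}{\partial p_j},
        \qquad \mathrm{Cov}(p_i, p_j) \approx \left(F^{-1}\right)_{ij}.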

  17. Combination of CDF and D0 results on the mass of the top quark using up to 9.7 fb⁻¹ at the Tevatron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tevatron Electroweak Working Group, Tevatron Group; Aaltonen, T.

    2016-08-05

    We summarize the current top quark mass (m_t) measurements from the CDF and D0 experiments at Fermilab. We combine published results from Run I (1992–1996) with the most precise published and preliminary Run II (2001–2011) measurements based on pp̄ data corresponding to up to 9.7 fb⁻¹ of pp̄ collisions. Taking correlations of uncertainties into account, and combining the statistical and systematic contributions in quadrature, the preliminary Tevatron average mass value for the top quark is m_t = 174.30 ± 0.65 GeV/c², corresponding to a relative precision of 0.37%.
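
    Taking correlations of uncertainties into account in a combination of this kind is typically done with a best linear unbiased estimate (BLUE). The sketch below combines two correlated measurements of a single quantity; the input values and covariance matrix are hypothetical, chosen only for illustration, not the actual CDF/D0 inputs or correlation model.

        import numpy as np

        def blue_combination(values, cov):
            """Best Linear Unbiased Estimate of one quantity from correlated
            measurements: weights w = V^-1 1 / (1^T V^-1 1)."""
            values = np.asarray(values, dtype=float)
            vinv = np.linalg.inv(np.asarray(cov, dtype=float))
            ones = np.ones(len(values))
            weights = vinv @ ones / (ones @ vinv @ ones)
            return weights @ values, np.sqrt(1.0 / (ones @ vinv @ ones))

        # Hypothetical top-mass inputs (GeV/c^2) with a partially correlated
        # systematic uncertainty; values are illustrative only.
        values = [174.1, 174.6]
        cov = [[0.90**2, 0.30],
               [0.30, 0.80**2]]
        mass, err = blue_combination(values, cov)
        print(f"combined mass = {mass:.2f} +/- {err:.2f} GeV/c^2")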

  18. NDE of structural ceramics

    NASA Technical Reports Server (NTRS)

    Klima, S. J.; Vary, A.

    1986-01-01

    Radiographic, ultrasonic, scanning laser acoustic microscopy (SLAM), and thermo-acoustic microscopy techniques were used to characterize silicon nitride and silicon carbide modulus-of-rupture test specimens in various stages of fabrication. Conventional and microfocus X-ray techniques were found capable of detecting minute high density inclusions in as-received powders, green compacts, and fully densified specimens. Significant density gradients in sintered bars were observed by radiography, ultrasonic velocity measurements, and SLAM. Ultrasonic attenuation was found to be sensitive to microstructural variations arising from grain and void morphology and distribution. SLAM was also capable of detecting voids, inclusions, and cracks in finished test bars. Consideration is given to the potential for applying thermo-acoustic microscopy techniques to green and densified ceramics. The detection probability statistics and some limitations of radiography and SLAM are also discussed.

  19. A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.

    2013-07-25

    This paper presents four algorithms for generating random forecast error time series and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, in order to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecast and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all four methods generate satisfactory results. One method may preserve one or two of the required statistical characteristics better than the other methods, but may not preserve the remaining characteristics as well. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
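
    As a rough illustration of one of the four approaches listed above, the sketch below draws a synthetic forecast-error series from a simple autoregressive recursion with a seasonal lag and then checks its mean, standard deviation, and lag-1 autocorrelation. The model orders, coefficients, and seasonal period are arbitrary assumptions for illustration, not the values fitted in the paper.

        import numpy as np

        def generate_forecast_errors(n, phi=0.7, phi_seasonal=0.2, period=24,
                                     sigma=0.05, seed=0):
            """Synthetic forecast-error series from an AR(1) model with an
            additive seasonal lag: e_t = phi*e_{t-1} + phi_s*e_{t-period} + w_t.
            Coefficients and period are illustrative, not fitted values."""
            rng = np.random.default_rng(seed)
            e = np.zeros(n)
            w = rng.normal(0.0, sigma, size=n)
            for t in range(n):
                ar = phi * e[t - 1] if t >= 1 else 0.0
                seasonal = phi_seasonal * e[t - period] if t >= period else 0.0
                e[t] = ar + seasonal + w[t]
            return e

        errors = generate_forecast_errors(8760)   # one year of hourly errors
        lag1 = np.corrcoef(errors[:-1], errors[1:])[0, 1]
        print(f"mean={errors.mean():.4f}  std={errors.std():.4f}  lag-1 autocorr={lag1:.3f}")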

  20. Quantagenetics® analysis of laser-induced breakdown spectroscopic data: Rapid and accurate authentication of materials

    NASA Astrophysics Data System (ADS)

    McManus, Catherine E.; Dowe, James; McMillan, Nancy J.

    2018-07-01

    Many industrial and commercial problems involve authenticating the manufacturer or geographic source of a material, controlling material quality, determining whether specific treatments have been properly applied, or establishing whether a material is authentic or fraudulent. Often, multiple analytical techniques and tests are used, resulting in expensive and time-consuming testing procedures. Laser-Induced Breakdown Spectroscopy (LIBS) is a rapid laser ablation spectroscopic analytical method. Each LIBS spectrum contains information about the concentration of every element, some isotopic ratios, and the molecular structure of the material, making it a unique and comprehensive signature of the material. Quantagenetics® is a multivariate statistical method based on Bayesian statistics that uses the Euclidean distance between LIBS spectra to classify materials (US Patents 9,063,085 and 8,699,022). The fundamental idea behind Quantagenetics® is that LIBS spectra contain sufficient information to determine the origin and history of materials. This study presents two case studies that illustrate the method. LIBS spectra from 510 Colombian emeralds from 18 mines were classified by mine. Overall, 99.4% of the spectra were correctly classified; the success rate for individual mines ranges from 98.2% to 100%. Some of the mines are separated by distances as small as 200 m, indicating that the method uses slight but consistent differences in composition to identify the mine of origin accurately. The second study used bars of 17-4 stainless steel from three manufacturers. Each of the three bars was cut into 90 coupons; 30 coupons from each bar received no further treatment, another 30 from each bar received one tempering and hardening treatment, and the final 30 coupons from each bar received a different heat treatment. Using LIBS spectra taken from the coupons, the Quantagenetics® method classified the 270 coupons both by manufacturer (composition) and heat treatment (structure) with an overall success rate of 95.3%. Individual success rates range from 92.4% to 97.6%. These case studies were successful despite the analysis having no preconceived knowledge of the materials; artificial intelligence allows the materials to classify themselves without human intervention or bias. Multivariate analysis of LIBS spectra using the Quantagenetics® method has promise to improve quality control and authentication of a wide variety of materials in industrial enterprises.
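
    The patented Quantagenetics® algorithm itself is not reproduced here; as a loose stand-in for Euclidean-distance classification of spectra, the sketch below assigns each spectrum to the class whose mean training spectrum is closest (a nearest-centroid rule). The array shapes, class labels, and the centroid rule are assumptions for illustration only.

        import numpy as np

        def nearest_centroid_classify(train_spectra, train_labels, test_spectra):
            """Assign each test spectrum to the class whose mean training spectrum
            is closest in Euclidean distance. A generic stand-in, not the patented
            Quantagenetics(R) algorithm."""
            classes = sorted(set(train_labels))
            centroids = np.stack([train_spectra[np.array(train_labels) == c].mean(axis=0)
                                  for c in classes])
            # distance from every test spectrum to every class centroid
            d = np.linalg.norm(test_spectra[:, None, :] - centroids[None, :, :], axis=2)
            return [classes[i] for i in d.argmin(axis=1)]

        # Hypothetical toy data: 6 training "spectra" of 4 channels each, 2 classes.
        rng = np.random.default_rng(1)
        train = np.vstack([rng.normal(0.0, 0.1, (3, 4)), rng.normal(1.0, 0.1, (3, 4))])
        labels = ["mine_A"] * 3 + ["mine_B"] * 3
        test = np.array([[0.05, -0.02, 0.01, 0.03], [0.97, 1.02, 0.99, 1.01]])
        print(nearest_centroid_classify(train, labels, test))  # ['mine_A', 'mine_B']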
