Sample records for good statistical precision

  1. Hypothesis testing for band size detection of high-dimensional banded precision matrices.

    PubMed

    An, Baiguo; Guo, Jianhua; Liu, Yufeng

    2014-06-01

    Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, cross-validation is commonly used; however, we show that cross-validation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
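
    The band-size test above targets the Cholesky-based construction commonly used for banded precision estimation. As a minimal sketch (not the authors' exact procedure), the following regresses each variable on at most its k predecessors and assembles Omega = T' D^{-1} T; the function name and the plain residual-variance estimates are illustrative:

    ```python
    import numpy as np

    def banded_precision_cholesky(X, k):
        """Modified-Cholesky estimate of a banded precision matrix:
        regress each column of X on at most its k immediate predecessors."""
        n, p = X.shape
        Xc = X - X.mean(axis=0)              # center the data
        T = np.eye(p)                        # unit lower-triangular factor
        d = np.empty(p)                      # prediction-error variances
        d[0] = Xc[:, 0] @ Xc[:, 0] / n
        for j in range(1, p):
            lo = max(0, j - k)
            Z = Xc[:, lo:j]
            beta, *_ = np.linalg.lstsq(Z, Xc[:, j], rcond=None)
            T[j, lo:j] = -beta
            r = Xc[:, j] - Z @ beta
            d[j] = r @ r / n
        return T.T @ (T / d[:, None])        # Omega = T' D^{-1} T
    ```

    Cross-validation would rerun this over a grid of band sizes k; the paper's hypothesis test instead decides directly whether increasing k adds signal.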

  2. Parallel algorithm for solving Kepler’s equation on Graphics Processing Units: Application to analysis of Doppler exoplanet searches

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.

    2009-05-01

    We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce GTX 280 and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ²) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets, and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
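
    The core kernel is a Newton iteration on Kepler's equation E - e·sin(E) = M; a CPU sketch is below (the GPU version parallelizes exactly this loop over systems and observations; names and example values are ours). The mean-anomaly line is the step where, per the abstract, single precision breaks down for real data sets:

    ```python
    import numpy as np

    def solve_kepler(M, e, tol=1e-12, max_iter=50):
        """Newton iteration for the eccentric anomaly E in E - e*sin(E) = M."""
        E = M + e * np.sin(M)                    # standard starting guess
        for _ in range(max_iter):
            step = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
            E = E - step
            if np.all(np.abs(step) < tol):
                break
        return E

    # mean anomaly from time, epoch and period: the double-precision-critical step
    t, t0, P = 2454500.0, 2450000.0, 3.52474     # days (illustrative values)
    M = 2.0 * np.pi * ((t - t0) / P % 1.0)
    E = solve_kepler(M, e=0.3)
    ```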

  3. Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R

    ERIC Educational Resources Information Center

    Dogan, C. Deha

    2017-01-01

    Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…
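
    The article works in R; a language-neutral sketch of the percentile bootstrap it advocates, here in Python (function name and defaults are ours):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_ci(data, stat=np.mean, n_boot=10_000, alpha=0.05):
        """Percentile bootstrap confidence interval for an arbitrary statistic."""
        data = np.asarray(data)
        boots = np.array([stat(rng.choice(data, size=data.size, replace=True))
                          for _ in range(n_boot)])
        lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return lo, hi

    print(bootstrap_ci(rng.normal(10.0, 2.0, size=40)))   # e.g. 95% CI for a mean
    ```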

  4. Accurate quantification of magnetic particle properties by intra-pair magnetophoresis for nanobiotechnology

    NASA Astrophysics Data System (ADS)

    van Reenen, Alexander; Gao, Yang; Bos, Arjen H.; de Jong, Arthur M.; Hulsen, Martien A.; den Toonder, Jaap M. J.; Prins, Menno W. J.

    2013-07-01

    The application of magnetic particles in biomedical research and in-vitro diagnostics requires accurate characterization of their magnetic properties, with single-particle resolution and good statistics. Here, we report intra-pair magnetophoresis as a method to accurately quantify the field-dependent magnetic moments of magnetic particles and to rapidly generate histograms of the magnetic moments with good statistics. We demonstrate our method with particles of different sizes and from different sources, with a measurement precision of a few percent. We expect that intra-pair magnetophoresis will be a powerful tool for the characterization and improvement of particles for the upcoming field of particle-based nanobiotechnology.

  5. Statistical analysis of an RNA titration series evaluates microarray precision and sensitivity on a whole-array basis

    PubMed Central

    Holloway, Andrew J; Oshlack, Alicia; Diyagama, Dileepa S; Bowtell, David DL; Smyth, Gordon K

    2006-01-01

    Background Concerns are often raised about the accuracy of microarray technologies and the degree of cross-platform agreement, but there are as yet no methods which can unambiguously evaluate precision and sensitivity for these technologies on a whole-array basis. Results A methodology is described for evaluating the precision and sensitivity of whole-genome gene expression technologies such as microarrays. The method consists of an easy-to-construct titration series of RNA samples and an associated statistical analysis using non-linear regression. The method evaluates the precision and responsiveness of each microarray platform on a whole-array basis, i.e., using all the probes, without the need to match probes across platforms. An experiment is conducted to assess and compare four widely used microarray platforms. All four platforms are shown to have satisfactory precision but the commercial platforms are superior for resolving differential expression for genes at lower expression levels. The effective precision of the two-color platforms is improved by allowing for probe-specific dye-effects in the statistical model. The methodology is used to compare three data extraction algorithms for the Affymetrix platforms, demonstrating poor performance for the commonly used proprietary algorithm relative to the other algorithms. For probes which can be matched across platforms, the cross-platform variability is decomposed into within-platform and between-platform components, showing that platform disagreement is almost entirely systematic rather than due to measurement variability. Conclusion The results demonstrate good precision and sensitivity for all the platforms, but highlight the need for improved probe annotation. They quantify the extent to which cross-platform measures can be expected to be less accurate than within-platform comparisons for predicting disease progression or outcome. PMID:17118209
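
    The abstract does not give the regression explicitly; one plausible form of the titration fit models each probe's signal as the log of a mixture of the two pure-sample intensities a and b at known mixing proportion p (the model, names, and numbers here are our illustration, not the paper's exact formulation):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def titration_model(p, a, b):
        """Expected log2 signal when two RNA samples with pure-sample
        intensities a and b are mixed in proportion p : (1 - p)."""
        return np.log2(p * a + (1 - p) * b)

    p = np.array([0.0, 0.25, 0.50, 0.75, 1.0])     # titration design
    y = np.array([8.1, 8.9, 9.4, 9.8, 10.2])       # synthetic log2 intensities

    (a_hat, b_hat), _ = curve_fit(titration_model, p, y, p0=[1000.0, 250.0])
    precision = np.std(y - titration_model(p, a_hat, b_hat), ddof=2)
    ```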

  6. Interlaboratory round robin study on axial tensile properties of SiC-SiC CMC tubular test specimens [Interlaboratory round robin study on axial tensile properties of SiC/SiC tubes]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Gyanender P.; Gonczy, Steve T.; Deck, Christian P.

    An interlaboratory round robin study was conducted on the tensile strength of SiC–SiC ceramic matrix composite (CMC) tubular test specimens at room temperature with the objective of expanding the database of mechanical properties of nuclear grade SiC–SiC and establishing the precision and bias statement for standard test method ASTM C1773. The mechanical properties statistics from the round robin study and the precision statistics and precision statement are presented herein. The data show reasonable consistency across the laboratories, indicating that the current C1773–13 ASTM standard is adequate for testing ceramic fiber reinforced ceramic matrix composite tubular test specimens. Furthermore, it was found that the distribution of ultimate tensile strength data was best described with a two-parameter Weibull distribution, while a lognormal distribution provided a good description of the distribution of proportional limit stress data.
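
    The distributional comparison in the last sentence can be reproduced with maximum-likelihood fits; a sketch on synthetic strengths (the data here are simulated, not the round-robin values):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    uts = stats.weibull_min.rvs(c=9.0, scale=250.0, size=30,
                                random_state=rng)      # synthetic UTS, MPa

    # two-parameter fits (location fixed at zero)
    c_w, _, scale_w = stats.weibull_min.fit(uts, floc=0)
    s_ln, _, scale_ln = stats.lognorm.fit(uts, floc=0)

    # compare by maximized log-likelihood (higher fits better)
    ll_weibull = stats.weibull_min.logpdf(uts, c_w, 0, scale_w).sum()
    ll_lognorm = stats.lognorm.logpdf(uts, s_ln, 0, scale_ln).sum()
    ```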

  7. Interlaboratory round robin study on axial tensile properties of SiC-SiC CMC tubular test specimens [Interlaboratory round robin study on axial tensile properties of SiC/SiC tubes]

    DOE PAGES

    Singh, Gyanender P.; Gonczy, Steve T.; Deck, Christian P.; ...

    2018-04-19

    An interlaboratory round robin study was conducted on the tensile strength of SiC–SiC ceramic matrix composite (CMC) tubular test specimens at room temperature with the objective of expanding the database of mechanical properties of nuclear grade SiC–SiC and establishing the precision and bias statement for standard test method ASTM C1773. The mechanical properties statistics from the round robin study and the precision statistics and precision statement are presented herein. The data show reasonable consistency across the laboratories, indicating that the current C1773–13 ASTM standard is adequate for testing ceramic fiber reinforced ceramic matrix composite tubular test specimens. Furthermore, it was found that the distribution of ultimate tensile strength data was best described with a two-parameter Weibull distribution, while a lognormal distribution provided a good description of the distribution of proportional limit stress data.

  8. PV cells electrical parameters measurement

    NASA Astrophysics Data System (ADS)

    Cibira, Gabriel

    2017-12-01

    When the optical parameters of a photovoltaic silicon cell are measured precisely, good estimates of its electrical parameters can be obtained by applying well-known physical-mathematical models. Nevertheless, considerable recombination phenomena might occur in both the surface and intrinsic thin layers of novel materials. Moreover, rear-contact surface parameters may also influence near-contact recombination phenomena. Therefore, the only precise approach is to verify the assumed cell electrical parameters by direct electrical measurement. Starting from theory and supported by experiments, this paper analyses, as a case study, problems in the measurement procedures and equipment used to acquire the electrical parameters of a photovoltaic silicon cell. A statistical appraisal of the measurement quality is also provided.

  9. Disability Measurement for Korean Community-Dwelling Adults With Stroke: Item-Level Psychometric Analysis of the Korean Longitudinal Study of Ageing

    PubMed Central

    2018-01-01

    Objective To investigate the psychometric properties of the activities of daily living (ADL) instrument used in the analysis of the Korean Longitudinal Study of Ageing (KLoSA) dataset. Methods A retrospective study was carried out involving 2006 KLoSA records of community-dwelling adults diagnosed with stroke. The ADL instrument used for the analysis of KLoSA included 17 items, which were analyzed using Rasch modeling to develop a robust outcome measure. The unidimensionality of the ADL instrument was examined based on confirmatory factor analysis with a one-factor model. Item-level psychometric analysis of the ADL instrument included fit statistics, internal consistency, precision, and the item difficulty hierarchy. Results The study sample included a total of 201 community-dwelling adults (1.5% of the Korean population with an age over 45 years; mean age=70.0 years, SD=9.7) having a history of stroke. The ADL instrument demonstrated a unidimensional construct. Two misfit items, money management (mean square [MnSq]=1.56, standardized Z-statistics [ZSTD]=2.3) and phone use (MnSq=1.78, ZSTD=2.3), were removed from the analysis. The remaining 15 items demonstrated good item fit, high internal consistency (person reliability=0.91), and good precision (person strata=3.48). The instrument precisely estimated person measures within a wide range of theta (−4.75 logits < θ < 3.97 logits) with a reliability of 0.9 and a conceptual hierarchy of item difficulty. Conclusion The findings indicate that the 15 ADL items met Rasch expectations of unidimensionality and demonstrated good psychometric properties. It is proposed that the validated ADL instrument can be used as a primary outcome measure for assessing longitudinal disability trajectories in the Korean adult population and can be employed for comparative analysis of international disability across national aging studies. PMID:29765888

  10. Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks

    NASA Astrophysics Data System (ADS)

    Karpov, Kirill; Fedotova, Irina; Siemens, Eduard

    2017-07-01

    In this paper we present a measurement study to characterize the impact of hardware virtualization on basic software timing, as well as on precise sleep operations of an operating system. We investigated how timer hardware is shared among heavily CPU-, I/O- and network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM have been chosen as commonly used examples of hypervisor- and host-based models. Based on statistical parameters of the retrieved distributions, our results provide a very good estimate of timing behavior, which is essential for real-time and performance-critical applications such as image processing or real-time control.
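
    A minimal version of such a timing measurement on the host side (request a fixed sleep, record the overshoot distribution) might look like the following; the requested duration and sample count are arbitrary:

    ```python
    import time
    import statistics

    def sleep_jitter(requested=0.001, n=1000):
        """Distribution of overshoot when asking the OS to sleep `requested` s."""
        errors = []
        for _ in range(n):
            t0 = time.perf_counter_ns()
            time.sleep(requested)
            elapsed = (time.perf_counter_ns() - t0) / 1e9
            errors.append(elapsed - requested)
        return statistics.mean(errors), statistics.stdev(errors), max(errors)

    print("mean/std/max overshoot (s):", sleep_jitter())
    ```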

  11. A novel spectrofluorimetric method for the assay of pseudoephedrine hydrochloride in pharmaceutical formulations via derivatization with 4-chloro-7-nitrobenzofurazan.

    PubMed

    El-Didamony, Akram M; Gouda, Ayman A

    2011-01-01

    A new highly sensitive and specific spectrofluorimetric method has been developed to determine the sympathomimetic drug pseudoephedrine hydrochloride. The present method was based on derivatization with 4-chloro-7-nitrobenzofurazan in phosphate buffer at pH 7.8 to produce a highly fluorescent product which was measured at 532 nm (excitation at 475 nm). Under the optimized conditions, a linear relationship with good correlation was found between the fluorescence intensity and pseudoephedrine hydrochloride concentration in the range of 0.5–5 µg mL⁻¹. The proposed method was successfully applied to the assay of pseudoephedrine hydrochloride in commercial pharmaceutical formulations with good accuracy and precision and without interference from common additives. Statistical comparison of the results with those of a well-established method showed excellent agreement and proved that there was no significant difference in accuracy and precision. The stoichiometry of the reaction was determined and the reaction pathway was postulated. Copyright © 2010 John Wiley & Sons, Ltd.
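
    The linearity, accuracy, and precision checks reported here (and in several of the following records) reduce to an ordinary least-squares calibration curve; a generic sketch with invented numbers:

    ```python
    import numpy as np

    conc = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])               # ug/mL standards
    signal = np.array([41.0, 83.5, 160.2, 244.8, 321.5, 404.1])   # intensities

    slope, intercept = np.polyfit(conc, signal, 1)
    r = np.corrcoef(conc, signal)[0, 1]                  # linearity (correlation)
    back_calc = (signal - intercept) / slope             # back-calculated conc.
    recovery = 100.0 * back_calc / conc                  # accuracy, %
    rsd = 100.0 * np.std(recovery, ddof=1) / np.mean(recovery)   # precision, %
    ```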

  12. A validated fast difference spectrophotometric method for 5-hydroxymethyl-2-furfural (HMF) determination in corn syrups.

    PubMed

    de Andrade, Jucimara Kulek; de Andrade, Camila Kulek; Komatsu, Emy; Perreault, Hélène; Torres, Yohandra Reyes; da Rosa, Marcos Roberto; Felsner, Maria Lurdes

    2017-08-01

    Corn syrups, important ingredients used in the food and beverage industries, often contain high levels of 5-hydroxymethyl-2-furfural (HMF), a toxic contaminant. In this work, an in-house validation of a difference spectrophotometric method for HMF analysis in corn syrups was developed, using sophisticated statistical tools for the first time. The methodology showed excellent analytical performance with good selectivity, linearity (R² = 99.9%, r > 0.99), accuracy and low limits (LOD = 0.10 mg L⁻¹ and LOQ = 0.34 mg L⁻¹). Excellent precision was confirmed by repeatability (RSD = 0.30%) and intermediate precision (RSD = 0.36%) estimates and by the HorRat value (0.07). A detailed study of method precision using a nested design demonstrated that variation sources such as instruments, operators and time did not interfere with the variability of results within the laboratory and consequently with its intermediate precision. The developed method is environmentally friendly, fast, cheap and easy to implement, making it an attractive alternative for corn syrup quality control in industries and official laboratories. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Flavanol Quantification of Grapes via Multiple Reaction Monitoring Mass Spectrometry. Application to Differentiation among Clones of Vitis vinifera L. cv. Rufete Grapes.

    PubMed

    García-Estévez, Ignacio; Alcalde-Eon, Cristina; Escribano-Bailón, M Teresa

    2017-08-09

    The determination of the detailed flavanol composition in food matrices is not a simple task because of the structural similarities of monomers and, consequently, oligomers and polymers. The aim of this study was the development and validation of an HPLC-MS/MS-multiple reaction monitoring (MRM) method that would allow the accurate and precise quantification of catechins, gallocatechins, and oligomeric proanthocyanidins. The high correlation coefficients of the calibration curves (>0.993), the recoveries not statistically different from 100%, the good intra- and interday precisions (<5%), and the LOD and LOQ values, low enough to quantify flavanols in grapes, are good results from the method validation procedure. Its usefulness has also been tested by determining the detailed composition of Vitis vinifera L. cv. Rufete grapes. Seventy-two (38 nongalloylated and 34 galloylated) and 53 (24 procyanidins and 29 prodelphinidins) flavanols have been identified and quantified in grape seed and grape skin, respectively. The use of HCA and PCA on the detailed flavanol composition has allowed differentiation among Rufete clones.

  14. Advancing Clinical Proteomics via Analysis Based on Biological Complexes: A Tale of Five Paradigms.

    PubMed

    Goh, Wilson Wen Bin; Wong, Limsoon

    2016-09-02

    Despite advances in proteomic technologies, idiosyncratic data issues, for example, incomplete coverage and inconsistency, resulting in large data holes, persist. Moreover, because of naïve reliance on statistical testing and its accompanying p values, differential protein signatures identified from such proteomics data have little diagnostic power. Thus, deploying conventional analytics on proteomics data is insufficient for identifying novel drug targets or precise yet sensitive biomarkers. Complex-based analysis is a new analytical approach that has potential to resolve these issues but requires formalization. We categorize complex-based analysis into five method classes or paradigms and propose an even-handed yet comprehensive evaluation rubric based on both simulated and real data. The first four paradigms are well represented in the literature. The fifth and newest paradigm, the network-paired (NP) paradigm, represented by a method called Extremely Small SubNET (ESSNET), dominates in precision-recall and reproducibility, maintains strong performance in small sample sizes, and sensitively detects low-abundance complexes. In contrast, the commonly used over-representation analysis (ORA) and direct-group (DG) test paradigms maintain good overall precision but have severe reproducibility issues. The other two paradigms considered here are the hit-rate and rank-based network analysis paradigms; both of these have good precision-recall and reproducibility, but they do not consider low-abundance complexes. Therefore, given its strong performance, NP/ESSNET may prove to be a useful approach for improving the analytical resolution of proteomics data. Additionally, given its stability, it may also be a powerful new approach toward functional enrichment tests, much like its ORA and DG counterparts.

  15. HPTLC Determination of Artemisinin and Its Derivatives in Bulk and Pharmaceutical Dosage

    NASA Astrophysics Data System (ADS)

    Agarwal, Suraj P.; Ahuja, Shipra

    A simple, selective, accurate, and precise high-performance thin-layer chromatographic (HPTLC) method has been established and validated for the analysis of artemisinin and its derivatives (artesunate, artemether, and arteether) in bulk drugs and formulations. Artemisinin, artesunate, artemether, and arteether were separated on aluminum-backed silica gel 60 F254 plates with toluene:ethyl acetate (10:1), toluene:ethyl acetate:acetic acid (2:8:0.2), toluene:butanol (10:1), and toluene:dichloromethane (0.5:10) mobile phases, respectively. The detector response for concentrations between 100 and 600 ng/spot showed a good linear relationship, with r values of 0.9967, 0.9989, 0.9981, and 0.9989 for artemisinin, artesunate, artemether, and arteether, respectively. Statistical analysis shows that the method is precise, accurate, and reproducible and hence can be employed for routine analysis.

  16. In vivo dosimetry with optically stimulated luminescent dosimeters for conformal and intensity-modulated radiation therapy: A 2-year multicenter cohort study.

    PubMed

    Riegel, Adam C; Chen, Yu; Kapur, Ajay; Apicello, Laura; Kuruvilla, Abraham; Rea, Anthony J; Jamshidi, Abolghassem; Potters, Louis

    Optically stimulated luminescent dosimeters (OSLDs) are utilized for in vivo dosimetry (IVD) of modern radiation therapy techniques such as intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). The dosimetric precision achieved with conventional techniques may not be attainable with these modulated techniques. In this work, we measured accuracy and precision for a large sample of clinical OSLD-based IVD measurements. Weekly IVD measurements were collected from 4 linear accelerators for 2 years and were expressed as percent differences from planned doses. After outlier analysis, 10,224 measurements were grouped in the following way: overall, modality (photons, electrons), treatment technique (3-dimensional [3D] conformal, field-in-field intensity modulation, inverse-planned IMRT, and VMAT), placement location (gantry angle, cardinality, and central axis positioning), and anatomical site (prostate, breast, head and neck, pelvis, lung, rectum and anus, brain, abdomen, esophagus, and bladder). Distributions were modeled via a Gaussian function. Fitting was performed with least squares, and goodness-of-fit was assessed with the coefficient of determination. Model means (μ) and standard deviations (σ) were calculated. Sample means and variances were compared for statistical significance by analysis of variance and the Levene tests (α = 0.05). Overall, μ ± σ was 0.3 ± 10.3%. Precision for electron measurements (6.9%) was significantly better than for photons (10.5%). Precision varied significantly among treatment techniques (P < .0001) with field-in-field lowest (σ = 7.2%) and IMRT and VMAT highest (σ = 11.9% and 13.4%, respectively). Treatment site models with goodness-of-fit greater than 0.90 (6 of 10) yielded accuracy within ±3%, except for head and neck (μ = -3.7%). Precision varied with treatment site (range, 7.3%-13.0%), with breast and head and neck yielding the best and worst precision, respectively. Placement on the central axis of cardinal gantry angles yielded more precise results (σ = 8.5%) compared with other locations (range, 10.5%-11.4%). Accuracy of ±3% was achievable. Precision ranged from 6.9% to 13.4% depending on modality, technique, and treatment site. Simple, standardized locations may improve IVD precision. These findings may aid development of patient-specific tolerances for OSLD-based IVD. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
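
    The distribution-modeling step is a least-squares Gaussian fit to the histogram of percent differences; a sketch on synthetic data drawn to match the reported overall μ ± σ (the real measurements are not reproduced here):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(x, amp, mu, sigma):
        return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    rng = np.random.default_rng(7)
    diffs = rng.normal(0.3, 10.3, size=5000)     # synthetic IVD % differences

    counts, edges = np.histogram(diffs, bins=60)
    centers = 0.5 * (edges[:-1] + edges[1:])
    (amp, mu, sigma), _ = curve_fit(gauss, centers, counts,
                                    p0=[counts.max(), 0.0, 10.0])
    ss_res = np.sum((counts - gauss(centers, amp, mu, sigma)) ** 2)
    r2 = 1.0 - ss_res / np.sum((counts - counts.mean()) ** 2)   # goodness-of-fit
    ```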

  17. Differential cross sections and recoil polarizations for the reaction γp → K⁺Σ⁰

    DOE PAGES

    Dey, B.; Meyer, C. A.; Bellis, M.; ...

    2010-08-06

    Here, high-statistics measurements of differential cross sections and recoil polarizations for the reaction $\gamma p \rightarrow K^+ \Sigma^0$ have been obtained using the CLAS detector at Jefferson Lab. We cover center-of-mass energies ($\sqrt{s}$) from 1.69 to 2.84 GeV, with an extensive coverage in the $K^+$ production angle. Independent measurements were made using the $K^{+}p\pi^{-}(\gamma)$ and $K^{+}p(\pi^-,\gamma)$ final-state topologies, and were found to exhibit good agreement. Our differential cross sections show good agreement with earlier CLAS, SAPHIR and LEPS results, while offering better statistical precision and a 300-MeV increase in $\sqrt{s}$ coverage. Above $\sqrt{s} \approx 2.5$ GeV, $t$- and $u$-channel Regge scaling behavior can be seen at forward and backward angles, respectively. Our recoil polarization ($P_\Sigma$) measurements represent a substantial increase in kinematic coverage and enhanced precision over previous world data. At forward angles we find that $P_\Sigma$ is of the same magnitude but opposite sign as $P_\Lambda$, in agreement with the static SU(6) quark model prediction of $P_\Sigma \approx -P_\Lambda$. This expectation is violated in some mid- and backward-angle kinematic regimes, where $P_\Sigma$ and $P_\Lambda$ are of similar magnitudes but also have the same signs. In conjunction with several other meson photoproduction results recently published by CLAS, the present data will help constrain the partial wave analyses being performed to search for missing baryon resonances.

  18. A preliminary study on identification of Thai rice samples by INAA and statistical analysis

    NASA Astrophysics Data System (ADS)

    Kongsri, S.; Kukusamude, C.

    2017-09-01

    This study aims to investigate the elemental compositions of 93 Thai rice samples using instrumental neutron activation analysis (INAA) and to identify rice according to type and cultivar using statistical analysis. As, Mg, Cl, Al, Br, Mn, K, Rb and Zn in Thai jasmine rice and Sung Yod rice samples were successfully determined by INAA. The accuracy and precision of the INAA method were verified with SRM 1568a Rice Flour. All elements were found to be in good agreement with the certified values. The precisions in terms of %RSD were lower than 7%. The LODs were in the range of 0.01 to 29 mg kg⁻¹. The concentrations of the 9 elements in the Thai rice samples were evaluated and used as chemical indicators to identify the type of rice sample. The results show that Mg, Cl, As, Br, Mn, K, Rb, and Zn concentrations differ significantly between the Thai jasmine and Sung Yod rice samples, but there was no evidence at the 95% confidence level that Al concentrations differ. Our results may provide preliminary information for discrimination of rice samples and may form a useful database of Thai rice.
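
    The per-element significance comparison between cultivars is a standard one-way ANOVA; an illustrative sketch with invented Mn concentrations (with two groups this is equivalent to a t-test):

    ```python
    import numpy as np
    from scipy import stats

    jasmine = np.array([10.2, 11.1, 9.8, 10.7, 10.9, 11.4])    # mg/kg, invented
    sung_yod = np.array([14.3, 13.8, 15.1, 14.0, 14.9, 13.5])  # mg/kg, invented

    f_stat, p_value = stats.f_oneway(jasmine, sung_yod)
    print(f"F = {f_stat:.1f}, p = {p_value:.4f}, "
          f"significant at 95%: {p_value < 0.05}")
    ```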

  19. Evaluation of graphical and statistical representation of analytical signals of spectrophotometric methods

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam Mahmoud; Fayez, Yasmin Mohammed; Tawakkol, Shereen Mostafa; Fahmy, Nesma Mahmoud; Shehata, Mostafa Abd El-Atty

    2017-09-01

    Methods are presented for the simultaneous determination of miconazole (MIC), mometasone furoate (MF), and gentamicin (GEN) in their pharmaceutical combination. Gentamicin determination is based on derivatization with o-phthalaldehyde reagent (OPA) without any interference from the other cited drugs, while the spectra of MIC and MF are resolved using both successive and progressive resolution techniques. The first derivative spectrum of MF is measured using constant multiplication or spectrum subtraction, while its recovered zero-order spectrum is obtained using derivative transformation, besides the application of the constant value method. The zero-order spectrum of MIC is obtained by derivative transformation after getting its first derivative spectrum by the derivative subtraction method. A novel method, namely differential amplitude modulation, is used to obtain the concentrations of MF and MIC, while a novel graphical method, namely concentration value, is used to obtain the concentrations of MIC, MF, and GEN. Accuracy and precision testing of the developed methods show good results. Specificity of the methods is ensured, and they are successfully applied for the analysis of the pharmaceutical formulation of the three drugs in combination. ICH guidelines are used for validation of the proposed methods. Statistical data are calculated, and the results are satisfactory, revealing no significant difference regarding accuracy and precision.

  20. Simultaneous quantitative determination of paracetamol and tramadol in tablet formulation using UV spectrophotometry and chemometric methods

    NASA Astrophysics Data System (ADS)

    Glavanović, Siniša; Glavanović, Marija; Tomišić, Vladislav

    2016-03-01

    The UV spectrophotometric methods for simultaneous quantitative determination of paracetamol and tramadol in paracetamol-tramadol tablets were developed. The spectrophotometric data obtained were processed by means of partial least squares (PLS) and genetic algorithm coupled with PLS (GA-PLS) methods in order to determine the content of active substances in the tablets. The results gained by chemometric processing of the spectroscopic data were statistically compared with those obtained by means of validated ultra-high performance liquid chromatographic (UHPLC) method. The accuracy and precision of data obtained by the developed chemometric models were verified by analysing the synthetic mixture of drugs, and by calculating recovery as well as relative standard error (RSE). A statistically good agreement was found between the amounts of paracetamol determined using PLS and GA-PLS algorithms, and that obtained by UHPLC analysis, whereas for tramadol GA-PLS results were proven to be more reliable compared to those of PLS. The simplest and the most accurate and precise models were constructed by using the PLS method for paracetamol (mean recovery 99.5%, RSE 0.89%) and the GA-PLS method for tramadol (mean recovery 99.4%, RSE 1.69%).
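
    A compact sketch of the PLS calibration step on synthetic spectra (the genetic-algorithm wavelength selection of GA-PLS is omitted; data and parameters are invented):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    conc = rng.uniform(5.0, 50.0, size=40)                 # analyte content
    wl = np.linspace(220, 300, 81)                         # wavelengths, nm
    band = np.exp(-0.5 * ((wl - 257) / 12) ** 2)           # synthetic UV band
    X = np.outer(conc, band) + rng.normal(0, 0.05, (40, 81))

    pls = PLSRegression(n_components=3)
    pred = cross_val_predict(pls, X, conc, cv=5).ravel()
    recovery = 100.0 * pred.mean() / conc.mean()           # mean recovery, %
    rse = 100.0 * np.sqrt(np.mean((pred - conc) ** 2)) / conc.mean()
    ```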

  1. Removing the Impact of Correlated PSF Uncertainties in Weak Lensing

    NASA Astrophysics Data System (ADS)

    Lu, Tianhuan; Zhang, Jun; Dong, Fuyu; Li, Yingke; Liu, Dezi; Fu, Liping; Li, Guoliang; Fan, Zuhui

    2018-05-01

    Accurate reconstruction of the spatial distributions of the point-spread function (PSF) is crucial for high precision cosmic shear measurements. Nevertheless, current methods are not good at recovering the PSF fluctuations of high spatial frequencies. In general, the residual PSF fluctuations are spatially correlated, and therefore can significantly contaminate the correlation functions of the weak lensing signals. We propose a method to correct for this contamination statistically, without any assumptions on the PSF and galaxy morphologies or their spatial distribution. We demonstrate our idea with the data from the W2 field of CFHTLenS.

  2. Corpus-based Statistical Screening for Phrase Identification

    PubMed Central

    Kim, Won; Wilbur, W. John

    2000-01-01

    Purpose: The authors study the extraction of useful phrases from a natural language database by statistical methods. The aim is to leverage human effort by providing preprocessed phrase lists with a high percentage of useful material. Method: The approach is to develop six different scoring methods that are based on different aspects of phrase occurrence. The emphasis here is not on lexical information or syntactic structure but rather on the statistical properties of word pairs and triples that can be obtained from a large database. Measurements: The Unified Medical Language System (UMLS) incorporates a large list of humanly acceptable phrases in the medical field as a part of its structure. The authors use this list of phrases as a gold standard for validating their methods. A good method is one that ranks the UMLS phrases high among all phrases studied. Measurements are 11-point average precision values and precision-recall curves based on the rankings. Result: The authors find that each of the six scoring methods proves effective in identifying UMLS-quality phrases in a large subset of MEDLINE. These methods are applicable both to word pairs and word triples. All six methods are optimally combined to produce composite scoring methods that are more effective than any single method. The quality of the composite methods appears sufficient to support the automatic placement of hyperlinks in text at the site of highly ranked phrases. Conclusion: Statistical scoring methods provide a promising approach to the extraction of useful phrases from a natural language database for the purpose of indexing or providing hyperlinks in text. PMID:10984469
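
    One of the simplest members of this family of co-occurrence scores is pointwise mutual information over adjacent word pairs; a toy sketch (the paper's six scores are more elaborate, and the function below is our illustration):

    ```python
    import math
    from collections import Counter

    def pmi_scores(docs):
        """Pointwise mutual information for adjacent word pairs."""
        words, pairs = Counter(), Counter()
        for doc in docs:
            toks = doc.lower().split()
            words.update(toks)
            pairs.update(zip(toks, toks[1:]))
        n_w, n_p = sum(words.values()), sum(pairs.values())
        scores = {}
        for (a, b), c in pairs.items():
            scores[(a, b)] = math.log((c / n_p) /
                                      ((words[a] / n_w) * (words[b] / n_w)))
        return scores

    scores = pmi_scores(["blood pressure was measured",
                         "high blood pressure is common"])
    # e.g. inspect scores[('blood', 'pressure')], a pair recurring in both docs
    ```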

  3. Collective flow measurements with HADES in Au+Au collisions at 1.23A GeV

    NASA Astrophysics Data System (ADS)

    Kardan, Behruz; Hades Collaboration

    2017-11-01

    HADES has a large acceptance combined with a good mass-resolution and therefore allows the study of dielectron and hadron production in heavy-ion collisions with unprecedented precision. With the statistics of seven billion Au-Au collisions at 1.23A GeV recorded in 2012, the investigation of higher-order flow harmonics is possible. At the BEVALAC and SIS18 directed and elliptic flow has been measured for pions, charged kaons, protons, neutrons and fragments, but higher-order harmonics have not yet been studied. They provide additional important information on the properties of the dense hadronic medium produced in heavy-ion collisions. We present here a high-statistics, multidifferential measurement of v1 and v2 for protons in Au+Au collisions at 1.23A GeV.
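
    For reference, the flow harmonics quoted here are azimuthal Fourier coefficients relative to the reaction plane, v_n = <cos n(φ − Ψ_RP)>; a sketch on a synthetic particle sample (event-plane resolution corrections, which a real analysis needs, are omitted):

    ```python
    import numpy as np

    def flow_coefficients(phi, psi_rp, n_max=2):
        """v_n = <cos(n * (phi - psi_RP))> for n = 1 .. n_max."""
        dphi = np.asarray(phi) - psi_rp
        return [float(np.mean(np.cos(n * dphi))) for n in range(1, n_max + 1)]

    # synthetic sample with v2 = 0.05 injected via acceptance-rejection
    rng = np.random.default_rng(3)
    phi = rng.uniform(-np.pi, np.pi, 200_000)
    keep = rng.uniform(0, 1, phi.size) < (1 + 0.10 * np.cos(2 * phi)) / 1.10
    v1, v2 = flow_coefficients(phi[keep], psi_rp=0.0)   # v2 comes out near 0.05
    ```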

  4. Statistical and temporal irradiance fluctuations modeling for a ground-to-geostationary satellite optical link.

    PubMed

    Camboulives, A-R; Velluet, M-T; Poulenard, S; Saint-Antonin, L; Michau, V

    2018-02-01

    The performance of an optical communication link between the ground and a geostationary satellite can be impaired by scintillation, beam wandering, and beam spreading caused by propagation through atmospheric turbulence. These effects on the link performance can be mitigated by tracking and by error correction codes coupled with interleaving. Precise numerical tools capable of describing the irradiance fluctuations statistically and of creating an irradiance time series are needed to characterize the benefits of these techniques and to optimize them. Wave-optics propagation methods have proven capable of modeling the effects of atmospheric turbulence on a beam, but they are known to be computationally intensive. We present an analytical-numerical model which provides good results for the probability density functions of irradiance fluctuations as well as a time series, with an important saving of time and computational resources.

  5. Measurements of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes.

    PubMed

    Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G

    2016-05-09

    The aim of this study was to evaluate the suitability of various statistics as measures of the degree of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. By these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) for evaluating experimental precision in trials with cowpea genotypes.

  6. Statistical precision of the intensities retrieved from constrained fitting of overlapping peaks in high-resolution mass spectra

    DOE PAGES

    Cubison, M. J.; Jimenez, J. L.

    2015-06-05

    Least-squares fitting of overlapping peaks is often needed to separately quantify ions in high-resolution mass spectrometer data. A statistical simulation approach is used to assess the statistical precision of the retrieved peak intensities. The sensitivity of the fitted peak intensities to statistical noise due to ion counting is probed for synthetic data systems consisting of two overlapping ion peaks whose positions are pre-defined and fixed in the fitting procedure. The fitted intensities are sensitive to imperfections in the m/Q calibration. These propagate as a limiting precision in the fitted intensities that may greatly exceed the precision arising from counting statistics. The precision on the fitted peak intensity falls into one of three regimes. In the "counting-limited regime" (regime I), above a peak separation χ ~ 2 to 3 half-widths at half-maximum (HWHM), the intensity precision is similar to that due to counting error for an isolated ion. For smaller χ and higher ion counts (~ 1000 and higher), the intensity precision rapidly degrades as the peak separation is reduced ("calibration-limited regime", regime II). Alternatively, for χ < 1.6 but lower ion counts (e.g. 10–100), the intensity precision is dominated by the additional ion count noise from the overlapping ion and is not affected by the imprecision in the m/Q calibration ("overlapping-limited regime", regime III). The transition between the counting and m/Q calibration-limited regimes is shown to be weakly dependent on resolving power and data spacing and can thus be approximated by a simple parameterisation based only on peak intensity ratios and separation. A simple equation can be used to find potentially problematic ion pairs when evaluating results from fitted spectra containing many ions. Longer integration times can improve the precision in regimes I and III, but a given ion pair can only be moved out of regime II through increased spectrometer resolving power. As a result, studies presenting data obtained from least-squares fitting procedures applied to mass spectral peaks should explicitly consider these limits on statistical precision.
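
    With peak positions and shapes fixed, each fit in such a simulation is linear in the intensities; a condensed Monte Carlo sketch with Gaussian peak shapes and arbitrary parameters (this reproduces the counting-noise regimes I and III; regime II would additionally require jittering the assumed peak positions before fitting):

    ```python
    import numpy as np

    x = np.linspace(-5, 5, 400)                      # m/Q axis, arbitrary units
    hwhm = 1.0
    sigma = hwhm / np.sqrt(2 * np.log(2))
    sep = 1.5 * hwhm                                 # separation chi, in HWHM
    shape = lambda mu: np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    A = np.column_stack([shape(-sep / 2), shape(+sep / 2)])  # fixed peak shapes

    rng = np.random.default_rng(11)
    true = np.array([1000.0, 300.0])                 # true peak intensities
    fits = np.array([np.linalg.lstsq(A, rng.poisson(A @ true), rcond=None)[0]
                     for _ in range(500)])           # counting noise only
    rel_precision = fits.std(axis=0) / fits.mean(axis=0)
    ```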

  7. Thermospheric density estimation and responses to the March 2013 geomagnetic storm from GRACE GPS-determined precise orbits

    NASA Astrophysics Data System (ADS)

    Calabia, Andres; Jin, Shuanggen

    2017-02-01

    The thermospheric mass density variations and the thermosphere-ionosphere coupling during geomagnetic storms are not clear due to the lack of observables and large uncertainty in the models. Although accelerometers on board Low-Earth-Orbit (LEO) satellites can measure non-gravitational accelerations and derive thermospheric mass density variations with unprecedented detail, their measurements are not always available (e.g., for the March 2013 geomagnetic storm). In order to cover accelerometer data gaps of the Gravity Recovery and Climate Experiment (GRACE), we estimate thermospheric mass densities by numerical differentiation of GRACE-determined precise orbit ephemeris (POE) for the period 2011-2016. Our results show good correlation with accelerometer-based mass densities, and a better estimation than the NRLMSISE00 empirical model. Furthermore, we statistically analyze the differences to accelerometer-based densities, and study the March 2013 geomagnetic storm response. The thermospheric density enhancements at the polar regions on 17 March 2013 are clearly represented by the POE-based measurements. Although our results show that density variations correlate better with Dst and k-derived geomagnetic indices overall, the auroral electrojet activity index AE as well as the merging electric field Em show better agreement at high latitudes for the March 2013 geomagnetic storm. On the other hand, low-latitude variations are better represented with the Dst index. With the increasing resolution and accuracy of Precise Orbit Determination (POD) products and LEO satellites, the straightforward technique of determining non-gravitational accelerations and thermospheric mass densities through numerical differentiation of POE promises good applications for the upper atmosphere research community.

  8. Validation of the Filovirus Plaque Assay for Use in Preclinical Studies

    PubMed Central

    Shurtleff, Amy C.; Bloomfield, Holly A.; Mort, Shannon; Orr, Steven A.; Audet, Brian; Whitaker, Thomas; Richards, Michelle J.; Bavari, Sina

    2016-01-01

    A plaque assay for quantitating filoviruses in virus stocks, prepared viral challenge inocula and samples from research animals has recently been fully characterized and standardized for use across multiple institutions performing Biosafety Level 4 (BSL-4) studies. After standardization studies were completed, Good Laboratory Practices (GLP)-compliant plaque assay method validation studies to demonstrate suitability for reliable and reproducible measurement of the Marburg Virus Angola (MARV) variant and Ebola Virus Kikwit (EBOV) variant commenced at the United States Army Medical Research Institute of Infectious Diseases (USAMRIID). The validation parameters tested included accuracy, precision, linearity, robustness, stability of the virus stocks and system suitability. The MARV and EBOV assays were confirmed to be accurate to ±0.5 log10 PFU/mL. Repeatability precision, intermediate precision and reproducibility precision were sufficient to return viral titers with a coefficient of variation (%CV) of ≤30%, deemed acceptable variation for a cell-based bioassay. Intraclass correlation statistical techniques for the evaluation of the assay’s precision when the same plaques were quantitated by two analysts returned values passing the acceptance criteria, indicating high agreement between analysts. The assay was shown to be accurate and specific when run on Nonhuman Primates (NHP) serum and plasma samples diluted in plaque assay medium, with negligible matrix effects. Virus stocks demonstrated stability for freeze-thaw cycles typical of normal usage during assay retests. The results demonstrated that the EBOV and MARV plaque assays are accurate, precise and robust for filovirus titration in samples associated with the performance of GLP animal model studies. PMID:27110807

  9. Survey of A_LT' asymmetries in semi-exclusive electron scattering on 4He and 12C

    NASA Astrophysics Data System (ADS)

    Protopopescu, D.; Hersman, F. W.; Holtrop, M.; Adams, G.; Ambrozewicz, P.; Anciant, E.; Anghinolfi, M.; Asavapibhop, B.; Asryan, G.; Audit, G.; Auger, T.; Avakian, H.; Bagdasaryan, H.; Ball, J. P.; Barrow, S.; Battaglieri, M.; Beard, K.; Bektasoglu, M.; Bellis, M.; Benmouna, N.; Berman, B. L.; Bertozzi, W.; Bianchi, N.; Biselli, A. S.; Boiarinov, S.; Bonner, B. E.; Bouchigny, S.; Bradford, R.; Branford, D.; Briscoe, W. J.; Brooks, W. K.; Burkert, V. D.; Butuceanu, C.; Calarco, J. R.; Carman, D. S.; Carnahan, B.; Cetina, C.; Chen, S.; Cole, P. L.; Coleman, A.; Cords, D.; Corvisiero, P.; Crabb, D.; Crannell, H.; Cummings, J. P.; Debruyne, D.; De Sanctis, E.; DeVita, R.; Degtyarenko, P. V.; Dennis, L.; Dharmawardane, K. V.; Dhuga, K. S.; Djalali, C.; Dodge, G. E.; Doughty, D.; Dragovitsch, P.; Dugger, M.; Dytman, S.; Dzyubak, O. P.; Egiyan, H.; Egiyan, K. S.; Elouadrhiri, L.; Empl, A.; Eugenio, P.; Fatemi, R.; Feuerbach, R. J.; Forest, T. A.; Funsten, H.; Gavalian, G.; Gilad, S.; Gilfoyle, G. P.; Giovanetti, K. L.; Girard, P.; Gordon, C. I. O.; Gothe, R. W.; Griffioen, K. A.; Guidal, M.; Guillo, M.; Guler, N.; Guo, L.; Gyurjyan, V.; Hadjidakis, C.; Hakobyan, R. S.; Hardie, J.; Heddle, D.; Hicks, K.; Hleiqawi, I.; Hu, J.; Hyde-Wright, C. E.; Ingram, W.; Ireland, D.; Ito, M. M.; Jenkins, D.; Joo, K.; Juengst, H. G.; Kelley, J. H.; Kellie, J. D.; Khandaker, M.; Kim, K. Y.; Kim, K.; Kim, W.; Klein, A.; Klein, F. J.; Klimenko, A. V.; Klusman, M.; Kossov, M.; Kramer, L. H.; Kuhn, S. E.; Kuhn, J.; Lachniet, J.; Laget, J. M.; Langheinrich, J.; Lawrence, D.; Lee, T.; Li, Ji; Livingston, K.; Lukashin, K.; Manak, J. J.; Marchand, C.; McAleer, S.; McLauchlan, S. T.; McNabb, J. W. C.; Mecking, B. A.; Melone, J. J.; Mestayer, M. D.; Meyer, C. A.; Mikhailov, K.; Minehart, R.; Mirazita, M.; Miskimen, R.; Morand, L.; Morrow, S. A.; Muccifora, V.; Mueller, J.; Mutchler, G. S.; Napolitano, J.; Nasseripour, R.; Nelson, S. O.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Niczyporuk, B. B.; Niyazov, R. A.; Nozar, M.; O'Rielly, G. V.; Osipenko, M.; Ostrovidov, A.; Park, K.; Pasyuk, E.; Peterson, G.; Philips, S. A.; Pivnyuk, N.; Pocanic, D.; Pogorelko, O.; Polli, E.; Pozdniakov, S.; Preedom, B. M.; Price, J. W.; Prok, Y.; Qin, L. M.; Raue, B. A.; Riccardi, G.; Ricco, G.; Ripani, M.; Ritchie, B. G.; Ronchetti, F.; Rosner, G.; Rossi, P.; Rowntree, D.; Rubin, P. D.; Ryckebusch, J.; Sabatié, F.; Sabourov, K.; Salgado, C.; Santoro, J. P.; Sapunenko, V.; Schumacher, R. A.; Serov, V. S.; Sharabian, Y. G.; Shaw, J.; Simionatto, S.; Skabelin, A. V.; Smith, E. S.; Smith, L. C.; Sober, D. I.; Spraker, M.; Stavinsky, A.; Stepanyan, S.; Stokes, B. E.; Stoler, P.; Strauch, S.; Taiuti, M.; Taylor, S.; Tedeschi, D. J.; Thoma, U.; Thompson, R.; Tkabladze, A.; Todor, L.; Tur, C.; Ungaro, M.; Vineyard, M. F.; Vlassov, A. V.; Wang, K.; Weinstein, L. B.; Weller, H.; Weygand, D. P.; Whisnant, C. S.; Williams, M.; Wolin, E.; Wood, M. H.; Yegneswaran, A.; Yun, J.; Zana, L.; Zhang, B.; CLAS Collaboration

    2005-02-01

    Single-spin azimuthal asymmetries A_LT' were measured at Jefferson Lab using 2.2 and 4.4 GeV longitudinally polarised electrons incident on 4He and 12C targets in the CLAS detector. A_LT' is related to the imaginary part of the longitudinal-transverse interference, and in quasifree nucleon knockout it provides an unambiguous signature for final state interactions (FSI). Experimental values of A_LT' were found to be below 5%, typically |A_LT'| ⩽ 3% for data with good statistical precision. Optical model in eikonal approximation (OMEA) and relativistic multiple-scattering Glauber approximation (RMSGA) calculations are shown to be consistent with the measured asymmetries.

  10. Simultaneous Determination of Ofloxacin and Flavoxate Hydrochloride by Absorption Ratio and Second Derivative UV Spectrophotometry

    PubMed Central

    Attimarad, Mahesh

    2010-01-01

    The objective of this study was to develop simple, precise, accurate and sensitive UV spectrophotometric methods for the simultaneous determination of ofloxacin (OFX) and flavoxate HCl (FLX) in pharmaceutical formulations. The first method is based on the absorption ratio method, forming the Q-absorbance equation at 289 nm (λmax of OFX) and 322.4 nm (the isoabsorptive point). The linearity range was found to be 1 to 30 μg/ml for FLX and OFX. In method II, second-derivative absorption at 311.4 nm for OFX (zero crossing for FLX) and at 246.2 nm for FLX (zero crossing for OFX) was used for the determination of the drugs, and the linearity range was found to be 2 to 30 μg/ml for OFX and 2-75 μg/ml for FLX. The accuracy and precision of the methods were determined and validated statistically. Both methods showed good reproducibility and recovery with %RSD less than 1.5%. Both methods were found to be rapid, specific, precise and accurate and can be successfully applied for the routine analysis of OFX and FLX in combined dosage form. PMID:24826003

  11. Statistics of lattice animals

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping; Nadler, Walter; Grassberger, Peter

    2005-07-01

    The scaling behavior of randomly branched polymers in a good solvent is studied in two to nine dimensions, modeled by lattice animals on simple hypercubic lattices. For the simulations, we use a biased sequential sampling algorithm with re-sampling, similar to the pruned-enriched Rosenbluth method (PERM) used extensively for linear polymers. We obtain high statistics for animals with up to several thousand sites in all dimensions 2 ⩽ d ⩽ 9. The partition sum (number of different animals) and gyration radii are estimated. In all dimensions we verify the Parisi-Sourlas prediction, and we verify all exactly known critical exponents in dimensions 2, 3, 4, and ⩾ 8. In addition, we present the hitherto most precise estimates for growth constants in d ⩾ 3. For clusters with one site attached to an attractive surface, we verify the superuniversality of the cross-over exponent at the adsorption transition predicted by Janssen and Lyssy.

  12. Extractive-spectrophotometric determination of disopyramide and irbesartan in their pharmaceutical formulation

    NASA Astrophysics Data System (ADS)

    Abdellatef, Hisham E.

    2007-04-01

    Picric acid, bromocresol green, bromothymol blue, cobalt thiocyanate and molybdenum(V) thiocyanate have been tested as spectrophotometric reagents for the determination of disopyramide and irbesartan. Reaction conditions were optimized to obtain coloured complexes of higher sensitivity and longer stability. The absorbance of the ion-pair complexes formed was found to increase linearly with increasing concentrations of disopyramide and irbesartan, as corroborated by the correlation coefficient values. The developed methods have been successfully applied to the determination of disopyramide and irbesartan in bulk drugs and pharmaceutical formulations. The common excipients and additives did not interfere in their determination. The results obtained by the proposed methods were statistically compared by means of Student's t-test and the variance-ratio F-test. The validity was assessed by applying the standard addition technique. The results were compared statistically with those of the official or reference methods, showing good agreement with high precision and accuracy.

  13. PRECISE: PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare

    PubMed Central

    Chen, Feng; Wang, Shuang; Mohammed, Noman; Cheng, Samuel; Jiang, Xiaoqian

    2015-01-01

    Quality improvement (QI) requires systematic and continuous efforts to enhance healthcare services. A healthcare provider might wish to compare local statistics with those from other institutions in order to identify problems and develop intervention to improve the quality of care. However, the sharing of institution information may be deterred by institutional privacy as publicizing such statistics could lead to embarrassment and even financial damage. In this article, we propose a PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare (PRECISE), which aims at enabling cross-institution comparison of healthcare statistics while protecting privacy. The proposed framework relies on a set of state-of-the-art cryptographic protocols including homomorphic encryption and Yao’s garbled circuit schemes. By securely pooling data from different institutions, PRECISE can rank the encrypted statistics to facilitate QI among participating institutes. We conducted experiments using MIMIC II database and demonstrated the feasibility of the proposed PRECISE framework. PMID:26146645

  14. PRECISE: PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare.

    PubMed

    Chen, Feng; Wang, Shuang; Mohammed, Noman; Cheng, Samuel; Jiang, Xiaoqian

    2014-10-01

    Quality improvement (QI) requires systematic and continuous efforts to enhance healthcare services. A healthcare provider might wish to compare local statistics with those from other institutions in order to identify problems and develop intervention to improve the quality of care. However, the sharing of institution information may be deterred by institutional privacy as publicizing such statistics could lead to embarrassment and even financial damage. In this article, we propose a PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare (PRECISE), which aims at enabling cross-institution comparison of healthcare statistics while protecting privacy. The proposed framework relies on a set of state-of-the-art cryptographic protocols including homomorphic encryption and Yao's garbled circuit schemes. By securely pooling data from different institutions, PRECISE can rank the encrypted statistics to facilitate QI among participating institutes. We conducted experiments using MIMIC II database and demonstrated the feasibility of the proposed PRECISE framework.

  15. A Monte Carlo Simulation Comparing the Statistical Precision of Two High-Stakes Teacher Evaluation Methods: A Value-Added Model and a Composite Measure

    ERIC Educational Resources Information Center

    Spencer, Bryden

    2016-01-01

    Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…

  16. A targeted metabolomic protocol for short-chain fatty acids and branched-chain amino acids.

    PubMed

    Zheng, Xiaojiao; Qiu, Yunping; Zhong, Wei; Baxter, Sarah; Su, Mingming; Li, Qiong; Xie, Guoxiang; Ore, Brandon M; Qiao, Shanlei; Spencer, Melanie D; Zeisel, Steven H; Zhou, Zhanxiang; Zhao, Aihua; Jia, Wei

    2013-08-01

    Research in obesity and metabolic disorders that involve the intestinal microbiota demands reliable methods for the precise measurement of short-chain fatty acid (SCFA) and branched-chain amino acid (BCAA) concentrations. Here, we report a rapid method for simultaneously determining SCFAs and BCAAs in biological samples using propyl chloroformate (PCF) derivatization followed by gas chromatography mass spectrometry (GC-MS) analysis. A one-step derivatization using 100 µL of PCF in a reaction system of water, propanol, and pyridine (v/v/v = 8:3:2) at pH 8 provided the optimal derivatization efficiency. The best extraction efficiency of the derivatized products was achieved by a two-step extraction with hexane. The method exhibited good derivatization efficiency and recovery for a wide range of concentrations with a low limit of detection for each compound. The relative standard deviations (RSDs) of all targeted compounds showed good intra- and inter-day (within 7 days) precision (< 10%), and good stability (< 20%) within 4 days at room temperature (23-25 °C), or 7 days when stored at -20 °C. We applied our method to measure SCFA and BCAA levels in fecal samples from rats administered different diets. Both univariate and multivariate statistical analyses of the concentrations of these target metabolites could differentiate three groups with ethanol intervention and different dietary oils. This method was also successfully employed to determine SCFAs and BCAAs in feces, plasma and urine from normal humans, providing important baseline information on the concentrations of these metabolites. This novel metabolic profiling approach has great potential for translational research.

  17. Evaluating flow cytometer performance with weighted quadratic least squares analysis of LED and multi-level bead data

    PubMed Central

    Parks, David R.; Khettabi, Faysal El; Chase, Eric; Hoffman, Robert A.; Perfetto, Stephen P.; Spidlen, Josef; Wood, James C.S.; Moore, Wayne A.; Brinkman, Ryan R.

    2017-01-01

    We developed a fully automated procedure for analyzing data from LED pulses and multi-level bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all of the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than for multi-level bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fits the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. PMID:28160404
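
    The fit at the heart of the procedure models the signal variance at each LED or bead level as quadratic in the mean; a weighted least-squares sketch (summary statistics invented; the flowQB package implements the full version with standard errors and residuals):

    ```python
    import numpy as np

    mu = np.array([50., 200., 800., 3000., 12000., 48000.])    # level means
    var = np.array([1.2e3, 2.6e3, 9.5e3, 3.4e4, 1.4e5, 6.8e5]) # level variances

    # Var(S) ~ b0 + b1*S + b2*S^2; weight each point by ~1/Var^2,
    # approximating the sampling variance of a sample variance
    V = np.vander(mu, 3, increasing=True)            # columns: 1, mu, mu^2
    sw = 1.0 / var                                   # sqrt of weights 1/var^2
    b, *_ = np.linalg.lstsq(V * sw[:, None], var * sw, rcond=None)
    b0, b1, b2 = b
    spe_per_unit = 1.0 / b1    # statistical photoelectrons per intensity unit,
                               # read off the Poisson-like linear term
    ```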

  18. Development and Validation of RP-LC Method for the Determination of Cinnarizine/Piracetam and Cinnarizine/Heptaminol Acefyllinate in Presence of Cinnarizine Reported Degradation Products

    PubMed Central

    EL-Houssini, Ola M.; Zawilla, Nagwan H.; Mohammad, Mohammad A.

    2013-01-01

    A specific stability-indicating reverse-phase liquid chromatography (RP-LC) assay method (SIAM) was developed for the determination of cinnarizine (Cinn)/piracetam (Pira) and cinnarizine (Cinn)/heptaminol acefyllinate (Hept) in the presence of the reported degradation products of Cinn. A C18 column and a gradient mobile phase were used to achieve good resolution of all peaks. Detection was performed at 210 nm and 254 nm for Cinn/Pira and Cinn/Hept, respectively. The responses were linear over concentration ranges of 20–200, 20–1000, and 25–1000 μg mL−1 for Cinn, Pira, and Hept, respectively. The proposed method was validated for linearity, accuracy, repeatability, intermediate precision, and robustness via statistical analysis of the data. The method was shown to be precise, accurate, reproducible, sensitive, and selective for the analysis of Cinn/Pira and Cinn/Hept in laboratory-prepared mixtures and in pharmaceutical formulations. PMID:24137049

  19. Probing the top-quark width using the charge identification of b jets

    DOE PAGES

    Giardino, Pier Paolo; Zhang, Cen

    2017-07-18

    We propose a new method for measuring the top-quark width based on the on-/off-shell ratio of b-charge asymmetry in pp → Wbj production at the LHC. The charge asymmetry removes virtually all backgrounds and related uncertainties, while the remaining systematic and theoretical uncertainties can be kept under control by the ratio of cross sections. Limited only by statistical error, in an optimistic scenario we find that our approach leads to good precision at high integrated luminosity, at a few hundred MeV assuming 300-3000 fb−1 at the LHC. The approach directly probes the total width, in such a way that model-dependence can be minimized. It is complementary to existing cross section measurements, which always leave a degeneracy between the total rate and the branching ratio, and provides valuable information about the properties of the top quark. The proposal opens up new opportunities for precision top measurements using a b-charge identification algorithm.

  20. An empirical determination of the minimum number of measurements needed to estimate the mean random vitrinite reflectance of disseminated organic matter

    USGS Publications Warehouse

    Barker, C.E.; Pawlewicz, M.J.

    1993-01-01

    In coal samples, published recommendations based on statistical methods suggest that 100 measurements are needed to estimate the mean random vitrinite reflectance (Rv-r) to within ±2%. Our survey of published thermal maturation studies indicates that those using dispersed organic matter (DOM) mostly aim to acquire 50 reflectance measurements. This smaller objective size for DOM versus coal samples poses a statistical contradiction, because the standard deviations of DOM reflectance distributions are typically larger, indicating that a greater sample size is needed to accurately estimate Rv-r in DOM. However, in studies of thermal maturation using DOM, even 50 measurements can be an unrealistic requirement given the small amount of vitrinite often found in such samples. Furthermore, there is generally a reduced need for the level of precision required in coal applications. Therefore, a key question in thermal maturation studies using DOM is how many measurements of Rv-r are needed to adequately estimate the mean. Our empirical approach to this problem is to compute the reflectance distribution statistics (mean, standard deviation, skewness, and kurtosis) in increments of 10 measurements. This study compares these intermediate computations of Rv-r statistics with a final one computed using all measurements for that sample. Vitrinite reflectance was measured on mudstone and sandstone samples taken from borehole M-25 in the Cerro Prieto, Mexico, geothermal system, which was selected because the rocks have a wide range of thermal maturation and a comparable humic DOM with depth. The results of this study suggest that after only 20-30 measurements the mean Rv-r is generally known to within 5%, and always to within 12%, of the mean Rv-r calculated using all of the measured particles. Thus, even in the worst case, the precision after measuring only 20-30 particles is in good agreement with the general precision of one decimal place recommended for mean Rv-r measurements on DOM. The coefficient of variation (V = standard deviation/mean) is proposed as a statistic to indicate the reliability of mean Rv-r estimates made at n ≥ 20. This preliminary study suggests that V ≤ 0.2 indicates a reliable mean, whereas V > 0.2 suggests an unreliable mean in such small samples. © 1993.
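
    The incremental procedure is easy to reproduce. Below is a minimal sketch, with synthetic reflectance readings standing in for real measurements, that recomputes the distribution statistics every 10 measurements and tracks the coefficient of variation V alongside the deviation from the final mean.

```python
# Sketch of the paper's empirical approach: recompute the reflectance
# statistics every 10 measurements and use the coefficient of
# variation V = sd/mean as a reliability indicator for small n.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical vitrinite reflectance readings (%Rv-r) for one sample
rv = rng.normal(0.85, 0.12, size=60)

final_mean = rv.mean()
for n in range(10, rv.size + 1, 10):
    sub = rv[:n]
    v = sub.std(ddof=1) / sub.mean()
    dev = 100 * abs(sub.mean() - final_mean) / final_mean
    print(f"n={n:2d}  mean={sub.mean():.3f}  V={v:.2f}  "
          f"skew={stats.skew(sub):+.2f}  kurt={stats.kurtosis(sub):+.2f}  "
          f"dev from final={dev:.1f}%")
```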

  1. Simultaneous spectrophotometric determination of glimepiride and pioglitazone in binary mixture and combined dosage form using chemometric-assisted techniques

    NASA Astrophysics Data System (ADS)

    El-Zaher, Asmaa A.; Elkady, Ehab F.; Elwy, Hanan M.; Saleh, Mahmoud Abo El Makarim

    2017-07-01

    In the present work, pioglitazone and glimepiride, two widely used antidiabetics, were simultaneously determined by a chemometric-assisted UV-spectrophotometric method applied to a binary synthetic mixture and to a pharmaceutical preparation containing both drugs. Three chemometric techniques, concentration residual augmented classical least-squares (CRACLS), principal component regression (PCR), and partial least-squares (PLS), were implemented using synthetic mixtures containing the two drugs in acetonitrile. The absorbance data matrix corresponding to the concentration data matrix was obtained by measuring absorbances between 215 and 235 nm at intervals of Δλ = 0.4 nm in the zero-order spectra. Calibration or regression models were then built from the absorbance and concentration data matrices for the prediction of unknown concentrations of pioglitazone and glimepiride in their mixtures. The described techniques were validated by analyzing synthetic mixtures containing the two drugs, showing good mean recovery values between 98 and 100%. In addition, the accuracy and precision of the three methods were confirmed by recovery values between 98 and 102% and RSD% < 0.6 for intra-day precision and < 1.2 for inter-day precision. The proposed chemometric techniques were successfully applied to a pharmaceutical preparation containing a combination of pioglitazone and glimepiride in a 30:4 ratio, showing good recovery values. Finally, statistical analysis was carried out to further verify the proposed methods, both by an intrinsic comparison among the three chemometric techniques and by comparing the present results with those obtained by reference pharmacopeial methods for each of pioglitazone and glimepiride.
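
    As a rough illustration of the calibration step, the sketch below builds a PLS model from synthetic zero-order spectra (Gaussian bands standing in for the real absorbances between 215 and 235 nm) and checks recoveries; it is not the authors' code, and scikit-learn's PLSRegression stands in for whichever PLS implementation they used.

```python
# Hedged sketch of a PLS calibration: predict two analyte
# concentrations from zero-order absorbance spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
wl = np.arange(215.0, 235.0, 0.4)             # 50 wavelengths
# Hypothetical pure-component spectra (arbitrary Gaussian bands)
s_pio = np.exp(-((wl - 225) / 4.0) ** 2)
s_gli = np.exp(-((wl - 230) / 5.0) ** 2)

# Synthetic calibration mixtures (Beer-Lambert additivity plus noise)
C = rng.uniform(1, 30, size=(20, 2))          # columns: pio, gli (ug/mL)
A = C @ np.vstack([s_pio, s_gli]) + rng.normal(0, 0.002, (20, wl.size))

pls = PLSRegression(n_components=2).fit(A, C)
C_hat = pls.predict(A)
recovery = 100 * C_hat / C
print("mean recovery (%):", recovery.mean(axis=0).round(1))
```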

  2. Investigation of improving MEMS-type VOA reliability

    NASA Astrophysics Data System (ADS)

    Hong, Seok K.; Lee, Yeong G.; Park, Moo Y.

    2003-12-01

    MEMS technologies have been applied in many areas, such as optical communications, gyroscopes, and biomedical components. In the optical communication field, MEMS technologies are essential, especially in multi-dimensional optical switches and variable optical attenuators (VOAs). This paper describes the process for the development of MEMS-type VOAs with good optical performance and improved reliability. Generally, MEMS VOAs are fabricated by a silicon micro-machining process, precise fibre alignment, and a sophisticated packaging process. Because the device is composed of many structures with various materials, it is difficult to make it reliable. We developed MEMS-type VOAs with many failure modes considered (FMEA: Failure Mode Effect Analysis) in the initial design step, predicted critical failure factors and revised the design accordingly, and confirmed the reliability by preliminary tests. The predicted failure factors were moisture, the bonding strength of the wire between the MEMS chip and the TO-CAN, and instability of the supplied signals. Statistical quality control tools (ANOVA, t-tests and so on) were used to control these potential failure factors and establish optimum manufacturing conditions. To sum up, we successfully developed reliable MEMS-type VOAs with good optical performance by controlling potential failure factors and using statistical quality control tools. As a result, the developed VOAs passed international reliability standards (Telcordia GR-1221-CORE).

  4. Statistical framework for evaluation of climate model simulations by use of climate proxy data from the last millennium - Part 1: Theory

    NASA Astrophysics Data System (ADS)

    Sundberg, R.; Moberg, A.; Hind, A.

    2012-08-01

    A statistical framework for comparing the output of ensemble simulations from global climate models with networks of climate proxy and instrumental records has been developed, focusing on near-surface temperatures for the last millennium. This framework includes the formulation of a joint statistical model for proxy data, instrumental data and simulation data, which is used to optimize a quadratic distance measure for ranking climate model simulations. An essential underlying assumption is that the simulations and the proxy/instrumental series have a shared component of variability that is due to temporal changes in external forcing, such as volcanic aerosol load, solar irradiance or greenhouse gas concentrations. Two statistical tests have been formulated. Firstly, a preliminary test establishes whether a significant temporal correlation exists between instrumental/proxy and simulation data. Secondly, the distance measure is expressed in the form of a test statistic of whether a forced simulation is closer to the instrumental/proxy series than unforced simulations. The proposed framework allows any number of proxy locations to be used jointly, with different seasons, record lengths and statistical precision. The goal is to objectively rank several competing climate model simulations (e.g. with alternative model parameterizations or alternative forcing histories) by means of their goodness of fit to the unobservable true past climate variations, as estimated from noisy proxy data and instrumental observations.
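
    A toy version of the two tests can be written directly from the description above. In the sketch below, synthetic series stand in for the proxy record and the simulation ensembles, and the unforced control runs supply the null distribution; the real framework additionally weights many proxy locations by season, record length and statistical precision.

```python
# Schematic sketch of the two tests, with toy series standing in for
# proxy data and simulation ensembles.
import numpy as np

rng = np.random.default_rng(2)
t = 1000                                      # years
forcing = np.cumsum(rng.normal(0, 0.05, t))   # shared forced component
proxy = forcing + rng.normal(0, 1.0, t)       # noisy proxy record
forced_sim = forcing + rng.normal(0, 0.5, t)
unforced = rng.normal(0, 0.5, (100, t))       # control-run ensemble

# Test 1: correlation between proxy and the forced simulation,
# referenced to the null distribution from unforced simulations.
r_obs = np.corrcoef(proxy, forced_sim)[0, 1]
r_null = [np.corrcoef(proxy, u)[0, 1] for u in unforced]
p = np.mean(np.abs(r_null) >= abs(r_obs))
print(f"r = {r_obs:.3f}, permutation-style p = {p:.3f}")

# Test 2: quadratic distance of each simulation to the proxy series;
# a forced run should sit closer than the unforced ones.
d_forced = np.mean((proxy - forced_sim) ** 2)
d_unforced = np.mean((proxy - unforced) ** 2, axis=1)
print(f"forced closer than {np.mean(d_forced < d_unforced):.0%} of unforced runs")
```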

  5. The accuracy of the ATLAS muon X-ray tomograph

    NASA Astrophysics Data System (ADS)

    Avramidou, R.; Berbiers, J.; Boudineau, C.; Dechelette, C.; Drakoulakos, D.; Fabjan, C.; Grau, S.; Gschwendtner, E.; Maugain, J.-M.; Rieder, H.; Rangod, S.; Rohrbach, F.; Sbrissa, E.; Sedykh, E.; Sedykh, I.; Smirnov, Y.; Vertogradov, L.; Vichou, I.

    2003-01-01

    A gigantic detector, ATLAS, is under construction at CERN for particle physics research at the Large Hadron Collider, which is due to be ready by 2006. An X-ray tomograph has been developed, designed and constructed at CERN in order to control the mechanical quality of the ATLAS muon chambers. We reached measurement accuracies of 2 μm (systematic) and 2 μm (statistical) in the horizontal and vertical directions over a working area of 220 cm (horizontal) × 60 cm (vertical). Here we describe in detail the fundamental principle chosen to achieve such good accuracy. Key results of measurements are presented as a cross-check of our precision.

  6. Bayesian approach to estimate AUC, partition coefficient and drug targeting index for studies with serial sacrifice design.

    PubMed

    Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William

    2014-03-01

    The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0–∞) and any AUC(0–∞)-based NCA parameter or derivation. To assess the performance of the proposed method, 1,000 simulated datasets were generated under different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0–∞) values and the tissue-to-plasma AUC(0–∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of AUC(0–∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application to the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0–∞)-based parameters such as the partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
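
    For comparison, the Bailer-type point estimate referenced above is simple to compute. The sketch below (hypothetical sacrifice times and concentrations) forms the AUC from per-time-point means with trapezoidal weights and propagates the per-time-point variances; the paper's Bayesian machinery replaces this with full posterior inference.

```python
# Sketch of a classic Bailer-type estimator: with destructive
# sampling, each time point has its own animals, so AUC and its
# variance come from per-time-point means and variances.
import numpy as np

t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])   # h, sacrifice times
# Hypothetical concentrations, rows = time points, cols = animals
conc = np.array([[12.1, 10.8, 13.0],
                 [15.2, 14.1, 16.3],
                 [11.0, 12.5, 10.2],
                 [ 7.4,  6.9,  8.1],
                 [ 3.2,  2.8,  3.5],
                 [ 0.9,  1.1,  0.8]])

m = conc.mean(axis=1)
s2 = conc.var(axis=1, ddof=1)
n = conc.shape[1]

# Trapezoidal weights for unequally spaced time points
w = np.empty_like(t)
w[0] = (t[1] - t[0]) / 2
w[-1] = (t[-1] - t[-2]) / 2
w[1:-1] = (t[2:] - t[:-2]) / 2

auc = np.sum(w * m)                 # AUC to the last sampling time
var_auc = np.sum(w**2 * s2 / n)     # Bailer-type variance
print(f"AUC(0-tlast) = {auc:.2f} +/- {np.sqrt(var_auc):.2f} (SE)")
```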

  7. Bias, precision and statistical power of analysis of covariance in the analysis of randomized trials with baseline imbalance: a simulation study.

    PubMed

    Egbewale, Bolaji E; Lewis, Martyn; Sim, Julius

    2014-04-09

    Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. 126 hypothetical trial scenarios were evaluated (126,000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; and direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when the pretest-posttest correlation is ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. The apparently greater power of ANOVA and CSA at certain levels of imbalance is achieved at the expense of a biased treatment effect. Across a range of correlations between pre- and post-treatment scores and at varying levels and directions of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power.
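
    The design of such a simulation is compact enough to sketch. The code below re-creates one hypothetical scenario (invented values for the correlation, effect and imbalance) and recovers the qualitative findings: ANOVA and CSA are biased in opposite directions under baseline imbalance, while ANCOVA is unbiased and more precise.

```python
# Compact re-creation of one simulation scenario: baseline imbalance
# plus pretest-posttest correlation, comparing the three estimators.
import numpy as np

rng = np.random.default_rng(3)
n, rho, effect, imbalance = 100, 0.6, 0.5, 0.3
est = {"ANOVA": [], "CSA": [], "ANCOVA": []}

for _ in range(2000):
    g = np.repeat([0, 1], n)                      # control / treatment
    pre = rng.normal(imbalance * g, 1.0)          # baseline imbalance
    post = rho * pre + effect * g + rng.normal(0, np.sqrt(1 - rho**2), 2 * n)
    # ANOVA: difference in post-test means
    est["ANOVA"].append(post[g == 1].mean() - post[g == 0].mean())
    # CSA: difference in mean change scores
    d = post - pre
    est["CSA"].append(d[g == 1].mean() - d[g == 0].mean())
    # ANCOVA: group coefficient adjusting for the pretest
    X = np.column_stack([np.ones(2 * n), g, pre])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    est["ANCOVA"].append(beta[1])

for k, v in est.items():
    v = np.asarray(v)
    print(f"{k:6s} bias={v.mean() - effect:+.3f}  SD={v.std():.3f}")
```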

  8. Bias, precision and statistical power of analysis of covariance in the analysis of randomized trials with baseline imbalance: a simulation study

    PubMed Central

    2014-01-01

    Background Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. Methods 126 hypothetical trial scenarios were evaluated (126 000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; and direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Results Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when the pretest-posttest correlation is ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. The apparently greater power of ANOVA and CSA at certain levels of imbalance is achieved at the expense of a biased treatment effect. Conclusions Across a range of correlations between pre- and post-treatment scores and at varying levels and directions of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power. PMID:24712304

  9. Fluid transport properties by equilibrium molecular dynamics. I. Methodology at extreme fluid states

    NASA Astrophysics Data System (ADS)

    Dysthe, D. K.; Fuchs, A. H.; Rousseau, B.

    1999-02-01

    The Green-Kubo formalism for evaluating transport coefficients by molecular dynamics has been applied to flexible, multicenter models of linear and branched alkanes in the gas phase and in the liquid phase from ambient conditions to close to the triple point. The effects of integration time step, potential cutoff and system size have been studied and shown to be small compared to the computational precision, except for diffusion in gaseous n-butane. The RATTLE algorithm is shown to give accurate transport coefficients for time steps up to a limit of 8 fs. The different relaxation mechanisms in the fluids have been studied, and it is shown that the longest relaxation time of the system governs the statistical precision of the results. By measuring the longest relaxation time of a system, one can obtain a reliable error estimate from a single trajectory. The accuracy of the Green-Kubo method is shown to be as good as the precision for all states and models used in this study, even when the system relaxation time becomes very long. The efficiency of the method is shown to be comparable to that of nonequilibrium methods. The transport coefficients for two recently proposed potential models are presented, showing deviations from experiment of 0%-66%.
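
    The Green-Kubo recipe itself fits in a short script. The sketch below integrates a velocity autocorrelation function to obtain a self-diffusion coefficient, using an Ornstein-Uhlenbeck process as a stand-in for MD velocity output so that the answer can be checked analytically; production codes average over particles and Cartesian components, and the relaxation time noted above sets the statistical precision.

```python
# Bare-bones Green-Kubo sketch: self-diffusion coefficient as the
# time integral of the velocity autocorrelation function (VACF).
import numpy as np

rng = np.random.default_rng(4)
dt = 2e-15                                    # time step (s)
nsteps = 200_000
# Hypothetical velocity trajectory: an Ornstein-Uhlenbeck process as
# a stand-in for MD output, with relaxation time tau.
tau, kT_over_m = 1e-13, 4e4                   # s, (m/s)^2
v = np.empty(nsteps)
v[0] = 0.0
a = np.exp(-dt / tau)
sig = np.sqrt(kT_over_m * (1 - a**2))
for i in range(1, nsteps):
    v[i] = a * v[i - 1] + sig * rng.normal()

# VACF up to a few relaxation times; longer lags only add noise.
lags = int(5 * tau / dt)
vacf = np.array([np.dot(v[:nsteps - k], v[k:]) / (nsteps - k)
                 for k in range(lags)])

# Green-Kubo integral by the trapezoidal rule
D = dt * (vacf[0] / 2 + vacf[1:-1].sum() + vacf[-1] / 2)
print(f"D = {D:.3e} m^2/s (analytic: {kT_over_m * tau:.3e})")
```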

  10. DoD Met Most Requirements of the Improper Payments Elimination and Recovery Act in FY 2014, but Improper Payment Estimates Were Unreliable

    DTIC Science & Technology

    2015-05-12

    Deficiencies That Affect the Reliability of Estimates: Statistical Precision Could Be Improved... statistical precision of improper payments estimates in seven of the DoD payment programs through the use of stratified sample designs. DoD improper... payments not subject to sampling, which made the results statistically invalid. We made a recommendation to correct this problem in a previous report.

  11. Evaluating flow cytometer performance with weighted quadratic least squares analysis of LED and multi-level bead data.

    PubMed

    Parks, David R; El Khettabi, Faysal; Chase, Eric; Hoffman, Robert A; Perfetto, Stephen P; Spidlen, Josef; Wood, James C S; Moore, Wayne A; Brinkman, Ryan R

    2017-03-01

    We developed a fully automated procedure for analyzing data from LED pulses and multilevel bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than that from multilevel bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fit the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. © 2017 International Society for Advancement of Cytometry.

  12. A targeted metabolomic protocol for short-chain fatty acids and branched-chain amino acids

    PubMed Central

    Zheng, Xiaojiao; Qiu, Yunping; Zhong, Wei; Baxter, Sarah; Su, Mingming; Li, Qiong; Xie, Guoxiang; Ore, Brandon M.; Qiao, Shanlei; Spencer, Melanie D.; Zeisel, Steven H.; Zhou, Zhanxiang; Zhao, Aihua; Jia, Wei

    2013-01-01

    Research in obesity and metabolic disorders that involve intestinal microbiota demands reliable methods for the precise measurement of short-chain fatty acid (SCFA) and branched-chain amino acid (BCAA) concentrations. Here, we report a rapid method for simultaneously determining SCFAs and BCAAs in biological samples using propyl chloroformate (PCF) derivatization followed by gas chromatography-mass spectrometry (GC-MS) analysis. A one-step derivatization using 100 µL of PCF in a reaction system of water, propanol, and pyridine (v/v/v = 8:3:2) at pH 8 provided the optimal derivatization efficiency. The best extraction efficiency of the derivatized products was achieved by a two-step extraction with hexane. The method exhibited good derivatization efficiency and recovery over a wide range of concentrations, with a low limit of detection for each compound. The relative standard deviations (RSDs) of all targeted compounds showed good intra- and inter-day (within 7 days) precision (<10%) and good stability (<20%) within 4 days at room temperature (23–25 °C), or 7 days when stored at −20 °C. We applied our method to measure SCFA and BCAA levels in fecal samples from rats administered different diets. Both univariate and multivariate statistical analyses of the concentrations of these target metabolites could differentiate three groups given ethanol intervention and different dietary oils. The method was also successfully employed to determine SCFAs and BCAAs in feces, plasma, and urine from healthy humans, providing important baseline information on the concentrations of these metabolites. This novel metabolic profiling study has great potential for translational research. PMID:23997757

  13. Photon asymmetry measurements of polarized γp → π0p for Eγ = 320-650 MeV

    NASA Astrophysics Data System (ADS)

    Gardner, S.; Howdle, D.; Sikora, M. H.; Wunderlich, Y.; Abt, S.; Achenbach, P.; Afzal, F.; Aguar-Bartolome, P.; Ahmed, Z.; Annand, J. R. M.; Arends, H. J.; Bantawa, K.; Bashkanov, M.; Beck, R.; Biroth, M.; Borisov, N. S.; Braghieri, A.; Briscoe, W. J.; Cherepnya, S.; Cividini, F.; Costanza, S.; Collicott, C.; Demissie, B. T.; Denig, A.; Dieterle, M.; Downie, E. J.; Drexler, P.; Ferretti-Bondy, M. I.; Filkov, L. V.; Glazier, D. I.; Garni, S.; Gradl, W.; Günther, M.; Gurevich, G. M.; Hall Barrientos, P.; Hamilton, D.; Heid, E.; Hornidge, D.; Huber, G. M.; Jahn, O.; Jude, T. C.; Käser, A.; Kay, S.; Kashevarov, V. L.; Keshelashvili, I.; Kondratiev, R.; Korolija, M.; Krusche, B.; Linturi, J. M.; Lisin, V.; Livingston, K.; Lutterer, S.; MacGregor, I. J. D.; Macrae, R.; Mancell, J.; Manley, D. M.; Martel, P. P.; McGeorge, J. C.; McNicoll, E. F.; Middleton, D. G.; Miskimen, R.; Mullen, C.; Mushkarenkov, A.; Neganov, A. B.; Neiser, A.; Nikolaev, A.; Oberle, M.; Ostrick, M.; Owens, R. O.; Otte, P. B.; Oussena, B.; Paudyal, D.; Pedroni, P.; Polonski, A.; Prakhov, S.; Rajabi, A.; Robinson, J.; Rosner, G.; Rostomyan, T.; Sarty, A.; Schumann, S.; Sokhoyan, V.; Spieker, K.; Steffen, O.; Sfienti, C.; Strakovsky, I. I.; Strandberg, B.; Strub, Th.; Supek, I.; Tarbert, C. M.; Thiel, A.; Thiel, M.; Thomas, A.; Unverzagt, M.; Usov, Yu. A.; Watts, D. P.; Werthmüller, D.; Wettig, J.; Wolfes, M.; Witthauer, L.; Zana, L.

    2016-11-01

    High-statistics measurements of the photon asymmetry Σ for the polarized-photon reaction γp → π0p have been made in the center-of-mass energy range W = 1214-1450 MeV. The data were measured with the MAMI A2 real photon beam and Crystal Ball/TAPS detector systems in Mainz, Germany. The results significantly improve the existing world data and are shown to be in good agreement with previous measurements, and with the MAID, SAID, and Bonn-Gatchina predictions. We have also combined the photon asymmetry results with recent cross-section measurements from Mainz to calculate the profile functions, Σ̌ (= σ0Σ), and perform a moment analysis. Comparison with calculations from the Bonn-Gatchina model shows that the precision of the data is good enough to further constrain the higher partial waves, and there is an indication of interference between the very small F-waves and the N(1520) 3/2- and N(1535) 1/2- resonances.

  14. Validated Spectrophotometric and RP-HPLC-DAD Methods for the Determination of Ursodeoxycholic Acid Based on Derivatization with 2-Nitrophenylhydrazine.

    PubMed

    El-Kafrawy, Dina S; Belal, Tarek S; Mahrous, Mohamed S; Abdel-Khalek, Magdi M; Abo-Gharam, Amira H

    2017-05-01

    This work describes the development, validation, and application of two simple, accurate, and reliable methods for the determination of ursodeoxycholic acid (UDCA) in bulk powder and in pharmaceutical dosage forms. The carboxylic acid group in UDCA was exploited for the development of these novel methods. Method 1 is the colorimetric determination of the drug based on its reaction with 2-nitrophenylhydrazine hydrochloride in the presence of a water-soluble carbodiimide coupler [1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide hydrochloride] and pyridine to produce an acid hydrazide derivative, which ionizes to yield an intense violet color with maximum absorption at 553 nm. Method 2 uses reversed-phase HPLC with diode-array detection for the determination of UDCA after precolumn derivatization using the same reaction mentioned above. The acid hydrazide reaction product was separated using a Pinnacle DB C8 column (4.6 × 150 mm, 5 μm particle size) and a mobile phase consisting of 0.01 M acetate buffer (pH 3)-methanol-acetonitrile (30 + 30 + 40, v/v/v) isocratically pumped at a flow rate of 1 mL/min. Ibuprofen was used as the internal standard (IS). The peaks of the reaction product and IS were monitored at 400 nm. Different experimental parameters for both methods were carefully optimized. The analytical performance of the developed methods was statistically validated for linearity, range, precision, accuracy, specificity, robustness, LOD, and LOQ. Calibration curves showed good linear relationships over the concentration ranges 32-192 and 60-600 μg/mL for methods 1 and 2, respectively. The proposed methods were successfully applied for the assay of UDCA in bulk form, capsules, and oral suspension with good accuracy and precision. Assay results were statistically compared with a reference pharmacopeial HPLC method, and no significant differences were observed between the proposed and reference methods.

  15. Full Bayes Poisson gamma, Poisson lognormal, and zero inflated random effects models: Comparing the precision of crash frequency estimates.

    PubMed

    Aguero-Valverde, Jonathan

    2013-01-01

    In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero inflated models. This research compares random effects, zero inflated, and zero inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of the precision of posterior crash frequency estimates, since the precision of these estimates is vital for the ranking of sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset being analyzed, it was found that once the random effects are included in the zero inflated models, the probability of being in the zero state is drastically reduced, and the zero inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are highly consistent among themselves. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.
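
    The non-hierarchical kernel of these models is easy to exhibit. The sketch below fits a plain zero-inflated Poisson to synthetic counts by maximum likelihood; the paper's models add random effects and full Bayes estimation on top of this likelihood, so this shows only the zero-inflation mechanics, not the published analysis.

```python
# Sketch: maximum-likelihood fit of a zero-inflated Poisson (ZIP).
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(5)
# Hypothetical crash counts: structural zeros with prob p, else Poisson
p_true, lam_true = 0.3, 1.8
z = rng.random(500) < p_true
y = np.where(z, 0, rng.poisson(lam_true, 500))

def zip_nll(theta):
    # Unconstrained parameters mapped to (0,1) and (0,inf)
    p, lam = 1 / (1 + np.exp(-theta[0])), np.exp(theta[1])
    # log P(y | not zero state) + log(1-p), valid for y >= 0
    logpmf = y * np.log(lam) - lam + np.log1p(-p) - gammaln(y + 1)
    # Zeros can come from either the zero state or the Poisson
    ll = np.where(y == 0,
                  np.log(p + (1 - p) * np.exp(-lam)),
                  logpmf)
    return -ll.sum()

fit = minimize(zip_nll, x0=[0.0, 0.0], method="Nelder-Mead")
p_hat = 1 / (1 + np.exp(-fit.x[0]))
print(f"p (zero state) = {p_hat:.3f}, lambda = {np.exp(fit.x[1]):.3f}")
```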

  16. The impact of statistical adjustment on conditional standard errors of measurement in the assessment of physician communication skills.

    PubMed

    Raymond, Mark R; Clauser, Brian E; Furman, Gail E

    2010-10-01

    The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution; and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.

  17. Pb and Sr isotope measurements by inductively coupled plasma mass spectrometer: efficient time management for precision improvement

    NASA Astrophysics Data System (ADS)

    Monna, F.; Loizeau, J.-L.; Thomas, B. A.; Guéguen, C.; Favarger, P.-Y.

    1998-08-01

    One of the factors limiting the precision of inductively coupled plasma mass spectrometry is counting statistics, which depend upon acquisition time and ion fluxes. In the present study, the precision of isotopic measurements of Pb and Sr is examined. The measurement time is optimally shared among the isotopes, using a mathematical simulation, to provide the lowest theoretical analytical error. Different algorithms for mass bias correction are also taken into account and evaluated in terms of the improvement in overall precision. Several experiments allow a comparison of real conditions with theory. The present method significantly improves the precision, regardless of the instrument used. However, the benefit is greater for instruments whose original precision is close to that predicted by counting statistics. Additionally, the procedure is flexible enough to be easily adapted to other problems, such as isotope dilution.
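
    The optimization underlying this time-sharing can be stated in one line of algebra and checked numerically. For a two-isotope ratio limited by counting statistics, the sketch below (invented count rates) compares an equal split of the acquisition time with the optimal allocation t_i ∝ 1/√r_i.

```python
# Sketch of the time-sharing idea: for an isotope ratio R = N1/N2,
# counting statistics give var(R)/R^2 = 1/N1 + 1/N2 with Ni = ri*ti.
# Minimizing at fixed total time gives ti proportional to 1/sqrt(ri).
import numpy as np

r = np.array([2.0e5, 8.0e3])        # hypothetical count rates (cps)
T = 60.0                            # total acquisition time (s)

t_equal = np.array([T / 2, T / 2])
t_opt = T / np.sqrt(r) / np.sum(1 / np.sqrt(r))

def rel_sd(t):
    n = r * t                       # accumulated counts per isotope
    return np.sqrt(np.sum(1 / n))   # relative SD of the ratio

print(f"equal split : RSD = {100 * rel_sd(t_equal):.4f}%")
print(f"optimal     : RSD = {100 * rel_sd(t_opt):.4f}%  (t = {t_opt.round(1)} s)")
```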

  18. Weak lensing shear and aperture mass from linear to non-linear scales

    NASA Astrophysics Data System (ADS)

    Munshi, Dipak; Valageas, Patrick; Barber, Andrew J.

    2004-05-01

    We describe the predictions for the smoothed weak lensing shear, γs, and aperture mass, Map, of two simple analytical models of the density field: the minimal tree model and the stellar model. Both models give identical results for the statistics of the three-dimensional density contrast smoothed over spherical cells and only differ in the detailed angular dependence of the many-body density correlations. We have shown in previous work that they also yield almost identical results for the probability distribution function (PDF) of the smoothed convergence, κs. We find that the two models give rather close results for both the shear and the positive tail of the aperture mass. However, we note that at small angular scales (θs ≲ 2 arcmin) the tail of the PDF for negative Map shows a strong variation between the two models, and the stellar model actually breaks down for θs ≲ 0.4 arcmin and Map < 0. This shows that the statistics of the aperture mass provide a very precise probe of the detailed structure of the density field, as they are sensitive to both the amplitude and the detailed angular behaviour of the many-body correlations. On the other hand, the minimal tree model shows good agreement with numerical simulations over all the scales and redshifts of interest, while both models provide a good description of the PDF of the smoothed shear components. Therefore, the shear and the aperture mass provide robust and complementary tools to measure the cosmological parameters as well as the detailed statistical properties of the density field.

  19. Microbiological assay for the determination of meropenem in pharmaceutical dosage form.

    PubMed

    Mendez, Andreas S L; Weisheimer, Vanessa; Oppe, Tércio P; Steppe, Martin; Schapoval, Elfrides E S

    2005-04-01

    Meropenem is a highly active carbapenem antibiotic used in the treatment of a wide range of serious infections. The present work reports a microbiological assay, applying the cylinder-plate method, for the determination of meropenem in powder for injection. The validation yielded good results and included linearity, precision, accuracy and specificity. The assay is based on the inhibitory effect of meropenem upon the strain of Micrococcus luteus ATCC 9341 used as the test microorganism. The assay results were treated statistically by analysis of variance (ANOVA) and were found to be linear (r = 0.9999) in the range of 1.5-6.0 μg mL−1, precise (intra-assay RSD = 0.29%; inter-assay RSD = 0.94%) and accurate. A preliminary stability study of meropenem was performed to show that the microbiological assay is specific for the determination of meropenem in the presence of its degradation products. The degraded samples were also analysed by the HPLC method. The proposed method allows the quantitation of meropenem in pharmaceutical dosage form and can be used for drug analysis in routine quality control.

  20. Validation of HPLC and UV spectrophotometric methods for the determination of meropenem in pharmaceutical dosage form.

    PubMed

    Mendez, Andreas S L; Steppe, Martin; Schapoval, Elfrides E S

    2003-12-04

    A high-performance liquid chromatographic method and a UV spectrophotometric method for the quantitative determination of meropenem, a highly active carbapenem antibiotic, in powder for injection were developed in the present work. The parameters linearity, precision, accuracy, specificity, robustness, limit of detection and limit of quantitation were studied according to International Conference on Harmonization guidelines. Chromatography was carried out by a reversed-phase technique on an RP-18 column with a mobile phase composed of 30 mM monobasic phosphate buffer and acetonitrile (90:10, v/v), adjusted to pH 3.0 with orthophosphoric acid. The UV spectrophotometric method was performed at 298 nm. The samples were prepared in water, and the stability of meropenem in aqueous solution at 4 and 25 °C was studied. The results were satisfactory, with good stability after 24 h at 4 °C. Statistical analysis by Student's t-test showed no significant difference between the results obtained by the two methods. The proposed methods are highly sensitive, precise and accurate and can be used for the reliable quantitation of meropenem in pharmaceutical dosage form.
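
    The final method-comparison step is standard. The sketch below applies Student's t-test to hypothetical replicate assay results (% of label claim) from the two methods; the values are invented for illustration.

```python
# Sketch: Student's t-test comparing assay results from two methods.
import numpy as np
from scipy import stats

hplc = np.array([99.2, 100.1, 99.8, 100.4, 99.5, 100.0])
uv = np.array([99.6, 100.3, 99.1, 100.6, 99.9, 99.4])

t, p = stats.ttest_ind(hplc, uv)
print(f"t = {t:.3f}, p = {p:.3f}")  # p > 0.05: no significant difference
```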

  1. Survey of A_LT' asymmetries in semi-exclusive electron scattering on 4He and 12C

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dan Protopopescu; et al.

    2005-02-21

    Single spin azimuthal asymmetries A_LT' were measured at Jefferson Lab using 2.2 and 4.4 GeV longitudinally polarized electrons incident on 4He and 12C targets in the CLAS detector. A_LT' is related to the imaginary part of the longitudinal-transverse interference, and in quasifree nucleon knockout it provides an unambiguous signature for final-state interactions (FSI). Experimental values of A_LT' were found to be below 5%, typically |A_LT'| < 3% for data with good statistical precision. Optical Model in Eikonal Approximation (OMEA) and Relativistic Multiple-Scattering Glauber Approximation (RMSGA) calculations are shown to be consistent with the measured asymmetries.

  2. Stability indicating high performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in combined dosage form

    PubMed Central

    Bageshwar, Deepak; Khanvilkar, Vineeta; Kadam, Vilasrao

    2011-01-01

    A specific, precise and stability indicating high-performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in pharmaceutical formulations was developed and validated. The method employed TLC aluminium plates precoated with silica gel 60F254 as the stationary phase. The solvent system consisted of methanol:water:ammonium acetate; 4.0:1.0:0.5 (v/v/v). This system was found to give compact and dense spots for both itopride hydrochloride (Rf value of 0.55±0.02) and pantoprazole sodium (Rf value of 0.85±0.04). Densitometric analysis of both drugs was carried out in the reflectance–absorbance mode at 289 nm. The linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9988±0.0012 in the concentration range of 100–400 ng for pantoprazole sodium. Also, the linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9990±0.0008 in the concentration range of 200–1200 ng for itopride hydrochloride. The method was validated for specificity, precision, robustness and recovery. Statistical analysis proves that the method is repeatable and selective for the estimation of both the said drugs. As the method could effectively separate the drug from its degradation products, it can be employed as a stability indicating method. PMID:29403710

  3. Stability indicating high performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in combined dosage form.

    PubMed

    Bageshwar, Deepak; Khanvilkar, Vineeta; Kadam, Vilasrao

    2011-11-01

    A specific, precise and stability indicating high-performance thin-layer chromatographic method for simultaneous estimation of pantoprazole sodium and itopride hydrochloride in pharmaceutical formulations was developed and validated. The method employed TLC aluminium plates precoated with silica gel 60F254 as the stationary phase. The solvent system consisted of methanol:water:ammonium acetate (4.0:1.0:0.5, v/v/v). This system was found to give compact and dense spots for both itopride hydrochloride (Rf value of 0.55±0.02) and pantoprazole sodium (Rf value of 0.85±0.04). Densitometric analysis of both drugs was carried out in the reflectance-absorbance mode at 289 nm. The linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9988±0.0012 in the concentration range of 100-400 ng for pantoprazole sodium. Also, the linear regression analysis data for the calibration plots showed a good linear relationship with R2=0.9990±0.0008 in the concentration range of 200-1200 ng for itopride hydrochloride. The method was validated for specificity, precision, robustness and recovery. Statistical analysis proves that the method is repeatable and selective for the estimation of both the said drugs. As the method could effectively separate the drug from its degradation products, it can be employed as a stability indicating method.

  4. Spectrofluorimetric determination of some water-soluble vitamins.

    PubMed

    Mohamed, Abdel-Maaboud I; Mohamed, Horria A; Abdel-Latif, Niveen M; Mohamed, Marwa R

    2011-01-01

    Two simple and sensitive spectrofluorimetric methods were developed for the determination of three water-soluble vitamins (B1, B2, and B6) in mixtures in the presence of cyanocobalamin. The first was for thiamine determination and depends on the oxidation of thiamine HCl to thiochrome by iodine in an alkaline medium. The method was applied accurately to determine thiamine in binary, ternary, and quaternary mixtures with pyridoxine HCl, riboflavin, and cyanocobalamin without interference. In the second method, riboflavin and pyridoxine HCl were determined fluorimetrically in acetate buffer, pH 6. The three water-soluble vitamins (B1, B2, and B6) were thus determined spectrofluorimetrically in binary, ternary, and quaternary mixtures in the presence of cyanocobalamin. All variables were studied in order to optimize the reaction conditions. Linear relationships were obeyed for all studied vitamins by the proposed methods at their corresponding λexc or λem. Linear calibration curves were obtained from 10 to 500 ng/mL, with correlation coefficients ranging from 0.9991 to 0.9999. The suggested procedures were applied to the analysis of the investigated vitamins in laboratory-prepared mixtures and in pharmaceutical dosage forms from different manufacturers. The RSD range was 0.46-1.02%, which indicates good precision. No interference was observed from common pharmaceutical additives. Good recoveries (97.6 ± 0.7 to 101.2 ± 0.8%) were obtained. Statistical comparison of the results with reported methods shows excellent agreement and indicates no significant difference in accuracy and precision.

  5. Use of statistical analysis to validate ecogenotoxicology findings arising from various comet assay components.

    PubMed

    Hussain, Bilal; Sultana, Tayyaba; Sultana, Salma; Al-Ghanim, Khalid Abdullah; Masoud, Muhammad Shahreef; Mahboob, Shahid

    2018-04-01

    Cirrhinus mrigala, Labeo rohita, and Catla catla are economically important fish for human consumption in Pakistan, but industrial and sewage pollution has drastically reduced their population in the River Chenab. Statistics are an important tool to analyze and interpret comet assay results. The specific aims of the study were to determine the DNA damage in Cirrhinus mrigala, Labeo rohita, and Catla catla due to chemical pollution and to assess the validity of statistical analyses to determine the viability of the comet assay for a possible use with these freshwater fish species as a good indicator of pollution load and habitat degradation. Comet assay results indicated a significant (P < 0.05) degree of DNA fragmentation in Cirrhinus mrigala followed by Labeo rohita and Catla catla in respect to comet head diameter, comet tail length, and % DNA damage. Regression analysis and correlation matrices conducted among the parameters of the comet assay affirmed the precision and the legitimacy of the results. The present study, therefore, strongly recommends that genotoxicological studies conduct appropriate analysis of the various components of comet assays to offer better interpretation of the assay data.

  6. Validation of analytical methods in GMP: the disposable Fast Read 102® device, an alternative practical approach for cell counting.

    PubMed

    Gunetti, Monica; Castiglia, Sara; Rustichelli, Deborah; Mareschi, Katia; Sanavio, Fiorella; Muraro, Michela; Signorino, Elena; Castello, Laura; Ferrero, Ivana; Fagioli, Franca

    2012-05-31

    The quality and safety of advanced therapy products must be maintained throughout their production and quality control cycle to ensure their final use in patients. We validated the cell count method according to the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use and the European Pharmacopoeia, considering the tests' accuracy, precision, repeatability, linearity and range. As the cell count is a potency test, we checked accuracy, precision, and linearity according to ICH Q2. Briefly, our experimental approach was first to evaluate the accuracy of the Fast Read 102® compared to the Bürker chamber. Once the accuracy of the alternative method was demonstrated, we checked the precision and linearity tests using only the Fast Read 102®. The data were statistically analyzed using the average, standard deviation, and inter- and intra-operator coefficient of variation percentages. All the tests performed met the established acceptance criterion of a coefficient of variation of less than ten percent. For the cell count, the precision reached by each operator had a coefficient of variation of less than ten percent (total cells) and under five percent (viable cells). The best dilution range, giving a slope very close to 1, was between 1:8 and 1:128. Our data demonstrated that the Fast Read 102® count method is accurate and precise and ensures the linearity of the results obtained over a range of cell dilutions. Under our standard method procedures, this assay may thus be considered a good quality control method for the cell count as a batch release quality control test. Moreover, the Fast Read 102® chamber is a plastic, disposable device that allows a number of samples to be counted in the same chamber. Last but not least, it overcomes the problem of chamber washing after use and so allows a cell count in a clean environment such as that in a Cell Factory. In a good manufacturing practice setting, the disposable cell counting devices allow a single use of the count chamber, which can then be thrown away, thus avoiding the waste disposal of vital dye (e.g. Trypan Blue) or lysing solution (e.g. Tuerk solution).
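
    The acceptance check described above reduces to a coefficient-of-variation computation. A minimal sketch, with invented replicate counts for two operators:

```python
# Sketch of the acceptance check: coefficient of variation across
# replicate counts, per operator, against a 10% ceiling.
import numpy as np

# Hypothetical replicate counts (cells/mL) from two operators
counts = {"op1": [1.02e6, 0.97e6, 1.05e6, 0.99e6],
          "op2": [0.95e6, 1.01e6, 0.98e6, 1.04e6]}

for op, x in counts.items():
    x = np.asarray(x)
    cv = 100 * x.std(ddof=1) / x.mean()
    print(f"{op}: CV = {cv:.1f}% -> {'pass' if cv < 10 else 'fail'}")
```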

  7. Validation of analytical methods in GMP: the disposable Fast Read 102® device, an alternative practical approach for cell counting

    PubMed Central

    2012-01-01

    Background The quality and safety of advanced therapy products must be maintained throughout their production and quality control cycle to ensure their final use in patients. We validated the cell count method according to the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use and the European Pharmacopoeia, considering the tests' accuracy, precision, repeatability, linearity and range. Methods As the cell count is a potency test, we checked accuracy, precision, and linearity according to ICH Q2. Briefly, our experimental approach was first to evaluate the accuracy of the Fast Read 102® compared to the Bürker chamber. Once the accuracy of the alternative method was demonstrated, we checked the precision and linearity tests using only the Fast Read 102®. The data were statistically analyzed using the average, standard deviation, and inter- and intra-operator coefficient of variation percentages. Results All the tests performed met the established acceptance criterion of a coefficient of variation of less than ten percent. For the cell count, the precision reached by each operator had a coefficient of variation of less than ten percent (total cells) and under five percent (viable cells). The best dilution range, giving a slope very close to 1, was between 1:8 and 1:128. Conclusions Our data demonstrated that the Fast Read 102® count method is accurate and precise and ensures the linearity of the results obtained over a range of cell dilutions. Under our standard method procedures, this assay may thus be considered a good quality control method for the cell count as a batch release quality control test. Moreover, the Fast Read 102® chamber is a plastic, disposable device that allows a number of samples to be counted in the same chamber. Last but not least, it overcomes the problem of chamber washing after use and so allows a cell count in a clean environment such as that in a Cell Factory. In a good manufacturing practice setting, the disposable cell counting devices allow a single use of the count chamber, which can then be thrown away, thus avoiding the waste disposal of vital dye (e.g. Trypan Blue) or lysing solution (e.g. Tuerk solution). PMID:22650233

  8. A global goodness-of-fit statistic for Cox regression models.

    PubMed

    Parzen, M; Lipsitz, S R

    1999-06-01

    In this paper, a global goodness-of-fit test statistic for a Cox regression model, which has an approximate chi-squared distribution when the model has been correctly specified, is proposed. Our goodness-of-fit statistic is global and has power to detect whether interactions or higher order powers of covariates in the model are needed. The proposed statistic is similar to the Hosmer and Lemeshow (1980, Communications in Statistics A10, 1043-1069) goodness-of-fit statistic for binary data as well as Schoenfeld's (1980, Biometrika 67, 145-153) statistic for the Cox model. The methods are illustrated using data from a Mayo Clinic trial in primary biliary cirrhosis of the liver (Fleming and Harrington, 1991, Counting Processes and Survival Analysis), in which the outcome is the time until liver transplantation or death. There are 17 possible covariates. Two Cox proportional hazards models are fit to the data, and the proposed goodness-of-fit statistic is applied to the fitted models.
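
    A schematic version of such a grouped goodness-of-fit statistic can be put together from martingale reasoning: sort subjects by risk, form groups, and compare observed event counts with model-expected cumulative hazards. The sketch below does this for a correctly specified exponential proportional hazards model on simulated data; it illustrates the grouping idea only and is not the paper's exact statistic (in particular, the G − 2 degrees of freedom are a Hosmer-Lemeshow-style heuristic).

```python
# Schematic grouped GOF check for a proportional hazards model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, beta, lam0 = 400, 0.7, 0.1
x = rng.normal(size=n)
t_event = rng.exponential(1 / (lam0 * np.exp(beta * x)))
c = rng.exponential(15, n)                    # censoring times
time = np.minimum(t_event, c)
event = (t_event <= c).astype(int)

# Expected events under the (here: true) model are the cumulative
# hazards at each follow-up time: E_i = lam0 * exp(beta*x_i) * t_i
expected = lam0 * np.exp(beta * x) * time

G = 10
order = np.argsort(x)                         # group by risk score
groups = np.array_split(order, G)
chi2 = sum((event[g].sum() - expected[g].sum())**2 / expected[g].sum()
           for g in groups)
p = stats.chi2.sf(chi2, df=G - 2)
print(f"chi2 = {chi2:.2f} on {G - 2} df, p = {p:.3f}")
```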

  9. A chironomid-based record of temperature variability during the past 4000 years in northern China and its possible societal implications

    NASA Astrophysics Data System (ADS)

    Wang, Haipeng; Chen, Jianhui; Zhang, Shengda; Zhang, David D.; Wang, Zongli; Xu, Qinghai; Chen, Shengqian; Wang, Shijin; Kang, Shichang; Chen, Fahu

    2018-03-01

    Long-term, high-resolution temperature records which combine an unambiguous proxy and precise dating are rare in China. In addition, the societal implications of past temperature change on a regional scale have not been sufficiently assessed. Here, based on the modern relationship between chironomids and temperature, we use fossil chironomid assemblages in a precisely dated sediment core from Gonghai Lake to explore temperature variability during the past 4000 years in northern China. Subsequently, we address the possible regional societal implications of temperature change through a statistical analysis of the occurrence of wars. Our results show the following. (1) The mean annual temperature (TANN) was relatively high during 4000-2700 cal yr BP, decreased gradually during 2700-1270 cal yr BP and then fluctuated during the last 1270 years. (2) A cold event in the Period of Disunity, the Sui-Tang Warm Period (STWP), the Medieval Warm Period (MWP) and the Little Ice Age (LIA) can all be recognized in the paleotemperature record, as well as in many other temperature reconstructions in China. This suggests that our chironomid-inferred temperature record for the Gonghai Lake region is representative. (3) Local wars in Shanxi Province, documented in the historical literature during the past 2700 years, are statistically significantly correlated with changes in temperature, and the relationship is a good example of the potential societal implications of temperature change on a regional scale.

  10. How precise can atoms of a nanocluster be located in 3D using a tilt series of scanning transmission electron microscopy images?

    PubMed

    Alania, M; De Backer, A; Lobato, I; Krause, F F; Van Dyck, D; Rosenauer, A; Van Aert, S

    2017-10-01

    In this paper, we investigate how precisely atoms of a small nanocluster can ultimately be located in three dimensions (3D) from a tilt series of images acquired using annular dark field (ADF) scanning transmission electron microscopy (STEM). To this end, we derive an expression for the statistical precision with which the 3D atomic position coordinates can be estimated in a quantitative analysis. Evaluating this statistical precision as a function of the microscope settings also allows us to derive the optimal experimental design. In this manner, the optimal angular tilt range, required electron dose, optimal detector angles, and number of projection images can be determined. Copyright © 2016 Elsevier B.V. All rights reserved.
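
    A useful back-of-the-envelope version of such precision estimates is the Gaussian-peak rule of thumb: the lower bound on the position precision of an isolated atom image scales as the peak width divided by the square root of the detected electron count. The sketch below applies it for assumed values of both quantities; the paper's full expression additionally accounts for tilt range, detector angles, and overlapping projections.

```python
# Rule-of-thumb sketch: position precision ~ rho / sqrt(N) for an
# isolated, near-Gaussian atom image with N detected electrons.
import numpy as np

rho = 50e-12                        # effective peak width (m), assumed
dose = np.array([1e3, 1e4, 1e5])    # detected electrons per atom, assumed
sigma = rho / np.sqrt(dose)
for d, s in zip(dose, sigma):
    print(f"N = {d:8.0f}  ->  precision ~ {s * 1e12:.2f} pm")
```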

  11. Apparatus for precision micromachining with lasers

    DOEpatents

    Chang, J.J.; Dragon, E.P.; Warner, B.E.

    1998-04-28

    A new material processing apparatus using a short-pulsed, high-repetition-rate visible laser for precision micromachining utilizes a near diffraction limited laser, a high-speed precision two-axis tilt-mirror for steering the laser beam, an optical system for either focusing or imaging the laser beam on the part, and a part holder that may consist of a cover plate and a back plate. The system is generally useful for precision drilling, cutting, milling and polishing of metals and ceramics, and has broad application in manufacturing precision components. Precision machining has been demonstrated through percussion drilling and trepanning using this system. With a 30 W copper vapor laser running at multi-kHz pulse repetition frequency, straight parallel holes with size varying from 500 microns to less than 25 microns and with aspect ratios up to 1:40 have been consistently drilled with good surface finish on a variety of metals. Micromilling and microdrilling on ceramics using a 250 W copper vapor laser have also been demonstrated with good results. Materialographic sections of machined parts show little (submicron scale) recast layer and heat affected zone. 1 fig.

  12. Precision forging technology for aluminum alloy

    NASA Astrophysics Data System (ADS)

    Deng, Lei; Wang, Xinyun; Jin, Junsong; Xia, Juchen

    2018-03-01

    Aluminum alloy is a preferred metal material for lightweight part manufacturing in aerospace, automobile, and weapon industries due to its good physical properties, such as low density, high specific strength, and good corrosion resistance. However, during forging processes, underfilling, folding, broken streamline, crack, coarse grain, and other macro- or microdefects are easily generated because of the deformation characteristics of aluminum alloys, including narrow forgeable temperature region, fast heat dissipation to dies, strong adhesion, high strain rate sensitivity, and large flow resistance. Thus, it is seriously restricted for the forged part to obtain precision shape and enhanced property. In this paper, progresses in precision forging technologies of aluminum alloy parts were reviewed. Several advanced precision forging technologies have been developed, including closed die forging, isothermal die forging, local loading forging, metal flow forging with relief cavity, auxiliary force or vibration loading, casting-forging hybrid forming, and stamping-forging hybrid forming. High-precision aluminum alloy parts can be realized by controlling the forging processes and parameters or combining precision forging technologies with other forming technologies. The development of these technologies is beneficial to promote the application of aluminum alloys in manufacturing of lightweight parts.

  13. Apparatus for precision micromachining with lasers

    DOEpatents

    Chang, Jim J.; Dragon, Ernest P.; Warner, Bruce E.

    1998-01-01

    A new material processing apparatus using a short-pulsed, high-repetition-rate visible laser for precision micromachining utilizes a near diffraction limited laser, a high-speed precision two-axis tilt-mirror for steering the laser beam, an optical system for either focusing or imaging the laser beam on the part, and a part holder that may consist of a cover plate and a back plate. The system is generally useful for precision drilling, cutting, milling and polishing of metals and ceramics, and has broad application in manufacturing precision components. Precision machining has been demonstrated through percussion drilling and trepanning using this system. With a 30 W copper vapor laser running at multi-kHz pulse repetition frequency, straight parallel holes with size varying from 500 microns to less than 25 microns and with aspect ratios up to 1:40 have been consistently drilled with good surface finish on a variety of metals. Micromilling and microdrilling on ceramics using a 250 W copper vapor laser have also been demonstrated with good results. Materialographic sections of machined parts show little (submicron scale) recast layer and heat affected zone.

  14. Use of experimental design for optimisation of the cold plasma ICP-MS determination of lithium, aluminum and iron in soft drinks and alcoholic beverages.

    PubMed

    Bianchi, F; Careri, M; Maffini, M; Mangia, A; Mucchino, C

    2003-01-01

    A sensitive method for the simultaneous determination of ⁷Li, ²⁷Al and ⁵⁶Fe by cold plasma ICP-MS was developed and validated. Experimental design was used to investigate the effects of torch position, torch power, lens 2 voltage, and coolant flow. Regression models and desirability functions were applied to find the experimental conditions providing the highest global sensitivity in a multi-elemental analysis. Validation was performed in terms of limits of detection (LOD), limits of quantitation (LOQ), linearity and precision. LODs were 1.4 and 159 ng L⁻¹ for ⁷Li and ⁵⁶Fe, respectively; the highest LOD found was that for ²⁷Al (425 ng L⁻¹). Linear ranges of 5 orders of magnitude for Li and 3 orders for Fe were statistically verified for each element. Precision was evaluated by testing two concentration levels, and good results in terms of both intra-day repeatability and intermediate precision were obtained. RSD values lower than 4.8% at the lowest concentration level were calculated for intra-day repeatability. Commercially available soft drinks and alcoholic beverages contained in different packaging materials (Tetra Pak, polyethylene terephthalate (PET), commercial cans and glass) were analysed, and all the analytes were detected and quantitated. Copyright 2002 John Wiley & Sons, Ltd.
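
    As a worked illustration of the validation figures quoted here, the sketch below computes ICH-style detection and quantitation limits (LOD = 3.3σ/S, LOQ = 10σ/S) from a calibration line. The concentrations and signals are invented, not the paper's ICP-MS data.

    ```python
    # Hedged sketch: LOD/LOQ from the residual SD (sigma) and slope (S) of a
    # calibration regression. All numbers are invented placeholders.
    import numpy as np

    conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])       # ng/L, hypothetical
    signal = np.array([0.8, 3.1, 5.6, 10.9, 26.2, 51.7])   # counts, hypothetical

    slope, intercept = np.polyfit(conc, signal, 1)
    residuals = signal - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)        # residual SD, 2 fitted parameters

    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    print(f"LOD = {lod:.3f} ng/L, LOQ = {loq:.3f} ng/L")
    ```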

  15. Further Simplification of the Simple Erosion Narrowing Score With Item Response Theory Methodology.

    PubMed

    Oude Voshaar, Martijn A H; Schenk, Olga; Ten Klooster, Peter M; Vonkeman, Harald E; Bernelot Moens, Hein J; Boers, Maarten; van de Laar, Mart A F J

    2016-08-01

    To further simplify the simple erosion narrowing score (SENS) by removing scored areas that contribute the least to its measurement precision according to analysis based on item response theory (IRT) and to compare the measurement performance of the simplified version to the original. Baseline and 18-month data of the Combinatietherapie Bij Reumatoide Artritis (COBRA) trial were modeled using longitudinal IRT methodology. Measurement precision was evaluated across different levels of structural damage. SENS was further simplified by omitting the least reliably scored areas. Discriminant validity of SENS and its simplification were studied by comparing their ability to differentiate between the COBRA and sulfasalazine arms. Responsiveness was studied by comparing standardized change scores between versions. SENS data showed good fit to the IRT model. Carpal and feet joints contributed the least statistical information to both erosion and joint space narrowing scores. Omitting the joints of the foot reduced measurement precision for the erosion score in cases with below-average levels of structural damage (relative efficiency compared with the original version ranged 35-59%). Omitting the carpal joints had minimal effect on precision (relative efficiency range 77-88%). Responsiveness of a simplified SENS without carpal joints closely approximated the original version (i.e., all Δ standardized change scores were ≤0.06). Discriminant validity was also similar between versions for both the erosion score (relative efficiency = 97%) and the SENS total score (relative efficiency = 84%). Our results show that the carpal joints may be omitted from the SENS without notable repercussion for its measurement performance. © 2016, American College of Rheumatology.

  16. Reduction to Outside the Atmosphere and Statistical Tests Used in Geneva Photometry

    NASA Technical Reports Server (NTRS)

    Rufener, F.

    1984-01-01

    Conditions for creating a precise photometric system are investigated. The analytical and discriminatory potential of a photometric system obviously results from the localization of the passbands in the spectrum; it does, however, also depend critically on the precision attained. This precision is the result of two different types of precautions. Two procedures which contribute efficiently to achieving greater precision are examined; these two methods are known as hardware-related precision and software-related precision.

  17. Statistical issues in the design, conduct and analysis of two large safety studies.

    PubMed

    Gaffney, Michael

    2016-10-01

    The emergence, post approval, of serious medical events, which may be associated with the use of a particular drug or class of drugs, is an important public health and regulatory issue. The best method to address this issue is through a large, rigorously designed safety study. Therefore, it is important to elucidate the statistical issues involved in these large safety studies. Two such studies are PRECISION and EAGLES. PRECISION is the primary focus of this article. PRECISION is a non-inferiority design with a clinically relevant non-inferiority margin. Statistical issues in the design, conduct and analysis of PRECISION are discussed. Quantitative and clinical aspects of the selection of the composite primary endpoint, the determination and role of the non-inferiority margin in a large safety study and the intent-to-treat and modified intent-to-treat analyses in a non-inferiority safety study are shown. Protocol changes that were necessary during the conduct of PRECISION are discussed from a statistical perspective. Issues regarding the complex analysis and interpretation of the results of PRECISION are outlined. EAGLES is presented as a large, rigorously designed safety study when a non-inferiority margin was not able to be determined by a strong clinical/scientific method. In general, when a non-inferiority margin is not able to be determined, the width of the 95% confidence interval is a way to size the study and to assess the cost-benefit of relative trial size. A non-inferiority margin, when able to be determined by a strong scientific method, should be included in a large safety study. Although these studies could not be called "pragmatic," they are examples of best real-world designs to address safety and regulatory concerns. © The Author(s) 2016.

  18. A Double Perturbation Method for Reducing Dynamical Degradation of the Digital Baker Map

    NASA Astrophysics Data System (ADS)

    Liu, Lingfeng; Lin, Jun; Miao, Suoxia; Liu, Bocheng

    2017-06-01

    The digital Baker map is widely used in different kinds of cryptosystems, especially for image encryption. However, any chaotic map which is realized on the finite precision device (e.g. computer) will suffer from dynamical degradation, which refers to short cycle lengths, low complexity and strong correlations. In this paper, a novel double perturbation method is proposed for reducing the dynamical degradation of the digital Baker map. Both state variables and system parameters are perturbed by the digital logistic map. Numerical experiments show that the perturbed Baker map can achieve good statistical and cryptographic properties. Furthermore, a new image encryption algorithm is provided as a simple application. With a rather simple algorithm, the encrypted image can achieve high security, which is competitive to the recently proposed image encryption algorithms.
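
    A loose sketch of the double-perturbation idea under stated assumptions: a digital (fixed-lattice) Baker map whose state variables and split-point parameter are periodically perturbed by a digital logistic map. The lattice size, coupling and perturbation interval are invented for illustration and are not the authors' exact scheme.

    ```python
    # Hedged sketch of a doubly perturbed digital Baker map. The coupling and
    # constants below are illustrative assumptions, not the paper's design.
    N = 2**16                          # finite precision: an N x N state lattice

    def logistic(z, r=3.99):
        # Digital logistic map on the same N-point lattice.
        zf = z / N
        return int(r * zf * (1.0 - zf) * N) % N

    def baker(x, y, s):
        # Digital generalized Baker map with split point s (the system parameter).
        if x < s:
            return (x * N) // s, (y * s) // N
        return ((x - s) * N) // (N - s), s + (y * (N - s)) // N

    def perturbed_orbit(x, y, z, steps, interval=64):
        s = N // 2                     # nominal parameter value
        out = []
        for t in range(steps):
            x, y = baker(x, y, s)
            z = logistic(z)
            if t % interval == 0:
                x = (x + z) % N                 # perturb the state variables
                y = (y + logistic(z)) % N
                s = N // 4 + z % (N // 2)       # perturb the system parameter
            out.append((x, y))
        return out

    print(perturbed_orbit(12345, 54321, 777, steps=5))
    ```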

  19. Photon asymmetry measurements of $\overrightarrow{\gamma}p \rightarrow \pi^{0}p$ for $E_{\gamma} =$ 320-650 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, S.; Howdle, D.; Sikora, M. H.

    High-statistics measurements of the photon asymmetry Σ for the $\overrightarrow{\gamma}p \rightarrow \pi^{0}p$ reaction have been made in the center-of-mass energy range W = 1214–1450 MeV. The data were measured with the MAMI A2 real photon beam and Crystal Ball/TAPS detector systems in Mainz, Germany. The resulting measurements significantly improve the existing world data and are shown to be in good agreement with previous measurements, and with the MAID, SAID, and Bonn-Gatchina predictions. We have also combined the photon asymmetry results with recent cross-section measurements from Mainz to calculate the profile functions, $\check{\Sigma}$ ($= \sigma_{0}\Sigma$), and perform a moment analysis. Comparison with calculations from the Bonn-Gatchina model shows that the precision of the data is good enough to further constrain the higher partial waves, and there is an indication of interference between the very small F-waves and the N(1520)3/2⁻ and N(1535)1/2⁻ resonances.

  20. Photon asymmetry measurements of $\overrightarrow{\gamma}p \rightarrow \pi^{0}p$ for $E_{\gamma} =$ 320-650 MeV

    DOE PAGES

    Gardner, S.; Howdle, D.; Sikora, M. H.; ...

    2016-11-17

    High-statistics measurements of the photon asymmetry Σ for the $\overrightarrow{\gamma}p \rightarrow \pi^{0}p$ reaction have been made in the center-of-mass energy range W = 1214–1450 MeV. The data were measured with the MAMI A2 real photon beam and Crystal Ball/TAPS detector systems in Mainz, Germany. The resulting measurements significantly improve the existing world data and are shown to be in good agreement with previous measurements, and with the MAID, SAID, and Bonn-Gatchina predictions. We have also combined the photon asymmetry results with recent cross-section measurements from Mainz to calculate the profile functions, $\check{\Sigma}$ ($= \sigma_{0}\Sigma$), and perform a moment analysis. Comparison with calculations from the Bonn-Gatchina model shows that the precision of the data is good enough to further constrain the higher partial waves, and there is an indication of interference between the very small F-waves and the N(1520)3/2⁻ and N(1535)1/2⁻ resonances.

  1. Further steps in the modeling of behavioural crowd dynamics, good news for safe handling. Comment on "Human behaviours in evacuation crowd dynamics: From modelling to "big data" toward crisis management" by Nicola Bellomo et al.

    NASA Astrophysics Data System (ADS)

    Knopoff, Damián A.

    2016-09-01

    The recent review paper [4] constitutes a valuable contribution to the understanding, modeling and simulation of crowd dynamics in extreme situations. It provides a very comprehensive review of the complexity features of the system under consideration, of scaling, and of the consequent justification of the methods used. In particular, macroscopic and microscopic models have so far been used to model crowd dynamics [9], and the authors appropriately explain that working at the mesoscale is a good choice to deal with the heterogeneous behaviour of walkers as well as with the difficulty of their deterministic identification. In this way, methods based on kinetic theory and statistical dynamics are employed, more precisely the so-called kinetic theory for active particles [7]. This approach has successfully been applied in the modeling of several complex dynamics, with recent applications to learning [2,8], which constitutes the key to understanding communication and is of great importance in social dynamics and the behavioral sciences.

  2. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions.

    PubMed

    Belal, Tarek S; El-Kafrawy, Dina S; Mahrous, Mohamed S; Abdel-Khalek, Magdi M; Abo-Gharam, Amira H

    2016-02-15

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions

    NASA Astrophysics Data System (ADS)

    Belal, Tarek S.; El-Kafrawy, Dina S.; Mahrous, Mohamed S.; Abdel-Khalek, Magdi M.; Abo-Gharam, Amira H.

    2016-02-01

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method.

  4. Recovering physical properties from narrow-band photometry

    NASA Astrophysics Data System (ADS)

    Schoenell, W.; Cid Fernandes, R.; Benítez, N.; Vale Asari, N.

    2013-05-01

    Our aim in this work is to answer, using simulated narrow-band photometry data, the following general question: What can we learn about galaxies from these new-generation cosmological surveys? For instance, can we estimate stellar age and metallicity distributions? Can we separate star-forming galaxies from AGN? Can we measure emission lines, nebular abundances and extinction? With what precision? To accomplish this, we selected a sample of about 300k galaxies with good S/N from the SDSS and divided them into two groups: 200k objects and a template library of 100k. We corrected the spectra to z = 0 and converted them to filter fluxes. Using a statistical approach, we calculated a Probability Distribution Function (PDF) for each property of each object and the library. Since we have the properties of all the data from the STARLIGHT-SDSS database, we could compare them with the results obtained from summaries of the PDF (mean, median, etc.). Our results show that we retrieve the weighted average of the log of the galaxy age with a good error margin (σ ≈ 0.1-0.2 dex), and similarly for physical properties such as the mass-to-light ratio, mean stellar metallicity, etc. Furthermore, our main result is that we can derive emission line intensities and ratios with similar precision. This makes this method unique in comparison to the other methods on the market for analyzing photometry data and shows that, from the point of view of galaxy studies, future photometric surveys will be much more useful than anticipated.

  5. Crossing statistic: reconstructing the expansion history of the universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafieloo, Arman, E-mail: arman@ewha.ac.kr

    2012-08-01

    We show that by combining the Crossing Statistic [1,2] and the Smoothing method [3-5] one can reconstruct the expansion history of the universe with very high precision without assuming any prior on cosmological quantities such as the equation of state of dark energy. The presented method performs very well in reconstructing the expansion history of the universe independent of the underlying models, and it works well even for non-trivial dark energy models with fast or slow changes in the equation of state of dark energy. The accuracy of the reconstructed quantities, along with the independence of the method from any prior or assumption, gives the proposed method advantages over the other non-parametric methods proposed before in the literature. Applying the method to the Union 2.1 supernovae combined with WiggleZ BAO data, we present the reconstructed results and test the consistency of the two data sets in a model-independent manner. Results show that the latest available supernovae and BAO data are in good agreement with each other and that a spatially flat ΛCDM model is in concordance with the current data.

  6. Experimentally probing topological order and its breakdown through modular matrices

    NASA Astrophysics Data System (ADS)

    Luo, Zhihuang; Li, Jun; Li, Zhaokai; Hung, Ling-Yan; Wan, Yidun; Peng, Xinhua; Du, Jiangfeng

    2018-02-01

    The modern concept of phases of matter has undergone tremendous developments since the first observation of topologically ordered states in fractional quantum Hall systems in the 1980s. In this paper, we explore the following question: in principle, how much detail of the physics of topological orders can be observed using state-of-the-art technologies? We find that, using surprisingly little data (namely, the toric code Hamiltonian in the presence of generic disorders and detuning from its exactly solvable point), the modular matrices, which characterize the anyonic statistics that are among the most fundamental fingerprints of topological orders, can be reconstructed with very good accuracy solely by experimental means. This is an experimental realization of these fundamental signatures of a topological order, a test of their robustness against perturbations, and a proof of principle that current technologies have attained the precision to identify phases of matter and, as such, probe an extended region of phase space around the soluble point before its breakdown. Given the special role of anyonic statistics in quantum computation, our work promises myriad applications both in probing and realistically harnessing these exotic phases of matter.

  7. The effect of statistical noise on IMRT plan quality and convergence for MC-based and MC-correction-based optimized treatment plans.

    PubMed

    Siebers, Jeffrey V

    2008-04-04

    Monte Carlo (MC) is rarely used for IMRT plan optimization outside of research centres due to the extensive computational resources or long computation times required to complete the process. Time can be reduced by degrading the statistical precision of the MC dose calculation used within the optimization loop. However, this eventually introduces optimization convergence errors (OCEs). This study determines the statistical noise levels tolerated during MC-IMRT optimization under the condition that the optimized plan has OCEs <100 cGy (1.5% of the prescription dose) for MC-optimized IMRT treatment plans. Seven-field prostate IMRT treatment plans for 10 prostate patients are used in this study. Pre-optimization is performed for deliverable beams with a pencil-beam (PB) dose algorithm. Further deliverable-based optimization proceeds using: (1) MC-based optimization, where dose is recomputed with MC after each intensity update, or (2) a once-corrected (OC) MC-hybrid optimization, where an MC dose computation defines beam-by-beam dose correction matrices that are used during a PB-based optimization. Optimizations are performed with nominal per-beam MC statistical precisions of 2, 5, 8, 10, 15, and 20%. Following optimizer convergence, beams are re-computed with MC using 2% per-beam nominal statistical precision, and the 2 PTV and 10 OAR dose indices used in the optimization objective function are tallied. For both the MC-optimization and OC-optimization methods, statistical equivalence tests found that OCEs are less than 1.5% of the prescription dose for plans optimized with nominal statistical uncertainties of up to 10% per beam. The achieved statistical uncertainty in the patient for the 10% per-beam simulations from the combination of the 7 beams is ~3% with respect to maximum dose for voxels with D > 0.5 D_max. The MC dose computation time for the OC-optimization is only 6.2 minutes on a single 3 GHz processor, with results clinically equivalent to high-precision MC computations.

  8. Statistical characterization of short wind waves from stereo images of the sea surface

    NASA Astrophysics Data System (ADS)

    Mironov, Alexey; Yurovskaya, Maria; Dulov, Vladimir; Hauser, Danièle; Guérin, Charles-Antoine

    2013-04-01

    We propose a methodology to extract short-scale statistical characteristics of the sea surface topography by means of stereo image reconstruction. The possibilities and limitations of the technique are discussed and tested on a data set acquired from an oceanographic platform in the Black Sea. The analysis shows that reconstruction of the topography based on the stereo method is an efficient way to derive non-trivial statistical properties of short and intermediate surface waves (say from 1 centimeter to 1 meter). Most technical issues pertaining to this type of dataset (limited range of scales, lacunarity of data or irregular sampling) can be partially overcome by appropriate processing of the available points. The proposed technique also allows one to avoid linear interpolation, which dramatically corrupts properties of retrieved surfaces. The processing technique imposes that the field of elevations be polynomially detrended, which has the effect of filtering out the large scales. Hence the statistical analysis can only address the small-scale components of the sea surface. The precise cut-off wavelength, which is approximately half the patch size, can be obtained by applying a high-pass frequency filter to the reference gauge time records. The results obtained for the one- and two-point statistics of small-scale elevations are shown to be consistent, at least in order of magnitude, with the corresponding gauge measurements as well as other experimental measurements available in the literature. The calculation of the structure functions provides a powerful tool to investigate spectral and statistical properties of the field of elevations. Experimental parametrization of the third-order structure function, the so-called skewness function, is one of the most important and original outcomes of this study. This function is of primary importance in analytical models of scattering from the sea surface and was up to now unavailable in field conditions. Due to the lack of precise reference measurements for the small-scale wave field, we could not quantify exactly the accuracy of the retrieval technique. However, it appeared clearly that the obtained accuracy is good enough for the estimation of second-order statistical quantities (such as the correlation function), acceptable for third-order quantities (such as the skewness function) and insufficient for fourth-order quantities (such as the kurtosis). Therefore, the stereo technique at the present stage should not be thought of as a self-contained universal tool to characterize the surface statistics. Instead, it should be used in conjunction with other well-calibrated but sparse reference measurements (such as wave gauges) for cross-validation and calibration. It then completes the statistical analysis inasmuch as it provides a snapshot of the three-dimensional field and allows for the evaluation of higher-order spatial statistics.
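
    A sketch of the structure-function estimator central to this analysis, S_n(r) = <[η(x+r) − η(x)]^n>, evaluated along one axis of a detrended elevation patch. The Gaussian random field below is a stand-in assumption for the stereo-reconstructed surface.

    ```python
    # Hedged sketch: order-n structure functions along one axis of an elevation
    # field. The synthetic field is a placeholder for stereo-derived elevations.
    import numpy as np

    rng = np.random.default_rng(1)
    eta = rng.normal(0.0, 0.01, size=(256, 256))    # placeholder elevations (m)
    eta -= eta.mean()                               # crude stand-in for detrending

    def structure_function(field, order, max_lag=64):
        lags = np.arange(1, max_lag)
        s = np.empty(lags.size)
        for i, r in enumerate(lags):
            diff = field[:, r:] - field[:, :-r]     # differences at lag r (pixels)
            s[i] = np.mean(diff ** order)
        return lags, s

    lags, s2 = structure_function(eta, order=2)     # relates to the correlation
    lags, s3 = structure_function(eta, order=3)     # the skewness function
    print(s2[:3], s3[:3])
    ```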

  9. Determining wave direction using curvature parameters.

    PubMed

    de Queiroz, Eduardo Vitarelli; de Carvalho, João Luiz Baptista

    2016-01-01

    The curvature of the sea wave was tested as a parameter for estimating wave direction, in the search for better results in estimates of wave direction in shallow waters, where waves of different sizes, frequencies and directions intersect and are difficult to characterize. We used numerical simulations of the sea surface to determine wave direction calculated from the curvature of the waves. Using 1000 numerical simulations, the statistical variability of the wave direction was determined. The results showed good performance of the curvature parameter for estimating wave direction. Accuracy in the estimates was improved by including wave slope parameters in addition to curvature. The results indicate that curvature is a promising technique for estimating wave directions.
    • In this study, the accuracy and precision of curvature parameters to measure wave direction are analyzed using a model simulation that generates 1000 wave records with directional resolution.
    • The model allows the simultaneous simulation of time-series wave properties such as sea surface elevation, slope and curvature, which were used to analyze the variability of estimated directions.
    • The simultaneous acquisition of slope and curvature parameters can contribute to estimates of wave direction, thus increasing the accuracy and precision of results.

  10. A comparative study of first-derivative spectrophotometry and column high-performance liquid chromatography applied to the determination of repaglinide in tablets and for dissolution testing.

    PubMed

    AlKhalidi, Bashar A; Shtaiwi, Majed; AlKhatib, Hatim S; Mohammad, Mohammad; Bustanji, Yasser

    2008-01-01

    A fast and reliable method for the determination of repaglinide is highly desirable to support formulation screening and quality control. A first-derivative UV spectroscopic method was developed for the determination of repaglinide in tablet dosage form and for dissolution testing. First-derivative UV absorbance was measured at 253 nm. The developed method was validated for linearity, accuracy, precision, limit of detection (LOD), and limit of quantitation (LOQ) in comparison to the U.S. Pharmacopeia (USP) column high-performance liquid chromatographic (HPLC) method. The first-derivative UV spectrophotometric method showed excellent linearity [correlation coefficient (r) = 0.9999] in the concentration range of 1-35 microg/mL and precision (relative standard deviation < 1.5%). The LOD and LOQ were 0.23 and 0.72 microg/mL, respectively, and good recoveries were achieved (98-101.8%). Statistical comparison of results of the first-derivative UV spectrophotometric and the USP HPLC methods using the t-test showed that there was no significant difference between the 2 methods. Additionally, the method was successfully used for the dissolution test of repaglinide and was found to be reliable, simple, fast, and inexpensive.
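
    A generic sketch of forming a first-derivative reading at 253 nm from a raw UV spectrum via Savitzky-Golay differentiation; the synthetic absorption band and the filter settings are assumptions for illustration, not the paper's validated instrument procedure.

    ```python
    # Hedged sketch: first-derivative UV amplitude at 253 nm from a synthetic band.
    import numpy as np
    from scipy.signal import savgol_filter

    wl = np.arange(220.0, 320.0, 0.5)                   # wavelength grid (nm)
    absorbance = np.exp(-((wl - 260.0) / 15.0) ** 2)    # placeholder spectrum

    # First derivative dA/dlambda with smoothing (window and order are arbitrary).
    d1 = savgol_filter(absorbance, window_length=11, polyorder=3, deriv=1, delta=0.5)

    idx = np.argmin(np.abs(wl - 253.0))
    print(f"First-derivative amplitude at 253 nm: {d1[idx]:.4f}")
    ```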

  11. Dynamics of statistical distance: Quantum limits for two-level clocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braunstein, S.L.; Milburn, G.J.

    1995-03-01

    We study the evolution of statistical distance on the Bloch sphere under unitary and nonunitary dynamics. This corresponds to studying the limits to clock precision for a clock constructed from a two-state system. We find that the initial motion away from pure states under nonunitary dynamics yields the greatest accuracy for a "one-tick" clock; in this case the clock's precision is not limited by the largest frequency of the system.

  12. Time Delay Embedding Increases Estimation Precision of Models of Intraindividual Variability

    ERIC Educational Resources Information Center

    von Oertzen, Timo; Boker, Steven M.

    2010-01-01

    This paper investigates the precision of parameters estimated from local samples of time dependent functions. We find that "time delay embedding," i.e., structuring data prior to analysis by constructing a data matrix of overlapping samples, increases the precision of parameter estimates and in turn statistical power compared to standard…
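
    A minimal sketch of the construction this abstract names: a univariate series restructured into a matrix of overlapping lagged samples. The dimension and lag values are arbitrary illustration choices.

    ```python
    # Hedged sketch: build a time-delay embedding matrix from a scalar series.
    import numpy as np

    def delay_embed(x, dim, lag=1):
        """Rows are [x(t), x(t+lag), ..., x(t+(dim-1)*lag)]."""
        n = len(x) - (dim - 1) * lag
        return np.array([x[i : i + dim * lag : lag] for i in range(n)])

    x = np.sin(np.linspace(0.0, 8.0 * np.pi, 100))
    X = delay_embed(x, dim=5, lag=2)
    print(X.shape)   # (92, 5): overlapping samples ready for model fitting
    ```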

  13. An Ultra-high Resolution Synthetic Precipitation Data for Ungauged Sites

    NASA Astrophysics Data System (ADS)

    Kim, Hong-Joong; Choi, Kyung-Min; Oh, Jai-Ho

    2018-05-01

    Despite the enormous damage caused by record heavy rainfall, the amount of precipitation in areas without observation points cannot be known precisely. One way to overcome these difficulties is to estimate meteorological data at ungauged sites. In this study, we have used observation data over Seoul city to calculate high-resolution (250-meter resolution) synthetic precipitation over a 10-year (2005-2014) period. Furthermore, three cases are analyzed by evaluating the rainfall intensity and performing statistical analysis over the 10-year period. In the case where the typhoon "Meari" passed to the west coast during 28-30 June 2011, the Pearson correlation coefficient was 0.93 for seven validation points, which implies that the temporal correlation between the observed precipitation and synthetic precipitation was very good. The time series of synthetic precipitation almost completely matches that of the observed rainfall over this period. On June 28-29, 2011, the estimate of 10 to 30 mm h⁻¹ of continuous strong precipitation was correct. In addition, it is shown that the synthetic precipitation closely follows the observed precipitation for all three cases. Statistical analysis of the 10 years of data reveals a very high correlation coefficient between synthetic precipitation and observed rainfall (0.86). Thus, the synthetic precipitation data show good agreement with the observations. Therefore, the 250-m resolution synthetic precipitation amount calculated in this study is useful as basic data in weather applications, such as urban flood detection.

  14. Statistical methods for conducting agreement (comparison of clinical tests) and precision (repeatability or reproducibility) studies in optometry and ophthalmology.

    PubMed

    McAlinden, Colm; Khadka, Jyoti; Pesudovs, Konrad

    2011-07-01

    The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous and should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.
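
    A minimal sketch of the Altman-Bland limits-of-agreement calculation this paper recommends over correlation coefficients; the paired instrument readings are invented.

    ```python
    # Hedged sketch: bias and 95% limits of agreement for two paired methods.
    import numpy as np

    a = np.array([12.1, 13.4, 11.8, 14.2, 12.9, 13.7])   # instrument A (units)
    b = np.array([12.4, 13.1, 12.2, 14.6, 12.7, 14.0])   # instrument B (units)

    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    lower, upper = bias - 1.96 * sd, bias + 1.96 * sd    # limits of agreement
    print(f"bias = {bias:.2f}, 95% LoA = [{lower:.2f}, {upper:.2f}]")
    ```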

  15. Quality control of estrogen receptor assays.

    PubMed

    Godolphin, W; Jacobson, B

    1980-01-01

    Four types of material have been used for the quality control of routine assays of estrogen receptors in human breast tumors. Pieces of hormone-dependent Nb rat mammary tumors gave a precision of about 40%. Rat uteri and rat tumors pulverized at liquid nitrogen temperature and stored as powder yielded a precision of about 30%. Powdered and lyophilised human tumors appear to be the best, with precision as good as 17%.

  16. Population Estimates for Chum Salmon Spawning in the Mainstem Columbia River, 2002 Technical Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rawding, Dan; Hillson, Todd D.

    2003-11-15

    Accurate and precise population estimates of chum salmon (Oncorhynchus keta) spawning in the mainstem Columbia River are needed to provide a basis for informed water allocation decisions, to determine the status of chum salmon listed under the Endangered Species Act, and to evaluate the contribution of the Duncan Creek re-introduction program to mainstem spawners. Currently, mark-recapture experiments using the Jolly-Seber model provide the only framework for this type of estimation. In 2002, a study was initiated to estimate mainstem Columbia River chum salmon populations using seining data collected while capturing broodstock as part of the Duncan Creek re-introduction. The five assumptions of the Jolly-Seber model were examined using hypothesis testing within a statistical framework, including goodness of fit tests and secondary experiments. We used POPAN 6, an integrated computer system for the analysis of capture-recapture data, to obtain maximum likelihood estimates of standard model parameters, derived estimates, and their precision. A more parsimonious final model was selected using Akaike Information Criteria. Final chum salmon escapement estimates and (standard error) from seining data for the Ives Island, Multnomah, and I-205 sites are 3,179 (150), 1,269 (216), and 3,468 (180), respectively. The Ives Island estimate is likely lower than the total escapement because only the largest two of four spawning sites were sampled. The accuracy and precision of these estimates would improve if seining was conducted twice per week instead of weekly, and by incorporating carcass recoveries into the analysis. Population estimates derived from seining mark-recapture data were compared to those obtained using the current mainstem Columbia River salmon escapement methodologies. The Jolly-Seber population estimate from carcass tagging in the Ives Island area was 4,232 adults with a standard error of 79. This population estimate appears reasonable and precise but batch marks and lack of secondary studies made it difficult to test Jolly-Seber assumptions, necessary for unbiased estimates. We recommend that individual tags be applied to carcasses to provide a statistical basis for goodness of fit tests and ultimately model selection. Secondary or double marks should be applied to assess tag loss and male and female chum salmon carcasses should be enumerated separately. Carcass tagging population estimates at the two other sites were biased low due to limited sampling. The Area-Under-the-Curve escapement estimates at all three sites were 36% to 76% of Jolly-Seber estimates. Area-Under-the-Curve estimates are likely biased low because previous assumptions that observer efficiency is 100% and residence time is 10 days proved incorrect. If managers continue to rely on Area-Under-the-Curve to estimate mainstem Columbia River spawners, a methodology is provided to develop annual estimates of observer efficiency and residence time, and to incorporate uncertainty into the Area-Under-the-Curve escapement estimate.
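
    The study itself fits full Jolly-Seber models in POPAN 6; as a deliberately simplified, hedged illustration of the underlying mark-recapture logic, the sketch below implements the two-sample Chapman form of the Lincoln-Petersen estimator with its standard error. The counts are invented.

    ```python
    # Hedged sketch: Chapman's nearly unbiased two-sample abundance estimator.
    def chapman_estimate(n1, n2, m2):
        """n1 marked in sample 1, n2 caught in sample 2, m2 marked recaptures."""
        n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
        var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
               / ((m2 + 1) ** 2 * (m2 + 2)))
        return n_hat, var ** 0.5

    n_hat, se = chapman_estimate(n1=250, n2=300, m2=24)
    print(f"abundance estimate ~ {n_hat:.0f} (SE {se:.0f})")
    ```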

  17. Determination of Monensin in Bovine Tissues: A Bridging Study Comparing the Bioautographic Method (FSIS CLG-MON) with a Liquid Chromatography-Tandem Mass Spectrometry Method (OMA 2011.24).

    PubMed

    Mizinga, Kemmy M; Burnett, Thomas J; Brunelle, Sharon L; Wallace, Michael A; Coleman, Mark R

    2018-05-01

    The U.S. Department of Agriculture, Food Safety Inspection Service regulatory method for monensin, Chemistry Laboratory Guidebook CLG-MON, is a semiquantitative bioautographic method adopted in 1991. Official Method of Analysis (OMA) 2011.24, a modern quantitative and confirmatory LC-tandem MS method, uses no chlorinated solvents and has several advantages, including ease of use, ready availability of reagents and materials, shorter run-time, and higher throughput than CLG-MON. Therefore, a bridging study was conducted to support the replacement of method CLG-MON with OMA 2011.24 for regulatory use. Using fortified bovine tissue samples, CLG-MON yielded accuracies of 80-120% in 44 of the 56 samples tested (one sample had no result, six samples had accuracies of >120%, and five samples had accuracies of 40-160%), but the semiquantitative nature of CLG-MON prevented assessment of precision, whereas OMA 2011.24 had accuracies of 88-110% and RSDr of 0.00-15.6%. Incurred residue results corroborated these results, demonstrating improved accuracy (83.3-114%) and good precision (RSDr of 2.6-20.5%) for OMA 2011.24 compared with CLG-MON (accuracy generally within 80-150%, with exceptions). Furthermore, χ² analysis revealed no statistically significant difference between the two methods. Thus, the microbiological activity of monensin correlated with the determination of monensin A in bovine tissues, and OMA 2011.24 provided improved accuracy and precision over CLG-MON.

  18. Statistical evaluation of rainfall-simulator and erosion testing procedure : final report.

    DOT National Transportation Integrated Search

    1977-01-01

    The specific aims of this study were (1) to supply documentation of statistical repeatability and precision of the rainfall-simulator and to document the statistical repeatability of the soil-loss data when using the previously recommended tentative l...

  19. Calculation of precise firing statistics in a neural network model

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2017-08-01

    A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network, which works depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model with the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.

  20. Precision of measurement and body size in whole-body air-displacement plethysmography.

    PubMed

    Wells, J C; Fuller, N J

    2001-08-01

    To investigate methodological and biological precision for air-displacement plethysmography (ADP) across a wide range of body size. Repeated measurements of body volume (BV) and body weight (WT), and derived estimates of density (BD) and indices of fat mass (FM) and fat-free mass (FFM). Sixteen men, aged 22-48 y; 12 women, aged 24-42 y; 13 boys, aged 5-14 y; 17 girls, aged 5-16 y. BV and WT were measured using the Bodpod ADP system from which estimates of BD, FM and FFM were derived. FM and FFM were further adjusted for height to give fat mass index (FMI) and fat-free mass index (FFMI). ADP is very precise for measuring both BV and BD (between 0.16 and 0.44% of the mean). After removing two outliers from the database, and converting BD to body composition, precision of FMI was <6% in adults and within 8% in children, while precision of FFMI was within 1.5% for both age groups. ADP shows good precision for BV and BD across a wide range of body size, subject to biological artefacts. If aberrant values can be identified and rejected, precision of body composition is also good. Aberrant values can be identified by using pairs of ADP procedures, allowing the rejection of data where successive BD values differed by >0.007 kg/l. Precision of FMI obtained using pairs of procedures improves to <4.5% in adults and <5.5% in children.
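
    A sketch combining the record's two quantitative points: reject aberrant pairs whose successive body-density (BD) values differ by more than 0.007 kg/l, then summarize the precision of the accepted duplicates via the technical error of measurement. All numbers are invented.

    ```python
    # Hedged sketch: pair rejection and duplicate-based precision for ADP.
    import numpy as np

    bd_pairs = np.array([              # two successive BD measurements (kg/l)
        [1.0452, 1.0448],
        [1.0310, 1.0325],
        [1.0561, 1.0470],              # |difference| > 0.007 kg/l -> rejected
    ])

    keep = np.abs(bd_pairs[:, 0] - bd_pairs[:, 1]) <= 0.007
    accepted = bd_pairs[keep]

    d = accepted[:, 0] - accepted[:, 1]
    tem = np.sqrt((d ** 2).sum() / (2 * len(d)))     # technical error of measurement
    cv = 100.0 * tem / accepted.mean()               # precision as a percentage
    print(f"kept {keep.sum()} of {len(bd_pairs)} pairs; "
          f"TEM = {tem:.5f} kg/l, CV = {cv:.3f}%")
    ```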

  1. Design of a novel instrument for active neutron interrogation of artillery shells.

    PubMed

    Bélanger-Champagne, Camille; Vainionpää, Hannes; Peura, Pauli; Toivonen, Harri; Eerola, Paula; Dendooven, Peter

    2017-01-01

    The most common explosives can be uniquely identified by measuring the elemental H/N ratio with a precision better than 10%. Monte Carlo simulations were used to design two variants of a new prompt gamma neutron activation instrument that can achieve this precision. The instrument features an intense pulsed neutron generator with precise timing. Measuring the hydrogen peak from the target explosive is especially challenging because the instrument itself contains hydrogen, which is needed for neutron moderation and shielding. By iterative design optimization, the fraction of the hydrogen peak counts coming from the explosive under interrogation increased from $53^{+7}_{-7}$% to $74^{+8}_{-10}$% (statistical only) for the benchmark design. In the optimized design variants, the hydrogen signal from a high-explosive shell can be measured to a statistics-only precision better than 1% in less than 30 minutes for an average neutron production yield of 10⁹ n/s.

  2. Design of a novel instrument for active neutron interrogation of artillery shells

    PubMed Central

    Vainionpää, Hannes; Peura, Pauli; Toivonen, Harri; Eerola, Paula; Dendooven, Peter

    2017-01-01

    The most common explosives can be uniquely identified by measuring the elemental H/N ratio with a precision better than 10%. Monte Carlo simulations were used to design two variants of a new prompt gamma neutron activation instrument that can achieve this precision. The instrument features an intense pulsed neutron generator with precise timing. Measuring the hydrogen peak from the target explosive is especially challenging because the instrument itself contains hydrogen, which is needed for neutron moderation and shielding. By iterative design optimization, the fraction of the hydrogen peak counts coming from the explosive under interrogation increased from $53^{+7}_{-7}$% to $74^{+8}_{-10}$% (statistical only) for the benchmark design. In the optimized design variants, the hydrogen signal from a high-explosive shell can be measured to a statistics-only precision better than 1% in less than 30 minutes for an average neutron production yield of 10⁹ n/s. PMID:29211773

  3. Molecular weight distribution of polysaccharides from edible seaweeds by high-performance size-exclusion chromatography (HPSEC).

    PubMed

    Gómez-Ordóñez, Eva; Jiménez-Escrig, Antonio; Rupérez, Pilar

    2012-05-15

    Biological properties of polysaccharides from seaweeds are related to their composition and structure. Many factors such as the kind of sugar, type of linkage or sulfate content of algal biopolymers exert an influence on the relationship between structure and function. Besides, the molecular weight (MW) also plays an important role. Thus, a simple, reliable and fast HPSEC method with refractive index detection was developed and optimized for the MW estimation of soluble algal polysaccharides. Chromatogram shape and repeatability of retention time was considerably improved when sodium nitrate was used instead of ultrapure water as mobile phase. Pullulan and dextran standards of different MW were used for method calibration and validation. Also, main polysaccharide standards from brown (alginate, fucoidan, laminaran) and red seaweeds (kappa- and iota-carrageenan) were used for quantification and method precision and accuracy. Relative standard deviation (RSD) of repeatability for retention time, peak areas and inter-day precision was below 0.7%, 2.5% and 2.6%, respectively, which indicated good repeatability and precision. Recoveries (96.3-109.8%) also demonstrated the method's fairly good accuracy. Regarding linearity, main polysaccharide standards from brown or red seaweeds showed a highly satisfactory correlation coefficient (r>0.999). Moreover, a good sensitivity was shown, with corresponding limits of detection and quantitation in mg/mL of 0.05-0.21 and 0.16-0.31, respectively. The method was applied to the MW estimation of standard algal polysaccharides, as well as to the soluble polysaccharide fractions from the brown seaweed Saccharina latissima and the red Mastocarpus stellatus, respectively. Although the distribution of molecular weight was broad, the good repeatability of retention time provided good precision in the MW estimation of polysaccharides. Water- and alkali-soluble fractions from S. latissima ranged from very high (>2400 kDa) to low MW compounds (<6 kDa); this high heterogeneity could be attributable to the complex polysaccharide composition of brown algae. Regarding M. stellatus, sulfated galactans followed a descending order of MW (>1400 kDa to <10 kDa), related to the different solubility of carrageenans in red seaweeds. In summary, the method developed allows for the molecular weight analysis of seaweed polysaccharides with very good precision, accuracy, linearity and sensitivity within a short time. Copyright © 2012 Elsevier B.V. All rights reserved.
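
    A sketch of the log-linear calibration underlying HPSEC MW estimation: regress log10(MW) of pullulan/dextran standards on retention time, then invert for an unknown. The standard values are illustrative, not the paper's data.

    ```python
    # Hedged sketch: HPSEC molecular-weight calibration and estimation.
    import numpy as np

    rt = np.array([7.2, 7.9, 8.6, 9.4, 10.1])           # retention times (min)
    mw = np.array([8.0e5, 2.0e5, 5.0e4, 1.2e4, 3.0e3])  # standard MW (Da)

    coef = np.polyfit(rt, np.log10(mw), 1)              # log-linear calibration

    def estimate_mw(rt_sample):
        return 10.0 ** np.polyval(coef, rt_sample)

    print(f"MW at 8.2 min ~ {estimate_mw(8.2):.3g} Da")
    ```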

  4. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    An aero-engine is a complex mechanical-electronic system; in the reliability analysis of such systems, the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Due to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a variety of engine failure modes can be taken into account with a mixed Weibull distribution model, so it is a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation-coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimate more accurate, thus greatly improving the precision of the mixed-distribution reliability model. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
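
    A minimal sketch of the two-component mixed Weibull reliability function advocated here, R(t) = Σᵢ wᵢ·exp[−(t/ηᵢ)^βᵢ]; the weights and shape/scale parameters are invented for illustration and would in practice be fitted to failure data.

    ```python
    # Hedged sketch: reliability of a two-component mixed Weibull model.
    import numpy as np

    def weibull_reliability(t, beta, eta):
        return np.exp(-((t / eta) ** beta))

    def mixed_reliability(t, components):
        # components: iterable of (weight, beta, eta); weights sum to 1
        return sum(w * weibull_reliability(t, b, e) for w, b, e in components)

    components = [(0.6, 1.2, 5000.0),   # e.g. a wear-out dominated failure mode
                  (0.4, 0.8, 2000.0)]   # e.g. an early-failure dominated mode

    t = np.array([100.0, 1000.0, 5000.0])   # operating hours, illustrative
    print(mixed_reliability(t, components))
    ```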

  5. Determination of the number of J/ψ events with inclusive J/ψ decays

    DOE PAGES

    Ablikim, M.; Achasov, M. N.; Ai, X. C.; ...

    2016-08-26

    A measurement of the number of J/ψ events collected with the BESIII detector in 2009 and 2012 is performed using inclusive decays of the J/ψ. The number of J/ψ events taken in 2009 is recalculated to be (223.7 ± 1.4) × 10⁶, which is in good agreement with the previous measurement, but with significantly improved precision due to improvements in the BESIII software. The number of J/ψ events taken in 2012 is determined to be (1086.9 ± 6.0) × 10⁶. In total, the number of J/ψ events collected with the BESIII detector is measured to be (1310.6 ± 7.0) × 10⁶, where the uncertainty is dominated by systematic effects and the statistical uncertainty is negligible.

  6. Determination of the number of J/ψ events with inclusive J/ψ decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ablikim, M.; Achasov, M. N.; Ai, X. C.

    A measurement of the number of J/ψ events collected with the BESIII detector in 2009 and 2012 is performed using inclusive decays of the J/ψ. The number of J/ψ events taken in 2009 is recalculated to be (223.7 ± 1.4) × 10⁶, which is in good agreement with the previous measurement, but with significantly improved precision due to improvements in the BESIII software. The number of J/ψ events taken in 2012 is determined to be (1086.9 ± 6.0) × 10⁶. In total, the number of J/ψ events collected with the BESIII detector is measured to be (1310.6 ± 7.0) × 10⁶, where the uncertainty is dominated by systematic effects and the statistical uncertainty is negligible.

  7. Different spectrophotometric methods applied for the analysis of simeprevir in the presence of its oxidative degradation product: A comparative study

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; El-Abasawi, Nasr M.; El-Olemy, Ahmed; Serag, Ahmed

    2018-02-01

    Five simple spectrophotometric methods were developed for the determination of simeprevir in the presence of its oxidative degradation product, namely ratio difference, mean centering, derivative ratio using Savitzky-Golay filters, second derivative and continuous wavelet transform. These methods are linear in the range of 2.5-40 μg/mL and validated according to the ICH guidelines. The obtained results of accuracy, repeatability and precision were found to be within the acceptable limits. The specificity of the proposed methods was tested using laboratory-prepared mixtures and assessed by applying the standard addition technique. Furthermore, these methods were statistically comparable to an RP-HPLC method, and good results were obtained. So, they can be used for the routine analysis of simeprevir in quality-control laboratories.

  8. Demonstration of improved sensitivity of echo interferometers to gravitational acceleration

    NASA Astrophysics Data System (ADS)

    Mok, C.; Barrett, B.; Carew, A.; Berthiaume, R.; Beattie, S.; Kumarakrishnan, A.

    2013-08-01

    We have developed two configurations of an echo interferometer that rely on standing-wave excitation of a laser-cooled sample of rubidium atoms. Both configurations can be used to measure acceleration a along the axis of excitation. For a two-pulse configuration, the signal from the interferometer is modulated at the recoil frequency and exhibits a sinusoidal frequency chirp as a function of pulse spacing. In comparison, for a three-pulse stimulated-echo configuration, the signal is observed without recoil modulation and exhibits a modulation at a single frequency as a function of pulse spacing. The three-pulse configuration is less sensitive to effects of vibrations and magnetic field curvature, leading to a longer experimental time scale. For both configurations of the atom interferometer (AI), we show that a measurement of acceleration with a statistical precision of 0.5% can be realized by analyzing the shape of the echo envelope that has a temporal duration of a few microseconds. Using the two-pulse AI, we obtain measurements of acceleration that are statistically precise to 6 parts per million (ppm) on a 25 ms time scale. In comparison, using the three-pulse AI, we obtain measurements of acceleration that are statistically precise to 0.4 ppm on a time scale of 50 ms. A further statistical enhancement is achieved by analyzing the data across the echo envelope so that the statistical error is reduced to 75 parts per billion (ppb). The inhomogeneous field of a magnetized vacuum chamber limited the experimental time scale and resulted in prominent systematic effects. Extended time scales and improved signal-to-noise ratio observed in recent echo experiments using a nonmagnetic vacuum chamber suggest that echo techniques are suitable for a high-precision measurement of gravitational acceleration g. We discuss methods for reducing systematic effects and improving the signal-to-noise ratio. Simulations of both AI configurations with a time scale of 300 ms suggest that an optimized experiment with improved vibration isolation and atoms selected in the m_F = 0 state can result in measurements of g statistically precise to 0.3 ppb for the two-pulse AI and 0.6 ppb for the three-pulse AI.

  9. Developing Statistical Knowledge for Teaching during Design-Based Research

    ERIC Educational Resources Information Center

    Groth, Randall E.

    2017-01-01

    Statistical knowledge for teaching is not precisely equivalent to statistics subject matter knowledge. Teachers must know how to make statistics understandable to others as well as understand the subject matter themselves. This dual demand on teachers calls for the development of viable teacher education models. This paper offers one such model,…

  10. What to use to express the variability of data: Standard deviation or standard error of mean?

    PubMed

    Barde, Mohini P; Barde, Prajakt J

    2012-07-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. SEM quantifies uncertainty in the estimate of the mean, whereas SD indicates dispersion of the data from the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be precisely summarized with SD. Use of SEM should be limited to computing the CI, which measures the precision of a population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
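
    A short sketch of the distinction this paper draws: SD describes the spread of the sample, SEM describes the precision of the mean, and SEM is what feeds the confidence interval. The data values are invented.

    ```python
    # Hedged sketch: SD vs SEM vs 95% CI for a small sample.
    import numpy as np
    from scipy import stats

    x = np.array([5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 5.2, 4.9])   # hypothetical data

    sd = x.std(ddof=1)                 # dispersion: report as mean +/- SD
    sem = sd / np.sqrt(len(x))         # precision of the mean estimate
    ci = stats.t.interval(0.95, df=len(x) - 1, loc=x.mean(), scale=sem)

    print(f"mean = {x.mean():.2f}, SD = {sd:.2f}, SEM = {sem:.2f}, "
          f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
    ```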

  11. [Precision and personalized medicine].

    PubMed

    Sipka, Sándor

    2016-10-01

    The author describes the concept of "personalized medicine" and the newly introduced "precision medicine". "Precision medicine" applies the terms "phenotype", "endotype" and "biomarker" in order to characterize the various diseases more precisely. Using "biomarkers", a homogeneous type of disease (a "phenotype") can be divided into subgroups called "endotypes" requiring different forms of treatment and financing. The good results of "precision medicine" have become especially apparent in relation to allergic and autoimmune diseases. The application of this new way of thinking is going to be necessary in Hungary, too, in the near future for the participants, controllers and financing boards of healthcare. Orv. Hetil., 2016, 157(44), 1739-1741.

  12. Novel absorptivity centering method utilizing normalized and factorized spectra for analysis of mixtures with overlapping spectra in different matrices using built-in spectrophotometer software

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom

    2018-07-01

    A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either a normalized or a factorized spectrum and built-in spectrophotometer software, without the need for a specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ), is the one with partially overlapping spectra, formulated as tablets, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA), is the one with completely overlapping spectra, formulated as eye drops. These procedures do not require any separation steps. Resolution of the spectrally overlapping binary mixtures has been achieved by recovering the zero-order (D0) spectrum of each drug; absorbance was then recorded at the respective maxima of 238, 233.5, 273 and 242.5 nm for SM, EZ, CPL and PA. Calibration graphs were established with good correlation coefficients. The method shows significant advantages such as simplicity and minimal data manipulation, besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision and repeatability were found to be within the acceptable limits. The proposed method is good enough to be applied to the assay of the drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying the t-test and F-test at the 95% confidence level, concluding that there is no significant difference with regard to accuracy and precision. Generally, this method could be used successfully for routine quality control testing.

  13. Novel absorptivity centering method utilizing normalized and factorized spectra for analysis of mixtures with overlapping spectra in different matrices using built-in spectrophotometer software.

    PubMed

    Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom

    2018-07-05

    A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either a normalized or a factorized spectrum and built-in spectrophotometer software, without the need for a specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ), is the one with partially overlapping spectra, formulated as tablets, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA), is the one with completely overlapping spectra, formulated as eye drops. These procedures do not require any separation steps. Resolution of the spectrally overlapping binary mixtures has been achieved by recovering the zero-order (D0) spectrum of each drug; absorbance was then recorded at the respective maxima of 238, 233.5, 273 and 242.5 nm for SM, EZ, CPL and PA. Calibration graphs were established with good correlation coefficients. The method shows significant advantages such as simplicity and minimal data manipulation, besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision and repeatability were found to be within the acceptable limits. The proposed method is good enough to be applied to the assay of the drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying the t-test and F-test at the 95% confidence level, concluding that there is no significant difference with regard to accuracy and precision. Generally, this method could be used successfully for routine quality control testing. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Statistical analysis of radioimmunoassay. In comparison with bioassay (in Japanese)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakano, R.

    1973-01-01

    Using RIA (radioimmunoassay) data, statistical procedures were described for two problems: linearization of the dose-response curve and calculation of relative potency. There are three methods for linearizing the RIA dose-response curve; in each, the following quantities are plotted on the horizontal and vertical axes: dose x versus (B/T)^-1; c/(x + c) versus B/T (c: the dose at which B/T is 50%); and log x versus logit B/T. Among these, the last method seems to be the most practical. The statistical procedures of bioassay were employed to calculate the relative potency of unknown samples against standard samples from the dose-response curves of standard and unknown samples, using the regression coefficient. It is desirable that relative potency be calculated from more than 5 points on the standard curve and more than 2 points for the unknown samples. To examine the statistical limit of measurement precision, LH activity of gonadotropin in urine was measured, and the relative potency, precision coefficient, and upper and lower 95% confidence limits of the relative potency were calculated. Bioassay (by the ovarian ascorbic acid reduction method and the anterior prostate lobe weighing method) was performed on the same samples, and its precision was compared with that of RIA. In these examinations, the upper and lower 95% confidence limits of the relative potency were close to each other, whereas in bioassay a considerable difference was observed between the upper and lower limits. The necessity of standardizing and systematizing the statistical procedures to increase the precision of RIA was pointed out. (JA)
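
    Of the three linearizations, the logit method can be sketched as follows in Python; the dose-response numbers are hypothetical and serve only to show the transformation:

        import numpy as np

        x  = np.array([0.5, 1, 2, 4, 8, 16])                  # dose (arbitrary units)
        bt = np.array([0.82, 0.74, 0.61, 0.47, 0.33, 0.22])   # bound/total fraction B/T

        logit = np.log(bt / (1.0 - bt))                       # logit(B/T)
        slope, intercept = np.polyfit(np.log10(x), logit, 1)

        print(f"logit(B/T) = {slope:.3f} * log10(dose) + {intercept:.3f}")
        # The relative potency of an unknown follows from the horizontal shift between
        # the standard and unknown regression lines at equal response.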

  15. Solar neutrino detection in a large volume double-phase liquid argon experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franco, D.; Agnes, P.; Giganti, C.

    2016-08-01

    Precision measurements of solar neutrinos emitted by specific nuclear reaction chains in the Sun are of great interest for developing an improved understanding of star formation and evolution. Given the expected neutrino fluxes and known detection reactions, such measurements require detectors capable of collecting neutrino-electron scattering data in exposures on the order of 1 ktonne-yr, with good energy resolution and extremely low background. Two-phase liquid argon time projection chambers (LAr TPCs) are under development for direct Dark Matter WIMP searches, which possess very large sensitive mass, high scintillation light yield, good energy resolution, and good spatial resolution in all three Cartesian directions. While enabling Dark Matter searches with sensitivity extending to the "neutrino floor" (given by the rate of nuclear recoil events from solar neutrino coherent scattering), such detectors could also enable precision measurements of solar neutrino fluxes using the neutrino-electron elastic scattering events. Modeling results are presented for the cosmogenic and radiogenic backgrounds affecting solar neutrino detection in a 300 tonne (100 tonne fiducial) LAr TPC operating at LNGS depth (3,800 meters of water equivalent). The results show that such a detector could measure the CNO neutrino rate with ∼15% precision, and significantly improve the precision of the 7Be and pep neutrino rates compared to the currently available results from the Borexino organic liquid scintillator detector.

  16. Patient-Specific Detection of Cerebral Blood Flow Alterations as Assessed by Arterial Spin Labeling in Drug-Resistant Epileptic Patients

    PubMed Central

    Boscolo Galazzo, Ilaria; Storti, Silvia Francesca; Del Felice, Alessandra; Pizzini, Francesca Benedetta; Arcaro, Chiara; Formaggio, Emanuela; Mai, Roberto; Chappell, Michael; Beltramello, Alberto; Manganotti, Paolo

    2015-01-01

    Electrophysiological and hemodynamic data can be integrated to accurately and precisely identify the generators of abnormal electrical activity in drug-resistant focal epilepsy. Arterial Spin Labeling (ASL), a magnetic resonance imaging (MRI) technique for quantitative noninvasive measurement of cerebral blood flow (CBF), can provide a direct measure of variations in cerebral perfusion associated with the epileptic focus. In this study, we aimed to confirm the ASL diagnostic value in the identification of the epileptogenic zone, as compared to electrical source imaging (ESI) results, and to apply a template-based approach to depict statistically significant CBF alterations. Standard video-electroencephalography (EEG), high-density EEG, and ASL were performed to identify clinical seizure semiology and noninvasively localize the epileptic focus in 12 drug-resistant focal epilepsy patients. The same ASL protocol was applied to a control group of 17 healthy volunteers from which a normal perfusion template was constructed using a mixed-effect approach. CBF maps of each patient were then statistically compared to the reference template to identify perfusion alterations. Significant hypo- and hyperperfused areas were identified in all cases, showing good agreement between ASL and ESI results. Interictal hypoperfusion was observed at the site of the seizure in 10/12 patients and early postictal hyperperfusion in 2/12. The epileptic focus was correctly identified within the surgical resection margins in the 5 patients who underwent lobectomy, all of which had good postsurgical outcomes. The combined use of ESI and ASL can aid in the noninvasive evaluation of drug-resistant epileptic patients. PMID:25946055

  17. Statistical analysis for improving data precision in the SPME GC-MS analysis of blackberry (Rubus ulmifolius Schott) volatiles.

    PubMed

    D'Agostino, M F; Sanz, J; Martínez-Castro, I; Giuffrè, A M; Sicari, V; Soria, A C

    2014-07-01

    Statistical analysis has been used for the first time to evaluate the dispersion of quantitative data in the solid-phase microextraction (SPME) followed by gas chromatography-mass spectrometry (GC-MS) analysis of blackberry (Rubus ulmifolius Schott) volatiles, with the aim of improving their precision. Experimental and randomly simulated data were compared using different statistical parameters (correlation coefficients, Principal Component Analysis loadings and eigenvalues). Non-random factors were shown to contribute significantly to total dispersion; groups of volatile compounds could be associated with these factors. A significant improvement in precision was achieved when considering percent concentration ratios, rather than percent values, among those blackberry volatiles with a similar dispersion behavior. As a novelty over previous reports, and to complement this main objective, the presence of non-random dispersion trends in data from simple blackberry model systems was evidenced. Although the influence of the type of matrix on data precision was proved, the model systems did not allow a better understanding of the dispersion patterns in real samples. The approach used here was validated for the first time through the multicomponent characterization of Italian blackberries from different harvest years. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Comparison of Accuracy Between a Conventional and Two Digital Intraoral Impression Techniques.

    PubMed

    Malik, Junaid; Rodriguez, Jose; Weisbloom, Michael; Petridis, Haralampos

    To compare the accuracy (i.e., precision and trueness) of full-arch impressions fabricated using either a conventional polyvinyl siloxane (PVS) material or one of two intraoral optical scanners. Full-arch impressions of a reference model were obtained using an addition silicone impression material (Aquasil Ultra; Dentsply Caulk) and two optical scanners (Trios, 3Shape; CEREC Omnicam, Sirona). Surface matching software (Geomagic Control, 3D Systems) was used to superimpose the scans within groups to determine the mean deviations in precision and trueness (μm) between the scans, which were calculated for each group and compared statistically using one-way analysis of variance with post hoc Bonferroni (trueness) and Games-Howell (precision) tests (IBM SPSS ver 24, IBM UK). Qualitative analysis was also carried out from three-dimensional maps of differences between scans. Means and standard deviations (SD) of deviations in precision for the conventional, Trios, and Omnicam groups were 21.7 (± 5.4), 49.9 (± 18.3), and 36.5 (± 11.12) μm, respectively. Means and SDs for deviations in trueness were 24.3 (± 5.7), 87.1 (± 7.9), and 80.3 (± 12.1) μm, respectively. The conventional impression showed statistically significantly improved mean precision (P < .006) and mean trueness (P < .001) compared to both digital impression procedures. There were no statistically significant differences in precision (P = .153) or trueness (P = .757) between the digital impressions. The qualitative analysis revealed local deviations of < 100 μm along the palatal surfaces of the molars and the incisal edges of the anterior teeth. Conventional full-arch PVS impressions exhibited improved mean accuracy compared to two direct optical scanners. No significant differences were found between the two digital impression methods.
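
    The group comparison described here can be reproduced in outline with SciPy's one-way ANOVA. The data below are random draws matching the reported group means and SDs, not the study's actual measurements:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Simulated precision deviations (um) drawn from the reported means/SDs.
        conventional = rng.normal(21.7, 5.4, 10)
        trios        = rng.normal(49.9, 18.3, 10)
        omnicam      = rng.normal(36.5, 11.1, 10)

        f, p = stats.f_oneway(conventional, trios, omnicam)
        print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")
        # A post hoc test (e.g., Bonferroni-corrected pairwise t-tests) would then
        # identify which pairs of groups differ.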

  19. Statistical Evaluation of VIIRS Ocean Color Products

    NASA Astrophysics Data System (ADS)

    Mikelsons, K.; Wang, M.; Jiang, L.

    2016-02-01

    Evaluation and validation of satellite-derived ocean color products is a complicated task, which often relies on precise in-situ measurements for satellite data quality assessment. However, in-situ measurements are available at comparatively few locations, are expensive, and do not cover all times. In the open ocean, variability occurs over longer spatial and temporal scales, and water conditions are generally more stable. We use this fact to perform extensive statistical evaluations of the consistency of ocean color retrievals, based on comparisons of data retrieved at different times and for various retrieval parameters. We have used the NOAA Multi-Sensor Level-1 to Level-2 (MSL12) ocean color data processing system for ocean color product data derived from the Visible Infrared Imaging Radiometer Suite (VIIRS). We show the statistical dependence of normalized water-leaving radiance spectra on various parameters of the retrieval geometry, such as solar- and sensor-zenith angles, as well as on physical variables such as wind speed, air pressure, ozone amount, and water vapor. In most cases, the results show consistent retrievals within the relevant range of retrieval parameters, indicating good performance of MSL12 in the open ocean. The results also yield the upper bounds of solar- and sensor-zenith angles for reliable ocean color retrievals, and show a slight increase of VIIRS-derived normalized water-leaving radiances with wind speed and water vapor concentration.

  20. An accuracy improvement method for the topology measurement of an atomic force microscope using a 2D wavelet transform.

    PubMed

    Yoon, Yeomin; Noh, Suwoo; Jeong, Jiseong; Park, Kyihwan

    2018-05-01

    The topology image is constructed from the 2D matrix (XY directions) of heights Z captured from the force-feedback loop controller. For small height variations, nonlinear effects such as hysteresis or creep of the PZT-driven Z nano scanner can be neglected, and its calibration is quite straightforward. For large height variations, the linear approximation of the PZT-driven Z nano scanner fails, and nonlinear behaviors must be considered because they would cause inaccuracies in the measured image. In order to avoid such inaccuracies, an additional strain gauge sensor is used to directly measure the displacement of the PZT-driven Z nano scanner. However, this approach has the disadvantage of relatively low precision. In order to obtain high-precision data with good linearity, we propose a method of overcoming the low precision of the strain gauge while maintaining its good linearity. The topology image obtained from the strain gauge sensor is expected to show significant noise at high frequencies, whereas the topology image obtained from the controller output shows low noise at high frequencies. If the low- and high-frequency signals can be separated from both topology images, an image can be constructed that combines high accuracy with low noise. In order to separate the low frequencies from the high frequencies, a 2D Haar wavelet transform is used. Our proposed method uses the 2D wavelet transform to obtain good linearity from the strain gauge sensor and good precision from the controller output. The advantages of the proposed method are experimentally validated using topology images. Copyright © 2018 Elsevier B.V. All rights reserved.
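
    A minimal sketch of the described fusion using PyWavelets (synthetic images; the abstract does not give the actual implementation details): approximation coefficients are taken from the strain gauge image for linearity, detail coefficients from the controller output for low high-frequency noise.

        import numpy as np
        import pywt  # PyWavelets

        rng = np.random.default_rng(0)
        truth = np.cumsum(rng.normal(0, 1, (64, 64)), axis=1)         # hypothetical surface
        strain_gauge = truth + rng.normal(0, 2.0, truth.shape)         # linear but noisy
        controller = 0.9 * truth + rng.normal(0, 0.2, truth.shape)     # precise but nonlinear

        # One-level 2D Haar transform of both images.
        cA_sg, _details_sg = pywt.dwt2(strain_gauge, 'haar')
        _cA_ctl, details_ctl = pywt.dwt2(controller, 'haar')

        # Fuse: low-frequency content from the strain gauge (good linearity),
        # high-frequency content from the controller output (low noise).
        fused = pywt.idwt2((cA_sg, details_ctl), 'haar')
        print(fused.shape)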

  1. Experimental study of precisely selected evaporation chains in the decay of excited 25Mg

    NASA Astrophysics Data System (ADS)

    Camaiani, A.; Casini, G.; Morelli, L.; Barlini, S.; Piantelli, S.; Baiocco, G.; Bini, M.; Bruno, M.; Buccola, A.; Cinausero, M.; Cicerchia, M.; D'Agostino, M.; Degelier, M.; Fabris, D.; Frosin, C.; Gramegna, F.; Gulminelli, F.; Mantovani, G.; Marchi, T.; Olmi, A.; Ottanelli, P.; Pasquali, G.; Pastore, G.; Valdré, S.; Verde, G.

    2018-04-01

    The reaction 12C+13C at 95 MeV bombarding energy is studied using the Garfield + Ring Counter apparatus located at the INFN Laboratori Nazionali di Legnaro. In this paper we investigate the de-excitation of 25Mg, aiming both at a new stringent test of the statistical description of nuclear decay and at a direct comparison with the decay of the 24Mg system formed through 12C+12C reactions studied previously. Thanks to the large acceptance of the detector and to its good fragment identification capabilities, we could apply stringent selections on fusion-evaporation events, requiring their completeness in charge. The main decay features of the evaporation residues and of the emitted light particles are overall well described by a pure statistical model; however, as in the case of the previously studied 24Mg, we observed some deviations in the branching ratios, in particular for those chains involving only the evaporation of α particles. From this point of view the behavior of the 24Mg and 25Mg decay cases appears to be rather similar. An attempt to obtain a full mass balance even without neutron detection is also discussed.

  2. Detection of the kinematic Sunyaev–Zel'dovich effect with DES Year 1 and SPT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soergel, B.; Flender, S.; Story, K. T.

    Here, we detect the kinematic Sunyaev-Zel'dovich (kSZ) effect with a statistical significance of 4.2σ by combining a cluster catalogue derived from the first year data of the Dark Energy Survey (DES) with CMB temperature maps from the South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) Survey. This measurement is performed with a differential statistic that isolates the pairwise kSZ signal, providing the first detection of the large-scale, pairwise motion of clusters using redshifts derived from photometric data. By fitting the pairwise kSZ signal to a theoretical template we measure the average central optical depth of the cluster sample, τ̄_e = (3.75 ± 0.89) × 10⁻³. We compare the extracted signal to realistic simulations and find good agreement with respect to the signal-to-noise, the constraint on τ̄_e, and the corresponding gas fraction. High-precision measurements of the pairwise kSZ signal with future data will be able to place constraints on the baryonic physics of galaxy clusters, and could be used to probe gravity on scales ≳ 100 Mpc.

  3. Detection of the kinematic Sunyaev–Zel'dovich effect with DES Year 1 and SPT

    DOE PAGES

    Soergel, B.; Flender, S.; Story, K. T.; ...

    2016-06-17

    Here, we detect the kinematic Sunyaev-Zel'dovich (kSZ) effect with a statistical significance of 4.2σ by combining a cluster catalogue derived from the first year data of the Dark Energy Survey (DES) with CMB temperature maps from the South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) Survey. This measurement is performed with a differential statistic that isolates the pairwise kSZ signal, providing the first detection of the large-scale, pairwise motion of clusters using redshifts derived from photometric data. By fitting the pairwise kSZ signal to a theoretical template we measure the average central optical depth of the cluster sample, τ̄_e = (3.75 ± 0.89) × 10⁻³. We compare the extracted signal to realistic simulations and find good agreement with respect to the signal-to-noise, the constraint on τ̄_e, and the corresponding gas fraction. High-precision measurements of the pairwise kSZ signal with future data will be able to place constraints on the baryonic physics of galaxy clusters, and could be used to probe gravity on scales ≳ 100 Mpc.

  4. [Reliability of iWitness photogrammetry in maxillofacial application].

    PubMed

    Jiang, Chengcheng; Song, Qinggao; He, Wei; Chen, Shang; Hong, Tao

    2015-06-01

    This study aims to test the accuracy and precision of iWitness photogrammetry for measuring the facial tissues of a mannequin head. Under ideal circumstances, 3D landmark coordinates were repeatedly obtained from a mannequin head using the iWitness photogrammetric system with different parameters to examine the precision of the system. The differences between the 3D data and the true distance values of the mannequin head were computed. Operator errors of the 3D system in non-zoom and zoom status were 0.20 mm and 0.09 mm, respectively, and the difference was significant (P < 0.05). The image capture error of the 3D system was 0.283 mm, with no significant difference compared with the same group of images (P > 0.05). The error of the 3D system with recalibration was 0.251 mm, and the difference compared with the image capture error was not statistically significant (P > 0.05). Good congruence was observed between means derived from the 3D photos and direct anthropometry, with differences ranging from -0.4 mm to +0.4 mm. This study provides further evidence of the high reliability of iWitness photogrammetry for several craniofacial measurements, including landmarks and inter-landmark distances. The evaluated system can be recommended for the evaluation and documentation of the facial surface.

  5. Assigning Polarity to Causal Information in Financial Articles on Business Performance of Companies

    NASA Astrophysics Data System (ADS)

    Sakai, Hiroyuki; Masuyama, Shigeru

    We propose a method of assigning polarity to causal information extracted from Japanese financial articles concerning the business performance of companies. Our method assigns polarity (positive or negative) to causal information in accordance with business performance, e.g. "zidousya no uriage ga koutyou (Sales of cars are good)" (the polarity positive is assigned in this example). Causal expressions assigned polarity by our method may be used, for example, to analyze the content of articles concerning business performance in detail. First, our method classifies articles concerning business performance into positive articles and negative articles. Using them, our method assigns polarity (positive or negative) to causal information extracted from the set of articles concerning business performance. Although our method needs a training dataset for classifying articles concerning business performance into positive and negative ones, it does not need a training dataset for assigning polarity to causal information. Hence, even if causal information does not appear in the training dataset used for classifying articles, our method is able to assign it polarity by using statistical information from the classified sets of articles. We evaluated our method and confirmed that it attained 74.4% precision and 50.4% recall in assigning positive polarity, and 76.8% precision and 61.5% recall in assigning negative polarity.
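
    The statistical scoring idea can be sketched as follows; the counts and the scoring function are hypothetical, since the abstract does not specify the paper's actual formula:

        from collections import Counter

        # Hypothetical frequencies of causal-expression terms in articles already
        # classified as reporting positive or negative business performance.
        pos_counts = Counter({"koutyou (good)": 45, "zouka (increase)": 30, "teimei (slump)": 3})
        neg_counts = Counter({"koutyou (good)": 4, "zouka (increase)": 8, "teimei (slump)": 40})

        def polarity(term, smoothing=1.0):
            """Score in [-1, 1]; positive means the term co-occurs with good performance."""
            p = pos_counts[term] + smoothing
            n = neg_counts[term] + smoothing
            return (p - n) / (p + n)

        for term in ("koutyou (good)", "teimei (slump)"):
            print(term, f"{polarity(term):+.2f}")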

  6. Reliability of conventional shade guides in teeth color determination.

    PubMed

    Todorović, Ana; Todorović, Aleksandar; Gostović, Aleksandra Spadijer; Lazić, Vojkan; Milicić, Biljana; Djurisić, Slobodan

    2013-10-01

    Color matching in prosthodontic therapy is a very important task because it influences the esthetic value of dental restorations. Visual shade matching is the most frequently applied method in clinical practice. Instrumental measurements provide objective, quantified data for the color assessment of natural teeth and restorations. In instrumental shade analysis, the goal is to achieve the smallest ΔE value possible, indicating the most accurate shade match. The aim of this study was to evaluate the reliability of commercially available ceramic shade guides. A VITA Easyshade spectrophotometer (VITA, Germany) was used for instrumental color determination. Using this device, color samples of ten VITA Classical and ten VITA 3D-Master shade guides were analyzed. Each color sample from all shade guides was measured three times, and the basic parameters of color quality were examined: ΔL, ΔC, ΔH, ΔE, and ΔElc. Based on these parameters, the spectrophotometer rates the shade match as good, fair, or adjust. After 1,248 measurements of ceramic color samples, the frequencies of the ratings adjust, fair, and good differed statistically significantly between the VITA Classical and VITA 3D-Master shade guides (p = 0.002). In the VITA Classical shade guides, 27.1% of cases were scored as adjust, 66.3% as fair, and 6.7% as good. In the VITA 3D-Master shade guides, 30.9% of cases were evaluated as adjust, 66.4% as fair, and 2.7% as good. Color samples from different shade guides produced by the same manufacturer show variability in basic color parameters, which once again demonstrates the lack of precision and nonuniformity of the conventional method.

  7. Precision manufacturing for clinical-quality regenerative medicines.

    PubMed

    Williams, David J; Thomas, Robert J; Hourd, Paul C; Chandra, Amit; Ratcliffe, Elizabeth; Liu, Yang; Rayment, Erin A; Archer, J Richard

    2012-08-28

    Innovations in engineering applied to healthcare make a significant difference to people's lives. Market growth is guaranteed by demographics. Regulation and requirements for good manufacturing practice (extreme levels of repeatability and reliability) demand high-precision process and measurement solutions. Emerging technologies using living biological materials add complexity. This paper presents some results of work demonstrating the precision automated manufacture of living materials, particularly the expansion of populations of human stem cells for therapeutic use as regenerative medicines. The paper also describes quality engineering techniques for precision process design and improvement, and identifies the requirements for manufacturing technology and measurement systems evolution for such therapies.

  8. Analysis and Test Support for Phillips Laboratory Precision Structures

    DTIC Science & Technology

    1998-11-01

    Air Force Research Laboratory (AFRL), Phillips Research Site. Task objectives centered around analysis and structural dynamic test support on experiments within the Space Vehicles Directorate at Kirtland Air Force Base. These efforts helped support "Analysis and Test Support for Phillips Laboratory Precision Structures." Mr. James Goodding of CSA Engineering was the principal investigator for this task.

  9. Physics opportunities with meson beams

    DOE PAGES

    Briscoe, William J.; Doring, Michael; Haberzettl, Helmut; ...

    2015-10-20

    Over the past two decades, meson photo- and electro-production data of unprecedented quality and quantity have been measured at electromagnetic facilities worldwide. By contrast, the meson-beam data for the same hadronic final states are mostly outdated and largely of poor quality, or even nonexistent, and thus provide inadequate input to help interpret, analyze, and exploit the full potential of the new electromagnetic data. To reap the full benefit of the high-precision electromagnetic data, new high-statistics data from measurements with meson beams, with good angle and energy coverage for a wide range of reactions, are critically needed to advance our knowledge in baryon and meson spectroscopy and other related areas of hadron physics. To address this situation, a state-of-the-art meson-beam facility needs to be constructed. Furthermore, the present paper summarizes unresolved issues in hadron physics and outlines the vast opportunities and advances that only become possible with such a facility.

  10. Physics opportunities with meson beams

    NASA Astrophysics Data System (ADS)

    Briscoe, William J.; Döring, Michael; Haberzettl, Helmut; Manley, D. Mark; Naruki, Megumi; Strakovsky, Igor I.; Swanson, Eric S.

    2015-10-01

    Over the past two decades, meson photo- and electroproduction data of unprecedented quality and quantity have been measured at electromagnetic facilities worldwide. By contrast, the meson-beam data for the same hadronic final states are mostly outdated and largely of poor quality, or even non-existent, and thus provide inadequate input to help interpret, analyze, and exploit the full potential of the new electromagnetic data. To reap the full benefit of the high-precision electromagnetic data, new high-statistics data from measurements with meson beams, with good angle and energy coverage for a wide range of reactions, are critically needed to advance our knowledge in baryon and meson spectroscopy and other related areas of hadron physics. To address this situation, a state-of-the-art meson-beam facility needs to be constructed. The present paper summarizes unresolved issues in hadron physics and outlines the vast opportunities and advances that only become possible with such a facility.

  11. [Hang-gliding accidents in high mountains. Apropos of 200 cases].

    PubMed

    Foray, J; Abrassart, S; Femmy, T; Aldilli, M

    1991-01-01

    A review of 200 cases of "paragliding" accidents in high mountain areas has been completed. The first flights were deadly: a thesis written in 1987 in Grenoble reported seven deaths out of 97 casualties. Since then the statistics seem to be improving as a consequence of the introduction of regulations and the establishment of "paragliding" schools. The most frequent accidents happen on landing: in 70% of the cases, fractures of the ankle joint ("tibiotarsienne"), the wrist, and the spinal column prevail. They happen to young adults between 20 and 40 years old, with variable experience. Preventive measures consist of greater prudence, good physical condition, and precise aerological knowledge. The adepts of this sport have understood that wearing a helmet and appropriate shoes can reduce the gravity of accidents. "Paragliding", if not a dangerous sport, is certainly a risky one.

  12. Solution x-ray scattering and structure formation in protein dynamics

    NASA Astrophysics Data System (ADS)

    Nasedkin, Alexandr; Davidsson, Jan; Niemi, Antti J.; Peng, Xubiao

    2017-12-01

    We propose a computationally effective approach that builds on Landau mean-field theory in combination with modern nonequilibrium statistical mechanics to model and interpret protein dynamics and structure formation in small- to wide-angle x-ray scattering (S/WAXS) experiments. We develop the methodology by analyzing experimental data in the case of the Engrailed homeodomain protein as an example. We demonstrate how to interpret S/WAXS data qualitatively with good precision and over an extended temperature range. We explain experimental observations in terms of protein phase structure, and we make predictions for future experiments and for how to analyze data at different ambient temperature values. We conclude that the approach we propose has the potential to become a highly accurate, computationally effective, and predictive tool for analyzing S/WAXS data. For this, we compare our results with those obtained previously in an all-atom molecular dynamics simulation.

  13. Bayesian planet searches in radial velocity data

    NASA Astrophysics Data System (ADS)

    Gregory, Phil

    2015-08-01

    Intrinsic stellar variability caused by magnetic activity and convection has become the main limiting factor for planet searches in both transit and radial velocity (RV) data. New spectrographs such as ESPRESSO and EXPRES are under development that aim to improve RV precision by a factor of approximately 100 over the current best spectrographs, HARPS and HARPS-N. This will greatly exacerbate the challenge of distinguishing planetary signals from stellar-activity-induced RV signals. At the same time, good progress has been made in simulating stellar activity signals. At the Porto 2014 meeting, "Towards Other Earths II," Xavier Dumusque challenged the community to a large-scale blind test using the simulated RV data to understand the limitations of present solutions for dealing with stellar signals and to select the best approach. My talk will focus on some of the statistical lessons learned from this challenge, with an emphasis on Bayesian methodology.

  14. [Towards DSM 5.1. Proposals for schizophrenia].

    PubMed

    Niolu, Cinzia; Bianciardi, Emanuela; Ribolsi, Michele; Siracusano, Alberto

    2016-11-01

    Schizophrenia is a debilitating illness, present in approximately 1% of the global population. It is manifested through positive symptoms, including delusions, hallucinations and disorganized thoughts, and negative symptoms such as avolition, alogia, and apathy. In 2013 the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) was released, and some changes were introduced to make the diagnosis of schizophrenia more accurate and precise, but researchers are already studying how to further improve the diagnostic criteria of this disorder. In this regard, we hypothesize two types of schizophrenia: poor-adherence and good-adherence to treatment schizophrenia. Our supposition is based on the evidence of reduced relapses, fewer rehospitalisations, and a better long-term course of illness in those patients with schizophrenia who are adherent to treatment. Given that adherence to therapy strongly influences patients' attitude to medication, quality of life, and subjective well-being, the hypothesis of introducing adherence as a new schizophrenia specifier is compelling.

  15. Multiple regression for physiological data analysis: the problem of multicollinearity.

    PubMed

    Slinker, B K; Glantz, S A

    1985-07-01

    Multiple linear regression, in which several predictor variables are related to a response variable, is a powerful statistical tool for gaining quantitative insight into complex in vivo physiological systems. For these insights to be correct, all predictor variables must be uncorrelated. However, in many physiological experiments the predictor variables cannot be precisely controlled and thus change in parallel (i.e., they are highly correlated). There is a redundancy of information about the response, a situation called multicollinearity, that leads to numerical problems in estimating the parameters in regression equations; the parameters are often of incorrect magnitude or sign or have large standard errors. Although multicollinearity can be avoided with good experimental design, not all interesting physiological questions can be studied without encountering multicollinearity. In these cases various ad hoc procedures have been proposed to mitigate multicollinearity. Although many of these procedures are controversial, they can be helpful in applying multiple linear regression to some physiological problems.
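
    A minimal numerical demonstration of the problem, using synthetic data: two nearly collinear predictors yield parameter estimates with very large standard errors, even though the overall fit is good.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 100

        x1 = rng.normal(size=n)
        x2 = 0.98 * x1 + 0.02 * rng.normal(size=n)   # nearly collinear predictor
        y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

        X = np.column_stack([np.ones(n), x1, x2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)

        # Parameter covariance ~ sigma^2 (X'X)^-1; collinearity inflates the diagonal.
        sigma2 = np.sum((y - X @ beta) ** 2) / (n - 3)
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
        print("estimates:      ", np.round(beta, 2))
        print("standard errors:", np.round(se, 2))   # large SEs for x1 and x2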

  16. A New Strategy to Land Precisely on the Northern Plains of Mars

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Huertas, Andres

    2010-01-01

    During the Phoenix mission landing site selection process, the Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) images revealed widely spread and dense rock fields in the northern plains. Automatic rock mapping and subsequent statistical analyses showed 30-90% CFA (cumulative fractional area) covered by rocks larger than 1 meter in dense rock fields around craters. Less dense rock fields had 5-30% rock coverage in terrain away from craters. Detectable meter-scale boulders were found nearly everywhere. These rocks present a risk to spacecraft safety during landing. However, they are the most salient topographic features in this region, and can be good landmarks for spacecraft localization during landing. In this paper we present a novel strategy that uses the abundance of rocks in the northern plains for spacecraft localization. The paper discusses this approach in three sections: a rock-based landmark terrain relative navigation (TRN) algorithm; the feasibility of the TRN algorithm; and conclusions.

  17. Estimation of water turbidity in Gorgan Bay, South-east of Caspian Sea by using IRS-LISS-III images.

    PubMed

    Aghighi, Hossein; Alimohammadi, Abbas; Saradjian, Mohammad Reza; Ashourloo, Davood

    2008-03-01

    In this research, the usefulness of IRS-LISS-III data of Gorgan Bay, south-east of the Caspian Sea in northern Iran, for water turbidity mapping has been tested. After correction of geometric and radiometric errors, the resulting radiance data were used to examine correlations between the remotely sensed data and in situ water turbidity data simultaneously measured by the Secchi depth approach. The results of this research showed good relations between the Secchi depth and the spectral data. The fitted statistical model was highly significant (R2 = 0.77), and a test of the model performance with independent samples was encouraging. Because of the low costs of acquiring and processing remotely sensed data, further research at larger scales for a more precise test of the approach for water turbidity mapping and monitoring is recommended.

  18. Comparative study on the selectivity of various spectrophotometric techniques for the determination of binary mixture of fenbendazole and rafoxanide.

    PubMed

    Saad, Ahmed S; Attia, Ali K; Alaraki, Manal S; Elzanfaly, Eman S

    2015-11-05

    Five different spectrophotometric methods were applied for the simultaneous determination of fenbendazole and rafoxanide in their binary mixture, namely first derivative, derivative ratio, ratio difference, dual wavelength and H-point standard addition spectrophotometric methods. Different factors affecting each of the applied spectrophotometric methods were studied, and the selectivity of the applied methods was compared. The applied methods were validated as per the ICH guidelines, and good accuracy, specificity and precision were proven within the concentration range of 5-50 μg/mL for both drugs. Statistical analysis using one-way ANOVA proved no significant differences among the proposed methods for the determination of the two drugs. The proposed methods successfully determined both drugs in laboratory-prepared and commercially available binary mixtures, and were found applicable for routine analysis in quality control laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Content-based unconstrained color logo and trademark retrieval with color edge gradient co-occurrence histograms

    NASA Astrophysics Data System (ADS)

    Phan, Raymond; Androutsos, Dimitrios

    2008-01-01

    In this paper, we present a logo and trademark retrieval system for unconstrained color image databases that extends the Color Edge Co-occurrence Histogram (CECH) object detection scheme. We introduce more accurate information to the CECH, by virtue of incorporating color edge detection using vector order statistics. This produces a more accurate representation of edges in color images, in comparison to the simple color pixel difference classification of edges as seen in the CECH. Our proposed method is thus reliant on edge gradient information, and as such, we call this the Color Edge Gradient Co-occurrence Histogram (CEGCH). We use this as the main mechanism for our unconstrained color logo and trademark retrieval scheme. Results illustrate that the proposed retrieval system retrieves logos and trademarks with good accuracy, and outperforms the CECH object detection scheme with higher precision and recall.

  20. Development and validation of multivariate calibration methods for simultaneous estimation of Paracetamol, Enalapril maleate and hydrochlorothiazide in pharmaceutical dosage form

    NASA Astrophysics Data System (ADS)

    Singh, Veena D.; Daharwal, Sanjay J.

    2017-01-01

    Three multivariate calibration spectrophotometric methods were developed for the simultaneous estimation of Paracetamol (PARA), Enalapril maleate (ENM) and Hydrochlorothiazide (HCTZ) in tablet dosage form, namely multi-linear regression calibration (MLRC), trilinear regression calibration (TLRC) and classical least squares (CLS). The selectivity of the proposed methods was studied by analyzing laboratory-prepared ternary mixtures, and the methods were successfully applied to the combined dosage form. The proposed methods were validated as per ICH guidelines, and good accuracy, precision and specificity were confirmed within the concentration ranges of 5-35 μg mL⁻¹, 5-40 μg mL⁻¹ and 5-40 μg mL⁻¹ for PARA, HCTZ and ENM, respectively. The results were statistically compared with a reported HPLC method. Thus, the proposed methods can be effectively useful for the routine quality control analysis of these drugs in commercial tablet dosage form.
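
    Of the three methods, classical least squares is the simplest to sketch: absorbance is modeled as linear in the concentrations, the pure-component spectra are estimated from standards, and unknowns are then solved by least squares. The sketch below uses synthetic spectra, not the paper's data:

        import numpy as np

        rng = np.random.default_rng(3)
        wavelengths, n_standards, n_drugs = 120, 15, 3

        # Hypothetical pure-component spectral profiles (columns of K).
        K_true = np.abs(rng.normal(size=(wavelengths, n_drugs)))

        # Calibration: standards with known concentrations C (n_standards x n_drugs).
        C = rng.uniform(5, 40, size=(n_standards, n_drugs))
        A = C @ K_true.T + rng.normal(0, 0.01, size=(n_standards, wavelengths))
        K_hat, *_ = np.linalg.lstsq(C, A, rcond=None)      # estimated pure spectra (K^T)

        # Prediction: recover the concentrations of an unknown mixture spectrum.
        c_unknown = np.array([10.0, 25.0, 30.0])
        a_unknown = c_unknown @ K_true.T
        c_hat, *_ = np.linalg.lstsq(K_hat.T, a_unknown, rcond=None)
        print(np.round(c_hat, 2))   # ~ [10. 25. 30.]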

  1. Design-based stereology: Planning, volumetry and sampling are crucial steps for a successful study.

    PubMed

    Tschanz, Stefan; Schneider, Jan Philipp; Knudsen, Lars

    2014-01-01

    Quantitative data obtained by means of design-based stereology can add valuable information to studies performed on a diversity of organs, in particular when correlated to functional/physiological and biochemical data. Design-based stereology is based on a sound statistical background and can be used to generate accurate data which are in line with principles of good laboratory practice. In addition, by adjusting the study design an appropriate precision can be achieved to find relevant differences between groups. For the success of the stereological assessment detailed planning is necessary. In this review we focus on common pitfalls encountered during stereological assessment. An exemplary workflow is included, and based on authentic examples, we illustrate a number of sampling principles which can be implemented to obtain properly sampled tissue blocks for various purposes. Copyright © 2013 Elsevier GmbH. All rights reserved.

  2. Development and Validation of Liquid Chromatographic Method for Estimation of Naringin in Nanoformulation

    PubMed Central

    Musmade, Kranti P.; Trilok, M.; Dengale, Swapnil J.; Bhat, Krishnamurthy; Reddy, M. S.; Musmade, Prashant B.; Udupa, N.

    2014-01-01

    A simple, precise, accurate, rapid, and sensitive reverse phase high performance liquid chromatography (RP-HPLC) method with UV detection has been developed and validated for quantification of naringin (NAR) in novel pharmaceutical formulation. NAR is a polyphenolic flavonoid present in most of the citrus plants having variety of pharmacological activities. Method optimization was carried out by considering the various parameters such as effect of pH and column. The analyte was separated by employing a C18 (250.0 × 4.6 mm, 5 μm) column at ambient temperature in isocratic conditions using phosphate buffer pH 3.5: acetonitrile (75 : 25% v/v) as mobile phase pumped at a flow rate of 1.0 mL/min. UV detection was carried out at 282 nm. The developed method was validated according to ICH guidelines Q2(R1). The method was found to be precise and accurate on statistical evaluation with a linearity range of 0.1 to 20.0 μg/mL for NAR. The intra- and interday precision studies showed good reproducibility with coefficients of variation (CV) less than 1.0%. The mean recovery of NAR was found to be 99.33 ± 0.16%. The proposed method was found to be highly accurate, sensitive, and robust. The proposed liquid chromatographic method was successfully employed for the routine analysis of said compound in developed novel nanopharmaceuticals. The presence of excipients did not show any interference on the determination of NAR, indicating method specificity. PMID:26556205

  3. Urinalysis: The Automated Versus Manual Techniques; Is It Time To Change?.

    PubMed

    Ahmed, Asmaa Ismail; Baz, Heba; Lotfy, Sarah

    2016-01-01

    Urinalysis is the third major test in the clinical laboratory. The imprecision of the manual technique urges the need for a rapid, reliable automated test. We evaluated the H800-FUSIOO automatic urine sediment analyzer and compared it to the manual urinalysis technique to determine whether it may be a competitive substitute in the laboratories of central hospitals. 1000 urine samples were examined by the two methods in parallel. Agreement, precision, carryover, drift, sensitivity, specificity, and practicability criteria were tested. Agreement ranged from excellent to good for all urine semi-quantitative components (K > 0.4, p = 0.000), except for granular casts (K = 0.317, p = 0.000). Specific gravity results correlated well between the two methods (r = 0.884, p = 0.000). RBCs and WBCs showed moderate correlation (r = 0.42, p = 0.000, and r = 0.44, p = 0.000, respectively). The auto-analyzer's within-run precision was > 75% for all semi-quantitative components except for proteins (50% precision). This finding, in addition to the poor agreement for granular casts, indicates the necessity of operator interference at the critical cutoff values. As regards quantitative contents, RBCs showed a mean of 69.8 ± 3.95 (C.V. = 5.7) and WBCs a mean of 38.9 ± 1.9 (C.V. = 4.9). Specific gravity, pH, microalbumin, and creatinine also showed good precision, with C.V.s of 0.000, 2.6, 9.1, and 0.00, respectively. In the between-run precision, the positive control showed good precision (C.V. = 2.9), while the negative control's C.V. was strikingly high (C.V. = 127). Carryover and drift studies were satisfactory. Manual examination of inter-observer results showed major discrepancies (< 60% similar readings), while intra-observer results correlated well with each other (r = 0.99, p = 0.000). Automation of urinalysis decreases observer-associated variation and offers prompt, competitive results when standardized for screening away from the borderline cutoffs.
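
    Agreement statistics of the kind reported here (Cohen's kappa for paired semi-quantitative gradings) can be computed with scikit-learn; the gradings below are hypothetical:

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical paired semi-quantitative gradings (0..3) of the same samples.
        manual    = [0, 1, 2, 2, 3, 0, 1, 1, 2, 0, 3, 2]
        automated = [0, 1, 2, 1, 3, 0, 1, 2, 2, 0, 3, 2]

        kappa = cohen_kappa_score(manual, automated)
        print(f"Cohen's kappa = {kappa:.2f}")   # K > 0.4 was read as good agreement above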

  4. The ACS statistical analyzer

    DOT National Transportation Integrated Search

    2010-03-01

    This document provides guidance for using the ACS Statistical Analyzer. It is an Excel-based template for users of estimates from the American Community Survey (ACS) to assess the precision of individual estimates and to compare pairs of estimates fo...
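
    The core calculation behind such a tool is simple: ACS margins of error are published at the 90% confidence level, so the standard error is MOE/1.645 and the coefficient of variation follows. A minimal sketch with a hypothetical estimate and MOE:

        def acs_cv(estimate: float, moe90: float) -> float:
            """Coefficient of variation from an ACS estimate and its 90% margin of error."""
            se = moe90 / 1.645   # ACS margins of error use the 90% confidence level
            return se / estimate

        # Hypothetical published values: estimate 12,345 persons, MOE +/-1,234.
        print(f"CV = {acs_cv(12345, 1234):.1%}")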

  5. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

    ERIC Educational Resources Information Center

    Banjanovic, Erin S.; Osborne, Jason W.

    2016-01-01

    Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…
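
    A percentile-bootstrap CI for an effect size such as Cohen's d takes only a few lines; the two groups below are synthetic:

        import numpy as np

        rng = np.random.default_rng(4)
        a = rng.normal(0.5, 1.0, 40)   # hypothetical treatment scores
        b = rng.normal(0.0, 1.0, 40)   # hypothetical control scores

        def cohens_d(x, y):
            pooled = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
            return (x.mean() - y.mean()) / pooled

        # Resample each group with replacement and recompute the effect size.
        boots = [
            cohens_d(rng.choice(a, a.size, replace=True),
                     rng.choice(b, b.size, replace=True))
            for _ in range(5000)
        ]
        low, high = np.percentile(boots, [2.5, 97.5])
        print(f"d = {cohens_d(a, b):.2f}, 95% bootstrap CI ({low:.2f}, {high:.2f})")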

  6. Evaluation on the use of cerium in the NBL Titrimetric Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zebrowski, J.P.; Orlowicz, G.J.; Johnson, K.D.

    An alternative to potassium dichromate as titrant in the New Brunswick Laboratory Titrimetric Method for uranium analysis was sought, since chromium in the waste makes disposal difficult. Substitution of a ceric-based titrant was statistically evaluated. Analysis of the data indicated statistically equivalent precisions for the two methods, but a significant overall bias of +0.035% for the ceric titrant procedure. The cause of the bias was investigated, alterations to the procedure were made, and a second statistical study was performed. This second study revealed no statistically significant bias, nor any analyst-to-analyst variation in the ceric titration procedure. A statistically significant day-to-day variation was detected, but this was physically small (0.015%) and was only detected because of the within-day precision of the method. The mean and standard deviation of the %RD for a single measurement was found to be 0.031%. A comparison with quality control blind dichromate titration data again indicated similar overall precision. The effect of ten elements on the ceric titration's performance was determined: Co, Ti, Cu, Ni, Na, Mg, Gd, Zn, Cd, and Cr; in previous work at NBL these impurities did not interfere with the potassium dichromate titrant. This study indicated similar results for the ceric titrant, with the exception of Ti. All the elements (excluding Ti and Cr) caused no statistically significant bias in uranium measurements at levels of 10 mg impurity per 20-40 mg uranium. The presence of Ti was found to cause a bias of -0.05%; this is attributed to the presence of sulfate ions, resulting in precipitation of titanium sulfate and occlusion of uranium. A negative bias of 0.012% was also statistically observed in the samples containing chromium impurities.

  7. A Positive Approach to Good Grammar

    ERIC Educational Resources Information Center

    Kuehner, Alison V.

    2016-01-01

    Correct grammar is important for precise, accurate, academic prose, but the traditional skills-based approach to teaching grammar is not effective if the goal is good writing. The sentence-combining approach shows promise. However, sentence modeling is more likely to produce strong writing and enhance reading comprehension. Through sentence…

  8. Accuracy of the Precision® point-of-care ketone test examined by liquid chromatography tandem-mass spectrometry (LC-MS/MS) in the same fingerstick sample.

    PubMed

    Janssen, Marcel J W; Hendrickx, Ben H E; Habets-van der Poel, Carin D; van den Bergh, Joop P W; Haagen, Anton A M; Bakker, Jaap A

    2010-12-01

    The Precision® (Abbott Diabetes Care) point-of-care biosensor test strips are widely used by patients with diabetes and clinical laboratories for measurement of plasma β-hydroxybutyrate (β-HB) concentrations in capillary blood samples obtained by fingerstick. In the literature, this procedure has been validated only against the enzymatic determination of β-HB in venous plasma, i.e., the method to which the Precision® has been calibrated. In this study, the Precision® Xceed was compared to a methodologically different and superior procedure: determination of β-HB by liquid chromatography tandem-mass spectrometry (LC-MS/MS) in capillary blood spots. Blood spots were obtained from the same fingerstick sample from which the Precision® measurements were performed. Linearity was tested by adding varying amounts of standard to an EDTA venous whole blood matrix. The Precision® was in good agreement with LC-MS/MS within the measuring range of 0.0-6.0 mmol/L (Passing and Bablok regression: slope = 1.20 and no significant intercept, R = 0.97, n = 59). Surprisingly, the Precision® showed non-linearity and full saturation at concentrations above 6.0 mmol/L, which were confirmed by a standard addition experiment. Results obtained at the saturation level varied between 3.0 and 6.5 mmol/L. The Precision® β-HB test strips demonstrate good comparison with LC-MS/MS. Inter-individual variation around the saturation level, however, is large. Therefore, we advise reporting readings above 3.0 as >3.0 mmol/L. The test is valid for use in the clinically relevant range of 0.0-3.0 mmol/L.

  9. Towards Precision Spectroscopy of Baryonic Resonances

    NASA Astrophysics Data System (ADS)

    Döring, Michael; Mai, Maxim; Rönchen, Deborah

    2017-01-01

    Recent progress in baryon spectroscopy is reviewed. In a common effort, various groups have analyzed a set of new high-precision polarization observables from ELSA. The Jülich-Bonn group has finalized the analysis of pion-induced meson-baryon production, the photoproduction of pions and eta mesons, and (almost) the KΛ final state. As data become more precise, statistical aspects in the analysis of excited baryons become increasingly relevant, and several advances in this direction are proposed.

  10. Towards precision spectroscopy of baryonic resonances

    DOE PAGES

    Doring, Michael; Mai, Maxim; Ronchen, Deborah

    2017-01-26

    Recent progress in baryon spectroscopy is reviewed. In a common effort, various groups have analyzed a set of new high-precision polarization observables from ELSA. The Jülich-Bonn group has finalized the analysis of pion-induced meson-baryon production, the photoproduction of pions and eta mesons, and (almost) the KΛ final state. Lastly, as data become more precise, statistical aspects in the analysis of excited baryons become increasingly relevant, and several advances in this direction are proposed.

  11. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
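
    A minimal SciPy sketch of the approach, using synthetic failure data (the authors' implementation details differ): minimize the Kolmogorov-Smirnov statistic over the three Weibull parameters with Powell's method.

        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(5)
        failures = stats.weibull_min.rvs(c=2.0, loc=1.0, scale=3.0, size=60, random_state=rng)

        def ks_stat(params):
            shape, loc, scale = params
            if shape <= 0 or scale <= 0 or loc >= failures.min():
                return 1.0  # invalid region: return the worst possible KS value
            # Maximum discrepancy between the EDF and the candidate Weibull CDF.
            return stats.kstest(failures, stats.weibull_min(shape, loc, scale).cdf).statistic

        res = optimize.minimize(ks_stat, x0=[1.5, 0.5, 2.0], method="Powell")
        print("shape, loc, scale =", np.round(res.x, 3), " KS =", round(res.fun, 4))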

  12. Spatial variability effects on precision and power of forage yield estimation

    USDA-ARS?s Scientific Manuscript database

    Spatial analyses of yield trials are important, as they adjust cultivar means for spatial variation and improve the statistical precision of yield estimation. While the relative efficiency of spatial analysis has been frequently reported in several yield trials, its application on long-term forage y...

  13. Precision Medicine and a Patient-Orientated Approach: Is this the Future for Tracking Cardiovascular Disorders?

    PubMed

    Pretorius, Etheresia

    2017-01-01

    The latest statistics from the 2016 heart disease and stroke statistics update show that cardiovascular disease is the leading global cause of death, currently accounting for more than 17.3 million deaths per year. Type II diabetes is also on the rise, with out-of-control numbers. To address these pandemics, we need to treat patients using an individualized patient care approach, but simultaneously gather data to support the precision medicine initiative. Last year the NIH announced the precision medicine initiative to generate novel knowledge regarding diseases, with a near-term focus on cancers, followed by a longer-term aim applicable to a whole range of health applications and diseases. The focus of this paper is to suggest a combined effort between the latest precision medicine initiative, researchers and clinicians, whereby novel techniques could immediately make a difference in patient care but in the long term also add to the knowledge base for precision medicine. We discuss the intricate relationship between individualized patient care and precision medicine, and current thoughts regarding which data are actually suitable for precision medicine data gathering. The uses of viscoelastic techniques in precision medicine are discussed, and we explore how these techniques might give novel perspectives on the success of treatment regimes for cardiovascular patients. Thrombo-embolic stroke, rheumatoid arthritis and type II diabetes are used as examples of diseases where precision medicine and a patient-orientated approach can possibly be implemented. In conclusion, it is suggested that only if all role players work together, embracing a new way of thinking in treating and managing cardiovascular disease and diabetes, will we be able to adequately address these out-of-control conditions. Copyright© Bentham Science Publishers.

  14. Optimizing ELISAs for precision and robustness using laboratory automation and statistical design of experiments.

    PubMed

    Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete

    2008-08-20

    Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.
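
    A hedged illustration of the fractional factorial component only (the split-plot structure and the Tecan-specific factors are not reproduced; the factor labels and generators below are invented for the example): five of nine two-level factors are aliased onto interactions of four base factors, shrinking 512 candidate runs to 16.

```python
import itertools
import numpy as np

# Full two-level factorial on four base factors: 2^4 = 16 runs.
base = np.array(list(itertools.product([-1, 1], repeat=4)))
A, B, C, D = base.T

# Five additional factors generated from interactions (a 2^(9-5) design).
# Generators here are for illustration; a real assay would choose them to
# keep the effects of interest free of aliasing.
E, F, G, H, J = A * B, A * C, A * D, B * C, B * D

design = np.column_stack([A, B, C, D, E, F, G, H, J])
print(design.shape)  # (16, 9): 16 robot runs covering nine two-level factors
```

    In a real assay the split-plot restriction would additionally group hard-to-change factors (e.g., incubation times shared across a plate) within whole plots.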

  15. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    NASA Astrophysics Data System (ADS)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  16. A portable device for calibration of autocollimators with nanoradian precision

    NASA Astrophysics Data System (ADS)

    Yandayan, Tanfer

    2017-09-01

    A portable device has been developed at TUBITAK UME to calibrate high-precision autocollimators with nanoradian precision. The device operates over a range of +/-4500", which is wide enough for the calibration of the available autocollimators, and can generate ultra-small angles in measurement steps of 0.0005" (2.5 nrad). A description of the device is given, together with performance tests using calibrated precision autocollimators and novel methods. The test results indicate that the device is a good candidate for on-site/in-situ calibration of autocollimators with expanded uncertainties of 0.01" (50 nrad), particularly those used in slope-measuring profilers.

  17. Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations

    NASA Astrophysics Data System (ADS)

    Kozak, P.

    2014-12-01

    The Monte Carlo method (the method of statistical trials) was developed as a tool for processing meteor observations in the author's Ph.D. thesis in 2005 and first applied in his work in 2008. The idea is that if we generate random values of the input data - the equatorial coordinates of the meteor head in a sequence of TV frames - in accordance with their statistical distributions, we can plot the probability density distributions for all of the meteor's kinematic parameters and obtain their mean values and dispersions. This also opens the theoretical possibility of refining the most important parameter - the geocentric velocity of a meteor - which has the strongest influence on the precision of the computed heliocentric orbit elements. In the classical approach the velocity vector is calculated in two stages: first, its direction is obtained as the cross product of the pole vectors of the meteor-trajectory great circles determined from the two observing stations; then the absolute value of the velocity is calculated independently from each station, and one of the two values is selected, on some grounds, as the final parameter. In the method proposed here, the statistical distribution of the velocity magnitude is instead obtained as the intersection of the two distributions corresponding to the velocity values from the different stations. We expect this approach to substantially increase the precision of the meteor velocity calculation and to remove subjective inaccuracies.
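
    A schematic sketch of the resampling idea (the per-station estimators, error sizes, and the pointwise-product reading of "intersection" are my assumptions, not details from the abstract): perturb the measurements many times, histogram the speed obtained from each station, and combine the two distributions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 100_000

# Stand-ins for "propagate noisy frame coordinates to a speed" from each
# observing station: speeds in km/s with station-dependent scatter.
v_station1 = 35.0 + rng.normal(0.0, 1.2, n_trials)
v_station2 = 35.4 + rng.normal(0.0, 0.9, n_trials)

bins = np.linspace(30.0, 40.0, 201)
centers = 0.5 * (bins[:-1] + bins[1:])
dx = bins[1] - bins[0]
p1, _ = np.histogram(v_station1, bins=bins, density=True)
p2, _ = np.histogram(v_station2, bins=bins, density=True)

# "Intersection" of the two distributions: pointwise product, renormalized.
joint = p1 * p2
joint /= joint.sum() * dx
v_mean = (centers * joint).sum() * dx
v_std = np.sqrt(((centers - v_mean) ** 2 * joint).sum() * dx)
print(f"combined geocentric speed: {v_mean:.2f} +/- {v_std:.2f} km/s")
```

    The product distribution is narrower than either input, which is the claimed precision gain over selecting one station's value.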

  18. Spatio-temporal conditional inference and hypothesis tests for neural ensemble spiking precision

    PubMed Central

    Harrison, Matthew T.; Amarasingham, Asohan; Truccolo, Wilson

    2014-01-01

    The collective dynamics of neural ensembles create complex spike patterns with many spatial and temporal scales. Understanding the statistical structure of these patterns can help resolve fundamental questions about neural computation and neural dynamics. Spatio-temporal conditional inference (STCI) is introduced here as a semiparametric statistical framework for investigating the nature of precise spiking patterns from collections of neurons that is robust to arbitrarily complex and nonstationary coarse spiking dynamics. The main idea is to focus statistical modeling and inference, not on the full distribution of the data, but rather on families of conditional distributions of precise spiking given different types of coarse spiking. The framework is then used to develop families of hypothesis tests for probing the spatio-temporal precision of spiking patterns. Relationships among different conditional distributions are used to improve multiple hypothesis testing adjustments and to design novel Monte Carlo spike resampling algorithms. Of special note are algorithms that can locally jitter spike times while still preserving the instantaneous peri-stimulus time histogram (PSTH) or the instantaneous total spike count from a group of recorded neurons. The framework can also be used to test whether first-order maximum entropy models with possibly random and time-varying parameters can account for observed patterns of spiking. STCI provides a detailed example of the generic principle of conditional inference, which may be applicable in other areas of neurostatistical analysis. PMID:25380339
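
    A minimal sketch of the simplest member of this family of resamplers, interval jitter (the paper's PSTH-preserving and count-preserving variants are more elaborate; this only illustrates conditioning on coarse spiking, and the function name and window size are mine): each spike is redrawn uniformly within its own coarse window, so window-level spike counts are preserved exactly while millisecond-level timing is randomized.

```python
import numpy as np

def interval_jitter(spike_times, window, rng):
    """Resample spike times uniformly within their coarse windows."""
    spike_times = np.asarray(spike_times, dtype=float)
    left_edges = np.floor(spike_times / window) * window  # coarse bin starts
    return np.sort(left_edges + rng.uniform(0.0, window, spike_times.shape))

rng = np.random.default_rng(42)
spikes = np.array([0.0113, 0.0121, 0.0478, 0.0932, 0.0945])  # seconds
surrogate = interval_jitter(spikes, window=0.025, rng=rng)
print(surrogate)  # same 25 ms window counts, new fine timing
```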

  19. The Precision-Power-Gradient Theory for Teaching Basic Research Statistical Tools to Graduate Students.

    ERIC Educational Resources Information Center

    Cassel, Russell N.

    This paper relates educational and psychological statistics to certain "Research Statistical Tools" (RSTs) necessary to accomplish and understand general research in the behavioral sciences. Emphasis is placed on acquiring an effective understanding of the RSTs, and to this end they are ordered on a continuum scale in terms of individual…

  20. Air Combat Training: Good Stick Index Validation. Final Report for Period 3 April 1978-1 April 1979.

    ERIC Educational Resources Information Center

    Moore, Samuel B.; And Others

    A study was conducted to investigate and statistically validate a performance measuring system (the Good Stick Index) in the Tactical Air Command Combat Engagement Simulator I (TAC ACES I) Air Combat Maneuvering (ACM) training program. The study utilized a twelve-week sample of eighty-nine student pilots to statistically validate the Good Stick…

  1. The Kepler-10 planetary system revisited by HARPS-N: A hot rocky world and a solid Neptune-mass planet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumusque, Xavier; Buchhave, Lars A.; Latham, David W.

    Kepler-10b was the first rocky planet detected by the Kepler satellite and confirmed with radial velocity follow-up observations from Keck-HIRES. The mass of the planet was measured with a precision of around 30%, which was insufficient to constrain models of its internal structure and composition in detail. In addition to Kepler-10b, a second planet transiting the same star with a period of 45 days was statistically validated, but the radial velocities were only good enough to set an upper limit of 20 M⊕ for the mass of Kepler-10c. To improve the precision on the mass for planet b, the HARPS-N Collaboration decided to observe Kepler-10 intensively with the HARPS-N spectrograph on the Telescopio Nazionale Galileo on La Palma. In total, 148 high-quality radial-velocity measurements were obtained over two observing seasons. These new data allow us to improve the precision of the mass determination for Kepler-10b to 15%. With a mass of 3.33 ± 0.49 M⊕ and an updated radius of 1.47 (+0.03/−0.02) R⊕, Kepler-10b has a density of 5.8 ± 0.8 g cm^-3, very close to the value predicted by models with the same internal structure and composition as the Earth. We were also able to determine a mass for the 45-day period planet Kepler-10c, with an even better precision of 11%. With a mass of 17.2 ± 1.9 M⊕ and radius of 2.35 (+0.09/−0.04) R⊕, Kepler-10c has a density of 7.1 ± 1.0 g cm^-3. Kepler-10c appears to be the first strong evidence of a class of more massive solid planets with longer orbital periods.
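
    As a quick arithmetic check of the quoted densities (my calculation, using a mean Earth density of about 5.51 g/cm^3): in Earth units, density scales as mass over radius cubed.

```python
RHO_EARTH = 5.51  # mean density of Earth, g/cm^3

def density(mass_earth_units, radius_earth_units):
    """Planet density from mass and radius expressed in Earth units."""
    return RHO_EARTH * mass_earth_units / radius_earth_units**3

print(f"Kepler-10b: {density(3.33, 1.47):.1f} g/cm^3")  # ~5.8, as reported
print(f"Kepler-10c: {density(17.2, 2.35):.1f} g/cm^3")  # ~7.3, consistent with
                                                        # the reported 7.1 +/- 1.0
```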

  2. Dynamical Constraints On The Galaxy-Halo Connection

    NASA Astrophysics Data System (ADS)

    Desmond, Harry

    2017-07-01

    Dark matter halos comprise the bulk of the universe's mass, yet must be probed by the luminous galaxies that form within them. A key goal of modern astrophysics, therefore, is to robustly relate the visible and dark mass, which to first order means relating the properties of galaxies and halos. This may be expected not only to improve our knowledge of galaxy formation, but also to enable high-precision cosmological tests using galaxies and hence maximise the utility of future galaxy surveys. As halos are inaccessible to observations - as galaxies are to N-body simulations - this relation requires an additional modelling step. The aim of this thesis is to develop and evaluate models of the galaxy-halo connection using observations of galaxy dynamics. In particular, I build empirical models based on the technique of halo abundance matching for five key dynamical scaling relations of galaxies - the Tully-Fisher, Faber-Jackson, mass-size and mass discrepancy-acceleration relations, and Fundamental Plane - which relate their baryon distributions and rotation or velocity dispersion profiles. I then develop a statistical scheme based on approximate Bayesian computation to compare the predicted and measured values of a number of summary statistics describing the relations' important features. This not only provides quantitative constraints on the free parameters of the models, but also allows absolute goodness-of-fit measures to be formulated. I find some features to be naturally accounted for by an abundance matching approach and others to impose new constraints on the galaxy-halo connection; the remainder are challenging to account for and may imply galaxy-halo correlations beyond the scope of basic abundance matching. Besides providing concrete statistical tests of specific galaxy formation theories, these results will be of use for guiding the inputs of empirical and semi-analytic galaxy formation models, which require galaxy-halo correlations to be imposed by hand. As galaxy datasets become larger and more precise in the future, we may expect these methods to continue providing insight into the relation between the visible and dark matter content of the universe and the physical processes that underlie it.
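
    A generic rejection-ABC sketch (the forward model, summary statistics, priors, and tolerance below are placeholders, not the thesis's abundance-matching machinery): draw parameters from the prior, simulate summaries, and keep draws whose summaries land within a tolerance of the measured ones.

```python
import numpy as np

rng = np.random.default_rng(3)

def forward_model(theta, rng):
    """Placeholder mock: returns summary statistics of a scaling relation."""
    slope, scatter = theta
    x = rng.uniform(9.0, 11.0, 500)                   # e.g. log stellar masses
    y = slope * x + rng.normal(0.0, scatter, x.size)  # e.g. log velocities
    return np.array([np.polyfit(x, y, 1)[0], y.std()])

observed = np.array([0.31, 1.05])   # "measured" summaries (illustrative values)
epsilon = 0.05                      # acceptance tolerance

posterior = []
for _ in range(20_000):
    theta = np.array([rng.uniform(0.0, 1.0),     # prior on slope
                      rng.uniform(0.5, 2.0)])    # prior on scatter
    if np.linalg.norm(forward_model(theta, rng) - observed) < epsilon:
        posterior.append(theta)
print(f"accepted {len(posterior)} of 20000 prior draws")
```

    The accepted draws approximate the posterior without ever evaluating a likelihood, which is why ABC suits simulation-only models like abundance matching mocks.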

  3. A Treatment of Computational Precision, Number Representation, and Large Integers in an Introductory Fortran Course

    ERIC Educational Resources Information Center

    Richardson, William H., Jr.

    2006-01-01

    Computational precision is sometimes given short shrift in a first programming course. Treating this topic requires discussing integer and floating-point number representations and inaccuracies that may result from their use. An example of a moderately simple programming problem from elementary statistics was examined. It forced students to…
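
    The kind of pitfall such an exercise exposes is easy to reproduce (sketched here in Python rather than the course's Fortran): the one-pass "textbook" variance formula cancels catastrophically in single precision when the mean dwarfs the spread.

```python
import numpy as np

rng = np.random.default_rng(7)
data64 = 1.0e4 + rng.normal(0.0, 0.1, 100_000)   # large mean, small spread
data32 = data64.astype(np.float32)

# One-pass formula: var = E[x^2] - E[x]^2  (numerically fragile)
naive32 = np.mean(data32**2) - np.mean(data32)**2
# Two-pass formula: subtract the mean first (numerically stable)
stable32 = np.mean((data32 - data32.mean())**2)

print(f"true variance ~ {data64.var():.6f}")
print(f"float32 one-pass:  {naive32:.6f}")   # often wildly wrong, even negative
print(f"float32 two-pass:  {stable32:.6f}")  # close to the true value
```

    The two-pass form keeps the subtraction between numbers of similar size, which is the whole trick.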

  4. Using Covariates to Improve Precision for Studies that Randomize Schools to Evaluate Educational Interventions

    ERIC Educational Resources Information Center

    Bloom, Howard S.; Richburg-Hayes, Lashawn; Black, Alison Rebeck

    2007-01-01

    This article examines how controlling statistically for baseline covariates, especially pretests, improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement. Empirical findings from five urban school districts indicate that (1) pretests can reduce the number of randomized…

  5. Inverse probability weighting for covariate adjustment in randomized studies

    PubMed Central

    Li, Xiaochun; Li, Lingling

    2013-01-01

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a “favorable” model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed so that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a “favorable” model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. PMID:24038458
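
    A schematic sketch of the two-stage idea (illustrative only; the data, model, and estimator below are my stand-ins, not the paper's exact procedure): stage 1 fits the treatment model from baseline covariates with the outcome column held back, and stage 2 compares inverse-probability-weighted outcome means.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 2000
x = rng.normal(size=(n, 3))                    # baseline covariates
z = rng.integers(0, 2, n)                      # randomized treatment
y = 1.0 + 0.5 * z + x @ np.array([0.8, -0.3, 0.2]) + rng.normal(size=n)

# Stage 1 (outcome-blind): estimate P(Z=1 | X) even though Z is randomized;
# chance imbalance in X is what the weights correct for.
ps = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]

# Stage 2: weighted difference in outcome means.
w1, w0 = z / ps, (1 - z) / (1 - ps)
effect = (w1 * y).sum() / w1.sum() - (w0 * y).sum() / w0.sum()
print(f"IPW estimate of treatment effect: {effect:.3f}  (truth: 0.5)")
```

    Because stage 1 never touches y, the analyst cannot shop among adjustment models for a flattering effect estimate, which is the objectivity argument.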

  6. Precision Cosmology

    NASA Astrophysics Data System (ADS)

    Jones, Bernard J. T.

    2017-04-01

    Preface; Notation and conventions; Part I. 100 Years of Cosmology: 1. Emerging cosmology; 2. The cosmic expansion; 3. The cosmic microwave background; 4. Recent cosmology; Part II. Newtonian Cosmology: 5. Newtonian cosmology; 6. Dark energy cosmological models; 7. The early universe; 8. The inhomogeneous universe; 9. The inflationary universe; Part III. Relativistic Cosmology: 10. Minkowski space; 11. The energy momentum tensor; 12. General relativity; 13. Space-time geometry and calculus; 14. The Einstein field equations; 15. Solutions of the Einstein equations; 16. The Robertson-Walker solution; 17. Congruences, curvature and Raychaudhuri; 18. Observing and measuring the universe; Part IV. The Physics of Matter and Radiation: 19. Physics of the CMB radiation; 20. Recombination of the primeval plasma; 21. CMB polarisation; 22. CMB anisotropy; Part V. Precision Tools for Precision Cosmology: 23. Likelihood; 24. Frequentist hypothesis testing; 25. Statistical inference: Bayesian; 26. CMB data processing; 27. Parametrising the universe; 28. Precision cosmology; 29. Epilogue; Appendix A. SI, CGS and Planck units; Appendix B. Magnitudes and distances; Appendix C. Representing vectors and tensors; Appendix D. The electromagnetic field; Appendix E. Statistical distributions; Appendix F. Functions on a sphere; Appendix G. Acknowledgements; References; Index.

  7. Inverse probability weighting for covariate adjustment in randomized studies.

    PubMed

    Shen, Changyu; Li, Xiaochun; Li, Lingling

    2014-02-20

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed so that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. Copyright © 2013 John Wiley & Sons, Ltd.

  8. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.

  9. Identifiability of PBPK Models with Applications to Dimethylarsinic Acid Exposure

    EPA Science Inventory

    Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss diff...

  10. How precise are reported protein coordinate data?

    PubMed

    Konagurthu, Arun S; Allison, Lloyd; Abramson, David; Stuckey, Peter J; Lesk, Arthur M

    2014-03-01

    Atomic coordinates in the Worldwide Protein Data Bank (wwPDB) are generally reported to greater precision than the experimental structure determinations have actually achieved. By using information theory and data compression to study the compressibility of protein atomic coordinates, it is possible to quantify the amount of randomness in the coordinate data and thereby to determine the realistic precision of the reported coordinates. On average, the value of each Cα coordinate in a set of selected protein structures solved at a variety of resolutions is good to about 0.1 Å.
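
    A toy rendition of the compressibility idea (my construction, not the paper's pipeline): print coordinates at successively fewer decimals and compress the text. Digits beyond the real precision are nearly random, so keeping them inflates the compressed size without adding information.

```python
import zlib
import numpy as np

rng = np.random.default_rng(5)
# Fake C-alpha coordinates: a smooth chain plus 0.1 A of "experimental" noise
chain = np.cumsum(rng.normal(0.0, 1.5, (500, 3)), axis=0)
coords = chain + rng.normal(0.0, 0.1, chain.shape)

for decimals in (3, 2, 1):
    text = "\n".join(" ".join(f"{v:.{decimals}f}" for v in row) for row in coords)
    size = len(zlib.compress(text.encode()))
    print(f"{decimals} decimals -> {size} bytes compressed")
```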

  11. In vivo precision of the GE Lunar iDXA densitometer for the measurement of total-body, lumbar spine, and femoral bone mineral density in adults.

    PubMed

    Hind, Karen; Oldroyd, Brian; Truscott, John G

    2010-01-01

    Knowledge of precision is integral to the monitoring of bone mineral density (BMD) changes using dual-energy X-ray absorptiometry (DXA). We evaluated the precision for bone measurements acquired using a GE Lunar iDXA (GE Healthcare, Waukesha, WI) in self-selected men and women, with mean age of 34.8 yr (standard deviation [SD]: 8.4; range: 20.1-50.5), heterogeneous in terms of body mass index (mean: 25.8 kg/m²; SD: 5.1; range: 16.7-42.7 kg/m²). Two consecutive iDXA scans (with repositioning) of the total body, lumbar spine, and femur were conducted within 1 h, for each subject. The coefficient of variation (CV), the root-mean-square (RMS) averages of SDs of repeated measurements, and the corresponding 95% least significant change were calculated. Linear regression analyses were also undertaken. We found a high level of precision for BMD measurements, particularly for scans of the total body, lumbar spine, and total hip (RMS: 0.007, 0.004, and 0.007 g/cm²; CV: 0.63%, 0.41%, and 0.53%, respectively). Precision error for the femoral neck was higher but still represented good reproducibility (RMS: 0.014 g/cm²; CV: 1.36%). There were associations between body size and total-body BMD and total-hip BMD SD precisions (r=0.534-0.806, p<0.05) in male subjects. Regression parameters showed good association between consecutive measurements for all body sites (r²=0.98-0.99). The Lunar iDXA provided excellent precision for BMD measurements of the total body, lumbar spine, femoral neck, and total hip. Copyright © 2010 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
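
    The precision arithmetic reported above follows standard densitometry definitions; a short sketch with hypothetical paired scans (the 2.77 factor is 1.96·√2, giving the 95% least significant change):

```python
import numpy as np

# Hypothetical paired total-hip BMD scans (g/cm^2), one row per subject
scan1 = np.array([0.981, 1.102, 0.874, 1.230, 0.955])
scan2 = np.array([0.986, 1.095, 0.880, 1.224, 0.948])

pairs = np.stack([scan1, scan2], axis=1)
sd_per_subject = pairs.std(axis=1, ddof=1)
rms_sd = np.sqrt(np.mean(sd_per_subject**2))     # precision error, g/cm^2
cv_pct = 100.0 * rms_sd / pairs.mean()           # %CV
lsc95 = 2.77 * rms_sd                            # 95% least significant change

print(f"RMS-SD = {rms_sd:.4f} g/cm^2, CV = {cv_pct:.2f}%, LSC = {lsc95:.4f} g/cm^2")
```

    A follow-up BMD change smaller than the LSC cannot be distinguished from measurement noise at 95% confidence, which is why precision studies like this one are run before monitoring patients.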

  12. Accuracy and precision of 3 intraoral scanners and accuracy of conventional impressions: A novel in vivo analysis method.

    PubMed

    Nedelcu, R; Olsson, P; Nyström, I; Rydén, J; Thor, A

    2018-02-01

    To evaluate a novel methodology using industrial scanners as a reference, and assess in vivo accuracy of 3 intraoral scanners (IOS) and conventional impressions. Further, to evaluate IOS precision in vivo. Four reference-bodies were bonded to the buccal surfaces of upper premolars and incisors in five subjects. After three reference-scans, ATOS Core 80 (ATOS), subjects were scanned three times with three IOS systems: 3M True Definition (3M), CEREC Omnicam (OMNI) and Trios 3 (TRIOS). One conventional impression (IMPR) was taken, 3M Impregum Penta Soft, and poured models were digitized with laboratory scanner 3Shape D1000 (D1000). Best-fit alignment of reference-bodies and 3D Compare Analysis was performed. Precision of ATOS and D1000 was assessed for quantitative evaluation and comparison. Accuracy of IOS and IMPR were analyzed using ATOS as reference. Precision of IOS was evaluated through intra-system comparison. Precision of ATOS reference scanner (mean 0.6 μm) and D1000 (mean 0.5 μm) was high. Pairwise multiple comparisons of reference-bodies located in different tooth positions displayed a statistically significant difference of accuracy between two scanner-groups: 3M and TRIOS, over OMNI (p value range 0.0001 to 0.0006). IMPR did not show any statistically significant difference to IOS. However, deviations of IOS and IMPR were within a similar magnitude. No statistical difference was found for IOS precision. The methodology can be used for assessing accuracy of IOS and IMPR in vivo in up to five units bilaterally from midline. 3M and TRIOS had a higher accuracy than OMNI. IMPR overlapped both groups. Intraoral scanners can be used as a replacement for conventional impressions when restoring up to ten units without extended edentulous spans. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. Precision, Reliability, and Effect Size of Slope Variance in Latent Growth Curve Models: Implications for Statistical Power Analysis

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher

    2018-01-01

    Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377

  14. Matrix product state representation of quasielectron wave functions

    NASA Astrophysics Data System (ADS)

    Kjäll, J.; Ardonne, E.; Dwivedi, V.; Hermanns, M.; Hansson, T. H.

    2018-05-01

    Matrix product state techniques provide a very efficient way to numerically evaluate certain classes of quantum Hall wave functions that can be written as correlators in two-dimensional conformal field theories. Important examples are the Laughlin and Moore-Read ground states and their quasihole excitations. In this paper, we extend the matrix product state techniques to evaluate quasielectron wave functions, a more complex task because the corresponding conformal field theory operator is not local. We use our method to obtain density profiles for states with multiple quasielectrons and quasiholes, and to calculate the (mutual) statistical phases of the excitations with high precision. The wave functions we study are subject to a known difficulty: the position of a quasielectron depends on the presence of other quasiparticles, even when their separation is large compared to the magnetic length. Quasielectron wave functions constructed using the composite fermion picture, which are topologically equivalent to the quasielectrons we study, have the same problem. This flaw is serious in that it gives wrong results for the statistical phases obtained by braiding distant quasiparticles. We analyze this problem in detail and show that it originates from an incomplete screening of the topological charges, which invalidates the plasma analogy. We demonstrate that this can be remedied in the case when the separation between the quasiparticles is large, which allows us to obtain the correct statistical phases. Finally, we propose that a modification of the Laughlin state, that allows for local quasielectron operators, should have good topological properties for arbitrary configurations of excitations.

  15. Precision and Accuracy of a Digital Impression Scanner in Full-Arch Implant Rehabilitation.

    PubMed

    Pesce, Paolo; Pera, Francesco; Setti, Paolo; Menini, Maria

    To evaluate the accuracy and precision of a digital scanner used to scan four implants positioned according to an immediate loading implant protocol and to assess the accuracy of an aluminum framework fabricated from a digital impression. Five master casts reproducing different edentulous maxillae with four tilted implants were used. Four scan bodies were screwed onto the low-profile abutments, and a digital intraoral scanner was used to perform five digital impressions of each master cast. To assess trueness, a metal framework of the best digital impression was produced with computer-aided design/computer-assisted manufacture (CAD/CAM) technology and passive fit was assessed with the Sheffield test. Gaps between the frameworks and the implant analogs were measured with a stereomicroscope. To assess precision, three-dimensional (3D) point cloud processing software was used to measure the deviations between the five digital impressions of each cast by producing a color map. The deviation values were grouped in three classes, and differences were assessed between class 2 (representing lower discrepancies) and the assembled classes 1 and 3 (representing the higher negative and positive discrepancies, respectively). The frameworks showed a mean gap of < 30 μm (range: 2 to 47 μm). A statistically significant difference was found between the two groups by the 3D point cloud software, with higher frequencies of points in class 2 than in grouped classes 1 and 3 (P < .001). Within the limits of this in vitro study, it appears that a digital impression may represent a reliable method for fabricating full-arch implant frameworks with good passive fit when tilted implants are present.

  16. Fully automatic and precise data analysis developed for time-of-flight mass spectrometry.

    PubMed

    Meyer, Stefan; Riedo, Andreas; Neuland, Maike B; Tulej, Marek; Wurz, Peter

    2017-09-01

    Scientific objectives of current and future space missions are focused on the investigation of the origin and evolution of the solar system, with particular emphasis on habitability and signatures of past and present life. For in situ measurements of the chemical composition of solid samples on planetary surfaces, of the neutral atmospheric gas, and of the thermal plasma of planetary atmospheres, mass spectrometers making use of time-of-flight mass analysers are a widely used technique. However, such investigations imply measurements with good statistics and, thus, a large amount of data to be analysed. Therefore, faster and especially robust automated data analysis with enhanced accuracy is required. In this contribution, an automatic data analysis software package, which allows fast and precise quantitative analysis of time-of-flight mass spectrometric data, is presented and discussed in detail. A crucial part of this software is a robust and fast peak-finding algorithm with a consecutive numerical integration method allowing precise data analysis. We tested our analysis software with data from different time-of-flight mass spectrometers and different measurement campaigns thereof. The quantitative analysis of isotopes, using automatic data analysis, yields results with an accuracy of isotope ratios up to 100 ppm for a signal-to-noise ratio (SNR) of 10^4. We show that the accuracy of isotope ratios is in fact proportional to SNR^-1. Furthermore, we observe that the accuracy of isotope ratios is inversely proportional to the mass resolution. Additionally, we show that the accuracy of isotope ratios depends on the sample width T_s as T_s^0.5. Copyright © 2017 John Wiley & Sons, Ltd.

  17. QCD Precision Measurements and Structure Function Extraction at a High Statistics, High Energy Neutrino Scattering Experiment:. NuSOnG

    NASA Astrophysics Data System (ADS)

    Adams, T.; Batra, P.; Bugel, L.; Camilleri, L.; Conrad, J. M.; de Gouvêa, A.; Fisher, P. H.; Formaggio, J. A.; Jenkins, J.; Karagiorgi, G.; Kobilarcik, T. R.; Kopp, S.; Kyle, G.; Loinaz, W. A.; Mason, D. A.; Milner, R.; Moore, R.; Morfín, J. G.; Nakamura, M.; Naples, D.; Nienaber, P.; Olness, F. I.; Owens, J. F.; Pate, S. F.; Pronin, A.; Seligman, W. G.; Shaevitz, M. H.; Schellman, H.; Schienbein, I.; Syphers, M. J.; Tait, T. M. P.; Takeuchi, T.; Tan, C. Y.; van de Water, R. G.; Yamamoto, R. K.; Yu, J. Y.

    We extend the physics case for a new high-energy, ultra-high statistics neutrino scattering experiment, NuSOnG (Neutrino Scattering On Glass), to address a variety of issues including precision QCD measurements, extraction of structure functions, and the derived Parton Distribution Functions (PDFs). This experiment uses a Tevatron-based neutrino beam to obtain a sample of Deep Inelastic Scattering (DIS) events that is over two orders of magnitude larger than past samples. We outline an innovative method for fitting the structure functions using a parametrized energy shift which yields reduced systematic uncertainties. High statistics measurements, in combination with improved systematics, will enable NuSOnG to perform discerning tests of fundamental Standard Model parameters as we search for deviations which may hint at "Beyond the Standard Model" physics.

  18. Resuscitation quality of rotating chest compression providers at one-minute vs. two-minute intervals: A mannequin study.

    PubMed

    Kılıç, D; Göksu, E; Kılıç, T; Buyurgan, C S

    2018-05-01

    The aim of this randomized cross-over study was to compare one-minute and two-minute continuous chest compressions in terms of compression-only CPR quality metrics on a mannequin model in the ED. Thirty-six emergency medicine residents participated in this study. In the 1-minute group, there was no statistically significant difference in the mean compression rate (p=0.83), mean compression depth (p=0.61), good compressions (p=0.31), the percentage of complete release (p=0.07), adequate compression depth (p=0.11) or the percentage of good rate (p=0.51) over the four-minute time period. Only flow time was statistically significant among the 1-minute intervals (p<0.001). In the 2-minute group, the mean compression depth (p=0.19), good compression (p=0.92), the percentage of complete release (p=0.28), adequate compression depth (p=0.96), and the percentage of good rate (p=0.09) were not statistically significant over time. In this group, the number of compressions (248±31 vs 253±33, p=0.01), mean compression rates (123±15 vs 126±17, p=0.01) and flow time (p=0.001) were statistically significant across the two-minute intervals. There was no statistically significant difference in the mean number of chest compressions per minute, mean chest compression depth, the percentage of good compressions, complete release, adequate chest compression depth or percentage of good compressions between the 1-minute and 2-minute groups. There was no statistically significant difference in the quality metrics of chest compressions between the 1- and 2-minute compression-only groups. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Performance evaluation of Abbott CELL-DYN Ruby for routine use.

    PubMed

    Lehto, T; Hedberg, P

    2008-10-01

    CELL-DYN Ruby is a new automated hematology analyzer suitable for routine use in small laboratories and as a back-up or emergency analyzer in medium- to high-volume laboratories. The analyzer was evaluated by comparing the results from the CELL-DYN® Ruby with the results obtained from the CELL-DYN Sapphire. Precision, linearity, and carryover between patient samples were also assessed. Precision was good at all levels for the routine cell blood count (CBC) parameters (CV% …), and results correlated well (R² ≥ 0.98) with CELL-DYN Sapphire for the CBC parameters. For the absolute reticulocyte count, R² was 0.82. In the white blood cell (WBC) differentials, the between-days precision was good for all parameters (CV% …; R² ≥ 0.97), and the correlation coefficients for absolute monocyte count and monocyte percentage were 0.91 and 0.87, respectively. For absolute basophil count and basophil percentage the correlations were weaker (R² = 0.46 and 0.34, respectively). Carryover was minimal for all the parameters studied. The linearities of WBC, red blood cell, PLT, and hemoglobin measurements were acceptable within the tested ranges. In conclusion, the results of the evaluation showed the performance of the CELL-DYN Ruby to be good.

  20. Evaluation of the prediction precision capability of partial least squares regression approach for analysis of high alloy steel by laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Sarkar, Arnab; Karki, Vijay; Aggarwal, Suresh K.; Maurya, Gulab S.; Kumar, Rohit; Rai, Awadhesh K.; Mao, Xianglei; Russo, Richard E.

    2015-06-01

    Laser induced breakdown spectroscopy (LIBS) was applied for elemental characterization of high alloy steel using partial least squares regression (PLSR), with the objective of evaluating the analytical performance of this multivariate approach. The optimization of the number of principal components for minimizing error in the PLSR algorithm was investigated. The effect of different pre-treatment procedures on the raw spectral data before PLSR analysis was evaluated based on several statistical parameters (standard error of prediction, percentage relative error of prediction, etc.). The pre-treatment with the "NORM" parameter gave the optimum statistical results. The analytical performance of the PLSR model improved by increasing the number of laser pulses accumulated per spectrum as well as by truncating the spectrum to an appropriate wavelength region. It was found that the statistical benefit of truncating the spectrum can also be accomplished by increasing the number of laser pulses per accumulation without spectral truncation. The constituents (Co and Mo) present at hundreds of ppm were determined with a relative precision of 4-9% (2σ), whereas the major constituents Cr and Ni (present at a few percent levels) were determined with a relative precision of ~2% (2σ).
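
    A hedged sketch of the modeling step (the paper does not name its software; scikit-learn and the synthetic data here are my choices): normalize each spectrum, then choose the number of PLS components by cross-validated prediction error.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
spectra = rng.random((40, 1200))             # 40 synthetic LIBS spectra
conc = rng.uniform(0.1, 5.0, 40)             # e.g. reference concentrations, %

# "NORM"-style pre-treatment: scale each spectrum to unit Euclidean norm
spectra_norm = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)

for n_comp in (2, 4, 6, 8):
    pls = PLSRegression(n_components=n_comp)
    rmse = -cross_val_score(pls, spectra_norm, conc, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{n_comp} components: CV RMSE = {rmse:.3f}")
```

    Too few components underfit and too many fit noise, so the cross-validated error curve is what selects the component count the abstract refers to.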

  1. Colorimetric microdetermination of captopril in pure form and in pharmaceutical formulations

    NASA Astrophysics Data System (ADS)

    Shama, Sayed Ahmed; El-Sayed Amin, Alla; Omara, Hany

    2006-11-01

    A simple, rapid, accurate, precise, and sensitive colorimetric method for the determination of captopril (CAP) in bulk samples and in dosage forms is described. The method is based on oxidation of the drug by potassium permanganate in acidic medium and determination of the unreacted oxidant by measuring the decrease in absorbance of five different dyes: methylene blue (MB), acid blue 74 (AB), acid red 73 (AR), amaranth dye (AM), and acid orange 7 (AO), at suitable λmax values (660, 610, 510, 520, and 485 nm, respectively). Regression analysis of the Beer's law plots showed good correlation in the concentration ranges 0.4-12.5, 0.3-10, 0.5-11, 0.4-8.3, and 0.5-9.3 μg ml-1, respectively. The apparent molar absorptivity, Sandell sensitivity, detection limit, and quantitation limit were calculated. For more accurate results, the Ringbom optimum concentration ranges were 0.5-12, 0.5-9.6, 0.6-10.5, 0.5-8.0, and 0.7-9.0 μg ml-1, respectively. The validity of the proposed method was tested by analyzing pure and dosage forms containing CAP, whether alone or in combination with hydrochlorothiazide. Statistical analysis of the results shows that the proposed procedures are precise, accurate, and easily applicable for the determination of CAP in pure form and in pharmaceutical preparations. The stability constant was also determined potentiometrically, and the free energy change was calculated.

  2. Development and validation of a reversed-phase high-performance thin-layer chromatography-densitometric method for determination of atorvastatin calcium in bulk drug and tablets.

    PubMed

    Shirkhedkar, Atul A; Surana, Sanjay J

    2010-01-01

    Atorvastatin calcium is a synthetic HMG-CoA reductase inhibitor that is used as a cholesterol-lowering agent. A simple, sensitive, selective, and precise RP-HPTLC-densitometric determination of atorvastatin calcium both as bulk drug and from pharmaceutical formulation was developed and validated according to International Conference on Harmonization guidelines. The method used aluminum sheets precoated with silica gel 60 RP18F254S as the stationary phase, and the mobile phase consisted of methanol-water (3.5 + 1.5, v/v). The system gave a compact band for atorvastatin calcium with an Rf value of 0.62 +/- 0.02. Densitometric quantification was carried out at 246 nm. The linear regression analysis data for the calibration plots showed a good linear relationship with r = 0.9992 in the working concentration range of 100-800 ng/band. The method was validated for precision, accuracy, ruggedness, robustness, specificity, recovery, LOD, and LOQ. The LOD and LOQ were 6 and 18 ng, respectively. The drug underwent hydrolysis when subjected to acidic conditions and was found to be stable under alkali, oxidation, dry heat, and photodegradation conditions. Statistical analysis proved that the developed RP-HPTLC-densitometry method is reproducible and selective and that it can be applied for identification and quantitative determination of atorvastatin calcium in bulk drug and tablet formulation.

  3. Fully accelerating quantum Monte Carlo simulations of real materials on GPU clusters

    NASA Astrophysics Data System (ADS)

    Esler, Kenneth

    2011-03-01

    Quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles, combining very high accuracy with extreme parallel scalability. By solving the many-body Schrödinger equation through a stochastic projection, it achieves greater accuracy than mean-field methods and better scaling with system size than quantum chemical methods, enabling scientific discovery across a broad spectrum of disciplines. In recent years, graphics processing units (GPUs) have provided a high-performance and low-cost new approach to scientific computing, and GPU-based supercomputers are now among the fastest in the world. The multiple forms of parallelism afforded by QMC algorithms make the method an ideal candidate for acceleration in the many-core paradigm. We present the results of porting the QMCPACK code to run on GPU clusters using the NVIDIA CUDA platform. Using mixed precision on GPUs and MPI for intercommunication, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core CPUs alone, while reproducing the double-precision CPU results within statistical error. We discuss the algorithm modifications necessary to achieve good performance on this heterogeneous architecture and present the results of applying our code to molecules and bulk materials. Supported by the U.S. DOE under Contract No. DOE-DE-FG05-08OR23336 and by the NSF under No. 0904572.

  4. [Precision of digital impressions with TRIOS under simulated intraoral impression taking conditions].

    PubMed

    Yang, Xin; Sun, Yi-fei; Tian, Lei; Si, Wen-jie; Feng, Hai-lan; Liu, Yi-hong

    2015-02-18

    To evaluate the precision of digital impressions taken under simulated clinical impression-taking conditions with TRIOS and to compare it with the precision of extraoral digitalization. Six #14-#17 epoxy resin dentitions with extracted #16 tooth preparations embedded were made. For each artificial dentition, (1) a silicone rubber impression was taken with an individual tray, poured with type IV plaster, and digitalized with the 3Shape D700 model scanner 10 times; (2) with the dentition fastened to a dental simulator, 10 digital impressions were taken with the 3Shape TRIOS intraoral scanner. To assess the precision, best-fit alignment and 3D comparison were conducted between repeated scan models pairwise in Geomagic Qualify 12.0 and exported as averaged errors (AE) and color-coded diagrams. Non-parametric analysis was performed to compare the precision of the digital impressions and the model images. The color-coded diagrams were used to show the deviation distributions. The mean AE for the digital impressions was 7.058281 μm, which was greater than the 4.092363 μm for the model images (P<0.05). However, the means and medians of AE for the digital impressions were no more than 10 μm, which means that the consistency between the digital impressions was good. The deviation distribution was uniform in the model images, while nonuniform in the digital impressions, with greater deviations lying mainly around the shoulders and interproximal surfaces. Digital impressions with TRIOS are of good precision and meet the clinical standard. Shoulder and interproximal surface scanning is more difficult.

  5. When Machines Think: Radiology's Next Frontier.

    PubMed

    Dreyer, Keith J; Geis, J Raymond

    2017-12-01

    Artificial intelligence (AI), machine learning, and deep learning are terms now seen frequently, all of which refer to computer algorithms that change as they are exposed to more data. Many of these algorithms are surprisingly good at recognizing objects in images. The combination of large amounts of machine-consumable digital data, increased and cheaper computing power, and increasingly sophisticated statistical models combine to enable machines to find patterns in data in ways that are not only cost-effective but also potentially beyond humans' abilities. Building an AI algorithm can be surprisingly easy. Understanding the associated data structures and statistics, on the other hand, is often difficult and obscure. Converting the algorithm into a sophisticated product that works consistently in broad, general clinical use is complex and incompletely understood. To show how these AI products reduce costs and improve outcomes will require clinical translation and industrial-grade integration into routine workflow. Radiology has the chance to leverage AI to become a center of intelligently aggregated, quantitative, diagnostic information. Centaur radiologists, formed as a synergy of human plus computer, will provide interpretations using data extracted from images by humans and image-analysis computer algorithms, as well as the electronic health record, genomics, and other disparate sources. These interpretations will form the foundation of precision health care, or care customized to an individual patient. © RSNA, 2017.

  6. [Comparison among three translucency parameters].

    PubMed

    Fang, Xiong; Hui, Xia

    2017-06-01

    This study aims to compare the three translucency parameters commonly used in prosthodontics: transmittance (T), contrast ratio (CR), and translucency parameter (TP). Six platelet specimens were composed of Vita enamel and dental porcelain. The initial thickness was 1.2 mm. The specimens were gradually ground to 1.0, 0.8, 0.6, 0.4, and 0.2 mm. T, color parameters, and reflection were measured by a spectrocolorimeter at each corresponding thickness. T, CR, and TP were calculated and compared. TP increased, whereas CR decreased, with decreasing thickness. Moreover, T increased with decreasing thickness, and exponential relationships were found. Two-way ANOVA showed statistical significance between T and thickness, except between the 1.2 mm and 1.0 mm enamel porcelain groups. No difference was found among the coefficients of variation (CV) of T, CR, and TP. Curve fitting indicated the existence of exponential relationships between T and CR and between T and TP. The values for goodness of fit with statistical significance were 0.951 and 0.939, respectively (P<0.05). Under the experimental conditions, T, TP, and CR achieved the same CV. T and TP, as well as T and CR, showed exponential relationships. The values of CR and TP could not represent translucency precisely, especially when comparing the changing ratios.

  7. Experimental design of an interlaboratory study for trace metal analysis of liquid fluids. [for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Greenbauer-Seng, L. A.

    1983-01-01

    The accurate determination of trace metals in fuels is an important requirement in much of the research into and development of alternative fuels for aerospace applications. Recognizing the detrimental effects of certain metals on fuel performance and fuel systems at the part-per-million and in some cases part-per-billion levels requires improved accuracy in determining these low-concentration elements. Accurate analyses are also required to ensure interchangeability of analysis results between vendor, researcher, and end user for purposes of quality control. Previous interlaboratory studies have demonstrated the inability of different laboratories to agree on the results of metal analysis, particularly at low concentration levels, even though typically good precision is reported within a laboratory. An interlaboratory study was designed to gain statistical information about the sources of variation in the reported concentrations. Five participant laboratories were used on a fee basis and were not informed of the purpose of the analyses. The effects of laboratory, analytical technique, concentration level, and ashing additive were studied in four fuel types for 20 elements of interest. The prescribed sample preparation schemes (variations of dry ashing) were used by all of the laboratories. The analytical data were statistically evaluated using a computer program for the analysis-of-variance technique.

  8. Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2015-01-01

    Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…

  9. Whole vertebral bone segmentation method with a statistical intensity-shape model based approach

    NASA Astrophysics Data System (ADS)

    Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer

    2011-03-01

    An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing four different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic, and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as a pre-processing step to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral area). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper thoracic, lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for cervical, thoracic, and lumbar vertebrae.
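
    A bare-bones sketch of the parametric modeling step (my simplification of the general shape-intensity PCA approach, not the authors' code; the vector size and data are invented): each training vertebra becomes one stacked shape-plus-intensity vector, PCA yields the principal component vectors, and a new instance is the mean plus a linear combination whose coefficients MAP fitting would adjust.

```python
import numpy as np

rng = np.random.default_rng(13)
# Ten training vertebrae, each described by a 300-dim shape+intensity vector
training = rng.normal(size=(10, 300))

mean = training.mean(axis=0)
centered = training - mean
# PCA via SVD; rows of vt are the principal component vectors
_, s, vt = np.linalg.svd(centered, full_matrices=False)

k = 5                                   # retain the first k modes

def synthesize(b):
    """Instance from k model parameters b (what MAP fitting would adjust)."""
    return mean + b @ vt[:k]

mode_sd = s[:k] / np.sqrt(len(training) - 1)   # per-mode standard deviations
instance = synthesize(rng.normal(0.0, 1.0, k) * mode_sd)
print(instance.shape)  # (300,): a parametrically generated vertebra model
```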

  10. Quasi-Monochromatic Visual Environments and the Resting Point of Accommodation

    DTIC Science & Technology

    1988-01-01

    accommodation. No statistically significant differences were revealed to support the possibility of color mediated differential regression to resting...discussed with respect to the general findings of the total sample as well as the specific behavior of individual participants. The summarized statistics ...remaining ten varied considerably with respect to the averaged trends reported in the above descriptive statistics as well as with respect to precision

  11. The Too-Much-Precision Effect.

    PubMed

    Loschelder, David D; Friese, Malte; Schaerer, Michael; Galinsky, Adam D

    2016-12-01

    Past research has suggested a fundamental principle of price precision: The more precise an opening price, the more it anchors counteroffers. The present research challenges this principle by demonstrating a too-much-precision effect. Five experiments (involving 1,320 experts and amateurs in real-estate, jewelry, car, and human-resources negotiations) showed that increasing the precision of an opening offer had positive linear effects for amateurs but inverted-U-shaped effects for experts. Anchor precision backfired because experts saw too much precision as reflecting a lack of competence. This negative effect held unless first movers gave rationales that boosted experts' perception of their competence. Statistical mediation and experimental moderation established the critical role of competence attributions. This research disentangles competing theoretical accounts (attribution of competence vs. scale granularity) and qualifies two putative truisms: that anchors affect experts and amateurs equally, and that more precise prices are linearly more potent anchors. The results refine current theoretical understanding of anchoring and have significant implications for everyday life.

  12. [Clinical research=design*measurements*statistical analyses].

    PubMed

    Furukawa, Toshiaki

    2012-06-01

    A clinical study must address true endpoints that matter for the patients and the doctors. A good clinical study starts with a good clinical question. Formulating a clinical question in the form of PECO can sharpen one's original question. In order to perform a good clinical study one must have a knowledge of study design, measurements and statistical analyses: The first is taught by epidemiology, the second by psychometrics and the third by biostatistics.

  13. P values are only an index to evidence: 20th- vs. 21st-century statistical science.

    PubMed

    Burnham, K P; Anderson, D R

    2014-03-01

    Early statistical methods focused on pre-data probability statements (i.e., data as random variables) such as P values; these are not really inferences, nor are P values evidential. Statistical science clung to these principles throughout much of the 20th century as a wide variety of methods were developed for special cases. Looking back, it is clear that the underlying paradigm (i.e., testing and P values) was weak. As Kuhn (1970) suggests, new paradigms have taken the place of earlier ones: this is a goal of good science. New methods have been developed and older methods extended, and these allow proper measures of strength of evidence and multimodel inference. It is time to move forward with sound theory and practice for the difficult practical problems that lie ahead. Given data, the useful foundation shifts to post-data probability statements such as model probabilities (Akaike weights) or related quantities such as odds ratios and likelihood intervals. These new methods allow formal inference from multiple models in the a priori set. These quantities are properly evidential. The past century was aimed at finding the "best" model and making inferences from it. The goal in the 21st century is to base inference on all the models weighted by their model probabilities (model averaging). Estimates of precision can include model selection uncertainty, leading to variances conditional on the model set. The 21st century will be about the quantification of information, proper measures of evidence, and multimodel inference. Nelder (1999:261) concludes, "The most important task before us in developing statistical science is to demolish the P-value culture, which has taken root to a frightening extent in many areas of both pure and applied science and technology".

  14. Hyperspectral Imaging in Tandem with R Statistics and Image Processing for Detection and Visualization of pH in Japanese Big Sausages Under Different Storage Conditions.

    PubMed

    Feng, Chao-Hui; Makino, Yoshio; Yoshimura, Masatoshi; Thuyet, Dang Quoc; García-Martín, Juan Francisco

    2018-02-01

    The potential of hyperspectral imaging with wavelengths of 380 to 1000 nm was used to determine the pH of cooked sausages after different storage conditions (4 °C for 1 d; 35 °C for 1, 3, and 5 d). The mean spectra of the sausages were extracted from the hyperspectral images, and a partial least squares regression (PLSR) model was developed to relate the spectral profiles to the pH of the cooked sausages. Eleven important wavelengths were selected based on the regression coefficient values. The PLSR model established using the optimal wavelengths showed good precision, with a prediction coefficient of determination (Rp^2) of 0.909 and a root mean square error of prediction of 0.035. A prediction map illustrating pH indices in sausages was developed for the first time in R. The overall results suggest that hyperspectral imaging combined with PLSR and R statistics can quantify and visualize the pH evolution of sausages under different storage conditions. In this paper, hyperspectral imaging is for the first time used to detect pH in cooked sausages using R statistics, which provides useful information for researchers who do not have access to Matlab. Eleven optimal wavelengths were successfully selected and used to simplify the PLSR model established on the full wavelength range. This simplified model achieved a high Rp^2 (0.909) and a low root mean square error of prediction (0.035), which can be useful for the design of multispectral imaging systems. © 2017 Institute of Food Technologists®.

  15. Bit Grooming: statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+)

    NASA Astrophysics Data System (ADS)

    Zender, Charles S.

    2016-09-01

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80% and 5-65%, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
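
    The core of the algorithm is a pair of bitmasks applied to the IEEE significand. A minimal sketch for 1-D float32 arrays is shown below; it is not the NCO implementation, and nsb (the number of explicit significand bits kept, roughly ceil(3.32 x decimal digits)) is the caller's choice. Alternating shave and set on consecutive values is what removes the low bias of pure bit shaving.

      import numpy as np

      def bit_groom(values, nsb):
          # Keep nsb explicit significand bits (float32 has 23); groom the rest.
          bits = np.ascontiguousarray(values, dtype=np.float32).view(np.uint32).copy()
          tail = np.uint32((1 << (23 - nsb)) - 1)    # mask of trailing bits to groom
          bits[0::2] &= ~tail                        # shave: zero the trailing bits
          bits[1::2] |= tail                         # set: one the trailing bits
          return bits.view(np.float32)

      # ~3 decimal digits of precision -> nsb = ceil(3 * log2(10)) = 10 bits
      data = np.array([1.234567, 2.345678, 3.456789, 4.567890], dtype=np.float32)
      print(bit_groom(data, nsb=10))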

  16. Air Bearings Machined On Ultra Precision, Hydrostatic CNC-Lathe

    NASA Astrophysics Data System (ADS)

    Knol, Pierre H.; Szepesi, Denis; Deurwaarder, Jan M.

    1987-01-01

    Micromachining of precision elements requires an adequate machine concept to meet the high demands of surface finish and dimensional and shape accuracy. The Hembrug ultra precision lathes have been exclusively designed with hydrostatic principles for the main spindle and guideways. This concept is explained, together with some major advantages of hydrostatics compared with aerostatics in universal micromachining applications. Hembrug originally developed the conventional Mikroturn ultra precision facing lathes for diamond turning of computer memory discs. This first generation of machines was followed by advanced computer numerically controlled types for machining of complex precision workpieces. One of these parts, an aerostatic bearing component, has been successfully machined on the Super-Mikroturn CNC. A case study of air-bearing machining confirms that a good micromachining result does not depend on machine performance alone, but also on the technology applied.

  17. [Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].

    PubMed

    Krimmel, M; Kluba, S; Dietz, K; Reinert, S

    2005-03-01

    The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations.

  18. How Large Should a Statistical Sample Be?

    ERIC Educational Resources Information Center

    Menil, Violeta C.; Ye, Ruili

    2012-01-01

    This study serves as a teaching aid for teachers of introductory statistics. The aim of this study was limited to determining various sample sizes when estimating population proportion. Tables on sample sizes were generated using a C++ program, which depends on population size, degree of precision or error level, and confidence…
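
    The calculation behind such tables is the standard sample-size formula for a proportion with a finite-population correction; a short sketch (a generic textbook formula, not the authors' C++ program) is given below.

      import math

      def sample_size(N, E, z=1.96, p=0.5):
          # n0: infinite-population size; then correct for population size N
          n0 = z**2 * p * (1.0 - p) / E**2
          return math.ceil(n0 / (1.0 + (n0 - 1.0) / N))

      print(sample_size(N=5000, E=0.05))   # about 357 at 95% confidence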

  19. Visualizing Teacher Education as a Complex System: A Nested Simplex System Approach

    ERIC Educational Resources Information Center

    Ludlow, Larry; Ell, Fiona; Cochran-Smith, Marilyn; Newton, Avery; Trefcer, Kaitlin; Klein, Kelsey; Grudnoff, Lexie; Haigh, Mavis; Hill, Mary F.

    2017-01-01

    Our purpose is to provide an exploratory statistical representation of initial teacher education as a complex system comprised of dynamic influential elements. More precisely, we reveal what the system looks like for differently-positioned teacher education stakeholders based on our framework for gathering, statistically analyzing, and graphically…

  20. Dynamic comparisons of piezoelectric ejecta diagnostics

    NASA Astrophysics Data System (ADS)

    Buttler, W. T.; Zellner, M. B.; Olson, R. T.; Rigg, P. A.; Hixson, R. S.; Hammerberg, J. E.; Obst, A. W.; Payton, J. R.; Iverson, A.; Young, J.

    2007-03-01

    We investigate the quantitative reliability and precision of three different piezoelectric technologies for measuring ejected areal mass from shocked surfaces. Specifically, we performed ejecta measurements on Sn shocked at two pressures, P ≈ 215 and 235 kbar. The shock in the Sn was created by launching an impactor with a powder gun. We self-compare and cross-compare these measurements to assess the ability of these probes to precisely determine the areal mass ejected from a shocked surface. We demonstrate the precision of each technology to be good, with variabilities on the order of ±10%. We also discuss their relative accuracy.

  1. Precise automatic differential stellar photometry

    NASA Technical Reports Server (NTRS)

    Young, Andrew T.; Genet, Russell M.; Boyd, Louis J.; Borucki, William J.; Lockwood, G. Wesley

    1991-01-01

    The factors limiting the precision of differential stellar photometry are reviewed. Errors due to variable atmospheric extinction can be reduced to below 0.001 mag at good sites by utilizing the speed of robotic telescopes. Existing photometric systems produce aliasing errors, which are several millimagnitudes in general but may be reduced to about a millimagnitude in special circumstances. Conventional differential photometry neglects several other important effects, which are discussed in detail. If all of these are properly handled, it appears possible to do differential photometry of variable stars with an overall precision of 0.001 mag using ground-based robotic telescopes.

  2. Prevention and control of emergent infectious disease with high specific antigen sensor.

    PubMed

    Zhang, Hongzhe; Zhang, Shanshan; Liu, Nan

    2017-11-01

    This study aims to evaluate the application of a new type of high-specificity antigen sensor for detecting the viruses responsible for sudden infectious diseases. An influenza A (H1N1) virus immunosensor was used for the respective determination of six kinds of antigens (H1N1, H3N2 viral protein, HA protein of H7N9, influenza B virus, adenovirus, and EV71 virus) at the same dilution on a Screen Printed Carbon Electrode (SPCE), so as to test the specificity of the detection method. In addition, various batches of chick embryo allantoic fluid dilution simulation samples were tested for recovery (accuracy), repeatability (precision), and stability. The results were as follows: the linear equation was y = 121.33x + 168; the slope of the linear equation, 121.33 nA/HA unit, represents the sensitivity; the correlation coefficient was R² = 0.9921 > 0.90. Using Statistical Analysis System (SAS) software, we found that the W values of the seven data sets under the Shapiro-Wilk test were 0.853, 0.991, 0.901, 0.906, 0.825, 0.974, and 0.992, respectively; the P values were 0.247, 0.831, 0.386, 0.405, 0.174, 0.691, and 0.821, respectively, all greater than 0.05, indicating that normality was met. The homogeneity-of-variance test gave F = 2.44, P = 0.0775 > 0.05, indicating that homogeneity of variance was met. The parametric test results were F = 19114.0, P < 0.0001, indicating significant differences between the testing data of the seven groups. The recovery rate of the electrochemical immunosensor was 80-110%. Relative Standard Deviation (RSD) values in the repeatability (precision) test of the H1N1 influenza virus electrochemical immunosensor were 7.74%, 3.54%, and 2.01%, all below 10%. The signal response of the H1N1 electrochemical immunosensor still maintained more than 85% of the original signal within 30 days of storage. In conclusion, the H1N1 electrochemical immunosensor has good specificity, and the test results are not affected by other viruses of the same type. It also has good accuracy, enabling accurate determination of influenza A (H1N1) virus in actual detection, and it meets the precision requirements of A (H1N1) influenza virus detection. Therefore, H1N1 electrochemical immunosensors can be used in actual detection with good stability.
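
    The reported analysis chain (per-group Shapiro-Wilk normality tests, a homogeneity-of-variance test, then a one-way parametric comparison) can be reproduced with standard tools; the sketch below uses SciPy rather than SAS, with hypothetical sensor responses standing in for the seven antigen groups.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      groups = [rng.normal(loc=mu, scale=5.0, size=10)
                for mu in (170, 300, 420, 560, 690, 820, 950)]   # nA, hypothetical

      for g in groups:
          W, p = stats.shapiro(g)             # normality within each group (P > 0.05 ok)

      stat, p_var = stats.levene(*groups)     # homogeneity of variance across groups
      F, p_anova = stats.f_oneway(*groups)    # one-way ANOVA for group differences
      print(F, p_anova)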

  3. The use of neural network technology to model swimming performance.

    PubMed

    Silva, António José; Costa, Aldo Manuel; Oliveira, Paulo Moura; Reis, Victor Machado; Saavedra, José; Perl, Jurgen; Rouboa, Abel; Marinho, Daniel Almeida

    2007-01-01

    To identify the factors which are able to explain performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematical methods through artificial neural networks (multilayer perceptrons), and to assess the precision of the neural network models in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network (multilayer perceptron) with three neurons in a single hidden layer was used. The prognostic precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach to the resolution of complex problems such as performance modelling and talent identification in swimming and, possibly, in a wide variety of sports. Key points: (1) The non-linear analysis resulting from the use of feed-forward neural networks allowed the development of four performance models. (2) The mean difference between the true and estimated results of each of the four neural network models was low. (3) The neural network tool can be a good approach to performance modelling as an alternative to standard statistical models that presume well-defined distributions and independence among all inputs. (4) The use of neural networks in sports science allowed us to create very realistic models for swimming performance prediction based on previously selected criteria related to the dependent variable (performance).
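
    The architecture described (a multilayer perceptron with three neurons in a single hidden layer) is easy to sketch; the example below uses scikit-learn with synthetic predictors in place of the swimmers' test-battery variables, so all data and dimensions are illustrative assumptions.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(2)
      X = rng.normal(size=(138, 8))        # stand-ins for kinanthropometric etc. variables
      y = 300 + 10 * X[:, 0] - 5 * X[:, 3] + rng.normal(scale=2.0, size=138)  # time (s)

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(3,),  # one hidden layer, 3 neurons
                                         max_iter=5000, random_state=0))
      model.fit(X, y)
      print(model.predict(X[:5]))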

  4. Tests of proton structure functions using leptons at CDF and D0: W charge asymmetry and Drell-Yan production. Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbaro, P. de

    1995-06-13

    High statistics W charge asymmetry measurements at the Tevatron p̄p collider significantly constrain the u and d quark distributions, and specifically the slope of d(x)/u(x) in the x range 0.007 to 0.27. The authors present measurements of the lepton charge asymmetry as a function of lepton rapidity, A(y_l), at √s = 1.8 TeV for |y_l| < 2.0, for W decays to electrons and muons recorded by the CDF detector during the 1992-93 run (≈ 20 pb⁻¹) and the first ≈ 50 pb⁻¹ of data from the 1994-95 run. These precise data make possible further discrimination between sets of modern parton distributions. In particular, it is found that the most recent parton distributions, which included the CDF 1992-93 W asymmetry data in their fits (MRSA, CTEQ3M and GRV94), are still in good agreement with the more precise data from the 1994-95 run. W charge asymmetry results from D0, based on ≈ 6.5 pb⁻¹ of data from the 1992-93 run and ≈ 29.7 pb⁻¹ of data from the 1994-95 run using W decays to muons, are also presented and are found to be consistent with the CDF results. In addition, the authors present a preliminary measurement of the Drell-Yan cross section by CDF using a dielectron sample collected during the 1993-94 run (≈ 20 pb⁻¹) and a high mass dimuon sample from the combined 1993-94 and 1994-95 runs (≈ 70 pb⁻¹). The measurement is in good agreement with predictions using the most recent PDFs in a dilepton mass range between 11 and 350 GeV/c².

  5. The boundary is mixed

    NASA Astrophysics Data System (ADS)

    Bianchi, Eugenio; Haggard, Hal M.; Rovelli, Carlo

    2017-08-01

    We show that in Oeckl's boundary formalism the boundary vectors that do not have a tensor form represent, in a precise sense, statistical states. Therefore the formalism incorporates quantum statistical mechanics naturally. We formulate general-covariant quantum statistical mechanics in this language. We illustrate the formalism by showing how it accounts for the Unruh effect. We observe that the distinction between pure and mixed states weakens in the general covariant context, suggesting that local gravitational processes are naturally statistical without a sharp quantal versus probabilistic distinction.

  6. Characterizations of linear sufficient statistics

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Reoner, R.; Decell, H. P., Jr.

    1977-01-01

    Necessary and sufficient conditions are established for a surjective bounded linear operator T from a Banach space X to a Banach space Y to be a sufficient statistic for a dominated family of probability measures defined on the Borel sets of X. These results are applied to characterize linear sufficient statistics for families of the exponential type, including as special cases the Wishart and multivariate normal distributions. The latter result is used to establish precisely which procedures for sampling from a normal population have the property that the sample mean is a sufficient statistic.

  7. An Evaluation of Different Statistical Targets for Assembling Parallel Forms in Item Response Theory

    PubMed Central

    Ali, Usama S.; van Rijn, Peter W.

    2015-01-01

    Assembly of parallel forms is an important step in the test development process. Therefore, choosing a suitable theoretical framework to generate well-defined test specifications is critical. The performance of different statistical targets of test specifications using the test characteristic curve (TCC) and the test information function (TIF) was investigated. Test length, the number of test forms, and content specifications are considered as well. The TCC target results in forms that are parallel in difficulty, but not necessarily in terms of precision. Conversely, test forms created using a TIF target are parallel in terms of precision, but not necessarily in terms of difficulty. As the focus is sometimes on either TIF or TCC alone, differences in either difficulty or precision can arise. Differences in difficulty can be mitigated by equating, but differences in precision cannot. In a series of simulations using a real item bank, the two-parameter logistic model, and mixed integer linear programming for automated test assembly, these differences were found to be quite substantial. When both TIF and TCC are combined into one target, with manipulation of their relative importance, these differences can be made to disappear.
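
    For the two-parameter logistic model the two assembly targets have simple closed forms: the TCC is the sum of the item characteristic curves and the TIF is the sum of the item informations. A minimal sketch with made-up item parameters:

      import numpy as np

      def tcc_tif(a, b, theta):
          # 2PL: P_i(theta) = 1 / (1 + exp(-a_i * (theta - b_i)))
          a = np.asarray(a, dtype=float)[:, None]
          b = np.asarray(b, dtype=float)[:, None]
          P = 1.0 / (1.0 + np.exp(-a * (theta - b)))
          tcc = P.sum(axis=0)                        # expected raw score (difficulty target)
          tif = (a**2 * P * (1.0 - P)).sum(axis=0)   # test information (precision target)
          return tcc, tif

      theta = np.linspace(-3, 3, 61)
      tcc, tif = tcc_tif(a=[1.2, 0.8, 1.5], b=[-0.5, 0.0, 0.7], theta=theta)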

  8. Segmentation precision of abdominal anatomy for MRI-based radiotherapy

    PubMed Central

    Noel, Camille E.; Zhu, Fan; Lee, Andrew Y.; Yanle, Hu; Parikh, Parag J.

    2014-01-01

    The limited soft tissue visualization provided by computed tomography, the standard imaging modality for radiotherapy treatment planning and daily localization, has motivated studies on the use of magnetic resonance imaging (MRI) for better characterization of treatment sites, such as the prostate and head and neck. However, no studies have been conducted on MRI-based segmentation for the abdomen, a site that could greatly benefit from enhanced soft tissue targeting. We investigated the interobserver and intraobserver precision in segmentation of abdominal organs on MR images for treatment planning and localization. Manual segmentation of 8 abdominal organs was performed by 3 independent observers on MR images acquired from 14 healthy subjects. Observers repeated segmentation 4 separate times for each image set. Interobserver and intraobserver contouring precision was assessed by computing 3-dimensional overlap (Dice coefficient [DC]) and distance to agreement (Hausdorff distance [HD]) of segmented organs. The mean and standard deviation of intraobserver and interobserver DC and HD values were DC_intraobserver = 0.89 ± 0.12, HD_intraobserver = 3.6 ± 1.5 mm, DC_interobserver = 0.89 ± 0.15, and HD_interobserver = 3.2 ± 1.4 mm. Overall, metrics indicated good interobserver/intraobserver precision (mean DC > 0.7, mean HD < 4 mm). Results suggest that MRI offers good segmentation precision for abdominal sites. These findings support the utility of MRI for abdominal planning and localization, as emerging MRI technologies, techniques, and onboard imaging devices are beginning to enable MRI-based radiotherapy. PMID:24726701
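
    Both agreement metrics used here are easy to compute from segmentation masks; the sketch below (generic definitions, not the authors' pipeline) computes the Dice coefficient from boolean masks and a symmetric Hausdorff distance from contour point sets.

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def dice(mask_a, mask_b):
          # DC = 2 |A ∩ B| / (|A| + |B|)
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def hausdorff(points_a, points_b):
          # Symmetric Hausdorff distance between two point sets (e.g. contour voxels, mm)
          return max(directed_hausdorff(points_a, points_b)[0],
                     directed_hausdorff(points_b, points_a)[0])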

  9. Segmentation precision of abdominal anatomy for MRI-based radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noel, Camille E.; Zhu, Fan; Lee, Andrew Y.

    2014-10-01

    The limited soft tissue visualization provided by computed tomography, the standard imaging modality for radiotherapy treatment planning and daily localization, has motivated studies on the use of magnetic resonance imaging (MRI) for better characterization of treatment sites, such as the prostate and head and neck. However, no studies have been conducted on MRI-based segmentation for the abdomen, a site that could greatly benefit from enhanced soft tissue targeting. We investigated the interobserver and intraobserver precision in segmentation of abdominal organs on MR images for treatment planning and localization. Manual segmentation of 8 abdominal organs was performed by 3 independent observers on MR images acquired from 14 healthy subjects. Observers repeated segmentation 4 separate times for each image set. Interobserver and intraobserver contouring precision was assessed by computing 3-dimensional overlap (Dice coefficient [DC]) and distance to agreement (Hausdorff distance [HD]) of segmented organs. The mean and standard deviation of intraobserver and interobserver DC and HD values were DC_intraobserver = 0.89 ± 0.12, HD_intraobserver = 3.6 ± 1.5 mm, DC_interobserver = 0.89 ± 0.15, and HD_interobserver = 3.2 ± 1.4 mm. Overall, metrics indicated good interobserver/intraobserver precision (mean DC > 0.7, mean HD < 4 mm). Results suggest that MRI offers good segmentation precision for abdominal sites. These findings support the utility of MRI for abdominal planning and localization, as emerging MRI technologies, techniques, and onboard imaging devices are beginning to enable MRI-based radiotherapy.

  10. A measurement of CMB cluster lensing with SPT and DES year 1 data

    NASA Astrophysics Data System (ADS)

    Baxter, E. J.; Raghunathan, S.; Crawford, T. M.; Fosalba, P.; Hou, Z.; Holder, G. P.; Omori, Y.; Patil, S.; Rozo, E.; Abbott, T. M. C.; Annis, J.; Aylor, K.; Benoit-Lévy, A.; Benson, B. A.; Bertin, E.; Bleem, L.; Buckley-Geer, E.; Burke, D. L.; Carlstrom, J.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Chang, C. L.; Cho, H.-M.; Crites, A. T.; Crocce, M.; Cunha, C. E.; da Costa, L. N.; D'Andrea, C. B.; Davis, C.; de Haan, T.; Desai, S.; Dietrich, J. P.; Dobbs, M. A.; Dodelson, S.; Doel, P.; Drlica-Wagner, A.; Estrada, J.; Everett, W. B.; Fausti Neto, A.; Flaugher, B.; Frieman, J.; García-Bellido, J.; George, E. M.; Gaztanaga, E.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Halverson, N. W.; Harrington, N. L.; Hartley, W. G.; Holzapfel, W. L.; Honscheid, K.; Hrubes, J. D.; Jain, B.; James, D. J.; Jarvis, M.; Jeltema, T.; Knox, L.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Lee, A. T.; Leitch, E. M.; Li, T. S.; Lima, M.; Luong-Van, D.; Manzotti, A.; March, M.; Marrone, D. P.; Marshall, J. L.; Martini, P.; McMahon, J. J.; Melchior, P.; Menanteau, F.; Meyer, S. S.; Miller, C. J.; Miquel, R.; Mocanu, L. M.; Mohr, J. J.; Natoli, T.; Nord, B.; Ogando, R. L. C.; Padin, S.; Plazas, A. A.; Pryke, C.; Rapetti, D.; Reichardt, C. L.; Romer, A. K.; Roodman, A.; Ruhl, J. E.; Rykoff, E.; Sako, M.; Sanchez, E.; Sayre, J. T.; Scarpine, V.; Schaffer, K. K.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Shirokoff, E.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Staniszewski, Z.; Stark, A.; Story, K.; Suchyta, E.; Tarle, G.; Thomas, D.; Troxel, M. A.; Vanderlinde, K.; Vieira, J. D.; Walker, A. R.; Williamson, R.; Zhang, Y.; Zuntz, J.

    2018-05-01

    Clusters of galaxies gravitationally lens the cosmic microwave background (CMB) radiation, resulting in a distinct imprint in the CMB on arcminute scales. Measurement of this effect offers a promising way to constrain the masses of galaxy clusters, particularly those at high redshift. We use CMB maps from the South Pole Telescope Sunyaev-Zel'dovich (SZ) survey to measure the CMB lensing signal around galaxy clusters identified in optical imaging from first year observations of the Dark Energy Survey. The cluster catalogue used in this analysis contains 3697 members with a mean redshift of z̄ = 0.45. We detect lensing of the CMB by the galaxy clusters at 8.1σ significance. Using the measured lensing signal, we constrain the amplitude of the relation between cluster mass and optical richness to roughly 17 per cent precision, finding good agreement with recent constraints obtained with galaxy lensing. The error budget is dominated by statistical noise but includes significant contributions from systematic biases due to the thermal SZ effect and cluster miscentring.

  11. A Measurement of CMB Cluster Lensing with SPT and DES Year 1 Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baxter, E.J.; et al.

    2017-08-03

    Clusters of galaxies gravitationally lens the cosmic microwave background (CMB) radiation, resulting in a distinct imprint in the CMB on arcminute scales. Measurement of this effect offers a promising way to constrain the masses of galaxy clusters, particularly those at high redshift. We use CMB maps from the South Pole Telescope Sunyaev-Zel'dovich (SZ) survey to measure the CMB lensing signal around galaxy clusters identified in optical imaging from first year observations of the Dark Energy Survey. We detect lensing of the CMB by the galaxy clusters at 6.5σ significance. Using the measured lensing signal, we constrain the amplitude of the relation between cluster mass and optical richness to roughly 20% precision, finding good agreement with recent constraints obtained with galaxy lensing. The error budget is dominated by statistical noise but includes significant contributions from systematic biases due to the thermal SZ effect and cluster miscentering.

  12. Development and in house validation of a new thermogravimetric method for water content analysis in soft brown sugar.

    PubMed

    Ducat, Giseli; Felsner, Maria L; da Costa Neto, Pedro R; Quináia, Sueli P

    2015-06-15

    Recently the use of brown sugar has increased due to its nutritional characteristics, thus requiring more rigorous quality control. A method for water content analysis in soft brown sugar was developed for the first time by TG/DTA with the application of different statistical tests. The results of the optimization study suggest that a heating rate of 5 °C min⁻¹ and an alumina sample holder improve the efficiency of the drying process. The validation study showed that thermogravimetry presents good accuracy and precision for water content analysis in soft brown sugar samples. This technique offers advantages over other analytical methods, as it does not use toxic or costly reagents or solvents, needs no sample preparation, and allows identification of the temperature at which water is completely eliminated relative to other volatile degradation products. This is an important advantage over the official method (loss on drying). Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Dichotomising continuous data while retaining statistical power using a distributional approach.

    PubMed

    Peacock, J L; Sauzet, O; Ewings, S M; Kerry, S M

    2012-11-20

    Dichotomisation of continuous data is known to be hugely problematic because information is lost, power is reduced and relationships may be obscured or changed. However, not only are differences in means difficult for clinicians to interpret, but thresholds also occur in many areas of medical practice and cannot be ignored. In recognition of both the problems of dichotomisation and the ways in which it may be useful clinically, we have used a distributional approach to derive a difference in proportions with a 95% CI that retains the precision and the power of the CI for the equivalent difference in means. In this way, we propose a dual approach that analyses continuous data using both means and proportions to replace dichotomisation alone and that may be useful in certain situations. We illustrate this work with examples and simulations that show good performance of the parametric approach under standard distributional assumptions from our own research and from the literature. Copyright © 2012 John Wiley & Sons, Ltd.
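
    The central idea, estimating a proportion below a clinical cutpoint from the fitted normal distribution rather than by dichotomising individual observations, can be sketched as follows. The bootstrap confidence interval here is a generic stand-in for the authors' distributional interval, and all data are hypothetical.

      import numpy as np
      from scipy.stats import norm

      def prop_below(x, cut):
          # Distribution-based estimate of P(X < cut) under normality
          return norm.cdf((cut - np.mean(x)) / np.std(x, ddof=1))

      rng = np.random.default_rng(3)
      g1 = rng.normal(3.2, 0.5, 200)   # hypothetical continuous outcome, group 1
      g2 = rng.normal(3.0, 0.5, 200)   # group 2
      cut = 2.5

      diff = prop_below(g2, cut) - prop_below(g1, cut)
      boot = [prop_below(rng.choice(g2, g2.size), cut) -
              prop_below(rng.choice(g1, g1.size), cut) for _ in range(2000)]
      lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% CI for the difference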

  14. Ratios of N15/C12 and He4/C12 inclusive electroproduction cross sections in the nucleon resonance region

    NASA Astrophysics Data System (ADS)

    Bosted, P. E.; Fersch, R.; Adams, G.; Amarian, M.; Anefalos, S.; Anghinolfi, M.; Asryan, G.; Avakian, H.; Bagdasaryan, H.; Baillie, N.; Ball, J. P.; Baltzell, N. A.; Barrow, S.; Batourine, V.; Battaglieri, M.; Beard, K.; Bedlinskiy, I.; Bektasoglu, M.; Bellis, M.; Benmouna, N.; Biselli, A. S.; Bonner, B. E.; Bouchigny, S.; Boiarinov, S.; Bradford, R.; Branford, D.; Brooks, W. K.; Bültmann, S.; Burkert, V. D.; Butuceanu, C.; Calarco, J. R.; Careccia, S. L.; Carman, D. S.; Carnahan, B.; Cazes, A.; Chen, S.; Cole, P. L.; Collins, P.; Coltharp, P.; Cords, D.; Corvisiero, P.; Crabb, D.; Crannell, H.; Crede, V.; Cummings, J. P.; de Masi, R.; de Vita, R.; de Sanctis, E.; Degtyarenko, P. V.; Denizli, H.; Dennis, L.; Deur, A.; Djalali, C.; Dodge, G. E.; Donnelly, J.; Doughty, D.; Dragovitsch, P.; Dugger, M.; Dharmawardane, K. V.; Dytman, S.; Dzyubak, O. P.; Egiyan, H.; Egiyan, K. S.; Elouadrhiri, L.; Eugenio, P.; Fatemi, R.; Fedotov, G.; Feuerbach, R. J.; Forest, T. A.; Fradi, A.; Funsten, H.; Garçon, M.; Gavalian, G.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Goetz, J. T.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guidal, M.; Guillo, M.; Guler, N.; Guo, L.; Gyurjyan, V.; Hadjidakis, C.; Hafidi, K.; Hakobyan, R. S.; Hardie, J.; Heddle, D.; Hersman, F. W.; Hicks, K.; Hleiqawi, I.; Holtrop, M.; Huertas, M.; Hyde-Wright, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Isupov, E. L.; Ito, M. M.; Jenkins, D.; Jo, H. S.; Joo, K.; Juengst, H. G.; Kalantarians, N.; Keith, C.; Kellie, J. D.; Khandaker, M.; Kim, K. Y.; Kim, K.; Kim, W.; Klein, A.; Klein, F. J.; Klusman, M.; Kossov, M.; Kramer, L. H.; Kubarovsky, V.; Kuhn, J.; Kuhn, S. E.; Kuleshov, S. V.; Lachniet, J.; Laget, J. M.; Langheinrich, J.; Lawrence, D.; Li, Ji; Lima, A. C. S.; Livingston, K.; Lu, H.; Lukashin, K.; MacCormick, M.; Markov, N.; McAleer, S.; McKinnon, B.; McNabb, J. W. C.; Mecking, B. A.; Mestayer, M. D.; Meyer, C. A.; Mibe, T.; Mikhailov, K.; Minehart, R.; Mirazita, M.; Miskimen, R.; Mokeev, V.; Morand, L.; Morrow, S. A.; Moteabbed, M.; Mueller, J.; Mutchler, G. S.; Nadel-Turonski, P.; Nasseripour, R.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Niczyporuk, B. B.; Niroula, M. R.; Niyazov, R. A.; Nozar, M.; O'Rielly, G. V.; Osipenko, M.; Ostrovidov, A. I.; Park, K.; Pasyuk, E.; Paterson, C.; Philips, S. A.; Pierce, J.; Pivnyuk, N.; Pocanic, D.; Pogorelko, O.; Polli, E.; Pozdniakov, S.; Preedom, B. M.; Price, J. W.; Prok, Y.; Protopopescu, D.; Qin, L. M.; Raue, B. A.; Riccardi, G.; Ricco, G.; Ripani, M.; Rosner, G.; Rossi, P.; Rowntree, D.; Rubin, P. D.; Sabatié, F.; Salgado, C.; Santoro, J. P.; Sapunenko, V.; Schumacher, R. A.; Serov, V. S.; Sharabian, Y. G.; Shaw, J.; Shvedunov, N. V.; Skabelin, A. V.; Smith, E. S.; Smith, L. C.; Sober, D. I.; Stavinsky, A.; Stepanyan, S. S.; Stepanyan, S.; Stokes, B. E.; Stoler, P.; Strauch, S.; Suleiman, R.; Taiuti, M.; Taylor, S.; Tedeschi, D. J.; Thoma, U.; Tkabladze, A.; Tkachenko, S.; Todor, L.; Ungaro, M.; Vineyard, M. F.; Vlassov, A. V.; Weinstein, L. B.; Weygand, D. P.; Williams, M.; Wolin, E.; Wood, M. H.; Yegneswaran, A.; Yun, J.; Zana, L.; Zhang, J.; Zhao, B.; Zhao, Z.

    2008-07-01

    The (W, Q²) dependence of the ratio of inclusive electron scattering cross sections for N15/C12 was determined in the kinematic ranges 0.8

  15. Imputation of missing data in time series for air pollutants

    NASA Astrophysics Data System (ADS)

    Junger, W. L.; Ponce de Leon, A.

    2015-02-01

    Missing data are major concerns in epidemiological studies of the health effects of environmental air pollutants. This article presents an imputation-based method suitable for multivariate time series data, which uses the EM algorithm under the assumption of a normal distribution. Different approaches are considered for filtering the temporal component. A simulation study was performed to assess the validity and performance of the proposed method in comparison with some frequently used methods. Simulations showed that when the amount of missing data was as low as 5%, complete-data analysis yielded satisfactory results regardless of the generating mechanism of the missing data, whereas validity began to degenerate when the proportion of missing values exceeded 10%. The proposed imputation method exhibited good accuracy and precision in different settings with respect to the patterns of missing observations. Most of the imputations yielded valid results, even when data were missing not at random. The methods proposed in this study are implemented as a package called mtsdi for the statistical software system R.
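
    The general pattern (EM-style multivariate imputation under a normality assumption, validated by masking known values) can be sketched in Python; note that this uses scikit-learn's iterative imputer as a stand-in, not the mtsdi R package, and it ignores the temporal filtering step the authors describe.

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      rng = np.random.default_rng(4)
      X = rng.normal(size=(365, 3)).cumsum(axis=0)   # hypothetical pollutant series
      mask = rng.random(X.shape) < 0.05              # ~5% of values set to missing
      X_obs = np.where(mask, np.nan, X)

      X_imp = IterativeImputer(max_iter=20, random_state=0).fit_transform(X_obs)
      rmse = np.sqrt(np.mean((X_imp[mask] - X[mask]) ** 2))   # imputation error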

  16. Raman scattering measurements in flames using a tunable KrF excimer laser

    NASA Technical Reports Server (NTRS)

    Wehrmeyer, Joseph A.; Cheng, Tsarng-Sheng; Pitz, Robert W.

    1992-01-01

    A narrow-band tunable KrF excimer laser is used as a spontaneous vibrational Raman scattering source to demonstrate that single-pulse concentration and temperature measurements, with only minimal fluorescence interference, are possible for all major species (O2, N2, H2O, and H2) at all stoichiometries (fuel-lean to fuel-rich) of H2-air flames. Photon-statistics-limited precisions of these instantaneous, spatially resolved single-pulse measurements are typically 5 percent, based on the relative standard deviations of single-pulse probability distributions. In addition to the single-pulse N2 Stokes/anti-Stokes ratio temperature measurement technique, a time-averaged temperature measurement technique is presented that matches the N2 Stokes Raman spectrum to theoretical spectra by using a single intermediate-state frequency to account for near-resonance enhancement. Raman flame spectra of CH4-air flames are presented that have good signal-to-noise characteristics and show promise for single-pulse UV Raman measurements in hydrocarbon flames.
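
    The Stokes/anti-Stokes temperature technique rests on a standard relation: in its common nu^4 intensity form, I_aS/I_S = ((nu0 + nu)/(nu0 - nu))^4 exp(-h c nu / kB T), which can be inverted for T. The sketch below assumes this form and illustrative values (KrF excitation near 248 nm, the N2 vibrational shift of about 2331 cm^-1); the measured ratio is hypothetical.

      import numpy as np

      H, C, KB = 6.62607015e-34, 2.99792458e10, 1.380649e-23   # J s, cm/s, J/K

      def raman_temperature(ratio_as_s, nu0, nu_vib):
          # Invert I_aS/I_S = ((nu0+nu)/(nu0-nu))**4 * exp(-h c nu / kB T) for T
          factor = ((nu0 + nu_vib) / (nu0 - nu_vib)) ** 4
          return H * C * nu_vib / (KB * np.log(factor / ratio_as_s))

      print(raman_temperature(ratio_as_s=0.35, nu0=4.032e4, nu_vib=2331.0))  # ~2200 K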

  17. Flavour production by Saprochaete and Geotrichum yeasts and their close relatives.

    PubMed

    Grondin, Eric; Shum Cheong Sing, Alain; James, Steve; Nueno-Palop, Carmen; François, Jean Marie; Petit, Thomas

    2017-12-15

    In this study, a total of 30 yeast strains belonging to the genera Dipodascus, Galactomyces, Geotrichum, Magnusiomyces and Saprochaete were investigated for volatile organic compound production using HS-SPME-GC/MS analysis. The resulting flavour profiles, comprising 36 esters and 6 alcohols, were statistically evaluated by cluster and PCA analysis. Two main groups of strains emerged from this analysis: a group with a low ability to produce flavour and a group producing mainly alcohols. Two other minor groups of strains, including Saprochaete suaveolens, Geotrichum marinum and Saprochaete gigas, diverged significantly from the main groups precisely because they showed a good ability to produce a large diversity of esters. In particular, we found that the Saprochaete genus (and its close relatives) was characterized by a high production of unsaturated esters arising from partial catabolism of branched-chain amino acids. These esters were produced by eight phylogenetically related strains of the Saprochaete genus. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. A Multi-Level Geographical Study of Italian Political Elections from Twitter Data

    PubMed Central

    Caldarelli, Guido; Chessa, Alessandro; Pammolli, Fabio; Pompa, Gabriele; Puliga, Michelangelo; Riccaboni, Massimo; Riotta, Gianni

    2014-01-01

    In this paper we present an analysis of the behavior of Italian Twitter users during national political elections. We monitor the volumes of the tweets related to the leaders of the various political parties and we compare them to the elections results. Furthermore, we study the topics that are associated with the co-occurrence of two politicians in the same tweet. We cannot conclude, from a simple statistical analysis of tweet volume and their time evolution, that it is possible to precisely predict the election outcome (or at least not in our case of study that was characterized by a “too-close-to-call” scenario). On the other hand, we found that the volume of tweets and their change in time provide a very good proxy of the final results. We present this analysis both at a national level and at smaller levels, ranging from the regions composing the country to macro-areas (North, Center, South). PMID:24802857

  19. Langevin equation in systems with also negative temperatures

    NASA Astrophysics Data System (ADS)

    Baldovin, Marco; Puglisi, Andrea; Vulpiani, Angelo

    2018-04-01

    We discuss how to derive a Langevin equation (LE) in nonstandard systems, i.e. when the kinetic part of the Hamiltonian is not the usual quadratic function. This generalization also allows us to consider cases with negative absolute temperature. We first give some phenomenological arguments suggesting the shape of the viscous drift, replacing the usual linear viscous damping, and its relation to the diffusion coefficient modulating the white noise term. As a second step, we implement a procedure to reconstruct the drift and the diffusion terms of the LE from the time series of the momentum of a heavy particle embedded in a large Hamiltonian system. The results of our reconstruction are in good agreement with the phenomenological arguments. Applying the method to systems with negative temperature, we observe that in this case too there is a suitable LE, obtained with a precise protocol, able to properly reproduce the statistical features of the slow variables. In other words, even in this context, systems with negative temperature do not show any pathology.
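
    The reconstruction step the authors describe amounts to estimating the first two conditional Kramers-Moyal coefficients from the momentum time series. A generic sketch (binning the series and averaging increments, not the authors' exact protocol) follows.

      import numpy as np

      def km_coefficients(p, dt, nbins=40, min_count=10):
          # drift(p) ~ <dp | p> / dt ; diffusion(p) ~ <dp^2 | p> / (2 dt)
          dp = np.diff(p)
          edges = np.linspace(p.min(), p.max(), nbins + 1)
          which = np.clip(np.digitize(p[:-1], edges) - 1, 0, nbins - 1)
          drift = np.full(nbins, np.nan)
          diff = np.full(nbins, np.nan)
          for k in range(nbins):
              sel = which == k
              if sel.sum() >= min_count:        # require enough visits to the bin
                  drift[k] = dp[sel].mean() / dt
                  diff[k] = (dp[sel] ** 2).mean() / (2.0 * dt)
          centers = 0.5 * (edges[:-1] + edges[1:])
          return centers, drift, diff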

  20. A High-Precision Counter Using the DSP Technique

    DTIC Science & Technology

    2004-09-01

    The DSP is not fast enough to process all the 1-second samples, and the cache memory is not sufficient to store all the sampling data, so we cut the... sampling number in a cycle is not good enough to achieve an accuracy below 2×10⁻¹¹. For this reason, a correlation operation is performed for...

  1. Precision determination of the πN scattering lengths and the charged πNN coupling constant

    NASA Astrophysics Data System (ADS)

    Ericson, T. E. O.; Loiseau, B.; Thomas, A. W.

    2000-01-01

    We critically evaluate the isovector GMO sum rule for the charged πNN coupling constant using recent precision data from π⁻p and π⁻d atoms and with careful attention to systematic errors. From the π⁻d scattering length we deduce the pion-proton scattering lengths (aπ⁻p + aπ⁻n)/2 = (−20 ± 6 (statistical) ± 10 (systematic)) × 10⁻⁴ mπ⁻¹ and (aπ⁻p − aπ⁻n)/2 = (903 ± 14) × 10⁻⁴ mπ⁻¹. From this a direct evaluation gives g²c(GMO)/4π = 14.20 ± 0.07 (statistical) ± 0.13 (systematic), or f²c/4π = 0.0786 ± 0.0008.
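
    The two quoted couplings are related by the standard conversion between pseudoscalar and pseudovector pion-nucleon coupling constants, f²c/4π = (g²c/4π)(mπ±/2Mp)². As a quick consistency check, with mπ± ≈ 139.57 MeV and Mp ≈ 938.27 MeV, 14.20 × (139.57/1876.54)² ≈ 0.0786, matching the value above.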

  2. Measurement of the absolute v μ-CCQE cross section at the SciBooNE experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aunion, Jose Luis Alcaraz

    2010-07-01

    This thesis presents the measurement of the charged current quasi-elastic (CCQE) neutrino-nucleon cross section at neutrino energies around 1 GeV. This measurement has two main physical motivations. On the one hand, for neutrino-nucleon interactions at a few GeV the existing old data are sparse and of low statistics; the current measurement populates the low energy region with higher statistics and precision than previous experiments. On the other hand, the CCQE interaction is the most useful interaction in neutrino oscillation experiments: the CCQE channel is used to measure the initial and final neutrino fluxes in order to determine the neutrino fraction that disappeared. Neutrino oscillation experiments work at low neutrino energies, so precise measurements of CCQE interactions are essential for flux measurements. The main goal of this thesis is to measure the CCQE absolute neutrino cross section from the SciBooNE data. The SciBar Booster Neutrino Experiment (SciBooNE) is a neutrino and anti-neutrino scattering experiment with a neutrino energy spectrum centered around 1 GeV. SciBooNE ran from June 8th, 2007 to August 18th, 2008, collecting a total of 2.65 × 10²⁰ protons on target (POT). This thesis uses the full data collection in neutrino mode, 0.99 × 10²⁰ POT. A CCQE selection cut was performed, achieving an approximately 70% pure CCQE sample. A fit method was developed exclusively to determine the absolute CCQE cross section, presenting results in a neutrino energy range from 0.2 to 2 GeV. The results are compatible with the NEUT predictions. The SciBooNE measurement has been compared with both carbon (MiniBooNE) and deuterium (ANL and BNL) target experiments, showing good agreement in both cases.

  3. A Methodological Approach to Quantifying Plyometric Intensity.

    PubMed

    Jarvis, Mark M; Graham-Smith, Phil; Comfort, Paul

    2016-09-01

    Jarvis, MM, Graham-Smith, P, and Comfort, P. A methodological approach to quantifying plyometric intensity. J Strength Cond Res 30(9): 2522-2532, 2016. In contrast to other methods of training, the quantification of plyometric exercise intensity is poorly defined. The purpose of this study was to evaluate the suitability of a range of neuromuscular and mechanical variables for describing the intensity of plyometric exercises. Seven recreationally active male subjects performed a series of 7 plyometric exercises. Neuromuscular activity was measured using surface electromyography (SEMG) at vastus lateralis (VL) and biceps femoris (BF). Surface electromyography data were divided into concentric (CON) and eccentric (ECC) phases of movement. Mechanical output was measured from ground reaction forces and processed to provide peak impact ground reaction force (PF), peak eccentric power (PEP), and impulse (IMP). Statistical analysis was conducted to assess the reliability (intraclass correlation coefficient) and sensitivity (smallest detectable difference) of all variables. Mean values of SEMG demonstrated high reliability (r ≥ 0.82), excluding ECC VL during a 40-cm drop jump (r = 0.74). PF, PEP, and IMP demonstrated high reliability (r ≥ 0.85). Statistical power for force variables was excellent (power = 1.0) and good for SEMG (power ≥ 0.86), excluding CON BF (power = 0.57). There was no significant difference (p > 0.05) in CON SEMG between exercises. Eccentric phase SEMG only distinguished between exercises involving a landing and those that did not (percentage of maximal voluntary isometric contraction [%MVIC]: no landing, 65 ± 5; landing, 140 ± 8). Peak eccentric power, PF, and IMP all distinguished between exercises. In conclusion, CON neuromuscular activity does not appear to vary when intent is maximal, whereas ECC activity depends on the presence of a landing. Force characteristics provide a reliable and sensitive measure enabling precise description of intensity in plyometric exercises. The present findings provide coaches and scientists with an insightful and precise method of measuring intensity in plyometrics, which will allow for greater control of programming variables.

  4. Spectrophotometric and spectrofluorimetric methods for determination of certain biologically active phenolic drugs in their bulk powders and different pharmaceutical formulations

    NASA Astrophysics Data System (ADS)

    Omar, Mahmoud A.; Badr El-Din, Kalid M.; Salem, Hesham; Abdelmageed, Osama H.

    2018-03-01

    Two simple and sensitive spectrophotometric and spectrofluorimetric methods for the determination of terbutaline sulfate, fenoterol hydrobromide, etilefrine hydrochloride, isoxsuprine hydrochloride, ethamsylate and doxycycline hyclate have been developed. Both methods are based on the oxidation of the cited drugs with cerium(IV) in acid medium. The spectrophotometric method is based on measurement of the absorbance difference (ΔA), which represents the excess cerium(IV), at 317 nm for each drug. The spectrofluorimetric method is based on measurement of the fluorescence of the produced cerium(III) at an emission wavelength of 354 nm (λexcitation = 255 nm) over the concentrations studied for each drug. For both methods, the variables affecting the reactions were carefully investigated and the conditions optimized. Linear relationships were found between either ΔA or the fluorescence of the produced cerium(III) and the concentration of the studied drugs, in general concentration ranges of 2.0-24.0 μg mL⁻¹ and 20.0-24.0 ng mL⁻¹, with good correlation coefficients in the ranges 0.9990-0.9999 and 0.9990-0.9993 for the spectrophotometric and spectrofluorimetric methods, respectively. The limits of detection and quantitation of the spectrophotometric method were in the general ranges 0.190-0.787 and 0.634-2.624 μg mL⁻¹, respectively. For the spectrofluorimetric method, the limits of detection and quantitation were in the general ranges 4.77-9.52 and 15.91-31.74 ng mL⁻¹, respectively. The stoichiometry of the reaction was determined, and the reaction pathways were postulated. The analytical performance of the methods, in terms of accuracy and precision, was statistically validated and the results obtained were satisfactory. The methods have been successfully applied to the determination of the cited drugs in their commercial pharmaceutical formulations. Statistical comparison of the results with reference methods showed excellent agreement and proved that there was no significant difference in accuracy and precision.
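
    Detection and quantitation limits of this kind are commonly computed with the ICH-style formulas LOD = 3.3 σ/S and LOQ = 10 σ/S, where S is the calibration slope and σ the residual standard deviation; whether the authors used exactly these formulas is an assumption. A generic sketch with hypothetical calibration points:

      import numpy as np

      def calibration_limits(conc, signal):
          slope, intercept = np.polyfit(conc, signal, 1)
          resid = signal - (slope * conc + intercept)
          sd = resid.std(ddof=2)                        # two fitted parameters
          return 3.3 * sd / slope, 10.0 * sd / slope    # LOD, LOQ

      conc = np.array([2.0, 6.0, 10.0, 14.0, 18.0, 24.0])   # ug/mL, hypothetical
      sig = 0.031 * conc + 0.004 + np.random.default_rng(5).normal(0, 5e-4, 6)
      print(calibration_limits(conc, sig))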

  5. A Study of Particle Beam Spin Dynamics for High Precision Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiedler, Andrew J.

    In the search for physics beyond the Standard Model, high precision experiments to measure fundamental properties of particles are an important frontier. One group of such measurements involves magnetic dipole moment (MDM) values as well as searching for an electric dipole moment (EDM), both of which could provide insights about how particles interact with their environment at the quantum level and whether there are undiscovered new particles. For these types of high precision experiments, minimizing statistical uncertainties in the measurements plays a critical role. This work leverages computer simulations to quantify the effects of statistical uncertainty for experiments investigating spin dynamics. In it, analysis of beam properties and lattice design effects on the polarization of the beam is performed. As a case study, the beam lines that will provide polarized muon beams to the Fermilab Muon g-2 experiment are analyzed to determine the effects of correlations between the phase space variables and the overall polarization of the muon beam.

  6. The precise time-dependent solution of the Fokker–Planck equation with anomalous diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Ran; Du, Jiulin, E-mail: jiulindu@aliyun.com

    2015-08-15

    We study the time behavior of the Fokker-Planck equation in Zwanzig's rule (the backward-Itô rule) based on the Langevin equation of Brownian motion with anomalous diffusion in a complex medium. The diffusion coefficient is a function in momentum space and follows a generalized fluctuation-dissipation relation. We obtain the precise time-dependent analytical solution of the Fokker-Planck equation; at long times the solution approaches a stationary power-law distribution in nonextensive statistics. As a test, we have numerically demonstrated the accuracy and validity of the time-dependent solution. Highlights: • The precise time-dependent solution of the Fokker-Planck equation with anomalous diffusion is found. • The anomalous diffusion satisfies a generalized fluctuation-dissipation relation. • At long times the time-dependent solution approaches a power-law distribution in nonextensive statistics. • Numerically we have demonstrated the accuracy and validity of the time-dependent solution.

  7. Evaluating the quality of a cell counting measurement process via a dilution series experimental design.

    PubMed

    Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng

    2017-12-01

    Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
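
    A bare-bones version of the dilution-series analysis, checking proportionality with a zero-intercept fit and precision via the replicate coefficient of variation, might look like the following; the counts and dilution fractions are made up for illustration and this is not the authors' full statistical model.

      import numpy as np

      frac = np.repeat([1.0, 0.75, 0.5, 0.25], 3)             # target dilution fractions
      counts = np.array([98, 102, 100, 77, 74, 76,
                         52, 50, 49, 26, 24, 25]) * 1e4       # replicate counts, cells/mL

      slope = (frac * counts).sum() / (frac * frac).sum()     # zero-intercept (proportional) fit
      pred = slope * frac
      r2 = 1 - ((counts - pred) ** 2).sum() / ((counts - counts.mean()) ** 2).sum()

      cv = [counts[frac == f].std(ddof=1) / counts[frac == f].mean()
            for f in np.unique(frac)]                         # precision per dilution level
      print(r2, cv)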

  8. Statistical inference of selection and divergence of rice blast resistance gene Pi-ta

    USDA-ARS?s Scientific Manuscript database

    The resistance gene Pi-ta has been effectively used to control rice blast disease worldwide. A few recent studies have described the possible evolution of Pi-ta in cultivated and weedy rice. However, evolutionary statistics used for the studies are too limited to precisely understand selection and d...

  9. Determination of the pion-nucleon coupling constant and scattering lengths

    NASA Astrophysics Data System (ADS)

    Ericson, T. E.; Loiseau, B.; Thomas, A. W.

    2002-07-01

    We critically evaluate the isovector Goldberger-Miyazawa-Oehme (GMO) sum rule for forward πN scattering using the recent precision measurements of π-p and π-d scattering lengths from pionic atoms. We deduce the charged-pion-nucleon coupling constant, with careful attention to systematic and statistical uncertainties. This determination gives, directly from data, g2c(GMO)/ 4π=14.11+/-0.05(statistical)+/-0.19(systematic) or f2c/4π=0.0783(11). This value is intermediate between that of indirect methods and the direct determination from backward np differential scattering cross sections. We also use the pionic atom data to deduce the coherent symmetric and antisymmetric sums of the pion-proton and pion-neutron scattering lengths with high precision, namely, (aπ-p+aπ-n)/2=[- 12+/-2(statistical)+/-8(systematic)]×10-4 m-1π and (aπ-p-aπ- n)/2=[895+/-3(statistical)+/-13 (systematic)]×10-4 m-1π. For the need of the present analysis, we improve the theoretical description of the pion-deuteron scattering length.

  10. 40 CFR 35.6335 - Property management standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... property are in good condition and periodic calibration of the instruments used for precision measurements... the property; (5) Provisions for financial control and accounting in the financial management system...

  11. Design and Analysis of a Compact Precision Positioning Platform Integrating Strain Gauges and the Piezoactuator

    PubMed Central

    Huang, Hu; Zhao, Hongwei; Yang, Zhaojun; Fan, Zunqiang; Wan, Shunguang; Shi, Chengli; Ma, Zhichao

    2012-01-01

    Miniaturized precision positioning platforms are needed for in situ nanomechanical test applications. This paper proposes a compact precision positioning platform integrating strain gauges and a piezoactuator. The effects of the geometric parameters of two parallel plates on the Von Mises stress distribution, as well as on the static and dynamic characteristics of the platform, were studied by the finite element method. Results of the calibration experiment indicate that the strain gauge sensor has good linearity, with a sensitivity of about 0.0468 mV/μm. A closed-loop control system was established to overcome the nonlinearity of the platform. Experimental results demonstrate that both the displacement-increasing and the displacement-decreasing portions of the displacement control process have good linearity, verifying that the control system works as intended. The developed platform has a compact structure but can realize displacement measurement with the embedded strain gauges, which is useful for closed-loop control and structure miniaturization of piezo devices. It has potential applications in nanoindentation and nanoscratch tests, especially in the field of in situ nanomechanical testing, which requires compact structures. PMID:23012566
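
    A closed-loop scheme of the kind described can be sketched as a discrete PI loop around the strain-gauge reading; read_position_um and set_voltage below are hypothetical hardware callbacks, and the gains are placeholders rather than the authors' tuning.

      def pi_position_control(target_um, read_position_um, set_voltage,
                              kp=0.5, ki=50.0, dt=1e-3, steps=2000):
          # Drive the piezo until the strain-gauge reading matches the command.
          integral = 0.0
          for _ in range(steps):
              error = target_um - read_position_um()   # feedback from strain gauges
              integral += error * dt
              set_voltage(kp * error + ki * integral)  # PI control law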

  12. Data precision of X-ray fluorescence (XRF) scanning of discrete samples with the ITRAX XRF core-scanner exemplified on loess-paleosol samples

    NASA Astrophysics Data System (ADS)

    Profe, Jörn; Ohlendorf, Christian

    2017-04-01

    XRF scanning has been the state-of-the-art technique for geochemical analyses in marine and lacustrine sedimentology for more than a decade. However, little attention has been paid to data precision and technical limitations so far. Using homogenized, dried and powdered samples (certified geochemical reference standards and samples from a lithologically contrasting loess-paleosol sequence) minimizes many adverse effects that influence the XRF signal when analyzing wet sediment cores. This allows the investigation of data precision under ideal conditions and, at the same time, documents a new application of XRF core-scanner technology. Reliable interpretation of XRF results requires evaluating the data precision of single elements as a function of X-ray tube, measurement time, sample compaction and quality of peak fitting. Data precision was established by measuring each sample ten times, and theoretically obeys Poisson statistics. Fe and Ca exhibit the largest deviations from Poisson statistics; the same elements show the smallest mean relative standard deviations, in the range from 0.5% to 1%. This represents the technical limit of data precision achievable with the installed detector. Measurement times ≥ 30 s yield mean relative standard deviations below 4% for most elements. The quality of peak fitting is only relevant for elements with overlapping fluorescence lines, such as Ba, Ti and Mn, or for elements with low concentrations, such as Y. Differences in sample compaction are marginal and do not change the mean relative standard deviation considerably. Data precision is in the range reported for geochemical reference standards measured by conventional techniques. Therefore, XRF scanning of discrete samples provides a cost- and time-efficient alternative to conventional multi-element analyses. As the best trade-off between economical operation and data quality, we recommend a measurement time of 30 s, resulting in a total scan time of 30 minutes for 30 samples.
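
    The Poisson limit mentioned above is just counting statistics: for N accumulated counts the relative standard deviation is 1/sqrt(N), so precision improves with the square root of measurement time. A tiny illustration (the count rate is hypothetical):

      import numpy as np

      def relative_sd(count_rate_cps, t_seconds):
          # Poisson counting: sigma / N = 1 / sqrt(N), with N = rate * time
          return 1.0 / np.sqrt(count_rate_cps * t_seconds)

      for t in (5, 10, 30, 60):
          print(t, relative_sd(count_rate_cps=2000 / 30, t_seconds=t))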

  13. The Distribution of Non-Volatile Elements on Mars: Mars Odyssey GRS Results

    NASA Technical Reports Server (NTRS)

    Boynton, W.; Janes, D.; Kerry, K.; Kim, K.; Reedy, R.; Evans, L.; Starr, R.; Drake, D.; Taylor, J.; Waenke, H.

    2004-01-01

    The major scientific objective of the Gamma-Ray Spectrometer (GRS) on the 2001 Mars Odyssey Mission is to determine the distribution of elements in the near-surface of Mars. Mars Odyssey has been in its mapping orbit since February 2002, and the GRS boom, which removes the instrument from the gamma-ray background of the spacecraft, was erected in June 2002. In the 580 days since boom erection, we have accumulated 453 days of mapping data. The difference is due mostly to two occasions when Odyssey went into safe mode and the instrument warmed up, forcing us to anneal out radiation damage that manifests itself after warming. Other data losses are due to simple transmitter data gaps and to intense solar particle events. The data from the GRS are statistical in nature: we have a very low count rate and a very low signal-to-noise ratio. With the exception of K, the most easily mapped elements have a signal-to-noise ratio on the order of 0.1 (0.5 for K), and the counting rates are on the order of 0.3 to 0.7 counts/min (4 cpm for K). In order to map the distribution of an element, we have to divide the total signal from Mars among many cells that define the map's spatial resolution (unless the statistics are good enough that the intrinsic spatial resolution of the instrument, about 550 km diameter, dominates). The data for several elements have now achieved a statistical precision that permits us to make meaningful maps.

  14. Design and Application of Automatic Falling Device for Different Brands of Goods

    NASA Astrophysics Data System (ADS)

    Yang, Xudong; Ge, Qingkuan; Zuo, Ping; Peng, Tao; Dong, Weifu

    2017-12-01

    The goods-falling device is an important component of an intelligent goods-sorting system: it is responsible for the temporary storage and counting of goods, and for placing the goods on the conveyor belt to given precision requirements. Based on an analysis of the present state and actual demands of domestic goods-sorting equipment, a vertical goods-falling device is designed and a simulation model of the device is established. Dynamic characteristics such as the angular error of the opening and closing mechanism are analyzed with ADAMS software. The simulation results show that the maximum angular error is 0.016 rad. Tests of the device show a throughput of 7031 goods per hour with a falling-position error within 2 mm, meeting the grasp accuracy requirements of the palletizing robot.

  15. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    PubMed Central

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
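
    For concreteness, a minimal sketch of the centroid algorithm mentioned above: the peak wavelength is the power-weighted mean of the samples above a threshold. The spectrum, threshold fraction and peak parameters here are all hypothetical:

        import numpy as np

        def centroid_peak(wavelength, intensity, threshold=0.3):
            """Centroid peak detection: power-weighted mean wavelength of the
            samples above a fraction of the spectrum maximum."""
            mask = intensity >= threshold * intensity.max()
            return np.sum(wavelength[mask] * intensity[mask]) / np.sum(intensity[mask])

        # Toy FBG reflection spectrum: peak at 1550.10 nm plus detector noise.
        wl = np.linspace(1549.5, 1550.7, 600)
        spec = np.exp(-((wl - 1550.10) / 0.05) ** 2)
        spec += 0.01 * np.random.default_rng(0).normal(size=wl.size)
        print(centroid_peak(wl, spec))   # estimated Bragg wavelength, nm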

  16. [Value of the space perception test for evaluation of the aptitude for precision work in geodesy].

    PubMed

    Remlein-Mozolewska, G

    1982-01-01

    The visual spatial localization ability of employees in geodesy and cartography, and of students training for the profession, was examined. The examination was based on the precision of the work performed and the time taken to perform it. A correlation between localization ability and the precision of the hand movements required in everyday work was demonstrated: the better the movement precision, the more efficient the visual spatial localization. Length of employment was not significant. The test proved highly useful in geodesy for qualifying workers for posts requiring good manual dexterity.

  17. [Precision medicine: new opportunities and challenges for molecular epidemiology].

    PubMed

    Song, Jing; Hu, Yonghua

    2016-04-01

    Since the completion of the Human Genome Project in 2003 and the announcement of the Precision Medicine Initiative by U.S. President Barack Obama in January 2015, human beings have initially completed the "three steps" of "genomics to biology, genomics to health, and genomics to society". As a new inter-discipline, the emergence and development of precision medicine have relied on the support and promotion of biological science, basic medicine, clinical medicine, epidemiology, statistics, sociology, information science, etc. Meanwhile, molecular epidemiology, as a cross-discipline of epidemiology and molecular biology, is considered to be the core force promoting precision medicine. This article builds on the characteristics and research progress of precision medicine and molecular epidemiology, focusing on the contribution and significance of molecular epidemiology to precision medicine, and exploring possible opportunities and challenges in the future.

  18. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    ERIC Educational Resources Information Center

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  19. Facile room-temperature solution-phase synthesis of a spherical covalent organic framework for high-resolution chromatographic separation.

    PubMed

    Yang, Cheng-Xiong; Liu, Chang; Cao, Yi-Meng; Yan, Xiu-Ping

    2015-08-07

    A simple and facile room-temperature solution-phase synthesis was developed to fabricate a spherical covalent organic framework with large surface area, good solvent stability and high thermostability for high-resolution chromatographic separation of diverse important industrial analytes including alkanes, cyclohexane and benzene, α-pinene and β-pinene, and alcohols with high column efficiency and good precision.

  20. AVIRIS Spectrometer Maps Total Water Vapor Column

    NASA Technical Reports Server (NTRS)

    Conel, James E.; Green, Robert O.; Carrere, Veronique; Margolis, Jack S.; Alley, Ronald E.; Vane, Gregg A.; Bruegge, Carol J.; Gary, Bruce L.

    1992-01-01

    Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) produces maps of vertical-column abundances of water vapor in atmosphere with good precision and spatial resolution. Maps provide information for meteorology, climatology, and agriculture.

  1. The integral inventory for depression, a new, self-rated clinimetric instrument for the emotional and painful dimensions in major depressive disorder.

    PubMed

    Dueñas, Héctor; Lara, Carmen; Walton, Richard J; Granger, Renee E; Dossenbach, Martin; Raskin, Joel

    2011-09-01

    To assess the reliability and validity of the Integral Inventory for Depression (IID) scale using post hoc analyses of data from a multi-country study (ClinicalTrials.gov: NCT00561509) of patients with major depressive disorder (MDD). Patients (N = 1629) completed the IID (comprising two separate dimensions for emotional and physically painful symptoms; maximum score of 65) and a reference scale (16-item Quick Inventory of Depressive Symptomatology Self-Report) at baseline and at follow-up (8 and 24 weeks). Physicians rated MDD symptoms using the Clinical Global Impressions of Severity scale at each visit. Inter-item correlation, internal consistency, external validity, factor structure, and exploratory analysis of an optimal severity cut-off point were assessed. The IID displayed two distinct dimensions (i.e. painful and emotional) with little item redundancy and good internal consistency (Cronbach's α > 0.83 at each visit). The IID displayed good external validity (Pearson's correlation coefficients >0.60 at each visit) and statistically significant agreement (McNemar's test; P < 0.001 at follow-up) with the reference scale. Results suggest that a cut-off score of ≤24 had adequate precision (>80%) to identify patients with and without moderate MDD. Results suggest that the IID may be a reliable and valid tool for assessing emotional and painful symptoms of MDD.

  2. Super-resolution imaging applied to moving object tracking

    NASA Astrophysics Data System (ADS)

    Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi

    2017-10-01

    Moving object tracking in a video is a method used to detect and analyze changes in an object being observed. Visual quality and precision of the tracked target are highly desired in modern tracking systems. The tracked object does not always appear clearly, which makes the tracking result less precise; the reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step applies super-resolution imaging to the frame sequence, cropping several frames or all frames. The second step tracks the resulting super-resolution images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution images. In this research, a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation. The method used for tracking is Camshift. The advantage of Camshift is its simple calculation based on the HSV color histogram, which copes with varying object colors. The computational complexity and large memory requirements for implementing super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and in good light conditions.
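
    A minimal sketch of the Camshift stage (the second step) using OpenCV, assuming the frames have already been super-resolved; the file name and initial window are hypothetical:

        import cv2

        cap = cv2.VideoCapture("frames_sr.mp4")     # hypothetical super-resolved video
        ok, frame = cap.read()
        x, y, w, h = 300, 200, 40, 40               # initial window around the target
        hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])   # hue histogram
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
        window = (x, y, w, h)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)  # hue likelihood
            box, window = cv2.CamShift(backproj, window, term)  # adapts size/orientation
            print(box)   # rotated rectangle around the tracked object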

  3. Profiling Sea Ice with a Multiple Altimeter Beam Experimental Lidar (MABEL)

    NASA Technical Reports Server (NTRS)

    Kwok, R.; Markus, T.; Morison, J.; Palm, S. P.; Neumann, T. A.; Brunt, K. M.; Cook, W. B.; Hancock, D. W.; Cunningham, G. F.

    2014-01-01

    The sole instrument on the upcoming ICESat-2 altimetry mission is a micropulse lidar that measures the time-of-flight of individual photons from laser pulses transmitted at 532 nm. Prior to launch, MABEL serves as an airborne implementation for testing and development. In this paper, we provide a first examination of MABEL data acquired on two flights over sea ice in April 2012: one north of the Arctic coast of Greenland, and the other in the East Greenland Sea. We investigate the phenomenology of photon distributions in the sea ice returns. An approach to locate the surface and estimate its elevation in the distributions is described, and its achievable precision assessed. Retrieved surface elevations over relatively flat leads in the ice cover suggest that precisions of several centimeters are attainable. Restricting the width of the elevation window used in the surface analysis can mitigate potential biases in the elevation estimates due to subsurface returns at 532 nm. Comparisons of nearly coincident elevation profiles from MABEL with those acquired by an analog lidar show good agreement. Discrimination of ice and open water, a crucial step in the determination of sea ice freeboard and the estimation of ice thickness, is facilitated by contrasts in the observed signal background photon statistics. Future flight lines will sample a broader range of seasonal ice conditions for further evaluation of the year-round profiling capabilities and limitations of the MABEL instrument.

  4. RAPID SPECTROPHOTOMETRIC DETERMINATION OF TRIFLUOPERAZINE DIHYDROCHLORIDE AS BASE FORM IN PHARMACEUTICAL FORMULATION THROUGH CHARGE-TRANSFER COMPLEXATION.

    PubMed

    Prashanth, Kudige Nagaraj; Swamy, Nagaraju; Basavaiah, Kanakapura

    2016-01-01

    Two simple and selective spectrophotometric methods are described for the determination of trifluoperazine dihydrochloride (TFH) as base form (TFP) in bulk drug and in tablets. The methods are based on the molecular charge-transfer complexation of trifluoperazine base (TFP) with either 2,4,6-trinitrophenol (picric acid; PA) or 2,4-dinitrophenol (DNP). The yellow-colored radical anions formed are quantified at 410 nm (PA method) or 415 nm (DNP method). The assay conditions were optimized for both methods. Beer's law is obeyed over the concentration ranges of 1.5-24.0 µg/mL in the PA method and 5.0-80.0 µg/mL in the DNP method, with respective molar absorptivity values of 1.03 × 10^4 and 6.91 × 10^3 L mol^-1 cm^-1. The reaction stoichiometry in both methods was evaluated by Job's method of continuous variations and was found to be 1 : 2 (TFP : PA, TFP : DNP). The developed methods were successfully applied to the determination of TFP in pure form and commercial tablets with good accuracy and precision. Statistical comparison of the results was performed using Student's t-test and F-ratio at the 95% confidence level, and the results showed no significant difference between the reference and proposed methods with regard to accuracy and precision. Further, the accuracy and reliability of the methods were confirmed by recovery studies via the standard addition technique.
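
    A minimal sketch of the statistical comparison described above, using scipy: a t-test on the means (accuracy) and an F-ratio on the variances (precision). The recovery values are hypothetical:

        import numpy as np
        from scipy import stats

        # Hypothetical recovery results (%) from the proposed and reference methods.
        proposed  = np.array([99.2, 100.4, 98.7, 101.1, 99.8])
        reference = np.array([99.6, 100.1, 99.0, 100.7, 99.5])

        t_stat, t_p = stats.ttest_ind(proposed, reference)    # accuracy comparison

        s1, s2 = np.var(proposed, ddof=1), np.var(reference, ddof=1)
        F = max(s1, s2) / min(s1, s2)                         # precision comparison
        df = len(proposed) - 1
        f_p = 2 * (1 - stats.f.cdf(F, df, df))                # two-tailed p-value

        print(f"t = {t_stat:.2f} (p = {t_p:.3f}), F = {F:.2f} (p = {f_p:.3f})")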

  5. Bit Grooming: Statistically accurate precision-preserving quantization with compression, evaluated in the netCDF operators (NCO, v4.4.8+)

    DOE PAGES

    Zender, Charles S.

    2016-09-19

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80% and 5-65%, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
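
    A minimal sketch of the alternating shave/set idea for float32 values (the reference implementation lives in NCO; this toy version ignores NaN/Inf handling and other production details):

        import numpy as np

        def bit_groom(values, keepbits):
            """Alternately shave (zero) and set (one) the trailing mantissa bits of
            consecutive float32 values, keeping `keepbits` explicit mantissa bits.
            Special values (NaN, Inf) are not treated here for brevity."""
            bits = np.asarray(values, np.float32).view(np.uint32).copy()
            tail = np.uint32((1 << (23 - keepbits)) - 1)   # mask of bits to groom
            bits[0::2] &= ~tail                            # shave even-indexed values
            bits[1::2] |= tail                             # set odd-indexed values
            return bits.view(np.float32)

        data = np.array([273.15, 273.16, 273.17, 273.18], np.float32)
        print(bit_groom(data, keepbits=12))   # roughly 3-4 decimal digits retained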

  6. Nonlinear Statistical Estimation with Numerical Maximum Likelihood

    DTIC Science & Technology

    1974-10-01

    probably most directly attributable to the speed, precision and compactness of the linear programming algorithm exercised; the mutual primal-dual... discriminant analysis is to classify the individual as a member of population π1 or π2 according to the relative... Contents: Introduction to the Dissertation; Introduction to Statistical Estimation Theory; Choice of Estimator... Density Functions; Choice of Estimator

  7. Performance Analysis of Live-Virtual-Constructive and Distributed Virtual Simulations: Defining Requirements in Terms of Temporal Consistency

    DTIC Science & Technology

    2009-12-01

    events. Work associated with aperiodic tasks has the same statistical behavior and the same timing requirements. The timing deadlines are soft. • Sporadic... answers, but it is possible to calculate how precise the estimates are. Simulation-based performance analysis of a model includes a statistical... to evaluate all possible states in a timely manner. This is the principal reason for resorting to simulation and statistical analysis to evaluate

  8. Role of the CMS electromagnetic calorimeter in the measurement of the Higgs boson properties and search for new physics

    NASA Astrophysics Data System (ADS)

    Ferri, F.; CMS Collaboration

    2016-04-01

    The precise determination of the mass, the width and the couplings of the particle discovered in 2012 with a mass around 125 GeV is of capital importance to clarify the nature of such a particle, in particular to establish precisely if it is a Standard Model Higgs boson. In several new physics scenarios, in fact, the Higgs boson may behave differently with respect to the Standard Model one, or may not be unique, i.e. there can be more than one Higgs boson. In order to achieve the precision needed to discriminate between different models, the energy resolution, the scale uncertainty and the position resolution for electrons and photons are required to be as good as possible. The CMS scintillating lead-tungstate electromagnetic calorimeter (ECAL) was built as a precise tool with an exceptional energy resolution and a very good position resolution that improved over the years with the knowledge of the detector. Moreover, thanks to the fact that most of the lead-tungstate scintillation light is emitted in about 25 ns, the ECAL can be used to accurately determine the time of flight of photons. We present the current performance of the CMS ECAL, with a special emphasis on the impact on the measurement of the properties of the Higgs boson and on searches for new physics.

  9. Comment on the asymptotics of a distribution-free goodness of fit test statistic.

    PubMed

    Browne, Michael W; Shapiro, Alexander

    2015-03-01

    In a recent article Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed that a proof by Browne (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) of the asymptotic distribution of a goodness of fit test statistic is incomplete because it fails to prove that the orthogonal component function employed is continuous. Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed how Browne's proof can be completed satisfactorily but this required the development of an extensive and mathematically sophisticated framework for continuous orthogonal component functions. This short note provides a simple proof of the asymptotic distribution of Browne's (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) test statistic by using an equivalent form of the statistic that does not involve orthogonal component functions and consequently avoids all complicating issues associated with them.

  10. Reflexion on linear regression trip production modelling method for ensuring good model quality

    NASA Astrophysics Data System (ADS)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model still has to be used, for which a good trip production model is essential. A good model can only be obtained from a good sample. Two basic principles of good sampling are a sample capable of representing the population characteristics and one capable of producing an acceptable error at a certain confidence level. These principles do not yet seem well understood or applied in trip production modelling. Therefore, investigating trip production modelling practice in Indonesia and trying to formulate a better modelling method for ensuring model quality is necessary. The results are as follows. Statistics provides a method to calculate the span of a predicted value at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that sample composition can significantly change the model. Hence a good R2 value does not always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested. A sketch of the confidence-interval computation follows.
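
    A minimal sketch of the confidence interval of the predicted value for a linear regression, using statsmodels; the trip-production data are hypothetical:

        import numpy as np
        import statsmodels.api as sm

        # Hypothetical zone data: households (x) vs. trips produced (y).
        x = np.array([120, 250, 310, 450, 520, 610, 700, 820])
        y = np.array([95, 210, 240, 380, 420, 500, 560, 690])

        X = sm.add_constant(x)
        model = sm.OLS(y, X).fit()

        # 95% confidence interval of the predicted mean value at new x.
        X_new = sm.add_constant(np.array([300, 600]))
        pred = model.get_prediction(X_new)
        print(pred.conf_int(alpha=0.05))   # span of the prediction per observation
        print(model.rsquared)              # a good R2 alone does not guarantee quality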

  11. [Ultrasonographic evaluation of the uterine cervix length remaining after LOOP-excision].

    PubMed

    Robert, A-L; Nicolas, F; Lavoué, V; Henno, S; Mesbah, H; Porée, P; Levêque, J

    2014-04-01

    To assess whether there is a correlation between the length of a conization specimen and the length of the cervix measured by vaginal ultrasonography after the operation. Prospective observational study including patients under 45 years, with measurement of cervical length before and on the day of conization, and measurement of the histological length of the specimen. Among the 40 patients enrolled, the mean ultrasound measurement was 26.9 mm (± 4.9 mm) before conization against 18.1 mm (± 4.4 mm) after, a mean difference of 8.8 mm (± 2.4 mm) (statistically significant, P<0.0001). The mean histological length of the specimen was 9 mm (± 2.2 mm). A statistically significant correlation between ultrasound and histological measurements was found, with a correlation coefficient R=0.85 (P<0.0001). Moreover, the proportion of cervical length removed by loop excision in our series was 33% (± 8.5%). A good correlation between the measurement of the specimen and the cervical ultrasound length before and after conization was found, as was a significant reduction in cervical length after conization. The precise length of the specimen should be known in case of pregnancy, and the prevention of prematurity due to conization rests on selected indications and efficient surgical technique. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  12. Identifiability of PBPK Models with Applications to ...

    EPA Pesticide Factsheets

    Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss different types of identifiability that occur in PBPK models and give reasons why they occur. We particularly focus on how the mathematical structure of a PBPK model and lack of appropriate data can lead to statistical models in which it is impossible to estimate at least some parameters precisely. Methods are reviewed which can determine whether a purely linear PBPK model is globally identifiable. We propose a theorem which determines when identifiability at a set of finite and specific values of the mathematical PBPK model (global discrete identifiability) implies identifiability of the statistical model. However, we are unable to establish conditions that imply global discrete identifiability, and conclude that the only safe approach to analysis of PBPK models involves Bayesian analysis with truncated priors. Finally, computational issues regarding posterior simulations of PBPK models are discussed. The methodology is very general and can be applied to numerous PBPK models which can be expressed as linear time-invariant systems. A real data set of a PBPK model for exposure to dimethyl arsinic acid (DMA(V)) is presented to illustrate the proposed methodology.

  13. A spatial scan statistic for nonisotropic two-level risk cluster.

    PubMed

    Li, Xiao-Zhou; Wang, Jin-Feng; Yang, Wei-Zhong; Li, Zhong-Jie; Lai, Sheng-Jie

    2012-01-30

    Spatial scan statistic methods are commonly used for geographical disease surveillance and cluster detection. The standard spatial scan statistic does not model any variability in the underlying risks of subregions belonging to a detected cluster. For a multilevel risk cluster, the isotonic spatial scan statistic could model a centralized high-risk kernel in the cluster. Because variations in disease risks are anisotropic owing to different social, economical, or transport factors, the real high-risk kernel will not necessarily take the central place in a whole cluster area. We propose a spatial scan statistic for a nonisotropic two-level risk cluster, which could be used to detect a whole cluster and a noncentralized high-risk kernel within the cluster simultaneously. The performance of the three methods was evaluated through an intensive simulation study. Our proposed nonisotropic two-level method showed better power and geographical precision with two-level risk cluster scenarios, especially for a noncentralized high-risk kernel. Our proposed method is illustrated using the hand-foot-mouth disease data in Pingdu City, Shandong, China in May 2009, compared with two other methods. In this practical study, the nonisotropic two-level method is the only way to precisely detect a high-risk area in a detected whole cluster. Copyright © 2011 John Wiley & Sons, Ltd.
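
    For reference, a minimal sketch of the Poisson likelihood-ratio score that the standard spatial scan statistic maximizes over candidate zones (the two-level variant additionally scores a high-risk kernel inside a detected cluster); the normalization of expected counts to the total case count is an assumption of this sketch:

        import numpy as np

        def poisson_llr(c, e, C):
            """Kulldorff log-likelihood ratio for one candidate zone: c observed
            and e expected cases in the zone (expectations normalized to sum to C
            over the study region), C total cases. Zero unless risk is elevated."""
            if c <= e:
                return 0.0
            return c * np.log(c / e) + (C - c) * np.log((C - c) / (C - e))

        print(poisson_llr(c=38, e=20.0, C=500))   # hypothetical zone score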

  14. NGSI technologies Coming Down the Road - Fast Neutron Collar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swinhoe, Martyn T.

    2014-02-26

    This report describes the safeguards significance of NGSI technologies, in particular a new neutron collar design with 3He detectors that gives good-precision results in a much shorter measurement time.

  15. Statistical analysis of regulatory ecotoxicity tests.

    PubMed

    Isnard, P; Flammarion, P; Roman, G; Babut, M; Bastien, P; Bintein, S; Esserméant, L; Férard, J F; Gallotti-Schmitt, S; Saouter, E; Saroli, M; Thiébaud, H; Tomassone, R; Vindimian, E

    2001-11-01

    ANOVA-type data analysis, i.e., determination of lowest-observed-effect concentrations (LOECs) and no-observed-effect concentrations (NOECs), has been widely used for statistical analysis of chronic ecotoxicity data. However, it has been increasingly criticised, most importantly because the NOEC depends on the choice of test concentrations and the number of replications, and rewards poor experiments, i.e., high variability, with high NOEC values. Thus, a recent OECD workshop concluded that the use of the NOEC should be phased out and that a regression-based estimation procedure should be used. Following this workshop, a working group was established at the French level between government, academia and industry representatives. Twenty-seven sets of chronic data (algae, daphnia, fish) were collected and analysed by ANOVA and regression procedures. Several regression models were compared, and relations between NOECs and ECx for different values of x were established in order to find an alternative summary parameter to the NOEC. Few biological arguments are available to help define a negligible level of effect x for the ECx. With regard to their use in risk assessment procedures, a convenient methodology would be to choose x so that ECx values are on average similar to present NOECs; this would imply no major change in the risk assessment procedure. However, experimental data show that ECx values depend on the regression models and that their accuracy decreases in the low-effect zone. This disadvantage could probably be reduced by adapting existing experimental protocols, but it could mean more experimental effort and higher cost. ECx (derived with existing test guidelines, e.g., regarding the number of replicates) whose lower confidence bounds are on average similar to present NOECs would improve this approach by a priori encouraging more precise experiments. However, narrow confidence intervals are not only linked to good experimental practice but also depend on the distance between the best model fit and the experimental data. Ultimately, these approaches still use the NOEC as a reference, although this reference is statistically incorrect. By contrast, EC50s are the most precise values to estimate on a concentration-response curve, but they are clearly different from the NOEC, and their use would require a modification of existing assessment factors.

  16. Determination of some phenolic compounds in red wine by RP-HPLC: method development and validation.

    PubMed

    Burin, Vívian Maria; Arcari, Stefany Grützmann; Costa, Léa Luzia Freitas; Bordignon-Luiz, Marilde T

    2011-09-01

    A methodology employing reversed-phase high-performance liquid chromatography (RP-HPLC) was developed and validated for simultaneous determination of five phenolic compounds in red wine. The chromatographic separation was carried out on a C18 column with water acidified with acetic acid (pH 2.6) (solvent A) and 20% solvent A plus 80% acetonitrile (solvent B) as the mobile phase. The validation parameters included: selectivity, linearity, range, limits of detection and quantitation, precision and accuracy, using an internal standard. All calibration curves were linear (R2 > 0.999) within the range, and good precision (RSD < 2.6%) and recovery (80-120%) were obtained for all compounds. This method was applied to quantify phenolics in red wine samples from Santa Catarina State, Brazil, and good separation of the phenolic compound peaks in these wines was observed.

  17. Analytical parameters of the microplate-based ORAC-pyrogallol red assay.

    PubMed

    Ortiz, Rocío; Antilén, Mónica; Speisky, Hernán; Aliaga, Margarita E; López-Alarcón, Camilo

    2011-01-01

    The analytical parameters of the microplate-based oxygen radicals absorbance capacity (ORAC) method using pyrogallol red (PGR) as probe (ORAC-PGR) are presented. In addition, the antioxidant capacity of commercial beverages, such as wines, fruit juices, and iced teas, is estimated. Good linearity of the area under the curve (AUC) versus Trolox concentration plots was obtained [AUC = (845 ± 110) + (23 ± 2) [Trolox, µM], R = 0.9961, n = 19]. QC experiments showed better precision and accuracy at the highest Trolox concentration (40 µM), with RSD and REC (recovery) values of 1.7 and 101.0%, respectively. When red wine was used as the sample, the method also showed good linearity [AUC = (787 ± 77) + (690 ± 60) [red wine, µL/mL]; R = 0.9926, n = 17], precision and accuracy, with RSD values from 1.4 to 8.3% and REC values that ranged from 89.7 to 103.8%. Additivity assays using solutions containing gallic acid and Trolox (or red wine) showed that the protection of PGR afforded by the samples was additive. Red wines showed higher ORAC-PGR values than white wines, while the ORAC-PGR index of fruit juices and iced teas presented great variability, ranging from 0.6 to 21.6 mM of Trolox equivalents. This variability was also observed for juices of the same fruit, showing the influence of the brand on the ORAC-PGR index. The ORAC-PGR methodology can be applied in a microplate reader with good linearity, precision, and accuracy.
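
    A minimal sketch of the AUC-based Trolox calibration behind an ORAC index: fit AUC against the Trolox standards, then invert the line for a sample. All AUC values here are hypothetical (chosen to be roughly consistent with the calibration reported above):

        import numpy as np

        # Hypothetical AUC values: Trolox standards and one beverage sample.
        trolox = np.array([5.0, 10.0, 20.0, 40.0])        # standards, uM
        auc_cal = np.array([960.0, 1075.0, 1310.0, 1770.0])
        sample_auc = 1400.0          # AUC from np.trapz on the measured decay curve

        b, a = np.polyfit(trolox, auc_cal, 1)             # fit AUC = a + b * [Trolox]
        print(f"{(sample_auc - a) / b:.1f} uM Trolox equivalents")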

  18. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564

  19. The Statistics of Visual Representation

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.

    2002-01-01

    The experience of retinex image processing has prompted us to reconsider fundamental aspects of imaging and image processing. Foremost is the idea that a good visual representation requires a non-linear transformation of the recorded (approximately linear) image data. Further, this transformation appears to converge on a specific distribution. Here we investigate the connection between numerical and visual phenomena. Specifically the questions explored are: (1) Is there a well-defined consistent statistical character associated with good visual representations? (2) Does there exist an ideal visual image? And (3) what are its statistical properties?

  20. Weather related continuity and completeness on Deep Space Ka-band links: statistics and forecasting

    NASA Technical Reports Server (NTRS)

    Shambayati, Shervin

    2006-01-01

    In this paper the concept of link 'stability' as a means of measuring link continuity is introduced, and through it, along with the distributions of 'good' and 'bad' periods, the performance of the proposed Ka-band link design method using both forecasting and long-term statistics is analyzed. The results indicate that the proposed link design method has relatively good continuity and completeness characteristics even when only long-term statistics are used, and that continuity performance further improves when forecasting is employed.

  1. Statistical approaches to account for missing values in accelerometer data: Applications to modeling physical activity.

    PubMed

    Yue Xu, Selene; Nelson, Sandahl; Kerr, Jacqueline; Godbole, Suneeta; Patterson, Ruth; Merchant, Gina; Abramson, Ian; Staudenmayer, John; Natarajan, Loki

    2018-04-01

    Physical inactivity is a recognized risk factor for many chronic diseases. Accelerometers are increasingly used as an objective means to measure daily physical activity. One challenge in using these devices is missing data due to device nonwear. We used a well-characterized cohort of 333 overweight postmenopausal breast cancer survivors to examine missing data patterns of accelerometer outputs over the day. Based on these observed missingness patterns, we created pseudo-simulated datasets with realistic missing data patterns. We developed statistical methods to design imputation and variance-weighting algorithms to account for missing data effects when fitting regression models. Bias and precision of each method were evaluated and compared. Our results indicated that not accounting for missing data in the analysis yielded unstable estimates in the regression analysis. Incorporating variance weights and/or subject-level imputation improved precision by >50%, compared to ignoring missing data. We recommend that these simple, easy-to-implement statistical tools be used to improve analyses of accelerometer data.
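
    A minimal stand-in for the variance-weighting idea, not the authors' exact algorithm: subjects whose means rest on less observed wear time are noisier, so they get smaller weights in a weighted least-squares fit. All data and the assumed variance model are illustrative:

        import numpy as np
        import statsmodels.api as sm

        # Hypothetical per-subject daily activity means (y), a covariate (x), and
        # the fraction of scheduled wear time actually observed per subject.
        y = np.array([32.1, 28.4, 40.2, 25.3, 36.8])
        x = np.array([55, 62, 48, 70, 51])               # e.g. age
        observed_frac = np.array([0.9, 0.6, 0.95, 0.5, 0.8])

        # Assuming the variance of a subject mean scales like 1/observed_frac,
        # the inverse-variance weights are proportional to observed_frac.
        X = sm.add_constant(x)
        wls = sm.WLS(y, X, weights=observed_frac).fit()
        print(wls.params, wls.bse)   # coefficients and their standard errors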

  2. Assessment of statistical uncertainty in the quantitative analysis of solid samples in motion using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Cabalín, L. M.; González, A.; Ruiz, J.; Laserna, J. J.

    2010-08-01

    Statistical uncertainty in the quantitative analysis of solid samples in motion by laser-induced breakdown spectroscopy (LIBS) has been assessed. For this purpose, a LIBS demonstrator was designed and constructed in our laboratory. The LIBS system consisted of a laboratory-scale conveyor belt, a compact optical module and a Nd:YAG laser operating at 532 nm. The speed of the conveyor belt was variable and could be adjusted up to a maximum of 2 m s^-1. Statistical uncertainty in the analytical measurements was estimated in terms of precision (reproducibility and repeatability) and accuracy. The results obtained by LIBS on shredded scrap samples under real conditions demonstrate that the analytical precision and accuracy of LIBS depend on the sample geometry, position on the conveyor belt and surface cleanliness. Flat, relatively clean scrap samples exhibited acceptable reproducibility and repeatability; by contrast, samples with an irregular shape or a dirty surface exhibited a poor relative standard deviation.

  3. Maximum-likelihood curve-fitting scheme for experiments with pulsed lasers subject to intensity fluctuations.

    PubMed

    Metz, Thomas; Walewski, Joachim; Kaminski, Clemens F

    2003-03-20

    Evaluation schemes, e.g., least-squares fitting, are not universally applicable to all types of experiments. If an evaluation scheme is not derived from a measurement model that properly describes the experiment to be evaluated, poorer precision or accuracy than attainable from the measured data can result. We outline how statistical data evaluation schemes should be derived for any type of experiment, and we demonstrate this for laser-spectroscopic experiments, in which pulse-to-pulse fluctuations of the laser power cause correlated variations of laser intensity and generated signal intensity. The method of maximum likelihood is demonstrated in the derivation of an appropriate fitting scheme for this type of experiment. Statistical data evaluation involves the following steps. First, one has to provide a measurement model that considers the statistical variation of all enclosed variables. Second, an evaluation scheme applicable to this particular model has to be derived or provided. Third, the scheme has to be characterized in terms of accuracy and precision. A criterion for accepting an evaluation scheme is that its accuracy and precision be as close as possible to the theoretical limit. The fitting scheme derived for experiments with pulsed lasers is compared to well-established schemes in the fitting of power and rational functions. The precision is found to be as much as three times better than for simple least-squares fitting. Our scheme also suppresses the bias on the estimated model parameters that other methods may exhibit if they are applied in an uncritical fashion. We focus on experiments in nonlinear spectroscopy, but the fitting scheme derived is applicable in many scientific disciplines.
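
    To illustrate the general recipe (measurement model first, then the estimator), a minimal maximum-likelihood sketch for a signal with noise proportional to the laser-driven signal level; the power-law model, noise model and all numbers are assumptions of this sketch, not the paper's scheme:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        x = rng.uniform(0.5, 2.0, 200)                 # fluctuating pulse energies
        y = 3.0 * x**2 * rng.normal(1.0, 0.1, x.size)  # signal, multiplicative noise

        def nll(theta):
            """Negative log-likelihood for y = a * x**b with Gaussian noise whose
            std dev scales with the signal (a simple measurement model)."""
            a, b, s = theta
            if a <= 0 or s <= 0:
                return np.inf                          # keep parameters physical
            mu = a * x**b
            sigma = s * mu
            return np.sum(np.log(sigma) + 0.5 * ((y - mu) / sigma) ** 2)

        fit = minimize(nll, x0=[1.0, 1.0, 0.2], method="Nelder-Mead")
        print(fit.x)   # estimates of a, b and the relative noise level s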

  4. a Statistical Analysis on the System Performance of a Bluetooth Low Energy Indoor Positioning System in a 3d Environment

    NASA Astrophysics Data System (ADS)

    Haagmans, G. G.; Verhagen, S.; Voûte, R. L.; Verbree, E.

    2017-09-01

    Since GPS tends to fail for indoor positioning purposes, alternative methods such as indoor positioning systems (IPS) based on Bluetooth low energy (BLE) are developing rapidly. Generally, IPS are deployed in environments filled with obstacles such as furniture, walls, people and electronics that influence signal propagation. The major factor influencing system performance, and hence optimal positioning results, is the geometry of the beacons. The geometry is limited by the infrastructure that can be deployed (number of beacons, base stations and tags), which leads to the following challenge: given a limited number of beacons, where should they be placed in a specified indoor environment such that the geometry contributes to optimal positioning results? This paper proposes a statistical model that is able to select the optimal configuration satisfying the user requirements in terms of precision. The model requires the definition of a chosen 3D space (in our case 7 × 10 × 6 m), the number of beacons, possible user tag locations and a performance threshold (e.g. required precision). For any given set of beacon and receiver locations, the precision and the internal and external reliability can be determined beforehand. As validation, the modelled precision has been compared with observed precision results; the measurements were performed with a BlooLoc IPS at a chosen set of user tag locations for a given geometric configuration. Eventually, the model is able to select the optimal geometric configuration out of millions of possible configurations based on a performance threshold (e.g. required precision).
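
    A minimal sketch of how beacon geometry maps to precision in a linearized range-based fix (treating BLE-derived ranges as the observable, which is a simplifying assumption): the position covariance is sigma^2 (A^T A)^-1, with A the matrix of unit line-of-sight vectors. The room dimensions follow the abstract; the beacon layout is hypothetical:

        import numpy as np

        def position_covariance(beacons, tag, sigma=1.0):
            """Covariance of a linearized range-based position fix.
            beacons: (n, 3) beacon coordinates; tag: (3,) candidate tag position;
            sigma: assumed std dev of the range measurements (same units)."""
            diff = beacons - tag
            A = diff / np.linalg.norm(diff, axis=1, keepdims=True)  # unit vectors
            return sigma**2 * np.linalg.inv(A.T @ A)

        # Four beacons in a 7 x 10 x 6 m room, tag near the floor centre.
        beacons = np.array([[0, 0, 6], [7, 0, 6], [0, 10, 6], [7, 10, 3]], float)
        Q = position_covariance(beacons, np.array([3.5, 5.0, 1.5]))
        print(np.sqrt(np.diag(Q)))   # per-axis precision; compare across layouts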

  5. High precision in protein contact prediction using fully convolutional neural networks and minimal sequence features.

    PubMed

    Jones, David T; Kandathil, Shaun M

    2018-04-26

    In addition to substitution frequency data from protein sequence alignments, many state-of-the-art methods for contact prediction rely on additional sources of information, or features, of protein sequences in order to predict residue-residue contacts, such as solvent accessibility, predicted secondary structure, and scores from other contact prediction methods. It is unclear how much of this information is needed to achieve state-of-the-art results. Here, we show that using deep neural network models, simple alignment statistics contain sufficient information to achieve state-of-the-art precision. Our prediction method, DeepCov, uses fully convolutional neural networks operating on amino-acid pair frequency or covariance data derived directly from sequence alignments, without using global statistical methods such as sparse inverse covariance or pseudolikelihood estimation. Comparisons against CCMpred and MetaPSICOV2 show that using pairwise covariance data calculated from raw alignments as input allows us to match or exceed the performance of both of these methods. Almost all of the achieved precision is obtained when considering relatively local windows (around 15 residues) around any member of a given residue pairing; larger window sizes have comparable performance. Assessment on a set of shallow sequence alignments (fewer than 160 effective sequences) indicates that the new method is substantially more precise than CCMpred and MetaPSICOV2 in this regime, suggesting that improved precision is attainable on smaller sequence families. Overall, the performance of DeepCov is competitive with the state of the art, and our results demonstrate that global models, which employ features from all parts of the input alignment when predicting individual contacts, are not strictly needed in order to attain precise contact predictions. DeepCov is freely available at https://github.com/psipred/DeepCov. d.t.jones@ucl.ac.uk.
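
    A minimal sketch of the raw input feature DeepCov-style models consume: the covariance between two alignment columns, i.e. joint pair frequencies minus the product of single-column frequencies. The alphabet convention (20 amino acids plus gap) and the toy alignment are assumptions of this sketch:

        import numpy as np

        AA = "ACDEFGHIKLMNPQRSTVWY-"                 # 20 amino acids plus gap
        IDX = {a: i for i, a in enumerate(AA)}

        def pair_covariance(alignment, i, j):
            """21x21 covariance between columns i and j of an alignment
            (a list of equal-length sequences)."""
            n = len(alignment)
            fi, fj = np.zeros(21), np.zeros(21)
            fij = np.zeros((21, 21))
            for seq in alignment:
                a, b = IDX[seq[i]], IDX[seq[j]]
                fi[a] += 1.0 / n
                fj[b] += 1.0 / n
                fij[a, b] += 1.0 / n
            return fij - np.outer(fi, fj)

        aln = ["ACDE", "ACDF", "GCHE", "ACDE"]       # toy alignment
        print(pair_covariance(aln, 0, 3).round(3))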

  6. Gene expression during blow fly development: improving the precision of age estimates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2011-01-01

    Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies. © 2010 American Academy of Forensic Sciences.

  7. Quantifying precision and availability of location memory in everyday pictures and some implications for picture database design.

    PubMed

    Lansdale, Mark W; Oliff, Lynda; Baguley, Thom S

    2005-06-01

    The authors investigated whether memory for object locations in pictures could be exploited to address known difficulties of designing query languages for picture databases. M. W. Lansdale's (1998) model of location memory was adapted to 4 experiments observing memory for everyday pictures. These experiments showed that location memory is quantified by 2 parameters: a probability that memory is available and a measure of its precision. Availability is determined by controlled attentional processes, whereas precision is mostly governed by picture composition beyond the viewer's control. Additionally, participants' confidence judgments were good predictors of availability but were insensitive to precision. This research suggests that databases using location memory are feasible. The implications of these findings for database design and for further research and development are discussed. (c) 2005 APA

  8. Statistical inference for the within-device precision of quantitative measurements in assay validation.

    PubMed

    Liu, Jen-Pei; Lu, Li-Tien; Liao, C T

    2009-09-01

    Intermediate precision is one of the most important characteristics for evaluation of precision in assay validation. The current methods for evaluation of within-device precision recommended by the Clinical Laboratory Standard Institute (CLSI) guideline EP5-A2 are based on the point estimator. On the other hand, in addition to point estimators, confidence intervals can provide a range for the within-device precision with a probability statement. Therefore, we suggest a confidence interval approach for assessment of the within-device precision. Furthermore, under the two-stage nested random-effects model recommended by the approved CLSI guideline EP5-A2, in addition to the current Satterthwaite's approximation and the modified large sample (MLS) methods, we apply the technique of generalized pivotal quantities (GPQ) to derive the confidence interval for the within-device precision. The data from the approved CLSI guideline EP5-A2 illustrate the applications of the confidence interval approach and comparison of results between the three methods. Results of a simulation study on the coverage probability and expected length of the three methods are reported. The proposed method of the GPQ-based confidence intervals is also extended to consider the between-laboratories variation for precision assessment.
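
    A minimal sketch of the GPQ idea under a one-way random-effects simplification of the EP5-A2 nested design (the paper's two-stage model has an extra stratum): simulate pivotal quantities for the variance components from chi-square draws and read off percentiles. The mean squares and design sizes below are hypothetical:

        import numpy as np

        def gpq_within_device_ci(msb, msw, k, n, alpha=0.05, nsim=100_000, seed=0):
            """GPQ-based CI for the within-device variance (between-run plus
            within-run components): k runs, n replicates per run; msb and msw
            are the observed between- and within-run mean squares."""
            rng = np.random.default_rng(seed)
            dfb, dfw = k - 1, k * (n - 1)
            r_w = dfw * msw / rng.chisquare(dfw, nsim)   # pivot for sigma_w^2
            r_b = np.maximum(0.0,
                             (dfb * msb / rng.chisquare(dfb, nsim) - r_w) / n)
            return np.quantile(r_b + r_w, [alpha / 2, 1 - alpha / 2])

        lo, hi = gpq_within_device_ci(msb=2.8, msw=1.1, k=20, n=2)
        print(np.sqrt([lo, hi]))   # 95% CI for the within-device SD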

  9. Simulation of Thermal Behavior in High-Precision Measurement Instruments

    NASA Astrophysics Data System (ADS)

    Weis, Hanna Sophie; Augustin, Silke

    2008-06-01

    In this paper, a way to modularize complex finite-element models is described. The modularization is done with the temperature fields that appear in high-precision measurement instruments, where temperature adversely affects the achievable measurement uncertainty. To correct for this uncertainty, the temperature must be known at every point, which cannot be achieved just by measuring temperatures at specific locations; a numerical treatment is therefore necessary. As the system of interest is very complex, modularization is unavoidable to obtain good numerical results.

  10. Precise measurement of the neutron magnetic form factor G_M^n in the few-GeV^2 region.

    PubMed

    Lachniet, J; Afanasev, A; Arenhövel, H; Brooks, W K; Gilfoyle, G P; Higinbotham, D; Jeschonnek, S; Quinn, B; Vineyard, M F; Adams, G; Adhikari, K P; Amaryan, M J; Anghinolfi, M; Asavapibhop, B; Asryan, G; Avakian, H; Bagdasaryan, H; Baillie, N; Ball, J P; Baltzell, N A; Barrow, S; Batourine, V; Battaglieri, M; Beard, K; Bedlinskiy, I; Bektasoglu, M; Bellis, M; Benmouna, N; Berman, B L; Biselli, A S; Bonner, B E; Bookwalter, C; Bouchigny, S; Boiarinov, S; Bradford, R; Branford, D; Briscoe, W J; Bültmann, S; Burkert, V D; Calarco, J R; Careccia, S L; Carman, D S; Casey, L; Cheng, L; Cole, P L; Coleman, A; Collins, P; Cords, D; Corvisiero, P; Crabb, D; Crede, V; Cummings, J P; Dale, D; Daniel, A; Dashyan, N; De Masi, R; De Vita, R; De Sanctis, E; Degtyarenko, P V; Denizli, H; Dennis, L; Deur, A; Dhamija, S; Dharmawardane, K V; Dhuga, K S; Dickson, R; Djalali, C; Dodge, G E; Doughty, D; Dragovitsch, P; Dugger, M; Dytman, S; Dzyubak, O P; Egiyan, H; Egiyan, K S; El Fassi, L; Elouadrhiri, L; Empl, A; Eugenio, P; Fatemi, R; Fedotov, G; Fersch, R; Feuerbach, R J; Forest, T A; Fradi, A; Gabrielyan, M Y; Garçon, M; Gavalian, G; Gevorgyan, N; Giovanetti, K L; Girod, F X; Goetz, J T; Gohn, W; Golovatch, E; Gothe, R W; Graham, L; Griffioen, K A; Guidal, M; Guillo, M; Guler, N; Guo, L; Gyurjyan, V; Hadjidakis, C; Hafidi, K; Hakobyan, H; Hanretty, C; Hardie, J; Hassall, N; Heddle, D; Hersman, F W; Hicks, K; Hleiqawi, I; Holtrop, M; Hu, J; Huertas, M; Hyde-Wright, C E; Ilieva, Y; Ireland, D G; Ishkhanov, B S; Isupov, E L; Ito, M M; Jenkins, D; Jo, H S; Johnstone, J R; Joo, K; Juengst, H G; Kageya, T; Kalantarians, N; Keller, D; Kellie, J D; Khandaker, M; Khetarpal, P; Kim, K Y; Kim, K; Kim, W; Klein, A; Klein, F J; Klusman, M; Konczykowski, P; Kossov, M; Kramer, L H; Kubarovsky, V; Kuhn, J; Kuhn, S E; Kuleshov, S V; Kuznetsov, V; Laget, J M; Langheinrich, J; Lawrence, D; Lima, A C S; Livingston, K; Lowry, M; Lu, H Y; Lukashin, K; Maccormick, M; Malace, S; Manak, J J; Markov, N; Mattione, P; McAleer, S; McCracken, M E; McKinnon, B; McNabb, J W C; Mecking, B A; Mestayer, M D; Meyer, C A; Mibe, T; Mikhailov, K; Mineeva, T; Minehart, R; Mirazita, M; Miskimen, R; Mokeev, V; Moreno, B; Moriya, K; Morrow, S A; Moteabbed, M; Mueller, J; Munevar, E; Mutchler, G S; Nadel-Turonski, P; Nasseripour, R; Niccolai, S; Niculescu, G; Niculescu, I; Niczyporuk, B B; Niroula, M R; Niyazov, R A; Nozar, M; O'Rielly, G V; Osipenko, M; Ostrovidov, A I; Park, K; Park, S; Pasyuk, E; Paterson, C; Pereira, S Anefalos; Philips, S A; Pierce, J; Pivnyuk, N; Pocanic, D; Pogorelko, O; Polli, E; Popa, I; Pozdniakov, S; Preedom, B M; Price, J W; Prok, Y; Protopopescu, D; Qin, L M; Raue, B A; Riccardi, G; Ricco, G; Ripani, M; Ritchie, B G; Rosner, G; Rossi, P; Rowntree, D; Rubin, P D; Sabatié, F; Saini, M S; Salamanca, J; Salgado, C; Sandorfi, A; Santoro, J P; Sapunenko, V; Schott, D; Schumacher, R A; Serov, V S; Sharabian, Y G; Sharov, D; Shaw, J; Shvedunov, N V; Skabelin, A V; Smith, E S; Smith, L C; Sober, D I; Sokhan, D; Starostin, A; Stavinsky, A; Stepanyan, S; Stepanyan, S S; Stokes, B E; Stoler, P; Stopani, K A; Strakovsky, I I; Strauch, S; Suleiman, R; Taiuti, M; Taylor, S; Tedeschi, D J; Thompson, R; Tkabladze, A; Tkachenko, S; Ungaro, M; Vlassov, A V; Watts, D P; Wei, X; Weinstein, L B; Weygand, D P; Williams, M; Wolin, E; Wood, M H; Yegneswaran, A; Yun, J; Yurov, M; Zana, L; Zhang, J; Zhao, B; Zhao, Z W

    2009-05-15

    The neutron elastic magnetic form factor was extracted from quasielastic electron scattering on deuterium over the range Q^2 = 1.0-4.8 GeV^2 with the CLAS detector at Jefferson Lab. High precision was achieved with a ratio technique and a simultaneous in situ calibration of the neutron detection efficiency. Neutrons were detected with electromagnetic calorimeters and time-of-flight scintillators at two beam energies. The dipole parametrization gives a good description of the data.

  11. Novel spectrophotometric methods for simultaneous determination of timolol and dorzolamide in their binary mixture.

    PubMed

    Lotfy, Hayam Mahmoud; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom

    2014-05-21

    Two smart and novel spectrophotometric methods, namely absorbance subtraction (AS) and amplitude modulation (AM), were developed and validated for the determination of a binary mixture of timolol maleate (TIM) and dorzolamide hydrochloride (DOR) in the presence of benzalkonium chloride without prior separation, using a unified regression equation. Additionally, simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for simultaneous determination of the binary mixture, namely simultaneous ratio subtraction (SRS), ratio difference (RD), ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), the constant multiplication method (CM) and mean centering of ratio spectra (MCR). The proposed spectrophotometric procedures do not require any separation steps. Accuracy, precision and linearity ranges of the proposed methods were determined, and specificity was assessed by analyzing synthetic mixtures of both drugs. The methods were applied to the pharmaceutical formulation, and the results obtained were statistically compared to those of a reported spectrophotometric method. The statistical comparison showed no significant difference between the proposed methods and the reported one regarding either accuracy or precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Combining censored and uncensored data in a U-statistic: design and sample size implications for cell therapy research.

    PubMed

    Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colon O

    2011-01-01

    The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. Finally, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, a change that requires a less precise substitution in this subset of participants. A score function that is based on the U-statistic can address these issues of 1) intercurrent SCEs and 2) response variable ascertainments that use different measurements of different precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
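
    A minimal sketch in the spirit of such a hierarchical pairwise score (not the authors' exact statistic): each treated-control pair is compared on survival first, falling back to the continuous endpoint only when survival is indeterminate. The ordering convention (larger change is better) and all data layouts are assumptions:

        import numpy as np

        def pair_score(death, censor, delta, i, j):
            """+1 if subject i fares better than j, -1 if worse, 0 if indeterminate.
            death: death time (np.inf if none observed); censor: end of follow-up;
            delta: continuous endpoint change (np.nan when not ascertained)."""
            if death[j] < min(death[i], censor[i]):
                return 1          # j died while i was known to be alive
            if death[i] < min(death[j], censor[j]):
                return -1
            if np.isfinite(delta[i]) and np.isfinite(delta[j]):
                return int(np.sign(delta[i] - delta[j]))   # larger change is better
            return 0              # survival tie and no comparable measurements

        def u_statistic(death, censor, delta, treated):
            """Average pairwise score of treated vs. control (treated: bool array)."""
            t, c = np.flatnonzero(treated), np.flatnonzero(~treated)
            return np.mean([pair_score(death, censor, delta, i, j)
                            for i in t for j in c])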

  13. A calibration method based on virtual large planar target for cameras with large FOV

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu

    2018-02-01

    In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with a large FOV, using a small target seriously reduces calibration precision; however, a large target is difficult to manufacture, carry and employ. To solve this problem, a calibration method based on a virtual large planar target (VLPT), virtually constructed from multiple small targets (STs), is proposed for cameras with large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Second, the VLPT of each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, the intrinsic and extrinsic parameters of the camera are calculated using the VLPTs. Experimental results show that the proposed method not only achieves calibration precision similar to that obtained with a large target, but also has good stability over the whole measurement area. Thus, the difficulties of accurately calibrating cameras with large FOV can be effectively tackled by the proposed method, which also offers good operability.

  14. Quantum preservation of the measurements precision using ultra-short strong pulses in exact analytical solution

    NASA Astrophysics Data System (ADS)

    Berrada, K.; Eleuch, H.

    2017-09-01

    Various schemes have been proposed to improve parameter-estimation precision. In the present work, we suggest an alternative method to preserve the estimation precision by considering a model that closely describes a realistic experimental scenario. We explore this active way to control and enhance measurement precision for a two-level quantum system interacting with a classical electromagnetic field of ultra-short strong pulses, using an exact analytical solution, i.e. beyond the rotating-wave approximation. In particular, we investigate the variation of the precision with a few-cycle pulse and a smooth phase jump over a finite time interval. We show that, by acting on the shape of the phase transient and other parameters of the considered system, the amount of information may be increased and made to decay more slowly at long times. These features make two-level systems driven by ultra-short, off-resonant pulses with gradually changing phase good candidates for implementing quantum computation and coherent information processing schemes.

  15. A Dynamic Precision Evaluation Method for the Star Sensor in the Stellar-Inertial Navigation System.

    PubMed

    Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang

    2017-06-28

    Integrating the advantages of an INS (inertial navigation system) and a star sensor, stellar-inertial navigation systems have been used in a wide variety of applications. The star sensor is a high-precision attitude measurement instrument; therefore, determining how to validate its accuracy is critical to guaranteeing its practical precision. Dynamic precision evaluation of a star sensor is more difficult than static evaluation because of dynamic reference values and other effects. This paper proposes a dynamic precision verification method for the star sensor, aided by an inertial navigation device, to realize real-time attitude accuracy measurement. Based on the gold-standard reference generated by a star simulator, the altitude and azimuth angle errors of the star sensor are calculated as the evaluation criteria. To diminish the impact of factors such as sensor and device drift, the innovative aspect of this method is to employ static accuracy for comparison. If the dynamic results are as good as the static results, which have accuracy comparable to the single star sensor's precision, the practical precision of the star sensor is sufficiently high to meet the requirements of the system specification. The experiments demonstrate the feasibility and effectiveness of the proposed method.

  16. A precise measurement of the [Formula: see text] meson oscillation frequency.

    PubMed

    Aaij, R; Abellán Beteta, C; Adeva, B; Adinolfi, M; Affolder, A; Ajaltouni, Z; Akar, S; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Anderson, J; Andreassi, G; Andreotti, M; Andrews, J E; Appleby, R B; Aquines Gutierrez, O; Archilli, F; d'Argent, P; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Bachmann, S; Back, J J; Badalov, A; Baesso, C; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Batozskaya, V; Battista, V; Bay, A; Beaucourt, L; Beddow, J; Bedeschi, F; Bediaga, I; Bel, L J; Bellee, V; Belloli, N; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Benton, J; Berezhnoy, A; Bernet, R; Bertolin, A; Bettler, M-O; van Beuzekom, M; Bien, A; Bifani, S; Billoir, P; Bird, T; Birnkraut, A; Bizzeti, A; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borghi, S; Borsato, M; Bowcock, T J V; Bowen, E; Bozzi, C; Braun, S; Britsch, M; Britton, T; Brodzicka, J; Brook, N H; Buchanan, E; Bursche, A; Buytaert, J; Cadeddu, S; Calabrese, R; Calvi, M; Calvo Gomez, M; Campana, P; Campora Perez, D; Capriotti, L; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carniti, P; Carson, L; Carvalho Akiba, K; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cauet, Ch; Cavallero, G; Cenci, R; Charles, M; Charpentier, Ph; Chefdeville, M; Chen, S; Cheung, S-F; Chiapolini, N; Chrzaszcz, M; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coco, V; Cogan, J; Cogneras, E; Cogoni, V; Cojocariu, L; Collazuol, G; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombes, M; Coquereau, S; Corti, G; Corvo, M; Couturier, B; Cowan, G A; Craik, D C; Crocombe, A; Cruz Torres, M; Cunliffe, S; Currie, R; D'Ambrosio, C; Dall'Occo, E; Dalseno, J; David, P N Y; Davis, A; De Aguiar Francisco, O; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Simone, P; Dean, C-T; Decamp, D; Deckenhoff, M; Del Buono, L; Déléage, N; Demmer, M; Derkach, D; Deschamps, O; Dettori, F; Dey, B; Di Canto, A; Di Ruscio, F; Dijkstra, H; Donleavy, S; Dordei, F; Dorigo, M; Dosil Suárez, A; Dossett, D; Dovbnya, A; Dreimanis, K; Dufour, L; Dujany, G; Dupertuis, F; Durante, P; Dzhelyadin, R; Dziurda, A; Dzyuba, A; Easo, S; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; El Rifai, I; Elsasser, Ch; Ely, S; Esen, S; Evans, H M; Evans, T; Falabella, A; Färber, C; Farley, N; Farry, S; Fay, R; Ferguson, D; Fernandez Albor, V; Ferrari, F; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fohl, K; Fol, P; Fontana, M; Fontanelli, F; C Forshaw, D; Forty, R; Frank, M; Frei, C; Frosini, M; Fu, J; Furfaro, E; Gallas Torreira, A; Galli, D; Gallorini, S; Gambetta, S; Gandelman, M; Gandini, P; Gao, Y; García Pardiñas, J; Garra Tico, J; Garrido, L; Gascon, D; Gaspar, C; Gauld, R; Gavardi, L; Gazzoni, G; Gerick, D; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianì, S; Gibson, V; Girard, O G; Giubega, L; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gotti, C; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graverini, E; Graziani, G; Grecu, A; Greening, E; Gregson, S; Griffith, P; Grillo, L; Grünberg, O; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Hadavizadeh, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hall, S; Hamilton, B; Han, X; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; He, J; 
Head, T; Heijne, V; Heister, A; Hennessy, K; Henrard, P; Henry, L; Hernando Morata, J A; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hoballah, M; Hombach, C; Hulsbergen, W; Humair, T; Hussain, N; Hutchcroft, D; Hynds, D; Idzik, M; Ilten, P; Jacobsson, R; Jaeger, A; Jalocha, J; Jans, E; Jawahery, A; Jing, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kandybei, S; Kanso, W; Karacson, M; Karbach, T M; Karodia, S; Kecke, M; Kelsey, M; Kenyon, I R; Kenzie, M; Ketel, T; Khanji, B; Khurewathanakul, C; Kirn, T; Klaver, S; Klimaszewski, K; Kochebina, O; Kolpin, M; Komarov, I; Koopman, R F; Koppenburg, P; Kozeiha, M; Kravchuk, L; Kreplin, K; Kreps, M; Krocker, G; Krokovny, P; Kruse, F; Krzemien, W; Kucewicz, W; Kucharczyk, M; Kudryavtsev, V; K Kuonen, A; Kurek, K; Kvaratskheliya, T; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lanfranchi, G; Langenbruch, C; Langhans, B; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Lees, J-P; Lefèvre, R; Leflat, A; Lefrançois, J; Lemos Cid, E; Leroy, O; Lesiak, T; Leverington, B; Li, Y; Likhomanenko, T; Liles, M; Lindner, R; Linn, C; Lionetto, F; Liu, B; Liu, X; Loh, D; Longstaff, I; Lopes, J H; Lucchesi, D; Lucio Martinez, M; Luo, H; Lupato, A; Luppi, E; Lupton, O; Lusardi, N; Lusiani, A; Machefert, F; Maciuc, F; Maev, O; Maguire, K; Malde, S; Malinin, A; Manca, G; Mancinelli, G; Manning, P; Mapelli, A; Maratas, J; Marchand, J F; Marconi, U; Marin Benito, C; Marino, P; Marks, J; Martellotti, G; Martin, M; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massafferri, A; Matev, R; Mathad, A; Mathe, Z; Matteuzzi, C; Mauri, A; Maurin, B; Mazurov, A; McCann, M; McCarthy, J; McNab, A; McNulty, R; Meadows, B; Meier, F; Meissner, M; Melnychuk, D; Merk, M; Michielin, E; Milanes, D A; Minard, M-N; Mitzel, D S; Molina Rodriguez, J; Monroy, I A; Monteil, S; Morandin, M; Morawski, P; Mordà, A; Morello, M J; Moron, J; Morris, A B; Mountain, R; Muheim, F; Müller, D; Müller, J; Müller, K; Müller, V; Mussini, M; Muster, B; Naik, P; Nakada, T; Nandakumar, R; Nandi, A; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, A D; Nguyen, T D; Nguyen-Mau, C; Niess, V; Niet, R; Nikitin, N; Nikodem, T; Novoselov, A; O'Hanlon, D P; Oblakowska-Mucha, A; Obraztsov, V; Ogilvy, S; Okhrimenko, O; Oldeman, R; Onderwater, C J G; Osorio Rodrigues, B; Otalora Goicochea, J M; Otto, A; Owen, P; Oyanguren, A; Palano, A; Palombo, F; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Pappalardo, L L; Pappenheimer, C; Parkes, C; Passaleva, G; Patel, G D; Patel, M; Patrignani, C; Pearce, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perret, P; Pescatore, L; Petridis, K; Petrolini, A; Petruzzo, M; Picatoste Olloqui, E; Pietrzyk, B; Pilař, T; Pinci, D; Pistone, A; Piucci, A; Playfer, S; Plo Casasus, M; Poikela, T; Polci, F; Poluektov, A; Polyakov, I; Polycarpo, E; Popov, A; Popov, D; Popovici, B; Potterat, C; Price, E; Price, J D; Prisciandaro, J; Pritchard, A; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, W; Quagliani, R; Rachwal, B; Rademacker, J H; Rama, M; Rangel, M S; Raniuk, I; Rauschmayr, N; Raven, G; Redi, F; Reichert, S; Reid, M M; Dos Reis, A C; Ricciardi, S; Richards, S; Rihl, M; Rinnert, K; Rives Molina, V; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Lopez, J A; Rodriguez Perez, P; Roiser, S; Romanovsky, V; Romero Vidal, A; W Ronayne, J; Rotondo, M; Rouvinet, J; Ruf, T; Ruiz Valls, P; Saborido Silva, J J; Sagidova, N; Sail, P; Saitta, B; Salustino Guimaraes, V; Sanchez Mayordomo, C; 
Sanmartin Sedes, B; Santacesaria, R; Santamarina Rios, C; Santimaria, M; Santovetti, E; Sarti, A; Satriano, C; Satta, A; Saunders, D M; Savrina, D; Schael, S; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmelzer, T; Schmidt, B; Schneider, O; Schopper, A; Schubiger, M; Schune, M-H; Schwemmer, R; Sciascia, B; Sciubba, A; Semennikov, A; Sergi, A; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, V; Shires, A; Siddi, B G; Silva Coutinho, R; Silva de Oliveira, L; Simi, G; Sirendi, M; Skidmore, N; Skwarnicki, T; Smith, E; Smith, E; Smith, I T; Smith, J; Smith, M; Snoek, H; Sokoloff, M D; Soler, F J P; Soomro, F; Souza, D; Souza De Paula, B; Spaan, B; Spradlin, P; Sridharan, S; Stagni, F; Stahl, M; Stahl, S; Stefkova, S; Steinkamp, O; Stenyakin, O; Stevenson, S; Stoica, S; Stone, S; Storaci, B; Stracka, S; Straticiuc, M; Straumann, U; Sun, L; Sutcliffe, W; Swientek, K; Swientek, S; Syropoulos, V; Szczekowski, M; Szczypka, P; Szumlak, T; T'Jampens, S; Tayduganov, A; Tekampe, T; Teklishyn, M; Tellarini, G; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Todd, J; Tolk, S; Tomassetti, L; Tonelli, D; Topp-Joergensen, S; Torr, N; Tournefier, E; Tourneur, S; Trabelsi, K; Tran, M T; Tresch, M; Trisovic, A; Tsaregorodtsev, A; Tsopelas, P; Tuning, N; Ukleja, A; Ustyuzhanin, A; Uwer, U; Vacca, C; Vagnoni, V; Valenti, G; Vallier, A; Vazquez Gomez, R; Vazquez Regueiro, P; Vázquez Sierra, C; Vecchi, S; van Veghel, M; Velthuis, J J; Veltri, M; Veneziano, G; Vesterinen, M; Viaud, B; Vieira, D; Vieites Diaz, M; Vilasis-Cardona, X; Vollhardt, A; Volyanskyy, D; Voong, D; Vorobyev, A; Vorobyev, V; Voß, C; de Vries, J A; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wandernoth, S; Wang, J; Ward, D R; Watson, N K; Websdale, D; Weiden, A; Whitehead, M; Wilkinson, G; Wilkinson, M; Williams, M; Williams, M P; Williams, M; Williams, T; Wilson, F F; Wimberley, J; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wright, S; Wyllie, K; Xie, Y; Xu, Z; Yang, Z; Yu, J; Yuan, X; Yushchenko, O; Zangoli, M; Zavertyaev, M; Zhang, L; Zhang, Y; Zhelezov, A; Zhokhov, A; Zhong, L; Zhukov, V; Zucchelli, S

    2016-01-01

    The oscillation frequency, [Formula: see text], of [Formula: see text] mesons is measured using semileptonic decays with a [Formula: see text] or [Formula: see text] meson in the final state. The data sample corresponds to 3.0[Formula: see text] of pp collisions, collected by the LHCb experiment at centre-of-mass energies [Formula: see text] = 7 and 8[Formula: see text]. A combination of the two decay modes gives [Formula: see text], where the first uncertainty is statistical and the second is systematic. This is the most precise single measurement of this parameter. It is consistent with the current world average and has similar precision.

  17. Biomarker development in the precision medicine era: lung cancer as a case study.

    PubMed

    Vargas, Ashley J; Harris, Curtis C

    2016-08-01

    Precision medicine relies on validated biomarkers with which to better classify patients by their probable disease risk, prognosis and/or response to treatment. Although affordable 'omics'-based technology has enabled faster identification of putative biomarkers, the validation of biomarkers is still stymied by low statistical power and poor reproducibility of results. This Review summarizes the successes and challenges of using different types of molecule as biomarkers, using lung cancer as a key illustrative example. Efforts at the national level of several countries to tie molecular measurement of samples to patient data via electronic medical records are the future of precision medicine research.

  18. Modified Mostardi approach with ultra-high-molecular-weight polyethylene tape for total hip arthroplasty provides a good rate of union of osteotomized fragments.

    PubMed

    Kuroda, Yutaka; Akiyama, Haruhiko; Nankaku, Manabu; So, Kazutaka; Matsuda, Shuichi

    2015-07-01

    A lateral approach is common in total hip arthroplasty because of the good exposure it provides and its low complication rates. However, a drawback of the procedure is that the abductor mechanism is damaged when the tendinous insertion of the abductor muscle is split. Here, we describe a promising wafer technique using ultra-high-molecular-weight polyethylene (UHMWPE) tape for reattachment of the abductor mechanism. We retrospectively evaluated 120 consecutive primary total hip arthroplasties performed using a modified Mostardi approach, in which the trochanter was reattached using either a braided polyester suture (polyester suture group, n = 60) or UHMWPE tape (UHMWPE tape group, n = 60). The osteotomized fragment was reattached with bone-to-bone contact using 3-mm-wide tapes precisely tied with a double-loop sliding knot in conjunction with a cable gun tensioner. Abductor strength and the radiographic union rate were assessed postoperatively at 4 weeks and 6 months, respectively. A statistically significantly lower incidence of nonunion and cutout was observed in the UHMWPE group (0 and 5.0 %, respectively) compared with the polyester suture group (8.3 and 15 %, respectively). No between-group differences in abductor strength were observed either preoperatively or at 4 weeks postoperatively. In radiographically healed patients, abductor strength at 4 weeks post-surgery exceeded preoperative strength. The recovery rate of hip abductor strength was 109.9 ± 34.3 % in union patients and 92.9 ± 23.3 % in nonunion patients, a statistically significant difference. The mean Japanese Orthopedic Association hip scores improved from 48.6 to 86.8 in union patients and from 50.3 to 85.9 in nonunion patients at 1 year postoperatively; this difference was not significant. The modified Mostardi approach using UHMWPE tape can promote successful union of the osteotomized fragment.

  19. Development and Validation of a Job Exposure Matrix for Physical Risk Factors in Low Back Pain

    PubMed Central

    Solovieva, Svetlana; Pehkonen, Irmeli; Kausto, Johanna; Miranda, Helena; Shiri, Rahman; Kauppinen, Timo; Heliövaara, Markku; Burdorf, Alex; Husgafvel-Pursiainen, Kirsti; Viikari-Juntura, Eira

    2012-01-01

    Objectives: The aim was to construct and validate a gender-specific job exposure matrix (JEM) for physical exposures to be used in epidemiological studies of low back pain (LBP). Materials and Methods: We utilized two large Finnish population surveys, one to construct the JEM and another to test matrix validity. The exposure axis of the matrix included exposures relevant to LBP (heavy physical work, heavy lifting, awkward trunk posture and whole-body vibration) and exposures that increase the biomechanical load on the low back (arm elevation) or that, in combination with other known risk factors, could be related to LBP (kneeling or squatting). Job titles with similar work tasks and exposures were grouped. Exposure information was based on face-to-face interviews. Validity of the matrix was explored by comparing the JEM (group-based) binary measures with individual-based measures. The predictive validity of the matrix against LBP was evaluated by comparing the associations of the group-based (JEM) exposures with those of individual-based exposures. Results: The matrix includes 348 job titles, representing 81% of all Finnish job titles in the early 2000s. The specificity of the constructed matrix was good, especially in women. The validity measured with the kappa statistic ranged from good to poor, being fair for most exposures. In men, all group-based (JEM) exposures were statistically significantly associated with one-month prevalence of LBP. In women, four out of six group-based exposures showed an association with LBP. Conclusions: The gender-specific JEM for physical exposures showed relatively high specificity without compromising sensitivity. The matrix can therefore be considered a valid instrument for exposure assessment in large-scale epidemiological studies when more precise but more labour-intensive methods are not feasible. Although the matrix was based on Finnish data, we foresee that it could be applicable, with some modifications, in other countries with a similar level of technology. PMID:23152793
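
    Agreement between the group-based (JEM) and individual-based binary exposure assignments of the kind described here is conventionally quantified with Cohen's kappa; a minimal Python sketch with hypothetical exposure vectors:

        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        # Hypothetical binary exposure vectors for the same workers:
        # group-based assignment from the JEM vs. individual-based assessment.
        jem_exposed        = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])
        individual_exposed = np.array([1, 0, 0, 0, 1, 0, 1, 1, 0, 1])

        kappa = cohen_kappa_score(jem_exposed, individual_exposed)
        print(f"kappa = {kappa:.2f}")  # 0.60 here, conventionally 'good' agreement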

  20. Using Technology to Prompt Good Questions about Distributions in Statistics

    ERIC Educational Resources Information Center

    Nabbout-Cheiban, Marie; Fisher, Forest; Edwards, Michael Todd

    2017-01-01

    The Common Core State Standards for Mathematics envisions data analysis as a key component of K-grade 12 mathematics instruction with statistics introduced in the early grades. Nonetheless, deficiencies in statistical learning persist throughout elementary school and beyond. Too often, mathematics teachers lack the statistical knowledge for…

  1. a Band Selection Method for High Precision Registration of Hyperspectral Image

    NASA Astrophysics Data System (ADS)

    Yang, H.; Li, X.

    2018-04-01

    During the registration of hyperspectral images with high-spatial-resolution images, the large number of bands in a hyperspectral image makes it difficult to select bands with good registration performance, and poorly suited bands can reduce matching speed and accuracy. To solve this problem, an algorithm based on Cramér-Rao lower bound (CRLB) theory is proposed in this paper to select good matching bands. The algorithm applies CRLB theory to the study of registration accuracy and selects good matching bands by their CRLB parameters. Experiments show that the proposed algorithm can choose good matching bands and provide better data for the registration of hyperspectral images with high-spatial-resolution images.
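
    To illustrate the idea, the sketch below scores each band with the textbook CRLB for estimating a pure image translation under additive white Gaussian noise, where the summed squared intensity gradient sets the Fisher information; this is an assumption-laden stand-in, not the paper's exact CRLB parameters.

        import numpy as np

        def crlb_shift_variance(band, noise_sigma):
            # Fisher information for a pure translation under additive white
            # Gaussian noise: summed squared gradient / noise variance.
            gy, gx = np.gradient(band.astype(float))
            fisher = (gx**2 + gy**2).sum() / noise_sigma**2
            return 1.0 / fisher  # lower bound on the shift-estimate variance

        def select_bands(cube, noise_sigma, k=10):
            # cube: (n_bands, H, W); keep the k bands with the smallest CRLB,
            # i.e. those expected to register most precisely.
            bounds = [crlb_shift_variance(cube[i], noise_sigma)
                      for i in range(cube.shape[0])]
            return np.argsort(bounds)[:k]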

  2. Standard and goodness-of-fit parameter estimation methods for the three-parameter lognormal distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1982-01-01

    A class of goodness-of-fit estimators is found to provide a useful alternative in certain situations to the standard maximum likelihood method, which has some undesirable estimation characteristics for estimation from the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined and example data sets are analyzed.
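
    A minimal Python sketch of an estimator in this spirit: grid-search the threshold gamma that maximizes the Shapiro-Wilk W statistic of the log-shifted data, then read off the location and scale from the resulting normal fit. The grid construction and the simulated sample are illustrative assumptions, not the paper's exact procedure.

        import numpy as np
        from scipy import stats

        def fit_threeparam_lognormal(x, n_grid=400):
            # If x ~ 3-parameter lognormal with threshold gamma, then
            # log(x - gamma) is normal; pick gamma maximizing Shapiro-Wilk W.
            x = np.sort(np.asarray(x, dtype=float))
            span = x[-1] - x[0]
            # candidate thresholds strictly below the sample minimum
            gammas = x[0] - np.geomspace(1e-4 * span, span, n_grid)
            best = max(gammas, key=lambda g: stats.shapiro(np.log(x - g))[0])
            y = np.log(x - best)
            return best, y.mean(), y.std(ddof=1)  # gamma, mu, sigma

        rng = np.random.default_rng(1)
        sample = 10.0 + rng.lognormal(mean=1.0, sigma=0.5, size=200)
        print(fit_threeparam_lognormal(sample))  # gamma estimate near 10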

  3. Discerning some Tylenol brands using attenuated total reflection Fourier transform infrared data and multivariate analysis techniques.

    PubMed

    Msimanga, Huggins Z; Ollis, Robert J

    2010-06-01

    Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were used to classify acetaminophen-containing medicines using their attenuated total reflection Fourier transform infrared (ATR-FT-IR) spectra. Four formulations of Tylenol (Arthritis Pain Relief, Extra Strength Pain Relief, 8 Hour Pain Relief, and Extra Strength Pain Relief Rapid Release) along with 98% pure acetaminophen were selected for this study because of the similarity of their spectral features, with correlation coefficients ranging from 0.9857 to 0.9988. Before acquiring spectra for the predictor matrix, the effects on spectral precision of sample particle size (determined by sieve size opening), the force gauge of the ATR accessory, sample reloading, and between-tablet variation were examined. Spectra were baseline corrected and normalized to unity before multivariate analysis. Analysis of variance (ANOVA) was used to study spectral precision. The large particles (35 mesh) showed large variance between spectra, while the fine particles (120 mesh) showed good spectral precision based on the F-test. The force gauge setting did not significantly affect precision. Sample reloading using the fine particle size and a constant force gauge setting of 50 units also did not compromise precision. Based on these observations, data acquisition for the predictor matrix was carried out with the fine particles (sieve size opening of 120 mesh) at a constant force gauge setting of 50 units. After removing outliers, PCA successfully classified the five samples in the first and second components, accounting for 45.0% and 24.5% of the variance, respectively. The four-component PLS-DA model (R² = 0.925 and Q² = 0.906) gave good test-spectrum predictions, with an overall average of 0.961 ± 7.1% RSD versus the expected prediction of 1.0 for the 20 test spectra used.
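
    A minimal scikit-learn sketch of this chemometric pipeline; the spectra here are random stand-ins, and normalization to unit maximum is an assumption about the preprocessing. PLS-DA is implemented the usual way, by regressing one-hot class membership on the spectra and classifying by the largest predicted membership.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cross_decomposition import PLSRegression

        def pls_da(X, labels, n_components=4):
            # One-hot encode the classes and fit a PLS regression.
            classes = sorted(set(labels))
            Y = np.array([[float(lab == c) for c in classes] for lab in labels])
            model = PLSRegression(n_components=n_components).fit(X, Y)
            return model, classes

        # X: rows are baseline-corrected spectra (hypothetical random data).
        rng = np.random.default_rng(0)
        X = rng.random((50, 600))
        X = X / X.max(axis=1, keepdims=True)      # normalize each spectrum to unity
        labels = [f"brand{i % 5}" for i in range(50)]

        scores = PCA(n_components=2).fit_transform(X)  # PC1/PC2 class map
        model, classes = pls_da(X, labels)
        pred = [classes[i] for i in np.argmax(model.predict(X), axis=1)]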

  4. Collapsing lattice animals and lattice trees in two dimensions

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping; Grassberger, Peter

    2005-06-01

    We present high statistics simulations of weighted lattice bond animals and lattice trees on the square lattice, with fugacities for each non-bonded contact and for each bond between two neighbouring monomers. The simulations are performed using a newly developed sequential sampling method with resampling, very similar to the pruned-enriched Rosenbluth method (PERM) used for linear chain polymers. We determine with high precision the line of second-order transitions from an extended to a collapsed phase in the resulting two-dimensional phase diagram. This line includes critical bond percolation as a multicritical point, and we verify that this point divides the line into different universality classes. One of them corresponds to the collapse driven by contacts and includes the collapse of (weakly embeddable) trees. There is some evidence that the other is subdivided again into two parts with different universality classes. One of these (at the far side from collapsing trees) is bond driven and is represented by the Derrida-Herrmann model of animals having bonds only (no contacts). Between the critical percolation point and this bond-driven collapse seems to be an intermediate regime, whose other end point is a multicritical point P* where a transition line between two collapsed phases (one bond driven and the other contact driven) sparks off. This point P* seems to be attractive (in the renormalization group sense) from the side of the intermediate regime, so there are four universality classes on the transition line (collapsing trees, critical percolation, intermediate regime, and Derrida-Herrmann). We obtain very precise estimates for all critical exponents for collapsing trees. It is already harder to estimate the critical exponents for the intermediate regime. Finally, it is very difficult to obtain with our method good estimates of the critical parameters of the Derrida-Herrmann universality class. As regards the bond-driven to contact-driven transition in the collapsed phase, we have some evidence for its existence and rough location, but no precise estimates of critical exponents.

  5. Modified Distribution-Free Goodness-of-Fit Test Statistic.

    PubMed

    Chun, So Yeon; Browne, Michael W; Shapiro, Alexander

    2018-03-01

    Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.

  6. Computation of large-scale statistics in decaying isotropic turbulence

    NASA Technical Reports Server (NTRS)

    Chasnov, Jeffrey R.

    1993-01-01

    We have performed large-eddy simulations of decaying isotropic turbulence to test the prediction of self-similar decay of the energy spectrum and to compute the decay exponents of the kinetic energy. In general, good agreement between the simulation results and the assumption of self-similarity was obtained. However, the statistics of the simulations were insufficient to compute the value of gamma, which corrects the decay exponent when the spectrum follows a k^4 wave-number behavior near k = 0. To obtain good statistics, it was found necessary to average over a large ensemble of turbulent flows.

  7. Differences in results of analyses of concurrent and split stream-water samples collected and analyzed by the US Geological Survey and the Illinois Environmental Protection Agency, 1985-91

    USGS Publications Warehouse

    Melching, C.S.; Coupe, R.H.

    1995-01-01

    During water years 1985-91, the U.S. Geological Survey (USGS) and the Illinois Environmental Protection Agency (IEPA) cooperated in the collection and analysis of concurrent and split stream-water samples from selected sites in Illinois. Concurrent samples were collected independently by field personnel from each agency at the same time and sent to the IEPA laboratory, whereas the split samples were collected by USGS field personnel and divided into aliquots that were sent to each agency's laboratory for analysis. The water-quality data from these programs were examined by means of the Wilcoxon signed ranks test to identify statistically significant differences between results of the USGS and IEPA analyses. The data sets for constituents and properties identified by the Wilcoxon test as having significant differences were further examined by use of the paired t-test, mean relative percentage difference, and scattergrams to determine whether the differences were important. Of the 63 constituents and properties in the concurrent-sample analysis, differences in only 2 (pH and ammonia) were statistically significant and large enough to concern water-quality engineers and planners. Of the 27 constituents and properties in the split-sample analysis, differences in 9 (turbidity, dissolved potassium, ammonia, total phosphorus, dissolved aluminum, dissolved barium, dissolved iron, dissolved manganese, and dissolved nickel) were statistically significant and large enough to concern water-quality engineers and planners. The differences in concentration between pairs of concurrent samples were compared to the precision of the laboratory or field method used, and the differences between pairs of split samples were compared to the precision of the laboratory method used and the interlaboratory precision of measuring a given concentration or property. Consideration of method precision indicated that differences between concurrent samples were insignificant for all concentrations and properties except pH, and that differences between split samples were significant for all concentrations and properties. Consideration of interlaboratory precision indicated that the differences between the split samples were not unusually large. The results for the split samples illustrate the difficulty in obtaining comparable and accurate water-quality data.
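
    The paired comparisons described here map onto standard SciPy tests; a minimal sketch with hypothetical pH pairs, where the mean relative percentage difference is computed relative to the pair mean:

        import numpy as np
        from scipy import stats

        def compare_paired(a, b):
            # Wilcoxon signed ranks test for a significant paired difference,
            # then the paired t-test and mean relative percentage difference
            # as gauges of practical importance.
            w_stat, p_wilcoxon = stats.wilcoxon(a, b)
            t_stat, p_ttest = stats.ttest_rel(a, b)
            mrpd = np.mean(200.0 * (a - b) / (a + b))  # % of the pair mean
            return p_wilcoxon, p_ttest, mrpd

        usgs = np.array([7.2, 7.4, 7.1, 7.6, 7.3, 7.5])  # hypothetical pH pairs
        iepa = np.array([7.4, 7.5, 7.3, 7.8, 7.5, 7.6])
        print(compare_paired(usgs, iepa))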

  8. Simulation of proportional control of hydraulic actuator using digital hydraulic valves

    NASA Astrophysics Data System (ADS)

    Raghuraman, D. R. S.; Senthil Kumar, S.; Kalaiarasan, G.

    2017-11-01

    Fluid power systems using oil hydraulics in earth-moving and construction equipment have long used proportional and servo control valves to achieve precise and accurate position control backed by system performance. Such valves incorporate feedback control and exhibit good response, sensitivity and fine control of the actuators. Servo valves and proportional valves possess less hysteresis than on-off type valves, but when a servo valve spool gets stuck in one position, a high-frequency signal known as jitter must be applied to bring the spool back, whereas on-off type valves require less sophisticated technology to retract the spool. Hence on-off type valves are used in what is known as digital valve technology, which caters to precise control of slow-moving loads with fast switching times and with good flow and pressure control, mimicking the performance of an equivalent “proportional valve” or “servo valve”.

  9. A new method for stable lead isotope extraction from seawater.

    PubMed

    Zurbrick, Cheryl M; Gallon, Céline; Flegal, A Russell

    2013-10-24

    A new technique for stable lead (Pb) isotope extraction from seawater is established using Toyopearl AF-Chelate 650M® resin (Tosoh Bioscience LLC). This new method is advantageous because it is semi-automated and relatively fast; in addition it introduces a relatively low blank by minimizing the volume of chemicals used in the extraction. Subsequent analyses by HR ICP-MS have a good relative external precision (2σ) of 3.5‰ for ²⁰⁶Pb/²⁰⁷Pb, while analyses by MC-ICP-MS have a better relative external precision of 0.6‰. However, Pb sample concentrations limit MC-ICP-MS analyses to ²⁰⁶Pb, ²⁰⁷Pb, and ²⁰⁸Pb. The method was validated by processing the common Pb isotope reference material NIST SRM-981 and several GEOTRACES intercalibration samples, followed by analyses by HR ICP-MS, all of which showed good agreement with previously reported values. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Precision of guided scanning procedures for full-arch digital impressions in vivo.

    PubMed

    Zimmermann, Moritz; Koller, Christina; Rumetsch, Moritz; Ender, Andreas; Mehl, Albert

    2017-11-01

    System-specific scanning strategies have been shown to influence the accuracy of full-arch digital impressions. Special guided scanning procedures have been implemented for specific intraoral scanning systems with special regard to the digital orthodontic workflow. The aim of this study was to evaluate the precision of guided scanning procedures compared to conventional impression techniques in vivo. Two intraoral scanning systems with implemented full-arch guided scanning procedures (Cerec Omnicam Ortho; Ormco Lythos) were included along with one conventional impression technique with irreversible hydrocolloid material (alginate). Full-arch impressions were taken three times each from 5 participants (n = 15). Impressions were then compared within the test groups using a point-to-surface distance method after best-fit model matching (OraCheck). Precision was calculated as half the 10th-to-90th interpercentile distance of the deviations, (P90 − P10)/2, and statistical analysis with one-way repeated measures ANOVA and post hoc Bonferroni test was performed. The conventional impression technique with alginate showed the lowest precision for full-arch impressions, with 162.2 ± 71.3 µm. Both guided scanning procedures performed statistically significantly better than the conventional impression technique (p < 0.05). Mean values were 74.5 ± 39.2 µm for the Cerec Omnicam Ortho group and 91.4 ± 48.8 µm for the Ormco Lythos group. The in vivo precision of guided scanning procedures exceeds that of conventional impression techniques with the irreversible hydrocolloid material alginate. Guided scanning procedures may be highly promising for clinical applications, especially for digital orthodontic workflows.
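
    A minimal sketch of the quoted precision measure, half the 10th-to-90th percentile distance of the point-to-surface deviations (the deviations below are simulated, not study data):

        import numpy as np

        def precision_90_10(deviations_um):
            # Half the 10th-to-90th interpercentile distance of the signed
            # point-to-surface deviations between repeated impressions.
            q10, q90 = np.percentile(deviations_um, [10, 90])
            return (q90 - q10) / 2.0

        rng = np.random.default_rng(0)
        print(precision_90_10(rng.normal(0.0, 60.0, size=10000)))  # ~77 um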

  11. Precision matrix expansion - efficient use of numerical simulations in estimating errors on cosmological parameters

    NASA Astrophysics Data System (ADS)

    Friedrich, Oliver; Eifler, Tim

    2018-01-01

    Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A + B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case, finding a similar performance.
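
    A first-order NumPy sketch of the expansion idea: Psi ≈ Psi_model − Psi_model (C_hat − C_model) Psi_model, where C_hat is the noisy sample covariance from the simulations. The paper's handling of the C = A + B split and its higher-order and debiasing corrections are omitted here.

        import numpy as np

        def first_order_precision(C_model, sims):
            # Expand the inverse of C = C_model + dC to first order in dC,
            # with dC estimated as (sample covariance) - (model covariance).
            Psi_model = np.linalg.inv(C_model)
            C_hat = np.cov(sims, rowvar=False)   # sims: (n_sims, n_data)
            return Psi_model - Psi_model @ (C_hat - C_model) @ Psi_model

        # toy demo: 3-dimensional data vector, model covariance = identity
        rng = np.random.default_rng(0)
        C_model = np.eye(3)
        sims = rng.multivariate_normal(np.zeros(3), C_model, size=500)
        print(first_order_precision(C_model, sims))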

  12. Comparison of Four Search Engines and their efficacy With Emphasis on Literature Research in Addiction (Prevention and Treatment).

    PubMed

    Samadzadeh, Gholam Reza; Rigi, Tahereh; Ganjali, Ali Reza

    2013-01-01

    Surveying valuable and up-to-date information on the internet has become vital for researchers and scholars, because every day thousands, perhaps millions, of scientific works are published as digital resources, and researchers cannot ignore this great resource when searching for related documents that may not be found in any library. Given the variety of documents presented on the internet, search engines are among the most effective tools for finding information. The aim of this study was to evaluate three criteria (recall, precision and importance) for four search engines (PubMed, Science Direct, Google Scholar and the federated search of the Iranian National Medical Digital Library) in addiction research (prevention and treatment), in order to select the most effective search engine for literature research. This was a cross-sectional study in which four popular search engines in the medical sciences were evaluated. Keywords were selected using Medical Subject Headings (MeSH). The given keywords were entered into the search engines and the first 10 entries of each search were evaluated. Direct observation was used for data collection, and data were analyzed with descriptive statistics (number, percentage and mean) and inferential statistics (one-way analysis of variance (ANOVA) and post hoc Tukey tests) in SPSS 15 statistical software. P < 0.05 was considered statistically significant. The search engines performed differently with regard to the evaluated criteria: P was 0.004 for precision and 0.002 for importance, indicating significant differences among the search engines. PubMed, Science Direct and Google Scholar were the best in recall, precision and importance, respectively. As literature research is one of the most important stages of research, researchers, especially Substance-Related Disorders scholars, should use the search engines with the best recall, precision and importance in their subject field rather than depending on just one search engine.
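
    The reported analysis pattern (one-way ANOVA followed by post hoc Tukey) can be sketched with SciPy and statsmodels; the per-engine scores below are hypothetical:

        import numpy as np
        from scipy.stats import f_oneway
        from statsmodels.stats.multicomp import pairwise_tukeyhsd

        # Hypothetical precision scores of the first 10 hits per engine.
        pubmed  = np.array([8, 9, 7, 9, 8, 9, 8, 7, 9, 8])
        scidir  = np.array([9, 9, 8, 9, 9, 8, 9, 9, 8, 9])
        scholar = np.array([7, 6, 8, 7, 6, 7, 8, 7, 6, 7])

        print(f_oneway(pubmed, scidir, scholar))            # one-way ANOVA

        scores = np.concatenate([pubmed, scidir, scholar])
        groups = ["PubMed"] * 10 + ["ScienceDirect"] * 10 + ["GoogleScholar"] * 10
        print(pairwise_tukeyhsd(scores, groups, alpha=0.05))  # post hoc Tukey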

  14. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique.

    PubMed

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young

    2014-03-01

    This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique, by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated along the x, y, and z axes using a non-contact white light scanner. Accuracy was assessed using mean differences between the two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96 when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of the PUT dental models were acceptable for evaluating the performance of the oral scanner and subtractive RP technology. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.
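
    A minimal sketch of the Bland-Altman agreement computation used here (bias and 95% limits of agreement), on hypothetical paired linear measurements:

        import numpy as np

        def bland_altman(a, b):
            # Bias and 95% limits of agreement between two measurement methods.
            d = np.asarray(a) - np.asarray(b)
            bias = d.mean()
            half_width = 1.96 * d.std(ddof=1)
            return bias, bias - half_width, bias + half_width

        plaster = np.array([35.12, 28.40, 41.05, 33.78, 36.91])  # hypothetical mm
        put_rp  = np.array([35.30, 28.55, 41.20, 34.05, 37.02])
        print(bland_altman(plaster, put_rp))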

  15. Electrocardiograph-gated single photon emission computed tomography radionuclide angiography presents good interstudy reproducibility for the quantification of global systolic right ventricular function.

    PubMed

    Daou, Doumit; Coaguila, Carlos; Vilain, Didier

    2007-05-01

    Electrocardiograph-gated single photon emission computed tomography (SPECT) radionuclide angiography provides accurate measurement of right ventricular ejection fraction and end-diastolic and end-systolic volumes. In this study, we report the interstudy precision and reliability of SPECT radionuclide angiography for the measurement of global systolic right ventricular function using two three-dimensional volume-processing methods (SPECT-QBS and SPECT-35%). These were compared with equilibrium planar radionuclide angiography. Ten patients with chronic coronary artery disease who underwent two SPECT and two planar radionuclide angiography acquisitions were included. For right ventricular ejection fraction, end-diastolic volume and end-systolic volume, the interstudy precision and reliability were better with SPECT-35% than with SPECT-QBS. The sample sizes needed to demonstrate a change in right ventricular volumes or ejection fraction were lower with SPECT-35% than with SPECT-QBS. The interstudy precision and reliability of SPECT-35% and SPECT-QBS for the right ventricle were better than those of equilibrium planar radionuclide angiography, but poorer than those previously reported for the left ventricle with SPECT radionuclide angiography in the same population. SPECT-35% and SPECT-QBS present good interstudy precision and reliability for right ventricular function, with the results favouring the use of SPECT-35%. These results need to be confirmed in a larger population.

  16. Improving the power of clinical trials of rheumatoid arthritis by using data on continuous scales when analysing response rates: an application of the augmented binary method

    PubMed Central

    Jenkins, Martin

    2016-01-01

    Objective. In clinical trials of RA, it is common to assess effectiveness using end points based upon dichotomized continuous measures of disease activity, which classify patients as responders or non-responders. Although dichotomization generally loses statistical power, there are good clinical reasons to use these end points; for example, to allow for patients receiving rescue therapy to be assigned as non-responders. We adopt a statistical technique called the augmented binary method to make better use of the information provided by these continuous measures and account for how close patients were to being responders. Methods. We adapted the augmented binary method for use in RA clinical trials. We used a previously published randomized controlled trial (Oral SyK Inhibition in Rheumatoid Arthritis-1) to assess its performance in comparison to a standard method treating patients purely as responders or non-responders. The power and error rate were investigated by sampling from this study. Results. The augmented binary method reached similar conclusions to standard analysis methods but was able to estimate the difference in response rates to a higher degree of precision. Results suggested that CI widths for ACR responder end points could be reduced by at least 15%, which could equate to reducing the sample size of a study by 29% to achieve the same statistical power. For other end points, the gain was even higher. Type I error rates were not inflated. Conclusion. The augmented binary method shows considerable promise for RA trials, making more efficient use of patient data whilst still reporting outcomes in terms of recognized response end points. PMID:27338084

  17. Superallowed Fermi β-Decay Studies with SCEPTAR and the 8π Gamma-Ray Spectrometer

    NASA Astrophysics Data System (ADS)

    Koopmans, K. A.

    2005-04-01

    The 8π Gamma-Ray Spectrometer, operating at TRIUMF in Vancouver, Canada, is a high-precision instrument for detecting the decay radiations of exotic nuclei. In 2003, a new beta-scintillating array called SCEPTAR was installed within the 8π Spectrometer. With these two systems, precise measurements of half-lives and branching ratios can be made, specifically on nuclei which exhibit superallowed Fermi 0+ → 0+ β-decay. These data can be used to determine, to good precision, the value of δC, an isospin-symmetry-breaking (Coulomb) correction factor. As this correction factor is currently one of the leading sources of error in the unitarity test of the CKM matrix, a precise determination of its value could help to eliminate any possible "trivial" explanation of the apparent departure of current experimental data from Standard Model predictions.

  18. Content range and precision of a computer adaptive test of upper extremity function for children with cerebral palsy.

    PubMed

    Montpetit, Kathleen; Haley, Stephen; Bilodeau, Nathalie; Ni, Pengsheng; Tian, Feng; Gorton, George; Mulcahey, M J

    2011-02-01

    This article reports on the content range and measurement precision of an upper extremity (UE) computer adaptive testing (CAT) platform of physical function in children with cerebral palsy. Upper extremity items representing skills of all abilities were administered to 305 parents. These responses were compared with two traditional standardized measures: Pediatric Outcomes Data Collection Instrument and Functional Independence Measure for Children. The UE CAT correlated strongly with the upper extremity component of these measures and had greater precision when describing individual functional ability. The UE item bank has wider range with items populating the lower end of the ability spectrum. This new UE item bank and CAT have the capability to quickly assess children of all ages and abilities with good precision and, most importantly, with items that are meaningful and appropriate for their age and level of physical function.

  19. Localization of an Underwater Control Network Based on Quasi-Stable Adjustment.

    PubMed

    Zhao, Jianhu; Chen, Xinhua; Zhang, Hongmei; Feng, Jie

    2018-03-23

    A common problem in the localization of underwater control networks is that the absolute coordinates of known points obtained by marine absolute measurement have poor precision, which seriously degrades the precision of the whole network under traditional constrained adjustment. Therefore, since the precision of the underwater baselines is good, we use them to carry out a quasi-stable adjustment that amends the known points before the constrained adjustment, so that the points fit the network shape better. In addition, we add an unconstrained adjustment for quality control of the underwater baselines, the observations of the quasi-stable adjustment and the constrained adjustment, in order to eliminate unqualified baselines and improve the accuracy of the two adjustments. Finally, the modified method is applied to a practical LBL (Long Baseline) experiment and achieves a mean point-location precision of 0.08 m, a 38% improvement over the traditional method.
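
    The quasi-stable principle can be illustrated in one dimension: in a levelling-style network with a datum defect, a minimum-norm (free-network) solution is shifted along the null space so that the corrections of the chosen quasi-stable points average to zero. The network, observations and stable-point choice below are hypothetical, and the real LBL case is three-dimensional.

        import numpy as np

        # 1-D analogy: unknowns are point heights, observations are differences.
        H0 = np.array([10.00, 12.00, 15.00, 11.00])       # approximate heights
        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # observed point pairs
        obs = np.array([2.03, 2.98, -3.99, -1.02, 5.01])  # hypothetical h_j - h_i

        A = np.zeros((len(edges), len(H0)))
        for k, (i, j) in enumerate(edges):
            A[k, i], A[k, j] = -1.0, 1.0
        misclosure = obs - A @ H0

        # The network has a datum defect (a common shift is unobservable), so
        # lstsq returns the minimum-norm correction: a free-network adjustment.
        dx, *_ = np.linalg.lstsq(A, misclosure, rcond=None)

        # Quasi-stable datum: shift the solution (a null-space motion) so the
        # mean correction over the quasi-stable points vanishes.
        stable = [0, 1]
        dx_qs = dx - dx[stable].mean()
        H = H0 + dx_qs
        print(H)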

  1. Measuring Efficiency of Tunisian Schools in the Presence of Quasi-Fixed Inputs: A Bootstrap Data Envelopment Analysis Approach

    ERIC Educational Resources Information Center

    Essid, Hedi; Ouellette, Pierre; Vigeant, Stephane

    2010-01-01

    The objective of this paper is to measure the efficiency of high schools in Tunisia. We use a statistical data envelopment analysis (DEA)-bootstrap approach with quasi-fixed inputs to estimate the precision of our measure. To do so, we developed a statistical model serving as the foundation of the data generation process (DGP). The DGP is…

  2. Testing the statistical compatibility of independent data sets

    NASA Astrophysics Data System (ADS)

    Maltoni, M.; Schwetz, T.

    2003-08-01

    We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ2 minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistic is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit test is discussed.
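
    As commonly formulated, the statistic is the rise of the global χ2 minimum above the sum of the individual minima, distributed as χ2 with sum(P_r) − P degrees of freedom, where P_r counts the parameters relevant to data set r and P the parameters of the joint fit. A minimal sketch (the numbers in the example are hypothetical):

        from scipy.stats import chi2

        def compatibility_test(chi2_min_global, chi2_min_each,
                               n_params_each, n_params_global):
            # stat = chi2_glob,min - sum_r chi2_r,min ;
            # ndf  = sum_r P_r - P
            stat = chi2_min_global - sum(chi2_min_each)
            ndf = sum(n_params_each) - n_params_global
            return stat, ndf, chi2.sf(stat, ndf)

        # hypothetical example: two experiments sharing 2 fit parameters
        print(compatibility_test(12.3, [3.1, 4.0], [2, 2], 2))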

  3. A New Statistic for Evaluating Item Response Theory Models for Ordinal Data. CRESST Report 839

    ERIC Educational Resources Information Center

    Cai, Li; Monroe, Scott

    2014-01-01

    We propose a new limited-information goodness of fit test statistic C[subscript 2] for ordinal IRT models. The construction of the new statistic lies formally between the M[subscript 2] statistic of Maydeu-Olivares and Joe (2006), which utilizes first and second order marginal probabilities, and the M*[subscript 2] statistic of Cai and Hansen…

  4. A Simple Automated Method for the Determination of Nitrate and Nitrite in Infant Formula and Milk Powder Using Sequential Injection Analysis

    PubMed Central

    Pistón, Mariela; Mollo, Alicia; Knochen, Moisés

    2011-01-01

    A fast and efficient automated method using a sequential injection analysis (SIA) system, based on the Griess, reaction was developed for the determination of nitrate and nitrite in infant formulas and milk powder. The system enables to mix a measured amount of sample (previously constituted in the liquid form and deproteinized) with the chromogenic reagent to produce a colored substance whose absorbance was recorded. For nitrate determination, an on-line prereduction step was added by passing the sample through a Cd minicolumn. The system was controlled from a PC by means of a user-friendly program. Figures of merit include linearity (r2 > 0.999 for both analytes), limits of detection (0.32 mg kg−1 NO3-N, and 0.05 mg kg−1 NO2-N), and precision (sr%) 0.8–3.0. Results were statistically in good agreement with those obtained with the reference ISO-IDF method. The sampling frequency was 30 hour−1 (nitrate) and 80 hour−1 (nitrite) when performed separately. PMID:21960750
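
    Figures of merit of this kind follow from the calibration line; the sketch below uses hypothetical standards and an ICH-style 3.3·s/slope detection limit, which may differ from the criterion used in the paper.

        import numpy as np
        from scipy.stats import linregress

        conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])        # hypothetical mg/kg N
        absb = np.array([0.002, 0.051, 0.103, 0.201, 0.405, 0.810])

        fit = linregress(conc, absb)
        r2 = fit.rvalue ** 2                                   # linearity check
        resid = absb - (fit.slope * conc + fit.intercept)
        s_res = resid.std(ddof=2)                              # residual SD
        lod = 3.3 * s_res / fit.slope                          # ICH-style LOD
        print(f"r2 = {r2:.4f}, LOD = {lod:.3f} mg/kg")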

  5. Evaluation of chemical parameters in soft mold-ripened cheese during ripening by mid-infrared spectroscopy.

    PubMed

    Martín-del-Campo, S T; Picque, D; Cosío-Ramírez, R; Corrieu, G

    2007-06-01

    The suitability of mid-infrared spectroscopy (MIR) for following the evolution of specific physicochemical parameters in Camembert-type cheeses throughout ripening was evaluated. The infrared spectra were obtained directly from raw cheese samples deposited on an attenuated total reflectance crystal. Significant correlations were observed between the physicochemical data: pH, acid-soluble nitrogen, nonprotein nitrogen, ammonia (NH4+), lactose, and lactic acid. Dry matter showed significant correlation only with lactose and nonprotein nitrogen. Principal components analysis factorial maps of the physicochemical data showed a ripening evolution in 2 steps, from d 1 to d 7 and from d 8 to d 27, similar to that observed previously from infrared spectral data. Partial least squares regressions made it possible to obtain good prediction models for dry matter, acid-soluble nitrogen, nonprotein nitrogen, lactose, lactic acid, and NH4+ values from spectral data of raw cheese. The values of 3 statistical parameters (coefficient of determination, root mean square error of cross-validation, and ratio of prediction to deviation) are satisfactory. Less precise models were obtained for pH.
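
    A cross-validated PLS sketch computing the three quoted statistics with scikit-learn; the number of latent variables, the fold count and the demo data are assumptions, not the paper's settings.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        def pls_figures_of_merit(X, y, n_components=8, cv=10):
            # Cross-validated predictions, then R2, RMSECV and the ratio of
            # the reference SD to RMSECV (the RPD-style deviation ratio).
            y_cv = np.ravel(cross_val_predict(PLSRegression(n_components), X, y, cv=cv))
            rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
            r2 = 1.0 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
            rpd = y.std(ddof=1) / rmsecv      # RPD > 2 is usually called good
            return r2, rmsecv, rpd

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 200))                    # stand-in spectra
        y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=60)
        print(pls_figures_of_merit(X, y))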

  6. Determining the dominant partial wave contributions from angular distributions of single- and double-polarization observables in pseudoscalar meson photoproduction

    NASA Astrophysics Data System (ADS)

    Wunderlich, Y.; Afzal, F.; Thiel, A.; Beck, R.

    2017-05-01

    This work presents a simple method to determine the significant partial wave contributions to experimentally determined observables in pseudoscalar meson photoproduction. First, fits to angular distributions are presented and the maximum orbital angular momentum Lmax needed to achieve a good fit is determined. Then, recent polarization measurements for γ p → π0 p from ELSA, GRAAL, JLab and MAMI are investigated according to the proposed method. This method allows us to project out high-spin partial-wave contributions to any observable, as long as the measurement has the necessary statistical accuracy. We show that high precision and large angular coverage in the polarization data are needed in order to be sensitive to high-spin resonance states, and thereby also for finding small resonance contributions. This can be achieved via interference of these resonances with the well-known states. For the channel γ p → π0 p, those are the N(1680)5/2+ and Δ(1950)7/2+, contributing to the F-waves.
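
    Determining Lmax can be sketched as fitting Legendre series of increasing degree to an angular distribution until χ2/ndf becomes acceptable; the acceptance threshold below is an assumption, not the paper's criterion.

        import numpy as np
        from numpy.polynomial import legendre

        def smallest_good_lmax(cos_theta, y, sigma, max_l=7, target=1.2):
            # Fit Legendre series of increasing degree and return the first
            # Lmax whose chi2/ndf drops below the target threshold.
            for lmax in range(max_l + 1):
                coef = legendre.legfit(cos_theta, y, deg=lmax, w=1.0 / sigma)
                chi2 = np.sum(((y - legendre.legval(cos_theta, coef)) / sigma) ** 2)
                ndf = len(y) - (lmax + 1)
                if ndf > 0 and chi2 / ndf < target:
                    return lmax, chi2 / ndf
            return max_l, chi2 / ndf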

  7. SAPO-34/AlMCM-41, as a novel hierarchical nanocomposite: preparation, characterization and investigation of synthesis factors using response surface methodology

    NASA Astrophysics Data System (ADS)

    Roohollahi, Hossein; Halladj, Rouein; Askari, Sima; Yaripour, Fereydoon

    2018-06-01

    SAPO-34/AlMCM-41, a new hierarchical nanocomposite, was successfully synthesized via hydrothermal and dry-gel conversion methods. In an experimental and statistical study, the effects of five input parameters (synthesis period, drying temperature, NaOH/Si, water/dried-gel, and SAPO%) on the degree of ordering of the mesochannels and on the relative crystallinity were investigated. X-ray diffraction (XRD) patterns were recorded to characterize the ordered AlMCM-41 and crystalline SAPO-34 structures. The nitrogen adsorption-desorption technique, scanning electron microscopy (SEM), field-emission SEM (FESEM) equipped with energy-dispersive X-ray spectroscopy (EDS mapping), and transmission electron microscopy (TEM) were used to study the textural properties, morphology, and surface elemental composition. Two reduced polynomials were fitted to the responses with good precision. Further, based on analysis of variance, SAPO% and the duration of the dry-gel conversion were found to be the parameters with the greatest effect on the composite structure. The hierarchical porosity, narrow pore size distribution, high external surface area, and large specific pore volume are among the interesting characteristics of this novel nanocomposite.

  8. A measurement of CMB cluster lensing with SPT and DES year 1 data

    DOE PAGES

    Baxter, E. J.; Raghunathan, S.; Crawford, T. M.; ...

    2018-02-09

    Clusters of galaxies gravitationally lens the cosmic microwave background (CMB) radiation, resulting in a distinct imprint in the CMB on arcminute scales. Measurement of this effect offers a promising way to constrain the masses of galaxy clusters, particularly those at high redshift. We use CMB maps from the South Pole Telescope Sunyaev-Zel'dovich (SZ) survey to measure the CMB lensing signal around galaxy clusters identified in optical imaging from first year observations of the Dark Energy Survey. The cluster catalog used in this analysis contains 3697 members with a mean redshift of z̄ = 0.45. We detect lensing of the CMB by the galaxy clusters at 8.1σ significance. Using the measured lensing signal, we constrain the amplitude of the relation between cluster mass and optical richness to roughly 17% precision, finding good agreement with recent constraints obtained with galaxy lensing. The error budget is dominated by statistical noise but includes significant contributions from systematic biases due to the thermal SZ effect and cluster miscentering.

  9. Comparison of two scanning instruments to measure peripheral refraction in the human eye.

    PubMed

    Jaeken, Bart; Tabernero, Juan; Schaeffel, Frank; Artal, Pablo

    2012-03-01

    To better understand how peripheral refraction affects the development of myopia in humans, specialized instruments are fundamental for precise and rapid measurements of refraction over the visual field. We compare here two prototype instruments that measure the peripheral refraction of the eye in a few seconds, with high angular resolution, over a range of about ±45 deg. One instrument is based on the continuous recording of Hartmann-Shack (HS) images (HS scanner) and the other is based on the photorefraction (PR) principle (PR scanner). On average, good correlations were found between the refraction results provided by the two devices, although the degree of agreement varied across subjects. A detailed statistical analysis of the differences between the instruments was performed based on measurements in 35 young subjects. Both instruments have advantages and disadvantages: the HS scanner also provides high-order aberration data, while the PR scanner is more compact and has a lower cost. Both instruments are current prototypes, and further optimization is possible to make them even more suitable tools for future visual optics and myopia research and also for different ophthalmic applications.

  10. Precision growth index using the clustering of cosmic structures and growth data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pouri, Athina; Basilakos, Spyros; Plionis, Manolis, E-mail: athpouri@phys.uoa.gr, E-mail: svasil@academyofathens.gr, E-mail: mplionis@physics.auth.gr

    2014-08-01

    We use the clustering properties of Luminous Red Galaxies (LRGs) and the growth rate data provided by various galaxy surveys in order to constrain the growth index (γ) of the linear matter fluctuations. We perform a standard χ²-minimization procedure between theoretical expectations and data, followed by a joint likelihood analysis, and we find a value of γ = 0.56 ± 0.05, perfectly consistent with the expectations of the ΛCDM model, and Ω_m0 = 0.29 ± 0.01, in very good agreement with the latest Planck results. Our analysis provides significantly more stringent growth index constraints than previous studies, as indicated by the fact that the corresponding uncertainty is only ∼0.09 in γ. Finally, allowing γ to vary with redshift in two ways (a Taylor expansion around z = 0, and a Taylor expansion around the scale factor), we find that the combined statistical analysis of our clustering data and literature growth data alleviates the degeneracy and obtains more stringent constraints than other recent studies.

  11. Development and Validation of High-performance Thin Layer Chromatographic Method for Ursolic Acid in Malus domestica Peel

    PubMed Central

    Nikam, P. H.; Kareparamban, J. A.; Jadhav, A. P.; Kadam, V. J.

    2013-01-01

    Ursolic acid, a pentacyclic triterpenoid, possesses a wide range of pharmacological activities, including hypoglycemic, antiandrogenic, antibacterial, antiinflammatory, antioxidant, diuretic and cynogenic activity. It is commonly present in plants, especially in the coating of leaves and fruits, such as apple fruit, vinca leaves, rosemary leaves, and eucalyptus leaves. A simple high-performance thin layer chromatographic method has been developed for the quantification of ursolic acid from apple peel (Malus domestica). The samples were dissolved in methanol, and linear ascending development was carried out in a twin-trough glass chamber. The mobile phase was toluene:ethyl acetate:glacial acetic acid (70:30:2). The linear regression analysis data for the calibration plots showed a good linear relationship (r2 = 0.9982) in the concentration range 0.2-7 μg/spot with respect to peak area. The method was validated for linearity, accuracy, precision, and robustness according to the ICH guidelines. Statistical analysis of the data showed that the method is reproducible and selective for the estimation of ursolic acid. PMID:24302805

  12. Refining glass structure in two dimensions

    NASA Astrophysics Data System (ADS)

    Sadjadi, Mahdi; Bhattarai, Bishal; Drabold, D. A.; Thorpe, M. F.; Wilson, Mark

    2017-11-01

    Recently determined atomistic scale structures of near-two-dimensional bilayers of vitreous silica (using scanning probe and electron microscopy) allow us to refine the experimentally determined coordinates to incorporate the known local chemistry more precisely. Further refinement is achieved by using classical potentials of varying complexity: one using harmonic potentials and a second employing an electrostatic description incorporating polarization effects. These are benchmarked against density functional calculations. Our main findings are that (a) there is a symmetry plane between the two disordered layers, a nice example of an emergent phenomenon; (b) the layers are slightly tilted, so that the Si-O-Si angle between the two layers is not 180° as originally thought but rather 175 ± 2°; and (c) while interior areas that are not completely imaged can be reliably reconstructed, surface areas are more problematic. It is shown that the small crystallites that appear are just as expected statistically in a continuous random network. This provides a good example of the value that can be added to disordered structures imaged at the atomic level by implementing computer refinement.

  13. Probabilistic flood extent estimates from social media flood observations

    NASA Astrophysics Data System (ADS)

    Brouwer, Tom; Eilander, Dirk; van Loenen, Arnejan; Booij, Martijn J.; Wijnberg, Kathelijne M.; Verkade, Jan S.; Wagemaker, Jurjen

    2017-05-01

    The increasing number and severity of floods, driven by phenomena such as urbanization, deforestation, subsidence and climate change, create a growing need for accurate and timely flood maps. In this paper we present and evaluate a method to create deterministic and probabilistic flood maps from Twitter messages that mention locations of flooding. A deterministic flood map created for the December 2015 flood in the city of York (UK) showed good performance (F(2) = 0.69; a statistic ranging from 0 to 1, with 1 expressing a perfect fit with validation data). The probabilistic flood maps we created showed that, in the York case study, the uncertainty in flood extent was mainly induced by errors in the precise locations of flood observations as derived from Twitter data. Errors in the terrain elevation data or in the parameters of the applied algorithm contributed less to flood extent uncertainty. Although these maps tended to overestimate the actual probability of flooding, they gave a reasonable representation of flood extent uncertainty in the area. This study illustrates that inherently uncertain data from social media can be used to derive information about flooding.
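
    For reference, the F(2) statistic quoted above is the F-beta score with beta = 2, which weights recall twice as heavily as precision. A minimal worked example, with made-up precision and recall values chosen only to reproduce a similar number, is:

      def f_beta(precision: float, recall: float, beta: float = 2.0) -> float:
          """F-beta score: 1.0 indicates a perfect fit with the validation data."""
          return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

      # e.g. precision 0.55 and recall 0.73 give an F2 close to the reported 0.69
      print(round(f_beta(0.55, 0.73), 2))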

  14. Test bench for measurements of NOvA scintillator properties at JINR

    NASA Astrophysics Data System (ADS)

    Velikanova, D. S.; Antoshkin, A. I.; Anfimov, N. V.; Samoylov, O. B.

    2018-04-01

    The NOvA experiment was built to study oscillation parameters, the mass hierarchy, the CP-violation phase in the lepton sector and the θ23 octant, via νe appearance and νμ disappearance in both neutrino and antineutrino beams. These scientific goals require good knowledge of the basic properties of the NOvA scintillator. A new test bench was constructed and upgraded at JINR. The main goal of this bench is to measure scintillator properties (for solid and liquid scintillators), namely α/β discrimination and Birks coefficients for protons and other hadrons (quenching factors). This knowledge will be crucial for recovering the energy of the hadronic part of neutrino interactions with scintillator nuclei. α/β discrimination was performed on the first version of the bench for LAB-based and NOvA scintillators. It was performed again on the upgraded version of the bench with higher statistics and a higher level of precision. A preliminary result on the quenching factors for protons was obtained. A technical description of both versions of the bench and the current results of the measurements and analysis are presented in this work.

  15. Building blocks for automated elucidation of metabolites: machine learning methods for NMR prediction.

    PubMed

    Kuhn, Stefan; Egert, Björn; Neumann, Steffen; Steinbeck, Christoph

    2008-09-25

    Current efforts in metabolomics, such as the Human Metabolome Project, collect structures of biological metabolites as well as data for their characterisation, such as spectra for identification of substances and measurements of their concentration. Still, only a fraction of existing metabolites and their spectral fingerprints are known. Computer-Assisted Structure Elucidation (CASE) of biological metabolites will be an important tool to address this lack of knowledge. Indispensable for CASE are modules to predict spectra for hypothetical structures. This paper evaluates different statistical and machine learning methods for predicting proton NMR spectra based on data from our open database NMRShiftDB. A mean absolute error of 0.18 ppm was achieved for the prediction of proton NMR shifts ranging from 0 to 11 ppm. Random forest, J48 decision tree and support vector machines achieved similar overall errors. HOSE codes, a notably simple method, achieved a comparatively good result of 0.17 ppm mean absolute error. The NMR prediction methods applied in the course of this work delivered precise predictions which can serve as a building block for Computer-Assisted Structure Elucidation for biological metabolites.

  16. Derivative spectrophotometry for the determination of faropenem in the presence of degradation products: an application for kinetic studies.

    PubMed

    Cielecka-Piontek, Judyta

    2013-07-01

    A simple and selective derivative spectrophotometric method was developed for the quantitative determination of faropenem in pure form and in pharmaceutical dosage forms. The method is based on the zero-crossing effect of first-derivative spectrophotometry (λ = 324 nm), which eliminates the overlapping effect caused by the excipients present in the pharmaceutical preparation, as well as by degradation products formed during hydrolysis, oxidation, photolysis, and thermolysis. The method was linear in the concentration range 2.5-300 μg/mL (r = 0.9989) at λ = 341 nm; the limits of detection and quantitation were 0.16 and 0.46 μg/mL, respectively. The method had good precision (relative standard deviation from 0.68 to 2.13%). Recovery of faropenem ranged from 97.9 to 101.3%. The first-order rate constants of the degradation of faropenem in pure form and in pharmaceutical dosage forms were determined by using first-derivative spectrophotometry. A statistical comparison of the validation results and the observed rate constants for faropenem degradation with those obtained with the high-performance liquid chromatography method demonstrated that the two were compatible.
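
    The first-order rate constants above follow the standard kinetic model, in which concentration decays exponentially with time, so k can be read off as the negative slope of the log-signal against time (a generic relation, not a formula specific to this paper):

      \[
        C_t = C_0\,e^{-kt}
        \quad\Longrightarrow\quad
        \ln C_t = \ln C_0 - kt
      \]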

  17. Characterizing the size and shape of sea ice floes

    PubMed Central

    Gherardi, Marco; Lagomarsino, Marco Cosentino

    2015-01-01

    Monitoring drift ice in the Arctic and Antarctic regions directly and by remote sensing is important for the study of climate, but a unified modeling framework is lacking. Hence, interpretation of the data, as well as the decision of what to measure, represents a challenge for different fields of science. To address this point, we analyzed, using statistical physics tools, satellite images of sea ice from four different locations in both the northern and southern hemispheres, and measured the size and the elongation of ice floes (floating pieces of ice). We find that (i) floe size follows a distribution that can be characterized to good approximation by a single length scale, which we discuss in the framework of stochastic fragmentation models, and (ii) the deviation of floe shape from circularity is reproduced with remarkable precision by a geometric model of coalescence by freezing, based on random Voronoi tessellations, with a single free parameter expressing the shape disorder. Although the physical interpretations remain open, this advocates the length scale and the shape-disorder parameter as two independent indicators of the environment in the polar regions, which are easily accessible by remote sensing. PMID:26014797

  18. Comparative study of the efficiency of computed univariate and multivariate methods for the estimation of the binary mixture of clotrimazole and dexamethasone using two different spectral regions

    NASA Astrophysics Data System (ADS)

    Fayez, Yasmin Mohammed; Tawakkol, Shereen Mostafa; Fahmy, Nesma Mahmoud; Lotfy, Hayam Mahmoud; Shehata, Mostafa Abdel-Aty

    2018-04-01

    Three methods of analysis requiring computational procedures in the Matlab® software are presented. The first is the univariate mean centering method, which eliminates the interfering signal of one component at a selected wavelength, leaving the measured amplitude to represent the component of interest only. The other two, the multivariate methods PLS and PCR, depend on a large number of variables, which leads to extraction of the maximum amount of information required to determine the component of interest in the presence of the other. Good, accurate and precise results are obtained from the three methods for determining clotrimazole in the linearity ranges 1-12 μg/mL and 75-550 μg/mL with dexamethasone acetate at 2-20 μg/mL, in synthetic mixtures and in a pharmaceutical formulation, using two different spectral regions, 205-240 nm and 233-278 nm. The results obtained are compared statistically to each other and to the official methods.

  19. A demonstration of a transportable radio interferometric surveying system with 3-cm accuracy on a 307-m base line

    NASA Technical Reports Server (NTRS)

    Ong, K. M.; Macdoran, P. F.; Thomas, J. B.; Fliegel, H. F.; Skjerve, L. J.; Spitzmesser, D. J.; Batelaan, P. D.; Paine, S. R.; Newsted, M. G.

    1976-01-01

    A precision geodetic measurement system (Aries, for Astronomical Radio Interferometric Earth Surveying) based on the technique of very long base line interferometry has been designed and implemented through the use of a 9-m transportable antenna and the NASA 64-m antenna of the Deep Space Communications Complex at Goldstone, California. A series of experiments designed to demonstrate the inherent accuracy of a transportable interferometer was performed on a 307-m base line during the period from December 1973 to June 1974. This short base line was chosen in order to obtain a comparison with a conventional survey with a few-centimeter accuracy and to minimize Aries errors due to transmission media effects, source locations, and earth orientation parameters. The base-line vector derived from a weighted average of the measurements, representing approximately 24 h of data, possessed a formal uncertainty of about 3 cm in all components. This average interferometry base-line vector was in good agreement with the conventional survey vector within the statistical range allowed by the combined uncertainties (3-4 cm) of the two techniques.

  20. A measurement of CMB cluster lensing with SPT and DES year 1 data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baxter, E. J.; Raghunathan, S.; Crawford, T. M.

    Clusters of galaxies gravitationally lens the cosmic microwave background (CMB) radiation, resulting in a distinct imprint in the CMB on arcminute scales. Measurement of this effect offers a promising way to constrain the masses of galaxy clusters, particularly those at high redshift. We use CMB maps from the South Pole Telescope Sunyaev-Zel'dovich (SZ) survey to measure the CMB lensing signal around galaxy clusters identified in optical imaging from first year observations of the Dark Energy Survey. The cluster catalog used in this analysis contains 3697 members with a mean redshift of z̄ = 0.45. We detect lensing of the CMB by the galaxy clusters at 8.1σ significance. Using the measured lensing signal, we constrain the amplitude of the relation between cluster mass and optical richness to roughly 17% precision, finding good agreement with recent constraints obtained with galaxy lensing. The error budget is dominated by statistical noise but includes significant contributions from systematic biases due to the thermal SZ effect and cluster miscentering.

  1. The Strong Lensing Time Delay Challenge (2014)

    NASA Astrophysics Data System (ADS)

    Liao, Kai; Dobler, G.; Fassnacht, C. D.; Treu, T.; Marshall, P. J.; Rumbaugh, N.; Linder, E.; Hojjati, A.

    2014-01-01

    Time delays between multiple images in strong lensing systems are a powerful probe of cosmology. At the moment the application of this technique is limited by the number of lensed quasars with measured time delays. However, the number of such systems is expected to increase dramatically in the next few years. Hundreds of such systems are expected within this decade, while the Large Synoptic Survey Telescope (LSST) is expected to deliver of order 1000 time delays in the 2020s. In order to exploit this bounty of lenses we need to make sure the time delay determination algorithms have sufficiently high precision and accuracy. As a first step to test current algorithms and identify potential areas for improvement we have started a "Time Delay Challenge" (TDC). An "evil" team has created realistic simulated light curves, to be analyzed blindly by "good" teams. The challenge is open to all interested parties. The initial challenge consists of two steps (TDC0 and TDC1). TDC0 consists of a small number of datasets to be used as a training template; the non-mandatory deadline is December 1, 2013. The "good" teams that complete TDC0 will be given access to TDC1, which consists of thousands of light curves, a number sufficient to test precision and accuracy at the subpercent level necessary for time-delay cosmography. The deadline for responding to TDC1 is July 1, 2014. Submissions will be analyzed and compared in terms of predefined metrics to establish the goodness of fit, efficiency, precision and accuracy of current algorithms. This poster describes the challenge in detail and gives instructions for participation.

  2. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    PubMed

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genomes. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. The Monte Carlo method is then applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which the expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. The model was validated by both statistical simulations and real-world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error: the model performed reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement in the precision of sample processing and the qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator for which a universal assay for fecal sources of that indicator exists.
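
    A highly simplified sketch of the Monte Carlo correction idea follows; the beta and normal distributions, the background term, and all numbers are placeholders chosen for illustration, not the distributions estimated by the authors.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000
      measured = 3.0                      # measured marker concentration (log10 copies)

      sens = rng.beta(90, 10, n)          # P(detect | marker present), assumed prior
      fpos = rng.beta(2, 98, n)           # P(detect | marker absent), assumed prior
      noise = rng.normal(0.0, 0.1, n)     # replicate qPCR precision error (sd = 0.1)

      # Law of Total Probability: observed = true*sens + background*fpos; solve for true
      background = 0.5
      true = (measured + noise - background * fpos) / sens

      lo, hi = np.percentile(true, [2.5, 97.5])
      print(f"true concentration: {true.mean():.2f} (95% CI {lo:.2f}-{hi:.2f})")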

  3. The theory precision analyse of RFM localization of satellite remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqing; Xv, Biao

    2009-11-01

    The traditional method of assessing the precision of the Rational Function Model (RFM) is to use a great number of check points, calculating the mean square error by comparing computed coordinates with known coordinates. This method comes from probability theory: it statistically estimates the mean square error from a large number of samples, and the estimate can be regarded as approaching the true value when the sample is large enough. This paper instead starts from the perspective of survey adjustment, taking the law of propagation of error as its theoretical basis, and calculates the theoretical precision of RFM localization. Taking SPOT5 three-line-array imagery as experimental data, the results of the traditional method and of the method described in this paper are compared; this confirms that the traditional method is feasible and answers the question of its theoretical precision from the perspective of survey adjustment.

  4. Bit-Grooming: Shave Your Bits with Razor-sharp Precision

    NASA Astrophysics Data System (ADS)

    Zender, C. S.; Silver, J.

    2017-12-01

    Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision, those bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to the competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long-term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.
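
    The following sketch conveys the core idea under stated assumptions (it is not the NCO implementation): alternately zeroing ("shaving") and setting the least-significant mantissa bits of float32 values, so that quantization errors tend to cancel on average while the groomed values compress far better losslessly.

      import numpy as np

      def bit_groom(x: np.ndarray, keep_bits: int = 12) -> np.ndarray:
          """Keep `keep_bits` of the 23 explicit float32 mantissa bits."""
          u = np.ascontiguousarray(x, dtype=np.float32).view(np.uint32).copy()
          drop = 23 - keep_bits
          set_mask = np.uint32((1 << drop) - 1)                 # ones in trailing bits
          shave_mask = np.uint32(~int(set_mask) & 0xFFFFFFFF)   # zeros in trailing bits
          u[0::2] &= shave_mask     # even elements: shave (round toward zero)
          u[1::2] |= set_mask       # odd elements: set (round away from zero)
          return u.view(np.float32)

      data = np.linspace(0.0, 1.0, 8, dtype=np.float32)
      print(bit_groom(data))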

  5. Non-convex Statistical Optimization for Sparse Tensor Graphical Model

    PubMed Central

    Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang

    2016-01-01

    We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which had not been observed in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459

  6. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.

    PubMed

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-09-01

    We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a structure parallel to the Gaussian graphical model, in which the precision matrix is replaced by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.

  7. Passage relevance models for genomics search.

    PubMed

    Urbain, Jay; Frieder, Ophir; Goharian, Nazli

    2009-03-19

    We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and documents are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top-ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence, including dependencies between topics, concepts, and terms, we seek to improve the precision of genomics literature passage retrieval. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision on a large genomics literature corpus.

  8. Constraining the mass–richness relationship of redMaPPer clusters with angular clustering

    DOE PAGES

    Baxter, Eric J.; Rozo, Eduardo; Jain, Bhuvnesh; ...

    2016-08-04

    The potential of using cluster clustering for calibrating the mass–richness relation of galaxy clusters has been recognized theoretically for over a decade. In this paper, we demonstrate the feasibility of this technique to achieve high-precision mass calibration using redMaPPer clusters in the Sloan Digital Sky Survey North Galactic Cap. By including cross-correlations between several richness bins in our analysis, we significantly improve the statistical precision of our mass constraints. The amplitude of the mass–richness relation is constrained to 7 per cent statistical precision by our analysis. However, the error budget is systematics dominated, reaching a 19 per cent total error that is dominated by theoretical uncertainty in the bias–mass relation for dark matter haloes. We confirm the result from Miyatake et al. that the clustering amplitude of redMaPPer clusters depends on galaxy concentration as defined therein, and we provide additional evidence that this dependence cannot be sourced by mass dependences: some other effect must account for the observed variation in clustering amplitude with galaxy concentration. Assuming that the observed dependence of redMaPPer clustering on galaxy concentration is a form of assembly bias, we find that such effects introduce a systematic error on the amplitude of the mass–richness relation that is comparable to the error bar from statistical noise. Finally, the results presented here demonstrate the power of cluster clustering for mass calibration and cosmology provided the current theoretical systematics can be ameliorated.

  9. OPTIMA: sensitive and accurate whole-genome alignment of error-prone genomic maps by combinatorial indexing and technology-agnostic statistical analysis.

    PubMed

    Verzotto, Davide; M Teo, Audrey S; Hillmer, Axel M; Nagarajan, Niranjan

    2016-01-01

    Resolution of complex repeat structures and rearrangements in the assembly and analysis of large eukaryotic genomes is often aided by a combination of high-throughput sequencing and genome-mapping technologies (for example, optical restriction mapping). In particular, mapping technologies can generate sparse maps of large DNA fragments (150 kilo base pairs (kbp) to 2 Mbp) and thus provide a unique source of information for disambiguating complex rearrangements in cancer genomes. Despite their utility, combining high-throughput sequencing and mapping technologies has been challenging because of the lack of efficient and sensitive map-alignment algorithms for robustly aligning error-prone maps to sequences. We introduce a novel seed-and-extend glocal (short for global-local) alignment method, OPTIMA (and a sliding-window extension for overlap alignment, OPTIMA-Overlap), which is the first to create indexes for continuous-valued mapping data while accounting for mapping errors. We also present a novel statistical model, agnostic with respect to technology-dependent error rates, for conservatively evaluating the significance of alignments without relying on expensive permutation-based tests. We show that OPTIMA and OPTIMA-Overlap outperform other state-of-the-art approaches (1.6-2 times more sensitive) and are more efficient (170-200 %) and precise in their alignments (nearly 99 % precision). These advantages are independent of the quality of the data, suggesting that our indexing approach and statistical evaluation are robust, provide improved sensitivity and guarantee high precision.

  10. SPA- STATISTICAL PACKAGE FOR TIME AND FREQUENCY DOMAIN ANALYSIS

    NASA Technical Reports Server (NTRS)

    Brownlow, J. D.

    1994-01-01

    The need for statistical analysis often arises when data is in the form of a time series: a collection of numerical observations made at specified time intervals. Two kinds of analysis may be performed on such data. First, the time series may be treated as a set of independent observations, using a time domain analysis to derive the usual statistical properties including the mean, variance, and distribution form. Second, the order and time intervals of the observations may be used in a frequency domain analysis to examine the time series for periodicities. In almost all practical applications, the collected data is actually a mixture of the desired signal and a noise signal, collected over a finite time period with finite precision; therefore, any statistical calculations and analyses are actually estimates. The Spectrum Analysis (SPA) program was developed to perform a wide range of statistical estimation functions, giving the data analyst a rigorous tool for performing time and frequency domain studies. In a time domain statistical analysis the SPA program computes the mean, variance, standard deviation, mean square, and root mean square. It also lists the data maximum, data minimum, and the number of observations included in the sample. In addition, a histogram of the time domain data is generated, a normal curve is fit to the histogram, and a goodness-of-fit test is performed. These time domain calculations may be performed on both raw and filtered data. For a frequency domain statistical analysis the SPA program computes the power spectrum, cross spectrum, coherence, phase angle, amplitude ratio, and transfer function. The estimates of the frequency domain parameters may be smoothed with the use of Hann-Tukey, Hamming, Bartlett, or moving average windows. Various digital filters are available to isolate data frequency components. Frequency components with periods longer than the data collection interval are removed by least-squares detrending. As many as ten channels of data may be analyzed at one time. Both tabular and plotted output may be generated by the SPA program. The program is written in FORTRAN IV and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 142K (octal) 60-bit words; this core requirement can be reduced by segmentation of the program. The SPA program was developed in 1978.
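
    As an illustration only (the original SPA is FORTRAN IV), the time and frequency domain estimates it describes map onto a few lines of modern scientific Python; the test signal below is synthetic.

      import numpy as np
      from scipy import stats, signal

      rng = np.random.default_rng(2)
      x = np.sin(2 * np.pi * 0.05 * np.arange(512)) + rng.normal(0, 0.3, 512)

      # time domain: mean, variance, standard deviation, mean square, RMS
      print(x.mean(), x.var(ddof=1), x.std(ddof=1), np.mean(x**2), np.sqrt(np.mean(x**2)))
      print(x.min(), x.max(), x.size)           # data minimum, maximum, sample count

      stat, p = stats.normaltest(x)             # goodness-of-fit of a normal curve
      print("normality p-value:", p)

      # frequency domain: power spectrum and dominant periodicity
      f, Pxx = signal.periodogram(x)
      print("dominant frequency:", f[np.argmax(Pxx)])   # ~0.05 cycles/sample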

  11. The Use of Neural Network Technology to Model Swimming Performance

    PubMed Central

    Silva, António José; Costa, Aldo Manuel; Oliveira, Paulo Moura; Reis, Victor Machado; Saavedra, José; Perl, Jurgen; Rouboa, Abel; Marinho, Daniel Almeida

    2007-01-01

    The aims of the present study were: to identify the factors which are able to explain performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers; to model performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons); and to assess the precision of the neural network models in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network (multilayer perceptron) with three neurons in a single hidden layer was used. The prognostic precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach to the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports. Key points: The non-linear analysis resulting from the use of a feed-forward neural network allowed us to develop four performance models. The mean difference between the true and estimated results of each of the four neural network models constructed was low. The neural network tool can be a good approach to performance modeling, as an alternative to standard statistical models that presume well-defined distributions and independence among all inputs. The use of neural networks for sports science applications allowed us to create very realistic models for swimming performance prediction based on previously selected criteria that were related to the dependent variable (performance). PMID:24149233

  12. Substantial Goodness and Nascent Human Life.

    PubMed

    Floyd, Shawn

    2015-09-01

    Many believe that moral value is--at least to some extent--dependent on the developmental states necessary for supporting rational activity. My paper rejects this view, but does not aim simply to register objections to it. Rather, my essay aims to answer the following question: if a human being's developmental state and occurrent capacities do not bequeath moral standing, what does? The question is intended to prompt careful consideration of what makes human beings objects of moral value, dignity, or (to employ my preferred term) goodness. Not only do I think we can answer this question, I think we can show that nascent human life possesses goodness of precisely this sort. I appeal to Aquinas's metaethics to establish the conclusion that the goodness of a human being--even if that being is an embryo or fetus--resides at the substratum of her existence. If she possesses goodness, it is because human existence is good.

  13. A precise measurement of the $B^0$ meson oscillation frequency

    DOE PAGES

    Aaij, R.; Abellán Beteta, C.; Adeva, B.; ...

    2016-07-21

    The oscillation frequency, Δmd, of B0 mesons is measured using semileptonic decays with a D− or D*− meson in the final state. The data sample corresponds to 3.0 fb−1 of pp collisions, collected by the LHCb experiment at centre-of-mass energies √s = 7 and 8 TeV. A combination of the two decay modes gives Δmd = (505.0 ± 2.1 ± 1.0) ns−1, where the first uncertainty is statistical and the second is systematic. This is the most precise single measurement of this parameter. It is consistent with the current world average and has similar precision.

  14. Achieving metrological precision limits through postselection

    NASA Astrophysics Data System (ADS)

    Alves, G. Bié; Pimentel, A.; Hor-Meyll, M.; Walborn, S. P.; Davidovich, L.; Filho, R. L. de Matos

    2017-01-01

    Postselection strategies have been proposed with the aim of amplifying weak signals, which may help to overcome detection thresholds associated with technical noise in high-precision measurements. Here we use an optical setup to experimentally explore two different postselection protocols for the estimation of a small parameter: a weak-value amplification procedure and an alternative method that does not provide amplification but nonetheless is shown to be more robust for the sake of parameter estimation. Each technique leads approximately to the saturation of quantum limits for the estimation precision, expressed by the Cramér-Rao bound. For both situations, we show that parameter estimation is improved when the postselection statistics are considered together with the measurement device.
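
    For context, the Cramér-Rao bound referred to above limits the variance of any unbiased estimator of a parameter θ by the Fisher information F(θ) of the measurement; for n independent repetitions it reads:

      \[
        \operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{n\,F(\theta)}
      \]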

  15. Development and validation spectroscopic methods for the determination of lomefloxacin in bulk and pharmaceutical formulations

    NASA Astrophysics Data System (ADS)

    El-Didamony, A. M.; Hafeez, S. M.

    2016-01-01

    Four simple, sensitive spectrophotometric and spectrofluorimetric methods (A-D) for the determination of the antibacterial drug lomefloxacin (LMFX) in pharmaceutical formulations have been developed. Method A is based on the formation of a ternary complex between Pd(II), eosin and LMFX in the presence of methyl cellulose as surfactant and acetate-HCl buffer at pH 4.0; under the optimum conditions, the ternary complex showed an absorption maximum at 530 nm. Methods B and C are based on the redox reaction between LMFX and KMnO4 in acid and alkaline media. In the indirect spectrophotometric method B, the drug solution is treated with a known excess of KMnO4 in H2SO4 medium, and the unreacted oxidant is subsequently determined by reacting it with safranine O in the same medium at λmax = 520 nm. The direct spectrophotometric method C involves treating an alkaline solution of LMFX with KMnO4 and measuring the bluish-green product at 604 nm. Method D is based on the chelation of LMFX with Zr(IV) to produce a fluorescent chelate; at the optimum reaction conditions, the drug-metal chelate showed an excitation maximum at 280 nm and an emission maximum at 443 nm. The optimum experimental parameters for the reactions have been studied, and the validity of the described procedures was assessed. Statistical analysis of the results revealed high accuracy and good precision. The proposed methods were successfully applied to the determination of the selected drug in pharmaceutical preparations with good recoveries.

  16. A comparison of manual anthropometric measurements with Kinect-based scanned measurements in terms of precision and reliability.

    PubMed

    Bragança, Sara; Arezes, Pedro; Carvalho, Miguel; Ashdown, Susan P; Castellucci, Ignacio; Leão, Celina

    2018-01-01

    Collecting anthropometric data for real-life applications demands a high degree of precision and reliability, so it is important to test new equipment that will be used for data collection. The objective was to compare two anthropometric data-gathering techniques - manual methods and a Kinect-based 3D body scanner - to understand which of them gives more precise and reliable results. The data were collected using a measuring tape and a Kinect-based 3D body scanner, and were evaluated in terms of precision by considering the regular and relative Technical Error of Measurement, and in terms of reliability by using the Intraclass Correlation Coefficient, Reliability Coefficient, Standard Error of Measurement and Coefficient of Variation. The results obtained showed that both methods presented better results for reliability than for precision. Both methods showed relatively good results for these two variables; however, manual methods had better results for some body measurements. Despite being considered sufficiently precise and reliable for certain applications (e.g. the apparel industry), the 3D scanner tested showed, for almost every anthropometric measurement, a different result than the manual technique. Many companies design their products based on data obtained from 3D scanners; hence, understanding the precision and reliability of the equipment used is essential to obtain feasible results.

  17. Expertise for upright faces improves the precision but not the capacity of visual working memory.

    PubMed

    Lorenc, Elizabeth S; Pratte, Michael S; Angeloni, Christopher F; Tong, Frank

    2014-10-01

    Considerable research has focused on how basic visual features are maintained in working memory, but little is currently known about the precision or capacity of visual working memory for complex objects. How precisely can an object be remembered, and to what extent might familiarity or perceptual expertise contribute to working memory performance? To address these questions, we developed a set of computer-generated face stimuli that varied continuously along the dimensions of age and gender, and we probed participants' memories using a method-of-adjustment reporting procedure. This paradigm allowed us to separately estimate the precision and capacity of working memory for individual faces, on the basis of the assumptions of a discrete capacity model, and to assess the impact of face inversion on memory performance. We found that observers could maintain up to four to five items on average, with equally good memory capacity for upright and upside-down faces. In contrast, memory precision was significantly impaired by face inversion at every set size tested. Our results demonstrate that the precision of visual working memory for a complex stimulus is not strictly fixed but, instead, can be modified by learning and experience. We find that perceptual expertise for upright faces leads to significant improvements in visual precision, without modifying the capacity of working memory.

  18. Ultra-High Precision Half-Life Measurement for the Superallowed β+ Emitter ^26Al^m

    NASA Astrophysics Data System (ADS)

    Finlay, P.; Demand, G.; Garrett, P. E.; Leach, K. G.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Grinyer, G. F.; Leslie, J. R.; Andreoiu, C.; Cross, D.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Djongolov, M.; Ettenauer, S.; Hackman, G.; Pearson, C. J.; Williams, S. J.

    2009-10-01

    The calculated nuclear structure dependent correction for ^26Al^m (δC-δNS = 0.305(27)% [1]) is smaller by nearly a factor of two than the other twelve precision superallowed cases, making it an ideal case to pursue a reduction in the experimental errors contributing to the Ft value. An ultra-high precision half-life measurement for the superallowed β+ emitter ^26Al^m has been made at the Isotope Separator and Accelerator (ISAC) facility at TRIUMF in Vancouver, Canada. A beam of ˜10^5 ^26Al^m/s was delivered in October 2007 and its decay was observed using a 4π continuous gas flow proportional counter as part of an ongoing experimental program in superallowed Fermi β decay studies. With a statistical precision of ˜0.008%, the present work represents the single most precise measurement of any superallowed half-life to date. [1] I.S. Towner and J.C. Hardy, Phys. Rev. C 79, 055502 (2009).

  19. Ultra-High Precision Half-Life Measurement for the Superallowed β+ Emitter ^26Al^m

    NASA Astrophysics Data System (ADS)

    Finlay, P.; Demand, G.; Garrett, P. E.; Leach, K. G.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Ball, G. C.; Bandyopadhyay, D.; Djongolov, M.; Ettenauer, S.; Hackman, G.; Pearson, C. J.; Williams, S. J.; Andreoiu, C.; Cross, D.; Austin, R. A. E.; Grinyer, G. F.; Leslie, J. R.

    2008-10-01

    The calculated nuclear structure dependent correction for ^26Al^m (δC-δNS = 0.305(27)% [1]) is smaller by nearly a factor of two than the other twelve precision superallowed cases, making it an ideal case to pursue a reduction in the experimental errors contributing to the Ft value. An ultra-high precision half-life measurement for the superallowed β+ emitter ^26Al^m has been made using a 4π continuous gas flow proportional counter as part of an ongoing experimental program in superallowed Fermi β decay studies at the Isotope Separator and Accelerator (ISAC) facility at TRIUMF in Vancouver, Canada, which delivered a beam of ˜10^5 ^26Al^m/s in October 2007. With a statistical precision of ˜0.008%, the present work represents the single most precise measurement of any superallowed half-life to date. [1] I.S. Towner and J.C. Hardy, Phys. Rev. C 77, 025501 (2008).

  20. Influence of Waveform Characteristics on LiDAR Ranging Accuracy and Precision

    PubMed Central

    Yang, Bingwei; Xie, Xinhao; Li, Duan

    2018-01-01

    Time of flight (TOF) based light detection and ranging (LiDAR) is a technology that calculates distance from the time of flight between start and stop signals. In our lab-built LiDAR, the two ranging systems for measuring the flight time between start/stop signals are a time-to-digital converter (TDC), which counts the time between trigger signals, and an analog-to-digital converter (ADC), which processes the sampled start/stop pulse waveforms for time estimation. We study the influence of waveform characteristics on the ranging accuracy and precision of the two kinds of ranging system. Comparing waveform-based ranging (WR) with analog discrete return system based ranging (AR), a peak detection method (WR-PK) shows the best ranging performance because of its short execution time, high ranging accuracy, and stable precision. Based on the maximal information coefficient (MIC), a novel statistical method, WR-PK precision has a highly linear relationship with the standard deviation of the received pulse width. Thus, keeping the received pulse width as stable as possible when measuring a constant distance can improve ranging precision. PMID:29642639
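
    A minimal sketch of the WR-PK idea, assuming an ADC-sampled return pulse; the sample rate, pulse width, and noise level below are invented for illustration and are not the paper's hardware values.

      import numpy as np

      fs = 1e9                                   # assumed 1 GS/s ADC
      t = np.arange(2000) / fs                   # 2 microsecond record
      true_delay = 666.7e-9                      # round-trip time for ~100 m
      rng = np.random.default_rng(3)
      wave = np.exp(-((t - true_delay) ** 2) / (2 * (5e-9) ** 2))  # Gaussian return pulse
      wave += rng.normal(0, 0.02, t.size)        # receiver noise

      tof = t[np.argmax(wave)]                   # WR-PK: take the waveform peak
      print(f"estimated range: {3e8 * tof / 2:.2f} m")   # range = c * t / 2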

  1. A new MRI grading system for chondromalacia patellae.

    PubMed

    Özgen, Ali; Taşdelen, Neslihan; Fırat, Zeynep

    2017-04-01

    Background: Chondromalacia patellae is a very common disorder. Although magnetic resonance imaging (MRI) is widely used to investigate patellar cartilage lesions, there is no descriptive MRI-based grading system for chondromalacia patellae. Purpose: To propose a new MRI grading system for chondromalacia patellae, with corresponding high resolution images, which might be useful for precisely reporting and comparing knee examinations in routine daily practice and for predicting the natural course and clinical outcome of patellar cartilage lesions. Material and Methods: High resolution fat-saturated proton density (FS PD) images in the axial plane with corresponding T2 mapping images were reviewed. A detailed MRI grading system covering the deficiencies of the existing gradings was established and presented on these images. Two experienced observers blinded to the clinical data examined 44 knee MR images and evaluated patellar cartilage changes according to the proposed grading system. Inter- and intra-rater agreement was assessed using kappa statistics. Results: A descriptive and detailed grading system with corresponding FS PD and T2 mapping images is presented. Inter-rater agreement was 0.80 (95% confidence interval [CI], 0.71-0.89). Intra-rater agreement was 0.83 (95% CI, 0.74-0.91) for observer A and 0.79 (95% CI, 0.70-0.88) for observer B (κ values). Conclusion: We present a new MRI grading system for chondromalacia patellae, with corresponding images and good inter- and intra-rater agreement, which might be useful for reporting and comparing knee MRI examinations in daily practice and may also have the potential to predict the prognosis and clinical outcome of patients more precisely.
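
    As a worked illustration of the kappa statistic behind the agreement figures above, with hypothetical grade assignments from two raters (scikit-learn provides an implementation):

      from sklearn.metrics import cohen_kappa_score

      rater_a = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]    # hypothetical cartilage grades
      rater_b = [0, 1, 2, 3, 3, 1, 0, 2, 2, 1]
      print(cohen_kappa_score(rater_a, rater_b))  # chance-corrected agreement, ~0.73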

  2. Quantifying precision of in situ length and weight measurements of fish

    USGS Publications Warehouse

    Gutreuter, S.; Krzoska, D.J.

    1994-01-01

    We estimated and compared errors in field-made (in situ) measurements of lengths and weights of fish. We made three measurements of length and weight on each of 33 common carp Cyprinus carpio, and on each of a total of 34 bluegills Lepomis macrochirus and black crappies Pomoxis nigromaculatus. Maximum total lengths of all fish were measured to the nearest 1 mm on a conventional measuring board. The bluegills and black crappies (85–282 mm maximum total length) were weighed to the nearest 1 g on a 1,000-g spring-loaded scale. The common carp (415–600 mm maximum total length) were weighed to the nearest 0.05 kg on a 20-kg spring-loaded scale. We present a statistical model for comparison of coefficients of variation of length (Cl) and weight (Cw). Expected Cl was near zero and constant across mean length, indicating that length can be measured with good precision in the field. Expected Cw decreased with increasing mean length, and was larger than expected Cl by 5.8 to over 100 times for the bluegills and black crappies, and by 3 to over 20 times for the common carp. Unrecognized in situ weighing errors bias the apparent content of unique information in weight, which is the information not explained by either length or measurement error. We recommend procedures to circumvent effects of weighing errors, including elimination of unnecessary weighing from routine monitoring programs. In situ weighing must be conducted with greater care than is common if the content of unique and nontrivial information in weight is to be correctly identified.
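
    The coefficient-of-variation comparison at the heart of the model can be sketched as follows; the repeat measurements below are fabricated for illustration:

      import numpy as np

      lengths = np.array([[252, 251, 252], [118, 119, 118]])  # mm, 3 repeats per fish
      weights = np.array([[246, 251, 240], [22, 24, 21]])     # g, same fish

      def cv(x: np.ndarray) -> np.ndarray:
          """Per-fish coefficient of variation across repeat measurements."""
          return x.std(axis=1, ddof=1) / x.mean(axis=1)

      print("C_l:", cv(lengths))   # near zero: field lengths are precise
      print("C_w:", cv(weights))   # several-fold larger, as the study reports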

  3. Impact of a Higgs boson at a mass of 126 GeV on the standard model with three and four fermion generations.

    PubMed

    Eberhardt, Otto; Herbert, Geoffrey; Lacker, Heiko; Lenz, Alexander; Menzel, Andreas; Nierste, Ulrich; Wiebusch, Martin

    2012-12-14

    We perform a comprehensive statistical analysis of the standard model (SM) with three and four generations using the latest Higgs search results from LHC and Tevatron, the electroweak precision observables measured at LEP and SLD, and the latest determinations of MW, mt, and αs. For the three-generation case we analyze the tensions in the electroweak fit by removing individual observables from the fit and comparing their predicted values with the measured ones. In particular, we discuss the impact of the Higgs search results on the deviations of the electroweak precision observables from their best-fit values. Our indirect prediction of the top mass is mt = 175.7 (+3.0/−2.2) GeV at 68.3% C.L., which is in good agreement with the direct measurement. We also plot the preferred area in the MW-mt plane. The best-fit Higgs boson mass is 126.0 GeV. For the case of the SM with a perturbative sequential fourth fermion generation (SM4) we discuss the deviations of the Higgs signal strengths from their best-fit values. The H → γγ signal strength now disagrees with its best-fit SM4 value at more than 4σ. We perform a likelihood-ratio test to compare the SM and SM4 and show that the SM4 is excluded at 5.3σ. Without the Tevatron data on H → bb the significance drops to 4.8σ.

  4. Survival and aging of a small laboratory population of a marine mollusc, Aplysia californica.

    PubMed

    Hirsch, H R; Peretz, B

    1984-09-01

    In an investigation of the postmetamorphic survival of a population of 112 Aplysia californica, five animals died before 100 days of age and five after 200 days. The number of survivors among the 102 animals which died between 100 and 220 days declined approximately linearly with age. The median age at death was 155 days. The animals studied were those that died of natural causes within a laboratory population that was established to provide Aplysia for sacrifice in an experimental program. Actuarial separation of the former group from the latter was justified by theoretical consideration. Age-specific mortality rates were calculated from the survival data. Statistical fluctuation arising from the small size of the population was reduced by grouping the data in bins of unequal age duration. The durations were specified such that each bin contained approximately the same number of data points. An algorithm for choosing the number of data bins was based on the requirement that the precision with which the age of a group is determined should equal the precision with which the number of deaths in the groups is known. The Gompertz and power laws of mortality were fitted to the age-specific mortality-rate data with equally good results. The positive values of slope associated with the mortality-rate functions as well as the linear shape of the curve of survival provide actuarial evidence that Aplysia age. Since Aplysia grow linearly without approaching a limiting size, the existence of senescence indicates especially clearly the falsity of Bidder's hypothesis that aging is a by-product of the cessation of growth.
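
    The two mortality laws fitted above have the standard forms (written generically; A, α, and b are fit parameters):

      \[
        \mu(t) = A\,e^{\alpha t} \quad\text{(Gompertz)}
        \qquad
        \mu(t) = A\,t^{b} \quad\text{(power law)}
      \]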

  5. Precise time transfer using MKIII VLBI technology

    NASA Technical Reports Server (NTRS)

    Johnston, K. J.; Buisson, J. A.; Lister, M. J.; Oaks, O. J.; Spencer, J. H.; Waltman, W. B.; Elgered, G.; Lundqvist, G.; Rogers, A. E. E.; Clark, T. A.

    1984-01-01

    It is well known that Very Long Baseline Interferometry (VLBI) is capable of precise time synchronization at subnanosecond levels. This paper describes a demonstration of clock synchronization using the MKIII VLBI system. The results are compared with clock synchronization by traveling cesium clocks and by GPS. The comparison agrees within the errors of the portable clock (±5 ns) and GPS (±30 ns) systems. MKIII technology appears to be capable of clock synchronization at subnanosecond levels and to be a very good benchmark system against which future time synchronization systems can be evaluated.

  6. International time and frequency comparison using very long baseline interferometer

    NASA Astrophysics Data System (ADS)

    Hama, Shinichi; Yoshino, Taizoh; Kiuchi, Hitoshi; Morikawa, Takao; Sato, Tokuo

    VLBI time comparison experiments using the Kashima station of the Radio Research Laboratory and the Richmond and Maryland Point stations of the U.S. Naval Observatory have been performed since April 1985. A precision of 0.2 ns for the clock offset and 0.2 ps/s for the clock rate has been achieved, and good agreement with GPS results has been found for the clock offset. VLBI time and frequency comparison achieves much higher precision than is possible with conventional portable-clock or Loran-C methods.

  7. Study of Fricke-gel dosimeter calibration for attaining precise measurements of the absorbed dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liosi, Giulia Maria; Benedini, Sara; Giacobbo, Francesca

    2015-07-01

    A method has been studied for attaining, with good precision, absolute measurements of the spatial distribution of the absorbed dose by means of the Fricke gelatin Xylenol Orange dosimetric system. To this end, the dose response to subsequent irradiations was analyzed. The proposed modality is based on a pre-irradiation of each single dosimeter in a uniform field with a known dose, in order to extrapolate a calibration image for a subsequent non-uniform irradiation with an unknown dose to be measured.

  8. Error in telemetry studies: Effects of animal movement on triangulation

    USGS Publications Warehouse

    Schmutz, Joel A.; White, Gary C.

    1990-01-01

    We used Monte Carlo simulations to investigate the effects of animal movement on error of estimated animal locations derived from radio-telemetry triangulation of sequentially obtained bearings. Simulated movements of 0-534 m resulted in up to 10-fold increases in average location error but <10% decreases in location precision when observer-to-animal distances were <1,000 m. Location error and precision were minimally affected by censorship of poor locations with Chi-square goodness-of-fit tests. Location error caused by animal movement can only be eliminated by taking simultaneous bearings.
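
    A minimal version of such a simulation is sketched below: the animal moves between the two sequential bearings, the bearings are intersected as if the animal were stationary, and the resulting location error is tallied. The geometry, movement model, and distances are illustrative assumptions, not the authors' exact design.

        # Monte Carlo sketch of how movement between sequential bearings
        # inflates triangulation error; all numbers are illustrative.
        import numpy as np

        rng = np.random.default_rng(1)

        def bearing_intersection(s1, th1, s2, th2):
            """Intersect two rays s_i + t_i * (cos th_i, sin th_i)."""
            u1 = np.array([np.cos(th1), np.sin(th1)])
            u2 = np.array([np.cos(th2), np.sin(th2)])
            t = np.linalg.solve(np.column_stack((u1, -u2)), s2 - s1)
            return s1 + t[0] * u1

        obs1, obs2 = np.array([0.0, 0.0]), np.array([800.0, 0.0])
        errors = []
        for _ in range(2000):
            p0 = np.array([400.0, 600.0])              # true initial location (m)
            p1 = p0 + rng.normal(0, 100, 2)            # movement between bearings
            th1 = np.arctan2(p0[1] - obs1[1], p0[0] - obs1[0])  # bearing at p0
            th2 = np.arctan2(p1[1] - obs2[1], p1[0] - obs2[0])  # bearing after move
            est = bearing_intersection(obs1, th1, obs2, th2)
            errors.append(np.linalg.norm(est - p0))
        print("mean location error (m):", np.mean(errors))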

  9. High precision measurement of the proton charge radius: The PRad experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meziane, Mehdi

    2013-11-01

    The recent high-precision measurements of the proton charge radius performed at PSI from the muonic hydrogen Lamb shift puzzled the hadronic physics community. A value of 0.8418 ± 0.0007 fm was extracted, which is 7σ smaller than the previous determinations obtained from electron-proton scattering experiments and from precision spectroscopy of electronic hydrogen. An additional extraction of the proton charge radius from electron scattering at Mainz is also in good agreement with these "electronic" determinations. An independent measurement of the proton charge radius from unpolarized elastic ep scattering using a magnetic-spectrometer-free method was proposed and fully approved at Jefferson Laboratory in June 2012. This novel technique uses the high-precision calorimeter HyCal and a windowless hydrogen gas target, which makes possible the extraction of the charge radius at very forward angles and thus very low momentum transfer Q², down to 10⁻⁴ (GeV/c)², with an unprecedented sub-percent precision for this type of experiment. In this paper, recent progress on the extraction of the proton charge radius is reviewed and the new high-precision PRad experiment is presented.

  10. Does bad inference drive out good?

    PubMed

    Marozzi, Marco

    2015-07-01

    The (mis)use of statistics in practice is widely debated, and a field where the debate is particularly active is medicine. Many scholars emphasize that a large proportion of published medical research contains statistical errors. It has been noted that top-class journals like Nature Medicine and The New England Journal of Medicine publish a considerable proportion of papers that contain statistical errors and poorly document the application of statistical methods. This paper joins the debate on the (mis)use of statistics in the medical literature. Even though the validation process of a statistical result may be quite elusive, a careful assessment of underlying assumptions is central in medicine as well as in other fields where a statistical method is applied. Unfortunately, such an assessment is missing in many papers, including those published in top-class journals. In this paper, it is shown that nonparametric methods are good alternatives to parametric methods when the assumptions of the latter are not satisfied. A key step toward solving the problem of the misuse of statistics in the medical literature is for all journals to have their own statisticians review the statistical method/analysis section of each submitted paper. © 2015 Wiley Publishing Asia Pty Ltd.

  11. How Do Statistical Detection Methods Compare to Entropy Measures

    DTIC Science & Technology

    2012-08-28

    October 2001. It is known as the RS attack, or "Reliable Detection of LSB Steganography in Grayscale and Color Images". The algorithm they use is very...precise for the detection of pseudo-random LSB steganography. Its precision varies with the image, but its reference value is 0.005 bits by...Jessica Fridrich, Miroslav Goljan, Rui Du, "Detecting LSB Steganography in Color and Gray-Scale Images," IEEE Multimedia, vol. 8, no. 4, pp. 22-28, Oct

  12. Corpus and Method for Identifying Citations in Non-Academic Text (Open Access, Publisher’s Version)

    DTIC Science & Technology

    2014-05-31

    patents, train a CRF classifier to find new citations, and apply a reranker to incorporate non-local information. Our best system achieves 0.83 F-score on...report precision, recall, and F-scores at the chunk level. CRF training and decoding is performed with the CRF++ package (http://crfpp.sourceforge.net) using its default setting. 5.1...only obtain a very small number of training examples for statistical rerankers. Precision Recall F-score TEXT 0.7997 0.7805

  13. Statistical errors and systematic biases in the calibration of the convective core overshooting with eclipsing binaries. A case study: TZ Fornacis

    NASA Astrophysics Data System (ADS)

    Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.

    2017-04-01

    Context. Recently published work has made high-precision fundamental parameters available for the binary system TZ Fornacis, making it an ideal target for the calibration of stellar models. Aims: Relying on these observations, we attempt to constrain the initial helium abundance, the age and the efficiency of the convective core overshooting. Our main aim is to point out the biases that arise in the results when some sources of uncertainty are not accounted for. Methods: We adopt the SCEPtER pipeline, a maximum likelihood technique based on fine grids of stellar models computed for various values of metallicity, initial helium abundance and overshooting efficiency by means of two independent stellar evolutionary codes, namely FRANEC and MESA. Results: Besides the degeneracy between the estimated age and overshooting efficiency, we found the existence of multiple independent groups of solutions. The best one suggests a system of age 1.10 ± 0.07 Gyr composed of a primary star in the central helium-burning stage and a secondary in the sub-giant branch (SGB). The resulting initial helium abundance is consistent with a helium-to-metal enrichment ratio of ΔY/ΔZ = 1; the core overshooting parameter is β = 0.15 ± 0.01 for FRANEC and fov = 0.013 ± 0.001 for MESA. The second class of solutions, characterised by a worse goodness-of-fit, still suggests a primary star in the central helium-burning stage but a secondary in the overall contraction phase, at the end of the main sequence (MS). In this case, the FRANEC grid provides an age of Gyr and a core overshooting parameter , while the MESA grid gives 1.23 ± 0.03 Gyr and fov = 0.025 ± 0.003. We analyse the impact on the results of a larger, but typical, mass uncertainty and of neglecting the uncertainty in the initial helium content of the system. We show that very precise mass determinations, with uncertainties of a few thousandths of a solar mass, are required to obtain reliable determinations of stellar parameters, as mass errors larger than approximately 1% lead to estimates that are not only less precise but also biased. Moreover, we show that a fit obtained with a grid of models computed at a fixed ΔY/ΔZ - thus neglecting the current uncertainty in the initial helium content of the system - can provide severely biased age and overshooting estimates. The possibility of independent overshooting efficiencies for the two stars of the system is also explored. Conclusions: The present analysis confirms that constraining the core overshooting parameter by means of binary systems is a very difficult task, requiring an observational precision still rarely achieved and a robust statistical treatment of the error sources.

  14. International air cargo operations and gateways : their emerging importance to the state of Texas.

    DOT National Transportation Integrated Search

    2011-07-01

    Air cargo transport has become particularly important in today's expanding global economy for the movement of high-value goods such as electronics, computer components, precision equipment, medical supplies, auto parts, and perishables. Air car...

  15. Selecting the optimum plot size for a California design-based stream and wetland mapping program.

    PubMed

    Lackey, Leila G; Stein, Eric D

    2014-04-01

    Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km²) for three geographic regions of California. Simulation results showed smaller plot sizes (1 and 4 km²) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to selection of 4 km² for the California status and trends program.
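
    The simulated-sampling idea can be sketched as follows: generate a synthetic landscape of stream density, draw random plots of a given size under a fixed total mapping budget, expand the plot mean to a regional total, and compare relative standard errors across plot sizes. Everything here (landscape model, budget, grid size) is an illustrative assumption, not the study's data.

        # Sketch: compare the precision of design-based extent estimates
        # for different plot sizes at a fixed total mapped area.
        import numpy as np

        rng = np.random.default_rng(2)
        region = rng.gamma(0.3, 2.0, size=(120, 120))   # stream km per 1-km2 cell
        true_total = region.sum()

        def rel_se(plot_side_km, budget_km2, reps=500):
            n = budget_km2 // plot_side_km**2           # plots affordable at this size
            ests = []
            for _ in range(reps):
                xs = rng.integers(0, 120 - plot_side_km, n)
                ys = rng.integers(0, 120 - plot_side_km, n)
                dens = [region[x:x + plot_side_km, y:y + plot_side_km].sum()
                        / plot_side_km**2 for x, y in zip(xs, ys)]
                ests.append(np.mean(dens) * region.size)  # expand to regional total
            return np.std(ests) / true_total

        for side in (1, 2, 3, 4):                        # 1, 4, 9, 16 km2 plots
            print(f"{side**2:>2} km2 plots: relative SE = {rel_se(side, 144):.3f}")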

  16. Spectrophotometric and spectrofluorimetric methods for determination of certain biologically active phenolic drugs in their bulk powders and different pharmaceutical formulations.

    PubMed

    Omar, Mahmoud A; Badr El-Din, Kalid M; Salem, Hesham; Abdelmageed, Osama H

    2018-03-05

    Two simple and sensitive spectrophotometric and spectrofluorimetric methods for the determination of terbutaline sulfate, fenoterol hydrobromide, etilefrine hydrochloride, isoxsuprine hydrochloride, ethamsylate, and doxycycline hyclate have been developed. Both methods are based on the oxidation of the cited drugs with cerium(IV) in acid medium. The spectrophotometric method is based on measurement of the absorbance difference (ΔA), which represents the excess cerium(IV), at 317 nm for each drug. The spectrofluorimetric method is based on measurement of the fluorescence of the produced cerium(III) at an emission wavelength of 354 nm (λexcitation = 255 nm) over the concentrations studied for each drug. For both methods, the variables affecting the reactions were carefully investigated and the conditions were optimized. Linear relationships were found between either ΔA or the fluorescence of the produced cerium(III) and the concentration of the studied drugs, in general concentration ranges of 2.0-24.0 μg mL⁻¹ and 20.0-24.0 ng mL⁻¹, with good correlation coefficients in the ranges 0.9990-0.9999 and 0.9990-0.9993 for the spectrophotometric and spectrofluorimetric methods, respectively. The limits of detection and quantitation of the spectrophotometric method were in the general ranges 0.190-0.787 and 0.634-2.624 μg mL⁻¹, respectively. For the spectrofluorimetric method, the limits of detection and quantitation were in the general ranges 4.77-9.52 and 15.91-31.74 ng mL⁻¹, respectively. The stoichiometry of the reaction was determined, and the reaction pathways were postulated. The analytical performance of the methods, in terms of accuracy and precision, was statistically validated and the results obtained were satisfactory. The methods have been successfully applied to the determination of the cited drugs in their commercial pharmaceutical formulations. Statistical comparison of the results with reference methods showed excellent agreement and no significant difference in accuracy or precision. Copyright © 2017 Elsevier B.V. All rights reserved.
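
    For readers who want to reproduce the validation arithmetic, the sketch below fits a linear calibration and applies the common ICH-style estimators LOD = 3.3σ/slope and LOQ = 10σ/slope. The concentration-signal pairs are invented, and whether the authors used exactly these estimators is an assumption.

        # Hedged sketch: linear calibration with limits of detection and
        # quantitation from the ICH-style formulas; data are invented.
        import numpy as np

        conc = np.array([2.0, 6.0, 10.0, 14.0, 18.0, 24.0])            # ug/mL
        signal = np.array([0.081, 0.242, 0.405, 0.561, 0.724, 0.962])  # made-up dA

        slope, intercept = np.polyfit(conc, signal, 1)
        resid = signal - (slope * conc + intercept)
        sigma = resid.std(ddof=2)          # SD of regression residuals

        r = np.corrcoef(conc, signal)[0, 1]
        print(f"r = {r:.4f}")
        print(f"LOD = {3.3 * sigma / slope:.3f} ug/mL, "
              f"LOQ = {10 * sigma / slope:.3f} ug/mL")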

  17. A new fitting method for measurement of the curvature radius of a short arc with high precision

    NASA Astrophysics Data System (ADS)

    Tao, Wei; Zhong, Hong; Chen, Xiao; Selami, Yassine; Zhao, Hui

    2018-07-01

    The measurement of an object with a short arc is widely encountered in scientific research and industrial production. The least squares fitting method, the classic method of arc fitting, suffers from low precision when used to measure arcs with small central angles and few sampling points: the shorter the arc, the lower the measurement accuracy. In order to improve the measurement precision of short arcs, a parameter-constrained fitting method based on a four-parameter circle equation is proposed in this paper. A generalized Lagrange function is introduced, together with gradient-descent optimization, to reduce the influence of noise. Simulation and experimental results showed that the proposed method retains high precision even when the central angle drops below 4° and good robustness when the noise standard deviation rises to 0.4 mm. The new fitting method is suitable for high-precision measurement of short arcs with small central angles, without any prior information.
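
    For context, the sketch below implements the classic algebraic least-squares circle fit (the Kåsa method) that the proposed constrained method improves upon; run on a noisy 4° arc, it illustrates the short-arc baseline the paper addresses. The arc parameters and noise level are illustrative, not from the paper.

        # Baseline sketch: algebraic least-squares (Kasa) circle fit applied
        # to a short, noisy arc; radius and noise values are illustrative.
        import numpy as np

        rng = np.random.default_rng(3)
        theta = np.deg2rad(np.linspace(0, 4, 40))        # a 4-degree arc
        R_true = 500.0
        x = R_true * np.cos(theta) + rng.normal(0, 0.4, theta.size)
        y = R_true * np.sin(theta) + rng.normal(0, 0.4, theta.size)

        # Solve x^2 + y^2 = 2*a*x + 2*b*y + c in the least-squares sense;
        # the circle center is (a, b) and the radius is sqrt(c + a^2 + b^2).
        A = np.column_stack((2 * x, 2 * y, np.ones_like(x)))
        a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
        R_est = np.sqrt(c + a**2 + b**2)
        print(f"fitted radius = {R_est:.1f} (true {R_true})")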

  18. Rapid computation of single PET scan rest-stress myocardial blood flow parametric images by table look up.

    PubMed

    Guehl, Nicolas J; Normandin, Marc D; Wooten, Dustin W; Rozen, Guy; Ruskin, Jeremy N; Shoup, Timothy M; Woo, Jonghye; Ptaszek, Leon M; Fakhri, Georges El; Alpert, Nathaniel M

    2017-09-01

    We have recently reported a method for measuring rest-stress myocardial blood flow (MBF) using a single, relatively short, PET scan session. The method requires two IV tracer injections, one to initiate rest imaging and one at peak stress. We previously validated absolute flow quantitation in mL/min/cc for standard bull's eye, segmental analysis. In this work, we extend the method for fast computation of rest-stress MBF parametric images. We provide an analytic solution to the single-scan rest-stress flow model, which is then solved using a two-dimensional table lookup method (LM). Simulations were performed to compare the accuracy and precision of the lookup method with the original nonlinear method (NLM). The method was then applied to 16 single-scan rest/stress measurements made in 12 pigs: seven studied after infarction of the left anterior descending artery (LAD) territory, and nine imaged in the native state. Parametric maps of rest and stress MBF, as well as maps of left (fLV) and right (fRV) ventricular spill-over fractions, were generated. Regions of interest (ROIs) for 17 myocardial segments were defined in bull's eye fashion on the parametric maps. The mean of each ROI was then compared to the rest (K1r) and stress (K1s) MBF estimates obtained from fitting the 17 regional TACs with the NLM. In simulation, the LM performed as well as the NLM in terms of precision and accuracy, and the simulation did not show any bias introduced by the use of a predefined two-dimensional lookup table. In experimental data, parametric maps demonstrated good statistical quality and the LM was computationally much more efficient than the original NLM. Very good agreement was obtained between the mean MBF calculated on the parametric maps for each of the 17 ROIs and the regional MBF values estimated by the NLM (K1map,LM = 1.019 × K1ROI,NLM + 0.019, R² = 0.986; mean difference = 0.034 ± 0.036 mL/min/cc). We developed a table lookup method for fast computation of parametric images of rest and stress MBF. Our results show the feasibility of obtaining good-quality MBF maps using modest computational resources, demonstrating that the method can be applied in a clinical environment to obtain fully quantitative MBF information. © 2017 American Association of Physicists in Medicine.
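
    The sketch below illustrates the generic table-lookup idea: precompute model outputs over a two-dimensional grid of rest and stress flow values, then estimate a voxel's parameters by finding the nearest grid entry instead of running a nonlinear fit. The model function here is a stand-in, not the paper's kinetic model or its analytic solution.

        # Generic sketch of parameter estimation by 2-D table lookup; the
        # "model" is a hypothetical stand-in for a kinetic model.
        import numpy as np

        def model_summary(k_rest, k_stress):
            """Hypothetical 2-number summary a kinetic model would predict."""
            return np.array([k_rest * 0.9 + 0.05, k_stress * 0.8 + 0.1])

        k_r = np.linspace(0.2, 2.0, 200)
        k_s = np.linspace(0.2, 4.0, 200)
        KR, KS = np.meshgrid(k_r, k_s, indexing="ij")
        table = np.stack(model_summary(KR, KS), axis=-1)  # (200, 200, 2) lookup table

        measured = np.array([0.95, 2.1])                  # e.g. one voxel's summary
        idx = np.unravel_index(
            np.argmin(((table - measured) ** 2).sum(axis=-1)), KR.shape)
        print("lookup estimate: K1_rest=%.2f, K1_stress=%.2f" % (KR[idx], KS[idx]))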

  19. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.

    2014-04-15

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.
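
    The goal of such a scheme can be illustrated with the standard precision-based sample-size formula n = (z·σ/(δ·ED))², for confidence quantile z and relative precision δ. This is a simplification of the paper's Lagrange-multiplier machinery, and the standard deviation below is a hypothetical value chosen only so the output is of the reported order of magnitude.

        # Sketch: smallest n so a 95% CI half-width stays within 5% of the
        # anticipated effective dose; sd is a hypothetical measurement SD.
        import math
        from scipy.stats import norm

        def n_required(mean_ed, sd, precision=0.05, confidence=0.95):
            z = norm.ppf(0.5 + confidence / 2.0)
            return math.ceil((z * sd / (precision * mean_ed)) ** 2)

        for ed in (4.0, 10.0):      # anticipated effective dose in mSv
            print(ed, "mSv ->", n_required(ed, sd=0.56), "scans")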

  20. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    PubMed Central

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-01-01

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence. PMID:24694150

  1. APPLICATION OF STATISTICAL ENERGY ANALYSIS TO VIBRATIONS OF MULTI-PANEL STRUCTURES.

    DTIC Science & Technology

    cylindrical shell are compared with predictions obtained from statistical energy analysis. Generally good agreement is observed. The flow of mechanical...the coefficients of proportionality between power flow and average modal energy difference, which one must know in order to apply statistical energy analysis. No

  2. Influence of Running on Pistol Shot Hit Patterns.

    PubMed

    Kerkhoff, Wim; Bolck, Annabel; Mattijssen, Erwin J A T

    2016-01-01

    In shooting scene reconstructions, risk assessment of the situation can be important for the legal system. Shooting accuracy and precision, and thus risk assessment, might be correlated with the shooter's physical movement and experience. The hit patterns of inexperienced and experienced shooters, while shooting stationary (10 shots) and in running motion (10 shots) with a semi-automatic pistol, were compared visually (with confidence ellipses) and statistically. The results show a significant difference in precision (circumference of the hit patterns) between stationary shots and shots fired in motion for both inexperienced and experienced shooters. The decrease in precision for all shooters was significantly larger in the y-direction than in the x-direction. The precision of the experienced shooters is overall better than that of the inexperienced shooters. No significant change in accuracy (shift in the hit pattern center) between stationary shots and shots fired in motion can be seen for all shooters. © 2015 American Academy of Forensic Sciences.
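
    The confidence ellipses used for the visual comparison can be computed from the sample covariance of the hit coordinates, as sketched below on synthetic hits; the 95% scaling comes from the chi-squared quantile with two degrees of freedom. Units and spreads are illustrative.

        # Sketch: 95% confidence-ellipse axes for a 2-D hit pattern from the
        # eigendecomposition of its covariance; hit coordinates are synthetic.
        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(4)
        hits = rng.normal(0, [2.0, 5.0], size=(10, 2))   # more vertical (y) spread

        cov = np.cov(hits, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)
        k = chi2.ppf(0.95, df=2)                         # 95% coverage, 2 dof
        axes = np.sqrt(k * evals)                        # ellipse semi-axis lengths
        print("semi-axes:", axes, "major-axis direction:", evecs[:, 1])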

  3. Status and outlook of CHIP-TRAP: The Central Michigan University high precision Penning trap

    NASA Astrophysics Data System (ADS)

    Redshaw, M.; Bryce, R. A.; Hawks, P.; Gamage, N. D.; Hunt, C.; Kandegedara, R. M. E. B.; Ratnayake, I. S.; Sharp, L.

    2016-06-01

    At Central Michigan University we are developing a high-precision Penning trap mass spectrometer (CHIP-TRAP) that will focus on measurements with long-lived radioactive isotopes. CHIP-TRAP will consist of a pair of hyperbolic precision-measurement Penning traps, and a cylindrical capture/filter trap in a 12 T magnetic field. Ions will be produced by external ion sources, including a laser ablation source, and transported to the capture trap at low energies enabling ions of a given m / q ratio to be selected via their time-of-flight. In the capture trap, contaminant ions will be removed with a mass-selective rf dipole excitation and the ion of interest will be transported to the measurement traps. A phase-sensitive image charge detection technique will be used for simultaneous cyclotron frequency measurements on single ions in the two precision traps, resulting in a reduction in statistical uncertainty due to magnetic field fluctuations.

  4. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.

    2008-08-11

    Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures serve as quality assurance routines that allow verification of the equipment's geometrical accuracy and precision. In this work, mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, isocenter accuracy was shown to be better than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with mask. All uncertainties were below tolerances. The largest contribution to mechanical variation is due to table rotation, so it is important to correct variations using a localization frame with printed overlays. Knowledge of the mechanical precision allows these statistical errors to be accounted for in the treatment-planning volume margins.

  5. Liquid-fuel valve with precise throttling control

    NASA Technical Reports Server (NTRS)

    Mcdougal, A. R.; Porter, R. N.; Riebling, R. W.

    1971-01-01

    Prototype liquid-fuel valve performs on-off and throttling functions in vacuum without component cold-welding or excessive leakage. Valve design enables simple and rapid disassembly and parts replacement and operates with short working stroke, providing maximum throttling sensitivity commensurate with good control.

  6. Validation of a Spectral Method for Quantitative Measurement of Color in Protein Drug Solutions.

    PubMed

    Yin, Jian; Swartz, Trevor E; Zhang, Jian; Patapoff, Thomas W; Chen, Bartolo; Marhoul, Joseph; Shih, Norman; Kabakoff, Bruce; Rahimi, Kimia

    2016-01-01

    A quantitative spectral method has been developed to precisely measure the color of protein solutions. In this method, a spectrophotometer is utilized for capturing the visible absorption spectrum of a protein solution, which can then be converted to color values (L*a*b*) that represent human perception of color in a quantitative three-dimensional space. These quantitative values (L*a*b*) allow for calculating the best match of a sample's color to a European Pharmacopoeia reference color solution. In order to qualify this instrument and assay for use in clinical quality control, a technical assessment was conducted to evaluate the assay suitability and precision. Setting acceptance criteria for this study required development and implementation of a unique statistical method for assessing precision in 3-dimensional space. Different instruments, cuvettes, protein solutions, and analysts were compared in this study. The instrument accuracy, repeatability, and assay precision were determined. The instrument and assay are found suitable for use in assessing color of drug substances and drug products and is comparable to the current European Pharmacopoeia visual assessment method. In the biotechnology industry, a visual assessment is the most commonly used method for color characterization, batch release, and stability testing of liquid protein drug solutions. Using this method, an analyst visually determines the color of the sample by choosing the closest match to a standard color series. This visual method can be subjective because it requires an analyst to make a judgment of the best match of color of the sample to the standard color series, and it does not capture data on hue and chroma that would allow for improved product characterization and the ability to detect subtle differences between samples. To overcome these challenges, we developed a quantitative spectral method for color determination that greatly reduces the variability in measuring color and allows for a more precise understanding of color differences. In this study, we established a statistical method for assessing precision in 3-dimensional space and demonstrated that the quantitative spectral method is comparable with respect to precision and accuracy to the current European Pharmacopoeia visual assessment method. © PDA, Inc. 2016.
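
    The final step of such a method, converting CIE XYZ tristimulus values to L*a*b*, follows the standard CIE formulas sketched below. Reducing the measured absorption spectrum to XYZ (integration against the standard observer functions) is omitted here, and the input values and D65 white point are illustrative assumptions about the setup.

        # Sketch of the standard CIE XYZ -> L*a*b* conversion; input XYZ
        # values are invented, and a D65 reference white is assumed.
        import numpy as np

        def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):   # D65 white point
            t = np.asarray(xyz) / np.asarray(white)
            f = np.where(t > (6/29)**3, np.cbrt(t), t / (3*(6/29)**2) + 4/29)
            L = 116*f[1] - 16
            a = 500*(f[0] - f[1])
            b = 200*(f[1] - f[2])
            return L, a, b

        print(xyz_to_lab([92.0, 97.5, 91.0]))   # e.g. a faintly yellow solution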

  7. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
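
    The Bland-Altman part of the precision assessment reduces to a bias and limits-of-agreement computation, sketched below on invented paired measurements (not the study's data).

        # Sketch: Bland-Altman bias and 95% limits of agreement between
        # paired measurements from two model types; values are invented.
        import numpy as np

        plaster = np.array([35.2, 42.1, 28.7, 51.3, 46.8])   # mm
        put     = np.array([35.4, 42.5, 28.6, 51.6, 47.1])   # mm

        diff = put - plaster
        bias = diff.mean()
        loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
        print(f"bias = {bias:.2f} mm, 95% LoA = {loa[0]:.2f} to {loa[1]:.2f} mm")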

  8. Electroweak precision data and gravitino dark matter

    NASA Astrophysics Data System (ADS)

    Heinemeyer, S.

    2007-11-01

    Electroweak precision measurements can provide indirect information about the possible scale of supersymmetry already at the present level of accuracy. We review present-day sensitivities of precision data in mSUGRA-type models with the gravitino as the lightest supersymmetric particle (LSP). The χ² fit is based on M(W), sin²θ(eff), (g-2)(μ), BR(b → sγ) and the lightest MSSM Higgs boson mass, M(h). We find indications for relatively light soft supersymmetry-breaking masses, offering good prospects for the LHC and the ILC, and in some cases also for the Tevatron.

  9. High precision AlGaAsSb ridge-waveguide etching by in situ reflectance monitored ICP-RIE

    NASA Astrophysics Data System (ADS)

    Tran, N. T.; Breivik, Magnus; Patra, S. K.; Fimland, Bjørn-Ove

    2014-05-01

    GaSb-based semiconductor diode lasers are promising candidates for light sources working in the mid-infrared wavelength region of 2-5 μm. Using edge emitting lasers with ridge-waveguide structure, light emission with good beam quality can be achieved. Fabrication of the ridge waveguide requires precise etch stop control for optimal laser performance. Simulation results are presented that show the effect of increased confinement in the waveguide when the etch depth is well-defined. In situ reflectance monitoring with a 675 nm-wavelength laser was used to determine the etch stop with high accuracy. Based on the simulations of laser reflectance from a proposed sample, the etching process can be controlled to provide an endpoint depth precision within +/- 10 nm.

  10. Application of SPM interferometry in MEMS vibration measurement

    NASA Astrophysics Data System (ADS)

    Tang, Chaowei; He, Guotian; Xu, Changbiao; Zhao, Lijuan; Hu, Jun

    2007-12-01

    The measurement of cantilever resonant frequency plays an important role in MEMS (Micro Electro Mechanical Systems) research. SPM interferometry is a high-precision optical measurement technique that can be used to measure physical quantities such as vibration, displacement, and surface profile. Hence, in this paper we propose applying SPM interferometry to measure the vibration of a MEMS cantilever; in the experiment, the vibration of the MEMS cantilever was driven by a light source. This vibration was measured with nanometer precision. The characteristics of MEMS cantilever vibration under optical excitation were thus obtained, and the measurement principle is analyzed. Through a feedback control loop, the method eliminates the influence of external interference and light-intensity changes on measurement precision. Experimental results show that this measurement method performs well.

  11. Machine vision system for measuring conifer seedling morphology

    NASA Astrophysics Data System (ADS)

    Rigney, Michael P.; Kranzler, Glenn A.

    1995-01-01

    A PC-based machine vision system providing rapid measurement of bare-root tree seedling morphological features has been designed. The system uses backlighting and a 2048-pixel line- scan camera to acquire images with transverse resolutions as high as 0.05 mm for precise measurement of stem diameter. Individual seedlings are manually loaded on a conveyor belt and inspected by the vision system in less than 0.25 seconds. Designed for quality control and morphological data acquisition by nursery personnel, the system provides a user-friendly, menu-driven graphical interface. The system automatically locates the seedling root collar and measures stem diameter, shoot height, sturdiness ratio, root mass length, projected shoot and root area, shoot-root area ratio, and percent fine roots. Sample statistics are computed for each measured feature. Measurements for each seedling may be stored for later analysis. Feature measurements may be compared with multi-class quality criteria to determine sample quality or to perform multi-class sorting. Statistical summary and classification reports may be printed to facilitate the communication of quality concerns with grading personnel. Tests were conducted at a commercial forest nursery to evaluate measurement precision. Four quality control personnel measured root collar diameter, stem height, and root mass length on each of 200 conifer seedlings. The same seedlings were inspected four times by the machine vision system. Machine stem diameter measurement precision was four times greater than that of manual measurements. Machine and manual measurements had comparable precision for shoot height and root mass length.

  12. Precision Measurement of the e + e − → Λ c + Λ ¯ c − Cross Section Near Threshold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ablikim, M.; Achasov, M. N.; Ahmed, S.

    2018-03-01

    The cross section of the e+e− → Λc+Λ̄c− process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. The non-zero cross section near the Λc+Λ̄c− production threshold is cleared. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.

  13. Precision Measurement of the e^{+}e^{-}→Λ_{c}^{+}Λ[over ¯]_{c}^{-} Cross Section Near Threshold.

    PubMed

    Ablikim, M; Achasov, M N; Ahmed, S; Albrecht, M; Alekseev, M; Amoroso, A; An, F F; An, Q; Bai, J Z; Bai, Y; Bakina, O; Baldini Ferroli, R; Ban, Y; Begzsuren, K; Bennett, D W; Bennett, J V; Berger, N; Bertani, M; Bettoni, D; Bianchi, F; Boger, E; Boyko, I; Briere, R A; Cai, H; Cai, X; Cakir, O; Calcaterra, A; Cao, G F; Cetin, S A; Chai, J; Chang, J F; Chelkov, G; Chen, G; Chen, H S; Chen, J C; Chen, M L; Chen, P L; Chen, S J; Chen, X R; Chen, Y B; Chu, X K; Cibinetto, G; Cossio, F; Dai, H L; Dai, J P; Dbeyssi, A; Dedovich, D; Deng, Z Y; Denig, A; Denysenko, I; Destefanis, M; De Mori, F; Ding, Y; Dong, C; Dong, J; Dong, L Y; Dong, M Y; Dou, Z L; Du, S X; Duan, P F; Fang, J; Fang, S S; Fang, Y; Farinelli, R; Fava, L; Fegan, S; Feldbauer, F; Felici, G; Feng, C Q; Fioravanti, E; Fritsch, M; Fu, C D; Gao, Q; Gao, X L; Gao, Y; Gao, Y G; Gao, Z; Garillon, B; Garzia, I; Gilman, A; Goetzen, K; Gong, L; Gong, W X; Gradl, W; Greco, M; Gu, M H; Gu, Y T; Guo, A Q; Guo, R P; Guo, Y P; Guskov, A; Haddadi, Z; Han, S; Hao, X Q; Harris, F A; He, K L; He, X Q; Heinsius, F H; Held, T; Heng, Y K; Holtmann, T; Hou, Z L; Hu, H M; Hu, J F; Hu, T; Hu, Y; Huang, G S; Huang, J S; Huang, X T; Huang, X Z; Huang, Z L; Hussain, T; Ikegami Andersson, W; Ji, Q; Ji, Q P; Ji, X B; Ji, X L; Jiang, X S; Jiang, X Y; Jiao, J B; Jiao, Z; Jin, D P; Jin, S; Jin, Y; Johansson, T; Julin, A; Kalantar-Nayestanaki, N; Kang, X S; Kavatsyuk, M; Ke, B C; Khan, T; Khoukaz, A; Kiese, P; Kliemt, R; Koch, L; Kolcu, O B; Kopf, B; Kornicer, M; Kuemmel, M; Kuhlmann, M; Kupsc, A; Kühn, W; Lange, J S; Lara, M; Larin, P; Lavezzi, L; Leithoff, H; Li, C; Li, Cheng; Li, D M; Li, F; Li, F Y; Li, G; Li, H B; Li, H J; Li, J C; Li, J W; Li, Jin; Li, K J; Li, Kang; Li, Ke; Li, Lei; Li, P L; Li, P R; Li, Q Y; Li, W D; Li, W G; Li, X L; Li, X N; Li, X Q; Li, Z B; Liang, H; Liang, Y F; Liang, Y T; Liao, G R; Libby, J; Lin, C X; Lin, D X; Liu, B; Liu, B J; Liu, C X; Liu, D; Liu, F H; Liu, Fang; Liu, Feng; Liu, H B; Liu, H L; Liu, H M; Liu, Huanhuan; Liu, Huihui; Liu, J B; Liu, J Y; Liu, K; Liu, K Y; Liu, Ke; Liu, L D; Liu, Q; Liu, S B; Liu, X; Liu, Y B; Liu, Z A; Liu, Zhiqing; Long, Y F; Lou, X C; Lu, H J; Lu, J G; Lu, Y; Lu, Y P; Luo, C L; Luo, M X; Luo, X L; Lusso, S; Lyu, X R; Ma, F C; Ma, H L; Ma, L L; Ma, M M; Ma, Q M; Ma, T; Ma, X N; Ma, X Y; Ma, Y M; Maas, F E; Maggiora, M; Malik, Q A; Mao, Y J; Mao, Z P; Marcello, S; Meng, Z X; Messchendorp, J G; Mezzadri, G; Min, J; Mitchell, R E; Mo, X H; Mo, Y J; Morales Morales, C; Muchnoi, N Yu; Muramatsu, H; Mustafa, A; Nefedov, Y; Nerling, F; Nikolaev, I B; Ning, Z; Nisar, S; Niu, S L; Niu, X Y; Olsen, S L; Ouyang, Q; Pacetti, S; Pan, Y; Papenbrock, M; Patteri, P; Pelizaeus, M; Pellegrino, J; Peng, H P; Peng, Z Y; Peters, K; Pettersson, J; Ping, J L; Ping, R G; Pitka, A; Poling, R; Prasad, V; Qi, H R; Qi, M; Qi, T Y; Qian, S; Qiao, C F; Qin, N; Qin, X S; Qin, Z H; Qiu, J F; Rashid, K H; Redmer, C F; Richter, M; Ripka, M; Rolo, M; Rong, G; Rosner, Ch; Sarantsev, A; Savrié, M; Schnier, C; Schoenning, K; Shan, W; Shan, X Y; Shao, M; Shen, C P; Shen, P X; Shen, X Y; Sheng, H Y; Shi, X; Song, J J; Song, W M; Song, X Y; Sosio, S; Sowa, C; Spataro, S; Sun, G X; Sun, J F; Sun, L; Sun, S S; Sun, X H; Sun, Y J; Sun, Y K; Sun, Y Z; Sun, Z J; Sun, Z T; Tan, Y T; Tang, C J; Tang, G Y; Tang, X; Tapan, I; Tiemens, M; Tsednee, B; Uman, I; Varner, G S; Wang, B; Wang, B L; Wang, D; Wang, D Y; Wang, Dan; Wang, K; Wang, L L; Wang, L S; Wang, M; Wang, Meng; Wang, P; Wang, P L; Wang, W P; Wang, X F; Wang, Y; Wang, Y D; Wang, Y F; 
Wang, Y Q; Wang, Z; Wang, Z G; Wang, Z Y; Wang, Zongyuan; Weber, T; Wei, D H; Wei, J H; Weidenkaff, P; Wen, S P; Wiedner, U; Wolke, M; Wu, L H; Wu, L J; Wu, Z; Xia, L; Xia, Y; Xiao, D; Xiao, Y J; Xiao, Z J; Xie, Y G; Xie, Y H; Xiong, X A; Xiu, Q L; Xu, G F; Xu, J J; Xu, L; Xu, Q J; Xu, Q N; Xu, X P; Yan, F; Yan, L; Yan, W B; Yan, W C; Yan, Y H; Yang, H J; Yang, H X; Yang, L; Yang, Y H; Yang, Y X; Yang, Yifan; Ye, M; Ye, M H; Yin, J H; You, Z Y; Yu, B X; Yu, C X; Yu, J S; Yuan, C Z; Yuan, Y; Yuncu, A; Zafar, A A; Zeng, Y; Zeng, Z; Zhang, B X; Zhang, B Y; Zhang, C C; Zhang, D H; Zhang, H H; Zhang, H Y; Zhang, J; Zhang, J L; Zhang, J Q; Zhang, J W; Zhang, J Y; Zhang, J Z; Zhang, K; Zhang, L; Zhang, S Q; Zhang, X Y; Zhang, Y; Zhang, Y H; Zhang, Y T; Zhang, Yang; Zhang, Yao; Zhang, Yu; Zhang, Z H; Zhang, Z P; Zhang, Z Y; Zhao, G; Zhao, J W; Zhao, J Y; Zhao, J Z; Zhao, Lei; Zhao, Ling; Zhao, M G; Zhao, Q; Zhao, S J; Zhao, T C; Zhao, Y B; Zhao, Z G; Zhemchugov, A; Zheng, B; Zheng, J P; Zheng, Y H; Zhong, B; Zhou, L; Zhou, Q; Zhou, X; Zhou, X K; Zhou, X R; Zhou, X Y; Zhu, A N; Zhu, J; Zhu, K; Zhu, K J; Zhu, S; Zhu, S H; Zhu, X L; Zhu, Y C; Zhu, Y S; Zhu, Z A; Zhuang, J; Zou, B S; Zou, J H

    2018-03-30

    The cross section of the e^{+}e^{-}→Λ_{c}^{+}Λ[over ¯]_{c}^{-} process is measured with unprecedented precision using data collected with the BESIII detector at sqrt[s]=4574.5, 4580.0, 4590.0 and 4599.5 MeV. The nonzero cross section near the Λ_{c}^{+}Λ[over ¯]_{c}^{-} production threshold is cleared. At center-of-mass energies sqrt[s]=4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λ_{c} polar angle distributions. From these, the Λ_{c} electric over magnetic form-factor ratios (|G_{E}/G_{M}|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.

  14. Precision Measurement of the e+e-→Λc+Λ¯c - Cross Section Near Threshold

    NASA Astrophysics Data System (ADS)

    Ablikim, M.; Achasov, M. N.; Ahmed, S.; Albrecht, M.; Alekseev, M.; Amoroso, A.; An, F. F.; An, Q.; Bai, J. Z.; Bai, Y.; Bakina, O.; Baldini Ferroli, R.; Ban, Y.; Begzsuren, K.; Bennett, D. W.; Bennett, J. V.; Berger, N.; Bertani, M.; Bettoni, D.; Bianchi, F.; Boger, E.; Boyko, I.; Briere, R. A.; Cai, H.; Cai, X.; Cakir, O.; Calcaterra, A.; Cao, G. F.; Cetin, S. A.; Chai, J.; Chang, J. F.; Chelkov, G.; Chen, G.; Chen, H. S.; Chen, J. C.; Chen, M. L.; Chen, P. L.; Chen, S. J.; Chen, X. R.; Chen, Y. B.; Chu, X. K.; Cibinetto, G.; Cossio, F.; Dai, H. L.; Dai, J. P.; Dbeyssi, A.; Dedovich, D.; Deng, Z. Y.; Denig, A.; Denysenko, I.; Destefanis, M.; de Mori, F.; Ding, Y.; Dong, C.; Dong, J.; Dong, L. Y.; Dong, M. Y.; Dou, Z. L.; Du, S. X.; Duan, P. F.; Fang, J.; Fang, S. S.; Fang, Y.; Farinelli, R.; Fava, L.; Fegan, S.; Feldbauer, F.; Felici, G.; Feng, C. Q.; Fioravanti, E.; Fritsch, M.; Fu, C. D.; Gao, Q.; Gao, X. L.; Gao, Y.; Gao, Y. G.; Gao, Z.; Garillon, B.; Garzia, I.; Gilman, A.; Goetzen, K.; Gong, L.; Gong, W. X.; Gradl, W.; Greco, M.; Gu, M. H.; Gu, Y. T.; Guo, A. Q.; Guo, R. P.; Guo, Y. P.; Guskov, A.; Haddadi, Z.; Han, S.; Hao, X. Q.; Harris, F. A.; He, K. L.; He, X. Q.; Heinsius, F. H.; Held, T.; Heng, Y. K.; Holtmann, T.; Hou, Z. L.; Hu, H. M.; Hu, J. F.; Hu, T.; Hu, Y.; Huang, G. S.; Huang, J. S.; Huang, X. T.; Huang, X. Z.; Huang, Z. L.; Hussain, T.; Ikegami Andersson, W.; Ji, Q.; Ji, Q. P.; Ji, X. B.; Ji, X. L.; Jiang, X. S.; Jiang, X. Y.; Jiao, J. B.; Jiao, Z.; Jin, D. P.; Jin, S.; Jin, Y.; Johansson, T.; Julin, A.; Kalantar-Nayestanaki, N.; Kang, X. S.; Kavatsyuk, M.; Ke, B. C.; Khan, T.; Khoukaz, A.; Kiese, P.; Kliemt, R.; Koch, L.; Kolcu, O. B.; Kopf, B.; Kornicer, M.; Kuemmel, M.; Kuhlmann, M.; Kupsc, A.; Kühn, W.; Lange, J. S.; Lara, M.; Larin, P.; Lavezzi, L.; Leithoff, H.; Li, C.; Li, Cheng; Li, D. M.; Li, F.; Li, F. Y.; Li, G.; Li, H. B.; Li, H. J.; Li, J. C.; Li, J. W.; Li, Jin; Li, K. J.; Li, Kang; Li, Ke; Li, Lei; Li, P. L.; Li, P. R.; Li, Q. Y.; Li, W. D.; Li, W. G.; Li, X. L.; Li, X. N.; Li, X. Q.; Li, Z. B.; Liang, H.; Liang, Y. F.; Liang, Y. T.; Liao, G. R.; Libby, J.; Lin, C. X.; Lin, D. X.; Liu, B.; Liu, B. J.; Liu, C. X.; Liu, D.; Liu, F. H.; Liu, Fang; Liu, Feng; Liu, H. B.; Liu, H. L.; Liu, H. M.; Liu, Huanhuan; Liu, Huihui; Liu, J. B.; Liu, J. Y.; Liu, K.; Liu, K. Y.; Liu, Ke; Liu, L. D.; Liu, Q.; Liu, S. B.; Liu, X.; Liu, Y. B.; Liu, Z. A.; Liu, Zhiqing; Long, Y. F.; Lou, X. C.; Lu, H. J.; Lu, J. G.; Lu, Y.; Lu, Y. P.; Luo, C. L.; Luo, M. X.; Luo, X. L.; Lusso, S.; Lyu, X. R.; Ma, F. C.; Ma, H. L.; Ma, L. L.; Ma, M. M.; Ma, Q. M.; Ma, T.; Ma, X. N.; Ma, X. Y.; Ma, Y. M.; Maas, F. E.; Maggiora, M.; Malik, Q. A.; Mao, Y. J.; Mao, Z. P.; Marcello, S.; Meng, Z. X.; Messchendorp, J. G.; Mezzadri, G.; Min, J.; Mitchell, R. E.; Mo, X. H.; Mo, Y. J.; Morales Morales, C.; Muchnoi, N. Yu.; Muramatsu, H.; Mustafa, A.; Nefedov, Y.; Nerling, F.; Nikolaev, I. B.; Ning, Z.; Nisar, S.; Niu, S. L.; Niu, X. Y.; Olsen, S. L.; Ouyang, Q.; Pacetti, S.; Pan, Y.; Papenbrock, M.; Patteri, P.; Pelizaeus, M.; Pellegrino, J.; Peng, H. P.; Peng, Z. Y.; Peters, K.; Pettersson, J.; Ping, J. L.; Ping, R. G.; Pitka, A.; Poling, R.; Prasad, V.; Qi, H. R.; Qi, M.; Qi, T. Y.; Qian, S.; Qiao, C. F.; Qin, N.; Qin, X. S.; Qin, Z. H.; Qiu, J. F.; Rashid, K. H.; Redmer, C. F.; Richter, M.; Ripka, M.; Rolo, M.; Rong, G.; Rosner, Ch.; Sarantsev, A.; Savrié, M.; Schnier, C.; Schoenning, K.; Shan, W.; Shan, X. Y.; Shao, M.; Shen, C. P.; Shen, P. X.; Shen, X. Y.; Sheng, H. Y.; Shi, X.; Song, J. 
J.; Song, W. M.; Song, X. Y.; Sosio, S.; Sowa, C.; Spataro, S.; Sun, G. X.; Sun, J. F.; Sun, L.; Sun, S. S.; Sun, X. H.; Sun, Y. J.; Sun, Y. K.; Sun, Y. Z.; Sun, Z. J.; Sun, Z. T.; Tan, Y. T.; Tang, C. J.; Tang, G. Y.; Tang, X.; Tapan, I.; Tiemens, M.; Tsednee, B.; Uman, I.; Varner, G. S.; Wang, B.; Wang, B. L.; Wang, D.; Wang, D. Y.; Wang, Dan; Wang, K.; Wang, L. L.; Wang, L. S.; Wang, M.; Wang, Meng; Wang, P.; Wang, P. L.; Wang, W. P.; Wang, X. F.; Wang, Y.; Wang, Y. D.; Wang, Y. F.; Wang, Y. Q.; Wang, Z.; Wang, Z. G.; Wang, Z. Y.; Wang, Zongyuan; Weber, T.; Wei, D. H.; Wei, J. H.; Weidenkaff, P.; Wen, S. P.; Wiedner, U.; Wolke, M.; Wu, L. H.; Wu, L. J.; Wu, Z.; Xia, L.; Xia, Y.; Xiao, D.; Xiao, Y. J.; Xiao, Z. J.; Xie, Y. G.; Xie, Y. H.; Xiong, X. A.; Xiu, Q. L.; Xu, G. F.; Xu, J. J.; Xu, L.; Xu, Q. J.; Xu, Q. N.; Xu, X. P.; Yan, F.; Yan, L.; Yan, W. B.; Yan, W. C.; Yan, Y. H.; Yang, H. J.; Yang, H. X.; Yang, L.; Yang, Y. H.; Yang, Y. X.; Yang, Yifan; Ye, M.; Ye, M. H.; Yin, J. H.; You, Z. Y.; Yu, B. X.; Yu, C. X.; Yu, J. S.; Yuan, C. Z.; Yuan, Y.; Yuncu, A.; Zafar, A. A.; Zeng, Y.; Zeng, Z.; Zhang, B. X.; Zhang, B. Y.; Zhang, C. C.; Zhang, D. H.; Zhang, H. H.; Zhang, H. Y.; Zhang, J.; Zhang, J. L.; Zhang, J. Q.; Zhang, J. W.; Zhang, J. Y.; Zhang, J. Z.; Zhang, K.; Zhang, L.; Zhang, S. Q.; Zhang, X. Y.; Zhang, Y.; Zhang, Y. H.; Zhang, Y. T.; Zhang, Yang; Zhang, Yao; Zhang, Yu; Zhang, Z. H.; Zhang, Z. P.; Zhang, Z. Y.; Zhao, G.; Zhao, J. W.; Zhao, J. Y.; Zhao, J. Z.; Zhao, Lei; Zhao, Ling; Zhao, M. G.; Zhao, Q.; Zhao, S. J.; Zhao, T. C.; Zhao, Y. B.; Zhao, Z. G.; Zhemchugov, A.; Zheng, B.; Zheng, J. P.; Zheng, Y. H.; Zhong, B.; Zhou, L.; Zhou, Q.; Zhou, X.; Zhou, X. K.; Zhou, X. R.; Zhou, X. Y.; Zhu, A. N.; Zhu, J.; Zhu, K.; Zhu, K. J.; Zhu, S.; Zhu, S. H.; Zhu, X. L.; Zhu, Y. C.; Zhu, Y. S.; Zhu, Z. A.; Zhuang, J.; Zou, B. S.; Zou, J. H.; Besiii Collaboration

    2018-03-01

    The cross section of the e+e-→Λc+Λ¯c - process is measured with unprecedented precision using data collected with the BESIII detector at √{s }=4574.5 , 4580.0, 4590.0 and 4599.5 MeV. The nonzero cross section near the Λc+Λ¯c- production threshold is cleared. At center-of-mass energies √{s }=4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form-factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14 ±0.14 ±0.07 and 1.23 ±0.05 ±0.03 , respectively, where the first uncertainties are statistical and the second are systematic.

  15. Precision Measurement of the e + e - → Λ c + Λ ¯ c - Cross Section Near Threshold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ablikim, M.; Achasov, M. N.; Ahmed, S.

    The cross section of the e+e− → Λc+Λ̄c− process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. The non-zero cross section near the Λc+Λ̄c− production threshold is cleared. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.

  16. Feasibility studies of time-like proton electromagnetic form factors at P̄ANDA at FAIR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, B.; Erni, W.; Krusche, B.

    Simulation results for future measurements of electromagnetic proton form factors at P̄ANDA (FAIR) within the PandaRoot software framework are reported. The statistical precision with which the proton form factors can be determined is estimated. The signal channel p̄p → e+e− is studied on the basis of two different but consistent procedures. The suppression of the main background channel, i.e. p̄p → π+π−, is studied. Background versus signal efficiency, as well as the statistical and systematic uncertainties on the extracted proton form factors, are evaluated using two different procedures. The results are consistent with those of a previous simulation study using an older, simplified framework. Furthermore, a slightly better precision is achieved in the PandaRoot study over a large range of momentum transfer, assuming the nominal beam conditions and detector performance.

  17. Feasibility studies of time-like proton electromagnetic form factors at P̄ANDA at FAIR

    DOE PAGES

    Singh, B.; Erni, W.; Krusche, B.; ...

    2016-10-28

    Simulation results for future measurements of electromagnetic proton form factors at P̄ANDA (FAIR) within the PandaRoot software framework are reported. The statistical precision with which the proton form factors can be determined is estimated. The signal channel p̄p → e+e− is studied on the basis of two different but consistent procedures. The suppression of the main background channel, i.e. p̄p → π+π−, is studied. Background versus signal efficiency, as well as the statistical and systematic uncertainties on the extracted proton form factors, are evaluated using two different procedures. The results are consistent with those of a previous simulation study using an older, simplified framework. Furthermore, a slightly better precision is achieved in the PandaRoot study over a large range of momentum transfer, assuming the nominal beam conditions and detector performance.

  18. Precision Cosmology: The First Half Million Years

    NASA Astrophysics Data System (ADS)

    Jones, Bernard J. T.

    2017-06-01

    Cosmology seeks to characterise our Universe in terms of models based on well-understood and tested physics. Today we know our Universe with a precision that once would have been unthinkable. This book develops the entire mathematical, physical and statistical framework within which this has been achieved. It tells the story of how we arrive at our profound conclusions, starting from the early twentieth century and following developments up to the latest data analysis of big astronomical datasets. It provides an enlightening description of the mathematical, physical and statistical basis for understanding and interpreting the results of key space- and ground-based data. Subjects covered include general relativity, cosmological models, the inhomogeneous Universe, physics of the cosmic background radiation, and methods and results of data analysis. Extensive online supplementary notes, exercises, teaching materials, and exercises in Python make this the perfect companion for researchers, teachers and students in physics, mathematics, and astrophysics.

  19. Precision Measurement of the e + e - → Λ c + Λ ¯ c - Cross Section Near Threshold

    DOE PAGES

    Ablikim, M.; Achasov, M. N.; Ahmed, S.; ...

    2018-03-29

    The cross section of the e+e− → Λc+Λ̄c− process is measured with unprecedented precision using data collected with the BESIII detector at √s = 4574.5, 4580.0, 4590.0 and 4599.5 MeV. The non-zero cross section near the Λc+Λ̄c− production threshold is cleared. At center-of-mass energies √s = 4574.5 and 4599.5 MeV, the higher statistics data enable us to measure the Λc polar angle distributions. From these, the Λc electric over magnetic form factor ratios (|GE/GM|) are measured for the first time. They are found to be 1.14±0.14±0.07 and 1.23±0.05±0.03, respectively, where the first uncertainties are statistical and the second are systematic.

  20. Methodologies for the Statistical Analysis of Memory Response to Radiation

    NASA Astrophysics Data System (ADS)

    Bosser, Alexandre L.; Gupta, Viyas; Tsiligiannis, Georgios; Frost, Christopher D.; Zadeh, Ali; Jaatinen, Jukka; Javanainen, Arto; Puchner, Helmut; Saigné, Frédéric; Virtanen, Ari; Wrobel, Frédéric; Dilillo, Luigi

    2016-08-01

    Methodologies are proposed for in-depth statistical analysis of Single Event Upset data. The motivation for using these methodologies is to obtain precise information on the intrinsic defects and weaknesses of the tested devices, and to gain insight on their failure mechanisms, at no additional cost. The case study is a 65 nm SRAM irradiated with neutrons, protons and heavy ions. This publication is an extended version of a previous study [1].

  1. Touch Precision Modulates Visual Bias.

    PubMed

    Misceo, Giovanni F; Jones, Maurice D

    2018-01-01

    The sensory precision hypothesis holds that different seen and felt cues about the size of an object resolve themselves in favor of the more reliable modality. To examine this precision hypothesis, 60 college students were asked to look at one size while manually exploring another unseen size either with their bare fingers or, to lessen the reliability of touch, with their fingers sleeved in rigid tubes. Afterwards, the participants estimated either the seen size or the felt size by finding a match from a visual display of various sizes. Results showed that the seen size biased the estimates of the felt size when the reliability of touch decreased. This finding supports the interaction between touch reliability and visual bias predicted by statistically optimal models of sensory integration.
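
    The statistically optimal models referred to above typically combine cues by inverse-variance weighting, so that degrading touch shifts the combined estimate toward vision. A minimal sketch with illustrative sizes and variances:

        # Sketch: reliability-weighted (inverse-variance) cue combination;
        # the sizes and variances below are illustrative, not the study's data.
        def combine(seen, felt, var_vision, var_touch):
            w_v = (1 / var_vision) / (1 / var_vision + 1 / var_touch)
            return w_v * seen + (1 - w_v) * felt

        print(combine(60, 50, var_vision=4.0, var_touch=4.0))    # bare fingers
        print(combine(60, 50, var_vision=4.0, var_touch=36.0))   # sleeved fingers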

  2. Accuracy and precision of occlusal contacts of stereolithographic casts mounted by digital interocclusal registrations.

    PubMed

    Krahenbuhl, Jason T; Cho, Seok-Hwan; Irelan, Jon; Bansal, Naveen K

    2016-08-01

    Little peer-reviewed information is available regarding the accuracy and precision of the occlusal contact reproduction of digitally mounted stereolithographic casts. The purpose of this in vitro study was to evaluate the accuracy and precision of occlusal contacts among stereolithographic casts mounted by digital occlusal registrations. Four complete anatomic dentoforms were arbitrarily mounted on a semi-adjustable articulator in maximal intercuspal position and served as the 4 different simulated patients (SP). A total of 60 digital impressions and digital interocclusal registrations were made with a digital intraoral scanner to fabricate 15 sets of mounted stereolithographic (SLA) definitive casts for each dentoform. After the 60 SLA casts were received, polyvinyl siloxane (PVS) interocclusal records were made for each set. The occlusal contacts for each set of SLA casts were measured by recording the amount of light transmitted through the interocclusal records. To evaluate the accuracy between the SP and their respective SLA casts, the areas of actual contact (AC) and near contact (NC) were calculated. For the precision analysis, the coefficient of variation (CoV) was used. The data were analyzed with t tests for accuracy and the McKay and Vangel test for precision (α=.05). The accuracy analysis showed a statistically significant difference between the SP and the SLA cast of each dentoform (P<.05). For the AC in all dentoforms, a significant increase was found in the areas of actual contact of SLA casts compared with the contacts present in the SP (P<.05). Conversely, for the NC in all dentoforms, a significant decrease was found in the occlusal contact areas of the SLA casts compared with the contacts in the SP (P<.05). The precision analysis demonstrated different CoV values between the AC (5.8 to 8.8%) and NC (21.4 to 44.6%) of digitally mounted SLA casts, indicating that the overall precision of the SLA casts was low. For the accuracy evaluation, statistically significant differences were found between the occlusal contacts of all digitally mounted SLA cast groups, with an increase in AC values and a decrease in NC values. For the precision assessment, the CoV values of the AC and NC showed the digitally articulated casts' inability to reproduce uniform occlusal contacts.

  3. Statistical and Economic Techniques for Site-specific Nematode Management.

    PubMed

    Liu, Zheng; Griffin, Terry; Kirkpatrick, Terrence L

    2014-03-01

    Recent advances in precision agriculture technologies and spatial statistics allow realistic, site-specific estimation of nematode damage to field crops and provide a platform for the site-specific delivery of nematicides within individual fields. This paper reviews the spatial statistical techniques that model correlations among neighboring observations and develop a spatial economic analysis to determine the potential of site-specific nematicide application. The spatial econometric methodology applied in the context of site-specific crop yield response contributes to closing the gap between data analysis and realistic site-specific nematicide recommendations and helps to provide a practical method of site-specifically controlling nematodes.

  4. Determining the Statistical Power of the Kolmogorov-Smirnov and Anderson-Darling Goodness-of-Fit Tests via Monte Carlo Simulation

    DTIC Science & Technology

    2016-12-01

    Statistical power is the probability of correctly rejecting the null hypothesis when the alternative hypothesis is true. This report determines the statistical power of the Kolmogorov-Smirnov (KS) and Anderson-Darling (AD) goodness-of-fit tests via Monte Carlo simulation, using real-world data to test the accuracy of the simulation. Statistical comparison of these metrics can be necessary when making such a determination.
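
    To make the procedure concrete, here is a minimal sketch of estimating the power of the KS test by Monte Carlo simulation in Python. It illustrates the general technique only; the sample size, number of trials, and the normal-null versus t-distributed-data scenario are assumptions chosen for the example, not the report's settings.

      # Sketch: power of the KS goodness-of-fit test via Monte Carlo.
      # Power = fraction of samples drawn from a *wrong* model for which
      # the test correctly rejects the hypothesised distribution.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      n, trials, alpha = 50, 5000, 0.05

      rejections = 0
      for _ in range(trials):
          sample = rng.standard_t(df=3, size=n)      # true model: heavy-tailed t
          _, p_value = stats.kstest(sample, "norm")  # null: standard normal
          if p_value < alpha:
              rejections += 1

      print(f"Estimated power: {rejections / trials:.3f}")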

  5. Cumulative detection probabilities and range accuracy of a pulsed Geiger-mode avalanche photodiode laser ranging system

    NASA Astrophysics Data System (ADS)

    Luo, Hanjun; Ouyang, Zhengbiao; Liu, Qiang; Chen, Zhiliang; Lu, Hualan

    2017-10-01

    Cumulative pulse detection with an appropriate cumulative pulse number and threshold can improve the detection performance of a pulsed laser ranging system with a GM-APD. In this paper, based on Poisson statistics and the multi-pulse cumulative process, the cumulative detection probabilities and the factors influencing them are investigated. With the normalized probability distribution of each time bin, a theoretical model of the range accuracy and precision is established, and the factors limiting them are discussed. The results show that cumulative pulse detection can produce a higher target detection probability and a lower false alarm probability. However, for a heavy noise level and extremely weak echo intensity, the false alarm suppression performance of cumulative pulse detection deteriorates quickly. Range accuracy and precision are another important measure of detection performance; echo intensity and pulse width are the main influencing factors, with higher accuracy and precision obtained for stronger echo intensity and narrower echo pulse width. For a 5-ns echo pulse width, when the echo intensity is larger than 10, a range accuracy and precision better than 7.5 cm can be achieved.
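
    The Poisson reasoning behind cumulative pulse detection can be sketched briefly. The photon numbers, pulse count, and threshold below are invented for illustration, and the model ignores detector dead time and assumes unit detection efficiency, so it is a simplification of the paper's analysis rather than its exact model.

      # Per-pulse trigger probability for a GM-APD under Poisson photon
      # statistics, and the cumulative probability of >= k triggers in N pulses.
      import math

      def per_pulse_prob(mean_photons):
          # P(at least one primary photoelectron) for Poisson arrivals
          return 1.0 - math.exp(-mean_photons)

      def cumulative_prob(p, n_pulses, k_threshold):
          # Binomial tail: P(at least k triggers in n independent pulses)
          return sum(math.comb(n_pulses, k) * p**k * (1 - p)**(n_pulses - k)
                     for k in range(k_threshold, n_pulses + 1))

      p_echo = per_pulse_prob(2.0)    # echo present (assumed mean photon number)
      p_noise = per_pulse_prob(0.1)   # noise only
      print("detection prob.:  ", cumulative_prob(p_echo, n_pulses=10, k_threshold=3))
      print("false alarm prob.:", cumulative_prob(p_noise, n_pulses=10, k_threshold=3))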

  6. On theory in ecology: Another perspective

    USGS Publications Warehouse

    Houlahan, Jeff E.; McKinney, Shawn T.; Rochette, Rémy

    2015-01-01

    We agree with Marquet and colleagues (2014) that the balance between theory and data is an important one. However, their description of what constitutes good theory in ecology ignores the most important characteristic of successful theory—that it accurately and precisely describes the way the world works.

  7. Skeletal Correlates for Body Mass Estimation in Modern and Fossil Flying Birds

    PubMed Central

    Field, Daniel J.; Lynner, Colton; Brown, Christian; Darroch, Simon A. F.

    2013-01-01

    Scaling relationships between skeletal dimensions and body mass in extant birds are often used to estimate body mass in fossil crown-group birds, as well as in stem-group avialans. However, useful statistical measures for constraining the precision and accuracy of fossil mass estimates are rarely provided, which prevents the quantification of robust upper- and lower-bound body mass estimates for fossils. Here, we generate thirteen body mass correlations and associated measures of statistical robustness using a sample of 863 extant flying birds. By providing robust body mass regressions with upper- and lower-bound prediction intervals for individual skeletal elements, we address the longstanding problem of body mass estimation for highly fragmentary fossil birds. We demonstrate that the most precise proxy for estimating body mass in the overall dataset, measured both by the coefficient of determination of ordinary least squares regression and by percent prediction error, is the maximum diameter of the coracoid's humeral articulation facet (the glenoid). We further demonstrate that this result is consistent among the majority of investigated avian orders (10 out of 18). As a result, we suggest that, in the majority of cases, this proxy may provide the most accurate estimates of body mass for volant fossil birds. Additionally, by presenting statistical measures of body mass prediction error for thirteen different body mass regressions, this study provides a much-needed quantitative framework for the accurate estimation of body mass and associated ecological correlates in fossil birds. The application of these regressions will enhance the precision and robustness of many mass-based inferences in future paleornithological studies. PMID:24312392
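
    The underlying technique is ordinary least squares regression on log-transformed data, reported with prediction intervals. A hedged sketch follows, using statsmodels and invented measurements (not the paper's 863-bird dataset) to show how upper- and lower-bound mass estimates can be produced for a new specimen.

      # Log-log OLS regression of body mass on a skeletal dimension, with a
      # 95% prediction interval for a new measurement. Data are illustrative.
      import numpy as np
      import statsmodels.api as sm

      glenoid_mm = np.array([5.1, 6.3, 7.8, 9.2, 11.0, 13.5, 16.0, 19.4])
      mass_g = np.array([120, 210, 400, 650, 1100, 2100, 3600, 6500])

      X = sm.add_constant(np.log10(glenoid_mm))
      model = sm.OLS(np.log10(mass_g), X).fit()
      print(f"R^2 = {model.rsquared:.3f}")

      # Predict mass for a hypothetical fossil with a 12 mm glenoid diameter
      new_X = sm.add_constant(np.log10([12.0]), has_constant="add")
      pred = model.get_prediction(new_X).summary_frame(alpha=0.05)
      lo, hi = 10**pred["obs_ci_lower"][0], 10**pred["obs_ci_upper"][0]
      print(f"estimate: {10**pred['mean'][0]:.0f} g (95% PI {lo:.0f}-{hi:.0f} g)")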

  8. First spaceborne phase altimetry over sea ice using TechDemoSat-1 GNSS-R signals

    NASA Astrophysics Data System (ADS)

    Li, Weiqiang; Cardellach, Estel; Fabra, Fran; Rius, Antonio; Ribó, Serni; Martín-Neira, Manuel

    2017-08-01

    A track of sea ice reflected Global Navigation Satellite System (GNSS) signal collected by the TechDemoSat-1 mission is processed to perform phase altimetry over sea ice. High-precision carrier phase measurements are extracted from coherent GNSS reflections at a high angle of elevation (>57°). The altimetric results show good consistency with a mean sea surface (MSS) model, and the root-mean-square difference is 4.7 cm with an along-track sampling distance of ˜140 m and a spatial resolution of ˜400 m. The difference observed between the altimetric results and the MSS shows good correlation with the colocated sea ice thickness data from Soil Moisture and Ocean Salinity. This is consistent with the reflecting surface aligned with the bottom of the ice-water interface, due to the penetration of the GNSS signal into the sea ice. Therefore, these high-precision altimetric results have potential to be used for determination of sea ice thickness.

  9. Effects of RF profile on precision of quantitative T2 mapping using dual-echo steady-state acquisition.

    PubMed

    Wu, Pei-Hsin; Cheng, Cheng-Chieh; Wu, Ming-Long; Chao, Tzu-Cheng; Chung, Hsiao-Wen; Huang, Teng-Yi

    2014-01-01

    The dual echo steady-state (DESS) sequence has been shown to achieve fast T2 mapping with good precision. Underestimation of T2, however, becomes increasingly prominent as the flip angle decreases. In 3D DESS imaging, therefore, the derived T2 values become a function of slice location in the presence of a non-ideal slice profile of the excitation RF pulse. Furthermore, the pattern of slice-dependent variation in T2 estimates depends on the RF pulse waveform. Multi-slice 2D DESS imaging provides better inter-slice consistency, but the signal intensity is subject to the integrated effects of the within-slice distribution of the actual flip angle. Consequently, T2 measured using 2D DESS is prone to inaccuracy even at the designated flip angle of 90°. In this study, both phantom and human experiments demonstrate the above phenomena, in good agreement with model prediction.

  10. [Precision Nursing: Individual-Based Knowledge Translation].

    PubMed

    Chiang, Li-Chi; Yeh, Mei-Ling; Su, Sui-Lung

    2016-12-01

    U.S. President Obama announced a new era of precision medicine in the Precision Medicine Initiative (PMI). This initiative aims to accelerate the progress of personalized medicine in light of individual requirements for prevention and treatment in order to improve the state of individual and public health. The recent and dramatic development of large-scale biologic databases (such as the human genome sequence), powerful methods for characterizing patients (such as genomics, microbiome, diverse biomarkers, and even pharmacogenomics), and computational tools for analyzing big data are maximizing the potential benefits of precision medicine. Nursing science should follow and keep pace with this trend in order to develop empirical knowledge and expertise in the area of personalized nursing care. Nursing scientists must encourage, examine, and put into practice innovative research on precision nursing in order to provide evidence-based guidance to clinical practice. The applications in personalized precision nursing care include: explanations of personalized information such as the results of genetic testing; patient advocacy and support; anticipation of results and treatment; ongoing chronic monitoring; and support for shared decision-making throughout the disease trajectory. Further, attention must focus on the family and the ethical implications of taking a personalized approach to care. Nurses will need to embrace the paradigm shift to precision nursing and work collaboratively across disciplines to provide the optimal personalized care to patients. If realized, the full potential of precision nursing will provide the best chance for good health for all.

  11. An ATMND/SGI based label-free and fluorescence ratiometric aptasensor for rapid and highly sensitive detection of cocaine in biofluids.

    PubMed

    Wang, Jiamian; Song, Jie; Wang, Xiuyun; Wu, Shuo; Zhao, Yanqiu; Luo, Pinchen; Meng, Changgong

    2016-12-01

    A label-free ratiometric fluorescence aptasensor has been developed for the rapid and sensitive detection of cocaine in complex biofluids. The aptasensor is composed of a non-labeled GC-38 cocaine aptamer, which serves as the basic sensing unit, and two fluorophores, 2-amino-5,6,7-trimethyl-1,8-naphthyridine (ATMND) and SYBR Green I (SGI), which serve as a signal reporter and a built-in reference, respectively. The detection principle is based on a specific cocaine-mediated ATMND displacement reaction and the corresponding change in the fluorescence ratio of ATMND to SGI. Owing to the high affinity of the non-labeled aptamer, the good precision afforded by the ratiometric method, and the good fluorescence quantum yield of the fluorophore, the aptasensor shows good analytical performance for cocaine detection. Under optimal conditions, it exhibits a linear range of 0.10-10 μM and a low limit of detection of 56 nM, with a fast response of 20 s. The limit of detection is comparable to those of most fluorescent aptasensors that use signal amplification strategies and much lower than those of all unamplified cocaine aptasensors. Practical sample analysis in a series of complex biofluids, including urine, saliva and serum, also indicates the good precision, stability, and high sensitivity of the aptasensor, which may have great potential for the point-of-care screening of cocaine in complex biofluids.

  12. 40 CFR 63.5350 - How do I distinguish between the water-resistant/specialty and nonwater-resistant leather product...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... by the Administrator. (1) Statistical analysis of initial water penetration data performed to support ASTM Designation D2099-00 indicates that poor quantitative precision is associated with this testing...

  13. Does Breast Cancer Drive the Building of Survival Probability Models among States? An Assessment of Goodness of Fit for Patient Data from SEER Registries

    PubMed

    Khan, Hafiz; Saxena, Anshul; Perisetti, Abhilash; Rafiq, Aamrin; Gabbidon, Kemesha; Mende, Sarah; Lyuksyutova, Maria; Quesada, Kandi; Blakely, Summre; Torres, Tiffany; Afesse, Mahlet

    2016-12-01

    Background: Breast cancer is a worldwide public health concern and is the most prevalent type of cancer in women in the United States. This study concerned the best fit of statistical probability models on the basis of survival times for nine state cancer registries: California, Connecticut, Georgia, Hawaii, Iowa, Michigan, New Mexico, Utah, and Washington. Materials and Methods: A probability random sampling method was applied to select and extract the records of 2,000 breast cancer patients from the Surveillance Epidemiology and End Results (SEER) database for each of the nine state cancer registries used in this study. EasyFit software was utilized to identify the best probability models by using goodness-of-fit tests, and to estimate parameters for the various statistical probability distributions that fit the survival data. Results: Summary statistics are reported for each of the states for the years 1973 to 2012. Kolmogorov-Smirnov, Anderson-Darling, and Chi-squared goodness-of-fit test values were used for the survival data, with the goodness-of-fit statistics taken to identify the best-fit survival model for each state. Conclusions: It was found that California, Connecticut, Georgia, Iowa, New Mexico, and Washington followed the Burr probability distribution, while the Dagum probability distribution gave the best fit for Michigan and Utah, and Hawaii followed the Gamma probability distribution. These findings highlight differences between states through selected sociodemographic variables and also demonstrate probability modeling differences in breast cancer survival times. The results of this study can be used to guide healthcare providers and researchers in further investigations into social and environmental factors in order to reduce the occurrence of and mortality due to breast cancer.
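
    The workflow described, fitting candidate distributions and comparing goodness-of-fit statistics, can be reproduced in outline with scipy. The sketch below uses synthetic survival times rather than SEER records, and scipy's mielke distribution stands in for the closely related Dagum family; note that by the usual convention a smaller KS statistic (larger p-value) indicates the better fit.

      # Fit candidate probability models to survival times and compare
      # Kolmogorov-Smirnov goodness-of-fit statistics. Data are synthetic.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      survival_months = stats.gamma.rvs(a=2.0, scale=30.0, size=2000, random_state=rng)

      candidates = {"Burr": stats.burr12, "Dagum": stats.mielke, "Gamma": stats.gamma}
      for name, dist in candidates.items():
          params = dist.fit(survival_months, floc=0)   # maximum-likelihood fit
          ks_stat, p = stats.kstest(survival_months, dist.cdf, args=params)
          print(f"{name:6s} KS = {ks_stat:.4f}  p = {p:.3f}")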

  14. Surrogate biochemical markers: precise measurement for strategic drug and biologics development.

    PubMed

    Lee, J W; Hulse, J D; Colburn, W A

    1995-05-01

    More efficient drug and biologics development is necessary for future success of pharmaceutical and biotechnology companies. One way to achieve this objective is to use rationally selected surrogate markers to improve the early decision-making process. Using typical clinical chemistry methods to measure biochemical markers may not ensure adequate precision and reproducibility. In contrast, using analytical methods that meet good laboratory practices along with rational selection and validation of biochemical markers can give those who use them a competitive advantage over those who do not by providing meaningful data for earlier decision making.

  15. Interlaboratory comparison of chemical analysis of uranium mononitride

    NASA Technical Reports Server (NTRS)

    Merkle, E. J.; Davis, W. F.; Halloran, J. T.; Graab, J. W.

    1974-01-01

    Analytical methods were established in which the critical variables were controlled, with the result that acceptable interlaboratory agreement was demonstrated for the chemical analysis of uranium mononitride. This was accomplished using equipment readily available to laboratories performing metallurgical analyses. Agreement among three laboratories was shown to be very good for uranium and nitrogen, with an interlaboratory precision of ±0.04 percent achieved for both elements. Oxygen was determined to ±15 parts per million (ppm) at the 170-ppm level. The carbon determination gave an interlaboratory precision of ±46 ppm at the 320-ppm level.

  16. Integrated circuit layer image segmentation

    NASA Astrophysics Data System (ADS)

    Masalskis, Giedrius; Petrauskas, Romas

    2010-09-01

    In this paper we present IC layer image segmentation techniques created specifically for precise metal-layer feature extraction. During our research we used many samples of real-life de-processed IC metal-layer images obtained with an optical light microscope. We created sequences of image processing filters that provide segmentation results of sufficient precision for our application. The filter sequences were fine-tuned to provide the best possible results depending on the properties of the IC manufacturing process and the imaging technology. The proposed IC image segmentation filter sequences were experimentally tested and compared with conventional direct segmentation algorithms.

  17. Composite panel development at JPL

    NASA Technical Reports Server (NTRS)

    Mcelroy, Paul; Helms, Rich

    1988-01-01

    Parametric computer studies can be used in a cost-effective manner to determine optimized composite mirror panel designs. An InterDisciplinary computer Model (IDM) was created to aid the development of high-precision reflector panels for LDR. The material properties, thermal responses, structural geometries, and radio/optical precision are synergistically analyzed for specific panel designs. Promising panel designs are fabricated and tested so that comparison with panel test results can be used to verify performance-prediction models and accommodate design refinement. The iterative approach of computer design and model refinement, with performance testing and materials optimization, has shown good results for LDR panels.

  18. Emancipation through interaction--how eugenics and statistics converged and diverged.

    PubMed

    Louçã, Francisco

    2009-01-01

    The paper discusses the scope and influence of eugenics in defining the scientific programme of statistics and the impact of the evolution of biology on social scientists. It argues that eugenics was instrumental in providing a bridge between sciences, and therefore created both the impulse and the institutions necessary for the birth of modern statistics in its applications first to biology and then to the social sciences. Looking at the question from the point of view of the history of statistics and the social sciences, and mostly concentrating on evidence from the British debates, the paper discusses how these disciplines became emancipated from eugenics precisely because of the inspiration of biology. It also relates how social scientists were fascinated and perplexed by the innovations taking place in statistical theory and practice.

  19. Examples of sex/gender sensitivity in epidemiological research: results of an evaluation of original articles published in JECH 2006-2014.

    PubMed

    Jahn, Ingeborg; Börnhorst, Claudia; Günther, Frauke; Brand, Tilman

    2017-02-15

    During the last decades, sex and gender biases have been identified in various areas of biomedical and public health research, leading to compromised validity of research findings. As a response, methodological requirements were developed, but these are rarely translated into research practice. The aim of this study is to provide good practice examples of sex/gender sensitive health research. We conducted a systematic search of research articles published in JECH between 2006 and 2014. An instrument was constructed to evaluate sex/gender sensitivity in four stages of the research process (background, study design, statistical analysis, discussion). In total, 37 articles covering diverse topics were included. Of these, 22 were evaluated as good practice examples in at least one stage; two articles achieved the highest ratings across all stages. Good examples of the background referred to available knowledge on sex/gender differences and sex/gender informed theoretical frameworks. Related to the study design, good examples calculated sample sizes to be able to detect sex/gender differences, selected sex/gender sensitive outcome/exposure indicators, or chose different cut-off values for male and female participants. Good examples of statistical analyses used interaction terms with sex/gender or different shapes of the estimated relationship for men and women. Examples of good discussions interpreted their findings in relation to social and biological explanatory models or questioned the statistical methods used to detect sex/gender differences. The identified good practice examples may inspire researchers to critically reflect on the relevance of sex/gender issues in their studies and help them to translate methodological recommendations of sex/gender sensitivity into research practice.
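
    As an illustration of one of the praised analysis patterns, the sketch below fits a regression containing a sex-by-exposure interaction term with statsmodels. All variable names and data are invented for the example.

      # Testing whether an exposure effect differs by sex via an interaction term.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(5)
      n = 500
      df = pd.DataFrame({"sex": rng.choice(["female", "male"], size=n),
                         "exposure": rng.normal(size=n)})
      # Simulate an outcome whose exposure effect differs by sex
      slope = np.where(df["sex"] == "female", 0.8, 0.2)
      df["outcome"] = slope * df["exposure"] + rng.normal(size=n)

      model = smf.ols("outcome ~ exposure * sex", data=df).fit()
      print(model.summary().tables[1])   # inspect the exposure:sex[T.male] term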

  20. Construct Validation of a Multidimensional Computerized Adaptive Test for Fatigue in Rheumatoid Arthritis

    PubMed Central

    Nikolaus, Stephanie; Bode, Christina; Taal, Erik; Vonkeman, Harald E.; Glas, Cees A. W.; van de Laar, Mart A. F. J.

    2015-01-01

    Objective Multidimensional computerized adaptive testing enables precise measurement of patient-reported outcomes at an individual level across different dimensions. This study examined the construct validity of a multidimensional computerized adaptive test (CAT) for fatigue in rheumatoid arthritis (RA). Methods The ‘CAT Fatigue RA’ was constructed based on a previously calibrated item bank. It contains 196 items and three dimensions: ‘severity’, ‘impact’ and ‘variability’ of fatigue. The CAT was administered to 166 patients with RA. They also completed a traditional multidimensional fatigue questionnaire (BRAF-MDQ) and the SF-36 in order to examine the CAT’s construct validity. The a priori criterion for construct validity was that 75% of the correlations between the CAT dimensions and the subscales of the other questionnaires were as expected. Furthermore, comprehensive use of the item bank, measurement precision and score distribution were investigated. Results The a priori criterion for construct validity was supported for two of the three CAT dimensions (severity and impact, but not variability). For severity and impact, 87% of the correlations with the subscales of the well-established questionnaires were as expected, but for variability only 53% of the hypothesised relations were found. Eighty-nine percent of the items were selected between one and 137 times for CAT administrations. Measurement precision was excellent for the severity and impact dimensions, with more than 90% of the CAT administrations reaching a standard error below 0.32. The variability dimension showed good measurement precision, with 90% of the CAT administrations reaching a standard error below 0.44. No floor or ceiling effects were found for the three dimensions. Conclusion The CAT Fatigue RA showed good construct validity and excellent measurement precision on the severity and impact dimensions. The variability dimension had less ideal measurement characteristics, pointing to the need to recalibrate the CAT item bank with a two-dimensional model consisting solely of severity and impact. PMID:26710104

  1. Precision measurements of solar energetic particle elemental composition

    NASA Technical Reports Server (NTRS)

    Breneman, H.; Stone, E. C.

    1985-01-01

    Using data from the Cosmic Ray Subsystem (CRS) aboard the Voyager 1 and 2 spacecraft, solar energetic particle abundances or upper limits for all elements with 3 ≤ Z ≤ 30 from a combined set of 10 solar flares during the 1977 to 1982 time period were determined. Statistically meaningful abundances have been determined for the first time for several rare elements including P, Cl, K, Ti and Mn, while the precision of the mean abundances for the more abundant elements has been improved, typically by a factor of approximately 3 over previously reported values.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritychenko, B.

    The precision of double-beta (ββ) decay experimental half-lives and their uncertainties is reanalyzed. The method of Benford's distributions has been applied to nuclear reaction, structure and decay data sets. The first-digit distribution trend for ββ-decay T^{2ν}_{1/2} is consistent with large nuclear reaction and structure data sets and provides validation of the experimental half-lives. A complementary analysis of the decay uncertainties indicates deficiencies due to the small size of statistical samples and incomplete collection of experimental information. Further experimental and theoretical efforts would lead toward more precise values of ββ-decay half-lives and nuclear matrix elements.
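
    Benford's law predicts that the leading digit d of many naturally occurring data sets appears with probability log10(1 + 1/d). A minimal first-digit check in the spirit of this analysis is sketched below; the half-life values are placeholders, not the data set analyzed in the record above.

      # Compare observed first-digit frequencies against Benford's law.
      import math
      from collections import Counter

      def first_digit(x):
          return int(f"{abs(x):e}"[0])   # leading digit via scientific notation

      half_lives = [2.2e21, 1.9e19, 7.1e18, 9.2e24, 1.1e26, 2.3e25,
                    1.5e21, 3.0e19, 8.2e15, 6.8e20, 1.1e24, 2.0e21]

      counts = Counter(first_digit(t) for t in half_lives)
      n = len(half_lives)
      for d in range(1, 10):
          print(f"d={d}: observed {counts.get(d, 0) / n:.2f}, "
                f"Benford {math.log10(1 + 1/d):.2f}")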

  3. High statistics measurement of the positron fraction in primary cosmic rays of 0.5-500 GeV with the alpha magnetic spectrometer on the international space station.

    PubMed

    Accardo, L; Aguilar, M; Aisa, D; Alpat, B; Alvino, A; Ambrosi, G; Andeen, K; Arruda, L; Attig, N; Azzarello, P; Bachlechner, A; Barao, F; Barrau, A; Barrin, L; Bartoloni, A; Basara, L; Battarbee, M; Battiston, R; Bazo, J; Becker, U; Behlmann, M; Beischer, B; Berdugo, J; Bertucci, B; Bigongiari, G; Bindi, V; Bizzaglia, S; Bizzarri, M; Boella, G; de Boer, W; Bollweg, K; Bonnivard, V; Borgia, B; Borsini, S; Boschini, M J; Bourquin, M; Burger, J; Cadoux, F; Cai, X D; Capell, M; Caroff, S; Carosi, G; Casaus, J; Cascioli, V; Castellini, G; Cernuda, I; Cerreta, D; Cervelli, F; Chae, M J; Chang, Y H; Chen, A I; Chen, H; Cheng, G M; Chen, H S; Cheng, L; Chikanian, A; Chou, H Y; Choumilov, E; Choutko, V; Chung, C H; Cindolo, F; Clark, C; Clavero, R; Coignet, G; Consolandi, C; Contin, A; Corti, C; Coste, B; Cui, Z; Dai, M; Delgado, C; Della Torre, S; Demirköz, M B; Derome, L; Di Falco, S; Di Masso, L; Dimiccoli, F; Díaz, C; von Doetinchem, P; Du, W J; Duranti, M; D'Urso, D; Eline, A; Eppling, F J; Eronen, T; Fan, Y Y; Farnesini, L; Feng, J; Fiandrini, E; Fiasson, A; Finch, E; Fisher, P; Galaktionov, Y; Gallucci, G; García, B; García-López, R; Gast, H; Gebauer, I; Gervasi, M; Ghelfi, A; Gillard, W; Giovacchini, F; Goglov, P; Gong, J; Goy, C; Grabski, V; Grandi, D; Graziani, M; Guandalini, C; Guerri, I; Guo, K H; Haas, D; Habiby, M; Haino, S; Han, K C; He, Z H; Heil, M; Henning, R; Hoffman, J; Hsieh, T H; Huang, Z C; Huh, C; Incagli, M; Ionica, M; Jang, W Y; Jinchi, H; Kanishev, K; Kim, G N; Kim, K S; Kirn, Th; Kossakowski, R; Kounina, O; Kounine, A; Koutsenko, V; Krafczyk, M S; Kunz, S; La Vacca, G; Laudi, E; Laurenti, G; Lazzizzera, I; Lebedev, A; Lee, H T; Lee, S C; Leluc, C; Levi, G; Li, H L; Li, J Q; Li, Q; Li, Q; Li, T X; Li, W; Li, Y; Li, Z H; Li, Z Y; Lim, S; Lin, C H; Lipari, P; Lippert, T; Liu, D; Liu, H; Lolli, M; Lomtadze, T; Lu, M J; Lu, Y S; Luebelsmeyer, K; Luo, F; Luo, J Z; Lv, S S; Majka, R; Malinin, A; Mañá, C; Marín, J; Martin, T; Martínez, G; Masi, N; Massera, F; Maurin, D; Menchaca-Rocha, A; Meng, Q; Mo, D C; Monreal, B; Morescalchi, L; Mott, P; Müller, M; Ni, J Q; Nikonov, N; Nozzoli, F; Nunes, P; Obermeier, A; Oliva, A; Orcinha, M; Palmonari, F; Palomares, C; Paniccia, M; Papi, A; Pauluzzi, M; Pedreschi, E; Pensotti, S; Pereira, R; Pilastrini, R; Pilo, F; Piluso, A; Pizzolotto, C; Plyaskin, V; Pohl, M; Poireau, V; Postaci, E; Putze, A; Quadrani, L; Qi, X M; Rancoita, P G; Rapin, D; Ricol, J S; Rodríguez, I; Rosier-Lees, S; Rossi, L; Rozhkov, A; Rozza, D; Rybka, G; Sagdeev, R; Sandweiss, J; Saouter, P; Sbarra, C; Schael, S; Schmidt, S M; Schuckardt, D; Schulz von Dratzig, A; Schwering, G; Scolieri, G; Seo, E S; Shan, B S; Shan, Y H; Shi, J Y; Shi, X Y; Shi, Y M; Siedenburg, T; Son, D; Spada, F; Spinella, F; Sun, W; Sun, W H; Tacconi, M; Tang, C P; Tang, X W; Tang, Z C; Tao, L; Tescaro, D; Ting, Samuel C C; Ting, S M; Tomassetti, N; Torsti, J; Türkoğlu, C; Urban, T; Vagelli, V; Valente, E; Vannini, C; Valtonen, E; Vaurynovich, S; Vecchi, M; Velasco, M; Vialle, J P; Vitale, V; Volpini, G; Wang, L Q; Wang, Q L; Wang, R S; Wang, X; Wang, Z X; Weng, Z L; Whitman, K; Wienkenhöver, J; Wu, H; Wu, K Y; Xia, X; Xie, M; Xie, S; Xiong, R Q; Xin, G M; Xu, N S; Xu, W; Yan, Q; Yang, J; Yang, M; Ye, Q H; Yi, H; Yu, Y J; Yu, Z Q; Zeissler, S; Zhang, J H; Zhang, M T; Zhang, X B; Zhang, Z; Zheng, Z M; Zhou, F; Zhuang, H L; Zhukov, V; Zichichi, A; Zimmermann, N; Zuccon, P; Zurbach, C

    2014-09-19

    A precision measurement by AMS of the positron fraction in primary cosmic rays in the energy range from 0.5 to 500 GeV based on 10.9 million positron and electron events is presented. This measurement extends the energy range of our previous observation and increases its precision. The new results show, for the first time, that above ∼200  GeV the positron fraction no longer exhibits an increase with energy.

  4. Detection of non-Gaussian fluctuations in a quantum point contact.

    PubMed

    Gershon, G; Bomze, Yu; Sukhorukov, E V; Reznikov, M

    2008-07-04

    An experimental study of current fluctuations through a tunable transmission barrier, a quantum point contact, is reported. We measure the probability distribution function of transmitted charge with precision sufficient to extract the first three cumulants. To obtain the intrinsic quantities, corresponding to voltage-biased barrier, we employ a procedure that accounts for the response of the external circuit and the amplifier. The third cumulant, obtained with a high precision, is found to agree with the prediction for the statistics of transport in the non-Poissonian regime.
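
    The generic computation involved, extracting the first three cumulants from a sampled distribution, is short enough to sketch. The binomial-partitioning model below is the textbook description of charge transmission through a single quantum channel at transmission T, not the authors' analysis chain; its non-zero third cumulant illustrates non-Gaussian statistics.

      # First three cumulants of simulated transmitted-charge counts.
      # kappa1 = mean, kappa2 = variance, kappa3 = third central moment
      # (kappa3 vanishes for a Gaussian; non-zero values signal non-Gaussian noise).
      import numpy as np

      rng = np.random.default_rng(1)
      attempts, T = 1000, 0.3
      charge = rng.binomial(attempts, T, size=100_000)

      k1 = charge.mean()
      k2 = charge.var()
      k3 = ((charge - k1) ** 3).mean()
      print(f"measured: k2 = {k2:.1f}, k3 = {k3:.1f}")
      # Binomial theory: k2 = N*T*(1-T), k3 = N*T*(1-T)*(1-2T)
      print(f"theory:   k2 = {attempts*T*(1-T):.1f}, k3 = {attempts*T*(1-T)*(1-2*T):.1f}")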

  5. Detection of Non-Gaussian Fluctuations in a Quantum Point Contact

    NASA Astrophysics Data System (ADS)

    Gershon, G.; Bomze, Yu.; Sukhorukov, E. V.; Reznikov, M.

    2008-07-01

    An experimental study of current fluctuations through a tunable transmission barrier, a quantum point contact, is reported. We measure the probability distribution function of transmitted charge with precision sufficient to extract the first three cumulants. To obtain the intrinsic quantities, corresponding to voltage-biased barrier, we employ a procedure that accounts for the response of the external circuit and the amplifier. The third cumulant, obtained with a high precision, is found to agree with the prediction for the statistics of transport in the non-Poissonian regime.

  6. Design and Construction for Community Health Service Precision Fund Appropriation System Based on Performance Management.

    PubMed

    Gao, Xing; He, Yao; Hu, Hongpu

    2017-01-01

    To allow for differences in economic development, degree of informatization, characteristics of the population served, and other factors among community health service organizations, a community health service precision fund appropriation system based on performance management was designed. It can support the government in appropriating financial funds for primary care scientifically and rationally. The system is flexible and practical and comprises five subsystems: data acquisition, parameter setting, fund appropriation, statistical analysis, and user management.

  7. High Statistics Measurement of the Positron Fraction in Primary Cosmic Rays of 0.5-500 GeV with the Alpha Magnetic Spectrometer on the International Space Station

    NASA Astrophysics Data System (ADS)

    Accardo, L.; Aguilar, M.; Aisa, D.; Alvino, A.; Ambrosi, G.; Andeen, K.; Arruda, L.; Attig, N.; Azzarello, P.; Bachlechner, A.; Barao, F.; Barrau, A.; Barrin, L.; Bartoloni, A.; Basara, L.; Battarbee, M.; Battiston, R.; Bazo, J.; Becker, U.; Behlmann, M.; Beischer, B.; Berdugo, J.; Bertucci, B.; Bigongiari, G.; Bindi, V.; Bizzaglia, S.; Bizzarri, M.; Boella, G.; de Boer, W.; Bollweg, K.; Bonnivard, V.; Borgia, B.; Borsini, S.; Boschini, M. J.; Bourquin, M.; Burger, J.; Cadoux, F.; Cai, X. D.; Capell, M.; Caroff, S.; Casaus, J.; Cascioli, V.; Castellini, G.; Cernuda, I.; Cervelli, F.; Chae, M. J.; Chang, Y. H.; Chen, A. I.; Chen, H.; Cheng, G. M.; Chen, H. S.; Cheng, L.; Chikanian, A.; Chou, H. Y.; Choumilov, E.; Choutko, V.; Chung, C. H.; Clark, C.; Clavero, R.; Coignet, G.; Consolandi, C.; Contin, A.; Corti, C.; Coste, B.; Cui, Z.; Dai, M.; Delgado, C.; Della Torre, S.; Demirköz, M. B.; Derome, L.; Di Falco, S.; Di Masso, L.; Dimiccoli, F.; Díaz, C.; von Doetinchem, P.; Du, W. J.; Duranti, M.; D'Urso, D.; Eline, A.; Eppling, F. J.; Eronen, T.; Fan, Y. Y.; Farnesini, L.; Feng, J.; Fiandrini, E.; Fiasson, A.; Finch, E.; Fisher, P.; Galaktionov, Y.; Gallucci, G.; García, B.; García-López, R.; Gast, H.; Gebauer, I.; Gervasi, M.; Ghelfi, A.; Gillard, W.; Giovacchini, F.; Goglov, P.; Gong, J.; Goy, C.; Grabski, V.; Grandi, D.; Graziani, M.; Guandalini, C.; Guerri, I.; Guo, K. H.; Habiby, M.; Haino, S.; Han, K. C.; He, Z. H.; Heil, M.; Hoffman, J.; Hsieh, T. H.; Huang, Z. C.; Huh, C.; Incagli, M.; Ionica, M.; Jang, W. Y.; Jinchi, H.; Kanishev, K.; Kim, G. N.; Kim, K. S.; Kirn, Th.; Kossakowski, R.; Kounina, O.; Kounine, A.; Koutsenko, V.; Krafczyk, M. S.; Kunz, S.; La Vacca, G.; Laudi, E.; Laurenti, G.; Lazzizzera, I.; Lebedev, A.; Lee, H. T.; Lee, S. C.; Leluc, C.; Li, H. L.; Li, J. Q.; Li, Q.; Li, Q.; Li, T. X.; Li, W.; Li, Y.; Li, Z. H.; Li, Z. Y.; Lim, S.; Lin, C. H.; Lipari, P.; Lippert, T.; Liu, D.; Liu, H.; Lomtadze, T.; Lu, M. J.; Lu, Y. S.; Luebelsmeyer, K.; Luo, F.; Luo, J. Z.; Lv, S. S.; Majka, R.; Malinin, A.; Mañá, C.; Marín, J.; Martin, T.; Martínez, G.; Masi, N.; Maurin, D.; Menchaca-Rocha, A.; Meng, Q.; Mo, D. C.; Morescalchi, L.; Mott, P.; Müller, M.; Ni, J. Q.; Nikonov, N.; Nozzoli, F.; Nunes, P.; Obermeier, A.; Oliva, A.; Orcinha, M.; Palmonari, F.; Palomares, C.; Paniccia, M.; Papi, A.; Pedreschi, E.; Pensotti, S.; Pereira, R.; Pilo, F.; Piluso, A.; Pizzolotto, C.; Plyaskin, V.; Pohl, M.; Poireau, V.; Postaci, E.; Putze, A.; Quadrani, L.; Qi, X. M.; Rancoita, P. G.; Rapin, D.; Ricol, J. S.; Rodríguez, I.; Rosier-Lees, S.; Rozhkov, A.; Rozza, D.; Sagdeev, R.; Sandweiss, J.; Saouter, P.; Sbarra, C.; Schael, S.; Schmidt, S. M.; Schuckardt, D.; von Dratzig, A. Schulz; Schwering, G.; Scolieri, G.; Seo, E. S.; Shan, B. S.; Shan, Y. H.; Shi, J. Y.; Shi, X. Y.; Shi, Y. M.; Siedenburg, T.; Son, D.; Spada, F.; Spinella, F.; Sun, W.; Sun, W. H.; Tacconi, M.; Tang, C. P.; Tang, X. W.; Tang, Z. C.; Tao, L.; Tescaro, D.; Ting, Samuel C. C.; Ting, S. M.; Tomassetti, N.; Torsti, J.; Türkoǧlu, C.; Urban, T.; Vagelli, V.; Valente, E.; Vannini, C.; Valtonen, E.; Vaurynovich, S.; Vecchi, M.; Velasco, M.; Vialle, J. P.; Wang, L. Q.; Wang, Q. L.; Wang, R. S.; Wang, X.; Wang, Z. X.; Weng, Z. L.; Whitman, K.; Wienkenhöver, J.; Wu, H.; Xia, X.; Xie, M.; Xie, S.; Xiong, R. Q.; Xin, G. M.; Xu, N. S.; Xu, W.; Yan, Q.; Yang, J.; Yang, M.; Ye, Q. H.; Yi, H.; Yu, Y. J.; Yu, Z. Q.; Zeissler, S.; Zhang, J. H.; Zhang, M. T.; Zhang, X. B.; Zhang, Z.; Zheng, Z. M.; Zhuang, H. 
L.; Zhukov, V.; Zichichi, A.; Zimmermann, N.; Zuccon, P.; Zurbach, C.; AMS Collaboration

    2014-09-01

    A precision measurement by AMS of the positron fraction in primary cosmic rays in the energy range from 0.5 to 500 GeV based on 10.9 million positron and electron events is presented. This measurement extends the energy range of our previous observation and increases its precision. The new results show, for the first time, that above ∼200 GeV the positron fraction no longer exhibits an increase with energy.

  8. Artificial Intelligence Approach to Support Statistical Quality Control Teaching

    ERIC Educational Resources Information Center

    Reis, Marcelo Menezes; Paladini, Edson Pacheco; Khator, Suresh; Sommer, Willy Arno

    2006-01-01

    Statistical quality control--SQC (consisting of Statistical Process Control, Process Capability Studies, Acceptance Sampling and Design of Experiments) is a very important tool to obtain, maintain and improve the Quality level of goods and services produced by an organization. Despite its importance, and the fact that it is taught in technical and…

  9. Japanese vegetation lidar (MOLI) on ISS (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kimura, Toshiyoshi; Imai, Tadashi; Sakaizawa, Daisuke; Murooka, Junpei

    2016-10-01

    Multi-footprint Observation LIDAR and Imager (MOLI) is a candidate mission for the International Space Station - Japanese Experiment Module. The mission objectives of MOLI are forest management and serving as a calibrator for the evaluation of forest biomass by satellite instruments such as L-band SAR. SAR is a powerful tool for evaluating biomass globally; however, its signal saturates above roughly 100 t/ha, whereas a vegetation LIDAR is expected to measure higher biomass precisely. MOLI is designed to evaluate forest biomass with high accuracy. An imager, equipped in good registration with the LIDAR, will help characterise the target forest. In addition, two simultaneous laser beams from MOLI will calibrate the relief effect, which strongly degrades the precision of canopy-height measurement. Used together with L-band SAR observation data or multispectral imagery, MOLI is expected to yield a good "wall to wall" biomass map with phenological information. This observation capability is important because both quantitative and qualitative evaluation of biomass is essential for understanding the carbon cycle. Currently, as a key technical development, the laser transmitter for MOLI is under test in vacuum conditions. Its pulse energy is 40 mJ and its pulse repetition frequency is 150 Hz. A pressure-vessel design for the LIDAR transmitter suppresses laser-induced contamination. MOLI is now under study towards operation around 2020.

  10. SpecBit, DecayBit and PrecisionBit: GAMBIT modules for computing mass spectra, particle decay rates and precision observables

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Balázs, Csaba; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Kvellestad, Anders; McKay, James; Putze, Antje; Rogan, Chris; Scott, Pat; Weniger, Christoph; White, Martin

    2018-01-01

    We present the GAMBIT modules SpecBit, DecayBit and PrecisionBit. Together they provide a new framework for linking publicly available spectrum generators, decay codes and other precision observable calculations in a physically and statistically consistent manner. This allows users to automatically run various combinations of existing codes as if they are a single package. The modular design allows software packages fulfilling the same role to be exchanged freely at runtime, with the results presented in a common format that can easily be passed to downstream dark matter, collider and flavour codes. These modules constitute an essential part of the broader GAMBIT framework, a major new software package for performing global fits. In this paper we present the observable calculations, data, and likelihood functions implemented in the three modules, as well as the conventions and assumptions used in interfacing them with external codes. We also present 3-BIT-HIT, a command-line utility for computing mass spectra, couplings, decays and precision observables in the MSSM, which shows how the three modules can easily be used independently of GAMBIT.

  11. A comparison of technical replicate (cuts) effect on lamb Warner-Bratzler shear force measurement precision.

    PubMed

    Holman, B W B; Alvarenga, T I R C; van de Ven, R J; Hopkins, D L

    2015-07-01

    The Warner-Bratzler shear force (WBSF) of 335 lamb m. longissimus lumborum (LL) caudal and cranial ends was measured to examine and simulate the effect of replicate number (r: 1-8) on the precision of mean WBSF estimates and to compare LL caudal- and cranial-end WBSF means. All LL were sourced from two experimental flocks as part of the Information Nucleus slaughter programme (CRC for Sheep Industry Innovation) and analysed using a Lloyd texture analyser with a Warner-Bratzler blade attachment. WBSF data were natural logarithm (ln) transformed before statistical analysis. The precision of mean ln(WBSF) improved as r increased; however, practical considerations support r = 6, as precision improves only marginally with additional replicates. Increasing LL sample replication improves ln(WBSF) precision more than increasing r does, provided that the sample replicates are removed from the same LL end. The cranial-end mean WBSF was 11.2 ± 1.3% higher than that of the caudal end.
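
    The scaling behind the replicate-number result can be illustrated by simulation: the standard error of a mean of r replicates shrinks as 1/sqrt(r), so gains diminish beyond a modest r. The mean and standard deviation used below are invented, not estimates from the lamb data.

      # Empirical vs theoretical standard error of mean ln(WBSF) for r replicates.
      import numpy as np

      rng = np.random.default_rng(7)
      sigma = 0.25   # assumed within-sample SD of ln(WBSF)
      for r in range(1, 9):
          means = rng.normal(np.log(30.0), sigma, size=(20_000, r)).mean(axis=1)
          print(f"r={r}: empirical SE {means.std():.4f}  theory {sigma/np.sqrt(r):.4f}")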

  12. Non-enzymatic electrochemical glucose sensor based on NiMoO4 nanorods

    NASA Astrophysics Data System (ADS)

    Wang, Dandan; Cai, Daoping; Huang, Hui; Liu, Bin; Wang, Lingling; Liu, Yuan; Li, Han; Wang, Yanrong; Li, Qiuhong; Wang, Taihong

    2015-04-01

    A non-enzymatic glucose sensor based on NiMoO4 nanorods has been fabricated for the first time. The electrocatalytic performance of the NiMoO4 nanorod-modified electrode toward glucose oxidation was evaluated by cyclic voltammetry and amperometry. The modified electrode showed a greatly enhanced electrocatalytic activity toward glucose oxidation, as well as excellent anti-interference capability and good stability. Impressively, good accuracy and high precision were obtained for detecting the glucose concentration in human serum samples. These excellent sensing properties, combined with good reproducibility and low cost, indicate that NiMoO4 nanorods are a promising candidate for non-enzymatic glucose sensors.

  13. Merging National Forest and National Forest Health Inventories to Obtain an Integrated Forest Resource Inventory – Experiences from Bavaria, Slovenia and Sweden

    PubMed Central

    Kovač, Marko; Bauer, Arthur; Ståhl, Göran

    2014-01-01

    Background, Materials and Methods: To meet the demands of sustainable forest management and international commitments, European nations have designed a variety of forest-monitoring systems for specific needs. While the majority of countries are committed to independent, single-purpose inventorying, a minority of countries have merged their single-purpose forest inventory systems into integrated forest resource inventories. The statistical efficiencies of the Bavarian, Slovene and Swedish integrated forest resource inventory designs are investigated through statistical parameters of three variables: growing stock volume, share of damaged trees, and deadwood volume. The parameters are derived using the estimators for the given inventory designs. The required sample sizes are derived via the general formula for non-stratified independent samples and via statistical power analyses; see the sketch below. The cost effectiveness of the designs is compared via two simple cost-effectiveness ratios. Results: In terms of precision, the most illustrative parameters of the variables are relative standard errors; their values range between 1% and 3% when the variables' variation is low (s% < 80%) and are higher in the case of higher variation. A comparison of the actual and required sample sizes shows that the actual sample sizes were deliberately set high to provide precise estimates for the majority of variables and strata. In turn, the successive inventories are statistically efficient, because they allow detection of mean changes in the variables with powers higher than 90%; the highest precision is attained for changes in growing stock volume and the lowest for changes in the share of damaged trees. Two indicators of cost effectiveness also show that the time input spent measuring one variable decreases with the complexity of the inventory. Conclusion: There is an increasing need for credible information on forest resources to be used for decision making and national and international policy making. Such information can be cost-efficiently provided through integrated forest resource inventories. PMID:24941120
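
    The general sample-size formula for non-stratified independent samples referenced above has the familiar form n = (t · CV% / E%)², where CV% is the coefficient of variation and E% the allowable relative error. A minimal sketch with placeholder inputs, not values from the three inventories:

      # Required sample size for a target relative error at a given confidence.
      from math import ceil
      from scipy import stats

      def required_sample_size(cv_percent, error_percent, confidence=0.95):
          z = stats.norm.ppf(0.5 + confidence / 2)   # large-sample t ~ z
          return ceil((z * cv_percent / error_percent) ** 2)

      # High-variation variable (CV 80%), 3% allowable error -> n ~ 2732
      print(required_sample_size(cv_percent=80, error_percent=3))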

  14. Automated extraction and validation of children's gait parameters with the Kinect.

    PubMed

    Motiian, Saeid; Pergami, Paola; Guffey, Keegan; Mancinelli, Corrie A; Doretto, Gianfranco

    2015-12-02

    Gait analysis for therapy regimen prescription and monitoring requires patients to physically access clinics with specialized equipment. The timely availability of such infrastructure at the right frequency is especially important for small children, and besides being very costly, this is a challenge for many children living in rural areas. This work therefore develops a low-cost, portable, and automated approach to in-home gait analysis, based on the Microsoft Kinect. A robust and efficient method for extracting gait parameters is introduced, which copes with the high variability of noisy Kinect skeleton tracking data experienced across the population of young children. This is achieved by temporally segmenting the data with an approach based on coupling a probabilistic matching of stride template models, learned offline, with the estimation of their global and local temporal scaling. A preliminary study of healthy children between 2 and 4 years of age analyzes the accuracy, precision, repeatability, and concurrent validity of the proposed method against the GAITRite when measuring several spatial and temporal children's gait parameters. The method has excellent accuracy and good precision in segmenting temporal sequences of body-joint locations into stride and step cycles. The spatial and temporal gait parameters, estimated automatically, also exhibit good concurrent validity with those provided by the GAITRite, as well as very good repeatability. In particular, on a range of nine gait parameters, the relative and absolute agreements were found to be good and excellent, and the overall agreements were found to be good and moderate. This work enables and validates the automated use of the Kinect for children's gait analysis in healthy subjects. In particular, the approach makes a step forward towards developing a low-cost, portable, parent-operated in-home tool for clinicians assisting young children.

  15. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    PubMed

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.

  16. Probing the Galactic Potential with Next-generation Observations of Disk Stars

    NASA Astrophysics Data System (ADS)

    Sumi, T.; Johnston, K. V.; Tremaine, S.; Spergel, D. N.; Majewski, S. R.

    2009-07-01

    Our current knowledge of the rotation curve of the Milky Way is remarkably poor compared to other galaxies, limited by the combined effects of extinction and the lack of large samples of stars with good distance estimates and proper motions. Near-future surveys promise a dramatic improvement in the number and precision of astrometric, photometric, and spectroscopic measurements of stars in the Milky Way's disk. We examine the impact of such surveys on our understanding of the Galaxy by "observing" particle realizations of nonaxisymmetric disk distributions orbiting in an axisymmetric halo with appropriate errors and then attempting to recover the underlying potential using a Markov Chain Monte Carlo approach. We demonstrate that the azimuthally averaged gravitational force field in the Galactic plane (and hence, to a lesser extent, the Galactic mass distribution) can be tightly constrained over a large range of radii using a variety of types of surveys, so long as the error distribution of the measurements of the parallax, proper motion, and radial velocity is well understood and the disk is surveyed globally. One advantage of our method is that the target stars can be selected nonrandomly in real or apparent-magnitude space to ensure just such a global sample without biasing the results. Assuming that we can always measure the line-of-sight velocity of a star with at least 1 km s⁻¹ precision, we demonstrate that the force field can be determined to better than ~1% for Galactocentric radii in the range R = 4-20 kpc using either: (1) small samples (a few hundred stars) with very accurate trigonometric parallaxes and good proper-motion measurements (uncertainties δπ,tri ≲ 10 μas and δμ ≲ 100 μas yr⁻¹, respectively); (2) modest samples (~1000 stars) with good indirect parallax estimates (e.g., uncertainty in photometric parallax δπ,phot ~ 10%-20%) and good proper-motion measurements (δμ ~ 100 μas yr⁻¹); or (3) large samples (~10⁴ stars) with good indirect parallax estimates and lower-accuracy proper-motion measurements (δμ ~ 1 mas yr⁻¹). We conclude that near-future surveys, like Space Interferometry Mission Lite, Global Astrometric Interferometer for Astrophysics, and VERA, will provide the first precise mapping of the gravitational force field in the region of the Galactic disk.

  17. Money Matters: Comment and Analysis.

    ERIC Educational Resources Information Center

    Alexander, Kern

    1998-01-01

    If money truly does not matter, and disadvantage cannot be quantified in terms of valuable social or economic goods, then questions of justice become aridly academic. How are resources to be valued? Faulty research design skewed Eric Hanushek's results. More precisely designed studies are revealing relationships between school expenditures and…

  18. A Good Move

    NASA Astrophysics Data System (ADS)

    Lakota, Gregory J.; Essary, Andrew; Bast, William D.; Dicaprio, Ralph; Symmes, Arthur H.; McDonald, Edward T.

    2006-11-01

    An underground exhibit space constructed at Chicago's Museum of Science and Industry now serves as the home of the German submarine U-505 -- the only vessel of its class captured by the United States during World War II. The careful lifting and moving of the vessel required precise coordination and meticulous reviews.

  19. A subsurface drip irrigation system for weighing lysimetry

    USDA-ARS?s Scientific Manuscript database

    Large, precision weighing lysimeters can have accuracies as good as 0.04 mm equivalent depth of water, adequate for hourly and even half-hourly determinations of evapotranspiration (ET) rate from crops. Such data are important for testing and improving simulation models of the complex interactions o...

  20. An interval precise integration method for transient unbalance response analysis of rotor system with uncertainty

    NASA Astrophysics Data System (ADS)

    Fu, Chao; Ren, Xingmin; Yang, Yongfeng; Xia, Yebao; Deng, Wangqun

    2018-07-01

    A non-intrusive interval precise integration method (IPIM) is proposed in this paper to analyze the transient unbalance response of uncertain rotor systems. The transfer matrix method (TMM) is used to derive the deterministic equations of motion of a hollow-shaft overhung rotor. The uncertain transient dynamic problem is solved by combining Chebyshev approximation theory with the modified precise integration method (PIM). Transient response bounds are calculated by interval arithmetic on the expansion coefficients. A brief theoretical error analysis of the proposed method is provided, and its accuracy is further validated by comparison with the scanning method in simulations. Numerical results show that the IPIM keeps good accuracy in vibration prediction during the start-up transient process. Furthermore, the proposed method can also provide theoretical guidance for other transient dynamic mechanical systems with uncertainties.
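
    As a toy illustration only (not the IPIM itself, which propagates Chebyshev expansion coefficients through the integrator with interval arithmetic), the sketch below evaluates a response function at Chebyshev points of an uncertain parameter interval and takes the extrema, in the spirit of the scanning method used for validation.

      # Bound a response quantity over an uncertain parameter interval by
      # sampling at Chebyshev points and taking extrema (scanning-style check).
      import numpy as np

      def interval_bounds(f, lower, upper, degree=8):
          k = np.arange(degree + 1)
          nodes = np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))  # Chebyshev nodes
          x = 0.5 * (upper + lower) + 0.5 * (upper - lower) * nodes
          y = f(x)
          return y.min(), y.max()

      # Hypothetical response amplitude vs uncertain stiffness (resonance at 900)
      response = lambda k: 1.0 / np.abs(k - 900.0)
      print(interval_bounds(response, 1000.0, 1200.0))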

  1. User's Manual for Downscaler Fusion Software

    EPA Science Inventory

    Recently, a series of 3 papers has been published in the statistical literature that details the use of downscaling to obtain more accurate and precise predictions of air pollution across the conterminous U.S. This downscaling approach combines CMAQ gridded numerical model output...

  2. 15 CFR 200.103 - Consulting and advisory services.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...., details of design and construction, operational aspects, unusual or extreme conditions, methods of statistical control of the measurement process, automated acquisition of laboratory data, and data reduction... group seminars on the precision measurement of specific types of physical quantities, offering the...

  3. 15 CFR 200.103 - Consulting and advisory services.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...., details of design and construction, operational aspects, unusual or extreme conditions, methods of statistical control of the measurement process, automated acquisition of laboratory data, and data reduction... group seminars on the precision measurement of specific types of physical quantities, offering the...

  4. A roughness-corrected index of relative bed stability for regional stream surveys

    EPA Science Inventory

    Quantitative regional assessments of streambed sedimentation and its likely causes are hampered because field investigations typically lack the requisite sample size, measurements, or precision for sound geomorphic and statistical interpretation. We adapted an index of relative b...

  5. Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review.

    PubMed

    Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C

    2018-03-07

    Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
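
    Two of the simplest estimators surveyed above can be written in a few lines. This is a minimal sketch assuming the common range/4 rule for a missing SD and the quartile-based formula mean ≈ (q1 + median + q3)/3 for a missing mean; published variants refine both for sample size, which is omitted here.

    ```python
    def sd_from_range(minimum, maximum):
        # Practical approximation: for roughly normal data, the sample range
        # of a moderate-sized trial spans about four standard deviations.
        return (maximum - minimum) / 4.0

    def mean_from_quartiles(q1, median, q3):
        # Estimate the mean from the reported quartiles and median.
        return (q1 + median + q3) / 3.0

    print(sd_from_range(2.0, 14.0))             # -> 3.0
    print(mean_from_quartiles(4.0, 6.0, 11.0))  # -> 7.0
    ```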

  6. Search for transient ultralight dark matter signatures with networks of precision measurement devices using a Bayesian statistics method

    NASA Astrophysics Data System (ADS)

    Roberts, B. M.; Blewitt, G.; Dailey, C.; Derevianko, A.

    2018-04-01

    We analyze the prospects of employing a distributed global network of precision measurement devices as a dark matter and exotic physics observatory. In particular, we consider the atomic clocks of the global positioning system (GPS), consisting of a constellation of 32 medium-Earth orbit satellites equipped with either Cs or Rb microwave clocks and a number of Earth-based receiver stations, some of which employ highly stable H-maser atomic clocks. High-accuracy timing data are available for almost two decades. By analyzing the satellite and terrestrial atomic clock data, it is possible to search for transient signatures of exotic physics, such as "clumpy" dark matter and dark energy, effectively transforming the GPS constellation into a 50 000 km aperture sensor array. Here we characterize the noise of the GPS satellite atomic clocks, describe the search method based on Bayesian statistics, and test the method using simulated clock data. We present the projected discovery reach of our method, and demonstrate that it can surpass existing constraints by several orders of magnitude for certain models. Our method is not limited in scope to GPS or atomic clock networks, and can also be applied to other networks of precision measurement devices.

  7. Perspectives on clinical trial data transparency and disclosure.

    PubMed

    Alemayehu, Demissie; Anziano, Richard J; Levenstein, Marcia

    2014-09-01

    The increased demand for transparency and disclosure of data from clinical trials sponsored by pharmaceutical companies poses considerable challenges and opportunities from a statistical perspective. A central issue is the need to protect patient privacy and adhere to Good Clinical and Statistical Practices, while ensuring access to patient-level data from clinical trials to the wider research community. This paper offers options to navigate this dilemma and balance competing priorities, with emphasis on the role of good clinical and statistical practices as proven safeguards for scientific integrity, the importance of adopting best practices for reporting of data from secondary analyses, and the need for optimal collaboration among stakeholders to facilitate data sharing. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. [Determination of tungsten and cobalt in the air of workplace by ICP-OES].

    PubMed

    Zhang, J; Ding, C G; Li, H B; Song, S; Yan, H F

    2017-08-20

    Objective: To establish an inductively coupled plasma optical emission spectrometry (ICP-OES) method for the determination of cobalt and tungsten in the air of the workplace. Methods: Cobalt and tungsten were collected on filter membranes and then digested with nitric acid; ICP-OES was used for their detection. Results: The linearity of tungsten was good over the range of 0.01-1000 μg/ml with a correlation coefficient of 0.999 9; the LOD and LOQ were 0.006 7 μg/ml and 0.022 μg/ml, respectively. The recovery ranged from 98% to 101%, and the RSDs of intra- and inter-batch precision were 1.1%-3.0% and 2.1%-3.8%, respectively. The linearity of cobalt was good over the range of 0.01-100 μg/ml with a correlation coefficient of 0.999 9; the LOD and LOQ were 0.001 2 μg/ml and 0.044 μg/ml, respectively. The recovery ranged from 95% to 97%, and the RSDs of intra- and inter-batch precision were 1.1%-2.4% and 1.1%-2.9%, respectively. The sampling efficiencies of tungsten and cobalt were higher than 94%. Conclusion: The linear range, sensitivity and precision of the method were suitable for the detection of tungsten and cobalt in the air of the workplace.

  9. Research on registration algorithm for check seal verification

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Liu, Tiegen

    2008-03-01

    Nowadays seals play an important role in China. With the development of the social economy, the traditional method of manual check seal identification can no longer meet the needs of banking transactions. This paper focuses on pre-processing and registration algorithms for check seal verification, using the theory of image processing and pattern recognition. First, the complex characteristics of check seals are analyzed. To eliminate differences in producing conditions and the disturbance caused by background and writing in the check image, several methods are used in the pre-processing stage, such as color component transformation, a linear transform to a gray-scale image, median filtering, Otsu thresholding, closing operations and the labeling algorithm of mathematical morphology. After these processing steps, a clean binary seal image is obtained. On the basis of the traditional registration algorithm, a two-level registration method comprising rough and precise registration stages is proposed. The precise registration stage resolves the deflection angle to 0.1°. This paper introduces the concepts of inside difference and outside difference and uses their percentages to judge whether a seal is real or fake. Experimental results on a large set of check seals are satisfactory, showing that the methods and algorithms presented are robust to noisy sealing conditions and have a satisfactory tolerance of within-class variation.
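
    The pre-processing chain described above maps naturally onto standard image-processing primitives. The following sketch is one plausible reading of it using OpenCV, assuming a red seal on a scanned check; the channel choice, kernel size, and area threshold are illustrative, not the paper's values.

    ```python
    import cv2
    import numpy as np

    def binarize_seal(bgr_image):
        """Rough reconstruction of the pre-processing steps: color component
        selection, gray-scale conversion, median filter, Otsu thresholding,
        morphological closing, and connected-component labeling."""
        red = bgr_image[:, :, 2]            # red seals are strongest in this channel
        blurred = cv2.medianBlur(red, 3)    # suppress scanner noise
        _, binary = cv2.threshold(blurred, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # bridge small gaps
        # Keep only components large enough to belong to the seal imprint
        n, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
        clean = np.zeros_like(closed)
        for i in range(1, n):
            if stats[i, cv2.CC_STAT_AREA] >= 20:
                clean[labels == i] = 255
        return clean
    ```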

  10. Evaluation of a modified knee rotation angle in MRI scans with and without trochlear dysplasia: a parameter independent of knee size and trochlear morphology.

    PubMed

    Dornacher, Daniel; Trubrich, Angela; Guelke, Joachim; Reichel, Heiko; Kappe, Thomas

    2017-08-01

    Regarding TT-TG in knee realignment surgery, two aspects have to be considered: first, there might be flaws in using absolute values for TT-TG, ignoring the knee size of the individual. Second, in high-grade trochlear dysplasia with a dome-shaped trochlea, measurement of TT-TG has proven to lack precision and reliability. The purpose of this examination was to establish a knee rotation angle, independent of the size of the individual knee and unaffected by a dysplastic trochlea. A total of 114 consecutive MRI scans of knee joints were analysed retrospectively by two observers. Of these, 59 were obtained from patients with trochlear dysplasia, and another 55 were obtained from patients presenting with a different pathology of the knee joint. Trochlear dysplasia was classified into low grade and high grade. TT-TG was measured according to the method described by Schoettle et al. In addition, a modified knee rotation angle was assessed. Interobserver reliability of the knee rotation angle and its correlation with TT-TG were calculated. The knee rotation angle showed good correlation with TT-TG in the readings of observer 1 and observer 2. Interobserver correlation of the parameter showed excellent values for the scans with normal trochlea, low-grade and high-grade trochlear dysplasia, respectively. All calculations were statistically significant (p < 0.05). The knee rotation angle might meet the requirements for precise diagnostics in knee realignment surgery. Unlike TT-TG, this parameter seems not to be affected by a dysplastic trochlea. In addition, the dimensionless parameter is independent of the knee size of the individual. Level of evidence: II.

  11. I Think I See the Light Curve: The Good (and Bad) of Exoplanetary Inverse Problems

    NASA Astrophysics Data System (ADS)

    Schwartz, Joel Colin

    Planets and planetary systems change in brightness as a function of time. These "light curves" can have several features, including transits where a planet blocks some starlight, eclipses where a star obscures a planet's flux, and rotational variations where a planet reflects light differently as it spins. One can measure these brightness changes--which encode radii, temperatures, and more of planets--using current and planned telescopes. But interpreting light curves is an inverse problem: one has to extract astrophysical signals from the effects of imperfect instruments. In this thesis, I first present a meta study of planetary eclipses taken with the Spitzer Space Telescope. We find that eclipse depth uncertainties may be overly precise, especially those in early Spitzer papers. I then offer the first rigorous test of BiLinearly-Interpolated Subpixel Sensitivity (BLISS) mapping, which is widely used to model detector systematics of Spitzer. We show that this ad hoc method is not statistically sound, but it performs adequately in many real-life scenarios. Next, I present the most comprehensive empirical analysis to date on the energy budgets and bulk atmospherics of hot Jupiters. We find that dayside and nightside measurements suggest many hot Jupiters have reflective clouds in the infrared, and that day-night heat transport decreases as these planets are irradiated more. I lastly describe a semi-analytical model for how a planet's surfaces, clouds, and orbital geometry imprint on a light curve. We show that one can strongly constrain a planet's spin axis--and even spin direction--from modest high-precision data. Importantly, these methods will be useful for temperate, terrestrial planets with the launch of the James Webb Space Telescope and beyond.

  12. Absolute GPS Positioning Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ramillien, G.

    A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to those obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by the latter method is typically ~10⁻⁴ m², corresponding to ~300-500-m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10⁻⁵ m²), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in the different simulations are N=1000 starting individuals, with Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of significant noise levels are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement error are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
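
    As a concrete illustration of the approach, a bare-bones GA for the pseudo-range cost function might look as follows. The population size, crossover probability, and mutation rate follow the abstract (N=1000, Pc≈65%, Pm≈35%); the truncation selection, arithmetic crossover, and Gaussian mutation are assumptions, and the receiver clock bias is neglected for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def cost(x, sats, rho):
        # Mean squared pseudo-range residual (m^2) for candidate position x
        return np.mean((np.linalg.norm(sats - x, axis=1) - rho) ** 2)

    def ga_fix(sats, rho, n_pop=1000, pc=0.65, pm=0.35, n_gen=300):
        # Candidate geocentric XYZ positions, in metres
        pop = rng.uniform(-2.7e7, 2.7e7, size=(n_pop, 3))
        for _ in range(n_gen):
            fitness = np.array([cost(p, sats, rho) for p in pop])
            keep = pop[np.argsort(fitness)][: n_pop // 2]      # truncation selection
            mates = keep[rng.permutation(len(keep))]
            w = rng.random((len(keep), 1))
            children = np.where(rng.random((len(keep), 1)) < pc,
                                w * keep + (1.0 - w) * mates,  # arithmetic crossover
                                keep)
            hit = rng.random(len(children)) < pm               # Gaussian mutation
            children[hit] += rng.normal(0.0, 1.0e4, size=(hit.sum(), 3))
            pop = np.vstack([keep, children])
        return min(pop, key=lambda p: cost(p, sats, rho))
    ```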

  13. Stability-indicating methods for the determination of piretanide in presence of the alkaline induced degradates.

    PubMed

    Youssef, Nadia F

    2005-10-04

    Stability-indicating high performance liquid chromatography (HPLC), thin-layer chromatography (TLC) and first-derivative of ratio spectra (1DD) methods are developed for the determination of piretanide in the presence of its alkaline-induced degradates. The HPLC method depends on separation of piretanide from its degradates on a μ-Bondapak C18 column using methanol:water:acetic acid (70:30:1, v/v/v) as the mobile phase at a flow rate of 1.0 ml/min with UV detection at 275 nm. The TLC densitometric method is based on the difference in Rf values between the intact drug and its degradates on thin-layer silica gel. Iso-propanol:ammonia 33% (8:2, v/v) was used as the developing mobile phase and the chromatogram was scanned at 275 nm. The derivative of ratio spectra method (1DD) depends on the measurement of the absorbance at 288 nm in the first derivative of the ratio spectra for the determination of the cited drug in the presence of its degradates. Calibration graphs of the three suggested methods are linear in the concentration ranges 0.02-0.3 μg/20 μl, 0.5-10 μg/spot and 5-50 μg/ml, with mean percentage recoveries of 99.27±0.52, 99.17±1.01 and 99.65±1.01%, respectively. The three proposed methods were successfully applied for the determination of piretanide in bulk powder, laboratory-prepared mixtures and a pharmaceutical dosage form with good accuracy and precision. The results were statistically analyzed and compared with those obtained by the official method. The methods were validated, showing favourable specificity, linearity and precision; accuracy was assessed by applying the standard addition technique.

  14. Measuring coping in parents of children with disabilities: a rasch model approach.

    PubMed

    Gothwal, Vijaya K; Bharani, Seelam; Reddy, Shailaja P

    2015-01-01

    Parents of a child with disability must cope with greater demands than those living with a healthy child. Coping refers to a person's cognitive or behavioral efforts to manage the demands of a stressful situation. The Coping Health Inventory for Parents (CHIP) is a well-recognized measure of coping among parents of chronically ill children and assesses different coping patterns using its three subscales. The purpose of this study was to provide further insights into the psychometric properties of the CHIP subscales in a sample of parents of children with disabilities. In this cross-sectional study, 220 parents (mean age, 33.4 years; 85% mothers) caring for a child with disability enrolled in special schools as well as in mainstream schools completed the 45-item CHIP. Rasch analysis was applied to the CHIP data and the psychometric performance of each of the three subscales was tested. Subscale revision was performed in the context of Rasch analysis statistics. Response categories were not used as intended, necessitating combining categories, thereby reducing their number from 4 to 3. The 'maintaining social support' subscale satisfied all the Rasch model expectations. Four items misfit the Rasch model in the 'maintaining family integration' subscale, but their deletion resulted in a 15-item scale with items that fit the Rasch model well. The remaining subscale, 'understanding the healthcare situation', lacked adequate measurement precision (<2.0 logits). The current Rasch analyses add to the evidence on the measurement properties of the CHIP and show that two of its subscales (one original and the other revised) have good psychometric properties and work well to measure coping patterns in parents of children with disabilities. However, the third subscale is limited by its inadequate measurement precision and requires more items.

  15. Stability-indicating assay of repaglinide in bulk and optimized nanoemulsion by validated high performance thin layer chromatography technique.

    PubMed

    Akhtar, Juber; Fareed, Sheeba; Aqil, Mohd

    2013-07-01

    A sensitive, selective, precise and stability-indicating high-performance thin-layer chromatographic (HPTLC) method for analysis of repaglinide both as a bulk drug and in a nanoemulsion formulation was developed and validated. The method employed TLC aluminum plates precoated with silica gel 60F-254 as the stationary phase. The solvent system consisted of chloroform/methanol/ammonia/glacial acetic acid (7.5:1.5:0.9:0.1, v/v/v/v). This system was found to give compact spots for repaglinide (Rf value of 0.38 ± 0.02). Repaglinide was subjected to acid and alkali hydrolysis, oxidation, photodegradation and dry heat treatment. Also, the degraded products were well separated from the pure drug. Densitometric analysis of repaglinide was carried out in the absorbance mode at 240 nm. The linear regression data for the calibration plots showed a good linear relationship with r² = 0.998 ± 0.032 in the concentration range of 50-800 ng. The method was validated for precision, accuracy as recovery, robustness and specificity. The limits of detection and quantitation were 0.023 and 0.069 ng per spot, respectively. The drug undergoes degradation under acidic and basic conditions, oxidation and dry heat treatment. All the peaks of the degraded products were resolved from the standard drug with significantly different Rf values. Statistical analysis proves that the method is reproducible and selective for the estimation of the said drug. As the method could effectively separate the drug from its degradation products, it can be employed as a stability-indicating one. Moreover, the proposed HPTLC method was utilized to investigate the degradation kinetics in 1 M NaOH.

  16. Validated stability-indicating spectrophotometric methods for the determination of cefixime trihydrate in the presence of its acid and alkali degradation products.

    PubMed

    Mostafa, Nadia M; Abdel-Fattah, Laila; Weshahy, Soheir A; Hassan, Nagiba Y; Boltia, Shereen A

    2015-01-01

    Five simple, accurate, precise, and economical spectrophotometric methods have been developed for the determination of cefixime trihydrate (CFX) in the presence of its acid and alkali degradation products without prior separation. In the first method, second derivative (2D) and first derivative (1D) spectrophotometry was applied to the absorption spectra of CFX and its acid (2D) or alkali (1D) degradation products by measuring the amplitude at 289 and 308 nm, respectively. The second method was a first derivative (1DD) ratio spectrophotometric method where the peak amplitudes were measured at 311 nm in presence of the acid degradation product, and 273 and 306 nm in presence of its alkali degradation product. The third method was ratio subtraction spectrophotometry where the drug is determined at 286 nm in laboratory-prepared mixtures of CFX and its acid or alkali degradation product. The fourth method was based on dual wavelength analysis; two wavelengths were selected at which the absorbances of one component were the same, so wavelengths 209 and 252 nm were used to determine CFX in presence of its acid degradation product and 310 and 321 nm in presence of its alkali degradation product. The fifth method was bivariate spectrophotometric calibration based on four linear regression equations obtained at the wavelengths 231 and 290 nm, and 231 and 285 nm for the binary mixture of CFX with either its acid or alkali degradation product, respectively. The developed methods were successfully applied to the analysis of CFX in laboratory-prepared mixtures and pharmaceutical formulations with good recoveries, and their validation was carried out following the International Conference on Harmonization guidelines. The results obtained were statistically compared with each other and showed no significant difference with respect to accuracy and precision.

  17. A Learning-Style Theory for Understanding Autistic Behaviors

    PubMed Central

    Qian, Ning; Lipkin, Richard M.

    2011-01-01

    Understanding autism's ever-expanding array of behaviors, from sensation to cognition, is a major challenge. We posit that autistic and typically developing brains implement different algorithms that are better suited to learn, represent, and process different tasks; consequently, they develop different interests and behaviors. Computationally, a continuum of algorithms exists, from lookup table (LUT) learning, which aims to store experiences precisely, to interpolation (INT) learning, which focuses on extracting underlying statistical structure (regularities) from experiences. We hypothesize that autistic and typical brains, respectively, are biased toward LUT and INT learning, in low- and high-dimensional feature spaces, possibly because of their narrow and broad tuning functions. The LUT style is good at learning relationships that are local, precise, rigid, and contain little regularity for generalization (e.g., the name–number association in a phonebook). However, it is poor at learning relationships that are context dependent, noisy, flexible, and do contain regularities for generalization (e.g., associations between gaze direction and intention, language and meaning, sensory input and interpretation, motor-control signal and movement, and social situation and proper response). The LUT style poorly compresses information, resulting in inefficiency, sensory overload (overwhelm), restricted interests, and resistance to change. It also leads to poor prediction and anticipation, frequent surprises and over-reaction (hyper-sensitivity), impaired attentional selection and switching, concreteness, strong local focus, weak adaptation, and superior and inferior performances on simple and complex tasks, respectively. The spectrum nature of autism can be explained by different degrees of LUT learning among different individuals, and in different systems of the same individual. Our theory suggests that therapy should focus on training the autistic LUT algorithm to learn regularities. PMID:21886617

  18. Evaluation of the Q analyzer, a new cap-piercing fully automated coagulometer with clotting, chromogenic, and immunoturbidometric capability.

    PubMed

    Kitchen, Steve; Woolley, Anita

    2013-01-01

    The Q analyzer is a recently launched fully automated photo-optical analyzer equipped with primary tube cap-piercing and capable of clotting, chromogenic, and immunoturbidometric tests. The purpose of the present study was to evaluate the performance characteristics of the Q analyzer with reagents from the instrument manufacturer. We assessed precision and throughput when performing coagulation screening tests, prothrombin time (PT)/international normalized ratio (INR), activated partial thromboplastin time (APTT), and fibrinogen assay by the Clauss method. We compared results with established reagent-instrument combinations in widespread use. Precision of PT/INR and APTT was acceptable, as indicated by total precision of around 3%. The time to first result was 3 min for an INR and 5 min for PT/APTT. The system produced 115 completed samples per hour when processing only INRs and 60 samples (120 results) per hour for PT/APTT combined. The sensitivity of the DG-APTT Synth/Q method to mild deficiency of factor VIII (FVIII), IX, and XI was excellent (as indicated by APTTs being prolonged above the upper limit of the reference range). The Q analyzer was associated with high precision, acceptable throughput, and good reliability. When used in combination with DG-PT reagent and the manufacturer's instrument-specific international sensitivity index, the INRs obtained were accurate. The Q analyzer with DG-APTT Synth reagent demonstrated good sensitivity to isolated mild deficiency of FVIII, IX, and XI and had the advantage of relative insensitivity to mild FXII deficiency. Taken together, our data indicate that the Q hemostasis analyzer is suitable for routine use in combination with the reagents evaluated.

  19. Spectrophotometric determination of certain CNS stimulants in dosage forms and spiked human urine via derivatization with 2,4-Dinitrofluorobenzene

    PubMed Central

    2011-01-01

    A new spectrophotometric method is developed for the determination of phenylpropanolamine HCl (PPA), ephedrine HCl (EPH) and pseudoephedrine HCl (PSE) in pharmaceutical preparations and spiked human urine. The method involves heat-catalyzed derivatization of the three drugs with 2,4-dinitrofluorobenzene (DNFB), producing a yellow colored product peaking at 370 nm for PPA and at 380 nm for EPH and PSE. The absorbance-concentration plots were rectilinear over the ranges of 2-20 μg/mL for PPA and 1-14 μg/mL for both EPH and PSE. The limit of detection (LOD) values were 0.20, 0.13 and 0.20 μg/mL for PPA, EPH and PSE, respectively, and the limit of quantitation (LOQ) values were 0.60, 0.40 and 0.59 μg/mL for PPA, EPH and PSE, respectively. The analytical performance of the method was fully validated and the results were satisfactory. The proposed method was successfully applied to the determination of the three studied drugs in their commercial dosage forms, including tablets, capsules and ampoules, with good percentage recoveries. The proposed method was further applied to the determination of PSE in spiked human urine with a mean percentage recovery of 108.17 ± 1.60 (n = 3). Statistical comparison of the results obtained with those of the comparison methods showed good agreement and proved that there was no significant difference in accuracy and precision between the two methods. The mechanism of the reaction pathway was postulated. PMID:22032335

  20. Investigation of power-plant plume photochemistry using a reactive plume model

    NASA Astrophysics Data System (ADS)

    Kim, Y. H.; Kim, H. S.; Song, C. H.

    2016-12-01

    Emissions from large-scale point sources have continuously increased due to rapid industrial growth. In particular, primary and secondary air pollutants are directly relevant to the atmospheric environment and human health. Thus, we tried to precisely describe the atmospheric photochemical conversion from primary to secondary air pollutants inside the plumes emitted from large-scale point sources. A reactive plume model (RPM) was developed to comprehensively consider power-plant plume photochemistry with 255 condensed photochemical reactions. The RPM can simulate two main components of power-plant plumes: turbulent dispersion of the plumes and compositional changes of the plumes via photochemical reactions. In order to evaluate the performance of the RPM developed in the present study, two sets of observational data obtained from the TexAQS II 2006 (Texas Air Quality Study II 2006) campaign were compared with RPM-simulated data. The comparison shows that the RPM produces relatively accurate concentrations for major primary and secondary in-plume species such as NO2, SO2, ozone, and H2SO4. Statistical analyses show good correlation, with correlation coefficients (R) ranging from 0.61 to 0.92, and good agreement, with the Index of Agreement (IOA) ranging from 0.70 to 0.95. Following evaluation of the performance of the RPM, a demonstration was also carried out to show the applicability of the RPM: the RPM can calculate NOx photochemical lifetimes inside the two plumes studied (Monticello and Welsh power plants). Further applicability and possible uses of the RPM are also discussed, together with some limitations of the current version of the RPM.

  1. Multicentre study for validation of the French addictovigilance network reports assessment tool

    PubMed Central

    Hardouin, Jean Benoit; Rousselet, Morgane; Gerardin, Marie; Guerlais, Marylène; Guillou, Morgane; Bronnec, Marie; Sébille, Véronique; Jolliet, Pascale

    2016-01-01

    Aims The French health authority (ANSM) is responsible for monitoring medicinal and other drug dependencies. To support these activities, the ANSM manages a network of 13 drug dependence evaluation and information centres (Centres d'Evaluation et d'Information sur la Pharmacodépendance ‐ Addictovigilance ‐ CEIP‐A) throughout France. In 2006, the Nantes CEIP‐A created a new tool called the EGAP (Echelle de GrAvité de la Pharmacodépendance ‐ drug dependence severity scale) based on DSM IV criteria. This tool allows the creation of a substance use profile that enables the drug dependence severity to be homogeneously quantified by assigning a score to each substance indicated in the reports from health professionals. This article describes the validation and psychometric properties of the drug dependence severity score obtained from the scale (Clinicaltrials.gov NCT01052675). Method The construct validity of the EGAP, the concurrent validity and the discriminative ability of the EGAP score, the consistency of answers to EGAP items, and the internal consistency and inter-rater reliability of the EGAP score were assessed using statistical methods that are generally used for psychometric tests. Results The total EGAP score was a reliable and precise measure for evaluating drug dependence (Cronbach alpha = 0.84; ASI correlation = 0.70; global ICC = 0.92). In addition to its good psychometric properties, the EGAP is a simple and efficient tool that can be easily specified on the official ANSM notification form. Conclusion The good psychometric properties of the total EGAP score justify its use for evaluating the severity of drug dependence. PMID:27302554
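
    The reliability figures quoted above are standard psychometric quantities. As a reminder of what the headline number measures, here is a minimal sketch of Cronbach's alpha; the toy score matrix is invented for illustration and has nothing to do with the EGAP data.

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Internal consistency of a scale, given an
        (n_respondents, n_items) array of item scores."""
        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
        return (k / (k - 1.0)) * (1.0 - item_var / total_var)

    # Toy data: 5 respondents x 4 items
    X = np.array([[3, 3, 2, 3],
                  [1, 1, 2, 1],
                  [2, 2, 2, 2],
                  [3, 2, 3, 3],
                  [1, 2, 1, 1]])
    print(round(cronbach_alpha(X), 2))
    ```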

  2. Surface circulation in the Strait of Gibraltar: a comparison study between HF radar and high resolution model data.

    NASA Astrophysics Data System (ADS)

    Soto-Navarro, Javier; Lorente, Pablo; Álvarez-Fanjul, Enrique; Ruiz-Gil de la Serna, M. Isabel

    2015-04-01

    Surface currents from the HF radar system deployed by Puertos del Estado (PdE) at the Strait of Gibraltar and from an operational high-resolution configuration of the MIT general circulation model, implemented in the strait area in the frame of the SAMPA project, have been analyzed and compared for the period February 2013 - September 2014. The comparison has been carried out in the time and frequency domains, using statistical and geophysical (tide ellipses, wind forcing, EOF) methods. Results show good agreement between both current fields along the strait axis, with correlation around 0.6 (reaching 0.9 in the low-frequency band). Higher discrepancies are found at the boundaries of the domain, due to differences in the meridional components, likely related to the sparser and less accurate radar measurements in these areas. Rotary spectral analysis shows very good agreement between both systems, which is reflected in a very similar and realistic representation of the main tidal constituents (M2, S2 and K1). The wind-forced circulation pattern, of special interest in the mouth of the Bay of Algeciras, is also precisely represented by both radar and model. Finally, the spatial patterns of the first four EOF modes of both fields are rather close, reinforcing the previous results. In conclusion, the analysis points to the proper representation of the surface circulation of the area by both the PdE HF radar system and the SAMPA model. However, weak and strong points are found in each, stressing the importance of having two complementary tools in the area.

  3. Monitoring utilizations of amino acids and vitamins in culture media and Chinese hamster ovary cells by liquid chromatography tandem mass spectrometry.

    PubMed

    Qiu, Jinshu; Chan, Pik Kay; Bondarenko, Pavel V

    2016-01-05

    Monitoring amino acids and vitamins is important for understanding human health, food nutrition and the culture of mammalian cells used to produce therapeutic proteins in biotechnology. A method based on ion-pairing reversed-phase liquid chromatography with tandem mass spectrometry was developed and optimized to quantify 21 amino acids and 9 water-soluble vitamins in Chinese hamster ovary (CHO) cells and culture media. By optimizing the chromatographic separation, scan time, monitoring time window, and sample preparation procedure, and using isotopically labeled ¹³C, ¹⁵N and ²H internal standards, low limits of quantitation (≤0.054 mg/L), good precision (<10%) and good accuracy (100±10%) were achieved for nearly all 30 compounds. Applying this method to CHO cell extracts, statistically significant differences in metabolite levels were measured between two cell lines originating from the same host, indicating differences in genetic makeup or metabolic activities and in nutrient supply levels in the culture media. In a fed-batch process in manufacturing-scale bioreactors, two distinct trends in amino acid concentrations were identified in response to feeding. Ten essential amino acids showed a zigzag pattern with maxima on the feeding days, and 9 non-essential amino acids displayed a smoothly changing profile, as they were mainly products of cellular metabolism. Five of the 9 vitamins accumulated continuously during the culture period, suggesting that they were fed in excess. The method serves as an effective tool for the development and optimization of mammalian cell cultures. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Detection of Yarkovsky acceleration in the context of precovery observations and the future Gaia catalogue

    NASA Astrophysics Data System (ADS)

    Desmars, J.

    2015-03-01

    Context. The Yarkovsky effect is a weak non-gravitational force leading to a small variation in the semi-major axis of an asteroid. Using radar measurements and astrometric observations, it is possible to measure a drift in semi-major axis through orbit determination. Aims: This paper aims to detect a reliable drift in the semi-major axis of near-Earth asteroids (NEAs) from ground-based observations and to investigate the impact of precovery observations and the future Gaia catalogue on the detection of a secular drift in semi-major axis. Methods: We have developed a precise dynamical model of an asteroid's motion taking the Yarkovsky acceleration into account and allowing the fitting of the drift in semi-major axis. Using statistical methods, we investigate the quality and the robustness of the detection. Results: By filtering spurious detections with an estimated maximum drift depending on the asteroid's size, we found 46 NEAs with a reliable drift in semi-major axis, in good agreement with previous studies. The measured drift leads to a better orbit determination and constrains some physical parameters of these objects. Our results are in good agreement with the 1/D dependence of the drift and with the expected ratio of prograde and retrograde NEAs. We show that the uncertainty of the drift mainly depends on the length of the orbital arc, and in this way we highlight the importance of precovery observations and data mining in the detection of a consistent drift. Finally, we discuss the impact of the Gaia catalogue on the determination of drift in semi-major axis.

  5. Spectrophotometric and spectrofluorimetric methods for analysis of tramadol, acebutolol and dothiepin in pharmaceutical preparations

    NASA Astrophysics Data System (ADS)

    Abdellatef, Hisham E.; El-Henawee, Magda M.; El-Sayed, Heba M.; Ayad, Magda M.

    2006-12-01

    Sensitive spectrophotometric and spectrofluorimetric methods are described for the determination of tramadol, acebutolol and dothiepin (dosulepin) hydrochlorides. The two methods are based on the condensation of the cited drugs with the mixed anhydrides of malonic and acetic acids at 60 °C for 25-40 min. The coloured condensation products are suitable for spectrophotometric and spectrofluorimetric determination at 329-333 nm and 431-434 nm (excitation at 389 nm), respectively. For the spectrophotometric method, Beer's law was obeyed from 0.5 to 2.5 μg ml⁻¹ for tramadol and dothiepin and 5-25 μg ml⁻¹ for acebutolol. Using the spectrofluorimetric method, linearity ranged from 0.25 to 1.25 μg ml⁻¹ for tramadol and dothiepin and 1-5 μg ml⁻¹ for acebutolol. Mean percentage recoveries for the spectrophotometric method were 99.68 ± 1.00, 99.95 ± 1.11 and 99.72 ± 1.01 for tramadol, acebutolol and dothiepin, respectively, and for the spectrofluorimetric method, recoveries were 99.5 ± 0.844, 100.32 ± 0.969 and 99.82 ± 1.15 for the three drugs, respectively. The optimum experimental parameters for the reaction have been studied. The validity of the described procedures was assessed, and statistical analysis of the results revealed high accuracy and good precision. The proposed methods were successfully applied to the determination of the selected drugs in their pharmaceutical preparations with good recoveries. The procedures are accurate, simple and suitable for quality control applications.

  6. Precise measurement of scleral radius using anterior eye profilometry.

    PubMed

    Jesus, Danilo A; Kedzia, Renata; Iskander, D Robert

    2017-02-01

    To develop a new and precise methodology to measure the scleral radius based on the anterior eye surface. The Eye Surface Profiler (ESP, Eaglet-Eye, Netherlands) was used to acquire the anterior eye surface of 23 emmetropic subjects aged 28.1 ± 6.6 years (mean ± standard deviation), ranging from 20 to 45. The scleral radius was obtained by approximating the topographical scleral data with a sphere using least-squares fitting, with the axial length as a reference point. To better understand the role of the scleral radius in ocular biometry, measurements of corneal radius, central corneal thickness, anterior chamber depth and white-to-white corneal diameter were acquired with the IOLMaster 700 (Carl Zeiss Meditec AG, Jena, Germany). The estimated scleral radius (11.2 ± 0.3 mm) was shown to be highly precise, with a coefficient of variation of 0.4%. A statistically significant correlation between axial length and scleral radius (R² = 0.957, p < 0.001) was observed. Moreover, corneal radius (R² = 0.420, p < 0.001), anterior chamber depth (R² = 0.141, p = 0.039) and white-to-white corneal diameter (R² = 0.146, p = 0.036) also showed statistically significant correlations with the scleral radius. Lastly, no correlation was observed between the scleral radius and central corneal thickness (R² = 0.047, p = 0.161). Three-dimensional topography of the anterior eye acquired with the Eye Surface Profiler, together with an estimate of the axial length, can be used to calculate the scleral radius with high precision. Copyright © 2016 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
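
    The least-squares sphere fit at the heart of the method has a convenient linear (algebraic) formulation. A minimal sketch follows; it omits the paper's use of the axial length as a reference point and fits the sphere to the point cloud alone.

    ```python
    import numpy as np

    def fit_sphere(points):
        """Least-squares sphere through an (N, 3) cloud of surface points.

        Expanding ||p - c||^2 = R^2 gives the linear relation
            2 p . c + (R^2 - ||c||^2) = ||p||^2,
        so the centre c and the combined term d = R^2 - ||c||^2 solve an
        ordinary least-squares system A w = b.
        """
        A = np.hstack([2.0 * points, np.ones((len(points), 1))])
        b = (points ** 2).sum(axis=1)
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        center, d = w[:3], w[3]
        radius = np.sqrt(d + center @ center)
        return center, radius
    ```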

  7. Analytical precision of the Urolizer for the determination of the BONN-Risk-Index (BRI) for calcium oxalate urolithiasis and evaluation of the influence of 24-h urine storage at moderate temperatures on BRI.

    PubMed

    Berg, Wolfgang; Bechler, Robin; Laube, Norbert

    2009-01-01

    Since its first publication in 2000, the BONN-Risk-Index (BRI) has been successfully used to determine the calcium oxalate (CaOx) crystallization risk from urine samples. To date, a BRI-measuring device, the "Urolizer", has been developed, operating automatically and requiring only a minimum of preparation. Two major objectives were pursued: determination of Urolizer precision, and determination of the influence of 24-h urine storage at moderate temperatures on BRI. 24-h urine samples from 52 CaOx stone-formers were collected. A total of 37 urine samples were used for the investigation of Urolizer precision by performing six independent BRI determinations in series. In total, 30 samples were taken for the additional investigation of urine storability. Each sample was measured three times: directly after collection, after 24-h storage at T = 21 °C, and after 24-h cooling at T = 4 °C. Outcomes were statistically tested for identity with regard to the immediately obtained results. Repeat measurements for the evaluation of Urolizer precision revealed statistical identity of the data (p > 0.05). 24-h storage of urine at both tested temperatures did not significantly affect BRI (p > 0.05). The pilot-run Urolizer shows high analytical reliability. The innovative analysis device may be especially suited for urologists specializing in urolithiasis treatment. The possibility of urine storage at moderate temperatures without loss of analysis quality further demonstrates the applicability of the BRI method.

  8. Application of machine/statistical learning, artificial intelligence and statistical experimental design for the modeling and optimization of methylene blue and Cd(ii) removal from a binary aqueous solution by natural walnut carbon.

    PubMed

    Mazaheri, H; Ghaedi, M; Ahmadi Azqhandi, M H; Asfaram, A

    2017-05-10

    Analytical chemists apply statistical methods for both the validation and prediction of proposed models. Methods are required that are adequate for capturing the typical features of a dataset, such as nonlinearities and interactions. Boosted regression trees (BRTs), as an ensemble technique, are fundamentally different from conventional techniques that aim to fit a single parsimonious model. In this work, BRT, artificial neural network (ANN) and response surface methodology (RSM) models have been used for the optimization and/or modeling of the stirring time (min), pH, adsorbent mass (mg) and concentrations of MB and Cd²⁺ ions (mg L⁻¹) in order to develop predictive equations for simulating the efficiency of MB and Cd²⁺ adsorption based on the experimental data set. Activated carbon, as an adsorbent, was synthesized from walnut wood waste, which is abundant, non-toxic, cheap and locally available. This adsorbent was characterized using different techniques such as FT-IR, BET, SEM, point of zero charge (pH_pzc) and the determination of oxygen-containing functional groups. The influence of the various parameters (i.e. pH, stirring time, adsorbent mass and concentrations of MB and Cd²⁺ ions) on the percentage removal was assessed by investigation of sensitivity functions and variable importance rankings (BRT) and by analysis of variance (RSM). Furthermore, a central composite design (CCD) combined with a desirability function approach (DFA) as a global optimization technique was used for the simultaneous optimization of the effective parameters. The applicability of the BRT, ANN and RSM models for the description of the experimental data was examined using four statistical criteria (absolute average deviation (AAD), mean absolute error (MAE), root mean square error (RMSE) and coefficient of determination (R²)). All three models gave good predictions in this study. The BRT model was more precise than the other models, showing that BRT could be a powerful tool for modeling and optimizing the removal of MB and Cd(II). Sensitivity analysis (calculated from the weights of neurons in the ANN) confirmed that the adsorbent mass and pH were the essential factors affecting the removal of MB and Cd(II), with relative importances of 28.82% and 38.34%, respectively. A good agreement (R² > 0.960) between the predicted and experimental values was obtained. Maximum removal (R% > 99) was achieved at an initial dye concentration of 15 mg L⁻¹, a Cd²⁺ concentration of 20 mg L⁻¹, a pH of 5.2, an adsorbent mass of 0.55 g and a time of 35 min.
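
    The four goodness-of-fit criteria used to compare the BRT, ANN and RSM models are simple functions of the residuals. A minimal sketch, assuming AAD is reported as a mean absolute percentage deviation (definitions vary slightly across papers):

    ```python
    import numpy as np

    def fit_criteria(y_obs, y_pred):
        y_obs = np.asarray(y_obs, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        resid = y_obs - y_pred
        mae = np.abs(resid).mean()                     # mean absolute error
        rmse = np.sqrt((resid ** 2).mean())            # root mean square error
        aad = 100.0 * np.abs(resid / y_obs).mean()     # absolute average deviation, %
        r2 = 1.0 - (resid ** 2).sum() / ((y_obs - y_obs.mean()) ** 2).sum()
        return {"AAD%": aad, "MAE": mae, "RMSE": rmse, "R2": r2}

    print(fit_criteria([90.0, 95.0, 99.0], [88.5, 96.0, 98.0]))
    ```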

  9. A Rasch Analysis of the Junior Metacognitive Awareness Inventory with Singapore Students

    ERIC Educational Resources Information Center

    Ning, Hoi Kwan

    2018-01-01

    The psychometric properties of the 2 versions of the Junior Metacognitive Awareness Inventory were examined with Singapore student samples. Other than 2 misfitting items and an underutilized response scale, Rasch analysis demonstrated that the instruments have good measurement precision, and no differential item functioning was detected across…

  10. Teaching Drafting 101: What Comes First?

    ERIC Educational Resources Information Center

    Carkhuff, Don

    2006-01-01

    Employers require pristine drawings that convey clarity and precision for the production of goods. Can a change in sequence of instruction be expeditious and help teachers better prepare their students for the workplace? Research suggests that combining traditional drafting and computer-aided drafting (CAD) instruction makes sense. It is analogous…

  11. What good are unmanned aircraft systems for agricultural remote sensing and precision agriculture?

    USDA-ARS?s Scientific Manuscript database

    Civilian applications of unmanned aircraft systems (UAS, also called drones) are rapidly expanding into crop production. UAS acquire high spatial resolution remote sensing imagery that can be used three different ways in agriculture. One is to assist crop scouts looking for problems in crop fields....

  12. Metrology Careers: Jobs for Good Measure

    ERIC Educational Resources Information Center

    Liming, Drew

    2009-01-01

    What kind of career rewards precision and accuracy? One in metrology--the science of measurement. By evaluating and calibrating the technology in people's everyday lives, metrologists keep their world running smoothly. Metrology is used in the design and production of almost everything people encounter daily, from the cell phones in their pockets…

  13. Horsfall-Barratt recalibration and replicated severity estimates of citrus canker

    USDA-ARS?s Scientific Manuscript database

    Citrus canker is a serious disease of citrus in tropical and subtropical citrus growing regions. Accurate and precise assessment of citrus canker and other plant pathogens is needed to obtain good quality data. Citrus canker assessment data were used to ascertain some of the mechanics of the Horsfal...

  14. Response time distributions in rapid chess: a large-scale decision making experiment.

    PubMed

    Sigman, Mariano; Etchemendy, Pablo; Slezak, Diego Fernández; Cecchi, Guillermo A

    2010-01-01

    Rapid chess provides an unparalleled laboratory to understand decision making in a natural environment. In a chess game, players choose around 40 consecutive moves in a finite time budget. The goodness of each choice can be determined quantitatively since current chess algorithms estimate the value of a position precisely. Web-based chess produces vast amounts of data, millions of decisions per day, incommensurable with traditional psychological experiments. We generated a database of response times (RTs) and position values in rapid chess games. We measured robust emergent statistical observables: (1) RT distributions are long-tailed and show qualitatively distinct forms at different stages of the game; (2) RTs of successive moves are highly correlated both for intra- and inter-player moves. These findings have theoretical implications since they deny two basic assumptions of sequential decision-making algorithms: RTs are not stationary and cannot be generated by a state-function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which is still an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation.
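
    A "weighted combination of remaining times and position evaluation" for estimating winning likelihood is, in its simplest form, a logistic model. This is a toy sketch with invented feature rows, not the authors' fitted model or data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: [remaining-time advantage (s), position evaluation (pawns)]
    X = np.array([[30.0, 0.8], [-12.0, -1.5], [5.0, 0.2],
                  [-40.0, -0.9], [18.0, 1.4], [-3.0, -0.1]])
    y = np.array([1, 0, 1, 0, 1, 0])   # 1 = game eventually won

    model = LogisticRegression().fit(X, y)
    p_win = model.predict_proba([[10.0, 0.5]])[0, 1]   # estimated winning likelihood
    ```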

  15. Effects of process variables and kinetics on the degradation of 2,4-dichlorophenol using advanced reduction processes (ARP).

    PubMed

    Yu, Xingyue; Cabooter, Deirdre; Dewil, Raf

    2018-05-24

    This study investigates the efficiency and kinetics of 2,4-DCP degradation via advanced reduction processes (ARP). Using UV light as the activation method, the highest degradation efficiency of 2,4-DCP was obtained when using sulphite as the reducing agent. The highest degradation efficiency was observed under alkaline conditions (pH = 10.0), at high sulphite dosage and UV intensity, and at low 2,4-DCP concentration. For all process conditions, first-order reaction rate kinetics were applicable. A quadratic polynomial equation fitted by a Box-Behnken design was used as a statistical model and proved to be precise and reliable in describing the significance of the different process variables. The analysis of variance demonstrated that the experimental results were in good agreement with the predicted model (R² = 0.9343), and solution pH, sulphite dose and UV intensity were found to be the key process variables in the sulphite/UV ARP. Consequently, the present study provides a promising approach for the efficient degradation of 2,4-DCP with fast degradation kinetics. Copyright © 2018 Elsevier B.V. All rights reserved.
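
    The first-order kinetics mentioned above imply C(t) = C0·exp(-kt), so the rate constant can be read off a linear fit of ln(C/C0) against time. A minimal sketch with invented concentration readings:

    ```python
    import numpy as np

    def first_order_rate_constant(t, conc):
        """Slope of ln(C/C0) versus t is -k for first-order decay."""
        t = np.asarray(t, dtype=float)
        y = np.log(np.asarray(conc, dtype=float) / conc[0])
        slope, _ = np.polyfit(t, y, 1)
        return -slope

    t = [0, 10, 20, 30, 40]             # min
    c = [50.0, 30.3, 18.4, 11.2, 6.8]   # mg/L, illustrative readings
    print(first_order_rate_constant(t, c))   # ~0.05 per minute
    ```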

  16. Barcode index number, taxonomic rank and modes of speciation: examples from fish.

    PubMed

    Kartavtsev, Yuri Phedorovich

    2018-05-01

    Species delimitation by DNA sequence data, or DNA barcoding, is successful, as confirmed by the vast BOLD database. However, the theory that would explain this fact has not yet been developed. An approach based on the Barcode Index Number (BIN), suggested here, allows delimiting of taxa of three ranks (species, genera, and families) with statistical validation and a high precision of delimitation (over 80%), and shows, for the majority of Co-1-based single-gene trees, good correspondence between their topology and conventional taxon content for the fish species analyzed (R² ≈ 0.84-0.98). Knowledge of deviations from these data can help to identify new taxa and improve biodiversity description. It is concluded that delimitation is successful in the bulk of cases because the geographic mode of speciation prevails in nature. It takes a long time for new taxa to form in isolation, which allows the accumulation of random mutations and many different nucleotide substitutions between them that can be detected by molecular markers and give unique DNA barcodes. The use of the BIN approach described here can aid greatly in clarifying this important question, especially under wider examination of other organisms.

  17. Failure of Breit-Wigner and success of dispersive descriptions of the τ⁻ → K⁻ηντ decays

    NASA Astrophysics Data System (ADS)

    Roig, Pablo

    2015-11-01

    The τ⁻ → K⁻ηντ decays have been studied using Chiral Perturbation Theory extended by including resonances as active fields. We have found that the treatment of final-state interactions is crucial to provide a good description of the data. The Breit-Wigner approximation does not resum them and neglects the real part of the corresponding chiral loop functions, which violates analyticity and leads to a failure in the confrontation with the data. On the contrary, their resummation by means of an Omnès-like exponentiation through a dispersive representation provides a successful explanation of the measurements. These results illustrate the fact that Breit-Wigner parametrizations of hadronic data, although simple and easy to handle, lack a link with the underlying strong interaction theory and should be avoided. As a result of our analysis we determine the properties of the K*(1410) resonance with a precision competitive with its traditional extraction using τ⁻ → (Kπ)⁻ντ decays, despite the much more limited statistics accumulated for the τ⁻ → K⁻ηντ channel. We also predict the imminent discovery of the τ⁻ → K⁻η'ντ decays.

  18. Incorporating uncertainty in predictive species distribution modelling.

    PubMed

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  19. Response Time Distributions in Rapid Chess: A Large-Scale Decision Making Experiment

    PubMed Central

    Sigman, Mariano; Etchemendy, Pablo; Slezak, Diego Fernández; Cecchi, Guillermo A.

    2010-01-01

    Rapid chess provides an unparalleled laboratory to understand decision making in a natural environment. In a chess game, players choose around 40 consecutive moves in a finite time budget. The goodness of each choice can be determined quantitatively since current chess algorithms estimate the value of a position precisely. Web-based chess produces vast amounts of data, millions of decisions per day, incommensurable with traditional psychological experiments. We generated a database of response times (RTs) and position values in rapid chess games. We measured robust emergent statistical observables: (1) RT distributions are long-tailed and show qualitatively distinct forms at different stages of the game; (2) RTs of successive moves are highly correlated both for intra- and inter-player moves. These findings have theoretical implications since they deny two basic assumptions of sequential decision-making algorithms: RTs are not stationary and cannot be generated by a state-function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which is still an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation. PMID:21031032

  20. [Health governance and social protection indicators in Latin-America: strengths, weaknesses and lessons-learned from six Mexican states].

    PubMed

    Arredondo-López, Armando; Orozco-Núñez, Emanuel

    2014-01-01

    Evaluative research projects aimed at identifying good practice in health system reform have been postponed. This study was thus aimed at identifying health governance and social protection indicators. It involved evaluative research regarding the health system for the uninsured part of the population in six Mexican states. The primary data were obtained from in-depth interviews with key players from the participating states; official statistics and the results of a macro-project concerned with Mexican health and governance reform and policy were used as secondary data. Atlas Ti and Policy Maker software were used for processing and analysing the data. A list of strengths and weaknesses was presented as evidence of health system governance. Accountability at the federal level (even though not lacking) was of a prescriptive nature, and a system of accountability and transparency regarding the assignment of resources, together with strategies for the democratisation of health in the states and municipalities, was still lacking. All six states had low levels of governance and experienced difficulty in conducting effective reform programmes and strategies, involving a lack of precision regarding the rules and roles adopted by different health system actors.

  1. Method of radiometric quality assessment of NIR images acquired with a custom sensor mounted on an unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Wierzbicki, Damian; Fryskowska, Anna; Kedzierski, Michal; Wojtkowska, Michalina; Delis, Paulina

    2018-01-01

    Unmanned aerial vehicles are suited to various photogrammetry and remote sensing missions. Such platforms are equipped with various optoelectronic sensors imaging in the visible and infrared spectral ranges and also thermal sensors. Nowadays, near-infrared (NIR) images acquired from low altitudes are often used for producing orthophoto maps for precision agriculture among other things. One major problem results from the application of low-cost custom and compact NIR cameras with wide-angle lenses introducing vignetting. In numerous cases, such cameras acquire low radiometric quality images depending on the lighting conditions. The paper presents a method of radiometric quality assessment of low-altitude NIR imagery data from a custom sensor. The method utilizes statistical analysis of NIR images. The data used for the analyses were acquired from various altitudes in various weather and lighting conditions. An objective NIR imagery quality index was determined as a result of the research. The results obtained using this index enabled the classification of images into three categories: good, medium, and low radiometric quality. The classification makes it possible to determine the a priori error of the acquired images and assess whether a rerun of the photogrammetric flight is necessary.

  2. B*Bπ coupling using relativistic heavy quarks

    DOE PAGES

    Flynn, J. M.; Fritzsch, P.; Kawanai, T.; ...

    2016-01-27

    We report on a calculation of the B*Bπ coupling in lattice QCD. The strong matrix element ⟨Bπ|B*⟩ is directly related to the leading order low-energy constant in heavy meson chiral perturbation theory (HMχPT) for B mesons. We carry out our calculation directly at the b-quark mass using a non-perturbatively tuned clover action that controls discretization effects of order |p⃗|a and (ma)^n for all n. Our analysis is performed on RBC/UKQCD gauge configurations using domain-wall fermions and the Iwasaki gauge action at two lattice spacings of a⁻¹ = 1.729(25) GeV and a⁻¹ = 2.281(28) GeV, and unitary pion masses down to 290 MeV. We achieve good statistical precision and control all systematic uncertainties, giving a final result for the HMχPT coupling g_b = 0.56(3)_stat(7)_sys in the continuum and at the physical light-quark masses. Furthermore, this is the first calculation performed directly at the physical b-quark mass and lies in the region one would expect from carrying out an interpolation between previous results at the charm mass and at the static point.

  3. Clinical advances of nanocarrier-based cancer therapy and diagnostics.

    PubMed

    Luque-Michel, Edurne; Imbuluzqueta, Edurne; Sebastián, Víctor; Blanco-Prieto, María J

    2017-01-01

    Cancer is a leading cause of death worldwide and efficient new strategies are urgently needed to combat its high mortality and morbidity statistics. Fortunately, over the years, nanotechnology has evolved as a frontrunner in the areas of imaging, diagnostics and therapy, giving the possibility of monitoring, evaluating and individualizing cancer treatments in real-time. Areas covered: Polymer-based nanocarriers have been extensively studied to maximize cancer treatment efficacy and minimize the adverse effects of standard therapeutics. Regarding diagnosis, nanomaterials like quantum dots, iron oxide nanoparticles or gold nanoparticles have been developed to provide rapid, sensitive detection of cancer and, therefore, facilitate early treatment and monitoring of the disease. Therefore, multifunctional nanosystems with both imaging and therapy functionalities bring us a step closer to delivering precision/personalized medicine in the cancer setting. Expert opinion: There are multiple barriers for these new nanosystems to enter the clinic, but it is expected that in the near future, nanocarriers, together with new 'targeted drugs', could replace our current treatments and cancer could become a nonfatal disease with good recovery rates. Joint efforts between scientists, clinicians, the pharmaceutical industry and legislative bodies are needed to bring to fruition the application of nanosystems in the clinical management of cancer.

  4. The 2D analytic signal for envelope detection and feature extraction on ultrasound images.

    PubMed

    Wachinger, Christian; Klein, Tassilo; Navab, Nassir

    2012-08-01

    The fundamental property of the analytic signal is the split of identity, meaning the separation of qualitative and quantitative information in form of the local phase and the local amplitude, respectively. Especially the structural representation, independent of brightness and contrast, of the local phase is interesting for numerous image processing tasks. Recently, the extension of the analytic signal from 1D to 2D, covering also intrinsic 2D structures, was proposed. We show the advantages of this improved concept on ultrasound RF and B-mode images. Precisely, we use the 2D analytic signal for the envelope detection of RF data. This leads to advantages for the extraction of the information-bearing signal from the modulated carrier wave. We illustrate this, first, by visual assessment of the images, and second, by performing goodness-of-fit tests to a Nakagami distribution, indicating a clear improvement of statistical properties. The evaluation is performed for multiple window sizes and parameter estimation techniques. Finally, we show that the 2D analytic signal allows for an improved estimation of local features on B-mode images. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Precision of coherence analysis to detect cerebral autoregulation by near-infrared spectroscopy in preterm infants

    NASA Astrophysics Data System (ADS)

    Hahn, Gitte Holst; Christensen, Karl Bang; Leung, Terence S.; Greisen, Gorm

    2010-05-01

    Coherence between spontaneous fluctuations in arterial blood pressure (ABP) and the cerebral near-infrared spectroscopy signal can detect cerebral autoregulation. Because reliable measurement depends on signals with high signal-to-noise ratio, we hypothesized that coherence is more precisely determined when fluctuations in ABP are large rather than small. Therefore, we investigated whether adjusting for variability in ABP (variabilityABP) improves precision. We examined the impact of variabilityABP within the power spectrum in each measurement and between repeated measurements in preterm infants. We also examined total monitoring time required to discriminate among infants with a simulation study. We studied 22 preterm infants (GA<30) yielding 215 10-min measurements. Surprisingly, adjusting for variabilityABP within the power spectrum did not improve the precision. However, adjusting for the variabilityABP among repeated measurements (i.e., weighting measurements with high variabilityABP in favor of those with low) improved the precision. The evidence of drift in individual infants was weak. Minimum monitoring time needed to discriminate among infants was 1.3-3.7 h. Coherence analysis in low frequencies (0.04-0.1 Hz) had higher precision and statistically more power than in very low frequencies (0.003-0.04 Hz). In conclusion, a reliable detection of cerebral autoregulation takes hours and the precision is improved by adjusting for variabilityABP between repeated measurements.
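
    As an illustration of the basic quantity involved (a sketch, not the authors' analysis pipeline; the epoch length and the 0.04-0.1 Hz band follow the abstract, while the sampling rate, segment length, and synthetic signals are assumed):

    ```python
    import numpy as np
    from scipy.signal import coherence

    def lf_coherence(abp, nirs, fs, band=(0.04, 0.1)):
        """Mean magnitude-squared coherence in a frequency band.

        abp, nirs : 1-D arrays sampled at fs Hz (e.g., a 10-min epoch).
        band      : low-frequency range (Hz) reported to give higher precision.
        """
        # Welch-averaged coherence; nperseg trades frequency resolution
        # against the number of averaged segments (bias vs. variance).
        f, cxy = coherence(abp, nirs, fs=fs, nperseg=int(300 * fs))
        mask = (f >= band[0]) & (f <= band[1])
        return cxy[mask].mean()

    # Example with synthetic 10-min signals at 2 Hz (values illustrative only):
    fs = 2.0
    t = np.arange(0, 600, 1 / fs)
    abp = np.sin(2 * np.pi * 0.07 * t) + np.random.randn(t.size)
    nirs = 0.5 * np.sin(2 * np.pi * 0.07 * t) + np.random.randn(t.size)
    print(lf_coherence(abp, nirs, fs))
    ```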

  6. 42 CFR 493.1256 - Standard: Control procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... for having control procedures that monitor the accuracy and precision of the complete analytic process..., include two control materials, including one that is capable of detecting errors in the extraction process... control materials having previously determined statistical parameters. (e) For reagent, media, and supply...

  7. [Subjective health and burden of disease in seniors: Overview of official statistics and public health reports].

    PubMed

    Bardehle, D

    2015-12-01

    There are different types of information on men's health in older age. A high morbidity burden is offset by subjective assessments of "very good" and "good" health by 52% of men over 65 years. The aim of this study is to assess the health situation of seniors from official publications and public health reports. How can the quality of life of our male population be positively influenced so that they can actively participate in society in old age? Information on the health of seniors and the burden of disease was taken from men's health reports and official publications from the Robert-Koch-Institute, the Federal Statistical Office, and the IHME Institute of the USA, according to age groups and gender. Burden of disease in seniors is influenced by one's own health behavior and the social situation. The increase in life expectancy of seniors is characterized by longer life with chronic conditions. Official statistics indicate that about 50% of seniors are affected by disease or severe disability, while 50% assess their health status as "very good" or "good". Aging of the population requires diverse health promotion activities. In parallel with the inevitable increase in multimorbidity in the elderly, maintaining and increasing physical fitness is required so that seniors have a positive "subjective health" or "wellbeing".

  8. Possibility of New Precise Measurements of Muonic Helium Atom HFS at J-PARC MUSE

    NASA Astrophysics Data System (ADS)

    Strasser, P.; Shimomura, K.; Torii, H. A.

    We propose the next generation of precision microwave spectroscopy measurements of the ground state hyperfine structure (HFS) of the muonic helium atom. The HFS interval is a sensitive tool to test three-body atomic systems and bound-state QED theory, as well as for a precise direct determination of the negative muon magnetic moment and hence its mass. Previous measurements performed in the 1980s at PSI and LAMPF had uncertainties dominated by statistical errors. The new high-intensity pulsed negative muon beam at J-PARC MUSE gives an opportunity to improve these measurements by nearly two orders of magnitude for the HFS interval, and almost tenfold for the negative muon mass, thus providing a more precise test of CPT invariance and a determination of the negative counterpart of the anomalous g-factor for the existing BNL muon g-2 experiment. Both measurements at zero field and at high magnetic field are considered. An overview of the different aspects of these new muonic helium HFS measurements is presented.

  9. Metabolomics through the lens of precision cardiovascular medicine.

    PubMed

    Lam, Sin Man; Wang, Yuan; Li, Bowen; Du, Jie; Shui, Guanghou

    2017-03-20

    Metabolomics, which targets the extensive characterization and quantitation of global metabolites from both endogenous and exogenous sources, has emerged as a novel technological avenue to advance the field of precision medicine, principally driven by genomics-oriented approaches. In particular, metabolomics has revealed the cardinal roles that the environment exerts in driving the progression of major diseases threatening public health. Herein, the existing and potential applications of metabolomics in two key areas of precision cardiovascular medicine will be critically discussed: 1) the use of metabolomics in unveiling novel disease biomarkers and pathological pathways; 2) the contribution of metabolomics in cardiovascular drug development. Major issues concerning the statistical handling of big data generated by metabolomics, as well as its interpretation, will be briefly addressed. Finally, the need for integration of various omics branches and adopting a multi-omics approach to precision medicine will be discussed. Copyright © 2017 Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, and Genetics Society of China. Published by Elsevier Ltd. All rights reserved.

  10. Photon Statistics of Propagating Thermal Microwaves.

    PubMed

    Goetz, J; Pogorzalek, S; Deppe, F; Fedorov, K G; Eder, P; Fischer, M; Wulschner, F; Xie, E; Marx, A; Gross, R

    2017-03-10

    In experiments with superconducting quantum circuits, characterizing the photon statistics of propagating microwave fields is a fundamental task. We quantify the n² + n photon number variance of thermal microwave photons emitted from a blackbody radiator for mean photon numbers 0.05 ≲ n ≲ 1.5. We probe the fields using either correlation measurements or a transmon qubit coupled to a microwave resonator. Our experiments provide a precise quantitative characterization of weak microwave states and information on the noise emitted by a Josephson parametric amplifier.
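
    As context for the quoted variance (standard single-mode Bose-Einstein statistics, not part of the record itself), the n² + n law follows directly from the thermal photon number distribution:

    ```latex
    % Single-mode thermal field with mean photon number \bar{n}:
    P(n) = \frac{\bar{n}^{\,n}}{(1+\bar{n})^{\,n+1}}, \qquad
    \langle n^{2}\rangle = 2\bar{n}^{2}+\bar{n}
    \;\Rightarrow\;
    \operatorname{Var}(n) = \langle n^{2}\rangle-\langle n\rangle^{2}
                          = \bar{n}^{2}+\bar{n}.
    ```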

  11. Photon Statistics of Propagating Thermal Microwaves

    NASA Astrophysics Data System (ADS)

    Goetz, J.; Pogorzalek, S.; Deppe, F.; Fedorov, K. G.; Eder, P.; Fischer, M.; Wulschner, F.; Xie, E.; Marx, A.; Gross, R.

    2017-03-01

    In experiments with superconducting quantum circuits, characterizing the photon statistics of propagating microwave fields is a fundamental task. We quantify the n² + n photon number variance of thermal microwave photons emitted from a blackbody radiator for mean photon numbers 0.05 ≲ n ≲ 1.5. We probe the fields using either correlation measurements or a transmon qubit coupled to a microwave resonator. Our experiments provide a precise quantitative characterization of weak microwave states and information on the noise emitted by a Josephson parametric amplifier.

  12. Brain tissues volume measurements from 2D MRI using parametric approach

    NASA Astrophysics Data System (ADS)

    L'vov, A. A.; Toropova, O. A.; Litovka, Yu. V.

    2018-04-01

    The purpose of the paper is to propose a fully automated method for assessing the volume of structures within the human brain. Our statistical approach uses the maximum interdependency principle to decide on the consistency of measurements and on unequal observations. Outliers are detected using the maximum normalized residual test. We propose a statistical model which utilizes knowledge of the tissue distribution in the human brain and applies partial data restoration to improve precision. The proposed approach is computationally efficient and independent of the segmentation algorithm used in the application.
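
    The maximum normalized residual test mentioned here is commonly known as Grubbs' test; a minimal sketch (the volume values are invented for illustration):

    ```python
    import numpy as np
    from scipy import stats

    def max_normalized_residual_test(x, alpha=0.05):
        """One round of the maximum normalized residual (Grubbs) test.

        Returns (is_outlier, index) for the most extreme observation.
        """
        x = np.asarray(x, dtype=float)
        n = x.size
        resid = np.abs(x - x.mean())
        i = int(np.argmax(resid))
        g = resid[i] / x.std(ddof=1)
        # Two-sided critical value from the t distribution.
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
        return g > g_crit, i

    # Repeated application (remove and retest) screens a set of regional
    # volume measurements for inconsistent values:
    vols = np.array([12.1, 11.8, 12.4, 12.0, 17.9, 11.9])
    print(max_normalized_residual_test(vols))  # flags the 17.9 value
    ```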

  13. Dual mobility hip arthroplasty wear measurement: Experimental accuracy assessment using radiostereometric analysis (RSA).

    PubMed

    Pineau, V; Lebel, B; Gouzy, S; Dutheil, J-J; Vielpeau, C

    2010-10-01

    The use of dual mobility cups is an effective method to prevent dislocations. However, the specific design of these implants can raise the suspicion of increased wear and subsequent periprosthetic osteolysis. Using radiostereometric analysis (RSA), migration of the femoral head inside the cup of a dual mobility implant can be measured to assess the polyethylene wear rate. The study aimed to establish the precision of RSA measurement of femoral head migration in the cup of a dual mobility implant, and its intra- and interobserver variability. A total hip prosthesis phantom was implanted and placed under weight loading conditions in a simulator. Model-based RSA measurement of implant penetration involved specially machined polyethylene liners with increasing concentric wear (no wear, then 0.25, 0.5 and 0.75 mm). Three examiners, blinded to the level of wear, analyzed (10 times) the radiostereometric films of the four liners. There was one experienced, one trained, and one inexperienced examiner. Statistical analysis measured the accuracy, precision, and intra- and interobserver variability by calculating Root Mean Square Error (RMSE), Concordance Correlation Coefficient (CCC), Intra Class correlation Coefficient (ICC), and Bland-Altman plots. Our protocol, which used a simple geometric model rather than the manufacturer's CAD files, showed precision of 0.072 mm and accuracy of 0.034 mm, comparable with machining tolerances, with low variability. Correlation between wear measurement and true value was excellent with a CCC of 0.9772. Intraobserver reproducibility was very good with an ICC of 0.9856, 0.9883 and 0.9842, respectively for examiners 1, 2 and 3. Interobserver reproducibility was excellent with a CCC of 0.9818 between examiners 2 and 1, and 0.9713 between examiners 3 and 1. Quantification of wear is indispensable for the surveillance of dual mobility implants. This in vitro study validates our measurement method. Our results, and comparison with other studies using different measurement technologies (RSA, standard radiographs, Martell method), make model-based RSA the reference method for measuring the wear of total hip prostheses in vivo. Level 3. Prospective diagnostic study. Copyright © 2010 Elsevier Masson SAS. All rights reserved.
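
    For reference, the CCC reported throughout is Lin's concordance correlation coefficient; a minimal sketch (the wear readings below are hypothetical, not the study's data):

    ```python
    import numpy as np

    def concordance_correlation(x, y):
        """Lin's concordance correlation coefficient between measured
        wear values x and reference (true) values y."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxy = np.cov(x, y)[0, 1]
        return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean())**2)

    # Hypothetical repeated RSA readings of the machined liners (mm):
    measured = [0.01, 0.26, 0.52, 0.73, 0.02, 0.24, 0.49, 0.76]
    truth    = [0.00, 0.25, 0.50, 0.75, 0.00, 0.25, 0.50, 0.75]
    print(round(concordance_correlation(measured, truth), 4))
    ```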

  14. [Research on identification of cabbages and weeds combining spectral imaging technology and SAM taxonomy].

    PubMed

    Zu, Qin; Zhang, Shui-fa; Cao, Yang; Zhao, Hui-yi; Dang, Chang-qing

    2015-02-01

    Automatic weed identification is the key technique, and also the bottleneck, for implementing variable spraying and precision pesticide application. Accurate, rapid and non-destructive automatic identification of weeds has therefore become a very important research direction for precision agriculture. A hyperspectral imaging system was used to capture hyperspectral images of cabbage seedlings and five kinds of weeds (pigweed, barnyard grass, goosegrass, crabgrass and setaria) over the wavelength range 1000 to 2500 nm. In ENVI, the MNF rotation was used to reduce noise and de-correlate the hyperspectral data, reducing the band dimensions from 256 to 11; regions of interest were extracted to build the spectral library of standard spectra; finally, the SAM taxonomy was used to identify cabbages and weeds, with good classification results when the spectral angle threshold was set to 0.1 radians. In HSI Analyzer, after selecting the training pixels to obtain the standard spectrum, the SAM taxonomy was used to distinguish weeds from cabbages. Furthermore, in order to measure the recognition accuracy of weeds quantitatively, statistics for weeds and non-weeds were obtained by comparing the SAM classification image with the best classification effect to the manual classification image. The experimental results demonstrated that, when the parameters were set to 5-point smoothing, 0-order derivative and a 7-degree spectral angle, the best classification result was acquired, and the recognition rates of weeds, non-weeds and overall samples were 80%, 97.3% and 96.8%, respectively. The method combining spectral imaging technology and the SAM taxonomy takes full advantage of the fused spectral and image information. By applying spatial classification algorithms to establish training sets for spectral identification, checking the similarity among spectral vectors at the pixel level, combining the accuracy and speed advantages of spectra and images, and extending weed detection to the full range between and within crop rows, the method contributes relevant analysis tools to precision agricultural management applications requiring accurate plant information.
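
    As an illustration of the classification rule described above (a sketch, not the authors' ENVI/HSI Analyzer implementation), the spectral angle mapper assigns each pixel to the library spectrum with the smallest angle, subject to a threshold such as the quoted 0.1 rad:

    ```python
    import numpy as np

    def spectral_angle(pixel, reference):
        """Spectral angle (radians) between a pixel spectrum and a
        reference (library) spectrum; small angles mean similar spectra."""
        cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def sam_classify(cube, library, threshold=0.1):
        """Assign each pixel of an (H, W, bands) cube to the library class
        with the smallest spectral angle; -1 where no angle beats the
        threshold."""
        h, w, b = cube.shape
        pixels = cube.reshape(-1, b).astype(float)
        angles = np.stack([np.apply_along_axis(spectral_angle, 1, pixels, ref)
                           for ref in library], axis=1)
        best = angles.argmin(axis=1)
        best[angles.min(axis=1) > threshold] = -1
        return best.reshape(h, w)
    ```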

  15. Exact goodness-of-fit tests for Markov chains.

    PubMed

    Besag, J; Mondal, D

    2013-06-01

    Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps. © 2013, The International Biometric Society.
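
    The following is a minimal Monte Carlo sketch in the spirit of this record, but deliberately simplified: it uses a parametric bootstrap from the fitted first-order chain rather than the paper's exact tests obtained by conditioning on the sufficient statistics, and it takes Pearson's X² on triple counts as the (freely chosen) test statistic:

    ```python
    import numpy as np

    def fit_first_order(seq, n_states):
        """Maximum-likelihood transition matrix; unvisited states get a
        uniform row so simulation never stalls."""
        c = np.zeros((n_states, n_states))
        for a, b in zip(seq[:-1], seq[1:]):
            c[a, b] += 1
        rows = c.sum(axis=1, keepdims=True)
        return np.divide(c, rows, out=np.full_like(c, 1.0 / n_states), where=rows > 0)

    def second_order_x2(seq, p, n_states):
        """Pearson X^2 of triple counts against first-order expectations;
        large values signal second-order structure the null cannot explain."""
        c2 = np.zeros((n_states, n_states, n_states))
        for a, b, c in zip(seq, seq[1:], seq[2:]):
            c2[a, b, c] += 1
        x2 = 0.0
        for a in range(n_states):
            for b in range(n_states):
                tot = c2[a, b].sum()
                if tot == 0:
                    continue
                e = tot * p[b]
                m = e > 0
                x2 += ((c2[a, b][m] - e[m]) ** 2 / e[m]).sum()
        return x2

    def mc_pvalue(seq, n_states, n_sims=999, seed=0):
        """Monte Carlo p-value for the first-order null (parametric bootstrap)."""
        rng = np.random.default_rng(seed)
        p_hat = fit_first_order(seq, n_states)
        t_obs = second_order_x2(seq, p_hat, n_states)
        exceed = 0
        for _ in range(n_sims):
            s = [seq[0]]
            for _ in range(len(seq) - 1):
                s.append(rng.choice(n_states, p=p_hat[s[-1]]))
            exceed += second_order_x2(s, fit_first_order(s, n_states), n_states) >= t_obs
        return (1 + exceed) / (1 + n_sims)
    ```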

  16. Quantity of dates trumps quality of dates for dense Bayesian radiocarbon sediment chronologies - Gas ion source 14C dating instructed by simultaneous Bayesian accumulation rate modeling

    NASA Astrophysics Data System (ADS)

    Rosenheim, B. E.; Firesinger, D.; Roberts, M. L.; Burton, J. R.; Khan, N.; Moyer, R. P.

    2016-12-01

    Radiocarbon (14C) sediment core chronologies benefit from a high density of dates, even when precision of individual dates is sacrificed. This is demonstrated by a combined approach of rapid 14C analysis of CO2 gas generated from carbonates and organic material coupled with Bayesian statistical modeling. Analysis of 14C is facilitated by the gas ion source on the Continuous Flow Accelerator Mass Spectrometry (CFAMS) system at the Woods Hole Oceanographic Institution's National Ocean Sciences Accelerator Mass Spectrometry facility. This instrument is capable of producing a 14C determination of +/- 100 14C y precision every 4-5 minutes, with limited sample handling (dissolution of carbonates and/or combustion of organic carbon in evacuated containers). Rapid analysis allows over-preparation of samples to include replicates at each depth and/or comparison of different sample types at particular depths in a sediment or peat core. Analysis priority is given to depths that have the least chronologic precision as determined by Bayesian modeling of the chronology of calibrated ages. Use of such a statistical approach to determine the order in which samples are run ensures that the chronology constantly improves so long as material is available for the analysis of chronologic weak points. Ultimately, accuracy of the chronology is determined by the material that is actually being dated, and our combined approach allows testing of different constituents of the organic carbon pool and the carbonate minerals within a core. We will present preliminary results from a deep-sea sediment core abundant in deep-sea foraminifera as well as coastal wetland peat cores to demonstrate statistical improvements in sediment- and peat-core chronologies obtained by increasing the quantity and decreasing the quality of individual dates.
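
    The sample-prioritization step ("analysis priority is given to depths that have the least chronologic precision") lends itself to a simple sketch; the interface below is hypothetical and assumes posterior age draws per depth are already available from the Bayesian accumulation model:

    ```python
    import numpy as np

    def next_depth_to_date(age_draws, depths, lo=2.5, hi=97.5):
        """Pick the depth whose posterior age is least precise.

        age_draws : array of shape (n_depths, n_posterior_draws) of modeled
                    calendar ages (hypothetical output of a Bayesian
                    accumulation model).
        Returns the depth with the widest credible interval and that width.
        """
        width = np.percentile(age_draws, hi, axis=1) - np.percentile(age_draws, lo, axis=1)
        i = int(np.argmax(width))
        return depths[i], width[i]
    ```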

  17. Improved identification of noun phrases in clinical radiology reports using a high-performance statistical natural language parser augmented with the UMLS specialist lexicon.

    PubMed

    Huang, Yang; Lowe, Henry J; Klein, Dan; Cucina, Russell J

    2005-01-01

    The aim of this study was to develop and evaluate a method of extracting noun phrases with full phrase structures from a set of clinical radiology reports using natural language processing (NLP) and to investigate the effects of using the UMLS(R) Specialist Lexicon to improve noun phrase identification within clinical radiology documents. The noun phrase identification (NPI) module is composed of a sentence boundary detector, a statistical natural language parser trained on a nonmedical domain, and a noun phrase (NP) tagger. The NPI module processed a set of 100 XML-represented clinical radiology reports in Health Level 7 (HL7)(R) Clinical Document Architecture (CDA)-compatible format. Computed output was compared with manual markups made by four physicians and one author for maximal (longest) NP and those made by one author for base (simple) NP, respectively. An extended lexicon of biomedical terms was created from the UMLS Specialist Lexicon and used to improve NPI performance. The test set was 50 randomly selected reports. The sentence boundary detector achieved 99.0% precision and 98.6% recall. The overall maximal NPI precision and recall were 78.9% and 81.5% before using the UMLS Specialist Lexicon and 82.1% and 84.6% after. The overall base NPI precision and recall were 88.2% and 86.8% before using the UMLS Specialist Lexicon and 93.1% and 92.6% after, reducing false-positives by 31.1% and false-negatives by 34.3%. The sentence boundary detector performs excellently. After the adaptation using the UMLS Specialist Lexicon, the statistical parser's NPI performance on radiology reports increased to levels comparable to the parser's native performance in its newswire training domain and to that reported by other researchers in the general nonmedical domain.
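
    For reference, the figures reported here are the standard precision/recall metrics; a trivial helper (the counts in the example are hypothetical, not the study's):

    ```python
    def precision_recall(tp, fp, fn):
        """Precision and recall from counts of true positives, false
        positives (spurious NPs), and false negatives (missed NPs)."""
        return tp / (tp + fp), tp / (tp + fn)

    # e.g., 930 correctly identified NPs, 69 spurious, 74 missed:
    p, r = precision_recall(930, 69, 74)
    print(f"precision={p:.1%} recall={r:.1%}")
    ```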

  18. Analysis of video-recorded images to determine linear and angular dimensions in the growing horse.

    PubMed

    Hunt, W F; Thomas, V G; Stiefel, W

    1999-09-01

    Studies of growth and conformation require statistical methods that are not applicable to subjective conformation standards used by breeders and trainers. A new system was developed to provide an objective approach for both science and industry, based on analysis of video images to measure aspects of conformation that were represented by angles or lengths. A studio crush was developed in which video images of horses of different sizes were taken after bone protuberances, located by palpation, were marked with white paper stickers. Screen pixel coordinates of calibration marks, bone markers and points on horse outlines were digitised from captured images and corrected for aspect ratio and 'fish-eye' lens effects. Calculations from the corrected coordinates produced linear dimensions and angular dimensions useful for comparison of horses for conformation and experimental purposes. The precision achieved by the method in determining linear and angular dimensions was examined through systematically determining variance for isolated steps of the procedure. Angles of the front limbs viewed from in front were determined with a standard deviation of 2-5 degrees and effects of viewing angle were detectable statistically. The height of the rump and wither were determined with precision closely related to the limitations encountered in locating a point on a screen, which was greater for markers applied to the skin than for points at the edge of the image. Parameters determined from markers applied to the skin were, however, more variable (because their relation to bone position was affected by movement), but still provided a means by which a number of aspects of size and conformation can be determined objectively for many horses during growth. Sufficient precision was achieved to detect statistically relatively small effects on calculated parameters of camera height position.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Andrew J.; Fast, James E.; Fulsom, Bryan G.

    For many nuclear material safeguards inspections, spectroscopic gamma detectors are required which can achieve high event rates (in excess of 10^6 s^-1) while maintaining very good energy resolution for discrimination of neighboring gamma signatures in complex backgrounds. Such spectra can be useful for non-destructive assay (NDA) of spent nuclear fuel with long cooling times, which contains many potentially useful low-rate gamma lines, e.g., Cs-134, in the presence of a few dominating gamma lines, such as Cs-137. Detectors in use typically sacrifice energy resolution for count rate, e.g., LaBr3, or vice versa, e.g., CdZnTe. In contrast, we anticipate that beginning with a detector with high energy resolution, e.g., high-purity germanium (HPGe), and adapting the data acquisition for high throughput will be able to achieve the goals of the ideal detector. In this work, we present quantification of Cs-134 and Cs-137 activities, useful for fuel burn-up quantification, in fuel that has been cooling for 22.3 years. A segmented, planar HPGe detector is used for this inspection, which has been adapted for a high-rate throughput in excess of 500k counts/s. Using a very-high-statistics spectrum of 2.4×10^11 counts, isotope activities can be determined with very low statistical uncertainty. However, it is determined that systematic uncertainties dominate in such a data set, e.g., the uncertainty in the pulse line shape. This spectrum offers a unique opportunity to quantify this uncertainty and subsequently determine required counting times for given precision on values of interest.

  20. Epistasis and the Structure of Fitness Landscapes: Are Experimental Fitness Landscapes Compatible with Fisher’s Geometric Model?

    PubMed Central

    Blanquart, François; Bataillon, Thomas

    2016-01-01

    The fitness landscape defines the relationship between genotypes and fitness in a given environment and underlies fundamental quantities such as the distribution of selection coefficients and the magnitude and type of epistasis. A better understanding of variation in landscape structure across species and environments is thus necessary to understand and predict how populations will adapt. An increasing number of experiments investigate the properties of fitness landscapes by identifying mutations, constructing genotypes with combinations of these mutations, and measuring the fitness of these genotypes. Yet these empirical landscapes represent a very small sample of the vast space of all possible genotypes, and this sample is often biased by the protocol used to identify mutations. Here we develop a rigorous statistical framework based on Approximate Bayesian Computation to address these concerns and use this flexible framework to fit a broad class of phenotypic fitness models (including Fisher's model) to 26 empirical landscapes representing nine diverse biological systems. Despite uncertainty owing to the small size of most published empirical landscapes, the inferred landscapes have similar structure in similar biological systems. Surprisingly, goodness-of-fit tests reveal that this class of phenotypic models, which has been successful so far in interpreting experimental data, is plausible in only three of nine biological systems. More precisely, although Fisher's model was able to explain several statistical properties of the landscapes (including the mean and SD of selection and epistasis coefficients), it was often unable to explain the full structure of fitness landscapes. PMID:27052568

  1. Enhanced sensitivity for Os isotope ratios by magnetic sector ICP-MS with a capacitive decoupling Pt guard electrode.

    PubMed

    Townsend, A T

    2000-08-01

    A magnetic sector ICP-MS with enhanced sensitivity was used to measure Os isotope ratios in solutions of low Os concentration (approximately 1 ng g(-1) or less). Ratios with 192Os as the basis were determined, while the geologically useful 187Os/188Os ratio was also measured. Sample introduction was via the traditional nebuliser-spray chamber method. A capacitive decoupling Pt shield torch was developed "in-house" and was found to increase Os signals by approximately 5× under "moderate" plasma conditions (1050 W) over that found during normal operation (1250 W). Sensitivity using the guard electrode for 192Os was approximately 250-350,000 counts s(-1) per ng g(-1) Os. For a 1 ng g(-1) Os solution with no guard electrode, precisions of the order of 0.2-0.3% (189Os/192Os and 190Os/192Os) to approximately 1% or greater (186Os/192Os, 187Os/192Os and 187Os/188Os) were found (values as 1 sigma for n = 10). With the guard electrode in use, ratio precisions were found to improve to 0.2 to 0.8%. The total amount of Os used in the acquisition of this data was approximately 2.5 ng per measurement per replicate. At the higher concentration of 10 ng g(-1), precisions of the order of 0.15-0.3% were measured (for all ratios), irrespective of whether the shield torch was used. Ratio accuracy was confirmed by comparison with independently obtained NTIMS data. For both Os concentrations considered, the improvement in precision offered by the guard electrode (if any) was small in comparison to calculated theoretical values based on Poisson counting statistics, suggesting noise contributions from other sources (such as the sample introduction system, plasma flicker, etc.). At lower Os concentrations (to 100 pg g(-1)) no appreciable loss of ratio accuracy was observed, although as expected based on counting statistics, poorer precisions of the order of 0.45-3% (1 sigma, n = 5) were noted. Re was found to have a detrimental effect on the precision of Os ratios involving 187Os, indicating that separation of Re and Os samples is a necessary prerequisite for highly accurate and precise Os isotope ratio measurements.
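
    The Poisson-limited comparison mentioned above follows from counting statistics: for a ratio R = N_a/N_b of two independent Poisson counts, the relative standard deviation is sqrt(1/N_a + 1/N_b). A small sketch with hypothetical count rates and dwell times (not the paper's exact acquisition parameters):

    ```python
    import numpy as np

    def poisson_ratio_rsd(rate_a, rate_b, dwell_s):
        """Theoretical 1-sigma relative precision of the ratio N_a/N_b
        when both channels are Poisson-limited."""
        na, nb = rate_a * dwell_s, rate_b * dwell_s
        return np.sqrt(1 / na + 1 / nb)

    # Hypothetical numbers: 192Os at 3e5 counts/s for a 1 ng/g solution,
    # 187Os at roughly 0.048 of that (approximate natural abundance
    # ratio), 10 s of integration per isotope:
    print(f"{poisson_ratio_rsd(3e5 * 0.048, 3e5, 10):.3%}")
    ```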

  2. An indirect approach to the extensive calculation of relationship coefficients

    PubMed Central

    Colleau, Jean-Jacques

    2002-01-01

    A method was described for calculating population statistics on relationship coefficients without using corresponding individual data. It relied on the structure of the inverse of the numerator relationship matrix between individuals under investigation and ancestors. Computation times were observed on simulated populations and were compared to those incurred with a conventional direct approach. The indirect approach turned out to be very efficient for multiplying the relationship matrix corresponding to planned matings (full design) by any vector. Efficiency was generally still good or very good for calculating statistics on these simulated populations. An extreme implementation of the method is the calculation of inbreeding coefficients themselves. Relative performances of the indirect method were good except when many full-sibs during many generations existed in the population. PMID:12270102
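
    The core computational trick (multiplying the dense relationship matrix A by a vector while only ever storing its sparse inverse) can be sketched as below; constructing the sparse A⁻¹ from the pedigree via Henderson's rules is assumed already done:

    ```python
    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import spsolve

    def relationship_times_vector(a_inv, v):
        """Compute A @ v without ever forming the dense relationship
        matrix A. Since A^{-1} is sparse and A^{-1}(A v) = v, the
        product A v is the solution x of the sparse system
        A^{-1} x = v."""
        return spsolve(csc_matrix(a_inv), np.asarray(v, dtype=float))
    ```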

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nahar, Sultana N., E-mail: nahar@astronomy.ohio-state.edu

    The atomic parameters (oscillator strengths, line strengths, radiative decay rates A, and lifetimes) for fine structure transitions of electric dipole (E1) type for the astrophysically abundant ion Ne IV are presented. The results include 868 fine structure levels with n ≤ 10, l ≤ 9, and 1/2 ≤ J ≤ 19/2 of even and odd parities, and the corresponding 83,767 E1 transitions. The calculations were carried out using the relativistic Breit-Pauli R-matrix method in the close coupling approximation. The transitions have been identified spectroscopically using an algorithm based on quantum defect analysis and other criteria. The calculated energies agree with the 103 observed and identified energies to within 3% or better for most of the levels. Some larger differences are also noted. The A-values show good to fair agreement with the very limited number of available transitions in the table compiled by NIST, but show very good agreement with the latest published multi-configuration Hartree-Fock calculations. The present transitions should be useful for diagnostics as well as for precise and complete spectral modeling in the soft X-ray to infra-red regions of astrophysical and laboratory plasmas. -- Highlights: •The first application of the BPRM method for accurate E1 transitions in Ne IV is reported. •The amount of atomic data (n going up to 10) is complete for most practical applications. •The calculated energies are in very good agreement with most observed levels. •Very good agreement of A-values and lifetimes with other relativistic calculations. •The results should provide precise nebular abundances, chemical evolution etc.

  4. High Astrometric Precision in the Calculation of the Coordinates of Orbiters in the GEO Ring

    NASA Astrophysics Data System (ADS)

    Lacruz, E.; Abad, C.; Downes, J. J.; Hernández-Pérez, F.; Casanova, D.; Tresaco, E.

    2018-04-01

    We present an astrometric method for the calculation of the positions of orbiters in the GEO ring with a high precision, through a rigorous astrometric treatment of observations with a 1-m class telescope, which are part of the CIDA survey of the GEO ring. We compute the distortion pattern to correct for the systematic errors introduced by the optics and electronics of the telescope, resulting in absolute mean errors of 0.16″ and 0.12″ in right ascension and declination, respectively. These correspond to ≈25 m at the mean distance of the GEO ring, and are thus good quality results.

  5. Role of urodynamics in stress urinary incontinence: A critical appraisal

    PubMed Central

    Yande, Shirish Dattatraya; Joglekar, Omkar Vinay; Joshi, Maya

    2016-01-01

    Introduction: The role of urodynamics prior to surgery for stress urinary incontinence (SUI) is under constant debate. Demonstration of the presence of detrusor overactivity is the only aspect that has been emphasized in the literature so far. We believe that there are a number of other factors which may influence the evaluation and in turn the choice of surgical management and prediction of outcome of treatment. They are as follows: (1) presence of voiding inefficiency, (2) asymptomatic detrusor overactivity, and (3) severity of SUI. These features may complicate the precise evaluation of patients with SUI. The main objective of this study is to analyze the dynamics of leakage and voiding using urodynamics. This study also aims at correlating these findings with clinical information. Materials and Methods: One hundred consecutive cases referred to our center for preoperative evaluation of SUI were recruited in the study prospectively. All patients were interrogated using the International Consultation on Incontinence Questionnaire. All patients underwent complete urodynamic evaluation including uroflowmetry, filling cystometry, leak point pressure measurement, and pressure flow studies, according to Good Urodynamic Practice guidelines. Patients’ symptoms were correlated with urodynamic findings, with special emphasis on the presence of detrusor overactivity, severity of SUI, voiding efficiency, and presence of bladder outlet obstruction. Clinical information and urodynamic findings were correlated using the Chi-square test. Results: There is a statistically significant correlation between the presence of symptoms of urge urinary incontinence and urodynamic findings of detrusor overactivity at P < 0.05. There is a statistically significant correlation between the symptoms of urge incontinence (in addition to SUI) and urodynamic findings of intrinsic sphincter deficiency at P < 0.05. Fifteen of 51 patients who did not have associated storage symptoms were found to have some degree of detrusor overactivity on urodynamic evaluation. There was no statistically significant correlation between asymptomatic cases of urge incontinence and the incidental finding of detrusor overactivity at P < 0.05. There was no statistically significant correlation between symptoms of voiding dysfunction and urodynamic findings suggestive of the same at P < 0.05. Conclusions: Urodynamic study in SUI has the potential to give much more information than demonstration of detrusor overactivity alone. The predominant symptom of urge urinary incontinence can predictably diagnose detrusor overactivity in these cases. However, the incidence of asymptomatic detrusor overactivity remains as high as 15% and may have implications for postoperative results. This study clearly shows that there is a definite incidence of significant voiding dysfunction, which cannot be reliably evaluated without a properly conducted pressure flow study. This factor may govern the choice of correct treatment, which also predicts the outcome more reliably. Preoperative urodynamic study thus adds a dimension of precision to the evaluation of patients with SUI and may also influence technique and outcome measures in this group of patients. PMID:27721639

  6. Developmental Risk and Goodness of Fit in the Mother-Child Relationship: Links to Parenting Stress and Children's Behaviour Problems.

    PubMed

    Newland, Rebecca P; Crnic, Keith A

    2017-01-01

    Despite the compelling nature of goodness of fit, empirical support has lagged for this construct. The present study examined an interactional approach to measuring goodness of fit and prospectively explored associations with mother-child relationship quality, child behavior problems, and parenting stress across the preschool period. In addition, as goodness of fit might be particularly important for children at developmental risk, the presence of early developmental delay was considered as a moderator of goodness of fit processes. Children with (n = 110) and without (n = 137) developmental delays and their mothers were coded while interacting in the lab at child age 36 months and during naturalistic home observations at child ages 36 and 48 months. Mothers also completed questionnaires at child age 60 months. Results highlight the effects of child developmental risk as a moderator of mother-child goodness of fit processes across the preschool period. There was also evidence that the goodness of fit between maternal scaffolding and child activity level at 36 months influenced both mother and child functioning at 60 months. Findings call for more precise models and expanded developmental perspectives to fully capture the transactional and dynamic nature of goodness of fit.

  7. Precision Measurements of Solar Energetic Particle Elemental Composition

    NASA Technical Reports Server (NTRS)

    Breneman, H.; Stone, E. C.

    1985-01-01

    Data from the Cosmic Ray Subsystem (CRS) aboard the Voyager 1 and 2 spacecraft were used to determine solar energetic particle abundances or upper limits for all elements with Z ≤ 30 from a combined set of 10 solar flares during the 1977 to 1982 time period. Statistically meaningful abundances were determined for several rare elements including P, Cl, K, Ti and Mn, while the precision of the mean abundances for the more abundant elements was improved. When compared to solar photospheric spectroscopic abundances, these new SEP abundances more clearly exhibit the step-function dependence on first ionization potential previously reported.

  8. Automatic Bone Drilling - More Precise, Reliable and Safe Manipulation in the Orthopaedic Surgery

    NASA Astrophysics Data System (ADS)

    Boiadjiev, George; Kastelov, Rumen; Boiadjiev, Tony; Delchev, Kamen; Zagurski, Kazimir

    2016-06-01

    Bone drilling manipulation often occurs in orthopaedic surgery. Statistics show that about one million people in Europe alone need such an operation every year, where bone implants are inserted. Almost always, the drilling is performed by hand, which cannot avoid the influence of subjective factors. The question of reducing this subjective factor has its answer: automatic bone drilling. The specific features and problems of the orthopaedic drilling manipulation are considered in this work. Automatic drilling is presented through the capabilities of the robotized system Orthopaedic Drilling Robot (ODRO) for assuring manipulation accuracy, precision, reliability and safety.

  9. Provably unbounded memory advantage in stochastic simulation using quantum mechanics

    NASA Astrophysics Data System (ADS)

    Garner, Andrew J. P.; Liu, Qing; Thompson, Jayne; Vedral, Vlatko; Gu, Mile

    2017-10-01

    Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.

  10. 1998 Conference on Precision Electromagnetic Measurements Digest. Proceedings.

    NASA Astrophysics Data System (ADS)

    Nelson, T. L.

    The following topics were dealt with: fundamental constants; caesium standards; AC-DC transfer; impedance measurement; length measurement; units; statistics; cryogenic resonators; time transfer; QED; resistance scaling and bridges; mass measurement; atomic fountains and clocks; single electron transport; Newtonian constant of gravitation; stabilised lasers and frequency measurements; cryogenic current comparators; optical frequency standards; high voltage devices and systems; international compatibility; magnetic measurement; precision power measurement; high resolution spectroscopy; DC transport standards; waveform acquisition and analysis; ion trap standards; optical metrology; quantised Hall effect; Josephson array comparisons; signal generation and measurement; Avogadro constant; microwave networks; wideband power standards; antennas, fields and EMC; quantum-based standards.

  11. Identifying the Source of Misfit in Item Response Theory Models.

    PubMed

    Liu, Yang; Maydeu-Olivares, Alberto

    2014-01-01

    When an item response theory model fails to fit adequately, the items for which the model provides a good fit and those for which it does not must be determined. To this end, we compare the performance of several fit statistics for item pairs with known asymptotic distributions under maximum likelihood estimation of the item parameters: (a) a mean and variance adjustment to bivariate Pearson's X², (b) a bivariate subtable analog to Reiser's (1996) overall goodness-of-fit test, (c) a z statistic for the bivariate residual cross product, and (d) Maydeu-Olivares and Joe's (2006) M2 statistic applied to bivariate subtables. The unadjusted Pearson's X² with heuristically determined degrees of freedom is also included in the comparison. For binary and ordinal data, our simulation results suggest that the z statistic has the best Type I error and power behavior among all the statistics under investigation when the observed information matrix is used in its computation. However, if one has to use the cross-product information, the mean and variance adjusted X² is recommended. We illustrate the use of pairwise fit statistics in 2 real-data examples and discuss possible extensions of the current research in various directions.
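
    For orientation, the building block behind these pairwise statistics is the unadjusted bivariate Pearson X² for one item pair; a minimal sketch for binary items (the model-implied probabilities are assumed supplied by the fitted IRT model, which is outside this snippet):

    ```python
    import numpy as np

    def bivariate_x2(responses_i, responses_j, pi_model):
        """Unadjusted Pearson X^2 for one pair of binary items.

        responses_i, responses_j : 0/1 response arrays for the two items.
        pi_model : 2x2 array of model-implied cell probabilities (all > 0),
                   e.g., obtained by integrating the fitted IRT model over
                   the latent trait.
        """
        n = len(responses_i)
        obs = np.zeros((2, 2))
        for a, b in zip(responses_i, responses_j):
            obs[a, b] += 1
        p_obs = obs / n
        return n * ((p_obs - pi_model) ** 2 / pi_model).sum()
    ```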

  12. Speeding up the Consensus Clustering methodology for microarray data analysis

    PubMed Central

    2011-01-01

    Background The inference of the number of clusters in a dataset, a fundamental problem in Statistics, Data Analysis and Classification, is usually addressed via internal validation measures. The stated problem is quite difficult, in particular for microarrays, since the inferred prediction must be sensible enough to capture the inherent biological structure in a dataset, e.g., functionally related genes. Despite the rich literature present in that area, the identification of an internal validation measure that is both fast and precise has proved to be elusive. In order to partially fill this gap, we propose a speed-up of Consensus (Consensus Clustering), a methodology whose purpose is the provision of a prediction of the number of clusters in a dataset, together with a dissimilarity matrix (the consensus matrix) that can be used by clustering algorithms. As detailed in the remainder of the paper, Consensus is a natural candidate for a speed-up. Results Since the time-precision performance of Consensus depends on two parameters, our first task is to show that a simple adjustment of the parameters is not enough to obtain a good precision-time trade-off. Our second task is to provide a fast approximation algorithm for Consensus. That is, we provide the closely related algorithm FC (Fast Consensus), which has the same precision as Consensus with substantially better time performance. The performance of FC has been assessed via extensive experiments on twelve benchmark datasets that summarize key features of microarray applications, such as cancer studies, gene expression with up and down patterns, and a full spectrum of dimensionality up to over a thousand. Based on their outcome, compared with previous benchmarking results available in the literature, FC turns out to be among the fastest internal validation methods, while retaining the same outstanding precision of Consensus. Moreover, it also provides a consensus matrix that can be used as a dissimilarity matrix, guaranteeing the same performance as the corresponding matrix produced by Consensus. We have also experimented with the use of Consensus and FC in conjunction with NMF (Nonnegative Matrix Factorization), in order to identify the correct number of clusters in a dataset. Although NMF is an increasingly popular technique for biological data mining, our results are somewhat disappointing and complement quite well the state of the art about NMF, shedding further light on its merits and limitations. Conclusions In summary, FC with a parameter setting that makes it robust with respect to small and medium-sized datasets, i.e., number of items to cluster in the hundreds and number of conditions up to a thousand, seems to be the internal validation measure of choice. Moreover, the technique we have developed here can be used in other contexts, in particular for the speed-up of stability-based validation measures. PMID:21235792
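
    To make the underlying methodology concrete, here is a minimal sketch of building a consensus matrix by repeated subsampling and clustering (illustrative only; it uses scikit-learn's KMeans as the base clusterer, and is not the paper's FC algorithm):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def consensus_matrix(X, k, n_resamples=100, frac=0.8, seed=0):
        """Consensus matrix for k clusters: entry (i, j) is the fraction
        of resamples containing both items in which they co-clustered."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        together = np.zeros((n, n))
        both = np.zeros((n, n))
        for _ in range(n_resamples):
            idx = rng.choice(n, size=int(frac * n), replace=False)
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X[idx])
            for a in range(len(idx)):
                for b in range(a + 1, len(idx)):
                    i, j = idx[a], idx[b]
                    both[i, j] += 1
                    if labels[a] == labels[b]:
                        together[i, j] += 1
        m = np.where(both > 0, together / np.maximum(both, 1), 0.0)
        return np.maximum(m, m.T)  # symmetrize; 1 - m is a dissimilarity
    ```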

  13. Validity and usability of a safe driving behaviors measure for older adults : strategy for congestion mitigation.

    DOT National Transportation Integrated Search

    2012-01-01

    Statistics project that crash/injury/fatality rates of older drivers will increase with the future growth of : this population. Accurate and precise measurement of older driver behaviors becomes imperative to : curtail these crash trends and resultin...

  14. AMES Stereo Pipeline Derived DEM Accuracy Experiment Using LROC-NAC Stereopairs and Weighted Spatial Dependence Simulation for Lunar Site Selection

    NASA Astrophysics Data System (ADS)

    Laura, J. R.; Miller, D.; Paul, M. V.

    2012-03-01

    An accuracy assessment of AMES Stereo Pipeline derived DEMs for lunar site selection using weighted spatial dependence simulation and a call for outside AMES derived DEMs to facilitate a statistical precision analysis.

  15. METHODS OF DEALING WITH VALUES BELOW THE LIMIT OF DETECTION USING SAS

    EPA Science Inventory

    Due to limitations of chemical analysis procedures, small concentrations cannot be precisely measured. These concentrations are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such ...

  16. The lawful imprecision of human surface tilt estimation in natural scenes

    PubMed Central

    2018-01-01

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. PMID:29384477

  17. The lawful imprecision of human surface tilt estimation in natural scenes.

    PubMed

    Kim, Seha; Burge, Johannes

    2018-01-31

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. © 2018, Kim et al.

  18. The fuzzy algorithm in the die casting mould for the application of multi-channel temperature control

    NASA Astrophysics Data System (ADS)

    Sun, Jin-gen; Chen, Yi; Zhang, Jia-nan

    2017-01-01

    Mould manufacturing is one of the most basic elements in the production chain of China. Mould manufacturing technology has become an important symbol for measuring the level of a country's manufacturing industry. A multi-channel intelligent temperature control method for die-casting moulds, based on cooling-water circulation and fuzzy control, is studied here, aiming at the slow response and high energy consumption of the cooling process of current die-casting moulds. At present, the traditional PID control method is used to control the temperature, but it is difficult to ensure the control precision. In contrast, the fuzzy algorithm achieves precise control of the mould temperature during cooling. The design is simple, with fast response, strong anti-interference ability and good robustness. Simulation results show that the control method is completely feasible and has higher control precision.
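
    A toy sketch of the kind of rule-based controller described (the membership functions, rules, and temperature ranges are invented for illustration and are not taken from the paper):

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function on [a, c] peaking at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def fuzzy_valve_opening(error_C):
        """Map mould-temperature error (measured - setpoint, in °C) to a
        cooling-water valve opening in [0, 1] using Mamdani-style rules
        with weighted-average defuzzification."""
        # Rule strengths: error negative (too cold), near zero, positive (too hot)
        neg  = tri(error_C, -40, -20, 0)
        zero = tri(error_C, -10, 0, 10)
        pos  = tri(error_C, 0, 20, 40)
        strengths = np.array([neg, zero, pos])
        outputs   = np.array([0.0, 0.5, 1.0])  # close valve, hold, open fully
        if strengths.sum() == 0:               # saturate outside the universe
            return 0.0 if error_C < 0 else 1.0
        return float(strengths @ outputs / strengths.sum())

    for e in (-30, -5, 0, 8, 25):
        print(e, round(fuzzy_valve_opening(e), 2))
    ```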

  19. Gradated assembly of multiple proteins into supramolecular nanomaterials

    NASA Astrophysics Data System (ADS)

    Hudalla, Gregory A.; Sun, Tao; Gasiorowski, Joshua Z.; Han, Huifang; Tian, Ye F.; Chong, Anita S.; Collier, Joel H.

    2014-08-01

    Biomaterials exhibiting precise ratios of different bioactive protein components are critical for applications ranging from vaccines to regenerative medicine, but their design is often hindered by limited choices and cross-reactivity of protein conjugation chemistries. Here, we describe a strategy for inducing multiple different expressed proteins of choice to assemble into nanofibres and gels with exceptional compositional control. The strategy employs ‘βTail’ tags, which allow for good protein expression in bacteriological cultures, yet can be induced to co-assemble into nanomaterials when mixed with additional β-sheet fibrillizing peptides. Multiple different βTail fusion proteins could be inserted into peptide nanofibres alone or in combination at predictable, smoothly gradated concentrations, providing a simple yet versatile route to install precise combinations of proteins into nanomaterials. The technology is illustrated by achieving precisely targeted hues using mixtures of fluorescent proteins, by creating nanofibres bearing enzymatic activity, and by adjusting antigenic dominance in vaccines.

  20. High channel count and high precision channel spacing multi-wavelength laser array for future PICs.

    PubMed

    Shi, Yuechun; Li, Simin; Chen, Xiangfei; Li, Lianyan; Li, Jingsi; Zhang, Tingting; Zheng, Jilin; Zhang, Yunshan; Tang, Song; Hou, Lianping; Marsh, John H; Qiu, Bocang

    2014-12-09

    Multi-wavelength semiconductor laser arrays (MLAs) have wide applications in wavelength division multiplexing (WDM) networks. In spite of their tremendous potential, adoption of the MLA has been hampered by a number of issues, particularly wavelength precision and fabrication cost. In this paper, we report high channel count MLAs in which the wavelengths of each channel can be determined precisely through low-cost standard μm-level photolithography/holographic lithography and the reconstruction-equivalent-chirp (REC) technique. 60-wavelength MLAs with good wavelength spacing uniformity have been demonstrated experimentally, in which nearly 83% of the lasers are within a wavelength deviation of ±0.20 nm, corresponding to a tolerance of ±0.032 nm in the period pitch. As a result of employing the equivalent phase shift technique, the single longitudinal mode (SLM) yield is nearly 100%, while the theoretical yield of standard DFB lasers is only around 33.3%.
