NASA Astrophysics Data System (ADS)
Rounds, S. A.; Sullivan, A. B.
2004-12-01
Assessing a model's ability to reproduce field data is a critical step in the modeling process. For any model, some method of determining goodness-of-fit to measured data is needed to aid in calibration and to evaluate model performance. Visualizations and graphical comparisons of model output are an excellent way to begin that assessment. At some point, however, model performance must be quantified. Goodness-of-fit statistics, including the mean error (ME), mean absolute error (MAE), root mean square error, and coefficient of determination, typically are used to measure model accuracy. Statistical tools such as the sign test or Wilcoxon test can be used to test for model bias. The runs test can detect phase errors in simulated time series. Each statistic is useful, but each has its limitations. None provides a complete quantification of model accuracy. In this study, a suite of goodness-of-fit statistics was applied to a model of Henry Hagg Lake in northwest Oregon. Hagg Lake is a man-made reservoir on Scoggins Creek, a tributary to the Tualatin River. Located on the west side of the Portland metropolitan area, the Tualatin Basin is home to more than 450,000 people. Stored water in Hagg Lake helps to meet the agricultural and municipal water needs of that population. Future water demands have caused water managers to plan for a potential expansion of Hagg Lake, doubling its storage to roughly 115,000 acre-feet. A model of the lake was constructed to evaluate the lake's water quality and estimate how that quality might change after raising the dam. The laterally averaged, two-dimensional, U.S. Army Corps of Engineers model CE-QUAL-W2 was used to construct the Hagg Lake model. Calibrated for the years 2000 and 2001 and confirmed with data from 2002 and 2003, modeled parameters included water temperature, ammonia, nitrate, phosphorus, algae, zooplankton, and dissolved oxygen. Several goodness-of-fit statistics were used to quantify model accuracy and bias. Model performance was judged to be excellent for water temperature (annual ME: -0.22 to 0.05 °C; annual MAE: 0.62 to 0.68 °C) and dissolved oxygen (annual ME: -0.28 to 0.18 mg/L; annual MAE: 0.43 to 0.92 mg/L), showing that the model is sufficiently accurate for future water resources planning and management.
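As an illustration of the statistics named above, here is a minimal Python sketch (not the authors' code) that computes ME, MAE, RMSE and the coefficient of determination for paired observed/simulated series, plus sign and Wilcoxon tests for bias; it assumes SciPy ≥ 1.7 for `binomtest`.

```python
# Goodness-of-fit statistics for paired observed/simulated values
# (illustrative sketch, not the study's implementation).
import numpy as np
from scipy import stats

def fit_statistics(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    errors = simulated - observed

    me = errors.mean()                     # mean error (bias)
    mae = np.abs(errors).mean()            # mean absolute error
    rmse = np.sqrt((errors ** 2).mean())   # root mean square error
    # Coefficient of determination relative to the observed mean.
    ss_res = np.sum(errors ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot

    # Sign test for bias: under an unbiased model, positive and
    # negative errors should be equally likely.
    n_pos = int(np.sum(errors > 0))
    n_nonzero = int(np.sum(errors != 0))
    sign_p = stats.binomtest(n_pos, n_nonzero, 0.5).pvalue

    # Wilcoxon signed-rank test as a second check for bias.
    wilcoxon_p = stats.wilcoxon(errors).pvalue
    return {"ME": me, "MAE": mae, "RMSE": rmse, "R2": r2,
            "sign_p": sign_p, "wilcoxon_p": wilcoxon_p}
```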
2016-12-01
KS and AD Statistical Power via Monte Carlo Simulation. Statistical power is the probability of correctly rejecting the null hypothesis when the... Determining the Statistical Power... real-world data to test the accuracy of the simulation. Statistical comparison of these metrics can be necessary when making such a determination.
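The excerpt above concerns estimating the statistical power of goodness-of-fit tests by Monte Carlo simulation. A minimal sketch of the idea for the Kolmogorov-Smirnov test, with an assumed normal-shift alternative:

```python
# Monte Carlo estimate of KS test power: the fraction of simulated
# datasets (drawn from a shifted alternative) in which the test
# correctly rejects the null at level alpha.
import numpy as np
from scipy import stats

def ks_power(n=50, shift=0.5, alpha=0.05, n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        sample = rng.normal(loc=shift, scale=1.0, size=n)
        # Null hypothesis: the sample comes from a standard normal.
        _, p = stats.kstest(sample, "norm")
        rejections += p < alpha
    return rejections / n_sim

print(ks_power())  # power approaches 1 as n or the shift grows
```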
Karzmark, Peter; Deutsch, Gayle K
2018-01-01
This investigation was designed to determine the predictive accuracy of a comprehensive neuropsychological and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics that included measures of sensitivity, specificity, positive and negative predictive power and positive likelihood ratio were calculated for both types of batteries. The sample was drawn from a general neurological group of adults (n = 117) that included a number of older participants (age >55; n = 38). Standardized neuropsychological assessments were administered to all participants and comprised the Halstead Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III. A comprehensive test battery yielded a moderate increase over base rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery, for although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
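The accuracy statistics listed above follow directly from a 2x2 table of test outcome against the gold standard; a small sketch with hypothetical counts:

```python
# Diagnostic accuracy statistics from a 2x2 confusion table
# (counts below are hypothetical, for illustration only).
def accuracy_stats(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                       # positive predictive power
    npv = tn / (tn + fn)                       # negative predictive power
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    return sensitivity, specificity, ppv, npv, lr_pos

print(accuracy_stats(tp=30, fp=10, fn=5, tn=72))
```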
A Kill is a Kill: Asymmetrically Attacking U.S. Airpower
1999-06-01
genius behind the Tet offensive. Until this point in the Vietnam War, Americans had been fed a steady diet of good news and compelling statistics; and... in Air and Space Operations Coursebook, Course Dir. Maj Andre Provoncha (Maxwell AFB, Ala.: Air Command and Staff College, 1998), 362. ...this altitude sanctuary comes at the expense of bombing accuracy. Gordon and Trainor, 249-250. All statistics taken from Chaim Herzog, The Arab
Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses
Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy
2015-01-01
Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning. PMID:26086579
Deep learning and non-negative matrix factorization in recognition of mammograms
NASA Astrophysics Data System (ADS)
Swiderski, Bartosz; Kurek, Jaroslaw; Osowski, Stanislaw; Kruk, Michal; Barhoumi, Walid
2017-02-01
This paper presents a novel approach to the recognition of mammograms. The analyzed mammograms represent normal and breast cancer (benign and malignant) cases. The solution applies the deep learning technique to image recognition. To increase the classification accuracy, non-negative matrix factorization and statistical self-similarity of images are applied. The images reconstructed using these two approaches enrich the database and thereby improve the quality measures of mammogram recognition (increased accuracy, sensitivity and specificity). The results of numerical experiments performed on the large DDSM database containing more than 10,000 mammograms have confirmed good class-recognition accuracy, exceeding the best results reported in recent publications for this database.
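A rough sketch of the NMF-based augmentation idea described above (the statistical self-similarity step is omitted, and the data, rank and shapes are assumptions, not the authors' settings):

```python
# Enrich a training set with low-rank NMF reconstructions of the
# original images (illustrative sketch of the augmentation idea).
import numpy as np
from sklearn.decomposition import NMF

def nmf_reconstructions(images, n_components=20):
    """images: array of shape (n_samples, n_pixels), non-negative."""
    model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    w = model.fit_transform(images)   # (n_samples, n_components)
    h = model.components_             # (n_components, n_pixels)
    return w @ h                      # low-rank reconstructions

rng = np.random.default_rng(1)
fake = rng.random((100, 64 * 64))     # stand-in for mammogram patches
augmented = np.vstack([fake, nmf_reconstructions(fake)])
```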
The accuracy of the ATLAS muon X-ray tomograph
NASA Astrophysics Data System (ADS)
Avramidou, R.; Berbiers, J.; Boudineau, C.; Dechelette, C.; Drakoulakos, D.; Fabjan, C.; Grau, S.; Gschwendtner, E.; Maugain, J.-M.; Rieder, H.; Rangod, S.; Rohrbach, F.; Sbrissa, E.; Sedykh, E.; Sedykh, I.; Smirnov, Y.; Vertogradov, L.; Vichou, I.
2003-01-01
A gigantic detector, the ATLAS project, is under construction at CERN for particle physics research at the Large Hadron Collider, which is to be ready by 2006. An X-ray tomograph has been developed, designed and constructed at CERN in order to control the mechanical quality of the ATLAS muon chambers. We reached a measurement accuracy of 2 μm systematic and 2 μm statistical uncertainty in the horizontal and vertical directions over a working area of 220 cm (horizontal) × 60 cm (vertical). Here we describe in detail the basic principle chosen to achieve such good accuracy. In order to cross-check our precision, key measurement results are presented.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... You may extend the sampling time to improve measurement accuracy of PM emissions, using good..., you may omit speed, torque, and power points from the duty-cycle regression statistics if the... mapped. (2) For variable-speed engines without low-speed governors, you may omit torque and power points...
Code of Federal Regulations, 2012 CFR
2012-07-01
.... You may extend the sampling time to improve measurement accuracy of PM emissions, using good..., you may omit speed, torque, and power points from the duty-cycle regression statistics if the... mapped. (2) For variable-speed engines without low-speed governors, you may omit torque and power points...
Torralbo, Ana; Walther, Dirk B.; Chai, Barry; Caddigan, Eamon; Fei-Fei, Li; Beck, Diane M.
2013-01-01
Within the range of images that we might categorize as a “beach”, for example, some will be more representative of that category than others. Here we first confirmed that humans could categorize “good” exemplars better than “bad” exemplars of six scene categories and then explored whether brain regions previously implicated in natural scene categorization showed a similar sensitivity to how well an image exemplifies a category. In a behavioral experiment participants were more accurate and faster at categorizing good than bad exemplars of natural scenes. In an fMRI experiment participants passively viewed blocks of good or bad exemplars from the same six categories. A multi-voxel pattern classifier trained to discriminate among category blocks showed higher decoding accuracy for good than bad exemplars in the PPA, RSC and V1. This difference in decoding accuracy cannot be explained by differences in overall BOLD signal, as average BOLD activity was either equivalent or higher for bad than good scenes in these areas. These results provide further evidence that V1, RSC and the PPA not only contain information relevant for natural scene categorization, but their activity patterns mirror the fundamentally graded nature of human categories. Analysis of the image statistics of our good and bad exemplars shows that variability in low-level features and image structure is higher among bad than good exemplars. A simulation of our neuroimaging experiment suggests that such a difference in variance could account for the observed differences in decoding accuracy. These results are consistent with both low-level models of scene categorization and models that build categories around a prototype. PMID:23555588
NASA Astrophysics Data System (ADS)
Bonetto, P.; Qi, Jinyi; Leahy, R. M.
2000-08-01
Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
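A minimal sketch of the CHO statistic itself, assuming the channel matrix and the channel-space class means and covariance are given (shapes and names are illustrative, not the authors' closed form):

```python
# Channelized Hotelling observer: reduce the image to channel outputs
# v = U^T g, then apply the covariance-weighted difference of class means.
import numpy as np

def cho_statistic(g, U, mean_signal, mean_background, cov_channels):
    """g: image vector; U: (n_pixels, n_channels) channel matrix;
    means and covariance are channel-space statistics of the two classes."""
    v = U.T @ g
    template = np.linalg.solve(cov_channels, mean_signal - mean_background)
    return template @ v  # large values favor 'signal present'
```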
ERIC Educational Resources Information Center
Witmer, David R.
A search for better predictors of incomes of high school and college graduates is described. The accuracy of the prediction, implicit in the work of John R. Walsh of Harvard University, that the income differences in a given year are good indicators of income differences in future years, was tested by applying standard statistical procedures to…
NASA Astrophysics Data System (ADS)
Lin, Shu; Wang, Rui; Xia, Ning; Li, Yongdong; Liu, Chunliang
2018-01-01
Statistical multipactor theories are critical prediction approaches for multipactor breakdown determination. However, these approaches still involve a trade-off between calculation efficiency and accuracy. This paper presents an improved stationary statistical theory for efficient threshold analysis of two-surface multipactor. A general integral equation over the distribution function of the electron emission phase, with both single-sided and double-sided impacts considered, is formulated. The modeling results indicate that the improved stationary statistical theory not only attains the same accuracy of multipactor threshold calculation as the nonstationary statistical theory, but also achieves high calculation efficiency concurrently. By using this improved stationary statistical theory, the total time consumed in calculating the full multipactor susceptibility zones of parallel plates can be decreased by as much as a factor of four relative to the nonstationary statistical theory. It also shows that the effect of single-sided impacts is indispensable for accurate multipactor prediction of coaxial lines, and is more significant for high-order multipactor. Finally, the influence of secondary emission yield (SEY) properties on the multipactor threshold is further investigated. It is observed that the first cross energy and the energy range between the first cross and the SEY maximum both play a significant role in determining the multipactor threshold, which agrees with the numerical simulation results in the literature.
The Diagnostic Accuracy of the Berg Balance Scale in Predicting Falls.
Park, Seong-Hi; Lee, Young-Shin
2017-11-01
This study aimed to evaluate the predictive validity of the Berg Balance Scale (BBS) as a screening tool for fall risk among those with varied levels of balance. A total of 21 studies reporting the predictive validity of the BBS for fall risk were meta-analyzed. With regard to the overall predictive validity of the BBS, the pooled sensitivity and specificity were 0.72 and 0.73, respectively; the area under the curve was 0.84. The findings showed statistical heterogeneity among studies. Among the sub-groups, the age group of those younger than 65 years, those with neuromuscular disease, those with 2+ falls, and those with a cutoff point of 45 to 49 showed better sensitivity with statistically less heterogeneity. The empirical evidence indicates that the BBS is a suitable tool to screen for the risk of falls and shows good predictability when used with the appropriate criteria and applied to those with neuromuscular disease.
Diagnostic accuracy of language sample measures with Persian-speaking preschool children.
Kazemi, Yalda; Klee, Thomas; Stringer, Helen
2015-04-01
This study examined the diagnostic accuracy of selected language sample measures (LSMs) with Persian-speaking children. A pre-accuracy study followed by phase I and II studies is reported. Twenty-four Persian-speaking children, aged 42 to 54 months, with primary language impairment (PLI) were compared to 27 age-matched children without PLI on a set of measures derived from play-based, conversational language samples. Results showed that correlations between age and LSMs were not statistically significant in either group of children. However, a majority of LSMs differentiated children with and without PLI at the group level (phase I), while three of the measures exhibited good diagnostic accuracy at the level of the individual (phase II). We conclude that general LSMs are promising for distinguishing between children with and without PLI. Persian-specific measures are mainly helpful in identifying children without language impairment, while their ability to identify children with PLI is poor.
Paroxysmal atrial fibrillation prediction method with shorter HRV sequences.
Boon, K H; Khalil-Hani, M; Malarvili, M B; Sia, C W
2016-10-01
This paper proposes a method that predicts the onset of paroxysmal atrial fibrillation (PAF) using heart rate variability (HRV) segments that are shorter than those applied in existing methods, while maintaining good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing the heart and preventing the onset of atrial arrhythmias with different pacing techniques. We investigate the effect of HRV features extracted from different lengths of HRV segments prior to PAF onset with the proposed PAF prediction method. The pre-processing stage of the predictor includes QRS detection, HRV quantification and ectopic beat correction. Time-domain, frequency-domain, non-linear and bispectrum features are then extracted from the quantified HRV. In the feature selection, the HRV feature set and classifier parameters are optimized simultaneously using an optimization procedure based on a genetic algorithm (GA). Both the full feature set and a statistically significant feature subset are optimized by the GA. For the statistically significant feature subset, the Mann-Whitney U test is used to filter out features that do not reach statistical significance at the 20% significance level. The final stage of our predictor is a classifier based on a support vector machine (SVM). A 10-fold cross-validation is applied in performance evaluation, and the proposed method achieves 79.3% prediction accuracy using 15-minute HRV segments. This accuracy is comparable to that achieved by existing methods that use 30-minute HRV segments, most of which achieve accuracy of around 80%. More importantly, our method significantly outperforms those that applied segments shorter than 30 minutes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
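A sketch of the statistical feature-filtering and SVM steps described above, with the GA optimization omitted and placeholder data (not the authors' implementation):

```python
# Keep features passing a Mann-Whitney U test at the 20% significance
# level, then score an SVM with 10-fold cross-validation.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def filter_features(X, y, alpha=0.20):
    keep = []
    for j in range(X.shape[1]):
        _, p = mannwhitneyu(X[y == 0, j], X[y == 1, j],
                            alternative="two-sided")
        if p < alpha:
            keep.append(j)
    return np.array(keep)

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))              # placeholder HRV features
y = rng.integers(0, 2, size=120)            # placeholder PAF labels
cols = filter_features(X, y)
if cols.size:
    acc = cross_val_score(SVC(), X[:, cols], y, cv=10).mean()
    print(acc)
```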
Improved Forecasting Methods for Naval Manpower Studies
2015-03-25
Using monthly data is likely to improve the overall fit of the models and the accuracy of the BP test. A measure of unemployment to control for... measure of the relative goodness of fit of a statistical model. It is grounded in the concept of information entropy, in effect offering a relative... the Kullback–Leibler divergence, D_KL(f, g1); similarly, the information lost from using g2 to
Ahlawat, Shivani; Stern, Steven E; Belzberg, Allan J; Fritz, Jan
2017-07-01
To assess the quality and accuracy of metal artifact reduction sequence (MARS) magnetic resonance imaging (MRI) for the diagnosis of lumbosacral neuropathies in patients with metallic implants in the pelvis. Twenty-two subjects with lumbosacral neuropathy following pelvic instrumentation underwent 1.5-T MARS MRI, including optimized axial intermediate-weighted and STIR turbo spin echo sequences extending from L5 to the ischial tuberosity. Two readers graded the visibility of the lumbosacral trunk and the sciatic, femoral, lateral femoral cutaneous and obturator nerves, as well as nerve signal intensity, architecture, caliber, course, continuity, and skeletal muscle denervation. Clinical examination and electrodiagnostic studies were used as the standard of reference. Descriptive, agreement, and diagnostic performance statistics were applied. Lumbosacral plexus visibility on MARS MRI was good (4) or very good (3) in 92% of cases, with 81% exact agreement and a Kendall's W coefficient of 0.811. The obturator nerve at the obturator foramen and the sciatic nerve posterior to the acetabulum had the lowest visibility, with good or very good ratings in only 61% and 77% of cases, respectively. The reader agreement for nerve abnormalities on MARS MRI was excellent, ranging from 95.5 to 100%. MARS MRI achieved a sensitivity of 86%, a specificity of 67%, a positive predictive value of 95%, a negative predictive value of 40%, and an accuracy of 83% for the detection of neuropathy. MARS MRI yields high image quality and diagnostic accuracy for the assessment of lumbosacral neuropathies in patients with metallic implants of the pelvis and hips.
Decay Properties of K-Vacancy States in Fe X-Fe XVII
NASA Technical Reports Server (NTRS)
Mendoza, C.; Kallman, T. R.; Bautista, M. A.; Palmeri, P.
2003-01-01
We report extensive calculations of the decay properties of fine-structure K-vacancy levels in Fe X-Fe XVII. A large set of level energies, wavelengths, radiative and Auger rates, and fluorescence yields has been computed using three different standard atomic codes, namely Cowan's HFR, AUTOSTRUCTURE and the Breit-Pauli R-matrix package. This multi-code approach is used to study the effects of core relaxation, configuration interaction and the Breit interaction, and enables estimates of statistical accuracy ratings. The Ksigma and KLL Auger widths have been found to be nearly independent of both the outer-electron configuration and electron occupancy, keeping a constant ratio of 1.53 +/- 0.06. By comparing with previous theoretical and measured wavelengths, the accuracy of the present set is determined to be within 2 mÅ. Also, the good agreement found between the different radiative and Auger data sets that have been computed allows us to propose with confidence an accuracy rating of 20% for line fluorescence yields greater than 0.01. Emission and absorption spectral features are predicted, showing good correlation with measurements in both laboratory and astrophysical plasmas.
Revealing how network structure affects accuracy of link prediction
NASA Astrophysics Data System (ADS)
Yang, Jin-Xuan; Zhang, Xiao-Dong
2017-08-01
Link prediction plays an important role in network reconstruction and network evolution. How the network structure affects the accuracy of link prediction is an interesting problem. In this paper we use common neighbors and the Gini coefficient to reveal the relation between them, which can provide a good reference for the choice of a suitable link prediction algorithm according to the network structure. Moreover, the statistical analysis reveals correlations between the common-neighbors index, the Gini coefficient index and other indices describing the network structure, such as Laplacian eigenvalues, clustering coefficient, degree heterogeneity, and assortativity of the network. Furthermore, a new method to predict missing links is proposed. The experimental results show that the proposed algorithm yields better prediction accuracy and robustness to the network structure than currently used methods for a variety of real-world networks.
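For reference, the common-neighbours index used above can be sketched in a few lines with networkx, scoring each non-edge by its number of shared neighbours:

```python
# Common-neighbours link prediction: rank candidate (missing) links by
# the number of neighbours the two endpoints share.
import networkx as nx

G = nx.karate_club_graph()  # stand-in for a real-world network
scores = {(u, v): len(list(nx.common_neighbors(G, u, v)))
          for u, v in nx.non_edges(G)}
top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5]
print(top)  # the five most likely missing links under this index
```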
MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera
NASA Astrophysics Data System (ADS)
Wang, Hongkai; Stout, David B.; Taschereau, Richard; Gu, Zheng; Vu, Nam T.; Prout, David L.; Chatziioannou, Arion F.
2012-10-01
This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.
El-Didamony, Akram M; Gouda, Ayman A
2011-01-01
A new highly sensitive and specific spectrofluorimetric method has been developed to determine the sympathomimetic drug pseudoephedrine hydrochloride. The present method was based on derivatization with 4-chloro-7-nitrobenzofurazan in phosphate buffer at pH 7.8 to produce a highly fluorescent product which was measured at 532 nm (excitation at 475 nm). Under the optimized conditions a linear relationship and good correlation were found between the fluorescence intensity and pseudoephedrine hydrochloride concentration in the range of 0.5-5 µg mL⁻¹. The proposed method was successfully applied to the assay of pseudoephedrine hydrochloride in commercial pharmaceutical formulations with good accuracy and precision and without interference from common additives. Statistical comparison of the results with a well-established method showed excellent agreement and proved that there was no significant difference in accuracy and precision. The stoichiometry of the reaction was determined and the reaction pathway was postulated. Copyright © 2010 John Wiley & Sons, Ltd.
Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana
2014-01-01
Objective To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. Methods TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Results Both assays provided good linearity, accuracy, reproducibility and selectivity for determination of γ-oryzanol. Conclusions The TLC-densitometric and TLC-image analysis methods provided a similar reproducibility, accuracy and selectivity for the quantitative determination of γ-oryzanol in cold pressed rice bran oil. A statistical comparison of the quantitative determinations of γ-oryzanol in samples did not show any statistically significant difference between TLC-densitometric and TLC-image analysis methods. As both methods were found to be equal, they therefore can be used for the determination of γ-oryzanol in cold pressed rice bran oil. PMID:25182282
Ernst, Dominique; Köhler, Jürgen
2013-01-21
We provide experimental results on the accuracy of diffusion coefficients obtained by a mean squared displacement (MSD) analysis of single-particle trajectories. We have recorded very long trajectories comprising more than 1.5 × 10⁵ data points and decomposed these long trajectories into shorter segments, providing us with ensembles of trajectories of variable lengths. This enabled a statistical analysis of the resulting MSD curves as a function of the lengths of the segments. We find that the relative error of the diffusion coefficient can be minimized by taking an optimum number of points into account for fitting the MSD curves, and that this optimum does not depend on the segment length. Yet the magnitude of the relative error for the diffusion coefficient does, and achieving an accuracy on the order of 10% requires the recording of trajectories with about 1000 data points. Finally, we compare our results with theoretical predictions and find very good qualitative and quantitative agreement between experiment and theory.
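A minimal sketch of the underlying MSD analysis, assuming a 2-D Brownian trajectory sampled at a fixed interval and a few-point linear fit (MSD = 4Dt in two dimensions); all parameters are illustrative:

```python
# Estimate a diffusion coefficient from the time-averaged MSD of a
# simulated 2-D Brownian trajectory, fitting only the first few lags.
import numpy as np

def msd(traj, max_lag):
    """traj: (n_points, 2) positions sampled at a fixed interval."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[lag:] - traj[:-lag]
        out[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return out

dt = 0.01                                   # assumed frame interval [s]
D_true = 0.5                                # assumed diffusion coefficient
rng = np.random.default_rng(2)
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(10000, 2))
traj = np.cumsum(steps, axis=0)             # Brownian path
lags = np.arange(1, 5)                      # few-point fit, as suggested above
slope = np.polyfit(lags * dt, msd(traj, 4), 1)[0]
D_est = slope / 4.0                         # MSD = 4*D*t in 2-D
print(D_est)
```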
Types and Characteristics of Data for Geomagnetic Field Modeling
NASA Technical Reports Server (NTRS)
Langel, R. A. (Editor); Baldwin, R. T. (Editor)
1992-01-01
Given here is material submitted at a symposium convened on Friday, August 23, 1991, at the General Assembly of the International Union of Geodesy and Geophysics (IUGG) held in Vienna, Austria. Models of the geomagnetic field are only as good as the data upon which they are based, and depend upon correct understanding of data characteristics such as accuracy, correlations, systematic errors, and general statistical properties. This symposium was intended to expose and illuminate these data characteristics.
Parent-based diagnosis of ADHD is as accurate as a teacher-based diagnosis of ADHD.
Bied, Adam; Biederman, Joseph; Faraone, Stephen
2017-04-01
To review the literature evaluating the psychometric properties of parent and teacher informants relative to a gold-standard ADHD diagnosis in pediatric populations. We included studies that had both a parent and a teacher informant, a gold-standard diagnosis, and diagnostic accuracy metrics. Potential confounds were evaluated. We also assessed the 'OR' and the 'AND' rules for combining informant reports. Eight articles met inclusion criteria. The diagnostic accuracy for predicting gold-standard ADHD diagnoses did not differ between parents and teachers. Sample size, sample type, participant drop-out, participant age, participant gender, geographic area of the study, and date of study publication were assessed as potential confounds. Parents and teachers both yielded moderate to good diagnostic accuracy for ADHD diagnoses. Parent reports were statistically indistinguishable from those of teachers. The predictive features of the 'OR' and 'AND' rules are useful in evaluating approaches to better integrating information from these informants.
Binary recursive partitioning: background, methods, and application to psychology.
Merkle, Edgar C; Shaffer, Victoria A
2011-02-01
Binary recursive partitioning (BRP) is a computationally intensive statistical method that can be used in situations where linear models are often used. Instead of imposing many assumptions to arrive at a tractable statistical model, BRP simply seeks to accurately predict a response variable based on values of predictor variables. The method outputs a decision tree depicting the predictor variables that were related to the response variable, along with the nature of the variables' relationships. No significance tests are involved, and the tree's 'goodness' is judged based on its predictive accuracy. In this paper, we describe BRP methods in a detailed manner and illustrate their use in psychological research. We also provide R code for carrying out the methods.
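The paper supplies R code; a rough Python analogue using scikit-learn conveys the same idea, judging the tree purely by held-out predictive accuracy rather than significance tests:

```python
# Binary recursive partitioning as a decision tree: no significance
# tests, the tree's 'goodness' is its predictive accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
print(tree.score(X_te, y_te))  # held-out predictive accuracy
```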
Taylor, Vivien F; Longerich, Henry P; Greenough, John D
2003-02-12
Trace element fingerprints were deciphered for wines from Canada's two major wine-producing regions, the Okanagan Valley and the Niagara Peninsula, for the purpose of examining differences in wine element composition with region of origin and identifying elements important to determining provenance. Analysis by ICP-MS allowed simultaneous determination of 34 trace elements in wine (Li, Be, Mg, Al, P, Cl, Ca, Ti, V, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Br, Rb, Sr, Mo, Ag, Cd, Sb, I, Cs, Ba, La, Ce, Tl, Pb, Bi, Th, and U) at low levels of detection, and patterns in trace element concentrations were deciphered by multivariate statistical analysis. The two regions were discriminated with 100% accuracy using 10 of these elements. Differences in soil chemistry between the Niagara and Okanagan vineyards were evident, without a good correlation between soil and wine composition. The element Sr was found to be a good indicator of provenance and has been reported in fingerprinting studies of other regions.
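A sketch of the multivariate discrimination step on synthetic stand-in data (the study used 10 selected elements measured by ICP-MS; the class shift here is invented for illustration):

```python
# Linear discriminant analysis on element concentrations, scored by
# cross-validation; the two 'regions' are simulated stand-ins.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
okanagan = rng.normal(0.0, 1.0, size=(40, 10))
niagara = rng.normal(1.0, 1.0, size=(40, 10))   # shifted element profile
X = np.vstack([okanagan, niagara])
y = np.array([0] * 40 + [1] * 40)
print(cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
```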
Marics, Gábor; Koncz, Levente; Eitler, Katalin; Vatai, Barbara; Szénási, Boglárka; Zakariás, David; Mikos, Borbála; Körner, Anna; Tóth-Heyn, Péter
2015-03-19
Continuous glucose monitoring (CGM) was originally developed for diabetic patients and may be a useful tool for monitoring glucose changes in the pediatric intensive care unit (PICU). Its use is, however, limited by the lack of sufficient data on its reliability under insufficient peripheral perfusion. We aimed to correlate the accuracy of CGM with laboratory markers relevant to disturbed tissue perfusion. In 38 pediatric patients (age range, 0-18 years) requiring intensive care we tested the effect of pH, lactate, hematocrit and serum potassium on the difference between CGM and meter glucose measurements. Guardian® (Medtronic®) CGM results were compared to GEM 3000 (Instrumentation Laboratory®) and point-of-care measurements. The clinical accuracy of CGM was evaluated by Clarke error grid and Bland-Altman analyses and Pearson's correlation. We used the Friedman test for statistical analysis (statistical significance was established as p < 0.05). CGM values exhibited considerable variability without any correlation with the examined laboratory parameters. Clarke and Bland-Altman analyses and Pearson's correlation coefficient demonstrated good clinical accuracy of CGM (zones A and B = 96%; the mean difference between reference and CGM glucose was 1.3 mg/dL, with 48 of the 780 calibration pairs exceeding 2 standard deviations; Pearson's correlation coefficient: 0.83). The accuracy of CGM measurements is independent of laboratory parameters relevant to tissue hypoperfusion. CGM may prove a reliable tool for continuous monitoring of glucose changes in PICUs, not much influenced by tissue perfusion, but still not appropriate as the basis for clinical decisions.
Using CRANID to test the population affinity of known crania.
Kallenberger, Lauren; Pilbrow, Varsha
2012-11-01
CRANID is a statistical program used to infer the source population of a cranium of unknown origin by comparing its cranial dimensions with a worldwide craniometric database. It has great potential for estimating ancestry in archaeological, forensic and repatriation cases. In this paper we test the validity of CRANID in classifying crania of known geographic origin. Twenty-three crania of known geographic origin but unknown sex were selected from the osteological collections of the University of Melbourne. Only 18 crania showed good statistical match with the CRANID database. Without considering accuracy of sex allocation, 11 crania were accurately classified into major geographic regions and nine were correctly classified to geographically closest available reference populations. Four of the five crania with poor statistical match were nonetheless correctly allocated to major geographical regions, although none was accurately assigned to geographically closest reference samples. We conclude that if sex allocations are overlooked, CRANID can accurately assign 39% of specimens to geographically closest matching reference samples and 48% to major geographic regions. Better source population representation may improve goodness of fit, but known sex-differentiated samples are needed to further test the utility of CRANID. © 2012 The Authors Journal of Anatomy © 2012 Anatomical Society.
Cognitive Change Questionnaire as a method for cognitive impairment screening
Damin, Antonio Eduardo; Nitrini, Ricardo; Brucki, Sonia Maria Dozzi
2015-01-01
The Cognitive Change Questionnaire (CCQ) was created as an effective measure of cognitive change that is easy to use and suitable for application in Brazil. Objective To evaluate whether the CCQ can accurately distinguish normal subjects from individuals with Mild Cognitive Impairment (MCI) and/or early stage dementia and to develop a briefer questionnaire, based on the original 22-item CCQ (CCQ22), that contains fewer questions. Methods A total of 123 individuals were evaluated: 42 healthy controls, 40 patients with MCI and 41 with mild dementia. The evaluation was performed using cognitive tests based on individual performance and on questionnaires administered to informants. The CCQ22 was created based on a selection of questions that experts deemed useful in screening for early stage dementia. Results The CCQ22 showed good accuracy for distinguishing between the groups. Statistical models selected the eight questions with the greatest power to discriminate between the groups. The area under the ROC curve for the final version of the 8-item CCQ (CCQ8) demonstrated good accuracy in differentiating between the groups, good correlation with the final diagnosis (r=0.861) and adequate internal consistency (Cronbach's α=0.876). Conclusion The CCQ8 can be used to accurately differentiate between normal subjects and individuals with cognitive impairment, constituting a brief and appropriate instrument for cognitive screening. PMID:29213967
Klein, A A; Collier, T; Yeates, J; Miles, L F; Fletcher, S N; Evans, C; Richards, T
2017-09-01
A simple and accurate scoring system to predict the risk of transfusion for patients undergoing cardiac surgery is lacking. We identified independent risk factors associated with transfusion by performing univariate analysis, followed by logistic regression. We then simplified the score to an integer-based system and tested it using the area under the receiver operating characteristic curve (AUC) statistic with a Hosmer-Lemeshow goodness-of-fit test. Finally, the scoring system was applied to the external validation dataset and the same statistical methods applied to test the accuracy of the ACTA-PORT score. Several factors were independently associated with risk of transfusion, including age, sex, body surface area, logistic EuroSCORE, preoperative haemoglobin and creatinine, and type of surgery. In our primary dataset, the score accurately predicted the risk of perioperative transfusion in cardiac surgery patients with an AUC of 0.76. The external validation confirmed the accuracy of the scoring method with an AUC of 0.84 and good agreement across all scores, with a minor tendency to under-estimate transfusion risk in very high-risk patients. The ACTA-PORT score is a reliable, validated tool for predicting the risk of transfusion for patients undergoing cardiac surgery. This and other scores can be used in research studies for risk adjustment when assessing outcomes, and might also be incorporated into a Patient Blood Management programme. © The Author 2017. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com
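A generic sketch of how a logistic model can be reduced to an integer-based score and checked with the AUC statistic, on simulated data (the coefficients and variables are placeholders, not the ACTA-PORT model):

```python
# Fit a logistic model, round its coefficients to integer points
# (relative to the smallest), and measure discrimination by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 6))                      # six placeholder risk factors
logit = X @ np.array([0.8, -0.5, 0.6, 0.3, -0.4, 0.7]) - 0.2
y = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min())
score = X @ points                                 # integer-based score
print(roc_auc_score(y, score))                     # AUC of the simplified score
```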
Anguita, Jaime A; Neifeld, Mark A; Vasic, Bane V
2007-09-10
By means of numerical simulations we analyze the statistical properties of the power fluctuations induced by the incoherent superposition of multiple transmitted laser beams in a terrestrial free-space optical communication link. The measured signals arising from different transmitted optical beams are found to be statistically correlated. This channel correlation increases with receiver aperture and propagation distance. We find a simple scaling rule for the spatial correlation coefficient in terms of the propagation distance and we are able to predict the scintillation reduction in previously reported experiments with good accuracy. We propose an approximation to the probability density function of the received power of a spatially correlated multiple-beam system in terms of the parameters of the single-channel gamma-gamma function. A bit-error-rate evaluation is also presented to demonstrate the improvement of a multibeam system over its single-beam counterpart.
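The gamma-gamma density referred to above has a standard closed form for normalized received power; a direct transcription, assuming mean-one irradiance and illustrative scintillation parameters:

```python
# Gamma-gamma PDF of normalized received power I (mean 1), with alpha
# and beta the large-/small-scale scintillation parameters.
import numpy as np
from scipy.special import gamma, kv

def gamma_gamma_pdf(I, alpha, beta):
    I = np.asarray(I, dtype=float)
    norm = 2 * (alpha * beta) ** ((alpha + beta) / 2) / (gamma(alpha) * gamma(beta))
    return (norm * I ** ((alpha + beta) / 2 - 1)
            * kv(alpha - beta, 2 * np.sqrt(alpha * beta * I)))

I = np.linspace(0.01, 3, 300)
p = gamma_gamma_pdf(I, alpha=4.0, beta=2.0)  # illustrative parameters
print(np.trapz(p, I))                        # should be close to 1
```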
NASA Astrophysics Data System (ADS)
Kim, Hyun-Tae; Romanelli, M.; Yuan, X.; Kaye, S.; Sips, A. C. C.; Frassinetti, L.; Buchanan, J.; JET Contributors
2017-06-01
This paper presents for the first time a statistical validation of predictive TRANSP simulations of plasma temperature using two transport models, GLF23 and TGLF, over a database of 80 baseline H-mode discharges in JET-ILW. While the accuracy of the predicted T_e with TRANSP-GLF23 is affected by plasma collisionality, the dependency of predictions on collisionality is less significant when using TRANSP-TGLF, indicating that the latter model has a broader applicability across plasma regimes. TRANSP-TGLF also shows a good matching of predicted T_i with experimental measurements, allowing for a more accurate prediction of the neutron yields. The impact of input data and assumptions prescribed in the simulations is also investigated in this paper. The statistical validation and the assessment of uncertainty level in predictive TRANSP simulations for JET-ILW-DD will constitute the basis for the extrapolation to JET-ILW-DT experiments.
NASA Technical Reports Server (NTRS)
Schweikhard, W. G.; Chen, Y. S.
1986-01-01
The Melick method of inlet flow dynamic distortion prediction by statistical means is outlined. A hypothetical vortex model is used as the basis for the mathematical formulations. The main variables are identified by matching the theoretical total pressure rms ratio with the measured total pressure rms ratio. Data comparisons, using the HiMAT inlet test data set, indicate satisfactory prediction of the dynamic peak distortion for cases with boundary layer control device vortex generators. A method for dynamic probe selection was developed. The validity of the probe selection criteria is demonstrated by comparing the reduced-probe predictions with the 40-probe predictions. It is indicated that the number of dynamic probes can be reduced to as few as two and still retain good accuracy.
Variance approximations for assessments of classification accuracy
R. L. Czaplewski
1994-01-01
Variance approximations are derived for the weighted and unweighted kappa statistics, the conditional kappa statistic, and conditional probabilities. These statistics are useful to assess classification accuracy, such as accuracy of remotely sensed classifications in thematic maps when compared to a sample of reference classifications made in the field. Published...
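For orientation, the kappa statistic and one common large-sample variance approximation can be computed from an error matrix as below; this simplified form, var ≈ p_o(1-p_o)/(n(1-p_e)²), is not the exact approximation derived in the paper:

```python
# Cohen's kappa from an error (confusion) matrix, with a simplified
# large-sample variance approximation; counts are hypothetical.
import numpy as np

def kappa_with_variance(confusion):
    m = np.asarray(confusion, dtype=float)
    n = m.sum()
    p_o = np.trace(m) / n                     # observed agreement
    p_e = (m.sum(0) @ m.sum(1)) / n ** 2      # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    var = p_o * (1 - p_o) / (n * (1 - p_e) ** 2)
    return kappa, var

conf = [[120, 10, 5], [8, 90, 12], [4, 9, 60]]  # hypothetical map vs. field
k, v = kappa_with_variance(conf)
print(k, np.sqrt(v))                             # kappa and its standard error
```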
Long-range prediction of Indian summer monsoon rainfall using data mining and statistical approaches
NASA Astrophysics Data System (ADS)
H, Vathsala; Koolagudi, Shashidhar G.
2017-10-01
This paper presents a hybrid model to better predict Indian summer monsoon rainfall. The algorithm considers suitable techniques for processing dense datasets. The proposed three-step algorithm comprises closed-itemset-generation-based association rule mining for feature selection, cluster membership for dimensionality reduction, and a simple logistic function for prediction. The application to classifying rainfall into flood, excess, normal, deficit, and drought categories, based on 36 predictors consisting of land and ocean variables, is presented. Results show good accuracy over the considered study period of 37 years (1969-2005).
NASA Astrophysics Data System (ADS)
Koseki, Masahiro
2017-04-01
One might expect the newly developed CAMS video system to yield much finer results than the SonotaCo network system. There are indeed small differences between them, but a comparison of their statistics reveals that both datasets are comparable in accuracy. We find that the SonotaCo system cannot detect slow meteors as well as CAMS, but it is superior for faster meteors. There is a more important difference between them, namely the definition of meteor showers, and this results in curious stream data.
Henderson, Shelley; Purdie, Colin; Michie, Caroline; Evans, Andrew; Lerski, Richard; Johnston, Marilyn; Vinnicombe, Sarah; Thompson, Alastair M
2017-11-01
To investigate whether interim changes in heterogeneity (measured using entropy features) on MRI were associated with pathological residual cancer burden (RCB) at final surgery in patients receiving neoadjuvant chemotherapy (NAC) for primary breast cancer. This was a retrospective study of 88 consenting women (age: 30-79 years). Scanning was performed on a 3.0 T MRI scanner prior to NAC (baseline) and after 2-3 cycles of treatment (interim). Entropy was derived from the grey-level co-occurrence matrix on slice-matched baseline/interim T2-weighted images. Response, assessed using the RCB score on surgically resected specimens, was compared statistically with entropy/heterogeneity changes, and ROC analysis was performed. The association with pCR within each tumour immunophenotype was evaluated. Mean entropy percent differences between examinations, by response category, were: pCR: 32.8%, RCB-I: 10.5%, RCB-II: 9.7% and RCB-III: 3.0%. Association of ultimate pCR with coarse entropy changes between baseline/interim MRI across all lesions yielded 85.2% accuracy (area under ROC curve: 0.845). Excellent sensitivity/specificity was obtained for pCR prediction within each immunophenotype: ER+: 100%/100%; HER2+: 83.3%/95.7%, TNBC: 87.5%/80.0%. Lesion T2 heterogeneity changes are associated with response to NAC using RCB scores, particularly for pCR, and can be useful across all immunophenotypes with good diagnostic accuracy. • Texture analysis provides a means of measuring lesion heterogeneity on MRI images. • Heterogeneity changes between baseline/interim MRI can be linked with ultimate pathological response. • Heterogeneity changes give good diagnostic accuracy of pCR response across all immunophenotypes. • Percentage reduction in heterogeneity is associated with pCR with good accuracy and NPV.
Cluster mass inference via random field theory.
Zhang, Hui; Nichols, Thomas E; Johnson, Timothy D
2009-01-01
Cluster extent and voxel intensity are two widely used statistics in neuroimaging inference. Cluster extent is sensitive to spatially extended signals while voxel intensity is better for intense but focal signals. In order to leverage strength from both statistics, several nonparametric permutation methods have been proposed to combine the two methods. Simulation studies have shown that of the different cluster permutation methods, the cluster mass statistic is generally the best. However, to date, there is no parametric cluster mass inference available. In this paper, we propose a cluster mass inference method based on random field theory (RFT). We develop this method for Gaussian images, evaluate it on Gaussian and Gaussianized t-statistic images and investigate its statistical properties via simulation studies and real data. Simulation results show that the method is valid under the null hypothesis and demonstrate that it can be more powerful than the cluster extent inference method. Further, analyses with a single subject and a group fMRI dataset demonstrate better power than traditional cluster size inference, and good accuracy relative to a gold-standard permutation test.
An accurate behavioral model for single-photon avalanche diode statistical performance simulation
NASA Astrophysics Data System (ADS)
Xu, Yue; Zhao, Tingchen; Li, Ding
2018-01-01
An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model, and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, and this behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and successfully operated on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good accordance with the test data, validating the high simulation accuracy.
Thomas, Christoph; Brodoefel, Harald; Tsiflikas, Ilias; Bruckner, Friederike; Reimann, Anja; Ketelsen, Dominik; Drosch, Tanja; Claussen, Claus D; Kopp, Andreas; Heuschmid, Martin; Burgstahler, Christof
2010-02-01
To prospectively evaluate the influence of clinical pretest probability, assessed by the Morise score, on image quality and diagnostic accuracy in coronary dual-source computed tomography angiography (DSCTA). In 61 patients, DSCTA and invasive coronary angiography were performed. Subjective image quality and the accuracy of stenosis detection (>50%) by DSCTA, with invasive coronary angiography as the gold standard, were evaluated. The influence of pretest probability on image quality and accuracy was assessed by logistic regression and chi-square testing. Correlations of image quality and accuracy with the Morise score were determined using linear regression. Thirty-eight patients were categorized into the high, 21 into the intermediate, and 2 into the low probability group. Accuracies for the detection of significant stenoses were 0.94, 0.97, and 1.00, respectively. Logistic regressions and chi-square tests showed statistically significant correlations between the Morise score and image quality (P < .0001 and P < .001) and accuracy (P = .0049 and P = .027). Linear regression revealed a cutoff Morise score of 16 for good image quality, with the cutoff for barely diagnostic image quality lying beyond the upper end of the Morise scale. Pretest probability is a weak predictor of image quality and diagnostic accuracy in coronary DSCTA. A sufficient image quality for diagnostic images can be reached at all pretest probabilities. Therefore, coronary DSCTA might be suitable also for patients with a high pretest probability. Copyright 2010 AUR. Published by Elsevier Inc. All rights reserved.
Mohammed A. Kalkhan; Robin M. Reich; Raymond L. Czaplewski
1996-01-01
A Monte Carlo simulation was used to evaluate the statistical properties of measures of association and the Kappa statistic under double sampling with replacement. Three error matrices, representing three levels of classification accuracy of Landsat TM data consisting of four forest cover types in North Carolina, were analyzed. The overall accuracy of the five indices ranged from 0.35...
CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira
2013-01-01
Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. Objective This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. Conclusion It was concluded that methods involving physical anthropology present high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological adjustment is recommended as demonstrated in this study. PMID:24037076
Fonseca, Pedro; Weysen, Tim; Goelema, Maaike S; Møst, Els I S; Radha, Mustafa; Lunsingh Scheurleer, Charlotte; van den Heuvel, Leonie; Aarts, Ronald M
2017-07-01
To compare the accuracy of automatic sleep staging based on heart rate variability measured from photoplethysmography (PPG) combined with body movements measured with an accelerometer, with polysomnography (PSG) and actigraphy. Using wrist-worn PPG to analyze heart rate variability and an accelerometer to measure body movements, sleep stages and sleep statistics were automatically computed from overnight recordings. Sleep-wake, 4-class (wake/N1 + N2/N3/REM) and 3-class (wake/NREM/REM) classifiers were trained on 135 simultaneously recorded PSG and PPG recordings of 101 healthy participants and validated on 80 recordings of 51 healthy middle-aged adults. Epoch-by-epoch agreement and sleep statistics were compared with actigraphy for a subset of the validation set. The sleep-wake classifier obtained an epoch-by-epoch Cohen's κ between PPG and PSG sleep stages of 0.55 ± 0.14, sensitivity to wake of 58.2 ± 17.3%, and accuracy of 91.5 ± 5.1%. κ and sensitivity were significantly higher than with actigraphy (0.40 ± 0.15 and 45.5 ± 19.3%, respectively). The 3-class classifier achieved a κ of 0.46 ± 0.15 and accuracy of 72.9 ± 8.3%, and the 4-class classifier, a κ of 0.42 ± 0.12 and accuracy of 59.3 ± 8.5%. The moderate epoch-by-epoch agreement and, in particular, the good agreement in terms of sleep statistics suggest that this technique is promising for long-term sleep monitoring, although more evidence is needed to understand whether it can complement PSG in clinical practice. It also offers an improvement in sleep/wake detection over actigraphy for healthy individuals, although this must be confirmed on a larger, clinical population.
Baldewijns, Greet; Luca, Stijn; Nagels, William; Vanrumste, Bart; Croonenborghs, Tom
2015-01-01
It has been shown that gait speed and transfer times are good measures of functional ability in the elderly. However, data currently acquired by systems that measure either gait speed or transfer times in the homes of elderly people require manual review by healthcare workers. This reviewing process is time-consuming. To alleviate this burden, this paper proposes the use of statistical process control (SPC) methods to automatically detect both positive and negative changes in transfer times. Three SPC techniques (tabular CUSUM, standardized CUSUM, and EWMA), all known for their ability to detect small shifts in data, are evaluated on simulated transfer times. This analysis shows that EWMA is the best-suited method, with a detection accuracy of 82% and an average detection time of 9.64 days.
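As a rough illustration of the EWMA technique that performed best above (a sketch only, not the authors' implementation; the baseline window, smoothing weight lam, and limit width L are assumed values), an EWMA chart smooths each incoming transfer time and raises an alarm when the smoothed statistic leaves control limits that widen toward a steady-state value:

    import numpy as np

    def ewma_chart(x, lam=0.2, L=3.0):
        """EWMA control chart; returns indices where the smoothed
        statistic crosses the control limits (possible shifts)."""
        x = np.asarray(x, dtype=float)
        mu0, sigma = x[:20].mean(), x[:20].std(ddof=1)  # baseline estimates
        z, alarms = mu0, []
        for t, xt in enumerate(x, start=1):
            z = lam * xt + (1 - lam) * z
            # variance of the EWMA statistic after t observations
            var = sigma**2 * lam / (2 - lam) * (1 - (1 - lam)**(2 * t))
            if abs(z - mu0) > L * np.sqrt(var):
                alarms.append(t - 1)
        return alarms

    # toy transfer times (seconds) with an upward shift halfway through
    rng = np.random.default_rng(0)
    times = np.concatenate([rng.normal(10, 1, 50), rng.normal(12, 1, 50)])
    print(ewma_chart(times))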
Turbulence and secondary motions in square duct flow
NASA Astrophysics Data System (ADS)
Pirozzoli, Sergio; Modesti, Davide; Orlandi, Paolo; Grasso, Francesco
2017-11-01
We study turbulent flows in pressure-driven ducts with square cross-section through direct numerical simulation (DNS) up to Re_τ ≈ 1050. Numerical simulations are carried out over extremely long integration times to get adequate convergence of the flow statistics, and specifically a high-fidelity representation of the secondary motions which arise. The intensity of the latter is found to be on the order of 1-2% of the bulk velocity, and unaffected by Reynolds number variations. The smallness of the mean convection terms in the streamwise vorticity equation points to a simple characterization of the secondary flows, which in the asymptotic high-Re regime are found to be approximated with good accuracy by eigenfunctions of the Laplace operator. Despite their effect of redistributing the wall shear stress along the duct perimeter, we find that secondary motions do not have a large influence on the mean velocity field, which can be characterized with good accuracy as resulting from the concurrent effect of four independent flat walls, each controlling a quarter of the flow domain. As a consequence, we find that parametrizations based on the hydraulic diameter concept, and modifications thereof, are successful in predicting the duct friction coefficient. This research was carried out using resources from PRACE EU Grants.
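As a side illustration of the hydraulic-diameter concept mentioned in the closing sentence (a sketch under assumed flow values; the Blasius smooth-pipe correlation stands in for whatever parametrization the study actually tested), one computes D_h = 4A/P, which for a square duct equals the side length, and evaluates a circular-pipe friction correlation at the corresponding bulk Reynolds number:

    import math

    def hydraulic_diameter(a, b):
        """D_h = 4*A/P for a rectangular duct; equals the side for a square."""
        return 4 * (a * b) / (2 * (a + b))

    def darcy_friction_blasius(re_bulk):
        """Blasius smooth-pipe correlation (an assumed stand-in here;
        the paper tests hydraulic-diameter parametrizations generally)."""
        return 0.316 * re_bulk**-0.25

    h = 0.1                    # duct side (m), assumed
    u_bulk, nu = 1.0, 1e-6     # bulk velocity (m/s), kinematic viscosity (m^2/s)
    d_h = hydraulic_diameter(h, h)
    re = u_bulk * d_h / nu
    print(f"Re_Dh = {re:.3g}, f = {darcy_friction_blasius(re):.4f}")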
NASA Astrophysics Data System (ADS)
Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans
The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map from the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, a good accuracy of the spatial distribution of emissions was achieved, with correlation values >0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, resulting in correlation values <0.5. Although top-down disaggregation of traffic emissions generally exhibits low accuracy, the accuracy is significantly higher in compact cities and might be further improved by applying a correction factor for the city center. Therefore, the method can be used by local environmental authorities in cities with limited resources and little knowledge of the pollution situation to get an overview of the spatial distribution of the emissions generated by traffic activities.
Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration
NASA Technical Reports Server (NTRS)
Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali
2007-01-01
We discuss here the relative merits of these numbers as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive in using a random sequence is to solve real-world problems, it is more desirable to compare the quality of the sequences based on their performance for these problems in terms of the quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz., the Matlab rand, and the quasi-random Halton generator, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than the golden ratio where the accuracy of the integration is concerned.
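A minimal sketch of the digit-block idea (the block length, sample count, and test integrand are illustrative assumptions, not the paper's experiments): consecutive k-digit blocks of pi's decimal expansion are read as uniform variates on [0, 1) and used for a crude Monte Carlo estimate of the integral of x^2 over [0, 1], whose exact value is 1/3:

    # First 100 decimal digits of pi (a fixed constant for this sketch).
    PI_DIGITS = ("1415926535897932384626433832795028841971"
                 "6939937510582097494459230781640628620899"
                 "86280348253421170679")

    def uniforms_from_digits(digits, k=5):
        """Interpret consecutive k-digit blocks as uniforms in [0, 1)."""
        return [int(digits[i:i + k]) / 10**k
                for i in range(0, len(digits) - k + 1, k)]

    u = uniforms_from_digits(PI_DIGITS)
    f = lambda x: x * x                      # integrand on [0, 1]
    estimate = sum(f(x) for x in u) / len(u)
    print(f"{len(u)} samples: estimate = {estimate:.4f} (exact = 1/3)")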
Farmer, William H.; Knight, Rodney R.; Eash, David A.; Hutchinson, Kasey J.; Linhart, S. Mike; Christiansen, Daniel E.; Archfield, Stacey A.; Over, Thomas M.; Kiang, Julie E.
2015-08-24
Daily records of streamflow are essential to understanding hydrologic systems and managing the interactions between human and natural systems. Many watersheds and locations lack streamgages to provide accurate and reliable records of daily streamflow. In such ungaged watersheds, statistical tools and rainfall-runoff models are used to estimate daily streamflow. Previous work compared 19 different techniques for predicting daily streamflow records in the southeastern United States. Here, five of the better-performing methods are compared in a different hydroclimatic region of the United States, in Iowa. The methods fall into three classes: (1) drainage-area ratio methods, (2) nonlinear spatial interpolations using flow duration curves, and (3) mechanistic rainfall-runoff models. The first two classes are each applied with nearest-neighbor and map-correlated index streamgages. Using a threefold validation and robust rank-based evaluation, the methods are assessed for overall goodness of fit of the hydrograph of daily streamflow, the ability to reproduce a daily, no-fail storage-yield curve, and the ability to reproduce key streamflow statistics. As in the Southeast study, nonlinear spatial interpolation of daily streamflow using flow duration curves is found to have the best predictive accuracy. Comparisons with previous work in Iowa show that the accuracy of mechanistic models with at-site calibration is substantially degraded in the ungaged framework.
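Of the three classes compared above, the drainage-area ratio method is the simplest to state; a minimal sketch with made-up flows and areas (the optional exponent is a common generalization, not necessarily the form used in the study):

    def drainage_area_ratio(q_index, area_ungaged, area_index, exponent=1.0):
        """Estimate daily streamflow at an ungaged site by scaling the
        index streamgage record by the drainage-area ratio.  An exponent
        of 1.0 is the classic form; other values are sometimes used."""
        ratio = (area_ungaged / area_index) ** exponent
        return [q * ratio for q in q_index]

    # hypothetical index-gage daily flows (m^3/s) and drainage areas (km^2)
    q_gaged = [12.0, 15.5, 30.2, 22.1, 18.7]
    print(drainage_area_ratio(q_gaged, area_ungaged=250.0, area_index=400.0))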
RAId_DbS: Peptide Identification using Database Searches with Realistic Statistics
Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo
2007-01-01
Background The key to mass-spectrometry-based proteomics is peptide identification. A major challenge in peptide identification is to obtain realistic E-values when assigning statistical significance to candidate peptides. Results Using a simple scoring scheme, we propose a database search method with theoretically characterized statistics. Taking into account possible skewness in the random variable distribution and the effect of finite sampling, we provide a theoretical derivation for the tail of the score distribution. For every experimental spectrum examined, we collect the scores of peptides in the database and find good agreement between the collected score statistics and our theoretical distribution. Using Student's t-tests, we quantify the degree of agreement between the theoretical distribution and the score statistics collected. The t-tests may be used to measure the reliability of the reported statistics. When combined with the P-value reported for a peptide hit under a score distribution model, this new measure prevents exaggerated statistics. Another feature of RAId_DbS is its capability of detecting multiple co-eluted peptides. The peptide identification performance and statistical accuracy of RAId_DbS are assessed and compared with several other search tools. The executables and data related to RAId_DbS are freely available upon request. PMID:17961253
A Novel Signal Modeling Approach for Classification of Seizure and Seizure-Free EEG Signals.
Gupta, Anubha; Singh, Pushpendra; Karlekar, Mandar
2018-05-01
This paper presents a signal modeling-based new methodology for automatic seizure detection in EEG signals. The proposed method consists of three stages. First, a multirate filterbank structure is proposed that is constructed using the basis vectors of the discrete cosine transform. The proposed filterbank decomposes EEG signals into their respective brain rhythms: delta, theta, alpha, beta, and gamma. Second, these brain rhythms are statistically modeled with the class of self-similar Gaussian random processes, namely, fractional Brownian motion and fractional Gaussian noise. The statistics of these processes are modeled using a single parameter called the Hurst exponent. In the last stage, the value of the Hurst exponent and autoregressive moving average parameters are used as features to design a binary support vector machine classifier to classify pre-ictal, inter-ictal (epileptic with seizure-free interval), and ictal (seizure) EEG segments. The performance of the classifier is assessed via extensive analysis on two widely used data sets and is observed to provide good accuracy on both. Thus, this paper proposes a novel signal model for EEG data that best captures the attributes of these signals and hence helps boost the classification accuracy of seizure and seizure-free epochs.
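The Hurst exponent named above can be estimated in several ways; the aggregated-variance method below is one standard choice (an assumption here, not necessarily the estimator the authors used): for fGn-like series the variance of block means scales as m^(2H-2), so H follows from a log-log slope:

    import numpy as np

    def hurst_aggvar(x, block_sizes=(2, 4, 8, 16, 32)):
        """Aggregated-variance estimate of the Hurst exponent: for
        fGn-like series, Var(block means) ~ m**(2H - 2)."""
        x = np.asarray(x, dtype=float)
        logs_m, logs_v = [], []
        for m in block_sizes:
            n = len(x) // m
            means = x[:n * m].reshape(n, m).mean(axis=1)
            logs_m.append(np.log(m))
            logs_v.append(np.log(means.var(ddof=1)))
        slope = np.polyfit(logs_m, logs_v, 1)[0]
        return 1.0 + slope / 2.0

    rng = np.random.default_rng(1)
    print(hurst_aggvar(rng.standard_normal(4096)))  # white noise: H ~ 0.5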
NASA Astrophysics Data System (ADS)
Lotfy, Hayam Mahmoud; Fayez, Yasmin Mohammed; Tawakkol, Shereen Mostafa; Fahmy, Nesma Mahmoud; Shehata, Mostafa Abd El-Atty
2017-09-01
Simultaneous determination of miconazole (MIC), mometasone furoate (MF), and gentamicin (GEN) in their pharmaceutical combination is described. Gentamicin determination is based on derivatization with o-phthalaldehyde reagent (OPA) without any interference from the other cited drugs, while the spectra of MIC and MF are resolved using both successive and progressive resolution techniques. The first derivative spectrum of MF is measured using constant multiplication or spectrum subtraction, while its recovered zero-order spectrum is obtained using derivative transformation, besides the application of the constant value method. The zero-order spectrum of MIC is obtained by derivative transformation after getting its first derivative spectrum by the derivative subtraction method. A novel method, namely differential amplitude modulation, is used to get the concentrations of MF and MIC, while a novel graphical method, namely concentration value, is used to get the concentrations of MIC, MF, and GEN. Accuracy and precision testing of the developed methods show good results. Specificity of the methods is ensured, and they are successfully applied to the analysis of the pharmaceutical formulation of the three drugs in combination. ICH guidelines are used for validation of the proposed methods. Statistical data are calculated, and the results are satisfactory, revealing no significant difference regarding accuracy and precision.
Mizinga, Kemmy M; Burnett, Thomas J; Brunelle, Sharon L; Wallace, Michael A; Coleman, Mark R
2018-05-01
The U.S. Department of Agriculture, Food Safety Inspection Service regulatory method for monensin, Chemistry Laboratory Guidebook CLG-MON, is a semiquantitative bioautographic method adopted in 1991. Official Method of Analysis (OMA) 2011.24, a modern quantitative and confirmatory LC-tandem MS method, uses no chlorinated solvents and has several advantages, including ease of use, ready availability of reagents and materials, shorter run time, and higher throughput than CLG-MON. Therefore, a bridging study was conducted to support the replacement of method CLG-MON with OMA 2011.24 for regulatory use. Using fortified bovine tissue samples, CLG-MON yielded accuracies of 80-120% in 44 of the 56 samples tested (one sample had no result, six samples had accuracies of >120%, and five samples had accuracies of 40-160%), but the semiquantitative nature of CLG-MON prevented assessment of precision, whereas OMA 2011.24 had accuracies of 88-110% and RSDr of 0.00-15.6%. Incurred residue results corroborated these findings, demonstrating improved accuracy (83.3-114%) and good precision (RSDr of 2.6-20.5%) for OMA 2011.24 compared with CLG-MON (accuracy generally within 80-150%, with exceptions). Furthermore, χ2 analysis revealed no statistically significant difference between the two methods. Thus, the microbiological activity of monensin correlated with the determination of monensin A in bovine tissues, and OMA 2011.24 provided improved accuracy and precision over CLG-MON.
Assessing participation in community-based physical activity programs in Brazil.
Reis, Rodrigo S; Yan, Yan; Parra, Diana C; Brownson, Ross C
2014-01-01
This study aimed to develop and validate a risk prediction model to examine the characteristics that are associated with participation in community-based physical activity programs in Brazil. We used pooled data from three surveys conducted from 2007 to 2009 in state capitals of Brazil with 6166 adults. A risk prediction model was built considering program participation as an outcome. The predictive accuracy of the model was quantified through discrimination (C statistic) and calibration (Brier score) properties. Bootstrapping methods were used to validate the predictive accuracy of the final model. The final model showed sex (women: odds ratio [OR] = 3.18, 95% confidence interval [CI] = 2.14-4.71), having less than high school degree (OR = 1.71, 95% CI = 1.16-2.53), reporting a good health (OR = 1.58, 95% CI = 1.02-2.24) or very good/excellent health (OR = 1.62, 95% CI = 1.05-2.51), having any comorbidity (OR = 1.74, 95% CI = 1.26-2.39), and perceiving the environment as safe to walk at night (OR = 1.59, 95% CI = 1.18-2.15) as predictors of participation in physical activity programs. Accuracy indices were adequate (C index = 0.778, Brier score = 0.031) and similar to those obtained from bootstrapping (C index = 0.792, Brier score = 0.030). Sociodemographic and health characteristics as well as perceptions of the environment are strong predictors of participation in community-based programs in selected cities of Brazil.
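The two accuracy properties reported above, discrimination (C statistic) and calibration (Brier score), together with a simple bootstrap re-estimate, can be computed as in the following sketch (synthetic data and scikit-learn assumed; this is not the study's model or data):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, brier_score_loss

    rng = np.random.default_rng(2)
    X = rng.normal(size=(600, 4))                  # stand-in predictors
    y = (X @ [0.8, -0.5, 0.3, 0.0] + rng.normal(size=600) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    p = model.predict_proba(X)[:, 1]
    print("C statistic:", roc_auc_score(y, p))      # discrimination
    print("Brier score:", brier_score_loss(y, p))   # calibration

    # simple bootstrap validation of the C statistic
    aucs = []
    for _ in range(200):
        idx = rng.integers(0, len(y), len(y))
        if len(set(y[idx])) == 2:                   # need both classes
            aucs.append(roc_auc_score(y[idx], p[idx]))
    print("bootstrap C statistic:", np.mean(aucs))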
Sellbom, Martin; Sansone, Randy A; Songer, Douglas A
2017-09-01
The current study evaluated the utility of the Self-Harm Inventory (SHI) as a proxy for and screening measure of borderline personality disorder (BPD), using several Diagnostic and Statistical Manual of Mental Disorders (DSM)-based BPD measures as criteria. We used a sample of 145 psychiatric inpatients who completed the SHI and a series of well-validated, DSM-based self-report measures of BPD. Using a series of latent trait and latent class analyses, we found that the SHI was substantially associated with a latent construct representing BPD, and differentiated latent classes of 'high' vs. 'low' BPD with good accuracy. The SHI can serve as a proxy for and a good screening measure of BPD, but future research needs to replicate these findings using structured interview-based measurement of BPD.
Classification of skin cancer images using local binary pattern and SVM classifier
NASA Astrophysics Data System (ADS)
Adjed, Faouzi; Faye, Ibrahima; Ababsa, Fakhreddine; Gardezi, Syed Jamal; Dass, Sarat Chandra
2016-11-01
In this paper, a classification method for melanoma and non-melanoma skin cancer images is presented using local binary patterns (LBP). The LBP operator computes local texture information from the skin cancer images, which is then used to compute statistical features capable of discriminating melanoma and non-melanoma skin tissues. A support vector machine (SVM) is applied to the feature matrix for classification into two skin image classes (malignant and benign). The method achieves a good classification accuracy of 76.1%, with sensitivity of 75.6% and specificity of 76.7%.
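A minimal sketch of the LBP-plus-SVM pipeline described above (scikit-image and scikit-learn assumed, with synthetic texture patches standing in for dermoscopic images):

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def to_uint8(im):
        """Rescale a float image to 8-bit, as LBP expects integer images."""
        im = im - im.min()
        m = im.max()
        return (255 * im / m).astype(np.uint8) if m > 0 else im.astype(np.uint8)

    def lbp_histogram(image, P=8, R=1.0):
        """Uniform LBP codes summarized as a normalized histogram,
        used here as the texture feature vector."""
        codes = local_binary_pattern(to_uint8(image), P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
        return hist

    rng = np.random.default_rng(3)
    # stand-in 'benign' (smooth) and 'malignant' (rough) texture patches
    smooth = [rng.normal(0, 1, (64, 64)).cumsum(axis=1) for _ in range(20)]
    rough = [rng.normal(0, 1, (64, 64)) for _ in range(20)]
    X = np.array([lbp_histogram(im) for im in smooth + rough])
    y = np.array([0] * 20 + [1] * 20)

    clf = SVC(kernel="rbf").fit(X[::2], y[::2])      # train on half
    print("held-out accuracy:", clf.score(X[1::2], y[1::2]))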
NASA Technical Reports Server (NTRS)
Wong, K. W.
1974-01-01
In lunar phototriangulation, there is a complete lack of accurate ground control points. The accuracy analysis of the results of lunar phototriangulation must, therefore, be completely dependent on statistical procedures. It was the objective of this investigation to examine the validity of the commonly used statistical procedures, and to develop both mathematical techniques and computer software for evaluating (1) the accuracy of lunar phototriangulation; (2) the contribution of the different types of photo support data to the accuracy of lunar phototriangulation; (3) the accuracy of absolute orientation as a function of the accuracy and distribution of both the ground and model points; and (4) the relative slope accuracy between any triangulated pass points.
Guenaga, Katia F; Otoch, Jose P; Artifon, Everson L A
2016-01-01
New surgical techniques in the treatment of rectal cancer have improved survival mainly by reducing local recurrences. A preoperative staging method is required to accurately identify tumor stage and plan the appropriate treatment. MRI and ERUS are currently used for local staging (T stage). In this review, the accuracy of MRI and of ERUS with a rigid probe was compared against the gold standard of the pathological findings in the resection specimens. Five studies met the inclusion criteria and were included in this meta-analysis. The accuracy was 91.0% for ERUS and 86.8% for MRI (p=0.27). The difference was not statistically significant, but there was pronounced heterogeneity among the included trials as well as among other published reviews. We conclude that there is a clear need for good-quality, larger-scale, prospective studies.
Soil, water, and vegetation conditions in south Texas
NASA Technical Reports Server (NTRS)
Wiegand, C. L.; Gausman, H. W.; Leamer, R. W.; Richardson, A. J.; Everitt, J. H.; Gerbermann, A. H. (Principal Investigator)
1976-01-01
The author has identified the following significant results. Software development for a computer-aided crop and soil survey system is nearing completion. Computer-aided variety classification accuracies using LANDSAT-1 MSS data for a 600 hectare citrus farm were 83% for Redblush grapefruit and 91% for oranges. These accuracies indicate that there is good potential for computer-aided inventories of grapefruit and orange citrus orchards with LANDSAT-type MSS data. Mean digital values of clouds differed statistically from those for crop, soil, and water entities, and those for cloud shadows were sufficiently lower than those for sunlit crop and soil to be distinguishable. The standard errors of estimate for the calibration of the computer compatible tape coordinate system (pixel and record) to the earth coordinate system (longitude and latitude) for 6 LANDSAT scenes ranged from 0.72 to 1.50 pixels and from 0.58 to 1.75 records.
NASA Technical Reports Server (NTRS)
Liu, W. T.; Niiler, P. P.
1984-01-01
A simple statistical technique is described to determine monthly mean marine surface-layer humidity, which is essential in the specification of surface latent heat flux, from total water vapor in the atmospheric column measured by space-borne sensors. Good correlation between the two quantities was found in examining the humidity soundings from radiosonde reports of mid-ocean island stations and weather ships. The relation agrees with that obtained from satellite (Seasat) data and ship reports averaged over 2 deg areas and a 92-day period in the North Atlantic and in the tropical Pacific. The results demonstrate that, by using a local regression in the tropical Pacific, total water vapor can be used to determine monthly mean surface layer humidity to an accuracy of 0.4 g/kg. With a global regression, determination to an accuracy of 0.8 g/kg is possible. These accuracies correspond to approximately 10 to 20 W/sq m in the determination of latent heat flux with the bulk parameterization method, provided that other required parameters are known.
NASA Technical Reports Server (NTRS)
Ong, K. M.; Macdoran, P. F.; Thomas, J. B.; Fliegel, H. F.; Skjerve, L. J.; Spitzmesser, D. J.; Batelaan, P. D.; Paine, S. R.; Newsted, M. G.
1976-01-01
A precision geodetic measurement system (Aries, for Astronomical Radio Interferometric Earth Surveying) based on the technique of very long base line interferometry has been designed and implemented through the use of a 9-m transportable antenna and the NASA 64-m antenna of the Deep Space Communications Complex at Goldstone, California. A series of experiments designed to demonstrate the inherent accuracy of a transportable interferometer was performed on a 307-m base line during the period from December 1973 to June 1974. This short base line was chosen in order to obtain a comparison with a conventional survey with a few-centimeter accuracy and to minimize Aries errors due to transmission media effects, source locations, and earth orientation parameters. The base-line vector derived from a weighted average of the measurements, representing approximately 24 h of data, possessed a formal uncertainty of about 3 cm in all components. This average interferometry base-line vector was in good agreement with the conventional survey vector within the statistical range allowed by the combined uncertainties (3-4 cm) of the two techniques.
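The "weighted average of the measurements" with a formal uncertainty is, in the usual treatment, an inverse-variance weighted mean; a small sketch with hypothetical baseline estimates (the numbers are illustrative, not the Aries data):

    import numpy as np

    def weighted_mean(values, sigmas):
        """Inverse-variance weighted mean and its formal uncertainty."""
        w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
        return mean, np.sqrt(1.0 / np.sum(w))

    # hypothetical repeated estimates of one baseline component (cm)
    lengths = [30702.1, 30705.3, 30703.8, 30704.4]
    errors = [4.0, 3.0, 5.0, 3.5]
    print(weighted_mean(lengths, errors))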
Prognostic scores in oesophageal or gastric variceal bleeding.
Ohmann, C; Stöltzing, H; Wins, L; Busch, E; Thon, K
1990-05-01
Numerous scoring systems have been developed for the prediction of outcome of variceal bleeding; however, only a few have been evaluated adequately. The objective of this study was to improve the classical Child-Pugh score (CPS) and to test other scores from the literature. Patients (n = 82) with endoscopically confirmed variceal bleeding and long-term sclerotherapy were included in the study. Linear logistic regression (LR) was applied to different sets of prognostic variables with regard to 30-day mortality. In addition, scores from the literature were evaluated on the data set. Performance was measured by accuracy and receiver operating characteristic curves. The application of LR to all five CPS variables (accuracy, 80%) was superior to the classical CPS (70%). LR with selection from the CPS variables or from other sets of variables resulted in no improvement. Compared with CPS, only three scores from the literature, mainly based on subsets of the CPS variables, showed improved accuracy. It is concluded that CPS is still a good scoring system; however, it can be improved by statistical analysis using the same variables.
NASA Astrophysics Data System (ADS)
Tumanov, Sergiu
A test of goodness of fit based on rank statistics was applied to prove the applicability of the Eggenberger-Polya discrete probability law to hourly SO2 concentrations measured in the vicinity of single sources. To this end, the pollutant concentration was considered an integral quantity, which may be accepted if one properly chooses the unit of measurement (in this case μg m⁻³) and if account is taken of the limited accuracy of measurements. The results of the test being satisfactory, even in the range of upper quantiles, the Eggenberger-Polya law was used in association with numerical modelling to estimate statistical parameters, e.g. quantiles and cumulative probabilities of threshold concentrations being exceeded, in the grid points of a network covering the area of interest. This only requires accurate estimates of the means and variances of the concentration series, which can readily be obtained through routine air pollution dispersion modelling.
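The Eggenberger-Polya law is the negative binomial (Polya) distribution, so the moment-based workflow sketched above can be illustrated as follows (the mean, variance, and threshold are invented values; scipy is assumed):

    from scipy import stats

    def polya_from_moments(mean, var):
        """Method-of-moments fit of the Eggenberger-Polya (negative
        binomial) law; requires var > mean (overdispersion)."""
        p = mean / var
        r = mean * p / (1.0 - p)          # = mean**2 / (var - mean)
        return stats.nbinom(r, p)

    # hypothetical hourly SO2 concentration statistics (ug/m3)
    dist = polya_from_moments(mean=40.0, var=900.0)
    threshold = 150
    print("P(C > threshold):", dist.sf(threshold))
    print("99th percentile:", dist.ppf(0.99))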
Marciano, David; Soize, Sébastien; Metaxas, Georgios; Portefaix, Christophe; Pierot, Laurent
2017-02-01
Data about non-invasive follow-up of aneurysms after stent-assisted coiling are scarce. We aimed to compare time-of-flight (TOF) magnetic resonance angiography (MRA) (3D-TOF-MRA) and contrast-enhanced MRA (CE-MRA) at 3 Tesla with digital subtraction angiography (DSA) for evaluating aneurysm occlusion and parent artery patency after stent-assisted coiling. In this retrospective single-center study, patients were included if they had an intracranial aneurysm treated by stent-assisted coiling between March 2008 and June 2015, followed with both MRA sequences (3D-TOF-MRA and CE-MRA) at 3 Tesla and DSA, performed within an interval of <48 hours. Thirty-five aneurysms were included. Regarding evaluation of aneurysm occlusion, agreement with DSA was better for CE-MRA (K=0.53) than for 3D-TOF-MRA (K=0.28). Diagnostic accuracies for aneurysm remnant depiction were similar for 3D-TOF-MRA and CE-MRA (P=1). Both 3D-TOF-MRA (K=0.05) and CE-MRA (K=-0.04) were unable to detect pathological vessels compared with DSA, without difference in accuracy (P=0.68). For parent artery occlusion detection, agreement with DSA was substantial for 3D-TOF-MRA (K=0.64) and moderate for CE-MRA (K=0.45), with similarly good diagnostic accuracies (P=1). After stent-assisted coiling, 3D-TOF-MRA and CE-MRA demonstrated good accuracy in detecting aneurysm remnants (though they tended toward overestimation). Although CE-MRA agreement with DSA was better, there was no statistical difference between 3D-TOF-MRA and CE-MRA accuracies. Both MRAs were unable to provide a precise evaluation of in-stent status but could detect parent vessel occlusion.
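The agreement statistic used throughout the comparison above is Cohen's kappa; a minimal sketch with hypothetical per-aneurysm occlusion grades (scikit-learn assumed):

    from sklearn.metrics import cohen_kappa_score

    # hypothetical per-aneurysm occlusion grades (0 = occluded,
    # 1 = neck remnant, 2 = aneurysm remnant) from DSA and CE-MRA
    dsa =    [0, 0, 1, 2, 0, 1, 1, 0, 2, 0, 1, 0]
    ce_mra = [0, 1, 1, 2, 0, 1, 0, 0, 2, 0, 1, 1]
    print("kappa:", cohen_kappa_score(dsa, ce_mra))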
Fowler, J Christopher; Madan, Alok; Allen, Jon G; Patriquin, Michelle; Sharp, Carla; Oldham, John M; Frueh, B Christopher
2018-01-01
With the publication of the DSM-5 alternative model for personality disorders, it is critical to assess the components of the model against evidence-based models such as the five-factor model and the DSM-IV-TR categorical model. This study explored the relative clinical utility of these models in screening for borderline personality disorder (BPD). Receiver operating characteristics and diagnostic efficiency statistics were calculated for three personality measures to ascertain the relative diagnostic efficiency of each measure. A total of 1653 adult inpatients at a specialist psychiatric hospital completed SCID-II interviews. Sample 1 (n=653) completed the SCID-II interviews, SCID-II Questionnaire (SCID-II-PQ), and the Big Five Inventory (BFI), while Sample 2 (n=1,000) completed the SCID-II interviews, Personality Inventory for DSM-5 (PID-5), and the BFI. The BFI evidenced moderate accuracy for two composites: High Neuroticism + Low Agreeableness (AUC=0.72, SE=0.01, p<0.001) and High Neuroticism + Low Agreeableness + Low Conscientiousness (AUC=0.73, SE=0.01, p<0.0001). The SCID-II-PQ evidenced moderate-to-excellent accuracy (AUC=0.86, SE=0.02, p<0.0001) with a good balance of specificity (SP=0.80) and sensitivity (SN=0.78). The PID-5 BPD algorithm (consisting of elevated emotional lability, anxiousness, separation insecurity, hostility, depressivity, impulsivity, and risk taking) evidenced moderate-to-excellent accuracy (AUC=0.87, SE=0.01, p<0.0001) with a good balance of specificity (SP=0.76) and sensitivity (SN=0.81). Findings generally support the use of the SCID-II-PQ and the PID-5 BPD algorithm for screening purposes. Furthermore, findings support the accuracy of the DSM-5 alternative model Criterion B trait constellation for diagnosing BPD. Limitations of the study include the single inpatient setting and the use of two discrete samples to assess the PID-5 and SCID-II-PQ.
Yu, Xiao; Ding, Enjie; Chen, Chunxu; Liu, Xiaoming; Li, Li
2015-01-01
Because roller element bearing (REB) failures cause unexpected machinery breakdowns, their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of the continuous waveform features. Considering this weakness, this article proposes a novel feature extraction method for frequency bands, named Window Marginal Spectrum Clustering (WMSC), to select salient features from the marginal spectrum of vibration signals obtained by the Hilbert–Huang Transform (HHT). In WMSC, a sliding window is used to divide an entire HHT marginal spectrum (HMS) into window spectrums, following which the Rand Index (RI) criterion of the clustering method is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REB fault diagnosis is constructed, termed by its elements HHT-WMSC-SVM (support vector machines). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REB defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). These test results evidence three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and ST-SVM, a method that combines statistical characteristics with SVM. Second, with Gauss white noise added to the original REB defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracy of the ST-SVM and HHT-SVM models is significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM can exceed 95% under a Pmin range of 500–800 and an m range of 50–300 for the REB defect dataset with added Gauss white noise at Signal Noise Ratio (SNR) = 5. Experimental results indicate that the proposed WMSC method yields high REB fault classification accuracy and good performance in Gauss white noise reduction. PMID:26540059
Exploring a Three-Level Model of Calibration Accuracy
ERIC Educational Resources Information Center
Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.
2014-01-01
We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…
Losco, Alessandra; Viganò, Chiara; Conte, Dario; Cesana, Bruno Mario; Basilisco, Guido
2009-05-01
Assessing perianal disease activity is important for the treatment and prognosis of Crohn's disease (CD) patients, but the diagnostic accuracy of the activity indices has not yet been established. The aim of this study was to determine the accuracy and agreement of the Fistula Drainage Assessment (FDA), Perianal Disease Activity Index (PDAI), and computer-assisted anal ultrasound imaging (AUS). Sixty-two consecutive patients with CD and perianal fistulae underwent clinical, FDA, PDAI, and AUS evaluation. Perianal disease was considered active in the presence of visible fistula drainage and/or signs of local inflammation (induration and pain at digital compression) upon clinical examination. The AUS images were analyzed by calculating the mean gray-scale tone of the lesion. The PDAI and gray-scale tone values discriminating active and inactive perianal disease were defined using receiver operating characteristics statistics. Perianal disease was active in 46 patients. The accuracy of the FDA was 87% (confidence interval [CI]: 76%-94%). A PDAI of >4 and a mean gray-scale tone value of 117 maximized sensitivity and specificity; their diagnostic accuracy was, respectively, 87% (CI: 76%-94%) and 81% (CI: 69%-90%). The agreement of the 3 evaluations was fair to moderate. The addition of AUS to the PDAI or FDA increased their diagnostic accuracy to respectively 95% and 98%. The diagnostic accuracy of the FDA, PDAI, and computer-assisted AUS imaging was good in assessing perianal disease activity in patients with CD. The agreement between the techniques was fair to moderate. Overall accuracy can be increased by combining the FDA or PDAI with AUS.
NASA Astrophysics Data System (ADS)
Lievens, Klaus; Van Nimmen, Katrien; Lombaert, Geert; De Roeck, Guido; Van den Broeck, Peter
2016-09-01
In civil engineering and architecture, the availability of high-strength materials and advanced calculation techniques enables the construction of slender footbridges, which are generally highly sensitive to human-induced excitation. Due to the inherently random character of the human-induced walking load, variability in the pedestrian characteristics must be considered in the response simulation. To assess the vibration serviceability of the footbridge, the statistics of the stochastic dynamic response are evaluated by considering the instantaneous peak responses in a time range. A large number of time windows are therefore needed to calculate the mean value and standard deviation of the instantaneous peak values. An alternative method to evaluate these statistics is based on the standard deviation of the response and a characteristic frequency, as proposed in wind engineering applications. In this paper, the accuracy of this method is evaluated for human-induced vibrations. The methods are first compared for a group of pedestrians crossing a lightly damped footbridge. Small differences in the instantaneous peak value were found for the method using second-order statistics. Afterwards, a TMD tuned to reduce the peak acceleration to a comfort value was added to the structure. The comparison between both methods is made and the accuracy verified. It is found that the TMD parameters are tuned sufficiently well, and good agreement between the two methods is found for the estimation of the instantaneous peak response of a strongly damped structure.
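The wind-engineering approach referred to above typically estimates the expected instantaneous peak as g times the standard deviation of the response, with g given by the Davenport peak-factor formula; a small sketch with assumed response values:

    import math

    def davenport_peak_factor(nu, T):
        """Expected peak factor for a Gaussian process with
        characteristic frequency nu (Hz) observed over T seconds."""
        k = math.sqrt(2.0 * math.log(nu * T))
        return k + 0.5772 / k

    sigma_a = 0.12       # std of acceleration response (m/s^2), assumed
    nu, T = 2.0, 600.0   # characteristic frequency and window length, assumed
    print("expected peak:", davenport_peak_factor(nu, T) * sigma_a)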
Comparison of Laser Scanning Diagnostic Devices for Early Glaucoma Detection.
Schulze, Andreas; Lamparter, Julia; Pfeiffer, Norbert; Berisha, Fatmire; Schmidtmann, Irene; Hoffmann, Esther M
2015-08-01
To compare the diagnostic accuracy and to evaluate the correlation of optic nerve head and retinal nerve fiber layer thickness values between Fourier-Domain optical coherence tomography (FD-OCT), confocal scanning laser ophthalmoscopy (CSLO), and scanning laser polarimetry (SLP) for early glaucoma detection. Ninety-three patients with early open-angle glaucoma, 58 patients with ocular hypertension, and 60 healthy control subjects were included in this observational, cross-sectional study. All study participants underwent FD-OCT (RTVue-100), CSLO (HRT3), and SLP (GDx VCC) imaging of the optic nerve head and the retinal nerve fiber layer. Area under the receiver operating characteristic curves (AUROC) and Bland-Altman analysis were performed. The parameters with the highest diagnostic accuracy were found for FD-OCT cup-to-disc ratio (AUROC=0.841), for SLP NFI (AUROC=0.835), and for CSLO cup-to-disc ratio (AUROC=0.789). Diagnostic accuracy of the best CSLO and SLP parameter was similar (P=0.259). There was a small statistically significant difference between the best CSLO and FD-OCT parameters for differentiating between glaucoma and healthy eyes (P=0.047). FD-OCT and SLP have a similarly good diagnostic ability to distinguish between early glaucoma and healthy subjects. The diagnostic accuracy of CSLO was comparable with SLP and marginally lower compared with FD-OCT.
Gong, Inna Y; Goodman, Shaun G; Brieger, David; Gale, Chris P; Chew, Derek P; Welsh, Robert C; Huynh, Thao; DeYoung, J Paul; Baer, Carolyn; Gyenes, Gabor T; Udell, Jacob A; Fox, Keith A A; Yan, Andrew T
2017-10-01
Although there are sex differences in the management and outcome of acute coronary syndromes (ACS), sex is not a component of the Global Registry of Acute Coronary Events (GRACE) risk score (RS) for in-hospital mortality prediction. We sought to determine the prognostic utility of the GRACE RS in men and women, and whether its predictive accuracy would be augmented through sex-based modification of its components. Canadian men and women enrolled in GRACE and the Canadian Registry of Acute Coronary Events were stratified as ST-segment elevation myocardial infarction (STEMI) or non-ST-segment elevation ACS (NSTE-ACS). The GRACE RS was calculated as per the original model. Discrimination and calibration were evaluated using the c-statistic and the Hosmer-Lemeshow goodness-of-fit test, respectively. Multivariable logistic regression was undertaken to assess potential interactions of sex with GRACE RS components. For the overall cohort (n=14,422), the unadjusted in-hospital mortality rate was higher in women than men (4.5% vs. 3.0%, p<0.001). Overall, the GRACE RS c-statistic and goodness-of-fit test p-value were 0.85 (95% CI 0.83-0.87) and 0.11, respectively. While the RS had excellent discrimination for all subgroups (c-statistics >0.80), discrimination was lower for women compared to men with STEMI [0.80 (0.75-0.84) vs. 0.86 (0.82-0.89), respectively, p<0.05]. The goodness-of-fit test showed good calibration for women (p=0.86), but suboptimal calibration for men (p=0.031). No significant interaction was evident between sex and RS components (all p>0.25). The GRACE RS is a valid predictor of in-hospital mortality for both men and women with ACS. The lack of interaction between sex and RS components suggests that sex-based modification is not required.
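The Hosmer-Lemeshow goodness-of-fit test used above can be computed by grouping observations into risk-sorted bins and comparing observed with expected event counts; a minimal sketch on synthetic, well-calibrated predictions (decile grouping assumed):

    import numpy as np
    from scipy import stats

    def hosmer_lemeshow(y, p, groups=10):
        """Hosmer-Lemeshow chi-square over risk-sorted groups;
        returns (statistic, p-value) with groups - 2 df."""
        order = np.argsort(p)
        y, p = np.asarray(y)[order], np.asarray(p)[order]
        chi2 = 0.0
        for grp_y, grp_p in zip(np.array_split(y, groups),
                                np.array_split(p, groups)):
            exp = grp_p.sum()                       # expected events
            n, obs = len(grp_y), grp_y.sum()        # observed events
            pbar = exp / n
            chi2 += (obs - exp) ** 2 / (n * pbar * (1.0 - pbar))
        return chi2, stats.chi2.sf(chi2, groups - 2)

    rng = np.random.default_rng(4)
    p = rng.uniform(0.01, 0.5, 2000)
    y = rng.binomial(1, p)              # outcomes consistent with p
    print(hosmer_lemeshow(y, p))        # large p-value: good calibration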
Merchán-Naranjo, Jessica; Mayoral, María; Rapado-Castro, Marta; Llorente, Cloe; Boada, Leticia; Arango, Celso; Parellada, Mara
2012-01-01
Asperger syndrome (AS) patients show heterogeneous intelligence profiles, and the validity of short forms for estimating intelligence has rarely been studied in this population. We analyzed the validity of Wechsler Intelligence Scale (WIS) short forms for estimating full-scale intelligence quotient (FSIQ) and assessing intelligence profiles in 29 AS patients. Only the Information and Block Design dyad meets the study criteria. No statistically significant differences were found between dyad scores and FSIQ scores (t(28) = 1.757; p = 0.09). The dyad has a high correlation with FSIQ, a good percentage of variance explained (R² = 0.591; p < 0.001), and high consistency with the FSIQ classification (χ²(36) = 45.202; p = 0.14). Short forms with good predictive accuracy may not be accurate in clinical groups with atypical cognitive profiles such as AS patients.
NASA Astrophysics Data System (ADS)
Estes, M. G., Jr.; Insaf, T.; Crosson, W. L.; Al-Hamdan, M. Z.
2017-12-01
Heat exposure metrics (maximum and minimum daily temperatures) have a close relationship with human health. While meteorological station data provide a good source of point measurements, temporally and spatially consistent temperature data are needed for health studies. Reanalysis data such as the North American Land Data Assimilation System's (NLDAS) 12-km gridded product are an effort to resolve spatio-temporal environmental data issues, but the resolution may be too coarse to accurately capture the effects of elevation, mixed land/water areas, and urbanization. As part of this NASA Applied Sciences Program-funded project, the NLDAS 12-km air temperature product has been downscaled to 1 km using MODIS Land Surface Temperature patterns. Only limited validation of the native 12-km NLDAS reanalysis data has been undertaken. Our objective is to evaluate the accuracy of both the 12-km and the 1-km downscaled products using US Historical Climatology Network station data geographically dispersed across New York State. Statistical methods including correlation, scatterplots, time series, and summary statistics were used to determine the accuracy of the remotely sensed maximum and minimum temperature products. The specific effects of elevation and slope on product accuracy were determined with 10-m digital elevation data, which were used to calculate percent slope and link with the temperature products at multiple scales. Preliminary results indicate the downscaled product improves accuracy over the native 12-km product, with average correlation improvements from 0.81 to 0.85 for minimum and 0.71 to 0.79 for maximum temperatures in 2009. However, the benefits vary temporally and geographically. Our results will inform health studies using remotely sensed temperature products to determine health risk from excessive heat by providing a more robust assessment of the accuracy of the 12-km NLDAS product and of the additional accuracy gained from the 1-km downscaled product. The results will also be shared with the National Weather Service to determine potential benefits to heat warning systems, and evaluated for inclusion in the Centers for Disease Control and Prevention (CDC) Environmental Public Health Tracking Network as a resource for the health community.
Reproducible segmentation of white matter hyperintensities using a new statistical definition.
Damangir, Soheil; Westman, Eric; Simmons, Andrew; Vrenken, Hugo; Wahlund, Lars-Olof; Spulber, Gabriela
2017-06-01
We present a method based on a proposed statistical definition of white matter hyperintensities (WMH), which can work with any combination of conventional magnetic resonance (MR) sequences without depending on manually delineated samples. T1-weighted, T2-weighted, FLAIR, and PD sequences acquired at 1.5 Tesla from 119 subjects from the Kings Health Partners-Dementia Case Register (healthy controls, mild cognitive impairment, Alzheimer's disease) were used. The segmentation was performed using a proposed definition for WMH based on the one-tailed Kolmogorov-Smirnov test. The presented method was verified, given all possible combinations of input sequences, against manual segmentations and a high similarity (Dice 0.85-0.91) was observed. Comparing segmentations with different input sequences to one another also yielded a high similarity (Dice 0.83-0.94) that exceeded intra-rater similarity (Dice 0.75-0.91). We compared the results with those of other available methods and showed that the segmentation based on the proposed definition has better accuracy and reproducibility in the test dataset used. Overall, the presented definition is shown to produce accurate results with higher reproducibility than manual delineation. This approach can be an alternative to other manual or automatic methods not only because of its accuracy, but also due to its good reproducibility.
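The Dice index used above to score agreement between segmentations is straightforward to compute from two binary masks; a minimal sketch:

    import numpy as np

    def dice(a, b):
        """Dice similarity between two binary masks."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    auto = np.zeros((8, 8), bool); auto[2:6, 2:6] = True
    manual = np.zeros((8, 8), bool); manual[3:7, 2:6] = True
    print(f"Dice = {dice(auto, manual):.2f}")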
Genetic algorithm for the optimization of features and neural networks in ECG signals classification
NASA Astrophysics Data System (ADS)
Li, Hongqiang; Yuan, Danyang; Ma, Xiangdong; Cui, Dianyin; Cao, Lu
2017-01-01
Feature extraction and classification of electrocardiogram (ECG) signals are necessary for the automatic diagnosis of cardiac diseases. In this study, a novel method based on genetic algorithm-back propagation neural network (GA-BPNN) for classifying ECG signals with feature extraction using wavelet packet decomposition (WPD) is proposed. WPD combined with the statistical method is utilized to extract the effective features of ECG signals. The statistical features of the wavelet packet coefficients are calculated as the feature sets. GA is employed to decrease the dimensions of the feature sets and to optimize the weights and biases of the back propagation neural network (BPNN). Thereafter, the optimized BPNN classifier is applied to classify six types of ECG signals. In addition, an experimental platform is constructed for ECG signal acquisition to supply the ECG data for verifying the effectiveness of the proposed method. The GA-BPNN method with the MIT-BIH arrhythmia database achieved a dimension reduction of nearly 50% and produced good classification results with an accuracy of 97.78%. The experimental results based on the established acquisition platform indicated that the GA-BPNN method achieved a high classification accuracy of 99.33% and could be efficiently applied in the automatic identification of cardiac arrhythmias.
New auto-segment method of cerebral hemorrhage
NASA Astrophysics Data System (ADS)
Wang, Weijiang; Shen, Tingzhi; Dang, Hua
2007-12-01
A novel method for the automatic segmentation of cerebral hemorrhage (CH) in computerized tomography (CT) images is presented in this paper. It uses an expert system that models human knowledge about the CH segmentation problem. The algorithm follows a series of special steps and extracts easily overlooked CH features identified from statistics gathered over a large number of real CH images, such as region area, region CT number, region smoothness, and statistical relationships between CH regions. A seven-step extraction mechanism ensures that these CH features are obtained correctly and efficiently. Using these features, a decision tree that models the human knowledge about the CH segmentation problem is built, ensuring the soundness and accuracy of the algorithm. Finally, experiments were conducted to verify the correctness and reasonableness of the automatic segmentation; the good accuracy and high speed make wide practical application possible.
NASA Astrophysics Data System (ADS)
Vathsala, H.; Koolagudi, Shashidhar G.
2017-01-01
In this paper we discuss a data mining application for predicting peninsular Indian summer monsoon rainfall and propose an algorithm that combines data mining and statistical techniques. We select likely predictors based on association rules that have the highest confidence levels. We then cluster the selected predictors to reduce their dimensions and use cluster membership values for classification. We derive the predictors from local conditions in southern India, including mean sea level pressure, wind speed, and maximum and minimum temperatures. The global condition variables include southern oscillation and Indian Ocean dipole conditions. The algorithm predicts rainfall in five categories: Flood, Excess, Normal, Deficit, and Drought. We use closed itemset mining, cluster membership calculations, and a multilayer perceptron function in the algorithm to predict monsoon rainfall in peninsular India. Using Indian Institute of Tropical Meteorology data, we found the prediction accuracy of our proposed approach to be exceptionally good.
An online BCI game based on the decoding of users' attention to color stimulus.
Yang, Lingling; Leung, Howard
2013-01-01
Studies have shown that, statistically, there are differences in theta, alpha, and beta band powers when people look at blue and red colors. In this paper, a game has been developed to test whether these statistical differences are good enough for online Brain Computer Interface (BCI) applications. We implemented a two-choice BCI game in which the subject makes a choice by looking at a color option and our system decodes the subject's intention by analyzing the EEG signal. In our system, band power features of the EEG data were used to train a support vector machine (SVM) classification model. An online mechanism was adopted to update the classification model during the training stage to account for individual differences. Our results showed that an accuracy of 70%-80% could be achieved, providing evidence for the possibility of applying color stimuli to BCI applications.
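A minimal sketch of the band-power-plus-SVM pipeline described above (scipy and scikit-learn assumed; synthetic oscillations stand in for EEG recorded during blue and red stimuli, and the band edges are conventional choices):

    import numpy as np
    from scipy.signal import welch
    from sklearn.svm import SVC

    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(eeg, fs=256):
        """Welch PSD summed over each band -> feature vector."""
        f, pxx = welch(eeg, fs=fs, nperseg=fs)
        df = f[1] - f[0]
        return [pxx[(f >= lo) & (f < hi)].sum() * df
                for lo, hi in BANDS.values()]

    rng = np.random.default_rng(5)
    t = np.arange(512) / 256.0

    def trial(freq):                  # noisy oscillation at a given freq
        return np.sin(2 * np.pi * freq * t) + rng.normal(0, 1, t.size)

    X = np.array([band_powers(trial(10)) for _ in range(30)] +   # stimulus A
                 [band_powers(trial(20)) for _ in range(30)])    # stimulus B
    y = np.array([0] * 30 + [1] * 30)
    clf = SVC().fit(X[::2], y[::2])
    print("accuracy:", clf.score(X[1::2], y[1::2]))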
NASA Astrophysics Data System (ADS)
Abdellatef, Hisham E.
2007-04-01
Picric acid, bromocresol green, bromothymol blue, cobalt thiocyanate, and molybdenum(V) thiocyanate have been tested as spectrophotometric reagents for the determination of disopyramide and irbesartan. Reaction conditions have been optimized to obtain colored complexes of higher sensitivity and longer stability. The absorbances of the ion-pair complexes formed were found to increase linearly with increasing concentrations of disopyramide and irbesartan, as corroborated by the correlation coefficient values. The developed methods have been successfully applied to the determination of disopyramide and irbesartan in bulk drugs and pharmaceutical formulations. The common excipients and additives did not interfere in their determination. The results obtained by the proposed methods have been statistically compared by means of Student's t-test and the variance-ratio F-test. The validity was assessed by applying the standard addition technique. The results were compared statistically with those of the official or reference methods, showing good agreement with high precision and accuracy.
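The statistical comparison described above, Student's t-test on the means and the variance-ratio F-test, can be reproduced as follows (the percent-recovery values are invented for illustration; scipy assumed):

    import numpy as np
    from scipy import stats

    proposed = np.array([99.2, 100.4, 98.7, 101.1, 99.8, 100.2])   # % recovery
    reference = np.array([99.0, 100.9, 99.5, 100.6, 98.9, 100.1])  # % recovery

    t_stat, t_p = stats.ttest_ind(proposed, reference)
    f_stat = proposed.var(ddof=1) / reference.var(ddof=1)
    f_p = 2 * min(stats.f.sf(f_stat, 5, 5), stats.f.cdf(f_stat, 5, 5))
    print(f"t = {t_stat:.3f} (p = {t_p:.2f}); F = {f_stat:.3f} (p = {f_p:.2f})")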
NASA Astrophysics Data System (ADS)
Roccia, S.; Gaulard, C.; Étilé, A.; Chakma, R.
2017-07-01
In the context of nuclear orientation, we propose a new method to correct the multipole mixing ratios for asymmetries both in the geometry of the setup and in the detection system. This method is also robust against temperature fluctuations, beam intensity fluctuations, and uncertainties in the nuclear structure of the nuclei. Additionally, it provides a natural way to combine data from different detectors and make good use of all available statistics. We use this method to demonstrate the accuracy that can be reached with the PolarEx setup now installed at the ALTO facility.
Shi, Weiwei; Bugrim, Andrej; Nikolsky, Yuri; Nikolskya, Tatiana; Brennan, Richard J
2008-01-01
The ideal toxicity biomarker combines the properties of prediction (it is detected prior to traditional pathological signs of injury), accuracy (high sensitivity and specificity), and a mechanistic relationship to the endpoint measured (biological relevance). Gene expression-based toxicity biomarkers ("signatures") have shown good predictive power and accuracy, but are difficult to interpret biologically. We have compared different statistical methods of feature selection with knowledge-based approaches, using GeneGo's database of canonical pathway maps, to generate gene sets for the classification of renal tubule toxicity. The gene set selection algorithms include four univariate analyses: t-statistics, fold-change, B-statistics, and RankProd, and their combination and overlap for the identification of differentially expressed probes. Enrichment analysis following the results of the four univariate analyses, the Hotelling T-square test, and, finally, out-of-bag selection, a variant of cross-validation, were used to identify canonical pathway maps (sets of genes coordinately involved in key biological processes) with classification power. Differentially expressed genes identified by the different statistical univariate analyses all generated reasonably performing classifiers of tubule toxicity. Maps identified by enrichment analysis or Hotelling T-square had lower classification power, but highlighted perturbed lipid homeostasis as a common discriminator of nephrotoxic treatments. The out-of-bag method yielded the best functionally integrated classifier. The map "ephrins signaling" performed comparably to a classifier derived using sparse linear programming, a machine learning algorithm, and represents a signaling network specifically involved in renal tubule development and integrity. Such functional descriptors of toxicity promise to better integrate predictive toxicogenomics with mechanistic analysis, facilitating the interpretation and risk assessment of predictive genomic investigations.
Person-Fit Statistics for Joint Models for Accuracy and Speed
ERIC Educational Resources Information Center
Fox, Jean-Paul; Marianti, Sukaesi
2017-01-01
Response accuracy and response time data can be analyzed with a joint model to measure ability and speed of working, while accounting for relationships between item and person characteristics. In this study, person-fit statistics are proposed for joint models to detect aberrant response accuracy and/or response time patterns. The person-fit tests…
Griffiths, Alex; Beaussier, Anne-Laure; Demeritt, David; Rothstein, Henry
2017-02-01
The Care Quality Commission (CQC) is responsible for ensuring the quality of the health and social care delivered by more than 30 000 registered providers in England. With only limited resources for conducting on-site inspections, the CQC has used statistical surveillance tools to help it identify which providers it should prioritise for inspection. In the face of planned funding cuts, the CQC plans to put more reliance on statistical surveillance tools to assess risks to quality and prioritise inspections accordingly. To evaluate the ability of the CQC's latest surveillance tool, Intelligent Monitoring (IM), to predict the quality of care provided by National Health Service (NHS) hospital trusts so that those at greatest risk of providing poor-quality care can be identified and targeted for inspection. The predictive ability of the IM tool is evaluated through regression analyses and χ² testing of the relationship between the quantitative risk score generated by the IM tool and the subsequent quality rating awarded following detailed on-site inspection by large expert teams of inspectors. First, the continuous risk scores generated by the CQC's IM statistical surveillance tool cannot predict inspection-based quality ratings of NHS hospital trusts (OR 0.38 (0.14 to 1.05) for Outstanding/Good, OR 0.94 (0.80 to 1.10) for Good/Requires improvement, and OR 0.90 (0.76 to 1.07) for Requires improvement/Inadequate). Second, the risk scores cannot be used more simply to distinguish the trusts performing poorly (those subsequently rated either 'Requires improvement' or 'Inadequate') from the trusts performing well (those subsequently rated either 'Good' or 'Outstanding') (OR 1.07 (0.91 to 1.26)). Classifying CQC's risk bandings 1-3 as high risk and 4-6 as low risk, 11 of the high risk trusts were performing well and 43 of the low risk trusts were performing poorly, resulting in an overall accuracy rate of 47.6%. Third, the risk scores cannot be used even more simply to distinguish the worst performing trusts (those subsequently rated 'Inadequate') from the remaining, better performing trusts (OR 1.11 (0.94 to 1.32)). Classifying CQC's risk banding 1 as high risk and 2-6 as low risk, the highest overall accuracy rate of 72.8% was achieved, but still only 6 of the 13 Inadequate trusts were correctly classified as being high risk. Since the IM statistical surveillance tool cannot predict the outcome of NHS hospital trust inspections, it cannot be used for prioritisation. A new approach to inspection planning is therefore required. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Determining wave direction using curvature parameters.
de Queiroz, Eduardo Vitarelli; de Carvalho, João Luiz Baptista
2016-01-01
The curvature of the sea wave was tested as a parameter for estimating wave direction, in the search for better estimates in shallow waters, where waves of different sizes, frequencies and directions intersect and the wave field is difficult to characterize. We used numerical simulations of the sea surface to determine wave direction calculated from the curvature of the waves. Using 1000 numerical simulations, the statistical variability of the wave direction was determined. The results showed good performance by the curvature parameter for estimating wave direction. Accuracy in the estimates was improved by including wave slope parameters in addition to curvature. The results indicate that curvature is a promising technique for estimating wave direction.
• In this study, the accuracy and precision of curvature parameters for measuring wave direction are analyzed using a model simulation that generates 1000 wave records with directional resolution.
• The model allows the simultaneous simulation of time-series wave properties such as sea surface elevation, slope and curvature, which were used to analyze the variability of the estimated directions.
• The simultaneous acquisition of slope and curvature parameters can contribute to estimating wave direction, thus increasing the accuracy and precision of the results.
Simms, Alexander D; Reynolds, Stephanie; Pieper, Karen; Baxter, Paul D; Cattle, Brian A; Batin, Phillip D; Wilson, John I; Deanfield, John E; West, Robert M; Fox, Keith A A; Hall, Alistair S; Gale, Christopher P
2013-01-01
To evaluate the performance of the National Institute for Health and Clinical Excellence (NICE) mini-Global Registry of Acute Coronary Events (GRACE) (MG) and adjusted mini-GRACE (AMG) risk scores. Retrospective observational study. 215 acute hospitals in England and Wales. 137 084 patients discharged from hospital with a diagnosis of acute myocardial infarction (AMI) between 2003 and 2009, as recorded in the Myocardial Ischaemia National Audit Project (MINAP). Model performance indices of calibration accuracy, discriminative and explanatory performance, including net reclassification index (NRI) and integrated discrimination improvement. Of 495 263 index patients hospitalised with AMI, there were 53 196 ST elevation myocardial infarction and 83 888 non-ST elevation myocardial infarction (NSTEMI) (27.7%) cases with complete data for all AMG variables. For AMI, AMG calibration was better than MG calibration (Hosmer-Lemeshow goodness of fit test: p=0.33 vs p<0.05). MG and AMG predictive accuracy and discriminative ability were good (Brier score: 0.10 vs 0.09; C statistic: 0.82 and 0.84, respectively). The NRI of AMG over MG was 8.1% (p<0.05). Model performance was reduced in patients with NSTEMI, chronic heart failure, chronic renal failure and in patients aged ≥85 years. The AMG and MG risk scores, utilised by NICE, demonstrated good performance across a range of indices using MINAP data, but performed less well in higher risk subgroups. Although indices were better for AMG, its application may be constrained by missing predictors.
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast-limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy, and has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, quotient-based filtering, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation.
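The two best-performing steps described here map onto standard image-processing calls. Below is a minimal sketch, assuming OpenCV; the input path, median kernel size, and CLAHE parameters are illustrative guesses, not the study's settings.

    import cv2

    # Hypothetical input; fundus processing typically uses the green
    # channel, which carries the highest vessel contrast.
    img = cv2.imread("fundus.png")
    green = img[:, :, 1]

    # Illumination correction by the dividing method: estimate the slowly
    # varying background with a large median filter, then divide it out.
    background = cv2.medianBlur(green, 51)          # kernel size is a guess
    corrected = cv2.divide(green, background, scale=128)

    # Local contrast enhancement with CLAHE.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(corrected)

The coefficient of variation used in the statistical evaluation is then simply the ratio of the standard deviation to the mean of the corrected component over the region of interest.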
How to select electrical end-use meters for proper measurement of DSM impact estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, M.
1994-12-31
Does metering actually provide higher accuracy impact estimates? The answer is sometimes yes, sometimes no. It depends on how the metered data will be used. DSM impact estimates can be achieved in a variety of ways, including engineering algorithms, modeling and statistical methods. Yet for all of these methods, impacts can be calculated as the difference in pre- and post-installation annual load shapes. Increasingly, end-use metering is being used either to adjust and calibrate a particular estimation method, or to measure load shapes directly. It is therefore not surprising that metering has become synonymous with higher accuracy impact estimates. If metered data is used as a component in an estimating methodology, its relative contribution to accuracy can be analyzed through propagation of error ("POE") analysis. POE analysis is a framework which can be used to evaluate different metering options and their relative effects on cost and accuracy. If metered data is used to directly measure pre- and post-installation load shapes to calculate energy and demand impacts, then the accuracy of the whole metering process directly affects the accuracy of the impact estimate. This paper is devoted to the latter case, where the decision has been made to collect high-accuracy metered data of electrical energy and demand. The underlying assumption is that all meters can yield good results if applied within the scope of their limitations. The objective is to know the application, understand what meters are actually doing to measure and record power, and decide with confidence when a sophisticated meter is required, and when a less expensive type will suffice.
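As a concrete illustration of the POE framework mentioned above: for a product of independent measurements (such as energy from voltage, current, and time), first-order propagation of error combines the relative errors in quadrature. A minimal sketch with made-up accuracy figures, not values from the paper:

    import math

    def combined_relative_error(*rel_errors):
        """First-order propagation of error for a product of
        independent measurements: relative errors add in quadrature."""
        return math.sqrt(sum(e ** 2 for e in rel_errors))

    # Hypothetical meter: 1% voltage, 1% current, 0.5% timing accuracy.
    print(combined_relative_error(0.01, 0.01, 0.005))  # ~0.015, i.e. 1.5%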
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdős-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.
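The LP relaxation studied here has a compact concrete form: minimize the number of covered vertices subject to x_u + x_v ≥ 1 on every edge, with the integrality constraint x_i ∈ {0, 1} relaxed to 0 ≤ x_i ≤ 1. A small sketch on a toy graph, assuming SciPy; the graph itself is illustrative only:

    import numpy as np
    from scipy.optimize import linprog

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # toy graph, n = 4
    n = 4

    # minimize sum_i x_i  subject to  x_u + x_v >= 1 for every edge,
    # written as -x_u - x_v <= -1 for linprog's A_ub x <= b_ub form.
    A_ub = np.zeros((len(edges), n))
    for row, (u, v) in enumerate(edges):
        A_ub[row, u] = A_ub[row, v] = -1.0
    res = linprog(np.ones(n), A_ub=A_ub, b_ub=-np.ones(len(edges)),
                  bounds=[(0, 1)] * n)
    print(res.x, res.fun)   # VC LP optima are half-integral: entries in {0, 1/2, 1}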
An accurate test for homogeneity of odds ratios based on Cochran's Q-statistic.
Kulinskaya, Elena; Dollinger, Michael B
2015-06-10
A frequently used statistic for testing homogeneity in a meta-analysis of K independent studies is Cochran's Q. For a standard test of homogeneity, the Q statistic is referred to a chi-square distribution with K-1 degrees of freedom. For the situation in which the effects of the studies are logarithms of odds ratios, the chi-square distribution is much too conservative for moderate size studies, although it may be asymptotically correct as the individual studies become large. Using a mixture of theoretical results and simulations, we provide formulas to estimate the shape and scale parameters of a gamma distribution to fit the distribution of Q. Simulation studies show that the gamma distribution is a good approximation to the distribution of Q. Use of the gamma distribution instead of the chi-square distribution for Q should eliminate inaccurate inferences in assessing homogeneity in a meta-analysis. (A computer program for implementing this test is provided.) This hypothesis test is competitive with the Breslow-Day test both in accuracy of level and in power.
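For reference, Q is the inverse-variance-weighted sum of squared deviations of the study effects from their pooled estimate. A minimal sketch with invented study data, showing the standard chi-square reference that the paper's gamma approximation is meant to replace:

    import numpy as np
    from scipy.stats import chi2

    def cochran_q(effects, variances):
        """Cochran's Q for K study-level effects (e.g., log odds ratios)
        with estimated variances, using inverse-variance weights."""
        theta = np.asarray(effects, float)
        w = 1.0 / np.asarray(variances, float)
        pooled = (w * theta).sum() / w.sum()
        return (w * (theta - pooled) ** 2).sum()

    # Hypothetical K = 4 studies.
    q = cochran_q([0.2, 0.5, -0.1, 0.3], [0.04, 0.09, 0.05, 0.12])
    p_standard = chi2.sf(q, df=3)   # the (too conservative) chi-square test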
NASA Astrophysics Data System (ADS)
Zainudin, M. N. Shah; Sulaiman, Md Nasir; Mustapha, Norwati; Perumal, Thinagaran
2017-10-01
Pervasive computing has recently garnered a lot of attention due to high demand in various application domains. Human activity recognition (HAR) is among its most widely explored applications, providing valuable information about human behaviour. Accelerometer-based approaches are commonly used in HAR research because the sensors are small and already built into many types of smartphones. However, high inter-class similarity tends to degrade recognition performance. Hence, this work presents a method for activity recognition using proposed features that combine spectral analysis with statistical descriptors, able to tackle the issue of differentiating stationary and locomotion activities. The signal is denoised with Fourier-transform filtering before two groups of features are extracted: spectral frequency features and statistical descriptors. The extracted features are then classified using random forest ensemble classifier models. The recognition results show good accuracy for stationary and locomotion activities on the USC-HAD dataset.
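A sketch of the pipeline described, combining simple statistical descriptors with spectral features and a random forest; the specific descriptors, sampling rate, and parameters are assumptions, not the authors' exact set:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(signal, fs=50.0):
        """Feature vector for one accelerometer window: statistical
        descriptors plus simple spectral features (illustrative choices)."""
        signal = np.asarray(signal, float)
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        return np.array([
            signal.mean(), signal.std(),
            np.percentile(signal, 25), np.percentile(signal, 75),
            spectrum[1:].max(),                   # dominant spectral peak
            freqs[1:][spectrum[1:].argmax()],     # frequency of that peak
            (spectrum ** 2).sum(),                # total spectral energy
        ])

    # windows: iterable of 1-D accelerometer segments; labels: activities.
    # X = np.array([window_features(w) for w in windows])
    # clf = RandomForestClassifier(n_estimators=100).fit(X, labels)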
Filter Tuning Using the Chi-Squared Statistic
NASA Technical Reports Server (NTRS)
Lilly-Salkowski, Tyler B.
2017-01-01
This paper examines the use of the chi-squared statistic as a means of evaluating filter performance. The goal of the process is to characterize the filter performance in the metric of covariance realism. The chi-squared statistic is calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight into the accuracy of the covariance. The process of tuning an Extended Kalman Filter (EKF) for Aqua and Aura support is described, including examination of the measurement errors of available observation types and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the chi-squared statistic, calculated from EKF solutions, are assessed.
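One common form of this statistic in filter tuning is the normalized innovation (or state-error) squared, which for a consistent filter follows a chi-square distribution with as many degrees of freedom as the vector's dimension. A minimal sketch with invented numbers; the paper's exact formulation may differ:

    import numpy as np

    def nis(residual, S):
        """Normalized innovation squared: chi2 = r^T S^{-1} r for a
        residual r and covariance S. Under a consistent filter this is
        chi-square distributed with dim(r) degrees of freedom."""
        r = np.atleast_1d(np.asarray(residual, float))
        return float(r @ np.linalg.solve(S, r))

    # Hypothetical 3-D position residual and covariance.
    print(nis([0.8, -0.3, 0.5], np.diag([0.5, 0.5, 1.0])))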
Variational approach to probabilistic finite elements
NASA Technical Reports Server (NTRS)
Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.
1991-01-01
Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
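The second-moment idea underlying PFEM can be illustrated outside the finite element setting by first-order propagation of input means and covariances through a response function. A generic sketch with illustrative numbers, not the authors' formulation:

    import numpy as np

    def fosm_moments(f, x_mean, x_cov, rel_h=1e-6):
        """First-order second-moment (FOSM) estimate of the mean and
        variance of a scalar response f(x) from the mean vector and
        covariance matrix of the random inputs."""
        x_mean = np.asarray(x_mean, dtype=float)
        grad = np.empty_like(x_mean)
        for i in range(len(x_mean)):
            h = rel_h * max(1.0, abs(x_mean[i]))   # relative step size
            e = np.zeros_like(x_mean)
            e[i] = h
            grad[i] = (f(x_mean + e) - f(x_mean - e)) / (2 * h)
        return f(x_mean), grad @ np.asarray(x_cov) @ grad

    # Cantilever tip deflection delta = P L^3 / (3 E I) with random load P
    # and modulus E (illustrative values and variances).
    L, I = 2.0, 8e-6
    f = lambda x: x[0] * L**3 / (3.0 * x[1] * I)
    mean, var = fosm_moments(f, [1e4, 2.1e11], np.diag([1e6, 1e20]))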
Variational approach to probabilistic finite elements
NASA Astrophysics Data System (ADS)
Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.
1991-08-01
Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
Variational approach to probabilistic finite elements
NASA Technical Reports Server (NTRS)
Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.
1987-01-01
Probabilistic finite element method (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties, and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
40 CFR 92.127 - Emission measurement accuracy.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...
40 CFR 92.127 - Emission measurement accuracy.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...
40 CFR 92.127 - Emission measurement accuracy.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...
40 CFR 92.127 - Emission measurement accuracy.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...
40 CFR 92.127 - Emission measurement accuracy.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...
Assessing colour-dependent occupation statistics inferred from galaxy group catalogues
NASA Astrophysics Data System (ADS)
Campbell, Duncan; van den Bosch, Frank C.; Hearin, Andrew; Padmanabhan, Nikhil; Berlind, Andreas; Mo, H. J.; Tinker, Jeremy; Yang, Xiaohu
2015-09-01
We investigate the ability of current implementations of galaxy group finders to recover colour-dependent halo occupation statistics. To test the fidelity of group catalogue inferred statistics, we run three different group finders used in the literature over a mock that includes galaxy colours in a realistic manner. Overall, the resulting mock group catalogues are remarkably similar, and most colour-dependent statistics are recovered with reasonable accuracy. However, it is also clear that certain systematic errors arise as a consequence of correlated errors in group membership determination, central/satellite designation, and halo mass assignment. We introduce a new statistic, the halo transition probability (HTP), which captures the combined impact of all these errors. As a rule of thumb, errors tend to equalize the properties of distinct galaxy populations (i.e. red versus blue galaxies or centrals versus satellites), and to result in inferred occupation statistics that are more accurate for red galaxies than for blue galaxies. A statistic that is particularly poorly recovered from the group catalogues is the red fraction of central galaxies as a function of halo mass. Group finders do a good job in recovering galactic conformity, but also have a tendency to introduce weak conformity when none is present. We conclude that proper inference of colour-dependent statistics from group catalogues is best achieved using forward modelling (i.e. running group finders over mock data) or by implementing a correction scheme based on the HTP, as long as the latter is not too strongly model dependent.
40 CFR 89.310 - Analyzer accuracy and specifications.
Code of Federal Regulations, 2010 CFR
2010-07-01
... to § 89.323. (c) Emission measurement accuracy—Bag sampling. (1) Good engineering practice dictates... generally not be used. (2) Some high resolution read-out systems, such as computers, data loggers, and so..., using good engineering judgement, below 15 percent of full scale are made to ensure the accuracy of the...
40 CFR 89.310 - Analyzer accuracy and specifications.
Code of Federal Regulations, 2011 CFR
2011-07-01
... to § 89.323. (c) Emission measurement accuracy—Bag sampling. (1) Good engineering practice dictates... generally not be used. (2) Some high resolution read-out systems, such as computers, data loggers, and so..., using good engineering judgement, below 15 percent of full scale are made to ensure the accuracy of the...
40 CFR 89.310 - Analyzer accuracy and specifications.
Code of Federal Regulations, 2013 CFR
2013-07-01
... to § 89.323. (c) Emission measurement accuracy—Bag sampling. (1) Good engineering practice dictates... generally not be used. (2) Some high resolution read-out systems, such as computers, data loggers, and so..., using good engineering judgement, below 15 percent of full scale are made to ensure the accuracy of the...
40 CFR 89.310 - Analyzer accuracy and specifications.
Code of Federal Regulations, 2012 CFR
2012-07-01
... to § 89.323. (c) Emission measurement accuracy—Bag sampling. (1) Good engineering practice dictates... generally not be used. (2) Some high resolution read-out systems, such as computers, data loggers, and so..., using good engineering judgement, below 15 percent of full scale are made to ensure the accuracy of the...
40 CFR 89.310 - Analyzer accuracy and specifications.
Code of Federal Regulations, 2014 CFR
2014-07-01
... to § 89.323. (c) Emission measurement accuracy—Bag sampling. (1) Good engineering practice dictates... generally not be used. (2) Some high resolution read-out systems, such as computers, data loggers, and so..., using good engineering judgement, below 15 percent of full scale are made to ensure the accuracy of the...
Austin, Peter C; Reeves, Mathew J
2013-03-01
Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Monte Carlo simulations were used to examine this issue. We examined the influence of 3 factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card.
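A toy version of one cell of such a simulation, assuming scikit-learn: the coefficient on a single severity covariate controls how discriminating the risk-adjustment model is, and the c-statistic is the area under the ROC curve. Parameter values are illustrative, not those of the published study:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 20000
    x = rng.normal(size=n)                        # patient illness severity
    p = 1.0 / (1.0 + np.exp(-(-2.0 + 1.0 * x)))   # true mortality risk
    y = rng.binomial(1, p)

    model = LogisticRegression().fit(x.reshape(-1, 1), y)
    prob = model.predict_proba(x.reshape(-1, 1))[:, 1]
    print(f"c-statistic: {roc_auc_score(y, prob):.3f}")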
Austin, Peter C.; Reeves, Mathew J.
2015-01-01
Background Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk-adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. Objectives To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Research Design Monte Carlo simulations were used to examine this issue. We examined the influence of three factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk-adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. Results The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. Conclusions The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card. PMID:23295579
Lee, Ming-Fen; Liou, Tsan-Hon; Wang, Weu; Pan, Wen-Harn; Lee, Wei-Jei; Hsu, Chung-Tan; Wu, Suh-Fen; Chen, Hsin-Hung
2013-01-01
Hyperuricemia is closely associated with obesity and metabolic abnormalities, and is also an independent risk factor for cardiovascular diseases. The PPARγ gene, which is linked to obesity and metabolic abnormalities in Han Chinese, might be considered a top candidate gene involved in hyperuricemia. This study recruited 457 participants, aged 20-40 years old, to investigate the associations of the PPARγ gene and metabolic parameters with hyperuricemia. Three tag single-nucleotide polymorphisms, rs2292101, rs4684846, and rs1822825, of the PPARγ gene were selected to explore their association with hyperuricemia. Risk genotypes at rs1822825 of the PPARγ gene were significantly associated with hyperuricemia (odds ratio: 1.9; 95% confidence interval: 1.05-3.57). Although gender, body mass index (BMI), serum total cholesterol concentration, and protein intake per day were each statistically associated with hyperuricemia, it was the combination of BMI, gender, and rs1822825, rather than age, serum lipid profile, blood pressure, or protein intake per day, that provided good predictability for hyperuricemia (sensitivity: 69.3%; specificity: 83.7%) in Taiwan-born obese Han Chinese. BMI, gender, and the rs1822825 polymorphism in the PPARγ gene appear to be good biomarkers of hyperuricemia; these powerful indicators may therefore be included in the prediction of hyperuricemia to increase the accuracy of the analysis.
Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi
2015-01-01
Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires the simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and could be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107
Rattanaumpawan, Pinyo; Wongkamhla, Thanyarak; Thamlikitkul, Visanu
2016-04-01
To determine the accuracy of the International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) coding system in identifying comorbidities and infectious conditions, using data from a Thai university hospital administrative database. A retrospective cross-sectional study was conducted among patients hospitalized in six general medicine wards at Siriraj Hospital. ICD-10 code data were identified and retrieved directly from the hospital administrative database. Patient comorbidities were captured using the ICD-10 coding algorithm for the Charlson comorbidity index. Infectious conditions were captured using groups of ICD-10 diagnostic codes that were carefully prepared by two independent infectious disease specialists. The accuracy of ICD-10 codes combined with microbiological data for the diagnosis of urinary tract infection (UTI) and bloodstream infection (BSI) was evaluated. Clinical data gathered from chart review were considered the gold standard in this study. Between February 1 and May 31, 2013, a chart review of 546 hospitalization records was conducted. The mean age of hospitalized patients was 62.8 ± 17.8 years and 65.9% of patients were female. Median length of stay [range] was 10.0 [1.0-353.0] days and hospital mortality was 21.8%. Conditions with ICD-10 codes that had good sensitivity (90% or higher) were diabetes mellitus and HIV infection. Conditions with ICD-10 codes that had good specificity (90% or higher) were cerebrovascular disease, chronic lung disease, diabetes mellitus, cancer, HIV infection, and all infectious conditions. By combining ICD-10 codes with microbiological results, sensitivity increased from 49.5% to 66% for UTI and from 78.3% to 92.8% for BSI. The ICD-10 coding algorithm is reliable only in some selected conditions, including underlying diabetes mellitus and HIV infection. Combining microbiological results with ICD-10 codes increased the sensitivity of ICD-10 codes for identifying BSI. Future research is needed to improve the accuracy of the hospital administrative coding system in Thailand.
Poster - 56: Preliminary comparison of FF- and FFF-VMAT for prostate plans with higher rectal dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Baochang; Darko, Johnson; Osei, Ernest
2016-08-15
Purpose: A recent retrospective study found that 53 patients previously treated to 78 Gy in 39 fractions using flattening-filtered (FF) 6X-VMAT at GRRCC had rectal DVHs more than one standard deviation higher than the average. This study investigated whether using 6FFF or 10FFF beams could reduce these DVHs without compromising target coverage. Methods: Twenty patients' plans were re-planned with 2-arc 6X-VMAT, 6FFF-VMAT and 10FFF-VMAT using the Eclipse TPS following departmental protocol. All plans had the same optimization and normalization, and were evaluated against the acceptance criteria from QUANTEC and Emami. Statistical differences in the mean dose to OARs (Dm) and the PTV homogeneity index (HI) between energies were tested using the paired-sample Wilcoxon signed rank method (p<0.05). Beam delivery accuracy was checked on five patients using portal dosimetry (PD). Results: The PTV HI for 10FFF shows no statistical difference from 6X. All the OARs, except the left femoral head with 6FFF, have significantly lower Dm using 6FFF and 10FFF. There is no difference in the maximum doses to the rectum and bladder, which are limited by the prescribed doses. Measurements show good agreement in the gamma evaluation (3%/3mm) for all energies. Conclusion: This preliminary study shows that doses to the OARs are reduced using 10FFF for the same target coverage. The plans using 6FFF result in lower doses to some OARs, and a statistically different PTV HI. All plans showed very good agreement with measurements.
Zewdie, Getie A.; Cox, Dennis D.; Neely Atkinson, E.; Cantor, Scott B.; MacAulay, Calum; Davies, Kalatu; Adewole, Isaac; Buys, Timon P. H.; Follen, Michele
2012-01-01
Optical spectroscopy has been proposed as an accurate and low-cost alternative for detection of cervical intraepithelial neoplasia. We previously published an algorithm using optical spectroscopy as an adjunct to colposcopy and found good accuracy (sensitivity=1.00 [95% confidence interval (CI)=0.92 to 1.00], specificity=0.71 [95% CI=0.62 to 0.79]). Those results used measurements taken by expert colposcopists as well as the colposcopy diagnosis. In this study, we trained and tested an algorithm for the detection of cervical intraepithelial neoplasia (i.e., identifying those patients who had histology reading CIN 2 or worse) that did not include the colposcopic diagnosis. Furthermore, we explored the interaction between spectroscopy and colposcopy, examining the importance of probe placement expertise. The colposcopic diagnosis-independent spectroscopy algorithm had a sensitivity of 0.98 (95% CI=0.89 to 1.00) and a specificity of 0.62 (95% CI=0.52 to 0.71). The difference in the partial area under the ROC curves between spectroscopy with and without the colposcopic diagnosis was statistically significant at the patient level (p=0.05) but not the site level (p=0.13). The results suggest that the device has high accuracy over a wide range of provider accuracy and hence could plausibly be implemented by providers with limited training. PMID:22559693
Thermodynamics and proton activities of protic ionic liquids with quantum cluster equilibrium theory
NASA Astrophysics Data System (ADS)
Ingenmey, Johannes; von Domaros, Michael; Perlt, Eva; Verevkin, Sergey P.; Kirchner, Barbara
2018-05-01
We applied the binary Quantum Cluster Equilibrium (bQCE) method to a number of alkylammonium-based protic ionic liquids in order to predict boiling points, vaporization enthalpies, and proton activities. The theory combines statistical thermodynamics of van-der-Waals-type clusters with ab initio quantum chemistry and yields the partition functions (and associated thermodynamic potentials) of binary mixtures over a wide range of thermodynamic phase points. Unlike conventional cluster approaches that are limited to the prediction of thermodynamic properties, dissociation reactions can be effortlessly included into the bQCE formalism, giving access to ionicities, as well. The method is open to quantum chemical methods at any level of theory, but combination with low-cost composite density functional theory methods and the proposed systematic approach to generate cluster sets provides a computationally inexpensive and mostly parameter-free way to predict such properties at good-to-excellent accuracy. Boiling points can be predicted within an accuracy of 50 K, reaching excellent accuracy for ethylammonium nitrate. Vaporization enthalpies are predicted within an accuracy of 20 kJ mol⁻¹ and can be systematically interpreted on a molecular level. We present the first theoretical approach to predict proton activities in protic ionic liquids, with results fitting well into the experimentally observed correlation. Furthermore, enthalpies of vaporization were measured experimentally for some alkylammonium nitrates and an excellent linear correlation with vaporization enthalpies of their respective parent amines is observed.
Perceptual statistical learning over one week in child speech production.
Richtsmeier, Peter T; Goffman, Lisa
2017-07-01
What cognitive mechanisms account for the trajectory of speech sound development, in particular, gradually increasing accuracy during childhood? An intriguing potential contributor is statistical learning, a type of learning that has been studied frequently in infant perception but less often in child speech production. To assess the relevance of statistical learning to developing speech accuracy, we carried out a statistical learning experiment with four- and five-year-olds in which statistical learning was examined over one week. Children were familiarized with and tested on word-medial consonant sequences in novel words. There was only modest evidence for statistical learning, primarily in the first few productions of the first session. This initial learning effect nevertheless aligns with previous statistical learning research. Furthermore, the overall learning effect was similar to an estimate of weekly accuracy growth based on normative studies. The results implicate other important factors in speech sound development, particularly learning via production. Copyright © 2017 Elsevier Inc. All rights reserved.
Khan, Momna; Sultana, Syeda Seema; Jabeen, Nigar; Arain, Uzma; Khans, Salma
2015-02-01
To determine the diagnostic accuracy of visual inspection of the cervix using 3% acetic acid as a screening test for early detection of cervical cancer, taking histopathology as the gold standard. The cross-sectional study was conducted at Civil Hospital Karachi from July 1 to December 31, 2012 and comprised all sexually active women aged 19-60 years. During speculum examination, 3% acetic acid was applied over the cervix with the help of a cotton swab. The observations were noted as positive or negative on visual inspection of the cervix after acetic acid application according to acetowhite changes. Colposcopy-guided cervical biopsy was done in patients with a positive or abnormal-looking cervix. Colposcopy-directed biopsy was taken as the gold standard to assess visual inspection readings. SPSS 17 was used for statistical analysis. There were 500 subjects with a mean age of 35.74 ± 9.64 years. The sensitivity, specificity, positive predictive value, and negative predictive value of visual inspection of the cervix after acetic acid application were 93.5%, 95.8%, 76.3%, and 99%, respectively, and the diagnostic accuracy was 95.6%. Visual inspection of the cervix after acetic acid application is an effective method of detecting the pre-invasive phase of cervical cancer and a good alternative to cytological screening for cervical cancer in a resource-poor setting like Pakistan, and can reduce maternal morbidity and mortality.
Assessing the accuracy of weather radar to track intense rain cells in the Greater Lyon area, France
NASA Astrophysics Data System (ADS)
Renard, Florent; Chapon, Pierre-Marie; Comby, Jacques
2012-01-01
The Greater Lyon is a dense area located in the Rhône Valley in the south east of France. The conurbation counts 1.3 million inhabitants, and the rainfall hazard is a great concern. However, until now, studies on rainfall over the Greater Lyon have only been based on the network of rain gauges, despite the presence of a C-band radar located in the close vicinity. Consequently, the first aim of this study was to investigate the hydrological quality of this radar. This assessment, based on comparisons of radar estimates and rain-gauge values, concludes that the radar data have had overall good quality since 2006. Given this good accuracy, the study took the next step and investigated the characteristics of the intense rain cells that are responsible for the majority of floods in the Greater Lyon area. Improved knowledge of these rain cells is important to anticipate dangerous events and to improve the monitoring of the sewage system. This paper discusses the analysis of the ten most intense rainfall events in the 2001-2010 period. Spatial statistics pointed towards straight and linear movements of intense rainfall cells, independent of the ground surface conditions and the topography underneath. The speed of these cells was found to be nearly constant during a rainfall event, but varies from event to event, ranging on average from 25 to 66 km/h.
Statistical and Machine Learning forecasting methods: Concerns and ways forward
Makridakis, Spyros; Assimakopoulos, Vassilios
2018-01-01
Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions. PMID:29584784
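For concreteness, one accuracy measure commonly used in the M-competitions is the symmetric MAPE; a minimal implementation under the common definition (the paper's exact variant may differ):

    import numpy as np

    def smape(actual, forecast):
        """Symmetric mean absolute percentage error, in percent."""
        a = np.asarray(actual, float)
        f = np.asarray(forecast, float)
        return 100.0 * np.mean(2.0 * np.abs(f - a) / (np.abs(a) + np.abs(f)))

    # Example: score a forecast against held-out values.
    # print(smape([112, 118, 132], [110, 120, 128]))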
Survival prediction of trauma patients: a study on US National Trauma Data Bank.
Sefrioui, I; Amadini, R; Mauro, J; El Fallahi, A; Gabbrielli, M
2017-12-01
Exceptional circumstances like major incidents or natural disasters may cause a huge number of victims that might not be immediately and simultaneously saved. In these cases it is important to define priorities avoiding to waste time and resources for not savable victims. Trauma and Injury Severity Score (TRISS) methodology is the well-known and standard system usually used by practitioners to predict the survival probability of trauma patients. However, practitioners have noted that the accuracy of TRISS predictions is unacceptable especially for severely injured patients. Thus, alternative methods should be proposed. In this work we evaluate different approaches for predicting whether a patient will survive or not according to simple and easily measurable observations. We conducted a rigorous, comparative study based on the most important prediction techniques using real clinical data of the US National Trauma Data Bank. Empirical results show that well-known Machine Learning classifiers can outperform the TRISS methodology. Based on our findings, we can say that the best approach we evaluated is Random Forest: it has the best accuracy, the best area under the curve, and k-statistic, as well as the second-best sensitivity and specificity. It has also a good calibration curve. Furthermore, its performance monotonically increases as the dataset size grows, meaning that it can be very effective to exploit incoming knowledge. Considering the whole dataset, it is always better than TRISS. Finally, we implemented a new tool to compute the survival of victims. This will help medical practitioners to obtain a better accuracy than the TRISS tools. Random Forests may be a good candidate solution for improving the predictions on survival upon the standard TRISS methodology.
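A sketch of the kind of comparison described, assuming scikit-learn: a random forest trained on simple patient observations and scored with the same families of metrics the study reports (accuracy, AUC, kappa). The feature content and parameters are placeholders, not the study's protocol:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score, cohen_kappa_score
    from sklearn.model_selection import train_test_split

    def evaluate_random_forest(X, y):
        """Train and score a random forest survival classifier.
        X: patient observations (placeholder features); y: 1 = survived."""
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        prob = clf.predict_proba(X_te)[:, 1]
        return {"accuracy": float((pred == y_te).mean()),
                "auc": roc_auc_score(y_te, prob),
                "kappa": cohen_kappa_score(y_te, pred)}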
Stress echocardiography with smartphone: real-time remote reading for regional wall motion.
Scali, Maria Chiara; de Azevedo Bellagamba, Clarissa Carmona; Ciampi, Quirino; Simova, Iana; de Castro E Silva Pretto, José Luis; Djordjevic-Dikic, Ana; Dodi, Claudio; Cortigiani, Lauro; Zagatina, Angela; Trambaiolo, Paolo; Torres, Marco R; Citro, Rodolfo; Colonna, Paolo; Paterni, Marco; Picano, Eugenio
2017-11-01
The diffusion of smartphones offers access to the best remote expertise in stress echo (SE). To evaluate the reliability of SE based on smartphone filming and reading, a set of 20 SE video-clips was read in random sequence with a multiple-choice six-answer test by ten readers from five different countries (Italy, Brazil, Serbia, Bulgaria, Russia) of the "SE2020" study network. The gold standard used to assess accuracy was a core-lab expert reader in agreement with angiographic verification (0 = wrong, 1 = right). The same set of 20 SE studies was read, in random order and >2 months apart, on a desktop workstation and via smartphones by the ten remote readers. Image quality was graded from 1 = poor but readable, to 3 = excellent. Kappa (κ) statistics were used to assess intra- and inter-observer agreement. The image quality was comparable on the desktop workstation vs. the smartphone (2.0 ± 0.5 vs. 2.4 ± 0.7, p = NS). The average reading time per case was similar for desktop versus smartphone (90 ± 39 vs. 82 ± 54 s, p = NS). The overall diagnostic accuracy of the ten readers was similar for desktop workstation vs. smartphone (84 vs. 91%, p = NS). Intra-observer agreement (desktop vs. smartphone) was good (κ = 0.81 ± 0.14). Inter-observer agreement was good and similar via desktop or smartphone (κ = 0.69 vs. κ = 0.72, p = NS). The diagnostic accuracy and consistency of SE reading among certified readers was high and similar via desktop workstation or via smartphone.
The bag-of-frames approach: A not so sufficient model for urban soundscapes.
Lagrange, Mathieu; Lafay, Grégoire; Défréville, Boris; Aucouturier, Jean-Julien
2015-11-01
The "bag-of-frames" (BOF) approach, which encodes audio signals as the long-term statistical distribution of short-term spectral features, is commonly regarded as an effective and sufficient way to represent environmental sound recordings (soundscapes). The present paper describes a conceptual replication of a use of the BOF approach in a seminal article using several other soundscape datasets, with results strongly questioning the adequacy of the BOF approach for the task. As demonstrated in this paper, the good accuracy originally reported with BOF likely resulted from a particularly permissive dataset with low within-class variability. Soundscape modeling, therefore, may not be the closed case it was once thought to be.
Evaluating model accuracy for model-based reasoning
NASA Technical Reports Server (NTRS)
Chien, Steve; Roden, Joseph
1992-01-01
Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.
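The statistical measures driven by operational data can be as simple as bias and error-magnitude summaries of model predictions against observations; a minimal sketch of that kind of scoring (the specific measures are assumptions):

    import numpy as np

    def accuracy_measures(observed, simulated):
        """Bias and error-magnitude summaries of model predictions
        against operational data."""
        o = np.asarray(observed, float)
        s = np.asarray(simulated, float)
        err = s - o
        return {"ME": err.mean(),                     # bias
                "MAE": np.abs(err).mean(),            # typical error size
                "RMSE": np.sqrt((err ** 2).mean())}   # penalizes large misses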
Li, Yaohang; Liu, Hui; Rata, Ionel; Jakobsson, Eric
2013-02-25
The rapidly increasing number of protein crystal structures available in the Protein Data Bank (PDB) has naturally made statistical analyses feasible in studying complex high-order inter-residue correlations. In this paper, we report a context-based secondary structure potential (CSSP) for assessing the quality of predicted protein secondary structures generated by various prediction servers. CSSP is a sequence-position-specific knowledge-based potential generated based on the potentials of mean force approach, where high-order inter-residue interactions are taken into consideration. The CSSP potential is effective in identifying secondary structure predictions with good quality. In 56% of the targets in the CB513 benchmark, the optimal CSSP potential is able to recognize the native secondary structure or a prediction with Q3 accuracy higher than 90% as best scored in the predicted secondary structures generated by 10 popularly used secondary structure prediction servers. In more than 80% of the CB513 targets, the predicted secondary structures with the lowest CSSP potential values yield higher than 80% Q3 accuracy. Similar performance of CSSP is found on the CASP9 targets as well. Moreover, our computational results also show that the CSSP potential using triplets outperforms the CSSP potential using doublets and is currently better than the CSSP potential using quartets.
Prediction of Multiple-Trait and Multiple-Environment Genomic Data Using Recommender Systems.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José C; Mota-Sanchez, David; Estrada-González, Fermín; Gillberg, Jussi; Singh, Ravi; Mondal, Suchismita; Juliana, Philomin
2018-01-04
In genomic-enabled prediction, the task of improving the accuracy of the prediction of lines in environments is difficult because the available information is generally sparse and usually has low correlations between traits. In current genomic selection, although researchers have a large amount of information and appropriate statistical models to process it, there is still limited computing efficiency to do so. Although some statistical models are usually mathematically elegant, many of them are also computationally inefficient, and they are impractical for many traits, lines, environments, and years because they need to sample from huge normal multivariate distributions. For these reasons, this study explores two recommender systems: item-based collaborative filtering (IBCF) and the matrix factorization algorithm (MF) in the context of multiple traits and multiple environments. The IBCF and MF methods were compared with two conventional methods on simulated and real data. Results of the simulated and real data sets show that the IBCF technique was slightly better in terms of prediction accuracy than the two conventional methods and the MF method when the correlation was moderately high. The IBCF technique is very attractive because it produces good predictions when there is high correlation between items (environment-trait combinations) and its implementation is computationally feasible, which can be useful for plant breeders who deal with very large data sets. Copyright © 2018 Montesinos-Lopez et al.
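A minimal item-based collaborative filtering step of the kind IBCF performs: score an unobserved cell from the line's observed scores on similar items, weighting by item-item cosine similarity. This is a sketch of the general technique, not the study's implementation:

    import numpy as np

    def ibcf_predict(R, user, item):
        """Predict the missing entry R[user, item] from the user's
        observed scores on similar items. Here "users" play the role of
        lines and "items" the environment-trait combinations; R holds
        np.nan for unobserved cells."""
        filled = np.nan_to_num(R)                    # nan -> 0 for similarity
        norms = np.linalg.norm(filled, axis=0) + 1e-12
        sims = filled.T @ filled[:, item] / (norms * norms[item])
        observed = ~np.isnan(R[user])
        observed[item] = False                       # exclude the target itself
        w = sims[observed]
        return float(w @ R[user, observed] / (np.abs(w).sum() + 1e-12))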
Prediction of Multiple-Trait and Multiple-Environment Genomic Data Using Recommender Systems
Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José C.; Mota-Sanchez, David; Estrada-González, Fermín; Gillberg, Jussi; Singh, Ravi; Mondal, Suchismita; Juliana, Philomin
2018-01-01
In genomic-enabled prediction, the task of improving the accuracy of the prediction of lines in environments is difficult because the available information is generally sparse and usually has low correlations between traits. In current genomic selection, although researchers have a large amount of information and appropriate statistical models to process it, there is still limited computing efficiency to do so. Although some statistical models are usually mathematically elegant, many of them are also computationally inefficient, and they are impractical for many traits, lines, environments, and years because they need to sample from huge normal multivariate distributions. For these reasons, this study explores two recommender systems: item-based collaborative filtering (IBCF) and the matrix factorization algorithm (MF) in the context of multiple traits and multiple environments. The IBCF and MF methods were compared with two conventional methods on simulated and real data. Results of the simulated and real data sets show that the IBCF technique was slightly better in terms of prediction accuracy than the two conventional methods and the MF method when the correlation was moderately high. The IBCF technique is very attractive because it produces good predictions when there is high correlation between items (environment–trait combinations) and its implementation is computationally feasible, which can be useful for plant breeders who deal with very large data sets. PMID:29097376
A global goodness-of-fit statistic for Cox regression models.
Parzen, M; Lipsitz, S R
1999-06-01
In this paper, a global goodness-of-fit test statistic for a Cox regression model, which has an approximate chi-squared distribution when the model has been correctly specified, is proposed. Our goodness-of-fit statistic is global and has power to detect whether interactions or higher-order powers of covariates in the model are needed. The proposed statistic is similar to the Hosmer and Lemeshow (1980, Communications in Statistics A10, 1043-1069) goodness-of-fit statistic for binary data as well as Schoenfeld's (1980, Biometrika 67, 145-153) statistic for the Cox model. The methods are illustrated using data from a Mayo Clinic trial in primary biliary cirrhosis of the liver (Fleming and Harrington, 1991, Counting Processes and Survival Analysis), in which the outcome is the time until liver transplantation or death. There are 17 possible covariates. Two Cox proportional hazards models are fit to the data, and the proposed goodness-of-fit statistic is applied to the fitted models.
Hu, Xuefei; Waller, Lance A; Lyapustin, Alexei; Wang, Yujie; Liu, Yang
2014-10-16
Multiple studies have developed surface PM2.5 (particle size less than 2.5 µm in aerodynamic diameter) prediction models using satellite-derived aerosol optical depth as the primary predictor and meteorological and land use variables as secondary variables. To our knowledge, satellite-retrieved fire information has not been used for PM2.5 concentration prediction in statistical models. Fire data could be a useful predictor since fires are significant contributors of PM2.5. In this paper, we examined whether remotely sensed fire count data could improve PM2.5 prediction accuracy in the southeastern U.S. in a spatial statistical model setting. A sensitivity analysis showed that when the radius of the buffer zone centered at each PM2.5 monitoring site reached 75 km, fire count data generally had the greatest predictive power of PM2.5 across the models considered. Cross validation (CV) generated an R² of 0.69, a mean prediction error of 2.75 µg/m³, and a root-mean-square prediction error (RMSPE) of 4.29 µg/m³, indicating a good fit between the dependent and predictor variables. A comparison showed that the prediction accuracy was improved more substantially from the nonfire model to the fire model at sites with higher fire counts. With increasing fire counts, CV RMSPE decreased by values of up to 1.5 µg/m³, exhibiting a maximum improvement of 13.4% in prediction accuracy. Fire count data were shown to have better performance in southern Georgia and in the spring season due to higher fire occurrence. Our findings indicate that fire count data provide a measurable improvement in PM2.5 concentration estimation, especially in areas and seasons prone to fire events.
Diagnostic accuracy of hepatorenal index in the detection and grading of hepatic steatosis.
Chauhan, Anil; Sultan, Laith R; Furth, Emma E; Jones, Lisa P; Khungar, Vandana; Sehgal, Chandra M
2016-11-12
The objectives of our study were to assess the accuracy of the hepatorenal index (HRI) in the detection and grading of hepatic steatosis and to evaluate various factors that can affect the HRI measurement. Forty-five patients, who had undergone an abdominal sonographic examination within 30 days of liver biopsy, were enrolled. The HRI was calculated as the ratio of the mean brightness levels of the hepatic and renal parenchyma. The effect of the measurement technique on the HRI was evaluated by using various sizes, depths, and locations of the regions of interest (ROIs) in the liver. The measurements were obtained by two observers. The HRI was compared with the subjective grading of steatosis. The optimal HRI cutoff to detect steatosis was 2.01, yielding a sensitivity of 62.5% and specificity of 95.2%. Subjective grading had a sensitivity of 87.5% and specificity of 62.5%. HRIs of the hepatic steatosis group were statistically different from those of the no-steatosis group (p < 0.05). However, there was no statistically significant difference between the mild steatosis and no-steatosis groups (p value = 0.72). There was a strong correlation between HRIs based on different placements of ROIs, except when the ROIs were positioned randomly. The intraclass correlation coefficient for measurements performed by the two observers was 0.74 (confidence interval: 0.58-0.86). The HRI is an effective tool for detecting hepatic steatosis. It provides similar accuracy for different methods of ROI placement (except for random placement) and has good interobserver agreement. It, however, is unable to effectively differentiate between absent and mild steatosis. © 2016 Wiley Periodicals, Inc. J Clin Ultrasound 44:580-586, 2016.
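A minimal sketch of the HRI computation and its evaluation at the reported cutoff of 2.01; the brightness and HRI values are simulated stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

def hri(liver_roi, kidney_roi):
    """Hepatorenal index: ratio of mean ROI brightness, liver over renal parenchyma."""
    return liver_roi.mean() / kidney_roi.mean()

print(hri(np.array([55.0, 60.0]), np.array([28.0, 30.0])))  # illustrative ROI means

# Simulated HRI values for steatosis (n=16) and no-steatosis (n=21) patients.
scores = np.r_[rng.normal(2.4, 0.4, 16), rng.normal(1.7, 0.3, 21)]
truth = np.r_[np.ones(16, bool), np.zeros(21, bool)]

pos = scores >= 2.01                         # reported optimal cutoff
sens = (pos & truth).sum() / truth.sum()
spec = (~pos & ~truth).sum() / (~truth).sum()
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```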
Sharp, Adam L; Jones, Jason P; Wu, Ivan; Huynh, Dan; Kocher, Keith E; Shah, Nirav R; Gould, Michael K
2016-04-01
Pneumonia severity tools were primarily developed in cohorts of hospitalized patients, limiting their applicability to the emergency department (ED). We describe current community ED admission practices and examine the accuracy of the CURB-65 to predict 30-day mortality for patients, either discharged or admitted with community-acquired pneumonia (CAP). A retrospective, observational study of adult CAP encounters in 14 community EDs within an integrated healthcare system. We calculated CURB-65 scores for all encounters and described the use of hospitalization, stratified by each score (0-5). We then used each score as a cutoff to calculate sensitivity, specificity, positive predictive value, negative predictive value (NPV), positive likelihood ratios, and negative likelihood ratios for predicting 30-day mortality. The sample included 21,183 ED encounters for CAP (7,952 discharged and 13,231 admitted). The C-statistic describing the accuracy of CURB-65 for predicting 30-day mortality in the full sample was 0.761 (95% confidence interval [CI], 0.747-0.774). The C-statistic was 0.864 (95% CI, 0.821-0.906) among patients discharged from the ED compared with 0.689 (95% CI, 0.672-0.705) among patients who were admitted. Among all ED encounters a CURB-65 threshold of ≥1 was 92.8% sensitive and 38.0% specific for predicting mortality, with a 99.9% NPV. Among all encounters, 62.5% were admitted, including 36.2% of those at lowest risk (CURB-65 = 0). CURB-65 had very good accuracy for predicting 30-day mortality among patients discharged from the ED. This severity tool may help ED providers risk stratify patients to assist with disposition decisions and identify unwarranted variation in patient care. © 2016 by the Society for Academic Emergency Medicine.
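The accuracy statistics reported for the CURB-65 threshold all follow from the 2x2 table of test result against outcome. A generic sketch with simulated encounters (the score-mortality relationship is illustrative, not the study's data):

```python
import numpy as np

def accuracy_stats(test_pos, outcome):
    """Sensitivity, specificity, PPV, NPV, LR+ and LR- from a 2x2 table,
    e.g. CURB-65 >= 1 as the test and 30-day mortality as the outcome."""
    tp = np.sum(test_pos & outcome)
    fn = np.sum(~test_pos & outcome)
    fp = np.sum(test_pos & ~outcome)
    tn = np.sum(~test_pos & ~outcome)
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    ppv, npv = tp / (tp + fp), tn / (tn + fn)
    return sens, spec, ppv, npv, sens / (1 - spec), (1 - sens) / spec

rng = np.random.default_rng(2)
score = rng.integers(0, 6, 1000)                # hypothetical CURB-65 scores
died = rng.random(1000) < 0.02 * (1 + score)    # mortality rising with score
print(accuracy_stats(score >= 1, died))
```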
Clinical utility and validation of the Couple's Communicative Evaluation Scale.
West, Craig E
2005-10-01
This study assessed the validity and clinical utility of a new test, the Couple's Communicative Evaluation Scale. With 24 couples recruited from a variety of sources, e.g., churches, newspapers, and colleges, a discriminant analysis using the Dyadic Adjustment Scale indicated that satisfied couples could be discriminated from dissatisfied couples with 91-96% accuracy. Significant differences in scale means were found between 7 distressed and 16 nondistressed couples using the satisfaction/dissatisfaction cutoff score of 200 on the Dyadic Adjustment Scale, and significant differences in individual-scale means were found between 16 distressed and 31 nondistressed individuals using the satisfaction/dissatisfaction cutoff score of 100 on the Dyadic Adjustment Scale. Demographic variables, e.g., age and marriage length, were statistically significant. Scale scores were highly correlated with those on the Dyadic Adjustment Scale, indicating good validity. Using all 400 items, an alpha of .99 indicated good internal consistency for the verbal, nonverbal, and listening communication scores.
Zhang, Xinmiao; Liao, Xiaoling; Wang, Chunjuan; Liu, Liping; Wang, Chunxue; Zhao, Xingquan; Pan, Yuesong; Wang, Yilong; Wang, Yongjun
2015-08-01
The DRAGON score predicts functional outcome of ischemic stroke patients treated with intravenous thrombolysis. Our aim was to evaluate its utility in a Chinese stroke population. Patients with acute ischemic stroke treated with intravenous thrombolysis were prospectively registered in the Thrombolysis Implementation and Monitor of acute ischemic Stroke in China. We excluded patients with basilar artery occlusion and missing data, leaving 970 eligible patients. We calculated the DRAGON score, and the clinical outcome was measured by the modified Rankin Scale at 3 months. Model discrimination was quantified by calculating the C statistic. Calibration was assessed using Pearson correlation coefficient. The C statistic was .73 (.70-.76) for good outcome and .75 (.70-.79) for miserable outcome. Proportions of patients with good outcome were 94%, 83%, 70%, and 0% for 0 to 1, 2, 3, and 8 to 10 score points, respectively. Proportions of patients with miserable outcome were 0%, 3%, 9%, and 50% for 0 to 1, 2, 3, and 8 to 10 points, respectively. There was high correlation between predicted and observed probability of 3-month favorable and miserable outcome in the external validation cohort (Pearson correlation coefficient, .98 and .98, respectively, both P < .0001). The DRAGON score showed good performance to predict functional outcome after tissue-type plasminogen activator treatment in the Chinese population. This study demonstrated the accuracy and usability of the DRAGON score in the Chinese population in daily practice. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.
Statistical Capability Study of a Helical Grinding Machine Producing Screw Rotors
NASA Astrophysics Data System (ADS)
Holmes, C. S.; Headley, M.; Hart, P. W.
2017-08-01
Screw compressors depend for their efficiency and reliability on the accuracy of the rotors, and therefore on the machinery used in their production. The machinery has evolved over more than half a century in response to customer demands for production accuracy, efficiency, and flexibility, and is now at a high level on all three criteria. Production equipment and processes must be capable of maintaining accuracy over a production run, and this must be assessed statistically under strictly controlled conditions. This paper gives numerical data from such a study of an innovative machine tool and shows that it is possible to meet the demanding statistical capability requirements.
3D source localization of interictal spikes in epilepsy patients with MRI lesions
NASA Astrophysics Data System (ADS)
Ding, Lei; Worrell, Gregory A.; Lagerlund, Terrence D.; He, Bin
2006-08-01
The present study aims to accurately localize epileptogenic regions which are responsible for epileptic activities in epilepsy patients by means of a new subspace source localization approach, i.e. first principle vectors (FINE), using scalp EEG recordings. Computer simulations were first performed to assess source localization accuracy of FINE in the clinical electrode set-up. The source localization results from FINE were compared with the results from a classic subspace source localization approach, i.e. MUSIC, and their differences were tested statistically using the paired t-test. Other factors influencing the source localization accuracy were assessed statistically by ANOVA. The interictal epileptiform spike data from three adult epilepsy patients with medically intractable partial epilepsy and well-defined symptomatic MRI lesions were then studied using both FINE and MUSIC. The comparison between the electrical sources estimated by the subspace source localization approaches and MRI lesions was made through the coregistration between the EEG recordings and MRI scans. The accuracy of estimations made by FINE and MUSIC was also evaluated and compared by R2 statistic, which was used to indicate the goodness-of-fit of the estimated sources to the scalp EEG recordings. The three-concentric-spheres head volume conductor model was built for each patient with three spheres of different radii which takes the individual head size and skull thickness into consideration. The results from computer simulations indicate that the improvement of source spatial resolvability and localization accuracy of FINE as compared with MUSIC is significant when simulated sources are closely spaced, deep, or signal-to-noise ratio is low in a clinical electrode set-up. The interictal electrical generators estimated by FINE and MUSIC are in concordance with the patients' structural abnormality, i.e. MRI lesions, in all three patients. The higher R2 values achieved by FINE than MUSIC indicate that FINE provides a more satisfactory fitting of the scalp potential measurements than MUSIC in all patients. The present results suggest that FINE provides a useful brain source imaging technique, from clinical EEG recordings, for identifying and localizing epileptogenic regions in epilepsy patients with focal partial seizures. The present study may lead to the establishment of a high-resolution source localization technique from scalp-recorded EEGs for aiding presurgical planning in epilepsy patients.
Bolin, Jocelyn Holden; Finch, W Holmes
2014-01-01
Statistical classification of phenomena into observed groups is very common in the social and behavioral sciences. Statistical classification methods, however, are affected by the characteristics of the data under study. Statistical classification can be further complicated by initial misclassification of the observed groups. The purpose of this study is to investigate the impact of initial training data misclassification on several statistical classification and data mining techniques. Misclassification conditions in the three-group case are simulated, and results are presented in terms of overall as well as subgroup classification accuracy. Results show decreased classification accuracy as sample size, group separation and group size ratio decrease and as misclassification percentage increases, with random forests demonstrating the highest accuracy across conditions.
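The design of such a misclassification study can be sketched by flipping a fraction of training labels and tracking held-out accuracy; here with a random forest, the classifier the abstract singles out. Dataset and noise mechanism are illustrative, not the authors' simulation design:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=600, n_classes=3, n_informative=5,
                           class_sep=1.0, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

for noise in (0.0, 0.1, 0.3):                       # misclassification percentage
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_tr)) < noise
    y_noisy[flip] = rng.integers(0, 3, flip.sum())  # randomly relabel flipped cases
    acc = RandomForestClassifier(random_state=3).fit(X_tr, y_noisy).score(X_te, y_te)
    print(f"training-label noise {noise:.0%}: test accuracy {acc:.3f}")
```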
El-Kafrawy, Dina S; Belal, Tarek S; Mahrous, Mohamed S; Abdel-Khalek, Magdi M; Abo-Gharam, Amira H
2017-05-01
This work describes the development, validation, and application of two simple, accurate, and reliable methods for the determination of ursodeoxycholic acid (UDCA) in bulk powder and in pharmaceutical dosage forms. The carboxylic acid group in UDCA was exploited for the development of these novel methods. Method 1 is the colorimetric determination of the drug based on its reaction with 2-nitrophenylhydrazine hydrochloride in the presence of a water-soluble carbodiimide coupler [1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide hydrochloride] and pyridine to produce an acid hydrazide derivative, which ionizes to yield an intense violet color with maximum absorption at 553 nm. Method 2 uses reversed-phase HPLC with diode-array detection for the determination of UDCA after precolumn derivatization using the same reaction mentioned above. The acid hydrazide reaction product was separated using a Pinnacle DB C8 column (4.6 × 150 mm, 5 μm particle size) and a mobile phase consisting of 0.01 M acetate buffer (pH 3)-methanol-acetonitrile (30 + 30 + 40, v/v/v) isocratically pumped at a flow rate of 1 mL/min. Ibuprofen was used as the internal standard (IS). The peaks of the reaction product and IS were monitored at 400 nm. Different experimental parameters for both methods were carefully optimized. Analytical performance of the developed methods were statistically validated for linearity, range, precision, accuracy, specificity, robustness, LOD, and LOQ. Calibration curves showed good linear relationships for concentration ranges 32-192 and 60-600 μg/mL for methods 1 and 2, respectively. The proposed methods were successfully applied for the assay of UDCA in bulk form, capsules, and oral suspension with good accuracy and precision. Assay results were statistically compared with a reference pharmacopeial HPLC method, and no significant differences were observed between proposed and reference methods.
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented a few illumination correction and contrast enhancement techniques on color retinal images to find out the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method using the median filter to estimate background illumination showed the lowest coefficients of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. The contrast limited adaptive histogram equalization increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy. The contrast limited adaptive histogram equalization technique has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, quotient-based and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940
Investigation of Super Learner Methodology on HIV-1 Small Sample: Application on Jaguar Trial Data.
Houssaïni, Allal; Assoumou, Lambert; Marcelin, Anne Geneviève; Molina, Jean Michel; Calvez, Vincent; Flandre, Philippe
2012-01-01
Background. Many statistical models have been tested to predict phenotypic or virological response from genotypic data. A statistical framework called Super Learner has been introduced either to compare different methods/learners (discrete Super Learner) or to combine them in a Super Learner prediction method. Methods. The Jaguar trial is used to apply the Super Learner framework. The Jaguar study is an "add-on" trial comparing the efficacy of adding didanosine to an on-going failing regimen. Our aim was also to investigate the impact of using different cross-validation strategies and different loss functions. Four different repartitions between training and validation sets were tested through two loss functions. Six statistical methods were compared. We assess performance by evaluating R(2) values and accuracy by calculating the rates of patients being correctly classified. Results. Our results indicated that the more recent Super Learner methodology of building a new predictor based on a weighted combination of different methods/learners provided good performance. A simple linear model provided similar results to those of this new predictor. A slight discrepancy arises between the two loss functions investigated, and a slight difference also arises between results based on cross-validated risks and results from the full dataset. The Super Learner methodology and the linear model correctly classified around 80% of patients. The difference between the lower and higher rates is around 10 percent. The number of mutations retained in different learners also varies from one to 41. Conclusions. The more recent Super Learner methodology combining the predictions of many learners provided good performance on our small dataset.
A new statistical model for subgrid dispersion in large eddy simulations of particle-laden flows
NASA Astrophysics Data System (ADS)
Muela, Jordi; Lehmkuhl, Oriol; Pérez-Segarra, Carles David; Oliva, Asensi
2016-09-01
Dispersed multiphase turbulent flows are present in many industrial and commercial applications like internal combustion engines, turbofans, dispersion of contaminants, steam turbines, etc. Therefore, there is a clear interest in the development of models and numerical tools capable of performing detailed and reliable simulations of these kinds of flows. Large Eddy Simulation offers good accuracy and reliable results together with reasonable computational requirements, making it a very attractive method for developing numerical tools for particle-laden turbulent flows. Nonetheless, in multiphase dispersed flows additional difficulties arise in LES, since the effect of the unresolved scales of the continuous phase on the dispersed phase is lost due to the filtering procedure. In order to solve this issue, a model able to reconstruct the subgrid velocity seen by the particles is required. In this work, a new model for the reconstruction of the subgrid-scale effects on the dispersed phase is presented and assessed. This methodology is based on the reconstruction of statistics via Probability Density Functions (PDFs).
Recall accuracy of mobile phone calls among Japanese young people.
Kiyohara, Kosuke; Wake, Kanako; Watanabe, Soichi; Arima, Takuji; Sato, Yasuto; Kojimahara, Noriko; Taki, Masao; Yamaguchi, Naohito
2016-11-01
This study aimed to elucidate the recall accuracy of mobile phone calls among young people using new software-modified phone (SMP) technology. A total of 198 Japanese students aged between 10 and 24 years were instructed to use a SMP for 1 month to record their actual call statuses. Ten to 12 months after this period, face-to-face interviews were conducted to obtain the self-reported call statuses during the monitoring period. Using the SMP record as the gold standard of validation, the recall accuracy of phone calls was evaluated. A total of 19% of the participants (34/177) misclassified their laterality (i.e., the dominant side of ear used while making calls), with the level of agreement being moderate (κ-statistics, 0.449). The level of agreement between the self-reports and SMP records was relatively good for the duration of calls (Pearson's r, 0.620), as compared with the number of calls (Pearson's r, 0.561). The recall was prone to small systematic and large random errors for both the number and duration of calls. Such a large random recall error for the amount of calls and misclassification of laterality suggest that the results of epidemiological studies of mobile phone use based on self-assessment should be interpreted cautiously.
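The two agreement measures used above, the κ statistic for laterality and Pearson's r for call amounts, can be computed as below on simulated self-report data; the error model is an assumption for illustration only:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(4)

# Agreement between recorded and self-reported laterality (0=left, 1=right).
recorded = rng.integers(0, 2, 177)
reported = np.where(rng.random(177) < 0.81, recorded, 1 - recorded)
print("kappa:", cohen_kappa_score(recorded, reported))

# Correlation between recorded and recalled call duration, with large random error.
true_minutes = rng.gamma(2.0, 30.0, 177)
recalled = true_minutes * rng.lognormal(0.0, 0.6, 177)
print("r:", pearsonr(true_minutes, recalled)[0])
```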
Accuracy of a Digital Weight Scale Relative to the Nintendo Wii in Measuring Limb Load Asymmetry
Kumar, NS Senthil; Omar, Baharudin; Joseph, Leonard H; Hamdan, Nor; Htwe, Ohnmar; Hamidun, Nursalbiyah
2014-01-01
[Purpose] The aim of the present study was to investigate the accuracy of a digital weight scale relative to the Wii in limb loading measurement during static standing. [Methods] This was a cross-sectional study conducted at a public university teaching hospital. The sample consisted of 24 participants (12 with osteoarthritis and 12 healthy) recruited through convenience sampling. Limb loading measurements were obtained using a digital weight scale and the Nintendo Wii in static standing with three trials under an eyes-open condition. The limb load asymmetry was computed as the symmetry index. [Results] The accuracy of measurement with the digital weight scale relative to the Nintendo Wii was analyzed using the receiver operating characteristic (ROC) curve and the Kolmogorov-Smirnov test (K-S test). The area under the ROC curve was found to be 0.67. Logistic regression confirmed the validity of the digital weight scale relative to the Nintendo Wii. The D statistic from the K-S test was found to be 0.16, which confirmed that there was no significant difference in measurement between the two devices. [Conclusion] The digital weight scale is an accurate tool for measuring limb load asymmetry. The low price, easy availability, and maneuverability make it a good potential tool in clinical settings for measuring limb load asymmetry. PMID:25202181
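A sketch of the two comparisons used in this validation, the ROC curve and the two-sample K-S test between the devices, on simulated symmetry-index values (the data-generating numbers are illustrative):

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
# Symmetry index measured by the two devices on the same 24 participants.
wii = np.r_[rng.normal(8.0, 3.0, 12), rng.normal(3.0, 2.0, 12)]   # OA, healthy
scale = wii + rng.normal(0.0, 1.5, 24)                            # digital scale
group = np.r_[np.ones(12), np.zeros(12)]                          # 1 = osteoarthritis

print("AUC:", roc_auc_score(group, scale))
d, p = ks_2samp(wii, scale)        # K-S D statistic between the two devices
print("K-S D:", d, "p:", p)
```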
Acosta-Mesa, Héctor-Gabriel; Rechy-Ramírez, Fernando; Mezura-Montes, Efrén; Cruz-Ramírez, Nicandro; Hernández Jiménez, Rodolfo
2014-06-01
In this work, we present a novel application of time series discretization using evolutionary programming for the classification of precancerous cervical lesions. The approach optimizes the number of intervals into which the length and amplitude of the time series should be compressed, preserving the important information for classification purposes. Using evolutionary programming, the search for a good discretization scheme is guided by a cost function which considers three criteria: the entropy regarding the classification, the complexity measured as the number of different strings needed to represent the complete data set, and the compression rate assessed as the length of the discrete representation. This discretization approach is evaluated using time series data based on temporal patterns observed during a classical test used in cervical cancer detection; the classification accuracy reached by our method is compared with the well-known time series discretization algorithm SAX and the dimensionality reduction method PCA. Statistical analysis of the classification accuracy shows that the discrete representation is as efficient as the complete raw representation for the present application, reducing the dimensionality of the time series length by 97%. This representation is also very competitive in terms of classification accuracy when compared with similar approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Muneer, Tariq; Zhang, Xiaodong; Wood, John
2002-03-01
Delta-T Devices Ltd of Cambridge, UK has developed an integrated device which enables simultaneous measurement of horizontal global and diffuse irradiance as well as sunshine status at any given instant in time. To evaluate the performance of this new device, horizontal global and diffuse irradiance data were simultaneously collected from the Delta-T device and Napier University's CIE First Class daylight monitoring station. To enable a cross-check, a Kipp & Zonen CM11 global irradiance sensor was also installed in Currie, south-west Edinburgh. Sunshine duration data have been recorded at the Royal Botanic Garden, Edinburgh using their Campbell-Stokes recorder. Hourly data sets were analysed and plotted within the Microsoft Excel environment. Using the common statistical measures Root Mean Square Difference (RMSD) and Mean Bias Difference (MBD), the accuracy of the Delta-T sensor's measurements of horizontal global and diffuse irradiance and sunshine duration was investigated. The results show good performance on the part of the Delta-T device for the measurement of global and diffuse irradiance. The sunshine measurements were found to lack consistency and accuracy. It is argued herein that the distance between the respective sensors and the poor accuracy of the Campbell-Stokes recorder may be contributing factors to this phenomenon.
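The two statistics used in this comparison are straightforward; a minimal sketch with made-up irradiance values:

```python
import numpy as np

def rmsd_mbd(reference, sensor):
    """Root Mean Square Difference and Mean Bias Difference between sensors."""
    diff = sensor - reference
    return np.sqrt(np.mean(diff ** 2)), np.mean(diff)

reference = np.array([410.0, 355.0, 290.0, 505.0])  # W/m2, reference pyranometer
delta_t = np.array([402.0, 361.0, 287.0, 512.0])    # W/m2, device under test
print(rmsd_mbd(reference, delta_t))
```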
Communication Accuracy in Magazine Science Reporting.
ERIC Educational Resources Information Center
Borman, Susan Cray
1978-01-01
Evaluators with scientific expertise who analyzed the accuracy of popularized science news in mass circulation magazines found that the over-all accuracy of the magazine articles was good, and that the major problem was the omission of relevant information. (GW)
Risk prediction models of breast cancer: a systematic review of model performances.
Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin
2012-05-01
An increasing number of risk prediction models have been developed for estimating breast cancer risk in individual women. However, the performance of these models is questionable. We therefore conducted a study to systematically review previous risk prediction models. The results from this review help to identify the most reliable model and indicate the strengths and weaknesses of each model for guiding future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies which constructed models using regression methods were selected. Information about model development and performance was extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models, and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed in four models, while five models had external validation. The Gail and Rosner and Colditz models were the significant models which were subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observed ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistic: 0.53-0.66) and in external validation (concordance statistic: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be because of a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition, the concordance statistic itself is insensitive to improvements in discrimination. Therefore, newer methods such as the net reclassification index should be considered to evaluate the improvement in performance of newly developed models.
Statistical Validation of Image Segmentation Quality Based on a Spatial Overlap Index
Zou, Kelly H.; Warfield, Simon K.; Bharatha, Aditya; Tempany, Clare M.C.; Kaus, Michael R.; Haker, Steven J.; Wells, William M.; Jolesz, Ferenc A.; Kikinis, Ron
2005-01-01
Rationale and Objectives To examine a statistical validation method based on the spatial overlap between two sets of segmentations of the same anatomy. Materials and Methods The Dice similarity coefficient (DSC) was used as a statistical validation metric to evaluate the performance of both the reproducibility of manual segmentations and the spatial overlap accuracy of automated probabilistic fractional segmentation of MR images, illustrated on two clinical examples. Example 1: 10 consecutive cases of prostate brachytherapy patients underwent both preoperative 1.5T and intraoperative 0.5T MR imaging. For each case, 5 repeated manual segmentations of the prostate peripheral zone were performed separately on preoperative and on intraoperative images. Example 2: A semi-automated probabilistic fractional segmentation algorithm was applied to MR imaging of 9 cases with 3 types of brain tumors. DSC values were computed and logit-transformed values were compared in the mean with the analysis of variance (ANOVA). Results Example 1: The mean DSCs of 0.883 (range, 0.876–0.893) with 1.5T preoperative MRI and 0.838 (range, 0.819–0.852) with 0.5T intraoperative MRI (P < .001) were within and at the margin of the range of good reproducibility, respectively. Example 2: Wide ranges of DSC were observed in brain tumor segmentations: Meningiomas (0.519–0.893), astrocytomas (0.487–0.972), and other mixed gliomas (0.490–0.899). Conclusion The DSC value is a simple and useful summary measure of spatial overlap, which can be applied to studies of reproducibility and accuracy in image segmentation. We observed generally satisfactory but variable validation results in two clinical applications. This metric may be adapted for similar validation tasks. PMID:14974593
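The DSC and the logit transform applied before the ANOVA comparison can be sketched as follows, on two synthetic binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def logit(x):
    """Transform used before comparing DSC means with ANOVA."""
    return np.log(x / (1.0 - x))

a = np.zeros((64, 64), bool); a[16:48, 16:48] = True
b = np.zeros((64, 64), bool); b[20:52, 18:50] = True
d = dice(a, b)
print(d, logit(d))
```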
Portnoy, Galina A; Haskell, Sally G; King, Matthew W; Maskin, Rachel; Gerber, Megan R; Iverson, Katherine M
2018-06-06
Veterans are at heightened risk for perpetrating intimate partner violence (IPV), yet there is limited evidence to inform practice and policy for the detection of IPV perpetration. The present study evaluated the accuracy and acceptability of a potential IPV perpetration screening tool for use with women veterans. A national sample of women veterans completed a 2016 web-based survey that included a modified 5-item Extended-Hurt/Insult/Threaten/Scream (Modified E-HITS) and the Revised Conflict Tactics Scales (CTS-2). Items also assessed women's perceptions of the acceptability and appropriateness of the Modified E-HITS questions for use in healthcare settings. Accuracy statistics, including sensitivity and specificity, were calculated using the CTS-2 as the reference standard. Primary measures included the Modified E-HITS (index test), CTS-2 (reference standard), and items assessing acceptability. This study included 187 women, of whom 31 (16.6%) reported past-6-month IPV perpetration on the CTS-2. The Modified E-HITS demonstrated good overall accuracy (area under the curve, 0.86; 95% confidence interval, 0.78-0.94). In addition, the majority of women perceived the questions to be acceptable and appropriate. Findings demonstrate that the Modified E-HITS is promising as a low-burden tool for detecting IPV perpetration among women veterans. This tool may help the Veterans Health Administration and other healthcare providers detect IPV perpetration and offer appropriate referrals for comprehensive assessment and services. Published by Elsevier Inc.
A new in silico classification model for ready biodegradability, based on molecular fragments.
Lombardo, Anna; Pizzo, Fabiola; Benfenati, Emilio; Manganaro, Alberto; Ferrari, Thomas; Gini, Giuseppina
2014-08-01
Regulations such as the European REACH (Registration, Evaluation, Authorization and restriction of Chemicals) often require chemicals to be evaluated for ready biodegradability, to assess the potential risk for environmental and human health. Because not all chemicals can be tested, there is an increasing demand for tools for quick and inexpensive biodegradability screening, such as computer-based (in silico) theoretical models. We developed an in silico model starting from a dataset of 728 chemicals with ready biodegradability data (MITI test, Ministry of International Trade and Industry). We used the novel software SARpy to automatically extract, through a structural fragmentation process, a set of substructures statistically related to ready biodegradability. Then, we analysed these substructures in order to build some general rules. The model consists of a rule set made up of the combination of the statistically relevant fragments and of the expert-based rules. The model gives good statistical performance, with 92%, 82% and 76% accuracy on the training, test and external sets respectively. These results are comparable with other in silico models like BIOWIN, developed by the United States Environmental Protection Agency (EPA); moreover, this new model includes an easily understandable explanation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Huang, Hongxin; Inoue, Takashi; Tanaka, Hiroshi
2011-08-01
We studied the long-term optical performance of an adaptive optics scanning laser ophthalmoscope that uses a liquid-crystal-on-silicon spatial light modulator to correct ocular aberrations. The system achieved good compensation of aberrations while acquiring images of fine retinal structures, except during sudden eye movements. The residual wavefront aberrations collected over several minutes in several situations were statistically analyzed. The mean values of the root-mean-square residual wavefront errors were 23-30 nm, and for around 91-94% of the effective time the errors were below the Marechal criterion for diffraction-limited imaging. The ability to axially shift the imaging plane to different retinal depths was also demonstrated.
NASA Astrophysics Data System (ADS)
Mohamed, Omar Ahmed; Hasan Masood, Syed; Lal Bhowmik, Jahar
2018-02-01
In the additive manufacturing (AM) market, the question is raised by industry and AM users of how reproducible and repeatable the fused deposition modeling (FDM) process is in providing good dimensional accuracy. This paper aims to investigate and evaluate the repeatability and reproducibility of the FDM process through a systematic approach to answer this frequently asked question. A case study based on the statistical gage repeatability and reproducibility (gage R&R) technique is proposed to investigate the dimensional variations in printed parts from the FDM process. After running the simulation and analysis of the data, the FDM process capability is evaluated, which helps the industry better understand the performance of FDM technology.
A model of the human observer and decision maker
NASA Technical Reports Server (NTRS)
Wewerinke, P. H.
1981-01-01
The decision process is described in terms of classical sequential decision theory by considering the hypothesis that an abnormal condition has occurred by means of a generalized likelihood ratio test. For this, a sufficient statistic is provided by the innovation sequence, which is the result of the perception and information processing submodel of the human observer. On the basis of only two model parameters, the model predicts the decision speed/accuracy trade-off and various attentional characteristics. A preliminary test of the model on single-variable failure detection tasks resulted in a very good fit to the experimental data. In a formal validation program, a variety of multivariable failure detection tasks was investigated and the predictive capability of the model was demonstrated.
Variance estimates and confidence intervals for the Kappa measure of classification accuracy
M. A. Kalkhan; R. M. Reich; R. L. Czaplewski
1997-01-01
The Kappa statistic is frequently used to characterize the results of an accuracy assessment used to evaluate land use and land cover classifications obtained by remotely sensed data. This statistic allows comparisons of alternative sampling designs, classification algorithms, photo-interpreters, and so forth. In order to make these comparisons, it is...
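A sketch of a κ estimate with an approximate variance and 95% confidence interval; note the variance below is a common simplified large-sample approximation, not necessarily the estimator examined in this paper:

```python
import numpy as np

def kappa_ci(table, z=1.96):
    """Kappa from a k x k confusion matrix, with a simplified large-sample
    variance approximation and the corresponding 95% confidence interval."""
    table = np.asarray(table, float)
    n = table.sum()
    po = np.trace(table) / n                      # observed agreement
    pe = (table.sum(0) @ table.sum(1)) / n ** 2   # chance agreement from marginals
    kappa = (po - pe) / (1 - pe)
    se = np.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))
    return kappa, (kappa - z * se, kappa + z * se)

table = [[120, 12, 3],    # illustrative accuracy-assessment error matrix
         [ 10, 95, 8],
         [  4,  7, 61]]
print(kappa_ci(table))
```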
Numerical modeling and model updating for smart laminated structures with viscoelastic damping
NASA Astrophysics Data System (ADS)
Lu, Jun; Zhan, Zhenfei; Liu, Xu; Wang, Pan
2018-07-01
This paper presents a numerical modeling method combined with model updating techniques for the analysis of smart laminated structures with viscoelastic damping. Starting with finite element formulation, the dynamics model with piezoelectric actuators is derived based on the constitutive law of the multilayer plate structure. The frequency-dependent characteristics of the viscoelastic core are represented utilizing the anelastic displacement fields (ADF) parametric model in the time domain. The analytical model is validated experimentally and used to analyze the influencing factors of kinetic parameters under parametric variations. Emphasis is placed upon model updating for smart laminated structures to improve the accuracy of the numerical model. Key design variables are selected through the smoothing spline ANOVA statistical technique to mitigate the computational cost. This updating strategy not only corrects the natural frequencies but also improves the accuracy of damping prediction. The effectiveness of the approach is examined through an application problem of a smart laminated plate. It is shown that a good consistency can be achieved between updated results and measurements. The proposed method is computationally efficient.
Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation
NASA Astrophysics Data System (ADS)
Arotaritei, D.; Rotariu, C.
2015-09-01
In this paper we present a novel method for atrial fibrillation (AF) detection based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference system produces if-then-else rules that are extracted to construct a binary decision: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy) and RMSSD (Root Mean Square of Successive Differences), along with a new descriptor, Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window that produces a very large number of vectors (a massive dataset) used by the classifier. The window length is a crisp descriptor, while the rest of the descriptors are interval-valued. The parameters of the hybrid system are adapted using a Genetic Algorithm (GA) with a single-objective fitness target: the highest values of sensitivity and specificity. The rules are extracted and form part of the decision system. The proposed method was tested using the Physionet MIT-BIH Atrial Fibrillation Database, and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
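Three of the named descriptors can be computed per sliding window as below; the Teager-Kaiser energy descriptor and the fuzzy machinery are omitted, and the window values are simulated RR intervals:

```python
import numpy as np

def rmssd(rr):
    """Root Mean Square of Successive Differences of RR intervals."""
    return np.sqrt(np.mean(np.diff(rr) ** 2))

def turning_point_ratio(rr):
    """Fraction of interior points that are local maxima or minima."""
    a, b, c = rr[:-2], rr[1:-1], rr[2:]
    turning = ((b > a) & (b > c)) | ((b < a) & (b < c))
    return turning.mean()

def shannon_entropy(rr, bins=16):
    """Shannon entropy of the histogram of RR intervals, in bits."""
    counts, _ = np.histogram(rr, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(6)
window = rng.normal(0.8, 0.15, 128)   # one sliding window of RR intervals, seconds
print(rmssd(window), turning_point_ratio(window), shannon_entropy(window))
```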
Bivariate empirical mode decomposition for ECG-based biometric identification with emotional data.
Ferdinando, Hany; Seppanen, Tapio; Alasaarela, Esko
2017-07-01
Emotions modulate ECG signals such that they might affect ECG-based biometric identification in real-life applications. This motivated the search for feature extraction methods on which the emotional state of the subjects has minimal impact. This paper evaluates feature extraction based on bivariate empirical mode decomposition (BEMD) for biometric identification when emotion is considered. Using ECG signals from the Mahnob-HCI database for affect recognition, the features were statistical distributions of the dominant frequency after applying BEMD analysis to the ECG signals. The achieved accuracy was 99.5% with high consistency, using a kNN classifier in 10-fold cross-validation to identify 26 subjects when the emotional states of the subjects were ignored. When the emotional states of the subjects were considered, the proposed method also delivered high accuracy, around 99.4%. We conclude that the proposed method offers emotion-independent features for ECG-based biometric identification. The proposed method needs further evaluation, including testing with other classifiers and with variation in ECG signals, e.g., normal ECG vs. ECG with arrhythmias, ECG from various ages, and ECG from other affective databases.
Perspectives of Maine Forest Cover Change from Landsat Imagery and Forest Inventory Analysis (FIA)
Steven Sader; Michael Hoppus; Jacob Metzler; Suming Jin
2005-01-01
A forest change detection map was developed to document forest gains and losses during the decade of the 1990s. The effectiveness of the Landsat imagery and methods for detecting Maine forest cover change is indicated by the good accuracy assessment results: forest-no change, forest loss, and forest gain accuracy were 90, 88, and 92% respectively, and the good...
Accuracy Evaluation of the Unified P-Value from Combining Correlated P-Values
Alves, Gelio; Yu, Yi-Kuo
2014-01-01
Meta-analysis methods that combine p-values into a single unified p-value are frequently employed to improve confidence in hypothesis testing. An assumption made by most meta-analysis methods is that the p-values to be combined are independent, which may not always be true. To investigate the accuracy of the unified p-value from combining correlated p-values, we have evaluated a family of statistical methods that combine: independent, weighted independent, correlated, and weighted correlated p-values. Statistical accuracy evaluation by combining simulated correlated p-values showed that correlation among p-values can have a significant effect on the accuracy of the combined p-value obtained. Among the statistical methods evaluated, those that weight p-values compute more accurate combined p-values than those that do not. Also, statistical methods that utilize the correlation information have the best performance, producing significantly more accurate combined p-values. In our study we have demonstrated that statistical methods that combine p-values based on the assumption of independence can produce inaccurate p-values when combining correlated p-values, even when the p-values are only weakly correlated. Therefore, to prevent drawing false conclusions during hypothesis testing, our study advises caution when interpreting the p-value obtained from combining p-values of unknown correlation. However, when the correlation information is available, the weighting-capable statistical method, first introduced by Brown and recently modified by Hou, seems to perform the best amongst the methods investigated. PMID:24663491
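The core issue studied here can be reproduced in a few lines: Fisher's method assumes independent p-values, and feeding it correlated ones inflates the rejection rate at a nominal significance level. The simulation parameters below are illustrative, not those of the paper:

```python
import numpy as np
from scipy.stats import combine_pvalues, norm

rng = np.random.default_rng(7)
cov = 0.7 * np.ones((5, 5)) + 0.3 * np.eye(5)    # pairwise correlation 0.7
z = rng.multivariate_normal(np.zeros(5), cov, size=5000)
p = norm.sf(z)                                   # one-sided p-values under the null

fisher = np.array([combine_pvalues(row, method="fisher")[1] for row in p])
print("rejection rate at nominal 5%:", np.mean(fisher < 0.05))  # well above 0.05
```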
A Synergy Cropland of China by Fusing Multiple Existing Maps and Statistics.
Lu, Miao; Wu, Wenbin; You, Liangzhi; Chen, Di; Zhang, Li; Yang, Peng; Tang, Huajun
2017-07-12
Accurate information on cropland extent is critical for scientific research and resource management. Several cropland products from remotely sensed datasets are available. Nevertheless, significant inconsistency exists among these products and the cropland areas estimated from these products differ considerably from statistics. In this study, we propose a hierarchical optimization synergy approach (HOSA) to develop a hybrid cropland map of China, circa 2010, by fusing five existing cropland products, i.e., GlobeLand30, Climate Change Initiative Land Cover (CCI-LC), GlobCover 2009, MODIS Collection 5 (MODIS C5), and MODIS Cropland, and sub-national statistics of cropland area. HOSA simplifies the widely used method of score assignment into two steps, including determination of optimal agreement level and identification of the best product combination. The accuracy assessment indicates that the synergy map has higher accuracy of spatial locations and better consistency with statistics than the five existing datasets individually. This suggests that the synergy approach can improve the accuracy of cropland mapping and enhance consistency with statistics.
Dissimilarity representations in lung parenchyma classification
NASA Astrophysics Data System (ADS)
Sørensen, Lauge; de Bruijne, Marleen
2009-02-01
A good problem representation is important for a pattern recognition system to be successful. The traditional approach to statistical pattern recognition is feature representation. More specifically, objects are represented by a number of features in a feature vector space, and classifiers are built in this representation. This is also the general trend in lung parenchyma classification in computed tomography (CT) images, where the features often are measures on feature histograms. Instead, we propose to build normal density based classifiers in dissimilarity representations for lung parenchyma classification. This allows for the classifiers to work on dissimilarities between objects, which might be a more natural way of representing lung parenchyma. In this context, dissimilarity is defined between CT regions of interest (ROIs). ROIs are represented by their CT attenuation histogram, and ROI dissimilarity is defined as a histogram dissimilarity measure between the attenuation histograms. In this setting, the full histograms are utilized according to the chosen histogram dissimilarity measure. We apply this idea to classification of different emphysema patterns as well as normal, healthy tissue. Two dissimilarity representation approaches as well as different histogram dissimilarity measures are considered. The approaches are evaluated on a set of 168 CT ROIs using normal density based classifiers, all showing good performance. Compared to using histogram dissimilarity directly as distance in a k-nearest-neighbor classifier, which achieves a classification accuracy of 92.9%, the best dissimilarity representation based classifier is significantly better with a classification accuracy of 97.0% (p = 0.046).
a Band Selection Method for High Precision Registration of Hyperspectral Image
NASA Astrophysics Data System (ADS)
Yang, H.; Li, X.
2018-04-01
During the registration of hyperspectral images and high-spatial-resolution images, the large number of bands in a hyperspectral image makes it difficult to select bands with good registration performance, and poorly matching bands can reduce matching speed and accuracy. To solve this problem, an algorithm based on Cramér-Rao lower bound (CRLB) theory is proposed in this paper to select good matching bands. The algorithm applies the Cramér-Rao lower bound theory to the study of registration accuracy and selects good matching bands using CRLB parameters. Experiments show that the proposed algorithm can choose good matching bands and provide better data for the registration of hyperspectral images and high-spatial-resolution images.
Modeling the Lyα Forest in Collisionless Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorini, Daniele; Oñorbe, José; Lukić, Zarija
2016-08-11
Cosmological hydrodynamic simulations can accurately predict the properties of the intergalactic medium (IGM), but only under the condition of retaining the high spatial resolution necessary to resolve density fluctuations in the IGM. This resolution constraint prohibits simulating large volumes, such as those probed by BOSS and future surveys, like DESI and 4MOST. To overcome this limitation, we present in this paper "Iteratively Matched Statistics" (IMS), a novel method to accurately model the Lyα forest with collisionless N-body simulations, where the relevant density fluctuations are unresolved. We use a small-box, high-resolution hydrodynamic simulation to obtain the probability distribution function (PDF) and the power spectrum of the real-space Lyα forest flux. These two statistics are iteratively mapped onto a pseudo-flux field of an N-body simulation, which we construct from the matter density. We demonstrate that our method can reproduce the PDF, line of sight and 3D power spectra of the Lyα forest with good accuracy (7%, 4%, and 7% respectively). We quantify the performance of the commonly used Gaussian smoothing technique and show that it has significantly lower accuracy (20%-80%), especially for N-body simulations with achievable mean inter-particle separations in large-volume simulations. Finally, we show that IMS produces reasonable and smooth spectra, making it a powerful tool for modeling the IGM in large cosmological volumes and for producing realistic "mock" skies for Lyα forest surveys.
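The PDF-matching half of an IMS-style iteration amounts to rank-order (quantile) mapping; a minimal sketch on synthetic fields follows, with the power-spectrum matching step and the iteration between the two statistics omitted:

```python
import numpy as np

def match_pdf(target_sample, field):
    """Rank-order (quantile) mapping: reshape `field` so its one-point PDF
    matches that of `target_sample`, preserving the rank order of `field`."""
    order = np.argsort(field)
    q = (np.arange(field.size) + 0.5) / field.size
    mapped = np.empty_like(field)
    mapped[order] = np.quantile(target_sample, q)
    return mapped

rng = np.random.default_rng(8)
hydro_flux = rng.beta(0.5, 1.5, 4096)    # stand-in for the hydro-sim flux PDF
nbody_field = rng.normal(size=4096)      # stand-in for the N-body pseudo-flux
matched = match_pdf(hydro_flux, nbody_field)
# The mapped field now shares the target's quantiles.
print(np.quantile(matched, [0.1, 0.5, 0.9]), np.quantile(hydro_flux, [0.1, 0.5, 0.9]))
```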
SIRC-CVS cytotoxicity test: an alternative for predicting rodent acute systemic toxicity.
Kitagaki, Masato; Wakuri, Shinobu; Hirota, Morihiko; Tanaka, Noriho; Itagaki, Hiroshi
2006-10-01
An in vitro crystal violet staining method using the rabbit cornea-derived cell line (SIRC-CVS) has been developed as an alternative to predict acute systemic toxicity in rodents. Seventy-nine chemicals, the in vitro cytotoxicity of which was already reported by the Multicenter Evaluation of In vitro Toxicity (MEIC) and ICCVAM/ECVAM, were selected as test compounds. The cells were incubated with the chemicals for 72 hrs and the IC(50) and IC(35) values (microg/mL) were obtained. The results were compared to the in vivo (rat or mouse) "most toxic" oral, intraperitoneal, subcutaneous and intravenous LD(50) values (mg/kg) taken from the RTECS database for each of the chemicals by using Pearson's correlation statistics. The following parameters were calculated: accuracy, sensitivity, specificity, prevalence, positive predictability, and negative predictability. Good linear correlations (Pearson's coefficient; r>0.6) were observed between either the IC(50) or the IC(35) values and all the LD(50) values. Among them, a statistically significant high correlation (r=0.8102, p<0.001) required for acute systemic toxicity prediction was obtained between the IC(50) values and the oral LD(50) values. By using the cut-off concentrations of 2,000 mg/kg (LD(50)) and 4,225 microg/mL (IC(50)), no false negatives were observed, and the accuracy was 84.8%. From this, it is concluded that this method could be used to predict the acute systemic toxicity potential of chemicals in rodents.
NASA Astrophysics Data System (ADS)
Kumar, Rakesh; Li, Zheng; Levin, Deborah A.
2011-05-01
In this work, we propose a new heat accommodation model to simulate freely expanding homogeneous condensation flows of gaseous carbon dioxide using a new approach, the statistical Bhatnagar-Gross-Krook method. The motivation for the present work comes from the earlier work of Li et al. [J. Phys. Chem. 114, 5276 (2010)] in which condensation models were proposed and used in the direct simulation Monte Carlo method to simulate the flow of carbon dioxide from supersonic expansions of small nozzles into near-vacuum conditions. Simulations conducted for stagnation pressures of one and three bar were compared with the measurements of gas and cluster number densities, cluster size, and carbon dioxide rotational temperature obtained by Ramos et al. [Phys. Rev. A 72, 3204 (2005)]. Due to the high computational cost of direct simulation Monte Carlo method, comparison between simulations and data could only be performed for these stagnation pressures, with good agreement obtained beyond the condensation onset point, in the farfield. As the stagnation pressure increases, the degree of condensation also increases; therefore, to improve the modeling of condensation onset, one must be able to simulate higher stagnation pressures. In simulations of an expanding flow of argon through a nozzle, Kumar et al. [AIAA J. 48, 1531 (2010)] found that the statistical Bhatnagar-Gross-Krook method provides the same accuracy as direct simulation Monte Carlo method, but, at one half of the computational cost. In this work, the statistical Bhatnagar-Gross-Krook method was modified to account for internal degrees of freedom for multi-species polyatomic gases. With the computational approach in hand, we developed and tested a new heat accommodation model for a polyatomic system to properly account for the heat release of condensation. We then developed condensation models in the framework of the statistical Bhatnagar-Gross-Krook method. Simulations were found to agree well with the experiment for all stagnation pressure cases (1-5 bar), validating the accuracy of the Bhatnagar-Gross-Krook based condensation model in capturing the physics of condensation.
ERIC Educational Resources Information Center
Vivo, Juana-Maria; Franco, Manuel
2008-01-01
This article attempts to present a novel application of a method of measuring accuracy for academic success predictors that could be used as a standard. This procedure is known as the receiver operating characteristic (ROC) curve, which comes from statistical decision techniques. The statistical prediction techniques provide predictor models and…
NASA Astrophysics Data System (ADS)
North, Matt; Petropoulos, George; Ireland, Gareth; Rendal, Daisy; Carlson, Toby
2015-04-01
With current predicted climate change, there is an increased requirement to gain knowledge on the terrestrial biosphere for numerous agricultural, hydrological and meteorological applications. To this end, Soil Vegetation Atmospheric Transfer (SVAT) models are quickly becoming the preferred scientific tool to monitor, at fine temporal and spatial resolutions, detailed information on numerous parameters associated with Earth system interactions. Validation of any model is critical to assess its accuracy, generality and realism across distinctive ecosystems, and subsequently acts as an important step before its operational distribution. In this study, the SimSphere SVAT model was validated at fifteen different sites of the FLUXNET network, where model performance was statistically evaluated by directly comparing the model predictions against in situ data, for cloud-free days with a high energy balance closure. Specific focus is given to the model's ability to simulate parameters associated with the energy balance, namely Shortwave Incoming Solar Radiation (Rg), Net Radiation (Rnet), Latent Heat (LE), Sensible Heat (H), Air Temperature at 1.3 m (Tair 1.3m) and Air Temperature at 50 m (Tair 50m). Comparisons were performed for a number of distinctive ecosystem types and for 150 days in total, using in-situ data from ground observational networks acquired from the year 2011 alone. The model's coherence to reality was evaluated on the basis of a series of statistical parameters including RMSD, R2, scatter, bias, MAE, the NASH index, slope and intercept. Results showed good to very good agreement between predicted and observed datasets, particularly so for LE, H, Tair 1.3m and Tair 50m, where mean error distribution values indicated excellent model performance. Due to systematic underestimation, poorer simulation accuracies were exhibited for Rg and Rnet, yet all values reported are still comparable to other validation studies of this kind. Overall, the model demonstrated the greatest simulation accuracies at ecologically stable sites exhibiting low inter-annual change in vegetation phenology, such as open woodland savannah, shrubland and mulga woodland, whereas poorer simulation accuracies were attained at cropland and grazing pasture sites. These results represent the model's first comprehensive validation. The study is also very timely given the rapidly expanding global use of the model, both as a standalone tool used for research, education and training in several institutions worldwide, and for its synergistic applications with Earth Observation data. Currently, several space agencies are evaluating the model's use synergistically with Earth Observation data to provide spatio-temporal estimates of energy fluxes and/or soil moisture at an operational level. Key Words: SimSphere, Validation, FLUXNET, SVAT, Shortwave Incoming Solar Radiation, Net Radiation, Latent Heat, Sensible Heat, Air Temperature
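The error statistics listed (RMSD, bias, R2 and the NASH index, i.e., the Nash-Sutcliffe efficiency) can be computed as below; the flux values are made up for illustration:

```python
import numpy as np

def validation_stats(obs, sim):
    """RMSD, bias, R2 and Nash-Sutcliffe efficiency for model-vs-tower fluxes."""
    resid = sim - obs
    rmsd = np.sqrt(np.mean(resid ** 2))
    bias = np.mean(resid)
    nash = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return rmsd, bias, r2, nash

obs = np.array([120.0, 210.0, 305.0, 250.0, 160.0])   # LE, W/m2 (illustrative)
sim = np.array([135.0, 198.0, 280.0, 262.0, 170.0])   # model predictions
print(validation_stats(obs, sim))
```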
Ranucci, Marco; Castelvecchio, Serenella; Menicanti, Lorenzo; Frigiola, Alessandro; Pelissero, Gabriele
2010-03-01
The European system for cardiac operative risk evaluation (EuroSCORE) is currently used in many institutions and is considered a reference tool in many countries. We hypothesised that too many variables were included in the EuroSCORE using limited patient series. We tested different models using a limited number of variables. A total of 11150 adult patients undergoing cardiac operations at our institution (2001-2007) were retrospectively analysed. The 17 risk factors composing the EuroSCORE were separately analysed and ranked for accuracy of prediction of hospital mortality. Seventeen models were created by progressively including one factor at a time. The models were compared for accuracy with a receiver operating characteristics (ROC) analysis and area under the curve (AUC) evaluation. Calibration was tested with Hosmer-Lemeshow statistics. Clinical performance was assessed by comparing the predicted with the observed mortality rates. The best accuracy (AUC 0.76) was obtained using a model including only age, left ventricular ejection fraction, serum creatinine, emergency operation and non-isolated coronary operation. The EuroSCORE AUC (0.75) was not significantly different. Calibration and clinical performance were better in the five-factor model than in the EuroSCORE. Only in high-risk patients were 12 factors needed to achieve a good performance. Including many factors in multivariable logistic models increases the risk for overfitting, multicollinearity and human error. A five-factor model offers the same level of accuracy but demonstrated better calibration and clinical performance. Models with a limited number of factors may work better than complex models when applied to a limited number of patients. Copyright (c) 2009 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.
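As a concrete illustration of the model-reduction comparison described above, the following sketch fits a five-factor and a full logistic model on synthetic data and compares their ROC AUCs. The data, coefficients and scikit-learn usage are our own illustrative assumptions, not the study's EuroSCORE variables.

```python
# Sketch: compare a reduced five-factor logistic model against a full 17-factor
# model by ROC AUC. All data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 17))                   # 17 candidate risk factors
true_beta = np.zeros(17)
true_beta[:5] = [0.8, 0.6, 0.5, 0.4, 0.3]      # only 5 factors truly matter
p = 1 / (1 + np.exp(-(X @ true_beta - 3.0)))   # rare-event hospital mortality
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
for k, label in [(5, "five-factor"), (17, "full model")]:
    model = LogisticRegression(max_iter=1000).fit(X_tr[:, :k], y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te[:, :k])[:, 1])
    print(f"{label:12s} AUC = {auc:.3f}")
```

With such data the reduced model typically matches the full model's AUC, mirroring the abstract's finding that extra factors add overfitting risk without accuracy gains.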
Fully automatic and precise data analysis developed for time-of-flight mass spectrometry.
Meyer, Stefan; Riedo, Andreas; Neuland, Maike B; Tulej, Marek; Wurz, Peter
2017-09-01
Scientific objectives of current and future space missions are focused on the investigation of the origin and evolution of the solar system with particular emphasis on habitability and signatures of past and present life. For in situ measurements of the chemical composition of solid samples on planetary surfaces, the neutral atmospheric gas and the thermal plasma of planetary atmospheres, the application of mass spectrometers making use of time-of-flight mass analysers is a technique widely used. However, such investigations imply measurements with good statistics and, thus, a large amount of data to be analysed. Therefore, faster and especially robust automated data analysis with enhanced accuracy is required. In this contribution, an automatic data analysis software, which allows fast and precise quantitative data analysis of time-of-flight mass spectrometric data, is presented and discussed in detail. A crucial part of this software is a robust and fast peak finding algorithm with a consecutive numerical integration method allowing precise data analysis. We tested our analysis software with data from different time-of-flight mass spectrometers and different measurement campaigns thereof. The quantitative analysis of isotopes, using automatic data analysis, yields results with an accuracy of isotope ratios up to 100 ppm for a signal-to-noise ratio (SNR) of 10^4. We show that the accuracy of isotope ratios is in fact proportional to SNR^-1. Furthermore, we observe that the accuracy of isotope ratios is inversely proportional to the mass resolution. Additionally, we show that the accuracy of isotope ratios depends on the sample width T_s as T_s^0.5. Copyright © 2017 John Wiley & Sons, Ltd.
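The peak-find-then-integrate pipeline described above can be sketched in a few lines. This is a minimal stand-in, assuming a simulated spectrum and generic SciPy routines rather than the authors' software.

```python
# Sketch: locate peaks in a TOF spectrum and integrate them to form an isotope
# ratio. The spectrum, thresholds and windows are illustrative placeholders.
import numpy as np
from scipy.signal import find_peaks
from scipy.integrate import trapezoid

t = np.linspace(0, 100, 20000)                               # flight time (a.u.)
spectrum = (1e4 * np.exp(-0.5 * ((t - 40.0) / 0.05) ** 2)    # major isotope
          + 1e2 * np.exp(-0.5 * ((t - 41.0) / 0.05) ** 2))   # minor isotope
spectrum += np.abs(np.random.default_rng(1).normal(0, 1.0, t.size))  # noise floor

peaks, _ = find_peaks(spectrum, height=10.0, prominence=5.0)
areas = []
for p in peaks:
    lo, hi = p - 60, p + 60                 # fixed integration window (~6 sigma)
    areas.append(trapezoid(spectrum[lo:hi], t[lo:hi]))
print("isotope ratio estimate:", areas[1] / areas[0])
```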
Accuracy of four commonly used color vision tests in the identification of cone disorders.
Thiadens, Alberta A H J; Hoyng, Carel B; Polling, Jan Roelof; Bernaerts-Biskop, Riet; van den Born, L Ingeborgh; Klaver, Caroline C W
2013-04-01
To determine which color vision test is most appropriate for the identification of cone disorders. In a clinic-based study, four commonly used color vision tests were compared between patients with cone dystrophy (n = 37), controls with normal visual acuity (n = 35), and controls with low vision (n = 39) and legal blindness (n = 11). Main outcome measures were specificity, sensitivity, positive predictive value and discriminative accuracy of the Ishihara test, Hardy-Rand-Rittler (HRR) test, and the Lanthony and Farnsworth Panel D-15 tests. In the comparison between cone dystrophy and all controls, sensitivity, specificity and predictive value were highest for the HRR and Ishihara tests. When patients were compared to controls with normal vision, discriminative accuracy was highest for the HRR test (c-statistic for PD-axes 1, for T-axis 0.851). When compared to controls with poor vision, discriminative accuracy was again highest for the HRR test (c-statistic for PD-axes 0.900, for T-axis 0.766), followed by the Lanthony Panel D-15 test (c-statistic for PD-axes 0.880, for T-axis 0.500) and Ishihara test (c-statistic 0.886). Discriminative accuracies of all tests did not further decrease when patients were compared to controls who were legally blind. The HRR, Lanthony Panel D-15 and Ishihara all have a high discriminative accuracy to identify cone disorders, but the highest scores were for the HRR test. Poor visual acuity slightly decreased the accuracy of all tests. Our advice is to use the HRR test since this test also allows for evaluation of all three color axes and quantification of color defects.
Air Quality Forecasting through Different Statistical and Artificial Intelligence Techniques
NASA Astrophysics Data System (ADS)
Mishra, D.; Goyal, P.
2014-12-01
Urban air pollution forecasting has emerged as an acute problem in recent years because of severe environmental degradation due to the increase in harmful air pollutants in the ambient atmosphere. In this study, different types of statistical as well as artificial intelligence techniques are used for forecasting and analysis of air pollution over the Delhi urban area. These techniques are principal component analysis (PCA), multiple linear regression (MLR) and artificial neural network (ANN), and the forecasts are observed to be in good agreement with the concentrations observed by the Central Pollution Control Board (CPCB) at different locations in Delhi. However, such methods suffer from disadvantages: they provide limited accuracy because they are unable to predict the extreme points, i.e. the pollution maximum and minimum cut-offs cannot be determined using such an approach, and they are an inefficient approach for better output forecasting. With the advancement in technology and research, an alternative to the above traditional methods has been proposed: the coupling of statistical techniques with artificial intelligence (AI) can be used for forecasting purposes. The coupling of PCA, ANN and fuzzy logic is used for forecasting of air pollutants over the Delhi urban area. The statistical measures, e.g., correlation coefficient (R), normalized mean square error (NMSE), fractional bias (FB) and index of agreement (IOA), of the proposed model are observed to be in better agreement than those of all the other models. Hence, the coupling of statistical and artificial intelligence techniques can be used for the forecasting of air pollutants over an urban area.
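The evaluation statistics named in this abstract have standard definitions. A minimal sketch, assuming simple placeholder observation/prediction arrays:

```python
# Standard model-evaluation statistics: correlation (R), normalized mean square
# error (NMSE), fractional bias (FB) and Willmott's index of agreement (IOA).
import numpy as np

def eval_stats(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r = np.corrcoef(obs, pred)[0, 1]
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())
    fb = 2 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())
    ioa = 1 - np.sum((pred - obs) ** 2) / np.sum(
        (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return {"R": r, "NMSE": nmse, "FB": fb, "IOA": ioa}

# Placeholder concentrations (e.g., ug/m^3), not CPCB data:
print(eval_stats(obs=[110, 95, 130, 80], pred=[100, 90, 140, 85]))
```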
Combining accuracy assessment of land-cover maps with environmental monitoring programs
Stephen V. Stehman; Raymond L. Czaplewski; Sarah M. Nusser; Limin Yang; Zhiliang Zhu
2000-01-01
A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring...
NASA Astrophysics Data System (ADS)
Chu, Huaqiang; Gu, Mingyan; Consalvi, Jean-Louis; Liu, Fengshan; Zhou, Huaichun
2016-03-01
The effects of total pressure on gas radiation heat transfer are investigated in 1D parallel plate geometry containing isothermal and homogeneous media and an inhomogeneous and non-isothermal CO2-H2O mixture under conditions relevant to oxy-fuel combustion using the line-by-line (LBL), statistical narrow-band (SNB), statistical narrow-band correlated-k (SNBCK), weighted-sum-of-grey-gases (WSGG), and full-spectrum correlated-k (FSCK) models. The LBL calculations were conducted using the HITEMP2010 and CDSD-1000 databases, and the LBL results serve as the benchmark solution to evaluate the accuracy of the other models. Calculations of the SNB, SNBCK, and FSCK were conducted using both the 1997 EM2C SNB parameters and their recently updated 2012 parameters to investigate how the SNB model parameters affect the results under oxy-fuel combustion conditions at high pressures. The WSGG model considered is the one recently developed by Bordbar et al. [19] for oxy-fuel combustion based on LBL calculations using HITEMP2010. The total pressure considered ranges from 1 up to 30 atm. The total pressure significantly affects gas radiation transfer primarily through the increase in molecule number density and only slightly through spectral line broadening. Using the 1997 EM2C SNB model parameters, the accuracy of SNB and SNBCK is very good and remains essentially independent of the total pressure. When using the 2012 EM2C SNB model parameters, the SNB and SNBCK results are less accurate and their error increases with increasing total pressure. The WSGG model has the lowest accuracy and the best computational efficiency among the models investigated. The errors of both WSGG and FSCK using the 2012 EM2C SNB model parameters increase when the total pressure is increased from 1 to 10 atm, but remain nearly independent of the total pressure beyond 10 atm. When using the 1997 EM2C SNB model parameters, the accuracy of FSCK only slightly decreases with increasing total pressure.
Accuracy of Person-Fit Statistics: A Monte Carlo Study of the Influence of Aberrance Rates
ERIC Educational Resources Information Center
St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane
2011-01-01
Using a Monte Carlo experimental design, this research examined the relationship between answer patterns' aberrance rates and person-fit statistics (PFS) accuracy. It was observed that as the aberrance rate increased, the detection rates of PFS also increased until, in some situations, a peak was reached and then the detection rates of PFS…
Statistical Reference Datasets
National Institute of Standards and Technology Data Gateway
Statistical Reference Datasets (Web, free access) The Statistical Reference Datasets is also supported by the Standard Reference Data Program. The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of statistical software.
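A hedged sketch of how such a certified dataset would be used in practice: fit a straight line with the routine under test and report the discrepancy from the certified coefficients in digits of accuracy. The data and "certified" values below are hypothetical placeholders, not actual NIST values.

```python
# Sketch: validate a least-squares routine against certified reference results.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # placeholder predictor values
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])         # placeholder responses
A = np.column_stack([np.ones_like(x), x])
b0, b1 = np.linalg.lstsq(A, y, rcond=None)[0]    # routine under evaluation

certified_b0, certified_b1 = 0.05, 2.00          # hypothetical certified values
# Agreement is often reported as log relative error (LRE), i.e. matching digits:
lre = -np.log10(abs(b1 - certified_b1) / abs(certified_b1))
print(f"b0={b0:.6f}, b1={b1:.6f}, LRE(b1)={lre:.1f} digits")
```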
NASA Astrophysics Data System (ADS)
Young, S. L.; Kress, B. T.; Rodriguez, J. V.; McCollough, J. P.
2013-12-01
Operational specifications of space environmental hazards can be an important input used by decision makers. Ideally the specification would come from on-board sensors, but for satellites where that capability is not available another option is to map data from remote observations to the location of the satellite. This requires a model of the physical environment and an understanding of its accuracy for mapping applications. We present a statistical comparison between magnetic field model mappings of solar energetic particle observations made by NOAA's Geostationary Operational Environmental Satellites (GOES) to the location of the Combined Release and Radiation Effects Satellite (CRRES). Because CRRES followed a geosynchronous transfer orbit which precessed in local time this allows us to examine the model accuracy between LEO and GEO orbits across a range of local times. We examine the accuracy of multiple magnetic field models using a variety of statistics and examine their utility for operational purposes.
Assessing the accuracy and stability of variable selection ...
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used, or stepwise procedures are employed which iteratively add/remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating dataset consists of the good/poor condition of n=1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p=212) of landscape features from the StreamCat dataset. Two types of RF models are compared: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backwards elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substanti…
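A minimal sketch of the backwards-elimination loop on out-of-bag (OOB) accuracy, assuming synthetic data and scikit-learn in place of the authors' implementation:

```python
# Sketch: backwards elimination for a random forest driven by OOB accuracy.
# Synthetic stand-ins for the stream-condition labels and landscape predictors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1365, n_features=30, n_informative=8,
                           random_state=0)
keep = list(range(X.shape[1]))
while len(keep) > 5:
    rf = RandomForestClassifier(n_estimators=200, oob_score=True,
                                random_state=0).fit(X[:, keep], y)
    print(f"{len(keep):3d} predictors, OOB accuracy = {rf.oob_score_:.3f}")
    drop = np.argmin(rf.feature_importances_)   # least important predictor
    keep.pop(drop)
```

If OOB accuracy stays flat while predictors are removed, that echoes the abstract's finding that RF is robust to many low-importance variables.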
Sharma, Reetika; Verma, Neelam; Kaushal, Vijay; Sharma, Dev Raj; Sharma, Dhruv
2017-01-01
The study is undertaken to correlate the fine-needle aspiration cytology (FNAC) findings with histopathology in a spectrum of thyroid lesions and to find the diagnostic accuracy of fine-needle aspiration (FNA) so that unnecessary thyroidectomies can be avoided in benign lesions. This study was carried out over a period of 1 year (May 1, 2012-April 30, 2013). FNA specimens obtained from 200 patients were analyzed. Of these, only 40 patients underwent surgery and their thyroid specimens were subjected to histopathological examination. The age of the patients ranged from 9 to 82 years, with the mean age being 43 years. There was a female preponderance, with the male to female ratio being 1:7. On cytology, out of 200 cases, 148 (74%) were benign, 25 (12.5%) were malignant, 16 (8%) were indeterminate, and 11 (5.5%) were nondiagnostic. On histopathology, 21 (52.5%) cases were benign and 19 (47.5%) were malignant. The statistical analysis of cytohistological correlation for both benign and malignant lesions revealed sensitivity, specificity, and diagnostic accuracy of 84%, 100% and 90%, respectively. FNAC is a minimally invasive, highly accurate and cost-effective procedure for the assessment of patients with thyroid lesions and has high sensitivity and specificity. It acts as a good screening test and avoids unnecessary thyroidectomies.
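Sensitivity, specificity and diagnostic accuracy figures like these come from a 2x2 cytology-versus-histology table. A minimal sketch, with illustrative counts chosen only to produce figures of similar magnitude, not the study's exact table:

```python
# 2x2 diagnostic accuracy computation (counts are illustrative placeholders).
tp, fn = 16, 3    # malignant on histology: FNAC positive / FNAC negative
tn, fp = 20, 1    # benign on histology:    FNAC negative / FNAC positive

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sens={sensitivity:.0%}  spec={specificity:.0%}  acc={accuracy:.0%}")
```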
Koh, D-M; Collins, D J; Wallace, T; Chau, I; Riddell, A M
2012-07-01
To compare the diagnostic accuracy of gadolinium-ethoxybenzyl-diethylenetriaminepentaacetic acid (Gd-EOB-DTPA)-enhanced MRI, diffusion-weighted MRI (DW-MRI) and a combination of both techniques for the detection of colorectal hepatic metastases. 72 patients with suspected colorectal liver metastases underwent Gd-EOB-DTPA MRI and DW-MRI. Images were retrospectively reviewed with unenhanced T(1) and T(2) weighted images as Gd-EOB-DTPA image set, DW-MRI image set and combined image set by two independent radiologists. Each lesion detected was scored for size, location and likelihood of metastasis, and compared with surgery and follow-up imaging. Diagnostic accuracy was compared using receiver operating characteristics and interobserver agreement by kappa statistics. 417 lesions (310 metastases, 107 benign) were found in 72 patients. For both readers, diagnostic accuracy using the combined image set was higher [area under the curve (Az)=0.96, 0.97] than Gd-EOB-DTPA image set (Az=0.86, 0.89) or DW-MRI image set (Az=0.93, 0.92). Using combined image set improved identification of liver metastases compared with Gd-EOB-DTPA image set (p<0.001) or DW-MRI image set (p<0.001). There was very good interobserver agreement for lesion classification (κ=0.81-0.88). Combining DW-MRI with Gd-EOB-DTPA-enhanced T(1) weighted MRI significantly improved the detection of colorectal liver metastases.
Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric
2010-01-01
It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
Hand and goods judgment algorithm based on depth information
NASA Astrophysics Data System (ADS)
Li, Mingzhu; Zhang, Jinsong; Yan, Dan; Wang, Qin; Zhang, Ruiqi; Han, Jing
2016-03-01
A tablet computer with a depth camera and a color camera is mounted on a traditional shopping cart. The inside of the shopping cart is observed by the two cameras. In the shopping cart monitoring field, it is very important to determine whether a customer's hand moves goods into or out of the shopping cart. This paper establishes a basic framework for judging whether a hand is empty; it includes a hand extraction process based on the depth information, a skin color model building process based on WPCA (Weighted Principal Component Analysis), an algorithm for judging handheld products based on motion and skin color information, and a statistical process. In this framework, the first step ensures the integrity of the hand information and effectively avoids the influence of sleeves and other debris; the second step accurately extracts skin color and eliminates similar-color interference, is little affected by lighting, and has the advantages of fast computation speed and high efficiency; and the third step has the advantage of greatly reducing noise interference and improving accuracy.
How Accurate Are Infrared Luminosities from Monochromatic Photometric Extrapolation?
NASA Astrophysics Data System (ADS)
Lin, Zesen; Fang, Guanwen; Kong, Xu
2016-12-01
Template-based extrapolations from only one photometric band can be a cost-effective method to estimate the total infrared (IR) luminosities (L_IR) of galaxies. By utilizing multi-wavelength data covering 0.35-500 μm in the GOODS-North and GOODS-South fields, we investigate the accuracy of this monochromatic extrapolated L_IR based on three IR spectral energy distribution (SED) templates out to z ~ 3.5. We find that the Chary & Elbaz template provides the best estimate of L_IR in Herschel/Photodetector Array Camera and Spectrometer (PACS) bands, while the Dale & Helou template performs best in Herschel/Spectral and Photometric Imaging Receiver (SPIRE) bands. To estimate L_IR, we suggest that extrapolations from the longest available PACS band based on the Chary & Elbaz template can be a good estimator. Moreover, if the PACS measurement is unavailable, extrapolations from SPIRE observations based on the Dale & Helou template can also provide a statistically unbiased estimate for galaxies at z ≲ 2. The emission in the rest-frame 10-100 μm range of the IR SED can be well described by all three templates, but only the Dale & Helou template shows a nearly unbiased estimate of the emission of the rest-frame submillimeter part.
A Priori Estimation of Organic Reaction Yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emami, Fateme S.; Vahid, Amir; Wylie, Elizabeth K.
2015-07-21
A thermodynamically guided calculation of free energies of substrate and product molecules allows for the estimation of the yields of organic reactions. The non-ideality of the system and the solvent effects are taken into account through the activity coefficients calculated at the molecular level by perturbed-chain statistical associating fluid theory (PC-SAFT). The model is iteratively trained using a diverse set of reactions with yields that have been reported previously. This trained model can then estimate a priori the yields of reactions not included in the training set with an accuracy of ca. ±15%. This ability has the potential to translate into significant economic savings through the selection and then execution of only those reactions that can proceed in good yields.
NASA Astrophysics Data System (ADS)
Dolimont, Adrien; Michotte, Sebastien; Rivière-Lorphèvre, Edouard; Ducobu, François; Vivès, Solange; Godet, Stéphane; Henkes, Tom; Filippi, Enrico
2017-10-01
The use of additive manufacturing processes keeps growing in the aerospace and biomedical industries. Among the numerous existing technologies, the Electron Beam Melting process has advantages (good dimensional accuracy, fully dense parts) and disadvantages (powder handling, support structures, high surface roughness). Analyses of the surface characteristics are of interest for gaining a better understanding of EBM operations, but such analyses are not often found in the literature. The main goal of this study is to determine whether it is possible to improve the surface roughness by modifying some parameters of the process (scan speed function, number of contours, order of contours, etc.) on samples with different thicknesses. The experimental work on the surface roughness leads to a statistical analysis of 586 measurements of EBM simple-geometry parts.
NASA Astrophysics Data System (ADS)
Sehad, Mounir; Lazri, Mourad; Ameur, Soltane
2017-03-01
In this work, a new rainfall estimation technique based on the high spatial and temporal resolution of the Spinning Enhanced Visible and Infra Red Imager (SEVIRI) aboard the Meteosat Second Generation (MSG) is presented. This work proposes an efficient rainfall estimation scheme based on two multiclass support vector machine (SVM) algorithms: SVM_D for daytime and SVM_N for night-time rainfall estimation. Both SVM models are trained using relevant rainfall parameters based on optical, microphysical and textural cloud properties. The cloud parameters are derived from the spectral channels of the SEVIRI MSG radiometer. The 3-hourly and daily accumulated rainfall are derived from the 15-min rainfall estimates given by the SVM classifiers for each MSG observation image pixel. The SVMs were trained with ground meteorological radar precipitation scenes recorded from November 2006 to March 2007 over the north of Algeria, located in the Mediterranean region. Further, the SVM_D and SVM_N models were used to estimate 3-hourly and daily rainfall using a data set gathered from November 2010 to March 2011 over northern Algeria. The results were validated against collocated rainfall observed by a rain gauge network. Indeed, the statistical scores given by the correlation coefficient, bias, root mean square error and mean absolute error showed good accuracy of rainfall estimates by the present technique. Moreover, rainfall estimates of our technique were compared with two high-accuracy rainfall estimation methods based on MSG SEVIRI imagery, namely a random forests (RF) based approach and an artificial neural network (ANN) based technique. The findings of the present technique indicate a higher correlation coefficient (3-hourly: 0.78; daily: 0.94) and lower mean absolute error and root mean square error values. The results show that the new technique assigns 3-hourly and daily rainfall with good accuracy, better than the ANN technique and the RF model.
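A sketch of the day/night two-classifier scheme, assuming placeholder feature arrays in place of the SEVIRI-derived optical, microphysical and textural parameters:

```python
# Sketch: two multiclass SVMs, one for daytime and one for night-time pixels.
# Features and labels are synthetic placeholders, not SEVIRI retrievals.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_day = rng.normal(size=(2000, 6))       # optical + microphysical + textural
y_day = rng.integers(0, 3, 2000)         # rain classes, e.g. none/light/heavy
X_night = rng.normal(size=(2000, 4))     # IR-only features available at night
y_night = rng.integers(0, 3, 2000)

svm_d = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X_day, y_day)
svm_n = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X_night, y_night)

# At estimation time, route each pixel to the day or night model:
pixel_features, is_day = rng.normal(size=(1, 6)), True
model = svm_d if is_day else svm_n
print("rain class:", model.predict(pixel_features)[0])
```

Per-pixel class outputs at each 15-min slot would then be accumulated into the 3-hourly and daily totals described in the abstract.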
Guo, Jing; Simon, James H; Sedghizadeh, Parish; Soliman, Osman N; Chapman, Travis; Enciso, Reyes
2013-12-01
The purpose of this study was to evaluate the reliability and accuracy of cone-beam computed tomographic (CBCT) imaging against the histopathologic diagnosis for the differential diagnosis of periapical cysts (cavitated lesions) from (solid) granulomas. Thirty-six periapical lesions were imaged using CBCT scans. Apicoectomy surgeries were conducted for histopathological examination. Evaluator 1 examined each CBCT scan for the presence of 6 radiologic characteristics of a cyst (ie, location, periphery, shape, internal structure, effects on surrounding structure, and perforation of the cortical plate). Not every cyst showed all radiologic features (eg, not all cysts perforate the cortical plate). For the purpose of finding the minimum number of diagnostic criteria present in a scan to diagnose a lesion as a cyst, we conducted 6 receiver operating characteristic curve analyses comparing CBCT diagnoses with the histopathologic diagnosis. Two other independent evaluators examined the CBCT lesions. Statistical tests were conducted to examine the accuracy, inter-rater reliability, and intrarater reliability of CBCT images. Findings showed that a score of ≥4 positive findings was the optimal scoring system. The accuracies of differential diagnoses of 3 evaluators were moderate (area under the curve = 0.76, 0.70, and 0.69 for evaluators 1, 2, and 3, respectively). The inter-rater agreement of the 3 evaluators was excellent (α = 0.87). The intrarater agreement was good to excellent (κ = 0.71, 0.76, and 0.77). CBCT images can provide a moderately accurate diagnosis between cysts and granulomas. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
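The "≥4 of 6 criteria" rule is a threshold on a count score, which is exactly what an ROC analysis optimizes. A hedged sketch with made-up scores and labels:

```python
# Sketch: ROC analysis of a 0-6 "positive radiologic criteria" count score.
# Scores and histology labels are invented stand-ins for the 36 lesions.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
histology_cyst = rng.integers(0, 2, 36)                      # 1 = cyst, 0 = granuloma
score = np.clip(rng.poisson(2 + 2 * histology_cyst), 0, 6)   # criteria present per scan

fpr, tpr, thresholds = roc_curve(histology_cyst, score)
youden = np.argmax(tpr - fpr)                                # Youden's J cut-off
print("AUC =", round(roc_auc_score(histology_cyst, score), 2),
      "| optimal rule: score >=", thresholds[youden])
```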
DESIGN AND ANALYSIS FOR THEMATIC MAP ACCURACY ASSESSMENT: FUNDAMENTAL PRINCIPLES
Before being used in scientific investigations and policy decisions, thematic maps constructed from remotely sensed data should be subjected to a statistically rigorous accuracy assessment. The three basic components of an accuracy assessment are: 1) the sampling design used to s...
Limits on the Accuracy of Linking. Research Report. ETS RR-10-22
ERIC Educational Resources Information Center
Haberman, Shelby J.
2010-01-01
Sampling errors limit the accuracy with which forms can be linked. Limitations on accuracy are especially important in testing programs in which a very large number of forms are employed. Standard inequalities in mathematical statistics may be used to establish lower bounds on the achievable linking accuracy. To illustrate results, a variety of…
Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy
ERIC Educational Resources Information Center
Maris, Gunter; van der Maas, Han
2012-01-01
Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…
A Balanced Approach to Adaptive Probability Density Estimation.
Kovacs, Julio A; Helmick, Cailee; Wriggers, Willy
2017-01-01
Our development of a Fast (Mutual) Information Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it on molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.
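A toy version of the nearest-neighbour-driven adaptive bandwidth idea follows. This is our own minimal sketch, not the authors' BADE algorithm: each sample gets a local Gaussian bandwidth set by its k-th nearest neighbour, so sparse regions are smoothed more than dense ones.

```python
# Minimal adaptive kernel density estimate with k-NN bandwidths (1D).
import numpy as np
from scipy.spatial import cKDTree

def adaptive_kde(samples, grid, k=15):
    samples = np.asarray(samples, float)
    tree = cKDTree(samples[:, None])
    h = tree.query(samples[:, None], k=k + 1)[0][:, -1]  # k-NN distance per point
    h = np.maximum(h, 1e-12)                             # guard against duplicates
    z = (grid[:, None] - samples[None, :]) / h[None, :]
    return np.mean(np.exp(-0.5 * z**2) / (h * np.sqrt(2 * np.pi)), axis=1)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 900), rng.normal(6, 0.3, 100)])  # uneven sample
grid = np.linspace(-4, 8, 400)
density = adaptive_kde(x, grid)
print("integrates to ~1:", np.trapz(density, grid))
```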
A high-frequency warm shallow water acoustic communications channel model and measurements.
Chitre, Mandar
2007-11-01
Underwater acoustic communication is a core enabling technology with applications in ocean monitoring using remote sensors and autonomous underwater vehicles. One of the more challenging underwater acoustic communication channels is the medium-range very shallow warm-water channel, common in tropical coastal regions. This channel exhibits two key features-extensive time-varying multipath and high levels of non-Gaussian ambient noise due to snapping shrimp-both of which limit the performance of traditional communication techniques. A good understanding of the communications channel is key to the design of communication systems. It aids in the development of signal processing techniques as well as in the testing of the techniques via simulation. In this article, a physics-based channel model for the very shallow warm-water acoustic channel at high frequencies is developed, which are of interest to medium-range communication system developers. The model is based on ray acoustics and includes time-varying statistical effects as well as non-Gaussian ambient noise statistics observed during channel studies. The model is calibrated and its accuracy validated using measurements made at sea.
Statistical analysis of the MODIS atmosphere products for the Tomsk region
NASA Astrophysics Data System (ADS)
Afonin, Sergey V.; Belov, Vladimir V.; Engel, Marina V.
2005-10-01
The paper presents the results of using the MODIS Atmosphere Products satellite information to study the atmospheric characteristics (aerosol and water vapor) of the Tomsk Region (56-61°N, 75-90°E) in 2001-2004. The satellite data were received from the NASA Goddard Distributed Active Archive Center (DAAC) through the Internet. To use satellite data for the solution of scientific and applied problems, it is very important to know their accuracy. Although results of validation of the MODIS data are already available in the literature, we decided to carry out additional investigations for the Tomsk Region. The paper presents the results of validation of the aerosol optical thickness (AOT) and total column precipitable water (TCPW), which are in good agreement with the test data. The statistical analysis revealed some interesting facts. For example, analyzing the data on the spatial distribution of the average seasonal values of AOT or TCPW for 2001-2003 in the Tomsk Region, we established that, instead of the expected spatial homogeneity of these distributions, they have similar spatial structures.
Deriving photometric redshifts using fuzzy archetypes and self-organizing maps - II. Implementation
NASA Astrophysics Data System (ADS)
Speagle, Joshua S.; Eisenstein, Daniel J.
2017-07-01
With an eye towards the computational requirements of future large-scale surveys such as Euclid and the Large Synoptic Survey Telescope (LSST) that will require photometric redshifts (photo-z's) for ≳ 10^9 objects, we investigate a variety of ways that 'fuzzy archetypes' can be used to improve photometric redshifts and explore their respective statistical interpretations. We characterize their relative performance using an idealized LSST ugrizY and Euclid YJH mock catalogue of 10 000 objects spanning z = 0-6 at Y = 24 mag. We find most schemes are able to robustly identify redshift probability distribution functions that are multimodal and/or poorly constrained. Once these objects are flagged and removed, the results are generally in good agreement with the strict accuracy requirements necessary to meet Euclid weak lensing goals for most redshifts between 0.8 ≲ z ≲ 2. These results demonstrate the statistical robustness and flexibility that can be gained by combining template-fitting and machine-learning methods and provide useful insights into how astronomers can further exploit the colour-redshift relation.
Crossing statistic: reconstructing the expansion history of the universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafieloo, Arman, E-mail: arman@ewha.ac.kr
2012-08-01
We show that by combining the Crossing Statistic [1,2] and the Smoothing method [3-5] one can reconstruct the expansion history of the universe with very high precision without assuming any prior on cosmological quantities such as the equation of state of dark energy. We show that the presented method performs very well in reconstructing the expansion history of the universe independent of the underlying models, and it works well even for non-trivial dark energy models with fast or slow changes in the equation of state of dark energy. The accuracy of the reconstructed quantities, along with the method's independence of any prior or assumption, gives the proposed method advantages over the other non-parametric methods proposed before in the literature. Applying the method to the Union 2.1 supernovae combined with WiggleZ BAO data, we present the reconstructed results and test the consistency of the two data sets in a model-independent manner. Results show that the latest available supernovae and BAO data are in good agreement with each other and the spatially flat ΛCDM model is in concordance with the current data.
Experimentally probing topological order and its breakdown through modular matrices
NASA Astrophysics Data System (ADS)
Luo, Zhihuang; Li, Jun; Li, Zhaokai; Hung, Ling-Yan; Wan, Yidun; Peng, Xinhua; Du, Jiangfeng
2018-02-01
The modern concept of phases of matter has undergone tremendous developments since the first observation of topologically ordered states in fractional quantum Hall systems in the 1980s. In this paper, we explore the following question: in principle, how much detail of the physics of topological orders can be observed using state of the art technologies? We find that using surprisingly little data, namely the toric code Hamiltonian in the presence of generic disorders and detuning from its exactly solvable point, the modular matrices--characterizing anyonic statistics that are some of the most fundamental fingerprints of topological orders--can be reconstructed with very good accuracy solely by experimental means. This is an experimental realization of these fundamental signatures of a topological order, a test of their robustness against perturbations, and a proof of principle--that current technologies have attained the precision to identify phases of matter and, as such, probe an extended region of phase space around the soluble point before its breakdown. Given the special role of anyonic statistics in quantum computation, our work promises myriad applications both in probing and realistically harnessing these exotic phases of matter.
ERIC Educational Resources Information Center
St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane
2009-01-01
To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…
Estimation of diagnostic test accuracy without full verification: a review of latent class methods
Collins, John; Huynh, Minh
2014-01-01
The performance of a diagnostic test is best evaluated against a reference test that is without error. For many diseases, this is not possible, and an imperfect reference test must be used. However, diagnostic accuracy estimates may be biased if inaccurately verified status is used as the truth. Statistical models have been developed to handle this situation by treating disease as a latent variable. In this paper, we conduct a systematized review of statistical methods using latent class models for estimating test accuracy and disease prevalence in the absence of complete verification. PMID:24910172
Statistical characterization of short wind waves from stereo images of the sea surface
NASA Astrophysics Data System (ADS)
Mironov, Alexey; Yurovskaya, Maria; Dulov, Vladimir; Hauser, Danièle; Guérin, Charles-Antoine
2013-04-01
We propose a methodology to extract short-scale statistical characteristics of the sea surface topography by means of stereo image reconstruction. The possibilities and limitations of the technique are discussed and tested on a data set acquired from an oceanographic platform in the Black Sea. The analysis shows that reconstruction of the topography based on the stereo method is an efficient way to derive non-trivial statistical properties of short and intermediate surface waves (say, from 1 centimeter to 1 meter). Most technical issues pertaining to this type of dataset (limited range of scales, lacunarity of data or irregular sampling) can be partially overcome by appropriate processing of the available points. The proposed technique also allows one to avoid linear interpolation, which dramatically corrupts the properties of retrieved surfaces. The processing technique requires that the field of elevations be polynomially detrended, which has the effect of filtering out the large scales. Hence the statistical analysis can only address the small-scale components of the sea surface. The precise cut-off wavelength, which is approximately half the patch size, can be obtained by applying a high-pass frequency filter to the reference gauge time records. The results obtained for the one- and two-point statistics of small-scale elevations are shown to be consistent, at least in order of magnitude, with the corresponding gauge measurements as well as other experimental measurements available in the literature. The calculation of the structure functions provides a powerful tool to investigate spectral and statistical properties of the field of elevations. Experimental parametrization of the third-order structure function, the so-called skewness function, is one of the most important and original outcomes of this study. This function is of primary importance in analytical scattering models of the sea surface and was up to now unavailable in field conditions. Due to the lack of precise reference measurements for the small-scale wave field, we could not quantify exactly the accuracy of the retrieval technique. However, it appeared clearly that the obtained accuracy is good enough for the estimation of second-order statistical quantities (such as the correlation function), acceptable for third-order quantities (such as the skewness function) and insufficient for fourth-order quantities (such as kurtosis). Therefore, the stereo technique at the present stage should not be thought of as a self-contained universal tool to characterize the surface statistics. Instead, it should be used in conjunction with other well-calibrated but sparse reference measurements (such as wave gauges) for cross-validation and calibration. It then completes the statistical analysis in as much as it provides a snapshot of the three-dimensional field and allows for the evaluation of higher-order spatial statistics.
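The structure functions mentioned above are lag moments of elevation differences, S_n(r) = <(eta(x+r) - eta(x))^n>. A minimal sketch on a synthetic detrended profile:

```python
# Sketch: order-n structure functions of a detrended 1D elevation profile.
# The profile here is synthetic noise, not stereo-reconstructed topography.
import numpy as np

def structure_function(eta, lags, order):
    x = np.arange(eta.size)
    eta = eta - np.polyval(np.polyfit(x, eta, 1), x)   # remove linear trend
    return np.array([np.mean((eta[lag:] - eta[:-lag]) ** order)
                     for lag in lags])

rng = np.random.default_rng(0)
eta = np.cumsum(rng.normal(size=4096)) * 1e-3          # stand-in elevation series
lags = np.arange(1, 200, 5)
S2 = structure_function(eta, lags, 2)                  # second order: roughness
S3 = structure_function(eta, lags, 3)                  # third order: skewness function
print("S2[0], S3[0] =", S2[0], S3[0])
```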
Confronting Passive and Active Sensors with Non-Gaussian Statistics
Rodríguez-Gonzálvez, Pablo.; Garcia-Gago, Jesús.; Gomez-Lahoz, Javier.; González-Aguilera, Diego.
2014-01-01
This paper has two motivations: firstly, to compare the Digital Surface Models (DSM) derived by passive (digital camera) and by active (terrestrial laser scanner) remote sensing systems when applied to specific architectural objects, and secondly, to test how well classic Gaussian statistics, with its Least Squares principle, adapts to data sets where asymmetrical gross errors may appear, and whether this approach should be changed for a non-parametric one. The field of geomatic technology automation is immersed in a highly demanding competition in which any innovation by one of the contenders immediately challenges the opponents to propose a better improvement. Nowadays, we seem to be witnessing an improvement of terrestrial photogrammetry and its integration with computer vision to overcome the performance limitations of laser scanning methods. Through this contribution some of the issues of this “technological race” are examined from the point of view of photogrammetry. A new software package is introduced and an experimental test is designed, performed and assessed to try to cast some light on this thrilling match. For the case considered in this study, the results show good agreement between both sensors, despite considerable asymmetry. This asymmetry suggests that the standard Normal parameters are not adequate to assess this type of data, especially when accuracy is of importance. In this case, the standard deviation fails to provide a good estimation of the results, whereas the Median Absolute Deviation and the Biweight Midvariance are more appropriate measures. PMID:25196104
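The two robust scale estimators named in the abstract have standard definitions. A short sketch showing how they resist an asymmetric outlier tail, with invented error data:

```python
# Median Absolute Deviation (MAD) and biweight midvariance, as commonly defined.
import numpy as np

def mad(x):
    return np.median(np.abs(x - np.median(x)))

def biweight_midvariance(x, c=9.0):
    m = np.median(x)
    u = (x - m) / (c * mad(x))
    w = np.abs(u) < 1                       # points inside the tuning window
    num = np.sum(w * (x - m) ** 2 * (1 - u ** 2) ** 4)
    den = np.sum(w * (1 - u ** 2) * (1 - 5 * u ** 2)) ** 2
    return x.size * num / den

rng = np.random.default_rng(0)
# Mostly Gaussian errors plus an asymmetric heavy tail of gross errors:
errors = np.concatenate([rng.normal(0, 1, 950), rng.exponential(20, 50)])
print("std =", errors.std(), "| MAD =", mad(errors),
      "| sqrt(BWMV) =", np.sqrt(biweight_midvariance(errors)))
```

The standard deviation is inflated by the tail, while the two robust measures stay near the scale of the Gaussian core, which is the abstract's point.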
Kim, Jung Hoon; Lee, Jae Young; Baek, Jee Hyun; Eun, Hyo Won; Kim, Young Jae; Han, Joon Koo; Choi, Byung Ihn
2015-02-01
OBJECTIVE. The purposes of this study were to compare the staging accuracy of high-resolution sonography (HRUS) with combined low- and high-MHz transducers with that of conventional sonography for gallbladder cancer and to investigate the differences in the imaging findings of neoplastic and nonneoplastic gallbladder polyps. MATERIALS AND METHODS. Our study included 37 surgically proven gallbladder cancers (T1a = 7, T1b = 2, T2 = 22, T3 = 6), including 15 malignant neoplastic polyps, and 73 surgically proven polyps (neoplastic = 31, nonneoplastic = 42) that underwent HRUS and conventional transabdominal sonography. Two radiologists assessed T category and predefined polyp findings on HRUS and conventional transabdominal sonography. Statistical analyses were performed using chi-square and McNemar tests. RESULTS. The diagnostic accuracy for the T category was T1a = 92-95%, T1b = 89-95%, T2 = 78-86%, and T3 = 84-89%, all with good agreement (κ = 0.642) using HRUS. The diagnostic accuracy for differentiating T1 from T2 or greater was 92% and 89% on HRUS and 65% and 70% with conventional transabdominal sonography. Statistically common findings for neoplastic polyps included size greater than 1 cm, single lobular surface, vascular core, hypoechoic polyp, and hypoechoic foci (p < 0.05). In the differential diagnosis of gallbladder polyps, HRUS depicted internal echo foci more clearly than conventional transabdominal sonography (39 vs 21). A polyp size greater than 1 cm was independently associated with a neoplastic polyp (odds ratio = 7.5, p = 0.02). The AUC of a polyp size greater than 1 cm was 0.877. The sensitivity and specificity were 66.67% and 89.13%, respectively. CONCLUSION. HRUS is a simple method that enables accurate T categorization of gallbladder carcinoma. It provides high-resolution images of gallbladder polyps and may have a role in stratifying the risk for malignancy.
NASA Astrophysics Data System (ADS)
Kez, V.; Liu, F.; Consalvi, J. L.; Ströhle, J.; Epple, B.
2016-03-01
Oxy-fuel combustion is a promising CO2 capture technology for combustion systems. This process is characterized by much higher CO2 concentrations in the combustion system compared to conventional air-fuel combustion. To accurately predict the enhanced thermal radiation in oxy-fuel combustion, it is essential to take into account the non-gray nature of gas radiation. In this study, radiation heat transfer in a 3D model gas turbine combustor under two test cases at 20 atm total pressure was calculated by various non-gray gas radiation models, including the statistical narrow-band (SNB) model, the statistical narrow-band correlated-k (SNBCK) model, the wide-band correlated-k (WBCK) model, the full spectrum correlated-k (FSCK) model, and several weighted-sum-of-gray-gases (WSGG) models. Calculations of SNB, SNBCK, and FSCK were conducted using the updated EM2C SNB model parameters. Results of the SNB model are considered as the benchmark solution to evaluate the accuracy of the other models considered. Results of SNBCK and FSCK are in good agreement with the benchmark solution. The WBCK model is less accurate than SNBCK or FSCK. Considering the three formulations of the WBCK model, the multiple-gases formulation is the best choice regarding accuracy and computational cost. The WSGG model with the parameters of Bordbar et al. (2014) [20] is the most accurate of the three investigated WSGG models. Use of the gray WSGG formulation leads to significant deviations from the benchmark data and should not be applied to predict radiation heat transfer in oxy-fuel combustion systems. A best practice for incorporating state-of-the-art gas radiation models for high accuracy of radiation heat transfer calculations at minimal increase in computational cost in CFD simulation of oxy-fuel combustion systems for pressure path lengths up to about 10 bar m is suggested.
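For orientation, a WSGG-type model reduces the full spectrum to a few gray gases via epsilon = sum_i a_i(T) * (1 - exp(-kappa_i * pL)). The sketch below uses placeholder coefficients purely for illustration, not any published WSGG parameter set:

```python
# Hedged sketch of a weighted-sum-of-gray-gases (WSGG) total emissivity.
# kappa and the a_i(T) polynomials are invented; real parameter sets also
# include a transparent-gas weight so that the a_i sum correctly.
import numpy as np

kappa = np.array([0.19, 1.8, 19.0])      # gray-gas absorption coeffs, 1/(atm*m)
poly = np.array([[0.30, 5e-5],           # a_i(T) = c0 + c1*T (placeholder fits)
                 [0.25, 2e-5],
                 [0.12, -1e-5]])

def wsgg_emissivity(T, pL):
    a = poly[:, 0] + poly[:, 1] * T      # temperature-dependent weights
    return np.sum(a * (1.0 - np.exp(-kappa * pL)))

print(wsgg_emissivity(T=1500.0, pL=1.0))  # pL = partial-pressure path, atm*m
```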
A Generic multi-dimensional feature extraction method using multiobjective genetic programming.
Zhang, Yang; Rockett, Peter I
2009-01-01
In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.
Dalal, Ankur; Moss, Randy H.; Stanley, R. Joe; Stoecker, William V.; Gupta, Kapil; Calcara, David A.; Xu, Jin; Shrestha, Bijaya; Drugge, Rhett; Malters, Joseph M.; Perry, Lindall A.
2011-01-01
Dermoscopy, also known as dermatoscopy or epiluminescence microscopy (ELM), permits visualization of features of pigmented melanocytic neoplasms that are not discernable by examination with the naked eye. White areas, prominent in early malignant melanoma and melanoma in situ, contribute to early detection of these lesions. An adaptive detection method has been investigated to identify white and hypopigmented areas based on lesion histogram statistics. Using the Euclidean distance transform, the lesion is segmented in concentric deciles. Overlays of the white areas on the lesion deciles are determined. Calculated features of automatically detected white areas include lesion decile ratios, normalized number of white areas, absolute and relative size of largest white area, relative size of all white areas, and white area eccentricity, dispersion, and irregularity. Using a back-propagation neural network, the white area statistics yield over 95% diagnostic accuracy of melanomas from benign nevi. White and hypopigmented areas in melanomas tend to be central or paracentral. The four most powerful features on multivariate analysis are lesion decile ratios. Automatic detection of white and hypopigmented areas in melanoma can be accomplished using lesion statistics. A neural network can achieve good discrimination of melanomas from benign nevi using these areas. Lesion decile ratios are useful white area features. PMID:21074971
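The concentric-decile segmentation via the Euclidean distance transform can be sketched directly with SciPy. The lesion mask and "white area" below are toy stand-ins for segmented dermoscopy images:

```python
# Sketch: split a binary lesion mask into concentric deciles by depth from the
# border, then measure the white-area fraction per decile.
import numpy as np
from scipy.ndimage import distance_transform_edt

yy, xx = np.mgrid[:200, :200]
lesion = (yy - 100) ** 2 + (xx - 100) ** 2 < 80 ** 2       # toy circular lesion

dist = distance_transform_edt(lesion)                      # depth from lesion border
edges = np.quantile(dist[lesion], np.linspace(0, 1, 11))   # decile boundaries
decile = np.digitize(dist, edges[1:-1])                    # 0 = outermost ring
decile[~lesion] = -1                                       # mark background

white = dist > 0.7 * dist.max()                            # stand-in "white area"
for d in range(10):
    print(f"decile {d}: white fraction = {white[decile == d].mean():.2f}")
```

Ratios of white-area coverage between deciles correspond to the "lesion decile ratios" the abstract identifies as the strongest features.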
Wojcik, Pawel Jerzy; Pereira, Luís; Martins, Rodrigo; Fortunato, Elvira
2014-01-13
An efficient mathematical strategy in the field of solution processed electrochromic (EC) films is outlined as a combination of an experimental work, modeling, and information extraction from massive computational data via statistical software. Design of Experiment (DOE) was used for statistical multivariate analysis and prediction of mixtures through a multiple regression model, as well as the optimization of a five-component sol-gel precursor subjected to complex constraints. This approach significantly reduces the number of experiments to be realized, from 162 in the full factorial (L=3) and 72 in the extreme vertices (D=2) approach down to only 30 runs, while still maintaining a high accuracy of the analysis. By carrying out a finite number of experiments, the empirical modeling in this study shows reasonably good prediction ability in terms of the overall EC performance. An optimized ink formulation was employed in a prototype of a passive EC matrix fabricated in order to test and trial this optically active material system together with a solid-state electrolyte for the prospective application in EC displays. Coupling of DOE with chromogenic material formulation shows the potential to maximize the capabilities of these systems and ensures increased productivity in many potential solution-processed electrochemical applications.
a Comparative Analysis of Five Cropland Datasets in Africa
NASA Astrophysics Data System (ADS)
Wei, Y.; Lu, M.; Wu, W.
2018-04-01
Food security, particularly in Africa, remains a challenge to be resolved. The cropland area and spatial distribution obtained from remote sensing imagery are vital information for addressing it. In this paper, we compare five global cropland datasets, namely CCI Land Cover, GlobCover, MODIS Collection 5, GlobeLand30 and Unified Cropland, over Africa in circa 2010 in terms of cropland area and spatial location. The accuracy of the cropland area calculated from the five datasets was analyzed against statistical data. Based on validation samples, the accuracies of spatial location for the five cropland products were assessed by error matrix. The results show that GlobeLand30 has the best fit with the statistics, followed by MODIS Collection 5 and Unified Cropland; GlobCover and CCI Land Cover have lower accuracies. For the accuracy of the spatial location of cropland, GlobeLand30 reaches the highest accuracy, followed by Unified Cropland, MODIS Collection 5 and GlobCover; CCI Land Cover has the lowest accuracy. The spatial location accuracy of the five datasets in the Csa climate zone, with its suitable farming conditions, is generally higher than in the Bsk zone.
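The error-matrix assessment behind such comparisons reduces to overall accuracy and, commonly, Cohen's kappa. A minimal sketch with invented counts:

```python
# Sketch: overall accuracy and Cohen's kappa from a 2-class error matrix.
# Counts are illustrative, not from the five cropland products.
import numpy as np

# rows = map class, cols = reference class; 0 = non-cropland, 1 = cropland
confusion = np.array([[420, 60],
                      [55, 465]], dtype=float)

n = confusion.sum()
overall_accuracy = np.trace(confusion) / n
expected = (confusion.sum(1) * confusion.sum(0)).sum() / n**2  # chance agreement
kappa = (overall_accuracy - expected) / (1 - expected)
print(f"OA = {overall_accuracy:.3f}, kappa = {kappa:.3f}")
```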
Belal, Tarek S; El-Kafrawy, Dina S; Mahrous, Mohamed S; Abdel-Khalek, Magdi M; Abo-Gharam, Amira H
2016-02-15
This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Belal, Tarek S.; El-Kafrawy, Dina S.; Mahrous, Mohamed S.; Abdel-Khalek, Magdi M.; Abo-Gharam, Amira H.
2016-02-01
This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method.
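Validation figures like these typically come from a least-squares calibration line plus ICH-style LOD/LOQ estimates. A generic sketch with invented data points (only the 24-144 μg/mL range is taken from the abstract):

```python
# Sketch: linearity check for an absorbance-vs-concentration calibration curve,
# with LOD/LOQ from the residual standard error (3.3*s/S and 10*s/S convention).
import numpy as np
from scipy import stats

conc = np.array([24, 48, 72, 96, 120, 144], float)          # ug/mL (p-CA range)
absorbance = np.array([0.112, 0.224, 0.339, 0.450, 0.561, 0.672])  # invented

res = stats.linregress(conc, absorbance)
print(f"slope={res.slope:.5f}, intercept={res.intercept:.4f}, r={res.rvalue:.4f}")

fitted = res.slope * conc + res.intercept
s_res = np.sqrt(np.sum((absorbance - fitted) ** 2) / (len(conc) - 2))
print("LOD ~", 3.3 * s_res / res.slope, "ug/mL; LOQ ~", 10 * s_res / res.slope, "ug/mL")
```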
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre
2010-06-01
Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale high-performance computing (HPC) platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established any good predictors of failure in these systems. Our work in the OVIS project aims to discover these predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is, 'How good or useful are these predictions?' We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia.
Story Goodness in Adolescents with Autism Spectrum Disorder (ASD) and in Optimal Outcomes from ASD
ERIC Educational Resources Information Center
Canfield, Allison R.; Eigsti, Inge-Marie; de Marchena, Ashley; Fein, Deborah
2016-01-01
Purpose: This study examined narrative quality of adolescents with autism spectrum disorder (ASD) using a well-studied "story goodness" coding system. Method: Narrative samples were analyzed for distinct aspects of story goodness and rated by naïve readers on dimensions of story goodness, accuracy, cohesiveness, and oddness. Adolescents…
Evaluation of a Performance-Based Expert Elicitation: WHO Global Attribution of Foodborne Diseases.
Aspinall, W P; Cooke, R M; Havelaar, A H; Hoffmann, S; Hald, T
2016-01-01
For many societally important science-based decisions, data are inadequate, unreliable or non-existent, and expert advice is sought. In such cases, procedures for eliciting structured expert judgments (SEJ) are increasingly used. This raises questions regarding validity and reproducibility. This paper presents new findings from a large-scale international SEJ study intended to estimate the global burden of foodborne disease on behalf of WHO. The study involved 72 experts distributed over 134 expert panels, with panels comprising thirteen experts on average. Elicitations were conducted in five languages. Performance-based weighted solutions for target questions of interest were formed for each panel. These weights were based on individual expert's statistical accuracy and informativeness, determined using between ten and fifteen calibration variables from the experts' field with known values. Equal weights combinations were also calculated. The main conclusions on expert performance are: (1) SEJ does provide a science-based method for attribution of the global burden of foodborne diseases; (2) equal weighting of experts per panel increased statistical accuracy to acceptable levels, but at the cost of informativeness; (3) performance-based weighting increased informativeness, while retaining accuracy; (4) due to study constraints individual experts' accuracies were generally lower than in other SEJ studies, and (5) there was a negative correlation between experts' informativeness and statistical accuracy which attenuated as accuracy improved, revealing that the least accurate experts drive the negative correlation. It is shown, however, that performance-based weighting has the ability to yield statistically accurate and informative combinations of experts' judgments, thereby offsetting this contrary influence. The present findings suggest that application of SEJ on a large scale is feasible, and motivate the development of enhanced training and tools for remote elicitation of multiple, internationally-dispersed panels.
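A highly simplified sketch of performance-based weighting in the spirit of Cooke's classical model is shown below; the chi-square calibration score and the accuracy-times-informativeness weight are toy stand-ins for the real scoring rules, and the hit counts are hypothetical:

    # Toy performance-based weighting; not the actual SEJ scoring rules.
    import numpy as np
    from scipy import stats

    def calibration_score(hits, n_items, expected=(0.05, 0.45, 0.45, 0.05)):
        # hits: counts of realizations falling below the 5th percentile,
        # between the 5th-50th, 50th-95th, and above the 95th percentile of
        # an expert's assessments of the calibration variables.
        observed = np.asarray(hits, dtype=float)
        expected_counts = n_items * np.asarray(expected)
        chi2 = np.sum((observed - expected_counts) ** 2 / expected_counts)
        # chi-square tail probability serves as the statistical accuracy
        return stats.chi2.sf(chi2, df=len(expected) - 1)

    def combine(experts, informativeness):
        # weight = accuracy x informativeness, normalized over the panel
        raw = np.array([calibration_score(e["hits"], e["n"]) for e in experts])
        w = raw * np.asarray(informativeness)
        return w / w.sum()

    experts = [{"hits": [1, 5, 5, 1], "n": 12},   # well-calibrated expert
               {"hits": [4, 2, 2, 4], "n": 12}]   # overconfident expert
    print(combine(experts, informativeness=[1.2, 2.0]))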
Pixels, Blocks of Pixels, and Polygons: Choosing a Spatial Unit for Thematic Accuracy Assessment
Pixels, polygons, and blocks of pixels are all potentially viable spatial assessment units for conducting an accuracy assessment. We develop a statistical population-based framework to examine how the spatial unit chosen affects the outcome of an accuracy assessment. The populati...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.; Harrison, D. E. Jr.
A variable time step integration algorithm for carrying out molecular dynamics simulations of atomic collision cascades is proposed which evaluates the interaction forces only once per time step. The algorithm is tested on some model problems which have exact solutions and is compared against other common methods. These comparisons show that the method has good stability and accuracy. Applications to Ar+ bombardment of Cu and Si show good accuracy and improved speed compared to the original method (D. E. Harrison, W. L. Gay, and H. M. Effron, J. Math. Phys. 10, 1179 (1969)).
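The idea of adapting the time step to the fastest motion while evaluating forces only once per step can be sketched as follows; this semi-implicit Euler scheme is only illustrative, not the authors' published algorithm, and the displacement cap dx_max is an assumed control parameter:

    # Illustrative variable-time-step integrator: one force evaluation per
    # step, with dt shrunk so nothing moves more than dx_max per step.
    import numpy as np

    def step(pos, vel, force_fn, mass, dx_max=0.01, dt_max=0.01):
        f = force_fn(pos)                      # single force call per step
        speed = np.max(np.abs(vel)) + 1e-12
        dt = min(dt_max, dx_max / speed)       # fast motion => smaller dt
        vel = vel + f / mass * dt              # semi-implicit Euler update
        return pos + vel * dt, vel, dt

    # Check against a problem with an exact solution (harmonic oscillator)
    pos, vel, t = np.array([1.0]), np.array([0.0]), 0.0
    while t < 10.0:
        pos, vel, dt = step(pos, vel, lambda x: -x, mass=1.0)
        t += dt
    print(pos[0], np.cos(t))   # should agree to within a few percent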
Synchrophasor Data Correction under GPS Spoofing Attack: A State Estimation Based Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Xiaoyuan; Du, Liang; Duan, Dongliang
2017-02-01
GPS spoofing attack (GSA) has been shown to be one of the most imminent threats to almost all cyber-physical systems incorporated with the civilian GPS signal. Specifically, for our current agenda of the modernization of the power grid, this may greatly jeopardize the benefits provided by the pervasively installed phasor measurement units (PMU). In this study, we consider the case where synchrophasor data from PMUs are compromised due to the presence of a single GSA, and show that it can be corrected by signal processing techniques. In particular, we introduce a statistical model for synchrophasor-based power system state estimation (SE), and then derive the spoofing-matched algorithms for synchrophasor data correction against GPS spoofing attack. Different testing scenarios in IEEE 14-, 30-, 57-, and 118-bus systems are simulated to show the proposed algorithms' performance on GSA detection and state estimation. Numerical results demonstrate that our proposed algorithms can consistently locate and correct the spoofed synchrophasor data with good accuracy as long as the system observability is satisfied. Finally, the accuracy of state estimation is significantly improved compared with the traditional weighted least square method and approaches the performance under the Genie-aided method.
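For reference, the weighted least square baseline mentioned above reduces, on a linearized (DC) model, to a few lines of linear algebra. The measurement matrix, measurements, and covariances below are small hypothetical examples rather than an IEEE test case:

    # Toy weighted-least-squares state estimation on a linearized model,
    # the baseline the spoofing-matched algorithms are compared against.
    import numpy as np

    H = np.array([[1.0, 0.0],     # measurement matrix: 3 measurements,
                  [0.0, 1.0],     # 2 states
                  [1.0, -1.0]])
    z = np.array([1.02, 0.97, 0.06])    # measurements (one may be spoofed)
    R = np.diag([1e-4, 1e-4, 4e-4])     # measurement error covariances

    W = np.linalg.inv(R)
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)   # WLS estimate
    residual = z - H @ x_hat            # large residuals can flag suspect
    print(x_hat, residual)              # PMU data for correction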
Li, Yi; Tseng, Yufeng J.; Pan, Dahua; Liu, Jianzhong; Kern, Petra S.; Gerberick, G. Frank; Hopfinger, Anton J.
2008-01-01
Currently, the only validated methods to identify skin sensitization effects are in vivo models, such as the Local Lymph Node Assay (LLNA) and guinea pig studies. There is a tremendous need, in particular due to novel legislation, to develop animal alternatives, e.g., Quantitative Structure-Activity Relationship (QSAR) models. Here, QSAR models for skin sensitization using LLNA data have been constructed. The descriptors used to generate these models are derived from the 4D-molecular similarity paradigm and are referred to as universal 4D-fingerprints. A training set of 132 structurally diverse compounds and a test set of 15 structurally diverse compounds were used in this study. The statistical methodologies used to build the models are logistic regression (LR) and partial least square coupled logistic regression (PLS-LR), which prove to be effective tools for studying skin sensitization measures expressed in the two categorical terms of sensitizer and non-sensitizer. QSAR models with low values of the Hosmer-Lemeshow goodness-of-fit statistic, χ²HL, are significant and predictive. For the training set, the cross-validated prediction accuracy of the logistic regression models ranges from 77.3% to 78.0%, while that of PLS-logistic regression models ranges from 87.1% to 89.4%. For the test set, the prediction accuracy of logistic regression models ranges from 80.0% to 86.7%, while that of PLS-logistic regression models ranges from 73.3% to 80.0%. The QSAR models are made up of 4D-fingerprints related to aromatic atoms, hydrogen bond acceptors and negatively partially charged atoms.
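The Hosmer-Lemeshow statistic used to screen these models can be computed as below; this deciles-of-risk sketch uses randomly generated predictions, and the grouping convention is one common choice among several:

    # Hosmer-Lemeshow goodness-of-fit sketch on simulated predictions.
    import numpy as np
    from scipy import stats

    def hosmer_lemeshow(y_true, p_hat, groups=10):
        order = np.argsort(p_hat)
        y, p = y_true[order], p_hat[order]
        chi2 = 0.0
        for idx in np.array_split(np.arange(len(y)), groups):
            obs = y[idx].sum()          # observed events in the risk group
            exp = p[idx].sum()          # expected events in the risk group
            mean_p = exp / len(idx)
            chi2 += (obs - exp) ** 2 / (len(idx) * mean_p * (1 - mean_p) + 1e-12)
        return chi2, stats.chi2.sf(chi2, df=groups - 2)

    rng = np.random.default_rng(0)
    p_hat = rng.uniform(0.05, 0.95, 200)
    y = (rng.uniform(size=200) < p_hat).astype(float)  # well-calibrated model
    print(hosmer_lemeshow(y, p_hat))                   # expect a large p-value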
Subjective global assessment of nutritional status in children.
Mahdavi, Aida Malek; Ostadrahimi, Alireza; Safaiyan, Abdolrasool
2010-10-01
This study aimed to compare the subjective and objective nutritional assessments and to analyse the performance of subjective global assessment (SGA) of nutritional status in diagnosing undernutrition in paediatric patients. One hundred and forty children (aged 2-12 years) hospitalized consecutively in Tabriz Paediatric Hospital from June 2008 to August 2008 underwent subjective assessment using the SGA questionnaire and objective assessment, including anthropometric and biochemical measurements. Agreement between the two assessment methods was analysed by the kappa (κ) statistic. Statistical indicators, including sensitivity, specificity, predictive values, error rates, accuracy, powers, likelihood ratios and odds ratio, between SGA and the objective assessment method were determined. The overall prevalence of undernutrition according to the SGA (70.7%) was higher than that by objective assessment of nutritional status (48.5%). Agreement between the two evaluation methods was only fair to moderate (κ = 0.336, P < 0.001). The sensitivity, specificity, positive and negative predictive value of the SGA method for screening undernutrition in this population were 88.235%, 45.833%, 60.606% and 80.487%, respectively. Accuracy, positive and negative power of the SGA method were 66.428%, 56.074% and 41.25%, respectively. Likelihood ratio positive, likelihood ratio negative and odds ratio of the SGA method were 1.628, 0.256 and 6.359, respectively. Our findings indicated that in assessing the nutritional status of children, there is not a good level of agreement between SGA and objective nutritional assessment. In addition, SGA is a highly sensitive tool for assessing nutritional status and could identify children at risk of developing undernutrition.
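The agreement and screening statistics reported here all derive from a single 2x2 table. A minimal sketch, with hypothetical counts chosen only to mimic the pattern above:

    # Agreement and accuracy statistics for a 2x2 screening table
    # (counts below are hypothetical, not the study's data).
    def screening_stats(tp, fp, fn, tn):
        n = tp + fp + fn + tn
        po = (tp + tn) / n                       # observed agreement
        pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
        return {"sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
                "accuracy": po,
                "kappa": (po - pe) / (1 - pe)}   # Cohen's kappa

    print(screening_stats(tp=60, fp=28, fn=8, tn=44))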
Wild Fire Risk Map in the Eastern Steppe of Mongolia Using Spatial Multi-Criteria Analysis
NASA Astrophysics Data System (ADS)
Nasanbat, Elbegjargal; Lkhamjav, Ochirkhuyag
2016-06-01
Grassland fire is a cause of major disturbance to ecosystems and economies throughout the world. This paper aimed to identify wildfire risk zones on the Eastern Steppe of Mongolia. Variables for wildfire risk assessment were selected from a combination of data sources, including socioeconomic data, climate records, geographic information systems, remotely sensed imagery, and statistical yearbook information, and the results were evaluated against field validation data. The variables were grouped into main factors (environmental, socioeconomic, climate, and fire information) comprising eleven input variables, which were classified into five categories by risk level according to importance criteria and ranks. All of the explanatory variables were integrated into a spatial model and used to estimate the wildfire risk index. Within the index, five categories were created, based on spatial statistics, to adequately assess respective fire risk: very high risk, high risk, moderate risk, low risk and very low risk. Approximately 68 percent of the study area fell within the very high, high and moderate risk zones. The percentages of actual fires in each fire risk zone were as follows: very high risk, 42 percent; high risk, 26 percent; moderate risk, 13 percent; low risk, 8 percent; and very low risk, 11 percent. The overall prediction accuracy of the model was 62 percent. The model and results could support spatial decision-making processes and preventative wildfire management strategies, and could also help improve ecological and biodiversity conservation management.
NASA Technical Reports Server (NTRS)
Houston, A. G.; Feiveson, A. H.; Chhikara, R. S.; Hsu, E. M. (Principal Investigator)
1979-01-01
A statistical methodology was developed to check the accuracy of the products of the experimental operations throughout crop growth and to determine whether the procedures are adequate to accomplish the desired accuracy and reliability goals. It has allowed the identification and isolation of key problems in wheat area yield estimation, some of which have been corrected and some of which remain to be resolved. The major unresolved problem in accuracy assessment is that of precisely estimating the bias of the LACIE production estimator. Topics covered include: (1) evaluation techniques; (2) variance and bias estimation for the wheat production estimate; (3) the 90/90 evaluation; (4) comparison of the LACIE estimate with reference standards; and (5) first and second order error source investigations.
Yamada, Hiroshi; Saeki, Minako; Ito, Junko; Kawada, Kazuhiro; Higurashi, Aya; Funakoshi, Hiromi; Takeda, Kohji
2015-02-01
The pulse CO-Oximeter (Radical-7; Masimo Corp., Irvine, CA) is a multi-wavelength spectrophotometric method for noninvasive continuous monitoring of hemoglobin (SpHb). Because evaluating the relative change in blood volume (ΔBV) is crucial to avoid hypovolemia and hypotension during hemodialysis, it would be of great clinical benefit if ΔBV could be estimated by measurement of SpHb during hemodialysis. The capability of the pulse CO-Oximeter to monitor ΔBV depends on the relative trending accuracy of SpHb. The purpose of the current study was to evaluate the relative trending accuracy of SpHb by the pulse CO-Oximeter using Crit-Line as a reference device. In 12 patients who received hemodialysis (total 22 sessions) in the intensive care unit, ΔBV was determined from SpHb. Relative changes in blood volume determined from SpHb were calculated according to the equation: ΔBV(SpHb)=[starting SpHb]/[current SpHb] - 1. The absolute values of SpHb and hematocrit measured by Crit-Line (CL-Hct) showed poor correlation. On the contrary, linear regression analysis showed good correlation between ΔBV(SpHb) and the relative change in blood volume measured by Crit-Line [ΔBV(CL-Hct)] (r=0.83; P≤0.001). Bland-Altman analysis also revealed good agreement between ΔBV(SpHb) and ΔBV(CL-Hct) (bias, -0.77%; precision, 3.41%). Polar plot analysis revealed good relative trending accuracy of SpHb with an angular bias of 4.1° and radial limits of agreement of 24.4° (upper) and -16.2° (lower). The results of the current study indicate that SpHb measurement with the pulse CO-Oximeter has good relative trending accuracy.
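The study's blood-volume equation and the Bland-Altman summary translate directly into code. The SpHb and Crit-Line series below are fabricated for illustration:

    # Relative blood-volume change per the study's equation, plus a
    # Bland-Altman bias/limits summary (series are made up, not patient data).
    import numpy as np

    def delta_bv(start, current):
        return start / current - 1.0    # ΔBV = [starting]/[current] - 1

    sphb = np.array([11.0, 11.3, 11.7, 12.1, 12.4])    # g/dL over a session
    cl_hct = np.array([33.0, 34.0, 35.1, 36.2, 37.0])  # % from the reference

    bv_sphb = delta_bv(sphb[0], sphb) * 100            # percent change
    bv_hct = delta_bv(cl_hct[0], cl_hct) * 100

    diff = bv_sphb - bv_hct
    bias = diff.mean()                                 # Bland-Altman bias
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
    print(f"bias={bias:.2f}%, limits of agreement={loa}")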
Alaska national hydrography dataset positional accuracy assessment study
Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy
2013-01-01
Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive, since independent, well-defined test points must be collected; quantitative analysis of relative positional error, however, is feasible.
Random forests for classification in ecology
Cutler, D.R.; Edwards, T.C.; Beard, K.H.; Cutler, A.; Hess, K.T.; Gibson, J.; Lawler, J.J.
2007-01-01
Classification procedures are some of the most widely used statistical methods in ecology. Random forests (RF) is a new and powerful statistical classifier that is well established in other disciplines but is relatively unknown in ecology. Advantages of RF compared to other statistical classifiers include (1) very high classification accuracy; (2) a novel method of determining variable importance; (3) ability to model complex interactions among predictor variables; (4) flexibility to perform several types of statistical data analysis, including regression, classification, survival analysis, and unsupervised learning; and (5) an algorithm for imputing missing values. We compared the accuracies of RF and four other commonly used statistical classifiers using data on invasive plant species presence in Lava Beds National Monument, California, USA, rare lichen species presence in the Pacific Northwest, USA, and nest sites for cavity nesting birds in the Uinta Mountains, Utah, USA. We observed high classification accuracy in all applications as measured by cross-validation and, in the case of the lichen data, by independent test data, when comparing RF to other common classification methods. We also observed that the variables that RF identified as most important for classifying invasive plant species coincided with expectations based on the literature.
Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, David; Bucknor, Matthew; Brunett, Acacia
2015-04-26
The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work conducted investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact in statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
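One concrete way to attach such a confidence interval to a Monte Carlo failure probability is the exact (Clopper-Pearson) binomial interval; the 2% failure fraction below is an assumed example, and this is one of several interval constructions such an analysis could employ:

    # Exact binomial confidence interval for a simulated failure probability,
    # showing how the interval tightens as the simulation count grows.
    from scipy import stats

    def failure_probability_ci(failures, n_sims, conf=0.95):
        lo = stats.beta.ppf((1 - conf) / 2, failures, n_sims - failures + 1)
        hi = stats.beta.ppf(1 - (1 - conf) / 2, failures + 1, n_sims - failures)
        return (0.0 if failures == 0 else lo, hi)

    for n in (100, 1000, 10000):
        k = round(0.02 * n)          # suppose ~2% of runs exceed capacity
        print(n, failure_probability_ci(k, n))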
The Role of Pattern Goodness in the Reproduction of Backward Masked Patterns
ERIC Educational Resources Information Center
Bell, Herbert H.; Handel, Stephen
1976-01-01
The purpose of the present work was to investigate the relation between pattern goodness and accuracy of reproduction in backward masking. It may be hypothesized that good patterns, being easier to encode as wholes, will be reproduced more easily than poorer patterns. Four experiments were performed. (Author)
Harold S.J. Zald; Janet L. Ohmann; Heather M. Roberts; Matthew J. Gregory; Emilie B. Henderson; Robert J. McGaughey; Justin Braaten
2014-01-01
This study investigated how lidar-derived vegetation indices, disturbance history from Landsat time series (LTS) imagery, plot location accuracy, and plot size influenced accuracy of statistical spatial models (nearest-neighbor imputation maps) of forest vegetation composition and structure. Nearest-neighbor (NN) imputation maps were developed for 539,000 ha in the...
Forest tree species discrimination in western Himalaya using EO-1 Hyperion
NASA Astrophysics Data System (ADS)
George, Rajee; Padalia, Hitendra; Kushwaha, S. P. S.
2014-05-01
The information acquired in the narrow bands of hyperspectral remote sensing data has the potential to capture plant species' spectral variability, thereby improving forest tree species mapping. This study assessed the utility of spaceborne EO-1 Hyperion data in the discrimination and classification of broadleaved evergreen and conifer forest tree species in the western Himalaya. Pre-processing of the 242 bands of Hyperion data resulted in 160 noise-free and vertical-stripe-corrected reflectance bands. Of these, 29 bands were selected through step-wise exclusion of bands (Wilk's Lambda). Spectral Angle Mapper (SAM) and Support Vector Machine (SVM) algorithms were applied to the selected bands to assess their effectiveness in classification. SVM was also applied to broadband data (Landsat TM) to compare the variation in classification accuracy. All six commonly occurring gregarious tree species, viz., white oak, brown oak, chir pine, blue pine, cedar and fir, in the western Himalaya could be effectively discriminated. SVM produced a better species classification (overall accuracy 82.27%, kappa statistic 0.79) than SAM (overall accuracy 74.68%, kappa statistic 0.70). It was noticed that the classification accuracy achieved with Hyperion bands was significantly higher than with Landsat TM bands (overall accuracy 69.62%, kappa statistic 0.65). The study demonstrated the potential utility of the narrow spectral bands of Hyperion data in discriminating tree species in hilly terrain.
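A minimal SAM classifier illustrates the angular matching underlying one of the two algorithms; the reference spectra here are random stand-ins for the six species signatures:

    # Minimal spectral angle mapper: assign each pixel to the class whose
    # reference spectrum subtends the smallest angle with it.
    import numpy as np

    def spectral_angle(pixel, ref):
        cos = pixel @ ref / (np.linalg.norm(pixel) * np.linalg.norm(ref))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def sam_classify(pixels, references):
        # pixels: (n_pixels, n_bands); references: (n_classes, n_bands)
        angles = np.array([[spectral_angle(p, r) for r in references]
                           for p in pixels])
        return angles.argmin(axis=1)

    rng = np.random.default_rng(1)
    refs = rng.random((6, 29))               # six species, 29 selected bands
    pix = refs[[0, 3, 5]] + rng.normal(0, 0.02, (3, 29))   # noisy samples
    print(sam_classify(pix, refs))           # expect [0, 3, 5]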
Evaluating the decision accuracy and speed of clinical data visualizations.
Pieczkiewicz, David S; Finkelstein, Stanley M
2010-01-01
Clinicians face an increasing volume of biomedical data. Assessing the efficacy of systems that enable accurate and timely clinical decision making merits corresponding attention. This paper discusses the multiple-reader multiple-case (MRMC) experimental design and linear mixed models as means of assessing and comparing decision accuracy and latency (time) for decision tasks in which clinician readers must interpret visual displays of data. These experimental and statistical techniques, used extensively in radiology imaging studies, offer a number of practical and analytic advantages over more traditional quantitative methods such as percent-correct measurements and ANOVAs, and are recommended for their statistical efficiency and generalizability. An example analysis using readily available, free, and commercial statistical software is provided as an appendix. While these techniques are not appropriate for all evaluation questions, they can provide a valuable addition to the evaluative toolkit of medical informatics research.
Methods and statistics for combining motif match scores.
Bailey, T L; Gribskov, M
1998-01-01
Position-specific scoring matrices are useful for representing and searching for protein sequence motifs. A sequence family can often be described by a group of one or more motifs, and an effective search must combine the scores for matching a sequence to each of the motifs in the group. We describe three methods for combining match scores and estimating the statistical significance of the combined scores, and evaluate the search quality (classification accuracy) and the accuracy of the estimate of statistical significance of each. The three methods are: 1) sum of scores, 2) sum of reduced variates, 3) product of score p-values. We show that method 3) is superior to the other two methods in both regards, and that combining motif scores indeed gives better search accuracy. The MAST sequence homology search algorithm utilizing the product of p-values scoring method is available for interactive use and downloading at URL http://www.sdsc.edu/MEME.
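Method 3 combines match p-values through their product. A closely related and easily coded variant is Fisher's method, where minus twice the log of the product is chi-square distributed under the null; note that MAST's exact product-of-p-values formula differs from this sketch:

    # Fisher's method for combining independent p-values: under the null,
    # -2 * sum(log p_i) ~ chi-square with 2k degrees of freedom.
    import numpy as np
    from scipy import stats

    def combined_p(p_values):
        stat = -2.0 * np.sum(np.log(p_values))
        return stats.chi2.sf(stat, df=2 * len(p_values))

    print(combined_p([0.01, 0.20, 0.03]))  # motif match p-values, one group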
Compositional Solution Space Quantification for Probabilistic Software Analysis
NASA Technical Reports Server (NTRS)
Borges, Mateus; Pasareanu, Corina S.; Filieri, Antonio; d'Amorim, Marcelo; Visser, Willem
2014-01-01
Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions to reach the target event and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while only approximate solutions can be provided in general through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain containing the sought solutions. Preliminary experiments show significant improvement on previous approaches both in results accuracy and analysis time.
12 CFR 268.601 - EEO group statistics.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 3 2010-01-01 2010-01-01 false EEO group statistics. 268.601 Section 268.601... RULES REGARDING EQUAL OPPORTUNITY Matters of General Applicability § 268.601 EEO group statistics. (a... solely statistical purpose for which the data is being collected, the need for accuracy, the Board's...
Apolipoprotein M can discriminate HNF1A-MODY from Type 1 diabetes.
Mughal, S A; Park, R; Nowak, N; Gloyn, A L; Karpe, F; Matile, H; Malecki, M T; McCarthy, M I; Stoffel, M; Owen, K R
2013-02-01
Missed diagnosis of maturity-onset diabetes of the young (MODY) has led to an interest in biomarkers that enable efficient prioritization of patients for definitive molecular testing. Apolipoprotein M (apoM) was suggested as a biomarker for hepatocyte nuclear factor 1 alpha (HNF1A)-MODY because of its reduced expression in Hnf1a(-/-) mice. However, subsequent human studies examining apoM as a biomarker have yielded conflicting results. We aimed to evaluate apoM as a biomarker for HNF1A-MODY using a highly specific and sensitive ELISA. ApoM concentration was measured in subjects with HNF1A-MODY (n = 69), Type 1 diabetes (n = 50), Type 2 diabetes (n = 120) and healthy control subjects (n = 100). The discriminative accuracy of apoM and of the apoM/HDL ratio for diabetes aetiology was evaluated. Mean (standard deviation) serum apoM concentration (μmol/l) was significantly lower for subjects with HNF1A-MODY [0.86 (0.29)] than for those with Type 1 diabetes [1.37 (0.26), P = 3.1 × 10(-18)] and control subjects [1.34 (0.22), P = 7.2 × 10(-19)]. There was no significant difference in apoM concentration between subjects with HNF1A-MODY and Type 2 diabetes [0.89 (0.28), P = 0.13]. The C-statistic measure of discriminative accuracy for apoM was 0.91 for HNF1A-MODY vs. Type 1 diabetes, indicating high discriminative accuracy. The apoM/HDL ratio was significantly lower in HNF1A-MODY than in the other study groups. However, this ratio did not perform well in discriminating HNF1A-MODY from either Type 1 diabetes (C-statistic = 0.79) or Type 2 diabetes (C-statistic = 0.68). We confirm an earlier report that serum apoM levels are lower in HNF1A-MODY than in controls. Serum apoM provides good discrimination between HNF1A-MODY and Type 1 diabetes and warrants further investigation for clinical utility in diabetes diagnostics.
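The C-statistic is equivalent to the normalized Mann-Whitney U statistic, so a reported value can be approximated from group summaries. The sketch below simulates apoM values from the reported means and standard deviations (a normality assumption, not patient data):

    # C-statistic (AUC) via the Mann-Whitney U statistic on simulated data.
    import numpy as np
    from scipy import stats

    def c_statistic(values_pos, values_neg):
        # scipy >= 1.7 returns the U statistic for the first sample
        u, _ = stats.mannwhitneyu(values_pos, values_neg,
                                  alternative="two-sided")
        return u / (len(values_pos) * len(values_neg))

    rng = np.random.default_rng(2)
    apom_t1d = rng.normal(1.37, 0.26, 50)    # simulated from reported moments
    apom_mody = rng.normal(0.86, 0.29, 69)
    print(round(c_statistic(apom_t1d, apom_mody), 2))  # near the reported 0.91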
Bio-inspired computational heuristics to study Lane-Emden systems arising in astrophysics model.
Ahmad, Iftikhar; Raja, Muhammad Asif Zahoor; Bilal, Muhammad; Ashraf, Farooq
2016-01-01
This study reports novel hybrid computational methods for the solution of the nonlinear singular Lane-Emden type differential equations arising in astrophysics models, exploiting the strength of unsupervised neural network models and stochastic optimization techniques. In this scheme, a neural network (a subfield of soft computing) is used to model the equation in an unsupervised manner. Approximate solutions of the higher order ordinary differential equation are calculated with neural network weights trained with a genetic algorithm, and with pattern search hybridized with sequential quadratic programming for rapid local convergence. The results of the proposed solvers for the nonlinear singular systems are in good agreement with the standard solutions. Accuracy and convergence of the design schemes are demonstrated by the results of statistical performance measures based on a sufficiently large number of independent runs.
NASA Technical Reports Server (NTRS)
Lucarini, Valerio; Russell, Gary L.; Hansen, James E. (Technical Monitor)
2002-01-01
Results are presented for two greenhouse gas experiments of the Goddard Institute for Space Studies Atmosphere-Ocean Model (AOM). The computed trends of surface pressure, surface temperature, 850, 500 and 200 mb geopotential heights and related temperatures of the model for the time frame 1960-2000 are compared to those obtained from the National Centers for Environmental Prediction observations. A spatial correlation analysis and mean value comparison are performed, showing good agreement. A brief general discussion about the statistics of trend detection is presented. The domain of interest is the Northern Hemisphere (NH) because of the higher reliability of both the model results and the observations. The accuracy that this AOM has in describing the observed regional and NH climate trends makes it reliable in forecasting future climate changes.
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; El-Abasawi, Nasr M.; El-Olemy, Ahmed; Serag, Ahmed
2018-02-01
Five simple spectrophotometric methods were developed for the determination of simeprevir in the presence of its oxidative degradation product, namely, ratio difference, mean centering, derivative ratio using the Savitsky-Golay filters, second derivative, and continuous wavelet transform. These methods are linear in the range of 2.5-40 μg/mL and were validated according to the ICH guidelines. The obtained results of accuracy, repeatability and precision were found to be within the acceptable limits. The specificity of the proposed methods was tested using laboratory-prepared mixtures and assessed by applying the standard addition technique. Furthermore, these methods were statistically comparable to an RP-HPLC method, and good results were obtained. They can therefore be used for the routine analysis of simeprevir in quality-control laboratories.
NASA Astrophysics Data System (ADS)
Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George
2017-09-01
Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan cost about 173 s and 73 s, respectively, with 1% statistical error.
Eisenberg, Sarita; Guo, Ling-Yu
2016-05-01
This article reviews the existing literature on the diagnostic accuracy of two grammatical accuracy measures for differentiating children with and without language impairment (LI) at preschool and early school age based on language samples. The first measure, the finite verb morphology composite (FVMC), is a narrow grammatical measure that computes children's overall accuracy of four verb tense morphemes. The second measure, percent grammatical utterances (PGU), is a broader grammatical measure that computes children's accuracy in producing grammatical utterances. The extant studies show that FVMC demonstrates acceptable (i.e., 80 to 89% accurate) to good (i.e., 90% accurate or higher) diagnostic accuracy for children between 4;0 (years;months) and 6;11 in conversational or narrative samples. In contrast, PGU yields acceptable to good diagnostic accuracy for children between 3;0 and 8;11 regardless of sample types. Given the diagnostic accuracy shown in the literature, we suggest that FVMC and PGU can be used as one piece of evidence for identifying children with LI in assessment when appropriate. However, FVMC or PGU should not be used as therapy goals directly. Instead, when children are low in FVMC or PGU, we suggest that follow-up analyses should be conducted to determine the verb tense morphemes or grammatical structures that children have difficulty with.
Hind, Jacqueline A.; Gensler, Gary; Brandt, Diane K.; Miller Gardner, Patricia J.; Blumenthal, Loreen; Gramigna, Gary D.; Kosek, Steven; Lundy, Donna; McGarvey-Toler, Susan; Rockafellow, Susan; Sullivan, Paula A.; Villa, Marybell; Gill, Gary D.; Lindblad, Anne S.; Logemann, Jeri A.; Robbins, JoAnne
2009-01-01
Accurate detection and classification of aspiration is a critical component of videofluoroscopic swallowing evaluation, the most commonly utilized instrumental method for dysphagia diagnosis and treatment. Currently published literature indicates that inter-judge reliability for the identification of aspiration ranges from poor to fairly good depending on the amount of training provided to clinicians. The majority of extant studies compared judgments among clinicians. No studies included judgments made during the use of a postural compensatory strategy. The purpose of this study was to examine the accuracy of judgments made by speech-language pathologists (SLPs) practicing in hospitals compared with unblinded expert judges when identifying aspiration and using the 8-point Penetration/Aspiration Scale. Clinicians received extensive training for the detection of aspiration and minimal training on use of the Penetration/Aspiration Scale. Videofluoroscopic data were collected from 669 patients as part of a large, randomized clinical trial and include judgments of 10,200 swallows made by 76 clinicians from 44 hospitals in 11 states. Judgments were made on swallows during use of dysphagia compensatory strategies: chin down posture with thin liquids and thickened liquids (nectar-thick and honey-thick consistencies) in a head neutral posture. The subject population included patients with Parkinson's disease and/or dementia. Kappa statistics indicate high accuracy for all interventions by SLPs for identification of aspiration (all κ > .86) and variable accuracy (range 69%-76%) using the Penetration/Aspiration Scale when compared to expert judges. It is concluded that while the accuracy of identifying the presence of aspiration by SLPs is excellent, more extensive training and/or image enhancement is recommended for precise use of the Penetration/Aspiration Scale.
Mubeen; K.R., Vijayalakshmi; Bhuyan, Sanat Kumar; Panigrahi, Rajat G; Priyadarshini, Smita R; Misra, Satyaranjan; Singh, Chandravir
2014-01-01
Objectives: The identification and radiographic interpretation of periapical bone lesions is important for accurate diagnosis and treatment. The present study was undertaken to study the feasibility and diagnostic accuracy of colour coded digital radiographs in terms of presence and size of lesion and to compare the diagnostic accuracy of colour coded digital images with direct digital images and conventional radiographs for assessing periapical lesions. Materials and Methods: Sixty human dry cadaver hemimandibles were obtained and periapical lesions were created in first and second premolar teeth at the junction of cancellous and cortical bone using a micromotor handpiece and carbide burs of sizes 2, 4 and 6. After each successive use of round burs, a conventional, RVG and colour coded image was taken for each specimen. All the images were evaluated by three observers. The diagnostic accuracy for each bur and image mode was calculated statistically. Results: Our results showed good interobserver (kappa > 0.61) agreement for the different radiographic techniques and for the different bur sizes. Conventional radiography outperformed digital radiography in diagnosing periapical lesions made with the size 2 bur; both were equally diagnostic for lesions made with larger bur sizes. The colour coding method was the least accurate among all the techniques. Conclusion: Conventional radiography traditionally forms the backbone in the diagnosis, treatment planning and follow-up of periapical lesions. Direct digital imaging is an efficient technique in the diagnostic sense. Colour coding of digital radiography was feasible but less accurate; however, this imaging technique, like any other, needs to be studied continuously with an emphasis on the safety of patients and the diagnostic quality of images.
Becker, Anton S; Cornelius, Alexander; Reiner, Cäcilia S; Stocker, Daniel; Ulbrich, Erika J; Barth, Borna K; Mortezavi, Ashkan; Eberli, Daniel; Donati, Olivio F
2017-09-01
To simultaneously evaluate interreader agreement and diagnostic accuracy of PI-RADS v2 and compare it with v1. A total of 67 patients (median age 65.3 y, range 51.2-78.2 y; PSA 6.8 μg/L, range 0.2-33 μg/L) undergoing MRI of the prostate and subsequent transperineal template biopsy within ≤6 months from MRI were included. Four readers from two institutions evaluated the likelihood of prostate cancer using PI-RADS v1 and v2 in two separate reading sessions ≥3 months apart. Interreader agreement was assessed for each pulse sequence and for total PI-RADS scores using the intraclass correlation coefficient (ICC). Differences were considered significant for non-overlapping 95% confidence intervals. Diagnostic accuracy was assessed with the area under the receiver operating characteristic curve (AZ). A p-value <0.05 was considered statistically significant. Interreader agreement for DCE scores was good in v2 (ICC2 = 0.70; 95% CI: 0.66-0.74) and slightly lower in v1 (ICC1 = 0.64, 0.59-0.69). Agreement for DWI scores (ICC1 = 0.77, ICC2 = 0.76) as well as final PI-RADS scores per quadrant were nearly identical (ICC1 = ICC2 = 0.71). Diagnostic accuracy showed no significant differences (p = 0.09-0.93) between v1 and v2 in any of the readers (range: AZ = 0.78-0.88). PI-RADS scores show similar interreader agreement in v2 and v1 at comparable diagnostic performance. The simplification of the DCE interpretation in v2 might slightly improve agreement while not negatively affecting diagnostic performance.
Accuracy of the Defining Characteristics of the Nursing Diagnosis Hypothermia in Newborns.
de Aquino, Wislla Ketlly Menezes; Lopes, Marcos Venícios de Oliveira; da Silva, Viviane Martins; Fróes, Nathaly Bianka Moraes; de Menezes, Angélica Paixão; Almeida, Aline de Aquino Peres; Sobreira, Bianca Alves
2017-09-18
To analyze the accuracy of the defining characteristics of hypothermia in newborns and to verify associations between defining characteristics and clinical variables. A cross-sectional accuracy study with statistical analysis. Slow capillary refill, decrease in ventilation, peripheral vasoconstriction, and insufficient weight gain were the defining characteristics with the highest specificity values, while slow gastric emptying, skin cool to touch, irritability, and bradycardia were the defining characteristics with the highest values for both sensitivity and specificity. Slow gastric emptying, skin cool to touch, irritability, and bradycardia are good clinical indicators to infer initial stages of hypothermia and to confirm its presence. Accuracy measures may contribute to the improvement of the diagnostic inferential process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Xin; Zhao, Xiangmo; Hui, Fei
Clock synchronization in wireless sensor networks (WSNs) has been studied extensively in recent years, and many protocols have been put forward from the point of view of statistical signal processing, which is an effective way to optimize accuracy. However, the accuracy derived from the statistical data can be improved mainly by sufficient packet exchange, which greatly consumes the limited power resources. In this paper, a reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization is proposed to optimize sync accuracy without expending additional sync packets. As a contribution, a linear weighted fusion scheme for multiple clock deviations is constructed with the collaborative sensing of clock timestamps, and the fusion weight is defined by the covariance of sync errors for the different clock deviations. Extensive simulation results show that the proposed approach can achieve better performance in terms of sync overhead and sync accuracy.
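Linear weighted fusion with inverse-variance weights, a common minimum-variance choice, can be sketched as follows; the paper derives its weights from the covariance of sync errors, so the offsets and variances here are only illustrative:

    # Inverse-variance weighted fusion of several clock-deviation estimates.
    import numpy as np

    def fuse(estimates, variances):
        w = 1.0 / np.asarray(variances)
        w /= w.sum()                             # normalized fusion weights
        fused = np.dot(w, estimates)
        fused_var = 1.0 / np.sum(1.0 / np.asarray(variances))
        return fused, fused_var

    offsets = [12.3e-6, 11.8e-6, 12.9e-6]        # seconds, three exchanges
    variances = [4e-12, 1e-12, 9e-12]
    print(fuse(offsets, variances))              # fused estimate has the
                                                 # smallest variance of all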
Prashanth, Kudige Nagaraj; Swamy, Nagaraju; Basavaiah, Kanakapura
2016-01-01
Two simple and selective spectrophotometric methods are described for the determination of trifluoperazine dihydrochloride (TFH) as the base form (TFP) in bulk drug and in tablets. The methods are based on the molecular charge-transfer complexation of trifluoperazine base (TFP) with either 2,4,6-trinitrophenol (picric acid; PA) or 2,4-dinitrophenol (DNP). The yellow colored radical anions formed are quantified at 410 nm (PA method) or 415 nm (DNP method). The assay conditions were optimized for both methods. Beer's law is obeyed over the concentration ranges of 1.5-24.0 μg/mL in the PA method and 5.0-80.0 μg/mL in the DNP method, with respective molar absorptivity values of 1.03 x 10(4) and 6.91 x 10(3) L mol-1 cm-1. The reaction stoichiometry in both methods was evaluated by Job's method of continuous variations and was found to be 1:2 (TFP:PA, TFP:DNP). The developed methods were successfully applied to the determination of TFP in pure form and commercial tablets with good accuracy and precision. Statistical comparison of the results was performed using Student's t-test and the F-ratio at the 95% confidence level, and the results showed no significant difference between the reference and proposed methods with regard to accuracy and precision. Further, the accuracy and reliability of the methods were confirmed by recovery studies via the standard addition technique.
Molina, Sergio L; Stodden, David F
2018-04-01
This study examined variability in throwing speed and spatial error to test the prediction of an inverted-U function (i.e., impulse-variability [IV] theory) and the speed-accuracy trade-off. Forty-five 9- to 11-year-old children were instructed to throw at a specified percentage of maximum speed (45%, 65%, 85%, and 100%) and hit the wall target. Results indicated no statistically significant differences in variable error across the target conditions (p = .72), failing to support the inverted-U hypothesis. Spatial accuracy results indicated no statistically significant differences with mean radial error (p = .18), centroid radial error (p = .13), and bivariate variable error (p = .08) also failing to support the speed-accuracy trade-off in overarm throwing. As neither throwing performance variability nor accuracy changed across percentages of maximum speed in this sample of children as well as in a previous adult sample, current policy and practices of practitioners may need to be reevaluated.
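The spatial-error measures named above are simple functions of the (x, y) impact points relative to the target center. A sketch with made-up coordinates:

    # Mean radial error, centroid radial error, and bivariate variable error
    # from (x, y) impact points relative to the target center (made-up data).
    import numpy as np

    hits = np.array([[3.0, -2.0], [1.5, 0.5], [-2.0, 1.0], [0.5, -1.5]])  # cm

    radial = np.linalg.norm(hits, axis=1)
    mre = radial.mean()                       # mean radial error (accuracy)
    centroid = hits.mean(axis=0)
    cre = np.linalg.norm(centroid)            # centroid radial error (bias)
    bve = np.sqrt(np.mean(np.sum((hits - centroid) ** 2, axis=1)))
    print(mre, cre, bve)                      # bve = scatter about centroid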
Persson, Karin; Barca, Maria Lage; Cavallin, Lena; Brækhus, Anne; Knapskog, Anne-Brita; Selbæk, Geir; Engedal, Knut
2017-01-01
Background Different clinically feasible methods for evaluation of medial temporal lobe atrophy exist and are useful in the diagnostic work-up of Alzheimer's disease (AD). Purpose To compare the diagnostic properties of two clinically available magnetic resonance imaging (MRI)-based methods, an automated volumetric software, NeuroQuant® (NQ) (evaluation of hippocampus volume), and the Scheltens scale (visual evaluation of medial temporal lobe atrophy [MTA]), in patients with AD dementia, and subjective and mild cognitive impairment (non-dementia). Material and Methods MRIs from 56 patients (31 AD, 25 non-dementia) were assessed with both methods. Correlations between the methods were calculated, and receiver operating curve (ROC) analyses that yield area under the curve (AUC) statistics were conducted. Results High correlations were found between the two MRI assessments for the total hippocampal volume measured with NQ and mean MTA score (-0.753, P < 0.001), for the right (-0.767, P < 0.001), and for the left (-0.675, P < 0.001) sides. The NQ total measure yielded a somewhat higher AUC (0.88, "good") compared to the MTA mean measure (0.80, "good") in the comparison of patients with AD and non-dementia, but the accuracy was in favor of the MTA scale. Conclusion The two methods correlated highly and both reached equally "good" power.
Buczinski, S; Vandeweerd, J M
2016-09-01
Provision of good quality colostrum [i.e., immunoglobulin G (IgG) concentration ≥50 g/L] is the first step toward ensuring proper passive transfer of immunity for young calves. Precise quantification of colostrum IgG levels cannot be easily performed on the farm. Assessment of the refractive index using a Brix scale with a refractometer has been described as being highly correlated with IgG concentration in colostrum. The aim of this study was to perform a systematic review and meta-analysis of the diagnostic accuracy of Brix refractometry to diagnose good quality colostrum. From 101 references initially obtained, 11 were included in the systematic review and meta-analysis, representing 4,251 colostrum samples. The prevalence of good colostrum samples with IgG ≥50 g/L varied from 67.3 to 92.3% (median 77.9%). Specific estimates of accuracy [sensitivity (Se) and specificity (Sp)] were obtained for different reported cut-points using a hierarchical summary receiver operating characteristic curve model. For the cut-point of 22% (n=8 studies), Se=80.2% (95% CI: 71.1-87.0%) and Sp=82.6% (71.4-90.0%). Decreasing the cut-point to 18% increased Se [96.1% (91.8-98.2%)] and decreased Sp [54.5% (26.9-79.6%)]. Modeling the effect of these Brix accuracy estimates using a stochastic simulation and Bayes' theorem showed that a positive result with the 22% Brix cut-point can be used to diagnose good quality colostrum [posttest probability of good colostrum: 94.3% (90.7-96.9%)]. The posttest probability of good colostrum with a Brix value <18% was only 22.7% (12.3-39.2%). Based on this study, the 2 cut-points could be used in combination to select good quality colostrum (samples with Brix ≥22%) or to discard poor quality colostrum (samples with Brix <18%). When sample results fall between these 2 values, colostrum supplementation should be considered.
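The post-test probabilities reported here follow from Bayes' theorem applied to the pooled sensitivity, specificity, and prevalence. A minimal sketch using the point estimates quoted above for the 22% cut-point:

    # Post-test probability via Bayes' theorem and likelihood ratios,
    # using the review's point estimates for the 22% Brix cut-point.
    def post_test_probability(prevalence, sensitivity, specificity,
                              positive=True):
        if positive:
            lr = sensitivity / (1 - specificity)     # likelihood ratio +
        else:
            lr = (1 - sensitivity) / specificity     # likelihood ratio -
        pre_odds = prevalence / (1 - prevalence)
        post_odds = pre_odds * lr
        return post_odds / (1 + post_odds)

    # median prevalence 77.9%; Se=80.2%, Sp=82.6% at Brix >= 22%
    print(post_test_probability(0.779, 0.802, 0.826))  # ~0.94, as reported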
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.
1993-01-01
New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
Gunetti, Monica; Castiglia, Sara; Rustichelli, Deborah; Mareschi, Katia; Sanavio, Fiorella; Muraro, Michela; Signorino, Elena; Castello, Laura; Ferrero, Ivana; Fagioli, Franca
2012-05-31
The quality and safety of advanced therapy products must be maintained throughout their production and quality control cycle to ensure their final use in patients. We validated the cell count method according to the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use and European Pharmacopoeia, considering the tests' accuracy, precision, repeatability, linearity and range. As the cell count is a potency test, we checked accuracy, precision, and linearity, according to ICH Q2. Briefly, our experimental approach was first to evaluate the accuracy of Fast Read 102® compared to the Bürker chamber. Once the accuracy of the alternative method was demonstrated, we checked the precision and linearity test only using Fast Read 102®. The data were statistically analyzed by average, standard deviation and coefficient of variation percentages inter and intra operator. All the tests performed met the established acceptance criteria of a coefficient of variation of less than ten percent. For the cell count, the precision reached by each operator had a coefficient of variation of less than ten percent (total cells) and under five percent (viable cells). The best range of dilution, to obtain a slope line value very similar to 1, was between 1:8 and 1:128. Our data demonstrated that the Fast Read 102® count method is accurate, precise and ensures the linearity of the results obtained in a range of cell dilution. Under our standard method procedures, this assay may thus be considered a good quality control method for the cell count as a batch release quality control test. Moreover, the Fast Read 102® chamber is a plastic, disposable device that allows a number of samples to be counted in the same chamber. Last but not least, it overcomes the problem of chamber washing after use and so allows a cell count in a clean environment such as that in a Cell Factory. In a good manufacturing practice setting, the disposable cell counting devices allow a single use of the count chamber, after which they can be thrown away, thus avoiding the waste disposal of vital dye (e.g. Trypan Blue) or lysing solution (e.g. Tuerk solution).
Kernel machines for epilepsy diagnosis via EEG signal classification: a comparative study.
Lima, Clodoaldo A M; Coelho, André L V
2011-10-01
We carry out a systematic assessment on a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations whereby one can visually inspect their levels of sensitiveness to the type of feature and to the kernel function/parameter value. Overall, the results evidence that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value as well as the choice of the feature extractor are critical decisions to be taken, albeit the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality).
Kaneta, Tomohiro; Nakatsuka, Masahiro; Nakamura, Kei; Seki, Takashi; Yamaguchi, Satoshi; Tsuboi, Masahiro; Meguro, Kenichi
2016-01-01
SPECT is an important diagnostic tool for dementia. Recently, statistical analysis of SPECT has been commonly used for dementia research. In this study, we evaluated the accuracy of visual SPECT evaluation and/or statistical analysis for the diagnosis (Dx) of Alzheimer disease (AD) and other forms of dementia in our community-based study, "The Osaki-Tajiri Project." Eighty-nine consecutive outpatients with dementia were enrolled and underwent brain perfusion SPECT with 99mTc-ECD. Diagnostic accuracy of SPECT was tested using 3 methods: visual inspection (SPECT Dx), an automated diagnostic tool using statistical analysis with the easy Z-score imaging system (eZIS Dx), and visual inspection plus eZIS (integrated Dx). Integrated Dx showed the highest sensitivity, specificity, and accuracy, whereas eZIS was the second most accurate method. We also observed a higher-than-expected rate of SPECT images indicating false-negative cases of AD. Among these, 50% showed hypofrontality and were diagnosed as frontotemporal lobar degeneration. These cases typically showed regional "hot spots" in the primary sensorimotor cortex (i.e., a sensorimotor hot spot sign), which we determined were associated with AD rather than frontotemporal lobar degeneration. We concluded that diagnostic ability was improved by the integrated use of visual assessment and statistical analysis. In addition, the detection of a sensorimotor hot spot sign was useful for detecting AD when hypofrontality is present and improved the ability to properly diagnose AD.
Scout trajectory error propagation computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1982-01-01
Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consists of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in computer program Scout Trajectory Error Propagation (STEP) and is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
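The covariance step that STEP mechanizes can be illustrated with a minimal Monte Carlo sketch; the error magnitudes and the transition matrix Phi below are hypothetical stand-ins, not Scout flight data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical burnout errors for ~50 flights:
# [altitude (m), velocity (m/s), flightpath angle (deg)].
errors = rng.normal(scale=[120.0, 3.0, 0.05], size=(50, 3))

# Sample covariance of the trajectory-parameter errors at burnout.
P0 = np.cov(errors, rowvar=False)

# Propagate the covariance with a (hypothetical) state transition matrix Phi:
# P(t) = Phi @ P0 @ Phi.T
Phi = np.eye(3) + 0.01 * rng.normal(size=(3, 3))
Pt = Phi @ P0 @ Phi.T
print(np.diag(Pt))  # propagated variances of the trajectory parameters
```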
Kang, Seung-Gul; Kang, Jae Myeong; Ko, Kwang-Pil; Park, Seon-Cheol; Mariani, Sara; Weng, Jia
2017-06-01
To compare the accuracy of the commercial Fitbit Flex device (FF) with polysomnography (PSG; the gold-standard method) in insomnia disorder patients and good sleepers. Participants wore an FF and actigraph while undergoing overnight PSG. Primary outcomes were intraclass correlation coefficients (ICCs) of the total sleep time (TST) and sleep efficiency (SE), and the frequency of clinically acceptable agreement between the FF in normal mode (FFN) and PSG. The sensitivity, specificity, and accuracy of detecting sleep epochs were compared among FFN, actigraphy, and PSG. The ICCs of the TST between FFN and PSG in the insomnia (ICC=0.886) and good-sleepers (ICC=0.974) groups were excellent, but the ICC of SE was only fair in both groups. The TST and SE were overestimated by FFN by 6.5 min and 1.75%, respectively, in good sleepers, and by 32.9 min and 7.9% in the insomnia group, with respect to PSG. The frequency of acceptable agreement between FFN and PSG was significantly lower (p=0.006) for the insomnia group (39.4%) than for the good-sleepers group (82.4%). The sensitivity and accuracy of FFN in an epoch-by-epoch comparison with PSG were good and comparable to those of actigraphy, but the specificity was poor in both groups. The ICC of TST in the FFN-PSG comparison was excellent in both groups, and the frequency of agreement was high in good sleepers but significantly lower in insomnia patients. These limitations need to be considered when applying commercial sleep trackers for clinical and research purposes in insomnia. Copyright © 2017 Elsevier Inc. All rights reserved.
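For readers unfamiliar with the agreement statistic used here, a minimal sketch of the two-way random-effects, single-measures ICC, ICC(2,1), follows; the sleep-time values are invented for illustration.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    x has shape (n_subjects, k_methods), e.g. TST from FFN and PSG."""
    n, k = x.shape
    grand = x.mean()
    ss_r = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_c = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between methods
    ss_e = ((x - grand) ** 2).sum() - ss_r - ss_c      # residual
    ms_r, ms_c = ss_r / (n - 1), ss_c / (k - 1)
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical total sleep time (minutes) for 6 subjects measured twice.
tst = np.array([[420, 452], [381, 395], [405, 430],
                [350, 371], [470, 489], [398, 426]], dtype=float)
print(f"ICC(2,1) = {icc_2_1(tst):.3f}")
```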
GPS/GLONASS Combined Precise Point Positioning with Receiver Clock Modeling
Wang, Fuhong; Chen, Xinghan; Guo, Fei
2015-01-01
Research has demonstrated that receiver clock modeling can reduce the correlation coefficients among the parameters of receiver clock bias, station height and zenith tropospheric delay. This paper introduces receiver clock modeling into GPS/GLONASS combined precise point positioning (PPP), aiming to better separate the receiver clock bias and station coordinates and therefore improve positioning accuracy. Firstly, the basic mathematical models, including the GPS/GLONASS observation equations, stochastic model, and receiver clock model, are briefly introduced. Then datasets from several IGS stations equipped with high-stability atomic clocks are used for kinematic PPP tests. To investigate the performance of PPP, including positioning accuracy and convergence time, a week (1–7 January 2014) of GPS/GLONASS data retrieved from these IGS stations is processed with different schemes. The results indicate that both positioning accuracy and convergence time benefit from receiver clock modeling. This is particularly pronounced for the vertical component. RMS statistics show that the average improvement of three-dimensional positioning accuracy reaches 30%–40%, and sometimes exceeds 60% for specific stations. Compared to GPS-only PPP, solutions of the GPS/GLONASS combined PPP are much better regardless of whether the receiver clock offsets are modeled, indicating that positioning accuracy and reliability are significantly improved with the additional GLONASS satellites in the case of an insufficient number of GPS satellites or poor geometry. In addition to receiver clock modeling, the impacts of different inter-system timing bias (ISB) models are investigated. For the case of a sufficient number of satellites with fairly good geometry, PPP performance is not seriously affected by the ISB model due to the low correlation between the ISB and the other parameters. However, refinement of the ISB model weakens the correlation between coordinates and ISB estimates and finally enhances PPP performance under poor observation conditions. PMID:26134106
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used or stepwise procedures are employed that iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
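A minimal sketch of the backward elimination procedure discussed above, using the RF's internal out-of-bag (OOB) estimate; the data are synthetic placeholders. Note that, as the paper warns, OOB accuracies computed inside the elimination loop are exactly the internal estimates that can be upwardly biased.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 40))  # placeholder landscape predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=300) > 0).astype(int)  # good/poor

cols = list(range(X.shape[1]))
while len(cols) > 4:
    rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
    rf.fit(X[:, cols], y)
    print(f"{len(cols):3d} predictors, OOB accuracy = {rf.oob_score_:.3f}")
    # Backward elimination: drop the least important quarter of predictors.
    order = np.argsort(rf.feature_importances_)
    cols = [cols[i] for i in order[len(cols) // 4:]]
```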
Yu, Q W; Zhang, P; Zhou, S B; Hu, Y; Ji, M X; Luo, Y C; You, H L; Yao, Z X
2016-07-01
To observe the accommodative accuracy of children with early-onset myopia at different near-work distances, and discuss the relationship between accommodative accuracy and early-onset myopia. This was a case-control study. Thirty-seven emmetropic children, 41 early-onset myopic children without correction, and 39 early-onset myopic children with spectacles, aged 7 to 13 years, were included. Measures of refractive errors and accommodative accuracy at four near-work distances (50 cm, 40 cm, 30 cm, and 20 cm) were made using the binocular fusion cross cylinder (FCC) of an automatic phoropter. Most participants showed accommodative lags, including the children with emmetropia. The ratio of lags in all participants at the different near-work distances was 75.21% (50 cm), 87.18% (40 cm), 92.31% (30 cm), and 98.29% (20 cm), respectively. Accommodative accuracy worsened, and the accommodative lag ratio and FCC values increased, as the distance shortened. The difference in accommodative accuracy among groups was statistically significant at 30 cm (χ(2)=7.852, P=0.020) and 20 cm (χ(2)=6.480, P=0.039). The values of FCC among groups were significantly different at 30 cm (F=3.626, P=0.030) and 20 cm (F=3.703, P=0.028), but not at 50 cm and 40 cm (P>0.05). In addition, the FCC values at 30 cm and 20 cm showed a statistically significant difference between myopic children without correction [(1.25±0.44) D and (1.76±0.43) D] and emmetropic children [(0.95±0.52) D and (1.41±0.58) D] (P=0.012, 0.008). The correlation between diopters of myopia and accommodative accuracy at different near-work distances was not statistically significant (P>0.05). However, the correlation between diopters of myopia and the accommodative lag value (FCC) at 20 cm was statistically significant (r=0.246, P=0.028). The closer the near-work distance, the worse the accommodative accuracy; this is more marked in early-onset myopia, especially myopia without correction, than in emmetropia. Wearing spectacles may improve the threshold and sensitivity of accommodation, and the accommodative accuracy at near-work distances (<30 cm), to some extent. Poor accommodative accuracy at near-work distances may not be related to early-onset myopia, but the FCC value at 20 cm is related to early-onset myopia: the higher the FCC value, the higher the diopter. (Chin J Ophthalmol, 2016, 52: 520-524).
Short text sentiment classification based on feature extension and ensemble classifier
NASA Astrophysics Data System (ADS)
Liu, Yang; Zhu, Xie
2018-05-01
With the rapid development of Internet social media, mining the emotional tendencies of short texts from the Internet to acquire useful information has attracted the attention of researchers. At present, the commonly used approaches can be divided into rule-based classification and statistical machine learning methods. Although micro-blog sentiment analysis has made good progress, shortcomings remain, such as insufficient accuracy and a strong dependence of the classification effect on the extracted features. Aiming at the characteristics of Chinese short texts, such as limited information, sparse features, and diverse expressions, this paper expands the original text by mining related semantic information from comments, forwards and other associated information. First, Word2vec is used to compute word similarity and extend the feature words. An ensemble classifier composed of SVM, KNN and HMM is then used to analyze the sentiment of micro-blog short texts. The experimental results show that the proposed method makes good use of comment and forwarding information to extend the original features; compared with the traditional method, the accuracy, recall and F1 value are all improved.
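A hedged sketch of the pipeline described above: Word2vec-based feature extension followed by a majority-vote ensemble. The corpus is a toy placeholder, and logistic regression stands in for the HMM member, which is not reproduced here.

```python
from gensim.models import Word2Vec
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy corpus standing in for micro-blog posts plus their comments/forwards.
docs = ["good movie great plot", "terrible service bad food",
        "great acting good fun", "bad plot awful movie",
        "good fun great service", "awful acting terrible plot"]
labels = [1, 0, 1, 0, 1, 0]

# 1) Train Word2vec, then extend each short text with its most similar words.
w2v = Word2Vec([d.split() for d in docs], vector_size=50, min_count=1, seed=0)

def extend(doc, topn=2):
    extra = [w for t in doc.split() for w, _ in w2v.wv.most_similar(t, topn=topn)]
    return doc + " " + " ".join(extra)

extended = [extend(d) for d in docs]

# 2) Majority-vote ensemble (logistic regression stands in for the HMM member).
ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier([("svm", SVC()),
                      ("knn", KNeighborsClassifier(n_neighbors=3)),
                      ("lr", LogisticRegression())], voting="hard"))
ensemble.fit(extended, labels)
print(ensemble.predict([extend("great fun movie")]))
```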
Recursive feature selection with significant variables of support vectors.
Tsai, Chen-An; Huang, Chien-Hsun; Chang, Ching-Wei; Chen, Chun-Houh
2012-01-01
The development of DNA microarrays enables researchers to screen thousands of genes simultaneously and helps determine high- and low-expression-level genes in normal and disease tissues. Selecting relevant genes for cancer classification is an important issue. Most gene selection methods use univariate ranking criteria and arbitrarily choose a threshold for selecting genes. However, the parameter setting may not be compatible with the selected classification algorithms. In this paper, we propose a new gene selection method (SVM-t) based on the use of t-statistics embedded in a support vector machine. We compared its performance to two similar SVM-based methods: SVM recursive feature elimination (SVMRFE) and recursive support vector machine (RSVM). The three methods were compared based on extensive simulation experiments and analyses of two published microarray datasets. In the simulation experiments, we found that the proposed method is more robust in selecting informative genes than SVMRFE and RSVM, and capable of attaining good classification performance when the variations of informative and noninformative genes are different. In the analysis of the two microarray datasets, the proposed method yields better performance in identifying fewer genes with good prediction accuracy, compared to SVMRFE and RSVM.
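The exact SVM-t embedding is not reproduced here; the following simplified sketch ranks genes by a two-sample t-statistic and trains a linear SVM on the top-ranked genes. In practice the selection should be nested inside the cross-validation folds to avoid optimistic bias.

```python
import numpy as np
from scipy import stats
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 1000))   # placeholder expression matrix
y = np.repeat([0, 1], 30)         # normal vs. disease tissue
X[y == 1, :20] += 1.0             # make 20 genes informative

# Rank genes by absolute two-sample t-statistic and keep the top 20.
t, _ = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
top = np.argsort(-np.abs(t))[:20]

acc = cross_val_score(SVC(kernel="linear"), X[:, top], y, cv=5).mean()
print(f"CV accuracy with {top.size} selected genes: {acc:.3f}")
```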
Spatial and thematic assessment of object-based forest stand delineation using an OFA-matrix
NASA Astrophysics Data System (ADS)
Hernando, A.; Tiede, D.; Albrecht, F.; Lang, S.
2012-10-01
The delineation and classification of forest stands is a crucial aspect of forest management. Object-based image analysis (OBIA) can be used to produce detailed maps of forest stands from either orthophotos or very high resolution satellite imagery. However, measures are then required for evaluating and quantifying both the spatial and thematic accuracy of the OBIA output. In this paper we present an approach for delineating forest stands and a new Object Fate Analysis (OFA) matrix for accuracy assessment. A two-level object-based orthophoto analysis was first carried out to delineate stands on the Dehesa Boyal public land in central Spain (Avila Province). Two structural features were first created for use in class modelling, enabling good differentiation between stands: a relational tree cover cluster feature, and an arithmetic ratio shadow/tree feature. We then extended the OFA comparison approach with an OFA-matrix to enable concurrent validation of thematic and spatial accuracies. Its diagonal shows the proportion of spatial and thematic coincidence between reference data and the corresponding classification. New parameters for Spatial Thematic Loyalty (STL), Spatial Thematic Loyalty Overall (STLOVERALL) and Maximal Interfering Object (MIO) are introduced to summarise the OFA-matrix accuracy assessment. A stands map generated by OBIA (classification data) was compared with a map of the same area produced from photo interpretation and field data (reference data). In our example the OFA-matrix results indicate good spatial and thematic accuracies (>65%) for all stand classes except the shrub stands (31.8%), and a good STLOVERALL (69.8%). The OFA-matrix has therefore been shown to be a valid tool for OBIA accuracy assessment.
Air Combat Training: Good Stick Index Validation. Final Report for Period 3 April 1978-1 April 1979.
ERIC Educational Resources Information Center
Moore, Samuel B.; And Others
A study was conducted to investigate and statistically validate a performance measuring system (the Good Stick Index) in the Tactical Air Command Combat Engagement Simulator I (TAC ACES I) Air Combat Maneuvering (ACM) training program. The study utilized a twelve-week sample of eighty-nine student pilots to statistically validate the Good Stick…
Bram, Jessyka Maria de França; Talib, Leda Leme; Joaquim, Helena Passarelli Giroud; Sarno, Tamires Alves; Gattaz, Wagner Farid; Forlenza, Orestes Vicente
2018-05-29
The clinical diagnosis of Alzheimer's disease (AD) is a probabilistic formulation that may lack accuracy, particularly at early stages of the dementing process. Abnormalities in amyloid-beta precursor protein (APP) metabolism and in the level of APP secretases have been demonstrated in platelets, and to a lesser extent in leukocytes, of AD patients, with conflicting results. The aim of the present study was to compare the protein levels of the APP secretases A-disintegrin and metalloprotease 10 (ADAM10), Beta-site APP-cleaving enzyme 1 (BACE1), and presenilin-1 (PSEN1) in platelets and leukocytes from 20 non-medicated older adults with AD and 20 healthy elders, and to determine the potential use of these biomarkers to discriminate cases of AD from controls. The protein levels of all APP secretases were significantly higher in platelets compared to leukocytes. We found a statistically significant decrease in ADAM10 (52.5%, p < 0.0001) and PSEN1 (32%, p = 0.02) in platelets from AD patients compared to controls, but not in leukocytes. Combining all three secretases to generate receiver-operating characteristic (ROC) curves, we found a good discriminatory effect (AD vs. controls) when using platelets (area under the curve [AUC] 0.90, sensitivity 88.9%, specificity 66.7%, p = 0.003), but not leukocytes (AUC 0.65, sensitivity 77.8%, specificity 50.0%, p = 0.2). Our findings indicate that platelets represent a better biological matrix than leukocytes to address the peripheral level of APP secretases. In addition, combining the protein levels of ADAM10, BACE1, and PSEN1 in platelets yielded good accuracy in discriminating AD from controls.
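The marker-combination step can be illustrated as below: a logistic model merges the three secretase levels into one score from which the ROC curve is built. The simulated levels only mimic the reported direction of effects and are not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 40                       # 20 AD patients, 20 controls
y = np.repeat([1, 0], 20)
# Hypothetical platelet levels of ADAM10, BACE1 and PSEN1 (arbitrary units);
# ADAM10 and PSEN1 are lower in the AD group, as reported in the study.
X = np.column_stack([
    rng.normal(loc=np.where(y == 1, 0.5, 1.0), scale=0.25),  # ADAM10
    rng.normal(loc=1.0, scale=0.25, size=n),                 # BACE1
    rng.normal(loc=np.where(y == 1, 0.7, 1.0), scale=0.25),  # PSEN1
])

# Combine the three markers into one score and build the ROC curve from it.
score = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
print(f"AUC = {roc_auc_score(y, score):.2f}")
```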
NASA Astrophysics Data System (ADS)
Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom
2018-07-01
A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either a normalized or a factorized spectrum and built-in spectrophotometer software, without the need for a specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ), is the one with partially overlapping spectra and is formulated as tablets, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA), is that with completely overlapping spectra and is formulated as eye drops. These procedures do not require any separation steps. Resolution of the spectrally overlapping binary mixtures was achieved by recovering the zero-order (D0) spectrum of each drug; absorbance was then recorded at their maxima of 238, 233.5, 273 and 242.5 nm for SM, EZ, CPL and PA, respectively. Calibration graphs were established with good correlation coefficients. The method shows significant advantages such as simplicity and minimal data manipulation, besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision and repeatability were found to be within the acceptable limits. The proposed method is good enough to be applied to an assay of drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying the t-test and F-test at the 95% confidence level, concluding that there is no significant difference with regard to accuracy and precision. Generally, this method could be used successfully for routine quality control testing.
Martens, Roland M; Bechten, Arianne; Ingala, Silvia; van Schijndel, Ronald A; Machado, Vania B; de Jong, Marcus C; Sanchez, Esther; Purcell, Derk; Arrighi, Michael H; Brashear, Robert H; Wattjes, Mike P; Barkhof, Frederik
2018-03-01
Immunotherapeutic treatments targeting amyloid-β plaques in Alzheimer's disease (AD) are associated with the presence of amyloid-related imaging abnormalities with oedema or effusion (ARIA-E), whose detection and classification are crucial to evaluate subjects enrolled in clinical trials. To investigate the applicability of subtraction MRI in ARIA-E detection using an established ARIA-E rating scale, we included 75 AD patients receiving bapineuzumab treatment, including 29 ARIA-E cases. Five neuroradiologists rated their brain MRI scans with and without subtraction images. The accuracy of evaluating the presence of ARIA-E, the intraclass correlation coefficient (ICC) and specific agreement were calculated. Subtraction resulted in higher sensitivity (0.966) and lower specificity (0.970) than native images (0.959 and 0.991, respectively). Individual rater detection was excellent. ICC scores ranged from excellent to good, except for gyral swelling (moderate). Excellent negative and good positive specific agreement among all ARIA-E imaging features was reported in both groups. Combining sulcal hyperintensity and gyral swelling significantly increased positive agreement for subtraction images. Subtraction MRI has potential as a visual aid increasing the sensitivity of ARIA-E assessment. However, in order to improve its usefulness, isotropic acquisition and enhanced training are required. The ARIA-E rating scale may benefit from combining sulcal hyperintensity and swelling. • The subtraction technique can improve detection of amyloid-related imaging abnormalities with oedema/effusion in Alzheimer's patients. • The value of ARIA-E detection, classification and monitoring using subtraction was assessed. • Validation of an established ARIA-E rating scale and recommendations for improvement are reported. • Complementary statistical methods were employed to measure accuracy, inter-rater reliability and specific agreement.
5D-QSAR for spirocyclic sigma1 receptor ligands by Quasar receptor surface modeling.
Oberdorf, Christoph; Schmidt, Thomas J; Wünsch, Bernhard
2010-07-01
Based on a contiguous and structurally as well as biologically diverse set of 87 sigma(1) ligands, a 5D-QSAR study was conducted in which a quasi-atomistic receptor surface modeling approach (program package Quasar) was applied. The superposition of the ligands was performed with the tool Pharmacophore Elucidation (MOE-package), which takes all conformations of the ligands into account. This procedure led to four pharmacophoric structural elements with aromatic, hydrophobic, cationic and H-bond acceptor properties. Using the aligned structures a 3D-model of the ligand binding site of the sigma(1) receptor was obtained, whose general features are in good agreement with previous assumptions on the receptor structure, but revealed some novel insights since it represents the receptor surface in more detail. Thus, e.g., our model indicates the presence of an H-bond acceptor moiety in the binding site as counterpart to the ligands' cationic ammonium center, rather than a negatively charged carboxylate group. The presented QSAR model is statistically valid and represents the biological data of all tested compounds, including a test set of 21 ligands not used in the modeling process, with very good to excellent accuracy [q(2) (training set, n=66; leave 1/3 out) = 0.84, p(2) (test set, n=21)=0.64]. Moreover, the binding affinities of 13 further spirocyclic sigma(1) ligands were predicted with reasonable accuracy (mean deviation in pK(i) approximately 0.8). Thus, in addition to novel insights into the requirements for binding of spirocyclic piperidines to the sigma(1) receptor, the presented model can be used successfully in the rational design of new sigma(1) ligands. Copyright (c) 2010 Elsevier Masson SAS. All rights reserved.
Reddy, Jagan Mohan; Prashanti, E; Kumar, G Vinay; Suresh Sajjan, M C; Mathew, Xavier
2009-01-01
The dual-arch impression technique is convenient in that it captures the required maxillary and mandibular impressions, as well as the inter-occlusal record, in one procedure. The accuracy of the inter-abutment distance in dies fabricated from the dual-arch impression technique remains in question because there is little information available in the literature. This study was conducted to compare the accuracy of the inter-abutment distance in dies obtained from full arch dual-arch trays with that in dies obtained from full arch stock metal trays. The metal dual-arch trays showed the best accuracy, followed by the plastic dual-arch and stock dentulous trays, respectively, although the differences were not statistically significant. The pouring sequence had no statistically significant effect on the inter-abutment distance, though pouring the non-working side of the dual-arch impression first showed better accuracy.
Psychometric Evaluation of the Ford Insomnia Response to Stress Test (FIRST) in Early Pregnancy.
Gelaye, Bizu; Zhong, Qiu-Yue; Barrios, Yasmin V; Redline, Susan; Drake, Christopher L; Williams, Michelle A
2016-04-15
To evaluate the construct validity and factor structure of the Spanish-language version of the Ford Insomnia Response to Stress Test questionnaire (FIRST-S) when used in early pregnancy. A cohort of 647 women was interviewed at ≤ 16 weeks of gestation to collect information regarding lifestyle, demographic, and sleep characteristics. The factorial structure of the FIRST-S was tested through exploratory and confirmatory factor analyses (EFA and CFA). Internal consistency and construct validity were also assessed by evaluating the association between the FIRST-S and symptoms of depression, anxiety, and sleep quality. Item response theory (IRT) analyses were conducted to complement classical test theory (CTT) analytic approaches. The mean score of the FIRST-S was 13.8 (range: 9-33). The results of the EFA showed that the FIRST-S contained a one-factor solution that accounted for 69.8% of the variance. The FIRST-S items showed good internal consistency (Cronbach α = 0.81). CFA results corroborated the one-factor structure found in the EFA and yielded measures indicating goodness of fit (comparative fit index of 0.902) and accuracy (root mean square error of approximation of 0.057). The FIRST-S had good construct validity, as demonstrated by statistically significant associations of FIRST-S scores with sleep quality and antepartum depression and anxiety symptoms. Finally, results from IRT analyses suggested excellent item infit and outfit measures. The FIRST-S was found to have good construct validity and internal consistency for assessing vulnerability to insomnia during early pregnancy. © 2016 American Academy of Sleep Medicine.
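For reference, the internal-consistency statistic reported above can be computed directly from its definition; the item responses below are simulated placeholders.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance(total)).
    items: (n_respondents, k_items) matrix of questionnaire scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical FIRST-style responses (9 items scored 1-4) for 8 respondents.
rng = np.random.default_rng(5)
base = rng.integers(1, 5, size=(8, 1))
items = np.clip(base + rng.integers(-1, 2, size=(8, 9)), 1, 4)
print(f"Cronbach alpha = {cronbach_alpha(items):.2f}")
```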
Maiti, Saumen; Erram, V C; Gupta, Gautam; Tiwari, Ram Krishna; Kulkarni, U D; Sangpal, R R
2013-04-01
Deplorable quality of groundwater arising from saltwater intrusion, natural leaching and anthropogenic activities is one of the major concerns for society. Assessment of groundwater quality is, therefore, a primary objective of scientific research. Here, we propose an artificial neural network-based method set in a Bayesian neural network (BNN) framework and employ it to assess groundwater quality. The approach is based on analyzing 36 water samples and inverting up to 85 Schlumberger vertical electrical sounding data. We constructed a priori models by suitably parameterizing the geochemical and geophysical data collected from the western part of India. The posterior model (post-inversion) was estimated using the BNN learning procedure and a global hybrid Monte Carlo/Markov Chain Monte Carlo optimization scheme. By suitable parameterization of the geochemical and geophysical parameters, we simulated 1,500 training samples, of which 50% were used for training and the remaining 50% for validation and testing. We show that the trained model is able to classify validation and test samples with 85% and 80% accuracy, respectively. Based on cross-correlation analysis and a Gibbs diagram of geochemical attributes, the groundwater qualities of the study area were classified into the following three categories: "Very good", "Good", and "Unsuitable". The BNN model-based results suggest that groundwater quality falls mostly in the range of "Good" to "Very good", except for some places near the Arabian Sea. The new modeling results, powered by uncertainty and statistical analyses, provide useful constraints that could be utilized in monitoring and assessment of groundwater quality.
NASA Astrophysics Data System (ADS)
Lucarini, Valerio; Russell, Gary L.
2002-08-01
Results are presented for two greenhouse gas experiments of the Goddard Institute for Space Studies atmosphere-ocean model (AOM). The computed trends of surface pressure; surface temperature; 850, 500, and 200 mbar geopotential heights; and related temperatures of the model for the time frame 1960-2000 are compared with those obtained from the National Centers for Environmental Prediction (NCEP) observations. The domain of interest is the Northern Hemisphere because of the higher reliability of both the model results and the observations. A spatial correlation analysis and a mean value comparison are performed, showing good agreement in terms of statistical significance for most of the variables considered in the winter and annual means. However, the 850 mbar temperature trends do not show significant positive correlation, and the surface pressure and 850 mbar geopotential height mean trend confidence intervals do not overlap. A brief general discussion about the statistics of trend detection is presented. The accuracy that this AOM has in describing the regional and NH mean climate trends inferred from NCEP through the atmosphere suggests that it may be reliable in forecasting future climate changes.
A preliminary study on identification of Thai rice samples by INAA and statistical analysis
NASA Astrophysics Data System (ADS)
Kongsri, S.; Kukusamude, C.
2017-09-01
This study aims to investigate the elemental compositions of 93 Thai rice samples using instrumental neutron activation analysis (INAA) and to identify rice according to type and cultivar using statistical analysis. As, Mg, Cl, Al, Br, Mn, K, Rb and Zn in Thai jasmine rice and Sung Yod rice samples were successfully determined by INAA. The accuracy and precision of the INAA method were verified using SRM 1568a Rice Flour. All elements were found to be in good agreement with the certified values. The precision in terms of %RSD was below 7%. The LODs were in the range of 0.01 to 29 mg kg-1. The concentrations of the 9 elements distributed in the Thai rice samples were evaluated and used as chemical indicators to identify the type of rice sample. The results showed that Mg, Cl, As, Br, Mn, K, Rb, and Zn concentrations in Thai jasmine rice samples differ significantly from those in Sung Yod rice samples, but there was no evidence at the 95% confidence level that Al differs. Our results may provide preliminary information for the discrimination of rice samples and a useful database of Thai rice.
A High-Resolution Capability for Large-Eddy Simulation of Jet Flows
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2011-01-01
A large-eddy simulation (LES) code that utilizes high-resolution numerical schemes is described and applied to a compressible jet flow. The code is written in a general manner such that the accuracy/resolution of the simulation can be selected by the user. Time discretization is performed using a family of low-dispersion Runge-Kutta schemes, selectable from first- to fourth-order. Spatial discretization is performed using central differencing schemes. Both standard schemes, from second- to twelfth-order (3- to 13-point stencils), and Dispersion Relation Preserving (DRP) schemes with 7- to 13-point stencils are available. The code is written in Fortran 90 and uses hybrid MPI/OpenMP parallelization. The code is applied to the simulation of a Mach 0.9 jet flow. Four-stage third-order Runge-Kutta time stepping and the 13-point DRP spatial discretization scheme of Bogey and Bailly are used. The high-resolution numerics allow the use of relatively sparse grids. Three levels of grid resolution are examined: 3.5, 6.5, and 9.2 million points. Mean flow, first-order turbulent statistics and turbulent spectra are reported. Good agreement with experimental data for mean flow and first-order turbulent statistics is shown.
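A minimal sketch of selectable-order central differencing of the kind the code offers; only the standard stencils are shown (the DRP coefficients are not reproduced), and the periodic test function is illustrative. The sketch is in Python for brevity, while the code itself is Fortran 90.

```python
import numpy as np

# Standard 2nd-, 4th- and 6th-order central-difference first derivatives
# (3-, 5- and 7-point stencils) applied to a periodic test function.
stencils = {
    2: np.array([-1, 0, 1]) / 2,
    4: np.array([1, -8, 0, 8, -1]) / 12,
    6: np.array([-1, 9, -45, 0, 45, -9, 1]) / 60,
}

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)

for order, c in stencils.items():
    half = len(c) // 2
    # du/dx at j = sum_i c[i] * u[j + (i - half)] / dx, periodic wrap via roll.
    dudx = sum(ci * np.roll(u, half - i) for i, ci in enumerate(c)) / dx
    err = np.abs(dudx - np.cos(x)).max()
    print(f"order {order}: max error = {err:.2e}")
```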
The Effect of Technological Devices on Cervical Lordosis.
Öğrenci, Ahmet; Koban, Orkun; Yaman, Onur; Dalbayrak, Sedat; Yılmaz, Mesut
2018-03-15
There is a need for cervical flexion, and even cervical hyperflexion, when using technological devices, especially mobile phones. We investigated the effect of this use on the cervical lordosis angle. A group of 156 patients who presented with neck pain only between 2013 and 2016 and had no additional problems was included. Patients were specifically questioned about mobile phone, tablet, and other device usage. Total usage was defined as years of use multiplied by average daily use in hours (average hours per day × years: hy). Cervical lordosis angles were statistically compared with the total time of use. In the overall ROC analysis, the cut-off value was found to be 20.5 hy. When this cut-off value was tested, the overall accuracy was good at 72.4%, and the rates of correctly identified at-risk and not-at-risk patients were both high. The ROC analysis was statistically significant. The use of computing devices, especially mobile phones, and the associated increase in cervical spine flexion indicate that cervical vertebral problems will increase even in younger people in the future. Beyond attentive use, ergonomic devices must also be developed.
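The cut-off determination can be sketched with a simple Youden-index scan over candidate thresholds; the usage values and outcomes below are simulated, and the 20.5 hy figure is not reproduced.

```python
import numpy as np

# Hypothetical total-usage values (hy) and outcome (1 = loss of lordosis).
rng = np.random.default_rng(6)
usage = np.concatenate([rng.normal(15, 6, 60), rng.normal(28, 7, 60)])
outcome = np.repeat([0, 1], 60)

# Scan candidate cut-offs; pick the one maximizing Youden's J = sens + spec - 1.
best = max(
    ((np.mean(usage[outcome == 1] >= c) + np.mean(usage[outcome == 0] < c) - 1, c)
     for c in np.unique(usage)),
    key=lambda t: t[0])
print(f"best cut-off ~ {best[1]:.1f} hy (J = {best[0]:.2f})")
```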
NASA Astrophysics Data System (ADS)
Glavanović, Siniša; Glavanović, Marija; Tomišić, Vladislav
2016-03-01
The UV spectrophotometric methods for simultaneous quantitative determination of paracetamol and tramadol in paracetamol-tramadol tablets were developed. The spectrophotometric data obtained were processed by means of partial least squares (PLS) and genetic algorithm coupled with PLS (GA-PLS) methods in order to determine the content of active substances in the tablets. The results gained by chemometric processing of the spectroscopic data were statistically compared with those obtained by means of validated ultra-high performance liquid chromatographic (UHPLC) method. The accuracy and precision of data obtained by the developed chemometric models were verified by analysing the synthetic mixture of drugs, and by calculating recovery as well as relative standard error (RSE). A statistically good agreement was found between the amounts of paracetamol determined using PLS and GA-PLS algorithms, and that obtained by UHPLC analysis, whereas for tramadol GA-PLS results were proven to be more reliable compared to those of PLS. The simplest and the most accurate and precise models were constructed by using the PLS method for paracetamol (mean recovery 99.5%, RSE 0.89%) and the GA-PLS method for tramadol (mean recovery 99.4%, RSE 1.69%).
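A hedged sketch of the PLS step on synthetic two-component spectra; the Gaussian component bands stand in for the real absorption spectra and are not the paper's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
wavelengths = 120
# Synthetic calibration spectra: weighted sums of two Gaussian component
# spectra (stand-ins for paracetamol and tramadol) plus noise.
s1 = np.exp(-0.5 * ((np.arange(wavelengths) - 40) / 10) ** 2)
s2 = np.exp(-0.5 * ((np.arange(wavelengths) - 75) / 12) ** 2)
C = rng.uniform(0.1, 1.0, size=(30, 2))                        # concentrations
A = C @ np.vstack([s1, s2]) + rng.normal(0, 0.005, (30, wavelengths))

# Fit PLS on the spectra and check mean recovery for the two components.
pls = PLSRegression(n_components=4).fit(A, C)
pred = pls.predict(A)
recovery = 100 * pred.mean(axis=0) / C.mean(axis=0)
print(f"mean recovery: {recovery[0]:.1f}% / {recovery[1]:.1f}%")
```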
Stout, David B.; Chatziioannou, Arion F.
2012-01-01
Micro-CT is widely used in preclinical studies of small animals. Due to the low soft-tissue contrast in typical studies, segmentation of soft tissue organs from noncontrast enhanced micro-CT images is a challenging problem. Here, we propose an atlas-based approach for estimating the major organs in mouse micro-CT images. A statistical atlas of major trunk organs was constructed based on 45 training subjects. The statistical shape model technique was used to include inter-subject anatomical variations. The shape correlations between different organs were described using a conditional Gaussian model. For registration, first the high-contrast organs in micro-CT images were registered by fitting the statistical shape model, while the low-contrast organs were subsequently estimated from the high-contrast organs using the conditional Gaussian model. The registration accuracy was validated based on 23 noncontrast-enhanced and 45 contrast-enhanced micro-CT images. Three different accuracy metrics (Dice coefficient, organ volume recovery coefficient, and surface distance) were used for evaluation. The Dice coefficients vary from 0.45 ± 0.18 for the spleen to 0.90 ± 0.02 for the lungs, the volume recovery coefficients vary from for the liver to 1.30 ± 0.75 for the spleen, the surface distances vary from 0.18 ± 0.01 mm for the lungs to 0.72 ± 0.42 mm for the spleen. The registration accuracy of the statistical atlas was compared with two publicly available single-subject mouse atlases, i.e., the MOBY phantom and the DIGIMOUSE atlas, and the results proved that the statistical atlas is more accurate than the single atlases. To evaluate the influence of the training subject size, different numbers of training subjects were used for atlas construction and registration. The results showed an improvement of the registration accuracy when more training subjects were used for the atlas construction. The statistical atlas-based registration was also compared with the thin-plate spline based deformable registration, commonly used in mouse atlas registration. The results revealed that the statistical atlas has the advantage of improving the estimation of low-contrast organs. PMID:21859613
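Of the three accuracy metrics, the Dice coefficient is simple to state in code; the masks below are toy 2-D stand-ins for the 3-D organ segmentations.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy 2-D "organ" masks: atlas estimate shifted by one voxel vs. reference.
ref = np.zeros((32, 32), dtype=bool)
ref[8:24, 8:24] = True
est = np.roll(ref, 1, axis=0)
print(f"Dice = {dice(ref, est):.3f}")
```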
NASA Astrophysics Data System (ADS)
Wang, Dong
2016-03-01
Gears are the most commonly used components in mechanical transmission systems. Their failures may cause transmission system breakdown and result in economic loss. Identification of different gear crack levels is important to prevent any unexpected gear failure, because gear cracks lead to gear tooth breakage. Signal-processing-based methods mainly require expertise to interpret gear fault signatures, which is usually not easy for ordinary users. In order to automatically identify different gear crack levels, intelligent gear crack identification methods should be developed. Previous case studies experimentally proved that K-nearest neighbors based methods exhibit high prediction accuracies for identification of 3 different gear crack levels under different motor speeds and loads. In this short communication, to further enhance the prediction accuracies of existing K-nearest neighbors based methods and extend identification from 3 to 5 different gear crack levels, redundant statistical features are constructed by using the Daubechies 44 (db44) binary wavelet packet transform at different wavelet decomposition levels, prior to the use of a K-nearest neighbors method. The dimensionality of the redundant statistical features is 620, which provides richer gear fault signatures. Since many of these statistical features are redundant and highly correlated with each other, dimensionality reduction is conducted to obtain new significant statistical features. Finally, the K-nearest neighbors method is used to identify 5 different gear crack levels under different motor speeds and loads. A case study including 3 experiments is investigated to demonstrate that the developed method provides higher prediction accuracies than the existing K-nearest neighbors based methods for recognizing different gear crack levels under different motor speeds and loads. Based on the new significant statistical features, some other popular statistical models, including linear discriminant analysis, quadratic discriminant analysis, classification and regression trees and the naive Bayes classifier, are compared with the developed method. The results show that the developed method has the highest prediction accuracies among these statistical models. Additionally, selection of the number of new significant features and parameter selection for K-nearest neighbors are thoroughly investigated.
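A minimal sketch of the feature-construction-plus-KNN idea; db4 is used instead of db44 because common library builds may not ship such a high-order Daubechies filter, and the vibration signals are synthetic placeholders.

```python
import numpy as np
import pywt
from scipy import stats
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def wp_features(sig, wavelet="db4", level=3):
    """Statistical features from every wavelet-packet node of one signal."""
    wp = pywt.WaveletPacket(data=sig, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, "natural"):
        c = node.data
        feats += [c.mean(), c.std(), stats.skew(c), stats.kurtosis(c),
                  np.sqrt((c ** 2).mean())]                      # RMS
    return feats

# Synthetic vibration snippets for 3 "crack levels" (placeholder data).
rng = np.random.default_rng(8)
t = np.linspace(0, 1, 1024, endpoint=False)
X, y = [], []
for level_id, amp in enumerate((0.0, 0.5, 1.0)):
    for _ in range(20):
        sig = np.sin(2 * np.pi * 50 * t) + amp * np.sin(2 * np.pi * 200 * t)
        X.append(wp_features(sig + 0.3 * rng.normal(size=t.size)))
        y.append(level_id)

acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                      np.array(X), np.array(y), cv=5).mean()
print(f"CV accuracy: {acc:.3f}")
```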
NEUTRON STAR MASS–RADIUS CONSTRAINTS USING EVOLUTIONARY OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A. L.; Morsink, S. M.; Fiege, J. D.
The equation of state of cold supra-nuclear-density matter, such as in neutron stars, is an open question in astrophysics. A promising method for constraining the neutron star equation of state is modeling pulse profiles of thermonuclear X-ray burst oscillations from hot spots on accreting neutron stars. The pulse profiles, constructed using spherical and oblate neutron star models, are comparable to what would be observed by a next-generation X-ray timing instrument like ASTROSAT, NICER, or a mission similar to LOFT. In this paper, we showcase the use of an evolutionary optimization algorithm to fit pulse profiles to determine the best-fit masses and radii. By fitting synthetic data, we assess how well the optimization algorithm can recover the input parameters. Multiple Poisson realizations of the synthetic pulse profiles, constructed with 1.6 million counts and no background, were fitted with the Ferret algorithm to analyze both statistical and degeneracy-related uncertainty and to explore how the goodness of fit depends on the input parameters. For the regions of parameter space sampled by our tests, the best-determined parameter is the projected velocity of the spot along the observer's line of sight, with an accuracy of ≤3% compared to the true value and with ≤5% statistical uncertainty. The next best determined are the mass and radius; for a neutron star with a spin frequency of 600 Hz, the best-fit mass and radius are accurate to ≤5%, with respective uncertainties of ≤7% and ≤10%. The accuracy and precision depend on the observer inclination and spot colatitude, with values of ∼1% achievable in mass and radius if both the inclination and colatitude are ≳60°.
Schmitt, R; Christopoulos, G; Wagner, M; Krimmer, H; Fodor, S; van Schoonhoven, J; Prommersberger, K J
2011-02-01
The purpose of this prospective study was to assess the diagnostic value of intravenously applied contrast agent for diagnosing osteonecrosis of the proximal fragment in scaphoid nonunion, and to compare the imaging results with intraoperative findings. In 88 patients (7 women, 81 men) suffering from symptomatic scaphoid nonunion, preoperative MRI was performed (coronal PD-w FSE fs, sagittal-oblique T1-w SE nonenhanced and T1-w SE fs contrast-enhanced, sagittal T2*-w GRE). MRI interpretation was based on the intensity of contrast enhancement: 0 = none, 1 = focal, 2 = diffuse. Intraoperatively, osseous viability was scored by means of bleeding points at the osteotomy site of the proximal scaphoid fragment: 0 = absent, 1 = moderate, 2 = good. Intraoperatively, 17 necrotic, 29 compromised, and 42 normal proximal fragments were found. On nonenhanced MRI, bone viability was judged necrotic in 1 patient, compromised in 20 patients, and unaffected in 67 patients. Contrast-enhanced MRI revealed 14 necrotic, 21 compromised, and 53 normal proximal fragments. Taking surgical findings as the standard of reference, the statistics for nonenhanced MRI were: sensitivity 6.3%, specificity 100%, positive PV 100%, negative PV 82.6%, and accuracy 82.9%; the statistics for contrast-enhanced MRI were: sensitivity 76.5%, specificity 98.6%, positive PV 92.9%, negative PV 94.6%, and accuracy 94.3%. Sensitivity for detecting avascular proximal fragments was significantly better (p<0.001) with contrast-enhanced MRI than with nonenhanced MRI. Viability of the proximal fragment in scaphoid nonunion can be assessed significantly better with contrast-enhanced MRI than with nonenhanced MRI. Bone marrow edema is an inferior indicator of osteonecrosis. Application of intravenous gadolinium is recommended for imaging scaphoid nonunion. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
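The reported accuracy statistics can be reproduced from the underlying 2 × 2 counts; the counts below are back-calculated from the percentages given above (13 + 4 = 17 necrotic fragments, 88 patients in total).

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values and accuracy from counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Contrast-enhanced arm: 13 true positives, 4 false negatives,
# 1 false positive, 70 true negatives reproduce 76.5% / 98.6% / 92.9% /
# 94.6% / 94.3% as reported.
print(diagnostic_stats(tp=13, fp=1, fn=4, tn=70))
```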
Sforza, Chiarella; De Menezes, Marcio; Bresciani, Elena; Cerón-Zapata, Ana M; López-Palacio, Ana M; Rodriguez-Ardila, Myriam J; Berrio-Gutiérrez, Lina M
2012-07-01
To assess a three-dimensional stereophotogrammetric method for palatal cast digitization in children with unilateral cleft lip and palate. As part of a collaboration between the University of Milan (Italy) and the University CES of Medellin (Colombia), 96 palatal cast models from neonatal patients with unilateral cleft lip and palate were collected and digitized using a three-dimensional stereophotogrammetric imaging system. Three-dimensional measurements (cleft width, depth, length) were made separately for the longer and shorter cleft segments on the digital dental cast surface between previously marked landmarks. Seven linear measurements were computed. Systematic and random errors between operators' tracings, and accuracy on geometric objects of known size, were calculated. In addition, mean measurements from three-dimensional stereophotographs were compared statistically with those from direct anthropometry. The three-dimensional method showed a low error (<0.9%) when measuring geometric objects. No systematic errors between operators' measurements were found (p > .05). Statistically significant differences (p < .05) were noted between methods (caliper versus stereophotogrammetry) for almost all distances analyzed, with mean absolute differences ranging between 0.22 and 3.41 mm. Therefore, rates for the technical error of measurement and relative error magnitude were scored as moderate for Ag-Am and poor for Ag-Pg and Am-Pm distances. Generally, caliper values were larger than three-dimensional stereophotogrammetric values. Three-dimensional stereophotogrammetric systems have some advantages over direct anthropometry, and the method could be sufficiently precise and accurate for palatal cast digitization in unilateral cleft lip and palate. This would be useful for clinical analyses in maxillofacial, plastic, and aesthetic surgery.
Lobbes, Marc B I; Lalji, Ulrich; Houwers, Janneke; Nijssen, Estelle C; Nelemans, Patty J; van Roozendaal, Lori; Smidt, Marjolein L; Heuts, Esther; Wildberger, Joachim E
2014-07-01
Feasibility studies have shown that contrast-enhanced spectral mammography (CESM) increases diagnostic accuracy of mammography. We studied diagnostic accuracy of CESM in patients referred from the breast cancer screening programme, who have a lower disease prevalence than previously published papers on CESM. During 6 months, all women referred to our hospital were eligible for CESM. Two radiologists blinded to the final diagnosis provided BI-RADS classifications for conventional mammography and CESM. Statistical significance of differences between mammography and CESM was calculated using McNemar's test. Receiver operating characteristic (ROC) curves were constructed for both imaging modalities. Of the 116 eligible women, 113 underwent CESM. CESM increased sensitivity to 100.0% (+3.1%), specificity to 87.7% (+45.7%), PPV to 76.2% (+36.5%) and NPV to 100.0% (+2.9%) as compared to mammography. Differences between conventional mammography and CESM were statistically significant (p < 0.0001). A similar trend was observed in the ROC curve. For conventional mammography, AUC was 0.779. With CESM, AUC increased to 0.976 (p < 0.0001). In addition, good agreement between tumour diameters measured using CESM, breast MRI and histopathology was observed. CESM increases diagnostic performance of conventional mammography, even in lower prevalence patient populations such as referrals from breast cancer screening. • CESM is feasible in the workflow of referrals from routine breast screening. • CESM is superior to mammography, even in low disease prevalence populations. • CESM has an extremely high negative predictive value for breast cancer. • CESM is comparable to MRI in assessment of breast cancer extent. • CESM is comparable to histopathology in assessment of breast cancer extent.
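The paired comparison reported above uses McNemar's test; a minimal sketch follows, with a hypothetical split of concordant/discordant classifications (the totals match the 113 examined women, but the split is invented).

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired 2x2 table of correct/incorrect classifications:
# rows = mammography (correct, incorrect), cols = CESM (correct, incorrect).
table = np.array([[70, 2],
                  [38, 3]])
result = mcnemar(table, exact=True)
print(f"McNemar p = {result.pvalue:.4f}")
```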
Influence of cone beam CT enhancement filters on diagnosis ability of longitudinal root fractures
Nascimento, M C C; Nejaim, Y; de Almeida, S M; Bóscolo, F N; Haiter-Neto, F; Sobrinho, L C
2014-01-01
Objectives: To determine whether cone beam CT (CBCT) enhancement filters influence the diagnosis of longitudinal root fractures. Methods: 40 extracted human posterior teeth were endodontically prepared, and fractures with no separation of fragments were made in 20 teeth of this sample. The teeth were placed in a dry mandible and scanned using a Classic i-CAT® CBCT device (Imaging Sciences International, Inc., Hatfield, PA). Evaluations were performed with and without CBCT filters (Sharpen Mild, Sharpen Super Mild, S9, Sharpen, Sharpen 3 × 3, Angio Sharpen Medium 5 × 5, Angio Sharpen High 5 × 5 and Shadow 3 × 3) by three oral radiologists. Inter- and intraobserver agreement was calculated by the kappa test. Accuracy, sensitivity, specificity and positive and negative predictive values were determined. The McNemar test was applied for agreement between all images vs the gold standard and original images vs images with filters (p < 0.05). Results: Means of intraobserver agreement ranged from good to excellent. The Angio Sharpen Medium 5 × 5 filter obtained the highest positive predictive value (80.0%) and specificity (76.5%). The Angio Sharpen High 5 × 5 filter obtained the highest sensitivity (78.9%) and accuracy (77.5%). The negative predictive value was highest (82.9%) for the S9 filter. The McNemar test showed no statistically significant differences between images with and without CBCT filters (p > 0.05). Conclusions: Although no statistically significant differences were observed in the diagnosis of root fractures when using filters, these filters seem to improve diagnostic capacity for longitudinal root fractures. Further in vitro studies with endodontically treated teeth and in vivo research should be considered. PMID:24408819
Schueler, Sabine; Walther, Stefan; Schuetz, Georg M; Schlattmann, Peter; Dewey, Marc
2013-06-01
To evaluate the methodological quality of diagnostic accuracy studies on coronary computed tomography (CT) angiography using the QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) tool. Each QUADAS item was individually defined to adapt it to the special requirements of studies on coronary CT angiography. Two independent investigators analysed 118 studies using 12 QUADAS items. Meta-regression and pooled analyses were performed to identify possible effects of methodological quality items on estimates of diagnostic accuracy. The overall methodological quality of coronary CT studies was merely moderate. They fulfilled a median of 7.5 out of 12 items. Only 9 of the 118 studies fulfilled more than 75 % of possible QUADAS items. One QUADAS item ("Uninterpretable Results") showed a significant influence (P = 0.02) on estimates of diagnostic accuracy with "no fulfilment" increasing specificity from 86 to 90 %. Furthermore, pooled analysis revealed that each QUADAS item that is not fulfilled has the potential to change estimates of diagnostic accuracy. The methodological quality of studies investigating the diagnostic accuracy of non-invasive coronary CT is only moderate and was found to affect the sensitivity and specificity. An improvement is highly desirable because good methodology is crucial for adequately assessing imaging technologies. • Good methodological quality is a basic requirement in diagnostic accuracy studies. • Most coronary CT angiography studies have only been of moderate design quality. • Weak methodological quality will affect the sensitivity and specificity. • No improvement in methodological quality was observed over time. • Authors should consider the QUADAS checklist when undertaking accuracy studies.
Raymond L. Czaplewski
2003-01-01
No thematic map is perfect. Some pixels or polygons are not accurately classified, no matter how well the map is crafted. Therefore, thematic maps need metadata that sufficiently characterize the nature and degree of these imperfections. To decision-makers, an accuracy assessment helps judge the risks of using imperfect geospatial data. To analysts, an accuracy...
Applications and accuracy of the parallel diagonal dominant algorithm
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1993-01-01
The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems: the symmetric and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
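The PDD partitioning itself is not reproduced here; the serial building block it parallelizes is the classic Thomas algorithm for tridiagonal systems, sketched below on a symmetric Toeplitz test system of the kind analyzed in the paper.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main- and super-diagonals
    a, b, c (a[0] and c[-1] unused) and right-hand side d."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):            # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Symmetric Toeplitz test system: sub/super-diagonals 1, main diagonal 4.
n = 8
a = np.full(n, 1.0); b = np.full(n, 4.0); c = np.full(n, 1.0)
x_true = np.arange(1.0, n + 1)
d = (4 * x_true
     + np.roll(x_true, 1) * (np.arange(n) > 0)
     + np.roll(x_true, -1) * (np.arange(n) < n - 1))
print(np.allclose(thomas(a, b, c, d), x_true))
```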
Sex determination using cheiloscopy and mandibular canine index as a tool in forensic dentistry.
Singh, Jaspal; Gupta, Kapil D; Sardana, Varun; Balappanavar, Ashwini Y; Malhotra, Garima
2012-07-01
Establishment of a person's individuality is important for legal as well as humanitarian purposes, and gender determination is an essential step in identifying an individual. In forensic odontology, the sum total of all the characteristics of teeth and their associated structures provides a unique totality and forms the basis for personal identification. To investigate the accuracy of various methods employed in sex determination, such as cheiloscopy and the mandibular canine index (MCI), a study group comprising adults between 20 and 25 years of age was assessed for gender identification using lip prints and MCI. The results were subjected to statistical analysis. MCI and lip prints were found to be accurate and specific for sex determination. There is scope for the use of these methods in criminal investigations, personal identification, and genetic studies. Thus, dental tissues make good witnesses: although they speak softly, they never lie and they never forget.
NASA Technical Reports Server (NTRS)
Canfield, R. C.; Ricchiazzi, P. J.
1980-01-01
An approximate probabilistic radiative transfer equation and the statistical equilibrium equations are simultaneously solved for a model hydrogen atom consisting of three bound levels and ionization continuum. The transfer equation for L-alpha, L-beta, H-alpha, and the Lyman continuum is explicitly solved assuming complete redistribution. The accuracy of this approach is tested by comparing source functions and radiative loss rates to values obtained with a method that solves the exact transfer equation. Two recent model solar-flare chromospheres are used for this test. It is shown that for the test atmospheres the probabilistic method gives values of the radiative loss rate that are characteristically good to a factor of 2. The advantage of this probabilistic approach is that it retains a description of the dominant physical processes of radiative transfer in the complete redistribution case, yet it achieves a major reduction in computational requirements.
Laser-induced tissue fluorescence in radiofrequency tissue-fusion characterization.
Su, Lei; Fonseca, Martina B; Arya, Shobhit; Kudo, Hiromi; Goldin, Robert; Hanna, George B; Elson, Daniel S
2014-01-01
Heat-induced tissue fusion is an important procedure in modern surgery and can greatly reduce trauma, complications, and mortality during minimally invasive surgical blood vessel anastomosis, but it may also have further benefits if applied to other tissue types such as small and large intestine anastomoses. We present a tissue-fusion characterization technology using laser-induced fluorescence spectroscopy, which provides further insight into tissue constituent variations at the molecular level. In particular, an increase of fluorescence intensity in the 450- to 550-nm range for 375- and 405-nm excitation suggests that collagen cross-linking increased in the fused tissues. Our experimental and statistical analyses showed that, by using fluorescence spectral data, good fusion could be differentiated from other cases with an accuracy of more than 95%. This suggests that fluorescence spectroscopy could potentially be used as a feedback control method in online tissue-fusion monitoring.
Cosmic-ray elemental abundances from 1 to 10 GeV per amu for boron through nickel
NASA Technical Reports Server (NTRS)
Dwyer, Robert; Meyer, Peter
1987-01-01
The relative abundances of cosmic-ray nuclei in the charge range boron through nickel over the energy range 1-10 GeV per amu were measured with a balloon-borne detector. The instrument consists of a scintillation and Cerenkov counter telescope with a multiwire proportional chamber hodoscope and has been flown in four high-altitude balloon flights. Good charge resolution (sigma = 0.2 charge units at iron) and high statistical accuracy have been achieved. These data are used to derive the energy dependence of the leakage path length using the leaky box model of propagation and confinement in the galaxy. This energy dependence is found to be best fit by lambda = E_tot^(-n), with n = 0.49 ± 0.06 over 1-10 GeV per amu. Relative abundances at the source are consistent with an energy-independent composition.
Saad, Ahmed S; Attia, Ali K; Alaraki, Manal S; Elzanfaly, Eman S
2015-11-05
Five different spectrophotometric methods were applied for the simultaneous determination of fenbendazole and rafoxanide in their binary mixture, namely first derivative, derivative ratio, ratio difference, dual wavelength and H-point standard addition spectrophotometric methods. Different factors affecting each of the applied spectrophotometric methods were studied and the selectivity of the applied methods was compared. The applied methods were validated as per the ICH guidelines, and good accuracy, specificity and precision were proven within the concentration range of 5-50 μg/mL for both drugs. Statistical analysis using one-way ANOVA showed no significant differences among the proposed methods for the determination of the two drugs. The proposed methods successfully determined both drugs in laboratory-prepared and commercially available binary mixtures, and were found applicable for routine analysis in quality control laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.
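The one-way ANOVA comparison reported above can be reproduced in outline as follows; the recovery values are made up for illustration, since the abstract does not give raw data.

```python
import numpy as np
from scipy import stats

# Hypothetical percent recoveries of fenbendazole by three of the methods
first_derivative = np.array([99.1, 100.4, 98.7, 99.8, 100.2])
ratio_difference = np.array([99.5, 98.9, 100.1, 99.3, 100.0])
dual_wavelength  = np.array([100.3, 99.0, 99.6, 99.9, 98.8])

f_stat, p_value = stats.f_oneway(first_derivative, ratio_difference, dual_wavelength)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 would indicate no significant difference among
# the methods, matching the conclusion stated in the abstract.
```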
Pérez-Garrido, Alfonso; Morales Helguera, Aliuska; Abellán Guillén, Adela; Cordeiro, M Natália D S; Garrido Escudero, Amalio
2009-01-15
This paper reports a QSAR study for predicting the complexation of a large and heterogeneous variety of substances (233 organic compounds) with beta-cyclodextrins (beta-CDs). Several different theoretical molecular descriptors, calculated solely from the molecular structure of the compounds under investigation, and an efficient variable selection procedure, the genetic algorithm, led to models with satisfactory global accuracy and predictivity. The best final QSAR model is based on topological descriptors while also offering a reasonable interpretation. This QSAR model was able to explain ca. 84% of the variance in the experimental activity, and displayed very good internal cross-validation statistics and predictivity on external data. It shows that the driving forces for CD complexation are mainly hydrophobic and steric (van der Waals) interactions. Thus, the results of our study provide a valuable tool for future screening and priority testing of beta-CD guest molecules.
NASA Astrophysics Data System (ADS)
Phan, Raymond; Androutsos, Dimitrios
2008-01-01
In this paper, we present a logo and trademark retrieval system for unconstrained color image databases that extends the Color Edge Co-occurrence Histogram (CECH) object detection scheme. We introduce more accurate information to the CECH, by virtue of incorporating color edge detection using vector order statistics. This produces a more accurate representation of edges in color images, in comparison to the simple color pixel difference classification of edges as seen in the CECH. Our proposed method is thus reliant on edge gradient information, and as such, we call this the Color Edge Gradient Co-occurrence Histogram (CEGCH). We use this as the main mechanism for our unconstrained color logo and trademark retrieval scheme. Results illustrate that the proposed retrieval system retrieves logos and trademarks with good accuracy, and outperforms the CECH object detection scheme with higher precision and recall.
NASA Astrophysics Data System (ADS)
Singh, Veena D.; Daharwal, Sanjay J.
2017-01-01
Three multivariate calibration spectrophotometric methods were developed for the simultaneous estimation of Paracetamol (PARA), Enalapril maleate (ENM) and Hydrochlorothiazide (HCTZ) in tablet dosage form, namely the multi-linear regression calibration (MLRC), trilinear regression calibration (TLRC) and classical least squares (CLS) methods. The selectivity of the proposed methods was studied by analyzing laboratory-prepared ternary mixtures, and the methods were successfully applied to the combined dosage form. The proposed methods were validated as per ICH guidelines, and good accuracy, precision and specificity were confirmed within the concentration ranges of 5-35 μg/mL, 5-40 μg/mL and 5-40 μg/mL for PARA, HCTZ and ENM, respectively. The results were statistically compared with a reported HPLC method. Thus, the proposed methods can be effectively useful for the routine quality control analysis of these drugs in commercial tablet dosage form.
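Of the three methods, CLS is the simplest to sketch: under Beer's law the absorbance matrix factorizes as A = C K, so calibration and prediction are two least-squares solves. The sketch below uses synthetic spectra, as the paper's data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
K_true = rng.random((3, 50))                     # pure-component spectra (synthetic)
C_cal = rng.uniform(5, 40, (20, 3))              # calibration concentrations (μg/mL)
A_cal = C_cal @ K_true + rng.normal(0, 1e-3, (20, 50))   # Beer's law + noise

# Calibration step: estimate pure-component spectra K from A = C K
K_est, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)

# Prediction step: estimate concentrations of an unknown ternary mixture
a_unknown = np.array([10.0, 25.0, 30.0]) @ K_true
c_pred, *_ = np.linalg.lstsq(K_est.T, a_unknown, rcond=None)
print(np.round(c_pred, 2))                       # ~[10. 25. 30.]
```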
Improved biliary detection and diagnosis through intelligent machine analysis.
Logeswaran, Rajasvaran
2012-09-01
This paper reports on work undertaken to improve automated detection of bile ducts in magnetic resonance cholangiopancreatography (MRCP) images, with the objective of conducting preliminary classification of the images for diagnosis. The proposed I-BDeDIMA (Improved Biliary Detection and Diagnosis through Intelligent Machine Analysis) scheme is a multi-stage framework consisting of successive phases of image normalization, denoising, structure identification, object labeling, feature selection and disease classification. A combination of multiresolution wavelet, dynamic intensity thresholding, segment-based region growing, region elimination, statistical analysis and neural networks, is used in this framework to achieve good structure detection and preliminary diagnosis. Tests conducted on over 200 clinical images with known diagnosis have shown promising results of over 90% accuracy. The scheme outperforms related work in the literature, making it a viable framework for computer-aided diagnosis of biliary diseases. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
UWB pulse detection and TOA estimation using GLRT
NASA Astrophysics Data System (ADS)
Xie, Yan; Janssen, Gerard J. M.; Shakeri, Siavash; Tiberius, Christiaan C. J. M.
2017-12-01
In this paper, a novel statistical approach is presented for time-of-arrival (TOA) estimation based on first path (FP) pulse detection using a sub-Nyquist sampling ultra-wide band (UWB) receiver. The TOA measurement accuracy, which cannot be improved by averaging of the received signal, can be enhanced by the statistical processing of a number of TOA measurements. The TOA statistics are modeled and analyzed for a UWB receiver using threshold crossing detection of a pulse signal with noise. The detection and estimation scheme based on the Generalized Likelihood Ratio Test (GLRT) detector, which captures the full statistical information of the measurement data, is shown to achieve accurate TOA estimation and allows for a trade-off between the threshold level, the noise level, the amplitude and the arrival time of the first path pulse, and the accuracy of the obtained final TOA.
Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed
NASA Astrophysics Data System (ADS)
Arif, N.; Danoedoro, P.; Hartono
2017-12-01
Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents the actual conditions. Erosion models are complex because they combine uncertain data from different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the value of each network input parameter, i.e. the number of hidden layers, learning rate, momentum, and RMS target. This study tested the capability of artificial neural networks in the prediction of erosion risk with several input parameters through multiple simulations to get good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, one of the potentially critical watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on the accuracy compared to the other parameters. A small number of iterations can produce good accuracy if the combination of the other parameters is right; in this case, one hidden layer was sufficient. The highest training accuracy achieved in this study was 99.32%, in the ANN 14 simulation with a parameter combination of 1 HL, LR 0.01, M 0.5, RMS 0.0001, and 15,000 iterations. The ANN training accuracy was not influenced by the number of channels, i.e. the input dataset (erosion factors) or the data dimensions; rather, it was determined by changes in the network parameters.
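A rough scikit-learn analogue of the reported best configuration (ANN 14: one hidden layer, learning rate 0.01, momentum 0.5, RMS 0.0001, 15,000 iterations) is sketched below; the hidden-layer width, the synthetic data, and the reading of the RMS criterion as a convergence tolerance are our assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 6))                    # six hypothetical erosion factors per cell
y = (X.sum(axis=1) > 3.0).astype(int)       # toy erosion-risk classes

model = MLPClassifier(hidden_layer_sizes=(10,),   # one hidden layer (width assumed)
                      solver="sgd",
                      learning_rate_init=0.01,    # LR 0.01
                      momentum=0.5,               # M 0.5
                      tol=1e-4,                   # stand-in for the RMS 0.0001 criterion
                      max_iter=15000,             # iteration budget
                      random_state=0)
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.4f}")
```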
Mitsunaga, Tisha; Hedt-Gauthier, Bethany L; Ngizwenayo, Elias; Farmer, Didi Bertrand; Gaju, Erick; Drobac, Peter; Basinga, Paulin; Hirschhorn, Lisa; Rich, Michael L; Winch, Peter J; Ngabo, Fidele; Mugeni, Cathy
2015-08-01
Community health workers (CHWs) collect data for routine services, surveys and research in their communities. However, quality of these data is largely unknown. Utilizing poor quality data can result in inefficient resource use, misinformation about system gaps, and poor program management and effectiveness. This study aims to measure CHW data accuracy, defined as agreement between household registers compared to household member interview and client records in one district in Eastern province, Rwanda. We used cluster-lot quality assurance sampling to randomly sample six CHWs per cell and six households per CHW. We classified cells as having 'poor' or 'good' accuracy for household registers for five indicators, calculating point estimates of percent of households with accurate data by health center. We evaluated 204 CHW registers and 1,224 households for accuracy across 34 cells in southern Kayonza. Point estimates across health centers ranged from 79 to 100% for individual indicators and 61 to 72% for the composite indicator. Recording error appeared random for all but the widely under-reported number of women on modern family planning method. Overall, accuracy was largely 'good' across cells, with varying results by indicator. Program managers should identify optimum thresholds for 'good' data quality and interventions to reach them according to data use. Decreasing variability and improving quality will facilitate potential of these routinely-collected data to be more meaningful for community health program management. We encourage further studies assessing CHW data quality and the impact training, supervision and other strategies have on improving it.
Milker, Yvonne; Weinkauf, Manuel F G; Titschack, Jürgen; Freiwald, Andre; Krüger, Stefan; Jorissen, Frans J; Schmiedl, Gerhard
2017-01-01
We present paleo-water depth reconstructions for the Pefka E section deposited on the island of Rhodes (Greece) during the early Pleistocene. For these reconstructions, a transfer function (TF) using modern benthic foraminifera surface samples from the Adriatic and Western Mediterranean Seas has been developed. The TF model gives an overall predictive accuracy of ~50 m over a water depth range of ~1200 m. Two separate TF models for shallower and deeper water depth ranges indicate a good predictive accuracy of 9 m for shallower water depths (0-200 m) but far less accuracy of 130 m for deeper water depths (200-1200 m) due to uneven sampling along the water depth gradient. To test the robustness of the TF, we randomly selected modern samples to develop random TFs, showing that the model is robust for water depths between 20 and 850 m while greater water depths are underestimated. We applied the TF to the Pefka E fossil data set. The goodness-of-fit statistics showed that most fossil samples have a poor to extremely poor fit to water depth. We interpret this as a consequence of a lack of modern analogues for the fossil samples and removed all samples with extremely poor fit. To test the robustness and significance of the reconstructions, we compared them to reconstructions from an alternative TF model based on the modern analogue technique and applied the randomization TF test. We found our estimates to be robust and significant at the 95% confidence level, but we also observed that our estimates are strongly overprinted by orbital, precession-driven changes in paleo-productivity and corrected our estimates by filtering out the precession-related component. We compared our corrected record to reconstructions based on a modified plankton/benthos (P/B) ratio, excluding infaunal species, and to stable oxygen isotope data from the same section, as well as to paleo-water depth estimates for the Lindos Bay Formation of other sediment sections of Rhodes. These comparisons indicate that our orbital-corrected reconstructions are reasonable and reflect major tectonic movements of Rhodes during the early Pleistocene.
Church, Peter C; Greer, Mary-Louise C; Cytter-Kuint, Ruth; Doria, Andrea S; Griffiths, Anne M; Turner, Dan; Walters, Thomas D; Feldman, Brian M
2017-05-01
Magnetic resonance enterography (MRE) is increasingly relied upon for noninvasive assessment of intestinal inflammation in Crohn disease. However very few studies have examined the diagnostic accuracy of individual MRE signs in children. We have created an MR-based multi-item measure of intestinal inflammation in children with Crohn disease - the Pediatric Inflammatory Crohn's MRE Index (PICMI). To inform item selection for this instrument, we explored the inter-rater agreement and diagnostic accuracy of individual MRE signs of inflammation in pediatric Crohn disease and compared our findings with the reference standards of the weighted Pediatric Crohn's Disease Activity Index (wPCDAI) and C-reactive protein (CRP). In this cross-sectional single-center study, MRE studies in 48 children with diagnosed Crohn disease (66% male, median age 15.5 years) were reviewed by two independent radiologists for the presence of 15 MRE signs of inflammation. Using kappa statistics we explored inter-rater agreement for each MRE sign across 10 anatomical segments of the gastrointestinal tract. We correlated MRE signs with the reference standards using correlation coefficients. Radiologists measured the length of inflamed bowel in each segment of the gastrointestinal tract. In each segment, MRE signs were scored as either binary (0-absent, 1-present), or ordinal (0-absent, 1-mild, 2-marked). These segmental scores were weighted by the length of involved bowel and were summed to produce a weighted score per patient for each MRE sign. Using a combination of wPCDAI≥12.5 and CRP≥5 to define active inflammation, we calculated area under the receiver operating characteristic curve (AUC) for each weighted MRE sign. Bowel wall enhancement, wall T2 hyperintensity, wall thickening and wall diffusion-weighted imaging (DWI) hyperintensity were most commonly identified. Inter-rater agreement was best for decreased motility and wall DWI hyperintensity (kappa≥0.64). Correlation between MRE signs and wPCDAI was higher than with CRP. AUC was highest (≥0.75) for ulcers, wall enhancement, wall thickening, wall T2 hyperintensity and wall DWI hyperintensity. Some MRE signs had good inter-rater agreement and AUC for detection of inflammation in children with Crohn disease.
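The two headline statistics above, inter-rater kappa and AUC against the composite reference standard, are each a single call; the sketch below uses invented ratings and weighted scores, since the patient data are not public.

```python
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Hypothetical presence/absence of one MRE sign across 10 bowel segments
rater1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print("inter-rater kappa:", round(cohen_kappa_score(rater1, rater2), 2))

# Hypothetical per-patient weighted sign scores vs the reference standard
# (1 = active inflammation by wPCDAI>=12.5 and CRP>=5, 0 = inactive)
reference     = [1, 1, 0, 0, 1, 0, 1, 0]
weighted_sign = [4.2, 3.8, 0.5, 1.1, 3.0, 0.9, 2.7, 1.5]
print("AUC:", round(roc_auc_score(reference, weighted_sign), 2))
```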
Accuracy of Estimating Solar Radiation Pressure for GEO Debris with Tumbling Effect
NASA Astrophysics Data System (ADS)
Chao, Chia-Chun George
2009-03-01
The accuracy of estimating solar radiation pressure for GEO debris is examined and demonstrated, via numerical simulations, by fitting a batch (months) of simulated position vectors. These simulated position vectors are generated from a "truth orbit" with added white noise using high-precision numerical integration tools. After the long-arc fit of the simulated observations (position vectors), one can accurately and reliably determine how close the estimated value of solar radiation pressure is to the truth. Results of this study show that the inherent accuracy in estimating the solar radiation pressure coefficient can be as good as 1% if a long-arc fit span of up to 180 days is used and the satellite is not tumbling. The corresponding position prediction accuracy, in maximum error, can be as good as 1 km along in-track, 0.3 km along radial and 0.1 km along cross-track for predictions up to 30 days. Similar accuracies can be expected when the object is tumbling, as long as the rate of attitude change differs from the orbit rate. Results of this study reveal an important phenomenon: the solar radiation pressure significantly affects the orbit motion when the spin rate equals the orbit rate.
Tomblin, J. Bruce; Peng, Shu-Chen; Spencer, Linda J.; Lu, Nelson
2011-01-01
Purpose: This study characterized the development of speech sound production in prelingually deaf children with a minimum of 8 years of cochlear implant (CI) experience. Method: Twenty-seven pediatric CI recipients' spontaneous speech samples from annual evaluation sessions were phonemically transcribed. Accuracy for these speech samples was evaluated in piecewise regression models. Results: As a group, pediatric CI recipients showed steady improvement in speech sound production following implantation, but the improvement rate declined after 6 years of device experience. Piecewise regression models indicated that the slope estimating the participants' improvement rate was statistically greater than 0 during the first 6 years postimplantation, but not after 6 years. The group of pediatric CI recipients' accuracy of speech sound production after 4 years of device experience reasonably predicts their speech sound production after 5–10 years of device experience. Conclusions: The development of speech sound production in prelingually deaf children stabilizes after 6 years of device experience, and typically approaches a plateau by 8 years of device use. Early growth in speech before 4 years of device experience did not predict later rates of growth or levels of achievement. However, good predictions could be made after 4 years of device use.
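A piecewise (segmented) regression with the knot fixed at 6 years, as used above, reduces to ordinary least squares with a hinge term; the scores below are invented for illustration.

```python
import numpy as np

years = np.arange(1, 11, dtype=float)        # years of device experience
score = np.array([35, 48, 58, 66, 72, 77, 78, 79, 79, 80], dtype=float)

knot = 6.0
# Columns: intercept, slope before the knot, change in slope after the knot
X = np.column_stack([np.ones_like(years), years, np.clip(years - knot, 0, None)])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"slope before 6 yr: {beta[1]:.2f} %/yr, after 6 yr: {beta[1] + beta[2]:.2f} %/yr")
```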
Accurate prediction of RNA-binding protein residues with two discriminative structural descriptors.
Sun, Meijian; Wang, Xia; Zou, Chuanxin; He, Zenghui; Liu, Wei; Li, Honglin
2016-06-07
RNA-binding proteins participate in many important biological processes concerning RNA-mediated gene regulation, and several computational methods have recently been developed to predict the protein-RNA interactions of RNA-binding proteins. Newly developed discriminative descriptors will help to improve the prediction accuracy of these methods and provide further meaningful information for researchers. In this work, we designed two structural features (residue electrostatic surface potential and triplet interface propensity) and, according to statistical and structural analysis of protein-RNA complexes, the two features were powerful for identifying RNA-binding protein residues. Using these two features and other excellent structure- and sequence-based features, a random forest classifier was constructed to predict RNA-binding residues. The area under the receiver operating characteristic curve (AUC) of five-fold cross-validation for our method on the training set RBP195 was 0.900, and when applied to the test set RBP68, the prediction accuracy (ACC) was 0.868 and the F-score was 0.631. The good prediction performance of our method revealed that the two newly designed descriptors are discriminative for inferring protein residues interacting with RNAs. To facilitate the use of our method, a web server called RNAProSite, which implements the proposed method, was constructed and is freely available at http://lilab.ecust.edu.cn/NABind.
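A skeleton of the evaluation pipeline, a random forest with five-fold cross-validated AUC, might look like the following; the feature matrix here is random stand-in data, not the RBP195 descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((300, 12))                   # stand-ins for structural/sequence features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # toy binding / non-binding labels

clf = RandomForestClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```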
Honda, Michitaka
2014-04-01
Several improvements were implemented in the edge method for measuring the presampled modulation transfer function (MTF). A new estimation technique for the edge angle was developed by applying a principal components analysis algorithm; the error in this estimation was statistically confirmed to be less than 0.01 even in the presence of quantum noise. Secondly, the geometrical edge slope was approximated by a rational number, making it possible to obtain an oversampled edge response function (ESF) with equal intervals. Thirdly, the final MTF was estimated as the average of multiple MTFs calculated for local areas. This averaging operation eliminates the errors caused by the rational approximation. Computer-simulated images were used to evaluate the accuracy of the method. The relative error between the estimated MTF and the theoretical MTF at the Nyquist frequency was less than 0.5% when the MTF was expressed as a sinc function. For MTFs representing an indirect detector and a phase-contrast detector, good agreement was also observed for the estimated MTFs. The high accuracy of the MTF estimation was also confirmed even for edge angles of around 10 degrees, which suggests the potential for simplification of the measurement conditions. The proposed method could be incorporated into an automated measurement technique using a software application.
Lin, Pingtan; Zhao, Shulin; Lu, Xin; Ye, Fanggui; Wang, Hengshan
2013-08-01
A CE method based on a dual-enzyme co-immobilized capillary microreactor was developed for the simultaneous screening of multiple enzyme inhibitors. The capillary microreactor was prepared by co-immobilizing adenosine deaminase and xanthine oxidase on the inner wall at the inlet end of the separation capillary. The enzymes were first immobilized on gold nanoparticles, and the functionalized gold nanoparticles were then assembled on the inner wall at the inlet end of the separation capillary treated with polyethyleneimine. With the developed CE method, the substrates and products were baseline separated within 3 min. The activity of the immobilized enzyme can be directly detected by measuring the peak height of the products. A statistical parameter Z' factor was recommended for evaluation of the accuracy of a drug screening system. In the present study, it was calculated to be larger than 0.5, implying a good accuracy. Finally, screening a small compound library containing two known enzyme inhibitors and 20 natural extracts by the proposed method was demonstrated. The known inhibitors were identified, and some natural extracts were found to be positive for two-enzyme inhibition by the present method. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
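The Z' factor mentioned above is the standard screening-window statistic Z' = 1 - 3(sd_pos + sd_neg)/|mean_pos - mean_neg|; a minimal computation with invented control readings follows.

```python
import numpy as np

def z_prime(pos, neg):
    """Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values above 0.5 are conventionally taken to indicate a good assay."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

# Hypothetical product peak heights: uninhibited (positive) vs fully
# inhibited (negative) controls
positive = [10.2, 9.8, 10.5, 10.1, 9.9]
negative = [1.1, 0.9, 1.2, 1.0, 1.1]
print(f"Z' = {z_prime(positive, negative):.2f}")
```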
Unbiased feature selection in learning random forests for high-dimensional data.
Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi
2015-01-01
Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting, which degrades the accuracy of RFs on high-dimensional data. Besides that, RFs exhibit bias in the feature selection process, where multivalued features are favored. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features in learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment, and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets. A feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach enables one to generate more accurate trees, while allowing one to reduce dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets including image datasets. The experimental results have shown that RFs with the proposed approach outperformed the existing random forests in increasing the accuracy and the AUC measures.
Application of activated barrier hopping theory to viscoplastic modeling of glassy polymers
NASA Astrophysics Data System (ADS)
Sweeney, J.; Spencer, P. E.; Vgenopoulos, D.; Babenko, M.; Boutenel, F.; Caton-Rose, P.; Coates, P. D.
2018-05-01
An established statistical mechanical theory of amorphous polymer deformation has been incorporated as a plastic mechanism into a constitutive model and applied to a range of polymer mechanical deformations. The temperature and rate dependence of the tensile yield of PVC, as reported in early studies, has been modeled to high levels of accuracy. Tensile experiments on PET reported here are analyzed similarly and good accuracy is also achieved. The frequently observed increase in the gradient of the plot of yield stress against logarithm of strain rate is an inherent feature of the constitutive model. The form of temperature dependence of the yield that is predicted by the model is found to give an accurate representation. The constitutive model is developed in two-dimensional form and implemented as a user-defined subroutine in the finite element package ABAQUS. This analysis is applied to the tensile experiments on PET, in some of which strain is localized in the form of shear bands and necks. These deformations are modeled with partial success, though adiabatic heating of the instability causes inaccuracies for this isothermal implementation of the model. The plastic mechanism has advantages over the Eyring process, is equally tractable, and presents no particular difficulties in implementation with finite elements.
NASA Technical Reports Server (NTRS)
Lerch, F. J.; Klosko, S. M.; Wagner, C. A.
1986-01-01
The accuracy and validation of global gravity models based on satellite data are discussed, responding to the statistical analysis of Lambeck and Coleman (1983) (LC). Included are an evaluation of the LC error spectra, a summary of independent-observation calibrations of the error estimates of the Goddard Earth Models (GEM) 9 and L2 (Lerch et al., 1977, 1979, 1982, 1983, and 1985), a comparison of GEM-L2 with GRIM-3B (Reigber et al., 1983), a comparison of recent models with LAGEOS laser ranging, and a summary of resonant-orbit model tests. It is concluded that the accuracy of GEMs 9, 10, and L2 is much higher than claimed by LC, that the GEMs are in good agreement with independent observations and with GRIM-3B, and that the GEM calibrations were adequate. In a reply by LC, a number of specific questions regarding the error estimates are addressed, and it is pointed out that the intermodel discrepancies of the greatest geophysical interest are those in the higher-order coefficients, not discussed in the present comment. It is argued that the differences among the geoid heights of even the most recent models are large enough to call for considerable improvements.
Bai, Jing; Yang, Wei; Wang, Song; Guan, Rui-Hong; Zhang, Hui; Fu, Jing-Jing; Wu, Wei; Yan, Kun
2016-07-01
The purpose of this study was to explore the diagnostic value of the arrival time difference between lesions and surrounding lung tissue on contrast-enhanced sonography of subpleural pulmonary lesions. A total of 110 patients with subpleural pulmonary lesions who underwent both conventional and contrast-enhanced sonography and had a definite diagnosis were enrolled. After contrast agent injection, the arrival times in the lesion, lung, and chest wall were recorded. The arrival time differences between various tissues were also calculated. Statistical analysis showed a significant difference in the lesion arrival time, the arrival time difference between the lesion and lung, and the arrival time difference between the chest wall and lesion (all P < .001) for benign and malignant lesions. Receiver operating characteristic curve analysis revealed that the optimal diagnostic criterion was the arrival time difference between the lesion and lung, and that the best cutoff point was 2.5 seconds (later arrival signified malignancy). This new diagnostic criterion showed superior diagnostic accuracy (97.1%) compared to conventional diagnostic criteria. The individualized diagnostic method based on an arrival time comparison using contrast-enhanced sonography had high diagnostic accuracy (97.1%) with good feasibility and could provide useful diagnostic information for subpleural pulmonary lesions.
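Finding a cutoff such as the 2.5-second criterion from a ROC curve amounts to maximizing Youden's J; a sketch with invented arrival-time differences follows.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical arrival-time differences (lesion minus lung, in seconds)
# and reference diagnoses (1 = malignant, 0 = benign)
delta_t = np.array([0.5, 1.2, 2.0, 2.8, 3.5, 4.1, 1.8, 3.0, 0.9, 4.6])
label   = np.array([0,   0,   0,   1,   1,   1,   0,   1,   0,   1  ])

fpr, tpr, thresholds = roc_curve(label, delta_t)
best = np.argmax(tpr - fpr)               # Youden's J statistic
print(f"optimal cutoff ~ {thresholds[best]:.1f} s (later arrival = malignant)")
```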
CANDELS/GOODS-S, CDFS, and ECDFS: photometric redshifts for normal and X-ray-detected galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, Li-Ting; Salvato, Mara; Nandra, Kirpal
2014-11-20
We present photometric redshifts and associated probability distributions for all detected sources in the Extended Chandra Deep Field South (ECDFS). This work makes use of the most up-to-date data from the Cosmic Assembly Near-IR Deep Legacy Survey (CANDELS) and the Taiwan ECDFS Near-Infrared Survey (TENIS) in addition to other data. We also revisit multi-wavelength counterparts for published X-ray sources from the 4 Ms CDFS and 250 ks ECDFS surveys, finding reliable counterparts for 1207 out of 1259 sources (∼96%). Data used for photometric redshifts include intermediate-band photometry deblended using the TFIT method, which is used for the first time in this work. Photometric redshifts for X-ray source counterparts are based on a new library of active galactic nuclei/galaxy hybrid templates appropriate for the faint X-ray population in the CDFS. Photometric redshift accuracy for normal galaxies is 0.010 and for X-ray sources is 0.014, and outlier fractions are 4% and 5.2%, respectively. The results within the CANDELS coverage area are even better, as demonstrated both by spectroscopic comparison and by galaxy-pair statistics. Intermediate-band photometry, even if shallow, is valuable when combined with deep broadband photometry. For best accuracy, templates must include emission lines.
In-line monitoring of pellet coating thickness growth by means of visual imaging.
Oman Kadunc, Nika; Sibanc, Rok; Dreu, Rok; Likar, Boštjan; Tomaževič, Dejan
2014-08-15
Coating thickness is the most important attribute of coated pharmaceutical pellets as it directly affects release profiles and stability of the drug. Quality control of the coating process of pharmaceutical pellets is thus of utmost importance for assuring the desired end product characteristics. A visual imaging technique is presented and examined as a process analytical technology (PAT) tool for noninvasive, continuous, in-line and real-time monitoring of coating thickness of pharmaceutical pellets during the coating process. Images of pellets were acquired during the coating process through an observation window of a Wurster coating apparatus. Image analysis methods were developed for fast and accurate determination of pellets' coating thickness during a coating process. The accuracy of the results for pellet coating thickness growth obtained in real time was evaluated through comparison with an off-line reference method, and good agreement was found. Information about the inter-pellet coating uniformity was gained from further statistical analysis of the measured pellet size distributions. Accuracy and performance analysis of the proposed method showed that visual imaging is feasible as a PAT tool for in-line and real-time monitoring of the coating process of pharmaceutical pellets. Copyright © 2014 Elsevier B.V. All rights reserved.
Kılıç, D; Göksu, E; Kılıç, T; Buyurgan, C S
2018-05-01
The aim of this randomized cross-over study was to compare one-minute and two-minute continuous chest compressions in terms of compression-only CPR quality metrics on a mannequin model in the ED. Thirty-six emergency medicine residents participated in this study. In the 1-minute group, there was no statistically significant difference in the mean compression rate (p=0.83), mean compression depth (p=0.61), good compressions (p=0.31), the percentage of complete release (p=0.07), adequate compression depth (p=0.11) or the percentage of good rate (p=0.51) over the four-minute time period. Only flow time was statistically significant among the 1-minute intervals (p<0.001). In the 2-minute group, the mean compression depth (p=0.19), good compressions (p=0.92), the percentage of complete release (p=0.28), adequate compression depth (p=0.96), and the percentage of good rate (p=0.09) did not differ statistically over time. In this group, the number of compressions (248±31 vs 253±33, p=0.01), mean compression rates (123±15 vs 126±17, p=0.01) and flow time (p=0.001) differed statistically across the two-minute intervals. Between the 1-minute and 2-minute groups, there was no statistically significant difference in the mean number of chest compressions per minute, mean chest compression depth, the percentage of good compressions, complete release, adequate chest compression depth or the percentage of good rate. Overall, there was no statistically significant difference in the quality metrics of chest compressions between the 1- and 2-minute compression-only groups. Copyright © 2017 Elsevier Inc. All rights reserved.
40 CFR 86.1338-84 - Emission measurement accuracy.
Code of Federal Regulations, 2010 CFR
2010-07-01
... engineering practice dictates that exhaust emission sample analyzer readings below 15 percent of full scale... computers, data loggers, etc., can provide sufficient accuracy and resolution below 15 percent of full scale... spaced points, using good engineering judgement, below 15 percent of full scale are made to ensure the...
40 CFR 86.1338-84 - Emission measurement accuracy.
Code of Federal Regulations, 2013 CFR
2013-07-01
... engineering practice dictates that exhaust emission sample analyzer readings below 15 percent of full scale... computers, data loggers, etc., can provide sufficient accuracy and resolution below 15 percent of full scale... spaced points, using good engineering judgement, below 15 percent of full scale are made to ensure the...
40 CFR 86.1338-84 - Emission measurement accuracy.
Code of Federal Regulations, 2012 CFR
2012-07-01
... engineering practice dictates that exhaust emission sample analyzer readings below 15 percent of full scale... computers, data loggers, etc., can provide sufficient accuracy and resolution below 15 percent of full scale... spaced points, using good engineering judgement, below 15 percent of full scale are made to ensure the...
40 CFR 86.1338-84 - Emission measurement accuracy.
Code of Federal Regulations, 2011 CFR
2011-07-01
... engineering practice dictates that exhaust emission sample analyzer readings below 15 percent of full scale... computers, data loggers, etc., can provide sufficient accuracy and resolution below 15 percent of full scale... spaced points, using good engineering judgement, below 15 percent of full scale are made to ensure the...
Filter Tuning Using the Chi-Squared Statistic
NASA Technical Reports Server (NTRS)
Lilly-Salkowski, Tyler
2017-01-01
The Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) performs orbit determination (OD) for the Aqua and Aura satellites. Both satellites are located in low Earth orbit (LEO) and are part of what is considered the A-Train satellite constellation. Both spacecraft are currently in the science phase of their respective missions. The FDF has recently been tasked with delivering definitive covariance for each satellite.

The main source of orbit determination used for these missions is the Orbit Determination Toolkit developed by Analytical Graphics Inc. (AGI). This software uses an Extended Kalman Filter (EKF) to estimate the states of both spacecraft. The filter incorporates force modelling, ground station and space network measurements to determine spacecraft states. It also generates a covariance at each measurement. This covariance can be useful for evaluating the overall performance of the tracking data measurements and the filter itself. An accurate covariance is also useful for covariance propagation, which is utilized in collision avoidance operations. It is also valuable when attempting to determine whether the current orbital solution will meet mission requirements in the future.

This paper examines the use of the Chi-square statistic as a means of evaluating filter performance. The Chi-square statistic is calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight into the accuracy of the covariance.

For the EKF to correctly calculate the covariance, error models associated with tracking data measurements must be accurately tuned. Overestimating or underestimating these error values can have detrimental effects on the overall filter performance. The filter incorporates ground station measurements, which can be tuned based on the accuracy of the individual ground stations. It also includes measurements from the NASA space network (SN), which can be affected by the assumed accuracy of the TDRS satellite state at the time of the measurement.

The force modelling in the EKF is also an important factor that affects the propagation accuracy and covariance sizing. The dominant force in the LEO orbit regime is atmospheric drag, so accurate accounting of the drag force is especially important for the accuracy of the propagated state. The implementation of a box-and-wing model to improve drag estimation accuracy, and its overall effect on the covariance state, is explored.

The process of tuning the EKF for Aqua and Aura support is described, including examination of the measurement errors of available observation types (Doppler and range) and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the Chi-square statistic, calculated based on the ODTK EKF solutions, are assessed versus accepted norms for the orbit regime.
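The covariance-realism test described here is commonly computed as epsilon = e^T P^-1 e for a prediction error e against the filter covariance P; if P is realistic, epsilon follows a chi-square distribution with degrees of freedom equal to the state dimension. A sketch with synthetic errors (our own illustration, not FDF code):

```python
import numpy as np
from scipy.stats import chi2

def chi_square_stat(err, cov):
    """epsilon = e^T P^-1 e for an error vector e and covariance P."""
    return err @ np.linalg.solve(cov, err)

rng = np.random.default_rng(1)
P = np.diag([0.04, 0.09, 0.01])      # hypothetical 3x3 position covariance (km^2)
errors = rng.multivariate_normal(np.zeros(3), P, size=1000)
eps = np.array([chi_square_stat(e, P) for e in errors])

# For a realistic covariance, eps should track a chi-square with 3 dof:
print(f"sample mean {eps.mean():.2f} vs chi2(3) mean {chi2(df=3).mean():.2f}")
print(f"95th percentile {np.percentile(eps, 95):.2f} vs {chi2(df=3).ppf(0.95):.2f}")
```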
Dėdelė, Audrius; Miškinytė, Auksė
2015-09-01
In many countries, road traffic is one of the main sources of air pollution associated with adverse effects on human health and the environment. Nitrogen dioxide (NO2) is considered a measure of traffic-related air pollution, with concentrations tending to be higher near highways, along busy roads, and in city centers; exceedances are mainly observed at measurement stations located close to traffic. Air quality models are used to assess the air quality in a city and the impact of air pollution on public health. However, before a model can be used for these purposes, it is important to evaluate the accuracy of dispersion modelling, one of the most widely used methods. Monitoring and dispersion modelling are two components of an air quality monitoring system (AQMS), and this research statistically compared the two. The Atmospheric Dispersion Modelling System (ADMS-Urban) was evaluated by comparing monthly modelled NO2 concentrations with data from continuous air quality monitoring stations in Kaunas city. Statistical measures of model performance were calculated for annual and monthly NO2 concentrations for each monitoring station site. Spatial analysis was performed using geographic information systems (GIS). The calculated statistical parameters indicated good ADMS-Urban model performance for the prediction of NO2. The results of this study showed that the agreement between modelled values and observations was better for traffic monitoring stations than for the background and residential stations.
Mapping irrigated lands at 250-m scale by merging MODIS data and National Agricultural Statistics
Pervez, Md Shahriar; Brown, Jesslyn F.
2010-01-01
Accurate geospatial information on the extent of irrigated land improves our understanding of agricultural water use, local land surface processes, conservation or depletion of water resources, and components of the hydrologic budget. We have developed a method in a geospatial modeling framework that assimilates irrigation statistics with remotely sensed parameters describing vegetation growth conditions in areas with agricultural land cover to spatially identify irrigated lands at 250-m cell size across the conterminous United States for 2002. The geospatial model result, known as the Moderate Resolution Imaging Spectroradiometer (MODIS) Irrigated Agriculture Dataset (MIrAD-US), identified irrigated lands with reasonable accuracy in California and semiarid Great Plains states with overall accuracies of 92% and 75% and kappa statistics of 0.75 and 0.51, respectively. A quantitative accuracy assessment of MIrAD-US for the eastern region has not yet been conducted, and qualitative assessment shows that model improvements are needed for the humid eastern regions where the distinction in annual peak NDVI between irrigated and non-irrigated crops is minimal and county sizes are relatively small. This modeling approach enables consistent mapping of irrigated lands based upon USDA irrigation statistics and should lead to better understanding of spatial trends in irrigated lands across the conterminous United States. An improved version of the model with revised datasets is planned and will employ 2007 USDA irrigation statistics.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-23
... in Tables A and B. Table D--Borrower Closing Costs and Seller Concessions Descriptive Statistics by... accuracy of the statistical data illustrating the correlation between higher seller concessions and an...
Kostopoulos, Spiros A; Asvestas, Pantelis A; Kalatzis, Ioannis K; Sakellaropoulos, George C; Sakkis, Theofilos H; Cavouras, Dionisis A; Glotsos, Dimitris T
2017-09-01
The aim of this study was to propose features that evaluate pictorial differences between melanocytic nevus (mole) and melanoma lesions by computer-based analysis of plain photography images, and to design a cross-platform, tunable, decision support system to discriminate moles from melanomas with high accuracy across different publicly available image databases. Digital plain photography images of verified mole and melanoma lesions were downloaded from (i) Edinburgh University Hospital, UK (Dermofit, 330 moles/70 melanomas, under signed agreement), (ii) five different centers (Multicenter, 63 moles/25 melanomas, publicly available), and (iii) Groningen University, Netherlands (Groningen, 100 moles/70 melanomas, publicly available). Images were processed to outline the lesion border and isolate the lesion from the surrounding background. Fourteen features were generated from each lesion evaluating texture (4), structure (5), shape (4) and color (1). Features were subjected to statistical analysis to determine differences in pictorial properties between moles and melanomas. The Probabilistic Neural Network (PNN) classifier, exhaustive-search feature selection, the leave-one-out (LOO) method, and external cross-validation (ECV) were used to design the PR-system for discriminating between moles and melanomas. Statistical analysis revealed that melanomas, compared to moles, were of lower intensity, had a less homogeneous surface, had more dark pixels with intensities spanning larger spectra of gray values, contained more objects of different sizes and gray levels, had more asymmetrical shapes and irregular outlines, had abrupt intensity transitions from lesion to background tissue, and had more distinct colors. Using ECV, the PR-system designed on the Dermofit images scored 94.1% overall accuracy, 82.9% sensitivity and 96.5% specificity on the Dermofit images; 92.0%, 88.0% and 93.7% on the Multicenter images; and 76.2%, 73.9% and 77.8% on the Groningen images, respectively. The PR-system designed on the Dermofit image database could thus be fine-tuned to classify, with good accuracy, plain photography mole/melanoma images from other databases employing different image capturing equipment and protocols. Copyright © 2017 Elsevier B.V. All rights reserved.
Estimated Accuracy of Three Common Trajectory Statistical Methods
NASA Technical Reports Server (NTRS)
Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.
2011-01-01
Three well-known trajectory statistical methods (TSMs), namely the concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods, were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real-world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken as a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs considered here showed similar, close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size depends on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for a decay time of 240 h and 0.5-0.95 for a decay time of 12 h. The best results of source reconstruction can be expected for trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy, they are computationally simple and fast. Using the TSMs in optimum conditions and taking into account the range of uncertainties, one can obtain a first hint of potential source areas.
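The accuracy measure used above is a single call once the true and reconstructed source fields are flattened to vectors; the fields below are synthetic.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
true_sources  = rng.random(200)                          # virtual source field (flattened grid)
reconstructed = true_sources + rng.normal(0, 0.3, 200)   # TSM output with reconstruction error

rho, p = spearmanr(true_sources, reconstructed)
print(f"Spearman rho = {rho:.2f} (p = {p:.1e})")
```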
LP-search and its use in analysis of the accuracy of control systems with acoustical models
NASA Technical Reports Server (NTRS)
Sergeyev, V. I.; Sobol, I. M.; Statnikov, R. B.; Statnikov, I. N.
1973-01-01
The LP-search is proposed as an analog of the Monte Carlo method for finding solutions in nonlinear statistical systems. It is concluded that, to attain the required accuracy in solving the control problem for a statistical system, the LP-search requires a considerably smaller number of tests than the Monte Carlo method, and that the LP-search allows multiple repetitions of tests under identical conditions together with observability of the output variables of the system.
Nateghi, Roshanak; Guikema, Seth D; Quiring, Steven M
2011-12-01
This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Being able to make accurate predictions of power outage durations is valuable because the information can be used by utility companies to plan their restoration efforts more efficiently. This information can also help inform customers and public agencies of the expected outage times, enabling better collective response planning, and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes with the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage duration times caused by Hurricane Ivan in 2004. The methods compared include both regression models (accelerated failure time (AFT) and Cox proportional hazard models (Cox PH)) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate additive regression splines). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy. © 2011 Society for Risk Analysis.
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.
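For orientation, the survival c-statistic counts concordant pairs among usable pairs (pairs whose earlier observed time is an event); the naive O(n^2) version below ignores the IPCW correction that is the paper's focus and uses invented data.

```python
def harrell_c(time, event, risk):
    """Naive concordance index: among pairs where the earlier observed
    time is an event, count pairs in which the higher risk score has
    the shorter survival time (ties in risk count half)."""
    conc = ties = usable = 0
    n = len(time)
    for i in range(n):
        for j in range(i + 1, n):
            if time[i] == time[j]:
                continue
            first, second = (i, j) if time[i] < time[j] else (j, i)
            if not event[first]:
                continue                  # earlier time censored: pair unusable
            usable += 1
            if risk[first] > risk[second]:
                conc += 1
            elif risk[first] == risk[second]:
                ties += 1
    return (conc + 0.5 * ties) / usable

time  = [5, 8, 3, 12, 7]                  # follow-up times
event = [1, 0, 1, 1, 1]                   # 1 = event observed, 0 = censored
risk  = [0.9, 0.4, 0.8, 0.1, 0.5]         # model risk scores
print(f"c = {harrell_c(time, event, risk):.2f}")
```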
Sterile Basics of Compounding: Relationship Between Syringe Size and Dosing Accuracy.
Kosinski, Tracy M; Brown, Michael C; Zavala, Pedro J
2018-01-01
The purpose of this study was to investigate the accuracy and reproducibility of a 2-mL volume injection using a 3-mL and 10-mL syringe with pharmacy student compounders. An exercise was designed to assess each student's accuracy in compounding a sterile preparation with the correct 4-mg strength using a 3-mL and 10-mL syringe. The average ondansetron dose when compounded with the 3-mL syringe was 4.03 mg (standard deviation ± 0.45 mg), which was not statistically significantly different than the intended 4-mg desired dose (P=0.497). The average ondansetron dose when compounded with the 10-mL syringe was 4.18 mg (standard deviation ± 0.68 mg), which was statistically significantly different than the intended 4-mg desired dose (P=0.002). Additionally, there also was a statistically significant difference in the average ondansetron dose compounded using a 3-mL syringe (4.03 mg) and a 10-mL syringe (4.18 mg) (P=0.027). The accuracy and reproducibility of the 2-mL desired dose volume decreased as the compounding syringe size increased from 3 mL to 10 mL. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
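The two accuracy comparisons reported above are one-sample t-tests against the intended 4-mg dose plus a two-sample test between syringe sizes; a sketch with invented doses follows.

```python
from scipy import stats

# Hypothetical measured ondansetron doses (mg) for each syringe size
dose_3ml  = [4.1, 3.9, 4.0, 4.2, 3.8, 4.1, 4.0, 3.9]
dose_10ml = [4.3, 4.5, 4.0, 4.4, 4.1, 4.2, 4.6, 3.9]

# Accuracy: does the mean dose differ from the intended 4 mg?
print(stats.ttest_1samp(dose_3ml, 4.0))
print(stats.ttest_1samp(dose_10ml, 4.0))
# Do the two syringe sizes differ from each other?
print(stats.ttest_ind(dose_3ml, dose_10ml))
```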
Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G
2017-07-01
Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement demonstrating their superiority to conventional disc electrodes, in particular, in accuracy of Laplacian estimation. Recently, we have proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing distances electrode configurations suggested they may decrease the truncation error resulting in more accurate Laplacian estimates compared to currently used constant inter-ring distances configurations. This study assesses statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes. Full factorial design of analysis of variance was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation computed using a finite element method model for each of the combinations of levels of three factors. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed and the obtained results suggest that all three factors have statistically significant effects in the model confirming the potential of using inter-ring distances as a means of improving accuracy of Laplacian estimation.
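A full factorial analysis of variance of this kind, with one categorical and two numerical factors, can be sketched with statsmodels; the data frame below is hypothetical stand-in data (the paper's finite element outputs are not reproduced), and the factor levels are illustrative.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical table of FEM results: one row per factor combination.
df = pd.DataFrame({
    "distances": ["constant", "increasing"] * 6,   # categorical factor
    "diameter":  [1, 1, 2, 2, 3, 3] * 2,           # numerical factor
    "n_rings":   [2, 3] * 6,                       # numerical factor
    "rel_error": [0.12, 0.08, 0.10, 0.07, 0.09, 0.06,
                  0.11, 0.07, 0.10, 0.06, 0.08, 0.05],
})

# Full factorial model: main effects plus all interactions.
model = ols("rel_error ~ C(distances) * diameter * n_rings", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```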
León-Reina, L; García-Maté, M; Álvarez-Pinazo, G; Santacruz, I; Vallcorba, O; De la Torre, A G; Aranda, M A G
2016-06-01
This study reports 78 Rietveld quantitative phase analyses using Cu Kα1, Mo Kα1 and synchrotron radiations. Synchrotron powder diffraction has been used to validate the most challenging analyses. From the results for three series with increasing contents of an analyte (an inorganic crystalline phase, an organic crystalline phase and a glass), it is inferred that Rietveld analyses from high-energy Mo Kα1 radiation have slightly better accuracies than those obtained from Cu Kα1 radiation. This behaviour has been established from the results of the calibration graphs obtained through the spiking method and also from Kullback-Leibler distance statistic studies. This outcome is explained, in spite of the lower diffraction power of Mo radiation compared to Cu radiation, as arising from the larger volume tested with Mo and also because the higher energy allows one to record patterns with fewer systematic errors. The limit of detection (LoD) and limit of quantification (LoQ) have also been established for the studied series. For similar recording times, the LoDs in Cu patterns, ∼0.2 wt%, are slightly lower than those derived from Mo patterns, ∼0.3 wt%. The LoQ for a well crystallized inorganic phase using laboratory powder diffraction was established to be close to 0.10 wt% in stable fits with good precision. However, the accuracy of these analyses was poor, with relative errors near 100%. Only contents higher than 1.0 wt% yielded analyses with relative errors lower than 20%.
NASA Astrophysics Data System (ADS)
DSouza, Adora M.; Abidin, Anas Z.; Leistritz, Lutz; Wismüller, Axel
2017-02-01
We investigate the applicability of large-scale Granger Causality (lsGC) for extracting a measure of multivariate information flow between pairs of regional brain activities from resting-state functional MRI (fMRI) and test the effectiveness of these measures for predicting a disease state. Such pairwise multivariate measures of interaction provide high-dimensional representations of connectivity profiles for each subject and are used in a machine learning task to distinguish between healthy controls and individuals presenting with symptoms of HIV Associated Neurocognitive Disorder (HAND). Cognitive impairment in several domains can occur as a result of HIV infection of the central nervous system. The current paradigm for assessing such impairment is through neuropsychological testing. With fMRI data analysis, we aim at non-invasively capturing differences in brain connectivity patterns between healthy subjects and subjects presenting with symptoms of HAND. To classify the extracted interaction patterns among brain regions, we use a prototype-based learning algorithm called Generalized Matrix Learning Vector Quantization (GMLVQ). Our approach to characterize connectivity using lsGC followed by GMLVQ for subsequent classification yields good prediction results with an accuracy of 87% and an area under the ROC curve (AUC) of up to 0.90. We obtain a statistically significant improvement (p<0.01) over a conventional Granger causality approach (accuracy = 0.76, AUC = 0.74). The high accuracy and AUC values of our multivariate approach to connectivity analysis suggest that it is better able to capture changes in interaction patterns between different brain regions than the conventional Granger causality analysis known from the literature.
Wu, S.-S.; Qiu, X.; Usery, E.L.; Wang, L.
2009-01-01
Detailed urban land use data are important to government officials, researchers, and businesspeople for a variety of purposes. This article presents an approach to classifying detailed urban land use based on geometrical, textural, and contextual information of land parcels. An area of 6 by 14 km in Austin, Texas, with land parcel boundaries delineated by the Travis Central Appraisal District of Travis County, Texas, is tested for the approach. We derive fifty parcel attributes from relevant geographic information system (GIS) and remote sensing data and use them to discriminate among nine urban land uses: single family, multifamily, commercial, office, industrial, civic, open space, transportation, and undeveloped. Half of the 33,025 parcels in the study area are used as training data for land use classification and the other half are used as testing data for accuracy assessment. The best result with a decision tree classification algorithm has an overall accuracy of 96 percent and a kappa coefficient of 0.78, and two naive, baseline models based on the majority rule and the spatial autocorrelation rule have overall accuracy of 89 percent and 79 percent, respectively. The algorithm is relatively good at classifying single-family, multifamily, commercial, open space, and undeveloped land uses and relatively poor at classifying office, industrial, civic, and transportation land uses. The most important attributes for land use classification are the geometrical attributes, particularly those related to building areas. Next are the contextual attributes, particularly those relevant to the spatial relationship between buildings, then the textural attributes, particularly the semivariance texture statistic from 0.61-m resolution images.
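The overall accuracy and kappa coefficient reported above come straight from the classification error matrix; a minimal sketch with a toy three-class matrix (the study used nine land-use classes):

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = classified land use, columns = reference land use)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1 - pe)

cm = [[50, 2, 3], [4, 40, 6], [1, 5, 39]]
acc, kappa = overall_accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.2f}, kappa = {kappa:.2f}")
```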
Schizophrenia classification using functional network features
NASA Astrophysics Data System (ADS)
Rish, Irina; Cecchi, Guillermo A.; Heuton, Kyle
2012-03-01
This paper focuses on discovering statistical biomarkers (features) that are predictive of schizophrenia, with a particular focus on topological properties of fMRI functional networks. We consider several network properties, such as node (voxel) strength, clustering coefficients, local efficiency, as well as just a subset of pairwise correlations. While all types of features demonstrate highly significant statistical differences in several brain areas, and close to 80% classification accuracy, the most remarkable results of 93% accuracy are achieved by using a small subset of only a dozen of the most informative (lowest p-value) correlation features. Our results suggest that voxel-level correlations and functional network features derived from them are highly informative about schizophrenia and can be used as statistical biomarkers for the disease.
[Clinical research=design*measurements*statistical analyses].
Furukawa, Toshiaki
2012-06-01
A clinical study must address true endpoints that matter for the patients and the doctors. A good clinical study starts with a good clinical question. Formulating a clinical question in the form of PECO can sharpen one's original question. In order to perform a good clinical study one must have a knowledge of study design, measurements and statistical analyses: The first is taught by epidemiology, the second by psychometrics and the third by biostatistics.
Evaluation of trade influence on economic growth rate by computational intelligence approach
NASA Astrophysics Data System (ADS)
Sokolov-Mladenović, Svetlana; Milovančević, Milos; Mladenović, Igor
2017-01-01
This study analyzed the influence of trade parameters on economic growth forecasting accuracy. A computational intelligence method was used for the analysis, since such methods can handle highly nonlinear data. It is known that economic growth can be modeled based on different trade parameters. In this study, five input parameters were considered: trade in services, exports of goods and services, imports of goods and services, trade, and merchandise trade. All these parameters were expressed as percentages of gross domestic product (GDP). The main goal was to determine which parameters have the greatest impact on economic growth forecasting. GDP was used as the economic growth indicator. Results show that imports of goods and services have the highest influence on economic growth forecasting accuracy.
Deconstructing Statistical Analysis
ERIC Educational Resources Information Center
Snell, Joel
2014-01-01
Using a very complex statistical analysis or research method for the sake of enhancing the prestige of an article, or of making a new product or service appear legitimate, needs to be monitored and questioned for accuracy. 1) The more complicated the statistical analysis and research, the fewer learned readers can understand it. This adds a…
An adaptive state of charge estimation approach for lithium-ion series-connected battery system
NASA Astrophysics Data System (ADS)
Peng, Simin; Zhu, Xuelai; Xing, Yinjiao; Shi, Hongbing; Cai, Xu; Pecht, Michael
2018-07-01
Due to the incorrect or unknown noise statistics of a battery system and its cell-to-cell variations, state of charge (SOC) estimation of a lithium-ion series-connected battery system is usually inaccurate or even divergent using model-based methods, such as the extended Kalman filter (EKF) and unscented Kalman filter (UKF). To resolve this problem, an adaptive unscented Kalman filter (AUKF) based on a noise statistics estimator and a model parameter regulator is developed to accurately estimate the SOC of a series-connected battery system. An equivalent circuit model is first built based on the model parameter regulator that illustrates the influence of cell-to-cell variation on the battery system. A noise statistics estimator is then used to adaptively obtain the estimated noise statistics for the AUKF when its prior noise statistics are not accurate or exactly Gaussian. The accuracy and effectiveness of the SOC estimation method are validated by comparing the developed AUKF with the UKF when the model and measurement noise statistics are inaccurate. Compared with the UKF and EKF, the developed method shows the highest SOC estimation accuracy.
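One common form of noise-statistics estimator, sketched below for the measurement-noise covariance, averages outer products of recent innovations and subtracts the filter's predicted measurement covariance. This is a generic innovation-based sketch under stated assumptions, not necessarily the exact estimator developed in the paper.

```python
import numpy as np

def adaptive_measurement_noise(innovations, S_pred):
    """Windowed innovation-based estimate of the measurement-noise
    covariance: R_hat = mean(v v^T) - S_pred, where v are the recent
    innovations and S_pred is the filter's predicted measurement
    covariance (H P H^T for a linear filter, the unscented-transform
    covariance for a UKF). In practice R_hat is often projected back
    to a positive (semi)definite matrix before reuse."""
    V = np.atleast_2d(np.asarray(innovations, float))  # shape (N, m)
    C_v = V.T @ V / V.shape[0]                         # sample innovation covariance
    return C_v - S_pred

# Toy usage: scalar measurements, window of 5 innovations.
print(adaptive_measurement_noise([[0.2], [-0.1], [0.3], [0.0], [-0.2]],
                                 S_pred=np.array([[0.01]])))
```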
Upgrade Summer Severe Weather Tool
NASA Technical Reports Server (NTRS)
Watson, Leela
2011-01-01
The goal of this task was to upgrade the existing severe weather database by adding observations from the 2010 warm season, update the verification dataset with results from the 2010 warm season, use statistical logistic regression analysis on the database, and develop a new forecast tool. The AMU analyzed 7 stability parameters that showed the possibility of providing guidance in forecasting severe weather, calculated verification statistics for the Total Threat Score (TTS), and calculated warm season verification statistics for the 2010 season. The AMU also performed statistical logistic regression analysis on the 22-year severe weather database. The results indicated that the logistic regression equation did not show an increase in skill over the previously developed TTS. The equation showed less accuracy than TTS at predicting severe weather, little ability to distinguish between severe and non-severe weather days, and worse standard categorical accuracy measures and skill scores than TTS.
Bryant, Fred B
2016-12-01
This paper introduces a special section of the current issue of the Journal of Evaluation in Clinical Practice that includes a set of 6 empirical articles showcasing a versatile, new machine-learning statistical method, known as optimal data (or discriminant) analysis (ODA), specifically designed to produce statistical models that maximize predictive accuracy. As this set of papers clearly illustrates, ODA offers numerous important advantages over traditional statistical methods-advantages that enhance the validity and reproducibility of statistical conclusions in empirical research. This issue of the journal also includes a review of a recently published book that provides a comprehensive introduction to the logic, theory, and application of ODA in empirical research. It is argued that researchers have much to gain by using ODA to analyze their data. © 2016 John Wiley & Sons, Ltd.
Peng, Fei; Li, Jiao-ting; Long, Min
2015-03-01
To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the viewpoint of statistics and texture, and a 31-dimensional feature vector is acquired for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that the scheme can achieve an identification accuracy of 97.89% for computer-generated graphics, and an identification accuracy of 97.75% for natural images. The analyses also demonstrate that the proposed method has excellent performance compared with some existing methods based only on statistical features or other features. The method has great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.
Steinborn, M; Fiegler, J; Kraus, V; Denne, C; Hapfelmeier, A; Wurzinger, L; Hahn, H
2011-12-01
We performed a cadaver study to evaluate the accuracy of measurements of the optic nerve and the optic nerve sheath for high resolution US (HRUS) and magnetic resonance imaging (MRI). Five Thiel-fixated cadaver specimens of the optic nerve were examined with HRUS and MRI. Measurements of the optic nerve and the optic nerve sheath diameter (ONSD) were performed before and after the filling of the optic nerve sheath with saline solution. Statistical analysis included the calculation of the agreement of measurements and the evaluation of the intraobserver and interobserver variation. Overall, a good correlation of measurement values between HRUS and MRI was found (mean difference: 0.02-0.97 mm). The repeatability coefficient (RC) and concordance correlation coefficient (CCC) values were good to excellent for most acquisitions (RC 0.2-1.11 mm; CCC 0.684-0.949). The highest variation of measurement values was found for transbulbar sonography (RC 0.58-1.83 mm; CCC 0.615/0.608). If decisive anatomic structures are clearly depicted and the measuring points are set correctly, there is a good correlation between HRUS and MRI measurements of the optic nerve and the ONSD even on transbulbar sonography. As most of the standard and cut-off values that have been published for ultrasound are significantly lower than the results obtained with MRI, a reevaluation of sonographic ONSD measurement with correlation to MRI is necessary. © Georg Thieme Verlag KG Stuttgart · New York.
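The agreement statistics used above can be computed directly; a minimal sketch with hypothetical paired ONSD values (mm). Note that the repeatability coefficient is taken here as 1.96 times the SD of the paired differences, one common convention, which may differ from the study's exact definition.

```python
import numpy as np

def repeatability_coefficient(x1, x2):
    """RC for paired measurements: 1.96 * SD of the differences."""
    d = np.asarray(x1, float) - np.asarray(x2, float)
    return 1.96 * d.std(ddof=1)

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two methods."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

# Hypothetical paired ONSD measurements (mm) from HRUS and MRI.
hrus = [5.1, 5.4, 4.9, 5.8, 5.2]
mri  = [5.3, 5.5, 5.0, 6.0, 5.1]
print(repeatability_coefficient(hrus, mri), lin_ccc(hrus, mri))
```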
Automatic and objective assessment of alternating tapping performance in Parkinson's disease.
Memedi, Mevludin; Khan, Taha; Grenholm, Peter; Nyholm, Dag; Westin, Jerker
2013-12-09
This paper presents the development and evaluation of a method for enabling quantitative and automatic scoring of alternating tapping performance of patients with Parkinson's disease (PD). Ten healthy elderly subjects and 95 patients in different clinical stages of PD have utilized a touch-pad handheld computer to perform alternate tapping tests in their home environments. First, a neurologist used a web-based system to visually assess impairments in four tapping dimensions ('speed', 'accuracy', 'fatigue' and 'arrhythmia') and a global tapping severity (GTS). Second, tapping signals were processed with time series analysis and statistical methods to derive 24 quantitative parameters. Third, principal component analysis was used to reduce the dimensions of these parameters and to obtain scores for the four dimensions. Finally, a logistic regression classifier was trained using a 10-fold stratified cross-validation to map the reduced parameters to the corresponding visually assessed GTS scores. Results showed that the computed scores correlated well to visually assessed scores and were significantly different across Unified Parkinson's Disease Rating Scale scores of upper limb motor performance. In addition, they had good internal consistency, had good ability to discriminate between healthy elderly and patients in different disease stages, had good sensitivity to treatment interventions and could reflect the natural disease progression over time. In conclusion, the automatic method can be useful to objectively assess the tapping performance of PD patients and can be included in telemedicine tools for remote monitoring of tapping.
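The processing chain described above (dimension reduction of the 24 tapping parameters, then a logistic regression classifier scored with 10-fold stratified cross-validation) can be sketched with scikit-learn. The data below are random placeholders and a binary GTS label is assumed for simplicity; the study's actual scores are multi-level.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Placeholder stand-in data: 24 tapping parameters per test session.
rng = np.random.default_rng(0)
X = rng.normal(size=(105, 24))
y = np.arange(105) % 2          # hypothetical binary GTS labels

# PCA-reduced parameters mapped to GTS via logistic regression,
# scored by 10-fold stratified cross-validation.
pipe = make_pipeline(StandardScaler(), PCA(n_components=4),
                     LogisticRegression())
scores = cross_val_score(pipe, X, y, cv=StratifiedKFold(n_splits=10))
print(scores.mean())
```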
Inter-arch digital model vs. manual cast measurements: Accuracy and reliability.
Kiviahde, Heikki; Bukovac, Lea; Jussila, Päivi; Pesonen, Paula; Sipilä, Kirsi; Raustia, Aune; Pirttiniemi, Pertti
2017-06-28
The purpose of this study was to evaluate the accuracy and reliability of inter-arch measurements using digital dental models and conventional dental casts. Thirty sets of dental casts with permanent dentition were examined. Manual measurements were done with a digital caliper directly on the dental casts, and digital measurements were made on 3D models by two independent examiners. Intra-class correlation coefficients (ICC), a paired sample t-test or Wilcoxon signed-rank test, and Bland-Altman plots were used to evaluate intra- and inter-examiner error and to determine the accuracy and reliability of the measurements. The ICC values were generally good for manual and excellent for digital measurements. The Bland-Altman plots of all the measurements showed good agreement between the manual and digital methods and excellent inter-examiner agreement using the digital method. Inter-arch occlusal measurements on digital models are accurate and reliable and are superior to manual measurements.
Estimating Classification Consistency and Accuracy for Cognitive Diagnostic Assessment
ERIC Educational Resources Information Center
Cui, Ying; Gierl, Mark J.; Chang, Hua-Hua
2012-01-01
This article introduces procedures for the computation and asymptotic statistical inference for classification consistency and accuracy indices specifically designed for cognitive diagnostic assessments. The new classification indices can be used as important indicators of the reliability and validity of classification results produced by…
NASA Astrophysics Data System (ADS)
Laura, J. R.; Miller, D.; Paul, M. V.
2012-03-01
An accuracy assessment of AMES Stereo Pipeline-derived DEMs for lunar site selection using weighted spatial dependence simulation is presented, together with a call for outside AMES-derived DEMs to facilitate a statistical precision analysis.
Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations
NASA Astrophysics Data System (ADS)
Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong
2017-08-01
Orientation statistics are prone to bias when surveyed with the scanline mapping technique, in which the observed probabilities differ depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistical data that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but the estimation accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation in conjunction with a quantitative evaluation that uses fracture simulations and verification with natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the particular grid size producing maximum accuracy is identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that the 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5 per square degree, then maximum accuracy occurs at a grid size of 1° × 1°.
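For reference, the Terzaghi estimator weights each scanline-sampled fracture by the reciprocal of the sine of the acute angle between the fracture plane and the scanline, with a cap inside the blind zone. A minimal sketch follows; the 20° blind-zone cap is a common convention, not a value taken from the paper.

```python
import numpy as np

def terzaghi_weights(angles_deg, blind_zone_deg=20.0):
    """Terzaghi weighting for scanline-sampled fractures.

    angles_deg: acute angle between each fracture plane and the scanline.
    The probability of intersecting a fracture scales with sin(angle),
    so each observation is weighted by 1 / sin(angle); angles inside the
    blind zone are capped to avoid unbounded weights."""
    a = np.radians(np.maximum(np.asarray(angles_deg, float), blind_zone_deg))
    return 1.0 / np.sin(a)

# Fractures nearly parallel to the scanline (small angles) get the
# largest (capped) corrective weights.
print(terzaghi_weights([5.0, 30.0, 60.0, 90.0]))
```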
Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle
NASA Astrophysics Data System (ADS)
Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon
2018-03-01
Shack-Hartmann wavefront sensors using wound fibre image bundles are desired for multi-object adaptive optical systems, positioned by Starbugs, to provide a large multiplex. The use of a large-sized wound fibre image bundle provides the flexibility to use more sub-apertures per wavefront sensor for ELTs. These compact wavefront sensors take advantage of large focal surfaces such as that of the Giant Magellan Telescope. The focus of this paper is to study the effect of wound fibre image bundle structure defects on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with a wound fibre image bundle and the impact of its structure on wavefront measurement accuracy statistics are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with low signal-to-noise ratio, accuracy is influenced by the read noise of the detector instead of the wound fibre image bundle structure defects. We demonstrate this both in simulation and experimentally. We provide a statistical model of the centroid and wavefront error of a wound fibre image bundle, found through experiment.
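The first moment centroid used in the study is the intensity-weighted centre of mass of the sampled spot; a minimal sketch:

```python
import numpy as np

def first_moment_centroid(img):
    """First-moment (centre-of-mass) centroid of a spot image:
    x_c = sum(x * I) / sum(I), and likewise for y."""
    img = np.asarray(img, float)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Toy 3x3 spot centred on the middle pixel -> (1.0, 1.0).
print(first_moment_centroid([[0, 1, 0], [1, 4, 1], [0, 1, 0]]))
```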
Nedelcu, R; Olsson, P; Nyström, I; Rydén, J; Thor, A
2018-02-01
To evaluate a novel methodology using industrial scanners as a reference, and assess in vivo accuracy of 3 intraoral scanners (IOS) and conventional impressions. Further, to evaluate IOS precision in vivo. Four reference-bodies were bonded to the buccal surfaces of upper premolars and incisors in five subjects. After three reference scans with ATOS Core 80 (ATOS), subjects were scanned three times with three IOS systems: 3M True Definition (3M), CEREC Omnicam (OMNI) and Trios 3 (TRIOS). One conventional impression (IMPR) was taken with 3M Impregum Penta Soft, and poured models were digitized with the laboratory scanner 3shape D1000 (D1000). Best-fit alignment of reference-bodies and 3D Compare Analysis was performed. Precision of ATOS and D1000 was assessed for quantitative evaluation and comparison. Accuracy of IOS and IMPR was analyzed using ATOS as reference. Precision of IOS was evaluated through intra-system comparison. Precision of the ATOS reference scanner (mean 0.6 μm) and D1000 (mean 0.5 μm) was high. Pairwise multiple comparisons of reference-bodies located in different tooth positions displayed a statistically significant difference of accuracy between two scanner groups: 3M and TRIOS, over OMNI (p value range 0.0001 to 0.0006). IMPR did not show any statistically significant difference to IOS. However, deviations of IOS and IMPR were within a similar magnitude. No statistical difference was found for IOS precision. The methodology can be used for assessing accuracy of IOS and IMPR in vivo in up to five units bilaterally from midline. 3M and TRIOS had a higher accuracy than OMNI. IMPR overlapped both groups. Intraoral scanners can be used as a replacement for conventional impressions when restoring up to ten units without extended edentulous spans. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Armoundas, A. A.; Rosenbaum, D. S.; Ruskin, J. N.; Garan, H.; Cohen, R. J.
1998-01-01
OBJECTIVE: To investigate the accuracy of signal averaged electrocardiography (SAECG) and measurement of microvolt level T wave alternans as predictors of susceptibility to ventricular arrhythmias. DESIGN: Analysis of new data from a previously published prospective investigation. SETTING: Electrophysiology laboratory of a major referral hospital. PATIENTS AND INTERVENTIONS: 43 patients, not on class I or class III antiarrhythmic drug treatment, undergoing invasive electrophysiological testing had SAECG and T wave alternans measurements. The SAECG was considered positive in the presence of one (SAECG-I) or two (SAECG-II) of three standard criteria. T wave alternans was considered positive if the alternans ratio exceeded 3.0. MAIN OUTCOME MEASURES: Inducibility of sustained ventricular tachycardia or fibrillation during electrophysiological testing, and 20 month arrhythmia-free survival. RESULTS: The accuracy of T wave alternans in predicting the outcome of electrophysiological testing was 84% (p < 0.0001). Neither SAECG-I (accuracy 60%; p < 0.29) nor SAECG-II (accuracy 71%; p < 0.10) was a statistically significant predictor of electrophysiological testing. SAECG, T wave alternans, electrophysiological testing, and follow up data were available in 36 patients while not on class I or III antiarrhythmic agents. The accuracy of T wave alternans in predicting the outcome of arrhythmia-free survival was 86% (p < 0.030). Neither SAECG-I (accuracy 65%; p < 0.21) nor SAECG-II (accuracy 71%; p < 0.48) was a statistically significant predictor of arrhythmia-free survival. CONCLUSIONS: T wave alternans was a highly significant predictor of the outcome of electrophysiological testing and arrhythmia-free survival, while SAECG was not a statistically significant predictor. Although these results need to be confirmed in prospective clinical studies, they suggest that T wave alternans may serve as a non-invasive probe for screening high risk populations for malignant ventricular arrhythmias.
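In the classical spectral method, the T wave alternans ratio compares the power at exactly 0.5 cycles/beat with a nearby noise band, so the 3.0 threshold above corresponds to an alternans peak three noise standard deviations above the noise mean. Below is a minimal sketch of that spectral ratio; the study's exact processing is not specified in the abstract, and the noise band limits are a common convention, not taken from the paper.

```python
import numpy as np

def alternans_ratio(beat_amplitudes):
    """Spectral alternans ratio: (power at 0.5 cycles/beat - noise mean)
    / noise SD, with the noise band conventionally 0.43-0.48 cycles/beat.
    Assumes an even number of beats so the last FFT bin is exactly 0.5."""
    x = np.asarray(beat_amplitudes, float)
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0)          # cycles per beat
    noise = spec[(freqs >= 0.43) & (freqs < 0.48)]
    return (spec[-1] - noise.mean()) / noise.std()

# 128 beats with a small alternating component plus noise.
rng = np.random.default_rng(0)
beats = 0.1 * (-1.0) ** np.arange(128) + rng.normal(0, 0.05, 128)
print(alternans_ratio(beats))   # well above the 3.0 threshold
```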
Slip, David J.; Hocking, David P.; Harcourt, Robert G.
2016-01-01
Constructing activity budgets for marine animals when they are at sea and cannot be directly observed is challenging, but recent advances in bio-logging technology offer solutions to this problem. Accelerometers can potentially identify a wide range of behaviours for animals based on unique patterns of acceleration. However, when analysing data derived from accelerometers, there are many statistical techniques available which, when applied to different data sets, produce different classification accuracies. We investigated a selection of supervised machine learning methods for interpreting behavioural data from captive otariids (fur seals and sea lions). We conducted controlled experiments with 12 seals, whose behaviours were filmed while they were wearing 3-axis accelerometers. From video we identified 26 behaviours that could be grouped into one of four categories (foraging, resting, travelling and grooming) representing key behaviour states for wild seals. We used data from 10 seals to train four predictive classification models: stochastic gradient boosting (GBM), random forests, support vector machine (SVM) using four different kernels, and a baseline model, penalised logistic regression. We then took the best parameters from each model and cross-validated the results on the two seals not seen so far. We also investigated the influence of feature statistics (describing some characteristic of the seal), testing the models both with and without these. Cross-validation accuracies were lower than training accuracy, but the SVM with a polynomial kernel was still able to classify seal behaviour with high accuracy (>70%). Adding feature statistics improved accuracies across all models tested. Most categories of behaviour (resting, grooming and feeding) were predicted with reasonable accuracy (52–81%) by the SVM, while travelling was poorly categorised (31–41%). These results show that model selection is important when classifying behaviour and that by using animal characteristics we can strengthen the overall accuracy. PMID:28002450
A Severe Sepsis Mortality Prediction Model and Score for Use with Administrative Data
Ford, Dee W.; Goodwin, Andrew J.; Simpson, Annie N.; Johnson, Emily; Nadig, Nandita; Simpson, Kit N.
2016-01-01
Objective: Administrative data is used for research, quality improvement, and health policy in severe sepsis. However, there is not a sepsis-specific tool applicable to administrative data with which to adjust for illness severity. Our objective was to develop, internally validate, and externally validate a severe sepsis mortality prediction model and associated mortality prediction score. Design: Retrospective cohort study using 2012 administrative data from five US states. Three cohorts of patients with severe sepsis were created: 1) ICD-9-CM codes for severe sepsis/septic shock, 2) ‘Martin’ approach, and 3) ‘Angus’ approach. The model was developed and internally validated in the ICD-9-CM cohort and externally validated in the other cohorts. Integer point values for each predictor variable were generated to create a sepsis severity score. Setting: Acute care, non-federal hospitals in NY, MD, FL, MI, and WA. Subjects: Patients in one of three severe sepsis cohorts: 1) explicitly coded (n=108,448), 2) Martin cohort (n=139,094), and 3) Angus cohort (n=523,637). Interventions: None. Measurements and Main Results: Maximum likelihood estimation logistic regression to develop a predictive model for in-hospital mortality. Model calibration and discrimination were assessed via Hosmer-Lemeshow goodness-of-fit (GOF) and C-statistics, respectively. The primary cohort was subset into risk deciles and observed versus predicted mortality plotted. GOF demonstrated p>0.05 for each cohort, demonstrating sound calibration. The C-statistic ranged from a low of 0.709 (sepsis severity score) to a high of 0.838 (Angus cohort), suggesting good to excellent model discrimination. Comparison of observed versus expected mortality was robust, although accuracy decreased in the highest risk decile. Conclusions: Our sepsis severity model and score is a tool that provides reliable risk adjustment for administrative data. PMID:26496452
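Calibration and discrimination as assessed above can be sketched as follows: a Hosmer-Lemeshow chi-square over risk deciles and the C-statistic computed as the area under the ROC curve. The data below are simulated placeholders, not the study's cohorts.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow goodness-of-fit p-value over risk deciles;
    p > 0.05 is conventionally read as adequate calibration."""
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    chi2 = 0.0
    for grp_y, grp_p in zip(np.array_split(y, groups),
                            np.array_split(p, groups)):
        obs, exp, n = grp_y.sum(), grp_p.sum(), len(grp_y)
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return stats.chi2.sf(chi2, groups - 2)

# Simulated predicted mortality risks and outcomes.
rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.6, 1000)
y = rng.binomial(1, p)
print("HL p-value:", hosmer_lemeshow(y, p))       # calibration
print("C-statistic:", roc_auc_score(y, p))        # discrimination
```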
Han, Mi-Soon; Park, Yongjung; Kim, Hyon-Suk
2017-07-26
Hepatitis C virus (HCV) genotype is a predictive marker for treatment response. We sequentially evaluated the performances of two nucleic acid amplification tests (NAATs) and one serology assay for HCV genotype: Abbott RealTime genotype II (RealTime II), GeneMatrix restriction fragment mass polymorphism (RFMP), and Sysmex HISCL HCV Gr (HISCL Gr). We examined 281 clinical samples with the three assays. Accuracy was assessed using the HCV Genotype Performance Panel PHW204 (SeraCare Life Sciences) for the two NAATs. Discrepant cases were re-genotyped by the Versant HCV v.2.0 (line probe 2.0) assay. With the RealTime II assay, clinical samples were analyzed as follows: genotypes 1b (43.1%), 2 (40.2%), 1 subtypes other than 1a and 1b (12.5%), 3 (1.8%), 4 (1.4%), 1a (0.7%), 6 (0.4%), and mixed (1.1%). The RealTime II and RFMP assays showed a type concordance rate of 97.5% (274/281) (κ=0.80) and no significant discordance (p=0.25). Both assays accurately genotyped all samples in the Performance Panel to the subtype level. The HISCL Gr assay showed concordance rates of about 91% (κ<0.40) and statistically significant discordances with the two NAATs (p<0.05). In confirmation tests, the results of the RFMP assay were the most consistent with those of the Versant 2.0 assay. The three HCV assays provided genotyping and serotyping results with good concordance rates. The two NAATs (RealTime II and RFMP) showed comparable performance and good agreement. However, the results of the HISCL Gr assay showed statistically significant differences from those of the NAATs.
Shahabi, Himan; Hashim, Mazlan
2015-04-22
This research presents the results of the GIS-based statistical models for generation of landslide susceptibility mapping using geographic information system (GIS) and remote-sensing data for Cameron Highlands area in Malaysia. Ten factors including slope, aspect, soil, lithology, NDVI, land cover, distance to drainage, precipitation, distance to fault, and distance to road were extracted from SAR data, SPOT 5 and WorldView-1 images. The relationships between the detected landslide locations and these ten related factors were identified by using GIS-based statistical models including analytical hierarchy process (AHP), weighted linear combination (WLC) and spatial multi-criteria evaluation (SMCE) models. The landslide inventory map which has a total of 92 landslide locations was created based on numerous resources such as digital aerial photographs, AIRSAR data, WorldView-1 images, and field surveys. Then, 80% of the landslide inventory was used for training the statistical models and the remaining 20% was used for validation purpose. The validation results using the Relative landslide density index (R-index) and Receiver operating characteristic (ROC) demonstrated that the SMCE model (accuracy is 96%) is better in prediction than AHP (accuracy is 91%) and WLC (accuracy is 89%) models. These landslide susceptibility maps would be useful for hazard mitigation purpose and regional planning.
Weighted statistical parameters for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rimoldini, Lorenzo
2014-01-01
Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
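A simplified version of such a weighting assigns each measurement the fraction of the time span it represents under linear interpolation (half the gap to each neighbour). The trapezoidal scheme below is an illustrative stand-in only; the paper's scheme additionally adapts to the noise level.

```python
import numpy as np

def trapezoid_weights(t):
    """Interpolation-based weights for an irregularly sampled series:
    each point is weighted by half the gap to each of its neighbours.
    Assumes t is sorted in increasing order."""
    t = np.asarray(t, float)
    gaps = np.diff(t)
    w = np.empty_like(t)
    w[0], w[-1] = gaps[0] / 2, gaps[-1] / 2
    w[1:-1] = (gaps[:-1] + gaps[1:]) / 2
    return w / w.sum()

# Clumped sampling: points bordering the long gap carry the most weight,
# so the clump at the start does not dominate the estimators.
t = np.array([0.0, 0.1, 0.2, 0.3, 10.0])
x = np.array([1.0, 1.1, 0.9, 1.0, 3.0])
w = trapezoid_weights(t)
print(w, np.sum(w * x))   # weights and weighted mean
```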
Su, Zhong; Zhang, Lisha; Ramakrishnan, V; Hagan, Michael; Anscher, Mitchell
2011-05-01
To evaluate both the Calypso System's (Calypso Medical Technologies, Inc., Seattle, WA) localization accuracy in the presence of the wireless metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters of a dose verification system (DVS, Sicel Technologies, Inc., Morrisville, NC), and the dosimeters' reading accuracy in the presence of wireless electromagnetic transponders, inside a phantom. A custom-made, solid-water phantom was fabricated with space for transponders and dosimeters. Two inserts were machined with positioning grooves precisely matching the dimensions of the transponders and dosimeters, arranged in orthogonal and parallel orientations, respectively. To test the transponder localization accuracy with and without the presence of dosimeters (hypothesis 1), multivariate analyses were performed on transponder-derived localization data with and without dosimeters at each preset distance to detect statistically significant localization differences between the control and test sets. To test dosimeter dose-reading accuracy with and without the presence of transponders (hypothesis 2), an approach of alternating the transponder presence in seven identical fraction dose (100 cGy) deliveries and measurements was implemented. Two-way analysis of variance was performed to examine statistically significant dose-reading differences between the two groups and the different fractions. A relative-dose analysis method was also used to evaluate the transponder impact on dose-reading accuracy after the dose-fading effect was removed by a second-order polynomial fit. Multivariate analysis indicated that hypothesis 1 was false; there was a statistically significant difference between the localization data from the control and test sets. However, the upper and lower bounds of the 95% confidence intervals of the localized positional differences between the control and test sets were less than 0.1 mm, which was significantly smaller than the minimum clinical localization resolution of 0.5 mm. For hypothesis 2, analysis of variance indicated that there was no statistically significant difference between the dosimeter readings with and without the presence of transponders. Both orthogonal and parallel configurations had differences of polynomial-fit dose to measured dose values within 1.75%. The phantom study indicated that the Calypso System's localization accuracy was not affected clinically by the presence of DVS wireless MOSFET dosimeters, and the dosimeter-measured doses were not affected by the presence of transponders. Thus, the same patients could be implanted with both transponders and dosimeters to benefit from the improved accuracy of radiotherapy treatments offered by conjunctional use of the two systems.
Analysis of model development strategies: predicting ventral hernia recurrence.
Holihan, Julie L; Li, Linda T; Askenasy, Erik P; Greenberg, Jacob A; Keith, Jerrod N; Martindale, Robert G; Roth, J Scott; Liang, Mike K
2016-11-01
There have been many attempts to identify variables associated with ventral hernia recurrence; however, it is unclear which statistical modeling approach results in models with greatest internal and external validity. We aim to assess the predictive accuracy of models developed using five common variable selection strategies to determine variables associated with hernia recurrence. Two multicenter ventral hernia databases were used. Database 1 was randomly split into "development" and "internal validation" cohorts. Database 2 was designated "external validation". The dependent variable for model development was hernia recurrence. Five variable selection strategies were used: (1) "clinical"-variables considered clinically relevant, (2) "selective stepwise"-all variables with a P value <0.20 were assessed in a step-backward model, (3) "liberal stepwise"-all variables were included and step-backward regression was performed, (4) "restrictive internal resampling," and (5) "liberal internal resampling." Variables were included with P < 0.05 for the Restrictive model and P < 0.10 for the Liberal model. A time-to-event analysis using Cox regression was performed using these strategies. The predictive accuracy of the developed models was tested on the internal and external validation cohorts using Harrell's C-statistic where C > 0.70 was considered "reasonable". The recurrence rate was 32.9% (n = 173/526; median/range follow-up, 20/1-58 mo) for the development cohort, 36.0% (n = 95/264, median/range follow-up 20/1-61 mo) for the internal validation cohort, and 12.7% (n = 155/1224, median/range follow-up 9/1-50 mo) for the external validation cohort. Internal validation demonstrated reasonable predictive accuracy (C-statistics = 0.772, 0.760, 0.767, 0.757, 0.763), while on external validation, predictive accuracy dipped precipitously (C-statistic = 0.561, 0.557, 0.562, 0.553, 0.560). Predictive accuracy was equally adequate on internal validation among models; however, on external validation, all five models failed to demonstrate utility. Future studies should report multiple variable selection techniques and demonstrate predictive accuracy on external data sets for model validation. Copyright © 2016 Elsevier Inc. All rights reserved.
SA36. Atypical Memory Structure Related to Recollective Ability
Greenland-White, Sarah; Niendam, Tara
2017-01-01
Abstract Background: People with schizophrenia have impaired recognition memory and disproportionate recollection rather than familiarity deficits. This pattern also occurs in individuals with early psychosis (EP) and those at clinical high risk (CHR; Ragland et al., 2016). Additionally, these groups show atypical relationships between different memory processes, with patients demonstrating a stronger reliance on familiarity to support recognition accuracy. However, it is unclear whether these group differences represent a compensatory “trade-off” in memory strategies, whereby patients adopt an overreliance on familiarity to compensate for impaired recollection. We examined data from the Relational and Item-Specific memory task (RiSE) in healthy control (HC), EP and CHR participants, and contrasted subgroups with and without prominent recollection impairments. Interrelations between these memory processes (accuracy, recollection, and familiarity) were examined with Structural Equation Modeling (SEM). Methods: A total of 181 individuals (57 HC, 101 EP, and 21 CHR) completed the RiSE. Measures of recognition accuracy, familiarity, and recollection were computed. We divided the patient group into those with poor recollection (overall d’ recognition accuracy < 1.5, n = 52) and those with good recollection (overall d’ recollection accuracy ≥ 1.5, n = 70). SEM was used to investigate the pattern of memory relationships between HC and patient groups as well as between patients with good versus bad recollection. Results: Recollection and familiarity were negatively correlated in the HC group (r = −.467, P < .01) and in the patient group, though more weakly (r = −.288, P < .05). Improved recollection was correlated with overall improvement in recognition accuracy for both the groups (HC r = .771, P < .01; r = .753, P < .01). Improved familiarity was associated with higher recognition accuracy in the patient group only (.361, P < .01). Moreover, patients with poor recollection showed a stronger association (Fisher’s Z = 2.58, P < .01) between familiarity performance and recognition accuracy (.718, P < .01) than patients with good recollection performance (.396, P < .01). Conclusion: Results suggest that patients may be overrelying on more intact familiarity processes to support recognition accuracy. This potential compensatory strategy is particularly marked in those patients with the worst recollection abilities. The finding that recognition accuracy remains impaired in both patient subgroups, however, reveals that this compensatory familiarity-based strategy is not fully successful. Further work is needed to understand how patients can be remediated for their consistently impaired recollection processes.
NASA Technical Reports Server (NTRS)
Gramenopoulos, N. (Principal Investigator)
1973-01-01
The author has identified the following significant results. For the recognition of terrain types, spatial signatures are developed from the diffraction patterns of small areas of ERTS-1 images. This knowledge is exploited for the measurement of a small number of meaningful spatial features from the digital Fourier transforms of ERTS-1 image cells containing 32 x 32 picture elements. Using these spatial features and a heuristic algorithm, the terrain types in the vicinity of Phoenix, Arizona, were recognized by the computer with high accuracy. Then the spatial features were combined with spectral features, and using the maximum likelihood criterion the recognition accuracy of terrain types increased substantially. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors. Nonlinear transformations of the feature vectors are required so that the terrain class statistics become approximately Gaussian. It was also determined that for a given geographic area the statistics of the classes remain stable for a period of a month but vary substantially between seasons.
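Maximum likelihood classification with per-class Gaussian statistics, as used above once the feature vectors have been transformed toward approximate normality, can be sketched as follows. This is a generic sketch with equal priors assumed, not the original 1973 implementation.

```python
import numpy as np

def fit_gaussian_classes(X, y):
    """Per-class mean vector and covariance matrix of the feature vectors."""
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c].T))
            for c in np.unique(y)}

def ml_classify(x, params):
    """Assign x to the class maximizing the Gaussian log-likelihood
    (equal prior probabilities assumed)."""
    def loglik(mu, S, x):
        d = x - mu
        return -0.5 * (d @ np.linalg.solve(S, d) + np.log(np.linalg.det(S)))
    return max(params, key=lambda c: loglik(*params[c], x))

# Toy usage: two well-separated 2-D feature classes.
X = np.array([[0.0, 0.1], [0.2, -0.1], [-0.1, 0.0],
              [2.0, 2.1], [2.2, 1.9], [1.9, 2.0]])
y = np.array([0, 0, 0, 1, 1, 1])
params = fit_gaussian_classes(X, y)
print(ml_classify(np.array([1.8, 2.05]), params))   # -> 1
```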
Statistical algorithms improve accuracy of gene fusion detection
Hsieh, Gillian; Bierman, Rob; Szabo, Linda; Lee, Alex Gia; Freeman, Donald E.; Watson, Nathaniel; Sweet-Cordero, E. Alejandro
2017-01-01
Abstract Gene fusions are known to play critical roles in tumor pathogenesis. Yet, sensitive and specific algorithms to detect gene fusions in cancer do not currently exist. In this paper, we present a new statistical algorithm, MACHETE (Mismatched Alignment CHimEra Tracking Engine), which achieves highly sensitive and specific detection of gene fusions from RNA-Seq data, including the highest Positive Predictive Value (PPV) compared to the current state-of-the-art, as assessed in simulated data. We show that the best performing published algorithms either find large numbers of fusions in negative control data or suffer from low sensitivity detecting known driving fusions in gold standard settings, such as EWSR1-FLI1. As proof of principle that MACHETE discovers novel gene fusions with high accuracy in vivo, we mined public data to discover and subsequently PCR validate novel gene fusions missed by other algorithms in the ovarian cancer cell line OVCAR3. These results highlight the gains in accuracy achieved by introducing statistical models into fusion detection, and pave the way for unbiased discovery of potentially driving and druggable gene fusions in primary tumors. PMID:28541529
Pre-Then-Post Testing: A Tool To Improve the Accuracy of Management Training Program Evaluation.
ERIC Educational Resources Information Center
Mezoff, Bob
1981-01-01
Explains a procedure to avoid the detrimental biases of conventional self-reports of training outcomes. The evaluation format provided is a method for using statistical procedures to increase the accuracy of self-reports by overcoming response-shift-bias. (Author/MER)
A New Chemotherapeutic Investigation: Piracetam Effects on Dyslexia.
ERIC Educational Resources Information Center
Chase, Christopher H.; Schmitt, R. Larry
1984-01-01
Compared to placebo controls, 28 individuals treated with Piracetam (a new drug thought to enhance learning and memory consolidation) showed statistically significant improvements above baseline scores on measures of effective reading accuracy and comprehension, reading speed, and writing accuracy. The medication was well tolerated and showed no…
Analyzing thematic maps and mapping for accuracy
Rosenfield, G.H.
1982-01-01
Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by either the row totals or the column totals from the original classification error matrices. In hypothesis testing, when the results of tests of multiple sample cases prove to be significant, some form of statistical test must be used to separate any results that differ significantly from the others. In the past, many analyses of the data in this error matrix were made by comparing the relative magnitudes of the percentage of correct classifications, for either individual categories, the entire map, or both. More rigorous analyses have used data transformations and (or) two-way classification analysis of variance. A more sophisticated approach would be to analyze the entire classification error matrices using the methods of discrete multivariate analysis or of multivariate analysis of variance.
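The commission and omission errors described above fall directly out of the error matrix; a minimal sketch following the stated convention (rows = interpretation, columns = verification), with a toy matrix:

```python
import numpy as np

def error_matrix_summary(cm):
    """Overall accuracy plus commission errors (off-diagonal share of
    each row) and omission errors (off-diagonal share of each column)."""
    cm = np.asarray(cm, float)
    correct = np.diag(cm)
    overall = correct.sum() / cm.sum()
    commission = 1 - correct / cm.sum(axis=1)
    omission = 1 - correct / cm.sum(axis=0)
    return overall, commission, omission

print(error_matrix_summary([[45, 5, 2], [3, 38, 4], [2, 7, 44]]))
```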
Humans make efficient use of natural image statistics when performing spatial interpolation.
D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S
2013-12-16
Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.
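As a concrete reference point for the heuristics mentioned above, the local-mean estimator predicts a missing pixel from its surrounding patch; a minimal sketch (a generic illustration, not the authors' code):

```python
import numpy as np

def local_mean_estimate(img, r, c, radius=1):
    """Estimate the intensity of a missing pixel (r, c) as the mean of
    the surrounding patch, excluding the missing pixel itself."""
    patch = img[max(r - radius, 0):r + radius + 1,
                max(c - radius, 0):c + radius + 1]
    mask = np.ones(patch.shape, dtype=bool)
    mask[min(r, radius), min(c, radius)] = False   # drop the missing pixel
    return patch[mask].mean()

img = np.arange(25, dtype=float).reshape(5, 5)
print(local_mean_estimate(img, 2, 2))   # mean of the 8 neighbours -> 12.0
```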
Diagnostic accuracy for major depression in multiple sclerosis using self-report questionnaires.
Fischer, Anja; Fischer, Marcus; Nicholls, Robert A; Lau, Stephanie; Poettgen, Jana; Patas, Kostas; Heesen, Christoph; Gold, Stefan M
2015-09-01
Multiple sclerosis and major depressive disorder frequently co-occur, but depression often remains undiagnosed in this population. Self-rated depression questionnaires are a good option where clinician-based standardized diagnostics are not feasible. However, there is a paucity of data on the diagnostic accuracy of self-report measures for depression in multiple sclerosis (MS). Moreover, head-to-head comparisons of common questionnaires are largely lacking. This could be particularly relevant for high-risk patients with depressive symptoms. Here, we compare the diagnostic accuracy of the Beck Depression Inventory (BDI) and the 30-item version of the Inventory of Depressive Symptomatology Self-Rated (IDS-SR30) for major depressive disorder (MDD) against diagnosis by a structured clinical interview. Patients reporting depressive symptoms completed the BDI, the IDS-SR30 and underwent diagnostic assessment (Mini International Neuropsychiatric Interview, M.I.N.I.). Receiver-Operating Characteristic analyses were performed, providing error estimates and false-positive/negative rates of suggested thresholds. Data from n = 31 MS patients were available. BDI and IDS-SR30 total score were significantly correlated (r = 0.82). The IDS-SR30 total score, cognitive subscore, and BDI showed excellent to good accuracy (area under the curve (AUC) 0.86, 0.91, and 0.85, respectively). Both the IDS-SR30 and the BDI are useful to quantify depressive symptoms, showing good sensitivity and specificity. The IDS-SR30 cognitive subscale may be useful as a screening tool and to quantify affective/cognitive depressive symptomatology.
EVOLUTION OF THE MAGNETIC FIELD LINE DIFFUSION COEFFICIENT AND NON-GAUSSIAN STATISTICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snodin, A. P.; Ruffolo, D.; Matthaeus, W. H.
The magnetic field line random walk (FLRW) plays an important role in the transport of energy and particles in turbulent plasmas. For magnetic fluctuations that are transverse or almost transverse to a large-scale mean magnetic field, theories describing the FLRW usually predict asymptotic diffusion of magnetic field lines perpendicular to the mean field. Such theories often depend on the assumption that one can relate the Lagrangian and Eulerian statistics of the magnetic field via Corrsin’s hypothesis, and additionally take the distribution of magnetic field line displacements to be Gaussian. Here we take an ordinary differential equation (ODE) model with these underlying assumptions and test how well it describes the evolution of the magnetic field line diffusion coefficient in 2D+slab magnetic turbulence, by comparisons to computer simulations that do not involve such assumptions. In addition, we directly test the accuracy of the Corrsin approximation to the Lagrangian correlation. Over much of the studied parameter space we find that the ODE model is in fairly good agreement with computer simulations, in terms of both the evolution and asymptotic values of the diffusion coefficient. When there is poor agreement, we show that this can be largely attributed to the failure of Corrsin’s hypothesis rather than the assumption of Gaussian statistics of field line displacements. The degree of non-Gaussianity, which we measure in terms of the kurtosis, appears to be an indicator of how well Corrsin’s approximation works.
Quinlivan, L; Cooper, J; Davies, L; Hawton, K; Gunnell, D; Kapur, N
2016-01-01
Objectives The aims of this review were to calculate the diagnostic accuracy statistics of risk scales following self-harm and consider which might be the most useful scales in clinical practice. Design Systematic review. Methods We based our search terms on those used in the systematic reviews carried out for the National Institute for Health and Care Excellence self-harm guidelines (2012) and evidence update (2013), and updated the searches through to February 2015 (CINAHL, EMBASE, MEDLINE, and PsycINFO). Methodological quality was assessed and three reviewers extracted data independently. We limited our analysis to cohort studies in adults using the outcome of repeat self-harm or attempted suicide. We calculated diagnostic accuracy statistics including measures of global accuracy. Statistical pooling was not possible due to heterogeneity. Results The eight papers included in the final analysis varied widely in methodological quality and in the content of the scales employed. Overall, sensitivity of scales ranged from 6% (95% CI 5% to 6%) to 97% (95% CI 94% to 98%). The positive predictive value (PPV) ranged from 5% (95% CI 3% to 9%) to 84% (95% CI 80% to 87%). The diagnostic OR ranged from 1.01 (95% CI 0.434 to 2.5) to 16.3 (95% CI 12.5 to 21.4). Scales with high sensitivity tended to have low PPVs. Conclusions It is difficult to be certain which, if any, are the most useful scales for self-harm risk assessment. No scale performs sufficiently well to be recommended for routine clinical use. Further robust prospective studies are warranted to evaluate risk scales following an episode of self-harm. Diagnostic accuracy statistics should be considered in relation to specific service needs, and scales should only be used as an adjunct to assessment. PMID:26873046
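To make the reported measures concrete, here is a minimal Python sketch that computes sensitivity, specificity, PPV, NPV, and the diagnostic odds ratio (with a log-scale 95% CI) from a 2x2 table; the counts are hypothetical, not taken from the review.

    # Diagnostic accuracy statistics from a hypothetical 2x2 table of
    # scale prediction vs. repeat self-harm outcome (counts are invented).
    import math

    tp, fp, fn, tn = 40, 60, 10, 90  # hypothetical counts

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    diagnostic_or = (tp * tn) / (fp * fn)  # diagnostic odds ratio

    # Approximate 95% CI for the diagnostic OR on the log scale
    se_log_or = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)
    lo = math.exp(math.log(diagnostic_or) - 1.96 * se_log_or)
    hi = math.exp(math.log(diagnostic_or) + 1.96 * se_log_or)

    print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
    print(f"PPV={ppv:.2f} NPV={npv:.2f} DOR={diagnostic_or:.1f} (95% CI {lo:.1f}-{hi:.1f})")

The same pattern (high sensitivity with low PPV, as the review observes) appears whenever the base rate of the outcome is low, regardless of the scale used.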
Application of real rock pore-throat statistics to a regular pore network model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rakibul, M.; Sarker, H.; McIntyre, D.
2011-01-01
This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high-resolution micro-CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted, and the distribution of pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area (or throat radius) was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process; relative permeability was calculated and compared with the experimentally determined values for the original rock sample. The numerical and experimental procedures are explained in detail, and the performance of the model relative to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields, including production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, and groundwater flow. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil-water relative permeability data were generated for the same core plug from both the pore network model and the experimental procedure. The shape and size of the relative permeability curves were compared and analyzed; a good match was observed for the wetting-phase relative permeability, but the non-wetting-phase simulation results deviated from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques aim to eliminate the need for routine core analysis, which can be time-consuming and expensive, so a numerical technique is expected to be fast and to produce reliable results. In applied engineering, a quick result with reasonable accuracy is sometimes preferable to a more time-consuming one. The present work is an effort to check the accuracy and validity of a previously developed pore network model for obtaining important petrophysical properties of rocks based on cutting-sized sample data.
A comparison study of different facial soft tissue analysis methods.
Kook, Min-Suk; Jung, Seunggon; Park, Hong-Ju; Oh, Hee-Kyun; Ryu, Sun-Youl; Cho, Jin-Hyoung; Lee, Jae-Seo; Yoon, Suk-Ja; Kim, Min-Soo; Shin, Hyo-Keun
2014-07-01
The purpose of this study was to evaluate several different facial soft tissue measurement methods. After marking 15 landmarks in the facial area of 12 mannequin heads of different sizes and shapes, facial soft tissue measurements were performed by the following five methods: direct anthropometry, digitizer, 3D CT, 3D scanner, and the DI3D system. With these measurement methods, 10 measurement values representing facial width, height, and depth were determined twice, with a one-week interval, by one examiner. The data were analyzed with the SPSS program. The positions generated by multidimensional scaling showed that direct anthropometry, 3D CT, the digitizer, and the 3D scanner yielded relatively similar values, while the DI3D system showed slightly different values. All five methods demonstrated good accuracy, with a high coefficient of reliability (>0.92) and a low technical error (<0.9 mm). The measured value of the distance between the right and left medial canthus obtained with the DI3D system was statistically significantly different from that obtained with the digital caliper, digitizer, and laser scanner (p < 0.05), but the other measured values were not significantly different. On evaluating the reproducibility of the measurement methods, two measurement values (Ls-Li, G-Pg) obtained by direct anthropometry, one measurement value (N'-Prn) obtained with the digitizer, and four measurement values (EnRt-EnLt, AlaRt-AlaLt, ChRt-ChLt, Sn-Pg) obtained with the DI3D system were statistically significantly different. However, the mean measurement error for every method was low (<0.7 mm). None of the measurement values obtained with the 3D CT and 3D scanner showed a statistically significant difference. The results of this study show that all 3D facial soft tissue analysis methods demonstrate favorable accuracy and reproducibility, and hence they can be used in clinical practice and research studies. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Extracting oil palm crown from WorldView-2 satellite image
NASA Astrophysics Data System (ADS)
Korom, A.; Phua, M.-H.; Hirata, Y.; Matsuura, T.
2014-02-01
Oil palm (OP) is the most important commercial crop in Malaysia. Estimating crowns is important for biomass estimation from high-resolution satellite (HRS) images. This study examined extraction of individual OP crowns from a WorldView-2 image using a twofold algorithm, i.e., masking of non-OP pixels and detection of individual OP crowns based on watershed segmentation of greyscale images. The study site was located in Beluran district, central Sabah, where mature OPs ranging in age from 15 to 25 years have been planted. We examined two compound vegetation indices, (NDVI+1)*DVI and NDII, for masking non-OP crown areas. Using kappa statistics, an optimal threshold value was set, with the highest accuracy at 90.6% for differentiating OP crown areas from non-OP areas. After watershed segmentation of the OP crown areas with additional post-processing, about 77% of individual OP crowns were successfully detected in comparison with manual delineation. The shape and location of each crown segment were then assessed with a modified version of the goodness measures of Möller et al., which yielded 0.3, indicating acceptable CSGM (combined segmentation goodness measure) agreement between the automated and manually delineated crowns (the perfect case is 1).
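The kappa-based threshold selection step described above can be sketched as follows; the index values, labels, and the use of scikit-learn's cohen_kappa_score are illustrative stand-ins for the actual WorldView-2 workflow.

    # Sweep a cut-off over a compound vegetation index and keep the value
    # that maximizes the kappa statistic against reference pixels.
    # Index distributions and labels below are synthetic.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(0)
    index = np.concatenate([rng.normal(0.3, 0.1, 500),   # non-OP pixels
                            rng.normal(0.6, 0.1, 500)])  # OP crown pixels
    truth = np.concatenate([np.zeros(500), np.ones(500)]).astype(int)

    best_kappa, best_t = -1.0, None
    for t in np.linspace(index.min(), index.max(), 200):
        pred = (index > t).astype(int)
        k = cohen_kappa_score(truth, pred)
        if k > best_kappa:
            best_kappa, best_t = k, t

    print(f"optimal threshold={best_t:.3f}, kappa={best_kappa:.3f}")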
Accuracy of Digital vs. Conventional Implant Impressions
Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.
2015-01-01
The accuracy of digital impressions greatly influences their clinical viability in implant restorations. The aim of this study was to compare, by three-dimensional analysis, the accuracy of gypsum models acquired from conventional implant impressions to digitally milled models created from direct digitalization. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner, and 30 STL datasets from each group were imported into inspection software. The datasets were aligned to the reference dataset by a repeated best-fit algorithm, and 10 specified contact locations of interest were measured as mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, and horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model, with mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had accuracy comparable to gypsum models from conventional impressions. However, differences in fossae and in vertical displacement of the implant position between the gypsum and digitally milled models, compared with the reference model, were statistically significant (p < 0.001 and p = 0.020, respectively). PMID:24720423
Spatial Pattern Classification for More Accurate Forecasting of Variable Energy Resources
NASA Astrophysics Data System (ADS)
Novakovskaia, E.; Hayes, C.; Collier, C.
2014-12-01
The accuracy of solar and wind forecasts is becoming increasingly essential as grid operators continue to integrate additional renewable generation onto the electric grid. Forecast errors affect rate payers, grid operators, wind and solar plant maintenance crews, and energy traders through increases in prices, project downtime, or lost revenue. While extensive and beneficial efforts have been undertaken in recent years to improve physical weather models for a broad spectrum of applications, these improvements have generally not been sufficient to meet the accuracy demands of system planners. For renewables, these models are often used in conjunction with additional statistical models utilizing both meteorological observations and power generation data. Forecast accuracy can depend on the specific weather regime at a given location. To account for these dependencies, it is important that the parameterizations used in statistical models change as the regime changes. An automated tool, based on an artificial neural network model, has been developed to identify different weather regimes as they impact power output forecast accuracy at wind or solar farms. In this study, improvements in forecast accuracy were analyzed over varying time horizons for wind farms and utility-scale PV plants located in different geographical regions.
On the control of brain-computer interfaces by users with cerebral palsy.
Daly, Ian; Billinger, Martin; Laparra-Hernández, José; Aloise, Fabio; García, Mariano Lloria; Faller, Josef; Scherer, Reinhold; Müller-Putz, Gernot
2013-09-01
Brain-computer interfaces (BCIs) have been proposed as a potential assistive device for individuals with cerebral palsy (CP) to assist with their communication needs. However, it is unclear how well suited BCIs are to individuals with CP. Therefore, this study aims to investigate to what extent these users are able to gain control of BCIs. This study was conducted with 14 individuals with CP attempting to control two standard online BCIs: (1) one based upon sensorimotor rhythm (SMR) modulations and (2) one based upon steady-state visual evoked potentials (SSVEPs). Of the 14 users, 8 were able to use one or other of the BCIs, online, with a statistically significant level of accuracy, without prior training. Classification results were driven by neurophysiological activity and did not correlate with occurrences of artifacts. However, many of these users' accuracies, while statistically significant, would require either more training or more advanced methods before practical BCI control would be possible. The results indicate that BCIs may be controlled by individuals with CP but that many issues need to be overcome before practical use may be achieved. This is the first study to assess the ability of a large group of different individuals with CP to gain control of an online BCI system. The results indicate that six users could control an SMR BCI and three an SSVEP BCI at statistically significant levels of accuracy (SMR accuracies, mean ± SD: 0.821 ± 0.116; SSVEP accuracies: 0.422 ± 0.069). Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
21 CFR 211.165 - Testing and release for distribution.
Code of Federal Regulations, 2010 CFR
2010-04-01
... products meet each appropriate specification and appropriate statistical quality control criteria as a condition for their approval and release. The statistical quality control criteria shall include appropriate acceptance levels and/or appropriate rejection levels. (e) The accuracy, sensitivity, specificity, and...
NASA Astrophysics Data System (ADS)
Lee, J.; Kang, S.; Jang, K.; Ko, J.; Hong, S.
2012-12-01
Crop productivity is associated with food security, and hence several models have been developed to estimate crop yield by combining remote sensing data with carbon cycle processes. In the present study, we attempted to estimate crop GPP (gross primary production) and NPP (net primary production) using an algorithm based on the light use efficiency (LUE) model and a simplified respiration model. The states of Iowa and Illinois were chosen as the study site for estimating crop yield over a five-year period (2006-2010), as they form the main Corn Belt area in the US. The present study focuses on developing crop-specific parameters for corn and soybean to estimate crop productivity and map yield using satellite remote sensing data. We utilized 10 km spatial resolution daily meteorological data from WRF to provide cloudy-day meteorological variables, while on clear-sky days MODIS-based meteorological data were utilized to estimate daily GPP, NPP, and biomass. County-level statistics on yield, area harvested, and production were used to test model-predicted crop yield. The estimated input meteorological variables from MODIS and WRF showed good agreement with ground observations from six Ameriflux tower sites in 2006. For example, correlation coefficients ranged from 0.93 to 0.98 for Tmin and Tavg, from 0.68 to 0.85 for daytime mean VPD, and from 0.85 to 0.96 for daily shortwave radiation. We developed a county-specific crop conversion coefficient, i.e., the ratio of yield to biomass on day of year (DOY) 260, and then validated the estimated county-level crop yield against the statistical yield data. The estimated corn and soybean yields at the county level ranged from 671 to 1393 g m-2 y-1 and from 213 to 421 g m-2 y-1, respectively. The county-specific yield estimates mostly showed errors of less than 10%. Furthermore, we estimated crop yields at the state level, which were validated against the statistics data and showed errors of less than 1%. Further analysis of the crop conversion coefficient was conducted for DOY 200 and DOY 280. For DOY 280, crop yield estimation showed better accuracy for soybean at the county level. Though DOY 200 resulted in lower accuracy (i.e., 20% mean bias), it provides a useful tool for early forecasting of crop yield. We improved the spatial accuracy of estimated crop yield at the county level by developing county-specific crop conversion coefficients. Our results indicate that aboveground crop biomass can be estimated successfully with the simple LUE and respiration models combined with MODIS data, and that the county-specific conversion coefficient can differ across counties. Hence, applying region-specific conversion coefficients is necessary to estimate crop yield with better accuracy.
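A minimal sketch of the LUE-plus-respiration logic the abstract describes, with all parameter values invented for illustration (they are not the study's calibrated coefficients):

    # LUE model: GPP = epsilon * fAPAR * PAR; NPP = GPP - respiration;
    # yield = biomass * county-specific conversion coefficient.
    epsilon = 1.8         # g C per MJ APAR, hypothetical LUE for corn
    fapar = 0.75          # fraction of absorbed PAR, e.g. from MODIS fPAR
    par = 10.0            # MJ m-2 d-1, incident photosynthetically active radiation
    resp_fraction = 0.45  # fraction of GPP lost to autotrophic respiration

    gpp = epsilon * fapar * par        # g C m-2 d-1
    npp = gpp * (1.0 - resp_fraction)  # simplified respiration model

    biomass = npp * 120                # accumulate over ~120 days of growing season
    conversion = 0.5                   # hypothetical yield/biomass ratio at DOY 260
    yield_estimate = biomass * conversion
    print(f"GPP={gpp:.1f}, NPP={npp:.1f} g C m-2 d-1, yield={yield_estimate:.0f} g m-2")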
Thematic Accuracy Assessment of the 2011 National Land Cover Database (NLCD)
Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment o...
Kok, P; Pitman, A G; Cawson, J N; Gledhill, S; Kremer, S; Lawson, J; Mehta, K; Mercuri, V; Shnier, D; Taft, R; Zentner, L
2010-08-01
The study aims to determine whether any association exists between visual memory performance and diagnostic accuracy in a group of radiologist mammogram readers. One hundred proven mammograms (23 with cancers) were grouped into 5 sets of 20 cases, with sets of equal difficulty. Pairs of sets were presented in 5 reads (40 cases per read, in random order) to a panel of 8 radiologist readers (present or past screening readers, with experience ranging from <1 year to >20 years). The readers were asked to either 'clear' or 'call back' cases depending on the need for further workup, and at post-baseline reads to indicate whether each case was 'new' or 'old' (i.e., remembered from a prior read). Two sets were presented only at baseline (40 cases per reader) and were used to calculate each reader's false recollection rate. Three sets were repeated post-baseline once or twice (100 cases per reader). Reading conditions were standardised. Memory performance differed markedly between readers. The number of correctly remembered cases (of 100 'old' cases) had a median of 10.5 and a range of 0-58. The observed number of false recollections (of 40 'totally new' cases) had a median of 2 and a range of 0-17. Diagnostic performance measures were, mean (range): sensitivity 0.68 (0.54-0.81); specificity 0.82 (0.74-0.91); positive predictive value (PPV) 0.55 (0.50-0.65); negative predictive value (NPV) 0.89 (0.86-0.93); and accuracy 0.78 (0.76-0.83). The 95% confidence intervals (CIs) for each reader overlapped for all diagnostic parameters, indicating a lack of statistically significant difference between the readers at the 5% level. The most sensitive and the most specific readers showed a trend away from each other on sensitivity, specificity, NPV and PPV; their accuracies were 0.76 and 0.82, respectively, and their accuracy 95% CIs overlapped considerably. Correlation analysis by reader showed no association between observed memory performance and any of the diagnostic accuracy measures in our group of eight representative readers. Whether a radiologist has a good or a bad visual memory for cases, and in particular mammograms, should not impact his or her performance as a radiologist and mammogram reader.
Electroencephalography Predicts Poor and Good Outcomes After Cardiac Arrest: A Two-Center Study.
Rossetti, Andrea O; Tovar Quiroga, Diego F; Juan, Elsa; Novy, Jan; White, Roger D; Ben-Hamouda, Nawfel; Britton, Jeffrey W; Oddo, Mauro; Rabinstein, Alejandro A
2017-07-01
The prognostic role of electroencephalography during and after targeted temperature management in post-cardiac arrest patients, relative to other predictors, is incompletely known. We assessed the performance of electroencephalography during and after targeted temperature management for predicting good and poor outcomes, along with other recognized predictors. Cohort study (April 2009 to March 2016). Two academic hospitals (Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland; Mayo Clinic, Rochester, MN). Consecutive comatose adults admitted after cardiac arrest, identified through prospective registries. All patients were managed with targeted temperature management, receiving prespecified standardized clinical, neurophysiologic (particularly, electroencephalography during and after targeted temperature management), and biochemical evaluations. We assessed electroencephalography variables (reactivity, continuity, epileptiform features, and prespecified "benign" or "highly malignant" patterns based on the American Clinical Neurophysiology Society nomenclature) and other clinical, neurophysiologic (somatosensory evoked potential), and biochemical prognosticators. Good outcome (Cerebral Performance Categories 1 and 2) and mortality predictions at 3 months were calculated. Among 357 patients, early electroencephalography reactivity and continuity and flexor or better motor reaction had greater than 70% positive predictive value for good outcome; reactivity (80.4%; 95% CI, 75.9-84.4%) and motor response (80.1%; 95% CI, 75.6-84.1%) had the highest accuracy. Early benign electroencephalography heralded good outcome in 86.2% (95% CI, 79.8-91.1%). False positive rates for mortality were less than 5% for epileptiform or nonreactive early electroencephalography, nonreactive late electroencephalography, absent somatosensory evoked potential, absent pupillary or corneal reflexes, presence of myoclonus, and neuron-specific enolase greater than 75 µg/L; accuracy was highest for early electroencephalography reactivity (86.6%; 95% CI, 82.6-90.0%). Early highly malignant electroencephalography had a false positive rate of 1.5% with an accuracy of 85.7% (95% CI, 81.7-89.2%). This study provides class III evidence that electroencephalography reactivity predicts both poor and good outcomes, and motor reaction good outcome, after cardiac arrest. Electroencephalography reactivity seems to be the best discriminator between good and poor outcomes. Standardized electroencephalography interpretation seems to predict both conditions during and after targeted temperature management.
A laboratory evaluation of the influence of weighing gauges performance on extreme events statistics
NASA Astrophysics Data System (ADS)
Colli, Matteo; Lanza, Luca
2014-05-01
The effects of inaccurate ground-based rainfall measurements on the information derived from rain records are not yet well documented in the literature. La Barbera et al. (2002) investigated the propagation of the systematic mechanical errors of tipping-bucket rain gauges (TBRs) into the most common statistics of rainfall extremes, e.g., the assessment of the return period T (or the related non-exceedance probability) of short-duration/high-intensity events. Colli et al. (2012) and Lanza et al. (2012) extended the analysis to a 22-year precipitation data set obtained from a virtual weighing-type gauge (WG). The artificial WG time series was based on real precipitation data measured at the meteo-station of the University of Genova, modelling the weighing gauge output as a linear dynamic system. This approximation was previously validated with dedicated laboratory experiments and is based on the evidence that the accuracy of WG measurements under real-world, time-varying rainfall conditions is mainly affected by the dynamic response of the gauge (as revealed during the last WMO Field Intercomparison of Rainfall Intensity Gauges). The investigation is now completed by analyzing actual measurements performed by two common weighing gauges, the OTT Pluvio2 load-cell gauge and the GEONOR T-200 vibrating-wire gauge, since both instruments demonstrated very good performance in previous constant-flow-rate calibrations. A laboratory dynamic rainfall generation system was arranged and validated in order to simulate a number of precipitation events with variable reference intensities. The artificial events were generated from real-world rainfall intensity (RI) records obtained at the meteo-station of the University of Genova, so that the statistical structure of the time series is preserved. The influence of WG RI measurement accuracy on the associated extreme events statistics is analyzed by comparing the original intensity-duration-frequency (IDF) curves with those obtained from measurements of the simulated rain events. References: Colli, M., L.G. Lanza, and P. La Barbera (2012). Weighing gauges measurement errors and the design rainfall for urban scale applications, 9th International Workshop on Precipitation in Urban Areas, 6-9 December 2012, St. Moritz, Switzerland. Lanza, L.G., M. Colli, and P. La Barbera (2012). On the influence of rain gauge performance on extreme events statistics: the case of weighing gauges, EGU General Assembly 2012, 22 April, Wien, Austria. La Barbera, P., L.G. Lanza, and L. Stagi (2002). Influence of systematic mechanical errors of tipping-bucket rain gauges on the statistics of rainfall extremes. Water Sci. Techn., 45(2), 1-9.
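The "gauge as a linear dynamic system" idea can be sketched as a discrete first-order lag applied to a reference intensity series; the time constant and the synthetic event below are assumptions for illustration, not calibrated values for the Pluvio2 or T-200.

    # Model the gauge output as a first-order low-pass response to the
    # true rainfall intensity, so fast fluctuations are smoothed and lagged.
    import numpy as np
    from scipy.signal import lfilter

    dt = 6.0      # s, sampling step
    tau = 60.0    # s, hypothetical gauge response time constant
    alpha = dt / (tau + dt)

    t = np.arange(0, 3600, dt)
    true_ri = 20 * np.exp(-((t - 900) / 300) ** 2)  # synthetic intensity burst, mm/h

    # Discrete first-order lag: y[n] = alpha*x[n] + (1-alpha)*y[n-1]
    measured_ri = lfilter([alpha], [1, -(1 - alpha)], true_ri)

    print(f"true peak {true_ri.max():.1f} mm/h, measured peak {measured_ri.max():.1f} mm/h")

The attenuation of the peak in the filtered series is the mechanism by which gauge dynamics distort short-duration extremes, and hence the IDF curves built from them.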
Raidullah, Ebadullah; Francis, Maria L.
2014-01-01
Objectives: This study aimed to evaluate the accuracy of Root ZX in determining working length in the presence of normal saline, 0.2% chlorhexidine, and 2.5% sodium hypochlorite. Material and Methods: Sixty extracted, single-rooted, single-canal human teeth were used. Teeth were decoronated at the cemento-enamel junction (CEJ) and the actual canal length was determined. Working length measurements were then obtained with Root ZX in the presence of 0.9% normal saline, 0.2% chlorhexidine, and 2.5% NaOCl. The working lengths obtained with Root ZX were compared with the actual canal length and subjected to statistical analysis. Results: No statistically significant difference was found between the actual canal length and Root ZX measurements in the presence of normal saline and 0.2% chlorhexidine. A highly statistically significant difference was found between the actual canal length and Root ZX measurements in the presence of 2.5% NaOCl; however, all measurements were within the clinically acceptable range of ±0.5 mm. Conclusion: The accuracy of electronic length (EL) measurement with Root ZX within ±0.5 mm of the actual length (AL) was consistently high in the presence of 0.2% chlorhexidine, normal saline, and 2.5% sodium hypochlorite. Clinical significance: This study demonstrates the efficacy of Root ZX (a third-generation apex locator) as a dependable aid in endodontic working length determination. Key words: Electronic apex locator, working length, Root ZX accuracy, intracanal irrigating solutions. PMID:24596634
Reliability of Soft Tissue Model Based Implant Surgical Guides; A Methodological Mistake.
Sabour, Siamak; Dastjerdi, Elahe Vahid
2012-08-20
We were interested to read the paper by Maney P and colleagues published in the July 2012 issue of J Oral Implantol. The authors, aiming to assess the reliability of soft tissue model based implant surgical guides, reported that accuracy was evaluated using software.1 We found the manuscript title of Maney P, et al. incorrect and misleading. Moreover, they reported that twenty-two sites (46.81%) were considered accurate (13 of 24 maxillary and 9 of 23 mandibular sites). As the authors point out in their conclusion, soft tissue models do not always provide sufficient accuracy for implant surgical guide fabrication. Reliability (precision) and validity (accuracy) are two different methodological issues in research. Sensitivity, specificity, PPV, NPV, the positive likelihood ratio (sensitivity/[1 - specificity]) and the negative likelihood ratio ([1 - sensitivity]/specificity), as well as the odds ratio (true results/false results - preferably more than 50), are among the tests used to evaluate the validity (accuracy) of a single test compared to a gold standard.2-4 It is not clear to which of the above-mentioned estimates for validity analysis the reported twenty-two accurate sites (46.81%) relate. Reliability (repeatability or reproducibility) is often assessed with statistical tests such as Pearson r, least squares, and the paired t-test, all of which are common mistakes in reliability analysis.5 Briefly, for quantitative variables the intraclass correlation coefficient (ICC) should be used, and for qualitative variables weighted kappa, with caution because kappa has its own limitations too. Regarding reliability or agreement, it is good to know that in computing the kappa value only concordant cells are considered, whereas discordant cells should also be taken into account in order to reach a correct estimate of agreement (weighted kappa).2-4 As a take-home message, appropriate tests should be applied for reliability and validity analysis.
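The letter's distinction between simple and weighted kappa is easy to see numerically; the sketch below uses scikit-learn's cohen_kappa_score on invented ordinal ratings.

    # Unweighted kappa scores only exact agreement; linear-weighted kappa
    # also credits partial agreement on an ordinal scale. Ratings are invented.
    from sklearn.metrics import cohen_kappa_score

    rater_a = [0, 1, 2, 2, 3, 1, 0, 2, 3, 3]
    rater_b = [0, 2, 2, 1, 3, 1, 1, 2, 2, 3]

    print("unweighted kappa:", round(cohen_kappa_score(rater_a, rater_b), 3))
    print("linear weighted kappa:", round(cohen_kappa_score(rater_a, rater_b, weights="linear"), 3))

Because the two raters' disagreements are mostly one category apart, the weighted value comes out higher, which is exactly the point the letter makes about discordant cells.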
GPU-accelerated Monte Carlo convolution/superposition implementation for dose calculation.
Zhou, Bo; Yu, Cedric X; Chen, Danny Z; Hu, X Sharon
2010-11-01
Dose calculation is a key component in radiation treatment planning systems. Its performance and accuracy are crucial to the quality of treatment plans as emerging advanced radiation therapy technologies exert ever tighter constraints on dose calculation. A common practice is to choose either a deterministic method such as the convolution/superposition (CS) method for speed or a Monte Carlo (MC) method for accuracy. The goal of this work is to boost the performance of a hybrid Monte Carlo convolution/superposition (MCCS) method by devising a graphics processing unit (GPU) implementation so as to make the method practical for day-to-day usage. Although the MCCS algorithm combines the merits of MC fluence generation and CS fluence transport, it is still not fast enough to be used as a day-to-day planning tool. To alleviate the speed issue of MC algorithms, the authors adopted MCCS as their target method and implemented a GPU-based version. In order to fully utilize the GPU computing power, the MCCS algorithm was modified to match the GPU hardware architecture. The performance of the authors' GPU-based implementation on an Nvidia GTX260 card was compared to a multithreaded software implementation on a quad-core system. A speedup in the range of 6.7-11.4x was observed for the clinical cases used. The statistical fluctuation of less than 2% also indicates that the accuracy of the authors' GPU-based implementation is in good agreement with the results from the quad-core CPU implementation. This work shows that the GPU is a feasible and cost-efficient solution, compared to alternatives such as cluster machines or field-programmable gate arrays, for satisfying the increasing demands on computation speed and accuracy of dose calculation. There are, however, inherent limitations to using GPUs to accelerate MC-type applications, which are analyzed in detail in this article.
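The hybrid MCCS idea (stochastic fluence generation followed by deterministic kernel transport) can be illustrated with a toy 1D example; this is not the authors' GPU code, just a conceptual sketch with invented kernel and geometry.

    # Monte Carlo step: sample interaction sites; CS step: spread their
    # energy deterministically by convolving with a dose-deposition kernel.
    import numpy as np

    rng = np.random.default_rng(42)
    n_vox = 200
    # MC fluence generation: sample photon interaction sites along a 1D beam
    sites = rng.normal(loc=100, scale=15, size=50_000).astype(int)
    fluence, _ = np.histogram(sites, bins=n_vox, range=(0, n_vox))

    # CS fluence transport: simple exponential spreading kernel (illustrative)
    x = np.arange(-20, 21)
    kernel = np.exp(-np.abs(x) / 5.0)
    kernel /= kernel.sum()
    dose = np.convolve(fluence.astype(float), kernel, mode="same")

    print("peak dose voxel:", int(dose.argmax()))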
Bhimarao; Bhat, Venkataramana; Gowda, Puttanna VN
2015-01-01
Background The high incidence of IUGR and its low recognition rate lead to increased perinatal morbidity and mortality, for which prediction of IUGR with timely management decisions is of paramount importance. Many studies have compared the efficacy of several gestational-age-independent parameters and found that the TCD/AC ratio is a better predictor of asymmetric IUGR. Aim To compare the accuracy of the transcerebellar diameter/abdominal circumference (TCD/AC) ratio with the head circumference/abdominal circumference (HC/AC) ratio in predicting asymmetric intrauterine growth retardation after 20 weeks of gestation. Materials and Methods The prospective study was conducted over a period of one year on 50 pregnancies with clinically suspected IUGR, evaluated with a 3.5 MHz ultrasound scanner by a single sonologist. BPD, HC, AC, and FL, along with TCD, were measured for assessing the sonological gestational age. Two morphometric ratios, TCD/AC and HC/AC, were calculated. Estimated fetal weight was calculated for all pregnancies and its percentile determined. Statistical Methods The TCD/AC and HC/AC ratios were correlated with advancing gestational age to determine whether they were related to GA. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic accuracy (DA) of the TCD/AC and HC/AC ratios in evaluating IUGR fetuses were calculated. Results In the present study, a linear relation of TCD and HC with gestation was noted in IUGR fetuses. The sensitivity, specificity, PPV, NPV, and DA were 88%, 93.5%, 77.1%, 96.3%, and 92.4%, respectively, for the TCD/AC ratio, versus 84%, 92%, 72.4%, 95.8%, and 90.4%, respectively, for the HC/AC ratio in predicting IUGR. Conclusion Both ratios were gestational age independent and can be used in detecting IUGR with good diagnostic accuracy. However, the TCD/AC ratio had better diagnostic validity and accuracy than the HC/AC ratio in predicting asymmetric IUGR. PMID:26557588
Magnetic Resonance Enterography to Assess Multifocal and Multicentric Bowel Endometriosis.
Nyangoh Timoh, Krystel; Stewart, Zelda; Benjoar, Mikhael; Beldjord, Selma; Ballester, Marcos; Bazot, Marc; Thomassin-Naggara, Isabelle; Darai, Emile
To prospectively determine the accuracy of magnetic resonance enterography (MRE) compared with conventional magnetic resonance imaging (MRI) for multifocal (i.e., multiple lesions affecting the same digestive segment) and multicentric (i.e., multiple lesions affecting several digestive segments) bowel endometriosis. A prospective study (Canadian Task Force classification II-2). Tenon University Hospital, Paris, France. Patients with MRI-suspected colorectal endometriosis scheduled for colorectal resection from April 2014 to February 2016 were included. Patients underwent both 1.5-Tesla MRI and MRE as well as laparoscopically assisted and open colorectal resections. The diagnostic performance of MRI and MRE was evaluated for sensitivity, specificity, positive and negative predictive values, accuracy, and positive and negative likelihood ratios (LRs). The interobserver variability between the experienced and junior radiologists was quantified using weighted κ statistics. Forty-seven patients were included. Twenty-two (46.8%) patients had unifocal lesions, 14 (30%) had multifocal lesions, and 11 (23.4%) had multicentric lesions. The sensitivity, specificity, positive LR, and negative LR for the diagnosis of multifocal lesions were 0.29 (6/21), 1.00 (23/24), 15.36, and 0.71 for MRI and 0.57 (12/21), 0.89 (23/25), 4.95, and 0.58 for MRE. The sensitivity, specificity, positive LR, and negative LR for the diagnosis of multicentric lesions were 0.18 (1/11), 1.00 (1/1), 15, and 0.80 for MRI and 0.46 (5/11), 0.92 (33/36), 5.45, and 0.60 for MRE. MRI showed lower accuracy than MRE for diagnosing multicentric (p = .01) and multifocal (p = .004) lesions. The interobserver agreement for MRE was good for both multifocality (κ = 0.80) and multicentricity (κ = 0.61). MRE has better accuracy for diagnosing multifocal and multicentric bowel endometriosis than conventional MRI. Copyright © 2018. Published by Elsevier Inc.
Improvement of the diagnostic accuracy of MRA with subtraction technique in cerebral vasospasm.
Hamaguchi, Akiyoshi; Fujima, Noriyuki; Yoshida, Daisuke; Hamaguchi, Naoko; Kodera, Shuichi
2014-01-01
Vasospasm is considered the most severe acute complication after subarachnoid hemorrhage (SAH). MRA is not considered ideal for detecting cerebral vasospasm because of background signal, including the hemorrhage itself. The aim of this study was to evaluate the efficacy of subtraction MRA (SMRA) by comparing it to conventional MRA (CMRA) for diagnosis of cerebral vasospasm. Arteries were assigned to one of three categories based on the MRA diagnostic quality for vasospasm (quality score): 0, bad … 2, good. Furthermore, each artery was assigned to one of four categories based on the degree of vasospasm severity (SV score): 0, no vasospasm … 3, severe. The difference between the DSA-SV score and the MRA-SV score was defined as the DIF score. CMRA and SMRA were compared for each arterial region with regard to quality score and DIF score. The average CMRA and SMRA quality scores were 1.46 and 1.79; the difference was statistically significant. The average CMRA and SMRA DIF scores were 1.08 and 0.60; the difference was statistically significant. Diagnosis of cerebral vasospasm is more accurate with SMRA than with CMRA, and SMRA retains the advantages of a noninvasive technique able to detect cerebral vasospasm. Copyright © 2014 by the American Society of Neuroimaging.
NASA Astrophysics Data System (ADS)
Singh Pradhan, Ananta Man; Kang, Hyo-Sub; Kim, Yun-Tae
2016-04-01
This study uses a physically based approach to evaluate the factor of safety of hillslopes under different hydrological conditions on Mt. Umyeon, south of Seoul. The hydrological conditions were determined using the rainfall intensity and duration associated with a known landslide inventory covering the whole of Korea. A quantile regression statistical method was used to ascertain different probability warning levels on the basis of rainfall thresholds. Physically based models are easily interpreted and have high predictive capabilities, but they rely on spatially explicit and accurate parameterization, which is commonly not possible. Statistical probabilistic methods can include other causative factors that influence slope stability, such as forest, soil, and geology, but they rely on good landslide inventories of the site. This study describes a hybrid approach that combines physically based landslide susceptibility estimates for different hydrological conditions with statistical modeling. A presence-only maximum entropy model was used to construct the hybrid and analyze the relation of landslides to the conditioning factors. About 80% of the landslides were listed among the unstable sites identified by the proposed model, demonstrating its effectiveness and accuracy in determining unstable areas and areas that require evacuation. The cumulative rainfall thresholds provide a valuable reference to guide disaster prevention authorities in the issuance of warning levels, with the potential to reduce losses and save lives.
Assessment of Cell Line Models of Primary Human Cells by Raman Spectral Phenotyping
Swain, Robin J.; Kemp, Sarah J.; Goldstraw, Peter; Tetley, Teresa D.; Stevens, Molly M.
2010-01-01
Researchers have previously questioned the suitability of cell lines as models for primary cells. In this study, we used Raman microspectroscopy to characterize live A549 cells from a unique molecular biochemical perspective to shed light on their suitability as a model for primary human pulmonary alveolar type II (ATII) cells. We also investigated a recently developed transduced type I (TT1) cell line as a model for alveolar type I (ATI) cells. Single-cell Raman spectra provide unique biomolecular fingerprints that can be used to characterize cellular phenotypes. A multivariate statistical analysis of Raman spectra indicated that the spectra of A549 and TT1 cells are characterized by significantly lower phospholipid content compared to ATII and ATI spectra because their cytoplasm contains fewer surfactant lamellar bodies. Furthermore, we found that A549 spectra are statistically more similar to ATI spectra than to ATII spectra. The spectral variation permitted phenotypic classification of cells based on Raman spectral signatures with >99% accuracy. These results suggest that A549 cells are not a good model for ATII cells, but TT1 cells do provide a reasonable model for ATI cells. The findings have far-reaching implications for the assessment of cell lines as suitable primary cellular models in live cultures. PMID:20409492
NASA Technical Reports Server (NTRS)
Greenbauer-Seng, L. A.
1983-01-01
The accurate determination of trace metals in fuels is an important requirement in much of the research into and development of alternative fuels for aerospace applications. Recognizing the detrimental effects of certain metals on fuel performance and fuel systems at the part-per-million and, in some cases, part-per-billion levels requires improved accuracy in determining these low-concentration elements. Accurate analyses are also required to ensure interchangeability of analysis results between vendor, researcher, and end user for purposes of quality control. Previous interlaboratory studies have demonstrated the inability of different laboratories to agree on the results of metal analyses, particularly at low concentration levels, yet good precision is typically reported within a laboratory. An interlaboratory study was designed to gain statistical information about the sources of variation in the reported concentrations. Five participant laboratories were used on a fee basis and were not informed of the purpose of the analyses. The effects of laboratory, analytical technique, concentration level, and ashing additive were studied in four fuel types for 20 elements of interest. The prescribed sample preparation schemes (variations of dry ashing) were used by all of the laboratories. The analytical data were statistically evaluated using a computer program for the analysis-of-variance technique.
Kamath, Padmaja; Fernandez, Alberto; Giralt, Francesc; Rallo, Robert
2015-01-01
Nanoparticles are likely to interact in real-case application scenarios with mixtures of proteins and biomolecules that adsorb onto their surface, forming the so-called protein corona. Information related to the composition of the protein corona and net cell association was collected from the literature for a library of surface-modified gold and silver nanoparticles. For each protein in the corona, sequence information was extracted and used to calculate physicochemical properties and statistical descriptors. Data cleaning and preprocessing techniques, including statistical analysis and feature selection methods, were applied to remove highly correlated, redundant, and non-significant features. A weighting technique was applied to construct specific signatures that represent the corona composition for each nanoparticle. Using this basic set of protein descriptors, a new protein corona structure-activity relationship (PCSAR) that relates net cell association with the physicochemical descriptors of the proteins that form the corona was developed and validated. The features that resulted from the feature selection were in line with already published literature, and the computational model constructed on these features had good accuracy (R²(LOO) = 0.76 and R²(LMO 25%) = 0.72) and stability, with the advantage that the fingerprints based on physicochemical descriptors were independent of the specific proteins that form the corona.
Coxen, Christopher L.; Frey, Jennifer K.; Carleton, Scott A.; Collins, Daniel P.
2017-01-01
Species distribution models can provide critical baseline distribution information for the conservation of poorly understood species. Here, we compared the performance of band-tailed pigeon (Patagioenas fasciata) species distribution models created using Maxent and derived from two separate presence-only occurrence data sources in New Mexico: 1) satellite-tracked birds and 2) observations reported in the eBird Basic Dataset. Both models had good accuracy (test AUC > 0.8 and True Skill Statistic > 0.4) and high overlap between suitability scores (I statistic 0.786) and suitable habitat patches (relative rank 0.639). Our results suggest that, at the state-wide level, eBird occurrence data can model species distributions as effectively as satellite tracking data. Climate change models for the band-tailed pigeon predict a 35% loss in the area of suitable climate by 2070 if CO2 emissions drop to 1990 levels by 2100, and a 45% loss by 2070 if current CO2 emission levels continue through the end of the century. These numbers may be conservative given the predicted increase in drought, wildfire, and forest pest impacts on the coniferous forests the species inhabits in New Mexico. The northern portion of the species' range in New Mexico is predicted to be the most viable through time.
The Effect of Technological Devices on Cervical Lordosis
Öğrenci, Ahmet; Koban, Orkun; Yaman, Onur; Dalbayrak, Sedat; Yılmaz, Mesut
2018-01-01
PURPOSE: Use of technological devices, especially mobile phones, requires cervical flexion and even cervical hyperflexion. We investigated the effect of this use on the cervical lordosis angle. MATERIAL AND METHODS: A group of 156 patients who presented with neck pain only between 2013 and 2016 and had no additional problems were included. Patients were specifically questioned about mobile phone, tablet, and other device usage. The value obtained by multiplying the years of usage by the average daily usage in hours was defined as the total usage value (average hours per day x years: hy). Cervical lordosis angles were statistically compared with the total usage value. RESULTS: In the overall ROC analysis, the cut-off value was found to be 20.5 hy. At this cut-off, the overall accuracy was good at 72.4%, and the rates of correctly identified at-risk and not-at-risk patients were high. The ROC analysis was statistically significant. CONCLUSION: The use of computing devices, especially mobile telephones, and the accompanying increase in flexion of the cervical spine indicate that cervical vertebral problems will increase even in younger people in the future. Devices should be used with attention to posture, and more ergonomic devices must be developed. PMID:29610602
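A cut-off such as the reported 20.5 hy is typically derived from an ROC analysis; the sketch below picks the threshold maximizing the Youden index on synthetic usage data (the distributions are invented, not the study's data).

    # Derive an optimal cut-off from an ROC curve via the Youden index
    # (sensitivity + specificity - 1), using scikit-learn's roc_curve.
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(1)
    usage_hy = np.concatenate([rng.normal(15, 6, 80),   # normal lordosis
                               rng.normal(28, 8, 76)])  # loss of lordosis
    label = np.concatenate([np.zeros(80), np.ones(76)])

    fpr, tpr, thresholds = roc_curve(label, usage_hy)
    youden = tpr - fpr
    cut = thresholds[np.argmax(youden)]
    print(f"optimal cut-off ~ {cut:.1f} hy (Youden index {youden.max():.2f})")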
Luo, Jiaquan; Wu, Chunyang; Huang, Zhongren; Pan, Zhimin; Li, Zhiyun; Zhong, Junlong; Chen, Yiwei; Han, Zhimin; Cao, Kai
2017-04-01
This is a cadaver specimen study to confirm a new pedicle screw (PS) entry point and trajectory for subaxial cervical PS insertion, assessing the accuracy of the lateral vertebral notch-referred PS insertion technique in the cadaveric subaxial cervical spine. Reported morphometric landmarks used to guide the surgeon in PS insertion show significant variability. In a previous study, we proposed a new technique (the so-called "notch-referred" technique) based primarily on coronal multiplane reconstruction images (CMRI) and cortical integrity after PS insertion in cadavers. However, the PS position in the cadaveric cervical segments was not confirmed radiologically. Therefore, the difference between the pedicle trajectory and the PS trajectory obtained with the notch-referred technique needed to be clarified. Twelve cadaveric cervical spines underwent PS insertion using the lateral vertebral notch-referred technique. The guideline for entry point and trajectory for each vertebra was established based on the morphometric data from our previous study. After 3.5-mm diameter screw insertion, each vertebra was dissected and the pedicle trajectory inspected by CT scan. The pedicle trajectory and PS trajectory were measured and compared in the axial plane. The perforation rate was assessed radiologically and graded from ideal to unacceptable: Grade 0 = screw in pedicle; Grade I = perforation of the pedicle wall by less than one-fourth of the screw diameter; Grade II = perforation by more than one-fourth but less than one-half of the screw diameter; Grade III = perforation by more than one-half of the screw diameter. In addition, pedicle width was compared between the acceptable and unacceptable screws. A total of 120 pedicle screws were inserted. Screw placement was grade 0 (excellent PS position) for 78.3%, grade I (good PS position) for 10.0%, grade II (fair PS position) for 8.3%, and grade III (poor PS position) for 3.3%. The overall acceptable accuracy of pedicle screws was 96.7% (Grade 0 + Grade I + Grade II), and only 3.3% had a critical breach. There was no statistical difference between the pedicle trajectory and the PS trajectory (p > 0.05). Compared with the pedicle width for acceptably inserted screws (4.4 ± 0.7 mm), that for unacceptably inserted screws (3.2 ± 0.3 mm) was significantly smaller (p < 0.05). The accuracy of notch-referred PS insertion in the cadaveric subaxial cervical spine is satisfactory.
Farrell, Mary Beth
2018-06-01
This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic, the higher the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability (P) value. Calculation of P values is beyond the scope of this review. However, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) explain the dispersion of data around the mean of a sample drawn from a population. The SD is commonly reported in the literature. A small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution, taking the shape of a bell curve. In a normal distribution, 68% of the data fall within 1 SD, 95% within 2 SDs, and 99.7% within 3 SDs. The confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being measured. A wide confidence interval indicates that if the experiment were repeated multiple times on other samples, the measured statistic would lie within a wide range of possibilities. The confidence interval relies on the SE. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
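As a worked example of the dispersion statistics reviewed here, the following sketch computes the SD, SE, and a normal-approximation 95% confidence interval for a made-up sample:

    # SD describes spread in the sample; SE describes uncertainty in the
    # sample mean; the 95% CI is mean +/- 1.96*SE (normal approximation).
    import numpy as np

    sample = np.array([4.1, 3.8, 4.5, 4.0, 3.9, 4.3, 4.2, 3.7, 4.4, 4.0])
    mean = sample.mean()
    sd = sample.std(ddof=1)            # sample standard deviation
    se = sd / np.sqrt(len(sample))     # standard error of the mean
    ci = (mean - 1.96 * se, mean + 1.96 * se)

    print(f"mean={mean:.2f}, SD={sd:.2f}, SE={se:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")

Note how the SE shrinks with sample size while the SD does not, which is why a larger study narrows the confidence interval without changing the underlying variability.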
The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1981-01-01
Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations, such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of its use are given. Engineering judgment, aided by such analytical tools, is the final arbiter of accuracy estimation.
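For a linear-in-parameters model with Gaussian noise, the Cramer-Rao bound takes the simple form cov(theta) >= sigma^2 (J^T J)^-1, where J is the sensitivity (Jacobian) matrix; the sketch below evaluates it for an assumed straight-line model and is illustrative only, not the flight-data computation of the report.

    # Cramer-Rao lower bound for the parameters of y = a + b*t observed
    # with Gaussian noise of standard deviation sigma.
    import numpy as np

    t = np.linspace(0, 10, 100)
    J = np.column_stack([np.ones_like(t), t])  # sensitivities d y / d(a, b)
    sigma = 0.5                                 # assumed measurement noise SD

    crb = sigma**2 * np.linalg.inv(J.T @ J)    # lower bound on cov(a_hat, b_hat)
    print("CR bound on SD of a, b:", np.sqrt(np.diag(crb)))

The report's point about colored noise is visible here by contrast: the simple formula assumes white noise, and correlated residuals make the uncorrected bound optimistic.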
Ikeuchi, Hiroko; Ikuta, Ko
2016-09-01
In the last decade, posterior instrumented fusion using percutaneous pedicle screws (PPSs) has been growing in popularity, and its safety and good clinical results have been reported. However, there have been few previous reports comparing the accuracy of PPS placement with that of conventional open screw insertion within a single institution. This study aimed to evaluate the accuracy of PPS placement compared with that of the conventional open technique. One hundred patients were treated with posterior instrumented fusion of the thoracic and lumbar spine from April 2008 to July 2013. Four cases of revised instrumentation surgery were excluded. In this study, the pedicle screws inserted below Th7 were investigated; a total of 455 screws were included. Two hundred and ninety-three pedicle screws were conventional open-inserted screws (O-group) and 162 screws were PPSs (P-group). We conducted a comparative study of placement accuracy between the two groups. Postoperative computed tomography scans were carried out for all patients, and the pedicle screw position was assessed according to a scoring system described by Zdichavsky et al. (Eur J Trauma 30:241-247, 2004; Eur J Trauma 30:234-240, 2004) and a classification described by Wiesner et al. (Spine 24:1599-1603, 1999). Based on Zdichavsky's scoring system, the number of grade Ia screws was 283 (96.6 %) in the O-group and 153 (94.4 %) in the P-group, whereas 5 screws (1.7 %) in the O-group and one screw (0.6 %) in the P-group were grade IIIa/IIIb. Meanwhile, pedicle wall penetrations based on the Wiesner classification were demonstrated in 20 screws (6.8 %) in the O-group and 12 screws (7.4 %) in the P-group. No neurologic complications were observed and no screws had to be replaced in either group. The PPSs could be ideally inserted without complications. There were no statistically significant differences in accuracy between conventional open insertion and PPS placement.
ERIC Educational Resources Information Center
Cockburn, Stewart
1969-01-01
The basic requirements of all good prose are clarity, accuracy, brevity, and simplicity. Especially in public prose--in which the meaning is the crux of the article or speech--concise, vigorous English demands a minimum of adjectives, a maximum use of the active voice, nouns carefully chosen, a logical argument with no labored or obscure points,…
Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models
ERIC Educational Resources Information Center
Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning
2012-01-01
The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…
Ueno, Tamio; Matuda, Junichi; Yamane, Nobuhisa
2013-03-01
To evaluate the occurrence of out-of-acceptable-range results and the accuracy of antimicrobial susceptibility tests, we applied a new statistical tool to the Inter-Laboratory Quality Control Program established by the Kyushu Quality Control Research Group. First, we defined acceptable ranges of minimum inhibitory concentration (MIC) for broth microdilution tests and of inhibitory zone diameter for disk diffusion tests on the basis of Clinical and Laboratory Standards Institute (CLSI) document M100-S21. In the analysis, more than two out-of-acceptable-range results in 20 tests were considered not allowable according to the CLSI document. Of the 90 participating laboratories, 46 (51%) experienced one or more out-of-acceptable-range results. A binomial test was then applied to each participating laboratory. The results indicated that the occurrence of out-of-acceptable-range results in 11 laboratories was significantly higher than the CLSI recommendation (allowable rate ≤ 0.05). Standard deviation indices (SDI) were calculated using the reported results and the mean and standard deviation values for the respective antimicrobial agents tested. In the evaluation of accuracy, the mean value from each laboratory was statistically compared with zero using a Student's t-test. The results revealed that 5 of the 11 laboratories above reported erroneous test results that systematically drifted toward resistance. In conclusion, our statistical approach has enabled us to detect significantly higher occurrences and sources of interpretive errors in antimicrobial susceptibility tests; this approach can therefore provide additional information to improve the accuracy of test results in clinical microbiology laboratories.
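A hedged sketch of the two statistical checks described above, assuming scipy >= 1.7 for stats.binomtest; all counts and values are illustrative, not the program's data.

    # Check 1: one-sided binomial test of whether a lab's out-of-range
    # rate exceeds the allowable 0.05.
    from scipy import stats

    out_of_range, n_tests = 4, 20
    res = stats.binomtest(out_of_range, n_tests, p=0.05, alternative="greater")
    print(f"P(>= {out_of_range} failures | rate 0.05): p-value = {res.pvalue:.4f}")

    # Check 2: standard deviation index (SDI) for a single reported result,
    # i.e. how far it sits from the peer-group mean in SD units.
    reported, peer_mean, peer_sd = 2.0, 1.0, 0.5  # e.g. log2 MIC values, invented
    sdi = (reported - peer_mean) / peer_sd
    print(f"SDI = {sdi:.1f}")  # large |SDI| suggests a systematic drift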
Homeyer, Nadine; Stoll, Friederike; Hillisch, Alexander; Gohlke, Holger
2014-08-12
Correctly ranking compounds according to their computed relative binding affinities will be of great value for decision making in the lead optimization phase of industrial drug discovery. However, the performance of existing computationally demanding binding free energy calculation methods in this context is largely unknown. We analyzed the performance of the molecular mechanics continuum solvent, the linear interaction energy (LIE), and the thermodynamic integration (TI) approach for three sets of compounds from industrial lead optimization projects. The data sets pose challenges typical for this early stage of drug discovery. None of the methods was sufficiently predictive when applied out of the box without considering these challenges. Detailed investigations of failures revealed critical points that are essential for good binding free energy predictions. When data set-specific features were considered accordingly, predictions valuable for lead optimization could be obtained for all approaches but LIE. Our findings lead to clear recommendations for when to use which of the above approaches. Our findings also stress the important role of expert knowledge in this process, not least for estimating the accuracy of prediction results by TI, using indicators such as the size and chemical structure of exchanged groups and the statistical error in the predictions. Such knowledge will be invaluable when it comes to the question which of the TI results can be trusted for decision making.
Salivary progesterone and cervical length measurement as predictors of spontaneous preterm birth.
Maged, Ahmed M; Mohesen, Mohamed; Elhalwagy, Ahmed; Abdelhafiz, Ali
2015-07-01
To evaluate the efficacy of salivary progesterone and cervical length measurement in predicting preterm birth (PTB). This prospective observational study included 240 pregnant women at gestational age (GA) 26-34 weeks, classified into two equal groups: group 1, at high risk for PTB (women with symptoms of uterine contractions or a history of one or more spontaneous preterm deliveries or second-trimester abortions), and group 2, controls. There was a highly significant difference between the two study groups regarding GA at delivery (31.3 ± 3.75 in the high-risk group versus 38.5 ± 1.3 in controls), cervical length measured by transvaginal ultrasound (24.7 ± 8.6 versus 40.1 ± 4.67) and salivary progesterone level (728.9 ± 222.3 versus 1099.9 ± 189.4; p < 0.001). There was a statistically significant difference between levels of salivary progesterone at different GAs in the high-risk group (p = 0.035) but not in the low-risk group (p = 0.492). Cervical length measurement showed a sensitivity of 71.5% with 100% specificity, 100% PPV, 69.97% NPV and an accuracy of 83%, while salivary progesterone showed a sensitivity of 84% with 90% specificity, 89.8% PPV, 85.9% NPV and an accuracy of 92.2%. Both salivary progesterone and cervical length measurement are good predictors of PTB.
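The accuracy figures quoted above all derive from a 2×2 confusion matrix. As a generic illustration (the counts below are hypothetical, not the study's data):

```python
# Diagnostic-accuracy statistics from a 2x2 confusion matrix.
def diagnostic_stats(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),            # positive predictive value
        "npv": tn / (tn + fn),            # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for a cervical-length test predicting preterm birth:
print(diagnostic_stats(tp=86, fp=0, fn=34, tn=120))
```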
Jun, Min-Ho; Kim, Soochan; Ku, Boncho; Cho, JungHee; Kim, Kahye; Yoo, Ho-Ryong; Kim, Jaeuk U
2018-01-12
We investigated segmental phase angles (PAs) in the four limbs using a multi-frequency bioimpedance analysis (MF-BIA) technique for noninvasively diagnosing diabetes mellitus. We conducted a meal tolerance test (MTT) for 45 diabetic and 45 control subjects stratified by age, sex and body mass index (BMI). HbA1c and the waist-to-hip-circumference ratio (WHR) were measured before meal intake, and we measured glucose levels and MF-BIA PAs 5 times over the 2 hours after meal intake. We employed a t-test to examine statistical significance and the area under the curve (AUC) of the receiver operating characteristic (ROC) to test the classification accuracy of segmental PAs at 5, 50, and 250 kHz. Segmental PAs were independent of HbA1c and glucose levels and of their changes caused by the MTT. However, the segmental PAs were good indicators for noninvasively screening diabetes. In particular, leg PAs in females and arm PAs in males showed the best classification accuracy (AUC = 0.827 for males, AUC = 0.845 for females). Lastly, we introduced the PA at maximum reactance (PAmax), which is independent of measurement frequency and can be obtained from any MF-BIA device using a Cole-Cole model, showing potential as a useful biomarker for diabetes.
NASA Astrophysics Data System (ADS)
Daniel, Amuthachelvi; Prakasarao, Aruna; Ganesan, Singaravelu
2018-02-01
The molecular-level changes associated with oncogenesis precede the morphological changes in cells and tissues; molecular-level diagnosis would therefore promote early diagnosis of the disease. Raman spectroscopy is capable of providing specific spectral signatures of the various biomolecules present in cells and tissues under various pathological conditions. The aim of this work was to develop a non-linear multi-class statistical methodology for discriminating normal, neoplastic and malignant cells/tissues. The tissues were classified as normal, pre-malignant and malignant by employing Principal Component Analysis followed by an Artificial Neural Network (PC-ANN), with an overall accuracy of 99%. Further, to gain insight into the quantitative biochemical composition of the normal, neoplastic and malignant tissues, a linear combination of major biochemical reference spectra was fitted to the measured Raman spectra of the tissues using a non-negative least-squares technique. This technique confirms the changes in major biomolecules such as lipids, nucleic acids, actin, glycogen and collagen associated with the different pathological conditions. To study the efficacy of this technique in comparison with histopathology, we used Principal Component Analysis followed by Linear Discriminant Analysis (PC-LDA) to discriminate well differentiated, moderately differentiated and poorly differentiated squamous cell carcinoma, with an accuracy of 94.0%. The results demonstrate that Raman spectroscopy has the potential to complement the established technique of histopathology.
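The non-negative least-squares decomposition step can be sketched as follows (a minimal illustration, not the authors' pipeline; the reference spectra and measurement here are random placeholders):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_wavenumbers = 500
# Columns: reference spectra of major biochemicals (e.g. lipid, nucleic acid,
# actin, glycogen, collagen) -- random stand-ins for measured pure spectra.
basis = rng.random((n_wavenumbers, 5))
true_weights = np.array([0.4, 0.2, 0.1, 0.2, 0.1])
measured = basis @ true_weights + 0.01 * rng.random(n_wavenumbers)

# Fit the measured spectrum as a non-negative combination of the references.
coeffs, residual_norm = nnls(basis, measured)
print("fitted contributions:", coeffs.round(3))
```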
Bayesian-MCMC-based parameter estimation of stealth aircraft RCS models
NASA Astrophysics Data System (ADS)
Xia, Wei; Dai, Xiao-Xia; Feng, Yuan
2015-12-01
When modeling a stealth aircraft with low RCS (Radar Cross Section), conventional parameter estimation methods may cause a deviation from the actual distribution, because the characteristic parameters are estimated by directly calculating the statistics of the RCS. The Bayesian-Markov Chain Monte Carlo (Bayesian-MCMC) method is introduced herein to estimate the parameters so as to improve the fitting accuracy of fluctuation models. The parameter estimations of the lognormal and the Legendre polynomial models are reformulated in the Bayesian framework, and the MCMC algorithm is then adopted to calculate the parameter estimates. Numerical results show that the distribution curves obtained by the proposed method exhibit improved consistency with the actual ones, compared with those fitted by the conventional method. The fitting accuracy could be improved by no less than 25% for both fluctuation models, which implies that the Bayesian-MCMC method might be a good candidate among optimal parameter estimation methods for stealth aircraft RCS models. Project supported by the National Natural Science Foundation of China (Grant No. 61101173), the National Basic Research Program of China (Grant No. 613206), the National High Technology Research and Development Program of China (Grant No. 2012AA01A308), the State Scholarship Fund of the China Scholarship Council (CSC), the Oversea Academic Training Funds, and the University of Electronic Science and Technology of China (UESTC).
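A minimal Metropolis-Hastings sampler conveys the idea of estimating the lognormal model's parameters in a Bayesian framework. This is an illustration only; the flat prior, proposal widths, and synthetic data are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.lognormal(mean=-2.0, sigma=0.5, size=200)  # synthetic RCS samples
log_data = np.log(data)

def log_posterior(mu, sigma):
    # Lognormal log-likelihood (up to a constant) with a flat prior.
    if sigma <= 0:
        return -np.inf
    return -len(data) * np.log(sigma) - np.sum((log_data - mu) ** 2) / (2 * sigma**2)

samples, current = [], (0.0, 1.0)
lp = log_posterior(*current)
for _ in range(20000):
    prop = (current[0] + 0.05 * rng.standard_normal(),
            current[1] + 0.05 * rng.standard_normal())
    lp_prop = log_posterior(*prop)
    if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject step
        current, lp = prop, lp_prop
    samples.append(current)

print("posterior means (mu, sigma):", np.mean(samples[5000:], axis=0))
```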
Fully accelerating quantum Monte Carlo simulations of real materials on GPU clusters
NASA Astrophysics Data System (ADS)
Esler, Kenneth
2011-03-01
Quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles, combining very high accuracy with extreme parallel scalability. By solving the many-body Schrödinger equation through a stochastic projection, it achieves greater accuracy than mean-field methods and better scaling with system size than quantum chemical methods, enabling scientific discovery across a broad spectrum of disciplines. In recent years, graphics processing units (GPUs) have provided a high-performance and low-cost new approach to scientific computing, and GPU-based supercomputers are now among the fastest in the world. The multiple forms of parallelism afforded by QMC algorithms make the method an ideal candidate for acceleration in the many-core paradigm. We present the results of porting the QMCPACK code to run on GPU clusters using the NVIDIA CUDA platform. Using mixed precision on GPUs and MPI for intercommunication, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core CPUs alone, while reproducing the double-precision CPU results within statistical error. We discuss the algorithm modifications necessary to achieve good performance on this heterogeneous architecture and present the results of applying our code to molecules and bulk materials. Supported by the U.S. DOE under Contract No. DOE-DE-FG05-08OR23336 and by the NSF under No. 0904572.
Narrative Skills, Gender, Culture, and Children's Long-Term Memory Accuracy of a Staged Event
ERIC Educational Resources Information Center
Klemfuss, J. Zoe; Wang, Qi
2017-01-01
This study examined the extent to which school-aged children's general narrative skills provide cognitive benefits for accurate remembering or enable good storytelling that undermines memory accuracy. European American and Chinese American 6-year-old boys and girls (N = 114) experienced a staged event in the laboratory and were asked to tell a…
Good Practices for Learning to Recognize Actions Using FV and VLAD.
Wu, Jianxin; Zhang, Yu; Lin, Weiyao
2016-12-01
High dimensional representations such as Fisher vectors (FV) and vectors of locally aggregated descriptors (VLAD) have shown state-of-the-art accuracy for action recognition in videos. The high dimensionality, on the other hand, also causes computational difficulties when scaling up to large-scale video data. This paper makes three lines of contributions to learning to recognize actions using high dimensional representations. First, we reviewed several existing techniques that improve upon FV or VLAD in image classification, and performed extensive empirical evaluations to assess their applicability for action recognition. Our analyses of these empirical results show that normality and bimodality are essential to achieve high accuracy. Second, we proposed a new pooling strategy for VLAD and three simple, efficient, and effective transformations for both FV and VLAD. Both proposed methods have shown higher accuracy than the original FV/VLAD method in extensive evaluations. Third, we proposed and evaluated new feature selection and compression methods for the FV and VLAD representations. This strategy uses only 4% of the storage of the original representation, but achieves comparable or even higher accuracy. Based on these contributions, we recommend a set of good practices for action recognition in videos for practitioners in this field.
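For readers unfamiliar with VLAD, the basic encoding the paper builds on can be sketched in a few lines (a toy illustration with random descriptors and codebook; the paper's proposed pooling strategy and transformations are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(6)
descriptors = rng.standard_normal((1000, 64))   # local descriptors of one video
centers = rng.standard_normal((16, 64))         # k-means codebook (precomputed)

# Assign each descriptor to its nearest center, then aggregate residuals.
assign = ((descriptors[:, None, :] - centers[None]) ** 2).sum(-1).argmin(axis=1)
vlad = np.zeros_like(centers)
for k in range(len(centers)):
    members = descriptors[assign == k]
    if len(members):
        vlad[k] = (members - centers[k]).sum(axis=0)

v = vlad.ravel()
v = np.sign(v) * np.sqrt(np.abs(v))   # signed square-root (power) normalization
v /= np.linalg.norm(v) + 1e-12        # global L2 normalization
print(v.shape)                        # one (16 * 64)-dimensional representation
```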
Figueroa, José; Guarachi, Juan Pablo; Matas, José; Arnander, Magnus; Orrego, Mario
2016-04-01
Computed tomography (CT) is widely used to assess component rotation in patients with poor results after total knee arthroplasty (TKA). The purpose of this study was to simultaneously determine the accuracy and reliability of CT in measuring TKA component rotation. TKA components were implanted in dry-bone models and assigned to two groups. The first group (n = 7) had variable femoral component rotations, and the second group (n = 6) had variable tibial tray rotations. CT images were then used to assess component rotation. Accuracy of CT rotational assessment was determined by mean difference, in degrees, between implanted component rotation and CT-measured rotation. Intraclass correlation coefficient (ICC) was applied to determine intra-observer and inter-observer reliability. Femoral component accuracy showed a mean difference of 2.5° and the tibial tray a mean difference of 3.2°. There was good intra- and inter-observer reliability for both components, with a femoral ICC of 0.8 and 0.76, and tibial ICC of 0.68 and 0.65, respectively. CT rotational assessment accuracy can differ from true component rotation by approximately 3° for each component. It does, however, have good inter- and intra-observer reliability.
Shah, Shabir A; Naqash, Talib Amin; Padmanabhan, T V; Subramanium; Lambodaran; Nazir, Shazana
2014-03-01
The sole objective of the casting procedure is to provide a metallic duplication of missing tooth structure with as great accuracy as possible. The ability to produce well-fitting castings requires strict adherence to certain fundamentals. A study was undertaken to comparatively evaluate the effect on casting accuracy of subjecting invested wax patterns to burnout after different time intervals. The effect on casting accuracy of using a metal ring placed into a preheated burnout furnace versus using a split ring was also evaluated. The readings obtained were tabulated and subjected to statistical analysis.
Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G
2016-05-09
The aim of this study was to evaluate the suitability of various statistics as measures of the degree of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated in a randomized block design with four replications. Ten statistics estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy and the F-test value for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. By these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy and the F-test value for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) for evaluating experimental precision in trials with cowpea genotypes.
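Selective accuracy is often computed directly from the F-test value for genotypes; the sketch below assumes the commonly used formula SA = sqrt(1 - 1/F) (whether the study used exactly this form is an assumption here):

```python
import numpy as np

def selective_accuracy(f_genotype):
    # Assumed formula: SA = sqrt(1 - 1/F) for F > 1, else 0.
    return float(np.sqrt(1.0 - 1.0 / f_genotype)) if f_genotype > 1 else 0.0

for f in (1.5, 3.0, 10.0):
    print(f, round(selective_accuracy(f), 3))   # precision rises with F
```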
Blind image quality assessment based on aesthetic and statistical quality-aware features
NASA Astrophysics Data System (ADS)
Jenadeleh, Mohsen; Masaeli, Mohammad Masood; Moghaddam, Mohsen Ebrahimi
2017-07-01
The main goal of image quality assessment (IQA) methods is the emulation of human perceptual image quality judgments. Therefore, the correlation between the objective scores of these methods and human perceptual scores is considered their performance metric. Human judgment of image quality implicitly includes many factors, such as aesthetics, semantics, context, and various types of visual distortions. The main idea of this paper is to use a host of features commonly employed in image aesthetics assessment to improve the accuracy of blind image quality assessment (BIQA) methods. We propose an approach that enriches the features of BIQA methods by integrating a host of image aesthetics features with natural image statistics features derived from multiple domains. The proposed features were used to augment five different state-of-the-art BIQA methods that use natural scene statistics features. Experiments were performed on seven benchmark image quality databases and showed significant improvement in the accuracy of the methods.
The control of manual entry accuracy in management/engineering information systems, phase 1
NASA Technical Reports Server (NTRS)
Hays, Daniel; Nocke, Henry; Wilson, Harold; Woo, John, Jr.; Woo, June
1987-01-01
It was shown that clerical personnel can be tested for proofreading performance under simulated industrial conditions. A statistical study showed that proofreading errors follow an extreme value probability distribution. The study also showed that innovative man/machine interfaces can be developed to improve and control accuracy during data entry.
An improved method for determining force balance calibration accuracy
NASA Technical Reports Server (NTRS)
Ferris, Alice T.
1993-01-01
The results of an improved statistical method used at Langley Research Center for determining and stating the accuracy of a force balance calibration are presented. The application of the method for initial loads, initial load determination, auxiliary loads, primary loads, and proof loads is described. The data analysis is briefly addressed.
ERIC Educational Resources Information Center
Zhang, Bo
2010-01-01
This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…
Søreide, Kjetil; Kørner, Hartwig; Søreide, Jon Arne
2011-01-01
In surgical research, the ability to correctly distinguish one type of condition or specific outcome from another is of great importance for variables influencing clinical decision making. Receiver operating characteristic (ROC) curve analysis is a useful tool for assessing the diagnostic accuracy of any variable with a continuous spectrum of results. To rule a disease state in or out with a given test, the test results are usually made binary, with arbitrarily chosen cut-offs defining disease versus health or grading disease severity. In the postgenomic era, the bench-to-bedside translation of biomarkers in various tissues and body fluids requires appropriate analytical tools. In contrast to predetermining a cut-off value to define disease, the advantages of ROC analysis include the ability to test diagnostic accuracy across the entire range of variable scores and test outcomes. In addition, ROC analysis readily allows visual and statistical comparisons across tests or scores, and it is favored because it is thought to be independent of the prevalence of the condition under investigation. ROC analysis is used in various surgical settings and across disciplines, including cancer research, biomarker assessment, imaging evaluation, and assessment of risk scores. With appropriate use, ROC curves may help identify the most appropriate cut-off value for clinical and surgical decision making and avoid the confounding effects seen with subjective ratings. ROC curve results should always be put in perspective, because a good classifier does not guarantee the expected clinical outcome. In this review, we discuss the fundamental roles, suggested presentation, potential biases, and interpretation of ROC analysis in surgical research.
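The core of the analysis is simple to reproduce. A minimal sketch with scikit-learn's `roc_curve` and `roc_auc_score`; the biomarker scores and disease labels below are synthetic placeholders:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
labels = np.concatenate([np.zeros(100), np.ones(100)])   # 0 = healthy, 1 = disease
scores = np.concatenate([rng.normal(0.0, 1.0, 100),      # healthy biomarker values
                         rng.normal(1.5, 1.0, 100)])     # diseased biomarker values

# Accuracy is examined across the whole range of candidate cut-offs,
# not at a single predetermined threshold.
fpr, tpr, thresholds = roc_curve(labels, scores)
print("AUC:", roc_auc_score(labels, scores))
```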
Establishing school day pedometer step count cut-points using ROC curves in low-income children.
Burns, Ryan D; Brusseau, Timothy A; Fu, You; Hannon, James C
2016-05-01
Previous research has not established pedometer step count cut-points that discriminate children who meet school day physical activity recommendations against a tri-axial ActiGraph accelerometer criterion. The purpose of this study was to determine step count cut-points associated with 30 min of school day moderate-to-vigorous physical activity (MVPA) in school-aged children. Participants included 1053 school-aged children (mean age = 8.4 ± 1.8 years) recruited from three low-income schools in the state of Utah in the U.S. Physical activity was assessed using Yamax DigiWalker CW600 pedometers and ActiGraph wGT3X-BT triaxial accelerometers worn concurrently during school hours. Data were collected at each school during the 2014-2015 school year. Receiver operating characteristic (ROC) curves were used to determine pedometer step count cut-points associated with at least 30 min of MVPA during school hours. Cut-points were determined using the maximum Youden's J statistic (Jmax). For the total sample, the area under the curve (AUC) was 0.77 (p < 0.001) with a pedometer cut-point of 5505 steps (Jmax = 0.46; sensitivity = 63%; specificity = 84%; accuracy = 76%). Step counts showed greater diagnostic ability in girls (AUC = 0.81, p < 0.001; cut-point = 5306 steps; accuracy = 78.8%) than in boys (AUC = 0.72, p < 0.01; cut-point = 5786 steps; accuracy = 71.4%). Pedometer step counts showed good diagnostic ability in girls and fair diagnostic ability in boys for discriminating children who met at least 30 min of MVPA during school hours.
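Selecting the cut-point that maximizes Youden's J, as done above, can be expressed compactly (simulated step counts and MVPA labels, not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)
steps = np.concatenate([rng.normal(4500, 1200, 500),    # did not reach 30 min MVPA
                        rng.normal(6500, 1200, 500)])   # reached 30 min MVPA
met_mvpa = np.concatenate([np.zeros(500), np.ones(500)])

fpr, tpr, thresholds = roc_curve(met_mvpa, steps)
j = tpr - fpr                        # Youden's J = sensitivity + specificity - 1
best = j.argmax()
print(f"cut-point: {thresholds[best]:.0f} steps, J = {j[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```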
Precision and Accuracy of a Digital Impression Scanner in Full-Arch Implant Rehabilitation.
Pesce, Paolo; Pera, Francesco; Setti, Paolo; Menini, Maria
To evaluate the accuracy and precision of a digital scanner used to scan four implants positioned according to an immediate loading implant protocol and to assess the accuracy of an aluminum framework fabricated from a digital impression. Five master casts reproducing different edentulous maxillae with four tilted implants were used. Four scan bodies were screwed onto the low-profile abutments, and a digital intraoral scanner was used to perform five digital impressions of each master cast. To assess trueness, a metal framework of the best digital impression was produced with computer-aided design/computer-assisted manufacture (CAD/CAM) technology and passive fit was assessed with the Sheffield test. Gaps between the frameworks and the implant analogs were measured with a stereomicroscope. To assess precision, three-dimensional (3D) point cloud processing software was used to measure the deviations between the five digital impressions of each cast by producing a color map. The deviation values were grouped in three classes, and differences were assessed between class 2 (representing lower discrepancies) and the assembled classes 1 and 3 (representing the higher negative and positive discrepancies, respectively). The frameworks showed a mean gap of < 30 μm (range: 2 to 47 μm). A statistically significant difference was found between the two groups by the 3D point cloud software, with higher frequencies of points in class 2 than in grouped classes 1 and 3 (P < .001). Within the limits of this in vitro study, it appears that a digital impression may represent a reliable method for fabricating full-arch implant frameworks with good passive fit when tilted implants are present.
Kassamali, Rahil Hussein; Hoey, Edward T D; Ganeshan, Arul; Littlehales, Tracey
2013-01-01
This feasibility study aimed to obtain initial data on the performance of a novel noncontrast flow-spoiled magnetic resonance (MR) angiography technique (fresh-blood imaging [FBI]) compared with gadolinium-enhanced MR (Gd-MR) angiography for evaluation of the aorto-iliac and lower extremity arteries. Thirteen patients with suspected lower extremity arterial disease who had undergone Gd-MR angiography and FBI at the same session were randomly included in the study. FBI was performed using an ECG-gated flow-spoiled T2-weighted half-Fourier fast spin-echo sequence. For analysis, the aorto-iliac and lower limb arteries were divided into 18 anatomical segments. Two blinded readers individually graded the image quality of FBI and assessed the presence and severity of any stenotic lesions; a similar analysis was performed for the Gd-MR angiography images. A total of 385 arterial segments were analyzed; 34 segments were excluded due to degraded image quality (1.3% of Gd-MR vs. 8% of FBI-MR angiography images). FBI-MR angiography had accuracy comparable to Gd-MR angiography for assessment of the above-knee vessels, with high kappa statistics (large arteries, 0.91; small arteries, 0.86), high sensitivity (large arteries, 98.1%; small arteries, 88.6%) and high specificity (large arteries, 97.2%; small arteries, 97.6%), using Gd-MR angiography as the gold standard. Initial results show good agreement between FBI-MR angiography and Gd-MR angiography in the diagnosis of peripheral arterial disease, making FBI a potential alternative in patients with renal impairment. FBI showed the highest accuracy in the above-knee vessels. Technological refinements are required to improve accuracy for assessing the calf and pedal vessels.
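Agreement between the two techniques is summarized with Cohen's kappa; a minimal sketch (the per-segment grades below are toy data, not the study's):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-segment stenosis grades from the two techniques:
gd_mr  = [0, 1, 2, 0, 1, 2, 0, 0, 1, 2, 2, 1]   # gold standard
fbi_mr = [0, 1, 2, 0, 1, 1, 0, 0, 1, 2, 2, 1]

print("kappa:", round(cohen_kappa_score(gd_mr, fbi_mr), 3))  # 1.0 = perfect agreement
```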
Zhang, Li; Zhao, Xinming; Ouyang, Han; Wang, Shuang; Zhou, Chunwu
2016-08-22
The purpose of this study was to investigate the diagnostic value of 3.0-T proton magnetic resonance spectroscopy (¹H MRS) in primary malignant hepatic tumors and to compare the effect of ¹H MRS on the diagnostic accuracy for liver-occupying lesions between junior and experienced radiologists. This study included 50 healthy volunteers and 40 consecutive patients (50 lesions); informed consent was obtained from each subject. Images were obtained on a clinical whole-body 3.0-T MR system, and point-resolved spectroscopy was used to obtain the spectroscopic images. All conventional images were reviewed blindly by a junior radiologist and an experienced radiologist, respectively. The choline-containing compounds peak area (CCC-A) was measured with SAGE software, and the choline-containing compound ratio (∆CCC) was calculated. The efficacy of CCC-A and ∆CCC in the diagnosis of primary malignant hepatic tumors was determined by plotting receiver operating characteristic (ROC) curves, and we compared the effects of MRS on the diagnostic accuracy for liver-occupying lesions between the junior and experienced radiologists. A significant increase in mean CCC-A was observed in malignant tumors compared with benign tumors. The ROC curve showed that ∆CCC had high discriminatory ability in diagnosing primary malignant hepatic tumors, with a sensitivity and specificity of 94.3% and 93.3%, respectively. The area under the curve (AUC) for ∆CCC was 0.97, larger than that of both the junior and the experienced radiologist, although the difference was statistically significant only between ∆CCC and the junior radiologist (P = 0.01). ¹H MRS with ∆CCC demonstrates good efficacy in diagnosing primary malignant hepatic tumors. The technique improves the accuracy of diagnosing liver-occupying lesions, particularly for junior radiologists.
Hinz, Antje; Fischer, Andrew T
2011-10-01
To compare the accuracy of ultrasonographic and radiographic examination for evaluation of articular lesions in horses. Cross-sectional study. Horses (n = 137) with articular lesions. Radiographic and ultrasonographic examinations of the affected joint(s) were performed before diagnostic or therapeutic arthroscopic surgery. Findings were recorded and compared to lesions identified during arthroscopy. In 254 joints, 432 lesions were identified by arthroscopy. The overall accuracy was 82.9% for ultrasonography and 62.2% for radiography (P < .0001), with a sensitivity of 91.4% for ultrasonography and 66.7% for radiography (P < .0001). The difference in specificity was not statistically significant (P = .2628). The negative predictive value was 31.5% for ultrasonography and 13.2% for radiography (P = .0022); the difference in positive predictive value was not statistically significant (P = .3898). The accuracy of ultrasonography and radiography for left versus right joints was equal and corresponded with the overall results. Ultrasonographic evaluation of articular lesions was more accurate than radiographic evaluation.
Thermocouple Calibration and Accuracy in a Materials Testing Laboratory
NASA Technical Reports Server (NTRS)
Lerch, B. A.; Nathal, M. V.; Keller, D. J.
2002-01-01
A consolidation of information is provided that can be used to define procedures for enhancing and maintaining the accuracy of temperature measurements in materials testing laboratories. These studies were restricted to type R and type K thermocouples (TCs) tested in air. Thermocouple accuracies, as influenced by calibration methods, thermocouple stability, and manufacturers' tolerances, were all quantified in terms of statistical confidence intervals. By calibrating specific TCs, the gain in accuracy can be as great as 6 °C, five times better than relying on manufacturers' tolerances. The results emphasize strict reliance on the defined testing protocol and the need to establish recalibration frequencies in order to maintain these levels of accuracy.
Levine, Judah
2016-01-01
A method is presented for synchronizing the time of a clock to a remote time standard when the channel connecting the two has significant delay variation that can be described only statistically. The method compares the Allan deviation of the channel fluctuations to the free-running stability of the local clock, and computes the optimum interval between requests based on one of three selectable requirements: (1) choosing the highest possible accuracy, (2) choosing the best tradeoff of cost vs. accuracy, or (3) minimizing the number of requests needed to realize a specific accuracy. Once the interval between requests is chosen, the final step is to steer the local clock based on the received data. A typical adjustment algorithm, which supports both the statistical considerations based on the Allan deviation comparison and the timely detection of errors, is included as an example.
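The Allan deviation that drives the comparison can be computed from a fractional-frequency series in a few lines. Below is a minimal non-overlapping estimator on simulated white noise; production code would more likely use a dedicated library such as allantools.

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation at averaging factor m
    for fractional-frequency data y sampled at interval tau0."""
    n = len(y) // m
    y_avg = y[: n * m].reshape(n, m).mean(axis=1)   # averages over tau = m * tau0
    return np.sqrt(0.5 * np.mean(np.diff(y_avg) ** 2))

rng = np.random.default_rng(4)
y = rng.standard_normal(10000)          # white frequency noise
for m in (1, 10, 100):
    print(m, allan_deviation(y, m))     # falls roughly as 1/sqrt(m)
```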
Klink, Thorsten; Geiger, Julia; Both, Marcus; Ness, Thomas; Heinzelmann, Sonja; Reinhard, Matthias; Holl-Ulrich, Konstanze; Duwendag, Dirk; Vaith, Peter; Bley, Thorsten Alexander
2014-12-01
To assess the diagnostic accuracy of contrast material-enhanced magnetic resonance (MR) imaging of superficial cranial arteries in the initial diagnosis of giant cell arteritis (GCA). Following institutional review board approval and informed consent, 185 patients suspected of having GCA were included in a prospective three-university medical center trial. GCA was diagnosed or excluded clinically in all patients (reference standard [final clinical diagnosis]). In 53.0% of patients (98 of 185), temporal artery biopsy (TAB) was performed (diagnostic standard). Two observers independently evaluated contrast-enhanced T1-weighted MR images of superficial cranial arteries by using a four-point scale. Diagnostic accuracy, involvement pattern, and the effects of systemic corticosteroid (sCS) therapy were assessed in comparison with the reference standard (total study cohort) and separately in comparison with the diagnostic standard (TAB subcohort). Statistical analysis included diagnostic accuracy parameters, interobserver agreement, and receiver operating characteristic analysis. Sensitivity of MR imaging was 78.4% and specificity was 90.4% for the total study cohort; sensitivity was 88.7% and specificity was 75.0% for the TAB subcohort (first observer). Diagnostic accuracy was comparable for both observers, with good interobserver agreement (TAB subcohort, κ = 0.718; total study cohort, κ = 0.676). MR imaging scores were significantly higher in patients with GCA-positive results than in patients with GCA-negative results (TAB subcohort and total study cohort, P < .001). Diagnostic accuracy of MR imaging was high in patients without sCS therapy and in those receiving sCS therapy for 5 days or fewer (area under the curve, ≥ 0.9), and was decreased in patients receiving sCS therapy for 6-14 days. In 56.5% of patients with TAB-positive results (35 of 62), MR imaging displayed symmetrical and simultaneous inflammation of arterial segments. MR imaging of superficial cranial arteries is accurate in the initial diagnosis of GCA. Sensitivity probably decreases after more than 5 days of sCS therapy; thus, imaging should not be delayed. Clinical trial registration no. DRKS00000594.
Genotyping by sequencing for genomic prediction in a soybean breeding population.
Jarquín, Diego; Kocak, Kyle; Posadas, Luis; Hyma, Katie; Jedlicka, Joseph; Graef, George; Lorenz, Aaron
2014-08-29
Advances in genotyping technology, such as genotyping by sequencing (GBS), are making genomic prediction more attractive as a way to reduce breeding cycle times and the costs associated with phenotyping. Genomic prediction and selection have been studied in several crop species, but no reports exist in soybean. The objectives of this study were (i) to evaluate the prospects for genomic selection using GBS in a typical soybean breeding program and (ii) to evaluate the effect of GBS marker selection and imputation on genomic prediction accuracy. To achieve these objectives, a set of soybean lines sampled from the University of Nebraska Soybean Breeding Program were genotyped using GBS and evaluated for yield and other agronomic traits at multiple Nebraska locations. Genotyping by sequencing scored 16,502 single nucleotide polymorphisms (SNPs) with minor-allele frequency (MAF) > 0.05 and percentage of missing values ≤ 5% on 301 elite soybean breeding lines. When SNPs with up to 80% missing values were included, 52,349 SNPs were scored. Prediction accuracy for grain yield, assessed using cross-validation, was estimated to be 0.64, indicating good potential for using genomic selection for grain yield in soybean. Filtering SNPs based on missing-data percentage had little to no effect on prediction accuracy, especially when random forest imputation was used to impute missing values. The highest accuracies were observed when random forest imputation was used on all SNPs, but differences were not significant. A standard additive G-BLUP model was robust; modeling additive-by-additive epistasis did not provide any improvement in prediction accuracy. The effect of training population size on accuracy began to plateau around 100, but accuracy climbed steadily until the largest possible size was used in this analysis. Including only SNPs with MAF > 0.30 provided higher accuracies when training populations were smaller. Using GBS for genomic prediction in soybean holds good potential to expedite genetic gain, and our results suggest that standard additive G-BLUP models can be used on unfiltered, imputed GBS data without loss of accuracy.
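The cross-validated prediction step can be approximated with ridge regression on the SNP matrix, which is equivalent in spirit to an additive G-BLUP (an illustrative sketch on simulated genotypes and phenotypes, not the study's pipeline):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)
snps = rng.integers(0, 3, size=(301, 2000)).astype(float)  # toy 0/1/2 genotypes
effects = rng.normal(0, 0.05, 2000)
grain_yield = snps @ effects + rng.normal(0, 1.0, 301)     # simulated phenotype

# Accuracy = correlation between cross-validated predictions and observations.
pred = cross_val_predict(Ridge(alpha=100.0), snps, grain_yield, cv=5)
print("prediction accuracy r =", np.corrcoef(pred, grain_yield)[0, 1].round(2))
```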
Measurement system with high accuracy for laser beam quality.
Ke, Yi; Zeng, Ciling; Xie, Peiyuan; Jiang, Qingshan; Liang, Ke; Yang, Zhenyu; Zhao, Ming
2015-05-20
Presently, most laser beam quality measurement systems collimate the optical path manually, with low efficiency and low repeatability. To solve these problems, this paper proposes a new collimation method to improve the reliability and accuracy of the measurement results. The system accurately controls the position of the mirror to change the laser beam propagation direction, ensuring that the beam is perpendicularly incident on the photosurface of the camera. The experimental results show that the proposed system has good repeatability and that the measurement deviation of the M2 factor is less than 0.6%.
Theoretical study of surface plasmon resonance sensors based on 2D bimetallic alloy grating
NASA Astrophysics Data System (ADS)
Dhibi, Abdelhak; Khemiri, Mehdi; Oumezzine, Mohamed
2016-11-01
A high-performance surface plasmon resonance (SPR) sensor based on a 2D alloy grating is proposed. The grating consists of homogeneous alloys of formula MxAg1-x, where M is gold, copper, platinum or palladium. Compared with SPR sensors based on a pure metal, the angular-interrogation sensor with silver exhibits a sharper reflectivity dip (i.e., a larger depth-to-width ratio), which provides high detection accuracy, whereas the sensor based on gold exhibits the broadest dips and the highest sensitivity. The detection accuracy of an SPR sensor based on a metal alloy is enhanced by increasing the silver composition. In addition, a silver composition of around 0.8 improves the sensitivity and quality of the SPR sensor relative to a pure metal. Numerical simulations based on rigorous coupled-wave analysis (RCWA) show that the sensor based on a metal alloy not only has high sensitivity and high detection accuracy, but also exhibits good linearity and good quality.
Diagnostic accuracy for major depression in multiple sclerosis using self-report questionnaires
Fischer, Anja; Fischer, Marcus; Nicholls, Robert A; Lau, Stephanie; Poettgen, Jana; Patas, Kostas; Heesen, Christoph; Gold, Stefan M
2015-01-01
Objective: Multiple sclerosis and major depressive disorder frequently co-occur, but depression often remains undiagnosed in this population. Self-rated depression questionnaires are a good option where clinician-based standardized diagnostics are not feasible. However, there is a paucity of data on the diagnostic accuracy of self-report measures for depression in multiple sclerosis (MS), and head-to-head comparisons of common questionnaires are largely lacking. This could be particularly relevant for high-risk patients with depressive symptoms. Here, we compare the diagnostic accuracy of the Beck Depression Inventory (BDI) and the 30-item version of the Inventory of Depressive Symptomatology Self-Rated (IDS-SR30) for major depressive disorder (MDD) against diagnosis by a structured clinical interview. Methods: Patients reporting depressive symptoms completed the BDI and the IDS-SR30 and underwent diagnostic assessment (Mini International Neuropsychiatric Interview, M.I.N.I.). Receiver operating characteristic analyses were performed, providing error estimates and false-positive/negative rates for suggested thresholds. Results: Data from n = 31 MS patients were available. The BDI and IDS-SR30 total scores were significantly correlated (r = 0.82). The IDS-SR30 total score, its cognitive subscore, and the BDI showed excellent to good accuracy (area under the curve (AUC) 0.86, 0.91, and 0.85, respectively). Conclusion: Both the IDS-SR30 and the BDI are useful for quantifying depressive symptoms, showing good sensitivity and specificity. The IDS-SR30 cognitive subscale may be useful as a screening tool and for quantifying affective/cognitive depressive symptomatology.
Comment on the asymptotics of a distribution-free goodness of fit test statistic.
Browne, Michael W; Shapiro, Alexander
2015-03-01
In a recent article Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed that a proof by Browne (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) of the asymptotic distribution of a goodness of fit test statistic is incomplete because it fails to prove that the orthogonal component function employed is continuous. Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed how Browne's proof can be completed satisfactorily but this required the development of an extensive and mathematically sophisticated framework for continuous orthogonal component functions. This short note provides a simple proof of the asymptotic distribution of Browne's (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) test statistic by using an equivalent form of the statistic that does not involve orthogonal component functions and consequently avoids all complicating issues associated with them.
Mesial Temporal Sclerosis: Accuracy of NeuroQuant versus Neuroradiologist.
Azab, M; Carone, M; Ying, S H; Yousem, D M
2015-08-01
We sought to compare the accuracy of a fully automated volumetric computer assessment of hippocampal volume asymmetry with neuroradiologists' interpretations of the temporal lobes for mesial temporal sclerosis (MTS). Detecting MTS is important in the evaluation of patients with temporal lobe epilepsy, as it often guides surgical intervention; one feature of MTS is hippocampal volume loss. Electronic medical record and researcher reports of scans of patients with proved mesial temporal sclerosis were compared with volumetric assessment by an FDA-approved software package, NeuroQuant, for detection of MTS in 63 patients. The degree of volumetric asymmetry was analyzed to determine the neuroradiologists' threshold for detecting right-left asymmetry in temporal lobe volumes. Thirty-six patients had left-lateralized MTS, 25 had right-lateralized MTS, and 2 had bilateral MTS. The estimated accuracy of the neuroradiologists was 72.6% with a κ statistic of 0.512 (95% CI, 0.315-0.710; moderate agreement, P < 3 × 10⁻⁶), whereas the estimated accuracy of NeuroQuant was 79.4% with a κ statistic of 0.588 (95% CI, 0.388-0.787; moderate agreement, P < 2 × 10⁻⁶). This discrepancy in accuracy was not statistically significant. When at least a 5%-10% volume discrepancy between temporal lobes was present, the neuroradiologists detected it 75%-80% of the time. As a stand-alone, fully automated software program that can process temporal lobe volume in 5-10 minutes, NeuroQuant compares favorably with trained neuroradiologists in predicting the side of mesial temporal sclerosis. Neuroradiologists can often detect even small temporal lobe volumetric changes visually.
Reflexion on linear regression trip production modelling method for ensuring good model quality
NASA Astrophysics Data System (ADS)
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport modelling is important. For certain cases the conventional model still has to be used, for which a good trip production model is essential. A good model can only be obtained from a good sample. Two basic principles of good sampling are that the sample must be capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. These principles do not yet seem to be well understood or applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method that ensures model quality. The results are as follows. Statistics provides a method for calculating the span of predicted values at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that sample composition can significantly change the model. Hence, a good R2 value does not always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. A quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be developed and tested further.
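The proposed quality check, the confidence interval of the predicted value, is readily available for ordinary least squares. A sketch with statsmodels on synthetic trip-production data (the zone sizes and trip counts below are invented for illustration):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
households = rng.uniform(50, 500, 40)               # zone size (toy data)
trips = 1.8 * households + rng.normal(0, 60, 40)    # observed trip ends

X = sm.add_constant(households)
model = sm.OLS(trips, X).fit()
print("R^2:", round(model.rsquared, 3))

# 95% confidence interval of the mean predicted trip ends at two new zones:
pred = model.get_prediction(sm.add_constant(np.array([100.0, 300.0])))
print(pred.conf_int(alpha=0.05))
```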
NASA Astrophysics Data System (ADS)
Rebolledo, M. A.; Martinez-Betorz, J. A.
1989-04-01
In this paper, the accuracy of determining the period of an oscillating signal from the photon-statistics time-interval probability is studied as a function of the precision (the inverse of the cutoff frequency of the photon-counting system) with which time intervals are measured. The results are obtained by means of an experiment with a square-wave signal, in which the Fourier or square-wave transforms of the time-interval probability are measured. It is found that for signal frequencies near the cutoff frequency, the errors in the period are small.
JIANG, QUAN; ZHANG, YUAN; CHEN, JIAN; ZHANG, YUN-XIAO; HE, ZHU
2014-01-01
The aim of this study was to investigate the diagnostic value of the Virtual Touch™ tissue quantification (VTQ) and elastosonography technologies in benign and malignant breast tumors. Routine preoperative ultrasound, elastosonography and VTQ examinations were performed on 86 patients with breast lesions. The elastosonography score and VTQ speed grouping of each lesion were measured and compared with the pathological findings. The difference in the elastosonography score between the benign and malignant breast tumors was statistically significant (P<0.05). The detection rate for an elastosonography score of 1–3 points in benign tumors was 68.09% and that for an elastosonography score of 4–5 points in malignant tumors was 82.05%. The difference in VTQ speed values between the benign and malignant tumors was also statistically significant (P<0.05). In addition, the diagnostic accuracy of conventional ultrasound, elastosonography, VTQ technology and the combined methods showed statistically significant differences (P<0.05). The use of the three technologies in combination significantly improved the diagnostic accuracy to 91.86%. In conclusion, the combination of conventional ultrasound, elastosonography and VTQ technology can significantly improve accuracy in the diagnosis of breast cancer.
Tataru, Paula; Hobolth, Asger
2011-12-05
Continuous-time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences at the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible, and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). An implementation of the algorithms in R is available at http://www.birc.au.dk/~paula/. We used two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy, that EXPM is the slowest method, and that UNI is usually faster than EVD.
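The EXPM idea is the most transparent of the three: the expected time spent in a state, conditioned on the end-points, is an integral of products of matrix exponentials. A minimal sketch with a toy rate matrix and simple quadrature (not the authors' R implementation):

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0,  1.0,  1.0],    # toy rate matrix (rows sum to zero)
              [ 1.0, -3.0,  2.0],
              [ 2.0,  2.0, -4.0]])
T, a, b, c = 1.0, 0, 2, 1            # interval length, end-points, target state

# E[time in c | X_0 = a, X_T = b]
#   = (1 / P_ab(T)) * integral_0^T P_ac(t) * P_cb(T - t) dt
ts = np.linspace(0.0, T, 201)
integrand = [expm(Q * t)[a, c] * expm(Q * (T - t))[c, b] for t in ts]
expected_time = np.trapz(integrand, ts) / expm(Q * T)[a, b]
print("expected time in state c:", round(expected_time, 4))
```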
How Good Are Statistical Models at Approximating Complex Fitness Landscapes?
du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian
2016-01-01
Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations.
Su, Zhong; Zhang, Lisha; Ramakrishnan, V.; Hagan, Michael; Anscher, Mitchell
2011-01-01
Purpose: To evaluate both the localization accuracy of the Calypso System (Calypso Medical Technologies, Inc., Seattle, WA) in the presence of the wireless metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters of a dose verification system (DVS, Sicel Technologies, Inc., Morrisville, NC), and the dosimeters' reading accuracy in the presence of the wireless electromagnetic transponders, inside a phantom. Methods: A custom-made solid-water phantom was fabricated with space for transponders and dosimeters. Two inserts were machined with positioning grooves precisely matching the dimensions of the transponders and dosimeters, arranged in orthogonal and parallel orientations, respectively. To test transponder localization accuracy with and without dosimeters present (hypothesis 1), multivariate analyses were performed on transponder-derived localization data with and without dosimeters at each preset distance to detect statistically significant localization differences between the control and test sets. To test dosimeter dose-reading accuracy with and without transponders present (hypothesis 2), the transponder presence was alternated over seven identical fraction dose (100 cGy) deliveries and measurements. Two-way analysis of variance was performed to examine statistically significant dose-reading differences between the two groups and across fractions. A relative-dose analysis method was also used to evaluate the transponder impact on dose-reading accuracy after the dose-fading effect was removed by a second-order polynomial fit. Results: Multivariate analysis indicated that hypothesis 1 was false: there was a statistically significant difference between the localization data from the control and test sets. However, the upper and lower bounds of the 95% confidence intervals of the localized positional differences between the control and test sets were less than 0.1 mm, significantly smaller than the minimum clinical localization resolution of 0.5 mm. For hypothesis 2, analysis of variance indicated no statistically significant difference between dosimeter readings with and without transponders present. Both orthogonal and parallel configurations had differences between polynomial-fit doses and measured doses within 1.75%. Conclusions: The phantom study indicated that the Calypso System's localization accuracy was not clinically affected by the presence of DVS wireless MOSFET dosimeters, and that the dosimeter-measured doses were not affected by the presence of transponders. Thus, the same patients could be implanted with both transponders and dosimeters to benefit from the improved accuracy of radiotherapy treatments offered by conjunctive use of the two systems.
The music of morality and logic.
Mesz, Bruno; Rodriguez Zivic, Pablo H; Cecchi, Guillermo A; Sigman, Mariano; Trevisan, Marcos A
2015-01-01
Musical theory has built on the premise that musical structures can refer to something different from themselves (Nattiez and Abbate, 1990). The aim of this work is to statistically corroborate the intuitions of musical thinkers and practitioners starting at least with Plato, that music can express complex human concepts beyond merely "happy" and "sad" (Mattheson and Lenneberg, 1958). To do so, we ask whether musical improvisations can be used to classify the semantic category of the word that triggers them. We investigated two specific domains of semantics: morality and logic. While morality has been historically associated with music, logic concepts, which involve more abstract forms of thought, are more rarely associated with music. We examined musical improvisations inspired by positive and negative morality (e.g., good and evil) and logic concepts (true and false), analyzing the associations between these words and their musical representations in terms of acoustic and perceptual features. We found that music conveys information about valence (good and true vs. evil and false) with remarkable consistency across individuals. This information is carried by several musical dimensions which act in synergy to achieve very high classification accuracy. Positive concepts are represented by music with more ordered pitch structure and lower harmonic and sensorial dissonance than negative concepts. Music also conveys information indicating whether the word which triggered it belongs to the domains of logic or morality (true vs. good), principally through musical articulation. In summary, improvisations consistently map logic and morality information to specific musical dimensions, testifying the capacity of music to accurately convey semantic information in domains related to abstract forms of thought.
The Statistics of Visual Representation
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.
2002-01-01
The experience of retinex image processing has prompted us to reconsider fundamental aspects of imaging and image processing. Foremost is the idea that a good visual representation requires a non-linear transformation of the recorded (approximately linear) image data. Further, this transformation appears to converge on a specific distribution. Here we investigate the connection between numerical and visual phenomena. Specifically the questions explored are: (1) Is there a well-defined consistent statistical character associated with good visual representations? (2) Does there exist an ideal visual image? And (3) what are its statistical properties?
Weather related continuity and completeness on Deep Space Ka-band links: statistics and forecasting
NASA Technical Reports Server (NTRS)
Shambayati, Shervin
2006-01-01
In this paper, the concept of link 'stability' as a means of measuring the continuity of the link is introduced; using it, along with the distributions of 'good' and 'bad' periods, the performance of the proposed Ka-band link design method using both forecasting and long-term statistics is analyzed. The results indicate that the proposed link design method has relatively good continuity and completeness characteristics even when only long-term statistics are used, and that the continuity performance improves further when forecasting is employed.
Sensors and signal processing for high accuracy passenger counting : final report.
DOT National Transportation Integrated Search
2009-03-05
It is imperative for a transit system to track statistics about its ridership in order to plan bus routes. There exists a wide variety of methods for obtaining these statistics, ranging from relying on the driver to count people to utilizing came...
Predicting juvenile recidivism: new method, old problems.
Benda, B B
1987-01-01
This prediction study compared three statistical procedures for accuracy using two assessment methods. The criterion is return to a juvenile prison after the first release, and the models tested are logit analysis, predictive attribute analysis, and a Burgess procedure. No significant differences in predictive accuracy are found among the three statistical procedures.
NASA Astrophysics Data System (ADS)
Sisay, Z. G.; Besha, T.; Gessesse, B.
2017-05-01
This study used in-situ GPS data to validate the horizontal coordinate accuracy and the orientation of linear features of an orthophoto and line map for Bahir Dar city. GPS data were processed using GAMIT/GLOBK and Leica GeoOffice (LGO) in a least-squares sense, with ties to local and regional GPS reference stations, to predict horizontal coordinates at five checkpoints. The Real-Time Kinematic GPS measurement technique was used to collect road centerline coordinates to test the accuracy of the orientation of the photogrammetric line map. Against in-situ GPS coordinates processed with GAMIT/GLOBK, the orthophoto is in good agreement, with a root mean square error (RMSE) of 12.45 cm in x- and 13.97 cm in y-coordinates, and 6.06 cm at the 95% confidence level. When the GPS data are processed by LGO with a tie to the local GPS network, the horizontal coordinates of the orthophoto agree with the in-situ GPS coordinates to 16.71 cm and 18.98 cm in the x- and y-directions, respectively, and 11.07 cm at the 95% confidence level. Similarly, the linear features fit the in-situ GPS measurements well: the GPS coordinates of the road centerline deviate from the corresponding line-map coordinates by a mean of 9.18 cm in the x-direction and -14.96 cm in the y-direction. It can therefore be concluded that the accuracy of the orthophoto and line map is within the national standard error budget (±25 cm).
Spectroscopic Diagnosis of Arsenic Contamination in Agricultural Soils
Shi, Tiezhu; Liu, Huizeng; Chen, Yiyun; Fei, Teng; Wang, Junjie; Wu, Guofeng
2017-01-01
This study investigated the abilities of pre-processing, feature selection and machine-learning methods for the spectroscopic diagnosis of soil arsenic contamination. The spectral data were pre-processed by using Savitzky-Golay smoothing, first and second derivatives, multiplicative scatter correction, standard normal variate, and mean centering. Principal component analysis (PCA) and the RELIEF algorithm were used to extract spectral features. Machine-learning methods, including random forests (RF), artificial neural network (ANN), and radial basis function- and linear function-based support vector machines (RBF- and LF-SVM), were employed for establishing diagnosis models. The model accuracies were evaluated and compared by using overall accuracies (OAs). The statistical significance of the difference between models was evaluated by using McNemar’s test (Z value). The results showed that the OAs varied with the different combinations of pre-processing, feature selection, and classification methods. Feature selection methods could improve the modeling efficiencies and diagnosis accuracies, and RELIEF often outperformed PCA. The optimal models established by RF (OA = 86%), ANN (OA = 89%), RBF- (OA = 89%) and LF-SVM (OA = 87%) showed no statistically significant differences in diagnosis accuracy (Z < 1.96, p > 0.05). These results indicated that it was feasible to diagnose soil arsenic contamination using reflectance spectroscopy. The appropriate combination of multivariate methods was important to improve diagnosis accuracies. PMID:28471412
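As a concrete illustration of the comparison reported above, the sketch below applies a continuity-corrected McNemar Z statistic to the paired predictions of two classifiers; the labels and accuracy levels are simulated, not the study's spectra.

```python
# Minimal sketch: comparing two classifiers' diagnosis accuracies with a
# McNemar Z statistic on paired predictions. Data are hypothetical.
import numpy as np

def mcnemar_z(y_true, pred_a, pred_b):
    """Continuity-corrected McNemar Z for paired classifier comparison."""
    correct_a = pred_a == y_true
    correct_b = pred_b == y_true
    b = np.sum(correct_a & ~correct_b)   # A right, B wrong
    c = np.sum(~correct_a & correct_b)   # A wrong, B right
    return (abs(b - c) - 1) / np.sqrt(b + c) if (b + c) > 0 else 0.0

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 100)                       # true contamination labels
pa = np.where(rng.random(100) < 0.86, y, 1 - y)   # ~86% OA classifier
pb = np.where(rng.random(100) < 0.89, y, 1 - y)   # ~89% OA classifier
z = mcnemar_z(y, pa, pb)
print(f"Z = {z:.2f}; |Z| < 1.96 -> no significant difference at alpha = 0.05")
```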
Erby, Lori A H; Roter, Debra L; Biesecker, Barbara B
2011-11-01
To explore the accuracy and consistency of standardized patient (SP) performance in the context of routine genetic counseling, focusing on elements beyond scripted case items, including general communication style and affective demeanor. One hundred seventy-seven genetic counselors were randomly assigned to counsel one of six SPs. Videotapes and transcripts of the sessions were analyzed to assess consistency of performance across four dimensions. Accuracy of script item presentation was high: 91% and 89% in the prenatal and cancer cases, respectively. However, there were statistically significant differences among SPs in the accuracy of presentation, general communication style, and some aspects of affective presentation. All SPs were rated as presenting with similarly high levels of realism. SP performance over time was generally consistent, with some small but statistically significant differences. These findings demonstrate that well-trained SPs can not only perform the factual elements of a case with high degrees of accuracy and realism, but can also maintain sufficient levels of uniformity in general communication style and affective demeanor over time to support their use in even the demanding context of genetic counseling. Results indicate a need for an additional focus in training on consistency between different SPs. Copyright © 2010. Published by Elsevier Ireland Ltd.
Differential Effects of Context and Feedback on Orthographic Learning: How Good Is Good Enough?
ERIC Educational Resources Information Center
Martin-Chang, Sandra; Ouellette, Gene; Bond, Linda
2017-01-01
In this study, students in Grade 2 read different sets of words under 4 experimental training conditions (context/feedback, isolation/feedback, context/no-feedback, isolation/no-feedback). Training took place over 10 trials, followed by a spelling test and a delayed reading posttest. Reading in context boosted reading accuracy initially; in…
Zeh, Clement; Rose, Charles E; Inzaule, Seth; Desai, Mitesh A; Otieno, Fredrick; Humwa, Felix; Akoth, Benta; Omolo, Paul; Chen, Robert T; Kebede, Yenew; Samandari, Taraz
2017-09-01
CD4+ T-lymphocyte count testing at the point-of-care (POC) may improve linkage to care of persons diagnosed with HIV-1 infection, but the accuracy of POC devices when operated by lay-counselors in the era of task-shifting is unknown. We examined the accuracy of Alere's Pima™ POC device on both capillary and venous blood when performed by lay-counselors and laboratory technicians. In Phase I, we compared the performance of POC against FACSCalibur™ for 280 venous specimens tested by laboratory technicians. In Phase II, we compared POC performance by lay-counselors versus laboratory technicians using 147 paired capillary and venous specimens, and compared these to FACSCalibur™. Statistical analyses included Bland-Altman analyses, the concordance correlation coefficient (CCC), sensitivity, and specificity at treatment eligibility thresholds of 200, 350, and 500 cells/μl. Phase I: POC sensitivity and specificity were 93.0% and 84.1% at 500 cells/μl, respectively. Phase II: good agreement was observed for venous POC results from both lay-counselors (CCC = 0.873, bias -86.4 cells/μl) and laboratory technicians (CCC = 0.920, bias -65.7 cells/μl). Capillary POC had good correlation: lay-counselors (CCC = 0.902, bias -71.2 cells/μl), laboratory technicians (CCC = 0.918, bias -63.0 cells/μl). Misclassification at the 500 cells/μl threshold for venous blood was 13.6% and 10.2% for lay-counselors and laboratory technicians, respectively, and 12.2% for capillary blood in both groups. POC tended to under-classify CD4 values, with increasingly negative bias at higher CD4 values. Pima™ results were comparable to FACSCalibur™ for both venous and capillary specimens when operated by lay-counselors. POC CD4 testing has the potential to improve linkage to HIV care without burdening laboratory technicians in resource-limited settings. Published by Elsevier B.V.
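For readers unfamiliar with the agreement measures used above, here is a minimal sketch of Lin's concordance correlation coefficient and the Bland-Altman mean bias; the paired CD4 counts are invented for illustration.

```python
# Sketch of Lin's concordance correlation coefficient (CCC) and mean bias,
# the agreement measures named in the abstract. Paired counts are made up.
import numpy as np

def lin_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]                  # population covariance
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

reference = np.array([250, 410, 520, 610, 330, 480, 700, 150])  # lab reference
poc       = np.array([240, 390, 470, 540, 310, 450, 620, 160])  # POC device
print(f"CCC  = {lin_ccc(reference, poc):.3f}")
print(f"bias = {(poc - reference).mean():.1f} cells/uL")  # Bland-Altman mean bias
```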
Choi, Stephanie K Y; Boyle, Eleanor; Burchell, Ann N; Gardner, Sandra; Collins, Evan; Grootendorst, Paul; Rourke, Sean B
2015-01-01
Major depression affects up to half of people living with HIV. However, among HIV-positive patients, depression goes unrecognized 60-70% of the time in non-psychiatric settings. We sought to evaluate three screening instruments and their short forms to facilitate the recognition of current depression in HIV-positive patients attending HIV specialty care clinics in Ontario. A multi-centre validation study was conducted in Ontario to examine the validity and accuracy of three instruments (the Center for Epidemiologic Depression Scale [CESD20], the Kessler Psychological Distress Scale [K10], and the Patient Health Questionnaire depression scale [PHQ9]) and their short forms (CESD10, K6, and PHQ2) in diagnosing current major depression among 190 HIV-positive patients in Ontario. Results from the three instruments and their short forms were compared to results from the gold standard, the Mini International Neuropsychiatric Interview (the "M.I.N.I."). Overall, the three instruments identified depression with excellent accuracy and validity (area under the curve [AUC] > 0.9) and good reliability (Kappa statistics: 0.71-0.79; Cronbach's alpha: 0.87-0.93). We did not find that the AUCs differed in instrument pairs (p-value > 0.09), or between the instruments and their short forms (p-value > 0.3). Except for the PHQ2, the instruments showed good-to-excellent sensitivity (0.86-1.0) and specificity (0.81-0.87), excellent negative predictive value (>0.90), and moderate positive predictive value (0.49-0.58) at their optimal cut-points. Among people in HIV care in Ontario, Canada, the three instruments and their short forms performed equally well and accurately. When further in-depth assessments become available, shorter instruments might find greater clinical acceptance. This could lead to clinical benefits in fast-paced specialty HIV care settings and better management of depression in HIV-positive patients.
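A hedged sketch of the kind of analysis described above: computing the AUC and an optimal cut-point (via Youden's J) for a screening score against a gold-standard diagnosis. The scores below are simulated, not the study's instrument data.

```python
# AUC and optimal cut-point (Youden's J) for a screening score against a
# gold-standard diagnosis. Scores and diagnoses are simulated placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
diagnosis = rng.integers(0, 2, 190)              # gold-standard diagnosis
score = diagnosis * 12 + rng.normal(10, 4, 190)  # screening instrument score

auc = roc_auc_score(diagnosis, score)
fpr, tpr, thresholds = roc_curve(diagnosis, score)
j = tpr - fpr                                    # Youden's J statistic
best = np.argmax(j)
print(f"AUC = {auc:.2f}; cut-point = {thresholds[best]:.1f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```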
ICU scoring systems allow prediction of patient outcomes and comparison of ICU performance.
Becker, R B; Zimmerman, J E
1996-07-01
Too much time and effort are wasted in attempts to pass final judgment on whether systems for ICU prognostication are "good or bad" and whether they "do or do not" provide a simple answer to the complex and often unpredictable question of individual mortality in the ICU. A substantial amount of data supports the usefulness of general ICU prognostic systems in comparing ICU performance with respect to a wide variety of endpoints, including ICU and hospital mortality, duration of stay, and efficiency of resource use. Work in progress is analyzing both general resource use and specific therapeutic interventions. It also is time to fully acknowledge that statistics never can predict whether a patient will die with 100% accuracy. There always will be exceptions to the rule, and physicians frequently will have information that is not included in prognostic models. In addition, the values of both physicians and patients frequently lead to differences in how a probability is interpreted; for some, a 95% probability estimate means that death is near and, for others, this estimate represents a tangible 5% chance for survival. This means that physicians must learn how to integrate such estimates into their medical decisions. In doing so, it is our hope that prognostic systems are not viewed as oversimplifying or automating clinical decisions. Rather, such systems provide objective data on which physicians may ground a spectrum of decisions regarding either escalation or withdrawal of therapy in critically ill patients. These systems do not dehumanize our decision-making process but, rather, help eliminate physician reliance on emotional, heuristic, poorly calibrated, or overly pessimistic subjective estimates. No decision regarding patient care can be considered best if the facts upon which it is based are imprecise or biased. Future research will improve the accuracy of individual patient predictions but, even with the highest degree of precision, such predictions are useful only in support of, and not as a substitute for, good clinical judgment.
Resident accuracy of joint line palpation using ultrasound verification.
Rho, Monica E; Chu, Samuel K; Yang, Aaron; Hameed, Farah; Lin, Cindy Yuchin; Hurh, Peter J
2014-10-01
To determine the accuracy of knee and acromioclavicular (AC) joint line palpation in Physical Medicine and Rehabilitation (PM&R) residents using ultrasound (US) verification. Cohort study. PM&R residency program at an academic institution. Twenty-four PM&R residents participating in a musculoskeletal US course (7 PGY-2, 8 PGY-3, and 9 PGY4 residents). Twenty-four PM&R residents participating in an US course were asked to palpate the AC joint and lateral joint line of the knee in a female and male model before the start of the course. Once the presumed joint line was localized, the residents were asked to tape an 18-gauge, 1.5-inch, blunt-tip needle parallel to the joint line on the overlying skin. The accuracy of needle placement over the joint line was verified using US. US verification of correct needle placement over the joint line. Overall AC joint palpation accuracy was 16.7%, and knee lateral joint line palpation accuracy was 58.3%. Based on the resident level of education, using a value of P < .05, there were no statistically significant differences in the accuracy of joint line palpation. Residents in this study demonstrate poor accuracy of AC joint and lateral knee joint line identification by palpation, using US as the criterion standard for verification. There were no statistically significant differences in the accuracy rates of joint line palpation based on resident level of education. US may be a useful tool to use to advance the current methods of teaching the physical examination in medical education. Copyright © 2014 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Deep exclusive π+ electroproduction off the proton at CLAS
NASA Astrophysics Data System (ADS)
Park, K.; Guidal, M.; Gothe, R. W.; Laget, J. M.; Garçon, M.; Adhikari, K. P.; Aghasyan, M.; Amaryan, M. J.; Anghinolfi, M.; Avakian, H.; Baghdasaryan, H.; Ball, J.; Baltzell, N. A.; Battaglieri, M.; Bedlinsky, I.; Bennett, R. P.; Biselli, A. S.; Bookwalter, C.; Boiarinov, S.; Briscoe, W. J.; Brooks, W. K.; Burkert, V. D.; Carman, D. S.; Celentano, A.; Chandavar, S.; Charles, G.; Contalbrigo, M.; Crede, V.; D'Angelo, A.; Daniel, A.; Dashyan, N.; De Vita, R.; De Sanctis, E.; Deur, A.; Djalali, C.; Dodge, G. E.; Doughty, D.; Dupre, R.; Egiyan, H.; El Alaoui, A.; El Fassi, L.; Eugenio, P.; Fedotov, G.; Fegan, S.; Fleming, J. A.; Forest, T. A.; Fradi, A.; Gevorgyan, N.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Gohn, W.; Golovatch, E.; Graham, L.; Griffioen, K. A.; Guegan, B.; Guo, L.; Hafidi, K.; Hakobyan, H.; Hanretty, C.; Heddle, D.; Hicks, K.; Ho, D.; Holtrop, M.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Jenkins, D.; Jo, H. S.; Keller, D.; Khandaker, M.; Khetarpal, P.; Kim, A.; Kim, W.; Klein, F. J.; Koirala, S.; Kubarovsky, A.; Kubarovsky, V.; Kuhn, S. E.; Kuleshov, S. V.; Livingston, K.; Lu, H. Y.; MacGregor, I. J. D.; Mao, Y.; Markov, N.; Martinez, D.; Mayer, M.; McKinnon, B.; Meyer, C. A.; Mineeva, T.; Mirazita, M.; Mokeev, V.; Moutarde, H.; Munevar, E.; Munoz Camacho, C.; Nadel-Turonski, P.; Nepali, C. S.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Osipenko, M.; Ostrovidov, A. I.; Pappalardo, L. L.; Paremuzyan, R.; Park, S.; Pasyuk, E.; Anefalos Pereira, S.; Phelps, E.; Pisano, S.; Pogorelko, O.; Pozdniakov, S.; Price, J. W.; Procureur, S.; Protopopescu, D.; Puckett, A. J. R.; Raue, B. A.; Ricco, G.; Rimal, D.; Ripani, M.; Rosner, G.; Rossi, P.; Sabatié, F.; Saini, M. S.; Salgado, C.; Schott, D.; Schumacher, R. A.; Seder, E.; Seraydaryan, H.; Sharabian, Y. G.; Smith, E. S.; Smith, G. D.; Sober, D. I.; Sokhan, D.; Stepanyan, S. S.; Stoler, P.; Strakovsky, I. I.; Strauch, S.; Taiuti, M.; Tang, W.; Taylor, C. E.; Tian, Ye; Tkachenko, S.; Trivedi, A.; Ungaro, M.; Vernarsky, B.; Voskanyan, H.; Voutier, E.; Walford, N. K.; Watts, D. P.; Weinstein, L. B.; Weygand, D. P.; Wood, M. H.; Zachariou, N.; Zhang, J.; Zhao, Z. W.; Zonta, I.
2013-01-01
The exclusive electroproduction of π+ above the resonance region was studied using the CEBAF Large Acceptance Spectrometer (CLAS) at Jefferson Laboratory by scattering a 6 GeV continuous electron beam off a hydrogen target. The large acceptance and good resolution of CLAS, together with the high luminosity, allowed us to measure the cross section for the γ*p → nπ+ process in 140 (Q2, xB, t) bins: 0.16 < xB < 0.58, 1.6 GeV2 < Q2 < 4.5 GeV2 and 0.1 GeV2 < -t < 5.3 GeV2. For most bins, the statistical accuracy is on the order of a few percent. Differential cross sections are compared to four theoretical models, based either on hadronic or on partonic degrees of freedom. The four models can describe the gross features of the data reasonably well, but differ strongly in their ingredients. In particular, the model based on Generalized Parton Distributions (GPDs) contains the interesting potential to experimentally access transversity GPDs.
Ecological footprint model using the support vector machine technique.
Ma, Haibo; Chang, Wenjuan; Cui, Guangbai
2012-01-01
The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF. These factors are: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of service to total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method built on the structural risk minimization principle from statistical learning theory, was developed to calculate the per capita EF of 24 nations using data from 123 nations. Calculation accuracy was measured by the average absolute error and the average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance.
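A minimal sketch of the modeling approach the abstract describes: support vector regression on national indicators with the two reported error metrics. Feature values and targets are synthetic stand-ins, and scikit-learn's SVR stands in for whatever SVM implementation the authors used.

```python
# Fit an SVM regressor on five national indicators to predict per capita EF,
# then report average absolute and relative errors. All data are synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.random((123, 5))   # GDP, urbanization, Gini, export dep., service int.
y = X @ np.array([2.0, 1.0, -0.5, 0.3, 0.8]) + rng.normal(0, 0.05, 123)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:99], y[:99])          # train on 99 nations
pred = model.predict(X[99:])       # predict the remaining 24
abs_err = np.abs(pred - y[99:])
print(f"average absolute error: {abs_err.mean():.4f}")
print(f"average relative error: {100 * (abs_err / np.abs(y[99:])).mean():.4f}%")
```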
Ducat, Giseli; Felsner, Maria L; da Costa Neto, Pedro R; Quináia, Sueli P
2015-06-15
Recently the use of brown sugar has increased due to its nutritional characteristics, thus requiring more rigid quality control. A method for water content analysis in soft brown sugar is developed for the first time by TG/DTA, with application of different statistical tests. The results of the optimization study suggest that a heating rate of 5 °C min-1 and an alumina sample holder improve the efficiency of the drying process. The validation study showed that thermogravimetry presents good accuracy and precision for water content analysis in soft brown sugar samples. This technique offers advantages over other analytical methods as it does not use toxic and costly reagents or solvents, it does not need any sample preparation, and it allows identification of the temperature at which water is completely eliminated relative to other volatile degradation products. This is an important advantage over the official method (loss on drying). Copyright © 2015 Elsevier Ltd. All rights reserved.
Zhang, Junjie; Li, Shuqi; Lin, Mengfei; Yang, Endian; Chen, Xiaoyang
2018-05-01
The drumstick tree has traditionally been used as foodstuff and fodder in several countries. Due to its high nutritional value and good biomass production, interest in this plant has increased in recent years. It has therefore become important to rapidly and accurately evaluate drumstick quality. In this study, we addressed the optimization of near-infrared spectroscopy (NIRS) to analyze crude protein, crude fat, crude fiber, iron (Fe), and potassium (K) in a variety of drumstick accessions (N = 111) representing different populations, cultivation programs, and climates. Partial least-squares regression with internal cross-validation was used to evaluate the models and identify possible spectral outliers. The calibration statistics for these fodder-related chemical components suggest that NIRS can predict these parameters in a wide range of drumstick types with high accuracy. The NIRS calibration models developed in this study will be useful in predicting drumstick forage quality for these five quality parameters.
Imputation of missing data in time series for air pollutants
NASA Astrophysics Data System (ADS)
Junger, W. L.; Ponce de Leon, A.
2015-02-01
Missing data are a major concern in epidemiological studies of the health effects of environmental air pollutants. This article presents an imputation-based method suitable for multivariate time series data, which uses the EM algorithm under the assumption of a normal distribution. Different approaches are considered for filtering the temporal component. A simulation study was performed to assess the validity and performance of the proposed method in comparison with some frequently used methods. Simulations showed that when the amount of missing data was as low as 5%, the complete-data analysis yielded satisfactory results regardless of the generating mechanism of the missing data, whereas validity began to degenerate when the proportion of missing values exceeded 10%. The proposed imputation method exhibited good accuracy and precision in different settings with respect to the patterns of missing observations. Most of the imputations yielded valid results, even under missing not at random. The methods proposed in this study are implemented as a package called mtsdi for the statistical software system R.
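A toy EM-style imputation loop under a multivariate normal assumption, in the spirit of the approach described above (the mtsdi package additionally filters the temporal component, which this sketch omits); the pollutant series is simulated.

```python
# EM-style imputation for multivariate data under a normality assumption:
# iterate between estimating (mu, cov) and replacing missing cells with
# their conditional expectations given the observed cells in the same row.
import numpy as np

def em_impute(X, n_iter=50):
    X = X.astype(float).copy()
    miss = np.isnan(X)
    X[miss] = np.nanmean(X, axis=0)[np.where(miss)[1]]   # init with column means
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        for i in np.unique(np.where(miss)[0]):           # rows with missing cells
            m, o = miss[i], ~miss[i]
            X[i, m] = mu[m] + cov[np.ix_(m, o)] @ np.linalg.solve(
                cov[np.ix_(o, o)], X[i, o] - mu[o])
    return X

rng = np.random.default_rng(3)
pollutants = rng.multivariate_normal(
    [30, 50, 20], [[9, 4, 2], [4, 16, 3], [2, 3, 4]], 365)  # daily series
holes = rng.random(pollutants.shape) < 0.05                 # 5% missingness
obs = np.where(holes, np.nan, pollutants)
print(np.abs(em_impute(obs)[holes] - pollutants[holes]).mean())  # imputation MAE
```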
Web-based triage in a college health setting.
Sole, Mary Lou; Stuart, Patricia L; Deichen, Michael
2006-01-01
The authors describe the initiation and use of a Web-based triage system in a college health setting. During the first 4 months of implementation, the system recorded 1,290 encounters. More women accessed the system (70%); the average age was 21.8 years. The Web-based triage system advised the majority of students to seek care within 24 hours; however, it recommended self-care management in 22.7% of encounters. Sore throat was the most frequent chief complaint (14.2%). A subset of 59 students received treatment at student health services after requesting an appointment via e-mail. The authors used kappa statistics to compare congruence between chief complaint and 24/7 WebMed classification (kappa = .94), between chief complaint and student health center diagnosis (kappa = .91), and between 24/7 WebMed classification and student health center diagnosis (kappa = .89). Initial evaluation showed high use and good accuracy of Web-based triage. This service provides education and advice to students about their health care concerns.
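For reference, the congruence computations above reduce to Cohen's kappa on paired categorical labels; a minimal sketch with invented triage categories:

```python
# Cohen's kappa between two categorical classifications of the same
# encounters. The triage/diagnosis labels below are invented.
from sklearn.metrics import cohen_kappa_score

triage_advice = ["24h", "self", "24h", "24h", "self", "24h", "24h", "self"]
clinic_dx     = ["24h", "self", "24h", "self", "self", "24h", "24h", "self"]
# kappa near 1 indicates agreement well beyond chance
print(f"kappa = {cohen_kappa_score(triage_advice, clinic_dx):.2f}")
```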
Navas, Juan Moreno; Telfer, Trevor C; Ross, Lindsay G
2011-08-01
Combining GIS with neuro-fuzzy modeling has the advantage that expert scientific knowledge in coastal aquaculture activities can be incorporated into a geospatial model to classify areas particularly vulnerable to pollutants. Data on the physical environment and its suitability for aquaculture in an Irish fjard, which is host to a number of different aquaculture activities, were derived from a three-dimensional hydrodynamic and GIS models. Subsequent incorporation into environmental vulnerability models, based on neuro-fuzzy techniques, highlighted localities particularly vulnerable to aquaculture development. The models produced an overall classification accuracy of 85.71%, with a Kappa coefficient of agreement of 81%, and were sensitive to different input parameters. A statistical comparison between vulnerability scores and nitrogen concentrations in sediment associated with salmon cages showed good correlation. Neuro-fuzzy techniques within GIS modeling classify vulnerability of coastal regions appropriately and have a role in policy decisions for aquaculture site selection. Copyright © 2011 Elsevier Ltd. All rights reserved.
EL-Houssini, Ola M.; Zawilla, Nagwan H.; Mohammad, Mohammad A.
2013-01-01
A specific stability-indicating reverse-phase liquid chromatography (RP-LC) assay method (SIAM) was developed for the determination of cinnarizine (Cinn)/piracetam (Pira) and cinnarizine (Cinn)/heptaminol acefyllinate (Hept) in the presence of the reported degradation products of Cinn. A C18 column and a gradient mobile phase were applied for good resolution of all peaks. Detection was achieved at 210 nm and 254 nm for Cinn/Pira and Cinn/Hept, respectively. The responses were linear over concentration ranges of 20–200, 20–1000 and 25–1000 μg mL−1 for Cinn, Pira, and Hept, respectively. The proposed method was validated for linearity, accuracy, repeatability, intermediate precision, and robustness via statistical analysis of the data. The method was shown to be precise, accurate, reproducible, sensitive, and selective for the analysis of Cinn/Pira and Cinn/Hept in laboratory-prepared mixtures and in pharmaceutical formulations. PMID:24137049
Sai, Jin-Kan; Suyama, Masafumi; Kubokawa, Yoshihiro; Watanabe, Sumio
2008-02-28
To investigate the usefulness of secretin injection-MRCP for the diagnosis of mild chronic pancreatitis. Sixteen patients having mild chronic pancreatitis according to the Cambridge classification and 12 control subjects with no abnormal findings on the pancreatogram were examined for the diagnostic accuracy of secretin injection-MRCP regarding abnormal branch pancreatic ducts associated with mild chronic pancreatitis (Cambridge Classification), using endoscopic retrograde cholangiopancreatography (ERCP) for comparison. The sensitivity and specificity for abnormal branch pancreatic ducts determined by two reviewers were respectively 55%-63% and 75%-83% in the head, 57%-64% and 82%-83% in the body, and 44%-44% and 72%-76% in the tail of the pancreas. The sensitivity and specificity for mild chronic pancreatitis were 56%-63% and 92%-92%, respectively. Interobserver agreement (kappa statistics) concerning the diagnosis of an abnormal branch pancreatic duct and of mild chronic pancreatitis was good to excellent. Secretin injection-MRCP might be useful for the diagnosis of mild chronic pancreatitis.
A global fit of the MSSM with GAMBIT
NASA Astrophysics Data System (ADS)
Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin
2017-12-01
We study the seven-dimensional Minimal Supersymmetric Standard Model (MSSM7) with the new GAMBIT software framework, with all parameters defined at the weak scale. Our analysis significantly extends previous weak-scale, phenomenological MSSM fits, by adding more and newer experimental analyses, improving the accuracy and detail of theoretical predictions, including dominant uncertainties from the Standard Model, the Galactic dark matter halo and the quark content of the nucleon, and employing novel and highly-efficient statistical sampling methods to scan the parameter space. We find regions of the MSSM7 that exhibit co-annihilation of neutralinos with charginos, stops and sbottoms, as well as models that undergo resonant annihilation via both light and heavy Higgs funnels. We find high-likelihood models with light charginos, stops and sbottoms that have the potential to be within the future reach of the LHC. Large parts of our preferred parameter regions will also be accessible to the next generation of direct and indirect dark matter searches, making prospects for discovery in the near future rather good.
NASA Astrophysics Data System (ADS)
Wang, Jianing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.
2017-02-01
Medical image registration establishes a correspondence between images of biological structures and it is at the core of many applications. Commonly used deformable image registration methods are dependent on a good preregistration initialization. The initialization can be performed by localizing homologous landmarks and calculating a point-based transformation between the images. The selection of landmarks is however important. In this work, we present a learning-based method to automatically find a set of robust landmarks in 3D MR image volumes of the head to initialize non-rigid transformations. To validate our method, these selected landmarks are localized in unknown image volumes and they are used to compute a smoothing thin-plate splines transformation that registers the atlas to the volumes. The transformed atlas image is then used as the preregistration initialization of an intensity-based non-rigid registration algorithm. We show that the registration accuracy of this algorithm is statistically significantly improved when using the presented registration initialization over a standard intensity-based affine registration.
The bag-of-frames approach: A not so sufficient model for urban soundscapes
NASA Astrophysics Data System (ADS)
Lagrange, Mathieu; Lafay, Grégoire; Défréville, Boris; Aucouturier, Jean-Julien
2015-11-01
The "bag-of-frames" approach (BOF), which encodes audio signals as the long-term statistical distribution of short-term spectral features, is commonly regarded as an effective and sufficient way to represent environmental sound recordings (soundscapes) since its introduction in an influential 2007 article. The present paper describes a concep-tual replication of this seminal article using several new soundscape datasets, with results strongly questioning the adequacy of the BOF approach for the task. We show that the good accuracy originally re-ported with BOF likely result from a particularly thankful dataset with low within-class variability, and that for more realistic datasets, BOF in fact does not perform significantly better than a mere one-point av-erage of the signal's features. Soundscape modeling, therefore, may not be the closed case it was once thought to be. Progress, we ar-gue, could lie in reconsidering the problem of considering individual acoustical events within each soundscape.
A scan statistic to extract causal gene clusters from case-control genome-wide rare CNV data.
Nishiyama, Takeshi; Takahashi, Kunihiko; Tango, Toshiro; Pinto, Dalila; Scherer, Stephen W; Takami, Satoshi; Kishino, Hirohisa
2011-05-26
Several statistical tests have been developed for analyzing genome-wide association data by incorporating gene pathway information in terms of gene sets. Using these methods, hundreds of gene sets are typically tested, and the tested gene sets often overlap. This overlapping greatly increases the probability of generating false positives, and the results obtained are difficult to interpret, particularly when many gene sets show statistical significance. We propose a flexible statistical framework to circumvent these problems. Inspired by spatial scan statistics for detecting clustering of disease occurrence in the field of epidemiology, we developed a scan statistic to extract disease-associated gene clusters from a whole gene pathway. Extracting one or a few significant gene clusters from a global pathway limits the overall false positive probability, which results in increased statistical power, and facilitates the interpretation of test results. In the present study, we applied our method to genome-wide association data for rare copy-number variations, which have been strongly implicated in common diseases. Application of our method to a simulated dataset demonstrated the high accuracy of this method in detecting disease-associated gene clusters in a whole gene pathway. The scan statistic approach proposed here shows a high level of accuracy in detecting gene clusters in a whole gene pathway. This study has provided a sound statistical framework for analyzing genome-wide rare CNV data by incorporating topological information on the gene pathway.
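A toy illustration of the scan-statistic idea described above: slide windows over an ordered pathway and keep the window that maximizes a binomial likelihood ratio of case versus control CNV hits. The counts are fabricated, and the Monte Carlo significance evaluation of the published method is omitted.

```python
# Sliding-window scan over an ordered gene pathway: find the window whose
# case/control hit proportion deviates most from the null, by log-likelihood
# ratio. Counts are fabricated with one implanted cluster.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(4)
cases = rng.poisson(1.0, 60)          # CNV hit counts per gene, cases
cases[25:30] += rng.poisson(4.0, 5)   # implanted disease-associated cluster
controls = rng.poisson(1.0, 60)

best = (0.0, None)
N_case, N_ctrl = cases.sum(), controls.sum()
p0 = N_case / (N_case + N_ctrl)       # null: hits split by overall totals
for width in range(2, 11):
    for start in range(0, 60 - width + 1):
        c = cases[start:start + width].sum()
        n = c + controls[start:start + width].sum()
        if n == 0:
            continue
        llr = binom.logpmf(c, n, c / n) - binom.logpmf(c, n, p0)
        if llr > best[0]:
            best = (llr, (start, start + width))
print(f"most likely cluster: genes {best[1]}, LLR = {best[0]:.2f}")
```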
Riegel, Adam C; Chen, Yu; Kapur, Ajay; Apicello, Laura; Kuruvilla, Abraham; Rea, Anthony J; Jamshidi, Abolghassem; Potters, Louis
Optically stimulated luminescent dosimeters (OSLDs) are utilized for in vivo dosimetry (IVD) of modern radiation therapy techniques such as intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). Dosimetric precision achieved with conventional techniques may not be attainable. In this work, we measured accuracy and precision for a large sample of clinical OSLD-based IVD measurements. Weekly IVD measurements were collected from 4 linear accelerators for 2 years and were expressed as percent differences from planned doses. After outlier analysis, 10,224 measurements were grouped in the following way: overall, modality (photons, electrons), treatment technique (3-dimensional [3D] conformal, field-in-field intensity modulation, inverse-planned IMRT, and VMAT), placement location (gantry angle, cardinality, and central axis positioning), and anatomical site (prostate, breast, head and neck, pelvis, lung, rectum and anus, brain, abdomen, esophagus, and bladder). Distributions were modeled via a Gaussian function. Fitting was performed with least squares, and goodness-of-fit was assessed with the coefficient of determination. Model means (μ) and standard deviations (σ) were calculated. Sample means and variances were compared for statistical significance by analysis of variance and the Levene tests (α = 0.05). Overall, μ ± σ was 0.3 ± 10.3%. Precision for electron measurements (6.9%) was significantly better than for photons (10.5%). Precision varied significantly among treatment techniques (P < .0001) with field-in-field lowest (σ = 7.2%) and IMRT and VMAT highest (σ = 11.9% and 13.4%, respectively). Treatment site models with goodness-of-fit greater than 0.90 (6 of 10) yielded accuracy within ±3%, except for head and neck (μ = -3.7%). Precision varied with treatment site (range, 7.3%-13.0%), with breast and head and neck yielding the best and worst precision, respectively. Placement on the central axis of cardinal gantry angles yielded more precise results (σ = 8.5%) compared with other locations (range, 10.5%-11.4%). Accuracy of ±3% was achievable. Precision ranged from 6.9% to 13.4% depending on modality, technique, and treatment site. Simple, standardized locations may improve IVD precision. These findings may aid development of patient-specific tolerances for OSLD-based IVD. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
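A sketch of the distribution-modeling step reported above: fitting a Gaussian to a histogram of percent dose differences by least squares and reading off the mean (accuracy), standard deviation (precision), and coefficient of determination. The measurements are simulated.

```python
# Least-squares Gaussian fit to a histogram of in vivo dosimetry percent
# differences; mu estimates accuracy, sigma precision, R^2 goodness-of-fit.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

rng = np.random.default_rng(5)
diffs = rng.normal(0.3, 10.3, 10000)        # simulated percent dose differences
counts, edges = np.histogram(diffs, bins=80)
centers = 0.5 * (edges[:-1] + edges[1:])

(a, mu, sigma), _ = curve_fit(gaussian, centers, counts, p0=[counts.max(), 0, 10])
residuals = counts - gaussian(centers, a, mu, sigma)
r2 = 1 - residuals.var() / counts.var()     # coefficient of determination
print(f"mu = {mu:.2f}%, sigma = {abs(sigma):.2f}%, R^2 = {r2:.3f}")
```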
Applications of Principled Search Methods in Climate Influences and Mechanisms
NASA Technical Reports Server (NTRS)
Glymour, Clark
2005-01-01
Forest and grass fires cause economic losses in the billions of dollars in the U.S. alone. In addition, boreal forests constitute a large carbon store; it has been estimated that, were no burning to occur, an additional 7 gigatons of carbon would be sequestered in boreal soils each century. Effective wildfire suppression requires anticipation of locales and times for which wildfire is most probable, preferably with a two- to four-week forecast, so that limited resources can be efficiently deployed. The United States Forest Service (USFS) and other experts and agencies have developed several measures of fire risk combining physical principles and expert judgment, and have used them in automated procedures for forecasting fire risk. Forecasting accuracies for some fire risk indices in combination with climate and other variables have been estimated for specific locations, with the value of fire risk index variables assessed by their statistical significance in regressions. In other cases, for example the MAPSS forecasts [23, 24], forecasting accuracy has been estimated only with simulated data. We describe alternative forecasting methods that predict fire probability by locale and time using statistical or machine learning procedures trained on historical data, and we give comparative assessments of their forecasting accuracy for one fire season year, April-October 2003, for all U.S. Forest Service lands. Aside from providing an accuracy baseline for other forecasting methods, the results illustrate the interdependence between the statistical significance of prediction variables and the forecasting method used.
NASA Astrophysics Data System (ADS)
Hancock, Matthew C.; Magnan, Jerry F.
2017-03-01
To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capabilities of statistical learning methods for classifying nodule malignancy, utilizing the Lung Image Database Consortium (LIDC) dataset, and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy that is achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (±1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (±0.012), which increases to 0.949 (±0.007) when diameter and volume features are included, along with the accuracy to 88.08 (±1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification.
Catto, James W F; Linkens, Derek A; Abbod, Maysam F; Chen, Minyou; Burton, Julian L; Feeley, Kenneth M; Hamdy, Freddie C
2003-09-15
New techniques for the prediction of tumor behavior are needed, because statistical analysis has poor accuracy and is not applicable to the individual. Artificial intelligence (AI) may provide suitable methods. Whereas artificial neural networks (ANN), the best-studied form of AI, have been used successfully, their hidden networks remain an obstacle to their acceptance. Neuro-fuzzy modeling (NFM), another AI method, has a transparent functional layer and is without many of the drawbacks of ANN. We have compared the predictive accuracies of NFM, ANN, and traditional statistical methods for the behavior of bladder cancer. Experimental molecular biomarkers, including p53 and the mismatch repair proteins, and conventional clinicopathological data were studied in a cohort of 109 patients with bladder cancer. For all three methods, models were produced to predict the presence and timing of a tumor relapse. Both methods of AI predicted relapse with an accuracy ranging from 88% to 95%. This was superior to statistical methods (71-77%; P < 0.0006). NFM appeared better than ANN at predicting the timing of relapse (P = 0.073). The use of AI can accurately predict cancer behavior. NFM has a similar or superior predictive accuracy to ANN. However, unlike the impenetrable "black box" of a neural network, the rules of NFM are transparent, enabling validation from clinical knowledge and the manipulation of input variables to allow exploratory predictions. This technique could be used widely in a variety of areas of medicine.
Northern Hemisphere observations of ICRF sources on the USNO stellar catalogue frame
NASA Astrophysics Data System (ADS)
Fienga, A.; Andrei, A. H.
2004-06-01
The most recent USNO stellar catalogue, the USNO B1.0 (Monet et al. 2003), provides positions for 1 042 618 261 objects, with a published astrometric accuracy of 200 mas and five-band magnitudes with a 0.3 mag accuracy. Its completeness is believed to extend to 21st magnitude in the V-band. Such a catalogue is a very good tool for astrometric reduction. This work investigates the accuracy of the USNO B1.0 link to the ICRF and gives an estimate of its internal and external accuracies by comparison with different catalogues and by computation of ICRF source positions using USNO B1.0 star positions.
Depth calibration of the Experimental Advanced Airborne Research Lidar, EAARL-B
Wright, C. Wayne; Kranenburg, Christine J.; Troche, Rodolfo J.; Mitchell, Richard W.; Nagle, David B.
2016-05-17
The resulting calibrated EAARL-B data were then analyzed and compared with the original reference dataset, the jet-ski-based dataset from the same Fort Lauderdale site, as well as the depth-accuracy requirements of the International Hydrographic Organization (IHO). We do not claim to meet all of the IHO requirements and standards. The IHO minimum depth-accuracy requirements were used as a reference only, and we do not address the other IHO requirements such as “Full Seafloor Search”. Our results show good agreement between the calibrated EAARL-B data and all reference datasets, with results that are within the 95 percent depth accuracy of the IHO Order 1 (a and b) depth-accuracy requirements.
Accuracy assessment of percent canopy cover, cover type, and size class
H. T. Schreuder; S. Bain; R. C. Czaplewski
2003-01-01
Truth for vegetation percent cover and cover type is obtained from very large-scale photography (VLSP); truth for stand structure, as measured by size classes, and for vegetation types is obtained from a combination of VLSP and ground sampling. We recommend using the Kappa statistic with bootstrap confidence intervals for overall accuracy, and similarly bootstrap confidence intervals for percent...
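A hedged sketch of the recommended Kappa-with-bootstrap analysis, using invented map and reference class labels:

```python
# Bootstrap confidence interval around the Kappa accuracy statistic:
# resample plots with replacement and recompute kappa. Labels are invented.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(6)
reference = rng.integers(0, 4, 300)                 # ground/VLSP truth classes
mapped = np.where(rng.random(300) < 0.8, reference, rng.integers(0, 4, 300))

boot = []
for _ in range(2000):
    idx = rng.integers(0, 300, 300)                 # resample with replacement
    boot.append(cohen_kappa_score(reference[idx], mapped[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"kappa = {cohen_kappa_score(reference, mapped):.3f} "
      f"(95% CI {lo:.3f}-{hi:.3f})")
```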
The accuracy of the National Land Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or a...
Factors Affecting the Item Parameter Estimation and Classification Accuracy of the DINA Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Hong, Yuan; Deng, Weiling
2010-01-01
To better understand the statistical properties of the deterministic inputs, noisy "and" gate cognitive diagnosis (DINA) model, the impact of several factors on the quality of the item parameter estimates and classification accuracy was investigated. Results of the simulation study indicate that the fully Bayes approach is most accurate when the…
Statistical classifiers on multifractal parameters for optical diagnosis of cervical cancer
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Sabyasachi; Pratiher, Sawon; Kumar, Rajeev; Krishnamoorthy, Vigneshram; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2017-06-01
An augmented set of multifractal parameters with physical interpretations has been proposed to quantify the varying distribution and shape of the multifractal spectrum. A statistical classifier with an accuracy of 84.17% validates the adequacy of the multi-feature MFDFA characterization of elastic scattering spectroscopy for the optical diagnosis of cancer.
Zhang, Ying-Ying; Zhou, Xiao-Bin; Wang, Qiu-Zhen; Zhu, Xiao-Yan
2017-05-01
Multivariable logistic regression (MLR) has been increasingly used in Chinese clinical medical research during the past few years. However, few evaluations of the quality of the reporting strategies in these studies are available. We aimed to evaluate the reporting quality and model accuracy of MLR used in published work, and to provide related advice for authors, readers, reviewers, and editors. A total of 316 articles published in 5 leading Chinese clinical medical journals with high impact factor from January 2010 to July 2015 were selected for evaluation. Articles were evaluated according to 12 established criteria for proper use and reporting of MLR models. Among the articles, the highest quality score was 9, the lowest 1, and the median 5 (4-5). A total of 85.1% of the articles scored below 6. No significant differences were found among these journals with respect to quality score (χ2 = 6.706, P = .15). More than 50% of the articles met the following 5 criteria: complete identification of the statistical software application that was used (97.2%), calculation of the odds ratio and its confidence interval (86.4%), and description of sufficient events (>10) per variable, selection of variables, and fitting procedure (78.2%, 69.3%, and 58.5%, respectively). Less than 35% of the articles reported the coding of variables (18.7%). The remaining 5 criteria were not satisfied by a sufficient number of articles: goodness-of-fit (10.1%), interactions (3.8%), checking for outliers (3.2%), collinearity (1.9%), and participation of statisticians and epidemiologists (0.3%). The criterion of conformity with linear gradients was applicable to 186 articles; however, only 7 (3.8%) mentioned or tested it. The reporting quality and model accuracy of MLR in the selected articles were not satisfactory; in fact, severe deficiencies were noted, and only 1 article scored 9. We recommend that authors, readers, reviewers, and editors consider MLR models more carefully and cooperate more closely with statisticians and epidemiologists. Journals should develop statistical reporting guidelines concerning MLR.
NASA Astrophysics Data System (ADS)
Omar, Mahmoud A.; Badr El-Din, Kalid M.; Salem, Hesham; Abdelmageed, Osama H.
2018-03-01
Two simple and sensitive spectrophotometric and spectrofluorimetric methods for the determination of terbutaline sulfate, fenoterol hydrobromide, etilefrine hydrochloride, isoxsuprine hydrochloride, ethamsylate, and doxycycline hyclate have been developed. Both methods were based on the oxidation of the cited drugs with cerium(IV) in acid medium. The spectrophotometric method was based on measurement of the absorbance difference (ΔA), which represents the excess cerium(IV), at 317 nm for each drug. The spectrofluorimetric method was based on measurement of the fluorescence of the produced cerium(III) at an emission wavelength of 354 nm (λexcitation = 255 nm) over the concentrations studied for each drug. For both methods, the variables affecting the reactions were carefully investigated and the conditions were optimized. Linear relationships were found between either ΔA or the fluorescence of the produced cerium(III) and the concentration of the studied drugs, over general concentration ranges of 2.0-24.0 μg mL-1 and 20.0-24.0 ng mL-1, with good correlation coefficients in the ranges 0.9990-0.9999 and 0.9990-0.9993 for the spectrophotometric and spectrofluorimetric methods, respectively. The limits of detection and quantitation of the spectrophotometric method were in the general ranges 0.190-0.787 and 0.634-2.624 μg mL-1, respectively. For the spectrofluorimetric method, the limits of detection and quantitation were in the general ranges 4.77-9.52 and 15.91-31.74 ng mL-1, respectively. The stoichiometry of the reaction was determined, and the reaction pathways were postulated. The analytical performance of the methods, in terms of accuracy and precision, was statistically validated and the results obtained were satisfactory. The methods have been successfully applied to the determination of the cited drugs in their commercial pharmaceutical formulations. Statistical comparison of the results with those of reference methods showed excellent agreement and no significant difference in accuracy or precision.
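For context, detection and quantitation limits of this kind are conventionally estimated as LOD = 3.3σ/S and LOQ = 10σ/S from a linear calibration curve, with σ the residual standard deviation and S the slope; the sketch below uses hypothetical calibration points, not the paper's data.

```python
# ICH-style LOD/LOQ estimates from a linear calibration curve:
# LOD = 3.3*sigma/S, LOQ = 10*sigma/S. Calibration points are hypothetical.
import numpy as np

conc = np.array([2.0, 6.0, 10.0, 14.0, 18.0, 24.0])   # ug/mL standards
response = 0.042 * conc + 0.003 + np.random.default_rng(7).normal(0, 0.002, 6)

slope, intercept = np.polyfit(conc, response, 1)
sigma = np.std(response - (slope * conc + intercept), ddof=2)  # residual SD
print(f"LOD = {3.3 * sigma / slope:.3f} ug/mL, "
      f"LOQ = {10 * sigma / slope:.3f} ug/mL")
```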
A High-Precision Counter Using the DSP Technique
2004-09-01
The DSP is not fast enough to process all the 1-second samples, and the cache memory is not sufficient to store all the sampling data, so we cut the... The sampling number in a cycle is not good enough to achieve an accuracy of less than 2×10-11. For this reason, a correlation operation is performed for...
Solnica, Bogdan
2009-01-01
In this issue of Journal of Diabetes Science and Technology, Chang and colleagues present the analytical performance evaluation of the OneTouch® UltraVue™ blood glucose meter. This device is an advanced construction with a color display, used-strip ejector, no-button interface, and short assay time. Accuracy studies were performed using a YSI 2300 analyzer, considered the reference. Altogether, 349 pairs of results covering a wide range of blood glucose concentrations were analyzed. Patients with diabetes performed a significant part of the tests. Obtained results indicate good accuracy of OneTouch UltraVue blood glucose monitoring system, satisfying the International Organization for Standardization recommendations and thereby locating >95% of tests within zone A of the error grid. Results of the precision studies indicate good reproducibility of measurements. In conclusion, the evaluation of the OneTouch UltraVue meter revealed good analytical performance together with convenient handling useful for self-monitoring of blood glucose performed by elderly diabetes patients. PMID:20144432
System considerations for detection and tracking of small targets using passive sensors
NASA Astrophysics Data System (ADS)
DeBell, David A.
1991-08-01
Passive sensors provide only a few discriminants to assist in threat assessment of small targets. Tracking of the small targets provides additional discriminants. This paper discusses the system considerations for tracking small targets using passive sensors, in particular EO sensors. Tracking helps establish good versus bad detections. Discussed are the requirements to be placed on the sensor system's accuracy, with respect to knowledge of the sightline direction. The detection of weak targets sets a requirement for two levels of tracking in order to reduce processor throughput. A system characteristic is the need to track all detections. For low thresholds, this can mean a heavy track burden. Therefore, thresholds must be adaptive in order not to saturate the processors. Second-level tracks must develop a range estimate in order to assess threat. Sensor platform maneuvers are required if the targets are moving. The need for accurate pointing, good stability, and a good update rate will be shown quantitatively, relating to track accuracy and track association.
Guarneri, Paolo; Rocca, Gianpiero; Gobbi, Massimiliano
2008-09-01
This paper deals with the simulation of tire/suspension dynamics using recurrent neural networks (RNNs). RNNs are derived from multilayer feedforward neural networks by adding feedback connections between the output and input layers. The optimal network architecture derives from a parametric analysis based on the optimal tradeoff between network accuracy and size. The neural network can be trained with experimental data obtained in the laboratory from simulated road profiles (cleats). The results obtained from the neural network demonstrate good agreement with the experimental results over a wide range of operating conditions. The NN model can be effectively applied as part of a vehicle system model to accurately predict elastic bushing and tire dynamics behavior. Although the neural network model, as a black-box model, does not provide good insight into the physical behavior of the tire/suspension system, it is a useful tool for assessing vehicle ride and noise, vibration, harshness (NVH) performance due to its good computational efficiency and accuracy.
Local indicators of geocoding accuracy (LIGA): theory and application
Jacquez, Geoffrey M; Rommel, Robert
2009-01-01
Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. Herein lies a paradox for spatial analysis: For a given level of positional error increasing sample density to more accurately follow the underlying population distribution increases perturbability and introduces error into the spatial weights matrix. In some studies positional error may not impact the statistical results, and in others it might invalidate the results. We therefore must understand the relationships between positional accuracy and the perturbability of the spatial weights in order to have confidence in a study's results. PMID:19863795
NASA Astrophysics Data System (ADS)
Poyatos, Rafael; Sus, Oliver; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi
2018-05-01
The ubiquity of missing data in plant trait databases may hinder trait-based analyses of ecological patterns and processes. Spatially explicit datasets with information on intraspecific trait variability are rare but offer great promise in improving our understanding of functional biogeography. At the same time, they offer specific challenges in terms of data imputation. Here we compare statistical imputation approaches, using varying levels of environmental information, for five plant traits (leaf biomass to sapwood area ratio, leaf nitrogen content, maximum tree height, leaf mass per area and wood density) in a spatially explicit plant trait dataset of temperate and Mediterranean tree species (Ecological and Forest Inventory of Catalonia, IEFC, dataset for Catalonia, north-east Iberian Peninsula, 31 900 km2). We simulated gaps at different missingness levels (10-80 %) in a complete trait matrix, and we used overall trait means, species means, k nearest neighbours (kNN), ordinary and regression kriging, and multivariate imputation using chained equations (MICE) to impute missing trait values. We assessed these methods in terms of their accuracy and of their ability to preserve trait distributions, multi-trait correlation structure and bivariate trait relationships. The relatively good performance of mean and species mean imputations in terms of accuracy masked a poor representation of trait distributions and multivariate trait structure. Species identity improved MICE imputations for all traits, whereas forest structure and topography improved imputations for some traits. No method performed best consistently for the five studied traits, but, considering all traits and performance metrics, MICE informed by relevant ecological variables gave the best results. However, at higher missingness (> 30 %), species mean imputations and regression kriging tended to outperform MICE for some traits. MICE informed by relevant ecological variables allowed us to fill the gaps in the IEFC incomplete dataset (5495 plots) and quantify imputation uncertainty. Resulting spatial patterns of the studied traits in Catalan forests were broadly similar when using species means, regression kriging or the best-performing MICE application, but some important discrepancies were observed at the local level. Our results highlight the need to assess imputation quality beyond just imputation accuracy and show that including environmental information in statistical imputation approaches yields more plausible imputations in spatially explicit plant trait datasets.
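A compact stand-in for the MICE procedure compared above, using scikit-learn's experimental IterativeImputer (chained regressions across traits); the trait matrix and simulated gaps are invented.

```python
# MICE-like imputation via chained regressions: each trait is iteratively
# regressed on the others to fill gaps. Trait data and gaps are simulated.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(8)
traits = rng.multivariate_normal(
    [0, 0, 0], [[1, .6, .3], [.6, 1, .4], [.3, .4, 1]], 500)
mask = rng.random(traits.shape) < 0.30          # 30% simulated missingness
incomplete = np.where(mask, np.nan, traits)

imputed = IterativeImputer(max_iter=20, random_state=0).fit_transform(incomplete)
rmse = np.sqrt(((imputed[mask] - traits[mask]) ** 2).mean())
print(f"imputation RMSE: {rmse:.3f}")
# beyond accuracy, also check that distributions and correlations survive
print(np.corrcoef(imputed, rowvar=False).round(2))
```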
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nedic, Vladimir, E-mail: vnedic@kg.ac.rs; Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs; Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs
2014-11-15
Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. Therefore, it is very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. The structure of the traffic flow and the average speed of the traffic flow are chosen as the input variables of the neural network. The output variable of the network is the equivalent noise level in the given time period, L_eq. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed, user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to other statistical methods in traffic noise level prediction. - Highlights: • We propose an ANN model for the prediction of traffic noise. • We developed an originally designed, user-friendly software package. • The results are compared with classical statistical methods. • The ANN model shows much better predictive capabilities than classical methods.
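A rough sketch of the set-up under stated assumptions, not the authors' software: the synthetic generator below mimics a regression-style noise law, and a small scikit-learn MLP is trained to map traffic-flow structure and average speed to L_eq.

```python
# Toy ANN for traffic noise prediction; data generator and network size
# are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
cars = rng.uniform(100, 2000, n)         # light vehicles per hour
heavy = rng.uniform(10, 300, n)          # heavy vehicles per hour
speed = rng.uniform(30, 90, n)           # average flow speed, km/h
# crude regression-style generator: L_eq grows with log flow and with speed
leq = 10 * np.log10(cars + 8 * heavy) + 0.12 * speed + 30 + rng.normal(0, 1.0, n)

X = np.column_stack([cars, heavy, speed])
X_tr, X_te, y_tr, y_te = train_test_split(X, leq, random_state=0)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8),
                                 max_iter=3000, random_state=0))
ann.fit(X_tr, y_tr)
mae = np.mean(np.abs(ann.predict(X_te) - y_te))
print(f"test MAE: {mae:.2f} dB(A)")      # roughly 1 dB(A) on this synthetic data
```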
Tabak, Ying P; Sun, Xiaowu; Nunez, Carlos M; Gupta, Vikas; Johannes, Richard S
2017-03-01
Identifying patients at high risk for readmission early during hospitalization may aid efforts to reduce readmissions. We sought to develop an early readmission risk predictive model using automated clinical data available at hospital admission. We developed an early readmission risk model using a derivation cohort and validated the model with a validation cohort. We used a published Acute Laboratory Risk of Mortality Score as an aggregated measure of clinical severity at admission and the number of hospital discharges in the previous 90 days as a measure of disease progression. We then evaluated the administrative data-enhanced model by adding principal and secondary diagnoses and other variables. We examined the change in the c-statistic when additional variables were added to the model. There were 1,195,640 adult discharges from 70 hospitals, 39.8% male, with a median age of 63 years (first and third quartiles: 43 and 78). The 30-day readmission rate was 11.9% (n=142,211). The early readmission model yielded a graded relationship between readmission and both the Acute Laboratory Risk of Mortality Score and the number of previous discharges within 90 days. The model c-statistic was 0.697 with good calibration. When administrative variables were added to the model, the c-statistic increased to 0.722. Automated clinical data can generate a readmission risk score early in hospitalization with fair discrimination. It may have practical value in aiding early care transitions. Adding administrative data increases predictive accuracy. The administrative data-enhanced model may be used for hospital comparison and outcome research.
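A toy sketch of the c-statistic comparison on simulated data (all variables and coefficients are hypothetical stand-ins for the severity score, prior discharges and administrative diagnoses):

```python
# Base versus administrative-data-enhanced logistic model, compared by
# c-statistic (ROC AUC); cohort and effect sizes are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 20_000
severity = rng.normal(0, 1, n)                 # stand-in for the lab-based score
prior_dc = rng.poisson(0.3, n)                 # discharges in previous 90 days
admin_dx = rng.normal(0, 1, n)                 # stand-in administrative variable
logit = -2.2 + 0.6 * severity + 0.5 * prior_dc + 0.4 * admin_dx
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # 30-day readmission indicator

X_base = np.column_stack([severity, prior_dc])
X_full = np.column_stack([severity, prior_dc, admin_dx])
base = LogisticRegression(max_iter=1000).fit(X_base, y)
full = LogisticRegression(max_iter=1000).fit(X_full, y)
c_base = roc_auc_score(y, base.predict_proba(X_base)[:, 1])
c_full = roc_auc_score(y, full.predict_proba(X_full)[:, 1])
print(f"c-statistic: base {c_base:.3f} -> enhanced {c_full:.3f}")
```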
Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.
Monica, Stefania; Ferrari, Gianluigi
2018-05-17
Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is growing exponentially. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms, with a reduction of the localization error of up to 66%.
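A minimal sketch of the linear least-squares error model (the bias and noise parameters of the simulated UWB ranges are assumptions, not the paper's measurements):

```python
# Fit err ≈ a·d + b to simulated UWB range errors by least squares,
# then subtract the predicted bias from new range estimates.
import numpy as np

rng = np.random.default_rng(4)
true_d = rng.uniform(1, 20, 300)                    # true distances, metres
# assumed distance-dependent bias plus noise in the raw UWB estimates
meas_d = true_d + (0.05 * true_d + 0.10) + rng.normal(0, 0.05, true_d.size)

err = meas_d - true_d
A = np.column_stack([meas_d, np.ones_like(meas_d)]) # regress on measured distance
(a, b), *_ = np.linalg.lstsq(A, err, rcond=None)

corrected = meas_d - (a * meas_d + b)               # bias-corrected ranges
print(f"fitted error model: err ≈ {a:.3f}·d + {b:.3f} m")
print(f"RMS error raw: {np.sqrt(np.mean(err**2)):.3f} m, "
      f"corrected: {np.sqrt(np.mean((corrected - true_d)**2)):.3f} m")
```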
Timsit, J F; Fosse, J P; Troché, G; De Lassence, A; Alberti, C; Garrouste-Orgeas, M; Azoulay, E; Chevret, S; Moine, P; Cohen, Y
2001-06-01
In most databases used to build general severity scores, the median duration of intensive care unit (ICU) stay is less than 3 days. Consequently, these scores are not the most appropriate tools for measuring prognosis in studies dealing with ICU patients hospitalized for more than 72 h. To develop a new prognostic model based on a general severity score (SAPS II), an organ dysfunction score (LOD) and the evolution of both scores during the first 3 days of ICU stay. Prospective multicenter study. Twenty-eight intensive care units (ICUs) in France. A training data set was created with four ICUs during an 18-month period (893 patients). Seventy percent of the patients (628) were medical, aged 66 years. The median SAPS II was 38. The ICU and hospital mortality rates were 22.7% and 30%, respectively. Forty-seven percent (420 patients) were transferred from hospital wards. In this population, the calibration (Hosmer-Lemeshow chi-square: 37.4, P = 0.001) and the discrimination [area under the ROC curve: 0.744 (95% CI: 0.714-0.773)] of the original SAPS II were relatively poor. A validation data set was created with a random panel of 24 French ICUs during March 1999 (312 patients). The LOD and SAPS II scores were calculated during the first (SAPS1, LOD1), second (SAPS2, LOD2), and third (SAPS3, LOD3) calendar days. Alterations in the LOD and SAPS scores were assigned the value "1" when scores increased with time and "0" otherwise. A multivariable logistic regression model was used to select variables measured during the first three calendar days that were independently associated with death. Selected variables were: SAPS II at admission [OR: 1.04 (95% CI: 1.027-1.053) per point], LOD [OR: 1.16 (95% CI: 1.085-1.253) per point], transfer from a ward [OR: 1.74 (95% CI: 1.25-2.42)], as well as SAPS3-SAPS2 alterations [OR: 1.516 (95% CI: 1.04-2.22)] and LOD3-LOD2 alterations [OR: 2.00 (95% CI: 1.29-3.11)]. The final model has good calibration and discrimination properties in the training data set [area under the ROC curve: 0.794 (95% CI: 0.766-0.820), Hosmer-Lemeshow C statistic: 5.56, P = 0.7]. In the validation data set, the model maintained good accuracy [area under the ROC curve: 0.826 (95% CI: 0.780-0.867), Hosmer-Lemeshow C statistic: 7.14, P = 0.5]. The new model, using SAPS II and LOD and their evolution during the first three calendar days, has good discrimination and calibration properties. We propose its use for benchmarking and for evaluating the excess risk of death associated with ICU-acquired nosocomial infections.
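A hedged sketch of the modelling approach on simulated data: binary score-alteration indicators are built as described ("1" if the score increased) and entered into a logistic model, with odds ratios recovered as exp(coefficients). All simulated effect sizes are assumptions, not the study's estimates.

```python
# Toy version of the SAPS II / LOD evolution model on simulated patients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 900
saps_adm = rng.normal(38, 12, n).clip(0)   # SAPS II at admission
lod = rng.poisson(4, n)                    # LOD score
transfer = rng.random(n) < 0.47            # transferred from a ward
saps_up = rng.random(n) < 0.20             # "1" if SAPS3 > SAPS2
lod_up = rng.random(n) < 0.15              # "1" if LOD3 > LOD2
logit = (-4 + 0.04 * saps_adm + 0.15 * lod + 0.55 * transfer
         + 0.40 * saps_up + 0.70 * lod_up)
death = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([saps_adm, lod, transfer, saps_up, lod_up]).astype(float)
model = LogisticRegression(max_iter=1000).fit(X, death)
for name, coef in zip(["SAPS II", "LOD", "transfer", "SAPS up", "LOD up"],
                      model.coef_[0]):
    print(f"OR for {name:8s}: {np.exp(coef):.2f}")   # exp(coefficient)
```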
Using Technology to Prompt Good Questions about Distributions in Statistics
ERIC Educational Resources Information Center
Nabbout-Cheiban, Marie; Fisher, Forest; Edwards, Michael Todd
2017-01-01
The Common Core State Standards for Mathematics envisions data analysis as a key component of K-grade 12 mathematics instruction with statistics introduced in the early grades. Nonetheless, deficiencies in statistical learning persist throughout elementary school and beyond. Too often, mathematics teachers lack the statistical knowledge for…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1982-01-01
A class of goodness-of-fit estimators is found to provide a useful alternative in certain situations to the standard maximum likelihood method, which has some undesirable estimation characteristics when estimating the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined, and example data sets are analyzed.
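A sketch of one member of this class, assuming the Shapiro-Wilk statistic as the criterion: the lognormal threshold is chosen to maximize W computed on log(x - threshold), after which the remaining parameters follow from the logged data. This illustrates the idea, not Kane's exact procedure.

```python
# Threshold estimation for the three-parameter lognormal by maximizing
# the Shapiro-Wilk W statistic of the shifted, logged sample.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(6)
gamma_true = 10.0
x = gamma_true + rng.lognormal(mean=1.0, sigma=0.5, size=200)

def neg_w(gamma):
    if gamma >= x.min():               # threshold must lie below all data
        return np.inf
    w, _ = stats.shapiro(np.log(x - gamma))
    return -w                          # maximize W by minimizing -W

res = optimize.minimize_scalar(neg_w, bounds=(x.min() - 50, x.min() - 1e-6),
                               method="bounded")
gamma_hat = res.x
logs = np.log(x - gamma_hat)           # remaining parameters from logged data
print(f"threshold: {gamma_hat:.2f} (true {gamma_true}), "
      f"mu: {logs.mean():.2f}, sigma: {logs.std(ddof=1):.2f}")
```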
Combining accuracy assessment of land-cover maps with environmental monitoring programs
Stehman, S.V.; Czaplewski, R.L.; Nusser, S.M.; Yang, L.; Zhu, Z.
2000-01-01
A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring protocols are described. These strategies range from a fully integrated accuracy assessment and environmental monitoring protocol, to one in which the protocols operate nearly independently. For all three strategies, features critical to using monitoring data for accuracy assessment include compatibility of the land-cover classification schemes, precisely co-registered sample data, and spatial and temporal compatibility of the map and reference data. Two monitoring programs, the National Resources Inventory (NRI) and the Forest Inventory and Monitoring (FIM), are used to illustrate important features for implementing a combined protocol.
Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P
2018-01-01
Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy, as well as to propose a sample-independent methodology to calculate and display the accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard.
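A small simulation in the spirit of the study (cut-off, noise level and distributions are invented): the numerical relationship between Chol_rapid and Chol_gold is held fixed while the sample distribution changes, and categorical accuracy moves with it.

```python
# Same test-to-gold-standard relationship, different sample distributions,
# different accuracy: the paper's central point in miniature.
import numpy as np

rng = np.random.default_rng(7)
CUTOFF = 5.0   # hypothetical mmol/L threshold for "high cholesterol"

def accuracy(chol_gold):
    # fixed numerical relationship between methods (additive noise only)
    chol_rapid = chol_gold + rng.normal(0, 0.4, chol_gold.size)
    return np.mean((chol_rapid > CUTOFF) == (chol_gold > CUTOFF))

narrow = rng.normal(5.0, 0.5, 100_000)   # many values near the cut-off
wide = rng.normal(5.0, 2.0, 100_000)     # values spread far from the cut-off
print(f"accuracy, narrow distribution: {accuracy(narrow):.2f}")  # lower
print(f"accuracy, wide distribution:   {accuracy(wide):.2f}")    # higher
```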
Modified Distribution-Free Goodness-of-Fit Test Statistic.
Chun, So Yeon; Browne, Michael W; Shapiro, Alexander
2018-03-01
Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.
Computation of large-scale statistics in decaying isotropic turbulence
NASA Technical Reports Server (NTRS)
Chasnov, Jeffrey R.
1993-01-01
We have performed large-eddy simulations of decaying isotropic turbulence to test the prediction of self-similar decay of the energy spectrum and to compute the decay exponents of the kinetic energy. In general, good agreement between the simulation results and the assumption of self-similarity was obtained. However, the statistics of the simulations were insufficient to compute the value of gamma, which corrects the decay exponent when the spectrum follows a k^4 wave-number behavior near k = 0. To obtain good statistics, it was found necessary to average over a large ensemble of turbulent flows.
Multivariate Regression Analysis and Slaughter Livestock,
Keywords: agriculture, economics, meat production, multivariate analysis, regression analysis, animals, weight, costs, predictions, stability, mathematical models, storage, beef, pork, food, statistical data, accuracy
Accuracy and borehole influences in pulsed neutron gamma density logging while drilling.
Yu, Huawei; Sun, Jianmeng; Wang, Jiaxin; Gardner, Robin P
2011-09-01
A new pulsed neutron gamma density (NGD) logging method has been developed to replace radioactive chemical sources in oil logging tools. The present paper describes studies of the near and far density measurement accuracy of NGD logging at two spacings, and of the borehole influences, using Monte Carlo simulation. The results show that the accuracy of the near density is not as good as that of the far density. It is difficult to correct for borehole effects using conventional methods because both the near and far density measurements are significantly sensitive to standoffs and mud properties. Copyright © 2011 Elsevier Ltd. All rights reserved.
The effect of a nurse team leader on communication and leadership in major trauma resuscitations.
Clements, Alana; Curtis, Kate; Horvat, Leanne; Shaban, Ramon Z
2015-01-01
Effective assessment and resuscitation of trauma patients requires an organised, multidisciplinary team. Literature evaluating leadership roles of nurses in trauma resuscitation and their effect on team performance is scarce. To assess the effect of allocating the most senior nurse as team leader of trauma patient assessment and resuscitation on communication, documentation and perceptions of leadership within an Australian emergency department. The study design was a pre-post-test survey of emergency nursing staff (working at resuscitation room level) perceptions of leadership, communication, and documentation before and after the implementation of a nurse leader role. Patient records were audited, focussing on initial resuscitation assessment, treatment, and nursing clinical entry. Descriptive statistical analyses were performed. Communication trended towards improvement. All (100%) respondents post-test stated they had a good to excellent understanding of their role, compared to 93.2% pre-study. The proportion citing an 'intimidating personality' as a negative aspect of communication decreased from 58.1% to 12.5%. The proportion of respondents who rated nursing leadership as good to excellent increased by 6.7%. Accuracy of clinical documentation improved (P = 0.025). Trauma nurse team leaders improve some aspects of communication and leadership. Development of trauma nurse leaders should be encouraged within trauma team training programmes. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
[Identification of adverse events in hospitalised influenza patients].
Aranaz-Andrés, J M; Gea-Velázquez de Castro, M T; Jiménez-Pericás, F; Balbuena-Segura, A I; Meyer-García, M C; López-Fresneña, N; Miralles-Bueno, J J; Obón-Azuara, B; Moliner-Lahoz, J; Aibar-Remón, C
2015-01-01
To test the inter-observer agreement in identifying adverse events (AE) in patients hospitalized with influenza and undergoing precautionary isolation measures. A historical cohort study of 50 patients undergoing isolation measures due to influenza and 50 patients without any isolation measures. The AE incidence ranged from 10 to 26% depending on the observer (26% [95%CI: 17.4%-34.60%], 10% [95%CI: 4.12%-15.88%], and 23% [95%CI: 14.75%-31.25%]). It was always lower in the cohort undergoing the isolation measures. This difference was statistically significant when the precise case definition was applied. Agreement on screening was good (higher than 76%; kappa index between 0.29 and 0.81). Agreement on the precise identification of care-related AE was lower (from 50 to 93.3%; kappa index from 0.20 to 0.70). Before performing an epidemiological study on AE, inter-observer concordance must be analyzed to improve the accuracy of the results and the validity of the study. Studies have different levels of reliability: the kappa index shows high levels for the screening guide, but not for the identification of AE. Without a good methodology, neither the results achieved nor the decisions based on them can be guaranteed. Researchers have to be sure of the method used, which should be as close as possible to the optimal achievable. Copyright © 2014 SECA. Published by Elsevier España. All rights reserved.
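A minimal sketch of the agreement statistics involved, on hypothetical ratings: raw agreement alongside Cohen's kappa, which corrects for chance agreement.

```python
# Inter-observer agreement on AE screening: raw agreement vs Cohen's kappa.
# Ratings and concordance rates are invented for illustration.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(8)
truth = rng.random(100) < 0.2                           # hypothetical AE prevalence
obs1 = np.where(rng.random(100) < 0.9, truth, ~truth)   # 90% concordant rater
obs2 = np.where(rng.random(100) < 0.8, truth, ~truth)   # 80% concordant rater

raw = np.mean(obs1 == obs2)
kappa = cohen_kappa_score(obs1, obs2)
# kappa can stay modest even when raw agreement looks good at low prevalence
print(f"raw agreement: {raw:.2f}, Cohen's kappa: {kappa:.2f}")
```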
NASA Astrophysics Data System (ADS)
Kopaev, A.; Ducarme, B.
2003-04-01
We have used the most recent oceanic tidal models, e.g. FES’99/02, GOT’00, CSR’4, NAO’99 and TPXO’5/6, for tidal gravity loading computations using the LOAD’97 software. The resulting loading vectors were compared against each other in different regions located at different distances from the sea coast. Results indicate good agreement for the majority of models at distances larger than 100-200 km, excluding some regions where mostly CSR’4 and TPXO have problems. Outlying models were rejected for these regions, and mean loading vectors have been calculated for more than 200 tidal gravity stations from the GGP and ICET data banks, representing the state of the art of tidal loading correction. The corresponding errors in d-factors and phase lags are generally smaller than 0.1% and 0.05°, respectively, which means that loading corrections pose no real trouble and that more attention should be paid to calibration values and phase-lag determination accuracy. Corrected values agree with DDW model values very well (within 0.2%) for the majority of GGP stations, whereas some very good ICET tidal gravity stations (mainly from the Chinese network) clearly demonstrate statistically significant anomalies (up to 0.5%) that seem connected neither with calibration troubles nor with loading problems. Various possible explanations, both instrumental and geophysical, will be presented and discussed.
SVM-Based System for Prediction of Epileptic Seizures from iEEG Signal
Cherkassky, Vladimir; Lee, Jieun; Veber, Brandon; Patterson, Edward E.; Brinkmann, Benjamin H.; Worrell, Gregory A.
2017-01-01
Objective This paper describes a data-analytic modeling approach for prediction of epileptic seizures from intracranial electroencephalogram (iEEG) recording of brain activity. Even though it is widely accepted that statistical characteristics of iEEG signal change prior to seizures, robust seizure prediction remains a challenging problem due to subject-specific nature of data-analytic modeling. Methods Our work emphasizes understanding of clinical considerations important for iEEG-based seizure prediction, and proper translation of these clinical considerations into data-analytic modeling assumptions. Several design choices during pre-processing and post-processing are considered and investigated for their effect on seizure prediction accuracy. Results Our empirical results show that the proposed SVM-based seizure prediction system can achieve robust prediction of preictal and interictal iEEG segments from dogs with epilepsy. The sensitivity is about 90–100%, and the false-positive rate is about 0–0.3 times per day. The results also suggest good prediction is subject-specific (dog or human), in agreement with earlier studies. Conclusion Good prediction performance is possible only if the training data contain sufficiently many seizure episodes, i.e., at least 5–7 seizures. Significance The proposed system uses subject-specific modeling and unbalanced training data. This system also utilizes three different time scales during training and testing stages. PMID:27362758
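A rough sketch of the classification set-up under stated assumptions (synthetic features stand in for iEEG-derived statistics; the class imbalance mirrors the rarity of preictal segments; this is not the paper's pipeline):

```python
# SVM separating preictal from interictal segments with unbalanced classes
# handled via class weighting.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(9)
n_inter, n_pre = 2000, 120                       # heavily unbalanced classes
X_inter = rng.normal(0.0, 1.0, (n_inter, 8))     # e.g. per-channel band powers
X_pre = rng.normal(0.8, 1.0, (n_pre, 8))         # assumed preictal feature shift
X = np.vstack([X_inter, X_pre])
y = np.r_[np.zeros(n_inter), np.ones(n_pre)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity: {tp / (tp + fn):.2f}, "
      f"false-positive rate: {fp / (fp + tn):.3f}")
```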
Modelling Multi Hazard Mapping in Semarang City Using GIS-Fuzzy Method
NASA Astrophysics Data System (ADS)
Nugraha, A. L.; Awaluddin, M.; Sasmito, B.
2018-02-01
One important aspect of disaster mitigation planning is hazard mapping. Hazard mapping can provide spatial information on the distribution of locations that are threatened by disaster. Semarang City, the capital of Central Java Province, is one of the cities with high natural disaster intensity. Natural disasters that frequently strike Semarang City include tidal floods, river floods, landslides, and droughts. Therefore, Semarang City needs spatial information from multi-hazard mapping to support its disaster mitigation planning. Multi-hazard maps can be modelled from parameters such as slope, rainfall, land use, and soil type. This modelling is done using a GIS method with scoring and overlay techniques. However, the accuracy of the modelling is better if the GIS method is combined with fuzzy logic techniques, which provide a good classification for determining disaster threats. The Fuzzy-GIS method can build a multi-hazard map of Semarang City that delivers results with good accuracy and an appropriate spread of threat classes, providing disaster information for the disaster mitigation planning of Semarang City. From the multi-hazard modelling using GIS-Fuzzy, the membership type found to have good accuracy is the Gaussian membership, with the smallest RMSE (0.404) and the largest VAF value (72.909%) among the membership types tested.
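An illustrative sketch of the GIS-Fuzzy combination, with assumed membership parameters and equal overlay weights: Gaussian membership functions convert raw hazard parameters into threat degrees in [0, 1], which a weighted overlay aggregates per map cell.

```python
# Gaussian fuzzy membership plus weighted overlay for a multi-hazard score.
# Centres, spreads and overlay weights are hypothetical.
import numpy as np

def gauss_membership(x, centre, sigma):
    """Degree of threat in [0, 1] for parameter value x."""
    return np.exp(-0.5 * ((x - centre) / sigma) ** 2)

rng = np.random.default_rng(10)
n_cells = 5
slope = rng.uniform(0, 40, n_cells)            # slope, degrees
rainfall = rng.uniform(1000, 3500, n_cells)    # rainfall, mm/year

# assumed "most hazardous" centres and spreads for each parameter
m_slope = gauss_membership(slope, centre=40, sigma=15)
m_rain = gauss_membership(rainfall, centre=3500, sigma=900)

hazard = 0.5 * m_slope + 0.5 * m_rain          # assumed equal overlay weights
for i, h in enumerate(hazard):
    print(f"cell {i}: slope {slope[i]:5.1f}°, "
          f"rain {rainfall[i]:6.0f} mm -> hazard {h:.2f}")
```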
Design and Application of Automatic Falling Device for Different Brands of Goods
NASA Astrophysics Data System (ADS)
Yang, Xudong; Ge, Qingkuan; Zuo, Ping; Peng, Tao; Dong, Weifu
2017-12-01
The Goods-Falling device is an important component of an intelligent goods-sorting system, responsible for the temporary storage and counting of goods and for placing them onto the conveyor belt to certain precision requirements. Based on an analysis of the present state of domestic goods-sorting equipment and its actual demands, a vertical Goods-Falling Device is designed and a simulation model of the device is established. Dynamic characteristics, such as the angular error of the opening and closing mechanism, are analyzed with ADAMS software. The simulation results show that the maximum angular error is 0.016 rad. In tests of the device, the goods-falling rate was 7031 items per hour and the falling-position error was within 2 mm, meeting the grasping accuracy requirements of the palletizing robot.