Sample records for "samples improves accuracy"

  1. Improved imputation accuracy in Hispanic/Latino populations with larger and more diverse reference panels: applications in the Hispanic Community Health Study/Study of Latinos (HCHS/SOL)

    PubMed Central

    Nelson, Sarah C.; Stilp, Adrienne M.; Papanicolaou, George J.; Taylor, Kent D.; Rotter, Jerome I.; Thornton, Timothy A.; Laurie, Cathy C.

    2016-01-01

    Imputation is commonly used in genome-wide association studies to expand the set of genetic variants available for analysis. Larger and more diverse reference panels, such as the final Phase 3 of the 1000 Genomes Project, hold promise for improving imputation accuracy in genetically diverse populations such as Hispanics/Latinos in the USA. Here, we sought to empirically evaluate imputation accuracy when imputing to a 1000 Genomes Phase 3 versus a Phase 1 reference, using participants from the Hispanic Community Health Study/Study of Latinos. Our assessments included calculating the correlation between imputed and observed allelic dosage in a subset of samples genotyped on a supplemental array. We observed that the Phase 3 reference yielded higher accuracy at rare variants, but that the two reference panels were comparable at common variants. At a sample level, the Phase 3 reference improved imputation accuracy in Hispanic/Latino samples from the Caribbean more than for Mainland samples, which we attribute primarily to the additional reference panel samples available in Phase 3. We conclude that a 1000 Genomes Project Phase 3 reference panel can yield improved imputation accuracy compared with Phase 1, particularly for rare variants and for samples of certain genetic ancestry compositions. Our findings can inform imputation design for other genome-wide association studies of participants with diverse ancestries, especially as larger and more diverse reference panels continue to become available. PMID:27346520
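
    The accuracy metric described above can be made concrete with a short sketch (ours, not the authors' code): for each variant, compute the squared Pearson correlation between the imputed allelic dosage and the genotype observed on the supplemental array. Array names and shapes are illustrative.

    ```python
    import numpy as np

    def dosage_r2(imputed: np.ndarray, observed: np.ndarray) -> np.ndarray:
        """Per-variant squared correlation for (n_variants, n_samples) arrays.

        imputed: continuous allelic dosages in [0, 2]; observed: genotypes 0/1/2.
        Monomorphic variants (zero variance) yield NaN and are typically excluded.
        """
        imp = imputed - imputed.mean(axis=1, keepdims=True)
        obs = observed - observed.mean(axis=1, keepdims=True)
        num = (imp * obs).sum(axis=1)
        den = np.sqrt((imp ** 2).sum(axis=1) * (obs ** 2).sum(axis=1))
        return (num / den) ** 2

    # Accuracy is then typically summarized within minor-allele-frequency bins to
    # compare reference panels at rare versus common variants, as the study does.
    ```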

  2. Improved technical success and radiation safety of adrenal vein sampling using rapid, semi-quantitative point-of-care cortisol measurement.

    PubMed

    Page, Michael M; Taranto, Mario; Ramsay, Duncan; van Schie, Greg; Glendenning, Paul; Gillett, Melissa J; Vasikaran, Samuel D

    2018-01-01

Objective: Primary aldosteronism is a curable cause of hypertension which can be treated surgically or medically depending on the findings of adrenal vein sampling studies. Adrenal vein sampling studies are technically demanding, with a high failure rate in many centres. The use of intraprocedural cortisol measurement could improve the success rate of adrenal vein sampling but may be impracticable due to cost and effects on procedural duration. Design: Retrospective review of the results of adrenal vein sampling procedures since the commencement of point-of-care cortisol measurement using a novel single-use semi-quantitative measuring device for cortisol, the adrenal vein sampling Accuracy Kit; routine use of the device for intraprocedural measurement of cortisol commenced in 2016. The success rate and complications of adrenal vein sampling procedures before and after introduction of the adrenal vein sampling Accuracy Kit were compared. Results: The technical success rate of adrenal vein sampling increased from 63% of 99 procedures to 90% of 48 procedures (P = 0.0007) after implementation of the adrenal vein sampling Accuracy Kit. Failure of right adrenal vein cannulation was the main reason for an unsuccessful study. The median radiation dose decreased from 34.2 Gy·cm² (interquartile range, 15.8-85.9) to 15.7 Gy·cm² (6.9-47.3) (P = 0.009). No complications were noted, and implementation costs were minimal. Conclusions: Point-of-care cortisol measurement during adrenal vein sampling improved cannulation success rates and reduced radiation exposure. The use of the adrenal vein sampling Accuracy Kit is now standard practice at our centre.

  3. [Combining speech sample and feature bilateral selection algorithm for classification of Parkinson's disease].

    PubMed

    Zhang, Xiaoheng; Wang, Lirui; Cao, Yao; Wang, Pin; Zhang, Cheng; Yang, Liuyang; Li, Yongming; Zhang, Yanling; Cheng, Oumei

    2018-02-01

Diagnosis of Parkinson's disease (PD) based on speech data has been shown in recent years to be effective. However, current research focuses on feature extraction and classifier design and does not consider instance selection. Our previous work showed that instance selection can improve classification accuracy; however, the relationship between speech samples and features has so far received little attention. Therefore, this paper proposes a new PD diagnosis algorithm that simultaneously selects speech samples and features, based on a relevant-feature weighting algorithm and a multiple kernel method, so as to exploit their synergy and thereby improve classification accuracy. Experimental results showed that the proposed algorithm clearly improved classification accuracy, achieving a mean classification accuracy of 82.5%, which was 30.5% higher than that of the comparison algorithm. In addition, the proposed algorithm detected synergy effects between speech samples and features, which is valuable for speech-marker extraction.

  4. A Classification of Remote Sensing Image Based on Improved Compound Kernels of Svm

    NASA Astrophysics Data System (ADS)

    Zhao, Jianing; Gao, Wanlin; Liu, Zili; Mou, Guifen; Lu, Lin; Yu, Lina

SVM, developed from statistical learning theory, achieves high classification accuracy on remote sensing (RS) images even with small numbers of training samples, which makes SVM methods well suited to RS classification. The traditional RS classification method combines visual interpretation with computer classification; SVM-based classification improves accuracy considerably while saving much of the labor and time spent interpreting images and collecting training samples. Kernel functions play an important part in the SVM algorithm. The method presented here uses an improved compound kernel function and therefore achieves higher classification accuracy on RS images. Moreover, the compound kernel improves the generalization and learning ability of the kernel.
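
    The abstract does not give the exact kernel, but a minimal sketch of one common compound kernel, a convex mix of a local RBF kernel and a global polynomial kernel fed to an SVM as a precomputed Gram matrix, looks like this (the weight `lam` and kernel parameters are illustrative, not values from the paper):

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

    def compound_kernel(A, B, lam=0.6, gamma=0.5, degree=3):
        # Convex combination: lam weights the local (RBF) versus global (poly) part.
        return lam * rbf_kernel(A, B, gamma=gamma) + (1 - lam) * polynomial_kernel(A, B, degree=degree)

    X_train = np.random.rand(100, 6)        # stand-in for per-pixel band values
    y_train = np.random.randint(0, 4, 100)  # stand-in for land-cover class labels
    X_test = np.random.rand(20, 6)

    clf = SVC(kernel="precomputed")
    clf.fit(compound_kernel(X_train, X_train), y_train)
    pred = clf.predict(compound_kernel(X_test, X_train))
    ```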

  5. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    NASA Technical Reports Server (NTRS)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
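
    The idea can be illustrated under our own simplifying assumptions (this is not the authors' implementation): use the first-order Taylor expansion of the response, built from one cheap sensitivity derivative, as a control variate whose mean is known exactly for Gaussian inputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 1.0, 0.2

    def f(x):            # stand-in for an expensive analysis code
        return np.exp(x) * np.sin(x)

    def dfdx(x):         # stand-in for the cheap sensitivity derivative
        return np.exp(x) * (np.sin(x) + np.cos(x))

    x = rng.normal(mu, sigma, 10_000)
    g = f(mu) + dfdx(mu) * (x - mu)   # Taylor control variate; E[g] = f(mu) exactly
    plain = f(x)
    controlled = plain - g + f(mu)    # same mean as f(x), much lower variance

    print(plain.mean(), plain.std() / np.sqrt(x.size))
    print(controlled.mean(), controlled.std() / np.sqrt(x.size))
    ```

    Because the control variate subtracts the locally linear part of the response, the residual variance reflects only the nonlinearity of f over the input distribution, which is where the order-of-magnitude gains reported above come from.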

  6. Study design requirements for RNA sequencing-based breast cancer diagnostics.

    PubMed

    Mer, Arvind Singh; Klevebring, Daniel; Grönberg, Henrik; Rantalainen, Mattias

    2016-02-01

    Sequencing-based molecular characterization of tumors provides information required for individualized cancer treatment. There are well-defined molecular subtypes of breast cancer that provide improved prognostication compared to routine biomarkers. However, molecular subtyping is not yet implemented in routine breast cancer care. Clinical translation is dependent on subtype prediction models providing high sensitivity and specificity. In this study we evaluate sample size and RNA-sequencing read requirements for breast cancer subtyping to facilitate rational design of translational studies. We applied subsampling to ascertain the effect of training sample size and the number of RNA sequencing reads on classification accuracy of molecular subtype and routine biomarker prediction models (unsupervised and supervised). Subtype classification accuracy improved with increasing sample size up to N = 750 (accuracy = 0.93), although with a modest improvement beyond N = 350 (accuracy = 0.92). Prediction of routine biomarkers achieved accuracy of 0.94 (ER) and 0.92 (Her2) at N = 200. Subtype classification improved with RNA-sequencing library size up to 5 million reads. Development of molecular subtyping models for cancer diagnostics requires well-designed studies. Sample size and the number of RNA sequencing reads directly influence accuracy of molecular subtyping. Results in this study provide key information for rational design of translational studies aiming to bring sequencing-based diagnostics to the clinic.
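
    A hedged sketch of the subsampling design described above: draw nested training sets of increasing size, fit a classifier, and trace test accuracy as a learning curve. The classifier and data are placeholders for the study's subtype predictors, and scikit-learn is assumed.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X = np.random.rand(1000, 50)        # stand-in for per-sample expression features
    y = np.random.randint(0, 4, 1000)   # stand-in for molecular subtype labels
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=250, random_state=0)

    for n in (50, 100, 200, 350, 750):  # training sizes echoing those reported above
        idx = np.random.choice(len(X_tr), size=n, replace=False)
        acc = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx]).score(X_te, y_te)
        print(n, round(acc, 3))
    ```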

  7. The systematic component of phylogenetic error as a function of taxonomic sampling under parsimony.

    PubMed

    Debry, Ronald W

    2005-06-01

    The effect of taxonomic sampling on phylogenetic accuracy under parsimony is examined by simulating nucleotide sequence evolution. Random error is minimized by using very large numbers of simulated characters. This allows estimation of the consistency behavior of parsimony, even for trees with up to 100 taxa. Data were simulated on 8 distinct 100-taxon model trees and analyzed as stratified subsets containing either 25 or 50 taxa, in addition to the full 100-taxon data set. Overall accuracy decreased in a majority of cases when taxa were added. However, the magnitude of change in the cases in which accuracy increased was larger than the magnitude of change in the cases in which accuracy decreased, so, on average, overall accuracy increased as more taxa were included. A stratified sampling scheme was used to assess accuracy for an initial subsample of 25 taxa. The 25-taxon analyses were compared to 50- and 100-taxon analyses that were pruned to include only the original 25 taxa. On average, accuracy for the 25 taxa was improved by taxon addition, but there was considerable variation in the degree of improvement among the model trees and across different rates of substitution.

  8. Comparison of Hybrid Classifiers for Crop Classification Using Normalized Difference Vegetation Index Time Series: A Case Study for Major Crops in North Xinjiang, China

    PubMed Central

    Hao, Pengyu; Wang, Li; Niu, Zheng

    2015-01-01

A range of single classifiers have been proposed to classify crop types using time-series vegetation indices, and hybrid classifiers are used to improve discriminatory power. Traditional fusion rules use the product of multiple single classifiers, but that strategy cannot integrate the classification output of machine learning classifiers. In this research, the performance of two hybrid strategies, multiple voting (M-voting) and probabilistic fusion (P-fusion), for crop classification using NDVI time series was tested with different training sample sizes at both pixel and object levels, with two representative counties in north Xinjiang selected as the study area; a P-fusion sketch follows this record. The single classifiers employed were Random Forest (RF), Support Vector Machine (SVM), and See5 (C5.0). The results indicated that classification performance improved substantially with the number of training samples (mean overall accuracy increased by 5%~10%, and the standard deviation of overall accuracy fell by around 1%), and that when the training sample size was small (50 or 100 training samples), hybrid classifiers substantially outperformed single classifiers, with mean overall accuracy higher by 1%~2%. However, when abundant training samples (4,000) were employed, single classifiers could achieve good classification accuracy, and all classifiers obtained similar performances. Additionally, although object-based classification did not improve accuracy, it resulted in greater visual appeal, especially in study areas with a heterogeneous cropping pattern. PMID:26360597
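
    A minimal sketch of the P-fusion idea under our reading of the abstract: average the per-class posterior probabilities of the single classifiers and take the argmax (M-voting would instead take a majority vote of the hard labels). Only two of the three classifiers are shown, since See5/C5.0 has no scikit-learn equivalent; data and settings are placeholders.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC

    X_train = np.random.rand(200, 23)        # stand-in for NDVI time-series features
    y_train = np.random.randint(0, 5, 200)   # stand-in for crop-class labels
    X_test = np.random.rand(50, 23)

    models = [RandomForestClassifier(n_estimators=200).fit(X_train, y_train),
              SVC(probability=True).fit(X_train, y_train)]

    # P-fusion: average class-probability outputs, then pick the most probable class.
    proba = np.mean([m.predict_proba(X_test) for m in models], axis=0)
    fused = proba.argmax(axis=1)
    ```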

  9. Determination of Monensin in Bovine Tissues: A Bridging Study Comparing the Bioautographic Method (FSIS CLG-MON) with a Liquid Chromatography-Tandem Mass Spectrometry Method (OMA 2011.24).

    PubMed

    Mizinga, Kemmy M; Burnett, Thomas J; Brunelle, Sharon L; Wallace, Michael A; Coleman, Mark R

    2018-05-01

The U.S. Department of Agriculture, Food Safety Inspection Service regulatory method for monensin, Chemistry Laboratory Guidebook CLG-MON, is a semiquantitative bioautographic method adopted in 1991. Official Method of Analysis (OMA) 2011.24, a modern quantitative and confirmatory LC-tandem MS method, uses no chlorinated solvents and has several advantages, including ease of use, ready availability of reagents and materials, shorter run time, and higher throughput than CLG-MON. Therefore, a bridging study was conducted to support the replacement of method CLG-MON with OMA 2011.24 for regulatory use. Using fortified bovine tissue samples, CLG-MON yielded accuracies of 80-120% in 44 of the 56 samples tested (one sample had no result, six samples had accuracies of >120%, and five samples had accuracies of 40-160%), but the semiquantitative nature of CLG-MON prevented assessment of precision, whereas OMA 2011.24 had accuracies of 88-110% and RSDr of 0.00-15.6%. Incurred residue results corroborated these results, demonstrating improved accuracy (83.3-114%) and good precision (RSDr of 2.6-20.5%) for OMA 2011.24 compared with CLG-MON (accuracy generally within 80-150%, with exceptions). Furthermore, χ² analysis revealed no statistically significant difference between the two methods. Thus, the microbiological activity of monensin correlated with the determination of monensin A in bovine tissues, and OMA 2011.24 provided improved accuracy and precision over CLG-MON.

  10. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of the averaging of under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy improves markedly. This is especially true for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply at a lower spatial scale should be further investigated.

  11. Analysis of spatial distribution of land cover maps accuracy

    NASA Astrophysics Data System (ADS)

    Khatami, R.; Mountrakis, G.; Stehman, S. V.

    2017-12-01

Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. This research is the first to incorporate the spectral domain as an explanatory feature space for interpolating classification accuracy. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domains yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
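
    One way to picture the interpolation at the core of these methods (our sketch, assuming a Gaussian interpolation function and an illustrative bandwidth): each test-sample pixel carries a 0/1 "classified correctly" indicator, and kernel smoothing in the chosen domain turns those indicators into a continuous accuracy surface.

    ```python
    import numpy as np

    def predict_accuracy(train_feats, correct, query_feats, bandwidth=1.0):
        """Nadaraya-Watson estimate of P(correct) at each query point.

        train_feats: (n, d) features of test-sample pixels; correct: 0/1 indicators;
        query_feats: (m, d) features of the pixels to predict.
        """
        d2 = ((query_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))        # Gaussian kernel weights
        return (w * correct).sum(axis=1) / w.sum(axis=1)

    # Spatial domain: feats are (x, y) pixel coordinates of the test sample;
    # spectral domain: feats are the pixel's band values. Doing this per class,
    # as the study recommends, means fitting one surface per map class.
    ```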

  12. On-line analysis of algae in water by discrete three-dimensional fluorescence spectroscopy.

    PubMed

    Zhao, Nanjing; Zhang, Xiaoling; Yin, Gaofang; Yang, Ruifang; Hu, Li; Chen, Shuang; Liu, Jianguo; Liu, Wenqing

    2018-03-19

To address the problem of on-line measurement and classification of algae in water, a method for algae classification and concentration determination based on discrete three-dimensional fluorescence spectra was studied in this work. The discrete three-dimensional fluorescence spectra of twelve common species of algae belonging to five categories were analyzed, discrete three-dimensional standard spectra of the five categories were built, and recognition, classification and concentration prediction of algae categories were realized by coupling the discrete three-dimensional fluorescence spectra with non-negative weighted least squares linear regression analysis. The results show that similarities between the discrete three-dimensional standard spectra of different categories were reduced and the accuracies of recognition, classification and concentration prediction of the algae categories were significantly improved. Compared with the chlorophyll a fluorescence excitation spectra method, the recognition accuracy rate in pure samples by discrete three-dimensional fluorescence spectra improved by 1.38%, and the recovery rate and classification accuracy in pure diatom samples by 34.1% and 46.8%, respectively; the recognition accuracy rate in mixed samples was enhanced by 26.1%, the recovery rate of mixed samples with Chlorophyta by 37.8%, and the classification accuracy of mixed samples with diatoms by 54.6%.
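
    The regression step can be sketched as follows, assuming plain non-negative least squares via SciPy (the paper's weighted variant would additionally apply per-wavelength weights); the standard spectra and mixture below are synthetic placeholders.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Columns: the five category standard spectra, each flattened from a discrete
    # 3-D (excitation x emission) fluorescence spectrum into a vector.
    standards = np.abs(np.random.rand(300, 5))

    # Synthetic mixed sample: 70% of category 0 plus 30% of category 2.
    sample = standards @ np.array([0.7, 0.0, 0.3, 0.0, 0.0])

    coef, residual = nnls(standards, sample)   # non-negative abundance coefficients
    category = int(np.argmax(coef))            # recognized dominant category
    # The coefficients, once calibrated, also give the concentration prediction.
    ```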

  13. Improving the sensitivity and accuracy of gamma activation analysis for the rapid determination of gold in mineral ores.

    PubMed

    Tickner, James; Ganly, Brianna; Lovric, Bojan; O'Dwyer, Joel

    2017-04-01

Mining companies rely on chemical analysis methods to determine concentrations of gold in mineral ore samples. As gold is often mined commercially at concentrations around 1 part-per-million, it is necessary for any analysis method to provide good sensitivity as well as high absolute accuracy. We describe work to improve both the sensitivity and accuracy of the gamma activation analysis (GAA) method for gold. We present analysis results for several suites of ore samples and discuss the design of a GAA facility designed to replace conventional chemical assay in industrial applications.

  14. Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.

    PubMed

    Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto

    2007-07-01

To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of literature published in 2005. The frequency of reported sample size calculations and the sample sizes themselves were extracted from the published literature. A manual search of five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 studies were on diagnostic accuracy. One study reported that sample size was calculated before initiating the study; another reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9), and the median prevalence of the target condition was 50.5%. Only a few studies considered sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards of design and reporting of diagnostic studies is warranted.

  15. Using known map category marginal frequencies to improve estimates of thematic map accuracy

    NASA Technical Reports Server (NTRS)

    Card, D. H.

    1982-01-01

    By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.
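
    Card's estimators can be written down compactly; the sketch below (our illustration, with made-up numbers) rescales the error-matrix rows by the known map-category area proportions so that all accuracy measures are area-weighted.

    ```python
    import numpy as np

    # Error matrix from the test sample: rows = map class, columns = reference class.
    n = np.array([[45.0, 5.0],
                  [10.0, 40.0]])
    w = np.array([0.8, 0.2])   # known map-category marginal (area) proportions

    # Cell probabilities: each row scaled to its known marginal.
    p = n / n.sum(axis=1, keepdims=True) * w[:, None]

    overall_accuracy = np.trace(p)                   # area-weighted overall accuracy
    users_accuracy = np.diag(p) / p.sum(axis=1)      # per map class
    producers_accuracy = np.diag(p) / p.sum(axis=0)  # per reference class
    ```

    Because the marginals are known rather than estimated, these estimators are valid regardless of how the per-category sample sizes were chosen, which is the point the abstract makes about optimal sample-size selection being moot here.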

  16. Performance Evaluation and Analysis for Gravity Matching Aided Navigation.

    PubMed

    Wu, Lin; Wang, Hubiao; Chai, Hua; Zhang, Lu; Hsu, Houtse; Wang, Yong

    2017-04-05

Simulation tests were carried out to evaluate the performance of gravity matching aided navigation (GMAN). This study focused on four essential factors to quantitatively evaluate the performance: gravity database (DB) resolution, fitting degree of gravity measurements, number of samples in matching, and gravity changes in the matching area. A marine gravity anomaly DB derived from satellite altimetry was employed, and actual dynamic gravimetry accuracy and operating conditions were used as references in designing the simulation parameters. The results verified that improvements in DB resolution, gravimetry accuracy, number of measurement samples, or gravity changes in the matching area generally led to higher positioning accuracies, although their effects differed and were interrelated. Moreover, three typical positioning accuracy targets of GMAN were proposed, and the conditions to achieve these targets were derived from the analysis of several different system requirements. Finally, various approaches were provided to improve the positioning accuracy of GMAN.

  17. Performance Evaluation and Analysis for Gravity Matching Aided Navigation

    PubMed Central

    Wu, Lin; Wang, Hubiao; Chai, Hua; Zhang, Lu; Hsu, Houtse; Wang, Yong

    2017-01-01

Simulation tests were carried out to evaluate the performance of gravity matching aided navigation (GMAN). This study focused on four essential factors to quantitatively evaluate the performance: gravity database (DB) resolution, fitting degree of gravity measurements, number of samples in matching, and gravity changes in the matching area. A marine gravity anomaly DB derived from satellite altimetry was employed, and actual dynamic gravimetry accuracy and operating conditions were used as references in designing the simulation parameters. The results verified that improvements in DB resolution, gravimetry accuracy, number of measurement samples, or gravity changes in the matching area generally led to higher positioning accuracies, although their effects differed and were interrelated. Moreover, three typical positioning accuracy targets of GMAN were proposed, and the conditions to achieve these targets were derived from the analysis of several different system requirements. Finally, various approaches were provided to improve the positioning accuracy of GMAN. PMID:28379178

  18. A review of accuracy assessment for object-based image analysis: From per-pixel to per-polygon approaches

    NASA Astrophysics Data System (ADS)

    Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul

    2018-07-01

Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map compares to reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most frequently used method among probability sampling strategies, slightly more than stratified sampling. Office-interpreted remotely sensed data were the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase in OBIA articles using per-polygon approaches compared to per-pixel approaches for accuracy assessment. We clarify the impacts of the per-pixel versus the per-polygon approaches respectively on sampling, response design and accuracy analysis. Our review defines the technical and methodological needs in the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.

  19. Predictive accuracy of combined genetic and environmental risk scores.

    PubMed

    Dudbridge, Frank; Pashayan, Nora; Yang, Jian

    2018-02-01

The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores.
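
    As a hedged numerical illustration of the two combination rules discussed (simple addition versus a weighted sum that accounts for correlation), one can simulate correlated scores under a liability-threshold model; every parameter below is invented for illustration, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    rho = 0.2                                   # correlation between G and E scores
    cov = np.array([[1.0, rho], [rho, 1.0]])
    beta = np.array([0.3, 0.4])                 # effects of G and E on liability

    scores = rng.multivariate_normal([0.0, 0.0], cov, 100_000)
    liability = scores @ beta + rng.normal(0, 1, 100_000)
    case = liability > np.quantile(liability, 0.9)   # 10% disease prevalence

    simple = scores.sum(axis=1)                      # simple addition of the scores
    weighted = scores @ np.linalg.solve(cov, beta)   # correlation-aware weighted sum

    def auc(score, case):
        """Rank-based AUC: P(random case outscores random control)."""
        r = score.argsort().argsort() + 1
        n1 = case.sum()
        return (r[case].sum() - n1 * (n1 + 1) / 2) / (n1 * (len(case) - n1))

    print(auc(simple, case), auc(weighted, case))
    ```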

  20. Predictive accuracy of combined genetic and environmental risk scores

    PubMed Central

Dudbridge, Frank; Pashayan, Nora; Yang, Jian

    2017-01-01

The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores. PMID:29178508

  21. Improved Statistical Sampling and Accuracy with Accelerated Molecular Dynamics on Rotatable Torsions.

    PubMed

    Doshi, Urmi; Hamelberg, Donald

    2012-11-13

In enhanced sampling techniques, the precision of the reweighted ensemble properties is often decreased due to large variation in statistical weights and reduction in the effective sampling size. To abate this reweighting problem, here, we propose a general accelerated molecular dynamics (aMD) approach in which only the rotatable dihedrals are subjected to aMD (RaMD), unlike the typical implementation wherein all dihedrals are boosted (all-aMD). Nonrotatable and improper dihedrals are only marginally important to conformational changes or the different rotameric states. Not accelerating them avoids the sharp increases in the potential energies due to small deviations from their minimum energy conformations and leads to improvement in the precision of RaMD. We present benchmark studies on two model dipeptides, Ace-Ala-Nme and Ace-Trp-Nme, simulated with normal MD, all-aMD, and RaMD. We carry out a systematic comparison between the performances of both forms of aMD using a theory that allows quantitative estimation of the effective number of sampled points and the associated uncertainty. Our results indicate that, for the same level of acceleration and simulation length as used in all-aMD, RaMD results in significantly less loss in the effective sample size and, hence, increased accuracy in the sampling of φ-ψ space. RaMD yields an accuracy comparable to that of all-aMD from simulation lengths 5 to 1000 times shorter, depending on the peptide and the acceleration level. This improvement in speed and accuracy over all-aMD is striking, and suggests RaMD is a promising method for sampling larger biomolecules.
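
    The reweighting problem that motivates RaMD can be sketched directly (our illustration; the boost energies below are synthetic): frames are reweighted by exp(ΔV/kT), and Kish's effective sample size shows how the spread of boost energies erodes precision. Boosting fewer dihedrals keeps ΔV small and the effective size large.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    kT = 0.596                           # kcal/mol at ~300 K
    dV = rng.gamma(2.0, 1.0, 50_000)     # per-frame boost energies (placeholder)

    w = np.exp(dV / kT)                  # aMD reweighting factors
    w /= w.sum()
    n_eff = 1.0 / np.sum(w ** 2)         # Kish effective sample size
    print(f"effective frames: {n_eff:.0f} of {dV.size}")

    # A reweighted ensemble average of any per-frame observable A is (w * A).sum();
    # the smaller n_eff is, the noisier that average becomes.
    ```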

  22. Occupational exposure decisions: can limited data interpretation training help improve accuracy?

    PubMed

    Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul

    2009-06-01

Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment and other programs designed to protect workers. A desktop study was performed using videos, task information and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered to participants, in which they were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants then received "rule of thumb" training in exposure data interpretation, consisting of a simple set of rules for estimating 95th percentiles for small data sets from a log-normal population. The DIT was given to each participant before and after this training. Results of each DIT and the qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments for all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DITs and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training, which also reduced the amount of bias in both. The mean DIT percent-correct score increased from 47 to 64% after the training (P < 0.001), and the accuracy of quantitative desktop judgments increased from 43 to 63% correct (P < 0.001). The training did not significantly affect the accuracy of qualitative desktop judgments. The finding that even simple statistical rules of thumb significantly improve judgment accuracy suggests that hygienists should routinely use statistical tools when making exposure judgments from monitoring data.
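
    The abstract does not spell out the rule being trained, but a standard small-sample rule of thumb for a log-normal exposure distribution estimates the 95th percentile as exp(mean + 1.645 × SD) of the log-transformed results, which can then be compared against the exposure limit. The sample values below are invented for illustration.

    ```python
    import numpy as np

    x = np.array([0.12, 0.30, 0.21, 0.45, 0.18])  # illustrative air samples, mg/m^3
    logs = np.log(x)
    x95 = np.exp(logs.mean() + 1.645 * logs.std(ddof=1))  # 95th percentile estimate
    print(f"estimated 95th percentile: {x95:.2f} mg/m^3")
    ```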

  23. Enhancement of the spectral selectivity of complex samples by measuring them in a frozen state at low temperatures in order to improve accuracy for quantitative analysis. Part II. Determination of viscosity for lube base oils using Raman spectroscopy.

    PubMed

    Kim, Mooeung; Chung, Hoeil

    2013-03-07

The use of selectivity-enhanced Raman spectra of lube base oil (LBO) samples, collected while the samples were frozen at low temperatures, was effective in improving the accuracy of determining the kinematic viscosity at 40 °C (KV@40). Collecting Raman spectra from samples cooled to around -160 °C provided the most accurate measurement of KV@40. The components of the LBO samples were mainly long-chain hydrocarbons whose molecular structures deform when frozen, and the differing structural deformability of the components enhanced spectral selectivity among the samples. To study the structural variation of components as the sample temperature changes from cryogenic to ambient conditions, n-heptadecane and pristane (2,6,10,14-tetramethylpentadecane) were selected as representative components of LBO samples, and their temperature-induced spectral features as well as the corresponding spectral loadings were investigated. A two-dimensional (2D) correlation analysis was also employed to explain the origin of the improved accuracy. The asynchronous 2D correlation pattern was simplest at the optimal temperature, indicating the occurrence of distinct and selective spectral variations, which enabled the variation in KV@40 of the LBO samples to be more accurately assessed.

  24. A systematic review of the PTSD Checklist's diagnostic accuracy studies using QUADAS.

    PubMed

    McDonald, Scott D; Brown, Whitney L; Benesek, John P; Calhoun, Patrick S

    2015-09-01

Despite the popularity of the PTSD Checklist (PCL) as a clinical screening test, there has been no comprehensive quality review of studies evaluating its diagnostic accuracy. A systematic quality assessment of 22 diagnostic accuracy studies of the English-language PCL using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) assessment tool was conducted to examine (a) the quality of diagnostic accuracy studies of the PCL, and (b) whether quality has improved since the 2003 STAndards for the Reporting of Diagnostic accuracy studies (STARD) initiative regarding reporting guidelines for diagnostic accuracy studies. Three raters independently applied the QUADAS tool to each study, and a consensus among the 4 authors is reported. Findings indicated that although studies generally met standards in several quality areas, there is still room for improvement. Areas for improvement include establishing representativeness, adequately describing clinical and demographic characteristics of the sample, and presenting better descriptions of important aspects of test and reference standard execution. Only 2 studies met each of the 14 quality criteria. In addition, study quality has not appreciably improved since the publication of the STARD Statement in 2003. Recommendations for the improvement of diagnostic accuracy studies of the PCL are discussed.

  25. High-precision radiometric tracking for planetary approach and encounter in the inner solar system

    NASA Technical Reports Server (NTRS)

    Christensen, C. S.; Thurman, S. W.; Davidson, J. M.; Finger, M. H.; Folkner, W. M.

    1989-01-01

    The benefits of improved radiometric tracking data have been studied for planetary approach within the inner Solar System using the Mars Rover Sample Return trajectory as a model. It was found that the benefit of improved data to approach and encounter navigation was highly dependent on the a priori uncertainties assumed for several non-estimated parameters, including those for frame-tie, Earth orientation, troposphere delay, and station locations. With these errors at their current levels, navigational performance was found to be insensitive to enhancements in data accuracy. However, when expected improvements in these errors are modeled, performance with current-accuracy data significantly improves, with substantial further improvements possible with enhancements in data accuracy.

  26. Alternative Loglinear Smoothing Models and Their Effect on Equating Function Accuracy. Research Report. ETS RR-09-48

    ERIC Educational Resources Information Center

    Moses, Tim; Holland, Paul

    2009-01-01

    This simulation study evaluated the potential of alternative loglinear smoothing strategies for improving equipercentile equating function accuracy. These alternative strategies use cues from the sample data to make automatable and efficient improvements to model fit, either through the use of indicator functions for fitting large residuals or by…

  27. Effects of aniracetam on delayed matching-to-sample performance of monkeys and pigeons.

    PubMed

    Pontecorvo, M J; Evans, H L

    1985-05-01

A 3-choice, variable-delay, matching-to-sample procedure was used to evaluate drugs in pigeons and monkeys tested under nearly identical conditions. Aniracetam (Roche 13-5057) improved accuracy of matching at all retention intervals following oral administration (12.5, 25 and 50 mg/kg) to macaque monkeys, with a maximal effect at 25 mg/kg. Aniracetam also antagonized scopolamine-induced impairment of the monkeys' performance. Intramuscular administration of these same doses of aniracetam produced a similar, but not significant, trend toward improved matching accuracy in pigeons.

  28. Accuracy improvement of quantitative analysis by spatial confinement in laser-induced breakdown spectroscopy.

    PubMed

    Guo, L B; Hao, Z Q; Shen, M; Xiong, W; He, X N; Xie, Z Q; Gao, M; Li, X Y; Zeng, X Y; Lu, Y F

    2013-07-29

To improve the accuracy of quantitative analysis in laser-induced breakdown spectroscopy, the plasma produced by a Nd:YAG laser from steel targets was confined by a cavity. A number of elements with low concentrations, such as vanadium (V), chromium (Cr), and manganese (Mn), in the steel samples were investigated. After optimization of the cavity dimension and laser fluence, significant enhancement factors of 4.2, 3.1, and 2.87 in the emission intensity of the V, Cr, and Mn lines, respectively, were achieved at a laser fluence of 42.9 J/cm(2) using a hemispherical cavity (diameter: 5 mm). More importantly, the correlation coefficient of the V I 440.85 nm/Fe I 438.35 nm line pair was increased from 0.946 (without the cavity) to 0.981 (with the cavity), and similar results were obtained for Cr I 425.43/Fe I 425.08 nm and Mn I 476.64/Fe I 492.05 nm. It was therefore demonstrated that the accuracy of quantitative analysis of low-concentration elements in steel samples was improved, because the plasma became more uniform under spatial confinement. The results of this study provide a new pathway for improving the accuracy of quantitative analysis with LIBS.

  29. Short-Term Intra-Subject Variation in Exhaled Volatile Organic Compounds (VOCs) in COPD Patients and Healthy Controls and Its Effect on Disease Classification

    PubMed Central

    Phillips, Christopher; Mac Parthaláin, Neil; Syed, Yasir; Deganello, Davide; Claypole, Timothy; Lewis, Keir

    2014-01-01

Exhaled volatile organic compounds (VOCs) are of interest for their potential to diagnose disease non-invasively. However, most breath VOC studies have analyzed single breath samples from an individual and assumed them to be consistent and representative of the person. This provided the motivation for an investigation of the variability of breath profiles when three breath samples are taken over a short time period (two-minute intervals between samples) for 118 stable patients with Chronic Obstructive Pulmonary Disease (COPD) and 63 healthy controls and analyzed by gas chromatography/mass spectrometry (GC/MS). The extent of the variation in VOC levels differed between COPD and healthy subjects, and the patterns of variation differed for isoprene versus the bulk of other VOCs. In addition, machine learning approaches were applied to the breath data to establish whether these samples differed in their ability to discriminate COPD from healthy states and whether aggregation of multiple samples into single data sets could offer improved discrimination. The three breath samples gave similar classification accuracy to one another when evaluated separately (66.5% to 68.3% of subjects classified correctly depending on the breath repetition used). Combining multiple breath samples into single data sets gave better discrimination (73.4% of subjects classified correctly). Although this accuracy is not sufficient for COPD diagnosis in a clinical setting, enhanced sampling and analysis may improve accuracy further. Variability in samples, and short-term effects of practice or exertion, need to be considered in any breath testing program to improve reliability and optimize discrimination. PMID:24957028

  30. Short-Term Intra-Subject Variation in Exhaled Volatile Organic Compounds (VOCs) in COPD Patients and Healthy Controls and Its Effect on Disease Classification.

    PubMed

    Phillips, Christopher; Mac Parthaláin, Neil; Syed, Yasir; Deganello, Davide; Claypole, Timothy; Lewis, Keir

    2014-05-09

Exhaled volatile organic compounds (VOCs) are of interest for their potential to diagnose disease non-invasively. However, most breath VOC studies have analyzed single breath samples from an individual and assumed them to be consistent and representative of the person. This provided the motivation for an investigation of the variability of breath profiles when three breath samples are taken over a short time period (two-minute intervals between samples) for 118 stable patients with Chronic Obstructive Pulmonary Disease (COPD) and 63 healthy controls and analyzed by gas chromatography/mass spectrometry (GC/MS). The extent of the variation in VOC levels differed between COPD and healthy subjects, and the patterns of variation differed for isoprene versus the bulk of other VOCs. In addition, machine learning approaches were applied to the breath data to establish whether these samples differed in their ability to discriminate COPD from healthy states and whether aggregation of multiple samples into single data sets could offer improved discrimination. The three breath samples gave similar classification accuracy to one another when evaluated separately (66.5% to 68.3% of subjects classified correctly depending on the breath repetition used). Combining multiple breath samples into single data sets gave better discrimination (73.4% of subjects classified correctly). Although this accuracy is not sufficient for COPD diagnosis in a clinical setting, enhanced sampling and analysis may improve accuracy further. Variability in samples, and short-term effects of practice or exertion, need to be considered in any breath testing program to improve reliability and optimize discrimination.

  31. Impact of proto-oncogene mutation detection in cytological specimens from thyroid nodules improves the diagnostic accuracy of cytology.

    PubMed

    Cantara, Silvia; Capezzone, Marco; Marchisotta, Stefania; Capuano, Serena; Busonero, Giulia; Toti, Paolo; Di Santo, Andrea; Caruso, Giuseppe; Carli, Anton Ferdinando; Brilli, Lucia; Montanaro, Annalisa; Pacini, Furio

    2010-03-01

Fine-needle aspiration cytology (FNAC) is the gold standard for the differential diagnosis of thyroid nodules but has the limitation of inadequate sampling or indeterminate lesions. We aimed to verify whether searching cytological samples for thyroid cancer-associated proto-oncogene mutations may improve the diagnostic accuracy of FNAC. One hundred seventy-four consecutive patients undergoing thyroid surgery were submitted to FNAC (on 235 thyroid nodules), which was used for cytology and molecular analysis of BRAF, RAS, RET, TRK, and PPARgamma mutations. At surgery these nodules were sampled to perform the same molecular testing. Mutations were found in 67 of 235 (28.5%) cytological samples. Of the 67 mutated samples, 23 (34.3%) carried RAS mutations, 33 (49.3%) BRAF mutations, and 11 (16.4%) RET/PTC rearrangements. In 88.2% of the cases, the mutation was confirmed in the tissue sample. The presence of a mutation at cytology was associated with cancer 91.1% of the time and with follicular adenoma 8.9% of the time. BRAF or RET/PTC mutations were always associated with cancer, whereas RAS mutations were mainly associated with cancer (74%) but also with follicular adenoma (26%). The diagnostic performance of molecular analysis was superior to that of traditional cytology, with better sensitivity and specificity, and the combination of the two techniques further improved the total accuracy (93.2%), compared with molecular analysis (90.2%) or traditional cytology (83.0%) alone. Our findings demonstrate that molecular analysis of cytological specimens is feasible and that its results, in combination with cytology, improve the diagnostic performance of traditional cytology.

  32. EEG source localization: Sensor density and head surface coverage.

    PubMed

    Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don

    2015-12-30

The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using commonly assumed head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces.

  33. Diagnostic Accuracy of the Slump Test for Identifying Neuropathic Pain in the Lower Limb.

    PubMed

    Urban, Lawrence M; MacNeil, Brian J

    2015-08-01

Diagnostic accuracy study with nonconsecutive enrollment. To assess the diagnostic accuracy of the slump test for neuropathic pain (NeP) in those with low to moderate levels of chronic low back pain (LBP), and to determine whether accuracy of the slump test improves by adding anatomical or qualitative pain descriptors. Neuropathic pain has been linked with poor outcomes, likely due to inadequate diagnosis, which precludes treatment specific for NeP. Current diagnostic approaches are time consuming or lack accuracy. A convenience sample of 21 individuals with LBP, with or without radiating leg pain, was recruited. A standardized neurosensory examination was used to determine the reference diagnosis for NeP. Afterward, the slump test was administered to all participants, and reports of pain location and quality produced during the test were recorded. The neurosensory examination designated 11 of the 21 participants with LBP/sciatica as having NeP. The slump test displayed high sensitivity (0.91), moderate specificity (0.70), a positive likelihood ratio of 3.03, and a negative likelihood ratio of 0.13. Adding the criterion of pain below the knee significantly increased specificity to 1.00 (positive likelihood ratio = 11.9). Pain-quality descriptors did not improve diagnostic accuracy. The slump test was highly sensitive in identifying NeP within the study sample, and adding a pain-location criterion improved specificity. Combining the diagnostic outcomes was very effective in identifying all those without NeP and half of those with NeP. Limitations arising from the small and narrow spectrum of participants with LBP/sciatica sampled within the study prevent application of the findings to a wider population. Level of evidence: diagnosis, level 4.
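
    The reported likelihood ratios follow directly from the stated sensitivity and specificity:

    ```latex
    \mathrm{LR}^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}} = \frac{0.91}{1 - 0.70} \approx 3.03,
    \qquad
    \mathrm{LR}^{-} = \frac{1 - \text{sensitivity}}{\text{specificity}} = \frac{1 - 0.91}{0.70} \approx 0.13
    ```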

  34. Classification of Parkinson's disease utilizing multi-edit nearest-neighbor and ensemble learning algorithms with speech samples.

    PubMed

    Zhang, He-Hua; Yang, Liuyang; Liu, Yuchuan; Wang, Pin; Yin, Jun; Li, Yongming; Qiu, Mingguo; Zhu, Xueru; Yan, Fang

    2016-11-16

The use of speech-based data in the classification of Parkinson's disease (PD) has been shown in recent years to provide an effective, non-invasive mode of classification. Thus, there has been increased interest in speech pattern analysis methods applicable to Parkinsonism for building predictive tele-diagnosis and tele-monitoring models. One of the obstacles in optimizing classification is reducing noise within the collected speech samples, thus ensuring better classification accuracy and stability. While the currently used methods are effective, the ability to invoke instance selection has seldom been examined. In this study, a PD classification algorithm was proposed and examined that combines a multi-edit nearest-neighbor (MENN) algorithm and an ensemble learning algorithm. First, the MENN algorithm is applied to select optimal training speech samples iteratively, thereby obtaining samples with high separability. Next, an ensemble learning algorithm, random forest (RF) or decorrelated neural network ensembles (DNNE), is trained on the selected training samples. Lastly, the trained ensemble learning algorithms are applied to the test samples for PD classification. The proposed method was examined using a recently deposited public dataset and compared against other currently used algorithms for validation. Experimental results showed that the proposed algorithm obtained the largest improvement in classification accuracy (29.44%) among the algorithms examined. Furthermore, the MENN algorithm alone was found to improve classification accuracy by as much as 45.72%. Moreover, the proposed algorithm exhibited higher stability, particularly when combining the MENN and RF algorithms. This study showed that the proposed method can improve PD classification when using speech data and can be applied to future studies seeking to improve PD classification methods.
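
    The MENN step can be sketched as repeated Wilson-style editing (our illustrative reading, not the authors' exact procedure): drop every training sample misclassified by its k nearest neighbours, and repeat until a pass removes nothing. The surviving speech samples form the high-separability training set handed to the ensemble.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def menn(X, y, k=3):
        """Iteratively edit (X, y) by leave-one-out k-NN agreement."""
        X, y = np.asarray(X, float), np.asarray(y, int)
        while len(y) > k + 1:
            nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
            _, idx = nn.kneighbors(X)          # column 0 is each point itself
            votes = y[idx[:, 1:]]              # labels of the k true neighbours
            pred = np.array([np.bincount(v).argmax() for v in votes])
            keep = pred == y
            if keep.all():                     # a full pass removed nothing: done
                break
            X, y = X[keep], y[keep]
        return X, y
    ```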

  35. Use the Bar Code System to Improve Accuracy of the Patient and Sample Identification.

    PubMed

    Chuang, Shu-Hsia; Yeh, Huy-Pzu; Chi, Kun-Hung; Ku, Hsueh-Chen

    2018-01-01

Timely and correct sample collection is closely related to patient safety. The sample error rate was 11.1% between January and April 2016, owing to mislabeled patient information and wrong sample containers. We developed a barcode-based "Specimen Identification System" by reengineering the TRM process, using bar code scanners, adding sample container instructions, and providing a mobile app. In conclusion, the bar code system improved patient safety and created a greener environment.

  36. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

Temperature is usually treated as an unwanted fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct for the effect of temperature variations. However, temperature can be considered a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has studied the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method was proposed to improve prediction accuracy by considering the temperature distribution of calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method was proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compared the prediction performance of PLS models based on the random sampling method and the proposed methods. The results from experimental studies showed that prediction performance was improved by using the proposed methods. Therefore, the MTCS and DTCS methods are alternative methods for improving prediction accuracy in near-infrared spectral measurement.

  17. Improving the accuracy of sediment-associated constituent concentrations in whole storm water samples by wet-sieving

    USGS Publications Warehouse

    Selbig, W.R.; Bannerman, R.; Bowman, G.

    2007-01-01

    Sand-sized particles (>63 µm) in whole storm water samples collected from urban runoff have the potential to produce data with substantial bias and/or poor precision both during sample splitting and laboratory analysis. New techniques were evaluated in an effort to overcome some of the limitations associated with sample splitting and analyzing whole storm water samples containing sand-sized particles. Wet-sieving separates sand-sized particles from a whole storm water sample. Once separated, both the sieved solids and the remaining aqueous (water suspension of particles less than 63 µm) samples were analyzed for total recoverable metals using a modification of USEPA Method 200.7. The modified version digests the entire sample, rather than an aliquot of the sample. Using a total recoverable acid digestion on the entire contents of the sieved solid and aqueous samples improved the accuracy of the derived sediment-associated constituent concentrations. Concentration values of sieved solid and aqueous samples can later be summed to determine an event mean concentration. © ASA, CSSA, SSSA.

  18. State-dependent biasing method for importance sampling in the weighted stochastic simulation algorithm.

    PubMed

    Roh, Min K; Gillespie, Dan T; Petzold, Linda R

    2010-11-07

    The weighted stochastic simulation algorithm (wSSA) was developed by Kuwahara and Mura [J. Chem. Phys. 129, 165101 (2008)] to efficiently estimate the probabilities of rare events in discrete stochastic systems. The wSSA uses importance sampling to enhance the statistical accuracy in the estimation of the probability of the rare event. The original algorithm biases the reaction selection step with a fixed importance sampling parameter. In this paper, we introduce a novel method where the biasing parameter is state-dependent. The new method features improved accuracy, efficiency, and robustness.
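
    To make the importance-sampling mechanics concrete, here is a minimal weighted-SSA sketch for a toy birth-death system, estimating the probability that the population reaches a threshold before time T. As in the original wSSA, the time step uses the unbiased total propensity and only the reaction selection is biased; the bias gamma is held fixed here, whereas the paper's contribution is to make it state-dependent. All rate constants are illustrative.

```python
import numpy as np

def wssa_rare_event(x0=10, k1=1.0, k2=0.1, theta=30, T=10.0,
                    gamma=1.5, n_runs=20_000, seed=0):
    """Weighted SSA estimate of P(X reaches theta before time T) for the
    birth-death system  0 -> X (rate k1),  X -> 0 (rate k2*x).  Each run
    carries an importance-sampling weight correcting for the biased
    reaction selection."""
    rng = np.random.default_rng(seed)
    total_w = 0.0
    for _ in range(n_runs):
        x, t, w = x0, 0.0, 1.0
        while t < T and x < theta:
            a = np.array([k1, k2 * x])                  # true propensities
            b = np.array([gamma * a[0], a[1] / gamma])  # biased (fixed gamma;
            #                Roh et al. instead make gamma state-dependent)
            a0, b0 = a.sum(), b.sum()
            t += rng.exponential(1.0 / a0)   # time uses the *unbiased* a0
            if t >= T:
                break
            j = rng.choice(2, p=b / b0)      # biased reaction selection
            w *= (a[j] * b0) / (b[j] * a0)   # likelihood-ratio weight
            x += 1 if j == 0 else -1
        if x >= theta:
            total_w += w
    return total_w / n_runs

print(wssa_rare_event())
```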

  19. Population variation in isotopic composition of shorebird feathers: Implications for determining molting grounds

    USGS Publications Warehouse

    Torres-Dowdall, J.; Farmer, A.H.; Bucher, E.H.; Rye, R.O.; Landis, G.

    2009-01-01

    Stable isotope analyses have revolutionized the study of migratory connectivity. However, as with all tools, their limitations must be understood in order to derive the maximum benefit of a particular application. The goal of this study was to evaluate the efficacy of stable isotopes of C, N, H, O and S for assigning known-origin feathers to the molting sites of migrant shorebird species wintering and breeding in Argentina. Specific objectives were to: 1) compare the efficacy of the technique for studying shorebird species with different migration patterns, life histories and habitat-use patterns; 2) evaluate the grouping of species with similar migration and habitat use patterns in a single analysis to potentially improve prediction accuracy; and 3) evaluate the potential gains in prediction accuracy that might be achieved from using multiple stable isotopes. The efficacy of stable isotope ratios to determine origin was found to vary with species. While one species (White-rumped Sandpiper, Calidris fuscicollis) had high levels of accuracy assigning samples to known origin (91% of samples correctly assigned), another (Collared Plover, Charadrius collaris) showed low levels of accuracy (52% of samples correctly assigned). Intra-individual variability may account for this difference in efficacy. The prediction model for three species with similar migration and habitat-use patterns performed poorly compared with the model for just one of the species (71% versus 91% of samples correctly assigned). Thus, combining multiple sympatric species may not improve model prediction accuracy. Increasing the number of stable isotopes in the analyses increased the accuracy of assigning shorebirds to their molting origin, but the best combination - involving a subset of all the isotopes analyzed - varied among species.

  20. Spiking of serum specimens with exogenous reporter peptides for mass spectrometry based protease profiling as diagnostic tool.

    PubMed

    Findeisen, Peter; Peccerella, Teresa; Post, Stefan; Wenz, Frederik; Neumaier, Michael

    2008-04-01

    Serum is a difficult matrix for the identification of biomarkers by mass spectrometry (MS). This is due to high-abundance proteins and their complex processing by a multitude of endogenous proteases making rigorous standardisation difficult. Here, we have investigated the use of defined exogenous reporter peptides as substrates for disease-specific proteases with respect to improved standardisation and disease classification accuracy. A recombinant N-terminal fragment of the Adenomatous Polyposis Coli (APC) protein was digested with trypsin to yield a peptide mixture for subsequent Reporter Peptide Spiking (RPS) of serum. Different preanalytical handling of serum samples was simulated by storage of serum samples for up to 6 h at ambient temperature, followed by RPS, further incubation under standardised conditions and testing for stability of protease-generated MS profiles. To demonstrate the superior classification accuracy achieved by RPS, a pilot profiling experiment was performed using serum specimens from pancreatic cancer patients (n = 50) and healthy controls (n = 50). After RPS six different peak categories could be defined, two of which (categories C and D) are modulated by endogenous proteases. These latter are relevant for improved classification accuracy as shown by enhanced disease-specific classification from 78% to 87% in unspiked and spiked samples, respectively. Peaks of these categories presented with unchanged signal intensities regardless of preanalytical conditions. The use of RPS generally improved the signal intensities of protease-generated peptide peaks. RPS circumvents preanalytical variabilities and improves classification accuracies. Our approach will be helpful to introduce MS-based proteomic profiling into routine laboratory testing.

  1. a New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.

    2018-05-01

    In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. The traditional time interval measurement technique has the disadvantages of low measurement accuracy, complicated circuit structure and large error. High-precision time interval data cannot be obtained with these traditional methods. In order to obtain higher quality remote sensing cloud images based on time interval measurement, a higher accuracy time interval measurement method is proposed. The method is based on charging a capacitor and simultaneously sampling the change of the capacitor voltage. Firstly, an approximate model of the capacitor voltage curve over the time of flight of the pulse is fitted from the sampled data. Then, the whole charging time is obtained from the fitting function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
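
    The fit-then-recover idea can be sketched with SciPy: sample an RC charging curve, fit the analytic model, and read the pulse start time off the fitted parameters. The sampling rate, RC constant and noise level below are invented for illustration and are unrelated to the cited hardware.

```python
import numpy as np
from scipy.optimize import curve_fit

# RC charging model: v(t) = V0 * (1 - exp(-(t - t0) / tau)) for t >= t0.
# Fitting the sampled voltage recovers the (unknown) charge start t0,
# and hence the time interval, with sub-sample resolution.
def v_model(t, V0, tau, t0):
    return np.where(t < t0, 0.0, V0 * (1.0 - np.exp(-(t - t0) / tau)))

rng = np.random.default_rng(0)
fs = 1e9                                    # assumed 1 GS/s A/D sampler
t = np.arange(0, 200e-9, 1.0 / fs)
true = v_model(t, 3.3, 40e-9, 23.7e-9)      # illustrative parameters
meas = true + rng.normal(0, 0.01, t.size)   # noisy sampled capacitor voltage

popt, _ = curve_fit(v_model, t, meas, p0=(3.0, 50e-9, 20e-9))
print(f"estimated pulse start: {popt[2] * 1e9:.3f} ns")
```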

  2. Evaluation of Techniques Used to Estimate Cortical Feature Maps

    PubMed Central

    Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.

    2011-01-01

    Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537
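
    A small simulation illustrates the central finding, namely that averaging repeated responses helps more than adding penetration sites, under an assumed smooth one-dimensional map and Gaussian response noise (all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 1-D tonotopic-style map: feature value varies smoothly across cortex.
pos = np.linspace(0, 1, 1000)
true_map = np.sin(2 * np.pi * pos)           # low-complexity feature map
noisy = lambda p, n: np.interp(p, pos, true_map) + rng.normal(0, 0.5, (n, p.size))

for n_sites, n_avg in [(25, 1), (25, 10), (100, 1), (100, 10)]:
    sites = np.sort(rng.uniform(0, 1, n_sites))  # nonuniform penetrations
    est = noisy(sites, n_avg).mean(axis=0)       # average repeated responses
    recon = np.interp(pos, sites, est)           # linear-interpolated estimate
    r = np.corrcoef(recon, true_map)[0, 1]
    print(f"sites={n_sites:3d} averages={n_avg:2d} map correlation r={r:.2f}")
```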

  3. An evaluation of sampling and full enumeration strategies for Fisher Jenks classification in big data settings

    USGS Publications Warehouse

    Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.

    2017-01-01

    Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. The impacts of spatial autocorrelation, number of desired classes, and form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between improved speed of the sampling approaches and loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.
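
    The sampling strategy can be sketched as below. Note that 1-D k-means is used here as a fast stand-in for the optimal Fisher-Jenks classifier (both minimize within-class variance), so this illustrates the sample-versus-full-enumeration comparison rather than reproducing the paper's exact classifier; data and sizes are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
attr = rng.lognormal(mean=3.0, sigma=1.0, size=200_000)   # map attribute

def class_breaks(values, k=5):
    """1-D k-means as a stand-in for Fisher-Jenks natural breaks."""
    centers = np.sort(KMeans(n_clusters=k, n_init=10).fit(
        values.reshape(-1, 1)).cluster_centers_.ravel())
    return (centers[:-1] + centers[1:]) / 2    # midpoints as class breaks

full = np.digitize(attr, class_breaks(attr))               # full enumeration
sampled = np.digitize(attr, class_breaks(rng.choice(attr, 5_000)))
print("agreement with full enumeration:", np.mean(full == sampled))
```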

  4. Improved automation of dissolved organic carbon sampling for organic-rich surface waters.

    PubMed

    Grayson, Richard P; Holden, Joseph

    2016-02-01

    In-situ UV-Vis spectrophotometers offer the potential for improved estimates of dissolved organic carbon (DOC) fluxes for organic-rich systems such as peatlands because they are able to sample and log DOC proxies automatically through time at low cost. In turn, this could enable improved total carbon budget estimates for peatlands. The ability of such instruments to accurately measure DOC depends on a number of factors, not least of which is how absorbance measurements relate to DOC and the environmental conditions. Here we test the ability of a S::can Spectro::lyser™ for measuring DOC in peatland streams with routinely high DOC concentrations. Through analysis of the spectral response data collected by the instrument we have been able to accurately measure DOC up to 66 mg L(-1), which is more than double the original upper calibration limit for this particular instrument. A linear regression modelling approach resulted in an accuracy >95%. The greatest accuracy was achieved when absorbance values for several different wavelengths were used at the same time in the model. However, an accuracy >90% was achieved using absorbance values for a single wavelength to predict DOC concentration. Our calculations indicated that, for organic-rich systems, in-situ measurement with a scanning spectrophotometer can improve fluvial DOC flux estimates by 6 to 8% compared with traditional sampling methods. Thus, our techniques pave the way for improved long-term carbon budget calculations from organic-rich systems such as peatlands. Copyright © 2015 Elsevier B.V. All rights reserved.
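
    A toy version of the regression modelling step is shown below: synthetic absorbance at a few wavelengths is generated from DOC plus an interfering component, and a multi-wavelength linear model is compared with a single-wavelength one. Wavelengths, coefficients and noise are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Synthetic stand-in: DOC up to ~66 mg/L drives absorbance at several
# wavelengths, plus a turbidity-like interference and measurement noise.
doc = rng.uniform(2, 66, 300)
interf = rng.normal(0, 1, 300)
absorbance = np.column_stack([
    0.020 * doc + 0.30 * interf,   # ~254 nm (DOC + interference)
    0.015 * doc + 0.05 * interf,   # ~350 nm
    0.002 * doc + 0.40 * interf,   # ~550 nm (mostly interference)
]) + rng.normal(0, 0.02, (300, 3))

multi = LinearRegression().fit(absorbance, doc)       # several wavelengths
single = LinearRegression().fit(absorbance[:, [0]], doc)
print("multi-wavelength R2: ", multi.score(absorbance, doc))
print("single-wavelength R2:", single.score(absorbance[:, [0]], doc))
```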

  5. Knowing What You Know: Improving Metacomprehension and Calibration Accuracy in Digital Text

    ERIC Educational Resources Information Center

    Reid, Alan J.; Morrison, Gary R.; Bol, Linda

    2017-01-01

    This paper presents results from an experimental study that examined embedded strategy prompts in digital text and their effects on calibration and metacomprehension accuracies. A sample population of 80 college undergraduates read a digital expository text on the basics of photography. The most robust treatment (mixed) read the text, generated a…

  6. Weighted statistical parameters for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rimoldini, Lorenzo

    2014-01-01

    Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
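
    The weighting idea can be sketched with trapezoidal gap weights, in which each observation is weighted by half the time span between its neighbours so that clumps of measurements are down-weighted. This is a simplified stand-in for the paper's scheme, which additionally adapts the weights to the noise level.

```python
import numpy as np

def gap_weighted_stats(t, x):
    """Weighted mean and variance for an unevenly sampled series using
    trapezoidal gap weights (a simplified interpolation-based scheme)."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    w = np.empty_like(t)
    w[1:-1] = (t[2:] - t[:-2]) / 2.0
    w[0], w[-1] = t[1] - t[0], t[-1] - t[-2]
    w /= w.sum()
    mean = np.sum(w * x)
    var = np.sum(w * (x - mean) ** 2) / (1.0 - np.sum(w ** 2))
    return mean, var

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 10, 50))        # clumpy, irregular sampling
x = np.sin(2 * np.pi * t / 3.7) + rng.normal(0, 0.1, 50)
print(gap_weighted_stats(t, x))
```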

  7. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    PubMed

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

    Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured as well as a loss of information intrinsic to band selection and use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application for the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples, perhaps due to colonized berries or sparse mycelia hidden within the bunch or airborne conidia on the berries that were detected by qPCR. An advanced approach to hyperspectral image classification based on combined spatial and spectral image features, potentially applicable to many available hyperspectral sensor technologies, has been developed and validated to improve the detection of powdery mildew infection levels of Chardonnay grape bunches. The spatial-spectral approach improved especially the detection of light infection levels compared with pixel-wise spectral data analysis. This approach is expected to improve the speed and accuracy of disease detection once the thresholds for fungal biomass detected by hyperspectral imaging are established; it can also facilitate monitoring in plant phenotyping of grapevine and additional crops.
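
    A stripped-down version of the pipeline (dimensionality reduction with LDA to a few descriptive bands, then a Random Forest) can be sketched with scikit-learn on synthetic pixel spectra; the texture-feature extraction on integral images, central to the full method, is omitted here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for labelled pixels: 200 spectral bands, 3 infection levels.
n, bands = 3000, 200
y = rng.integers(0, 3, n)
X = rng.normal(0, 1, (n, bands)) + y[:, None] * np.linspace(0, 0.15, bands)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
lda = LinearDiscriminantAnalysis(n_components=2).fit(X_tr, y_tr)  # few bands
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(
    lda.transform(X_tr), y_tr)
print("accuracy:", rf.score(lda.transform(X_te), y_te))
```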

  8. Recalibration of the Klales et al. (2012) method of sexing the human innominate for Mexican populations.

    PubMed

    Gómez-Valdés, Jorge A; Menéndez Garmendia, Antinea; García-Barzola, Lizbeth; Sánchez-Mejorada, Gabriela; Karam, Carlos; Baraybar, José Pablo; Klales, Alexandra

    2017-03-01

    The aim of this study was to test the accuracy of the Klales et al. (2012) equation for sex estimation in a contemporary Mexican population. Our investigation was carried out on a sample of 203 left innominates of identified adult skeletons from the UNAM-Collection and the Santa María Xigui Cemetery, in Central Mexico. Klales' original equation produces a bias in sex estimation against males (86-92% accuracy versus 100% accuracy in females). Based on these results, the Klales et al. (2012) method was recalibrated with a new cut-off point for sex estimation in contemporary Mexican populations. The results show cross-validated classification accuracy rates as high as 100% after recalibrating the original logistic regression equation. Recalibration improved classification accuracy and eliminated sex bias. This new formula will improve sex estimation for Mexican contemporary populations. © 2017 Wiley Periodicals, Inc.
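
    Recalibration of this kind amounts to refitting the logistic regression on population-specific data and shifting the probability cut-off until the sex bias disappears; a sketch follows. The ordinal trait scores and their relation to sex below are simulated, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Simulated ordinal pelvic trait scores (1-5) and known sex (1 = male).
scores = rng.integers(1, 6, (400, 3))
sex = (scores.sum(axis=1) + rng.normal(0, 1.5, 400) > 9).astype(int)

model = LogisticRegression().fit(scores, sex)   # recalibrated equation
p_male = model.predict_proba(scores)[:, 1]

# Move the cut-off point away from 0.5 until accuracy is balanced by sex:
for cut in (0.5, 0.4, 0.3):
    pred = (p_male > cut).astype(int)
    acc_m = np.mean(pred[sex == 1] == 1)
    acc_f = np.mean(pred[sex == 0] == 0)
    print(f"cut-off {cut}: male acc {acc_m:.2f}, female acc {acc_f:.2f}")
```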

  9. Double Sampling with Multiple Imputation to Answer Large Sample Meta-Research Questions: Introduction and Illustration by Evaluating Adherence to Two Simple CONSORT Guidelines

    PubMed Central

    Capers, Patrice L.; Brown, Andrew W.; Dawson, John A.; Allison, David B.

    2015-01-01

    Background: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high throughput methods (e.g., search heuristics, crowdsourcing) has improved the feasibility of large meta-research questions, but possibly at the cost of accuracy. Objective: To evaluate the use of double sampling combined with multiple imputation (DS + MI) to address meta-research questions, using as an example adherence of PubMed entries to two simple Consolidated Standards of Reporting Trials (CONSORT) guidelines for titles and abstracts. Methods: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT, human, abstract available, and English language (n = 322,107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower rigor, higher throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher rigor, lower throughput (RHITLO) human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. Results: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title = 1.00, abstract = 0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS + MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by year: subsample RHITLO 1.050-1.174 vs. DS + MI 1.082-1.151). As evidence of improved accuracy, DS + MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. Conclusion: Our results support our hypothesis that DS + MI would result in improved precision and accuracy. This method is flexible and may provide a practical way to examine large corpora of literature. PMID:25988135

  10. Sampling factors influencing accuracy of sperm kinematic analysis.

    PubMed

    Owen, D H; Katz, D F

    1993-01-01

    Sampling conditions that influence the accuracy of experimental measurement of sperm head kinematics were studied by computer simulation methods. Several archetypal sperm trajectories were studied. First, mathematical models of typical flagellar beats were input to hydrodynamic equations of sperm motion. The instantaneous swimming velocities of such sperm were computed over sequences of flagellar beat cycles, from which the resulting trajectories were determined. In a second, idealized approach, direct mathematical models of trajectories were utilized, based upon similarities to the previous hydrodynamic constructs. In general, it was found that analyses of sampling factors produced similar results for the hydrodynamic and idealized trajectories. A number of experimental sampling factors were studied, including the number of sperm head positions measured per flagellar beat, and the time interval over which these measurements are taken. It was found that when one flagellar beat is sampled, values of amplitude of lateral head displacement (ALH) and linearity (LIN) approached their actual values when five or more sample points per beat were taken. Mean angular displacement (MAD) values, however, remained sensitive to sampling rate even when large sampling rates were used. Values of MAD were also much more sensitive to the initial starting point of the sampling procedure than were ALH or LIN. On the basis of these analyses of measurement accuracy for individual sperm, simulations were then performed of cumulative effects when studying entire populations of motile cells. It was found that substantial (double digit) errors occurred in the mean values of curvilinear velocity (VCL), LIN, and MAD under the conditions of 30 video frames per second and 0.5 seconds of analysis time. Increasing the analysis interval to 1 second did not appreciably improve the results. However, increasing the analysis rate to 60 frames per second significantly reduced the errors. These findings thus suggest that computer-aided sperm analysis (CASA) application at 60 frames per second will significantly improve the accuracy of kinematic analysis in most applications to human and other mammalian sperm.
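
    The sampling effects studied here can be reproduced in miniature by sampling an idealized head trajectory at different frame rates and recomputing the kinematic measures. The beat frequency, amplitude and velocity below are arbitrary but physiologically plausible; the measure definitions are simplified relative to full CASA conventions.

```python
import numpy as np

def kinematics(fs, duration=1.0, beat_hz=14.0, amp=2.0, v_prog=50.0):
    """Sample an idealized sperm-head path at fs frames/s and compute VCL,
    LIN and ALH: progressive motion plus a sinusoidal lateral wobble
    (amplitude in um, velocity in um/s); purely illustrative."""
    t = np.arange(0, duration, 1.0 / fs)
    x = v_prog * t
    y = amp * np.sin(2 * np.pi * beat_hz * t)
    steps = np.hypot(np.diff(x), np.diff(y))
    vcl = steps.sum() / duration                           # curvilinear velocity
    vsl = np.hypot(x[-1] - x[0], y[-1] - y[0]) / duration  # straight-line velocity
    alh = y.max() - y.min()                                # lateral displacement
    return vcl, vsl / vcl, alh

for fs in (30, 60, 200):
    vcl, lin, alh = kinematics(fs)
    print(f"{fs:3d} fps: VCL={vcl:6.1f} um/s  LIN={lin:.2f}  ALH={alh:.2f} um")
```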

  11. Phylogenomic analysis of ants, bees and stinging wasps: Improved taxon sampling enhances understanding of hymenopteran evolution

    USDA-ARS?s Scientific Manuscript database

    The importance of taxon sampling in phylogenetic accuracy is a topic of active debate. We investigated the role of taxon sampling in causing incongruent results between two recent phylogenomic studies of stinging wasps (Hymenoptera: Aculeata), a diverse lineage that includes ants, bees and the major...

  12. Using the PDD Behavior Inventory as a Level 2 Screener: A Classification and Regression Trees Analysis

    ERIC Educational Resources Information Center

    Cohen, Ira L.; Liu, Xudong; Hudson, Melissa; Gillis, Jennifer; Cavalari, Rachel N. S.; Romanczyk, Raymond G.; Karmel, Bernard Z.; Gardner, Judith M.

    2016-01-01

    In order to improve discrimination accuracy between Autism Spectrum Disorder (ASD) and similar neurodevelopmental disorders, a data mining procedure, Classification and Regression Trees (CART), was used on a large multi-site sample of PDD Behavior Inventory (PDDBI) forms on children with and without ASD. Discrimination accuracy exceeded 80%,…

  13. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    NASA Technical Reports Server (NTRS)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.

  14. Mapping seabed sediments: Comparison of manual, geostatistical, object-based image analysis and machine learning approaches

    NASA Astrophysics Data System (ADS)

    Diesing, Markus; Green, Sophie L.; Stephens, David; Lark, R. Murray; Stewart, Heather A.; Dove, Dayton

    2014-08-01

    Marine spatial planning and conservation need underpinning with sufficiently detailed and accurate seabed substrate and habitat maps. Although multibeam echosounders enable us to map the seabed with high resolution and spatial accuracy, there is still a lack of fit-for-purpose seabed maps. This is due to the high costs involved in carrying out systematic seabed mapping programmes and the fact that the development of validated, repeatable, quantitative and objective methods of swath acoustic data interpretation is still in its infancy. We compared a wide spectrum of approaches including manual interpretation, geostatistics, object-based image analysis and machine-learning to gain further insights into the accuracy and comparability of acoustic data interpretation approaches based on multibeam echosounder data (bathymetry, backscatter and derivatives) and seabed samples with the aim to derive seabed substrate maps. Sample data were split into a training and validation data set to allow us to carry out an accuracy assessment. Overall thematic classification accuracy ranged from 67% to 76% and Cohen's kappa varied between 0.34 and 0.52. However, these differences were not statistically significant at the 5% level. Misclassifications were mainly associated with uncommon classes, which were rarely sampled. Map outputs were between 68% and 87% identical. To improve classification accuracy in seabed mapping, we suggest that more studies on the effects of factors affecting the classification performance as well as comparative studies testing the performance of different approaches need to be carried out with a view to developing guidelines for selecting an appropriate method for a given dataset. In the meantime, classification accuracy might be improved by combining different techniques to hybrid approaches and multi-method ensembles.
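
    The accuracy-assessment step maps directly onto standard tooling; a minimal sketch with scikit-learn, using made-up substrate labels, computes the confusion matrix, overall thematic accuracy and Cohen's kappa from a validation split:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

# 'observed' are ground-truth classes of the validation samples; 'mapped'
# are the classes an interpretation method assigns at those locations.
observed = ["sand", "mud", "rock", "sand", "mud", "sand", "rock", "mud"]
mapped   = ["sand", "mud", "sand", "sand", "mud", "mud", "rock", "mud"]

print(confusion_matrix(observed, mapped, labels=["sand", "mud", "rock"]))
print("overall accuracy:", accuracy_score(observed, mapped))
print("Cohen's kappa:   ", cohen_kappa_score(observed, mapped))
```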

  15. Analysis of polonium-210 in food products and bioassay samples by isotope-dilution alpha spectrometry.

    PubMed

    Lin, Zhichao; Wu, Zhongyu

    2009-05-01

    A rapid and reliable radiochemical method coupled with a simple and compact plating apparatus was developed, validated, and applied for the analysis of (210)Po in a variety of food products and bioassay samples. The method performance characteristics, including accuracy, precision, robustness, and specificity, were evaluated along with a detailed measurement uncertainty analysis. With high Po recovery, improved energy resolution, and effective removal of interfering elements by chromatographic extraction, the overall method accuracy was determined to be better than 5% with a measurement precision of 10%, at the 95% confidence level.

  16. Development and validation of a simplified titration method for monitoring volatile fatty acids in anaerobic digestion.

    PubMed

    Sun, Hao; Guo, Jianbin; Wu, Shubiao; Liu, Fang; Dong, Renjie

    2017-09-01

    The volatile fatty acids (VFAs) concentration has been considered as one of the most sensitive process performance indicators in anaerobic digestion (AD) process. However, the accurate determination of VFAs concentration in AD processes normally requires advanced equipment and complex pretreatment procedures. A simplified method with fewer sample pretreatment procedures and improved accuracy is greatly needed, particularly for on-site application. This report outlines improvements to the Nordmann method, one of the most popular titrations used for VFA monitoring. The influence of ion and solid interfering subsystems in titrated samples on results accuracy was discussed. The total solid content in titrated samples was the main factor affecting accuracy in VFA monitoring. Moreover, a high linear correlation was established between the total solids contents and VFA measurement differences between the traditional Nordmann equation and gas chromatography (GC). Accordingly, a simplified titration method was developed and validated using a semi-continuous experiment of chicken manure anaerobic digestion with various organic loading rates. The good fitting of the results obtained by this method in comparison with GC results strongly supported the potential application of this method to VFA monitoring. Copyright © 2017. Published by Elsevier Ltd.

  17. [Electroencephalogram Feature Selection Based on Correlation Coefficient Analysis].

    PubMed

    Zhou, Jinzhi; Tang, Xiaofang

    2015-08-01

    In order to improve the accuracy of classification with a small amount of motor imagery training data in the development of brain-computer interface (BCI) systems, we proposed an analysis method that automatically selects the characteristic parameters based on correlation coefficient analysis. Using the five sample datasets of dataset IVa from the 2005 BCI Competition, we utilized the short-time Fourier transform (STFT) and correlation coefficient calculation to reduce the dimensionality of the primitive electroencephalogram features, then introduced feature extraction based on common spatial patterns (CSP) and classified by linear discriminant analysis (LDA). Simulation results showed that the average classification accuracy could be improved by using the correlation coefficient feature selection method compared with not using it. Compared with the support vector machine (SVM) feature optimization algorithm, correlation coefficient analysis can select better parameters to improve the accuracy of classification.
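
    The selection idea alone (keep the features most correlated with the class label, then classify with LDA) can be sketched as below on synthetic band-power features; the STFT and CSP stages of the full pipeline are omitted, and all data shapes are invented.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for band-power features from motor-imagery EEG trials.
n_trials, n_feat = 200, 60
y = rng.integers(0, 2, n_trials)
X = rng.normal(0, 1, (n_trials, n_feat))
X[:, :8] += 0.8 * y[:, None]                # a few informative features

# Keep the features whose |correlation| with the class label is largest.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_feat)])
top = np.argsort(-np.abs(r))[:10]

lda = LinearDiscriminantAnalysis()
print("all features:", cross_val_score(lda, X, y, cv=5).mean())
print("selected    :", cross_val_score(lda, X[:, top], y, cv=5).mean())
```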

  18. Improvement of Quantitative Measurements in Multiplex Proteomics Using High-Field Asymmetric Waveform Spectrometry.

    PubMed

    Pfammatter, Sibylle; Bonneil, Eric; Thibault, Pierre

    2016-12-02

    Quantitative proteomics using isobaric reagent tandem mass tags (TMT) or isobaric tags for relative and absolute quantitation (iTRAQ) provides a convenient approach to compare changes in protein abundance across multiple samples. However, the analysis of complex protein digests by isobaric labeling can be undermined by the relatively large proportion of co-selected peptide ions that lead to distorted reporter ion ratios and affect the accuracy and precision of quantitative measurements. Here, we investigated the use of high-field asymmetric waveform ion mobility spectrometry (FAIMS) in proteomic experiments to reduce sample complexity and improve protein quantification using TMT isobaric labeling. LC-FAIMS-MS/MS analyses of human and yeast protein digests led to significant reductions in interfering ions, which increased the number of quantifiable peptides by up to 68% while significantly improving the accuracy of abundance measurements compared to that with conventional LC-MS/MS. The improvement in quantitative measurements using FAIMS is further demonstrated for the temporal profiling of protein abundance of HEK293 cells following heat shock treatment.

  19. Raman spectral feature selection using ant colony optimization for breast cancer diagnosis.

    PubMed

    Fallahzadeh, Omid; Dehghani-Bidgoli, Zohreh; Assarian, Mohammad

    2018-06-04

    Pathology, as a common diagnostic test for cancer, is an invasive, time-consuming, and partially subjective method. Therefore, optical techniques, especially Raman spectroscopy, have attracted the attention of cancer diagnosis researchers. However, as Raman spectra contain numerous peaks arising from the molecular bonds of the sample, finding the best features related to cancerous changes can improve the accuracy of diagnosis in this method. The present research attempted to improve the power of Raman-based cancer diagnosis by finding the best Raman features using the ACO algorithm. In the present research, 49 spectra were measured from normal, benign, and cancerous breast tissue samples using a 785-nm micro-Raman system. After preprocessing for removal of noise and background fluorescence, the intensity of 12 important Raman bands of the biological samples was extracted as features of each spectrum. Then, the ACO algorithm was applied to find the optimum features for diagnosis. As the results demonstrated, by selecting five features, the classification accuracy of the normal, benign, and cancerous groups increased by 14% and reached 87.7%. ACO feature selection can improve the diagnostic accuracy of Raman-based diagnostic models. In the present study, features corresponding to ν(C-C) α-helix proline, valine (910-940), νs(C-C) skeletal lipids (1110-1130), and δ(CH2)/δ(CH3) proteins (1445-1460) were selected as the best features in cancer diagnosis.

  20. SALT - a better way of estimating suspended sediment

    Treesearch

    R. B. Thomas

    1984-01-01

    Hardware and software supporting a sediment sampling procedure--Sampling At List Time (SALT)--have been perfected. SALT provides estimates of sediment discharge having improved accuracy and estimable precision. Although the greatest benefit of SALT may accrue to those attempting to monitor "flashy" small streams, its superior statistical...

  1. Improving the performances of autofocus based on adaptive retina-like sampling model

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Xiao, Yuqing; Cao, Jie; Cheng, Yang; Sun, Ce

    2018-03-01

    An adaptive retina-like sampling model (ARSM) is proposed to balance autofocusing accuracy and efficiency. Based on the model, we carried out comparative experiments between the proposed method and the traditional method in terms of accuracy, the full width at half maximum (FWHM) and time consumption. Results show that the performance of our method is better than that of the traditional method. Meanwhile, typical autofocus functions, including the sum-modified-Laplacian (SML), Laplacian (LAP), mid-frequency DCT (MDCT) and absolute Tenengrad (ATEN), are compared through experiments. The smallest FWHM is obtained with LAP, which is therefore more suitable for evaluating accuracy than the other autofocus functions, while the MDCT autofocus function is the most suitable for evaluating real-time performance.
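
    Two of the autofocus functions compared above are easy to state in code; the sketch below computes a Laplacian-energy measure and an absolute-Tenengrad measure over a synthetic focus sweep (a sharper image gives larger values). The exact definitions used in the paper may differ slightly.

```python
import numpy as np
from scipy import ndimage

def focus_measures(img):
    """Laplacian energy (LAP-style) and absolute Tenengrad (gradient
    magnitude) focus measures; larger = sharper."""
    img = img.astype(float)
    lap = ndimage.laplace(img)
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    return np.sum(lap ** 2), np.sum(np.abs(gx) + np.abs(gy))

rng = np.random.default_rng(0)
sharp = rng.random((128, 128))              # synthetic in-focus target
for sigma in (0.0, 1.0, 2.0, 4.0):          # defocus simulated by blurring
    lap, ten = focus_measures(ndimage.gaussian_filter(sharp, sigma))
    print(f"sigma={sigma}: LAP={lap:10.1f}  ATEN={ten:10.1f}")
```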

  2. Effects of cognitive training on change in accuracy in inductive reasoning ability.

    PubMed

    Boron, Julie Blaskewicz; Turiano, Nicholas A; Willis, Sherry L; Schaie, K Warner

    2007-05-01

    We investigated cognitive training effects on accuracy and number of items attempted in inductive reasoning performance in a sample of 335 older participants (M = 72.78 years) from the Seattle Longitudinal Study. We assessed the impact of individual characteristics, including chronic disease. The reasoning training group showed significantly greater gain in accuracy and number of attempted items than did the comparison group; gain was primarily due to enhanced accuracy. Reasoning training effects involved a complex interaction of gender, prior cognitive status, and chronic disease. Women with prior decline on reasoning but no heart disease showed the greatest accuracy increase. In addition, stable reasoning-trained women with heart disease demonstrated significant accuracy gain. Comorbidity was associated with less change in accuracy. The results support the effectiveness of cognitive training on improving the accuracy of reasoning performance.

  3. Improved bacterial identification directly from urine samples with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

    PubMed

    Kitagawa, Koichi; Shigemura, Katsumi; Onuma, Ken-Ichiro; Nishida, Masako; Fujiwara, Mayu; Kobayashi, Saori; Yamasaki, Mika; Nakamura, Tatsuya; Yamamichi, Fukashi; Shirakawa, Toshiro; Tokimatsu, Issei; Fujisawa, Masato

    2018-03-01

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) contributes to rapid identification of pathogens in the clinic but has not yet performed especially well for Gram-positive cocci (GPC) causing complicated urinary tract infection (UTI). The goal of this study was to investigate the possible clinical use of MALDI-TOF MS as a rapid method for bacterial identification directly from urine in complicated UTI. MALDI-TOF MS was applied to urine samples gathered from 142 suspected complicated UTI patients in 2015-2017. We modified the standard procedure (Method 1) for sample preparation by adding an initial 10 minutes of ultrasonication followed by centrifugation at 500 g for 1 minute to remove debris such as epithelial cells and leukocytes from the urine (Method 2). Among 133 culture-positive urine samples, the rate of agreement with urine culture for GPC identified by MALDI-TOF MS directly from urine was 16.7% with the standard sample preparation (Method 1), but the modified sample preparation (Method 2) significantly improved that rate to 52.2% (P=.045). Method 2 also improved the identification accuracy for Gram-negative rods (GNR) from 77.1% to 94.2% (P=.022). The modified Method 2 significantly improved the average MALDI score from 1.408±0.153 to 2.166±0.045 (P=.000) for GPC and slightly improved the score from 2.107±0.061 to 2.164±0.037 for GNR. The modified sample preparation for MALDI-TOF MS can improve identification accuracy for the causative bacteria of complicated UTI. This simple modification offers a rapid and accurate routine diagnosis for UTI, and may possibly be a substitute for urine cultures. © 2017 Wiley Periodicals, Inc.

  4. Flow through electrode with automated calibration

    DOEpatents

    Szecsody, James E [Richland, WA; Williams, Mark D [Richland, WA; Vermeul, Vince R [Richland, WA

    2002-08-20

    The present invention is an improved automated flow through electrode liquid monitoring system. The automated system has a sample inlet to a sample pump, a sample outlet from the sample pump to at least one flow through electrode with a waste port. At least one computer controls the sample pump and records data from the at least one flow through electrode for a liquid sample. The improvement relies upon (a) at least one source of a calibration sample connected to (b) an injection valve connected to said sample outlet and connected to said source, said injection valve further connected to said at least one flow through electrode, wherein said injection valve is controlled by said computer to select between said liquid sample or said calibration sample. Advantages include improved accuracy because of more frequent calibrations, no additional labor for calibration, no need to remove the flow through electrode(s), and minimal interruption of sampling.

  5. MuSE: accounting for tumor heterogeneity using a sample-specific error model improves sensitivity and specificity in mutation calling from sequencing data.

    PubMed

    Fan, Yu; Xi, Liu; Hughes, Daniel S T; Zhang, Jianjun; Zhang, Jianhua; Futreal, P Andrew; Wheeler, David A; Wang, Wenyi

    2016-08-24

    Subclonal mutations reveal important features of the genetic architecture of tumors. However, accurate detection of mutations in genetically heterogeneous tumor cell populations using next-generation sequencing remains challenging. We develop MuSE ( http://bioinformatics.mdanderson.org/main/MuSE ), Mutation calling using a Markov Substitution model for Evolution, a novel approach for modeling the evolution of the allelic composition of the tumor and normal tissue at each reference base. MuSE adopts a sample-specific error model that reflects the underlying tumor heterogeneity to greatly improve the overall accuracy. We demonstrate the accuracy of MuSE in calling subclonal mutations in the context of large-scale tumor sequencing projects using whole exome and whole genome sequencing.

  6. Voltammetric Electronic Tongue and Support Vector Machines for Identification of Selected Features in Mexican Coffee

    PubMed Central

    Domínguez, Rocio Berenice; Moreno-Barón, Laura; Muñoz, Roberto; Gutiérrez, Juan Manuel

    2014-01-01

    This paper describes a new method based on a voltammetric electronic tongue (ET) for the recognition of distinctive features in coffee samples. An ET was directly applied to different samples from the main Mexican coffee regions without any pretreatment before the analysis. The resulting electrochemical information was modeled with two different mathematical tools, namely Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). Growing conditions (i.e., organic or non-organic practices and altitude of crops) were considered for a first classification. LDA results showed an average discrimination rate of 88% ± 6.53% while SVM successfully accomplished an overall accuracy of 96.4% ± 3.50% for the same task. A second classification based on geographical origin of samples was carried out. Results showed an overall accuracy of 87.5% ± 7.79% for LDA and a superior performance of 97.5% ± 3.22% for SVM. Given the complexity of coffee samples, the high accuracy percentages achieved by ET coupled with SVM in both classification problems suggested a potential applicability of ET in the assessment of selected coffee features with a simpler and faster methodology along with a null sample pretreatment. In addition, the proposed method can be applied to authentication assessment while improving cost, time and accuracy of the general procedure. PMID:25254303

  7. Voltammetric electronic tongue and support vector machines for identification of selected features in Mexican coffee.

    PubMed

    Domínguez, Rocio Berenice; Moreno-Barón, Laura; Muñoz, Roberto; Gutiérrez, Juan Manuel

    2014-09-24

    This paper describes a new method based on a voltammetric electronic tongue (ET) for the recognition of distinctive features in coffee samples. An ET was directly applied to different samples from the main Mexican coffee regions without any pretreatment before the analysis. The resulting electrochemical information was modeled with two different mathematical tools, namely Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). Growing conditions (i.e., organic or non-organic practices and altitude of crops) were considered for a first classification. LDA results showed an average discrimination rate of 88% ± 6.53% while SVM successfully accomplished an overall accuracy of 96.4% ± 3.50% for the same task. A second classification based on geographical origin of samples was carried out. Results showed an overall accuracy of 87.5% ± 7.79% for LDA and a superior performance of 97.5% ± 3.22% for SVM. Given the complexity of coffee samples, the high accuracy percentages achieved by ET coupled with SVM in both classification problems suggested a potential applicability of ET in the assessment of selected coffee features with a simpler and faster methodology along with a null sample pretreatment. In addition, the proposed method can be applied to authentication assessment while improving cost, time and accuracy of the general procedure.

  8. Diagnostic accuracy of routine blood examinations and CSF lactate level for post-neurosurgical bacterial meningitis.

    PubMed

    Zhang, Yang; Xiao, Xiong; Zhang, Junting; Gao, Zhixian; Ji, Nan; Zhang, Liwei

    2017-06-01

    To evaluate the diagnostic accuracy of routine blood examinations and the Cerebrospinal Fluid (CSF) lactate level for Post-neurosurgical Bacterial Meningitis (PBM) in a large sample of post-neurosurgical patients. The diagnostic accuracies of routine blood examinations and the CSF lactate level in distinguishing between post-neurosurgical aseptic meningitis (PAM) and PBM were evaluated with the Area Under the Curve of the Receiver Operating Characteristic (AUC-ROC) by retrospectively analyzing the datasets of post-neurosurgical patients in the clinical information databases. The diagnostic accuracy of routine blood examinations was relatively low (AUC-ROC <0.7). The CSF lactate level achieved rather high diagnostic accuracy (AUC-ROC=0.891; 95% CI, 0.852-0.922). The variables of patient age, operation duration, surgical diagnosis and postoperative days (the interval between the neurosurgery and the examinations) were shown to affect the diagnostic accuracy of these examinations. These variables were integrated with the routine blood examinations and the CSF lactate level by Fisher discriminant analysis to improve the diagnostic accuracy. As a result, the diagnostic accuracy of the blood examinations and of the CSF lactate level was significantly improved, with AUC-ROC values of 0.760 (95% CI, 0.737-0.782) and 0.921 (95% CI, 0.887-0.948), respectively. The PBM diagnostic accuracy of routine blood examinations was relatively low, whereas the accuracy of the CSF lactate level was high. Some variables that are involved in the incidence of PBM can also affect the diagnostic accuracy for PBM. Taking the effects of these variables into account significantly improves the diagnostic accuracies of routine blood examinations and the CSF lactate level. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
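
    The evaluation reduces to standard ROC analysis; a sketch with synthetic lactate values and covariates (all numbers invented) shows both steps, scoring a single marker and then integrating it with covariates through a Fisher discriminant:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic stand-in: CSF lactate plus covariates (e.g., age, operation
# duration, postoperative day) for bacterial (1) vs aseptic (0) meningitis.
n = 500
y = rng.integers(0, 2, n)
lactate = rng.normal(3.0 + 2.0 * y, 1.2)
covars = rng.normal(0, 1, (n, 3)) + 0.4 * y[:, None]

print("lactate alone AUC:", roc_auc_score(y, lactate))

X = np.column_stack([lactate, covars])
lda = LinearDiscriminantAnalysis().fit(X, y)   # Fisher discriminant analysis
print("integrated AUC:   ", roc_auc_score(y, lda.decision_function(X)))
```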

  9. Evaluating the effect of disturbed ensemble distributions on SCFG based statistical sampling of RNA secondary structures.

    PubMed

    Scheid, Anika; Nebel, Markus E

    2012-07-09

    Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. In this work, we will consider the SCFG based approach in order to perform an analysis on how the quality of generated sample sets and the corresponding prediction accuracy changes when different degrees of disturbances are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst-case time requirements of such an SCFG based sampling method without significant accuracy losses. If, on the other hand, the quality of sampled structures can be observed to strongly react to slight disturbances, there is little hope for improving the complexity by heuristic procedures. We hence provide a reliable test for the hypothesis that a heuristic method could be implemented to improve the time scaling of RNA secondary structure prediction in the worst-case - without sacrificing much of the accuracy of the results. Our experiments indicate that absolute errors generally lead to the generation of useless sample sets, whereas relative errors seem to have only small negative impact on both the predictive accuracy and the overall quality of resulting structure samples. Based on these observations, we present some useful ideas for developing a time-reduced sampling method guaranteeing an acceptable predictive accuracy. We also discuss some inherent drawbacks that arise in the context of approximation. The key results of this paper are crucial for the design of an efficient and competitive heuristic prediction method based on the increasingly accepted and attractive statistical sampling approach. This has indeed been indicated by the construction of prototype algorithms.

  10. Evaluating the effect of disturbed ensemble distributions on SCFG based statistical sampling of RNA secondary structures

    PubMed Central

    2012-01-01

    Background Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. Results In this work, we will consider the SCFG based approach in order to perform an analysis on how the quality of generated sample sets and the corresponding prediction accuracy changes when different degrees of disturbances are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst-case time requirements of such an SCFG based sampling method without significant accuracy losses. If, on the other hand, the quality of sampled structures can be observed to strongly react to slight disturbances, there is little hope for improving the complexity by heuristic procedures. We hence provide a reliable test for the hypothesis that a heuristic method could be implemented to improve the time scaling of RNA secondary structure prediction in the worst-case – without sacrificing much of the accuracy of the results. Conclusions Our experiments indicate that absolute errors generally lead to the generation of useless sample sets, whereas relative errors seem to have only small negative impact on both the predictive accuracy and the overall quality of resulting structure samples. Based on these observations, we present some useful ideas for developing a time-reduced sampling method guaranteeing an acceptable predictive accuracy. We also discuss some inherent drawbacks that arise in the context of approximation. The key results of this paper are crucial for the design of an efficient and competitive heuristic prediction method based on the increasingly accepted and attractive statistical sampling approach. This has indeed been indicated by the construction of prototype algorithms. PMID:22776037

  11. Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar.

    PubMed

    Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le

    2016-09-09

    Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers cyclic ambiguity in its angle estimates according to the spatial Nyquist sampling theorem, since the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fusion result of each radar's estimation is fed to the extended Kalman filter (EKF) to finish the first filtering. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy improves dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method.

  12. Evaluation of multiband, multitemporal, and transformed LANDSAT MSS data for land cover area estimation. [North Central Missouri

    NASA Technical Reports Server (NTRS)

    Stoner, E. R.; May, G. A.; Kalcic, M. T. (Principal Investigator)

    1981-01-01

    Sample segments of ground-verified land cover data collected in conjunction with the USDA/ESS June Enumerative Survey were merged with LANDSAT data and served as a focus for unsupervised spectral class development and accuracy assessment. Multitemporal data sets were created from single-date LANDSAT MSS acquisitions from a nominal scene covering an eleven-county area in north central Missouri. Classification accuracies for the four land cover types predominant in the test site showed significant improvement in going from unitemporal to multitemporal data sets. Transformed LANDSAT data sets did not significantly improve classification accuracies. Regression estimators yielded mixed results for different land covers. Misregistration of two LANDSAT data sets by as much as one and one-half pixels did not significantly alter overall classification accuracies. Existing algorithms for scene-to-scene overlay proved adequate for multitemporal data analysis as long as statistical class development and accuracy assessment were restricted to field interior pixels.

  13. Research on the impact factors of GRACE precise orbit determination by dynamic method

    NASA Astrophysics Data System (ADS)

    Guo, Nan-nan; Zhou, Xu-hua; Li, Kai; Wu, Bin

    2018-07-01

    With the successful use of GPS-only-based POD (precise orbit determination), more and more satellites carry onboard GPS receivers to support their orbit accuracy requirements. This provides continuous GPS observations of high precision, and has become an indispensable way to obtain the orbits of LEO satellites. Precise orbit determination of LEO satellites plays an important role in the application of LEO satellites. Numerous factors should be considered in POD processing. In this paper, several factors that impact precise orbit determination are analyzed, namely the satellite altitude, the time-variable Earth gravity field, the GPS satellite clock error and accelerometer observations. The GRACE satellites provide an ideal platform to study the performance of these factors for precise orbit determination using zero-difference GPS data. These factors are quantitatively analyzed for their effect on the accuracy of the dynamic orbit using GRACE observations from 2005 to 2011 with the SHORDE software. The study indicates that: (1) as the altitude of the GRACE satellites was lowered from 480 km to 460 km over seven years, the 3D (three-dimensional) position accuracy of the GRACE satellite orbit is about 3-4 cm based on long spans of data; (2) the accelerometer data improve the 3D position accuracy of GRACE by about 1 cm; (3) the accuracy of the zero-difference dynamic orbit is about 6 cm with GPS satellite clock error products at a 5 min sampling interval, and can be raised to 4 cm if GPS satellite clock error products with a 30 s sampling interval are adopted; and (4) the time-variable part of the Earth gravity field model improves the 3D position accuracy of GRACE by about 0.5-1.5 cm. Based on this study, we quantitatively analyze the factors that affect precise orbit determination of LEO satellites. This study plays an important role in improving the accuracy of LEO satellite orbit determination.

  14. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds.

    PubMed

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M; Bloom, Peter H; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its accuracy in classifying basic behaviors at sampling frequencies as low as 10 Hz, the KNN at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequence of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.
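
    As a rough illustration of the comparison above, the following scikit-learn sketch trains both classifier types on windowed accelerometer features; the feature matrix and labels are synthetic stand-ins, not the eagle data, and the feature choices are assumptions.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        # Synthetic stand-ins: one row per accelerometer window (e.g., mean,
        # variance and dominant frequency per axis); in a real pipeline the
        # labels would come from the video-annotated behaviors.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 9))
        y = rng.choice(["flapping", "soaring", "sitting"], size=600)

        for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                          ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(f"{name}: {acc:.3f} mean cross-validated accuracy")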

  15. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds

    PubMed Central

    Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its accuracy in classifying basic behaviors at sampling frequencies as low as 10 Hz, the KNN at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequence of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data. PMID:28403159

  16. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds

    USGS Publications Warehouse

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael J.; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its accuracy in classifying basic behaviors at sampling frequencies as low as 10 Hz, the KNN at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequence of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.

  17. Sampling strategies for improving tree accuracy and phylogenetic analyses: a case study in ciliate protists, with notes on the genus Paramecium.

    PubMed

    Yi, Zhenzhen; Strüder-Kypke, Michaela; Hu, Xiaozhong; Lin, Xiaofeng; Song, Weibo

    2014-02-01

    In order to assess how dataset selection for multi-gene analyses affects the accuracy of inferred phylogenetic trees in ciliates, we chose five genes and the genus Paramecium, one of the most widely used model protist genera, and compared the tree topologies of the single- and multi-gene analyses. Our empirical study shows that: (1) using multiple genes improves phylogenetic accuracy, even when their one-gene topologies conflict with each other; (2) the impact of missing data on phylogenetic accuracy is ambiguous: resolution power and topological similarity, but not the number of represented taxa, are the most important criteria for including a dataset in concatenated analyses; (3) as an example, we tested the three classification models of the genus Paramecium with a multi-gene approach, and only the monophyly of the subgenus Paramecium is supported. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Adjusting the Stems Regional Forest Growth Model to Improve Local Predictions

    Treesearch

    W. Brad Smith

    1983-01-01

    A simple procedure using double sampling is described for adjusting growth in the STEMS regional forest growth model to compensate for subregional variations. Predictive accuracy of the STEMS model (a distance-independent, individual tree growth model for Lake States forests) was improved by using this procedure.

  19. Assigning African elephant DNA to geographic region of origin: Applications to the ivory trade

    PubMed Central

    Wasser, Samuel K.; Shedlock, Andrew M.; Comstock, Kenine; Ostrander, Elaine A.; Mutayoba, Benezeth; Stephens, Matthew

    2004-01-01

    Resurgence of illicit trade in African elephant ivory is placing the elephant at renewed risk. Regulation of this trade could be vastly improved by the ability to verify the geographic origin of tusks. We address this need by developing a combined genetic and statistical method to determine the origin of poached ivory. Our statistical approach exploits a smoothing method to estimate geographic-specific allele frequencies over the entire African elephants' range for 16 microsatellite loci, using 315 tissue and 84 scat samples from forest (Loxodonta africana cyclotis) and savannah (Loxodonta africana africana) elephants at 28 locations. These geographic-specific allele frequency estimates are used to infer the geographic origin of DNA samples, such as could be obtained from tusks of unknown origin. We demonstrate that our method alleviates several problems associated with standard assignment methods in this context, and the absolute accuracy of our method is high. Continent-wide, 50% of samples were located within 500 km, and 80% within 932 km of their actual place of origin. Accuracy varied by region (median accuracies: West Africa, 135 km; Central Savannah, 286 km; Central Forest, 411 km; South, 535 km; and East, 697 km). In some cases, allele frequencies vary considerably over small geographic regions, making much finer discriminations possible and suggesting that resolution could be further improved by collection of samples from locations not represented in our study. PMID:15459317

  20. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

    The traditional timing discrimination technique for laser rangefinding in remote sensing suffers from limited measurement performance and relatively large error, and can no longer meet the demands of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement in timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point corresponding to a fixed threshold earlier by amplifying the received signal multiple times. Then, the timing information is sampled and the timing points are fitted with algorithms in MATLAB. Finally, the minimum timing error is calculated from the fitting function. Thereby, the timing error of the received lidar signal is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by multiple amplification of the received signal and by fitting the timing points, and a timing accuracy of 4.63 ps is achieved.
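

    The fitting step lends itself to a short sketch. Assuming, as one plausible reading of the method, that the threshold-crossing time approaches the true arrival time as the signal is amplified, crossing times recorded at several gains can be fitted and extrapolated; the 1/gain model form and all numbers below are assumptions, not taken from the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical threshold-crossing times (ns) at increasing gains; the
        # walk error shrinks as the leading edge steepens with amplification.
        gains = np.array([1, 2, 4, 8, 16], dtype=float)
        t_cross = np.array([10.80, 10.42, 10.21, 10.11, 10.05])

        def walk_model(g, t0, c):
            return t0 + c / g  # crossing time approaches t0 as gain grows

        (t0, c), _ = curve_fit(walk_model, gains, t_cross)
        print(f"estimated true arrival time: {t0:.3f} ns")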

  1. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and a reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  2. Noncontact blood species identification method based on spatially resolved near-infrared transmission spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Linna; Sun, Meixiu; Wang, Zhennan; Li, Hongxiao; Li, Yingxin; Li, Gang; Lin, Ling

    2017-09-01

    The inspection and identification of whole blood are crucially significant for import-export ports and inspection and quarantine departments. In our previous research, we showed that near-infrared diffuse transmission spectroscopy has potential for noninvasively identifying three blood species (macaque, human and mouse), with samples measured in cuvettes. However, in open sampling cases, inspectors may be endangered by virulence factors in the blood samples. In this paper, we explored noncontact measurement for classification, with blood samples measured in vacuum blood collection tubes. Spatially resolved near-infrared spectroscopy was used to improve the prediction accuracy. Results showed that the prediction accuracy of the model built with nine detection points was more than 90% in identification among all five species (chicken, goat, macaque, pig and rat), far better than the performance of the model built with single-point spectra. The results fully support the idea that spatially resolved near-infrared spectroscopy can improve prediction ability, and demonstrate the feasibility of this method for noncontact blood species identification in practical applications.

  3. Adjusted Clinical Groups: Predictive Accuracy for Medicaid Enrollees in Three States

    PubMed Central

    Adams, E. Kathleen; Bronstein, Janet M.; Raskind-Hood, Cheryl

    2002-01-01

    Actuarial split-sample methods were used to assess predictive accuracy of adjusted clinical groups (ACGs) for Medicaid enrollees in Georgia, Mississippi (lagging in managed care penetration), and California. Accuracy for two non-random groups—high-cost and located in urban poor areas—was assessed. Measures for random groups were derived with and without short-term enrollees to assess the effect of turnover on predictive accuracy. ACGs improved predictive accuracy for high-cost conditions in all States, but did so only for those in Georgia's poorest urban areas. Higher and more unpredictable expenses of short-term enrollees moderated the predictive power of ACGs. This limitation was significant in Mississippi due in part, to that State's very high proportion of short-term enrollees. PMID:12545598

  4. Biological Marker Analysis as Part of the CIBERES-RTIC Cancer-SEPAR Strategic Project on Lung Cancer.

    PubMed

    Monsó, Eduard; Montuenga, Luis M; Sánchez de Cos, Julio; Villena, Cristina

    2015-09-01

    The aim of the Clinical and Molecular Staging of Stage I-IIp Lung Cancer Project is to identify molecular variables that improve the prognostic and predictive accuracy of TNM classification in stage I/IIp non-small cell lung cancer (NSCLC). Clinical data and lung tissue, tumor and blood samples will be collected from 3 patient cohorts created for this purpose. The prognostic protein signature will be validated from these samples, and micro-RNA, ALK, Ros1, Pdl-1, and TKT, TKTL1 and G6PD expression will be analyzed. Tissue inflammatory markers and stromal cell markers will also be analyzed. Methylation of the p16, DAPK, RASSF1a, APC and CDH13 genes in the tissue samples will be determined, and inflammatory markers in peripheral blood will also be analyzed. Variables that improve the prognostic and predictive accuracy of TNM in NSCLC by molecular staging may be identified from this extensive analytical panel. Copyright © 2014 SEPAR. Published by Elsevier Espana. All rights reserved.

  5. Breast cancer detection via Hu moment invariant and feedforward neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaowei; Yang, Jiquan; Nguyen, Elijah

    2018-04-01

    One in eight women will develop breast cancer during her lifetime. This study used Hu moment invariants and a feedforward neural network to diagnose breast cancer. With the help of K-fold cross-validation, we can test the out-of-sample accuracy of our method. Finally, we found that our method can improve the accuracy of detecting breast cancer and reduce the difficulty of diagnosis.
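
    A compact sketch of the described pipeline pairs OpenCV's Hu moment invariants with a small feedforward network; the image patches and labels below are randomly generated placeholders, and the network size is an arbitrary choice rather than the study's configuration.

        import cv2
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def hu_features(img):
            """Seven Hu moment invariants, log-scaled for numeric stability."""
            m = cv2.HuMoments(cv2.moments(img)).flatten()
            return -np.sign(m) * np.log10(np.abs(m) + 1e-30)

        # Hypothetical grayscale mammogram patches and binary labels
        # (0 = benign, 1 = malignant); real data would replace these.
        patches = [np.random.randint(0, 256, (64, 64), dtype=np.uint8)
                   for _ in range(100)]
        labels = np.random.randint(0, 2, 100)

        X = np.array([hu_features(p) for p in patches])
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        clf.fit(X, labels)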

  6. Probing the microscopic environment of 23Na ions in brain tissue by MRI: On the accuracy of different sampling schemes for the determination of rapid, biexponential T2* decay at low signal-to-noise ratio.

    PubMed

    Lommen, Jonathan M; Flassbeck, Sebastian; Behl, Nicolas G R; Niesporek, Sebastian; Bachert, Peter; Ladd, Mark E; Nagel, Armin M

    2018-08-01

    To investigate and reduce the influence of signal sampling on the determination of the short and long apparent transverse relaxation times (T2,s*, T2,l*) of 23Na in vivo. The accuracy of T2* determination was analyzed in simulations for five different sampling schemes. The influence of noise on the parameter fit was investigated for three different models. A dedicated sampling scheme was developed for brain parenchyma by numerically optimizing the parameter estimation. This scheme was compared in vivo to linear sampling at 7T. For the considered sampling schemes, T2,s* / T2,l* exhibit an average bias of 3% / 4% with a variation of 25% / 15%, based on simulations with previously published T2* values. The accuracy could be improved with the optimized sampling scheme by strongly averaging the earliest sample. A fitting model with a constant noise floor can increase accuracy, while additionally fitting a noise term is only beneficial when sampling extends to late echo times > 80 ms. T2* values in white matter were determined to be T2,s* = 5.1 ± 0.8 / 4.2 ± 0.4 ms and T2,l* = 35.7 ± 2.4 / 34.4 ± 1.5 ms using linear/optimized sampling. Voxel-wise T2* determination of 23Na is feasible in vivo; however, sampling and fitting methods have to be chosen carefully to retrieve accurate results. Magn Reson Med 80:571-584, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
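
    The biexponential model with a constant noise floor mentioned above can be written down directly; the sketch below fits it to simulated low-SNR decay data with SciPy, with all parameter values illustrative rather than taken from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(te, a, frac_s, t2s, t2l, noise):
            """Biexponential T2* decay plus a constant noise floor."""
            return a * (frac_s * np.exp(-te / t2s)
                        + (1 - frac_s) * np.exp(-te / t2l)) + noise

        te = np.linspace(0.3, 80.0, 32)          # echo times (ms)
        true = (100.0, 0.6, 5.0, 35.0, 2.0)      # illustrative parameters
        rng = np.random.default_rng(1)
        sig = biexp(te, *true) + rng.normal(scale=1.5, size=te.size)  # low SNR

        p0 = (80.0, 0.5, 3.0, 30.0, 1.0)         # rough initial guess
        popt, _ = curve_fit(biexp, te, sig, p0=p0, maxfev=10000)
        print(f"T2*_s = {popt[2]:.1f} ms, T2*_l = {popt[3]:.1f} ms")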

  7. Protein homology model refinement by large-scale energy optimization.

    PubMed

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  8. The Efficacy of Stuttering Measurement Training: Evaluating Two Training Programs

    PubMed Central

    Bainbridge, Lauren A.; Stavros, Candace; Ebrahimian, Mineh; Wang, Yuedong

    2015-01-01

    Purpose Two stuttering measurement training programs currently used for training clinicians were evaluated for their efficacy in improving the accuracy of total stuttering event counting. Method Four groups, each with 12 randomly allocated participants, completed a pretest–posttest design training study. They were evaluated by their counts of stuttering events on eight 3-min audiovisual speech samples from adults and children who stutter. Stuttering judgment training involved use of either the Stuttering Measurement System (SMS), Stuttering Measurement Assessment and Training (SMAAT) programs, or no training. To test for the reliability of any training effect, SMS training was repeated with the 4th group. Results Both SMS-trained groups produced approximately 34% improvement, significantly better than no training or the SMAAT program. The SMAAT program produced a mixed result. Conclusions The SMS program was shown to produce a “medium” effect size improvement in the accuracy of stuttering event counts, and this improvement was almost perfectly replicated in a 2nd group. Half of the SMAAT judges produced a 36% improvement in accuracy, but the other half showed no improvement. Additional studies are needed to demonstrate the durability of the reported improvements, but these positive effects justify the importance of stuttering measurement training. PMID:25629956

  9. The efficacy of stuttering measurement training: evaluating two training programs.

    PubMed

    Bainbridge, Lauren A; Stavros, Candace; Ebrahimian, Mineh; Wang, Yuedong; Ingham, Roger J

    2015-04-01

    Two stuttering measurement training programs currently used for training clinicians were evaluated for their efficacy in improving the accuracy of total stuttering event counting. Four groups, each with 12 randomly allocated participants, completed a pretest-posttest design training study. They were evaluated by their counts of stuttering events on eight 3-min audiovisual speech samples from adults and children who stutter. Stuttering judgment training involved use of either the Stuttering Measurement System (SMS), Stuttering Measurement Assessment and Training (SMAAT) programs, or no training. To test for the reliability of any training effect, SMS training was repeated with the 4th group. Both SMS-trained groups produced approximately 34% improvement, significantly better than no training or the SMAAT program. The SMAAT program produced a mixed result. The SMS program was shown to produce a "medium" effect size improvement in the accuracy of stuttering event counts, and this improvement was almost perfectly replicated in a 2nd group. Half of the SMAAT judges produced a 36% improvement in accuracy, but the other half showed no improvement. Additional studies are needed to demonstrate the durability of the reported improvements, but these positive effects justify the importance of stuttering measurement training.

  10. Short communication: Evaluation of sampling socks for detection of Mycobacterium avium ssp. paratuberculosis on dairy farms.

    PubMed

    Wolf, R; Orsel, K; De Buck, J; Kanevets, U; Barkema, H W

    2016-04-01

    Mycobacterium avium ssp. paratuberculosis (MAP) causes Johne's disease, a production-limiting disease in cattle. Detection of infected herds is often done using environmental samples (ES) of manure, which are collected in cattle pens and manure storage areas. Disadvantages of the method are that sample accuracy is affected by cattle housing and the type of manure storage area. Furthermore, some sampling locations (e.g., manure lagoons) are frequently not readily accessible. However, sampling socks (SO), as used for Salmonella spp. testing in chicken flocks, might be an easy-to-use and accurate alternative to ES. The objective of the study was to assess the accuracy of SO for detection of MAP in dairy herds. At each of 102 participating herds, 6 ES and 2 SO were collected. In total, 45 herds had only negative samples with both methods and 29 herds had ≥1 positive ES and ≥1 positive SO. Furthermore, 27 herds with ≥1 positive ES had no positive SO, and 1 herd with no positive ES had 1 positive SO. Bayesian simulation with informative priors on the sensitivity of ES and MAP herd prevalence provided a posterior sensitivity for SO of 43.5% (95% probability interval = 33-58), and 78.5% (95% probability interval = 62-93) for ES. Although SO were easy to use, their accuracy was lower than that of ES. Therefore, with improvements in the sampling protocol (e.g., more SO per farm and more frequent herd visits), as well as improvements in the laboratory protocol, SO could become a useful alternative to ES. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  11. Fluorescence microscopy point spread function model accounting for aberrations due to refractive index variability within a specimen.

    PubMed

    Ghosh, Sreya; Preza, Chrysanthe

    2015-07-01

    A three-dimensional (3-D) point spread function (PSF) model for wide-field fluorescence microscopy, suitable for imaging samples with variable refractive index (RI) in multilayered media, is presented. This PSF model is a key component for accurate 3-D image restoration of thick biological samples, such as lung tissue. Microscope- and specimen-derived parameters are combined with a rigorous vectorial formulation to obtain a new PSF model that accounts for additional aberrations due to specimen RI variability. Experimental evaluation and verification of the PSF model was accomplished using images from 175-nm fluorescent beads in a controlled test sample. Fundamental experimental validation of the advantage of using improved PSFs in depth-variant restoration was accomplished by restoring experimental data from beads (6 μm in diameter) mounted in a sample with RI variation. In the investigated study, improvement in restoration accuracy in the range of 18 to 35% was observed when PSFs from the proposed model were used over restoration using PSFs from an existing model. The new PSF model was further validated by showing that its prediction compares to an experimental PSF (determined from 175-nm beads located below a thick rat lung slice) with a 42% improved accuracy over the current PSF model prediction.

  12. Machine Learning for Big Data: A Study to Understand Limits at Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R.; Del-Castillo-Negrete, Carlos Emilio

    This report aims to empirically understand the limits of machine learning when applied to Big Data. We observe that recent innovations in being able to collect, access, organize, integrate, and query massive amounts of data from a wide variety of data sources have brought statistical data mining and machine learning under more scrutiny, evaluation and application for gleaning insights from the data than ever before. Much is expected from algorithms without understanding their limitations at scale while dealing with massive datasets. In that context, we pose and address the following questions: How does a machine learning algorithm perform on measures such as accuracy and execution time with increasing sample size and feature dimensionality? Does training with more samples guarantee better accuracy? How many features should be computed for a given problem? Do more features guarantee better accuracy? Are the efforts to derive and calculate more features and train on larger samples worth it? As problems become more complex and traditional binary classification algorithms are replaced with multi-task, multi-class categorization algorithms, do parallel learners perform better? What happens to the accuracy of the learning algorithm when trained to categorize multiple classes within the same feature space? Towards finding answers to these questions, we describe the design of an empirical study and present the results. We conclude with the following observations: (i) accuracy of the learning algorithm increases with increasing sample size but saturates at a point, beyond which more samples do not contribute to better accuracy/learning; (ii) the richness of the feature space dictates performance, both accuracy and training time; (iii) increased dimensionality is often reflected in better performance (higher accuracy in spite of longer training times), but the improvements are not commensurate with the effort expended on feature computation and training; (iv) the accuracy of the learning algorithms drops significantly with multi-class learners training on the same feature matrix; and (v) learning algorithms perform well when categories in the labeled data are independent (i.e., no relationship or hierarchy exists among categories).
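
    The first observation, accuracy saturating with sample size, is easy to reproduce empirically; the sketch below uses a synthetic dataset and a random forest, neither of which is from the report.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import learning_curve

        # Accuracy rises quickly with sample size, then flattens once the
        # classifier has seen enough of the (synthetic) class structure.
        X, y = make_classification(n_samples=20000, n_features=50,
                                   n_informative=10, random_state=0)
        sizes, _, test_scores = learning_curve(
            RandomForestClassifier(n_estimators=100, random_state=0), X, y,
            train_sizes=np.linspace(0.05, 1.0, 8), cv=3)
        for n, s in zip(sizes, test_scores.mean(axis=1)):
            print(f"{n:6d} samples -> {s:.3f} accuracy")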

  13. Improving Alcohol Screening for College Students: Screening for Alcohol Misuse amongst College Students with a Simple Modification to the CAGE Questionnaire

    ERIC Educational Resources Information Center

    Taylor, Purcell; El-Sabawi, Taleed; Cangin, Causenge

    2016-01-01

    Objective: To improve the CAGE (Cut down, Annoyed, Guilty, Eye opener) questionnaire's predictive accuracy in screening college students. Participants: The sample consisted of 219 midwestern university students who self-administered a confidential survey. Methods: Exploratory factor analysis, confirmatory factor analysis, receiver operating…

  14. Robust Stereo Visual Odometry Using Improved RANSAC-Based Methods for Mobile Robot Localization

    PubMed Central

    Liu, Yanqing; Gu, Yuzhang; Li, Jiamao; Zhang, Xiaolin

    2017-01-01

    In this paper, we present a novel approach for stereo visual odometry with robust motion estimation that is faster and more accurate than standard RANSAC (Random Sample Consensus). Our method makes improvements in RANSAC in three aspects: first, the hypotheses are preferentially generated by sampling the input feature points on the order of ages and similarities of the features; second, the evaluation of hypotheses is performed based on the SPRT (Sequential Probability Ratio Test) that makes bad hypotheses discarded very fast without verifying all the data points; third, we aggregate the three best hypotheses to get the final estimation instead of only selecting the best hypothesis. The first two aspects improve the speed of RANSAC by generating good hypotheses and discarding bad hypotheses in advance, respectively. The last aspect improves the accuracy of motion estimation. Our method was evaluated in the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) and the New Tsukuba dataset. Experimental results show that the proposed method achieves better results for both speed and accuracy than RANSAC. PMID:29027935
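
    For reference, a baseline RANSAC loop for 2-D line fitting is sketched below; the paper's three refinements (age/similarity-guided hypothesis sampling, SPRT-based early rejection, and aggregation of the best three hypotheses) are described above but not implemented in this sketch.

        import numpy as np

        def ransac_line(points, iters=200, tol=0.05,
                        rng=np.random.default_rng(0)):
            """Classic RANSAC: repeatedly fit a line to a 2-point sample and
            keep the hypothesis with the most inliers."""
            best_inliers, best_model = 0, None
            for _ in range(iters):
                i, j = rng.choice(len(points), 2, replace=False)
                p, q = points[i], points[j]
                d = q - p
                n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal
                dist = np.abs((points - p) @ n)                  # point-line distance
                inliers = np.count_nonzero(dist < tol)
                if inliers > best_inliers:
                    best_inliers, best_model = inliers, (p, d)
            return best_model, best_inliers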

  15. A Strapdown Inertial Navigation System/Beidou/Doppler Velocity Log Integrated Navigation Algorithm Based on a Cubature Kalman Filter

    PubMed Central

    Gao, Wei; Zhang, Ya; Wang, Jianguo

    2014-01-01

    The integrated navigation system with strapdown inertial navigation system (SINS), Beidou (BD) receiver and Doppler velocity log (DVL) can be used in marine applications owing to the fact that the redundant and complementary information from different sensors can markedly improve the system accuracy. However, the existence of multisensor asynchrony will introduce errors into the system. In order to deal with the problem, conventionally the sampling interval is subdivided, which increases the computational complexity. In this paper, an innovative integrated navigation algorithm based on a Cubature Kalman filter (CKF) is proposed correspondingly. A nonlinear system model and observation model for the SINS/BD/DVL integrated system are established to more accurately describe the system. By taking multi-sensor asynchronization into account, a new sampling principle is proposed to make the best use of each sensor's information. Further, CKF is introduced in this new algorithm to enable the improvement of the filtering accuracy. The performance of this new algorithm has been examined through numerical simulations. The results have shown that the positional error can be effectively reduced with the new integrated navigation algorithm. Compared with the traditional algorithm based on EKF, the accuracy of the SINS/BD/DVL integrated navigation system is improved, making the proposed nonlinear integrated navigation algorithm feasible and efficient. PMID:24434842
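
    The CKF's central construction, propagating 2n equally weighted cubature points through the nonlinear model, can be sketched briefly; the process model f is left generic, and the paper's SINS/BD/DVL models and asynchronous sampling scheme are not reproduced.

        import numpy as np

        def cubature_points(x, P):
            """The 2n CKF cubature points: the mean shifted by +/- sqrt(n)
            times each column of the Cholesky factor of P."""
            n = x.size
            S = np.linalg.cholesky(P)
            offsets = np.sqrt(n) * np.hstack([S, -S])  # n x 2n
            return x[:, None] + offsets                # each column is one point

        def ckf_predict(x, P, f, Q):
            """Propagate the points through the process model f and re-estimate
            mean and covariance with equal weights 1/(2n)."""
            pts = cubature_points(x, P)
            prop = np.column_stack([f(pts[:, i]) for i in range(pts.shape[1])])
            x_pred = prop.mean(axis=1)
            dev = prop - x_pred[:, None]
            P_pred = dev @ dev.T / pts.shape[1] + Q
            return x_pred, P_pred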

  16. Construction and testing of a simple and economical soil greenhouse gas automatic sampler

    USGS Publications Warehouse

    Ginting, D.; Arnold, S.L.; Arnold, N.S.; Tubbs, R.S.

    2007-01-01

    Quantification of soil greenhouse gas emissions requires considerable sampling to account for spatial and/or temporal variation. With manual sampling, additional personnel are often not available to sample multiple sites within a narrow time interval. The objectives were to construct an automatic gas sampler and to compare the accuracy and precision of automatic versus manual sampling. The automatic sampler was tested with carbon dioxide (CO2) fluxes that mimicked the range of CO2 fluxes during a typical corn-growing season in eastern Nebraska. Gas samples were drawn from the chamber at 0, 5, and 10 min manually and with the automatic sampler. The three samples drawn with the automatic sampler were transferred to pre-vacuumed vials after 1 h; thus the samples in syringe barrels stayed connected with the increasing CO2 concentration in the chamber. The automatic sampler sustains accuracy and precision in greenhouse gas sampling while improving time efficiency and reducing labor stress. Copyright © Taylor & Francis Group, LLC.

  17. Measuring Blood Glucose Concentrations in Photometric Glucometers Requiring Very Small Sample Volumes.

    PubMed

    Demitri, Nevine; Zoubir, Abdelhak M

    2017-01-01

    Glucometers present an important self-monitoring tool for diabetes patients and, therefore, must exhibit high accuracy as well as good usability features. Based on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, thereby decreasing the measurement time and thus improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than is currently state of the art with sufficient accuracy according to the most recent ISO standards, and reduce measurement time significantly compared to state-of-the-art methods.

  18. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    PubMed

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
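
    A serial sketch of the PSO stage, minimizing a generic loss over a flattened weight vector, is given below; the MapReduce parallelization and the subsequent BP fine-tuning are omitted, and all hyperparameters are illustrative rather than the paper's settings.

        import numpy as np

        def pso_minimize(loss, dim, n_particles=30, iters=100,
                         w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng(0)):
            """Standard PSO: each particle is a candidate weight vector pulled
            toward its personal best and the swarm's global best."""
            x = rng.uniform(-1, 1, (n_particles, dim))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_val = np.array([loss(p) for p in x])
            g = pbest[pbest_val.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                vals = np.array([loss(p) for p in x])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                g = pbest[pbest_val.argmin()].copy()
            return g  # best weight vector found; would seed BP training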

  19. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    PubMed

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  20. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data

    PubMed Central

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points. PMID:26807579

  1. Comparison of several von Willebrand factor (VWF) activity assays for monitoring patients undergoing treatment with VWF/FVIII concentrates: improved performance with a new modified automated method.

    PubMed

    Hillarp, A; Friedman, K D; Adcock-Funk, D; Tiefenbacher, S; Nichols, W L; Chen, D; Stadler, M; Schwartz, B A

    2015-11-01

    The ability of von Willebrand factor (VWF) to bind platelet GP Ib and promote platelet plug formation is measured in vitro using the ristocetin cofactor (VWF:RCo) assay. Automated assay systems make testing more accessible for diagnosis, but do not necessarily improve sensitivity and accuracy. We assessed the performance of a modified automated VWF:RCo assay protocol for the Behring Coagulation System (BCS®) compared to other available assay methods. Results from different VWF:RCo assays in a number of specialized commercial and research testing laboratories were compared using plasma samples with varying VWF:RCo activities (0-1.2 IU/mL). Samples were prepared by mixing VWF concentrate or plasma standard into VWF-depleted plasma. Commercially available lyophilized standard human plasma was also studied. Emphasis was put on the low measuring range. VWF:RCo accuracy was calculated based on the expected values, whereas precision was obtained from repeated measurements. In the physiological concentration range, most of the automated tests resulted in acceptable accuracy, with varying reproducibility dependent on the method. However, several assays were inaccurate in the low measuring range. Only the modified BCS protocol showed acceptable accuracy over the entire measuring range, with improved reproducibility. A modified BCS® VWF:RCo method can improve sensitivity and thus extend the measuring range. Furthermore, the modified BCS® assay displayed good precision. This study indicates that the specific modifications, namely the combination of increased ristocetin concentration, reduced platelet content, VWF-depleted plasma as on-board diluent, and a two-curve calculation mode, reduce the issues seen with current VWF:RCo activity assays. © 2015 John Wiley & Sons Ltd.

  2. Improved classification accuracy in 1- and 2-dimensional NMR metabolomics data using the variance stabilising generalised logarithm transformation

    PubMed Central

    Parsons, Helen M; Ludwig, Christian; Günther, Ulrich L; Viant, Mark R

    2007-01-01

    Background Classifying nuclear magnetic resonance (NMR) spectra is a crucial step in many metabolomics experiments. Since several multivariate classification techniques depend upon the variance of the data, it is important to first minimise any contribution from unwanted technical variance arising from sample preparation and analytical measurements, and thereby maximise any contribution from wanted biological variance between different classes. The generalised logarithm (glog) transform was developed to stabilise the variance in DNA microarray datasets, but has rarely been applied to metabolomics data. In particular, it has not been rigorously evaluated against other scaling techniques used in metabolomics, nor tested on all forms of NMR spectra including 1-dimensional (1D) 1H, projections of 2D 1H, 1H J-resolved (pJRES), and intact 2D J-resolved (JRES). Results Here, the effects of the glog transform are compared against two commonly used variance stabilising techniques, autoscaling and Pareto scaling, as well as unscaled data. The four methods are evaluated in terms of the effects on the variance of NMR metabolomics data and on the classification accuracy following multivariate analysis, the latter achieved using principal component analysis followed by linear discriminant analysis. For two of three datasets analysed, classification accuracies were highest following glog transformation: 100% accuracy for discriminating 1D NMR spectra of hypoxic and normoxic invertebrate muscle, and 100% accuracy for discriminating 2D JRES spectra of fish livers sampled from two rivers. For the third dataset, pJRES spectra of urine from two breeds of dog, the glog transform and autoscaling achieved equal highest accuracies. Additionally we extended the glog algorithm to effectively suppress noise, which proved critical for the analysis of 2D JRES spectra. Conclusion We have demonstrated that the glog and extended glog transforms stabilise the technical variance in NMR metabolomics datasets. This significantly improves the discrimination between sample classes and has resulted in higher classification accuracies compared to unscaled, autoscaled or Pareto scaled data. Additionally we have confirmed the broad applicability of the glog approach using three disparate datasets from different biological samples using 1D NMR spectra, 1D projections of 2D JRES spectra, and intact 2D JRES spectra. PMID:17605789
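
    One common form of the generalised logarithm is glog(y) = log(y + sqrt(y^2 + lambda)), which is approximately linear near zero (stabilising the variance of low-intensity bins) and logarithmic for large y. The sketch below applies it to a synthetic spectrum; in a real analysis the transform parameter would be estimated from technical replicates rather than supplied by hand.

        import numpy as np

        def glog(y, lam):
            """Generalised log transform, log(y + sqrt(y^2 + lambda));
            lam is the variance-stabilising parameter."""
            return np.log(y + np.sqrt(y ** 2 + lam))

        # Synthetic stand-in for binned NMR intensities.
        spectrum = np.abs(np.random.default_rng(0).normal(5.0, 2.0, 1000))
        transformed = glog(spectrum, lam=1e-2)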

  3. Diagnostic Accuracy of Fall Risk Assessment Tools in People With Diabetic Peripheral Neuropathy

    PubMed Central

    Pohl, Patricia S.; Mahnken, Jonathan D.; Kluding, Patricia M.

    2012-01-01

    Background Diabetic peripheral neuropathy affects nearly half of individuals with diabetes and leads to increased fall risk. Evidence addressing fall risk assessment for these individuals is lacking. Objective The purpose of this study was to identify which of 4 functional mobility fall risk assessment tools best discriminates, in people with diabetic peripheral neuropathy, between recurrent “fallers” and those who are not recurrent fallers. Design A cross-sectional study was conducted. Setting The study was conducted in a medical research university setting. Participants The participants were a convenience sample of 36 individuals between 40 and 65 years of age with diabetic peripheral neuropathy. Measurements Fall history was assessed retrospectively and was the criterion standard. Fall risk was assessed using the Functional Reach Test, the Timed “Up & Go” Test, the Berg Balance Scale, and the Dynamic Gait Index. Sensitivity, specificity, positive and negative likelihood ratios, and overall diagnostic accuracy were calculated for each fall risk assessment tool. Receiver operating characteristic curves were used to estimate modified cutoff scores for each fall risk assessment tool; indexes then were recalculated. Results Ten of the 36 participants were classified as recurrent fallers. When traditional cutoff scores were used, the Dynamic Gait Index and Functional Reach Test demonstrated the highest sensitivity at only 30%; the Dynamic Gait Index also demonstrated the highest overall diagnostic accuracy. When modified cutoff scores were used, all tools demonstrated improved sensitivity (80% or 90%). Overall diagnostic accuracy improved for all tests except the Functional Reach Test; the Timed “Up & Go” Test demonstrated the highest diagnostic accuracy at 88.9%. Limitations The small sample size and retrospective fall history assessment were limitations of the study. Conclusions Modified cutoff scores improved diagnostic accuracy for 3 of 4 fall risk assessment tools when testing people with diabetic peripheral neuropathy. PMID:22836004
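
    The abstract does not state how the modified cutoff scores were derived from the receiver operating characteristic curves; one common approach is to pick the threshold maximising Youden's J = sensitivity + specificity - 1, sketched below on synthetic balance scores.

        import numpy as np
        from sklearn.metrics import roc_curve

        # Synthetic stand-ins for the clinical data: 1 = recurrent faller,
        # with lower balance-test scores indicating higher fall risk.
        rng = np.random.default_rng(0)
        faller = rng.integers(0, 2, 36)
        score = rng.normal(50, 8, 36) - 6 * faller

        # Negate the score so that larger values indicate the positive class.
        fpr, tpr, thr = roc_curve(faller, -score)
        best = np.argmax(tpr - fpr)  # index of maximal Youden's J
        print(f"cutoff: {-thr[best]:.1f}, "
              f"sens = {tpr[best]:.2f}, spec = {1 - fpr[best]:.2f}")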

  4. Low power and high accuracy spike sorting microprocessor with on-line interpolation and re-alignment in 90 nm CMOS process.

    PubMed

    Chen, Tung-Chien; Ma, Tsung-Chuan; Chen, Yun-Yu; Chen, Liang-Gee

    2012-01-01

    Accurate spike sorting is an important issue for neuroscientific and neuroprosthetic applications. The sorting of spikes depends on the features extracted from the neural waveforms, and a better sorting performance usually comes with a higher sampling rate (SR). However, for long-duration experiments on free-moving subjects, miniaturized and wireless neural recording ICs are the current trend, and the compromise on sorting accuracy is usually made by a lower SR for lower power consumption. In this paper, we implement an on-chip spike sorting processor with integrated interpolation hardware in order to improve the performance in terms of power versus accuracy. According to the fabrication results in a 90 nm process, if the interpolation is appropriately performed during the spike sorting, a system operated at an SR of 12.5 k samples per second (sps) can outperform one without interpolation at 25 ksps in both accuracy and power.

  5. Method for improving accuracy in full evaporation headspace analysis.

    PubMed

    Xie, Wei-Qi; Chai, Xin-Sheng

    2017-05-01

    We report a new headspace analytical method in which multiple headspace extraction is incorporated with the full evaporation technique. The pressure uncertainty caused by changes in the solid content of the samples has a great impact on measurement accuracy in conventional full evaporation headspace analysis. The results (using ethanol solution as the model sample) showed that the present technique effectively minimizes this problem. The proposed full evaporation multiple headspace extraction technique is also automated and practical, and could greatly broaden the applications of full-evaporation-based headspace analysis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
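
    In standard multiple headspace extraction theory, consecutive peak areas decline geometrically, so the total analyte response can be recovered from a few extractions as a geometric-series sum; the sketch below applies this to illustrative peak areas, and the paper's own data and protocol details are not reproduced.

        import numpy as np

        # Consecutive extractions give areas A_i = A_1 * k**(i-1), so the
        # total response is the geometric-series sum A_1 / (1 - k).
        areas = np.array([1000.0, 640.0, 410.0])  # illustrative GC peak areas
        slope = np.polyfit(np.arange(len(areas)), np.log(areas), 1)[0]
        k = np.exp(slope)                          # extraction ratio
        total = areas[0] / (1 - k)
        print(f"extraction ratio k = {k:.3f}, total response = {total:.0f}")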

  6. Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar

    PubMed Central

    Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le

    2016-01-01

    Distributed array radar can improve radar detection capability and measurement accuracy. However, its angle estimates suffer from cyclic ambiguity because, by the spatial Nyquist sampling theorem, the large sparse array undersamples the spatial field. Consequently, state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. First, the target motion model and radar measurement model are built. Second, the fused estimate from each radar is fed into an extended Kalman filter (EKF) to complete the first filtering stage. Third, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF performs the second filtering stage, yielding a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy improves dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method. PMID:27618058

  7. Improved age determination of blood and teeth samples using a selected set of DNA methylation markers

    PubMed Central

    Kamalandua, Aubeline

    2015-01-01

    Age estimation from DNA methylation markers has seen an exponential growth of interest, not in the least from forensic scientists. The current published assays, however, can still be improved by lowering the number of markers in the assay and by providing more accurate models to predict chronological age. From the published literature we selected 4 age-associated genes (ASPA, PDE4C, ELOVL2, and EDARADD) and determined CpG methylation levels from 206 blood samples of both deceased and living individuals (age range: 0-91 years). This data was subsequently used to compare prediction accuracy with both linear and non-linear regression models. A quadratic regression model in which the methylation levels of ELOVL2 were squared showed the highest accuracy, with a Mean Absolute Deviation (MAD) between chronological age and predicted age of 3.75 years and an adjusted R2 of 0.95. No difference in accuracy was observed for samples obtained either from living and deceased individuals or between the 2 genders. In addition, 29 teeth from different individuals (age range: 19-70 years) were analyzed using the same set of markers, resulting in a MAD of 4.86 years and an adjusted R2 of 0.74. Cross-validation of the results obtained from blood samples demonstrated the robustness and reproducibility of the assay. In conclusion, the set of 4 CpG DNA methylation markers is capable of producing highly accurate age predictions for blood samples from deceased and living individuals. PMID:26280308
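
    The quadratic model described above, age regressed on the four methylation markers with an added ELOVL2-squared term, can be sketched in a few lines; the methylation matrix here is synthetic, not the study's 206-sample dataset.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Synthetic stand-ins: columns are ASPA, PDE4C, ELOVL2, EDARADD
        # methylation fractions; the toy ground truth makes age depend on
        # ELOVL2 squared, mimicking the model form in the abstract.
        rng = np.random.default_rng(0)
        meth = rng.uniform(0, 1, size=(206, 4))
        age = 90 * meth[:, 2] ** 2 + rng.normal(0, 4, 206)

        X = np.column_stack([meth, meth[:, 2] ** 2])  # add ELOVL2^2 term
        model = LinearRegression().fit(X, age)
        mad = np.mean(np.abs(model.predict(X) - age))
        print(f"MAD = {mad:.2f} years")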

  8. Accurate time delay technology in simulated test for high precision laser range finder

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi

    2015-10-01

    With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRFs) keeps increasing, and so does the demand for their maintenance. Following the guiding principle of simulating spatial distance with time delay in tests of pulsed range finders, the key to distance simulation precision lies in an adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method is proposed to improve the accuracy of the circuit delay without increasing the circuit's counting frequency. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the novel circuit delay method proposed in this paper was verified by measurement with a high-sampling-rate oscilloscope. The measurement results show that the accuracy of the distance simulated by the circuit delay improves from +/- 0.75 m to +/- 0.15 m. The accuracy of the simulated distance is thus greatly improved in simulated tests of high-precision pulsed range finders.

  9. We Can Have It All: Improved Surveillance Outcomes and Decreased Personnel Costs Associated With Electronic Reportable Disease Surveillance, North Carolina, 2010

    PubMed Central

    DiBiase, Lauren; Fangman, Mary T.; Fleischauer, Aaron T.; Waller, Anna E.; MacDonald, Pia D. M.

    2013-01-01

    Objectives. We assessed the timeliness, accuracy, and cost of a new electronic disease surveillance system at the local health department level. We describe practices associated with lower cost and better surveillance timeliness and accuracy. Methods. Interviews conducted May through August 2010 with local health department (LHD) staff at a simple random sample of 30 of 100 North Carolina counties provided information on surveillance practices and costs; we used surveillance system data to calculate timeliness and accuracy. We identified LHDs with best timeliness and accuracy and used these categories to compare surveillance practices and costs. Results. Local health departments in the top tertiles for surveillance timeliness and accuracy had a lower cost per case reported than LHDs with lower timeliness and accuracy ($71 and $124 per case reported, respectively; P = .03). Best surveillance practices fell into 2 domains: efficient use of the electronic surveillance system and use of surveillance data for local evaluation and program management. Conclusions. Timely and accurate surveillance can be achieved in the setting of restricted funding experienced by many LHDs. Adopting best surveillance practices may improve both efficiency and public health outcomes. PMID:24134385

  10. TH-EF-BRA-08: A Novel Technique for Estimating Volumetric Cine MRI (VC-MRI) From Multi-Slice Sparsely Sampled Cine Images Using Motion Modeling and Free Form Deformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, W; Yin, F; Wang, C

    Purpose: To develop a technique to estimate on-board VC-MRI using multi-slice sparsely-sampled cine images, patient prior 4D-MRI, motion modeling and free-form deformation for real-time 3D target verification of lung radiotherapy. Methods: A previous method has been developed to generate on-board VC-MRI by deforming prior MRI images based on a motion model (MM) extracted from prior 4D-MRI and a single-slice on-board 2D-cine image. In this study, free-form deformation (FD) was introduced to correct for errors in the MM when large anatomical changes exist. Multiple-slice sparsely-sampled on-board 2D-cine images located within the target are used to improve both the estimation accuracy and temporal resolution of VC-MRI. The on-board 2D-cine MRIs are acquired at 20–30 frames/s by sampling only 10% of the k-space on a Cartesian grid, with 85% of that taken at the central k-space. The method was evaluated using XCAT (computerized patient model) simulation of lung cancer patients with various anatomical and respirational changes from prior 4D-MRI to onboard volume. The accuracy was evaluated using Volume-Percent-Difference (VPD) and Center-of-Mass-Shift (COMS) of the estimated tumor volume. Effects of region-of-interest (ROI) selection, 2D-cine slice orientation, slice number and slice location on the estimation accuracy were evaluated. Results: VC-MRI estimated using 10 sparsely-sampled sagittal 2D-cine MRIs achieved VPD/COMS of 9.07±3.54%/0.45±0.53mm among all scenarios based on estimation with ROI-MM-ROI-FD. The FD optimization improved estimation significantly for scenarios with anatomical changes. Using ROI-FD achieved better estimation than global-FD. Changing the multi-slice orientation to axial, coronal, and axial/sagittal orthogonal reduced the accuracy of VC-MRI to VPD/COMS of 19.47±15.74%/1.57±2.54mm, 20.70±9.97%/2.34±0.92mm, and 16.02±13.79%/0.60±0.82mm, respectively. Reducing the number of cines to 8 enhanced temporal resolution of VC-MRI by 25% while maintaining the estimation accuracy. Estimation using slices sampled uniformly through the tumor achieved better accuracy than slices sampled non-uniformly. Conclusions: Preliminary studies showed that it is feasible to generate VC-MRI from multi-slice sparsely-sampled 2D-cine images for real-time 3D-target verification. This work was supported by the National Institutes of Health under Grant No. R01-CA184173 and a research grant from Varian Medical Systems.

  11. PubChem3D: Conformer generation

    PubMed Central

    2011-01-01

    Background PubChem, an open archive for the biological activities of small molecules, provides search and analysis tools to assist users in locating desired information. Many of these tools focus on the notion of chemical structure similarity at some level. PubChem3D enables similarity of chemical structure 3-D conformers to augment the existing similarity of 2-D chemical structure graphs. It is also desirable to relate theoretical 3-D descriptions of chemical structures to experimental biological activity. As such, it is important to be assured that the theoretical conformer models can reproduce experimentally determined bioactive conformations. In the present study, we investigated the effects of three primary conformer generation parameters (the fragment sampling rate, the energy window size, and the force field variant) upon the accuracy of theoretical conformer models, and determined optimal settings for PubChem3D conformer model generation and conformer sampling. Results Using the software package OMEGA from OpenEye Scientific Software, Inc., theoretical 3-D conformer models were generated for 25,972 small-molecule ligands, whose 3-D structures were experimentally determined. Different values for primary conformer generation parameters were systematically tested to find optimal settings. Employing a greater fragment sampling rate than the default did not improve the accuracy of the theoretical conformer model ensembles. An ever-increasing energy window did increase the overall average accuracy, with rapid convergence observed at 10 kcal/mol and 15 kcal/mol for model building and torsion search, respectively; however, subsequent study showed that an energy threshold of 25 kcal/mol for torsion search resulted in slightly improved results for larger and more flexible structures. Exclusion of coulomb terms from the 94s variant of the Merck molecular force field (MMFF94s) in the torsion search stage gave more accurate conformer models at lower energy windows. Overall average accuracy of reproduction of bioactive conformations was remarkably linear with respect to both non-hydrogen atom count ("size") and effective rotor count ("flexibility"). Using these as independent variables, a regression equation was developed to predict the RMSD accuracy of a theoretical ensemble to reproduce bioactive conformations. The equation was modified to give a minimum RMSD conformer sampling value to help ensure that 90% of the sampled theoretical models should contain at least one conformer within the RMSD sampling value to a "bioactive" conformation. Conclusion Optimal parameters for conformer generation using OMEGA were explored and determined. An equation was developed that provides an RMSD sampling value to use that is based on the relative accuracy to reproduce bioactive conformations. The optimal conformer generation parameters and RMSD sampling values determined are used by the PubChem3D project to generate theoretical conformer models. PMID:21272340
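
    The reported linearity of accuracy in size and flexibility suggests a two-variable least-squares fit. A minimal sketch with hypothetical data and coefficients follows; the actual PubChem3D regression coefficients are published by the project and are not reproduced here:

```python
import numpy as np

# Hypothetical training data: non-hydrogen atom count ("size"),
# effective rotor count ("flexibility"), and ensemble RMSD accuracy (angstroms).
size = np.array([10, 15, 20, 25, 30, 35, 40], dtype=float)
flex = np.array([1, 2, 4, 5, 7, 9, 12], dtype=float)
rmsd = np.array([0.40, 0.55, 0.75, 0.85, 1.05, 1.25, 1.55])

# Fit rmsd ~ b0 + b1*size + b2*flexibility by ordinary least squares.
A = np.column_stack([np.ones_like(size), size, flex])
(b0, b1, b2), *_ = np.linalg.lstsq(A, rmsd, rcond=None)

def predicted_rmsd(n_heavy: float, n_rotor: float) -> float:
    """Predict the RMSD accuracy of a conformer ensemble for a molecule."""
    return b0 + b1 * n_heavy + b2 * n_rotor

print(predicted_rmsd(28, 6))
```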

  12. Application of Deep Learning in GLOBELAND30-2010 Product Refinement

    NASA Astrophysics Data System (ADS)

    Liu, T.; Chen, X.

    2018-04-01

    GlobeLand30, one of the best Global Land Cover (GLC) products at 30-m resolution, has been widely used in many research fields. Due to the significant spectral confusion among different land cover types and the limited textural information of Landsat data, the overall accuracy of GlobeLand30 is about 80 %. Although such accuracy is much higher than that of most other global land cover products, it cannot satisfy the needs of many applications, so an effective method to improve the quality of GlobeLand30 is still needed. The explosive growth of high-resolution satellite imagery and the remarkable performance of deep learning on image classification provide a new opportunity to refine GlobeLand30. However, the performance of deep learning depends on the quality and quantity of training samples as well as the model training strategy. Therefore, this paper 1) proposes an automatic training sample generation method via Google Earth to build a large training sample set; and 2) explores the best training strategy for land cover classification using GoogLeNet (Inception V3), one of the most widely used deep learning networks. The results show that fine-tuning from the first layer of Inception V3 using the rough large sample set is the best strategy. The retrained network was then applied to a selected area of Xi'an city as a case study of GlobeLand30 refinement. The experimental results indicate that the proposed approach, combining deep learning with Google Earth imagery, is a promising solution for further improving the accuracy of GlobeLand30.
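
    "Fine-tuning from the first layer" means every layer of the pre-trained network stays trainable. A minimal transfer-learning sketch with tf.keras follows; the class count and datasets are placeholders, and this is a generic setup rather than the authors' exact configuration:

```python
import tensorflow as tf

NUM_CLASSES = 10  # placeholder for the number of land cover classes

# Load Inception V3 pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))

# "Fine-tuning from the first layer": all layers remain trainable, so the
# whole network is re-optimized on the (rough but large) land cover samples.
base.trainable = True

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# A small learning rate avoids destroying the pre-trained features.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # datasets assumed
```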

  13. Effect of black point on accuracy of LCD displays colorimetric characterization

    NASA Astrophysics Data System (ADS)

    Li, Tong; Xie, Kai; He, Nannan; Ye, Yushan

    2018-03-01

    The black point is the point at which each of the RGB channels' digital drive values is 0. Owing to light leakage in liquid-crystal displays (LCDs), the luminance at the black point is not 0; this phenomenon introduces errors into the colorimetric characterization of LCDs, with the effect being most pronounced at low-luminance drive values. This paper describes the characterization accuracy of the polynomial model method and the effect of the black point on that accuracy, reported as color difference. When the black point is accounted for in the characterization equation, the maximum color difference is 3.246, which is 2.36 lower than when the black point is ignored. The experimental results show that the accuracy of LCD colorimetric characterization can be improved if the effect of the black point is properly eliminated.
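
    One common way to account for the black point is to subtract the measured black-point tristimulus values before fitting the polynomial characterization model, then add them back at prediction time. A minimal sketch under that assumption; the measurement values are placeholders and the second-order polynomial basis is one conventional choice, not necessarily the paper's:

```python
import numpy as np

# Measured tristimulus values at the black point (all RGB drive values = 0);
# nonzero because of LCD light leakage. Placeholder numbers.
XYZ_black = np.array([0.25, 0.28, 0.40])

def poly_terms(r, g, b):
    """Second-order polynomial expansion of normalized drive values."""
    return np.stack([r, g, b, r*g, r*b, g*b, r**2, g**2, b**2,
                     np.ones_like(r)], axis=-1)

def characterize(rgb_train, xyz_train, xyz_black):
    """Fit a polynomial model RGB -> XYZ after black-point subtraction."""
    xyz = xyz_train - xyz_black                    # remove the leakage part
    coef, *_ = np.linalg.lstsq(poly_terms(*rgb_train.T), xyz, rcond=None)
    return coef

def predict(rgb, coef, xyz_black):
    return poly_terms(*rgb) @ coef + xyz_black     # add the black level back

# Synthetic demo: 50 training patches with a linear display plus black offset.
rng = np.random.default_rng(0)
rgb = rng.uniform(0, 1, size=(50, 3))
M = np.array([[41.0, 20.0, 4.0], [21.0, 71.0, 7.0], [2.0, 12.0, 95.0]])
xyz = rgb @ M.T + XYZ_black
coef = characterize(rgb, xyz, XYZ_black)
print(predict(np.array([0.5, 0.5, 0.5]), coef, XYZ_black))
```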

  14. Improving accuracy in household and external travel surveys.

    DOT National Transportation Integrated Search

    2010-01-01

    The Texas Department of Transportation has a comprehensive on-going travel survey program. This research examines areas within two select travel surveys concerning quality control issues involved in data collection and sampling error in the data caus...

  15. [Study on physical deviation factors on laser induced breakdown spectroscopy measurement].

    PubMed

    Wan, Xiong; Wang, Peng; Wang, Qi; Zhang, Qing; Zhang, Zhi-Min; Zhang, Hua-Ming

    2013-10-01

    In order to eliminate the deviation between measured and standard LIBS spectral lines and improve the accuracy of elemental measurement, a study of the physical deviation factors in laser-induced breakdown spectroscopy was carried out. Under identical experimental conditions, the relationship between the ablated-hole effect and spectral wavelength was tested, and the Stark broadening of Mg plasma laser-induced breakdown spectra was studied for sampling time delays from 1.00 to 3.00 μs; in this way, physical deviation influences such as the ablated-hole effect and Stark broadening could be quantified while collecting the spectrum. The results and the analysis method can also be applied to other laser-induced breakdown spectroscopy systems, which is of great significance for improving the accuracy of LIBS elemental measurement and for research on the optimum sampling time delay of LIBS.

  16. [Quantitative surface analysis of Pt-Co, Cu-Au and Cu-Ag alloy films by XPS and AES].

    PubMed

    Li, Lian-Zhong; Zhuo, Shang-Jun; Shen, Ru-Xiang; Qian, Rong; Gao, Jie

    2013-11-01

    In order to improve the quantitative analysis accuracy of AES, we combined XPS with AES and studied a method to reduce the error of AES quantitative analysis. Pt-Co, Cu-Au and Cu-Ag binary alloy thin films were selected as samples, and XPS was used to correct the AES quantitative results by adjusting the Auger sensitivity factors so that the quantitative results of the two techniques agreed more closely. We then verified the accuracy of AES quantitative analysis with the revised sensitivity factors using other samples with different composition ratios; the results showed that the corrected relative sensitivity factors reduce the error of AES quantitative analysis to less than 10%. Peak definition is difficult in integral-spectrum AES analysis, because choosing the starting and ending points when determining the characteristic Auger peak intensity area involves great uncertainty. To make the analysis easier, we also processed the data in differential-spectrum form, performed quantitative analysis on the basis of peak-to-peak height instead of peak area, corrected the relative sensitivity factors, and again verified the accuracy of quantitative analysis with samples of different composition ratios. The analytical error of AES quantitative analysis was thereby reduced to less than 9%. These results show that the accuracy of AES quantitative analysis can be greatly improved by combining XPS with AES to correct the Auger sensitivity factors, since matrix effects are taken into account; the good consistency obtained proves the feasibility of this method.

  17. Multi-element stochastic spectral projection for high quantile estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, Jordan, E-mail: jordan.ko@mac.com; Garnier, Josselin

    2013-06-15

    We investigate quantile estimation by a multi-element generalized Polynomial Chaos (gPC) metamodel, where the exact numerical model is approximated by complementary metamodels in overlapping domains that mimic the model's exact response. The gPC metamodel is constructed by the non-intrusive stochastic spectral projection approach, and function evaluation on the gPC metamodel can be considered as essentially free. Thus, a large number of Monte Carlo samples from the metamodel can be used to estimate the α-quantile, for moderate values of α. As the gPC metamodel is an expansion about the means of the inputs, its accuracy may worsen away from these mean values, where the extreme events may occur. By increasing the approximation accuracy of the metamodel, we may eventually improve the accuracy of quantile estimation, but this is very expensive. A multi-element approach is therefore proposed, combining a global metamodel in the standard normal space with supplementary local metamodels constructed in bounded domains about the design points corresponding to the extreme events. To improve the accuracy and to minimize the sampling cost, sparse-tensor and anisotropic-tensor quadratures are tested in addition to the full-tensor Gauss quadrature in the construction of local metamodels; different bounds of the gPC expansion are also examined. The global and local metamodels are combined in the multi-element gPC (MEgPC) approach, and it is shown that MEgPC can be more accurate than Monte Carlo or importance sampling methods for high quantile estimation for input dimensions roughly below N=8, a limit that is very much case- and α-dependent.
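
    A minimal one-dimensional sketch of the underlying idea (a single global metamodel, not the multi-element scheme): project g(X) = exp(X), with X standard normal, onto probabilists' Hermite polynomials by Gauss quadrature, then draw cheap Monte Carlo samples from the metamodel to estimate a high quantile. The model choice and orders are illustrative assumptions:

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

g = np.exp              # stand-in "exact model"; expensive in practice
P = 8                   # gPC expansion order

# Non-intrusive spectral projection: c_k = E[g(X) He_k(X)] / k! for X ~ N(0,1),
# with the expectation computed by Gauss-HermiteE quadrature
# (weight exp(-x^2/2); the weights sum to sqrt(2*pi)).
x_q, w_q = He.hermegauss(30)
norm = np.sqrt(2.0 * np.pi)
coeffs = np.array([
    np.sum(w_q * g(x_q) * He.hermeval(x_q, np.eye(P + 1)[k])) / (norm * factorial(k))
    for k in range(P + 1)
])

# Evaluating the metamodel is essentially free, so a huge Monte Carlo sample
# can be drawn from it to estimate a high quantile.
rng = np.random.default_rng(1)
samples = He.hermeval(rng.standard_normal(2_000_000), coeffs)
print("gPC 99.9% quantile:  ", np.quantile(samples, 0.999))
print("exact 99.9% quantile:", np.exp(3.0902))  # Phi^-1(0.999) ~ 3.0902
```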

  18. Intelligent diagnosis of short hydraulic signal based on improved EEMD and SVM with few low-dimensional training samples

    NASA Astrophysics Data System (ADS)

    Zhang, Meijun; Tang, Jian; Zhang, Xiaoming; Zhang, Jiaojiao

    2016-03-01

    The highly accurate classification ability of an intelligent diagnosis method often requires a large number of training samples with high-dimensional eigenvectors, and the characteristics of the signal must be extracted accurately. Although the existing EMD (empirical mode decomposition) and EEMD (ensemble empirical mode decomposition) are suitable for processing non-stationary and non-linear signals, their decomposition accuracy becomes very poor for short signals such as hydraulic impact signals. An improved EEMD is proposed specifically for short hydraulic impact signals. The improvements of this new EEMD are reflected in four aspects: self-adaptive de-noising based on EEMD, signal extension based on SVM (support vector machine), extreme-point center fitting based on cubic spline interpolation, and pseudo-component exclusion based on cross-correlation analysis. After the energy eigenvector is extracted from the result of the improved EEMD, fault pattern recognition based on an SVM with a small number of low-dimensional training samples is studied. Finally, the diagnostic ability of the improved EEMD+SVM method is compared with the EEMD+SVM and EMD+SVM methods; its diagnostic accuracy is distinctly higher than that of the other two methods whether the dimension of the eigenvectors is low or high. The improved EEMD is well suited to the decomposition of short signals, such as hydraulic impact signals, and its combination with SVM has high capability for diagnosing hydraulic impact faults.
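
    The energy-eigenvector step is easy to reproduce: normalize the per-IMF energies into a low-dimensional feature vector and train an SVM on a few such samples. A minimal sketch; the "IMFs" here are random placeholders standing in for the improved-EEMD output:

```python
import numpy as np
from sklearn.svm import SVC

def energy_eigenvector(imfs):
    """Normalized per-IMF energy: a low-dimensional feature vector."""
    energies = np.sum(np.asarray(imfs) ** 2, axis=1)
    return energies / np.linalg.norm(energies)

rng = np.random.default_rng(0)

def fake_imfs(profile):
    """Placeholder for improved-EEMD output: 5 'IMFs' of 256 samples whose
    energy distribution across IMFs is shaped by `profile`."""
    return rng.normal(size=(len(profile), 256)) * np.asarray(profile)[:, None]

# Two simulated fault classes differing in how energy spreads over the IMFs.
profiles = {0: [3, 2, 1, 0.5, 0.2], 1: [0.2, 0.5, 1, 2, 3]}
X, y = [], []
for label, prof in profiles.items():
    for _ in range(10):                 # few low-dimensional training samples
        X.append(energy_eigenvector(fake_imfs(prof)))
        y.append(label)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```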

  19. SKATE: a docking program that decouples systematic sampling from scoring.

    PubMed

    Feng, Jianwen A; Marshall, Garland R

    2010-11-15

    SKATE is a docking prototype that decouples systematic sampling from scoring. This novel approach removes any interdependence between sampling and scoring functions to achieve better sampling and, thus, improves docking accuracy. SKATE systematically samples a ligand's conformational, rotational and translational degrees of freedom, as constrained by a receptor pocket, to find sterically allowed poses. Efficient systematic sampling is achieved by pruning the combinatorial tree using aggregate assembly, discriminant analysis, adaptive sampling, radial sampling, and clustering. Because systematic sampling is decoupled from scoring, the poses generated by SKATE can be ranked by any published, or in-house, scoring function. To test the performance of SKATE, ligands from the Astex/CDCC set, the Surflex set, and the Vertex set, a total of 266 complexes, were redocked to their respective receptors. The results show that SKATE was able to sample poses within 2 Å RMSD of the native structure for 98, 95, and 98% of the cases in the Astex/CDCC, Surflex, and Vertex sets, respectively. Cross-docking accuracy of SKATE was also assessed by docking 10 ligands to thymidine kinase and 73 ligands to cyclin-dependent kinase. 2010 Wiley Periodicals, Inc.

  20. Empirical evaluation of data normalization methods for molecular classification.

    PubMed

    Huang, Huei-Chung; Qin, Li-Xuan

    2018-01-01

    Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers, an increasingly important application of microarrays in the era of personalized medicine. In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated in independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy.
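
    A practical safeguard related to the cross-validation caveat above is to fit any normalization step inside each training fold only, e.g., via a pipeline, so no information from held-out samples leaks into preprocessing. A minimal sketch on pure-noise data, where any accuracy clearly above chance would indicate bias (synthetic data, not the study's simulations):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))      # e.g., expression-like features (synthetic)
y = rng.integers(0, 2, size=100)     # random labels: true accuracy is 0.5

# Normalization is re-fitted on each training fold only: no information
# from the held-out fold leaks into the preprocessing step.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)
print("CV accuracy on pure noise: %.2f (should hover around 0.5)" % scores.mean())
```

    Even with this nesting, the study's point stands: when handling effects confound the comparison, cross-validation can remain over-optimistic, so independent test data is the safer yardstick.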

  1. An Evaluation of Explicit Receptor Flexibility in Molecular Docking Using Molecular Dynamics and Torsion Angle Molecular Dynamics.

    PubMed

    Armen, Roger S; Chen, Jianhan; Brooks, Charles L

    2009-10-13

    Incorporating receptor flexibility into molecular docking should improve results for flexible proteins. However, the incorporation of explicit all-atom flexibility with molecular dynamics for the entire protein chain may also introduce significant error and "noise" that could decrease docking accuracy and deteriorate the ability of a scoring function to rank native-like poses. We address this apparent paradox by comparing the success of several flexible receptor models in cross-docking and multiple receptor ensemble docking for p38α mitogen-activated protein (MAP) kinase. Explicit all-atom receptor flexibility has been incorporated into a CHARMM-based molecular docking method (CDOCKER) using both molecular dynamics (MD) and torsion angle molecular dynamics (TAMD) for the refinement of predicted protein-ligand binding geometries. These flexible receptor models have been evaluated, and the accuracy and efficiency of TAMD sampling is directly compared to MD sampling. Several flexible receptor models are compared, encompassing flexible side chains, flexible loops, multiple flexible backbone segments, and treatment of the entire chain as flexible. We find that although including side chain and some backbone flexibility is required for improved docking accuracy as expected, docking accuracy also diminishes as additional and unnecessary receptor flexibility is included into the conformational search space. Ensemble docking results demonstrate that including protein flexibility leads to improved agreement with binding data for 227 active compounds. This comparison also demonstrates that a flexible receptor model enriches high affinity compound identification without significantly increasing the number of false positives from low affinity compounds.

  2. An Evaluation of Explicit Receptor Flexibility in Molecular Docking Using Molecular Dynamics and Torsion Angle Molecular Dynamics

    PubMed Central

    Armen, Roger S.; Chen, Jianhan; Brooks, Charles L.

    2009-01-01

    Incorporating receptor flexibility into molecular docking should improve results for flexible proteins. However, the incorporation of explicit all-atom flexibility with molecular dynamics for the entire protein chain may also introduce significant error and “noise” that could decrease docking accuracy and deteriorate the ability of a scoring function to rank native-like poses. We address this apparent paradox by comparing the success of several flexible receptor models in cross-docking and multiple receptor ensemble docking for p38α mitogen-activated protein (MAP) kinase. Explicit all-atom receptor flexibility has been incorporated into a CHARMM-based molecular docking method (CDOCKER) using both molecular dynamics (MD) and torsion angle molecular dynamics (TAMD) for the refinement of predicted protein-ligand binding geometries. These flexible receptor models have been evaluated, and the accuracy and efficiency of TAMD sampling is directly compared to MD sampling. Several flexible receptor models are compared, encompassing flexible side chains, flexible loops, multiple flexible backbone segments, and treatment of the entire chain as flexible. We find that although including side chain and some backbone flexibility is required for improved docking accuracy as expected, docking accuracy also diminishes as additional and unnecessary receptor flexibility is included into the conformational search space. Ensemble docking results demonstrate that including protein flexibility leads to improved agreement with binding data for 227 active compounds. This comparison also demonstrates that a flexible receptor model enriches high affinity compound identification without significantly increasing the number of false positives from low affinity compounds. PMID:20160879

  3. Word associations contribute to machine learning in automatic scoring of degree of emotional tones in dream reports.

    PubMed

    Amini, Reza; Sabourin, Catherine; De Koninck, Joseph

    2011-12-01

    Scientific study of dreams requires the most objective methods to reliably analyze dream content. In this context, artificial intelligence should prove useful for an automatic and non-subjective scoring technique. Past research has utilized word-search and emotional-affiliation methods to model and automatically match human judges' scoring of dream reports' negative emotional tone. The current study added word associations to improve the model's accuracy. Word associations were established using words' frequency of co-occurrence with their defining words as found in a dictionary and an encyclopedia. It was hypothesized that this addition would facilitate the machine learning model and improve its predictive power beyond that of previous models. With a sample of 458 dreams, this model demonstrated an improvement in accuracy from 59% to 63% (kappa=.485) on the negative emotional tone scale and, for the first time, reached an accuracy of 77% (kappa=.520) on the positive scale. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

    Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model was itself key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially in GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses in given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computation accuracy.
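
    A kernel extreme learning machine with an RBF kernel is, up to the regularization convention, the same closed-form estimator as kernel ridge regression, so the surrogate idea can be sketched with standard tools. A minimal sketch, with a toy stand-in for the expensive simulation and a grid search standing in for the "parameter optimization" found to matter in the study:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

def simulation(x):
    """Toy stand-in for an expensive groundwater transport simulation."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(60, 2))   # design of training samples
y_train = simulation(X_train)

# Grid-search the regularization and kernel width; parameter optimization
# significantly improved the surrogate's accuracy in the study.
search = GridSearchCV(
    KernelRidge(kernel="rbf"),
    {"alpha": [1e-3, 1e-2, 1e-1], "gamma": [0.5, 1.0, 2.0, 4.0]},
    cv=5,
)
surrogate = search.fit(X_train, y_train)

# The fitted surrogate now replaces the simulation inside the
# simulation-optimization iterations at negligible cost.
X_test = rng.uniform(-1, 1, size=(1000, 2))
err = np.abs(surrogate.predict(X_test) - simulation(X_test))
print("max surrogate error:", err.max())
```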

  5. Airborne particulate matter (PM) filter analysis and modeling by total reflection X-ray fluorescence (TXRF) and X-ray standing wave (XSW).

    PubMed

    Borgese, L; Salmistraro, M; Gianoncelli, A; Zacco, A; Lucchini, R; Zimmerman, N; Pisani, L; Siviero, G; Depero, L E; Bontempi, E

    2012-01-30

    This work is presented as an improvement of a recently introduced method for airborne particulate matter (PM) filter analysis [1]. X-ray standing wave (XSW) and total reflection X-ray fluorescence (TXRF) measurements were performed with new dedicated laboratory instrumentation. The main advantage of performing both XSW and TXRF is the possibility to determine the nature of the sample: a small-droplet dry residue, a thin-film-like sample, or a bulk sample. Another advantage is the possibility to select the angle of total reflection for the TXRF measurements. Finally, the possibility to switch the X-ray source allows lighter and heavier elements to be measured with greater accuracy (by changing the X-ray anode, for example from Mo to Cu). The aim of the present study is to lay the theoretical foundation of the newly proposed method for quantitative analysis of airborne PM filters, improving the accuracy and efficiency of quantification by means of an external standard. The theoretical model presented and discussed demonstrates that airborne PM filters can be considered as thin layers. A set of reference samples was prepared in the laboratory and used to obtain a calibration curve. Our results demonstrate that the proposed method for quantitative analysis of airborne PM filters is affordable and reliable, without the need to digest filters to obtain quantitative chemical analysis, and that the use of XSW improves the accuracy of TXRF analysis. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Methods for measuring water activity (aw) of foods and its applications to moisture sorption isotherm studies.

    PubMed

    Zhang, Lida; Sun, Da-Wen; Zhang, Zhihang

    2017-03-24

    Moisture sorption isotherms are commonly determined by the saturated salt slurry method, which suffers from long measurement times, cumbersome labor, and microbial deterioration of samples. Thus, a novel method, the aw measurement (AWM) method, has been developed to overcome these drawbacks. The fundamentals and applications of this fast method are introduced with respect to its typical operational steps, the variety of equipment set-ups, and the samples to which it has been applied. Its rapidness and reliability are evaluated by comparison with conventional methods. This review also discusses factors impairing measurement precision and accuracy, including inappropriate choice of pre-drying/wetting techniques and unachieved moisture uniformity in samples due to inadequate equilibration time. This analysis and the corresponding suggestions can facilitate an improved AWM method with more satisfactory accuracy and time cost.

  7. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    PubMed Central

    Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.

    2014-01-01

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm3 FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations. PMID:25186406
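
    The acquisition patterns described above are easy to prototype as a phase-encode mask. A minimal sketch for one dimension of a Cartesian grid, with the fractions chosen to echo a 10%-sampled scheme concentrated near the k-space center (the exact densities in the paper differ by scheme):

```python
import numpy as np

def variable_density_mask(n_lines=144, frac=0.10, central_frac=0.85,
                          central_width=0.2, rng=None):
    """Boolean mask over phase-encode lines: `frac` of all lines sampled,
    with `central_frac` of those samples drawn from the central
    `central_width` portion of k-space."""
    rng = rng or np.random.default_rng()
    n_sample = int(round(frac * n_lines))
    n_central = int(round(central_frac * n_sample))
    center, half = n_lines // 2, int(round(central_width * n_lines / 2))
    central = np.arange(center - half, center + half)
    outer = np.setdiff1d(np.arange(n_lines), central)
    picked = np.concatenate([
        rng.choice(central, size=n_central, replace=False),
        rng.choice(outer, size=n_sample - n_central, replace=False),
    ])
    mask = np.zeros(n_lines, dtype=bool)
    mask[picked] = True
    return mask

mask = variable_density_mask(rng=np.random.default_rng(0))
print(mask.sum(), "of", mask.size, "phase-encode lines sampled")
```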

  8. Analytical and Clinical Performance of Blood Glucose Monitors

    PubMed Central

    Boren, Suzanne Austin; Clarke, William L.

    2010-01-01

    Background The objective of this study was to understand the level of performance of blood glucose monitors as assessed in the published literature. Methods Medline from January 2000 to October 2009 and reference lists of included articles were searched to identify eligible studies. Key information was abstracted from eligible studies: blood glucose meters tested, blood sample, meter operators, setting, sample of people (number, diabetes type, age, sex, and race), duration of diabetes, years using a glucose meter, insulin use, recommendations followed, performance evaluation measures, and specific factors affecting the accuracy evaluation of blood glucose monitors. Results Thirty-one articles were included in this review. Articles were categorized as review articles of blood glucose accuracy (6 articles), original studies that reported the performance of blood glucose meters in laboratory settings (14 articles) or clinical settings (9 articles), and simulation studies (2 articles). A variety of performance evaluation measures were used in the studies. The authors did not identify any studies that demonstrated a difference in clinical outcomes. Examples of analytical tools used in the description of accuracy (e.g., correlation coefficient, linear regression equations, and International Organization for Standardization standards) and how these traditional measures can complicate the achievement of target blood glucose levels for the patient were presented. The benefits of using error grid analysis to quantify the clinical accuracy of patient-determined blood glucose values were discussed. Conclusions When examining blood glucose monitor performance in the real world, it is important to consider if an improvement in analytical accuracy would lead to improved clinical outcomes for patients. There are several examples of how analytical tools used in the description of self-monitoring of blood glucose accuracy could be irrelevant to treatment decisions. PMID:20167171

  9. Genomic selection models double the accuracy of predicted breeding values for bacterial cold water disease resistance compared to a traditional pedigree-based model in rainbow trout aquaculture.

    PubMed

    Vallejo, Roger L; Leeds, Timothy D; Gao, Guangtu; Parsons, James E; Martin, Kyle E; Evenhuis, Jason P; Fragomeni, Breno O; Wiens, Gregory D; Palti, Yniv

    2017-02-01

    Previously, we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative that enables exploitation of within-family genetic variation. We compared three GS models [single-step genomic best linear unbiased prediction (ssGBLUP), weighted ssGBLUP (wssGBLUP), and BayesB] to predict genomic-enabled breeding values (GEBV) for BCWD resistance in a commercial rainbow trout population, and compared the accuracy of GEBV to traditional estimates of breeding values (EBV) from a pedigree-based BLUP (P-BLUP) model. We also assessed the impact of sampling design on the accuracy of GEBV predictions. For these comparisons, we used BCWD survival phenotypes recorded on 7893 fish from 102 families, of which 1473 fish from 50 families had genotypes [57 K single nucleotide polymorphism (SNP) array]. Naïve siblings of the training fish (n = 930 testing fish) were genotyped to predict their GEBV and mated to produce 138 progeny testing families. In the following generation, 9968 progeny were phenotyped to empirically assess the accuracy of GEBV predictions made on their non-phenotyped parents. The accuracy of GEBV from all tested GS models was substantially higher than that of the P-BLUP model EBV. The highest increase in accuracy relative to the P-BLUP model was achieved with BayesB (97.2 to 108.8%), followed by wssGBLUP at iteration 2 (94.4 to 97.1%) and 3 (88.9 to 91.2%) and ssGBLUP (83.3 to 85.3%). Reducing the training sample size to n = ~1000 had no negative impact on the accuracy (0.67 to 0.72), but with n = ~500 the accuracy dropped to 0.53 to 0.61 if the training and testing fish were full-sibs, and even substantially lower, to 0.22 to 0.25, when they were not full-sibs. Using progeny performance data, we showed that the accuracy of genomic predictions is substantially higher than estimates obtained from the traditional pedigree-based BLUP model for BCWD resistance. Overall, we found that using a much smaller training sample size compared to similar studies in livestock, GS can substantially improve the selection accuracy and genetic gains for this trait in a commercial rainbow trout breeding population.

  10. Multicategory reclassification statistics for assessing improvements in diagnostic accuracy

    PubMed Central

    Li, Jialiang; Jiang, Binyan; Fine, Jason P.

    2013-01-01

    In this paper, we extend the definitions of the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI) in the context of multicategory classification. Both measures were proposed in Pencina and others (2008. Evaluating the added predictive ability of a new marker: from area under the receiver operating characteristic (ROC) curve to reclassification and beyond. Statistics in Medicine 27, 157–172) as numeric characterizations of accuracy improvement for binary diagnostic tests and were shown to have certain advantage over analyses based on ROC curves or other regression approaches. Estimation and inference procedures for the multiclass NRI and IDI are provided in this paper along with necessary asymptotic distributional results. Simulations are conducted to study the finite-sample properties of the proposed estimators. Two medical examples are considered to illustrate our methodology. PMID:23197381
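
    For reference, the binary-outcome definitions being generalized (from Pencina and others, 2008) take the following standard forms, sketched here as background, with D the event indicator and "up"/"down" denoting reclassification into a higher or lower risk category under the new marker:

```latex
\mathrm{NRI} = \bigl[ P(\text{up} \mid D=1) - P(\text{down} \mid D=1) \bigr]
             + \bigl[ P(\text{down} \mid D=0) - P(\text{up} \mid D=0) \bigr]

\mathrm{IDI} = \bigl[ \bar{p}_{\text{new}}(D=1) - \bar{p}_{\text{new}}(D=0) \bigr]
             - \bigl[ \bar{p}_{\text{old}}(D=1) - \bar{p}_{\text{old}}(D=0) \bigr]
```

    Here \bar{p}_{m}(D=d) denotes the mean predicted risk under model m among subjects with outcome d; the paper's contribution is extending these quantities, with estimation and inference procedures, to more than two outcome categories.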

  11. A deformable particle-in-cell method for advective transport in geodynamic modeling

    NASA Astrophysics Data System (ADS)

    Samuel, Henri

    2018-06-01

    This paper presents an improvement of the particle-in-cell method commonly used in geodynamic modeling for solving pure advection of sharply varying fields. Standard particle-in-cell approaches use particle kernels to transfer the information carried by the Lagrangian particles to/from the Eulerian grid. These kernels are generally one-dimensional and non-evolutive, which leads to under- and over-sampling of the spatial domain by the particles. This reduces the accuracy of the solution and may require a prohibitive number of particles to maintain acceptable accuracy. The newly proposed approach relies on deformable kernels that account for the strain history in the vicinity of the particles. It results in significantly better spatial sampling by the particles, and hence a much more accurate numerical solution, for a reasonable extra computational cost. Various 2D tests were conducted to compare the performance of the deformable particle-in-cell method with the standard particle-in-cell approach; these consistently show that, at comparable accuracy, the deformable particle-in-cell method is four to six times more efficient than standard particle-in-cell approaches. The method could be adapted to 3D space and generalized to cases including motionless transport.
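
    For background, the fixed particle-to-grid transfer that the deformable kernels improve on is typically a cloud-in-cell (CIC) kernel: each particle contributes to its two nearest grid nodes with linear weights. A minimal 1D sketch of that baseline (not the paper's deformable scheme), on a periodic domain:

```python
import numpy as np

def cic_gather(x_particles, values, n_cells, length=1.0):
    """Transfer particle-carried values to a 1D periodic grid with the
    standard (fixed) cloud-in-cell kernel; dividing the weighted sums by
    the accumulated weights yields the grid estimate of the field."""
    field = np.zeros(n_cells)
    weight = np.zeros(n_cells)
    dx = length / n_cells
    s = x_particles / dx - 0.5             # position in cell-center coordinates
    i = np.floor(s).astype(int) % n_cells  # left node (periodic wrap)
    f = s - np.floor(s)                    # fractional distance to left node
    np.add.at(field, i, (1 - f) * values)
    np.add.at(field, (i + 1) % n_cells, f * values)
    np.add.at(weight, i, 1 - f)
    np.add.at(weight, (i + 1) % n_cells, f)
    return field / np.maximum(weight, 1e-12)

x = np.random.default_rng(0).uniform(0, 1, 10_000)
print(cic_gather(x, np.sin(2 * np.pi * x), n_cells=32))
```

    Under strong shear, this fixed kernel is exactly what produces the under- and over-sampled cells described above; the deformable kernels stretch with the local strain history to avoid that failure mode.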

  12. Size at emergence improves accuracy of age estimates in forensically-useful beetle Creophilus maxillosus L. (Staphylinidae).

    PubMed

    Matuszewski, Szymon; Frątczak-Łagiewska, Katarzyna

    2018-02-05

    Insects colonizing human or animal cadavers may be used to estimate the post-mortem interval (PMI), usually by aging larvae or pupae sampled at a crime scene. The accuracy of insect age estimates in a forensic context is reduced by large intraspecific variation in insect development time. Here we test the concept that insect size at emergence may be used to predict insect physiological age and, accordingly, to improve the accuracy of age estimates in forensic entomology. Using the results of a laboratory study on the development of the forensically useful beetle Creophilus maxillosus (Linnaeus, 1758) (Staphylinidae), we demonstrate that its physiological age at emergence [i.e., the thermal summation value (K) needed for emergence] falls as beetle size increases. In a validation study, K estimated from adult insect size was significantly closer to the true K than K from the general thermal summation model. Using beetle length at emergence as a predictor variable, with male- or female-specific models regressing K against beetle length, gave the most accurate predictions of age. These results demonstrate that the size of C. maxillosus at emergence improves the accuracy of age estimates in a forensic context.

  13. Dissolved oxygen content prediction in crab culture using a hybrid intelligent method

    PubMed Central

    Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang

    2016-01-01

    A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization (IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds. PMID:27270206

  14. Dissolved oxygen content prediction in crab culture using a hybrid intelligent method.

    PubMed

    Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang

    2016-06-08

    A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization (IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds.

  15. A multi-agency nutrient dataset used to estimate loads, improve monitoring design, and calibrate regional nutrient SPARROW models

    USGS Publications Warehouse

    Saad, David A.; Schwarz, Gregory E.; Robertson, Dale M.; Booth, Nathaniel

    2011-01-01

    Stream-loading information was compiled from federal, state, and local agencies, and selected universities as part of an effort to develop regional SPAtially Referenced Regressions On Watershed attributes (SPARROW) models to help describe the distribution, sources, and transport of nutrients in streams throughout much of the United States. After screening, 2,739 sites, sampled by 73 agencies, were identified as having suitable data for calculating the long-term mean annual nutrient loads required for SPARROW model calibration. These sites had a wide range of nutrient concentrations, loads, and yields, and of environmental characteristics in their basins. An analysis of the accuracy of the load estimates relative to site attributes indicated that accuracy improves with increases in the number of observations, the proportion of uncensored data, and the variability in flow on observation days, whereas accuracy declines with increases in the root mean square error of the water-quality model, the flow-bias ratio, the number of days between samples, and the variability in daily streamflow for the prediction period, and if the load estimate has been detrended. Based on the compiled data, all areas of the country had recent declines in the number of sites with sufficient water-quality data to compute accurate annual loads and support regional modeling analyses. These declines were caused by decreases in the number of sites being sampled and by data not being entered in readily accessible databases.

  16. Improvements to sample processing and measurement to enable more widespread environmental application of tritium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moran, James; Alexander, Thomas; Aalseth, Craig

    2017-08-01

    Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. We present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. This enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of T behavior in the environment.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Steve E.

    The accuracy and precision of a new Isolok sampler configuration were evaluated using a recirculation flow loop. The evaluation was performed using two slurry simulants of Hanford high-level tank waste. Sample concentrations were compared to reference samples collected simultaneously by a two-stage Vezin sampler. The capability of the Isolok sampler to collect samples that accurately reflect the contents of the test loop improved: biases between the Isolok and Vezin samples were greatly reduced for fast-settling particles.

  18. Development of a near-infrared spectroscopic system for monitoring urine glucose level for the use of long-term home healthcare

    NASA Astrophysics Data System (ADS)

    Tanaka, Shinobu; Hayakawa, Yuuto; Ogawa, Mitsuhiro; Yamakoshi, Ken-ichi

    2010-08-01

    We have been developing a new technique for measuring urine glucose concentration using near-infrared spectroscopy (NIRS) in conjunction with the Partial Least Squares (PLS) method. In a previous study, we reported results of preliminary experiments assessing the feasibility of this method using an FT-IR spectrometer. In this study, considering the practicability of the system, a flow-through cell with an optical path length of 10 mm was newly introduced. The accuracy of the system was verified in preliminary experiments using urine samples. The results clearly demonstrated that the present method is capable of predicting individual urine glucose levels with reasonable accuracy (minimum standard error of prediction: SEP = 22.3 mg/dl) and appears to be a useful means for long-term home healthcare. However, the mean SEP obtained for urine samples from ten subjects was not satisfactorily low (53.7 mg/dl). To improve the accuracy, (1) the mechanical stability of the optical system should be improved, (2) the method for normalizing the spectrum should be reconsidered, and (3) the number of subjects should be increased.
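
    The spectroscopy-plus-PLS pipeline is standard and easy to sketch. A minimal example on synthetic "spectra", computing the standard error of prediction (SEP) quoted above; the spectral shape and glucose range are illustrative assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 120, 400

# Synthetic "NIR spectra": a glucose-correlated absorption band plus noise.
glucose = rng.uniform(0, 500, n_samples)                       # mg/dl
band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 180) / 15) ** 2)
spectra = (glucose[:, None] * band * 1e-4
           + rng.normal(0, 0.005, (n_samples, n_wavelengths)))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, glucose, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
residuals = y_te - pls.predict(X_te).ravel()

# Standard error of prediction (SEP): standard deviation of the residuals.
print("SEP = %.1f mg/dl" % residuals.std(ddof=1))
```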

  19. Kalman/Map filtering-aided fast normalized cross correlation-based Wi-Fi fingerprinting location sensing.

    PubMed

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-11-13

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.
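
    The core of normalized cross correlation fingerprinting can be sketched in a few lines: compare the on-line RSS vector against each reference point's stored fingerprint and return the best-correlated reference location. This is a simplified form of the idea, not the authors' optimized FNCC implementation; the radio map below is a toy:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two mean-centered RSS vectors."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def locate(online_rss, fingerprints, positions):
    """Return the reference position whose fingerprint best matches the
    on-line RSS vector (access points in a fixed, shared order)."""
    scores = [ncc(online_rss, fp) for fp in fingerprints]
    return positions[int(np.argmax(scores))]

# Toy radio map: 3 reference points x 4 access points (dBm), with positions.
fingerprints = np.array([[-40, -62, -71, -55],
                         [-58, -45, -66, -60],
                         [-70, -68, -48, -52]], dtype=float)
positions = [(0.0, 0.0), (5.0, 0.0), (5.0, 5.0)]
print(locate(np.array([-57.0, -47.0, -64.0, -61.0]), fingerprints, positions))
```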

  20. Kalman/Map Filtering-Aided Fast Normalized Cross Correlation-Based Wi-Fi Fingerprinting Location Sensing

    PubMed Central

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-01-01

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results. PMID:24233027

  1. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce

    PubMed Central

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy, and the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform. The PSO algorithm was used to optimize the BP neural network’s initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987

  2. Delayed matching to two-picture samples by individuals with and without disabilities: an analysis of the role of naming.

    PubMed

    Gutowski, Stanley J; Stromer, Robert

    2003-01-01

    Delayed matching to complex, two-picture samples (e.g., cat-dog) may be improved when the samples occasion differential verbal behavior. In Experiment 1, individuals with mental retardation matched picture comparisons to identical single-picture samples or to two-picture samples, one of which was identical to a comparison. Accuracy scores were typically high on single-picture trials under both simultaneous and delayed matching conditions. Scores on two-picture trials were also high during the simultaneous condition but were lower during the delay condition. However, scores improved on delayed two-picture trials when each of the sample pictures was named aloud before comparison responding. Experiment 2 replicated these results with preschoolers with typical development and a youth with mental retardation. Sample naming also improved the preschoolers' matching when the samples were pairs of spoken names and the correct comparison picture matched one of the names. Collectively, the participants could produce the verbal behavior that might have improved performance, but typically did not do so unless the procedure required it. The success of the naming intervention recommends it for improving the observing and remembering of multiple elements of complex instructional stimuli.

  3. An empirical analysis of the quantitative effect of data when fitting quadratic and cubic polynomials

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    A study is made of the extent to which the size of the sample affects the accuracy of a quadratic or a cubic polynomial approximation of an experimentally observed quantity, and the trend in the improvement of the approximation accuracy as a function of sample size is established. The task is made possible through a simulated analysis carried out by the Monte Carlo method, in which data are simulated using several transcendental or algebraic functions as models. Varying amounts of contaminated data are fitted to either quadratic or cubic polynomials, and the behavior of the mean-squared error of the residual variance is determined as a function of sample size. Results indicate that the effect of the size of the sample is significant only for relatively small sizes and diminishes drastically for moderate and large amounts of experimental data.
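
    The simulation design is easy to reproduce: contaminate a transcendental model with noise, fit a quadratic, and track the mean-squared error of the residual-variance estimate as the sample grows. A minimal sketch, with sin(x) as one of the transcendental models and noise levels chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_of_residual_variance(n, trials=2000, noise=0.1):
    """Monte Carlo estimate of the mean-squared error of the residual
    variance estimator when a quadratic is fitted to n noisy samples of
    sin(x) on [0, 1]."""
    true_var = noise ** 2
    errs = []
    for _ in range(trials):
        x = rng.uniform(0, 1, n)
        y = np.sin(x) + rng.normal(0, noise, n)   # contaminated data
        coef = np.polyfit(x, y, deg=2)
        rss = np.sum((y - np.polyval(coef, x)) ** 2)
        s2 = rss / (n - 3)                        # residual variance estimate
        errs.append((s2 - true_var) ** 2)
    return np.mean(errs)

for n in (5, 10, 20, 50, 100, 500):
    print(n, mse_of_residual_variance(n))
# The error falls steeply at small n and flattens for moderate and large
# samples, matching the trend the study reports.
```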

  4. Applying active learning to supervised word sense disambiguation in MEDLINE.

    PubMed

    Chen, Yukun; Cao, Hongxin; Mei, Qiaozhu; Zheng, Kai; Xu, Hua

    2013-01-01

    This study aimed to assess whether active learning strategies can be integrated with supervised word sense disambiguation (WSD) methods, thus reducing the number of annotated samples while keeping or improving the quality of disambiguation models. We developed support vector machine (SVM) classifiers to disambiguate 197 ambiguous terms and abbreviations in the MSH WSD collection. Three different uncertainty sampling-based active learning algorithms were implemented with the SVM classifiers and were compared with a passive learner (PL) based on random sampling. For each ambiguous term and each learning algorithm, a learning curve that plots the accuracy computed from the test set as a function of the number of annotated samples used in the model was generated. The area under the learning curve (ALC) was used as the primary metric for evaluation. Our experiments demonstrated that active learners (ALs) significantly outperformed the PL, showing better performance for 177 out of 197 (89.8%) WSD tasks. Further analysis showed that to achieve an average accuracy of 90%, the PL needed 38 annotated samples, while the ALs needed only 24, a 37% reduction in annotation effort. Moreover, we analyzed cases where active learning algorithms did not achieve superior performance and identified three causes: (1) poor models in the early learning stage; (2) easy WSD cases; and (3) difficult WSD cases, which provide useful insight for future improvements. This study demonstrated that integrating active learning strategies with supervised WSD methods could effectively reduce annotation cost and improve the disambiguation models.
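
    Uncertainty sampling of the kind compared above can be sketched compactly: at each round the learner queries the unlabeled example closest to the SVM decision boundary. The toy dataset and query budget below are assumptions; the study used the MSH WSD collection and three uncertainty criteria.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=500, n_features=20, random_state=0)
        X_test, y_test = X[400:], y[400:]

        # Seed the labeled set with one example of each sense/class.
        labeled = [int(np.flatnonzero(y[:400] == 0)[0]),
                   int(np.flatnonzero(y[:400] == 1)[0])]
        pool = set(range(400)) - set(labeled)

        for _ in range(20):                              # annotation budget: 20 queries
            clf = SVC(kernel="linear").fit(X[labeled], y[labeled])
            cand = np.array(sorted(pool))
            margin = np.abs(clf.decision_function(X[cand]))
            pick = int(cand[margin.argmin()])            # least-confident example
            labeled.append(pick)                         # "annotate" it and move on
            pool.remove(pick)

        clf = SVC(kernel="linear").fit(X[labeled], y[labeled])
        print("accuracy with", len(labeled), "labels:", clf.score(X_test, y_test))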

  5. Applying active learning to supervised word sense disambiguation in MEDLINE

    PubMed Central

    Chen, Yukun; Cao, Hongxin; Mei, Qiaozhu; Zheng, Kai; Xu, Hua

    2013-01-01

    Objectives This study aimed to assess whether active learning strategies can be integrated with supervised word sense disambiguation (WSD) methods, thus reducing the number of annotated samples while keeping or improving the quality of disambiguation models. Methods We developed support vector machine (SVM) classifiers to disambiguate 197 ambiguous terms and abbreviations in the MSH WSD collection. Three different uncertainty sampling-based active learning algorithms were implemented with the SVM classifiers and were compared with a passive learner (PL) based on random sampling. For each ambiguous term and each learning algorithm, a learning curve that plots the accuracy computed from the test set as a function of the number of annotated samples used in the model was generated. The area under the learning curve (ALC) was used as the primary metric for evaluation. Results Our experiments demonstrated that active learners (ALs) significantly outperformed the PL, showing better performance for 177 out of 197 (89.8%) WSD tasks. Further analysis showed that to achieve an average accuracy of 90%, the PL needed 38 annotated samples, while the ALs needed only 24, a 37% reduction in annotation effort. Moreover, we analyzed cases where active learning algorithms did not achieve superior performance and identified three causes: (1) poor models in the early learning stage; (2) easy WSD cases; and (3) difficult WSD cases, which provide useful insight for future improvements. Conclusions This study demonstrated that integrating active learning strategies with supervised WSD methods could effectively reduce annotation cost and improve the disambiguation models. PMID:23364851

  6. Testing the accuracy of clustering redshifts with simulations

    NASA Astrophysics Data System (ADS)

    Scottez, V.; Benoit-Lévy, A.; Coupon, J.; Ilbert, O.; Mellier, Y.

    2018-03-01

    We explore the accuracy of clustering-based redshift inference within the MICE2 simulation. This method uses the spatial clustering of galaxies between a spectroscopic reference sample and an unknown sample. This study gives an estimate of the accuracy reachable with this method. First, we discuss the requirements on the number of objects in the two samples, confirming that this method does not require a representative spectroscopic sample for calibration. In the context of the next generation of cosmological surveys, we estimate that the density of the Quasi Stellar Objects in BOSS allows us to reach 0.2 per cent accuracy in the mean redshift. Secondly, we estimate individual redshifts for galaxies in the densest regions of colour space (~30 per cent of the galaxies) without using the photometric redshift procedure. The advantage of this procedure is threefold. It allows: (i) the use of cluster-zs for any field in astronomy; (ii) the possibility of combining photo-zs and cluster-zs to obtain an improved redshift estimate; and (iii) the use of cluster-zs to define tomographic bins for weak lensing. Finally, we explore this last option and build five cluster-z selected tomographic bins from redshift 0.2 to 1. We find a bias on the mean redshift estimate of 0.002 per bin. We conclude that cluster-zs could be used as a primary redshift estimator by the next generation of cosmological surveys.

  7. Classification of weld defect based on information fusion technology for radiographic testing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Hongquan; Liang, Zeming, E-mail: heavenlzm@126.com; Gao, Jianmin

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster–Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and a quartile-method-based calculation of the standard weld defect class, which addresses the problem of a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.

  8. Digital adaptive optics confocal microscopy based on iterative retrieval of optical aberration from a guidestar hologram

    PubMed Central

    Liu, Changgeng; Thapa, Damber; Yao, Xincheng

    2017-01-01

    Guidestar hologram based digital adaptive optics (DAO) is a recently emerging active imaging modality. It records each complex distorted line field reflected or scattered from the sample by an off-axis digital hologram, measures the optical aberration from a separate off-axis digital guidestar hologram, and removes the optical aberration from the distorted line fields by numerical processing. In previously demonstrated DAO systems, the optical aberration was directly retrieved from the guidestar hologram by taking its Fourier transform and extracting the phase term. With this direct retrieval method (DRM), when the sample is not coincident with the guidestar focal plane, the accuracy of the retrieved optical aberration decays rapidly, leading to quality deterioration of corrected images. To tackle this problem, we explore here an image metrics-based iterative method (MIM) to retrieve the optical aberration from the guidestar hologram. Using an aberrated objective lens and scattering samples, we demonstrate that MIM can improve the accuracy of the retrieved aberrations from both focused and defocused guidestar holograms, compared to DRM, thereby improving the robustness of the DAO. PMID:28380937

  9. Classification of weld defect based on information fusion technology for radiographic testing system.

    PubMed

    Jiang, Hongquan; Liang, Zeming; Gao, Jianmin; Dang, Changying

    2016-03-01

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and a quartile-method-based calculation of the standard weld defect class, which addresses the problem of a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.

  10. Efficient alignment-free DNA barcode analytics.

    PubMed

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-11-10

    In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens possibility for accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.) On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
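
    The spectrum representation at the heart of these methods maps each sequence to a fixed-length vector of k-mer counts, after which any vector similarity can stand in for an alignment score. A minimal sketch, with k = 3 and toy sequences as assumptions:

        from itertools import product
        import numpy as np

        K = 3
        KMERS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

        def spectrum(seq):
            """Count k-mer occurrences, giving a fixed 4**K-dimensional vector."""
            v = np.zeros(len(KMERS))
            for i in range(len(seq) - K + 1):
                kmer = seq[i:i + K]
                if kmer in KMERS:          # skips ambiguous bases such as 'N'
                    v[KMERS[kmer]] += 1
            return v / max(v.sum(), 1)     # normalize so lengths are comparable

        a = spectrum("ACGTACGTTGCA")
        b = spectrum("ACGTACGATGCA")
        # Cosine similarity between spectra replaces alignment entirely.
        print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))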

  11. Improving CID, HCD, and ETD FT MS/MS degradome-peptidome identifications using high accuracy mass information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Yufeng; Tolic, Nikola; Purvine, Samuel O.

    2011-11-07

    The peptidome (i.e. processed and degraded forms of proteins) of e.g. blood can potentially provide insights into disease processes, as well as a source of candidate biomarkers that are unobtainable using conventional bottom-up proteomics approaches. MS dissociation methods, including CID, HCD, and ETD, can each contribute distinct identifications using conventional peptide identification methods (Shen et al. J. Proteome Res. 2011), but such samples still pose significant analysis and informatics challenges. In this work, we explored a simple approach for better utilization of high accuracy fragment ion mass measurements provided e.g. by FT MS/MS and demonstrate significant improvements relative to conventional descriptive and probabilistic scoring methods. For example, at the same FDR level we identified 20-40% more peptides than SEQUEST and Mascot scoring methods using high accuracy fragment ion information (e.g., <10 mass errors) from CID, HCD, and ETD spectra. Species identified covered >90% of all those identified from SEQUEST, Mascot, and MS-GF scoring methods. Additionally, we found that merging the different fragment spectra provided >60% more species using the UStags method than achieved previously, and enabled >1000 peptidome components to be identified from a single human blood plasma sample with a 0.6% peptide-level FDR, providing an improved basis for investigation of potentially disease-related peptidome components.

  12. A fiber optic sensor for noncontact measurement of shaft speed, torque, and power

    NASA Technical Reports Server (NTRS)

    Madzsar, George C.

    1990-01-01

    A fiber optic sensor which enables noncontact measurement of the speed, torque and power of a rotating shaft was fabricated and tested. The sensor provides a direct measurement of shaft rotational speed and shaft angular twist, from which torque and power can be determined. Angles of twist between 0.005 and 10 degrees were measured. Sensor resolution is limited by the sampling rate of the analog to digital converter, while accuracy is dependent on the spot size of the focused beam on the shaft. Increasing the sampling rate improves measurement resolution, and decreasing the focused spot size increases accuracy. Digital processing allows for enhancement of an electronically or optically degraded signal.

  13. A fiber optic sensor for noncontact measurement of shaft speed, torque and power

    NASA Technical Reports Server (NTRS)

    Madzsar, George C.

    1990-01-01

    A fiber optic sensor which enables noncontact measurement of the speed, torque and power of a rotating shaft was fabricated and tested. The sensor provides a direct measurement of shaft rotational speed and shaft angular twist, from which torque and power can be determined. Angles of twist between 0.005 and 10 degrees were measured. Sensor resolution is limited by the sampling rate of the analog to digital converter, while accuracy is dependent on the spot size of the focused beam on the shaft. Increasing the sampling rate improves measurement resolution, and decreasing the focused spot size increases accuracy. Digital processing allows for enhancement of an electronically or optically degraded signal.

  14. Clinical pharmacology quality assurance program: models for longitudinal analysis of antiretroviral proficiency testing for international laboratories.

    PubMed

    DiFrancesco, Robin; Rosenkranz, Susan L; Taylor, Charlene R; Pande, Poonam G; Siminski, Suzanne M; Jenny, Richard W; Morse, Gene D

    2013-10-01

    Among National Institutes of Health HIV Research Networks conducting multicenter trials, samples from protocols that span several years are analyzed at multiple clinical pharmacology laboratories (CPLs) for multiple antiretrovirals. Drug assay data are, in turn, entered into study-specific data sets that are used for pharmacokinetic analyses, merged to conduct cross-protocol pharmacokinetic analysis, and integrated with pharmacogenomics research to investigate pharmacokinetic-pharmacogenetic associations. The CPLs participate in a semiannual proficiency testing (PT) program implemented by the Clinical Pharmacology Quality Assurance program. Using results from multiple PT rounds, longitudinal analyses of recovery are reflective of accuracy and precision within/across laboratories. The objectives of this longitudinal analysis of PT across multiple CPLs were to develop and test statistical models that longitudinally: (1) assess the precision and accuracy of concentrations reported by individual CPLs and (2) determine factors associated with round-specific and long-term assay accuracy, precision, and bias using a new regression model. A measure of absolute recovery is explored as a simultaneous measure of accuracy and precision. Overall, the analysis outcomes assured 97% accuracy (±20% of the final target concentration of all (21) drug concentration results reported for clinical trial samples by multiple CPLs). Using the Clinical Laboratory Improvement Act acceptance of meeting criteria for ≥2/3 consecutive rounds, all 10 laboratories that participated in 3 or more rounds per analyte maintained Clinical Laboratory Improvement Act proficiency. Significant associations were present between magnitude of error and CPL (Kruskal-Wallis P < 0.001) and antiretroviral (Kruskal-Wallis P < 0.001).

  15. Data for Program Management: An Accuracy Assessment of Data Collected in Household Registers by Community Health Workers in Southern Kayonza, Rwanda.

    PubMed

    Mitsunaga, Tisha; Hedt-Gauthier, Bethany L; Ngizwenayo, Elias; Farmer, Didi Bertrand; Gaju, Erick; Drobac, Peter; Basinga, Paulin; Hirschhorn, Lisa; Rich, Michael L; Winch, Peter J; Ngabo, Fidele; Mugeni, Cathy

    2015-08-01

    Community health workers (CHWs) collect data for routine services, surveys and research in their communities. However, quality of these data is largely unknown. Utilizing poor quality data can result in inefficient resource use, misinformation about system gaps, and poor program management and effectiveness. This study aims to measure CHW data accuracy, defined as agreement between household registers compared to household member interview and client records in one district in Eastern province, Rwanda. We used cluster-lot quality assurance sampling to randomly sample six CHWs per cell and six households per CHW. We classified cells as having 'poor' or 'good' accuracy for household registers for five indicators, calculating point estimates of percent of households with accurate data by health center. We evaluated 204 CHW registers and 1,224 households for accuracy across 34 cells in southern Kayonza. Point estimates across health centers ranged from 79 to 100% for individual indicators and 61 to 72% for the composite indicator. Recording error appeared random for all but the widely under-reported number of women on modern family planning method. Overall, accuracy was largely 'good' across cells, with varying results by indicator. Program managers should identify optimum thresholds for 'good' data quality and interventions to reach them according to data use. Decreasing variability and improving quality will facilitate potential of these routinely-collected data to be more meaningful for community health program management. We encourage further studies assessing CHW data quality and the impact training, supervision and other strategies have on improving it.

  16. Vital sign sensing method based on EMD in terahertz band

    NASA Astrophysics Data System (ADS)

    Xu, Zhengwu; Liu, Tong

    2014-12-01

    Non-contact respiration and heartbeat rates detection could be applied to find survivors trapped in the disaster or the remote monitoring of the respiration and heartbeat of a patient. This study presents an improved algorithm that extracts the respiration and heartbeat rates of humans by utilizing the terahertz radar, which further lessens the effects of noise, suppresses the cross-term, and enhances the detection accuracy. A human target echo model for the terahertz radar is first presented. Combining the over-sampling method, low-pass filter, and Empirical Mode Decomposition improves the signal-to-noise ratio. The smoothed pseudo Wigner-Ville distribution time-frequency technique and the centroid of the spectrogram are used to estimate the instantaneous velocity of the target's cardiopulmonary motion. The down-sampling method is adopted to prevent serious distortion. Finally, a second time-frequency analysis is applied to the centroid curve to extract the respiration and heartbeat rates of the individual. Simulation results show that compared with the previously presented vital sign sensing method, the improved algorithm enhances the signal-to-noise ratio to 1 dB with a detection accuracy of 80%. The improved algorithm is an effective approach for the detection of respiration and heartbeat signal in a complicated environment.

  17. An interpolation method for stream habitat assessments

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart A.

    2015-01-01

    Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, which achieved accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% percent sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than that using the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
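
    Of the interpolators compared above, inverse distance weighting is the simplest to show in code. The sketch below predicts depth at an unsampled point from a handful of surveyed points; the coordinates, depths, and power parameter are illustrative assumptions, not the study's GIS configuration.

        import numpy as np

        rng = np.random.default_rng(0)
        xy = rng.uniform(0, 20, size=(25, 2))          # 25 sampled points in the reach
        depth = 0.5 + 0.1 * xy[:, 0] + rng.normal(0, 0.05, 25)

        def idw(q, power=2.0):
            """Predict depth at query point q as a distance-weighted mean."""
            d = np.linalg.norm(xy - q, axis=1)
            if d.min() < 1e-9:                         # exactly on a sample point
                return float(depth[d.argmin()])
            w = 1.0 / d ** power
            return float(w @ depth / w.sum())

        print(idw(np.array([10.0, 5.0])))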

  18. Risk of bias reporting in the recent animal focal cerebral ischaemia literature.

    PubMed

    Bahor, Zsanett; Liao, Jing; Macleod, Malcolm R; Bannach-Brown, Alexandra; McCann, Sarah K; Wever, Kimberley E; Thomas, James; Ottavi, Thomas; Howells, David W; Rice, Andrew; Ananiadou, Sophia; Sena, Emily

    2017-10-15

    Findings from in vivo research may be less reliable where studies do not report measures to reduce risks of bias. The experimental stroke community has been at the forefront of implementing changes to improve reporting, but it is not known whether these efforts are associated with continuous improvements. Our aims here were firstly to validate an automated tool to assess risks of bias in published works, and secondly to assess the reporting of measures taken to reduce the risk of bias within recent literature for two experimental models of stroke. We developed and used text analytic approaches to automatically ascertain reporting of measures to reduce risk of bias from full-text articles describing animal experiments inducing middle cerebral artery occlusion (MCAO) or modelling lacunar stroke. Compared with previous assessments, there were improvements in the reporting of measures taken to reduce risks of bias in the MCAO literature but not in the lacunar stroke literature. Accuracy of automated annotation of risk of bias in the MCAO literature was 86% (randomization), 94% (blinding) and 100% (sample size calculation); and in the lacunar stroke literature accuracy was 67% (randomization), 91% (blinding) and 96% (sample size calculation). There remains substantial opportunity for improvement in the reporting of animal research modelling stroke, particularly in the lacunar stroke literature. Further, automated tools perform sufficiently well to identify whether studies report blinded assessment of outcome, but improvements are required in the tools to ascertain whether randomization and a sample size calculation were reported. © 2017 The Author(s).
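
    Automated ascertainment of this kind can be approximated, far more crudely than the study's text-mining models, by keyword rules over the methods text. The patterns below are illustrative assumptions only:

        import re

        PATTERNS = {
            "randomization": r"\brandomi[sz]ed\b|\brandomi[sz]ation\b|\brandom allocation\b",
            "blinding": r"\bblinded?\b|\bmasked\b",
            "sample size calculation": r"sample size (was )?(calculated|estimated|determined)",
        }

        def annotate(text):
            """Flag which risk-of-bias items the text appears to report."""
            return {item: bool(re.search(pat, text, re.I))
                    for item, pat in PATTERNS.items()}

        methods = ("Animals were randomized to treatment groups and outcome "
                   "assessment was performed by a blinded investigator.")
        print(annotate(methods))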

  19. Multi-look fusion identification: a paradigm shift from quality to quantity in data samples

    NASA Astrophysics Data System (ADS)

    Wong, S.

    2009-05-01

    A multi-look identification method known as score-level fusion is found to be capable of achieving very high identification accuracy, even when low quality target signatures are used. Analysis using measured ground vehicle radar signatures has shown that a 97% correct identification rate can be achieved using this multi-look fusion method; in contrast, only a 37% accuracy rate is obtained when a single target signature is used as input. The results suggest that quantity can be used to replace quality of the target data in improving identification accuracy. With advances in sensor technology, large quantities of target signatures of marginal quality can be captured routinely. This quantity-over-quality approach allows maximum exploitation of the available data to improve target identification performance, and it has the potential to be developed into a disruptive technology.
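
    Score-level fusion itself is a one-line operation: average the per-look class scores, then decide. The sketch below simulates it with Gaussian scores; the class count, look count, and signal strength are assumptions, not the measured radar data used above.

        import numpy as np

        rng = np.random.default_rng(0)
        n_classes, n_looks, true_class = 5, 10, 2

        # Noisy per-look score vectors: each look carries only weak evidence.
        scores = rng.normal(0.0, 1.0, size=(n_looks, n_classes))
        scores[:, true_class] += 0.8

        single_look = scores[0].argmax()        # decision from one low-quality look
        fused = scores.mean(axis=0).argmax()    # score-level fusion across all looks
        print("single look:", single_look, " fused:", fused)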

  20. Compositional Solution Space Quantification for Probabilistic Software Analysis

    NASA Technical Reports Server (NTRS)

    Borges, Mateus; Pasareanu, Corina S.; Filieri, Antonio; d'Amorim, Marcelo; Visser, Willem

    2014-01-01

    Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions to reach the target event and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while only approximate solutions can be provided in general through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain containing the sought solutions. Preliminary experiments show significant improvement on previous approaches both in results accuracy and analysis time.

  1. Requirements for an Advanced Low Earth Orbit (LEO) Sounder (ALS) for Improved Regional Weather Prediction and Monitoring of Greenhouse Gases

    NASA Technical Reports Server (NTRS)

    Pagano, Thomas S.; Chahine, Moustafa T.; Susskind, Joel

    2008-01-01

    Hyperspectral infrared atmospheric sounders (e.g., the Atmospheric Infrared Sounder (AIRS) on Aqua and the Infrared Atmospheric Sounding Interferometer (IASI) on Met Op) provide highly accurate temperature and water vapor profiles in the lower to upper troposphere. These systems are vital operational components of our National Weather Prediction system and the AIRS has demonstrated over 6 hrs of forecast improvement on the 5 day operational forecast. Despite the success in the mid troposphere to lower stratosphere, a reduction in sensitivity and accuracy has been seen in these systems in the boundary layer over land. In this paper we demonstrate the potential improvement associated with higher spatial resolution (1 km vs currently 13.5 km) on the accuracy of boundary layer products with an added consequence of higher yield of cloud free scenes. This latter feature is related to the number of samples that can be assimilated and has also shown to have a significant impact on improving forecast accuracy. We also present a set of frequencies and resolutions that will improve vertical resolution of temperature and water vapor and trace gas species throughout the atmosphere. Development of an Advanced Low Earth Orbit (LEO) Sounder (ALS) with these improvements will improve weather forecast at the regional scale and of tropical storms and hurricanes. Improvements are also expected in the accuracy of the water vapor and cloud properties products, enhancing process studies and providing a better match to the resolution of future climate models. The improvements of technology required for the ALS are consistent with the current state of technology as demonstrated in NASA Instrument Incubator Program and NOAA's Hyperspectral Environmental Suite (HES) formulation phase development programs.

  2. Financial impact of improved pressure ulcer staging in the acute hospital with use of a new tool, the NE1 Wound Assessment Tool.

    PubMed

    Young, Daniel L; Shen, Jay J; Estocado, Nancy; Landers, Merrill R

    2012-04-01

    The NE1 Wound Assessment Tool (NE1 WAT; Medline Industries, Inc, Mundelein, Illinois), previously called the N.E. One Can Stage, was shown to significantly improve accuracy of pressure ulcer (PrU) staging. Improved PrU staging has many potential benefits, including improved care for the patient and better reimbursement. Medicare has incentivized good care and accurate identification of PrUs in the acute care hospital through an additional payment, the Medicare Severity-Diagnosis Related Group (MS-DRG). This article examines the financial impact of NE1 WAT use on the acute care hospital relative to MS-DRG reimbursement. PrU staging accuracy with and without use of the NE1 WAT from previous data was compared with acute care hospital PrU rates obtained from the 2006 National Inpatient Sample. Hill-Rom International Pressure Ulcer Prevalence Survey data were used to estimate the number of MS-DRG-eligible PrUs. There are between 130,000 and 390,000 MS-DRG-eligible PrUs annually. Given current PrU staging accuracy, approximately $209 million in MS-DRG money is being collected. With the improved staging afforded by the NE1 WAT, this figure is approximately $763.9 million. Subtracting the two reveals $554.9 million in additional reimbursement that could be generated by using the NE1 WAT. There is a tremendous financial incentive to improve PrU staging. The NE1 WAT has been shown to improve PrU staging accuracy significantly. This improvement has the potential to improve the financial health of acute care hospitals caring for patients with PrUs.

  3. Boomerang: A method for recursive reclassification.

    PubMed

    Devlin, Sean M; Ostrovnaya, Irina; Gönen, Mithat

    2016-09-01

    While there are many validated prognostic classifiers used in practice, often their accuracy is modest and heterogeneity in clinical outcomes exists in one or more risk subgroups. Newly available markers, such as genomic mutations, may be used to improve the accuracy of an existing classifier by reclassifying patients from a heterogeneous group into a higher or lower risk category. The statistical tools typically applied to develop the initial classifiers are not easily adapted toward this reclassification goal. In this article, we develop a new method designed to refine an existing prognostic classifier by incorporating new markers. The two-stage algorithm called Boomerang first searches for modifications of the existing classifier that increase the overall predictive accuracy and then merges to a prespecified number of risk groups. Resampling techniques are proposed to assess the improvement in predictive accuracy when an independent validation data set is not available. The performance of the algorithm is assessed under various simulation scenarios where the marker frequency, degree of censoring, and total sample size are varied. The results suggest that the method selects few false positive markers and is able to improve the predictive accuracy of the classifier in many settings. Lastly, the method is illustrated on an acute myeloid leukemia data set where a new refined classifier incorporates four new mutations into the existing three category classifier and is validated on an independent data set. © 2016, The International Biometric Society.

  4. Boomerang: A Method for Recursive Reclassification

    PubMed Central

    Devlin, Sean M.; Ostrovnaya, Irina; Gönen, Mithat

    2016-01-01

    Summary While there are many validated prognostic classifiers used in practice, often their accuracy is modest and heterogeneity in clinical outcomes exists in one or more risk subgroups. Newly available markers, such as genomic mutations, may be used to improve the accuracy of an existing classifier by reclassifying patients from a heterogeneous group into a higher or lower risk category. The statistical tools typically applied to develop the initial classifiers are not easily adapted towards this reclassification goal. In this paper, we develop a new method designed to refine an existing prognostic classifier by incorporating new markers. The two-stage algorithm called Boomerang first searches for modifications of the existing classifier that increase the overall predictive accuracy and then merges to a pre-specified number of risk groups. Resampling techniques are proposed to assess the improvement in predictive accuracy when an independent validation data set is not available. The performance of the algorithm is assessed under various simulation scenarios where the marker frequency, degree of censoring, and total sample size are varied. The results suggest that the method selects few false positive markers and is able to improve the predictive accuracy of the classifier in many settings. Lastly, the method is illustrated on an acute myeloid leukemia dataset where a new refined classifier incorporates four new mutations into the existing three category classifier and is validated on an independent dataset. PMID:26754051

  5. Analysis of near infrared spectra for age-grading of wild populations of Anopheles gambiae.

    PubMed

    Krajacich, Benjamin J; Meyers, Jacob I; Alout, Haoues; Dabiré, Roch K; Dowell, Floyd E; Foy, Brian D

    2017-11-07

    Understanding the age-structure of mosquito populations, especially malaria vectors such as Anopheles gambiae, is important for assessing the risk of infectious mosquitoes and how vector control interventions may impact this risk. The use of near-infrared spectroscopy (NIRS) for age-grading has been demonstrated previously on laboratory and semi-field mosquitoes, but to date has not been utilized on wild-caught mosquitoes whose age is externally validated via parity status or parasite infection stage. In this study, we developed regression and classification models using NIRS on datasets of wild An. gambiae (s.l.) reared from larvae collected from the field in Burkina Faso, and two laboratory strains. We compared the accuracy of these models for predicting the ages of wild-caught mosquitoes that had been scored for their parity status as well as for positivity for Plasmodium sporozoites. Regression models utilizing variable selection increased predictive accuracy over the more common full-spectrum partial least squares (PLS) approach for cross-validation of the datasets, validation, and independent test sets. Models produced from datasets that included the greatest range of mosquito samples (i.e. different sampling locations and times) had the highest predictive accuracy on independent testing sets, though overall accuracy on these samples was low. For classification, we found that intramodel accuracy ranged from 73.5% to 97.0% for grouping of mosquitoes into "early" and "late" age classes, with the highest prediction accuracy found in laboratory-colonized mosquitoes. However, this accuracy decreased on test sets, with the highest classification accuracy on an independent set of wild-caught larvae reared to set ages being 69.6%. Variation in NIRS data, likely from dietary, genetic, and other factors, limits the accuracy of this technique with wild-caught mosquitoes. Alternative algorithms may help improve prediction accuracy, but care should be taken to either maximize variety in models or minimize confounders.

  6. The accuracy of parent-reported height and weight for 6-12 year old U.S. children.

    PubMed

    Wright, Davene R; Glanz, Karen; Colburn, Trina; Robson, Shannon M; Saelens, Brian E

    2018-02-12

    Previous studies have examined correlations between BMI calculated using parent-reported and directly-measured child height and weight. The objective of this study was to validate correction factors for parent-reported child measurements. Concordance between parent-reported and investigator measured child height, weight, and BMI (kg/m²) among participants in the Neighborhood Impact on Kids Study (n = 616) was examined using the Lin coefficient, where a value of ±1.0 indicates perfect concordance and a value of zero denotes non-concordance. A correction model for parent-reported height, weight, and BMI based on commonly collected demographic information was developed using 75% of the sample. This model was used to estimate corrected measures for the remaining 25% of the sample and measured concordance between corrected parent-reported and investigator-measured values. Accuracy of corrected values in classifying children as overweight/obese was assessed by sensitivity and specificity. Concordance between parent-reported and measured height, weight and BMI was low (0.007, - 0.039, and - 0.005 respectively). Concordance in the corrected test samples improved to 0.752 for height, 0.616 for weight, and 0.227 for BMI. Sensitivity of corrected parent-reported measures for predicting overweight and obesity among children in the test sample decreased from 42.8 to 25.6% while specificity improved from 79.5 to 88.6%. Correction factors improved concordance for height and weight but did not improve the sensitivity of parent-reported measures for measuring child overweight and obesity. Future research should be conducted using larger and more nationally-representative samples that allow researchers to fully explore demographic variance in correction coefficients.
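
    A correction model of the kind developed above can be sketched as a simple regression: fit measured height on reported height plus demographics in a 75% training split, then correct the held-out reports. The simulated data and single covariate below are assumptions, not the study's model.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 600
        age = rng.uniform(6, 12, n)
        true_h = 110 + 6 * (age - 6) + rng.normal(0, 4, n)     # measured height, cm
        reported = true_h + rng.normal(2.5, 3.5, n)            # parents over-report

        X = np.column_stack([reported, age])
        train, test = slice(0, 450), slice(450, None)          # 75% / 25% split
        fit = LinearRegression().fit(X[train], true_h[train])
        corrected = fit.predict(X[test])

        print("raw bias:      ", np.mean(reported[test] - true_h[test]))
        print("corrected bias:", np.mean(corrected - true_h[test]))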

  7. New and Improved? A Comparison of the Original and Revised Versions of the Structured Interview of Reported Symptoms

    ERIC Educational Resources Information Center

    Green, Debbie; Rosenfeld, Barry; Belfi, Brian

    2013-01-01

    The current study evaluated the accuracy of the Structured Interview of Reported Symptoms, Second Edition (SIRS-2) in a criterion-group study using a sample of forensic psychiatric patients and a community simulation sample, comparing it to the original SIRS and to results published in the SIRS-2 manual. The SIRS-2 yielded an impressive…

  8. Quantum-enhanced Sensing and Efficient Quantum Computation

    DTIC Science & Technology

    2015-07-27

    accuracy. The system was used to improve quantum boson sampling tests. Results on the quantum boson sampling (QBS) problem are reported in Ref. [7]. To substantially increase the scale of feasible tests, we developed a new variation. Subject terms: EOARD, Quantum Information Processing, Transition Edge Sensors.

  9. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering

    PubMed Central

    Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.

    2015-01-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices. PMID:25466541

  10. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering

    NASA Astrophysics Data System (ADS)

    Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.

    2014-12-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices.

  11. Research on sparse feature matching of improved RANSAC algorithm

    NASA Astrophysics Data System (ADS)

    Kong, Xiangsi; Zhao, Xian

    2018-04-01

    In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. Firstly, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is matched roughly by generating SIFT feature descriptors. At last, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three aspects: instead of the homography matrix, this paper uses the fundamental matrix generated by the eight-point algorithm as the model; the sample is selected by a random block selection method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on top of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method can not only achieve higher matching accuracy, but also greatly reduce the computation and improve the matching speed.
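
    The hypothesize-and-verify skeleton that the paper modifies is shown below on a deliberately simple model (a 2-D line from a two-point minimal sample) rather than the eight-point fundamental matrix; the thresholds and data are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        x = rng.uniform(0, 10, n)
        y = 2.0 * x + 1.0 + rng.normal(0, 0.2, n)
        out = rng.choice(n, 60, replace=False)       # 30% gross outliers
        y[out] = rng.uniform(0, 25, 60)

        best_inliers, best_model = 0, None
        for _ in range(500):
            i, j = rng.choice(n, 2, replace=False)   # minimal sample for a line
            if abs(x[i] - x[j]) < 1e-9:
                continue                             # degenerate hypothesis
            m = (y[i] - y[j]) / (x[i] - x[j])
            c = y[i] - m * x[i]
            inliers = np.abs(y - (m * x + c)) < 0.5  # residual threshold
            if inliers.sum() > best_inliers:
                best_inliers, best_model = int(inliers.sum()), (m, c)

        print(best_model, best_inliers)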

  12. Thematic accuracy of the NLCD 2001 land cover for the conterminous United States

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Fry, J.A.; Smith, J.H.; Homer, Collin G.

    2010-01-01

    The land-cover thematic accuracy of NLCD 2001 was assessed from a probability sample of 15,000 pixels. Nationwide, NLCD 2001 overall Anderson Level II and Level I accuracies were 78.7% and 85.3%, respectively. By comparison, overall accuracies at Level II and Level I for the NLCD 1992 were 58% and 80%. Forest and cropland were two classes showing substantial improvements in accuracy in NLCD 2001 relative to NLCD 1992. NLCD 2001 forest and cropland user's accuracies were 87% and 82%, respectively, compared to 80% and 43% for NLCD 1992. Accuracy results are reported for 10 geographic regions of the United States, with regional overall accuracies ranging from 68% to 86% for Level II and from 79% to 91% at Level I. Geographic variation in class-specific accuracy was strongly associated with the phenomenon that regionally more abundant land-cover classes had higher accuracy. Accuracy estimates based on several definitions of agreement are reported to provide an indication of the potential impact of reference data error on accuracy. Drawing on our experience from two NLCD national accuracy assessments, we discuss the use of designs incorporating auxiliary data to more seamlessly quantify reference data quality as a means to further advance thematic map accuracy assessment.
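
    The accuracy figures quoted above come from a confusion matrix of map labels against reference labels. The sketch below computes overall and user's accuracies from a made-up three-class matrix; the counts are purely illustrative.

        import numpy as np

        classes = ["forest", "cropland", "urban"]
        cm = np.array([[870, 80, 50],     # pixels mapped as forest
                       [60, 820, 120],    # pixels mapped as cropland
                       [40, 90, 870]])    # pixels mapped as urban; columns = reference

        overall = np.trace(cm) / cm.sum()
        users = np.diag(cm) / cm.sum(axis=1)   # per class: correct / all mapped as class
        for c, u in zip(classes, users):
            print(f"user's accuracy ({c}): {u:.1%}")
        print(f"overall accuracy: {overall:.1%}")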

  13. Extrapolation of in situ data from 1-km squares to adjacent squares using remote sensed imagery and airborne lidar data for the assessment of habitat diversity and extent.

    PubMed

    Lang, M; Vain, A; Bunce, R G H; Jongman, R H G; Raet, J; Sepp, K; Kuusemets, V; Kikas, T; Liba, N

    2015-03-01

    Habitat surveillance and subsequent monitoring at a national level is usually carried out by recording data from in situ sample sites located according to predefined strata. This paper describes the application of remote sensing to the extension of such field data recorded in 1-km squares to adjacent squares, in order to increase sample number without further field visits. Habitats were mapped in eight central squares in northeast Estonia in 2010 using a standardized recording procedure. Around one of the squares, a special study site was established which consisted of the central square and eight surrounding squares. A Landsat-7 Enhanced Thematic Mapper Plus (ETM+) image was used for correlation with in situ data. An airborne light detection and ranging (lidar) vegetation height map was also included in the classification. A series of tests were carried out by including the lidar data and contrasting analytical techniques, which are described in detail in the paper. Training accuracy in the central square varied from 75 to 100 %. In the extrapolation procedure to the surrounding squares, accuracy varied from 53.1 to 63.1 %, which improved by 10 % with the inclusion of lidar data. The reasons for this relatively low classification accuracy were mainly inherent variability in the spectral signatures of habitats but also differences between the dates of imagery acquisition and field sampling. Improvements could therefore be made by better synchronization of the field survey and image acquisition as well as by dividing general habitat categories (GHCs) into units which are more likely to have similar spectral signatures. However, the increase in the number of sample kilometre squares compensates for the loss of accuracy in the measurements of individual squares. The methodology can be applied in other studies as the procedures used are readily available.

  14. Empirical evaluation of data normalization methods for molecular classification

    PubMed Central

    Huang, Huei-Chung

    2018-01-01

    Background Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers—an increasingly important application of microarrays in the era of personalized medicine. Methods In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. Results In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated in an independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Conclusion Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy. PMID:29666754

  15. Interface Prostheses With Classifier-Feedback-Based User Training.

    PubMed

    Fang, Yinfeng; Zhou, Dalin; Li, Kairu; Liu, Honghai

    2017-11-01

    It is evident that user training significantly affects the performance of pattern-recognition-based myoelectric prosthetic device control. Despite plausible classification accuracy on offline datasets, online accuracy usually suffers from changes in physiological conditions and electrode displacement. The user's ability to generate consistent electromyographic (EMG) patterns can be enhanced via proper user training strategies in order to improve online performance. This study proposes a clustering-feedback strategy that provides real-time feedback to users by means of a visualized online EMG signal input as well as the centroids of the training samples, whose dimensionality is reduced to a minimal number by dimension reduction. Clustering feedback provides a criterion that guides users to adjust motion gestures and muscle contraction forces intentionally. The experimental results demonstrate that hand motion recognition accuracy increases steadily over the course of clustering-feedback-based user training, whereas conventional classifier-feedback methods, i.e., label feedback, hardly achieve any improvement. The results indicate that proper classifier feedback can accelerate the process of user training, and imply a promising future for amputees with limited or no experience in pattern-recognition-based prosthetic device manipulation.

  16. Higher-order time integration of Coulomb collisions in a plasma using Langevin equations

    DOE PAGES

    Dimits, A. M.; Cohen, B. I.; Caflisch, R. E.; ...

    2013-02-08

    The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler-Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the two fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt^(1/2))] in the strong convergence rate both for the speed |v| and angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering if and only if the "area-integral" terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance versus the Euler-Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. Lastly, this method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.
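
    The order gap between the two schemes is easy to demonstrate on a scalar test equation. The sketch below integrates geometric Brownian motion, dX = aX dt + bX dW, whose exact solution is known, with and without the Milstein correction term; it illustrates the convergence orders discussed above, not the collision operator itself.

        import numpy as np

        rng = np.random.default_rng(0)
        a, b, x0, T, n = 0.5, 0.8, 1.0, 1.0, 1000
        dt = T / n
        dW = rng.normal(0, np.sqrt(dt), n)
        W = dW.cumsum()

        # Exact strong solution of GBM at time T for this Brownian path.
        exact = x0 * np.exp((a - 0.5 * b**2) * T + b * W[-1])

        xe = xm = x0
        for k in range(n):
            xe = xe + a * xe * dt + b * xe * dW[k]               # Euler-Maruyama
            xm = (xm + a * xm * dt + b * xm * dW[k]
                  + 0.5 * b**2 * xm * (dW[k]**2 - dt))           # Milstein correction
        print("Euler error:   ", abs(xe - exact))
        print("Milstein error:", abs(xm - exact))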

  17. Clinical time series prediction: Toward a hierarchical dynamical system framework.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2015-09-01

    Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. [MicroRNA Target Prediction Based on Support Vector Machine Ensemble Classification Algorithm of Under-sampling Technique].

    PubMed

    Chen, Zhiru; Hong, Wenxue

    2016-02-01

    Considering the low prediction accuracy on positive samples and the poor overall classification caused by the unbalanced sample data of MicroRNA (miRNA) targets, we propose in this paper a support vector machine-integration of under-sampling and weight (SVM-IUSM) algorithm, an under-sampling-based ensemble learning algorithm. The algorithm adopts the SVM as the learning algorithm and AdaBoost as the integration framework, and embeds clustering-based under-sampling into the iterative process, aiming at reducing the degree of unbalanced distribution of positive and negative samples. Meanwhile, in the process of adaptive weight adjustment of the samples, the SVM-IUSM algorithm eliminates abnormal negative samples with a robust sample-weight smoothing mechanism so as to avoid over-learning. Finally, the prediction of the miRNA target integrated classifier is achieved by combining multiple weak classifiers through a voting mechanism. The experiments revealed that the SVM-IUSM, compared with other algorithms on unbalanced dataset collections, could not only improve the accuracy on positive targets and the overall effect of classification, but also enhance the generalization ability of the miRNA target classifier.
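
    The clustering-based under-sampling step embedded in each iteration can be sketched on its own: replace the majority class by k-means centroids so the classes balance. The data shapes and cluster count below are assumptions, not the paper's configuration.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        X_neg = rng.normal(0, 1, size=(900, 10))     # majority (negative) samples
        X_pos = rng.normal(1, 1, size=(100, 10))     # minority (positive) samples

        # Replace the 900 negatives by 100 cluster centroids so classes balance.
        km = KMeans(n_clusters=len(X_pos), n_init=10, random_state=0).fit(X_neg)
        X_bal = np.vstack([km.cluster_centers_, X_pos])
        y_bal = np.array([0] * len(X_pos) + [1] * len(X_pos))
        print(X_bal.shape, np.bincount(y_bal))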

  19. Improved measurement performance of the Physikalisch-Technische Bundesanstalt nanometer comparator by integration of a new Zerodur sample carriage

    NASA Astrophysics Data System (ADS)

    Flügge, Jens; Köning, Rainer; Schötka, Eugen; Weichert, Christoph; Köchert, Paul; Bosse, Harald; Kunzmann, Horst

    2014-12-01

    The paper describes recent improvements of the Physikalisch-Technische Bundesanstalt (PTB) reference measuring instrument for length graduations, the so-called nanometer comparator, intended to achieve a measurement uncertainty in the domain of 1 nm for lengths up to 300 mm. The improvements are based on the design and realization of a new sample carriage, integrated into the existing structure, and on the optimization of the coupling of this new device to the vacuum interferometer, which provides the length measuring range of approximately 540 mm with sub-nm resolution. First measurement results of the enhanced nanometer comparator are presented and discussed; they show the improvement of the measuring capabilities and verify the step toward the sub-nm accuracy level.

  20. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odéen, Henrik, E-mail: h.odeen@gmail.com; Diakite, Mahamadou; Todd, Nick

    2014-09-15

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm³ FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations.
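
    As a rough NumPy illustration of what "variable density sampling in zero, one, and two dimensions" means for a phase-encode plane, the sketch below builds three random subsampling masks at a nominal reduction factor R; the Gaussian density profile and R = 4 are assumptions, and the TCR reconstruction itself is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    ny, nz, R = 144, 135, 4          # phase-encode matrix size and reduction factor

    def density(n, flat=False):
        """Sampling probability across one phase-encode axis, mean 1/R."""
        if flat:
            return np.full(n, 1.0 / R)            # zero-dimensional (uniform)
        x = np.linspace(-1, 1, n)
        p = np.exp(-3 * x**2)                     # denser near the k-space center
        return p / p.mean() / R

    # Masks with variable density in zero, one, and two dimensions.
    mask0 = rng.random((ny, nz)) < density(ny, flat=True)[:, None]
    mask1 = rng.random((ny, nz)) < density(ny)[:, None]
    mask2 = rng.random((ny, nz)) < np.outer(density(ny), density(nz)) * R

    for name, m in [("0D", mask0), ("1D", mask1), ("2D", mask2)]:
        print(name, "sampled fraction:", round(m.mean(), 3))
    ```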

  1. Risk-adjusted capitation based on the Diagnostic Cost Group Model: an empirical evaluation with health survey information.

    PubMed Central

    Lamers, L M

    1999-01-01

    OBJECTIVE: To evaluate the predictive accuracy of the Diagnostic Cost Group (DCG) model using health survey information. DATA SOURCES/STUDY SETTING: Longitudinal data collected for a sample of members of a Dutch sickness fund. In the Netherlands the sickness funds provide compulsory health insurance coverage for the 60 percent of the population in the lowest income brackets. STUDY DESIGN: A demographic model and DCG capitation models are estimated by means of ordinary least squares, with an individual's annual healthcare expenditures in 1994 as the dependent variable. For subgroups based on health survey information, costs predicted by the models are compared with actual costs. Using stepwise regression procedures a subset of relevant survey variables that could improve the predictive accuracy of the three-year DCG model was identified. Capitation models were extended with these variables. DATA COLLECTION/EXTRACTION METHODS: For the empirical analysis, panel data of sickness fund members were used that contained demographic information, annual healthcare expenditures, and diagnostic information from hospitalizations for each member. In 1993, a mailed health survey was conducted among a random sample of 15,000 persons in the panel data set, with a 70 percent response rate. PRINCIPAL FINDINGS: The predictive accuracy of the demographic model improves when it is extended with diagnostic information from prior hospitalizations (DCGs). A subset of survey variables further improves the predictive accuracy of the DCG capitation models. The predictable profits and losses based on survey information for the DCG models are smaller than for the demographic model. Most persons with predictable losses based on health survey information were not hospitalized in the preceding year. CONCLUSIONS: The use of diagnostic information from prior hospitalizations is a promising option for improving the demographic capitation payment formula. This study suggests that diagnostic information from outpatient utilization is complementary to DCGs in predicting future costs. PMID:10029506
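
    The capitation models are ordinary least squares fits with an individual's annual expenditures as the dependent variable. A compact sketch on synthetic enrollee data, assuming NumPy (the ages, sex, DCG indicator, and cost model are all made up), shows how adding a DCG indicator raises predictive accuracy (R²):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 5000

    # Synthetic enrollees: age, sex, and a DCG flag from prior hospitalizations.
    age = rng.integers(18, 90, n)
    sex = rng.integers(0, 2, n)
    dcg = rng.integers(0, 2, n)                   # 1 = costly diagnostic cost group
    cost = 200 + 15 * age + 100 * sex + 2500 * dcg + rng.normal(0, 800, n)

    def ols_r2(X, y):
        """Fit OLS with intercept and return the in-sample R^2."""
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return 1 - (y - X @ beta).var() / y.var()

    print("demographic model R^2:", round(ols_r2(np.column_stack([age, sex]), cost), 3))
    print("demographic + DCG R^2:", round(ols_r2(np.column_stack([age, sex, dcg]), cost), 3))
    ```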

  2. Vibrational shape tracking of atomic force microscopy cantilevers for improved sensitivity and accuracy of nanomechanical measurements

    NASA Astrophysics Data System (ADS)

    Wagner, Ryan; Killgore, Jason P.; Tung, Ryan C.; Raman, Arvind; Hurley, Donna C.

    2015-01-01

    Contact resonance atomic force microscopy (CR-AFM) methods currently utilize the eigenvalues, or resonant frequencies, of an AFM cantilever in contact with a surface to quantify local mechanical properties. However, the cantilever eigenmodes, or vibrational shapes, also depend strongly on tip-sample contact stiffness. In this paper, we evaluate the potential of eigenmode measurements for improved accuracy and sensitivity of CR-AFM. We apply a recently developed, in situ laser scanning method to experimentally measure changes in cantilever eigenmodes as a function of tip-sample stiffness. Regions of maximum sensitivity for eigenvalues and eigenmodes are compared and found to occur at different values of contact stiffness. The results allow the development of practical guidelines for CR-AFM experiments, such as optimum laser spot positioning for different experimental conditions. These experiments provide insight into the complex system dynamics that can affect CR-AFM and lay a foundation for enhanced nanomechanical measurements with CR-AFM.

  3. Enhanced Ligand Sampling for Relative Protein–Ligand Binding Free Energy Calculations

    PubMed Central

    2016-01-01

    Free energy calculations are used to study how strongly potential drug molecules interact with their target receptors. The accuracy of these calculations depends on the accuracy of the molecular dynamics (MD) force field as well as proper sampling of the major conformations of each molecule. However, proper sampling of ligand conformations can be difficult when there are large barriers separating the major ligand conformations. An example of this is for ligands with an asymmetrically substituted phenyl ring, where the presence of protein loops hinders the proper sampling of the different ring conformations. These ring conformations become more difficult to sample when the size of the functional groups attached to the ring increases. The Adaptive Integration Method (AIM) has been developed, which adaptively changes the alchemical coupling parameter λ during the MD simulation so that conformations sampled at one λ can aid sampling at the other λ values. The Accelerated Adaptive Integration Method (AcclAIM) builds on AIM by lowering potential barriers for specific degrees of freedom at intermediate λ values. However, these methods may not work when there are very large barriers separating the major ligand conformations. In this work, we describe a modification to AIM that improves sampling of the different ring conformations, even when there is a very large barrier between them. This method combines AIM with conformational Monte Carlo sampling, giving improved convergence of ring populations and the resulting free energy. This method, called AIM/MC, is applied to study the relative binding free energy for a pair of ligands that bind to thrombin and a different pair of ligands that bind to aspartyl protease β-APP cleaving enzyme 1 (BACE1). These protein–ligand binding free energy calculations illustrate the improvements in conformational sampling and the convergence of the free energy compared to both AIM and AcclAIM. PMID:25906170
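
    A hedged toy illustration, in plain Python/NumPy, of why large-barrier ring conformations need dedicated Monte Carlo moves: a Metropolis walk on a double-well torsional potential stays trapped with small displacement moves but equilibrates once an occasional 180° "ring-flip" proposal is added. This mirrors the spirit of combining AIM with conformational MC, not the authors' actual simulation code:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def energy(phi, barrier=10.0):
        """Double-well torsional potential (kT units), minima at 0 and pi."""
        return barrier * np.sin(phi) ** 2

    def metropolis(n_steps, flip_prob):
        phi, visits_second_well = 0.0, 0
        for _ in range(n_steps):
            if rng.random() < flip_prob:
                trial = phi + np.pi              # ring-flip proposal jumps the barrier
            else:
                trial = phi + rng.normal(0, 0.2) # ordinary small displacement
            if rng.random() < np.exp(energy(phi) - energy(trial)):
                phi = trial % (2 * np.pi)
            visits_second_well += np.pi / 2 < phi < 3 * np.pi / 2
        return visits_second_well / n_steps

    print("second-well occupancy, small steps only:", metropolis(20000, 0.0))
    print("second-well occupancy, with flip moves:", metropolis(20000, 0.1))
    ```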

  4. Adaptive OFDM Radar Waveform Design for Improved Micro-Doppler Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Satyabrata

    Here we analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a rotating target having multiple scattering centers. The use of a frequency-diverse OFDM signal enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. We characterize the accuracy of micro-Doppler frequency estimation by computing the Cramer-Rao bound (CRB) on the angular-velocity estimate of the target. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations with respect to the signal-to-noise ratios, number of temporal samples, and number of OFDM subcarriers. We also analyzed numerically the improvement in estimation accuracy due to the adaptive waveform design. A grid-based maximum likelihood estimation technique is applied to evaluate the corresponding mean-squared error performance.

  5. Autonomous spatially adaptive sampling in experiments based on curvature, statistical error and sample spacing with applications in LDA measurements

    NASA Astrophysics Data System (ADS)

    Theunissen, Raf; Kadosh, Jesse S.; Allen, Christian B.

    2015-06-01

    Spatially varying signals are typically sampled by collecting uniformly spaced samples irrespective of the signal content. For signals with inhomogeneous information content, this leads to unnecessarily dense sampling in regions of low interest, insufficient sample density at important features, or both. A new adaptive sampling technique is presented that directs sample collection in proportion to local information content, adequately capturing short-period features while sparsely sampling less dynamic regions. The proposed method incorporates a data-adapted sampling strategy on the basis of signal curvature, sample space-filling, variable experimental uncertainty and iterative improvement. Numerical assessment has indicated a reduction in the number of samples required to achieve a predefined overall uncertainty level while improving local accuracy for important features. The potential of the proposed method has been further demonstrated on the basis of Laser Doppler Anemometry experiments examining the wake behind a NACA0012 airfoil and the boundary layer characterisation of a flat plate.
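
    A minimal one-dimensional sketch of such an adaptive loop, assuming only NumPy: local curvature is estimated from second differences of the current samples, and each new sample is placed in the interval with the largest curvature-times-width score (the paper's experimental-uncertainty weighting is omitted):

    ```python
    import numpy as np

    f = lambda x: np.tanh(20 * (x - 0.5))        # sharp feature near x = 0.5
    x = np.linspace(0.0, 1.0, 7)                 # coarse initial uniform samples

    for _ in range(20):
        y = f(x)
        curv = np.abs(np.diff(y, 2))             # curvature proxy at interior points
        c = np.pad(curv, 1)                      # per-point curvature, length len(x)
        interval_c = np.maximum(c[:-1], c[1:])   # curvature per interval
        score = (interval_c + 1e-6) * np.diff(x) # weight by interval width (space-filling)
        i = int(np.argmax(score))
        x = np.sort(np.append(x, 0.5 * (x[i] + x[i + 1])))

    print("samples within 0.1 of the sharp feature:", int(np.sum(np.abs(x - 0.5) < 0.1)))
    ```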

  6. Multi-element RIMS Analysis of Genesis Solar Wind Collectors

    NASA Astrophysics Data System (ADS)

    Veryovkin, I. V.; Tripa, C. E.; Zinovev, A. V.; King, B. V.; Pellin, M. J.; Burnett, D. S.

    2009-12-01

    The samples of Solar Wind (SW) delivered by the NASA Genesis mission present significant challenges for surface analytical techniques, in part due to severe terrestrial contamination of the samples on reentry, in part due to the ultra-shallow and diffused ion implants in the SW collector materials. We are performing measurements of metallic elements in the Genesis collectors using Resonance Ionization Mass Spectrometry (RIMS), an ultra-sensitive analytical method capable of detecting SW in samples with lateral dimensions of only a few mm and at concentrations from above one ppm to below one ppt. Since our last report at the 2008 AGU Fall Meeting, we have (a) developed and tested new resonance ionization schemes permitting simultaneous measurements of up to three elements (Ca, Cr, and Mg), and (b) improved the reproducibility and accuracy of our RIMS analyses for SW-like samples (i.e. shallow ion implants) by developing and implementing an optimized set of new analytical protocols. This is important since the quality of scientific results from the Genesis mission critically depends on the accuracy of analytical techniques. In this work, we report on simultaneous RIMS measurements of Ca and Cr performed on two silicon SW collector samples (#60179 and #60476). First, we conducted test experiments with 3×10^13 at/cm^2 52Cr and 44Ca implants in silicon to evaluate the accuracy of our quantitative analyses. Implant fluences were measured by RIMS to be 2.73×10^13 and 2.71×10^13 at/cm^2 for 52Cr and 44Ca, respectively, which corresponds to an accuracy of ≈10%. Using the same implanted wafer as a reference, we conducted RIMS analyses of the Genesis samples: 3 spots on #60179 and 4 spots on #60476. The elemental SW fluences expected for Cr and Ca are 2.95×10^10 and 1.33×10^11 at/cm^2, respectively. Our measurements of 52Cr yielded 3.0±0.6×10^11 at/cm^2 and 5.1±4.1×10^10 at/cm^2 for #60179 and #60476, respectively. For 40Ca, SW fluences of 1.39±0.70×10^11 at/cm^2 in #60179 and 3.6±2.5×10^13 at/cm^2 in #60476 were measured. Thus, only one element in each sample showed reasonable agreement with the expected values: Ca in #60179 and Cr in #60476. However, the cleaning procedures applied to these samples were different: #60179 was only megasonicated in ultra-pure water, while #60476 was subjected to longer megasonication and an RCA cleaning procedure involving multiple rinsing steps with acid solutions. It is apparent that the surface contamination and cleaning procedures influenced the results of our measurements. We will present these experimental results and discuss procedures - including improved sample cleaning, dual-beam high-resolution sputter depth profiling from the front and back sides of the sample, and modeling of near-surface impurity transport - aimed at improving the accuracy of determination of elemental abundances by ion-sputtering-based analytical methods. This work is supported by NASA through grant NNH08AH761 and by UChicago Argonne, LLC, under contract No. DE-AC02-06CH11357.

  7. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures.

    PubMed

    Neylon, J; Sheng, K; Yu, V; Chen, Q; Low, D A; Kupelian, P; Santhanam, A

    2014-10-01

    Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.

  8. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neylon, J., E-mail: jneylon@mednet.ucla.edu; Sheng, K.; Yu, V.

    Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. Results: The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. Conclusions: The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.

  9. Nonuniform depth grids in parabolic equation solutions.

    PubMed

    Sanders, William M; Collins, Michael D

    2013-04-01

    The parabolic wave equation is solved using a finite-difference solution in depth that involves a nonuniform grid. The depth operator is discretized using Galerkin's method with asymmetric hat functions. Examples are presented to illustrate that this approach can be used to improve efficiency for problems in ocean acoustics and seismo-acoustics. For shallow water problems, accuracy is sensitive to the precise placement of the ocean bottom interface. This issue is often addressed with the inefficient approach of using a fine grid spacing over all depth. Efficiency may be improved by using a relatively coarse grid with nonuniform sampling to precisely position the interface. Efficiency may also be improved by reducing the sampling in the sediment and in an absorbing layer that is used to truncate the computational domain. Nonuniform sampling may also be used to improve the implementation of a single-scattering approximation for sloping fluid-solid interfaces.
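
    A small NumPy sketch of the grid-placement idea only (not the Galerkin discretization itself): cluster depth points tightly around the ocean-bottom interface and sample the sediment coarsely; all depths and point counts below are illustrative:

    ```python
    import numpy as np

    # Nonuniform depth grid: fine near the ocean bottom interface, coarse in
    # the sediment and absorbing layer (all depths in meters, illustrative).
    water_depth, total_depth = 200.0, 1000.0

    water = np.linspace(0.0, water_depth - 5, 80, endpoint=False)
    fine = np.arange(water_depth - 5, water_depth + 5, 0.1)   # resolve interface
    sediment = np.linspace(water_depth + 5, total_depth, 60)  # coarse below

    z = np.unique(np.concatenate([water, fine, sediment]))
    print("grid points:", z.size,
          "min/max spacing:", round(np.diff(z).min(), 3), round(np.diff(z).max(), 2))
    ```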

  10. On the design of experiments for determining ternary mixture free energies from static light scattering data using a nonlinear partial differential equation

    PubMed Central

    Wahle, Chris W.; Ross, David S.; Thurston, George M.

    2012-01-01

    We mathematically design sets of static light scattering experiments to provide for model-independent measurements of ternary liquid mixing free energies to a desired level of accuracy. A parabolic partial differential equation (PDE), linearized from the full nonlinear PDE [D. Ross, G. Thurston, and C. Lutzer, J. Chem. Phys. 129, 064106 (2008); doi:10.1063/1.2937902], describes how data noise affects the free energies to be inferred. The linearized PDE creates a net of spacelike characteristic curves and orthogonal, timelike curves in the composition triangle, and this net governs diffusion of information coming from light scattering measurements to the free energy. Free energy perturbations induced by a light scattering perturbation diffuse along the characteristic curves and towards their concave sides, with a diffusivity that is proportional to the local characteristic curvature radius. Consequently, static light scattering can determine mixing free energies in regions with convex characteristic curve boundaries, given suitable boundary data. The dielectric coefficient is a Lyapunov function for the dynamical system whose trajectories are PDE characteristics. Information diffusion is heterogeneous and system-dependent in the composition triangle, since the characteristics depend on molecular interactions and are tangent to liquid-liquid phase separation coexistence loci at critical points. We find scaling relations that link free energy accuracy, total measurement time, the number of samples, and the interpolation method, and identify the key quantitative tradeoffs between devoting time to measuring more samples, or fewer samples more accurately. For each total measurement time there are optimal sample numbers beyond which more will not improve free energy accuracy. We estimate the degree to which many-point interpolation and optimized measurement concentrations can improve accuracy and save time. For a modest light scattering setup, a sample calculation shows that less than two minutes of measurement time is, in principle, sufficient to determine the dimensionless mixing free energy of a non-associating ternary mixture to within an integrated error norm of 0.003. These findings establish a quantitative framework for designing light scattering experiments to determine the Gibbs free energy of ternary liquid mixtures. PMID:22830693

  11. HLA imputation in an admixed population: An assessment of the 1000 Genomes data as a training set.

    PubMed

    Nunes, Kelly; Zheng, Xiuwen; Torres, Margareth; Moraes, Maria Elisa; Piovezan, Bruno Z; Pontes, Gerlandia N; Kimura, Lilian; Carnavalli, Juliana E P; Mingroni Netto, Regina C; Meyer, Diogo

    2016-03-01

    Methods to impute HLA alleles based on dense single nucleotide polymorphism (SNP) data provide a valuable resource to association studies and evolutionary investigation of the MHC region. The availability of appropriate training sets is critical to the accuracy of HLA imputation, and the inclusion of samples with various ancestries is an important pre-requisite in studies of admixed populations. We assess the accuracy of HLA imputation using 1000 Genomes Project data as a training set, applying it to a highly admixed Brazilian population, the Quilombos from the state of São Paulo. To assess accuracy, we compared imputed and experimentally determined genotypes for 146 samples at 4 HLA classical loci. We found imputation accuracies of 82.9%, 81.8%, 94.8% and 86.6% for HLA-A, -B, -C and -DRB1 respectively (two-field resolution). Accuracies were improved when we included a subset of Quilombo individuals in the training set. We conclude that the 1000 Genomes data is a valuable resource for construction of training sets due to the diversity of ancestries and the potential for a large overlap of SNPs with the target population. We also show that tailoring training sets to features of the target population substantially enhances imputation accuracy. Copyright © 2016 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.

  12. Differentiating neoplastic from benign lesions of the pancreas: translational techniques.

    PubMed

    Khalid, Asif

    2009-11-01

    There has been substantial recent progress in our ability to image and sample the pancreas leading to the improved recognition of benign and premalignant conditions of the pancreas such as autoimmune pancreatitis (AIP) and mucinous lesions (mucinous cystic neoplasms [MCN] and intraductal papillary mucinous neoplasms [IPMN]), respectively. Clinically relevant and difficult situations that continue to be faced in this context include differentiating MCN and IPMN from nonmucinous pancreatic cysts, the early detection of malignant degeneration in MCN and IPMN, and accurate differentiation between pancreatic cancer and inflammatory masses, especially AIP. These challenges arise primarily due to the less than perfect sensitivity for malignancy utilizing cytological samples obtained via EUS and ERCP. Aspirates from pancreatic cysts are often paucicellular further limiting the accuracy of cytology. One approach to improve the diagnostic yield from these very small samples is through the use of molecular techniques. Because the development of pancreatic cancer and malignant degeneration in MCN and IPMN is associated with well studied genetic insults including oncogene activation (eg, k-ras), tumor suppressor gene losses (eg, p53, p16, and DPC4), and genome maintenance gene mutations (eg, BRCA2 and telomerase), detecting these molecular abnormalities may aid in improving our diagnostic accuracy. A number of studies have shown the utility of testing clinical samples from pancreatic lesions and bile duct strictures for these molecular markers of malignancy to differentiate between cancer and inflammation. The information from these studies will be discussed with emphasis on how to use this information in clinical practice.

  13. Classification of Tree Species in Overstorey Canopy of Subtropical Forest Using QuickBird Images.

    PubMed

    Lin, Chinsu; Popescu, Sorin C; Thomson, Gavin; Tsogt, Khongor; Chang, Chein-I

    2015-01-01

    This paper proposes a supervised classification scheme to identify 40 tree species (2 coniferous, 38 broadleaf) belonging to 22 families and 36 genera in high spatial resolution QuickBird multispectral images (HMS). Overall kappa coefficient (OKC) and species conditional kappa coefficients (SCKC) were used to evaluate classification performance in training samples and to estimate accuracy and uncertainty in test samples. Baseline classification performance using HMS images and vegetation index (VI) images was evaluated with OKC values of 0.58 and 0.48, respectively, but performance improved significantly (up to 0.99) when used in combination with an HMS spectral-spatial texture image (SpecTex). One of the 40 species had very high conditional kappa coefficient performance (SCKC ≥ 0.95) using 4-band HMS and 5-band VIs images, but only five species had lower performance (0.68 ≤ SCKC ≤ 0.94) using the SpecTex images. When SpecTex images were combined with a Visible Atmospherically Resistant Index (VARI), there was a significant improvement in performance in the training samples. The same level of improvement could not be replicated in the test samples, indicating that a high degree of uncertainty exists in species classification accuracy, which may be due to individual tree crown density, leaf greenness (inter-canopy gaps), and noise in the background environment (intra-canopy gaps). These factors increase uncertainty in the spectral texture features and therefore represent potential problems when using pixel-based classification techniques for multi-species classification.
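
    For reference, the overall kappa coefficient is available in scikit-learn, and a per-class conditional kappa can be computed from the confusion matrix with the standard definition kappa_i = (p_ii/p_i. - p_.i)/(1 - p_.i); the tiny label vectors below are invented for illustration:

    ```python
    import numpy as np
    from sklearn.metrics import cohen_kappa_score, confusion_matrix

    y_true = np.array(["oak", "oak", "fir", "fir", "fir", "elm", "elm", "oak"])
    y_pred = np.array(["oak", "fir", "fir", "fir", "elm", "elm", "elm", "oak"])

    print("overall kappa:", round(cohen_kappa_score(y_true, y_pred), 3))

    # Per-species conditional kappa: agreement for one class beyond chance,
    # kappa_i = (p_ii / p_i. - p_.i) / (1 - p_.i), with p the proportion matrix.
    labels = sorted(set(y_true))
    P = confusion_matrix(y_true, y_pred, labels=labels) / len(y_true)
    for i, lab in enumerate(labels):
        row, col = P[i].sum(), P[:, i].sum()
        print(lab, "conditional kappa:", round((P[i, i] / row - col) / (1 - col), 3))
    ```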

  14. Improvements of the Vis-NIRS Model in the Prediction of Soil Organic Matter Content Using Spectral Pretreatments, Sample Selection, and Wavelength Optimization

    NASA Astrophysics Data System (ADS)

    Lin, Z. D.; Wang, Y. B.; Wang, R. J.; Wang, L. S.; Lu, C. P.; Zhang, Z. Y.; Song, L. T.; Liu, Y.

    2017-07-01

    A total of 130 topsoil samples collected from Guoyang County, Anhui Province, China, were used to establish a Vis-NIR model for the prediction of organic matter content (OMC) in lime concretion black soils. Different spectral pretreatments were applied for minimizing the irrelevant and useless information of the spectra and increasing the spectra correlation with the measured values. Subsequently, the Kennard-Stone (KS) method and sample set partitioning based on joint x-y distances (SPXY) were used to select the training set. Successive projection algorithm (SPA) and genetic algorithm (GA) were then applied for wavelength optimization. Finally, the principal component regression (PCR) model was constructed, in which the optimal number of principal components was determined using the leave-one-out cross validation technique. The results show that the combination of the Savitzky-Golay (SG) filter for smoothing and multiplicative scatter correction (MSC) can eliminate the effect of noise and baseline drift; the SPXY method is preferable to KS in the sample selection; both the SPA and the GA can significantly reduce the number of wavelength variables and favorably increase the accuracy, especially GA, which greatly improved the prediction accuracy of soil OMC with Rcc, RMSEP, and RPD up to 0.9316, 0.2142, and 2.3195, respectively.
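
    A compact sketch of the preprocessing-plus-PCR chain on synthetic spectra, assuming SciPy and scikit-learn; for brevity, KS/SPXY sample selection and SPA/GA wavelength optimization are replaced by a plain split and the full wavelength set:

    ```python
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(4)
    wav = np.linspace(400, 2400, 200)             # Vis-NIR wavelengths, nm
    omc = rng.uniform(1, 4, 130)                  # organic matter content, %
    spectra = (omc[:, None] * np.exp(-((wav - 1400) / 300) ** 2)
               + rng.normal(0, 0.02, (130, 200))) # synthetic absorbance spectra

    # Savitzky-Golay smoothing, then multiplicative scatter correction (MSC).
    smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
    ref = smoothed.mean(axis=0)
    msc = np.empty_like(smoothed)
    for i, s in enumerate(smoothed):
        b, a = np.polyfit(ref, s, 1)              # fit s ~ a + b * ref
        msc[i] = (s - a) / b

    # Principal component regression; the component count would be chosen by
    # leave-one-out cross-validation in the paper.
    pcr = make_pipeline(PCA(n_components=8), LinearRegression())
    pcr.fit(msc[:100], omc[:100])
    print("R^2 on held-out samples:", round(pcr.score(msc[100:], omc[100:]), 3))
    ```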

  15. Accurate Molecular Orientation Analysis Using Infrared p-Polarized Multiple-Angle Incidence Resolution Spectrometry (pMAIRS) Considering the Refractive Index of the Thin Film Sample.

    PubMed

    Shioya, Nobutaka; Shimoaka, Takafumi; Murdey, Richard; Hasegawa, Takeshi

    2017-06-01

    Infrared (IR) p-polarized multiple-angle incidence resolution spectrometry (pMAIRS) is a powerful tool for analyzing molecular orientation in an organic thin film. In particular, pMAIRS works powerfully for a thin film with a highly rough surface, irrespective of the degree of crystallinity. Recently, the optimal experimental conditions have been comprehensively established, which has largely improved the accuracy of the analytical results. Nevertheless, some unresolved matters remain. A structurally isotropic sample, for example, yields different peak intensities in the in-plane and out-of-plane spectra. In the present study, this effect is shown to be due to the refractive index of the sample film, and a correction factor has been developed using rigorous theoretical methods. As a result, with the use of the correction factor, organic materials having atypical refractive indices such as perfluoroalkyl compounds (n = 1.35) and fullerene (n = 1.83) can be analyzed with an accuracy comparable to that for a compound having a normal refractive index of approximately 1.55. With this improved technique, an isotropic structure can also be discriminated from an oriented sample at the magic angle of 54.7°.

  16. Using nearly full-genome HIV sequence data improves phylogeny reconstruction in a simulated epidemic

    PubMed Central

    Yebra, Gonzalo; Hodcroft, Emma B.; Ragonnet-Cronin, Manon L.; Pillay, Deenan; Brown, Andrew J. Leigh; Fraser, Christophe; Kellam, Paul; de Oliveira, Tulio; Dennis, Ann; Hoppe, Anne; Kityo, Cissy; Frampton, Dan; Ssemwanga, Deogratius; Tanser, Frank; Keshani, Jagoda; Lingappa, Jairam; Herbeck, Joshua; Wawer, Maria; Essex, Max; Cohen, Myron S.; Paton, Nicholas; Ratmann, Oliver; Kaleebu, Pontiano; Hayes, Richard; Fidler, Sarah; Quinn, Thomas; Novitsky, Vladimir; Haywards, Andrew; Nastouli, Eleni; Morris, Steven; Clark, Duncan; Kozlakidis, Zisis

    2016-01-01

    HIV molecular epidemiology studies analyse viral pol gene sequences due to their availability, but whole genome sequencing allows the use of other genes. We aimed to determine which gene(s) provide(s) the best approximation to the real phylogeny by analysing a simulated epidemic (created as part of the PANGEA_HIV project) with a known transmission tree. We sub-sampled a simulated dataset of 4662 sequences into different combinations of genes (gag-pol-env, gag-pol, gag, pol, env and partial pol) and sampling depths (100%, 60%, 20% and 5%), generating 100 replicates for each case. We built maximum-likelihood trees for each combination using RAxML (GTR + Γ), and compared their topologies to the corresponding true tree’s using CompareTree. The accuracy of the trees was significantly proportional to the length of the sequences used, with the gag-pol-env datasets showing the best performance and gag and partial pol sequences showing the worst. The lowest sampling depths (20% and 5%) greatly reduced the accuracy of tree reconstruction and showed high variability among replicates, especially when using the shortest gene datasets. In conclusion, using longer sequences derived from nearly whole genomes will improve the reliability of phylogenetic reconstruction. With low sample coverage, results can be highly variable, particularly when based on short sequences. PMID:28008945

  17. Using nearly full-genome HIV sequence data improves phylogeny reconstruction in a simulated epidemic.

    PubMed

    Yebra, Gonzalo; Hodcroft, Emma B; Ragonnet-Cronin, Manon L; Pillay, Deenan; Brown, Andrew J Leigh

    2016-12-23

    HIV molecular epidemiology studies analyse viral pol gene sequences due to their availability, but whole genome sequencing allows the use of other genes. We aimed to determine which gene(s) provide(s) the best approximation to the real phylogeny by analysing a simulated epidemic (created as part of the PANGEA_HIV project) with a known transmission tree. We sub-sampled a simulated dataset of 4662 sequences into different combinations of genes (gag-pol-env, gag-pol, gag, pol, env and partial pol) and sampling depths (100%, 60%, 20% and 5%), generating 100 replicates for each case. We built maximum-likelihood trees for each combination using RAxML (GTR + Γ), and compared their topologies to the corresponding true tree's using CompareTree. The accuracy of the trees was significantly proportional to the length of the sequences used, with the gag-pol-env datasets showing the best performance and gag and partial pol sequences showing the worst. The lowest sampling depths (20% and 5%) greatly reduced the accuracy of tree reconstruction and showed high variability among replicates, especially when using the shortest gene datasets. In conclusion, using longer sequences derived from nearly whole genomes will improve the reliability of phylogenetic reconstruction. With low sample coverage, results can be highly variable, particularly when based on short sequences.

  18. Building Extraction Based on an Optimized Stacked Sparse Autoencoder of Structure and Training Samples Using LIDAR DSM and Optical Images.

    PubMed

    Yan, Yiming; Tan, Zhichao; Su, Nan; Zhao, Chunhui

    2017-08-24

    In this paper, a building extraction method is proposed based on a stacked sparse autoencoder with an optimized structure and optimized training samples. Building extraction plays an important role in urban construction and planning. However, several negative effects reduce the accuracy of extraction, such as resolution limitations, poor correction and terrain influence. Data collected by multiple sensors, such as light detection and ranging (LIDAR) and optical sensors, are used to improve the extraction. Using the digital surface model (DSM) obtained from LIDAR data together with optical images, traditional methods can improve the extraction to a certain extent, but they have shortcomings in feature extraction. Since a stacked sparse autoencoder (SSAE) neural network can learn the essential characteristics of the data in depth, an SSAE was employed to extract buildings from the combined DSM data and optical imagery. A strategy for setting the SSAE network structure is given, together with an approach for choosing the number and proportion of training samples for better training of the SSAE. The optical data and DSM were combined as input to the optimized SSAE, and after training on the optimized samples, the network can extract buildings with high accuracy and good robustness.
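
    A minimal single-layer sparse autoencoder in PyTorch gives the flavor of the approach (the layer sizes, L1 sparsity penalty, and fused optical-plus-DSM input vector are illustrative assumptions, not the paper's configuration); stacking and the final classifier are indicated in comments:

    ```python
    import torch
    from torch import nn

    torch.manual_seed(0)

    # Fused input: per-pixel patch features, e.g. an 8x8 optical patch (64
    # values) plus one LIDAR DSM height value (all random here).
    x = torch.rand(256, 65)

    class SparseAE(nn.Module):
        def __init__(self, n_in, n_hidden):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
            self.dec = nn.Linear(n_hidden, n_in)

        def forward(self, x):
            h = self.enc(x)
            return self.dec(h), h

    ae = SparseAE(65, 32)
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for step in range(200):
        recon, h = ae(x)
        # Reconstruction error plus an L1 sparsity penalty on hidden activations.
        loss = nn.functional.mse_loss(recon, x) + 1e-3 * h.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stacking: train a second SparseAE on ae.enc(x).detach(), then fine-tune
    # the whole stack with a building/non-building classifier on top.
    codes = ae.enc(x).detach()
    print("layer-1 code shape:", tuple(codes.shape))
    ```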

  19. 60 seconds to survival: A pilot study of a disaster triage video game for prehospital providers.

    PubMed

    Cicero, Mark X; Whitfill, Travis; Munjal, Kevin; Madhok, Manu; Diaz, Maria Carmen G; Scherzer, Daniel J; Walsh, Barbara M; Bowen, Angela; Redlener, Michael; Goldberg, Scott A; Symons, Nadine; Burkett, James; Santos, Joseph C; Kessler, David; Barnicle, Ryan N; Paesano, Geno; Auerbach, Marc A

    2017-01-01

    Disaster triage training for emergency medical service (EMS) providers is not standardized. Simulation training is costly and time-consuming. In contrast, educational video games enable low-cost and more time-efficient standardized training. We hypothesized that players of the video game "60 Seconds to Survival" (60S) would have greater improvements in disaster triage accuracy compared to control subjects who did not play 60S. Participants recorded their demographics and highest EMS training level and were randomized to play 60S (intervention) or serve as controls. At baseline, all participants completed a live school-shooting simulation in which manikins and standardized patients depicted 10 adult and pediatric victims. The intervention group then played 60S at least three times over the course of 13 weeks (time 2). Players triaged 12 patients in three scenarios (school shooting, house fire, tornado), and received in-game performance feedback. At time 2, the same live simulation was conducted for all participants. Controls had no disaster training during the study. The main outcome was improvement in triage accuracy in live simulations from baseline to time 2. Physicians and EMS providers predetermined the expected triage level (RED/YELLOW/GREEN/BLACK) via a modified Delphi method. There were 26 participants in the intervention group and 21 in the control group. There was no difference in gender, level of training, or years of EMS experience (median 5.5 years intervention, 3.5 years control, p = 0.49) between the groups. At baseline, both groups demonstrated a median triage accuracy of 80 percent (IQR 70-90 percent, p = 0.457). At time 2, the intervention group had a significant improvement from baseline (median accuracy = 90 percent [IQR: 80-90 percent], p = 0.005), while the control group did not (median accuracy = 80 percent [IQR: 80-95 percent], p = 0.174). However, the mean improvement from baseline did not differ significantly between the two groups (difference = 6.5, p = 0.335). In short, the intervention group improved significantly from baseline to time 2 while the control group did not, but the between-group difference in improvement was not significant; these results may be due to the small sample size. Future directions include assessment of the game's effect on triage accuracy with a larger, multisite cohort and iterative development to improve 60S.

  20. Improved mass resolution and mass accuracy in TOF-SIMS spectra and images using argon gas cluster ion beams.

    PubMed

    Shon, Hyun Kyong; Yoon, Sohee; Moon, Jeong Hee; Lee, Tae Geol

    2016-06-09

    The popularity of argon gas cluster ion beams (Ar-GCIB) as primary ion beams in time-of-flight secondary ion mass spectrometry (TOF-SIMS) has increased because the molecular ions of large organic molecules and biomolecules can be detected with less damage to the sample surfaces. However, Ar-GCIB is limited by poor mass resolution as well as poor mass accuracy. The inferior mass resolution of a TOF-SIMS spectrum obtained using Ar-GCIB, compared to one obtained with a bismuth liquid metal cluster ion beam and other sources, makes it difficult to identify unknown peaks because of mass interference from neighboring peaks. In this study, however, the authors demonstrate improved mass resolution in TOF-SIMS using Ar-GCIB through the delayed extraction of secondary ions, a method typically used in TOF mass spectrometry to increase mass resolution. As for poor mass accuracy, although mass calibration using internal peaks of low mass such as hydrogen and carbon is a common approach in TOF-SIMS, it is unsuited to the present study because of the disappearance of the low-mass peaks in the delayed extraction mode. To resolve this issue, external mass calibration, another regularly used method in TOF-MS, was adapted to enhance mass accuracy in the spectra and images generated by TOF-SIMS using Ar-GCIB in the delayed extraction mode. Through spectral analyses of a peptide mixture and a tryptic digest of bovine serum albumin, along with image analyses of rat brain samples, the authors demonstrate for the first time the enhancement of mass resolution and mass accuracy for the analysis of large biomolecules in TOF-SIMS using Ar-GCIB with delayed extraction and external mass calibration.

  1. Improved formula for continuous-wave measurements of ultrasonic phase velocity

    NASA Technical Reports Server (NTRS)

    Chern, E. J.; Cantrell, J. H., Jr.; Heyman, J. S.

    1981-01-01

    An improved formula for continuous-wave ultrasonic phase velocity measurements using contact transducers is derived from the transmission line theory. The effect of transducer-sample coupling bonds is considered for measurements of solid samples even though it is often neglected because of the difficulty of accurately determining the bond thickness. Computer models show that the present formula is more accurate than previous expressions. Laboratory measurements using contacting transducers with the present formula are compared to measurements using noncontacting (hence effectively correction-free) capacitive transducers. The results of the experiments verify the validity and accuracy of the new formula.

  2. Detecting the Water-soluble Chloride Distribution of Cement Paste in a High-precision Way.

    PubMed

    Chang, Honglei; Mu, Song

    2017-11-21

    To improve the accuracy of the chloride distribution along the depth of cement paste under cyclic wet-dry conditions, a new method is proposed to obtain a high-precision chloride profile. First, paste specimens are molded, cured, and exposed to cyclic wet-dry conditions. Then, powder samples at different specimen depths are ground when the exposure age is reached. Finally, the water-soluble chloride content is determined using a silver nitrate titration method, and chloride profiles are plotted. The key to improving the accuracy of the chloride distribution along the depth is to exclude the error in the powderization, which is the most critical step for testing the distribution of chloride. Based on the above concept, the grinding method in this protocol can be used to grind powder samples automatically, layer by layer, from the surface inward; notably, a very thin grinding thickness (less than 0.5 mm) with a minimum error of less than 0.04 mm can be obtained. The chloride profile obtained by this method better reflects the chloride distribution in specimens, which helps researchers to capture distribution features that are often overlooked. Furthermore, this method can be applied to studies in the field of cement-based materials that require high chloride distribution accuracy.

  3. Efficient alignment-free DNA barcode analytics

    PubMed Central

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-01-01

    Background In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens possibility for accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. Results New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.) On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Conclusion Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding. PMID:19900305
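
    The spectrum representation is just a fixed-length vector of k-mer counts. A self-contained sketch, with toy barcode strings, classifies a query by cosine similarity of normalized 3-mer spectra (the real methods use optimized k and faster counting):

    ```python
    import numpy as np
    from itertools import product

    K = 3
    KMERS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

    def spectrum(seq):
        """Fixed-length k-mer count vector (the alignment-free representation)."""
        v = np.zeros(len(KMERS))
        for i in range(len(seq) - K + 1):
            v[KMERS[seq[i:i + K]]] += 1
        return v / np.linalg.norm(v)

    reference = {                                 # toy species barcodes
        "species_A": "ACGTACGTACGGACTTACGT",
        "species_B": "TTGGCCAATTGGCCAATTGG",
    }
    query = "ACGTACGAACGGACTTACGA"

    # Normalized spectra make the dot product a cosine similarity.
    sims = {name: spectrum(seq) @ spectrum(query) for name, seq in reference.items()}
    print("assigned to:", max(sims, key=sims.get), sims)
    ```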

  4. Improving the accuracy of electronic moisture meters for runner-type peanuts

    USDA-ARS?s Scientific Manuscript database

    Runner-type peanut kernel moisture content (MC) is measured periodically during curing and post harvest processing with electronic moisture meters for marketing and quality control. MC is predicted for 250 g samples of kernels with a mathematical function from measurements of various physical prope...

  5. Dual THz comb spectroscopy

    NASA Astrophysics Data System (ADS)

    Yasui, Takeshi

    2017-08-01

    Optical frequency combs are innovative tools for broadband spectroscopy because a series of comb modes can serve as frequency markers that are traceable to a microwave frequency standard. However, a mode distribution that is too discrete limits the spectral sampling interval to the mode frequency spacing even though individual mode linewidth is sufficiently narrow. Here, using a combination of a spectral interleaving and dual-comb spectroscopy in the terahertz (THz) region, we achieved a spectral sampling interval equal to the mode linewidth rather than the mode spacing. The spectrally interleaved THz comb was realized by sweeping the laser repetition frequency and interleaving additional frequency marks. In low-pressure gas spectroscopy, we achieved an improved spectral sampling density of 2.5 MHz and enhanced spectral accuracy of 8.39 × 10-7 in the THz region. The proposed method is a powerful tool for simultaneously achieving high resolution, high accuracy, and broad spectral coverage in THz spectroscopy.

  6. Integrated Strategy Improves the Prediction Accuracy of miRNA in Large Dataset

    PubMed Central

    Lipps, David; Devineni, Sree

    2016-01-01

    MiRNAs are short non-coding RNAs of about 22 nucleotides, which play critical roles in gene expression regulation. The biogenesis of miRNAs is largely determined by the sequence and structural features of their parental RNA molecules. Based on these features, multiple computational tools have been developed to predict whether RNA transcripts contain miRNAs or not. Although very successful, these predictors have started to face multiple challenges in recent years. Many predictors were optimized using datasets of hundreds of miRNA samples. The sizes of these datasets are much smaller than the number of known miRNAs. Consequently, the prediction accuracy of these predictors on large datasets is unknown and needs to be re-tested. In addition, many predictors were optimized for either high sensitivity or high specificity. These optimization strategies may bring serious limitations in applications. Moreover, to meet continuously rising expectations on these computational tools, improving the prediction accuracy becomes extremely important. In this study, a meta-predictor, mirMeta, was developed by integrating a set of non-linear transformations with a meta-strategy. More specifically, the outputs of five individual predictors were first preprocessed using non-linear transformations, and then fed into an artificial neural network to make the meta-prediction. The prediction accuracy of the meta-predictor was validated using both multi-fold cross-validation and an independent dataset. The final accuracy of the meta-predictor on the newly designed large dataset is improved by 7%, to 93%. The meta-predictor is also shown to be less dependent on the datasets, and to achieve a better balance between sensitivity and specificity. This study is important in two ways: first, it shows that the combination of non-linear transformations and artificial neural networks improves the prediction accuracy of individual predictors; second, a new miRNA predictor with significantly improved prediction accuracy is developed for the community for identifying novel miRNAs and the complete set of miRNAs. Source code is available at: https://github.com/xueLab/mirMeta PMID:28002428
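
    In outline, the meta-strategy is: nonlinearly transform each base predictor's score, then feed the transformed vector to a small neural network. The sketch below, assuming scikit-learn, simulates five base predictor outputs and uses a logit transform; both are stand-ins for the actual predictors and transformations:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(5)
    n = 2000
    y = rng.integers(0, 2, n)                     # 1 = true miRNA hairpin

    # Simulated scores from five individual predictors (better than chance).
    scores = np.clip(0.5 + 0.25 * (2 * y[:, None] - 1)
                     + 0.2 * rng.standard_normal((n, 5)), 1e-3, 1 - 1e-3)

    # Non-linear transformation of each predictor's output before meta-learning.
    features = np.log(scores / (1 - scores))      # logit transform

    meta = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    meta.fit(features[:1500], y[:1500])
    print("meta-predictor accuracy:", round(meta.score(features[1500:], y[1500:]), 3))
    ```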

  7. Improved sample preparation to determine acrylamide in difficult matrixes such as chocolate powder, cocoa, and coffee by liquid chromatography tandem mass spectroscopy.

    PubMed

    Delatour, Thierry; Périsset, Adrienne; Goldmann, Till; Riediker, Sonja; Stadler, Richard H

    2004-07-28

    An improved sample preparation (extraction and cleanup) is presented that enables the quantification of low levels of acrylamide in difficult matrixes, including soluble chocolate powder, cocoa, coffee, and coffee surrogate. Final analysis is done by isotope-dilution liquid chromatography-electrospray ionization tandem mass spectrometry (LC-MS/MS) using d3-acrylamide as internal standard. Sample pretreatment essentially encompasses (a) protein precipitation with Carrez I and II solutions, (b) extraction of the analyte into ethyl acetate, and (c) solid-phase extraction on a Multimode cartridge. The stability of acrylamide in final extracts and in certain commercial foods and beverages is also reported. This approach provided good performance in terms of linearity, accuracy and precision. Full validation was conducted in soluble chocolate powder, achieving a decision limit (CCalpha) and detection capability (CCbeta) of 9.2 and 12.5 microg/kg, respectively. The method was extended to the analysis of acrylamide in various foodstuffs such as mashed potatoes, crisp bread, and butter biscuit and cookies. Furthermore, the accuracy of the method is demonstrated by the results obtained in three inter-laboratory proficiency tests. Copyright 2004 American Chemical Society

  8. Accuracy of human papillomavirus testing on self-collected versus clinician-collected samples: a meta-analysis.

    PubMed

    Arbyn, Marc; Verdoodt, Freija; Snijders, Peter J F; Verhoef, Viola M J; Suonio, Eero; Dillner, Lena; Minozzi, Silvia; Bellisario, Cristina; Banzi, Rita; Zhao, Fang-Hui; Hillemanns, Peter; Anttila, Ahti

    2014-02-01

    Screening for human papillomavirus (HPV) infection is more effective in reducing the incidence of cervical cancer than screening using Pap smears. Moreover, HPV testing can be done on a vaginal sample self-taken by a woman, which offers an opportunity to improve screening coverage. However, the clinical accuracy of HPV testing on self-samples is not well-known. We assessed whether HPV testing on self-collected samples is equivalent to HPV testing on samples collected by clinicians. We identified relevant studies through a search of PubMed, Embase, and CENTRAL. Studies were eligible for inclusion if they fulfilled all of the following selection criteria: a cervical cell sample was self-collected by a woman followed by a sample taken by a clinician; a high-risk HPV test was done on the self-sample (index test) and HPV-testing or cytological interpretation was done on the specimen collected by the clinician (comparator tests); and the presence or absence of cervical intraepithelial neoplasia grade 2 (CIN2) or worse was verified by colposcopy and biopsy in all enrolled women or in women with one or more positive tests. The absolute accuracy for finding CIN2 or worse, or CIN grade 3 (CIN3) or worse of the index and comparator tests as well as the relative accuracy of the index versus the comparator tests were pooled using bivariate normal models and random effect models. We included data from 36 studies, which altogether enrolled 154 556 women. The absolute accuracy varied by clinical setting. In the context of screening, HPV testing on self-samples detected, on average, 76% (95% CI 69-82) of CIN2 or worse and 84% (72-92) of CIN3 or worse. The pooled absolute specificity to exclude CIN2 or worse was 86% (83-89) and 87% (84-90) to exclude CIN3 or worse. The variation of the relative accuracy of HPV testing on self-samples compared with tests on clinician-taken samples was low across settings, enabling pooling of the relative accuracy over all studies. The pooled sensitivity of HPV testing on self-samples was lower than HPV testing on a clinician-taken sample (ratio 0·88 [95% CI 0·85-0·91] for CIN2 or worse and 0·89 [0·83-0·96] for CIN3 or worse). Also specificity was lower in self-samples versus clinician-taken samples (ratio 0·96 [0·95-0·97] for CIN2 or worse and 0·96 [0·93-0·99] for CIN3 or worse). HPV testing with signal-based assays on self-samples was less sensitive and specific than testing on clinician-based samples. By contrast, some PCR-based HPV tests generally showed similar sensitivity on both self-samples and clinician-based samples. In screening programmes using signal-based assays, sampling by a clinician should be recommended. However, HPV testing on a self-sample can be suggested as an additional strategy to reach women not participating in the regular screening programme. Some PCR-based HPV tests could be considered for routine screening after careful piloting assessing feasibility, logistics, population compliance, and costs. The 7th Framework Programme of the European Commission, the Belgian Foundation against Cancer, the International Agency for Research on Cancer, and the German Guideline Program in Oncology. Copyright © 2014 Elsevier Ltd. All rights reserved.
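
    The paper pools relative accuracy with bivariate normal and random-effects models; as a simpler illustration of random-effects pooling, the sketch below applies the univariate DerSimonian-Laird estimator to made-up per-study relative sensitivities on the log scale:

    ```python
    import numpy as np

    # Toy per-study relative sensitivity (self- vs clinician-sample) with 95% CIs.
    ratios = np.array([0.85, 0.90, 0.88, 0.92, 0.84])
    ci_low = np.array([0.75, 0.82, 0.80, 0.85, 0.72])
    ci_high = np.array([0.96, 0.99, 0.97, 1.00, 0.98])

    y = np.log(ratios)                            # pool on the log scale
    v = ((np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)) ** 2

    # DerSimonian-Laird random-effects pooling.
    w = 1 / v
    q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
    w_star = 1 / (v + tau2)
    pooled = np.sum(w_star * y) / w_star.sum()
    se = np.sqrt(1 / w_star.sum())
    print("pooled ratio: %.2f (95%% CI %.2f-%.2f)"
          % tuple(np.exp([pooled, pooled - 1.96 * se, pooled + 1.96 * se])))
    ```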

  9. AVNM: A Voting based Novel Mathematical Rule for Image Classification.

    PubMed

    Vidyarthi, Ankit; Mittal, Namita

    2016-12-01

    In machine learning, system accuracy depends on classification results, and classification accuracy plays an imperative role across many domains. The non-parametric k-nearest neighbor (KNN) classifier is among the most widely used classifiers for pattern analysis. Despite its simplicity and effectiveness, the main problem associated with KNN is the selection of the number of nearest neighbors, "k", used in the computation. At present, no statistical procedure reliably finds the optimal value of "k", i.e., the value that minimizes the misclassification error rate. Motivated by this problem, a new sample-space-reduction weighted voting mathematical rule (AVNM) is proposed for classification in machine learning. Like KNN, the proposed AVNM rule is non-parametric. AVNM uses a weighted voting mechanism with sample space reduction to learn and predict the class label of an unseen sample. Unlike KNN, AVNM requires no initial selection of a predefined variable or neighbor count, and it also reduces the effect of outliers. To verify the performance of the proposed classifier, experiments were performed on 10 standard datasets from the UCI repository and one manually created dataset. The experimental results show that the proposed AVNM rule outperforms the KNN classifier and its variants, yielding higher confusion-matrix accuracy and a lower error rate. Because AVNM automates the selection of nearest neighbors through its sample-space-reduction mechanism, it improves classification rates on both the UCI datasets and the manually created dataset. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
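
    The abstract names AVNM's ingredients (sample-space reduction plus weighted voting) but not its exact equations, so the following is only a hedged sketch of that general idea: the candidate set is shrunk by a data-driven radius (the reduction rule here is an assumption, not the authors'), and each surviving neighbor casts an inverse-distance-weighted vote, removing the need to fix "k".

    ```python
    import numpy as np

    def weighted_vote_classify(X_train, y_train, x, shrink=0.5):
        """Parameter-free neighbour vote in the spirit of AVNM (sketch).
        The sample space is reduced to points closer than `shrink` times
        the mean distance to the query (assumed rule), and each survivor
        casts an inverse-distance-weighted vote, so no fixed 'k' is needed."""
        d = np.linalg.norm(X_train - x, axis=1)
        radius = shrink * d.mean()            # data-driven reduction radius
        keep = d <= radius
        if not np.any(keep):                  # fall back to the nearest point
            return y_train[np.argmin(d)]
        votes = {}
        for label, dist in zip(y_train[keep], d[keep]):
            votes[label] = votes.get(label, 0.0) + 1.0 / (dist + 1e-12)
        return max(votes, key=votes.get)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2)) + np.repeat([[0, 0], [3, 3]], 50, axis=0)
    y = np.array([0] * 50 + [1] * 50)
    print(weighted_vote_classify(X, y, np.array([2.5, 2.8])))  # expect 1
    ```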

  10. Modified slanted-edge method for camera modulation transfer function measurement using nonuniform fast Fourier transform technique

    NASA Astrophysics Data System (ADS)

    Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin

    2018-01-01

    The ISO 12233 slanted-edge method suffers errors in camera modulation transfer function (MTF) measurement when the fast Fourier transform (FFT) is used, because tilt-angle errors in the knife edge result in nonuniform sampling of the edge spread function (ESF). To resolve this problem, a modified slanted-edge method using the nonuniform fast Fourier transform (NUFFT) for camera MTF measurement is proposed. Theoretical simulations on noisy images at different nonuniform sampling rates of the ESF show that the proposed method successfully eliminates the error due to nonuniform sampling of the ESF. An experimental setup for camera MTF measurement was established to verify the accuracy of the proposed method. The experimental results show that, across different nonuniform sampling rates of the ESF, the proposed modified slanted-edge method measures the camera MTF more accurately than the ISO 12233 slanted-edge method.
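
    A minimal sketch of the core idea, with a direct nonuniform DFT standing in for the NUFFT (numerically equivalent, just slower) and a synthetic edge; this is not the authors' implementation, and the windowing choice is an assumption.

    ```python
    import numpy as np

    def mtf_from_nonuniform_esf(x, esf, freqs):
        """MTF from a nonuniformly sampled ESF (sketch).
        `x` are projection distances of pixels onto the edge normal
        (nonuniform when the edge is slanted), `esf` the intensities.
        A direct nonuniform DFT stands in for the paper's NUFFT."""
        order = np.argsort(x)
        x, esf = x[order], esf[order]
        lsf = np.gradient(esf, x)                # line spread function
        lsf = lsf * np.hanning(len(lsf))         # taper to limit leakage
        spectrum = np.array([np.trapz(lsf * np.exp(-2j * np.pi * f * x), x)
                             for f in freqs])    # nonuniform DFT
        mtf = np.abs(spectrum)
        return mtf / mtf[0]                      # normalise at lowest freq (~DC)

    # Synthetic slanted edge: nonuniform sample positions, smooth step.
    x = np.sort(np.random.default_rng(1).uniform(-5, 5, 400))
    esf = 0.5 * (1 + np.tanh(2 * x))
    print(mtf_from_nonuniform_esf(x, esf, np.linspace(0.01, 1.0, 10)))
    ```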

  11. Assessment of xylem phenology: a first attempt to verify its accuracy and precision.

    PubMed

    Lupi, C; Rossi, S; Vieira, J; Morin, H; Deslauriers, A

    2014-01-01

    This manuscript aims to evaluate the precision and accuracy of current methodology for estimating xylem phenology and tracheid production in trees. Through a simple approach, sampling at two positions on the stem of co-dominant black spruce trees in two sites of the boreal forest of Quebec, we were able to quantify variability among sites, between trees and within a tree for different variables. We demonstrated that current methodology is accurate for the estimation of the onset of xylogenesis, while the accuracy for the evaluation of the ending of xylogenesis may be improved by sampling at multiple positions on the stem. The pattern of variability in different phenological variables and cell production allowed us to advance a novel hypothesis on the shift in the importance of various drivers of xylogenesis, from factors mainly varying at the level of site (e.g., climate) at the beginning of the growing season to factors varying at the level of individual trees (e.g., possibly genetic variability) at the end of the growing season.

  12. [Discussion of scattering in THz time domain spectrum tests].

    PubMed

    Yan, Fang; Zhang, Zhao-hui; Zhao, Xiao-yan; Su, Hai-xia; Li, Zhi; Zhang, Han

    2014-06-01

    Extracting the absorption spectrum of a sample with THz time-domain spectroscopy (THz-TDS) is an important branch of THz applications. THz radiation scatters from sample particles, producing a pronounced baseline that increases with frequency in the absorption spectrum. This baseline degrades measurement accuracy because it obscures the true height and shape of the spectrum, so it should be removed and the effects of scattering eliminated. In the present paper, we investigate the causes of such baselines, review several scatter-mitigation methods, and summarize directions for future research. To validate these methods, we designed a series of experiments comparing the computational accuracy of molar concentration. The results indicate that the computational accuracy of molar concentration can be improved, which provides a basis for quantitative analysis in further research. Finally, drawing on the full set of experimental results, we outline further research directions for removing scattering effects from THz absorption spectra.
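
    The record surveys scatter-mitigation methods without prescribing one; a common choice for this kind of rising baseline is iterative polynomial ("modified polyfit") baseline fitting, sketched here on synthetic data. The polynomial degree and iteration count are assumptions, not values from the paper.

    ```python
    import numpy as np

    def remove_scatter_baseline(freq, absorbance, degree=3, n_iter=50):
        """Iterative polynomial baseline removal (sketch).
        Repeatedly fit a low-order polynomial and clip points above the
        fit, so absorption peaks stop pulling the baseline upward; the
        final fit approximates the frequency-dependent scatter baseline."""
        y = absorbance.copy()
        for _ in range(n_iter):
            coeffs = np.polyfit(freq, y, degree)
            baseline = np.polyval(coeffs, freq)
            y = np.minimum(y, baseline)      # clip peaks, keep baseline
        return absorbance - baseline         # baseline-corrected spectrum

    # Synthetic THz spectrum: one peak riding on a rising scatter baseline.
    f = np.linspace(0.2, 2.5, 500)                    # frequency, THz
    spectrum = 0.4 * f**1.5 + np.exp(-((f - 1.2) / 0.05) ** 2)
    corrected = remove_scatter_baseline(f, spectrum)
    ```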

  13. Microfluidic Purification and Concentration of Malignant Pleural Effusions for Improved Molecular and Cytomorphological Diagnostics

    PubMed Central

    Go, Derek E.; Talati, Ish; Ying, Yong; Rao, Jianyu; Kulkarni, Rajan P.; Di Carlo, Dino

    2013-01-01

    Evaluation of pleural fluids for metastatic cells is a key component of diagnostic cytopathology. However, a large background of smaller leukocytes and/or erythrocytes can make accurate diagnosis difficult and reduce specificity in identification of mutations of interest for targeted anti-cancer therapies. Here, we describe an automated microfluidic system (Centrifuge Chip) which employs microscale vortices for the size-based isolation and concentration of cancer cells and mesothelial cells from a background of blood cells. We are able to process non-diluted pleural fluids at 6 mL/min and enrich target cells significantly over the background; we achieved improved purity in all patient samples analyzed. The resulting isolated and viable cells are readily available for immunostaining, cytological analysis, and detection of gene mutations. To demonstrate the utility towards aiding companion diagnostics, we also show improved detection accuracy of KRAS gene mutations in lung cancer cells processed using the Centrifuge Chip, leading to an increase in the area under the curve (AUC) of the receiver operating characteristic from 0.90 to 0.99. The Centrifuge Chip allows for rapid concentration and processing of large volumes of bodily fluid samples for improved cytological diagnosis and purification of cells of interest for genetic testing, which will be helpful for enhancing diagnostic accuracy. PMID:24205153

  14. Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method.

    PubMed

    Batres-Mendoza, Patricia; Ibarra-Manzano, Mario A; Guerra-Hernandez, Erick I; Almanza-Ojeda, Dora L; Montoro-Sanjose, Carlos R; Romero-Troncoso, Rene J; Rostro-Gonzalez, Horacio

    2017-01-01

    We present an improvement to the quaternion-based signal analysis (QSA) technique for extracting electroencephalography (EEG) signal features, aimed at real-time applications, particularly motor imagery (MI) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the mean, variance, homogeneity, and contrast of EEG signals related to motor imagery more efficiently than the original QSA technique, reducing the number of samples needed to classify the signal while improving the classification rate. Specifically, the signal is sampled over variable time periods (from 0.5 s to 3 s, in half-second intervals) to determine the relationship between the number of samples and classification effectiveness. In addition, a set of boosting-based decision trees was implemented to strengthen the classification process. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples, a significant improvement over the original QSA technique, which achieved 33.31% to 40.82% without a sampling window and 33.44% to 41.07% with one. We conclude that iQSA is better suited for developing real-time applications.
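
    A minimal sketch of the windowing-plus-boosting pipeline described above, with the quaternion transform of QSA itself omitted for brevity; the sampling rate, the homogeneity/contrast proxies, and the labels are placeholders, not the study's data or exact features.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    def window_features(eeg, fs, win_s):
        """Cut one EEG channel into windows of `win_s` seconds and compute
        the iQSA-style statistics named in the abstract (mean, variance,
        plus rough homogeneity and contrast proxies)."""
        n = int(win_s * fs)
        wins = eeg[:len(eeg) // n * n].reshape(-1, n)
        diffs = np.abs(np.diff(wins, axis=1))
        return np.column_stack([wins.mean(1), wins.var(1),
                                1.0 / (1.0 + diffs.mean(1)),  # homogeneity proxy
                                diffs.var(1)])                # contrast proxy

    rng = np.random.default_rng(2)
    fs = 128                                   # assumed sampling rate (Hz)
    eeg = rng.normal(size=60 * fs)             # one minute of fake EEG
    X = window_features(eeg, fs, win_s=0.5)    # 0.5 s windows, as in the paper
    y = rng.integers(0, 2, size=len(X))        # fake motor-imagery labels
    clf = AdaBoostClassifier(n_estimators=50).fit(X, y)  # boosted trees
    print(clf.score(X, y))
    ```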

  15. Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method

    PubMed Central

    Batres-Mendoza, Patricia; Guerra-Hernandez, Erick I.; Almanza-Ojeda, Dora L.; Montoro-Sanjose, Carlos R.

    2017-01-01

    We present an improvement to the quaternion-based signal analysis (QSA) technique for extracting electroencephalography (EEG) signal features, aimed at real-time applications, particularly motor imagery (MI) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the mean, variance, homogeneity, and contrast of EEG signals related to motor imagery more efficiently than the original QSA technique, reducing the number of samples needed to classify the signal while improving the classification rate. Specifically, the signal is sampled over variable time periods (from 0.5 s to 3 s, in half-second intervals) to determine the relationship between the number of samples and classification effectiveness. In addition, a set of boosting-based decision trees was implemented to strengthen the classification process. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples, a significant improvement over the original QSA technique, which achieved 33.31% to 40.82% without a sampling window and 33.44% to 41.07% with one. We conclude that iQSA is better suited for developing real-time applications. PMID:29348744

  16. Infrared calibration for climate: a perspective on present and future high-spectral resolution instruments

    NASA Astrophysics Data System (ADS)

    Revercomb, Henry E.; Anderson, James G.; Best, Fred A.; Tobin, David C.; Knuteson, Robert O.; LaPorte, Daniel D.; Taylor, Joe K.

    2006-12-01

    The new era of high spectral resolution infrared instruments for atmospheric sounding offers great opportunities for climate change applications. A major issue with most of our existing IR observations from space is spectral sampling uncertainty and the lack of standardization in spectral sampling. The new ultra-high-resolution observing capabilities of the AIRS grating spectrometer on the NASA Aqua platform and of new operational FTS instruments (IASI on Metop, CrIS for NPP/NPOESS, and GIFTS for a GOES demonstration) will go a long way toward improving this situation. These new observations offer the following improvements: (1) absolute accuracy, moving from issues of order 1 K to <0.2-0.4 K brightness temperature; (2) more complete spectral coverage, with Nyquist sampling for scale standardization; and (3) capabilities for unifying IR calibration among different instruments and platforms. However, more needs to be done to meet the immediate needs for climate and to effectively leverage these new operational weather systems: (1) place special emphasis on making new instruments as accurate as they can be, to realize the potential of technological investments already made; (2) maintain a careful validation program that establishes the best possible direct radiance check of long-term accuracy, specifically by continuing to use aircraft- or balloon-borne instruments that are periodically checked directly against NIST; and (3) commit to a simple, new IR mission that will provide an ongoing backbone for the climate observing system. The new mission would use Fourier transform spectrometer measurements to fill the spectral and diurnal sampling gaps of the operational systems and provide a benchmark with better than 0.1 K (3σ) accuracy based on standards that are verifiable in flight.

  17. Design and expected performance of a fast neutron attenuation probe for light element density measurements

    DOE PAGES

    Sweany, M.; Marleau, P.

    2016-07-08

    In this paper, we present the design and expected performance of a proof-of-concept 32 channel material identification system. Our system is based on the energy-dependent attenuation of fast neutrons for four elements: hydrogen, carbon, nitrogen and oxygen. We describe a new approach to obtaining a broad range of neutron energies to probe a sample, as well as our technique for reconstructing the molar densities within a sample. The system's performance as a function of time-of-flight energy resolution is explored using a Geant4-based Monte Carlo. Our results indicate that, with the expected detector response of our system, we will be able to determine the molar density of all four elements to within a 20–30% accuracy in a two hour scan time. In many cases this error is systematically low, thus the ratio between elements is more accurate. This degree of accuracy is enough to distinguish, for example, a sample of water from a sample of pure hydrogen peroxide: the ratio of oxygen to hydrogen is reconstructed to within 8±0.5% of the true value. Lastly, with future algorithm development that accounts for backgrounds caused by scattering within the sample itself, the accuracy of molar densities, not ratios, may improve to the 5–10% level for a two hour scan time.
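
    The reconstruction step reduces to a linear inversion of Beer-Lambert attenuation: each time-of-flight energy bin gives one equation ln(I0/I)(E) = L · Σe σe(E) · ne. A hedged least-squares sketch follows; the cross sections and path length are made-up placeholders, not evaluated nuclear data or the authors' algorithm.

    ```python
    import numpy as np

    def reconstruct_molar_densities(sigma, attenuation, path_cm):
        """Recover H, C, N, O molar densities from fast-neutron
        attenuation (sketch). One Beer-Lambert equation per energy bin,
        ln(I0/I)(E) = L * sum_e sigma_e(E) * n_e, solved by least squares."""
        A = path_cm * sigma                   # (n_energies, 4) design matrix
        n, *_ = np.linalg.lstsq(A, attenuation, rcond=None)
        return n                              # mol / cm^3 per element

    rng = np.random.default_rng(3)
    sigma = rng.uniform(0.5, 5.0, size=(32, 4))     # placeholder sigma_e(E)
    true_n = np.array([0.11, 0.0, 0.0, 0.055])      # roughly water: H and O
    measured = 10.0 * sigma @ true_n + rng.normal(0, 0.01, 32)
    print(reconstruct_molar_densities(sigma, measured, path_cm=10.0))
    ```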

  18. Development and evaluation of an automatic labeling technique for spring small grains

    NASA Technical Reports Server (NTRS)

    Crist, E. P.; Malila, W. A. (Principal Investigator)

    1981-01-01

    A labeling technique is described which seeks to associate a sampling entity with a particular crop or crop group based on similarity of growing season and temporal-spectral patterns of development. Human analysts provide contextual information, after which labeling decisions are made automatically. Results of a test of the technique on a large, multi-year data set are reported. Grain labeling accuracies are similar to those achieved by human analysis techniques, while non-grain accuracies are lower. Recommendations for improvements and implications of the test results are discussed.

  19. Differentially Private Frequent Sequence Mining via Sampling-based Candidate Pruning

    PubMed Central

    Xu, Shengzhi; Cheng, Xiang; Li, Zhengyi; Xiong, Li

    2016-01-01

    In this paper, we study the problem of mining frequent sequences under the rigorous differential privacy model. We explore the possibility of designing a differentially private frequent sequence mining (FSM) algorithm which can achieve both high data utility and a high degree of privacy. We found, in differentially private FSM, the amount of required noise is proportionate to the number of candidate sequences. If we could effectively reduce the number of unpromising candidate sequences, the utility and privacy tradeoff can be significantly improved. To this end, by leveraging a sampling-based candidate pruning technique, we propose a novel differentially private FSM algorithm, which is referred to as PFS2. The core of our algorithm is to utilize sample databases to further prune the candidate sequences generated based on the downward closure property. In particular, we use the noisy local support of candidate sequences in the sample databases to estimate which sequences are potentially frequent. To improve the accuracy of such private estimations, a sequence shrinking method is proposed to enforce the length constraint on the sample databases. Moreover, to decrease the probability of misestimating frequent sequences as infrequent, a threshold relaxation method is proposed to relax the user-specified threshold for the sample databases. Through formal privacy analysis, we show that our PFS2 algorithm is ε-differentially private. Extensive experiments on real datasets illustrate that our PFS2 algorithm can privately find frequent sequences with high accuracy. PMID:26973430
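
    The pruning core of such an algorithm can be sketched briefly: support counts have sensitivity 1 (one record changes a count by at most 1), so Laplace noise of scale 1/ε suffices per count. This sketch shows only that step; PFS2's sequence shrinking, threshold relaxation, and privacy-budget splitting across mining passes are omitted, and the helper names are illustrative.

    ```python
    import numpy as np

    def is_subsequence(sub, seq):
        it = iter(seq)
        return all(item in it for item in sub)

    def noisy_prune(candidates, sample_db, threshold, epsilon):
        """Prune candidate sequences by noisy support in a sample database
        (sketch of the pruning step only)."""
        rng = np.random.default_rng(4)
        kept = []
        for cand in candidates:
            support = sum(1 for rec in sample_db if is_subsequence(cand, rec))
            # Laplace mechanism: sensitivity of a support count is 1.
            noisy = support + rng.laplace(scale=1.0 / epsilon)
            if noisy >= threshold:
                kept.append(cand)
        return kept

    db = [["a", "b", "c"], ["a", "c"], ["b", "c"], ["a", "b", "c", "d"]]
    print(noisy_prune([("a", "c"), ("a", "d")], db, threshold=2, epsilon=1.0))
    ```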

  20. A Hybrid Semi-supervised Classification Scheme for Mining Multisource Geospatial Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju; Bhaduri, Budhendra L

    2011-01-01

    Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on spectral characteristics of thematic classes whose statistical distributions (class conditional probability densities) are often overlapping. The spectral response distributions of thematic classes depend on many factors including elevation, soil types, and ecological zones. A second problem with statistical classifiers is the requirement for a large number of accurate training samples (10 to 30 times the number of dimensions), which are often costly and time consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracy even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing a large number of easily available unlabeled training samples. Unfortunately there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on real datasets, and our new hybrid approach shows a 25 to 35% improvement in overall classification accuracy over conventional classification schemes.
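
    The semi-supervised core of such an approach is classic EM refinement of the ML classifier's per-class Gaussians using unlabeled pixels. The sketch below shows only that core under the assumption of Gaussian class-conditional densities; the paper's integration of ancillary geospatial layers is not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def semi_supervised_em(Xl, yl, Xu, n_iter=20):
        """EM sketch: refine per-class Gaussian parameters with unlabeled
        samples, starting from estimates on a small labeled set."""
        classes = np.unique(yl)
        pri = np.array([np.mean(yl == c) for c in classes])
        mu = np.array([Xl[yl == c].mean(0) for c in classes])
        cov = np.array([np.cov(Xl[yl == c].T) + 1e-6 * np.eye(Xl.shape[1])
                        for c in classes])
        for _ in range(n_iter):
            # E-step: soft class memberships for unlabeled samples.
            resp = np.column_stack(
                [pri[k] * multivariate_normal.pdf(Xu, mu[k], cov[k])
                 for k in range(len(classes))])
            resp /= resp.sum(1, keepdims=True)
            # M-step: labeled (hard) + unlabeled (soft) re-estimation.
            X = np.vstack([Xl, Xu])
            for k, c in enumerate(classes):
                w = np.concatenate([(yl == c).astype(float), resp[:, k]])
                mu[k] = np.average(X, axis=0, weights=w)
                diff = X - mu[k]
                cov[k] = ((w[:, None] * diff).T @ diff / w.sum()
                          + 1e-6 * np.eye(X.shape[1]))
                pri[k] = w.sum() / len(w)
        return pri, mu, cov

    rng = np.random.default_rng(16)
    Xl = np.vstack([rng.normal(0, 1, (10, 2)), rng.normal(4, 1, (10, 2))])
    yl = np.array([0] * 10 + [1] * 10)
    Xu = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
    pri, mu, cov = semi_supervised_em(Xl, yl, Xu)
    print(pri, mu)
    ```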

  1. Enhancing the accuracy of subcutaneous glucose sensors: a real-time deconvolution-based approach.

    PubMed

    Guerra, Stefania; Facchinetti, Andrea; Sparacino, Giovanni; Nicolao, Giuseppe De; Cobelli, Claudio

    2012-06-01

    Minimally invasive continuous glucose monitoring (CGM) sensors can greatly help diabetes management. Most of these sensors consist of a needle electrode, placed in the subcutaneous tissue, which measures an electrical current exploiting the glucose-oxidase principle. This current is then transformed to glucose levels after calibrating the sensor on the basis of one, or more, self-monitoring blood glucose (SMBG) samples. In this study, we design and test a real-time signal-enhancement module that, cascaded to the CGM device, improves the quality of its output by a proper postprocessing of the CGM signal. In fact, CGM sensors measure glucose in the interstitium rather than in the blood compartment. We show that this distortion can be compensated by means of a regularized deconvolution procedure relying on a linear regression model that can be updated whenever a pair of suitably sampled SMBG references is collected. Tests performed both on simulated and real data demonstrate a significant accuracy improvement of the CGM signal. Simulation studies also demonstrate the robustness of the method against departures from nominal conditions, such as temporal misplacement of the SMBG samples and uncertainty in the blood-to-interstitium glucose kinetic model. Thanks to its online capabilities, the proposed signal-enhancement algorithm can be used to improve the performance of CGM-based real-time systems such as the hypo/hyper glycemic alert generators or the artificial pancreas.
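
    The distortion-compensation idea lends itself to a compact sketch: interstitial glucose is modeled as the convolution of blood glucose with a first-order kernel, and regularized (Tikhonov-style) deconvolution inverts it. Here the time constant `tau` is fixed and the regularization weight is a guess; the paper instead updates a linear regression model whenever SMBG calibration pairs arrive.

    ```python
    import numpy as np

    def conv_matrix(n, dt, tau):
        """Lower-triangular convolution matrix for first-order
        blood-to-interstitium kinetics, g(t) = exp(-t/tau)/tau."""
        kernel = (dt / tau) * np.exp(-np.arange(n) * dt / tau)
        A = np.zeros((n, n))
        for i in range(n):
            A[i, :i + 1] = kernel[i::-1]
        return A

    def deconvolve_cgm(ig, dt, tau, lam=50.0):
        """Regularized deconvolution sketch: recover blood glucose from a
        CGM trace, with a second-difference smoothness penalty."""
        n = len(ig)
        A = conv_matrix(n, dt, tau)
        D = np.diff(np.eye(n), 2, axis=0)          # smoothness penalty
        return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ ig)

    dt, tau = 5.0, 15.0                            # minutes (assumed)
    bg = 100 + 40 * np.sin(np.linspace(0, np.pi, 60))   # true blood glucose
    ig = conv_matrix(60, dt, tau) @ bg + np.random.default_rng(5).normal(0, 2, 60)
    print(np.abs(deconvolve_cgm(ig, dt, tau) - bg).mean())  # mean abs error
    ```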

  2. Evaluation of antibiotic resistance analysis and ribotyping for identification of faecal pollution sources in an urban watershed.

    PubMed

    Moore, D F; Harwood, V J; Ferguson, D M; Lukasik, J; Hannah, P; Getrich, M; Brownell, M

    2005-01-01

    The accuracy of ribotyping and antibiotic resistance analysis (ARA) for prediction of sources of faecal bacterial pollution in an urban southern California watershed was determined using blinded proficiency samples. Antibiotic resistance patterns and HindIII ribotypes of Escherichia coli (n = 997), and antibiotic resistance patterns of Enterococcus spp. (n = 3657) were used to construct libraries from sewage samples and from faeces of seagulls, dogs, cats, horses and humans within the watershed. The three libraries were analysed to determine the accuracy of host source prediction. The internal accuracy of the libraries (average rate of correct classification, ARCC) with six source categories was 44% for E. coli ARA, 69% for E. coli ribotyping and 48% for Enterococcus ARA. Each library's predictive ability towards isolates that were not part of the library was determined using a blinded proficiency panel of 97 E. coli and 99 Enterococcus isolates. Twenty-eight per cent (by ARA) and 27% (by ribotyping) of the E. coli proficiency isolates were assigned to the correct source category. Sixteen per cent were assigned to the same source category by both methods, and 6% were assigned to the correct category. Addition of 2480 E. coli isolates to the ARA library did not improve the ARCC or proficiency accuracy. In contrast, 45% of Enterococcus proficiency isolates were correctly identified by ARA. None of the methods performed well enough on the proficiency panel to be judged ready for application to environmental samples. Most microbial source tracking (MST) studies published have demonstrated library accuracy solely by the internal ARCC measurement. Low rates of correct classification for E. coli proficiency isolates compared with the ARCCs of the libraries indicate that testing of bacteria from samples that are not represented in the library, such as blinded proficiency samples, is necessary to accurately measure predictive ability. The library-based MST methods used in this study may not be suited for determination of the source(s) of faecal pollution in large, urban watersheds.

  3. Highly accurate detection of ovarian cancer using CA125 but limited improvement with serum matrix-assisted laser desorption/ionization time-of-flight mass spectrometry profiling.

    PubMed

    Tiss, Ali; Timms, John F; Smith, Celia; Devetyarov, Dmitry; Gentry-Maharaj, Aleksandra; Camuzeaux, Stephane; Burford, Brian; Nouretdinov, Ilia; Ford, Jeremy; Luo, Zhiyuan; Jacobs, Ian; Menon, Usha; Gammerman, Alex; Cramer, Rainer

    2010-12-01

    Our objective was to test the performance of CA125 in classifying serum samples from a cohort of malignant and benign ovarian cancers and age-matched healthy controls and to assess whether combining information from matrix-assisted laser desorption/ionization (MALDI) time-of-flight profiling could improve diagnostic performance. Serum samples from women with ovarian neoplasms and healthy volunteers were subjected to CA125 assay and MALDI time-of-flight mass spectrometry (MS) profiling. Models were built from training data sets using discriminatory MALDI MS peaks in combination with CA125 values and tested their ability to classify blinded test samples. These were compared with models using CA125 threshold levels from 193 patients with ovarian cancer, 290 with benign neoplasm, and 2236 postmenopausal healthy controls. Using a CA125 cutoff of 30 U/mL, an overall sensitivity of 94.8% (96.6% specificity) was obtained when comparing malignancies versus healthy postmenopausal controls, whereas a cutoff of 65 U/mL provided a sensitivity of 83.9% (99.6% specificity). High classification accuracies were obtained for early-stage cancers (93.5% sensitivity). Reasons for high accuracies include recruitment bias, restriction to postmenopausal women, and inclusion of only primary invasive epithelial ovarian cancer cases. The combination of MS profiling information with CA125 did not significantly improve the specificity/accuracy compared with classifications on the basis of CA125 alone. We report unexpectedly good performance of serum CA125 using threshold classification in discriminating healthy controls and women with benign masses from those with invasive ovarian cancer. This highlights the dependence of diagnostic tests on the characteristics of the study population and the crucial need for authors to provide sufficient relevant details to allow comparison. Our study also shows that MS profiling information adds little to diagnostic accuracy. This finding is in contrast with other reports and shows the limitations of serum MS profiling for biomarker discovery and as a diagnostic tool.

  4. Cost-Effective Prediction of Reading Difficulties.

    ERIC Educational Resources Information Center

    Heath, Steve M.; Hogben, John H.

    2004-01-01

    This study addressed 2 questions: (a) Can preschoolers who will fail at reading be more efficiently identified by targeting those at highest risk for reading problems? and (b) will auditory temporal processing (ATP) improve the accuracy of identification derived from phonological processing and oral language ability? A sample of 227 preschoolers…

  5. Colorimetric Measurements of Amylase Activity: Improved Accuracy and Efficiency with a Smartphone

    ERIC Educational Resources Information Center

    Dangkulwanich, Manchuta; Kongnithigarn, Kaness; Aurnoppakhun, Nattapat

    2018-01-01

    Routinely used in quantitative determination of various analytes, UV-vis spectroscopy is commonly taught in undergraduate chemistry laboratory courses. Because the technique measures the absorbance of light through the samples, losses from reflection and scattering by large molecules interfere with the measurement. To emphasize the importance of…

  6. Enhanced systems for measuring and monitoring REDD+: Opportunities to improve the accuracy of emission factor and activity data in Indonesia

    NASA Astrophysics Data System (ADS)

    Solichin

    The importance of accurate measurement of forest biomass in Indonesia has been growing ever since climate change mitigation schemes, particularly the reduction of emissions from deforestation and forest degradation scheme (known as REDD+), were constitutionally accepted by the government of Indonesia. The need for an accurate system of historical and actual forest monitoring has also become more pronounced, as such a system would afford a better understanding of the role of forests in climate change and allow for the quantification of the impact of activities implemented to reduce greenhouse gas emissions. The aim of this study was to enhance the accuracy of estimations of carbon stocks and to monitor emissions in tropical forests. The research encompassed various scales (from trees and stands to landscape-sized scales) and a wide range of aspects, from evaluation and development of allometric equations to exploration of the potential of existing forest inventory databases and evaluation of cutting-edge technology for non-destructive sampling and accurate forest biomass mapping over large areas. In this study, I explored whether accuracy--especially regarding the identification and reduction of bias--of forest aboveground biomass (AGB) estimates in Indonesia could be improved through (1) development and refinement of allometric equations for major forest types, (2) integration of existing large forest inventory datasets, (3) assessing nondestructive sampling techniques for tree AGB measurement, and (4) landscape-scale mapping of AGB and forest cover using lidar. This thesis provides essential foundations to improve the estimation of forest AGB at tree scale through development of new AGB equations for several major forest types in Indonesia. I successfully developed new allometric equations using large datasets from various forest types that enable us to estimate tree aboveground biomass for both forest type specific and generic equations. My models outperformed the existing local equations, with lower bias and higher precision of the AGB estimates. This study also highlights the potential advantages and challenges of using terrestrial lidar and the acoustic velocity tool for non-destructive sampling of tree biomass to enable more sample collection without the felling of trees. Further, I explored whether existing forest inventories and permanent sample plot datasets can be integrated into Indonesia's existing carbon accounting system. My investigation of these existing datasets found that through quality assurance tests these datasets are essential to be integrated into national and provincial forest monitoring and carbon accounting systems. Integration of this information would eventually improve the accuracy of the estimates of forest carbon stocks, biomass growth, mortality and emission factors from deforestation and forest degradation. At landscape scale, this study demonstrates the capability of airborne lidar for forest monitoring and forest cover classification in tropical peat swamp ecosystems. The mapping application using airborne lidar showed a more accurate and precise classification of land and forest cover when compared with mapping using optical and active sensors. To reduce the cost of lidar acquisition, this study assessed the optimum lidar return density for forest monitoring. I found that the density of lidar return could be reduced to at least 1 return per 4 m2. 
    Overall, this study provides essential scientific background to improve the accuracy of forest AGB estimates. Therefore, the described results and techniques should be integrated into the existing monitoring systems to assess emission reduction targets and the impact of REDD+ implementation.

  7. Improved accuracy of quantitative parameter estimates in dynamic contrast-enhanced CT study with low temporal resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca; Haider, Masoom A.; Jaffray, David A.

    Purpose: A previously proposed method to reduce radiation dose to the patient in dynamic contrast-enhanced (DCE) CT is enhanced by principal component analysis (PCA) filtering, which improves the signal-to-noise ratio (SNR) of time-concentration curves in the DCE-CT study. The efficacy of the combined method to maintain the accuracy of kinetic parameter estimates at low temporal resolution is investigated with pixel-by-pixel kinetic analysis of DCE-CT data. Methods: The method is based on DCE-CT scanning performed with low temporal resolution to reduce the radiation dose to the patient. The arterial input function (AIF) with high temporal resolution can be generated from a coarsely sampled AIF through a previously published method of AIF estimation. To increase the SNR of time-concentration curves (tissue curves), a region-of-interest is first segmented into squares of 3 × 3 pixels. Subsequently, PCA filtering combined with a fraction of residual information criterion is applied to all the segmented squares for further improvement of their SNRs. The proposed method was applied to each DCE-CT data set of a cohort of 14 patients at varying levels of down-sampling. Kinetic analyses using the modified Tofts’ model and the singular value decomposition method were then carried out for each of the down-sampling schemes at intervals from 2 to 15 s. The results were compared with analyses done with the measured data at high temporal resolution (i.e., the original scanning frequency) as the reference. Results: The patients’ AIFs were estimated to high accuracy based on the 11 orthonormal bases of arterial impulse responses established in the previous paper. In addition, noise in the images was effectively reduced by using five principal components of the tissue curves for filtering. Kinetic analyses using the proposed method showed superior results compared to those with down-sampling alone; they were able to maintain the accuracy in the quantitative histogram parameters of volume transfer constant [standard deviation (SD), 98th percentile, and range], rate constant (SD), blood volume fraction (mean, SD, 98th percentile, and range), and blood flow (mean, SD, median, 98th percentile, and range) for sampling intervals between 10 and 15 s. Conclusions: The proposed method of PCA filtering combined with the AIF estimation technique allows low frequency scanning in DCE-CT studies to reduce patient radiation dose. The results indicate that the method is useful in pixel-by-pixel kinetic analysis of DCE-CT data for patients with cervical cancer.
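
    The PCA filtering step itself is compact: project the tissue curves onto their leading principal components and discard the rest. A minimal sketch on synthetic curves follows; the record keeps five components chosen by a fraction-of-residual-information criterion, whereas here the component count is simply fixed.

    ```python
    import numpy as np

    def pca_filter_curves(curves, n_components=5):
        """Denoise DCE-CT time-concentration curves by PCA truncation
        (sketch). `curves` is (n_voxels, n_timepoints); only the leading
        principal components are kept, discarding noise-dominated ones."""
        mean = curves.mean(axis=0)
        U, s, Vt = np.linalg.svd(curves - mean, full_matrices=False)
        s[n_components:] = 0.0                 # drop noisy components
        return mean + (U * s) @ Vt

    rng = np.random.default_rng(6)
    t = np.linspace(0, 60, 30)                               # 2 s sampling
    clean = np.outer(rng.uniform(0.5, 2.0, 9), t * np.exp(-t / 15))
    noisy = clean + rng.normal(0, 0.5, clean.shape)          # one 3x3 square
    print(np.abs(pca_filter_curves(noisy) - clean).std())
    ```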

  8. Personality, Cognitive Style, Motivation, and Aptitude Predict Systematic Trends in Analytic Forecasting Behavior.

    PubMed

    Poore, Joshua C; Forlines, Clifton L; Miller, Sarah M; Regan, John R; Irvine, John M

    2014-12-01

    The decision sciences are increasingly challenged to advance methods for modeling analysts, accounting for both analytic strengths and weaknesses, to improve inferences taken from increasingly large and complex sources of data. We examine whether psychometric measures-personality, cognitive style, motivated cognition-predict analytic performance and whether psychometric measures are competitive with aptitude measures (i.e., SAT scores) as analyst sample selection criteria. A heterogeneous, national sample of 927 participants completed an extensive battery of psychometric measures and aptitude tests and was asked 129 geopolitical forecasting questions over the course of 1 year. Factor analysis reveals four dimensions among psychometric measures; dimensions characterized by differently motivated "top-down" cognitive styles predicted distinctive patterns in aptitude and forecasting behavior. These dimensions were not better predictors of forecasting accuracy than aptitude measures. However, multiple regression and mediation analysis reveals that these dimensions influenced forecasting accuracy primarily through bias in forecasting confidence. We also found that these facets were competitive with aptitude tests as forecast sampling criteria designed to mitigate biases in forecasting confidence while maximizing accuracy. These findings inform the understanding of individual difference dimensions at the intersection of analytic aptitude and demonstrate that they wield predictive power in applied, analytic domains.

  9. Personality, Cognitive Style, Motivation, and Aptitude Predict Systematic Trends in Analytic Forecasting Behavior

    PubMed Central

    Forlines, Clifton L.; Miller, Sarah M.; Regan, John R.; Irvine, John M.

    2014-01-01

    The decision sciences are increasingly challenged to advance methods for modeling analysts, accounting for both analytic strengths and weaknesses, to improve inferences taken from increasingly large and complex sources of data. We examine whether psychometric measures—personality, cognitive style, motivated cognition—predict analytic performance and whether psychometric measures are competitive with aptitude measures (i.e., SAT scores) as analyst sample selection criteria. A heterogeneous, national sample of 927 participants completed an extensive battery of psychometric measures and aptitude tests and was asked 129 geopolitical forecasting questions over the course of 1 year. Factor analysis reveals four dimensions among psychometric measures; dimensions characterized by differently motivated “top-down” cognitive styles predicted distinctive patterns in aptitude and forecasting behavior. These dimensions were not better predictors of forecasting accuracy than aptitude measures. However, multiple regression and mediation analysis reveals that these dimensions influenced forecasting accuracy primarily through bias in forecasting confidence. We also found that these facets were competitive with aptitude tests as forecast sampling criteria designed to mitigate biases in forecasting confidence while maximizing accuracy. These findings inform the understanding of individual difference dimensions at the intersection of analytic aptitude and demonstrate that they wield predictive power in applied, analytic domains. PMID:25983670

  10. Motion direction estimation based on active RFID with changing environment

    NASA Astrophysics Data System (ADS)

    Jie, Wu; Minghua, Zhu; Wei, He

    2018-05-01

    The gate system estimates the direction of RFID tag carriers as they pass through the gate. Normally, it is difficult to achieve and maintain high accuracy in estimating the motion direction of RFID tags, because the received signal strength of a tag changes sharply with the changing electromagnetic environment. In this paper, a method of motion direction estimation for RFID tags is presented. To improve estimation accuracy, a machine learning algorithm is used to obtain fitting functions of the data received by readers deployed inside and outside the gate, respectively. The fitted data are then sampled to obtain a standard vector, which is compared with template vectors to estimate the motion direction; the corresponding template vector is then updated according to the surrounding environment. We simulated and implemented the proposed method, and the results show that it can achieve and maintain high accuracy under constantly changing environmental conditions.
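
    The abstract does not specify the fitting model, similarity measure, or template-update rate, so the sketch below makes simple assumed choices (cubic polynomial fit, Euclidean distance, exponential template update) purely to illustrate the fit-sample-match-update loop.

    ```python
    import numpy as np

    def estimate_direction(t, rssi_in, rssi_out, templates, n_samples=20):
        """Direction estimation sketch: fit the RSSI traces of the inner
        and outer readers, resample them to a fixed-length 'standard
        vector', pick the closest template, then nudge that template
        toward the new observation (environment adaptation, rate assumed)."""
        grid = np.linspace(t.min(), t.max(), n_samples)
        vec = np.concatenate([
            np.interp(grid, t, np.polyval(np.polyfit(t, r, 3), t))
            for r in (rssi_in, rssi_out)])      # cubic fit, then resample
        dists = {k: np.linalg.norm(vec - v) for k, v in templates.items()}
        best = min(dists, key=dists.get)
        templates[best] = 0.9 * templates[best] + 0.1 * vec
        return best

    grid = np.linspace(0, 2, 20)
    templates = {                                # toy direction templates
        "entering": np.concatenate([-60 + 10 * grid, -70 + 15 * grid]),
        "leaving":  np.concatenate([-40 - 10 * grid, -40 - 15 * grid]),
    }
    t = np.linspace(0, 2, 50)
    rng = np.random.default_rng(7)
    print(estimate_direction(t, -60 + 10 * t + rng.normal(0, 1, 50),
                             -70 + 15 * t + rng.normal(0, 1, 50), templates))
    ```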

  11. Single-Step BLUP with Varying Genotyping Effort in Open-Pollinated Picea glauca.

    PubMed

    Ratcliffe, Blaise; El-Dien, Omnia Gamal; Cappa, Eduardo P; Porth, Ilga; Klápště, Jaroslav; Chen, Charles; El-Kassaby, Yousry A

    2017-03-10

    Maximization of genetic gain in forest tree breeding programs is contingent on the accuracy of the predicted breeding values and precision of the estimated genetic parameters. We investigated the effect of the combined use of contemporary pedigree information and genomic relatedness estimates on the accuracy of predicted breeding values and precision of estimated genetic parameters, as well as rankings of selection candidates, using single-step genomic evaluation (HBLUP). In this study, two traits with diverse heritabilities [tree height (HT) and wood density (WD)] were assessed at various levels of family genotyping efforts (0, 25, 50, 75, and 100%) from a population of white spruce ( Picea glauca ) consisting of 1694 trees from 214 open-pollinated families, representing 43 provenances in Québec, Canada. The results revealed that HBLUP bivariate analysis is effective in reducing the known bias in heritability estimates of open-pollinated populations, as it exposes hidden relatedness, potential pedigree errors, and inbreeding. The addition of genomic information in the analysis considerably improved the accuracy in breeding value estimates by accounting for both Mendelian sampling and historical coancestry that were not captured by the contemporary pedigree alone. Increasing family genotyping efforts were associated with continuous improvement in model fit, precision of genetic parameters, and breeding value accuracy. Yet, improvements were observed even at minimal genotyping effort, indicating that even modest genotyping effort is effective in improving genetic evaluation. The combined utilization of both pedigree and genomic information may be a cost-effective approach to increase the accuracy of breeding values in forest tree breeding programs where shallow pedigrees and large testing populations are the norm. Copyright © 2017 Ratcliffe et al.
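
    For readers unfamiliar with HBLUP: the abstract does not restate the construction, but in the general single-step literature (e.g., Legarra et al. 2009; Aguilar et al. 2010) the pedigree relationship matrix A and the genomic relationship matrix G of the genotyped subset are blended as

    ```latex
    % Single-step (H-matrix) relationship used by HBLUP:
    % A    = pedigree relationship matrix,
    % G    = genomic relationship matrix (genotyped individuals),
    % A_22 = pedigree block for the genotyped subset.
    H^{-1} = A^{-1} +
    \begin{pmatrix}
    0 & 0 \\
    0 & G^{-1} - A_{22}^{-1}
    \end{pmatrix}
    ```

    and breeding values are then predicted from the usual mixed-model equations with H replacing A, which is how partial genotyping (25-100% of families here) enters a single joint analysis.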

  12. Crowdsourcing for translational research: analysis of biomarker expression using cancer microarrays

    PubMed Central

    Lawson, Jonathan; Robinson-Vyas, Rupesh J; McQuillan, Janette P; Paterson, Andy; Christie, Sarah; Kidza-Griffiths, Matthew; McDuffus, Leigh-Anne; Moutasim, Karwan A; Shaw, Emily C; Kiltie, Anne E; Howat, William J; Hanby, Andrew M; Thomas, Gareth J; Smittenaar, Peter

    2017-01-01

    Background: Academic pathology suffers from an acute and growing lack of workforce resource. This especially impacts on translational elements of clinical trials, which can require detailed analysis of thousands of tissue samples. We tested whether crowdsourcing – enlisting help from the public – is a sufficiently accurate method to score such samples. Methods: We developed a novel online interface to train and test lay participants on cancer detection and immunohistochemistry scoring in tissue microarrays. Lay participants initially performed cancer detection on lung cancer images stained for CD8, and we measured how extending a basic tutorial by annotated example images and feedback-based training affected cancer detection accuracy. We then applied this tutorial to additional cancer types and immunohistochemistry markers – bladder/ki67, lung/EGFR, and oesophageal/CD8 – to establish accuracy compared with experts. Using this optimised tutorial, we then tested lay participants' accuracy on immunohistochemistry scoring of lung/EGFR and bladder/p53 samples. Results: We observed that for cancer detection, annotated example images and feedback-based training both improved accuracy compared with a basic tutorial only. Using this optimised tutorial, we demonstrate highly accurate (>0.90 area under curve) detection of cancer in samples stained with nuclear, cytoplasmic and membrane cell markers. We also observed high Spearman correlations between lay participants and experts for immunohistochemistry scoring (0.91 (0.78, 0.96) and 0.97 (0.91, 0.99) for lung/EGFR and bladder/p53 samples, respectively). Conclusions: These results establish crowdsourcing as a promising method to screen large data sets for biomarkers in cancer pathology research across a range of cancers and immunohistochemical stains. PMID:27959886

  13. Influence of scanning parameters on the estimation accuracy of control points of B-spline surfaces

    NASA Astrophysics Data System (ADS)

    Aichinger, Julia; Schwieger, Volker

    2018-04-01

    This contribution deals with the influence of scanning parameters, such as scanning distance, incidence angle, surface quality, and sampling width, on the average estimated standard deviations of the positions of control points of B-spline surfaces, which are used to model surfaces from terrestrial laser scanning (TLS) data. The influence of the scanning parameters is analyzed by Monte Carlo based variance analysis. Samples were generated for both non-correlated and correlated data, using Latin hypercube and replicated Latin hypercube sampling algorithms, respectively. The investigations show that the most influential scanning parameter is the distance from the laser scanner to the object. The angle of incidence shows a significant effect at distances of 50 m and longer, the surface quality contributes only negligible effects, and the sampling width has no influence. Optimal scanning parameters are therefore the smallest possible object distance, an angle of incidence close to 0°, and the highest surface quality. Considering correlations improves the estimation accuracy and underlines the importance of complete stochastic models for TLS measurements.
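
    The Latin hypercube step named above is readily available in SciPy. The sketch below draws a stratified sample over the four scanning parameters and pushes each draw through a placeholder evaluation; the parameter ranges and the `fit_bspline_std` stub are assumptions for illustration, not the paper's design.

    ```python
    import numpy as np
    from scipy.stats import qmc

    def fit_bspline_std(p):
        """Placeholder for the real TLS simulation + B-spline adjustment:
        returns a fake control-point standard deviation that grows with
        distance and incidence angle, just so the demo runs."""
        dist, angle, quality, width = p
        return 0.001 * dist * (1 + angle / 60) * (2 - quality)

    # Latin hypercube draw of the four scanning parameters (assumed ranges):
    # distance (m), incidence angle (deg), surface quality, sampling width (m).
    sampler = qmc.LatinHypercube(d=4, seed=8)
    unit = sampler.random(n=1000)                   # one stratum per row/dim
    params = qmc.scale(unit, [1.0, 0.0, 0.0, 0.001], [50.0, 60.0, 1.0, 0.01])

    # Monte Carlo variance analysis: record the output for each draw.
    results = np.array([fit_bspline_std(p) for p in params])
    print(results.mean(), results.std())
    ```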

  14. A clustering algorithm for sample data based on environmental pollution characteristics

    NASA Astrophysics Data System (ADS)

    Chen, Mei; Wang, Pengfei; Chen, Qiang; Wu, Jiadong; Chen, Xiaoyun

    2015-04-01

    Environmental pollution has become an issue of serious international concern in recent years. Among the receptor-oriented pollution models, CMB, PMF, UNMIX, and PCA are widely used as source apportionment models. To improve the accuracy of source apportionment and classify the sample data for these models, this study proposes an easy-to-use, high-dimensional EPC algorithm that not only organizes all of the sample data into different groups according to the similarities in pollution characteristics such as pollution sources and concentrations but also simultaneously detects outliers. The main clustering process consists of selecting the first unlabelled point as the cluster centre, then assigning each data point in the sample dataset to its most similar cluster centre according to both the user-defined threshold and the value of similarity function in each iteration, and finally modifying the clusters using a method similar to k-Means. The validity and accuracy of the algorithm are tested using both real and synthetic datasets, which makes the EPC algorithm practical and effective for appropriately classifying sample data for source apportionment models and helpful for better understanding and interpreting the sources of pollution.
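
    The clustering loop described above translates almost line-for-line into code. The abstract does not give the similarity function, so a Gaussian kernel is assumed here; threshold and refinement count are also illustrative.

    ```python
    import numpy as np

    def epc_cluster(X, threshold, n_refine=5):
        """Sketch of the described loop: take the first unlabelled point
        as a centre, absorb every unlabelled point whose similarity beats
        `threshold`, then refine centres k-means-style. Points never
        absorbed by any cluster are flagged as outliers (label -1)."""
        labels = np.full(len(X), -1)
        centres = []
        for i in range(len(X)):
            if labels[i] != -1:
                continue
            sim = np.exp(-np.linalg.norm(X - X[i], axis=1) ** 2)
            member = (sim >= threshold) & (labels == -1)
            if member.sum() > 1:                 # singletons stay outliers
                labels[member] = len(centres)
                centres.append(X[member].mean(0))
        centres = np.array(centres)
        for _ in range(n_refine):                # k-means-style refinement
            assigned = labels != -1
            d = np.linalg.norm(X[assigned, None] - centres[None], axis=2)
            labels[assigned] = np.argmin(d, axis=1)
            for k in range(len(centres)):
                if np.any(labels == k):
                    centres[k] = X[labels == k].mean(0)
        return labels, centres

    rng = np.random.default_rng(9)
    X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2)),
                   [[10.0, 10.0]]])              # two sources + one outlier
    labels, centres = epc_cluster(X, threshold=0.2)
    print(len(centres), np.sum(labels == -1))    # clusters found, outliers
    ```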

  15. Neutron Tomography at the Los Alamos Neutron Science Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, William Riley

    Neutron imaging is an incredibly powerful tool for non-destructive sample characterization and materials science. Neutron tomography is one technique that results in a three-dimensional model of the sample, representing the interaction of the neutrons with the sample. This relies both on reliable data acquisition and on image processing after acquisition. Over the course of the project, the focus has changed from the former to the latter, culminating in a large-scale reconstruction of a meter-long fossilized skull. The full reconstruction is not yet complete, though tools have been developed to improve the speed and accuracy of the reconstruction. This project helps to improve the capabilities of LANSCE and LANL with regards to imaging large or unwieldy objects.

  16. Cross-Sectional HIV Incidence Estimation in HIV Prevention Research

    PubMed Central

    Brookmeyer, Ron; Laeyendecker, Oliver; Donnell, Deborah; Eshleman, Susan H.

    2013-01-01

    Accurate methods for estimating HIV incidence from cross-sectional samples would have great utility in prevention research. This report describes recent improvements in cross-sectional methods that significantly improve their accuracy. These improvements are based on the use of multiple biomarkers to identify recent HIV infections. These multi-assay algorithms (MAAs) use assays in a hierarchical approach for testing that minimizes the effort and cost of incidence estimation. These MAAs do not require mathematical adjustments for accurate estimation of the incidence rates in study populations in the year prior to sample collection. MAAs provide a practical, accurate, and cost-effective approach for cross-sectional HIV incidence estimation that can be used for HIV prevention research and global epidemic monitoring. PMID:23764641

  17. A multi-view face recognition system based on cascade face detector and improved Dlib

    NASA Astrophysics Data System (ADS)

    Zhou, Hongjun; Chen, Pei; Shen, Wei

    2018-03-01

    In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. The method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to identify a suitable monitoring scheme. For face detection, the cascade detector extracts Haar-like features from the training samples, and these features are used to train a cascade classifier with the AdaBoost algorithm. For face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. We apply the proposed method to face images taken from different viewing directions, including horizontal, overhead, and looking-up views, and investigate a suitable monitoring scheme. The method works well for multi-view face recognition, and simulations and tests show satisfactory experimental results.
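
    The detection stage of such a pipeline is standard enough to sketch with OpenCV; the stock frontal-face cascade stands in for the multi-view cascades the paper trains itself, and the file names are hypothetical.

    ```python
    import cv2

    # Detection stage: a Haar cascade scans the frame at several scales.
    # OpenCV's stock model substitutes for the paper's trained cascades.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    frame = cv2.imread("person.jpg")                  # hypothetical input
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        crop = gray[y:y + h, x:x + w]
        # The recognition stage would follow here: Dlib computes a 128-D
        # embedding for `crop`, and the paper's modified distance model
        # compares it against enrolled identities.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detected.jpg", frame)
    ```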

  18. Product Quality Modelling Based on Incremental Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Wang, J.; Zhang, W.; Qin, B.; Shi, W.

    2012-05-01

    Incremental support vector machine (ISVM) learning is a method developed in recent years on the foundations of statistical learning theory. It is suitable for sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundancy that affect learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. First, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) conditions; then the distance from each margin vector to the current decision hyperplane is calculated to evaluate its importance, and margin vectors whose distance exceeds a specified value are removed; finally, the original support vectors and the remaining margin vectors are used to update the SVM. The proposed MISVM eliminates unimportant samples, such as noise samples, while preserving the important ones. MISVM was evaluated on two public datasets and one field dataset of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method improves prediction accuracy and training speed effectively. Furthermore, it can provide decision support and analysis tools for automatic control of product quality, and it extends to other process industries, such as chemical and manufacturing processes.
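
    A hedged sketch of the described update using scikit-learn: support vectors are always retained, near-hyperplane points are kept as important margin vectors, distant points are dropped, and the SVM is retrained with the new batch. The distance cutoff is an assumed value, and the distances are in functional-margin units rather than the paper's exact measure.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def misvm_update(svm, X_old, y_old, X_new, y_new, max_dist=0.5):
        """MISVM-style incremental update (sketch): keep support vectors
        and near-margin points, drop the rest, retrain with the new batch."""
        dist = np.abs(svm.decision_function(X_old))   # functional margin
        keep = np.zeros(len(X_old), dtype=bool)
        keep[svm.support_] = True                     # always keep SVs
        keep |= dist <= max_dist                      # keep near-margin points
        X_keep = np.vstack([X_old[keep], X_new])
        y_keep = np.concatenate([y_old[keep], y_new])
        return SVC(kernel="rbf").fit(X_keep, y_keep), keep.sum()

    rng = np.random.default_rng(10)
    X1 = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
    y1 = np.array([0] * 100 + [1] * 100)
    svm = SVC(kernel="rbf").fit(X1, y1)
    X2 = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
    y2 = np.array([0] * 20 + [1] * 20)
    svm2, n_kept = misvm_update(svm, X1, y1, X2, y2)
    print(n_kept, svm2.score(X2, y2))
    ```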

  19. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    NASA Astrophysics Data System (ADS)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not previously been examined for ERS applications. Demonstrated with a showcase regression-tree model mapping land imperviousness (0-100%) using Landsat images, our results show that BVD can reveal sources of estimation error, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity, while BVD variance maps can indicate potential effects of ensemble methods (e.g., bagging) and inform efficient training-sample allocation: training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than to regions covering larger areas. By examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e., the error is the absolute or squared difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
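
    The per-sample decomposition can be approximated with a bootstrap ensemble: train many replicates, then split each sample's expected squared error into bias² (mean prediction versus its reference value) and variance (spread across replicates). A minimal sketch on fake data follows; note that measuring bias against noisy observations folds label noise into the bias term, a simplification this sketch accepts.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def per_pixel_bvd(X, y, n_boot=100, seed=11):
        """Per-sample bias-variance decomposition (sketch): bootstrap
        replicates of a regression tree; bias^2 = (mean prediction - y)^2,
        variance = spread of predictions across replicates. Mapping these
        two numbers back to pixel locations gives BVD maps."""
        rng = np.random.default_rng(seed)
        preds = np.empty((n_boot, len(X)))
        for b in range(n_boot):
            idx = rng.integers(0, len(X), len(X))    # bootstrap resample
            tree = DecisionTreeRegressor(max_depth=6).fit(X[idx], y[idx])
            preds[b] = tree.predict(X)
        bias2 = (preds.mean(axis=0) - y) ** 2        # noise folded in here
        variance = preds.var(axis=0)
        return bias2, variance

    rng = np.random.default_rng(12)
    X = rng.uniform(0, 1, (500, 4))                  # fake spectral bands
    y = 100 * X[:, 0] ** 2 + rng.normal(0, 5, 500)   # fake imperviousness
    bias2, var = per_pixel_bvd(X, y)
    print(bias2.mean(), var.mean())
    ```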

  20. Using Mathematical Algorithms to Modify Glomerular Filtration Rate Estimation Equations

    PubMed Central

    Zhu, Bei; Wu, Jianqing; Zhu, Jin; Zhao, Weihong

    2013-01-01

    Background: Estimation equations provide a rapid and low-cost method of evaluating glomerular filtration rate (GFR). Previous studies indicated that the Modification of Diet in Renal Disease (MDRD), Chronic Kidney Disease-Epidemiology (CKD-EPI), and MacIsaac equations need further modification for application in Chinese populations. This study was therefore designed to modify the three equations and to compare the diagnostic accuracy of the equations before and after modification. Methodology: Using 99mTc-DTPA renal dynamic imaging as the reference GFR (rGFR), the MDRD, CKD-EPI, and MacIsaac equations were modified by two mathematical algorithms: hill climbing and simulated annealing. Results: A total of 703 Chinese subjects were recruited, with an average rGFR of 77.14±25.93 ml/min. The modification process was based on a random sample of 80% of subjects in each GFR level as a training set, with the remaining 20% of subjects as a validation set. After modification, all three equations showed significant improvement in slope, intercept, correlation coefficient, root mean square error (RMSE), total deviation index (TDI), and the proportion of estimated GFR (eGFR) within 10% and 30% deviation of rGFR (P10 and P30). Of the three modified equations, the modified CKD-EPI equation showed the best accuracy. Conclusions: Mathematical algorithms can be a valuable tool for modifying GFR equations. The accuracy of all three modified equations improved significantly, with the modified CKD-EPI equation being the optimal one. PMID:23472113
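
    The simulated-annealing step can be sketched on an MDRD-style form, eGFR = a · Scr^b · age^c: perturb the coefficients, always accept improvements, and accept worse moves with a temperature-dependent probability. Coefficients, step sizes, and the cooling schedule below are illustrative, not the study's values.

    ```python
    import numpy as np

    def anneal_gfr_coefficients(scr, age, rgfr, start, n_iter=20000, seed=13):
        """Simulated-annealing sketch for re-fitting eGFR = a*Scr^b*age^c
        against reference GFR (illustrative schedule and step sizes)."""
        rng = np.random.default_rng(seed)
        def rmse(p):
            a, b, c = p
            return np.sqrt(np.mean((a * scr**b * age**c - rgfr) ** 2))
        best = cur = np.array(start, float)
        best_e = cur_e = rmse(cur)
        for i in range(n_iter):
            temp = 1.0 * (1 - i / n_iter) + 1e-9        # linear cooling
            cand = cur + rng.normal(0, [5.0, 0.02, 0.01])
            e = rmse(cand)
            # Metropolis acceptance: always take improvements, sometimes worse.
            if e < cur_e or rng.random() < np.exp((cur_e - e) / temp):
                cur, cur_e = cand, e
                if e < best_e:
                    best, best_e = cand.copy(), e
        return best, best_e

    rng = np.random.default_rng(14)
    scr = rng.uniform(0.6, 3.0, 200)                    # creatinine, mg/dL
    age = rng.uniform(20, 80, 200)
    rgfr = 175 * scr**-1.154 * age**-0.203 + rng.normal(0, 5, 200)
    print(anneal_gfr_coefficients(scr, age, rgfr, start=[150, -1.0, -0.2]))
    ```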

  1. Usefulness of transpapillary bile duct brushing cytology and forceps biopsy for improved diagnosis in patients with biliary strictures.

    PubMed

    Kitajima, Yasuhiro; Ohara, Hirotaka; Nakazawa, Takahiro; Ando, Tomoaki; Hayashi, Kazuki; Takada, Hiroki; Tanaka, Hajime; Ogawa, Kanto; Sano, Hitoshi; Togawa, Shozo; Naito, Itaru; Hirai, Masaaki; Ueno, Koichiro; Ban, Tessin; Miyabe, Katuyuki; Yamashita, Hiroaki; Yoshimura, Norihiro; Akita, Shinji; Gotoh, Kazuo; Joh, Takashi

    2007-10-01

    Transpapillary bile duct brushing cytology and/or forceps biopsy was performed in the presence of an indwelling guidewire in patients with biliary stricture, and the treatment time, overall diagnosis rate, diagnosis rate for each disease, complications, and influence on subsequent biliary drainage were investigated. After endoscopic retrograde cholangiography, brushing cytology was performed, followed by forceps biopsy. In patients with obstructive jaundice, endoscopic biliary drainage (EBD) was subsequently performed. To investigate the influence of bile duct brushing cytology and forceps biopsy on EBD, patients who underwent subsequent EBD with a plastic stent were compared with patients who underwent EBD alone. Samples for cytology were collected successfully in all cases, and the sensitivity for malignancy/benignity, specificity, and accuracy were 71.6%, 100%, and 75.0%, respectively. Biopsy sampling was successful in 51 patients, and samples applicable to evaluation were collected in all 51. The sensitivity for malignancy/benignity, specificity, and accuracy were 65.2%, 100%, and 68.6%, respectively. Combining the two procedures increased the sensitivity and accuracy to 73.5% and 76.6%, respectively. The time required for cytology and biopsy was 11.7 min, which is relatively short, and cytology and biopsy did not affect drainage. Regarding complications, bile duct perforation occurred during biopsy in one patient (1.9%) but resolved rapidly after endoscopic biliary drainage. Transpapillary brushing cytology and forceps biopsy could be performed in a short time; the diagnosis rate was high, and the incidence of complications was low, with no influence on subsequent biliary drainage.

  2. Simulation of range imaging-based estimation of respiratory lung motion. Influence of noise, signal dimensionality and sampling patterns.

    PubMed

    Wilms, M; Werner, R; Blendowski, M; Ortmüller, J; Handels, H

    2014-01-01

    A major problem associated with the irradiation of thoracic and abdominal tumors is respiratory motion. In clinical practice, motion compensation approaches are frequently steered by low-dimensional breathing signals (e.g., spirometry) and patient-specific correspondence models, which are used to estimate the sought internal motion given a signal measurement. Recently, the use of multidimensional signals derived from range images of the moving skin surface has been proposed to better account for complex motion patterns. In this work, a simulation study is carried out to investigate the motion estimation accuracy of such multidimensional signals and the influence of noise, the signal dimensionality, and different sampling patterns (points, lines, regions). A diffeomorphic correspondence modeling framework is employed to relate multidimensional breathing signals derived from simulated range images to internal motion patterns represented by diffeomorphic non-linear transformations. Furthermore, an automatic approach for the selection of optimal signal combinations/patterns within this framework is presented. This simulation study focuses on lung motion estimation and is based on 28 4D CT data sets. The results show that the use of multidimensional signals instead of one-dimensional signals significantly improves the motion estimation accuracy, which is, however, highly affected by noise. Only small differences exist between different multidimensional sampling patterns (lines and regions). Automatically determined optimal combinations of points and lines do not lead to accuracy improvements compared to results obtained by using all points or lines. Our results show the potential of multidimensional breathing signals derived from range images for the model-based estimation of respiratory motion in radiation therapy.

  3. Impact of sampling strategy on stream load estimates in till landscape of the Midwest

    USGS Publications Warehouse

    Vidon, P.; Hubbard, L.E.; Soyeux, E.

    2009-01-01

    Accurately estimating solute loads in streams during storms is critical to determining total maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment, so that error on solute load calculations can be taken into account by landscape managers and sampling strategies optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.

  4. Interference by the activated sludge matrix on the analysis of soluble microbial products in wastewater.

    PubMed

    Potvin, Christopher M; Zhou, Hongde

    2011-11-01

    The objective of this study was to demonstrate the complex matrix effects caused by chemical constituents of the activated sludge matrix on the analysis of key soluble microbial products (SMP), including proteins, humics, carbohydrates, and polysaccharides, in activated sludge samples. Emphasis was placed on comparing the commonly used standard-curve technique with standard addition (SA), a technique in which the analytical responses are measured for sample solutions spiked with known quantities of analytes. The results showed that SA provided a great improvement in compensating for SMP recovery, and thus improved measurement accuracy, by correcting for matrix effects. Analyte recovery was found to be highly dependent on sample dilution, and changed with extraction technique, storage conditions and sample composition. Storage of sample extracts by freezing changed SMP concentrations dramatically, as did storage at 4°C for as little as 1 day. Copyright © 2011 Elsevier Ltd. All rights reserved.
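
    A minimal sketch of the standard-addition calculation the abstract describes: the response is regressed on the spiked amount, and the native concentration is recovered from the x-intercept. The spike levels and absorbance values below are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical standard-addition series for one activated-sludge extract:
    # known protein spikes (mg/L) and the measured colorimetric responses.
    spike = np.array([0.0, 5.0, 10.0, 20.0])        # added analyte
    signal = np.array([0.21, 0.35, 0.48, 0.76])     # absorbance units

    slope, intercept = np.polyfit(spike, signal, 1)

    # signal = slope * (C_native + C_spiked), so at zero spike the response
    # equals slope * C_native; the native concentration follows directly.
    c_native = intercept / slope
    print(f"native analyte concentration ≈ {c_native:.1f} mg/L")
    ```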

  5. The Effectiveness of Vowel Production Training with Real-Time Spectrographic Displays for Children with Profound Hearing Impairment.

    NASA Astrophysics Data System (ADS)

    Ertmer, David Joseph

    1994-01-01

    The effectiveness of vowel production training incorporating direct instruction in combination with spectrographic models and feedback was assessed for two children with profound hearing impairment. A multiple-baseline design across behaviors, with replication across subjects, was implemented to determine whether vowel production accuracy improved following the introduction of treatment. Listener judgments of vowel correctness were obtained during the baseline, training, and follow-up phases of the study. Data were analyzed through visual inspection of changes in levels of accuracy, changes in trends of accuracy, and changes in variability of accuracy within and across phases. One subject showed significant improvement on all three trained vowel targets; the second subject improved on the first trained target only (Kolmogorov-Smirnov two-sample test). Performance trends during training sessions suggest that continued treatment would have resulted in further improvement for both subjects. Vowel duration, fundamental frequency, and the frequency locations of the first and second formants were measured before and after training. Acoustic analysis revealed highly individualized changes in the frequency locations of F1 and F2. Vowels which received the most training were maintained at higher levels than those introduced later in training. Some generalization of practiced vowel targets to untrained words was observed in both subjects. A bias towards judging productions as "correct" was observed for both subjects during self-evaluation tasks using spectrographic feedback.

  6. Development, preliminary usability and accuracy testing of the EBMT 'eGVHD App' to support GvHD assessment according to NIH criteria-a proof of concept.

    PubMed

    Schoemans, H; Goris, K; Durm, R V; Vanhoof, J; Wolff, D; Greinix, H; Pavletic, S; Lee, S J; Maertens, J; Geest, S D; Dobbels, F; Duarte, R F

    2016-08-01

    The EBMT Complications and Quality of Life Working Party has developed a computer-based algorithm, the 'eGVHD App', using a user-centered design process. Accuracy was tested using a quasi-experimental crossover design with four expert-reviewed case vignettes in a convenience sample of 28 clinical professionals. Perceived usefulness was evaluated with the technology acceptance model (TAM) and user satisfaction with the Post-Study System Usability Questionnaire (PSSUQ). User experience was positive, with a median of 6 TAM points (interquartile range: 1) and favorable median total and subscale PSSUQ scores. The initial standard-practice assessment of the vignettes yielded 65% correct results for diagnosis and 45% for scoring. The 'eGVHD App' significantly increased diagnostic and scoring accuracy to 93% (+28%) and 88% (+43%), respectively (both P<0.05). The same trend was observed in the repeated analysis of case 2: accuracy improved when using the App (+31% for diagnosis and +39% for scoring), whereas performance tended to decrease once the App was taken away. The 'eGVHD App' could dramatically improve the quality of care and research, as it increased the performance of the whole user group by about 30% at the first assessment and showed a trend toward improvement of individual performance on repeated case evaluation.

  7. Use of fecal volatile organic compound analysis to discriminate between non-vaccinated and BCG—Vaccinated cattle prior to and after Mycobacterium bovis challenge

    PubMed Central

    Stahl, Randal; Waters, W. Ray; Palmer, Mitchell V.; Nol, Pauline; Rhyan, Jack C.; VerCauteren, Kurt C.; Koziel, Jacek A.

    2017-01-01

    Bovine tuberculosis is a zoonotic disease of global public health concern. Development of diagnostic tools to improve test accuracy and efficiency in domestic livestock and enable surveillance of wildlife reservoirs would improve disease management and eradication efforts. Use of volatile organic compound (VOC) analysis in breath and fecal samples is being developed and optimized as a means to detect disease in humans and animals. In this study we demonstrate that VOCs present in fecal samples can be used to discriminate between non-vaccinated and BCG-vaccinated cattle prior to and after Mycobacterium bovis challenge. PMID:28686691

  8. Validation sampling can reduce bias in health care database studies: an illustration using influenza vaccination effectiveness.

    PubMed

    Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L

    2013-08-01

    Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.
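
    A minimal sketch of the imputation idea described above, assuming a hypothetical confounder ("frailty") observed only in a validation subset: a model fitted on the validation records predicts the confounder for the full sample. Variable names and data are invented; this is not the authors' analysis.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)

    # Full administrative sample: age and comorbidity score are recorded.
    n = 5000
    age = rng.normal(75, 6, n)
    comorb = rng.poisson(2, n)

    # Frailty is a confounder measured only in a small validation cohort.
    frail = (0.1 * (age - 75) + 0.3 * comorb + rng.normal(0, 1, n) > 1).astype(int)
    in_validation = rng.random(n) < 0.10          # ~10% validation sample

    # Fit an imputation model on the validation records only...
    X = np.column_stack([age, comorb])
    model = LogisticRegression().fit(X[in_validation], frail[in_validation])

    # ...then impute the confounder (as a probability) in the full sample.
    frail_hat = model.predict_proba(X)[:, 1]
    frail_hat[in_validation] = frail[in_validation]   # keep observed values
    print("imputed frailty prevalence:", round(frail_hat.mean(), 3))
    ```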

  9. Validation sampling can reduce bias in healthcare database studies: an illustration using influenza vaccination effectiveness

    PubMed Central

    Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael

    2014-01-01

    Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144

  10. Opportunities for international collaboration in dog breeding from the sharing of pedigree and health data.

    PubMed

    Fikse, W F; Malm, S; Lewis, T W

    2013-09-01

    Pooling of pedigree and phenotype data from different countries may improve the accuracy of derived indicators of both genetic diversity and genetic merit for traits of interest. This study demonstrates significant migration of individuals of four pedigree dog breeds between Sweden and the United Kingdom. Correlations between estimates of genetic merit (estimated breeding values, EBVs) from the Fédération Cynologique Internationale and the British Veterinary Association/Kennel Club evaluations of hip dysplasia (HD) were strong and favourable, indicating that both scoring schemes capture substantially the same genetic trait. Therefore, pooled use of phenotypic data on hip dysplasia would be expected to improve the accuracy of EBVs for HD in both countries due to the increased sample size. Copyright © 2013. Published by Elsevier Ltd.

  11. Multisensory information boosts numerical matching abilities in young children.

    PubMed

    Jordan, Kerry E; Baker, Joseph

    2011-03-01

    This study presents the first evidence that preschool children perform more accurately in a numerical matching task when given multisensory rather than unisensory information about number. Three- to 5-year-old children learned to play a numerical matching game on a touchscreen computer, which asked them to match a sample numerosity with a numerically equivalent choice numerosity. Samples consisted of a series of visual squares on some trials, a series of auditory tones on other trials, and synchronized squares and tones on still other trials. Children performed at chance on this matching task when provided with either type of unisensory sample, but improved significantly when provided with multisensory samples. There was no speed–accuracy tradeoff between unisensory and multisensory trial types. Thus, these findings suggest that intersensory redundancy may improve young children’s abilities to match numerosities.

  12. Detection of dechallenge in spontaneous reporting systems: a comparison of Bayes methods.

    PubMed

    Banu, A Bazila; Alias Balamurugan, S Appavu; Thirumalaikolundusubramanian, Ponniah

    2014-01-01

    Dechallenge is a response observed as the reduction or disappearance of adverse drug reactions (ADRs) on withdrawal of a drug from a patient. Currently available algorithms to detect dechallenge have limitations; hence, there is a need to compare available new methods. To detect dechallenge in spontaneous reporting systems, the data-mining algorithms Naive Bayes and Improved Naive Bayes were applied, and their performance was compared in terms of accuracy and error. Analyzing factors of dechallenge such as outcome and disease category will help medical practitioners and pharmaceutical industries determine the reasons for dechallenge and take essential steps toward drug safety. Adverse drug reactions for the years 2011 and 2012 were downloaded from the United States Food and Drug Administration's database. The classification results showed that the Improved Naive Bayes algorithm outperformed Naive Bayes, with an accuracy of 90.11% and an error of 9.8% in detecting dechallenge. Detecting dechallenge for unknown samples is essential for proper prescription. To overcome the issues exposed by the Naive Bayes algorithm, the Improved Naive Bayes algorithm can be used to detect dechallenge with higher accuracy and minimal error.
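
    The sketch below shows the general shape of such a Naive Bayes classification on categorical report fields. The feature encoding and data are invented stand-ins, and the paper's "Improved" variant is not reproduced.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import CategoricalNB

    rng = np.random.default_rng(1)

    # Hypothetical integer-encoded report fields (outcome code, disease
    # category, drug class) with a binary dechallenge label; not FDA data.
    X = rng.integers(0, 4, size=(1000, 3))
    y = (X[:, 0] + X[:, 1] + rng.integers(0, 3, 1000) > 4).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = CategoricalNB().fit(X_tr, y_tr)
    print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
    ```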

  13. Sampling strategies to improve passive optical remote sensing of river bathymetry

    USGS Publications Warehouse

    Legleiter, Carl; Overstreet, Brandon; Kinzel, Paul J.

    2018-01-01

    Passive optical remote sensing of river bathymetry involves establishing a relation between depth and reflectance that can be applied throughout an image to produce a depth map. Building upon the Optimal Band Ratio Analysis (OBRA) framework, we introduce sampling strategies for constructing calibration data sets that lead to strong relationships between an image-derived quantity and depth across a range of depths. Progressively excluding observations that exceed a series of cutoff depths from the calibration process improved the accuracy of depth estimates and allowed the maximum detectable depth (d_max) to be inferred directly from an image. Depth retrieval in two distinct rivers also was enhanced by a stratified version of OBRA that partitions field measurements into a series of depth bins to avoid biases associated with under-representation of shallow areas in typical field data sets. In the shallower, clearer of the two rivers, including the deepest field observations in the calibration data set did not compromise depth retrieval accuracy, suggesting that d_max was not exceeded and the reach could be mapped without gaps. Conversely, in the deeper and more turbid stream, progressive truncation of input depths yielded a plausible estimate of d_max consistent with theoretical calculations based on field measurements of light attenuation by the water column. This result implied that the entire channel, including pools, could not be mapped remotely. However, truncation improved the accuracy of depth estimates in areas shallower than d_max, which comprise the majority of the channel and are of primary interest for many habitat-oriented applications.
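
    The progressive-truncation idea can be sketched on synthetic data, assuming the image-derived quantity saturates beyond the maximum detectable depth; the fit degrades once the cutoff passes d_max. This is an illustration, not the OBRA implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic calibration set: the image-derived quantity x tracks depth
    # only up to the maximum detectable depth d_max, then saturates.
    d_max = 2.5
    depth = rng.uniform(0.1, 4.0, 400)
    x = 0.8 * np.minimum(depth, d_max) + rng.normal(0, 0.05, 400)

    # Progressively exclude observations beyond a series of cutoff depths
    # and watch the strength of the x-depth relation.
    for cutoff in np.arange(1.0, 4.01, 0.5):
        keep = depth <= cutoff
        r = np.corrcoef(x[keep], depth[keep])[0, 1]
        print(f"cutoff {cutoff:.1f} m: n={keep.sum():3d}, R^2={r**2:.3f}")
    # R^2 stays high until the cutoff passes d_max and then drops, which is
    # how truncation lets d_max be inferred from the image itself.
    ```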

  14. High-accuracy reference standards for two-photon absorption in the 680–1050 nm wavelength range

    PubMed Central

    de Reguardati, Sophie; Pahapill, Juri; Mikhailov, Alexander; Stepanenko, Yuriy; Rebane, Aleksander

    2016-01-01

    Degenerate two-photon absorption (2PA) of a series of organic fluorophores is measured using a femtosecond fluorescence excitation method in the wavelength range λ2PA = 680–1050 nm at a ~100 MHz pulse repetition rate. The relative 2PA spectral shape is obtained with an estimated accuracy of 5%, and the absolute 2PA cross section is measured at selected wavelengths with an accuracy of 8%. Significant improvement of the accuracy is achieved by means of rigorous evaluation of the quadratic dependence of the fluorescence signal on the incident photon flux over the whole wavelength range, by comparing results obtained from two independent experiments, and by meticulous evaluation of critical experimental parameters, including the excitation spatial and temporal pulse shape, laser power and sample geometry. Application of the reference standards in nonlinear transmittance measurements is discussed. PMID:27137334

  15. Four Reasons to Question the Accuracy of a Biotic Index; the Risk of Metric Bias and the Scope to Improve Accuracy

    PubMed Central

    Monaghan, Kieran A.

    2016-01-01

    Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices using the example of the BMWP, which has the best supporting data. A difference in body size between taxa from the respective tolerance classes is a common feature of indicator systems; in some it represents a trend ranging from comparatively small pollution-tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias; however, positive bias may occur when equipment (e.g., mesh size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes, with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e., DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism. At any particular site, the net bias is a probabilistic function of the sample data, resulting in an error variance around an average deviation. By following standardized protocols and assigning precise reference conditions, the error variance of their comparative ratio (test-site:reference) can be measured and used to estimate the accuracy of the resultant assessment. PMID:27392036
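
    A toy simulation of the body-size mechanism described above, assuming an ASPT-style score (mean tolerance score of the detected families) and a detection probability that falls with body size; none of the numbers come from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical indicator pool: tolerance scores 1 (tolerant) to 10
    # (intolerant), with body size increasing along the tolerance gradient.
    scores = np.arange(1, 11)
    body_size = scores / 10.0               # small tolerant -> large intolerant

    def aspt(detected):
        """Average score per taxon over the detected families."""
        return scores[detected].mean()

    true_index = aspt(np.ones(10, dtype=bool))

    # A sampler that favours small organisms: detection probability falls
    # with body size, so intolerant families drop out first (negative bias).
    p_detect = np.clip(1.2 - body_size, 0.05, 1.0)
    detected = rng.random(10) < p_detect
    print("true ASPT:", true_index, " biased ASPT:", round(aspt(detected), 2))
    ```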

  16. An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks

    NASA Technical Reports Server (NTRS)

    Kim, Stacy

    2011-01-01

    Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observations and for understanding the conditions for planet formation and migration. However, certain areas of the disk, such as the optically thick interior, are under-sampled, while others, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet, are of particular interest. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracy and calculation speed.
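
    The packet-weighting trick can be illustrated with a one-dimensional toy: emission directions are drawn from a biased density that favours the region of interest, and each packet carries a weight equal to the ratio of the natural to the biased density, so tallies stay unbiased. This is a generic importance-sampling sketch, not the scheme developed in the work above.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000

    # Natural emission: direction cosine mu uniform on [-1, 1].
    # Biased emission: oversample mu near +1 (the region of interest)
    # with density p_b(mu) = (1 + mu) / 2; its inverse CDF is 2*sqrt(u) - 1.
    u = rng.random(n)
    mu = 2 * np.sqrt(u) - 1
    weight = 0.5 / ((1 + mu) / 2)           # p_natural / p_biased

    # A toy tally that only "detects" packets with mu > 0.9: biasing puts
    # many more packets there, while the weights keep the estimate unbiased.
    detected = mu > 0.9
    flux_biased = np.sum(weight[detected]) / n
    flux_plain = np.mean(rng.uniform(-1, 1, n) > 0.9)
    print(f"weighted estimate {flux_biased:.4f} vs plain {flux_plain:.4f} "
          f"(exact 0.05)")
    ```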

  17. Integrating sequence and array data to create an improved 1000 Genomes Project haplotype reference panel.

    PubMed

    Delaneau, Olivier; Marchini, Jonathan

    2014-06-13

    A major use of the 1000 Genomes Project (1000 GP) data is genotype imputation in genome-wide association studies (GWAS). Here we develop a method to estimate haplotypes from low-coverage sequencing data that can take advantage of single-nucleotide polymorphism (SNP) microarray genotypes on the same samples. First, the SNP array data are phased to build a backbone (or 'scaffold') of haplotypes across each chromosome. We then phase the sequence data 'onto' this haplotype scaffold. This approach can take advantage of relatedness between sequenced and non-sequenced samples to improve accuracy. We use this method to create a new 1000 GP haplotype reference set for use by the human genetic community. Using a set of validation genotypes at SNPs and bi-allelic indels, we show that these haplotypes have lower genotype discordance and improved imputation performance into downstream GWAS samples, especially at low-frequency variants.

  18. An Investigation to Improve Classifier Accuracy for Myo Collected Data

    DTIC Science & Technology

    2017-02-01

    A naïve Bayes classifier trained with 1,360 samples from 17 volunteers performs at... movement data from 17 volunteers. Each volunteer performed 8 gestures (Freeze, Rally Point, Hurry Up, Down, Come, Stop, Line Abreast Formation, and Vehicle...). A line chart was plotted for each gesture's feature (e.g., Pitch, xAcc) per user. All 10 recorded samples of a particular gesture for a single volunteer...

  19. Improving the accuracy of effect-directed analysis: the role of bioavailability.

    PubMed

    You, Jing; Li, Huizhen

    2017-12-13

    Aquatic ecosystems suffer from contamination by multiple stressors. Traditional chemical-based risk assessment usually fails to explain the toxicity contributions from contaminants that are not regularly monitored or that have an unknown identity. Diagnosing the causes of observed adverse outcomes in the environment is of great importance in ecological risk assessment, and effect-directed analysis (EDA) has been designed to fulfill this purpose. The EDA approach is now increasingly used in aquatic risk assessment owing to its ability to perform effect-directed nontarget analysis; however, a lack of environmental relevance makes conventional EDA less favorable. In particular, ignoring bioavailability in EDA may cause biased and even erroneous identification of causative toxicants in a mixture. Taking bioavailability into consideration is therefore of great importance to improve the accuracy of EDA diagnosis. The present article reviews the current status and applications of EDA practices that incorporate bioavailability. The use of biological samples is the most obvious way to include bioavailability in EDA applications, but its development is limited by small sample sizes and a lack of evidence for metabolizable compounds. Bioavailability/bioaccessibility-based extraction (bioaccessibility-directed and partitioning-based extraction) and passive-dosing techniques are recommended for integrating bioavailability into EDA diagnosis of abiotic samples. Lastly, future perspectives on expanding and standardizing the use of biological samples and bioavailability-based techniques in EDA are discussed.

  20. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    PubMed

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

    The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas. Copyright © 2018 Elsevier Ltd. All rights reserved.
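
    For concreteness, a bare-bones inverse distance weighting function with leave-one-out comparison of the power parameter, on invented coordinates and concentrations (the sample count of 166 is borrowed from the abstract; nothing else is).

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Hypothetical soil samples: coordinates (km) and a PTE concentration.
    xy = rng.uniform(0, 10, (166, 2))
    z = 50 + 10 * np.sin(xy[:, 0]) + 5 * xy[:, 1] + rng.normal(0, 3, 166)

    def idw(xy_known, z_known, q, power):
        """Inverse-distance-weighted estimate at query point q."""
        d = np.linalg.norm(xy_known - q, axis=1)
        w = 1.0 / np.maximum(d, 1e-9) ** power
        return np.sum(w * z_known) / np.sum(w)

    # Leave-one-out cross-validation, a common way to compare powers.
    for power in (1, 2, 3):
        err = [z[i] - idw(np.delete(xy, i, 0), np.delete(z, i), xy[i], power)
               for i in range(len(z))]
        print(f"IDW power={power}: LOO RMSE={np.sqrt(np.mean(np.square(err))):.2f}")
    ```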

  1. Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models

    NASA Astrophysics Data System (ADS)

    Zang, Tianwu

    Predicting the 3-dimensional structure of a protein has been a major interest in modern computational biology. While many successful methods can generate models within 3–5 Å root-mean-square deviation (RMSD) of the solution, progress in refining these models has been quite slow. Effective methods are therefore urgently needed to bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy biasing methods, the Structure-Based Model (SBM) and the Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. These methods work together to achieve significant refinement of low-quality models without any knowledge of the solution. The effectiveness of these methods is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in MD simulations of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, the refinement test of two CASP10 targets using the PCST-EBM method indicates that EBM may bring the initial model to even higher quality levels. Furthermore, a multi-round refinement protocol of PCST-SBM improves the model quality of a protein to a level sufficiently high for molecular replacement in X-ray crystallography. Our results confirm the crucial role of enhanced sampling in protein structure prediction and demonstrate that considerable improvement of low-accuracy structures is still achievable with current force fields.

  2. Advanced platform for the in-plane ZT measurement of thin films

    NASA Astrophysics Data System (ADS)

    Linseis, V.; Völklein, F.; Reith, H.; Nielsch, K.; Woias, P.

    2018-01-01

    The characterization of nanostructured samples with at least one restricted dimension like thin films or nanowires is challenging, but important to understand their structure and transport mechanism, and to improve current industrial products and production processes. We report on the 2nd generation of a measurement chip, which allows for a simplified sample preparation process, and the measurement of samples deposited from the liquid phase using techniques like spin coating and drop casting. The new design enables us to apply much higher temperature gradients for the Seebeck coefficient measurement in a shorter time, without influencing the sample holder's temperature distribution. Furthermore, a two membrane correction method for the 3ω thermal conductivity measurement will be presented, which takes the heat loss due to radiation into account and increases the accuracy of the measurement results significantly. Errors caused by different sample compositions, varying sample geometries, and different heat profiles are avoided with the presented measurement method. As a showcase study displaying the validity and accuracy of our platform, we present temperature-dependent measurements of the thermoelectric properties of an 84 nm Bi87Sb13 thin film and a 15 μm PEDOT:PSS thin film.

  3. Improved COD Measurements for Organic Content in Flowback Water with High Chloride Concentrations.

    PubMed

    Cardona, Isabel; Park, Ho Il; Lin, Lian-Shin

    2016-03-01

    An improved method was used to determine chemical oxygen demand (COD) as a measure of organic content in water samples containing high chloride content. A contour plot of COD percent error in the Cl(-) versus Cl(-):COD domain showed that COD errors increased with the Cl(-):COD ratio. Substantial errors (>10%) could occur in low Cl(-):COD regions (<300) for samples with low (<10 g/L) and high chloride concentrations (>25 g/L). Applying the method to flowback water samples yielded COD concentrations ranging from 130 to 1060 mg/L, substantially lower than previously reported values for flowback water samples from the Marcellus Shale (228 to 21 900 mg/L). It is likely that overestimations of COD in the previous studies occurred as a result of chloride interference. Pretreatment with mercuric sulfate, use of a low-strength digestion solution, and use of the contour plot to correct COD measurements are feasible steps to significantly improve the accuracy of COD measurements.

  4. The Utility of Writing Assignments in Undergraduate Bioscience

    ERIC Educational Resources Information Center

    Libarkin, Julie; Ording, Gabriel

    2012-01-01

    We tested the hypothesis that engagement in a few, brief writing assignments in a nonmajors science course can improve student ability to convey critical thought about science. A sample of three papers written by students (n = 30) was coded for presence and accuracy of elements related to scientific writing. Scores for different aspects of…

  5. Development of appropriate methodologies for sampling gypsy moth populations in moderately sized urban parks and other wooded public lands

    Treesearch

    K. W. Thorpe; R. L. Ridgway; R. E. Webb

    1991-01-01

    Egg mass survey data from operational gypsy moth (Lymantria dispar L.) management programs in five Maryland county parks and the Beltsville Agricultural Research Center (BARC) have demonstrated that improved survey protocols are needed to increase the precision and accuracy of the surveys.

  6. Portability of a Screener for Pediatric Bipolar Disorder to a Diverse Setting

    ERIC Educational Resources Information Center

    Freeman, Andrew J.; Youngstrom, Eric A.; Frazier, Thomas W.; Youngstrom, Jennifer Kogos; Demeter, Christine; Findling, Robert L.

    2012-01-01

    Robust screening measures that perform well in different populations could help improve the accuracy of diagnosis of pediatric bipolar disorder. Changes in sampling could influence the performance of items and potentially influence total scores enough to alter the predictive utility of scores. Additionally, creating a brief version of a measure by…

  7. Stepped-combustion 14C dating of bomb carbon in lake sediment

    USGS Publications Warehouse

    McGeehin, J.; Burr, G.S.; Hodgins, G.; Bennett, S.J.; Robbins, J.A.; Morehead, N.; Markewich, H.

    2004-01-01

    In this study, we applied a stepped-combustion approach to dating post-bomb lake sediment from north-central Mississippi. Samples were combusted at a low temperature (400 °C) and then at 900 °C. The CO2 was collected separately for both combustions and analyzed. The goal of this work was to develop a methodology to improve the accuracy of 14C dating of sediment by combusting at a lower temperature and reducing the amount of reworked carbon bound to clay minerals in the sample material. The 14C fraction modern results for the low and high temperature fractions of these sediments were compared with well-defined 137Cs determinations made on sediment taken from the same cores. Comparison of "bomb curves" for 14C and 137Cs indicates that low temperature combustion of sediment improved the accuracy of 14C dating of the sediment. However, fraction modern results for the low temperature fractions were depressed compared to atmospheric values for the same time frame, possibly the result of carbon mixing and the low sedimentation rate in the lake system.

  8. Model parameter estimation approach based on incremental analysis for lithium-ion batteries without using open circuit voltage

    NASA Astrophysics Data System (ADS)

    Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui

    2015-08-01

    To improve the suitability of lithium-ion battery models under varying scenarios, such as fluctuating temperature and SoC variation, dynamic models with parameters updated in real time should be developed. In this paper, an incremental analysis-based auto-regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Then, its numerical stability, modeling error, and parametric sensitivity are analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo-random binary sequence (PRBS) and urban dynamometer driving schedule (UDDS) profiles are used to verify the real-time performance and robustness of the proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in the experiments. The experimental and simulation results indicate that the proposed I-ARX model achieves high accuracy and suitability for parameter identification without using the open circuit voltage.
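
    A sketch of the recursive least squares core on a synthetic first-order RC battery response, working on first differences so the slowly varying OCV term drops out, in the spirit of the incremental idea; the paper's bias-correction (CRLS) step is omitted and all parameter values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic first-order RC battery response to a PRBS-like current.
    n, R0, R1, C1, dt = 2000, 0.05, 0.03, 1500.0, 1.0
    i_cur = rng.choice([-2.0, 0.0, 2.0], n)
    v_rc = np.zeros(n)
    for k in range(1, n):
        v_rc[k] = v_rc[k-1] + dt * (i_cur[k] / C1 - v_rc[k-1] / (R1 * C1))
    v = 3.7 + R0 * i_cur + v_rc + rng.normal(0, 1e-4, n)

    # Work on first differences so the (slowly varying) OCV term drops out,
    # then identify theta = [a, b0, b1] in
    # dV_k = a*dV_{k-1} + b0*dI_k + b1*dI_{k-1} by recursive least squares.
    dv, di = np.diff(v), np.diff(i_cur)
    theta = np.zeros(3)
    P = np.eye(3) * 1e3
    for k in range(1, len(dv)):
        phi = np.array([dv[k-1], di[k], di[k-1]])
        gain = P @ phi / (1.0 + phi @ P @ phi)
        theta += gain * (dv[k] - phi @ theta)
        P -= np.outer(gain, phi @ P)
    print("identified ARX parameters [a, b0, b1]:", np.round(theta, 4))
    ```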

  9. Accuracy of clinical diagnosis of Parkinson disease: A systematic review and meta-analysis.

    PubMed

    Rizzo, Giovanni; Copetti, Massimiliano; Arcuti, Simona; Martino, Davide; Fontana, Andrea; Logroscino, Giancarlo

    2016-02-09

    To evaluate the diagnostic accuracy of the clinical diagnosis of Parkinson disease (PD) reported in the last 25 years through a systematic review and meta-analysis. We searched for articles published between 1988 and August 2014. Studies were included if they reported diagnostic parameters regarding the clinical diagnosis of PD or crude data. The selected studies were subclassified based on study setting, type of test diagnosis, and gold standard. Bayesian meta-analyses of the available data were performed. We selected 20 studies, including 11 using pathologic examination as the gold standard. Considering only these 11 studies, the pooled diagnostic accuracy was 80.6% (95% credible interval [CrI] 75.2%-85.3%). Accuracy was 73.8% (95% CrI 67.8%-79.6%) for clinical diagnosis performed mainly by nonexperts. Accuracy of clinical diagnosis performed by movement disorders experts rose from 79.6% (95% CrI 46%-95.1%) at initial assessment to 83.9% (95% CrI 69.7%-92.6%) for refined diagnosis after follow-up. Using UK Parkinson's Disease Society Brain Bank Research Center criteria, the pooled diagnostic accuracy was 82.7% (95% CrI 62.6%-93%). The overall validity of clinical diagnosis of PD is unsatisfactory. The accuracy did not significantly improve in the last 25 years, particularly in the early stages of disease, where response to dopaminergic treatment is less defined and hallmarks of alternative diagnoses such as atypical parkinsonism may not have emerged. The misclassification rate should be considered when calculating sample sizes for both observational studies and randomized controlled trials. Imaging and biomarkers are urgently needed to improve the accuracy of clinical diagnosis in vivo. © 2016 American Academy of Neurology.

  10. Accuracy Sampling Design Bias on Coarse Spatial Resolution Land Cover Data in the Great Lakes Region (United States and Canada)

    EPA Science Inventory

    A number of articles have investigated the impact of sampling design on remotely sensed land-cover accuracy estimates. Gong and Howarth (1990) found significant differences in Kappa accuracy values when comparing pure-pixel sampling, stratified random sampling, and stratified sys...

  11. Flameless atomic-absorption determination of gold in geological materials

    USGS Publications Warehouse

    Meier, A.L.

    1980-01-01

    Gold in geologic materials is dissolved using a solution of hydrobromic acid and bromine, extracted with methyl isobutyl ketone, and determined using an atomic-absorption spectrophotometer equipped with a graphite furnace atomizer. A comparison of results obtained by this flameless atomic-absorption method on U.S. Geological Survey reference rocks and geochemical samples with reported values and with results obtained by flame atomic absorption shows that reasonable accuracy is achieved with improved precision. The sensitivity, accuracy, and precision of the method allow acquisition of data on the distribution of gold at or below its crustal abundance. © 1980.

  12. Simultaneous determination of dextromethorphan, dextrorphan, and guaifenesin in human plasma using semi-automated liquid/liquid extraction and gradient liquid chromatography tandem mass spectrometry.

    PubMed

    Eichhold, Thomas H; McCauley-Myers, David L; Khambe, Deepa A; Thompson, Gary A; Hoke, Steven H

    2007-01-17

    A method for the simultaneous determination of dextromethorphan (DEX), dextrorphan (DET), and guaifenesin (GG) in human plasma was developed, validated, and applied to determine plasma concentrations of these compounds in samples from six clinical pharmacokinetic (PK) studies. Semi-automated liquid handling systems were used to perform the majority of the sample manipulation including liquid/liquid extraction (LLE) of the analytes from human plasma. Stable-isotope-labeled analogues were utilized as internal standards (ISTDs) for each analyte to facilitate accurate and precise quantification. Extracts were analyzed using gradient liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). Use of semi-automated LLE with LC-MS/MS proved to be a very rugged and reliable approach for analysis of more than 6200 clinical study samples. The lower limit of quantification was validated at 0.010, 0.010, and 1.0 ng/mL of plasma for DEX, DET, and GG, respectively. Accuracy and precision of quality control (QC) samples for all three analytes met FDA Guidance criteria of +/-15% for average QC accuracy with coefficients of variation less than 15%. Data from the thorough evaluation of the method during development, validation, and application are presented to characterize selectivity, linearity, over-range sample analysis, accuracy, precision, autosampler carry-over, ruggedness, extraction efficiency, ionization suppression, and stability. Pharmacokinetic data are also provided to illustrate improvements in systemic drug and metabolite concentration-time profiles that were achieved by formulation optimization.

  13. Improved electromagnetic tracking for catheter path reconstruction with application in high-dose-rate brachytherapy.

    PubMed

    Lugez, Elodie; Sadjadi, Hossein; Joshi, Chandra P; Akl, Selim G; Fichtinger, Gabor

    2017-04-01

    Electromagnetic (EM) catheter tracking has recently been introduced in order to enable prompt and uncomplicated reconstruction of catheter paths in various clinical interventions. However, EM tracking is prone to measurement errors which can compromise the outcome of the procedure. Minimizing catheter tracking errors is therefore paramount to improve the path reconstruction accuracy. An extended Kalman filter (EKF) was employed to combine the nonlinear kinematic model of an EM sensor inside the catheter, with both its position and orientation measurements. The formulation of the kinematic model was based on the nonholonomic motion constraints of the EM sensor inside the catheter. Experimental verification was carried out in a clinical HDR suite. Ten catheters were inserted with mean curvatures varying from 0 to [Formula: see text] in a phantom. A miniaturized Ascension (Burlington, Vermont, USA) trakSTAR EM sensor (model 55) was threaded within each catheter at various speeds ranging from 7.4 to [Formula: see text]. The nonholonomic EKF was applied on the tracking data in order to statistically improve the EM tracking accuracy. A sample reconstruction error was defined at each point as the Euclidean distance between the estimated EM measurement and its corresponding ground truth. A path reconstruction accuracy was defined as the root mean square of the sample reconstruction errors, while the path reconstruction precision was defined as the standard deviation of these sample reconstruction errors. The impacts of sensor velocity and path curvature on the nonholonomic EKF method were determined. Finally, the nonholonomic EKF catheter path reconstructions were compared with the reconstructions provided by the manufacturer's filters under default settings, namely the AC wide notch and the DC adaptive filter. With a path reconstruction accuracy of 1.9 mm, the nonholonomic EKF surpassed the performance of the manufacturer's filters (2.4 mm) by 21% and the raw EM measurements (3.5 mm) by 46%. Similarly, with a path reconstruction precision of 0.8 mm, the nonholonomic EKF surpassed the performance of the manufacturer's filters (1.0 mm) by 20% and the raw EM measurements (1.7 mm) by 53%. Path reconstruction accuracies did not follow an apparent trend when varying the path curvature and sensor velocity; instead, reconstruction accuracies were predominantly impacted by the position of the EM field transmitter ([Formula: see text]). The advanced nonholonomic EKF is effective in reducing EM measurement errors when reconstructing catheter paths, is robust to path curvature and sensor speed, and runs in real time. Our approach is promising for a plurality of clinical procedures requiring catheter reconstructions, such as cardiovascular interventions, pulmonary applications (Bender et al. in medical image computing and computer-assisted intervention-MICCAI 99. Springer, Berlin, pp 981-989, 1999), and brachytherapy.
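
    A generic EKF along these lines, assuming a planar unicycle-style motion model (the sensor advances only along its heading, a simple nonholonomic constraint) fused with noisy position and heading measurements; it is a toy stand-in for the authors' formulation, with all noise levels invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    dt, n = 0.1, 200
    v, omega = 5.0, 0.1                      # speed and turn rate (assumed)

    # Ground truth: a sensor advancing along a gentle arc, a crude stand-in
    # for an EM sensor threaded through a curved catheter.
    x_true = np.zeros((n, 3))                # state: [x, y, heading]
    for k in range(1, n):
        px, py, th = x_true[k - 1]
        x_true[k] = [px + v*dt*np.cos(th), py + v*dt*np.sin(th), th + omega*dt]

    R = np.diag([1.0, 1.0, 0.05])            # measurement noise (pos + heading)
    z = x_true + rng.normal(0, [1.0, 1.0, 0.05], (n, 3))

    xe, P = z[0].copy(), np.eye(3)
    Q = np.diag([0.1, 0.1, 0.01])            # process noise (assumed)
    est = [xe.copy()]
    for k in range(1, n):
        th = xe[2]
        xe = xe + np.array([v*dt*np.cos(th), v*dt*np.sin(th), 0.0])  # predict
        F = np.array([[1, 0, -v*dt*np.sin(th)],
                      [0, 1,  v*dt*np.cos(th)],
                      [0, 0,  1]])
        P = F @ P @ F.T + Q
        K = P @ np.linalg.inv(P + R)         # update (H = I: we measure the state)
        xe = xe + K @ (z[k] - xe)
        P = (np.eye(3) - K) @ P
        est.append(xe.copy())

    raw = np.mean(np.linalg.norm((z - x_true)[:, :2], axis=1))
    ekf = np.mean(np.linalg.norm((np.array(est) - x_true)[:, :2], axis=1))
    print(f"mean position error: raw {raw:.2f} vs EKF {ekf:.2f}")
    ```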

  14. An optimal sample data usage strategy to minimize overfitting and underfitting effects in regression tree models based on remotely-sensed data

    USGS Publications Warehouse

    Gu, Yingxin; Wylie, Bruce K.; Boyte, Stephen; Picotte, Joshua J.; Howard, Danny; Smith, Kelcy; Nelson, Kurtis

    2016-01-01

    Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, value from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MAD_training = 2.5 and MAD_testing = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
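
    The overfitting/underfitting trade-off the abstract explores can be sketched with a decision-tree regressor, using max_leaf_nodes as a stand-in for the rule-number constraint and random replications over train/test splits; the data and parameter grids are invented.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(9)

    # Synthetic stand-in for the remote-sensing data: predictors -> scaled NDVI.
    X = rng.uniform(0, 1, (2000, 4))
    y = 100 + 60 * X[:, 0] - 30 * X[:, 1] ** 2 + rng.normal(0, 5, 2000)

    # Random replications across training fractions and leaf-count limits
    # (a stand-in for the rule-number constraint), tracking train/test MAD.
    for frac in (0.5, 0.8):
        for leaves in (2, 8, 400):
            mad_tr, mad_te = [], []
            for rep in range(20):
                Xtr, Xte, ytr, yte = train_test_split(
                    X, y, train_size=frac, random_state=rep)
                m = DecisionTreeRegressor(max_leaf_nodes=leaves).fit(Xtr, ytr)
                mad_tr.append(np.mean(np.abs(m.predict(Xtr) - ytr)))
                mad_te.append(np.mean(np.abs(m.predict(Xte) - yte)))
            print(f"train={frac:.0%}, leaves={leaves:3d}: "
                  f"MAD train {np.mean(mad_tr):.2f} / test {np.mean(mad_te):.2f}")
    ```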

  15. Integrating conventional and inverse representation for face recognition.

    PubMed

    Xu, Yong; Li, Xuelong; Yang, Jian; Lai, Zhihui; Zhang, David

    2014-10-01

    Representation-based classification methods are all constructed on the basis of the conventional representation, which first expresses the test sample as a linear combination of the training samples and then exploits the deviation between the test sample and the expression result for every class to perform classification. However, this deviation does not always well reflect the difference between the test sample and each class. In this paper, we propose a novel representation-based classification method for face recognition. This method integrates conventional and inverse representation-based classification to better recognize the face. It first produces the conventional representation of the test sample, i.e., uses a linear combination of the training samples to represent the test sample. Then it obtains the inverse representation, i.e., provides an approximate representation of each training sample of a subject by exploiting the test sample and the training samples of the other subjects. Finally, the proposed method exploits the conventional and inverse representations to generate two kinds of scores for the test sample with respect to each class and combines them to recognize the face. The paper presents the theoretical foundation and rationale of the proposed method. Moreover, this paper shows for the first time that a basic property of the human face, i.e., its symmetry, can be exploited to generate new training and test samples. As these new samples genuinely reflect possible appearances of the face, their use enables higher accuracy. The experiments show that the proposed conventional and inverse representation-based linear regression classification (CIRLRC), an improvement to linear regression classification (LRC), can obtain very high accuracy and greatly outperforms naive LRC and other state-of-the-art conventional representation-based face recognition methods. The accuracy of CIRLRC can be 10% greater than that of LRC.
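
    The conventional (LRC) half of the method can be sketched compactly: the test sample is regressed onto each class's training samples and assigned to the class with the smallest residual. The inverse-representation scoring and score fusion of CIRLRC are not reproduced here; the data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Synthetic "face" vectors: 3 subjects, 5 training images each, 50-dim.
    n_cls, n_per, d = 3, 5, 50
    means = rng.normal(0, 1, (n_cls, d))
    train = np.stack([means[c] + 0.3 * rng.normal(0, 1, (n_per, d))
                      for c in range(n_cls)])      # (class, sample, dim)
    test = means[1] + 0.3 * rng.normal(0, 1, d)    # truly from class 1

    def lrc_residual(Xc, y):
        """Residual of representing y as a linear combination of one
        class's training samples (the conventional representation)."""
        coef, *_ = np.linalg.lstsq(Xc.T, y, rcond=None)
        return np.linalg.norm(y - Xc.T @ coef)

    residuals = [lrc_residual(train[c], test) for c in range(n_cls)]
    print("class residuals:", np.round(residuals, 3),
          "-> predicted class", int(np.argmin(residuals)))
    ```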

  16. Prolonged monitoring of ethinyl estradiol and levonorgestrel levels confirms an altered pharmacokinetic profile in obese oral contraceptives users

    PubMed Central

    Edelman, Alison B; Cherala, Ganesh; Munar, Myrna Y.; DuBois, Barent; McInnis, Martha; Stanczyk, Frank Z.; Jensen, Jeffrey T

    2014-01-01

    Background Pharmacokinetic (PK) parameters based on short sampling times (48 h or less) may contain inaccuracies due to their dependency on extrapolated values. This study was designed to measure PK parameters with greater accuracy in obese users of a low-dose oral contraceptive (OC), and to correlate drug levels with assessments of end-organ activity. Study design Obese (BMI ≥30 kg/m2), ovulatory, otherwise healthy, women (n = 32) received an OC containing 20 mcg ethinyl estradiol (EE)/100 mcg levonorgestrel (LNG) for two cycles. EE and LNG PK parameters were characterized for 168 h at the end of Cycle 1. During Cycle 2, biweekly outpatient visits were performed to assess cervical mucus, monitor ovarian activity with transvaginal ultrasound, and obtain serum samples to measure EE, LNG, estradiol (E2), and progesterone (P) levels. PK parameters were calculated and correlated with end-organ activity and compared against control samples obtained from normal and obese women sampled up to 48 h in a previous study. Standard determination of PK accuracy was performed; defined by the dependency on extrapolated values (‘excess’ area under the curve of 25% or less). Results The mean BMI was 39.4 kg/m2 (SD 6.6) with a range of 30–64 kg/m2. Key LNG PK parameters were as follows: clearance 0.52 L/h (SD 0.24), half-life 65 h (SD 40), AUC 232 h*ng/mL (SD 102) and time to reach steady-state 13.6 days (SD 8.4). The majority of subjects had increased ovarian activity with diameter of follicles ≥8 mm (n = 25) but only seven women had follicles ≥10 mm plus cervical mucus scores ≥5. Evidence of poor end-organ suppression did not correlate with the severity of the alterations in PK. As compared to historical normal and obese controls (48 h PK sampling), clearance, half-life, area under the curve (AUC) and time to reach steady-state were found to be significantly different (p ≤ 0.05) in obese women undergoing a longer duration of PK sampling (168 h). Longer sampling also improved PK accuracy for obese women (excess AUC 20%) as compared to both normal and obese controls undergoing shorter sampling times (48 h) with excess AUCs of 25% and 50%, respectively. Conclusions Obesity results in significant alterations in OC steroid PK parameters but the severity of these alterations did not correlate with end-organ suppression. A longer PK sampling interval (168 h vs. 48 h) improved the accuracy of PK testing. PMID:23153898
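
    The 'excess AUC' criterion is easy to make concrete: the extrapolated tail C_last/λz is compared with the total AUC for two sampling windows. The mono-exponential profile below assumes the 65-h half-life reported above; the concentrations themselves are invented.

    ```python
    import numpy as np

    # Hypothetical LNG concentration-time profile (ng/mL vs. hours) declining
    # mono-exponentially with the 65-h terminal half-life reported above.
    t = np.array([0, 2, 4, 8, 12, 24, 48, 72, 96, 120, 168], float)
    lam = np.log(2) / 65.0
    conc = 6.0 * np.exp(-lam * t)

    for t_last in (48.0, 168.0):
        keep = t <= t_last
        tt, cc = t[keep], conc[keep]
        auc_t = np.sum(np.diff(tt) * (cc[1:] + cc[:-1]) / 2)  # trapezoidal AUC
        tail = cc[-1] / lam            # extrapolated AUC beyond the last sample
        print(f"sampling to {t_last:3.0f} h: excess AUC = {tail/(auc_t+tail):.0%}")
    ```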

  17. Prolonged monitoring of ethinyl estradiol and levonorgestrel levels confirms an altered pharmacokinetic profile in obese oral contraceptives users.

    PubMed

    Edelman, Alison B; Cherala, Ganesh; Munar, Myrna Y; Dubois, Barent; McInnis, Martha; Stanczyk, Frank Z; Jensen, Jeffrey T

    2013-02-01

    Pharmacokinetic (PK) parameters based on short sampling times (48 h or less) may contain inaccuracies due to their dependency on extrapolated values. This study was designed to measure PK parameters with greater accuracy in obese users of a low-dose oral contraceptive (OC) and to correlate drug levels with assessments of end-organ activity. Obese [body mass index (BMI) ≥30 kg/m2], ovulatory, otherwise healthy women (n=32) received an OC containing 20 mcg ethinyl estradiol (EE)/100 mcg levonorgestrel (LNG) for two cycles. EE and LNG PK parameters were characterized for 168 h at the end of Cycle 1. During cycle 2, biweekly outpatient visits were performed to assess cervical mucus, monitor ovarian activity with transvaginal ultrasound and obtain serum samples to measure EE, LNG, estradiol and progesterone levels. PK parameters were calculated and correlated with end-organ activity and compared against control samples obtained from normal and obese women sampled up to 48 h in a previous study. Standard determination of PK accuracy was performed, defined by the dependency on extrapolated values ('excess' area under the curve of 25% or less). The mean BMI was 39.4 kg/m2 (SD 6.6) with a range of 30-64 kg/m2. Key LNG PK parameters were as follows: clearance, 0.52 L/h (SD 0.24); half-life, 65 h (SD 40); area under the curve (AUC), 232 h*ng/mL (SD 102); and time to reach steady state, 13.6 days (SD 8.4). The majority of subjects had increased ovarian activity with diameter of follicles ≥8 mm (n=25), but only seven women had follicles ≥10 mm plus cervical mucus scores ≥5. Evidence of poor end-organ suppression did not correlate with the severity of the alterations in PK. As compared to historical normal and obese controls (48-h PK sampling), clearance, half-life, AUC and time to reach steady state were found to be significantly different (p≤.05) in obese women undergoing a longer duration of PK sampling (168 h). Longer sampling also improved PK accuracy for obese women (excess AUC 20%) as compared to both normal and obese controls undergoing shorter sampling times (48 h) with excess AUCs of 25% and 50%, respectively. Obesity results in significant alterations in OC steroid PK parameters, but the severity of these alterations did not correlate with end-organ suppression. A longer PK sampling interval (168 h vs. 48 h) improved the accuracy of PK testing. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Improved Quantitative Analysis of Ion Mobility Spectrometry by Chemometric Multivariate Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fraga, Carlos G.; Kerr, Dayle; Atkinson, David A.

    2009-09-01

    Traditional peak-area calibration and the multivariate calibration methods of principal component regression (PCR) and partial least squares (PLS), including unfolded PLS (U-PLS) and multi-way PLS (N-PLS), were evaluated for the quantification of 2,4,6-trinitrotoluene (TNT) and cyclo-1,3,5-trimethylene-2,4,6-trinitramine (RDX) in Composition B samples analyzed by temperature step desorption ion mobility spectrometry (TSD-IMS). The true TNT and RDX concentrations of eight Composition B samples were determined by high performance liquid chromatography with UV absorbance detection. Most of the Composition B samples were found to have distinct TNT and RDX concentrations. Applying PCR and PLS to the same IMS spectra used for the peak-area study improved quantitative accuracy and precision approximately 3- to 5-fold and 2- to 4-fold, respectively. This in turn improved the probability of correctly identifying Composition B samples based upon the estimated RDX and TNT concentrations from 11% with peak area to 44% and 89% with PLS. This improvement increases the potential of obtaining forensic information from IMS analyzers by providing some ability to differentiate or match Composition B samples based on their TNT and RDX concentrations.
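
    The multivariate-calibration advantage can be sketched with scikit-learn's PLSRegression on synthetic overlapping peaks plus a drifting baseline, the kind of structure that defeats single peak-area calibration; the spectra and concentrations are invented.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(11)
    axis = np.linspace(0, 1, 120)                  # normalized drift-time axis

    def peak(center, width=0.05):
        return np.exp(-0.5 * ((axis - center) / width) ** 2)

    # Synthetic IMS spectra: two overlapping peaks (stand-ins for TNT and RDX)
    # plus a variable baseline that corrupts simple peak-area calibration.
    n = 60
    conc = rng.uniform(0.2, 1.0, (n, 2))           # columns: [TNT, RDX]
    spectra = (np.outer(conc[:, 0], peak(0.45)) + np.outer(conc[:, 1], peak(0.55))
               + rng.uniform(0, 0.3, (n, 1)) * np.ones(axis.size)
               + rng.normal(0, 0.01, (n, axis.size)))

    pls = PLSRegression(n_components=3).fit(spectra[:40], conc[:40])
    rmse = np.sqrt(np.mean((pls.predict(spectra[40:]) - conc[40:]) ** 2, axis=0))
    print("held-out RMSE (TNT, RDX):", np.round(rmse, 3))
    ```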

  19. [Automatic adjustment control system for DC glow discharge plasma source].

    PubMed

    Wan, Zhen-zhen; Wang, Yong-qing; Li, Xiao-jia; Wang, Hai-zhou; Shi, Ning

    2011-03-01

    Three parameters are important in the DC glow discharge process: the discharge current, the discharge voltage, and the argon pressure in the discharge source; they influence one another during discharge. This paper presents an automatic control system for a DC glow discharge plasma source. The system automatically measures and controls the discharge voltage by adjusting the source pressure while the discharge current is held constant. The design concept, circuit principle, and control program of the system are described. Accuracy is improved by eliminating complex manual operations and the control errors they introduce. The system enhances the control accuracy of the glow discharge voltage and reduces the time needed to reach voltage stability. Voltage stability test results are provided: accuracy under automatic control is better than 1% FS, improved from 4% FS under manual control, and the time to reach voltage stability has been shortened from more than 90 s under manual control to within 30 s. Standard samples of middle-low alloy steel and tin bronze were tested with the system, and the precision of concentration analysis improved significantly: the RSDs of all test results are better than 3.5%. In the middle-low alloy steel standard sample, the RSD range of the Ti, Co and Mn results was reduced from 3.0%-4.3% under manual control to 1.7%-2.4% under automatic control, and that for S and Mo from 5.2%-5.9% to 3.3%-3.5%. In the tin bronze standard sample, the RSD range for Sn, Zn and Al was reduced from 2.6%-4.4% to 1.0%-2.4%, and that for Si, Ni and Fe from 6.6%-13.9% to 2.6%-3.5%. The test data are also presented in the paper.
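
    A hedged sketch of the core feedback idea, holding voltage at a setpoint by trimming pressure while the current is constant; the toy plant model, gains, and limits below are invented for illustration and do not come from the paper.

        def simulate(setpoint_v=800.0, steps=300, dt=0.1):
            pressure, integral = 30.0, 0.0       # hypothetical units (Pa)
            kp, ki = 0.02, 0.01                  # invented controller gains
            for _ in range(steps):
                voltage = 1200.0 - 8.0 * pressure    # toy plant: V falls as p rises
                error = setpoint_v - voltage
                integral += error * dt
                # negative error (V too high) raises pressure, lowering V
                pressure -= kp * error + ki * integral
                pressure = min(max(pressure, 1.0), 200.0)
            return voltage, pressure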

  20. The Challenges of Data Rate and Data Accuracy in the Analysis of Volcanic Systems: An Assessment Using Multi-Parameter Data from the 2012-2013 Eruption Sequence at White Island, New Zealand

    NASA Astrophysics Data System (ADS)

    Jolly, A. D.; Christenson, B. W.; Neuberg, J. W.; Fournier, N.; Mazot, A.; Kilgour, G.; Jolly, G. E.

    2014-12-01

    Volcano monitoring is usually undertaken with the collection of both automated and manual data that form a multi-parameter time-series having a wide range of sampling rates and measurement accuracies. Assessments of hazards and risks ultimately rely on incorporating this information into usable form, first for the scientists to interpret, and then for the public and relevant stakeholders. One important challenge is in building appropriate and efficient strategies to compare and interpret data from these exceptionally different datasets. The White Island volcanic system entered a new eruptive state beginning in mid-2012 and continuing through the present time. Eruptive activity during this period comprised small phreatic and phreato-magmatic events in August 2012, August 2013 and October 2013 and the intrusion of a small dome that was first observed in November 2012. We examine the chemical and geophysical dataset to assess the effects of small magma batches on the shallow hydrothermal system. The analysis incorporates high data rate (100 Hz) seismic and infrasound data, lower data rate (1 Hz to 5 min sampling interval) GPS, tilt-meter, and gravity data and very low data rate geochemical time series (sampling intervals from days to months). The analysis is further informed by visual observations of lake level changes, geysering activity through crater lake vents, and changes in fumarolic discharges. We first focus on the problems of incorporating the range of observables into coherent time-frame-dependent conceptual models. We then show examples where high data rate information may be improved through new processing methods and where low data rate information may be collected more frequently without loss of fidelity. By this approach we hope to improve the accuracy and efficiency of interpretations of volcano unrest and thereby improve hazard assessments.

  1. Clinical time series prediction: towards a hierarchical dynamical system framework

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2014-01-01

    Objective Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient's condition, the dynamics of a disease, the effects of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671
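
    As a toy illustration of the lower level of such a hierarchy, the sketch below smooths one irregularly sampled lab series with a Gaussian process; the kernel, times, and values are invented, and the paper's linear-dynamical-system layer on top is not shown.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0, 10, 15))[:, None]       # irregular times (days)
        y = 140 + 5 * np.sin(t.ravel()) + rng.normal(0, 1, 15)  # toy lab values

        gp = GaussianProcessRegressor(kernel=RBF(2.0) + WhiteKernel(1.0),
                                      normalize_y=True)
        gp.fit(t, y)
        t_grid = np.linspace(0, 12, 50)[:, None]
        mean, std = gp.predict(t_grid, return_std=True)    # values on a regular grid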

  2. Regional Seismic Travel-Time Prediction, Uncertainty, and Location Improvement in Western Eurasia

    NASA Astrophysics Data System (ADS)

    Flanagan, M. P.; Myers, S. C.

    2004-12-01

    We investigate our ability to improve regional travel-time prediction and seismic event location using an a priori, three-dimensional velocity model of Western Eurasia and North Africa: WENA1.0 [Pasyanos et al., 2004]. Our objective is to improve the accuracy of seismic location estimates and calculate representative location uncertainty estimates. As we focus on the geographic region of Western Eurasia, the Middle East, and North Africa, we develop, test, and validate 3D model-based travel-time prediction models for 30 stations in the study region. Three principal results are presented. First, the 3D WENA1.0 velocity model improves travel-time prediction over the iasp91 model, as measured by variance reduction, for regional Pg, Pn, and P phases recorded at the 30 stations. Second, a distance-dependent uncertainty model is developed and tested for the WENA1.0 model. Third, an end-to-end validation test based on 500 event relocations demonstrates improved location performance over the 1-dimensional iasp91 model. Validation of the 3D model is based on a comparison of approximately 11,000 Pg, Pn, and P travel-time predictions and empirical observations from ground truth (GT) events. Ray coverage for the validation dataset is chosen to provide representative, regional-distance sampling across Eurasia and North Africa. The WENA1.0 model markedly improves travel-time predictions for most stations with an average variance reduction of 25% for all ray paths. We find that improvement is station dependent, with some stations benefiting greatly from WENA1.0 predictions (52% at APA, 33% at BKR, and 32% at NIL), some stations showing moderate improvement (12% at KEV, 14% at BOM, and 12% at TAM), some benefiting only slightly (6% at MOX, and 4% at SVE), and some being degraded (-6% at MLR and -18% at QUE). We further test WENA1.0 by comparing location accuracy with results obtained using the iasp91 model. Again, relocation of these events is dependent on ray paths that evenly sample WENA1.0 and therefore provide an unbiased assessment of location performance. A statistically significant sample is achieved by generating 500 location realizations based on 5 events with location accuracy between 1 km and 5 km. Each realization is a randomly selected event with location determined by randomly selecting 5 stations from the available network. In 340 cases (68% of the instances), locations are improved, and average mislocation is reduced from 31 km to 26 km. Preliminary tests of uncertainty estimates suggest that our uncertainty model produces location uncertainty ellipses that are representative of location accuracy. These results highlight the importance of accurate GT datasets in assessing regional travel-time models and demonstrate that an a priori 3D model can markedly improve our ability to locate small magnitude events in a regional monitoring context. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48, Contribution UCRL-CONF-206386.
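
    The variance-reduction score quoted above has a one-line form; a minimal sketch, with the residual arrays as placeholders:

        import numpy as np

        def variance_reduction(resid_1d, resid_3d):
            """Percent reduction in travel-time residual variance of a 3D
            model (e.g. WENA1.0) relative to a 1D model (e.g. iasp91)."""
            return 100.0 * (1.0 - np.var(resid_3d) / np.var(resid_1d))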

  3. An Autonomous Navigation Algorithm for High Orbit Satellite Using Star Sensor and Ultraviolet Earth Sensor

    PubMed Central

    Baohua, Li; Wenjie, Lai; Yun, Chen; Zongming, Liu

    2013-01-01

    An autonomous navigation algorithm is presented for a sensor that integrates a star sensor (FOV1) and an ultraviolet earth sensor (FOV2). Star images are sampled by FOV1, and ultraviolet earth images are sampled by FOV2. The star identification and star tracking algorithms are executed on FOV1, and the optical axis direction of FOV1 in the J2000.0 coordinate system is then calculated. The earth center vector in the FOV2 coordinate system is calculated from the coordinates of the ultraviolet earth image. The autonomous navigation data of the satellite are calculated by the integrated sensor from the optical axis direction of FOV1 and the earth center vector from FOV2. The position accuracy of autonomous navigation for the satellite is improved from 1000 meters to 300 meters, and the velocity accuracy is improved from 100 m/s to 20 m/s. At the same time, the periodic sine errors of the autonomous navigation solution are eliminated. Autonomous navigation with a sensor that integrates an ultraviolet earth sensor and a star sensor is robust. PMID:24250261

  5. Trace element analysis by EPMA in geosciences: detection limit, precision and accuracy

    NASA Astrophysics Data System (ADS)

    Batanova, V. G.; Sobolev, A. V.; Magnin, V.

    2018-01-01

    Use of the electron probe microanalyser (EPMA) for trace element analysis has increased over the last decade, mainly because of improved stability of spectrometers and the electron column when operated at high probe current; the development of new large-area crystal monochromators and ultra-high count rate spectrometers; full integration of energy-dispersive / wavelength-dispersive X-ray spectrometry (EDS/WDS) signals; and the development of powerful software packages. For phases that are stable under a dense electron beam, the detection limit and precision can be decreased to the ppm level by using high acceleration voltage and beam current combined with long counting times. Data on 10 elements (Na, Al, P, Ca, Ti, Cr, Mn, Co, Ni, Zn) in olivine obtained on a JEOL JXA-8230 microprobe with a tungsten filament show that the detection limit decreases proportionally to the square root of counting time and probe current. For all elements equal to or heavier than phosphorus (Z = 15), the detection limit decreases with increasing accelerating voltage. The analytical precision for minor and trace elements analysed in olivine at 25 kV accelerating voltage and 900 nA beam current is 4 - 18 ppm (2 standard deviations of repeated measurements of the olivine reference sample) and is similar to the detection limit of the corresponding elements. Accurate trace element analysis requires careful estimation of the background, and consideration of sample damage under the beam and of secondary fluorescence from phase boundaries. The development and use of matrix reference samples with well-characterised trace elements of interest is important for monitoring and improving accuracy. An evaluation of the accuracy of trace element analyses in olivine was made by comparing EPMA data for new reference samples with data obtained by different in-situ and bulk analytical methods in six different laboratories worldwide. For all elements, the measured concentrations in the olivine reference sample were found to be identical (within internal precision) to the reference values, suggesting that the achieved precision and accuracy are similar. The spatial resolution of EPMA in a silicate matrix, even under very extreme conditions (accelerating voltage of 25 kV), does not exceed 7 - 8 μm and is thus still better than that of laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) or secondary ion mass spectrometry (SIMS) at similar precision. These capabilities make the electron microprobe an indispensable method with applications in experimental petrology, geochemistry and cosmochemistry.
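
    The square-root scaling quoted above implies a simple back-of-envelope rule; the sketch below encodes it with invented reference numbers (a 20 ppm limit at 10 s and 100 nA) purely for illustration.

        def detection_limit(dl_ref, t_ref, i_ref, t, i):
            """Scale a reference detection limit by sqrt(counting time x current)."""
            return dl_ref * ((t_ref * i_ref) / (t * i)) ** 0.5

        # quadrupling both counting time and probe current cuts the limit 4x:
        print(detection_limit(20.0, 10.0, 100.0, 40.0, 400.0))   # -> 5.0 (ppm)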

  6. A novel chemometric classification for FTIR spectra of mycotoxin-contaminated maize and peanuts at regulatory limits.

    PubMed

    Kos, Gregor; Sieger, Markus; McMullin, David; Zahradnik, Celine; Sulyok, Michael; Öner, Tuba; Mizaikoff, Boris; Krska, Rudolf

    2016-10-01

    The rapid identification of mycotoxins such as deoxynivalenol and aflatoxin B1 in agricultural commodities is an ongoing concern for food importers and processors. While sophisticated chromatography-based methods are well established for regulatory testing by food safety authorities, few techniques exist to provide a rapid assessment for traders. This study advances the development of a mid-infrared spectroscopic method, recording spectra with little sample preparation. Spectral data were classified using a bootstrap-aggregated (bagged) decision tree method, evaluating the protein and carbohydrate absorption regions of the spectrum. The method was able to classify 79% of 110 maize samples at the European Union regulatory limit for deoxynivalenol of 1750 µg kg⁻¹ and, for the first time, 77% of 92 peanut samples at 8 µg kg⁻¹ of aflatoxin B1. A subset model revealed a dependency on variety and type of fungal infection. The employed CRC and SBL maize varieties could be pooled in the model with a reduction of classification accuracy from 90% to 79%. When samples infected with Fusarium verticillioides were removed, leaving samples infected with F. graminearum and F. culmorum in the dataset, classification accuracy improved from 73% to 79%. A 500 µg kg⁻¹ classification threshold for deoxynivalenol in maize performed even better, with 85% accuracy, presumably because a larger number of samples around the threshold increases representativeness. Comparison with established principal component analysis classification, which consistently showed overlapping clusters, confirmed the superior performance of bagged decision tree classification.
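
    A minimal sketch of bootstrap-aggregated decision tree classification in the same spirit, with synthetic features standing in for the FTIR absorption regions and an arbitrary above/below-limit label:

        import numpy as np
        from sklearn.ensemble import BaggingClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        X = rng.standard_normal((110, 60))               # toy 'absorbance' features
        y = X[:, :5].sum(axis=1) + rng.normal(0, 1, 110) > 0   # above/below limit

        clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                                random_state=0)
        print(cross_val_score(clf, X, y, cv=5).mean())   # classification accuracy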

  7. Optimization of Sample Points for Monitoring Arable Land Quality by Simulated Annealing while Considering Spatial Variations

    PubMed Central

    Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng

    2016-01-01

    With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
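
    A compact sketch of the simulated-annealing idea, thinning a dense point set while keeping spatial coverage; the coverage objective below is an invented stand-in for the paper's accuracy criterion, which also weighted spatial variation.

        import numpy as np

        rng = np.random.default_rng(4)
        pts = rng.uniform(0, 100, (300, 2))      # dense candidate sample sites

        def cost(keep):                          # mean distance to nearest kept site
            d = np.linalg.norm(pts[:, None, :] - pts[keep][None, :, :], axis=2)
            return d.min(axis=1).mean()

        keep = rng.choice(300, 60, replace=False)
        best, temp = cost(keep), 1.0
        for _ in range(2000):
            temp *= 0.995                        # geometric cooling schedule
            cand = keep.copy()
            cand[rng.integers(60)] = rng.integers(300)   # swap one retained site
            if len(set(cand.tolist())) < 60:
                continue                         # reject duplicate sites
            c = cost(cand)
            if c < best or rng.random() < np.exp((best - c) / temp):
                keep, best = cand, c             # accept (sometimes uphill)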

  8. KRAS mutation analysis of washing fluid from endoscopic ultrasound-guided fine needle aspiration improves cytologic diagnosis of pancreatic ductal adenocarcinoma.

    PubMed

    Park, Joo Kyung; Lee, Yoon Jung; Lee, Jong Kyun; Lee, Kyu Taek; Choi, Yoon-La; Lee, Kwang Hyuck

    2017-01-10

    EUS-FNA has become one of the most important diagnostic modalities for pancreatic ductal adenocarcinomas (PDACs). However, the acquired tissue specimens are sometimes insufficient for a definite cytological diagnosis. KRAS mutation, on the other hand, is the most frequent genetic alteration in PDACs, found in more than 90% of cases. A way to improve diagnostic accuracy for PDACs using both cytological examination and KRAS mutation analysis would therefore be of great help. The aim of this study was to evaluate whether conventional cytological examination combined with KRAS mutation analysis based on a modified PCR technology improves sensitivity and accuracy. We enrolled 43 patients with solid pancreatic masses, from whom 86 EUS-FNA specimens were obtained. During EUS-FNA, the needle catheter was flushed with 2 cc of saline, and the washing fluid from the first two passes was collected for KRAS mutation analysis (PNAClamp™ KRAS Mutation Detection Kit). There were 46 specimens from the 23 PDACs and 40 specimens from the 20 other pancreatic diseases. The sensitivity, specificity and accuracy were as follows: conventional cytopathologic examination, 63%, 100% and 80%; combined cytopathologic examination and KRAS mutation analysis, 87%, 100% and 93%. Furthermore, KRAS mutation was detected in 11 of 17 PDAC samples whose cytopathology results were inconclusive. KRAS mutation analysis with the PNAClamp™ technique on washing fluid from EUS-FNA, alongside cytological examination, may not only improve the diagnostic accuracy for PDACs but also establish a platform for genetic analysis as a diagnostic modality for PDACs.

  9. Econometric models for predicting confusion crop ratios

    NASA Technical Reports Server (NTRS)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed the individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures, and sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for CD/CRD data introduced measurement error into the CD/CRD models.

  10. A Systematic Review and Meta-Analysis of the Effects of Transcranial Direct Current Stimulation (tDCS) Over the Dorsolateral Prefrontal Cortex in Healthy and Neuropsychiatric Samples: Influence of Stimulation Parameters.

    PubMed

    Dedoncker, Josefien; Brunoni, Andre R; Baeken, Chris; Vanderhasselt, Marie-Anne

    2016-01-01

    Research into the effects of transcranial direct current stimulation (tDCS) of the dorsolateral prefrontal cortex (DLPFC) on cognitive functioning is increasing rapidly. However, methodological heterogeneity in prefrontal tDCS research is also increasing, particularly in technical stimulation parameters that might influence tDCS effects. Our objective was to systematically examine the influence of technical stimulation parameters on DLPFC-tDCS effects. We performed a systematic review and meta-analysis of tDCS studies targeting the DLPFC published from the first data available to February 2016. Only single-session, sham-controlled, within-subject studies reporting the effects of tDCS on cognition in healthy controls and neuropsychiatric patients were included. Evaluation of 61 studies showed that after single-session a-tDCS, but not c-tDCS, participants responded faster and more accurately on cognitive tasks. Sub-analyses specified that following a-tDCS, healthy subjects responded faster, while neuropsychiatric patients responded more accurately. Importantly, different stimulation parameters affected a-tDCS effects, but not c-tDCS effects, on accuracy: in healthy samples, increased current density and charge density resulted in improved accuracy, most prominently in females; for neuropsychiatric patients, task performance during a-tDCS resulted in stronger increases in accuracy rates than task performance following a-tDCS. In sum, healthy participants respond faster, but not more accurately, on cognitive tasks after a-tDCS; however, increasing the current density and/or charge might enhance response accuracy, particularly in females. In contrast, online task performance leads to greater increases in response accuracy than offline task performance in neuropsychiatric patients. Possible implications and practical recommendations are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Resolving occlusion and segmentation errors in multiple video object tracking

    NASA Astrophysics Data System (ADS)

    Cheng, Hsu-Yung; Hwang, Jenq-Neng

    2009-02-01

    In this work, we propose a method to integrate the Kalman filter and adaptive particle sampling for multiple video object tracking. The proposed framework is able to detect occlusion and segmentation error cases and perform adaptive particle sampling for accurate measurement selection. Compared with traditional particle filter based tracking methods, the proposed method generates particles only when necessary. With the concept of adaptive particle sampling, we can avoid the degeneracy problem because the sampling position and range are dynamically determined by parameters that are updated by Kalman filters. There is no need to spend time on processing particles with very small weights. The adaptive appearance model for an occluded object uses the Kalman filter predictions to determine the region that should be updated, avoiding the problem of updating the appearance with inadequate information under occlusion. The experimental results have shown that a small number of particles are sufficient to achieve high positioning and scaling accuracy. Also, the employment of adaptive appearance substantially improves the positioning and scaling accuracy of the tracking results.
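
    A minimal sketch of the Kalman step that drives the adaptive sampling: the predicted state centres the particle window and the predicted covariance sets its range (a 1-D constant-velocity model with invented noise levels, not the paper's exact formulation).

        import numpy as np

        F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity motion model
        H = np.array([[1.0, 0.0]])               # only position is observed
        Q, R = np.eye(2) * 0.01, np.array([[2.0]])

        def kalman_step(x, P, z):
            x, P = F @ x, F @ P @ F.T + Q                 # predict
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
            x = x + K @ (z - H @ x)                       # update with measurement
            P = (np.eye(2) - K @ H) @ P
            # particles would be drawn around x[0] with range ~ sqrt(P[0, 0])
            return x, P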

  12. Spectrally interleaved, comb-mode-resolved spectroscopy using swept dual terahertz combs

    PubMed Central

    Hsieh, Yi-Da; Iyonaga, Yuki; Sakaguchi, Yoshiyuki; Yokoyama, Shuko; Inaba, Hajime; Minoshima, Kaoru; Hindle, Francis; Araki, Tsutomu; Yasui, Takeshi

    2014-01-01

    Optical frequency combs are innovative tools for broadband spectroscopy because a series of comb modes can serve as frequency markers that are traceable to a microwave frequency standard. However, a mode distribution that is too discrete limits the spectral sampling interval to the mode frequency spacing even though individual mode linewidth is sufficiently narrow. Here, using a combination of a spectral interleaving and dual-comb spectroscopy in the terahertz (THz) region, we achieved a spectral sampling interval equal to the mode linewidth rather than the mode spacing. The spectrally interleaved THz comb was realized by sweeping the laser repetition frequency and interleaving additional frequency marks. In low-pressure gas spectroscopy, we achieved an improved spectral sampling density of 2.5 MHz and enhanced spectral accuracy of 8.39 × 10−7 in the THz region. The proposed method is a powerful tool for simultaneously achieving high resolution, high accuracy, and broad spectral coverage in THz spectroscopy. PMID:24448604

  13. Determination of Chinese rice wine from different wineries by near-infrared spectroscopy combined with chemometrics methods

    NASA Astrophysics Data System (ADS)

    Niu, Xiaoying; Ying, Yibin; Yu, Haiyan; Xie, Lijuan; Fu, Xiaping; Zhou, Ying; Jiang, Xuesong

    2007-09-01

    In this paper, 104 samples of Chinese rice wine of the same variety (Shaoxing rice wine), collected from three wineries ("guyuelongshan", "pagoda" brand, "kuaijishan") and three vintages (2002, 2004, 2004-2006), were analyzed by near-infrared transmission spectroscopy between 800 and 2500 nm. Spectral differences were studied by principal components analysis (PCA), and classification according to brand was carried out by discriminant analysis (DA) and partial least squares discriminant analysis (PLSDA). The DA model achieved a total accuracy of 94.23%; when used to predict the brand of the validation-set samples, the PLSDA model gave a better result, correctly classifying 100% of all three kinds of Chinese rice wine. The work reported here is a feasibility study and requires further development with a considerably larger number of samples from more brands. Further studies are needed to improve the accuracy and robustness, and to extend the discrimination to other Chinese rice wine varieties or brands.

  14. Discovery of novel variants in genotyping arrays improves genotype retention and reduces ascertainment bias

    PubMed Central

    2012-01-01

    Background High-density genotyping arrays that measure hybridization of genomic DNA fragments to allele-specific oligonucleotide probes are widely used to genotype single nucleotide polymorphisms (SNPs) in genetic studies, including human genome-wide association studies. Hybridization intensities are converted to genotype calls by clustering algorithms that assign each sample to a genotype class at each SNP. Data for SNP probes that do not conform to the expected pattern of clustering are often discarded, contributing to ascertainment bias and resulting in lost information - as much as 50% in a recent genome-wide association study in dogs. Results We identified atypical patterns of hybridization intensities that were highly reproducible and demonstrated that these patterns represent genetic variants that were not accounted for in the design of the array platform. We characterized variable intensity oligonucleotide (VINO) probes that display such patterns and are found in all hybridization-based genotyping platforms, including those developed for human, dog, cattle, and mouse. When recognized and properly interpreted, VINOs recovered a substantial fraction of discarded probes and counteracted SNP ascertainment bias. We developed software (MouseDivGeno) that identifies VINOs and improves the accuracy of genotype calling. MouseDivGeno produced highly concordant genotype calls when compared with other methods but it uniquely identified more than 786000 VINOs in 351 mouse samples. We used whole-genome sequence from 14 mouse strains to confirm the presence of novel variants explaining 28000 VINOs in those strains. We also identified VINOs in human HapMap 3 samples, many of which were specific to an African population. Incorporating VINOs in phylogenetic analyses substantially improved the accuracy of a Mus species tree and local haplotype assignment in laboratory mouse strains. Conclusion The problems of ascertainment bias and missing information due to genotyping errors are widely recognized as limiting factors in genetic studies. We have conducted the first formal analysis of the effect of novel variants on genotyping arrays, and we have shown that these variants account for a large portion of miscalled and uncalled genotypes. Genetic studies will benefit from substantial improvements in the accuracy of their results by incorporating VINOs in their analyses. PMID:22260749

  15. Long-Term Trajectories of the Development of Speech Sound Production in Pediatric Cochlear Implant Recipients

    PubMed Central

    Tomblin, J. Bruce; Peng, Shu-Chen; Spencer, Linda J.; Lu, Nelson

    2011-01-01

    Purpose This study characterized the development of speech sound production in prelingually deaf children with a minimum of 8 years of cochlear implant (CI) experience. Method Twenty-seven pediatric CI recipients' spontaneous speech samples from annual evaluation sessions were phonemically transcribed. Accuracy for these speech samples was evaluated in piecewise regression models. Results As a group, pediatric CI recipients showed steady improvement in speech sound production following implantation, but the improvement rate declined after 6 years of device experience. Piecewise regression models indicated that the slope estimating the participants' improvement rate was statistically greater than 0 during the first 6 years postimplantation, but not after 6 years. The group of pediatric CI recipients' accuracy of speech sound production after 4 years of device experience reasonably predicts their speech sound production after 5–10 years of device experience. Conclusions The development of speech sound production in prelingually deaf children stabilizes after 6 years of device experience, and typically approaches a plateau by 8 years of device use. Early growth in speech before 4 years of device experience did not predict later rates of growth or levels of achievement. However, good predictions could be made after 4 years of device use. PMID:18695018
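
    A piecewise regression of the kind described can be fitted as ordinary least squares with a hinge term at the knot; the sketch below places the knot at 6 years and uses simulated accuracy scores, not the study's data.

        import numpy as np

        years = np.arange(1.0, 11.0)             # years of device experience
        acc = 30 + 8 * np.minimum(years, 6) + \
              np.random.default_rng(5).normal(0, 2, 10)   # simulated accuracy (%)

        # columns: intercept, slope before the knot, extra slope after it
        X = np.column_stack([np.ones_like(years), years, np.maximum(years - 6, 0)])
        beta, *_ = np.linalg.lstsq(X, acc, rcond=None)
        slope_before, slope_after = beta[1], beta[1] + beta[2]   # ~8 and ~0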

  16. Improved optical axis determination accuracy for fiber-based polarization-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Matcher, Stephen J.

    2013-03-01

    We report on a new calibration technique that permits the accurate extraction of the sample Jones matrix, and hence the fast-axis orientation, using fiber-based polarization-sensitive optical coherence tomography (PS-OCT) built entirely on non-polarization-maintaining fiber such as SMF-28. In this technique, two quarter waveplates are used to completely specify the parameters of the system fibers in the sample arm so that the Jones matrix of the sample can be determined directly. The technique was validated on measurements of a quarter waveplate and an equine tendon sample with a single-mode fiber-based swept-source PS-OCT system.

  17. Palladium-based Mass-Tag Cell Barcoding with a Doublet-Filtering Scheme and Single Cell Deconvolution Algorithm

    PubMed Central

    Zunder, Eli R.; Finck, Rachel; Behbehani, Gregory K.; Amir, El-ad D.; Krishnaswamy, Smita; Gonzalez, Veronica D.; Lorang, Cynthia G.; Bjornson, Zach; Spitzer, Matthew H.; Bodenmiller, Bernd; Fantl, Wendy J.; Pe’er, Dana; Nolan, Garry P.

    2015-01-01

    SUMMARY Mass-tag cell barcoding (MCB) labels individual cell samples with unique combinatorial barcodes, after which they are pooled for processing and measurement as a single multiplexed sample. The MCB method eliminates variability between samples in antibody staining and instrument sensitivity, reduces antibody consumption, and shortens instrument measurement time. Here, we present an optimized MCB protocol with several improvements over previously described methods. The use of palladium-based labeling reagents expands the number of measurement channels available for mass cytometry and reduces interference with lanthanide-based antibody measurement. An error-detecting combinatorial barcoding scheme allows cell doublets to be identified and removed from the analysis. A debarcoding algorithm that is single cell-based rather than population-based improves the accuracy and efficiency of sample deconvolution. This debarcoding algorithm has been packaged into software that allows rapid and unbiased sample deconvolution. The MCB procedure takes 3–4 h, not including sample acquisition time of ~1 h per million cells. PMID:25612231
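
    The doublet-filtering idea can be sketched abstractly: if every valid barcode uses exactly k of n channels, any doublet (the union of two distinct codes) lights more than k channels and is rejected. The 3-of-7 toy below is illustrative only, not the paper's exact palladium scheme.

        from itertools import combinations

        n, k = 7, 3
        codes = [frozenset(c) for c in combinations(range(n), k)]  # valid barcodes

        def classify(hot_channels):
            hot = frozenset(hot_channels)
            return "single cell" if hot in codes else "doublet/noise -> filtered"

        print(classify({0, 2, 5}))       # a valid 3-of-7 code: single cell
        print(classify({0, 1, 2, 5}))    # union of two codes: rejected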

  18. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel 'V-plot' methodology to display accuracy values.

    PubMed

    Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test performance against a reference gold standard.
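
    A small simulation in the same spirit, assuming a fixed error model and cutoff: packing the sample near the decision threshold lowers 'accuracy' even though the test itself is unchanged (all numbers invented).

        import numpy as np

        rng = np.random.default_rng(6)

        def accuracy(chol_gold, cutoff=200.0):
            chol_rapid = chol_gold + rng.normal(0, 10, chol_gold.size)  # fixed error
            return np.mean((chol_rapid > cutoff) == (chol_gold > cutoff))

        print(accuracy(rng.normal(200, 10, 10_000)))   # values near cutoff: low
        print(accuracy(rng.normal(200, 60, 10_000)))   # spread-out sample: high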

  19. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and the other procedures used. The purpose of the accuracy assessment was to allow comparison of the cost and accuracy of various classification procedures as applied to various data types.

  20. Local classifier weighting by quadratic programming.

    PubMed

    Cevikalp, Hakan; Polikar, Robi

    2008-10-01

    It has been widely accepted that the classification accuracy can be improved by combining outputs of multiple classifiers. However, how to combine multiple classifiers with various (potentially conflicting) decisions is still an open problem. A rich collection of classifier combination procedures -- many of which are heuristic in nature -- have been developed for this goal. In this brief, we describe a dynamic approach to combine classifiers that have expertise in different regions of the input space. To this end, we use local classifier accuracy estimates to weight classifier outputs. Specifically, we estimate local recognition accuracies of classifiers near a query sample by utilizing its nearest neighbors, and then use these estimates to find the best weights of classifiers to label the query. The problem is formulated as a convex quadratic optimization problem, which returns optimal nonnegative classifier weights with respect to the chosen objective function, and the weights ensure that locally most accurate classifiers are weighted more heavily for labeling the query sample. Experimental results on several data sets indicate that the proposed weighting scheme outperforms other popular classifier combination schemes, particularly on problems with complex decision boundaries. Hence, the results indicate that local classification-accuracy-based combination techniques are well suited for decision making when the classifiers are trained by focusing on different regions of the input space.
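
    A simplified stand-in for this scheme, using normalized local accuracies rather than the brief's quadratic-programming solution: each classifier is scored on a query's k nearest validation neighbours and weighted accordingly.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def local_weights(classifiers, X_val, y_val, query, k=10):
            nn = NearestNeighbors(n_neighbors=k).fit(X_val)
            idx = nn.kneighbors(query.reshape(1, -1), return_distance=False)[0]
            acc = np.array([np.mean(c.predict(X_val[idx]) == y_val[idx])
                            for c in classifiers])   # local accuracy per classifier
            s = acc.sum()
            return acc / s if s > 0 else np.full(len(acc), 1.0 / len(acc))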

  1. Accurate quantification of creatinine in serum by coupling a measurement standard to extractive electrospray ionization mass spectrometry

    NASA Astrophysics Data System (ADS)

    Huang, Keke; Li, Ming; Li, Hongmei; Li, Mengwan; Jiang, You; Fang, Xiang

    2016-01-01

    Ambient ionization (AI) techniques have been widely used in chemistry, medicine, material science, environmental science, and forensic science. AI takes advantage of the direct desorption/ionization of chemicals in raw samples under ambient environmental conditions with minimal or no sample preparation. However, its quantitative accuracy is limited by matrix effects during the ionization process. To improve the quantitative accuracy of AI, a matrix reference material, which is a particular form of measurement standard, was coupled to an AI technique in this study. Consequently, the analyte concentration in a complex matrix can be easily quantified with high accuracy. As a demonstration, this novel method was applied to the accurate quantification of creatinine in serum using extractive electrospray ionization (EESI) mass spectrometry. Over the concentration range investigated (0.166 ~ 1.617 μg/mL), a calibration curve was obtained with satisfactory linearity (R² = 0.994) and acceptable relative standard deviations (RSD) of 4.6 ~ 8.0% (n = 6). Finally, the creatinine concentration of a serum sample was determined to be 36.18 ± 1.08 μg/mL, which is in excellent agreement with the certified value of 35.16 ± 0.39 μg/mL.

  2. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    PubMed

    Wang, Rong

    2015-01-01

    In real-world applications, face images vary with illumination, facial expression, and pose. More training samples help reveal the range of possible images of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples: the mirror faces generated from the original training samples are combined with the originals into a new training set. Face recognition experiments show that our method achieves high classification accuracy.
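
    The virtual-sample construction is a one-liner in NumPy; a minimal sketch, assuming images are stored as an (n, height, width) array:

        import numpy as np

        def augment_with_mirrors(train_images):
            """train_images: array of shape (n, height, width)."""
            mirrored = train_images[:, :, ::-1]   # flip each image left-right
            return np.concatenate([train_images, mirrored], axis=0)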

  3. Imbalanced Learning for Functional State Assessment

    NASA Technical Reports Server (NTRS)

    Li, Feng; McKenzie, Frederick; Li, Jiang; Zhang, Guangfan; Xu, Roger; Richey, Carl; Schnell, Tom

    2011-01-01

    This paper presents results of several imbalanced learning techniques applied to operator functional state assessment, where the data are highly imbalanced: some functional states (majority classes) have many more training samples than other states (minority classes). Conventional machine learning techniques tend to classify all data samples into the majority classes and perform poorly on the minority classes. In this study, we implemented five imbalanced learning techniques, including random under-sampling, random over-sampling, the synthetic minority over-sampling technique (SMOTE), borderline-SMOTE and adaptive synthetic sampling (ADASYN), to address this problem. Experimental results on a benchmark driving test dataset show that accuracies for the minority classes could be improved dramatically at the cost of slight performance degradation for the majority classes.
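
    A minimal SMOTE example with the imbalanced-learn package; the 19:1 class ratio and Gaussian features are toy stand-ins for the functional-state data.

        import numpy as np
        from imblearn.over_sampling import SMOTE

        rng = np.random.default_rng(7)
        X = np.vstack([rng.normal(0, 1, (950, 8)), rng.normal(2, 1, (50, 8))])
        y = np.array([0] * 950 + [1] * 50)       # 19:1 class imbalance

        X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
        print(np.bincount(y_res))                # both classes now have 950 samples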

  4. Deconvoluting simulated metagenomes: the performance of hard- and soft- clustering algorithms applied to metagenomic chromosome conformation capture (3C)

    PubMed Central

    DeMaere, Matthew Z.

    2016-01-01

    Background Chromosome conformation capture, coupled with high throughput DNA sequencing in protocols like Hi-C and 3C-seq, has been proposed as a viable means of generating data to resolve the genomes of microorganisms living in naturally occurring environments. Metagenomic Hi-C and 3C-seq datasets have begun to emerge, but the feasibility of resolving genomes when closely related organisms (strain-level diversity) are present in the sample has not yet been systematically characterised. Methods We developed a computational simulation pipeline for metagenomic 3C and Hi-C sequencing to evaluate the accuracy of genomic reconstructions at, above, and below an operationally defined species boundary. We simulated datasets and measured accuracy over a wide range of parameters. Five clustering algorithms were evaluated (2 hard, 3 soft) using an adaptation of the extended B-cubed validation measure. Results When all genomes in a sample are below 95% sequence identity, all of the tested clustering algorithms performed well. When sequence data contains genomes above 95% identity (our operational definition of strain-level diversity), a naive soft-clustering extension of the Louvain method achieves the highest performance. Discussion Previously, only hard-clustering algorithms have been applied to metagenomic 3C and Hi-C data, yet none of these perform well when strain-level diversity exists in a metagenomic sample. Our simple extension of the Louvain method performed the best in these scenarios, however, accuracy remained well below the levels observed for samples without strain-level diversity. Strain resolution is also highly dependent on the amount of available 3C sequence data, suggesting that depth of sequencing must be carefully considered during experimental design. Finally, there appears to be great scope to improve the accuracy of strain resolution through further algorithm development. PMID:27843713

  5. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    NASA Astrophysics Data System (ADS)

    Saha, Debasish; Kemanian, Armen R.; Rau, Benjamin M.; Adler, Paul R.; Montes, Felipe

    2017-04-01

    Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (corn-soybean rotation), College Station, TX (corn-vetch rotation), Fort Collins, CO (irrigated corn), and Pullman, WA (winter wheat), representing diverse agro-ecoregions of the United States. Fertilization source, rate, and timing were site-specific. These simulated fluxes served as surrogates for daily measurements in the analysis. We "sampled" the fluxes using a fixed interval (1-32 days) or a rule-based (decision-tree-based) sampling method. Two types of decision trees were built: a high-input tree (HI) that included soil inorganic nitrogen (SIN) as a predictor variable, and a low-input tree (LI) that excluded SIN. Other predictor variables were identified with Random Forest. The decision trees were inverted to be used as rules for sampling a representative number of members from each terminal node. The uncertainty of the annual N2O flux estimate increased with the length of the fixed interval. Fixed sampling intervals of 4 and 8 days were required at College Station and Ames, respectively, to yield ±20% accuracy in the flux estimate; a 12-day interval rendered the same accuracy at Fort Collins and Pullman. Both the HI and LI rule-based methods provided the same accuracy as the fixed interval method with up to a 60% reduction in sampling events, particularly at locations with greater temporal flux variability. For instance, at Ames, the HI rule-based and fixed interval methods required 16 and 91 sampling events, respectively, to achieve the same absolute bias of 0.2 kg N ha⁻¹ yr⁻¹ in estimating cumulative N2O flux. These results suggest that using simulation models along with decision trees can reduce the cost and improve the accuracy of estimates of cumulative N2O fluxes obtained with the discrete chamber-based method.

  6. Reproducibility of preclinical animal research improves with heterogeneity of study samples

    PubMed Central

    Vogt, Lucile; Sena, Emily S.; Würbel, Hanno

    2018-01-01

    Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495

  7. Spectrophotometric analyses of hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) in water.

    PubMed

    Shi, Cong; Xu, Zhonghou; Smolinski, Benjamin L; Arienti, Per M; O'Connor, Gregory; Meng, Xiaoguang

    2015-07-01

    A simple and accurate spectrophotometric method for on-site analysis of royal demolition explosive (RDX) in water samples was developed based on the Berthelot reaction. The sensitivity and accuracy of an existing spectrophotometric method was improved by: replacing toxic chemicals with more stable and safer reagents; optimizing the reagent dose and reaction time; improving color stability; and eliminating the interference from inorganic nitrogen compounds in water samples. Cation and anion exchange resin cartridges were developed and used for sample pretreatment to eliminate the effect of ammonia and nitrate on RDX analyses. The detection limit of the method was determined to be 100 μg/L. The method was used successfully for analysis of RDX in untreated industrial wastewater samples. It can be used for on-site monitoring of RDX in wastewater for early detection of chemical spills and failure of wastewater treatment systems. Copyright © 2015. Published by Elsevier B.V.

  8. Recognition of genetically modified product based on affinity propagation clustering and terahertz spectroscopy

    NASA Astrophysics Data System (ADS)

    Liu, Jianjun; Kan, Jianquan

    2018-04-01

    In this paper, a new method for identifying genetically modified material from terahertz spectra is proposed: a support vector machine (SVM) combined with affinity propagation clustering. The algorithm uses affinity propagation clustering to cluster and label the unlabeled training samples, and the existing SVM training data are continuously updated during the iterative process. Because building the identification model does not require manually labeled training samples, the error caused by human labeling is reduced and the identification accuracy of the model is greatly improved.

  9. Optical dating of tsunami-laid sand from an Oregon coastal lake

    USGS Publications Warehouse

    Ollerhead, J.; Huntley, D.J.; Nelson, A.R.; Kelsey, H.M.

    2001-01-01

    Optical ages for five samples of tsunami-laid sand from an Oregon coastal lake were determined using an infrared optical-dating method on K-feldspar separates and, as a test of accuracy, compared to ages determined by AMS ¹⁴C dating of detrital plant fragments found in the same beds. Two optical ages were about 20% younger than calibrated ¹⁴C ages of about 3.1 and 4.3 ka. Correction of the optical ages using measured anomalous fading rates brings them into agreement with the ¹⁴C ages. The approach used holds significant promise for improving the accuracy of infrared optical-dating methods. Luminescence data for the other three samples result in optical age limits much greater than the ¹⁴C ages. These data provide a textbook demonstration of the correlation between scatter in the luminescence intensity of individual sample aliquots and their normalization values that is expected when the samples contain sand grains not adequately exposed to daylight just prior to or during deposition and burial. Thus, the data for these three samples suggest that the tsunamis eroded young and old sand deposits before dropping the sand in the lake. © 2001 Elsevier Science Ltd. All rights reserved.

  10. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... You may extend the sampling time to improve measurement accuracy of PM emissions, using good...-speed engines whose design prevents full-load operation for extended periods, you may ask for approval... designed to operate for extended periods. (e) See 40 CFR part 1065 for detailed specifications of...

  11. Quantification of Phenol, Phenyl Glucuronide, and Phenyl Sulfate in Blood of Unanesthetized Rainbow Trout by On-line Microdialysis Sampling

    EPA Science Inventory

    In this study we have developed a novel method to estimate in vivo rates of metabolism in unanesthetized fish. This method provides a basis for evaluating the accuracy of in vitro-in vivo metabolism extrapolations. As such, this research will lead to improved risk assessments f...

  12. Determination of carbonate carbon in geological materials by coulometric titration

    USGS Publications Warehouse

    Engleman, E.E.; Jackson, L.L.; Norton, D.R.

    1985-01-01

    A coulometric titration is used for the determination of carbonate carbon in geological materials. Carbon dioxide is evolved from the sample by the addition of 2 M perchloric acid, with heating, and is determined by automated coulometric titration. The coulometric titration showed improved speed and precision with comparable accuracy to gravimetric and gasometric techniques. © 1985.

  13. The Effects of Instructions on Mothers' Ratings of Child Attention-Deficit/Hyperactivity Disorder Symptoms

    ERIC Educational Resources Information Center

    Johnston, Charlotte; Weiss, Margaret; Murray, Candice; Miller, Natalie

    2011-01-01

    We examined whether instructional materials describing how to rate child ADHD symptoms would improve the accuracy of mothers' ratings of ADHD symptoms presented in standard child behavior stimuli, and whether instructions would be equally effective across a range of maternal depressive symptoms and family incomes. A community sample of 100 mothers…

  14. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... You may extend the sampling time to improve measurement accuracy of PM emissions, using good..., you may omit speed, torque, and power points from the duty-cycle regression statistics if the... mapped. (2) For variable-speed engines without low-speed governors, you may omit torque and power points...

  15. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... You may extend the sampling time to improve measurement accuracy of PM emissions, using good..., you may omit speed, torque, and power points from the duty-cycle regression statistics if the... mapped. (2) For variable-speed engines without low-speed governors, you may omit torque and power points...

  16. Improving CSF biomarker accuracy in predicting prevalent and incident Alzheimer disease

    PubMed Central

    Fagan, A.M.; Williams, M.M.; Ghoshal, N.; Aeschleman, M.; Grant, E.A.; Marcus, D.S.; Mintun, M.A.; Holtzman, D.M.; Morris, J.C.

    2011-01-01

    Objective: To investigate factors, including cognitive and brain reserve, which may independently predict prevalent and incident dementia of the Alzheimer type (DAT) and to determine whether inclusion of identified factors increases the predictive accuracy of the CSF biomarkers Aβ42, tau, ptau181, tau/Aβ42, and ptau181/Aβ42. Methods: Logistic regression identified variables that predicted prevalent DAT when considered together with each CSF biomarker in a cross-sectional sample of 201 participants with normal cognition and 46 with DAT. The area under the receiver operating characteristic curve (AUC) from the resulting model was compared with the AUC generated using the biomarker alone. In a second sample with normal cognition at baseline and longitudinal data available (n = 213), Cox proportional hazards models identified variables that predicted incident DAT together with each biomarker, and each model's concordance probability estimate (CPE) was compared with the CPE generated using the biomarker alone. Results: APOE genotype including an ε4 allele, male gender, and smaller normalized whole brain volumes (nWBV) were cross-sectionally associated with DAT when considered together with every biomarker. In the longitudinal sample (mean follow-up = 3.2 years), 14 participants (6.6%) developed DAT. Older age predicted a faster time to DAT in every model, and greater education predicted a slower time in 4 of 5 models. Inclusion of ancillary variables resulted in better cross-sectional prediction of DAT for all biomarkers (p < 0.0021), and better longitudinal prediction for 4 of 5 biomarkers (p < 0.0022). Conclusions: The predictive accuracy of CSF biomarkers is improved by including age, education, and nWBV in analyses. PMID:21228296
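
    A sketch of the underlying comparison, biomarker-alone AUC versus biomarker-plus-covariates AUC, on simulated data; the covariate effects and sample size are invented for illustration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(8)
        n = 250
        biomarker = rng.normal(0, 1, n)
        covars = rng.normal(0, 1, (n, 3))        # stand-ins for APOE, gender, nWBV
        logit = 1.2 * biomarker + covars @ np.array([0.8, 0.4, -0.6])
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        Xfull = np.column_stack([biomarker, covars])
        auc_alone = roc_auc_score(y, biomarker)
        model = LogisticRegression().fit(Xfull, y)
        auc_full = roc_auc_score(y, model.predict_proba(Xfull)[:, 1])
        print(auc_alone, auc_full)               # covariates should raise the AUC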

  17. Establishing the accuracy of asteroseismic mass and radius estimates of giant stars - I. Three eclipsing systems at [Fe/H] ˜ -0.3 and the need for a large high-precision sample

    NASA Astrophysics Data System (ADS)

    Brogaard, K.; Hansen, C. J.; Miglio, A.; Slumstrup, D.; Frandsen, S.; Jessen-Hansen, J.; Lund, M. N.; Bossini, D.; Thygesen, A.; Davies, G. R.; Chaplin, W. J.; Arentoft, T.; Bruntt, H.; Grundahl, F.; Handberg, R.

    2018-05-01

    We aim to establish and improve the accuracy level of asteroseismic estimates of mass, radius, and age of giant stars. This can be achieved by measuring independent, accurate, and precise masses, radii, effective temperatures and metallicities of long period eclipsing binary stars with a red giant component that displays solar-like oscillations. We measured precise properties of the three eclipsing binary systems KIC 7037405, KIC 9540226, and KIC 9970396 and estimated their ages to be 5.3 ± 0.5, 3.1 ± 0.6, and 4.8 ± 0.5 Gyr. The measurements of the giant stars were compared to corresponding measurements of mass, radius, and age using asteroseismic scaling relations and grid modelling. We found that asteroseismic scaling relations without corrections to Δν systematically overestimate the masses of the three red giants by 11.7 per cent, 13.7 per cent, and 18.9 per cent, respectively. However, by applying theoretical correction factors fΔν according to Rodrigues et al. (2017), we reached general agreement between dynamical and asteroseismic mass estimates, with no indication of systematic differences at the precision level of the asteroseismic measurements. The larger sample investigated by Gaulme et al. (2016) showed a much more complicated situation, in which some stars show agreement between the dynamical and corrected asteroseismic measures while others suggest significant overestimates of the asteroseismic measures. We found no simple explanation for this, but indications of several potential problems, some theoretical, others observational. Therefore, an extension of the present precision study to a larger sample of eclipsing systems is crucial for establishing and improving the accuracy of asteroseismology of giant stars.
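
    The asteroseismic scaling relations referred to above have a standard textbook form; the sketch below applies them with an optional correction factor fΔν. The solar reference values and the convention of dividing the observed Δν by fΔν are common literature choices assumed here, not taken from this paper.

        # Minimal sketch of the standard asteroseismic scaling relations with a
        # correction factor f_dnu applied to the large frequency separation.
        NU_MAX_SUN = 3090.0   # muHz, typical literature reference value
        DNU_SUN = 135.1       # muHz
        TEFF_SUN = 5777.0     # K

        def scaling_mass_radius(nu_max, dnu, teff, f_dnu=1.0):
            """Return (M/Msun, R/Rsun) from nu_max, dnu (muHz) and Teff (K)."""
            dnu_corr = dnu / f_dnu   # assumed sign convention of the correction
            mass = (nu_max / NU_MAX_SUN) ** 3 * (dnu_corr / DNU_SUN) ** -4 \
                   * (teff / TEFF_SUN) ** 1.5
            radius = (nu_max / NU_MAX_SUN) * (dnu_corr / DNU_SUN) ** -2 \
                     * (teff / TEFF_SUN) ** 0.5
            return mass, radius

        # An f_dnu a few per cent below unity lowers the inferred mass, in the
        # direction of the corrections discussed in the abstract.
        print(scaling_mass_radius(nu_max=30.0, dnu=4.0, teff=4800.0, f_dnu=0.97))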

  18. An incremental knowledge assimilation system (IKAS) for mine detection

    NASA Astrophysics Data System (ADS)

    Porway, Jake; Raju, Chaitanya; Varadarajan, Karthik Mahesh; Nguyen, Hieu; Yadegar, Joseph

    2010-04-01

    In this paper we present an adaptive incremental learning system for underwater mine detection and classification that utilizes statistical models of seabed texture and an adaptive nearest-neighbor classifier to identify varied underwater targets in many different environments. The first stage of processing uses our Background Adaptive ANomaly detector (BAAN), which identifies statistically likely target regions using Gabor filter responses over the image. Using this information, BAAN classifies the background type and updates its detection using background-specific parameters. To perform classification, a Fully Adaptive Nearest Neighbor (FAAN) determines the best label for each detection. FAAN uses an extremely fast version of Nearest Neighbor to find the most likely label for the target. The classifier perpetually assimilates new and relevant information into its existing knowledge database in an incremental fashion, allowing improved classification accuracy and capturing concept drift in the target classes. Experiments show that the system achieves >90% classification accuracy on underwater mine detection tasks performed on synthesized datasets provided by the Office of Naval Research. We have also demonstrated that the system can incrementally improve its detection accuracy by constantly learning from new samples.
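
    A minimal sketch of the incremental nearest-neighbor idea at the core of FAAN: classify by the nearest stored prototype and assimilate newly labelled samples into the knowledge base so accuracy can improve over time. The distance metric and assimilation policy here are simplifications, not the paper's.

        import numpy as np

        class IncrementalNN:
            """1-nearest-neighbour classifier that assimilates new labelled samples."""
            def __init__(self):
                self.X, self.y = [], []

            def assimilate(self, x, label):
                self.X.append(np.asarray(x, dtype=float))
                self.y.append(label)

            def predict(self, x):
                d = [np.linalg.norm(np.asarray(x) - xi) for xi in self.X]
                return self.y[int(np.argmin(d))]

        clf = IncrementalNN()
        clf.assimilate([0.1, 0.2], "background")
        clf.assimilate([0.9, 0.8], "mine")
        print(clf.predict([0.85, 0.75]))        # -> "mine"
        clf.assimilate([0.5, 0.5], "mine")      # new knowledge refines the boundary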

  19. Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.

    PubMed

    Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J

    2009-11-01

    Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
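
    A minimal sketch of the regression-based-norms idea: fit a healthy-sample regression of a test score on age, education, and estimated IQ, then express a new examinee's score as a standardized residual. All data and coefficients below are simulated for illustration.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 400
        age = rng.uniform(20, 80, n)
        educ = rng.uniform(8, 20, n)
        est_iq = rng.normal(100, 15, n)
        score = 60 - 0.3 * age + 0.8 * educ + 0.2 * est_iq + rng.normal(0, 5, n)

        X = np.column_stack([age, educ, est_iq])
        model = LinearRegression().fit(X, score)
        resid_sd = np.std(score - model.predict(X), ddof=X.shape[1] + 1)

        def rbn_z(age, educ, est_iq, observed):
            """z-score of an observed score relative to the norm-based expectation."""
            expected = model.predict([[age, educ, est_iq]])[0]
            return (observed - expected) / resid_sd

        print(round(rbn_z(70, 12, 95, observed=45.0), 2))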

  20. Tritium internal dose estimation from measurements with liquid scintillators.

    PubMed

    Pántya, A; Dálnoki, Á; Imre, A R; Zagyvai, P; Pázmándi, T

    2018-07-01

    Tritium may exist in several chemical and physical forms in workplaces, common occurrences are in vapor or liquid form (as tritiated water) and in organic form (e.g. thymidine) which can get into the body by inhalation or by ingestion. For internal dose assessment it is usually assumed that urine samples for tritium analysis are obtained after the tritium concentration inside the body has reached equilibrium following intake. Comparison was carried out for two types of vials, two efficiency calculation methods and two available liquid scintillation devices to highlight the errors of the measurements. The results were used for dose estimation with MONDAL-3 software. It has been shown that concerning the accuracy of the final internal dose assessment, the uncertainties of the assumptions used in the dose assessment (for example the date and route of intake, the physical and chemical form) can be more influential than the errors of the measured data. Therefore, the improvement of the experimental accuracy alone is not the proper way to improve the accuracy of the internal dose estimation. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Feasibility Analysis of DEM Differential Method on Tree Height Assessment with Terra-SAR/TanDEM-X Data

    NASA Astrophysics Data System (ADS)

    Zhang, Wangfei; Chen, Erxue; Li, Zengyuan; Feng, Qi; Zhao, Lei

    2016-08-01

    The DEM Differential Method is an effective and efficient way to assess forest tree height using polarimetric and interferometric technology; however, its accuracy depends on the accuracy of the interferometric results and of the DEM. Terra-SAR/TanDEM-X, which established the first spaceborne bistatic interferometer, can provide highly accurate cross-track interferometric images globally, without inherent accuracy limitations such as temporal decorrelation and atmospheric disturbance. These characteristics give Terra-SAR/TanDEM-X great potential for global or regional tree height assessment, which has been constrained by temporal decorrelation in traditional repeat-pass interferometry. Currently, in China, collecting highly accurate DEMs with lidar is costly, and it is also difficult to obtain truly representative ground survey samples to test and verify the assessment results. In this paper, we analyzed the feasibility of using Terra-SAR/TanDEM-X data to assess forest tree height with freely available DEM data such as ASTER-GDEM and archived ground in-situ data such as forest management inventory (FMI) data. First, the accuracy of ASTER-GDEM and of the FMI data was assessed against the DEM and canopy height model (CHM) extracted from lidar data. The results show that the average elevation RMSE between ASTER-GDEM and the lidar DEM is about 13 m, but the two are highly correlated, with a correlation coefficient of 0.96. With a linear regression model, we can compensate ASTER-GDEM and improve its accuracy to nearly that of the lidar DEM at the same scale. The correlation coefficient between FMI and CHM is 0.40; its accuracy can likewise be improved by a linear regression model within 95% confidence intervals. After compensation of ASTER-GDEM and FMI, we calculated tree heights at the Mengla test site with the DEM Differential Method. The results show that the corrected ASTER-GDEM effectively improves the assessment accuracy: the average assessment accuracy before and after correction is 0.73 and 0.76, and the RMSE is 5.5 and 4.4, respectively.
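
    The core computation is a DEM difference after a regression-based correction of the coarse DEM; a minimal sketch with synthetic arrays (not the Mengla data) follows.

        # Sketch of the DEM-differential idea: canopy height is the difference
        # between an interferometric surface model and a ground DEM, after
        # correcting the coarse DEM against a reference via linear regression.
        import numpy as np

        rng = np.random.default_rng(2)
        lidar_dem = rng.uniform(500, 900, 1000)                       # reference ground elevation (m)
        aster_gdem = 0.95 * lidar_dem + 30 + rng.normal(0, 13, 1000)  # biased coarse DEM

        # Linear correction of the coarse DEM toward the reference
        slope, intercept = np.polyfit(aster_gdem, lidar_dem, 1)
        aster_corrected = slope * aster_gdem + intercept

        insar_dsm = lidar_dem + rng.uniform(5, 25, 1000)   # canopy-top surface (m)
        tree_height = insar_dsm - aster_corrected          # DEM differential
        print("mean height estimate: %.1f m" % tree_height.mean())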

  2. Percutaneous spinal fixation simulation with virtual reality and haptics.

    PubMed

    Luciano, Cristian J; Banerjee, P Pat; Sorenson, Jeffery M; Foley, Kevin T; Ansari, Sameer A; Rizzi, Silvio; Germanwala, Anand V; Kranzler, Leonard; Chittiboina, Prashant; Roitberg, Ben Z

    2013-01-01

    In this study, we evaluated the use of a part-task simulator with 3-dimensional and haptic feedback as a training tool for percutaneous spinal needle placement. To evaluate the learning effectiveness in terms of entry point/target point accuracy of percutaneous spinal needle placement on a high-performance augmented-reality and haptic technology workstation with the ability to control the duration of computer-simulated fluoroscopic exposure, thereby simulating an actual situation. Sixty-three fellows and residents performed needle placement on the simulator. A virtual needle was percutaneously inserted into a virtual patient's thoracic spine derived from an actual patient computed tomography data set. Ten of 126 needle placement attempts by 63 participants ended in failure for a failure rate of 7.93%. From all 126 needle insertions, the average error (15.69 vs 13.91), average fluoroscopy exposure (4.6 vs 3.92), and average individual performance score (32.39 vs 30.71) improved from the first to the second attempt. Performance accuracy yielded P = .04 from a 2-sample t test in which the rejected null hypothesis assumes no improvement in performance accuracy from the first to second attempt in the test session. The experiments showed evidence (P = .04) of performance accuracy improvement from the first to the second percutaneous needle placement attempt. This result, combined with previous learning retention and/or face validity results of using the simulator for open thoracic pedicle screw placement and ventriculostomy catheter placement, supports the efficacy of augmented reality and haptics simulation as a learning tool.

  3. Simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) for dynamic contrast-enhanced MRI of liver.

    PubMed

    Ning, Jia; Sun, Yongliang; Xie, Sheng; Zhang, Bida; Huang, Feng; Koken, Peter; Smink, Jouke; Yuan, Chun; Chen, Huijun

    2018-05-01

    To propose a simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) method for liver dynamic contrast-enhanced MRI. The proposed SAHA simultaneously acquired high temporal-resolution 2D images for vascular input function extraction using Cartesian sampling and 3D large-coverage high spatial-resolution liver dynamic contrast-enhanced images using golden angle stack-of-stars acquisition in an interleaved way. Simulations were conducted to investigate the accuracy of SAHA in pharmacokinetic analysis. A healthy volunteer and three patients with cirrhosis or hepatocellular carcinoma were included in the study to investigate the feasibility of SAHA in vivo. Simulation studies showed that SAHA can provide closer results to the true values and lower root mean square error of estimated pharmacokinetic parameters in all of the tested scenarios. The in vivo scans of subjects provided fair image quality of both 2D images for arterial input function and portal venous input function and 3D whole liver images. The in vivo fitting results showed that the perfusion parameters of healthy liver were significantly different from those of cirrhotic liver and HCC. The proposed SAHA can provide improved accuracy in pharmacokinetic modeling and is feasible in human liver dynamic contrast-enhanced MRI, suggesting that SAHA is a potential tool for liver dynamic contrast-enhanced MRI. Magn Reson Med 79:2629-2641, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  4. First-principles calculations of mobility

    NASA Astrophysics Data System (ADS)

    Krishnaswamy, Karthik

    First-principles calculations can be a powerful predictive tool for studying, modeling and understanding the fundamental scattering mechanisms impacting carrier transport in materials. In the past, calculations have provided important qualitative insights, but numerical accuracy has been limited due to computational challenges. In this talk, we will discuss some of the challenges involved in calculating electron-phonon scattering and carrier mobility, and outline approaches to overcome them. Topics will include the limitations of models for electron-phonon interaction, the importance of grid sampling, and the use of Gaussian smearing to replace energy-conserving delta functions. Using prototypical examples of oxides that are of technological importance (SrTiO3, BaSnO3, Ga2O3, and WO3), we will demonstrate computational approaches to overcome these challenges and improve the accuracy. One approach that leads to a distinct improvement in the accuracy is the use of analytic functions for the band dispersion, which allows for an exact solution of the energy-conserving delta function. For select cases, we also discuss direct quantitative comparisons with experimental results. The computational approaches and methodologies discussed in the talk are general and applicable to other materials, and greatly improve the numerical accuracy of the calculated transport properties, such as carrier mobility, conductivity and Seebeck coefficient. This work was performed in collaboration with B. Himmetoglu, Y. Kang, W. Wang, A. Janotti and C. G. Van de Walle, and supported by the LEAST Center, the ONR EXEDE MURI, and NSF.

  5. Molecular cancer classification using a meta-sample-based regularized robust coding method.

    PubMed

    Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen

    2014-01-01

    Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size typical of GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension reduction based methods.
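
    The following sketch illustrates the general meta-sample coding recipe that MRRCC builds on: extract meta-samples per class by SVD, code a test sample sparsely over all meta-samples, and pick the class with the smallest reconstruction residual. It follows the MSRC outline with a plain l1 (Lasso) coder rather than the paper's regularized robust coding, and uses random data.

        import numpy as np
        from sklearn.linear_model import Lasso

        def meta_samples(X_class, k=3):
            # Leading left singular vectors span the class subspace ("meta-samples")
            U, _, _ = np.linalg.svd(X_class.T, full_matrices=False)
            return U[:, :k]

        def classify(x, class_blocks):
            D = np.hstack(class_blocks)                   # dictionary of meta-samples
            coef = Lasso(alpha=0.01, max_iter=5000).fit(D, x).coef_
            residuals, start = [], 0
            for B in class_blocks:                        # class-wise reconstruction error
                k = B.shape[1]
                residuals.append(np.linalg.norm(x - B @ coef[start:start + k]))
                start += k
            return int(np.argmin(residuals))

        rng = np.random.default_rng(3)
        X0 = rng.normal(0, 1, (20, 50))                   # class-0 training samples
        X1 = rng.normal(0.8, 1, (20, 50))                 # class-1 training samples
        blocks = [meta_samples(X0), meta_samples(X1)]
        print(classify(X1[0], blocks))                    # expect 1 in most runs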

  6. Using Functional Electrical Stimulation Mediated by Iterative Learning Control and Robotics to Improve Arm Movement for People With Multiple Sclerosis.

    PubMed

    Sampson, Patrica; Freeman, Chris; Coote, Susan; Demain, Sara; Feys, Peter; Meadmore, Katie; Hughes, Ann-Marie

    2016-02-01

    Few interventions address multiple sclerosis (MS) arm dysfunction but robotics and functional electrical stimulation (FES) appear promising. This paper investigates the feasibility of combining FES with passive robotic support during virtual reality (VR) training tasks to improve upper limb function in people with multiple sclerosis (pwMS). The system assists patients in following a specified trajectory path, employing an advanced model-based paradigm termed iterative learning control (ILC) to adjust the FES to improve accuracy and maximise voluntary effort. Reaching tasks were repeated six times with ILC learning the optimum control action from previous attempts. A convenience sample of five pwMS was recruited from local MS societies, and the intervention comprised 18 one-hour training sessions over 10 weeks. The accuracy of tracking performance without FES and the amount of FES delivered during training were analyzed using regression analysis. Clinical functioning of the arm was documented before and after treatment with standard tests. Statistically significant results following training included: improved accuracy of tracking performance both when assisted and unassisted by FES; reduction in maximum amount of FES needed to assist tracking; and less impairment in the proximal arm that was trained. The system was well tolerated by all participants with no increase in muscle fatigue reported. This study confirms the feasibility of FES combined with passive robot assistance as a potentially effective intervention to improve arm movement and control in pwMS and provides the basis for a follow-up study.

  7. Evaluation of a novel flexible snake robot for endoluminal surgery.

    PubMed

    Patel, Nisha; Seneci, Carlo A; Shang, Jianzhong; Leibrandt, Konrad; Yang, Guang-Zhong; Darzi, Ara; Teare, Julian

    2015-11-01

    Endoluminal therapeutic procedures such as endoscopic submucosal dissection are increasingly attractive given the shift in surgical paradigm towards minimally invasive surgery. This novel three-channel articulated robot was developed to overcome the limitations of the flexible endoscope which poses a number of challenges to endoluminal surgery. The device enables enhanced movement in a restricted workspace, with improved range of motion and with the accuracy required for endoluminal surgery. To evaluate a novel flexible robot for therapeutic endoluminal surgery. Bench-top studies. Research laboratory. Targeting and navigation tasks of the robot were performed to explore the range of motion and retroflexion capabilities. Complex endoluminal tasks such as endoscopic mucosal resection were also simulated. Successful completion, accuracy and time to perform the bench-top tasks were the main outcome measures. The robot ranges of movement, retroflexion and navigation capabilities were demonstrated. The device showed significantly greater accuracy of targeting in a retroflexed position compared to a conventional endoscope. Bench-top study and small study sample. We were able to demonstrate a number of simulated endoscopy tasks such as navigation, targeting, snaring and retroflexion. The improved accuracy of targeting whilst in a difficult configuration is extremely promising and may facilitate endoluminal surgery which has been notoriously challenging with a conventional endoscope.

  8. Combination of Autoantibody Signature with PSA Level Enables a Highly Accurate Blood-Based Differentiation of Prostate Cancer Patients from Patients with Benign Prostatic Hyperplasia.

    PubMed

    Leidinger, Petra; Keller, Andreas; Milchram, Lisa; Harz, Christian; Hart, Martin; Werth, Angelika; Lenhof, Hans-Peter; Weinhäusel, Andreas; Keck, Bastian; Wullich, Bernd; Ludwig, Nicole; Meese, Eckart

    2015-01-01

    Although an increased level of prostate-specific antigen can be an indication of prostate cancer, other causes often lead to a high rate of false positive results. Therefore, an additional serological screening of autoantibodies in patients' sera could improve the detection of prostate cancer. We performed protein macroarray screening with sera from 49 prostate cancer patients, 70 patients with benign prostatic hyperplasia and 28 healthy controls and compared the autoimmune response in those groups. We were able to distinguish prostate cancer patients from normal controls with an accuracy of 83.2%, patients with benign prostatic hyperplasia from normal controls with an accuracy of 86.0% and prostate cancer patients from patients with benign prostatic hyperplasia with an accuracy of 70.3%. By combining the seroreactivity pattern with a PSA level higher than 4.0 ng/ml, this classification could be improved to an accuracy of 84.1%. For selected proteins we were able to confirm the differential expression by using Luminex on 84 samples. We provide a minimally invasive serological method to reduce false positive results in the detection of prostate cancer and, in addition to PSA screening, to distinguish men with prostate cancer from men with benign prostatic hyperplasia.

  9. Improved δ(13)C analysis of amino sugars in soil by ion chromatography-oxidation-isotope ratio mass spectrometry.

    PubMed

    Dippold, Michaela A; Boesel, Stefanie; Gunina, Anna; Kuzyakov, Yakov; Glaser, Bruno

    2014-03-30

    Amino sugars build up microbial cell walls and are important components of soil organic matter. To evaluate their sources and turnover, δ(13)C analysis of soil-derived amino sugars by liquid chromatography was recently suggested. However, amino sugar δ(13)C determination remains challenging due to (1) a strong matrix effect, (2) CO2 -binding by alkaline eluents, and (3) strongly different chromatographic behavior and concentrations of basic and acidic amino sugars. To overcome these difficulties we established an ion chromatography-oxidation-isotope ratio mass spectrometry method to improve and facilitate soil amino sugar analysis. After acid hydrolysis of soil samples, the extract was purified from salts and other components impeding chromatographic resolution. The amino sugar concentrations and δ(13)C values were determined by coupling an ion chromatograph to an isotope ratio mass spectrometer. The accuracy and precision of quantification and δ(13)C determination were assessed. Internal standards enabled correction for losses during analysis, with a relative standard deviation <6%. The higher magnitude peaks of basic than of acidic amino sugars required an amount-dependent correction of δ(13)C values. This correction improved the accuracy of the determination of δ(13)C values to <1.5‰ and the precision to <0.5‰ for basic and acidic amino sugars in a single run. This method enables parallel quantification and δ(13)C determination of basic and acidic amino sugars in a single chromatogram due to the advantages of coupling an ion chromatograph to the isotope ratio mass spectrometer. Small adjustments of sample amount and injection volume are necessary to optimize precision and accuracy for individual soils. Copyright © 2014 John Wiley & Sons, Ltd.

  10. Google Goes Cancer: Improving Outcome Prediction for Cancer Patients by Network-Based Ranking of Marker Genes

    PubMed Central

    Roy, Janine; Aust, Daniela; Knösel, Thomas; Rümmele, Petra; Jahnke, Beatrix; Hentrich, Vera; Rückert, Felix; Niedergethmann, Marco; Weichert, Wilko; Bahra, Marcus; Schlitt, Hans J.; Settmacher, Utz; Friess, Helmut; Büchler, Markus; Saeger, Hans-Detlev; Schroeder, Michael; Pilarsky, Christian; Grützmann, Robert

    2012-01-01

    Predicting the clinical outcome of cancer patients based on the expression of marker genes in their tumors has received increasing interest in the past decade. Accurate predictors of outcome and response to therapy could be used to personalize and thereby improve therapy. However, state of the art methods used so far often found marker genes with limited prediction accuracy, limited reproducibility, and unclear biological relevance. To address this problem, we developed a novel computational approach to identify genes prognostic for outcome that couples gene expression measurements from primary tumor samples with a network of known relationships between the genes. Our approach ranks genes according to their prognostic relevance using both expression and network information in a manner similar to Google's PageRank. We applied this method to gene expression profiles which we obtained from 30 patients with pancreatic cancer, and identified seven candidate marker genes prognostic for outcome. Compared to genes found with state of the art methods, such as Pearson correlation of gene expression with survival time, we improve the prediction accuracy by up to 7%. Accuracies were assessed using support vector machine classifiers and Monte Carlo cross-validation. We then validated the prognostic value of our seven candidate markers using immunohistochemistry on an independent set of 412 pancreatic cancer samples. Notably, signatures derived from our candidate markers were independently predictive of outcome and superior to established clinical prognostic factors such as grade, tumor size, and nodal status. As the amount of genomic data of individual tumors grows rapidly, our algorithm meets the need for powerful computational approaches that are key to exploit these data for personalized cancer therapies in clinical practice. PMID:22615549
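
    A toy sketch of the network-based ranking idea: run a personalized PageRank over a gene-interaction graph, with each gene's personalization weight derived from the strength of its expression-survival association. The graph, gene choices, and weights below are invented for illustration, not the paper's network or data.

        import networkx as nx

        G = nx.Graph()
        G.add_edges_from([("KRAS", "TP53"), ("TP53", "MDM2"),
                          ("KRAS", "EGFR"), ("EGFR", "MDM2")])

        # |correlation of expression with survival time| as personalization weights
        relevance = {"KRAS": 0.40, "TP53": 0.25, "EGFR": 0.10, "MDM2": 0.05}

        scores = nx.pagerank(G, alpha=0.85, personalization=relevance)
        for gene, s in sorted(scores.items(), key=lambda kv: -kv[1]):
            print(f"{gene}: {s:.3f}")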

  11. Improving Classification of Cancer and Mining Biomarkers from Gene Expression Profiles Using Hybrid Optimization Algorithms and Fuzzy Support Vector Machine

    PubMed Central

    Moteghaed, Niloofar Yousefi; Maghooli, Keivan; Garshasbi, Masoud

    2018-01-01

    Background: Gene expression data are characteristically high dimensional with a small sample size in contrast to the feature size and variability inherent in biological processes that contribute to difficulties in analysis. Selection of highly discriminative features decreases the computational cost and complexity of the classifier and improves its reliability for prediction of a new class of samples. Methods: The present study used hybrid particle swarm optimization and genetic algorithms for gene selection and a fuzzy support vector machine (SVM) as the classifier. Fuzzy logic is used to infer the importance of each sample in the training phase and to decrease the outlier sensitivity of the system, improving the classifier's ability to generalize. A decision-tree algorithm was applied to the most frequent genes to develop a set of rules for each type of cancer. This improved the algorithm by finding the best parameters for the classifier during the training phase without the need for trial-and-error by the user. The proposed approach was tested on four benchmark gene expression profiles. Results: Good results have been demonstrated for the proposed algorithm. The classification accuracy for leukemia data is 100%, for colon cancer is 96.67% and for breast cancer is 98%. The results show that the best kernel used in training the SVM classifier is the radial basis function. Conclusions: The experimental results show that the proposed algorithm can decrease the dimensionality of the dataset, determine the most informative gene subset, and improve classification accuracy using the optimal parameters of the classifier without user intervention. PMID:29535919

  12. Towards an Online Seizure Advisory System-An Adaptive Seizure Prediction Framework Using Active Learning Heuristics.

    PubMed

    Karuppiah Ramachandran, Vignesh Raja; Alblas, Huibert J; Le, Duc V; Meratnia, Nirvana

    2018-05-24

    In the last decade, seizure prediction systems have gained a lot of attention because of their enormous potential to greatly improve the quality of life of epileptic patients. The accuracy of prediction algorithms to detect seizures in real-world applications is largely limited because brain signals are inherently uncertain and affected by various factors, such as environment, age, and drug intake, in addition to the internal artefacts that occur during the process of recording the brain signals. To deal with such ambiguity, researchers traditionally use active learning, which selects the ambiguous data to be annotated by an expert and updates the classification model dynamically. However, selecting the particular data from a pool of large ambiguous datasets to be labelled by an expert is still a challenging problem. In this paper, we propose an active learning-based prediction framework that aims to improve the accuracy of the prediction with a minimum number of labelled data. The core technique of our framework is employing the Bernoulli-Gaussian Mixture model (BGMM) to determine the feature samples that have the most ambiguity to be annotated by an expert. By doing so, our approach facilitates expert intervention as well as increasing medical reliability. We evaluate seven different classifiers in terms of the classification time and memory required. An active learning framework built on top of the best performing classifier is evaluated in terms of the annotation effort required to achieve a high level of prediction accuracy. The results show that our approach can achieve the same accuracy as a Support Vector Machine (SVM) classifier using only 20% of the labelled data and also improve the prediction accuracy even under noisy conditions.
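
    A minimal sketch of the active-learning loop described above, with predictive entropy standing in for the paper's BGMM ambiguity criterion; the data, pool sizes, and query batch size are arbitrary.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(4)
        X = rng.normal(0, 1, (500, 10))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

        labelled = list(range(20))                 # initial labelled indices
        pool = list(range(20, 500))                # unlabelled pool

        for _ in range(5):
            clf = SVC(probability=True).fit(X[labelled], y[labelled])
            p = clf.predict_proba(X[pool])
            entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
            query = [pool[i] for i in np.argsort(entropy)[-10:]]   # 10 most ambiguous
            labelled += query                      # "oracle" supplies labels y[query]
            pool = [i for i in pool if i not in query]

        print("accuracy:", clf.score(X, y))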

  13. Correlation of chemical shifts predicted by molecular dynamics simulations for partially disordered proteins.

    PubMed

    Karp, Jerome M; Eryilmaz, Ertan; Cowburn, David

    2015-01-01

    There has been a longstanding interest in being able to accurately predict NMR chemical shifts from structural data. Recent studies have focused on using molecular dynamics (MD) simulation data as input for improved prediction. Here we examine the accuracy of chemical shift prediction for intein systems, which have regions of intrinsic disorder. We find that using MD simulation data as input for chemical shift prediction does not consistently improve prediction accuracy over use of a static X-ray crystal structure. This appears to result from the complex conformational ensemble of the disordered protein segments. We show that using accelerated molecular dynamics (aMD) simulations improves chemical shift prediction, suggesting that methods which better sample the conformational ensemble like aMD are more appropriate tools for use in chemical shift prediction for proteins with disordered regions. Moreover, our study suggests that data accurately reflecting protein dynamics must be used as input for chemical shift prediction in order to correctly predict chemical shifts in systems with disorder.

  14. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
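
    The paper's recommendation to report cross-validation rather than resubstitution accuracy is easy to demonstrate; the sketch below does so for a classification tree on synthetic presence/absence data (the predictors are invented stand-ins).

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        X = rng.normal(0, 1, (300, 8))            # e.g., climate/terrain predictors
        y = (X[:, 0] + rng.normal(0, 1, 300) > 0).astype(int)   # species presence

        tree = DecisionTreeClassifier(random_state=0).fit(X, y)
        print("resubstitution accuracy:", tree.score(X, y))          # optimistic
        print("10-fold CV accuracy:",
              cross_val_score(DecisionTreeClassifier(random_state=0),
                              X, y, cv=10).mean())                   # more honest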

  15. The SNAQ(RC), an easy traffic light system as a first step in the recognition of undernutrition in residential care.

    PubMed

    Kruizenga, H M; de Vet, H C W; Van Marissing, C M E; Stassen, E E P M; Strijk, J E; Van Bokhorst-de Van der Schueren, M A E; Horman, J C H; Schols, J M G A; Van Binsbergen, J J; Eliens, A; Knol, D L; Visser, M

    2010-02-01

    Development and validation of a quick and easy screening tool for the early detection of undernourished residents in nursing homes and residential homes. Multi-center, cross sectional observational study. Nursing homes and residential homes. The screening tool was developed in a total of 308 residents (development sample; sample A) and cross validated in a new sample of 720 residents (validation sample) consisting of 476 nursing home residents (sample B1) and 244 residential home residents (sample B2). Patients were defined as severely undernourished when they met at least one of the following criteria: BMI ≤20 kg/m2, ≥5% unintentional weight loss in the past month, and/or ≥10% unintentional weight loss in the past 6 months. Patients were defined as moderately undernourished if they met the following criteria: BMI 20.1-22 kg/m2 and/or 5-10% unintentional weight loss in the past six months. The most predictive questions (originally derived from previously developed screening instruments) for undernourishment were selected in sample A and cross validated in sample B. In a second stage, BMI was added to the SNAQRC in sample B. The diagnostic accuracy of the screening tool in the development and validation samples was expressed as sensitivity, specificity, and negative and positive predictive values. The four most predictive questions for undernutrition related to: unintentional weight loss of more than 6 kg during the past 6 months, more than 3 kg in the past month, the capability of eating and drinking with help, and decreased appetite during the past month. The diagnostic accuracy of these questions alone was insufficient (Se=45%, Sp=87%, PPV=50% and NPV=84%). However, combining the questions with measured BMI sufficiently improved the diagnostic accuracy (Se=87%, Sp=82%, PPV=59% and NPV=95%). Early detection of undernourished nursing- and residential home residents is possible using four screening questions and measured BMI.
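
    The accuracy metrics quoted above follow directly from a 2x2 confusion table; a worked example with illustrative counts (not the study's data):

        def diagnostics(tp, fp, fn, tn):
            """Sensitivity, specificity, PPV, and NPV from confusion-table counts."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "PPV": tp / (tp + fp),
                "NPV": tn / (tn + fn),
            }

        # e.g., 87 of 100 undernourished residents flagged, and 410 of 500
        # well-nourished residents correctly passed (illustrative counts)
        print(diagnostics(tp=87, fp=90, fn=13, tn=410))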

  16. Decimated Input Ensembles for Improved Generalization

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Oza, Nikunj C.; Norvig, Peter (Technical Monitor)

    1999-01-01

    Recently, many researchers have demonstrated that using classifier ensembles (e.g., averaging the outputs of multiple classifiers before reaching a classification decision) leads to improved performance for many difficult generalization problems. However, in many domains there are serious impediments to such "turnkey" classification accuracy improvements. Most notable among these is the deleterious effect of highly correlated classifiers on the ensemble performance. One particular solution to this problem is generating "new" training sets by sampling the original one. However, with a finite number of patterns, this causes a reduction in the training patterns each classifier sees, often resulting in considerably worsened generalization performance (particularly for high dimensional data domains) for each individual classifier. Generally, this drop in the accuracy of the individual classifier performance more than offsets any potential gains due to combining, unless diversity among classifiers is actively promoted. In this work, we introduce a method that: (1) reduces the correlation among the classifiers; (2) reduces the dimensionality of the data, thus lessening the impact of the 'curse of dimensionality'; and (3) improves the classification performance of the ensemble.
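
    A sketch of the feature-decimation idea, using random-subspace bagging as a stand-in for the paper's specific method: each ensemble member is trained on a different random subset of input features, which reduces both inter-classifier correlation and dimensionality.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import BaggingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=400, n_features=60,
                                   n_informative=10, random_state=0)

        ensemble = BaggingClassifier(
            LogisticRegression(max_iter=1000),
            n_estimators=25,
            max_features=0.3,      # each member sees 30% of the features
            bootstrap=False,       # decimate features, not training patterns
            random_state=0,
        )
        print(cross_val_score(ensemble, X, y, cv=5).mean())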

  17. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel ‘V-plot’ methodology to display accuracy values

    PubMed Central

    Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Background Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy and to propose a sample-independent methodology to calculate and display the accuracy of diagnostic tests. Methods and findings We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Cholrapid and Cholgold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). Conclusion No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard. PMID:29387424
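
    The dependence of accuracy on the underlying distribution is easy to reproduce in simulation: hold the numerical relationship between two hypothetical cholesterol methods fixed and move the value distribution relative to a diagnostic cut-off. The threshold and error model below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(6)
        CUTOFF = 5.0   # hypothetical diagnostic threshold (mmol/L)

        def accuracy(mean, sd, n=100000):
            chol_gold = rng.normal(mean, sd, n)
            chol_rapid = chol_gold + rng.normal(0, 0.3, n)   # fixed method error
            return np.mean((chol_rapid > CUTOFF) == (chol_gold > CUTOFF))

        print("values far from cut-off :", accuracy(mean=4.0, sd=1.5))
        print("values near the cut-off :", accuracy(mean=5.0, sd=0.5))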

  18. Dental Hygiene Students' Self-Assessment of Ergonomics Utilizing Photography.

    PubMed

    Partido, Brian B

    2017-10-01

    Due to postural demands, dental professionals are at high risk for developing work-related musculoskeletal disorders (WMSDs). Dental clinicians' lack of ergonomic awareness may impede the clinical application of recommendations to improve their posture. The aim of this study was to determine whether feedback involving photography and self-assessment would improve dental hygiene students' ergonomic scores and accuracy of their ergonomic self-assessments. The study involved a randomized control design and used a convenience sample of all 32 junior-year dental hygiene students enrolled in the autumn 2016 term in The Ohio State University baccalaureate dental hygiene program. Sixteen students were randomly assigned to each of two groups (control and training). At weeks one and four, all participants were photographed and completed ergonomic self-evaluations using the Modified-Dental Operator Posture Assessment Instrument (M-DOPAI). During weeks two and three, participants in the training group were photographed again and used those photographs to complete ergonomic self-assessments. All participants' pre-training and post-training photographs were given ergonomic scores by three raters. Students' self-assessments in the control group and faculty evaluations of the training group showed significant improvement in scores over time (F(1,60)=4.25, p<0.05). In addition, the accuracy of self-assessment significantly improved for students in the training group (F(1,30)=8.29, p<0.01). In this study, dental hygiene students' self-assessments using photographs resulted in improvements in their ergonomic scores and increased accuracy of their ergonomic self-assessments. Any improvement in ergonomic score or awareness can help reduce the risks for WMSDs, especially among dental clinicians.

  19. Comparison of the Effectiveness of Interactive Didactic Lecture Versus Online Simulation-Based CME Programs Directed at Improving the Diagnostic Capabilities of Primary Care Practitioners.

    PubMed

    McFadden, Pam; Crim, Andrew

    2016-01-01

    Diagnostic errors in primary care contribute to increased morbidity and mortality, and billions in costs each year. Improvements in the way practicing physicians are taught so as to optimally perform differential diagnosis can increase patient safety and lower the costs of care. This study represents a comparison of the effectiveness of two approaches to CME training directed at improving the primary care practitioner's diagnostic capabilities against seven common and important causes of joint pain. Using a convenience sampling methodology, one group of primary care practitioners was trained by a traditional live, expert-led, multimedia-based training activity supplemented with interactive practice opportunities and feedback (control group). The second group was trained online with a multimedia-based training activity supplemented with interactive practice opportunities and feedback delivered by an artificial intelligence-driven simulation/tutor (treatment group). Before their respective instructional intervention, there were no significant differences in the diagnostic performance of the two groups against a battery of case vignettes presenting with joint pain. Using the same battery of case vignettes to assess postintervention diagnostic performance, there was a slight but not statistically significant improvement in the control group's diagnostic accuracy (P = .13). The treatment group, however, demonstrated a significant improvement in accuracy (P < .02; Cohen d, effect size = 0.79). These data indicate that within the context of a CME activity, a significant improvement in diagnostic accuracy can be achieved by the use of a web-delivered, multimedia-based instructional activity supplemented by practice opportunities and feedback delivered by an artificial intelligence-driven simulation/tutor.

  20. Increasing the accuracy and scalability of the Immunofluorescence Assay for Epstein Barr Virus by inferring continuous titers from a single sample dilution.

    PubMed

    Goh, Sherry Meow Peng; Swaminathan, Muthukaruppan; Lai, Julian U-Ming; Anwar, Azlinda; Chan, Soh Ha; Cheong, Ian

    2017-01-01

    High Epstein Barr Virus (EBV) titers detected by the indirect Immunofluorescence Assay (IFA) are a reliable predictor of Nasopharyngeal Carcinoma (NPC). Despite being the gold standard for serological detection of NPC, the IFA is limited by scaling bottlenecks. Specifically, 5 serial dilutions of each patient sample must be prepared and visually matched by an evaluator to one of 5 discrete titers. Here, we describe a simple method for inferring continuous EBV titers from IFA images acquired from NPC-positive patient sera using only a single sample dilution. In the first part of our study, 2 blinded evaluators used a set of reference titer standards to perform independent re-evaluations of historical samples with known titers. Besides exhibiting high inter-evaluator agreement, both evaluators were also in high concordance with historical titers, thus validating the accuracy of the reference titer standards. In the second part of the study, the reference titer standards were IFA-processed and assigned an 'EBV Score' using image analysis. A log-linear relationship between titers and EBV Score was observed. This relationship was preserved even when images were acquired and analyzed 3 days post-IFA. We conclude that image analysis of IFA-processed samples can be used to infer a continuous EBV titer with just a single dilution of NPC-positive patient sera. This work opens new possibilities for improving the accuracy and scalability of IFA in the context of clinical screening. Copyright © 2016. Published by Elsevier B.V.
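
    A sketch of the single-dilution inference step, assuming the log-linear titer-score relationship reported above; the reference titers and scores below are made-up calibration points, not the study's measurements.

        import numpy as np

        ref_titers = np.array([40, 160, 640, 2560, 10240])   # discrete IFA titers
        ref_scores = np.array([1.1, 2.0, 2.9, 4.1, 5.0])     # measured 'EBV Scores'

        # Fit log(titer) = a * score + b, then invert for a new sample
        a, b = np.polyfit(ref_scores, np.log(ref_titers), 1)

        def titer_from_score(score):
            return float(np.exp(a * score + b))

        print(round(titer_from_score(3.5)))   # continuous titer between 640 and 2560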

  1. Laboratory evaluation of a field-portable sealed source X-ray fluorescence spectrometer for determination of metals in air filter samples.

    PubMed

    Lawryk, Nicholas J; Feng, H Amy; Chen, Bean T

    2009-07-01

    Recent advances in field-portable X-ray fluorescence (FP XRF) spectrometer technology have made it a potentially valuable screening tool for the industrial hygienist to estimate worker exposures to airborne metals. Although recent studies have shown that FP XRF technology may be better suited for qualitative or semiquantitative analysis of airborne lead in the workplace, these studies have not extensively addressed its ability to measure other elements. This study involved a laboratory-based evaluation of a representative model FP XRF spectrometer to measure elements commonly encountered in workplace settings that may be collected on air sample filter media, including chromium, copper, iron, manganese, nickel, lead, and zinc. The evaluation included assessments of (1) response intensity with respect to location on the probe window, (2) limits of detection for five different filter media, (3) limits of detection as a function of analysis time, and (4) bias, precision, and accuracy estimates. Teflon, polyvinyl chloride, polypropylene, and mixed cellulose ester filter media all had similarly low limits of detection for the set of elements examined. Limits of detection, bias, and precision generally improved with increasing analysis time. Bias, precision, and accuracy estimates generally improved with increasing element concentration. Accuracy estimates met the National Institute for Occupational Safety and Health criterion for nearly all the element and concentration combinations. Based on these results, FP XRF spectrometry shows potential to be useful in the assessment of worker inhalation exposures to other metals in addition to lead.

  2. Can verbal working memory training improve reading?

    PubMed

    Banales, Erin; Kohnen, Saskia; McArthur, Genevieve

    2015-01-01

    The aim of the current study was to determine whether poor verbal working memory is associated with poor word reading accuracy because the former causes the latter, or the latter causes the former. To this end, we tested whether (a) verbal working memory training improves poor verbal working memory or poor word reading accuracy, and whether (b) reading training improves poor reading accuracy or verbal working memory in a case series of four children with poor word reading accuracy and verbal working memory. Each child completed 8 weeks of verbal working memory training and 8 weeks of reading training. Verbal working memory training improved verbal working memory in two of the four children, but did not improve their reading accuracy. Similarly, reading training improved word reading accuracy in all children, but did not improve their verbal working memory. These results suggest that the causal links between verbal working memory and reading accuracy may not be as direct as has been assumed.

  3. Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.

    PubMed

    An, Yan; Zou, Zhihong; Zhao, Yanfei

    2015-03-01

    An optimized nonlinear grey Bernoulli model was proposed by using a particle swarm optimization algorithm to solve the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence was set in turn as an initial condition to determine which alternative would yield the highest forecasting accuracy. To test the forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations in the Guanting reservoir inlet and outlet (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy, and the particle swarm optimization technique is a good tool to solve parameter optimization problems. Moreover, an optimized model whose initial condition performs well in in-sample simulation may not do as well in out-of-sample forecasting. Copyright © 2015. Published by Elsevier B.V.
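
    For reference, the sketch below implements the classical GM(1,1) model, the special case of NGBM(1,1) with the Bernoulli power term switched off; the PSO parameter search and the initial-condition selection described above are omitted, and the readings are hypothetical.

        import numpy as np

        def gm11_forecast(x0, steps=1):
            """Classical GM(1,1) forecast of the next `steps` values of series x0."""
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                          # accumulated generating sequence
            z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
            B = np.column_stack([-z1, np.ones(len(z1))])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(1, len(x0) + steps)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))
            return x0_hat[-steps:]

        do_series = [8.1, 8.4, 8.2, 8.6, 8.9, 9.1]      # hypothetical DO readings (mg/L)
        print(gm11_forecast(do_series, steps=2))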

  4. Clinical validation of the NANDA-I diagnosis of impaired memory in elderly patients.

    PubMed

    Montoril, Michelle H; Lopes, Marcos Venícios O; Santana, Rosimere F; Sousa, Vanessa Emille C; Carvalho, Priscilla Magalhães O; Diniz, Camila M; Alves, Naiana P; Ferreira, Gabriele L; Fróes, Nathaly Bianka M; Menezes, Angélica P

    2016-05-01

    The aim of this study was to perform a clinical validation of the defining characteristics of impaired memory (IM) in elderly patients at a long-term care institution. A sample of 123 elderly patients was evaluated with a questionnaire designed to identify IM according to the NANDA-I taxonomy. Accuracy measures were calculated for the total sample and for males and females separately. Sensitivity and specificity values indicated that: (1) inability to learn new skills is useful in screening IM, and (2) forgets to perform a behavior at a scheduled time, forgetfulness, inability to learn new information, inability to recall events, and inability to recall factual information are confirmatory indicators. Specific factors can affect the manifestation of IM by elderly patients. The results may be useful in improving diagnostic accuracy and efficiency of the IM nursing diagnosis. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. lop-DWI: A Novel Scheme for Pre-Processing of Diffusion-Weighted Images in the Gradient Direction Domain.

    PubMed

    Sepehrband, Farshid; Choupan, Jeiran; Caruyer, Emmanuel; Kurniawan, Nyoman D; Gal, Yaniv; Tieng, Quang M; McMahon, Katie L; Vegh, Viktor; Reutens, David C; Yang, Zhengyi

    2014-01-01

    We describe and evaluate a pre-processing method based on a periodic spiral sampling of diffusion-gradient directions for high angular resolution diffusion magnetic resonance imaging. Our pre-processing method incorporates prior knowledge about the acquired diffusion-weighted signal, facilitating noise reduction. Periodic spiral sampling of gradient direction encodings results in an acquired signal in each voxel that is pseudo-periodic with characteristics that allow separation of low-frequency signal from high frequency noise. Consequently, it enhances local reconstruction of the orientation distribution function used to define fiber tracks in the brain. Denoising with periodic spiral sampling was tested using synthetic data and in vivo human brain images. The level of improvement in signal-to-noise ratio and in the accuracy of local reconstruction of fiber tracks was significantly improved using our method.
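
    The separation step can be illustrated with a simple frequency-domain low-pass filter applied to a pseudo-periodic signal; this is a generic stand-in for the paper's spiral-reordering pre-processing, with an arbitrary cut-off.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 128
        t = np.arange(n)
        signal = np.sin(2 * np.pi * 3 * t / n)          # slow pseudo-periodic component
        noisy = signal + rng.normal(0, 0.4, n)          # high-frequency noise added

        spectrum = np.fft.rfft(noisy)
        spectrum[10:] = 0                               # keep only low frequencies
        denoised = np.fft.irfft(spectrum, n)

        print("noise RMS before:", np.sqrt(np.mean((noisy - signal) ** 2)).round(3))
        print("noise RMS after :", np.sqrt(np.mean((denoised - signal) ** 2)).round(3))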

  6. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Walsh, Jonathan A.

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.

  7. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE PAGES

    Romano, Paul K.; Walsh, Jonathan A.

    2018-02-03

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.
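
    The classical constant-cross-section free-gas scheme that this work improves on can be sketched as follows: draw a target velocity from a Maxwellian, then accept with probability v_rel/(v_n + v_t), which never exceeds one. The energy dependence of the cross section, the subject of the paper, is deliberately not modeled here, and all units are arbitrary.

        import numpy as np

        rng = np.random.default_rng(7)

        def sample_target_speed(v_n, kT_over_m, n_tries=10000):
            """Return one accepted (target speed, cosine) pair for neutron speed v_n."""
            for _ in range(n_tries):
                # Maxwellian target speed from three normal velocity components
                v_t = np.linalg.norm(rng.normal(0, np.sqrt(kT_over_m), 3))
                mu = rng.uniform(-1, 1)                  # isotropic direction cosine
                v_rel = np.sqrt(v_n**2 + v_t**2 - 2 * v_n * v_t * mu)
                if rng.random() < v_rel / (v_n + v_t):   # rejection step (prob <= 1)
                    return v_t, mu
            raise RuntimeError("acceptance failed")

        print(sample_target_speed(v_n=2200.0, kT_over_m=1.0e5))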

  8. An improved reference measurement procedure for triglycerides and total glycerides in human serum by isotope dilution gas chromatography-mass spectrometry.

    PubMed

    Chen, Yizhao; Liu, Qinde; Yong, Sharon; Teo, Hui Ling; Lee, Tong Kooi

    2014-01-20

    Triglycerides are widely tested in clinical laboratories using enzymatic methods for lipid profiling. As enzymatic methods can be affected by interferences from biological samples, this, together with the non-specific nature of triglyceride measurement, makes it necessary to verify the accuracy of the test results with a reference measurement procedure. Several such measurement procedures have been published, but they generally involve lengthy and laborious sample preparation steps. In this paper, an improved reference measurement procedure for triglycerides and total glycerides is reported that simplifies the sample preparation steps and greatly shortens the time required. The procedure was based on isotope dilution gas chromatography-mass spectrometry (IDGC-MS) with tripalmitin as the calibration standard. Serum samples were first spiked with isotope-labeled tripalmitin. For the measurement of triglycerides, the serum samples were subjected to lipid extraction followed by separation of triglycerides from diglycerides and monoglycerides. Triglycerides were then hydrolyzed to glycerol, derivatized and injected into the GC-MS for quantification. For the measurement of total glycerides, the serum samples were hydrolyzed directly and derivatized before injection into the GC-MS for quantification. All measurement results showed good precision with CV <1%. A certified reference material (CRM) of lipids in frozen human serum was used to verify the accuracy of the measurement. The obtained values for both triglycerides and total glycerides were well within the certified ranges of the CRM, with deviation <0.4% from the certified values. The relative expanded uncertainties were also comparable with the uncertainties associated with the certified values of the CRM. The validated procedure was used in an External Quality Assessment (EQA) Program organized by our laboratory to establish the assigned values for triglycerides and total glycerides.

  9. Development and validation of a liquid chromatography tandem mass spectrometry assay for the quantitation of a protein therapeutic in cynomolgus monkey serum.

    PubMed

    Zhao, Yue; Liu, Guowen; Angeles, Aida; Hamuro, Lora L; Trouba, Kevin J; Wang, Bonnie; Pillutla, Renuka C; DeSilva, Binodh S; Arnold, Mark E; Shen, Jim X

    2015-04-15

    We have developed and fully validated a fast and simple LC-MS/MS assay to quantitate a therapeutic protein BMS-A in cynomolgus monkey serum. Prior to trypsin digestion, a recently reported sample pretreatment method was applied to remove more than 95% of the total serum albumin and denature the proteins in the serum sample. The pretreatment procedure simplified the biological sample prior to digestion, improved digestion efficiency and reproducibility, and did not require reduction and alkylation. The denatured proteins were then digested with trypsin at 60 °C for 30 min and the tryptic peptides were chromatographically separated on an Acquity CSH column (2.1 mm × 50 mm, 1.7 μm) using gradient elution. One surrogate peptide was used for quantitation and another surrogate peptide was selected for confirmation. Two corresponding stable isotope labeled peptides were used to compensate variations during LC-MS detection. The linear analytical range of the assay was 0.50-500 μg/mL. The accuracy (%Dev) was within ± 5.4% and the total assay variation (%CV) was less than 12.0% for sample analysis. The validated method demonstrated good accuracy and precision and the application of the innovative albumin removal sample pretreatment method improved both assay sensitivity and robustness. The assay has been applied to a cynomolgus monkey toxicology study and the serum sample concentration data were in good agreement with data generated using a quantitative ligand-binding assay (LBA). The use of a confirmatory peptide, in addition to the quantitation peptide, ensured the integrity of the drug concentrations measured by the method. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. An Energy Efficient Adaptive Sampling Algorithm in a Sensor Network for Automated Water Quality Monitoring.

    PubMed

    Shu, Tongxin; Xia, Min; Chen, Jiahong; Silva, Clarence de

    2017-11-05

    Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving the power efficiency while ensuring the accuracy of sampled data. The developed algorithm is evaluated using two distinct key parameters, dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Compared with a traditional adaptive sampling algorithm (ASA) on the same dataset, while achieving approximately the same Normalized Mean Error (NME), DDASA saves 5.31% more battery energy.

  11. An Energy Efficient Adaptive Sampling Algorithm in a Sensor Network for Automated Water Quality Monitoring

    PubMed Central

    Shu, Tongxin; Xia, Min; Chen, Jiahong; de Silva, Clarence

    2017-01-01

    Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving the power efficiency while ensuring the accuracy of sampled data. The developed algorithm is evaluated using two distinct key parameters, dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Compared with a traditional adaptive sampling algorithm (ASA) on the same dataset, while achieving approximately the same Normalized Mean Error (NME), DDASA saves 5.31% more battery energy. PMID:29113087
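
    The published DDASA is not reproduced in the abstract, but the general idea of data-driven adaptive sampling can be sketched as below: lengthen the sampling interval while consecutive readings are stable and shorten it when the signal changes quickly. The thresholds, bounds and doubling rule are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative data-driven adaptive sampling loop (not the published DDASA):
# the interval stretches when the sensed value is stable and shrinks when it
# changes rapidly, trading sampling density for battery energy.
def next_interval(prev_value, value, interval, lo=60.0, hi=3600.0, tol=0.05):
    change = abs(value - prev_value) / max(abs(prev_value), 1e-9)
    if change < tol:          # stable signal: sample less often
        interval = min(interval * 2.0, hi)
    else:                     # fast change: sample more often
        interval = max(interval / 2.0, lo)
    return interval

readings = [8.0, 8.05, 8.04, 7.2, 6.1, 6.05]   # e.g. dissolved oxygen, mg/L
interval = 300.0
for prev, cur in zip(readings, readings[1:]):
    interval = next_interval(prev, cur, interval)
    print(f"value={cur:.2f}  next sample in {interval:.0f}s")
```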

  12. Accuracy of genomic breeding values for meat tenderness in Polled Nellore cattle.

    PubMed

    Magnabosco, C U; Lopes, F B; Fragoso, R C; Eifert, E C; Valente, B D; Rosa, G J M; Sainz, R D

    2016-07-01

    Zebu (Bos indicus) cattle, mostly of the Nellore breed, comprise more than 80% of the beef cattle in Brazil, given their tolerance of the tropical climate and high resistance to ectoparasites. Despite their advantages for production in tropical environments, zebu cattle tend to produce tougher meat than Bos taurus breeds. Traditional genetic selection to improve meat tenderness is constrained by the difficulty and cost of phenotypic evaluation for meat quality. Therefore, genomic selection may be the best strategy to improve meat quality traits. This study was performed to compare the accuracies of different Bayesian regression models in predicting molecular breeding values for meat tenderness in Polled Nellore cattle. The data set was composed of Warner-Bratzler shear force (WBSF) of longissimus muscle from 205, 141, and 81 animals slaughtered in 2005, 2010, and 2012, respectively, which were selected and mated so as to create extreme segregation for WBSF. The animals were genotyped with either the Illumina BovineHD (HD; 777,000 SNP, 90 samples) chip or the GeneSeek Genomic Profiler (GGP Indicus HD; 77,000 SNP, 337 samples). The SNP quality control criteria were Hardy-Weinberg proportion P-value ≥ 0.1%, minor allele frequency > 1%, and call rate > 90%. The FImpute program was used for imputation from the GGP Indicus HD chip to the HD chip. The effect of each SNP was estimated using ridge regression, least absolute shrinkage and selection operator (LASSO), Bayes A, Bayes B, and Bayes Cπ methods. Different numbers of SNP were used, with 1, 2, 3, 4, 5, 7, 10, 20, 40, 60, 80, or 100% of the markers preselected based on their significance test (P-value from genomewide association studies [GWAS]) or randomly sampled. The prediction accuracy was assessed by the correlation between genomic breeding value and the observed WBSF phenotype, using a leave-one-out cross-validation methodology. The prediction accuracies using all markers were very similar for all models, ranging from 0.22 (Bayes Cπ) to 0.25 (Bayes B). When preselecting SNP based on GWAS results, the highest correlation (0.27) between WBSF and the genomic breeding value was achieved using the Bayesian LASSO model with 15,030 (3%) markers. Although this study used relatively few animals, the design of the segregating population ensured wide genetic variability for meat tenderness, which was important to achieve acceptable accuracy of genomic prediction. Although all models showed similar levels of prediction accuracy, some small advantages were observed with the Bayes B approach when higher numbers of markers were preselected based on their P-values resulting from a GWAS analysis.
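
    For readers unfamiliar with the workflow, the sketch below shows the ridge-regression flavour of whole-genome prediction with leave-one-out cross-validation on simulated genotypes. It is a generic illustration, not the authors' Bayesian implementation, and all data and parameter values are made up.

```python
# Sketch of SNP-based genomic prediction with ridge regression and
# leave-one-out cross-validation; data are simulated, not the WBSF set.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n, p = 120, 1000                                     # animals, markers
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)  # 0/1/2 genotypes
beta = np.zeros(p); beta[:20] = rng.normal(0, 0.5, 20)  # 20 causal SNPs
y = X @ beta + rng.normal(0, 1.0, n)                 # phenotype, e.g. shear force

preds = np.empty(n)
for train, test in LeaveOneOut().split(X):
    model = Ridge(alpha=100.0).fit(X[train], y[train])
    preds[test] = model.predict(X[test])

print("prediction accuracy r =", np.corrcoef(preds, y)[0, 1].round(2))
```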

  13. Evaluation of sampling frequency, window size and sensor position for classification of sheep behaviour.

    PubMed

    Walton, Emily; Casey, Christy; Mitsch, Jurgen; Vázquez-Diosdado, Jorge A; Yan, Juan; Dottorini, Tania; Ellis, Keith A; Winterlich, Anthony; Kaler, Jasmeet

    2018-02-01

    Automated behavioural classification and identification through sensors has the potential to improve the health and welfare of animals. The position of a sensor, the sampling frequency and the window size of the segmented signal data have a major impact on classification accuracy in activity recognition and on the energy needs of the sensor, yet no studies in precision livestock farming have evaluated the effect of all these factors simultaneously. The aim of this study was to evaluate the effects of position (ear and collar), sampling frequency (8, 16 and 32 Hz) of a triaxial accelerometer and gyroscope sensor, and window size (3, 5 and 7 s) on the classification of important behaviours in sheep such as lying, standing and walking. Behaviours were classified using a random forest approach with 44 feature characteristics. The best performance for walking, standing and lying classification in sheep (accuracy 95%, F-score 91%-97%) was obtained using combinations of 32 Hz with 7 s and 32 Hz with 5 s for both ear and collar sensors, although results obtained with 16 Hz and a 7 s window were comparable, with accuracy of 91%-93% and F-score of 88%-95%. Energy efficiency was best at a 7 s window. This suggests that sampling at 16 Hz with a 7 s window will offer benefits in a real-time behavioural monitoring system for sheep due to reduced energy needs.

  14. Evaluation of sampling frequency, window size and sensor position for classification of sheep behaviour

    PubMed Central

    Walton, Emily; Casey, Christy; Mitsch, Jurgen; Vázquez-Diosdado, Jorge A.; Yan, Juan; Dottorini, Tania; Ellis, Keith A.; Winterlich, Anthony

    2018-01-01

    Automated behavioural classification and identification through sensors has the potential to improve the health and welfare of animals. The position of a sensor, the sampling frequency and the window size of the segmented signal data have a major impact on classification accuracy in activity recognition and on the energy needs of the sensor, yet no studies in precision livestock farming have evaluated the effect of all these factors simultaneously. The aim of this study was to evaluate the effects of position (ear and collar), sampling frequency (8, 16 and 32 Hz) of a triaxial accelerometer and gyroscope sensor, and window size (3, 5 and 7 s) on the classification of important behaviours in sheep such as lying, standing and walking. Behaviours were classified using a random forest approach with 44 feature characteristics. The best performance for walking, standing and lying classification in sheep (accuracy 95%, F-score 91%–97%) was obtained using combinations of 32 Hz with 7 s and 32 Hz with 5 s for both ear and collar sensors, although results obtained with 16 Hz and a 7 s window were comparable, with accuracy of 91%–93% and F-score of 88%–95%. Energy efficiency was best at a 7 s window. This suggests that sampling at 16 Hz with a 7 s window will offer benefits in a real-time behavioural monitoring system for sheep due to reduced energy needs. PMID:29515862
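
    A minimal sketch of the window-based classification pipeline is given below: segment the accelerometer signal into fixed windows, compute summary features per axis, and train a random forest. The simulated signals, the three features per axis and all parameter values are illustrative assumptions; the study itself used 44 feature characteristics.

```python
# Sketch of window-based behaviour classification from accelerometer data:
# segment the signal into fixed windows, compute summary features per axis,
# and classify with a random forest. Signals and labels are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs, win = 16, 7                      # 16 Hz sampling, 7 s windows

def make_window(kind):
    t = np.arange(fs * win) / fs
    base = {"lying": 0.05, "standing": 0.15, "walking": 0.8}[kind]
    return base * np.sin(2 * np.pi * 2 * t)[:, None] + rng.normal(0, 0.05, (fs * win, 3))

def features(w):                     # simple per-axis summary statistics
    return np.concatenate([w.mean(0), w.std(0), np.abs(np.diff(w, axis=0)).mean(0)])

kinds = ["lying", "standing", "walking"]
X = np.array([features(make_window(k)) for k in kinds for _ in range(100)])
y = np.repeat(kinds, 100)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(rf, X, y, cv=5).mean().round(3))
```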

  15. Energy-dispersive X-ray fluorescence systems as analytical tool for assessment of contaminated soils.

    PubMed

    Vanhoof, Chris; Corthouts, Valère; Tirez, Kristof

    2004-04-01

    To determine the heavy metal content of soil samples at contaminated locations, a static and time-consuming procedure is used in most cases: soil samples are collected and analyzed in the laboratory at high quality and high analytical cost. Demand is growing from government and consultants for a more dynamic approach, and from customers for analyses performed in the field with immediate feedback of the analytical results. Field analyses are advisable especially during the follow-up of remediation projects or when determining the sampling strategy. For this purpose, four types of ED-XRF systems, ranging from portable up to high-performance laboratory systems, were evaluated. The evaluation criteria were based on the performance characteristics of the ED-XRF systems, such as limit of detection, accuracy and measurement uncertainty on the one hand, and the influence of sample pretreatment on the obtained results on the other. The study proved that the field-portable system and the bench-top system, placed in a mobile van, can be applied as field techniques, yielding semi-quantitative analytical results. A limited homogenization of the analyzed sample significantly increases the representativeness of the soil sample. The ED-XRF systems can be differentiated by their limits of detection, which are a factor of 10 to 20 higher for the portable system. The accuracy of the results and the measurement uncertainty also improved using the bench-top system. Therefore, the selection criteria for the applicability of both field systems are based on the required detection level as well as the required accuracy of the results.

  16. Spectroscopic characterization of galaxy clusters in RCS-1: spectroscopic confirmation, redshift accuracy, and dynamical mass-richness relation

    NASA Astrophysics Data System (ADS)

    Gilbank, David G.; Barrientos, L. Felipe; Ellingson, Erica; Blindert, Kris; Yee, H. K. C.; Anguita, T.; Gladders, M. D.; Hall, P. B.; Hertling, G.; Infante, L.; Yan, R.; Carrasco, M.; Garcia-Vergara, Cristina; Dawson, K. S.; Lidman, C.; Morokuma, T.

    2018-05-01

    We present follow-up spectroscopic observations of galaxy clusters from the first Red-sequence Cluster Survey (RCS-1). This work focuses on two samples: a lower redshift sample of ˜30 clusters ranging in redshift from z ˜ 0.2-0.6, observed with multiobject spectroscopy (MOS) on 4-6.5-m class telescopes, and a z ˜ 1 sample of ˜10 clusters observed with 8-m class telescopes. We examine the detection efficiency and redshift accuracy of the now widely used red-sequence technique for selecting clusters via overdensities of red-sequence galaxies. Using both these data and extended samples including previously published RCS-1 spectroscopy and spectroscopic redshifts from SDSS, we find that the red-sequence redshift using simple two-filter cluster photometric redshifts is accurate to σz ≈ 0.035(1 + z) in RCS-1. This accuracy can potentially be improved with better survey photometric calibration. For the lower redshift sample, ˜5 per cent of clusters show some (minor) contamination from secondary systems with the same red-sequence intruding into the measurement aperture of the original cluster. At z ˜ 1, the rate rises to ˜20 per cent. Approximately ten per cent of projections are expected to be serious, where the two components contribute significant numbers of their red-sequence galaxies to another cluster. Finally, we present a preliminary study of the mass-richness calibration using velocity dispersions to probe the dynamical masses of the clusters. We find a relation broadly consistent with that seen in the local universe from the WINGS sample at z ˜ 0.05.

  17. Combined GPS/GLONASS Precise Point Positioning with Fixed GPS Ambiguities

    PubMed Central

    Pan, Lin; Cai, Changsheng; Santerre, Rock; Zhu, Jianjun

    2014-01-01

    Precise point positioning (PPP) technology is mostly implemented with an ambiguity-float solution. Its performance may be further improved by performing ambiguity-fixed resolution. Currently, PPP integer ambiguity resolution (IAR) is mainly based on GPS-only measurements. The integration of GPS and GLONASS can speed up convergence and increase the accuracy of float ambiguity estimates, which contributes to enhancing the success rate and reliability of fixing ambiguities. This paper presents an approach to combined GPS/GLONASS PPP with fixed GPS ambiguities (GGPPP-FGA), in which GPS ambiguities are fixed to integers while all GLONASS ambiguities are kept as float values. An improved minimum constellation method (MCM) is proposed to enhance the efficiency of GPS ambiguity fixing. Datasets from 20 globally distributed stations on two consecutive days are employed to investigate the performance of the GGPPP-FGA, including positioning accuracy, convergence time and time to first fix (TTFF). All datasets are processed for a time span of three hours in three scenarios: the GPS ambiguity-float solution, the GPS ambiguity-fixed resolution and the GGPPP-FGA resolution. The results indicate that the performance of the GPS ambiguity-fixed resolutions is significantly better than that of the GPS ambiguity-float solutions. In addition, the GGPPP-FGA improves the positioning accuracy by 38%, 25% and 44% and reduces the convergence time by 36%, 36% and 29% in the east, north and up coordinate components, respectively, over the GPS-only ambiguity-fixed resolutions. Moreover, the TTFF is reduced by 27% after adding GLONASS observations. Wilcoxon rank sum tests and chi-square two-sample tests were performed to examine the significance of the improvements in positioning accuracy, convergence time and TTFF. PMID:25237901

  18. Improving the Accuracy of Mapping Urban Vegetation Carbon Density by Combining Shadow Removal, Spectral Unmixing Analysis and Spatial Modeling

    NASA Astrophysics Data System (ADS)

    Qie, G.; Wang, G.; Wang, M.

    2016-12-01

    Mixed pixels and shadows cast by buildings impede accurate estimation and mapping of urban vegetation carbon density. In most previous studies these factors are ignored, resulting in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow removal analysis (LSRA) to remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and applied and compared the integrated models on shadow-removed images to map vegetation carbon density. The methodology was examined in Shenzhen City in Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables that contributed statistically significantly to improving the fit of the models to the data and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from LSUA was then added to the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that the integrated models achieved higher accuracies than traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
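
    The LSUA step can be illustrated with a small non-negative least squares unmixing example: each pixel spectrum is modelled as a mixture of endmember spectra, and the vegetation fraction is the fitted weight. The endmember spectra below are invented for illustration; the study's actual endmembers and band set are not reproduced here.

```python
# Sketch of linear spectral unmixing: a pixel spectrum is fitted as a
# non-negative combination of endmember spectra (vegetation, soil,
# impervious surface); the vegetation fraction is the fitted weight.
import numpy as np
from scipy.optimize import nnls

endmembers = np.array([                      # rows: vegetation, soil, impervious
    [0.03, 0.05, 0.04, 0.45, 0.25, 0.15],
    [0.10, 0.14, 0.18, 0.25, 0.30, 0.28],
    [0.20, 0.22, 0.23, 0.24, 0.25, 0.26],
]).T                                         # shape (bands, endmembers)

pixel = 0.6 * endmembers[:, 0] + 0.4 * endmembers[:, 1]  # synthetic mixed pixel
fractions, resid = nnls(endmembers, pixel)
fractions /= fractions.sum()                 # normalise to sum-to-one fractions
print("vegetation fraction ~", fractions[0].round(2))
```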

  19. Forecasting daily patient volumes in the emergency department.

    PubMed

    Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L

    2008-02-01

    Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
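
    The benchmark approach the study endorses, multiple linear regression on calendar variables, can be sketched in a few lines. The simulated arrival series, the dummy coding and the 30-day holdout below are illustrative assumptions, not the study's data or exact specification.

```python
# Sketch of calendar-variable regression for daily ED arrivals: fit a linear
# model on day-of-week and month dummies, then forecast a held-out month.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
days = pd.date_range("2005-01-01", "2007-03-31", freq="D")
weekly = np.array([110, 95, 92, 90, 93, 100, 120])      # Mon..Sun pattern
y = weekly[days.dayofweek] + 5 * np.sin(2 * np.pi * days.dayofyear / 365) \
    + rng.normal(0, 8, len(days))

X = pd.get_dummies(pd.DataFrame({"dow": days.dayofweek, "month": days.month}),
                   columns=["dow", "month"], drop_first=True)
model = LinearRegression().fit(X[:-30], y[:-30])        # hold out last 30 days
mae = np.abs(model.predict(X[-30:]) - y[-30:]).mean()
print(f"30-day-ahead MAE: {mae:.1f} patients/day")
```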

  20. [Purity Detection Model Update of Maize Seeds Based on Active Learning].

    PubMed

    Tang, Jin-ya; Huang, Min; Zhu, Qi-bing

    2015-08-01

    Seed purity reflects the degree to which seed varieties show typical consistent characteristics, so it is of great importance to improve the reliability and accuracy of seed purity detection to guarantee seed quality. Hyperspectral imaging can reflect the internal and external characteristics of seeds at the same time, and has been widely used in nondestructive detection of agricultural products. The essence of nondestructive detection of agricultural products using hyperspectral imaging is to establish a mathematical model between the spectral information and the quality of the products. Since the spectral information is easily affected by the sample growth environment, the stability and generalization of a model weaken when the test samples are harvested from a different origin or year. An active learning algorithm was investigated to add representative samples to expand the sample space of the original model, so as to implement rapid updating of the model. Random selection (RS) and the Kennard-Stone algorithm (KS) were performed to compare their model-update effect with that of the active learning algorithm. The experimental results indicated that, for different divisions of the sample set (1:1, 3:1, 4:1), the updated purity detection model for maize seeds from 2010, to which 40 samples selected by the active learning algorithm from 2011 were added, increased the prediction accuracy for new 2011 samples from 47%, 33.75% and 49% to 98.89%, 98.33% and 98.33%. For the updated purity detection model of 2011, its prediction accuracy for new 2010 samples increased by 50.83%, 54.58% and 53.75%, to 94.57%, 94.02% and 94.57%, after adding 56 new samples from 2010. Meanwhile, the model updated by the active learning algorithm outperformed those updated by RS and KS. Therefore, updating the maize seed purity detection model by active learning is feasible.
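
    A generic sketch of this kind of model update is given below, using uncertainty sampling, one common active-learning criterion, to pick new-season samples for labeling. The paper's own selection rule may differ, and the data and sample counts are simulated.

```python
# Sketch of an active-learning model update: pick the new-season samples on
# which the current classifier is least confident, add them (with labels) to
# the training set, and refit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X_old = rng.normal(0, 1, (200, 20)); y_old = (X_old[:, 0] > 0).astype(int)
X_new = rng.normal(0.8, 1, (150, 20)); y_new = (X_new[:, 0] > 0.8).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X_old, y_old)
proba = clf.predict_proba(X_new).max(axis=1)       # confidence on new samples
query = np.argsort(proba)[:40]                     # 40 least-confident samples

X_up = np.vstack([X_old, X_new[query]])
y_up = np.concatenate([y_old, y_new[query]])
clf_up = LogisticRegression(max_iter=1000).fit(X_up, y_up)
rest = np.setdiff1d(np.arange(len(X_new)), query)
print("before:", clf.score(X_new[rest], y_new[rest]).round(2),
      "after:", clf_up.score(X_new[rest], y_new[rest]).round(2))
```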

  1. Improvement of AOAC Official Method 984.27 for the determination of nine nutritional elements in food products by inductively coupled plasma-atomic emission spectroscopy after microwave digestion: single-laboratory validation and ring trial.

    PubMed

    Poitevin, Eric; Nicolas, Marine; Graveleau, Laetitia; Richoz, Janique; Andrey, Daniel; Monard, Florence

    2009-01-01

    A single-laboratory validation (SLV) and a ring trial (RT) were undertaken to determine nine nutritional elements in food products by inductively coupled plasma-atomic emission spectroscopy in order to improve and update AOAC Official Method 984.27. The improvements involved optimized microwave digestion, selected analytical lines, internal standardization, and ion buffering. Simultaneous determination of nine elements (calcium, copper, iron, potassium, magnesium, manganese, sodium, phosphorus, and zinc) was made in food products. Sample digestion was performed through wet digestion of food samples by microwave technology with either closed or open vessel systems. Validation was performed to characterize the method for selectivity, sensitivity, linearity, accuracy, precision, recovery, ruggedness, and uncertainty. The robustness and efficiency of this method were demonstrated through a successful internal RT using experienced food industry laboratories. Performance characteristics are reported for 13 certified and in-house reference materials, populating the AOAC triangle food sectors, which fulfilled AOAC criteria and recommendations for accuracy (trueness, recovery, and z-scores) and precision (repeatability and reproducibility RSD and HorRat values) regarding the SLV and RT. This multielemental method is cost-efficient, time-saving, accurate, and fit-for-purpose according to the ISO 17025 Norm and AOAC acceptability criteria, and is proposed as an improved version of AOAC Official Method 984.27 for fortified food products, including infant formula.

  2. SNV-PPILP: refined SNV calling for tumor data using perfect phylogenies and ILP.

    PubMed

    van Rens, Karen E; Mäkinen, Veli; Tomescu, Alexandru I

    2015-04-01

    Recent studies sequenced tumor samples from the same progenitor at different development stages and showed that by taking into account the phylogeny of this development, single-nucleotide variant (SNV) calling can be improved. Accurate SNV calls can better reveal early-stage tumors, identify mechanisms of cancer progression or help in drug targeting. We present SNV-PPILP, a fast and easy-to-use tool for refining GATK's Unified Genotyper SNV calls for multiple samples assumed to form a phylogeny. We tested SNV-PPILP on simulated data with a varying number of samples, SNVs, read coverage and violations of the perfect phylogeny assumption. We always match or improve the accuracy of GATK, with a significant improvement at low read coverage. SNV-PPILP, available at cs.helsinki.fi/gsa/snv-ppilp/, is written in Python and requires the free ILP solver lp_solve. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
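
    The perfect-phylogeny assumption that SNV-PPILP exploits can be checked with the classic three-gamete condition on a binary sample-by-SNV matrix, sketched below. This illustrates the underlying combinatorial constraint only, not SNV-PPILP's ILP formulation.

```python
# Sketch of the perfect-phylogeny compatibility check that underlies
# phylogeny-aware SNV refinement: with an all-zero ancestor, two SNV columns
# are incompatible if samples exhibit all three gametes (0,1), (1,0), (1,1).
import numpy as np
from itertools import combinations

def conflicting_pairs(M):
    """Return SNV column pairs that violate the perfect-phylogeny condition."""
    bad = []
    for i, j in combinations(range(M.shape[1]), 2):
        pairs = {(a, b) for a, b in zip(M[:, i], M[:, j])}
        if {(0, 1), (1, 0), (1, 1)} <= pairs:
            bad.append((i, j))
    return bad

# rows: samples, columns: SNV presence calls (possibly noisy)
M = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]])
print(conflicting_pairs(M))   # [(0, 1)] -> columns 0 and 1 conflict
```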

  3. DOD Financial Management: Additional Efforts Needed to Improve Audit Readiness of Navy Military Pay and Other Related Activities

    DTIC Science & Technology

    2015-09-01

    accuracy and validity of selected basic pay and entitlement transactions the IPA tested. The IPA tested a statistical sample of 405 leave and earnings...

  4. Biomarker Development for Intraductal Papillary Mucinous Neoplasms Using Multiple Reaction Monitoring Mass Spectrometry.

    PubMed

    Kim, Yikwon; Kang, MeeJoo; Han, Dohyun; Kim, Hyunsoo; Lee, KyoungBun; Kim, Sun-Whe; Kim, Yongkang; Park, Taesung; Jang, Jin-Young; Kim, Youngsoo

    2016-01-04

    Intraductal papillary mucinous neoplasm (IPMN) is a common precursor of pancreatic cancer (PC). Much clinical attention has been directed toward IPMNs due to the increase in the prevalence of PC. The diagnosis of IPMN depends primarily on radiological examination, but the diagnostic accuracy of this tool is not satisfactory, necessitating the development of accurate diagnostic biomarkers for IPMN to prevent PC. Recently, high-throughput targeted proteomic quantification methods have accelerated the discovery of biomarkers, rendering them powerful platforms for the development of IPMN diagnostic biomarkers. In this study, a robust multiple reaction monitoring (MRM) pipeline was applied to discover and verify IPMN biomarker candidates in a large cohort of plasma samples. Through highly reproducible MRM assays and a stringent statistical analysis, 11 proteins were selected as IPMN marker candidates with high confidence in 184 plasma samples, comprising a training set (n = 84) and a test set (n = 100). To improve the discriminatory power, we constructed a six-protein panel by combining marker candidates. The multimarker panel had high discriminatory power in distinguishing between IPMN and controls, including other benign diseases. Consequently, the diagnostic accuracy for IPMN can be improved dramatically with this novel plasma-based panel in combination with radiological examination.

  5. Improvements to sample processing and measurement to enable more widespread environmental application of tritium.

    PubMed

    Moran, James; Alexander, Thomas; Aalseth, Craig; Back, Henning; Mace, Emily; Overman, Cory; Seifert, Allen; Freeburg, Wilcox

    2017-08-01

    Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. We present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. We identify a current quantification limit of 92.2 TU, which, combined with our small sample sizes, corresponds to as little as 0.00133 Bq of total T activity. This enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of both natural and artificial T behavior in the environment. Copyright © 2017. Published by Elsevier Ltd.

  6. Improvements to sample processing and measurement to enable more widespread environmental application of tritium

    DOE PAGES

    Moran, James; Alexander, Thomas; Aalseth, Craig; ...

    2017-01-26

    Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. Here, we present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. We also identify a current quantification limit of 92.2 TU, which, combined with our small sample sizes, corresponds to as little as 0.00133 Bq of total T activity. Furthermore, this enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of both natural and artificial T behavior in the environment.

  7. On the design of paleoenvironmental data networks for estimating large-scale patterns of climate

    NASA Astrophysics Data System (ADS)

    Kutzbach, J. E.; Guetter, P. J.

    1980-09-01

    Guidelines are determined for the spatial density and location of climatic variables (temperature and precipitation) that are appropriate for estimating the continental- to hemispheric-scale pattern of atmospheric circulation (sea-level pressure). Because instrumental records of temperature and precipitation simulate the climatic information that is contained in certain paleoenvironmental records (tree-ring, pollen, and written-documentary records, for example), these guidelines provide useful sampling strategies for reconstructing the pattern of atmospheric circulation from paleoenvironmental records. The statistical analysis uses a multiple linear regression model. The sampling strategies consist of changes in site density (from 0.5 to 2.5 sites per million square kilometers) and site location (from western North American sites only to sites in Japan, North America, and western Europe) of the climatic data. The results showed that the accuracy of specification of the pattern of sea-level pressure: (1) is improved if sites with climatic records are spread as uniformly as possible over the area of interest; (2) increases with increasing site density, at least up to the maximum site density used in this study; (3) is improved if sites cover an area that extends considerably beyond the limits of the area of interest. The accuracy of specification was lower for independent data than for the data that were used to develop the regression model; some skill was found for almost all sampling strategies.

  8. Current developments in forensic interpretation of mixed DNA samples (Review).

    PubMed

    Hu, Na; Cong, Bin; Li, Shujin; Ma, Chunling; Fu, Lihong; Zhang, Xiaojing

    2014-05-01

    A number of recent advances have provided contemporary forensic investigations with a variety of tools that improve the analysis of mixed DNA samples in criminal investigations, producing notable gains in the analysis of complex trace samples in cases of sexual assault and homicide. Mixed DNA contains DNA from two or more contributors, compounding DNA analysis by combining DNA from one or more major contributors with small amounts of DNA from potentially numerous minor contributors. These samples are characterized by a high probability of drop-out or drop-in combined with elevated stutter, significantly increasing analysis complexity. At some loci, minor contributor alleles may be completely obscured due to amplification bias or over-amplification, creating the illusion of additional contributors. Thus, estimating the number of contributors and separating contributor genotypes at a given locus is significantly more difficult in mixed DNA samples, requiring the application of specialized protocols that have only recently been widely commercialized and standardized. Over the last decade, the accuracy and repeatability of mixed DNA analyses available to conventional forensic laboratories have greatly advanced in terms of laboratory technology, mathematical models and biostatistical software, generating more accurate, rapid and readily available data for legal proceedings and criminal cases.

  9. Current developments in forensic interpretation of mixed DNA samples (Review)

    PubMed Central

    HU, NA; CONG, BIN; LI, SHUJIN; MA, CHUNLING; FU, LIHONG; ZHANG, XIAOJING

    2014-01-01

    A number of recent advances have provided contemporary forensic investigations with a variety of tools that improve the analysis of mixed DNA samples in criminal investigations, producing notable gains in the analysis of complex trace samples in cases of sexual assault and homicide. Mixed DNA contains DNA from two or more contributors, compounding DNA analysis by combining DNA from one or more major contributors with small amounts of DNA from potentially numerous minor contributors. These samples are characterized by a high probability of drop-out or drop-in combined with elevated stutter, significantly increasing analysis complexity. At some loci, minor contributor alleles may be completely obscured due to amplification bias or over-amplification, creating the illusion of additional contributors. Thus, estimating the number of contributors and separating contributor genotypes at a given locus is significantly more difficult in mixed DNA samples, requiring the application of specialized protocols that have only recently been widely commercialized and standardized. Over the last decade, the accuracy and repeatability of mixed DNA analyses available to conventional forensic laboratories have greatly advanced in terms of laboratory technology, mathematical models and biostatistical software, generating more accurate, rapid and readily available data for legal proceedings and criminal cases. PMID:24748965

  10. Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations

    NASA Astrophysics Data System (ADS)

    Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.

    2017-12-01

    Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event at fine spatial gridding. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase the simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required in direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs, but may also be applied to sampling any range of extreme weather events.
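
    As an illustration of the rare-event machinery, one resampling step of a generic Weighted Ensemble scheme is sketched below: walkers are binned by a progress coordinate and split or merged toward a target count per bin while their statistical weights are conserved. The bin definition, target count and walker states are illustrative assumptions, not the study's configuration.

```python
# Sketch of one weighted-ensemble resampling step: walkers are binned by a
# progress coordinate (e.g. cyclone intensity) and split or merged so each
# occupied bin keeps a target walker count while total weight is conserved.
import numpy as np

rng = np.random.default_rng(4)

def resample_bin(walkers, target=4):
    """walkers: list of (state, weight). Split/merge toward `target` per bin."""
    out = list(walkers)
    while len(out) < target:                       # split the heaviest walker
        out.sort(key=lambda w: w[1])
        s, w = out.pop()
        out += [(s, w / 2), (s, w / 2)]
    while len(out) > target:                       # merge the two lightest
        out.sort(key=lambda w: w[1], reverse=True)
        (s1, w1), (s2, w2) = out.pop(), out.pop()
        keep = s1 if rng.random() < w1 / (w1 + w2) else s2
        out.append((keep, w1 + w2))
    return out

walkers = [(x, 1 / 6) for x in rng.normal(0, 1, 6)]
bins = {}
for s, w in walkers:                               # bin by progress coordinate
    bins.setdefault(int(s > 0), []).append((s, w))
new = [w for b in bins.values() for w in resample_bin(b)]
print(len(new), "walkers, total weight =", round(sum(w for _, w in new), 6))
```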

  11. Classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.

    2002-01-01

    An improved classical least squares (CLS) multivariate spectral analysis method that adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions of CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of prediction-augmented CLS (PACLS) is the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.
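
    The effect described, augmenting the prediction-phase least squares with spectral shapes for unmodeled components, can be demonstrated on synthetic spectra, as in the sketch below; the Gaussian band shapes and concentrations are invented for illustration.

```python
# Sketch of prediction-augmented classical least squares: the prediction step
# fits measured spectra to the calibrated analyte spectra PLUS extra shapes
# (e.g. an unmodeled-component spectrum), so the analyte estimate is not
# biased by the interference. Spectra here are synthetic Gaussians.
import numpy as np

wl = np.linspace(0, 1, 200)
gauss = lambda c, s: np.exp(-0.5 * ((wl - c) / s) ** 2)
K_analytes = np.column_stack([gauss(0.3, 0.05), gauss(0.6, 0.05)])  # calibrated
interferent = gauss(0.45, 0.08)                                     # unmodeled

true_conc = np.array([1.0, 2.0])
spectrum = K_analytes @ true_conc + 0.7 * interferent

naive, *_ = np.linalg.lstsq(K_analytes, spectrum, rcond=None)
K_aug = np.column_stack([K_analytes, interferent])   # augment with known shape
augmented, *_ = np.linalg.lstsq(K_aug, spectrum, rcond=None)

print("naive CLS:    ", naive.round(3))              # biased by interferent
print("augmented CLS:", augmented[:2].round(3))      # recovers 1.0 and 2.0
```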

  12. Investigation of the interpolation method to improve the distributed strain measurement accuracy in optical frequency domain reflectometry systems.

    PubMed

    Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang

    2018-02-20

    We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" that exists in time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of the peak position of the cross-correlation and, therefore, the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. A strain of 3 μϵ within the 1 cm spatial resolution at a position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 μϵ.
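
    The interpolation idea can be demonstrated on a synthetic cross-correlation: zero-padding the cross-spectrum before the inverse FFT places the correlation peak on a finer grid, giving sub-sample shift estimates. The signal shapes, padding factor and the symmetric padding used below are illustrative assumptions (the paper reports one-sided padding of windowed spatial-domain data as the most accurate variant).

```python
# Sketch of peak refinement by spectral zero padding: pad the FFT of the
# cross-correlation before inverting it, so the correlation peak can be
# located on a finer grid than the original sampling. Signals are synthetic.
import numpy as np

n, pad = 256, 8                                   # pad -> 8x finer peak grid
t = np.arange(n)
a = np.exp(-0.5 * ((t - 100) / 6.0) ** 2)         # reference signal
b = np.exp(-0.5 * ((t - 103.4) / 6.0) ** 2)       # shifted by 3.4 samples

X = np.fft.fft(a) * np.conj(np.fft.fft(b))        # cross-spectrum
Xp = np.zeros(n * pad, dtype=complex)             # zero pad symmetrically
Xp[: n // 2] = X[: n // 2]; Xp[-n // 2 :] = X[-n // 2 :]
corr = np.fft.ifft(Xp).real

shift = np.argmax(corr) / pad                     # peak on the fine grid
shift = shift - n if shift > n / 2 else shift
print(f"estimated shift: {-shift:.2f} samples")   # ~3.4
```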

  13. Polished sample preparation and backscattered electron imaging of fly ash-cement paste

    NASA Astrophysics Data System (ADS)

    Feng, Shuxia; Li, Yanqi

    2018-03-01

    In recent decades, the technique of backscattered electron imaging and image analysis has been applied in more and more studies of blended cement paste because of its particular advantages. The test accuracy of this technique is affected by polished sample preparation and image acquisition. In our work, the effects of two factors in polished sample preparation and backscattered electron imaging were investigated. The results showed that increasing the smoothing pressure could improve the flatness of the polished surface and thus help to eliminate the interference of morphology with the grey-level distribution of backscattered electron images, and that increasing the accelerating voltage was beneficial for increasing the grey-level difference among different phases in backscattered electron images.

  14. Noise suppressing capillary separation system

    DOEpatents

    Yeung, Edward S.; Xue, Yongjun

    1996-07-30

    A noise-suppressing capillary separation system for detecting the real-time presence or concentration of an analyte in a sample is provided. The system contains a capillary separation means through which the analyte is moved, a coherent light source that generates a beam which is split into a reference beam and a sample beam that irradiate the capillary, and a detector for detecting the reference beam and the sample beam light that transmits through the capillary. The laser beam is of a wavelength effective to be absorbed by a chromophore in the capillary. The system includes a noise suppressing system to improve performance and accuracy without signal averaging or multiple scans.

  15. Noise suppressing capillary separation system

    DOEpatents

    Yeung, E.S.; Xue, Y.

    1996-07-30

    A noise-suppressing capillary separation system for detecting the real-time presence or concentration of an analyte in a sample is provided. The system contains a capillary separation means through which the analyte is moved, a coherent light source that generates a beam which is split into a reference beam and a sample beam that irradiate the capillary, and a detector for detecting the reference beam and the sample beam light that transmits through the capillary. The laser beam is of a wavelength effective to be absorbed by a chromophore in the capillary. The system includes a noise suppressing system to improve performance and accuracy without signal averaging or multiple scans. 13 figs.

  16. Concomitant Ion Effects on Isotope Ratio Measurements with Liquid Sampling – Atmospheric Pressure Glow Discharge Ion Source Orbitrap Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoegg, Edward D.; Marcus, R. Kenneth; Hager, Georg

    2018-02-28

    In an effort to understand and improve the accuracy and precision of the liquid sampling-atmospheric pressure glow discharge (LS-APGD)/Orbitrap system, the effects of concomitant ions on the acquired mass spectra are examined and presented. The LS-APGD/Orbitrap instrument system is capable of high-quality isotope ratio measurements, which are of high analytical interest for nuclear non-proliferation detection applications. The presence of background and concomitant ions (water clusters, matrix, and other analytes) has presented limitations in earlier studies. In order to mitigate these effects, an alternative quadrupole-Orbitrap hybrid mass spectrometer was employed in this study. This instrument configuration has a quadrupole mass filter preceding the Orbitrap to filter out undesired non-analyte ions. Results are presented for the analysis of U in the presence of Rb, Ag, Ba, and Pb as concomitants, each present at 5 µg/mL concentration. Progressive filtering of each concomitant ion shows steadily improved U isotope ratio performance. Ultimately, a 235U/238U ratio of 0.007133, with a relative accuracy of -2.1% and a relative standard deviation of 0.087%, was achieved using this system, along with improved calibration linearity and lowered limits of detection. The resultant performance compares very favorably with other commonly accepted isotope ratio measurement platforms - surprisingly so for an ion trap type mass spectrometry instrument.

  17. Moderate efficiency of clinicians' predictions decreased for blurred clinical conditions and benefits from the use of BRASS index. A longitudinal study on geriatric patients' outcomes.

    PubMed

    Signorini, Giulia; Dagani, Jessica; Bulgari, Viola; Ferrari, Clarissa; de Girolamo, Giovanni

    2016-01-01

    Accurate prognosis is an essential aspect of good clinical practice and efficient health services, particularly for chronic and disabling diseases, as in geriatric populations. This study aims to examine the accuracy of clinical prognostic predictions and to devise prediction models combining clinical variables and clinicians' prognosis for a geriatric patient sample. In a sample of 329 consecutive older patients admitted to 10 geriatric units, we evaluated the accuracy of clinicians' prognoses regarding three outcomes at discharge: global functioning, length of stay (LoS) in hospital, and destination at discharge (DD). A comprehensive set of sociodemographic, clinical, and treatment-related information was also collected. Moderate predictive performance was found for all three outcomes: area under the receiver operating characteristic curve of 0.79 and 0.78 for functioning and LoS, respectively, and moderate concordance, Cohen's κ = 0.45, between predicted and observed DD. Predictive models showed that the Blaylock Risk Assessment Screening Score, together with clinicians' judgment, was relevant to improving predictions for all outcomes (absolute improvement in adjusted and pseudo-R² of up to 19%). Although the clinicians' estimates were important factors in predicting global functioning, LoS, and DD, more research is needed regarding both methodological aspects and clinical measurements to improve prognostic clinical indices. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Radial k-t SPIRiT: autocalibrated parallel imaging for generalized phase-contrast MRI.

    PubMed

    Santelli, Claudio; Schaeffter, Tobias; Kozerke, Sebastian

    2014-11-01

    To extend SPIRiT to additionally exploit temporal correlations for highly accelerated generalized phase-contrast MRI, and to compare the performance of the proposed radial k-t SPIRiT method relative to frame-by-frame SPIRiT and radial k-t GRAPPA reconstruction for velocity and turbulence mapping in the aortic arch. Free-breathing navigator-gated two-dimensional radial cine imaging with three-directional multi-point velocity encoding was implemented, and fully sampled data were obtained in the aortic arch of healthy volunteers. Velocities were encoded with three different first gradient moments per axis to permit quantification of mean velocity and turbulent kinetic energy. Velocity and turbulent kinetic energy maps from up to 14-fold undersampled data were compared for k-t SPIRiT, frame-by-frame SPIRiT, and k-t GRAPPA relative to the fully sampled reference. Using k-t SPIRiT, improvements in magnitude and velocity reconstruction accuracy were found. Temporally resolved magnitude profiles revealed a reduction in spatial blurring with k-t SPIRiT compared with frame-by-frame SPIRiT and k-t GRAPPA for all velocity encodings, leading to improved estimates of turbulent kinetic energy. k-t SPIRiT offers improved reconstruction accuracy at high radial undersampling factors and hence facilitates the routine use of generalized phase-contrast MRI. Copyright © 2013 Wiley Periodicals, Inc.

  19. Novel applications of multitask learning and multiple output regression to multiple genetic trait prediction.

    PubMed

    He, Dan; Kuhn, David; Parida, Laxmi

    2016-06-15

    Given a set of biallelic molecular markers, such as SNPs, with genotype values encoded numerically on a collection of plant, animal or human samples, the goal of genetic trait prediction is to predict the quantitative trait values by simultaneously modeling all marker effects. Genetic trait prediction is usually represented as a linear regression model. In many cases, for the same set of samples and markers, multiple traits are observed. Some of these traits might be correlated with each other. Therefore, modeling all the multiple traits together may improve the prediction accuracy. In this work, we view the multitrait prediction problem from a machine learning angle: as either a multitask learning problem or a multiple output regression problem, depending on whether different traits share the same genotype matrix or not. We then adapted multitask learning algorithms and multiple output regression algorithms to solve the multitrait prediction problem. We proposed a few strategies to improve the least square error of the prediction from these algorithms. Our experiments show that modeling multiple traits together could improve the prediction accuracy for correlated traits. The programs we used are either public or available directly from the respective authors, such as the MALSAR package (http://www.public.asu.edu/~jye02/Software/MALSAR/). The Avocado data set has not been published yet and is available upon request. dhe@us.ibm.com. © The Author 2016. Published by Oxford University Press.
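
    When traits share one genotype matrix, the multiple-output view can be sketched with scikit-learn's MultiTaskLasso, which selects markers jointly across traits. This is a generic illustration on simulated data, not the MALSAR-based setup used by the authors.

```python
# Sketch of multitrait prediction as multiple-output regression: when traits
# share one genotype matrix, MultiTaskLasso selects markers jointly across
# traits, which can help when traits are correlated. Data are simulated.
import numpy as np
from sklearn.linear_model import Lasso, MultiTaskLasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n, p = 200, 500
X = rng.binomial(2, 0.4, (n, p)).astype(float)
B = np.zeros((p, 2)); B[:15] = rng.normal(0, 1, (15, 2))  # shared causal SNPs
Y = X @ B + rng.normal(0, 2, (n, 2))                      # two correlated traits

Xtr, Xte, Ytr, Yte = train_test_split(X, Y, random_state=0)
multi = MultiTaskLasso(alpha=0.5).fit(Xtr, Ytr)
single = [Lasso(alpha=0.5).fit(Xtr, Ytr[:, k]) for k in range(2)]

for k in range(2):
    r_multi = np.corrcoef(multi.predict(Xte)[:, k], Yte[:, k])[0, 1]
    r_single = np.corrcoef(single[k].predict(Xte), Yte[:, k])[0, 1]
    print(f"trait {k}: single r={r_single:.2f}  multitask r={r_multi:.2f}")
```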

  20. High diagnostic accuracy of histone H4-IgG autoantibodies in systemic lupus erythematosus.

    PubMed

    Vordenbäumen, Stefan; Böhmer, Paloma; Brinks, Ralph; Fischer-Betz, Rebecca; Richter, Jutta; Bleck, Ellen; Rengers, Petra; Göhler, Heike; Zucht, Hans-Dieter; Budde, Petra; Schulz-Knappe, Peter; Schneider, Matthias

    2018-03-01

    Diagnosis of SLE relies on the detection of autoantibodies. We aimed to assess the diagnostic potential of histone H4 and H2A variant antibodies in SLE. IgG-autoantibodies to histones H4 (HIST1H4A), H2A type 2-A (HIST2H2AA3) and H2A type 2-C (HIST2H2AC) were measured along with a standard antibody (SA) set including SSA, SSB, Sm, U1-RNP and RPLP2 in a multiplex magnetic microsphere-based assay in 153 SLE patients [85% female, 41 (13.5) years] and 81 healthy controls [77% female, 43.3 (12.4) years]. Receiver operating characteristic (ROC) analysis was performed to assess the diagnostic performance of individual markers. Logistic regression analysis was performed on a random split of samples to determine the additional value of histone antibodies in comparison with SA by likelihood ratio test and determination of diagnostic accuracy in the remaining validation samples. The microsphere-based assay showed good interclass correlation (mean 0.85, range 0.73-0.99) and diagnostic performance in ROC analysis (area under the curve (AUC) range 84.8-93.2) compared with the routine assay for SA parameters. HIST1H4A-IgG was the marker with the best individual diagnostic performance for SLE vs healthy (AUC 0.97, sensitivity 95% at 90% specificity). HIST1H4A-IgG was an independent significant predictor for the diagnosis of SLE in multivariate modelling (P < 0.0001), and significantly improved prediction of SLE over SA parameters alone (residual deviance 45.9 vs 97.1, P = 4.3 × 10⁻¹¹). Diagnostic accuracy in the training and validation samples was 89 and 86% for SA, and 95 and 89% with the addition of HIST1H4A-IgG. HIST1H4A-IgG antibodies improve diagnostic accuracy for SLE vs healthy. © The Author(s) 2017. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  1. Accuracy requirements and uncertainties in radiotherapy: a report of the International Atomic Energy Agency.

    PubMed

    van der Merwe, Debbie; Van Dyk, Jacob; Healy, Brendan; Zubizarreta, Eduardo; Izewska, Joanna; Mijnheer, Ben; Meghzifene, Ahmed

    2017-01-01

    Radiotherapy technology continues to advance and the expectation of improved outcomes requires greater accuracy in various radiotherapy steps. Different factors affect the overall accuracy of dose delivery. Institutional comprehensive quality assurance (QA) programs should ensure that uncertainties are maintained at acceptable levels. The International Atomic Energy Agency has recently developed a report summarizing the accuracy achievable and the suggested action levels, for each step in the radiotherapy process. Overview of the report: The report seeks to promote awareness and encourage quantification of uncertainties in order to promote safer and more effective patient treatments. The radiotherapy process and the radiobiological and clinical frameworks that define the need for accuracy are depicted. Factors that influence uncertainty are described for a range of techniques, technologies and systems. Methodologies for determining and combining uncertainties are presented, and strategies for reducing uncertainties through QA programs are suggested. The role of quality audits in providing international benchmarking of achievable accuracy and realistic action levels is also discussed. The report concludes with nine general recommendations: (1) Radiotherapy should be applied as accurately as reasonably achievable, technical and biological factors being taken into account. (2) For consistency in prescribing, reporting and recording, recommendations of the International Commission on Radiation Units and Measurements should be implemented. (3) Each institution should determine uncertainties for their treatment procedures. Sample data are tabulated for typical clinical scenarios with estimates of the levels of accuracy that are practically achievable and suggested action levels. (4) Independent dosimetry audits should be performed regularly. (5) Comprehensive quality assurance programs should be in place. (6) Professional staff should be appropriately educated and adequate staffing levels should be maintained. (7) For reporting purposes, uncertainties should be presented. (8) Manufacturers should provide training on all equipment. (9) Research should aid in improving the accuracy of radiotherapy. Some example research projects are suggested.

  2. Photographic techniques for characterizing streambed particle sizes

    USGS Publications Warehouse

    Whitman, Matthew S.; Moran, Edward H.; Ourso, Robert T.

    2003-01-01

    We developed photographic techniques to characterize coarse (>2-mm) and fine (≤2-mm) streambed particle sizes in 12 streams in Anchorage, Alaska. Results were compared with current sampling techniques to assess which provided greater sampling efficiency and accuracy. The streams sampled were wadeable and contained gravel-cobble streambeds. Gradients ranged from about 5% at the upstream sites to about 0.25% at the downstream sites. Mean particle sizes and size-frequency distributions resulting from digitized photographs differed significantly from those resulting from Wolman pebble counts for five sites in the analysis. Wolman counts were biased toward selecting larger particles. Photographic analysis also yielded a greater number of measured particles (mean = 989) than did the Wolman counts (mean = 328). Stream embeddedness ratings assigned from field and photographic observations were significantly different at 5 of the 12 sites, although both types of ratings showed a positive relationship with digitized surface fines. Visual estimates of embeddedness and digitized surface fines may both be useful indicators of benthic conditions, but digitizing surface fines produces quantitative rather than qualitative data. Benefits of the photographic techniques include reduced field time, minimal streambed disturbance, convenience of postfield processing, easy sample archiving, and improved accuracy and replication potential.

  3. Improvement of a sample preparation procedure for multi-elemental determination in Brazil nuts by ICP-OES.

    PubMed

    Welna, Maja; Szymczycha-Madeja, Anna

    2014-04-01

    Various sample preparation procedures, such as common wet digestions and alternatives based on solubilisation in aqua regia or tetramethyl ammonium hydroxide, were compared for the determination of the total Ba, Ca, Cr, Cd, Cu, Fe, Mg, Mn, Ni, P, Pb, Se, Sr and Zn contents in Brazil nuts using inductively coupled plasma optical emission spectrometry (ICP-OES). For the measurement of Se, a hydride generation technique was used. The performance of these procedures was assessed in terms of precision, accuracy and limits of detection of the elements. It was found that solubilisation in aqua regia gave the best results, i.e. limits of detection from 0.60 to 41.9 ng ml⁻¹, precision of 1.0-3.9% and accuracy better than 5%. External calibration with simple standard solutions could be applied for the analysis. The proposed procedure is simple, reduces sample handling, and minimises time and reagent consumption. Thus, it can be a viable alternative to traditional sample treatment approaches based on total digestion with concentrated reagents. A phenomenon related to the levels of Ba, Se and Sr in Brazil nuts was also discussed.

  4. 3D Higher Order Modeling in the BEM/FEM Hybrid Formulation

    NASA Technical Reports Server (NTRS)

    Fink, P. W.; Wilton, D. R.

    2000-01-01

    Higher order divergence- and curl-conforming bases have been shown to provide significant benefits, in both convergence rate and accuracy, in the 2D hybrid finite element/boundary element formulation (P. Fink and D. Wilton, National Radio Science Meeting, Boulder, CO, Jan. 2000). A critical issue in achieving the potential accuracy of the approach is the accurate evaluation of all matrix elements. These involve products of high order polynomials and, in some instances, singular Green's functions. In the 2D formulation, the use of a generalized Gaussian quadrature method was found to greatly facilitate the computation and to improve the accuracy of the boundary integral equation self-terms. In this paper, a 3D hybrid electric field formulation employing higher order bases and higher order elements is presented. The improvements in convergence rate and accuracy, compared to those resulting from lower order modeling, are established. Techniques developed to facilitate the computation of the boundary integral self-terms are also shown to improve the accuracy of these terms. Finally, simple preconditioning techniques are used in conjunction with iterative solution procedures to solve the resulting linear system efficiently. In order to handle the boundary integral singularities in the 3D formulation, the parent element (either a triangle or a rectangle) is subdivided into a set of sub-triangles with a common vertex at the singularity. The contribution to the integral from each of the sub-triangles is computed using the Duffy transformation to remove the singularity. This method is shown to greatly facilitate the self-term computation when the bases are of higher order. In addition, the sub-triangles can be further divided to achieve near-arbitrary accuracy in the self-term computation. An efficient method for subdividing the parent element is presented. The accuracy obtained using higher order bases is compared to that obtained using lower order bases when the number of unknowns is approximately equal. Also, convergence rates obtained using higher order bases are compared to those obtained with lower order bases for selected sample
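
    The Duffy transformation at the heart of the self-term computation can be shown in a few lines: mapping the unit square onto a triangle with the singularity at a vertex introduces a Jacobian that cancels a 1/r kernel, so standard Gauss-Legendre quadrature converges rapidly. The sketch below integrates a scalar 1/r kernel over a reference triangle; the actual matrix elements involve vector bases and Green's functions, which are not reproduced here.

```python
# Sketch of the Duffy transformation for a 1/r vertex singularity: mapping the
# unit square to the triangle via x = u, y = u*v introduces a Jacobian u that
# cancels the singularity, so ordinary Gauss-Legendre quadrature converges fast.
import numpy as np

def duffy_integrate(f, n=8):
    """Integrate f(x, y) over triangle (0,0)-(1,0)-(1,1), singular at (0,0)."""
    x_gl, w_gl = np.polynomial.legendre.leggauss(n)
    u = 0.5 * (x_gl + 1); w = 0.5 * w_gl          # map nodes to [0, 1]
    U, V = np.meshgrid(u, u); W = np.outer(w, w)
    X, Y = U, U * V                               # Duffy map, Jacobian = U
    return np.sum(W * U * f(X, Y))

f = lambda x, y: 1.0 / np.hypot(x, y)             # 1/r kernel
print(duffy_integrate(f), "vs exact", np.log(1 + np.sqrt(2)))
```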

  5. Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer

    NASA Astrophysics Data System (ADS)

    Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad

    2017-04-01

    Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges including feature redundancy, unbalanced data, and small sample sizes have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were optimum predictive modeling and feature selection methods, respectively, for achieving high prognosis performance. To address unbalanced data, Synthetic Minority Over-sampling technique was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors in affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
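
    As a rough illustration of the configuration the authors identify as optimal (PCA for feature redundancy, SMOTE for unbalanced endpoints, Random Forest for prediction), the following sketch wires these together with scikit-learn and imbalanced-learn; the feature matrix and endpoint are random placeholders, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from imblearn.over_sampling import SMOTE      # imbalanced-learn package
from imblearn.pipeline import Pipeline        # applies SMOTE to training folds only

rng = np.random.default_rng(0)
X = rng.random((112, 400))                    # 112 patients x 400 radiomic features
y = (rng.random(112) < 0.25).astype(int)      # unbalanced endpoint, e.g. recurrence

pipe = Pipeline([
    ("pca", PCA(n_components=10)),            # address feature redundancy
    ("smote", SMOTE(random_state=0)),         # address class imbalance
    ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
])
print(cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean())
```

    Using the imbalanced-learn Pipeline matters here: it oversamples only the training portion of each fold, so the cross-validated score is not inflated by synthetic test samples.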

  6. Training sample selection based on self-training for liver cirrhosis classification using ultrasound images

    NASA Astrophysics Data System (ADS)

    Fujita, Yusuke; Mitani, Yoshihiro; Hamamoto, Yoshihiko; Segawa, Makoto; Terai, Shuji; Sakaida, Isao

    2017-03-01

    Ultrasound imaging is a popular and non-invasive tool used in the diagnosis of liver disease. Cirrhosis is a chronic liver disease that can advance to liver cancer. Early detection and appropriate treatment are crucial to prevent liver cancer. However, ultrasound image analysis is very challenging because of the low signal-to-noise ratio of ultrasound images. To achieve higher classification performance, the selection of training regions of interest (ROIs), which directly affects classification accuracy, is very important. The purpose of our study is cirrhosis detection with high accuracy using liver ultrasound images. In our previous works, training ROI selection by MILBoost and multiple-ROI classification based on the product rule were proposed to achieve high classification performance. In this article, we propose a self-training method to select training ROIs effectively. Evaluation experiments were performed to assess the effect of self-training, using both manually and automatically selected ROIs. Experimental results show that self-training with manually selected ROIs achieved higher classification performance than other approaches, including our conventional methods. Manual ROI definition and sample selection are therefore important for improving classification accuracy in cirrhosis detection using ultrasound images.
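
    A minimal sketch of the self-training idea, using scikit-learn's SelfTrainingClassifier as a stand-in for the authors' method: labeled ROIs carry a class, candidate ROIs are marked unlabeled (-1), and pseudo-labels are accepted only above a confidence threshold. The features and labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_labeled = rng.normal(0.0, 1.0, (40, 16))        # texture features of hand-picked ROIs
y_labeled = (X_labeled[:, 0] > 0).astype(int)     # 0 = normal, 1 = cirrhosis (toy labels)
X_candidates = rng.normal(0.0, 1.0, (200, 16))    # automatically extracted candidate ROIs

X = np.vstack([X_labeled, X_candidates])
y = np.concatenate([y_labeled, -np.ones(len(X_candidates), dtype=int)])  # -1 = unlabeled

self_training = SelfTrainingClassifier(SVC(probability=True), threshold=0.9)
self_training.fit(X, y)
print("candidate ROIs pseudo-labeled:", (self_training.transduction_[40:] != -1).sum())
```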

  7. THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Habib, Salman; Biswas, Rahul

    2016-04-01

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
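
    A schematic of the emulation workflow, not the Mira-Titan code: draw a space-filling design over the parameter space, run the (here, toy) simulator at those design points, and train a Gaussian-process emulator that predicts, with uncertainty, everywhere else.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def toy_simulator(theta):                  # placeholder for an expensive N-body run
    return np.sin(theta @ np.arange(1, 9))

sampler = qmc.LatinHypercube(d=8, seed=0)  # 8-D space, e.g. w0, wa, neutrino mass, ...
design = sampler.random(n=26)              # 26 models, as in the abstract
y = np.array([toy_simulator(t) for t in design])

emulator = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(design, y)
mean, std = emulator.predict(sampler.random(n=5), return_std=True)
print(mean, std)                           # predictions with uncertainty estimates
```

    Where the emulator's predictive uncertainty is largest is where adding a new set of simulations buys the most accuracy, which is the prescribed-refinement idea the abstract describes.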

  8. The mira-titan universe. Precision predictions for dark energy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Bingham, Derek; Lawrence, Earl

    2016-03-28

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.

  9. Effects of subsampling of passive acoustic recordings on acoustic metrics.

    PubMed

    Thomisch, Karolin; Boebel, Olaf; Zitterbart, Daniel P; Samaran, Flore; Van Parijs, Sofie; Van Opzeeland, Ilse

    2015-07-01

    Passive acoustic monitoring is an important tool in marine mammal studies. However, logistics and finances frequently constrain the number and servicing schedules of acoustic recorders, requiring a trade-off between deployment periods and sampling continuity, i.e., the implementation of a subsampling scheme. Optimizing such schemes to each project's specific research questions is desirable. This study investigates the impact of subsampling on the accuracy of two common metrics, acoustic presence and call rate, for different vocalization patterns (regimes) of baleen whales: (1) variable vocal activity, (2) vocalizations organized in song bouts, and (3) vocal activity with diel patterns. To this end, above metrics are compared for continuous and subsampled data subject to different sampling strategies, covering duty cycles between 50% and 2%. The results show that a reduction of the duty cycle impacts negatively on the accuracy of both acoustic presence and call rate estimates. For a given duty cycle, frequent short listening periods improve accuracy of daily acoustic presence estimates over few long listening periods. Overall, subsampling effects are most pronounced for low and/or temporally clustered vocal activity. These findings illustrate the importance of informed decisions when applying subsampling strategies to passive acoustic recordings or analyses for a given target species.
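
    A toy version of the subsampling experiment, under strongly simplified assumptions (clustered two-hour song bouts, one-minute bins): at the same 10% duty cycle, many short listening periods recover daily acoustic presence better than one long block per day.

```python
import numpy as np

rng = np.random.default_rng(1)
n_min = 60 * 24 * 30                                 # one month in 1-minute bins
calls = np.zeros(n_min, dtype=bool)
for s in rng.choice(n_min - 120, size=40, replace=False):
    calls[s:s + 120] = True                          # clustered 2-hour song bouts

def presence_per_day(recorded):
    """Fraction of days with detected acoustic presence, given a boolean
    mask of which minutes were actually recorded."""
    detected = calls & recorded
    return detected.reshape(30, -1).any(axis=1).mean()

minute = np.arange(n_min)
truth = presence_per_day(np.ones(n_min, dtype=bool))
few_long = (minute % 1440) < 144                     # one 144-min block per day (10% duty)
many_short = (minute % 60) < 6                       # 6 min every hour (also 10% duty)
print(truth, presence_per_day(few_long), presence_per_day(many_short))
```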

  10. Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach

    NASA Astrophysics Data System (ADS)

    Xiao, T.

    2012-12-01

    One of the most important components in urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is crucial to implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
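
    A hedged sketch of the cost-benefit idea: the standard binomial sample-size formula used in accuracy assessment, combined with per-sample cost components. The cost figures are invented placeholders, not values from the study.

```python
import math

def required_samples(expected_accuracy, half_width, confidence_z=1.96):
    """n >= z^2 * p * (1 - p) / d^2 for estimating overall map accuracy
    p to within +/- d at the given confidence level."""
    p = expected_accuracy
    return math.ceil(confidence_z**2 * p * (1.0 - p) / half_width**2)

def total_cost(n, transport=12.0, field=8.0, lab=5.0):
    """Assessment cost: transportation + field collection + lab analysis."""
    return n * (transport + field + lab)

n = required_samples(expected_accuracy=0.85, half_width=0.05)
print(n, total_cost(n))   # e.g. 196 samples and their projected cost
```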

  11. Development of improved space sampling strategies for ocean chemical properties: Total carbon dioxide and dissolved nitrate

    NASA Technical Reports Server (NTRS)

    Goyet, Catherine; Davis, Daniel; Peltzer, Edward T.; Brewer, Peter G.

    1995-01-01

    Large-scale ocean observing programs such as the Joint Global Ocean Flux Study (JGOFS) and the World Ocean Circulation Experiment (WOCE) today must face the problem of designing an adequate sampling strategy. For ocean chemical variables, the goals and observing technologies are quite different from those for ocean physical variables (temperature, salinity, pressure). We have recently acquired data on ocean CO2 properties on WOCE cruises P16c and P17c that are sufficiently dense to test for sampling redundancy. We use linear and quadratic interpolation methods on the sampled field to investigate the minimum number of samples required to define the deep ocean total inorganic carbon (TCO2) field within the limits of experimental accuracy (±4 micromol/kg). Within the limits of current measurements, these lines were oversampled in the deep ocean. Should the precision of the measurement be improved, a denser sampling pattern may be desirable in the future. This approach rationalizes the efficient use of resources for field work and for estimating gridded TCO2 fields needed to constrain geochemical models.
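
    A toy version of the redundancy test: subsample a smooth, densely sampled deep-ocean TCO2 profile, reconstruct it by linear interpolation, and check whether the reconstruction error stays within the ±4 micromol/kg experimental accuracy. The synthetic profile is an invented stand-in for the WOCE section data.

```python
import numpy as np

depth = np.linspace(1000.0, 5000.0, 401)                  # dense sampling, 10 m spacing
tco2 = 2150.0 + 80.0 * np.tanh((depth - 2000.0) / 800.0)  # smooth deep-ocean profile

for step in (2, 5, 10, 20, 40):                           # keep every step-th sample
    coarse = slice(None, None, step)
    recon = np.interp(depth, depth[coarse], tco2[coarse])
    err = np.abs(recon - tco2).max()
    print(f"every {step:2d}th sample: max error {err:.2f} umol/kg",
          "within accuracy" if err <= 4.0 else "too sparse")
```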

  12. A Nonlinear Framework of Delayed Particle Smoothing Method for Vehicle Localization under Non-Gaussian Environment.

    PubMed

    Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong

    2016-05-13

    In this paper, a novel nonlinear framework of smoothing method, non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted in order to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed-delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on the real-world data, which is collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS has significant improvement on the vehicle state accuracy and outperforms the existing filtering and smoothing methods.
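
    One ingredient of the method, shown in isolation and not as the authors' nGDPS implementation: drawing noise from a multivariate Student's t-distribution, whose heavy tails are what makes the process and measurement models non-Gaussian.

```python
import numpy as np
from scipy.stats import multivariate_t, multivariate_normal

loc = np.zeros(2)                       # 2-D noise (e.g., position error components)
shape = np.array([[1.0, 0.3],
                  [0.3, 1.0]])

t_noise = multivariate_t(loc=loc, shape=shape, df=3).rvs(size=100_000, random_state=0)
g_noise = multivariate_normal(mean=loc, cov=shape).rvs(size=100_000, random_state=0)

# Heavy tails: extreme deviations are far more frequent under Student's t,
# which is why a Gaussian filter underweights outlier-prone sensor data.
print((np.abs(t_noise) > 5).mean(), (np.abs(g_noise) > 5).mean())
```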

  13. Analogous on-axis interference topographic phase microscopy (AOITPM).

    PubMed

    Xiu, P; Liu, Q; Zhou, X; Xu, Y; Kuang, C; Liu, X

    2018-05-01

    The refractive index (RI) of a sample as an endogenous contrast agent plays an important role in transparent live cell imaging. In tomographic phase microscopy (TPM), 3D quantitative RI maps can be reconstructed based on the measured projections of the RI in multiple directions. The resolution of the RI maps not only depends on the numerical aperture of the employed objective lens, but also is determined by the accuracy of the quantitative phase of the sample measured at multiple scanning illumination angles. This paper reports an analogous on-axis interference TPM, where the interference angle between the sample and reference beams is kept constant for projections in multiple directions to improve the accuracy of the phase maps and the resolution of RI tomograms. The system has been validated with both silica beads and red blood cells. Compared with conventional TPM, the proposed system acquires quantitative RI maps with higher resolution (420 nm @λ = 633 nm) and signal-to-noise ratio that can be beneficial for live cell imaging in biomedical applications. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.

  14. Hard choices in assessing survival past dams — a comparison of single- and paired-release strategies

    USGS Publications Warehouse

    Zydlewski, Joseph D.; Stich, Daniel S.; Sigourney, Douglas B.

    2017-01-01

    Mark–recapture models are widely used to estimate survival of salmon smolts migrating past dams. Paired releases have been used to improve estimate accuracy by removing components of mortality not attributable to the dam. This method is accompanied by reduced precision because (i) sample size is reduced relative to a single, large release; and (ii) variance calculations inflate error. We modeled an idealized system with a single dam to assess trade-offs between accuracy and precision and compared methods using root mean squared error (RMSE). Simulations were run under predefined conditions (dam mortality, background mortality, detection probability, and sample size) to determine scenarios when the paired release was preferable to a single release. We demonstrate that a paired-release design provides a theoretical advantage over a single-release design only at large sample sizes and high probabilities of detection. At release numbers typical of many survival studies, paired release can result in overestimation of dam survival. Failures to meet model assumptions of a paired release may result in further overestimation of dam-related survival. Under most conditions, a single-release strategy was preferable.
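
    A stripped-down Monte Carlo of the trade-off, under strong simplifying assumptions (perfect detection, a single dam, no full capture-recapture modeling): the paired design removes background mortality from the estimate but inflates its variance at small release sizes, while the single release is precise but estimates the confounded product of dam and background survival.

```python
import numpy as np

rng = np.random.default_rng(0)
s_dam, s_bg = 0.90, 0.80                  # true dam-passage and background survival
reps = 20_000

def rmse(est):
    return np.sqrt(np.mean((est - s_dam) ** 2))

for n in (50, 200, 1000):                 # smolts per release group
    # single release of 2n fish: estimates s_dam * s_bg, i.e. biased for s_dam
    single = rng.binomial(2 * n, s_dam * s_bg, reps) / (2 * n)
    # paired release: n treatment fish (dam + background), n controls (background only)
    treat = rng.binomial(n, s_dam * s_bg, reps) / n
    ctrl = np.maximum(rng.binomial(n, s_bg, reps), 1) / n   # avoid division by zero
    paired = treat / ctrl                 # ratio removes background mortality
    print(f"n={n:4d}  single RMSE={rmse(single):.3f}  paired RMSE={rmse(paired):.3f}")
```

    The ratio estimator is also slightly biased upward at small n, echoing the overestimation of dam survival the abstract reports for small release numbers.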

  15. Comparative evaluation of the accuracy of linear measurements between cone beam computed tomography and 3D microtomography.

    PubMed

    Mangione, Francesca; Meleo, Deborah; Talocco, Marco; Pecci, Raffaella; Pacifici, Luciano; Bedini, Rossella

    2013-01-01

    The aim of this study was to evaluate the influence of artifacts on the accuracy of linear measurements estimated with a common cone beam computed tomography (CBCT) system used in dental clinical practice, by comparing it with a microCT system as the standard reference. Ten cylindrical bovine bone samples, each containing one implant able to provide both points of reference and image quality degradation, were scanned by the CBCT and microCT systems. Using the software of the two systems, two diameters were measured for each cylindrical sample, taken at different levels with different points of the implant as references. Results were analyzed by ANOVA and a statistically significant difference was found. Based on these results, the measurements made with the two instruments are not yet statistically comparable, although similar performances, with differences that were not statistically significant, were obtained in some samples. With improvements in the hardware and software of CBCT systems, the two instruments may provide similar performances in the near future.

  16. Digital Sensor Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Ken D.; Quinn, Edward L.; Mauck, Jerry L.

    The nuclear industry has been slow to incorporate digital sensor technology into nuclear plant designs due to concerns with digital qualification issues. However, the benefits of digital sensor technology for nuclear plant instrumentation are substantial in terms of accuracy and reliability. This paper, which refers to a final report issued in 2013, demonstrates these benefits in direct comparisons of digital and analog sensor applications. Improved accuracy results from the superior operating characteristics of digital sensors. These include improvements in sensor accuracy and drift and other related parameters which reduce total loop uncertainty and thereby increase safety and operating margins. An example instrument loop uncertainty calculation for a pressure sensor application is presented to illustrate these improvements. This is a side-by-side comparison of the instrument loop uncertainty for both an analog and a digital sensor in the same pressure measurement application. Similarly, improved sensor reliability is illustrated with a sample calculation for determining the probability of failure on demand, an industry standard reliability measure. This looks at equivalent analog and digital temperature sensors to draw the comparison. The results confirm substantial reliability improvement with the digital sensor, due in large part to the ability to continuously monitor the health of a digital sensor such that problems can be immediately identified and corrected. This greatly reduces the likelihood of a latent failure condition of the sensor at the time of a design basis event. Notwithstanding the benefits of digital sensors, there are certain qualification issues that are inherent with digital technology and these are described in the report. One major qualification impediment for digital sensor implementation is software common cause failure (SCCF).
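
    A back-of-envelope version of the report's probability-of-failure-on-demand comparison, using the common single-channel approximation PFD_avg ≈ λ_DU·T/2. The failure rates and diagnostic coverages below are invented placeholders; the point is that continuous self-monitoring converts most dangerous failures into detected, promptly repaired ones.

```python
def pfd_avg(lambda_dangerous, diagnostic_coverage, proof_test_years):
    """Single-channel average probability of failure on demand:
    PFD_avg ~ lambda_DU * T / 2, where the dangerous-undetected rate
    is lambda_D * (1 - DC) and T is the proof-test interval."""
    hours = proof_test_years * 8760.0
    lambda_du = lambda_dangerous * (1.0 - diagnostic_coverage)
    return lambda_du * hours / 2.0

# Placeholder rates (per hour) and coverages, not values from the report
analog  = pfd_avg(lambda_dangerous=2e-6, diagnostic_coverage=0.60, proof_test_years=1)
digital = pfd_avg(lambda_dangerous=2e-6, diagnostic_coverage=0.99, proof_test_years=1)
print(f"analog PFD ~ {analog:.2e}, digital PFD ~ {digital:.2e}")
```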

  17. Real-time dose computation: GPU-accelerated source modeling and superposition/convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacques, Robert; Wong, John; Taylor, Russell

    Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by ≈24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³ times were 8.3 and 94 s, respectively, on an AMD (Sunnyvale, CA) Opteron 254 (two cores, 2.8 GHz). Conclusions: The authors have completed a comprehensive, GPU-accelerated dose engine in order to provide a substantial performance gain over CPU based implementations. Real-time dose computation is feasible with the accuracy levels of the superposition/convolution algorithm.

  18. Predictive accuracy of particle filtering in dynamic models supporting outbreak projections.

    PubMed

    Safarishahrbijari, Anahita; Teyhouee, Aydin; Waldner, Cheryl; Liu, Juxin; Osgood, Nathaniel D

    2017-09-26

    While a new generation of computational statistics algorithms and availability of data streams raises the potential for recurrently regrounding dynamic models with incoming observations, the effectiveness of such arrangements can be highly subject to specifics of the configuration (e.g., frequency of sampling and representation of behaviour change), and there has been little attempt to identify effective configurations. Combining dynamic models with particle filtering, we explored a solution focusing on creating quickly formulated models regrounded automatically and recurrently as new data becomes available. Given a latent underlying case count, we assumed that observed incident case counts followed a negative binomial distribution. In accordance with the condensation algorithm, each such observation led to updating of particle weights. We evaluated the effectiveness of various particle filtering configurations against each other and against an approach without particle filtering according to the accuracy of the model in predicting future prevalence, given data to a certain point and a norm-based discrepancy metric. We examined the effectiveness of particle filtering under varying times between observations, negative binomial dispersion parameters, and rates with which the contact rate could evolve. We observed that more frequent observations of empirical data yielded super-linearly improved accuracy in model predictions. We further found that for the data studied here, the most favourable assumptions to make regarding the parameters associated with the negative binomial distribution and changes in contact rate were robust across observation frequency and the observation point in the outbreak. Combining dynamic models with particle filtering can perform well in projecting future evolution of an outbreak. Most importantly, the remarkable improvements in predictive accuracy resulting from more frequent sampling suggest that investments to achieve efficient reporting mechanisms may be more than paid back by improved planning capacity. The robustness of the results on particle filter configuration in this case study suggests that it may be possible to formulate effective standard guidelines and regularized approaches for such techniques in particular epidemiological contexts. Most importantly, the work tentatively suggests potential for health decision makers to secure strong guidance when anticipating outbreak evolution for emerging infectious diseases by combining even very rough models with particle filtering method.
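
    A minimal sketch of the condensation-style regrounding step described above: each particle's weight is multiplied by the negative-binomial likelihood of the observed case count given that particle's latent count, then particles are resampled. Parameter values are illustrative; a real implementation would also propagate model dynamics between observations and guard against degeneracy, which the paper handles with more elaborate machinery.

```python
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(0)
n_particles, dispersion = 1000, 5.0          # NB dispersion (size) parameter
latent = rng.uniform(50, 400, n_particles)   # each particle's latent case count

def update(latent, observed):
    """One condensation step: weight by NB(observed | latent), then resample."""
    # parameterize scipy's nbinom so its mean equals the latent count
    p = dispersion / (dispersion + latent)
    weights = nbinom.pmf(observed, dispersion, p)
    weights /= weights.sum()
    idx = rng.choice(latent.size, size=latent.size, p=weights)
    return latent[idx]

for observed in (120, 150, 190):             # incoming incident case reports
    latent = update(latent, observed)
print(latent.mean())                         # regrounded latent case count
```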

  19. Global Precipitation Measurement (GPM) Mission Development Status

    NASA Technical Reports Server (NTRS)

    Azarbarzin, Ardeshir Art

    2011-01-01

    Mission Objective: (1) improve scientific understanding of the global water cycle and fresh water availability; (2) improve the accuracy of precipitation forecasts; (3) provide frequent and complete sampling of the Earth's precipitation. Mission Description (Class B, Category I): (1) a constellation of spacecraft provides global precipitation measurement coverage; (2) the NASA/JAXA Core spacecraft provides a microwave radiometer (GMI) and a dual-frequency precipitation radar (DPR) to cross-calibrate the entire constellation; (3) 65 deg inclination, 400 km altitude; (4) launch July 2013 on HII-A; (5) 3-year mission (5-year propellant); (6) partner constellation spacecraft.

  20. Does Combined Physical and Cognitive Training Improve Dual-Task Balance and Gait Outcomes in Sedentary Older Adults?

    PubMed Central

    Fraser, Sarah A.; Li, Karen Z.-H.; Berryman, Nicolas; Desjardins-Crépeau, Laurence; Lussier, Maxime; Vadaga, Kiran; Lehr, Lora; Minh Vu, Thien Tuong; Bosquet, Laurent; Bherer, Louis

    2017-01-01

    Everyday activities like walking and talking can put an older adult at risk for a fall if they have difficulty dividing their attention between motor and cognitive tasks. Training studies have demonstrated that both cognitive and physical training regimens can improve motor and cognitive task performance. Few studies have examined the benefits of combined training (cognitive and physical) and whether or not this type of combined training would transfer to walking or balancing dual-tasks. This study examines the dual-task benefits of combined training in a sample of sedentary older adults. Seventy-two older adults (≥60 years) were randomly assigned to one of four training groups: Aerobic + Cognitive training (CT), Aerobic + Computer lessons (CL), Stretch + CT and Stretch + CL. It was expected that the Aerobic + CT group would demonstrate the largest benefits and that the active placebo control (Stretch + CL) would show the least benefits after training. Walking and standing balance were paired with an auditory n-back with two levels of difficulty (0- and 1-back). Dual-task walking and balance were assessed with: walk speed (m/s), cognitive accuracy (% correct) and several mediolateral sway measures for pre- to post-test improvements. All groups demonstrated improvements in walk speed from pre- (M = 1.33 m/s) to post-test (M = 1.42 m/s, p < 0.001) and in accuracy from pre- (M = 97.57%) to post-test (M = 98.57%, p = 0.005). They also increased their walk speed in the more difficult 1-back (M = 1.38 m/s) in comparison to the 0-back (M = 1.36 m/s, p < 0.001) but reduced their accuracy in the 1-back (M = 96.39%) in comparison to the 0-back (M = 99.92%, p < 0.001). Three out of the five mediolateral sway variables (Peak, SD, RMS) demonstrated significant reductions in sway from pre- to post-test (p-values < 0.05). With the exception of a group difference between Aerobic + CT and Stretch + CT in accuracy, there were no significant group differences after training. Results suggest that there can be dual-task benefits from training but that in this sedentary sample Aerobic + CT training was not more beneficial than other types of combined training. PMID:28149274

  1. Detailed analysis of grid-based molecular docking: A case study of CDOCKER-A CHARMm-based MD docking algorithm.

    PubMed

    Wu, Guosheng; Robertson, Daniel H; Brooks, Charles L; Vieth, Michal

    2003-10-01

    The influence of various factors on the accuracy of protein-ligand docking is examined. The factors investigated include the role of a grid representation of protein-ligand interactions, the initial ligand conformation and orientation, the sampling rate of the energy hyper-surface, and the final minimization. A representative docking method is used to study these factors, namely, CDOCKER, a molecular dynamics (MD) simulated-annealing-based algorithm. A major emphasis in these studies is to compare the relative performance and accuracy of various grid-based approximations to explicit all-atom force field calculations. In these docking studies, the protein is kept rigid while the ligands are treated as fully flexible and a final minimization step is used to refine the docked poses. A docking success rate of 74% is observed when an explicit all-atom representation of the protein (full force field) is used, while a lower accuracy of 66-76% is observed for grid-based methods. All docking experiments considered a 41-member protein-ligand validation set. A significant improvement in accuracy (76 vs. 66%) for the grid-based docking is achieved if the explicit all-atom force field is used in a final minimization step to refine the docking poses. Statistical analysis shows that even lower-accuracy grid-based energy representations can be effectively used when followed with full force field minimization. The results of these grid-based protocols are statistically indistinguishable from the detailed atomic dockings and provide up to a sixfold reduction in computation time. For the test case examined here, improving the docking accuracy did not necessarily enhance the ability to estimate binding affinities using the docked structures. Copyright 2003 Wiley Periodicals, Inc.
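
    To make the grid approximation concrete: docking codes precompute per-atom-type interaction-energy grids and probe them by trilinear interpolation at each ligand-atom position, replacing explicit sums over protein atoms. This sketch uses SciPy's RegularGridInterpolator on a random placeholder grid; it illustrates the lookup, not CDOCKER's actual grid code.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

axes = [np.linspace(0.0, 20.0, 41)] * 3            # 0.5 A spacing over a 20 A box
energy_grid = np.random.default_rng(0).normal(size=(41, 41, 41))  # placeholder grid

lookup = RegularGridInterpolator(axes, energy_grid, method="linear")  # trilinear

ligand_atoms = np.array([[4.2, 7.9, 11.3],
                         [5.0, 8.4, 12.1]])        # ligand atom coordinates (A)
print(lookup(ligand_atoms).sum())                  # grid-approximated interaction energy
```

    The accuracy/speed trade-off the paper measures comes down to this lookup replacing a full pairwise force-field sum, with a final explicit-force-field minimization recovering most of the lost accuracy.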

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Debono, Josephine C, E-mail: josephine.debono@bci.org.au; Poulos, Ann E; Houssami, Nehmat

    This study aimed to evaluate the accuracy of radiographers' screen-reading of mammograms. Currently, radiologist workforce shortages may be compromising the BreastScreen Australia screening program goal to detect early breast cancer. The solution to a similar problem in the United Kingdom has successfully encouraged radiographers to take on the role as one of two screen-readers. Prior to consideration of this strategy in Australia, educational and experiential differences between radiographers in the United Kingdom and Australia emphasise the need for an investigation of Australian radiographers' screen-reading accuracy. Ten radiographers employed by the Westmead Breast Cancer Institute with a range of radiographic (median = 28 years), mammographic (median = 13 years) and BreastScreen (median = 8 years) experience were recruited to blindly and independently screen-read an image test set of 500 mammograms, without formal training. The radiographers indicated the presence of an abnormality using BI-RADS®. Accuracy was determined by comparison with the gold standard of known outcomes of pathology results, interval matching and client 6-year follow-up. Individual sensitivity and specificity levels ranged between 76.0% and 92.0%, and 74.8% and 96.2% respectively. Pooled screen-reader accuracy across the radiographers estimated sensitivity as 82.2% and specificity as 89.5%. Areas under the reading operating characteristic curve ranged between 0.842 and 0.923. This sample of radiographers in an Australian setting have adequate accuracy levels when screen-reading mammograms. It is expected that with formal screen-reading training, accuracy levels will improve, and with support, radiographers have the potential to be one of the two screen-readers in the BreastScreen Australia program, contributing to timeliness and improved program outcomes.

  3. Exploring geo-tagged photos for land cover validation with deep learning

    NASA Astrophysics Data System (ADS)

    Xing, Hanfa; Meng, Yuan; Wang, Zixuan; Fan, Kaixuan; Hou, Dongyang

    2018-07-01

    Land cover validation plays an important role in the process of generating and distributing land cover thematic maps, and is usually implemented through costly sample interpretation with remotely sensed images or field survey. With the increasing availability of geo-tagged landscape photos, automatic photo recognition methodologies, e.g., deep learning, can be effectively utilised for land cover applications. However, they have hardly been utilised in validation processes, as challenges remain in sample selection and classification for highly heterogeneous photos. This study proposed an approach to employ geo-tagged photos for land cover validation by using deep learning technology. The approach first identified photos automatically based on the VGG-16 network. Then, samples for validation were selected and further classified by considering photo distribution and classification probabilities. The implementation was conducted for the validation of the GlobeLand30 land cover product in a heterogeneous area, western California. Experimental results were promising for land cover validation: GlobeLand30 showed an overall accuracy of 83.80% with classified samples, close to the validation result of 80.45% based on visual interpretation. Additionally, the performances of deep learning based on ResNet-50 and AlexNet were also quantified, revealing no substantial differences in final validation results. The proposed approach ensures geo-tagged photo quality, and supports the sample classification strategy by considering photo distribution, with accuracy improvement from 72.07% to 79.33% compared with solely considering the single nearest photo. Consequently, the presented approach demonstrates the feasibility of deep learning technology for identifying land cover information in geo-tagged photos, and has great potential to support and improve the efficiency of land cover validation.
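
    A hedged sketch of the recognition step: a VGG-16 backbone pretrained on ImageNet with its classifier head swapped for the land-cover classes of interest. The class count, the random batch, and the absent fine-tuning loop are placeholders, and the weights API assumed here is torchvision 0.13 or later.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

num_classes = 10                                   # e.g., GlobeLand30-style classes
model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, num_classes) # replace the final FC layer

photos = torch.randn(8, 3, 224, 224)               # a batch of geo-tagged photos
logits = model(photos)
probs = logits.softmax(dim=1)                      # per-class probabilities
print(probs.max(dim=1))                            # confidences used for sample selection
```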

  4. Filtering method of star control points for geometric correction of remote sensing image based on RANSAC algorithm

    NASA Astrophysics Data System (ADS)

    Tan, Xiangli; Yang, Jungang; Deng, Xinpu

    2018-04-01

    In the process of geometric correction of remote sensing images, a large number of redundant control points may occasionally result in low correction accuracy. In order to solve this problem, a control point filtering algorithm based on RANdom SAmple Consensus (RANSAC) is proposed. The basic idea of the RANSAC algorithm is to use the smallest possible data set to estimate the model parameters and then enlarge this set with consistent data points. In this paper, unlike traditional methods of geometric correction using Ground Control Points (GCPs), simulation experiments are carried out to correct remote sensing images using visible stars as control points. In addition, the accuracy of geometric correction without Star Control Point (SCP) optimization is also shown. The experimental results show that the SCP filtering method based on the RANSAC algorithm greatly improves the accuracy of remote sensing image correction.
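
    A minimal hand-rolled RANSAC for a one-dimensional mapping, to make the filtering loop explicit; real geometric correction fits a 2D polynomial or affine transform from star control points, but the consensus logic is the same. All data below are synthetic.

```python
import numpy as np

def ransac_line(x, y, n_iter=500, tol=1.0, seed=0):
    """Fit y ~ a*x + b robustly; return coefficients and the inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(x.size, dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(x.size, size=2, replace=False)   # minimal sample
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < tol            # consensus set
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)  # refit on inliers
    return (a, b), best_inliers

rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 60)
y = 0.5 * x + 3 + rng.normal(0, 0.3, 60)
y[:15] += rng.uniform(5, 20, 15)                           # gross control-point errors
coef, inliers = ransac_line(x, y)
print(coef, inliers.sum(), "control points kept")
```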

  5. New insights from cluster analysis methods for RNA secondary structure prediction

    PubMed Central

    Rogers, Emily; Heitsch, Christine

    2016-01-01

    A widening gap exists between the best practices for RNA secondary structure prediction developed by computational researchers and the methods used in practice by experimentalists. Minimum free energy (MFE) predictions, although broadly used, are outperformed by methods which sample from the Boltzmann distribution and data mine the results. In particular, moving beyond the single structure prediction paradigm yields substantial gains in accuracy. Furthermore, the largest improvements in accuracy and precision come from viewing secondary structures not at the base pair level but at lower granularity/higher abstraction. This suggests that random errors affecting precision and systematic ones affecting accuracy are both reduced by this “fuzzier” view of secondary structures. Thus experimentalists who are willing to adopt a more rigorous, multilayered approach to secondary structure prediction by iterating through these levels of granularity will be much better able to capture fundamental aspects of RNA base pairing. PMID:26971529

  6. Automated characterisation of ultrasound images of ovarian tumours: the diagnostic accuracy of a support vector machine and image processing with a local binary pattern operator.

    PubMed

    Khazendar, S; Sayasneh, A; Al-Assam, H; Du, H; Kaijser, J; Ferrara, L; Timmerman, D; Jassim, S; Bourne, T

    2015-01-01

    Preoperative characterisation of ovarian masses as benign or malignant is of paramount importance to optimise patient management. In this study, we developed and validated a computerised model to characterise ovarian masses as benign or malignant. Transvaginal 2D B mode static ultrasound images of 187 ovarian masses with known histological diagnosis were included. Images were first pre-processed and enhanced, and Local Binary Pattern histograms were then extracted from 2 × 2 blocks of each image. A Support Vector Machine (SVM) was trained using stratified cross-validation with randomised sampling. The process was repeated 15 times and in each round 100 images were randomly selected. The SVM classified the original non-treated static images as benign or malignant masses with an average accuracy of 0.62 (95% CI: 0.59-0.65). This performance significantly improved to an average accuracy of 0.77 (95% CI: 0.75-0.79) when images were pre-processed, enhanced and treated with a Local Binary Pattern operator (mean difference 0.15; 95% CI: 0.11-0.19; p < 0.0001, two-tailed t test). We have shown that an SVM can classify static 2D B mode ultrasound images of ovarian masses into benign and malignant categories. The accuracy improves if texture-related LBP features extracted from the images are considered.
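
    A sketch of the described feature pipeline under stated assumptions: uniform LBP histograms computed on a 2 × 2 grid of image blocks, concatenated, and fed to an SVM. The images are random arrays standing in for the B-mode scans, so the printed score is meaningless; only the plumbing is illustrated.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1                                        # LBP neighbourhood
n_bins = P + 2                                     # bin count for method="uniform"

def lbp_features(img):
    """Concatenated uniform-LBP histograms from a 2 x 2 grid of blocks."""
    h, w = img.shape
    feats = []
    for bi in range(2):
        for bj in range(2):
            block = img[bi * h // 2:(bi + 1) * h // 2,
                        bj * w // 2:(bj + 1) * w // 2]
            codes = local_binary_pattern(block, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
images = rng.integers(0, 256, (40, 64, 64), dtype=np.uint8)  # stand-in scans
X = np.array([lbp_features(im) for im in images])
y = rng.integers(0, 2, 40)                         # 0 = benign, 1 = malignant (toy)
print(SVC().fit(X[:30], y[:30]).score(X[30:], y[30:]))
```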

  7. The Effect of Introducing a Smaller and Lighter Basketball on Female Basketball Players’ Shot Accuracy

    PubMed Central

    Podmenik, Nadja; Leskošek, Bojan; Erčulj, Frane

    2012-01-01

    Our study examined whether the introduction of a smaller and lighter basketball (no. 6) affected the accuracy of female basketball players’ shots at the basket. The International Basketball Federation (FIBA) introduced a size 6 ball in the 2004/2005 season to improve the efficiency and accuracy of technical elements, primarily shots at the basket. The sample for this study included 573 European female basketball players who were members of national teams that had qualified for the senior women’s European championships in 2001, 2003, 2005 and 2007. A size 7 (larger and heavier) basketball was used by 286 players in 1,870 matches, and a size 6 basketball was used by 287 players in 1,966 matches. The players were categorised into three playing positions: guards, forwards and centres. The results revealed that statistically significant changes by year occurred only in terms of the percentage of successful free throws. With the size 6 basketball, this percentage decreased. Statistically significant differences between the playing positions were observed in terms of the percentage of field goals worth three points (between guards and forwards) and two points (between guards and centres). The results show that the introduction of the size 6 basketball did not lead to improvement in shooting accuracy (the opposite was found for free throws), although the number of three-point shots increased. PMID:23486286

  8. Predicting stem total and assortment volumes in an industrial Pinus taeda L. forest plantation using airborne laser scanning data and random forest

    Treesearch

    Carlos Alberto Silva; Carine Klauberg; Andrew Thomas Hudak; Lee Alexander Vierling; Wan Shafrina Wan Mohd Jaafar; Midhun Mohan; Mariano Garcia; Antonio Ferraz; Adrian Cardil; Sassan Saatchi

    2017-01-01

    Improvements in the management of pine plantations result in multiple industrial and environmental benefits. Remote sensing techniques can dramatically increase the efficiency of plantation management by reducing or replacing time-consuming field sampling. We tested the utility and accuracy of combining field and airborne lidar data with Random Forest, a supervised...

  9. CFD Code Survey for Thrust Chamber Application

    NASA Technical Reports Server (NTRS)

    Gross, Klaus W.

    1990-01-01

    In the quest to find analytical reference codes, responses from a questionnaire are presented which portray the current computational fluid dynamics (CFD) program status and capability at various organizations, characterizing liquid rocket thrust chamber flow fields. Sample cases are identified to examine the ability, operational condition, and accuracy of the codes. To select the best-suited programs for accelerated improvements, evaluation criteria are being proposed.

  10. Using an EM Covariance Matrix to Estimate Structural Equation Models with Missing Data: Choosing an Adjusted Sample Size to Improve the Accuracy of Inferences

    ERIC Educational Resources Information Center

    Enders, Craig K.; Peugh, James L.

    2004-01-01

    Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…

  11. Perceptual experience and posttest improvements in perceptual accuracy and consistency.

    PubMed

    Wagman, Jeffrey B; McBride, Dawn M; Trefzger, Amanda J

    2008-08-01

    Two experiments investigated the relationship between perceptual experience (during practice) and posttest improvements in perceptual accuracy and consistency. Experiment 1 investigated the potential relationship between how often knowledge of results (KR) is provided during a practice session and posttest improvements in perceptual accuracy. Experiment 2 investigated the potential relationship between how often practice (PR) is provided during a practice session and posttest improvements in perceptual consistency. The results of both experiments are consistent with previous findings that perceptual accuracy improves only when practice includes KR and that perceptual consistency improves regardless of whether practice includes KR. In addition, the results showed that although there is a relationship between how often KR is provided during a practice session and posttest improvements in perceptual accuracy, there is no relationship between how often PR is provided during a practice session and posttest improvements in consistency.

  12. Accuracy and generalizability of using automated methods for identifying adverse events from electronic health record data: a validation study protocol.

    PubMed

    Rochefort, Christian M; Buckeridge, David L; Tanguay, Andréanne; Biron, Alain; D'Aragon, Frédérick; Wang, Shengrui; Gallix, Benoit; Valiquette, Louis; Audet, Li-Anne; Lee, Todd C; Jayaraman, Dev; Petrucci, Bruno; Lefebvre, Patricia

    2017-02-16

    Adverse events (AEs) in acute care hospitals are frequent and associated with significant morbidity, mortality, and costs. Measuring AEs is necessary for quality improvement and benchmarking purposes, but current detection methods lack accuracy, efficiency, and generalizability. The growing availability of electronic health records (EHR) and the development of natural language processing techniques for encoding narrative data offer an opportunity to develop potentially better methods. The purpose of this study is to determine the accuracy and generalizability of using automated methods for detecting three high-incidence and high-impact AEs from EHR data: a) hospital-acquired pneumonia, b) ventilator-associated event, and c) central line-associated bloodstream infection. This validation study will be conducted among medical, surgical and ICU patients admitted between 2013 and 2016 to the Centre hospitalier universitaire de Sherbrooke (CHUS) and the McGill University Health Centre (MUHC), which has both French and English sites. A random 60% sample of CHUS patients will be used for model development purposes (cohort 1, development set). Using a random sample of these patients, a reference standard assessment of their medical chart will be performed. Multivariate logistic regression and the area under the curve (AUC) will be employed to iteratively develop and optimize three automated AE detection models (i.e., one per AE of interest) using EHR data from the CHUS. These models will then be validated on a random sample of the remaining 40% of CHUS patients (cohort 1, internal validation set) using chart review to assess accuracy. The most accurate models developed and validated at the CHUS will then be applied to EHR data from a random sample of patients admitted to the MUHC French site (cohort 2) and English site (cohort 3), a critical requirement given the use of narrative data, and accuracy will be assessed using chart review. Generalizability will be determined by comparing AUCs from cohorts 2 and 3 to those from cohort 1. This study will likely produce more accurate and efficient measures of AEs. These measures could be used to assess the incidence rates of AEs, evaluate the success of preventive interventions, or benchmark performance across hospitals.

  13. Short-arc measurement and fitting based on the bidirectional prediction of observed data

    NASA Astrophysics Data System (ADS)

    Fei, Zhigen; Xu, Xiaojie; Georgiadis, Anthimos

    2016-02-01

    To measure a short arc is a notoriously difficult problem. In this study, the bidirectional prediction method based on the Radial Basis Function Neural Network (RBFNN), applied to the observed data distributed along a short arc, is proposed to increase the corresponding arc length and thus improve its fitting accuracy. Firstly, the rationality of regarding observed data as a time series is discussed in accordance with the definition of a time series. Secondly, the RBFNN is constructed to predict the observed data, where the interpolation method is used to enlarge the size of the training examples in order to improve the learning accuracy of the RBFNN's parameters. Finally, in the numerical simulation section, we focus on simulating how the size of the training sample and the noise level influence the learning error and prediction error of the built RBFNN. Typically, the observed data coming from a 5° short arc are used to evaluate the performance of the Hyper method, known as the 'unbiased fitting method of circle', with different noise levels before and after prediction. A number of simulation experiments reveal that the fitting stability and accuracy of the Hyper method after prediction are far superior to those before prediction.
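
    For concreteness, a simple algebraic (Kasa) least-squares circle fit, used here as a simpler stand-in for the Hyper fit evaluated in the paper, applied to a noisy 5° arc; the large scatter in the recovered radius is exactly the short-arc instability the prediction step is designed to reduce.

```python
import numpy as np

def fit_circle(x, y):
    """Kasa fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c in least squares;
    centre is (a, b), radius is sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    return a, b, np.sqrt(c + a**2 + b**2)

rng = np.random.default_rng(0)
theta = np.deg2rad(np.linspace(0.0, 5.0, 50))      # a 5-degree arc
radii = []
for _ in range(200):
    x = 100.0 * np.cos(theta) + rng.normal(0, 0.05, theta.size)
    y = 100.0 * np.sin(theta) + rng.normal(0, 0.05, theta.size)
    radii.append(fit_circle(x, y)[2])
print(np.mean(radii), np.std(radii))               # true radius is 100
```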

  14. Facial recognition using multisensor images based on localized kernel eigen spaces.

    PubMed

    Gundimada, Satyanadh; Asari, Vijayan K

    2009-06-01

    A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.

  15. Urinary Volatile Organic Compounds for the Detection of Prostate Cancer

    PubMed Central

    Khalid, Tanzeela; Aggio, Raphael; White, Paul; De Lacy Costello, Ben; Persad, Raj; Al-Kateb, Huda; Jones, Peter; Probert, Chris S.; Ratcliffe, Norman

    2015-01-01

    The aim of this work was to investigate volatile organic compounds (VOCs) emanating from urine samples to determine whether they can be used to classify samples into those from prostate cancer and non-cancer groups. Participants were men referred for a trans-rectal ultrasound-guided prostate biopsy because of an elevated prostate specific antigen (PSA) level or abnormal findings on digital rectal examination. Urine samples were collected from patients with prostate cancer (n = 59) and cancer-free controls (n = 43), on the day of their biopsy, prior to their procedure. VOCs from the headspace of basified urine samples were extracted using solid-phase micro-extraction and analysed by gas chromatography/mass spectrometry. Classifiers were developed using Random Forest (RF) and Linear Discriminant Analysis (LDA) classification techniques. PSA alone had an accuracy of 62–64% in these samples. A model based on 4 VOCs, 2,6-dimethyl-7-octen-2-ol, pentanal, 3-octanone, and 2-octanone, was marginally more accurate 63–65%. When combined, PSA level and these four VOCs had mean accuracies of 74% and 65%, using RF and LDA, respectively. With repeated double cross-validation, the mean accuracies fell to 71% and 65%, using RF and LDA, respectively. Results from VOC profiling of urine headspace are encouraging and suggest that there are other metabolomic avenues worth exploring which could help improve the stratification of men at risk of prostate cancer. This study also adds to our knowledge on the profile of compounds found in basified urine, from controls and cancer patients, which is useful information for future studies comparing the urine from patients with other disease states. PMID:26599280

  16. An improved CS-LSSVM algorithm-based fault pattern recognition of ship power equipments.

    PubMed

    Yang, Yifei; Tan, Minjia; Dai, Yuewei

    2017-01-01

    Fault-monitoring signals from ship power equipment usually provide few samples, and the data features are non-linear in practical situations. This paper adopts the least squares support vector machine (LSSVM) to deal with the problem of fault pattern identification in the case of small sample data. Meanwhile, in order to avoid the local extrema and poor convergence precision induced by optimizing the kernel function parameter and penalty factor of the LSSVM, an improved Cuckoo Search (CS) algorithm is proposed for parameter optimization. Based on a dynamic adaptive strategy, the newly proposed algorithm improves the recognition probability and the searching step length, which can effectively solve the problems of slow search speed and low calculation accuracy of the CS algorithm. A benchmark example demonstrates that the CS-LSSVM algorithm can accurately and effectively identify the fault pattern types of ship power equipment.

  17. IMEKO TC1-TC7 Symposium in London: The assurance as a result of blood chemical analysis by ISO-GUM and QE

    NASA Astrophysics Data System (ADS)

    Iwaki, Y.

    2010-07-01

    The Quality Assurance (QA) of a measurand has been discussed for many years within Quality Engineering (QE), and further discussion of the relevant ISO standards is needed. Improving measurement accuracy means finding and removing the root fault elements. Accuracy assurance requires investigation of the Reference Material (RM) used for calibration and improvement of the accuracy of data processing; this research addresses accuracy improvement in the data-processing stage. In many cases, two or more fault elements relevant to measurement accuracy are buried in the data. QE assumes the frequency with which each fault state occurs, ranks the fault factors from the top down with Failure Mode and Effects Analysis (FMEA), then investigates the root causes of the fault elements with Root Cause Analysis (RCA) and Fault Tree Analysis (FTA) and orders the elements assumed to generate specific faults. Assurance of measurement results has now become a duty in Proficiency Testing (PT). For QA, an ISO standard was established in 1993 as ISO-GUM (Guide to the Expression of Uncertainty in Measurement) [1]. The analysis method of ISO-GUM has shifted from Analysis of Variance (ANOVA) to Exploratory Data Analysis (EDA). EDA is computed step by step, according to the law of propagation of uncertainty, until the assured performance is obtained. When the true value is unknown, ISO-GUM substitutes a reference value; a reference value established by EDA is checked with the Key Comparison (KC) method, which compares the null hypothesis with the frequency hypothesis. Assurance under ISO-GUM proceeds in order from the standard uncertainties, through the combined uncertainty of the many fault elements, to the expanded uncertainty [2]; the assured value is authorized by multiplying the final expanded uncertainty by the coverage factor K. The K-value is calculated from the Effective Free Degree (EFD, the effective degrees of freedom), for which the number of samples is important; the degrees of freedom are based on the maximum-likelihood method of an improved information criterion (AIC) for Quality Control (QC). The assurance performance of ISO-GUM is decided by setting the confidence interval [3]. The result of the research on the Decision Level/Minimum Detectable Concentration (DL/MDC) was obtained through this operation. QE was developed for the QC of industry, where statistical values are processed by regression analysis under an assumed normal distribution; however, the occurrence probability of fault elements accompanying natural phenomena is in many cases non-normally distributed, and such distributions must obtain an assurance value by methods other than the type B statistical evaluation of ISO-GUM. Combining these methods with the improvement of workers through QE has become important for securing the reliability of measurement accuracy and safety. This research applied the approach to the results of Blood Chemical Analysis (BCA) in the field of clinical testing.
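
    A worked example of the expanded-uncertainty chain described above, following the GUM: combine standard uncertainties in quadrature, obtain the effective degrees of freedom from the Welch-Satterthwaite formula, and take the coverage factor K from Student's t-distribution. The input values are illustrative, not data from the blood-analysis study.

```python
import numpy as np
from scipy.stats import t

u = np.array([0.8, 0.5, 0.3])        # standard uncertainties of each fault element
nu = np.array([9, 14, 50])           # their degrees of freedom

u_c = np.sqrt(np.sum(u**2))          # combined standard uncertainty (quadrature)
nu_eff = u_c**4 / np.sum(u**4 / nu)  # Welch-Satterthwaite effective degrees of freedom
k = t.ppf(0.975, nu_eff)             # coverage factor for a ~95% interval
U = k * u_c                          # expanded uncertainty

print(f"u_c={u_c:.3f}, nu_eff={nu_eff:.1f}, k={k:.2f}, U={U:.3f}")
```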

  18. Accuracy of LightCycler(R) SeptiFast for the detection and identification of pathogens in the blood of patients with suspected sepsis: a systematic review protocol.

    PubMed

    Dark, Paul; Wilson, Claire; Blackwood, Bronagh; McAuley, Danny F; Perkins, Gavin D; McMullan, Ronan; Gates, Simon; Warhurst, Geoffrey

    2012-01-01

    Background: There is growing interest in the potential utility of molecular diagnostics in improving the detection of life-threatening infection (sepsis). LightCycler® SeptiFast is a multipathogen probe-based real-time PCR system targeting DNA sequences of bacteria and fungi present in blood samples within a few hours. We report here the protocol of the first systematic review of published clinical diagnostic accuracy studies of this technology when compared with blood culture in the setting of suspected sepsis. Methods/design: Data sources: the Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects (DARE), the Health Technology Assessment Database (HTA), the NHS Economic Evaluation Database (NHSEED), The Cochrane Library, MEDLINE, EMBASE, ISI Web of Science, BIOSIS Previews, MEDION and the Aggressive Research Intelligence Facility Database (ARIF). Study selection: diagnostic accuracy studies that compare the real-time PCR technology with standard culture results performed on a patient's blood sample during the management of sepsis. Data extraction: three reviewers, working independently, will determine the level of evidence, methodological quality and a standard data set relating to demographics and diagnostic accuracy metrics for each study. Statistical analysis/data synthesis: heterogeneity of studies will be investigated using a coupled forest plot of sensitivity and specificity and a scatter plot in Receiver Operator Characteristic (ROC) space. The bivariate model method will be used to estimate summary sensitivity and specificity. The authors will investigate reporting biases using funnel plots based on effective sample size and regression tests of asymmetry. Subgroup analyses are planned for adults, children and infection setting (hospital vs community) if sufficient data are uncovered. Dissemination: Recommendations will be made to the Department of Health (as part of an open-access HTA report) as to whether the real-time PCR technology has sufficient clinical diagnostic accuracy potential to move forward to efficacy testing during the provision of routine clinical care. Registration: PROSPERO-NIHR Prospective Register of Systematic Reviews (CRD42011001289).

  19. Improved Motor-Timing: Effects of Synchronized Metronome Training on Golf Shot Accuracy

    PubMed Central

    Sommer, Marius; Rönnqvist, Louise

    2009-01-01

    This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. Twenty-six experienced male golfers participated (mean age 27 years; mean golf handicap 12.6). Pre- and post-test investigations of golf shots made with three different clubs were conducted using a golf simulator. The golfers were randomized into two groups: a SMT group and a Control group. After the pre-test, the golfers in the SMT group completed a 4-week SMT program designed to improve their motor timing, while the golfers in the Control group merely continued their ordinary golf-swing training during the same period. No differences between the two groups were found in the pre-test outcomes, either for motor timing scores or for golf shot accuracy. However, the post-test results after the 4-week SMT program showed evident motor timing improvements. Additionally, significant improvements in golf shot accuracy, with less variability in performance, were found for the SMT group. No such improvements were found for the golfers in the Control group. As with previous studies that used a SMT program, this study's results provide further evidence that motor timing can be improved by SMT and that such timing improvement also improves golf accuracy. Key points This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. A randomized control-group design was used. The 4-week SMT intervention showed significant improvements in motor timing and golf shot accuracy and led to less variability. We conclude that this study's results provide further evidence that motor timing can be improved by SMT training and that such timing improvement also improves golf accuracy. PMID:24149608

  20. A portable blood plasma clot micro-elastometry device based on resonant acoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Krebs, C. R.; Li, Ling; Wolberg, Alisa S.; Oldenburg, Amy L.

    2015-07-01

    Abnormal blood clot stiffness is an important indicator of coagulation disorders arising from a variety of cardiovascular diseases and drug treatments. Here, we present a portable instrument for elastometry of microliter volume blood samples based upon the principle of resonant acoustic spectroscopy, where a sample of well-defined dimensions exhibits a fundamental longitudinal resonance mode proportional to the square root of the Young's modulus. In contrast to commercial thromboelastography, the resonant acoustic method offers improved repeatability and accuracy due to the high signal-to-noise ratio of the resonant vibration. We review the measurement principles and the design of a magnetically actuated microbead force transducer applying between 23 pN and 6.7 nN, providing a wide dynamic range of elastic moduli (3 Pa-27 kPa) appropriate for measurement of clot elastic modulus (CEM). An automated and portable device, the CEMport, is introduced and implemented using a 2 nm resolution displacement sensor with demonstrated accuracy and precision of 3% and 2%, respectively, of CEM in biogels. Importantly, the small strains (<0.13%) and low strain rates (<1/s) employed by the CEMport maintain a linear stress-to-strain relationship which provides a perturbative measurement of the Young's modulus. Measurements of blood plasma CEM versus heparin concentration show that CEMport is sensitive to heparin levels below 0.050 U/ml, which suggests future applications in sensing heparin levels of post-surgical cardiopulmonary bypass patients. The portability, high accuracy, and high precision of this device enable new clinical and animal studies for associating CEM with blood coagulation disorders, potentially leading to improved diagnostics and therapeutic monitoring.

  1. A portable blood plasma clot micro-elastometry device based on resonant acoustic spectroscopy.

    PubMed

    Krebs, C R; Li, Ling; Wolberg, Alisa S; Oldenburg, Amy L

    2015-07-01

    Abnormal blood clot stiffness is an important indicator of coagulation disorders arising from a variety of cardiovascular diseases and drug treatments. Here, we present a portable instrument for elastometry of microliter volume blood samples based upon the principle of resonant acoustic spectroscopy, where a sample of well-defined dimensions exhibits a fundamental longitudinal resonance mode proportional to the square root of the Young's modulus. In contrast to commercial thromboelastography, the resonant acoustic method offers improved repeatability and accuracy due to the high signal-to-noise ratio of the resonant vibration. We review the measurement principles and the design of a magnetically actuated microbead force transducer applying between 23 pN and 6.7 nN, providing a wide dynamic range of elastic moduli (3 Pa-27 kPa) appropriate for measurement of clot elastic modulus (CEM). An automated and portable device, the CEMport, is introduced and implemented using a 2 nm resolution displacement sensor with demonstrated accuracy and precision of 3% and 2%, respectively, of CEM in biogels. Importantly, the small strains (<0.13%) and low strain rates (<1/s) employed by the CEMport maintain a linear stress-to-strain relationship which provides a perturbative measurement of the Young's modulus. Measurements of blood plasma CEM versus heparin concentration show that CEMport is sensitive to heparin levels below 0.050 U/ml, which suggests future applications in sensing heparin levels of post-surgical cardiopulmonary bypass patients. The portability, high accuracy, and high precision of this device enable new clinical and animal studies for associating CEM with blood coagulation disorders, potentially leading to improved diagnostics and therapeutic monitoring.

  2. The Power Within: The Experimental Manipulation of Power Interacts with Trait BDD Symptoms to Predict Interoceptive Accuracy

    PubMed Central

    Kunstman, Jonathan W.; Clerkin, Elise M.; Palmer, Kateyln; Peters, M. Taylar; Dodd, Dorian R.; Smith, April R.

    2015-01-01

    Background and Objectives This study tested whether relatively low levels of interoceptive accuracy (IAcc) are associated with body dysmorphic disorder (BDD) symptoms. Additionally, given research indicating that power attunes individuals to their internal states, we sought to determine if state interoceptive accuracy could be improved through an experimental manipulation of power. Method Undergraduate women (N = 101) completed a baseline measure of interoceptive accuracy and then were randomized to a power or control condition. Participants were primed with power or a neutral control topic and then completed a post-manipulation measure of state IAcc. Trait BDD symptoms were assessed with a self-report measure. Results Controlling for baseline IAcc, within the control condition, there was a significant inverse relationship between trait BDD symptoms and interoceptive accuracy. Continuing to control for baseline IAcc, within the power condition, there was not a significant relationship between trait BDD symptoms and IAcc, suggesting that power may have attenuated this relationship. At high levels of BDD symptomology, there was also a significant simple effect of experimental condition, such that participants in the power (vs. control) condition had better interoceptive accuracy. These results provide initial evidence that power may positively impact interoceptive accuracy among those with high levels of BDD symptoms. Limitations This cross-sectional study utilized a demographically homogeneous sample of women that reflected a broad range of symptoms; thus, although there were a number of participants reporting elevated BDD symptoms, these findings might not generalize to other populations or clinical samples. Conclusions This study provides the first direct test of the relationship between trait BDD symptoms and IAcc, and provides preliminary evidence that among those with severe BDD symptoms, power may help connect individuals with their internal states. Future research testing the mechanisms linking BDD symptoms with IAcc, as well as how individuals can better connect with their internal experiences, is needed. PMID:26295932

  3. Comparison of baseline removal methods for laser-induced breakdown spectroscopy of geological samples

    NASA Astrophysics Data System (ADS)

    Dyar, M. Darby; Giguere, Stephen; Carey, CJ; Boucher, Thomas

    2016-12-01

    This project examines the causes, effects, and optimization of continuum removal in laser-induced breakdown spectroscopy (LIBS) to produce the best possible prediction accuracy of elemental composition in geological samples. We compare the prediction accuracy resulting from several different techniques for baseline removal, including asymmetric least squares (ALS), adaptive iteratively reweighted penalized least squares (airPLS), fully automatic baseline correction (FABC), continuous wavelet transformation, median filtering, polynomial fitting, the iterative thresholding Dietrich method, convex hull/rubber band techniques, and a newly developed technique for custom baseline removal (custom BLR). We assess the predictive performance of these methods using partial least-squares analysis for 13 elements of geological interest, expressed as the weight percentages of SiO2, Al2O3, TiO2, FeO, MgO, CaO, Na2O, K2O, and the parts per million concentrations of Ni, Cr, Zn, Mn, and Co. We find that previously published methods for baseline subtraction generally produce equivalent prediction accuracies for major elements. When those pre-existing methods are used, automated optimization of their adjustable parameters is always necessary to wring the best predictive accuracy out of a data set; ideally, it should be done for each individual variable. The new custom BLR technique produces significant improvements in prediction accuracy over existing methods across varying geological data sets, instruments, and analytical conditions. These results also demonstrate the dual objectives of the continuum removal problem: removing a smooth underlying signal to fit individual peaks (univariate analysis) versus selecting only those channels that contribute to the best prediction accuracy (multivariate analysis). Overall, the current practice of using generalized, one-method-fits-all-spectra baseline removal results in poorer predictive performance for all methods. The extra steps needed to optimize baseline removal for each predicted variable, and to supply multivariate techniques with the best possible input data, are shown to be well worth the slight increase in computation and complexity.
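
    Of the techniques compared above, asymmetric least squares is the easiest to sketch. The following is a standard minimal implementation of the Eilers-Boelens ALS idea; the parameter values are illustrative defaults, not the settings used in the study.

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import spsolve

        def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
            """Asymmetric least squares baseline: a smooth curve (penalty lam)
            fit beneath the spectrum by down-weighting points above it (weight p)."""
            L = y.size
            D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L - 2))
            w = np.ones(L)
            for _ in range(n_iter):
                W = sparse.spdiags(w, 0, L, L)
                z = spsolve(W + lam * D @ D.T, w * y)
                w = p * (y > z) + (1 - p) * (y < z)
            return z

        # Usage: continuum-removed spectrum for downstream PLS regression
        # corrected = spectrum - als_baseline(spectrum)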

  4. The power within: The experimental manipulation of power interacts with trait BDD symptoms to predict interoceptive accuracy.

    PubMed

    Kunstman, Jonathan W; Clerkin, Elise M; Palmer, Kateyln; Peters, M Taylar; Dodd, Dorian R; Smith, April R

    2016-03-01

    This study tested whether relatively low levels of interoceptive accuracy (IAcc) are associated with body dysmorphic disorder (BDD) symptoms. Additionally, given research indicating that power attunes individuals to their internal states, we sought to determine if state interoceptive accuracy could be improved through an experimental manipulation of power.. Undergraduate women (N = 101) completed a baseline measure of interoceptive accuracy and then were randomized to a power or control condition. Participants were primed with power or a neutral control topic and then completed a post-manipulation measure of state IAcc. Trait BDD symptoms were assessed with a self-report measure. Controlling for baseline IAcc, within the control condition, there was a significant inverse relationship between trait BDD symptoms and interoceptive accuracy. Continuing to control for baseline IAcc, within the power condition, there was not a significant relationship between trait BDD symptoms and IAcc, suggesting that power may have attenuated this relationship. At high levels of BDD symptomology, there was also a significant simple effect of experimental condition, such that participants in the power (vs. control) condition had better interoceptive accuracy. These results provide initial evidence that power may positively impact interoceptive accuracy among those with high levels of BDD symptoms.. This cross-sectional study utilized a demographically homogenous sample of women that reflected a broad range of symptoms; thus, although there were a number of participants reporting elevated BDD symptoms, these findings might not generalize to other populations or clinical samples. This study provides the first direct test of the relationship between trait BDD symptoms and IAcc, and provides preliminary evidence that among those with severe BDD symptoms, power may help connect individuals with their internal states. Future research testing the mechanisms linking BDD symptoms with IAcc, as well as how individuals can better connect with their internal experiences is needed.. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. High-resolution correlation

    NASA Astrophysics Data System (ADS)

    Nelson, D. J.

    2007-09-01

    In the basic correlation process, a sequence of time-lag-indexed correlation coefficients is computed as the inner (dot) product of segments of two signals. The time lag(s) for which the magnitude of the correlation coefficient sequence is maximized is the estimated relative time delay of the two signals. For discrete sampled signals, the delay estimated in this manner is quantized with the same relative accuracy as the clock used in sampling the signals. In addition, the correlation coefficients are real if the input signals are real. Many methods have been proposed, with some success, to estimate signal delay to better accuracy than the sample interval of the digitizer clock. These methods include interpolation of the correlation coefficients, estimation of the signal delay from the group delay function, and beam forming techniques such as the MUSIC algorithm. For spectral estimation, techniques based on phase differentiation have been popular, but these techniques have apparently not been applied to the correlation problem. We propose a phase-based delay estimation method (PBDEM), based on the phase of the correlation function, that provides a significant improvement in the accuracy of time delay estimation. In this process, the standard correlation function is first calculated. A time-lag error function is then calculated from the correlation phase and is used to interpolate the correlation function. The signal delay is shown to be accurately estimated as the zero crossing of the correlation phase near the index of the peak correlation magnitude. This process is nearly as fast as the conventional correlation function on which it is based. For real-valued signals, a simple modification is provided, which yields the same correlation accuracy as is obtained for complex-valued signals.
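
    A minimal sketch of the phase-based idea, assuming narrowband signals so that the correlation phase varies roughly linearly with lag near the peak; the function name and test signal are illustrative, not the paper's PBDEM code.

        import numpy as np
        from scipy.signal import hilbert

        def phase_delay(x, y):
            """Estimate the sub-sample delay of y relative to x from the zero
            crossing of the complex correlation phase near the magnitude peak."""
            R = np.correlate(hilbert(y), hilbert(x), mode="full")
            lags = np.arange(-len(x) + 1, len(y))
            phi = np.angle(R)
            k = int(np.argmax(np.abs(R)))
            j = k + 1 if phi[k] * phi[k + 1] < 0 else k - 1  # bracket the zero crossing
            return lags[k] + (lags[j] - lags[k]) * phi[k] / (phi[k] - phi[j])

        # Hypothetical test: a windowed 50 Hz tone delayed by 3.4 samples
        fs, d = 1000.0, 3.4
        t = np.arange(0, 1, 1 / fs)
        s = np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2)) * np.sin(2 * np.pi * 50 * t)
        y = np.interp(t - d / fs, t, s, left=0.0, right=0.0)
        print(phase_delay(s, y))  # approximately 3.4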

  6. A quick method based on SIMPLISMA-KPLS for simultaneously selecting outlier samples and informative samples for model standardization in near infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Li, Li-Na; Ma, Chang-Ming; Chang, Ming; Zhang, Ren-Cheng

    2017-12-01

    A novel method based on SIMPLe-to-use Interactive Self-modeling Mixture Analysis (SIMPLISMA) and Kernel Partial Least Squares (KPLS), named SIMPLISMA-KPLS, is proposed in this paper for the simultaneous selection of outlier samples and informative samples. It is a quick algorithm for model standardization (also known as model transfer) in near infrared (NIR) spectroscopy. NIR data from corn samples, analyzed for protein content, are used to evaluate the proposed method. Piecewise direct standardization (PDS) is employed for model transfer, and SIMPLISMA-PDS-KPLS is compared with KS-PDS-KPLS in terms of the prediction accuracy for protein content and the calculation speed of each algorithm. The results show that SIMPLISMA-KPLS can be used as an alternative sample selection method for model transfer. Although its accuracy is similar to that of Kennard-Stone (KS), it differs from KS in that it employs concentration information in the selection program. This ensures that analyte information is involved in the analysis and that the spectra (X) of the selected samples are correlated with the concentrations (y). It can also be used for simultaneous outlier elimination through validation of the calibration. The running-time statistics show that the sample selection process is faster when using KPLS. The quick SIMPLISMA-KPLS algorithm is beneficial for improving the speed of online measurement using NIR spectroscopy.
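
    For contrast, the Kennard-Stone baseline that SIMPLISMA-KPLS is compared against can be sketched in a few lines; this is a generic implementation, not the authors' code.

        import numpy as np

        def kennard_stone(X, k):
            """Kennard-Stone selection: start with the two most distant samples,
            then repeatedly add the sample farthest from the selected set."""
            X = np.asarray(X, dtype=float)
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            i, j = np.unravel_index(np.argmax(d), d.shape)
            selected = [int(i), int(j)]
            while len(selected) < k:
                rest = [m for m in range(len(X)) if m not in selected]
                nearest = d[np.ix_(rest, selected)].min(axis=1)
                selected.append(rest[int(np.argmax(nearest))])
            return selected

        # Usage: pick 20 calibration spectra from a (n_samples, n_channels) matrix
        # idx = kennard_stone(spectra, 20)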

  7. Improving Density Functional Tight Binding Predictions of Free Energy Surfaces for Slow Chemical Reactions in Solution

    NASA Astrophysics Data System (ADS)

    Kroonblawd, Matthew; Goldman, Nir

    2017-06-01

    First principles molecular dynamics using highly accurate density functional theory (DFT) is a common tool for predicting chemistry, but the accessible time and space scales are often orders of magnitude beyond the resolution of experiments. Semi-empirical methods such as density functional tight binding (DFTB) offer up to a thousand-fold reduction in required CPU hours and can approach experimental scales. However, standard DFTB parameter sets lack good transferability and calibration for a particular system is usually necessary. Force matching the pairwise repulsive energy term in DFTB to short DFT trajectories can improve the former's accuracy for reactions that are fast relative to DFT simulation times (<10 ps), but the effects on slow reactions and the free energy surface are not well-known. We present a force matching approach to improve the chemical accuracy of DFTB. Accelerated sampling techniques are combined with path collective variables to generate the reference DFT data set and validate fitted DFTB potentials. Accuracy of force-matched DFTB free energy surfaces is assessed for slow peptide-forming reactions by direct comparison to DFT for particular paths. Extensions to model prebiotic chemistry under shock conditions are discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  8. Note: A simple image processing based fiducial auto-alignment method for sample registration.

    PubMed

    Robertson, Wesley D; Porto, Lucas R; Ip, Candice J X; Nantel, Megan K T; Tellkamp, Friedjof; Lu, Yinfei; Miller, R J Dwayne

    2015-08-01

    A simple method for the location and auto-alignment of sample fiducials for sample registration using widely available MATLAB/LabVIEW software is demonstrated. The method is robust, easily implemented, and applicable to a wide variety of experiment types for improved reproducibility and increased setup speed. The software uses image processing to locate and measure the diameter and center point of circular fiducials for distance self-calibration and iterative alignment and can be used with most imaging systems. The method is demonstrated to be fast and reliable in locating and aligning sample fiducials, provided here by a nanofabricated array, with accuracy within the optical resolution of the imaging system. The software was further demonstrated to register, load, and sample the dynamically wetted array.

  9. Breath-based biomarkers for tuberculosis

    NASA Astrophysics Data System (ADS)

    Kolk, Arend H. J.; van Berkel, Joep J. B. N.; Claassens, Mareli M.; Walters, Elisabeth; Kuijper, Sjoukje; Dallinga, Jan W.; van Schooten, Fredrik-Jan

    2012-06-01

    We investigated the potential of breath analysis by gas chromatography - mass spectrometry (GC-MS) to discriminate between samples collected prospectively from patients with suspected tuberculosis (TB). Samples were obtained in a TB endemic setting in South Africa where 28% of the culture proven TB patients had a Ziehl-Neelsen (ZN) negative sputum smear. A training set of breath samples from 50 sputum culture proven TB patients and 50 culture negative non-TB patients was analyzed by GC-MS. A classification model with 7 compounds resulted in a training set with a sensitivity of 72%, specificity of 86% and accuracy of 79% compared with culture. The classification model was validated with an independent set of breath samples from 21 TB and 50 non-TB patients. A sensitivity of 62%, specificity of 84% and accuracy of 77% was found. We conclude that the 7 volatile organic compounds (VOCs) that discriminate breath samples from TB and non-TB patients in our study population are probably host-response related VOCs and are not derived from the VOCs secreted by M. tuberculosis. It is concluded that at present GC-MS breath analysis is able to differentiate between TB and non-TB breath samples even among patients with a negative ZN sputum smear but a positive culture for M. tuberculosis. Further research is required to improve the sensitivity and specificity before this method can be used in routine laboratories.

  10. Improvement of a wind-tunnel sampling system for odour and VOCs.

    PubMed

    Wang, X; Jiang, J; Kaye, R

    2001-01-01

    Wind-tunnel systems are widely used for collecting odour emission samples from surface area sources. Consequently, a portable wind-tunnel system was developed at the University of New South Wales that was easy to handle and suitable for sampling from liquid surfaces. Development work was undertaken to ensure even air-flows above the emitting surface and to optimise air velocities to simulate real situations. However, recovery efficiencies for emissions have not previously been studied for wind-tunnel systems. A series of experiments was carried out for determining and improving the recovery rate of the wind-tunnel sampling system by using carbon monoxide as a tracer gas. It was observed by mass balance that carbon monoxide recovery rates were initially only 37% to 48% from a simulated surface area emission source. It was therefore apparent that further development work was required to improve recovery efficiencies. By analysing the aerodynamic character of air movement and CO transportation inside the wind-tunnel, it was determined that the apparent poor recoveries resulted from uneven mixing at the sample collection point. A number of modifications were made for the mixing chamber of the wind-tunnel system. A special sampling chamber extension and a sampling manifold with optimally distributed sampling orifices were developed for the wind-tunnel sampling system. The simulation experiments were repeated with the new sampling system. Over a series of experiments, the recovery efficiency of sampling was improved to 83-100% with an average of 90%, where the CO tracer gas was introduced at a single point and 92-102% with an average of 97%, where the CO tracer gas was introduced along a line transverse to the sweep air. The stability and accuracy of the new system were determined statistically and are reported.

  11. The effects of instructions on mothers' ratings of child attention-deficit/hyperactivity disorder symptoms.

    PubMed

    Johnston, Charlotte; Weiss, Margaret; Murray, Candice; Miller, Natalie

    2011-11-01

    We examined whether instructional materials describing how to rate child ADHD symptoms would improve the accuracy of mothers' ratings of ADHD symptoms presented in standard child behavior stimuli, and whether instructions would be equally effective across a range of maternal depressive symptoms and family incomes. A community sample of 100 mothers with 5 to 12 year old sons were randomly assigned to either receive or not receive the instructions. All mothers watched standard video recordings of boys displaying nonproblem behavior, ADHD symptoms, ADHD plus oppositional behaviors, or ADHD plus anxious behaviors, and then rated the ADHD symptoms of the boys in the videos. These ratings were compared to ratings of the boys' ADHD symptoms made by objective coders. Results indicated an interaction such that the instructional materials improved the agreement between mothers' and coders' ratings, but only for mothers at lower family income levels. The instructional materials improved all mothers' open-ended responses regarding knowledge of ADHD. All mothers rated more ADHD symptoms in boys with comorbid oppositional or anxious behaviors, and this effect was not reduced by the instructional materials. The potential utility of these instructions to improve the accuracy of ratings of child ADHD symptoms is explored.

  12. Enhanced CT images by the wavelet transform improving diagnostic accuracy of chest nodules.

    PubMed

    Guo, Xiuhua; Liu, Xiangye; Wang, Huan; Liang, Zhigang; Wu, Wei; He, Qian; Li, Kuncheng; Wang, Wei

    2011-02-01

    The objective of this study was to compare diagnostic accuracy in the interpretation of chest nodules using original CT images versus CT images enhanced by an algorithm based on the wavelet transform. The CT images of 118 patients with cancers and 60 with benign nodules were used. All images were enhanced through the wavelet-based algorithm. Two experienced radiologists interpreted all the images in two reading sessions, separated by a minimum of 1 month to minimize the effect of observer recall. The Mann-Whitney U nonparametric test was used to compare interpretation results between the original and enhanced images, and the Kruskal-Wallis H nonparametric test for K independent samples was used to investigate factors that could affect the observers' diagnostic accuracy. The areas under the ROC curves for the original and enhanced images were 0.681 and 0.736, respectively. There was a significant difference in diagnosing malignant nodules between the original and enhanced images (z = 7.122, P < 0.001), whereas there was no significant difference in diagnosing benign nodules (z = 0.894, P = 0.371). There was also a significant difference between original and enhanced images when the nodules were larger than 2 cm (Z = -2.509, P = 0.012), indicating that nodule size is a critical factor in the observers' diagnostic accuracy. This study indicates that image enhancement based on the wavelet transform can improve the diagnostic accuracy of radiologists for malignant chest nodules.
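
    A generic sketch of wavelet-domain enhancement (amplify the detail coefficients, then reconstruct), using PyWavelets; the wavelet, level, and gain are illustrative assumptions, since the abstract does not specify the study's algorithm.

        import numpy as np
        import pywt

        def wavelet_enhance(img, wavelet="db4", level=2, gain=1.5):
            """Sharpen an image by amplifying its wavelet detail coefficients."""
            a = np.asarray(img, dtype=float)
            coeffs = pywt.wavedec2(a, wavelet, level=level)
            boosted = [coeffs[0]] + [tuple(gain * c for c in d) for d in coeffs[1:]]
            out = pywt.waverec2(boosted, wavelet)
            return out[: a.shape[0], : a.shape[1]]  # crop any reconstruction padding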

  13. A Facile Stable-Isotope Dilution Method for Determination of Sphingosine Phosphate Lyase Activity

    PubMed Central

    Suh, Jung H.; Eltanawy, Abeer; Rangan, Apoorva; Saba, Julie D.

    2015-01-01

    A new technique for quantifying sphingosine phosphate lyase activity in biological samples is described. In this procedure, 2-hydrazinoquinoline is used to convert (2E)-hexadecenal into the corresponding hydrazone derivative to improve ionization efficiency and selectivity of detection. Combined utilization of liquid chromatographic separation and multiple reaction monitoring-mass spectrometry allows for simultaneous quantification of the substrate S1P and product (2E)-hexadecenal. Incorporation of (2E)-d5-hexadecenal as an internal standard improves detection accuracy and precision. A simple one-step derivatization procedure eliminates the need for further extractions. Limits of quantification for (2E)-hexadecenal and sphingosine-1-phosphate are 100 and 50 fmol, respectively. The assay displays a wide dynamic detection range useful for detection of low basal sphingosine phosphate lyase activity in wild type cells, SPL-overexpressing cell lines, and wild type mouse tissues. Compared to current methods, the capacity for simultaneous detection of sphingosine-1-phosphate and (2E)-hexadecenal greatly improves the accuracy of results and shows excellent sensitivity and specificity for sphingosine phosphate lyase activity detection. PMID:26408264

  14. An adaptive clustering algorithm for image matching based on corner feature

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-04-01

    Traditional image matching algorithms cannot balance real-time performance and accuracy well; to solve this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method performs adaptive clustering on the matching point pairs based on the similarity of their vectors. Harris corner detection is carried out first to extract the feature points of the reference image and the perceived image, and the feature points of the two images are initially matched using the Normalized Cross Correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to reduce ineffective operations and improve matching speed and robustness. Finally, the Random Sample Consensus (RANSAC) algorithm is applied to the matching points after clustering. The experimental results show that the proposed algorithm effectively eliminates most wrong matching points while retaining the correct ones, improves the accuracy of RANSAC matching, and at the same time reduces the computational load of the whole matching process.
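
    The NCC score used for the initial corner matching is simple to state; a minimal sketch follows (patch extraction around the Harris corners is assumed to happen elsewhere).

        import numpy as np

        def ncc(a, b):
            """Normalized cross-correlation between two equally sized patches;
            returns a value in [-1, 1], with 1 for a perfect (linear) match."""
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0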

  15. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

    Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method has been developed to improve the accuracy of TOF determination. An improved variable-step-size adaptive algorithm with a comprehensive step-size control function is proposed. Meanwhile, a cubic spline fitting approach is employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis, and the results demonstrate the advantage of the proposed TOF determination method over existing ones. Compared with the conventional fixed-step-size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, making it more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates of various thicknesses. They indicate that the proposed TOF determination method is more robust even under low SNR conditions and that ultrasonic thickness measurement accuracy can be significantly improved.
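
    A sketch of the spline-refinement step, pairing plain cross-correlation with a cubic spline so the TOF estimate is not quantized to the sampling interval; this is a simplified stand-in for the paper's adaptive algorithm.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def tof_subsample(x, y, fs):
            """Cross-correlate two pulses, then refine the integer-lag peak with
            a cubic spline to estimate time of flight below the sample interval."""
            r = np.correlate(y, x, mode="full")
            lags = np.arange(-len(x) + 1, len(y))
            k = int(np.argmax(r))
            lo, hi = max(k - 3, 0), min(k + 4, len(r))
            cs = CubicSpline(lags[lo:hi], r[lo:hi])
            fine = np.linspace(lags[lo], lags[hi - 1], 2000)
            return float(fine[np.argmax(cs(fine))]) / fs  # seconds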

  16. Suitability of the echo-time-shift method as laboratory standard for thermal ultrasound dosimetry

    NASA Astrophysics Data System (ADS)

    Fuhrmann, Tina; Georg, Olga; Haller, Julian; Jenderka, Klaus-Vitold

    2017-03-01

    Ultrasound therapy is a promising, non-invasive application with the potential to significantly improve cancer therapies such as surgery, viro- or immunotherapy. The therapy needs faster, cheaper, and easier-to-handle quality assurance tools for therapy devices, as well as means of verifying treatment plans and performing dosimetry; the current lack of such tools limits the comparability and safety of treatments. Accurate spatial and temporal temperature maps could be used to overcome these shortcomings. In this contribution, first results of suitability and accuracy investigations of the echo-time-shift method for two-dimensional temperature mapping during and after sonication are presented. The analysis methods used to calculate time shifts were a discrete frame-to-frame and a discrete frame-to-base-frame algorithm, together with a sigmoid fit for temperature calculation. In the future, accuracy could be significantly enhanced by using continuous methods for time-shift calculation. Further improvements can be achieved by improving the filtering algorithms and the interpolation of the sampled diagnostic ultrasound data. The approach might be a comparatively accurate, fast, and affordable method for laboratory and clinical quality control.

  17. Evaluating the accuracy of SHAPE-directed RNA secondary structure predictions

    PubMed Central

    Sükösd, Zsuzsanna; Swenson, M. Shel; Kjems, Jørgen; Heitsch, Christine E.

    2013-01-01

    Recent advances in RNA structure determination include using data from high-throughput probing experiments to improve thermodynamic prediction accuracy. We evaluate the extent and nature of improvements in data-directed predictions for a diverse set of 16S/18S ribosomal sequences using a stochastic model of experimental SHAPE data. The average accuracy for 1000 data-directed predictions always improves over the original minimum free energy (MFE) structure. However, the amount of improvement varies with the sequence, exhibiting a correlation with MFE accuracy. Further analysis of this correlation shows that accurate MFE base pairs are typically preserved in a data-directed prediction, whereas inaccurate ones are not. Thus, the positive predictive value of common base pairs is consistently higher than the directed prediction accuracy. Finally, we confirm sequence dependencies in the directability of thermodynamic predictions and investigate the potential for greater accuracy improvements in the worst performing test sequence. PMID:23325843

  18. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
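
    The reason clustering erodes precision can be summarized by the textbook design-effect relation, stated here as general background rather than as a formula from the paper:

        \[
          \mathrm{DEFF} \;=\; 1 + (\bar{m}-1)\,\rho,
          \qquad
          n_{\mathrm{eff}} \;=\; \frac{n}{\mathrm{DEFF}},
        \]

    where \(\bar{m}\) is the average number of sample elements per cluster and \(\rho\) is the intracluster correlation of classification error. For example, with \(\bar{m}=10\) and \(\rho=0.1\), \(\mathrm{DEFF}=1.9\), so a clustered sample of \(n=1000\) carries roughly the precision of 526 independent elements.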

  19. The influence of temperature calibration on the OC-EC results from a dual-optics thermal carbon analyzer

    NASA Astrophysics Data System (ADS)

    Pavlovic, J.; Kinsey, J. S.; Hays, M. D.

    2014-09-01

    Thermal-optical analysis (TOA) is a widely used technique that fractionates carbonaceous aerosol particles into organic and elemental carbon (OC and EC), or carbonate. Thermal sub-fractions of evolved OC and EC are also used for source identification and apportionment; thus, oven temperature accuracy during TOA analysis is essential. Evidence now indicates that the "actual" sample (filter) temperature and the temperature measured by the built-in oven thermocouple (or set-point temperature) can differ by as much as 50 °C. This difference can affect the OC-EC split point selection and consequently the OC and EC fraction and sub-fraction concentrations being reported, depending on the sample composition and in-use TOA method and instrument. The present study systematically investigates the influence of an oven temperature calibration procedure for TOA. A dual-optical carbon analyzer that simultaneously measures transmission and reflectance (TOT and TOR) is used, functioning under the conditions of both the National Institute of Occupational Safety and Health Method 5040 (NIOSH) and Interagency Monitoring of Protected Visual Environment (IMPROVE) protocols. The application of the oven calibration procedure to our dual-optics instrument significantly changed NIOSH 5040 carbon fractions (OC and EC) and the IMPROVE OC fraction. In addition, the well-known OC-EC split difference between NIOSH and IMPROVE methods is even further perturbed following the instrument calibration. Further study is needed to determine if the widespread application of this oven temperature calibration procedure will indeed improve accuracy and our ability to compare among carbonaceous aerosol studies that use TOA.

  20. New approaches to the analysis of complex samples using fluorescence lifetime techniques and organized media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertz, P.R.

    Fluorescence spectroscopy is a highly sensitive and selective tool for the analysis of complex systems. In order to investigate the efficacy of several steady state and dynamic techniques for the analysis of complex systems, this work focuses on two types of complex, multicomponent samples: petrolatums and coal liquids. It is shown in these studies that dynamic, fluorescence lifetime-based measurements provide enhanced discrimination between complex petrolatum samples. Additionally, improved quantitative analysis of multicomponent systems is demonstrated via the incorporation of organized media in coal liquid samples. This research provides the first systematic studies of (1) multifrequency phase-resolved fluorescence spectroscopy for dynamic fluorescence spectral fingerprinting of complex samples, and (2) the incorporation of bile salt micellar media to improve accuracy and sensitivity for the characterization of complex systems. In the petroleum studies, phase-resolved fluorescence spectroscopy is used to combine spectral and lifetime information through the measurement of phase-resolved fluorescence intensity. The intensity is collected as a function of excitation and emission wavelengths, angular modulation frequency, and detector phase angle. This multidimensional information enhances the ability to distinguish between complex samples with similar spectral characteristics. Examination of the eigenvalues and eigenvectors from factor analysis of phase-resolved and steady state excitation-emission matrices, using chemometric methods of data analysis, confirms that phase-resolved fluorescence techniques offer improved discrimination between complex samples as compared with conventional steady state methods.

  1. Validation of a Sampling Method to Collect Exposure Data for Central-Line-Associated Bloodstream Infections.

    PubMed

    Hammami, Naïma; Mertens, Karl; Overholser, Rosanna; Goetghebeur, Els; Catry, Boudewijn; Lambert, Marie-Laurence

    2016-05-01

    Surveillance of central-line-associated bloodstream infections requires the labor-intensive counting of central-line days (CLDs). This workload could be reduced by sampling. Our objective was to evaluate the accuracy of various sampling strategies in the estimation of CLDs in intensive care units (ICUs) and to establish a set of rules to identify optimal sampling strategies depending on ICU characteristics. We analyzed existing data collected according to the European protocol for patient-based surveillance of ICU-acquired infections in Belgium between 2004 and 2012. CLD data were reported by 56 ICUs in 39 hospitals during 364 trimesters. We compared estimated CLD data obtained from weekly and monthly sampling schemes with the observed exhaustive CLD data over the trimester by assessing the CLD percentage error (i.e., [observed CLDs - estimated CLDs]/observed CLDs). We identified predictors of improved accuracy using linear mixed models. When sampling once per week or 3 times per month, 80% of ICU trimesters had a CLD percentage error within 10%. When sampling twice per week, this was >90% of ICU trimesters. Sampling on Tuesdays provided the best estimations. In the linear mixed model, the observed CLD count was the best predictor for a smaller percentage error. The following sampling strategies provided an estimate within 10% of the actual CLD for 97% of the ICU trimesters with 90% confidence: 3 times per month in an ICU with >650 CLDs per trimester or each Tuesday in an ICU with >480 CLDs per trimester. Sampling of CLDs provides an acceptable alternative to daily collection of CLD data.
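
    The sampling estimator itself is tiny: scale the mean CLD count over the sampled days up to the full period, then inspect the percentage error. A sketch with made-up numbers; the protocol's real inputs are patient-based surveillance counts.

        import numpy as np

        def estimate_clds(daily_clds, sample_days):
            """Estimate total central-line days for a period from counts taken
            only on sampled days: scale the sampled mean up to the full period."""
            daily = np.asarray(daily_clds, dtype=float)
            return daily[sample_days].mean() * len(daily)

        # Hypothetical 91-day trimester, sampling every Tuesday
        rng = np.random.default_rng(0)
        daily = rng.poisson(8, size=91)          # made-up daily CLD counts
        tuesdays = np.arange(1, 91, 7)
        est = estimate_clds(daily, tuesdays)
        pct_error = (daily.sum() - est) / daily.sum() * 100
        print(f"estimated={est:.0f}, observed={daily.sum()}, error={pct_error:.1f}%")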

  2. Assessing clinical reasoning (ASCLIRE): Instrument development and validation.

    PubMed

    Kunina-Habenicht, Olga; Hautz, Wolf E; Knigge, Michel; Spies, Claudia; Ahlers, Olaf

    2015-12-01

    Clinical reasoning is an essential competency in medical education. This study aimed at developing and validating a test to assess diagnostic accuracy, collected information, and diagnostic decision time in clinical reasoning. A norm-referenced computer-based test for the assessment of clinical reasoning (ASCLIRE) was developed, integrating the entire clinical decision process. In a cross-sectional study, participants were asked to choose as many diagnostic measures as they deemed necessary to diagnose the underlying disease of six different cases with acute or sub-acute dyspnea and provide a diagnosis. 283 students and 20 content experts participated. In addition to diagnostic accuracy, the respective decision time and number of relevant diagnostic measures used were documented as distinct performance indicators. The empirical structure of the test was investigated using a structural equation modeling approach. Experts showed higher accuracy rates and lower decision times than students. In a cross-sectional comparison, the diagnostic accuracy of students improved with the year of study. Wrong diagnoses provided by our sample were comparable to wrong diagnoses in practice. We found an excellent fit for a model with three latent factors (diagnostic accuracy, decision time, and choice of relevant diagnostic information), with diagnostic accuracy showing no significant correlation with decision time. ASCLIRE considers decision time an important performance indicator alongside diagnostic accuracy and provides evidence that clinical reasoning is a complex ability comprising diagnostic accuracy, decision time, and choice of relevant diagnostic information as three partly correlated but still distinct aspects.

  3. Improved microautoradiographic method to determine individual microorganisms active in substrate uptake in natural waters.

    PubMed

    Tabor, P S; Neihof, R A

    1982-10-01

    We report a method which combines epifluorescence microscopy and microautoradiography to determine both the total number of microorganisms in natural water populations and those individual organisms active in the uptake of specific substrates. After incubation with 3H-labeled substrate, the sample is filtered and, while still on the filter, mounted directly in a film of autoradiographic emulsion on a microscope slide. The microautoradiogram is processed and stained with acridine orange, and, subsequently, the filter is removed before microscopic observation. This novel preparation resulted in increased accuracy in direct counts made from the autoradiogram, improved sensitivity in the recognition of uptake-active (3H-labeled) organisms, and enumeration of a significantly greater number of labeled organisms compared with corresponding samples prepared by a previously reported method.

  4. Improved Microautoradiographic Method to Determine Individual Microorganisms Active in Substrate Uptake in Natural Waters

    PubMed Central

    Tabor, Paul S.; Neihof, Rex A.

    1982-01-01

    We report a method which combines epifluorescence microscopy and microautoradiography to determine both the total number of microorganisms in natural water populations and those individual organisms active in the uptake of specific substrates. After incubation with 3H-labeled substrate, the sample is filtered and, while still on the filter, mounted directly in a film of autoradiographic emulsion on a microscope slide. The microautoradiogram is processed and stained with acridine orange, and, subsequently, the filter is removed before microscopic observation. This novel preparation resulted in increased accuracy in direct counts made from the autoradiogram, improved sensitivity in the recognition of uptake-active (3H-labeled) organisms, and enumeration of a significantly greater number of labeled organisms compared with corresponding samples prepared by a previously reported method. PMID:16346120

  5. Drift correction of the dissolved signal in single particle ICPMS.

    PubMed

    Cornelis, Geert; Rauch, Sebastien

    2016-07-01

    A method is presented in which drift, the unwanted variation of the signal intensity over the course of a run, is compensated for based on estimation of the drift function by a moving average. It was shown using single particle ICPMS (spICPMS) measurements of 10 and 60 nm Au NPs that drift reduces the accuracy of spICPMS analysis at the calibration stage and during calculation of the particle size distribution (PSD), but that the present method can restore both the average signal intensity and the signal distribution of particle-containing samples skewed by drift. Moreover, deconvolution, a method that models the signal distributions of dissolved signals, fails in some cases when standards and samples are affected by drift, but the present method was shown to restore accuracy. Relatively high particle signals have to be removed prior to drift correction in this procedure, which was done using a 3 × sigma method; the signals are treated separately and added back afterwards. The method can also correct for flicker noise, which increases when the signal intensity increases because of drift. Accuracy was improved in many cases when flicker correction was used, and when accurate results were obtained despite drift, the correction procedures did not reduce accuracy. The procedure may be useful for extracting results from experimental runs that would otherwise have to be repeated. Graphical Abstract A method is presented in which a spICP-MS signal affected by drift (left) is corrected (right) by adjusting the local (moving) averages (green) and standard deviations (purple) to the respective values at a reference time (red). In combination with removal of particle events (blue) in the case of calibration standards, this method is shown to yield particle size distributions where that would otherwise be impossible, even when the deconvolution method is used to discriminate dissolved and particle signals.
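
    The core of the correction reads as a few lines: estimate the drift with a moving average and rescale the signal to a reference time. This is a sketch of the idea only; the published method also adjusts local standard deviations and removes particle events first.

        import numpy as np

        def correct_drift(signal, window=501, t_ref=0):
            """Rescale a time-resolved signal so its local moving average matches
            the moving average at a reference time; particle spikes should be
            removed (e.g., by a 3-sigma rule) before estimating the drift."""
            kernel = np.ones(window) / window
            local_mean = np.convolve(signal, kernel, mode="same")
            return signal * (local_mean[t_ref] / local_mean)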

  6. Method for improving instrument response

    DOEpatents

    Hahn, David W.; Hencken, Kenneth R.; Johnsen, Howard A.; Flower, William L.

    2000-01-01

    This invention pertains generally to a method for improving the accuracy of particle analysis under conditions of discrete particle loading, and particularly to a method for improving the signal-to-noise ratio and instrument response in laser spark spectroscopic analysis of particulate emissions. Under conditions of low particle density loading (particles/m3) resulting from low overall metal concentrations and/or large particle size, uniform sampling cannot be guaranteed. The present invention discloses a technique for separating laser sparks that arise from sample particles from those that do not; that is, a process for systematically "gating" the instrument responses arising from sampled particles apart from those which are not is disclosed as a solution to this problem. The disclosed approach is based on random sampling combined with a conditional analysis of each pulse. A threshold value is determined for the ratio of the intensity of a spectral line for a given element to a baseline region. If the threshold value is exceeded, the pulse is classified as a "hit", the data are collected, and an average spectrum is generated from the arithmetic average of the hits. The true metal concentration is determined from the averaged spectrum.
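
    The conditional-analysis step can be sketched directly from the claim language; the channel indices and threshold below are placeholders, not values from the patent.

        import numpy as np

        def average_hit_spectrum(spectra, line_idx, base_idx, threshold=2.0):
            """Keep only pulses whose element-line to baseline intensity ratio
            exceeds a threshold ('hits'), then average the hit spectra."""
            spectra = np.asarray(spectra, dtype=float)
            ratio = spectra[:, line_idx].mean(axis=1) / spectra[:, base_idx].mean(axis=1)
            hits = spectra[ratio > threshold]
            return hits.mean(axis=0) if len(hits) else None

        # spectra: (n_pulses, n_channels); line_idx/base_idx: lists of channel indices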

  7. Modification of kanamycin-esculin-azide agar to improve selectivity in the enumeration of fecal streptococci from water samples.

    PubMed

    Audicana, A; Perales, I; Borrego, J J

    1995-12-01

    Kanamycin-esculin-azide agar was modified by increasing the concentration of sodium azide to 0.4 g/liter and replacing kanamycin sulfate with 5 mg/liter of oxolinic acid. The modification, named oxolinic acid-esculin-azide (OAA) agar, was compared with Slanetz-Bartley and KF agars by using drinking water and seawater samples. The OAA agar showed higher specificity, selectivity, and recovery efficiencies than those obtained by using the other media. In addition, no confirmation of typical colonies was needed when OAA agar was used, which significantly shortens the time of sample processing and increases the accuracy of the method.

  8. Modification of kanamycin-esculin-azide agar to improve selectivity in the enumeration of fecal streptococci from water samples.

    PubMed Central

    Audicana, A; Perales, I; Borrego, J J

    1995-01-01

    Kanamycin-esculin-azide agar was modified by increasing the concentration of sodium azide to 0.4 g/liter and replacing kanamycin sulfate with 5 mg/liter of oxolinic acid. The modification, named oxolinic acid-esculin-azide (OAA) agar, was compared with Slanetz-Bartley and KF agars by using drinking water and seawater samples. The OAA agar showed higher specificity, selectivity, and recovery efficiencies than those obtained by using the other media. In addition, no confirmation of typical colonies was needed when OAA agar was used, which significantly shortens the time of sample processing and increases the accuracy of the method. PMID:8534085

  9. Rock images classification by using deep convolution neural network

    NASA Astrophysics Data System (ADS)

    Cheng, Guojian; Guo, Wenhui

    2017-08-01

    Granularity analysis is one of the most essential issues in rock identification under the microscope. To improve the efficiency and accuracy of traditional manual work, a convolutional neural network based method is proposed for granularity analysis of thin section images; it extracts features from image samples and builds a classifier to recognize the granularity of input image samples. 4800 samples from the Ordos basin are used for experiments in the HSV, YCbCr and RGB colour spaces, respectively. On the test dataset, the correct rate in the RGB colour space is 98.5%, and the rates in the HSV and YCbCr colour spaces are also credible. The results show that the convolutional neural network can classify the rock images with high reliability.

  10. Improving semi-automated segmentation by integrating learning with active sampling

    NASA Astrophysics Data System (ADS)

    Huo, Jing; Okada, Kazunori; Brown, Matthew

    2012-02-01

    Interactive segmentation algorithms such as GrowCut usually require quite a few user interactions to perform well, and have poor repeatability. In this study, we developed a novel technique to boost the performance of the interactive segmentation method GrowCut involving: 1) a novel "focused sampling" approach for supervised learning, as opposed to conventional random sampling; 2) boosting GrowCut using the machine learned results. We applied the proposed technique to the glioblastoma multiforme (GBM) brain tumor segmentation, and evaluated on a dataset of ten cases from a multiple center pharmaceutical drug trial. The results showed that the proposed system has the potential to reduce user interaction while maintaining similar segmentation accuracy.

  11. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given. A listing of the computer program written to implement these techniques is given. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is given. The results of matrices from the mapping effort of the San Juan National Forest is given. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is given. A proposed method for determining the reliability of change detection between two maps of the same area produced at different times is given.
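
    The discrete multivariate techniques referenced above start from the error (confusion) matrix; the two most common summary statistics are a line each. These are generic formulas with a made-up matrix, not data from the report.

        import numpy as np

        def overall_accuracy_and_kappa(cm):
            """Overall accuracy and Cohen's kappa from an error (confusion) matrix,
            the standard summary statistics in map accuracy assessment."""
            cm = np.asarray(cm, dtype=float)
            n = cm.sum()
            po = np.trace(cm) / n                          # observed agreement
            pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
            return po, (po - pe) / (1 - pe)

        # Hypothetical 3-class error matrix (rows: map, columns: reference)
        cm = [[48, 2, 0], [5, 40, 5], [1, 4, 45]]
        acc, kappa = overall_accuracy_and_kappa(cm)
        print(f"overall accuracy={acc:.3f}, kappa={kappa:.3f}")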

  12. Making High Accuracy Null Depth Measurements for the LBTI Exozodi Survey

    NASA Technical Reports Server (NTRS)

    Mennesson, Bertrand; Defrere, Denis; Nowak, Matthias; Hinz, Philip; Millan-Gabet, Rafael; Absil, Olivier; Bailey, Vanessa; Bryden, Geoffrey; Danchi, William C.; Kennedy, Grant M.; et al.

    2016-01-01

    The characterization of exozodiacal light emission is both important for the understanding of planetary systems evolution and for the preparation of future space missions aiming to characterize low mass planets in the habitable zone of nearby main sequence stars. The Large Binocular Telescope Interferometer (LBTI) exozodi survey aims at providing a ten-fold improvement over current state of the art, measuring dust emission levels down to a typical accuracy of 12 zodis per star, for a representative ensemble of 30+ high priority targets. Such measurements promise to yield a final accuracy of about 2 zodis on the median exozodi level of the targets sample. Reaching a 1 sigma measurement uncertainty of 12 zodis per star corresponds to measuring interferometric cancellation (null) levels, i.e., visibilities, at the few 100 ppm uncertainty level. We discuss here the challenges posed by making such high accuracy mid-infrared visibility measurements from the ground and present the methodology we developed for achieving current best levels of 500 ppm or so. We also discuss current limitations and plans for enhanced exozodi observations over the next few years at LBTI.

  13. Making High Accuracy Null Depth Measurements for the LBTI ExoZodi Survey

    NASA Technical Reports Server (NTRS)

    Mennesson, Bertrand; Defrere, Denis; Nowak, Matthew; Hinz, Philip; Millan-Gabet, Rafael; Absil, Olivier; Bailey, Vanessa; Bryden, Geoffrey; Danchi, William; Kennedy, Grant M.; et al.

    2016-01-01

    The characterization of exozodiacal light emission is both important for the understanding of planetary systems evolution and for the preparation of future space missions aiming to characterize low mass planets in the habitable zone of nearby main sequence stars. The Large Binocular Telescope Interferometer (LBTI) exozodi survey aims at providing a ten-fold improvement over current state of the art, measuring dust emission levels down to a typical accuracy of approximately 12 zodis per star, for a representative ensemble of approximately 30 high priority targets. Such measurements promise to yield a final accuracy of about 2 zodis on the median exozodi level of the targets sample. Reaching a 1 sigma measurement uncertainty of 12 zodis per star corresponds to measuring interferometric cancellation (null) levels, i.e., visibilities, at the few 100 ppm uncertainty level. We discuss here the challenges posed by making such high accuracy mid-infrared visibility measurements from the ground and present the methodology we developed for achieving current best levels of 500 ppm or so. We also discuss current limitations and plans for enhanced exozodi observations over the next few years at LBTI.

  14. Diagnostic accuracy of tuberculous lymphadenitis fine needle aspiration biopsy confirmed by PCR as gold standard

    NASA Astrophysics Data System (ADS)

    D Suryadi; Delyuzar; Soekimin

    2018-03-01

    Indonesia carries the second-largest tuberculosis (TB) burden in the world. Accelerating early diagnosis and correct treatment can improve TB control and reduce complications. The PCR test is a gold standard; however, it is quite expensive for routine diagnosis. Therefore, an accurate and cheaper diagnostic method such as fine needle aspiration biopsy is needed. The study aims to determine the accuracy of fine needle aspiration biopsy cytology in the diagnosis of tuberculous lymphadenitis. A cross-sectional analytic study was conducted on samples from patients suspected of tuberculous lymphadenitis. The fine needle aspiration biopsy (FNAB) test was performed and confirmed by the PCR test. The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of FNAB were calculated against the gold standard, yielding sensitivity of 92.50%, specificity of 96.49%, accuracy of 94.85%, positive predictive value of 94.87%, and negative predictive value of 94.83%. We conclude that fine needle aspiration biopsy can be recommended as a cheaper and accurate diagnostic test for tuberculous lymphadenitis.
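
    The reported rates follow mechanically from a 2x2 table; the counts below are hypothetical but chosen to reproduce the published percentages.

        def diagnostic_metrics(tp, fp, fn, tn):
            """Standard diagnostic accuracy metrics from a 2x2 table."""
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            ppv = tp / (tp + fp)
            npv = tn / (tn + fn)
            acc = (tp + tn) / (tp + fp + fn + tn)
            return sens, spec, ppv, npv, acc

        # Hypothetical counts consistent with the reported rates (not the study data)
        print(diagnostic_metrics(tp=37, fp=2, fn=3, tn=55))
        # -> sens 0.925, spec 0.9649, ppv 0.9487, npv 0.9483, acc 0.9485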

  15. Inverse Ising Inference Using All the Data

    NASA Astrophysics Data System (ADS)

    Aurell, Erik; Ekeberg, Magnus

    2012-03-01

    We show that a method based on logistic regression, using all the data, solves the inverse Ising problem far better than mean-field calculations relying only on sample pairwise correlation functions, while remaining computationally feasible for hundreds of nodes. The largest improvement in reconstruction occurs for strong interactions. Using two examples, a diluted Sherrington-Kirkpatrick model and a two-dimensional lattice, we also show that interaction topologies can be recovered from few samples with good accuracy and that the use of l1 regularization is beneficial in this process, pushing inference abilities further into low-temperature regimes.
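
    As a rough illustration of the idea (not the authors' code), the sketch below fits one l1-regularized logistic regression per spin: for spins in {-1, +1}, P(s_i = 1 | rest) = sigmoid(2(h_i + sum_j J_ij s_j)), so each coupling row is half the fitted coefficient vector. The random "samples" here are placeholders standing in for Monte Carlo draws from an actual Ising model.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_spins, n_samples = 20, 4000
    S = rng.choice([-1, 1], size=(n_samples, n_spins))  # placeholder Ising samples

    J = np.zeros((n_spins, n_spins))
    for i in range(n_spins):
        X = np.delete(S, i, axis=1)   # all spins except s_i as predictors
        y = (S[:, i] + 1) // 2        # map {-1,+1} -> {0,1} labels
        clf = LogisticRegression(penalty="l1", C=10.0, solver="liblinear").fit(X, y)
        J[i, np.arange(n_spins) != i] = clf.coef_[0] / 2.0  # beta_j = 2*J_ij

    J = (J + J.T) / 2.0  # average the two independent estimates of each coupling
    ```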

  16. Quasi interpolation with Voronoi splines.

    PubMed

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

    We present a quasi interpolation framework that attains the optimal approximation order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore, this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE
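
    To give a feel for what an FIR quasi interpolation prefilter does (the paper's actual Voronoi-spline filters on general lattices are more involved), here is the classic 1D Cartesian analogue for cubic B-splines: the three-tap filter [-1, 8, -1]/6 approximates the exact IIR inverse of the [1, 4, 1]/6 B-spline sampling filter and yields third-order quasi interpolation.

    ```python
    import numpy as np

    def quasi_interp_coeffs(samples):
        """Approximate cubic B-spline coefficients with a 3-tap FIR prefilter
        instead of the exact (IIR) inverse of the [1, 4, 1]/6 filter."""
        q = np.array([-1.0, 8.0, -1.0]) / 6.0
        padded = np.pad(samples, 1, mode="edge")   # simple boundary handling
        return np.convolve(padded, q, mode="valid")

    # The returned coefficients feed the usual B-spline reconstruction sum;
    # being FIR, the filter can run online, one sample at a time.
    ```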

  17. How Many Oral and Maxillofacial Surgeons Does It Take to Perform Virtual Orthognathic Surgical Planning?

    PubMed

    Borba, Alexandre Meireles; Haupt, Dustin; de Almeida Romualdo, Leiliane Teresinha; da Silva, André Luis Fernandes; da Graça Naclério-Homem, Maria; Miloro, Michael

    2016-09-01

    Virtual surgical planning (VSP) has become routine practice in orthognathic treatment planning; however, most surgeons do not perform the planning without technical assistance, nor do they routinely evaluate the accuracy of the postoperative outcomes. The purpose of the present study was to propose a reproducible method that would allow surgeons to have an improved understanding of VSP orthognathic planning and to compare the planned surgical movements with the results obtained. A retrospective cohort of bimaxillary orthognathic surgery cases was used to evaluate the variability between the predicted and obtained movements using craniofacial landmarks and McNamara 3-dimensional cephalometric analysis from computed tomography scans. The demographic data (age, gender, and skeletal deformity type) were gathered from the medical records. The data analysis included the level of variability from the predicted to obtained surgical movements as assessed by the mean and standard deviation. For the overall sample, statistical analysis was performed using the 1-sample t test. The statistical analysis between the Class II and III patient groups used an unpaired t test. The study sample consisted of 50 patients who had undergone bimaxillary orthognathic surgery. The overall evaluation of the mean values revealed a discrepancy between the predicted and obtained values of less than 2.0 ± 2.0 mm for all maxillary landmarks, although some mandibular landmarks were greater than this value. An evaluation of the influence of gender and deformity type on the accuracy of surgical movements did not demonstrate statistical significance for most landmarks (P > .05). The method provides a reproducible tool for surgeons who use orthognathic VSP to perform routine evaluation of the postoperative outcomes, permitting the identification of specific variables that could assist in improving the accuracy of surgical planning and execution. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
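
    For readers unfamiliar with the two tests named above, the sketch below shows how they would be applied to planned-minus-obtained discrepancies at a single landmark; the data and the Class II/III split are hypothetical, not taken from the study.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    diff = rng.normal(0.5, 2.0, size=50)        # hypothetical discrepancies (mm), n = 50
    class_ii, class_iii = diff[:20], diff[20:]  # hypothetical Class II / Class III split

    t1, p1 = stats.ttest_1samp(diff, popmean=0.0)  # overall: is the mean discrepancy 0 mm?
    t2, p2 = stats.ttest_ind(class_ii, class_iii)  # unpaired Class II vs. Class III
    print(f"one-sample p = {p1:.3f}; unpaired p = {p2:.3f}")
    ```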

  18. Microwave Resonator Measurements of Atmospheric Absorption Coefficients: A Preliminary Design Study

    NASA Technical Reports Server (NTRS)

    Walter, Steven J.; Spilker, Thomas R.

    1995-01-01

    A preliminary design study examined the feasibility of using microwave resonator measurements to improve the accuracy of atmospheric absorption coefficients and refractivity between 18 and 35 GHz. Increased accuracies would improve the capability of water vapor radiometers to correct for radio signal delays caused by Earth's atmosphere. Calibration of delays incurred by radio signals traversing the atmosphere has applications to both deep space tracking and planetary radio science experiments. Currently, the Cassini gravity wave search requires 0.8-1.0% absorption coefficient accuracy. This study examined current atmospheric absorption models and estimated that current model accuracy ranges from 5% to 7%. The refractivity of water vapor is known to 1% accuracy, while the refractivities of many dry gases (oxygen, nitrogen, etc.) are known to better than 0.1%. Improvements to the current generation of models will require that both the functional form and absolute absorption of the water vapor spectrum be calibrated and validated. Several laboratory techniques for measuring atmospheric absorption and refractivity were investigated, including absorption cells, single and multimode rectangular cavity resonators, and Fabry-Perot resonators. Semi-confocal Fabry-Perot resonators were shown to provide the most cost-effective and accurate method of measuring atmospheric gas refractivity. The need for accurate environmental measurement and control was also addressed. A preliminary design for the environmental control and measurement system was developed to aid in identifying significant design issues. The analysis indicated that overall measurement accuracy will be limited by measurement errors and imprecise control of the gas sample's thermodynamic state, thermal expansion and vibration-induced deformation of the resonator structure, and electronic measurement error. The central problem is to identify systematic errors, because random errors can be reduced by averaging. Calibrating the resonator measurements by checking the refractivity of dry gases, whose refractivities are known to better than 0.1%, provides a method of controlling the systematic errors to 0.1%. The primary source of error in absorptivity and refractivity measurements is thus the ability to measure the concentration of water vapor in the resonator path. Over the whole thermodynamic range of interest the accuracy of water vapor measurement is 1.5%. However, over the range responsible for most of the radio delay (i.e., conditions in the bottom two kilometers of the atmosphere) the accuracy of water vapor measurements ranges from 0.5% to 1.0%. Therefore the precision of the resonator measurements could be held to 0.3%, and the overall absolute accuracy of resonator-based absorption and refractivity measurements will range from 0.6% to 1.0%.
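
    One plausible reading of the closing error budget (an assumption on our part, since the abstract does not spell out the arithmetic) is that the water vapor accuracy and the resonator precision combine in quadrature:

    ```python
    import math

    resonator_precision = 0.3                 # %, assumed independent error term
    for water_vapor_accuracy in (0.5, 1.0):   # % range quoted for the lowest 2 km
        total = math.hypot(water_vapor_accuracy, resonator_precision)
        print(f"{total:.2f} %")               # ~0.58 % and ~1.04 %, i.e. 0.6 % to 1.0 %
    ```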

  19. Research on material removal accuracy analysis and correction of removal function during ion beam figuring

    NASA Astrophysics Data System (ADS)

    Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin

    2016-09-01

    Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors suppressing the improvement of material removal accuracy, we conclude that correcting the removal function deviation and reducing the amount of material removed during each iterative process could help to improve material removal accuracy. The removal-function correction principle can effectively compensate for the removal function deviation between the actual figuring process and simulation, while experiments indicate that material removal accuracy decreases with long machining times, so removing a small amount of material in each iterative process is suggested. However, more clamping and measuring steps will be introduced in this way, which will also generate machining errors and suppress the improvement of material removal accuracy. On this account, a free-measurement iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ100-mm Zerodur planar optic is performed, which shows that, in similar figuring time, three free-measurement iterative processes could improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.

  20. A peptide-retrieval strategy enables significant improvement of quantitative performance without compromising confidence of identification.

    PubMed

    Tu, Chengjian; Shen, Shichen; Sheng, Quanhu; Shyr, Yu; Qu, Jun

    2017-01-30

    Reliable quantification of low-abundance proteins in complex proteomes is challenging, largely owing to the limited number of spectra/peptides identified. In this study we developed a straightforward method to improve the quantitative accuracy and precision of proteins by strategically retrieving the less confident peptides that were previously filtered out using the standard target-decoy search strategy. The filtered-out MS/MS spectra matched to confidently-identified proteins were recovered, and the peptide-spectrum-match FDR was re-calculated and controlled at a confident level of FDR≤1%, while the protein FDR was maintained at ~1%. We evaluated the performance of this strategy in both spectral count- and ion current-based methods. Increases of >60% in total quantified spectra/peptides were achieved for a spike-in sample set and a public CPTAC dataset, respectively. Incorporating the peptide retrieval strategy significantly improved the quantitative accuracy and precision, especially for low-abundance proteins (e.g., one-hit proteins). Moreover, the capacity to confidently discover significantly-altered proteins was also enhanced substantially, as demonstrated with two spike-in datasets. In summary, improved quantitative performance was achieved by this peptide recovery strategy without compromising confidence of protein identification, and it can be readily implemented in a broad range of quantitative proteomics techniques including label-free or labeling approaches. We hypothesize that more quantifiable spectra and peptides in a protein, even including less confident peptides, could help reduce variations and improve protein quantification. Hence the peptide retrieval strategy was developed and evaluated in two spike-in sample sets with different LC-MS/MS variations using both MS1- and MS2-based quantitative approaches. The list of confidently identified proteins obtained with the standard target-decoy search strategy was fixed, and additional spectra/peptides with less confidence matched to those confident proteins were retrieved. However, the total peptide-spectrum-match false discovery rate (PSM FDR) after retrieval analysis was still controlled at a confident level of FDR≤1%. As expected, the penalty for occasionally incorporating incorrect peptide identifications is negligible by comparison with the improvements in quantitative performance. More quantifiable peptides, a lower missing-value rate, and better quantitative accuracy and precision were achieved for the same protein identifications by this simple strategy. This strategy is theoretically applicable to any quantitative approach in proteomics and thereby provides more quantitative information, especially on low-abundance proteins. Published by Elsevier B.V.
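
    The retrieval step hinges on the standard target-decoy FDR estimate, in which decoy matches approximate the number of false target matches above a score cutoff. A generic sketch of that computation (not the authors' pipeline):

    ```python
    import numpy as np

    def psm_score_cutoff(scores, is_decoy, fdr=0.01):
        """Lowest score cutoff at which the target-decoy FDR estimate
        (#decoys / #targets at or above the cutoff) stays <= fdr."""
        scores = np.asarray(scores, dtype=float)
        decoy = np.asarray(is_decoy, dtype=bool)
        order = np.argsort(scores)[::-1]          # best-scoring PSMs first
        n_decoy = np.cumsum(decoy[order])
        n_target = np.cumsum(~decoy[order])
        est_fdr = n_decoy / np.maximum(n_target, 1)
        ok = np.where(est_fdr <= fdr)[0]
        return scores[order[ok[-1]]] if ok.size else None
    ```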

  1. Design of a breath analysis system for diabetes screening and blood glucose level prediction.

    PubMed

    Yan, Ke; Zhang, David; Wu, Darong; Wei, Hua; Lu, Guangming

    2014-11-01

    It has been reported that concentrations of several biomarkers in diabetics' breath show significant difference from those in healthy people's breath. Concentrations of some biomarkers are also correlated with the blood glucose levels (BGLs) of diabetics. Therefore, it is possible to screen for diabetes and predict BGLs by analyzing one's breath. In this paper, we describe the design of a novel breath analysis system for this purpose. The system uses carefully selected chemical sensors to detect biomarkers in breath. Common interferential factors, including humidity and the ratio of alveolar air in breath, are compensated or handled in the algorithm. Considering the intersubject variance of the components in breath, we build subject-specific prediction models to improve the accuracy of BGL prediction. A total of 295 breath samples from healthy subjects and 279 samples from diabetic subjects were collected to evaluate the performance of the system. The sensitivity and specificity of diabetes screening are 91.51% and 90.77%, respectively. The mean relative absolute error for BGL prediction is 21.7%. Experiments show that the system is effective and that the strategies adopted in the system can improve its accuracy. The system potentially provides a noninvasive and convenient method for diabetes screening and BGL monitoring as an adjunct to the standard criteria.

  2. Improving image reconstruction of bioluminescence imaging using a priori information from ultrasound imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jayet, Baptiste; Ahmad, Junaid; Taylor, Shelley L.; Hill, Philip J.; Dehghani, Hamid; Morgan, Stephen P.

    2017-03-01

    Bioluminescence imaging (BLI) is a commonly used imaging modality in biology to study cancer in vivo in small animals. Images are generated using a camera to map the optical fluence emerging from the studied animal, and then a numerical reconstruction algorithm is used to locate the sources and estimate their sizes. However, due to the strong light scattering properties of biological tissues, the resolution is very limited (around a few millimetres), so obtaining accurate information about the pathology is complicated. We propose a combined ultrasound/optics approach to improve the accuracy of these techniques. In addition to the BLI data, an ultrasound probe driven by a scanner is used for two main objectives: first, to obtain a pure acoustic image, which provides structural information about the sample; and second, to alter the light emission by the bioluminescent sources embedded inside the sample, which is monitored using a high-speed optical detector (e.g., a photomultiplier tube). We will show that this last measurement, used in conjunction with the ultrasound data, can provide accurate localisation of the bioluminescent sources. This can be used as a priori information by the numerical reconstruction algorithm, greatly increasing the accuracy of the BLI image reconstruction compared to the image generated using only BLI data.

  3. Improved electron probe microanalysis of trace elements in quartz

    USGS Publications Warehouse

    Donovan, John J.; Lowers, Heather; Rusk, Brian G.

    2011-01-01

    Quartz occurs in a wide range of geologic environments throughout the Earth's crust. The concentration and distribution of trace elements in quartz provide information such as temperature and other physical conditions of formation. Trace element analyses with modern electron-probe microanalysis (EPMA) instruments can achieve 99% confidence detection of ~100 ppm with fairly minimal effort for many elements in samples of low to moderate average atomic number such as many common oxides and silicates. However, trace element measurements below 100 ppm in many materials are limited, not only by the precision of the background measurement, but also by the accuracy with which background levels are determined. A new "blank" correction algorithm has been developed and tested on both Cameca and JEOL instruments, which applies a quantitative correction to the emitted X-ray intensities during the iteration of the sample matrix correction based on a zero level (or known trace) abundance calibration standard. This iterated blank correction, when combined with improved background fit models, and an "aggregate" intensity calculation utilizing multiple spectrometer intensities in software for greater geometric efficiency, yields a detection limit of 2 to 3 ppm for Ti and 6 to 7 ppm for Al in quartz at 99% t-test confidence with similar levels for absolute accuracy.

  4. Use of x-ray fluorescence for in-situ detection of metals

    NASA Astrophysics Data System (ADS)

    Elam, W. T. E.; Whitlock, Robert R.; Gilfrich, John V.

    1995-01-01

    X-ray fluorescence (XRF) is a well-established, non-destructive method of determining elemental concentrations at ppm levels in complex samples. It can operate in atmosphere with no sample preparation, and provides accuracies of 1% or better under optimum conditions. This report addresses two sets of issues concerning the use of x-ray fluorescence as a sensor technology for the cone penetrometer, for shipboard waste disposal, or for other in-situ, real-time environmental applications. The first issue concerns the applicability of XRF to these applications, and includes investigation of detection limits and matrix effects. We have evaluated the detection limits and quantitative accuracy of a sensor mock-up for metals in soils under conditions expected in the field. In addition, several novel ways of improving the lower limits of detection to reach the drinking water regulatory limits have been explored. The second issue is the engineering involved with constructing a spectrometer within the 1.75 inch diameter of the penetrometer pipe, which is the most rigorous physical constraint. Only small improvements over current state-of-the-art are required. Additional advantages of XRF are that no radioactive sources or hazardous materials are used in the sensor design, and no reagents or any possible sources of ignition are involved.

  5. Massive metrology using fast e-beam technology improves OPC model accuracy by >2x at faster turnaround time

    NASA Astrophysics Data System (ADS)

    Zhao, Qian; Wang, Lei; Wang, Jazer; Wang, ChangAn; Shi, Hong-Fei; Guerrero, James; Feng, Mu; Zhang, Qiang; Liang, Jiao; Guo, Yunbo; Zhang, Chen; Wallow, Tom; Rio, David; Wang, Lester; Wang, Alvin; Wang, Jen-Shiang; Gronlund, Keith; Lang, Jun; Koh, Kar Kit; Zhang, Dong Qing; Zhang, Hongxin; Krishnamurthy, Subramanian; Fei, Ray; Lin, Chiawen; Fang, Wei; Wang, Fei

    2018-03-01

    Classical SEM metrology, CD-SEM, uses a low data rate and extensive frame averaging to achieve high-quality SEM imaging for high-precision metrology. The drawbacks include prolonged data collection time and larger photoresist shrinkage due to excess electron dosage. This paper will introduce a novel e-beam metrology system based on a high data rate, large probe current, and ultra-low noise electron optics design. At the same level of metrology precision, this high speed e-beam metrology system could significantly shorten data collection time and reduce electron dosage. In this work, the data collection speed is higher than 7,000 images per hour. Moreover, a novel large field of view (LFOV) capability at high resolution was enabled by an advanced electron deflection system design. The area coverage by LFOV is >100x larger than classical SEM. Superior metrology precision throughout the whole image has been achieved, and high quality metrology data could be extracted from the full field. This new metrology capability will further improve data collection speed to support the need for large volumes of metrology data for OPC model calibration of next-generation technology. The shrinking EPE (Edge Placement Error) budget places more stringent requirements on OPC model accuracy, which is increasingly limited by metrology errors. In the current practice of the metrology data collection and data processing to model calibration flow, CD-SEM throughput becomes a bottleneck that limits the amount of metrology measurements available for OPC model calibration, impacting pattern coverage and model accuracy, especially for 2D pattern prediction. To address the trade-off between metrology sampling and model accuracy under cycle-time constraints, this paper employs the high speed e-beam metrology system and a new computational software solution to take full advantage of the large volume of data and significantly reduce both systematic and random metrology errors. The new computational software enables users to generate large quantities of highly accurate EP (Edge Placement) gauges and significantly improve design pattern coverage, with up to 5X gain in model prediction accuracy on complex 2D patterns. Overall, this work showed >2x improvement in OPC model accuracy at a faster model turn-around time.

  6. Measurement of 3D refractive index distribution by optical diffraction tomography

    NASA Astrophysics Data System (ADS)

    Chi, Weining; Wang, Dayong; Wang, Yunxin; Zhao, Jie; Rong, Lu; Yuan, Yuanyuan

    2018-01-01

    Optical Diffraction Tomography (ODT), as a novel 3D imaging technique, can obtain a 3D refractive index (RI) distribution that reveals important optical properties of transparent samples. Following the theory of ODT, an optical diffraction tomography setup is built based on the Mach-Zehnder interferometer. The propagation direction of the object beam is controlled by a 2D translation stage, and 121 holograms at different illumination angles are recorded by a charge-coupled device (CCD). To prove the validity and accuracy of the ODT, the 3D RI profile of a microsphere with a known RI is first measured. An iterative constraint algorithm is employed to improve the imaging accuracy effectively. The 3D morphology and average RI of the microsphere are consistent with the actual values, and the RI error is less than 0.0033. Then, a laser-fabricated optical element with a non-uniform RI is taken as the sample, and its 3D RI profile is obtained by the optical diffraction tomography system.

  7. Active relearning for robust supervised classification of pulmonary emphysema

    NASA Astrophysics Data System (ADS)

    Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Radiologists are adept at recognizing the appearance of lung parenchymal abnormalities in CT scans. However, the inconsistent differential diagnosis, due to subjective aggregation, mandates supervised classification. Towards optimizing emphysema classification, we introduce a physician-in-the-loop feedback approach in order to minimize uncertainty in the selected training samples. Using multi-view inductive learning with the training samples, an ensemble of Support Vector Machine (SVM) models, each based on a specific pair-wise dissimilarity metric, was constructed in less than six seconds. In the active relearning phase, conflicts between the ensemble and expert labels were resolved by an expert. This just-in-time feedback with unoptimized SVMs yielded a 15% increase in classification accuracy and a 25% reduction in the number of support vectors. The generality of relearning was assessed in the optimized parameter space of six different classifiers across seven dissimilarity metrics, where the resultant average accuracy improvement was 21%. The co-operative feedback method proposed here could enhance both diagnostic and staging throughput efficiency in chest radiology practice.

  8. A double sealing technique for increasing the precision of headspace-gas chromatographic analysis.

    PubMed

    Xie, Wei-Qi; Yu, Kong-Xian; Gong, Yi-Xian

    2018-01-19

    This paper investigates a new double sealing technique for increasing the precision of the headspace gas chromatographic method. The air leakage caused by the high pressure in the headspace vial during the headspace sampling process has a great impact on the measurement precision of conventional headspace analysis (i.e., the single sealing technique). The results (using ethanol solution as the model sample) show that the present technique is effective in minimizing this problem. The double sealing technique has excellent measurement precision (RSD < 0.15%) and accuracy (recovery = 99.1%-100.6%) for ethanol quantification. The detection precision of the present method was 10-20 times higher than that of earlier HS-GC work using the conventional single sealing technique. The present double sealing technique may open up a new avenue, and also serve as a general strategy, for improving the performance (i.e., accuracy and precision) of headspace analysis of various volatile compounds. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Photoacoustic-based sO2 estimation through excised bovine prostate tissue with interstitial light delivery.

    PubMed

    Mitcham, Trevor; Taghavi, Houra; Long, James; Wood, Cayla; Fuentes, David; Stefan, Wolfgang; Ward, John; Bouchard, Richard

    2017-09-01

    Photoacoustic (PA) imaging is capable of probing blood oxygen saturation (sO2), which has been shown to correlate with tissue hypoxia, a promising cancer biomarker. However, wavelength-dependent local fluence changes can compromise sO2 estimation accuracy in tissue. This work investigates using PA imaging with interstitial irradiation and local fluence correction to assess the precision and accuracy of sO2 estimation of blood samples through ex vivo bovine prostate tissue ranging from 14% to 100% sO2. Study results for bovine blood samples at distances up to 20 mm from the irradiation source show that local fluence correction improved average sO2 estimation error from 16.8% to 3.2% and maintained an average precision of 2.3% when compared to matched CO-oximeter sO2 measurements. This work demonstrates the potential for future clinical translation of using fluence-corrected and interstitially driven PA imaging to accurately and precisely assess sO2 at depth in tissue with high resolution.

  10. The efficacy of protoporphyrin as a predictive biomarker for lead exposure in canvasback ducks: effect of sample storage time

    USGS Publications Warehouse

    Franson, J.C.; Hohman, W.L.; Moore, J.L.; Smith, M.R.

    1996-01-01

    We used 363 blood samples collected from wild canvasback ducks (Aythya valisineria) at Catahoula Lake, Louisiana, U.S.A. to evaluate the effect of sample storage time on the efficacy of erythrocytic protoporphyrin as an indicator of lead exposure. The protoporphyrin concentration of each sample was determined by hematofluorometry within 5 min of blood collection and after refrigeration at 4 °C for 24 and 48 h. All samples were analyzed for lead by atomic absorption spectrophotometry. Based on a blood lead concentration of ≥0.2 ppm wet weight as positive evidence for lead exposure, the protoporphyrin technique resulted in overall error rates of 29%, 20%, and 19% and false negative error rates of 47%, 29% and 25% when hematofluorometric determinations were made on blood at 5 min, 24 h, and 48 h, respectively. False positive error rates were less than 10% for all three measurement times. The accuracy of the 24-h erythrocytic protoporphyrin classification of blood samples as positive or negative for lead exposure was significantly greater than the 5-min classification, but no improvement in accuracy was gained when samples were tested at 48 h. The false negative errors were probably due, at least in part, to the lag time between lead exposure and the increase of blood protoporphyrin concentrations. False negatives resulted in an underestimation of the true number of canvasbacks exposed to lead, indicating that hematofluorometry provides a conservative estimate of lead exposure.

  11. Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations

    NASA Astrophysics Data System (ADS)

    Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong

    2017-08-01

    Orientation statistics are prone to bias when surveyed with the scanline mapping technique, in which the observed probabilities differ depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistics that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but the estimation accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation, in conjunction with a quantitative evaluation that uses fracture simulations and verification against natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the particular grid size producing maximum accuracy is identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that the 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5°⁻², then maximum accuracy occurs at a grid size of 1° × 1°.
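
    For context, the Terzaghi correction weights each fracture observed along a scanline by the reciprocal of the cosine of the angle between the fracture normal and the scanline, with a cap inside the blind zone. A minimal sketch of the weighting (the paper's gridded implementation and parameter choices are not reproduced here):

    ```python
    import numpy as np

    def terzaghi_weights(alpha_deg, blind_zone_deg=20.0):
        """Weight each scanline observation by 1/cos(alpha), where alpha is
        the angle between the fracture normal and the scanline; angles inside
        the blind zone (alpha near 90 deg) are capped to keep weights finite."""
        capped = np.minimum(np.asarray(alpha_deg, dtype=float),
                            90.0 - blind_zone_deg)
        return 1.0 / np.cos(np.radians(capped))

    print(terzaghi_weights([0.0, 45.0, 60.0, 85.0]))  # -> [1.00, 1.41, 2.00, 2.92]
    ```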

  12. Real-time estimation of BDS/GPS high-rate satellite clock offsets using sequential least squares

    NASA Astrophysics Data System (ADS)

    Fu, Wenju; Yang, Yuanxi; Zhang, Qin; Huang, Guanwen

    2018-07-01

    The real-time precise satellite clock product is one of the key prerequisites for real-time Precise Point Positioning (PPP). The accuracy of the 24-hour predicted satellite clock product, with a 15-min sampling interval and a 6-h update cycle, provided by the International GNSS Service (IGS) is only 3 ns, which cannot meet the needs of all real-time PPP applications. The real-time estimation of high-rate satellite clock offsets is an efficient method for improving the accuracy. In this paper, a sequential least squares method for estimating real-time satellite clock offsets at a high sample rate is proposed; it improves computational speed by applying an optimized sparse matrix operation to compute the normal equations and by using special measures to take full advantage of modern computing power. The method is first applied to the BeiDou Navigation Satellite System (BDS) and provides real-time estimation with a 1 s sample rate. The results show that the amount of time taken to process a single epoch is about 0.12 s using 28 stations. The Standard Deviation (STD) and Root Mean Square (RMS) of the real-time estimated BDS satellite clock offsets are 0.17 ns and 0.44 ns, respectively, when compared with German Research Center for Geosciences (GFZ) final clock products. The positioning performance of the real-time estimated satellite clock offsets is evaluated. The RMSs of the real-time BDS kinematic PPP in the east, north, and vertical components are 7.6 cm, 6.4 cm and 19.6 cm, respectively. The method is also applied to the Global Positioning System (GPS) with a 10 s sample rate, and the computational time of most epochs is less than 1.5 s with 75 stations. The STD and RMS of the real-time estimated GPS satellite clocks are 0.11 ns and 0.27 ns, respectively. Accuracies of 5.6 cm, 2.6 cm and 7.9 cm in the east, north, and vertical components are achieved for the real-time GPS kinematic PPP.
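
    The core of sequential least squares is cheap epoch-by-epoch accumulation of the normal equations; the speed in practice comes from exploiting the sparsity of each epoch's design matrix. A generic dense sketch (the paper's optimized sparse implementation is not reproduced):

    ```python
    import numpy as np

    class SequentialLS:
        """Accumulate normal equations epoch by epoch: N += A^T A, b += A^T y.
        In a real clock filter A is sparse (few parameters observed per epoch),
        which keeps each update cheap enough for 1 s processing."""
        def __init__(self, n_params):
            self.N = np.zeros((n_params, n_params))
            self.b = np.zeros(n_params)

        def update(self, A, y):
            self.N += A.T @ A
            self.b += A.T @ y

        def solve(self):
            # Call once enough epochs have made N well-conditioned.
            return np.linalg.solve(self.N, self.b)
    ```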

  13. Microfluidic-Based sample chips for radioactive solutions

    DOE PAGES

    Tripp, J. L.; Law, J. D.; Smith, T. E.; ...

    2015-01-01

    Historical nuclear fuel cycle process sampling techniques required sample volumes ranging in the tens of milliliters. The radiation levels experienced by analytical personnel and equipment, in addition to the waste volumes generated from analysis of these samples, have been significant. These sample volumes also impacted accountability inventories of required analytes during process operations. To mitigate radiation dose and other issues associated with the historically larger sample volumes, a microcapillary sample chip was chosen for further investigation. The ability to obtain microliter volume samples coupled with a remote automated means of sample loading, tracking, and transporting to the analytical instrument would greatly improve analytical efficiency while reducing both personnel exposure and radioactive waste volumes. Sample chip testing was completed to determine the accuracy, repeatability, and issues associated with the use of microfluidic sample chips used to supply µL sample volumes of lanthanide analytes dissolved in nitric acid for introduction to an analytical instrument for elemental analysis.

  14. Superresolution microscope image reconstruction by spatiotemporal object decomposition and association: application in resolving t-tubule structure in skeletal muscle

    PubMed Central

    Sun, Mingzhai; Huang, Jiaqing; Bunyak, Filiz; Gumpper, Kristyn; De, Gejing; Sermersheim, Matthew; Liu, George; Lin, Pei-Hui; Palaniappan, Kannappan; Ma, Jianjie

    2014-01-01

    One key factor that limits the resolution of single-molecule superresolution microscopy is the localization accuracy of the activated emitters, which is usually degraded by two factors: background noise due to out-of-focus signals, sample auto-fluorescence, and camera acquisition noise; and the low photon count of emitters in a single frame. With fast acquisition rates, the activated emitters can persist for multiple frames before they transiently switch off or permanently bleach. Effectively incorporating the temporal information of these emitters is critical to improving the spatial resolution. However, the majority of existing reconstruction algorithms locate the emitters frame by frame, discarding or underusing the temporal information. Here we present a new image reconstruction algorithm based on tracklets, short trajectories of the same objects. We improve the localization accuracy by associating the same emitters from multiple frames to form tracklets and by aggregating signals to enhance the signal-to-noise ratio. We also introduce a weighted mean-shift algorithm (WMS) to automatically detect the number of modes (emitters) in overlapping regions of tracklets, so that not only well-separated single emitters but also individual emitters within multi-emitter groups can be identified and tracked. In combination with a maximum likelihood estimator (MLE) method, we are able to resolve low to medium densities of overlapping emitters with improved localization accuracy. We evaluate the performance of our method with both synthetic and experimental data, and show that the tracklet-based reconstruction is superior in localization accuracy, particularly for weak signals embedded in a strong background. Using this method, for the first time, we resolve the transverse tubule structure of mammalian skeletal muscle. PMID:24921337
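
    To illustrate the mode-seeking step, the sketch below runs a generic weighted mean-shift iteration (not the authors' WMS implementation): a starting point is shifted toward the weighted kernel mean of nearby 2D localizations, with weights standing in for, e.g., photon counts.

    ```python
    import numpy as np

    def weighted_mean_shift(points, weights, start, bandwidth=50.0, iters=30):
        """Seek one mode over 2D localizations (n x 2) with per-point weights."""
        x = np.asarray(start, dtype=float)
        for _ in range(iters):
            d2 = np.sum((points - x) ** 2, axis=1)
            k = weights * np.exp(-d2 / (2.0 * bandwidth ** 2))  # Gaussian kernel
            x_new = (k[:, None] * points).sum(axis=0) / k.sum()
            if np.linalg.norm(x_new - x) < 1e-3:   # converged
                break
            x = x_new
        return x
    ```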

  15. An Improved P-Phase Arrival Picking Method S/L-K-A with an Application to the Yongshaba Mine in China

    NASA Astrophysics Data System (ADS)

    Shang, Xueyi; Li, Xibing; Morales-Esteban, A.; Dong, Longjun

    2018-02-01

    Automatic microseismic P-phase arrival picking is paramount for microseismic event identification, event location and source mechanism analysis. The commonly used STA/LTA picker, PAI-K picker, AIC picker and three proposed pickers have been applied to determine the P-phase arrivals of 580 microseismic signals (the sampling frequency is 6000 Hz). These have been obtained from the Institute of Mine Seismology (IMS) acquisition system of the Yongshaba mine in China. The six above-mentioned pickers have then been compared in terms of picking accuracy, behaviour on typical waveforms, adaptability to different signal-to-noise ratios (SNRs) and quantitative evaluation. The results have shown that: (1) the triggered STA/LTA picker has good picking stability but low picking accuracy, while the PAI-K and the AIC pickers have higher picking accuracy but poorer picking stability compared with the triggered STA/LTA picker; moreover, the AIC picker usually gives a better picking result than the PAI-K picker; (2) the S/L-K-A picker significantly improves on the STA/LTA, the PAI-K and the S/L + PAI-K pickers, and it markedly improves picking on signals for which the AIC and S/L + AIC pickers show large picking errors (> 30 ms); (3) the proportions of S/L-K-A picks with errors within 10, 20 and 30 ms reach 92.76%, 95.86% and 97.41%, respectively. The S/L-K-A picker enhances the picking adaptability to different waveforms and SNRs. In conclusion, the S/L-K-A picker provides a new method for automatic microseismic P-phase arrival picking with high accuracy and good stability.
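
    For orientation, the STA/LTA characteristic function underlying several of these pickers is simple to compute: the ratio of a short-term average to a long-term average of signal energy, with a trigger declared where the ratio crosses a threshold. A minimal sketch (window lengths are illustrative, not the paper's settings):

    ```python
    import numpy as np

    def sta_lta(trace, fs=6000, sta_win=0.05, lta_win=0.5):
        """Classic STA/LTA ratio on the squared trace, via cumulative sums.
        Both averages use trailing windows ending at the same sample."""
        e = np.asarray(trace, dtype=float) ** 2
        c = np.concatenate(([0.0], np.cumsum(e)))
        n_s, n_l = int(sta_win * fs), int(lta_win * fs)
        t = np.arange(n_l, len(e) + 1)    # indices where both windows fit
        sta = (c[t] - c[t - n_s]) / n_s
        lta = (c[t] - c[t - n_l]) / n_l
        return sta / np.maximum(lta, 1e-12)

    # A trigger is declared where the ratio first exceeds a threshold, e.g. 3-5.
    ```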

  16. Effects of field plot size on prediction accuracy of aboveground biomass in airborne laser scanning-assisted inventories in tropical rain forests of Tanzania.

    PubMed

    Mauya, Ernest William; Hansen, Endre Hofstad; Gobakken, Terje; Bollandsås, Ole Martin; Malimbwi, Rogers Ernest; Næsset, Erik

    2015-12-01

    Airborne laser scanning (ALS) has recently emerged as a promising tool to acquire auxiliary information for improving aboveground biomass (AGB) estimation in sample-based forest inventories. Under design-based and model-assisted inferential frameworks, the estimation relies on a model that relates the auxiliary ALS metrics to AGB estimated on ground plots. The size of the field plots has been identified as one source of model uncertainty because of the so-called boundary effects, which increase with decreasing plot size. Recent research in tropical forests has aimed to quantify the boundary effects on model prediction accuracy, but evidence of the consequences for the final AGB estimates is lacking. In this study we analyzed the effect of field plot size on model prediction accuracy and its implications when used in a model-assisted inferential framework. The results showed that the prediction accuracy of the model improved as the plot size increased. The adjusted R² increased from 0.35 to 0.74, while the relative root mean square error decreased from 63.6% to 29.2%. Indicators of boundary effects were identified and confirmed to have significant effects on the model residuals. Variance estimates of model-assisted mean AGB, relative to corresponding variance estimates of pure field-based AGB, decreased with increasing plot size in the range from 200 to 3000 m². The variance ratio of field-based estimates relative to model-assisted variance ranged from 1.7 to 7.7. This study showed that the relative improvement in precision of AGB estimation with increasing field-plot size was greater for an ALS-assisted inventory than for a pure field-based inventory.

  17. Superresolution microscope image reconstruction by spatiotemporal object decomposition and association: application in resolving t-tubule structure in skeletal muscle.

    PubMed

    Sun, Mingzhai; Huang, Jiaqing; Bunyak, Filiz; Gumpper, Kristyn; De, Gejing; Sermersheim, Matthew; Liu, George; Lin, Pei-Hui; Palaniappan, Kannappan; Ma, Jianjie

    2014-05-19

    One key factor that limits the resolution of single-molecule superresolution microscopy is the localization accuracy of the activated emitters, which is usually degraded by two factors: background noise due to out-of-focus signals, sample auto-fluorescence, and camera acquisition noise; and the low photon count of emitters in a single frame. With fast acquisition rates, the activated emitters can persist for multiple frames before they transiently switch off or permanently bleach. Effectively incorporating the temporal information of these emitters is critical to improving the spatial resolution. However, the majority of existing reconstruction algorithms locate the emitters frame by frame, discarding or underusing the temporal information. Here we present a new image reconstruction algorithm based on tracklets, short trajectories of the same objects. We improve the localization accuracy by associating the same emitters from multiple frames to form tracklets and by aggregating signals to enhance the signal-to-noise ratio. We also introduce a weighted mean-shift algorithm (WMS) to automatically detect the number of modes (emitters) in overlapping regions of tracklets, so that not only well-separated single emitters but also individual emitters within multi-emitter groups can be identified and tracked. In combination with a maximum likelihood estimator (MLE) method, we are able to resolve low to medium densities of overlapping emitters with improved localization accuracy. We evaluate the performance of our method with both synthetic and experimental data, and show that the tracklet-based reconstruction is superior in localization accuracy, particularly for weak signals embedded in a strong background. Using this method, for the first time, we resolve the transverse tubule structure of mammalian skeletal muscle.

  18. Classifying coastal resources by integrating optical and radar imagery and color infrared photography

    USGS Publications Warehouse

    Ramsey, Elijah W.; Nelson, Gene A.; Sapkota, Sijan

    1998-01-01

    A progressive classification of a marsh and forest system using Landsat Thematic Mapper (TM), color infrared (CIR) photography, and ERS-1 synthetic aperture radar (SAR) data improved classification accuracy when compared to classification using solely TM reflective band data. The classification resulted in a detailed identification of differences within a nearly monotypic black needlerush marsh. Accuracy percentages of these classes were surprisingly high given the complexities of classification. The detailed classification resulted in a more accurate portrayal of the marsh transgressive sequence than was obtainable with TM data alone. Each sensor's contribution to the improved classification was compared to that using only the six reflective TM bands. Individually, the green reflective CIR and SAR data identified broad categories of water, marsh, and forest. In combination with TM, SAR and the green CIR band improved overall accuracy by about 3% and 15%, respectively. The SAR data improved the TM classification accuracy mostly in the marsh classes. The green CIR data also improved the marsh classification accuracy and accuracies in some water classes. The final combination of all sensor data improved almost all class accuracies, by 2% to 70%, with an overall improvement of about 20% over TM data alone. Not only was the identification of vegetation types improved, but the spatial detail of the classification approached 10 m in some areas.

  19. Quantitative bioimaging by LA-ICP-MS: a methodological study on the distribution of Pt and Ru in viscera originating from cisplatin- and KP1339-treated mice.

    PubMed

    Egger, Alexander E; Theiner, Sarah; Kornauth, Christoph; Heffeter, Petra; Berger, Walter; Keppler, Bernhard K; Hartinger, Christian G

    2014-09-01

    Laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) was used to study the spatially resolved distribution of ruthenium and platinum in viscera (liver, kidney, spleen, and muscle) originating from mice treated with the investigational ruthenium-based antitumor compound KP1339 or cisplatin, a potent but nephrotoxic clinically approved platinum-based anticancer drug. Method development was based on homogenized Ru- and Pt-containing samples (22.0 and 0.257 μg g⁻¹, respectively). Averaging yielded satisfactory precision and accuracy for both concentrations (3-15% and 93-120%, respectively); however, when considering only single data points, the highly concentrated Ru sample maintained satisfactory precision and accuracy, while the low-concentration Pt sample yielded low recoveries and precision, which could not be improved by use of internal standards (¹¹⁵In, ¹⁸⁵Re or ¹³C). Matrix-matched standards were used for quantification in LA-ICP-MS, which yielded comparable metal distributions, i.e., enrichment in the cortex of the kidney in comparison with the medulla, a homogeneous distribution in the liver and the muscle, and areas of enrichment in the spleen. Elemental distributions were assigned to histological structures exceeding 100 μm in size. The accuracy of a quantitative LA-ICP-MS imaging experiment was validated by an independent method using microwave-assisted digestion (MW) followed by direct infusion ICP-MS analysis.

  20. The influence of different referencing methods on the accuracy of δ¹³C value measurement of ethanol fuel by gas chromatography/combustion/isotope ratio mass spectrometry.

    PubMed

    Neves, Laura A; Rodrigues, Janaína M; Daroda, Romeu J; Silva, Paulo R M; Ferreira, Alexandre A; Aranda, Donato A G; Eberlin, Marcos N; Fasciotti, Maíra

    2015-11-15

    Brazil is the largest producer of sugar cane bioethanol in the world. Isotope ratio mass spectrometry (IRMS) is the technique of choice to certify the origin/raw materials of ethanol production, but the lack of certified reference materials (CRMs) for accurate measurements of δ¹³C values traceable to Vienna Pee Dee Belemnite (VPDB), the international zero point for ¹³C/¹²C measurements, that are certified and compatible with gas chromatography (GC)/IRMS instruments may compromise the accuracy of δ¹³C determinations. We evaluated the influence of methods for the calibration and normalization of raw δ¹³C values of ethanol samples. Samples were analyzed by GC/C/IRMS using two different GC columns. Different substances were used as isotopic standards for the working gas calibration. The δ¹³C values obtained with the three methods of normalization were statistically compared with those obtained with an elemental analyzer (EA)/IRMS, since the δ¹³C results obtained using EA are traceable to VPDB via the NBS 22 reference material. It was observed that both the isotopic reference material used for CO2 calibration and the GC column have a major effect on the δ¹³C measurements, leading to a bias of almost 2-3‰ in the δ¹³C values. All three methods of normalization were equivalent in performance, enabling an improvement in GC/C/IRMS accuracy compared with the EA/IRMS reference values for the samples. All the methods of CO2 calibration, chromatography and normalization presented in this work demonstrated several sources of traceability and accuracy loss in the determination of δ¹³C values in ethanol fuel samples by GC/C/IRMS. This work has also shown the importance of using proper CRMs traceable to VPDB that are compatible with and certified using GC/C/IRMS, ideally over a wide range of δ¹³C values. This is important not only for bioethanol fuel samples, but also for many analytes commonly analyzed by IRMS. Copyright © 2015 John Wiley & Sons, Ltd.
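
    The abstract does not spell out the normalization formulas; the standard two-point (stretch-and-shift) normalization to the VPDB scale, presumably among the methods compared, has the form:

    ```latex
    \delta^{13}\mathrm{C}_{\text{sample}}^{\text{norm}}
      = \delta^{13}\mathrm{C}_{\mathrm{RM1}}^{\text{true}}
      + \left(\delta^{13}\mathrm{C}_{\text{sample}}^{\text{meas}}
            - \delta^{13}\mathrm{C}_{\mathrm{RM1}}^{\text{meas}}\right)
        \frac{\delta^{13}\mathrm{C}_{\mathrm{RM1}}^{\text{true}}
            - \delta^{13}\mathrm{C}_{\mathrm{RM2}}^{\text{true}}}
             {\delta^{13}\mathrm{C}_{\mathrm{RM1}}^{\text{meas}}
            - \delta^{13}\mathrm{C}_{\mathrm{RM2}}^{\text{meas}}}
    ```

    where RM1 and RM2 are two reference materials with known δ¹³C values, ideally bracketing the samples.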

  1. [Spatial interpolation of soil organic matter using regression Kriging and geographically weighted regression Kriging].

    PubMed

    Yang, Shun-hua; Zhang, Hai-tao; Guo, Long; Ren, Yan

    2015-06-01

    Relative elevation and stream power index were selected as auxiliary variables based on correlation analysis for mapping soil organic matter. Geographically weighted regression Kriging (GWRK) and regression Kriging (RK) were used for spatial interpolation of soil organic matter and compared with ordinary Kriging (OK), which acts as a control. The results indicated that soil organic matter was significantly positively correlated with relative elevation, whilst it had a significantly negative correlation with stream power index. Semivariance analysis showed that both soil organic matter content and its residuals (including the ordinary least squares regression residual and the GWR residual) had strong spatial autocorrelation. Interpolation accuracies of the different methods were estimated based on a data set of 98 validation samples. Results showed that the mean error (ME), mean absolute error (MAE) and root mean square error (RMSE) of RK were respectively 39.2%, 17.7% and 20.6% lower than the corresponding values of OK, with a relative improvement (RI) of 20.63. GWRK showed a similar tendency, with its ME, MAE and RMSE respectively 60.6%, 23.7% and 27.6% lower than those of OK, with a RI of 59.79. Therefore, both RK and GWRK significantly improved the accuracy of OK interpolation of soil organic matter due to their incorporation of auxiliary variables. In addition, GWRK performed obviously better than RK in this study, and its improved performance should be attributed to the consideration of sample spatial locations.
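
    For reference (this is the standard textbook definition, not anything specific to this study), the regression kriging predictor combines a regression on the auxiliary variables with ordinary kriging of the regression residuals; GWRK replaces the global coefficients with location-dependent ones:

    ```latex
    \hat{z}_{\mathrm{RK}}(s_0)
      = \mathbf{x}(s_0)^{\mathsf{T}}\hat{\boldsymbol{\beta}}
      + \sum_{i=1}^{n} \lambda_i \, e(s_i),
    \qquad
    e(s_i) = z(s_i) - \mathbf{x}(s_i)^{\mathsf{T}}\hat{\boldsymbol{\beta}}
    ```

    where x(s0) holds the auxiliary variables (here relative elevation and stream power index), β̂ the fitted regression coefficients, and λ_i the kriging weights for the residuals e(s_i).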

  2. Uniform Sampling Table Method and its Applications II--Evaluating the Uniform Sampling by Experiment.

    PubMed

    Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei

    2015-01-01

    A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of the sampling site distribution, and accuracy and precision of the measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes using four sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in a row and column was infinity, the relative accuracy was 99.50-99.89%, the reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method is easy to operate, and the selected samples are distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility and can be put into use in drug analysis.

  3. Modification and Validation of Conceptual Design Aerodynamic Prediction Method HASC95 With VTXCHN

    NASA Technical Reports Server (NTRS)

    Albright, Alan E.; Dixon, Charles J.; Hegedus, Martin C.

    1996-01-01

    A conceptual/preliminary design level subsonic aerodynamic prediction code HASC (High Angle of Attack Stability and Control) has been improved in several areas, validated, and documented. The improved code includes improved methodologies for increased accuracy and robustness, and simplified input/output files. An engineering method called VTXCHN (Vortex Chine) for predicting nose vortex shedding from circular and non-circular forebodies with sharp chine edges has been improved and integrated into the HASC code. This report contains a summary of modifications, a description of the code, a user's guide, and a validation of HASC. Appendices include a discussion of a new HASC utility code, listings of sample input and output files, and a discussion of the application of HASC to buffet analysis.

  4. A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor.

    PubMed

    Zhang, Liang; Shen, Peiyi; Zhu, Guangming; Wei, Wei; Song, Houbing

    2015-08-14

    Internet of Things (IoT) is driving innovation in an ever-growing set of application domains such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. Simultaneous Localization and Mapping with an RGB-D Kinect camera sensor on the robot, called RGB-D SLAM, has been developed for this purpose, but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvement methods as follows. Firstly, the Oriented FAST and Rotated BRIEF (ORB) method is used in feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature matching. Then, an improved RANdom SAmple Consensus (RANSAC) estimation method is adopted for the motion transformation. In the meantime, high-precision Generalized Iterative Closest Point (GICP) is utilized to register point clouds in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the g2o framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained from Freiburg University's datasets. The Dr Robot X80 equipped with a Kinect camera was also deployed in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. These experiments show that the proposed algorithm achieves higher processing speed and better accuracy.
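
    The front end of such a pipeline (ORB detection, FLANN matching with an LSH index suited to binary descriptors, a ratio test, and RANSAC outlier rejection) can be sketched with OpenCV as below. The file names are placeholders, and a real RGB-D system would go on to estimate a 3D rigid-body transform from depth data rather than a 2D homography:

    ```python
    import cv2
    import numpy as np

    img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # placeholder file names
    img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # FLANN with an LSH index, the variant appropriate for binary ORB descriptors
    flann = cv2.FlannBasedMatcher(
        dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),
        dict(checks=50))
    pairs = flann.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]  # Lowe ratio test

    # RANSAC rejects remaining mismatches (needs at least 4 correspondences)
    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    ```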

  5. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    NASA Astrophysics Data System (ADS)

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
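
    A toy version of the submodel routing and blending logic might look like the sketch below; the data are random placeholders, and the component count, submodel ranges, and blend weights are illustrative stand-ins for the ChemCam calibration details.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 300))        # placeholder spectra (samples x channels)
    y = rng.uniform(0.0, 100.0, size=400)  # placeholder SiO2 wt.%

    ranges = {"full": (0, 100), "low": (0, 50), "mid": (30, 70), "high": (60, 100)}
    models = {name: PLSRegression(n_components=8).fit(X[(y >= lo) & (y <= hi)],
                                                      y[(y >= lo) & (y <= hi)])
              for name, (lo, hi) in ranges.items()}

    def predict(x):
        """Route on the full-range prediction; blend linearly in the overlaps."""
        x = np.atleast_2d(x)
        p = {name: m.predict(x)[0, 0] for name, m in models.items()}
        ref = p["full"]
        if ref < 30:
            return p["low"]
        if ref < 50:                      # low/mid overlap (30-50 wt.%)
            w = (ref - 30) / 20
            return (1 - w) * p["low"] + w * p["mid"]
        if ref <= 60:
            return p["mid"]
        if ref <= 70:                     # mid/high overlap (60-70 wt.%)
            w = (ref - 60) / 10
            return (1 - w) * p["mid"] + w * p["high"]
        return p["high"]
    ```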

  6. The accuracy of selected land use and land cover maps at scales of 1:250,000 and 1:100,000

    USGS Publications Warehouse

    Fitzpatrick-Lins, Katherine

    1980-01-01

    Land use and land cover maps produced by the U.S. Geological Survey are found to meet or exceed the established standard of accuracy. When analyzed using a point sampling technique and binomial probability theory, several maps, illustrative of those produced for different parts of the country, were found to meet or exceed accuracies of 85 percent. Those maps tested were Tampa, Fla., Portland, Me., Charleston, W. Va., and Greeley, Colo., published at a scale of 1:250,000, and Atlanta, Ga., and Seattle and Tacoma, Wash., published at a scale of 1:100,000. For each map, the values were determined by calculating the ratio of the total number of points correctly interpreted to the total number of points sampled. Six of the seven maps tested have accuracies of 85 percent or better at the 95-percent lower confidence limit. When the sample data for predominant categories (those sampled with a significant number of points) were grouped together for all maps, accuracies of those predominant categories met the 85-percent accuracy criterion, with one exception. One category, Residential, had less than 85-percent accuracy at the 95-percent lower confidence limit. Nearly all residential land sampled was mapped correctly, but some areas of other land uses were mapped incorrectly as Residential.
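
    The lower confidence limits can be reproduced with an exact (Clopper-Pearson) binomial bound, one standard choice (the report's exact procedure may differ). With hypothetical counts of 92 correct out of 100 sampled points:

    ```python
    from scipy.stats import beta

    def lower_conf_limit(correct, n, conf=0.95):
        """One-sided Clopper-Pearson lower bound on map accuracy."""
        if correct == 0:
            return 0.0
        return beta.ppf(1 - conf, correct, n - correct + 1)

    # e.g. 92 of 100 sampled points interpreted correctly:
    print(lower_conf_limit(92, 100))   # ~0.86, so the 85% criterion is met
    ```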

  7. Efficient differentially private learning improves drug sensitivity prediction.

    PubMed

    Honkela, Antti; Das, Mrinal; Nieminen, Arttu; Dikmen, Onur; Kaski, Samuel

    2018-02-06

    Users of a personalised recommendation system face a dilemma: recommendations can be improved by learning from data, but only if other users are willing to share their private information. Good personalised predictions are vitally important in precision medicine, but genomic information on which the predictions are based is also particularly sensitive, as it directly identifies the patients and hence cannot easily be anonymised. Differential privacy has emerged as a potentially promising solution: privacy is considered sufficient if the presence of individual patients cannot be distinguished. However, differentially private learning with current methods does not improve predictions with feasible data sizes and dimensionalities. We show that useful predictors can be learned under powerful differential privacy guarantees, and even from moderately sized data sets, by demonstrating significant improvements in the accuracy of private drug sensitivity prediction with a new robust private regression method. Our method matches the predictive accuracy of the state-of-the-art non-private lasso regression using only 4x more samples under relatively strong differential privacy guarantees. Good performance with limited data is achieved by limiting the sharing of private information: decreasing the dimensionality and projecting outliers to fit tighter bounds means less noise needs to be added for equal privacy. The proposed differentially private regression method combines theoretical appeal and asymptotic efficiency with good prediction accuracy even with moderate-sized data. Since even this simple-to-implement method shows promise on challenging genomic data, we anticipate rapid progress towards practical applications in many fields. This article was reviewed by Zoltan Gaspari and David Kreil.
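
    As a rough illustration of two ingredients the abstract highlights, projecting outliers to tighter bounds and adding noise calibrated to those bounds, here is a generic sufficient-statistics perturbation sketch for private linear regression. It is not the authors' method: the noise calibration is deliberately simplified, and the function name and parameters are invented for illustration.

    ```python
    import numpy as np

    def dp_linear_regression(X, y, epsilon, bound_x=1.0, bound_y=1.0, lam=1.0):
        """Sketch of differentially private linear regression via perturbed
        sufficient statistics. Projecting records to tight bounds limits each
        one's contribution to X^T X and X^T y, so less noise is needed for
        the same privacy level. The Laplace scales below are a simplified
        calibration for illustration, not a rigorous privacy proof."""
        n, d = X.shape
        # Project outliers: clip row norms of X and the range of y.
        norms = np.maximum(np.linalg.norm(X, axis=1) / bound_x, 1.0)
        Xc = X / norms[:, None]
        yc = np.clip(y, -bound_y, bound_y)
        # Perturb the sufficient statistics, splitting the budget between them.
        noise_xx = np.random.laplace(scale=2 * bound_x**2 / epsilon, size=(d, d))
        noise_xx = np.triu(noise_xx) + np.triu(noise_xx, 1).T  # keep symmetric
        noise_xy = np.random.laplace(scale=2 * bound_x * bound_y / epsilon, size=d)
        # Solve the noisy, ridge-regularised normal equations.
        A = Xc.T @ Xc + noise_xx + lam * np.eye(d)
        b = Xc.T @ yc + noise_xy
        return np.linalg.solve(A, b)
    ```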

  8. The availability of prior ECGs improves paramedic accuracy in recognizing ST-segment elevation myocardial infarction.

    PubMed

    O'Donnell, Daniel; Mancera, Mike; Savory, Eric; Christopher, Shawn; Schaffer, Jason; Roumpf, Steve

    2015-01-01

    Early and accurate identification of ST-elevation myocardial infarction (STEMI) by prehospital providers has been shown to significantly improve door-to-balloon times and improve patient outcomes. Previous studies have shown that paramedic accuracy in reading 12-lead ECGs can range from 86% to 94%. However, recent studies have demonstrated that accuracy diminishes for the more uncommon STEMI presentations (e.g. lateral). Unlike hospital physicians, paramedics rarely have the ability to review previous ECGs for comparison. Whether or not a prior ECG can improve paramedic accuracy is not known. We hypothesized that the availability of prior ECGs would improve paramedic accuracy in ECG interpretation. A total of 130 paramedics were given a single clinical scenario and were then randomly assigned 12 computerized prehospital ECGs, 6 with and 6 without an accompanying prior ECG. All ECGs were obtained from a local STEMI registry. For each ECG, paramedics were asked to determine whether or not there was a STEMI and to rate their confidence in their interpretation. To determine whether prior ECGs improved accuracy, we used a mixed effects logistic regression model to compare the control and intervention conditions. The addition of a previous ECG improved the accuracy of identifying STEMIs from 75.5% to 80.5% (p=0.015). A previous ECG also increased paramedic confidence in their interpretation (p=0.011). The availability of previous ECGs improves paramedic accuracy and enhances their confidence in interpreting STEMIs. Further studies are needed to evaluate this impact in a clinical setting. Copyright © 2015 Elsevier Inc. All rights reserved.
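
    For readers who want to reproduce this style of analysis, below is a minimal sketch of a mixed effects logistic regression on synthetic data using statsmodels; the data generation, variable names, and effect sizes are invented to mimic the study design (130 paramedics, 12 ECGs each) rather than taken from it.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    # Synthetic stand-in for the repeated-measures data: 130 paramedics each
    # read 12 ECGs, half presented with a prior ECG for comparison.
    rng = np.random.default_rng(1)
    medic = np.repeat(np.arange(130), 12)
    prior = np.tile([0] * 6 + [1] * 6, 130)
    skill = rng.normal(0, 0.5, 130)[medic]              # per-paramedic effect
    p = 1 / (1 + np.exp(-(1.1 + 0.3 * prior + skill)))  # ~75% vs ~80% accuracy
    df = pd.DataFrame({"correct": (rng.random(130 * 12) < p).astype(int),
                       "prior": prior, "medic": medic})

    # Fixed effect of the prior ECG, random intercept per paramedic to
    # account for repeated readings by the same provider.
    model = BinomialBayesMixedGLM.from_formula(
        "correct ~ prior", {"medic": "0 + C(medic)"}, df)
    print(model.fit_vb().summary())
    ```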

  9. Quantitative differential phase contrast imaging at high resolution with radially asymmetric illumination.

    PubMed

    Lin, Yu-Zi; Huang, Kuang-Yuh; Luo, Yuan

    2018-06-15

    Half-circle illumination-based differential phase contrast (DPC) microscopy has been utilized to recover phase images through a pair of images along multiple axes. Recently, half-circle-based DPC using 12-axis measurements was shown to provide a nearly circularly symmetric phase transfer function, improving accuracy and enabling more stable phase recovery. Instead of half-circle-based DPC, we propose a new DPC scheme under radially asymmetric illumination that achieves a circularly symmetric phase transfer function and enhances the accuracy of phase recovery in a more stable and efficient fashion. We present the design, implementation, and experimental image data demonstrating the ability of our method to obtain quantitative phase images of microspheres, as well as live fibroblast cell samples.
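
    Although the contribution here is the illumination design, the recovery step shared by DPC methods can be sketched: the measurements along each axis are combined with their phase transfer functions in a Tikhonov-regularised least-squares inversion. The sketch below assumes the transfer functions have already been computed for the chosen illumination; it is a generic illustration, not the authors' implementation.

    ```python
    import numpy as np

    def dpc_phase_recovery(dpc_images, transfer_fns, reg=1e-3):
        """Combine DPC images I_i and their (complex) phase transfer
        functions H_i into a phase estimate via Tikhonov-regularised
        least squares in the Fourier domain:
        phase = IFFT( sum_i conj(H_i) FFT(I_i) / (sum_i |H_i|^2 + reg) )."""
        numerator = np.zeros_like(transfer_fns[0])
        denominator = np.full(transfer_fns[0].shape, reg)
        for img, H in zip(dpc_images, transfer_fns):
            numerator += np.conj(H) * np.fft.fft2(img)
            denominator += np.abs(H) ** 2
        return np.real(np.fft.ifft2(numerator / denominator))
    ```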

  10. Overview of the SAMPL5 host–guest challenge: Are we doing better?

    PubMed Central

    Yin, Jian; Henriksen, Niel M.; Slochower, David R.; Shirts, Michael R.; Chiu, Michael W.; Mobley, David L.; Gilson, Michael K.

    2016-01-01

    The ability to computationally predict protein-small molecule binding affinities with high accuracy would accelerate drug discovery and reduce its cost by eliminating rounds of trial-and-error synthesis and experimental evaluation of candidate ligands. As academic and industrial groups work toward this capability, there is an ongoing need for datasets that can be used to rigorously test new computational methods. Although protein–ligand data are clearly important for this purpose, their size and complexity make it difficult to obtain well-converged results and to troubleshoot computational methods. Host–guest systems offer a valuable alternative class of test cases, as they exemplify noncovalent molecular recognition but are far smaller and simpler. As a consequence, host–guest systems have been part of the prior two rounds of SAMPL prediction exercises, and they also figure in the present SAMPL5 round. In addition to being blinded, and thus avoiding biases that may arise in retrospective studies, the SAMPL challenges have the merit of focusing multiple researchers on a common set of molecular systems, so that methods may be compared and ideas exchanged. The present paper provides an overview of the host–guest component of SAMPL5, which centers on three different hosts, two octa-acids and a glycoluril-based molecular clip, and two different sets of guest molecules, in aqueous solution. A range of methods were applied, including electronic structure calculations with implicit solvent models; methods that combine empirical force fields with implicit solvent models; and explicit solvent free energy simulations. The most reliable methods tend to fall in the latter class, consistent with results in prior SAMPL rounds, but the level of accuracy is still below that sought for reliable computer-aided drug design. Advances in force field accuracy, modeling of protonation equilibria, electronic structure methods, and solvent models hold promise for future improvements. PMID:27658802

  11. Identification of Long Bone Fractures in Radiology Reports Using Natural Language Processing to Support Healthcare Quality Improvement.

    PubMed

    Grundmeier, Robert W; Masino, Aaron J; Casper, T Charles; Dean, Jonathan M; Bell, Jamie; Enriquez, Rene; Deakyne, Sara; Chamberlain, James M; Alpern, Elizabeth R

    2016-11-09

    Important information to support healthcare quality improvement is often recorded in free text documents such as radiology reports. Natural language processing (NLP) methods may help extract this information, but these methods have rarely been applied outside the research laboratories where they were developed. Our objective was to implement and validate NLP tools to identify long bone fractures for pediatric emergency medicine quality improvement. Using freely available statistical software packages, we implemented NLP methods to identify long bone fractures from radiology reports. A sample of 1,000 radiology reports was used to construct three candidate classification models. A test set of 500 reports was used to validate the model performance. Blinded manual review of radiology reports by two independent physicians provided the reference standard. Each radiology report was segmented and word stem and bigram features were constructed. Common English "stop words" and rare features were excluded. We used 10-fold cross-validation to select optimal configuration parameters for each model. Accuracy, recall, precision and the F1 score were calculated. The final model was compared to the use of diagnosis codes for the identification of patients with long bone fractures. There were 329 unique word stems and 344 bigrams in the training documents. A support vector machine classifier with Gaussian kernel performed best on the test set with accuracy=0.958, recall=0.969, precision=0.940, and F1 score=0.954. Optimal parameters for this model were cost=4 and gamma=0.005. The three classification models that we tested all performed better than diagnosis codes in terms of accuracy, precision, and F1 score (diagnosis code accuracy=0.932, recall=0.960, precision=0.896, and F1 score=0.927). NLP methods using a corpus of 1,000 training documents accurately identified acute long bone fractures from radiology reports. Strategic use of straightforward NLP methods, implemented with freely available software, offers quality improvement teams new opportunities to extract information from narrative documents.
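
    The pipeline maps naturally onto freely available tools, in the spirit of the paper. A minimal scikit-learn sketch follows; the two example reports are invented placeholders, and the word stemming step (e.g. via an NLTK stemmer) is omitted for brevity.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC

    # Hypothetical corpus: (report text, long bone fracture yes/no) pairs.
    reports = ["transverse fracture of the femoral shaft",
               "no acute fracture or dislocation seen"]
    labels = [1, 0]

    pipeline = Pipeline([
        # Unigram and bigram features with English stop words removed and
        # rare features pruned, approximating the paper's feature set.
        ("features", CountVectorizer(ngram_range=(1, 2), stop_words="english",
                                     min_df=2)),
        # Gaussian-kernel SVM; C=4 and gamma=0.005 were the paper's optimum.
        ("svm", SVC(kernel="rbf", C=4, gamma=0.005)),
    ])

    # 10-fold cross-validation over a configuration grid, as described.
    search = GridSearchCV(pipeline, {"svm__C": [1, 2, 4, 8],
                                     "svm__gamma": [0.001, 0.005, 0.01]}, cv=10)
    # search.fit(reports, labels)  # requires the full labelled corpus
    ```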

  12. The role of molecular diagnostic testing in the management of thyroid nodules.

    PubMed

    Moore, Maureen D; Panjwani, Suraj; Gray, Katherine D; Finnerty, Brendan M; Zarnegar, Rasa; Fahey, Thomas J

    2017-06-01

    Fine needle aspiration (FNA) with cytologic examination remains the standard of care for investigation of thyroid nodules. However, as many as 30% of FNA samples are cytologically indeterminate for malignancy, which confounds clinical management. To reduce the burden of repeat diagnostic testing and unnecessary surgery, there has been extensive investigation into molecular markers that can be detected on FNA specimens to more accurately stratify a patient's risk of malignancy. Areas covered: In this review, the authors discuss recent evidence and progress in molecular markers used in the diagnosis of thyroid cancer highlighting somatic gene alterations, molecular technologies and microRNA analysis. Expert commentary: The goal of molecular markers is to improve diagnostic accuracy and aid clinicians in the preoperative management of thyroid lesions. Modalities such as direct mutation analysis, mRNA gene expression profiling, next-generation sequencing, and miRNA expression profiling have been explored to improve the diagnostic accuracy of thyroid nodule FNA. Although no perfect test has been discovered, molecular diagnostic testing has revolutionized the management of thyroid nodules.

  13. Ultra-deep mutant spectrum profiling: improving sequencing accuracy using overlapping read pairs.

    PubMed

    Chen-Harris, Haiyin; Borucki, Monica K; Torres, Clinton; Slezak, Tom R; Allen, Jonathan E

    2013-02-12

    High throughput sequencing is beginning to make a transformative impact in the area of viral evolution. Deep sequencing has the potential to reveal the mutant spectrum within a viral sample at high resolution, thus enabling the close examination of viral mutational dynamics both within- and between-hosts. The challenge however, is to accurately model the errors in the sequencing data and differentiate real viral mutations, particularly those that exist at low frequencies, from sequencing errors. We demonstrate that overlapping read pairs (ORP) -- generated by combining short fragment sequencing libraries and longer sequencing reads -- significantly reduce sequencing error rates and improve rare variant detection accuracy. Using this sequencing protocol and an error model optimized for variant detection, we are able to capture a large number of genetic mutations present within a viral population at ultra-low frequency levels (<0.05%). Our rare variant detection strategies have important implications beyond viral evolution and can be applied to any basic and clinical research area that requires the identification of rare mutations.
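
    The core idea, calling each base twice and keeping only concordant calls, can be sketched as follows; the merging rule, quality thresholds, and the assumption that the paired reads fully overlap are illustrative choices, not the authors' exact error model.

    ```python
    def revcomp(seq):
        comp = {"A": "T", "T": "A", "C": "G", "G": "C", "N": "N"}
        return "".join(comp[b] for b in reversed(seq))

    def consensus_overlap(r1, q1, r2, q2, margin=20):
        """Consensus-call a read pair whose short insert is fully spanned by
        both reads (the ORP design). r2/q2 are the raw reverse read and its
        Phred qualities; bases are kept where the reads agree or one call is
        clearly better, masking the rest, which suppresses sequencing errors."""
        r2, q2 = revcomp(r2), q2[::-1]
        merged, quals = [], []
        for b1, p1, b2, p2 in zip(r1, q1, r2, q2):
            if b1 == b2:
                merged.append(b1)
                quals.append(min(p1 + p2, 60))    # agreement boosts confidence
            elif abs(p1 - p2) >= margin:          # one call clearly better
                merged.append(b1 if p1 > p2 else b2)
                quals.append(abs(p1 - p2))
            else:
                merged.append("N")                # ambiguous: mask the base
                quals.append(0)
        return "".join(merged), quals
    ```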

  14. Newborn screening healthcare information system based on service-oriented architecture.

    PubMed

    Hsieh, Sung-Huai; Hsieh, Sheau-Ling; Chien, Yin-Hsiu; Weng, Yung-Ching; Hsu, Kai-Ping; Chen, Chi-Huang; Tu, Chien-Ming; Wang, Zhenyu; Lai, Feipei

    2010-08-01

    In this paper, we describe a newborn screening system built on the HL7 and Web Services frameworks. We rebuilt the NTUH Newborn Screening Laboratory's original standalone architecture, in which heterogeneous systems operated in isolation, into a distributed Service-Oriented Architecture (SOA) platform that integrates sample collection, testing, diagnosis, evaluation, treatment and follow-up services, and screening database management, and that supports collaboration and communication among hospitals; decision support and improved screening accuracy across Taiwan's neonatal systems are also addressed. The new system not only integrates newborn screening procedures across phlebotomy clinics, referral hospitals, and the newborn screening center in Taiwan, but also introduces new screening workflow models for the medical practitioners involved. Furthermore, it reduces the burden of the manual operations, especially reporting, on which the laboratory previously depended heavily. The new system accelerates the whole process effectively and efficiently, and improves the accuracy and reliability of screening by ensuring quality control throughout processing.

  15. Elemental and Isotopic Analysis of Uranium Oxide and NIST Glass Standards by Femtosecond-LA-ICP-MIC-MS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebert, Chris; Zamzow, Daniel S.; McBay, Eddie H.

    2009-06-01

    The objective of this work was to test and demonstrate the analytical figures of merit of a femtosecond-laser ablation (fs-LA) system coupled with an inductively coupled plasma-multi-ion collector-mass spectrometer (ICP-MIC-MS). The mobile fs-LA sampling system was designed and assembled at Ames Laboratory and shipped to Oak Ridge National Laboratory (ORNL), where it was integrated with an ICP-MIC-MS. The test period of the integrated systems was February 2-6, 2009. Spatially-resolved analysis of particulate samples is accomplished by 100-shot laser ablation using a fs-pulsewidth laser and monitoring selected isotopes in the resulting ICP-MS transient signal. The capability of performing high sensitivity, spatially resolved, isotopic analyses with high accuracy and precision and with virtually no sample preparation makes fs-LA-ICP-MIC-MS valuable for the measurement of actinide isotopes at low concentrations in very small samples for nonproliferation purposes. Femtosecond-LA has been shown to generate particles from the sample that are more representative of the bulk composition, thereby minimizing weaknesses encountered in previous work using nanosecond-LA (ns-LA). The improvement of fs- over ns-LA sampling arises from the different mechanisms for transfer of energy into the sample in these two laser pulse-length regimes. The shorter duration fs-LA pulses induce less heating and cause less damage to the sample than the longer ns pulses. This results in better stoichiometric sampling (i.e., a closer correlation between the composition of the ablated particles and that of the original solid sample), which improves accuracy for both intra- and inter-elemental analysis. The primary samples analyzed in this work are (a) solid uranium oxide powdered samples having different 235U to 238U concentration ratios, and (b) glass reference materials (NIST 610, 612, 614, and 616). Solid uranium oxide samples containing 235U in depleted, natural, and enriched abundances were analyzed as particle aggregates immobilized in a collodion substrate. The uranium oxide samples were nuclear reference materials (CRMs U0002, U005-A, 129-A, U015, U030-A, and U050) obtained from New Brunswick Laboratory-USDOE.
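
    As a small illustration of how an isotope ratio is extracted from such transient signals, the sketch below background-subtracts and integrates hypothetical 235U and 238U traces from a single 100-shot ablation burst; the array names, timing, and gas-blank window are invented for illustration.

    ```python
    import numpy as np

    def isotope_ratio(t, i235, i238, gas_blank_end=5.0):
        """Integrate background-subtracted transient signals from one
        ablation burst and form the 235U/238U ratio. Assumes the first
        `gas_blank_end` seconds of the traces record only the gas blank."""
        blank = t < gas_blank_end
        s235 = i235 - i235[blank].mean()
        s238 = i238 - i238[blank].mean()
        return (np.trapz(s235[~blank], t[~blank]) /
                np.trapz(s238[~blank], t[~blank]))
    ```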

  16. Audit of accuracy of clinical coding in oral surgery.

    PubMed

    Naran, S; Hudovsky, A; Antscherl, J; Howells, S; Nouraei, S A R

    2014-10-01

    We aimed to study the accuracy of clinical coding within oral surgery and to identify ways in which it can be improved. We undertook a multidisciplinary audit of a sample of 646 day case patients who had had oral surgery procedures between 2011 and 2012. We compared the codes given with their case notes and amended any discrepancies. The accuracy of coding was assessed for primary and secondary diagnoses and procedures, and for health resource groupings (HRGs). The financial impact of coding Subjectivity, Variability and Error (SVE) was assessed by reference to national tariffs. The audit resulted in 122 (19%) changes to primary diagnoses. The codes for primary procedures changed in 224 (35%) cases; 310 (48%) morbidities and complications had been missed, and 266 (41%) secondary procedures had been missed or were incorrect. This led to at least one change of coding in 496 (77%) patients, and to HRG changes in 348 (54%) patients. The financial impact of this was £114 in lost revenue per patient. There is a high incidence of coding errors in oral surgery because of the large number of day cases, a lack of awareness by clinicians of coding issues, and because clinical coders are not always familiar with the large number of highly specialised abbreviations used. Accuracy of coding can be improved through the use of a well-designed proforma, and standards can be maintained by the use of an ongoing data quality assurance programme. Copyright © 2014. Published by Elsevier Ltd.

  17. Automated characterisation of ultrasound images of ovarian tumours: the diagnostic accuracy of a support vector machine and image processing with a local binary pattern operator

    PubMed Central

    Khazendar, S.; Sayasneh, A.; Al-Assam, H.; Du, H.; Kaijser, J.; Ferrara, L.; Timmerman, D.; Jassim, S.; Bourne, T.

    2015-01-01

    Introduction: Preoperative characterisation of ovarian masses as benign or malignant is of paramount importance to optimise patient management. Objectives: In this study, we developed and validated a computerised model to characterise ovarian masses as benign or malignant. Materials and methods: Transvaginal 2D B mode static ultrasound images of 187 ovarian masses with known histological diagnosis were included. Images were first pre-processed and enhanced, and Local Binary Pattern Histograms were then extracted from 2 × 2 blocks of each image. A Support Vector Machine (SVM) was trained using stratified cross validation with randomised sampling. The process was repeated 15 times and in each round 100 images were randomly selected. Results: The SVM classified the original non-treated static images as benign or malignant masses with an average accuracy of 0.62 (95% CI: 0.59-0.65). This performance significantly improved to an average accuracy of 0.77 (95% CI: 0.75-0.79) when images were pre-processed, enhanced and treated with a Local Binary Pattern operator (mean difference 0.15; 95% CI: 0.11-0.19; p < 0.0001, two-tailed t test). Conclusion: We have shown that an SVM can classify static 2D B mode ultrasound images of ovarian masses into benign and malignant categories. The accuracy improves if texture-related LBP features extracted from the images are considered. PMID:25897367
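
    A minimal sketch of the feature extraction and classification pipeline is shown below, using scikit-image's LBP operator and a scikit-learn SVM. The 2 × 2 block layout follows the paper; the LBP variant ("uniform", P=8, R=1) and the commented training loop are illustrative assumptions.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def lbp_block_features(image, blocks=2, P=8, R=1.0):
        """Concatenate uniform-LBP histograms from a blocks x blocks grid
        of the image into one texture feature vector."""
        lbp = local_binary_pattern(image, P, R, method="uniform")
        n_bins = P + 2                   # uniform patterns plus one catch-all
        h, w = lbp.shape
        feats = []
        for i in range(blocks):
            for j in range(blocks):
                block = lbp[i * h // blocks:(i + 1) * h // blocks,
                            j * w // blocks:(j + 1) * w // blocks]
                hist, _ = np.histogram(block, bins=n_bins,
                                       range=(0, n_bins), density=True)
                feats.append(hist)
        return np.concatenate(feats)

    # Hypothetical training loop on the annotated ultrasound set:
    # X = np.array([lbp_block_features(img) for img in images])
    # clf = SVC(kernel="rbf").fit(X, labels)   # labels: 0 benign, 1 malignant
    ```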

  18. Comparison of univariate and multivariate models for prediction of major and minor elements from laser-induced breakdown spectra with and without masking

    NASA Astrophysics Data System (ADS)

    Dyar, M. Darby; Fassett, Caleb I.; Giguere, Stephen; Lepore, Kate; Byrne, Sarah; Boucher, Thomas; Carey, CJ; Mahadevan, Sridhar

    2016-09-01

    This study uses 1356 spectra from 452 geologically-diverse samples, the largest suite of LIBS rock spectra ever assembled, to compare the accuracy of elemental predictions in models that use only spectral regions thought to contain peaks arising from the element of interest versus those that use information in the entire spectrum. Results show that for the elements Si, Al, Ti, Fe, Mg, Ca, Na, K, Ni, Mn, Cr, Co, and Zn, univariate predictions based on single emission lines are by far the least accurate, no matter how carefully the region of channels/wavelengths is chosen and despite the prominence of the selected emission lines. An automated iterative algorithm was developed to sweep through all 5485 channels of data and select the single region that produces the optimal prediction accuracy for each element using univariate analysis. For the eight major elements, use of this technique results in a 35% improvement in prediction accuracy; for minors, the improvement is 13%. The best wavelength region choice for any given univariate analysis is likely to be an inherent property of the specific training set that cannot be generalized. In comparison, multivariate analysis using partial least-squares (PLS) almost universally outperforms univariate analysis. PLS using all the same wavelength regions from the univariate analysis produces results that improve in accuracy by 63% for major elements and 3% for minor elements. This difference is likely a reflection of signal-to-noise ratios, which are far better for major elements than for minor elements, and likely limit their prediction accuracy by any technique. We also compare predictions using specific wavelength ranges for each element against those employing all channels. Masking out channels to focus on emission lines from a specific element decreases prediction accuracy for major elements but is useful for minor elements with low signals and proportionally much higher noise; use of PLS rather than univariate analysis is still recommended. Finally, we tested the generalizability of our results by analyzing a second data set from a different instrument. Overall prediction accuracies for the mixed data sets are higher than for either set alone for all major and minor elements except Ni, Cr, and Co, where results are roughly comparable.
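
    The univariate-versus-multivariate comparison at the heart of this study can be sketched with scikit-learn, as below; the spectra and composition arrays, the peak-channel index, and the number of PLS components are placeholders, and the original work used its own PLS and cross-validation design.

    ```python
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    def compare_models(spectra, sio2, peak_channel, n_components=10):
        """spectra: (n_samples, n_channels) LIBS intensities; sio2: known
        SiO2 wt.% from reference analyses; peak_channel: index of a single
        emission-line channel for the univariate baseline."""
        scoring = "neg_root_mean_squared_error"
        # Univariate: one emission-line channel as the sole predictor.
        uni = cross_val_score(LinearRegression(),
                              spectra[:, [peak_channel]], sio2,
                              cv=5, scoring=scoring)
        # Multivariate: PLS over the full spectrum.
        pls = cross_val_score(PLSRegression(n_components=n_components),
                              spectra, sio2, cv=5, scoring=scoring)
        print(f"univariate RMSE {-uni.mean():.2f}, "
              f"full-spectrum PLS RMSE {-pls.mean():.2f}")
    ```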

  19. A time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes with applications in substance abuse research.

    PubMed

    Yang, Songshan; Cranford, James A; Jester, Jennifer M; Li, Runze; Zucker, Robert A; Buu, Anne

    2017-02-28

    This study proposes a time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes. The motivating example demonstrates that this zero-inflated Poisson model allows investigators to study group differences in different aspects of substance use (e.g., the probability of abstinence and the quantity of alcohol use) simultaneously. The simulation study shows that the accuracy of estimation of trajectory functions improves as the sample size increases; the accuracy under equal group sizes is only higher when the sample size is small (100). In terms of the performance of the hypothesis testing, the type I error rates are close to their corresponding significance levels under all settings. Furthermore, the power increases as the alternative hypothesis deviates more from the null hypothesis, and the rate of this increasing trend is higher when the sample size is larger. Moreover, the hypothesis test for the group difference in the zero component tends to be less powerful than the test for the group difference in the Poisson component. Copyright © 2016 John Wiley & Sons, Ltd.
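
    A static (non-time-varying) zero-inflated Poisson model of this kind is straightforward to sketch with statsmodels, as below; the synthetic data, group effect, and abstinence rate are invented, and the paper's time-varying coefficient functions would require machinery beyond this illustration.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    # Synthetic weekly drink counts with structural zeros from abstainers.
    rng = np.random.default_rng(0)
    n = 500
    group = rng.integers(0, 2, n)                  # e.g. two study groups
    x = sm.add_constant(group.astype(float))
    abstinent = rng.random(n) < 0.4                # zero-inflation component
    counts = np.where(abstinent, 0,
                      rng.poisson(np.exp(0.5 + 0.3 * group)))

    # Group enters both parts of the model, so group differences in the
    # probability of abstinence (logit) and in the quantity of use
    # (Poisson) are examined simultaneously, as the abstract describes.
    model = ZeroInflatedPoisson(counts, x, exog_infl=x, inflation="logit")
    print(model.fit(disp=False).summary())
    ```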
