Accuracy of glenohumeral joint injections: comparing approach and experience of provider.
Tobola, Allison; Cook, Chad; Cassas, Kyle J; Hawkins, Richard J; Wienke, Jeffrey R; Tolan, Stefan; Kissenberth, Michael J
2011-10-01
The purpose of this study was to prospectively evaluate the accuracy of three different approaches used for glenohumeral injections. In addition, the accuracy of the injection was compared to the experience and confidence of the provider. One hundred six consecutive patients with shoulder pain underwent attempted intra-articular injection either posteriorly, supraclavicularly, or anteriorly. Each approach was performed by an experienced and an inexperienced provider. A musculoskeletal radiologist blinded to the technique used and the provider interpreted fluoroscopic images to determine accuracy. Providers were blinded to these results. The accuracy of the anterior approach regardless of experience was 64.7%, the posterior approach was 45.7%, and the supraclavicular approach was 45.5%. With each approach, experience did not provide an advantage. For the anterior approach, the experienced provider was 50% accurate compared to 85.7% for the inexperienced provider. For the posterior approach, the experienced provider had a 42.1% accuracy rate compared to 50% for the inexperienced provider. The experienced provider was accurate 50% of the time in the supraclavicular approach compared to 38.5% for the inexperienced provider. The providers were not able to predict their accuracy regardless of experience. The experienced providers, when compared to those who were less experienced, were more likely to be overconfident, particularly with the anterior and supraclavicular approaches. There was no statistically significant difference between the three approaches. The anterior approach was the most accurate, independent of the experience level of the provider. The posterior approach produced the lowest level of confidence regardless of experience. The experienced providers were not able to accurately predict the results of their injections, and were more likely to be overconfident with the anterior and supraclavicular approaches. Copyright © 2011 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
Optimizing Tsunami Forecast Model Accuracy
NASA Astrophysics Data System (ADS)
Whitmore, P.; Nyland, D. L.; Huang, P. Y.
2015-12-01
Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models is compared for seven events since 2006, based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after the fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data in the models increases accuracy by approximately 15% for the events examined.
Hirth, Jacqueline; Kuo, Yong-Fang; Laz, Tabassum Haque; Starkey, Jonathan M; Rupp, Richard E; Rahman, Mahbubur; Berenson, Abbey B
2016-08-17
To examine the accuracy of parental report of HPV vaccination through examination of concordance, with healthcare provider vaccination report as the comparison. The 2008-2013 National Immunization Survey (NIS)-Teen was used to examine accuracy of parent reports of HPV vaccination for their daughters aged 13-17 years, as compared with provider report of initiation and number of doses. Multivariable logistic regression models were used to examine associations related to concordance of parent and provider report. Among 51,746 adolescents, 84% concordance for HPV vaccine initiation and 70% concordance for number of doses were observed. Accuracy varied by race/ethnicity, region, time, and income. The parent report of number of doses was more likely to be accurate among parents of 13- and 14-year-old females than of 17-year-olds. Accuracy of initiation and number of doses was lower among Hispanic and black adolescents compared to white adolescents. The odds of over-report were higher among minorities compared to whites, but the odds of under-report were also markedly higher in these groups compared to parents of white teens. Accuracy of parental vaccine report decreased across time. These findings are important for healthcare providers who need to ascertain the vaccination status of young adults. Strengthening existing immunization registries to improve data-sharing capabilities and record completeness could improve vaccination rates, while avoiding costs associated with over-vaccination. Copyright © 2016 Elsevier Ltd. All rights reserved.
Online Periodic Table: A Cautionary Note
NASA Astrophysics Data System (ADS)
Izci, Kemal; Barrow, Lloyd H.; Thornhill, Erica
2013-08-01
The purpose of this study was (a) to evaluate ten online periodic table sources for their accuracy and (b) to compare the types of information and links provided to users. Few studies have been reported on online periodic tables (Diener and Moore 2011; Slocum and Moore in J Chem Educ 86(10):1167, 2009). Chemistry students' understanding of the periodic table is vital for their success in chemistry, and online periodic tables have the potential to advance learners' understanding of chemical elements and fundamental chemistry concepts (Brito et al. in J Res Sci Teach 42(1):84-111, 2005). The ten sites, the most visited periodic table Web sites available, were compared for accuracy of data with the Handbook of Chemistry and Physics (HCP, Haynes in CRC handbook of chemistry and physics: a ready-reference book of chemical and physical data. CRC Press, Boca Raton 2012). Four different elements, carbon, gold, argon, and plutonium, were selected for comparison, and 11 different attributes for each element were identified for evaluating accuracy. A wide variation of accuracy was found among the 10 periodic table sources. Chemicool was the most accurate information provider, with 66.67 % accuracy when compared to the HCP. The 22 types of information provided by these sites, including meaning of name and use in industry and society, were also compared. "WebElements", "Chemicool", "Periodic Table Live", and "the Photographic Periodic Table of the Elements" provided the most information, covering 86.36 % of the information types among the 10 Web sites. "WebElements" provided the most links among the 10 Web sites. It was concluded that if an individual teacher or student desires only raw physical data for an element, the Internet might not be the best choice.
Accuracy of Referring Provider and Endoscopist Impressions of Colonoscopy Indication.
Naveed, Mariam; Clary, Meredith; Ahn, Chul; Kubiliun, Nisa; Agrawal, Deepak; Cryer, Byron; Murphy, Caitlin; Singal, Amit G
2017-07-01
Background: Referring provider and endoscopist impressions of colonoscopy indication are used for clinical care, reimbursement, and quality reporting decisions; however, the accuracy of these impressions is unknown. This study assessed the sensitivity, specificity, positive and negative predictive value, and overall accuracy of methods to classify colonoscopy indication, including referring provider impression, endoscopist impression, and administrative algorithm compared with gold standard chart review. Methods: We randomly sampled 400 patients undergoing a colonoscopy at a Veterans Affairs health system between January 2010 and December 2010. Referring provider and endoscopist impressions of colonoscopy indication were compared with gold-standard chart review. Indications were classified into 4 mutually exclusive categories: diagnostic, surveillance, high-risk screening, or average-risk screening. Results: Of 400 colonoscopies, 26% were performed for average-risk screening, 7% for high-risk screening, 26% for surveillance, and 41% for diagnostic indications. Accuracy of referring provider and endoscopist impressions of colonoscopy indication were 87% and 84%, respectively, which were significantly higher than that of the administrative algorithm (45%; P <.001 for both). There was substantial agreement between endoscopist and referring provider impressions (κ=0.76). All 3 methods showed high sensitivity (>90%) for determining screening (vs nonscreening) indication, but specificity of the administrative algorithm was lower (40.3%) compared with referring provider (93.7%) and endoscopist (84.0%) impressions. Accuracy of endoscopist, but not referring provider, impression was lower in patients with a family history of colon cancer than in those without (65% vs 84%; P =.001). Conclusions: Referring provider and endoscopist impressions of colonoscopy indication are both accurate and may be useful data to incorporate into algorithms classifying colonoscopy indication. 
Copyright © 2017 by the National Comprehensive Cancer Network.
Exploring a Three-Level Model of Calibration Accuracy
ERIC Educational Resources Information Center
Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.
2014-01-01
We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…
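All five statistics compared in the abstract above can be computed from a single 2×2 calibration table (confidence judgment × actual correctness). The sketch below assumes the standard definitions from the signal detection and agreement literatures; the cell layout and function name are illustrative, not taken from the article.

```python
from statistics import NormalDist

def calibration_stats(a, b, c, d):
    """Accuracy statistics from a 2x2 calibration table.

    a: confident and correct        b: confident but incorrect
    c: not confident but correct    d: not confident and incorrect
    """
    n = a + b + c + d
    sensitivity = a / (a + c)            # correct items judged with confidence
    specificity = d / (b + d)            # incorrect items judged without confidence
    g_index = ((a + d) - (b + c)) / n    # i.e. 2 * proportion agreement - 1
    gamma = (a * d - b * c) / (a * d + b * c)  # Goodman-Kruskal gamma
    z = NormalDist().inv_cdf
    d_prime = z(sensitivity) - z(1 - specificity)  # signal-detection d'
    return sensitivity, specificity, g_index, gamma, d_prime

sens, spec, g, gam, dp = calibration_stats(40, 10, 10, 40)
```

With this symmetric example table, sensitivity and specificity are both 0.8 while gamma is close to 0.9, which hints at why the statistics can disagree about how well calibrated a judge is.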
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely, genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The optimised methods' results are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to the EWB accuracy of 59.86%. In addition to rough sets providing the plausibilities of the estimated HIV status, they also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
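The EWB baseline and the idea of searching partition boundaries to maximise a score can be sketched as follows. The objective function here is a toy stand-in for the rough-set classification accuracy the paper maximises, and the function names are illustrative, not the authors' implementation.

```python
import random

def equal_width_bins(values, n_bins):
    """Assign each value to one of n_bins equal-width intervals (EWB)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

def hill_climb_boundaries(boundaries, score, steps=200, jitter=0.1, seed=0):
    """Greedy hill climbing: perturb one partition boundary at a time
    and keep the move only if the supplied score improves."""
    rng = random.Random(seed)
    best = list(boundaries)
    best_score = score(best)
    for _ in range(steps):
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-jitter, jitter)
        cand.sort()                      # boundaries must stay ordered
        s = score(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

# Toy objective: reward boundaries close to [2.0, 4.0]
target = [2.0, 4.0]
obj = lambda b: -sum((x - t) ** 2 for x, t in zip(b, target))
opt, opt_score = hill_climb_boundaries([1.0, 5.0], obj)
```

GA and SA differ from HC only in how candidate boundary sets are generated and accepted; the paper's point is that any of the three outperforms the fixed EWB cut points.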
The Effectiveness of a Rater Training Booklet in Increasing Accuracy of Performance Ratings
1988-04-01
Subjects' ratings were compared for accuracy. The dependent measure was the absolute deviation score of each individual's rating from the "true score". Findings: The absolute deviation scores of each individual's ratings from the "true score" provided by subject matter experts were analyzed.
Parent, Francois; Loranger, Sebastien; Mandal, Koushik Kanti; Iezzi, Victor Lambin; Lapointe, Jerome; Boisvert, Jean-Sébastien; Baiad, Mohamed Diaa; Kadoury, Samuel; Kashyap, Raman
2017-04-01
We demonstrate a novel approach to enhance the precision of surgical needle shape tracking based on distributed strain sensing using optical frequency domain reflectometry (OFDR). The precision enhancement is provided by using optical fibers with high scattering properties. Shape tracking of surgical tools using the strain sensing properties of optical fibers has seen increased attention in recent years. Most of the investigations made in this field use fiber Bragg gratings (FBGs), which can be used as discrete or quasi-distributed strain sensors. By using a truly distributed sensing approach (OFDR), preliminary results show that the attainable accuracy is comparable to accuracies reported in the literature using FBG sensors for tracking applications (~1 mm). We propose a technique that enhanced our accuracy by 47% using UV-exposed fibers, which have higher light scattering compared to unexposed standard single-mode fibers. Improving the experimental setup will enhance the accuracy provided by shape tracking using OFDR and will contribute significantly to clinical applications.
Spatial modeling and classification of corneal shape.
Marsolo, Keith; Twa, Michael; Bullimore, Mark A; Parthasarathy, Srinivasan
2007-03-01
One of the most promising applications of data mining is in biomedical data used in patient diagnosis. Any method of data analysis intended to support the clinical decision-making process should meet several criteria: it should capture clinically relevant features, be computationally feasible, and provide easily interpretable results. In an initial study, we examined the feasibility of using Zernike polynomials to represent biomedical instrument data in conjunction with a decision tree classifier to distinguish between diseased and non-diseased eyes. Here, we provide a comprehensive follow-up to that work, examining a second representation, pseudo-Zernike polynomials, to determine whether they provide any increase in classification accuracy. We compare the fidelity of both methods using residual root-mean-square (RMS) error and evaluate accuracy using several classifiers: neural networks, C4.5 decision trees, Voting Feature Intervals, and Naïve Bayes. We also examine the effect of several meta-learning strategies: boosting, bagging, and Random Forests (RFs). We present results comparing accuracy as it relates to dataset and transformation resolution over a larger, more challenging, multi-class dataset. They show that classification accuracy is similar for both data transformations but differs by classifier. We find that the Zernike polynomials provide better feature representation than the pseudo-Zernikes and that the decision trees yield the best balance of classification accuracy and interpretability.
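For reference, the radial component R_n^m(ρ) of the Zernike polynomials used as the feature representation above has a standard closed-form factorial series. A minimal sketch (the function name is illustrative; the fitting and classification stages of the paper are not reproduced here):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^m(rho) for n >= |m|, n - |m| even."""
    m = abs(m)
    if (n - m) % 2:           # R_n^m vanishes when n - m is odd
        return 0.0
    total = 0.0
    for k in range((n - m) // 2 + 1):
        coeff = ((-1) ** k * factorial(n - k)
                 / (factorial(k)
                    * factorial((n + m) // 2 - k)
                    * factorial((n - m) // 2 - k)))
        total += coeff * rho ** (n - 2 * k)
    return total

# Sanity checks: R_2^0(rho) = 2*rho**2 - 1 and R_4^0(rho) = 6*rho**4 - 6*rho**2 + 1
```

Corneal height maps are expanded in these polynomials over the unit disk, and the expansion coefficients become the classifier's feature vector; the pseudo-Zernike variant uses a different radial series over the same angular terms.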
ERIC Educational Resources Information Center
Igoe, D. P.; Parisi, A. V.; Wagner, S.
2017-01-01
Smartphones used as tools provide opportunities for the teaching of the concepts of accuracy and precision and the mathematical concept of arctan. The accuracy and precision of a trigonometric experiment using entirely mechanical tools is compared to one using electronic tools, such as a smartphone clinometer application and a laser pointer. This…
On what it means to know someone: a matter of pragmatics.
Gill, Michael J; Swann, William B
2004-03-01
Two studies provide support for W. B. Swann's (1984) argument that perceivers achieve substantial pragmatic accuracy--accuracy that facilitates the achievement of relationship-specific interaction goals--in their social relationships. Study 1 assessed the extent to which group members reached consensus regarding the behavior of a member in familiar (as compared with unfamiliar) contexts and found that groups do indeed achieve this form of pragmatic accuracy. Study 2 assessed the degree of insight romantic partners had into the self-views of their partners on relationship-relevant (as compared with less relevant) traits and found that couples do indeed achieve this form of pragmatic accuracy. Furthermore, pragmatic accuracy was uniquely associated with relationship harmony. Implications for a functional approach to person perception are discussed.
Zavala, Mary Wassel; Yule, Arthur; Kwan, Lorna; Lambrechts, Sylvia; Maliski, Sally L; Litwin, Mark S
2016-11-01
To examine accuracy of patient-reported prostate-specific antigen (PSA) levels among indigent, uninsured men in a state-funded prostate cancer treatment program that provides case management, care coordination, and health education. Program evaluation. The sample comprised 114 men with matched self- and lab-reported PSA levels at program enrollment and at another time point within 18 months. Abstraction of self- and lab-reported PSA levels to determine self-report as "accurate" or "inaccurate," and to evaluate accuracy change over time, before and after nursing interventions. Chi-square tests compared patients with accurate versus inaccurate PSA values. Nonlinear multivariate analyses explored trends in self-reported accuracy over time. Program enrollees receive prostate cancer education from a Nurse Case Manager (NCM), including the significance of PSA levels. Men self-report PSA results to their NCM following lab draws and appointments. The NCM provides ongoing education about PSA levels. Of the sample, 46% (n = 53) accurately reported PSA levels. Accuracy of PSA self-reports improved with increasing time since program enrollment. Compared with men at public facilities, those treated at private facilities showed increasing accuracy in self-reported PSA (p = .038). A targeted nursing intervention may increase specific knowledge of PSA levels. Additionally, the provider/treatment setting significantly impacts a patient's disease education and knowledge. © 2016 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Goomas, David T.
2012-01-01
The effects of wireless ring scanners, which provided immediate auditory and visual feedback, were evaluated to increase the performance and accuracy of order selectors at a meat distribution center. The scanners not only increased performance and accuracy compared to paper pick sheets, but were also instrumental in immediate and accurate data…
NASA Astrophysics Data System (ADS)
Obuchowski, Nancy A.; Bullen, Jennifer A.
2018-04-01
Receiver operating characteristic (ROC) analysis is a tool used to describe the discrimination accuracy of a diagnostic test or prediction model. While sensitivity and specificity are the basic metrics of accuracy, they have many limitations when characterizing test accuracy, particularly when comparing the accuracies of competing tests. In this article we review the basic study design features of ROC studies, illustrate sample size calculations, present statistical methods for measuring and comparing accuracy, and highlight commonly used ROC software. We include descriptions of multi-reader ROC study design and analysis, address frequently seen problems of verification and location bias, discuss clustered data, and provide strategies for testing endpoints in ROC studies. The methods are illustrated with a study of transmission ultrasound for diagnosing breast lesions.
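The area under the ROC curve, the workhorse summary of discrimination accuracy in such studies, equals the probability that a randomly chosen diseased case scores higher than a randomly chosen non-diseased case, so it can be computed without fitting any curve. A minimal sketch (names illustrative, not from the article):

```python
def auc(pos_scores, neg_scores):
    """Empirical ROC area via the Mann-Whitney statistic: the fraction
    of (positive, negative) score pairs ranked correctly, ties counting
    as half a win."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

The O(n·m) pair loop is fine for illustration; production ROC software uses a rank-based O(n log n) form and adds the variance estimates needed for the sample-size and multi-reader comparisons the article describes.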
Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification
NASA Astrophysics Data System (ADS)
Sharif, I.; Khare, S.
2014-11-01
With the number of channels in the hundreds instead of the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data poses a challenge to current techniques for analyzing data, and conventional classification methods may not be useful without dimension-reduction pre-processing. Dimension reduction has therefore become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in achieving image classification. Spectral data reduction using wavelet decomposition could be useful because it preserves the distinction among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction yields better class separation and better or comparable classification accuracy. In the context of the dimensionality reduction algorithm, it is found that the classification performance of Daubechies wavelets is better than that of the Haar wavelet, while Daubechies takes more time compared to Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
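One level of the Haar decomposition halves a spectrum's dimensionality by keeping only the approximation (pairwise average) coefficients, which is the reduction step being compared above. A minimal sketch assuming an even-length spectrum (illustrative, not the authors' implementation; Daubechies filters would replace the two-tap averaging with longer filter taps):

```python
from math import sqrt

def haar_reduce(spectrum, levels=1):
    """Keep only the orthonormal Haar approximation coefficients,
    halving the number of spectral bands per level."""
    out = list(spectrum)
    for _ in range(levels):
        out = [(out[i] + out[i + 1]) / sqrt(2.0)
               for i in range(0, len(out), 2)]
    return out

# Two levels reduce an 8-band spectrum to 2 coefficients
reduced = haar_reduce([1.0, 3.0, 2.0, 2.0, 5.0, 1.0, 0.0, 4.0], levels=2)
```

Because the transform is orthonormal, spectra that differ before reduction keep distinct approximation coefficients unless their difference lies entirely in the discarded detail bands, which is the sense in which the reduction "preserves the distinction among spectral signatures".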
Accuracy and Measurement Error of the Medial Clear Space of the Ankle.
Metitiri, Ogheneochuko; Ghorbanhoseini, Mohammad; Zurakowski, David; Hochman, Mary G; Nazarian, Ara; Kwon, John Y
2017-04-01
Measurement of the medial clear space (MCS) is commonly used to assess deltoid ligament competency and mortise stability when managing ankle fractures. Because the true anatomic width has been unknown, previous studies have been unable to assess the accuracy of MCS measurement. The purpose of this study was to determine MCS measurement error and accuracy and any influencing factors. Using 3 normal transtibial ankle cadaver specimens, the deltoid and syndesmotic ligaments were transected and the mortise widened and affixed at a width of 6 mm (specimen 1) and 4 mm (specimen 2). The mortise was left intact in specimen 3. Radiographs were obtained of each cadaver at varying degrees of rotation. Radiographs were randomized, and providers measured the MCS using a standardized technique. Lack of accuracy as well as lack of precision in measurement of the medial clear space compared to a known anatomic value was present for all 3 specimens tested. There were no significant differences in mean delta with regard to level of training for specimens 1 and 2; however, with specimen 3, staff physicians showed increased measurement accuracy compared with trainees. Accuracy and precision of MCS measurements are poor. Provider experience did not appear to influence accuracy and precision of measurements for the displaced mortise. This high degree of measurement error and lack of precision should be considered when deciding treatment options based on MCS measurements.
Parameter estimation using weighted total least squares in the two-compartment exchange model.
Garpebring, Anders; Löfstedt, Tommy
2018-01-01
The linear least squares (LLS) estimator provides a fast approach to parameter estimation in the linearized two-compartment exchange model. However, the LLS method may introduce a bias through correlated noise in the system matrix of the model. The purpose of this work is to present a new estimator for the linearized two-compartment exchange model that takes this noise into account. To account for the noise in the system matrix, we developed an estimator based on the weighted total least squares (WTLS) method. Using simulations, the proposed WTLS estimator was compared, in terms of accuracy and precision, to an LLS estimator and a nonlinear least squares (NLLS) estimator. The WTLS method improved the accuracy compared to the LLS method to levels comparable to the NLLS method. This improvement was at the expense of increased computational time; however, the WTLS was still faster than the NLLS method. At high signal-to-noise ratio all methods provided similar precision, while inconclusive results were observed at low signal-to-noise ratio. The proposed method provides improvements in accuracy compared to the LLS method, however at an increased computational cost. Magn Reson Med 79:561-567, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
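The core idea — that noise in the regressors (the system matrix) biases ordinary least squares, while a total least squares formulation accounts for it — can be illustrated on a one-parameter model y = a·x, where the orthogonal (TLS) slope has a closed form. This is a deliberately simplified sketch, not the paper's WTLS estimator for the two-compartment model:

```python
def lls_slope(xs, ys):
    """Ordinary least squares slope for y = a*x (noise assumed only in y)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / sxx

def tls_slope(xs, ys):
    """Total least squares slope for y = a*x: minimises orthogonal
    distances, so noise in x is accounted for as well."""
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return ((syy - sxx) + ((syy - sxx) ** 2 + 4 * sxy ** 2) ** 0.5) / (2 * sxy)
```

With noise-free data both slopes agree; when the x values are perturbed, the LLS slope is attenuated toward zero while the TLS slope stays closer to the true value, which is the bias the WTLS estimator targets (with per-element weights added for heteroscedastic noise).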
Effect of provider volume on the accuracy of hospital report cards: a Monte Carlo study.
Austin, Peter C; Reeves, Mathew J
2014-03-01
Hospital report cards, in which outcomes after the provision of medical or surgical care are compared across healthcare providers, are being published with increasing frequency. However, the accuracy of such comparisons is controversial, especially when case volumes are small. The objective was to determine the relationship between hospital case volume and the accuracy of hospital report cards. Monte Carlo simulations were used to examine the influence of hospital case volume on the accuracy of hospital report cards in a setting in which true hospital performance was known with certainty, and perfect risk-adjustment was feasible. The parameters used to generate the simulated data sets were obtained from empirical analyses of data on patients hospitalized with acute myocardial infarction in Ontario, Canada, in which the overall 30-day mortality rate was 11.1%. We found that provider volume had a strong effect on the accuracy of hospital report cards. However, provider volume had to be >300 before ≥70% of hospitals were correctly classified. Furthermore, hospital volume had to be >1000 before ≥80% of hospitals were correctly classified. Producers and users of hospital report cards need to be aware that, even when perfect risk adjustment is possible, the accuracy of hospital report cards is, at best, modest for small to medium-sized case loads (i.e., 100-300). Hospital report cards displayed high degrees of accuracy only when provider volumes exceeded the typical annual hospital case load for many cardiovascular conditions and procedures.
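The core of such a Monte Carlo study is straightforward: draw hospital outcomes from binomial distributions with known "true" performance and check how often a report card classifies each hospital correctly at a given case volume. A minimal sketch under assumed, illustrative parameters (not those of the Ontario AMI data, and without the risk adjustment the paper models):

```python
import random

def report_card_accuracy(volume, n_hospitals=200, p_good=0.10, p_poor=0.20,
                         threshold=0.15, seed=42):
    """Fraction of hospitals whose simulated report-card label (observed
    mortality above/below threshold) matches their true status. Half the
    hospitals are true poor performers."""
    rng = random.Random(seed)
    correct = 0
    for h in range(n_hospitals):
        true_p = p_poor if h % 2 else p_good
        deaths = sum(rng.random() < true_p for _ in range(volume))
        flagged = deaths / volume > threshold
        correct += flagged == (true_p > threshold)
    return correct / n_hospitals

acc_small = report_card_accuracy(volume=50)
acc_large = report_card_accuracy(volume=1000)
```

Even in this idealized setting, classification accuracy at a volume of 50 cases falls well short of the near-perfect accuracy reached at 1000 cases, mirroring the paper's conclusion that report cards are only modestly accurate for small to medium case loads.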
2003-01-01
Data are not readily available on the accuracy of one of the most commonly used home blood glucose meters, the One Touch Ultra (LifeScan, Milpitas, California). The purpose of this report is to provide information on the accuracy of this home glucose meter in children with type 1 diabetes. During a 24-h clinical research center stay, the accuracy of the Ultra meter was assessed in 91 children, 3-17 years old, with type 1 diabetes by comparing the Ultra glucose values with concurrent reference serum glucose values measured in a central laboratory. The Pearson correlation between the 2,068 paired Ultra and reference values was 0.97, with the median relative absolute difference being 6%. Ninety-four percent of all Ultra values (96% of venous and 84% of capillary samples) met the proposed International Organisation for Standardisation (ISO) standard for instruments used for self-monitoring of glucose when compared with venous reference values. Ninety-nine percent of values were in zones A + B of the Modified Error Grid. A high degree of accuracy was seen across the full range of glucose values. For 353 data points during an insulin-induced hypoglycemia test, the Ultra meter was found to have accuracy that was comparable to concurrently used benchmark instruments (Beckman, YSI, or i-STAT); 95% and 96% of readings from the Ultra meter and the benchmark instruments met the proposed ISO criteria, respectively. These results confirm that the One Touch Ultra meter provides accurate glucose measurements for both hypoglycemia and hyperglycemia in children with type 1 diabetes.
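The headline statistic above, the median relative absolute difference between meter and reference glucose, is simple to compute. The ISO criterion sketched below uses the commonly cited form of the ISO 15197:2003 accuracy requirement (within 15 mg/dL of reference below 75 mg/dL, within 20% at or above 75 mg/dL); since the abstract does not spell out the proposed criterion it used, treat that function as an assumption:

```python
from statistics import median

def median_rad(meter, reference):
    """Median relative absolute difference between paired readings."""
    rads = [abs(m - r) / r for m, r in zip(meter, reference)]
    return median(rads)

def meets_iso(meter_val, ref_val):
    """ISO 15197:2003-style criterion (assumed form): within 15 mg/dL
    when reference < 75 mg/dL, otherwise within 20% of reference."""
    if ref_val < 75:
        return abs(meter_val - ref_val) <= 15
    return abs(meter_val - ref_val) / ref_val <= 0.20

mrad = median_rad([95, 210, 60], [100, 200, 55])
```

Applying such a pair of functions over all 2,068 meter/reference pairs yields exactly the two summary figures the abstract reports: the median relative absolute difference and the percentage of readings meeting the ISO criterion.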
Achamrah, Najate; Jésus, Pierre; Grigioni, Sébastien; Rimbert, Agnès; Petit, André; Déchelotte, Pierre; Folope, Vanessa; Coëffier, Moïse
2018-01-01
Predictive equations have been specifically developed for obese patients to estimate resting energy expenditure (REE). Body composition (BC) assessment is needed for some of these equations. We assessed the impact of BC methods on the accuracy of specific predictive equations developed in obese patients. REE was measured (mREE) by indirect calorimetry and BC assessed by bioelectrical impedance analysis (BIA) and dual-energy X-ray absorptiometry (DXA). Predicted REE was compared with mREE, with accuracy defined as the percentage of predictions within ±10% of mREE. Predictive equations were studied in 2588 obese patients. Mean mREE was 1788 ± 6.3 kcal/24 h. Only the Müller (BIA) and Harris & Benedict (HB) equations provided REE with no difference from mREE. The Huang, Müller, Horie-Waitzberg, and HB formulas provided higher prediction accuracy (>60% of cases). The use of BIA provided better predictions of REE than DXA for the Huang and Müller equations. Inversely, the Horie-Waitzberg and Lazzer formulas provided higher accuracy using DXA. Accuracy decreased when applied to patients with BMI ≥ 40, except for the Horie-Waitzberg and Lazzer (DXA) formulas. Müller equations based on BIA provided markedly better REE prediction accuracy than equations not based on BC. The value of BC assessment for improving the accuracy of REE predictive equations in obese patients remains to be confirmed. PMID:29320432
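Of the equations named, Harris & Benedict (HB) is the classic BC-free predictor, computable from weight, height, age, and sex alone. A sketch using one commonly cited rounding of the original 1919 coefficients — the exact coefficients vary slightly across sources, so treat these as illustrative rather than the study's implementation:

```python
def harris_benedict(weight_kg, height_cm, age_yr, sex):
    """Resting energy expenditure (kcal/day), original Harris-Benedict
    equations with commonly cited rounded coefficients."""
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.775 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

ree_m = harris_benedict(70, 175, 30, "male")
```

Equations such as Müller (BIA) replace the weight and height terms with fat-free and fat mass from body composition, which is why the choice of BC method (BIA vs DXA) feeds directly into their prediction accuracy.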
Evaluating the decision accuracy and speed of clinical data visualizations.
Pieczkiewicz, David S; Finkelstein, Stanley M
2010-01-01
Clinicians face an increasing volume of biomedical data. Assessing the efficacy of systems that enable accurate and timely clinical decision making merits corresponding attention. This paper discusses the multiple-reader multiple-case (MRMC) experimental design and linear mixed models as means of assessing and comparing decision accuracy and latency (time) for decision tasks in which clinician readers must interpret visual displays of data. These experimental and statistical techniques, used extensively in radiology imaging studies, offer a number of practical and analytic advantages over more traditional quantitative methods such as percent-correct measurements and ANOVAs, and are recommended for their statistical efficiency and generalizability. An example analysis using readily available, free, and commercial statistical software is provided as an appendix. While these techniques are not appropriate for all evaluation questions, they can provide a valuable addition to the evaluative toolkit of medical informatics research.
Sound source localization identification accuracy: Envelope dependencies.
Yost, William A
2017-07-01
Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
Eash, David A.
2015-01-01
An examination was conducted to understand why the 1987 single-variable RREs seem to provide better accuracy and less bias than either the 2013 multi- or single-variable RREs. A comparison of 1-percent annual exceedance-probability regression lines for hydrologic regions 1-4 from the 1987 single-variable RREs and for flood regions 1-3 from the 2013 single-variable RREs indicates that the 1987 single-variable regional-regression lines generally have steeper slopes and lower discharges when compared to 2013 single-variable regional-regression lines for corresponding areas of Iowa. The combination of the definition of hydrologic regions, the lower discharges, and the steeper slopes of regression lines associated with the 1987 single-variable RREs seems to provide better accuracy and less bias when compared to the 2013 multi- or single-variable RREs; better accuracy and less bias were found particularly for drainage areas less than 2 mi2, and also for some drainage areas between 2 and 20 mi2. The 2013 multi- and single-variable RREs are considered to provide better accuracy and less bias for larger drainage areas. Results of this study indicate that additional research is needed to address the curvilinear relation between drainage area and AEPDs for areas of Iowa.
NASA Astrophysics Data System (ADS)
Xiong, Ling; Luo, Xiao; Hu, Hai-xiang; Zhang, Zhi-yu; Zhang, Feng; Zheng, Li-gong; Zhang, Xue-jun
2017-08-01
A feasible way to improve the manufacturing efficiency of large reaction-bonded silicon carbide optics is to increase the processing accuracy in the grinding stage before polishing, which requires high-accuracy metrology. A swing arm profilometer (SAP) has been used to measure large optics during the grinding stage. A method has been developed for improving the measurement accuracy of SAP by using a capacitive probe and implementing calibrations. The experimental result, compared with the interferometer test, shows an accuracy of 0.068 μm root-mean-square (RMS), and maps in 37 low-order Zernike terms show an accuracy of 0.048 μm RMS, demonstrating a powerful capability to provide a major input to high-precision grinding.
Charnot-Katsikas, Angella; Tesic, Vera; Boonlayangoor, Sue; Bethel, Cindy; Frank, Karen M
2014-02-01
This study assessed the accuracy of bacterial and yeast identification using the VITEK MS, and the time to reporting of isolates before and after its implementation in routine clinical practice. Three hundred and sixty-two isolates of bacteria and yeast, consisting of a variety of clinical isolates and American Type Culture Collection strains, were tested. Results were compared with reference identifications from the VITEK 2 system and with 16S rRNA sequence analysis. The VITEK MS provided an acceptable identification to species level for 283 (78 %) isolates. Considering organisms for which genus-level identification is acceptable for routine clinical care, 315 isolates (87 %) had an acceptable identification. Six isolates (2 %) were identified incorrectly, five of which were Shigella species. Finally, the time for reporting the identifications was decreased significantly after implementation of the VITEK MS for a total mean reduction in time of 10.52 h (P<0.0001). Overall, accuracy of the VITEK MS was comparable or superior to that from the VITEK 2. The findings were also comparable to other studies examining the accuracy of the VITEK MS, although differences exist, depending on the diversity of species represented as well as on the versions of the databases used. The VITEK MS can be incorporated effectively into routine use in a clinical microbiology laboratory and future expansion of the database should provide improved accuracy for the identification of micro-organisms.
Shah, Shabir A; Naqash, Talib Amin; Padmanabhan, T V; Subramanium; Lambodaran; Nazir, Shazana
2014-03-01
The sole objective of the casting procedure is to provide a metallic duplication of missing tooth structure with as great accuracy as possible. The ability to produce well-fitting castings requires strict adherence to certain fundamentals. A study was undertaken to comparatively evaluate the effect on casting accuracy of subjecting the invested wax patterns to burnout after different time intervals. The effect on casting accuracy of using a metal ring placed into a preheated burnout furnace versus using a split ring was also evaluated. The readings obtained were tabulated and subjected to statistical analysis.
Thermocouple Calibration and Accuracy in a Materials Testing Laboratory
NASA Technical Reports Server (NTRS)
Lerch, B. A.; Nathal, M. V.; Keller, D. J.
2002-01-01
A consolidation of information has been provided that can be used to define procedures for enhancing and maintaining accuracy in temperature measurements in materials testing laboratories. These studies were restricted to type R and K thermocouples (TCs) tested in air. Thermocouple accuracies, as influenced by calibration methods, thermocouple stability, and manufacturers' tolerances, were all quantified in terms of statistical confidence intervals. By calibrating specific TCs, the benefit in accuracy can be as great as 6 °C, a fivefold improvement compared to relying on manufacturers' tolerances. The results emphasize strict reliance on the defined testing protocol and the need to establish recalibration frequencies in order to maintain these levels of accuracy.
Parametric Loop Division for 3D Localization in Wireless Sensor Networks
Ahmad, Tanveer
2017-01-01
Localization in Wireless Sensor Networks (WSNs) has been an active topic for more than two decades. A variety of algorithms have been proposed to improve the localization accuracy. However, they are either limited to two-dimensional (2D) space, or require specific sensor deployment for proper operation. In this paper, we propose a three-dimensional (3D) localization scheme for WSNs based on the well-known Parametric Loop Division (PLD) algorithm. The proposed scheme localizes a sensor node in a region bounded by a network of anchor nodes. By iteratively shrinking that region towards its center point, the proposed scheme provides better localization accuracy compared to existing schemes. Furthermore, it is cost-effective and independent of environmental irregularity. We provide an analytical framework for the proposed scheme and find its lower-bound accuracy. Simulation results show that the proposed algorithm provides an average localization accuracy of 0.89 m with a standard deviation of 1.2 m. PMID:28737714
Accuracy of a continuous noninvasive hemoglobin monitor in intensive care unit patients.
Frasca, Denis; Dahyot-Fizelier, Claire; Catherine, Karen; Levrat, Quentin; Debaene, Bertrand; Mimoz, Olivier
2011-10-01
To determine whether noninvasive hemoglobin measurement by Pulse CO-Oximetry could provide clinically acceptable absolute and trend accuracy in critically ill patients, compared to other invasive methods of hemoglobin assessment available at bedside and the gold standard, the laboratory analyzer. Prospective study. Surgical intensive care unit of a university teaching hospital. Sixty-two patients continuously monitored with Pulse CO-Oximetry (Masimo Radical-7). None. Four hundred seventy-one blood samples were analyzed by a point-of-care device (HemoCue 301), a satellite lab CO-Oximeter (Siemens RapidPoint 405), and a laboratory hematology analyzer (Sysmex XT-2000i), which was considered the reference device. Hemoglobin values reported from the invasive methods were compared to the values reported by the Pulse CO-Oximeter at the time of blood draw. When the case-to-case variation was assessed, the bias and limits of agreement were 0.0±1.0 g/dL for the Pulse CO-Oximeter, 0.3±1.3 g/dL for the point-of-care device, and 0.9±0.6 g/dL for the satellite lab CO-Oximeter compared to the reference method. Pulse CO-Oximetry showed similar trend accuracy as satellite lab CO-Oximetry, whereas the point-of-care device did not appear to follow the trend of the laboratory analyzer as well as the other test devices. When compared to laboratory reference values, hemoglobin measurement with Pulse CO-Oximetry has absolute accuracy and trending accuracy similar to widely used, invasive methods of hemoglobin measurement at bedside. Hemoglobin measurement with Pulse CO-Oximetry has the additional advantages of providing continuous measurements, noninvasively, which may facilitate hemoglobin monitoring in the intensive care unit.
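The bias and limits-of-agreement figures above are standard Bland-Altman statistics. As an illustrative sketch (the function name and sample values are ours, not the study's data), they can be computed as:

```python
from statistics import mean, stdev

def bland_altman(device, reference):
    """Bland-Altman bias (mean difference) and 95% limits of agreement
    (bias ± 1.96·SD of the differences) for paired measurements."""
    diffs = [d - r for d, r in zip(device, reference)]
    bias = mean(diffs)
    loa = 1.96 * stdev(diffs)
    return bias, (bias - loa, bias + loa)
```

With the device and reference vectors from a monitoring session, `bland_altman` returns the bias and the interval within which 95% of device-reference differences are expected to fall.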
Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C
2018-06-01
Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
Assessing clinical reasoning (ASCLIRE): Instrument development and validation.
Kunina-Habenicht, Olga; Hautz, Wolf E; Knigge, Michel; Spies, Claudia; Ahlers, Olaf
2015-12-01
Clinical reasoning is an essential competency in medical education. This study aimed at developing and validating a test to assess diagnostic accuracy, collected information, and diagnostic decision time in clinical reasoning. A norm-referenced computer-based test for the assessment of clinical reasoning (ASCLIRE) was developed, integrating the entire clinical decision process. In a cross-sectional study, participants were asked to choose as many diagnostic measures as they deemed necessary to diagnose the underlying disease of six different cases with acute or sub-acute dyspnea and to provide a diagnosis. 283 students and 20 content experts participated. In addition to diagnostic accuracy, respective decision time and the number of relevant diagnostic measures used were documented as distinct performance indicators. The empirical structure of the test was investigated using a structural equation modeling approach. Experts showed higher accuracy rates and lower decision times than students. In a cross-sectional comparison, the diagnostic accuracy of students improved with the year of study. Wrong diagnoses provided by our sample were comparable to wrong diagnoses in practice. We found an excellent fit for a model with three latent factors (diagnostic accuracy, decision time, and choice of relevant diagnostic information), with diagnostic accuracy showing no significant correlation with decision time. ASCLIRE treats decision time as an important performance indicator alongside diagnostic accuracy and provides evidence that clinical reasoning is a complex ability comprising diagnostic accuracy, decision time, and choice of relevant diagnostic information as three partly correlated but still distinct aspects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
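The abstract names the metrics but not the package's API. As a rough sketch of what some of them compute (function names are ours, not PyForecastTools'), median symmetric accuracy and three of the 2x2 contingency-table scores could be implemented as:

```python
import math
from statistics import median

def median_symmetric_accuracy(obs, pred):
    """Median symmetric accuracy (%): 100 * (exp(median(|ln(pred/obs)|)) - 1)."""
    log_ratios = [abs(math.log(p / o)) for o, p in zip(obs, pred)]
    return 100.0 * (math.exp(median(log_ratios)) - 1.0)

def pod(tp, fn):
    """Probability of detection (hit rate) from a 2x2 contingency table."""
    return tp / (tp + fn)

def far(tp, fp):
    """False alarm ratio: fraction of predicted events that did not occur."""
    return fp / (tp + fp)

def heidke_skill_score(tp, fn, fp, tn):
    """Heidke skill score: correct forecasts relative to chance agreement."""
    n = tp + fn + fp + tn
    correct = tp + tn
    expected = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n
    return (correct - expected) / (n - expected)
```

A perfect forecast gives a median symmetric accuracy of 0% and a Heidke skill score of 1; a chance-level forecast gives a Heidke score of 0.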
An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang
2017-03-01
We investigated and compared the functionality of two 3D visualization software packages provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as baseline, we evaluated the accuracy of 3D visualization and verified their utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scan with contrast enhancement, using a CT-vendor-provided 3D visualization workstation (Syngo) and a third-party 3D visualization software (Mimics) that was installed on a PC. Automated and semi-automated segmentations were utilized in the 3D visualization workstation and software, respectively. The functionality and efficiency of automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as a baseline, the accuracy of 3D visualization based on automated and semi-automated segmentations was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed in anatomical data measurement by the Syngo 3D visualization workstation and the Mimics 3D visualization software (P>0.05). Both the Syngo 3D visualization workstation provided by a CT vendor and the Mimics 3D visualization software by a third-party vendor possessed the needed functionality, efficiency and accuracy for computer-aided anatomical analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.
A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery.
Huang, Huasheng; Deng, Jizhong; Lan, Yubin; Yang, Aqing; Deng, Xiaoling; Zhang, Lei
2018-01-01
Appropriate Site Specific Weed Management (SSWM) is crucial to ensuring crop yields. Within SSWM of a large-scale area, remote sensing is a key technology to provide accurate weed distribution information. Compared with satellite and piloted aircraft remote sensing, an unmanned aerial vehicle (UAV) is capable of capturing high spatial resolution imagery, which provides more detailed information for weed mapping. The objective of this paper is to generate an accurate weed cover map based on UAV imagery. The UAV RGB imagery was collected in October 2017 over a rice field located in South China. The Fully Convolutional Network (FCN) method was proposed for weed mapping of the collected imagery. Transfer learning was used to improve generalization capability, and skip architecture was applied to increase the prediction accuracy. After that, the performance of the FCN architecture was compared with a patch-based CNN algorithm and a pixel-based CNN method. Experimental results showed that our FCN method outperformed the others, both in terms of accuracy and efficiency. The overall accuracy of the FCN approach was up to 0.935 and the accuracy for weed recognition was 0.883, which means that this algorithm is capable of generating accurate weed cover maps for the evaluated UAV imagery.
Optimization of camera exposure durations for multi-exposure speckle imaging of the microcirculation
Kazmi, S. M. Shams; Balial, Satyajit; Dunn, Andrew K.
2014-01-01
Improved Laser Speckle Contrast Imaging (LSCI) blood flow analyses that incorporate inverse models of the underlying laser-tissue interaction have been used to develop more quantitative implementations of speckle flowmetry such as Multi-Exposure Speckle Imaging (MESI). In this paper, we determine the optimal camera exposure durations required for obtaining flow information with comparable accuracy with the prevailing MESI implementation utilized in recent in vivo rodent studies. A looping leave-one-out (LOO) algorithm was used to identify exposure subsets which were analyzed for accuracy against flows obtained from analysis with the original full exposure set over 9 animals comprising n = 314 regional flow measurements. From the 15 original exposures, 6 exposures were found using the LOO process to provide comparable accuracy, defined as being no more than 10% deviant, with the original flow measurements. The optimal subset of exposures provides a basis set of camera durations for speckle flowmetry studies of the microcirculation and confers a two-fold faster acquisition rate and a 28% reduction in processing time without sacrificing accuracy. Additionally, the optimization process can be used to identify further reductions in the exposure subsets for tailoring imaging over less expansive flow distributions to enable even faster imaging. PMID:25071956
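The leave-one-out subset search described above can be sketched as a greedy pruning loop. This is our illustrative reading, not the authors' code, and `flow_error` (the deviation of a subset's flow estimate from the full-set estimate) is a hypothetical callback:

```python
def prune_exposures(exposures, flow_error, max_dev=0.10):
    """Greedy leave-one-out pruning: repeatedly drop the exposure whose
    removal perturbs the flow estimate least, as long as the deviation
    from the full-set estimate stays within max_dev (10% in the paper)."""
    current = list(exposures)
    while len(current) > 1:
        best = None
        for e in current:
            subset = [x for x in current if x != e]
            dev = flow_error(subset)
            if dev <= max_dev and (best is None or dev < best[1]):
                best = (e, dev)
        if best is None:  # every further removal exceeds the tolerance
            break
        current.remove(best[0])
    return current
```

In the paper's setting this kind of search reduced 15 exposures to 6 while keeping flow estimates within 10% of the full-set values; here the tolerance and callback are placeholders.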
Accuracy of visual inspection performed by community health workers in cervical cancer screening.
Driscoll, Susan D; Tappen, Ruth M; Newman, David; Voege-Harvey, Kathi
2018-05-22
Cervical cancer remains the leading cause of cancer and mortality in low-resource areas with healthcare personnel shortages. Visual inspection is a low-resource alternative method of cervical cancer screening in areas with limited access to healthcare. To assess accuracy of visual inspection performed by community health workers (CHWs) and licensed providers, and the effect of provider training on visual inspection accuracy. Five databases and four websites were queried for studies published in English up to December 31, 2015. Derivations of "cervical cancer screening" and "visual inspection" were search terms. Visual inspection screening studies with provider definitions, colposcopy reference standards, and accuracy data were included. A priori variables were extracted by two independent reviewers. Bivariate linear mixed-effects models were used to compare visual inspection accuracy. Provider type was a significant predictor of visual inspection sensitivity (P=0.048); sensitivity was 15 percentage points higher among CHWs than physicians (P=0.014). Components of provider training were significant predictors of sensitivity and specificity. Community-based visual inspection programs using adequately trained CHWs could reduce barriers and expand access to screening, thereby decreasing cervical cancer incidence and mortality for women at highest risk and those living in remote areas with limited access to healthcare personnel. This article is protected by copyright. All rights reserved.
NASA Technical Reports Server (NTRS)
Clegg, R. H.; Scherz, J. P.
1975-01-01
Successful aerial photography depends on aerial cameras providing acceptable photographs within cost restrictions of the job. For topographic mapping where ultimate accuracy is required only large format mapping cameras will suffice. For mapping environmental patterns of vegetation, soils, or water pollution, 9-inch cameras often exceed accuracy and cost requirements, and small formats may be better. In choosing the best camera for environmental mapping, relative capabilities and costs must be understood. This study compares resolution, photo interpretation potential, metric accuracy, and cost of 9-inch, 70mm, and 35mm cameras for obtaining simultaneous color and color infrared photography for environmental mapping purposes.
Impact of a "No Mobile Device" Policy on Developmental Surveillance in a Pediatric Clinic.
Regan, Paul A; Fogel, Benjamin S; Hicks, Steven D
2018-04-01
Children commonly use mobile devices at pediatric office visits. This practice may affect patient-provider interaction and undermine accuracy of developmental surveillance. A randomized, provider-blinded, controlled trial examined whether a policy prohibiting mobile device use in a pediatric clinic improved accuracy of pediatricians' developmental surveillance. Children, aged 18 to 36 months, were randomized to device-prohibited (intervention; n = 58) or device-allowed (control; n = 54) groups. After a 30-minute well-visit, development was evaluated as "normal," "borderline," or "delayed" in 5 categories using the Ages and Stages Questionnaire (ASQ-3). ASQ-3 results were compared with providers' clinical assessment in each category. Provider-ASQ discrepancies were more common for intervention participants (P = .025). Providers "missed" more ASQ-3 "delayed" scores (P = .005) in the intervention group, particularly in the fine motor domain (P = .018). Prohibiting mobile device use at well-visits did not improve accuracy of providers' developmental surveillance. Mobile devices may entertain children at well-visits, allowing opportunities for parent-provider discussion, or observation of fine motor skills.
Diagnosis of hydronephrosis: comparison of radionuclide scanning and sonography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malave, S.R.; Neiman, H.L.; Spies, S.M.
1980-12-01
Diagnostic sonographic and radioisotope scanning techniques have been shown to be useful in the diagnosis of obstructive uropathy. The accuracy of both methods was compared and sonography was found to provide the more accurate data (sensitivity 90%; specificity 98%; accuracy 97%). Sonography provides excellent anatomic information and enables one to grade the degree of dilatation. Renal radionuclide studies were less sensitive in detecting obstruction, particularly in the presence of chronic renal disease, but offered additional information regarding relative renal blood flow, total effective renal plasma flow, and interval change in renal parenchymal function.
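Sensitivity, specificity, and overall accuracy follow directly from a 2x2 table of test results against the reference diagnosis. A minimal sketch with illustrative counts (not the study's data):

```python
def diagnostic_accuracy(tp, fn, tn, fp):
    """Sensitivity, specificity, and overall accuracy from a 2x2 table
    of test results (tp/fn/tn/fp) against a reference diagnosis."""
    sensitivity = tp / (tp + fn)          # true positives among diseased
    specificity = tn / (tn + fp)          # true negatives among healthy
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy
```

For example, 9 detected of 10 obstructed and 98 correctly negative of 100 unobstructed would reproduce the 90%/98% figures quoted above.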
Sauer, James; Hope, Lorraine
2016-09-01
Eyewitnesses regulate the level of detail (grain size) reported to balance competing demands for informativeness and accuracy. However, research to date has predominantly examined metacognitive monitoring for semantic memory tasks, and used relatively artificial phased reporting procedures. Further, although the established role of confidence in this regulation process may affect the confidence-accuracy relation for volunteered responses in predictable ways, previous investigations of the confidence-accuracy relation for eyewitness recall have largely overlooked the regulation of response granularity. Using a non-phased paradigm, Experiment 1 compared reporting and monitoring following optimal and sub-optimal (divided attention) encoding conditions. Participants showed evidence of sacrificing accuracy for informativeness, even when memory quality was relatively weak. Participants in the divided (cf. full) attention condition showed reduced accuracy for fine- but not coarse-grained responses. However, indices of discrimination and confidence diagnosticity showed no effect of divided attention. Experiment 2 compared the effects of divided attention at encoding on reporting and monitoring using both non-phased and 2-phase procedures. Divided attention effects were consistent with Experiment 1. However, compared to those in the non-phased condition, participants in the 2-phase condition displayed a more conservative control strategy, and confidence ratings were less diagnostic of accuracy. When memory quality was reduced, although attempts to balance informativeness and accuracy increased the chance of fine-grained response errors, confidence provided an index of the likely accuracy of volunteered fine-grained responses for both conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
Validation of geometric accuracy of Global Land Survey (GLS) 2000 data
Rengarajan, Rajagopalan; Sampath, Aparajithan; Storey, James C.; Choate, Michael J.
2015-01-01
The Global Land Survey (GLS) 2000 data were generated from Geocover™ 2000 data with the aim of producing a global data set of accuracy better than 25 m Root Mean Square Error (RMSE). An assessment and validation of accuracy of GLS 2000 data set, and its co-registration with Geocover™ 2000 data set is presented here. Since the availability of global data sets that have higher nominal accuracy than the GLS 2000 is a concern, the data sets were assessed in three tiers. In the first tier, the data were compared with the Geocover™ 2000 data. This comparison provided a means of localizing regions of higher differences. In the second tier, the GLS 2000 data were compared with systematically corrected Landsat-7 scenes that were obtained in a time period when the spacecraft pointing information was extremely accurate. These comparisons localize regions where the data are consistently off, which may indicate regions of higher errors. The third tier consisted of comparing the GLS 2000 data against higher accuracy reference data. The reference data were the Digital Ortho Quads over the United States, orthorectified SPOT data over Australia, and high accuracy check points obtained using triangulation bundle adjustment of Landsat-7 images over selected sites around the world. The study reveals that the geometric errors in Geocover™ 2000 data have been rectified in GLS 2000 data, and that the accuracy of GLS 2000 data can be expected to be better than 25 m RMSE for most of its constituent scenes.
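The 25 m RMSE criterion above is the root-mean-square of horizontal check-point offsets between the data set and the reference. A minimal sketch (assuming offsets are already expressed in metres):

```python
import math

def horizontal_rmse(offsets):
    """Root Mean Square Error of horizontal check-point offsets.
    offsets: iterable of (dx, dy) differences, measured minus reference, in metres."""
    squared = [dx * dx + dy * dy for dx, dy in offsets]
    return math.sqrt(sum(squared) / len(squared))
```

A scene passes the GLS 2000 target when `horizontal_rmse` over its check points stays below 25 m.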
The influence of delaying judgments of learning on metacognitive accuracy: a meta-analytic review.
Rhodes, Matthew G; Tauber, Sarah K
2011-01-01
Many studies have examined the accuracy of predictions of future memory performance solicited through judgments of learning (JOLs). Among the most robust findings in this literature is that delaying predictions serves to substantially increase the relative accuracy of JOLs compared with soliciting JOLs immediately after study, a finding termed the delayed JOL effect. The meta-analyses reported in the current study examined the predominant theoretical accounts as well as potential moderators of the delayed JOL effect. The first meta-analysis examined the relative accuracy of delayed compared with immediate JOLs across 4,554 participants (112 effect sizes) through gamma correlations between JOLs and memory accuracy. Those data showed that delaying JOLs leads to robust benefits to relative accuracy (g = 0.93). The second meta-analysis examined memory performance for delayed compared with immediate JOLs across 3,807 participants (98 effect sizes). Those data showed that delayed JOLs result in a modest but reliable benefit for memory performance relative to immediate JOLs (g = 0.08). Findings from these meta-analyses are well accommodated by theories suggesting that delayed JOL accuracy reflects access to more diagnostic information from long-term memory rather than being a by-product of a retrieval opportunity. However, these data also suggest that theories proposing that the delayed JOL effect results from a memorial benefit or the match between the cues available for JOLs and those available at test may also provide viable explanatory mechanisms necessary for a comprehensive account.
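The gamma correlations used above to index relative accuracy are Goodman-Kruskal gamma computed over item pairs of JOL ratings and recall outcomes. A minimal O(n²) implementation sketch (tied pairs excluded, as the statistic requires):

```python
def goodman_kruskal_gamma(judgments, outcomes):
    """Goodman-Kruskal gamma: (C - D) / (C + D), where C and D are the
    counts of concordant and discordant item pairs; ties are ignored."""
    concordant = discordant = 0
    n = len(judgments)
    for i in range(n):
        for j in range(i + 1, n):
            product = (judgments[i] - judgments[j]) * (outcomes[i] - outcomes[j])
            if product > 0:
                concordant += 1
            elif product < 0:
                discordant += 1
    if concordant + discordant == 0:
        return 0.0  # all pairs tied: gamma undefined, return 0 by convention
    return (concordant - discordant) / (concordant + discordant)
```

Gamma of 1 means JOLs perfectly order later recall; 0 means no predictive ordering, which is the scale on which delayed JOLs outperform immediate ones.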
Computer-based rhythm diagnosis and its possible influence on nonexpert electrocardiogram readers.
Hakacova, Nina; Trägårdh-Johansson, Elin; Wagner, Galen S; Maynard, Charles; Pahlm, Olle
2012-01-01
Systems providing computer-based analysis of the resting electrocardiogram (ECG) seek to improve the quality of health care by providing accurate and timely automatic diagnosis of, for example, cardiac rhythm to clinicians. The accuracy of these diagnoses, however, remains questionable. We tested the hypothesis that (a) 2 independent automated ECG systems have better accuracy in rhythm diagnosis than nonexpert clinicians and (b) both systems provide correct diagnostic suggestions in a large percentage of cases where the diagnosis of nonexpert clinicians is incorrect. Five hundred ECGs were manually analyzed by 2 senior experts, 3 nonexpert clinicians, and automatically by 2 automated systems. The accuracy of the nonexpert rhythm statements was compared with the accuracy of each system statement. The proportion of rhythm statements when the clinician's diagnoses were incorrect and the systems instead provided correct diagnosis was assessed. A total of 420 sinus rhythms and 156 rhythm disturbances were recognized by expert reading. Significance of the difference in accuracy between nonexperts and systems was P = .45 for system A and P = .11 for system B. The percentage of correct automated diagnoses in cases when the clinician was incorrect was 28% ± 10% for system A and 25% ± 11% for system B (P = .09). The rhythm diagnoses of automated systems did not reach better average accuracy than those of nonexpert readings. The computer diagnosis of rhythm can be incorrect in cases where the clinicians fail in reaching the correct ECG diagnosis. Copyright © 2012. Published by Elsevier Inc.
Evidence for Enhanced Interoceptive Accuracy in Professional Musicians
Schirmer-Mokwa, Katharina L.; Fard, Pouyan R.; Zamorano, Anna M.; Finkel, Sebastian; Birbaumer, Niels; Kleber, Boris A.
2015-01-01
Interoception is defined as the perceptual activity involved in the processing of internal bodily signals. While the ability of internal perception is considered a relatively stable trait, recent data suggest that learning to integrate multisensory information can modulate it. Making music is a uniquely rich multisensory experience that has been shown to alter motor, sensory, and multimodal representations in the brain of musicians. We hypothesized that musical training also heightens interoceptive accuracy, comparable to other perceptual modalities. Thirteen professional singers, twelve string players, and thirteen matched non-musicians were examined using a well-established heartbeat discrimination paradigm complemented by self-reported dispositional traits. Results revealed that both groups of musicians displayed higher interoceptive accuracy than non-musicians, whereas no differences were found between singers and string players. Regression analyses showed that accumulated musical practice explained about 49% of the variation in heartbeat perception accuracy in singers but not in string players. Psychometric data yielded a number of psychologically plausible inter-correlations in musicians related to performance anxiety. However, dispositional traits were not a confounding factor on heartbeat discrimination accuracy. Together, these data provide first evidence indicating that professional musicians show enhanced interoceptive accuracy compared to non-musicians. We argue that musical training largely accounts for this effect. PMID:26733836
Validation of hand and foot anatomical feature measurements from smartphone images
NASA Astrophysics Data System (ADS)
Amini, Mohammad; Vasefi, Fartash; MacKinnon, Nicholas
2018-02-01
A smartphone mobile medical application, previously presented as a tool for individuals with hand arthritis to assess and monitor the progress of their disease, has been modified and expanded to include extraction of anatomical features from the hand (joint/finger width, and angulation) and foot (length, width, big toe angle, and arch height index) from smartphone camera images. Image processing algorithms and automated measurements were validated by performing tests on digital hand models, rigid plastic hand models, and real human hands and feet to determine accuracy and reproducibility compared to conventional measurement tools such as calipers, rulers, and goniometers. The mobile application was able to provide finger joint width measurements with accuracy better than 0.34 (+/-0.25) millimeters. Joint angulation measurement accuracy was better than 0.50 (+/-0.45) degrees. The automatically calculated foot length accuracy was 1.20 (+/-1.27) millimeters and the foot width accuracy was 1.93 (+/-1.92) millimeters. Hallux valgus angle (used in assessing bunions) accuracy was 1.30 (+/-1.29) degrees. Arch height index (AHI) measurements had an accuracy of 0.02 (+/-0.01). Combined with in-app documentation of symptoms, treatment, and lifestyle factors, the anatomical feature measurements can be used by both healthcare professionals and manufacturers. Applications include: diagnosing hand osteoarthritis; providing custom finger splint measurements; providing compression glove measurements for burn and lymphedema patients; determining foot dimensions for custom shoe sizing, insoles, orthotics, or foot splints; and assessing arch height index and bunion treatment effectiveness.
Wright, Gavin; Harrold, Natalie; Bownes, Peter
2018-01-01
Aims: To compare the accuracy of the convolution and TMR10 Gamma Knife treatment planning algorithms, and to assess the impact of implementing convolution-based treatment planning upon clinical practice. Methods: Doses calculated by both algorithms were compared against ionisation chamber measurements in homogeneous and heterogeneous phantoms. Relative dose distributions calculated by both algorithms were compared against film-derived 2D isodose plots in a heterogeneous phantom, with distance-to-agreement (DTA) measured at the 80%, 50%, and 20% isodose levels. A retrospective planning study compared 19 clinically acceptable metastasis convolution plans against TMR10 plans with matched shot times, allowing novel comparison of true dosimetric parameters rather than total beam-on time. Gamma analysis and dose-difference analysis were performed on each pair of dose distributions. Results: Both algorithms matched point-dose measurements to within ±1.1% in homogeneous conditions. Convolution provided superior point-dose accuracy in the heterogeneous phantom (-1.1% vs 4.0%), with no discernible differences in relative dose distribution accuracy. In our study, convolution-calculated plans yielded D99% values 6.4% (95% CI: 5.5%-7.3%, p<0.001) lower than shot-matched TMR10 plans. For gamma passing criteria of 1%/1 mm, 16% of targets had passing rates >95%. The range of dose differences in the targets was 0.2-4.6 Gy. Conclusions: Convolution provides superior accuracy versus TMR10 in heterogeneous conditions. Implementing convolution would result in increased target doses; its implementation may therefore require a re-evaluation of prescription doses. PMID:29657896
Accuracy Analysis of a Low-Cost Platform for Positioning and Navigation
NASA Astrophysics Data System (ADS)
Hofmann, S.; Kuntzsch, C.; Schulze, M. J.; Eggert, D.; Sester, M.
2012-07-01
This paper presents an accuracy analysis of a platform based on low-cost components for landmark-based navigation, intended for research and teaching purposes. The proposed platform comprises a LEGO MINDSTORMS NXT 2.0 kit, an Android-based smartphone, and a compact Hokuyo URG-04LX laser scanner. The robot is used in a small indoor environment where GNSS is not available; therefore, a landmark map was produced in advance, with the landmark positions provided to the robot. All steps of the procedure to set up the platform are shown. The main focus of this paper is the achievable positioning accuracy, which was analyzed in this type of scenario as a function of the accuracy of the reference landmarks and the directional and distance measuring accuracy of the laser scanner. Several experiments were carried out, demonstrating the practically achievable positioning accuracy. To evaluate the accuracy, ground truth was acquired using a total station. These results are compared to the theoretically achievable accuracies and the laser scanner's characteristics.
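An evaluation like the one described, comparing the robot's estimated positions against total-station ground truth, reduces to a root-mean-square error over matched point pairs. A minimal sketch (the function name and the point pairs in the test are illustrative, not from the paper):

```python
# Sketch: positioning-accuracy evaluation against surveyed ground truth.

def rmse_2d(estimated, ground_truth):
    """Root-mean-square error between matched 2D position pairs (same units)."""
    if len(estimated) != len(ground_truth):
        raise ValueError("point lists must be matched pairwise")
    sq = [(ex - gx) ** 2 + (ey - gy) ** 2
          for (ex, ey), (gx, gy) in zip(estimated, ground_truth)]
    return (sum(sq) / len(sq)) ** 0.5
```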
Importance of Personalized Health-Care Models: A Case Study in Activity Recognition.
Zdravevski, Eftim; Lameski, Petre; Trajkovik, Vladimir; Pombo, Nuno; Garcia, Nuno
2018-01-01
Novel information and communication technologies create possibilities to change the future of health care. Ambient Assisted Living (AAL) is seen as a promising supplement to current care models. The main goal of AAL solutions is to apply ambient intelligence technologies to enable elderly people to continue living in their preferred environments. Applying models trained on health data is challenging because personalized environments can differ significantly from the ones that provided the training data. This paper investigates the effects on activity recognition accuracy, using a single accelerometer, of personalized models compared to models built on the general population. In addition, we propose a collaborative-filtering-based approach that provides a balance between fully personalized models and generic models. The results show that accuracy could be improved to 95% with fully personalized models, and up to 91.6% with collaborative-filtering-based models, which is significantly better than common models, which exhibit an accuracy of 85.1%. The collaborative filtering approach provides highly personalized models with substantial accuracy, while overcoming the cold-start problem that is common for fully personalized models.
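The balance between a generic and a personalized model can be sketched as a weighted blend that shifts toward the personal model as labelled data for the user accumulates, which also sidesteps the cold-start problem. The weighting scheme, smoothing constant `k`, and activity labels below are assumptions for illustration, not the paper's method:

```python
# Sketch: blend a population-level model with a personalized one.

def blended_predict(personal_probs, generic_probs, n_personal, k=50):
    """Weight the personal model more as personal training data accumulates.

    personal_probs / generic_probs: dicts mapping activity -> probability.
    n_personal: number of labelled samples collected from this user.
    k: assumed smoothing constant controlling the hand-over speed.
    """
    alpha = n_personal / (n_personal + k)  # 0 with no personal data, -> 1 with lots
    blended = {a: alpha * personal_probs[a] + (1 - alpha) * generic_probs[a]
               for a in generic_probs}
    return max(blended, key=blended.get)
```

With no personal data (`n_personal=0`) the generic model decides; after many labelled samples the personal model dominates.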
Leveraging transcript quantification for fast computation of alternative splicing profiles.
Alamancos, Gael P; Pagès, Amadís; Trincado, Juan L; Bellora, Nicolás; Eyras, Eduardo
2015-09-01
Alternative splicing plays an essential role in many cellular processes and bears major relevance to the understanding of multiple diseases, including cancer. High-throughput RNA sequencing allows genome-wide analyses of splicing across multiple conditions. However, the increasing number of available data sets represents a major challenge in terms of computation time and storage requirements. We describe SUPPA, a computational tool that calculates relative inclusion values of alternative splicing events by exploiting fast transcript quantification. SUPPA's accuracy is comparable, and sometimes superior, to that of standard methods on simulated as well as real RNA-sequencing data, benchmarked against experimentally validated events. We assess the variability arising from the choice of annotation and provide evidence that using complete transcripts rather than more transcripts per gene provides better estimates. Moreover, SUPPA coupled with de novo transcript reconstruction methods does not achieve accuracies as high as those obtained by quantifying known transcripts, but remains comparable to existing methods. Finally, we show that SUPPA is more than 1000 times faster than standard methods. Coupled with fast transcript quantification, SUPPA provides inclusion values at a much higher speed than existing methods without compromising accuracy, thereby facilitating the systematic splicing analysis of large data sets with limited computational resources. The software is implemented in Python 2.7 and is available under the MIT license at https://bitbucket.org/regulatorygenomicsupf/suppa. © 2015 Alamancos et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
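The core quantity behind this approach, the relative inclusion (PSI) of an event computed from transcript abundances, reduces to a ratio of TPM sums. A minimal sketch, simplified relative to the actual tool:

```python
# Sketch: percent spliced-in (PSI) from transcript quantification.

def psi(inclusion_tpms, total_tpms):
    """PSI of one event: abundance of transcripts that include the event
    divided by the abundance of all transcripts defining the event.
    Inputs are TPM values; returns NaN when the locus is not expressed."""
    total = sum(total_tpms)
    return sum(inclusion_tpms) / total if total > 0 else float("nan")
```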
Perez-Cruz, Pedro E.; dos Santos, Renata; Silva, Thiago Buosi; Crovador, Camila Souza; Nascimento, Maria Salete de Angelis; Hall, Stacy; Fajardo, Julieta; Bruera, Eduardo; Hui, David
2014-01-01
Context: Survival prognostication is important during the end of life. The accuracy of clinician prediction of survival (CPS) over time has not been well characterized. Objectives: To examine changes in prognostication accuracy during the last 14 days of life in a cohort of patients with advanced cancer admitted to two acute palliative care units, and to compare the accuracy of the temporal and probabilistic approaches. Methods: Physicians and nurses prognosticated survival daily for cancer patients in two hospitals until death or discharge, using two prognostic approaches: temporal and probabilistic. We assessed accuracy for each method daily during the last 14 days of life, comparing accuracy at day −14 (baseline) with accuracy at each subsequent time point using a test of proportions. Results: Physicians and nurses provided 6718 temporal and 6621 probabilistic estimations for 311 patients. Median (interquartile range) survival was 8 (4, 20) days. Temporal CPS had low accuracy (10–40%) and did not change over time. In contrast, probabilistic CPS was significantly more accurate (p<.05 at each time point) but decreased close to death. Conclusion: Probabilistic CPS was consistently more accurate than temporal CPS over the last 14 days of life; however, its accuracy decreased as patients approached death. Our findings suggest that better tools to predict impending death are necessary. PMID:24746583
Assessing the dependence of sensitivity and specificity on prevalence in meta-analysis
Li, Jialiang; Fine, Jason P.
2011-01-01
We consider modeling the dependence of sensitivity and specificity on the disease prevalence in diagnostic accuracy studies. Many meta-analyses compare test accuracy across studies and fail to incorporate the possible connection between the accuracy measures and the prevalence. We propose a Pearson-type correlation coefficient and an estimating equation–based regression framework to help understand such a practical dependence. The results we derive may then be used to better interpret the results from meta-analyses. In the biomedical examples analyzed in this paper, the diagnostic accuracy of biomarkers is shown to be associated with prevalence, providing insights into the utility of these biomarkers in low- and high-prevalence populations. PMID:21525421
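The proposed correlation between an accuracy measure and prevalence can be illustrated with a plain-Python Pearson coefficient computed across studies. The per-study values below are invented for illustration; only the formula is standard:

```python
# Sketch: Pearson correlation between per-study sensitivity and prevalence.

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-study prevalence and sensitivity values:
prevalence = [0.05, 0.10, 0.20, 0.30, 0.40]
sensitivity = [0.70, 0.74, 0.80, 0.85, 0.90]
```

A strongly positive coefficient here would suggest the biomarker performs better in high-prevalence populations, which is exactly the kind of dependence the framework is meant to surface.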
Transportation Modes Classification Using Sensors on Smartphones.
Fang, Shih-Hau; Liao, Hao-Hsiang; Fei, Yu-Xiang; Chen, Kai-Hsiang; Huang, Jen-Wei; Lu, Yu-Ding; Tsao, Yu
2016-08-19
This paper investigates transportation and vehicular mode classification using big data from smartphone sensors. The three types of sensors used in this paper are the accelerometer, magnetometer, and gyroscope. The study proposes improved features and uses three machine learning algorithms, decision trees, K-nearest neighbor, and support vector machine, to classify the user's transportation and vehicular modes. In the experiments, we discuss and compare performance from different perspectives, including accuracy for both modes, execution time, and model size. Results show that the proposed features enhance accuracy; the support vector machine provides the best classification accuracy but consumes the most prediction time. This paper also investigates the vehicular classification mode and compares the results with those of the transportation modes.
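The pipeline described, window-level features from sensor streams followed by a classifier, can be sketched in a few lines. The mean/std magnitude features and the 1-nearest-neighbour stand-in (used here in place of the paper's SVM, KNN, and decision trees) are illustrative choices, not the paper's implementation:

```python
# Sketch: accelerometer features per window, then a tiny classifier.

def features(window):
    """Mean and standard deviation of acceleration magnitude in one window.
    `window` is a list of (x, y, z) accelerometer samples."""
    n = len(window)
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in window]
    mean = sum(mags) / n
    std = (sum((m - mean) ** 2 for m in mags) / n) ** 0.5
    return (mean, std)

def nn_classify(sample, training):
    """1-NN in feature space; `training` is a list of (feature_vec, label)."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(training, key=lambda t: dist(sample, t[0]))[1]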
Asiimwe, Stephen; Oloya, James; Song, Xiao; Whalen, Christopher C
2014-12-01
Unsupervised HIV self-testing (HST) has the potential to increase knowledge of HIV status; however, its accuracy is unknown. To estimate the accuracy of unsupervised HST in field settings in Uganda, we performed a non-blinded, randomized controlled, non-inferiority trial of unsupervised versus supervised HST among selected high-HIV-risk fisherfolk (22.1% HIV prevalence) in three fishing villages in Uganda between July and September 2013. The study enrolled 246 participants and randomized them in a 1:1 ratio to unsupervised HST or provider-supervised HST. In an intent-to-treat analysis, HST sensitivity was 90% in the unsupervised arm and 100% in the provider-supervised arm, yielding a difference of -10% (90% CI: -21%, 1%); non-inferiority was not shown. In a per-protocol analysis, the difference in sensitivity was -5.6% (90% CI: -14.4%, 3.3%), which did show non-inferiority. We conclude that unsupervised HST is feasible in rural Africa and may be non-inferior to provider-supervised HST.
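The non-inferiority comparison rests on a difference in sensitivities with a 90% confidence interval and a pre-specified margin. A sketch using the standard two-proportion Wald interval (the counts in the test are invented; this is the textbook formula, not necessarily the trial's exact analysis):

```python
# Sketch: sensitivity difference with a Wald confidence interval,
# plus a non-inferiority check against a fixed margin.

def sensitivity_diff_ci(tp_test, n_test, tp_ref, n_ref, z=1.645):
    """Difference in sensitivity (test minus reference) with a Wald CI.
    z = 1.645 gives a two-sided 90% interval, as in the study design."""
    p1, p2 = tp_test / n_test, tp_ref / n_ref
    diff = p1 - p2
    se = (p1 * (1 - p1) / n_test + p2 * (1 - p2) / n_ref) ** 0.5
    return diff, (diff - z * se, diff + z * se)

def non_inferior(ci_lower, margin=-0.10):
    """Non-inferiority holds when the CI's lower bound stays above the margin."""
    return ci_lower > margin
```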
Turkers in Africa: A Crowdsourcing Approach to Improving Agricultural Landcover Maps
NASA Astrophysics Data System (ADS)
Estes, L. D.; Caylor, K. K.; Choi, J.
2012-12-01
In the coming decades a substantial portion of Africa is expected to be transformed to agriculture. The scale of this conversion may match or exceed that which occurred in the Brazilian Cerrado and Argentinian Pampa in recent years. Tracking the rate and extent of this conversion will depend on having an accurate baseline of the current extent of croplands. Continent-wide baseline data do exist, but the accuracy of these relatively coarse-resolution, remotely sensed assessments is suspect in many regions. To develop more accurate maps of the distribution and nature of African croplands, we develop a distributed "crowdsourcing" approach that harnesses human eyeballs and image-interpretation capabilities. Our initial goal is to assess the accuracy of existing agricultural land cover maps, but ultimately we aim to generate "wall-to-wall" cropland maps that can be revisited and updated to track agricultural transformation. Our approach utilizes the freely available, high-resolution satellite imagery provided by Google Earth, combined with Amazon.com's Mechanical Turk platform, an online service that provides a large, global pool of workers (known as "Turkers") who perform "Human Intelligence Tasks" (HITs) for a fee. Using open-source R and Python software, we select a random sample of 1 km2 cells from a grid placed over our study area, stratified by field-density classes drawn from one of the coarse-scale land cover maps, and send these in batches to Mechanical Turk for processing. Each Turker is required to complete an initial training session, on the basis of which they are assigned an accuracy score that determines whether they may proceed with mapping tasks. Completed mapping tasks are automatically retrieved and processed on our server, and subjected to two further quality-control measures.
The first of these measures the spatial accuracy of Turker-mapped areas against "gold standard" maps from selected locations that are randomly inserted (at relatively low frequency, ~1/100) into batches sent to Mechanical Turk. This check provides a measure of overall map accuracy and is used to update each Turker's accuracy score, which is the basis for determining pay rates. The second measure compares the area of each Turker's mapped results with the expected area derived from existing land cover data, accepting or rejecting each Turker's batch based on how closely the two distributions match, with accuracy scores adjusted accordingly. These two checks balance the need to ensure mapping quality against the overall cost of the project. Our initial study is developed for South Africa, where an existing dataset of hand-digitized fields commissioned by the South African Department of Agriculture provides our validation and gold-standard data. We compare our Turker-produced results with these existing maps, and with the coarser-scale land cover datasets, providing insight into their relative accuracies, classified according to cropland type (e.g., small-scale/subsistence cropping; large-scale commercial farms), and provide information on the cost-effectiveness of our approach.
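The gold-standard check amounts to scoring each worker's submitted map against reference polygons and folding that score into a running accuracy estimate. One simple overlap-based sketch, where maps are rasterised to sets of grid cells (the cell representation, threshold, and smoothing weight are assumptions, not the project's actual scoring rules):

```python
# Sketch: overlap-based quality control for crowdsourced field maps.

def iou(mapped_cells, gold_cells):
    """Intersection-over-union of two rasterised field maps, each given
    as a set of (row, col) grid cells marked as cropland."""
    a, b = set(mapped_cells), set(gold_cells)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def update_score(old_score, mapped_cells, gold_cells, weight=0.2):
    """Exponentially weighted running accuracy score for one worker."""
    return (1 - weight) * old_score + weight * iou(mapped_cells, gold_cells)
```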
Investigation into discretization methods of the six-parameter Iwan model
NASA Astrophysics Data System (ADS)
Li, Yikun; Hao, Zhiming; Feng, Jiaquan; Zhang, Dingguo
2017-02-01
The Iwan model is widely applied to describe nonlinear mechanisms of jointed structures. In this paper, parameter identification procedures for the six-parameter Iwan model, based on joint experiments with different preload techniques, are performed. Four discretization methods deduced from the stiffness equation of the six-parameter Iwan model are provided, which can be used to discretize the integral-form Iwan model into a sum of finitely many Jenkins elements. In finite element simulation, the influences of the discretization methods and the number of Jenkins elements on computational accuracy are discussed. Simulation results indicate that higher accuracy can be obtained with larger numbers of Jenkins elements. It is also shown that, compared with the other three discretization methods, the geometric series discretization based on stiffness provides the highest computational accuracy.
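A Jenkins element is a linear spring in series with a Coulomb slider, and a discretized Iwan model sums many of them in parallel. A minimal sketch for monotonic loading from rest; the geometric spacing of slip forces below merely illustrates the idea of a geometric-series discretization and is not the paper's exact six-parameter scheme:

```python
# Sketch: discretized Iwan model as parallel Jenkins elements.

def jenkins_force(x, elements):
    """Restoring force of parallel Jenkins elements under monotonic loading
    from rest: each element responds elastically (k*x) until its slip force
    f is reached. `elements` is a list of (stiffness k, slip force f) pairs."""
    return sum(min(k * x, f) for k, f in elements)

def geometric_elements(k_total, f_max, n, ratio=0.5):
    """Illustrative geometric-series discretization: equal stiffness shares,
    slip forces spaced by a constant ratio (all parameters are assumptions)."""
    return [(k_total / n, f_max * ratio ** i) for i in range(n)]
```

For small displacements all elements stick, so the force is simply `k_total * x`; as displacement grows, elements slip one by one and the force saturates, which is the hysteretic softening the Iwan model captures.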
NASA Astrophysics Data System (ADS)
Juliane, C.; Arman, A. A.; Sastramihardja, H. S.; Supriana, I.
2017-03-01
Having motivation to learn is a requirement for success in a learning process, and it needs to be maintained properly. This study aims to measure learning motivation, especially in the process of electronic learning (e-learning). A data mining approach was chosen as the research method. For the testing process, a comparative study of accuracy across different testing techniques was conducted, involving cross-validation and percentage split. The best accuracy was generated by the J48 algorithm with a percentage-split technique, reaching 92.19%. This study provides an overview of how to detect the presence of learning motivation in the context of e-learning. It is expected to be a useful contribution to education, alerting teachers to the learners for whom they need to provide motivation.
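The two evaluation protocols compared, percentage split and cross-validation, differ only in how held-out data are chosen. A sketch of the percentage-split protocol with a stand-in classifier (the majority-class baseline here replaces J48, which is Weka's C4.5 decision tree implementation; the split fraction mirrors Weka's default):

```python
# Sketch: percentage-split evaluation with a trivial stand-in classifier.
import random

def majority_fit_predict(train_labels, n_test):
    """Stand-in classifier: always predicts the most common training label."""
    majority = max(set(train_labels), key=train_labels.count)
    return [majority] * n_test

def percentage_split_accuracy(labels, train_frac=0.66, seed=0):
    """Shuffle, hold out (1 - train_frac) of the data, and score accuracy."""
    idx = list(range(len(labels)))
    random.Random(seed).shuffle(idx)
    cut = int(train_frac * len(idx))
    train = [labels[i] for i in idx[:cut]]
    test = [labels[i] for i in idx[cut:]]
    preds = majority_fit_predict(train, len(test))
    return sum(p == t for p, t in zip(preds, test)) / len(test)
```

Cross-validation would instead rotate the held-out block over k folds and average the k accuracies, which generally gives a less variable estimate on small datasets.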
NASA Astrophysics Data System (ADS)
Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul
2018-07-01
Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map agrees with reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design, and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used probability sampling strategy, slightly ahead of stratified sampling. Office-interpreted remotely sensed data were the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology, such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase in OBIA articles using per-polygon rather than per-pixel approaches for accuracy assessment. We clarify the respective impacts of the per-pixel and per-polygon approaches on sampling, response design, and accuracy analysis. Our review defines the technical and methodological needs of current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons, and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.
Thematic accuracy of the NLCD 2001 land cover for the conterminous United States
Wickham, J.D.; Stehman, S.V.; Fry, J.A.; Smith, J.H.; Homer, Collin G.
2010-01-01
The land-cover thematic accuracy of NLCD 2001 was assessed from a probability sample of 15,000 pixels. Nationwide, NLCD 2001 overall Anderson Level II and Level I accuracies were 78.7% and 85.3%, respectively. By comparison, overall accuracies at Level II and Level I for NLCD 1992 were 58% and 80%. Forest and cropland were two classes showing substantial improvements in accuracy in NLCD 2001 relative to NLCD 1992: NLCD 2001 forest and cropland user's accuracies were 87% and 82%, respectively, compared to 80% and 43% for NLCD 1992. Accuracy results are reported for 10 geographic regions of the United States, with regional overall accuracies ranging from 68% to 86% at Level II and from 79% to 91% at Level I. Geographic variation in class-specific accuracy was strongly associated with the phenomenon that regionally more abundant land-cover classes had higher accuracy. Accuracy estimates based on several definitions of agreement are reported to indicate the potential impact of reference data error on accuracy. Drawing on our experience from two NLCD national accuracy assessments, we discuss the use of designs incorporating auxiliary data to more seamlessly quantify reference data quality as a means to further advance thematic map accuracy assessment.
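Overall and user's accuracies like those reported are direct functions of the error (confusion) matrix. A small sketch with an invented two-class matrix (rows are map labels, columns are reference labels):

```python
# Sketch: map-accuracy measures from a confusion matrix.

def overall_accuracy(matrix):
    """Overall accuracy: trace over total, for a square confusion matrix
    whose rows are map classes and columns are reference classes."""
    total = sum(sum(row) for row in matrix)
    return sum(matrix[i][i] for i in range(len(matrix))) / total

def users_accuracy(matrix, i):
    """User's accuracy of class i: correctly mapped pixels of the class
    divided by all pixels mapped as that class (the row total), i.e.
    one minus the commission error."""
    return matrix[i][i] / sum(matrix[i])
```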
NASA Astrophysics Data System (ADS)
Austin, Rickey W.
In Einstein's theory of Special Relativity (SR), one method to derive relativistic kinetic energy is to apply the classical work-energy theorem to relativistic momentum. This approach starts with the classical work-energy theorem and applies SR's momentum to the derivation; one outcome is relativistic kinetic energy. From this result, it is straightforward to form a kinetic-energy-based time dilation function. In the derivation of General Relativity, a common approach is to bypass classical laws as a starting point; instead, a rigorous development of differential geometry and Riemannian space is constructed, from which classically based laws are derived. This contrasts with SR's approach of starting with classical laws and applying the consequences of the universal speed of light for all observers. A possible method to derive time dilation due to Newtonian gravitational potential energy (NGPE) is to apply SR's approach to deriving relativistic kinetic energy. It will be shown that this method gives first-order accuracy compared to Schwarzschild's metric. SR's kinetic energy and the newly derived NGPE term are combined to form a Riemannian metric based on these two energies. A geodesic is derived, and calculations are compared to Schwarzschild's geodesic for a test mass orbiting a central, non-rotating, non-charged massive body. The new metric results in highly accurate calculations when compared to General Relativity's predictions. The new method provides a candidate approach for starting with classical laws and deriving General Relativity effects, mimicking SR's method of starting with classical mechanics when deriving relativistic equations. As a complement to introducing General Relativity, it provides a plausible scaffolding from classical physics when teaching introductory General Relativity. A straightforward path from classical laws to General Relativity is derived, providing a minimum of first-order accuracy relative to Schwarzschild's solution to Einstein's field equations.
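The kinetic-energy route described can be made explicit; the sketch below applies the work-energy theorem to relativistic momentum and then substitutes the Newtonian potential (a compressed illustration of the approach, not the paper's full derivation):

```latex
KE \;=\; \int F\,dx \;=\; \int \frac{d(\gamma m v)}{dt}\,dx \;=\; (\gamma - 1)\,m c^{2},
\qquad
\frac{d\tau}{dt} \;=\; \frac{1}{\gamma} \;=\; \left(1 + \frac{KE}{m c^{2}}\right)^{-1}.
```

Substituting the Newtonian gravitational potential energy per unit rest energy in place of $KE/(mc^{2})$ gives $d\tau/dt \approx 1 - GM/(rc^{2})$, which matches the Schwarzschild factor $\sqrt{1 - 2GM/(rc^{2})} \approx 1 - GM/(rc^{2})$ to first order in $GM/(rc^{2})$, consistent with the first-order accuracy claimed.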
Inattentional blindness increased with augmented reality surgical navigation.
Dixon, Benjamin J; Daly, Michael J; Chan, Harley H L; Vescan, Allan; Witterick, Ian J; Irish, Jonathan C
2014-01-01
Augmented reality (AR) surgical navigation systems, designed to increase accuracy and efficiency, have been shown to negatively impact attention. We wished to assess the effect "head-up" AR displays have on attention, efficiency, and accuracy while performing a surgical task, compared with the same information being presented on a submonitor (SM). Fifty subjects, experienced otolaryngology surgeons (n = 42) and senior otolaryngology trainees (n = 8), performed an endoscopic surgical navigation exercise on a predissected cadaveric model. Computed tomography-generated anatomic contours were fused with the endoscopic image to provide an AR view. Subjects were randomized to perform the task either with a standard endoscopic monitor and the AR navigation displayed on an SM, or with AR as a single display. Accuracy, task completion time, and the recognition of unexpected findings (a foreign body and a critical complication) were recorded. Recognition of the foreign body was significantly better in the SM group (15/25 [60%]) than in the AR-alone group (8/25 [32%]; p = 0.02). There was no significant difference in task completion time (p = 0.83) or accuracy (p = 0.78) between the two groups. Providing identical surgical navigation on an SM, rather than on a single head-up display, reduced the level of inattentional blindness as measured by detection of unexpected findings. These gains were achieved without any measurable impact on efficiency or accuracy. AR displays may distract the user, and we caution against injudicious adoption of this technology for medical procedures.
Arend, Carlos Frederico; Arend, Ana Amalia; da Silva, Tiago Rodrigues
2014-06-01
The aim of our study was to systematically compare different methodologies in order to establish an evidence-based approach, based on tendon thickness and structure, for the sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. US was obtained from 164 symptomatic patients with supraspinatus tendinopathy detected at MRI and 42 asymptomatic controls with normal MRI. Diagnostic yield was calculated for maximal supraspinatus tendon thickness (MSTT) and tendon structure as isolated criteria and using different combinations of parallel and sequential testing at US. Chi-squared tests were performed to assess the sensitivity, specificity, and accuracy of the different diagnostic approaches. Mean MSTT was 6.68 mm in symptomatic patients and 5.61 mm in asymptomatic controls (P < .05). When used as an isolated criterion, MSTT > 6.0 mm provided the best accuracy (93.7%) compared to other measurements of tendon thickness. Also as an isolated criterion, abnormal tendon structure (ATS) yielded 93.2% diagnostic accuracy. The best overall yield was obtained by both parallel and sequential testing using either MSTT > 6.0 mm or ATS as diagnostic criteria in no particular order, which provided 99.0% accuracy, 100% sensitivity, and 95.2% specificity. Among the parallel and sequential tests that provided the best overall yield, additional analysis revealed that sequential testing evaluating tendon structure first required assessment of 258 criteria (vs. 261 for sequential testing evaluating tendon thickness first and 412 for parallel testing) and demanded a mean of 16.1 s to assess diagnostic criteria and reach the diagnosis (vs. 43.3 s for sequential testing evaluating tendon thickness first and 47.4 s for parallel testing). We found that using either MSTT > 6.0 mm or ATS as diagnostic criteria for both parallel and sequential testing provides the best overall yield for sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. Among these strategies, a two-step sequential approach assessing tendon structure first was advantageous because it required fewer criteria to be assessed and less time to reach the diagnosis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
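The "either criterion positive" rule the authors converge on combines the two criteria with a logical OR; the sequential variant reaches the same decision but skips the second assessment whenever the first is positive, which is why it needs fewer evaluations. A sketch of the decision rule (the 6.0 mm threshold comes from the abstract; the patient values in the test are invented):

```python
# Sketch: parallel (OR) vs. sequential application of the two US criteria.

def supraspinatus_positive(mstt_mm, abnormal_structure, threshold=6.0):
    """Parallel (OR) testing: positive if tendon thickness exceeds the
    threshold or tendon structure is abnormal."""
    return mstt_mm > threshold or abnormal_structure

def sequential_structure_first(mstt_mm, abnormal_structure, threshold=6.0):
    """Two-step sequential testing, structure assessed first: thickness is
    only evaluated when structure is normal. Same final decision as the
    OR rule, with fewer assessments on average."""
    if abnormal_structure:
        return True
    return mstt_mm > threshold
```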
Baksi, B Güniz
2008-07-01
The aim of this study was to compare the subjective diagnostic quality of F-speed film images and original and enhanced storage phosphor plate (SPP) digital images for the visualization of the periodontal ligament space (PLS), periapical bone (PB), and alveolar crestal bone (CB), and to assess the accuracy of these image modalities for the measurement of alveolar bone levels. Standardized images of six dried mandibles were obtained with film and Digora SPPs. Six evaluators rated the visibility of anatomical structures using a three-point scale. Alveolar bone levels were measured from the coronal-most tip of the marginal bone to a reference point. Results were compared using Friedman and Wilcoxon signed-ranks tests. The kappa (κ) statistic was used to measure agreement among observers. The measurements were compared using repeated-measures analysis of variance and Bonferroni tests (P = 0.05). A paired t test was used for comparison with true bone levels (P = 0.05). Enhanced SPP images were rated superior, followed by film and then the original SPP images, for the evaluation of anatomical structures. The value of κ rose from fair to substantial after enhancement of the SPP images. Film and enhanced SPP images provided alveolar bone lengths close to the true bone lengths. Enhancement of digital images provided better visibility and resulted in accuracy comparable to film images for the evaluation of periodontal structures.
Duvekot, Jorieke; van der Ende, Jan; Verhulst, Frank C; Greaves-Lord, Kirstin
2015-06-01
The screening accuracy of the parent- and teacher-reported Social Responsiveness Scale (SRS) was compared with an autism spectrum disorder (ASD) classification according to (1) the Developmental, Dimensional, and Diagnostic Interview (3Di), (2) the Autism Diagnostic Observation Schedule (ADOS), and (3) both the 3Di and ADOS, in 186 children referred to six mental health centers. The parent report showed excellent correspondence to an ASD classification according to the 3Di and to both the 3Di and ADOS. The teacher report added significantly to screening accuracy over and above the parent report when compared with the ADOS classification. Findings support the screening utility of the parent-reported SRS among clinically referred children and indicate that different informants may provide unique information relevant to ASD assessment.
Hybrid simplified spherical harmonics with diffusion equation for light propagation in tissues.
Chen, Xueli; Sun, Fangfang; Yang, Defu; Ren, Shenghan; Zhang, Qian; Liang, Jimin
2015-08-21
To address the limitations of the simplified spherical harmonics approximation (SPN) and the diffusion equation (DE) in describing light propagation in tissues, a hybrid simplified spherical harmonics with diffusion equation (HSDE) based diffuse light transport model is proposed. In the HSDE model, the living body is first segmented into several major organs, and the organs are then divided into high-scattering tissues and other tissues. DE and SPN are employed to describe light propagation in these two kinds of tissues, respectively, and are coupled using an established boundary coupling condition. The HSDE model exploits the respective advantages of SPN and DE while avoiding their disadvantages, providing a favorable balance between accuracy and computation time. Using the finite element method, the HSDE is solved for the light flux density map on the body surface. The accuracy and efficiency of the HSDE are validated with simulations based on both regular geometries and a digital mouse model. The results reveal that, compared with the SPN model, comparable accuracy is achieved with much less computation time, along with much better accuracy than the DE model.
Chiu, Herng-Chia; Ho, Te-Wei; Lee, King-Teh; Chen, Hong-Yaw; Ho, Wen-Hsien
2013-01-01
The aim of the present study was firstly to compare significant predictors of mortality for hepatocellular carcinoma (HCC) patients undergoing resection between artificial neural network (ANN) and logistic regression (LR) models, and secondly to evaluate the predictive accuracy of ANN and LR in different survival-year estimation models. We constructed a prognostic model for 434 patients with 21 potential input variables using a Cox regression model. Model performance was measured by the number of significant predictors and by predictive accuracy. The results indicated that the ANN models had two to three times as many significant predictors in the 1-, 3-, and 5-year survival models as the LR models. Scores of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) of the 1-, 3-, and 5-year survival estimation models using ANN were superior to those of LR in all the training sets and most of the validation sets. The study demonstrated that ANN not only identified a greater number of significant predictors of mortality but also provided more accurate prediction compared with conventional methods. It is suggested that physicians consider using data mining methods as supplemental tools for clinical decision-making and prognostic evaluation. PMID:23737707
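The AUROC used to compare the ANN and LR models can be computed directly from ranked scores via the rank-sum (Mann-Whitney) identity: it equals the probability that a randomly chosen positive case outranks a randomly chosen negative one. A compact sketch (the scores and labels in the test are invented):

```python
# Sketch: AUROC from predicted scores and binary outcome labels.

def auroc(scores, labels):
    """Area under the ROC curve via pairwise comparisons: the fraction of
    (positive, negative) pairs where the positive case scores higher,
    counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This quadratic-time version is fine for illustration; production code would sort once and use ranks.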
Perez-Cruz, Pedro E; Dos Santos, Renata; Silva, Thiago Buosi; Crovador, Camila Souza; Nascimento, Maria Salete de Angelis; Hall, Stacy; Fajardo, Julieta; Bruera, Eduardo; Hui, David
2014-11-01
Survival prognostication is important during the end of life. The accuracy of clinician prediction of survival (CPS) over time has not been well characterized. The aims of the study were to examine changes in prognostication accuracy during the last 14 days of life in a cohort of patients with advanced cancer admitted to two acute palliative care units and to compare the accuracy between the temporal and probabilistic approaches. Physicians and nurses prognosticated survival daily for cancer patients in two hospitals until death/discharge using two prognostic approaches: temporal and probabilistic. We assessed accuracy for each method daily during the last 14 days of life comparing accuracy at Day -14 (baseline) with accuracy at each time point using a test of proportions. A total of 6718 temporal and 6621 probabilistic estimations were provided by physicians and nurses for 311 patients, respectively. Median (interquartile range) survival was 8 days (4-20 days). Temporal CPS had low accuracy (10%-40%) and did not change over time. In contrast, probabilistic CPS was significantly more accurate (P < .05 at each time point) but decreased close to death. Probabilistic CPS was consistently more accurate than temporal CPS over the last 14 days of life; however, its accuracy decreased as patients approached death. Our findings suggest that better tools to predict impending death are necessary. Copyright © 2014 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Critically re-evaluating a common technique: Accuracy, reliability, and confirmation bias of EMG.
Narayanaswami, Pushpa; Geisbush, Thomas; Jones, Lyell; Weiss, Michael; Mozaffar, Tahseen; Gronseth, Gary; Rutkove, Seward B
2016-01-19
(1) To assess the diagnostic accuracy of EMG in radiculopathy. (2) To evaluate the intrarater reliability and interrater reliability of EMG in radiculopathy. (3) To assess the presence of confirmation bias in EMG. Three experienced academic electromyographers interpreted 3 compact discs with 20 EMG videos (10 normal, 10 radiculopathy) in a blinded, standardized fashion without information regarding the nature of the study. The EMGs were interpreted 3 times (discs A, B, C) 1 month apart. Clinical information was provided only with disc C. Intrarater reliability was calculated by comparing interpretations in discs A and B, interrater reliability by comparing interpretation between reviewers. Confirmation bias was estimated by the difference in correct interpretations when clinical information was provided. Sensitivity was similar to previous reports (77%, confidence interval [CI] 63%-90%); specificity was 71%, CI 56%-85%. Intrarater reliability was good (κ 0.61, 95% CI 0.41-0.81); interrater reliability was lower (κ 0.53, CI 0.35-0.71). There was no substantial confirmation bias when clinical information was provided (absolute difference in correct responses 2.2%, CI -13.3% to 17.7%); the study lacked precision to exclude moderate confirmation bias. This study supports that (1) serial EMG studies should be performed by the same electromyographer since intrarater reliability is better than interrater reliability; (2) knowledge of clinical information does not bias EMG interpretation substantially; (3) EMG has moderate diagnostic accuracy for radiculopathy with modest specificity and electromyographers should exercise caution interpreting mild abnormalities. This study provides Class III evidence that EMG has moderate diagnostic accuracy and specificity for radiculopathy. © 2015 American Academy of Neurology.
Study design requirements for RNA sequencing-based breast cancer diagnostics.
Mer, Arvind Singh; Klevebring, Daniel; Grönberg, Henrik; Rantalainen, Mattias
2016-02-01
Sequencing-based molecular characterization of tumors provides information required for individualized cancer treatment. There are well-defined molecular subtypes of breast cancer that provide improved prognostication compared to routine biomarkers. However, molecular subtyping is not yet implemented in routine breast cancer care. Clinical translation is dependent on subtype prediction models providing high sensitivity and specificity. In this study we evaluate sample size and RNA-sequencing read requirements for breast cancer subtyping to facilitate rational design of translational studies. We applied subsampling to ascertain the effect of training sample size and the number of RNA sequencing reads on classification accuracy of molecular subtype and routine biomarker prediction models (unsupervised and supervised). Subtype classification accuracy improved with increasing sample size up to N = 750 (accuracy = 0.93), although with a modest improvement beyond N = 350 (accuracy = 0.92). Prediction of routine biomarkers achieved accuracy of 0.94 (ER) and 0.92 (Her2) at N = 200. Subtype classification improved with RNA-sequencing library size up to 5 million reads. Development of molecular subtyping models for cancer diagnostics requires well-designed studies. Sample size and the number of RNA sequencing reads directly influence accuracy of molecular subtyping. Results in this study provide key information for rational design of translational studies aiming to bring sequencing-based diagnostics to the clinic.
Farrell, Todd R.; Weir, Richard F. ff.
2011-01-01
The use of surface versus intramuscular electrodes as well as the effect of electrode targeting on pattern-recognition-based multifunctional prosthesis control was explored. Surface electrodes are touted for their ability to record activity from relatively large portions of muscle tissue. Intramuscular electromyograms (EMGs) can provide focal recordings from deep muscles of the forearm and independent signals relatively free of crosstalk. However, little work has been done to compare the two. Additionally, while previous investigations have either targeted electrodes to specific muscles or used untargeted (symmetric) electrode arrays, no work has compared these approaches to determine if one is superior. The classification accuracies of pattern-recognition-based classifiers utilizing surface and intramuscular as well as targeted and untargeted electrodes were compared across 11 subjects. A repeated-measures analysis of variance revealed that when only EMG amplitude information was used from all available EMG channels, the targeted surface, targeted intramuscular, and untargeted surface electrodes produced similar classification accuracies while the untargeted intramuscular electrodes produced significantly lower accuracies. However, no statistical differences were observed between any of the electrode conditions when additional features were extracted from the EMG signal. It was concluded that the choice of electrode should be driven by clinical factors, such as signal robustness/stability, cost, etc., instead of by classification accuracy. PMID:18713689
Eye movements provide insights into the conscious use of context in prospective memory.
Bowden, Vanessa K; Smith, Rebekah E; Loft, Shayne
2017-07-01
Prior research examining the impact of context on prospective memory (PM) has produced mixed results. Our study aimed to determine whether providing progressive context information could increase PM accuracy and reduce costs to ongoing tasks. Seventy-two participants made ongoing true/false judgements for simple sentences while maintaining a PM intention to respond differently to four memorised words. Participants in the context condition were informed of the trial numbers on which PM targets could appear, and eye tracking recorded how frequently they fixated the trial number. The context condition showed reduced costs during irrelevant contexts, increased costs during relevant contexts, and better PM accuracy compared to a standard condition that was not provided with context. The context condition also made an increasing number of trial-number fixations leading up to relevant contexts, indicating the conscious use of context. Furthermore, this trial-number checking was beneficial to PM: participants who checked more frequently had better PM accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
Steganalysis using logistic regression
NASA Astrophysics Data System (ADS)
Lubenko, Ivans; Ker, Andrew D.
2011-02-01
We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods - it estimates class probabilities in addition to providing a simple classification - and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows classification accuracy comparable to SVM methods. This work is a case study comparing the accuracy and speed of SVM and LR classifiers in the detection of LSB Matching and other related spatial-domain image steganography, using the state-of-the-art 686-dimensional SPAM feature set, on three image sets.
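The abstract's key contrast - LR yields calibrated class probabilities where an SVM yields only a decision value - can be sketched as follows. Random Gaussian feature vectors stand in for the 686-dimensional SPAM features; the mean shift simulating an embedding trace is an assumption for illustration.

```python
# Logistic regression gives P(stego) directly; a linear SVM gives only an
# unscaled signed margin. Features here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
cover = rng.normal(0.0, 1.0, size=(200, 686))  # cover-image features
stego = rng.normal(0.3, 1.0, size=(200, 686))  # shifted: embedding trace
X = np.vstack([cover, stego])
y = np.array([0] * 200 + [1] * 200)

lr = LogisticRegression(max_iter=2000).fit(X, y)
svm = LinearSVC(max_iter=5000).fit(X, y)

x = stego[:1]
print("LR  P(stego):", lr.predict_proba(x)[0, 1])   # probability in [0, 1]
print("SVM decision:", svm.decision_function(x)[0])  # unscaled margin only
```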
Misawa, Masashi; Kudo, Shin-Ei; Mori, Yuichi; Takeda, Kenichi; Maeda, Yasuharu; Kataoka, Shinichi; Nakamura, Hiroki; Kudo, Toyoki; Wakamura, Kunihiko; Hayashi, Takemasa; Katagiri, Atsushi; Baba, Toshiyuki; Ishida, Fumio; Inoue, Haruhiro; Nimura, Yukitaka; Oda, Masahiro; Mori, Kensaku
2017-05-01
Real-time characterization of colorectal lesions during colonoscopy is important for reducing medical costs, given that the need for a pathological diagnosis can be omitted if the accuracy of the diagnostic modality is sufficiently high. However, it is sometimes difficult for community-based gastroenterologists to achieve the required level of diagnostic accuracy. In this regard, we developed a computer-aided diagnosis (CAD) system based on endocytoscopy (EC) to evaluate cellular, glandular, and vessel structure atypia in vivo. The purpose of this study was to compare the diagnostic ability and efficacy of this CAD system with the performances of human expert and trainee endoscopists. We developed a CAD system based on EC with narrow-band imaging that allowed microvascular evaluation without dye (ECV-CAD). The CAD algorithm was programmed based on texture analysis and provided a two-class diagnosis of neoplastic or non-neoplastic, with probabilities. We validated the diagnostic ability of the ECV-CAD system using 173 randomly selected EC images (49 non-neoplasms, 124 neoplasms). The images were evaluated by the CAD and by four expert endoscopists and three trainees. The diagnostic accuracies for distinguishing between neoplasms and non-neoplasms were calculated. ECV-CAD had higher overall diagnostic accuracy than trainees (87.8 vs 63.4%; [Formula: see text]), but similar to experts (87.8 vs 84.2%; [Formula: see text]). With regard to high-confidence cases, the overall accuracy of ECV-CAD was also higher than trainees (93.5 vs 71.7%; [Formula: see text]) and comparable to experts (93.5 vs 90.8%; [Formula: see text]). ECV-CAD showed better diagnostic accuracy than trainee endoscopists and was comparable to that of experts. ECV-CAD could thus be a powerful decision-making tool for less-experienced endoscopists.
Botti, Lorenzo; Paliwal, Nikhil; Conti, Pierangelo; Antiga, Luca; Meng, Hui
2018-06-01
Image-based computational fluid dynamics (CFD) has shown potential to aid in the clinical management of intracranial aneurysms (IAs), but its adoption in clinical practice has been missing, partially due to a lack of accuracy assessment and sensitivity analysis. To numerically solve the flow-governing equations, CFD solvers generally rely on two spatial discretization schemes: Finite Volume (FV) and Finite Element (FE). Since increasingly accurate numerical solutions are obtained by different means, the accuracies and computational costs of FV and FE formulations cannot be compared directly. To this end, in this study we benchmark two representative CFD solvers in simulating flow in a patient-specific IA model: (1) ANSYS Fluent, a commercial FV-based solver, and (2) VMTKLab multidGetto, a discontinuous Galerkin (dG) FE-based solver. The FV solver's accuracy is improved by increasing the spatial mesh resolution (134k, 1.1M, 8.6M and 68.5M tetrahedral element meshes). The dGFE solver's accuracy is increased by increasing the degree of the polynomials (first, second, third and fourth degree) on the base 134k tetrahedral element mesh. Solutions from the best FV and dGFE approximations are used as the baseline for error quantification. On average, velocity errors for the second-best approximations are approximately 1 cm/s for a velocity magnitude field spanning [0, 125] cm/s. Results show that high-order dGFE provides better accuracy per degree of freedom but worse accuracy per Jacobian non-zero entry compared with FV. Cross-comparison of velocity errors demonstrates asymptotic convergence of both solvers to the same numerical solution. Nevertheless, the discrepancy between under-resolved velocity fields suggests that mesh independence is reached following different paths. This article is protected by copyright. All rights reserved.
Van Hemelen, Geert; Van Genechten, Maarten; Renier, Lieven; Desmedt, Maria; Verbruggen, Elric; Nadjmi, Nasser
2015-07-01
Throughout the history of computing, closing the gap between the physical world and the digital world behind the screen has been a constant goal. Recent advances in three-dimensional (3D) virtual surgery programs have reduced this gap significantly. Although 3D-assisted surgery is now widely available for orthognathic surgery, it remains debatable whether a 3D virtual planning approach is a better alternative to a conventional two-dimensional (2D) planning technique. The purpose of this study was to compare the accuracy of a traditional 2D technique and a 3D computer-aided prediction method. A double-blind randomised prospective study was performed to compare the prediction accuracy of a traditional 2D planning technique versus a 3D computer-aided planning approach. The accuracy of the hard and soft tissue profile predictions using both planning methods was investigated. There was a statistically significant difference between 2D and 3D soft tissue planning (p < 0.05). The statistically significant differences found between each planning method and the actual soft tissue outcome were not matched by a statistically significant difference between the methods themselves. The 3D planning approach provides more accurate soft tissue planning. However, 2D orthognathic planning is comparable to 3D planning when it comes to hard tissue planning. This study provides relevant results for choosing between 3D and 2D planning in clinical practice. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Yang, Xiaoyan; Chen, Longgao; Li, Yingkui; Xi, Wenjia; Chen, Longqian
2015-07-01
Land use/land cover (LULC) inventory provides an important dataset in regional planning and environmental assessment. To efficiently obtain the LULC inventory, we compared LULC classifications based on single satellite images with a rule-based classification based on multi-seasonal imagery in Lianyungang City, a coastal city in China, using CBERS-02 (the 2nd China-Brazil Environmental Resource Satellites) images. The overall accuracies of the classifications based on single images are 78.9, 82.8, and 82.0% in winter, early summer, and autumn, respectively. The rule-based classification improves the accuracy to 87.9% (kappa 0.85), suggesting that combining multi-seasonal images can considerably improve classification accuracy over any single-image classification. This method could also be used to analyze seasonal changes of LULC types, especially those associated with tidal changes in coastal areas. The distribution and inventory of LULC types with an overall accuracy of 87.9% and a spatial resolution of 19.5 m can efficiently assist regional planning and environmental assessment in Lianyungang City. This rule-based classification provides guidance for improving accuracy in coastal areas with distinct temporal spectral LULC features.
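The rule-based fusion of per-season classifications can be sketched in a few lines: each pixel receives one label per season, and simple rules resolve the final class. The class codes and the paddy/tidal-flat rules below are hypothetical illustrations, not the paper's actual rule set.

```python
# Fuse three single-season class maps with simple per-pixel rules.
import numpy as np

WATER, VEG, BARE, BUILT = 0, 1, 2, 3
PADDY, TIDAL_FLAT = 4, 5  # classes only resolvable across seasons

# One label per season for four example pixels:
winter = np.array([WATER, WATER, BUILT, VEG])
summer = np.array([VEG,   WATER, BUILT, VEG])
autumn = np.array([WATER, BARE,  BUILT, VEG])

final = summer.copy()                                     # default label
final[(winter == WATER) & (summer == VEG)] = PADDY        # flooded in winter
final[(winter == WATER) & (autumn == BARE)] = TIDAL_FLAT  # exposed at low tide

print(final)  # → [4 5 3 1]
```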
Parameter estimation accuracies of Galactic binaries with eLISA
NASA Astrophysics Data System (ADS)
Błaut, Arkadiusz
2018-09-01
We study the parameter estimation accuracy of nearly monochromatic sources of gravitational waves with future eLISA-like detectors. eLISA will be capable of observing millions of such signals, generated by orbiting compact binaries consisting of white dwarfs, neutron stars or black holes, and of resolving and estimating the parameters of several thousand of them, providing crucial information regarding their orbital dynamics, formation rates and evolutionary paths. Using the Fisher matrix analysis we compare the accuracies of the estimated parameters for different mission designs defined by the GOAT advisory team, which was established to assess the scientific capabilities and the technological issues of the eLISA-like missions.
Ooi, Choon Ean; Rofe, Olivia; Vienet, Michelle; Elliott, Rohan A
2017-04-01
Background: Discontinuity of care between hospital and primary care is often due to poor information transfer. Medication information in medical discharge summaries (DS) is often incomplete or incorrect. The effectiveness and feasibility of hospital pharmacists communicating medication information, including changes made in the hospital, is not clearly defined. Objective: To explore the impact of a pharmacist-prepared Discharge Medication Management Summary (DMMS) on the accuracy of information about medication changes provided to patients' general practitioners (GPs). Setting: Two medical wards at a major metropolitan hospital in Australia. Method: An intervention was developed in which ward pharmacists communicated medication change information to GPs using the DMMS. Retrospective audits were conducted at baseline and after implementation of the DMMS to compare the accuracy of information provided by doctors and pharmacists. GPs' satisfaction with the DMMS was assessed through a faxed survey. Main outcome measure: Accuracy of medication change information communicated to GPs; GP satisfaction; and feasibility of a pharmacist-prepared DMMS. Results: At baseline, 263/573 (45.9%) medication changes were documented by doctors in the DS. In the post-intervention audit, more medication changes were documented in the pharmacist-prepared DMMS compared to the doctor-prepared DS (72.8% vs. 31.5%; p < 0.001). Most GPs (73.3%) were satisfied with the information provided and wanted to receive the DMMS in the future. Completing the DMMS took pharmacists an average of 11.7 minutes. Conclusion: The accuracy of medication information transferred upon discharge can be improved by expanding the role of hospital pharmacists to include documenting medication changes.
Comparing Features for Classification of MEG Responses to Motor Imagery.
Halme, Hanna-Leena; Parkkonen, Lauri
2016-01-01
Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results. We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. 
Feature extraction methods utilizing both the spatial and spectral profile of MI-related signals provided the best classification results, suggesting good performance of these methods in an online MEG neurofeedback system.
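Common spatial patterns (CSP), one of the feature-extraction methods compared above, can be sketched with NumPy alone: whiten the composite covariance of the two classes, then diagonalize one class's covariance in the whitened space; the extreme filters maximize the variance contrast between classes. The toy two-class data, with variance concentrated on different channels, is an assumption for illustration, not MEG data.

```python
# Minimal CSP sketch on synthetic (trials, channels, samples) data.
import numpy as np

rng = np.random.default_rng(0)
left = rng.normal(size=(30, 4, 200));  left[:, 0] *= 3.0   # channel 0 active
right = rng.normal(size=(30, 4, 200)); right[:, 1] *= 3.0  # channel 1 active

def mean_cov(trials):
    """Average per-trial spatial covariance across trials."""
    return np.mean([t @ t.T / t.shape[1] for t in trials], axis=0)

c1, c2 = mean_cov(left), mean_cov(right)

# Whitening matrix for the composite covariance c1 + c2:
evals, evecs = np.linalg.eigh(c1 + c2)
p = evecs @ np.diag(evals ** -0.5) @ evecs.T

# Diagonalize the whitened class-1 covariance; rows of w are CSP filters,
# ordered by ascending class-1 variance share (np.linalg.eigh convention).
d, b = np.linalg.eigh(p @ c1 @ p.T)
w = b.T @ p

# Log-variance features for one trial of each class:
var_left = np.log(np.var(w @ left[0], axis=1))
var_right = np.log(np.var(w @ right[0], axis=1))
```

The last filter (`w[-1]`) responds most strongly to left-class trials and the first (`w[0]`) to right-class trials, which is what makes the log-variance features discriminative.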
Wunderle, Kevin A; Rakowski, Joseph T; Dong, Frank F
2016-05-08
The first goal of this study was to investigate the accuracy of the displayed reference plane air kerma (Ka,r) or air kerma-area product (Pk,a) over a broad spectrum of X-ray beam qualities on clinically used interventional fluoroscopes incorporating air kerma-area product meters (KAP meters) to measure X-ray output. The second goal was to investigate the accuracy of a correction coefficient (CC) determined at a single beam quality and applied to the measured Ka,r over a broad spectrum of beam qualities. Eleven state-of-the-art interventional fluoroscopes were evaluated, consisting of eight Siemens Artis zee and Artis Q systems and three Philips Allura FD systems. A separate calibrated 60 cc ionization chamber (external chamber) was used to determine the accuracy of the KAP meter over a broad range of clinically used beam qualities. For typical adult beam qualities, applying a single CC determined at 100 kVp with copper (Cu) in the beam resulted in a deviation of < 5% due to beam quality variation. This result indicates that applying a CC determined using the American Association of Physicists in Medicine Task Group 190 protocol or a similar protocol provides very good accuracy as compared to the allowed ± 35% deviation of the KAP meter in this limited beam quality range. For interventional fluoroscopes dedicated to or routinely used to perform pediatric interventions, using a CC established with a low kVp (~ 55-60 kVp) and large amount of Cu filtration (~ 0.6-0.9 mm) may result in greater accuracy as compared to using the 100 kVp values. KAP meter responses indicate that fluoroscope vendors are likely normalizing or otherwise influencing the KAP meter output data. Although this may provide improved accuracy in some instances, there is the potential for large discrete errors to occur, and these errors may be difficult to identify.
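The single-point correction coefficient described above is simple arithmetic: calibrate the displayed air kerma against an external reference chamber at one beam quality, then apply that ratio to displayed values at other beam qualities. The numbers below are illustrative, not measured values from the study.

```python
# TG-190-style single-point correction coefficient (illustrative values).
reference_kerma_mgy = 10.5   # external calibrated chamber, 100 kVp + Cu
displayed_kerma_mgy = 11.2   # fluoroscope's displayed Ka,r at the same exposure

cc = reference_kerma_mgy / displayed_kerma_mgy  # correction coefficient

# Apply the CC to a displayed reading at another (similar) beam quality:
corrected = 8.0 * cc
print(f"CC = {cc:.3f}, corrected Ka,r = {corrected:.2f} mGy")
```

The study's point is that one such CC holds to within about 5% across typical adult beam qualities, but pediatric protocols may warrant a CC measured at low kVp with heavy Cu filtration.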
Indoor Pedestrian Localization Using iBeacon and Improved Kalman Filter.
Sung, Kwangjae; Lee, Dong Kyu 'Roy'; Kim, Hwangnam
2018-05-26
Reliable and accurate indoor pedestrian positioning is one of the biggest challenges for location-based systems and applications. Most pedestrian positioning systems suffer drift error and large bias due to low-cost inertial sensors and the random motion of human beings, as well as the unpredictable and time-varying radio-frequency (RF) signals used for position determination. To solve this problem, many indoor positioning approaches that integrate the user's motion estimated by the dead reckoning (DR) method and the location data obtained by RSS fingerprinting through a Bayesian filter, such as the Kalman filter (KF), unscented Kalman filter (UKF), and particle filter (PF), have recently been proposed to achieve higher positioning accuracy in indoor environments. Among Bayesian filtering methods, the PF is the most popular integrating approach and can provide the best localization performance. However, since the PF uses a large number of particles to attain this performance, it can incur considerable computational cost. This paper presents an indoor positioning system implemented on a smartphone, which uses simple dead reckoning (DR), RSS fingerprinting using iBeacon and a machine-learning scheme, and an improved KF. The core of the system is the enhanced KF called the sigma-point Kalman particle filter (SKPF), which localizes the user by leveraging both the unscented transform of the UKF and the weighting method of the PF. The SKPF algorithm proposed in this study provides enhanced positioning accuracy by fusing positional data obtained from both DR and fingerprinting with uncertainty. The SKPF algorithm can achieve better positioning accuracy than the KF and UKF and performance comparable to the PF, while providing higher computational efficiency than the PF. iBeacon in our positioning system is used for energy-efficient localization and RSS fingerprinting.
We aim to design a localization scheme that achieves high positioning accuracy, computational efficiency, and energy efficiency indoors through the SKPF and iBeacon. Empirical experiments in real environments show that the use of the SKPF algorithm and iBeacon in our indoor localization scheme achieves very satisfactory performance in terms of localization accuracy, computational cost, and energy efficiency.
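The SKPF itself combines the unscented transform with particle weighting, but the underlying DR-plus-fingerprint fusion it performs can be illustrated with the simplest possible case: a one-dimensional Kalman filter whose prediction comes from a dead-reckoned step and whose measurement is an RSS fingerprint fix. All numbers below are made up for illustration.

```python
# One-dimensional Kalman filter fusing dead reckoning with fingerprint fixes.
def kf_step(x, var, step, q, z, r):
    """One predict/update cycle: DR motion step, then fingerprint update."""
    x_pred = x + step            # dead-reckoning motion model
    var_pred = var + q           # motion noise inflates uncertainty
    k = var_pred / (var_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)      # pull toward fingerprint fix z
    var_new = (1 - k) * var_pred
    return x_new, var_new

x, var = 0.0, 1.0  # initial position estimate and variance
for step, z in [(0.8, 1.0), (0.7, 1.6), (0.9, 2.4)]:
    x, var = kf_step(x, var, step, q=0.05, z=z, r=0.5)
print(f"fused position: {x:.2f} m, variance: {var:.3f}")
```

Each update shrinks the variance, reflecting how fusing the two noisy sources yields an estimate better than either alone; the SKPF extends this idea to nonlinear motion and measurement models.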
Lin, You-Yu; Hsieh, Chia-Hung; Chen, Jiun-Hong; Lu, Xuemei; Kao, Jia-Horng; Chen, Pei-Jer; Chen, Ding-Shinn; Wang, Hurng-Yi
2017-04-26
The accuracy of metagenomic assembly is usually compromised by high levels of polymorphism, because divergent reads from the same genomic region are recognized as different loci when sequenced and assembled together. A viral quasispecies is a group of abundant and diversified, genetically related viruses found in a single carrier. Current mainstream assembly methods, such as Velvet and SOAPdenovo, were not originally intended for the assembly of such metagenomic data, creating a demand for new methods that provide accurate and informative assembly results. In this study, we present a hybrid method for assembling highly polymorphic data that combines the partial de novo-reference assembly (PDR) strategy and the BLAST-based assembly pipeline (BBAP). The PDR strategy generates in situ reference sequences through de novo assembly of a randomly extracted partial data set, which is subsequently used for the reference assembly of the full data set. BBAP employs a greedy algorithm to assemble polymorphic reads. We used 12 hepatitis B virus quasispecies NGS data sets from a previous study to assess and compare the performance of PDR and BBAP. Analyses suggest that the high polymorphism of a full metagenomic data set leads to fragmented de novo assembly results, whereas the biased or limited representation of external reference sequences includes fewer reads in the assembly, with lower assembly accuracy and variation sensitivity. In comparison, the PDR-generated in situ reference sequence incorporated more reads into the final PDR assembly of the full metagenomic data set, along with greater accuracy and higher variation sensitivity. BBAP assembly results also suggest higher assembly efficiency and accuracy compared to other assembly methods. Additionally, the BBAP assembly recovered HBV structural variants that were not observed among the assembly results of other methods. Together, the PDR/BBAP assembly results were significantly better than those of the other compared methods.
Both PDR and BBAP independently increased the assembly efficiency and accuracy of highly polymorphic data, and assembly performances were further improved when used together. BBAP also provides nucleotide frequency information. Together, PDR and BBAP provide powerful tools for metagenomic data studies.
Reference Management Software: A Comparative Analysis of Four Products
ERIC Educational Resources Information Center
Gilmour, Ron; Cobus-Kuo, Laura
2011-01-01
Reference management (RM) software is widely used by researchers in the health and natural sciences. Librarians are often called upon to provide support for these products. The present study compares four prominent RMs: CiteULike, RefWorks, Mendeley, and Zotero, in terms of features offered and the accuracy of the bibliographies that they…
Developing a Weighted Measure of Speech Sound Accuracy
Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.
2010-01-01
Purpose: The purpose is to develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method: Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results: Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as with listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddlers' speech over time. Conclusion: Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech. PMID:20699344
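The core idea of a weighted accuracy score - graded rather than all-or-nothing credit per speech sound - can be sketched as below. The error categories and weight values are hypothetical placeholders, not the WSSA's published weighting scheme.

```python
# Weighted per-sound scoring: each transcribed sound gets partial credit
# by error type, and the score is the mean weight (hypothetical weights).
WEIGHTS = {"correct": 1.0, "distortion": 0.75,
           "substitution": 0.5, "omission": 0.0}

def weighted_accuracy(scored_sounds):
    """Mean weight over all scored sounds; result lies in [0, 1]."""
    return sum(WEIGHTS[s] for s in scored_sounds) / len(scored_sounds)

sample = ["correct", "correct", "distortion", "substitution", "omission"]
print(round(weighted_accuracy(sample), 2))  # → 0.65
```

Unlike a binary percent-consonants-correct score, this rewards near-misses (distortions) over omissions, which is what lets the measure track gradual growth in accuracy over time.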
Real-time teleophthalmology versus face-to-face consultation: A systematic review.
Tan, Irene J; Dobson, Lucy P; Bartnik, Stephen; Muir, Josephine; Turner, Angus W
2017-08-01
Introduction: Advances in imaging capabilities and the evolution of real-time teleophthalmology have the potential to extend coverage to areas with limited ophthalmology services. However, there is limited research assessing the diagnostic accuracy of real-time teleophthalmology against face-to-face consultation. This systematic review aims to determine whether real-time teleophthalmology provides accuracy comparable to face-to-face consultation for the diagnosis of common eye health conditions. Methods: A search of the PubMed, Embase, Medline and Cochrane databases, with manual citation review, was conducted on 6 February and 7 April 2016. Included studies involved real-time telemedicine in the field of ophthalmology or optometry and assessed diagnostic accuracy against gold-standard face-to-face consultation. The revised quality assessment of diagnostic accuracy studies (QUADAS-2) tool assessed risk of bias. Results: Twelve studies were included, with participants ranging from four to 89 years old. A broad range of conditions was assessed, including corneal and retinal pathologies, strabismus, oculoplastics and post-operative review. Quality assessment identified a high or unclear risk of bias in patient selection (75%) due to undisclosed recruitment processes. The index test showed high risk of bias in the included studies, due to the varied interpretation and conduct of real-time teleophthalmology methods. Reference standard risk was overall low (75%), as was the risk due to flow and timing (75%). Conclusion: In terms of diagnostic accuracy, real-time teleophthalmology was considered superior to face-to-face consultation in one study and comparable in six studies. Store-and-forward image transmission coupled with real-time videoconferencing is a suitable alternative for overcoming poor internet transmission speeds.
Improved solution accuracy for Landsat-4 (TDRSS-user) orbit determination
NASA Technical Reports Server (NTRS)
Oza, D. H.; Niklewski, D. J.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.
1994-01-01
This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft, Landsat-4, obtained using a Prototype Filter Smoother (PFS), with the accuracy of an established batch-least-squares system, the Goddard Trajectory Determination System (GTDS). The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the January 17-23, 1991, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances for the sequential case) of solutions produced by the batch and sequential methods. The filtered and smoothed PFS orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were generally less than 15 meters.
TreeCmp: Comparison of Trees in Polynomial Time
Bogdanowicz, Damian; Giaro, Krzysztof; Wróbel, Borys
2012-01-01
When a phylogenetic reconstruction does not result in one tree but in several, tree metrics make it possible to find out how far the reconstructed trees are from one another. They also make it possible to assess the accuracy of a reconstruction if a true tree is known. TreeCmp implements eight metrics that can be calculated in polynomial time for arbitrary (not only bifurcating) trees: four for unrooted trees (the Matching Split metric, which we have recently proposed, Robinson-Foulds, Path Difference and Quartet) and four for rooted trees (Matching Cluster, Robinson-Foulds cluster, Nodal Splitted and Triple). TreeCmp is the first implementation of the Matching Split/Cluster metrics and the first efficient and convenient implementation of Nodal Splitted. It allows relatively large trees to be compared. We provide an example of the application of TreeCmp to compare the accuracy of ten approaches to phylogenetic reconstruction with trees of up to 5000 external nodes, using a measure of accuracy based on normalized similarity between trees.
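As a concrete illustration of the simplest of these metrics, the Robinson-Foulds distance counts the splits (bipartitions of the leaf set) present in exactly one of the two trees. The sketch below is a minimal, illustrative implementation and is not TreeCmp's actual API; the leaf set and the split encoding are assumptions made for the example (real tools derive splits from Newick input).

```python
# Minimal sketch of the (unnormalized) Robinson-Foulds distance between two
# unrooted trees, here represented directly as sets of non-trivial splits.
# The leaf set and encoding are illustrative, not TreeCmp's API.

LEAVES = frozenset({"A", "B", "C", "D", "E"})

def canon(side):
    """Canonicalize a split: always store the side containing leaf 'A',
    since a bipartition and its complement denote the same split."""
    return side if "A" in side else LEAVES - side

def robinson_foulds(splits_a, splits_b):
    """RF distance = number of splits present in exactly one of the trees."""
    return len(splits_a ^ splits_b)  # symmetric difference of split sets

# Tree 1 groups (A,B) and (A,B,C); tree 2 groups (A,C) and (D,E).
t1 = {canon(frozenset({"A", "B"})), canon(frozenset({"A", "B", "C"}))}
t2 = {canon(frozenset({"A", "C"})), canon(frozenset({"D", "E"}))}

# canon({"D","E"}) == {"A","B","C"}, so the trees share one split.
print(robinson_foulds(t1, t2))  # -> 2
```

Because each split set has size linear in the number of leaves, this comparison runs in polynomial time, which is the point of the metrics TreeCmp implements.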
12 CFR Appendix C to Part 325 - Risk-Based Capital for State Non-Member Banks: Market Risk
Code of Federal Regulations, 2010 CFR
2010-01-01
... provide information about the impact of adverse market events on a bank's covered positions. Backtests provide information about the accuracy of an internal model by comparing a bank's daily VAR measures to... determines the bank meets such criteria as a consequence of accounting, operational, or similar...
Accuracy and completeness of drug information in Wikipedia medication monographs.
Reilly, Timothy; Jackson, William; Berger, Victoria; Candelario, Danielle
The primary objective of this study was to determine the accuracy and completeness of drug information on Wikipedia and Micromedex compared with U.S. Food and Drug Administration-approved U.S. product inserts. The top 10 brand and top 10 generic medications from the 2012 Institute for Health Informatics' list of top 200 drugs were selected for evaluation. Wikipedia medication information was evaluated and compared with Micromedex in 7 sections of drug information; the U.S. product inserts were used as the standard comparator. Wikipedia demonstrated significantly lower completeness and accuracy scores compared with Micromedex (mean composite scores 18.55 vs. 38.4, respectively; P < 0.01). No difference was found between the mean composite scores for brand versus generic drugs in either reference (17.8 vs. 19.3 [P = 0.62] for Wikipedia; 39.2 vs. 37.6 [P = 0.06] for Micromedex). Limitations of these results include the speed with which information is edited on Wikipedia, the absence of any evaluation of off-label information, and the limited number of drugs evaluated. Wikipedia lacks the accuracy and completeness of standard clinical references and should not be a routine part of clinical decision making. More research should be conducted to evaluate the rationale for health care providers' use of Wikipedia. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Mizinga, Kemmy M; Burnett, Thomas J; Brunelle, Sharon L; Wallace, Michael A; Coleman, Mark R
2018-05-01
The U.S. Department of Agriculture, Food Safety Inspection Service regulatory method for monensin, Chemistry Laboratory Guidebook CLG-MON, is a semiquantitative bioautographic method adopted in 1991. Official Method of Analysis℠ (OMA) 2011.24, a modern quantitative and confirmatory LC-tandem MS method, uses no chlorinated solvents and has several advantages, including ease of use, ready availability of reagents and materials, shorter run time, and higher throughput than CLG-MON. Therefore, a bridging study was conducted to support the replacement of method CLG-MON with OMA 2011.24 for regulatory use. Using fortified bovine tissue samples, CLG-MON yielded accuracies of 80-120% in 44 of the 56 samples tested (one sample had no result, six samples had accuracies of >120%, and five samples had accuracies of 40-160%), but the semiquantitative nature of CLG-MON prevented assessment of precision, whereas OMA 2011.24 had accuracies of 88-110% and RSDr of 0.00-15.6%. Incurred residue results corroborated these results, demonstrating improved accuracy (83.3-114%) and good precision (RSDr of 2.6-20.5%) for OMA 2011.24 compared with CLG-MON (accuracy generally within 80-150%, with exceptions). Furthermore, χ² analysis revealed no statistically significant difference between the two methods. Thus, the microbiological activity of monensin correlated with the determination of monensin A in bovine tissues, and OMA 2011.24 provided improved accuracy and precision over CLG-MON.
Smoot, Betty J.; Wong, Josephine F.; Dodd, Marylin J.
2013-01-01
Objective To compare diagnostic accuracy of measures of breast cancer–related lymphedema (BCRL). Design Cross-sectional design comparing clinical measures with the criterion standard of previous diagnosis of BCRL. Setting University of California San Francisco Translational Science Clinical Research Center. Participants Women older than 18 years and more than 6 months posttreatment for breast cancer (n=141; 70 with BCRL, 71 without BCRL). Interventions Not applicable. Main Outcome Measures Sensitivity, specificity, receiver operating characteristic (ROC) curve, and area under the curve (AUC) were used to evaluate accuracy. Results A total of 141 women were categorized as having (n=70) or not having (n=71) BCRL based on past diagnosis by a health care provider, which was used as the reference standard. Analyses of ROC curves for the continuous outcomes yielded AUCs of .68 to .88 (P<.001); of the physical measures, bioimpedance spectroscopy yielded the highest accuracy, with an AUC of .88 (95% confidence interval, .80–.96) for women whose dominant arm was the affected arm. The lowest accuracy was found using the 2-cm diagnostic cutoff score to identify previously diagnosed BCRL (AUC, .54–.65). Conclusions Our findings support the use of bioimpedance spectroscopy in the assessment of existing BCRL. Refining diagnostic cutoff values may improve accuracy of diagnosis and warrants further investigation. PMID:21440706
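The AUC statistic reported above has a direct rank-based interpretation: it is the probability that a randomly chosen affected case scores higher on the measure than a randomly chosen unaffected case. A minimal sketch of that computation, on invented scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the fraction of positive/negative pairs ranked correctly; ties = 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical inter-limb impedance ratios: higher values suggest lymphedema.
with_bcrl = [1.30, 1.20, 1.10]
without_bcrl = [1.00, 1.05, 1.15]
print(round(auc(with_bcrl, without_bcrl), 3))  # -> 0.889
```

An AUC of 1.0 would mean the measure perfectly separates the groups; 0.5 is chance, which is why the 2-cm cutoff's AUC of .54-.65 indicates poor discrimination.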
Wu, C; de Jong, J R; Gratama van Andel, H A; van der Have, F; Vastenhouw, B; Laverman, P; Boerman, O C; Dierckx, R A J O; Beekman, F J
2011-09-21
Attenuation of photon flux on trajectories between the source and pinhole apertures affects the quantitative accuracy of reconstructed single-photon emission computed tomography (SPECT) images. We propose a Chang-based non-uniform attenuation correction (NUA-CT) for small-animal SPECT/CT with focusing pinhole collimation, and compare its quantitative accuracy with uniform Chang correction based on (i) body outlines extracted from x-ray CT (UA-CT) and (ii) hand-drawn body contours on the images obtained with three integrated optical cameras (UA-BC). Measurements in phantoms and rats containing known activities of isotopes were conducted for evaluation. In (125)I, (201)Tl, (99m)Tc and (111)In phantom experiments, average relative errors compared with the gold standards measured in a dose calibrator were reduced to 5.5%, 6.8%, 4.9% and 2.8%, respectively, with NUA-CT. In animal studies, these errors were 2.1%, 3.3%, 2.0% and 2.0%, respectively. Differences in accuracy on average between results of NUA-CT, UA-CT and UA-BC were less than 2.3% in phantom studies and 3.1% in animal studies, except for (125)I (3.6% and 5.1%, respectively). All methods tested provide reasonable attenuation correction and result in high quantitative accuracy. NUA-CT shows superior accuracy except for (125)I, where other factors may have more impact on the quantitative accuracy than the selected attenuation correction.
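The Chang method referenced above corrects each reconstructed voxel by the average attenuation seen over all projection directions. Below is a minimal first-order sketch for the uniform-μ case; the non-uniform (NUA-CT) variant instead integrates a CT-derived μ map along each path. The μ value and path lengths are illustrative assumptions, not values from the study.

```python
import math

def chang_factor(path_lengths_cm, mu_per_cm):
    """First-order Chang correction factor for one voxel: the reciprocal of
    exp(-mu * d) averaged over the sampled projection directions."""
    mean_atten = sum(math.exp(-mu_per_cm * d) for d in path_lengths_cm) \
                 / len(path_lengths_cm)
    return 1.0 / mean_atten

# Toy geometry: a voxel seen through 2 cm of tissue from four directions,
# with mu ~ 0.15 cm^-1 (roughly soft tissue near the 99mTc photon energy).
print(round(chang_factor([2.0, 2.0, 2.0, 2.0], 0.15), 3))  # -> 1.35
```

Each voxel's reconstructed intensity is multiplied by its factor; deeper voxels (longer path lengths) receive larger corrections, which is what restores quantitative accuracy.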
NASA Astrophysics Data System (ADS)
Zolot, A. M.; Giorgetta, F. R.; Baumann, E.; Swann, W. C.; Coddington, I.; Newbury, N. R.
2013-03-01
The Doppler-limited spectra of methane between 176 THz and 184 THz (5870-6130 cm-1) and acetylene between 193 THz and 199 THz (6430-6630 cm-1) are acquired via comb-tooth resolved dual comb spectroscopy with frequency accuracy traceable to atomic standards. A least squares analysis of the measured absorbance and phase line shapes provides line center frequencies with absolute accuracy of 0.2 MHz, or less than one thousandth of the room temperature Doppler width. This accuracy is verified through comparison with previous saturated absorption spectroscopy of 37 strong isolated lines of acetylene. For the methane spectrum, the center frequencies of 46 well-isolated strong lines are determined with similar high accuracy, along with the center frequencies for 1107 non-isolated lines at lower accuracy. The measured methane line-center frequencies have an uncertainty comparable to the few available laser heterodyne measurements in this region but span a much larger optical bandwidth, marking the first broad-band measurements of the methane 2ν3 region directly referenced to atomic frequency standards. This study demonstrates the promise of dual comb spectroscopy to obtain high resolution broadband spectra that are comparable to state-of-the-art Fourier-transform spectrometer measurements but with much improved frequency accuracy. Work of the US government, not subject to US copyright.
Evaluation of registration accuracy between Sentinel-2 and Landsat 8
NASA Astrophysics Data System (ADS)
Barazzetti, Luigi; Cuca, Branka; Previtali, Mattia
2016-08-01
Starting from June 2015, Sentinel-2A has been delivering high resolution optical images (ground resolution up to 10 meters) to provide a global coverage of the Earth's land surface every 10 days. The planned launch of Sentinel-2B, along with the integration of Landsat images, will provide time series with an unprecedented revisit time, indispensable for numerous monitoring applications in which high resolution multi-temporal information is required. These include agriculture, water bodies and natural hazards, to name a few. However, the combined use of multi-temporal images requires an accurate geometric registration, i.e. pixel-to-pixel correspondence for terrain-corrected products. This paper presents an analysis of spatial co-registration accuracy for several datasets of Sentinel-2 and Landsat 8 images distributed all around the world. Images were compared with digital correlation techniques for image matching, obtaining an evaluation of registration accuracy with an affine transformation as the geometrical model. Results demonstrate that sub-pixel accuracy was achieved between 10 m resolution Sentinel-2 bands (band 3) and 15 m resolution panchromatic Landsat images (band 8).
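The digital correlation matching mentioned above amounts to sliding one image over the other and scoring the overlap; the shift (or, more generally, the affine parameters) maximizing the score estimates the misregistration. A brute-force integer-shift sketch on toy arrays, purely illustrative: real pipelines work on large tiles, normalize the correlation, and refine to sub-pixel precision.

```python
def best_shift(ref, moving, max_shift):
    """Exhaustively search integer shifts (dy, dx); return the shift whose
    mean pixel product over the overlapping region is largest.
    Assumes max_shift is smaller than the image dimensions."""
    h, w = len(ref), len(ref[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            total, count = 0.0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        total += ref[y][x] * moving[yy][xx]
                        count += 1
            if total / count > best_score:
                best_score, best = total / count, (dy, dx)
    return best

# A tiny "feature" at (2,2)-(2,3) in ref, shifted down 1 and right 2 in moving.
ref = [[0.0] * 6 for _ in range(6)]
moving = [[0.0] * 6 for _ in range(6)]
ref[2][2] = ref[2][3] = 1.0
moving[3][4] = moving[3][5] = 1.0
print(best_shift(ref, moving, 2))  # -> (1, 2)
```

Repeating this over many tie points scattered across a scene yields the point correspondences from which the paper's affine model is then fitted by least squares.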
New tests of the common calibration context for ISO, IRTS, and MSX
NASA Technical Reports Server (NTRS)
Cohen, Martin
1997-01-01
The work carried out to test, verify and validate the accuracy of the calibration spectra provided to the Infrared Space Observatory (ISO), the Infrared Telescope in Space (IRTS) and the Midcourse Space Experiment (MSX) for external calibration support of instruments is reviewed. The techniques used to validate the accuracy of the absolute spectra are discussed. The work planned for comparing far infrared spectra of Mars and some of the bright stellar calibrators with long wavelength spectrometer data is summarized.
Mathisen, R W; Mazess, R B
1981-02-01
The authors present a revised method for calculating life expectancy tables for populations where individual ages at death are known or can be estimated. The conventional and revised methods are compared using data for U.S. and Hungarian males in an attempt to determine the accuracy of each method in calculating life expectancy at advanced ages. Means of correcting errors caused by age rounding, age exaggeration, and infant mortality are presented.
Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.
Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G
2017-09-01
To investigate whether the use of ensemble learning algorithms improves physical activity recognition accuracy compared to single classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high activity recognition accuracy; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
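The weighted-majority-vote fusion used by the custom ensemble can be sketched in a few lines: each base classifier casts its predicted label with a weight (for example, its validation F1 score), and the label with the largest total wins. The labels and weights below are invented for the example, not taken from the study.

```python
from collections import defaultdict

def weighted_majority_vote(labels, weights):
    """Fuse one predicted label per classifier; the class with the
    largest summed weight wins."""
    tally = defaultdict(float)
    for label, w in zip(labels, weights):
        tally[label] += w
    return max(tally, key=tally.get)

# Four base classifiers (e.g. decision tree, kNN, SVM, neural net) disagree:
preds = ["walking", "running", "walking", "sitting"]
f1_weights = [0.70, 0.90, 0.60, 0.50]  # hypothetical validation F1 scores
print(weighted_majority_vote(preds, f1_weights))  # -> walking (0.7+0.6 = 1.3)
```

Note that the weighting lets two moderately accurate classifiers outvote a single stronger one, which is the mechanism the paper's custom ensemble exploits.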
Indicators of Accuracy of Consumer Health Information on the Internet
Fallis, Don; Frické, Martin
2002-01-01
Objectives: To identify indicators of accuracy for consumer health information on the Internet. The results will help lay people distinguish accurate from inaccurate health information on the Internet. Design: Several popular search engines (Yahoo, AltaVista, and Google) were used to find Web pages on the treatment of fever in children. The accuracy and completeness of these Web pages was determined by comparing their content with that of an instrument developed from authoritative sources on treating fever in children. The presence on these Web pages of a number of proposed indicators of accuracy, taken from published guidelines for evaluating the quality of health information on the Internet, was noted. Main Outcome Measures: Correlation between the accuracy of Web pages on treating fever in children and the presence of proposed indicators of accuracy on these pages. Likelihood ratios for the presence (and absence) of these proposed indicators. Results: One hundred Web pages were identified and characterized as “more accurate” or “less accurate.” Three indicators correlated with accuracy: displaying the HONcode logo, having an organization domain, and displaying a copyright. Many proposed indicators taken from published guidelines did not correlate with accuracy (e.g., the author being identified and the author having medical credentials) or inaccuracy (e.g., lack of currency and advertising). Conclusions: This method provides a systematic way of identifying indicators that are correlated with the accuracy (or inaccuracy) of health information on the Internet. Three such indicators have been identified in this study. Identifying such indicators and informing the providers and consumers of health information about them would be valuable for public health care. PMID:11751805
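The likelihood ratios used in this study follow the standard diagnostic-test definitions, with "accurate page" playing the role of the condition and "indicator present" the role of a positive test. A minimal sketch with invented counts (not the study's data):

```python
def likelihood_ratios(tp, fp, fn, tn):
    """LR+ and LR- from a 2x2 table: tp = accurate pages showing the
    indicator, fp = inaccurate pages showing it, and so on."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)   # indicator present
    lr_neg = (1 - sensitivity) / specificity   # indicator absent
    return lr_pos, lr_neg

# Hypothetical counts for one indicator (e.g. a displayed copyright notice):
lr_pos, lr_neg = likelihood_ratios(tp=30, fp=10, fn=20, tn=40)
print(round(lr_pos, 2), round(lr_neg, 2))  # -> 3.0 0.5
```

An LR+ well above 1 means seeing the indicator should raise a reader's confidence that the page is accurate; an LR close to 1 means the indicator is uninformative, which is what the study found for most published guideline criteria.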
Boushey, Carol J; Spoden, Melissa; Delp, Edward J; Zhu, Fengqing; Bosch, Marc; Ahmad, Ziad; Shvetsov, Yurii B; DeLany, James P; Kerr, Deborah A
2017-03-22
The mobile Food Record (mFR) is an image-based dietary assessment method for mobile devices. The study's primary aim was to test the accuracy of the mFR by comparing reported energy intake (rEI) to total energy expenditure (TEE) using the doubly labeled water (DLW) method. Usability of the mFR was assessed by questionnaires before and after the study. Participants were 45 community-dwelling men and women, 21-65 years. They were provided pack-out meals and snacks and encouraged to supplement with usual foods and beverages not provided. After being dosed with DLW, participants were instructed to record all eating occasions over a 7.5-day period using the mFR. Three trained analysts estimated rEI from the images sent to a secure server. rEI and TEE correlated significantly (Spearman correlation coefficient of 0.58, p < 0.0001). The mean percentage of underreporting below the lower 95% confidence interval of the ratio of rEI to TEE was 12% for men (standard deviation (SD) ± 11%) and 10% for women (SD ± 10%). The results demonstrate that the accuracy of the mFR is comparable to traditional dietary records and other image-based methods. No systematic biases were found. The mFR was received well by the participants, and usability was rated as easy.
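The Spearman coefficient reported above is simply a Pearson correlation applied to ranks, which makes it robust to the skew typical of intake data. A bare-bones sketch on invented values (no midrank handling for ties, which a production implementation would add):

```python
def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors.
    Ties receive arbitrary consecutive ranks here (no midranks)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: reported intake (kcal) vs. DLW-measured expenditure.
rei = [1800, 2100, 2500, 2900, 3200]
tee = [2000, 2600, 2700, 3300, 3500]
print(round(spearman(rei, tee), 6))  # -> 1.0 (both strictly increasing)
```

A coefficient of 0.58, as in the study, indicates a moderate monotone relationship: participants with higher expenditure tended to report higher intake, though far from perfectly.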
TU-F-17A-03: A 4D Lung Phantom for Coupled Registration/Segmentation Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markel, D; El Naqa, I; Levesque, I
2014-06-15
Purpose: Coupling the processes of segmentation and registration (regmentation) is a recent development that allows improved efficiency and accuracy for both steps and may improve the clinical feasibility of online adaptive radiotherapy. Presented is a multimodality animal tissue model designed specifically to provide a ground truth to simultaneously evaluate segmentation and registration errors during respiratory motion. Methods: Tumor surrogates were constructed from vacuum-sealed hydrated natural sea sponges with catheters used for the injection of PET radiotracer. These contained two compartments allowing for two concentrations of radiotracer mimicking both tumor and background signals. The lungs were inflated to different volumes using an air pump and flow valve and scanned using PET/CT and MRI. Anatomical landmarks were used to evaluate the registration accuracy using an automated bifurcation tracking pipeline for reproducibility. The bifurcation tracking accuracy was assessed using virtual deformations of 2.6 cm, 5.2 cm and 7.8 cm of a CT scan of a corresponding human thorax. Bifurcations were detected in the deformed dataset and compared to known deformation coordinates for 76 points. Results: The bifurcation tracking accuracy was found to have a mean error of −0.94, 0.79 and −0.57 voxels in the left-right, anterior-posterior and inferior-superior axes using a 1×1×5 mm3 resolution after the CT volume was deformed 7.8 cm. The tumor surrogates provided a segmentation ground truth after being registered to the phantom image. Conclusion: A swine lung model in conjunction with vacuum-sealed sponges and a bifurcation tracking algorithm is presented that is MRI, PET and CT compatible and anatomically and kinetically realistic. Corresponding software for tracking anatomical landmarks within the phantom shows sub-voxel accuracy. Vacuum-sealed sponges provide a realistic tumor surrogate with a known boundary. A ground truth with minimal uncertainty is thus realized that can be used for comparing the performance of registration and segmentation algorithms.
77 FR 3477 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-24
... collection for the proper performance of the agency's functions; (2) the accuracy of the estimated burden; (3... submitted to CMS through the 372 web-based form. The report is used by CMS to compare actual data in the... provided is compared to that in the Medicaid Statistical Information System (CMS-R-284, OCN 0938-0345...
Heidelberg Retina Tomograph 3 machine learning classifiers for glaucoma detection
Townsend, K A; Wollstein, G; Danks, D; Sung, K R; Ishikawa, H; Kagemann, L; Gabriele, M L; Schuman, J S
2010-01-01
Aims To assess performance of classifiers trained on Heidelberg Retina Tomograph 3 (HRT3) parameters for discriminating between healthy and glaucomatous eyes. Methods Classifiers were trained using HRT3 parameters from 60 healthy subjects and 140 glaucomatous subjects. The classifiers were trained on all 95 variables and smaller sets created with backward elimination. Seven types of classifiers, including Support Vector Machines with radial basis (SVM-radial), and Recursive Partitioning and Regression Trees (RPART), were trained on the parameters. The area under the ROC curve (AUC) was calculated for classifiers, individual parameters and HRT3 glaucoma probability scores (GPS). Classifier AUCs and leave-one-out accuracy were compared with the highest individual parameter and GPS AUCs and accuracies. Results The highest AUC and accuracy for an individual parameter were 0.848 and 0.79, for vertical cup/disc ratio (vC/D). For GPS, global GPS performed best with AUC 0.829 and accuracy 0.78. SVM-radial with all parameters showed significant improvement over global GPS and vC/D with AUC 0.916 and accuracy 0.85. RPART with all parameters provided significant improvement over global GPS with AUC 0.899 and significant improvement over global GPS and vC/D with accuracy 0.875. Conclusions Machine learning classifiers of HRT3 data provide significant enhancement over current methods for detection of glaucoma. PMID:18523087
Comprehension of synthetic speech and digitized natural speech by adults with aphasia.
Hux, Karen; Knollman-Porter, Kelly; Brown, Jessica; Wallace, Sarah E
2017-09-01
Using text-to-speech technology to provide simultaneous written and auditory content presentation may help compensate for chronic reading challenges if people with aphasia can understand synthetic speech output; however, inherent auditory comprehension challenges experienced by people with aphasia may make understanding synthetic speech difficult. This study's purpose was to compare the preferences and auditory comprehension accuracy of people with aphasia when listening to sentences generated with digitized natural speech, Alex synthetic speech (i.e., Macintosh platform), or David synthetic speech (i.e., Windows platform). The methodology required each of 20 participants with aphasia to select one of four images corresponding in meaning to each of 60 sentences comprising three stimulus sets. Results revealed significantly better accuracy given digitized natural speech than either synthetic speech option; however, individual participant performance analyses revealed three patterns: (a) comparable accuracy regardless of speech condition for 30% of participants, (b) comparable accuracy between digitized natural speech and one, but not both, synthetic speech option for 45% of participants, and (c) greater accuracy with digitized natural speech than with either synthetic speech option for remaining participants. Ranking and Likert-scale rating data revealed a preference for digitized natural speech and David synthetic speech over Alex synthetic speech. Results suggest many individuals with aphasia can comprehend synthetic speech options available on popular operating systems. Further examination of synthetic speech use to support reading comprehension through text-to-speech technology is thus warranted. Copyright © 2017 Elsevier Inc. All rights reserved.
Accuracy of localization of prostate lesions using manual palpation and ultrasound elastography
NASA Astrophysics Data System (ADS)
Kut, Carmen; Schneider, Caitlin; Carter-Monroe, Naima; Su, Li-Ming; Boctor, Emad; Taylor, Russell
2009-02-01
Purpose: To compare the accuracy of detecting tumor location and size in the prostate using both manual palpation and ultrasound elastography (UE). Methods: Tumors in the prostate were simulated using both synthetic and ex vivo tissue phantoms. 25 participants were asked to report the presence, size and depth of these simulated lesions using manual palpation and UE. Ultrasound images were captured using a laparoscopic ultrasound probe fitted with a Gore-Tetrad transducer with a frequency of 7.5 MHz and an RF capture depth of 4-5 cm. A MATLAB GUI application was employed to process the RF data for ex vivo phantoms and to generate UE images using a cross-correlation algorithm. Ultrasonix software was used to provide real-time elastography during laparoscopic palpation of the synthetic phantoms. Statistical analyses were performed based on a two-tailed Student's t-test with α = 0.05. Results: UE displays both higher sensitivity and specificity in tumor detection (sensitivity = 84%, specificity = 74%). Tumor diameters and depths are better estimated using ultrasound elastography than with manual palpation. Conclusions: Our results indicate that UE has strong potential in assisting surgeons to intra-operatively evaluate tumor depth and size. We have also demonstrated that ultrasound elastography can be implemented in a laparoscopic environment, in which manual palpation would not be feasible. With further work, this application can provide accurate and clinically relevant information for surgeons during prostate resection.
Khrustaleva, A M; Volkov, A A; Stoklitskaia, D S; Miuge, N S; Zelenina, D A
2010-11-01
Sockeye salmon samples from the five largest lacustrine-riverine systems of the Kamchatka Peninsula were tested for polymorphism at six microsatellite (STR) and five single nucleotide polymorphism (SNP) loci. Statistically significant genetic differentiation among local populations from the examined part of the species range was demonstrated. The data presented point to pronounced genetic divergence of the populations from two geographical regions, Eastern and Western Kamchatka. For sockeye salmon, individual identification test accuracy was higher for microsatellites than for a similar number of SNP markers. Pooling the STR and SNP allele frequency data sets provided the highest accuracy of individual fish population assignment.
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
NASA Technical Reports Server (NTRS)
Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.
1993-01-01
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
A space system for high-accuracy global time and frequency comparison of clocks
NASA Technical Reports Server (NTRS)
Decher, R.; Allan, D. W.; Alley, C. O.; Vessot, R. F. C.; Winkler, G. M. R.
1981-01-01
A Space Shuttle experiment in which a hydrogen maser clock on board the Space Shuttle will be compared with clocks on the ground using two-way microwave and short pulse laser signals is described. The accuracy goal for the experiment is 1 nsec or better for the time transfer and 10^-14 for the frequency comparison. A direct frequency comparison of primary standards at the 10^-14 accuracy level is a unique feature of the proposed system. Both time and frequency transfer will be accomplished by microwave transmission, while the laser signals provide calibration of the system as well as subnanosecond time transfer.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM BANK HOLDING COMPANIES AND CHANGE IN BANK CONTROL... Stress tests provide information about the impact of adverse market events on a bank's covered positions. Backtests provide information about the accuracy of an internal model by comparing an organization's daily...
Code of Federal Regulations, 2010 CFR
2010-01-01
... provide information about the impact of adverse market events on a bank's covered positions. Backtests provide information about the accuracy of an internal model by comparing a bank's daily VAR measures to... Banks; Market Risk Measure E Appendix E to Part 208 Banks and Banking FEDERAL RESERVE SYSTEM BOARD OF...
Edla, Damodar Reddy; Kuppili, Venkatanareshbabu; Dharavath, Ramesh; Beechu, Nareshkumar Reddy
2017-01-01
Low-power wearable devices for disease diagnosis can be used anytime and anywhere. They are non-invasive and pain-free, improving quality of life. However, these devices are resource constrained in terms of memory and processing capability: the memory constraint limits the number of patterns they can store, and the processing constraint delays their response. It is a challenging task to design a robust, high-accuracy classification system under these constraints. In this Letter, to resolve this problem, a novel architecture for weightless neural networks (WNNs) has been proposed. It uses variable-sized random access memories to optimise memory usage and a modified binary TRIE data structure to reduce test time. In addition, a bio-inspired genetic algorithm has been employed to improve accuracy. The proposed architecture is evaluated on various disease datasets using both software and hardware realisations. The experimental results prove that the proposed architecture achieves better performance in terms of accuracy, memory saving and test time compared to standard WNNs. It also outperforms conventional neural-network-based classifiers in terms of accuracy. The proposed architecture is therefore well suited to low-power wearable devices, addressing their memory, accuracy and response-time constraints. PMID:28868148
NASA Astrophysics Data System (ADS)
Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo
2017-03-01
Contrast-enhanced mammography has been used to demonstrate functional information about a breast tumor by injecting contrast agents. However, the conventional technique with a single exposure degrades the efficiency of tumor detection due to structure overlapping. Dual-energy techniques with energy-integrating detectors (EIDs) also increase radiation dose and reduce the accuracy of material decomposition due to the limitations of EIDs. On the other hand, spectral mammography with photon-counting detectors (PCDs) is able to resolve the issues caused by the conventional technique and EIDs by using their energy-discrimination capabilities. In this study, contrast-enhanced spectral mammography based on a PCD was implemented using a polychromatic dual-energy model, and the proposed technique was compared with a dual-energy technique using an EID in terms of quantitative accuracy and radiation dose. The results showed that the proposed technique improved quantitative accuracy and reduced radiation dose compared with the EID-based dual-energy technique. The quantitative accuracy of the PCD-based contrast-enhanced spectral mammography improved slightly as a function of radiation dose. Therefore, contrast-enhanced spectral mammography based on a PCD is able to provide useful information for detecting breast tumors and improving diagnostic accuracy.
Steidl, Matthew; Zimmern, Philippe
2013-01-01
We determined whether a custom computer program can improve the extraction and accuracy of key outcome measures from progress notes in an electronic medical record compared to a traditional data recording system for incontinence and prolapse repair procedures. Following institutional review board approval, progress notes were exported from the Epic electronic medical record system for outcome measure extraction by a custom computer program. The extracted data (D1) were compared against a manually maintained outcome measures database (D2). This work took place in 2 phases. During the first phase, volatile data such as questionnaires and standardized physical examination findings using the POP-Q (pelvic organ prolapse quantification) system were extracted from existing progress notes. The second phase used a progress note template incorporating key outcome measures to evaluate improvement in data accuracy and extraction rates. Phase 1 compared 6,625 individual outcome measures from 316 patients in D2 to 3,534 outcome measures extracted from progress notes in D1, resulting in an extraction rate of 53.3%. A subset of 3,763 outcome measures from D1 was created by excluding data that did not exist in the extraction, yielding an accuracy rate of 93.9%. With the use of the template in phase 2, the extraction rate improved to 91.9% (273 of 297) and the accuracy rate improved to 100% (273 of 273). In the field of incontinence and prolapse, the disciplined use of an electronic medical record template containing a preestablished set of key outcome measures can provide the ideal interface between required documentation and clinical research. Copyright © 2013 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Kusy, Maciej; Obrzut, Bogdan; Kluska, Jacek
2013-12-01
The aim of this article was to compare the gene expression programming (GEP) method with three types of neural networks in predicting adverse events of radical hysterectomy in cervical cancer patients. One hundred seven patients treated by radical hysterectomy were analyzed. Each record, representing a single patient, consisted of 10 parameters. The occurrence or absence of perioperative complications defined a two-class classification problem. In the simulations, the GEP algorithm was compared to a multilayer perceptron (MLP), a radial basis function neural network, and a probabilistic neural network. The generalization ability of the models was assessed on the basis of accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC). The GEP classifier provided the best results in predicting the adverse events, with an accuracy of 71.96 %. Comparable but slightly worse outcomes were obtained using the MLP, i.e., 71.87 %. For each of the measured indices (accuracy, sensitivity, specificity, and AUROC), the standard deviation was smallest for the models generated by the GEP classifier.
Accuracy of frozen section in the diagnosis of ovarian tumours.
Toneva, F; Wright, H; Razvi, K
2012-07-01
The purpose of our retrospective study was to assess the accuracy of intraoperative frozen section diagnosis compared to final paraffin diagnosis in ovarian tumours at a gynaecological oncology centre in the UK. We analysed 66 cases and observed that frozen section consultation agreed with final paraffin diagnosis in 59 cases, which provided an accuracy of 89.4%. The overall sensitivity and specificity for all tumours were 85.4% and 100%, respectively. The positive predictive value (PPV) and negative predictive value (NPV) were 100% and 89.4%, respectively. Of the seven cases with discordant results, the majority were large, mucinous tumours, which is in line with previous studies. Our study demonstrated that despite its limitations, intraoperative frozen section has a high accuracy and sensitivity for assessing ovarian tumours; however, care needs to be taken with large, mucinous tumours.
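The statistics reported above (accuracy, sensitivity, specificity, PPV, NPV) all follow from a standard 2x2 contingency table. A minimal sketch; the counts in the usage example are illustrative, not the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-test statistics from a 2x2 table of
    true/false positives and negatives."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # detection rate among diseased cases
        "specificity": tn / (tn + fp),  # correct-negative rate among healthy cases
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

For example, `diagnostic_metrics(40, 5, 50, 5)` gives an accuracy of 0.90, sensitivity and PPV of 40/45, and specificity and NPV of 50/55.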
Performance assessment of multi-GNSS real-time PPP over Iran
NASA Astrophysics Data System (ADS)
Abdi, Naser; Ardalan, Alireza A.; Karimi, Roohollah; Rezvani, Mohammad-Hadi
2017-06-01
With the advent of multi-GNSS constellations, and thanks to the real-time precise products provided by the IGS, multi-GNSS Real-Time PPP has been of special interest to the geodetic community. These products are streamed in the form of RTCM-SSR through an NTRIP broadcaster. In this contribution, we assess the convergence time and positioning accuracy of Real-Time PPP over Iran using GPS, GPS + GLONASS, GPS + BeiDou, and GPS + GLONASS + BeiDou configurations. To this end, RINEX observations from six GNSS stations within the Iranian Permanent GNSS Network (IPGN), over sixteen consecutive days, were processed with BKG NTRIP Client (BNC, v 2.12). In the processing steps, the IGS-MGEX broadcast ephemerides (BRDM, provided by TUM/DLR) and the pre-saved CLK93 broadcast corrections stream (provided by CNES) were used as the satellites' known information. The numerical results were compared against station coordinates obtained from double-difference solutions with Bernese GPS Software v 5.0. Accordingly, we found that the GPS + BeiDou combination can reduce the convergence time by 27%, 16% and 10% and improve the positioning accuracy by 22%, 18% and 2% in the north, east and up components, respectively, compared with GPS PPP. Additionally, in comparison with the GPS + GLONASS results, the GPS + GLONASS + BeiDou combination speeds up convergence by 9%, 8% and 9% and enhances the positioning accuracy by 8%, 5% and 6% in the north, east and up components, respectively. Overall, thanks to the availability of the current BeiDou constellation observations, the considerable decrease in convergence time on one hand, and the improvement in positioning accuracy on the other, confirm the efficiency of multi-GNSS PPP for real-time applications over Iran.
Hand-eye calibration for rigid laparoscopes using an invariant point.
Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J
2016-06-01
Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
Developing a weighted measure of speech sound accuracy.
Preston, Jonathan L; Ramsdell, Heather L; Oller, D Kimbrough; Edwards, Mary Louise; Tobin, Stephen J
2011-02-01
To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound Accuracy (WSSA) score. The authors then evaluate the reliability and validity of this measure. Phonetic transcriptions were analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy was validated against existing measures, was used to discriminate typical and disordered speech production, and was evaluated to examine sensitivity to changes in phonetic accuracy over time. Reliability between transcribers and consistency of scores among different word sets and testing points are compared. Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders and captures growth in phonetic accuracy in toddlers' speech over time. The measure correlates highly across transcribers, word lists, and testing points. Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech.
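A differential weighting scheme like the one described can be sketched as a penalty per target sound, with milder errors costing less than severe ones. The error categories and weights below are hypothetical stand-ins, not the published WSSA weights:

```python
# Hypothetical error weights for illustration; the actual WSSA weighting
# scheme is defined in the published scoring system, not reproduced here.
ERROR_WEIGHTS = {"distortion": 0.25, "substitution": 0.5, "omission": 1.0}

def weighted_accuracy(errors, total_sounds, weights=ERROR_WEIGHTS):
    """Score = 1 minus the weighted error penalty per target sound,
    so a distortion reduces the score less than an omission does."""
    penalty = sum(weights[e] for e in errors)
    return 1.0 - penalty / total_sounds
```

For instance, one distortion and one omission over ten target sounds would score 1 - 1.25/10 = 0.875 under these assumed weights, whereas an unweighted percent-consonants-correct measure would treat both errors identically.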
Moschetta, M; Telegrafo, M; Carluccio, D A; Jablonska, J P; Rella, L; Serio, Gabriella; Carrozzo, M; Stabile Ianora, A A; Angelelli, G
2014-01-01
To compare the diagnostic accuracy of fine-needle aspiration cytology (FNAC) and core needle biopsy (CNB) in patients with US-detected breast lesions. Between September 2011 and May 2013, 3469 consecutive breast US examinations were performed. 400 breast nodules were detected in 398 patients. 210 FNACs and 190 CNBs were performed. 183 out of 400 (46%) lesions were surgically removed within 30 days from diagnosis; in the remaining cases, a six-month follow-up US examination was performed. Sensitivity, specificity, diagnostic accuracy, positive predictive value (PPV) and negative predictive value (NPV) were calculated for FNAC and CNB. 174 out of 400 (43%) lesions were found to be malignant, while the remaining 226 proved to be benign. 166 out of 210 (79%) FNACs and 154 out of 190 (81%) CNBs provided diagnostic specimens. Sensitivity, specificity, diagnostic accuracy, PPV and NPV of 97%, 94%, 95%, 91% and 98% were found for FNAC, and values of 92%, 82%, 89%, 92% and 82% were obtained for CNB. Sensitivity, specificity, diagnostic accuracy, PPV and NPV of 97%, 96%, 96%, 97% and 96% were found for FNAC, and values of 97%, 96%, 96%, 97% and 96% were obtained for CNB. FNAC and CNB provide similar values of diagnostic accuracy.
Borrego, Adrián; Latorre, Jorge; Alcañiz, Mariano; Llorens, Roberto
2018-06-01
The latest generation of head-mounted displays (HMDs) provides built-in head tracking, which enables estimating position in a room-size setting. This feature allows users to explore, navigate, and move within real-size virtual environments, such as kitchens, supermarket aisles, or streets. Previously, these actions were commonly facilitated by external peripherals and interaction metaphors. The objective of this study was to compare the Oculus Rift and the HTC Vive in terms of the working range of the head tracking and the working area, accuracy, and jitter in a room-size environment, and to determine their feasibility for serious games, rehabilitation, and health-related applications. The position of the HMDs was registered in a 10 × 10 grid covering an area of 25 m² at sitting (1.3 m) and standing (1.7 m) heights. Accuracy and jitter were estimated from positional data. The working range was estimated by moving the HMDs away from the cameras until no data were obtained. The HTC Vive provided a working area (24.87 m²) twice as large as that of the Oculus Rift. Both devices showed excellent and comparable performance at sitting height (accuracy up to 1 cm and jitter <0.35 mm), and the HTC Vive presented worse but still excellent accuracy and jitter at standing height (accuracy up to 1.5 cm and jitter <0.5 mm). The HTC Vive presented a larger working range (7 m) than did the Oculus Rift (4.25 m). Our results support the use of these devices for real navigation, exploration, exergaming, and motor rehabilitation in virtual reality environments.
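Accuracy and jitter in this sense, the offset of the mean reported position from the true position and the spread of the samples around their own mean, can be estimated from positional data as follows. This is a generic sketch, not the authors' processing pipeline:

```python
def accuracy_and_jitter(samples, true_pos):
    """samples: list of (x, y, z) positions reported by the tracker at a
    fixed physical location; true_pos: the known (x, y, z) location.
    Accuracy = distance from the mean reported position to the true one;
    jitter = RMS deviation of the samples around their own mean."""
    n = len(samples)
    mean = tuple(sum(s[i] for s in samples) / n for i in range(3))
    accuracy = sum((mean[i] - true_pos[i]) ** 2 for i in range(3)) ** 0.5
    jitter = (sum(sum((s[i] - mean[i]) ** 2 for i in range(3))
                  for s in samples) / n) ** 0.5
    return accuracy, jitter
```

Repeating this at every grid point yields the per-position accuracy and jitter maps that studies like this one summarize over the working area.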
NASA Astrophysics Data System (ADS)
O'Neil, Gina L.; Goodall, Jonathan L.; Watson, Layne T.
2018-04-01
Wetlands are important ecosystems that provide many ecological benefits, and their quality and presence are protected by federal regulations. These regulations require wetland delineations, which can be costly and time-consuming to perform. Computer models can assist in this process, but lack the accuracy necessary for environmental planning-scale wetland identification. In this study, the potential for improvement of wetland identification models through modification of digital elevation model (DEM) derivatives, derived from high-resolution and increasingly available light detection and ranging (LiDAR) data, at a scale necessary for small-scale wetland delineations is evaluated. A novel approach of flow convergence modelling is presented where Topographic Wetness Index (TWI), curvature, and Cartographic Depth-to-Water index (DTW), are modified to better distinguish wetland from upland areas, combined with ancillary soil data, and used in a Random Forest classification. This approach is applied to four study sites in Virginia, implemented as an ArcGIS model. The model resulted in significant improvement in average wetland accuracy compared to the commonly used National Wetland Inventory (84.9% vs. 32.1%), at the expense of a moderately lower average non-wetland accuracy (85.6% vs. 98.0%) and average overall accuracy (85.6% vs. 92.0%). From this, we concluded that modifying TWI, curvature, and DTW provides more robust wetland and non-wetland signatures to the models by improving accuracy rates compared to classifications using the original indices. The resulting ArcGIS model is a general tool able to modify these local LiDAR DEM derivatives based on site characteristics to identify wetlands at a high resolution.
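The idea of combining the modified terrain indices with ancillary soil data into a wetland/upland decision can be illustrated with a toy voting rule. The thresholds below are invented for illustration and stand in for the study's trained Random Forest classifier:

```python
def wetland_vote(twi, curvature, dtw_m, soil_hydric):
    """Toy majority vote over the same kinds of predictors the study
    feeds to a Random Forest. All thresholds are hypothetical."""
    votes = 0
    votes += twi > 8.0        # high topographic wetness index
    votes += curvature < 0.0  # concave, water-accumulating surface
    votes += dtw_m < 1.0      # shallow cartographic depth-to-water (m)
    votes += soil_hydric      # ancillary hydric-soil flag
    return votes >= 3         # classify as wetland on a 3-of-4 majority
```

A Random Forest effectively learns many such threshold rules from training data and averages them; the study's contribution is modifying the indices themselves (TWI, curvature, DTW) so that the learned rules separate wetland from upland more cleanly.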
Investigating the extent of neuroplasticity: Writing in children with perinatal stroke.
Woolpert, Darin; Reilly, Judy S
2016-08-01
The developing brain is remarkably plastic, as evidenced by language studies of children with perinatal stroke (PS). Despite initial delays and in contrast to adults with comparable lesions, children with PS perform comparably to their age-matched peers in free conversation by school age. Recent studies of spoken language in older children with PS have indicated limits to neural plasticity. Writing, a cognitively demanding and language dependent domain, is understudied in children with PS. Investigating writing development will provide another perspective on the continuing linguistic development in this population. Written language performance in 43 children with PS and 60 of their typically-developing (TD) peers was evaluated to further investigate the breadth and limits to neural plasticity. Two tasks of varying difficulty were administered: a picture description, which provided a referent to facilitate writing for the children, and a more challenging autobiographical narrative. Texts were analyzed across three broad writing dimensions - productivity, complexity, and linguistic accuracy. Group differences were primarily found on accuracy indices. Morphological accuracy was most impacted by early brain injury and older children with PS did not have higher morphological accuracy than their younger counterparts, suggesting limited development with age. There were no differences in performance based on hemisphere of lesion. In addition to enhancing our understanding of long-term language outcomes in children with PS, the results further illuminate the extent and limitations of early neural plasticity for language. Copyright © 2016 Elsevier Ltd. All rights reserved.
Towards frameless maskless SRS through real-time 6DoF robotic motion compensation.
Belcher, Andrew H; Liu, Xinmin; Chmura, Steven; Yenice, Kamil; Wiersma, Rodney D
2017-11-13
Stereotactic radiosurgery (SRS) uses precise dose placement to treat conditions of the CNS. Frame-based SRS uses a metal head ring fixed to the patient's skull to provide high treatment accuracy, but patient comfort and clinical workflow may suffer. Frameless SRS, while potentially more convenient, may increase uncertainty of treatment accuracy and be physiologically confining to some patients. By incorporating highly precise robotics and advanced software algorithms into frameless treatments, we present a novel frameless and maskless SRS system where a robot provides real-time 6DoF head motion stabilization allowing positional accuracies to match or exceed those of traditional frame-based SRS. A 6DoF parallel kinematics robot was developed and integrated with a real-time infrared camera in a closed loop configuration. A novel compensation algorithm was developed based on an iterative closest-path correction approach. The robotic SRS system was tested on six volunteers, whose motion was monitored and compensated for in real-time over 15 min simulated treatments. The system's effectiveness in maintaining the target's 6DoF position within preset thresholds was determined by comparing volunteer head motion with and without compensation. Comparing corrected and uncorrected motion, the 6DoF robotic system showed an overall improvement factor of 21 in terms of maintaining target position within 0.5 mm and 0.5 degree thresholds. Although the system's effectiveness varied among the volunteers examined, for all volunteers tested the target position remained within the preset tolerances 99.0% of the time when robotic stabilization was used, compared to 4.7% without robotic stabilization. The pre-clinical robotic SRS compensation system was found to be effective at responding to sub-millimeter and sub-degree cranial motions for all volunteers examined. 
The system's success with volunteers has demonstrated its capability for implementation with frameless and maskless SRS treatments, potentially able to achieve the same or better treatment accuracies compared to traditional frame-based approaches.
Towards frameless maskless SRS through real-time 6DoF robotic motion compensation
NASA Astrophysics Data System (ADS)
Belcher, Andrew H.; Liu, Xinmin; Chmura, Steven; Yenice, Kamil; Wiersma, Rodney D.
2017-12-01
Sung, Yun J; Gu, C Charles; Tiwari, Hemant K; Arnett, Donna K; Broeckel, Ulrich; Rao, Dabeeru C
2012-07-01
Genotype imputation infers genotypes at untyped single nucleotide polymorphisms (SNPs) that are present on a reference panel such as those from the HapMap Project. It is popular for increasing statistical power and comparing results across studies using different platforms. Imputation for African American populations is challenging because their linkage disequilibrium blocks are shorter and because no ideal reference panel is available due to admixture. In this paper, we evaluated three imputation strategies for African Americans. The intersection strategy used a combined panel consisting of SNPs polymorphic in both CEU and YRI. The union strategy used a panel consisting of SNPs polymorphic in either CEU or YRI. The merge strategy merged results from two separate imputations, one using CEU and the other using YRI. Because investigators are increasingly using data from the 1000 Genomes (1KG) Project for genotype imputation, we evaluated both 1KG-based imputations and HapMap-based imputations. We used 23,707 SNPs from chromosomes 21 and 22 on the Affymetrix SNP Array 6.0 genotyped for 1,075 HyperGEN African Americans. We found that 1KG-based imputations provided a substantially larger number of variants than HapMap-based imputations, about three times as many common variants and eight times as many rare and low-frequency variants. This higher yield is expected because the 1KG panel includes more SNPs. Accuracy rates using 1KG data were slightly lower than those using HapMap data before filtering, but slightly higher after filtering. The union strategy provided the highest imputation yield with the next highest accuracy. The intersection strategy provided the lowest imputation yield but the highest accuracy. The merge strategy provided the lowest imputation accuracy. We observed that SNPs polymorphic only in CEU had much lower accuracy, reducing the accuracy of the union strategy.
Our findings suggest that 1KG-based imputations can facilitate discovery of significant associations for SNPs across the whole MAF spectrum. Because the 1KG Project is still under way, we expect that later versions will provide better imputation performance. © 2012 Wiley Periodicals, Inc.
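The three panel strategies reduce to set operations on the SNPs polymorphic in each reference population. A sketch; the conflict-resolution rule in the merge function is an assumption for illustration, since the abstract does not specify how overlapping calls are reconciled:

```python
def build_panels(ceu_poly, yri_poly):
    """Intersection and union SNP panels from two reference populations,
    given as sets of SNP identifiers polymorphic in each."""
    intersection = ceu_poly & yri_poly  # polymorphic in both CEU and YRI
    union = ceu_poly | yri_poly         # polymorphic in either population
    return intersection, union

def merge_imputations(ceu_calls, yri_calls):
    """Merge two separate imputations (SNP id -> genotype call).
    Hypothetical rule: prefer the CEU-based call where both exist,
    fall back to the YRI-based call otherwise."""
    merged = dict(yri_calls)
    merged.update(ceu_calls)
    return merged
```

The trade-off the study reports follows the set sizes: the intersection panel is smallest (lowest yield, highest accuracy) while the union panel is largest (highest yield), with accuracy diluted by SNPs polymorphic in only one population.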
Siekmann, Max; Lothes, Thomas; König, Ralph; Wirtz, Christian Rainer; Coburger, Jan
2018-03-01
Intraoperative ultrasound is currently a rapidly spreading imaging option in brain tumor surgery. We examined the accuracy and resolution limits of different ultrasound probes and the influence of 3D reconstruction in a phantom, and compared these results to MRI in an intraoperative setting (iMRI). An agarose gel phantom with predefined gel targets was examined with iMRI and with sector (SUS) and linear (LUS) array probes using two-dimensional images. Additionally, 3D-reconstructed sweeps in perpendicular directions were made of every target with both probes, resulting in 392 measurements. Statistical calculations were performed, and comparative boxplots were generated. Every measurement with iMRI and LUS was more precise than with SUS, while there was no apparent difference in height measurements between iMRI and 3D-reconstructed LUS. Measurements with 3D-reconstructed LUS were always more accurate than with 2D LUS, whereas 3D reconstruction of SUS showed almost no difference from 2D SUS in some measurements. We found correlations of 3D-reconstructed SUS and LUS length and width measurements with 2D results in the same image orientation. LUS provides an accuracy and resolution comparable to iMRI, while SUS is less exact than LUS and iMRI. 3D reconstruction showed the potential to distinctly improve the accuracy and resolution of ultrasound images, although there is a strong correlation with the sweep direction during data acquisition.
Stieger-Vanegas, S M; Senthirajah, S K J; Nemanic, S; Baltzer, W; Warnock, J; Bobe, G
2015-01-01
The purpose of our study was (1) to determine whether four-view radiography of the pelvis is as reliable and accurate as computed tomography (CT) in diagnosing sacral and pelvic fractures, in addition to coxofemoral and sacroiliac joint subluxation or luxation, and (2) to evaluate the effect of the amount of training in reading diagnostic imaging studies on the accuracy of diagnosing sacral and pelvic fractures in dogs. Sacral and pelvic fractures were created in 11 canine cadavers using a lateral impactor. In all cadavers, frog-legged ventro-dorsal, lateral, right and left ventro-45°-medial to dorsolateral oblique frog leg ("rollover 45-degree view") radiographs and a CT of the pelvis were obtained. Two radiologists, two surgeons and two veterinary students classified fractures using a confidence scale and noted the duration of evaluation for each imaging modality and case. The imaging results were compared to gross dissection. All evaluators required significantly more time to analyse CT images compared to radiographic images. Sacral and pelvic fractures, specifically those of the sacral body, ischiatic table, and the pubic bone, were more accurately diagnosed using CT compared to radiography. Fractures of the acetabulum and iliac body were diagnosed with similar accuracy (at least 86%) using either modality. Computed tomography is a better method for detecting canine sacral and some pelvic fractures compared to radiography. Computed tomography provided an accuracy of close to 100% in persons trained in evaluating CT images.
Mangione, Francesca; Meleo, Deborah; Talocco, Marco; Pecci, Raffaella; Pacifici, Luciano; Bedini, Rossella
2013-01-01
The aim of this study was to evaluate the influence of artifacts on the accuracy of linear measurements estimated with a common cone beam computed tomography (CBCT) system used in dental clinical practice, by comparing it with a microCT system as the standard reference. Ten cylindrical bovine bone samples, each containing one implant able to provide both reference points and image quality degradation, were scanned with the CBCT and microCT systems. Using the software of each system, two diameters taken at different levels were measured for each cylindrical sample, using different points of the implants as references. The results were analysed by ANOVA, and a statistically significant difference was found. Based on these results, the measurements made with the two instruments are not yet statistically comparable, although similar performances, with differences that were not statistically significant, were obtained in some samples. With improvements in the hardware and software of CBCT systems, the two instruments may provide similar performances in the near future.
Geolocation and Pointing Accuracy Analysis for the WindSat Sensor
NASA Technical Reports Server (NTRS)
Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.
2006-01-01
Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated resulting in pointing knowledge and geolocation accuracy that met all design requirements.
Empirical Accuracies of U.S. Space Surveillance Network Reentry Predictions
NASA Technical Reports Server (NTRS)
Johnson, Nicholas L.
2008-01-01
The U.S. Space Surveillance Network (SSN) issues formal satellite reentry predictions for objects which have the potential for generating debris that could pose a hazard to people or property on Earth. These prognostications, known as Tracking and Impact Prediction (TIP) messages, are nominally distributed at daily intervals beginning four days prior to the anticipated reentry and several times during the final 24 hours in orbit. The accuracy of these messages depends on the nature of the satellite's orbit, the characteristics of the space vehicle, solar activity, and many other factors. Despite the many influences on the time and the location of reentry, a useful assessment of the accuracies of TIP messages can be derived and compared with the official accuracies included with each TIP message. This paper summarizes the results of a study of numerous uncontrolled reentries of spacecraft and rocket bodies from nearly circular orbits over a span of several years. Insights are provided into the empirical accuracies and utility of SSN TIP messages.
3D Cloud Field Prediction using A-Train Data and Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Johnson, C. L.
2017-12-01
Validation of cloud process parameterizations used in global climate models (GCMs) would greatly benefit from observed 3D cloud fields at a size comparable to that of a GCM grid cell. For the highest resolution simulations, surface grid cells are on the order of 100 km by 100 km. CloudSat/CALIPSO data provides a 1 km wide swath of detailed vertical cloud fraction profile (CFP) and liquid and ice water content (LWC/IWC). This work utilizes four machine learning algorithms to create nonlinear regressions of CFP, LWC, and IWC data using radiances, surface type, and location of measurement as predictors, and applies the regression equations to off-track locations, generating 3D cloud fields for 100 km by 100 km domains. The CERES-CloudSat-CALIPSO-MODIS (C3M) merged data set for February 2007 is used. Support Vector Machines, Artificial Neural Networks, Gaussian Processes, and Decision Trees are trained on 1000 km of continuous C3M data. Accuracy is computed using existing vertical profiles that are excluded from the training data and occur within 100 km of the training data. Accuracy of the four algorithms is compared. Average accuracy for one day of predicted data is 86% for the most successful algorithm. The methodology for training the algorithms, determining valid prediction regions, and applying the equations off-track is discussed. Predicted 3D cloud fields are provided as inputs to the Ed4 NASA LaRC Fu-Liou radiative transfer code, and the differences between the TOA radiances computed from predicted profiles and the observed CERES/MODIS radiances are assessed.
Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs
NASA Astrophysics Data System (ADS)
Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.
2016-06-01
Recent advances in the automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater, utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single-medium cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated using images acquired from a system camera mounted in an underwater housing and from the popular GoPro cameras, respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in air and underwater, and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
Multi-site evaluation of IKONOS data for classification of tropical coral reef environments
Andrefouet, S.; Kramer, Philip; Torres-Pulliza, D.; Joyce, K.E.; Hochberg, E.J.; Garza-Perez, R.; Mumby, P.J.; Riegl, Bernhard; Yamano, H.; White, W.H.; Zubia, M.; Brock, J.C.; Phinn, S.R.; Naseer, A.; Hatcher, B.G.; Muller-Karger, F. E.
2003-01-01
Ten IKONOS images of different coral reef sites distributed around the world were processed to assess the potential of 4-m resolution multispectral data for coral reef habitat mapping. Complexity of reef environments, established by field observation, ranged from 3 to 15 classes of benthic habitats containing various combinations of sediments, carbonate pavement, seagrass, algae, and corals in different geomorphologic zones (forereef, lagoon, patch reef, reef flats). Processing included corrections for sea surface roughness and bathymetry, unsupervised or supervised classification, and accuracy assessment based on ground-truth data. IKONOS classification results were compared with classified Landsat 7 imagery for simple to moderate complexity of reef habitats (5-11 classes). For both sensors, overall accuracies of the classifications show a general linear trend of decreasing accuracy with increasing habitat complexity. The IKONOS sensor performed better, with a 15-20% improvement in accuracy compared to Landsat. For IKONOS, overall accuracy was 77% for 4-5 classes, 71% for 7-8 classes, 65% in 9-11 classes, and 53% for more than 13 classes. The Landsat classification accuracy was systematically lower, with an average of 56% for 5-10 classes. Within this general trend, inter-site comparisons and specificities demonstrate the benefits of different approaches. Pre-segmentation of the different geomorphologic zones and depth correction provided different advantages in different environments. Our results help guide scientists and managers in applying IKONOS-class data for coral reef mapping applications. © 2003 Elsevier Inc. All rights reserved.
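A short sketch of the accuracy-assessment arithmetic behind the figures above may help: overall classification accuracy is the fraction of ground-truth samples whose mapped habitat class agrees with the reference class, i.e. the diagonal of the confusion matrix divided by its grand total. The 3-class matrix below is invented for illustration, not taken from the study.

```python
def overall_accuracy(confusion):
    """confusion[i][j] = number of reference-class-i samples mapped to class j."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Toy 3-class confusion matrix (e.g. sand / seagrass / coral; invented counts).
cm = [
    [40,  5,  5],
    [ 4, 30,  6],
    [ 6,  4, 50],
]
print(overall_accuracy(cm))  # 120/150 = 0.8
```

The same ratio, computed per site, is what produces the reported trend of decreasing accuracy with increasing habitat complexity.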
NASA Astrophysics Data System (ADS)
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. 
In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when a comparable numerical accuracy is required.
Fifolt, Matthew; Blackburn, Justin; Rhodes, David J; Gillespie, Shemeka; Bennett, Aleena; Wolff, Paul; Rucks, Andrew
Historically, double data entry (DDE) has been considered the criterion standard for minimizing data entry errors. However, previous studies considered data entry alternatives through the limited lens of data accuracy. This study supplies information regarding data accuracy, operational efficiency, and cost for DDE and Optical Mark Recognition (OMR) for processing the Consumer Assessment of Healthcare Providers and Systems 5.0 survey. To assess data accuracy, we compared error rates for DDE and OMR by dividing the number of surveys that were arbitrated by the total number of surveys processed for each method. To assess operational efficiency, we tallied the cost of data entry for DDE and OMR after survey receipt. Costs were calculated on the basis of personnel, depreciation for capital equipment, and costs of noncapital equipment. The cost savings attributed to this method were negated by the operational efficiency of OMR. The difference in arbitration rates between DDE and OMR was statistically significant; however, this statistical significance did not translate into practical significance. The potential benefits of DDE in terms of data accuracy did not outweigh the operational efficiency, and thereby the financial savings, of OMR.
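The error-rate measure described above is a simple ratio. A minimal sketch, with invented survey counts (the study does not report these raw numbers):

```python
# Arbitration rate for each data-entry method: surveys sent to arbitration
# divided by the total surveys processed by that method.

def arbitration_rate(arbitrated, processed):
    if processed == 0:
        raise ValueError("no surveys processed")
    return arbitrated / processed

dde_rate = arbitration_rate(12, 4000)   # double data entry (invented counts)
omr_rate = arbitration_rate(30, 4000)   # optical mark recognition (invented counts)
print(dde_rate, omr_rate)  # 0.003 0.0075
```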
Existing methods for improving the accuracy of digital-to-analog converters
NASA Astrophysics Data System (ADS)
Eielsen, Arnfinn A.; Fleming, Andrew J.
2017-09-01
The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
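One of the five methods compared above, large dithering, can be illustrated numerically. The sketch below assumes an ideal mid-tread 1-LSB quantizer rather than a real DAC with element mismatch; the signal value and dither grid are illustrative only. Averaging the quantizer output over one period of a uniform dither spanning 1 LSB recovers the input in the mean, which is the linearization effect the dithering methods exploit.

```python
import math

def quantize(v):
    """Ideal mid-tread quantizer with a 1-LSB step (round half up)."""
    return math.floor(v + 0.5)

x = 0.3
plain_error = abs(quantize(x) - x)          # 0.3 LSB of quantization error

# One period of a uniform dither covering [-0.5, 0.5) LSB.
N = 1000
dither = [(k + 0.5) / N - 0.5 for k in range(N)]
dithered_mean = sum(quantize(x + d) for d in dither) / N

print(plain_error, dithered_mean)  # 0.3 0.3 -- the dithered mean recovers x
```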
Trakoolwilaiwan, Thanawin; Behboodi, Bahareh; Lee, Jaeseok; Kim, Kyungsoo; Choi, Ji-Woong
2018-01-01
The aim of this work is to develop an effective brain-computer interface (BCI) method based on functional near-infrared spectroscopy (fNIRS). In order to improve the performance of the BCI system in terms of accuracy, the ability to discriminate features from input signals and proper classification are desired. Previous studies have mainly extracted features from the signal manually, but proper features need to be selected carefully. To avoid performance degradation caused by manual feature selection, we applied convolutional neural networks (CNNs) as the automatic feature extractor and classifier for fNIRS-based BCI. In this study, the hemodynamic responses evoked by performing rest, right-, and left-hand motor execution tasks were measured on eight healthy subjects to compare performances. Our CNN-based method provided improvements in classification accuracy over conventional methods employing the most commonly used features of mean, peak, slope, variance, kurtosis, and skewness, classified by support vector machine (SVM) and artificial neural network (ANN). Specifically, up to 6.49% and 3.33% improvement in classification accuracy was achieved by CNN compared with SVM and ANN, respectively.
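The conventional baseline above classifies hand-crafted features of the hemodynamic response. As a minimal sketch, three of the listed features (mean, peak, slope) extracted from one channel; the sample values and sampling interval are invented:

```python
def extract_features(signal, dt=1.0):
    """Mean, peak, and least-squares slope of one fNIRS channel."""
    n = len(signal)
    mean = sum(signal) / n
    peak = max(signal)
    t = [i * dt for i in range(n)]
    t_mean = sum(t) / n
    num = sum((ti - t_mean) * (x - mean) for ti, x in zip(t, signal))
    den = sum((ti - t_mean) ** 2 for ti in t)
    slope = num / den
    return mean, peak, slope

hbo = [0.0, 0.1, 0.3, 0.6, 0.8, 0.9]   # invented oxygenated-hemoglobin samples
mean, peak, slope = extract_features(hbo)
print(round(mean, 2), peak, round(slope, 2))  # 0.45 0.9 0.2
```

A CNN replaces this manual step by learning its own filters directly from the raw signals.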
Brown, Jessica A; Hux, Karen; Knollman-Porter, Kelly; Wallace, Sarah E
2016-01-01
Concomitant visual and cognitive impairments following traumatic brain injuries (TBIs) may be problematic when the visual modality serves as a primary source for receiving information. Further difficulties comprehending visual information may occur when interpretation requires processing inferential rather than explicit content. The purpose of this study was to compare the accuracy with which people with and without severe TBI interpreted information in contextually rich drawings. Fifteen adults with and 15 adults without severe TBI. Repeated-measures between-groups design. Participants were asked to match images to sentences that either conveyed explicit (ie, main action or background) or inferential (ie, physical or mental inference) information. The researchers compared accuracy between participant groups and among stimulus conditions. Participants with TBI demonstrated significantly poorer accuracy than participants without TBI extracting information from images. In addition, participants with TBI demonstrated significantly higher response accuracy when interpreting explicit rather than inferential information; however, no significant difference emerged between sentences referencing main action versus background information or sentences providing physical versus mental inference information for this participant group. Difficulties gaining information from visual environmental cues may arise for people with TBI given their difficulties interpreting inferential content presented through the visual modality.
Anxiety, anticipation and contextual information: A test of attentional control theory.
Cocks, Adam J; Jackson, Robin C; Bishop, Daniel T; Williams, A Mark
2016-09-01
We tested the assumptions of Attentional Control Theory (ACT) by examining the impact of anxiety on anticipation using a dynamic, time-constrained task. Moreover, we examined the involvement of high- and low-level cognitive processes in anticipation and how their importance may interact with anxiety. Skilled and less-skilled tennis players anticipated the shots of opponents under low- and high-anxiety conditions. Participants viewed three types of video stimuli, each depicting different levels of contextual information. Performance effectiveness (response accuracy) and processing efficiency (response accuracy divided by corresponding mental effort) were measured. Skilled players recorded higher levels of response accuracy and processing efficiency compared to less-skilled counterparts. Processing efficiency significantly decreased under high- compared to low-anxiety conditions. No difference in response accuracy was observed. When reviewing directional errors, anxiety was most detrimental to performance in the condition conveying only contextual information, suggesting that anxiety may have a greater impact on high-level (top-down) cognitive processes, potentially due to a shift in attentional control. Our findings provide partial support for ACT; anxiety elicited greater decrements in processing efficiency than performance effectiveness, possibly due to predominance of the stimulus-driven attentional system.
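The processing-efficiency measure used above is response accuracy divided by the corresponding invested mental effort. A minimal sketch with invented numbers: the same accuracy obtained at higher mental effort yields lower efficiency, which is the pattern reported under high anxiety.

```python
def processing_efficiency(response_accuracy, mental_effort):
    return response_accuracy / mental_effort

low_anxiety = processing_efficiency(0.72, 60.0)    # accuracy, effort rating (invented)
high_anxiety = processing_efficiency(0.72, 75.0)   # same accuracy, more effort
print(low_anxiety > high_anxiety)  # True
```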
Hua, Zhi-Gang; Lin, Yan; Yuan, Ya-Zhou; Yang, De-Chang; Wei, Wen; Guo, Feng-Biao
2015-01-01
In 2003, we developed an ab initio program, ZCURVE 1.0, to find genes in bacterial and archaeal genomes. In this work, we present the updated version, ZCURVE 3.0. Using 422 prokaryotic genomes, the average accuracy was 93.7% with the updated version, compared with 88.7% with the original version. These results also demonstrate that ZCURVE 3.0 is comparable with Glimmer 3.02 and may provide complementary predictions to it. In fact, the joint application of the two programs generated better results by correctly finding more annotated genes while producing fewer false-positive predictions. As an exclusive feature, ZCURVE 3.0 contains a post-processing program that can identify essential genes with high accuracy (generally >90%). We hope ZCURVE 3.0 will receive wide use with the web-based running mode. The updated ZCURVE can be freely accessed from http://cefg.uestc.edu.cn/zcurve/ or http://tubic.tju.edu.cn/zcurveb/ without any restrictions. PMID:25977299
Forecasting daily patient volumes in the emergency department.
Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L
2008-02-01
Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. 
This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
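One of the evaluated methods above is exponential smoothing. A minimal simple-exponential-smoothing sketch: each day's level is a weighted blend of the newest observation and the previous level, and the flat one-day-ahead forecast is the last level. The daily volumes and smoothing constant below are invented, not from the study data.

```python
def ses_forecast(volumes, alpha=0.3):
    """Simple exponential smoothing; returns the one-day-ahead forecast."""
    level = volumes[0]
    for v in volumes[1:]:
        level = alpha * v + (1 - alpha) * level
    return level

daily_volumes = [120, 135, 128, 140, 150, 145, 138]  # invented ED arrivals
print(round(ses_forecast(daily_volumes), 1))  # 138.5
```

The regression models favored by the study additionally encode calendar variables (day of week, month, special days) as predictors, which plain smoothing cannot do.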
Comparing Features for Classification of MEG Responses to Motor Imagery
Halme, Hanna-Leena; Parkkonen, Lauri
2016-01-01
Background Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. Methods MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. Results The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results.
Conclusions We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. Feature extraction methods utilizing both the spatial and spectral profile of MI-related signals provided the best classification results, suggesting good performance of these methods in an online MEG neurofeedback system. PMID:27992574
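The study above evaluates each feature/classifier pair with 5-fold cross-validation. A minimal sketch of the index bookkeeping only: trials are partitioned into 5 disjoint folds, each fold serves once as the test set, and the mean of the per-fold accuracies would be reported. The trial count is invented.

```python
def k_fold_indices(n_trials, k=5):
    """Partition trial indices into k disjoint folds and return (train, test) splits."""
    folds = [list(range(i, n_trials, k)) for i in range(k)]
    splits = []
    for test_fold in range(k):
        test = folds[test_fold]
        train = [i for f in range(k) if f != test_fold for i in folds[f]]
        splits.append((train, test))
    return splits

splits = k_fold_indices(20, k=5)
print(len(splits))  # 5
```

Each trial appears in exactly one test set, so every split trains on 16 trials and tests on the held-out 4.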
Conforming and nonconforming virtual element methods for elliptic problems
Cangiani, Andrea; Manzini, Gianmarco; Sutton, Oliver J.
2016-08-03
Here we present, in a unified framework, new conforming and nonconforming virtual element methods for general second-order elliptic problems in two and three dimensions. The differential operator is split into its symmetric and nonsymmetric parts and conditions for stability and accuracy on their discrete counterparts are established. These conditions are shown to lead to optimal H 1- and L 2-error estimates, confirmed by numerical experiments on a set of polygonal meshes. The accuracy of the numerical approximation provided by the two methods is shown to be comparable.
Female preferences drive the evolution of mimetic accuracy in male sexual displays.
Coleman, Seth William; Patricelli, Gail Lisa; Coyle, Brian; Siani, Jennifer; Borgia, Gerald
2007-10-22
Males in many bird species mimic the vocalizations of other species during sexual displays, but the evolutionary and functional significance of interspecific vocal mimicry is unclear. Here we use spectrographic cross-correlation to compare mimetic calls produced by male satin bowerbirds (Ptilonorhynchus violaceus) in courtship with calls from several model species. We show that the accuracy of vocal mimicry and the number of model species mimicked are both independently related to male mating success. Multivariate analyses revealed that these mimetic traits were better predictors of male mating success than other male display traits previously shown to be important for male mating success. We suggest that preference-driven mimetic accuracy may be a widespread occurrence, and that mimetic accuracy may provide females with important information about male quality. Our findings support an alternative hypothesis to help explain a common element of male sexual displays.
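The core of spectrographic cross-correlation is a normalized correlation between a mimetic call's spectrogram and a model call's spectrogram. The sketch below reduces spectrograms to flat amplitude vectors and shows only the zero-lag coefficient (the full method scans over time lags and takes the peak); all vectors are invented.

```python
import math

def normalized_correlation(a, b):
    """Zero-lag normalized (Pearson-style) correlation of two equal-length vectors."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

model_call = [0.1, 0.8, 0.4, 0.2, 0.7]       # invented spectrogram amplitudes
accurate_mimic = [0.2, 0.9, 0.5, 0.1, 0.6]
poor_mimic = [0.7, 0.1, 0.2, 0.8, 0.3]
print(normalized_correlation(model_call, accurate_mimic) >
      normalized_correlation(model_call, poor_mimic))  # True
```

A higher coefficient corresponds to a more accurate mimic, which is the quantity the study relates to male mating success.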
Navigating highly elliptical earth orbiters with simultaneous VLBI from orthogonal baseline pairs
NASA Technical Reports Server (NTRS)
Frauenholz, Raymond B.
1986-01-01
Navigation strategies for determining highly elliptical orbits with VLBI are described. The predicted performance of wideband VLBI and Delta VLBI measurements obtained by orthogonal baseline pairs are compared for a 16-hr equatorial orbit. It is observed that the one-sigma apogee position accuracy improves two orders of magnitude to the meter level when Delta VLBI measurements are added to coherent Doppler and range, and the simpler VLBI strategy provides nearly the same orbit accuracy. The effects of differential measurement noise and acquisition geometry on orbit accuracy are investigated. The data reveal that quasar position uncertainty limits the accuracy of wideband Delta VLBI measurements, and that polar motion and baseline uncertainties and offsets between station clocks affect the wideband VLBI data. It is noted that differential one-way range (DOR) has performance nearly equal to that of the more complex Delta DOR and is recommended for use on spacecraft in high elliptical orbits.
NASA Astrophysics Data System (ADS)
Salleh, S. A.; Rahman, A. S. A. Abd; Othman, A. N.; Mohd, W. M. N. Wan
2018-02-01
As different approaches produce different results, it is crucial to determine which methods are accurate in order to perform analysis of the event. This research aims to compare the Rank Reciprocal (MCDM) and Artificial Neural Network (ANN) analysis techniques in determining susceptible zones of landslide hazard. The study is based on data obtained from various sources such as the local authority, Dewan Bandaraya Kuala Lumpur (DBKL), Jabatan Kerja Raya (JKR), and other agencies. The data were analysed and processed using ArcGIS. The results were compared by quantifying the risk ranking and area differences, and were also compared with the zonation map classified by DBKL. The results suggested that the ANN method gives better accuracy than MCDM, with an accuracy assessment 18.18% higher than that of the MCDM approach. This indicates that ANN provides more reliable results, probably due to its ability to learn from the environment, thus portraying a realistic and accurate result.
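In the MCDM branch of the comparison above, rank reciprocal weighting assigns the criterion ranked r a raw weight of 1/r, normalized so that the weights sum to 1. A minimal sketch; the example criterion names in the comment are invented, not from the study.

```python
def rank_reciprocal_weights(n_criteria):
    """Rank reciprocal weights: w_r = (1/r) / sum(1/j for j in 1..n)."""
    raw = [1.0 / r for r in range((1), n_criteria + 1)]
    total = sum(raw)
    return [w / total for w in raw]

# e.g. slope, soil type, land use, rainfall ranked 1..4 by importance
weights = rank_reciprocal_weights(4)
print([round(w, 3) for w in weights])  # [0.48, 0.24, 0.16, 0.12]
```

The susceptibility score of a map cell is then the weighted sum of its criterion ratings, which is what gets compared against the ANN output.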
GStream: Improving SNP and CNV Coverage on Genome-Wide Association Studies
Alonso, Arnald; Marsal, Sara; Tortosa, Raül; Canela-Xandri, Oriol; Julià, Antonio
2013-01-01
We present GStream, a method that combines genome-wide SNP and CNV genotyping in the Illumina microarray platform with unprecedented accuracy. This new method outperforms previous well-established SNP genotyping software. More importantly, the CNV calling algorithm of GStream dramatically improves the results obtained by previous state-of-the-art methods and yields an accuracy that is close to that obtained by purely CNV-oriented technologies like Comparative Genomic Hybridization (CGH). We demonstrate the superior performance of GStream using microarray data generated from HapMap samples. Using the reference CNV calls generated by the 1000 Genomes Project (1KGP) and well-known studies on whole genome CNV characterization based either on CGH or genotyping microarray technologies, we show that GStream can increase the number of reliably detected variants up to 25% compared to previously developed methods. Furthermore, the increased genome coverage provided by GStream allows the discovery of CNVs in close linkage disequilibrium with SNPs, previously associated with disease risk in published Genome-Wide Association Studies (GWAS). These results could provide important insights into the biological mechanism underlying the detected disease risk association. With GStream, large-scale GWAS will not only benefit from the combined genotyping of SNPs and CNVs at an unprecedented accuracy, but will also take advantage of the computational efficiency of the method. PMID:23844243
TU-E-BRB-03: Overview of Proposed TG-132 Recommendations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brock, K.
2015-06-15
Deformable image registration (DIR) is developing rapidly and is poised to substantially improve dose fusion accuracy for adaptive and retreatment planning and motion management, and PET fusion to enhance contour delineation for treatment planning. However, DIR dose warping accuracy is difficult to quantify in general, and particularly difficult to quantify on a patient-specific basis. As clinical DIR options become more widely available, there is an increased need to understand the implications of incorporating DIR into clinical workflow. Several groups have assessed DIR accuracy in clinically relevant scenarios, but no comprehensive review material is yet available. This session will also discuss aspects of the official report of AAPM Task Group 132 on the Use of Image Registration and Data Fusion Algorithms and Techniques in Radiotherapy Treatment Planning, which provides recommendations for clinical DIR use. We will summarize and compare various commercial DIR software options, outline successful clinical techniques, show specific examples with discussion of appropriate and inappropriate applications of DIR, discuss the clinical implications of DIR, provide an overview of current DIR error analysis research, review QA options and research phantom development, and present TG-132 recommendations. Learning Objectives: Compare/contrast commercial DIR software and QA options; overview clinical DIR workflow for retreatment; understand uncertainties introduced by DIR; review TG-132 proposed recommendations.
Cota-Ruiz, Juan; Rosiles, Jose-Gerardo; Sifuentes, Ernesto; Rivas-Perea, Pablo
2012-01-01
This research presents a distributed, formula-based bilateration algorithm that can be used to provide an initial set of node locations. In this scheme, each node uses distance estimates to anchors to solve a set of circle-circle intersection (CCI) problems through a purely geometric formulation. The resulting CCIs are processed to pick those that cluster together, and their average is taken to produce an initial node location. The algorithm is compared, in terms of accuracy and computational complexity, with a least-squares localization algorithm based on the Levenberg-Marquardt methodology. Results on accuracy versus computational performance show that the bilateration algorithm is competitive with well-known optimized localization algorithms.
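The CCI step described above is standard 2-D geometry: each pair of anchors with estimated distances defines two circles whose intersection points are candidate node locations. A self-contained sketch; the anchor coordinates and ranges are invented.

```python
import math

def circle_circle_intersection(c0, r0, c1, r1):
    """Return the 0, 1, or 2 intersection points of circles (c0, r0) and (c1, r1)."""
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []  # coincident centers, too far apart, or one inside the other
    a = (r0**2 - r1**2 + d**2) / (2 * d)   # distance along the center line
    h = math.sqrt(max(r0**2 - a**2, 0.0))  # perpendicular offset
    mx, my = c0[0] + a * dx / d, c0[1] + a * dy / d
    p1 = (mx + h * dy / d, my - h * dx / d)
    p2 = (mx - h * dy / d, my + h * dx / d)
    return [p1] if h == 0 else [p1, p2]

# Two anchors at (0, 0) and (4, 0), both ranging the node at distance sqrt(8):
pts = circle_circle_intersection((0, 0), math.sqrt(8), (4, 0), math.sqrt(8))
print(pts)  # two candidates near (2, -2) and (2, 2)
```

With noisy ranges, each anchor pair yields such candidates; the algorithm keeps the ones that cluster and averages them into the initial location estimate.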
Ensemble coding remains accurate under object and spatial visual working memory load.
Epstein, Michael L; Emmanouil, Tatiana A
2017-10-01
A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.
Cosmic Origins Spectrograph : Target Acquisition Performance and Updated Guidelines
NASA Astrophysics Data System (ADS)
Penton, Steven V.; Keyes, C.; Osterman, S.; Sahnow, D.; Soderblom, D.; COS IDT Team; STScI COS Team
2010-01-01
The Cosmic Origins Spectrograph (COS) is a slitless spectrograph with a very small aperture (radius = 1.25"). To achieve the desired wavelength accuracy of <15 km/s, HST+COS must center the target to within 0.1" of the center of the aperture. This is the angle subtended by a typical AAS poster when viewed from over 1400 miles away. During SMOV we have fine-tuned the COS target acquisition (TA) procedures to exceed this accuracy for all three COS TA modes: NUV imaging, NUV spectroscopic, and FUV spectroscopic. We will compare all COS TA modes in terms of centering accuracy, efficiency (elapsed time), and required signal-to-noise for all targets suitable for use with COS. We will also provide updated recommendations for the options of all TA modes (e.g., SCAN-SIZE and NUM-POS of ACQ/PEAKD). We have observed in SMOV that HST is providing an excellent initial 1-σ blind pointing accuracy of ±0.4" in both the along-dispersion and cross-dispersion directions. We will discuss the implications of this, and other lessons learned in SMOV, on Cycle 17 and 18 HST+COS TAs.
Burgner, J.; Simpson, A. L.; Fitzpatrick, J. M.; Lathrop, R. A.; Herrell, S. D.; Miga, M. I.; Webster, R. J.
2013-01-01
Background: Registered medical images can assist with surgical navigation and enable image-guided therapy delivery. In soft tissues, surface-based registration is often used and can be facilitated by laser surface scanning. Tracked conoscopic holography (which provides distance measurements) has recently been proposed as a minimally invasive way to obtain surface scans. Moving this technique from concept to clinical use requires a rigorous accuracy evaluation, which is the purpose of our paper. Methods: We adapt recent non-homogeneous and anisotropic point-based registration results to provide a theoretical framework for predicting the accuracy of tracked distance measurement systems. Experiments are conducted on complex objects of defined geometry, an anthropomorphic kidney phantom, and a human cadaver kidney. Results: Experiments agree with model predictions, producing point RMS errors consistently < 1 mm, surface-based registration with mean closest-point error < 1 mm in the phantom, and an RMS target registration error of 0.8 mm in the human cadaver kidney. Conclusions: Tracked conoscopic holography is clinically viable; it enables minimally invasive surface-scan accuracy comparable to current clinical methods that require open surgery. PMID:22761086
Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin
2015-10-25
I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation in these factors over wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.
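As a concrete illustration of the accuracy-versus-storage tradeoff such studies explore, the sketch below applies a multilevel 1-D Haar transform and keeps only the largest-magnitude coefficients before reconstructing. This is a generic toy example, not the specific wavelet configurations or turbulent-flow data evaluated in the paper:

```python
import math

def haar_forward(x):
    """Full multilevel 1-D Haar transform (input length must be a power of two)."""
    coeffs = list(x)
    n = len(coeffs)
    while n > 1:
        approx = [(coeffs[2 * i] + coeffs[2 * i + 1]) / math.sqrt(2)
                  for i in range(n // 2)]
        detail = [(coeffs[2 * i] - coeffs[2 * i + 1]) / math.sqrt(2)
                  for i in range(n // 2)]
        coeffs[:n] = approx + detail   # details stay in place; recurse on approx
        n //= 2
    return coeffs

def haar_inverse(coeffs):
    """Invert haar_forward exactly."""
    coeffs = list(coeffs)
    n = 1
    while n < len(coeffs):
        approx, detail = coeffs[:n], coeffs[n:2 * n]
        merged = []
        for a, d in zip(approx, detail):
            merged += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
        coeffs[:2 * n] = merged
        n *= 2
    return coeffs

def compress(x, keep_fraction):
    """Lossy compression: zero all but the largest-magnitude coefficients."""
    c = haar_forward(x)
    keep = max(1, int(len(c) * keep_fraction))
    threshold = sorted((abs(v) for v in c), reverse=True)[keep - 1]
    truncated = [v if abs(v) >= threshold else 0.0 for v in c]
    return haar_inverse(truncated)
```

Varying `keep_fraction` and measuring the error of downstream analyses on the reconstruction is the same kind of accuracy/storage comparison the study performs across configurations.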
Storino, Alessandra; Castillo-Angeles, Manuel; Watkins, Ammara A; Vargas, Christina; Mancias, Joseph D; Bullock, Andrea; Demirjian, Aram; Moser, A James; Kent, Tara S
2016-09-01
The degree to which patients are empowered by written educational materials depends on the text's readability level and the accuracy of the information provided. The association of a website's affiliation or focus on treatment modality with its readability and accuracy has yet to be thoroughly elucidated. To compare the readability and accuracy of patient-oriented online resources for pancreatic cancer by treatment modality and website affiliation. An online search of 50 websites discussing 5 pancreatic cancer treatment modalities (alternative therapy, chemotherapy, clinical trials, radiation therapy, and surgery) was conducted. The website's affiliation was identified. Readability was measured by 9 standardized tests, and accuracy was assessed by an expert panel. Nine standardized tests were used to compute the median readability level of each website. The median readability scores were compared among treatment modality and affiliation categories. Accuracy was determined by an expert panel consisting of 2 medical specialists and 2 surgical specialists. The 4 raters independently evaluated all websites belonging to the 5 treatment modalities (a score of 1 indicates that <25% of the information is accurate, a score of 2 indicates that 26%-50% of the information is accurate, a score of 3 indicates that 51%-75% of the information is accurate, a score of 4 indicates that 76%-99% of the information is accurate, and a score of 5 indicates that 100% of the information is accurate). The 50 evaluated websites differed in readability and accuracy based on the focus of the treatment modality and the website's affiliation. Websites discussing surgery (with a median readability level of 13.7 and an interquartile range [IQR] of 11.9-15.6) were easier to read than those discussing radiotherapy (median readability level, 15.2 [IQR, 13.0-17.0]) (P = .003) and clinical trials (median readability level, 15.2 [IQR, 12.8-17.0]) (P = .002). 
Websites of nonprofit organizations (median readability level, 12.9 [IQR, 11.2-15.0]) were easier to read than media (median readability level, 16.0 [IQR, 13.4-17.0]) (P < .001) and academic (median readability level, 14.8 [IQR, 12.9-17.0]) (P < .001) websites. Privately owned websites (median readability level, 14.0 [IQR, 12.1-16.1]) were easier to read than media websites (P = .001). Among treatment modalities, alternative therapy websites exhibited the lowest accuracy scores (median accuracy score, 2 [IQR, 1-4]) (P < .001). Nonprofit (median accuracy score, 4 [IQR, 4-5]), government (median accuracy score, 5 [IQR, 4-5]), and academic (median accuracy score, 4 [IQR, 3.5-5]) websites were more accurate than privately owned (median accuracy score, 3.5 [IQR, 1.5-4]) and media (median accuracy score, 4 [IQR, 2-4]) websites (P < .004). Websites with higher accuracy were more difficult to read than websites with lower accuracy. Online information on pancreatic cancer overestimates the reading ability of the overall population and lacks accurate information about alternative therapy. In the absence of quality control on the Internet, physicians should provide guidance to patients in the selection of online resources with readable and accurate information.
Semaan, Hassan; Bazerbashi, Mohamad F; Siesel, Geoffrey; Aldinger, Paul; Obri, Tawfik
2018-03-01
To determine the accuracy and non-detection rate of cancer-related findings (CRFs) on follow-up non-contrast-enhanced CT (NECT) versus contrast-enhanced CT (CECT) images of the abdomen in patients with a known cancer diagnosis. A retrospective review was conducted of 352 consecutive abdominal CTs performed with and without IV contrast between March 2010 and October 2014 for follow-up of cancer. Two radiologists independently assessed the NECT portions of the studies. The reader was provided the primary cancer diagnosis and access to the most recent prior NECT study. The accuracy and non-detection rates were determined by comparing our results to the archived reports as a gold standard. A total of 383 CRFs were found in the archived reports of the 352 abdominal CTs. The average non-detection rate for the NECTs compared to the CECTs was 3.0% (11.5/383), with an accuracy of 97.0% (371.5/383) in identifying CRFs. The most commonly missed findings were vascular thromboses, with a non-detection rate of 100%. The accuracy for non-vascular CRFs was 99.1%. Follow-up NECT abdomen studies are highly accurate in the detection of CRFs in patients with an established cancer diagnosis, except in cases where vascular involvement is suspected.
NASA Astrophysics Data System (ADS)
Wilson, J. Adam; Walton, Léo M.; Tyler, Mitch; Williams, Justin
2012-08-01
This article describes a new method of providing feedback during a brain-computer interface movement task using a non-invasive, high-resolution electrotactile vision substitution system. We compared the accuracy and movement times during a center-out cursor movement task, and found that the task performance with tactile feedback was comparable to visual feedback for 11 participants. These subjects were able to modulate the chosen BCI EEG features during both feedback modalities, indicating that the type of feedback chosen does not matter provided that the task information is clearly conveyed through the chosen medium. In addition, we tested a blind subject with the tactile feedback system, and found that the training time, accuracy, and movement times were indistinguishable from results obtained from subjects using visual feedback. We believe that BCI systems with alternative feedback pathways should be explored, allowing individuals with severe motor disabilities and accompanying reduced visual and sensory capabilities to effectively use a BCI.
Properties of young massive clusters obtained with different massive-star evolutionary models
NASA Astrophysics Data System (ADS)
Wofford, Aida; Charlot, Stéphane
We undertake a comprehensive comparative test of seven widely used spectral synthesis models using multi-band HST photometry of a sample of eight YMCs in two galaxies. We provide a first quantitative estimate of the accuracies and uncertainties of new models, show the good progress models have made in fitting high-quality observations, and highlight the need for further comprehensive comparative tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellens, N; Farahani, K
2015-06-15
Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications, including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high-field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small-bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes and replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates and comparing the results. Third, the practically realizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly spaced grids indicated that the precision was 0.11 ± 0.05 mm. When image guidance was included by targeting random locations, the accuracy was 0.5 ± 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 ± 0.6 mm. In all cases, the error appeared normally distributed (p < 0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error.
Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many preclinical applications including focused drug delivery and thermal therapy. Funding support provided by Philips Healthcare.
NASA Astrophysics Data System (ADS)
Matongera, Trylee Nyasha; Mutanga, Onisimo; Dube, Timothy; Sibanda, Mbulisi
2017-05-01
Bracken fern is an invasive plant that presents serious environmental, ecological and economic problems around the world. An understanding of the spatial distribution of bracken fern weeds is therefore essential for providing appropriate management strategies at both local and regional scales. The aim of this study was to assess the utility of the freely available medium-resolution Landsat 8 OLI sensor in the detection and mapping of bracken fern at Cathedral Peak, South Africa. To achieve this objective, the results obtained from Landsat 8 OLI were compared with those derived using the costly, high spatial resolution WorldView-2 imagery. Since previous studies have already successfully mapped bracken fern using high spatial resolution WorldView-2 imagery, the comparison was done to investigate the magnitude of difference in accuracy between the two sensors in relation to their acquisition costs. To evaluate the performance of Landsat 8 OLI in discriminating bracken fern compared to that of WorldView-2, we tested the utility of (i) spectral bands; (ii) derived vegetation indices; as well as (iii) the combination of spectral bands and vegetation indices, based on a discriminant analysis classification algorithm. After resampling the training and testing data and reclassifying several times (n = 100) based on the combined data sets, the overall accuracies for both Landsat 8 and WorldView-2 were tested for significant differences based on the Mann-Whitney U test. The results showed that the integration of the spectral bands and derived vegetation indices yielded the best overall classification accuracy (80.08% and 87.80% for Landsat 8 OLI and WorldView-2, respectively). Additionally, the use of derived vegetation indices as a standalone data set produced the weakest overall accuracy results of 62.14% and 82.11% for the Landsat 8 OLI and WorldView-2 images, respectively.
There were significant differences {U (100) = 569.5, z = -10.8242, p < 0.01} between the classification accuracies derived based on Landsat 8 OLI and those derived using the WorldView-2 sensor. Although there were significant differences between Landsat and WorldView-2 accuracies, the magnitude of variation (9%) between the two sensors was within an acceptable range. Therefore, the findings of this study demonstrate that the recently launched Landsat 8 OLI multispectral sensor provides valuable information that could aid in the long-term continuous monitoring and formulation of effective bracken fern management, with acceptable accuracies that are comparable to those obtained from the high-resolution WorldView-2 commercial sensor.
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry.
Bostani, Maryam; Mueller, Jonathon W; McMillan, Kyle; Cody, Dianna D; Cagnon, Chris H; DeMarco, John J; McNitt-Gray, Michael F
2015-02-01
The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. The calculated mean percent difference between TLD measurements and Monte Carlo simulations was -4.9% with standard deviation of 8.7% and a range of -22.7% to 5.7%. The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
Comparing diagnostic tests on benefit-risk.
Pennello, Gene; Pantoja-Galicia, Norberto; Evans, Scott
2016-01-01
Comparing diagnostic tests on accuracy alone can be inconclusive. For example, a test may have better sensitivity than another test yet worse specificity. Comparing tests on benefit-risk may be more conclusive because the clinical consequences of diagnostic error are considered. For benefit-risk evaluation, we propose diagnostic yield, the expected distribution of subjects with true positive, false positive, true negative, and false negative test results in a hypothetical population. We construct a table of diagnostic yield that includes the number of false positive subjects experiencing adverse consequences from unnecessary work-up. We then develop a decision theory for evaluating tests. The theory provides additional interpretation to quantities in the diagnostic yield table. It also indicates that the expected utility of a test relative to a perfect test is a weighted accuracy measure: the average of sensitivity and specificity, weighted for prevalence and the relative importance of false positive and false negative testing errors, which is also interpretable as the cost-benefit ratio of treating non-diseased and diseased subjects. We propose plots of diagnostic yield, weighted accuracy, and relative net benefit of tests as functions of prevalence or cost-benefit ratio. Concepts are illustrated with hypothetical screening tests for colorectal cancer, with test-positive subjects being referred to colonoscopy.
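The weighted accuracy and diagnostic yield quantities described above can be illustrated numerically. The parameterization below (weights proportional to prevalence times the relative cost of a false negative, and to one minus prevalence) is one plausible reading of the abstract, not necessarily the authors' exact definition:

```python
def weighted_accuracy(se, sp, prev, r):
    """Weighted average of sensitivity and specificity.

    se, sp : sensitivity and specificity of the test
    prev   : disease prevalence in the intended-use population
    r      : relative importance (cost ratio) of a false negative
             versus a false positive
    Note: this weighting is an illustrative assumption, not the paper's
    exact formula.
    """
    w_pos = r * prev      # weight on correctly classifying diseased subjects
    w_neg = 1.0 - prev    # weight on correctly classifying healthy subjects
    return (w_pos * se + w_neg * sp) / (w_pos + w_neg)

def diagnostic_yield(se, sp, prev, n=100_000):
    """Expected TP/FP/TN/FN counts in a hypothetical population of size n."""
    diseased = n * prev
    healthy = n - diseased
    return {"TP": se * diseased, "FN": (1 - se) * diseased,
            "TN": sp * healthy, "FP": (1 - sp) * healthy}
```

For a rare disease with costly false negatives (small `prev`, large `r`), a high-sensitivity test scores better on this measure than raw accuracy alone would suggest, which is the point of weighting for clinical consequences.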
Accurate, Rapid Taxonomic Classification of Fungal Large-Subunit rRNA Genes
Liu, Kuan-Liang; Porras-Alfaro, Andrea; Eichorst, Stephanie A.
2012-01-01
Taxonomic and phylogenetic fingerprinting based on sequence analysis of gene fragments from the large-subunit rRNA (LSU) gene or the internal transcribed spacer (ITS) region is becoming an integral part of fungal classification. The lack of an accurate and robust classification tool trained by a validated sequence database for taxonomic placement of fungal LSU genes is a severe limitation in taxonomic analysis of fungal isolates or large data sets obtained from environmental surveys. Using a hand-curated set of 8,506 fungal LSU gene fragments, we determined the performance characteristics of a naïve Bayesian classifier across multiple taxonomic levels and compared the classifier performance to that of a sequence similarity-based (BLASTN) approach. The naïve Bayesian classifier was computationally more rapid (>460-fold with our system) than the BLASTN approach, and it provided equal or superior classification accuracy. Classifier accuracies were compared using sequence fragments of 100 bp and 400 bp and two different PCR primer anchor points to mimic sequence read lengths commonly obtained using current high-throughput sequencing technologies. Accuracy was higher with 400-bp sequence reads than with 100-bp reads. It was also significantly affected by sequence location across the 1,400-bp test region. The highest accuracy was obtained across either the D1 or D2 variable region. The naïve Bayesian classifier provides an effective and rapid means to classify fungal LSU sequences from large environmental surveys. The training set and tool are publicly available through the Ribosomal Database Project (http://rdp.cme.msu.edu/classifier/classifier.jsp). PMID:22194300
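A word-based naive Bayesian sequence classifier of the kind used by the RDP tool can be sketched as follows. The k-mer size, smoothing constant, and bootstrap scheme loosely mirror the general RDP approach, but this toy code is an assumption-laden illustration, not the actual classifier:

```python
import math
import random
from collections import defaultdict

K = 8  # the RDP classifier uses 8-mer "words"

def kmers(seq, k=K):
    """Set of overlapping k-mer words in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

class NaiveBayesSeqClassifier:
    """Toy word-based naive Bayes in the spirit of the RDP classifier."""

    def __init__(self):
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.n_train = defaultdict(int)   # training sequences per taxon

    def train(self, labeled_seqs):
        for label, seq in labeled_seqs:
            self.n_train[label] += 1
            for w in kmers(seq):
                self.word_counts[label][w] += 1

    def _log_post(self, words, label):
        n = self.n_train[label]
        # P(word | label) with (count + 0.5) / (n + 1) smoothing
        return sum(math.log((self.word_counts[label][w] + 0.5) / (n + 1))
                   for w in words)

    def classify(self, seq, n_boot=100):
        """Return (best label, bootstrap confidence in that label)."""
        words = list(kmers(seq))
        score = lambda ws: max(self.n_train, key=lambda lb: self._log_post(ws, lb))
        best = score(words)
        # Bootstrap over random word subsets to estimate confidence.
        subset = max(1, len(words) // K)
        hits = sum(score(random.sample(words, subset)) == best
                   for _ in range(n_boot))
        return best, hits / n_boot
```

A real training set (e.g., the 8,506 curated LSU fragments) and hierarchical taxonomic ranks would replace the flat labels here; the bootstrap fraction plays the role of the classifier's confidence estimate.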
A novel technique for fetal heart rate estimation from Doppler ultrasound signal
2011-01-01
Background: The currently used fetal monitoring instrumentation, based on the Doppler ultrasound technique, provides the fetal heart rate (FHR) signal with limited accuracy. This is particularly noticeable as a significant decrease of a clinically important feature: the variability of the FHR signal. The aim of our work was to develop a novel, efficient technique for processing the ultrasound signal that could estimate the cardiac cycle duration with accuracy comparable to direct electrocardiography. Methods: We have proposed a new technique which provides the true beat-to-beat values of the FHR signal through multiple measurement of a given cardiac cycle in the ultrasound signal. The method consists of three steps: dynamic adjustment of the autocorrelation window, adaptive autocorrelation peak detection, and determination of beat-to-beat intervals. The estimated fetal heart rate values and the calculated indices describing FHR variability were compared to reference data obtained from the direct fetal electrocardiogram, as well as to another method for FHR estimation. Results: The results revealed that our method increases the accuracy in comparison to currently used fetal monitoring instrumentation, and thus enables calculation of reliable parameters describing FHR variability. Relating these results to the other method for FHR estimation, we showed that in our approach a much lower number of measured cardiac cycles was rejected as invalid. Conclusions: The proposed method for fetal heart rate determination on a beat-to-beat basis offers a high accuracy of heart interval measurement, enabling reliable quantitative assessment of FHR variability while reducing the number of invalid cardiac cycle measurements. PMID:21999764
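The autocorrelation step at the heart of such methods can be sketched in a simplified, fixed-window form. The sampling rate, plausible-BPM bounds, and synthetic sine-wave envelope below are illustrative assumptions; the paper's technique additionally adapts the window and peak detection per beat to obtain true beat-to-beat intervals:

```python
import math

def autocorr_beat_interval(envelope, fs, min_bpm=60.0, max_bpm=240.0):
    """Estimate one cardiac-cycle duration (seconds) from a Doppler envelope
    window by locating the dominant autocorrelation peak within the range of
    physiologically plausible lags."""
    n = len(envelope)
    mean = sum(envelope) / n
    x = [v - mean for v in envelope]
    lag_min = int(fs * 60.0 / max_bpm)            # shortest plausible interval
    lag_max = min(int(fs * 60.0 / min_bpm), n - 1)
    energy = sum(v * v for v in x)
    best_lag, best_r = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        # Normalized autocorrelation at this lag; peaks at the beat period.
        r = sum(x[i] * x[i + lag] for i in range(n - lag)) / energy
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag / fs

# Synthetic periodic envelope at 140 bpm, sampled at 200 Hz for 3 s.
fs = 200.0
period = 60.0 / 140.0
envelope = [math.sin(2.0 * math.pi * (i / fs) / period)
            for i in range(int(fs * 3.0))]
bpm = 60.0 / autocorr_beat_interval(envelope, fs)
```

The FHR in beats per minute is 60 divided by the estimated interval; with integer-lag resolution at 200 Hz, the recovered rate lands within a couple of bpm of the true 140 bpm.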
The Impact of Telemedicine on Pediatric Critical Care Triage.
Harvey, Jillian B; Yeager, Brooke E; Cramer, Christina; Wheeler, David; McSwain, S David
2017-11-01
To examine the relationship between pediatric critical care telemedicine consultation to rural emergency departments and triage decisions. We compare the triage location and provider rating of the accuracy of remote assessment for a cohort of patients who received critical care telemedicine consultations and a similar group of patients receiving telephone consultations. Retrospective evaluation of consultations occurring between April 2012 and March 2016. Pediatric critical care telemedicine and telephone consultations in 52 rural healthcare settings in South Carolina. Pediatric patients receiving critical care telemedicine or telephone consultations. Telemedicine consultations. Data were collected from the consulting provider for 484 total consultations by telephone or telemedicine. We examined the providers' self-reported assessments about the consultation, decision-making, and triage outcomes. We estimate a logit model to predict triage location as a function of telemedicine consultation, age, and sex. For telemedicine patients, the odds of triage to a non-ICU level of care are 2.55 times larger than the odds for patients receiving telephone consultations (p = 0.0005). Providers rated the accuracy of their assessments higher when consultations were provided via telemedicine. When patients were transferred to a non-ICU location following a telemedicine consultation, providers indicated that the use of telemedicine influenced the triage decision in 95.7% of cases (p < 0.001). For patients transferred to a non-ICU location, an increase in transfers to a higher level of care within 24 hours was not observed. Pediatric critical care telemedicine consultation to community hospitals is feasible and results in a reduction in PICU admissions. This study demonstrates an improvement in provider-reported accuracy of patient assessment via telemedicine compared with telephone, which may produce a higher comfort level with transporting patients to a lower level of care.
Pediatric critical care telemedicine consultations represent a promising means of improving care and reducing costs for critically ill children in rural areas.
Porras-Alfaro, Andrea; Liu, Kuan-Liang; Kuske, Cheryl R; Xie, Gary
2014-02-01
We compared the classification accuracy of two sections of the fungal internal transcribed spacer (ITS) region, individually and combined, and the 5' section (about 600 bp) of the large-subunit rRNA (LSU), using a naive Bayesian classifier and BLASTN. A hand-curated ITS-LSU training set of 1,091 sequences and a larger training set of 8,967 ITS region sequences were used. Of the factors evaluated, database composition and quality had the largest effect on classification accuracy, followed by fragment size and use of a bootstrap cutoff to improve classification confidence. The naive Bayesian classifier and BLASTN gave similar results at higher taxonomic levels, but the classifier was faster and more accurate at the genus level when a bootstrap cutoff was used. All of the ITS and LSU sections performed well (>97.7% accuracy) at higher taxonomic ranks from kingdom to family, and differences between them were small at the genus level (within 0.66 to 1.23%). When full-length sequence sections were used, the LSU outperformed the ITS1 and ITS2 fragments at the genus level, but the ITS1 and ITS2 showed higher accuracy when smaller fragment sizes of the same length and a 50% bootstrap cutoff were used. In a comparison using the larger ITS training set, ITS1 and ITS2 had very similar accuracy classification for fragments between 100 and 200 bp. Collectively, the results show that any of the ITS or LSU sections we tested provided comparable classification accuracy to the genus level and underscore the need for larger and more diverse classification training sets.
Fomekong, Edward; Pierrard, Julien; Raftopoulos, Christian
2018-03-01
The major limitation of computer-based three-dimensional fluoroscopy is increased radiation exposure of patients and operating room staff. Combining spine navigation with intraoperative three-dimensional fluoroscopy (io3DF) can likely overcome this shortcoming, while increasing the pedicle screw accuracy rate. We compared data from a cohort of patients undergoing lumbar percutaneous pedicle screw placement using io3DF alone or in combination with spine navigation. This study consisted of 168 patients who underwent percutaneous pedicle screw implantation between 2009 and 2016. The primary endpoint was to compare pedicle screw accuracy between the 2 groups. Secondary endpoints were to compare radiation exposure of patients and operating room staff, duration of surgery, and postoperative complications. In group 1, 438 screws were placed without navigation guidance; in group 2, 276 screws were placed with spine navigation. Mean patient age in both groups was 58.6 ± 14.1 years. The final pedicle accuracy rate was 97.9% in group 1 and 99.6% in group 2. The average radiation dose per patient was significantly larger in group 1 (571.9 mGy·m²) than in group 2 (365.6 mGy·m²) (P = 0.000088). Surgery duration and complication rate were not significantly different between the 2 groups (P > 0.05). io3DF with spine navigation minimized radiation exposure of patients and operating room staff and provided an excellent percutaneous pedicle screw accuracy rate with no permanent complications compared with io3DF alone. This setup is recommended, especially for patients with a complex degenerative spine condition. Copyright © 2017 Elsevier Inc. All rights reserved.
Benz, Dominik C; Fuchs, Tobias A; Gräni, Christoph; Studer Bruengger, Annina A; Clerc, Olivier F; Mikulicic, Fran; Messerli, Michael; Stehli, Julia; Possner, Mathias; Pazhenkottil, Aju P; Gaemperli, Oliver; Kaufmann, Philipp A; Buechel, Ronny R
2018-02-01
Iterative reconstruction (IR) algorithms allow for a significant reduction in the radiation dose of coronary computed tomography angiography (CCTA). We performed a head-to-head comparison of adaptive statistical IR (ASiR) and model-based IR (MBIR) algorithms to assess their impact on quantitative image parameters and diagnostic accuracy for submillisievert CCTA. CCTA datasets of 91 patients were reconstructed using filtered back projection (FBP), increasing contributions of ASiR (20, 40, 60, 80, and 100%), and MBIR. Signal and noise were measured in the aortic root to calculate the signal-to-noise ratio (SNR). In a subgroup of 36 patients, the diagnostic accuracy of ASiR 40%, ASiR 100%, and MBIR for diagnosis of coronary artery disease (CAD) was compared with invasive coronary angiography. Median radiation dose was 0.21 mSv for CCTA. While increasing levels of ASiR gradually reduced image noise compared with FBP (up to -48%, P < 0.001), MBIR provided the largest noise reduction (-79% compared with FBP), outperforming ASiR (-59% compared with ASiR 100%; P < 0.001). Increased noise and lower SNR with ASiR 40% and ASiR 100% resulted in substantially lower diagnostic accuracy to detect CAD as diagnosed by invasive coronary angiography compared with MBIR: sensitivity and specificity were 100 and 37%, 100 and 57%, and 100 and 74% for ASiR 40%, ASiR 100%, and MBIR, respectively. MBIR offers substantial noise reduction with increased SNR, paving the way for implementation of submillisievert CCTA protocols in clinical routine. In contrast, inferior noise reduction by ASiR negatively affects the diagnostic accuracy of submillisievert CCTA for CAD detection. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2017. For permissions, please email: journals.permissions@oup.com.
Localization of a Robotic Crawler for CANDU Fuel Channel Inspection
NASA Astrophysics Data System (ADS)
Manning, Mark
This thesis discusses the design and development of a pipe crawling robot for the purpose of CANDU fuel channel inspection. The pipe crawling robot shall be capable of deploying the existing CIGAR (Channel Inspection and Gauging Apparatus for Reactors) sensor head. The main focus of this thesis is the design of the localization system for this robot and the many tests that were completed to demonstrate its accuracy. The proposed localization system consists of three redundant resolver wheels mounted to the robot's frame and two resolvers that are mounted inside a custom made cable drum. This cable drum shall be referred to in this thesis as the emergency retrieval device. This device serves the dual-purpose of providing absolute position measurements (via the cable that is tethered to the robot) as well as retrieving the robot if it is inoperable. The estimated accuracy of the proposed design is demonstrated with the use of a proof-of-concept prototype and a custom made test bench that uses a vision system to provide a more accurate estimate of the robot's position. The only major difference between the proof-of-concept prototype and the proposed solution is that the more expensive radiation hardened components were not used in the proof-of-concept prototype design. For example, the proposed solution shall use radiation hardened resolver wheels, whereas the proof-of-concept prototype used encoder wheels. These encoder wheels provide the same specified accuracy as the radiation hardened resolvers for the most realistic results possible. The rationale behind the design of the proof-of-concept prototype, the proposed final design, the design of the localization system test bench, and the test plan for developing all of the components of the design related to the robot's localization system are discussed in the thesis. The test plan provides a step by step guide to the configuration and optimization of an Unscented Kalman Filter (UKF). 
The UKF was selected as the ideal sensor fusion algorithm for use in this application. Benchmarking was completed to compare the accuracy achieved by the UKF algorithm to other data fusion algorithms. When compared to other algorithms, the UKF demonstrated the best accuracy when considering all likely sources of error, such as sensor failure and surface unevenness. The test results show that the localization system is able to achieve a worst-case positional accuracy of +/- 3.6 mm for the robot crawler over the full 6350 mm distance that the robot travels inside the pressure tube. This is extrapolated from test results completed over the shorter-length test bench with simulated surface unevenness. The key benefits of the pipe-crawling robot compared to the current system are reduced radiation dose to workers and reduced outage time. These advantages stem from the fact that the robot can be automated and multiple inspection robots can be deployed simultaneously, whereas the current inspection system can only complete one inspection at a time.
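The fusion idea described above (relative wheel-odometry increments corrected by an absolute cable-drum reading) can be sketched with a simplified one-dimensional linear Kalman filter. This is only an illustrative stand-in for the thesis's UKF; the noise variances `q` and `r` and all numbers are invented for the example.

```python
# Simplified 1-D Kalman-filter sketch of the sensor-fusion idea: relative
# wheel-odometry increments drive the prediction step, and absolute position
# readings from the retrieval-device cable drum drive the correction step.
# A linear stand-in for the thesis's UKF; noise values are illustrative.

def fuse_position(odometry_steps, drum_readings, q=0.25, r=4.0):
    """Return fused position estimates (mm) for each time step.

    odometry_steps -- per-step displacement from the resolver wheels (mm)
    drum_readings  -- absolute position from the cable drum (mm)
    q, r           -- assumed process / measurement noise variances
    """
    x, p = 0.0, 1.0          # state estimate and its variance
    fused = []
    for dx, z in zip(odometry_steps, drum_readings):
        # Predict: advance the state by the wheel-odometry increment.
        x, p = x + dx, p + q
        # Update: correct with the absolute drum measurement.
        k = p / (p + r)      # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p
        fused.append(x)
    return fused
```

With consistent inputs the estimate follows the measurements; with a biased odometer the absolute drum readings pull the estimate back toward the true position.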
Ware, Jasmine; Rode, Karyn D.; Pagano, Anthony M.; Bromaghin, Jeffrey F.; Robbins, Charles T.; Erlenbach, Joy; Jensen, Shannon; Cutting, Amy; Nicassio-Hiskey, Nicole; Hash, Amy; Owen, Megan A.; Jansen, Heiko
2015-01-01
Activity sensors are often included in wildlife transmitters and can provide information on the behavior and activity patterns of animals remotely. However, interpreting activity-sensor data relative to animal behavior can be difficult if animals cannot be continuously observed. In this study, we examined the performance of a mercury tip-switch and a tri-axial accelerometer housed in collars to determine whether sensor data can be accurately classified as resting and active behaviors and whether data are comparable for the 2 sensor types. Five captive bears (3 polar [Ursus maritimus] and 2 brown [U. arctos horribilis]) were fitted with a collar specially designed to internally house the sensors. The bears’ behaviors were recorded, classified, and then compared with sensor readings. A separate tri-axial accelerometer that sampled continuously at a higher frequency and provided raw acceleration values from 3 axes was also mounted on the collar to compare with the lower resolution sensors. Both accelerometers more accurately identified resting and active behaviors at time intervals ranging from 1 minute to 1 hour (≥91.1% accuracy) compared with the mercury tip-switch (range = 75.5–86.3%). However, mercury tip-switch accuracy improved when sampled at longer intervals (e.g., 30–60 min). Data from the lower resolution accelerometer, but not the mercury tip-switch, accurately predicted the percentage of time spent resting during an hour. Although the number of bears available for this study was small, our results suggest that these activity sensors can remotely identify resting versus active behaviors across most time intervals. We recommend that investigators consider both study objectives and the variation in accuracy of classifying resting and active behaviors reported here when determining sampling interval.
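One simple way to turn raw tri-axial accelerometer samples into the resting/active labels discussed above is to threshold the variability of the acceleration magnitude over an interval. This sketch is not the study's classifier; the 0.05 g threshold is an assumed illustrative value.

```python
import math

# Illustrative sketch (not the study's actual method): label an interval of
# tri-axial accelerometer samples as "resting" or "active" by thresholding
# the standard deviation of the acceleration magnitude. The threshold is an
# assumption chosen for the example, not a value reported by the authors.

def classify_interval(samples, threshold=0.05):
    """samples: list of (ax, ay, az) tuples in g; returns a label."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    mean = sum(mags) / len(mags)
    sd = math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
    return "active" if sd > threshold else "resting"
```

Longer intervals average over more samples, which is consistent with the finding that coarser sensors gain accuracy at longer sampling intervals.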
NOBLE - Flexible concept recognition for large-scale biomedical natural language processing.
Tseytlin, Eugene; Mitchell, Kevin; Legowski, Elizabeth; Corrigan, Julia; Chavan, Girish; Jacobson, Rebecca S
2016-01-14
Natural language processing (NLP) applications are increasingly important in biomedical data analysis, knowledge engineering, and decision support. Concept recognition is an important component task for NLP pipelines, and can be either general-purpose or domain-specific. We describe a novel, flexible, and general-purpose concept recognition component for NLP pipelines, and compare its speed and accuracy against five commonly used alternatives on both a biological and clinical corpus. NOBLE Coder implements a general algorithm for matching terms to concepts from an arbitrary vocabulary set. The system's matching options can be configured individually or in combination to yield specific system behavior for a variety of NLP tasks. The software is open source, freely available, and easily integrated into UIMA or GATE. We benchmarked speed and accuracy of the system against the CRAFT and ShARe corpora as reference standards and compared it to MMTx, MGrep, Concept Mapper, cTAKES Dictionary Lookup Annotator, and cTAKES Fast Dictionary Lookup Annotator. We describe key advantages of the NOBLE Coder system and associated tools, including its greedy algorithm, configurable matching strategies, and multiple terminology input formats. These features provide unique functionality when compared with existing alternatives, including state-of-the-art systems. On two benchmarking tasks, NOBLE's performance exceeded commonly used alternatives, performing almost as well as the most advanced systems. Error analysis revealed differences in error profiles among systems. NOBLE Coder is comparable to other widely used concept recognition systems in terms of accuracy and speed. Advantages of NOBLE Coder include its interactive terminology builder tool, ease of configuration, and adaptability to various domains and tasks. NOBLE provides a term-to-concept matching system suitable for general concept recognition in biomedical NLP pipelines.
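The core of a greedy dictionary-based concept recognizer like the one described can be sketched as a longest-match scan over tokens. The two-entry vocabulary and concept IDs below are illustrative; NOBLE Coder's configurable matching strategies (word order, overlap, abbreviations) go well beyond this.

```python
# Minimal sketch of greedy longest-match term-to-concept lookup, the general
# idea behind dictionary-based concept recognition. The tiny vocabulary and
# concept IDs are invented for illustration.

def greedy_match(tokens, vocabulary, max_len=5):
    """Scan tokens left to right, always taking the longest phrase that
    maps to a concept in `vocabulary` (a dict of phrase -> concept id)."""
    matches, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n]).lower()
            if phrase in vocabulary:
                matches.append((phrase, vocabulary[phrase]))
                i += n          # greedy: consume the matched span
                break
        else:
            i += 1              # no match starting here; advance one token
    return matches

vocab = {"lung cancer": "C0242379", "cancer": "C0006826"}
print(greedy_match("patient with lung cancer".split(), vocab))
# → [('lung cancer', 'C0242379')]
```

The greedy policy prefers the most specific concept ("lung cancer") over its substring match ("cancer"), which is the behavior a clinical pipeline usually wants.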
Vathsangam, Harshvardhan; Emken, Adar; Schroeder, E. Todd; Spruijt-Metz, Donna; Sukhatme, Gaurav S.
2011-01-01
This paper describes an experimental study in estimating energy expenditure from treadmill walking using a single hip-mounted triaxial inertial sensor comprising a triaxial accelerometer and a triaxial gyroscope. Typical physical activity characterization using accelerometer-generated counts suffers from two drawbacks: imprecision (due to proprietary counts) and incompleteness (due to incomplete movement description). We address these problems in the context of steady-state walking by directly estimating energy expenditure with data from a hip-mounted inertial sensor. We represent the cyclic nature of walking with a Fourier transform of the sensor streams and show how one can map this representation to energy expenditure (as measured by VO2 consumption, mL/min) using three regression techniques: Least Squares Regression (LSR), Bayesian Linear Regression (BLR), and Gaussian Process Regression (GPR). We perform a comparative analysis of the accuracy of sensor streams in predicting energy expenditure (measured by RMS prediction accuracy). Triaxial information is more accurate than uniaxial information. LSR-based approaches are prone to outlier sensitivity and overfitting. Gyroscopic information showed equivalent if not better prediction accuracy compared to accelerometers. Combining accelerometer and gyroscopic information provided better accuracy than using either sensor alone. We also analyze the best algorithmic approach among linear and nonlinear methods as measured by RMS prediction accuracy and run time. Nonlinear regression methods showed better prediction accuracy but required an order of magnitude more run time. This paper emphasizes the role of probabilistic techniques in conjunction with joint modeling of triaxial accelerations and rotational rates to improve energy expenditure prediction for steady-state treadmill walking. PMID:21690001
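The pipeline the abstract describes, Fourier features of a cyclic sensor stream mapped to energy expenditure by least-squares regression, can be sketched end to end on synthetic data. The signal amplitudes and VO2 values below are invented; the paper uses full tri-axial streams and also the BLR and GPR regressors.

```python
import math

# Illustrative sketch of the abstract's pipeline: summarize a cyclic sensor
# stream with a Fourier magnitude, then map that feature to energy
# expenditure with simple least-squares regression. All numbers synthetic.

def dft_magnitude(signal, k):
    """Magnitude of the k-th DFT coefficient of a real-valued signal."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
    im = -sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
    return math.hypot(re, im) / n

def fit_line(xs, ys):
    """Closed-form simple least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic walking "bouts": the dominant harmonic's amplitude grows with
# walking intensity, and the assumed VO2 values grow with it.
bouts = [[a * math.sin(2 * math.pi * t / 32) for t in range(32)]
         for a in (1, 2, 3)]
features = [dft_magnitude(b, 1) for b in bouts]
vo2 = [500.0, 900.0, 1300.0]            # mL/min, synthetic
slope, intercept = fit_line(features, vo2)
```

A sine of amplitude `a` yields a first-harmonic magnitude of `a/2` under this normalization, so the regression recovers an exact linear map on this synthetic data.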
Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation
NASA Astrophysics Data System (ADS)
Zuo, C.; Xiao, X.; Hou, Q.; Li, B.
2018-05-01
WorldView-3, a high-resolution commercial Earth-observation satellite launched by DigitalGlobe, provides panchromatic imagery at 0.31 m resolution. Its positioning accuracy is better than 3.5 m CE90 without ground control, which is suitable for large-scale topographic mapping. This paper presents a block adjustment for WorldView-3 based on the RPC model that achieves the accuracy required for 1 : 2000 scale topographic mapping with few control points. On the basis of the stereo orientation result, two image matching algorithms were applied for DSM extraction: LQM and SGM. Finally, the accuracy of the point clouds generated by the two image matching methods was compared against reference data acquired by an airborne laser scanner. The results showed that the RPC adjustment model for WorldView-3 imagery with a small number of GCPs can satisfy the requirements of the Chinese surveying and mapping regulations for 1 : 2000 scale topographic maps, and that the point cloud obtained through WorldView-3 stereo image matching has high elevation accuracy: the RMS elevation error for bare ground is 0.45 m, while for buildings the accuracy approaches 1 m.
DiBiase, Lauren; Fangman, Mary T.; Fleischauer, Aaron T.; Waller, Anna E.; MacDonald, Pia D. M.
2013-01-01
Objectives. We assessed the timeliness, accuracy, and cost of a new electronic disease surveillance system at the local health department level. We describe practices associated with lower cost and better surveillance timeliness and accuracy. Methods. Interviews conducted May through August 2010 with local health department (LHD) staff at a simple random sample of 30 of 100 North Carolina counties provided information on surveillance practices and costs; we used surveillance system data to calculate timeliness and accuracy. We identified LHDs with best timeliness and accuracy and used these categories to compare surveillance practices and costs. Results. Local health departments in the top tertiles for surveillance timeliness and accuracy had a lower cost per case reported than LHDs with lower timeliness and accuracy ($71 and $124 per case reported, respectively; P = .03). Best surveillance practices fell into 2 domains: efficient use of the electronic surveillance system and use of surveillance data for local evaluation and program management. Conclusions. Timely and accurate surveillance can be achieved in the setting of restricted funding experienced by many LHDs. Adopting best surveillance practices may improve both efficiency and public health outcomes. PMID:24134385
Senore, Carlo; Mandel, Jack S.; Allison, James E.; Atkin, Wendy S.; Benamouzig, Robert; Bossuyt, Patrick M. M.; Silva, Mahinda De; Guittet, Lydia; Halloran, Stephen P.; Haug, Ulrike; Hoff, Geir; Itzkowitz, Steven H.; Leja, Marcis; Levin, Bernard; Meijer, Gerrit A.; O'Morain, Colm A.; Parry, Susan; Rabeneck, Linda; Rozen, Paul; Saito, Hiroshi; Schoen, Robert E.; Seaman, Helen E.; Steele, Robert J. C.; Sung, Joseph J. Y.; Winawer, Sidney J.
2016-01-01
BACKGROUND New screening tests for colorectal cancer continue to emerge, but the evidence needed to justify their adoption in screening programs remains uncertain. METHODS A review of the literature and a consensus approach by experts was undertaken to provide practical guidance on how to compare new screening tests with proven screening tests. RESULTS Findings and recommendations from the review included the following: Adoption of a new screening test requires evidence of effectiveness relative to a proven comparator test. Clinical accuracy supported by programmatic population evaluation in the screening context on an intention‐to‐screen basis, including acceptability, is essential. Cancer‐specific mortality is not essential as an endpoint provided that the mortality benefit of the comparator has been demonstrated and that the biologic basis of detection is similar. Effectiveness of the guaiac‐based fecal occult blood test provides the minimum standard to be achieved by a new test. A 4‐phase evaluation is recommended. An initial retrospective evaluation in cancer cases and controls (Phase 1) is followed by a prospective evaluation of performance across the continuum of neoplastic lesions (Phase 2). Phase 3 follows the demonstration of adequate accuracy in these 2 prescreening phases and addresses programmatic outcomes at 1 screening round on an intention‐to‐screen basis. Phase 4 involves more comprehensive evaluation of ongoing screening over multiple rounds. Key information is provided from the following parameters: the test positivity rate in a screening population, the true‐positive and false‐positive rates, and the number needed to colonoscope to detect a target lesion. CONCLUSIONS New screening tests can be evaluated efficiently by this stepwise comparative approach. Cancer 2016;122:826–39. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society. PMID:26828588
Young, Graeme P; Senore, Carlo; Mandel, Jack S; Allison, James E; Atkin, Wendy S; Benamouzig, Robert; Bossuyt, Patrick M M; Silva, Mahinda De; Guittet, Lydia; Halloran, Stephen P; Haug, Ulrike; Hoff, Geir; Itzkowitz, Steven H; Leja, Marcis; Levin, Bernard; Meijer, Gerrit A; O'Morain, Colm A; Parry, Susan; Rabeneck, Linda; Rozen, Paul; Saito, Hiroshi; Schoen, Robert E; Seaman, Helen E; Steele, Robert J C; Sung, Joseph J Y; Winawer, Sidney J
2016-03-15
New screening tests for colorectal cancer continue to emerge, but the evidence needed to justify their adoption in screening programs remains uncertain. A review of the literature and a consensus approach by experts was undertaken to provide practical guidance on how to compare new screening tests with proven screening tests. Findings and recommendations from the review included the following: Adoption of a new screening test requires evidence of effectiveness relative to a proven comparator test. Clinical accuracy supported by programmatic population evaluation in the screening context on an intention-to-screen basis, including acceptability, is essential. Cancer-specific mortality is not essential as an endpoint provided that the mortality benefit of the comparator has been demonstrated and that the biologic basis of detection is similar. Effectiveness of the guaiac-based fecal occult blood test provides the minimum standard to be achieved by a new test. A 4-phase evaluation is recommended. An initial retrospective evaluation in cancer cases and controls (Phase 1) is followed by a prospective evaluation of performance across the continuum of neoplastic lesions (Phase 2). Phase 3 follows the demonstration of adequate accuracy in these 2 prescreening phases and addresses programmatic outcomes at 1 screening round on an intention-to-screen basis. Phase 4 involves more comprehensive evaluation of ongoing screening over multiple rounds. Key information is provided from the following parameters: the test positivity rate in a screening population, the true-positive and false-positive rates, and the number needed to colonoscope to detect a target lesion. New screening tests can be evaluated efficiently by this stepwise comparative approach. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
Distinguishing Fast and Slow Processes in Accuracy - Response Time Data.
Coomans, Frederik; Hofman, Abe; Brinkhuis, Matthieu; van der Maas, Han L J; Maris, Gunter
2016-01-01
We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two 'one-process' models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a 'two-process' model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses.
Information filtering via biased heat conduction.
Liu, Jian-Guo; Zhou, Tao; Guo, Qiang
2011-09-01
The process of heat conduction has recently found application in personalized recommendation [Zhou et al., Proc. Natl. Acad. Sci. USA 107, 4511 (2010)], which achieves high diversity but low accuracy. By decreasing the temperatures of small-degree objects, we present an improved algorithm, called biased heat conduction, which can simultaneously enhance accuracy and diversity. Extensive experimental analyses demonstrate that accuracy on the MovieLens, Netflix, and Delicious datasets is improved by 43.5%, 55.4%, and 19.2%, respectively, compared with the standard heat conduction algorithm, while diversity is increased or approximately unchanged. Further statistical analyses suggest that the present algorithm can simultaneously identify users' mainstream and special tastes, resulting in better performance than the standard heat conduction algorithm. This work provides a credible route to highly efficient information filtering.
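The heat-conduction scoring described above can be sketched on a toy user-object bipartite graph. The `epsilon` exponent here is one illustrative way to bias object temperatures by degree (`epsilon = 1` recovers standard heat conduction; `epsilon < 1` lowers the relative temperature of small-degree objects); the paper's exact biasing scheme may differ.

```python
# Toy sketch of heat conduction on a user-object bipartite graph.
# The degree-bias exponent `epsilon` is an illustrative knob, not the
# paper's exact formulation.

def heat_conduction_scores(ratings, target_user, epsilon=1.0):
    """ratings: dict of user -> set of collected objects.
    Returns object -> temperature for objects unseen by target_user."""
    objects = set().union(*ratings.values())
    k_obj = {o: sum(o in objs for objs in ratings.values()) for o in objects}
    # Step 1: objects collected by the target user start at temperature 1.
    t_obj = {o: 1.0 if o in ratings[target_user] else 0.0 for o in objects}
    # Step 2: each user's temperature is the mean over their objects.
    t_user = {u: sum(t_obj[o] for o in objs) / len(objs)
              for u, objs in ratings.items()}
    # Step 3: object temperature is a degree-biased average of its users'.
    scores = {}
    for o in objects:
        users = [u for u, objs in ratings.items() if o in objs]
        scores[o] = sum(t_user[u] for u in users) / (k_obj[o] ** epsilon)
    return {o: s for o, s in scores.items() if o not in ratings[target_user]}
```

Averaging over a user's collected objects is what gives heat conduction its diversity: low-degree niche objects are not automatically penalized, which is exactly the behavior the bias exponent then tempers.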
Analysis of model output and science data in the Virtual Model Repository (VMR).
NASA Astrophysics Data System (ADS)
De Zeeuw, D.; Ridley, A. J.
2014-12-01
Big scientific data includes not only large repositories of data from scientific platforms such as satellites and ground observatories, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant data through associated metadata, and larger collections of runs can now also be studied, with statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to examine overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. The methodology for this analysis as well as case studies will be presented.
Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2013-03-01
Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures, by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical flow based technique for colonoscopy tracking, in relation to current state of the art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts with computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method,1 due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT6 and Harris-affine features7 were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear,2 image deformation10 and image derivative4 based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters(for each frame) were known; dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6mm vs. 8mm after the VC camera traveled 110mm. Our approach was computationally more efficient, averaging 7.2 sec. vs. 38 sec. per frame. SIFT and Harris affine features resulted in tracking errors of up to 70mm, while our sparse optical flow error was 6mm. 
The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the optimal balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
NASA Astrophysics Data System (ADS)
Tian, Xin; Li, Hua; Jiang, Xiaoyu; Xie, Jingping; Gore, John C.; Xu, Junzhong
2017-02-01
Two diffusion-based approaches, the CG (constant gradient) and FEXI (filtered exchange imaging) methods, have previously been proposed for measuring the transcytolemmal water exchange rate constant kin, but their accuracy and feasibility have not been comprehensively evaluated and compared. In this work, both computer simulations and cell experiments in vitro were performed to evaluate the two methods. Simulations were done with different cell diameters (5, 10, 20 μm), a broad range of kin values (0.02-30 s-1), and different SNRs, and simulated kin values were directly compared with the ground truth. Human leukemia K562 cells were cultured and treated with saponin to selectively change cell transmembrane permeability. The agreement between kin values measured by the two methods was also evaluated. The results suggest that, without noise, the CG method provides reasonably accurate estimation of kin, especially when it is smaller than 10 s-1, which is in the typical physiological range of many biological tissues. Although the FEXI method overestimates kin even with corrections for the effects of extracellular water fraction, it provides reasonable estimates at practical SNRs and, more importantly, the fitted apparent exchange rate AXR shows approximately linear dependence on the ground truth kin. In conclusion, either the CG or the FEXI method provides a sensitive means to characterize variations in the transcytolemmal water exchange rate constant kin, although accuracy and specificity are usually compromised. The non-imaging CG method provides more accurate estimation of kin but is limited to a large volume of interest. Although the accuracy of FEXI is compromised by the extracellular volume fraction, it is capable of spatially mapping kin in practice.
Critically re-evaluating a common technique
Geisbush, Thomas; Jones, Lyell; Weiss, Michael; Mozaffar, Tahseen; Gronseth, Gary; Rutkove, Seward B.
2016-01-01
Objectives: (1) To assess the diagnostic accuracy of EMG in radiculopathy. (2) To evaluate the intrarater reliability and interrater reliability of EMG in radiculopathy. (3) To assess the presence of confirmation bias in EMG. Methods: Three experienced academic electromyographers interpreted 3 compact discs with 20 EMG videos (10 normal, 10 radiculopathy) in a blinded, standardized fashion without information regarding the nature of the study. The EMGs were interpreted 3 times (discs A, B, C) 1 month apart. Clinical information was provided only with disc C. Intrarater reliability was calculated by comparing interpretations in discs A and B, interrater reliability by comparing interpretation between reviewers. Confirmation bias was estimated by the difference in correct interpretations when clinical information was provided. Results: Sensitivity was similar to previous reports (77%, confidence interval [CI] 63%–90%); specificity was 71%, CI 56%–85%. Intrarater reliability was good (κ 0.61, 95% CI 0.41–0.81); interrater reliability was lower (κ 0.53, CI 0.35–0.71). There was no substantial confirmation bias when clinical information was provided (absolute difference in correct responses 2.2%, CI −13.3% to 17.7%); the study lacked precision to exclude moderate confirmation bias. Conclusions: This study supports that (1) serial EMG studies should be performed by the same electromyographer since intrarater reliability is better than interrater reliability; (2) knowledge of clinical information does not bias EMG interpretation substantially; (3) EMG has moderate diagnostic accuracy for radiculopathy with modest specificity and electromyographers should exercise caution interpreting mild abnormalities. Classification of evidence: This study provides Class III evidence that EMG has moderate diagnostic accuracy and specificity for radiculopathy. PMID:26701380
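The intrarater and interrater reliabilities reported above are Cohen's kappa statistics. A minimal sketch of the computation, with invented ratings in place of the study's 20 EMG interpretations, is:

```python
# Cohen's kappa for two rating sequences (e.g., a reader's disc A vs disc B
# interpretations). The rating sequences in the test are invented.

def cohens_kappa(r1, r2):
    """Cohen's kappa for two equal-length categorical rating sequences."""
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance
    return (po - pe) / (1 - pe)
```

Kappa discounts chance agreement, which matters here: with 10 normal and 10 radiculopathy studies per disc, raw percent agreement overstates reliability.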
Accuracy Estimation and Parameter Advising for Protein Multiple Sequence Alignment
DeBlasio, Dan
2013-01-01
We develop a novel and general approach to estimating the accuracy of multiple sequence alignments without knowledge of a reference alignment, and use our approach to address a new task that we call parameter advising: the problem of choosing values for alignment scoring function parameters from a given set of choices to maximize the accuracy of a computed alignment. For protein alignments, we consider twelve independent features that contribute to a quality alignment. An accuracy estimator is learned that is a polynomial function of these features; its coefficients are determined by minimizing its error with respect to true accuracy using mathematical optimization. Compared to prior approaches for estimating accuracy, our new approach (a) introduces novel feature functions that measure nonlocal properties of an alignment yet are fast to evaluate, (b) considers more general classes of estimators beyond linear combinations of features, and (c) develops new regression formulations for learning an estimator from examples; in addition, for parameter advising, we (d) determine the optimal parameter set of a given cardinality, which specifies the best parameter values from which to choose. Our estimator, which we call Facet (for “feature-based accuracy estimator”), yields a parameter advisor that on the hardest benchmarks provides more than a 27% improvement in accuracy over the best default parameter choice, and for parameter advising significantly outperforms the best prior approaches to assessing alignment quality. PMID:23489379
Uncertainty of OpenStreetMap data for the road network in Cyprus
NASA Astrophysics Data System (ADS)
Demetriou, Demetris
2016-08-01
Volunteered geographic information (VGI) refers to geographic data compiled and created by individuals and rendered on the Internet through specific web-based tools for diverse areas of interest. One of the best-known VGI projects is OpenStreetMap (OSM), which provides worldwide free geospatial data representing a variety of features. A critical issue for all VGI initiatives is the quality of the information offered. Thus, this report looks into the uncertainty of the OSM dataset for the main road network in Cyprus. The evaluation is based on three basic quality standards, namely positional accuracy, completeness, and attribute accuracy. The work has been carried out by employing the Model Builder of ArcGIS, which facilitated the comparison between the OSM data and the authoritative data provided by the Public Works Department (PWD). Findings showed that positional accuracy increases with the hierarchical level of a road and varies per administrative district, and that around 70% of the roads have a positional accuracy within 6 m compared to the reference dataset. Completeness in terms of road length difference is around 25% for three of the four road categories examined, and road name completeness is 100% for higher-level roads and around 40% for lower-level roads. Attribute accuracy, focusing on road name, is very high for all levels of roads. These outputs indicate that OSM data are adequate provided they fit the intended purpose of use. Furthermore, the study revealed some weaknesses in the methods used for calculating positional accuracy, suggesting the need for methodological improvements.
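Two of the quality measures used above, the share of roads within a positional tolerance and completeness as a relative length difference, reduce to short computations once per-road offsets and network lengths are in hand. The numbers in the test are made up for illustration.

```python
# Tiny sketch of two road-network quality measures. Inputs (per-segment
# offsets from the reference dataset, total network lengths) are assumed to
# come from a GIS comparison such as the ArcGIS workflow described above.

def share_within(offsets_m, tol_m=6.0):
    """Fraction of road segments whose offset is within tol_m metres."""
    return sum(d <= tol_m for d in offsets_m) / len(offsets_m)

def length_completeness(osm_len_m, ref_len_m):
    """Relative length difference of the OSM network vs the reference."""
    return abs(osm_len_m - ref_len_m) / ref_len_m
```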
An ROC-type measure of diagnostic accuracy when the gold standard is continuous-scale.
Obuchowski, Nancy A
2006-02-15
ROC curves and summary measures of accuracy derived from them, such as the area under the ROC curve, have become the standard for describing and comparing the accuracy of diagnostic tests. Methods for estimating ROC curves rely on the existence of a gold standard which dichotomizes patients into disease present or absent. There are, however, many examples of diagnostic tests whose gold standards are not binary-scale, but rather continuous-scale. Unnatural dichotomization of these gold standards leads to bias and inconsistency in estimates of diagnostic accuracy. In this paper, we propose a non-parametric estimator of diagnostic test accuracy which does not require dichotomization of the gold standard. This estimator has an interpretation analogous to the area under the ROC curve. We propose a confidence interval for test accuracy and a statistical test for comparing accuracies of tests from paired designs. We compare the performance (i.e. CI coverage, type I error rate, power) of the proposed methods with several alternatives. An example is presented where the accuracies of two quick blood tests for measuring serum iron concentrations are estimated and compared.
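The proposed measure is analogous to the area under the ROC curve. A concordance-style sketch of that idea (not the paper's exact estimator or its confidence-interval machinery) credits each patient pair that the test orders the same way as the continuous gold standard, with half credit for ties on the test:

```python
from itertools import combinations

# Concordance-type accuracy for a continuous-scale gold standard: the
# estimated probability that the test correctly orders a random pair of
# patients. An AUC-like sketch of the paper's idea, not its full method.

def concordance_accuracy(test, gold):
    """Fraction of patient pairs the test orders concordantly with gold."""
    total, score = 0, 0.0
    for (t1, g1), (t2, g2) in combinations(zip(test, gold), 2):
        if g1 == g2:
            continue                      # uninformative pair: gold tied
        total += 1
        if (t1 - t2) * (g1 - g2) > 0:
            score += 1.0                  # concordant ordering
        elif t1 == t2:
            score += 0.5                  # tied test result: half credit
    return score / total
```

Because only orderings matter, no dichotomization of the gold standard is needed, which is exactly the bias the paper's approach avoids.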
A New Perspective on Visual Word Processing Efficiency
Houpt, Joseph W.; Townsend, James T.; Donkin, Christopher
2013-01-01
As a fundamental part of our daily lives, visual word processing has received much attention in the psychological literature. Despite the well established advantage of perceiving letters in a word or in a pseudoword over letters alone or in random sequences using accuracy, a comparable effect using response times has been elusive. Some researchers continue to question whether the advantage due to word context is perceptual. We use the capacity coefficient, a well established, response time based measure of efficiency to provide evidence of word processing as a particularly efficient perceptual process to complement those results from the accuracy domain. PMID:24334151
Mapping river bathymetry with a small footprint green LiDAR: Applications and challenges
Kinzel, Paul J.; Legleiter, Carl; Nelson, Jonathan M.
2013-01-01
that environmental conditions and postprocessing algorithms can influence the accuracy and utility of these surveys and must be given consideration. These factors can lead to mapping errors that can have a direct bearing on derivative analyses such as hydraulic modeling and habitat assessment. We discuss the water and substrate characteristics of the sites, compare the conventional and remotely sensed river-bed topographies, and investigate the laser waveforms reflected from submerged targets to provide an evaluation as to the suitability and accuracy of the EAARL system and associated processing algorithms for riverine mapping applications.
Visual assessment of CPR quality during pediatric cardiac arrest: does point of view matter?
Jones, Angela; Lin, Yiqun; Nettel-Aguirre, Alberto; Gilfoyle, Elaine; Cheng, Adam
2015-05-01
In many clinical settings, providers rely on visual assessment when delivering feedback on CPR quality. Little is known about the accuracy of visual assessment of CPR quality. We aimed to determine how accurate pediatric providers are in their visual assessment of CPR quality and to identify the optimal position relative to the patient for accurate CPR assessment. We videotaped high-quality CPR (based on 2010 American Heart Association guidelines) and 3 variations of poor quality CPR in a simulated resuscitation, filmed from the foot, head and the side of the manikin. Participants watched 12 videos and completed a questionnaire to assess CPR quality. One hundred and twenty-five participants were recruited. The overall accuracy of visual assessment of CPR quality was 65.6%. Accuracy was better from the side (70.8%) and foot (68.8%) of the bed when compared to the head of the bed (57.2%; p<0.001). The side was the best position for assessing depth (p<0.001). Rate assessment was equivalent between positions (p=0.58). The side and foot of the bed were superior to the head when assessing chest recoil (p<0.001). Factors associated with increased accuracy in visual assessment of CPR quality included recent CPR course completion (p=0.034) and involvement in more cardiac arrests as a team member (p=0.003). Healthcare providers struggle to accurately assess the quality of CPR using visual assessment. If visual assessment is being used, providers should stand at the side of the bed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
DOT National Transportation Integrated Search
1966-10-01
The study provides correlative information with respect to the comparative accuracy of the traditional 'cuff' clinical method of obtaining blood pressure and the laboratory catheterization procedure which measures actual blood pressure. The informati...
McDowell, Michal; Pardee, Dana J; Peitzmeier, Sarah; Reisner, Sari L; Agénor, Madina; Alizaga, Natalie; Bernstein, Ida; Potter, Jennifer
2017-08-01
Trans-masculine (TM, i.e., persons who have a masculine spectrum gender identity, but were assigned female sex at birth) individuals face disparities in cervical cancer screening rates compared to cisgender women. Some unique barriers to screening in this population are specific to Pap tests. Introduction of self-collected frontal (i.e., vaginal) swabs for human papillomavirus (HPV) testing as a screening strategy may obviate these barriers. This study elucidates cervical cancer screening preferences among TM individuals. TM individuals participated in in-depth interviews (n = 31) and online surveys (n = 32) to explore perceptions and experiences regarding cervical cancer screening, including the acceptability of self-collected frontal HPV swabs for cervical cancer screening compared to provider-administered Pap tests. Provider-collected frontal HPV swab acceptability was also explored. Most TM individuals (94% in-person and 91% online participants) preferred either the self- or provider-collected frontal HPV swab to the Pap test. Participants perceived self- and provider-collected frontal HPV swabs to be less invasive, provoke less gender discordance, and promote a greater sense of agency compared to Pap tests. However, some participants expressed concern about HPV swab accuracy and, regarding the self-collected swab, discomfort about the need to engage with genitals they may not want to acknowledge. Individuals who reported positive provider relationships found Pap tests and provider-collected frontal swabs more acceptable than those who did not. Frontal HPV swabs have the potential to promote regular cervical cancer screening among TM individuals and to narrow screening disparities. Work is ongoing to establish swab accuracy and develop shared decision-making tools.
Precision and accuracy of 3D lower extremity residua measurement systems
NASA Astrophysics Data System (ADS)
Commean, Paul K.; Smith, Kirk E.; Vannier, Michael W.; Hildebolt, Charles F.; Pilgram, Thomas K.
1996-04-01
Accurate and reproducible geometric measurement of lower extremity residua is required for custom prosthetic socket design. We compared spiral x-ray computed tomography (SXCT) and 3D optical surface scanning (OSS) with caliper measurements and evaluated the precision and accuracy of each system. Spiral volumetric CT scanned surface and subsurface information was used to make external and internal measurements, and finite element models (FEMs). SXCT and OSS were used to measure lower limb residuum geometry of 13 below knee (BK) adult amputees. Six markers were placed on each subject's BK residuum and corresponding plaster casts and distance measurements were taken to determine precision and accuracy for each system. Solid models were created from spiral CT scan data sets with the prosthesis in situ under different loads using p-version finite element analysis (FEA). Tissue properties of the residuum were estimated iteratively and compared with values taken from the biomechanics literature. The OSS and SXCT measurements were precise within 1% in vivo and 0.5% on plaster casts, and accuracy was within 3.5% in vivo and 1% on plaster casts compared with caliper measures. Three-dimensional optical surface and SXCT imaging systems are feasible for capturing the comprehensive 3D surface geometry of BK residua, and provide distance measurements statistically equivalent to calipers. In addition, SXCT can readily distinguish internal soft tissue and bony structure of the residuum. FEM can be applied to determine tissue material properties interactively using inverse methods.
Spot diameters for scanning photorefractive keratectomy: a comparative study
NASA Astrophysics Data System (ADS)
Manns, Fabrice; Parel, Jean-Marie A.
1998-06-01
Purpose: The purpose of this study was to compare, using computer simulations, the duration, smoothness and accuracy of scanning photorefractive keratectomy with spot diameters ranging from 0.2 to 1 mm. Methods: We calculated the number of pulses per diopter of flattening for spot sizes varying from 0.2 to 1 mm. We also computed the corneal shape after the correction of 4 diopters of myopia and 4 diopters of astigmatism with a 6 mm ablation zone and a spot size of 0.4 mm with 600 mJ/cm2 peak radiant exposure and 0.8 mm with 300 mJ/cm2 peak radiant exposure. The accuracy and smoothness of the ablations were compared. Results: The repetition rate required to produce corrections of myopia with a 6 mm ablation zone in a duration of 5 s per diopter is on the order of 1 kHz for spot sizes smaller than 0.5 mm, and of 100 Hz for spot sizes larger than 0.5 mm. The accuracy and smoothness after the correction of myopia and astigmatism with small and large spot sizes were not significantly different. Conclusions: This study seems to indicate that there is no theoretical advantage to using either smaller spots with higher radiant exposures or larger spots with lower radiant exposures. However, at fixed radiant exposure, treatments with smaller spots require a longer duration of surgery but provide better accuracy for the correction of astigmatism.
NASA Astrophysics Data System (ADS)
Liu, J.
2017-12-01
Accurate estimation of ET is crucial for studies of land-atmosphere interactions. A series of ET products has been developed recently using various simulation methods; however, uncertainties in product accuracy limit their applications. In this study, the accuracies of 8 popular global ET products, simulated from satellite retrievals (ETMODIS and ETZhang), reanalysis (ETJRA55), a machine learning method (ETJung), and land surface models (ETCLM, ETMOS, ETNoah and ETVIC) forced by the Global Land Data Assimilation System (GLDAS), were comprehensively evaluated against observations from eddy covariance FLUXNET sites by year, land cover, and climate zone. The results show that all simulated ET products tend to underestimate in the lower ET ranges and overestimate in the higher ET ranges compared with ET observations. Across four statistical criteria, the root mean square error (RMSE), mean bias error (MBE), R2, and Taylor skill score (TSS), ETJung performed well by year as well as across land covers and climate zones. Satellite-based ET products also performed impressively: ETMODIS and ETZhang present comparable accuracy, while each was skilled for different land covers and climate zones. Generally, the ET products from GLDAS show reasonable accuracy, although ETCLM has relatively higher RMSE and MBE in the yearly, land cover, and climate zone comparisons. Although ETJRA55 shows R2 comparable with the other products, its performance was constrained by high RMSE and MBE. Knowledge from this study is crucial for improving ET products and for selecting among them in applications.
Corona, Thaila Francini; Böger, Beatriz; Rocha, Tatiana Carneiro da; Svoboda, Walfrido Külh; Gomes, Eliane Carneiro
2018-01-01
Rabies is an acute zoonotic disease, caused by a rhabdovirus that can affect all mammals, and is commonly transmitted by the bite of a rabid animal. The definitive diagnosis is laboratorial, by the Fluorescent Antibody Test (FAT) as a quick test and Mouse Inoculation Test (MIT) as a confirmatory test (gold standard). Studies conducted over the past three decades indicate that MIT and Virus Isolation in Cell Culture (VICC) can provide the same effectiveness, the latter being considered superior in bioethics and animal welfare. The aim of this study was to compare VICC with MIT, in terms of accuracy, biosafety and occupational health, supply and equipment costs, bioethics and animal welfare, in a Brazilian public health lab. We utilized 400 samples of animal neurological tissue to compare the performance of VICC against MIT. The variables analyzed were accuracy, biosafety and occupational health, time spent in performing the tests, supply and equipment costs, bioethics and animal welfare evaluation. Both VICC and MIT had almost the same accuracy (99.8%), although VICC presented fewer risks regarding biosafety and mental health of the technicians, and reduced time between inoculation and obtaining the results (approximately 22 days less). In addition, VICC presented lower supply costs (86.5% less), equipment costs (32.6% less), and the advantage of not using animals. These results confirm that VICC can replace MIT, offering the same accuracy and better features regarding cost, results, biosafety and occupational health, and bioethics and animal welfare.
Cheng, Y; Cai, Y; Wang, Y
2014-01-01
The aim of this study was to assess the accuracy of ultrasonography in the diagnosis of chronic lateral ankle ligament injury. A total of 120 ankles in 120 patients with a clinical suspicion of chronic ankle ligament injury were examined by ultrasonography using a 5- to 17-MHz linear array transducer before surgery. The results of ultrasonography were compared with the operative findings. There were 18 sprains and 24 partial and 52 complete tears of the anterior talofibular ligament (ATFL); 26 sprains, 27 partial and 12 complete tears of the calcaneofibular ligament (CFL); and 1 complete tear of the posterior talofibular ligament (PTFL) at arthroscopy and operation. Compared with operative findings, the sensitivity, specificity and accuracy of ultrasonography were 98.9%, 96.2% and 84.2%, respectively, for injury of the ATFL and 93.8%, 90.9% and 83.3%, respectively, for injury of the CFL. The PTFL tear was identified by ultrasonography. The accuracy of identification did not differ between acute-on-chronic and subacute-chronic patients. The accuracies of diagnosing the three grades of ATFL injuries were almost the same as those of diagnosing CFL injuries. Ultrasonography provides useful information for the evaluation of patients presenting with chronic pain after ankle sprain. Intraoperative findings are the reference standard. We demonstrated that ultrasonography was highly sensitive and specific in detecting chronic lateral ligament injury of the ankle joint.
Inertial Measurements for Aero-assisted Navigation (IMAN)
NASA Technical Reports Server (NTRS)
Jah, Moriba; Lisano, Michael; Hockney, George
2007-01-01
IMAN is a Python tool that provides inertial sensor-based estimates of spacecraft trajectories within an atmospheric influence. It provides Kalman filter-derived spacecraft state estimates based upon data collected onboard, and is shown to perform at a level comparable to the conventional methods of spacecraft navigation in terms of accuracy and at a higher level with regard to the availability of results immediately after completion of an atmospheric drag pass.
Hinnen, Deborah A.; Buskirk, Ann; Lyden, Maureen; Amstutz, Linda; Hunter, Tracy; Parkin, Christopher G.; Wagner, Robin
2014-01-01
Background: We assessed users’ proficiency and efficiency in identifying and interpreting self-monitored blood glucose (SMBG), insulin, and carbohydrate intake data using data management software reports compared with standard logbooks. Method: This prospective, self-controlled, randomized study enrolled insulin-treated patients with diabetes (PWDs) (continuous subcutaneous insulin infusion [CSII] and multiple daily insulin injection [MDI] therapy), patient caregivers [CGVs]) and health care providers (HCPs) who were naïve to diabetes data management computer software. Six paired clinical cases (3 CSII, 3 MDI) and associated multiple-choice questions/answers were reviewed by diabetes specialists and presented to participants via a web portal in both software report (SR) and traditional logbook (TL) formats. Participant response time and accuracy were documented and assessed. Participants completed a preference questionnaire at study completion. Results: All participants (54 PWDs, 24 CGVs, 33 HCPs) completed the cases. Participants achieved greater accuracy (assessed by percentage of accurate answers) using the SR versus TL formats: PWDs, 80.3 (13.2)% versus 63.7 (15.0)%, P < .0001; CGVs, 84.6 (8.9)% versus 63.6 (14.4)%, P < .0001; HCPs, 89.5 (8.0)% versus 66.4 (12.3)%, P < .0001. Participants spent less time (minutes) with each case using the SR versus TL formats: PWDs, 8.6 (4.3) versus 19.9 (12.2), P < .0001; CGVs, 7.0 (3.5) versus 15.5 (11.8), P = .0005; HCPs, 6.7 (2.9) versus 16.0 (12.0), P < .0001. The majority of participants preferred using the software reports versus logbook data. Conclusions: Use of the Accu-Chek Connect Online software reports enabled PWDs, CGVs, and HCPs, naïve to diabetes data management software, to identify and utilize key diabetes information with significantly greater accuracy and efficiency compared with traditional logbook information. Use of SRs was preferred over logbooks. PMID:25367012
Tosun, Tuğçe; Berkay, Dilara; Sack, Alexander T; Çakmak, Yusuf Ö; Balcı, Fuat
2017-08-01
Decisions are made based on the integration of available evidence. The noise in evidence accumulation leads to a particular speed-accuracy tradeoff in decision-making, which can be modulated and optimized by adaptive decision threshold setting. Given the effect of pre-SMA activity on striatal excitability, we hypothesized that the inhibition of pre-SMA would lead to higher decision thresholds and an increased accuracy bias. We used offline continuous theta burst stimulation to assess the effect of transient inhibition of the right pre-SMA on the decision processes in a free-response two-alternative forced-choice task within the drift diffusion model framework. Participants became more cautious and set higher decision thresholds following right pre-SMA inhibition compared with inhibition of the control site (vertex). Increased decision thresholds were accompanied by an accuracy bias with no effects on post-error choice behavior. Participants also exhibited higher drift rates as a result of pre-SMA inhibition compared with the vertex inhibition. These results, in line with the striatal theory of speed-accuracy tradeoff, provide evidence for the functional role of pre-SMA activity in decision threshold modulation. Our results also suggest that pre-SMA might be a part of the brain network associated with the sensory evidence integration.
Janousova, Eva; Schwarz, Daniel; Kasparek, Tomas
2015-06-30
We investigated a combination of three classification algorithms, namely the modified maximum uncertainty linear discriminant analysis (mMLDA), the centroid method, and the average linkage, with three types of features extracted from three-dimensional T1-weighted magnetic resonance (MR) brain images, specifically MR intensities, grey matter densities, and local deformations, for distinguishing 49 first-episode schizophrenia male patients from 49 healthy male subjects. The feature sets were reduced using intersubject principal component analysis before classification. By combining the classifiers, we were able to obtain slightly improved results when compared with single classifiers. The best classification performance (81.6% accuracy, 75.5% sensitivity, and 87.8% specificity) was significantly better than classification by chance. We also showed that classifiers based on features calculated using more computation-intensive image preprocessing perform better; mMLDA with the classification boundary calculated as the weighted mean discriminative scores of the groups had improved sensitivity but similar accuracy compared to the original MLDA; and reducing the number of eigenvectors during data reduction did not always lead to higher classification accuracy, since noise as well as signal important for classification was removed. Our findings provide important information for schizophrenia research and may improve the accuracy of computer-aided diagnostics of neuropsychiatric diseases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Feature ranking and rank aggregation for automatic sleep stage classification: a comparative study.
Najdi, Shirin; Gharbali, Ali Abdollahi; Fonseca, José Manuel
2017-08-18
Nowadays, sleep quality is one of the most important measures of a healthy life, especially considering the huge number of sleep-related disorders. Identifying sleep stages using polysomnographic (PSG) signals is the traditional way of assessing sleep quality. However, the manual process of sleep stage classification is time-consuming, subjective and costly. Therefore, in order to improve the accuracy and efficiency of sleep stage classification, researchers have been trying to develop automatic classification algorithms. Automatic sleep stage classification mainly consists of three steps: pre-processing, feature extraction and classification. Since classification accuracy is deeply affected by the extracted features, a poor feature vector will adversely affect the classifier and eventually lead to low classification accuracy. Therefore, special attention should be given to the feature extraction and selection process. In this paper the performance of seven feature selection methods, as well as two feature rank aggregation methods, was compared. Pz-Oz EEG, horizontal EOG and submental chin EMG recordings of 22 healthy males and females were used. A comprehensive feature set including 49 features was extracted from these recordings. The extracted features are among the most common and effective features used in sleep stage classification, drawn from temporal, spectral, entropy-based and nonlinear categories. The feature selection methods were evaluated and compared using three criteria: classification accuracy, stability, and similarity. Simulation results show that MRMR-MID achieves the highest classification performance while the Fisher method provides the most stable ranking. In our simulations, the performance of the aggregation methods was average, although they are known to generate more stable results and better accuracy. The Borda and RRA rank aggregation methods could not significantly outperform the conventional feature ranking methods. Among the conventional methods, some performed slightly better than others, although the choice of a suitable technique depends on the computational complexity and accuracy requirements of the user.
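For readers unfamiliar with Borda aggregation, the method mentioned above can be sketched in a few lines; the feature names and rankings below are hypothetical illustrations, not taken from the study.

```python
def borda_aggregate(rankings):
    """Aggregate several rankings of the same items by Borda count.

    Each ranking is a list of items ordered best-first. An item earns
    (n - position) points per ranking; higher totals rank higher overall.
    """
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - pos)
    # Sort items by total points, best first
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical feature rankings (best-first) from different selectors
r1 = ["entropy", "delta_power", "zcr"]
r2 = ["delta_power", "entropy", "zcr"]
r3 = ["entropy", "zcr", "delta_power"]
order = borda_aggregate([r1, r2, r3])  # → ['entropy', 'delta_power', 'zcr']
```

A consensus ranking built this way tends to be more stable across resampled data than any single selector's ranking, which is the property the aggregation methods above are valued for.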
Arabian, Sandra S; Marcus, Michael; Captain, Kevin; Pomphrey, Michelle; Breeze, Janis; Wolfe, Jennefer; Bugaev, Nikolay; Rabinovici, Reuven
2015-09-01
Analyses of data aggregated in state and national trauma registries provide the platform for clinical, research, development, and quality improvement efforts in trauma systems. However, the interhospital variability and accuracy in data abstraction and coding have not yet been directly evaluated. This multi-institutional, Web-based, anonymous study examines interhospital variability and accuracy in data coding and scoring by registrars. Eighty-two American College of Surgeons (ACS)/state-verified Level I and II trauma centers were invited to code various data elements, including diagnostic, procedure, and Abbreviated Injury Scale (AIS) codes as well as selected National Trauma Data Bank definitions, for the same fictitious case. Variability and accuracy in data entries were assessed by the maximal percent agreement among the registrars for the tested data elements, and 95% confidence intervals were computed to compare this level of agreement to the ideal value of 100%. Variability and accuracy in all elements were compared (χ² testing) based on Trauma Quality Improvement Program (TQIP) membership, level of trauma center, ACS verification, and registrar's certifications. Fifty registrars (61%) completed the survey. The overall accuracy for all tested elements was 64%. Variability was noted in all examined parameters except for the place-of-occurrence code in all groups and the lower extremity AIS code in Level II trauma centers and in the Certified Specialist in Trauma Registry- and Certified Abbreviated Injury Scale Specialist-certified registrar groups. No differences in variability were noted when groups were compared based on TQIP membership, level of center, ACS verification, or registrar's certifications, except for prehospital Glasgow Coma Scale (GCS), where TQIP respondents agreed more than non-TQIP centers (p = 0.004). There is variability and inaccuracy in interhospital data coding and scoring of injury information. 
This finding casts doubt on the validity of registry data used in all aspects of trauma care and injury surveillance.
Avila, Jacob; Smith, Ben; Mead, Therese; Jurma, Duane; Dawson, Matthew; Mallin, Michael; Dugan, Adam
2018-04-24
It is unknown whether the addition of M-mode to B-mode ultrasound (US) has any effect on the overall accuracy of interpretation of lung sliding in the evaluation of a pneumothorax by emergency physicians. This study aimed to determine what effect, if any, this addition has on US interpretation by emergency physicians of varying training levels. One hundred forty emergency physicians were randomized via online software to receive a quiz with B-mode clips alone or B-mode with corresponding M-mode images and asked to identify the presence or absence of lung sliding. The sensitivity, specificity, and accuracy of the diagnosis of lung sliding with and without M-mode US were compared. Overall, the sensitivities, specificities, and accuracies of B-mode + M-mode US versus B-mode US alone were 93.1% and 93.2% (P = .8), 96.0% and 89.8% (P < .0001), and 91.5% and 94.5% (P = .0091), respectively. A subgroup analysis showed that in those providers with fewer than 250 total US scans done previously, M-mode US increased accuracy from 88.2% (95% confidence interval, 86.2%-90.2%) to 94.4% (92.8%-96.0%; P = .001) and increased the specificity from 87.0% (84.5%-89.5%) to 97.2% (95.4%-99.0%; P < .0001) compared with B-mode US alone. There was no statistically significant difference observed in the sensitivity, specificity, and accuracy of B-mode + M-mode US compared with B-mode US alone in those with more than 250 scans. The addition of M-mode images to B-mode clips aids in the accurate diagnosis of lung sliding by emergency physicians. The subgroup analysis showed that the benefit of M-mode US disappears after emergency physicians have performed more than 250 US examinations. © 2018 by the American Institute of Ultrasound in Medicine.
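The sensitivity, specificity, and accuracy figures reported in abstracts like the one above follow from the standard 2x2 diagnostic counts; a minimal sketch, using hypothetical counts rather than the study's data:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from 2x2 diagnostic counts."""
    sensitivity = tp / (tp + fn)                  # true-positive rate
    specificity = tn / (tn + fp)                  # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)    # overall agreement
    return sensitivity, specificity, accuracy

# Hypothetical counts for illustration only (not the study's data)
sens, spec, acc = diagnostic_metrics(tp=54, fn=4, tn=48, fp=2)
```

Note that accuracy blends the two rates according to disease prevalence in the sample, which is why a method can gain specificity while overall accuracy moves less than expected.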
Improved Short-Term Clock Prediction Method for Real-Time Positioning.
Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan
2017-06-06
The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that should be predicted within a short time to compensate for the communication delay or data gap. Unlike orbit correction, clock correction is difficult to model and predict. The widely used linear model hardly fits long periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in the sliding window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) using observations of different lengths. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in the application of real-time PPP. It is also found that there is a positive correlation between the prediction accuracy and the short-term stability of on-board clocks. Compared with the accuracy of the traditional linear model, the static PPP accuracy in the N, E, and U directions using the new model's 2-h predicted clock is improved by about 50%. Furthermore, the static PPP accuracy of 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time model, the accuracy of the kinematic PPP solution using the 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation. 
This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
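The core idea described above, fitting a linear trend and adding back the dominant FFT periodics of the residuals during extrapolation, can be sketched as follows. This is an illustrative reconstruction under stated assumptions (uniform sampling, epochs starting at zero), not the authors' implementation, and the synthetic data are hypothetical.

```python
import numpy as np

def predict_clock(t, bias, t_future, n_terms=2):
    """Sketch: linear fit plus dominant FFT periodics of the residuals.

    t, bias  : past epochs (uniformly sampled, starting at 0) and clock biases
    t_future : epochs to extrapolate to
    n_terms  : number of dominant periodic terms to keep
    """
    a, b = np.polyfit(t, bias, 1)                 # 1. linear trend
    resid = bias - (a * t + b)
    spec = np.fft.rfft(resid)                     # 2. spectrum of the residuals
    freqs = np.fft.rfftfreq(len(resid), d=t[1] - t[0])
    keep = np.argsort(np.abs(spec[1:]))[::-1][:n_terms] + 1   # skip the DC bin
    pred = a * t_future + b                       # 3. trend + kept periodics
    for k in keep:
        amp = 2.0 * np.abs(spec[k]) / len(resid)
        pred = pred + amp * np.cos(2 * np.pi * freqs[k] * t_future + np.angle(spec[k]))
    return pred

# Demo on synthetic data: linear drift plus one sinusoidal periodic term
t = np.arange(256.0)
bias = 0.01 * t + 3.0 + 0.5 * np.cos(2 * np.pi * (8 / 256) * t + 0.3)
t_future = np.arange(256.0, 288.0)
pred = predict_clock(t, bias, t_future)
```

On this synthetic series the periodic correction removes most of the extrapolation error a purely linear model would leave behind, which mirrors the improvement the abstract reports over the linear model.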
Manjila, Sunil; Knudson, Kathleen E; Johnson, Carleton; Sloan, Andrew E
2016-06-01
Stereotactic biopsy is an important and minimally invasive technique used for a variety of indications in neurosurgery. Initially, this technique required a frame, but recently there have been a number of newer, less cumbersome approaches to biopsy, including robotic arms, fixed arms, and, more recently, skull-mounted miniframes. Miniframes are attractive because they are disposable and low profile. However, the relatively limited degree of freedom offered by currently available devices necessitates a preplanned burr hole, which in turn limits flexibility and multiple trajectories. The AXiiiS device is a skull-mounted, magnetic resonance imaging-compatible miniframe that provides a degree of freedom similar to that of a frame while maintaining a low-profile, disposable platform. To assess the image-guided trajectory alignment accuracy of AXiiiS stereotactic miniframe biopsy of intracranial lesions, the accuracy of the AXiiiS device was compared with that of the Navigus Trajectory Guide. After approval by our institutional review board, medical records of 10 neurosurgical patients with intracranial pathologies chosen for AXiiiS stereotactic miniframe biopsy were reviewed, and histological correlation was obtained. The 10 reported cases demonstrate the precision and ease of using the AXiiiS stereotactic miniframe for biopsy of intracranial lesions in conjunction with preoperative magnetic resonance imaging. Multiple trajectories and angles were used with precision and safety. The AXiiiS stereotactic miniframe is a feasible, safe, and disposable platform for multitrajectory intracranial biopsies. Compared with existing platforms, this novel device provides a more stable base and wider limits of trajectory angles with comparable accuracy and precision.
TU-E-BRB-00: Deformable Image Registration: Is It Right for Your Clinic
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
Deformable image registration (DIR) is developing rapidly and is poised to substantially improve dose fusion accuracy for adaptive and retreatment planning and motion management, and PET fusion to enhance contour delineation for treatment planning. However, DIR dose warping accuracy is difficult to quantify in general, and particularly difficult to quantify on a patient-specific basis. As clinical DIR options become more widely available, there is an increased need to understand the implications of incorporating DIR into clinical workflow. Several groups have assessed DIR accuracy in clinically relevant scenarios, but no comprehensive review material is yet available. This session will also discuss aspects of the official report of AAPM Task Group 132 on the Use of Image Registration and Data Fusion Algorithms and Techniques in Radiotherapy Treatment Planning, which provides recommendations for clinical use of DIR. We will summarize and compare various commercial DIR software options, outline successful clinical techniques, show specific examples with discussion of appropriate and inappropriate applications of DIR, discuss the clinical implications of DIR, provide an overview of current DIR error analysis research, review QA options and research phantom development, and present TG-132 recommendations. Learning Objectives: compare/contrast commercial DIR software and QA options; overview clinical DIR workflow for retreatment; understand uncertainties introduced by DIR; review TG-132 proposed recommendations.
Trend analysis of the aerosol optical depth from fusion of MISR and MODIS retrievals over China
NASA Astrophysics Data System (ADS)
Guo, Jing; Gu, Xingfa; Yu, Tao; Cheng, Tianhai; Chen, Hao
2014-03-01
Atmospheric aerosol plays an important role in climate change through direct and indirect processes. In order to evaluate the effects of aerosols on climate, it is necessary to study their spatial and temporal distributions. Satellite aerosol remote sensing is a developing technology that may provide good temporal sampling and superior spatial coverage for studying aerosols. The Moderate Resolution Imaging Spectroradiometer (MODIS) and Multi-angle Imaging Spectroradiometer (MISR) have provided aerosol observations since 2000, with large coverage and high accuracy. However, due to complex surfaces, cloud contamination, and the aerosol models used in the retrieval process, uncertainties still exist in current satellite aerosol products, and several differences are observed when comparing the MISR and MODIS AOD data with the AERONET AOD. Combining multiple sensors can reduce uncertainties and improve observational accuracy. The validation results reveal better agreement between the fused AOD and the AERONET AOD, confirming that the fused AOD values are more accurate than those from a single sensor. We analyzed trends in aerosol properties over China based on nine years (2002-2010) of fused data. Compared with trend analyses in Jingjintang and the Yangtze River Delta, the accuracy increased by 5% and 3%, respectively. An increasing trend in AOD is evident in the Yangtze River Delta, where human activities may be the main source of the increase.
NASA Astrophysics Data System (ADS)
Uneri, A.; Otake, Y.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Gallia, G. L.; Rigamonti, D.; Wolinsky, J.-P.; Gokaslan, Ziya L.; Khanna, A. J.; Siewerdsen, J. H.
2014-03-01
An algorithm for 3D-2D registration of CT and x-ray projections has been developed using dual projection views to provide 3D localization with accuracy exceeding that of conventional tracking systems. The registration framework employs a normalized gradient information (NGI) similarity metric and the covariance matrix adaptation evolution strategy (CMA-ES) to solve for the patient pose in 6 degrees of freedom. Registration performance was evaluated in anthropomorphic head and chest phantoms, as well as a human torso cadaver, using C-arm projection views acquired at angular separations (Δθ) ranging from 0 to 178°. Registration accuracy was assessed in terms of target registration error (TRE) and compared to that of an electromagnetic tracker. Studies evaluated the influence of C-arm magnification, x-ray dose, and preoperative CT slice thickness on registration accuracy, as well as the minimum angular separation required to achieve TRE ~2 mm. The results indicate that Δθ as small as 10-20° is adequate to achieve TRE <2 mm with 95% confidence, comparable or superior to that of commercial trackers. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers, and manual registration. The studies support potential application to percutaneous spine procedures and intracranial neurosurgery.
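Target registration error, the accuracy metric used above, is simply the Euclidean distance between a registered target point and its ground-truth position. A minimal sketch with hypothetical coordinates, not data from the study:

```python
from math import sqrt

def tre(p_est, p_true):
    """Target registration error: Euclidean distance between the registered
    target position and its ground-truth location (same units as the input)."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(p_est, p_true)))

# hypothetical 3D target positions in mm
print(tre((10.0, 20.0, 30.0), (11.0, 22.0, 32.0)))  # 3.0
```

In practice, TRE is evaluated at targets not used to drive the registration, so it measures generalization rather than fitting error.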
AVHRR composite period selection for land cover classification
Maxwell, S.K.; Hoffer, R.M.; Chapman, P.L.
2002-01-01
Multitemporal satellite image datasets provide valuable information on the phenological characteristics of vegetation, thereby significantly increasing the accuracy of cover type classifications compared to single-date classifications. However, the processing of these datasets can become very complex when multitemporal data are combined with multispectral data. Advanced Very High Resolution Radiometer (AVHRR) biweekly composite data are commonly used to classify land cover over large regions. Selecting a subset of these biweekly composite periods may be required to reduce the complexity and cost of land cover mapping. The objective of our research was to evaluate the effect of reducing the number of composite periods and altering the spacing of those composite periods on classification accuracy. Because inter-annual variability can have a major impact on classification results, 5 years of AVHRR data were evaluated. AVHRR biweekly composite images for spectral channels 1-4 (visible, near-infrared and two thermal bands) covering the entire growing season were used to classify 14 cover types over the entire state of Colorado for each of five different years. A supervised classification method was applied to maintain consistent procedures for each case tested. Results indicate that the number of composite periods can be halved (reduced from 14 composite dates to seven) without significantly reducing overall classification accuracy (80.4% Kappa accuracy for the 14-composite dataset compared to 80.0% for a seven-composite dataset). At least seven composite periods were required to ensure the classification accuracy was not affected by inter-annual variability due to climate fluctuations.
Concentrating more composites near the beginning and end of the growing season, as compared to using evenly spaced time periods, consistently produced slightly higher classification accuracies over the 5 years tested (average Kappa of 80.3% for the heavy early/late case compared to 79.0% for the alternate dataset case).
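The Kappa accuracies reported above correct raw agreement for the chance agreement implied by the class marginals. A minimal sketch of Cohen's kappa from a confusion matrix; the 2-class matrix below is hypothetical, not the study's 14-class data:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    # chance agreement expected from the row and column marginals
    expected = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(len(confusion))
    )
    return (observed - expected) / (1 - expected)

# hypothetical confusion matrix for two cover types
cm = [[40, 10],
      [ 5, 45]]
print(round(cohens_kappa(cm), 3))  # 0.7
```

Kappa of 1 is perfect agreement; 0 is agreement no better than chance.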
NASA Astrophysics Data System (ADS)
Kemppainen, R.; Vaara, T.; Joensuu, T.; Kiljunen, T.
2018-03-01
Background and Purpose. Magnetic resonance imaging (MRI) has in recent years emerged as an imaging modality to drive precise contouring of targets and organs at risk in external beam radiation therapy. Moreover, recent advances in MRI enable treatment of cancer without computed tomography (CT) simulation. A commercially available MR-only solution, MRCAT, offers a single-modality approach that provides density information for dose calculation and generation of positioning reference images. We evaluated the accuracy of patient positioning based on MRCAT digitally reconstructed radiographs (DRRs) by comparing to the standard CT based workflow. Materials and Methods. Twenty consecutive prostate cancer patients being treated with external beam radiation therapy were included in the study. DRRs were generated for each patient based on the planning CT and MRCAT. The accuracy assessment was performed by manually registering the DRR images to planar kV setup images using bony landmarks. A Bayesian linear mixed effects model was used to separate systematic and random components (inter- and intra-observer variation) in the assessment. In addition, method agreement was assessed using a Bland-Altman analysis. Results. The systematic difference between MRCAT and CT based patient positioning, averaged over the study population, was found to be (mean [95% CI]) -0.49 [-0.85 to -0.13] mm, 0.11 [-0.33 to +0.57] mm and -0.05 [-0.23 to +0.36] mm in the vertical, longitudinal and lateral directions, respectively. The increases in total random uncertainty were estimated to be below 0.5 mm for all directions when using the MR-only workflow instead of CT. Conclusions. The MRCAT pseudo-CT method provides clinically acceptable accuracy and precision for patient positioning for pelvic radiation therapy based on planar DRR images.
Furthermore, due to the reduction of geometric uncertainty, compared to dual-modality workflow, the approach is likely to improve the total geometric accuracy of pelvic radiation therapy.
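The Bland-Altman analysis used in the study above reduces to the bias (mean of the paired differences) and the 95% limits of agreement. A minimal sketch with hypothetical setup-shift values in mm, not the study's measurements:

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical paired positioning shifts (mm) from two workflows
mrcat = [0.2, -0.5, 0.1, 0.4, -0.3]
ct    = [0.3, -0.4, 0.0, 0.6, -0.2]
bias, lo, hi = bland_altman(mrcat, ct)
print(round(bias, 2))  # -0.08
```

If roughly 95% of the paired differences fall inside the limits of agreement and the limits are clinically acceptable, the two methods can be considered interchangeable.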
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.
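The concordance (c) statistic referenced above scores the fraction of usable subject pairs ranked correctly by predicted risk. A simplified Harrell-type sketch with hypothetical data; the paper's IPCW estimator additionally weights pairs by inverse censoring probabilities, which is omitted here:

```python
def concordance(risk, time, event):
    """Harrell-type c-statistic: fraction of usable pairs in which the subject
    with the earlier observed event has the higher predicted risk.
    Ties in risk count as half-concordant."""
    num = den = 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            # a pair is usable if subject i has an observed event before time[j]
            if event[i] and time[i] < time[j]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

# hypothetical risk scores, follow-up times, and event indicators
c = concordance([0.9, 0.4, 0.7, 0.2], [2.0, 4.0, 5.0, 8.0], [1, 1, 0, 1])
print(c)  # 0.8
```

A value of 0.5 corresponds to random ranking, 1.0 to perfect discrimination.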
NASA Astrophysics Data System (ADS)
Shoko, C.; Mutanga, O.
2017-07-01
C3 and C4 grass species discrimination has increasingly become relevant in understanding their response to environmental changes and in monitoring their integrity in providing goods and services. While remotely sensed data provide robust, cost-effective and repeatable monitoring tools for C3 and C4 grasses, this has been largely limited by the scarcity of sensors with better earth imaging characteristics. The recent launch of the advanced Sentinel 2 MultiSpectral Instrument (MSI) presents a new prospect for discriminating C3 and C4 grasses. The present study tested the potential of Sentinel 2, characterized by a refined spatial resolution and additional unique spectral bands, in discriminating between Festuca (C3) and Themeda (C4) grasses. To evaluate the performance of Sentinel 2 MSI, spectral bands, vegetation indices, and spectral bands plus indices were used. Findings from Sentinel 2 were compared with those derived from the widely used Worldview 2 commercial sensor and the Landsat 8 Operational Land Imager (OLI). Overall classification accuracies showed that the Sentinel 2 spectral bands performed best (90.36%), compared with indices (85.54%) and combined variables (88.61%). The results were comparable to the Worldview 2 sensor, which produced slightly higher accuracies using spectral bands (95.69%), indices (86.02%) and combined variables (87.09%), and better than Landsat 8 OLI spectral bands (75.26%), indices (82.79%) and combined variables (86.02%). When classifying the two species, Sentinel 2 bands produced lower errors of commission and omission (between 4.76 and 14.63%) than Landsat 8 (between 18.18 and 30.61%), comparable to Worldview 2 (between 1.96 and 7.14%). The classification accuracy from Sentinel 2 also did not differ significantly (z = 1.34) from Worldview 2 using standard bands, but was significantly different (z > 1.96) using indices and combined variables, whereas compared to Landsat 8, Sentinel 2 accuracies were significantly different (z > 1.96) using all variables.
These results demonstrate that the discrimination of key vegetation species can be improved by the use of the freely available and improved Sentinel 2 MSI data.
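The z-statistics above test whether two classification accuracies differ significantly; |z| > 1.96 corresponds to significance at the 5% level. As a simplified illustration, a pooled two-proportion z-test on hypothetical validation counts (not the exact kappa-based test used in the study):

```python
from math import sqrt

def accuracy_z(p1, n1, p2, n2):
    """Pooled two-proportion z statistic for comparing two overall
    classification accuracies p1 and p2 on n1 and n2 validation samples."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)  # pooled accuracy
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# hypothetical: 90% vs 86% accuracy, 200 validation pixels each
z = accuracy_z(0.90, 200, 0.86, 200)
# |z| < 1.96 here, so this hypothetical difference is not significant at 5%
print(round(z, 2))
```

The test assumes independent validation samples; comparing classifiers evaluated on the same pixels would call for a paired test such as McNemar's instead.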
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar
With increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters which collectively define weather situations as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of an individual model has substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results compared to conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
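Model blending in its simplest form fits weights that combine individual forecasts against observations; the approach described above additionally conditions those weights on atmospheric state parameters via machine learning. A minimal constant-weight least-squares sketch with hypothetical irradiance values:

```python
def blend_weights(f1, f2, obs):
    """Fit constant blending weights for two forecast models by solving the
    2x2 normal equations of min ||w1*f1 + w2*f2 - obs||^2."""
    a11 = sum(x * x for x in f1)
    a12 = sum(x * y for x, y in zip(f1, f2))
    a22 = sum(y * y for y in f2)
    b1 = sum(x * o for x, o in zip(f1, obs))
    b2 = sum(y * o for y, o in zip(f2, obs))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# hypothetical irradiance forecasts (W/m^2) from two models vs. observations
m1 = [500.0, 620.0, 410.0]
m2 = [520.0, 600.0, 430.0]
ob = [510.0, 610.0, 420.0]
w1, w2 = blend_weights(m1, m2, ob)
print(w1, w2)
```

Here the observations happen to be the exact average of the two forecasts, so the fitted weights come out as 0.5 each; a situation-dependent blender would instead let these weights vary with the weather state.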
Machine learning of molecular properties: Locality and active learning
NASA Astrophysics Data System (ADS)
Gubaev, Konstantin; Podryabinkin, Evgeny V.; Shapeev, Alexander V.
2018-06-01
In recent years, machine learning techniques have shown great potential in various problems from a multitude of disciplines, including materials design and drug discovery. The high computational speed on the one hand, and accuracy comparable to that of density functional theory on the other, make machine learning algorithms efficient for high-throughput screening through chemical and configurational space. However, the machine learning algorithms available in the literature require large training datasets to reach chemical accuracy, and also show large errors for so-called outliers, the out-of-sample molecules not well represented in the training set. In the present paper, we propose a new machine learning algorithm for predicting molecular properties that addresses both issues: it is based on a local model of interatomic interactions, providing high accuracy when trained on relatively small training sets, and an active learning algorithm that optimally chooses the training set and significantly reduces the errors for the outliers. We compare our model to other state-of-the-art algorithms from the literature on widely used benchmark tests.
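Active learning, as invoked above, selects the most informative training points rather than a random sample. The authors' criterion is specific to their interatomic model, but a generic greedy max-min-distance selector illustrates the idea; all data below are hypothetical one-dimensional "configurations":

```python
def select_active(pool, train, dist, k):
    """Greedy active learning sketch: repeatedly add the pool point farthest
    from the current training set (max-min distance criterion)."""
    chosen = []
    train = list(train)
    for _ in range(k):
        best = max(pool, key=lambda p: min(dist(p, t) for t in train))
        chosen.append(best)
        train.append(best)
        pool = [p for p in pool if p != best]
    return chosen

def d(a, b):
    return abs(a - b)

# hypothetical pool of candidate configurations and an initial training point
picked = select_active([1.0, 5.0, 9.0, 10.0], [0.0], d, 2)
print(picked)  # [10.0, 5.0]
```

Points far from everything already in the training set are exactly the potential outliers, which is why this style of selection reduces out-of-sample error.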
Hua, Zhi-Gang; Lin, Yan; Yuan, Ya-Zhou; Yang, De-Chang; Wei, Wen; Guo, Feng-Biao
2015-07-01
In 2003, we developed an ab initio program, ZCURVE 1.0, to find genes in bacterial and archaeal genomes. In this work, we present the updated version, ZCURVE 3.0. Using 422 prokaryotic genomes, the average accuracy was 93.7% with the updated version, compared with 88.7% with the original version. These results also demonstrate that ZCURVE 3.0 is comparable with Glimmer 3.02 and may provide complementary predictions to it. In fact, the joint application of the two programs generated better results by correctly finding more annotated genes while producing fewer false-positive predictions. As an exclusive feature, ZCURVE 3.0 contains a post-processing program that can identify essential genes with high accuracy (generally >90%). We hope ZCURVE 3.0 will receive wide use through its web-based running mode. The updated ZCURVE can be freely accessed from http://cefg.uestc.edu.cn/zcurve/ or http://tubic.tju.edu.cn/zcurveb/ without any restrictions. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Astrophysics Data System (ADS)
Bobojć, Andrzej
2016-12-01
This work contains a comparative study of the performance of six geopotential models in an orbit estimation process for the satellite of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) mission. The models ULUX_CHAMP2013S, ITG-GRACE2010S, EIGEN-51C, EIGEN-5S, EGM2008 and EGM96 were adopted for testing. Different sets of pseudo-range simulations along reference GOCE satellite orbital arcs were obtained using real orbits of the Global Positioning System satellites. These sets were the basic observation data used in the adjustment. The centimeter-accuracy Precise Science Orbit (PSO) for the GOCE satellite provided by the European Space Agency (ESA) was adopted as the GOCE reference orbit. By comparing various variants of the orbital solutions, the relative accuracy of the geopotential models in an orbital sense is determined. Full geopotential models were used in the adjustment process. The solutions were also determined using truncated geopotential models; in such cases, the accuracy of the solutions was slightly enhanced. Different arc lengths were taken for the computation.
Investigating the use of multi-point coupling for single-sensor bearing estimation in one direction
NASA Astrophysics Data System (ADS)
Woolard, Americo G.; Phoenix, Austin A.; Tarazaga, Pablo A.
2018-04-01
Bearing estimation of radially propagating symmetric waves in solid structures typically requires a minimum of two sensors. This research investigates the use of multi-point coupling to provide directional inference using a single sensor, thereby reducing the number of sensors required for localization. As a test specimen, a finite-element model of a beam is constructed with a symmetrically placed bipod that has asymmetric joint-stiffness properties. Impulse loading is applied at different points along the beam, and measurements are taken from the apex of the bipod. A technique is developed to determine the direction of arrival of the propagating wave. The accuracy when using the bipod with the developed technique is compared against results gathered without the bipod, measuring from an asymmetric location along the beam. The results show 92% accuracy when the bipod is used, compared to 75% when measuring without the bipod from an asymmetric location. A geometry investigation finds the best accuracy when one leg of the bipod has a low stiffness and a large diameter relative to the other leg.
A reference standard-based quality assurance program for radiology.
Liu, Patrick T; Johnson, C Daniel; Miranda, Rafael; Patel, Maitray D; Phillips, Carrie J
2010-01-01
The authors have developed a comprehensive radiology quality assurance (QA) program that evaluates radiology interpretations and procedures by comparing them with reference standards. Performance metrics are calculated and then compared with benchmarks or goals on the basis of published multicenter data and meta-analyses. Additional workload for physicians is kept to a minimum by having trained allied health staff members perform the comparisons of radiology reports with the reference standards. The performance metrics tracked by the QA program include the accuracy of CT colonography for detecting polyps, the false-negative rate for mammographic detection of breast cancer, the accuracy of CT angiography detection of coronary artery stenosis, the accuracy of meniscal tear detection on MRI, the accuracy of carotid artery stenosis detection on MR angiography, the accuracy of parathyroid adenoma detection by parathyroid scintigraphy, the success rate for obtaining cortical tissue on ultrasound-guided core biopsies of pelvic renal transplants, and the technical success rate for peripheral arterial angioplasty procedures. In contrast with peer-review programs, this reference standard-based QA program minimizes the possibilities of reviewer bias and erroneous second reviewer interpretations. The more objective assessment of performance afforded by the QA program will provide data that can easily be used for education and management conferences, research projects, and multicenter evaluations. Additionally, such performance data could be used by radiology departments to demonstrate their value over nonradiology competitors to referring clinicians, hospitals, patients, and third-party payers. Copyright 2010 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Jürgens, Rebecca; Grass, Annika; Drolet, Matthis; Fischer, Julia
Both in the performative arts and in emotion research, professional actors are assumed to be capable of delivering emotions comparable to spontaneous emotional expressions. This study examines the effects of acting training on vocal emotion depiction and recognition. We predicted that professional actors express emotions in a more realistic fashion than non-professional actors. However, professional acting training may lead to a particular speech pattern; this might account for vocal expressions by actors that are less comparable to authentic samples than the ones by non-professional actors. We compared 80 emotional speech tokens from radio interviews with 80 re-enactments by professional and inexperienced actors, respectively. We analyzed recognition accuracies for emotion and authenticity ratings and compared the acoustic structure of the speech tokens. Both play-acted conditions yielded similar recognition accuracies and possessed more variable pitch contours than the spontaneous recordings. However, professional actors exhibited signs of different articulation patterns compared to non-trained speakers. Our results indicate that for emotion research, emotional expressions by professional actors are not better suited than those from non-actors.
Larger core size has superior technical and analytical accuracy in bladder tissue microarray.
Eskaros, Adel Rh; Egloff, Shanna A Arnold; Boyd, Kelli L; Richardson, Joyce E; Hyndman, M Eric; Zijlstra, Andries
2017-03-01
The construction of tissue microarrays (TMAs) with cores from a large number of paraffin-embedded tissues (donors) inserted into a single paraffin block (recipient) is an effective method of analyzing samples from many patient specimens simultaneously. For a TMA to be successful, the cores within it must capture the correct histologic areas from the donor blocks (technical accuracy) and maintain concordance with the tissue of origin (analytical accuracy). This can be particularly challenging for tissues with small histological features, such as small islands of carcinoma in situ (CIS), thin layers of normal urothelial lining of the bladder, or cancers that exhibit intratumor heterogeneity. In an effort to create a comprehensive TMA of a bladder cancer patient cohort that accurately represents the tumor heterogeneity and captures the small features of normal tissue and CIS, we determined how core size (0.6 vs 1.0 mm) impacted the technical and analytical accuracy of the TMA. The larger 1.0 mm core exhibited better technical accuracy for all tissue types at 80.9% (normal), 94.2% (tumor), and 71.4% (CIS), compared with 58.6%, 85.9%, and 63.8% for 0.6 mm cores. Although the 1.0 mm core provided better tissue capture, increasing the number of replicates from two to three, as the smaller 0.6 mm core allows, compensated for this reduced technical accuracy. However, quantitative image analysis of proliferation using both Ki67+ immunofluorescence counts and manual mitotic counts demonstrated that the 1.0 mm core size also exhibited significantly greater analytical accuracy (P = 0.004 and 0.035, respectively; r2 = 0.979 and 0.669, respectively). Ultimately, our findings demonstrate that capturing two or more 1.0 mm cores for TMA construction provides superior technical and analytical accuracy over the smaller 0.6 mm cores, especially for tissues harboring small histological features or substantial heterogeneity.
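The r2 values above quantify how well core-based proliferation counts track counts from the full tissue of origin. A minimal squared-Pearson-correlation sketch with hypothetical Ki67+ counts, not the study's measurements:

```python
from statistics import mean

def r_squared(x, y):
    """Squared Pearson correlation between two paired count series."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# hypothetical Ki67+ counts: TMA cores vs. whole donor sections
print(round(r_squared([12, 30, 25, 44], [10, 28, 26, 40]), 3))  # 0.979
```

An r2 near 1 means the core counts are an almost perfect linear proxy for the whole-section counts.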
ASSESSING THE ACCURACY OF NATIONAL LAND COVER DATASET AREA ESTIMATES AT MULTIPLE SPATIAL EXTENTS
Site specific accuracy assessments provide fine-scale evaluation of the thematic accuracy of land use/land cover (LULC) datasets; however, they provide little insight into LULC accuracy across varying spatial extents. Additionally, LULC data are typically used to describe lands...
Bonmati, Ester; Hu, Yipeng; Villarini, Barbara; Rodell, Rachael; Martin, Paul; Han, Lianghao; Donaldson, Ian; Ahmed, Hashim U; Moore, Caroline M; Emberton, Mark; Barratt, Dean C
2018-04-01
Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics have been reported, which makes comparing the performance of different systems difficult. A set of nine measures is presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using a MR/US-guided transperineal approach. Using the SmartTarget fusion system, the MRI-US image alignment error was determined to be 2.0 ± 1.0 mm (mean ± SD), with an overall system instrument targeting error of 3.0 ± 1.2 mm. Three needle deployments for each target phantom lesion were found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. The application of a comprehensive, unbiased validation assessment for MR/US guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can be helpful in identifying relationships between these errors, providing insight into the technical behavior of these systems. © 2018 American Association of Physicists in Medicine.
3D Higher Order Modeling in the BEM/FEM Hybrid Formulation
NASA Technical Reports Server (NTRS)
Fink, P. W.; Wilton, D. R.
2000-01-01
Higher order divergence- and curl-conforming bases have been shown to provide significant benefits, in both convergence rate and accuracy, in the 2D hybrid finite element/boundary element formulation (P. Fink and D. Wilton, National Radio Science Meeting, Boulder, CO, Jan. 2000). A critical issue in achieving the potential accuracy of the approach is the accurate evaluation of all matrix elements. These involve products of high order polynomials and, in some instances, singular Green's functions. In the 2D formulation, the use of a generalized Gaussian quadrature method was found to greatly facilitate the computation and to improve the accuracy of the boundary integral equation self-terms. In this paper, a 3D hybrid electric field formulation employing higher order bases and higher order elements is presented. The improvements in convergence rate and accuracy, compared to those resulting from lower order modeling, are established. Techniques developed to facilitate the computation of the boundary integral self-terms are also shown to improve the accuracy of these terms. Finally, simple preconditioning techniques are used in conjunction with iterative solution procedures to solve the resulting linear system efficiently. In order to handle the boundary integral singularities in the 3D formulation, the parent element (either a triangle or a rectangle) is subdivided into a set of sub-triangles with a common vertex at the singularity. The contribution to the integral from each of the sub-triangles is computed using the Duffy transformation to remove the singularity. This method is shown to greatly facilitate the self-term computation when the bases are of higher order. In addition, the sub-triangles can be further divided to achieve nearly arbitrary accuracy in the self-term computation. An efficient method for subdividing the parent element is presented.
The accuracy obtained using higher order bases is compared to that obtained using lower order bases when the number of unknowns is approximately equal. Also, convergence rates obtained using higher order bases are compared to those obtained with lower order bases for selected sample
Deep Learning Method for Denial of Service Attack Detection Based on Restricted Boltzmann Machine.
Imamverdiyev, Yadigar; Abdullayeva, Fargana
2018-06-01
In this article, the application of the deep learning method based on the Gaussian-Bernoulli type restricted Boltzmann machine (RBM) to the detection of denial of service (DoS) attacks is considered. To increase the DoS attack detection accuracy, seven additional layers are added between the visible and the hidden layers of the RBM. Accurate results in DoS attack detection are obtained by optimizing the hyperparameters of the proposed deep RBM model. A form of the RBM that allows application to continuous data is used; in this type of RBM, the probability distribution of the visible layer is replaced by a Gaussian distribution. A comparative analysis of the accuracy of the proposed method against Bernoulli-Bernoulli RBM, Gaussian-Bernoulli RBM, and deep belief network type deep learning methods on DoS attack detection is provided. The detection accuracy of the methods is verified on the NSL-KDD data set. The proposed multilayer deep Gaussian-Bernoulli type RBM achieves higher accuracy.
Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model
NASA Astrophysics Data System (ADS)
Wang, Qijie
2015-08-01
The accuracy of medium- and long-term prediction of length-of-day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually with prediction length. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence; in particular, it greatly improves the resolution of the signal's low-frequency components, and can therefore improve prediction efficiency. In this work, LSAR is used to forecast LOD change. The LOD series from EOP 08 C04, provided by the IERS, is modeled by both the LSAR and AR models, and the results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves markedly, with a maximum of around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
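Once fitted, an AR model forecasts by recursively applying its coefficients to the latest values; the leap-step variant differs in using subsampled (leap-step) lags, but the recursion is the same in spirit. A minimal AR sketch with a hypothetical coefficient and a toy series, not IERS data:

```python
def ar_forecast(series, coeffs, steps):
    """Recursively extend a series with a fitted AR model:
    x[t] = sum_k coeffs[k] * x[t-1-k]."""
    x = list(series)
    for _ in range(steps):
        x.append(sum(c * x[-1 - k] for k, c in enumerate(coeffs)))
    return x[len(series):]

# hypothetical AR(1) with coefficient 0.5 on a toy LOD-anomaly series (ms)
pred = ar_forecast([2.0, 1.0, 0.8], [0.5], 3)
print(pred)  # [0.4, 0.2, 0.1]
```

Because each step feeds on the previous prediction, errors compound with horizon, which is why medium- and long-term LOD prediction accuracy degrades and motivates variants such as LSAR.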
Improving crop classification through attention to the timing of airborne radar acquisitions
NASA Technical Reports Server (NTRS)
Brisco, B.; Ulaby, F. T.; Protz, R.
1984-01-01
Radar remote sensors may provide valuable input to crop classification procedures because of (1) their independence of weather conditions and solar illumination, and (2) their ability to respond to differences in crop type. Manual classification of multidate synthetic aperture radar (SAR) imagery resulted in an overall accuracy of 83 percent for corn, forest, grain, and 'other' cover types. Forests and corn fields were identified with accuracies approaching or exceeding 90 percent. Grain fields and 'other' fields were often confused with each other, resulting in classification accuracies of 51 and 66 percent, respectively. The 83 percent correct classification represents a 10 percent improvement when compared to similar SAR data for the same area collected at alternate time periods in 1978. These results demonstrate that improvements in crop classification accuracy can be achieved with SAR data by synchronizing data collection times with crop growth stages in order to maximize differences in the geometric and dielectric properties of the cover types of interest.
McCarthy, Jillian H.; Hogan, Tiffany P.; Catts, Hugh W.
2013-01-01
The purpose of this study was to test the hypothesis that word reading accuracy, not oral language, is associated with spelling performance in school-age children. We compared fourth grade spelling accuracy in children with specific language impairment (SLI), dyslexia, or both (SLI/dyslexia) to their typically developing grade-matched peers. Results of the study revealed that children with SLI performed similarly to their typically developing peers on a single word spelling task. Alternatively, those with dyslexia and SLI/dyslexia evidenced poor spelling accuracy. Errors made by both those with dyslexia and SLI/dyslexia were characterized by numerous phonologic, orthographic, and semantic errors. Cumulative results support the hypothesis that word reading accuracy, not oral language, is associated with spelling performance in typically developing school-age children and their peers with SLI and dyslexia. Findings are provided as further support for the notion that SLI and dyslexia are distinct, yet co-morbid, developmental disorders. PMID:22876769
Slicer Method Comparison Using Open-source 3D Printer
NASA Astrophysics Data System (ADS)
Ariffin, M. K. A. Mohd; Sukindar, N. A.; Baharudin, B. T. H. T.; Jaafar, C. N. A.; Ismail, M. I. S.
2018-01-01
Open-source 3D printers have become one of the popular choices for fabricating 3D models. The technology is easily accessible and low in cost. However, several studies have sought to improve the performance of this low-cost technology in terms of the accuracy of the finished parts. This study focuses on the selection of slicer between CuraEngine and Slic3r. The effects of the slicers were observed in terms of accuracy and surface visualization. The results show that if accuracy is the top priority, CuraEngine is the better option, as it yields higher accuracy and requires less filament than Slic3r. Slic3r may be very useful for complicated parts, such as hanging structures, because its excess material acts as support material. The study provides a basic platform for users to decide which option to use in fabricating a 3D model.
Vertical Accuracy Evaluation of Aster GDEM2 Over a Mountainous Area Based on Uav Photogrammetry
NASA Astrophysics Data System (ADS)
Liang, Y.; Qu, Y.; Guo, D.; Cui, T.
2018-05-01
Global digital elevation models (GDEMs) provide elementary information on the heights of the Earth's surface and objects on the ground, and have become an important data source for a range of applications. The vertical accuracy of a GDEM is critical for its applications. UAVs are now widely used for large-scale surveying and mapping. Compared with traditional surveying techniques, UAV photogrammetry is more convenient and more cost-effective, and produces a DEM of the survey area with high accuracy and high spatial resolution. As a result, DEMs derived from UAV photogrammetry can be used for a more detailed and accurate evaluation of GDEM products. This study investigates the vertical accuracy (in terms of elevation accuracy and systematic errors) of the ASTER GDEM Version 2 dataset over complex terrain based on UAV photogrammetry. Experimental results show that the elevation errors of ASTER GDEM2 are normally distributed and the systematic error is quite small. The accuracy of the ASTER GDEM2 coincides well with that reported by the ASTER validation team. The accuracy in the research area is negatively correlated with both the slope of the terrain and the number of stereo observations. This study also evaluates the vertical accuracy of the up-sampled ASTER GDEM2. Experimental results show that the accuracy of the up-sampled ASTER GDEM2 data in the research area is not significantly reduced by the complexity of the terrain. The fine-grained accuracy evaluation of the ASTER GDEM2 is informative for GDEM-supported UAV photogrammetric applications.
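Vertical accuracy assessments of this kind typically report the mean error (systematic bias) and RMSE of GDEM elevations against checkpoint heights. A minimal sketch with hypothetical checkpoint values in meters, not the study's data:

```python
from math import sqrt

def vertical_accuracy(gdem, ref):
    """Mean error (bias) and RMSE of GDEM elevations vs. reference heights."""
    errors = [g - r for g, r in zip(gdem, ref)]
    bias = sum(errors) / len(errors)
    rmse = sqrt(sum(e * e for e in errors) / len(errors))
    return bias, rmse

# hypothetical ASTER GDEM2 heights vs. UAV-photogrammetry checkpoints (m)
bias, rmse = vertical_accuracy([103.0, 98.0, 110.0, 95.0],
                               [100.0, 100.0, 108.0, 96.0])
print(bias)  # 0.5
```

Separating bias from RMSE matters: a large bias with small spread indicates a correctable systematic offset, whereas a large RMSE with small bias indicates random error that cannot simply be shifted away.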
Li, Chunyan; Wu, Pei-Ming; Wu, Zhizhen; Ahn, Chong H; LeDoux, David; Shutter, Lori A; Hartings, Jed A; Narayan, Raj K
2012-02-01
The injured brain is vulnerable to increases in temperature after severe head injury. Therefore, accurate and reliable measurement of brain temperature is important to optimize patient outcome. In this work, we have fabricated, optimized, and characterized temperature sensors for use with a micromachined smart catheter for multimodal intracranial monitoring. The developed temperature sensors have a resistance of 100.79 ± 1.19 Ω, a sensitivity of 67.95 mV/°C in the operating range of 15-50°C, and a time constant of 180 ms. Under the optimized excitation current of 500 μA, an adequate signal-to-noise ratio was achieved without causing self-heating, and changes in immersion depth did not introduce clinically significant measurement errors (<0.01°C). We evaluated the accuracy and long-term drift (5 days) of twenty temperature sensors in comparison to two types of commercial temperature probes (USB Reference Thermometer, a NIST-traceable bulk probe with 0.05°C accuracy; and IT-21, a type T clinical microprobe with guaranteed 0.1°C accuracy) under controlled laboratory conditions. These in vitro experimental data showed that the temperature measurement performance of our sensors was accurate and reliable over the course of 5 days. The smart catheter temperature sensors provided accuracy and long-term stability comparable to those of commercial tissue-implantable microprobes, and therefore provide a means for temperature measurement in a microfabricated, multimodal cerebral monitoring device.
Translational Imaging Spectroscopy for Proximal Sensing
Rogass, Christian; Koerting, Friederike M.; Mielke, Christian; Brell, Maximilian; Boesche, Nina K.; Bade, Maria; Hohmann, Christian
2017-01-01
Proximal sensing, as the near-field counterpart of remote sensing, offers a broad variety of applications. Imaging spectroscopy in general, and translational laboratory imaging spectroscopy in particular, can be utilized for a variety of different research topics. Geoscientific applications require precise pre-processing of hyperspectral data cubes to retrieve at-surface reflectance in order to conduct spectral feature-based comparison of unknown sample spectra to known library spectra. A new pre-processing chain called GeoMAP-Trans for at-surface reflectance retrieval is proposed here as an analogue to other algorithms published by the team of authors. It consists of a radiometric, a geometric, and a spectral module. Each module consists of several processing steps that are described in detail. The processing chain was adapted to the broadly used HySPEX VNIR/SWIR imaging spectrometer system and tested using geological mineral samples. The performance was subjectively and objectively evaluated using standard artificial image quality metrics and comparative measurements of mineral and Lambertian diffuser standards with standard field and laboratory spectrometers. The proposed algorithm provides high-quality results, offers broad applicability through its generic design, and might be the first of its kind to be published. A high radiometric accuracy is achieved by the incorporation of the Reduction of Miscalibration Effects (ROME) framework. The geometric accuracy is better than 1 μpixel. The spectral accuracy was estimated by comparing spectra of standard field spectrometers to those from HySPEX for a Lambertian diffuser. The achieved spectral accuracy is better than 0.02% for the full spectrum and better than 98% for the absorption features.
It was empirically shown that point and imaging spectrometers provide different results for non-Lambertian samples due to their different sensing principles, adjacency scattering impacts on the signal and anisotropic surface reflection properties. PMID:28800111
Mayoral, Víctor; Pérez-Hernández, Concepción; Muro, Inmaculada; Leal, Ana; Villoria, Jesús; Esquivias, Ana
2018-04-27
Based on the clear neuroanatomical delineation of many neuropathic pain (NP) symptoms, a simple tool for performing a short structured clinical encounter based on the IASP diagnostic criteria was developed to identify NP. This study evaluated its accuracy and usefulness. A case-control study was performed in 19 pain clinics within Spain. A pain clinician used the experimental screening tool (the index test, IT) to assign the descriptions of non-neuropathic (nNP), non-localized neuropathic (nLNP), and localized neuropathic (LNP) to the patients' pain conditions. The reference standard was a formal clinical diagnosis provided by another pain clinician. The accuracy of the IT was compared with that of the Douleur Neuropathique en 4 questions (DN4) and the Leeds Assessment of Neuropathic Signs and Symptoms (LANSS). Six-hundred and sixty-six patients were analyzed. There was a good agreement between the IT and the reference standard (kappa =0.722). The IT was accurate in distinguishing between LNP and nLNP (83.2% sensitivity, 88.2% specificity), between LNP and the other pain categories (nLNP + nNP) (80.0% sensitivity, 90.7% specificity), and between NP and nNP (95.5% sensitivity, 89.1% specificity). The accuracy in distinguishing between NP and nNP was comparable with that of the DN4 and the LANSS. The IT took a median of 10 min to complete. A novel instrument based on an operationalization of the IASP criteria can not only discern between LNP and nLNP, but also provide a high level of diagnostic certainty about the presence of NP after a short clinical encounter.
Development of Parameters for the Collection and Analysis of Lidar at Military Munitions Sites
2010-01-01
and inertial measurement unit (IMU) equipment is used to locate the sensor in the air. The time of return of the laser signal allows for the... approximately 15 centimeters (cm) on soft ground surfaces and a horizontal accuracy of approximately 60 cm, both compared to surveyed control points... provide more accurate topographic data than other sources, at a reasonable cost compared to alternatives such as ground survey or photogrammetry
NASA Astrophysics Data System (ADS)
Sánchez-Ortiz, Noelia; Domínguez-González, Raúl; Krag, Holger
2015-03-01
One of the main objectives of Space Surveillance and Tracking (SST) systems is to support space collision avoidance activities. This collision avoidance capability aims to significantly reduce the catastrophic collision risk of space objects. In particular, for the case of the future European SST, the objective is translated into a risk reduction of one order of magnitude whilst keeping a low number of false alarm events. In order to translate this aim into system requirements, an evaluation of the current catastrophic collision risk for different orbital regimes is addressed. The reduction of such risk depends on the amount of catalogued objects (coverage) and the knowledge of the associated orbits in the catalogue (accuracy). This paper presents an analysis of the impact of those two aspects in the capability to reduce the catastrophic collision risk at some orbital regimes. A reliable collision avoidance support depends on the accuracy of the predicted miss-events. The assessment of possible conjunctions is normally done by computing the estimated miss-distances between objects (which is compared with a defined distance threshold) or by computing the associated collision risk (which is compared with the corresponding accepted collision probability level). This second method is normally recommended because it takes into account the reliability of the orbits and allows reducing false alarm events. The collision risk depends on the estimated miss-distance, the object sizes and the accuracy of the two orbits at the time of event. This accuracy depends on the error of the orbits at the orbit determination epoch and the error derived from the propagation from that epoch up to the time of event. The modified DRAMA ARES (Domínguez-González et al., 2012, 2013a,b; Gelhaus et al., 2014) provides information on the expected number of encounters for a given mission and year. 
It also provides information on the capacity to reduce the risk of collision by means of avoidance manoeuvres as a function of the accepted collision probability level and the cataloguing performance of the surveillance system (determined by the limiting coverage size-altitude function and the orbital data accuracy). The assessment of avoidance strategies takes into account statistical models of the space object environment, as provided by ESA's MASTER-2009 model, and a mathematical framework for the collision risk estimation as used in satellite operations. In this paper, results are provided for several orbit types, covering different orbital regimes. The analysis is done for different cataloguing capacity levels (accuracy and coverage), concluding that objects down to 5 cm must be catalogued at LEO to diminish the catastrophic collision risk by one order of magnitude. For the MEO and GEO regimes, coverage down to 40 and 100 cm, respectively, allows a similar reduction of risk.
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
The NASA Generic Transport Model (GTM) nonlinear simulation was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of identified parameters in mathematical models describing the flight dynamics and determined from flight data. Measurements from a typical flight condition and system identification maneuver were systematically and progressively deteriorated by introducing noise, resolution errors, and bias errors. The data were then used to estimate nondimensional stability and control derivatives within a Monte Carlo simulation. Based on these results, recommendations are provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using additional flight conditions and parameter estimation methods, as well as a nonlinear flight simulation of the General Dynamics F-16 aircraft, were compared with these recommendations.
Information filtering via biased heat conduction
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Zhou, Tao; Guo, Qiang
2011-09-01
The process of heat conduction has recently found application in personalized recommendation [Zhou et al., Proc. Natl. Acad. Sci. USA 107, 4511 (2010)], which is of high diversity but low accuracy. By decreasing the temperatures of small-degree objects, we present an improved algorithm, called biased heat conduction, which could simultaneously enhance the accuracy and diversity. Extensive experimental analyses demonstrate that the accuracy on the MovieLens, Netflix, and Delicious datasets could be improved by 43.5%, 55.4%, and 19.2%, respectively, compared with the standard heat conduction algorithm, while the diversity is increased or approximately unchanged. Further statistical analyses suggest that the present algorithm could simultaneously identify users' mainstream and special tastes, resulting in better performance than the standard heat conduction algorithm. This work provides a credible route to highly efficient information filtering.
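A minimal sketch of heat-conduction scoring on a user-object bipartite network, with a tunable exponent that damps the advantage of small-degree objects. The exact biasing form used in the paper may differ; the `ko ** (theta - 1)` weighting here is one plausible parameterization (theta = 0 recovers standard heat conduction), and the toy matrix is hypothetical.

```python
import numpy as np

def biased_heat_conduction(A, target, theta=0.2):
    """Score uncollected objects for the `target` user.

    A: binary user-object adjacency matrix (users x objects).
    theta: bias exponent; theta = 0 is plain heat conduction, theta > 0
    softens the degree-averaging penalty, which relatively lowers the
    temperatures of small-degree objects.
    """
    ku = A.sum(axis=1)                 # user degrees
    ko = A.sum(axis=0)                 # object degrees
    f = A[target].astype(float)        # initial temperatures: collected = 1
    # Step 1: user temperature = mean temperature of the user's objects
    h = (A @ f) / ku
    # Step 2: object temperature = biased mean over neighbouring users
    scores = (A.T @ h) * ko ** (theta - 1.0)
    scores[A[target] == 1] = -np.inf   # never re-recommend collected items
    return scores

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
s = biased_heat_conduction(A, target=0)
top = int(np.argmax(s))                # highest-scoring new object for user 0
```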
Descriptions and identifications of strangers by youth and adult eyewitnesses.
Pozzulo, Joanna D; Warren, Kelly L
2003-04-01
Two studies varying target gender and mode of target exposure were conducted to compare the quantity, nature, and accuracy of free recall person descriptions provided by youths and adults. In addition, the relation among age, identification accuracy, and number of descriptors reported was considered. Youths (10-14 years) reported fewer descriptors than adults. Exterior facial descriptors (e.g., hair items) were predominant and accurately reported by youths and adults. Accuracy was consistently problematic for youths when reporting body descriptors (e.g., height, weight) and interior facial features. Youths reported a similar number of descriptors when making accurate versus inaccurate identification decisions. This pattern also was consistent for adults. With target-absent lineups, the difference in the number of descriptors reported between adults and youths was greater when making a false positive versus correct rejection.
Kolling, William M; McPherson, Timothy B
2013-04-12
OBJECTIVE. To assess the effectiveness of using a vapor pressure osmometer to measure the accuracy of pharmacy students' compounding skills. DESIGN. Students calculated the theoretical osmotic pressure (mmol/kg) of a solution as a pre-laboratory exercise, compared their calculations with actual values, and then attempted to determine the cause of any errors found. ASSESSMENT. After the introduction of the vapor pressure osmometer, the first-time pass rate for solution compounding has varied from 85% to 100%. Approximately 85% of students surveyed reported that the instrument was valuable as a teaching tool because it objectively assessed their work and provided immediate formative assessment. CONCLUSIONS. This simple technique of measuring compounding accuracy using a vapor pressure osmometer allowed students to see the importance of quality control and assessment in practice for both pharmacists and technicians.
Assessment of the Accuracy of Pharmacy Students’ Compounded Solutions Using Vapor Pressure Osmometry
McPherson, Timothy B.
2013-01-01
Objective. To assess the effectiveness of using a vapor pressure osmometer to measure the accuracy of pharmacy students’ compounding skills. Design. Students calculated the theoretical osmotic pressure (mmol/kg) of a solution as a pre-laboratory exercise, compared their calculations with actual values, and then attempted to determine the cause of any errors found. Assessment. After the introduction of the vapor pressure osmometer, the first-time pass rate for solution compounding has varied from 85% to 100%. Approximately 85% of students surveyed reported that the instrument was valuable as a teaching tool because it objectively assessed their work and provided immediate formative assessment. Conclusions. This simple technique of measuring compounding accuracy using a vapor pressure osmometer allowed students to see the importance of quality control and assessment in practice for both pharmacists and technicians. PMID:23610476
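The pre-laboratory calculation described above amounts to a simple molarity-and-dissociation computation. A minimal sketch, assuming ideal dissociation of NaCl (real osmometer readings run somewhat lower; the function name and values are illustrative):

```python
# Theoretical osmolality check for a compounded solution.

MW_NACL = 58.44        # g/mol
I_NACL = 2             # ions per formula unit (Na+ and Cl-)

def theoretical_osmolality(grams_per_liter, mw, n_particles):
    """Approximate osmolality in mOsm/kg, treating 1 L of dilute
    aqueous solution as roughly 1 kg of water."""
    molarity = grams_per_liter / mw          # mol/L
    return molarity * n_particles * 1000.0   # mOsm/kg

# 0.9% w/v ("normal") saline: 9 g NaCl per liter -> ~308 mOsm/kg
osm = theoretical_osmolality(9.0, MW_NACL, I_NACL)
# A measured value far from the theoretical one signals a compounding
# or calculation error worth investigating.
```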
Madison, Matthew J; Bradshaw, Laine P
2015-06-01
Diagnostic classification models are psychometric models that aim to classify examinees according to their mastery or non-mastery of specified latent characteristics. These models are well-suited for providing diagnostic feedback on educational assessments because of their practical efficiency and increased reliability when compared with other multidimensional measurement models. A priori specifications of which latent characteristics or attributes are measured by each item are a core element of the diagnostic assessment design. This item-attribute alignment, expressed in a Q-matrix, precedes and supports any inference resulting from the application of the diagnostic classification model. This study investigates the effects of Q-matrix design on classification accuracy for the log-linear cognitive diagnosis model. Results indicate that classification accuracy, reliability, and convergence rates improve when the Q-matrix contains isolated information from each measured attribute.
Dynamic volume vs respiratory correlated 4DCT for motion assessment in radiation therapy simulation.
Coolens, Catherine; Bracken, John; Driscoll, Brandon; Hope, Andrew; Jaffray, David
2012-05-01
Conventional (i.e., respiratory-correlated) 4DCT exploits the repetitive nature of breathing to provide an estimate of motion; however, it has limitations due to binning artifacts and irregularity in actual patient breathing patterns. The aim of this work was to evaluate the accuracy and image quality of a dynamic volume CT approach (4D(vol)) using a 320-slice CT scanner to minimize these limitations, wherein entire image volumes are acquired dynamically without couch movement. This was compared to the conventional respiratory-correlated 4DCT approach (RCCT). 4D(vol) CT was performed and characterized on an in-house, programmable respiratory motion phantom containing multiple geometric and morphological "tumor" objects over a range of regular and irregular patient breathing traces obtained from 3D fluoroscopy, and compared to RCCT. The accuracy of volumetric capture and breathing displacement was evaluated and compared with the ground truth values and with the results reported using RCCT. A motion model was investigated to validate the number of motion samples needed to obtain accurate motion probability density functions (PDF). The impact of 4D image quality on this accuracy was then investigated. Dose measurements using volumetric and conventional scan techniques were also performed and compared. Both conventional and dynamic volume 4DCT methods were capable of estimating the programmed displacement of sinusoidal motion, but patient breathing is known to not be regular, and obvious differences were seen for realistic, irregular motion. The mean RCCT amplitude error averaged 4 mm (max. 7.8 mm), whereas the 4D(vol) CT error stayed below 0.5 mm. Similarly, the average absolute volume error was lower with 4D(vol) CT.
Under irregular breathing, the 4D(vol) CT method provides a close description of the motion PDF (cross-correlation 0.99) and is able to track each object, whereas the RCCT method results in a significantly different PDF from the ground truth, especially for smaller tumors (cross-correlation ranging between 0.04 and 0.69). For the protocols studied, the dose measurements were higher in the 4D(vol) CT method (40%), but it was shown that significant mAs reductions can be achieved by a factor of 4-5 while maintaining image quality and accuracy. 4D(vol) CT using a scanner with a large cone-angle is a promising alternative for improving the accuracy with which respiration-induced motion can be characterized, particularly for patients with irregular breathing motion. This approach also generates 4DCT image data with a reduced total scan time compared to a RCCT scan, without the need for image binning or external respiration signals within the 16 cm scan length. Scan dose can be made comparable to RCCT by optimization of the scan parameters. In addition, it provides the possibility of measuring breathing motion for more than one breathing cycle to assess stability and obtain a more accurate motion PDF, which is currently not feasible with the conventional RCCT approach.
ERIC Educational Resources Information Center
Tsuji, Keita; To, Haruna; Hara, Atsuyuki
2011-01-01
We asked the same 60 questions using DRS (digital reference services) in Japanese public libraries, face-to-face reference services and Q & A (question and answer) sites. It was found that: (1) The correct answer ratio of DRS is higher than that of Q & A sites; (2) DRS takes longer to provide answers as compared to Q & A sites; and (3)…
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.
Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
Xue, Xiaonan; Kim, Mimi Y; Castle, Philip E; Strickler, Howard D
2014-03-01
Studies to evaluate clinical screening tests often face the problem that the "gold standard" diagnostic approach is costly and/or invasive. It is therefore common to verify only a subset of negative screening tests using the gold standard method. However, undersampling the screen negatives can lead to substantial overestimation of the sensitivity and underestimation of the specificity of the diagnostic test. Our objective was to develop a simple and accurate statistical method to address this "verification bias." We developed a weighted generalized estimating equation approach to estimate, in a single model, the accuracy (eg, sensitivity/specificity) of multiple assays and simultaneously compare results between assays while addressing verification bias. This approach can be implemented using standard statistical software. Simulations were conducted to assess the proposed method. An example is provided using a cervical cancer screening trial that compared the accuracy of human papillomavirus and Pap tests, with histologic data as the gold standard. The proposed approach performed well in estimating and comparing the accuracy of multiple assays in the presence of verification bias. The proposed approach is an easy to apply and accurate method for addressing verification bias in studies of multiple screening methods. Copyright © 2014 Elsevier Inc. All rights reserved.
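The core of the verification-bias correction can be illustrated with a single test and inverse-probability weights; each verified screen-negative stands in for all the unverified ones it represents. This is a simplified sketch with hypothetical expected cell counts, not the paper's weighted GEE model (which generalizes the idea to multiple assays):

```python
# True performance assumed: sensitivity 0.90, specificity 0.80,
# prevalence 0.10, N = 10,000 (illustrative numbers only).

# Full cohort (unobserved truth)
tp, fn = 900, 100        # diseased: screen-positive / screen-negative
fp, tn = 1800, 7200      # healthy:  screen-positive / screen-negative

# Verification: all screen-positives, but only 10% of screen-negatives
verify_frac_neg = 0.10
fn_v = fn * verify_frac_neg   # verified false negatives
tn_v = tn * verify_frac_neg   # verified true negatives

# Naive estimates from verified subjects only (biased)
naive_sens = tp / (tp + fn_v)            # ~0.99, overestimates 0.90
naive_spec = tn_v / (fp + tn_v)          # ~0.29, underestimates 0.80

# Inverse-probability weighting: each verified screen-negative
# represents 1 / verify_frac_neg screen-negatives
w = 1.0 / verify_frac_neg
ipw_sens = tp / (tp + fn_v * w)          # recovers 0.90
ipw_spec = (tn_v * w) / (fp + tn_v * w)  # recovers 0.80
```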
Diagnostic accuracy of a uniform research case definition for TBM in children: a prospective study.
Solomons, R S; Visser, D H; Marais, B J; Schoeman, J F; van Furth, A M
2016-07-01
Bacteriological confirmation of tuberculous meningitis (TBM) is problematic, and rarely guides initial clinical management. A uniform TBM case definition has been proposed for research purposes. We prospectively enrolled patients aged 3 months to 13 years with meningitis confirmed using cerebrospinal fluid analysis at Tygerberg Hospital, Cape Town, South Africa. Criteria that differentiated TBM from other causes were explored and the accuracy of a probable TBM score assessed by comparing bacteriologically confirmed cases to 'non-TBM' controls. Of 139 meningitis patients, 79 were diagnosed with TBM (35 bacteriologically confirmed), 10 with bacterial meningitis and 50 with viral meningitis. Among those with bacteriologically confirmed TBM, 15 were Mycobacterium tuberculosis culture-positive and 20 were culture-negative but positive on GenoType(®) MTBDRplus or Xpert(®) MTB/RIF; 18 were positive on only a single commercial nucleic acid amplification test. A probable TBM score provided a sensitivity of 74% (95%CI 57-88) and a specificity of 97% (95%CI 86-99) compared to bacteriologically confirmed TBM. A probable TBM score demonstrated excellent specificity compared to bacteriological confirmation. However, 26% of children with TBM would be missed due to the limited accuracy of the case definition. Further prospective testing of an algorithm-based approach to TBM is advisable before recommendation for general clinical practice.
Dall'Ara, E; Barber, D; Viceconti, M
2014-09-22
The accurate measurement of local strain is necessary to study bone mechanics and to validate micro computed tomography (µCT) based finite element (FE) models at the tissue scale. Digital volume correlation (DVC) has been used to provide a volumetric estimation of local strain in trabecular bone samples with reasonable accuracy. However, nothing has been reported so far for µCT-based analysis of cortical bone. The goal of this study was to evaluate the accuracy and precision of a deformable registration method for prediction of local zero-strains in bovine cortical and trabecular bone samples. The accuracy and precision were analyzed by comparing virtually displaced scans, repeated scans without any repositioning of the sample in the scanner, and repeated scans with repositioning of the samples. The analysis showed that both precision and accuracy errors decrease with increasing size of the region analyzed, following power laws. Of the sources investigated, the intrinsic noise of the images was found to be the main source of error. The results, once extrapolated to the larger regions of interest typically used in the literature, were in most cases better than those previously reported. For a nodal spacing equal to 50 voxels (498 µm), the accuracy and precision ranges were 425-692 µε and 202-394 µε, respectively. In conclusion, it was shown that the proposed method can be used to study the local deformation of cortical and trabecular bone loaded beyond yield, if a sufficiently high nodal spacing is used. Copyright © 2014 Elsevier Ltd. All rights reserved.
Därr, Roland; Kuhn, Matthias; Bode, Christoph; Bornstein, Stefan R; Pacak, Karel; Lenders, Jacques W M; Eisenhofer, Graeme
2017-06-01
To determine the accuracy of biochemical tests for the diagnosis of pheochromocytoma and paraganglioma. A search of the PubMed database was conducted for English-language articles published between October 1958 and December 2016 on the biochemical diagnosis of pheochromocytoma and paraganglioma using immunoassay methods or high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection for measurement of fractionated metanephrines in 24-h urine collections or plasma-free metanephrines obtained under seated or supine blood sampling conditions. Application of the Standards for Reporting of Diagnostic Studies Accuracy Group criteria yielded 23 suitable articles. Summary receiver operating characteristic analysis revealed sensitivities/specificities of 94/93% and 91/93% for measurement of plasma-free metanephrines and urinary fractionated metanephrines using high-performance liquid chromatography or immunoassay methods, respectively. Partial areas under the curve were 0.947 vs. 0.911. Irrespective of the analytical method, sensitivity was significantly higher for supine compared with seated sampling, 95 vs. 89% (p < 0.02), while specificity was significantly higher for supine sampling compared with 24-h urine, 95 vs. 90% (p < 0.03). Partial areas under the curve were 0.942, 0.913, and 0.932 for supine sampling, seated sampling, and urine. Test accuracy increased linearly from 90 to 93% for 24-h urine at prevalence rates of 0.0-1.0, decreased linearly from 94 to 89% for seated sampling and was constant at 95% for supine conditions. Current tests for the biochemical diagnosis of pheochromocytoma and paraganglioma show excellent diagnostic accuracy. Supine sampling conditions and measurement of plasma-free metanephrines using high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection provides the highest accuracy at all prevalence rates.
Parameter Estimation for Gravitational-wave Bursts with the BayesWave Pipeline
NASA Technical Reports Server (NTRS)
Becsy, Bence; Raffai, Peter; Cornish, Neil; Essick, Reed; Kanner, Jonah; Katsavounidis, Erik; Littenberg, Tyson B.; Millhouse, Margaret; Vitale, Salvatore
2017-01-01
We provide a comprehensive multi-aspect study of the performance of a pipeline used by the LIGO-Virgo Collaboration for estimating parameters of gravitational-wave bursts. We add simulated signals with four different morphologies (sine-Gaussians (SGs), Gaussians, white-noise bursts, and binary black hole signals) to simulated noise samples representing noise of the two Advanced LIGO detectors during their first observing run. We recover them with the BayesWave (BW) pipeline to study its accuracy in sky localization, waveform reconstruction, and estimation of model-independent waveform parameters. BW localizes sources with a level of accuracy comparable for all four morphologies, with the median separation of actual and estimated sky locations ranging from 25.1° to 30.3°. This is a reasonable accuracy in the two-detector case, and is comparable to accuracies of other localization methods studied previously. As BW reconstructs generic transient signals with SG wavelets, it is unsurprising that BW performs best in reconstructing SG and Gaussian waveforms. The BW accuracy in waveform reconstruction increases steeply with the network signal-to-noise ratio (S/N_net), reaching an 85% and 95% match between the reconstructed and actual waveforms above S/N_net ≈ 20 and S/N_net ≈ 50, respectively, for all morphologies. The BW accuracy in estimating central moments of waveforms is only limited by statistical errors in the frequency domain, and is also affected by systematic errors in the time domain as BW cannot reconstruct low-amplitude parts of signals that are overwhelmed by noise. The figures of merit we introduce can be used in future characterizations of parameter estimation pipelines.
Mehrban, Hossein; Lee, Deuk Hwan; Moradi, Mohammad Hossein; IlCho, Chung; Naserkheil, Masoumeh; Ibáñez-Escriche, Noelia
2017-01-04
Hanwoo beef is known for its marbled fat, tenderness, juiciness and characteristic flavor, as well as for its low cholesterol and high omega 3 fatty acid contents. As yet, there has been no comprehensive investigation to estimate genomic selection accuracy for carcass traits in Hanwoo cattle using dense markers. This study aimed at evaluating the accuracy of alternative statistical methods that differed in assumptions about the underlying genetic model for various carcass traits: backfat thickness (BT), carcass weight (CW), eye muscle area (EMA), and marbling score (MS). Accuracies of direct genomic breeding values (DGV) for carcass traits were estimated by applying fivefold cross-validation to a dataset including 1183 animals and approximately 34,000 single nucleotide polymorphisms (SNPs). Accuracies of BayesC, Bayesian LASSO (BayesL) and genomic best linear unbiased prediction (GBLUP) methods were similar for BT, EMA and MS. However, for CW, DGV accuracy was 7% higher with BayesC than with BayesL and GBLUP. The increased accuracy of BayesC, compared to GBLUP and BayesL, was maintained for CW, regardless of the training sample size, but not for BT, EMA, and MS. Genome-wide association studies detected consistent large effects for SNPs on chromosomes 6 and 14 for CW. The predictive performance of the models depended on the trait analyzed. For CW, the results showed a clear superiority of BayesC compared to GBLUP and BayesL. These findings indicate the importance of using a proper variable selection method for genomic selection of traits and also suggest that the genetic architecture that underlies CW differs from that of the other carcass traits analyzed. Thus, our study provides significant new insights into the carcass traits of Hanwoo cattle.
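The fivefold cross-validation scheme for estimating DGV accuracy can be sketched as follows. This is a hedged, simplified illustration on simulated genotypes using closed-form ridge regression (rrBLUP, a model statistically equivalent to GBLUP) rather than the study's actual BayesC/BayesL/GBLUP implementations; all sizes and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, h2 = 300, 500, 0.5
X = rng.binomial(2, 0.5, (n, p)).astype(float)     # SNP genotypes coded 0/1/2
beta = rng.normal(0.0, 1.0, p)                     # simulated SNP effects
g = X @ beta
g = (g - g.mean()) / g.std()                       # standardized genetic values
y = g + rng.normal(0.0, np.sqrt((1 - h2) / h2), n) # phenotypes with h2 = 0.5

def ridge_predict(Xtr, ytr, Xte, lam):
    # Closed-form ridge: beta_hat = (X'X + lam I)^-1 X'y
    k = Xtr.shape[1]
    bh = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(k), Xtr.T @ ytr)
    return Xte @ bh

folds = np.array_split(rng.permutation(n), 5)      # fivefold partition
accs = []
for te in folds:
    tr = np.setdiff1d(np.arange(n), te)
    pred = ridge_predict(X[tr], y[tr] - y[tr].mean(), X[te], lam=float(p))
    accs.append(np.corrcoef(pred, y[te])[0, 1])    # predictive correlation
acc = float(np.mean(accs))                         # DGV accuracy estimate
```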
Evaluation of a patient specific femoral alignment guide for hip resurfacing.
Olsen, Michael; Naudie, Douglas D; Edwards, Max R; Sellan, Michael E; McCalden, Richard W; Schemitsch, Emil H
2014-03-01
A novel alternative to conventional instrumentation for femoral component insertion in hip resurfacing is a patient specific, computed tomography based femoral alignment guide. A benchside study using cadaveric femora was performed comparing a custom alignment guide to conventional instrumentation and computer navigation. A clinical series of twenty-five hip resurfacings utilizing a custom alignment guide was conducted by three surgeons experienced in hip resurfacing. Using cadaveric femora, the custom guide was comparable to conventional instrumentation with computer navigation proving superior to both. Clinical femoral component alignment accuracy was 3.7° and measured within ± 5° of plan in 20 of 24 cases. Patient specific femoral alignment guides provide a satisfactory level of accuracy and may be a better alternative to conventional instrumentation for initial femoral guidewire placement in hip resurfacing. Crown Copyright © 2014. All rights reserved.
Comparing species distribution models constructed with different subsets of environmental predictors
Bucklin, David N.; Basille, Mathieu; Benscoter, Allison M.; Brandt, Laura A.; Mazzotti, Frank J.; Romañach, Stephanie S.; Speroterra, Carolina; Watling, James I.
2014-01-01
Our results indicate that additional predictors have relatively minor effects on the accuracy of climate-based species distribution models and minor to moderate effects on spatial predictions. We suggest that implementing species distribution models with only climate predictors may provide an effective and efficient approach for initial assessments of environmental suitability.
Modelling of nanoscale quantum tunnelling structures using algebraic topology method
NASA Astrophysics Data System (ADS)
Sankaran, Krishnaswamy; Sairam, B.
2018-05-01
We have modelled nanoscale quantum tunnelling structures using Algebraic Topology Method (ATM). The accuracy of ATM is compared to the analytical solution derived based on the wave nature of tunnelling electrons. ATM provides a versatile, fast, and simple model to simulate complex structures. We are currently expanding the method for modelling electrodynamic systems.
ERIC Educational Resources Information Center
Madison, Matthew J.; Bradshaw, Laine P.
2015-01-01
Diagnostic classification models are psychometric models that aim to classify examinees according to their mastery or non-mastery of specified latent characteristics. These models are well-suited for providing diagnostic feedback on educational assessments because of their practical efficiency and increased reliability when compared with other…
Analysis of Bright Harvest Remote Analysis for Residential Solar Installations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nangle, John; Simon, Joseph
Bright Harvest provides remote shading analysis and design products for residential PV system installers. The National Renewable Energy Laboratory (NREL), through the NREL Commercialization Assistance Program, completed comparative assessments between on-site measurements and remotely calculated values to validate the accuracy of Bright Harvest's remote shading analysis and power generation estimates.
NASA Astrophysics Data System (ADS)
Lin, Hai-Nan; Li, Jin; Li, Xin
2018-05-01
The detection of gravitational waves (GWs) provides a powerful tool to constrain the cosmological parameters. In this paper, we investigate the possibility of using GWs as standard sirens in testing the anisotropy of the universe. We consider the GW signals produced by the coalescence of binary black hole systems and simulate hundreds of GW events from the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo. It is found that the anisotropy of the universe can be tightly constrained if the redshift of the GW source is precisely known. The anisotropic amplitude can be constrained with an accuracy comparable to the Union2.1 compilation of type-Ia supernovae if ≳ 400 GW events are observed. As for the preferred direction, ≳ 800 GW events are needed in order to achieve the accuracy of Union2.1. With 800 GW events, the probability of pseudo anisotropic signals with an amplitude comparable to Union2.1 is negligible. These results show that GWs can provide a complementary tool to supernovae in testing the anisotropy of the universe.
3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy
NASA Astrophysics Data System (ADS)
Uneri, A.; Otake, Y.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Siewerdsen, J. H.
2014-01-01
An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ~0°-180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ~10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.
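The TRE metric used in the study above is the Euclidean distance between registered and true target positions; a minimal sketch of the computation, using hypothetical coordinates rather than the study's phantom data:

```python
import numpy as np

# Hypothetical 3D target points in the CT frame and their registered positions (mm)
true_pts = np.array([[10., 20., 30.], [40., 5., 15.], [25., 35., 8.]])
reg_pts  = true_pts + np.array([[0.5, -0.3, 0.8], [-0.4, 0.6, -0.2], [0.1, 0.9, -0.7]])

# Target registration error: per-target Euclidean distance, summarized by its mean
tre = np.linalg.norm(reg_pts - true_pts, axis=1)
print(f"mean TRE: {tre.mean():.2f} mm")
```

With these illustrative offsets, every target falls under the 2 mm threshold reported for Δθ as small as 10°-20°.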
Evans-Hoeker, Emily A; Calhoun, Kathryn C; Mersereau, Jennifer E
2014-03-01
To assess healthcare providers' ability to estimate women's body mass index (BMI) based on physical appearance and determine the prevalence of, and barriers to, weight-related counseling. A web-based survey was distributed to healthcare providers ("participants") at a university-based hospital and contained photographs of anonymous women ("photographed women (PW)") as well as questions regarding participant demographics. Participants were asked to estimate BMI category based on physical appearance, state whether they would provide weight-loss counseling for each PW and identify barriers to counseling. One hundred forty-two participants completed the survey. BMI estimations were poor among all participants, with an overall accuracy of only 41% and a large proportion of underestimations. Standardization of PW clothing did not improve accuracy; 41% for own clothing versus 40% for scrubs, P = 0.2. BMI assessments were more accurate for Caucasian versus African American PW (45% versus 36%, P < 0.001) and PW with normal weight (84%) and obesity III (38%) compared to PW with mid-range BMI (P < 0.001). Although the frequency of weight loss counseling was positively associated with PW BMI, participants only intended to counsel 69% of overweight and obese PW. The most commonly cited reason for lack of counseling was time constraints (54%). Healthcare providers are inaccurate at appearance-based BMI categorization and thus, BMI should be routinely calculated in order to improve identification of those in need of counseling. When appropriately identified, time constraints may prevent practitioners from providing appropriate weight-loss counseling-further complicating the already difficult task of fighting obesity. Copyright © 2013 The Obesity Society.
Absolute and relative height-pixel accuracy of SRTM-GL1 over the South American Andean Plateau
NASA Astrophysics Data System (ADS)
Satge, Frédéric; Denezine, Matheus; Pillco, Ramiro; Timouk, Franck; Pinel, Sébastien; Molina, Jorge; Garnier, Jérémie; Seyler, Frédérique; Bonnet, Marie-Paule
2016-11-01
Previously available only over the Continental United States (CONUS), the 1 arc-second mesh size (spatial resolution) SRTM-GL1 (Shuttle Radar Topographic Mission - Global 1) product has been freely available worldwide since November 2014. With a relatively small mesh size, this digital elevation model (DEM) provides valuable topographic information over remote regions. SRTM-GL1 is assessed for the first time over the South American Andean Plateau in terms of both the absolute and relative vertical point-to-point accuracies at the regional scale and for different slope classes. For comparison, SRTM-v4 and the Global DEM version 2 (GDEM-v2) generated by ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) are also considered. A total of approximately 160,000 ICESat/GLAS (Ice, Cloud and Land Elevation Satellite/Geoscience Laser Altimeter System) data are used as ground reference measurements. Relative error is often neglected in DEM assessments due to the lack of reference data. A new methodology is proposed to assess the relative accuracies of SRTM-GL1, SRTM-v4 and GDEM-v2 based on a comparison with ICESat/GLAS measurements. Slope values derived from DEMs and ICESat/GLAS measurements from approximately 265,000 ICESat/GLAS point pairs are compared using quantitative and categorical statistical analysis introducing a new index: the False Slope Ratio (FSR). Additionally, a reference hydrological network is derived from Google Earth and compared with river networks derived from the DEMs to assess each DEM's potential for hydrological applications over the region. In terms of the absolute vertical accuracy on a global scale, GDEM-v2 is the most accurate DEM, while SRTM-GL1 is more accurate than SRTM-v4. However, a simple bias correction makes SRTM-GL1 the most accurate DEM over the region in terms of vertical accuracy. The relative accuracy results generally did not corroborate the absolute vertical accuracy.
GDEM-v2 presents the lowest statistical results based on the relative accuracy, while SRTM-GL1 is the most accurate. Vertical accuracy and relative accuracy are two independent components that must be jointly considered when assessing a DEM's potential. DEM accuracies increased with slope. In terms of hydrological potential, SRTM products are more accurate than GDEM-v2. However, the DEMs exhibit river extraction limitations over the region due to the low regional slope gradient.
Distinguishing Fast and Slow Processes in Accuracy - Response Time Data
Coomans, Frederik; Hofman, Abe; Brinkhuis, Matthieu; van der Maas, Han L. J.; Maris, Gunter
2016-01-01
We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two ‘one-process’ models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a ‘two-process’ model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses. PMID:27167518
Al Kadri, Hanan M F; Al Anazi, Bedayah K; Tamim, Hani M
2011-06-01
One of the major problems in the international literature is how to measure postpartum blood loss with accuracy. We aimed in this research to assess the accuracy of visual estimation of postpartum blood loss (by each of two main health-care providers) compared with the gravimetric calculation method. We carried out a prospective cohort study at King Abdulaziz Medical City, Riyadh, Saudi Arabia between 1 November 2009 and 31 December 2009. All women who were admitted to the labor and delivery suite and delivered vaginally were included in the study. Postpartum blood loss was visually estimated by the attending physician and obstetrics nurse and then objectively calculated by a gravimetric machine. Comparison between the three methods of blood loss calculation was carried out. A total of 150 patients were included in this study. There was a significant difference between the gravimetrically calculated blood loss and both health-care providers' estimations, with a tendency to underestimate the loss by about 30%. The background and seniority of the assessing health-care provider did not affect the accuracy of the estimation. The corrected incidence of postpartum hemorrhage in Saudi Arabia was found to be 1.47%. Health-care providers tend to underestimate the volume of postpartum blood loss by about 30%. Training and continuous auditing of the diagnosis of postpartum hemorrhage is needed to avoid missing cases and thus preventing associated morbidity and mortality.
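The roughly 30% underestimation reported above corresponds to a simple relative-error calculation; the blood-loss values below are hypothetical, chosen only to illustrate the computation:

```python
import numpy as np

# Hypothetical paired measurements: gravimetric blood loss (mL) vs a provider's visual estimate
gravimetric = np.array([300., 450., 600., 250., 800.])
visual      = np.array([210., 320., 420., 175., 560.])   # roughly 30% low across the board

# Relative underestimation per delivery, as a percentage of the gravimetric reference
underestimation = (gravimetric - visual) / gravimetric * 100
print(f"mean underestimation: {underestimation.mean():.0f}%")
```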
Zhang, Shengwei; Arfanakis, Konstantinos
2012-01-01
Purpose To investigate the effect of standardized and study-specific human brain diffusion tensor templates on the accuracy of spatial normalization, without ignoring the important roles of data quality and registration algorithm effectiveness. Materials and Methods Two groups of diffusion tensor imaging (DTI) datasets, with and without visible artifacts, were normalized to two standardized diffusion tensor templates (IIT2, ICBM81) as well as study-specific templates, using three registration approaches. The accuracy of inter-subject spatial normalization was compared across templates, using the most effective registration technique for each template and group of data. Results It was demonstrated that, for DTI data with visible artifacts, the study-specific template resulted in significantly higher spatial normalization accuracy than standardized templates. However, for data without visible artifacts, the study-specific template and the standardized template of higher quality (IIT2) resulted in similar normalization accuracy. Conclusion For DTI data with visible artifacts, a carefully constructed study-specific template may achieve higher normalization accuracy than that of standardized templates. However, as DTI data quality improves, a high-quality standardized template may be more advantageous than a study-specific template, since in addition to high normalization accuracy, it provides a standard reference across studies, as well as automated localization/segmentation when accompanied by anatomical labels. PMID:23034880
Leite, Rodrigo Oliveira; de Aquino, André Carlos Busanelli
2016-01-01
Previous researches support that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Moreover, literature shows that different types of graphical information can help or harm the accuracy on decision making of accountants and financial analysts. We conducted a 4×2 mixed-design experiment to examine the effects of numerical information disclosure on financial analysts’ accuracy, and investigated the role of overconfidence in decision making. Results show that compared to text, column graph enhanced accuracy on decision making, followed by line graphs. No difference was found between table and textual disclosure. Overconfidence harmed accuracy, and both genders behaved overconfidently. Additionally, the type of disclosure (text, table, line graph and column graph) did not affect the overconfidence of individuals, providing evidence that overconfidence is a personal trait. This study makes three contributions. First, it provides evidence from a larger sample size (295) of financial analysts instead of a smaller sample size of students that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Second, it uses the text as a baseline comparison to test how different ways of information disclosure (line and column graphs, and tables) can enhance understandability of information. Third, it brings an internal factor to this process: overconfidence, a personal trait that harms the decision-making process of individuals. At the end of this paper several research paths are highlighted to further study the effect of internal factors (personal traits) on financial analysts’ accuracy on decision making regarding numerical information presented in a graphical form. In addition, we offer suggestions concerning some practical implications for professional accountants, auditors, financial analysts and standard setters. PMID:27508519
Modelling of thick composites using a layerwise laminate theory
NASA Technical Reports Server (NTRS)
Robbins, D. H., Jr.; Reddy, J. N.
1993-01-01
The layerwise laminate theory of Reddy (1987) is used to develop a layerwise, two-dimensional, displacement-based, finite element model of laminated composite plates that assumes a piecewise continuous distribution of the transverse strains through the laminate thickness. The resulting layerwise finite element model is capable of computing interlaminar stresses and other localized effects with the same level of accuracy as a conventional 3D finite element model. Although the total number of degrees of freedom is comparable in both models, the layerwise model maintains a 2D-type data structure that provides several advantages over a conventional 3D finite element model, e.g. simplified input data, ease of mesh alteration, and faster element stiffness matrix formulation. Two sample problems are provided to illustrate the accuracy of the present model in computing interlaminar stresses for laminates in bending and extension.
NASA Astrophysics Data System (ADS)
Fu, Chao; Ren, Xingmin; Yang, Yongfeng; Xia, Yebao; Deng, Wangqun
2018-07-01
A non-intrusive interval precise integration method (IPIM) is proposed in this paper to analyze the transient unbalance response of uncertain rotor systems. The transfer matrix method (TMM) is used to derive the deterministic equations of motion of a hollow-shaft overhung rotor. The uncertain transient dynamic problem is solved by combining the Chebyshev approximation theory with the modified precise integration method (PIM). Transient response bounds are calculated by interval arithmetic of the expansion coefficients. Theoretical error analysis of the proposed method is provided briefly, and its accuracy is further validated by comparing with the scanning method in simulations. Numerical results show that the IPIM can keep good accuracy in vibration prediction of the start-up transient process. Furthermore, the proposed method can also provide theoretical guidance to other transient dynamic mechanical systems with uncertainties.
NASA Astrophysics Data System (ADS)
Du, Liang; Shi, Guangming; Guan, Weibin; Zhong, Yuansheng; Li, Jin
2014-12-01
Geometric error is the dominant error source in an industrial robot, playing a more significant role than other error factors. A compensation model for kinematic error is proposed in this article. Many methods can be used to test robot accuracy; the question is how to determine which method is better. In this article, two methods for robot accuracy testing are compared: a Laser Tracker System (LTS) and a Three Coordinate Measuring instrument (TCM) are used to test robot accuracy according to the standard. Based on the compensation results, the better method, which improves robot accuracy appreciably, is identified.
NASA Astrophysics Data System (ADS)
Chen, C. R.; Chen, C. F.; Nguyen, S. T.; Lau, K.; Lay, J. G.
2016-12-01
Sugarcane mostly grown in tropical and subtropical regions is one of the important commercial crops worldwide, providing significant employment, foreign exchange earnings, and other social and environmental benefits. The sugar industry is a vital component of Belize's economy as it provides employment to 15% of the country's population and 60% of the national agricultural exports. Sugarcane mapping is thus an important task due to official initiatives to provide reliable information on sugarcane-growing areas in respect to improved accuracy in monitoring sugarcane production and yield estimates. Policymakers need such monitoring information to formulate timely plans to ensure sustainably socioeconomic development. Sugarcane monitoring in Belize is traditionally carried out through time-consuming and costly field surveys. Remote sensing is an indispensable tool for crop monitoring on national, regional and global scales. The use of high and low resolution satellites for sugarcane monitoring in Belize is often restricted due to cost limitations and mixed pixel problems because sugarcane fields are small and fragmental. With the launch of Sentinel-2 satellite, it is possible to collectively map small patches of sugarcane fields over a large region as the data are free of charge and have high spectral, spatial, and temporal resolutions. This study aims to develop an object-based classification approach to comparatively map sugarcane fields in Belize from Sentinel-2 data using random forests (RF) and support vector machines (SVM). The data were processed through four main steps: (1) data pre-processing, (2) image segmentation, (3) sugarcane classification, and (4) accuracy assessment. The mapping results compared with the ground reference data indicated satisfactory results. The overall accuracies and Kappa coefficients were generally higher than 80% and 0.7, in both cases. The RF produced slightly more accurate mapping results than SVM. 
This study demonstrates the realization of the potential application of Sentinel-2 data for sugarcane mapping in Belize with the aid of RF and SVM methods. The methods are thus proposed for monitoring purposes in the country.
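A minimal sketch of the RF-versus-SVM comparison with overall accuracy and the Kappa coefficient, using synthetic features in place of the per-object Sentinel-2 spectral statistics (the classifier settings are illustrative assumptions, not the study's configuration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Synthetic "object features" standing in for per-segment spectral statistics
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           n_classes=2, random_state=1)   # sugarcane vs non-sugarcane
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)

# Fit both classifiers and report overall accuracy (OA) and Cohen's kappa
for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=1)),
                  ("SVM", SVC(kernel="rbf", C=10.0))]:
    pred = clf.fit(Xtr, ytr).predict(Xte)
    print(name, f"OA={accuracy_score(yte, pred):.3f}",
          f"kappa={cohen_kappa_score(yte, pred):.3f}")
```

In the object-based workflow the rows of `X` would come from image segmentation of the Sentinel-2 scene, with the reference labels drawn from the ground survey data.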
A discontinuous Galerkin method for poroelastic wave propagation: The two-dimensional case
NASA Astrophysics Data System (ADS)
Dudley Ward, N. F.; Lähivaara, T.; Eveson, S.
2017-12-01
In this paper, we consider a high-order discontinuous Galerkin (DG) method for modelling wave propagation in coupled poroelastic-elastic media. The upwind numerical flux is derived as an exact solution for the Riemann problem including the poroelastic-elastic interface. Attenuation mechanisms in both Biot's low- and high-frequency regimes are considered. The current implementation supports non-uniform basis orders which can be used to control the numerical accuracy element by element. In the numerical examples, we study the convergence properties of the proposed DG scheme and provide experiments where the numerical accuracy of the scheme under consideration is compared to analytic and other numerical solutions.
A method of solid-solid phase equilibrium calculation by molecular dynamics
NASA Astrophysics Data System (ADS)
Karavaev, A. V.; Dremov, V. V.
2016-12-01
A method for evaluation of solid-solid phase equilibrium curves in molecular dynamics simulation for a given model of interatomic interaction is proposed. The method allows calculation of the entropies of crystal phases and provides an accuracy comparable with that of the thermodynamic integration method of Frenkel and Ladd, while being much simpler to implement and less computationally intensive. The accuracy of the proposed method was demonstrated in MD calculations of entropies for an EAM potential for iron and for a MEAM potential for beryllium. The bcc-hcp equilibrium curves for iron calculated for the EAM potential by the thermodynamic integration method and by the proposed one agree quite well.
NASA Technical Reports Server (NTRS)
Loomis, B. D.; Luthcke, S. B.
2016-01-01
We present new measurements of mass evolution for the Mediterranean, Black, Red, and Caspian Seas as determined by the NASA Goddard Space Flight Center (GSFC) GRACE time-variable global gravity mascon solutions. These new solutions are compared to sea surface altimetry measurements of sea level anomalies with steric corrections applied. To assess their accuracy, the GRACE and altimetry-derived solutions are applied to the set of forward models used by GSFC for processing the GRACE Level-1B datasets, with the resulting inter-satellite range acceleration residuals providing a useful metric for analyzing solution quality.
FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.
Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto
2010-01-01
Manufacturing processes are of great relevance nowadays, when there is constant demand for better productivity with high quality at low cost. The contribution of this work is the development of a fused smart-sensor, based on FPGA, to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier, and a 3-axis accelerometer. Results from experimentation show that the fusion of both parameters makes it possible to obtain threefold better accuracy than that obtained from the current and vibration signals used individually.
Das, Dev Kumar; Ghosh, Madhumala; Pal, Mallika; Maiti, Asok K; Chakraborty, Chandan
2013-02-01
The aim of this paper is to address the development of computer assisted malaria parasite characterization and classification using a machine learning approach based on light microscopic images of peripheral blood smears. In doing this, microscopic image acquisition from stained slides, illumination correction and noise reduction, erythrocyte segmentation, feature extraction, feature selection and finally classification of different stages of malaria (Plasmodium vivax and Plasmodium falciparum) have been investigated. The erythrocytes are segmented using marker controlled watershed transformation and subsequently a total of ninety-six features describing the shape, size and texture of erythrocytes are extracted with respect to parasitemia-infected versus non-infected cells. Ninety-four features are found to be statistically significant in discriminating six classes. Here, a feature selection-cum-classification scheme has been devised by combining the F-statistic with statistical learning techniques, i.e., Bayesian learning and the support vector machine (SVM), in order to provide higher classification accuracy using the best set of discriminating features. Results show that the Bayesian approach provides the highest accuracy, i.e., 84%, for malaria classification by selecting the 19 most significant features, while SVM achieves 83.5% with the 9 most significant features. Finally, the performance of these two classifiers under the feature selection framework has been compared toward malaria parasite classification. Copyright © 2012 Elsevier Ltd. All rights reserved.
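The feature selection-cum-classification scheme (F-statistic ranking followed by SVM) can be sketched with scikit-learn. The data here are synthetic stand-ins for the ninety-six erythrocyte features, and a two-class problem is used for simplicity rather than the paper's six malaria stages:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 96 shape/size/texture features per erythrocyte
X, y = make_classification(n_samples=600, n_features=96, n_informative=9,
                           n_classes=2, random_state=0)

# Rank features by the ANOVA F-statistic, keep the k best, then classify with an SVM
pipe = make_pipeline(SelectKBest(f_classif, k=9), SVC(kernel="rbf"))
acc = cross_val_score(pipe, X, y, cv=5).mean()
print(f"cross-validated accuracy with 9 selected features: {acc:.3f}")
```

Placing the selector inside the pipeline ensures feature selection is refit on each training fold, avoiding the selection bias that occurs when features are chosen on the full dataset before cross-validation.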
Taveira-Gomes, Tiago; Prado-Costa, Rui; Severo, Milton; Ferreira, Maria Amélia
2015-01-24
Spaced-repetition and test-enhanced learning are two methodologies that boost knowledge retention. ALERT STUDENT is a platform that allows creation and distribution of Learning Objects named flashcards, and provides insight into student judgments-of-learning through a metric called 'recall accuracy'. This study aims to understand how the spaced-repetition and test-enhanced learning features provided by the platform affect recall accuracy, and to characterize the effect that students, flashcards and repetitions exert on this measurement. Three spaced laboratory sessions (s0, s1 and s2), were conducted with n=96 medical students. The intervention employed a study task, and a quiz task that consisted in mentally answering open-ended questions about each flashcard and grading recall accuracy. Students were randomized into study-quiz and quiz groups. On s0 both groups performed the quiz task. On s1 and s2, the study-quiz group performed the study task followed by the quiz task, whereas the quiz group only performed the quiz task. We measured differences in recall accuracy between groups/sessions, its variance components, and the G-coefficients for the flashcard component. At s0 there were no differences in recall accuracy between groups. The study-quiz group achieved a significant increase in recall accuracy that was superior to the quiz group in s1 and s2. In the study-quiz group, increases in recall accuracy were mainly due to the session, followed by flashcard factors and student factors. In the quiz group, increases in recall accuracy were mainly accounted for by flashcard factors, followed by student and session factors. The flashcard G-coefficient indicated an agreement on recall accuracy of 91% in the quiz group, and of 47% in the study-quiz group. Recall accuracy is an easily collectible measurement that increases the educational value of Learning Objects and open-ended questions.
This metric seems to vary in a way consistent with knowledge retention, but further investigation is necessary to ascertain the nature of such relationship. Recall accuracy has educational implications to students and educators, and may contribute to deliver tailored learning experiences, assess the effectiveness of instruction, and facilitate research comparing blended-learning interventions.
Systematic review of discharge coding accuracy
Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.
2012-01-01
Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects, for example primary diagnoses accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), P= 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302
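The medians with interquartile ranges reported above can be reproduced with a short sketch; the accuracy rates below are illustrative values spanning the reported 50.5–97.8% range, not the review's actual per-study data:

```python
import numpy as np

# Hypothetical per-study coding accuracy rates (%)
rates = np.array([50.5, 63.3, 67.3, 73.8, 80.3, 83.2, 88.7, 92.1, 94.1, 97.8])

# Median with interquartile range, the summary used throughout the review
median = np.median(rates)
q1, q3 = np.percentile(rates, [25, 75])
print(f"median accuracy: {median:.1f}% (IQR: {q1:.1f}-{q3:.1f}%)")
```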
Improving coding accuracy in an academic practice.
Nguyen, Dana; O'Mara, Heather; Powell, Robert
2017-01-01
Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study populations: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small group case review, and large group discussion. Outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects that were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between 2 intervention periods, both aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M)=26.4%, SD=10%) to accuracy rates after all educational interventions were complete (M=26.8%, SD=12%); t(24)=-0.127, P=.90. Didactic teaching and small group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.
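The comparison reported above is a standard paired t-test on matched pre/post accuracy rates. A minimal sketch of that statistic, using made-up accuracy figures rather than the study's raw data:

```python
import math

def paired_t(before, after):
    """Paired t-test: t statistic and degrees of freedom for matched samples."""
    assert len(before) == len(after)
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of differences
    se = math.sqrt(var / n)                              # standard error of mean difference
    return mean / se, n - 1

# Illustrative per-provider coding-accuracy pairs (fractions), not the study's data
pre  = [0.25, 0.30, 0.20, 0.28, 0.31, 0.24]
post = [0.26, 0.29, 0.21, 0.27, 0.33, 0.25]
t, df = paired_t(pre, post)
print(round(t, 3), df)
```

The resulting t is compared against the t distribution with n-1 degrees of freedom; a t near zero, as in the study, indicates no detectable improvement.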
NASA Astrophysics Data System (ADS)
Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.
2014-04-01
Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
NASA Astrophysics Data System (ADS)
Hall-Brown, Mary
The heterogeneity of Arctic vegetation can make land cover classification very difficult when using medium to small resolution imagery (Schneider et al., 2009; Muller et al., 1999). Using high radiometric and spatial resolution imagery, such as that from the SPOT 5 and IKONOS satellites, has helped arctic land cover classification accuracies rise into the 80 and 90 percentiles (Allard, 2003; Stine et al., 2010; Muller et al., 1999). However, those increases usually come at a high price. High resolution imagery is very expensive and can often add tens of thousands of dollars to the cost of the research. The EO-1 satellite launched in 2002 carries two sensors that have high spectral and/or high spatial resolutions and can be an acceptable compromise between the resolution versus cost issues. The Hyperion is a hyperspectral sensor with the capability of collecting 242 spectral bands of information. The Advanced Land Imager (ALI) is an advanced multispectral sensor whose spatial resolution can be sharpened to 10 meters. This dissertation compares the accuracies of arctic land cover classifications produced by the Hyperion and ALI sensors to the classification accuracies produced by the Système Pour l'Observation de la Terre (SPOT), the Landsat Thematic Mapper (TM) and the Landsat Enhanced Thematic Mapper Plus (ETM+) sensors. Hyperion and ALI images from August 2004 were collected over the Upper Kuparuk River Basin, Alaska. Image processing included the stepwise discriminant analysis of pixels that were positively classified from coinciding ground control points, geometric and radiometric correction, and principal component analysis. Finally, stratified random sampling was used to perform accuracy assessments on satellite derived land cover classifications. Accuracy was estimated from an error matrix (confusion matrix) that provided the overall, producer's and user's accuracies.
This research found that while the Hyperion sensor produced classification accuracies that were equivalent to the TM and ETM+ sensors (approximately 78%), the Hyperion could not match the accuracy of the SPOT 5 HRV sensor. However, the land cover classifications derived from the ALI sensor exceeded most classification accuracies derived from the TM and ETM+ sensors and were even comparable to most SPOT 5 HRV classifications (87%). With the deactivation of the Landsat series satellites, the uninterrupted monitoring of remote locations throughout the world, such as in the Arctic, is in jeopardy. The utilization of the Hyperion and ALI sensors is a way to keep that endeavor operational. By keeping the ALI sensor active at all times, uninterrupted observation of the entire Earth can be accomplished. Keeping the Hyperion sensor as a "tasked" sensor can provide scientists with additional imagery and options for their studies without overburdening storage issues.
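The overall, producer's and user's accuracies mentioned in this record all come directly from the error (confusion) matrix. A minimal sketch, with a hypothetical 3-class matrix (not the dissertation's data):

```python
import numpy as np

def accuracy_report(cm):
    """Overall, producer's and user's accuracies from an error (confusion) matrix.

    Rows = classified (map) labels, columns = reference labels, as is common
    in remote-sensing accuracy assessment.
    """
    cm = np.asarray(cm, dtype=float)
    overall = np.trace(cm) / cm.sum()
    producers = np.diag(cm) / cm.sum(axis=0)  # per reference class (omission errors)
    users = np.diag(cm) / cm.sum(axis=1)      # per map class (commission errors)
    return overall, producers, users

# Hypothetical 3-class error matrix from stratified random sampling
cm = [[50,  5,  5],
      [ 4, 40,  6],
      [ 6,  5, 39]]
overall, prod, user = accuracy_report(cm)
print(round(overall, 3))  # 0.806
```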
Power calculation for comparing diagnostic accuracies in a multi-reader, multi-test design.
Kim, Eunhee; Zhang, Zheng; Wang, Youdan; Zeng, Donglin
2014-12-01
Receiver operating characteristic (ROC) analysis is widely used to evaluate the performance of diagnostic tests with continuous or ordinal responses. A popular study design for assessing the accuracy of diagnostic tests involves multiple readers interpreting multiple diagnostic test results, called the multi-reader, multi-test design. Although several different approaches to analyzing data from this design exist, few methods have discussed the sample size and power issues. In this article, we develop a power formula to compare the correlated areas under the ROC curves (AUC) in a multi-reader, multi-test design. We present a nonparametric approach to estimate and compare the correlated AUCs by extending DeLong et al.'s (1988, Biometrics 44, 837-845) approach. A power formula is derived based on the asymptotic distribution of the nonparametric AUCs. Simulation studies are conducted to demonstrate the performance of the proposed power formula and an example is provided to illustrate the proposed procedure. © 2014, The International Biometric Society.
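The nonparametric AUC estimator on which DeLong-style comparisons are built is the Mann-Whitney statistic: the fraction of (diseased, non-diseased) pairs the test ranks correctly. A small sketch with illustrative reader scores (the DeLong covariance estimate and the power formula themselves are beyond this snippet):

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    """Nonparametric AUC: fraction of (positive, negative) case pairs ranked
    correctly, counting ties as 1/2 -- the estimator underlying DeLong's method."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical scores for 4 diseased (label 1) and 4 non-diseased (label 0) cases
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.5, 0.3, 0.2]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
print(auc_mann_whitney(scores, labels))  # 0.875
```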
Effect of Carbohydrate and Caffeine Ingestion on Badminton Performance.
Clarke, Neil D; Duncan, Michael J
2016-01-01
To investigate the effect of ingesting carbohydrate and caffeine solutions on measures that are central to success in badminton. Twelve male badminton players performed a badminton serve-accuracy test, coincidence-anticipation timing (CAT), and a choice reaction-time sprint test 60 min before exercise. Participants then consumed 7 mL/kg body mass of either water (PLA), 6.4% carbohydrate solution (CHO), a solution containing a caffeine dose of 4 mg/kg, or 6.4% carbohydrate and 4 mg/kg caffeine (C+C). All solutions were flavored with orange-flavored concentrate. During the 33-min fatigue protocol, participants were provided with an additional 3 mL/kg body mass of solution, which was ingested before the end of the protocol. As soon as the 33-min fatigue protocol was completed, all measures were recorded again. Short-serve accuracy was improved after the ingestion of CHO and C+C compared with PLA (P = .001, ηp² = .50). Long-serve accuracy was improved after the ingestion of C+C compared with PLA (P < .001, ηp² = .53). Absolute error in CAT demonstrated smaller deteriorations after the ingestion of C+C compared with PLA (P < .05; slow, ηp² = .41; fast, ηp² = .31). Choice reaction time improved in all trials with the exception of PLA, which demonstrated a reduction (P < .001, ηp² = .85), although C+C was faster than all trials (P < .001, ηp² = .76). These findings suggest that the ingestion of a caffeinated carbohydrate solution before and during a badminton match can maintain serve accuracy, anticipation timing, and sprinting actions around the court.
ROC curves in clinical chemistry: uses, misuses, and possible solutions.
Obuchowski, Nancy A; Lieber, Michael L; Wians, Frank H
2004-07-01
ROC curves have become the standard for describing and comparing the accuracy of diagnostic tests. Not surprisingly, ROC curves are used often by clinical chemists. Our aims were to observe how the accuracy of clinical laboratory diagnostic tests is assessed, compared, and reported in the literature; to identify common problems with the use of ROC curves; and to offer some possible solutions. We reviewed every original work using ROC curves and published in Clinical Chemistry in 2001 or 2002. For each article we recorded phase of the research, prospective or retrospective design, sample size, presence/absence of confidence intervals (CIs), nature of the statistical analysis, and major analysis problems. Of 58 articles, 31% were phase I (exploratory), 50% were phase II (challenge), and 19% were phase III (advanced) studies. The studies increased in sample size from phase I to III and showed a progression in the use of prospective designs. Most phase I studies were powered to assess diagnostic tests with ROC areas ≥0.70. Thirty-eight percent of studies failed to include CIs for diagnostic test accuracy or the CIs were constructed inappropriately. Thirty-three percent of studies provided insufficient analysis for comparing diagnostic tests. Other problems included dichotomization of the gold standard scale and inappropriate analysis of the equivalence of two diagnostic tests. We identify available software and make some suggestions for sample size determination, testing for equivalence in diagnostic accuracy, and alternatives to a dichotomous classification of a continuous-scale gold standard. More methodologic research is needed in areas specific to clinical chemistry.
Lee, Chau Hung; Haaland, Benjamin; Earnest, Arul; Tan, Cher Heng
2013-09-01
To determine whether positive oral contrast agents improve accuracy of abdominopelvic CT compared with no, neutral or negative oral contrast agent. Literature was searched for studies evaluating the diagnostic performance of abdominopelvic CT with positive oral contrast agents against imaging with no, neutral or negative oral contrast agent. Meta-analysis reviewed studies correlating CT findings of blunt abdominal injury with positive and without oral contrast agents against surgical, autopsy or clinical outcome allowing derivation of pooled sensitivity and specificity. Systematic review was performed on studies with common design and reference standard. Thirty-two studies were divided into two groups. Group 1 comprised 15 studies comparing CT with positive and without oral contrast agents. Meta-analysis of five studies from group 1 provided no difference in sensitivity or specificity between CT with positive or without oral contrast agents. Group 2 comprised 17 studies comparing CT with positive and neutral or negative oral contrast agents. Systematic review of 12 studies from group 2 indicated that neutral or negative oral contrasts were as effective as positive oral contrast agents for bowel visualisation. There is no difference in accuracy between CT performed with positive oral contrast agents or with no, neutral or negative oral contrast agent. • There is no difference in the accuracy of CT with or without oral contrast agent. • There is no difference in the accuracy of CT with Gastrografin or water. • Omission of oral contrast, utilising neutral or negative oral contrast agent saves time, costs and decreases risk of aspiration.
Classifying four-category visual objects using multiple ERP components in single-trial ERP.
Qin, Yu; Zhan, Yu; Wang, Changming; Zhang, Jiacai; Yao, Li; Guo, Xiaojuan; Wu, Xia; Hu, Bin
2016-08-01
Object categorization using single-trial electroencephalography (EEG) data measured while participants view images has been studied intensively. In previous studies, multiple event-related potential (ERP) components (e.g., P1, N1, P2, and P3) were used to improve the performance of object categorization of visual stimuli. In this study, we introduce a novel method that uses multiple-kernel support vector machine to fuse multiple ERP component features. We investigate whether fusing the potential complementary information of different ERP components (e.g., P1, N1, P2a, and P2b) can improve the performance of four-category visual object classification in single-trial EEGs. We also compare the classification accuracy of different ERP component fusion methods. Our experimental results indicate that the classification accuracy increases through multiple ERP fusion. Additional comparative analyses indicate that the multiple-kernel fusion method can achieve a mean classification accuracy higher than 72%, which is substantially better than that achieved with any single ERP component feature (55.07% for the best single ERP component, N1). We compare the classification results with those of other fusion methods and determine that the accuracy of the multiple-kernel fusion method is 5.47, 4.06, and 16.90% higher than those of feature concatenation, feature extraction, and decision fusion, respectively. Our study shows that our multiple-kernel fusion method outperforms other fusion methods and thus provides a means to improve the classification performance of single-trial ERPs in brain-computer interface research.
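The core of multiple-kernel fusion is a weighted combination of per-component kernel matrices. The sketch below is a simplified stand-in: it combines two RBF kernels over synthetic "ERP component" features and classifies with kernel ridge regression rather than a multiple-kernel SVM, purely to illustrate the kernel-combination step; the weights, features and gamma values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, gamma):
    """RBF (Gaussian) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy stand-ins for two ERP component feature sets (e.g. "N1" and "P2" windows)
n = 40
y = np.repeat([-1.0, 1.0], n // 2)                 # two of the four categories, for brevity
f1 = y[:, None] * 0.8 + rng.normal(size=(n, 3))    # component 1 features
f2 = y[:, None] * 0.5 + rng.normal(size=(n, 3))    # component 2 features

# Multiple-kernel fusion: convex combination of the per-component kernels
w = 0.6
K = w * rbf(f1, f1, 0.5) + (1 - w) * rbf(f2, f2, 0.5)

# Kernel ridge classifier as a simple stand-in for a multiple-kernel SVM
alpha = np.linalg.solve(K + 1e-2 * np.eye(n), y)
pred = np.sign(K @ alpha)
acc = (pred == y).mean()
print(round(acc, 2))
```

In a real multiple-kernel SVM the weights w would be learned jointly with the classifier; here they are fixed by hand to keep the sketch short.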
Using Meta-Analysis to Inform the Design of Subsequent Studies of Diagnostic Test Accuracy
ERIC Educational Resources Information Center
Hinchliffe, Sally R.; Crowther, Michael J.; Phillips, Robert S.; Sutton, Alex J.
2013-01-01
An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test; particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial…
Bradley, David; Nisbet, Andrew
2012-01-01
This study provides a review of recent publications on the physics-aspects of dosimetric accuracy in high dose rate (HDR) brachytherapy. The discussion of accuracy is primarily concerned with uncertainties, but methods to improve dose conformation to the prescribed intended dose distribution are also noted. The main aim of the paper is to review current practical techniques and methods employed for HDR brachytherapy dosimetry. This includes work on the determination of dose rate fields around brachytherapy sources, the capability of treatment planning systems, the performance of treatment units and methods to verify dose delivery. This work highlights the determinants of accuracy in HDR dosimetry and treatment delivery and presents a selection of papers, focusing on articles from the last five years, to reflect active areas of research and development. Apart from Monte Carlo modelling of source dosimetry, there is no clear consensus on the optimum techniques to be used to assure dosimetric accuracy through all the processes involved in HDR brachytherapy treatment. With the exception of the ESTRO mailed dosimetry service, there is little dosimetric audit activity reported in the literature, when compared with external beam radiotherapy verification. PMID:23349649
Developing collaborative classifiers using an expert-based model
Mountrakis, G.; Watts, R.; Luo, L.; Wang, Jingyuan
2009-01-01
This paper presents a hierarchical, multi-stage adaptive strategy for image classification. We iteratively apply various classification methods (e.g., decision trees, neural networks), identify regions of parametric and geographic space where accuracy is low, and in these regions, test and apply alternate methods repeating the process until the entire image is classified. Currently, classifiers are evaluated through human input using an expert-based system; therefore, this paper acts as the proof of concept for collaborative classifiers. Because we decompose the problem into smaller, more manageable sub-tasks, our classification exhibits increased flexibility compared to existing methods since classification methods are tailored to the idiosyncrasies of specific regions. A major benefit of our approach is its scalability and collaborative support since selected low-accuracy classifiers can be easily replaced with others without affecting classification accuracy in high accuracy areas. At each stage, we develop spatially explicit accuracy metrics that provide straightforward assessment of results by non-experts and point to areas that need algorithmic improvement or ancillary data. Our approach is demonstrated in the task of detecting impervious surface areas, an important indicator for human-induced alterations to the environment, using a 2001 Landsat scene from Las Vegas, Nevada. © 2009 American Society for Photogrammetry and Remote Sensing.
Accuracy Analysis of a Dam Model from Drone Surveys
Buffi, Giulia; Venturi, Sara
2017-01-01
This paper investigates the accuracy of models obtained by drone surveys. To this end, this work analyzes how the placement of ground control points (GCPs) used to georeference the dense point cloud of a dam affects the resulting three-dimensional (3D) model. Images of the upstream face of a double-arch masonry dam are acquired by drone survey and used to build the 3D model of the dam for vulnerability analysis purposes. However, the real impact of GCP location choice on the georeferencing of the images, and thus of the model, remained to be understood. To this end, a large number of GCP configurations were investigated, building a series of dense point clouds. The accuracy of the resulting dense clouds was estimated by comparing the coordinates of check points extracted from the model with their true coordinates measured by traditional topographic survey. The paper aims at providing information about the optimal choice of GCP placement not only for dams but also for all surveys of high-rise structures. A priori knowledge of the effect of GCP number and location on model accuracy can increase survey reliability and accuracy and speed up survey set-up operations. PMID:28771185
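The accuracy estimate described, comparing model-derived check-point coordinates against independently surveyed "true" coordinates, reduces to per-point 3D errors and their RMSE. A minimal sketch with hypothetical coordinates:

```python
import numpy as np

def checkpoint_rmse(model_xyz, survey_xyz):
    """3D error per check point and overall RMSE between model-derived
    coordinates and surveyed reference coordinates."""
    d = np.asarray(model_xyz, float) - np.asarray(survey_xyz, float)
    per_point = np.linalg.norm(d, axis=1)      # 3D error of each check point
    rmse = np.sqrt((per_point ** 2).mean())    # overall root-mean-square error
    return per_point, rmse

# Hypothetical coordinates (metres) for three check points on the dam face
model  = [[100.02, 50.01, 20.00],
          [110.00, 55.03, 21.01],
          [120.01, 60.00, 22.02]]
survey = [[100.00, 50.00, 20.00],
          [110.00, 55.00, 21.00],
          [120.00, 60.00, 22.00]]
per_point, rmse = checkpoint_rmse(model, survey)
print(round(rmse, 4))
```

Repeating this for each GCP configuration gives the accuracy-versus-placement comparison the paper performs.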
Real-time, resource-constrained object classification on a micro-air vehicle
NASA Astrophysics Data System (ADS)
Buck, Louis; Ray, Laura
2013-12-01
A real-time embedded object classification algorithm is developed through the novel combination of binary feature descriptors, a bag-of-visual-words object model and the cortico-striatal loop (CSL) learning algorithm. The BRIEF, ORB and FREAK binary descriptors are tested and compared to SIFT descriptors with regard to their respective classification accuracies, execution times, and memory requirements when used with CSL on a 12.6 g ARM Cortex embedded processor running at 800 MHz. Additionally, the effect of χ² feature mapping and opponent-color representations used with these descriptors is examined. These tests are performed on four data sets of varying sizes and difficulty, and the BRIEF descriptor is found to yield the best combination of speed and classification accuracy. Its use with CSL achieves accuracies between 67% and 95% of those achieved with SIFT descriptors and allows for the embedded classification of a 128×192 pixel image in 0.15 seconds, 60 times faster than classification with SIFT. χ² mapping is found to provide substantial improvements in classification accuracy for all of the descriptors at little cost, while opponent-color descriptors offer accuracy improvements only on colorful datasets.
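The χ² feature mapping mentioned approximates the χ² kernel commonly paired with bag-of-visual-words histograms. The exact (exponential) χ² kernel itself is easy to sketch; the vocabulary size and histograms below are invented:

```python
import numpy as np

def chi2_kernel(H1, H2, gamma=1.0):
    """Exponential chi-squared kernel between histogram rows, commonly used with
    bag-of-visual-words models: k = exp(-gamma * sum_i (h1_i - h2_i)^2 / (h1_i + h2_i))."""
    H1, H2 = np.asarray(H1, float), np.asarray(H2, float)
    num = (H1[:, None, :] - H2[None, :, :]) ** 2
    den = H1[:, None, :] + H2[None, :, :]
    # Guard empty bins (0/0 is treated as contributing 0 to the distance)
    d = np.where(den > 0, num / np.where(den > 0, den, 1), 0).sum(-1)
    return np.exp(-gamma * d)

# Two L1-normalised visual-word histograms over a toy 4-word vocabulary
a = [[0.5, 0.3, 0.2, 0.0]]
b = [[0.4, 0.4, 0.1, 0.1]]
print(float(chi2_kernel(a, b)[0, 0]))  # < 1 for dissimilar histograms
```

An explicit feature mapping approximating this kernel lets a linear classifier stand in for the kernelized one at much lower runtime cost, which is the appeal on embedded hardware.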
Air Quality Monitoring and Forecasting Applications of Suomi NPP VIIRS Aerosol Products
NASA Astrophysics Data System (ADS)
Kondragunta, Shobha
The Suomi National Polar-orbiting Partnership (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) instrument was launched on October 28, 2011. It provides Aerosol Optical Thickness (AOT) at two different spatial resolutions: a pixel level (~750 m at nadir) product called the Intermediate Product (IP) and an aggregated (~6 km at nadir) product called the Environmental Data Record (EDR), and a Suspended Matter (SM) EDR that provides aerosol type (dust, smoke, sea salt, and volcanic ash) information. An extensive validation of VIIRS best quality aerosol products with ground based L1.5 Aerosol Robotic NETwork (AERONET) data shows that the AOT EDR product has an accuracy/precision of -0.01/0.11 and 0.01/0.08 over land and ocean respectively. Globally, VIIRS mean AOT EDR (0.20) is similar to Aqua MODIS (0.16) with some important regional and seasonal differences. The accuracy of the SM product, however, is found to be very low (20 percent) when compared to Cloud Aerosol Lidar with Orthogonal Polarization (CALIOP) and AERONET. Several algorithm updates which include a better approach to retrieve surface reflectance have been developed for AOT retrieval. For dust aerosol type retrieval, a new approach that takes advantage of spectral dependence of Rayleigh scattering, surface reflectance, and dust absorption in the deep blue (412 nm), blue (440 nm), and mid-IR (2.2 µm) has been developed that detects dust with an accuracy of ~80 percent. For smoke plume identification, a source apportionment algorithm that combines fire hot spots with AOT imagery has been developed that provides smoke plume extent with an accuracy of ~70 percent. The VIIRS aerosol products will provide continuity to the current operational use of aerosol products from Aqua and Terra MODIS.
These include aerosol data assimilation in the Naval Research Laboratory (NRL) global aerosol model, verification of National Weather Service (NWS) dust and smoke forecasts, exceptional-event monitoring by individual states, and air quality warnings by the Environmental Protection Agency (EPA). This talk will provide an overview of VIIRS algorithms, aerosol product validation, and examples of various applications, with a discussion of the relevance of product accuracy.
Clarke, William L; Anderson, Stacey; Farhy, Leon; Breton, Marc; Gonder-Frederick, Linda; Cox, Daniel; Kovatchev, Boris
2005-10-01
To compare the clinical accuracy of two different continuous glucose sensors (CGS) during euglycemia and hypoglycemia using continuous glucose-error grid analysis (CG-EGA). FreeStyle Navigator (Abbott Laboratories, Alameda, CA) and MiniMed CGMS (Medtronic, Northridge, CA) CGSs were applied to the abdomens of 16 type 1 diabetic subjects (age 42 ± 3 years) 12 h before the initiation of the study. Each system was calibrated according to the manufacturer's recommendations. Each subject underwent a hyperinsulinemic-euglycemic clamp (blood glucose goal 110 mg/dl) for 70-210 min followed by a controlled 1 mg·dl⁻¹·min⁻¹ reduction in blood glucose toward a nadir of 40 mg/dl. Arterialized blood glucose was determined every 5 min using a Beckman Glucose Analyzer (Fullerton, CA). CGS glucose recordings were matched to the reference blood glucose with 30-s precision, and rates of glucose change were calculated for 5-min intervals. CG-EGA was used to quantify the clinical accuracy of both systems by estimating combined point and rate accuracy of each system in the euglycemic (70-180 mg/dl) and hypoglycemic (<70 mg/dl) ranges. A total of 1,104 data pairs were recorded in the euglycemic range and 250 data pairs in the hypoglycemic range. Overall correlation between CGS and reference glucose was similar for both systems (Navigator, r = 0.84; CGMS, r = 0.79, NS). During euglycemia, both CGS systems had similar clinical accuracy (Navigator zones A + B, 88.8%; CGMS zones A + B, 89.3%, NS). However, during hypoglycemia, the Navigator was significantly more clinically accurate than the CGMS (zones A + B = 82.4 vs. 61.6%, Navigator and CGMS, respectively, P < 0.0005). CG-EGA is a helpful tool for evaluating and comparing the clinical accuracy of CGS systems in different blood glucose ranges. CG-EGA provides accuracy details beyond other methods of evaluation, including correlational analysis and the original EGA.
Error and Uncertainty in the Accuracy Assessment of Land Cover Maps
NASA Astrophysics Data System (ADS)
Sarmento, Pedro Alexandre Reis
Traditionally, the accuracy assessment of land cover maps is performed by comparing these maps with a reference database intended to represent the "real" land cover, with the comparison reported through thematic accuracy measures derived from confusion matrices. However, these reference databases are themselves a representation of reality, containing errors due to human uncertainty in assigning the land cover class that best characterizes a certain area, which biases the thematic accuracy measures reported to the end users of these maps. The main goal of this dissertation is to develop a methodology that allows the integration of the human uncertainty present in reference databases into the accuracy assessment of land cover maps, and to analyse the impacts that uncertainty may have on the thematic accuracy measures reported to end users. The utility of including human uncertainty in the accuracy assessment of land cover maps is investigated. Specifically, we studied the utility of fuzzy set theory, more precisely fuzzy arithmetic, for a better understanding of the human uncertainty associated with the elaboration of reference databases and its impact on the thematic accuracy measures derived from confusion matrices. For this purpose, linguistic values transformed into fuzzy intervals that address the uncertainty in the elaboration of reference databases were used to compute fuzzy confusion matrices. The proposed methodology is illustrated using a case study in which the accuracy of a land cover map for Continental Portugal derived from Medium Resolution Imaging Spectrometer (MERIS) imagery is assessed. The obtained results demonstrate that the inclusion of human uncertainty in reference databases provides much more information about the quality of land cover maps than the traditional approach to accuracy assessment.
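One simple way to see how labelling uncertainty propagates into accuracy measures is interval arithmetic, a special case of the fuzzy-interval idea this abstract describes (fuzzy arithmetic operates on such intervals at each α-cut). The matrix values below are hypothetical:

```python
from fractions import Fraction

def overall_accuracy_interval(cm_lo, cm_hi):
    """Interval-valued overall accuracy from an interval confusion matrix.

    Each cell [lo, hi] encodes labelling uncertainty in the reference data.
    The accuracy interval is widest when diagonal (agreement) cells take one
    extreme and off-diagonal (disagreement) cells take the other.
    """
    n = len(cm_lo)
    def acc(diag, off):
        correct = sum(diag[i][i] for i in range(n))
        total = correct + sum(off[i][j] for i in range(n) for j in range(n) if i != j)
        return Fraction(correct, total)
    low  = acc(cm_lo, cm_hi)   # least agreement, most disagreement
    high = acc(cm_hi, cm_lo)   # most agreement, least disagreement
    return low, high

# Hypothetical 2-class interval confusion matrix (cell bounds from uncertain labels)
lo = [[45, 3], [4, 40]]
hi = [[50, 6], [7, 44]]
low, high = overall_accuracy_interval(lo, hi)
print(float(low), float(high))
```

Instead of a single accuracy figure, end users receive a range whose width reflects how uncertain the reference labels were.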
Empirical evidence of the importance of comparative studies of diagnostic test accuracy.
Takwoingi, Yemisi; Leeflang, Mariska M G; Deeks, Jonathan J
2013-04-02
Systematic reviews that "compare" the accuracy of 2 or more tests often include different sets of studies for each test. To investigate the availability of direct comparative studies of test accuracy and to assess whether summary estimates of accuracy differ between meta-analyses of noncomparative and comparative studies. Systematic reviews in any language from the Database of Abstracts of Reviews of Effects and the Cochrane Database of Systematic Reviews from 1994 to October 2012. 1 of 2 assessors selected reviews that evaluated at least 2 tests and identified meta-analyses that included both noncomparative studies and comparative studies. 1 of 3 assessors extracted data about review and study characteristics and test performance. 248 reviews compared test accuracy; of the 6915 studies, 2113 (31%) were comparative. Thirty-six reviews (with 52 meta-analyses) had adequate studies to compare results of noncomparative and comparative studies by using a hierarchical summary receiver-operating characteristic meta-regression model for each test comparison. In 10 meta-analyses, noncomparative studies ranked tests in the opposite order of comparative studies. A total of 25 meta-analyses showed more than a 2-fold discrepancy in the relative diagnostic odds ratio between noncomparative and comparative studies. Differences in accuracy estimates between noncomparative and comparative studies were greater than expected by chance (P < 0.001). A paucity of comparative studies limited exploration of direction in bias. Evidence derived from noncomparative studies often differs from that derived from comparative studies. Robustly designed studies in which all patients receive all tests or are randomly assigned to receive one or other of the tests should be more routinely undertaken and are preferred for evidence to guide test selection. National Institute for Health Research (United Kingdom).
Wang, Yali; Hamal, Preeti; You, Xiaofang; Mao, Haixia; Li, Fei; Sun, Xiwen
2017-01-01
The aim of this study was to assess whether CT imaging using an ultra-high-resolution CT (UHRCT) scan with a small scan field of view (FOV) provides higher image quality and helps to reduce the follow-up period compared with a conventional high-resolution CT (CHRCT) scan. We identified patients with at least one pulmonary nodule at our hospital from July 2015 to November 2015. CHRCT and UHRCT scans were conducted in all enrolled patients. Three experienced radiologists evaluated the image quality using a 5-point score and made diagnoses. The paired images were displayed side by side in a random manner and annotations of scan information were removed. Image quality, the diagnostic confidence of the radiologists, follow-up recommendations, and diagnostic accuracy were assessed. A total of 52 patients (62 nodules) were included in this study. The UHRCT scan provided better image quality with respect to nodule margins and solid internal components than CHRCT (P < 0.05). Readers had higher diagnostic confidence with the UHRCT images than with the CHRCT images (P < 0.05). The follow-up recommendations differed significantly between UHRCT and CHRCT images (P < 0.05). Compared with the surgical pathological findings, UHRCT had a relatively higher diagnostic accuracy than CHRCT, although the difference was not statistically significant (P > 0.05). These findings suggest that the UHRCT prototype scanner provides better image quality for subsolid nodules than CHRCT and may contribute to reducing patients' follow-up periods. PMID:28231320
Changes in the relation between snow station observations and basin scale snow water resources
NASA Astrophysics Data System (ADS)
Sexstone, G. A.; Penn, C. A.; Clow, D. W.; Moeser, D.; Liston, G. E.
2017-12-01
Snow monitoring stations that measure snow water equivalent or snow depth provide fundamental observations used for predicting water availability and flood risk in mountainous regions. In the western United States, snow station observations provided by the Natural Resources Conservation Service Snow Telemetry (SNOTEL) network are relied upon for forecasting spring and summer streamflow volume. Streamflow forecast accuracy has declined for many regions over the last several decades. Changes in snow accumulation and melt related to climate, land use, and forest cover are not accounted for in current forecasts, and are likely sources of error. Therefore, understanding and updating relations between snow station observations and basin scale snow water resources is crucial to improve accuracy of streamflow prediction. In this study, we investigated the representativeness of snow station observations when compared to simulated basin-wide snow water resources within the Rio Grande headwaters of Colorado. We used the combination of a process-based snow model (SnowModel), field-based measurements, and remote sensing observations to compare the spatiotemporal variability of simulated basin-wide snow accumulation and melt with that of SNOTEL station observations. Results indicated that observations are comparable to simulated basin-average winter precipitation but overestimate both the simulated basin-average snow water equivalent and snowmelt rate. Changes in the representation of snow station observations over time in the Rio Grande headwaters were also investigated and compared to observed streamflow and streamflow forecasting errors. Results from this study provide important insight in the context of non-stationarity for future water availability assessments and streamflow predictions.
Pine, P S; Boedigheimer, M; Rosenzweig, B A; Turpaz, Y; He, Y D; Delenstarr, G; Ganter, B; Jarnagin, K; Jones, W D; Reid, L H; Thompson, K L
2008-11-01
Effective use of microarray technology in clinical and regulatory settings is contingent on the adoption of standard methods for assessing performance. The MicroArray Quality Control project evaluated the repeatability and comparability of microarray data on the major commercial platforms and laid the groundwork for the application of microarray technology to regulatory assessments. However, methods for assessing performance that are commonly applied to diagnostic assays used in laboratory medicine remain to be developed for microarray assays. A reference system for microarray performance evaluation and process improvement was developed that includes reference samples, metrics and reference datasets. The reference material is composed of two mixes of four different rat tissue RNAs that allow defined target ratios to be assayed using a set of tissue-selective analytes that are distributed along the dynamic range of measurement. The diagnostic accuracy of detected changes in expression ratios, measured as the area under the curve from receiver operating characteristic plots, provides a single commutable value for comparing assay specificity and sensitivity. The utility of this system for assessing overall performance was evaluated for relevant applications like multi-laboratory proficiency testing programs and single-laboratory process drift monitoring. The diagnostic accuracy of detection of a 1.5-fold change in signal level was found to be a sensitive metric for comparing overall performance. This test approaches the technical limit for reliable discrimination of differences between two samples using this technology. We describe a reference system that provides a mechanism for internal and external assessment of laboratory proficiency with microarray technology and is translatable to performance assessments on other whole-genome expression arrays used for basic and clinical research.
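The metric above, area under the ROC curve, can be computed directly by the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores above a randomly chosen negative one. A minimal sketch with invented scores, not data from the study:

```python
# Hedged sketch: rank-based AUC, the kind of metric used to score detection
# of a 1.5-fold expression change. Scores and labels are illustrative.

def roc_auc(scores_pos, scores_neg):
    """Probability that a random positive scores above a random negative."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# e.g. measured log-ratios for analytes with a true 1.5-fold change (pos)
# versus analytes with no change (neg)
auc = roc_auc([0.58, 0.61, 0.40, 0.55], [0.05, -0.02, 0.45, 0.10])
print(auc)  # 0.9375
```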
Iskandar, Aline; Limone, Brendan; Parker, Matthew W; Perugini, Andrew; Kim, Hyejin; Jones, Charles; Calamari, Brian; Coleman, Craig I; Heller, Gary V
2013-02-01
It remains controversial whether the diagnostic accuracy of single-photon emission computed tomography myocardial perfusion imaging (SPECT MPI) is different in men as compared to women. We performed a meta-analysis to investigate gender differences of SPECT MPI for the diagnosis of CAD (≥50% stenosis). Two investigators independently performed a systematic review of the MEDLINE and EMBASE databases from inception through January 2012 for English-language studies determining the diagnostic accuracy of SPECT MPI. We included prospective studies that compared SPECT MPI with conventional coronary angiography which provided sufficient data to calculate gender-specific true and false positives and negatives. Data from studies evaluating <20 patients of one gender were excluded. Bivariate meta-analysis was used to create summary receiver operating curves. Twenty-six studies met inclusion criteria, representing 1,148 women and 1,142 men. Bivariate meta-analysis yielded a mean sensitivity and specificity of 84.2% (95% confidence interval [CI] 78.7%-88.6%) and 78.7% (CI 70.0%-85.3%) for SPECT MPI in women and 89.1% (CI 84.0%-92.7%) and 71.2% (CI 60.8%-79.8%) for SPECT MPI in men. There was no significant difference in the sensitivity (P = .15) or specificity (P = .23) between male and female subjects. In a bivariate meta-analysis of the available literature, the diagnostic accuracy of SPECT MPI is similar for both men and women.
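The pooled estimates above are built from the gender-specific true/false positive and negative counts extracted from each study. A minimal sketch of the per-table step, with invented counts (not the meta-analysis data):

```python
# Hedged sketch: sensitivity and specificity from a single study's
# 2x2 counts. Counts below are illustrative only.

def sensitivity(tp, fn):
    """Proportion of diseased patients correctly identified."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of non-diseased patients correctly identified."""
    return tn / (tn + fp)

women = dict(tp=420, fn=80, tn=390, fp=110)

print(round(sensitivity(women["tp"], women["fn"]), 2))  # 0.84
print(round(specificity(women["tn"], women["fp"]), 2))  # 0.78
```

The bivariate meta-analysis in the study then pools such pairs across studies while modelling the correlation between sensitivity and specificity.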
Unger, Jakob; Schuster, Maria; Hecker, Dietmar J; Schick, Bernhard; Lohscheller, Jörg
2016-01-01
This work presents a computer-based approach to analyze the two-dimensional vocal fold dynamics of endoscopic high-speed videos, and constitutes an extension and generalization of a previously proposed wavelet-based procedure. While most approaches aim for analyzing sustained phonation conditions, the proposed method allows for a clinically adequate analysis of both dynamic as well as sustained phonation paradigms. The analysis procedure is based on a spatio-temporal visualization technique, the phonovibrogram, that facilitates the documentation of the visible laryngeal dynamics. From the phonovibrogram, a low-dimensional set of features is computed using a principal component analysis strategy that quantifies the type of vibration patterns, irregularity, lateral symmetry and synchronicity, as a function of time. Two different test bench data sets are used to validate the approach: (I) 150 healthy and pathologic subjects examined during sustained phonation. (II) 20 healthy and pathologic subjects that were examined twice: during sustained phonation and a glissando from a low to a higher fundamental frequency. In order to assess the discriminative power of the extracted features, a Support Vector Machine is trained to distinguish between physiologic and pathologic vibrations. The results for sustained phonation sequences are compared to the previous approach. Finally, the classification performance of the stationary analyzing procedure is compared to the transient analysis of the glissando maneuver. For the first test bench the proposed procedure outperformed the previous approach (proposed feature set: accuracy: 91.3%, sensitivity: 80%, specificity: 97%; previous approach: accuracy: 89.3%, sensitivity: 76%, specificity: 96%).
Comparing the classification performance of the second test bench further corroborates that analyzing transient paradigms provides clear additional diagnostic value (glissando maneuver: accuracy: 90%, sensitivity: 100%, specificity: 80%, sustained phonation: accuracy: 75%, sensitivity: 80%, specificity: 70%). The incorporation of parameters describing the temporal evolvement of vocal fold vibration clearly improves the automatic identification of pathologic vibration patterns. Furthermore, incorporating a dynamic phonation paradigm provides additional valuable information about the underlying laryngeal dynamics that cannot be derived from sustained conditions. The proposed generalized approach provides a better overall classification performance than the previous approach, and hence constitutes a new advantageous tool for an improved clinical diagnosis of voice disorders. Copyright © 2015 Elsevier B.V. All rights reserved.
Influence of metallic dental implants and metal artefacts on dose calculation accuracy.
Maerz, Manuel; Koelbl, Oliver; Dobler, Barbara
2015-03-01
Metallic dental implants cause severe streaking artefacts in computed tomography (CT) data, which inhibit the correct representation of shape and density of the metal and the surrounding tissue. The aim of this study was to investigate the impact of dental implants on the accuracy of dose calculations in radiation therapy planning and the benefit of metal artefact reduction (MAR). A second aim was to determine the treatment technique which is less sensitive to the presence of metallic implants in terms of dose calculation accuracy. Phantoms consisting of homogeneous water equivalent material surrounding dental implants were designed. Artefact-containing CT data were corrected using the correct density information. Intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) plans were calculated on corrected and uncorrected CT data and compared to 2-dimensional dose measurements using GafChromic™ EBT2 films. For all plans the accuracy of dose calculations is significantly higher if performed on corrected CT data (p = 0.015). The agreement of calculated and measured dose distributions is significantly higher for VMAT than for IMRT plans for calculations on uncorrected CT data (p = 0.011) as well as on corrected CT data (p = 0.029). For IMRT and VMAT the application of metal artefact reduction significantly increases the agreement of dose calculations with film measurements. VMAT was found to provide the highest accuracy on corrected as well as on uncorrected CT data. VMAT is therefore preferable over IMRT for patients with metallic implants, if plan quality is comparable for the two techniques.
Ender, Andreas; Mehl, Albert
2015-01-01
To investigate the accuracy of conventional and digital impression methods used to obtain full-arch impressions by using an in-vitro reference model. Eight different conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; and irreversible hydrocolloid, ALG) and digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; and Lava COS, LAV) full-arch impressions were obtained from a reference model with a known morphology, using a highly accurate reference scanner. The impressions obtained were then compared with the original geometry of the reference model and within each test group. A point-to-point measurement of the surface of the model using the signed nearest neighbour method resulted in a mean (10%-90%)/2 percentile value for the difference between the impression and original model (trueness) as well as the difference between impressions within a test group (precision). Trueness values ranged from 11.5 μm (VSE) to 60.2 μm (POE), and precision ranged from 12.3 μm (VSE) to 66.7 μm (POE). Among the test groups, VSE, VSES, and CER showed the highest trueness and precision. The deviation pattern varied with the impression method. Conventional impressions showed high accuracy across the full dental arch in all groups, except POE and ALG. Conventional and digital impression methods show differences regarding full-arch accuracy. Digital impression systems reveal higher local deviations of the full-arch model. Digital intraoral impression systems do not show superior accuracy compared to highly accurate conventional impression techniques. However, they provide excellent clinical results within their indications applying the correct scanning technique.
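The deviation measure described above, the (10%-90%)/2 percentile spread of signed nearest-neighbour distances, can be sketched in a few lines. This is a 1-D toy illustration with invented distances, not the study's 3-D surface comparison:

```python
# Hedged sketch: the (10%-90%)/2 percentile measure applied to signed
# point-to-point distances between a test model and the reference.
# Values are invented, in micrometres.

def percentile(sorted_vals, q):
    """Linear-interpolation percentile (q in [0, 1]) of pre-sorted values."""
    idx = q * (len(sorted_vals) - 1)
    lo, hi = int(idx), min(int(idx) + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def deviation_measure(signed_distances):
    """Half the spread between the 10th and 90th percentiles."""
    vals = sorted(signed_distances)
    return (percentile(vals, 0.9) - percentile(vals, 0.1)) / 2

# toy signed nearest-neighbour distances for one impression
devs = [-20, -12, -5, -1, 0, 2, 6, 11, 18, 25]
print(round(deviation_measure(devs), 2))  # 15.75
```

Trueness would apply this measure to distances against the reference model; precision applies it to distances between repeated impressions within a group.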
Diagnostic accuracy of optical coherence tomography in actinic keratosis and basal cell carcinoma.
Olsen, J; Themstrup, L; De Carvalho, N; Mogensen, M; Pellacani, G; Jemec, G B E
2016-12-01
Early diagnosis of non-melanoma skin cancer (NMSC) is potentially possible using optical coherence tomography (OCT), which provides non-invasive, real-time images of skin with micrometre resolution and an imaging depth of up to 2 mm. OCT technology for skin imaging has undergone significant developments, improving image quality substantially. The diagnostic accuracy of any method is influenced by continuous technological development, making it necessary to regularly re-evaluate methods. The objective of this study is to estimate the diagnostic accuracy of OCT in basal cell carcinomas (BCC) and actinic keratosis (AK) as well as in differentiating these lesions from normal skin. A study set consisting of 142 OCT images meeting selection criteria for image quality and diagnosis of AK, BCC and normal skin was presented uniformly to two groups of blinded observers: 5 dermatologists experienced in OCT-image interpretation and 5 dermatologists with no experience in OCT. During the presentation of the study set, the observers filled out a standardized questionnaire regarding the OCT diagnosis. Images were captured using a commercially available OCT machine (Vivosight®, Michelson Diagnostics, UK). Skilled OCT observers were able to diagnose BCC lesions with a sensitivity of 86% to 95% and a specificity of 81% to 98%. Skilled observers with at least one year of OCT experience showed an overall higher diagnostic accuracy compared to inexperienced observers. The study shows an improved diagnostic accuracy of OCT in differentiating AK and BCC from healthy skin using state-of-the-art technology compared to earlier OCT technology, especially concerning BCC diagnosis. Copyright © 2016 Elsevier B.V. All rights reserved.
Balekian, Alex A; Silvestri, Gerard A; Simkovich, Suzanne M; Mestaz, Peter J; Sanders, Gillian D; Daniel, Jamie; Porcel, Jackie; Gould, Michael K
2013-12-01
Management of pulmonary nodules depends critically on the probability of malignancy. Models to estimate probability have been developed and validated, but most clinicians rely on judgment. The aim of this study was to compare the accuracy of clinical judgment with that of two prediction models. Physician participants reviewed up to five clinical vignettes, selected at random from a larger pool of 35 vignettes, all based on actual patients with lung nodules of known final diagnosis. Vignettes included clinical information and a representative slice from computed tomography. Clinicians estimated the probability of malignancy for each vignette. To examine agreement with models, we calculated intraclass correlation coefficients (ICC) and kappa statistics. To examine accuracy, we compared areas under the receiver operator characteristic curve (AUC). Thirty-six participants completed 179 vignettes, 47% of which described patients with malignant nodules. Agreement between participants and models was fair for the Mayo Clinic model (ICC, 0.37; 95% confidence interval [CI], 0.23-0.50) and moderate for the Veterans Affairs model (ICC, 0.46; 95% CI, 0.34-0.57). There was no difference in accuracy between participants (AUC, 0.70; 95% CI, 0.62-0.77) and the Mayo Clinic model (AUC, 0.71; 95% CI, 0.62-0.80; P = 0.90) or the Veterans Affairs model (AUC, 0.72; 95% CI, 0.64-0.80; P = 0.54). In this vignette-based study, clinical judgment and models appeared to have similar accuracy for lung nodule characterization, but agreement between judgment and the models was modest, suggesting that qualitative and quantitative approaches may provide complementary information.
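One of the agreement statistics mentioned above is the kappa statistic, which corrects observed agreement for agreement expected by chance. A minimal sketch of Cohen's kappa for two raters (e.g. clinician vs. model, after binning probabilities into categories), with invented counts:

```python
# Hedged sketch: Cohen's kappa from a 2x2 agreement table between two
# raters ("yes"/"no" judgments). Counts are illustrative only.

def cohens_kappa(a_yes_b_yes, a_yes_b_no, a_no_b_yes, a_no_b_no):
    n = a_yes_b_yes + a_yes_b_no + a_no_b_yes + a_no_b_no
    p_obs = (a_yes_b_yes + a_no_b_no) / n          # observed agreement
    p_a_yes = (a_yes_b_yes + a_yes_b_no) / n       # rater A's "yes" rate
    p_b_yes = (a_yes_b_yes + a_no_b_yes) / n       # rater B's "yes" rate
    p_exp = p_a_yes * p_b_yes + (1 - p_a_yes) * (1 - p_b_yes)
    return (p_obs - p_exp) / (1 - p_exp)

print(round(cohens_kappa(40, 10, 10, 40), 2))  # 0.6
```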
Comparing Emotion Recognition Skills among Children with and without Jailed Parents.
Hindt, Lauren A; Davis, Laurel; Schubert, Erin C; Poehlmann-Tynan, Julie; Shlafer, Rebecca J
2016-01-01
Approximately five million children in the United States have experienced a co-resident parent's incarceration in jail or prison. Parental incarceration is associated with multiple risk factors for maladjustment, which may contribute to the increased likelihood of behavioral problems in this population. Few studies have examined early predictors of maladjustment among children with incarcerated parents, limiting scholars' understanding about potential points for prevention and intervention. Emotion recognition skills may play a role in the development of maladjustment and may be amenable to intervention. The current study examined whether emotion recognition skills differed between 3- to 8-year-old children with and without jailed parents. We hypothesized that children with jailed parents would have a negative bias in processing emotions and less accuracy compared to children without incarcerated parents. Data were drawn from 128 families, including 75 children (53.3% male, M = 5.37 years) with jailed parents and 53 children (39.6% male, M = 5.02 years) without jailed parents. Caregivers in both samples provided demographic information. Children performed an emotion recognition task in which they were asked to produce a label for photos expressing six different emotions (i.e., happy, surprised, neutral, sad, angry, and fearful). For scoring, the number of positive and negative labels were totaled; the number of negative labels provided for neutral and positive stimuli were totaled (measuring negative bias/overextension of negative labels); and valence accuracy (i.e., positive, negative, and neutral) and label accuracy were calculated. Results indicated a main effect of parental incarceration on the number of positive labels provided; children with jailed parents presented significantly fewer positive emotions than the comparison group. 
There was also a main effect of parental incarceration on negative bias (the overextension of negative labels); children with jailed parents had a negative bias compared to children without jailed parents. However, these findings did not hold when controlling for child age, race/ethnicity, receipt of special education services, and caregiver education. The results provide some evidence for the effect of the context of parental incarceration in the development of negative emotion recognition biases. Limitations and implications for future research and interventions are discussed.
NASA Astrophysics Data System (ADS)
Henry, Michael E.; Lauriat, Tara L.; Shanahan, Meghan; Renshaw, Perry F.; Jensen, J. Eric
2011-02-01
Proton magnetic resonance spectroscopy has the potential to provide valuable information about alterations in gamma-aminobutyric acid (GABA), glutamate (Glu), and glutamine (Gln) in psychiatric and neurological disorders. In order to use this technique effectively, it is important to establish the accuracy and reproducibility of the methodology. In this study, phantoms with known metabolite concentrations were used to compare the accuracy of 2D J-resolved MRS, single-echo 30 ms PRESS, and GABA-edited MEGA-PRESS for measuring all three aforementioned neurochemicals simultaneously. The phantoms included metabolite concentrations above and below the physiological range and scans were performed at baseline, 1 week, and 1 month time-points. For GABA measurement, MEGA-PRESS proved optimal with a measured-to-target correlation of R2 = 0.999, with J-resolved providing R2 = 0.973 for GABA. All three methods proved effective in measuring Glu with R2 = 0.987 (30 ms PRESS), R2 = 0.996 (J-resolved) and R2 = 0.910 (MEGA-PRESS). J-resolved and MEGA-PRESS yielded good results for Gln measures with respective R2 = 0.855 (J-resolved) and R2 = 0.815 (MEGA-PRESS). The 30 ms PRESS method proved ineffective in measuring GABA and Gln. When measurement stability at in vivo concentration was assessed as a function of varying spectral quality, J-resolved proved the most stable and immune to signal-to-noise and linewidth fluctuation compared to MEGA-PRESS and 30 ms PRESS.
Busk, P K; Pilgaard, B; Lezyk, M J; Meyer, A S; Lange, L
2017-04-12
Carbohydrate-active enzymes are found in all organisms and participate in key biological processes. These enzymes are classified in 274 families in the CAZy database, but the sequence diversity within each family makes it a major task to identify new family members and to provide a basis for predicting enzyme function. A fast and reliable method for de novo annotation of genes encoding carbohydrate-active enzymes is to identify conserved peptides in the curated enzyme families, followed by matching of the conserved peptides to the sequence of interest, as demonstrated for the glycosyl hydrolase and the lytic polysaccharide monooxygenase families. This approach not only assigns the enzymes to families but also provides functional prediction of the enzymes with high accuracy. We identified conserved peptides for all enzyme families in the CAZy database with Peptide Pattern Recognition. The conserved peptides were matched to protein sequences for de novo annotation and functional prediction of carbohydrate-active enzymes with the Hotpep method. Annotation of protein sequences from 12 bacterial and 16 fungal genomes to families with Hotpep had an accuracy of 0.84 (measured as F1-score) compared to semiautomatic annotation by the CAZy database, whereas the dbCAN HMM-based method had an accuracy of 0.77 with optimized parameters. Furthermore, Hotpep provided a functional prediction with 86% accuracy for the annotated genes. Hotpep is available as a stand-alone application for MS Windows. Hotpep is a state-of-the-art method for automatic annotation and functional prediction of carbohydrate-active enzymes.
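The accuracy figure above is an F1-score, the harmonic mean of precision and recall. A minimal sketch of the computation, with invented annotation counts (not the study's data):

```python
# Hedged sketch: F1-score for comparing automatic annotations against a
# reference annotation. Counts below are illustrative only.

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. genes assigned to the correct family (tp), wrongly assigned (fp),
# and missed (fn) relative to the reference annotation
print(round(f1_score(tp=840, fp=160, fn=160), 2))  # 0.84
```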
NASA Astrophysics Data System (ADS)
Hilton, J. L.
2012-12-01
In September 2010 IAU Commission 4, Ephemerides, organized a working group to provide a recommendation for a preferred format for solar system ephemerides. The purpose of this recommendation is to provide easy access to a wide range of solar system ephemerides for users. The working group, chaired by Hilton, includes representatives from each of the major planetary ephemeris groups and representatives from the satellite and asteroid ephemeris communities. The working group has tentatively decided to recommend the SPK format developed by the Jet Propulsion Laboratory's Navigation and Ancillary Information Facility for use with its SPICE Toolkit. Certain details, however, must still be resolved before a final recommendation is made by the working group. An update is also provided to ongoing analysis comparing the three high accuracy planetary ephemerides, DE421, EPM2008, and INPOP10a. The principal topics of this update are: replacing the INPOP08 ephemeris with the INPOP10a ephemeris, making the comparisons with respect to DE421 rather than DE405, and comparing the TT - TDB values determined in EPM2008 and INPOP10a with the Fairhead & Bretagnon (1990, A&A, 229, 240) model used in DE421 as T_eph.
Investigations on the Bundle Adjustment Results from Sfm-Based Software for Mapping Purposes
NASA Astrophysics Data System (ADS)
Lumban-Gaol, Y. A.; Murtiyoso, A.; Nugroho, B. H.
2018-05-01
Since its first inception, aerial photography has been used for topographic mapping. Large-scale aerial photography contributed to the creation of many of the topographic maps around the world. In Indonesia, a 2013 government directive on spatial management has re-stressed the need for topographic maps, with aerial photogrammetry providing the main method of acquisition. However, the large need to generate such maps is often limited by budgetary reasons. Today, SfM (Structure-from-Motion) offers quicker and less expensive solutions to this problem. However, considering the required precision for topographic missions, these solutions need to be assessed to see if they provide enough level of accuracy. In this paper, a popular SfM-based software Agisoft PhotoScan is used to perform bundle adjustment on a set of large-scale aerial images. The aim of the paper is to compare its bundle adjustment results with those generated by more classical photogrammetric software, namely Trimble Inpho and ERDAS IMAGINE. Furthermore, in order to provide more bundle adjustment statistics to be compared, the Damped Bundle Adjustment Toolbox (DBAT) was also used to reprocess the PhotoScan project. Results show that PhotoScan results are less stable than those generated by the two photogrammetric software programmes. This translates to lower accuracy, which may impact the final photogrammetric product.
Yang, Guang; Raschke, Felix; Barrick, Thomas R; Howe, Franklyn A
2015-09-01
To investigate whether nonlinear dimensionality reduction improves unsupervised classification of ¹H MRS brain tumor data compared with a linear method. In vivo single-voxel ¹H magnetic resonance spectroscopy (55 patients) and ¹H magnetic resonance spectroscopy imaging (MRSI) (29 patients) data were acquired from histopathologically diagnosed gliomas. Data reduction using Laplacian eigenmaps (LE) or independent component analysis (ICA) was followed by k-means clustering or agglomerative hierarchical clustering (AHC) for unsupervised learning to assess tumor grade and for tissue type segmentation of MRSI data. An accuracy of 93% in classification of glioma grade II and grade IV, with 100% accuracy in distinguishing tumor and normal spectra, was obtained by LE with unsupervised clustering, but not with the combination of k-means and ICA. With ¹H MRSI data, LE provided a more linear distribution of data for cluster analysis and better cluster stability than ICA. LE combined with k-means or AHC provided 91% accuracy for classifying tumor grade and 100% accuracy for identifying normal tissue voxels. Color-coded visualization of normal brain, tumor core, and infiltration regions was achieved with LE combined with AHC. The LE method is promising for unsupervised clustering to separate brain and tumor tissue with automated color-coding for visualization of ¹H MRSI data after cluster analysis. © 2014 Wiley Periodicals, Inc.
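The clustering step that follows the embedding can be illustrated with a bare-bones k-means in one dimension. This is a toy sketch, not the study's pipeline: the real inputs are Laplacian-eigenmap coordinates of MRS spectra, and the coordinates below are invented:

```python
# Hedged sketch: 1-D k-means, the kind of clustering applied to embedded
# spectra coordinates. Points and initial centers are illustrative.

def kmeans_1d(points, centers, iters=10):
    """Lloyd's algorithm in one dimension; returns centers and clusters."""
    clusters = [[] for _ in centers]
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# two well-separated groups, e.g. "tumor" vs. "normal" embedded coordinates
points = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
centers, clusters = kmeans_1d(points, centers=[0.0, 1.0])
print(sorted(len(c) for c in clusters))  # [3, 3]
```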
Azola, Alba M; Sunday, Kirstyn L; Humbert, Ianessa A
2017-02-01
Submental surface electromyography (ssEMG) visual biofeedback is widely used to train swallowing maneuvers. This study compares the effect of ssEMG and videofluoroscopy (VF) visual biofeedback on hyo-laryngeal accuracy when training a swallowing maneuver. Furthermore, it examines the clinician's ability to provide accurate verbal cues during swallowing maneuver training. Thirty healthy adults performed the volitional laryngeal vestibule closure maneuver (vLVC), which involves swallowing and sustaining closure of the laryngeal vestibule for 2 s. The study included two stages: (1) first accurate demonstration of the vLVC maneuver, followed by (2) training-20 vLVC training swallows. Participants were randomized into three groups: (a) ssEMG biofeedback only, (b) VF biofeedback only, and (c) mixed biofeedback (VF for the first accurate demonstration achieving stage and ssEMG for the training stage). Participants' performances were verbally critiqued or reinforced in real time while both the clinician and participant were observing the assigned visual biofeedback. VF and ssEMG were continuously recorded for all participants. Results show that accuracy of both vLVC performance and clinician cues was greater with VF biofeedback than with either ssEMG or mixed biofeedback (p < 0.001). Using ssEMG for providing real-time biofeedback during training could lead to errors while learning and training a swallowing maneuver.
Myocardial perfusion imaging with PET
Nakazato, Ryo; Berman, Daniel S; Alexanderson, Erick; Slomka, Piotr
2013-01-01
PET myocardial perfusion imaging (MPI) allows accurate measurement of myocardial perfusion, absolute myocardial blood flow, and function at stress and rest in a single study session performed in approximately 30 min. Various PET tracers are available for MPI; rubidium-82 and nitrogen-13 ammonia are the most commonly used, and a new fluorine-18-based PET-MPI tracer is currently being evaluated. Relative quantification of PET perfusion images shows very high diagnostic accuracy for detection of obstructive coronary artery disease. Dynamic myocardial blood flow analysis has demonstrated additional prognostic value beyond relative perfusion imaging. Patient radiation dose can be reduced and image quality improved with the latest advances in PET/CT equipment. Simultaneous assessment of both anatomy and perfusion by hybrid PET/CT can result in improved diagnostic accuracy. Compared with SPECT-MPI, PET-MPI provides higher diagnostic accuracy for the detection of coronary artery disease, using lower radiation doses and a shorter examination time. PMID:23671459
Zhang, Ming-juan; Yang, Jun; Ge, Heng; Qiang, Lei; Duan, Zong-ming; Wang, Cong-xia; Wang, Rong; Lu, Zhuo-rern
2007-11-01
To improve the specificity and accuracy of an endogenous ouabain measurement assay. Anti-ouabain polyclonal egg-yolk antibody (IgY) and anti-ouabain rabbit antibody (IgG) were prepared, and the specificity and accuracy of enzyme-linked immunosorbent assays (ELISA) using each antibody were compared. The ELISA using IgY gave an average intra-assay coefficient of variation (CV) of 2.03% and an inter-assay CV of 2.34%; the corresponding values for IgG were 2.83% and 3.29%. No significant interference was observed with hydrocortisone or dexamethasone. Cross-reactivity with cedilanid and digoxin was 3.45% vs. 5.95% and 3.20% vs. 5.20% (IgY vs. IgG), respectively. The specificity and accuracy of the ELISA using IgY were better than those of the ELISA using IgG.
Refined Simulation of Satellite Laser Altimeter Full Echo Waveform
NASA Astrophysics Data System (ADS)
Men, H.; Xing, Y.; Li, G.; Gao, X.; Zhao, Y.; Gao, X.
2018-04-01
The return waveform of a satellite laser altimeter plays a vital role in satellite parameter design, data processing, and application. In this paper, a method of refined full-waveform simulation is proposed based on the reflectivity of the ground target, the true emission waveform, and the Laser Profile Array (LPA). ICESat/GLAS data are used for validation, and the simulation accuracy is evaluated with the correlation coefficient. It was found that the accuracy of echo simulation could be significantly improved by considering the reflectivity of the ground target and the emission waveform. However, the laser intensity distribution recorded by the LPA has little effect on the echo simulation accuracy when compared with the distribution of the simulated laser energy. Based on these experimental results, we propose a refinement intended to inform waveform data simulation and processing for the future GF-7 satellite.
A joint tracking method for NSCC based on WLS algorithm
NASA Astrophysics Data System (ADS)
Luo, Ruidan; Xu, Ying; Yuan, Hong
2017-12-01
Navigation signal based on compound carrier (NSCC) has a flexible multi-carrier scheme and configurable scheme parameters, which give it significant navigation-augmentation efficiency in terms of spectral efficiency, tracking accuracy, multipath mitigation, and anti-jamming capability compared with legacy navigation signals. Moreover, its characteristic scheme structure can provide auxiliary information for synchronization algorithm design. Based on the characteristics of NSCC, this paper proposes a joint tracking method using a Weighted Least Squares (WLS) algorithm. In this method, the WLS algorithm jointly estimates each sub-carrier frequency shift through the linear frequency-Doppler relationship, utilizing the known sub-carrier frequencies. The weighting matrix is set adaptively according to sub-carrier power to ensure estimation accuracy. Both theoretical analysis and simulation results show that the tracking accuracy and sensitivity of this method outperform the single-carrier algorithm at low SNR.
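The joint estimate described here — Doppler shift linear in sub-carrier frequency, with power-proportional weights — reduces to a one-parameter weighted least squares fit. A toy numpy sketch (the frequencies, powers, noise model, and slope below are invented for illustration, not the paper's signal parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical NSCC sub-carrier frequencies (Hz) and relative powers.
f = np.array([1.0e6, 2.0e6, 3.0e6, 4.0e6])   # sub-carrier frequencies
power = np.array([4.0, 2.0, 1.0, 1.0])       # stronger carriers -> less noise
true_slope = 5e-7                            # Doppler/frequency ratio (v/c-like)

# Simulated per-carrier Doppler measurements: the shift is linear in
# frequency, and measurement noise shrinks with carrier power.
noise = rng.normal(0.0, 0.05, f.size) / np.sqrt(power)
shift = true_slope * f + noise

# Weighted least squares for the single slope parameter:
#   slope_hat = (f' W f)^{-1} f' W shift,  with W = diag(power)
W = np.diag(power)
slope_hat = (f @ W @ shift) / (f @ W @ f)

doppler_est = slope_hat * f                  # jointly tracked shifts
```

Because every sub-carrier constrains the same slope, the joint estimate averages down the noise of the weak carriers, which is the mechanism behind the claimed low-SNR advantage over single-carrier tracking.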
ERIC Educational Resources Information Center
Weiss, Yael; Katzir, Tami; Bitan, Tali
2015-01-01
The current study examined the effects of transparency and familiarity on word recognition in adult Hebrew dyslexic readers with a phonological processing deficit as compared to typical readers. We measured oral reading response time and accuracy of single nouns in several conditions: diacritics that provide transparent but less familiar…
Visual Scanning: Comparisons Between Student and Instructor Pilots. Final Report.
ERIC Educational Resources Information Center
DeMaio, Joseph; And Others
The performance of instructor pilots and student pilots was compared in two visual scanning tasks. In the first task both groups were shown slides of T-37 instrument displays in which errors were to be detected. Instructor pilots detected errors faster and with greater accuracy than student pilots, thus providing evidence for the validity of the…
Online Periodic Table: A Cautionary Note
ERIC Educational Resources Information Center
Izci, Kemal; Barrow, Lloyd H.; Thornhill, Erica
2013-01-01
The purpose of this study was (a) to evaluate ten online periodic table sources for their accuracy and (b) to compare the types of information and links provided to users. Few studies of online periodic tables have been reported (Diener and Moore 2011; Slocum and Moore in "J Chem Educ" 86(10):1167, 2009). Chemistry students'…
Characterization and delineation of caribou habitat on Unimak Island using remote sensing techniques
NASA Astrophysics Data System (ADS)
Atkinson, Brian M.
The assessment of herbivore habitat quality is traditionally based on quantifying the forages available to the animal across its home range through ground-based techniques. While these methods are highly accurate, they can be time-consuming and expensive, especially for herbivores that occupy vast spatial landscapes. The Unimak Island caribou herd has been decreasing over the last decade at rates that have prompted discussion of management intervention. Frequent inclement weather in this region of Alaska has provided little opportunity to study caribou forage habitat on Unimak Island. The objectives of this study were two-fold: 1) to assess the feasibility of using high-resolution color and near-infrared aerial imagery to map the forage distribution of caribou habitat on Unimak Island, and 2) to assess the use of a new high-resolution multispectral satellite imagery platform, RapidEye, and the effect of its "red-edge" spectral band on vegetation classification accuracy. Maximum likelihood classification algorithms were used to create land cover maps from the aerial and satellite imagery. Accuracy assessments and transformed divergence values were produced to assess vegetative spectral information and classification accuracy. By using RapidEye and aerial digital imagery in a hierarchical supervised classification technique, we were able to produce a high-resolution land cover map of Unimak Island. We obtained an overall accuracy rate of 71.4 percent, comparable to other land cover maps using RapidEye imagery. The "red-edge" spectral band included in the RapidEye imagery provides additional spectral information that allows for a more accurate overall classification, raising overall accuracy by 5.2 percent.
An interpolation method for stream habitat assessments
Sheehan, Kenneth R.; Welsh, Stuart A.
2015-01-01
Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs, as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographic information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, achieving accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers, as well as functional maps to aid the habitat-based management of aquatic species.
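The sample-then-interpolate workflow can be sketched with inverse distance weighting, one of the methods the study compared (natural neighbor, its best performer, needs more geometric machinery). Everything below — the synthetic depth surface, the k-nearest-neighbor variant, the 5% sampling rate — is an illustrative assumption, not the study's data or code:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "stream depth" surface on a grid: smooth trend plus noise.
ny, nx = 30, 70
yy, xx = np.mgrid[0:ny, 0:nx]
depth = 0.5 + 0.02 * xx + 0.3 * np.sin(yy / 5.0) + rng.normal(0, 0.02, (ny, nx))

pts = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
z = depth.ravel()

# 5% spatial sample, mirroring the study's denser sampling scheme.
n = len(z)
idx = rng.choice(n, int(0.05 * n), replace=False)

def idw(sample_pts, sample_z, query_pts, power=2.0, k=8, eps=1e-12):
    """Inverse-distance-weighted prediction from the k nearest samples."""
    d = np.sqrt(((query_pts[:, None, :] - sample_pts[None, :, :]) ** 2).sum(-1))
    nn = np.argsort(d, axis=1)[:, :k]            # k nearest samples per query
    rows = np.arange(len(query_pts))[:, None]
    w = 1.0 / (d[rows, nn] ** power + eps)
    return (w * sample_z[nn]).sum(1) / w.sum(1)

pred = idw(pts[idx], z[idx], pts)
mae = np.abs(pred - z).mean()   # compare prediction against the full census
```

Comparing `pred` against the fully recorded surface is the same accuracy check the study performed between interpolated values and the complete site survey.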
Factors influencing accuracy of cortical thickness in the diagnosis of Alzheimer's disease.
Belathur Suresh, Mahanand; Fischl, Bruce; Salat, David H
2018-04-01
There is great value in the use of structural neuroimaging in the assessment of Alzheimer's disease (AD). However, to date, the predictive value of structural imaging tends to range between 80% and 90% in accuracy, and it is unclear why this is the case given that structural imaging should parallel the pathologic processes of AD. It is possible that clinical misdiagnosis relative to the gold-standard pathologic diagnosis and/or additional brain pathologies are confounding factors contributing to reduced structural imaging classification accuracy. We examined potential factors contributing to misclassification of individuals with clinically diagnosed AD purely from cortical thickness measures. Correctly classified and incorrectly classified groups were compared across a range of demographic, biological, and neuropsychological data, including cerebrospinal fluid biomarkers, amyloid imaging, white matter hyperintensity (WMH) volume, cognitive measures, and genetic factors. Individual subject analyses suggested that at least a portion of the control individuals misclassified as AD from structural imaging additionally harbor substantial AD biomarker pathology and risk, yet are relatively resistant to cognitive symptoms, likely due to "cognitive reserve," and are therefore clinically unimpaired. In contrast, certain clinical control individuals misclassified as AD from cortical thickness had increased WMH volume relative to other controls in the sample, suggesting that vascular conditions may contribute to classification accuracy from cortical thickness measures. These results provide examples of factors that contribute to the accuracy of structural imaging in predicting a clinical diagnosis of AD, and provide important information about considerations for future work aimed at optimizing structure-based diagnostic classifiers for AD. © 2017 Wiley Periodicals, Inc.
ROC analysis for diagnostic accuracy of fracture by using different monitors.
Liang, Zhigang; Li, Kuncheng; Yang, Xiaolin; Du, Xiangying; Liu, Jiabin; Zhao, Xin; Qi, Xiangdong
2006-09-01
The purpose of this study was to compare diagnostic accuracy using two types of monitors. Four radiologists with 10 years' experience interpreted the films of 77 fracture cases twice, using the ViewSonic P75f+ and BARCO MGD221 monitors, with a time interval of 3 weeks between readings; each time, the radiologists used one type of monitor to interpret the images. The image browser used was the Unisight software provided by Atlastiger Company (Shanghai, China), and the interpretation results were analyzed with the LABMRMC software. In receiver operating characteristic studies scoring the presence or absence of fracture, the results of images interpreted on the monochromatic monitors showed a statistically significant difference compared to those interpreted on the color monitors. A significant difference was thus observed between the results obtained with the two kinds of monitors. Color monitors cannot serve as substitutes for monochromatic monitors when interpreting computed radiography (CR) images with fractures.
Validation of enhanced kinect sensor based motion capturing for gait assessment
Müller, Björn; Ilg, Winfried; Giese, Martin A.
2017-01-01
Optical motion capturing systems are expensive and require substantial dedicated space to be set up. On the other hand, they provide unsurpassed accuracy and reliability. In many situations, however, flexibility is required and the motion capturing system can only be placed temporarily. The Microsoft Kinect v2 sensor is comparatively cheap, and promising results have been published with respect to gait analysis. Here we present a motion capturing system that is easy to set up, flexible with respect to sensor locations, and delivers accuracy in gait parameters comparable to a gold-standard motion capturing system (VICON). Further, we demonstrate that sensor setups which track the person from only one side are less accurate and should be replaced by two-sided setups. With respect to commonly analyzed gait parameters, especially step width, our system shows higher agreement with the VICON system than previous reports. PMID:28410413
Matrix-vector multiplication using digital partitioning for more accurate optical computing
NASA Technical Reports Server (NTRS)
Gary, C. K.
1992-01-01
Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
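The partitioning idea is easy to demonstrate numerically: split each matrix entry into low-range digit planes, multiply each plane separately (the step an analog optical processor would perform with limited dynamic range), and recombine with powers of the base. A sketch, with the base, digit count, and matrix sizes chosen arbitrarily for illustration:

```python
import numpy as np

def partition_digits(M, base=16, ndigits=4):
    """Split a nonnegative integer matrix into base-`base` digit planes."""
    planes = []
    R = M.copy()
    for _ in range(ndigits):
        planes.append(R % base)   # digit plane, entries all below `base`
        R //= base
    return planes                 # least significant plane first

def partitioned_matvec(M, v, base=16, ndigits=4):
    # Each digit-plane product needs only low dynamic range; the exact
    # high-precision result is recovered by weighting the plane products
    # with powers of the base.
    planes = partition_digits(M, base, ndigits)
    return sum((p @ v) * base ** k for k, p in enumerate(planes))

rng = np.random.default_rng(3)
M = rng.integers(0, 16 ** 4, size=(5, 5))
v = rng.integers(0, 100, size=5)
result = partitioned_matvec(M, v)   # equals M @ v exactly
```

Each plane's entries stay below the base, so an analog multiplier only ever needs about `log2(base)` bits of precision per pass, at the cost of `ndigits` passes.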
Candra, Henry; Yuwono, Mitchell; Rifai Chai; Nguyen, Hung T; Su, Steven
2016-08-01
Psychotherapy requires appropriate recognition of the patient's facial emotion expression to provide proper treatment during a psychotherapy session. To address this need, this paper proposes a facial emotion recognition system combining the Viola-Jones detector with a feature descriptor we term Edge-Histogram of Oriented Gradients (E-HOG). The performance of the proposed method is compared across various feature sources, including the face, the eyes, the mouth, and both the eyes and the mouth together. Seven classes of basic emotions were successfully identified with 96.4% accuracy using a multi-class Support Vector Machine (SVM). The proposed descriptor E-HOG is much cheaper to compute than traditional HOG, as shown by a significant improvement in processing time of as much as 1833.33% (p-value = 2.43E-17), with a slight reduction in accuracy of only 1.17% (p-value = 0.0016).
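The core of any HOG-style descriptor is a histogram of gradient orientations weighted by edge magnitude. The sketch below is a deliberately simplified stand-in for the authors' E-HOG (their descriptor builds on the full HOG cell/block pipeline, and the SVM classifier stage is omitted here); only the orientation-histogram idea is illustrated:

```python
import numpy as np

def edge_histogram(img, bins=9):
    """Histogram of gradient orientations, weighted by edge magnitude.
    A rough single-histogram simplification, not the paper's E-HOG."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                       # edge strength per pixel
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist   # L1-normalized descriptor

rng = np.random.default_rng(4)
patch = rng.random((64, 64))                     # stand-in for a face crop
desc = edge_histogram(patch)
```

A horizontal intensity ramp, for example, puts all its weight in the first orientation bin, showing that the descriptor responds to edge direction; in the paper, such descriptors computed over face regions feed a multi-class SVM.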
Comparison of tablet-based strategies for incision planning in laser microsurgery
NASA Astrophysics Data System (ADS)
Schoob, Andreas; Lekon, Stefan; Kundrat, Dennis; Kahrs, Lüder A.; Mattos, Leonardo S.; Ortmaier, Tobias
2015-03-01
Recent research has revealed that incision planning in laser surgery using a stylus and tablet outperforms state-of-the-art micro-manipulator-based laser control. To provide more detailed quantitative evidence for that approach, a comparative study of six tablet-based strategies for laser path planning is presented. The reference strategy is defined by monoscopic visualization and continuous path drawing on a graphics tablet. Further concepts deploying stereoscopic visualization or a synthesized laser view, point-based path definition, real-time teleoperation, or a pen display are compared with the reference scenario. Volunteers were asked to redraw and ablate stamped lines on a sample. Performance is assessed by measuring planning accuracy, completion time, and ease of use. Results demonstrate that significant differences exist between the proposed concepts. The reference strategy provides more accurate incision planning than the stereo or laser view scenarios. Real-time teleoperation performs best with respect to completion time, without any significant deviation in accuracy or usability. Point-based planning as well as the pen display provide the most accurate planning and increased ease of use compared to the reference strategy. Consequently, combining the pen display approach with point-based planning has the potential to become a powerful strategy, benefiting from improved hand-eye coordination as well as a simple but accurate technique for path definition. These findings, together with an overall usability scale indicating high acceptance and consistency of the proposed strategies, motivate further advanced tablet-based planning in laser microsurgery.
Reichert, Christoph; Dürschmid, Stefan; Heinze, Hans-Jochen; Hinrichs, Hermann
2017-01-01
In brain-computer interface (BCI) applications the detection of neural processing as revealed by event-related potentials (ERPs) is a frequently used approach to regain communication for people unable to interact through any peripheral muscle control. However, the commonly used electroencephalography (EEG) provides signals of low signal-to-noise ratio, making the systems slow and inaccurate. As an alternative noninvasive recording technique, the magnetoencephalography (MEG) could provide more advantageous electrophysiological signals due to a higher number of sensors and the magnetic fields not being influenced by volume conduction. We investigated whether MEG provides higher accuracy in detecting event-related fields (ERFs) compared to detecting ERPs in simultaneously recorded EEG, both evoked by a covert attention task, and whether a combination of the modalities is advantageous. In our approach, a detection algorithm based on spatial filtering is used to identify ERP/ERF components in a data-driven manner. We found that MEG achieves higher decoding accuracy (DA) compared to EEG and that the combination of both further improves the performance significantly. However, MEG data showed poor performance in cross-subject classification, indicating that the algorithm's ability for transfer learning across subjects is better in EEG. Here we show that BCI control by covert attention is feasible with EEG and MEG using a data-driven spatial filter approach with a clear advantage of the MEG regarding DA but with a better transfer learning in EEG. PMID:29085279
Candelario, Danielle M; Vazquez, Victoria; Jackson, William; Reilly, Timothy
This study determined the completeness, accuracy, and reading level of Wikipedia patient drug information compared with the corresponding United States product insert medication guides. From the Top 200 Drugs of 2012, the top 33 medications with medication guides were analyzed. Medication guides and Wikipedia pages were downloaded on a single date to ensure continuity of Wikipedia content. To quantify the completeness and accuracy of the Wikipedia medication information, a scoring system was adapted from previously published work and compared against the 7 core domains of medication guides. Wikipedia did not provide patient information as complete or accurate as that within the medication guides, with a mean score of 14.73 out of 42 (SD 5.75). Wikipedia medication pages were written at a significantly higher reading level than medication guides (Flesch reading ease score 52.93 vs. 33.24 [P < 0.001]; Flesch-Kincaid grade level 10.26 vs. 6.86 [P < 0.001]). Wikipedia medication pages thus include incomplete and inaccurate patient information compared with the corresponding product medication guides, and Wikipedia patient drug information was written at reading levels above those of medication guides and substantially above the average United States consumer health literacy level. As public use of Wikipedia increases, educating patients about the quality of information on Wikipedia and the availability of adequate patient education resources is ever more important to minimize inaccuracies and incomplete information sharing. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
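Both readability measures cited in this abstract are closed-form functions of word, sentence, and syllable counts. A sketch with a crude vowel-group syllable heuristic (production tools use pronunciation dictionaries and better tokenization, so scores will differ somewhat from published values):

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels, minimum one."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text):
    """Flesch reading ease and Flesch-Kincaid grade level for `text`."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences          # words per sentence
    spw = syllables / len(words)          # syllables per word
    flesch_ease = 206.835 - 1.015 * wps - 84.6 * spw
    fk_grade = 0.39 * wps + 11.8 * spw - 15.59
    return flesch_ease, fk_grade
```

Short, monosyllabic sentences score high on reading ease and low on grade level; long, polysyllabic ones do the opposite, which is the contrast the study quantifies between Wikipedia pages and medication guides.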
Su, Yushan; Hung, Hayley; Stern, Gary; Sverko, Ed; Lao, Randy; Barresi, Enzo; Rosenberg, Bruno; Fellin, Phil; Li, Henrik; Xiao, Hang
2011-11-01
Initiated in 1992, air monitoring of organic pollutants in the Canadian Arctic has provided spatial and temporal trends in support of Canada's participation in the Stockholm Convention on Persistent Organic Pollutants. The analytical laboratory charged with this task was changed in 2002, while field sampling protocols remained unchanged. Three rounds of intensive comparison studies were conducted in 2004, 2005, and 2008 to assess data comparability between the two laboratories. Analysis was compared for organochlorine pesticides (OCPs), polychlorinated biphenyls (PCBs), and polycyclic aromatic hydrocarbons (PAHs) in standards, blind samples of mixed standards, and extracts of real air samples. Good measurement accuracy was achieved by both laboratories when standards were analyzed. Variation of measurement accuracy over time was found for some OCPs and PCBs in standards, in a random and non-systematic manner. Relatively low accuracy in analyzing blind samples was likely related to the sample purification process. Inter-laboratory measurement differences for standards (<30%) and samples (<70%) were generally less than or comparable to those reported in a previous inter-laboratory study with 21 participating laboratories. Regression analysis showed inconsistent data comparability between the two laboratories during the initial stages of the study. Such inter-laboratory differences can complicate efforts to discern long-term trends of pollutants at a given sampling site. It is advisable to maintain long-term measurements with minimal changes in sample analysis.
Feasibility of Multimodal Deformable Registration for Head and Neck Tumor Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fortunati, Valerio, E-mail: v.fortunati@erasmusmc.nl; Verhaart, René F.; Angeloni, Francesco
2014-09-01
Purpose: To investigate the feasibility of using deformable registration in clinical practice to fuse MR and CT images of the head and neck for treatment planning. Method and Materials: A state-of-the-art deformable registration algorithm was optimized, evaluated, and compared with rigid registration. The evaluation was based on manually annotated anatomic landmarks and regions of interest in both modalities. We also developed a multiparametric registration approach, which simultaneously aligns T1- and T2-weighted MR sequences to CT. This was evaluated and compared with single-parametric approaches. Results: Our results show that deformable registration yielded better accuracy than rigid registration, without introducing unrealistic deformations. For deformable registration, an average landmark alignment of approximately 1.7 mm was obtained. For all the regions of interest excluding the cerebellum and the parotids, deformable registration provided a median modified Hausdorff distance of approximately 1 mm. Similar accuracies were obtained for the single-parameter and multiparameter approaches. Conclusions: This study demonstrates that deformable registration of head-and-neck CT and MR images is feasible, with overall significantly higher accuracy than rigid registration.
Deshpande, Gopikrishna; Wang, Peng; Rangaprakash, D; Wilamowski, Bogdan
2015-12-01
Automated recognition and classification of brain diseases are of tremendous value to society. Attention deficit hyperactivity disorder (ADHD) is a diverse spectrum disorder whose diagnosis is based on behavior and hence will benefit from classification utilizing objective neuroimaging measures. Toward this end, an international competition was conducted for classifying ADHD using functional magnetic resonance imaging data acquired from multiple sites worldwide. Here, we consider the data from this competition as an example to illustrate the utility of fully connected cascade (FCC) artificial neural network (ANN) architecture for performing classification. We employed various directional and nondirectional brain connectivity-based methods to extract discriminative features which gave better classification accuracy compared to raw data. Our accuracy for distinguishing ADHD from healthy subjects was close to 90% and between the ADHD subtypes was close to 95%. Further, we show that, if properly used, FCC ANN performs very well compared to other classifiers such as support vector machines in terms of accuracy, irrespective of the feature used. Finally, the most discriminative connectivity features provided insights about the pathophysiology of ADHD and showed reduced and altered connectivity involving the left orbitofrontal cortex and various cerebellar regions in ADHD.
Design and evaluation of an augmented reality simulator using leap motion.
Wright, Trinette; de Ribaupierre, Sandrine; Eagleson, Roy
2017-10-01
Advances in virtual and augmented reality (AR) are having an impact on the medical field in areas such as surgical simulation. Improvements to surgical simulation will provide students and residents with additional training and evaluation methods. This is particularly important for procedures such as the endoscopic third ventriculostomy (ETV), which residents perform regularly. Simulators such as NeuroTouch have been designed to aid in training for this procedure. The authors have designed an affordable and easily accessible ETV simulator, and compare it with the existing NeuroTouch for usability and training effectiveness. The simulator was developed using Unity, Vuforia, and the Leap Motion (LM) for an AR environment. The participants, 16 novices and two expert neurosurgeons, were asked to complete 40 targeting tasks, using the NeuroTouch tool or a virtual hand controlled by the LM to select the position and orientation for each task. The time to complete each task was recorded, and the trajectory log files were used to calculate performance. The resulting data on the novices' and experts' speed and accuracy are compared, and the authors discuss the objective performance of training in terms of targeting speed and accuracy for each system.
QUADAS and STARD: evaluating the quality of diagnostic accuracy studies.
Oliveira, Maria Regina Fernandes de; Gomes, Almério de Castro; Toscano, Cristiana Maria
2011-04-01
To compare the performance of two approaches, one based on the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) and another on the Standards for Reporting Studies of Diagnostic Accuracy (STARD), in evaluating the quality of studies validating the OptiMal® rapid malaria diagnostic test. Articles validating the rapid test published until 2007 were searched in the Medline/PubMed database; this search retrieved 13 articles. A combination of 12 QUADAS criteria and three STARD criteria was compared with the 12 QUADAS criteria alone. Articles that fulfilled at least 50% of the QUADAS criteria were considered of regular to good quality. Of the 13 articles retrieved, 12 fulfilled at least 50% of the QUADAS criteria, and only two fulfilled the combined STARD/QUADAS criteria. Considering the combination of the two criteria (>6 QUADAS and >3 STARD), two studies (15.4%) showed good methodological quality. Article selection using the proposed combination resulted in two to eight articles, depending on the number of items assumed as the cutoff point. The STARD/QUADAS combination has the potential to provide greater rigor when evaluating the quality of studies validating malaria diagnostic tests, given that it incorporates relevant information not contemplated in the QUADAS criteria alone.
Development of a machine learning potential for graphene
NASA Astrophysics Data System (ADS)
Rowe, Patrick; Csányi, Gábor; Alfè, Dario; Michaelides, Angelos
2018-02-01
We present an accurate interatomic potential for graphene, constructed using the Gaussian approximation potential (GAP) machine learning methodology. This GAP model obtains a faithful representation of a density functional theory (DFT) potential energy surface, facilitating highly accurate (approaching the accuracy of ab initio methods) molecular dynamics simulations. This is achieved at a computational cost which is orders of magnitude lower than that of comparable calculations which directly invoke electronic structure methods. We evaluate the accuracy of our machine learning model alongside that of a number of popular empirical and bond-order potentials, using both experimental and ab initio data as references. We find that whilst significant discrepancies exist between the empirical interatomic potentials and the reference data—and amongst the empirical potentials themselves—the machine learning model introduced here provides exemplary performance in all of the tested areas. The calculated properties include: graphene phonon dispersion curves at 0 K (which we predict with sub-meV accuracy), phonon spectra at finite temperature, in-plane thermal expansion up to 2500 K as compared to NPT ab initio molecular dynamics simulations and a comparison of the thermally induced dispersion of graphene Raman bands to experimental observations. We have made our potential freely available online at [http://www.libatoms.org].
Aboagye-Sarfo, Patrick; Mai, Qun; Sanfilippo, Frank M; Preen, David B; Stewart, Louise M; Fatovich, Daniel M
2015-10-01
To develop multivariate vector autoregressive moving average (VARMA) forecast models for predicting emergency department (ED) demand in Western Australia (WA) and compare them to the benchmark univariate autoregressive moving average (ARMA) and Winters' models. Seven years of monthly WA state-wide public hospital ED presentation data, from 2006/07 to 2012/13, were modelled. Graphical and VARMA modelling methods were used for descriptive analysis and model fitting. The VARMA models were compared to the benchmark univariate ARMA and Winters' models to determine their accuracy in predicting ED demand, and the best models were evaluated using forecast error measures. Descriptive analysis of all the dependent variables showed an increasing pattern of ED use with seasonal trends over time. The VARMA models provided more precise and accurate forecasts, with smaller confidence intervals and better measures of accuracy, in predicting ED demand in WA than the ARMA and Winters' methods. VARMA models are a reliable forecasting method to predict ED demand for strategic planning and resource allocation. While the ARMA models are a closely competing alternative, they underestimated future ED demand. Copyright © 2015 Elsevier Inc. All rights reserved.
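The multivariate idea — letting each series' forecast borrow strength from the others — can be sketched as a least-squares VAR(1) on toy data. The coefficient matrix, series count, and sample length below are invented for illustration; the study fitted full VARMA models with seasonal structure in a statistical package:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated monthly demand for 3 ED categories with cross-dependence
# (synthetic, zero-mean; stands in for seven years of monthly counts).
A = np.array([[0.6, 0.2, 0.0],
              [0.1, 0.5, 0.2],
              [0.0, 0.1, 0.7]])
T = 84
y = np.zeros((T, 3))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(0.0, 1.0, 3)

def fit_var1(y):
    """Least-squares VAR(1): find A_hat with y_t ~= A_hat @ y_{t-1}."""
    X, Y = y[:-1], y[1:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)   # solves X B ~= Y
    return B.T                                  # so y_t = B' y_{t-1}

A_hat = fit_var1(y[:72])                 # fit on the first six "years"

# One-step-ahead forecasts over the hold-out final year:
pred = (A_hat @ y[71:83].T).T
mae_var = np.abs(pred - y[72:84]).mean()
```

A univariate AR fit to each series alone would ignore the off-diagonal terms of `A`; the VAR recovers them, which mirrors the study's finding that multivariate models forecast cross-linked demand series more accurately than their univariate benchmarks.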
Image analysis software versus direct anthropometry for breast measurements.
Quieregatto, Paulo Rogério; Hochman, Bernardo; Furtado, Fabianne; Machado, Aline Fernanda Perez; Sabino Neto, Miguel; Ferreira, Lydia Masako
2014-10-01
To compare breast measurements performed using the software packages ImageTool®, AutoCAD® and Adobe Photoshop® with direct anthropometric measurements. Points were marked on the breasts and arms of 40 volunteer women aged between 18 and 60 years. When connecting the points, seven linear segments and one angular measurement on each half of the body, and one medial segment common to both body halves were defined. The volunteers were photographed in a standardized manner. Photogrammetric measurements were performed by three independent observers using the three software packages and compared to direct anthropometric measurements made with calipers and a protractor. Measurements obtained with AutoCAD® were the most reproducible and those made with ImageTool® were the most similar to direct anthropometry, while measurements with Adobe Photoshop® showed the largest differences. Except for angular measurements, significant differences were found between measurements of line segments made using the three software packages and those obtained by direct anthropometry. AutoCAD® provided the highest precision and intermediate accuracy; ImageTool® had the highest accuracy and lowest precision; and Adobe Photoshop® showed intermediate precision and the worst accuracy among the three software packages.
Design and evaluation of an augmented reality simulator using leap motion
de Ribaupierre, Sandrine; Eagleson, Roy
2017-01-01
Advances in virtual and augmented reality (AR) are having an impact on the medical field in areas such as surgical simulation. Improvements to surgical simulation will provide students and residents with additional training and evaluation methods. This is particularly important for procedures such as the endoscopic third ventriculostomy (ETV), which residents perform regularly. Simulators such as NeuroTouch have been designed to aid in training for this procedure. The authors have designed an affordable and easily accessible ETV simulator and compare it with the existing NeuroTouch for usability and training effectiveness. This simulator was developed using Unity, Vuforia and the Leap Motion (LM) controller for an AR environment. The participants, 16 novices and two expert neurosurgeons, were asked to complete 40 targeting tasks. Participants used the NeuroTouch tool or a virtual hand controlled by the LM to select the position and orientation for these tasks. The time to complete each task was recorded, and the trajectory log files were used to calculate performance. The resulting speed and accuracy data from novices and experts are compared, and the authors discuss the objective performance of training in terms of targeting speed and accuracy for each system. PMID:29184667
How Fit is Your Citizen Science Data?
NASA Astrophysics Data System (ADS)
Fischer, H. A.; Gerber, L. R.; Wentz, E. A.
2017-12-01
Data quality and accuracy are fundamental concerns when utilizing citizen science data. Although many methods can be used to assess quality and accuracy, these methods may not be sufficient to qualify citizen science data for widespread use in scientific research. While Data Fitness For Use (DFFU) does not provide a blanket assessment of data quality, it does assess the data's ability to be used for a specific application within a given area (Devillers and Bédard 2007). The STAAq (Spatial, Temporal, Aptness, and Accuracy) assessment was developed to assess the fitness for use of citizen science data; it can be applied to a stand-alone dataset or used to compare multiple datasets. The citizen science data used in this assessment were collected by volunteers of the Map of Life-Denali project, a tourist-centric citizen science project developed through a partnership among Arizona State University, Map of Life at Yale University, and Denali National Park and Preserve. Volunteers use the offline version of the Map of Life app to record their wildlife, insect, and plant observations in the park. To test the STAAq assessment, data from three sources (Map of Life-Denali, the Ride Observe and Record (ROAR) program, and NPS wildlife surveys) were compared to determine which dataset is most fit for use for a specific research question: What is the recent grizzly bear distribution in areas of high visitor use in Denali National Park and Preserve? These datasets were compared and ranked according to how well they performed in each of the components of the STAAq assessment: spatial scale, temporal scale, aptness, and application. The Map of Life-Denali data and the ROAR program data were the most fit for use for this research question. The STAAq assessment can be adjusted to assess the fitness for use of a single dataset or to compare any number of datasets.
This data fitness for use assessment provides a means to assess data fitness instead of data quality for citizen science data.
Atropos: specific, sensitive, and speedy trimming of sequencing reads.
Didion, John P; Martin, Marcel; Collins, Francis S
2017-01-01
A key step in the transformation of raw sequencing reads into biological insights is the trimming of adapter sequences and low-quality bases. Read trimming has been shown to increase the quality and reliability while decreasing the computational requirements of downstream analyses. Many read trimming software tools are available; however, no tool simultaneously provides the accuracy, computational efficiency, and feature set required to handle the types and volumes of data generated in modern sequencing-based experiments. Here we introduce Atropos and show that it trims reads with high sensitivity and specificity while maintaining leading-edge speed. Compared to other state-of-the-art read trimming tools, Atropos achieves significant increases in trimming accuracy while remaining competitive in execution times. Furthermore, Atropos maintains high accuracy even when trimming data with elevated rates of sequencing errors. The accuracy, high performance, and broad feature set offered by Atropos make it an appropriate choice for the pre-processing of Illumina, ABI SOLiD, and other current-generation short-read sequencing datasets. Atropos is open source and free software written in Python (3.3+) and available at https://github.com/jdidion/atropos.
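Atropos itself uses error-tolerant, alignment-based adapter matching; the sketch below only illustrates the core 3' adapter-trimming task with exact matches (the function name and overlap rule are ours, not part of the Atropos API):

```python
def trim_3prime_adapter(read, adapter, min_overlap=3):
    """Cut a 3' adapter off a read. The adapter may occur in full inside
    the read, or only a prefix of it may overlap the read's 3' end."""
    pos = read.find(adapter)
    if pos != -1:
        return read[:pos]          # full adapter found: keep the insert only
    # Otherwise look for the longest read-suffix == adapter-prefix overlap.
    for k in range(min(len(read), len(adapter) - 1), min_overlap - 1, -1):
        if read.endswith(adapter[:k]):
            return read[:-k]
    return read                    # no credible adapter evidence: leave untouched
```

A `min_overlap` floor is the usual guard against trimming bases that match the adapter prefix by chance; real trimmers additionally tolerate mismatches and indels.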
Fusing face-verification algorithms and humans.
O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon
2007-10-01
It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans.
Wong, Yu-Tung; Finley, Charles C; Giallo, Joseph F; Buckmire, Robert A
2011-08-01
To introduce a novel method of combining robotics and the CO2 laser micromanipulator to provide excellent precision and performance repeatability designed for surgical applications. Pilot feasibility study. We developed a portable robotic controller that appends to a standard CO2 laser micromanipulator. The robotic accuracy and laser beam path repeatability were compared to six experienced users of the industry standard micromanipulator performing the same simulated surgical tasks. Helium-neon laser beam video tracking techniques were employed. The robotic controller demonstrated superiority over experienced human manual micromanipulator control in accuracy (laser path within 1 mm of idealized centerline), 97.42% (standard deviation [SD] 2.65%) versus 85.11% (SD 14.51%), P = .018; and laser beam path repeatability (area of laser path divergence on successive trials), 21.42 mm² (SD 4.35 mm²) versus 65.84 mm² (SD 11.93 mm²), P = .006. Robotic micromanipulator control enhances accuracy and repeatability for specific laser tasks. Computerized control opens opportunities for alternative user interfaces and additional safety features. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.
Application of preconditioned alternating direction method of multipliers in depth from focal stack
NASA Astrophysics Data System (ADS)
Javidnia, Hossein; Corcoran, Peter
2018-03-01
Postcapture refocusing effect in smartphone cameras is achievable using focal stacks. However, the accuracy of this effect is totally dependent on the combination of the depth layers in the stack. The accuracy of the extended depth of field effect in this application can be improved significantly by computing an accurate depth map, which has been an open issue for decades. To tackle this issue, a framework is proposed based on a preconditioned alternating direction method of multipliers for depth from the focal stack and synthetic defocus application. In addition to its ability to provide high structural accuracy, the optimization function of the proposed framework can, in fact, converge faster and better than state-of-the-art methods. The qualitative evaluation has been done on 21 sets of focal stacks and the optimization function has been compared against five other methods. Later, 10 light field image sets have been transformed into focal stacks for quantitative evaluation purposes. Preliminary results indicate that the proposed framework has a better performance in terms of structural accuracy and optimization in comparison to the current state-of-the-art methods.
Building Energy Simulation Test for Existing Homes (BESTEST-EX) (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, R.; Neymark, J.; Polly, B.
2011-12-01
This presentation discusses the goals of NREL Analysis Accuracy R&D; BESTEST-EX goals; what BESTEST-EX is; how it works; 'Building Physics' cases; 'Building Physics' reference results; 'utility bill calibration' cases; limitations and potential future work. Goals of NREL Analysis Accuracy R&D are: (1) Provide industry with the tools and technical information needed to improve the accuracy and consistency of analysis methods; (2) Reduce the risks associated with purchasing, financing, and selling energy efficiency upgrades; and (3) Enhance software and input collection methods considering impacts on accuracy, cost, and time of energy assessments. BESTEST-EX goals are: (1) Test software predictions of retrofit energy savings in existing homes; (2) Ensure building physics calculations and utility bill calibration procedures perform up to a minimum standard; and (3) Quantify impact of uncertainties in input audit data and occupant behavior. BESTEST-EX is a repeatable procedure that tests how well audit software predictions compare to the current state of the art in building energy simulation. There is no direct truth standard. However, the reference software programs have been subjected to validation testing, including comparisons with empirical data.
Edwards, Jan; Beckman, Mary E; Munson, Benjamin
2004-04-01
Adults' performance on a variety of tasks suggests that phonological processing of nonwords is grounded in generalizations about sublexical patterns over all known words. A small body of research suggests that children's phonological acquisition is similarly based on generalizations over the lexicon. To test this account, production accuracy and fluency were examined in nonword repetitions by 104 children and 22 adults. Stimuli were 22 pairs of nonwords, in which one nonword contained a low-frequency or unattested two-phoneme sequence and the other contained a high-frequency sequence. For a subset of these nonword pairs, segment durations were measured. The same sound was produced with a longer duration (less fluently) when it appeared in a low-frequency sequence, as compared to a high-frequency sequence. Low-frequency sequences were also repeated with lower accuracy than high-frequency sequences. Moreover, children with smaller vocabularies showed a larger influence of frequency on accuracy than children with larger vocabularies. Taken together, these results provide support for a model of phonological acquisition in which knowledge of sublexical units emerges from generalizations made over lexical items.
NASA Astrophysics Data System (ADS)
Holmes, Philip; Eckhoff, Philip; Wong-Lin, K. F.; Bogacz, Rafal; Zacksenhouse, Miriam; Cohen, Jonathan D.
2010-03-01
We describe how drift-diffusion (DD) processes - systems familiar in physics - can be used to model evidence accumulation and decision-making in two-alternative, forced choice tasks. We sketch the derivation of these stochastic differential equations from biophysically-detailed models of spiking neurons. DD processes are also continuum limits of the sequential probability ratio test and are therefore optimal in the sense that they deliver decisions of specified accuracy in the shortest possible time. This leaves open the critical balance of accuracy and speed. Using the DD model, we derive a speed-accuracy tradeoff that optimizes reward rate for a simple perceptual decision task, compare human performance with this benchmark, and discuss possible reasons for prevalent sub-optimality, focussing on the question of uncertain estimates of key parameters. We present an alternative theory of robust decisions that allows for uncertainty, and show that its predictions provide better fits to experimental data than a more prevalent account that emphasises a commitment to accuracy. The article illustrates how mathematical models can illuminate the neural basis of cognitive processes.
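A minimal Monte-Carlo sketch of the two-alternative DD process described above (all parameter values are illustrative, not taken from the article): raising the decision threshold buys accuracy at the cost of time, which is exactly the speed-accuracy tradeoff the model formalizes.

```python
import random

def dd_trial(drift=0.8, sigma=1.0, threshold=1.0, dt=0.01, rng=random):
    """Integrate dx = drift*dt + sigma*dW until |x| reaches the threshold.
    Hitting +threshold counts as the correct choice; returns (correct, time)."""
    x, t = 0.0, 0.0
    step_sd = sigma * dt ** 0.5
    while abs(x) < threshold:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return x >= threshold, t

def speed_accuracy(threshold, n=2000, seed=1):
    """Estimate mean accuracy and mean decision time at a given threshold."""
    rng = random.Random(seed)
    hits, total_t = 0, 0.0
    for _ in range(n):
        ok, t = dd_trial(threshold=threshold, rng=rng)
        hits += ok
        total_t += t
    return hits / n, total_t / n
```

With a wider threshold, both the fraction of correct decisions and the mean decision time increase, so a reward-rate criterion picks an intermediate threshold.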
Constrained independent component analysis approach to nonobtrusive pulse rate measurements
NASA Astrophysics Data System (ADS)
Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.
2012-07-01
Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.
Airborne Laser/GPS Mapping of Assateague National Seashore Beach
NASA Technical Reports Server (NTRS)
Krabill, W. B.; Wright, C. W.; Brock, John C.; Swift, R. N.; Frederick, E. B.; Manizade, S. S.; Yungel, J. K.; Martin, C. F.; Sonntag, J. G.; Duffy, Mark
1997-01-01
Results are presented from topographic surveys of the Assateague Island National Seashore using recently developed Airborne Topographic Mapper (ATM) and kinematic Global Positioning System (GPS) technology. In November 1995, and again in May 1996, the NASA Arctic Ice Mapping (AIM) group from the Goddard Space Flight Center's Wallops Flight Facility conducted the topographic surveys as a part of technology enhancement activities prior to conducting missions to measure the elevation of extensive sections of the Greenland Ice Sheet as part of NASA's Global Climate Change program. Differences between overlapping portions of both surveys are compared for quality control. An independent assessment of the accuracy of the ATM survey is provided by comparison to surface surveys which were conducted using standard techniques. The goal of these projects is to make these measurements to an accuracy of +/- 10 cm. Differences between the fall 1995 and spring 1996 surveys provide an assessment of net changes in the beach morphology over an annual cycle.
Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li
2011-01-01
Background: Support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM versus linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Different from traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification together with voxel selection schemes, in terms of classification accuracy and computation time. Methodology/Principal Findings: Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. The overall performances of the voxel selection and classification methods were then compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy in less time. Conclusions/Significance: The present work provides the first empirical result of linear and RBF SVM in classification of fMRI data, combined with voxel selection methods. Based on these findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if computation time matters more, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice. PMID:21359184
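For concreteness, the RBF kernel that distinguishes the non-linear SVM above from the linear one is simply a Gaussian of the squared Euclidean distance between voxel-feature vectors. A minimal sketch (the `gamma` value is illustrative; a real study would tune it):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """K(x, y) = exp(-gamma * ||x - y||^2); equals 1 when x == y and
    decays toward 0 as the voxel patterns grow more dissimilar."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def gram_matrix(samples, gamma=0.5):
    """Kernel (Gram) matrix of pairwise similarities an SVM solver consumes."""
    return [[rbf_kernel(a, b, gamma) for b in samples] for a in samples]
```

Because the kernel depends only on distances, its behaviour shifts with dimensionality, one intuition for why the RBF/linear ranking in the study flips between low- and high-dimensional voxel spaces.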
Balanced Flow Metering and Conditioning: Technology for Fluid Systems
NASA Technical Reports Server (NTRS)
Kelley, Anthony R.
2006-01-01
Revolutionary new technology that creates balanced conditions across the face of a multi-hole orifice plate has been developed, patented and exclusively licensed for commercialization. This balanced flow technology simultaneously measures mass flow rate, volumetric flow rate, and fluid density with little or no straight pipe run requirements. Initially, the balanced plate was a drop-in replacement for a traditional orifice plate, but testing revealed substantially better performance compared to the orifice plate: 10 times better accuracy, 2 times faster (shorter distance) pressure recovery, 15 times less acoustic noise energy generation, and 2.5 times less permanent pressure loss. Testing at MSFC during 2004 revealed several configurations of the balanced flow meter that match the accuracy of Venturi meters while having only slightly more permanent pressure loss. However, the balanced meter only requires a 0.25 inch plate and has no upstream or downstream straight pipe requirements. As a fluid conditioning device, the fluid usually reaches fully developed flow within 1 pipe diameter of the balanced conditioning plate. This paper describes the basic balanced flow metering technology, provides performance details generated by testing to date, and provides implementation details along with the calculations required for differing degrees of flow metering accuracy.
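The abstract reports performance only; for context, the classical single-orifice relation below (the ISO 5167 incompressible form, with an illustrative discharge coefficient) shows how any differential-pressure plate converts pressure drop and fluid density into mass flow, the quantities this technology measures. The numbers in the example are invented:

```python
import math

def orifice_mass_flow(dp_pa, rho, d_orifice_m, d_pipe_m, cd=0.62):
    """Classical orifice equation (incompressible form):
    qm = Cd / sqrt(1 - beta^4) * (pi/4) * d^2 * sqrt(2 * dp * rho),
    where beta is the orifice-to-pipe diameter ratio."""
    beta = d_orifice_m / d_pipe_m
    area = math.pi / 4.0 * d_orifice_m ** 2
    return cd / math.sqrt(1.0 - beta ** 4) * area * math.sqrt(2.0 * dp_pa * rho)

# Water (1000 kg/m^3) through a 50 mm orifice in a 100 mm pipe at 10 kPa drop.
qm = orifice_mass_flow(10000.0, 1000.0, 0.05, 0.10)
```

Note the square-root dependence: doubling the differential pressure raises the indicated mass flow by a factor of sqrt(2), which is why plate meters need accurate dP transducers.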
Volumetric calibration of a plenoptic camera.
Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S
2018-02-01
The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
PPP Sliding Window Algorithm and Its Application in Deformation Monitoring.
Song, Weiwei; Zhang, Rui; Yao, Yibin; Liu, Yanyan; Hu, Yuming
2016-05-31
Compared with the double-difference relative positioning method, the precise point positioning (PPP) algorithm avoids the selection of a static reference station, directly measures three-dimensional position changes at the observation site, and exhibits superiority in a variety of deformation monitoring applications. However, because of the influence of various observation errors, the accuracy of PPP is generally at the cm-dm level, which cannot meet the requirements of high-precision deformation monitoring. In most monitoring applications, the observation stations remain stationary, which can be provided as a priori constraint information. In this paper, a new PPP algorithm based on a sliding window is proposed to improve positioning accuracy. First, data from an IGS tracking station were processed using both the traditional and the new PPP algorithm; the results showed that the new algorithm can effectively improve positioning accuracy, especially in the elevation direction. Then, an earthquake simulation platform was used to simulate an earthquake event; the results illustrated that the new algorithm can effectively detect vibration changes at a reference station during an earthquake. Finally, experimental results from the observed Wenchuan earthquake showed that the new algorithm is feasible for monitoring real earthquakes and providing early-warning alerts.
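The paper's a priori constraint, a station known to be static, can be illustrated with a toy sliding-window smoother over epoch-wise coordinate solutions (a deliberate simplification: the actual algorithm works at the carrier-phase observation level, and the window length here is arbitrary):

```python
from collections import deque

def sliding_window_positions(epochs, window=10):
    """Average epoch-wise position solutions over a sliding window.
    Because the station is assumed static, averaging suppresses epoch
    noise, yet a real coordinate step still shows up after `window` epochs."""
    buf = deque(maxlen=window)
    smoothed = []
    for pos in epochs:  # pos = (east, north, up) per epoch
        buf.append(pos)
        smoothed.append(tuple(sum(c) / len(buf) for c in zip(*buf)))
    return smoothed
```

On a noise-free series with a 0.5 m step in the up component, the smoothed track holds at zero before the step and converges to the new coordinate one window-length later, which is the detection behaviour the earthquake experiments rely on.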
A Novel Sensor System for Measuring Wheel Loads of Vehicles on Highways
Zhang, Wenbin; Suo, Chunguang; Wang, Qi
2008-01-01
With the development of highway transportation and business trade, vehicle Weigh-In-Motion (WIM) technology has become a key technology for measuring traffic loads. In this paper a novel WIM system based on monitoring of pavement strain responses in rigid pavement was investigated. In this WIM system, multiple low-cost, lightweight, small-volume and high-accuracy embedded concrete strain sensors were used as WIM sensors to measure rigid pavement strain responses. In order to verify the feasibility of the method, a system prototype based on multiple sensors was designed and deployed on a relatively busy freeway. Field calibration and tests were performed with known two-axle truck wheel loads, and the measurement errors were calculated based on the static weights measured with a static weighbridge. This enables the weights of other vehicles to be calculated from the calibration constant. Calibration and test results for individual sensors and for three-sensor fusions are both provided. Repeatability, sources of error, and weight accuracy are discussed. Successful results showed that the proposed method was feasible and proved to have high accuracy. Furthermore, a sample-mean approach fusing multiple individual sensors could provide better performance than individual sensors. PMID:27873952
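The sample-mean fusion finding has a simple statistical basis: averaging N independent, unbiased sensor readings shrinks the error standard deviation by roughly a factor of sqrt(N). A toy sketch (the noise level and wheel load are invented, not the paper's calibration values):

```python
import random
import statistics

def fused_estimates(true_load, n_sensors, trials=4000, noise_sd=0.8, seed=7):
    """Each sensor reads the true wheel load plus independent Gaussian noise;
    the fused estimate per vehicle pass is the sample mean across sensors."""
    rng = random.Random(seed)
    return [statistics.fmean(true_load + rng.gauss(0.0, noise_sd)
                             for _ in range(n_sensors))
            for _ in range(trials)]
```

Comparing the spread of single-sensor estimates with three-sensor fusions reproduces the paper's qualitative result: fusion keeps the estimate unbiased while tightening its scatter.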
Bisgin, Halil; Bera, Tanmay; Ding, Hongjian; Semey, Howard G; Wu, Leihong; Liu, Zhichao; Barnes, Amy E; Langley, Darryl A; Pava-Ripoll, Monica; Vyas, Himansu J; Tong, Weida; Xu, Joshua
2018-04-25
Insect pests, such as pantry beetles, are often associated with food contaminations and public health risks. Machine learning has the potential to provide a more accurate and efficient solution in detecting their presence in food products, which is currently done manually. In our previous research, we demonstrated the feasibility of Artificial Neural Network (ANN) based pattern recognition techniques for species identification in the context of food safety. In this study, we present a Support Vector Machine (SVM) model which improved the average accuracy up to 85%. By contrast, the ANN method yielded ~80% accuracy after extensive parameter optimization. Both methods showed excellent genus-level identification, but SVM showed slightly better accuracy for most species. Highly accurate species-level identification remains a challenge, especially in distinguishing between species from the same genus, which may require improvements in both imaging and machine learning techniques. In summary, our work illustrates a new SVM-based technique and provides a good comparison with the ANN model in our context. We believe such insights will pave a better way forward for the application of machine learning towards species identification and food safety.
Velpuri, Naga M.; Senay, Gabriel B.; Singh, Ramesh K.; Bohms, Stefanie; Verdin, James P.
2013-01-01
Remote sensing datasets are increasingly being used to provide spatially explicit large scale evapotranspiration (ET) estimates. Extensive evaluation of such large scale estimates is necessary before they can be used in various applications. In this study, two monthly MODIS 1 km ET products, MODIS global ET (MOD16) and Operational Simplified Surface Energy Balance (SSEBop) ET, are validated over the conterminous United States at both point and basin scales. Point scale validation was performed using eddy covariance FLUXNET ET (FLET) data (2001–2007) aggregated by year, land cover, elevation and climate zone. Basin scale validation was performed using annual gridded FLUXNET ET (GFET) and annual basin water balance ET (WBET) data aggregated by various hydrologic unit code (HUC) levels. Point scale validation using monthly data aggregated by years revealed that the MOD16 ET and SSEBop ET products showed overall comparable annual accuracies. For most land cover types, both ET products showed comparable results. However, SSEBop showed higher performance for Grassland and Forest classes; MOD16 showed improved performance in the Woody Savanna class. Accuracy of both the ET products was also found to be comparable over different climate zones. However, SSEBop data showed higher skill score across the climate zones covering the western United States. Validation results at different HUC levels over 2000–2011 using GFET as a reference indicate higher accuracies for MOD16 ET data. MOD16, SSEBop and GFET data were validated against WBET (2000–2009), and results indicate that both MOD16 and SSEBop ET matched the accuracies of the global GFET dataset at different HUC levels. Our results indicate that both MODIS ET products effectively reproduced basin scale ET response (up to 25% uncertainty) compared to CONUS-wide point-based ET response (up to 50–60% uncertainty) illustrating the reliability of MODIS ET products for basin-scale ET estimation. 
Results from this research would guide the additional parameter refinement required for the MOD16 and SSEBop algorithms in order to further improve their accuracy and performance for agro-hydrologic applications.
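The point- and basin-scale comparisons described above reduce to standard agreement statistics between an estimated and a reference ET series. As a minimal sketch (not the study's code; the annual basin ET values below are hypothetical), percent bias and RMSE against a water-balance reference could be computed like this:

```python
import math

def percent_bias(est, ref):
    """Percent bias of estimated ET relative to reference ET."""
    return 100.0 * sum(e - r for e, r in zip(est, ref)) / sum(ref)

def rmse(est, ref):
    """Root-mean-square error between estimated and reference ET."""
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(est, ref)) / len(ref))

# Hypothetical annual basin ET (mm/yr): a MODIS product vs. water-balance ET
mod_et = [520.0, 610.0, 480.0, 700.0]
wb_et  = [500.0, 650.0, 500.0, 680.0]

pb = percent_bias(mod_et, wb_et)   # negative = underestimation
err = rmse(mod_et, wb_et)
```

The same two statistics, aggregated by land cover, elevation or climate zone, support the kind of skill comparison reported in the abstract.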
NASA Astrophysics Data System (ADS)
Dube, Timothy; Mutanga, Onisimo; Sibanda, Mbulisi; Bangamwabo, Victor; Shoko, Cletah
2017-08-01
The remote sensing of freshwater resources is increasingly becoming important, due to increased patterns of water use, the current or projected impacts of climate change, and the rapid invasion by lethal water weeds. This study therefore sought to explore the potential of the recently-launched Landsat 8 OLI/TIRS sensor in mapping invasive species in inland lakes. Specifically, the study compares the performance of the newly-launched Landsat 8 sensor, with its more advanced sensor design and image acquisition approach, to the traditional Landsat 7 ETM+ in detecting and mapping the water hyacinth (Eichhornia crassipes) invasive species across Lake Chivero, in Zimbabwe. The analysis of variance test was used to identify windows of spectral separability between water hyacinth and other land cover types. The results showed that portions of the visible (B3) and NIR (B4) regions, as well as the shortwave bands (Bands 8, 9 and 10), of both Landsat 8 OLI and Landsat 7 ETM+ exhibited windows of separability between water hyacinth and other land cover types. Landsat 8 OLI produced a higher overall classification accuracy of 72%, compared with Landsat 7 ETM+, which yielded a lower accuracy of 57%. Water hyacinth was classified with the highest class accuracy (92%) among the land cover types when Landsat 8 OLI data were used. When using Landsat 7 ETM+ data, however, classification accuracies for water hyacinth were relatively lower (67%) than for other land cover types (e.g. water, with an accuracy of 100%). Spectral curves of the old, intermediate and young water hyacinth in Lake Chivero were derived from both (a) Landsat 8 OLI and (b) Landsat 7 ETM+. Overall, the findings of this study underscore the relevance of the new generation of multispectral sensors in providing the primary data required for mapping the spatial distribution, and even configuration, of water weeds at low or no cost over time and space.
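Overall classification accuracies such as the 72% and 57% reported here are conventionally computed from an error (confusion) matrix of validation samples. A minimal sketch with a hypothetical three-class matrix (not the study's data):

```python
def overall_accuracy(confusion):
    """Overall accuracy = trace / total of an error (confusion) matrix."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Hypothetical 3-class error matrix: rows = reference, cols = classified
# classes: water hyacinth, open water, other land cover
cm = [[46, 2, 2],
      [1, 48, 1],
      [5, 3, 42]]

oa = overall_accuracy(cm)  # fraction of validation samples classified correctly
```

Per-class (producer's) accuracies, like the 92% quoted for water hyacinth, are the diagonal entries divided by their row totals.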
Hinnen, Deborah A; Buskirk, Ann; Lyden, Maureen; Amstutz, Linda; Hunter, Tracy; Parkin, Christopher G; Wagner, Robin
2015-03-01
We assessed users' proficiency and efficiency in identifying and interpreting self-monitored blood glucose (SMBG), insulin, and carbohydrate intake data using data management software reports compared with standard logbooks. This prospective, self-controlled, randomized study enrolled insulin-treated patients with diabetes (PWDs) (continuous subcutaneous insulin infusion [CSII] and multiple daily insulin injection [MDI] therapy), patient caregivers (CGVs), and health care providers (HCPs) who were naïve to diabetes data management computer software. Six paired clinical cases (3 CSII, 3 MDI) and associated multiple-choice questions/answers were reviewed by diabetes specialists and presented to participants via a web portal in both software report (SR) and traditional logbook (TL) formats. Participant response time and accuracy were documented and assessed. Participants completed a preference questionnaire at study completion. All participants (54 PWDs, 24 CGVs, 33 HCPs) completed the cases. Participants achieved greater accuracy (assessed by percentage of accurate answers) using the SR versus TL formats: PWDs, 80.3 (13.2)% versus 63.7 (15.0)%, P < .0001; CGVs, 84.6 (8.9)% versus 63.6 (14.4)%, P < .0001; HCPs, 89.5 (8.0)% versus 66.4 (12.3)%, P < .0001. Participants spent less time (minutes) with each case using the SR versus TL formats: PWDs, 8.6 (4.3) versus 19.9 (12.2), P < .0001; CGVs, 7.0 (3.5) versus 15.5 (11.8), P = .0005; HCPs, 6.7 (2.9) versus 16.0 (12.0), P < .0001. The majority of participants preferred using the software reports versus logbook data. Use of the Accu-Chek Connect Online software reports enabled PWDs, CGVs, and HCPs, naïve to diabetes data management software, to identify and utilize key diabetes information with significantly greater accuracy and efficiency compared with traditional logbook information. Use of SRs was preferred over logbooks. © 2014 Diabetes Technology Society.
Tiss, Ali; Timms, John F; Smith, Celia; Devetyarov, Dmitry; Gentry-Maharaj, Aleksandra; Camuzeaux, Stephane; Burford, Brian; Nouretdinov, Ilia; Ford, Jeremy; Luo, Zhiyuan; Jacobs, Ian; Menon, Usha; Gammerman, Alex; Cramer, Rainer
2010-12-01
Our objective was to test the performance of CA125 in classifying serum samples from a cohort of malignant and benign ovarian cancers and age-matched healthy controls and to assess whether combining information from matrix-assisted laser desorption/ionization (MALDI) time-of-flight profiling could improve diagnostic performance. Serum samples from women with ovarian neoplasms and healthy volunteers were subjected to CA125 assay and MALDI time-of-flight mass spectrometry (MS) profiling. Models were built from training data sets using discriminatory MALDI MS peaks in combination with CA125 values and tested their ability to classify blinded test samples. These were compared with models using CA125 threshold levels from 193 patients with ovarian cancer, 290 with benign neoplasm, and 2236 postmenopausal healthy controls. Using a CA125 cutoff of 30 U/mL, an overall sensitivity of 94.8% (96.6% specificity) was obtained when comparing malignancies versus healthy postmenopausal controls, whereas a cutoff of 65 U/mL provided a sensitivity of 83.9% (99.6% specificity). High classification accuracies were obtained for early-stage cancers (93.5% sensitivity). Reasons for high accuracies include recruitment bias, restriction to postmenopausal women, and inclusion of only primary invasive epithelial ovarian cancer cases. The combination of MS profiling information with CA125 did not significantly improve the specificity/accuracy compared with classifications on the basis of CA125 alone. We report unexpectedly good performance of serum CA125 using threshold classification in discriminating healthy controls and women with benign masses from those with invasive ovarian cancer. This highlights the dependence of diagnostic tests on the characteristics of the study population and the crucial need for authors to provide sufficient relevant details to allow comparison. Our study also shows that MS profiling information adds little to diagnostic accuracy. 
This finding is in contrast with other reports and shows the limitations of serum MS profiling for biomarker discovery and as a diagnostic tool.
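Threshold classification of the kind evaluated here turns a CA125 cutoff into a sensitivity/specificity pair. A minimal sketch with hypothetical CA125 values (not the study's cohort):

```python
def sens_spec(values_cases, values_controls, cutoff):
    """Sensitivity and specificity of a 'positive if value >= cutoff' rule."""
    tp = sum(v >= cutoff for v in values_cases)      # true positives
    tn = sum(v < cutoff for v in values_controls)    # true negatives
    return tp / len(values_cases), tn / len(values_controls)

# Hypothetical CA125 values (U/mL) for cancers and healthy controls
cancers  = [120.0, 85.0, 33.0, 400.0, 28.0]
controls = [12.0, 18.0, 25.0, 31.0, 9.0, 14.0]

se30, sp30 = sens_spec(cancers, controls, 30.0)  # low cutoff: sensitive
se65, sp65 = sens_spec(cancers, controls, 65.0)  # high cutoff: specific
```

Raising the cutoff trades sensitivity for specificity, which is exactly the 30 U/mL versus 65 U/mL contrast the abstract reports.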
Paramedic Application of a Triage Sieve: A Paper-Based Exercise.
Cuttance, Glen; Dansie, Kathryn; Rayner, Tim
2017-02-01
Introduction Triage is the systematic prioritization of casualties when there is an imbalance between the needs of these casualties and resource availability. The triage sieve is a recognized process for prioritizing casualties for treatment during mass-casualty incidents (MCIs). While the application of a triage sieve generally is well-accepted, the measurement of its accuracy has been somewhat limited. Obtaining reliable measures for triage sieve accuracy rates is viewed as a necessity for future development in this area. The goal of this study was to investigate how theoretical knowledge acquisition and the practical application of an aide-memoir impacted triage sieve accuracy rates. Two hundred and ninety-two paramedics were allocated randomly to one of four separate sub-groups: a non-intervention control group and three intervention groups, which received an educational review session, an aide-memoir, or both. Participants were asked to triage sieve 20 casualties using a previously trialed questionnaire. The study showed that the non-intervention control group had a correct accuracy rate of 47%, with a similar proportion of casualties under-triaged (37%) but a significantly lower proportion over-triaged (16%). The provision of either an educational review or an aide-memoir significantly increased the correct triage sieve accuracy rate to 77% and 90%, respectively. Participants who received both the educational review and the aide-memoir had an overall accuracy rate of 89%. Over-triage rates were found not to differ significantly across any of the study groups. This study supports the use of an aide-memoir for maximizing MCI triage accuracy rates. A "just-in-time" educational refresher provided comparable benefits; however, its practical application to the MCI setting has significant operational limitations.
In addition, this study provides some guidance on triage sieve accuracy rate measures that can be applied to define acceptable performance of a triage sieve during an MCI. Cuttance G, Dansie K, Rayner T. Paramedic application of a triage sieve: a paper-based exercise. Prehosp Disaster Med. 2017;32(1):3-13.
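Triage sieve accuracy rates of this kind separate into correct, under-triage, and over-triage fractions by comparing assigned against expected priorities. A minimal sketch with hypothetical priority codes (not the study's questionnaire data):

```python
def triage_rates(assigned, expected):
    """Fractions of correct, under-triaged (priority too low) and
    over-triaged (priority too high) casualties.
    Priorities are coded numerically, 1 = highest (immediate)."""
    n = len(expected)
    correct = sum(a == e for a, e in zip(assigned, expected)) / n
    under = sum(a > e for a, e in zip(assigned, expected)) / n  # lower urgency than needed
    over = sum(a < e for a, e in zip(assigned, expected)) / n   # higher urgency than needed
    return correct, under, over

# Hypothetical sieve results for 10 casualties (1=immediate, 2=urgent, 3=delayed)
expected = [1, 1, 2, 2, 2, 3, 3, 3, 3, 1]
assigned = [1, 2, 2, 2, 3, 3, 3, 2, 3, 1]
c, u, o = triage_rates(assigned, expected)
```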
Schonberg, Dana; Wang, Lin-Fan; Bennett, Ariana H; Gold, Marji; Jackson, Emily
2014-11-01
We sought to evaluate the accuracy of assessing gestational age (GA) prior to first trimester medication abortion using last menstrual period (LMP) compared to ultrasound (U/S). We searched Medline, Embase and Cochrane databases through October 2013 for peer-reviewed articles comparing LMP to U/S for GA dating in abortion care. Two teams of investigators independently evaluated data using standard abstraction forms. The US Preventive Services Task Force and Quality Assessment of Diagnostic Accuracy Studies guidelines were used to assess quality. Of 318 articles identified, 5 met inclusion criteria. Three studies reported that 2.5-11.8% of women were eligible for medication abortion by LMP and ineligible by U/S. The number of women who underestimated GA using LMP compared to U/S ranged from 1.8 to 14.8%, with lower rates found when the sample was limited to a GA <63 days. Most women (90.5-99.1%) knew their LMP, 70.8-90.5% with certainty. Our results support that LMP can be used to assess GA prior to medication abortion at GA <63 days. Further research looking at patient outcomes and identifying women eligible for medication abortion by LMP but ineligible by U/S is needed to confirm the safety and effectiveness of providing medication abortion using LMP alone to determine GA. Copyright © 2014 Elsevier Inc. All rights reserved.
Sefton, Gerri; Lane, Steven; Killen, Roger; Black, Stuart; Lyon, Max; Ampah, Pearl; Sproule, Cathryn; Loren-Gosling, Dominic; Richards, Caitlin; Spinty, Jean; Holloway, Colette; Davies, Coral; Wilson, April; Chean, Chung Shen; Carter, Bernie; Carrol, E D
2017-05-01
Pediatric Early Warning Scores are advocated to assist health professionals to identify early signs of serious illness or deterioration in hospitalized children. Scores are derived from the weighting applied to recorded vital signs and clinical observations reflecting deviation from a predetermined "norm." Higher aggregate scores trigger an escalation in care aimed at preventing critical deterioration. Process errors made while recording these data, including plotting or calculation errors, have the potential to impede the reliability of the score. To test this hypothesis, we conducted a controlled study of documentation using five clinical vignettes. We measured the accuracy of vital sign recording, score calculation, and time taken to complete documentation using a handheld electronic physiological surveillance system, VitalPAC Pediatric, compared with traditional paper-based charts. We explored the user acceptability of both methods using a Web-based survey. Twenty-three staff participated in the controlled study. The electronic physiological surveillance system improved the accuracy of vital sign recording, 98.5% versus 85.6%, P < .02, Pediatric Early Warning Score calculation, 94.6% versus 55.7%, P < .02, and saved time, 68 versus 98 seconds, compared with paper-based documentation, P < .002. Twenty-nine staff completed the Web-based survey. They perceived that the electronic physiological surveillance system offered safety benefits by reducing human error while providing instant visibility of recorded data to the entire clinical team.
HLA imputation in an admixed population: An assessment of the 1000 Genomes data as a training set.
Nunes, Kelly; Zheng, Xiuwen; Torres, Margareth; Moraes, Maria Elisa; Piovezan, Bruno Z; Pontes, Gerlandia N; Kimura, Lilian; Carnavalli, Juliana E P; Mingroni Netto, Regina C; Meyer, Diogo
2016-03-01
Methods to impute HLA alleles based on dense single nucleotide polymorphism (SNP) data provide a valuable resource to association studies and evolutionary investigation of the MHC region. The availability of appropriate training sets is critical to the accuracy of HLA imputation, and the inclusion of samples with various ancestries is an important pre-requisite in studies of admixed populations. We assess the accuracy of HLA imputation using 1000 Genomes Project data as a training set, applying it to a highly admixed Brazilian population, the Quilombos from the state of São Paulo. To assess accuracy, we compared imputed and experimentally determined genotypes for 146 samples at 4 HLA classical loci. We found imputation accuracies of 82.9%, 81.8%, 94.8% and 86.6% for HLA-A, -B, -C and -DRB1 respectively (two-field resolution). Accuracies were improved when we included a subset of Quilombo individuals in the training set. We conclude that the 1000 Genomes data is a valuable resource for construction of training sets due to the diversity of ancestries and the potential for a large overlap of SNPs with the target population. We also show that tailoring training sets to features of the target population substantially enhances imputation accuracy. Copyright © 2016 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.
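The two-field imputation accuracies reported here are simply the fraction of imputed allele calls that match the experimentally determined genotypes. A minimal sketch with hypothetical HLA-A calls:

```python
def imputation_accuracy(imputed, typed):
    """Fraction of alleles where the imputed call matches the
    experimentally typed call (two-field resolution)."""
    matches = sum(i == t for i, t in zip(imputed, typed))
    return matches / len(typed)

# Hypothetical two-field HLA-A allele calls for 5 chromosomes
typed   = ["A*02:01", "A*03:01", "A*24:02", "A*01:01", "A*02:01"]
imputed = ["A*02:01", "A*03:01", "A*24:02", "A*02:01", "A*02:01"]
acc = imputation_accuracy(imputed, typed)
```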
NASA Technical Reports Server (NTRS)
Jekeli, C.
1979-01-01
Through the method of truncation functions, the oceanic geoid undulation is divided into two constituents: an inner zone contribution expressed as an integral of surface gravity disturbances over a spherical cap; and an outer zone contribution derived from a finite set of potential harmonic coefficients. Global, average error estimates are formulated for undulation differences, thereby providing accuracies for a relative geoid. The error analysis focuses on the outer zone contribution for which the potential coefficient errors are modeled. The method of computing undulations based on gravity disturbance data for the inner zone is compared to the similar, conventional method which presupposes gravity anomaly data within this zone.
Distinguishing between extra natural inflation and natural inflation after BICEP2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kohri, Kazunori; Lim, C.S.; Lin, Chia-Min, E-mail: kohri@post.kek.jp, E-mail: lim@lab.twcu.ac.jp, E-mail: lin@chuo-u.ac.jp
2014-08-01
In this paper, we carefully calculated the tensor-to-scalar ratio, the running spectral index, and the running of the running of the spectrum for (extra) natural inflation in order to compare with recent BICEP2 data, PLANCK satellite data, and future 21 cm data. We discovered that the prediction for the running spectral index and the running of the running in natural inflation differs from that in the case of extra natural inflation. Near-future observations of the running spectral index can provide only marginal accuracy, which may not allow us to distinguish extra natural inflation from natural inflation clearly unless the experimental accuracy can be further improved.
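For reference, the observables compared in such analyses are conventionally defined through the expansion of the primordial scalar power spectrum about a pivot scale k_*; the block below states these standard definitions (general conventions, not results specific to this paper):

```latex
\ln P_s(k) = \ln A_s + (n_s - 1)\ln\frac{k}{k_*}
  + \frac{\alpha_s}{2}\ln^2\frac{k}{k_*}
  + \frac{\beta_s}{6}\ln^3\frac{k}{k_*},
\qquad
\alpha_s \equiv \frac{\mathrm{d}n_s}{\mathrm{d}\ln k},\quad
\beta_s \equiv \frac{\mathrm{d}\alpha_s}{\mathrm{d}\ln k},\quad
r \equiv \frac{P_t(k_*)}{P_s(k_*)}
```

Here α_s is the running of the spectral index, β_s the running of the running, and r the tensor-to-scalar ratio of tensor to scalar power at the pivot scale.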
Shiiba, Takuro; Kuga, Naoya; Kuroiwa, Yasuyoshi; Sato, Tatsuhiko
2017-10-01
We assessed the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels (DPKs) calculated using the particle and heavy ion transport code system (PHITS) for patient-specific dosimetry in targeted radionuclide treatment (TRT) and compared our data with published data. All mono-energetic and beta-emitting isotope DPKs calculated using PHITS, both in water and compact bone, were in good agreement with those reported in the literature using other MC codes. PHITS provided reliable mono-energetic electron and beta-emitting isotope scaled DPKs for patient-specific dosimetry. Copyright © 2017 Elsevier Ltd. All rights reserved.
FPGA-Based Fused Smart-Sensor for Tool-Wear Area Quantitative Estimation in CNC Machine Inserts
Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto
2010-01-01
Manufacturing processes are of great relevance nowadays, when there is constant demand for better productivity with high quality at low cost. The contribution of this work is the development of a fused smart-sensor, based on an FPGA, to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier, and a 3-axis accelerometer. Results from experimentation show that the fusion of both parameters makes it possible to obtain three times better accuracy than that obtained from the current and vibration signals used individually. PMID:22319304
Doubly stochastic radial basis function methods
NASA Astrophysics Data System (ADS)
Yang, Fenglian; Yan, Liang; Ling, Leevan
2018-06-01
We propose a doubly stochastic radial basis function (DSRBF) method for function recoveries. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distribution was determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our method. The overhead cost for setting up the proposed DSRBF method is O(n^2) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method outperforms not only the constant-shape-parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
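For contrast with the doubly stochastic formulation, the classical constant-shape-parameter LOOCV baseline that the paper compares against can be sketched directly: for each candidate shape parameter, refit a Gaussian-RBF interpolant with one node left out and accumulate the prediction error at that node. This is a brute-force sketch with made-up 1D sample data, not the paper's algorithm:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def loocv_error(xs, ys, eps):
    """Leave-one-out squared error of a Gaussian-RBF interpolant, shape eps."""
    err = 0.0
    for k in range(len(xs)):
        xt = [x for i, x in enumerate(xs) if i != k]
        yt = [y for i, y in enumerate(ys) if i != k]
        A = [[math.exp(-(eps * (xi - xj)) ** 2) for xj in xt] for xi in xt]
        w = solve(A, yt)
        pred = sum(wj * math.exp(-(eps * (xs[k] - xj)) ** 2)
                   for wj, xj in zip(w, xt))
        err += (pred - ys[k]) ** 2
    return err

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [math.sin(2 * math.pi * x) for x in xs]
best_eps = min([0.5, 1.0, 2.0, 4.0], key=lambda e: loocv_error(xs, ys, e))
```

The DSRBF method replaces this single deterministic choice of eps with draws from a distribution estimated by a stochastic variant of the same LOOCV idea.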
a Comparative Analysis of Five Cropland Datasets in Africa
NASA Astrophysics Data System (ADS)
Wei, Y.; Lu, M.; Wu, W.
2018-04-01
Food security, particularly in Africa, is a challenge yet to be resolved. The cropland area and spatial distribution obtained from remote sensing imagery provide vital information. In this paper, we compare five global cropland datasets, CCI Land Cover, GlobCover, MODIS Collection 5, GlobeLand30 and Unified Cropland, for Africa circa 2010 in terms of cropland area and spatial location. The accuracy of the cropland area calculated from the five datasets was analyzed against statistical data. Based on validation samples, the spatial-location accuracies of the five cropland products were assessed by error matrix. The results show that GlobeLand30 has the best fit with the statistics, followed by MODIS Collection 5 and Unified Cropland; GlobCover and CCI Land Cover have lower accuracies. For the spatial location of cropland, GlobeLand30 reaches the highest accuracy, followed by Unified Cropland, MODIS Collection 5 and GlobCover; CCI Land Cover has the lowest accuracy. The spatial-location accuracy of the five datasets in the Csa climate zone, with its suitable farming conditions, is generally higher than in the Bsk zone.
Accuracy and Reliability of the Kinect Version 2 for Clinical Measurement of Motor Function
Kayser, Bastian; Mansow-Model, Sebastian; Verrel, Julius; Paul, Friedemann; Brandt, Alexander U.; Schmitz-Hübsch, Tanja
2016-01-01
Background The introduction of low cost optical 3D motion tracking sensors provides new options for effective quantification of motor dysfunction. Objective The present study aimed to evaluate the Kinect V2 sensor against a gold standard motion capture system with respect to accuracy of tracked landmark movements and accuracy and repeatability of derived clinical parameters. Methods Nineteen healthy subjects were concurrently recorded with a Kinect V2 sensor and an optical motion tracking system (Vicon). Six different movement tasks were recorded with 3D full-body kinematics from both systems. Tasks included walking in different conditions, balance and adaptive postural control. After temporal and spatial alignment, agreement of movement signals was described by Pearson's correlation coefficient and signal to noise ratios per dimension. From these movement signals, 45 clinical parameters were calculated, including ranges of motion, torso sway, movement velocities and cadence. Accuracy of parameters was described as absolute agreement, consistency agreement and limits of agreement. Intra-session reliability of 3 to 5 measurement repetitions was described as repeatability coefficient and standard error of measurement for each system. Results Accuracy of Kinect V2 landmark movements was moderate to excellent and depended on movement dimension, landmark location and performed task. Signal to noise ratio provided information about Kinect V2 landmark stability and indicated larger noise behaviour in feet and ankles. Most of the derived clinical parameters showed good to excellent absolute agreement (30 parameters showed ICC(3,1) > 0.7) and consistency (38 parameters showed r > 0.7) between both systems. Conclusion Given that this system is low-cost, portable and does not require any sensors to be attached to the body, it could provide numerous advantages when compared to established marker-based or wearable-sensor-based systems.
The Kinect V2 has the potential to be used as a reliable and valid clinical measurement tool. PMID:27861541
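The ICC(3,1) agreement statistic cited above (two-way mixed model, consistency, single measurement) can be computed from a simple two-way ANOVA decomposition. A sketch using the Shrout–Fleiss formula, with hypothetical Vicon-vs-Kinect parameter values rather than the study's data:

```python
def icc_3_1(data):
    """ICC(3,1): two-way mixed, consistency, single measurement.
    data: n targets (rows), each with k rater/system scores (columns)."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)    # between targets
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)    # between systems
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_rows - ss_cols                     # residual
    msr = ss_rows / (n - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

# Hypothetical clinical parameter values: Vicon (col 1) vs. Kinect V2 (col 2)
scores = [[10.0, 10.5], [12.0, 12.2], [9.0, 9.4], [15.0, 15.1], [11.0, 11.6]]
icc = icc_3_1(scores)
```

Because the consistency form ignores the systematic column offset, a constant Kinect bias would not lower this coefficient, which is one reason absolute agreement is reported separately.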
Differential diagnosis of neurodegenerative diseases using structural MRI data
Koikkalainen, Juha; Rhodius-Meester, Hanneke; Tolonen, Antti; Barkhof, Frederik; Tijms, Betty; Lemstra, Afina W.; Tong, Tong; Guerrero, Ricardo; Schuh, Andreas; Ledig, Christian; Rueckert, Daniel; Soininen, Hilkka; Remes, Anne M.; Waldemar, Gunhild; Hasselbalch, Steen; Mecocci, Patrizia; van der Flier, Wiesje; Lötjönen, Jyrki
2016-01-01
Different neurodegenerative diseases can cause memory disorders and other cognitive impairments. The early detection and the stratification of patients according to the underlying disease are essential for an efficient approach to this healthcare challenge. This emphasizes the importance of differential diagnostics. Most studies compare patients and controls, or Alzheimer's disease with one other type of dementia. Such a bilateral comparison does not resemble clinical practice, where a clinician is faced with a number of different possible types of dementia. Here we studied which features in structural magnetic resonance imaging (MRI) scans could best distinguish four types of dementia, Alzheimer's disease, frontotemporal dementia, vascular dementia, and dementia with Lewy bodies, and control subjects. We extracted an extensive set of features quantifying volumetric and morphometric characteristics from T1 images, and vascular characteristics from FLAIR images. Classification was performed using a multi-class classifier based on Disease State Index methodology. The classifier provided continuous probability indices for each disease to support clinical decision making. A dataset of 504 individuals was used for evaluation. The cross-validated classification accuracy was 70.6% and balanced accuracy was 69.1% for the five disease groups using only automatically determined MRI features. Vascular dementia patients could be detected with high sensitivity (96%) using features from FLAIR images. Controls (sensitivity 82%) and Alzheimer's disease patients (sensitivity 74%) could be accurately classified using T1-based features, whereas the most difficult group was dementia with Lewy bodies (sensitivity 32%). These results were notably better than the classification accuracies obtained with visual MRI ratings (accuracy 44.6%, balanced accuracy 51.6%).
Different quantification methods provided complementary information and, consequently, the best results were obtained by utilizing several quantification methods. The results show that automatic quantification methods and computerized decision support methods are feasible for clinical practice and provide comprehensive information that may help clinicians in making a diagnosis. PMID:27104138
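Balanced accuracy, reported above alongside raw accuracy, is the mean per-class sensitivity and is therefore robust to the unequal group sizes typical of dementia cohorts. A minimal sketch with hypothetical labels (only the class names are borrowed from the abstract):

```python
def balanced_accuracy(y_true, y_pred, classes):
    """Mean per-class sensitivity (recall), robust to class imbalance."""
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# Hypothetical predictions: AD = Alzheimer's, VaD = vascular dementia,
# DLB = dementia with Lewy bodies, CN = control
truth = ["AD"] * 4 + ["VaD"] * 2 + ["DLB"] * 2 + ["CN"] * 4
pred = ["AD", "AD", "AD", "CN", "VaD", "VaD",
        "AD", "DLB", "CN", "CN", "CN", "AD"]
ba = balanced_accuracy(truth, pred, ["AD", "VaD", "DLB", "CN"])
```

With heavily imbalanced groups, raw accuracy can look good while a small class (here, DLB) is mostly missed; balanced accuracy exposes that.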
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn; Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn; Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element method (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model.
In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when a comparable numerical accuracy is required. - Highlights: • Higher-order cubature points for degrees 7 to 9 are developed. • The effects of the quadrature rule on the mass and stiffness matrices have been investigated. • The cubature points always have positive integration weights. • The method avoids inverting a wide-bandwidth mass matrix. • The accuracy of the TSEM has been improved by about one order of magnitude.
Evaluation of mathematical algorithms for automatic patient alignment in radiosurgery.
Williams, Kenneth M; Schulte, Reinhard W; Schubert, Keith E; Wroe, Andrew J
2015-06-01
Image registration techniques based on anatomical features can serve to automate patient alignment for intracranial radiosurgery procedures in an effort to improve the accuracy and efficiency of the alignment process as well as potentially eliminate the need for implanted fiducial markers. To explore this option, four two-dimensional (2D) image registration algorithms were analyzed: the phase correlation technique, mutual information (MI) maximization, enhanced correlation coefficient (ECC) maximization, and the iterative closest point (ICP) algorithm. Digitally reconstructed radiographs from the treatment planning computed tomography scan of a human skull were used as the reference images, while orthogonal digital x-ray images taken in the treatment room were used as the captured images to be aligned. The accuracy of aligning the skull with each algorithm was compared to the alignment of the currently practiced procedure, which is based on a manual process of selecting common landmarks, including implanted fiducials and anatomical skull features. Of the four algorithms, three (phase correlation, MI maximization, and ECC maximization) demonstrated clinically adequate (i.e., comparable to the standard alignment technique) translational accuracy and improvements in speed compared to the interactive, user-guided technique; however, the ICP algorithm failed to give clinically acceptable results. The results of this work suggest that a combination of different algorithms may provide the best registration results. This research serves as the initial groundwork for the translation of automated, anatomy-based 2D algorithms into a real-world system for 2D-to-2D image registration and alignment for intracranial radiosurgery. This may obviate the need for invasive implantation of fiducial markers into the skull and may improve treatment room efficiency and accuracy. © The Author(s) 2014.
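Of the algorithms compared, phase correlation is the most self-contained to sketch: the normalized cross-power spectrum of two images inverse-transforms to a sharp peak at their relative translation. A minimal 1D illustration with made-up signal values (the 2D radiograph case is analogous, with a 2D FFT):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for tiny demo signals)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    """Naive inverse DFT."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def phase_correlation_shift(a, b):
    """Estimate s such that b[t] == a[(t - s) % n]: the normalized
    cross-power spectrum inverse-transforms to a peak at index s."""
    A, B = dft(a), dft(b)
    R = []
    for fa, fb in zip(A, B):
        p = fa.conjugate() * fb
        R.append(p / abs(p) if abs(p) > 1e-9 else 0j)  # keep phase only
    corr = [c.real for c in idft(R)]
    return corr.index(max(corr))

sig = [0.0, 1.0, 3.0, 7.0, 3.0, 1.0, 0.0, 0.0]
shifted = sig[-3:] + sig[:-3]                 # circularly delayed by 3 samples
est = phase_correlation_shift(sig, shifted)   # recovers the shift
```

Discarding the magnitudes and keeping only phase is what makes the correlation peak sharp, which is why the technique recovers pure translations quickly and robustly.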
Hind, Jacqueline A.; Gensler, Gary; Brandt, Diane K.; Miller Gardner, Patricia J.; Blumenthal, Loreen; Gramigna, Gary D.; Kosek, Steven; Lundy, Donna; McGarvey-Toler, Susan; Rockafellow, Susan; Sullivan, Paula A.; Villa, Marybell; Gill, Gary D.; Lindblad, Anne S.; Logemann, Jeri A.; Robbins, JoAnne
2009-01-01
Accurate detection and classification of aspiration is a critical component of videofluoroscopic swallowing evaluation, the most commonly utilized instrumental method for dysphagia diagnosis and treatment. Currently published literature indicates that inter-judge reliability for the identification of aspiration ranges from poor to fairly good depending on the amount of training provided to clinicians. The majority of extant studies compared judgments among clinicians. No studies included judgments made during the use of a postural compensatory strategy. The purpose of this study was to examine the accuracy of judgments made by speech-language pathologists (SLPs) practicing in hospitals compared with unblinded expert judges when identifying aspiration and using the 8-point Penetration/Aspiration Scale. Clinicians received extensive training for the detection of aspiration and minimal training on use of the Penetration/Aspiration Scale. Videofluoroscopic data were collected from 669 patients as part of a large, randomized clinical trial and included judgments of 10,200 swallows made by 76 clinicians from 44 hospitals in 11 states. Judgments were made on swallows during use of dysphagia compensatory strategies: chin down posture with thin liquids, and thickened liquids (nectar-thick and honey-thick consistencies) in a head neutral posture. The subject population included patients with Parkinson's disease and/or dementia. Kappa statistics indicate high accuracy for all interventions by SLPs for identification of aspiration (all κ > .86) and variable accuracy (range 69%–76%) using the Penetration/Aspiration Scale when compared to expert judges. It is concluded that while the accuracy of identifying the presence of aspiration by SLPs is excellent, more extensive training and/or image enhancement is recommended for precise use of the Penetration/Aspiration Scale. PMID:18953607
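The kappa statistics quoted above correct raw agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa with hypothetical binary aspiration judgments (1 = aspiration observed), not the study's data:

```python
def cohens_kappa(r1, r2, categories):
    """Chance-corrected agreement between two raters on the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)         # chance agreement
             for c in categories)
    return (po - pe) / (1 - pe)

# Hypothetical binary aspiration judgments for 10 swallows
clinician = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
expert    = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
kappa = cohens_kappa(clinician, expert, [0, 1])
```

Because aspiration is relatively rare, raw agreement (here 90%) overstates performance; kappa discounts the agreement two raters would reach by labeling most swallows "no aspiration" by default.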
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva
2014-08-01
To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. © 2013 Wiley Periodicals, Inc.
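The Dice similarity coefficient used above to score segmentation overlap is 2|A∩B|/(|A|+|B|) over two binary masks. A minimal sketch with hypothetical flattened tumor masks (not the study's images):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))

# Hypothetical flattened binary tumor masks (1 = tumor voxel)
algo_mask  = [1, 1, 1, 0, 0, 1, 0, 0]  # algorithm segmentation
radio_mask = [1, 1, 0, 0, 0, 1, 1, 0]  # radiologist segmentation
d = dice(algo_mask, radio_mask)
```

A DSC of 1.0 means identical masks and 0.0 means no overlap, so the reported 0.77 and 0.95 values correspond to substantial and near-perfect overlap respectively.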
Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva
2013-01-01
Purpose Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise and fitting algorithms. To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Methods We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software package, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). Results The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist's segmentation, and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. Conclusion The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175
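The Dice similarity coefficient used above to quantify overlap has a simple closed form, DSC = 2|A∩B| / (|A| + |B|). A minimal sketch with invented binary masks (not the study's data or pipeline):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

# Illustrative 2-D masks: algorithm output vs. radiologist segmentation
algo = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 0]])
radiologist = np.array([[0, 1, 1, 0],
                        [0, 1, 1, 1],
                        [0, 0, 0, 0]])
print(round(dice_coefficient(algo, radiologist), 3))
```

With 5 voxels in each mask and 4 shared, DSC = 8/10 = 0.8, on the same scale as the 0.77 and 0.72 overlaps reported in the abstract (1.0 is perfect overlap).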
60 seconds to survival: A pilot study of a disaster triage video game for prehospital providers.
Cicero, Mark X; Whitfill, Travis; Munjal, Kevin; Madhok, Manu; Diaz, Maria Carmen G; Scherzer, Daniel J; Walsh, Barbara M; Bowen, Angela; Redlener, Michael; Goldberg, Scott A; Symons, Nadine; Burkett, James; Santos, Joseph C; Kessler, David; Barnicle, Ryan N; Paesano, Geno; Auerbach, Marc A
2017-01-01
Disaster triage training for emergency medical service (EMS) providers is not standardized. Simulation training is costly and time-consuming. In contrast, educational video games enable low-cost and more time-efficient standardized training. We hypothesized that players of the video game "60 Seconds to Survival" (60S) would have greater improvements in disaster triage accuracy compared to control subjects who did not play 60S. Participants recorded their demographics and highest EMS training level and were randomized to play 60S (intervention) or serve as controls. At baseline, all participants completed a live school-shooting simulation in which manikins and standardized patients depicted 10 adult and pediatric victims. The intervention group then played 60S at least three times over the course of 13 weeks (time 2). Players triaged 12 patients in three scenarios (school shooting, house fire, tornado), and received in-game performance feedback. At time 2, the same live simulation was conducted for all participants. Controls had no disaster training during the study. The main outcome was improvement in triage accuracy in live simulations from baseline to time 2. Physicians and EMS providers predetermined expected triage level (RED/YELLOW/GREEN/BLACK) via modified Delphi method. There were 26 participants in the intervention group and 21 in the control group. There was no difference in gender, level of training, or years of EMS experience (median 5.5 years intervention, 3.5 years control, p = 0.49) between the groups. At baseline, both groups demonstrated median triage accuracy of 80 percent (IQR: 70-90 percent; p = 0.457). At time 2, the intervention group had a significant improvement from baseline (median accuracy = 90 percent [IQR: 80-90 percent], p = 0.005), while the control group did not (median accuracy = 80 percent [IQR: 80-95 percent], p = 0.174). However, the mean improvement from baseline was not significant between the two groups (difference = 6.5, p = 0.335). 
The intervention group demonstrated a significant improvement in accuracy from baseline to time 2 while the control group did not. However, there was no significant difference in the improvement between the intervention and control groups. These results may be due to small sample size. Future directions include assessment of the game's effect on triage accuracy with a larger, multisite cohort and iterative development to improve 60S.
Libration Point Navigation Concepts Supporting the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Folta, David C.; Moreau, Michael C.; Quinn, David A.
2004-01-01
This work examines the autonomous navigation accuracy achievable for a lunar exploration trajectory from a translunar libration point lunar navigation relay satellite, augmented by signals from the Global Positioning System (GPS). We also provide a brief analysis comparing the libration point relay to lunar orbit relay architectures, and discuss some issues of GPS usage for cis-lunar trajectories.
Atmospheric absorption measurements in the region of 1 mm wavelength.
NASA Technical Reports Server (NTRS)
Emery, R.
1972-01-01
A Froome-type plasma-metal-junction device (1962) was used in high-resolution radiation transmission measurements in the atmosphere at wavelengths from 0.5 to 3.0 mm. The experimental and theoretical results for water vapor absorption lines in two submillimeter wavelength windows were compared, showing that this technique provided a much higher wavelength accuracy than more conventional optical-type spectroscopy.
ERIC Educational Resources Information Center
Berkovits, Shira Melody
2011-01-01
College instructors often provide homework so that their students can review class material; however some students do not take advantage of these review opportunities. This study compared the effects of a certain reward and a lottery reward on the quiz submission rates and accuracy of 112 college students. In Baseline, quizzes were for practice…
Wind speed vector restoration algorithm
NASA Astrophysics Data System (ADS)
Baranov, Nikolay; Petrov, Gleb; Shiriaev, Ilia
2018-04-01
Impulse wind lidar (IWL) signal processing software developed by JSC «BANS» recovers the full wind speed vector from radial projections and provides wind parameter information up to a distance of 2 km. This research studied signal processing techniques for increasing the accuracy and speed of wind parameter calculation. Measurement results from the IWL and a continuous scanning lidar were compared, and IWL data processing modeling results were analyzed.
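The abstract does not detail the retrieval algorithm; a common way to recover the full horizontal wind vector from lidar radial projections is a least-squares fit of v_r(θ) = u·sin θ + v·cos θ over the scan azimuths. A hedged sketch with synthetic, noise-free data (the scan geometry and wind components are assumptions):

```python
import numpy as np

# Hypothetical scan: radial velocities at several azimuth angles
azimuths = np.deg2rad([0, 60, 120, 180, 240, 300])
u_true, v_true = 3.0, -4.0   # assumed east/north wind components (m/s)
v_radial = u_true * np.sin(azimuths) + v_true * np.cos(azimuths)

# Least-squares recovery of (u, v) from the radial projections
A = np.column_stack([np.sin(azimuths), np.cos(azimuths)])
(u_est, v_est), *_ = np.linalg.lstsq(A, v_radial, rcond=None)
print(round(u_est, 6), round(v_est, 6))
```

With noise-free projections the fit recovers (u, v) exactly; with real lidar noise the overdetermined fit averages the error down, which is one reason multiple azimuths improve retrieval accuracy.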
NASA Technical Reports Server (NTRS)
Madore, B. F.; Freedman, W. L.
1994-01-01
Based on both empirical data for nearby galaxies, and on computer simulations, we show that measuring the position of the tip of the first-ascent red-giant branch (TRGB) provides a means of obtaining the distances to nearby galaxies with a precision and accuracy comparable to using Cepheids and/or RR Lyrae variables.
NASA Astrophysics Data System (ADS)
Eyer, L.; Dubath, P.; Saesen, S.; Evans, D. W.; Wyrzykowski, L.; Hodgkin, S.; Mowlavi, N.
2012-04-01
The measurement of the positions, distances, motions and luminosities of stars represents the foundations of modern astronomical knowledge. Launched at the end of the eighties, the ESA Hipparcos satellite was the first space mission dedicated to such measurements. Hipparcos improved position accuracies by a factor of 100 compared to typical ground-based results and provided astrometric and photometric multi-epoch observations of 118,000 stars over the entire sky. The impact of Hipparcos on astrophysics has been extremely valuable and diverse. Building on this important European success, the ESA Gaia cornerstone mission promises an even more impressive advance. Compared to Hipparcos, it will bring a gain of a factor 50 to 100 in position accuracy and of a factor of 10,000 in star number, collecting photometric, spectrophotometric and spectroscopic data for one billion celestial objects. During its 5-year flight, Gaia will measure objects repeatedly, up to a few hundred times, providing an unprecedented database to study the variability of all types of celestial objects. Gaia will bring outstanding contributions, directly or indirectly, to most fields of research in astrophysics, such as the study of our Galaxy and of its stellar constituents, and the search for planets outside the solar system.
Saravanan, Konda Mani; Dunker, A Keith; Krishnaswamy, Sankaran
2017-12-27
More than 60 prediction methods for intrinsically disordered proteins (IDPs) have been developed over the years, many of which are accessible on the World Wide Web. Nearly all of these predictors give balanced accuracies in the ~65%-~80% range. Since predictors are not perfect, further studies are required to uncover the role of amino acid residues in native IDP regions as compared to predicted IDP regions. In the present work, we make use of sequences of 100% predicted IDP regions, false positive disorder predictions, and experimentally determined IDP regions to distinguish the characteristics of native versus predicted IDP regions. A higher occurrence of asparagine is observed in sequences of native IDP regions but not in sequences of false positive predictions of IDP regions. The occurrences of certain combinations of amino acids at the pentapeptide level provide a distinguishing feature of IDPs with respect to globular proteins. The distinguishing features presented in this paper provide insights into the sequence fingerprints of amino acid residues in experimentally determined as compared to predicted IDP regions. These observations and additional work along these lines should enable improvements in the accuracy of disorder prediction algorithms.
NASA Technical Reports Server (NTRS)
Lesar, Douglas E.
1992-01-01
The performance of the NASTRAN CQUAD4 membrane and plate element in the analysis of undamped natural vibration modes of thin fiber reinforced composite plates was evaluated. The element provides natural frequency estimates that are comparable in accuracy to alternative formulations, and, in most cases, deviate by less than 10 percent from experimentally measured frequencies. The predictions lie within roughly equal accuracy bounds for the two material types treated (GFRP and CFRP), and for the ply layups considered (unidirectional, cross-ply, and angle-ply). Effective elastic lamina moduli had to be adjusted for fiber volume fraction to attain this level of frequency accuracy. The lumped mass option provides more accurate frequencies than the consistent mass option. This evaluation concerned only plates with L/t ratios on the order of 100 to 150. Since the CQUAD4 utilizes first-order corrections for transverse laminate shear stiffness, the element should provide useful frequency estimates for plate-like structures with lower L/t. For plates with L/t below 20, consideration should be given to idealizing with 3-D solid elements. Based on the observation that natural frequencies and mode shapes are predicted with acceptable engineering accuracy, it is concluded that CQUAD4 should be a useful and accurate element for transient shock and steady state vibration analysis of naval ship structures.
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
NASA Astrophysics Data System (ADS)
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates the accuracy of the images, mostly based on a set of testing points with uniform accuracy and reliability. However, such a set of testing points is difficult to obtain in areas where field measurement is difficult and high-accuracy reference data are scarce, making it hard to test and evaluate the horizontal accuracy of the orthophoto image. This uncertainty in horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and the expansion of its scope of service. Therefore, this paper proposes a new method for testing the horizontal accuracy of orthophoto images. This method uses testing points with differing accuracy and reliability, sourced from both high-accuracy reference data and field measurement. The new method solves horizontal accuracy detection of orthophoto images in difficult areas and provides a basis for delivering reliable orthophoto images to users.
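The paper's exact formulation is not given in the abstract; one plausible way to combine testing points of unequal reliability, shown here purely as an illustration, is an inverse-variance-weighted RMSE, where each point's squared error is down-weighted by the uncertainty of its own reference coordinates (all numbers are invented):

```python
import numpy as np

# Hypothetical check points: observed horizontal offset (m) of the orthophoto
# against each reference point, and the uncertainty of the reference itself
# (e.g. precise field survey vs. lower-accuracy existing map data)
errors = np.array([0.8, 1.1, 0.6, 1.9, 1.4])       # image-vs-reference offsets
ref_sigma = np.array([0.1, 0.1, 0.1, 0.5, 0.5])    # reference-point std. dev.

# Inverse-variance weights: less reliable reference points contribute less
weights = 1.0 / ref_sigma**2
weighted_rmse = np.sqrt(np.sum(weights * errors**2) / np.sum(weights))
plain_rmse = np.sqrt(np.mean(errors**2))
print(round(weighted_rmse, 3), round(plain_rmse, 3))
```

The weighted estimate leans on the trustworthy survey points, so the two low-reliability points inflate it far less than they inflate the unweighted RMSE.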
Lyons, Mark; Al-Nakeeb, Yahya; Hankey, Joanne; Nevill, Alan
2013-01-01
Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and player's achievement motivation characteristics. Thirteen expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high-intensities (90%) set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine if this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared to the novice players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue by expertise, or fatigue by gender interactions were found. Fatigue effects were also equivalent regardless of player's achievement goal indicators. 
Future research is required to explore the effects of fatigue on performance in tennis using ecologically valid designs that mimic more closely the demands of match play. Key Points Groundstroke accuracy under moderate-intensity fatigue is equivalent to performance at rest. Groundstroke accuracy declines significantly in both expert (40.3% decline) and non-expert (49.6%) tennis players following high-intensity fatigue. Expert players are more consistent, hit more accurate shots and fewer out shots across all fatigue intensities. The effects of fatigue on groundstroke accuracy are the same regardless of gender and player’s achievement goal indicators. PMID:24149809
The Effects of High- and Low-Anxiety Training on the Anticipation Judgments of Elite Performers.
Alder, David; Ford, Paul R; Causer, Joe; Williams, A Mark
2016-02-01
We examined the effects of high- versus low-anxiety conditions during video-based training of anticipation judgments using international-level badminton players facing serves and the transfer to high-anxiety and field-based conditions. Players were assigned to a high-anxiety training (HA), low-anxiety training (LA) or control group (CON) in a pretraining-posttest design. In the pre- and posttest, players anticipated serves from video and on court under high- and low-anxiety conditions. In the video-based high-anxiety pretest, anticipation response accuracy was lower and final fixations shorter when compared with the low-anxiety pretest. In the low-anxiety posttest, HA and LA demonstrated greater accuracy of judgments and longer final fixations compared with pretest and CON. In the high-anxiety posttest, HA maintained accuracy when compared with the low-anxiety posttest, whereas LA had lower accuracy. In the on-court posttest, the training groups demonstrated greater accuracy of judgments compared with the pretest and CON.
Speed Pressure in Conflict Situations Impedes Inhibitory Action Control in Parkinson’s Disease
Van Wouwe, N.C.; van den Wildenberg, W.P.M.; Claassen, D.O.; Kanoff, K.; Bashore, T.R.; Wylie, S.A.
2014-01-01
Parkinson’s disease (PD) is a neurodegenerative basal ganglia disease that disrupts cognitive control processes involved in response selection. The current study investigated the effects of PD on the ability to resolve conflicts during response selection when performance emphasized response speed versus response accuracy. Twenty-one PD patients and 21 healthy controls (HC) completed a Simon conflict task, and a subset of 10 participants from each group provided simultaneous movement-related potential (MRP) data to track patterns of motor cortex activation and inhibition associated with the successful resolution of conflicting response tendencies. Both groups adjusted performance strategically to emphasize response speed or accuracy (i.e., speed-accuracy effect). For HC, interference from a conflicting response was reduced when response accuracy rather than speed was prioritized. For PD patients, however, there was a reduction in interference, but it was not statistically significant. The conceptual framework of the Dual-Process Activation-Suppression (DPAS) model revealed that the groups experienced similar susceptibility to making fast impulsive errors in conflict trials irrespective of speed-accuracy instructions, but PD patients were less proficient and delayed compared to HC at suppressing the interference from these incorrect response tendencies, especially under speed pressure. Analysis of MRPs on response conflict trials showed attenuated inhibition of the motor cortex controlling the conflicting impulsive response tendency in PD patients compared to HC. These results further confirm the detrimental effects of PD on inhibitory control mechanisms and their exacerbation when patients perform under speed pressure. The results also suggest that a downstream effect of inhibitory dysfunction in PD is diminished inhibition of motor cortex controlling conflicting response tendencies. PMID:25017503
Evaluation of Masimo signal extraction technology pulse oximetry in anaesthetized pregnant sheep.
Quinn, Christopher T; Raisis, Anthea L; Musk, Gabrielle C
2013-03-01
Evaluation of the accuracy of Masimo signal extraction technology (SET) pulse oximetry in anaesthetized late gestational pregnant sheep. Prospective experimental study. Seventeen pregnant Merino ewes. Animals included in the study were late gestation ewes undergoing general anaesthesia for Caesarean delivery or foetal surgery in a medical research laboratory. Masimo Radical-7 pulse oximetry (SpO2) measurements were compared to co-oximetry (SaO2) measurements from arterial blood gas analyses. The failure rate of the pulse oximeter was calculated. Accuracy was assessed by Bland & Altman's (2007) limits of agreement method. The effect of mean arterial blood pressure (MAP), perfusion index (PI) and haemoglobin (Hb) concentration on accuracy were assessed by regression analysis. Forty arterial blood samples paired with SpO2 and blood pressure measurements were obtained. SpO2 ranged from 42 to 99% and SaO2 from 43.7 to 99.9%. MAP ranged from 24 to 82 mmHg, PI from 0.1 to 1.56 and Hb concentration from 71 to 114 g/L. Masimo pulse oximetry measurements tended to underestimate oxyhaemoglobin saturation compared to co-oximetry with a bias (mean difference) of -2% and precision (standard deviation of the differences) of 6%. Accuracy appeared to decrease when SpO2 was <75%, however numbers were too small for statistical comparisons. Hb concentration and PI had no significant effect on accuracy, whereas MAP was negatively correlated with SpO2 bias. Masimo SET pulse oximetry can provide reliable and continuous monitoring of arterial oxyhaemoglobin saturation in anaesthetized pregnant sheep during clinically relevant levels of cardiopulmonary dysfunction. Further work is needed to assess pulse oximeter function during extreme hypotension and hypoxaemia. © 2012 The Authors. Veterinary Anaesthesia and Analgesia. © 2012 Association of Veterinary Anaesthetists and the American College of Veterinary Anesthesiologists.
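The bias and precision reported above follow the Bland-Altman limits-of-agreement method: bias is the mean of the paired device-minus-reference differences and precision is their standard deviation. A minimal sketch with invented SpO2/SaO2 pairs (not the study's data):

```python
import numpy as np

# Illustrative paired readings: pulse oximeter SpO2 vs. co-oximeter SaO2 (%)
spo2 = np.array([97, 94, 88, 76, 65, 99, 91, 82])
sao2 = np.array([98, 95, 91, 79, 70, 99, 93, 85])

differences = spo2 - sao2
bias = differences.mean()              # mean difference (device minus reference)
precision = differences.std(ddof=1)    # SD of the differences
limits = (bias - 1.96 * precision, bias + 1.96 * precision)
print(round(bias, 2), round(precision, 2))
print([round(l, 2) for l in limits])
```

A negative bias, as in these invented numbers and in the study (-2%), means the device underestimates the reference; the limits of agreement bracket where roughly 95% of differences are expected to fall.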
Landry, Guillaume; Reniers, Brigitte; Granton, Patrick Vincent; van Rooijen, Bart; Beaulieu, Luc; Wildberger, Joachim E; Verhaegen, Frank
2011-09-01
Dual energy CT (DECT) imaging can provide both the electron density ρ(e) and effective atomic number Z(eff), thus facilitating tissue type identification. This paper investigates the accuracy of a dual source DECT scanner by means of measurements and simulations. Previous simulation work suggested improved Monte Carlo dose calculation accuracy when compared to single energy CT for low energy photon brachytherapy, but lacked validation. As such, we aim to validate our DECT simulation model in this work. A cylindrical phantom containing tissue mimicking inserts was scanned with a second generation dual source scanner (SOMATOM Definition FLASH) to obtain Z(eff) and ρ(e). A model of the scanner was designed in ImaSim, a CT simulation program, and was used to simulate the experiment. Accuracy of measured Z(eff) (labelled Z) was found to vary from -10% to 10% from low to high Z tissue substitutes while the accuracy on ρ(e) from DECT was about 2.5%. Our simulation reproduced the experiments within ±5% for both Z and ρ(e). A clinical DECT scanner was able to extract Z and ρ(e) of tissue substitutes. Our simulation tool replicates the experiments within a reasonable accuracy. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Propagation of measurement accuracy to biomass soft-sensor estimation and control quality.
Steinwandter, Valentin; Zahel, Thomas; Sagmeister, Patrick; Herwig, Christoph
2017-01-01
In biopharmaceutical process development and manufacturing, the online measurement of biomass and derived specific turnover rates is a central task to physiologically monitor and control the process. However, hard-type sensors such as dielectric spectroscopy, broth fluorescence, or permittivity measurement harbor various disadvantages. Therefore, soft-sensors, which use measurements of the off-gas stream and substrate feed to reconcile turnover rates and provide an online estimate of the biomass formation, are smart alternatives. For the reconciliation procedure, mass and energy balances are used together with accuracy estimations of measured conversion rates, which have so far been arbitrarily chosen and held static over the entire process. In this contribution, we present a novel strategy within the soft-sensor framework (named adaptive soft-sensor) to propagate uncertainties from measurements to conversion rates and demonstrate the benefits: for industrially relevant conditions, the error of the resulting estimated biomass formation rate and specific substrate consumption rate could thereby be decreased by 43 and 64 %, respectively, compared to traditional soft-sensor approaches. Moreover, we present a generic workflow to determine the required raw signal accuracy to obtain predefined accuracies of soft-sensor estimations. Thereby, appropriate measurement devices and maintenance intervals can be selected. Furthermore, using this workflow, we demonstrate that the estimation accuracy of the soft-sensor can be additionally and substantially increased.
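The adaptive soft-sensor propagates raw-signal uncertainty to conversion rates. As a much-simplified stand-in for the paper's reconciliation procedure, first-order (Gauss) error propagation for a CO2 evolution rate r = F·(x_out - x_in) looks like this (all measurement values and uncertainties below are assumptions, not the paper's data):

```python
import math

# r = F * (x_out - x_in): CO2 evolution from gas flow F and mole fractions.
F, sigma_F = 10.0, 0.2          # gas flow (L/min) and its std. dev. (assumed)
x_in, x_out = 0.0004, 0.0250    # CO2 mole fractions in and out (assumed)
sigma_x = 0.0005                # std. dev. of the off-gas analyser (assumed)

rate = F * (x_out - x_in)
# First-order propagation: dr/dF = (x_out - x_in), dr/dx_out = F
sigma_rate = math.sqrt(((x_out - x_in) * sigma_F) ** 2 + (F * sigma_x) ** 2)
relative_error = sigma_rate / rate
print(round(rate, 4), round(100 * relative_error, 1))
```

Propagating each raw signal's actual uncertainty in this way, instead of assuming a fixed rate accuracy, is what lets the adaptive scheme weight the balances correctly; it also shows which raw signal (here the gas analyser term dominates) must improve to hit a target rate accuracy.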
Validation of heart rate extraction through an iPhone accelerometer.
Kwon, Sungjun; Lee, Jeongsu; Chung, Gih Sung; Park, Kwang Suk
2011-01-01
Ubiquitous medical technology may provide advanced utility for evaluating the status of the patient beyond the clinical environment. The iPhone provides the capacity to measure heart rate, as it contains a 3-axis accelerometer that is sufficiently sensitive to perceive the tiny body movements caused by the pumping of the heart. In this preliminary study, an iPhone was tested and evaluated as a reliable heart-rate extractor for medical use by comparison with a reference electrocardiogram (ECG). By comparing the heart rate extracted from the acquired acceleration data with that extracted from the reference ECG signal, the iPhone demonstrated sufficient accuracy and consistency as a heart-rate extractor.
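The abstract does not specify the extraction algorithm; one simple approach to heart-rate extraction from an acceleration signal, sketched here on synthetic data rather than real accelerometer recordings, is to take the dominant spectral peak within a physiological band:

```python
import numpy as np

fs = 100.0                              # sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)            # 30 s of data
heart_hz = 72 / 60.0                    # simulated 72 bpm cardiac component
rng = np.random.default_rng(0)
signal = (0.02 * np.sin(2 * np.pi * heart_hz * t)
          + 0.005 * rng.standard_normal(t.size))   # cardiac motion + noise

# Restrict the spectrum to a physiological band (40-180 bpm), take the peak
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(round(bpm, 1))
```

Band-limiting the search rejects respiration and gross-movement components; a 30 s window gives 2 bpm frequency resolution, so real implementations typically refine the estimate with peak interpolation or beat-to-beat detection.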
Boursier, Jérôme; Bertrais, Sandrine; Oberti, Frédéric; Gallois, Yves; Fouchard-Hubert, Isabelle; Rousselet, Marie-Christine; Zarski, Jean-Pierre; Calès, Paul
2011-11-30
Non-invasive tests have been constructed and evaluated mainly for binary diagnoses such as significant fibrosis. Recently, detailed fibrosis classifications for several non-invasive tests have been developed, but their accuracy has not been thoroughly evaluated in comparison to liver biopsy, especially in clinical practice and for Fibroscan. Therefore, the main aim of the present study was to evaluate the accuracy of detailed fibrosis classifications available for non-invasive tests and liver biopsy. The secondary aim was to validate these accuracies in independent populations. Four HCV populations provided 2,068 patients with liver biopsy, four different pathologist skill-levels and non-invasive tests. Results were expressed as percentages of correctly classified patients. In population #1 including 205 patients and comparing liver biopsy (reference: consensus reading by two experts) and blood tests, Metavir fibrosis (FM) stage accuracy was 64.4% in local pathologists vs. 82.2% (p < 10^-3) in single expert pathologist. Significant discrepancy (≥ 2 FM vs reference histological result) rates were: Fibrotest: 17.2%, FibroMeter2G: 5.6%, local pathologists: 4.9%, FibroMeter3G: 0.5%, expert pathologist: 0% (p < 10^-3). In population #2 including 1,056 patients and comparing blood tests, the discrepancy scores, taking into account the error magnitude, of detailed fibrosis classification were significantly different between FibroMeter2G (0.30 ± 0.55) and FibroMeter3G (0.14 ± 0.37, p < 10^-3) or Fibrotest (0.84 ± 0.80, p < 10^-3). In population #3 (and #4) including 458 (359) patients and comparing blood tests and Fibroscan, accuracies of detailed fibrosis classification were, respectively: Fibrotest: 42.5% (33.5%), Fibroscan: 64.9% (50.7%), FibroMeter2G: 68.7% (68.2%), FibroMeter3G: 77.1% (83.4%), p < 10^-3 (p < 10^-3). 
Significant discrepancy (≥ 2 FM) rates were, respectively: Fibrotest: 21.3% (22.2%), Fibroscan: 12.9% (12.3%), FibroMeter2G: 5.7% (6.0%), FibroMeter3G: 0.9% (0.9%), p < 10^-3 (p < 10^-3). The accuracy in detailed fibrosis classification of the best-performing blood test outperforms liver biopsy read by a local pathologist, i.e., in clinical practice; however, the classification precision is apparently lower. This detailed classification accuracy is much lower than that of significant fibrosis with Fibroscan and even Fibrotest but higher with FibroMeter3G. FibroMeter classification accuracy was significantly higher than those of other non-invasive tests. Finally, for hepatitis C evaluation in clinical practice, fibrosis degree can be evaluated using an accurate blood test.
Perceptual experience and posttest improvements in perceptual accuracy and consistency.
Wagman, Jeffrey B; McBride, Dawn M; Trefzger, Amanda J
2008-08-01
Two experiments investigated the relationship between perceptual experience (during practice) and posttest improvements in perceptual accuracy and consistency. Experiment 1 investigated the potential relationship between how often knowledge of results (KR) is provided during a practice session and posttest improvements in perceptual accuracy. Experiment 2 investigated the potential relationship between how often practice (PR) is provided during a practice session and posttest improvements in perceptual consistency. The results of both experiments are consistent with previous findings that perceptual accuracy improves only when practice includes KR and that perceptual consistency improves regardless of whether practice includes KR. In addition, the results showed that although there is a relationship between how often KR is provided during a practice session and posttest improvements in perceptual accuracy, there is no relationship between how often PR is provided during a practice session and posttest improvements in consistency.
Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.
Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack
2017-06-01
In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to conflate the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
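The two-step idea behind PROMISE, tuning the penalty by cross-validation for prediction accuracy and then filtering markers by their selection stability over random subsamples, can be sketched as follows. This is a minimal illustration with scikit-learn's lasso on synthetic data; the subsample count and the 0.8 stability threshold are assumptions, and PROMISE's actual combination rule differs in detail.

```python
import numpy as np
from sklearn.linear_model import Lasso, LassoCV

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0                      # five truly informative markers
y = X @ beta + rng.standard_normal(n)

# Step 1: cross-validation picks a penalty tuned for prediction accuracy.
alpha_cv = LassoCV(cv=5).fit(X, y).alpha_

# Step 2: stability selection -- refit the lasso on random half-samples and
# keep only markers selected in a large fraction of the subsamples.
n_boot, freq = 100, np.zeros(p)
for _ in range(n_boot):
    idx = rng.choice(n, n // 2, replace=False)
    coef = Lasso(alpha=alpha_cv).fit(X[idx], y[idx]).coef_
    freq += coef != 0
stable = np.where(freq / n_boot >= 0.8)[0]   # assumed stability threshold
print(sorted(int(i) for i in stable))
```

The stability filter discards markers the plain cross-validated lasso would have kept only by chance, which is the false-positive reduction the abstract describes.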
Localization accuracy of sphere fiducials in computed tomography images
NASA Astrophysics Data System (ADS)
Kobler, Jan-Philipp; Díaz Díaz, Jesus; Fitzpatrick, J. Michael; Lexow, G. Jakob; Majdani, Omid; Ortmaier, Tobias
2014-03-01
In recent years, bone-attached robots and microstereotactic frames have attracted increasing interest due to the promising targeting accuracy they provide. Such devices attach to a patient's skull via bone anchors, which are used as landmarks during intervention planning as well. However, as simulation results reveal, the performance of such mechanisms is limited by errors occurring during the localization of their bone anchors in preoperatively acquired computed tomography images. Therefore, it is desirable to identify the most suitable fiducials as well as the most accurate method for fiducial localization. We present experimental results of a study focusing on the fiducial localization error (FLE) of spheres. Two phantoms equipped with fiducials made from ferromagnetic steel and titanium, respectively, are used to compare two clinically available imaging modalities (multi-slice CT (MSCT) and cone-beam CT (CBCT)), three localization algorithms as well as two methods for approximating the FLE. Furthermore, the impact of cubic interpolation applied to the images is investigated. Results reveal that, generally, the achievable localization accuracy in CBCT image data is significantly higher compared to MSCT imaging. The lowest FLEs (approx. 40 μm) are obtained using spheres made from titanium, CBCT imaging, template matching based on cross correlation for localization, and interpolating the images by a factor of sixteen. Nevertheless, the achievable localization accuracy of spheres made from steel is only slightly inferior. The outcomes of the presented study will be valuable considering the optimization of future microstereotactic frame prototypes as well as the operative workflow.
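Template matching by cross-correlation, the best-performing localization approach in the study, can be illustrated on a synthetic 2D slice. This is a sketch with an assumed sphere radius and noise level; the study worked on 3D CT volumes and refined the estimate with up to 16× image interpolation for subpixel accuracy.

```python
import numpy as np
from scipy.signal import correlate

# Synthetic 2D "slice": a bright disc (sphere cross-section) on noise.
rng = np.random.default_rng(1)
img = 0.1 * rng.standard_normal((64, 64))
yy, xx = np.mgrid[:64, :64]
cy, cx = 30, 41                                 # true fiducial centre
img[(yy - cy) ** 2 + (xx - cx) ** 2 <= 5 ** 2] += 1.0

# Template: a disc of the same radius, zero-mean so flat background scores 0.
ty, tx = np.mgrid[:13, :13]
tmpl = ((ty - 6) ** 2 + (tx - 6) ** 2 <= 5 ** 2).astype(float)
tmpl -= tmpl.mean()

# The cross-correlation peak gives the estimated centre.
score = correlate(img, tmpl, mode="same")
est = np.unravel_index(np.argmax(score), score.shape)
print(est)
```

Interpolating the image before correlation, as in the study, would move the estimate from integer pixel positions to subpixel ones, which is how localization errors in the tens of micrometres become reachable.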
The stars: an absolute radiometric reference for the on-orbit calibration of PLEIADES-HR satellites
NASA Astrophysics Data System (ADS)
Meygret, Aimé; Blanchet, Gwendoline; Mounier, Flore; Buil, Christian
2017-09-01
The accurate on-orbit radiometric calibration of optical sensors has become a challenge for space agencies who gather their effort through international working groups such as CEOS/WGCV or GSICS with the objective to insure the consistency of space measurements and to reach an absolute accuracy compatible with more and more demanding scientific needs. Different targets are traditionally used for calibration depending on the sensor or spacecraft specificities: from on-board calibration systems to ground targets, they all take advantage of our capacity to characterize and model them. But achieving the in-flight stability of a diffuser panel is always a challenge while the calibration over ground targets is often limited by their BDRF characterization and the atmosphere variability. Thanks to their agility, some satellites have the capability to view extra-terrestrial targets such as the moon or stars. The moon is widely used for calibration and its albedo is known through ROLO (RObotic Lunar Observatory) USGS model but with a poor absolute accuracy limiting its use to sensor drift monitoring or cross-calibration. Although the spectral irradiance of some stars is known with a very high accuracy, it was not really shown that they could provide an absolute reference for remote sensors calibration. This paper shows that high resolution optical sensors can be calibrated with a high absolute accuracy using stars. The agile-body PLEIADES 1A satellite is used for this demonstration. The star based calibration principle is described and the results are provided for different stars, each one being acquired several times. These results are compared to the official calibration provided by ground targets and the main error contributors are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Callaghan, Michael E., E-mail: elspeth.raymond@health.sa.gov.au; Freemasons Foundation Centre for Men's Health, University of Adelaide; Urology Unit, Repatriation General Hospital, SA Health, Flinders Centre for Innovation in Cancer
Purpose: To identify, through a systematic review, all validated tools used for the prediction of patient-reported outcome measures (PROMs) in patients being treated with radiation therapy for prostate cancer, and provide a comparative summary of accuracy and generalizability. Methods and Materials: PubMed and EMBASE were searched from July 2007. Title/abstract screening, full text review, and critical appraisal were undertaken by 2 reviewers, whereas data extraction was performed by a single reviewer. Eligible articles had to provide a summary measure of accuracy and undertake internal or external validation. Tools were recommended for clinical implementation if they had been externally validated and found to have accuracy ≥70%. Results: The search strategy identified 3839 potential studies, of which 236 progressed to full text review and 22 were included. From these studies, 50 tools predicted gastrointestinal/rectal symptoms, 29 tools predicted genitourinary symptoms, 4 tools predicted erectile dysfunction, and no tools predicted quality of life. For patients treated with external beam radiation therapy, 3 tools could be recommended for the prediction of rectal toxicity, gastrointestinal toxicity, and erectile dysfunction. For patients treated with brachytherapy, 2 tools could be recommended for the prediction of urinary retention and erectile dysfunction. Conclusions: A large number of tools for the prediction of PROMs in prostate cancer patients treated with radiation therapy have been developed. Only a small minority are accurate and have been shown to be generalizable through external validation. This review provides an accessible catalogue of tools that are ready for clinical implementation, as well as those that should be prioritized for validation.
ERIC Educational Resources Information Center
Wade, Ros; Corbett, Mark; Eastwood, Alison
2013-01-01
Assessing the quality of included studies is a vital step in undertaking a systematic review. The recently revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool (QUADAS-2), which is the only validated quality assessment tool for diagnostic accuracy studies, does not include specific criteria for assessing comparative studies. As…
Continuous decoding of human grasp kinematics using epidural and subdural signals
NASA Astrophysics Data System (ADS)
Flint, Robert D.; Rosenow, Joshua M.; Tate, Matthew C.; Slutzky, Marc W.
2017-02-01
Objective. Restoring or replacing function in paralyzed individuals will one day be achieved through the use of brain-machine interfaces. Regaining hand function is a major goal for paralyzed patients. Two competing prerequisites for the widespread adoption of any hand neuroprosthesis are accurate control over the fine details of movement, and minimized invasiveness. Here, we explore the interplay between these two goals by comparing our ability to decode hand movements with subdural and epidural field potentials (EFPs). Approach. We measured the accuracy of decoding continuous hand and finger kinematics during naturalistic grasping motions in five human subjects. We recorded subdural surface potentials (electrocorticography; ECoG) as well as EFPs, with both standard- and high-resolution electrode arrays. Main results. In all five subjects, decoding of continuous kinematics significantly exceeded chance, using either ECoG or EFPs. ECoG decoding accuracy compared favorably with prior investigations of grasp kinematics (mean ± SD grasp aperture variance accounted for was 0.54 ± 0.05 across all subjects, 0.75 ± 0.09 for the best subject). In general, EFP decoding performed comparably to ECoG decoding. The 7-20 Hz and 70-115 Hz spectral bands contained the most information about grasp kinematics, with the 70-115 Hz band containing greater information about more subtle movements. Higher-resolution recording arrays provided clearly superior performance compared to standard-resolution arrays. Significance. To approach the fine motor control achieved by an intact brain-body system, it will be necessary to execute motor intent on a continuous basis with high accuracy. The current results demonstrate that this level of accuracy might be achievable not just with ECoG, but with EFPs as well. Epidural placement of electrodes is less invasive, and therefore may incur less risk of encephalitis or stroke than subdural placement of electrodes.
Accurately decoding motor commands at the epidural level may be an important step towards a clinically viable brain-machine interface.
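The decoding metric the abstract reports, variance accounted for (VAF), is one minus the ratio of residual variance to the variance of the actual kinematic trace. A minimal sketch on a toy aperture signal (the signal shape and the 0.3 noise level are arbitrary assumptions, not study data):

```python
import numpy as np

def vaf(actual, decoded):
    """Variance accounted for: 1 - var(actual - decoded) / var(actual)."""
    actual, decoded = np.asarray(actual, float), np.asarray(decoded, float)
    return 1.0 - np.var(actual - decoded) / np.var(actual)

# Toy example: a decoded aperture tracks the true trace with some noise.
t = np.linspace(0, 10, 500)
aperture = np.sin(t)                                        # "true" grasp aperture
decoded = aperture + 0.3 * np.random.default_rng(2).standard_normal(500)
print(round(vaf(aperture, decoded), 2))
```

A perfect decoder scores 1.0; a decoder no better than predicting the mean scores 0, which makes VAF convenient for comparing ECoG and EFP decoders on the same trials.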
Tarzwell, Robert; Newberg, Andrew; Henderson, Theodore A.
2015-01-01
Background Traumatic brain injury (TBI) and posttraumatic stress disorder (PTSD) are highly heterogeneous and often present with overlapping symptomology, providing challenges in reliable classification and treatment. Single photon emission computed tomography (SPECT) may be advantageous in the diagnostic separation of these disorders when comorbid or clinically indistinct. Methods Subjects were selected from a multisite database, where rest and on-task SPECT scans were obtained on a large group of neuropsychiatric patients. Two groups were analyzed: Group 1 with TBI (n=104), PTSD (n=104) or both (n=73) closely matched for demographics and comorbidity, compared to each other and healthy controls (N=116); Group 2 with TBI (n=7,505), PTSD (n=1,077) or both (n=1,017) compared to n=11,147 without either. ROIs and visual readings (VRs) were analyzed using a binary logistic regression model with predicted probabilities inputted into a Receiver Operating Characteristic analysis to identify sensitivity, specificity, and accuracy. One-way ANOVA identified the most diagnostically significant regions of increased perfusion in PTSD compared to TBI. Analysis included a 10-fold cross validation of the protocol in the larger community sample (Group 2). Results For Group 1, baseline and on-task ROIs and VRs showed a high level of accuracy in differentiating PTSD, TBI and PTSD+TBI conditions. This carefully matched group separated with 100% sensitivity, specificity and accuracy for the ROI analysis and at 89% or above for VRs. Group 2 had lower sensitivity, specificity and accuracy, but still in a clinically relevant range. Compared to subjects with TBI, PTSD showed increases in the limbic regions, cingulum, basal ganglia, insula, thalamus, prefrontal cortex and temporal lobes. Conclusions This study demonstrates the ability to separate PTSD and TBI from healthy controls, from each other, and detect their co-occurrence, even in highly comorbid samples, using SPECT. 
This modality may offer a clinical option for aiding diagnosis and treatment of these conditions. PMID:26132293
Amen, Daniel G; Raji, Cyrus A; Willeumier, Kristen; Taylor, Derek; Tarzwell, Robert; Newberg, Andrew; Henderson, Theodore A
2015-01-01
Traumatic brain injury (TBI) and posttraumatic stress disorder (PTSD) are highly heterogeneous and often present with overlapping symptomology, providing challenges in reliable classification and treatment. Single photon emission computed tomography (SPECT) may be advantageous in the diagnostic separation of these disorders when comorbid or clinically indistinct. Subjects were selected from a multisite database, where rest and on-task SPECT scans were obtained on a large group of neuropsychiatric patients. Two groups were analyzed: Group 1 with TBI (n=104), PTSD (n=104) or both (n=73) closely matched for demographics and comorbidity, compared to each other and healthy controls (N=116); Group 2 with TBI (n=7,505), PTSD (n=1,077) or both (n=1,017) compared to n=11,147 without either. ROIs and visual readings (VRs) were analyzed using a binary logistic regression model with predicted probabilities inputted into a Receiver Operating Characteristic analysis to identify sensitivity, specificity, and accuracy. One-way ANOVA identified the most diagnostically significant regions of increased perfusion in PTSD compared to TBI. Analysis included a 10-fold cross validation of the protocol in the larger community sample (Group 2). For Group 1, baseline and on-task ROIs and VRs showed a high level of accuracy in differentiating PTSD, TBI and PTSD+TBI conditions. This carefully matched group separated with 100% sensitivity, specificity and accuracy for the ROI analysis and at 89% or above for VRs. Group 2 had lower sensitivity, specificity and accuracy, but still in a clinically relevant range. Compared to subjects with TBI, PTSD showed increases in the limbic regions, cingulum, basal ganglia, insula, thalamus, prefrontal cortex and temporal lobes. This study demonstrates the ability to separate PTSD and TBI from healthy controls, from each other, and detect their co-occurrence, even in highly comorbid samples, using SPECT. 
This modality may offer a clinical option for aiding diagnosis and treatment of these conditions.
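The analysis pipeline described, a binary logistic regression whose predicted probabilities feed an ROC analysis yielding sensitivity, specificity and accuracy, can be sketched with scikit-learn on simulated ROI values. The group means, sizes and number of regions below are assumptions for illustration, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

# Toy stand-in for regional perfusion values: two groups whose means differ,
# mimicking PTSD-vs-TBI separation on a handful of regions of interest.
rng = np.random.default_rng(3)
n = 200
X = np.vstack([rng.normal(0.0, 1.0, (n, 6)),        # group A
               rng.normal(0.8, 1.0, (n, 6))])       # group B (hyperperfused)
y = np.r_[np.zeros(n), np.ones(n)]

model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]                 # predicted probabilities
auc = roc_auc_score(y, prob)                        # ROC analysis

# Operating point maximizing sensitivity + specificity (Youden's J).
fpr, tpr, thr = roc_curve(y, prob)
j = np.argmax(tpr - fpr)
print(round(auc, 2), round(tpr[j], 2), round(1 - fpr[j], 2))
```

In the study this in-sample fit was additionally checked by 10-fold cross-validation on the larger community sample, which guards against the optimism of evaluating on the training data as done here.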
Continuous decoding of human grasp kinematics using epidural and subdural signals
Flint, Robert D.; Rosenow, Joshua M.; Tate, Matthew C.; Slutzky, Marc W.
2017-01-01
Objective Restoring or replacing function in paralyzed individuals will one day be achieved through the use of brain-machine interfaces (BMIs). Regaining hand function is a major goal for paralyzed patients. Two competing prerequisites for the widespread adoption of any hand neuroprosthesis are: accurate control over the fine details of movement, and minimized invasiveness. Here, we explore the interplay between these two goals by comparing our ability to decode hand movements with subdural and epidural field potentials. Approach We measured the accuracy of decoding continuous hand and finger kinematics during naturalistic grasping motions in five human subjects. We recorded subdural surface potentials (electrocorticography; ECoG) as well as epidural field potentials (EFPs), with both standard- and high-resolution electrode arrays. Main results In all five subjects, decoding of continuous kinematics significantly exceeded chance, using either ECoG or EFPs. ECoG decoding accuracy compared favorably with prior investigations of grasp kinematics (mean ± SD grasp aperture variance accounted for was 0.54 ± 0.05 across all subjects, 0.75 ± 0.09 for the best subject). In general, EFP decoding performed comparably to ECoG decoding. The 7–20 Hz and 70–115 Hz spectral bands contained the most information about grasp kinematics, with the 70–115 Hz band containing greater information about more subtle movements. Higher-resolution recording arrays provided clearly superior performance compared to standard-resolution arrays. Significance To approach the fine motor control achieved by an intact brain-body system, it will be necessary to execute motor intent on a continuous basis with high accuracy. The current results demonstrate that this level of accuracy might be achievable not just with ECoG, but with EFPs as well. Epidural placement of electrodes is less invasive, and therefore may incur less risk of encephalitis or stroke than subdural placement of electrodes.
Accurately decoding motor commands at the epidural level may be an important step towards a clinically viable brain-machine interface. PMID:27900947
Bickelhaupt, Sebastian; Tesdorff, Jana; Laun, Frederik Bernd; Kuder, Tristan Anselm; Lederer, Wolfgang; Teiner, Susanne; Maier-Hein, Klaus; Daniel, Heidi; Stieber, Anne; Delorme, Stefan; Schlemmer, Heinz-Peter
2017-02-01
The aim of this study was to evaluate the accuracy and applicability of solitarily reading fused image series of T2-weighted and high-b-value diffusion-weighted sequences for lesion characterization as compared to sequential or combined image analysis of these unenhanced sequences and to contrast-enhanced breast MRI. This IRB-approved study included 50 female participants with suspicious breast lesions detected in screening X-ray mammograms, all of whom provided written informed consent. Prior to biopsy, all women underwent MRI including diffusion-weighted imaging (DWIBS, b = 1500 s/mm²). Images were analyzed as follows: prospective image fusion of DWIBS and T2-weighted images (FU), side-by-side analysis of DWIBS and T2-weighted series (CO), combination of the first two methods (CO+FU), and full contrast-enhanced diagnostic protocol (FDP). Diagnostic indices, confidence, and image quality of the protocols were compared by two blinded readers. Reading the CO+FU (accuracy 0.92; NPV 96.1 %; PPV 87.6 %) and the CO series (0.90; 96.1 %; 83.7 %) provided a diagnostic performance similar to the FDP (0.95; 96.1 %; 91.3 %; p > 0.05). FU reading alone significantly reduced the diagnostic accuracy (0.82; 93.3 %; 73.4 %; p = 0.023). MR evaluation of suspicious BI-RADS 4 and 5 lesions detected on mammography by using a non-contrast-enhanced T2-weighted and DWIBS sequence protocol is most accurate when MR images are read using the CO+FU protocol. • Unenhanced breast MRI with additional DWIBS/T2w-image fusion allows reliable lesion characterization. • Abbreviated reading of fused DWIBS/T2w-images alone decreases diagnostic confidence and accuracy. • Reading fused DWIBS/T2w-images as the sole diagnostic method should be avoided.
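The diagnostic indices quoted here (accuracy, NPV, PPV) follow directly from a reading protocol's confusion counts. A short sketch with hypothetical counts chosen to land near the CO+FU figures; these are not the study's raw data:

```python
def diagnostic_indices(tp, fp, tn, fn):
    """Accuracy, NPV and PPV from a reading protocol's confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    npv = tn / (tn + fn)        # confidence in a negative read
    ppv = tp / (tp + fp)        # confidence in a positive read
    return accuracy, npv, ppv

# Hypothetical counts for a 50-lesion series (illustrative only).
acc, npv, ppv = diagnostic_indices(tp=21, fp=3, tn=25, fn=1)
print(round(acc, 2), round(npv, 3), round(ppv, 3))
```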
Peyser, Thomas A; Nakamura, Katherine; Price, David; Bohnett, Lucas C; Hirsch, Irl B; Balo, Andrew
2015-08-01
Accuracy of continuous glucose monitoring (CGM) devices in hypoglycemia has been a widely reported shortcoming of this technology. We report the accuracy in hypoglycemia of a new version of the Dexcom (San Diego, CA) G4 Platinum CGM system (software 505) and present results regarding the optimum setting of CGM hypoglycemic alerts. CGM values were compared with YSI analyzer (YSI Life Sciences, Yellow Springs, OH) measurements every 15 min. We reviewed the accuracy of the CGM system in the hypoglycemic range using standard metrics. We analyzed the time required for the CGM system to detect biochemical hypoglycemia (70 mg/dL) compared with the YSI with alert settings at 70 mg/dL and 80 mg/dL. We also analyzed the time between the YSI value crossing 55 mg/dL, defined as the threshold for cognitive impairment due to hypoglycemia, and when the CGM system alerted for hypoglycemia. The mean absolute difference for a glucose level of less than 70 mg/dL was 6 mg/dL. Ninety-six percent of CGM values were within 20 mg/dL of the YSI values between 40 and 80 mg/dL. When the CGM hypoglycemic alert was set at 80 mg/dL, the device provided an alert for biochemical hypoglycemia within 10 min in 95% of instances and at least a 10-min advance warning before the cognitive impairment threshold in 91% of instances in the study. Use of an 80 mg/dL threshold setting for hypoglycemic alerts on the G4 Platinum (software 505) may provide patients with timely warning of hypoglycemia before the onset of cognitive impairment, enabling them to treat themselves for hypoglycemia with fast-acting carbohydrates and prevent neuroglycopenia associated with very low glucose levels.
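The hypoglycemia accuracy metrics reported, the mean absolute difference and the share of CGM readings within 20 mg/dL of the YSI reference, are simple to compute from paired readings. The values below are toy data, not the study's measurements:

```python
import numpy as np

def mad(cgm, ysi):
    """Mean absolute difference between paired CGM and YSI readings (mg/dL)."""
    cgm, ysi = np.asarray(cgm, float), np.asarray(ysi, float)
    return float(np.mean(np.abs(cgm - ysi)))

def pct_within(cgm, ysi, tol=20):
    """Fraction of CGM readings within +/- tol mg/dL of the reference."""
    cgm, ysi = np.asarray(cgm, float), np.asarray(ysi, float)
    return float(np.mean(np.abs(cgm - ysi) <= tol))

ysi = [62, 55, 70, 48, 66]   # reference values in the hypoglycemic range
cgm = [65, 60, 66, 52, 70]   # paired sensor readings
print(mad(cgm, ysi), pct_within(cgm, ysi))
```

Setting the alert threshold above the clinical hypoglycemia definition (80 mg/dL rather than 70 mg/dL), as the study recommends, trades some false alerts for earlier warning before glucose falls toward the 55 mg/dL cognitive-impairment threshold.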
Price, Owen F; Penman, Trent; Bradstock, Ross; Borah, Rittick
2016-10-01
Wildfires are complex adaptive systems, and have been hypothesized to exhibit scale-dependent transitions in the drivers of fire spread. Among other things, this makes the prediction of final fire size from conditions at the ignition difficult. We test this hypothesis by conducting a multi-scale statistical modelling of the factors determining whether fires reached 10 ha, then 100 ha, then 1000 ha, and the final size of fires >1000 ha. At each stage, the predictors were measures of weather, fuels, topography and fire suppression. The objectives were to identify differences among the models indicative of scale transitions, assess the accuracy of the multi-step method for predicting fire size (compared to predicting final size from initial conditions) and to quantify the importance of the predictors. The data were 1116 fires that occurred in the eucalypt forests of New South Wales between 1985 and 2010. The models were similar at the different scales, though there were subtle differences. For example, the presence of roads affected whether fires reached 10 ha but not larger scales. Weather was the most important predictor overall, though fuel load, topography and ease of suppression all showed effects. Overall, there was no evidence that fires have scale-dependent transitions in behaviour. The models had predictive accuracies of 73%, 66%, 72% and 53% at the 10 ha, 100 ha, 1000 ha and final-size scales. When these steps were combined, the overall accuracy for predicting the size of fires was 62%, while the accuracy of the one-step model was only 20%. Thus, the multi-scale approach was an improvement on the single-scale approach, even though the predictive accuracy was probably insufficient for use as an operational tool. The analysis has also provided further evidence of the important role of weather, compared to fuel, suppression and topography, in driving fire behaviour. Copyright © 2016. Published by Elsevier Ltd.
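The multi-step approach, fitting a separate model for each size threshold on only the fires that reached the previous threshold and then chaining the stage probabilities, can be sketched as below. The data are synthetic with a single "weather severity" covariate; the study used weather, fuel, topography and suppression predictors, and its model forms differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: final fire sizes driven by a single weather-severity covariate.
rng = np.random.default_rng(4)
n = 2000
weather = rng.standard_normal(n)
size = np.exp(3 + 2.5 * weather + rng.standard_normal(n))   # hectares

stages, models, reached = [10, 100, 1000], [], np.ones(n, bool)
for thr in stages:
    # Each stage is fit only on fires that reached the previous threshold.
    Xs, ys = weather[reached, None], (size[reached] >= thr).astype(int)
    models.append(LogisticRegression().fit(Xs, ys))
    reached = reached & (size >= thr)

# Multi-step prediction for a new ignition: chain the stage probabilities,
# P(>=1000 ha) = P(>=10) * P(>=100 | >=10) * P(>=1000 | >=100).
p = 1.0
for m in models:
    p *= m.predict_proba([[1.5]])[0, 1]      # a severe-weather ignition
print(round(p, 2))
```

Comparing the stage coefficients across thresholds is the study's test for scale transitions: materially different coefficients would indicate different drivers at different fire sizes.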
A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures
2014-01-01
Background Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. 
Iterative HFold and all data used in this work are freely available at http://www.cs.ubc.ca/~hjabbari/software.php. PMID:24884954
Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio
2007-12-01
Owing to fast or stepwise cuff deflation, or measuring at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, that is, two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods might not be sufficient to compensate for the suspicious accuracy in lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing immunity to LD and BPV with those of the current validation methods (methods 1 and 2). The validation accuracy of the above three methods was assessed in human participants [N=120, 45 ± 15.3 years (mean ± SD)]. An oscillometric automated monitor, Omron HEM-762, was used as the tested device. When compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting their higher reproducibility of validation. The SD1 by method 2 (P=0.004) significantly correlated with the participant's BP, supporting our hypothesis that the increased SD of device error by method 2 is at least partially caused by essential BPV. Method 3 showed a significantly (P=0.0044) smaller interparticipant SD of device error (SD2), suggesting its higher interparticipant consistency of validation. Among the methods of validation of the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and highest interparticipant consistency, can be proposed as the most appropriate.
Ying, Gui-shuang; Maguire, Maureen; Quinn, Graham; Kulp, Marjean Taylor; Cyert, Lynn
2011-12-28
To evaluate, by receiver operating characteristic (ROC) analysis, the accuracy of three instruments of refractive error in detecting eye conditions among 3- to 5-year-old Head Start preschoolers and to evaluate differences in accuracy between instruments and screeners and by age of the child. Children participating in the Vision In Preschoolers (VIP) Study (n = 4040), had screening tests administered by pediatric eye care providers (phase I) or by both nurse and lay screeners (phase II). Noncycloplegic retinoscopy (NCR), the Retinomax Autorefractor (Nikon, Tokyo, Japan), and the SureSight Vision Screener (SureSight, Alpharetta, GA) were used in phase I, and Retinomax and SureSight were used in phase II. Pediatric eye care providers performed a standardized eye examination to identify amblyopia, strabismus, significant refractive error, and reduced visual acuity. The accuracy of the screening tests was summarized by the area under the ROC curve (AUC) and compared between instruments and screeners and by age group. The three screening tests had a high AUC for all categories of screening personnel. The AUC for detecting any VIP-targeted condition was 0.83 for NCR, 0.83 (phase I) to 0.88 (phase II) for Retinomax, and 0.86 (phase I) to 0.87 (phase II) for SureSight. The AUC was 0.93 to 0.95 for detecting group 1 (most severe) conditions and did not differ between instruments or screeners or by age of the child. NCR, Retinomax, and SureSight had similar and high accuracy in detecting vision disorders in preschoolers across all types of screeners and age of child, consistent with previously reported results at specificity levels of 90% and 94%.
The Frontier Fields lens modelling comparison project
NASA Astrophysics Data System (ADS)
Meneghetti, M.; Natarajan, P.; Coe, D.; Contini, E.; De Lucia, G.; Giocoli, C.; Acebron, A.; Borgani, S.; Bradac, M.; Diego, J. M.; Hoag, A.; Ishigaki, M.; Johnson, T. L.; Jullo, E.; Kawamata, R.; Lam, D.; Limousin, M.; Liesenborgs, J.; Oguri, M.; Sebesta, K.; Sharon, K.; Williams, L. L. R.; Zitrin, A.
2017-12-01
Gravitational lensing by clusters of galaxies offers a powerful probe of their structure and mass distribution. Several research groups have developed techniques independently to achieve this goal. While these methods have all provided remarkably high-precision mass maps, particularly with exquisite imaging data from the Hubble Space Telescope (HST), the reconstructions themselves have never been directly compared. In this paper, we present for the first time a detailed comparison of methodologies for fidelity, accuracy and precision. For this collaborative exercise, the lens modelling community was provided simulated cluster images that mimic the depth and resolution of the ongoing HST Frontier Fields. The results of the submitted reconstructions with the un-blinded true mass profile of these two clusters are presented here. Parametric, free-form and hybrid techniques have been deployed by the participating groups and we detail the strengths and trade-offs in accuracy and systematics that arise for each methodology. We note in conclusion that several properties of the lensing clusters are recovered equally well by most of the lensing techniques compared in this study. For example, the reconstruction of azimuthally averaged density and mass profiles by both parametric and free-form methods matches the input models at the level of ∼10 per cent. Parametric techniques are generally better at recovering the 2D maps of the convergence and of the magnification. For the best-performing algorithms, the accuracy in the magnification estimate is ∼10 per cent at μ_true = 3 and it degrades to ∼30 per cent at μ_true ∼ 10.
Garcia-Retamero, Rocio; Hoffrage, Ulrich
2013-04-01
Doctors and patients have difficulty inferring the predictive value of a medical test from information about the prevalence of a disease and the sensitivity and false-positive rate of the test. Previous research has established that communicating such information in a format the human mind is adapted to, namely natural frequencies, as compared to probabilities, boosts accuracy of diagnostic inferences. In a study, we investigated to what extent these inferences can be improved, beyond the effect of natural frequencies, by providing visual aids. Participants were 81 doctors and 81 patients who made diagnostic inferences about three medical tests on the basis of information about prevalence of a disease, and the sensitivity and false-positive rate of the tests. Half of the participants received the information in natural frequencies, while the other half received the information in probabilities. Half of the participants only received numerical information, while the other half additionally received a visual aid representing the numerical information. In addition, participants completed a numeracy scale. Our study showed three important findings: (1) doctors and patients made more accurate inferences when information was communicated in natural frequencies as compared to probabilities; (2) visual aids boosted accuracy even when the information was provided in natural frequencies; and (3) doctors were more accurate in their diagnostic inferences than patients, though differences in accuracy disappeared when differences in numerical skills were controlled for. Our findings have important implications for medical practice as they suggest suitable ways to communicate quantitative medical data. Copyright © 2013 Elsevier Ltd. All rights reserved.
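The probability-versus-natural-frequency contrast can be made concrete: both formats yield the same positive predictive value, but the frequency version reduces Bayes' rule to counting concrete cases. The prevalence, sensitivity and false-positive rate below are assumed illustrative values, not the parameters of the study's three tests.

```python
# Assumed example: prevalence 1%, sensitivity 80%, false-positive rate 9.6%.
prevalence, sensitivity, fpr = 0.01, 0.80, 0.096

# Probability format: Bayes' rule.
ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + fpr * (1 - prevalence))

# Natural-frequency format: the same computation on 1000 concrete cases.
n = 1000
sick = round(n * prevalence)               # 10 people have the disease
true_pos = round(sick * sensitivity)       # 8 of them test positive
false_pos = round((n - sick) * fpr)        # 95 healthy people also test positive
ppv_freq = true_pos / (true_pos + false_pos)

print(round(ppv, 3), round(ppv_freq, 3))
```

The frequency version makes the base-rate effect visible at a glance: most positives come from the large healthy group, so only about 8 in 103 positives are true, which is exactly the insight participants tend to miss in the probability format.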
Henry, Michael E.; Lauriat, Tara L.; Shanahan, Meghan; Renshaw, Perry F.; Jensen, J. Eric
2015-01-01
Proton magnetic resonance spectroscopy has the potential to provide valuable information about alterations in gamma-aminobutyric acid (GABA), glutamate (Glu), and glutamine (Gln) in psychiatric and neurological disorders. In order to use this technique effectively, it is important to establish the accuracy and reproducibility of the methodology. In this study, phantoms with known metabolite concentrations were used to compare the accuracy of 2D J-resolved MRS, single-echo 30 ms PRESS, and GABA-edited MEGA-PRESS for measuring all three aforementioned neurochemicals simultaneously. The phantoms included metabolite concentrations above and below the physiological range, and scans were performed at baseline, 1 week, and 1 month time points. For GABA measurement, MEGA-PRESS proved optimal with a measured-to-target correlation of R2 = 0.999, with J-resolved providing R2 = 0.973 for GABA. All three methods proved effective in measuring Glu, with R2 = 0.987 (30 ms PRESS), R2 = 0.996 (J-resolved) and R2 = 0.910 (MEGA-PRESS). J-resolved and MEGA-PRESS yielded good results for Gln measures, with R2 = 0.855 (J-resolved) and R2 = 0.815 (MEGA-PRESS), respectively. The 30 ms PRESS method proved ineffective in measuring GABA and Gln. When measurement stability at in vivo concentrations was assessed as a function of varying spectral quality, J-resolved proved the most stable and immune to signal-to-noise and linewidth fluctuations compared to MEGA-PRESS and 30 ms PRESS. PMID:21130670
High accuracy autonomous navigation using the global positioning system (GPS)
NASA Technical Reports Server (NTRS)
Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul
1997-01-01
The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be improved to 2 m if corrections are provided by the GPS wide area augmentation system.
Santiago, Teresa C; Jenkins, Jesse J; Pedrosa, Francisco; Billups, Catherine; Quintana, Yuri; Ribeiro, Raul C; Qaddoumi, Ibrahim
2012-08-01
Accurate diagnosis is critical for optimal management of pediatric cancer. Pathologists with experience in pediatric oncology are in short supply in the developing world. Telepathology is increasingly used for consultations, but its overall contribution to diagnostic accuracy is unknown. We developed a strategy to provide focused training in pediatric cancer and telepathology support to pathologists in the developing world. After the training period, we compared the trainee's diagnoses with those of an experienced pathologist. We next compared the effectiveness of static versus dynamic telepathology review in 127 cases. Results were compared by Fisher's exact test. The diagnoses of the trainee and the expert pathologist differed in only 6.5% of cases (95% CI, 1.2-20.0%). The overall concordance between the telepathology and original diagnoses was 90.6% (115/127; 95% CI, 84.1-94.6%). Brief, focused training in pediatric cancer histopathology can improve diagnostic accuracy. Dynamic and static telepathology analyses are equally effective for diagnostic review. Copyright © 2012 Wiley Periodicals, Inc.
Kockmann, Tobias; Trachsel, Christian; Panse, Christian; Wahlander, Asa; Selevsek, Nathalie; Grossmann, Jonas; Wolski, Witold E; Schlapbach, Ralph
2016-08-01
Quantitative mass spectrometry is a rapidly evolving methodology applied in a large number of omics-type research projects. During the past years, new designs of mass spectrometers have been developed and launched as commercial systems, while in parallel new data acquisition schemes and data analysis paradigms have been introduced. Core facilities provide access to such technologies, but also actively support researchers in finding and applying the best-suited analytical approach. In order to establish a solid foundation for this decision-making process, core facilities need to constantly compare and benchmark the various approaches. In this article we compare the quantitative accuracy and precision of current state-of-the-art targeted proteomics approaches, single reaction monitoring (SRM), parallel reaction monitoring (PRM) and data-independent acquisition (DIA), across multiple liquid chromatography mass spectrometry (LC-MS) platforms, using a readily available commercial standard sample. All workflows are able to reproducibly generate accurate quantitative data. However, SRM and PRM workflows show higher accuracy and precision compared to DIA approaches, especially when analyzing analytes at low concentrations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Comparison of two surface temperature measurement methods using thermocouples and an infrared camera
NASA Astrophysics Data System (ADS)
Michalski, Dariusz; Strąk, Kinga; Piasecka, Magdalena
This paper compares two methods applied to measure surface temperatures at an experimental setup designed to analyse flow boiling heat transfer. The temperature measurements were performed in two parallel rectangular minichannels, both 1.7 mm deep, 16 mm wide and 180 mm long. The heating element for the fluid flowing in each minichannel was a thin foil made of Haynes-230. The two measurement methods employed to determine the surface temperature of the foil were: the contact method, which involved mounting thermocouples at several points in one minichannel, and the contactless method applied to the other minichannel, where the results were provided by an infrared camera. Calculations were necessary to compare the temperature results. Two sets of measurement data obtained for different values of the heat flux were analysed using basic statistical methods, taking into account the experimental error and the accuracy of each method. The comparative analysis showed that the values and distributions of the surface temperatures obtained with the two methods were similar, but both methods had certain limitations.
Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny
2011-01-01
Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate agreement between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because Monte Carlo algorithms implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
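The gamma analysis this abstract relies on (3% dose / 3-mm distance-to-agreement acceptance criteria) can be sketched in one dimension; the dose profiles below are invented for illustration, not the study's film measurements:

```python
import math

# Minimal 1-D sketch of a gamma pass-rate comparison with 3% dose difference
# (global normalization) and 3 mm distance-to-agreement criteria.
# Positions and doses are hypothetical, not data from the study.

def gamma_pass_rate(positions_mm, planned, measured, dose_tol=0.03, dta_mm=3.0):
    """Fraction of measured points whose gamma index is <= 1."""
    norm = max(planned)  # global normalization dose for the 3% criterion
    passed = 0
    for xm, dm in zip(positions_mm, measured):
        # Gamma: minimum combined distance/dose discrepancy over reference points.
        gamma_sq = min(
            ((xm - xp) / dta_mm) ** 2 + ((dm - dp) / (dose_tol * norm)) ** 2
            for xp, dp in zip(positions_mm, planned)
        )
        if math.sqrt(gamma_sq) <= 1.0:
            passed += 1
    return passed / len(measured)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]                  # detector positions, mm
planned_dose = [100.0, 98.0, 95.0, 90.0, 80.0]  # planned profile, arbitrary units
measured_dose = [101.0, 97.5, 95.5, 89.0, 81.0] # all within 3% of 100 -> all pass
rate = gamma_pass_rate(xs, planned_dose, measured_dose)
```

With the small deviations above every point passes (rate 1.0); a systematic 10-unit offset would drive most points above gamma = 1 and the pass rate well below the >97% reported for the Monte Carlo algorithm.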
Meta-analysis of diagnostic accuracy studies in mental health
Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J
2015-01-01
Objectives: To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods: We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results: The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies, such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model, jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between-study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold, while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions: Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042
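The threshold effect described in this abstract (raising the positivity cut-off lowers sensitivity and raises specificity) can be demonstrated directly; the questionnaire scores below are invented for illustration:

```python
# Illustrative sketch of the sensitivity/specificity trade-off across
# positivity thresholds. Scores are hypothetical, not data from any study.

def sensitivity_specificity(case_scores, control_scores, threshold):
    """Test-positive means score >= threshold."""
    tp = sum(s >= threshold for s in case_scores)       # true positives
    tn = sum(s < threshold for s in control_scores)     # true negatives
    return tp / len(case_scores), tn / len(control_scores)

cases = [7, 8, 9, 10, 6, 9, 5, 8]    # scores of people with the condition
controls = [2, 3, 5, 4, 6, 1, 3, 7]  # scores of people without it

low = sensitivity_specificity(cases, controls, threshold=5)
high = sensitivity_specificity(cases, controls, threshold=8)
```

At the low threshold every case tests positive (sensitivity 1.0) but several controls do too (specificity 0.625); at the high threshold the trade reverses, which is exactly why DTA meta-analysis models summarise the two jointly rather than pooling each separately.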
Estimating Soil Moisture Using Polsar Data: a Machine Learning Approach
NASA Astrophysics Data System (ADS)
Khedri, E.; Hasanlou, M.; Tabatabaeenejad, A.
2017-09-01
Soil moisture is an important parameter that affects several environmental processes and plays an important role in numerous sciences, including agriculture, hydrology, aerology, flood prediction, and the study of drought occurrence. However, field procedures for measuring soil moisture are not feasible across vast agricultural regions, owing to the difficulty and high cost of such measurements over large territories as well as the spatial and temporal variability of soil moisture. Polarimetric synthetic aperture radar (PolSAR) imaging is a powerful tool for estimating soil moisture, as these images provide a wide field of view and high spatial resolution. In this study, a support vector regression (SVR) model for estimating soil moisture is proposed, based on data obtained from AIRSAR in 2003 in the C, L, and P channels. Sequential forward selection (SFS) and sequential backward selection (SBS) are evaluated for selecting suitable features of the polarimetric image dataset for efficient modelling. We compare the model output with in-situ data. The results show that the SBS-SVR method yields higher modelling accuracy than the SFS-SVR model: statistical parameters obtained with this method show an R2 of 97% and an RMSE lower than 0.00041 (m3/m3) for the P, L, and C channels, better accuracy than that provided by other feature selection algorithms.
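The sequential backward selection (SBS) step used in this abstract can be sketched as a greedy loop; everything below (feature names, the scoring function) is a hypothetical stand-in for the study's SVR cross-validation score:

```python
# Toy sketch of sequential backward selection: starting from all features,
# repeatedly drop the feature whose removal hurts the score least.
# Feature names and the scoring function are invented for illustration;
# in the study the score would be the SVR model's validation accuracy.

def sequential_backward_selection(features, score, k_target):
    """Greedily prune `features` down to `k_target` names, maximizing `score`."""
    selected = list(features)
    while len(selected) > k_target:
        # Among all subsets with one feature removed, keep the best-scoring one.
        selected = max(
            ([f for f in selected if f != drop] for drop in selected),
            key=score,
        )
    return selected

# Hypothetical scoring: pretend only three polarimetric features carry signal,
# so a subset is scored by how many of them it retains.
informative = {"C_hh", "L_hv", "P_vv"}
score = lambda subset: len(informative.intersection(subset))

all_features = ["C_hh", "C_vv", "L_hv", "L_hh", "P_vv", "texture"]
best = sequential_backward_selection(all_features, score, k_target=3)
```

Sequential forward selection (SFS) is the mirror image: start empty and greedily add the feature that improves the score most; the study found the backward variant paired better with SVR on this dataset.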
Agrawal, Sony; Cifelli, Steven; Johnstone, Richard; Pechter, David; Barbey, Deborah A; Lin, Karen; Allison, Tim; Agrawal, Shree; Rivera-Gines, Aida; Milligan, James A; Schneeweis, Jonathan; Houle, Kevin; Struck, Alice J; Visconti, Richard; Sills, Matthew; Wildey, Mary Jo
2016-02-01
Quantitative reverse transcription PCR (qRT-PCR) is a valuable tool for characterizing the effects of inhibitors on viral replication. The amplification of target viral genes through the use of specifically designed fluorescent probes and primers provides a reliable method for quantifying RNA. Due to reagent costs, use of these assays for compound evaluation is limited. Until recently, the inability to accurately dispense low volumes of qRT-PCR assay reagents precluded the routine use of this PCR assay for compound evaluation in drug discovery. Acoustic dispensing has become an integral part of drug discovery during the past decade; however, acoustic transfer of microliter volumes of aqueous reagents was time-consuming. The Labcyte Echo 525 liquid handler was designed to enable rapid aqueous transfers. We compared the accuracy and precision of a qPCR assay using the Labcyte Echo 525 to those of the BioMek FX, a traditional liquid handler, with the goal of reducing the volume and cost of the assay. The data show that the Echo 525 provides higher accuracy and precision compared to the current process using a traditional liquid handler. Comparable data for assay volumes from 500 nL to 12 µL allowed miniaturization of the assay, resulting in significant cost savings for drug discovery and streamlining of the process. © 2015 Society for Laboratory Automation and Screening.
On the accuracy of personality judgment: a realistic approach.
Funder, D C
1995-10-01
The "accuracy paradigm" for the study of personality judgment provides an important, new complement to the "error paradigm" that dominated this area of research for almost 2 decades. The present article introduces a specific approach within the accuracy paradigm called the Realistic Accuracy Model (RAM). RAM begins with the assumption that personality traits are real attributes of individuals. This assumption entails the use of a broad array of criteria for the evaluation of personality judgment and leads to a model that describes accuracy as a function of the availability, detection, and utilization of relevant behavioral cues. RAM provides a common explanation for basic moderators of accuracy, sheds light on how these moderators interact, and outlines a research agenda that includes the reintegration of the study of error with the study of accuracy.
Melanson, Edward L; Swibas, Tracy; Kohrt, Wendy M; Catenacci, Vicki A; Creasy, Seth A; Plasqui, Guy; Wouters, Loek; Speakman, John R; Berman, Elena S F
2018-02-01
When the doubly labeled water (DLW) method is used to measure total daily energy expenditure (TDEE), isotope measurements are typically performed using isotope ratio mass spectrometry (IRMS). New technologies, such as off-axis integrated cavity output spectroscopy (OA-ICOS), provide comparable isotopic measurements of standard waters and human urine samples, but the accuracy of carbon dioxide production (V̇co₂) determined with OA-ICOS has not been demonstrated. We compared simultaneous measurements of V̇co₂ obtained using whole-room indirect calorimetry (IC) with DLW-based measurements from IRMS and OA-ICOS. Seventeen subjects (10 female; 22 to 63 yr) were studied for 7 consecutive days in the IC. Subjects consumed a dose of 0.25 g H₂¹⁸O (98% APE) and 0.14 g ²H₂O (99.8% APE) per kilogram of total body water, and urine samples were obtained on days 1 and 8 to measure average daily V̇co₂ using OA-ICOS and IRMS. V̇co₂ was calculated using both the plateau and intercept methods. There were no differences in V̇co₂ measured by OA-ICOS or IRMS compared with IC when the plateau method was used. When the intercept method was used, V̇co₂ using OA-ICOS did not differ from IC, but V̇co₂ measured using IRMS was significantly lower than IC. Accuracy (~1-5%), precision (~8%), intraclass correlation coefficients (R = 0.87-0.90), and root mean squared error (30-40 liters/day) of V̇co₂ measured by OA-ICOS and IRMS were similar. Both OA-ICOS and IRMS produced measurements of V̇co₂ with comparable accuracy and precision compared with IC.
Split-mouth comparison of the accuracy of computer-generated and conventional surgical guides.
Farley, Nathaniel E; Kennedy, Kelly; McGlumphy, Edwin A; Clelland, Nancy L
2013-01-01
Recent clinical studies have shown that implant placement is highly predictable with computer-generated surgical guides; however, the reliability of these guides has not been compared to that of conventional guides clinically. This study aimed to compare the accuracy of reproducing planned implant positions with computer-generated and conventional surgical guides using a split-mouth design. Ten patients received two implants each in symmetric locations. All implants were planned virtually using a software program and information from cone beam computed tomographic scans taken with scan appliances in place. Patients were randomly selected for computer-aided design/computer-assisted manufacture (CAD/CAM)-guided implant placement on their right or left side. Conventional guides were used on the contralateral side. Patients underwent cone beam computed tomography postoperatively. Planned and actual implant positions were compared using three-dimensional analyses capable of measuring volume overlap as well as differences in angles and coronal and apical positions. Results were compared using a mixed-model repeated-measures analysis of variance and were further analyzed using a Bartlett test for unequal variance (α = .05). Implants placed with CAD/CAM guides were closer to the planned positions in all eight categories examined. However, statistically significant differences were shown only for coronal horizontal distances. It was also shown that CAD/CAM guides had less variability than conventional guides, which was statistically significant for apical distance. Implants placed using CAD/CAM surgical guides provided greater accuracy in a lateral direction than conventional guides. In addition, CAD/CAM guides were more consistent in their deviation from the planned locations than conventional guides.
Finite Element Simulation of Articular Contact Mechanics with Quadratic Tetrahedral Elements
Maas, Steve A.; Ellis, Benjamin J.; Rawlins, David S.; Weiss, Jeffrey A.
2016-01-01
Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. PMID:26900037
Jeon, Jin-Hun; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Woong-Chul
2014-12-01
This study aimed to evaluate the accuracy of digitizing dental impressions of abutment teeth using a white light scanner and to compare the findings among tooth types. To assess precision, impressions of a canine, premolar, and molar prepared to receive all-ceramic crowns were repeatedly scanned to obtain five sets of 3-D data (STL files). Point clouds were compared and error sizes were measured (n=10 per type). Next, to evaluate trueness, impressions of the teeth were rotated by 10°-20° and scanned. The obtained data were compared with the first set of data from the precision assessment, and the error sizes were measured (n=5 per type). The Kruskal-Wallis test was performed to evaluate precision and trueness among the three tooth types, and post-hoc comparisons were performed using the Mann-Whitney U test with Bonferroni correction (α=.05). Precision discrepancies for the canine, premolar, and molar were 3.7 µm, 3.2 µm, and 7.3 µm, respectively, indicating the poorest precision for the molar (P<.001). Trueness discrepancies for the tooth types were 6.2 µm, 11.2 µm, and 21.8 µm, respectively, indicating the poorest trueness for the molar (P=.007). With respect to accuracy, the molar showed the largest discrepancies compared with the canine and premolar. Digitizing dental impressions of abutment teeth using a white light scanner was assessed to be a highly accurate method and provided discrepancy values in a clinically acceptable range. Further study is needed to improve the digitizing performance of white light scanning on the axial wall.
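The Bonferroni-corrected post-hoc step this abstract describes (three pairwise Mann-Whitney U comparisons among canine, premolar and molar at a family-wise α of .05) amounts to testing each p-value against α divided by the number of comparisons; the p-values below are invented for illustration:

```python
# Hedged sketch of a Bonferroni correction for the three pairwise
# comparisons (canine vs premolar, canine vs molar, premolar vs molar).
# The p-values are hypothetical, not the study's results.

def bonferroni(p_values, alpha=0.05):
    """Return the adjusted alpha and a significance decision per p-value."""
    adjusted_alpha = alpha / len(p_values)  # family-wise error control
    return adjusted_alpha, [(p, p < adjusted_alpha) for p in p_values]

pairwise_p = [0.030, 0.004, 0.0006]  # e.g. hypothetical Mann-Whitney U p-values
adj_alpha, decisions = bonferroni(pairwise_p)
```

With three comparisons the adjusted alpha is 0.05/3 ≈ 0.0167, so a raw p of 0.030, nominally significant at .05, no longer counts as significant after correction.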
NASA Astrophysics Data System (ADS)
Ghimire, Suman; Xystrakis, Fotios; Koutsias, Nikos
2017-04-01
Forest inventory variables are essential for assessing wildfire hazard potential and for estimating above-ground biomass and carbon sequestration, which helps in developing strategies for the sustainable management of forests. Effective management of forest resources relies on the accuracy of such inventory variables. This study compares the accuracy of forest inventory variables such as diameter at breast height (DBH) and tree height obtained from a terrestrial laser scanner (Faro Focus 3D X 330) with those from traditional forest inventory techniques in the Mediterranean forests of Greece. Data acquisition was carried out on an area of 9,539.8 m2 with six plots, each of radius 6 m. The Computree algorithm was applied for automatic detection of DBH from the terrestrial laser scanner data, while tree height was estimated manually from the scanner data using the CloudCompare software. The field estimates of DBH and tree height were carried out using calipers and a Nikon Forestry 550 Laser Rangefinder. The comparison of DBH between field estimates and the terrestrial laser scanner (TLS) resulted in R2 values ranging from 0.75 to 0.96 at the plot level. Average R2 and RMSE values of 0.80 and 1.07 m, respectively, were obtained when comparing tree height between TLS and field data. Our results confirm that the terrestrial laser scanner can provide nondestructive, high-resolution, and precise determination of forest inventory variables for better decision making in sustainable forest management and in assessing forest fire hazard potential.
Posture Detection Based on Smart Cushion for Wheelchair Users
Ma, Congcong; Li, Wenfeng; Gravina, Raffaele; Fortino, Giancarlo
2017-01-01
The postures of wheelchair users can reveal their sitting habits and mood, and even predict health risks such as pressure ulcers or lower back pain. Mining the hidden information in the postures can reveal their wellness and general health conditions. In this paper, a cushion-based posture recognition system is used to process pressure sensor signals for the detection of the user's posture in the wheelchair. The proposed posture detection method is composed of three main steps: data-level classification for posture detection, backward selection of the sensor configuration, and comparison of the recognition results with previous literature. Five supervised classification techniques are compared in terms of classification accuracy, precision, recall, and F-measure: Decision Tree (J48), Support Vector Machines (SVM), Multilayer Perceptron (MLP), Naive Bayes, and k-Nearest Neighbor (k-NN). Results indicate that the J48 classifier provides the highest accuracy compared to the other techniques. The backward selection method was used to determine the best sensor deployment configuration for the wheelchair. Several kinds of pressure sensor deployments are compared, and our new method of deployment is shown to better detect the postures of wheelchair users. Performance analysis also took into account the Body Mass Index (BMI), useful for evaluating the robustness of the method across individual physical differences. Results show that our proposed sensor deployment is effective, achieving 99.47% posture recognition accuracy. Our proposed method is very competitive for posture recognition and robust in comparison with previous research. Accurate posture detection represents a fundamental building block for developing several applications, including fatigue estimation and activity level assessment. PMID:28353684
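The four evaluation metrics this abstract uses to compare classifiers (accuracy, precision, recall, F-measure) all derive from a confusion matrix; the counts below are invented for one hypothetical posture class, not the study's data:

```python
# Minimal sketch of the classifier-evaluation metrics named above, computed
# from one class's confusion-matrix counts. Counts are hypothetical.

def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F-measure from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)                 # of predicted positives, how many correct
    recall = tp / (tp + fn)                    # of actual positives, how many found
    f_measure = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f_measure

# e.g. a hypothetical "leaning left" class: 90 correct detections,
# 10 false alarms, 5 misses, 895 correctly rejected other-posture samples.
acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, fn=5, tn=895)
```

Because accuracy alone can look high when one class dominates, papers like this one report precision, recall and F-measure alongside it when ranking classifiers such as J48 against SVM or k-NN.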
The effect of letter string length and report condition on letter recognition accuracy.
Raghunandan, Avesh; Karmazinaite, Berta; Rossow, Andrea S
Letter sequence recognition accuracy has been postulated to be limited primarily by low-level visual factors, while the influence of high-level factors such as visual memory (load and decay) has been largely overlooked. This study provides insight into the role of these factors by investigating the interaction between letter sequence recognition accuracy, letter string length and report condition. Letter sequence recognition accuracy for trigrams and pentagrams was measured in 10 adult subjects for two report conditions. In the complete report condition, subjects reported all 3 or all 5 letters comprising the trigrams and pentagrams, respectively. In the partial report condition, subjects reported only a single letter in the trigram or pentagram. Letters were presented for 100 ms and rendered in high contrast, using black lowercase Courier font that subtended 0.4° at the fixation distance of 0.57 m. Letter sequence recognition accuracy was consistently higher for trigrams compared to pentagrams, especially for letter positions away from fixation. While partial report increased recognition accuracy in both string length conditions, the effect was larger for pentagrams and most evident for the final letter positions within trigrams and pentagrams. The effect of partial report on recognition accuracy for the final letter positions increased with eccentricity away from fixation, and was independent of the inner/outer position of a letter. Higher-level visual memory functions (memory load and decay) play a role in letter sequence recognition accuracy. There is also a suggestion of additional delays imposed on memory encoding by crowded letter elements. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
Rice, Danielle B; Kloda, Lorie A; Shrier, Ian; Thombs, Brett D
2016-08-01
Meta-analyses that are conducted rigorously and reported completely and transparently can provide accurate evidence to inform the best possible healthcare decisions. Guideline makers have raised concerns about the utility of existing evidence on the diagnostic accuracy of depression screening tools. The objective of our study was to evaluate the transparency and completeness of reporting in meta-analyses of the diagnostic accuracy of depression screening tools using the PRISMA tool adapted for diagnostic test accuracy meta-analyses. We searched MEDLINE and PsycINFO from January 1, 2005 through March 13, 2016 for recent meta-analyses in any language on the diagnostic accuracy of depression screening tools. Two reviewers independently assessed transparency in reporting using the PRISMA tool with appropriate adaptations made for studies of diagnostic test accuracy. We identified 21 eligible meta-analyses. Twelve of 21 meta-analyses complied with at least 50% of adapted PRISMA items. Of 30 adapted PRISMA items, 11 were fulfilled by ≥80% of included meta-analyses, 3 by 50-79% of meta-analyses, 7 by 25-45% of meta-analyses, and 9 by <25%. On average, post-PRISMA meta-analyses complied with 17 of 30 items compared to 13 of 30 items pre-PRISMA. Deficiencies in the transparency of reporting were identified in meta-analyses of the diagnostic test accuracy of depression screening tools. Authors, reviewers, and editors should adhere to the PRISMA statement to improve the reporting of meta-analyses of the diagnostic accuracy of depression screening tools. Copyright © 2016 Elsevier Inc. All rights reserved.
Caggiano, Michael D; Tinkham, Wade T; Hoffman, Chad; Cheng, Antony S; Hawbaker, Todd J
2016-10-01
The wildland-urban interface (WUI), the area where human development encroaches on undeveloped land, is expanding throughout the western United States, resulting in increased wildfire risk to homes and communities. Although census-based mapping efforts have provided insights into the pattern of development and expansion of the WUI at regional and national scales, these approaches do not provide sufficient detail for fine-scale fire and emergency management planning, which requires maps of individual building locations. Although fine-scale maps of the WUI have been developed, they are often limited in their spatial extent, have unknown accuracies and biases, and are costly to update over time. In this paper we assess a semi-automated Object Based Image Analysis (OBIA) approach that utilizes 4-band multispectral National Aerial Image Program (NAIP) imagery for the detection of individual buildings within the WUI. We evaluate this approach by comparing the accuracy and overall quality of extracted buildings to a building footprint control dataset. In addition, we assessed the effects of buffer distance, topographic conditions, and building characteristics on the accuracy and quality of building extraction. The overall accuracy and quality of our approach was positively related to buffer distance, with accuracies ranging from 50 to 95% for buffer distances from 0 to 100 m. Our results also indicate that building detection was sensitive to building size, with smaller outbuildings (footprints less than 75 m2) having detection rates below 80% and larger residential buildings having detection rates above 90%. These findings demonstrate that this approach can successfully identify buildings in the WUI in diverse landscapes, achieving high accuracies at buffer distances appropriate for most fire management applications while overcoming the cost and time constraints associated with traditional approaches.
This study is unique in that it evaluates the ability of an OBIA approach to extract highly detailed data on building locations in a WUI setting.
Reinisch, S; Schweiger, K; Pablik, E; Collet-Fenetrier, B; Peyrin-Biroulet, L; Alfaro, I; Panés, J; Moayyedi, P; Reinisch, W
2016-09-01
The Lennard-Jones criteria are considered the gold standard for diagnosing Crohn's disease (CD) and include the items granuloma, macroscopic discontinuity, transmural inflammation, fibrosis, lymphoid aggregates and discontinuous inflammation on histology. The criteria have never been subjected to a formal validation process. To develop a validated and improved diagnostic index based on the items of Lennard-Jones criteria. Included were 328 adult patients with long-standing CD (median disease duration 10 years) from three centres and classified as 'established', 'probable' or 'non-CD' by Lennard-Jones criteria at time of diagnosis. Controls were patients with ulcerative colitis (n = 170). The performance of each of the six diagnostic items of Lennard-Jones criteria was modelled by logistic regression and a new index based on stepwise backward selection and cut-offs was developed. The diagnostic value of the new index was analysed by comparing sensitivity, specificity and accuracy vs. Lennard-Jones criteria. By Lennard-Jones criteria 49% (n = 162) of CD patients would have been diagnosed as 'non-CD' at time of diagnosis (sensitivity/specificity/accuracy, 'established' CD: 0.34/0.99/0.67; 'probable' CD: 0.51/0.95/0.73). A new index was derived from granuloma, fibrosis, transmural inflammation and macroscopic discontinuity, but excluded lymphoid aggregates and discontinuous inflammation on histology. Our index provided improved diagnostic accuracy for 'established' and 'probable' CD (sensitivity/specificity/accuracy, 'established' CD: 0.45/1/0.72; 'probable' CD: 0.8/0.85/0.82), including the subgroup isolated colonic CD ('probable' CD, new index: 0.73/0.85/0.79; Lennard-Jones criteria: 0.43/0.95/0.69). We developed an index based on items of Lennard-Jones criteria providing improved diagnostic accuracy for the differential diagnosis between CD and UC. © 2016 John Wiley & Sons Ltd.
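The sensitivity/specificity/accuracy triples reported for the Lennard-Jones criteria and the new index follow directly from 2 × 2 confusion counts. A minimal sketch; the function name and the example counts are illustrative, not the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-table counts.

    tp/fn: diseased patients correctly/incorrectly classified;
    tn/fp: controls correctly/incorrectly classified.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

With hypothetical counts of 80 true positives, 20 false negatives, 90 true negatives, and 10 false positives, this yields sensitivity 0.80, specificity 0.90, and accuracy 0.85.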
An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm
NASA Astrophysics Data System (ADS)
Jacques, Robert; McNutt, Todd
2014-03-01
Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte Carlo accuracy benchmark, 23 similar accuracy benchmarks, and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near-Monte Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing it from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e., < 0.1 g/cm³) remained problematic but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density-scaled heterogeneity correction with a position- and direction-sensitive density filter. This method significantly improved the accuracy of the GPU-based algorithm, reaching the accuracy levels of Monte Carlo based methods with performance of a few tenths of a second per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
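The %|mm metric defined above can be sketched in one dimension. The function name, the simple nearest-dose distance-to-agreement search, and the test profiles are illustrative assumptions; the study evaluates this on 3-D dose grids:

```python
import numpy as np

def per_voxel_error(dose_test, dose_ref, voxel_mm):
    """Per-voxel %|mm error: the minimum of a 1-D distance-to-agreement (mm)
    and the percent dose error relative to the maximum reference dose."""
    dose_test = np.asarray(dose_test, float)
    dose_ref = np.asarray(dose_ref, float)
    # percent dose error, normalized to the reference maximum
    pct_err = 100.0 * np.abs(dose_test - dose_ref) / dose_ref.max()
    dta = np.empty(len(dose_test))
    for i, d in enumerate(dose_test):
        # distance (mm) to the reference voxel whose dose best matches d
        j = int(np.argmin(np.abs(dose_ref - d)))
        dta[i] = abs(j - i) * voxel_mm
    return np.minimum(pct_err, dta)
```

For a test profile that is the reference profile shifted by one 2-mm voxel, each voxel's large percent error collapses to a 2-mm distance-to-agreement, illustrating why the combined metric is forgiving in steep gradients.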
MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS
Thematic map accuracy is not spatially homogeneous but variable across a landscape. Properly analyzing and representing the spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...
Luu, Hung N; Dahlstrom, Kristina R; Mullen, Patricia Dolan; VonVille, Helena M; Scheurer, Michael E
2013-01-01
The effectiveness of screening programs for cervical cancer has benefited from the inclusion of Human papillomavirus (HPV) DNA assays; which assay to choose, however, is not clear based on previous reviews. Our review addressed test accuracy of Hybrid Capture II (HCII) and polymerase chain reaction (PCR) assays based on studies with stronger designs and with more clinically relevant outcomes. We searched OvidMedline, PubMed, and the Cochrane Library for English language studies comparing both tests, published 1985–2012, with cervical dysplasia defined by the Bethesda classification. Meta-analysis provided pooled sensitivity, specificity, and 95% confidence intervals (CIs); meta-regression identified sources of heterogeneity. From 29 reports, we found that the pooled sensitivity and specificity to detect high-grade squamous intraepithelial lesion (HSIL) was higher for HCII than PCR (0.89 [CI: 0.89–0.90] and 0.85 [CI: 0.84–0.86] vs. 0.73 [CI: 0.73–0.74] and 0.62 [CI: 0.62–0.64]). Both assays had higher accuracy to detect cervical dysplasia in Europe than in Asia-Pacific or North America (diagnostic odds ratio – dOR = 4.08 [CI: 1.39–11.91] and 4.56 [CI: 1.86–11.17] for HCII vs. 2.66 [CI: 1.16–6.53] and 3.78 [CI: 1.50–9.51] for PCR) and higher accuracy to detect HSIL than atypical squamous cells of undetermined significance (ASCUS)/low-grade squamous intraepithelial lesion (LSIL) (HCII-dOR = 9.04 [CI: 4.12–19.86] and PCR-dOR = 5.60 [CI: 2.87–10.94]). For HCII, using histology as a gold standard results in higher accuracy than using cytology (dOR = 2.87 [CI: 1.31–6.29]). Based on higher test accuracy, our results support the use of HCII in cervical cancer screening programs. The role of HPV type distribution should be explored to determine the worldwide comparability of HPV test accuracy. PMID:23930214
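The diagnostic odds ratio (dOR) used above to compare the assays combines pooled sensitivity and specificity into a single accuracy measure: the ratio of the positive to the negative likelihood ratio. A minimal sketch, with the function name chosen here for illustration:

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """dOR = LR+ / LR- = [sens/(1-spec)] / [(1-sens)/spec]."""
    lr_pos = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    lr_neg = (1.0 - sensitivity) / specificity   # negative likelihood ratio
    return lr_pos / lr_neg
```

For example, the pooled HCII values for HSIL (sensitivity 0.89, specificity 0.85) give a dOR of about 46, versus about 4.4 for the pooled PCR values (0.73, 0.62).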
Medication reconciliation in a rural trauma population.
Miller, S Lee; Miller, Stephanie; Balon, Jennifer; Helling, Thomas S
2008-11-01
Medication errors during hospitalization can lead to adverse drug events. Because of preoccupation by health care providers with life-threatening injuries, trauma patients may be particularly prone to medication errors. Medication reconciliation on admission can result in decreased medication errors and adverse drug events in this patient population. The purpose of this study is to determine the accuracy of medication histories obtained on trauma patients by initial health care providers compared to a medication reconciliation process by a designated clinical pharmacist after the patient's admission and secondarily to determine whether trauma-associated factors affected medication accuracy. This was a prospective enrollment study during 13 months in which trauma patients admitted to a Level I trauma center were enrolled in a stepwise medication reconciliation process by the clinical pharmacist. The setting was a rural Level I trauma center. Patients admitted to the trauma service were studied. The intervention was medication reconciliation by a clinical pharmacist. The main outcome measure was accuracy of medication history by initial trauma health care providers compared to a medication reconciliation process by a clinical pharmacist who compared all sources, including telephone calls to pharmacies. Patients taking no medications (whether correctly identified as such or not) were not analyzed in these results. Variables examined included admission medication list accuracy, age, trauma team activation mode, Injury Severity Score, and Glasgow Coma Scale (GCS) score. Two hundred thirty-four patients were enrolled. Eighty-four of 234 patients (36%) had an Injury Severity Score greater than 15. Medications were reconciled within an average of 3 days of admission (range 1 to 8) by the clinical pharmacist. Overall, medications as reconciled by the clinical pharmacist were recorded correctly for 15% of patients. 
Admission trauma team medication lists were inaccurate in 224 of 234 cases (96%). Admitting nurses' lists were more accurate than the trauma team's (11% versus 4%; 95% confidence interval 2.5% to 11.2%). Errors were found by the clinical pharmacist in medication name, strength, route, and frequency. No patients (0/20) with admission GCS less than 13 had accurate medication lists. Seventy of 84 patients (83%) with an Injury Severity Score greater than 15 had inaccurate medication lists. Ten of 234 patients (4%) were ordered wrong medications, and 1 adverse drug event (hypoglycemia) occurred. The median duration of the reconciliation process was 2 days. Only 12% of cases were completed in 1 day, and almost 25% required 3 or more (maximum 8) days. This study showed that medication history recorded on admission was inaccurate. This patient population overall was susceptible to medication inaccuracies from multiple sources, even with duplication of medication histories by initial health care providers. Medication reconciliation for trauma patients by a clinical pharmacist may improve safety and prevent adverse drug events but did not occur quickly in this setting.
ERIC Educational Resources Information Center
Williams, Lynda Patterson
The purpose of the study was to compare two methods of learning multiplication facts in order to develop speed and accuracy. The researcher conducted the action research project with a seventh grade enrichment class, which met for seven weeks during the school year. As part of the curriculum students were provided with activities to refine their…
QuickBird and OrbView-3 Geopositional Accuracy Assessment
NASA Technical Reports Server (NTRS)
Helder, Dennis; Ross, Kenton
2006-01-01
Objective: Compare vendor-provided image coordinates with known references visible in the imagery. Approach: Use multiple, well-characterized sites with >40 ground control points (GCPs); sites that are a) Well distributed; b) Accurately surveyed; and c) Easily found in imagery. Perform independent assessments with independent teams. Each team has slightly different measurement techniques and data processing methods. NASA Stennis Space Center. South Dakota State University.
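The abstract does not name the accuracy statistic used; a standard summary for geopositional assessments against surveyed GCPs is the 90th-percentile circular error (CE90), sketched here under that assumption:

```python
import numpy as np

def ce90(dx, dy):
    """90th-percentile circular error from per-GCP offsets (same units as input).

    dx, dy: easting/northing differences between image-derived and
    surveyed GCP coordinates.
    """
    r = np.hypot(np.asarray(dx, float), np.asarray(dy, float))
    return float(np.percentile(r, 90))
```

Larger, well-distributed GCP sets (the >40 points per site mentioned above) make this percentile estimate more stable.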
Marine benefits from NASA's global differential system: sub-meter positioning, anywhere, anytime
NASA Technical Reports Server (NTRS)
Bar-Sever, Y.
2000-01-01
Precise real-time, onboard knowledge of a platform's state (position and velocity) is a critical component in many marine applications. This article describes a recent technology development that provides a breakthrough in this capability for platforms carrying a dual-frequency GPS receiver: seamless global coverage and roughly an order of magnitude improvement in accuracy compared to the state of the art.
2010-12-01
Simulation of Free-Field Blast ... (a) Peak Incident Pressure and (b) ... several types of problems involving blast propagation. Mastin et al. (1995) compared CTH simulations to free-field incident pressure as predicted by ... a measure of accuracy and efficiency. To provide this direct comparison, a series of 2D-axisymmetric free-field air blast simulations were ...
Implementation and Assessment of Advanced Analog Vector-Matrix Processor
NASA Technical Reports Server (NTRS)
Gary, Charles K.; Bualat, Maria G.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper discusses the design and implementation of an analog optical vector-matrix coprocessor with a throughput of 128 Mops for a personal computer. Vector-matrix calculations are inherently parallel, providing a promising domain for the use of optical calculators. However, to date, digital optical systems have proven too cumbersome to replace electronics, and analog processors have not demonstrated sufficient accuracy in large-scale systems. The goal of the work described in this paper is to demonstrate a viable optical coprocessor for linear operations. The analog optical processor presented has been integrated with a personal computer to provide full functionality and is the first demonstration of an optical linear algebra processor with a throughput greater than 100 Mops. The optical vector-matrix processor consists of a laser diode source, an acousto-optical modulator array to input the vector information, a liquid crystal spatial light modulator to input the matrix information, an avalanche photodiode array to read out the result vector of the vector-matrix multiplication, as well as transport optics and the electronics necessary to drive the optical modulators and interface to the computer. The intent of this research is to provide a low-cost, highly energy-efficient coprocessor for linear operations. Measurements of the analog accuracy of the processor performing 128 Mops are presented, along with an assessment of the implications for future systems. A range of noise sources, including cross-talk, source amplitude fluctuations, shot noise at the detector, and non-linearities of the optoelectronic components, are measured and compared to determine the most significant source of error. The possibilities for reducing these sources of error are discussed. Also, the total error is compared with that expected from a statistical analysis of the individual components and their relation to the vector-matrix operation.
The sufficiency of the measured accuracy of the processor is compared with that required for a range of typical problems. Calculations resolving alloy concentrations from spectral plume data of rocket engines are implemented on the optical processor, demonstrating its sufficiency for this problem. We also show how this technology can be easily extended to a 100 × 100, 10-MHz (200 Gops) processor.
A novel minimally invasive dual-modality fiber optic probe for prostate cancer detection
NASA Astrophysics Data System (ADS)
Sharma, Vikrant
Prostate cancer is the most common form of cancer in males and is the second leading cause of cancer-related deaths in the United States. In prostate cancer diagnostics and therapy, there is a critical need for a minimally invasive tool for in vivo evaluation of prostate tissue. Such a tool finds its niche in improving the TRUS (trans-rectal ultrasound) guided biopsy procedure, surgical margin assessment during radical prostatectomy, and active surveillance of patients with certain risk levels. This work is focused on the development of a fiber-based dual-modality optical device (dMOD) to differentiate prostate cancer from benign tissue in vivo. dMOD utilizes two independent optical techniques, LRS (light reflectance spectroscopy) and AFLS (auto-fluorescence lifetime spectroscopy). LRS quantifies the scattering coefficient of the tissue, as well as concentrations of major tissue chromophores such as hemoglobin derivatives, β-carotene, and melanin. AFLS was designed to target lifetime signatures of multiple endogenous fluorophores such as flavins, porphyrins, and lipo-pigments. Each of these methods was independently developed, and the two modalities were integrated using a thin (1-mm outer diameter) fiber-optic probe. The resulting dMOD probe was implemented and evaluated on animal models of prostate cancer, as well as on human prostate tissue. Application of dMOD to human breast cancer (invasive ductal carcinoma) identification was also evaluated. The results obtained reveal that both LRS and AFLS are excellent techniques to discriminate prostate cancer tissue from surrounding benign tissue in animal models. Each technique independently is capable of providing near-absolute (100%) accuracy for cancer detection, indicating that either of them could be used independently without the need of implementing them together. Also, in the case of human breast cancer, LRS and AFLS provided accuracies comparable to dMOD, with LRS accuracy (96%) being the highest for the studied population.
However, the dual-modality integration proved to be ideal for human prostate cancer detection, as dMOD provided much better accuracy, i.e., 82.7% for cancer detection in intra-capsular prostatic tissues (ICT) and 92.4% in extra-capsular prostatic tissues (ECT), when compared with either LRS (74.7% ICT, 86.6% ECT) or AFLS (67.1% ICT, 82.1% ECT) alone. A classification algorithm was also developed to identify different grades of prostate cancer based on Gleason scores (GS). When stratified by grade, each high-grade prostate cancer (GS 7, 8, and 9) was successfully identified using dMOD with excellent accuracy in ICT (88%, 90%, 85%) as well as ECT (91%, 92%, 94%).
Park, Charlie C; Hooker, Catherine; Hooker, Jonathan C; Bass, Emily; Haufe, William; Schlein, Alexandra; Covarrubias, Yesenia; Heba, Elhamy; Bydder, Mark; Wolfson, Tanya; Gamst, Anthony; Loomba, Rohit; Schwimmer, Jeffrey; Hernando, Diego; Reeder, Scott B; Middleton, Michael; Sirlin, Claude B; Hamilton, Gavin
2018-04-29
Improving the signal-to-noise ratio (SNR) of chemical-shift-encoded MRI acquisition with complex reconstruction (MRI-C) may improve the accuracy and precision of noninvasive proton density fat fraction (PDFF) quantification in patients with hepatic steatosis. To assess the accuracy of high SNR (Hi-SNR) MRI-C versus standard MRI-C acquisition to estimate hepatic PDFF in adult and pediatric nonalcoholic fatty liver disease (NAFLD) using an MR spectroscopy (MRS) sequence as the reference standard. Prospective. In all, 231 adult and pediatric patients with known or suspected NAFLD. PDFF estimated at 3T by three MR techniques: standard MRI-C; a Hi-SNR MRI-C variant with increased slice thickness, decreased matrix size, and no parallel imaging; and MRS (reference standard). MRI-PDFF was measured by image analysts using a region of interest coregistered with the MRS-PDFF voxel. Linear regression analyses were used to assess accuracy and precision of MRI-estimated PDFF for MRS-PDFF as a function of MRI-PDFF using the standard and Hi-SNR MRI-C for all patients and for patients with MRS-PDFF <10%. In all, 271 exams from 231 patients were included (mean MRS-PDFF: 12.6% [SD: 10.4]; range: 0.9-41.9). High agreement between MRI-PDFF and MRS-PDFF was demonstrated across the overall range of PDFF, with a regression slope of 1.035 for the standard MRI-C and 1.008 for Hi-SNR MRI-C. Hi-SNR MRI-C, compared to standard MRI-C, provided small but statistically significant improvements in the slope (respectively, 1.008 vs. 1.035, P = 0.004) and mean bias (0.412 vs. 0.673, P < 0.0001) overall. In the low-fat patients only, Hi-SNR MRI-C provided improvements in the slope (1.058 vs. 1.190, P = 0.002), mean bias (0.168 vs. 0.368, P = 0.007), intercept (-0.153 vs. -0.796, P < 0.0001), and borderline improvement in the R² (0.888 vs. 0.813, P = 0.01).
Compared to standard MRI-C, Hi-SNR MRI-C provides slightly higher MRI-PDFF estimation accuracy across the overall range of PDFF and improves both accuracy and precision in the low PDFF range. Level of Evidence: 1. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
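The slope, intercept, R², and mean bias used above to compare MRI-PDFF against the MRS-PDFF reference come from an ordinary least-squares fit of one measurement against the other. A minimal sketch; the function name and data are illustrative:

```python
import numpy as np

def agreement_stats(mri_pdff, mrs_pdff):
    """Slope, intercept, R^2, and mean bias of MRI-PDFF vs. the MRS reference."""
    x = np.asarray(mrs_pdff, float)   # reference standard
    y = np.asarray(mri_pdff, float)   # test measurement
    slope, intercept = np.polyfit(x, y, 1)    # ordinary least squares
    r2 = np.corrcoef(x, y)[0, 1] ** 2         # squared Pearson correlation
    bias = float(np.mean(y - x))              # mean difference (bias)
    return float(slope), float(intercept), float(r2), bias
```

Perfect agreement corresponds to slope 1, intercept 0, R² of 1, and zero bias; the study's slopes near 1.0 indicate high agreement across the PDFF range.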
Comparative utility of LANDSAT-1 and Skylab data for coastal wetland mapping and ecological studies
NASA Technical Reports Server (NTRS)
Anderson, R.; Alsid, L.; Carter, V.
1975-01-01
Skylab 190-A photography and LANDSAT-1 analog data have been analyzed to determine coastal wetland mapping potential as a near term substitute for aircraft data and as a long term monitoring tool. The level of detail and accuracy of each was compared. Skylab data provides more accurate classification of wetland types, better delineation of freshwater marshes and more detailed analysis of drainage patterns. LANDSAT-1 analog data is useful for general classification, boundary definition and monitoring of human impact in wetlands.
Baseline mathematics and geodetics for tracking operations
NASA Technical Reports Server (NTRS)
James, R.
1981-01-01
Various geodetic and mapping algorithms are analyzed as they apply to radar tracking systems and tested in extended BASIC computer language for real time computer applications. Closed-form approaches to the solution of converting Earth centered coordinates to latitude, longitude, and altitude are compared with classical approximations. A simplified approach to atmospheric refractivity called gradient refraction is compared with conventional ray tracing processes. An extremely detailed set of documentation which provides the theory, derivations, and application of algorithms used in the programs is included. Validation methods are also presented for testing the accuracy of the algorithms.
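One widely used closed-form approach for the Earth-centered-to-geodetic conversion discussed above is Bowring's approximation. The abstract does not specify which closed form it tests, so this sketch is illustrative, with WGS-84 ellipsoid constants assumed:

```python
import math

# WGS-84 ellipsoid constants
A = 6378137.0                 # semi-major axis (m)
F = 1.0 / 298.257223563       # flattening
E2 = F * (2.0 - F)            # first eccentricity squared

def ecef_to_geodetic(x, y, z):
    """Earth-centered coordinates (m) to latitude, longitude (deg) and
    ellipsoidal altitude (m) via Bowring's closed-form approximation."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)                      # distance from spin axis
    b = A * (1.0 - F)                         # semi-minor axis
    ep2 = (A * A - b * b) / (b * b)           # second eccentricity squared
    theta = math.atan2(z * A, p * b)          # parametric latitude estimate
    lat = math.atan2(z + ep2 * b * math.sin(theta) ** 3,
                     p - E2 * A * math.cos(theta) ** 3)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    alt = p / math.cos(lat) - n
    return math.degrees(lat), math.degrees(lon), alt
```

The altitude formula degrades near the poles (cos(lat) → 0), which is one reason such closed forms are compared against iterative solutions in tracking applications.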
Physician Evaluation of Internet Health Information on Proton Therapy for Prostate Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Anand, E-mail: as4351@columbia.edu; Department of Radiation Oncology, Columbia University Medical Center, New York, New York; Paly, Jonathan J.
Purpose: Many patients considering prostate cancer (PCa) treatment options report seeking proton beam therapy (PBT) based in part on information readily available on the Internet. There is, however, potential for considerable variation in Internet health information (IHI). We thus evaluated the characteristics, quality, and accuracy of IHI on PBT for PCa. Methods and Materials: We undertook a qualitative research study using snowball-purposive sampling in which we evaluated the top 50 Google search results for “proton prostate cancer.” Quality was evaluated on a 5-point scale using the validated 15-question DISCERN instrument. Accuracy was evaluated by comparing IHI with the best available evidence. Results: Thirty-seven IHI websites were included in the final sample. These websites most frequently were patient information/support resources (46%), were focused exclusively on PBT (51%), and had a commercial affiliation (38%). There was a significant difference in quality according to the type of IHI. Substantial inaccuracies were noted in the study sample compared with best available or contextual evidence. Conclusions: There are shortcomings in quality and accuracy in consumer-oriented IHI on PBT for PCa. Providers must be prepared to educate patients how to critically evaluate IHI related to PBT for PCa to best inform their treatment decisions.
On the primary variable switching technique for simulating unsaturated-saturated flows
NASA Astrophysics Data System (ADS)
Diersch, H.-J. G.; Perrochet, P.
Primary variable switching appears as a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass balance accuracy, the mixed form with its improved conservative properties can possess convergence difficulties for dry initial conditions. On the other hand, variable switching can overcome most of the stated numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. Both schemes provide different behaviors with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the primary variable switching technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier), where comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure. TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.
NASA Astrophysics Data System (ADS)
Jayasekare, Ajith S.; Wickramasuriya, Rohan; Namazi-Rad, Mohammad-Reza; Perez, Pascal; Singh, Gaurav
2017-07-01
A continuous update of building information is necessary in today's urban planning. Digital images acquired by remote sensing platforms at appropriate spatial and temporal resolutions provide an excellent data source to achieve this. In particular, high-resolution satellite images are often used to retrieve objects such as rooftops using feature extraction. However, high-resolution images acquired over built-up areas are associated with noise, such as shadows, that reduces the accuracy of feature extraction. Feature extraction relies heavily on the reflectance purity of objects, which is difficult to achieve in complex urban landscapes. An attempt was made to increase the reflectance purity of building rooftops affected by shadows. In addition to the multispectral (MS) image, derivatives thereof, namely normalized difference vegetation index and principal component (PC) images, were incorporated in generating the probability image. This hybrid probability-image generation ensured that the effect of shadows on rooftop extraction, particularly on light-colored roofs, is largely eliminated. The PC image was also used for image segmentation, which further increased the accuracy compared to segmentation performed on an MS image. Results show that the presented method can achieve higher rooftop extraction accuracy (70.4%) in vegetation-rich urban areas compared to traditional methods.
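The normalized difference vegetation index named above is computed per pixel from the near-infrared and red bands of the MS image. A minimal sketch; the epsilon guard against zero-sum pixels is an implementation choice, not from the study:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red).

    High values flag vegetation, helping separate it from rooftops
    in the probability image; eps guards against division by zero.
    """
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red + eps)
```

Vegetated pixels reflect strongly in NIR and weakly in red, so NDVI approaches 1 over vegetation and stays near or below 0 over most built surfaces.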
The effect of superior auditory skills on vocal accuracy
NASA Astrophysics Data System (ADS)
Amir, Ofer; Amir, Noam; Kishon-Rabin, Liat
2003-02-01
The relationship between auditory perception and vocal production has typically been investigated by evaluating the effect of either altered or degraded auditory feedback on speech production in either normal-hearing or hearing-impaired individuals. Our goal in the present study was to examine this relationship in individuals with superior auditory abilities. Thirteen professional musicians and thirteen nonmusicians, with no vocal or singing training, participated in this study. For vocal production accuracy, subjects were presented with three tones. They were asked to reproduce the pitch using the vowel /a/. This procedure was repeated three times. The fundamental frequency of each production was measured using an autocorrelation pitch detection algorithm designed for this study. The musicians' superior auditory abilities (compared to the nonmusicians) were established in a frequency discrimination task reported elsewhere. Results indicate that (a) musicians had better vocal production accuracy than nonmusicians (production errors of half a semitone compared to 1.3 semitones, respectively); (b) frequency discrimination thresholds explain 43% of the variance in the production data; and (c) all subjects with superior frequency discrimination thresholds showed accurate vocal production; the reverse relationship, however, does not hold true. In this study we provide empirical evidence of the importance of auditory feedback for vocal production in listeners with superior auditory skills.
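Production error in semitones, as reported above, is the log-frequency distance between the produced and target fundamental frequencies. A minimal sketch, with the function name assumed here:

```python
import math

def semitone_error(f_produced, f_target):
    """Pitch production error in semitones: 12 * log2(f_produced / f_target).

    One semitone is a frequency ratio of 2**(1/12); an octave is 12 semitones.
    """
    return 12.0 * math.log2(f_produced / f_target)
```

Working on a log-frequency scale means that, say, a 5% pitch overshoot counts as the same error whether the target is 110 Hz or 440 Hz.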
Playing vs. nonplaying aerobic training in tennis: physiological and performance outcomes.
Pialoux, Vincent; Genevois, Cyril; Capoen, Arnaud; Forbes, Scott C; Thomas, Jordan; Rogowski, Isabelle
2015-01-01
This study compared the effects of playing and nonplaying high intensity intermittent training (HIIT) on physiological demands and tennis stroke performance in young tennis players. Eleven competitive male players (13.4 ± 1.3 years) completed both a playing and nonplaying HIIT session of equal distance, in random order. During each HIIT session, heart rate (HR), blood lactate, and ratings of perceived exertion (RPE) were monitored. Before and after each HIIT session, the velocity and accuracy of the serve, and forehand and backhand strokes were evaluated. The results demonstrated that both HIIT sessions achieved an average HR greater than 90% HRmax. The physiological demands (average HR) were greater during the playing session compared to the nonplaying session, despite similar lactate concentrations and a lower RPE. The results also indicate a reduction in shot velocity after both HIIT sessions; however, the playing HIIT session had a more deleterious effect on stroke accuracy. These findings suggest that 1) both HIIT sessions may be sufficient to develop maximal aerobic power, 2) playing HIIT sessions provide a greater physiological demand with a lower RPE, and 3) playing HIIT has a greater deleterious effect on stroke performance, and in particular on the accuracy component of the ground stroke performance, and should be incorporated appropriately into a periodization program in young male tennis players.
Multiple performance measures are needed to evaluate triage systems in the emergency department.
Zachariasse, Joany M; Nieboer, Daan; Oostenbrink, Rianne; Moll, Henriëtte A; Steyerberg, Ewout W
2018-02-01
Emergency department triage systems can be considered prediction rules with an ordinal outcome, where different directions of misclassification have different clinical consequences. We evaluated strategies to compare the performance of triage systems and aimed to propose a set of performance measures that should be used in future studies. We identified performance measures based on literature review and expert knowledge. Their properties are illustrated in a case study evaluating two triage modifications in a cohort of 14,485 pediatric emergency department visits. Strengths and weaknesses of the performance measures were systematically appraised. Commonly reported performance measures are measures of statistical association (34/60 studies) and diagnostic accuracy (17/60 studies). The case study illustrates that none of the performance measures fulfills all criteria for triage evaluation. Decision curves are the performance measures with the most attractive features but require dichotomization. In addition, paired diagnostic accuracy measures can be recommended for dichotomized analysis, and the triage-weighted kappa and Nagelkerke's R² for ordinal analyses. Other performance measures provide limited additional information. When comparing modifications of triage systems, decision curves and diagnostic accuracy measures should be used in a dichotomized analysis, and the triage-weighted kappa and Nagelkerke's R² in an ordinal approach. Copyright © 2017 Elsevier Inc. All rights reserved.
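The study's triage-weighted kappa uses clinically chosen misclassification weights, which the abstract does not give; the quadratic-weighted variant below illustrates the same construction on an ordinal confusion matrix under that substitution:

```python
import numpy as np

def quadratic_weighted_kappa(conf):
    """Weighted kappa for a k x k ordinal confusion matrix.

    Disagreement weights grow quadratically with the number of
    categories between assigned and reference level, so near-misses
    are penalized less than gross misclassifications.
    """
    conf = np.asarray(conf, float)
    k = conf.shape[0]
    n = conf.sum()
    i, j = np.indices((k, k))
    w = ((i - j) ** 2) / (k - 1) ** 2          # 0 on diagonal, 1 at corners
    row = conf.sum(axis=1)
    col = conf.sum(axis=0)
    expected = np.outer(row, col) / n          # chance-agreement matrix
    return 1.0 - (w * conf).sum() / (w * expected).sum()
```

A triage-specific variant would replace `w` with asymmetric clinical weights, since under-triage of a sick child is costlier than over-triage of a well one.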
Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement
Guzik, Stephen M.; Weisgraber, Todd H.; Colella, Phillip; ...
2013-12-10
A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on a space-time interpolation and solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second-order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidsmeier, T.; Koehl, R.; Lanham, R.
2008-07-15
The current design and fabrication process for RERTR fuel plates utilizes film radiography during the nondestructive testing and characterization. Digital radiographic methods offer potential increases in efficiency and accuracy. The traditional and digital radiographic methods are described and demonstrated on a fuel plate constructed, using the dispersion method, with an average of 51% fuel by volume. Fuel loading data from each method are analyzed and compared to a third baseline method to assess accuracy. The new digital method is shown to be more accurate, to save hours of work, and to provide additional information not easily available with the traditional method. Additional possible improvements suggested by the new digital method are also raised.
Identification accuracy of children versus adults: a meta-analysis.
Pozzulo, J D; Lindsay, R C
1998-10-01
Identification accuracy of children and adults was examined in a meta-analysis. Preschoolers (M = 4 years) were less likely than adults to make correct identifications. Children over the age of 5 did not differ significantly from adults with regard to correct identification rate. Children of all ages examined were less likely than adults to correctly reject a target-absent lineup. Even adolescents (M = 12-13 years) did not reach an adult rate of correct rejection. Compared to simultaneous lineup presentation, sequential lineups increased the child-adult gap for correct rejections. Providing child witnesses with identification practice or training did not increase their correct rejection rates. Suggestions for children's inability to correctly reject target-absent lineups are discussed. Future directions for identification research are presented.
NASA Astrophysics Data System (ADS)
Westphal, T.; Nijssen, R. P. L.
2014-12-01
The effect of Constant Life Diagram (CLD) formulation on fatigue life prediction under variable amplitude (VA) loading was investigated based on variable amplitude tests using three different load spectra representative of wind turbine loading. Next to the Wisper and WisperX spectra, the recently developed NewWisper2 spectrum was used. Based on these variable amplitude fatigue results, the prediction accuracy of four CLD formulations was investigated. In the study, a piecewise linear CLD based on the S-N curves for nine load ratios compared favourably in terms of prediction accuracy and conservativeness. For the specific laminate used in this study, Boerstra's Multislope model provides a good alternative at reduced test effort.
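A piecewise linear CLD of the kind evaluated above amounts to linear interpolation between (mean stress, stress amplitude) anchor points derived from S-N curves at several load ratios R for a fixed life N. A minimal sketch with invented anchor values (not the paper's laminate data):

```python
import numpy as np

# For a load ratio R = sigma_min/sigma_max at a given life N, the anchor
# point is sigma_mean = sigma_max*(1+R)/2, sigma_amp = sigma_max*(1-R)/2.
# The values below are illustrative only (MPa, at a hypothetical N = 1e6).
mean_anchors = np.array([-400.0, -150.0, 0.0, 200.0, 450.0])
amp_anchors  = np.array([  40.0,  180.0, 260.0, 190.0,  30.0])

def allowable_amplitude(sigma_mean):
    """Allowable stress amplitude at a given mean stress, read off the
    piecewise linear CLD by interpolating between the anchor points."""
    return np.interp(sigma_mean, mean_anchors, amp_anchors)

print(allowable_amplitude(100.0))   # between the R = -1 and tension anchors
```

In a life prediction under VA loading, each rainflow-counted cycle's (mean, amplitude) pair would be compared against such interpolated constant-life lines to accumulate damage.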
Experience Gained From Launch and Early Orbit Support of the Rossi X-Ray Timing Explorer (RXTE)
NASA Technical Reports Server (NTRS)
Fink, D. R.; Chapman, K. B.; Davis, W. S.; Hashmall, J. A.; Shulman, S. E.; Underwood, S. C.; Zsoldos, J. M.; Harman, R. R.
1996-01-01
This paper reports the results to date of early mission support provided by the personnel of the Goddard Space Flight Center Flight Dynamics Division (FDD) for the Rossi X-Ray Timing Explorer (RXTE) spacecraft. For this mission, the FDD supports onboard attitude determination and ephemeris propagation by supplying ground-based orbit and attitude solutions and calibration results. The first phase of that support was to provide launch window analyses. As the launch window was determined, acquisition attitudes were calculated and calibration slews were planned. Postlaunch, these slews provided the basis for ground-determined calibration. Ground-determined calibration results are used to improve the accuracy of onboard solutions. The FDD is applying new calibration tools designed to facilitate use of the simultaneous, high-accuracy star observations from the two RXTE star trackers for ground attitude determination and calibration. An evaluation of the performance of these tools is presented. The FDD provides updates to the onboard star catalog based on preflight analysis and analysis of flight data. The in-flight results of the mission support in each area are summarized and compared with pre-mission expectations.
Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C
2018-06-01
Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict genetic merits of selection candidates; and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetic architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.
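As a toy illustration of the prediction setting described above (not the paper's pipeline), marker-based ridge regression — equivalent to G-BLUP for a suitable shrinkage parameter — can be run on simulated genotypes, with prediction accuracy measured as the correlation between predicted and observed phenotypes. All sizes and effect distributions below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated additive toy data: n lines, p biallelic markers (a stand-in for
# real sequence data; heritability is ~0.5 by construction).
n_train, n_test, p = 300, 100, 500
X = rng.integers(0, 2, size=(n_train + n_test, p)).astype(float)
beta = rng.normal(0.0, 1.0, p) * (rng.random(p) < 0.05)   # ~5% causal markers
g = X @ beta                                              # genetic values
y = g + rng.normal(0.0, g.std(), n_train + n_test)        # add environmental noise

# Ridge regression on markers (G-BLUP-equivalent for an appropriate lambda).
Xtr, ytr = X[:n_train], y[:n_train]
lam = 10.0
bhat = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(p), Xtr.T @ ytr)

# Prediction accuracy = correlation of predicted and observed phenotypes.
yhat = X[n_train:] @ bhat
acc = np.corrcoef(yhat, y[n_train:])[0, 1]
print(f"prediction accuracy r = {acc:.2f}")
```

Under the purely additive architecture simulated here the additive model does fine; the paper's point is that accuracy degrades when epistatic interactions dominate and the model ignores them.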
Theoretical study of surface plasmon resonance sensors based on 2D bimetallic alloy grating
NASA Astrophysics Data System (ADS)
Dhibi, Abdelhak; Khemiri, Mehdi; Oumezzine, Mohamed
2016-11-01
A surface plasmon resonance (SPR) sensor based on a 2D alloy grating with high performance is proposed. The grating consists of homogeneous alloys of formula MxAg1-x, where M is gold, copper, platinum or palladium. Compared to SPR sensors based on a pure metal, the sensor based on angular interrogation with silver exhibits a sharper (i.e. larger depth-to-width ratio) reflectivity dip, which provides higher detection accuracy, whereas the sensor based on gold exhibits the broadest dips and the highest sensitivity. The detection accuracy of an SPR sensor based on a metal alloy is enhanced by increasing the silver composition. In addition, a silver composition of around 0.8 improves the sensitivity and quality of the SPR sensor relative to a pure metal. Numerical simulations based on rigorous coupled wave analysis (RCWA) show that the sensor based on a metal alloy not only has a high sensitivity and a high detection accuracy, but also exhibits a good linearity and a good quality.
Competitive Deep-Belief Networks for Underwater Acoustic Target Recognition
Shen, Sheng; Yao, Xiaohui; Sheng, Meiping; Wang, Chen
2018-01-01
Underwater acoustic target recognition based on ship-radiated noise belongs to the small-sample-size recognition problems. A competitive deep-belief network is proposed to learn features with more discriminative information from labeled and unlabeled samples. The proposed model consists of four stages: (1) A standard restricted Boltzmann machine is pretrained using a large number of unlabeled data to initialize its parameters; (2) the hidden units are grouped according to categories, which provides an initial clustering model for competitive learning; (3) competitive training and back-propagation algorithms are used to update the parameters to accomplish the task of clustering; (4) by applying layer-wise training and supervised fine-tuning, a deep neural network is built to obtain features. Experimental results show that the proposed method can achieve classification accuracy of 90.89%, which is 8.95% higher than the accuracy obtained by the compared methods. In addition, the highest accuracy of our method is obtained with fewer features than other methods. PMID:29570642
Paraskevopoulou, Sivylla E; Wu, Di; Eftekhar, Amir; Constandinou, Timothy G
2014-09-30
This work presents a novel unsupervised algorithm for real-time adaptive clustering of neural spike data (spike sorting). The proposed Hierarchical Adaptive Means (HAM) clustering method combines centroid-based clustering with hierarchical cluster connectivity to classify incoming spikes using groups of clusters. It is described how the proposed method can adaptively track the incoming spike data without requiring any past history, iteration or training, and autonomously determines the number of spike classes. Its performance (classification accuracy) has been tested using multiple datasets (both simulated and recorded), achieving near-identical accuracy compared to k-means (using 10 iterations and provided with the number of spike classes). Also, its robustness in applying to different feature extraction methods has been demonstrated by achieving classification accuracies above 80% across multiple datasets. Finally, and crucially, its low complexity, quantified in terms of both memory and computation requirements, makes this method highly attractive for future hardware implementation. Copyright © 2014 Elsevier B.V. All rights reserved.
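The core idea — centroid-based clustering that tracks a stream without storing past history and determines the number of classes on its own — can be sketched in a few lines. This is a simplified illustration, not the paper's HAM algorithm, which additionally layers hierarchical cluster connectivity on top:

```python
import numpy as np

class AdaptiveMeans:
    """Minimal online centroid clustering: assign each incoming point to the
    nearest centroid if it lies within `radius`, otherwise open a new cluster.
    Centroids are updated with an incremental mean, so no history is kept."""

    def __init__(self, radius):
        self.radius = radius
        self.centroids = []   # running means, one per discovered cluster
        self.counts = []      # points seen per cluster

    def assign(self, x):
        x = np.asarray(x, float)
        if self.centroids:
            d = [np.linalg.norm(x - c) for c in self.centroids]
            k = int(np.argmin(d))
            if d[k] <= self.radius:
                # Incremental mean update: c += (x - c) / n.
                self.counts[k] += 1
                self.centroids[k] += (x - self.centroids[k]) / self.counts[k]
                return k
        self.centroids.append(x.copy())
        self.counts.append(1)
        return len(self.centroids) - 1

am = AdaptiveMeans(radius=2.0)
stream = [(0, 0), (0.5, 0), (10, 10), (9.5, 10.2), (0.2, 0.3)]
labels = [am.assign(p) for p in stream]
print(labels)  # → [0, 0, 1, 1, 0]: two clusters emerge from the stream
```

In a spike-sorting setting, each `x` would be a feature vector extracted from a detected spike waveform, and the radius would be tuned to the spread of spike classes in feature space.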
Positioning stability improvement with inter-system biases on multi-GNSS PPP
NASA Astrophysics Data System (ADS)
Choi, Byung-Kyu; Yoon, Hasu
2018-07-01
The availability of multiple signals from different Global Navigation Satellite System (GNSS) constellations provides opportunities for improving positioning accuracy and initial convergence time. With dual-frequency observations from the four constellations (GPS, GLONASS, Galileo, and BeiDou), it is possible to investigate combined GNSS precise point positioning (PPP) accuracy and stability. The differences between GNSS systems result in inter-system biases (ISBs). We consider several ISB values such as GPS-GLONASS, GPS-Galileo, and GPS-BeiDou. These biases are key parameters in the multi-GNSS PPP processing. In this study, we present a unified PPP method that sets ISB values as fixed or constant. A comprehensive analysis that includes satellite visibility, position dilution of precision, and position accuracy is performed to evaluate the unified PPP method with constrained cut-off elevation angles. Compared to the conventional PPP solutions, our approach shows more stable positioning at a constrained cut-off elevation angle of 50 degrees.
Luo, Xiongbiao; Jayarathne, Uditha L; McLeod, A Jonathan; Mori, Kensaku
2014-01-01
Endoscopic navigation generally integrates different modalities of sensory information in order to continuously locate an endoscope relative to suspicious tissues in the body during interventions. Current electromagnetic tracking techniques for endoscopic navigation have limited accuracy due to tissue deformation and magnetic field distortion. To avoid these limitations and improve the endoscopic localization accuracy, this paper proposes a new endoscopic navigation framework that uses an optical mouse sensor to measure the endoscope movements along its viewing direction. We then enhance the differential evolution algorithm by modifying its mutation operation. Based on the enhanced differential evolution method, these movement measurements and image structural patches in endoscopic videos are fused to accurately determine the endoscope position. An evaluation on a dynamic phantom demonstrated that our method provides a more accurate navigation framework. Compared to state-of-the-art methods, it improved the navigation accuracy from 2.4 to 1.6 mm and reduced the processing time from 2.8 to 0.9 seconds.
An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling
NASA Astrophysics Data System (ADS)
Wang, Enjiang; Liu, Yang
2018-01-01
The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirements. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations and thus less computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor series expansion-based FD coefficients, we derive implicit spatial FD coefficients based on least-squares optimisation. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersions, and thus can provide more accurate wavefields.
Mikhno, Arthur; Nuevo, Pablo Martinez; Devanand, Davangere P.; Parsey, Ramin V.; Laine, Andrew F.
2013-01-01
Multimodality classification of Alzheimer’s disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), is of interest to the medical community. We improve on prior classification frameworks by incorporating multiple features from MRI and PET data obtained with multiple radioligands, fluorodeoxyglucose (FDG) and Pittsburgh compound B (PIB). We also introduce a new MRI feature, invariant shape descriptors based on 3D Zernike moments applied to the hippocampus region. Classification performance is evaluated on data from 17 healthy controls (CTR), 22 MCI, and 17 AD subjects. Zernike significantly outperforms volume, accuracy (Zernike to volume): CTR/AD (90.7% to 71.6%), CTR/MCI (76.2% to 60.0%), MCI/AD (84.3% to 65.5%). Zernike also provides comparable and complementary performance to PET. Optimal accuracy is achieved when Zernike and PET features are combined (accuracy, specificity, sensitivity), CTR/AD (98.8%, 99.5%, 98.1%), CTR/MCI (84.3%, 82.9%, 85.9%) and MCI/AD (93.3%, 93.6%, 93.3%). PMID:24576927
Zhang, Zelun; Poslad, Stefan
2013-11-01
Wearable and accompanying sensors and devices are increasingly being used for user activity recognition. However, typical GPS-based and accelerometer-based (ACC) methods face three main challenges: low recognition accuracy; a coarse recognition capability, i.e., they cannot recognise both human posture (during travelling) and transportation mode simultaneously; and a relatively high computational complexity. Here, a new GPS and Foot-Force (GPS + FF) sensor method is proposed to overcome these challenges that leverages a set of wearable FF sensors in combination with GPS, e.g., in a mobile phone. User mobility activities that can be recognised include both daily user postures and common transportation modes: sitting, standing, walking, cycling, bus passenger, car passenger (including private cars and taxis) and car driver. The novelty of this work is that our approach provides a more comprehensive recognition capability in terms of reliably recognising both human posture and transportation mode simultaneously during travel. In addition, by comparing the new GPS + FF method with both an ACC method (62% accuracy) and a GPS + ACC based method (70% accuracy) as baseline methods, it obtains a higher accuracy (95%) with less computational complexity, when tested on a dataset obtained from ten individuals.
Accuracy of Binary Black Hole waveforms for Advanced LIGO searches
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Chu, Tony; Fong, Heather; Brown, Duncan; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela
2015-04-01
Coalescing binaries of compact objects are flagship sources for the first direct detection of gravitational waves with the LIGO-Virgo observatories. Matched-filtering based detection searches aimed at binaries of black holes will use aligned-spin waveforms as filters, and their efficiency hinges on the accuracy of the underlying waveform models. A number of gravitational waveform models are available in the literature, e.g. the Effective-One-Body, Phenomenological, and traditional post-Newtonian ones. While Numerical Relativity (NR) simulations provide the most accurate modeling of gravitational radiation from compact binaries, their computational cost limits their application in large scale searches. In this talk we assess the accuracy of waveform models in two regions of parameter space, which have only been explored cursorily in the past: the high mass-ratio regime as well as the comparable mass-ratio + high spin regime. Using the SpEC code, six q = 7 simulations with aligned spins and lasting 60 orbits, and tens of q ∈ [1,3] simulations with high black hole spins were performed. We use them to study the accuracy and intrinsic parameter biases of different waveform families, and assess their viability for Advanced LIGO searches.
NASA Technical Reports Server (NTRS)
Rapp, Richard H.
1993-01-01
The determination of the geoid, an equipotential surface of the Earth's gravity field, has long been of interest to geodesists and oceanographers. The geoid provides a surface to which the actual ocean surface can be compared, with the differences implying information about the circulation patterns of the oceans. For use in oceanographic applications, the geoid is ideally needed to a high accuracy and to a high resolution. There are applications that require geoid undulation information to an accuracy of +/- 10 cm with a resolution of 50 km. We are far from this goal today, but substantial improvement in geoid determination has been made. In 1979 the cumulative geoid undulation error to spherical harmonic degree 20 was +/- 1.4 m for the GEM10 potential coefficient model. Today the corresponding value has been reduced to +/- 25 cm for GEM-T3 or +/- 11 cm for the OSU91A model. Similar improvements are noted by harmonic degree (wavelength) and in resolution. Potential coefficient models now exist to degree 360 based on a combination of data types. This paper discusses the accuracy changes that have taken place in the past 12 years in the determination of geoid undulations.
Support vector machine and principal component analysis for microarray data classification
NASA Astrophysics Data System (ADS)
Astuti, Widi; Adiwijaya
2018-03-01
Cancer is a leading cause of death worldwide, although a significant proportion of cases can be cured if detected early. In recent decades, a technology called microarray has taken an important role in the diagnosis of cancer. By using data mining techniques, microarray data classification can be performed to improve the accuracy of cancer diagnosis compared to traditional techniques. Microarray data are characterised by small sample sizes but very high dimensionality, which poses a challenge for researchers: to provide solutions for microarray data classification with high performance in both accuracy and running time. This research proposed the use of Principal Component Analysis (PCA) as a dimension-reduction method along with a Support Vector Machine (SVM) optimized by kernel functions as a classifier for microarray data classification. The proposed scheme was applied to seven data sets using 5-fold cross-validation and then evaluated and analysed in terms of both accuracy and running time. The results showed that the scheme can obtain 100% accuracy for the Ovarian and Lung Cancer data when linear and cubic kernel functions are used. In terms of running time, PCA greatly reduced the running time for every data set.
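The PCA-then-SVM scheme with 5-fold cross-validation can be reproduced in a few lines of scikit-learn; shown here on a small bundled dataset rather than microarray data, and with an invented component count (a "cubic" kernel corresponds to `SVC(kernel="poly", degree=3)`):

```python
# Minimal PCA + SVM pipeline with 5-fold cross-validation, mirroring the
# scheme described above. Dataset and n_components are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
pipe = make_pipeline(StandardScaler(),
                     PCA(n_components=10),   # dimension reduction
                     SVC(kernel="linear"))   # try "poly", degree=3 for cubic
scores = cross_val_score(pipe, X, y, cv=5)
print(f"mean 5-fold accuracy: {scores.mean():.3f}")
```

Putting PCA inside the pipeline matters: fitting it only on each training fold avoids leaking test-fold information into the projection.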
Pseudorange Measurement Method Based on AIS Signals.
Zhang, Jingbo; Zhang, Shufang; Wang, Jinpeng
2017-05-22
In order to use the existing automatic identification system (AIS) to provide additional navigation and positioning services, a complete pseudorange measurements solution is presented in this paper. Through the mathematical analysis of the AIS signal, the bit-0-phases in the digital sequences were determined as the timestamps. Monte Carlo simulation was carried out to compare the accuracy of the zero-crossing and differential peak, which are two timestamp detection methods in the additive white Gaussian noise (AWGN) channel. Considering the low-speed and low-dynamic motion characteristics of ships, an optimal estimation method based on the minimum mean square error is proposed to improve detection accuracy. Furthermore, the α difference filter algorithm was used to achieve the fusion of the optimal estimation results of the two detection methods. The results show that the algorithm can greatly improve the accuracy of pseudorange estimation under low signal-to-noise ratio (SNR) conditions. In order to verify the effectiveness of the scheme, prototypes containing the measurement scheme were developed and field tests in Xinghai Bay of Dalian (China) were performed. The test results show that the pseudorange measurement accuracy was better than 28 m (σ) without any modification of the existing AIS system.
Exploring the Relationship Between Eye Movements and Electrocardiogram Interpretation Accuracy
NASA Astrophysics Data System (ADS)
Davies, Alan; Brown, Gavin; Vigo, Markel; Harper, Simon; Horseman, Laura; Splendiani, Bruno; Hill, Elspeth; Jay, Caroline
2016-12-01
Interpretation of electrocardiograms (ECGs) is a complex task involving visual inspection. This paper aims to improve understanding of how practitioners perceive ECGs, and determine whether visual behaviour can indicate differences in interpretation accuracy. A group of healthcare practitioners (n = 31) who interpret ECGs as part of their clinical role were shown 11 commonly encountered ECGs on a computer screen. The participants’ eye movement data were recorded as they viewed the ECGs and attempted interpretation. The Jensen-Shannon distance was computed between two Markov chains, constructed from the transition matrices (visual shifts from and to ECG leads) of the correct and incorrect interpretation groups for each ECG. A permutation test was then used to compare this distance against 10,000 randomly shuffled groups made up of the same participants. The results demonstrated a statistically significant (α = 0.05) result in 5 of the 11 stimuli, demonstrating that the gaze shift between the ECG leads differs between the groups making correct and incorrect interpretations and is therefore a factor in interpretation accuracy. The results shed further light on the relationship between visual behaviour and ECG interpretation accuracy, providing information that can be used to improve both human and automated interpretation approaches.
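The analysis above — a Jensen-Shannon distance between the pooled gaze-transition matrices of the correct and incorrect groups, assessed against label-shuffled groups — can be sketched on simulated sequences. Group sizes, sequence lengths, transition probabilities, and the permutation count below are all invented for illustration:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(1)

def transition_counts(seq, n_states):
    """Count gaze transitions between states (here, ECG 'leads')."""
    T = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        T[a, b] += 1
    return T

def js_group_distance(group_a, group_b, n_states):
    """Jensen-Shannon distance between the pooled, normalised transition
    distributions of two groups of gaze sequences."""
    Pa = sum(transition_counts(s, n_states) for s in group_a)
    Pb = sum(transition_counts(s, n_states) for s in group_b)
    return jensenshannon(Pa.ravel() / Pa.sum(), Pb.ravel() / Pb.sum())

# Simulated gaze sequences over 3 "leads": one group uniform, one skewed.
correct   = [rng.integers(0, 3, 50).tolist() for _ in range(8)]
incorrect = [rng.choice(3, 50, p=[0.6, 0.2, 0.2]).tolist() for _ in range(8)]
observed = js_group_distance(correct, incorrect, 3)

# Permutation test: shuffle group labels and recompute the distance.
all_seqs, n_perm, count = correct + incorrect, 500, 0
for _ in range(n_perm):
    idx = rng.permutation(len(all_seqs))
    ga = [all_seqs[i] for i in idx[:8]]
    gb = [all_seqs[i] for i in idx[8:]]
    if js_group_distance(ga, gb, 3) >= observed:
        count += 1
p_value = (count + 1) / (n_perm + 1)
print(f"JS distance = {observed:.3f}, p = {p_value:.3f}")
```

A small p-value indicates the two groups' gaze-transition patterns differ more than chance relabelling would produce, which is the paper's criterion for a behavioural difference.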
Effects of self-referencing on feeling-of-knowing accuracy and recollective experience.
Boduroglu, Aysecan; Pehlivanoglu, Didem; Tekcan, Ali I; Kapucu, Aycan
2015-01-01
The current research investigated the impact of self-referencing (SR) on feeling-of-knowing (FOK) judgements to improve our understanding of the mechanisms underlying these metamemory judgements, and specifically to test the relationship between recollective experiences and FOK accuracy within the accessibility framework. FOK judgements are thought to be by-products of the retrieval process and are therefore closely related to memory performance. Because relating information to one's self is one of the factors enhancing memory performance, we investigated the effect of self-related encoding on FOK accuracy and recollective experience. We compared performance in this condition to a separate deep processing condition in which participants reported the frequency of occurrence of pairs of words. Participants encoded pairs of words incidentally, and following a delay interval, they attempted to retrieve each target when prompted by its cue. Then, they were re-presented with all cues and asked to provide FOK ratings regarding their likelihood of recognising the targets amongst distractors. Finally, they were given a surprise recognition task in which, following each response, they identified whether the response was remembered, known or just guessed. Our results showed that only SR at encoding resulted in better memory, higher FOK accuracy and increased recollective experience.
Robotic surgery: current perceptions and the clinical evidence.
Ahmad, Arif; Ahmad, Zoha F; Carleton, Jared D; Agarwala, Ashish
2017-01-01
It appears that a discrepancy exists between the perception of robotic-assisted surgery (RAS) and the current clinical evidence regarding robotic-assisted surgery among patients, healthcare providers, and hospital administrators. The purpose of this study was to assess whether or not such a discrepancy exists. We administered survey questionnaires via face-to-face interviews with surgical patients (n = 101), healthcare providers (n = 58), and senior members of hospital administration (n = 6) at a community hospital that performs robotic surgery. The respondents were asked about their perception regarding the infection rate, operative time, operative blood loss, incision size, cost, length of hospital stay (LOS), risk of complications, precision and accuracy, tactile sensation, and technique of robotic-assisted surgery as compared with conventional laparoscopic surgery. We then performed a comprehensive literature review to assess whether or not these perceptions could be corroborated with clinical evidence. The majority of survey respondents perceived RAS as decreasing the infection rate, increasing operative time, decreasing operative blood loss, and producing a smaller incision size, a shorter LOS, and a lower risk of complications, while increasing the cost. Respondents also believed that robotic surgery provides greater precision, accuracy, and tactile sensation, while improving intra-operative access to organs. A comprehensive literature review found little-to-no clinical evidence supporting the respondents' favorable perceptions of robotic surgery, except for the increased cost and the precision and accuracy of the robotic-assisted technique. There is a discrepancy between the perceptions of robotic surgery and the clinical evidence among the patients, healthcare providers, and hospital administrators surveyed.
Song, Na; Du, Yong; He, Bin; Frey, Eric C.
2011-01-01
Purpose: The radionuclide 131I has found widespread use in targeted radionuclide therapy (TRT), partly due to the fact that it emits photons that can be imaged to perform treatment planning or posttherapy dose verification as well as beta rays that are suitable for therapy. In both the treatment planning and dose verification applications, it is necessary to estimate the activity distribution in organs or tumors at several time points. In vivo estimates of the 131I activity distribution at each time point can be obtained from quantitative single-photon emission computed tomography (QSPECT) images and organ activity estimates can be obtained either from QSPECT images or quantification of planar projection data. However, in addition to the photon used for imaging, 131I decay results in emission of a number of other higher-energy photons with significant abundances. These higher-energy photons can scatter in the body, collimator, or detector and be counted in the 364 keV photopeak energy window, resulting in reduced image contrast and degraded quantitative accuracy; these photons are referred to as downscatter. The goal of this study was to develop and evaluate a model-based downscatter compensation method specifically designed for the compensation of high-energy photons emitted by 131I and detected in the imaging energy window. Methods: In the evaluation study, we used a Monte Carlo simulation (MCS) code that had previously been validated for other radionuclides. Thus, in preparation for the evaluation study, we first validated the code for 131I imaging simulation by comparison with experimental data. Next, we assessed the accuracy of the downscatter model by comparing downscatter estimates with MCS results. Finally, we combined the downscatter model with iterative reconstruction-based compensation for attenuation (A) and scatter (S) and the full (D) collimator-detector response of the 364 keV photons to form a comprehensive compensation method. 
We evaluated this combined method in terms of quantitative accuracy using the realistic 3D NCAT phantom and an activity distribution obtained from patient studies. We compared the accuracy of organ activity estimates in images reconstructed with and without addition of downscatter compensation from projections with and without downscatter contamination. Results: We observed that the proposed method provided substantial improvements in accuracy compared to no downscatter compensation and had accuracies comparable to reconstructions from projections without downscatter contamination. Conclusions: The results demonstrate that the proposed model-based downscatter compensation method is effective and may have a role in quantitative 131I imaging. PMID:21815394
Neural substrates of empathic accuracy in people with schizophrenia.
Harvey, Philippe-Olivier; Zaki, Jamil; Lee, Junghee; Ochsner, Kevin; Green, Michael F
2013-05-01
Empathic deficits in schizophrenia may lead to social dysfunction, but previous studies of schizophrenia have not modeled empathy through paradigms that (1) present participants with naturalistic social stimuli and (2) link brain activity to "accuracy" about inferring others' emotional states. This study addressed this gap by investigating the neural correlates of empathic accuracy (EA) in schizophrenia. Fifteen schizophrenia patients and 15 controls were scanned while continuously rating the affective state of another person shown in a series of videos (ie, targets). These ratings were compared with targets' own self-rated affect, and EA was defined as the correlation between participants' ratings and targets' self-ratings. Targets' self-reported emotional expressivity also was measured. We searched for brain regions whose activity tracked parametrically with (1) perceivers' EA and (2) targets' expressivity. Patients showed reduced EA compared with controls. The left precuneus, left middle frontal gyrus, and bilateral thalamus were significantly more correlated with EA in controls compared with patients. High expressivity in targets was associated with better EA in controls but not in patients. High expressivity was associated with increased brain activity in a large set of regions in controls (eg, fusiform gyrus, medial prefrontal cortex) but not in patients. These results use a naturalistic performance measure to confirm that schizophrenic patients demonstrate impaired ability to understand others' internal states. They provide novel evidence about a potential mechanism for this impairment: schizophrenic patients failed to capitalize on targets' emotional expressivity and also demonstrate reduced neural sensitivity to targets' affective cues.
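Empathic accuracy as operationalized above (the correlation between a perceiver's continuous ratings and the target's self-ratings) reduces to a Pearson correlation over the rating time series. A minimal sketch, assuming the two rating streams have already been aligned and resampled to a common time base (the abstract does not describe that preprocessing):

```python
def empathic_accuracy(perceiver_ratings, target_self_ratings):
    """Pearson correlation between a perceiver's continuous affect ratings
    and the target's own self-ratings (higher = more accurate inference)."""
    n = len(perceiver_ratings)
    mp = sum(perceiver_ratings) / n
    mt = sum(target_self_ratings) / n
    cov = sum((p - mp) * (t - mt)
              for p, t in zip(perceiver_ratings, target_self_ratings))
    var_p = sum((p - mp) ** 2 for p in perceiver_ratings)
    var_t = sum((t - mt) ** 2 for t in target_self_ratings)
    return cov / (var_p * var_t) ** 0.5

# A perceiver whose ratings track the target perfectly scores EA = 1.0
ea = empathic_accuracy([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
```

In the study, per-subject EA values like this one were then used as parametric regressors against brain activity.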
Schnakers, Caroline; Vanhaudenhuyse, Audrey; Giacino, Joseph; Ventura, Manfredi; Boly, Melanie; Majerus, Steve; Moonen, Gustave; Laureys, Steven
2009-07-21
Previously published studies have reported that up to 43% of patients with disorders of consciousness are erroneously assigned a diagnosis of vegetative state (VS). However, no recent studies have investigated the accuracy of this grave clinical diagnosis. In this study, we compared consensus-based diagnoses of VS and the minimally conscious state (MCS) to those based on a well-established standardized neurobehavioral rating scale, the JFK Coma Recovery Scale-Revised (CRS-R). We prospectively followed 103 patients (55 ± 19 years) with mixed etiologies and compared the clinical consensus diagnosis provided by the physician on the basis of the medical staff's daily observations to diagnoses derived from CRS-R assessments performed by research staff. All patients were assigned a diagnosis of 'VS', 'MCS' or 'uncertain diagnosis.' Of the 44 patients diagnosed with VS based on the clinical consensus of the medical team, 18 (41%) were found to be in MCS following standardized assessment with the CRS-R. In the 41 patients with a consensus diagnosis of MCS, 4 (10%) had emerged from MCS, according to the CRS-R. We also found that the majority of patients assigned an uncertain diagnosis by clinical consensus (89%) were in MCS based on CRS-R findings. Despite the importance of diagnostic accuracy, the rate of misdiagnosis of VS has not substantially changed in the past 15 years. Standardized neurobehavioral assessment is a more sensitive means of establishing differential diagnosis in patients with disorders of consciousness when compared to diagnoses determined by clinical consensus.
Sefton, Gerri; Lane, Steven; Killen, Roger; Black, Stuart; Lyon, Max; Ampah, Pearl; Sproule, Cathryn; Loren-Gosling, Dominic; Richards, Caitlin; Spinty, Jean; Holloway, Colette; Davies, Coral; Wilson, April; Chean, Chung Shen; Carter, Bernie; Carrol, E.D.
2017-01-01
Pediatric Early Warning Scores are advocated to assist health professionals to identify early signs of serious illness or deterioration in hospitalized children. Scores are derived from the weighting applied to recorded vital signs and clinical observations reflecting deviation from a predetermined “norm.” Higher aggregate scores trigger an escalation in care aimed at preventing critical deterioration. Process errors made while recording these data, including plotting or calculation errors, have the potential to impede the reliability of the score. To test this hypothesis, we conducted a controlled study of documentation using five clinical vignettes. We measured the accuracy of vital sign recording, score calculation, and time taken to complete documentation using a handheld electronic physiological surveillance system, VitalPAC Pediatric, compared with traditional paper-based charts. We explored the user acceptability of both methods using a Web-based survey. Twenty-three staff participated in the controlled study. The electronic physiological surveillance system improved the accuracy of vital sign recording, 98.5% versus 85.6%, P < .02, Pediatric Early Warning Score calculation, 94.6% versus 55.7%, P < .02, and saved time, 68 versus 98 seconds, compared with paper-based documentation, P < .002. Twenty-nine staff completed the Web-based survey. They perceived that the electronic physiological surveillance system offered safety benefits by reducing human error while providing instant visibility of recorded data to the entire clinical team. PMID:27832032
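The aggregate scoring described above can be sketched as a lookup of each vital sign against weighted bands, with the weights summed into a total that triggers escalation. The bands and weights below are hypothetical placeholders for illustration (published PEWS charts vary by age band and institution) and are not the chart used in the study:

```python
# Illustrative only: these thresholds are hypothetical, not the scale
# evaluated in the study.
def pews_component(value, bands):
    """Return the weight of the first (low, high, score) band containing value."""
    for low, high, score in bands:
        if low <= value <= high:
            return score
    return 3  # outside all bands: assign the maximum weight

HEART_RATE_BANDS = [   # beats/min, hypothetical bands for one age group
    (90, 140, 0),
    (80, 160, 1),
    (70, 170, 2),
]
RESP_RATE_BANDS = [    # breaths/min, hypothetical bands
    (20, 30, 0),
    (15, 40, 1),
    (10, 50, 2),
]

def pews_score(heart_rate, resp_rate):
    """Aggregate score; higher totals trigger an escalation in care."""
    return (pews_component(heart_rate, HEART_RATE_BANDS)
            + pews_component(resp_rate, RESP_RATE_BANDS))
```

The study's finding is that automating this band lookup and summation electronically removed most of the plotting and calculation errors seen with paper charts.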
A visual analytics approach for pattern-recognition in patient-generated data.
Feller, Daniel J; Burgermaster, Marissa; Levine, Matthew E; Smaldone, Arlene; Davidson, Patricia G; Albers, David J; Mamykina, Lena
2018-06-13
To develop and test a visual analytics tool to help clinicians identify systematic and clinically meaningful patterns in patient-generated data (PGD) while decreasing perceived information overload. Participatory design was used to develop Glucolyzer, an interactive tool featuring hierarchical clustering and a heatmap visualization to help registered dietitians (RDs) identify associative patterns between blood glucose levels and per-meal macronutrient composition for individuals with type 2 diabetes (T2DM). Ten RDs participated in a within-subjects experiment to compare Glucolyzer to a static logbook format. For each representation, participants had 25 minutes to examine 1 month of diabetes self-monitoring data captured by an individual with T2DM and identify clinically meaningful patterns. We compared the quality and accuracy of the observations generated using each representation. Participants generated 50% more observations when using Glucolyzer (98) than when using the logbook format (64) without any loss in accuracy (69% accuracy vs 62%, respectively, p = .17). Participants identified more observations that included ingredients other than carbohydrates using Glucolyzer (36% vs 16%, p = .027). Fewer RDs reported feelings of information overload using Glucolyzer compared to the logbook format. Study participants displayed variable acceptance of hierarchical clustering. Visual analytics have the potential to mitigate provider concerns about the volume of self-monitoring data. Glucolyzer helped dietitians identify meaningful patterns in self-monitoring data without incurring perceived information overload. Future studies should assess whether similar tools can support clinicians in personalizing behavioral interventions that improve patient outcomes.
Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography
NASA Astrophysics Data System (ADS)
Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.
2016-10-01
With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT that is shown to provide results comparable to those obtained by proprietary algorithms, both in terms of reconstruction accuracy and execution time. The proposed algorithm is thus provided for free to the scientific community, for regular use, and for possible further optimization.
SU-F-T-480: Evaluation of the Role of Varian Machine Performance Check (MPC) in Our Daily QA Routine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juneja, B; Gao, S; Balter, P
2016-06-15
Purpose: (A) To assess the role of Varian MPC in our daily QA routine, and (B) evaluate the accuracy and precision of MPC. Methods: The MPC was performed weekly, for five months, on a Varian TrueBeam for five photon (6x, 10x, 15x, 6xFFF, and 10xFFF) and electron (6e, 9e, 12e, 16e, and 20e) energies. Output results were compared to those determined with an ionization chamber (TN30001, PTW-Freiburg) in plastic and a daily check device (DQA3, Sun Nuclear). Consistency of the mechanical measurements over five months was analyzed and compared to monthly IsoCal results. Results: The MPC randomly showed large deviations (3–7%) that disappeared upon reacquisition. The MPC output closely matched monthly ion chamber and DQA3 measurements. The maximum and mean absolute difference between monthly and MPC was 1.18% and 0.28±0.21% for all energies. The maximum and mean absolute difference between DQA3 and MPC was 3.26% and 0.85±0.61%. The results suggest the MPC is comparable to the DQA3 for measuring output. The DQA3 provides wedge output, flatness, symmetry, and energy constancy checks, which are missing from the current implementation of the MPC. However, the MPC provides additional mechanical tests, such as size of the radiation isocenter (0.33±0.02 mm) and its coincidence with MV and kV isocenters (0.17±0.05 and 0.21±0.03 mm). It also provides positional accuracy of individual jaws (maximum σ, 0.33 mm), all the MLC leaves (0.08 mm), gantry (0.05°) and collimator (0.13°) rotation angles, and couch positioning (0.11 mm) accuracy. MPC mechanical tests could replace our current daily on-board imaging QA routine and provide some additional QA not currently performed. Conclusion: MPC has the potential to be a valuable tool that facilitates reliable daily QA including many mechanical tests that are not currently performed. This system can add to our daily QA, but further development would be needed to fully replace our current Daily QA device.
Sheffield, Catherine A; Kane, Michael P; Bakst, Gary; Busch, Robert S; Abelseth, Jill M; Hamilton, Robert A
2009-09-01
This study compared the accuracy and precision of four value-added glucose meters. Finger stick glucose measurements in diabetes patients were performed using the Abbott Diabetes Care (Alameda, CA) Optium, Diagnostic Devices, Inc. (Miami, FL) DDI Prodigy, Home Diagnostics, Inc. (Fort Lauderdale, FL) HDI True Track Smart System, and Arkray, USA (Minneapolis, MN) HypoGuard Assure Pro. Finger glucose measurements were compared with laboratory reference results. Accuracy was assessed by a Clarke error grid analysis (EGA), a Parkes EGA, and within 5%, 10%, 15%, and 20% of the laboratory value criteria (χ² analysis). Meter precision was determined by calculating absolute mean differences in glucose values between duplicate samples (Kruskal-Wallis test). Finger sticks were obtained from 125 diabetes patients; 90.4% were Caucasian, 51.2% were female, 83.2% had type 2 diabetes, and the average age was 59 years (SD 14 years). Mean venipuncture blood glucose was 151 mg/dL (SD ±65 mg/dL; range, 58-474 mg/dL). Clinical accuracy by Clarke EGA was demonstrated in 94% of Optium, 82% of Prodigy, 61% of True Track, and 77% of the Assure Pro samples (P < 0.05 for Optium and True Track compared to all others). By Parkes EGA, the True Track was significantly less accurate than the other meters. Within 5% accuracy was achieved in 34%, 24%, 29%, and 13%, respectively (P < 0.05 for Optium, Prodigy, and Assure Pro compared to True Track). Within 10% accuracy was significantly greater for the Optium, Prodigy, and Assure Pro compared to True Track. Significantly more Optium results demonstrated within 15% and 20% accuracy compared to the other meter systems. The HDI True Track was significantly less precise than the other meter systems. The Abbott Optium was significantly more accurate than the other meter systems, whereas the HDI True Track was significantly less accurate and less precise compared to the other meter systems.
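The within-5/10/15/20% criterion used above is simply the fraction of meter readings falling within a relative tolerance of the laboratory reference. A minimal sketch:

```python
def within_pct(meter, reference, pct):
    """Fraction of meter readings within ±pct% of the paired lab reference."""
    hits = sum(
        abs(m - r) <= (pct / 100.0) * r
        for m, r in zip(meter, reference)
    )
    return hits / len(meter)

# Toy data: four paired readings against a lab reference of 100 mg/dL
meter = [100, 110, 95, 150]
lab   = [100, 100, 100, 100]
```

Each meter's set of such fractions at the four tolerances is what the χ² comparisons in the study were run on.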
Lee, Clara; Bolck, Jan; Naguib, Nagy N.N.; Schulz, Boris; Eichler, Katrin; Aschenbach, Rene; Wichmann, Julian L.; Vogl, Thomas J.; Zangos, Stephan
2015-01-01
Objective To investigate the accuracy, efficiency and radiation dose of a novel laser navigation system (LNS) compared to those of free-handed punctures on computed tomography (CT). Materials and Methods Sixty punctures were performed using a phantom body to compare accuracy, timely effort, and radiation dose of the conventional free-handed procedure to those of the LNS-guided method. An additional 20 LNS-guided interventions were performed on another phantom to confirm accuracy. Ten patients subsequently underwent LNS-guided punctures. Results The phantom 1-LNS group showed a target point accuracy of 4.0 ± 2.7 mm (freehand, 6.3 ± 3.6 mm; p = 0.008), entrance point accuracy of 0.8 ± 0.6 mm (freehand, 6.1 ± 4.7 mm), needle angulation accuracy of 1.3 ± 0.9° (freehand, 3.4 ± 3.1°; p < 0.001), intervention time of 7.03 ± 5.18 minutes (freehand, 8.38 ± 4.09 minutes; p = 0.006), and 4.2 ± 3.6 CT images (freehand, 7.9 ± 5.1; p < 0.001). These results show significant improvement in 60 punctures compared to freehand. The phantom 2-LNS group showed a target point accuracy of 3.6 ± 2.5 mm, entrance point accuracy of 1.4 ± 2.0 mm, needle angulation accuracy of 1.0 ± 1.2°, intervention time of 1.44 ± 0.22 minutes, and 3.4 ± 1.7 CT images. The LNS group achieved target point accuracy of 5.0 ± 1.2 mm, entrance point accuracy of 2.0 ± 1.5 mm, needle angulation accuracy of 1.5 ± 0.3°, intervention time of 12.08 ± 3.07 minutes, and used 5.7 ± 1.6 CT-images for the first experience with patients. Conclusion Laser navigation system improved accuracy, duration of intervention, and radiation dose of CT-guided interventions. PMID:26175571
The effort to increase the space weather forecasting accuracy in KSWC
NASA Astrophysics Data System (ADS)
Choi, J. S.
2017-12-01
The Korean Space Weather Center (KSWC) of the National Radio Research Agency (RRA) is a government agency which is the official source of space weather information for the Korean Government and the primary action agency for emergency measures during severe space weather conditions, serving as a Regional Warning Center of the International Space Environment Service (ISES). KSWC's main role is providing alerts, watches, and forecasts in order to minimize space weather impacts on both the public and commercial sectors: satellites, aviation, communications, navigation, power grids, etc. KSWC is also in charge of monitoring space weather conditions and conducting research and development in support of space weather operations in Korea. Recently, KSWC has been focusing on increasing the accuracy of space weather forecasts and verifying model-generated results. Forecasting accuracy will be calculated using probabilistic statistical estimation so that results can be compared numerically. Regarding cosmic radiation dose, we are gathering measured radiation dose data using instruments flown in cooperation with domestic airlines. Based on these measurements, we will verify the reliability of the SAFE system, which was developed by KSWC to provide cosmic radiation dose information to airline cabin crews and public users.
Volumetric calibration of a plenoptic camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert
2018-02-01
Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
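The core of the polynomial-mapping idea can be illustrated in one dimension: fit a least-squares polynomial from measured to true coordinates using known calibration-target points. The quadratic order and the pure-Python normal-equations solve below are illustrative choices; the paper fits a full 3D mapping whose order is not stated in this abstract:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):  # back-substitution
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_quadratic_mapping(measured, true):
    """Least-squares coefficients (a, b, c) of true ≈ a + b*m + c*m**2."""
    basis = [[1.0, m, m * m] for m in measured]
    # Normal equations: (B^T B) coeffs = B^T y
    BtB = [[sum(r[i] * r[j] for r in basis) for j in range(3)] for i in range(3)]
    Bty = [sum(r[i] * t for r, t in zip(basis, true)) for i in range(3)]
    return solve3(BtB, Bty)

def apply_mapping(coeffs, m):
    a, b, c = coeffs
    return a + b * m + c * m * m

# Example: recover a known quadratic distortion from 5 calibration depths
measured = [0.0, 1.0, 2.0, 3.0, 4.0]
true = [0.1 + 0.9 * m + 0.02 * m * m for m in measured]
coeffs = fit_quadratic_mapping(measured, true)
```

The paper's two methods differ mainly in where a mapping like this is applied: to an already-reconstructed volume (volumetric dewarping) or directly between object space and sensor locations (direct light-field calibration).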
Accuracy of force and center of pressure measures of the Wii Balance Board.
Bartlett, Harrison L; Ting, Lena H; Bingham, Jeffrey T
2014-01-01
The Nintendo Wii Balance Board (WBB) is increasingly used as an inexpensive force plate for assessment of postural control; however, no documentation of force and COP accuracy and reliability is publicly available. Therefore, we performed a standard measurement uncertainty analysis on 3 lightly and 6 heavily used WBBs to provide future users with information about the repeatability and accuracy of the WBB force and COP measurements. Across WBBs, we found the total uncertainty of force measurements to be within ±9.1 N, and of COP location within ±4.1 mm. However, repeatability of a single measurement within a board was better (4.5 N, 1.5 mm), suggesting that the WBB is best used for relative measures using the same device, rather than absolute measurement across devices. Internally stored calibration values were comparable to those determined experimentally. Further, heavy wear did not significantly degrade performance. In combination with prior evaluation of WBB performance and published standards for measuring human balance, our study provides necessary information to evaluate the use of the WBB for analysis of human balance control. We suggest the WBB may be useful for low-resolution measurements, but should not be considered as a replacement for laboratory-grade force plates. Published by Elsevier B.V.
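For context, a board of this kind derives COP as the force-weighted average of its four corner load-cell positions. The sketch below assumes a rectangular sensor layout with half-spacings chosen purely for illustration (they are not values from the paper):

```python
# Assumed sensor half-spacings in mm; illustrative, not from the paper.
SENSOR_DX = 433.0 / 2  # half-distance between left and right load cells
SENSOR_DY = 228.0 / 2  # half-distance between top and bottom load cells

def center_of_pressure(tl, tr, bl, br):
    """COP (x, y) in mm from top-left, top-right, bottom-left, and
    bottom-right load-cell forces (N): a force-weighted average of
    the sensor positions."""
    total = tl + tr + bl + br
    x = SENSOR_DX * ((tr + br) - (tl + bl)) / total
    y = SENSOR_DY * ((tl + tr) - (bl + br)) / total
    return x, y
```

The ±4.1 mm COP uncertainty reported above bounds how far a computation like this can drift from a laboratory-grade force plate across boards.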
Oncologic PET/MRI, part 1: tumors of the brain, head and neck, chest, abdomen, and pelvis.
Buchbender, Christian; Heusner, Till A; Lauenstein, Thomas C; Bockisch, Andreas; Antoch, Gerald
2012-06-01
In oncology, staging forms the basis for prognostic consideration and directly influences patient care by determining the therapeutic approach. Cross-sectional imaging techniques, especially when combined with PET information, play an important role in cancer staging. With the recent introduction of integrated whole-body PET/MRI into clinical practice, a novel metabolic-anatomic imaging technique is now available. PET/MRI seems to be highly accurate in T-staging of tumor entities for which MRI has traditionally been favored, such as squamous cell carcinomas of the head and neck. By adding functional MRI to PET, PET/MRI may further improve diagnostic accuracy in the differentiation of scar tissue from recurrence of tumors such as rectal cancer. This hypothesis will have to be assessed in future studies. With regard to N-staging, PET/MRI does not seem to provide a considerable benefit as compared with PET/CT but provides similar N-staging accuracy when applied as a whole-body staging approach. M-staging will benefit from MRI accuracy in the brain and the liver. The purpose of this review is to summarize the available first experiences with PET/MRI and to outline the potential value of PET/MRI in oncologic applications for which data on PET/MRI are still lacking.
Augmented Reality Based Navigation for Computer Assisted Hip Resurfacing: A Proof of Concept Study.
Liu, He; Auvinet, Edouard; Giles, Joshua; Rodriguez Y Baena, Ferdinando
2018-05-23
Implantation accuracy has a great impact on the outcomes of hip resurfacing such as recovery of hip function. Computer assisted orthopedic surgery has demonstrated clear advantages for the patients, with improved placement accuracy and fewer outliers, but the intrusiveness, cost, and added complexity have limited its widespread adoption. To provide seamless computer assistance with improved immersion and a more natural surgical workflow, we propose an augmented-reality (AR) based navigation system for hip resurfacing. The operative femur is registered by processing depth information from the surgical site with a commercial depth camera. By coupling depth data with robotic assistance, obstacles that may obstruct the femur can be tracked and avoided automatically to reduce the chance of disruption to the surgical workflow. Using the registration result and the pre-operative plan, intra-operative surgical guidance is provided through a commercial AR headset so that the user can perform the operation without additional physical guides. To assess the accuracy of the navigation system, experiments of guide hole drilling were performed on femur phantoms. The position and orientation of the drilled holes were compared with the pre-operative plan, and the mean errors were found to be approximately 2 mm and 2°, results which are in line with commercial computer assisted orthopedic systems today.
IEEE 802.15.4 ZigBee-Based Time-of-Arrival Estimation for Wireless Sensor Networks.
Cheon, Jeonghyeon; Hwang, Hyunsu; Kim, Dongsun; Jung, Yunho
2016-02-05
Precise time-of-arrival (TOA) estimation is one of the most important techniques in RF-based positioning systems that use wireless sensor networks (WSNs). Because the accuracy of TOA estimation is proportional to the RF signal bandwidth, using broad bandwidth is the most fundamental approach for achieving higher accuracy. Hence, ultra-wide-band (UWB) systems with a bandwidth of 500 MHz are commonly used. However, wireless systems with broad bandwidth suffer from the disadvantages of high complexity and high power consumption. Therefore, it is difficult to employ such systems in various WSN applications. In this paper, we present a precise time-of-arrival (TOA) estimation algorithm using an IEEE 802.15.4 ZigBee system with a narrow bandwidth of 2 MHz. In order to overcome the lack of bandwidth, the proposed algorithm estimates the fractional TOA within the sampling interval. Simulation results show that the proposed TOA estimation algorithm provides an accuracy of 0.5 m at a signal-to-noise ratio (SNR) of 8 dB and achieves an SNR gain of 5 dB as compared with the existing algorithm. In addition, experimental results indicate that the proposed algorithm provides accurate TOA estimation in a real indoor environment.
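A common way to estimate the fractional TOA within a sampling interval is to interpolate around the cross-correlation peak. The parabolic interpolation below is one standard sketch of that idea; the paper's specific estimator is not detailed in this abstract:

```python
def subsample_peak(corr, k):
    """Parabolic interpolation around index k of a correlation sequence;
    returns the fractional peak offset in samples, in (-0.5, 0.5)."""
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return 0.0
    return 0.5 * (y0 - y2) / denom

def toa_seconds(corr, sample_rate_hz):
    """Coarse correlation-peak index plus the fractional offset,
    converted to seconds."""
    k = max(range(1, len(corr) - 1), key=lambda i: corr[i])
    return (k + subsample_peak(corr, k)) / sample_rate_hz
```

With a 2 MHz ZigBee signal the sample interval alone spans 150 m of propagation, which is why a sub-sample refinement of this kind is essential for meter-level accuracy.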
Case studies on forecasting for innovative technologies: frequent revisions improve accuracy.
Lerner, Jeffrey C; Robertson, Diane C; Goldstein, Sara M
2015-02-01
Health technology forecasting is designed to provide reliable predictions about costs, utilization, diffusion, and other market realities before the technologies enter routine clinical use. In this article we address three questions central to forecasting's usefulness: Are early forecasts sufficiently accurate to help providers acquire the most promising technology and payers to set effective coverage policies? What variables contribute to inaccurate forecasts? How can forecasters manage the variables to improve accuracy? We analyzed forecasts published between 2007 and 2010 by the ECRI Institute on four technologies: single-room proton beam radiation therapy for various cancers; digital breast tomosynthesis imaging technology for breast cancer screening; transcatheter aortic valve replacement for serious heart valve disease; and minimally invasive robot-assisted surgery for various cancers. We then examined revised ECRI forecasts published in 2013 (digital breast tomosynthesis) and 2014 (the other three topics) to identify inaccuracies in the earlier forecasts and explore why they occurred. We found that five of twenty early predictions were inaccurate when compared with the updated forecasts. The inaccuracies pertained to two technologies that had more time-sensitive variables to consider. The case studies suggest that frequent revision of forecasts could improve accuracy, especially for complex technologies whose eventual use is governed by multiple interactive factors. Project HOPE—The People-to-People Health Foundation, Inc.
Gu, Changzhan; Li, Ruijiang; Zhang, Hualiang; Fung, Albert Y C; Torres, Carlos; Jiang, Steve B; Li, Changzhi
2012-11-01
Accurate respiration measurement is crucial in motion-adaptive cancer radiotherapy. Conventional methods for respiration measurement are undesirable because they are either invasive to the patient or do not have sufficient accuracy. In addition, measurement of external respiration signal based on conventional approaches requires close patient contact to the physical device which often causes patient discomfort and undesirable motion during radiation dose delivery. In this paper, a dc-coupled continuous-wave radar sensor was presented to provide a noncontact and noninvasive approach for respiration measurement. The radar sensor was designed with dc-coupled adaptive tuning architectures that include RF coarse-tuning and baseband fine-tuning, which allows the radar sensor to precisely measure movement with stationary moment and always work with the maximum dynamic range. The accuracy of respiration measurement with the proposed radar sensor was experimentally evaluated using a physical phantom, human subject, and moving plate in a radiotherapy environment. It was shown that respiration measurement with radar sensor while the radiation beam is on is feasible and the measurement has a submillimeter accuracy when compared with a commercial respiration monitoring system which requires patient contact. The proposed radar sensor provides accurate, noninvasive, and noncontact respiration measurement and therefore has a great potential in motion-adaptive radiotherapy.
Moritz, Steffen; Kloss, Martin; von Eckstaedt, Francesca Vitzthum; Jelinek, Lena
2009-04-30
The memory deficit or forgetfulness hypothesis of obsessive-compulsive disorder (OCD) has received considerable attention and empirical effort over the past decades. The present study aimed to provide a fair test of its various formulations: (1) memory dysfunction in OCD is ubiquitous, that is, manifests irrespective of modality and material; (2) memory dysfunction is found for nonverbal but not verbal material; (3) memory dysfunction is secondary to executive impairment; and (4) memory dysfunction affects meta-memory rather than memory accuracy. Participants comprised 43 OCD patients and 46 healthy controls who were tested on the Picture Word Memory Test (PWMT), which provides several unconfounded parameters for nonverbal and verbal memory accuracy and confidence measures across different time-points. In addition, the Trail-Making Test B was administered to test formulation (3). Replicating our group's earlier work, the two samples displayed similar performance on all indices. None of the different formulations of the memory deficit hypothesis were supported. In view of waning evidence for a global memory deficit in OCD, neuropsychological research on OCD should more thoroughly investigate moderators and triggers of occasional instances of impaired performance, particularly cognitive biases such as perfectionism and an inflated sense of responsibility.